Austin Startup Week 2020: Researching a Business Pivot During a Pandemic Q&A

Thanks to everyone who attended my talk on remote customer learning plans at Austin Startup Week, and for the great questions you asked.  I’ve collected them here, along with answers that go a bit more in depth than what I could offer live. (If you’re looking for the talk recording, slides, or resources, you can find them here.)

I tried to include every question that was asked during the talk, but if you notice one I missed, please let me know and I’ll add it in. 

Attendees primarily asked about customer learning methods, training and coaching, recruiting, and customer learning culture. You can jump to a question using the outline below.

Customer learning methods

  • Surveys: How effective are surveys conducted by apps such as SurveyMonkey or TypeForm?

  • Diary studies: Would social influencer videos qualify as diary studies?

  • Industry-specific methods: What do you do for a specific industry like the service industry?  Is there a different methodology or different considerations?

Training & coaching

  • Research for founders: Do you need a Ph.D. to do this work?  I mean, it definitely helps your practice, but how can we make this approachable for the founder out there?

  • Learning resources: Are there any books or videos you would recommend to learn how to do better product discovery?

  • Working with researchers: For folks that want to work with you to do better discovery what should they do to prep to best use your time?

Recruiting participants

  • Niche recruiting: How do you find interview candidates when the needs are niche like construction tools or ideas for new parents?

  • Difficult-to-contact customers: How do you reach out to users or customers if you can't get ahold of them through email or get their permission to reach out via email?

  • Cost: What is the cost per interview for a recruiter?

  • Startup-friendly recruiters: Can you point us toward some recruiters that are startup friendly?

Creating a customer learning culture

  • Creating culture: How can we get this more into our startup culture?  How would you recommend folks really get their heads around this, and are there any examples of either companies or research groups or startups that are doing this well?

  • Research ownership: As folks contemplate their move leaving a big company and starting their own company, or as they're building their own startup, where does this ownership of research live on the org chart?  Who should be owning this in the early stages?

  • Countering anecdotal evidence: Sometimes companies suffer, especially early stage, when they've got a very strong, opinionated founder or a board or investor that's very anecdote-heavy.  It's that kind of tyranny by anecdote.  How can this research combat that?  And if you're not the founder and want to bring some research and some methodology into your company, how can you use what you do and research like you do to combat that, "I think I know this is the way we should go, I'm gonna trust my gut" sort of thing?

  • Customer discovery in the enterprise: What would you say to folks at larger enterprises, particularly the folks that are listening today that are sitting in those companies and trying to drive their innovation...what would you say to them to help them boost their research and value of this discovery approach?

Customer learning methods

Surveys: How effective are surveys conducted by apps such as SurveyMonkey or TypeForm?

 I’m assuming this question pertains to the customer discovery interviewing part of the talk, since it wouldn’t make as much sense to use a survey for customer observation or hands-on product testing under typical circumstances.

In general, surveys are not the most effective or efficient method for conducting early-stage customer discovery.  Customer interviewing is a better choice in most situations.

The goal of customer discovery is to validate (or invalidate) your hypotheses about your customers’ needs, and to quickly pivot if it turns out their needs are not what you expected.  Surveys are not the best choice for this for two big reasons:

  1. Surveys have fixed scope.  Let’s say you want to build an app for tea drinkers.  Your hypothesis is that tea drinkers need a timer to help them steep different teas for the right amount of time.  You build a 20-question survey around this hypothesis, but in the very first question respondents tell you this is not a need for them.  What then?  Even if they complete the rest of the survey, the information isn’t helpful, and it wastes both your time and theirs.  You’ve invalidated your hypothesis (good!), but you don’t have any signal on how to pivot to address other tea-related needs (bad).

    Surveys are fixed instruments.  They only tell you about the questions you thought to ask upfront.  No matter how smart and experienced you are, your first hypotheses about customer needs are almost always wrong.  This is why I recommend interviewing instead.  Interviews are fluid – even if you have a script, you can always probe for more detail, or follow up on other needs potential customers might mention only in passing.  In the tea app example above, as soon as you learn that tea drinkers don’t need timers, you can switch gears and ask about what they do need.  You’ve salvaged the study and given yourself hard data to make an informed pivot.

  2. Surveys are (arguably) more error prone. Writing survey questions seems simple, but in reality it is extremely easy to screw up.  Amazon offers over 40,000 book results for “survey design,” because it is a skill that requires training to do well.  Seemingly inconsequential decisions about question wording, response choices, question order, and survey logic can lead to massive errors in your understanding of what customers want and need.

    Interviews are usually a better option.  While interviews can lead to certain types of bias as well, the risks are reduced for a few reasons: participants can more easily explain their answers, you can follow up and probe for more detail, and participants’ answers are not artificially constrained by the fixed instrument of the survey.

Surveys do have their uses, but they are also heavily abused.  Interviewing is a safer and more actionable approach for most early-stage customer discovery.

Diary studies: Would social influencer videos qualify as diary studies?

While social influencer videos could be a great source of hypotheses, they have some key differences from diary study videos.  I would caution against using influencer videos to validate or invalidate your business concepts.

Diary studies and social media videos have fundamentally different intents behind them.  The goal of a diary study is to get an accurate, unbiased glimpse into potential customers’ everyday behavior – such as every time they get a snack from the fridge, or what they do after opening a new Word document.  In contrast, influencer videos are made to entertain and get views.  Editing for social media may sensationalize events or omit important details that would slow the narrative or make it less engaging.  So it’s not the best way to learn how customers actually experience the real-life situations in which they might use your products or services.

Industry-specific methods: What do you do for a specific industry like the service industry?  Is there a different methodology or different considerations? 

The techniques I shared for discovering customer needs and understanding their context will generally work well across a variety of industries.  You can just as effectively use customer interviewing to learn about the needs of new parents, or baristas, or factory inspectors.  You can use remote task observation on processes as diverse as lesson planning, plumbing repairs, and medical intake processing.  These are powerful research techniques that work across a variety of users and contexts.

What will change as you conduct research across different industries is your own need for preparation. The better you know your audience, the less additional preparation you need.  If you’re already a parent, it’s pretty easy to interview another parent.  But if you’re talking to factory inspectors for the first time, and you’ve never worked in a factory yourself, you’ll want to do the work upfront to ensure you’re making the most of everyone’s time.  If you have someone on your team who is already familiar with this audience, it can be helpful to bring them along as “interpreter” (or to have them do the talking themselves).  If you don’t have that option, my advice is to give yourself ample time for a crash course to understand that industry’s processes, roles, and terminology before talking to actual people.  I also strongly recommend reviewing your interview script with someone from that industry before you start collecting data, as this can help you correct any issues in advance.

 

Training & coaching

Research for founders: Do you need a Ph.D. to do this work?  I mean, it definitely helps your practice, but how can we make this approachable for the founder out there?

I deeply believe that the way to create better research cultures at our organizations is to put customer learning tools into the hands of everyone, regardless of role. Customer research can and should be done by people at all levels of our organizations (although when it happens and how it looks can change according to role).

For founders of startups, there’s really just a small handful of research methods you need to master in order to see a major boost in the accuracy and actionability of your insights.  I covered three of those methods in my talk today.  If you learn these core research methods, you can handle most of your research needs yourself, and then partner with a researcher for the few studies that are especially complex, time-consuming, or mission-critical.

Learning resources: Are there any books or videos you would recommend to learn how to do better product discovery?

Yes!  I’ve included recommendations on my Austin Startup Week 2020 resources page.  If you’d like a more personalized approach, I also offer coaching.

Working with researchers: For folks that want to work with you to do better discovery what should they do to prep to best use your time?

When I’m working with a new client or team, I want to understand four things:

  1. What are your overall business goals?

    There’s no one-size-fits-all customer learning plan.  Good customer research helps you get where you’re looking to go faster.  This means accounting for factors such as internal milestones and deadlines, budget, team size and priorities, and culture.

    The other reason I ask this question is that research is often especially impactful (and cheaper!) when it’s done before you actually need it.  An experienced researcher should be able to review your business plan and help you determine if there are: 1) questions you need to start thinking about now; or 2) ways you can begin collecting insights from the everyday customer interactions you’re already having, so you can course correct as you go without adding time or cost.

  2. What are your customer learning goals?

    I want to understand your priorities for immediate and long-term customer insights to ensure that I help you develop a learning plan that answers those questions first.

  3. How are you learning about your customers today?

    This means understanding where your product ideas and insights are coming from, who on the team is talking to customers, when and how those conversations are taking place, who they’re with, and what the objectives are.

  4. What do you already know about your customers, and how did you learn it?

    As a researcher, I want to start from what you know already and help you take the next step. Helping you use your resources efficiently and effectively is always the goal.  And this often means also doing due diligence to ensure we’re starting from a foundation of reliable customer insights (which is part of the value of working with a professional researcher!).

    Customer research always involves trade-offs between efficiency (how easy it is to get insights) and accuracy (how likely it is your insights are correct). This is a normal and healthy part of business research (and one way it differs from academic research). However, it also means that some kinds of insights are more reliable than others. Judging the relative reliability of different insights is an advanced research skill, and is hard to do without considerable training. An expert researcher can help you assess which of your insights are most robust, and which ones might need further validation, before you invest additional time and money.

 

Recruiting participants

Niche recruiting: How do you find interview candidates when the needs are niche like construction tools or ideas for new parents?

First off, if you’re looking for niche audiences, I commend you – that suggests that you already have a good, specific sense of who your customers may be.  So kudos for that!

For niche consumer audiences, I usually recommend starting with social groups and meetups.  Many of these groups are already highly targeted.  For instance, searching for “new parents” on Meetup.com immediately brings up four new parent groups.  Expanding to “parents” more broadly brings up twenty groups on the first page alone.  This is a great place to start reaching out to group organizers and seeing if they’d be open to having you participate in and/or sponsor some of their meetings, with the intent of finding research participants.  (Note: It’s important to remember you’re a guest, and to avoid making it a sales pitch.)

For niche industry audiences, such as construction, I recommend starting by looking for professional organizations.  For this example, a quick online search turns up the Texas Construction Association, the Construction Management Association of America, the Associated General Contractors of America, and numerous others.  Most of these organizations will have their own meetings and mailing lists, and at that point the same rules apply as for social groups and meetups – reach out to an organizer, and see if they’re open to having you as a meeting sponsor and/or participant (be aware that some may charge a fee for access).

Other options include paid ads and posting on community message boards, but if you go this route you’ll need to vet your participants more carefully to make sure they truly match your target profile (it’s not uncommon for people to fudge their credentials if they think there might be a reward). If you are unsuccessful with these approaches, you can always reach out to a professional recruiter as a last resort.

Difficult-to-contact customers: How do you reach out to users or customers if you can't get ahold of them through email or get their permission to reach out via email?

Let’s unpack this a bit.  There could be multiple reasons why customers are unable or unwilling to be contacted through email.  Let’s talk through the two most common reasons I’ve seen:

  • They don’t want to be spammed or pitched to.  People are justifiably wary of giving out their email addresses these days.  If you think that customers may be suspicious of your intent, you need to convince them this is not your standard sales pitch.  Think about ways to make your research invitation stand out. Be human — this is not the time for flashy marketing language. Tell them exactly what they’re agreeing to (e.g., a 30-minute conversation), and make it really clear how you’ll be using their information. I find it can also help to say that you’re only talking to a small number of people. And it’s okay to explicitly say that you won’t try to sell them anything (but make sure to stick to that).

  • The information you’re requesting is sensitive or confidential.  I usually see this with B2B companies looking to talk to potential customers in specific industries or job roles.  This can raise concerns about whether you’ll ask them to reveal proprietary company information, which could get them in trouble with their management.  The best way to avoid this issue is to get permission from their company’s leadership before talking to individual employees.  If this isn’t possible, then B2B recruiting firms are often an effective (albeit more expensive) alternative.

If there is some other reason you can’t contact customers, the next best thing is usually to recruit people who are similar to your customers through any of the other channels I’ve discussed here.

Cost: What is the cost per interview for a recruiter?

The cost for participant recruiting comes from two places: finding the participant (sourcing), and compensating the participant for their time (incentive). For sourcing alone, the lowest cost I’ve seen is about $40 per participant (for an easy-to-find consumer audience). This increases with how difficult it is to find the people you’re looking for. On the other end of the spectrum, sourcing industry executives can cost over $1500 per interview (but this is for a much higher touch service, and also includes the incentive).

In terms of incentive, $80 per hour (or $40 for 30 minutes) is a good starting point for remote consumer research. Expect to increase this if you’re conducting research in person, or if you’re talking to B2B audiences.
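To make the arithmetic concrete, here’s a rough back-of-the-envelope sketch of how those two cost components add up.  The figures are just the example numbers above, and the little helper function is purely illustrative – not any recruiter’s actual pricing model – so treat it as a starting point for budgeting, not a quote:

    # Back-of-the-envelope recruiting budget, using the example rates above.
    # These numbers are illustrative, not quotes from any specific recruiter.

    def study_cost(n_participants, sourcing_per_person, incentive_per_hour, session_hours):
        """Estimate the total sourcing + incentive cost for a remote study."""
        sourcing = n_participants * sourcing_per_person
        incentives = n_participants * incentive_per_hour * session_hours
        return sourcing + incentives

    # Eight 30-minute consumer interviews: $40 sourcing each, $80/hour incentive.
    total = study_cost(8, sourcing_per_person=40, incentive_per_hour=80, session_hours=0.5)
    print(f"Estimated budget: ${total:,.0f}")  # Estimated budget: $640

The real numbers will depend on your audience and format, but the takeaway is that sourcing and incentives are separate line items, and both scale with the number of participants.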

Startup-friendly recruiters: Can you point us toward some recruiters that are startup friendly?

A lot is going to depend on who you want to recruit and what your budget is.  So I think it’s more helpful to share the questions I’d ask to vet a potential recruiter:

  1. Is their rate for sourcing participants only, or does it include participant incentives (i.e., compensation)?  If incentives aren’t included, they can add unexpected costs.

  2. Where do their participants come from?  Most recruiters have a panel of participants they reach out to for studies.  The larger the panel, the better the chance it will include people like your customers.  Many recruiters will also seek out people like your target customers if they don’t have them in their panel already, but this usually adds to the cost.

  3. Do they specialize in certain kinds of participants?  By default, most recruiters tend to specialize in consumer audiences.  These recruiters can often still find professionals for you, but it takes more effort and cost, and the quality may not be as high.  However, there are also specialty recruiting firms that have existing relationships with professionals and businesses, and who can be more effective at helping you recruit for specific job roles or industries.

  4. What would they charge to recruit for your specific participant criteria?  If you know who you’re looking for, it’s best to ask this early.  Some recruiters charge a base rate for basic demographic criteria like age and gender, but ratchet up the cost for custom criteria like specific occupations or experience with certain software.

  5. How do they handle no-shows?  Things happen.  Unfortunately, not everyone you recruit for your study will show up (although a higher participant incentive definitely helps!).  Some recruiters will replace no-shows free of charge, but others may charge you regardless.  One common middle-of-the-road approach is to recruit extra participants at a lower rate, and then let you decide whether or not to use them at the full rate depending on whether everyone else shows up.

  6. How do they ensure participant quality?  There’s no one answer to this — you mostly want to hear what they have to say and make sure it makes sense.  One specific question to ask is what happens if a participant shows up and clearly is not who they said they were.  In my experience, most recruiters will replace participants in these cases, but it’s good to know in advance.

 

Creating a customer learning culture

Creating culture: How can we get this more into our startup culture?  How would you recommend folks really get their heads around this, and are there any examples of either companies or research groups or startups that are doing this well?

I’m going to start with the second half of this question.

A lot of companies pay lip service to customer research, and only a handful do it really well.  Culture is the result of many different factors, but there are a few things I’ve consistently seen in companies that have successfully incorporated research into their day-to-day functioning:

  1. There’s direction from the top for everyone to participate in research.  While I have seen individual contributors successfully bring research into an existing organization, it’s always a slog.  It’s much, much easier to create a great research culture when the founder or CEO is the driving force behind it.  And it needs to be more than words – leaders need to make it clear that providing evidence for decisions is part of their employees’ jobs, regardless of what their role on the team is.

  2. They provide models of what good research looks like in practice.  Because so few companies do research well, most people don’t have a model for what it looks like to incorporate customer evidence into their work.  If you are, say, a developer, then you’ve probably never worked anywhere that asked you to do customer research. Customer research has probably never been part of your training or your conception of what developers do.  So these models need to be provided by your company.  Leaders can do this by building in learning milestones as part of their roadmaps, and by figuring out how to incorporate customer learning activities into their standard work processes (and inviting the rest of the team along!).  If you’re still learning how to do this yourself, then look for examples of this outside your own company and share them with the team.

  3. Research gates decision-making.  Eric Ries (author of The Lean Startup) advocates for using “learning milestones” rather than development milestones.  Your progress in a startup, especially early on, should be gauged by what you’ve learned, not by what you’ve built.  When you do decide what to build, use gate reviews and incorporate your customer evidence as a central part of them.  If the evidence does not support what you were planning to build, it should be okay to scrap it.  Your evidence should be telling you what to build.

  4. They make the time to improve their research quality.  Nobody is born knowing how to do customer research, and the unfortunate truth is that a lot of the research done by entrepreneurs isn’t very good.  Studies have shown that most startup founders rely on highly biased, error-prone customer research methods, such as sending surveys to their own followers on social media. The teams that really up-level their research quality are those that seek out experts to help them learn, whether that’s in the form of classes, books (written by actual researchers, not by fellow entrepreneurs), or consultants and advisors.

Now going back to the first part of the question, what can we as a community do to get this more into our startup culture?  We can support startups in making the changes outlined above.  We can ask founders what they’re doing to promote research from the top.  We can share our own examples of what good research looks like in practice so more people have these models in their heads.  Investors and advisors can hold founders to high standards for the quality of their customer evidence.  And we can provide opportunities (like this one!) for dialog, learning, and training in public forums like Austin Startup Week.

Research ownership: As folks contemplate their move leaving a big company and starting their own company, or as they're building their own startup, where does this ownership of research live on the org chart?  Who should be owning this in the early stages?

To answer this, I’m going to call back to what I said earlier about creating good research culture: it’s much, much easier to create a great research culture when the founder or CEO is the driving force behind it.  And this starts at the beginning.  Unless you as a founder have a research specialist co-founder (which I think would be awesome), then the responsibility for research lies with you.  The most fundamental questions about a startup’s long-term viability are answered, at least in part, within the very early stages of its existence: namely, problem/solution fit, and product/market fit.  If you are waiting for someone else to drive the research on these questions, you’ve already missed the boat.

The good news is you don’t have to be a research expert to do this.  As I mentioned earlier, there are a handful of research techniques that can get you most of the way there.  Couple that with a strong research advisor, and you’ll maximize the quality and efficiency of your customer learning.

Countering anecdotal evidence:  Sometimes companies suffer, especially early stage, when they've got a very strong, opinionated founder or a board or investor that's very anecdote-heavy.  It's that kind of tyranny by anecdote.  How can research combat that?  And if you're not the founder and want to bring some research and some methodology into your company, how can you use what you do and research like you do to combat that, “I think I know this is the way we should go, I'm gonna trust my gut” sort of thing? 

This is a common situation, but it’s a real challenge to counter when it’s someone in a position of power.  My first recommendation is to avoid getting involved with a stakeholder like this in the first place if at all possible, because it means they don’t understand evidence-based decision-making.  And that does not bode well for the long-term profitability of your company.

That being said, sometimes it’s unavoidable.  In these cases, I’ve seen people have success with two different approaches, which I think of as “head” vs. “heart” approaches.

  • “Head” approaches: If the stakeholder is more business-oriented, it can be effective to appeal to them through the language of metrics.  If you’re early-stage, you’re probably running experiments or MVPs.  Work with the entire team (including the stakeholder with the anecdotes) to get crisp on how you’re going to measure success, and make it something objective (not subjective user ratings, for instance).  Then review all possible experimental outcomes, including the one that stakeholder is pushing for.  Get everyone aligned on every stage of the experiment, from the design of the experiment itself, to how success is measured, to what action you’ll take as a result of each of the outcomes.  Once you have everyone fully bought in (preferably in writing), it becomes much harder to argue for a course of action that isn’t supported by the data.

    If you’re later stage, this usually works even better because you probably have more defined KPIs.  The overall approach remains the same: have everyone agree to an experiment, a measurement approach (probably a KPI), and a plan of action for each possible outcome; then run the experiment and see what the data tell you.

  • “Heart” approaches: Sometimes the opinionated stakeholder has a distrust of metrics, or won’t be willing to participate in the experimental approach outlined above.  In these cases, you can sometimes change their mind by giving them more first-hand experience with real customers.  I once worked with a team leader who refused to believe that users were confused by a new feature layout.  After multiple usability studies in a lab failed to change his mind, we set up a meet-and-greet with real-life fans of the product.  The team leader listened to how they talked about the product and realized for himself they had no idea how the new feature worked.  This entirely changed his mind about the feature, and fixing it became a priority.

Customer discovery in the enterprise: What would you say to folks at larger enterprises, particularly the folks that are listening today that are sitting in those companies and trying to drive their innovation...what would you say to them to help them boost their research and value of this discovery approach?

I’ve supported many innovation teams at large corporations, and the thing I’ve consistently seen them get wrong is the balance of early-stage business acumen and research capability.

Larger companies tend to have more specialized roles, and it hurts the team and the product when the business expects them to do the work of entirely separate disciplines. You’ll often see innovation at large companies centered around small teams of product managers and/or designers.  Product managers are absolutely critical when it comes to ensuring that products are designed to move business-relevant metrics. But they are not researchers. I’ve seen a lot of PM-driven innovation teams commit hard to building the wrong product for poorly vetted customer needs. Likewise, designers are incredible at creating solutions for problems with thorny constraints. But designers are not researchers, and they usually don’t deeply understand business strategy.  I’ve seen many design-led innovation teams produce exciting demos that are ungrounded in validated user needs and do absolutely nothing to drive business metrics.

But this is exactly what’s great about startups and the customer discovery approach.  Whether you like it or not, your founders are your researchers, and vice versa.  Startup gurus like Steve Blank and Eric Ries make customer research central to their customer discovery processes, as something every founder should do, right alongside more traditional business activities.

(Because founders have all of these responsibilities, it also means it’s challenging to wear so many hats.  Which is why I think it’s critical to have specialist advisors who can give you that boost in the places you’re not as strong, and give you confidence that you’re pursuing an idea that’s genuinely viable and valuable.)

With the startup model in mind, my ideal core innovation team for a large enterprise company is someone who understands the business (like a PM), working closely with a researcher to define the customer problem first, all before anyone else gets on board. Business and research are the ideal initial disciplines for innovation because both focus on framing problems. Engineering and design are both mission-critical as well, but these are solution-focused disciplines, and they should not be driving early-stage customer discovery. I’ve seen countless projects fail because innovation teams were structured to focus on solutions before fully defining the customer problem.