5 mistakes to avoid in UX research questions (and what to do instead)

Regina Hong, UX Researcher

Qualitative UX research is very much like detective work. You spend time reviewing what you don’t know, then head out into the field to talk to more folks and fill gaps in the evidence trail. The people you speak to are strangers, but you need to establish rapport quickly in the short time you have together to get answers to your questions, knowing you might not get the chance to speak to them again.

On the surface, asking questions appears easy enough. It’s one of the first skills we learn as children, and we do it every day. But not all questions are good questions. A bad question might confuse users or, even worse, lead to inaccurate insights that misguide product design. What are some ways to ask questions that lead to meaningful insights? Read on to find out.

Mistake 1: Launching straight into product questions

"Let's get down to business"

When showing a prototype or product images in a test, it’s tempting to dive straight into business without first gathering any context about the users. However, a user’s background gives vital insight into why they are giving certain feedback. For instance, an experienced user might not think a certain feature is useful, while a new user might find the same feature very useful because it makes onboarding more seamless for them. These kinds of nuances will be important for the product team as they decide on their feature roadmap.

Gathering background information at the beginning can also be useful for building rapport, especially if you are talking to someone for the first time. And even if it’s someone you have spoken to before, the interview setting might take some getting used to, so having some initial low-stakes questions to break the ice can go a long way toward getting better-quality data, demographic and otherwise.

What to do instead:

  • Ask questions around the context of use. Questions about a user’s role, how long they have been in it, how they normally use a product or perform certain tasks, the problems they are trying to solve, their expectations for a “good” experience, and the other tools (yours or your competitors’) or product ecosystems they rely on will help the team get a better sense of existing interactions and identify opportunities for improvement.
  • For repeat interviewees, reconfirm background details. Organization changes and role switches may have occurred since your last interview, so confirm at the start whether the interviewee’s role and job scope are still the same.

Mistake 2: Leading the participant 

"I promised no 'gotcha questions.' But I'm gonna lead with one."

Interviewers are only human, and it’s hard to resist the temptation to ask for a direct answer to the research question. For example, when asking about features for a website redesign, it might be tempting to ask, “How useful is it to see your tasks at one glance on the homepage?” A participant might then go, “Yes, it’s very useful, it helps me keep track of tasks,” because the question already references a task dashboard, and the word “useful” may unconsciously prime a participant to agree even though they had never thought about the feature before.

What to do instead:

  • Keep questions open-ended initially. For the above scenario, instead of naming a specific feature upfront, ask participants about the features they would like to see in general and probe into why they want them, e.g. “What additional features would you like to see on the website? How would you use them?”
  • Ask about expectations before prototype interaction. Another way to uncover expectations is to ask participants what they expect to happen before they interact with something. For instance, “What do you expect to happen when you select this? What do you expect to see here?” After they have seen the interaction or the next screen, check whether the experience met or deviated from their expectations, and how, e.g. “Was this what you expected to happen?”
  • Use neutral language. Depending on your research objectives, you might need to ask about the utility of a specific feature rather than keeping things general. Instead of asking how useful or helpful something is, try “What do you think about seeing your tasks at one glance on the homepage?” If the participant is stumped or if answers start going off-track, consider asking “How helpful or not helpful is this feature to you?”

Mistake 3: Not probing enough

Probing, the process of asking follow-up questions to uncover deeper motivations, can feel uncomfortable. In the rush to cover all the questions during the session, it’s also easy to move on once a participant says that something is “good” or “helpful” without asking for more details. But skipping probing questions can lead you to misunderstand responses, or leave you stuck with a superficial understanding when it comes time to analyze and share insights.

What to do instead:

  • Include a list of follow-up questions in the interview guide. When asking generic questions (about desired features, changes, or types of tasks performed), include follow-up questions as reminders to probe on anything that isn’t mentioned. For instance, when it comes to the types of tasks performed, follow up with a set of questions about how participants perform them, whether and how they share the output of their tasks, and the tools they use along the way. Knowing these answers will help the team learn about key functions for a product redesign as well as opportunities for further integration.
  • Stay silent for a few seconds after a response. Sometimes, you can draw out more words by using none. Practicing intentional silence (keeping quiet for about 5-7 seconds after a participant’s response) can help participants flesh out their thoughts further and reveal deeper insights than if the interviewer had followed up immediately.

Mistake 4: Asking only general questions

General questions are great for getting the lay of the land (the participant’s mental landscape), but the real deal lies in the details. So how do we make it easier for participants to share in-depth insights? Enter storytelling. We are wired to like stories and tell them, and inviting stories from users can reveal more details than a generic question would. 

Compare the following responses: 

Interviewer 1: What factors do you consider when buying cat food?

A: Oh, I look at the price, the ingredients, and the nutritional information. 

Interviewer 2: Tell me about how you bought your most recent brand of cat food. 

A: I went to my local pet store and decided to browse through the cat food options they had. I noticed that they had a separate section for kitties, so I decided to take a look. I realized for the first time that there were separate wet food types for kittens and cats! So I decided to buy the Fancy Feast kitten version with chicken for her to try as it was the cheapest; there were member sales going on then too, so I signed up for a membership on the spot and got it for cheaper than I would have online. 

In the latter response, the participant included further details about where they shop (online and at a local pet store), their price consciousness, and their newfound awareness of the age appropriateness of canned food. We also learned that displaying canned food by age group might encourage shoppers to notice brands they wouldn’t otherwise have. Of course, this latter insight might not hold true for everyone, so it’s important to conduct more research with participants of a similar persona (first-time cat owners) to understand how widely it applies.

What to do instead:

  • Ask for specific stories. The TEDW (tell me, explain, describe, walk me through) framework is useful for initiating open-ended conversations while avoiding bias in question phrasing. Each word in the acronym can seed a statement that encourages sharing. As an example, let’s assume we are researching whether people would be interested in a meal kit subscription. Some sample questions include:
      ◦ Tell me about the last meal you cooked for yourself.
      ◦ Walk me through how you bought the ingredients for that.
      ◦ Could you describe to me how you decided on that dish?
      ◦ You mentioned that going to the grocery store is a chore. Could you explain more about that?

Mistake 5: Relying only on good intentions about future actions

John Oliver: "The main problem with New Year's resolutions is that we set our expectations way too high."

What someone says is not always what they (will) do, as shown by the fewer than 10% of adults who keep their New Year’s resolutions for more than a few months. The adage rings true in UX research too: people don’t always live up to their own stated intentions! Although asking someone whether they would recommend a product, a.k.a. the Net Promoter Score (NPS) question, is common in survey research, it asks about future intentions, which are not as reliable as what people have actually done.

What to do instead:

  • Ask about past behavior. Want to know whether someone would purchase a hypothetical future product? There are three groups of people you could reach out to: early adopters, superfans, and people using a competitor’s or similar product. For instance, if you want to gauge people’s appetite for a new clothing subscription kit, you could reach out to your current users to understand their shopping behavior, as well as to users of existing clothing subscription kits to learn about their experiences.
  • Understand their jobs to be done. Think back to when you bought your first skillet. Sure, you wanted to cook food, but what you ultimately wanted was a good meal that would satisfy you and maybe also your friends. Having a good meal is the actual job to be done (not cooking!), and this job could just as easily have been performed by a meal delivery service, takeout, or a meal kit. A skillet company is therefore competing with these services too, not just with other skillet makers. To dig into what your user really needs to do, ask about their daily tasks or processes, the problems they face, and their current solutions and workarounds.
  • Triangulate data from other sources. Since each method of data collection has its strengths and limits, consulting a range of data sources can help corroborate or add further nuance to user research findings. These sources might already be within reach: usability heuristics assessments, competitor analyses, customer support call logs, and web analytics data are all sources you can include in your research process as you gather insights and prioritize recommendations.

There is a fine line between interrogation and conversation. And conversation is really what gives us the rich insights that make or break a case (or concept/prototype). Each small tweak to the process — understanding context of use, keeping questions open-ended and neutral, probing deeper, asking for specific details, asking about past behavior, and using a variety of sources — will lead to deeper insights that improve both the user experience and a company’s bottom line. It takes time and practice, but it’s effort well invested!

Want to see how we put UX theory into practice?

Check out our insights on UX and design or strategy. No time for research or unsure about how to start? We’ve got you. Get in touch and let us help!
