This article explains how user research is used to inform design decisions and introduces some common methodologies of user research.

User research is ideally conducted by teams of people, with input from multiple departments (e.g., product design, customer service, engineering), as well as experts in conducting both qualitative and quantitative research.

This article is not meant to train you to conduct user research, but rather to help you understand the role it plays in the industry and to prepare you to interpret user research in a way that supports user-centered design decisions.

Website Analytics

Using a basic analytics tool like Google Analytics will help you get started in understanding who your users are. There are comparable tools, such as Clicky, that offer similar free or trial plans.

Conducting user research can be both time and resource intensive, so before you do it, you want to make sure that you are targeting the right users.

A good place to start is by looking at some basic analytics on your users: Who are they? What are their demographics? Where is your traffic coming from? Are they coming in from search engines, social media apps, or links from other sites? How long do they stay on your site?

Surveys

Surveys are another fairly low-effort way to start to gather information about your users. Surveys can consist of open-ended responses (long answer text boxes) and a wide variety of closed-end responses (true or false, multiple choice, ranked choice, etc.). Surveys can be a good place to start building out your knowledge of your user base and what their interests and priorities really are.

There are lots of free tools that make it easy to make surveys, such as Survey Monkey and Google Forms.

Surveys can be a challenge for many reasons!

Self-reporting isn’t always accurate: people don’t have perfect knowledge of themselves or their wishes. In addition, the sample of people who respond might not match the true population that you want to understand. If you look at any online review site, you’ll quickly notice that when responding is optional or user-driven, people with polarized opinions are much more likely to leave reviews or write comments. Just check out sites like Rate My Professors or Glassdoor.

This effect is particularly pronounced for surveys that have a very low response rate. This is called non-response bias, and it is just one of many ways that your survey sample can be non-representative of the group that you want to measure. Survey neutrality can be difficult to achieve. The way that questions and answer options are worded and ordered can lead people to one response or another.

That doesn’t mean that survey research isn’t useful. However, if you’re seeing conflicting results between survey research and a more direct observation of user behavior (contextual research), it is probably wise to favor the more direct observation.

User Interviews

User interviews are a great way to get to know your user. Conducting user interviews can provide more background information on who your users are and what they want, as well as other contextualizing information about them and their use of technology in general.

Interviews need to find a balance between being flexible and standardized. They should be scripted, to make sure that users are prompted in consistent ways, but they should also be open-ended enough to allow users to express their own opinions. An overly scripted interview squanders the main advantage an interview has over a survey.

When conducting an interview, remember that users aren’t designers. The goal of an interview should be to better understand the user and their motivations and desires for using the product. Improvements suggested by users should be taken as pieces of data that speak to their experience as a user, not necessarily as viable designs that could work with the current state of the product.

Contextual Research

Contextual research combines interviews with observations of the user using the product in the environment in which it will actually be used. This might involve visiting a user at their workplace or their home to observe them as they actually use the product.

The goal of contextual research is to immerse the interviewer in the work or home culture of the user and to help bridge the gaps between what users actually do, what users say they do, and what the company thinks their users do.

Contextual research is more demanding in terms of time and resources, so it is typically conducted after other less demanding forms of user research have identified the most important hypotheses and personas to target.

Focus Groups

A focus group is a guided group interview process. Focus groups are useful for questions that require group interaction, such as brainstorming, idea generation, or group discussion.

Focus groups should not be put together at random: the participants should be existing users of your product or people who closely match your user personas.

Focus groups should not be used for products that involve sensitive information about the users. This information would be more reliably gathered from 1-on-1 interviews.

Card Sorting

A persistently tricky UI challenge is deciding on a navigational structure for a website or a web application. Which content is the most related to other content? How should things be grouped? The navigation system becomes increasingly important for the usability of a website as it grows to contain more content.

In a card sorting session, users are given cards that represent content from your website, and they are asked to sort it into groups that make sense to them. In some variations, the users name the groups that they have created (open sorting), while in some other variations the users are given groups in which to place different cards (closed sorting).

Usability.gov provides a good guide to conducting card sorting research.
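Results from an open card sort are often summarized with a co-occurrence (similarity) matrix: for every pair of cards, count how many participants placed them in the same group. Pairs grouped together by nearly everyone are strong candidates for sharing a navigation heading. The cards and groupings below are hypothetical, a minimal sketch of the idea rather than a full analysis tool:

```python
from itertools import combinations

# Hypothetical open card sort results: three participants each grouped
# the same six content cards, using group names of their own choosing.
results = [
    {"Shop": ["Cart", "Checkout"], "Help": ["FAQ", "Contact"], "News": ["Blog", "Press"]},
    {"Buying": ["Cart", "Checkout", "FAQ"], "About": ["Contact", "Blog", "Press"]},
    {"Store": ["Cart", "Checkout"], "Support": ["FAQ", "Contact"], "Media": ["Blog", "Press"]},
]

def cooccurrence(sessions):
    """Count how often each pair of cards was placed in the same group."""
    counts = {}
    for groups in sessions:
        for cards in groups.values():
            for a, b in combinations(sorted(cards), 2):
                counts[(a, b)] = counts.get((a, b), 0) + 1
    return counts

pairs = cooccurrence(results)
print(pairs[("Cart", "Checkout")])  # grouped together by all 3 participants
```

With more participants, these counts are usually normalized to percentages and fed into a clustering or dendrogram view, but the raw pair counts alone already reveal which groupings are consistent across users.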

A/B Testing

A/B testing involves serving users one of two versions of the site (version A or version B) that are identical to each other except for one design variation.

Outcomes of interest to the site, such as click-through rates on advertisements or checkouts for an e-commerce site, are then compared for both versions. A/B testing is typically reserved for features that will impact key outcomes for the site because you generally do not want to create duplicate designs unless they have the potential to be a significant boon for your business.
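Comparing the two versions is ultimately a statistics question: is the difference in click-through rates larger than what random variation would produce? One common approach (not specific to any particular A/B testing tool) is a two-proportion z-test; the click and view counts below are made up for illustration:

```python
import math

def two_proportion_ztest(clicks_a, views_a, clicks_b, views_b):
    """Two-sided z-test for a difference in click-through rates.

    Uses the pooled proportion under the null hypothesis that both
    versions have the same true rate; returns (z statistic, p-value).
    """
    rate_a = clicks_a / views_a
    rate_b = clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (rate_a - rate_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return z, p_value

# Hypothetical experiment: version A got 120 clicks in 2,400 views (5%),
# version B got 180 clicks in 2,400 views (7.5%).
z, p = two_proportion_ztest(120, 2400, 180, 2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value (conventionally below 0.05) suggests the difference between the versions is unlikely to be noise, which is why collecting enough traffic on both versions matters before declaring a winner.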

Having an analytics dashboard set up is typically a prerequisite for A/B testing, as this is what allows you to collect the results of your experiment.

Read more about A/B testing here.

Conclusion

This article has covered the most standard types of user research, from broad surveys that might be used to understand characteristics of your target audience to focused A/B tests that are specific to individual features that you want to test and compare.

Each type of research addresses a different need and can be used to test different kinds of hypotheses. Make sure that you have a clear goal in mind before you set out to conduct user research!

Made in NYC © 2018 Codecademy