When conducting user research, there are a variety of methods for acquiring valuable data. This chart, courtesy of the Nielsen Norman Group, illustrates the ranges that your research can measure.
Let’s break this down by looking at the extremes of this chart.
Ethnographic research is a fine example of behavioral research. This is where the researcher goes into the user’s natural environment and observes the user in their everyday context.
Surveys and interviews capture what users say they would or would not do. Often users will give answers they think the researcher wants to hear, or what they think is the “correct” answer. The key here is that the user might actually believe what they are saying is true. But when the researcher actually observes the behavior, what the user has said might not be accurate.
One-on-one interviews and ethnographic research are a couple of great ways to gather qualitative research data. The researcher can devote individual time to the user and really get deep information about them. This takes time, and therefore can be difficult to accomplish at scale. But immersing yourself in the user’s world will provide much more in-depth information than more quantitative research methods.
Surveys are well suited to quantitative research. Especially with the plethora of online survey tools (many of them free), one can easily send a survey to hundreds, if not thousands, of participants and gather a large amount of data. This data can then be aggregated to show trends, make charts, and summarize results across many people. However, this research method does not provide the individual insight and appreciation that more qualitative methods provide.
All in all, there are many research methods that a UX researcher has at his or her disposal. The key is to know which research method is best for the type of information he or she is seeking. Also, many research methods fall within the middle ranges of this chart, not at the extremes. I encourage you to use a variety of research methods in your next UX project.
This post originally came from the Nielsen Norman Group’s website, in the full article “When to Use Which User-Experience Research Methods”. Here is an excerpt from that article highlighting a comprehensive list of research methods.
20 UX Methods in Brief
Here’s a short description of the user research methods shown in the above chart:
Usability-Lab Studies: participants are brought into a lab, one-on-one with a researcher, and given a set of scenarios that lead to tasks and usage of specific interest within a product or service.
Ethnographic Field Studies: researchers meet with and study participants in their natural environment, where they would most likely encounter the product or service in question.
Participatory Design: participants are given design elements or creative materials in order to construct their ideal experience in a concrete way that expresses what matters to them most and why.
Focus Groups: groups of 3-12 participants are led through a discussion about a set of topics, giving verbal and written feedback through discussion and exercises.
Interviews: a researcher meets with participants one-on-one to discuss in depth what the participant thinks about the topic in question.
Eyetracking: an eyetracking device is configured to precisely measure where participants look as they perform tasks or interact naturally with websites, applications, physical products, or environments.
Usability Benchmarking: tightly scripted usability studies are performed with several participants, using precise and predetermined measures of performance.
Moderated Remote Usability Studies: usability studies conducted remotely with the use of tools such as screen-sharing software and remote control capabilities.
Unmoderated Remote Panel Studies: a panel of trained participants who have video recording and data collection software installed on their own personal devices uses a website or product while thinking aloud, having their experience recorded for immediate playback and analysis by the researcher or company.
Concept Testing: a researcher shares an approximation of a product or service that captures the key essence (the value proposition) of a new concept or product in order to determine if it meets the needs of the target audience; it can be done one-on-one or with larger numbers of participants, and either in person or online.
Diary/Camera Studies: participants are given a mechanism (diary or camera) to record and describe aspects of their lives that are relevant to a product or service, or simply core to the target audience; diary studies are typically longitudinal and can only be done for data that is easily recorded by participants.
Customer Feedback: open-ended and/or close-ended information provided by a self-selected sample of users, often through a feedback link, button, form, or email.
Desirability Studies: participants are offered different visual-design alternatives and are expected to associate each alternative with a set of attributes selected from a closed list; these studies can be both qualitative and quantitative.
Card Sorting: a quantitative or qualitative method that asks users to organize items into groups and assign categories to each group. This method helps create or refine the information architecture of a site by exposing users’ mental models.
Clickstream Analysis: analyzing the record of screens or pages that users click on and see as they use a site or software product; it requires the site to be instrumented properly or the application to have telemetry data collection enabled.
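To make the idea concrete, here is a minimal sketch of the kind of tallying a clickstream analysis starts with. The log format and page paths below are hypothetical, invented for illustration; real telemetry pipelines are far richer, but the core step of counting which screens users see looks like this:

```python
from collections import Counter

# Hypothetical clickstream log: (user_id, page) pairs in visit order.
clickstream = [
    ("u1", "/home"), ("u1", "/search"), ("u1", "/product/42"),
    ("u2", "/home"), ("u2", "/product/42"),
    ("u3", "/home"), ("u3", "/search"),
]

def page_counts(stream):
    """Count how many times each page appears in the clickstream."""
    return Counter(page for _, page in stream)

def top_pages(stream, n=3):
    """Return the n most visited pages, most popular first."""
    return [page for page, _ in page_counts(stream).most_common(n)]

print(top_pages(clickstream, 1))  # ['/home']
```

From counts like these, an analyst can move on to paths, drop-off points, and other behavioral patterns.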
A/B Testing (also known as “multivariate testing,” “live testing,” or “bucket testing”): a method of scientifically testing different designs on a site by randomly assigning groups of users to interact with each of the different designs and measuring the effect of these assignments on user behavior.
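The randomized-assignment step at the heart of A/B testing can be sketched in a few lines. The hashing approach below is one common implementation choice, not anything prescribed by the article: hashing the user ID makes the assignment both effectively random across users and stable for any single user across visits.

```python
import hashlib

def assign_bucket(user_id: str, variants=("A", "B")) -> str:
    """Deterministically assign a user to one of the design variants.

    Hashing spreads users roughly evenly across variants, and the same
    user always receives the same variant on repeat visits.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same (hypothetical) user always lands in the same bucket:
print(assign_bucket("user-123") == assign_bucket("user-123"))  # True
```

With users bucketed this way, the experimenter can compare a behavioral metric (clicks, conversions, task completion) between the groups.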
Unmoderated UX Studies: a quantitative or qualitative and automated method that uses a specialized research tool to capture participant behaviors (through software installed on participant computers/browsers) and attitudes (through embedded survey questions), usually by giving participants goals or scenarios to accomplish with a site or prototype.
True-Intent Studies: a method that asks random site visitors what their goal or intention is upon entering the site, measures their subsequent behavior, and asks whether they were successful in achieving their goal upon exiting the site.
Intercept Surveys: a survey that is triggered during the use of a site or application.
Email Surveys: a survey in which participants are recruited from an email message.
When testing the Weather Channel app, I discovered a number of usability issues. Clearly, if the UX team had run some basic usability tests, a number of problems would have been discovered and corrected.
Some issues I discovered:
• Make clickable items like buttons seem clickable.
• Remove ads within the feed, especially if they look like weather (editorial) content.
• Do not use ads as a background image on home page.
• Put useful information, like search functionality, in the side drawer.
• Put more information, such as a multi-day forecast, on the home page.
• Use an arrow indicator to prompt the user to scroll down.
• Do not include every searched location in the favorites list.
• Allow users to search a location without saving it.
• Allow users to include more than 10 locations in the favorites list.
• Clearly indicate the current city by writing the city name, rather than relying on an image.
• Give clues on social weather page as to what the icons mean and what will happen before a user clicks the icon.
• Make icons intuitive and less confusing, and add a text label to clarify each icon’s function.
• Allow users to return to the top of the page by tapping the bar at the top of the screen.
• Move the radar closer to the top of the feed, or allow users to reorder the content and remove items that do not interest them.