To ensure high response rates and avoid misleading survey results, keep your surveys short and ensure that your questions are well written and easy to answer.
Field studies should emphasize the observation of real user behavior. Simple field studies are fast and easy to conduct, and do not require a posse of anthropologists: All members of a design team should go on customer visits.
Focus groups and surveys study users' opinions, not actual behavior, so they are misleading for the design of interactive systems like websites. Automated usability measures are just as misleading.
Discount usability engineering is our only hope. We must evangelize methods simple enough that departments can do their own usability work, fast enough that people will take the time, and cheap enough that it's still worth doing. The methods that can accomplish this are simplified user testing with one or two users per design and heuristic evaluation.
Participants in a course on usability inspection methods were surveyed 7-8 months after the course. Factors that influenced adoption were the cost of each method, its rated benefit, its relevance to current projects, and whether it had active evangelists.
Extensive usability testing was conducted to guide the design of Sun Microsystems' 1995 website. This series of articles describes the design team's methods and findings in detail.
Usability inspection is the generic name for a set of methods that are all based on having evaluators inspect a user interface. Typically, usability inspection is aimed at finding usability problems in the design, though some methods also address issues like the severity of the usability problems and the overall usability of an entire system.
A summary of statistics for 13 usability laboratories in 1994, an introduction to the main uses of usability laboratories in usability engineering, and a survey of some of the issues related to the practical use of user testing and computer-aided usability engineering.
Learn how to run a remote moderated usability test. This second video covers how to actually facilitate the session with the participant and how to end with the debrief, the incentive, and initial analysis with your team.
In remote usability studies, it's hard to identify test participants who should not be in the study because they don't fit the profile or don't attempt the task seriously. This is even harder in unmoderated studies, but it can (and should) be done.
A simple method for visually identifying strong vs. weak themes in qualitative data from user research: by placing individual observations in a spreadsheet and color-coding them.
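The color-coding approach above amounts to tallying how many observations support each theme. A minimal sketch of that tally in Python (the observations and theme names here are invented for illustration; the threshold is an assumption, not from the article):

```python
from collections import Counter

# Hypothetical research observations, each tagged with a theme
# during analysis. In a spreadsheet, each theme would get a color.
observations = [
    ("Couldn't find the search box", "navigation"),
    ("Menu labels were confusing", "navigation"),
    ("Checkout form felt too long", "forms"),
    ("Back button lost form data", "forms"),
    ("Didn't notice the filter options", "navigation"),
    ("Liked the product photos", "content"),
]

# Count how many observations support each theme.
counts = Counter(theme for _, theme in observations)

threshold = 2  # assumed cutoff: themes seen this often count as "strong"
for theme, n in counts.most_common():
    strength = "strong" if n >= threshold else "weak"
    print(f"{theme}: {n} observation(s) -> {strength}")
```

In a real spreadsheet the color does the counting visually; the script just makes the same strong-vs-weak judgment explicit.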
The total customer journey and the quality of the user experience will benefit from treating market research and user research as highly related and integrating the two, instead of keeping the different kinds of research teams from collaborating.
Users' answers to survey questions are often biased and not the literal truth. Examples include acquiescence bias, social desirability bias, and recency bias. Knowing about response biases will help you interpret survey data with more validity for any design decisions based on the findings.
When doing user research for a UX design project, we can ask questions in two ways: open-ended (no fixed set of response options) and close-ended (users are restricted to picking from a few answers). Both work well, but only for those research questions they are suited to answer.
There are two ways to structure a UX research study when we're testing two (or more) designs: we can have each design tested by different people, or we can reuse the same users for all conditions. Each approach has some advantages and problems.
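The two study structures described above are the classic between-subjects and within-subjects designs. A minimal sketch of how participants might be assigned under each (participant IDs, designs, and the counterbalancing scheme are all assumptions for illustration):

```python
import random

participants = [f"P{i}" for i in range(1, 7)]  # hypothetical participant IDs
designs = ["A", "B"]

# Between-subjects: each participant tests only one design.
random.seed(0)  # fixed seed so this sketch is reproducible
between = {p: random.choice(designs) for p in participants}

# Within-subjects: every participant tests both designs, with the
# order alternated (counterbalanced) to offset learning effects.
within = {p: (designs if i % 2 == 0 else designs[::-1])
          for i, p in enumerate(participants)}

print("Between-subjects:", between)
print("Within-subjects: ", within)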
Tree testing is a supplement to card sorting as a user research method for assessing the categories in an information architecture (especially a website IA and its proposed or existing navigation menu structure).
User research generates masses of qualitative data in the form of transcripts and observations that can be summarized and made actionable through thematic analysis to identify the main findings.
Through observation and collaborative interpretation, contextual inquiry uncovers insights about users that may not be available via other research methods.
Sometimes you should intentionally overrecruit test participants for one-on-one user-research studies. Backup participants must be recruited according to the same screening criteria and paid at least as much as regular participants.
Benchmark your UX by first determining appropriate metrics and a study methodology. Then track these metrics across different releases of your product by running studies that follow the same established methodology.
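Tracking one metric with one methodology across releases can be sketched as follows (the releases and task-success numbers are invented for illustration):

```python
# Hypothetical benchmark results: task-success rate measured with the
# same methodology across three releases of a product.
success_rate = {"v1.0": 0.62, "v1.1": 0.68, "v2.0": 0.79}

baseline = success_rate["v1.0"]
for release, rate in success_rate.items():
    # Relative change versus the first benchmarked release.
    change = (rate - baseline) / baseline * 100
    print(f"{release}: {rate:.0%} task success ({change:+.0f}% vs baseline)")
```

The key point the article makes is methodological: the comparison is only valid because every data point comes from the same metric collected the same way.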
Exact costs will vary, but an unmoderated 5-participant study may be 20–40% cheaper than a moderated study, and may save around 20 hours of researcher time.
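The savings range above can be sketched with hypothetical numbers (the moderated-study cost and hourly rate below are assumptions chosen only to illustrate the 20-40% figure, not data from the article):

```python
# Assumed total cost of a moderated 5-participant study.
moderated_cost = 5000.0
savings_low, savings_high = 0.20, 0.40  # the 20-40% range from the text

unmoderated_low = moderated_cost * (1 - savings_high)
unmoderated_high = moderated_cost * (1 - savings_low)
print(f"Unmoderated study: ${unmoderated_low:.0f}-${unmoderated_high:.0f}")

researcher_hours_saved = 20   # from the text
hourly_rate = 75.0            # assumed fully loaded researcher rate
print(f"Researcher-time savings: ~${researcher_hours_saved * hourly_rate:.0f}")
```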
Uncover the story within extensive UX-research data by following a process of revisiting original research objectives and organizing findings into themes.
You can learn more of the right things in user tests if you start with broad tasks instead of immediately steering users toward areas of interest. Prepare additional, focused tasks that can be used to direct users if needed.
The critical incident technique (CIT) is a research method for systematically obtaining recalled observations of significant events or behaviors from people who have first-hand experience of them.
By first working independently on a problem, then converging to share insights, teams can leverage the benefits of both work styles, leading to rapid data analysis, diverse ideas, and high-quality designs.