Moderated Session Mechanics

A moderated usability session is a structured conversation where the team observes a customer using a prototype or product, with a moderator guiding the session. Done well, it produces specific evidence about whether a solution actually works for the people it was designed for. Done badly, it produces opinions and confirmation bias.

Goal

To run a session that captures real reactions and decision-making from the participant, with minimal moderator influence and maximum learning for the team.

Context

Moderated sessions are usually used as part of prototype testing, but the mechanics below apply to any moderated evaluation. The mechanics matter because small differences in how the session is run produce large differences in the quality of the data.

The six-step structure

Every moderated session follows the same six steps in order:

  1. Pre-Test Questionnaire. Set a benchmark for the participant's preconceptions on usability, value and terminology. Ask qualifying questions to confirm the cohort (e.g. advanced users, new users).
  2. Orientation Script. Read a script outlining what will happen, expected behaviours, and that there are no right or wrong answers.
  3. Share the Tasks. Give the participant the printed tasks to work through.
  4. Post-Test Questionnaire. Ask the same pre-test questions again. Differences highlight changes after completing the tasks.
  5. Debrief with the Participant. Clarify unusual behaviour or points of interest from the test.
  6. Debrief with Observers. Go task by task and identify insights with the rest of the team.

Each step is covered in more detail below.

Pre and post-test questionnaires

The same questionnaire is used before and after the session, serving a different purpose each time:

Pre-test:

  • Sets a benchmark for the participant's preconceptions: usability, value, terminology.
  • Screens participants into different cohorts based on experience or demographics.

Post-test:

  • Identifies changes in perceptions after using the product.
  • Surfaces interesting mismatches. For example, the participant rates the task as easy but the moderator observed them struggling.
  • Gives the moderator time to gather their thoughts while the participant completes the form.
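The pre/post comparison can be sketched in code. This is an illustrative sketch, not a prescribed tool: the question keys, the 1-5 rating scale, and the "struggle" log are assumptions for the example.

```python
def score_shift(pre, post):
    """Return the post-minus-pre delta for each question asked both times."""
    return {q: post[q] - pre[q] for q in pre if q in post}

def mismatches(post, observed_struggle, high=4):
    """Questions rated high despite the moderator observing a struggle."""
    return [q for q in observed_struggle if post.get(q, 0) >= high]

# Hypothetical 1-5 ratings from one participant
pre = {"ease_of_use": 4, "value": 3, "terminology_clear": 5}
post = {"ease_of_use": 4, "value": 4, "terminology_clear": 3}

print(score_shift(pre, post))             # perception shifts after the tasks
print(mismatches(post, {"ease_of_use"}))  # rated easy, but visibly struggled
```

A negative delta on a question like terminology is itself a finding: using the product changed the participant's understanding of the words.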

Orientation script

A written script ensures the same wording for every participant, which removes a source of unintentional influence between sessions. Cover three things:

  • What is happening. Clearly outline the expectations for the participant.
  • How to behave. Introduce the think-aloud process.
  • Don't worry about offending us. People know they are not supposed to tell someone their baby is ugly. Explicitly tell the participant the moderator did not design the prototype, even when they did.

Moderator role

The moderator's job is to extract honest signal without contaminating it. Four behaviours that consistently improve session quality:

  • Build rapport. Make the person feel comfortable. What this means varies by participant.
  • Act impartial, even if you're not. Present the product neutrally. "I didn't create it" is the standard line, even when the moderator did create it. Do not react to mistakes or make the person feel stupid. Be aware of voice and body language.
  • Probe and interact appropriately. In early-stage prototypes with limited interaction, probing is fine. With interactive prototypes, save questions for the debrief so the participant's flow isn't broken.
  • Don't rescue participants when they struggle. Encourage the person to keep trying. Only assist as a last resort. Where they get stuck is the most valuable signal in the session.

Thinking aloud

Ask the participant to verbalise their thoughts, feelings, and actions while using the product. Done well, this captures real-time reactions and decision-making, which is fundamentally different from retrospective feedback ("oh yeah it was fine").

Two common challenges and how to handle them:

  • Self-consciousness. Most participants are not used to narrating their thinking. Demonstrate the technique on something other than the task: "I'm looking for the menu button, ah there it is in the corner." "I expect this click to take me to ... hmm, that's not what I expected." After hearing it once, most participants pick it up quickly.
  • Silence. People stop talking when they concentrate hardest. Don't disturb them. Let them figure it out. Take note of every silence; the long ones usually indicate where the design is asking too much of the user.
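If the session is recorded with timestamps, long silences can be pulled out afterwards rather than noted by hand. A minimal sketch, assuming a list of utterance start times in seconds and an illustrative 10-second threshold:

```python
def long_silences(utterance_times, threshold=10.0):
    """Return (start_time, gap_seconds) for every silence longer than threshold."""
    gaps = []
    for prev, curr in zip(utterance_times, utterance_times[1:]):
        gap = curr - prev
        if gap > threshold:
            gaps.append((prev, gap))
    return gaps

# Hypothetical markers: participant spoke at 0s, 5s, 40s, 45s
print(long_silences([0, 5, 40, 45]))  # one long silence starting at 5s
```

Cross-referencing these gaps against the task the participant was on at the time points straight at the hardest parts of the design.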

Common challenges

Three failure modes that are easy to fall into:

  • Moderator influence. What goes wrong: the moderator is too involved and leads participants, or is so knowledgeable that people look to them for answers. How to handle it: act as if you were not involved in creating the product and don't know how it works.
  • Transfer learning. What goes wrong: a lot of learning happens in the first task, so the order of tasks influences performance. How to handle it: shuffle the order of tasks across participants where possible.
  • Advanced features. What goes wrong: some features need training to use, and you're testing usability, not ease of learning. How to handle it: prepare training for participants before the advanced-feature tasks.

Debriefing with the participant

After the participant has completed all tasks, debrief them to clarify the why behind specific actions:

  • Start generic. Ask questions related to your assumption (e.g. "Was that easy to use?").
  • Ask specific questions about behaviour. Show the recording of moments you found interesting, where the technology allows.
  • Let observers ask questions. If you have remote observers, allow them to submit questions during the debrief.
  • Review alternative designs. Contrast creates value. Would another design work better or worse? Why?
  • Intentionally lead the person. Controversial, but offering an opinion ("some people have said this is too cluttered, what do you think?") can surface whether the participant has been holding back. Use sparingly.

Debriefing with the observers

The often-skipped step. After the participant has left, the team that observed the session does its own debrief while everything is fresh:

  • Start generic. Ask the same overall question ("Was that easy to use?").
  • Go task by task. Step through each task and gather observations.
  • Document and prioritise insights. Not every observation is equally important. Capture the ones that change the team's interpretation of the assumption being tested.
  • Merge insights into the Opportunity Solution Tree. Make sure useful findings get applied to future iterations rather than living only in the moderator's notes.

The observer debrief is where the team's collective interpretation forms. Skipping it means each observer leaves with their own version of what happened, and the team's later conversation about what to do is muddier than it needs to be.

Anti-patterns

  • Skipping the orientation script. Each participant gets a slightly different framing, which introduces noise that looks like signal.
  • Moderator answers questions during the task. Destroys the data for that task. The participant now reflects what the moderator said, not what they would have done alone.
  • No observer debrief. Insights live in fragments across people's heads and decay quickly.
  • Rescuing struggling participants too early. The struggle is the data. Letting the participant work through it shows what the design is actually asking of them.

© ZeroBlockers, 2024-2026. All rights reserved.