User testing ultimately boils down to bridging the gap between the UX designers’ intended experience and how real life people actually use the product. You could build the best-engineered product in the world, but if it’s difficult to use or unattractive to users, it will never get any traction.
User testing gives you access to another pair of eyes that approach your product from a perspective you never considered. Because let’s face it: our own subtle biases creep into product design and cloud our thinking.
It’s not best practice to rely on yourself to evaluate your own product’s design. We’re partial towards our own work, and our perception of the product is skewed. This is why user testing is such a powerful tool – it removes that skew and lets you learn directly from the people your work is meant for: the users. The sum of their experiences with your UX design allows you to create the best possible product at an early stage.
This is why UX professionals rely on user testing to obtain feedback on their products and websites. Here’s how it is usually done:
Use their experiences to create a feedback loop that responds to their pain points and criticism, and promptly correct course.
There are generally two ways of running a usability test: moderated and unmoderated.
While UX designers have to be familiar with both methods’ nuances, there are a few tips that apply to each testing type.
Early testing helps you understand the user experience during the initial stages of the design process. This gives designers more information to build into their iterative design process as more and more features are added. Early testing gives you feedback about whether or not you’re on the right track while there is still time to pivot.
Early-stage testing quickly identifies flaws and lets your designers make appropriate revisions before they become too costly. With user testing, you can increase the likelihood of releasing a successful product that makes an impact right out of the gate.
It is more costly and time-intensive to make significant changes after releasing the product to the market. By catching flaws early on, you prevent them from hampering the user experience and save a tremendous amount of time later.
Early testing can be an opportunity to use simple design mockups and wireframes to get a feel for your users’ main pain points. It becomes even more important to rely on early testing before investing the time into more advanced mockups or a working application. This requires defining the test objectives and explaining to test users what’s required of them, which takes us to our next tip!
For user testing to yield good results, it’s important to define your goals as clearly as possible. Start by asking yourself why you want to test the product. What are you aiming to learn from user testing? Once you’ve answered these questions, you can start to identify features and areas you want feedback on.
Here are a few objectives common to most user testing projects:
Once you have identified your objectives, you can define the specifics of the test that will answer your questions. This will keep your test user-focused. If you’re dealing with a complex piece of design with several moving parts, you’ll want to run several usability tests, each focused on a specific task or hypothesis you want to validate.
Every product requires a customized usability test, but as a rule you should set aside 30 to 60 minutes for each user. Sessions longer than an hour tire participants out, which often results in lower-quality feedback.
Pro Tip: Start collecting feedback and initiating discussions immediately after conducting a test. This is important because the test will be fresh in the user’s mind.
Create Actionable Tasks
Every task in the test should be actionable. This means that when your user completes a certain task, it moves them closer to whatever the main goal is. These actionable tasks will relate to the specifics of the product that you want to test. Typical examples include:
Prioritize Usability Issues
Some issues are critical by definition, because they affect tasks the system has to support. For example, if the ‘check-out’ workflow on your website or app is not intuitive, all of your users will be affected. But if a visual design element fails to load properly, it’s a minor annoyance by comparison. In this case, you’ll want to prioritize the check-out page over the visual elements.
This is why it’s important to list your tasks in the product and order them by priority.
Have a Goal for Each Task
The test outcomes should be clearly defined with a goal. For instance, if you’re working on a check-out page, the users should be able to complete the entire process within a specific period of time (usually 2 to 5 minutes). The more specific your goal, the easier it will be to judge the result.
Note: You don’t necessarily have to share the goal with participants, and doing so could skew their results.
Sure, you can ask your next-door neighbor to test the product or ask your friends on Facebook to send in their suggestions, but these results won’t offer the deep insights you need for a successful product. While this method certainly gets the ball rolling for usability testing, it should not be heavily relied upon. If your product design is more refined or your users have specialized skills, you’ll have to look elsewhere.
Relying on the wrong kinds of users will pollute your user test results and could completely miss issues that actual users will encounter.
Ideally, your current user-base can be used to test new versions of your product. One way to do this is by using pop-ups and opt-in boxes to solicit volunteers. Of course, existing users won’t be expecting a long, drawn-out test, so you’ll need to plan accordingly once you’ve established your goals.
If you already have an existing roster of clients, then a quick analysis of available information can help you gain more insights about the user experience. This is often done with customer support tickets, surveys, previous usability sessions, and analytics data. You might wish to focus on your “power users” or those that have demonstrated difficulty navigating your software in the past. Identifying these user profiles can yield massive dividends on the validity of your user tests.
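As a sketch of how that analysis might look in practice, you could segment an exported user list by activity and support history. The field names and thresholds below are hypothetical, not taken from any specific analytics tool:

```python
# Hypothetical records exported from your analytics and help-desk tools;
# the field names and cut-offs are illustrative, not from a real product.
users = [
    {"email": "a@example.com", "sessions_30d": 42, "support_tickets": 0},
    {"email": "b@example.com", "sessions_30d": 3,  "support_tickets": 5},
    {"email": "c@example.com", "sessions_30d": 18, "support_tickets": 1},
]

# "Power users": highly active accounts.
power_users = [u for u in users if u["sessions_30d"] >= 20]

# "Struggling users": repeated support contact suggests navigation trouble.
struggling_users = [u for u in users if u["support_tickets"] >= 3]

print([u["email"] for u in power_users])
print([u["email"] for u in struggling_users])
```

Both segments are worth recruiting: power users stress advanced features, while struggling users tend to surface the pain points everyone else silently endures.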
How to Define the User Test Criteria
Before signing up volunteers, you’ll have to define the user test criteria and select participants accordingly. For example, if you are testing a mobile app for an eCommerce business, you should try obtaining feedback from people who order items regularly. You can even get more specific here: for instance, the volunteers should place orders at least twice a month and have experience with at least two different delivery services.
You should also define criteria for individuals you won’t be including in your session. This may include early adopters, tech-savvy users, and participants who may have a conflict of interest (such as competitors).
Experts believe that you should test five users, because this will typically uncover around 85% of the major red flags in your design. The most important problems are easy to spot for people who are new to your product, and difficult for you to spot because you approach it from a different perspective. Often, you’ll learn a lot from the first user you test, a little less from the next, and so on.
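The 85% figure traces back to Nielsen and Landauer’s problem-discovery model, which estimates the share of usability problems found as 1 − (1 − L)^n, where L is the probability that a single user surfaces any given problem (about 0.31 on average in their data) and n is the number of test users. A quick sketch of the diminishing returns:

```python
def problems_found(n_users: int, l: float = 0.31) -> float:
    """Estimated share of usability problems uncovered by n test users.

    Based on Nielsen & Landauer's model: 1 - (1 - L)^n, where L is the
    average probability that one user reveals a given problem (~0.31).
    """
    return 1 - (1 - l) ** n_users

for n in (1, 3, 5, 10):
    print(f"{n} users -> {problems_found(n):.0%} of problems")
```

With the average L of 0.31, five users land at roughly 84%, and each additional user past that adds less and less – which is why many teams prefer several small rounds of testing over one big one.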
Provide Instructions on How to Join the Session
Before starting a test session, make sure all your participants know the following things:
Unfortunately, some users won’t fully commit to attending the testing session and will ultimately not show up at all. They’ll get busy, forget, or have other priorities in life than participating in your study. An effective way to minimize no-shows is to ask users to reply to your email to confirm their session. This can be done by using the right ‘triggers’ in your confirmation email. For example, the subject line could read: “Testing session scheduled for August 19, 2 PM. Please reply to confirm.”
Alternatively, you could also call users to remind them about their appointment one day before the session.
Building rapport is the most underrated aspect of conducting a user study. We cannot stress enough how important it is to make sure your volunteers feel comfortable and open as they speak with you. This allows you to receive the most authentic feedback to improve the design experience of your product. As a general rule of thumb, the deeper your volunteer’s trust, the more frank their feedback will be.
Here are a few best practices to keep in mind:
Allow Users to Think Aloud
It is important to listen to and observe your users during a test as you guide them through their tasks. It is not uncommon for users to feel uncomfortable and awkward as you silently observe them from afar. So, try to break that silence and encourage your users to think aloud as they perform the tasks.
This can prove beneficial to user testing as well. When users think aloud, they walk you through the thought process behind the actions they perform on each task. By allowing them to speak while they work, users have the opportunity to be more candid and authentic with each response. This, in turn, provides you with valuable insight into their actual pain points.
You earn bonus points if you can get them to curse at the screen! That means you’ve discovered an essential design flaw and can quickly course correct.
Best Practices for Test Recording
Make sure to collect the tester’s consent before recording their session. Look up ‘user testing consent form’ on Google and you’ll find what you need.
Once you’ve collected consent, you can begin recording. Your objective is to record how long it takes users to complete their tasks.
It is imperative not to help testers during this period. Let them find the solution themselves. Do not intervene even if they appear stressed out or frustrated – this is a sign they’ve encountered a few issues. Observe if and how they overcome their hurdles.
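One low-tech way to turn those observations into numbers is to note start and finish timestamps for each task while watching the session or recording, then compare the elapsed time against the goal from your test plan. The task names, timestamps, and five-minute goal below are made up for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical session log: when the tester started and finished each task.
# In practice you'd jot these down while reviewing the recording.
log = [
    ("Complete check-out", "14:02:10", "14:05:45"),
    ("Find the refund policy", "14:06:00", "14:07:20"),
]

GOAL = timedelta(minutes=5)  # e.g. the time goal set in your test plan
FMT = "%H:%M:%S"

for task, started, finished in log:
    elapsed = datetime.strptime(finished, FMT) - datetime.strptime(started, FMT)
    status = "within goal" if elapsed <= GOAL else "over goal"
    print(f"{task}: {elapsed} ({status})")
```

Across several testers, the same table quickly shows which tasks consistently blow past their goal – those are the workflows to prioritize in the next design iteration.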
For a remote unmoderated test, you can use tools like Lookback, Validately, and UserTesting. They get the job done.
You can conduct remote moderated sessions with Google Hangouts, Microsoft Teams, or Zoom. Simply ask the testers to share their screen, and then observe how they interact with your product.
Always Close with a Warm Thank You
When you have reached the end of your test, always thank your users for their time and effort. Depending on your industry and budget, rewarding users with cash or gift cards can be a great gesture. It’s also a great way to conclude a warm and welcoming experience.
An iterative design process is a simple concept, and one that fits well with our recommended approach to building software using agile and Scrum. Instead of trying to design all of the screens for the entire product up front, start with a small number of the most important features – the ones that the development team plans to implement in their first few sprints. Once you’ve identified your users, generated a prototype to address their pain points, tested it, and collected feedback, it’s time to put that feedback to good use.
User testing is not a “one and done” activity. Incorporating user testing at various stages throughout the development process will help to merge customer satisfaction and the agile process to increase your return on investment.
User testing allows designers to test their prototypes quickly and efficiently. Prototypes that show promise can be improved upon until they reach your product goals; those that fail can be eliminated or re-designed. It’s a cost-effective approach that ultimately puts your users’ best interests at the heart of your software development process.