Measuring usability quantitatively

Guest post by Enric Quintero

Usability can be understood as a measure of the gap between the ideal interaction with a site and the actual interaction taken by our target audience.

In the early days of the internet, the two types of interactions (ideal and actual) differed greatly. That wasn't much of a handicap, given the novelty of the web and the unlimited possibilities the new channel offered, and you could double your conversion rates simply by fixing design errors. That time has passed: today a usable site isn't enough to convert; it is only enough to reduce bounce rate and keep users on your site. Moreover, current website users are impatient by nature: the average time a user will wait for a page to load before leaving a site is barely 5 seconds.

Jakob Nielsen, one of the top usability gurus, notes this when he mentions that the current user has gone from navigating through menus sequentially and slowly to scanning frantically through a website in seconds.

Ideally, you should not merely correct usability problems; rather, you should extract the maximum potential from your products by building on an excellent web user experience.

How do you achieve it?

Good design is not just a creative process.

A good designer and/or architect is one who contrasts his or her own intuitions with the end user's perception in order to create the best web experience.

At the end of the day, the outcome is still based on opinion, since even a UX expert is guessing at what the end user really prefers. That is why user testing is necessary in addition to a good creative process. This way, we blend the scientific with the creative.

What to do after the user testing?

Performing these tests before putting a site into production assures us that our design will avoid the interactions we don’t want, but is it foolproof?

The answer is no, since actual website users behave very differently from test subjects. This is due to differences in conditions or restrictions: web browser, internet connection speed, purpose of the visit, screen resolution, etc.

This brings us to the conclusion that once a site is live we need to collect information from users' behavior to correct problems and keep improving the relationship between the user and the interface.
To quantify these problems and improve usability, we have different weapons:

• Traditional web analytics (WA: Web Analytics) to determine what is happening and where
• User experience analytics (WIA: Web Interaction Analytics) to understand how users actually interact
• Online testing tools to test alternatives for improvement

Measuring usability with quantitative tools

In order to improve user interaction with our site, we can use three types of tools to help us quantitatively identify usability problems.

First, find out “what's happening," identify, and prioritize which areas of our site are creating problems (and how these obstacles influence conversions) using Web Analytics (WA) tools. Examples of these tools include Google Analytics and Omniture SiteCatalyst.
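As a minimal sketch of this first step, the prioritization can be as simple as computing exit rates per page from session pageview logs and ranking them; the session data and page names below are invented for illustration:

```python
from collections import Counter

# Hypothetical pageview log: the ordered pages seen in each session.
sessions = [
    ["/home", "/product", "/cart"],
    ["/home", "/product"],
    ["/home", "/pricing", "/product", "/cart", "/checkout"],
    ["/home", "/product"],
]

views = Counter(page for s in sessions for page in s)
exits = Counter(s[-1] for s in sessions)  # last page of each session

# Exit rate per page: the share of its views that ended the session there.
exit_rate = {page: exits[page] / views[page] for page in views}

# Prioritize the pages where the most sessions end.
worst = sorted(exit_rate.items(), key=lambda kv: kv[1], reverse=True)
```

In practice a WA tool reports these exit and abandonment rates directly; the point is that prioritization starts from simple aggregate counts like these.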

Once we have identified where to focus our efforts, we need to find as many usability issues as possible. This is where Web Interaction Analytics (WIA) comes into play. WIA shows us where our users do not click, where they move the mouse, how far they scroll, how much of the page is viewed, how users behave in a process, etc. In short, these tools explain the "how" at both the page and process levels, aggregated or segmented, and they even allow us to observe the unique behaviors of individual users.

We can summarize the use of these tools by listing the most common usability problems we are trying to solve:

1.) Poor visibility of key page elements.
2.) User interactions are not as expected.
3.) Users do not submit or provide correct information on forms.
4.) Users behave unpredictably.

We can divide the set of tools available to analyze user behavior into two categories: hardware (ex. Eye Tracking Tools), and software (ex. Customer Experience Analytics such as ClickTale). We will concentrate on the second group which is much more readily available and affordable.

Once we know how the user interacted and why the problem occurred, we can move on to the stage of building a better alternative.

Imagine web analytics identifies a page with a high abandonment rate and WIA tools show that users did not see the call-to-action button to buy. Now it's time to build an alternate page with a much more visible button. Is this the end of the process?

Again the answer is no, because if we simply swap in the alternative, we cannot determine whether the new page lowered the abandonment rate or whether some other factor affected the results (such as a campaign, a technical error, etc.).

What can we do?

This is where we use online testing tools (not to be confused with user testing) such as A/B or multivariate testing. The method is quite simple: both the original and alternate pages are live, and the testing platform randomly sends samples of users to either page for a proper head-to-head analysis. Examples of these tools include Google Website Optimizer, Optimizely, and Test & Target.
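Testing platforms typically bucket each visitor deterministically, so the same person always sees the same variant across visits. A minimal sketch of hash-based assignment (the function name and variant labels are ours, not any particular tool's API):

```python
import hashlib

def assign_variant(visitor_id: str, variants=("original", "alternate")) -> str:
    """Deterministically bucket a visitor so they always see the same page."""
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same visitor always gets the same variant.
assert assign_variant("user-42") == assign_variant("user-42")
```

Because the hash is uniform, traffic splits roughly evenly between variants without the platform needing to store any per-visitor state.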

Tools to measure user interaction

We’ll discuss how to use WIA tools to solve the usability problems listed above.

1.) Poor visibility of key web page elements

We can conduct a scroll analysis to determine whether an area or element of a page is visible, by measuring how far users have scrolled down the page. In the screenshots we see an example of this, where red areas were viewed by more users.

We then complement this analysis by measuring the amount of time spent at each point on the page, using the same color scheme.
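The aggregation behind such a scroll report can be sketched in a few lines: given the maximum scroll depth recorded for each visit (the figures below are invented), we compute what share of visitors saw each "fold" of the page:

```python
# Hypothetical maximum scroll depth per visit, as a fraction of page height.
max_depths = [0.3, 0.5, 0.5, 0.8, 1.0, 0.2, 0.9, 0.6]

def reach(threshold: float) -> float:
    """Share of visitors who scrolled at least this far down the page."""
    return sum(d >= threshold for d in max_depths) / len(max_depths)

for fold in (0.25, 0.50, 0.75, 1.00):
    print(f"{fold:.0%} of the page was reached by {reach(fold):.0%} of visitors")
```

A call-to-action placed below a point that only a small fraction of visitors ever reach is, by definition, poorly visible.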

2.) User interactions are not as expected.

To view visitors' actions (and compare them to your own expectations), whether a hover or a click, on a link or on plain text, there are so-called heatmaps. Following the same color scheme as before, we see that the areas with darker reds received much more attention from users. These heatmaps can be segmented according to the behavior or characteristics of users, so that we can identify poorly visible or misleading promotions.
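Under the hood, a click heatmap is essentially an aggregation of event coordinates into grid cells; a minimal sketch with invented click data:

```python
from collections import Counter

# Hypothetical click coordinates (x, y) in pixels on a page.
clicks = [(120, 80), (130, 90), (125, 85), (800, 1500), (810, 1490)]

CELL = 100  # bin clicks into 100x100-pixel cells

heat = Counter(((x // CELL) * CELL, (y // CELL) * CELL) for x, y in clicks)

# The hottest cell is where user attention concentrates.
hottest, count = heat.most_common(1)[0]
```

Rendering is then just a matter of coloring each cell by its count; comparing the hottest cells against the elements you expected users to click is exactly the expectation-versus-reality check described above.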

3.) Users do not submit or provide correct information on forms.
In this case, the key functionality is the analysis of forms -- not just to measure completion rates, but to highlight which fields caused abandonment, how long it took to fill in each field, which fields were skipped, which contained errors, etc.
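The metrics a form-analytics report surfaces can be sketched from per-visit field events; the event shape and figures below are invented for illustration:

```python
from collections import Counter, defaultdict

# Hypothetical per-visit form events: (field, seconds_spent, completed?).
visits = [
    [("email", 4, True), ("phone", 25, False)],                   # abandoned at phone
    [("email", 3, True), ("phone", 30, True), ("card", 12, True)],
    [("email", 5, True), ("phone", 40, False)],                   # abandoned at phone
]

# Which field was the last one touched in abandoned visits?
abandon_at = Counter(v[-1][0] for v in visits if not v[-1][2])

# Average time spent per field.
times = defaultdict(list)
for v in visits:
    for field, secs, _ in v:
        times[field].append(secs)
avg_time = {f: sum(t) / len(t) for f, t in times.items()}
```

Here both signals point at the same culprit: the phone field is where visitors abandon and where they spend the most time, which is exactly the kind of field-level diagnosis the tools provide.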

4.) Users behave unpredictably.
Attempting to view a recording of every visit to a website seems more like torture than research. The task becomes much less daunting when we select a particular segment for analysis. This can be users who had a particular error, abandoned within one step of conversion, had problems with payment, etc.

By selecting and viewing a sample of visits, we effectively have a user test with real traffic, and therefore with actual purchase intent.
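Selecting such a segment amounts to filtering session records on the flags that matter; a minimal sketch with invented session data:

```python
# Hypothetical session records with outcome flags.
recordings = [
    {"id": 1, "error": None, "abandoned_step": None},
    {"id": 2, "error": "payment_declined", "abandoned_step": "payment"},
    {"id": 3, "error": None, "abandoned_step": "shipping"},
]

# Watch only recordings of visits that hit a payment problem.
payment_issues = [r for r in recordings if r["error"] == "payment_declined"]
```

Instead of watching every recording, you watch only the handful that match the problem you are investigating.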

Measure, but to what extent?

In conclusion, we can say that today, the problem is not the lack of tools available to conduct usability analysis. The problem is rather knowing how to optimally use these tools without becoming slaves to them for hours, which is another blog post altogether.


Talk to us to explore how customer experience analytics can improve your business