Usability testing: getting to grips with the NVivo user experience

10 September 2014 - in accessibility, experience, testing, usability, user


At QSR, we are constantly trying to tighten the feedback loop between our customers and our development team. For software like NVivo, which customers may use for eight hours a day, day in, day out, it is critical that the user interface design be both easy and efficient to use.

Our most recent product, NVivo for Mac, posed some great design challenges.

We were trying to balance the strengths of an existing, well-known interface for researchers with the need to design a new interface that feels ‘native’ and ‘intuitive’ to Mac users. These goals were at times at odds with each other.

One new approach we have adopted is usability testing. Our first usability test was run with two groups of testers over two weekends; our most recent was a more focused test, conducted one-on-one with testers. Both were used to test the user interface design of NVivo for Mac.

So what is usability testing?

Usability testing involves representative audience members attending sessions to test the software. Each participant performs a number of agreed tasks within the application and gives their feedback.

Tasks are monitored and any issues and recommendations are documented, along with examples of direct user feedback to help guide design improvements.

Why usability testing?

As blogged about previously, QSR has had great success with Beta programs in garnering user feedback on feature direction and, indirectly, on our interface design. But Beta programs have limitations, such as a limited ability to guide testers towards particular areas of the product and to explore their feedback in depth, so we have started supplementing them with usability tests.

The primary goal of our usability tests has been to get direct feedback from users on the current user interface, and to discuss it with them, to ensure design decisions meet the NVivo 10 for Mac vision and architectural brief:

  • “Delivering a Mac OSX based qualitative analysis application that provides customers with the same rich user experience and functionality that is provided with NVivo for Windows”
  • “A consistent NVivo user experience across the NVivo products”
  • “Respect the culture of the platform”

Getting this balancing act right, where design guidelines are often in direct conflict, was seen as one of the biggest challenges of the NVivo for Mac project. We thought a usability test would be better able to elicit the kind of usability feedback we needed to supplement the feedback gathered from the already planned Beta program.

It is important to note that the usability test was not aimed at getting feedback on general features or issues encountered (bugs) while using the software.

QSR’s approach to usability testing

There are various approaches to usability testing. The two approaches we considered for our usability tests were:

  1. User Goal Driven
  2. Free form (exploratory)

We decided a more “user goal driven” approach was appropriate, as it enabled us to get specific feedback on the areas of the design we were most interested in and also allowed us to better guide users who may not be familiar with NVivo.

The first of our usability tests consisted of three phases:

Phase 1 – User-based scenarios

Participants were given a set of tasks (or scenarios) designed to achieve particular user goals within the application. The tasks shaped which aspects of the user interface, and hence the design, we were able to elicit feedback on.

The tasks consisted of a scenario, supporting data, a rough guide to how much time to spend on the scenario, and an area for testers to give feedback on how easy or hard the scenario was to achieve. An example can be seen below:

[Example task scenario]

During the user-based scenario phase, QSR staff were on hand should participants run into issues or have questions about the scenarios or the software. Assistance to testers was limited as much as possible, with the aim of simply getting the user past any issue they had while being careful not to skew results by walking them through exactly what to do.

Phase 2 – Questionnaire

At the end of the usability test, participants completed a survey with a set of questions relating to usability and design. These included general usability questions (e.g. what worked and what did not), as well as specific questions depending on the participant’s background (e.g. whether or not they had prior NVivo or Mac experience).

Phase 3 – Usability tester collaboration

In the first usability test we ran a focus group to allow participants to discuss their experience, perceptions, opinions and attitudes towards the current NVivo for Mac design. A QSR staff member acted as moderator, drawing on the skills of our Business Analyst to help elicit comments and understand user feedback.

In our second usability test we conducted one-on-one interviews with testers, which allowed more free-form discussion to gather and dig into feedback.

Both methods were successful. Given our first usability test was more general (spanning the product), the focus group made perfect sense. The second usability test focused on specific functions, in particular transcribing in NVivo for Mac, so one-on-one questioning allowed us to tailor questions based on each tester’s transcription, Mac and NVivo experience.

Goals of the usability test

We outlined at the outset what our goals were for each usability test. The two tests had a very different focus, based primarily on what we wanted feedback on.

This was an important step and helped shape the tasks or scenarios we had testers work through. It also focused both who we recruited and the way we questioned our testers to seek feedback.

Recruiting for a usability test

The skills and experience of your testers should tie directly back to the goal or focus of the usability test. If you want feedback from people with a particular skill level or specialization, it is critical that recruitment focuses on getting the right spread of users to make up your test group.

At QSR we have generally tried to get an even spread from completely new users to specialists. This has allowed us to gauge the usability and learnability of the software and to determine how our capabilities compare to other such tools on the market.

In terms of recruitment, we have tried many avenues, from putting posters up at universities, to mailing groups of users, to approaching users we have worked with before and know have the skills or experience we are after.

I must say we are very lucky at QSR to have a really passionate and active user base that we can draw from. That said, the users we have recruited so far have primarily been new contacts.

Setting up for a successful test

Like most things, good planning is crucial to a successful test.

One thing we were very careful about was ensuring all testers’ hardware (machines, monitors, keyboards and mice) was exactly the same. We didn’t want to invite the possibility of bias in one tester’s feedback because their designated hardware gave them a different user experience.

The test tasks or scenarios play a very important role in ensuring testers acquire the experience required to provide feedback on the aspects of the user interface design we are interested in. We had several QSR staff, with varying levels of experience from complete novices to experts in NVivo, try out the tasks. We wanted to make sure our tasks were clear for any tester to understand and that testers could get through them in the time available.

The environment is just as important. Our second usability test involved testers transcribing in NVivo for Mac, so we opted to put each tester in a separate room, so that one tester listening to audio and video while transcribing would not disturb the others.

Tools to support usability testing

Our two usability tests to date have been targeted at NVivo for Mac, so we sourced a usability tool called Silverback, which is designed to capture keystrokes, tester audio, and video of the tester’s expressions as they run through the tasks.

We used the video collected to verify whether the user actually completed the goal of each scenario. We also tagged points in the video where users got confused and where they looked for a particular feature. All this information was great for establishing, based on the user’s experience, how learnable, understandable and generally efficient our NVivo interface design was.

The most important part of all…

Analyze and act on the feedback!

Feedback has come in many different forms, from direct questioning to the very subtle nuances seen in video of a tester’s interaction with the product. There is very little benefit in running any test if you walk in with preconceived outcomes or don’t act on what you learn from it.

So we planned time into our schedule so that feedback could be analyzed and changes accommodated.