iPad app UX testing : Observations from the field
We’ve just finished a fascinating project for a major Telco who asked us to test an iPad app prototype to determine how useful and usable it will be for its intended audience.
Over the course of the project I kept a list that I imaginatively entitled ‘interesting stuff’, which I’d like to share in the hope that it will inform any iPad research that you do.
Does concept testing require a different approach?
The majority of our projects focus on whether users can use something for its intended purpose. This project differed because we needed to validate that the concept of the app was actually going to be of use to people. We recruited two users per session and gave them open-ended tasks in an effort to promote discussion and exploration of the prototype.
This approach allowed us to get more people to evaluate the prototype within the allocated testing time. It also led to a number of unprompted discussions between participants, who debated opinions on functions and features. It was also interesting to observe how users helped one another to articulate their likes and dislikes.
The halo effect
The iPad is a beautiful device that clearly enhances the content that it delivers. I noticed during the testing how users forgave the failings of the app because they were smitten by the device.
The device clearly has a halo effect that needs to be taken into consideration when evaluating feedback from users who are reporting success while clearly having all sorts of problems trying to complete their tasks.
‘You already know how to use it’
Despite the confident claims of the Apple ad campaigns, we let users orientate themselves with the device prior to testing. We set them up with a few simple tasks like ‘find your house on Google Maps’, which allowed them to get used to the interface and play with a few gestures.
As you would expect, iPhone users took to the device quickly and had certain expectations of how it would behave. Once we were happy that users felt comfortable with the device, we knew we could focus on the prototype and that errors would be down to problems with the app and not with the device.
Think about your clients in the observation room
Carefully positioned overhead cameras are good in theory, but users get so engrossed with the device that they quickly obscure the view. We marked out areas to keep the iPads in with butchered post-its, which went some way to helping our observers see what was happening.
We also ran into problems due to the highly reflective screen on the iPad. The screens reflected the halogen lights back into the cameras, leaving areas of the screen completely obscured. Spend extra time positioning the devices prior to testing to help alleviate some of these problems.
Gestures are interesting
It was really interesting to watch people’s gestures when using the device and to question them on how they expected the app to respond. It can make for difficult moderation at times, as users perform gestures without thinking and don’t always articulate them as they do them.
We found that users often repeated a gesture to try to elicit a response from the interface, which made it easier to spot and then question them about it.
Social use of apps
For the first time ever in conducting user research, I found users describing how they would use different parts of the app depending on whom they were with at the time. Users described how they would use one particular interface with their mates because it was more abstract and far cooler than a more conventional alternative.
This made us realise how users were happy to sacrifice usability if there was an alternative that made them look good in front of their mates.
Get your nomenclature sorted
With new devices there often isn’t an established vocabulary to use. I found it tricky not to talk about ‘clicks’ and ‘desktops’ when giving users instructions. I decided to come up with a set of terms that I stuck to throughout the sessions for consistency. This made it easier for users to understand me and for observers to follow the sessions.
Don’t forget the dry run
It’s always wise to do a dry run of usability tests to check timings and tasks, but it’s all too easy not to get round to it. When testing new devices a dry run is vital, both to iron out practicalities such as task duration and to get an indication of how people will respond to the device.
After doing a dry run I ended up reducing the scope of the test plan by a third because I had underestimated how long it would take users to get through the tasks.
A client who was also a UX guy once kindly didn’t attend the first session of some research, allowing me to find my feet. By having done a dry run you feel so much more confident with your first session, which can make all the difference when you are trying to impress a new client.
Apple really are quite strict
It’s not every day a client rings you up and asks you to buy 4 iPads to bring to a meeting the following day. Off I strutted to the Apple store feeling like loadsamoney, only to be told I could only buy 2 at a time. Remember to factor this into your project planning, as unavoidable practicalities such as this can really trip you up.
What have you found?
I’d love to hear what your experiences have been of testing with iPads. Please add your own observations in the comments below.