Designing for voice in health: my week as Alexa

As part of NHS England’s Transformation Directorate, our small but perfectly formed Innovation Lab/cxpartners team spent ten weeks over the summer of 2022 testing how voice-based technologies could improve the experience of patient care. The project’s aim was to explore voice as a channel for the NHS, and to identify effective use cases, benefits, risks and considerations for implementing it successfully.

After carrying out some primary research into how people use voice assistants, we agreed to build the project around the skill of ordering repeat prescriptions, using the current NHS app and NHS.UK as our model. We attempted to recreate this experience through voice, prototyping on an Alexa smart speaker.

Digital health platforms are expanding

Lots of new platforms are starting to provide health services digitally. Google, Apple and Samsung are all entering this world, gathering health data and presenting their analysis back to users. So it makes complete sense for the NHS to look into the capabilities of a voice-activated system to give people the chance to take control of their healthcare. Ordering repeat prescriptions is such a common, routine activity that it was an ideal focus for our project team. The current statistics around voice assistants clearly demonstrate that this is a popular and growing market:

86% of people under 54 use voice assistants

30% of people who don’t currently use voice will start this year

51% of people use voice for health purposes every month

Becoming Alexa

As part of the research team, and as a content designer, I was fascinated to listen in on our first rounds of research into how people respond and relate to voice assistants in their lives. Although I don’t think it was ever the original plan to use one of our team to play the voice of Alexa, it proved useful to mock up a potential ‘real life’ situation with participants as soon as possible. Hearing first-hand just what does and doesn’t work for people very much informed how our script was constructed, and how Alexa would respond in the most appropriate and realistic way.

Perhaps I was too quick to volunteer to be the voice of Alexa, as I have quite a love/hate relationship with my own Alexa device. If she hears me and responds accordingly, then all is well. But I have also been known to become pretty aggressive when all does not go to plan. So in practising my Alexa tonalities, I had to make my peace with her and really listen to how she says certain phrases, especially when she doesn’t hear or understand a response. I think I got pretty good at “I’m sorry, I didn’t quite catch that”, or words to that effect.

My first attempts were a little bumpy. It’s quite a skill to become Alexa and read a script at the same time, whilst also listening to the person’s responses and choices. I would liken it to rubbing your stomach and patting your head at the same time. However, once we had run through each scenario a few times, I do think I became pretty convincing, and our participants really didn’t seem to know the difference between an actual Alexa and our ‘deep fake’ version, aka myself. Thankfully my services as the voice were only required for a short time, as we then moved into the prototype phase, where the actual Alexa was programmed to respond. But I’m not bitter... I can still fool the odd person with my Alexa voice when required. Now I just need to fool Alexa herself...
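
For anyone curious what “programming the actual Alexa” involves, here is a rough sketch of the kind of skill code that sits behind a prototype like this. It’s a simplified illustration using Amazon’s Alexa Skills Kit SDK for Python; the intent name and the wording of the responses are hypothetical, not our actual build:

    from ask_sdk_core.skill_builder import SkillBuilder
    from ask_sdk_core.dispatch_components import AbstractRequestHandler
    from ask_sdk_core.utils import is_intent_name


    class OrderRepeatPrescriptionHandler(AbstractRequestHandler):
        # Handles a hypothetical "order my repeat prescription" intent
        def can_handle(self, handler_input):
            return is_intent_name("OrderRepeatPrescriptionIntent")(handler_input)

        def handle(self, handler_input):
            speech = ("OK. I have sent your repeat prescription request "
                      "to your GP surgery.")
            return handler_input.response_builder.speak(speech).response


    class FallbackHandler(AbstractRequestHandler):
        # Runs when Alexa cannot match what was said to any intent
        def can_handle(self, handler_input):
            return is_intent_name("AMAZON.FallbackIntent")(handler_input)

        def handle(self, handler_input):
            speech = "I'm sorry, I didn't quite catch that."
            return (handler_input.response_builder
                    .speak(speech)
                    .ask(speech)  # reprompt: keep listening for another go
                    .response)


    sb = SkillBuilder()
    sb.add_request_handler(OrderRepeatPrescriptionHandler())
    sb.add_request_handler(FallbackHandler())
    lambda_handler = sb.lambda_handler()

That fallback handler is essentially the job I had been doing by hand: apologise, then keep the microphone open for another try.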

Kate is Senior UX and Content Consultant at cxpartners. Working mainly on government and charity accounts, Kate is an expert in designing and advising on accessible, inclusive information for different audiences.