Interaction design in the age of algorithms

As Alan Cooper said:

No matter how cool your user interface is, it would be better if there were less of it.

These are words to live by.

Less interface should mean less work for the user.

And the kinds of interactive things we’re designing now - connected lamps, smartwatch notifications - well, you don’t want them to have expansive user interfaces.

That’s one reason we’re talking about algorithms and artificial intelligence so much. You can use their power to simplify interactions.

But what does that mean for interaction design practice?

The algorithm is where the action is

Take Spotify Discover Weekly. Once a week, Spotify updates you with a couple of dozen tracks that its algorithm thinks you’ll like.

When I talk to people about Spotify, this feature is something they tend to call out as important, delightful and the reason they keep paying their subscription.

But the design? Well it’s a playlist, just like any other. If you’d been the interaction designer on this project, what would your contribution have been?

Maybe this: you take a screenshot of a playlist and say ‘stuff the user wants goes here’.

There’s interaction design happening here - what fills that box is determined by users’ behaviour in subtle and clever ways, and behaviour is the stuff of interaction design. But that’s being done by the engineer who defines the algorithm.

And algorithms are changing more complex interactions.

Take booking a train ticket. Interaction designers at cxpartners have spent a good deal of time over the years trying to figure out how to display the complex, quirky and often nonsensical world of UK train ticket pricing.

But in the future, I expect people will just have a natural language text chat with the train company’s chatbot.

Chat interfaces are familiar and they don’t really need redesigning. So are designers about to be replaced by data scientists?

Here’s what we’ve learned about the future of interaction design.

Designing around algorithms

Imagine we’re designing for a bus company. Our user need is pretty simple. She wants to know: where’s my bus? How can an algorithm help?

Algorithms need data. And if you have unique data, you can build a uniquely valuable service.

We have timetable data: we know where buses are supposed to be and when. But that’s public data. We also have GPS data from the buses. That’s unique data - only the bus company has it. Now we can compare actual journeys with the timetable. We can see when the buses were late. That’s interesting, but not very valuable.

We can add in data about local weather. About traffic congestion. About school holidays. About roadworks. We could find correlations in the past and make predictions about the future.

Get the right data and we could build an app that says ‘your bus will be 10 minutes late’ the day before it leaves the station. That’s valuable.

But algorithms aren’t magic. Someone needs to build them. So if you come up with an idea, you need to know enough about algorithms to have a sensible conversation with an engineer about whether and how it could be built.

Here are the basics of that conversation.

Talking to engineers

You can break down the engineering task like this:

A set of inputs - those are our data layers about the bus timetable, the weather, the school holidays and so on. Then an algorithm. And some outputs - in our case, whether or not the bus is going to be on time.
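
To make that concrete, here’s a minimal Python sketch of the three pieces. The feature names and the toy rule standing in for the model are invented for illustration - a real predictor would be trained, not hand-written.

```python
# Inputs: one row per journey, combining our data layers (invented feature names).
journey = {
    "scheduled_departure": "08:15",
    "rain_mm_last_hour": 4.2,
    "school_holiday": False,
    "roadworks_on_route": True,
}

# Algorithm: a stand-in for the trained model an engineer would actually build.
def predict_delay_minutes(features):
    return 10 if features["roadworks_on_route"] and features["rain_mm_last_hour"] > 2 else 0

# Output: the thing the user actually cares about.
print(predict_delay_minutes(journey))  # 10 (minutes late)
```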

Let’s work back from there.

Outputs

As a designer, you’re going to need to know what kind of output is useful to the user.

Is it enough to predict that the bus will be late? Do you need to say ‘more than five minutes late’? Do you need to give a precise number: ‘eight minutes late’?

The more detailed the output, the more complex the engineering task, so it’s worth knowing before you start. Which is where user research and Wizard of Oz prototyping come in.

Algorithms

So what about the algorithm that got us there? When you're developing the service, you start out with a raw algorithm and you train it to recognise situations and make predictions.

You do that by giving it a sample set of inputs (that’s the weather, the roadworks and so on) and some known outputs (when the bus actually arrived). The engineer adjusts the algorithm to fit inputs and outputs.
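
Here’s a rough sketch of that training step, assuming scikit-learn and a tiny, made-up table of past journeys - just enough to show the shape of it, not a real model.

```python
# Supervised training: sample inputs (data-layer values per past journey)
# and known outputs (whether that journey actually ran late).
from sklearn.ensemble import RandomForestClassifier

X = [
    [0.0, 0, 1],  # rain (mm), roadworks?, school holiday?
    [5.5, 1, 0],
    [1.2, 0, 0],
    [6.0, 1, 1],
]
y = [0, 1, 0, 1]  # 0 = on time, 1 = late

model = RandomForestClassifier(random_state=0)
model.fit(X, y)  # 'adjusting the algorithm to fit inputs and outputs'

print(model.predict([[4.0, 1, 0]]))  # prediction for a new day's conditions
```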

There are many different classes of algorithm. Picking the appropriate one is the engineer’s job.

Inputs

When you’re thinking about data, there are things you need to worry about. Quality, for instance. In our example, our GPS data should be pretty accurate.

But sometimes the training data can be inaccurate or ‘noisy’. If our GPS didn’t work well in some built-up areas, we’d have noisy data. Noisy data can lead to overfitting - your algorithm learning the errors in its training data and making inaccurate predictions as a result.
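
For the curious, here’s a small sketch of how an engineer might spot overfitting, using scikit-learn on deliberately random data: when a model scores far better on its training data than on data it hasn’t seen, it has been learning the noise.

```python
# Overfitting demo: the 'features' here are pure noise, so any pattern the
# model finds in training can't generalise to unseen data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))        # noisy inputs with no real signal
y = rng.integers(0, 2, size=200)     # labels unrelated to the inputs

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = DecisionTreeClassifier().fit(X_train, y_train)

print("train accuracy:", model.score(X_train, y_train))  # close to 1.0
print("test accuracy: ", model.score(X_test, y_test))    # close to 0.5 (chance)
```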

So your engineers will want to know about quality.

Data volume

If your problem is complex, if you rely on lots of different data layers, or if you want precise answers, then you’re going to need more training data. One year’s worth. Two years’ worth. More? It can be hard to find data that goes back a long way.

Engineers will get nervous if you keep trying to add data layers. The more layers you add the greater the volume of data they need.

So don’t just chuck in extra layers. Do you need that layer that tells you school holiday dates? Or will the traffic data give you what you really need?

If you have a sense of what the main drivers are, you’ll get to the optimal solution faster.

How to be wrong

If you don’t have much data, or if the relationship you’re trying to figure out is particularly complex, then you’ll struggle for accuracy. But you can choose between two ways of being wrong.

'High bias' means you’re wrong, but in a predictable way. 'High variance' means that on average you’re right, but any one guess could be wildly off.

If you’re trying to guess how late a bus will be, based on poor data, it’s better to build an algorithm that’s biased towards saying the bus will arrive on time. That way no one misses their bus.
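
As a sketch, that bias can be as simple as only warning the user when the model is very confident the bus will be late - the threshold here is an invented number, not a recommendation.

```python
# Deliberately biased towards 'on time': only predict 'late' when the model
# is quite sure, because a wrong 'late' could make someone miss their bus.
LATE_THRESHOLD = 0.8  # hypothetical cut-off

def label_for_user(prob_late):
    return "late" if prob_late >= LATE_THRESHOLD else "on time"

print(label_for_user(0.55))  # 'on time' - uncertain, so stay conservative
print(label_for_user(0.92))  # 'late'    - confident enough to warn
```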

But it won’t have the spooky accuracy you were hoping for. So let’s assume we’ve got plenty of data.

Complex inputs

If the data in each of those layers is unnecessarily complex, then the algorithm may end up being slow, or unreliable. So rather than throw raw data at the algorithm, it’s a good idea to simplify what’s in each data set.

Do you need to know precise times of rainfall? Or just whether it rained in a particular hour? Or at some point in the morning? Do you need to differentiate between heavy rain and light rain?

That’s going to determine how much information is in your weather data.

Sometimes simpler data can lead to a more accurate output. Like turning up the contrast on a scanned image of text to make it more legible.
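
Here’s a sketch of that kind of simplification - the rainfall categories and the cut-off are made up, but the idea is to collapse a noisy stream of readings into one coarse feature.

```python
# Simplifying a data layer: sixty minute-by-minute rainfall readings
# become a single coarse label for the hour.
def simplify_rainfall(mm_per_minute_readings):
    total_mm = sum(mm_per_minute_readings)
    if total_mm == 0:
        return "dry"
    if total_mm < 2.5:        # arbitrary cut-off for illustration
        return "light rain"
    return "heavy rain"

print(simplify_rainfall([0.0] * 60))               # 'dry'
print(simplify_rainfall([0.1] * 20 + [0.0] * 40))  # 'light rain'
```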

Good enough?

At the end of all of this you should have a trained algorithm that’s delivering the information you want based on the data you have. Now you can set your algorithm on some real data.

Chances are, it still won’t be accurate enough. You can tweak it by running a closed beta or a live service with a feedback loop from users. Your algorithm can learn without the need for supervision from someone feeding it training data.
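
As a sketch, that feedback loop might use an incremental learner - here scikit-learn’s SGDClassifier, with an invented user report standing in for real feedback.

```python
# A feedback loop: train on history, then nudge the model with live reports.
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(random_state=0)

# Initial training on historical journeys (same made-up features as before).
X_history = [[0.0, 0, 1], [5.5, 1, 0], [1.2, 0, 0], [6.0, 1, 1]]
y_history = [0, 1, 0, 1]
model.partial_fit(X_history, y_history, classes=[0, 1])

# Later, in the beta: a user reports that a bus we called 'on time' was
# actually late, and the model updates itself with that one example.
model.partial_fit([[3.8, 0, 0]], [1])
```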

A prediction machine

So look at that: we built a prediction machine. All the way through, there’s a dialogue between designer and engineer about what’s possible and how to present it to the user.

As tools and APIs proliferate, perhaps more designers will be taking on the job of training algorithms in the next few years.

But the real place designers add value is in defining what the outputs should be and how they’re presented to the user. It’s easy to get that wrong.

Good manners

Wrap up your recommendations in an interface that promises human-like interactions but delivers less-than-human manners and abilities - like Microsoft Office’s infamous Clippy - and people will revolt.

If there’s a high chance of error, then a quieter, more humble approach is required.

When iOS Mail sees that I’m writing a message to David and Richard it suggests that I probably want to send it to fellow team members Verity and Paul, too. It’s a decent guess - but it’s subtle and one I can easily ignore.

A lot of next generation interaction design is going to be around the etiquette of suggestions and assistance.

Designing conversations

What about more complex situations like natural language interfaces? Well, you can think of them as collections of algorithms. So the same rules apply.

You need a set of training data - like transcripts of conversations from a customer contact centre.

You’re probably going to need to simplify that data set - look for the successful conversations. Look for the ones that got to success in the fewest possible steps.
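
A sketch of that filtering step, assuming each transcript has been tagged with a turn count and whether the customer’s problem was resolved - both assumptions about the data you’d actually have:

```python
# Keep only short, successful conversations as training material.
transcripts = [
    {"turns": 4, "resolved": True},
    {"turns": 22, "resolved": True},
    {"turns": 6, "resolved": False},
    {"turns": 3, "resolved": True},
]

MAX_TURNS = 8  # hypothetical definition of 'fewest possible steps'
training_set = [t for t in transcripts if t["resolved"] and t["turns"] <= MAX_TURNS]

print(len(training_set))  # 2 - only the short, successful conversations remain
```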

You have to remember that you’ll approximate conversation rather than create something that’s perfect. So you need to flag that to the user.

A friend of mine, Pete Trainor, just launched a chat service for a UK bank that deliberately refers to itself as ‘We’ not ‘I’ - that conversational weirdness reminds the user that they’re talking to a non-human. It’s the ethical thing to do. No one likes to be tricked, right?

But it also reminds users to keep the conversation simple.

A few years ago, the text adventure game ‘Lost Pig’ had the user telling an orc called Grunk what to do to help him find his pig. Because you’re dealing with an orc, you know to keep your language simple and you expect dumb replies.

It’s a cute trick that has personality, humour and a practical engineering purpose.

Back to the beginning

I’ve always looked to human conversation patterns to figure out how to solve interaction design problems. Now, I’m finding that understanding human-to-human conversation is crucial design knowledge.

And what about Discover Weekly?

Well I spoke to Matthew Ogle at Spotify and it turns out that a large part of the design work here was about understanding how to package up the service.

The playlist format was familiar. And limiting the size of the playlist to a couple of dozen tracks was a key insight from user research. It gives the service the feel of a mix tape from a friend, rather than a data dump from an algorithm.

The interaction designers made the service elegant and approachable.

Our core skills are still important. There’s a rich future for interaction design. But we’ll have to evolve our practice and knowledge to stay relevant. That journey is just beginning.

Giles founded cxpartners with Richard Caddick in 2004. He’s the author of ‘Simple and Usable’ and an invited speaker at design conferences around the world.