What we can learn from Star Trek about AI and UX
Tea, Earl Grey, Hot

I grew up in the era of Star Trek: The Next Generation, where the Enterprise and its intrepid Captain Jean-Luc Picard - slightly less fruity with the ladies than his predecessor, Kirk - charted a course through strange new worlds and explored the unknown.
Characters came and went, grew as people (or androids), and even had their sight restored by new technologies.
There was one constant, though. One thing that never changed through the entire series. One thing that logically, calmly, helped the crew through thick and thin.
The Computer.
The trusty ship’s computer, voiced by the unmistakable Majel Barrett, basically ran the place. It helped with diagnostics, creating products and food with the replicators, and answering complicated questions from the crew.
While one or two technologies depicted in Star Trek are beginning to come to life in the world around us, most have remained elusive. Tablet computers, for example, have been around for years - though the characters tapping away on black slates in Star Trek always looked so fanciful and sci-fi.
And although AR experiences and spatial computing on Apple’s new Vision Pro are pushing us toward the ‘Holodeck’, we still need to find a way to lose the headset.
But AI has suddenly taken leaps and bounds beyond our expectations and begun to enter territory we previously knew only from fiction.
Inputs differed, as did the outputs
Watching Star Trek back today, it’s interesting to see the different ways the computer responds depending on the context and who’s asking.
Crewmembers would ask it to perform tasks, and it might follow up with a clarification or respond with the information it thought would be useful. It never seemed to give too much information or run on and on, only to be told to shut up (looking at you, Alexa).
It very rarely didn’t know how to respond (or suggested that you would have to unlock your phone to view results, looking at you, Siri). But it sometimes showed results on a screen if that seemed like the most useful output.
It’s also interesting to see the characters’ interactions with the computer. Some were more natural, asking the computer in plain language to accomplish a task or answer a question.
Others would specify the parameters of a query in detail and end their sentence with ‘execute’, even when not needed. But the computer always waited patiently until the crewmember had finished speaking, rather than interrupting while they were mid-thought.

This higher level of understanding seemed like technological magic when ST:TNG was first aired. Nowadays, LLMs like ChatGPT and Perplexity can regularly deal with varied, nuanced, and misspelt inputs, and they can ask clarifying questions and suggest alternatives. The only aspect that appears to be missing is accomplishing tasks - something that Rabbit’s new R1 device aims to solve by allowing the machine to carry out chained commands within our apps on our behalf.
So, now that the science fiction stories of the past are beginning to blend with the scientific reality of today, what can we learn from our imaginings of AI in The Next Generation?
1. Don’t give too much info
When you’re providing an answer, give the person exactly what they asked for. No more, no less. With a conversational AI, people will ask follow-up questions and request clarification if needed.
AI currently struggles with this, but if we’re building services that use intelligent processing, just answer the question. Provide extra detail only when the context calls for it.
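As a rough illustration, here’s a minimal sketch using the OpenAI Python client. The prompt wording and model name are my own placeholders, not a proven recipe - the point is that brevity is a design decision you encode up front:

```python
# A minimal sketch, assuming the OpenAI Python client (pip install openai)
# and an OPENAI_API_KEY in the environment. The system prompt does the UX
# work: answer the question asked, nothing more.
from openai import OpenAI

client = OpenAI()

CONCISE_SYSTEM_PROMPT = (
    "Answer the user's question directly and briefly. "
    "Do not add background, caveats, or follow-up suggestions unless asked. "
    "If the question is ambiguous, ask one short clarifying question "
    "instead of guessing."
)

def answer(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": CONCISE_SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer("When does the next number 12 bus leave?"))
```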
And speaking of context…
2. Consider the context
Think about why, where and when someone is asking a question and provide an answer that is useful to them at that point.
If they’re asking when the buses leave their local bus stop, and they’re at home, it might be because they’re trying to plan a journey in the future. So your response could contain the arrival times and when they would be likely to reach their destination (especially if they have something in the diary or have recently made plans with a friend).
If they’re out at a bus stop, perhaps they’re asking because the bus is late. Check, provide the bus time and the delay, and offer to let the person they’re meeting next know that they might be a little late. This intelligence is now possible, so let’s start using it to solve real problems for people.
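Here’s a hedged sketch of what that context-switching might look like. Every data source in it - get_location, get_departures, get_next_appointment - is a hypothetical stand-in for whatever location, transit, and calendar services you actually have:

```python
# A sketch of context-aware answering. The stubbed data sources below are
# hypothetical; replace them with real location, transit, and calendar APIs.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Departure:
    leaves_at: datetime
    arrives_at: datetime
    delay_minutes: int

@dataclass
class Appointment:
    title: str
    starts_at: datetime

# --- stubbed data sources (hypothetical) ---
def get_location(user: str) -> str:
    return "bus_stop"

def get_departures(stop: str) -> list[Departure]:
    now = datetime.now()
    return [Departure(now + timedelta(minutes=7), now + timedelta(minutes=32), 5)]

def get_next_appointment(user: str) -> Appointment | None:
    return Appointment("coffee with Sam", datetime.now() + timedelta(minutes=20))

# --- the context-sensitive part ---
def answer_bus_query(user: str, stop: str) -> str:
    next_bus = get_departures(stop)[0]

    if get_location(user) == "home":
        # Planning a future journey: lead with the arrival time.
        return (f"The next bus leaves at {next_bus.leaves_at:%H:%M} "
                f"and gets you there at {next_bus.arrives_at:%H:%M}.")

    # At the stop: the real question is probably "where's my bus?"
    reply = f"Your bus is running {next_bus.delay_minutes} minutes late."
    appt = get_next_appointment(user)
    if appt and next_bus.arrives_at > appt.starts_at:
        reply += f" Shall I let your {appt.title} know you may be late?"
    return reply

print(answer_bus_query("alice", "Albert Square"))
```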
3. Emotion is overrated
ChatGPT’s voice interactions feel like they’re essentially trying to emulate human conversation. Differentiating between the outputs of humans and technology is becoming harder by the day.
But do we really want that kind of relationship with our computers? I’m not sure that people want their machines to be emotional at this point. Perhaps a more monotone, factual, straightforward voice is what we actually want.
Robots, speak more robot.
4. Sometimes, friction is good
Whenever the computer is asked to initiate the ship’s self-destruct sequence, it asks the requesting officer for confirmation.
And for other senior officers to confirm the order.
And for them all to provide long access codes.
And then it starts a countdown so the crew can escape.
But if they want to cancel the order, it does so immediately.
Friction can be really useful, as can an ‘undo’. If someone wants to buy something using an AI, make sure they’re certain before you place the order, and get them to confirm. And if they want to cancel it, remember what you last did for them and stop it immediately, no messing.
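Here’s one way that confirm-then-undo pattern might look in code. The Assistant class and the place_order/cancel_order hooks are hypothetical - a sketch of the interaction shape, not a real commerce API:

```python
# A sketch of the confirm-then-undo pattern: committing an action demands
# explicit confirmation, but reversing it needs none. place_order and
# cancel_order are hypothetical stand-ins for a real commerce backend.
from dataclasses import dataclass

def place_order(order: dict) -> str:
    return "order-001"  # stub: would call the real backend

def cancel_order(order_id: str) -> None:
    pass  # stub: would call the real backend

@dataclass
class Assistant:
    pending: dict | None = None
    last_order_id: str | None = None

    def request_purchase(self, item: str, price: float) -> str:
        # Friction on the way in: stash the request and ask before acting.
        self.pending = {"item": item, "price": price}
        return f"That's {item} for £{price:.2f}. Shall I place the order?"

    def confirm(self) -> str:
        if self.pending is None:
            return "There's nothing waiting to be confirmed."
        self.last_order_id = place_order(self.pending)
        self.pending = None
        return "Ordered. Say 'cancel' at any time and I'll stop it."

    def cancel(self) -> str:
        # No friction on the way out: remember the last action, undo at once.
        if self.last_order_id is None:
            return "There's no recent order to cancel."
        cancel_order(self.last_order_id)
        self.last_order_id = None
        return "Cancelled, no questions asked."

assistant = Assistant()
print(assistant.request_purchase("tea, Earl Grey, hot (case of 24)", 18.99))
print(assistant.confirm())
print(assistant.cancel())
```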
So, as we continue to boldly go where no one has gone before, we should remember that the goal of AI isn’t to replace biological humans with synthetic humans and replicate our behaviour and emotions. It’s to create efficient, effective, user-friendly tools that we can use to enhance our lives and alleviate our burdens.
Until, of course, we reach AGI (artificial general intelligence), and that sentence begins to sound (to an intelligent, self-aware being) like slavery.
In which case, forget I said anything.