This wasn’t supposed to be an op-ed, you know. But I ended a recent interview with an AI startup with a certain sense of unease. Why? Because the founder was earnestly describing to me an AI use case that I had watched go horribly awry in an old episode of "Black Mirror" the night before.
But let’s back up. I was chasing a story about on-device AI: what it requires, what the use cases are and what – if anything – the shift to the device edge means for telcos. In this case, the edge devices in question can be as small as a sensor or as large as a private server.
Yubei Chen, an assistant professor at the University of California, Davis, and co-founder of AIZip, was explaining that to make on-device AI possible, models need to learn faster (learning efficiency), be smaller (model efficiency) and use less power (power efficiency).
Theoretically, this should be possible, he said. After all, the human brain takes in less than 1 billion tokens' worth of data over the course of a lifetime and uses only about 20 watts of power. The trick is closing the gap between human processing and AI.
Already, we’ve seen advances in learning efficiency. Using techniques like retrieval-augmented generation (RAG), AI companies are aiming to increase learning efficiency by feeding AI the right data at the right time. And on the size front, everyone from IBM and Google to OpenAI and AIZip itself has been iterating on mini models that can bring key generative AI capabilities to more devices.
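For readers curious what "feeding AI the right data at the right time" actually looks like, here is a minimal, illustrative sketch of the retrieval step at the heart of RAG. To keep it self-contained, it uses a toy bag-of-words embedding and made-up documents; real systems use learned embeddings and a vector database, and none of the names below come from Chen or AIZip.

```python
# Minimal RAG retrieval sketch: illustrative only, not any vendor's implementation.
# A toy bag-of-words vector stands in for a learned embedding model.
from collections import Counter
import math

documents = [  # hypothetical knowledge snippets an on-device assistant might hold
    "Mini models bring generative AI capabilities to small devices.",
    "RAG feeds a model the right data at the right time.",
    "On-device AI needs learning, model and power efficiency.",
]

def embed(text: str) -> Counter:
    """Toy embedding: lowercase bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k stored documents most similar to the query."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

query = "Why does on-device AI care about efficiency?"
context = retrieve(query)
# The retrieved context is prepended to the prompt before it reaches the model,
# so even a small model answers grounded in the right data at the right time.
prompt = f"Context: {' '.join(context)}\n\nQuestion: {query}"
print(prompt)
```

The point of the sketch is the division of labor: the model itself can stay small because the retrieval step, not the model's weights, supplies the task-specific knowledge.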
Qualcomm has been on about this for a few years as well.
Red flags and "Black Mirror"
It was all sounding really cool. And I’ll be honest: I was enthusiastically nodding along. At least until Chen began discussing why we should even care about on-device AI in the first place.
Asked what more people should be thinking about, Chen pointed out that one end game of all this development could be AI that’s customized down to the individual level.
“At the end of the day, what truly matters is the AI model for you. It can serve 500 million people all at 90% [accuracy], but what if I tell you I can give you a much smaller model that can serve you at 98% accuracy?” he said.
Beyond providing personalized assistance with everyday tasks, Chen said AI could eventually become a tool to help with memory as well.
“Let’s say we are successful and debut AI models for everyone. That is not only a model to serve the people but also is their memory - how I talk and the way I interact with the world,” he said. In the past there were photos, but those are static, dead captures of a moment in time that contain only partial information. AI-enhanced memories, on the other hand, could be used to reconstruct a deceased person’s voice to deliver a speech after their passing, for instance.
And if the AI model could somehow record your life? Well, then that might be a way to help augment memory more broadly.
To be clear, Chen was coming at this question with good intentions. The example he gave for the use case was helping an elderly family member with dementia remember people and events.
Any other time I might have been on board with the idea. But I had just watched the "Black Mirror" episode “The Entire History of You” the night before. And woof. If you know, you know.
For those who don’t, "Black Mirror" is basically a modern-day, tech-focused "Twilight Zone" that offers a glimpse of the potential dark sides innovation can bring. And the premise of the episode in question is exactly what Chen described – AI captures and catalogues video memories – but it all goes horribly wrong for one couple. (Warning: it's intense and not an episode to watch at work.)
Questions around AI responsibility
With that fresh in my mind, it was hard to be enthusiastic about what I was hearing. So, I pushed back.
"Won’t custom AI bubbles exacerbate the divides which have already been caused by living in social media echo chambers? Have you ever thought about how someone could abuse access to video memories given the chance? And – perhaps most importantly – have you ever seen "Black Mirror"?"
Chen's answer to the last was no. And judging from other conversations I’ve had with AI players, plenty of others haven’t either.
It is heartwarming that so many innovators and founders want to do truly good things with AI. But it is also their responsibility to think about how it can be misused.
Granted, "Black Mirror" offers a bit of a hyperbolic perspective in terms of what could go wrong, but watching it is a good thought exercise.
Because tech is my job, I can only stomach one episode at a time, but I’m starting to think more of us working in this space should be forced to sit with these hard questions.
After all, aren’t there some things we all would rather NOT remember?
Op-eds from industry experts, analysts or our editorial staff are opinion pieces that do not represent the opinions of Fierce Network.