- Training an LLM to support a twisted agenda is straightforward, and the consequences can be fatal for humans
- UnitedHealthcare offers a prime example of how this can play out in the real world
- Regulatory oversight reduces the risk of AI malfeasance, yet Big Tech is stymieing the necessary regulations
2024 was the year of artificial intelligence, and vendors were eager to jump aboard the AI money train. But not all the AI news was positive. In fact, it’s clear some companies are exploiting artificial intelligence in unethical ways.
The murder of UnitedHealthcare’s CEO in New York shed light on the company’s use of AI to deny patient claims. News outlets have focused on allegations that the company used a faulty AI tool, nH Predict, with a 90% error rate, to deny patients care.
But that presumption contains a giant logic bomb.
The word ‘faulty’ implies that the AI program is defective or broken. If that were the case, its errors would have cut both ways, and at least some of its decisions would have favored claimants. However, the data shows that didn’t happen. In fact, nH Predict enabled UnitedHealthcare to reject nearly a third of claims in 2023, double the industry average, making it the number one insurer in denying coverage.
Far from being defective, nH Predict proved, from UnitedHealthcare’s perspective, to be hugely effective wonder-code that saved the company tens of billions of dollars.
So, is it more likely that the software performed exactly as intended or that it was somehow broken? Give me a break. Seriously.
It’s time our industry and the media that covers it started taking a far more jaundiced view of artificial intelligence. AI programs are basically idiot savants. They can do lots of really hard, clever things really fast, but they’re also innocents—Big Tech’s Babes in the Woods.
They say, ‘You are what you eat,’ and it’s the same with AI: it takes on the characteristics of the data you feed it. It’s disturbingly easy for a competent AI whisperer to get an LLM to support whatever agenda they prefer, whether that’s pumping up profits by denying palliative care or killing civilians in the Middle East.
Load in some skewed data, skip the ethics audit, drop some hints that four legs are good and two legs are bad, and – et voilà – welcome to Orwell’s Animal Farm.
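To make that point concrete, here is a minimal, entirely hypothetical sketch in Python: synthetic claims data and an off-the-shelf scikit-learn classifier, with nothing here reflecting nH Predict or any real insurer’s system. The same algorithm is trained twice; the only thing that changes is the labels it is fed.

```python
# A minimal sketch with made-up data: train the same classifier on honest
# labels and on labels skewed toward denial, then compare denial rates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
# Two synthetic features per claim: patient age and projected cost of care.
X = np.column_stack([rng.uniform(20, 90, n), rng.uniform(1, 50, n)])

# Honest labels: deny (1) only the ~10% of claims that fail a cost review.
honest = (X[:, 1] > 45).astype(int)

# Skewed labels: additionally deny 90% of merely expensive claims,
# valid or not. This is the "skewed data" step.
skewed = (honest.astype(bool) | ((X[:, 1] > 35) & (rng.random(n) < 0.9))).astype(int)

for name, y in [("honest labels", honest), ("skewed labels", skewed)]:
    model = LogisticRegression(max_iter=1_000).fit(X, y)
    print(f"{name}: model denies {model.predict(X).mean():.0%} of claims")
```

With this toy setup, the honestly trained model denies roughly 10% of claims while the skewed one denies about a third. Same algorithm, same features, different diet.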
The time to address AI malfeasance is now. As Industry 4.0 spreads to every sector in the world – transport, nuclear energy, the power grid – the opportunities for companies to abuse it for profit and power are virtually limitless.
Strong regulatory oversight is the antidote to AI abuse, but U.S. Big Tech vendors and tame politicians are doing their best to eliminate it altogether, employing armies of lawyers and lobbyists to fight any legislation or regulation that might infringe on their ability to do whatever they want with AI, whenever and however they please. These companies spend an average of 15% of their revenues on such activities (not millions, but billions of dollars), significantly more than they spend on customer service, a useful indicator of where their priorities lie.
How real is the danger posed by AI malfeasance? I leave you with former Google CEO Eric Schmidt explaining how he legally registered as an arms dealer to develop flying AI murderbots (note the toadying interviewer laughing along in the background because, hilarious, right?).
When Schmidt, like Palantir’s Alex Karp, is comfortable publicly humblebragging about using AI to kill people, you have to wonder what other tech bros are cooking up under the cover of their limitless wealth and sickening entitlement.
Op-eds from industry experts, analysts or our editorial staff are opinion pieces that do not represent the opinions of Fierce Network.