AI needs to perform differently for different industries

  • It's easy to think of AI as a monolithic technology, but different industries need it to function in different ways
  • In financial services, repeatable, traceable results are the name of the game
  • In healthcare, nuance is more helpful in accounting for patient differences

AWS RE:INVENT, LAS VEGAS: It seems like it would always be a good thing for an artificial intelligence (AI) model to spit out the same output for a given set of inputs. But that's not what every enterprise wants. It turns out that different industries don't just need specialist models trained on their data; they also need those models to behave in fundamentally different ways.

Two conversations at AWS re:Invent perfectly illustrate the range of demands AWS and others are trying to field when it comes to AI and model performance.

On one hand, there is the no-nonsense crowd, a group that includes financial services titans like J.P. Morgan. The key for AI here is repeatable results, Jack Gibson, head of payments engineering, architecture and APIs for J.P. Morgan Payments, told Fierce Network.

According to Gibson, AI does have utility today for things like making predictions and helping customers improve the way they transact. But he added the company isn't willing to let AI fully take over in the customer service realm until it can deliver consistent, repeatable results. Why? Because when you're dealing with money, there's no margin for error.

“I think we’re in a situation now where I could give an LLM a question and I’ll get a different response from the same question. I won’t always get the same answer,” he said. “That’s the challenge. When it becomes repeatable, with the same data set and you expect the same outcome, and we do that at scale, that’s when this stuff starts making its way into regulated markets.”
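The variability Gibson describes largely comes from how LLMs decode output: they sample from a probability distribution over candidate tokens rather than always picking the most likely one. A minimal, self-contained sketch (toy logits and a toy sampler, not any real model API) shows why greedy decoding is repeatable while temperature sampling is not:

```python
import math
import random

def sample(logits, temperature, rng):
    """Pick a token index from raw logits.
    Temperature 0 means greedy decoding (always the argmax)."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    weights = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=weights)[0]

# Toy scores for three candidate next tokens, with two near-tied leaders
logits = [2.0, 1.9, 0.5]

# Greedy decoding: the same answer on every run, regardless of seed
greedy = {sample(logits, 0, random.Random(i)) for i in range(10)}

# Temperature sampling: answers typically differ across runs
sampled = {sample(logits, 1.0, random.Random(i)) for i in range(10)}
```

Running the model greedily (or pinning the random seed) makes one answer repeatable, but production systems usually sample at a nonzero temperature because it produces more natural text, which is exactly the trade-off regulated industries are wrestling with.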

But, believe it or not, that's not the goal for all enterprises. Some actually want AI that can return different outputs from the same set of inputs.

Katie White, head of engineering at healthcare tech company Inception Health, told Fierce that in the medical field, two people can present with the exact same inputs (i.e., symptoms) and still warrant different diagnoses.

“AI needs to be nuanced enough to address different conditions in different people,” she said. “So, I have asthma and I’m short of breath and I come in, well is it because of asthma or is it because of something else? If it’s just a decision tree, they’re like ‘you have asthma, take your inhaler.’ But if you have AI, if it actually understands the nuance, it can advise your doctor ‘hey this person has asthma, but also ask these three other questions.’”
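The contrast White draws can be sketched in a few lines. This is a purely illustrative toy (the symptom strings, rules, and follow-up questions are invented for the example, not any real clinical logic): a rigid decision tree maps a symptom straight to one answer, while a context-aware layer surfaces the same answer plus the extra questions a clinician should ask.

```python
def decision_tree(symptom, history):
    """Rigid rule: the same symptom always yields the same advice."""
    if symptom == "shortness of breath" and "asthma" in history:
        return ["You have asthma; take your inhaler."]
    return ["See a doctor."]

def nuanced_triage(symptom, history):
    """Context-aware: a known condition explains the symptom, but the
    system also prompts the doctor to rule out other causes."""
    advice = decision_tree(symptom, history)
    if symptom == "shortness of breath" and "asthma" in history:
        advice += [
            "Ask: has the usual inhaler stopped helping?",
            "Ask: any chest pain, fever or leg swelling?",
            "Ask: any recent change in triggers or medication?",
        ]
    return advice
```

The point is not the specific rules but the shape of the output: the decision tree's single fixed answer versus a response that varies with, and expands on, the patient's context.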

There’s still a long road ahead to making this vision a reality and ensuring nuanced AI is also unbiased and equitable.

For starters, Inception CTO Melek Somai said, the data we have today to train medical models "is already biased." Even the definition of pain is race-dependent, he said.

“We are not even 3% to understanding how to do things the right way,” he concluded.