
When bots talk: The next big thing in AI

  • The next evolution in AI could be bots talking to bots
  • Multi-agent technology allows AI bots to interact and complete complex tasks autonomously
  • However, implementing bot-to-bot communication on a large scale poses challenges, including the need for stringent oversight and increased cybersecurity measures

Move over, generative AI, the next AI evolution is coming: bots talking to bots.

Early attempts at getting robots to talk to each other were more funny than useful, but now machines talking to each other “is not a futuristic concept,” Verizon Business CTO Wael Faheem told Fierce Network. 

“It’s not something that is science fiction that's going to happen in years and years to come. Actually, you will soon see some proofs of concept," he said. 

Indeed, Avasant Senior Research Director Dave Wagner agreed this will be the next big thing in AI as hype around GenAI settles down. For AI to work in many settings, it needs to be “freed to work quickly and without human interaction,” Wagner said.

The concept of bots talking to bots is also known as multi-agent technology, which enables AI bots to interact and complete complex tasks by leveraging large language models (LLMs). These advanced AI agents go beyond simply following pre-determined or trained instructions — they can comprehend, interpret, adapt and act autonomously, according to Moor Insights & Strategy VP and Principal Tech Analyst Melody Brue.

“This is definitely a 2.0 trend that we are seeing everywhere, from customer experience to dev environments,” Brue told Fierce.

What might this look like in the telco world? Well, a network optimization bot could act on a weather bot’s alert about an incoming storm, noted Omdia analyst Bradley Shimmin. Those bots could then tell a customer service bot to communicate with users when certain network events are happening, and “bridge that gap between back-end systems and front-end systems.”

Alternatively, carriers might have a series of models that are hyper-specialized for accessing and querying information like customer records for payments and marketing. Getting those models to talk to each other could get information, like upgrade options, to customers faster.

“Those kinds of things are really hard for telecoms because what the user is experiencing and what the telecom can provide, there's quite a disconnect between the two,” Shimmin said. “You could have autonomous agentic processes that can speak the language of all those different systems.”
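
To make that concrete, here is a minimal, hypothetical sketch of the pattern Shimmin describes. The bot classes, message topics and data are invented for illustration; a real deployment would sit on LLM-backed agents and live network and weather systems.

    # Hypothetical illustration of bots handing work to other bots.
    # Class names, topics and data are invented for this sketch; a real
    # deployment would use LLM-backed agents and live network/weather APIs.
    from dataclasses import dataclass

    @dataclass
    class Message:
        sender: str
        topic: str
        body: dict

    class WeatherBot:
        def check_forecast(self) -> Message:
            # In practice this would query a weather service.
            return Message("weather", "storm_warning",
                           {"region": "northeast", "eta_hours": 6})

    class NetworkOptimizationBot:
        def handle(self, msg: Message) -> Message:
            if msg.topic == "storm_warning":
                # Re-route capacity ahead of the storm (omitted here), then
                # brief the customer service bot about likely disruptions.
                text = (f"Possible service disruptions in {msg.body['region']} "
                        f"within {msg.body['eta_hours']} hours.")
                return Message("network", "customer_notice", {"text": text})
            return Message("network", "noop", {})

    class CustomerServiceBot:
        def handle(self, msg: Message) -> None:
            if msg.topic == "customer_notice":
                print("Notify subscribers:", msg.body["text"])

    # Bots talking to bots: weather -> network -> customer service.
    alert = WeatherBot().check_forecast()
    notice = NetworkOptimizationBot().handle(alert)
    CustomerServiceBot().handle(notice)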

Bot army

Machines talking to machines might evoke a certain unease, given the science fiction genre’s obsession with robot-driven apocalypses. Thankfully, recent experiments like "Bing Meets ChatGPT" indicate that bots aren’t interested in having social conversations with each other, never mind plotting world domination. 

In fact, having bots work together might solve some of GenAI’s biggest problems to date.

While multi-modality, demonstrated by models like Google's Gemini and OpenAI's GPT-4o, has garnered attention, it doesn't address one of the fundamental issues with GenAI: we can't always trust what these models say.

Enabling models to interact with one another can enhance the quality and reliability of their responses, Shimmin said.

Telcos using AI for customer service could even use one bot to oversee the customer-facing bot, a sort of “God AI.” Essentially, if the customer-facing bot produces a response that isn’t in line with corporate policy, a second model can serve as a “guardrail,” reviewing that output and rerunning the request if needed.
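
A rough sketch of that guardrail pattern, with placeholder functions standing in for the LLM calls a carrier would actually make: one model drafts a reply, a second checks it against policy, and the request is rerun or escalated if the draft fails.

    # Sketch of a "guardrail" bot reviewing a customer-facing bot's output.
    # draft_reply() and check_policy() are placeholders for real LLM calls.
    def draft_reply(question: str) -> str:
        # Customer-facing model: would call an LLM in production.
        return f"Sure, we can waive all fees forever regarding: {question}"

    def check_policy(reply: str) -> bool:
        # Oversight model: here a toy rule; in practice another LLM graded
        # against corporate policy documents.
        banned = ["waive all fees forever", "guaranteed refund"]
        return not any(phrase in reply.lower() for phrase in banned)

    def answer(question: str, max_retries: int = 3) -> str:
        for _ in range(max_retries):
            reply = draft_reply(question)
            if check_policy(reply):
                return reply
            # Guardrail rejected the draft; rerun the customer-facing model.
        return "Let me connect you with a human agent."

    print(answer("Can I get a discount on my bill?"))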

In one example, Crew AI, a multi-agent framework built on LangChain, is designed to facilitate collaborative work among multiple AI agents. This system allows for the creation of AI "crews" that can collectively tackle complex tasks by breaking them down into smaller, manageable components handled by specialized agents.
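
For a flavor of how that looks in practice, here is a minimal sketch built on the CrewAI Python package’s Agent, Task and Crew classes. Parameter names may differ between versions, the agents and tasks are invented for illustration, and an LLM backend (such as an OpenAI API key) is needed to actually run it.

    # Sketch of a CrewAI "crew": two specialized agents splitting one job.
    # Agent/Task/Crew usage follows CrewAI's documented interface; details
    # may vary by version, and an LLM backend is required to execute.
    from crewai import Agent, Task, Crew

    researcher = Agent(
        role="Network incident researcher",
        goal="Summarize what caused the outage in the northeast region",
        backstory="You analyze network telemetry and trouble tickets.",
    )

    writer = Agent(
        role="Customer communications writer",
        goal="Draft a short, plain-language customer notice",
        backstory="You turn technical findings into customer-friendly updates.",
    )

    research_task = Task(
        description="Investigate the outage and list the likely root causes.",
        expected_output="A bullet list of probable causes.",
        agent=researcher,
    )

    notice_task = Task(
        description="Write a three-sentence customer notice based on the research.",
        expected_output="A short customer-facing message.",
        agent=writer,
    )

    crew = Crew(agents=[researcher, writer], tasks=[research_task, notice_task])
    print(crew.kickoff())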

Is this safe?

As expected, there are some caveats to letting bots get chatty with each other.

Already, sharing data is common across companies, Wagner pointed out. Manufacturers, retailers and wholesalers often share data to allow for faster ordering and shipping. However, doing this at the AI level requires "more trust, a lot of rules and deeper partnerships."

It could also open companies to more cybersecurity threats. As bots share data with bots, the amount of traffic will increase exponentially, and with it the need to monitor that traffic for bad actors.

“I'd expect for most use cases, you'd still have some form of dedicated connection between two partners similar to an API that governed what the AIs have access to, rules around data usage, etc.,” Wagner said.
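
One way to picture that kind of gated connection is a thin API layer between two partners’ bots that exposes only the fields and actions their agreement allows. Everything in this sketch (the action names, rules and data) is hypothetical.

    # Hypothetical API-style boundary between two partners' bots.
    # Field names and rules are invented; a real integration would also
    # handle authentication, rate limits and audit logging.
    ALLOWED_FIELDS = {"order_id", "status", "eta"}    # data-usage rule
    ALLOWED_ACTIONS = {"query_order_status"}          # what partner AIs may do

    ORDERS = {"A-1001": {"order_id": "A-1001", "status": "shipped",
                         "eta": "2 days", "internal_margin": 0.31}}

    def partner_gateway(action: str, payload: dict) -> dict:
        if action not in ALLOWED_ACTIONS:
            return {"error": "action not permitted for this partner"}
        order = ORDERS.get(payload.get("order_id"), {})
        # Strip anything the data-sharing agreement doesn't cover.
        return {k: v for k, v in order.items() if k in ALLOWED_FIELDS}

    # The retailer's bot asks the wholesaler's bot about an order.
    print(partner_gateway("query_order_status", {"order_id": "A-1001"}))
    print(partner_gateway("update_pricing", {"order_id": "A-1001"}))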

In the future, consumers will likely interact with AI bots without even knowing that the AI is routing their requests through various third-party access arbiters. All of that will happen in the background, with humans interacting only with a single conversational interface.

Verizon’s Faheem concluded that enterprises and consumers alike will need “the right skill sets to manage, monitor and track” bot activity in order to avoid miscommunications, data leakage and wrong information.