GOOGLE CLOUD NEXT, LAS VEGAS – It hasn’t even been a year since the last Google Cloud Next event, but you can already hear a shift in the tone of the conversations happening here about generative AI (GenAI).
At last year’s event in August 2023, “trust” was the watchword for enterprises during a customer roundtable on AI. There was widespread concern not only about whether they could trust AI with their enterprise data, but also whether they could trust it with their customers. That is, whether a wrong response from an AI tool could put a customer off their brand forever.
Now, though, the focus has shifted. This year’s conversation centered largely on transparency and governance.
Kalyani Sekar, Verizon’s SVP and chief data officer, said during this year’s customer roundtable that the past year has been one of AI exploration for the telco. And after tinkering in the workshop, it has homed in on developing responsible AI practices as well as refining the quality of the data that goes into training the models it uses. After all, she said, the quality of the outputs is directly proportional to the quality of the inputs.
One way it is pursuing responsible AI is by testing the technology with employee-facing applications first. That way, it can gather constant human feedback on what is and isn’t working and “put the right data controls in place.”
“It’s not about just exploring on one hand what AI can do, because also really we need to work on developing practices and what responsible AI practices we need to put forward for generative AI,” she said.
Phil Davis, Google Cloud's VP of global go-to-market (GTM) for applications, SaaS and SMB, echoed this sentiment in a private meeting with Fierce. "We still hear concerns, but less than a year ago, and these are typically around governance of access and costs and data control and leakage," he said.
Stephan Pretorius, CTO at ad agency WPP, said during the roundtable that for creative applications, there’s a fine line between implementing adequate controls and leaving the AI enough room to imagine.
There are “use cases where you have to be incredibly precise,” he explained, such as having AI generate content or materials for a particular brand. But at the same time, creative use cases demand more freedom.
“A year ago, the entire industry was quite immature in terms of deciding what to do, but I think it’s incredibly rapidly maturing,” Pretorius said, adding that folks these days have a much better idea of what they do and don’t need in terms of controls.
Bayer’s Head of Imaging, Data and AI Guido Mathews added that it’s important these days to develop AI with an eye toward the regulatory environment.
Working and co-creating with regulatory authorities, especially in highly regulated fields like healthcare, is the only way forward, he concluded.