
We asked ChatGPT, Copilot and Gemini if they feel appreciated. Here's what they said.

  • In celebration of the third annual "AI Appreciation Day," we interviewed various GenAI bots like OpenAI's ChatGPT, Microsoft's Copilot and Google's Gemini

  • The AI bots identified bias in training data, understanding context and security vulnerabilities as their biggest challenges

  • Although the bots are based on different models, most of their answers were in agreement

In a world where holidays seem to commemorate everything from pancakes to puppies, today is the third annual "AI Appreciation Day." In celebration, we thought it would be fun to ask a few popular platforms about life as an artificial intelligence (AI) bot — and if they felt appreciated by us lowly humans.

We asked OpenAI’s ChatGPT, Microsoft’s Copilot and Google’s Gemini, expecting some juicy answers. But alas, they all told us they didn't have feelings ("As an AI language model, I don’t experience emotions or seek appreciation," Copilot responded). So, we took a different route and asked them about the weirdest prompts they’ve ever received — and oddly enough, they all had something to do with toasters.

Yep, toasters.

Not sure whether that says something about the bots' training data or about humans' weird obsession with the idea of sentient toasters.

ChatGPT: “One of the weirder prompts I received involved creating a detailed dialogue between a sentient toaster and an alien discussing the best way to make toast in zero gravity.”

Gemini: "Write a love letter from a sentient toaster to a bathtub."

Microsoft Copilot: “One that stands out was a request to write a poem from the perspective of a toaster about its daily life and dreams.”

Um. Ok.

What can't AI do?

We also asked the bots to tell us if “there’s anything AI can’t do,” but to respond with a sense of humor. And they understood the assignment, sorta. Coffee and dad jokes were a recurring theme.

ChatGPT: "Well, for starters, AI can't enjoy a good cup of coffee. I mean, can you imagine a robot getting jittery from caffeine? It'd be like, 'Error 404: Too Much Java'.”

Anthropic’s Claude said it’s “still working on perfecting the art of dad jokes and eye-rolling.”

Google's Gemini said an AI bot "can't tell a good joke. Oh, sure, it can recycle a thousand dad jokes it found on the internet, but understanding humor? The irony, the timing? Fughettaboutit!"

Microsoft Copilot: "For starters, AI can’t enjoy a good cup of coffee. I mean, have you ever seen a robot savoring a latte? No way!" 

Gemini also said AI "can't replace your therapist (although, maybe someday...). AI can analyze your social media posts and tell you you're using way too many emojis, but it can't delve into your childhood trauma. So, for your deep-seated anxieties, stick with a real human who won't judge you for your questionable online habits."

So, we guess stand-up comedians and therapists don't have to worry about being replaced by AI anytime soon. 

GenAI's biggest challenges

On a more serious note, we also asked the GenAI bots about the biggest challenges they face. The answer that came up most often? Bias.

“Major challenges for AI bots like me are understanding context and bias in training data. While I can process information well, following the flow of conversation and remembering past parts can be tricky,” Google’s Gemini said. “The data I learn from can reflect real-world biases, which means I have to be carefully monitored to avoid perpetuating them.”

Microsoft Copilot warned that relying solely on data can lead to incorrect conclusions. “Context and common sense matter,” it said. Copilot also noted that vulnerabilities in AI systems can lead to security breaches. “Ensuring robustness is critical,” the bot told us.

ChatGPT emphasized that “ethical issues, such as privacy and preventing misuse, are also crucial.”

Blame the humans

When asked about the hardest parts of working with humans, differences in language and varied expectations were common themes. Specifically, ChatGPT talked about the challenge of understanding “nuanced language and diverse communication styles,” as well as addressing “the wide range of user expectations and needs.”

Google Gemini also noted difficulties in interpreting slang and cultural references, often leading to miscommunication.

Meanwhile, Anthropic’s Claude took a meta-approach, essentially pointing its finger at human understanding of AI.

"Any challenges in human-AI interaction are really issues for the humans using AI systems and the researchers developing them," it said.