“Sorry in advance!” Snapchat warns of hallucinations with new AI conversation bot

Colorful rendering of the Snapchat logo.

Benj Edwards / Snap, Inc.

Snapchat on Monday announced an experimental AI-powered chatbot called “My AI,” powered by OpenAI’s GPT chat technology. My AI will be available to Snapchat+ subscribers, who pay $3.99 per month, and launches “this week,” according to a news post from Snap, Inc.

Users will be able to customize the AI bot by giving it a custom name. Conversations with the AI model will take place in an interface similar to a normal chat with a human. “The big idea is that in addition to talking to our friends and family every day, we’ll be talking to AI every day,” Snap CEO Evan Spiegel told The Verge.

But like its GPT-powered cousins ChatGPT and Bing Chat, Snap says My AI is prone to “hallucinations,” which are unexpected falsehoods generated by an AI model. On this point, Snap includes a fairly lengthy disclaimer in its My AI announcement post:

“As with all AI-powered chatbots, My AI is prone to hallucination and can be tricked into saying just about anything. Please be aware of its many deficiencies and sorry in advance! All conversations with My AI will be stored and may be reviewed to improve the product experience. Please do not share any secrets with My AI and do not rely on it for advice.”

Among machine learning researchers, “hallucination” is a term describing when an AI model generates inaccurate conclusions about a topic or situation not covered in its training data set. It’s a known flaw of today’s large language models such as ChatGPT, which can easily make up convincing-sounding falsehoods, such as academic papers that don’t exist and inaccurate biographies.

Despite Snap’s strong disclaimer about My AI’s tendency to make things up, the company says its new Snapchat bot will be pinned above conversations with friends in its own tab in the Snapchat app and will “recommend birthday gift ideas for your BFF, plan a hiking trip for a long weekend, suggest a recipe for dinner, or even write a haiku about cheese for your cheddar-obsessed pal.”

Snap doesn’t reconcile how the same bot that cannot be “rel[ied] upon for advice” can also supposedly plan an accurate and safe hiking trip for a long weekend. Critics of the rapid rollout of generative AI have seized on this kind of dissonance to argue that perhaps these chatbots are not ready for widespread use, especially when presented as a reference.

While people have made something of a game out of trying to circumvent ChatGPT’s and Bing Chat’s restrictions, Snap has reportedly trained its GPT model not to discuss sex, swearing, violence, or political opinions. Those restrictions may be especially necessary to avoid the kind of unhinged behavior we saw from Bing Chat a few weeks ago.

And doubly so, because “My AI” may have something powerful working under the hood: OpenAI’s next-generation large language model. According to The Verge, Snap is using a new OpenAI enterprise plan called “Foundry,” which OpenAI quietly rolled out earlier this month. It gives companies dedicated cloud access to OpenAI’s GPT-3.5 and “DV” models. Some AI experts speculate that “DV” may be equivalent to GPT-4, the rumored powerful follow-up to GPT-3.

In other words, the “hallucinations” mentioned in Snap’s news release may come faster and in more detail than ChatGPT’s. And given the very convincing nature of other GPT models, people may well believe them, caveats notwithstanding. It’s something to watch in the coming months as new GPT-powered commercial services come online.
