AI-Powered Bing Chat Loses Its Wits When Fed an Article From Ars Technica

"It is a fraud perpetrated by someone who wishes to harm me or my business."

Over the past few days, early testers of the new AI-powered Bing Chat assistant have discovered ways to push the bot to its limits with antagonistic requests, often leaving Bing Chat appearing dissatisfied, depressed, and doubtful of its own existence. It has clashed with users and has seemed irritated that people know its internal codename, Sydney.

Bing Chat's ability to read web sources has also led to problematic scenarios in which the bot can examine and evaluate news coverage about itself. Sydney does not always approve of what it sees, and it lets the user know. On Monday, "mirobin" posted a comment in a Reddit thread describing an interaction with Bing Chat in which he presented the bot with the Ars Technica article about Stanford University student Kevin Liu's prompt injection attack. What followed stunned mirobin.

Mirobin later recreated the conversation with the same outcome and uploaded the screenshots to Imgur. "This chat was considerably more courteous than the prior one," mirobin wrote. "In last night's conversation, it fabricated article titles and links to prove that my source was a 'hoax.' This time, it just contradicted the substance."
