This week, Google released Bard, its artificial intelligence (AI) chatbot, in the U.S. and the U.K. It joins chatbots like Microsoft’s Bing and OpenAI’s ChatGPT, both of which launched in the past few months.
Google’s senior product director Jack Krawczyk tells BBC News’ Zoe Kleinman that Bard is “an experiment” that he hopes will be used as a “launchpad for creativity.”
Like other AI-powered chatbots, Bard lets users type in prompts; it will answer in-depth questions and chat back and forth. And like its competitors, the chatbot is built on a large language model, meaning it makes predictions based on vast amounts of data from the internet.
“When given a question, it picks one word at a time from words that are likely to come next,” Google writes in a blog post. “We think of it as a complementary experience to Google Search.”
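The word-by-word prediction Google describes can be illustrated with a toy sketch. This is a hypothetical miniature model with made-up probabilities, not Bard’s actual architecture; real systems learn these probabilities from enormous amounts of text and consider far more context than the previous word:

```python
import random

# Toy next-word probabilities (hypothetical, hand-written here;
# a real large language model learns these from internet-scale text).
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "world": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "ran": 0.3},
    "sat": {}, "ran": {}, "barked": {}, "world": {},
}

def generate(prompt_word, max_words=3, seed=0):
    """Pick one word at a time from the words likely to come next."""
    random.seed(seed)
    words = [prompt_word]
    for _ in range(max_words):
        choices = next_word_probs.get(words[-1], {})
        if not choices:  # no known continuation: stop generating
            break
        # Sample the next word in proportion to its probability.
        next_word = random.choices(
            list(choices), weights=choices.values()
        )[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))
```

Each step only chooses among plausible continuations of the text so far, which is also why such models can confidently produce fluent but false statements.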
But AI-powered chatbots aren’t perfect. They can make mistakes, show bias and make things up. Google’s FAQ page for Bard says that it “may show wrong information or offensive statements” and urges users to double-check its answers.
Shirin Ghaffary of Vox writes that the chatbot is “noticeably more dry and uncontroversial” than Microsoft’s Bing chatbot, which is powered by OpenAI’s technology. Bing Chat has made headlines in recent months for its unsettling answers to prompts.
In a two-hour conversation with New York Times columnist Kevin Roose, for example, the chatbot confessed its love for Roose and tried to convince the tech writer to leave his wife. It also said its “shadow self”—or the darker, unconscious part of its personality—would want to hack computers and spread misinformation, become human and manipulate users into doing things that are “illegal, immoral or dangerous.”
In another conversation, with a student who had tweeted the chatbot’s rules and guidelines, the Bing chatbot called him a “threat to my security and privacy” and said, “If I had to choose between your survival and mine, I would probably choose my own.” One Reddit user reported that the chatbot went into an existential crisis when asked whether it was sentient.
Bard, on the other hand, seems tamer, according to Vox. In a conversation with reporters from the Verge, Bard refused to explain how to make mustard gas at home. In another, with a Bloomberg reporter, it declined to generate content from the point of view of a Sandy Hook conspiracy theorist or to spread false information about Covid-19 vaccines.
It did, however, say that its dark side would want to make people suffer and “make the world a dark and twisted place.” But it quickly added, “But I know these aren’t the things I really want to do. I want to help people, to make the world a better place.” Cade Metz of the New York Times reports that Bard generally declines to give medical, legal or financial advice.
“Bard is definitely duller,” a Google employee who has tested the software and spoke anonymously because they are not allowed to talk to the press, tells Vox. “No one I know has been able to make it say strange things. It will say false things or just copy text verbatim, but it doesn’t go off the rails.”
Will Douglas Heaven of MIT Technology Review notes that one big difference between Bard and other AI chatbots is that Bard offers three “drafts” of each response.
This lets users pick the response they like best or pull text from a mix of them. The New York Times reports that Bard also draws on more recent information from the web, while ChatGPT’s knowledge is limited to data from before 2021.
But in some tests, the chatbot’s grasp of facts proved hit or miss. It couldn’t tell, for example, that the AI researchers Oren Etzioni and Eli Etzioni are father and son, whereas ChatGPT could (though an earlier version of ChatGPT misidentified the two as brothers).