Not long after OpenAI released ChatGPT, Google released Bard, its own AI chatbot. Since then, the conversational AI has made some embarrassing mistakes, like when its debut demo gave out wrong information about the James Webb Space Telescope.
Because some Google workers knew how limited the software was, they weren't too happy that the company rushed Bard's release.
A new story from Bloomberg shows how people inside Google felt about Bard before it came out. After reviewing internal documents and talking to 18 current and former employees, the report paints a negative picture of what Bard can do and of how workers feel about the company.
One employee called the chatbot "cringe-worthy." Another wrote, "Bard is worse than useless: please do not launch." When asked about scuba diving, Bard gave answers that "would probably cause serious injury or death," one employee said.
Google chose to launch the AI chatbot anyway, despite this internal criticism. The decision may trace back to the "code red" that management issued after ChatGPT became popular. Employees say the company's leaders believed that if they called new products "experiments," the public might be willing to overlook their flaws.
It seems like Google didn’t care all that much about ethics. The report says that workers who were in charge of making sure that new products were safe and ethical were told “not to get in the way or try to kill any of the generative AI tools that were in development.”
No matter how Google's staff feels about Bard, the company keeps experimenting with AI. Recently, word got out that Google might release a batch of AI Search features before Google I/O.