Generative artificial intelligence, the type of software that powers OpenAI’s ChatGPT, Microsoft’s (MSFT) Bing, and Google’s (GOOG, GOOGL) Bard, is all the rage. But the proliferation of generative A.I., so-called because it develops “new” material based on web data, is increasingly scrutinised by consumers and professionals.
Concerns that the software could help students cheat on exams, and that it offers false or odd responses to user queries, have raised doubts about the accuracy and capabilities of the platforms. Some are now questioning whether the products were released too soon for their own good.
“The genie is out of the bottle. The question now is how you intend to regulate them,” Rajesh Kandaswamy, senior vice president and fellow at research firm Gartner, told Yahoo Finance.
Microsoft has placed additional limits on Bing, which is still in restricted preview, capping the number of queries users may submit per session and per day in an effort to address the chatbot’s odd responses. The theory is that the fewer requests users send the bot in a single conversation, the less likely it is to go off the rails. Google, meanwhile, is still testing its Bard software with a limited number of trusted users.
“I believe that what it means for individuals to adopt [these technologies] or not differs considerably,” Kandaswamy added. “[The pushback] will contribute to the notion that artificial intelligence is frightening and unreliable.”
Don’t expect the objections to extinguish the enthusiasm surrounding generative A.I. anytime soon, though. If anything, it is just getting started.
Generative A.I. is winning over users
What makes generative A.I. platforms such as ChatGPT and Bing so intriguing is their human-like responses to user queries. Ask Bing if the Mets will win the 2023 World Series, and it will provide the Mets’ World Series odds, the other clubs competing for the title, and the Mets’ key offseason acquisitions.
These types of responses are what make chatbots useful. It’s especially jarring, then, when they’re incorrect or off-base, such as when Bing told one user that it can snoop on Microsoft employees’ webcams (it can’t).
“This was inevitable given that we’re still a long way off from having a completely human-like A.I. system,” said Yoon Kim, a professor of electrical engineering and computer science at the Massachusetts Institute of Technology. “Thus, these systems will inevitably have restrictions. And these restrictions will become apparent.”
OpenAI, Microsoft, and Google will not cease their efforts simply because these constraints are attracting more attention.
“There will be others who look at this raw power and attempt to harness it. This is always the case,” stated Kandaswamy. “The technology will continue to evolve prior to widespread adoption by the general populace. It will then reach a point where it is safe for widespread adoption.”
Already, ChatGPT and Bing attract millions of users. ChatGPT was released to the public in December and currently has 100 million users, having grown faster than the short-form video app TikTok. Bing? One million people from 169 countries have already registered for Microsoft’s preview. In addition, the company said on Wednesday that it is releasing mobile versions of its chatbot for both Android and iOS.
The criticism will keep coming
Yet as more users sign up for these services and bombard them with ever more queries, we will almost certainly continue to see wacky, erroneous responses. That will result in more criticism.
Douglass Rushkoff, author and professor of media studies at the City University of New York’s Queens College, remarked, “This is what happens when you employ experimental technology for something important.”
“Most A.I.s are essentially probability engines, attempting to recreate the most likely accurate outcomes based on past events. They do not account for many other factors, such as facts or copyright. Hence, the issue is not the A.I. itself, but how it is being deployed. Not every technology can be used for every purpose.”
Concerns about generative A.I. extend beyond its accuracy and its creepy or sarcastic replies. Bing, ChatGPT, and Bard also face questions over the content used to train them. After all, these generative algorithms utilise information extracted from the internet, some of which includes news articles and social media posts.
If A.I. platforms are aggregating news organisations’ information and summarising it for readers, fewer people will visit those organisations’ websites, reducing their ad revenue. And do you have recourse if you are an independent artist and a chatbot is trained on your work? That is currently unclear.
Microsoft, its subsidiary GitHub, and OpenAI are already facing lawsuits over their use of third-party computer code, while media outlets such as Bloomberg and The Wall Street Journal have criticised chatbot developers for using their work, without compensation, to train models.
Kandaswamy stated, “I do not expect these challenges to be resolved in the next month or two.”