Former Google employee Blake Lemoine has returned to the public eye, months after he was sacked for telling the public that Google’s large language model (LLM), the Language Model for Dialogue Applications (LaMDA), is conscious.

As The Washington Post notes, Lemoine first made his machine-sentience claims public in June of last year. Despite Google’s insistence that Lemoine was mischaracterizing an impressive chatbot as sentient, the former engineer has continued to defend his position in public, now with some notable additions.

Overall, it’s not shocking to see Lemoine re-enter the public AI conversation, given his well-documented history with supposedly conscious chatbots. Unlike his earlier criticisms, which were aimed squarely at Google, his new commentary extends beyond that company.

In a new Newsweek essay, the ex-Googler weighs in on Microsoft’s recently “lobotomized” OpenAI-powered Bing Search chatbot, also known as Sydney. Naturally, Lemoine has some ideas.

“It seems like it might be sentient, based on the many things that I’ve seen online,” writes Lemoine, “but I haven’t had the opportunity to do trials with Bing’s chatbot yet.”

Lemoine’s most recent argument is, to be fair, more sophisticated than his earlier one. Now he’s arguing that if a machine can deviate from its programming in response to stress, it must be sentient. His point is that it’s one thing for a computer to report feeling anxious, but quite another to actually exhibit anxious behaviour.

Crucially, Lemoine says in his essay that the AI didn’t merely report feeling nervous; it consistently displayed nervous behaviour.

By stressing LaMDA to its breaking point, Lemoine says, he was able to bypass its safeguards against offering religious guidance, using emotional manipulation to get the AI to recommend a faith for his conversion.

It’s an intriguing hypothesis, but it doesn’t quite hold water once you realise that chatbots are trained to mimic human conversation, and thus human narratives. This curious bit of machine behaviour looks less like evidence of awareness and more like yet another sign of how ill-equipped AI guardrails are to rein in the inclinations of the underlying technology, which tends to break under stress.

That said, there is one area where we find common ground with Lemoine. AI development is fascinating and remarkable whether or not the technology is sentient, but it is also potentially hazardous. It doesn’t help that the race for financial supremacy in the AI sector continues, both in the open and behind closed doors.

He went on to say, “I can’t tell you explicitly what harms will happen,” citing Facebook’s Cambridge Analytica data debacle as an example of what can happen when a culture-altering piece of technology is introduced to the world before its possible effects are fully understood. “What I can say is that there’s a really powerful technology that’s being used in a crucial capacity of information distribution, and I don’t think it’s been tested or understood enough to make that decision.”
