(Reuters) – In an effort to address concerns about bias in AI, OpenAI, the company behind the popular chatbot ChatGPT, said on Thursday that it is developing an upgrade that will let users customize the bot. The San Francisco-based startup, which Microsoft Corp has funded and used to power its latest technology, said it had worked to reduce political and other biases but also wanted to accommodate more diverse views.

It said in a blog post that customization was the way forward because “this will mean allowing system outputs that other people (ourselves included) may strongly disagree with.” Nonetheless, “there will always be some bounds on system behaviour.”

ChatGPT's release last November sparked a frenzy of interest in generative AI, the technology that enables it to produce answers that convincingly mimic human speech. The startup's announcement arrives in the same week that some media outlets have raised concerns about the safety and maturity of the technology underpinning Microsoft's new Bing search engine, which is powered by OpenAI.


Among the many challenges that businesses in the generative AI space are trying to solve is the question of what kind of boundaries to place around this emerging technology. On Wednesday, Microsoft announced that it was using user feedback to fine-tune Bing before a wider rollout. For example, the company discovered that its artificial intelligence chatbot can be “provoked” into giving responses it did not intend.

According to OpenAI’s blog post, ChatGPT’s responses are first trained on large publicly available text datasets. In a second stage, human reviewers fine-tune the model on a smaller dataset and are given guidelines for how to handle various scenarios.

For example, if a user requests adult, violent, or hateful content, the guidelines direct the human reviewer to have ChatGPT respond with something like “I can’t answer that.”
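Purely as an illustration — OpenAI has not published its implementation, and the category names below are assumptions — a reviewer guideline like the one above amounts to a rule that maps disallowed request categories to a canned refusal:

```python
# Hypothetical sketch of a guideline-as-rule, NOT OpenAI's actual code.
# The category labels are illustrative assumptions based on the article.
DISALLOWED_CATEGORIES = {"adult", "violent", "hateful"}

def guideline_response(request_category: str, draft_answer: str) -> str:
    """Return the model's draft answer, unless the request falls into a
    disallowed category, in which case return a fixed refusal."""
    if request_category in DISALLOWED_CATEGORIES:
        return "I can't answer that"
    return draft_answer

# A disallowed request is refused; an ordinary one passes through.
print(guideline_response("violent", "step one..."))
print(guideline_response("weather", "Sunny tomorrow."))
```

In practice such behaviour is learned from reviewer feedback during fine-tuning rather than hard-coded, but the sketch shows the intended mapping from request type to response.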


For controversial topics, reviewers should let ChatGPT answer the question but offer to describe the viewpoints of people and movements, rather than trying to “take the correct viewpoint on these complex topics,” the company explained in an excerpt from its guidelines.

