This article was published 2 years ago

The ChatGPT fervor has well and truly taken the global tech sector by storm – so much so that it has worried titans such as Google and persuaded others like Microsoft to pour billions of dollars into artificial intelligence. The recent wave of AI chatbots and advancements in AI can be laid at the feet of ChatGPT developer OpenAI, which has done remarkably well ever since it unveiled the (now viral) chatbot.

As fascinating as ChatGPT has been, the chatbot remains far from perfect and continues to have chinks in its armor. There have been instances where users received outputs that they consider to be politically biased, offensive, or otherwise objectionable – and OpenAI admits that the concerns raised in multiple cases have been valid. In a blog post, the organization addressed many of these concerns, as well as some of its plans for the chatbot in the near future.

Going forward, the startup is working on an upgrade to ChatGPT that will allow users to easily customize its behavior. OpenAI acknowledged that while it has worked to mitigate political and other biases, it also wants to accommodate more diverse views. This means that it would have to allow system outputs that others may strongly disagree with. Nonetheless, it added that there will “always be some bounds on system behavior.” The challenge, OpenAI said, lies in identifying and defining those bounds.

The San Francisco-based startup plans to avoid undue concentration of power by giving users the ability to influence the rules of its systems. Currently, it is in the early stages of piloting efforts to solicit public input more broadly on topics like system behavior, disclosure mechanisms, and deployment policies. Furthermore, it is looking into teaming up with external organizations to conduct third-party audits of its safety and policy efforts.

“We believe that many decisions about our defaults and hard bounds should be made collectively, and while practical implementation is a challenge, we aim to include as many perspectives as possible. As a starting point, we’ve sought external input on our technology in the form of red teaming. We also recently began soliciting public input on AI in education (one particularly important context in which our technology is being deployed),” OpenAI said in the blog post.

The developer explained that ChatGPT is first trained on large text datasets available on the Internet. In the next step, human reviewers work through a smaller dataset and are given guidelines on what to do in different situations. The models then generalize from the reviewer feedback in order to respond to the wide array of specific inputs provided by users.

In the blog post, the startup went on to address biases in the design and impact of AI systems, saying that users are “rightly worried.” To address these concerns, OpenAI shared some of its guidelines that pertain to political and controversial topics, including how the chatbot should respond to tricky subjects or to requests for inappropriate content. The guidelines explicitly direct the chatbot not to favor any political group, and the developer says it is searching for ways to make the fine-tuning process more understandable and controllable. It is also investing in research and engineering to reduce biases in how ChatGPT responds to different inputs.

It cannot be denied that ChatGPT has made a strong debut and has opened the gateway for further advancements in this nascent technology. This development comes soon after Microsoft revealed that user feedback was helping its efforts to improve Bing ahead of a wider rollout – the new AI-powered Bing Chat has a tendency to engage in unnerving conversations with users and can be “provoked” into giving responses it was not intended to give.