OpenAI has developed a tool capable of accurately detecting text generated by its AI chatbot, ChatGPT, The Wall Street Journal reported. However, despite the tool being ready, it remains unreleased due to internal debate. According to several media reports, the AI firm is taking a “deliberate approach” because of “the complexities involved and its likely impact on the broader ecosystem beyond OpenAI.”

OpenAI’s approach is a text watermarking method designed to identify AI-generated content. The technique subtly alters the way ChatGPT predicts subsequent words and phrases, embedding a detectable pattern within the text. Although the watermark is highly accurate in controlled settings, it is vulnerable to tampering: translating the text, rewording it with another generative model, or inserting and deleting special characters can effectively remove it, making the scheme easy for bad actors to circumvent.
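The report does not describe OpenAI’s actual mechanism, but published token-level watermarking schemes work by biasing each word choice toward a “green” subset of the vocabulary seeded by the preceding word, so a detector can later recompute the subsets and count hits. The toy sketch below illustrates that idea under assumed, simplified conditions: a tiny fixed vocabulary and a stand-in “model” that picks words uniformly at random rather than a real language model.

```python
import hashlib
import random

# Illustrative toy vocabulary; a real model has tens of thousands of tokens.
VOCAB = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran", "fast",
         "slow", "red", "blue", "big", "small", "happy", "sad"]

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    """Partition the vocabulary into a 'green' subset seeded by the
    previous token, so the same split is reproducible at detection time."""
    seed = int.from_bytes(hashlib.sha256(prev_token.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def generate_watermarked(length: int, seed: int = 0) -> list:
    """Stand-in 'model': samples uniformly, but only from the green list."""
    rng = random.Random(seed)
    tokens = ["the"]
    for _ in range(length - 1):
        tokens.append(rng.choice(sorted(green_list(tokens[-1]))))
    return tokens

def green_fraction(tokens: list) -> float:
    """Detector: recompute each green list and measure how often the
    observed next token falls inside it (~0.5 for unwatermarked text)."""
    hits = sum(1 for prev, cur in zip(tokens, tokens[1:])
               if cur in green_list(prev))
    return hits / max(len(tokens) - 1, 1)
```

This also shows why the attacks mentioned above work: paraphrasing or translation replaces the tokens, destroying the statistical bias the detector relies on.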

One of the primary ethical concerns surrounding the release of the detection tool is its potential to disproportionately affect non-native English speakers. A spokesperson for the firm said the technology could “disproportionately impact groups like non-English speakers,” and could stigmatize the use of AI as a valuable writing aid for people who rely on it for language assistance.

Even so, the watermarking tool could be useful in fields like education: it could deter students from having AI complete their assignments and help editors and publishers identify AI-generated text in submissions. There is a flip side, however. Such a detection tool might discourage the use of AI for content creation, where it can be a valuable resource for generating ideas, drafting articles, and enhancing productivity.

Within OpenAI, there is significant debate over whether to release the watermarking tool. On one hand, releasing it could enhance transparency and trust in AI-generated content, giving educators and other stakeholders a means to verify the authenticity of text. On the other hand, there are concerns that releasing the tool could deter people from using ChatGPT at all; if watermarking is perceived negatively, OpenAI risks losing a significant portion of its user base and revenue.

Public opinion on the release of an AI detection tool appears to be generally favorable. A survey commissioned by OpenAI found that there is strong support for the idea, with a four-to-one margin in favor of such a tool. However, nearly 30% of surveyed ChatGPT users indicated that they might reduce their usage of the software if watermarking were implemented.

In addition to text watermarking, OpenAI is investigating other methods for ensuring text provenance. These include classifiers and embedding metadata within the text. Metadata embedding, which involves cryptographic signing, is still in its early stages of development. This method aims to provide a tamper-resistant way to verify the origin of text without the drawbacks associated with watermarking. As of now, OpenAI continues to research text watermarking and its alternatives.
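OpenAI has not published how its metadata approach would work, but cryptographic signing for provenance generally means distributing a signature alongside the text so that any later edit invalidates it. The minimal sketch below uses a keyed HMAC from Python’s standard library purely for illustration; a real provenance system would use an asymmetric key pair so anyone can verify without holding the provider’s secret.

```python
import hashlib
import hmac
import json

# Illustrative secret; a real system would use a public/private key pair.
SECRET = b"provider-held signing key"

def sign_text(text: str, model: str = "example-model") -> dict:
    """Bundle the text with provenance metadata and a signature over both."""
    payload = json.dumps({"text": text, "model": model}, sort_keys=True).encode()
    tag = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"text": text, "model": model, "signature": tag}

def verify(bundle: dict) -> bool:
    """Recompute the signature; any change to the text or metadata fails."""
    payload = json.dumps({"text": bundle["text"], "model": bundle["model"]},
                         sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, bundle["signature"])
```

Unlike a watermark, the signature lives outside the text itself, which is why this route is described as tamper-resistant: an attacker can strip the metadata but cannot forge a valid signature for altered text.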