Tech titan Google may be one of the biggest proponents of AI in recent times, but that does not mean the company is blind to its faults or to the dangers it poses. In a surprising turn of events, Google, a major supporter of and investor in AI, has issued a warning to its own staff about the potential risks of chatbot technology, Reuters reported. The cautionary note is a significant development, given Google’s strong backing of AI and its continued efforts to advance the field.

Ever since OpenAI’s ChatGPT made its debut in November 2022, the popularity of generative AI has continued to rise. The growing demand for similar chatbots birthed Microsoft’s Bing AI and Google’s Bard, and now Google-parent Alphabet is cautioning its employees about the use of such tools. In its warning, the company advised staff not to enter confidential information into AI chatbots, not least because these chatbots ingest vast amounts of user data to provide personalized responses and assistance. According to a survey by the networking site Fishbowl, cited by Reuters, around 43% of professionals were using ChatGPT or similar AI tools as of January 2023, often without informing their bosses.

A Google privacy notice warns users against exactly this, stating, “Don’t include confidential or sensitive information in your Bard conversations.” From the looks of it, Microsoft, another major proponent of AI, agrees with the sentiment. According to Yusuf Mehdi, Microsoft’s consumer chief marketing officer, it “makes sense” that companies would not want their staff to use public chatbots in the workplace. Cloudflare CEO Matthew Prince offered a more colorful view of the matter, saying that typing confidential matters into chatbots was like “turning a bunch of PhD students loose in all of your private records.”

There is always a risk of data breaches or unauthorized access: if a chatbot platform lacks sufficient security measures, user information could be vulnerable to exploitation or misuse. And if human reviewers read the chats and come across sensitive information about users, that data could be used for targeted advertising or profiling, or even be sold to third parties without explicit user consent. Users may find their personal information being used in ways they never anticipated or authorized, raising concerns about privacy and control over their data.
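As a minimal sketch of the kind of safeguard such warnings point toward, the hypothetical snippet below masks obvious secrets (email addresses, API-key-like tokens, and US Social Security numbers) in a prompt before it is ever sent to a chatbot. The patterns and the `scrub_prompt` helper are illustrative assumptions, not any vendor’s actual data-loss-prevention tooling, which is far more thorough.

```python
import re

# Hypothetical pre-submission filter: redact obvious secrets from a prompt
# before it leaves the company network. The patterns are illustrative only.
REDACTION_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),
    (re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"), "[REDACTED_KEY]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED_SSN]"),
]

def scrub_prompt(text: str) -> str:
    """Return a copy of `text` with known sensitive patterns masked."""
    for pattern, placeholder in REDACTION_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    prompt = "Summarize the contract for jane.doe@example.com, API key sk-abcdef1234567890XYZ."
    print(scrub_prompt(prompt))
    # -> Summarize the contract for [REDACTED_EMAIL], API key [REDACTED_KEY].
```

A filter like this only catches what it is told to look for; it does not make pasting company documents into a public chatbot safe, which is precisely why the blanket warnings exist.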

Another issue with chatbots is accuracy: there is a risk of propagating misinformation or providing inaccurate responses. In sensitive, knowledge-intensive fields such as law or medicine, relying solely on chatbots for critical information can lead to erroneous advice or incorrect conclusions, as a New York lawyer discovered to his detriment when ChatGPT supplied him with fabricated case citations that ended up in a court filing. The dangers of AI chatbots go on and on, from their limited ability to grasp context beyond the prompt they are given to their blindness to the nuances of human communication, and they only underscore the need for robust legislation and safeguards on AI chatbots and other such tools.

Apart from warning against putting sensitive information into chatbots, Alphabet has also cautioned its engineers against directly using computer code generated by chatbots, according to media reports. Alphabet elaborated that while Bard can make undesired code suggestions, it still helps programmers.
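To see what an “undesired code suggestion” can look like in practice, consider the hypothetical sketch below: a plausible chatbot-generated Python snippet that builds an SQL query by string interpolation, a classic injection vulnerability, alongside the parameterized version a reviewer would insist on. The `users` table and the function names are invented for illustration; this is not an example drawn from Bard itself.

```python
import sqlite3

# Hypothetical chatbot suggestion: looks helpful, but interpolating user input
# straight into SQL is an injection vulnerability.
def find_user_unsafe(conn: sqlite3.Connection, name: str):
    query = f"SELECT id, email FROM users WHERE name = '{name}'"  # do not ship this
    return conn.execute(query).fetchall()

# What a careful reviewer would change it to: a parameterized query, where the
# driver escapes the value so the input can no longer alter the SQL itself.
def find_user_safe(conn: sqlite3.Connection, name: str):
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (name,)
    ).fetchall()
```

The unsafe version breaks as soon as the name contains a quote character, and a crafted input can rewrite the query entirely; the parameterized version lets the database driver treat the input strictly as data. It is exactly this class of subtle flaw that makes blindly pasting chatbot-generated code risky.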