With the ever-increasing influence of AI, potential misuse and serious harms are also coming to light. Such is the urgency around holistic regulation that the UN recently held its first-ever meeting focused on AI regulation. There now seems to be some more positive movement towards AI regulation, though not by governments, but by some of the most prominent backers of new-gen AI.
Seven of the top AI companies convened at the White House on Friday and agreed to adopt a set of voluntary safeguards to mitigate the risks of AI, and “to help move toward safe, secure, and transparent development of AI technology.” The companies in attendance were Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI, which now seek to address many of the risks posed by AI. OpenAI, of course, represents a much larger group of partner companies.
The executives in attendance were Adam Selipsky (CEO of Amazon Web Services), Dario Amodei (Anthropic CEO), Kent Walker (Google head of global affairs), Mustafa Suleyman (Inflection CEO), Nick Clegg (Meta head of global affairs), Brad Smith (Microsoft President) and OpenAI President Greg Brockman. “As part of our mission to build safe and beneficial AGI, we will continue to pilot and refine concrete governance practices specifically tailored to highly capable foundation models like the ones that we produce. We will also continue to invest in research in areas that can help inform regulation, such as techniques for assessing potentially dangerous capabilities in AI models,” OpenAI said in a blog post on the matter.
“Policymakers around the world are considering new laws for highly capable AI systems. Today’s commitments contribute specific and concrete practices to that ongoing discussion. This announcement is part of our ongoing collaboration with governments, civil society organizations and others around the world to advance AI governance,” said OpenAI VP of Global Affairs Anna Makanju.
“We welcome the President’s leadership in bringing the tech industry together to hammer out concrete steps that will help make AI safer, more secure, and more beneficial for the public,” Microsoft said in a blog post on Friday.
These are but baby steps before formal legislation is passed to regulate and guide the swiftly advancing technology, of course, but it is still a step in the right direction and an improvement over reacting blindly once concerns about AI evolve into problems with serious repercussions. The dangers of a technology that can provide sophisticated, creative and conversational responses to simple text- and image-based prompts cannot be overstated, and fears have already been aired about moving too quickly where the AI sector is concerned. The voluntary commitments made by the companies to the White House include measures such as watermarking AI-generated content, to help make the technology safer and to prevent or mitigate the spread of misinformation to the masses.
The commitments also include the testing of AI tools for security by independent experts before they are released to the public, as well as sharing information on best practices and on attempts to circumvent safeguards with other industry players, governments and outside experts. Other measures include publicly reporting on the technology and issuing guidance on the appropriate use of AI tools, as well as prioritizing research on the societal risks of AI, including around discrimination and privacy. Last but not least, the commitments include the development of AI that can help mitigate societal challenges such as climate change and disease.