A UK parliamentary committee has recommended that the government act immediately and proactively to tackle “a host of social, ethical and legal questions” arising from the rise of autonomous technologies such as artificial intelligence.
The Science and Technology Committee launched its inquiry into robotics and AI in March this year, receiving 67 written submissions and hearing from 12 witnesses in person; it also visited Google DeepMind’s London office. The resulting report, published today, highlights several issues that the committee says need “serious, ongoing consideration”, including:
- taking steps to minimise bias being accidentally built into AI systems;
- ensuring that the decisions they make are transparent;
- instigating methods that can verify that AI technology is operating as intended and that unwanted, or unpredictable, behaviours are not produced (a point sketched in code below).
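To make that last point concrete, here is a minimal Python sketch of one form such verification can take: an invariance test that checks whether a decision changes when only a protected attribute changes. The scoring function and field names are invented for illustration; a real audit would run probes like this against a trained model.

```python
# Toy illustration of verifying model behaviour: a decision should not
# change when only a protected attribute is varied. All names here are
# hypothetical, not drawn from the committee's report.

def score_applicant(applicant: dict) -> float:
    """Toy credit-style scorer; a real system would be a trained model."""
    return 0.4 * applicant["income"] / 100_000 + 0.6 * applicant["repayment_rate"]

def invariant_to_attribute(model, applicant: dict, attribute: str, values) -> bool:
    """Check the score is unchanged when only `attribute` is varied."""
    scores = []
    for v in values:
        probe = dict(applicant, **{attribute: v})  # copy with one field changed
        scores.append(model(probe))
    return max(scores) - min(scores) < 1e-9

applicant = {"income": 42_000, "repayment_rate": 0.9, "gender": "F"}
print(invariant_to_attribute(score_applicant, applicant, "gender", ["F", "M"]))
# True here, because the toy scorer ignores `gender` entirely.
```

A model trained on historical data can fail a test like this even when the protected attribute is never an explicit input, because correlated “proxy” features can carry the same information.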
The committee also said:
“While it is too soon to set down sector-wide regulations for this nascent field, it is vital that careful scrutiny of the ethical, legal and societal dimensions of artificially intelligent systems begins now. Not only would this help to ensure that the UK remains focused on developing ‘socially beneficial’ AI systems, it would also represent an important step towards fostering public dialogue about, and trust in, such systems over time.”
To that end, the committee suggests that the government establish a standing Commission on Artificial Intelligence tasked with “identifying principles to govern the development and application of AI”, providing advice, and encouraging public dialogue about automation technologies. The report summary says:
“While the UK is world-leading when it comes to considering the implications of AI, and is well-placed to provide global intellectual leadership on this matter, a coordinated approach is required to harness this expertise.”
IN-DEPTH
The report itself is extensive, with detailed sections on ethical and legal issues, minimizing bias, privacy and consent, and accountability and liability, among others. It discusses measures such as making decision-making transparent and devoting more attention to designing AI systems in ways that safeguard against discriminatory, data-driven ‘profiling’. It also covers the particular hurdles that arise when AI is applied to healthcare data, and calls accountability for the operation of autonomous weapons and lethal autonomous weapons systems “critically important”.
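The report stops short of prescribing how transparency of decision-making should be achieved. One simple illustration, sketched below with invented weights and feature names, is an interpretable linear model whose score can be decomposed into per-feature contributions that a person affected by the decision could actually read.

```python
# Hypothetical interpretable scorer: with a linear model, each feature's
# contribution to an individual decision can be read off directly.
# Weights, feature names, and the threshold are invented for this sketch.

WEIGHTS = {"years_of_history": 0.8, "missed_payments": -1.5, "utilisation": -0.7}
BIAS = 0.2

def explain_decision(features: dict) -> None:
    """Print each feature's contribution, largest in magnitude first."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    total = BIAS + sum(contributions.values())
    for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"{name:>18}: {c:+.2f}")
    print(f"{'decision score':>18}: {total:+.2f} -> {'approve' if total > 0 else 'decline'}")

explain_decision({"years_of_history": 4, "missed_payments": 1, "utilisation": 0.6})
```

Deep models do not decompose this neatly, which is one reason interpretability of algorithms has become a research topic in its own right.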
Adrian Weller, a senior researcher in the Machine Learning Group at the University of Cambridge, commented to TechCrunch on the challenges of auditing AI:
“There is rapidly growing focus on such important topics; for example, see the website for the new Leverhulme Centre for the Future of Intelligence in Cambridge. Also see upcoming workshops/symposia at the important machine learning conference NIPS. I’m involved there with one on ML and the Law (privacy, liability, transparency and fairness), one on Reliable ML in the Wild, and one on interpretability of algorithms.”
Author and data science consultant Cathy O’Neil, who has written a book on how big data can increase inequality, offers a perspective from outside academia:
“The number one thing is that data scientists and technologists do not acknowledge the problem at all. They don’t even acknowledge that you can build bias into AI. They also don’t acknowledge any responsibility that they might have due to the influential algorithms that they deploy. If you talk to a Facebook engineer or a Google engineer, they don’t really acknowledge the feedback loops that they engender with their algorithms. There’s really no responsibility that’s been assumed by the most powerful among us technologists.”
She goes on to add:
“We don’t have any tools yet. That’s why I started my company: we need to develop tools. And I need clients because I don’t have access to the data… It would be much easier for one of the companies that is building the AI that’s deciding whether someone deserves a job or not to develop these tools, because they actually have all that data. It’s impossible to audit these algorithms unless you have access to the actual algorithms and the data going into them. Everybody has bias at all times; the question is whether the bias embedded in it is the bias we want there.”
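To illustrate the sort of tool O’Neil describes, here is a minimal Python sketch of one audit that becomes possible once you have both the decisions and the data behind them: comparing selection rates across groups and applying the “four-fifths” disparate-impact ratio used in US employment-discrimination guidance. The records are fabricated and the check is illustrative, not a substitute for a full audit.

```python
# Minimal disparate-impact check: compute each group's selection rate
# and the ratio of the lowest to the highest rate. The data is fabricated.

from collections import defaultdict

def disparate_impact(records, group_key="group", outcome_key="hired"):
    """Return per-group selection rates and the min/max rate ratio."""
    totals, selected = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        selected[r[group_key]] += bool(r[outcome_key])
    rates = {g: selected[g] / totals[g] for g in totals}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

records = (
    [{"group": "A", "hired": True}] * 60 + [{"group": "A", "hired": False}] * 40 +
    [{"group": "B", "hired": True}] * 30 + [{"group": "B", "hired": False}] * 70
)
rates, ratio = disparate_impact(records)
print(rates, f"ratio={ratio:.2f}")  # ratio < 0.8 flags possible adverse impact
```

Her point stands: nothing in this check is technically hard, but without access to the records and the algorithm’s outcomes, an outside auditor cannot run it.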
SKILL SHIFT: GOING DIGITAL
The Science and Technology Committee report also weighs the implications of increasing automation for the jobs and skills landscape in the U.K., and criticizes the government for a lack of leadership on digital skills, urging the publication of its long-delayed Digital Strategy. The committee strongly favors paying more attention to adapting education and training systems so that skills keep pace with emergent technologies, a position it holds even though there is no consensus on how AI and robotics will affect the domestic workforce (how jobs might change, be destroyed, or created).
The committee writes:
“The Government must commit to addressing the digital skills crisis through a Digital Strategy, published without delay.”
It also points out a lack of leadership across robotics and autonomous systems (RAS), noting that the government has still not established the RAS Leadership Council promised in March 2015. The report says:
“This should be remedied immediately and a Leadership Council established without further delay. The Leadership Council should work with the Government and the Research Councils to produce a Government-backed ‘National RAS Strategy’, setting out the Government’s ambitions, and financial support, for this ‘great technology’.”