Scientists Say Brexit May End UK’s Lead in AI
Report: the country faces a “substantial skill shortage in this area.”
April 26, 2017
Jeremy Kahn (Bloomberg) -- A group of prominent academics and tech executives fears that the U.K.'s exit from the European Union could jeopardize the country's lead in the development of machine learning technologies.
British researchers have played a critical role in advances in machine learning, a kind of artificial intelligence in which software learns from experience or data. But as demand for related expertise proliferates across industries, the country faces a “substantial skill shortage in this area,” concluded a report published Tuesday by the Royal Society, one of the world’s oldest and best-known scientific organizations.
Although the report doesn’t mention Brexit specifically, it implies that the U.K.’s decision to leave the European Union could exacerbate this skills gap.
“As it considers its future approach to immigration policy, the U.K. must ensure that research and innovation systems continue to be able to access the skills they need,” the report said.
The U.K. has been home to a number of high-profile AI startups that have gone on to be acquired by U.S. tech firms, including Twitter Inc.’s purchase of London-based artificial intelligence startup Magic Pony Technology in June 2016, language processing company SwiftKey’s sale to Microsoft Corp. in February 2016, and Alphabet Inc.’s £400 million acquisition of London AI startup DeepMind in 2014.
SoftBank Group Corp. is close to an investment in Improbable Worlds Ltd., a London-based virtual reality startup backed by U.S. venture capital firm Andreessen Horowitz, people familiar with the matter said.
Peter Donnelly, professor of genetics and statistical science at the University of Oxford and chair of the working group that produced the report, said that U.K. startup companies involved in machine learning applications see continued access to expertise as “fundamental.”
The working group comprised fourteen researchers, a number of whom work for leading technology companies. They include Demis Hassabis, the co-founder and chief executive officer of DeepMind; Neil Lawrence, a professor at the University of Sheffield who is currently director of machine learning for Amazon Inc.; and Zoubin Ghahramani, a University of Cambridge professor who is also chief scientist at Uber.
The working group also expressed concerns that improved economic productivity resulting from machine learning could lead to increased inequality. “Much of the benefit may go to a small number of individuals or companies, with others losing jobs or facing reduced living standards,” the report said. “Society needs to give urgent consideration to the ways in which benefits from machine learning can be shared.”
The report recommended more government funding for both doctoral candidates and master’s-level courses, and that machine learning be incorporated into the U.K.’s school curriculum, with an emphasis on the ethical and social implications of machine learning and big data.
The group spent 18 months studying issues facing the field, including sampling public attitudes toward machine learning applications through surveys and small group discussions.
While the group concluded that only 9 percent of the British public understood the term “machine learning,” it found that most people were aware of applications based on its underlying principles, such as speech recognition, which is used by digital assistants like Amazon’s Alexa, and automatic photo tagging on social media sites like Facebook. The public was broadly enthusiastic about machine learning being used to improve healthcare or the delivery of social services, and in situations where robots could save humans from having to perform hazardous tasks.
They expressed concerns, however, about the potential for AI-enabled robots to cause direct physical harm to people, for instance if a self-driving car made a mistake and crashed, the report said. They also worried about machines replacing humans, both because of the potential for large-scale job losses and because of fears that people would become too reliant on the technologies and lose critical cognitive skills.
To help overcome fears about machine learning, the working group also recommended that the U.K. update and improve its rules on privacy and data governance, and that the government fund research into ways to make machine learning algorithms less like black boxes, enabling people to determine, or at least infer, why the algorithms reach certain conclusions.