Policy Brief
Understanding the Security Implications of the Machine-Learning Supply Chain
Published by Interface
October 29, 2020
The hopes and expectations connected to artificial intelligence are staggering. All major powers have started investing heavily in the research and development of artificial intelligence, especially machine learning. This push may be driven by a goal that Vladimir Putin has described in an oversimplified but clear way: he has famously been quoted as saying that the nation that leads in artificial intelligence “will be the ruler of the world”. Countries such as the United States and China, and especially their respective private sectors, currently seem to have the upper hand in research and application. However, the vast number of affected sectors and possible specializations – such as securing artificial intelligence – enables a range of states and non-state actors to meaningfully engage in this domain.
Unfortunately, the drivers of technological development frequently follow a “move fast and break things” mentality, sometimes with destabilizing effects for the entire Internet ecosystem. Governments and companies must not repeat a grave mistake of the past: treating security as an afterthought. To create an enabling environment for the development and deployment of artificial intelligence, security considerations must urgently be addressed across the entire machine-learning supply chain.
Applications leveraging artificial intelligence will be deeply integrated into the cyber domain and will likely suffer adverse effects accordingly. These include, but are not limited to, geopolitical cyber operations, illegal transfer of intellectual property, national surveillance apparatuses, financial theft, and cybercrime. Every new technology attracts adversaries who will exploit it for their own gain, whether their motivation is financial, political, or otherwise. There will therefore be a number of capable and willing threat actors who want to meddle with systems powered by artificial intelligence.
Therefore, it is crucial to understand the machine-learning supply chain and secure it against adversarial interference. To achieve this goal, the paper recommends that decision-makers implement the following measures:
- Design a security approach rooted in conventional information security
- Increase transparency, traceability, validation, and verification
- Identify, adopt, and apply best practices
- Require fail-safes and resiliency measures
- Create a machine-learning security ecosystem
- Set up a permanent platform for threat exchange
- Develop a compliance-criteria catalog for service providers
- Foster machine-learning literacy across the board
This paper is preceded by a first paper on the attack surface of machine learning.
Author
Dr. Sven Herpig
Lead Cybersecurity Policy and Resilience