Is The AI Age An Unsafe One?

With AI and other new technologies, it can often seem as though there is as much misinformation published about their capabilities as there is fact.  Hype surrounds them, whether in puffing up expectations or exaggerating risks.  On which side does a recent report on the security implications of new AI technologies, from a consortium including Oxford University’s Future of Humanity Institute and Cambridge University’s Centre for the Study of Existential Risk, sit?

It certainly errs towards the pessimistic side, warning of a rapid rise in cybercrime, the misuse of drones, and an increase in propaganda and misinformation spread by bots.

The authors recommend a number of policy interventions to help mitigate some of the risks posed by technologies such as AI, including:

  • Developing a greater understanding of the risks of malicious use of AI, and of how policy makers can prepare for those risks.
  • Gleaning best practices from other disciplines with a long history of dual-use technologies that can do ill as well as good.
  • Engaging a wider, and deeper, pool of stakeholders in preventing and mitigating the risks of malicious use of AI.

They believe that AI can facilitate a rise in automated hacking, with speech synthesis used to impersonate targets.  The authors also worry that the proliferation of drones could allow attackers to deploy them in coordinated fleets.  There is also the risk of fleets of autonomous vehicles being hacked and their safety compromised.

In the political sphere, this could manifest itself in extremely detailed and targeted propaganda, with fake videos used to manipulate public opinion.

The paper fleshes out several scenarios in which AI might be used maliciously, to illustrate some of the potential risks we face in the near future.

“For many decades hype outstripped fact in terms of AI and machine learning. No longer. This report looks at the practices that just don’t work anymore – and suggests broad approaches that might help: for example, how to design software and hardware to make it less hackable – and what type of laws and international regulations might work in tandem with this,” the authors say.

There’s no doubting the calibre of names behind the report.  They do run the risk, however, of presenting such a dark and dystopian vision of the future that people prefer to bury their heads in the sand rather than heed the dire warnings the report outlines.