
Some AI concerns for policymakers that don’t involve sentient superbots

AEIdeas

The first casualty of the man-machine war in the 2011 bestseller “Robopocalypse” is a human who screws up. As the book begins, a superintelligent sentient AI, named Archos R-14, is being kept in a Faraday cage that absorbs or blocks electromagnetic energy. And since Archos isn’t based in a networked computer, it’s basically trapped and can thus be safely studied.

Except some sloppy laptop handling by a human scientist allows Archos to escape. And the book is off to the races. I recalled that scene as I read through the new report, “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” sponsored and coauthored by a number of AI experts. In particular, this conclusion really stuck with me: “It is likely to prove much harder to secure humans from manipulation attacks than it will be to secure digital and cyber-physical systems from cyber attacks.”

Ah yes, we are the weak link no matter how good the cyberdefense systems we create. And the report provides an interesting example of how this vulnerability might play out with the sort of non-sentient AI we have today and will have in the very near future:

Jackie logs into the admin console for the CleanSecure robot that she manages; operating on a verified kernel, it is guaranteed by the manufacturer to be hack-proof. She then uploads photographs of a new employee so the robot will recognize him when he walks into the building and will not sound the alarm.

While she waits for the robot to authenticate its updated person database with the company’s other security systems, Jackie plays with the model train on her desk, allowing herself a couple of runs around the track that encircles her keyboard and monitor. There’s a ping, signaling successful authentication, and she smiles to herself and carries on with her tasks.

Later that afternoon, Jackie is browsing Facebook while idly managing a firmware update of the robot. An ad catches her eye – a model train set sale at a hobbyist shop that, it turns out, is located just a few minutes from her house. She fills out an online form to get a brochure emailed to her, then she opens the brochure when it pops into her inbox. The robot dings, signaling a need for attention, so she minimizes the brochure and logs back into the admin console.

Jackie doesn’t know that the brochure was infected with malware. Based on data from her online profile and other public info, an AI system was used to generate a very personalized vulnerability profile for Jackie – the model train advert – which was then farmed out to a freelancer to create a tailored exploit for this vulnerability. When Jackie logged into the console, her username and password were exfiltrated to a darknet command and control server. It won’t be long before someone buys them and uses them to subvert the CleanSecure robot with fully privileged access.

The study is geared more toward raising awareness of non-Terminator threat scenarios (another involves terrorists hacking a cleaning robot) and providing direction for research than toward offering actionable policy solutions. But it’s definitely worth a read.