In late February 2017, nearly two dozen leading researchers gathered in centuries-old Oxford, England, to warn of the most modern of hazards: malicious use of AI.
Among the red flags they raised was a class of attacks known as adversarial machine learning. In this scenario, AI systems’ neural networks are tricked by intentionally modified external data. An attacker ever so slightly distorts these inputs for the sole purpose of causing AI to misclassify them. An adversarial image of a spoon, for instance, is exactly that — a spoon — to human eyes. To AI, there is no spoon.
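The mechanics can be sketched in a few lines. Below is a minimal, toy illustration of the idea using the fast gradient sign method (FGSM) on a hypothetical two-class linear classifier; the weights `W`, input `x`, and step size `epsilon` are made-up values for demonstration, not anything from the Oxford report.

```python
import numpy as np

# Toy 2-class linear "classifier": scores = W @ x, prediction = argmax.
# W and x are invented illustrative values.
W = np.array([[1.0, -1.0],
              [0.5,  1.0]])
x = np.array([1.0, 0.0])          # the clean input ("the spoon")

def predict(v):
    return int(np.argmax(W @ v))

clean_label = predict(x)          # class 0 on the clean input

# For a linear model, the gradient of the rival class's score margin
# with respect to the input is just the difference of weight rows.
rival = 1 - clean_label
grad = W[rival] - W[clean_label]

# FGSM-style step: nudge every input dimension slightly in the
# direction that raises the rival class's score.
epsilon = 0.3
x_adv = x + epsilon * np.sign(grad)

print(predict(x), predict(x_adv))   # prints: 0 1 — the small nudge flips the prediction
```

The perturbation changes each coordinate by at most 0.3, yet the classifier's answer flips; against image models the same trick uses pixel changes too small for a human to notice.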