
Artificial intelligence (AI) promises to transform major sectors like healthcare, transportation, finance, and government over the coming years. But the advanced machine learning (ML) models powering this AI revolution also introduce new attack vectors for malicious actors. As adoption accelerates, so do the associated cybersecurity risks.
That troubling dynamic motivates a comprehensive new report on AI security published by the U.S. National Institute of Standards and Technology (NIST). The report maps out a detailed taxonomy of current adversarial threats to AI systems across different modalities such as computer vision, natural language processing, speech recognition, and tabular data analytics. It also summarizes known techniques to mitigate these multifaceted threats.
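To make the threat concrete, one well-known class of attack the NIST taxonomy covers is evasion: perturbing an input at inference time so the model misclassifies it. The sketch below illustrates the classic fast gradient sign method (FGSM) against a toy logistic-regression classifier; the model weights, input, and epsilon are all hypothetical values chosen for illustration, not anything from the report.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Return the hard class label (0 or 1) of a logistic model."""
    return int(sigmoid(w @ x + b) >= 0.5)

def fgsm(w, b, x, y, eps):
    """Fast Gradient Sign Method evasion attack on a logistic model.

    For cross-entropy loss, the gradient w.r.t. the input x is (p - y) * w,
    so the attack nudges each feature by eps in the loss-increasing direction.
    """
    p = sigmoid(w @ x + b)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

# Hypothetical toy model and input, purely for demonstration.
w = np.array([2.0, -1.0])   # model weights
b = 0.0                     # bias
x = np.array([1.0, 1.0])    # clean input, correctly classified as 1
y = 1                       # true label

x_adv = fgsm(w, b, x, y, eps=0.5)
print(predict(w, b, x))      # clean prediction: 1
print(predict(w, b, x_adv))  # adversarial prediction flips to 0
```

A perturbation of only 0.5 per feature is enough to flip this toy model's decision, which is the essence of why evasion attacks are hard to defend against: the change can be small relative to the input yet decisive for the classifier.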