Synopses & Reviews
A robust and engaging account of the single greatest threat faced by AI and ML systems
In Not With A Bug, But With A Sticker: Attacks on Machine Learning Systems and What To Do About Them, a team of distinguished adversarial machine learning researchers deliver a riveting account of the most significant risk to currently deployed artificial intelligence systems: cybersecurity threats. The authors take you on a sweeping tour, from inside secretive government organizations to academic workshops at ski chalets to Google's cafeteria, recounting how major AI systems remain vulnerable to the exploits of bad actors of all stripes.
Based on hundreds of interviews with academic researchers, policy makers, business leaders, and national security experts, the authors render the complex science of attacking AI systems with color and flourish, offering a front-row seat to the people who brought these risks to light. Grounded in real-world examples of previous attacks, you will learn how adversaries can upend the reliability of otherwise robust AI systems with straightforward exploits.
The steeplechase to solve this problem has already begun. Nations and organizations recognize that securing AI systems confers a decisive advantage: the prize is not just keeping one's own AI systems safe but also the ability to disrupt a competitor's.
An essential and eye-opening resource for machine learning and software engineers, policy makers and business leaders involved with artificial intelligence, and academics studying topics including cybersecurity and computer science, Not With A Bug, But With A Sticker is a warning — albeit an entertaining and engaging one — we should all heed.
How we secure our AI systems will define the next decade. The stakes have never been higher, yet public attention and debate on the issue have never been scarcer.
The authors are donating the proceeds from this book to two charities: Black in AI and Bountiful Children's Foundation.
About the Author
Ram Shankar Siva Kumar is a Data Cowboy working at the intersection of machine learning and security. At Microsoft, he founded the AI Red Team, bringing together an interdisciplinary group of researchers and engineers to proactively attack AI systems. His research has been covered by Harvard Business Review, Bloomberg, VentureBeat, Wired, and Geekwire. Most notably, his work on adversarial machine learning was featured in the National Security Commission on Artificial Intelligence (NSCAI) Final Report presented to the United States Congress and the President. He founded the Adversarial ML Threat Matrix, an ATT&CK-style framework enumerating threats to machine learning. He is also a Tech Policy Fellow at UC Berkeley and an affiliate at the Berkman Klein Center for Internet and Society at Harvard University.
Hyrum Anderson is a Distinguished ML Engineer at Robust Intelligence. He received his PhD in electrical engineering from the University of Washington, with an emphasis on signal processing and machine learning, and holds BS and MS degrees in electrical engineering from Brigham Young University. He has directed security research at MIT Lincoln Laboratory, Sandia National Laboratories, and Mandiant, served as Chief Scientist at Endgame (acquired by Elastic), and was Principal Architect of Trustworthy Machine Learning at Microsoft. While at Microsoft, he co-founded Microsoft's AI Red Team and served as chair of its governing board. Hyrum co-founded the Conference on Applied Machine Learning in Information Security and is co-author of Not With a Bug, But With a Sticker: Attacks on Machine Learning Systems and What To Do About Them.