AI, the buzzword of 2017, can be tricked into doing unwanted things. As MIT points out, it is possible to use a billboard to fool the vision systems of self-driving cars into seeing things that aren't there, and inaudible signals can trick voice-controlled assistants into taking unwanted actions, such as visiting a website and downloading malware.
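To make the idea concrete, here is a minimal sketch of the gradient-sign trick behind many such attacks, run against a toy linear classifier with known weights. Everything here is an illustrative assumption (the model, the "image", the epsilon budget); real attacks target deep networks, but the principle is the same: a tiny, bounded nudge to every input value can flip the model's decision.

```python
# Toy demonstration of an adversarial perturbation (FGSM-style).
# All models and data are synthetic assumptions, not any real system.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)           # stand-in model: predicts 1 if w.x > 0

def predict(x):
    return int(w @ x > 0)

x = rng.random(100)                # a fake 100-"pixel" image
x += (1.0 - w @ x) / (w @ w) * w   # shift x so the model's score is exactly +1

# Fast Gradient Sign Method: move every pixel a small, bounded step in the
# direction that most decreases the class score. For a linear model the
# input gradient is simply w.
epsilon = 0.1
x_adv = x - epsilon * np.sign(w)

print("clean:     score=%+.2f label=%d" % (w @ x, predict(x)))
print("perturbed: score=%+.2f label=%d" % (w @ x_adv, predict(x_adv)))
print("max per-pixel change:", np.abs(x_adv - x).max())  # == epsilon
```

No single pixel changes by more than 0.1, yet the summed effect across all pixels swamps the model's margin and the label flips.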
“The first attacks will come very soon against online classification systems,” McDaniel says. Likely targets include modern spam filters, systems designed to detect illicit or copyrighted material, and advanced machine-learning-based computer security systems.

A new paper suggests the problem could be more widespread than previously known. It shows that certain deceptions can be reused against different machine-learning systems, or even against a large “black box” system about which an attacker has no prior knowledge. Bugs lurking in popular machine-learning tools could provide another avenue of attack: new tools are developed at a rapid pace and are often released for free online before being deployed in live services such as image recognition or natural-language analysis.
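The reuse of deceptions across systems is known as transferability, and a rough sketch of it fits in a few lines. The setup below is entirely hypothetical, using synthetic data, a logistic-regression "surrogate" the attacker can inspect, and a neural network standing in for the black-box victim; the point is only that an attack crafted on one model often degrades the other.

```python
# Hedged sketch of adversarial transferability on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

surrogate = LogisticRegression(max_iter=1000).fit(X, y)   # attacker's stand-in
black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                          random_state=0).fit(X, y)       # victim, never inspected

# Craft adversarial examples using only the surrogate's gradient: for
# logistic regression the input gradient of the class-1 score is coef_.
w = surrogate.coef_.ravel()
epsilon = 0.5                                   # assumed perturbation budget
signs = np.where(y == 1, -1.0, 1.0)[:, None]    # push each point toward the other class
X_adv = X + epsilon * signs * np.sign(w)

print("black box, clean inputs:       %.2f" % black_box.score(X, y))
print("black box, transferred attack: %.2f" % black_box.score(X_adv, y))
```

If the second accuracy figure drops well below the first, the attack has transferred: the attacker never needed access to the black-box model's internals, only to a roughly similar model of their own.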