Researchers at MIT found a way to fool Google's Cloud Vision API, slipping adversarial images past its defenses and proving that AI is not completely foolproof.
How Did They Do It?
Google's Cloud Vision inspects an image and identifies the objects in it. The tool is designed to help people recognize everything from people to animals to cars.
It is a popular piece of software used by people all around the world, but now that it has been shown to be foolable, questions have started to pour in.
The team worked under black-box conditions: they could not see the inner workings of the software and could only view the output it produced.
To fool the system, they designed a method to rapidly generate adversarial examples against that black box.
Guided only by the classifier's output, the researchers slowly altered photos pixel by pixel. After about a million queries, they were able to fool the system into assigning labels that no longer matched what a human would see in the images.
For example, they changed an image of a helicopter pixel by pixel until it looked like a picture of rifles, while the system maintained the "helicopter" classification.
The MIT team didn't just alter random photos. They used a systematic method to target the image-recognition system, moving pixel by pixel toward a picture that would fool the software.
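The core idea of a query-only attack like this can be illustrated with a minimal sketch: propose a tiny pixel change, ask the black box for a score, and keep the change only if the score improves. Everything here is an assumption for illustration; the `black_box_score` function below is a toy stand-in for a remote classifier query, not the real Cloud Vision API, and the MIT team's actual method was far more query-efficient.

```python
import numpy as np

def black_box_score(image, target):
    """Toy stand-in for querying a remote classifier (hypothetical,
    not the real API). Returns a higher score as `image` gets closer
    to a fixed `target` template."""
    return -np.abs(image - target).mean()

def random_search_attack(image, target_template, steps=2000, eps=0.05, rng=None):
    """Query-only (black-box) attack sketch: perturb one pixel at a
    time and keep each change only if the score improves."""
    rng = rng or np.random.default_rng(0)
    adv = image.copy()
    best = black_box_score(adv, target_template)
    for _ in range(steps):
        # Propose a small change to one randomly chosen pixel.
        idx = tuple(rng.integers(0, s) for s in adv.shape)
        candidate = adv.copy()
        candidate[idx] = np.clip(candidate[idx] + rng.choice([-eps, eps]), 0.0, 1.0)
        score = black_box_score(candidate, target_template)
        if score > best:  # greedy: keep only improving changes
            adv, best = candidate, score
    return adv, best

rng = np.random.default_rng(0)
start = rng.random((8, 8))    # toy "helicopter" image
target = rng.random((8, 8))   # toy "rifles" template
adv, score = random_search_attack(start, target, rng=rng)
print(score >= black_box_score(start, target))  # greedy search never makes the score worse
```

In practice, each call to the scoring function would be one query to the remote API, which is why the real attack needed on the order of a million queries to transform an image.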
The Bottom Line
It is fortunate that this hack was carried out by the MIT team rather than by malicious attackers, but it still raises questions about how much we can trust AI.