When Microsoft's AI chatbot Tay was turned into a racist and genocidal application, the company had to shut it down within 24 hours, so it is more than speculation to presume that an AI can be turned 'evil'. A group of researchers from the Massachusetts Institute of Technology (MIT) has taken this a step further, deliberately corrupting an AI with biased data and naming it Norman, in homage to the villain of Alfred Hitchcock's Psycho.
The team pulled images and captions from an infamous subreddit dedicated to death and violence, and incorporated them into how the AI describes objects. They then presented Rorschach inkblots to Norman, analysed its responses, and compared them with those of a standard image-captioning neural network; the results were alarming. Where the standard network saw, for example, an open umbrella, Norman saw a wife screaming in grief as her husband died. With the darkest corner of Reddit as its only dataset for interpreting the inkblots, the corrupted AI was blind to any possibility besides pain and murder, so it reached only morbid conclusions; Norman was destined to become monstrous once exposed to the controlled image set.
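The mechanism at work is simple: a model can only reproduce patterns present in its training data. The deliberately simplistic sketch below (hypothetical data, not Norman's actual training set or architecture) illustrates the point with a toy "caption model" that just returns the sentiment label it saw most often during training, so a model fed only morbid captions can produce nothing else:

```python
from collections import Counter

def train(captions):
    """Return a trivial 'model': a function that always outputs the
    sentiment label seen most often in the training captions."""
    counts = Counter(label for _, label in captions)
    majority = counts.most_common(1)[0][0]
    return lambda image: majority  # the input image is ignored entirely

# Hypothetical training sets: one poisoned, one balanced.
biased_data = [
    ("inkblot_1", "morbid"),
    ("inkblot_2", "morbid"),
    ("inkblot_3", "morbid"),
]
balanced_data = [
    ("inkblot_1", "neutral"),
    ("inkblot_2", "morbid"),
    ("inkblot_3", "neutral"),
]

norman_like = train(biased_data)
standard = train(balanced_data)

print(norman_like("open umbrella"))  # morbid
print(standard("open umbrella"))     # neutral
```

Real captioning networks are vastly more complex, but the failure mode is the same: when every training example is violent, the model's "most likely caption" for any image is violent too.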
The researchers at MIT have shown that any AI develops its own interpretive criteria and can be corrupted depending on where its data comes from. AI is widely used for facial and image recognition by companies such as Facebook, which also uses Instagram captions to teach its AI how to interpret images. Norman's creators have asked people to submit their own interpretations of the inkblots via a Google Doc, with the aim of 'fixing' Norman. This research can help AI creators determine how to remove bad data sources from their datasets and ensure that the AI they build is as untainted and unbiased as possible.
Feel free to comment on this article.