As the old saying goes: garbage in, garbage out. AI is only as good as the data it’s trained on, and if that data is bad, or depicts bad stuff, then the AI is going to see the world as bad, too. Common sense already tells us this, but that didn’t stop a group of MIT researchers from seeing how an AI trained on gory, gruesome images of people dying, culled from Reddit, would interpret standard Rorschach ink blot images. Not surprisingly, the “Norman” model, named after the psychopathic killer in Alfred Hitchcock’s Psycho, saw death and destruction in a series of ink blots, while a similar AI trained on conventional images such as non-dying cats, dogs, and landscapes saw far more positive things going on in the same images. The experiment shows how easily today’s narrowly focused machine learning models can be biased, and it’s a reminder of just how far we are from general AI. So don’t worry: there’s no way Norman will ever be deployed to keep you company. He’s not only a buzzkill; he’s basic.