What Do Human Brains and Artificial Machines Have in Common?


Behind many of today’s conveniences, from smartphones that seamlessly recognize our voice commands to semi-autonomous vehicles and facial recognition, lies good artificial intelligence (AI). And behind good AI, there’s often machine learning. Computers have a number of techniques for making sense of the world around them, but one of the most popular today is the neural network, a machine learning approach loosely inspired by the human brain.

Like the squishy stuff between our ears, neural networks consist of anywhere from dozens to thousands of connected neurons (stack enough layers of them together and you get what’s known as deep learning). Unlike the human brain, where neurons are interconnected every which way but loose, artificial neurons (essentially simple mathematical functions running in software, not physical components on a chip) are stacked sequentially in rows known as layers, and each layer is usually connected to the layer of neurons immediately before and after it via inputs and outputs.
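To make the idea of layers concrete, here’s a minimal sketch in Python. Everything in it is invented for illustration (the layer sizes, the random weights, the ReLU activation function); it simply shows neurons arranged in layers, with each layer feeding its outputs forward as the next layer’s inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four layers: an input layer, two hidden layers, and an output layer.
# These sizes are made up for illustration.
layer_sizes = [4, 8, 8, 2]

# Each pair of adjacent layers is joined by a grid of connection weights;
# each neuron also carries a bias term.
weights = [rng.standard_normal((n_in, n_out)) * 0.1
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n_out) for n_out in layer_sizes[1:]]

def forward(x):
    """Pass an input through every layer in sequence."""
    activation = x
    for w, b in zip(weights, biases):
        # Each neuron sums its weighted inputs, adds its bias, and applies
        # a simple nonlinearity (ReLU) before passing the result onward.
        activation = np.maximum(0.0, activation @ w + b)
    return activation

print(forward(rng.standard_normal(4)))  # two output values, one per class
```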

THE HUMAN BRAIN’S BUILDING BLOCKS


Neural networks learn mainly by ingesting and processing vast quantities of prelabeled data sets—images, video, search queries, and whatever else you want a computer to understand—a process known as supervised learning. The more a neural network looks at and analyzes the training data, the more refined its capabilities become. It’s not unlike the way a child learns how to identify and understand the world. (The average three-year-old has seen hundreds of millions of pictures of the real world, according to computer scientist Fei-Fei Li, who runs the computer vision lab at Stanford.)

Let’s say you want to teach your neural network to identify apples and oranges in pictures, a process known as image recognition. To do this you’ll need to feed the network thousands and thousands of pictures of apples and oranges, each labeled “apple” or “orange,” depending on what’s in the picture.
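In code, that training set amounts to a long list of pictures paired with labels. The sketch below is only a stand-in: the file names are made up and the pixel data is faked with random numbers so the example runs on its own, but the shape of the data, pixel values plus an “apple” or “orange” label, is the point.

```python
import numpy as np

rng = np.random.default_rng(1)

# Map each human-readable label to a number the network can predict.
label_to_index = {"apple": 0, "orange": 1}

def load_pixels(path):
    # Stand-in for a real image loader (e.g., Pillow's Image.open):
    # here we just return a flat array of fake pixel intensities
    # for a hypothetical 32x32 color image.
    return rng.random(32 * 32 * 3)

# Every training example is a (pixels, label) pair; a real data set
# would contain thousands upon thousands of these.
dataset = (
    [(load_pixels(f"apple_{i:03d}.jpg"), label_to_index["apple"]) for i in range(3)]
    + [(load_pixels(f"orange_{i:03d}.jpg"), label_to_index["orange"]) for i in range(3)]
)

for pixels, target in dataset:
    print(pixels.shape, target)  # pixel values plus the label the network must learn
```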

BEGINNING TO MAKE SENSE OF PIXELS


At first, the network treats images simply as digital pixels, literally tiny dots, each one just a number (or a few numbers) describing its color and brightness. Using math and various methods of pattern recognition and mapping, it starts to develop an idea of the arrangements of pixels that define what it means to be an apple and what it means to be an orange. This happens over the course of several layers of interconnected neurons, each one acting as a different sort of filter that helps the network learn the defining visual characteristics of apples and oranges, things like stems, the color red, and round shapes, and refine the identification process.
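One way to picture a single such “filter” at work is the rough sketch below: a small grid of numbers slides across an image’s pixels and responds strongly wherever a particular pattern appears. The filter values here are hand-picked to detect a vertical edge; in a real network, training would learn them.

```python
import numpy as np

# A toy "picture": a 6x6 grid of pixels, dark on the left, bright on the right.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# A tiny hand-picked filter that responds to left-to-right jumps in brightness.
edge_filter = np.array([[-1.0, 1.0],
                        [-1.0, 1.0]])

# Slide the filter over every position in the image and record its response.
h, w = edge_filter.shape
response = np.zeros((image.shape[0] - h + 1, image.shape[1] - w + 1))
for i in range(response.shape[0]):
    for j in range(response.shape[1]):
        patch = image[i:i + h, j:j + w]
        response[i, j] = np.sum(patch * edge_filter)

print(response)  # large values mark where the edge pattern was found
```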

SELF-SUSTAINING IDENTIFICATION

In the initial stages, the predictions the neural network makes in its final “output” layer tend to be way off; when analyzing a picture of an apple, the network might say it’s 10 percent apple and 90 percent orange. That’s clearly the wrong answer, but through a process known as backpropagation, the network can work backward from each mistake, adjusting the strength of the connections between neurons in the middle “hidden” layers so the same mistake is less likely next time. And the more pictures the network sees, the better it refines this process, until its predictions reach an acceptable level of accuracy (usually about 85 percent correct).
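Here’s a compressed sketch of that loop in Python, with made-up two-dimensional points standing in for apple and orange pictures. The architecture, learning rate, and data are all invented for illustration; what matters is the rhythm: the network guesses, the error between guess and label flows backward through the layers, and every connection weight gets nudged so the next guess comes out a little better.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: class 0 points cluster near (0, 0), class 1 points near (1, 1).
x = np.vstack([rng.normal(0.0, 0.2, (50, 2)), rng.normal(1.0, 0.2, (50, 2))])
y = np.array([0.0] * 50 + [1.0] * 50)

w1 = rng.standard_normal((2, 8)) * 0.5   # input -> hidden weights
b1 = np.zeros(8)
w2 = rng.standard_normal(8) * 0.5        # hidden -> output weights
b2 = 0.0
lr = 0.5                                 # how big a nudge each mistake produces

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    # Forward pass: compute the network's prediction for every example.
    hidden = np.maximum(0.0, x @ w1 + b1)      # hidden layer (ReLU)
    prediction = sigmoid(hidden @ w2 + b2)     # output between 0 and 1

    # Backward pass: push the prediction error back through each layer
    # and work out how much each weight contributed to it.
    error = prediction - y                           # output-layer error
    grad_w2 = hidden.T @ error / len(y)
    grad_b2 = error.mean()
    hidden_error = np.outer(error, w2) * (hidden > 0)  # error reaching the hidden layer
    grad_w1 = x.T @ hidden_error / len(y)
    grad_b1 = hidden_error.mean(axis=0)

    # Nudge every weight and bias against its gradient.
    w1 -= lr * grad_w1
    b1 -= lr * grad_b1
    w2 -= lr * grad_w2
    b2 -= lr * grad_b2

accuracy = ((prediction > 0.5) == (y == 1)).mean()
print(f"training accuracy after 200 passes: {accuracy:.0%}")
```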

And amazingly, the network never learned to identify apples and oranges from descriptions of the fruits’ characteristics. It wasn’t told that apples have stems or come in red and green, or that oranges are round. It simply taught itself by looking at a multitude of pictures labeled “apple” or “orange” and working out what each group had in common. That’s why it’s called an artificial neural network: it’s the “artificial” in artificial intelligence.

Illustrations by Ellen Porteus