The computer vision systems in semi- and fully autonomous cars are getting better every day at identifying what kinds of people and objects are in their way, but they struggle with the next level: inferring a pedestrian's intent, such as whether they plan to cross the road. The problem is even harder when the view is partially obscured by fog or branches. Perceptive Automata has found an interesting way to train its machine learning models to perceive intent: by watching the reactions of humans as they try to figure out what pedestrians are doing in partially obscured pictures. By tracking eye movement and hesitation in these human observers, the system learns that certain pixel groupings around the pedestrian warrant further examination. It turns out these machines might learn a thing or two from humans.
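One way to picture this idea is to train a model against "soft" labels that encode human uncertainty, rather than hard yes/no answers. The sketch below is purely illustrative and assumes nothing about Perceptive Automata's actual system: it invents toy features and a made-up mapping from observer hesitation to a probability that a pedestrian intends to cross, then fits a simple logistic model to those soft labels.

```python
# Hypothetical sketch: fitting a model to soft labels that stand in for
# aggregated human judgments of pedestrian intent. All names, numbers,
# and the data itself are illustrative, not Perceptive Automata's method.
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for features extracted from partially obscured images.
X = rng.normal(size=(200, 5))
true_w = np.array([1.5, -2.0, 0.5, 0.0, 1.0])

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Soft labels in (0, 1): imagine each value as the fraction of human
# observers who judged "this pedestrian intends to cross" -- hesitation
# keeps the labels away from confident 0 or 1.
p_human = sigmoid(X @ true_w) * 0.8 + 0.1

# Gradient descent on cross-entropy pulls the model's predicted
# probability toward the human uncertainty, not toward hard labels.
w = np.zeros(5)
for _ in range(500):
    p = sigmoid(X @ w)
    grad = X.T @ (p - p_human) / len(X)
    w -= 0.5 * grad

pred = sigmoid(X @ w)
mae = float(np.mean(np.abs(pred - p_human)))
print(f"mean gap between model and human labels: {mae:.3f}")
```

The design point is that the training target is a distribution over human responses rather than a single ground-truth answer, so the model's confidence can mirror the observers' confidence when a view is ambiguous.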