COMPUTER VISION 2017: THE YEAR IN REVIEW
The Trouble With Power-Hungry Self-Driving Cars (Wired)
Cameras, radar and on-board computer processing use up an extraordinary amount of power in semi- and fully autonomous vehicles: as much as 2,500 watts, enough to power 40 incandescent lightbulbs, according to Wired. New platforms from Nvidia and other manufacturers aim to tackle the perennial challenge of more and faster processing that uses less energy. Hurry up, because we’re not even close to having enough e-charging stations to handle the current EV hordes.
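Wired's lightbulb comparison holds up as rough arithmetic, assuming a standard 60 W incandescent bulb (the per-bulb wattage is our assumption, not stated in the article):

```python
# Sanity check of the comparison: how many incandescent bulbs does a
# 2,500 W sensing-and-compute load correspond to?
# The 60 W per-bulb rating is an assumed typical value, not from the article.
compute_load_watts = 2500
bulb_watts = 60  # assumed standard incandescent rating

bulbs = compute_load_watts / bulb_watts
print(f"{bulbs:.1f}")  # 41.7 -> roughly 40 bulbs
```

At a 60 W rating the load works out to about 42 bulbs, so "40 incandescent lightbulbs" is in the right ballpark.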
Catching Poachers in Real Time (USC)
Infrared-camera-equipped drones patrol national parks in Malawi and Zimbabwe at night, when most of the poachers they are looking for tend to be active. But poring over hours and hours of infrared footage is not only challenging, it’s also too slow to stop the illegal hunters in real time. After labeling 180,000 infrared images of animals and humans, scientists at USC trained an algorithm that distinguishes humans from animals in those images in under half a second, on any laptop, even with intermittent coverage. The algorithm, known as SPOT (Systematic POacher deTector), will launch first in Botswana.
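The article doesn't describe SPOT's actual model, so as a purely illustrative toy, here is one way a human-vs-animal decision over a thermal frame could be sketched: threshold the frame into a warm "blob" and score the blob's shape. Every function, threshold, and the upright-blob heuristic below are invented for illustration, not taken from USC's system.

```python
import numpy as np

# Toy illustration only: SPOT's real model is not described in the article.
# This sketch thresholds a synthetic infrared frame into a warm blob and
# labels it with a crude aspect-ratio heuristic (taller than wide ~ human).

def warm_mask(frame, temp_threshold=0.7):
    """Boolean mask of pixels hot enough to be a living body (threshold is arbitrary)."""
    return frame > temp_threshold

def blob_bbox(mask):
    """Bounding-box (height, width) of the warm region, or None if nothing is warm."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    return (ys.max() - ys.min() + 1, xs.max() - xs.min() + 1)

def classify(frame):
    """Label a single-blob frame 'human' if the blob is taller than it is wide."""
    box = blob_bbox(warm_mask(frame))
    if box is None:
        return "empty"
    h, w = box
    return "human" if h > w else "animal"

# Synthetic frames: a tall narrow hot blob vs. a long low one.
upright = np.zeros((64, 64))
upright[10:40, 30:36] = 1.0        # 30 px tall, 6 px wide
print(classify(upright))           # human

grazing = np.zeros((64, 64))
grazing[30:40, 10:40] = 1.0        # 10 px tall, 30 px wide
print(classify(grazing))           # animal
```

A real detector would of course learn its features from the 180,000 labeled images rather than rely on a hand-written shape rule; the point of the sketch is only the pipeline shape: threshold, localize, classify, all fast enough to run per-frame on a laptop.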
Home Depot’s Foray Into Computer Vision for Home Improvement (Concrete Products)
Home Depot and Google are investing in Hover, a technology that applies computer vision to photos of a building to create an interactive 3D model of it. The idea is to eventually eliminate everything from laborious tape-measure surveys to inaccurate contractor estimates, and it could also speed up insurance claims with more accurate measurements. It presumably could cut down on buyer’s remorse and restocking headaches for the home improvement retailer as well, since purchases of wood, siding, doors and the like would be more accurate.
The “Dark Side” of AI (Quartz)
It’s hard to believe it took only two years for bad actors to turn Google’s gift of the TensorFlow software library to nefarious purposes, developing tools such as FakeApp that automatically and realistically superimpose anyone’s face and voice into any video, then sharing the results on subreddits. It’s the “dark side of open source,” as Quartz writes, but eliminating open source isn’t the solution. For starters, open source providers can offer guidance and disclaimers around ethical uses of their software, and the platforms where this altered video shows up can also play a role. Either way, AI-generated fake video is increasingly likely to become as big a scourge as all the other fact-challenged content out there today.