
Will AI Replace Humans? (And Other Takeaways From Collision 2017)

Since CES 2017 kicked off in January, Artificial Intelligence (AI) has permeated every tech gathering, from Mobile World Congress to SXSW to the recent Collision 2017 conference. Held in New Orleans in early May for the second year in a row, Collision gathers an unusual but vibrant mix of nearly 20,000 journalists, marketers, entrepreneurs, and entertainers, many with bold-faced names, in a three-day extravaganza of mini-conferences, panel discussions, startup demos, and networking. USA Today called it “the anti-CES” last year, and that is true in many ways. Attendees are spared the tidal wave of new hardware product launches, for example; the show floor is mostly software and service startups. But one characteristic stands out more than anything: it’s geographically compact, fitting almost entirely into a single hall of the Ernest N. Morial Convention Center. And yet it somehow manages to pack in a half-dozen or so separate conferences-within-the-conference on music, robotics, the environment, creativity, technology, and marketing, as well as host 605 startups that rotate on and off the show floor each day. For a single attendee, it’s impossible to cover in its entirety. Attend a panel, browse the startups, or just blink, and you’ll miss everything else going on that minute. Like the technology on which it focuses, Collision keeps you on your toes.

One topic made its way across all of these disciplines: AI and its implications for business and the world, the promise and the peril, including the role of humans in the future. Will AI take our jobs? When will machines be as good as or better than us? Who will be the winners and losers in an increasingly automated world, and what are the best strategies for navigating the road ahead? Read on for a few AI takeaways from the Collision 2017 talks and show floor.

There’s AI and then there’s AI

Given all the hype around AI and deep learning, there’s increasing talk about the difference between narrow (or specialized) and general artificial intelligence. The former centers on deep learning applied to specific verticals: object recognition, speech recognition, and so on. The latter is closer to what we’d associate with and expect of artificial intelligence at the movies: a machine that can react and be smart about pretty much any topic at any time, and that takes into account the many other human traits that affect decision making, from emotion and common knowledge to psychology. This kind of AI doesn’t exist yet, as anyone who has tried asking Amazon’s Alexa or Apple’s Siri anything outside of boilerplate factual requests can attest.

“[AI] still has problems with understanding language, common sense reasoning, inference in general,” said NYU psychology professor Gary Marcus in the ‘Where Do Bots Go From Here?’ panel. “Machines are very good with speech perception, but no one has a deep learning natural language system that can understand what this conversation right now is about and answer queries about it. When you talk to Siri today, it’s still mostly one sentence at a time.” In other words, a computer probably can’t review an arbitrary movie or give a hot take on a conference talk on the fly, unless it’s been specifically trained on those tasks. “I think a bot that’s as flexible in conversation as the Scarlett Johansson character in Her is 20 or 30 or 40 years away,” Marcus predicted.

Not surprisingly, the expectation of generalized AI can be a thorn in the side of any company that says it’s using AI. “There’s a big motivation to get to generalized AI, but nobody has that today,” said SoundHound CEO and founder Keyvan Mohajer at the ‘AI Where You’d Least Expect It’ panel. “If as a consumer you’re expecting that kind of interaction, you will always be disappointed; and if you have a product and you make that promise, you will disappoint your users.” Narrow AI, the kind that still requires a lot of data just to work on one specific task, is what virtually all companies using AI are working with today. And that requires setting the right expectations for clients and customers. “Part of our effort is always the education of users to not expect that they’re talking to a human,” Mohajer said. Companies building their missions, marketing or otherwise, around artificial intelligence need to spell out their AI product’s specific capabilities rather than toss the label around loosely and nebulously.

Automation and its discontents

Among the big fears around AI and automation is the loss of jobs, a fear not without foundation, as everyone from stockbrokers and bus drivers to media buyers and personal assistants may well be replaced by machines in the future. However, as with any technological shift, AI will create new jobs too, and augment existing ones. “People who understand computational thinking—that will be a growth area,” said Wolfram Research founder and CEO Stephen Wolfram in a panel entitled ‘Is There a Future for Humans?’ That obviously means engineers, data scientists, and computer scientists who can run, iterate on, and optimize the algorithms, but it also means anyone who knows how to interpret data in a creative way.

Creative jobs, especially in advertising and marketing, may flourish thanks to increased data-driven insight into what works in campaigns, which can then inform the ideation and execution of future ones. “When it comes to creative development and that question of humans versus machines and what role the data plays in all that, it very much depends on how big the creative is that you’re creating, whether it’s many pieces of small content that are segmented and targeted versus a large-scale campaign that you’re trying to get to capture an entire audience population,” said Deloitte Digital CMO Alicia Hatch in the ‘Data-Driven Marketing: Getting the Balance Right’ panel. In other words, big-picture creative ideation and execution requires big-picture thinking, which can certainly be informed by machine-culled insights, but also by the old-fashioned human gut. For now, there’s still a lot of data machines can’t provide.

“We’ve learned in the last 10 years of neuroscience that our decisions are mostly driven by our emotions, and we’re not capturing well a lot of emotional data,” said Hatch. “Our emotional intelligence is the ability to capture the heat I’m emitting from my hand into my phone as I’m interacting with a piece of content, or the pheromones that I’m releasing. Whatever those indicators are, they become that much more powerful for big idea, ‘big C’ creative.”
