While artificial intelligence has been kicked around academic computer science departments for decades, it’s enjoying an unprecedented public moment as the fruits of machine learning and neural networks become an inescapable part of our daily lives.
At CES 2019, products that leverage some form of artificial intelligence were expected to be ubiquitous.
And, according to analysts tracking the development of artificial intelligence, we’d better get used to it.
[Is artificial intelligence friend or foe to radio?]
IN THE RECIPE
To understand how AI will evolve, it helps to think of it less as a thing by itself and more as an “ingredient technology,” said Sayon Deb, senior research analyst at the Consumer Technology Association. Like salt, it will be sprinkled liberally into a wide range of products, software and services, but not in the same way or to the same degree. Asking how big the demand for AI will be in consumer and business markets is a bit like asking about the demand for USB ports, Deb noted. “It’s so large because it’s everywhere.”
In the near term, look for AI-powered improvements to arrive in any device that uses sensors to interact with the real world, particularly via voice-based interfaces, predicted Bob O’Donnell, president and chief analyst at TECHnalysis Research. Advances in natural language processing will enable devices such as smart speakers to better understand and respond to verbal commands. Those advances will also bring voice interaction to new product categories. The spread of Amazon’s Alexa is a good example of the trend, Deb said.
Any device with a camera will benefit from advances in machine vision and neural network-powered object classification, enabling cameras to differentiate objects in a scene, recognize human faces and more. Home security cameras, for instance, can learn to distinguish homeowners from visitors and analyze exterior behavior for signs of trouble, O’Donnell said. While sophisticated facial-recognition technologies do raise privacy concerns, some of the early use cases (like unlocking your phone) have proven very popular among consumers, Deb added.
One of the big shifts that’s underway concerns how AI devices acquire knowledge.
Today, much of the machine learning that powers AI capabilities is performed in the cloud, where developers can harness massive amounts of computing power and ingest huge data sets that no local desktop or tiny electronic device has the memory or processing power to cope with. The results of this learning get loaded onto so-called edge devices (your security camera, your smart speaker) which then interact with the world, but no longer acquire any new knowledge about it.
IT’S IN THE CONTEXT
But edge devices will increasingly be able to perform their own local learning, O’Donnell said.
Chips from NVIDIA, AMD, Qualcomm and others are increasingly capable of running AI algorithms and conducting some limited local learning without a server farm. This improvement in edge-device intelligence will mean a more personalized experience — devices that are smart enough to learn your unique patterns and even attempt to anticipate them, O’Donnell said. You could, for instance, have user interfaces on devices that refine themselves based on real-time feedback from the user.
[Need to Know: AI and Machine Learning]
This so-called “contextual awareness” will be extremely important for autonomous vehicles and personalized robotics as well, O’Donnell said. Both need an immense amount of data to navigate on their own, but the real world constantly throws new data at them. Vehicles and robots that can perform localized learning and then upload those results to the cloud will help in the collective effort to make robotic devices more intelligent.
This two-way communication does raise privacy concerns, particularly when it comes to the kind of granular, location-based data that contextually aware devices can generate, O’Donnell said. Ironically, the better devices get at edge learning, the less they’ll need to send personalized data up to the cloud, he added.
While personalized devices grow more responsive, AI will also be leveraged by more businesses to automate and augment work previously done by humans. According to a recent report from Forrester Research, natural language processing will combine with robotic process automation to build more responsive chatbots, organize unstructured business data and automate a variety of business tasks.
This business automation naturally gives rise to concerns that as AI gets smarter, we’ll collectively be automated out of our jobs. One widely cited study from Oxford University’s Martin School noted that 47 percent of jobs, including many white-collar professions, are vulnerable to automation. CTA’s Deb sees those fears as unfounded, at least for now: What studies like the one from Oxford can’t measure is the new jobs that AI may create, he said.
“There’s bound to be growing pains,” Deb said, “but the potential of AI is boundless.”
This article originally appeared in the CES 2019 Daily.