Do you use Waze for navigation? Customer support chatbots? Apple voice memos? Facebook photo auto-tagging? Electronic check deposits? Amazon purchase recommendations? Grammarly writing tips? LinkedIn language translation? Google search? If so, then you already use artificial intelligence (AI) nearly every day.
AI systems have the capacity to reason and learn — attributes typically associated with humans. These differ from traditional computer systems, whose outputs are deterministic and wholly the result of machines following a sequence of human-programmed instructions. You may remember the simple but illustrative peanut butter and jelly sandwich programming exercise. I can tell my son, “Put peanut butter and jelly on the bread,” and have him basically get it right. But a computer can’t make sense of this complex instruction, which would result in heavy jars being placed on top of a loaf of bread. Instead, we must be specific and deliberate: “Open the peanut butter jar, using a knife scoop out two tablespoons of peanut butter, then spread that on one slice of bread….”
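The deterministic nature of traditional programming can be sketched in a few lines of code. This is a purely illustrative toy (the function name and steps are invented for this example): the machine executes exactly the instructions it is given, in order, and nothing else.

```python
# A traditional program is a fixed, explicit recipe: the machine does
# exactly what each instruction says, nothing more, nothing less.

def make_pb_and_j():
    """Return the explicit, ordered steps a computer would need."""
    steps = [
        "Open the peanut butter jar",
        "Scoop out two tablespoons of peanut butter with a knife",
        "Spread the peanut butter on one slice of bread",
        "Open the jelly jar",
        "Spread one tablespoon of jelly on the other slice",
        "Press the two slices together",
    ]
    return steps

# Every run produces the identical sequence: deterministic output,
# no interpretation, no learning.
for step in make_pb_and_j():
    print(step)
```

Run it a thousand times and the output never changes; the program has no capacity to "basically get it right" from a vague instruction the way a person can.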
But an AI system is different. Instead of responding solely to pre-programmed instructions, it learns and gets better and better over time. The computer science field of AI has been inspired by the work of biologists and physiologists of the early 1900s who studied the brain, identifying neurons, axons, and synapses that transmit signals and are connected with memory and learning. (The human brain is sometimes revered as an “ultimate computer” — with remarkable power efficiency and multi-modal parallel processing capability. Yet even today we still have a fairly rudimentary understanding of exactly how the brain works, and no AI system comes close to replicating brain function.)
Mathematicians and computer scientists created “artificial” neural networks, abstractions intended to mimic a learning function. These artificial neural networks must be “trained” with labeled data before they are useful. For instance, images of cats and dogs might be fed in and the system will “learn” which is which. Then, after sufficient training, the neural network can perform on its own: give it a new picture of a dog or cat — one it has never seen before — and it will be able to differentiate quickly and with near-perfect accuracy. Let’s be clear: no programmer told the network that cats have whiskers or dogs have longer snouts; there is no algorithm that measures the spacing between the eyes and infers the species. Instead, the AI system has learned, and learned well, what constitutes an image of a dog or cat. But give that same system pictures of apples and bananas and it will hiccup because it hasn’t been trained on fruit.
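The train-then-generalize loop described above can be sketched with a toy supervised learner. To keep it self-contained, this sketch uses a simple nearest-centroid classifier on made-up two-number feature vectors (stand-ins for image pixels), not a real neural network; the data and labels are invented for illustration. The key point survives the simplification: no rule about cats or dogs is ever written down; the model infers the distinction from labeled examples and then labels an input it has never seen.

```python
# Toy supervised learning: the classifier is never told any rules;
# it infers them from labeled examples, then labels unseen inputs.

def train(examples):
    """examples: list of (feature_vector, label). Returns one centroid
    (the average feature vector) per label -- that IS the learned model."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [total / counts[label] for total in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Return the label whose centroid is closest (squared distance)."""
    def dist(label):
        return sum((c - x) ** 2 for c, x in zip(centroids[label], features))
    return min(centroids, key=dist)

# Made-up 2-D features; imagine them as compressed summaries of photos.
training_data = [
    ([1.0, 1.2], "cat"), ([0.9, 1.0], "cat"), ([1.1, 0.8], "cat"),
    ([3.0, 3.1], "dog"), ([2.8, 3.3], "dog"), ([3.2, 2.9], "dog"),
]
model = train(training_data)
print(predict(model, [1.0, 0.9]))  # an unseen "cat"-like input -> cat
```

And, just as the text notes, feed this model a feature vector from a banana photo and it will still confidently answer "cat" or "dog" — it can only choose among the labels it was trained on.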
Ever-more-sophisticated artificial neural network designs have been used to rapidly advance the field of AI. Networks with many layers of artificial neurons are known as deep neural networks; the field that leverages such networks is known as deep learning. In the past decade, deep learning has exploded, with rapid progress driven by the availability of large amounts of training data, advances in specialized computing technology, and new algorithms. Together these have allowed AI to achieve super-human accuracy for tasks like language translation, speech transcription, and image classification. And this has enabled all those remarkable consumer applications like automated search and photo tagging.
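What makes a network "deep" is simply depth: many layers of simple artificial neurons stacked so each layer's output feeds the next. A minimal forward pass makes the structure concrete — the weights here are hand-set and arbitrary (a real network would learn them from data), so this is a sketch of the wiring, not a working model.

```python
# A "deep" network is many simple layers composed together.

def relu(x):
    """A common activation: pass positive signals, zero out negatives."""
    return max(0.0, x)

def layer(inputs, weights, biases):
    """One dense layer: each neuron takes a weighted sum of all inputs,
    adds its bias, and applies the activation."""
    return [relu(sum(w * x for w, x in zip(neuron_w, inputs)) + b)
            for neuron_w, b in zip(weights, biases)]

def forward(x, layers):
    """Feed the input through each layer in turn; depth = len(layers)."""
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

# Three stacked layers with arbitrary hand-set weights: a toy deep net.
net = [
    ([[0.5, -0.2], [0.1, 0.8]], [0.0, 0.1]),  # layer 1: 2 inputs -> 2
    ([[1.0, 0.3], [-0.4, 0.6]], [0.1, 0.0]),  # layer 2: 2 -> 2
    ([[0.7, 0.7]], [0.0]),                    # layer 3: 2 -> 1 output
]
print(forward([1.0, 2.0], net))
```

Modern deep networks follow exactly this compose-layers pattern, just with millions of learned weights and dozens or hundreds of layers — which is why large training datasets and specialized hardware mattered so much to the field's takeoff.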
Innovators are pushing the frontiers of learning and reasoning, pursuing new approaches to make AI more broadly applicable and less reliant on massive amounts of training data. Techniques like neuro-symbolic AI show promise for combining deep learning with symbolic representations to enable machine reasoning. Enterprise applications — particularly for regulated industries like financial services and healthcare — demand advances in AI security, transparency, and lifecycle management.
In all cases, a focus on trust is essential. For AI systems to provide insights that support human decision-making, we must be able to trust both the system and its output. Developers and users must understand how a system works and what data was used to train it, and they must feel confident in its security and reliability. We need assurance that the technology is fair, unbiased, and will not cause harm. Responsible innovation also means bringing the benefits to everyone, not just a select few.
While AI has become pervasive, we are a far cry from fully autonomous AI systems. AI is not everywhere in the frightening “AI overlord” sense brought to life by science fiction writers and creative movie producers. HAL 9000 from 2001: A Space Odyssey (1968) is the archetype of this sentient general intelligence system. Movies like The Terminator (1984), The Matrix (1999), and Avengers: Age of Ultron (2015) paint a picture of a dystopian future in which AI-powered machines overpower humanity. I found Dan Brown’s depiction of the powerful AI personified as “Winston” in his novel Origin (2017) particularly compelling.
However, as much as I understand the captivating nature of these pop culture stories, we must also take care to separate fact from fiction. Just as we recognize that the alien invasions in E.T. and Arrival are wholly imagined, we must acknowledge the fictitious nature of HAL and Winston. Certainly, there is cause for pause — for thoughtful, responsible AI technology development and deployment. But we should also appreciate that AI today is not a runaway train, and we should not live in abject fear of chatbots taking over the world. Instead of being dreaded, advances in AI should be celebrated for the promise they bring to help society tackle our most pressing problems — from enhancing decision-making to improving healthcare and addressing the climate crisis.