On February 11, 2019, U.S. President Donald Trump signed an executive order launching the American Artificial Intelligence Initiative, which will focus federal resources on the development of AI. The executive order outlines five key areas of focus: research and development, availability of data and resources, ethical standards and governance, education, and international collaboration that also protects American interests.
“No advance has captured our imagination more than artificial intelligence,” Michael Kratsios, deputy U.S. chief technology officer, wrote in an opinion article for Wired ahead of the official signing. The field holds promise for addressing problems in defense, transportation, medicine, and more. But along with that promise comes a slew of concerns, an issue this latest order is intended to address, according to Kratsios.
The U.S. “must act now to ensure this innovation generates excitement, rather than uncertainty,” he writes.
Much remains to be seen about what this executive order actually means for the future of AI. For one, the exact amount of funding the White House might request for such advancements remains unknown, Science magazine reports. Details about enacting any of these measures and tracking progress are also unclear, the New York Times reports. Ready or not, though, AI already pervades our world—from popup ads to bank loans.
“A lot of decisions are being made by these systems, and that's why people are concerned,” notes Janelle Shane, a researcher and AI-humor blogger at AIweirdness.com. But AI's current capabilities are far from what you see in any science fiction movie.
You might have some questions about this new initiative and the technology involved. What does AI really do? Are robots coming after my job? We've got you covered.
What is artificial intelligence?
Artificial intelligence is a field of computer science focused on the development of systems that can “think” independently to solve problems and learn over time. This differs from, say, an automated factory in which assembly-line robots execute specific pre-programmed tasks.
Today's AI algorithms are trained on broad data sets, seeking patterns that describe past data and predict data that's yet to be seen. One widely used tool in this field is the artificial neural network, a machine-learning framework loosely inspired by the human brain. Each node of the system acts like a neuron, reacting to and processing input from other nodes to work toward an answer.
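A single node of the kind described above can be sketched in a few lines of code. This is a minimal illustration, not any particular production system; the weights, bias, and input values are all invented for the example.

```python
# Minimal sketch of one artificial "neuron": it weighs its inputs,
# sums them, and passes the total through an activation function
# that decides how strongly the node "fires".
import math

def sigmoid(x):
    # Squashes any number into the range (0, 1).
    return 1 / (1 + math.exp(-x))

def neuron(inputs, weights, bias):
    # Weighted sum of signals arriving from other nodes, plus a bias term.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# Two upstream nodes feed this one; the weights encode what it has "learned".
output = neuron([0.5, 0.8], [0.4, -0.2], bias=0.1)
print(round(output, 3))
```

A full network is just many of these nodes wired together in layers, with training algorithms adjusting the weights until the network's answers match the data.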
There are three basic ways to train AI systems, explains Kasia Kozdon, a Ph.D. student at University College London who specializes in bio-inspired AI. The first is supervised learning, in which a person knows—or thinks they know—the correct answer and feeds that to the AI system.
“Basically, you train the AI to agree with you,” she says.
The second type is unsupervised learning, in which the AI must find the answer for itself by identifying patterns and relationships in sets of data. Finally, there's reinforcement learning, in which the system learns through trial and error, guided by rewards. One well-known example of this is Google's AI that mastered the supremely difficult Atari game Montezuma's Revenge. The AI was never told how to get points, but it was designed to be rewarded for curiosity, so it worked out the game's rules and how to advance all on its own.
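The reinforcement-learning idea, learning from rewards rather than from labeled answers, can be sketched with a toy problem. This is a hypothetical five-state "corridor" world, not the Atari system described above; all states, actions, and parameters are invented for illustration.

```python
# Toy Q-learning sketch: an agent in a row of five states, rewarded
# only for reaching the last one. Nobody tells it which way to go;
# it learns from trial, error, and the reward signal alone.
import random

random.seed(0)

N_STATES, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # value estimates: [left, right] per state
alpha, gamma, eps = 0.5, 0.9, 0.2           # learning rate, discount, exploration rate

for episode in range(200):
    s = 0
    while s != GOAL:
        # Mostly pick the best-known action, but sometimes explore at random.
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda x: Q[s][x])
        s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
        reward = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: nudge the estimate toward the reward
        # plus the discounted value of the best next move.
        Q[s][a] += alpha * (reward + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned greedy policy: which action each state now prefers.
policy = [max((0, 1), key=lambda x: Q[s][x]) for s in range(GOAL)]
print(policy)
```

After training, the policy prefers "step right" in every state, even though the agent was never told where the reward was.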
How well does AI work?
Humans have yet to accomplish what's known as artificial general intelligence, or Strong AI, which would be machines that could think on a human level and problem-solve to accomplish a variety of tasks, Kozdon says: “Strong is the holy grail which companies would love to have.” (Though some experts would argue it's a good thing we're not there yet, as that could place millions of jobs at risk.)
Instead, modern AI is very good at narrow tasks, like targeted marketing or even beating humans at chess. Companies are currently using AI to perform a host of functions: Siri answers questions, Gmail filters out spam, Netflix and Spotify suggest new movies or music, and LinkedIn proposes new connections.
Shane uses AI for more light-hearted projects, the often hilarious results of which show the limits of modern machine learning, from generating pickup lines like “you look like a thing and I love you” to crafting knock-knock jokes about cows with no lips.
Working with AI can also be a bit like training a dolphin, Shane says. Give one of those bright and adaptable animals a goal or task to figure out, and it inevitably finds unusual shortcuts around what you're actually asking. For instance, trainers at the Institute for Marine Mammal Studies in Mississippi attempted to train Kelly the dolphin to pick up trash in her pool in exchange for fish. But Kelly figured out a loophole: Whenever people dropped paper into the tank, she would squirrel it away under a rock and tear off tiny slivers to exchange with passing trainers for fish.
In one particularly amusing example of AI going off script, scientists challenged a system to walk without its feet touching the ground. The team thought it was an impossible task, but the system wasn't giving up so easily. The bug-like robot flipped on its back and used its “elbows” to propel itself forward.
“You have to be careful what precisely you're asking it to solve,” Shane says.
This unexpected creativity can make it hard to get an AI robot to perform even a simple task from scratch. It may seem straightforward to set up a program that instructs a modular system to reach a destination, first by assembling a body and then traveling from point A to point B.
“You think it would form legs and walk over there,” Shane says. But that's not usually the result. “One of the more common things it will do is assemble itself into a really tall tower and then just fall over. That's easier than walking.”
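The "tall tower" failure is a case of reward mis-specification: the system optimizes exactly what you measured, not what you meant. A toy sketch makes the pattern visible; the scoring rule, candidate plans, and numbers here are all invented for illustration.

```python
# Toy illustration of reward mis-specification: we "ask" for distance
# covered, and a simple search discovers that tipping over scores
# better than taking a few clumsy steps.
BODY_HEIGHT = 2.0  # a tall body covers a lot of ground just by falling flat

def distance_covered(plan):
    # plan is ("walk", number_of_steps) or ("fall",)
    if plan[0] == "walk":
        return 0.3 * plan[1]   # each awkward newly-learned step moves 0.3 units
    return BODY_HEIGHT         # falling over covers a full body length at once

candidates = [("walk", n) for n in range(1, 5)] + [("fall",)]
best = max(candidates, key=distance_covered)
print(best)
```

Because four steps cover only 1.2 units while falling covers 2.0, the optimizer prefers the fall, just as the tall assembled tower did. The fix is in the reward, not the optimizer.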
Should we be concerned about the future?
Though there are concerns with the widespread use of AI, “I feel like the fears are not where they should be,” Kozdon says. “People treat it as some kind of magic that can do everything, especially in the physical realm. It's basically a way of doing very complex data analysis and, based on your data analysis, making future predictions.”
Instead, some of the more immediate worries about AI are far more down to earth. One particularly important concern is encoded bias against minorities, women, low-income families, and other disenfranchised groups. These pervasive biases, baked into the data the systems learn from, are reflected and often amplified by AI algorithms.
Though developers may think they're training a system to select the best job candidates for a position, for instance, they could instead be training it to select the candidates who most appealed to the humans whose past decisions it's copying. A 2016 ProPublica investigation found that software used to predict future criminal activity is not only unreliable in its prognoses but also biased against African Americans, which could result in harsher sentencing or longer waits for parole.
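A toy example, with entirely fabricated data, shows the mechanism: when past hiring decisions were biased, group membership predicts the historical labels better than skill does, and a naive model trained to reproduce those labels will latch onto the group rather than the merit.

```python
# Hypothetical "hiring" history: reviewers favored group A regardless
# of skill, so skill barely predicts the label while group membership
# predicts it perfectly. A model trained on these labels copies the bias.
# Each row: (skill score 0-1, group: 1 = A / 0 = B, label: 1 = hired).
past_decisions = [
    (0.9, 1, 1), (0.4, 1, 1), (0.6, 1, 1),   # group A: hired regardless of skill
    (0.9, 0, 0), (0.8, 0, 0), (0.3, 0, 0),   # group B: rejected regardless of skill
]

def hire_rate(rows, condition):
    # Fraction hired among rows where `condition` holds.
    selected = [label for skill, group, label in rows if condition(skill, group)]
    return sum(selected) / len(selected)

# How cleanly does each feature separate the training labels?
by_group = abs(hire_rate(past_decisions, lambda s, g: g == 1) -
               hire_rate(past_decisions, lambda s, g: g == 0))
by_skill = abs(hire_rate(past_decisions, lambda s, g: s >= 0.5) -
               hire_rate(past_decisions, lambda s, g: s < 0.5))
print(by_group, by_skill)
```

On this fabricated history, group membership separates hired from rejected perfectly (a gap of 1.0) while skill separates them not at all (a gap of 0.0), so a model chasing accuracy on those labels learns the bias, not the merit.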
“AI does only what we train it to do, and it doesn't have an abstract understanding of the world,” Kozdon says. “We are the biggest problem right now.” That's part of why diversity must be considered from the start of this new initiative, she says. “You need a group of people who will have different views and different worries and [who] come from different points.”
Shane is glad that the latest executive order recognizes the need for ethical guidelines to regulate the development and use of these systems, and she emphasizes the need for carefully planned policy with the best interests of all Americans at heart.
And for those still worried about an imminent robot takeover, Shane suggests spending a few minutes with CaptionBot, an image-captioning AI powered by Microsoft Cognitive Services. When fed a picture of a microscopic mite, the bot replied: “I'm not really confident, but I think it's a cat lying on top of a pile of hay.”