One day last week at 9:29 a.m. I hunched nervously over my keyboard and prepared to do battle with an entity called Emma. We were each primed to write about the official U.K. employment data at 9:30 a.m. and file our stories to my editor. I was sure Emma would be quicker than me, but I really hoped I would be better.
Her creator, a startup called Stealth, calls her an “autonomous artificial intelligence” designed to deliver professional services such as research and analysis. Since it is fashionable to predict that AI will supplant white-collar workers, including journalists, I wanted to put her to the test.
Emma was indeed quick: she filed in 12 minutes to my 35. Her copy was also better than I expected. Her facts were right and she even included relevant context such as the possibility of Brexit (although she was of the dubious opinion that it would be a “tailwind” for the U.K. economy). But to my relief, she lacked the most important journalistic skill of all: the ability to distinguish the newsworthy from the dull. While she correctly pointed out the jobless rate was unchanged, she overlooked that the number of jobseekers had risen for the first time in almost a year.
In truth, most people who work on artificial intelligence admit it is not going to make humans obsolete any time soon. It is simply not intelligent enough yet. What is beginning to happen, though, is more subtle but no less important. The lines are beginning to blur between work done by humans and that done by machines.
For some workers, this could be a boon. I could imagine a scenario where an entity like Emma does rudimentary reports on repetitive data releases, then sends them to a human editor to newsify and beautify. The Associated Press already uses a program called Automated Insights to write simple corporate results stories. In these cases, humans have the advantage: machines are not obliterating them but taking over the boring bits of their jobs, freeing them to spend more time on the creative or valuable parts.
But not all humans are moving up the value chain. There are some boring tasks at which machines are very bad. An army of low-paid people is quietly doing them instead.
Take the workers on Amazon’s Mechanical Turk, a site run by the online retailer where “requesters” pay “Turkers” to do simple microtasks that are tricky for machines but easy (if dull) for humans: transcribing audio clips; tagging photos with relevant keywords; copying photocopied receipts into spreadsheets. Amazon calls these “human intelligence tasks”, or HITs, and they tend to pay a few cents apiece. The name Mechanical Turk comes from a fake chess-playing machine from the 18th century: while it looked like an automaton, a person was secretly hiding inside.
Pinterest is a good example of a company that uses sites like this. One of its developers explained in a recent blog post how it uses “crowdworkers” to evaluate the appropriateness of the “trending searches” generated by its computers. Humans are still better at making these judgments than machines. “We’ve built ‘artificial’ artificial intelligence,” she concluded.
Some of the new chatbot and AI services also have people hiding inside: “humans pretending to be robots pretending to be humans,” as Bloomberg put it in a recent exposé. These people often review and edit AI-generated responses before they are sent.
What about Emma? Is there a human lurking behind her? Shaunak Khire, co-founder of Stealth, says Emma has a team of human “trainers” but insists the output is all her own.
It is always going to be hard for laypeople using these services to know for sure.
Jeff Bigham, assistant professor at Carnegie Mellon University’s Human-Computer Interaction Institute, works as an adviser to Emma’s creators. He wants to find ways to make crowdwork less mindless and more fulfilling; for example, by making it a means to acquire skills. He asks himself: “What would make me proud to have my daughter grow up to be a crowdworker?” Even then, it is not clear how long this sort of work will last.
He explains that when humans perform tasks that machines cannot yet do, they create an exhaust of “training data” from which AI can learn. In other words, all those crowdworkers and chatbot editors are working steadily towards their own obsolescence.
Should you welcome or fear the rise of intelligent machines? That depends on whether they will be working for you, or you will be working for them.