A quick pointer to today's A1 New York Times story on a phenomenon we've been following on this blog for the past year: as algorithmic entities explode across the web, humans remain central to their operation. Automation only goes so far, and for all of Watson's Jeopardy wins, there are still many, many tasks at which computers are terrible and humans are effortlessly amazing. Like understanding language, say, or knowing what's happening in a photograph.
We noted this phenomenon in our work on Google Maps, which has a team of thousands of humans who hand-correct every single map. Here are the September story's key paragraphs:
There is an analogy to be made to one of Google's other impressive projects: Google Translate. What looks like machine intelligence is actually only a recombination of human intelligence. Translate relies on massive bodies of text that have been translated into different languages by humans; it is then able to extract words and phrases that match up. The algorithms are not actually that complex, but they work because of the massive amounts of data (i.e., human intelligence) that go into the task on the front end.
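The "recombination" idea can be sketched very loosely: imagine a phrase table learned from pairs of human-translated sentences, with translation reduced to looking up the longest matching phrases. This is a toy illustration, not Google Translate's actual system, and the phrase table below is invented stand-in data:

```python
# Toy sketch of phrase-based statistical translation: the "intelligence"
# lives in the phrase table, which was distilled from human translations.
# The table below is hypothetical example data for illustration only.

PHRASE_TABLE = {
    ("bonjour",): "hello",
    ("le", "monde"): "the world",
    ("bonjour", "le", "monde"): "hello world",
}

def translate(tokens):
    """Greedy longest-match decoding over the phrase table."""
    out = []
    i = 0
    while i < len(tokens):
        # Try the longest source phrase starting at position i first.
        for length in range(len(tokens) - i, 0, -1):
            phrase = tuple(tokens[i:i + length])
            if phrase in PHRASE_TABLE:
                out.append(PHRASE_TABLE[phrase])
                i += length
                break
        else:
            out.append(tokens[i])  # pass unknown words through unchanged
            i += 1
    return " ".join(out)

print(translate(["bonjour", "le", "monde"]))  # prints "hello world"
```

The algorithm itself is trivial; everything interesting was done in advance by the humans whose translations populated the table, which is the point of the analogy.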
Google Maps has executed a similar operation. Humans are coding every bit of the logic of the road onto a representation of the world so that computers can simply duplicate (infinitely, instantly) the judgments that a person already made.