Crowdsourcing Competitions Encourage Malicious Behavior, Study Finds



Crowdsourcing competitions have fundamentally changed the way idea-sharing takes place online. Famous contests such as the 2012 Coca-Cola crowdsourced campaign for a new logo and Chicago History Museum’s crowdsourced project for a new exhibit last year have created buzz around the practice.

By tapping into the collective intelligence of the internet masses, information and ideas can be generated, edited, verified, and published without a middleman (and without, as some critics of the practice have suggested, a competent professional). Now, a new study says that the same open platforms that make crowdsourcing contests possible also make them vulnerable to malicious behavior.

The study, conducted by researchers from the University of Southampton in the UK and National ICT Australia (NICTA), looked at several recent crowdsourcing competitions online and analyzed participants’ behavior through the “Prisoner’s Dilemma” scenario. The dilemma, a standard model in game theory, shows that two people may fail to cooperate with each other even when cooperation is in their common interest.
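The incentive structure the dilemma describes can be made concrete with a short sketch. The payoff values below are the textbook defaults for the Prisoner’s Dilemma, not figures from the study; the function names are illustrative, not drawn from the researchers’ analysis.

```python
# Textbook Prisoner's Dilemma payoffs: (my_move, their_move) -> my payoff.
# These specific numbers are the conventional defaults, not from the study.
PAYOFFS = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

def best_response(their_move):
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, their_move)])

# Defection is the best response whatever the other player does, so both
# players defect -- even though mutual cooperation would pay each of them more.
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
assert PAYOFFS[("cooperate", "cooperate")] > PAYOFFS[("defect", "defect")]
```

This is the structure the researchers map onto winner-take-all contests: attacking a rival (defecting) beats helping them regardless of what the rival does, so rational competitors sabotage one another even though everyone would progress faster under cooperation.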

Crowdsourcing generally espouses openness and broad-based cooperation, but the researchers explained that it also brings out people’s worst competitive instincts.

“[T]he openness makes crowdsourcing solutions vulnerable to malicious behaviour of other interested parties,” said one of the study’s authors, Victor Naroditskiy from the University of Southampton, in a release on the study. “Malicious behaviour can take many forms, ranging from sabotaging problem progress to submitting misinformation. This comes to the front in crowdsourcing contests where a single winner takes the prize.”

One competition the researchers looked at was the US-based Defense Advanced Research Projects Agency’s (DARPA) Shredder Challenge, which comprised five separate puzzles built around destroyed documents from war zones. Participants had to identify each document’s subject matter and answer a puzzle embedded in the content of the reconstructed document. The number of documents, their subject matter, and the method of shredding were varied randomly.

The team from the University of California at San Diego (UCSD) had a lead and was on track to win, the researchers explained, but fell victim to a “relentless number of coordinated overnight attacks.” The fact that each team’s progress was publicly visible on an open platform did not deter this behavior: though other participants could have used the UCSD team’s visible progress as clues for their own searches, competing teams instead impeded it with malicious attacks.

The researchers said their findings showed that “despite crowdsourcing being a more efficient way of accomplishing many tasks, it’s also a less secure approach.”

Reprinted with permission from Quartz.

(Image via igor kisselev/Shutterstock.com)