A new project tested 1,000 websites' ability to defend against bots.
Dot-govs are completely unprepared to fight advanced bots that emulate human behavior, a new report finds.
Distil Networks, a bot detection company, recently assessed about 1,000 websites for their ability to defend against bot attacks of varying complexity. Sites related to consumer, government and financial services were unable to detect the most “advanced” level of bots, according to the assessment.
The company analyzed sites covered in advocacy group the Online Trust Alliance’s Honor Roll, which recognizes organizations with a certain level of commitment to consumer security and privacy.
Other sectors didn't fare much better against advanced bots, either. Only 0.8 percent of retail sites, and 0.9 percent of news and media sites, detected them.
Government sites defended against 70 percent of “crude bots” -- bots that make no effort to disguise themselves -- Distil Networks chief executive Rami Essaid explained to Nextgov. That detection rate edged out financial services, which caught 65 percent, and news and media groups, which repelled 64 percent; consumer services led the pack, turning away 75 percent.
Dot-govs were able to defend against only 7 percent of “simple bots” -- those whose browsers weren’t “well-formed” -- and none of the hidden bots, according to Distil.
Government’s generally poor performance reflects a “lack of understanding about bots,” Essaid said.
“They don’t think of it as a concern, even though bots have been responsible for a lot of government breaches," he said.
Earlier this year, an automated bot helped identity thieves generate E-file PINs associated with Social Security numbers, for instance.
The Online Trust Alliance’s report covered several government sites, including those belonging to the Census Bureau, the departments of Education and Health and Human Services, the FBI, NASA, the Social Security Administration and the National Institutes of Health.
Across sectors, anti-bot protection capabilities declined this year compared to 2015, according to that report.
This year, 26 percent of sites were vulnerable to attack from basic bots, up from 15 percent last year. Some of that increase can be attributed to new testing criteria, “but sites should ensure they are protected from emerging, more sophisticated bot attacks,” the OTA report said.