An Algorithm That Hides Your Online Tracks With Random Footsteps


Can “polluting” browsing history with fake traffic make it harder for ISPs to spy on you?

Last week, President Donald Trump signed a controversial new law that allows internet providers to continue gathering sensitive information about their users and selling that data to advertisers. News sites erupted with recommendations for keeping browsing history private—but because all the data people send and receive online passes through their service providers, that’s easier said than done.

Many news stories recommended setting up a virtual private network, or VPN, to shield browsing data. Using a VPN re-routes all your browsing data through an encrypted tunnel, keeping it private from your provider, but it requires shifting your trust from your internet provider to often-sketchy and unaccountable VPN companies.

Another option is Tor, a browser that routes web traffic through a random network of servers to anonymize it—but Tor can slow down browsing significantly and breaks some flashy website features. Even HTTPS encryption only hides the contents of traffic to and from protected websites; the domains a user visits remain visible to the provider, and traffic-analysis attacks can sometimes infer more.

Each of these tools is imperfect, but used in concert, they can reduce the stream of information available to snooping internet providers. Meanwhile, some researchers and tinkerers are looking for new ways to thwart providers’ data collection.

Rather than trying to dry up the stream of unencrypted information produced by browsing the web, one technique is taking the opposite approach: polluting the stream with a flood of fake data. Or to put it differently, it tries to drown out the signal with a lot of artificial noise.

The basic idea is simple. Internet providers want to know as much as possible about your browsing habits in order to sell a detailed profile of you to advertisers. If the data the provider gathers from your home network is full of confusing, random online activity, in addition to your actual web-browsing history, it’s harder to make any inferences about you based on your data output.

Steven Smith, a senior staff member at MIT’s Lincoln Laboratory, cooked up a data-pollution program for his own family last month, after the Senate passed the privacy bill that would later become law. He uploaded the code for the project, which is unaffiliated with his employer, to GitHub. For a week and a half, his program has been pumping fake web traffic out of his home network, in an effort to mask his family’s real web activity.

Smith’s algorithm begins by stringing together a few words from an open-source dictionary and googling them. It grabs the resulting links in random order and saves them in a database for later use. The program also follows the Google results, capturing the links that appear on those pages, and then follows those links, and so on. The table of URLs grows quickly, but it’s capped at around 100,000 to keep the computer’s memory from overloading.
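Smith’s actual GitHub code isn’t reproduced here, but a minimal Python sketch of that harvesting step, under assumptions of my own (the word-list path, the three-word queries, and the use of the requests and BeautifulSoup libraries are illustrative choices, not details from his script), might look like this:

```python
import random
import requests
from bs4 import BeautifulSoup  # third-party: pip install requests beautifulsoup4

MAX_URLS = 100_000                   # cap on the URL table, as described in the article
WORDLIST = "/usr/share/dict/words"   # assumed location of an open-source word list


def random_query(words, n=3):
    """String together a few random dictionary words to form a search query."""
    return " ".join(random.sample(words, n))


def extract_links(html):
    """Pull absolute link targets out of a page of HTML."""
    soup = BeautifulSoup(html, "html.parser")
    return [a["href"] for a in soup.find_all("a", href=True)
            if a["href"].startswith("http")]


def harvest(url_table, seed_searches=5):
    """Seed the table from random searches, then follow links until the cap is hit."""
    words = open(WORDLIST).read().split()
    frontier = []
    for _ in range(seed_searches):
        # Hitting Google's search endpoint directly may be rate-limited or
        # blocked; Smith's real script is the authority on how it searches.
        resp = requests.get("https://www.google.com/search",
                            params={"q": random_query(words)}, timeout=10)
        frontier.extend(extract_links(resp.text))
    random.shuffle(frontier)                     # grab result links in random order
    while frontier and len(url_table) < MAX_URLS:
        url = frontier.pop()
        if url in url_table:
            continue
        url_table.add(url)
        try:
            page = requests.get(url, timeout=10)
            frontier.extend(extract_links(page.text))   # follow links on that page, and so on
        except requests.RequestException:
            continue
    return url_table
```

The cap matters mostly as a memory bound: the set of captured URLs is the only state the crawler keeps, so holding it to roughly 100,000 entries keeps the whole thing comfortably in RAM.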

A program called PhantomJS, which mimics a person using a web browser, regularly downloads data from the URLs that have been captured—minus the images, to avoid downloading unsavory or infected files. Smith set his program to download a page about every five seconds. Over the course of a month, that’s enough data to max out the 50 gigabytes of data that Smith buys from his internet service provider.
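The fetching itself is PhantomJS’s job in Smith’s setup; as a rough stand-in, the loop below drives the same schedule in plain Python, where requesting only the HTML document approximates the no-images setting. The five-second base interval comes from the article, while the jitter and the assumption of roughly 100-kilobyte pages are mine. (The arithmetic roughly checks out: one page every five seconds is about 17,000 pages a day, or around half a million a month, which at ~100 kilobytes each lands near Smith’s 50-gigabyte cap.)

```python
import random
import time
import requests

FETCH_INTERVAL = 5.0   # roughly one page every five seconds, as described


def browse_forever(url_table):
    """Repeatedly fetch random captured URLs to generate cover traffic."""
    while True:
        url = random.choice(tuple(url_table))
        try:
            # Only the HTML document is requested; embedded images are never
            # fetched, standing in for PhantomJS's image-loading switch.
            requests.get(url, timeout=10)
        except requests.RequestException:
            pass  # dead links are expected; just move on
        # Jitter the pause a little so requests don't land on an exact clock tick.
        time.sleep(FETCH_INTERVAL + random.uniform(-1.0, 1.0))
```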

Although it relies heavily on randomness, the program tries to emulate user behavior in certain ways. Smith programmed it to visit no more than 100 domains a day, and to occasionally visit a URL twice—simulating a user reload. The pace of browsing slows down at night, and speeds up again during the day. And as PhantomJS roams around the internet, it changes its camouflage by switching between different user agents, which are identifiers that announce what type of browser a visitor is using. By doing so, Smith hopes to create the illusion of multiple users browsing on his network using different devices and software.
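Those behavioral touches are easy to sketch, too. In the hypothetical snippet below, the 100-domain daily cap comes from the article, while the reload probability, the hours that count as night, and the sample user-agent strings are assumptions made for illustration:

```python
import random
from datetime import datetime
from urllib.parse import urlparse

# Example user-agent strings; Smith's script presumably carries its own list.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:52.0) Gecko/20100101 Firefox/52.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_4) AppleWebKit/603.1.30 Safari/603.1.30",
    "Mozilla/5.0 (Linux; Android 7.0; Pixel) AppleWebKit/537.36 Chrome/56.0 Mobile",
]
MAX_DOMAINS_PER_DAY = 100   # limit described in the article
RELOAD_PROBABILITY = 0.1    # assumed chance of revisiting a URL, simulating a reload


def pick_interval(base=5.0):
    """Slow the pace at night and speed it back up during the day."""
    hour = datetime.now().hour
    nighttime = hour < 7 or hour >= 23        # assumed definition of "night"
    return base * (4 if nighttime else 1) + random.uniform(0, 2)


def rotating_headers():
    """Switch user agents to mimic several devices sharing one network."""
    return {"User-Agent": random.choice(USER_AGENTS)}


def choose_url(url_table, domains_today, last_url=None):
    """Respect the daily domain cap, and occasionally 'reload' the last page."""
    if last_url and random.random() < RELOAD_PROBABILITY:
        return last_url                       # simulate a user hitting reload
    for _ in range(1000):                     # bounded retry so the sketch can't spin forever
        url = random.choice(tuple(url_table))
        domain = urlparse(url).netloc
        if domain in domains_today or len(domains_today) < MAX_DOMAINS_PER_DAY:
            domains_today.add(domain)
            return url
    return last_url or random.choice(tuple(url_table))
```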

“I’m basically using common sense and intuition,” Smith said.

In addition to crawling through the results of random Google queries, Smith’s script pulls data from a preset list of popular webpages. He hard-coded a list of 20 popular, frequently updated news sites—from The Huffington Post to The Daily Caller—to imitate a range of online news consumption.
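In a sketch, folding that preset list into the same URL table the crawler builds is nearly a one-liner; the entries below are placeholders, since the article names only two of the 20 sites:

```python
# Stand-in for the hard-coded list of frequently updated news sites.
POPULAR_SITES = [
    "https://www.huffingtonpost.com",
    "https://dailycaller.com",
    # ...18 more, spanning a range of outlets
]


def seed_with_popular_sites(url_table):
    """Mix well-known news pages into the crawl so the fake traffic looks like
    ordinary news reading, not just random Google results."""
    url_table.update(POPULAR_SITES)
```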

Two security experts I spoke to about Smith’s project were supportive of his search for another anonymizing technique—but wary of how he implemented it. Bruce Schneier, a fellow at Harvard’s Berkman Klein Center and the author of Schneier on Security, warned against underestimating internet providers’ ability—and drive—to see through data-obfuscation tactics.

“The question is, after 100 years of coding theory, how good are those algorithms at finding the signal in the noise?” he asked.

He hypothesized that a system masking a person’s browsing history by layering in copies of other people’s browsing patterns might be more useful. That way, the internet provider isn’t looking for a needle in a haystack, but for one particular needle in a pile of other needles. “It would be a Tor-like system where anonymity comes through shared usage,” Schneier offered.

Kenneth White, a security researcher and co-director of the Open Crypto Audit Project, balked at the script’s use of random browsing.

“As written, it is actively dangerous,” he wrote in an email. Smith’s program does use a blacklist to avoid visiting problematic websites, skipping pages categorized as relating to drugs, gambling, hacking, and porn, among other topics. Still, White worried, random Google searches could send the program down a dark rabbit hole without the user’s knowledge.

“After crawling several hundred links, you may get an unwanted visit from law enforcement—defendable, no doubt, but not without some awkward explanations,” White said. He added: “It’s an interesting idea, but this particular proof of concept is an academic/hobby exercise that adds more problems than it solves.”

Smith acknowledged the critiques, some of which he’s used to improve his script. He emphasized that he created the tool for his own use, and that anyone who wants to use the script he posted on GitHub should weigh its benefits and drawbacks for themselves.

“I’m eating my own dog food, and I don’t want to run into trouble myself,” he said.

Smith’s technical background lends itself to the project: He specializes in “detecting difficult-to-find signals in noise,” so he’s well equipped to develop a program that makes exactly that job harder. But the program has its limitations.

It’s not designed, for instance, to create plausible deniability, the way some obfuscation systems are. It doesn’t cover up sensitive web activity, which would remain easily accessible to an ISP looking for it. (That’s what Tor is good for.)

And it’s not perfectly frictionless, either: Smith said his wife has noticed she’s often asked to fill in a CAPTCHA—an online quiz to prove that she’s human—when she visits Google, a measure designed to prevent bots like Smith’s from overloading Google with automated searches.

Smith and the security experts who reviewed his code all believe that users have to do more to protect their privacy, because internet providers won’t protect it for them.

“We’re in an adversarial role with ISPs,” Smith said. Where appropriate, they say, use a VPN and Tor. And expect more carefully vetted privacy tools that hide, blur, or drown out your signal to appear before long.