Can Twitter Fit Inside the Library of Congress?


Six years ago, the world’s biggest library decided to archive every single tweet. Turns out that’s pretty hard to do.

In 2010, the Library of Congress and Twitter announced a historic and incongruous partnership: Together, they would archive and preserve every tweet ever posted, creating a massive store of short-form thoughts. It was odd: a 210-year-old institution partnering with a four-year-old startup, cataloging the internet’s ephemeral #brunchtweets. It was also fascinating: equal parts futuristic and anachronistic. I imagined library scribes copying tweets by hand onto vellum or cranking feeds through a printing press. The news actually frightened some folks: Does this mean my future grandkids will read my live-tweets of Parks and Recreation?

Yet, however dubious the task seemed back then, no one doubted the Library of Congress would get the work done. If Twitter could handle a few million tweets a day, surely the largest library in the world could, too.

But as it turns out, it couldn’t. Six years after the announcement, the Library of Congress still hasn’t launched the heralded tweet archive, and it doesn’t know when it will. No engineers are permanently assigned to the project. So, for now, staff regularly dump unprocessed tweets into a server—the digital equivalent of throwing a bunch of paperclipped manuscripts into a chest and giving it a good shake. There’s certainly no way to search through all that they’ve collected. And, in the meantime, the value of a vast tweet cache has soared. This frustrates researchers, who had hoped to mine the archive for insights about language and society—and who currently have to pay heavy licensing fees to Twitter for its data.

The library has been handed a Gordian knot, an engineering, cyber, and policy challenge that grows bigger and more complicated every day—about 500 million tweets a day more complicated. Will the library finally untie it—or give in and cut the thing off?

“This is a warning as we start dealing with big data—we have to be careful what we sign up for,” said Michael Zimmer, a professor at the University of Wisconsin-Milwaukee who has written on the library’s efforts. “When libraries didn’t have the resources to digitize books, only a company the size of Google was able to put the money and the bodies into it. And that might be where the Library of Congress is stuck.”

Things looked easier in 2010, when the library launched the Twitter partnership with a jaunty press release, “How Tweet It Is”:

Have you ever sent out a “tweet” on the popular Twitter social media service?  Congratulations: Your 140 characters or less will now be housed in the Library of Congress.

Back then, Twitter users posted around 55 million tweets a day. That’s a lot, but it’s peanuts compared with the traffic Twitter sees today. And tweets were less complicated back then. They didn’t have embedded media, like photos or videos, and sharing tweets was mostly still a copy-and-paste affair—though some early adopters were giving this new “retweet” button a try.

That April, Twitter and the Library of Congress signed a short agreement—just two pages. In it, Twitter promised to hand over all the tweets posted since the company’s launch in 2006, as well as a regular feed of new submissions. In return, the library agreed to embargo the data for six months and ensure that private and deleted tweets were not exposed.

As the library explained later that year:

Private account information and deleted tweets will not be part of the archive. Linked information such as pictures and websites is not part of the archive, and the Library has no plans to collect the linked sites. There will be at least a six-month window between the original date of a tweet and its date of availability for research use.

This turned out to be a tougher challenge than anyone expected. For one, the flood of tweets flowed faster and faster, jumping from 55 million a day in 2010 to 140 million in 2011, before climbing to nearly 500 million in 2012. And the tweets got bigger, too. Individual tweets could be connected by a conversation thread. Users embedded photos, then video, and then live video. All this new metadata weighed down the Library of Congress’s daily downloads and forced staff to consider building an archival system that would change as often as Twitter did.
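
To put those numbers in rough perspective, here is a back-of-envelope sketch in Python (the two-kilobyte average payload is an assumption for illustration; neither the library nor Twitter has published a per-tweet figure):

    # Back-of-envelope: what half a billion tweets a day means for storage.
    tweets_per_day = 500_000_000
    avg_bytes_per_tweet = 2_000  # JSON plus metadata; an assumed average
    daily_bytes = tweets_per_day * avg_bytes_per_tweet
    print(f"{daily_bytes / 1e12:.1f} TB per day")         # ~1.0 TB/day
    print(f"{daily_bytes * 365 / 1e15:.2f} PB per year")  # ~0.37 PB/year

At that rate, even a modest per-tweet payload compounds into petabytes within a few years, which squares with the library’s own description of what it has collected.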

In 2013, with academics clamoring for access to the archive, the library admitted things weren’t going so well:

It is clear that technology to allow for scholarship access to large data sets is lagging behind technology for creating and distributing such data. Even the private sector has not yet implemented cost-effective commercial solutions because of the complexity and resource requirements of such a task…

The Library has not yet provided researchers access to the archive. Currently, executing a single search of just the fixed 2006-2010 archive on the Library’s systems could take 24 hours. This is an inadequate situation in which to begin offering access to researchers, as it so severely limits the number of possible searches.

At the same time, with the library sidelined, Twitter itself ramped up its own efforts to expose—and sell—its massive archive. In 2010, it partnered with data firm Gnip to offer feeds of raw tweets—for hundreds of thousands of dollars. Twitter eventually cut out the middleman and bought Gnip in 2014, consolidating distribution of its valuable data.

Steep prices put the data out of reach for researchers like Annie Franco, a Ph.D. candidate at Stanford who studies political communication. She’s examining how citizens talk to legislators online, and she’d love to use Twitter’s data. But the rinky-dink streams Twitter offers for free aren’t statistically rigorous, and buying the full feed is prohibitively expensive. “As a graduate student, I wouldn’t be able to afford that,” Franco said. “It would certainly be out of the question for most graduate students unless they had a large research grant.” But, by just using Twitter’s public feed, “it’s hard to get the big picture,” she continued. “And I think that’s been my biggest frustration.”
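
For a sense of what relying on that public feed looks like in practice, here is a minimal sketch using tweepy, a third-party Python client for Twitter’s API (the credentials are placeholders, and the roughly one-percent sampling rate reflects Twitter’s documented statuses/sample endpoint):

    import tweepy

    # Placeholder credentials; real values come from a Twitter developer account.
    CONSUMER_KEY = "..."
    CONSUMER_SECRET = "..."
    ACCESS_TOKEN = "..."
    ACCESS_TOKEN_SECRET = "..."

    class SampleListener(tweepy.StreamListener):
        """Counts tweets arriving on the free sample stream."""

        def __init__(self):
            super().__init__()
            self.seen = 0

        def on_status(self, status):
            self.seen += 1
            if self.seen % 1000 == 0:
                print(f"{self.seen} sampled tweets so far")

    auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
    auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
    stream = tweepy.Stream(auth=auth, listener=SampleListener())
    stream.sample()  # the free stream: roughly 1 percent of public tweets

A researcher like Franco can run this for free, but she sees only a thin slice of the conversation, with no guarantee that it is representative; everything else sits behind the firehose’s price tag.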

That said, the fact that Twitter even offers a feed is praiseworthy. Other social networks—here’s looking at you, Facebook—are much stingier about sharing their data. And tweets turn out to be great for research: They’re short, well-structured, and mind-bogglingly numerous.

Hence the persistent hope, however distant, that the Library of Congress archive will still work out. For its part, the library says the project continues to be a priority. Staffers say they’re trying to figure out how to update the dataset when someone deletes a tweet or makes their account private; they’ll also need to come up with a more efficient way to catalog and search the petabytes of information they’ve collected. All of this is new, said Mark Sweeney, the associate librarian for library services. “We got into this because we wanted to understand emerging social media and what the challenges would be,” he said. “I think we’ve learned that the challenges are constant and changing. But some institution needed to step up and try to understand this. I think the Library of Congress has done that.”
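
To make the deletion problem concrete, here is a minimal sketch of the bookkeeping it implies, assuming the delete-notice format Twitter’s streaming API documents (a bare JSON record carrying only the tweet and user IDs); the archive and tombstone structures are hypothetical stand-ins for whatever storage the library actually uses:

    import json

    def route_record(line, archive, tombstones):
        """Sort one raw streaming record: a new tweet or a delete notice."""
        record = json.loads(line)
        if "delete" in record:
            # A delete notice carries only IDs, never the original text;
            # honoring it means finding and redacting the archived tweet.
            tombstones.add(record["delete"]["status"]["id"])
        elif "id" in record:
            archive[record["id"]] = record

    def apply_tombstones(archive, tombstones):
        """Redact archived tweets that were deleted after collection."""
        for tweet_id in tombstones:
            archive.pop(tweet_id, None)
        tombstones.clear()

Even this toy version hints at the scale of the task: every delete notice implies a lookup against an archive of hundreds of billions of stored tweets.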

Twitter also says it hasn’t given up on the project, delivering this statement:

We are optimistic that the Library will be able to establish a secure, sustainable process for receiving and preserving an ongoing stream of Tweets within the bounds of our privacy policy. Academic researchers may access public Tweets for free via our API, or at a discounted rate via Gnip.

So everyone wants this to happen. But the incentives seem a bit skewed. Twitter, after all, makes millions from selling its data. Even if only a fraction of that revenue comes from researchers, when the library archive goes public Twitter stands to lose a measure of control over one of its most valuable assets. Meanwhile, during my conversations with overworked Library of Congress employees, several mentioned the need to balance the cost of the project against the thousands of other things the library needs to do.

They must yearn for 2010, when success and discovery seemed to be just around the corner. “I’m no Ph.D., but it boggles my mind to think what we might be able to learn about ourselves and the world around us from this wealth of data,” wrote a library spokesman in the original blog post announcing the partnership. “And I’m certain we’ll learn things that none of us now can even possibly conceive.” As far as this project goes, he was absolutely correct.