Researchers are using the present health crisis to help improve search systems and enable clinicians to find better answers more quickly.
The White House Office of Science and Technology Policy and the National Institute of Standards and Technology on Wednesday joined forces with two institutes and two universities to develop search engines that streamline access to COVID-19 research for the health care experts fighting the pandemic.
Through the strategic “TREC-COVID” effort, NIST will work with the Allen Institute for Artificial Intelligence, the National Library of Medicine, Oregon Health & Science University and the University of Texas Health Science Center at Houston to apply the Text Retrieval Conference—or TREC, a 28-year-old information retrieval evaluation program—to the COVID-19 Open Research Dataset. OSTP initially launched that resource, also known as “CORD-19,” in mid-March, and it now incorporates more than 44,000 pieces of scholarly literature and data about the novel coronavirus and its broader family of viruses.
“The TREC-COVID program goals include creating datasets and using an independent assessment process that will help search engine developers to evaluate and optimize their systems in meeting the needs of the research and health-care communities,” officials wrote in the official announcement. U.S. Chief Technology Officer Michael Kratsios added that the program will support the creation of “powerful and accurate search engines that extract knowledge from [CORD-19] literature, tailored to the needs” of the nation’s research officials.
Here’s what TREC-COVID entails: First, NIST will release a series of sample queries created by the National Library of Medicine, OHSU and UT Health. Organizations that want to participate can sign up on the NIST website. Registrants will run those initial queries through their information retrieval and search systems against the CORD-19 document set, then return their results to NIST. From there, biomedical experts will assess the results “including document relevance rankings” to evaluate the retrieval systems’ overall performance. The agency will score the submissions and post the scores on the TREC-COVID website, along with the retrieval results and lists of critical reference documents. Those “test collections,” in turn, can be used by information retrieval, or IR, researchers to review and improve their own search engines’ performance.
“Participants will have one week to submit their search results, and within about a week NIST will post results, with an expected spacing of about two weeks between each new dataset round being released,” officials wrote. “The team initially anticipates conducting five consecutive rounds of search system assessments.”
Ellen Voorhees, NIST’s project manager for TREC and now TREC-COVID, told Nextgov Thursday that one of the effort’s best-case outcomes “is creating a high-quality test collection that will allow IR researchers to investigate how search systems can best support medical researchers when the information space is changing quickly—i.e., all the new research papers on COVID-19—and there is a lot of conflicting information since the scientific community has not had time to come to consensus.”
The work will also likely produce “search systems that will allow clinicians to find better answers (or find answers more quickly) in future crises,” she said.
NIST established TREC in 1992 to invigorate IR research, and the program now hosts workshops and efforts to promote large-scale assessments of search technologies. Voorhees noted that TREC supports the IR field of study in two distinct ways.
“The first and most important way is producing test collections—like we will be doing in TREC-COVID—for a wide variety of different search contexts,” she explained. Test collections, such as those that will soon be produced, are a type of benchmark task, she said, which essentially “sample search queries for which the right documents to be retrieved are known.” They help researchers rapidly trial potential improvements to their own search engines. “By comparing the quality of the search results on the test collection before and after they make a change to the search algorithm, they can decide whether the change is worthwhile,” Voorhees said.
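The before-and-after comparison Voorhees describes can be sketched with a toy evaluation. The document IDs, the sample rankings and the use of precision-at-k below are illustrative assumptions for the sketch, not TREC-COVID’s actual queries, data or scoring method:

```python
# Illustrative sketch (hypothetical data): scoring ranked search results
# against a test collection's known relevance judgments, using
# precision@k as a simple quality measure.

def precision_at_k(ranked_docs, relevant_docs, k=5):
    """Fraction of the top-k retrieved documents that are relevant."""
    top_k = ranked_docs[:k]
    hits = sum(1 for doc in top_k if doc in relevant_docs)
    return hits / k

# Documents judged relevant for one sample query (hypothetical IDs).
qrels = {"doc3", "doc7", "doc9", "doc12"}

# Rankings produced by a search system before and after an algorithm change.
before = ["doc1", "doc3", "doc5", "doc7", "doc8"]
after = ["doc3", "doc7", "doc9", "doc2", "doc12"]

print(precision_at_k(before, qrels))  # 0.4
print(precision_at_k(after, qrels))   # 0.8 -- the change looks worthwhile
```

Because the relevant documents are already known, the researcher gets this verdict immediately, without recruiting new human judges for every experiment.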
The second way TREC supports IR efforts, according to Voorhees, is by building research communities around a given search problem, which increases the pace at which all involved can reach common goals.
Regarding the other organizations in the collaboration, Voorhees noted that the Allen Institute for Artificial Intelligence is tasked with maintaining and distributing the CORD-19 dataset, the document set being used in TREC-COVID. The institute generally adds to the document set once a week to capture the most up-to-date articles and information. Later rounds of the TREC-COVID program will likely feature larger releases of CORD-19 and expanded query sets. Voorhees said NLM, OHSU and UT Health have been working with NIST on TREC tracks that focus on the health and biomedical domains.
“Test collections for a particular domain need domain experts to define the search task that makes sense for the domain, to find—or create—appropriate queries, and to make the judgments as to whether a given document is relevant to a query,” she said. “Our collaborators from NLM, OHSU and UT Health have domain expertise themselves and also have access to a sufficiently large body of other experts to recruit judges.”
This entire initiative came together in less than a month. NIST first met by phone with OSTP on March 18, following the release of the CORD-19 dataset, which Voorhees reiterated was the spark that inspired this work.
“The fact that the dataset was to be incrementally updated meant that there was an opportunity to capture a real example of how both the scientific literature expands as well as the types of questions that are asked in [the] midst of an epidemic, with the belief that future epidemics would follow similar paths—so lessons learned would be worthwhile,” she said.