House panels probe Airbnb, Anysphere over use of Chinese AI models

GOP lawmakers are seeking details on companies’ reliance on Chinese-developed systems, citing risks tied to data security, censorship and alleged AI distillation campaigns.

Republican-led House committees are investigating Airbnb and Anysphere, the maker of the AI coding platform Cursor, over their use of artificial intelligence models developed by Chinese companies.

The House Homeland Security Committee and the House Select Committee on the Chinese Communist Party sent letters Wednesday to the companies’ CEOs requesting details about their use of Chinese-built AI systems, the rationale behind those choices and any communications the firms have had with the model providers.

The letters — signed by Homeland Security Chairman Andrew Garbarino, R-N.Y., and China Select Committee Chairman John Moolenaar, R-Mich. — also ask that employees involved in those decisions participate in an in-person briefing with lawmakers.

The probe was first reported by Semafor. It reflects mounting concern among lawmakers that U.S. companies are increasingly integrating AI models developed by firms in China, raising potential national security and cybersecurity risks tied to data access, supply chains and model behavior.

The inquiry specifically targets Anysphere’s recently released Composer 2 model, which the company said performs on par with leading systems from U.S. firms at a lower cost. The company later disclosed that the model is built on Kimi, developed by Beijing-based Moonshot AI.

Lawmakers also raised concerns about Airbnb’s use of Qwen, an AI model developed by the Chinese e-commerce company Alibaba, to power customer service tools. Airbnb CEO Brian Chesky previously described the model as “fast and cheap.”

Cursor and Airbnb did not immediately respond to requests for comment.

The investigation comes as lawmakers and the White House sharpen warnings about alleged efforts by Chinese AI companies to replicate the capabilities of leading American systems through large-scale distillation campaigns — a technique used to extract knowledge from AI models.

National security officials have long argued that technologies linked to China could pose surveillance and sabotage risks. Policy analysts often cite a 2017 Chinese law requiring domestic companies to assist state intelligence efforts, fueling concerns that firms operating with overseas units could be compelled to hand over their data to Beijing.

Anthropic in February accused three China-based AI companies — DeepSeek, Moonshot AI and MiniMax — of overwhelming its Claude model with 16 million exchanges from roughly 24,000 fraudulent accounts.

The same month, OpenAI sent a letter to members of the House China Select Committee that said it had seen evidence “indicative of ongoing attempts by DeepSeek to distill frontier models of OpenAI and other US frontier labs, including through new, obfuscated methods.”