Industry surveys show 84 percent of San Francisco’s 300,000-plus software engineers now use AI coding assistants, with most using them every day. Leading AI coding platforms such as Cursor and Windsurf have integrated Chinese large language models into their infrastructure. Washington has so far paid little attention to this dependency, which makes concerns about TikTok censorship look minor by comparison.
With one million users and 360,000 paying subscribers, Cursor has captured 18 percent of the global market for premium AI coding tools. Enterprise adoption in the US has exploded: Coinbase reported that “every engineer had utilized Cursor” by early 2025. Conservative estimates suggest 20–30 percent of Bay Area engineers today—55,000–85,000 people—use tools that have integrated Chinese AI models.
America’s most elite technology startups have developed an even deeper dependency: One in four companies in Y Combinator’s Winter 2025 batch used AI tools to write 95 percent of their codebase. At least some of those tools are fine-tuned versions of models provided by Zhipu AI, a Beijing-based company sanctioned by the US government as a national security threat.
Users first noticed something was amiss when Chinese characters appeared in their code outputs. When developers jailbroke the models to interrogate them directly, the systems confirmed they were built on Zhipu’s GLM (General Language Model) architecture. Windsurf eventually acknowledged the same behavior in its product, SWE-1.5. Subsequent technical analysis indicated that both platforms had used Zhipu’s flagship product, GLM-4.6, as their base model.
There is a reason for the rise of Zhipu, DeepSeek, and other open-weight Chinese models in American and global markets: they are cheaper to run and easier to access than their closed-source counterparts. GLM-4.6 uses 15 percent fewer tokens than comparable alternatives, a significant saving when handling millions of requests each day. Investor Chamath Palihapitiya put it bluntly: “Although the models of OpenAI and Anthropic perform well, they are simply too expensive.”
There is nothing inherently wrong with using an open-source Chinese AI model—after all, they perform impressively and can be run locally, rather than requiring expensive subscriptions to access. Banning them outright would eviscerate American software innovation. But allowing their use to go unchecked among American AI developers risks outsourcing the very foundation of the American AI stack.
The first problem arises when individual developers use Chinese AI coding tools. Some tools send code snippets to remote servers for processing, where they could be visible to China’s security services. When a developer uses a Chinese model via its API (as opposed to downloading the open weights and running it locally), every code snippet sent for processing travels to whatever infrastructure hosts that model, potentially including servers in China. Some of these risks can be hard to spot: Cursor’s technical documentation, for example, specifies that each request transmits code to third-party routing services, including some that use Chinese models. If those routing services don’t host their own model instances (which is likely, since doing so is expensive), then developers risk leaking segments of their codebase.
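To make the mechanism concrete, here is a minimal sketch of what an AI coding assistant typically transmits when it asks a remotely hosted, OpenAI-compatible model endpoint to work on code. The endpoint URL, model name, and code snippet are illustrative assumptions, not details taken from any specific product; the point is simply that the source text itself rides inside the request body.

```python
import json

# Hypothetical endpoint: whoever operates this URL (or any routing
# service in between) can read the request body in full.
API_URL = "https://api.example-model-host.com/v1/chat/completions"  # illustrative

code_snippet = """
def check_auth(token):
    return token == SECRET_KEY   # proprietary logic
"""

# A typical chat-completion payload: the developer's code is embedded
# verbatim in the user message before it ever leaves the machine.
payload = {
    "model": "glm-4.6",  # illustrative model name
    "messages": [
        {"role": "system", "content": "You are a coding assistant."},
        {"role": "user", "content": "Refactor this function:\n" + code_snippet},
    ],
}

body = json.dumps(payload)
# The proprietary snippet is now plain text inside the outbound HTTP body.
print("SECRET_KEY" in body)  # True
```

Nothing in this flow requires malice on the tool’s part: transmitting the code is simply how hosted inference works, which is why the question of *where* that endpoint runs matters.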
The individual risk from any single coding session is modest; American software engineers are not accidentally uploading entire trade secrets to Chinese servers. But in aggregate, Beijing could be gaining valuable intelligence from this fractional use. Across tens of thousands of developers, Chinese models likely process millions of proprietary fragments of American companies’ code each day, including security measures, authentication methods, and exploitable vulnerabilities. Under China’s 2017 National Intelligence Law, Chinese AI developers must “support, cooperate with, and collaborate in national intelligence work.” Labs like DeepSeek, Qwen, and Zhipu have no legal recourse to refuse such requests.
There is a second, distinct set of problems associated with American AI startups building their products on Chinese base models. A startup can take Zhipu’s open-weight GLM model, fine-tune it for specialized tasks, and then sell the resulting system to enterprises as an American product. Subscribers may not understand its actual provenance: There is currently no legal requirement to disclose use of Chinese-origin models. While censorship concerns may be manageable or irrelevant for many industries, this hidden provenance could create blind spots that are even more consequential and harder to detect.
American AI startups have already fine-tuned DeepSeek and GLM models for automated legal research and medical billing. Sensitive professional data—including legal strategies or patient medical records—is today subject to judgment and advice engineered in Beijing. The American companies buying these services have no systematic way to track their provenance, let alone potential backdoors.
For US policymakers, the question is not whether Chinese open-weight models may pose risks to US national security (they do) or how pervasive the problem might eventually become (it is already widespread). Rather, the question is how to address these risks without punishing small developers or sacrificing the free-market principles that underpin American AI leadership.