Chinese AI models are rapidly dominating open-source deployments as Western labs restrict open-weight releases. A SentinelOne and Censys study mapped 175,000 exposed AI hosts across 130 countries over 293 days and found that Alibaba’s Qwen2 ranks second only to Meta’s Llama in global deployment.
Qwen2 appears on 52% of systems running multiple AI models, making it the most common alternative to Llama. Researchers identified 40,694 hosts running both Llama and Qwen2, accounting for more than half of all multi-family deployments. Qwen2 also showed “zero rank volatility,” consistently holding its position across all measurement methods.
Gabriel Bernadett-Shapiro, distinguished AI research scientist at SentinelOne, said Chinese-origin model families are expected to play a more central role in the open-source LLM ecosystem over the next 12–18 months as Western frontier labs slow or constrain open-weight releases.
Key findings include:
- 175,000 exposed AI hosts across 130 countries
- 23,000 persistent hosts with 87% average uptime
- 48% of exposed hosts advertising tool-calling capabilities
- 16–19% of infrastructure unattributed to identifiable owners
The study highlights geographic concentration. Beijing accounts for 30% of Chinese deployments, while Virginia represents 18% of US hosts, reflecting AWS infrastructure density.
The research also raises governance concerns. Open-weight models operate outside centralized platform controls. Nearly half of exposed hosts can execute code and access external systems. When combined with reasoning-optimized models, these systems can autonomously plan multi-step operations.
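The risk profile described above can be illustrated with a minimal sketch. Note that the field names, thresholds, and sample records here are illustrative assumptions, not the study's actual schema: the idea is simply to flag hosts that are both persistent (using the article's 87% average-uptime figure as a cutoff) and advertise tool-calling, since those combine long-lived exposure with the ability to execute code or reach external systems.

```python
from dataclasses import dataclass

@dataclass
class ScanRecord:
    """One exposed AI host from a hypothetical scan (fields are illustrative)."""
    ip: str
    model_family: str
    tool_calling: bool   # host advertises function/tool-calling support
    uptime_pct: float    # observed uptime over the measurement window

def high_risk(records: list[ScanRecord]) -> list[ScanRecord]:
    """Flag persistent hosts that advertise tool-calling: long-lived
    endpoints that can execute code or access external systems."""
    return [r for r in records if r.tool_calling and r.uptime_pct >= 87.0]

# Sample records using RFC 5737 documentation addresses.
hosts = [
    ScanRecord("203.0.113.5", "qwen2", tool_calling=True, uptime_pct=99.1),
    ScanRecord("198.51.100.7", "llama", tool_calling=False, uptime_pct=95.0),
    ScanRecord("192.0.2.9", "qwen2", tool_calling=True, uptime_pct=12.3),
]

flagged = high_risk(hosts)
print([r.ip for r in flagged])  # → ['203.0.113.5']
```

Only the first host is flagged: the second lacks tool-calling, and the third is too short-lived to count as persistent under the assumed cutoff.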
As hardware portability and release openness diverge globally, Chinese AI models are increasingly the default choice for open-source deployments, driven by their availability and practical deployment advantages.