Feature

Inside Hugging Face’s Strategic Shift: APIs, Safety & Surviving the AI Platform Wars

By David Gordon
Once the heartbeat of open-source AI, Hugging Face faces critical challenges — from security threats to new community dynamics — as it fights to stay central.

In early 2025, Hugging Face laid off 4% of its staff (10 employees), primarily those focused on enterprise services. The move marked a shift away from custom deployment work and toward recurring revenue streams such as APIs and subscriptions. For a company once seen as the heartbeat of open-source AI, it was a pivotal moment.

Hugging Face remains essential, with millions of users and a vast model hub. But beneath the surface, a new reality is taking shape. It is defined by platform fatigue, concentrated usage, security concerns and growing competition. The company that helped establish the norms of openness in AI is now working to redefine its role.

The adjustment reflects a deeper shift in community dynamics. While new models arrive on the hub every day, most activity centers on a narrow slice of contributions. A small number of models drive the majority of downloads, and top developers carry much of the maintenance load. Meanwhile, new players have emerged across the stack, including model labs, inference services, data pipelines and evaluation tools.

Hugging Face is adapting to scale, competition and the pressure to define its long-term identity. Its founding principles remain intact, but the landscape around them is changing. 

Open Platforms, Open Risks

As Hugging Face’s popularity grows, so do its risks. Researchers have uncovered models on the platform that execute malicious code when loaded. These attacks often hide inside PyTorch pickle files, which can carry arbitrary commands. In one case, a model quietly opened a remote shell. In another, hidden malware slipped through automated scans.
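The mechanism behind these attacks is easy to illustrate with the standard library alone. In this benign sketch, the `Payload` class and its `eval` call stand in for an attacker's real command, which could just as easily invoke a shell:

```python
import pickle

class Payload:
    # pickle calls __reduce__ to learn how to rebuild an object. It may
    # return any callable plus its arguments, and pickle.loads invokes
    # that callable during deserialization. This is the mechanism that
    # malicious model files abuse.
    def __reduce__(self):
        # Benign stand-in: a real attack could return something like
        # (os.system, ("<malicious shell command>",)) instead.
        return (eval, ("6 * 7",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # runs eval("6 * 7") just by loading the bytes
print(result)
```

Because the callable runs inside `pickle.loads`, merely loading an untrusted model file is enough to compromise a machine; no method on the model ever needs to be called.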

Hugging Face has responded by introducing the safetensors format — which avoids code execution — and by displaying warning labels on risky files. A recent audit scanned more than 4 million files for threats. These steps reduce exposure, but they rely on users to choose safe models and remain alert to unfamiliar code.

Security researchers continue to test the system, with some finding ways to bypass existing safeguards. The platform is open by design, and that openness creates a wide surface for attack. It also has strategic limits, particularly for enterprise use.

“There seems to be a ceiling for openness,” said Mayur Naik, a professor at the University of Pennsylvania specializing in programming languages and AI. “There is a lot of proprietary data in enterprises, and entire sectors like healthcare, which will never become publicly available. Customers who possess such data are far more likely to use a proprietary fine-tuning service like OpenAI’s to build custom models that they have no incentive to host on Hugging Face.”

As more companies build on top of Hugging Face, the platform’s ability to protect its ecosystem becomes central. Safety now matters as much as speed or scale.

Related Article: Leaked Files Reveal OpenAI's Secret Plan to Release Open-Source AI Model

A Small Core Drives Most Usage

Hugging Face draws more than 35 million visits per month, according to Ahrefs data. Over two thousand new models land on the hub daily, with millions of downloads following. The scale is massive, and the pace feels unstoppable.

Beneath that surface, the pattern tells a sharper story. Roughly 1% of models drive nearly all downloads; most are never used at all. Engagement clusters around a small group of projects, leaving the rest untouched. This is how power-law communities grow: a dense core draws the attention, while a long tail stretches wide with minimal overlap. The result is both scale and gravity, with contributors orbiting what already works.
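That concentration can be sketched with synthetic, Zipf-distributed download counts. The numbers below are illustrative, not actual hub statistics:

```python
# Illustrative, synthetic download counts, not actual hub statistics.
n_models = 10_000
# Zipf-like curve: the model at rank k gets roughly 1/k the downloads
# of the most popular one.
downloads = [1_000_000 // rank for rank in range(1, n_models + 1)]

total = sum(downloads)
top_slice = downloads[: n_models // 100]  # the 100 most-downloaded models
share = sum(top_slice) / total
print(f"Top 1% of models account for {share:.0%} of all downloads")
```

Even under this idealized curve, the top 1% captures over half of all downloads, the kind of lopsided engagement the article describes.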

Thomas Wolf, chief science officer at Hugging Face, recently noted, “A lot of people do [open source] for the mission. They think it's better to maybe earn less but have something that's freely accessible that everyone can use without having to pay.” 

Hugging Face remains essential infrastructure for open AI, yet its community increasingly moves through established grooves. The challenge now is clear: build systems that surface more than the center.

But the broader quality of what’s available also matters. “The net result is that the vast majority of datasets and models on Hugging Face right now aren’t interesting,” Naik said. “There is an open research question whether one can effectively extend or merge weaker models available on Hugging Face to obtain a powerful model that outperforms a proprietary one; it seems unlikely at least in the short term.”

Rivals in the Open-Source Arena

Hugging Face helped shape open-source AI, but in 2025, it operates alongside a growing field of contenders.

New labs like Mistral AI are releasing high-performing models that command attention. Together AI offers inference through a simple API, sidestepping the need for self-hosting. Chinese groups such as Baidu, Alibaba and Zhipu are rising fast and often distribute models through region-specific platforms.

Tooling competition is also intensifying. OpenXLA, backed by major tech firms, builds a unified compiler stack. PyTorch, LangChain, Ray, AWS Bedrock and GCP Vertex offer built-in services that compete with Hugging Face’s hosting and APIs.

The Hugging Face hub remains a gathering point. It hosts models from many of these external labs. HuggingChat, the company’s demo assistant, now runs on Alibaba’s Qwen model by default. Hugging Face plays a central role as the infrastructure that ties systems together.

This shift brings resilience: as long as top models pass through its platform, Hugging Face stays relevant. But influence now comes from integration. Developers have more choices, and communities like CivitAI and Replicate attract focused user bases with different priorities. To stay ahead, Hugging Face must continue to offer reach, trust and usability across a fragmented ecosystem.

Related Article: Healthcare's AI Crossroads: Open Source or Commercial Foundation Models?

Expanding Into Robotics and Edge AI

Hugging Face is extending its reach beyond models. In the past year, it has moved into robotics, hardware kits and edge-friendly apps.

The company acquired Pollen Robotics in 2025 and began offering Reachy, an open-source humanoid robot. It followed up with Reachy Mini, a desk-sized robot priced at $299 and designed for developers. These devices integrate with Hugging Face Spaces, letting users run model-powered behaviors on physical systems.

Spaces themselves have grown into a live showcase, keeping users on the platform and creating another layer of engagement. The platform now hosts over half a million apps, turning models into full experiences. Researchers and developers use Spaces to share demos, tools and data explorers.

This ecosystem is expanding. Nvidia released a foundation model for robotics on the hub in 2025, fine-tuned to run on Hugging Face hardware. And community adoption is growing: LeRobot, Hugging Face’s robotics library, is now the most-starred repo in its category. These moves give the company a path into physical AI and let it stand out from pure model hosts.


Wolf said the goal is to make robotics more modular and evolving: “If it’s connected to an open-source repository that’s growing with people inventing new things, you could have something that actually grows with time.”

Where the Data Lives

Alongside models and apps, Hugging Face is investing in data. It now hosts over 75,000 datasets and is adding tools for versioning, validation and collaboration.

The company’s acquisition of XetHub, a version control platform, helped improve dataset tracking. Its Dataset Hub and Datasets library are becoming anchors for how teams prepare training data. Hugging Face now runs competitions, funds benchmarks and promotes tools to standardize evaluation.

Owning the dataset layer creates stickiness, improves reproducibility and positions Hugging Face as a home for every part of the machine learning lifecycle.

What Now and What’s Next

Hugging Face has moved from breakout star to core infrastructure. It is no longer defined by novelty. Its value now rests on execution: maintaining a healthy platform, drawing in developers and offering reliable tools across models, data and deployment.

It faces pressure from all sides. Rivals are building their own ecosystems. Model development is happening elsewhere. Even its own users are more selective, drawn to tools that are fast, simple or better integrated with enterprise systems.

But Hugging Face still holds something few others do: a broad, visible community. If it can turn that community into long-term momentum through open standards, safe tools and trusted infrastructure, it can stay central even as the field evolves.

This next phase is about depth, clarity and delivery. Hugging Face helped define open-source AI. Its next challenge is proving it can sustain it.

About the Author
David Gordon

David Gordon investigates executive leadership for a global investment firm, freelances in tech and media and writes long-form stories that ask more questions than they answer. He’s always chasing the narrative that undoes the easy version of the truth.

Main image: Roman M on Adobe Stock, Generated With AI