Hugging Face has evolved its offering for organizations building with machine learning, phasing out the on-premises Private Hub in favor of the cloud-based Enterprise Hub. The new Enterprise Hub combines managed SaaS convenience with enterprise-grade security, including deployment options for Inference Endpoints across cloud and on-premises infrastructure, plus SSO-based administration and access controls.
The original Private Hub was introduced to address a critical challenge: roughly 90% of machine learning models never reach production. Teams struggle with unfamiliar tools, duplicated effort, and difficulty showcasing work to stakeholders. The Private Hub aimed to unify ML workflows from research to production, making collaboration simpler and more productive.
The broader Hugging Face ecosystem remains central to this vision. The public Hub hosts over 60,000 models, 6,000 datasets, and 6,000 interactive Spaces—all open source. These assets are stored as Git-based repositories with version control, commit history, pull requests, and discussions, enabling peer review and team collaboration.
For organizations that need privacy, the Enterprise Hub is now the primary solution. It lets companies host private models, datasets, and Spaces while integrating with Inference Endpoints for scalable deployment. Hugging Face encourages interested teams to contact its Enterprise team to find the best setup for their needs.