As OpenAI releases GPT-4 and Google debuts Bard in beta, enterprises around the world are excited to leverage the power of foundation models. However, most companies are not equipped to take full advantage of these advanced AI systems because of the models' size, cost, and governance risks.
Snorkel AI bridges this gap with its Snorkel Flow platform, which enables users to adapt foundation models for specific use cases. The platform starts by examining a model's predictions on company data, then helps users identify and correct errors through programmatic labeling and prompts. The model is fine-tuned iteratively until it meets quality standards.
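The programmatic-labeling idea behind this workflow can be illustrated with a minimal, library-free sketch: heuristic "labeling functions" each vote on an example (or abstain), and a simple majority vote produces a training label. The function names and label scheme below are illustrative only; Snorkel Flow's actual aggregation is considerably more sophisticated.

```python
# Illustrative sketch of programmatic labeling with heuristic
# labeling functions and a majority vote (names are hypothetical).
from collections import Counter

POSITIVE, NEGATIVE, ABSTAIN = 1, 0, -1

def lf_contains_refund(text: str) -> int:
    # Heuristic: mentions of "refund" suggest a complaint.
    return NEGATIVE if "refund" in text.lower() else ABSTAIN

def lf_contains_thanks(text: str) -> int:
    # Heuristic: expressions of thanks suggest positive sentiment.
    return POSITIVE if "thank" in text.lower() else ABSTAIN

def majority_label(text: str, lfs) -> int:
    # Collect non-abstaining votes and return the most common label.
    votes = [v for v in (lf(text) for lf in lfs) if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    return Counter(votes).most_common(1)[0][0]

lfs = [lf_contains_refund, lf_contains_thanks]
print(majority_label("Thank you for the quick reply!", lfs))  # 1
print(majority_label("I want a refund now.", lfs))            # 0
```

In practice, users write many such functions over their own data, inspect where the resulting labels disagree with model predictions, and iterate from there.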
Hugging Face, known for its vast repository of open-source models, provides more than 150,000 models that can be accessed via its Inference Endpoints service. This partnership allows Snorkel Flow users to tap into a wide variety of specialized models—such as BioBERT and SciBERT for medical data—without the need to host them individually. The service includes a pause-and-resume feature that keeps costs manageable.
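Querying a deployed Inference Endpoint amounts to an authenticated HTTPS POST with a JSON body. The sketch below builds such a request using only the standard library; the endpoint URL and token are placeholders, and the `{"inputs": ...}` payload shape follows the standard Inference Endpoints convention.

```python
# Hedged sketch of calling a Hugging Face Inference Endpoint.
# Endpoint URL and token are placeholders, not real credentials.
import json
import urllib.request

def build_request(endpoint_url: str, token: str, text: str) -> urllib.request.Request:
    payload = json.dumps({"inputs": text}).encode("utf-8")
    return urllib.request.Request(
        endpoint_url,
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request(
    "https://my-endpoint.us-east-1.aws.endpoints.huggingface.cloud",  # placeholder URL
    "hf_xxx",  # placeholder token
    "Patient presents with acute myocardial infarction.",
)
# Sending the request requires a live endpoint:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

Because a paused endpoint rejects requests until resumed, callers typically wrap this in retry logic; that detail is omitted here for brevity.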
"We were pleasantly surprised to see how straightforward Hugging Face Inference Endpoint service was to set up," said Braden Hancock, CTO and co-founder of Snorkel AI. "All the configuration options were pretty self-explanatory, but we also had access to all the options we needed in terms of what cloud to run on, what security level we needed, etc."
Clement Delangue, co-founder and CEO of Hugging Face, added: "With Snorkel AI and Hugging Face Inference Endpoints, companies will accelerate their data-centric AI applications with open source at the core. Machine Learning is becoming the default way of building technology, and building from open source allows companies to build the right solution for their use case and take control of the experience they offer to their customers."
Together, Snorkel and Hugging Face make it easier for large companies, government agencies, and AI innovators to get value from foundation models without investing in extensive training resources.