The huggingface_hub library now makes it easier than ever to find models, datasets, and Spaces without leaving your Python environment. New classes like ModelSearchArguments and ModelFilter remove the guesswork from building search queries, enabling developers to programmatically discover resources with precision.
Previously, searching the Hugging Face Hub via code required manually crafting API parameters, a tedious trial-and-error process. The latest update introduces ModelSearchArguments, a namespace that exposes all valid search parameters in a human-readable format. For instance, to find models for text classification trained on the GLUE dataset with PyTorch, you can simply:
from huggingface_hub import HfApi, ModelSearchArguments

api = HfApi()
model_args = ModelSearchArguments()
models = api.list_models(filter=(
    model_args.pipeline_tag.TextClassification,
    model_args.dataset.glue,
    model_args.library.PyTorch
))
This returns a list of 140 matching models (as of writing), each with detailed metadata.
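Conceptually, ModelSearchArguments acts as a nested attribute namespace that maps human-readable names to the raw tag strings the Hub API expects, so tab completion can surface every valid value. A minimal, hypothetical sketch of that idea follows; the class names and tag table here are invented for illustration and are not the library's actual implementation:

```python
# Hypothetical sketch: a nested attribute namespace mapping
# readable names to the raw tag strings an API expects.
class TagNamespace:
    def __init__(self, tags):
        # e.g. {"TextClassification": "text-classification"}
        self._tags = dict(tags)

    def __getattr__(self, name):
        # Resolve a readable attribute to its raw tag string.
        try:
            return self._tags[name]
        except KeyError:
            raise AttributeError(f"Unknown tag: {name}")

    def __dir__(self):
        # Lets tab completion list the valid values.
        return sorted(self._tags)


class SearchArguments:
    """Toy stand-in for ModelSearchArguments with a fixed tag table."""
    def __init__(self):
        self.pipeline_tag = TagNamespace({"TextClassification": "text-classification"})
        self.dataset = TagNamespace({"glue": "glue"})
        self.library = TagNamespace({"PyTorch": "pytorch"})


args = SearchArguments()
print(args.pipeline_tag.TextClassification)  # -> text-classification
```

Because each attribute resolves to a plain string, passing these values to a search call is equivalent to typing the raw tags by hand, just without having to remember them.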
For more complex queries, ModelFilter allows combining multiple criteria. For example, to find models supporting both text classification and zero-shot classification, trained on Multi NLI and GLUE datasets, and compatible with PyTorch and TensorFlow:
from huggingface_hub import ModelFilter

filt = ModelFilter(
    task=["text-classification", "zero-shot-classification"],
    trained_dataset=[model_args.dataset.multi_nli, model_args.dataset.glue],
    library=["pytorch", "tensorflow"]
)
models = api.list_models(filter=filt)
The result is a single model, Jiva/xlm-roberta-large-it-mnli, the only one on the Hub that satisfies every criterion.
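As the example above describes, the criteria combine with AND semantics: a model must support both tasks, both datasets, and both libraries to match. A toy, self-contained sketch of that matching logic, using made-up model records rather than the real Hub API:

```python
# Toy sketch of AND-across-criteria matching. The model records
# below are illustrative, not real Hub query results.
catalog = [
    {"id": "Jiva/xlm-roberta-large-it-mnli",
     "tasks": {"text-classification", "zero-shot-classification"},
     "datasets": {"multi_nli", "glue"},
     "libraries": {"pytorch", "tensorflow"}},
    {"id": "some-pytorch-only-model",
     "tasks": {"text-classification"},
     "datasets": {"glue"},
     "libraries": {"pytorch"}},
]

def matches(model, tasks, datasets, libraries):
    # Every requested task, dataset, and library must be present.
    return (set(tasks) <= model["tasks"]
            and set(datasets) <= model["datasets"]
            and set(libraries) <= model["libraries"])

hits = [m["id"] for m in catalog
        if matches(m,
                   tasks=["text-classification", "zero-shot-classification"],
                   datasets=["multi_nli", "glue"],
                   libraries=["pytorch", "tensorflow"])]
print(hits)  # -> ['Jiva/xlm-roberta-large-it-mnli']
```

Only the first record survives: the second lacks zero-shot classification, Multi NLI, and TensorFlow support, mirroring how the combined filter narrows the Hub down to one model.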
This streamlined API eliminates the need to remember exact parameter formats, making the Hub's vast resources more accessible for integration into workflows and automated pipelines.
Note: Ensure you have the latest huggingface_hub version installed (pip install huggingface_hub -U) to use these features.