Hugging Face's multimodal learning group has published an ethical charter that places principles like transparency, fairness, and accountability at the core of its research lifecycle. The document, developed by machine learning researchers and engineers with input from ethics and data governance experts, aims to proactively address potential harms such as algorithmic bias, privacy violations, and malicious use.
"By defining these ethical principles at the beginning of the project, we make them core to our machine learning lifecycle," the team states.
The charter outlines specific prohibited uses, including generating hate speech, violating privacy or human rights, and deploying models without caution in high-risk domains like medicine or finance. It also commits to open and reproducible research, ensuring that datasets, tools, and model checkpoints are accessible while respecting licensing and attribution.
Acknowledging that ethical AI lacks universal definitions, the team treats the charter as a living document, tracking updates on GitHub and documenting how its values are put into practice throughout the project. It also recognizes potential conflicts between values, such as openness versus privacy, and emphasizes case-by-case risk-benefit analysis.