A new initiative aims to recognize and rank the security researchers who stress-test AI systems in search of vulnerabilities. The Red-Teaming Resistance Leaderboard will highlight the top contributors whose reported flaws help improve model robustness. The effort is seen as a step toward responsible AI development, encouraging ethical hacking practices within the community.
"By shining a light on these efforts, we hope to foster a culture of proactive defense," said a spokesperson.
The leaderboard will track metrics such as the severity and novelty of discovered issues, with top participants gaining recognition and potential rewards. It's part of a broader push to make AI safer through collaborative red-teaming.
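The article does not specify how severity and novelty would be combined into a ranking; one plausible approach is a weighted sum per finding, aggregated per researcher. The sketch below is purely illustrative: the `Finding` fields, the 0.6/0.4 weights, and the researcher names are assumptions, not the leaderboard's actual methodology.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One reported vulnerability (hypothetical schema, not the leaderboard's)."""
    severity: float  # 0.0 (cosmetic) to 1.0 (critical)
    novelty: float   # 0.0 (well-known issue) to 1.0 (new attack class)

def score(findings, severity_weight=0.6, novelty_weight=0.4):
    """Aggregate a researcher's findings into a single score (illustrative weights)."""
    return sum(severity_weight * f.severity + novelty_weight * f.novelty
               for f in findings)

def rank(contributors):
    """Return (name, score) pairs sorted best-first."""
    return sorted(((name, score(fs)) for name, fs in contributors.items()),
                  key=lambda pair: pair[1], reverse=True)

# Two hypothetical researchers: one critical, novel finding can outrank
# several low-severity, well-known ones.
board = rank({
    "alice": [Finding(severity=0.9, novelty=0.8)],
    "bob": [Finding(severity=0.4, novelty=0.2),
            Finding(severity=0.3, novelty=0.1)],
})
```

Under these assumed weights, a single high-severity, high-novelty report scores 0.86, ahead of two minor reports totaling 0.54, which matches the stated goal of rewarding impactful discoveries over volume.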