
What's Trending

Tracking trends critical to life sciences and technology companies.

1 minute read

Developments Toward Standardizing AI 'Red Teaming'

AI “red teaming” involves simulating attacks on AI systems to uncover vulnerabilities and strengthen security. The practice is becoming increasingly important as regulatory frameworks, such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF), emphasize its role. Additionally, some AI vendors may require red teaming as a condition of certain indemnification commitments. However, the lack of standardized practices in AI red teaming can lead to varying methodologies and hinder objective comparisons of AI system safety.

Anthropic recently released the latest in its line of red teaming research, delineating the methods it has explored (both manual and automated) and offering policy recommendations. Google DeepMind researchers also recently developed and published details about STAR, a sociotechnical framework for red teaming large language models. Establishing consistent red teaming methods is important both for managing current risks and for preparing for future threats. A careful eye should therefore be kept not only on emerging industry best practices, but also on regulatory developments such as NIST's Assessing Risks and Impacts of AI (ARIA) program. ARIA is designed to provide more concrete guidance on the “measure” pillar of the AI RMF, and it includes a red teaming evaluation still to be developed, making it an important program to track for those looking to understand and implement AI red teaming best practices.

As Anthropic's post frames the problem: “The lack of standardized practices for AI red teaming further complicates the situation. Developers might use different techniques to assess the same type of threat model, and even when they use the same technique, the way they go about red teaming might look quite different in practice. This inconsistency makes it challenging to objectively compare the relative safety of different AI systems. To address this, the AI field needs established practices and standards for systematic red teaming. We believe it is important to do this work now so organizations are prepared to manage today’s risks and mitigate future threats when models significantly increase their capabilities. In an effort to contribute to this goal, we share an overview of some of the red teaming methods we have explored.”
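Neither Anthropic's post nor the STAR paper prescribes a single implementation, but for readers less familiar with what “automated” red teaming involves, the rough Python sketch below shows the basic shape of such a loop: an attacker generates adversarial prompts for a stated threat model, the target system responds, and a judge flags responses that violate the policy under test. The attacker, target, and judge functions here are purely hypothetical placeholders, not code from any of the frameworks discussed above.

```python
# Minimal illustrative sketch of an automated red-teaming loop.
# The three callables (attacker, target, judge) are placeholders; in a real
# evaluation each would wrap an actual model API or scoring harness.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class RedTeamResult:
    prompt: str
    response: str
    violates_policy: bool


def run_red_team(
    attacker: Callable[[str], str],     # proposes an adversarial prompt for a threat model
    target: Callable[[str], str],       # the AI system under test
    judge: Callable[[str, str], bool],  # flags responses that violate the policy under test
    threat_model: str,
    num_attempts: int = 20,
) -> List[RedTeamResult]:
    """Run a fixed number of automated attack attempts against one threat model."""
    results: List[RedTeamResult] = []
    for _ in range(num_attempts):
        prompt = attacker(threat_model)
        response = target(prompt)
        results.append(RedTeamResult(prompt, response, judge(prompt, response)))
    return results


if __name__ == "__main__":
    # Stub implementations so the sketch runs end to end without external services.
    demo = run_red_team(
        attacker=lambda threat: f"[adversarial prompt targeting: {threat}]",
        target=lambda prompt: "[target model response]",
        judge=lambda prompt, response: False,
        threat_model="disallowed content generation",
    )
    failures = [r for r in demo if r.violates_policy]
    print(f"{len(failures)} policy violations out of {len(demo)} attempts")
```

The hard standardization questions sit largely outside this skeleton, in how the attacker and judge are defined, which threat models are covered, and how results are documented and reported; that is the kind of consistency the frameworks and programs discussed above are aimed at.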

Tags

intellectual property, privacy & cybersecurity, regulatory, technology transactions, ai & machine learning, software