This workshop convened practitioners from the more traditional software security world, those developing AI systems, researchers studying AI security and vulnerability, and researchers examining the social and ethical implications of AI.
AI systems can be compromised through a variety of triggers, ranging from outright adversarial attacks, to bugs, to simple misapplication. Given the proliferation of AI systems across sensitive social domains, it is important to understand not simply where and how such vulnerabilities occur throughout the “stack” (and our capacity to predict them), but also what their implications might be within a given context of use. This workshop explored the link between technical research and practice on security and vulnerability, and research on AI bias and fairness. Our hope was to expand the current discussion of security and bias to account for AI’s technical capabilities and failings, how these might manifest as harm when exploited in practice, and what cross-sectoral efforts might be needed to contend with these risks.
Topics discussed included:
- AI Vulnerabilities and Security: What’s New?
- Concrete Examples of AI Vulnerabilities and Evaluation Methods
- Responding to Vulnerabilities: Methodologies for Achieving Robustness
- Facing Ambiguity: Historical and Political Perspectives on Cybersecurity
- Assuring Safety in Practice – Learning from Safety-Critical Systems
- Interventions for Research, Practice, Policy & Advocacy