The BIG Argument for AI Safety Cases
Ibrahim Habli
Professor, University of York
Richard Hawkins
Associate Professor, University of York
In this presentation we will introduce our Balanced, Integrated and Grounded (BIG) argument for assuring the safety of AI systems. The BIG argument adopts a whole-system approach to constructing a safety case for AI systems of varying capability, autonomy and criticality. Whether the AI capability is narrow and constrained or general-purpose and powered by a frontier or foundation model, the BIG argument insists on a meaningful treatment of safety. It respects long-established safety assurance norms such as sensitivity to context, traceability and risk proportionality. Further, it places a particular focus on the novel hazardous behaviours emerging from the advanced capabilities of frontier AI models and the open contexts in which they are rapidly being deployed. These complex issues are considered within a broader AI safety case that approaches assurance from both technical and socio-technical perspectives. We will provide examples illustrating the use of the BIG argument.
About Ibrahim Habli
Ibrahim Habli is a Professor of Safety-Critical Systems at the University of York. He specialises in the design and safety assurance of software-intensive systems, with a particular focus on AI and autonomous applications. He currently serves as the Director of the UKRI Centre for Doctoral Training in Safe AI Systems (SAINTS), a £16M PhD programme that brings together five academic departments (Computer Science, Law, Philosophy, Health Sciences and Sociology) and 35 industry, policy and regulatory partners. Professor Habli is also the Research Director of the Centre for Assuring Autonomy (CfAA), a £10M partnership between Lloyd's Register Foundation and the University of York, dedicated to pioneering evidence-based and impactful research at the intersection of AI and safety. Prior to these roles, he was Head of Research and Deputy Head of the Department of Computer Science.
About Richard Hawkins
Richard Hawkins is an Associate Professor in the Department of Computer Science at the University of York. Working at the University's Centre for Assuring Autonomy (CfAA), his research focuses on safety assurance and safety cases for autonomous systems and AI. He has worked with safety-related systems for over 20 years, in both academia and industry. Richard has previously worked as a software safety engineer for BAE Systems and as a safety advisor in the nuclear industry.