Formal Methods for Accountable Cyber-physical Systems

Speaker Info

Ruzica Piskac is a Professor of Computer Science at Yale University, where she leads the Rigorous Software Engineering (ROSE) group. Her research interests span the areas of software verification, security and applied cryptography, automated reasoning, and code synthesis. Much of her research has focused on using formal techniques to improve software reliability and trustworthiness. Piskac joined Yale’s Department of Computer Science in 2013. She was previously an Independent Research Group Leader at the Max Planck Institute for Software Systems in Germany. Her research has received a range of professional honors, including multiple Amazon Research Awards, Yale University’s Ackerman Award for Teaching and Mentoring, the Facebook Communications and Networking Award, and the Microsoft Research Award for the Software Engineering Innovation Foundation (SEIF). In 2019, Yale named Piskac the Donna L. Dubinsky Associate Professor of Computer Science. Piskac holds a Ph.D. from the Swiss Federal Institute of Technology (EPFL), where her dissertation won the Patrick Denantes Prize. Her current and recent professional activities include service as Program Chair of the 37th International Conference on Computer Aided Verification and on the Steering Committee of the Formal Methods in Computer-Aided Design conference. Piskac has graduated five Ph.D. students, four of whom currently hold assistant professor positions in computer science.

Abstract

Our lives are increasingly impacted by the decisions made by artificial intelligence. While AI algorithms aim to improve human welfare, the harsh reality is that they often fail: car accidents involving autonomously driven vehicles have been fatal, and biased autonomous decision-making has led to discriminatory assessments and abuse. Ideally, computer scientists would verify such algorithms to establish their correctness before deployment. With formal program-verification techniques, accidents, discrimination, and other social harms could be detected automatically; once algorithmic errors are revealed, developers could adjust the algorithm before it is put into the field. Unfortunately, the formal verification of these algorithms is not only theoretically undecidable, it also requires a degree of computational power that makes the task practically infeasible.

While we cannot formally verify the correctness of decision algorithms, in this talk we will focus on how formal methods can be deployed to aid in the assessment of accountability. We developed a tool called soid that uses formal methods to query autonomous vehicles for accountability, giving us the ability to model legal investigations. We use symbolic execution over an autonomous agent’s decision logic, constrained by the log of the agent’s behavior. Using our framework, an expert user can ask tailored counterfactual questions to better understand the functional intention of the decision algorithm, e.g., to distinguish among accidents, honest design failures, and malicious design consequences. This understanding helps the user infer the most likely interpretation of the agent’s intentions.
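
To give a flavor of such queries, below is a minimal sketch using the Z3 Python API: a factual baseline constrained by logged facts, followed by a counterfactual variant. The toy decision rule, the logged values, and all variable names are invented for illustration; they are not soid's actual encoding, which is generated automatically by symbolic execution over the agent's decision logic.

    # A minimal, hypothetical sketch (not soid's actual encoding): posing a factual
    # and a counterfactual query about a toy decision rule to the Z3 SMT solver.
    from z3 import Real, Bool, Solver, Not, sat

    dist   = Real('dist')    # metres to the obstacle ahead
    speed  = Real('speed')   # agent speed in m/s
    brakes = Bool('brakes')  # the decision under investigation

    # Toy decision logic, as symbolic execution might summarise it:
    # the agent brakes exactly when the time-to-collision drops under 2 seconds.
    decision_logic = brakes == (dist < 2 * speed)

    s = Solver()
    s.add(decision_logic)

    # Factual query, constrained by the (invented) behavior log:
    # obstacle at 25 m, speed 10 m/s. Could the agent consistently not brake?
    s.push()
    s.add(dist == 25, speed == 10, Not(brakes))
    print("factual: not braking was consistent" if s.check() == sat
          else "factual: the logic forced braking")
    s.pop()

    # Counterfactual query: had the obstacle been only 15 m away,
    # could the agent still have refrained from braking?
    s.push()
    s.add(dist == 15, speed == 10, Not(brakes))
    print("counterfactual: not braking was consistent" if s.check() == sat
          else "counterfactual: the logic forced braking")
    s.pop()

An unsatisfiable counterfactual of this kind indicates that the decision logic itself would have forced the safer action, which is exactly the sort of evidence an investigator needs to separate an accident from a design failure.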

At the core of our framework for modeling accountability processes is the so-called (Counter)Factual-Guided Abstraction Refinement (CFGAR) loop. We define a CFGAR-loop oracle using SMT-based automated reasoning, including symbolic execution to generate the decision logic. We implemented this oracle on top of the KLEE-Float symbolic execution engine and the Z3 SMT solver.
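
As a rough illustration of the loop's query-and-refine pattern, the sketch below has a hypothetical investigator bisect over counterfactual obstacle distances, using an SMT-backed oracle over the same toy decision rule as above. It only conveys the flavor of iterating queries and refining a hypothesis from the oracle's answers; it is not soid's CFGAR implementation, whose oracle is built on KLEE-Float and Z3 over the agent's real decision logic.

    # Toy illustration of iteratively refining counterfactual queries against an
    # SMT-backed oracle. The decision rule and all names are hypothetical.
    from z3 import Real, Bool, Solver, Not, sat

    def oracle_could_skip_braking(distance_m, speed_ms=10.0):
        """Under the counterfactual scenario, could the (toy) decision
        logic have avoided braking?"""
        dist, speed, brakes = Real('dist'), Real('speed'), Bool('brakes')
        decision_logic = brakes == (dist < 2 * speed)  # toy rule: brake if TTC < 2 s
        s = Solver()
        s.add(decision_logic, dist == distance_m, speed == speed_ms, Not(brakes))
        return s.check() == sat

    # Investigator's refinement loop: narrow down the distance at which the
    # agent's behavior flips, by bisecting on the oracle's answers.
    lo, hi = 0.0, 100.0          # initial hypothesis: the flip point lies in [0, 100] m
    while hi - lo > 0.5:
        mid = (lo + hi) / 2
        if oracle_could_skip_braking(mid):
            hi = mid             # agent may ignore obstacles at this distance: flip point is below
        else:
            lo = mid             # agent must brake at this distance: flip point is above
    print(f"decision boundary is near {lo:.1f}-{hi:.1f} m")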

We will conclude this talk by outlining how different areas of computer science should be combined with legal reasoning to improve our understanding of accountability for cyber-physical systems.