Keynote II

Neuro-Symbolic Verification (Verification Beyond Fairness and Robustness)

Speaker – Prof. Daniel Neider, TU Dortmund

Abstract

Neural network verification has emerged as a critical tool for ensuring the safety and reliability of neural networks, particularly in safety-critical applications where errors can have significant consequences. Most existing research, however, has focused on verifying relatively simple properties, such as local robustness and fairness, and falls short of addressing more complex semantic requirements. In this talk, we introduce a framework called neuro-symbolic verification, which allows neural networks to be incorporated directly into logical specifications. This approach enables the expression and verification of intricate semantic properties that lie beyond the reach of traditional methods.
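As a purely illustrative sketch (the symbols N_sign, N_cls, and the traffic-sign reading are hypothetical and not taken from the talk), a neuro-symbolic specification may embed an auxiliary network as a predicate inside the property itself:

\[
\forall x \in [0,1]^n \colon\; N_{\mathrm{sign}}(x) \ge \tau \;\longrightarrow\; \operatorname*{argmax}_i\, N_{\mathrm{cls}}(x)_i = \text{stop}
\]

Here, the network under verification, N_cls, must classify x as a stop sign whenever a second, trusted network N_sign reports with confidence at least τ that x depicts one; such a requirement cannot be expressed as a purely geometric robustness or fairness constraint.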

Additionally, we demonstrate how neuro-symbolic verification can be used to restrict the verification process to in-distribution inputs. By focusing on inputs that are representative of the real-world data distribution, this technique enhances both the relevance and utility of counterexamples generated by failed verification attempts.
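A schematic example of such an in-distribution restriction, again with hypothetical symbols (N_dist as an in-distribution detector, θ as its acceptance threshold, and φ as the property of interest), is:

\[
\forall x \colon\; N_{\mathrm{dist}}(x) \ge \theta \;\longrightarrow\; \varphi\bigl(N(x)\bigr)
\]

Any counterexample must then itself satisfy N_dist(x) ≥ θ, so a failed verification attempt yields an input that resembles realistic data rather than arbitrary noise.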