Verification of Neuro-Symbolic Multi-Agent Systems
Abstract
Many artificial intelligence (AI) systems are designed to perform complex tasks in safety-critical scenarios. To verify safety and more general properties of the evolution of such systems, in this talk I will present a formalisation of AI agents as neuro-symbolic agents composed of a neural perception system and a symbolic controller. The neural component produces an observation of the environment, on the basis of which the controller selects one of the available actions. This formalisation captures a wide range of expressive AI systems, in both single-agent and multi-agent settings. I will then present an approach to verifying such systems against bounded temporal logic properties, which can be used to identify shallow bugs in a system, and demonstrate it on a complex scenario of multiple aircraft operating a vertical collision avoidance system (VCAS).
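To give a rough flavour of the agent model sketched above, the toy Python fragment below composes a stand-in perception function with a symbolic controller and checks a bounded safety property by explicitly unrolling executions. All names, the simplified VCAS-like dynamics, and the thresholds are hypothetical illustrations, not the formalisation or tool presented in the talk; real verification approaches typically encode the neural network and the bounded property symbolically rather than enumerating executions.

```python
# Illustrative sketch only: a toy neuro-symbolic agent and a bounded safety
# check by brute-force unrolling. Names and dynamics are hypothetical.
from dataclasses import dataclass
from itertools import product


@dataclass(frozen=True)
class State:
    """Toy VCAS-like state: relative altitude (ft) and closure rate (ft/s)."""
    rel_alt: int
    closure: int


def toy_perception(state: State) -> str:
    """Stand-in for the neural perception component: maps the raw state
    to a discrete observation."""
    if abs(state.rel_alt) < 100:
        return "danger"
    if abs(state.rel_alt) < 400:
        return "caution"
    return "clear"


def symbolic_controller(obs: str) -> int:
    """Symbolic controller: picks a climb command (ft/s) from the
    observation produced by the perception component."""
    return {"danger": 40, "caution": 20, "clear": 0}[obs]


def step(state: State, action: int) -> State:
    """Simple discrete-time environment update."""
    return State(rel_alt=state.rel_alt + state.closure + action,
                 closure=state.closure)


def bounded_safety_check(initial_states, horizon: int, min_sep: int = 50) -> bool:
    """Check the bounded property G_{<=horizon} (|rel_alt| >= min_sep)
    by unrolling every execution from every initial state."""
    frontier = set(initial_states)
    for _ in range(horizon + 1):
        if any(abs(s.rel_alt) < min_sep for s in frontier):
            return False  # counterexample: separation violated within the bound
        frontier = {step(s, symbolic_controller(toy_perception(s)))
                    for s in frontier}
    return True


if __name__ == "__main__":
    initial = [State(rel_alt=a, closure=c)
               for a, c in product(range(200, 801, 100), (-60, -30, 0))]
    print("Safe within 10 steps:", bounded_safety_check(initial, horizon=10))
```

Any violation found within the bound corresponds to a finite counterexample trace, which is precisely the kind of shallow bug that bounded verification is intended to surface.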