Ensuring Reliable Outcomes in Deep Learning: The Key Role of Requirements
Abstract
In recent years, deep learning has made remarkable progress, leading to significant breakthroughs across many fields. Yet the integration of these technologies into high-stakes or safety-critical environments is significantly hindered by their brittleness and unreliability. In this talk, I argue that requirements definition and satisfaction are key to adapting deep learning models to such sensitive domains. I will begin by showing that deep learning models often violate even the simplest requirements. I will then discuss how to design models that are compliant by design with given requirements and, at the same time, are able to learn from the background knowledge the requirements express. I will conclude by highlighting the wide-ranging applicability of my research through examples from different fields where the developed requirements-driven approach has been shown not only to enhance the models’ safety but also to boost their overall performance.