AI-enabled Engineered Systems are increasingly being developed in sectors such as aerospace, automotive, and manufacturing, where reliability and safety are paramount. Verifying AI systems, especially those built on complex models such as neural networks, presents challenges distinct from traditional software verification, owing to their data-driven nature, inherent non-determinism, and the opacity of their decision-making processes. As AI permeates safety-critical industries, there is an urgent need for robust verification methodologies that can ensure these systems operate safely and as intended.
In this talk, we will explore current trends, applications, and challenges in AI verification, highlighting key methodologies such as abstract interpretation, which analyzes the mathematical properties of AI models to show that they satisfy specified constraints and safety requirements. We will discuss seminal contributions, including the FoRMuLA report by Collins Aerospace and the European Union Aviation Safety Agency (EASA), which highlights opportunities for adopting formal methods techniques in the design assurance process of machine-learning-enabled systems. We will also examine constrained deep learning, an approach that incorporates domain-specific constraints directly into the construction and training of deep neural networks to promote desirable behavior in safety-critical scenarios. Finally, we will highlight the importance of runtime monitors, which dynamically assess the performance and safety of AI systems during operation. These topics will be contextualized within a broader discussion of the applications and challenges industries face when integrating AI into their systems, offering insights into potential solutions and future directions for enhancing safety and reliability.
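To make two of these ideas concrete, the sketches below illustrate them on toy examples. They are minimal illustrations under stated assumptions, not the tooling discussed in the talk: the network weights, perturbation radius, safety threshold, and monitor interface are all assumptions made for the example.

```python
# Minimal sketch of abstract interpretation for a neural network using interval
# bound propagation (IBP). All weights, bounds, and the safety threshold are
# illustrative assumptions, not taken from the FoRMuLA report.
import numpy as np

def affine_bounds(W, b, lower, upper):
    """Propagate an input box [lower, upper] through y = W @ x + b."""
    center = (lower + upper) / 2.0
    radius = (upper - lower) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius   # worst-case spread of the box
    return new_center - new_radius, new_center + new_radius

def relu_bounds(lower, upper):
    """ReLU is monotone, so it can be applied to the bounds directly."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# Toy two-layer network (hypothetical weights).
W1 = np.array([[1.0, -0.5], [0.3, 0.8]])
b1 = np.array([0.1, -0.2])
W2 = np.array([[0.7, -1.2]])
b2 = np.array([0.05])

# Input region to certify: each feature perturbed by at most 0.1 around a
# nominal operating point.
x_nominal = np.array([0.5, -0.3])
eps = 0.1
lo, hi = x_nominal - eps, x_nominal + eps

# Propagate the box through the network.
lo, hi = affine_bounds(W1, b1, lo, hi)
lo, hi = relu_bounds(lo, hi)
lo, hi = affine_bounds(W2, b2, lo, hi)

# Assumed safety requirement: the output must stay below 1.0 for every input
# in the region. If the computed upper bound is below the threshold, the
# property is proven; otherwise the result is inconclusive, since IBP is
# sound but not complete.
THRESHOLD = 1.0
print(f"output bounds: [{lo[0]:.3f}, {hi[0]:.3f}]")
print("property proven" if hi[0] < THRESHOLD else "inconclusive")
```

In the same spirit, a runtime monitor can wrap a deployed model and hand control to a fallback when its checks fail at inference time. The interface below is hypothetical; real monitors would use richer checks (e.g., out-of-distribution detectors) than the simple envelope test shown here.

```python
# Minimal sketch of a runtime monitor (hypothetical interface): it wraps a
# trained model, flags inputs outside an assumed operating envelope, and
# rejects outputs that violate an assumed safety bound.
import numpy as np

class RuntimeMonitor:
    def __init__(self, model, input_low, input_high, output_limit):
        self.model = model
        self.input_low, self.input_high = input_low, input_high
        self.output_limit = output_limit

    def __call__(self, x):
        in_envelope = np.all(x >= self.input_low) and np.all(x <= self.input_high)
        y = self.model(x)
        safe_output = np.all(np.abs(y) <= self.output_limit)
        # Return None to signal a hand-off to a fallback (e.g., a conventional
        # controller); otherwise pass the model output through.
        return y if (in_envelope and safe_output) else None

# Example use with a stand-in "model" (a fixed linear map).
model = lambda x: np.array([0.4 * x[0] - 0.2 * x[1]])
monitor = RuntimeMonitor(model,
                         input_low=np.array([-1.0, -1.0]),
                         input_high=np.array([1.0, 1.0]),
                         output_limit=1.0)
print(monitor(np.array([0.2, -0.5])))   # within envelope -> model output
print(monitor(np.array([5.0, 0.0])))    # out of envelope -> None (fallback)
```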