Driven by the success of deep learning algorithms in domains ranging from image and speech recognition to autonomous driving, artificial intelligence (AI) technologies are poised to significantly impact our everyday lives. As deep learning systems gain widespread deployment, there is a critical need for verification and validation techniques that can guarantee their safety and security. In many respects, these challenges mirror those faced by both the hardware design and software systems communities; yet, the scale and computational patterns of deep neural networks (DNNs) introduce new challenges from a verification and validation standpoint. This session features talks by leading experts in the emerging domain of DNN verification (that is, testing and verifying the properties of DNNs).

The first talk discusses new approaches for verifying properties of DNNs. The second presents new formal methods for the specification, verification, and synthesis of deep learning systems. The third talk draws on lessons from software testing to develop new approaches for design-time validation of DNN properties.