Modern verification is a highly automated process that involves many tools and subsystems. These verification tools produce large amounts of data that are essential for understanding the state of the verification process. The growing volume of data the verification process produces, and the complex relations between verification tools, call for data science techniques such as statistics, data visualization, and machine learning to extract the essence of the data and present it to users in a simple and clear manner.
The goal of the tutorial is to teach the audience how to harness the power of data science to build tools that improve the verification process and assist in understanding and managing it. The tutorial begins with a brief overview of the challenges and benefits of building a system that stores, processes, and analyzes the verification data of verification projects. We then describe various components and aspects of such a system. The main focus of the tutorial is on specific analysis techniques, accompanied by concrete examples. In that sense, the tutorial sheds light on the application of advanced data analysis techniques at many steps of the verification cycle, such as accelerating coverage data analysis, estimating formal verification effort, and detecting anomalies in post-silicon validation. We conclude with open research problems that can lead to the “holy grail” of cognitive verification.
The tutorial uses real-life examples from the IBM Verification Cockpit, a platform for collecting and analyzing verification data, and from Mentor Graphics' state-of-the-art functional verification solutions.