Ph.D. Student: Meller Yael
Subject: Model Checking Techniques for Behavioral UML Models
Department: Department of Computer Science
Supervisors: Professor Emeritus Orna Grumberg, Dr. Karen Yorav
The Unified Modeling Language (UML) is a widely accepted modeling language for embedded and safety-critical systems. As such, the correct behavior of systems represented as UML models is crucial. Model checking is a successful automated verification technique for checking whether a system satisfies a desired property. In this thesis, we present several approaches to enhancing model checking for behavioral UML systems.
The applicability of model checking is often impeded by its high time and memory requirements. The first approach we propose aims at avoiding this limitation by adopting software model checking techniques for the verification of UML models. We translate UML to verifiable C code that preserves the high-level structure of the models and abstracts away details that are not needed for verification. We combine static analysis and bounded model checking for verifying LTL safety properties and the absence of livelocks. We implemented our approach on top of the bounded software model checker CBMC. We compared it to an IBM research tool that verifies UML models via a translation to IBM's hardware model checker RuleBasePE. Our experiments show that our approach is more scalable and more robust for finding long counterexamples. We also demonstrate the usefulness of several optimizations that we introduced into our tool.
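To illustrate the underlying idea (this is a hedged toy sketch, not the thesis tool, which translates full UML models to C and invokes CBMC), the following Python fragment flattens a hypothetical two-state machine into a transition function and checks a safety property by exploring all runs up to a bound, in the spirit of bounded model checking. All names (states, events, the bad-state predicate) are invented for illustration.

```python
def step(state, event):
    """Transition relation of a toy two-state machine (Idle/Busy)."""
    if state == "Idle" and event == "start":
        return "Busy"
    if state == "Busy" and event == "done":
        return "Idle"
    return state  # events with no enabled transition are discarded

def bounded_check(init, events, bound, bad):
    """Explore every run of up to `bound` steps; return an event trace
    reaching a state satisfying `bad`, or None if none exists in the bound."""
    frontier = [(init, [])]
    for _ in range(bound):
        nxt = []
        for state, trace in frontier:
            for ev in events:
                s2 = step(state, ev)
                t2 = trace + [ev]
                if bad(s2):
                    return t2  # counterexample within the bound
                nxt.append((s2, t2))
        frontier = nxt
    return None  # safe up to the bound (no conclusion beyond it)

# No run of length <= 3 reaches a state called "Error":
print(bounded_check("Idle", ["start", "done"], 3, lambda s: s == "Error"))
```

A bounded result of None only shows safety up to the given depth; the thesis combines this style of bounded search with static analysis to handle LTL safety properties and livelock detection on the translated C code.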
A successful approach to avoiding the high time and memory requirements of model checking is CounterExample-Guided Abstraction-Refinement (CEGAR). In the second approach we propose a CEGAR-like method for UML systems. We present a model-to-model transformation that generates an abstract UML system from a given concrete one, and formally prove that our transformation creates an over-approximation. The abstract system is often much smaller, which makes model checking easier. Because the abstraction creates an over-approximation, we are guaranteed that if the abstract model satisfies the property then so does the concrete one. If not, we check whether the resulting abstract counterexample is spurious. If it is, we automatically refine the abstract system to obtain a more precise abstraction.
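The CEGAR loop just described can be sketched generically. The Python below is a hedged illustration over toy data structures, not the thesis's model-to-model transformation: here a "model" is simply its set of reachable states, abstraction is a superset of that set, and the helper names are placeholders.

```python
def cegar(concrete, prop, abstract_model, model_check, is_spurious, refine):
    """Iterate abstraction-refinement until the property is decided."""
    abstraction = abstract_model(concrete)
    while True:
        ok, cex = model_check(abstraction, prop)
        if ok:
            # Over-approximation: if the abstract model satisfies the
            # property, so does the concrete one.
            return True, None
        if not is_spurious(cex, concrete):
            return False, cex  # genuine counterexample
        abstraction = refine(abstraction, cex)  # more precise abstraction

# Toy instantiation: concrete reachable states and a safety property.
concrete = {0, 2, 4}
prop = lambda s: s != 3                      # safety: state 3 is never reached

def abstract_model(c):
    return set(range(max(c) + 1))            # coarse over-approximation

def model_check(a, p):
    bad = [s for s in a if not p(s)]
    return (True, None) if not bad else (False, bad[0])

def is_spurious(cex, c):
    return cex not in c                      # behavior only of the abstraction?

def refine(a, cex):
    return a - {cex}                         # exclude the spurious state

ok, cex = cegar(concrete, prop, abstract_model, model_check, is_spurious, refine)
```

In this run the coarse abstraction {0,...,4} yields the spurious counterexample 3; one refinement removes it and the property is then proved on the abstraction, hence on the concrete model.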
Another successful approach to tackling the limitations of model checking is compositional verification. Recently, great advances have been made in this direction via automatic learning-based Assume-Guarantee reasoning. In the last approach we present a framework for automatic Assume-Guarantee reasoning for behavioral UML systems. We apply an off-the-shelf learning algorithm for incrementally generating assumptions on the environment that guarantee satisfaction of the property. A unique feature of our approach is that the generated assumptions are UML state machines. Moreover, our Teacher works at the UML level: all queries from the learning algorithm are answered by generating and verifying behavioral UML systems.
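The Teacher's role in the learning loop can be sketched as follows. This is a hedged toy in Python, not the thesis's UML-level Teacher: traces are finite tuples of invented events, components are represented by predicates and trace sets, and the classic Assume-Guarantee rule (if <A> M1 <P> and <true> M2 <A>, then <true> M1 || M2 <P>) is checked by enumeration over a small universe.

```python
from itertools import product

def check_assume_guarantee(assumption, m1_satisfies_p, m2_traces, universe):
    """Teacher's equivalence check for learning-based A-G reasoning.
    Premise 1: every environment trace allowed by the assumption keeps
               M1 satisfying the property P.
    Premise 2: every trace of M2 is allowed by the assumption.
    Returns (verdict, counterexample): True = rule applies, False = P fails,
    None = assumption must be refined by the learner."""
    for t in universe:
        if assumption(t) and not m1_satisfies_p(t):
            return None, t            # assumption too weak: strengthen it
    for t in m2_traces:
        if not assumption(t):
            if not m1_satisfies_p(t):
                return False, t       # genuine violation of P
            return None, t            # assumption too strong: weaken it
    return True, None                 # rule applies: M1 || M2 satisfies P

# Toy setup: M1 satisfies P exactly when the environment never emits "err".
events = ("req", "err")
universe = [t for n in range(3) for t in product(events, repeat=n)]
m1_satisfies_p = lambda t: "err" not in t
m2_traces = [("req",), ("req", "req")]

# A learned candidate assumption that forbids "err":
verdict, cex = check_assume_guarantee(
    lambda t: "err" not in t, m1_satisfies_p, m2_traces, universe)
```

A None verdict carries a counterexample trace back to the learner (e.g. L*), which updates its candidate; in the thesis both the candidate assumptions and the queried systems are actual UML state machines, and each query is answered by generating and model checking a behavioral UML system.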