Concurrent algorithms and data structures for model checking

Research output: Contribution to book/anthology/report/proceeding › Article in proceedings › Research › peer-review

Model checking is a successful method for checking properties on the state space of concurrent, reactive systems. Since it is based on exhaustive search, scaling this method to industrial systems has been a challenge since its conception. Research has focused on clever data structures and algorithms, to reduce the size of the state space or its representation; smart search heuristics, to reveal potential bugs and counterexamples early; and high-performance computing, to deploy the brute-force processing power of clusters of compute servers. The main challenge is to combine these approaches: brute force alone (when implemented carefully) can bring a linear speedup in the number of processors. This is great, since it reduces model-checking times from days to minutes. On the other hand, proper algorithms and data structures can lead to exponential gains. Therefore, the parallelization bonus is only real if we manage to speed up the clever algorithms.

There are some obstacles, though: many linear-time graph algorithms depend on a depth-first exploration order, which is hard to parallelize. Examples include the detection of strongly connected components (SCCs) and the nested depth-first search (NDFS) algorithm; both are used in model checking LTL properties. Symbolic representations, like binary decision diagrams (BDDs), reduce model checking to “pointer-chasing”, leading to irregular memory-access patterns. This poses severe challenges to achieving actual speedup on (clusters of) modern multi-core computer architectures.

This talk presents some of the solutions found over the last 10 years, which led to the high-performance model checker LTSmin [2]. These include parallel NDFS (based on the PhD thesis of Alfons Laarman [3]), the parallel detection of SCCs with a concurrent union-find structure (based on the PhD thesis of Vincent Bloemen [1]), and concurrent BDDs (based on the PhD thesis of Tom van Dijk [4]). Finally, I will sketch a perspective on moving forward from high-performance model checking to high-performance synthesis algorithms. Examples include parameter synthesis for stochastic and timed systems, and strategy synthesis for (stochastic and timed) games.
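
For readers unfamiliar with NDFS, the sketch below shows the classical sequential nested depth-first search for detecting a reachable accepting cycle, which is the core task in LTL model checking. This is not the multi-core variant of [3]; the graph encoding (explicit adjacency lists and a flag per accepting state) is an assumption made purely for illustration.

```cpp
#include <cstddef>
#include <vector>

// Classical sequential nested depth-first search (NDFS) for detecting a
// reachable accepting cycle in a Büchi product automaton. The encoding
// (adjacency lists, a boolean per accepting state) is assumed for this sketch.
struct Automaton {
    std::vector<std::vector<std::size_t>> succ;  // successor lists
    std::vector<bool> accepting;                 // accepting states
};

class Ndfs {
public:
    explicit Ndfs(const Automaton& a)
        : a_(a), blue_(a.succ.size(), false), red_(a.succ.size(), false) {}

    // Returns true iff an accepting cycle is reachable from `init`.
    bool accepting_cycle(std::size_t init) {
        dfs_blue(init);
        return found_;
    }

private:
    // Outer (blue) DFS: visits states once and, in postorder, starts a red
    // search from every accepting state.
    void dfs_blue(std::size_t s) {
        blue_[s] = true;
        for (std::size_t t : a_.succ[s]) {
            if (found_) return;
            if (!blue_[t]) dfs_blue(t);
        }
        if (!found_ && a_.accepting[s]) {
            seed_ = s;
            dfs_red(s);
        }
    }

    // Inner (red) DFS: looks for a path back to the seed, which closes an
    // accepting cycle. Red marks persist across all red searches.
    void dfs_red(std::size_t s) {
        red_[s] = true;
        for (std::size_t t : a_.succ[s]) {
            if (found_) return;
            if (t == seed_) { found_ = true; return; }
            if (!red_[t]) dfs_red(t);
        }
    }

    const Automaton& a_;
    std::vector<bool> blue_;
    std::vector<bool> red_;
    std::size_t seed_ = 0;
    bool found_ = false;
};
```

The outer (blue) search visits every state once, and the inner (red) searches are only started in postorder of the blue search; this postorder dependency is exactly the depth-first exploration order that makes naive parallelization hard.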
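
The SCC algorithm of [1] is built around a concurrent union-find structure in which workers merge partial SCCs as they discover cycles. The sketch below only illustrates that basic ingredient, assuming a fixed number of elements, atomic parent links updated with compare-and-swap, and path halving in find; the additional bookkeeping of Bloemen's algorithm (such as tracking which workers are exploring which set) is omitted.

```cpp
#include <atomic>
#include <cstddef>
#include <utility>
#include <vector>

// Minimal lock-free union-find: parent links are std::atomic, find() uses
// "path halving", and unite() links roots with a compare-and-swap.
// Illustrative only; no union by rank, no resizing.
class ConcurrentUnionFind {
public:
    explicit ConcurrentUnionFind(std::size_t n) : parent_(n) {
        for (std::size_t i = 0; i < n; ++i)
            parent_[i].store(i, std::memory_order_relaxed);
    }

    // Returns a root of the set containing x; shortcuts links along the way.
    std::size_t find(std::size_t x) {
        while (true) {
            std::size_t p = parent_[x].load(std::memory_order_acquire);
            if (p == x) return x;
            std::size_t gp = parent_[p].load(std::memory_order_acquire);
            if (gp == p) return p;
            // Point x at its grandparent; a failed CAS just means another
            // thread already compressed this link.
            parent_[x].compare_exchange_weak(p, gp, std::memory_order_release,
                                             std::memory_order_relaxed);
            x = gp;
        }
    }

    // Merges the sets of a and b; returns true iff they were distinct.
    bool unite(std::size_t a, std::size_t b) {
        while (true) {
            std::size_t ra = find(a);
            std::size_t rb = find(b);
            if (ra == rb) return false;
            // Always link the larger-index root under the smaller one, so
            // parent links point to strictly smaller indices and no cycle
            // can form, even with concurrent unions.
            if (ra > rb) std::swap(ra, rb);
            std::size_t expected = rb;
            if (parent_[rb].compare_exchange_strong(
                    expected, ra, std::memory_order_acq_rel))
                return true;
            // rb acquired a parent concurrently; retry with fresh roots.
        }
    }

    // Snapshot query: only meaningful if no concurrent unions touch a or b.
    bool same_set(std::size_t a, std::size_t b) { return find(a) == find(b); }

private:
    std::vector<std::atomic<std::size_t>> parent_;
};
```

Linking the root with the larger index under the one with the smaller index keeps the parent relation acyclic even under concurrent updates; a production implementation would add union by rank and more careful memory management.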
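
Concurrent BDD implementations such as the one described in [4] hinge on a shared unique table in which all workers look up or create BDD nodes without locks; this table is where the “pointer-chasing” and the irregular memory accesses arise. The following is a much-simplified sketch of such a lock-free lookup-or-insert table, not the actual data layout of [4]; it omits resizing, garbage collection, operation caches, and overflow checks, and the node encoding is an assumption for illustration.

```cpp
#include <atomic>
#include <cstdint>
#include <cstddef>
#include <vector>

// A BDD node is identified by its variable and the indices of its children.
struct BddNode {
    uint32_t var;
    uint32_t low;
    uint32_t high;
};

// Sketch of a lock-free "unique table": nodes live in an append-only array,
// and a fixed-size open-addressing hash table maps (var, low, high) triples
// to node indices via compare-and-swap.
class UniqueTable {
public:
    explicit UniqueTable(std::size_t capacity)
        : table_(capacity), nodes_(capacity), next_(2) {
        // Slot value 0 means "empty"; node indices 0 and 1 are reserved for
        // the terminal nodes false and true (handled elsewhere).
        for (auto& slot : table_) slot.store(0, std::memory_order_relaxed);
    }

    // Returns the index of the node (var, low, high), inserting it if needed.
    // Every thread asking for the same triple gets the same index back.
    uint32_t make_node(uint32_t var, uint32_t low, uint32_t high) {
        // Speculatively claim a fresh node; it is wasted (but harmless) if
        // another thread publishes the same triple first.
        uint32_t mine = next_.fetch_add(1, std::memory_order_relaxed);
        nodes_[mine] = {var, low, high};

        std::size_t h = hash(var, low, high) % table_.size();
        while (true) {
            uint64_t slot = table_[h].load(std::memory_order_acquire);
            if (slot == 0) {
                uint64_t expected = 0;
                if (table_[h].compare_exchange_strong(
                        expected, mine, std::memory_order_acq_rel))
                    return mine;      // we published the new node
                slot = expected;      // another thread filled this slot
            }
            const BddNode& n = nodes_[slot];
            if (n.var == var && n.low == low && n.high == high)
                return static_cast<uint32_t>(slot);  // existing node wins
            h = (h + 1) % table_.size();             // linear probing
        }
    }

private:
    // Arbitrary 64-bit mixing function; any decent hash would do.
    static std::size_t hash(uint32_t var, uint32_t low, uint32_t high) {
        uint64_t x = (uint64_t(var) << 40) ^ (uint64_t(low) << 20) ^ high;
        x ^= x >> 33; x *= 0xff51afd7ed558ccdULL; x ^= x >> 33;
        return static_cast<std::size_t>(x);
    }

    std::vector<std::atomic<uint64_t>> table_;  // 0 = empty slot
    std::vector<BddNode> nodes_;                // append-only node storage
    std::atomic<uint32_t> next_;                // next free node index
};
```

The key property is that two workers requesting the same (var, low, high) triple always receive the same index, so BDD nodes stay canonical; a speculatively allocated duplicate is simply abandoned when its compare-and-swap loses the race.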

Original language: English
Title of host publication: 30th International Conference on Concurrency Theory, CONCUR 2019
Editors: Wan Fokkink, Rob van Glabbeek
Publisher: Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing
Publication year: Aug 2019
Article number: 4
ISBN (Electronic): 9783959771214
DOIs
Publication status: Published - Aug 2019
Event: 30th International Conference on Concurrency Theory, CONCUR 2019 - Amsterdam, Netherlands
Duration: 27 Aug 2019 - 30 Aug 2019

Conference

Conference: 30th International Conference on Concurrency Theory, CONCUR 2019
Country: Netherlands
City: Amsterdam
Period: 27/08/2019 - 30/08/2019
Series: Leibniz International Proceedings in Informatics, LIPIcs
Volume: 140
ISSN: 1868-8969

Research areas

  • Concurrent data structures, Model checking, Parallel algorithms

