Sound black-box checking in the LearnLib

Publication: Contribution to journal › Journal article › Research › Peer-reviewed

In black-box checking (BBC), incremental hypotheses about the behavior of a system are learned in the form of finite automata, guided by a given set of requirements specified in Linear-time Temporal Logic (LTL). The LTL formulae are checked on the intermediate automata, and potential counterexamples are validated on the actual system. Spurious counterexamples are used by the learner to refine these automata. We improve BBC in two directions. First, we improve the checking of lasso-like counterexamples by assuming the availability of a state-equivalence check. This yields a sound method without requiring an upper bound on the number of states of the system. Second, we propose to check the safety portion of an LTL property first, deriving simple counterexamples using monitors. We extended LearnLib's system-under-learning API to make our methods accessible, using LTSmin as the model checker under the hood. We illustrate how LearnLib's most recent active learning algorithms can be used for BBC in practice. Using the RERS 2017 challenge, we provide experimental results on the performance of all of LearnLib's active learning algorithms when applied in a BBC setting. We show that the novel incremental algorithms TTT and ADT perform best. We also provide experiments on the efficiency of various BBC strategies.
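To make the setting concrete, the sketch below shows a plain active-learning loop with LearnLib's TTT algorithm against a toy system under learning (SUL), using a random-walk equivalence oracle in place of the LTSmin-backed black-box checking oracles described in the paper. The ToggleSUL class is a made-up example system, and the package names and constructor signatures are assumed to follow a recent LearnLib release (they have changed between versions), so this is an illustrative sketch rather than the paper's implementation.

```java
// Minimal active-learning sketch with LearnLib's TTT algorithm.
// Assumption: package names and constructor argument orders follow a
// recent LearnLib release and may differ in other versions.
import java.util.Random;

import de.learnlib.acex.analyzers.AcexAnalyzers;
import de.learnlib.algorithms.ttt.mealy.TTTLearnerMealy;
import de.learnlib.api.SUL;
import de.learnlib.oracle.equivalence.mealy.RandomWalkEQOracle;
import de.learnlib.oracle.membership.SULOracle;
import de.learnlib.util.Experiment.MealyExperiment;
import net.automatalib.automata.transducers.MealyMachine;
import net.automatalib.words.Alphabet;
import net.automatalib.words.impl.Alphabets;

public class TTTExample {

    /** Toy SUL (made up for illustration): a two-state toggle answering "on"/"off". */
    static final class ToggleSUL implements SUL<Character, String> {
        private boolean on;

        @Override public void pre()  { on = false; } // reset before each query
        @Override public void post() { }             // nothing to tear down

        @Override public String step(Character input) {
            on = !on;
            return on ? "on" : "off";
        }
    }

    public static void main(String[] args) {
        Alphabet<Character> sigma = Alphabets.characters('a', 'a');
        SUL<Character, String> sul = new ToggleSUL();

        // Membership queries are answered by executing the SUL.
        SULOracle<Character, String> mqOracle = new SULOracle<>(sul);

        // TTT refines its hypothesis incrementally from counterexamples.
        TTTLearnerMealy<Character, String> learner =
                new TTTLearnerMealy<>(sigma, mqOracle, AcexAnalyzers.LINEAR_FWD);

        // Approximate equivalence queries by random walks over the SUL.
        // In the BBC setting of the paper, this oracle would be replaced by
        // model-checking-based oracles backed by LTSmin.
        RandomWalkEQOracle<Character, String> eqOracle =
                new RandomWalkEQOracle<>(sul, 0.05, 10_000, true, new Random(42));

        MealyExperiment<Character, String> experiment =
                new MealyExperiment<>(learner, eqOracle, sigma);
        experiment.run();

        MealyMachine<?, Character, ?, String> result = experiment.getFinalHypothesis();
        System.out.println("Learned hypothesis with " + result.size() + " states");
    }
}
```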

Original language: English
Journal: Innovations in Systems and Software Engineering
Volume: 15
Issue: 3-4
Pages (from-to): 267-287
Number of pages: 21
ISSN: 1614-5046
Status: Published - September 2019

