Model-based testing (MBT)

What is model-based testing?



Model-based testing is an application of model-based design for designing and optionally also executing artifacts to perform software testing or system testing.


 



Models can be used to represent the desired behaviour of a System Under Test (SUT), or to represent testing strategies and a test environment. 

Tests can be derived from models in different ways. Because testing is usually experimental and based on heuristics, there is no known single best approach for test derivation. It is common to consolidate all test derivation related parameters into a package that is often known as "test requirements", "test purpose" or even "use case(s)". This package can contain information about those parts of a model that should be focused on, or the conditions for finishing testing (test stopping criteria).


Because test suites are derived from models and not from source code, model-based testing is usually seen as one form of black-box testing.

Especially in Model-Driven Engineering or in the Object Management Group's (OMG's) model-driven architecture, models are built before or in parallel with the corresponding systems. Models can also be constructed from completed systems. Typical modelling languages for test generation include UML, SysML, mainstream programming languages, finite-state machine notations, and mathematical formalisms such as Z, B, Event-B, and Alloy.

There are various known ways to deploy model-based testing, which include online testing, offline generation of executable tests, and offline generation of manually deployable tests.
Online testing means that a model-based testing tool connects directly to an SUT and tests it dynamically.
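A minimal sketch of such an online loop, assuming a toy transition-table model and a stand-in SUT (both hypothetical, not any real tool's API): the tester picks an enabled action, stimulates the SUT, and immediately checks the response against the model's prediction.

```python
import random

# Sketch of online model-based testing: the tester walks a model and
# drives the SUT step by step, checking each observed response.
# MODEL and ToySut are illustrative stand-ins, not a real tool's API.

MODEL = {  # state -> {action: next_state}
    "logged_out": {"login": "logged_in"},
    "logged_in": {"logout": "logged_out", "view": "logged_in"},
}

class ToySut:
    """A trivial SUT that (by construction) conforms to MODEL."""
    def __init__(self):
        self.state = "logged_out"
    def step(self, action):
        # A real SUT would be independent code; here we reuse MODEL
        # so the sketch runs standalone.
        self.state = MODEL[self.state][action]
        return self.state

def online_test(sut, steps=20, seed=0):
    rng = random.Random(seed)
    model_state = "logged_out"
    for _ in range(steps):
        action = rng.choice(sorted(MODEL[model_state]))
        expected = MODEL[model_state][action]
        observed = sut.step(action)  # stimulate the SUT dynamically
        assert observed == expected, f"SUT diverged on {action!r}"
        model_state = expected
    return True

print(online_test(ToySut()))
```

Because verdicts are produced while the test runs, online testing suits non-deterministic or long-running systems where a fixed pre-generated script would be brittle.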



Offline generation of executable tests means that a model-based testing tool generates test cases as computer-readable assets that can be later run automatically; for example, a collection of Python classes that embodies the generated testing logic.
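As an illustration, such a generated asset might resemble the following hypothetical `unittest` fragment, where each test method replays one path derived from a model of a toy vending machine. The SUT class is included only so the fragment runs standalone; a real generator would target an existing system.

```python
import unittest

# Hypothetical example of an offline, executable asset an MBT tool
# might emit: each generated test method replays one model path.

class VendingMachine:
    """Toy SUT, included so the generated tests run standalone."""
    def __init__(self):
        self.credit = 0
    def insert_coin(self):
        self.credit += 1
    def vend(self):
        if self.credit < 2:
            raise RuntimeError("insufficient credit")
        self.credit -= 2

class GeneratedTests(unittest.TestCase):
    # Path: insert_coin -> insert_coin -> vend
    def test_path_1(self):
        sut = VendingMachine()
        sut.insert_coin()
        sut.insert_coin()
        sut.vend()
        self.assertEqual(sut.credit, 0)

    # Path: insert_coin -> vend (model says this must be rejected)
    def test_path_2(self):
        sut = VendingMachine()
        sut.insert_coin()
        with self.assertRaises(RuntimeError):
            sut.vend()

result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(GeneratedTests).run(result)
print("all generated tests passed:", result.wasSuccessful())
```

Once emitted, such a suite can be checked into version control and rerun in CI without the model or the generator being present.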

Offline generation of manually deployable tests means that a model-based testing tool generates test cases as human-readable assets that can later assist in manual testing; for instance, a PDF document describing the generated test steps in a human language.

Deriving tests algorithmically

The effectiveness of model-based testing is primarily due to the potential for automation it offers. If a model is machine-readable and formal to the extent that it has a well-defined behavioural interpretation, test cases can in principle be derived mechanically.




From finite state machines

Often the model is translated to or interpreted as a finite state automaton or a state transition system. This automaton represents the possible configurations of the system under test. To find test cases, the automaton is searched for executable paths. A possible execution path can serve as a test case. This method works if the model is deterministic or can be transformed into a deterministic one. Valuable off-nominal test cases may be obtained by leveraging unspecified transitions in these models.
Depending on the complexity of the system under test and the corresponding model, the number of paths can be very large because of the huge number of possible configurations of the system. To find test cases that cover an appropriate, but finite, number of paths, test selection criteria are needed to guide the search. This technique was first proposed by Offutt and Abdurazik in the paper that started model-based testing. Multiple techniques for test case generation have since been developed and are surveyed by Rushby. Test criteria are commonly described in terms of general graphs in testing textbooks.
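Transition coverage is one such selection criterion: stop once every transition of the machine appears in at least one path. A sketch, assuming a toy state machine for a media player (the states and inputs are illustrative):

```python
from collections import deque

# Sketch: derive test cases by searching a finite state machine for
# executable paths, using transition coverage to keep the suite finite.

FSM = {  # state -> [(input, next_state)]
    "idle":    [("start", "running")],
    "running": [("pause", "paused"), ("stop", "idle")],
    "paused":  [("resume", "running"), ("stop", "idle")],
}

def paths_covering_transitions(fsm, start="idle", max_depth=6):
    """Breadth-first search for paths until every transition is covered."""
    uncovered = {(s, inp) for s, edges in fsm.items() for inp, _ in edges}
    tests = []
    queue = deque([(start, [])])
    while uncovered and queue:
        state, path = queue.popleft()
        for inp, nxt in fsm[state]:
            new_path = path + [inp]
            if (state, inp) in uncovered:
                uncovered.discard((state, inp))
                tests.append(new_path)   # each path is one test case
            if len(new_path) < max_depth:  # bound the search depth
                queue.append((nxt, new_path))
    return tests

for case in paths_covering_transitions(FSM):
    print(case)
```

Stronger criteria (transition pairs, full path coverage up to a bound) generate more test cases from the same machine; the criterion is the dial between cost and thoroughness.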

Theorem proving


Theorem proving was originally used for the automated proving of logical formulas. In model-based testing approaches, the system is modelled by a set of logical expressions (predicates) specifying the system's behaviour. To select test cases, the model is partitioned into equivalence classes over the valid interpretations of the set of logical expressions describing the system under development. Each class represents a certain system behaviour and can therefore serve as a test case. The simplest partitioning uses the disjunctive normal form (DNF) approach: the logical expressions describing the system's behaviour are transformed into disjunctive normal form.
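A minimal sketch of DNF-style partitioning, assuming two illustrative predicates over an integer input: each satisfiable conjunction of predicate literals is one equivalence class, and a witness input found for it serves as the test case for that class.

```python
from itertools import product

# Sketch of DNF partitioning: enumerate every conjunction of predicate
# literals (each predicate asserted or negated); each conjunction that
# has a witness input is one equivalence class / test case.
# The predicates below are illustrative, not from any real system model.

predicates = {
    "positive": lambda x: x > 0,
    "even":     lambda x: x % 2 == 0,
}

def dnf_test_cases(search_space=range(-10, 11)):
    cases = {}
    for signs in product([True, False], repeat=len(predicates)):
        term = tuple(zip(predicates, signs))  # e.g. (("positive", True), ("even", False))
        # look for any input satisfying this conjunction of literals
        for x in search_space:
            if all(predicates[name](x) == want for name, want in term):
                cases[term] = x  # representative input for this class
                break
    return cases

for term, witness in dnf_test_cases().items():
    print(dict(term), "->", witness)
```

With n predicates there are at most 2^n DNF terms; unsatisfiable terms yield no witness and simply drop out, which is how the partition stays aligned with the valid interpretations of the model.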