We applied the results of TRUCONF to a new application: testing protocols of the Internet of Things (IoT). The work has been accepted at QEST 2018.
Bernhard K. Aichernig and Richard Schumi: “How Fast is MQTT? Statistical Model Checking and Testing of IoT Protocols”. In Quantitative Evaluation of Systems – 15th International Conference, QEST 2018, Beijing, China, September 4-7, 2018, Proceedings, Lecture Notes in Computer Science. Springer, 2018, In press. (PDF)
The paper will be presented at the International Conference on Quantitative Evaluation of Systems (QEST 2018) on September 4-7, 2018 in Beijing, China.
MQTT is one of the major messaging protocols in the Internet of things (IoT). In this work, we investigate the expected performance of MQTT implementations in various settings. We present a model-based performance testing approach that allows a fast simulation of specific usage scenarios in order to perform a quantitative analysis of the latency. Out of automatically generated log-data, we learn the distributions of latencies and apply statistical model checking to analyse the functional and timing behaviour. The result is a novel testing and verification technique for analysing the performance of IoT protocols. Two well-known open source MQTT implementations are evaluated and compared.
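To illustrate the core idea, the following sketch shows how a latency distribution learned from generated log data can feed a Monte Carlo estimate of the probability that a usage scenario stays within a latency bound. All numbers, the normal-distribution fit, and the scenario shape are illustrative assumptions, not values from the paper.

```python
import random
import statistics

# Hypothetical per-message latencies (ms) learned from generated MQTT log
# data; real values would come from measured publish/subscribe round trips.
learned_latencies = [12.1, 14.3, 11.8, 20.5, 13.0, 15.2, 12.7, 18.9, 13.4, 14.1]
mu = statistics.mean(learned_latencies)
sigma = statistics.stdev(learned_latencies)

def simulate_scenario(n_messages, rng):
    """Simulate the total latency of a scenario of n_messages by sampling
    per-message latencies from the learned distribution (modelled here as
    a normal distribution, an assumption)."""
    return sum(max(0.0, rng.gauss(mu, sigma)) for _ in range(n_messages))

def smc_estimate(bound_ms, n_messages=10, runs=10_000, seed=0):
    """Monte Carlo estimate of P(total scenario latency <= bound_ms)."""
    rng = random.Random(seed)
    hits = sum(simulate_scenario(n_messages, rng) <= bound_ms for _ in range(runs))
    return hits / runs

print(smc_estimate(bound_ms=160.0))
```

Running such an estimate for each broker under test gives the quantitative basis on which two MQTT implementations can be compared.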
A new publication has been accepted at SETTA 2018.
Bernhard K. Aichernig, Severin Kann and Richard Schumi: “Statistical Model Checking of Response Times for Different System Deployments”. In Dependable Software Engineering. Theories, Tools, and Applications – 4th International Symposium, SETTA 2018, Beijing, China, September 4-6, 2018, Proceedings, Lecture Notes in Computer Science. Springer, 2018, In press. (PDF)
The paper will be presented at the Symposium on Dependable Software Engineering: Theories, Tools and Applications (SETTA 2018) on September 4-6, 2018 in Beijing, China.
Performance testing is becoming increasingly important for interactive systems. Evaluating their performance with respect to user expectations is complex, especially for different system deployments. Various load-testing approaches and performance-simulation methods aim at such analyses. However, these techniques have certain disadvantages, such as a high testing effort for load testing and questionable model accuracy for simulation methods. Hence, we propose a combination of both techniques. We apply statistical model checking with a learned timed model and evaluate the results on the real system with hypothesis testing. Moreover, we check the established hypotheses of a reference system on various system deployments (configurations), like different hardware or network settings, and analyse the influence on the performance. Our method is realised with a property-based testing tool that is extended with algorithms from statistical model checking. We illustrate the feasibility of our technique with an industrial case study of a web application.
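The "evaluate on the real system with hypothesis testing" step can be sketched with Wald's sequential probability ratio test, a standard technique in statistical model checking. The probabilities and the simulated observations below are hypothetical, not taken from the case study.

```python
import math
import random

def sprt(samples, p0, p1, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test: decide between
    H0: p >= p0 (requirement met) and H1: p <= p1 (requirement violated),
    consuming Boolean pass/fail samples one at a time."""
    a = math.log(beta / (1 - alpha))   # accept-H0 boundary
    b = math.log((1 - beta) / alpha)   # accept-H1 boundary
    llr = 0.0
    for ok in samples:
        # log-likelihood ratio of H1 vs H0 for one Bernoulli observation
        llr += math.log((p1 if ok else 1 - p1) / (p0 if ok else 1 - p0))
        if llr <= a:
            return "accept H0"
        if llr >= b:
            return "accept H1"
    return "undecided"

# Hypothetical check: the learned model predicted that 95% of requests
# respond within the limit; test that claim against (here simulated)
# observations from the deployed system.
rng = random.Random(1)
observations = (rng.random() < 0.96 for _ in range(2000))
print(sprt(observations, p0=0.95, p1=0.90))
```

The same test can then be re-run on each deployment configuration to see whether the hypotheses established on the reference system still hold.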
Christian Burghard has written his Master’s Thesis as part of the TRUCONF project. The thesis describes the domain-specific language which was designed specifically for the modelling of AVL measurement device state machines, as well as the associated modelling tool and test case generation toolchain.
Christian Burghard: “Model-based Testing of Measurement Devices Using a Domain-specific Modelling Language”. Graz University of Technology, Institute of Software Technology, 2018. (PDF)
The work was supervised by Bernhard K. Aichernig. The defense is scheduled for May 24, 2018.
The practice of model-based testing finds increasing application in industry, due to its potential to cope with the ever-rising complexity of technical systems. For this reason, AVL List GmbH is introducing a model-based testing methodology for its portfolio of automotive measurement devices. In a previous project by AVL, Graz University of Technology and the Austrian Institute of Technology, a model-based mutation testing approach was developed. While this approach was successfully validated in terms of functionality, it was rejected by AVL’s test engineers, as they deemed its UML-based modelling front-end too difficult to use in their specific industrial setting. In the thesis at hand, we examine the tool-composition-, usability- and experience-related reasons which led to the rejection of this modelling approach. We present a textual domain-specific language which we specifically tailored to the sole purpose of modelling AVL’s measurement device state machines. To ensure the intended improvement in the user experience of the modelling formalism, we developed the language in close and frequent collaboration with the test engineers. The resulting domain-specific language, called MDML, turned out to be very easy to learn and able to efficiently encode measurement device state machine models. In conjunction with MDML, we further developed a dedicated modelling tool based on the well-known Eclipse IDE. As with the language, we tailored this modelling tool to our use case and enriched it with a number of features providing user guidance and a direct connection to AVL-internal data sources. Most importantly, we integrated a test-case generation toolchain built around the pre-existing MoMuT test case generator. This toolchain involves a model transformation from MDML into object-oriented action systems to serve as input for the generator.
It further involves the concretization of MoMuT’s abstract test cases into executable test code. Lastly, we show that the capabilities of our model-based testing methodology are at least on par with those of the previous UML-based one by means of a case study involving the generation and execution of tests for one of AVL’s measurement devices.
Model-Based Mutation Testing; Domain-Specific Language; State Machines; UML vs. DSL; Test Case Generation; User Experience; Model Transformation; OOAS
As a concluding event for the project, we held a workshop at AVL on 31 January 2018. During this workshop, we presented the methods developed within the project and the evaluation results obtained to an audience of AVL staff members. We arranged two sessions, one for each use case tackled within TRUCONF.
The major outcome of the first use case, which was concerned with testing measurement devices, was a textual domain-specific modelling language called MDML. This language is easy to learn and to use, so that non-experts can create models for model-based test-case generation [Burghard et al.].
The second use case targeted performance testing of a web-service application for test-bed management, called Test Factory Management Suite (TFMS). For this use case, we presented a testing technique that combines two existing methods, property-based testing and statistical model checking, in order to predict and check whether the system’s requests satisfy certain response-time limits [Schumi et al.].
We received a lot of positive feedback for our work, and our results were praised by AVL. Moreover, the success of the project laid the foundation for future collaborations and projects.
Project Team at the Final Workshop
Silvio Marcovic contributed to the test-case generation methods and tools of TRUCONF with his Master’s Thesis.
Silvio Marcovic: “Integrating an External Test-Case Generator into a Property-Based Testing Tool for Testing Web-Services”. Graz University of Technology, Institute of Software Technology, 2017. (PDF)
The work was supervised by Bernhard K. Aichernig and co-supervised by Richard Schumi. It was defended on 9 November 2017.
A flyer summarising the results of TRUCONF and the achievements within the project has been created. (PDF)
A publication has been accepted at MODELSWARD 2018.
Gerald Stieglbauer, Christian Burghard, Stefan Sobernig and Robert Korošec: “A Daily Dose of DSL – MDE Micro Injections in Practice”. In Proceedings of the 6th International Conference on Model-Driven Engineering and Software Development (MODELSWARD 2018), Funchal, Portugal, 22-24 January 2018, pages 642-651, SCITEPRESS, 2018. (PDF)
The paper will be presented at the 6th International Conference on Model-Driven Engineering and Software Development (MODELSWARD 2018) in Funchal, Portugal (22-24 January 2018).
Another publication has been accepted in the International Journal on Software and Systems Modeling (SoSyM).
Bernhard K. Aichernig and Richard Schumi: “Property-Based Testing of Web Services by Deriving Properties from Business-Rule Models”. International Journal on Software and Systems Modeling (SoSyM), 2017, Open Access. (PDF) (doi:10.1007/s10270-017-0647-0)
Property-based testing is well suited for web-service applications, as various case studies have already shown. For example, it has been demonstrated that JSON schemas can be used to automatically derive test-case generators for web forms. In this work, we present a test-case generation approach for a rule-engine-driven web-service application. Business-rule models serve as our input for property-based testing. We parse these models to automatically derive generators for sequences of web-service requests together with their required form data. Property-based testing is mostly applied in the context of functional programming. Here, we define our properties in an object-oriented style in C# with the tool FsCheck. We apply our method to the business-rule models of an industrial web-service application in the automotive domain.
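The derivation of generators from business-rule models can be sketched as a random walk over a rule model that couples each permitted request with generators for its form fields. The states, requests, and field generators below are toy stand-ins (in Python rather than C#/FsCheck), not the industrial TFMS rules.

```python
import random
import string

# A toy stand-in for a business-rule model: for each application state it
# lists the permitted web-service requests, the form fields each request
# needs, and the successor state. All names are hypothetical.
RULE_MODEL = {
    "Start":     {"CreateOrder": (["customer"], "Editing")},
    "Editing":   {"AddItem": (["item", "quantity"], "Editing"),
                  "Submit":  ([], "Submitted")},
    "Submitted": {},
}

FIELD_GENERATORS = {
    "customer": lambda rng: "".join(rng.choices(string.ascii_uppercase, k=6)),
    "item":     lambda rng: rng.choice(["brake", "sensor", "cable"]),
    "quantity": lambda rng: rng.randint(1, 99),
}

def generate_sequence(max_len=8, seed=None):
    """Random walk over the rule model, producing a request sequence with
    generated form data -- the core idea behind deriving property-based-
    testing generators from business-rule models."""
    rng = random.Random(seed)
    state, sequence = "Start", []
    for _ in range(max_len):
        choices = RULE_MODEL[state]
        if not choices:
            break  # terminal state reached
        request = rng.choice(sorted(choices))
        fields, next_state = choices[request]
        data = {f: FIELD_GENERATORS[f](rng) for f in fields}
        sequence.append((request, data))
        state = next_state
    return sequence

for step in generate_sequence(seed=42):
    print(step)
```

Each generated sequence is then executed against the web service, with the properties checking the responses along the way.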
A new publication has been accepted at ICTSS 2017.
Richard Schumi, Priska Lang, Bernhard K. Aichernig, Willibald Krenn, and Rupert Schlick: “Checking Response-Time Properties of Web-Service Applications Under Stochastic User Profiles”. In Testing Software and Systems – 29th IFIP WG 6.1 International Conference, ICTSS 2017, St. Petersburg, Russia, October 9-11, 2017, Proceedings, volume 10533 of Lecture Notes in Computer Science, pages 293-310. Springer, 2017. (PDF) (doi:10.1007/978-3-319-67549-7_18)
The paper was presented at the 29th IFIP International Conference on Testing Software and Systems (ICTSS 2017) in St. Petersburg, Russia (October 9-11, 2017).
Performance evaluation of critical software is important but also computationally expensive. It usually involves sophisticated load-testing tools and demands a large amount of computing resources. Analysing different user populations requires even more effort and becomes infeasible in most realistic cases. Therefore, we propose a model-based approach. We apply model-based test-case generation to generate log-data and learn the associated distributions of response times. These distributions are added to the behavioural models on which we perform statistical model checking (SMC) in order to assess the probabilities of the required response times. Then, we apply classical hypothesis testing to evaluate if an implementation of the behavioural model conforms to these timing requirements. This is the first model-based approach for performance evaluation combining automated test-case generation, cost learning and SMC for real applications. We realised this method with a property-based testing tool, extended with SMC functionality, and evaluate it on an industrial web-service application.
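A minimal sketch of the SMC step, assuming learned response-time distributions per request type (modelled here as exponentials, an assumption) and a stochastic user profile: we sample user sessions and estimate the probability that every response stays within a limit. Request names, means, and profile weights are invented for illustration.

```python
import random

# Hypothetical learned response-time distributions (ms) per request type;
# real distributions would be learned from generated log data.
LEARNED_MEANS_MS = {"login": 180.0, "search": 90.0, "save": 250.0}

# A stochastic user profile: the probability that a user issues each
# request next (values are illustrative).
USER_PROFILE = [("login", 0.2), ("search", 0.6), ("save", 0.2)]

def simulate_session(length, rng):
    """Sample one user session: pick requests according to the profile
    and draw a response time for each from its learned distribution."""
    times = []
    for _ in range(length):
        r, acc = rng.random(), 0.0
        for request, p in USER_PROFILE:
            acc += p
            if r <= acc:
                times.append(rng.expovariate(1.0 / LEARNED_MEANS_MS[request]))
                break
    return times

def prob_within_limit(limit_ms, length=20, runs=5000, seed=0):
    """SMC-style Monte Carlo estimate of the probability that every
    response in a session stays below limit_ms."""
    rng = random.Random(seed)
    hits = sum(all(t <= limit_ms for t in simulate_session(length, rng))
               for _ in range(runs))
    return hits / runs
```

The probability obtained this way is the prediction that hypothesis testing then checks against the real implementation.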
Another publication has been accepted at A-MOST 2017.
Bernhard K. Aichernig, Silvio Marcovic and Richard Schumi: “Property-Based Testing with External Test-Case Generators”. In IEEE 10th International Conference on Software Testing, Verification, and Validation Workshops (ICSTW), 13th Workshop on Advances in Model Based Testing (A-MOST 2017), Tokyo, Japan, 13-17 March 2017, pages 337–346. IEEE Computer Society, 2017. (PDF) (doi:10.1109/ICSTW.2017.62)
The paper will be presented at the 13th Workshop on Advances in Model Based Testing (A-MOST 2017) in Tokyo, Japan on 17 March 2017. The workshop is part of the 10th IEEE International Conference on Software Testing, Verification and Validation (ICST 2017).
Previous work has demonstrated that property-based testing (PBT) is a flexible random testing technique that facilitates the generation of complex form data. For example, it has been shown that PBT can be applied to web-service applications that require various inputs for web forms. We want to exploit this data generation feature of PBT and combine it with an external test-case generator that can generate test cases via model-based mutation testing. PBT already supports the generation of test cases from stateful models, but it is limited, because it normally only considers the current state during exploration of the model. We want to give the tester more control over how to produce meaningful operation sequences for test cases. By integrating an external test-case generator into a PBT tool, we can create test cases that follow certain coverage criteria. This allows us to reduce the test execution time, because we do not need a large number of random tests to cover certain model aspects. We demonstrate our approach with a simple example of an external generator for regular expressions and perform an industrial case study, where we integrate an existing model-based mutation testing generator.
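The integration idea can be sketched as follows: the external generator fixes the operation order of each test case, while PBT-style generators fill in the input data. The operations, sequences, and parameter shapes below are hypothetical stand-ins, not the actual MoMuT output or FsCheck API.

```python
import random

# Operation sequences as an external model-based test-case generator might
# produce them (e.g. chosen to satisfy a coverage criterion); the sequences
# here are made up for illustration.
EXTERNAL_SEQUENCES = [
    ["open", "write", "close"],
    ["open", "close"],
    ["open", "write", "write", "close"],
]

def data_for(operation, rng):
    """PBT-style random data generation for an operation's parameters
    (hypothetical parameter shapes)."""
    if operation == "write":
        return {"payload": bytes(rng.randrange(256) for _ in range(4))}
    return {}

def concretize(sequence, seed=None):
    """Combine both worlds: the external generator fixes the operation
    order, property-based testing fills in the input data."""
    rng = random.Random(seed)
    return [(op, data_for(op, rng)) for op in sequence]

for seq in EXTERNAL_SEQUENCES:
    print(concretize(seq, seed=7))
```

Because the operation order is fixed externally, the PBT tool only needs to randomise the data, so far fewer runs are required to cover the targeted model aspects.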