TU BRAUNSCHWEIG

Alexander Knüppel, M.Sc.

Institut für Softwaretechnik und Fahrzeuginformatik
Technische Universität Braunschweig

Informatikzentrum
Mühlenpfordtstr. 23
38106 Braunschweig

Room: IZ 412
Office hours: by appointment

Phone: +49 531 391-2289

Email: a.knueppel[at]tu-bs.de
ResearchGate

Research Interests

  • Feature-oriented software development (FOSD)
  • Software product lines (SPLs) and their analysis
  • Feature modeling
  • Formal methods / formal verification: deductive verification, model checking, type systems, static analysis, assertion checking
  • Formal specification: Design by Contract (DbC), Z notation, B-Method
  • Theorem proving with Coq
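
Design by contract, listed above, makes a method's assumptions and guarantees explicit as pre- and postconditions. A minimal sketch in Python (the function and its contract are invented for illustration; deductive verifiers such as KeY prove JML contracts on Java code statically rather than checking them at runtime):

```python
# Illustrative design-by-contract sketch: the contract is emulated
# with runtime assertions; a deductive verifier would instead prove
# these conditions statically for all possible inputs.

def binary_search(xs, key):
    # Precondition: xs must be sorted in ascending order.
    assert all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1)), "precondition violated"
    lo, hi = 0, len(xs)
    while lo < hi:
        mid = (lo + hi) // 2
        if xs[mid] < key:
            lo = mid + 1
        elif xs[mid] > key:
            hi = mid
        else:
            # Postcondition (found): the returned index holds the key.
            assert xs[mid] == key
            return mid
    # Postcondition (not found): the key does not occur in xs.
    assert key not in xs
    return -1
```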

Activities

Program Committees

  • SPLC'18 22nd International Systems and Software Product Line Conference (Artifact Track PC Member)
  • SEFM'18 16th International Conference on Software Engineering and Formal Methods (Proceedings Manager)

Projects

  • CCC: Controlling Concurrent Change

Teaching

Winter Semester 2018/2019

Summer Semester 2018

Winter Semester 2017/2018

Summer Semester 2017

Winter Semester 2016/2017

  • Team project: Plug-in Development with Eclipse (organizer)

Summer Semester 2016

Currently offered student theses:

Please check here: current thesis topics. If you are interested, just get in touch.

Supervised Student Theses

Publications

2018

  • Tobias Runge, Ina Schaefer, Alexander Knüppel, Loek Cleophas, Derrick Kourie, and Bruce W. Watson. Tool support for Confidentiality by Construction. In HILT 2018 Workshop on Languages and Tools for Ensuring Cyber-Resilience in Critical Software-Intensive Systems (HILT'18), November 2018. To appear.
    In many software applications, it is necessary to preserve confidentiality of information. Therefore, security mechanisms are needed to enforce that secret information does not leak to unauthorized users. However, most language-based techniques to enable information flow control work post-hoc, deciding whether a specific program violates a confidentiality policy. In contrast, we have proposed a refinement-based approach to derive programs that preserve confidentiality-by-construction in previous work. This approach follows the principles of Dijkstra’s correctness-by-construction. In this extended abstract, we present the implementation and tool support of that refinement-based approach, allowing information flow policies to be specified first and programs complying with these policies to be created by construction. In particular, we present the idea of confidentiality-by-construction using an example and discuss the IDE we have developed.
  • Ina Schaefer, Tobias Runge, Alexander Knüppel, Loek Cleophas, Derrick Kourie, and Bruce W. Watson. Towards Confidentiality-by-Construction. In 8th International Symposium On Leveraging Applications of Formal Methods, Verification and Validation (ISoLA'18), November 2018. Springer.
    Guaranteeing that information processed in computing systems remains confidential is vital for many software applications. To this end, language-based security mechanisms enforce fine-grained access control policies for program variables to prevent secret information from leaking through unauthorized access. However, approaches for language-based security by information flow control mostly work post-hoc, classifying programs into whether they comply with information flow policies or not after the program has been constructed. Means for constructing programs that satisfy given information flow control policies are still missing. Following the correctness-by-construction approach, we propose a development method for specifying information flow policies first and constructing programs satisfying these policies subsequently. We replace functional pre- and postcondition specifications with confidentiality properties and define rules to derive new confidentiality specifications for each refining program construct. We discuss possible extensions including initial ideas for tool support. Applying correctness-by-construction techniques to confidentiality properties constitutes a first step towards security-by-construction.
  • Alexander Knüppel, Thomas Thüm, Carsten I. Pardylla, and Ina Schaefer. Scalability of Deductive Verification Depends on Method Call Treatment. In 8th International Symposium On Leveraging Applications of Formal Methods, Verification and Validation (ISoLA'18), November 2018. Springer.
    Today, software verification is vital for safety-critical and security-critical applications used in industry. However, specifying large-scale software systems for efficient verification still demands high effort and expertise. In deductive verification, design by contract is a widespread software methodology to explicitly specify the behavior of programs using Hoare-style pre- and postconditions in a modular fashion. During verification, a method call can either be replaced by an available method contract or by inlining the method's implementation. We argue that neither approach alone is feasible for verifying real-world software systems. Relying only on method inlining does not scale, as the number of inlined methods may lead to a combinatorial explosion. But specifying software is in itself notoriously hard and time-consuming, making it economically unrealistic to specify large-scale software completely. We discuss circumstances in which one of the two approaches is preferred. We evaluate the program verifier KeY with large programs varying in the number of method calls of each method and the maximum depth of the stack trace. Our analyses show that specifying 10% additional methods in a program can reduce the verification costs by up to 50%, and, thus, an effective combination of contracting and method inlining is indispensable for the scalability of deductive verification.
  • Alexander Knüppel, Thomas Thüm, Carsten Pardylla, and Ina Schaefer. Experience Report on Formally Verifying Parts of OpenJDK's API with KeY. In Fourth Workshop on Formal Integrated Development Environment (F-IDE), July 2018. To appear.
    Deductive verification of software has not yet found its way into industry, as complexity and scalability issues require highly specialized experts. The long-time perspective is, however, to develop verification tools aiding industrial software developers to find bugs or bottlenecks in software systems faster and more easily. The KeY project constitutes a framework for specifying and verifying software systems, aiming at making formal verification tools applicable for mainstream software development. To help the developers of KeY, its users, and the deductive verification community, we summarize our experiences with KeY 2.6.1 in specifying and verifying real-world Java code from a user's perspective. To this end, we concentrate on parts of the Collection-API of OpenJDK, where an informal specification exists. While we describe how we bridged informal and formal specification, we also exhibit accompanying challenges that we encountered. Our experiences are that (a) in principle, deductive verification for API-like code bases is feasible, but requires high expertise, (b) developing formal specifications for existing code bases is still notoriously hard, and (c) the under-specification of certain language constructs in Java is challenging for tool builders.
  • Alexander Knüppel, Thomas Thüm, Carsten Pardylla, and Ina Schaefer. Understanding Parameters of Deductive Verification: An Empirical Investigation of KeY. In 9th International Conference on Interactive Theorem Proving (ITP), Held as Part of the Federated Logic Conference, FloC 2018, Oxford, UK, July 2018. Springer.
    As formal verification of software systems is a complex task comprising many algorithms and heuristics, modern theorem provers offer numerous parameters that are to be selected by a user to control how a piece of software is verified. Evidently, the number of parameters even increases with each new release. One challenge is that default parameters are often insufficient to close proofs automatically and are not optimal in terms of verification effort. The verification phase becomes hardly accessible to non-experts, who typically must follow a time-consuming trial-and-error strategy to choose the right parameters for even trivial pieces of software. To aid users of deductive verification, we apply machine learning techniques to empirically investigate which parameters and combinations thereof impair or improve provability and verification effort. We exemplify our procedure on the deductive verification system KeY 2.6.1 and formulate 38 hypotheses of which only three have been invalidated. We identified parameters that represent a trade-off between high provability and low verification effort, making it possible to prioritize the selection of a parameter for either direction. Our insights give tool builders a better understanding of their control parameters and constitute a stepping stone towards automated deductive verification and better applicability of verification tools for non-experts.
  • Alexander Knüppel, Thomas Thüm, Stephan Mennicke, Jens Meinicke, and Ina Schaefer. Is There a Mismatch Between Real-World Feature Models and Product-Line Research?. In Software Engineering, Bonn, Germany, March 2018. Gesellschaft für Informatik (GI).
    This work has been presented at the joint meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering in Paderborn, Germany. Feature modeling has emerged as the de-facto standard to capture variability of a software product line in a compact and understandable fashion. Multiple feature modeling languages, which have evolved over the last decades to manage industrial-size product lines, have been proposed. However, less expressive languages, solely permitting require and exclude constraints, are permanently and carelessly used in product-line research. We address the question of whether those less expressive languages are sufficient for industrial product lines. We developed an algorithm to eliminate complex cross-tree constraints in a feature model, enabling the combined usage of tools and algorithms working with different feature model dialects in a plug-and-play manner. However, the scope of our algorithm is limited. Our evaluation on large feature models, including the Linux kernel, gives evidence that require and exclude constraints are not sufficient to express real-world feature models. Hence, we promote that research on feature models needs to consider arbitrary propositional formulas as cross-tree constraints in the future.
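
The post-hoc information flow checking that the confidentiality-by-construction papers above contrast with can be sketched as follows (the variable names and the two-level policy are invented for illustration): every variable is labeled high (secret) or low (public), and a direct assignment from a high source into a low sink violates the confidentiality policy.

```python
# Minimal post-hoc information flow check (illustrative only): a
# program is a list of assignments, each variable carries a security
# label, and a direct flow from a high (secret) source into a low
# (public) sink violates the confidentiality policy.

LEVELS = {"password": "high", "tmp": "high", "log_entry": "low"}

def violates_policy(assignments):
    # assignments: list of (target, source) variable pairs
    for target, source in assignments:
        if LEVELS[source] == "high" and LEVELS[target] == "low":
            return True
    return False

# Copying the secret into another high variable is fine; leaking it
# into the public log entry is not.
print(violates_policy([("tmp", "password"), ("log_entry", "tmp")]))
```

Confidentiality-by-construction, as proposed in the papers, instead derives the program from such a policy by refinement, so that a violating assignment can never be written in the first place.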
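
The scalability argument in "Scalability of Deductive Verification Depends on Method Call Treatment" above can be sketched with a back-of-the-envelope calculation (the numbers are hypothetical, not taken from the paper): if every method body calls k further methods and calls are inlined up to depth d, the verifier faces a geometric number of inlined bodies, whereas replacing each call by its contract adds only one proof obligation per call.

```python
# Hypothetical sketch of why pure method inlining does not scale:
# with k calls per method body inlined up to depth d, the verifier
# sees 1 + k + k^2 + ... + k^d method bodies, while a contract-based
# treatment stays linear in the number of direct calls.

def inlined_bodies(k, d):
    # number of method bodies after inlining to depth d
    return sum(k ** level for level in range(d + 1))

def contract_obligations(k):
    # one proof obligation per direct call when contracts exist
    return k

for depth in (1, 3, 5):
    print(depth, inlined_bodies(3, depth), contract_obligations(3))
```

With k = 3, inlining to depth 5 already yields 364 bodies to reason about, which mirrors the paper's observation that specifying even a small fraction of methods can cut verification costs substantially.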
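
The distinction drawn in "Is There a Mismatch Between Real-World Feature Models and Product-Line Research?" above can be illustrated with a toy feature model (the feature names are invented): require and exclude constraints are implications between exactly two features, whereas real-world models also need arbitrary propositional formulas over many features as cross-tree constraints.

```python
from itertools import product

# Toy feature model (feature names invented): require/exclude are
# binary implications, while the third constraint is an arbitrary
# propositional formula that neither form alone can express.

def requires(a, b):       # selecting a forces b
    return lambda cfg: (not cfg[a]) or cfg[b]

def excludes(a, b):       # a and b are mutually exclusive
    return lambda cfg: (not cfg[a]) or (not cfg[b])

FEATURES = ("ssl", "compression", "logging")

CONSTRAINTS = [
    requires("ssl", "logging"),
    excludes("ssl", "compression"),
    # arbitrary cross-tree constraint: at least one feature selected
    lambda cfg: cfg["ssl"] or cfg["compression"] or cfg["logging"],
]

def valid_configs():
    # brute-force enumeration of all configurations of the toy model
    configs = []
    for bits in product([False, True], repeat=len(FEATURES)):
        cfg = dict(zip(FEATURES, bits))
        if all(c(cfg) for c in CONSTRAINTS):
            configs.append(cfg)
    return configs

print(len(valid_configs()))  # number of valid configurations
```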

2017

  • Alexander Knüppel, Thomas Thüm, Stephan Mennicke, Jens Meinicke, and Ina Schaefer. Is There a Mismatch Between Real-World Feature Models and Product-Line Research?. In Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE). 2017. To appear. Software artifacts available: GitHub
    Feature modeling has emerged as the de-facto standard to compactly capture the variability of a software product line. Multiple feature modeling languages, which have evolved over the last decades to manage industrial-size product lines, have been proposed. However, less expressive languages, solely permitting require and exclude constraints, are permanently and carelessly used in product-line research. We address the question of whether those less expressive languages are sufficient for industrial product lines. We developed an algorithm to eliminate complex cross-tree constraints in a feature model, enabling the combination of tools and algorithms working with different feature model dialects in a plug-and-play manner. However, the scope of our algorithm is limited. Our evaluation on large feature models, including the Linux kernel, gives evidence that require and exclude constraints are not sufficient to express real-world feature models. Hence, we promote that research on feature models needs to consider arbitrary propositional formulas as cross-tree constraints in the future.

2016


  Last updated: 02.09.2016