Theses and Projects

Topic Presentation for the Summer Term 2026

We present all current topics for projects as well as Bachelor's and Master's theses at the ISF. In addition, we will briefly present our course offerings for the summer term 2026. All interested parties are cordially invited.

When? Tuesday, February 3rd, 2026, from 11:30 am to 1 pm

Where? PK 11.1

The slides of the presentation will be published here on the same day.

  • Teaching in Summer Term 2026 and Open Theses Topics

Topics

We regularly update the following list of topics. We are also open to your own topic suggestions.

Looking for a team project? Here you can find information on the current team project at the ISF.

Contact

If you are interested and/or have any questions, feel free to contact the responsible supervisor. They can then discuss the topic with you in more detail and try to adapt it to your personal preferences.

Legend

Shorthand  Meaning
P          Project
B          Bachelor's Thesis
M          Master's Thesis
R          Topic Reserved

Software Product Line Reengineering

Analyzing the Research Workflow in Python Research Scripts (P/M)

Context

Every data science research script has an input and an output at some point, but there are multiple steps in between that transform the provided data. Given a big enough sample, it should be possible to derive a workflow that most scientists adhere to and compare it with suggested workflows described in the literature [1].

This workflow can then be used for further analyses, such as counting the function calls for each stage in the workflow. To facilitate that, we need to create a mapping of popular data science functions to the identified stages.

A short example (in R) of how function calls are annotated to a stage in the workflow:

sample <- read.csv("sample.csv", sep = ";") #import
plot(sample$var1 ~ sample$var2, pch = 20, col = "grey58", ylim = c(0, 1), xlim = c(0, 1)) #visualize
abline(lm(sample$var1 ~ sample$var2)) #visualize


Research Problem

In this work, we want to explore whether there is a common workflow across disciplines (such as Chemistry [2], Biology, Social Sciences [3], etc.) that high-quality papers adhere to. By exploring outstanding journals and conferences in each field, we want to collect samples of the way they structure their scripts. The derived workflow is then compared to literature on proposed workflows to check for overlap. Furthermore, we want to create a mapping from the functions used in the process to their respective stage in the derived workflow.


Tasks

  • Identify a set of conferences/journals as a basis for a literature review.
  • Collect recent publications from this set that perform data science in Python.
  • Derive a multi-stage workflow and compare it to literature on data science workflows.
  • Identify popular libraries that are used and map their functions to the stages of your workflow.

Related Work and Further Reading

[1] Huber, F. (2025). Hands-on Introduction to Data Science with Python. v0.23, 2025, Zenodo. https://doi.org/10.5281/zenodo.10074474

[2] Davila-Santiago, E.; Shi, C.; Mahadwar, G.; Medeghini, B.; Insinga, L.; Hutchinson, R.; Good, S.; Jones, G. D. Machine learning applications for chemical fingerprinting and environmental source tracking using non-target chemical data. Environ. Sci. Technol. 2022, 56 (7), 4080–4090. DOI: 10.1021/acs.est.1c06655.

[3] Di Sotto S, Viviani M. Health Misinformation Detection in the Social Web: An Overview and a Data Science Approach. International Journal of Environmental Research and Public Health. 2022; 19(4):2173. https://doi.org/10.3390/ijerph19042173


Contact

Ruben Dunkel

Reengineering of R Research Scripts using gardenR (P/M)

Context

Data science in R suffers from a common problem: most of the time, a script is created for a single publication and then left to rot. Additionally, most published R scripts are in no state to reproduce results [1]. To combat this single-use practice, the gardenR tool has been created. gardenR uses a set of predefined function calls to create a dependency graph via program slicing, which is then converted into a Software Product Line (SPL). While the application has been tested on research data, there has been no evaluation yet of whether the annotated SPL meets the expectations of researchers in the field.


Research Problem

In this work, we want to collect recent R data science scripts from publications and annotate them with gardenR. The annotated scripts are then hosted on a website of your creation that allows users to select a configuration of the corresponding Software Product Line; variants are created by executing C preprocessor functionality on the client side [2]. The generated variant can then be downloaded. Finally, we want to contact the authors of the publications and present our annotated version, including a visual representation of their script. In an interview, the researchers are then asked about the potential they see in the annotated code and for further ideas for improvement.
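
To make the variant-derivation step concrete, here is a minimal sketch in Python (an assumption about the mechanism for illustration only, not gardenR's actual implementation; a real C preprocessor additionally handles #else, #ifndef, and macro expansion [2]):

# Keep only the lines whose enclosing #ifdef blocks match the selected features.
def derive_variant(script_lines, selected_features):
    kept, active = [], [True]
    for line in script_lines:
        stripped = line.strip()
        if stripped.startswith("#ifdef"):
            feature = stripped.split()[1]
            active.append(active[-1] and feature in selected_features)
        elif stripped.startswith("#endif"):
            active.pop()
        elif active[-1]:
            kept.append(line)
    return kept

annotated = [
    'sample <- read.csv("sample.csv", sep = ";")',
    "#ifdef VISUALIZE",
    "plot(sample$var1 ~ sample$var2)",
    "#endif",
]
print("\n".join(derive_variant(annotated, {"VISUALIZE"})))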


Tasks

  • Collect recent R data science scripts.
  • Annotate them with gardenR into an SPL.
  • Create an online configurator that allows the annotated scripts to be turned into a selected variant using C preprocessor statements.
  • Conduct interviews with the researchers who published the scripts on the usability of the annotated script and its derivatives.

Related Work and Further Reading

[1] Vidoni, Melina. "Software engineering and r programming: A call for research." (2021).

[2] https://gcc.gnu.org/onlinedocs/cpp/Ifdef.html


Contact

Ruben Dunkel

(R) Overview of the Usage of Programming Languages in Data Science (B/M/P)

Context

With empirical science creating large sets of data, the discipline of data science is more important than ever to wrangle conclusions from heaps of unstructured data [1]. Several popular languages are used in data science, such as Python, R, or Julia. While these are generally seen as the most prevalent, there is no data on how popular they are in different disciplines (such as Chemistry, Biology, Social Sciences, etc.).


Research Problem

We want to create a comprehensive overview of the distribution of programming languages and libraries across the different fields of research. By comparing their use and distribution, we want to gain insight into what the current stack of tools used for data science looks like.
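
As a rough sketch of how such a tool could access Zenodo, the snippet below queries Zenodo's public REST API (the query string and the metadata fields to extract are illustrative assumptions to be refined in the project):

import requests

def search_zenodo(query, size=10):
    # Query Zenodo's public record-search endpoint.
    response = requests.get(
        "https://zenodo.org/api/records",
        params={"q": query, "size": size},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["hits"]["hits"]

# Hypothetical query; fields such as title or keywords would be stored by field of research.
for record in search_zenodo('"data science" AND chemistry'):
    print(record["metadata"]["title"])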


Tasks

  • Create or modify a tool that accesses Zenodo and stores data science artifacts by field of research.
  • Analyze which languages/frameworks/libraries the artifacts use and extract the functions provided by the libraries.
  • Create rankings of languages/frameworks/libraries by field of research.

Related Work and Further Reading

[1] Van Der Aalst, Wil. "Process mining: Overview and opportunities." ACM Transactions on Management Information Systems (TMIS) 3.2 (2012): 1–17.


Contact

Ruben Dunkel

Mapping Popular Functions in Python Research Scripts to a Data Science Workflow (B)

Context

Data science research scripts generally have some data as input and a form of visualization as output, but in between, multiple actions are performed to transform the provided data. This underlying workflow [1] can be used for further analyses, such as counting the number of function calls that correspond to each stage (import/visualization/tidying/...) of the workflow. To facilitate that, we need to create a mapping of popular data science functions to the different stages of a given workflow.

A short example (in R) of how function calls can be annotated to a stage in the workflow:

sample <- read.csv("sample.csv", sep = ";") #import
plot(sample$var1 ~ sample$var2, pch = 20, col = "grey58", ylim = c(0, 1), xlim = c(0, 1)) #visualize
abline(lm(sample$var1 ~ sample$var2)) #visualize


Research Problem

There is work on categorizing Jupyter notebook cells as part of a data science step [2], but we want to create a mapping with higher granularity. For this, we want to collect popular Python packages and map their exposed functions to the steps used for classifying the notebook cells [2]. The created mapping will help extend a tool that transforms research scripts into Software Product Lines.
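
To make the intended mapping concrete, a first sketch could be a plain lookup table from fully qualified function names to workflow-stage labels. The label names below follow the style of the DASWOW labels [3]; the concrete assignments are illustrative assumptions:

# Illustrative sketch: map fully qualified Python functions to stage labels.
FUNCTION_TO_STAGE = {
    "pandas.read_csv": "load_data",
    "pandas.DataFrame.dropna": "data_preprocessing",
    "sklearn.linear_model.LinearRegression.fit": "modelling",
    "sklearn.metrics.mean_squared_error": "evaluation",
    "matplotlib.pyplot.plot": "result_visualization",
}

def stage_of(qualified_name):
    # Unmapped calls could later fall back to package-level defaults.
    return FUNCTION_TO_STAGE.get(qualified_name, "unknown")

print(stage_of("pandas.read_csv"))  # load_data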


Tasks

  • Collect recent publications that perform data science in Python.
  • Gather all packages used in those papers.
  • Create a mapping of the provided functions to labels used in the DASWOW dataset [3].

Related Work and Further Reading

[1] Huber, F. (2025). Hands-on Introduction to Data Science with Python. v0.23, 2025, Zenodo. https://doi.org/10.5281/zenodo.10074474

[2] Ramasamy, D., Sarasua, C., Bacchelli, A. et al. Workflow analysis of data science code in public GitHub repositories. Empir Software Eng 28, 7 (2023). https://doi.org/10.1007/s10664-022-10229-z

[3] https://doi.org/10.5281/zenodo.5635475


Contact

Ruben Dunkel

Comparison of Static Program Slicers for Python (B)

Context

Program slicing computes a subset of a given program (a slice) that preserves the original program's behaviour with respect to a chosen slicing criterion, typically a program point and a set of variables. A slicer can be evaluated using multiple metrics, such as correctness, accuracy (slice size), and execution time. Depending on the use case, a trade-off between these metrics has to be made to find the right slicer for your needs.
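
A minimal worked example in Python (variable names are made up): the second program is a backward slice of the first with respect to the value of total at its print statement.

# Original program; slicing criterion: the value of `total` at print(total).
n = 10
total = 0
product = 1
for i in range(1, n + 1):
    total += i
    product *= i
print(total)
print(product)

# Backward slice: `product` never influences `total`, so its statements are
# removed while the behaviour at the criterion is preserved.
n = 10
total = 0
for i in range(1, n + 1):
    total += i
print(total)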



Research Problem

In this work, we want to compare multiple static program slicers for Python. Since we need to perform many slicing calls in a short time, our evaluation metrics will be centered on correctness, speed, and lastly accuracy. Based on this evaluation, a slicer will be selected to incorporate Python support into a tool that transforms research scripts into Software Product Lines.


Tasks

  • Gather recently published/updated static program slicers for Python.
  • Create an evaluation set consisting of multiple real-world Python scripts from the data science domain.
  • Set up a testing pipeline to evaluate the correctness, accuracy, and speed of the program slicers on your evaluation set.

Related Work and Further Reading

[1] M. Weiser, "Program Slicing," in IEEE Transactions on Software Engineering, vol. SE-10, no. 4, pp. 352-357, July 1984, https://doi.org/10.1109/TSE.1984.5010248.


Contact

Ruben Dunkel

Feature Model Features

Exploring the Usage of Feature Models for Feature Model Analysis Benchmarking (P/B/M)

Context

New algorithms and approaches for feature model analysis are typically evaluated empirically for their publication. This process requires feature models that can be used for benchmarking the evaluated algorithm. However, it is unclear which feature models are used in which publication and how much the selection impacts the results of the evaluation.


Research Problem

Extend the existing feature model benchmark with new feature models and track their usage in existing publications. Identify peculiarities in the generated data set and try to explain any correlations found.


Tasks

  1. Extend the feature model benchmark with newer publications (2023 until now)
  2. Extract which paper uses which feature model in its analysis
  3. Analyze the usage behavior of feature models in feature model analysis benchmarking and identify correlations and outliers

Related Work and Further Reading

  • Chico Sundermann, Vincenzo Francesco Brancaccio, Elias Kuiter, Sebastian Krieter, Tobias Heß, and Thomas Thüm. 2024. Collecting Feature Models from the Literature: A Comprehensive Dataset for Benchmarking. In Proceedings of the 28th ACM International Systems and Software Product Line Conference (SPLC '24). Association for Computing Machinery, New York, NY, USA, 54–65. https://doi.org/10.1145/3646548.3672590
  • github.com/SoftVarE-Group/feature-model-benchmark

Contact

Raphael Dunkel

Analyzing the Reproducibility of Feature Model Analysis Evaluations (P/B/M)

Context

New algorithms and approaches for feature model analysis are typically evaluated empirically for their publication. Replicating these results is important to validate research findings, ensure scientific integrity, and allow for the reuse of tools in further research. However, the evaluation is often not easily reproducible because of missing data or broken tooling.


Research Problem

Reproduce existing research in the context of feature model analysis by generating functioning replication packages and partially re-computing their evaluations. Furthermore, try to replicate these findings on new and unused feature models.


Tasks

  1. Select relevant studies that evaluated the performance of a feature model analysis algorithm (criteria may be provided)

  2. Reproduce the selected studies

  3. Replicate the selected studies on a small subset of new feature models


Related Work and Further Reading

  • Chico Sundermann, Vincenzo Francesco Brancaccio, Elias Kuiter, Sebastian Krieter, Tobias Heß, and Thomas Thüm. 2024. Collecting Feature Models from the Literature: A Comprehensive Dataset for Benchmarking. In Proceedings of the 28th ACM International Systems and Software Product Line Conference (SPLC '24). Association for Computing Machinery, New York, NY, USA, 54–65. https://doi.org/10.1145/3646548.3672590

  • Carver, J.C., Juristo, N., Baldassarre, M.T. et al. 2014. Replications of software engineering experiments. Empir Software Eng 19, 267–276. doi.org/10.1007/s10664-013-9290-8


Contact

Raphael Dunkel

Exploring Feature Engineering without Complex Feature Model Transformations (P/B/M)

Context

Feature engineering is the basis for machine learning on feature models. Currently, feature extraction is typically (at least partially) performed on the CNF representation of the feature model. However, this representation can be very expensive to compute, which motivates feature extraction approaches that operate before any complex transformations of the feature model.
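
For illustration, some candidate features can be computed directly on the feature-model tree, before any transformation. The sketch below assumes a simple in-memory tree representation (not the fe4femo API):

from dataclasses import dataclass, field

@dataclass
class Feature:
    name: str
    children: list = field(default_factory=list)

def num_features(feature):
    # Size of the feature tree.
    return 1 + sum(num_features(child) for child in feature.children)

def tree_depth(feature):
    # Longest path from the root to a leaf.
    return 1 + max((tree_depth(child) for child in feature.children), default=0)

root = Feature("Root", [Feature("A"), Feature("B", [Feature("C")])])
print(num_features(root), tree_depth(root))  # 4 3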


Research Problem

There are a few features that are computed directly on the feature model, but they have not been specifically analyzed on their own yet. Feature extraction without complex feature model transformations could significantly speed up the feature engineering computation.


Tasks

  1. Select existing and create new features that operate before the CNF transformation of a feature model

  2. Integrate your features into the feature engineering framework fe4femo

  3. Evaluate the effectiveness of your features and analyze the trade-off between performance and computation time


Related Work and Further Reading

  • Isabelle Guyon, Steve Gunn, Masoud Nikravesh, and Lotfi A. Zadeh. 2006. Feature Extraction: Foundations and Applications (Studies in Fuzziness and Soft Computing). Springer-Verlag, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-35488-8

  • Elias Kuiter, Sebastian Krieter, Chico Sundermann, Thomas Thüm, and Gunter Saake. 2023. Tseitin or not Tseitin? The Impact of CNF Transformations on Feature-Model Analyses. In Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering (ASE '22). Association for Computing Machinery, New York, NY, USA, Article 110, 1–13. https://doi.org/10.1145/3551349.3556938

  • Jose M. Horcas, Jose A. Galindo, Mónica Pinto, Lidia Fuentes, and David Benavides. 2022. FM fact label: a configurable and interactive visualization of feature model characterizations. In Proceedings of the 26th ACM International Systems and Software Product Line Conference - Volume B (SPLC '22), Vol. B. Association for Computing Machinery, New York, NY, USA, 42–45. https://doi.org/10.1145/3503229.3547025 


Contact

Raphael Dunkel

Exploring Autoencoders for Feature Extraction on Feature Models (P/M)

Context

Autoencoders can automatically extract features that perform very well while often being very different from human-created ones. As a well-performing feature set is the basis for the successful use of machine learning for feature model analysis, autoencoders are a promising tool that could improve current feature extraction setups.
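
As a starting point, a plain feed-forward autoencoder over a fixed-length feature-model encoding could look as follows (a sketch in PyTorch; the input encoding, dimensions, and training loop are assumptions, and task 1 below replaces the fixed-length vector with a graph encoding):

import torch
from torch import nn

# Compress a 128-dimensional feature-model encoding into 16 latent features;
# the latent vector then serves as the extracted feature set.
encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 16))
decoder = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 128))
optimizer = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(32, 128)  # placeholder batch of encoded feature models
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(decoder(encoder(x)), x)  # reconstruction objective
    loss.backward()
    optimizer.step()

features = encoder(x)  # 32 x 16 matrix of learned features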


Research Problem

Autoencoders could significantly improve the quality of feature sets for machine learning on feature models. However, they have not yet been developed and tested for feature-model inputs.


Tasks

  1. Create and implement a graph encoding for feature models

  2. Create and implement an autoencoder architecture for feature extraction from feature models

  3. Evaluate your autoencoder and perform an explainability analysis


Related Work and Further Reading

  • M. Dalla, A. Visentin and B. O’Sullivan, "Automated SAT Problem Feature Extraction using Convolutional Autoencoders," 2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI), Washington, DC, USA, 2021, pp. 232-239, https://doi.org/10.1109/ICTAI52525.2021.00039 

  • J. Park, M. Lee, H. J. Chang, K. Lee and J. Y. Choi, "Symmetric Graph Convolutional Autoencoder for Unsupervised Graph Representation Learning," 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea (South), 2019, pp. 6518-6527, https://doi.org/10.1109/ICCV.2019.00662 

  • Yasi Wang, Hongxun Yao, and Sicheng Zhao. 2015. Auto-Encoder Based Dimensionality Reduction. Neurocomputing 184. https://doi.org/10.1016/j.neucom.2015.08.104


Contact

Raphael Dunkel

Sample-Based Testing of the Linux Kernel

(R) Evaluating Translations of Linux Kernel Configurations (B/M)

Context

The Linux kernel is one of the largest feature-oriented software systems in the public domain and a focus of much software product line (SPL) research. Its high complexity stems from the fact that the Linux kernel can be configured for many diverse use cases, ranging from embedded systems to high-performance computing. However, the same complexity that makes it versatile also often means that analyzing the Linux kernel in an SPL context is hard.

The underlying feature model of the Linux kernel is defined through the dedicated configuration language Kconfig. Any instance of the kernel is built using a configuration file, which defines the features to be included in the compiled kernel. Applying SPL algorithms (e.g., sampling) to the Linux kernel requires translating the Kconfig model into a Boolean feature model formula. Due to peculiarities of the Kconfig language for Linux variability, such translations generally differ depending on the implementation.
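
For intuition: a Kconfig entry such as config B with depends on A is commonly translated to the propositional constraint B => A; tristate options (n/m/y), choices, and default values are where the translations typically start to diverge.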

Research Problem

There are different solutions for translating a specific Linux kernel version into a boolean feature model formula, as well as for translating a kernel configuration to a corresponding assignment. The other direction - translating a formula assignment to a kernel configuration - is rarely considered in research. However, to use SPL algorithms that work on feature models for Linux, the translation must work in both directions. Hence, there is a need to evaluate whether existing translations of Linux kernel versions into boolean feature model formulas accurately handle configurations, and whether the results can be translated back into Linux configurations.

Tasks

  1. Compile an overview of Linux kernel feature model translations
  2. Analyze their behavior when translating configurations and non-boolean features
  3. Devise and evaluate a back-translation of formula assignments to Linux configurations

Related Work and Further Reading

David Fernandez-Amoros, Ruben Heradio, Christoph Mayr-Dorn, and Alexander Egyed. 2019. A Kconfig Translation to Logic with One-Way Validation System. In Proceedings of the 23rd International Systems and Software Product Line Conference - Volume A (SPLC '19). Association for Computing Machinery, New York, NY, USA, 303–308. https://doi.org/10.1145/3336294.3336313

Elias Kuiter, Chico Sundermann, Thomas Thüm, Tobias Hess, Sebastian Krieter, and Gunter Saake. 2025. How Configurable is the Linux Kernel? Analyzing Two Decades of Feature-Model History. ACM Trans. Softw. Eng. Methodol. Just Accepted (April 2025). https://doi.org/10.1145/3729423

KConfig Language Documentation: https://docs.kernel.org/kbuild/kconfig-language.html

Contact

christopher.rau(at)tu-braunschweig.de

Assessing the Configuration-Space Coverage of Sample-Based Linux Kernel Testing (B/M/P)

Context

T-wise interaction sampling, where each valid t-tuple of feature selections is present in at least one configuration, is a common strategy for testing software product lines. For the Linux kernel, creating such samples is infeasible for more recent versions due to its complexity. Nonetheless, there are real sample-based approaches for testing the Linux kernel, such as the Linux Kernel Performance (LKP) system. These samples consist of various predefined and generated configurations.
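
As a small worked example: for three unconstrained Boolean features, the four configurations below already cover all 2-wise interactions, which the following Python sketch verifies (plain enumeration; real samplers must additionally respect the feature model's constraints):

from itertools import combinations, product

sample = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]  # one configuration per row

def is_pairwise_covering(sample, num_features):
    for i, j in combinations(range(num_features), 2):
        needed = set(product([0, 1], repeat=2))            # all 4 value pairs
        covered = {(config[i], config[j]) for config in sample}
        if covered != needed:
            return False
    return True

print(is_pairwise_covering(sample, 3))  # True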


Research Problem

It is unclear how well LKP samples cover the configuration space, i.e., the set of all possible configurations. More specifically, the t-wise interaction coverage of LKP samples has not yet been investigated. Furthermore, we hypothesize that a high t-wise interaction coverage facilitates the detection of defects.


Tasks

  1. Simulate LKP sampling on Linux kernel versions
  2. Analyze the 2-wise interaction coverage (perhaps include t=1 and t=3)
  3. Approximate the coverage of real LKP samples to investigate whether there is a correlation with defect detection

Literature

  • Mahsa Varshosaz, Mustafa Al-Hajjaji, Thomas Thüm, Tobias Runge, Mohammad Reza Mousavi, and Ina Schaefer. 2018. A classification of product sampling for software product lines. In Proceedings of the 22nd International Systems and Software Product Line Conference - Volume 1 (SPLC '18). Association for Computing Machinery, New York, NY, USA, 1–13. doi.org/10.1145/3233027.3233035
  • Elias Kuiter, Chico Sundermann, Thomas Thüm, Tobias Heß, Sebastian Krieter, and Gunter Saake. 2025. How Configurable Is the Linux Kernel? Analyzing Two Decades of Feature-Model History. ACM Trans. Softw. Eng. Methodol. 35, 1, Article 27 (January 2026), 48 pages. doi.org/10.1145/3729423
  • Elias Kuiter. torte: Reproducible Feature-Model Experiments à la Carte. In Proc. Int’l Conf. on Software Engineering (ICSE), Rio de Janeiro, Brazil, Apr 2026. raw.githubusercontent.com/SoftVarE-Group/Papers/main/2026/2026-ICSE-Kuiter-Torte.pdf
  • Christopher Rau. Mining Bugs in Linux to Assess the Effectiveness of Automated Variability Testing. 2025. Master's Thesis. To appear.
  • Linux* Kernel Performance www.intel.com/content/www/us/en/developer/topic-technology/open/linux-kernel-performance/overview.html

How Far Can T-wise Sampling Scale for the Linux Kernel? (B/M/P)

Context

T-wise interaction sampling, where each valid t-tuple of feature selections is present in at least one configuration, is a common strategy for testing software product lines. For the Linux kernel, creating such samples is only feasible for old versions, as the kernel has grown in complexity over time. At the same time, however, novel sampling algorithms that can handle increasingly complex systems appear regularly.

Research Problem

The scalability of t-wise interaction sampling algorithms for the Linux kernel is insufficiently understood. In particular, it is unclear up to which kernel version state-of-the-art sampling algorithms can successfully generate t-wise (e.g., pairwise) samples. Moreover, for those kernel versions where sampling is feasible, it is unknown how the size of the resulting samples evolves over time and how this corresponds to the increasing complexity of the Linux kernel.

Tasks

  1. Select state-of-the-art t-wise interaction sampling algorithms
  2. Evaluate the algorithms on increasingly recent Linux kernel revisions
  3. Analyze the sample size evolution for kernel versions where sampling is feasible

Literature

  • Mahsa Varshosaz, Mustafa Al-Hajjaji, Thomas Thüm, Tobias Runge, Mohammad Reza Mousavi, and Ina Schaefer. 2018. A classification of product sampling for software product lines. In Proceedings of the 22nd International Systems and Software Product Line Conference - Volume 1 (SPLC '18). Association for Computing Machinery, New York, NY, USA, 1–13. doi.org/10.1145/3233027.3233035
  • Elias Kuiter, Urs-Benedict Braun, Thomas Thüm, Sebastian Krieter, and Gunter Saake. Can SAT Solvers Keep Up With the Linux Kernel’s Feature Model? In Proc. Int’l Conf. on Software Engineering (ICSE), Rio de Janeiro, Brazil, Apr 2026. raw.githubusercontent.com/SoftVarE-Group/Papers/main/2026/2026-ICSE-Kuiter.pdf
  • Sándor P. Fekete, Phillip Keldenich, Dominik Krupke, and Michael Perk. Efficient Heuristics and Exact Methods for Pairwise Interaction Sampling. 2026 Proceedings of the SIAM Symposium on Algorithm Engineering and Experiments (ALENEX). doi.org/10.1137/1.9781611978957.16

Knowledge Compilation

Automated Reasoning with Currently Unexplored Formats (B/M/P)

Context

Knowledge compilation refers to translating an input problem into a format that enables efficient subsequent operations, such as satisfiability checks. Popular formats are d-DNNFs [1] and BDDs [2]. Both have been successfully applied in product-line engineering to substantially accelerate practice-relevant analyses.
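
To illustrate why such formats pay off: on a smooth d-DNNF, the number of satisfying assignments can be computed in one bottom-up pass, multiplying at (decomposable) AND nodes and adding at (deterministic) OR nodes. The tuple encoding below is a toy sketch in Python, not the input format of an actual compiler:

# Toy smooth d-DNNF for (A or B): ("lit", name), ("and", ...), ("or", ...).
ddnnf = ("or",
         ("and", ("lit", "A"), ("or", ("lit", "B"), ("lit", "not B"))),
         ("and", ("lit", "not A"), ("lit", "B")))

def count(node):
    kind = node[0]
    if kind == "lit":
        return 1  # one model over the literal's own variable
    child_counts = [count(child) for child in node[1:]]
    if kind == "and":  # decomposable: children share no variables
        result = 1
        for c in child_counts:
            result *= c
        return result
    return sum(child_counts)  # deterministic: children are mutually exclusive

print(count(ddnnf))  # 3 models of (A or B) over {A, B}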


Research Problem

The promising results of the knowledge-compilation formats applied so far [3, 4] motivate the exploration of further formats. Many formats have been suggested, but reusing them for automated reasoning has not been explored, or only sparsely.


Tasks

  1. Inspect available unexplored formats 

  2. Develop operations to enable feature-model analyses

  3. Implement prototype

  4. Compare advances to state-of-the-art


Contact

Chico Sundermann


Related Work and Further Reading

[1] Adnan Darwiche. 2002. A compiler for deterministic, decomposable negation normal form. In Eighteenth national conference on Artificial intelligence. American Association for Artificial Intelligence, USA, 627–634. https://cdn.aaai.org/AAAI/2002/AAAI02-094.pdf
[2] Bryant, R.E. (2018). Binary Decision Diagrams. In: Clarke, E., Henzinger, T., Veith, H., Bloem, R. (eds) Handbook of Model Checking. Springer, Cham. https://doi.org/10.1007/978-3-319-10575-8_7
[3] Darwiche, Adnan, and Pierre Marquis. "A knowledge compilation map." Journal of Artificial Intelligence Research 17 (2002): 229-264., https://doi.org/10.1613/jair.989
[4] Sundermann, C., Kuiter, E., Heß, T. et al. On the benefits of knowledge compilation for feature-model analyses. Ann Math Artif Intell 92, 1013–1050 (2024). doi.org/10.1007/s10472-023-09906-6

Knowledge Compilation Beyond Boolean Logic (M/P)

Context

Knowledge compilation refers to translating an input problem into a format that enables efficient subsequent operations, such as satisfiability checks. Popular formats are d-DNNFs [1] and BDDs [2]. Both have been successfully applied in product-line engineering to substantially accelerate practice-relevant analyses.


Research Problem

While available knowledge compilation strategies all appear to focus on propositional logic, many problems in practice rely on more expressive constraints (e.g., with numeric variables). Extending knowledge compilation to cope with such expressive constraints could yield substantial runtime benefits.


Tasks

  1. Design beyond-propositional target language

  2. Develop compilation from feature model

  3. Implement prototype

  4. Compare advances to state-of-the-art


Contact

Chico Sundermann


Related Work and Further Reading

[1] Adnan Darwiche. 2002. A compiler for deterministic, decomposable negation normal form. In Eighteenth national conference on Artificial intelligence. American Association for Artificial Intelligence, USA, 627–634. https://cdn.aaai.org/AAAI/2002/AAAI02-094.pdf
[2] Bryant, R.E. (2018). Binary Decision Diagrams. In: Clarke, E., Henzinger, T., Veith, H., Bloem, R. (eds) Handbook of Model Checking. Springer, Cham. https://doi.org/10.1007/978-3-319-10575-8_7
[3] Darwiche, Adnan, and Pierre Marquis. "A knowledge compilation map." Journal of Artificial Intelligence Research 17 (2002): 229-264., https://doi.org/10.1613/jair.989
[4] Sundermann, C., Kuiter, E., Heß, T. et al. On the benefits of knowledge compilation for feature-model analyses. Ann Math Artif Intell 92, 1013–1050 (2024). doi.org/10.1007/s10472-023-09906-6

Variational d-DNNFs (M/P)

Context

Knowledge compilation refers to translating an input problem into a format that enables efficient subsequent operations, such as satisfiability checks. The deterministic decomposable negation normal form (d-DNNF) is a format that has been successfully applied in various domains, including feature-model analysis.

Research Problem

In many cases, slight variations of a feature model need to be analyzed. With current techniques, a whole new d-DNNF has to be compiled for each variant, inducing immense computational effort.


Tasks

  1. Develop mechanisms to include similar feature-model variants in a single d-DNNF

  2. Develop compilation strategy

  3. Adapt reasoning algorithms on the resulting d-DNNF

  4. Implement prototype

  5. Compare advances to state-of-the-art


Contact

Chico Sundermann


Related Work and Further Reading

[1] Adnan Darwiche. 2002. A compiler for deterministic, decomposable negation normal form. In Eighteenth national conference on Artificial intelligence. American Association for Artificial Intelligence, USA, 627–634. https://cdn.aaai.org/AAAI/2002/AAAI02-094.pdf
[2] Chico Sundermann, Heiko Raab, Tobias Heß, Thomas Thüm, and Ina Schaefer. 2024. Reusing d-DNNFs for Efficient Feature-Model Counting. ACM Trans. Softw. Eng. Methodol. 33, 8, Article 208 (November 2024), 32 pages. doi.org/10.1145/3680465

Exploiting Structure of Pseudo-Boolean Constraints for d-DNNF Compilation (B/M/P)

Context

Knowledge compilation refers to translating an input problem into a format that enables efficient subsequent operations, such as satisfiability checks [1]. The deterministic decomposable negation normal form (d-DNNF) is a format that has been successfully applied in various domains, including feature-model analysis [2]. We recently developed a d-DNNF compiler based on pseudo-Boolean logic that substantially accelerates compilation for various variability-modeling constructs.
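
For intuition: an alternative (exactly-one) group with children f1, f2, f3 can be stated as the single pseudo-Boolean constraint f1 + f2 + f3 = 1, whereas a clausal (CNF) encoding needs the clause (f1 or f2 or f3) plus one exclusion clause (not fi or not fj) for each of the three pairs.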

Research Problem

The initial results of our compiler are very promising, but we have not yet exploited the structure of pseudo-Boolean constraints during compilation; so far, we only reuse existing optimizations tailored to CNFs. Tailoring the algorithm to the special structure of pseudo-Boolean constraints could yield further benefits and improve scalability in practice.


Tasks

  1. Develop strategies/heuristics to incorporate the structure of pseudo-Boolean constraints into the compilation

  2. Extend our pseudo-Boolean compiler p2d with your ideas

  3. Evaluate the performance improvements


Contact

Chico Sundermann


Related Work and Further Reading

[1] Adnan Darwiche. 2002. A compiler for deterministic, decomposable negation normal form. In Eighteenth national conference on Artificial intelligence. American Association for Artificial Intelligence, USA, 627–634. https://cdn.aaai.org/AAAI/2002/AAAI02-094.pdf
[2] Chico Sundermann, Heiko Raab, Tobias Heß, Thomas Thüm, and Ina Schaefer. 2024. Reusing d-DNNFs for Efficient Feature-Model Counting. ACM Trans. Softw. Eng. Methodol. 33, 8, Article 208 (November 2024), 32 pages. https://doi.org/10.1145/3680465
[3] Chico Sundermann, Stefan Vill, Elias Kuiter, Sebastian Krieter, Thomas Thüm, Matthias Tichy, Pseudo-Boolean d-DNNF Compilation for Expressive Feature Modeling Constructs, arXiv Tech. Report (currently under review), https://doi.org/10.48550/arXiv.2505.05976

Tackling the Scalability of Very Hard Feature Models for d-DNNF Compilation (B/M/P)

Context

Knowledge compilation refers to translating an input problem into a format that enables efficient subsequent operations, such as satisfiability checks. Popular formats are d-DNNFs [1] and BDDs [2]. Both have been successfully applied in product-line engineering to substantially accelerate practice-relevant analyses.


Research Problem

Even though knowledge compilation to d-DNNF has been shown to be beneficial for feature-model analyses [4], there are still various practically relevant instances, like the Linux kernel, that cannot be compiled or analyzed with counting technology at all. Further improving the scalability for such instances is essential for the applicability of knowledge compilation in product-line-engineering practice.


Tasks

  1. Gather strategies to improve scalability (e.g., formula pre-processing, parameterization of the solver)

  2. Realize strategies that have no implementation yet

  3. Tailor strategies to the complex instances

  4. Evaluate advances over state of the art


Contact

Chico Sundermann


Related Work and Further Reading

[1] Adnan Darwiche. 2002. A compiler for deterministic, decomposable negation normal form. In Eighteenth national conference on Artificial intelligence. American Association for Artificial Intelligence, USA, 627–634. https://cdn.aaai.org/AAAI/2002/AAAI02-094.pdf
[2] Bryant, R.E. (2018). Binary Decision Diagrams. In: Clarke, E., Henzinger, T., Veith, H., Bloem, R. (eds) Handbook of Model Checking. Springer, Cham. https://doi.org/10.1007/978-3-319-10575-8_7
[3] Darwiche, Adnan, and Pierre Marquis. "A knowledge compilation map." Journal of Artificial Intelligence Research 17 (2002): 229-264., https://doi.org/10.1613/jair.989
[4] Sundermann, C., Kuiter, E., Heß, T. et al. On the benefits of knowledge compilation for feature-model analyses. Ann Math Artif Intell 92, 1013–1050 (2024). doi.org/10.1007/s10472-023-09906-6

XORiented d-DNNF Compilation (B/M/P)

Context

Knowledge compilation refers to translating an input problem into a format that enables efficient subsequent operations, such as satisfiability checks. Popular formats are d-DNNFs [1] and BDDs [2]. Both have been successfully applied in product-line engineering to substantially accelerate practice-relevant analyses.


Research Problem

Even though knowledge compilation to d-DNNF has been shown to be beneficial for feature-model analyses [4], there are still various practically relevant instances, like the Linux kernel, that cannot be compiled or analyzed with counting technology at all. Further improving the scalability for such instances is essential for the applicability of knowledge compilation in product-line-engineering practice.


Tasks

  1. Develop a strategy to incorporate alternative (XOR) constraints into the compilation process

  2. Adapt existing heuristics to consider the alternatives

  3. Implement prototype in existing d-DNNF compiler (d4 [5] or p2d [6])

  4. Evaluate advances over state of the art


Contact

Chico Sundermann


Related Work and Further Reading

[1] Adnan Darwiche. 2002. A compiler for deterministic, decomposable negation normal form. In Eighteenth national conference on Artificial intelligence. American Association for Artificial Intelligence, USA, 627–634. https://cdn.aaai.org/AAAI/2002/AAAI02-094.pdf
[2] Bryant, R.E. (2018). Binary Decision Diagrams. In: Clarke, E., Henzinger, T., Veith, H., Bloem, R. (eds) Handbook of Model Checking. Springer, Cham. https://doi.org/10.1007/978-3-319-10575-8_7
[3] Darwiche, Adnan, and Pierre Marquis. "A knowledge compilation map." Journal of Artificial Intelligence Research 17 (2002): 229-264., https://doi.org/10.1613/jair.989
[4] Sundermann, C., Kuiter, E., Heß, T. et al. On the benefits of knowledge compilation for feature-model analyses. Ann Math Artif Intell 92, 1013–1050 (2024). https://doi.org/10.1007/s10472-023-09906-6
[5] https://github.com/SoftVarE-Group/d4v2
[6] https://github.com/TUBS-ISF/p2d

Configuration Counting

(R) Approximate #SAT Solving (B/M/P)

Context

Configuration counting refers to computing the number of valid configurations for a given feature model, which enables a plethora of automated analyses [1]. To enable these analyses, configuration counting is often reduced to #SAT (i.e., propositional model counting).
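
As a small worked example: a feature model with root R, an optional feature A, and an alternative group {B, C} below R has exactly 4 valid configurations (B or C, each with or without A) out of 2^4 = 16 assignments; #SAT determines such counts without enumerating the configurations.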


Research Problem

#SAT is a computationally complex problem, which often induces unacceptable runtimes for large product lines in practice. Approximating the number of valid configurations may sometimes be a viable option to enable analyses, but available approximate #SAT solvers fail to scale for product-line instances [2].


Tasks

  1. Gather effective simplifications

  2. Identify promising approximations

  3. Implement prototype

  4. Compare advances to state of the art


Contact

Chico Sundermann


Related Work and Further Reading

[1] Chico Sundermann, Michael Nieke, Paul M. Bittner, Tobias Heß, Thomas Thüm, and Ina Schaefer. 2021. Applications of #SAT Solvers on Feature Models. In Proceedings of the 15th International Working Conference on Variability Modelling of Software-Intensive Systems (VaMoS '21). Association for Computing Machinery, New York, NY, USA, Article 12, 1–10. https://doi.org/10.1145/3442391.3442404
[2] Sundermann, C., Heß, T., Nieke, M. et al. Evaluating state-of-the-art #SAT solvers on industrial configuration spaces. Empir Software Eng 28, 29 (2023). doi.org/10.1007/s10664-022-10265-9

Universal Variability Language

Efficient Conversion Strategies for the Universal Variability Language (B/M/P)

Context

The Universal Variability Language (UVL) is a format for specifying feature models [1]. It is developed as a community effort by researchers around the globe [2]. The adoption of UVL and its tooling landscape are continuously growing. UVL has an extensible language design that allows users to select a subset of the available feature-modeling constructs.
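
For a first impression, a small model in UVL's indentation-based syntax might look like this (a sketch along the lines of the UVL documentation [2]; details vary with the chosen language level):

features
    Sandwich
        mandatory
            Bread
        optional
            Cheese
constraints
    Cheese => Bread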


Research Problem

One goal of UVL is to enable exchange between different tools and users. To this end, feature-modeling constructs that are part of an extension can be translated to simpler constructs with conversion strategies. However, the currently employed conversion strategies are often inefficient and fail to scale for complex feature models.
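
For example, an alternative group with children A and B under a parent P can be rewritten into the core constraints P <=> (A | B) and !(A & B); the challenge is finding such conversions whose output remains small and analyzable for complex models.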


Tasks

  1. Identify existing (but not yet applied) conversions

  2. Develop missing conversions

  3. Implement prototype

  4. Compare advances to state-of-the-art


Contact

Chico Sundermann


Related Work and Further Reading

[1] David Benavides, Chico Sundermann, Kevin Feichtinger, José A. Galindo, Rick Rabiser, Thomas Thüm, UVL: Feature modelling with the Universal Variability Language, Journal of Systems and Software, Elsevier, doi.org/10.1016/j.jss.2024.112326
[2] https://universal-variability-language.github.io/

Reasoning Recommender System (B/M/P)

Context

The Universal Variability Language (UVL) is a format for specifying feature models [1]. It is developed as a community effort by researchers around the globe [2]. The adoption of UVL and its tooling landscape are continuously growing. UVL has an extensible language design that allows users to select a subset of the available feature-modeling constructs.


Research Problem

Depending on the selected UVL extension, different reasoning engines (i.e., tools enabling automated analysis) are (1) applicable and (2) promising regarding efficiency. For users, it is often unclear which reasoning engine to use for the problem at hand.


Tasks

  1. Collect promising off-the-shelf solutions

  2. Provide a mapping between solutions and extensions

  3. Develop concepts for missing solutions

  4. Implement a prototype recommender system

  5. Compare advances to state-of-the-art


Contact

Chico Sundermann


Related Work and Further Reading

[1] David Benavides, Chico Sundermann, Kevin Feichtinger, José A. Galindo, Rick Rabiser, Thomas Thüm, UVL: Feature modelling with the Universal Variability Language, Journal of Systems and Software, Elsevier, doi.org/10.1016/j.jss.2024.112326
[2] https://universal-variability-language.github.io/

(R) Deriving Software Variants with UVL (B/P)

Context

The Universal Variability Language (UVL) is a format for specifying feature models [1]. It is developed as a community effort by researchers around the globe [2]. The adoption of UVL and its tooling landscape are continuously growing.


Research Problem

For software product-line engineering, generating code for a given configuration is essential. However, research on UVL currently mostly neglects this, as it heavily focuses on the variability-modeling side of product-line engineering. The lack of available options to conveniently apply UVL for code generation may hinder its adoption.


Tasks

  1. Develop a standard specification for UVL configurations

  2. Implement an ecosystem that connects variability-modeling tools (e.g., our VSCode extension [3]) with modern code generators (e.g., cfg in Rust [4])

  3. Evaluate its usability


Contact

Chico Sundermann


Related Work and Further Reading

[1] David Benavides, Chico Sundermann, Kevin Feichtinger, José A. Galindo, Rick Rabiser, Thomas Thüm, UVL: Feature modelling with the Universal Variability Language, Journal of Systems and Software, Elsevier, doi.org/10.1016/j.jss.2024.112326
[2] https://universal-variability-language.github.io/
[3] https://github.com/Universal-Variability-Language/uvl-lsp
[4] https://doc.rust-lang.org/reference/conditional-compilation.html


Topic Presentations (past semesters)

Winter Term 2025/2026

We present all current topics for projects as well as Bachelor's and Master's theses at the ISF. In addition, we will briefly present our course offerings for the winter term 2025/2026. All interested parties are cordially invited.

When? Tuesday, July 15th, 2025, from 4:45 pm to 6:15 pm

Where? PK 11.2

The slides of the presentation are available for download:

  • Teaching in Winter Term 2025 and Open Theses Topics

Summer Term 2025

We present all current topics for projects as well as Bachelor's and Master's theses at the ISF. In addition, we will briefly present our course offerings for the summer term 2025. All interested parties are cordially invited.

When? Monday, January 27th, 2025, from 4:45 pm to 6:15 pm

Where? IZ 161 and in Webex (hybrid)

The slides of the presentation are available for download:

  • Teaching in Summer Term 2025 and Open Theses Topics