
Open Thesis Topics

  • Information on finding a topic
  • List of open topics
  • Further topics

Information on finding a topic

You can find information in the FAQs.

List of open topics

This page lists open Bachelor's, Master's, and project thesis topics offered by our staff. The beginning of each row indicates which type of thesis the topic is suited for. Click on a topic for further information.


Type Advisor Title
MT/BT/PT Prof. Dr. Florian Alt, Doruntina Murtezaj, Verena Winterhalter, Oliver Hein, Felix Dietz, Viktorija Paneva, Sarah Delgado Rodriguez, Lukas Mecke, Katharina Barlage
Theses in the area of Human-Centered Security and Privacy

Below you will find focus areas in the research field "Human-Centered Security and Privacy" for which we offer Bachelor's and Master's theses. For a specific topic and any questions about these focus areas, please contact the relevant person.

Public Security User Interfaces

The rapid development of digital technologies and growing cybersecurity threats have led to an increasing need for innovative security solutions in public spaces. One example of user interfaces that can improve security behavior is the so-called Public Security User Interface: an interface positioned in a shared, non-personal area that offers information or interactions on security-related topics. These interfaces play an important role in providing security information, improving situational awareness, and promoting secure behavior. The main goal of this research is to investigate the design, implementation, and impact of user interfaces that enhance security behavior, in order to facilitate the transition from cybersecurity awareness to habitual secure behavior.

The theses in this area deal with topics such as:

  • Behavior analysis of user interaction with Public Security User Interfaces
  • Personalization strategies to support secure behavior
  • Selection of content and dynamic adaptation to the target group and contextual factors

Recommended knowledge and interests

  • Knowledge in human-centered design
  • Experience in conducting user studies
  • Interest in conducting a thorough literature review
  • Independent thinking and creative problem solving
  • Optional: Interest in public display research

Contact

Interested students are asked to submit their CV, academic transcript, and intended start date.

Doruntina Murtezaj

Social Engineering

Cybercrime currently causes a global economic loss amounting to several trillion euros. According to expert analyses, up to 90% of these damages are a direct or indirect result of attacks in which the human element is at the center. Attackers exploit authority, fear, curiosity, or helpfulness with the goal of manipulating their victims to obtain sensitive data. Examples include phone calls to obtain user login credentials, emails containing malware attachments to gain access to protected networks, or deepfakes used to impersonate another person.

Theses in this area address a variety of questions:

  • How do people behave during social engineering attacks?
  • How can social engineering attacks be detected?
  • Which contextual factors facilitate social engineering attacks?
  • How can user interfaces be developed to protect against social engineering attacks?

Recommended knowledge and interests

  • Interest in human-centered attacks
  • Knowledge of qualitative and/or quantitative research methods
  • Interest in conducting a thorough literature review
  • Independent thinking and creative problem solving

Contact

Interested students are asked to submit their CV, academic transcript, and intended start date.

Felix Dietz

Security and Privacy in Mixed Reality

Mixed reality devices are quickly finding their way into users’ daily lives, particularly in the form of head-mounted displays. Users can immerse themselves in virtual worlds or enrich the physical world with virtual content, supporting a wide range of applications in the areas of entertainment, work, education, and well-being. While these technologies support an ever-increasing number of features in the aforementioned areas, they also present challenges and create opportunities for security and privacy.

Theses in this area essentially deal with topics in the context of two general questions: (1) How can mixed reality solve existing challenges in terms of privacy and security? (2) What challenges in terms of privacy and security arise in the context of mixed reality, and how can these be addressed?

Recommended knowledge and interests

  • Interest in VR/AR technology
  • Knowledge of qualitative and/or quantitative research methods
  • Interest in conducting a thorough literature review
  • Willingness to learn, e.g., Unity

Readings | Literature

  • Ethics Emerging: the Story of Privacy and Security Perceptions in Virtual Reality
    https://www.usenix.org/system/files/conference/soups2018/soups2018-adams.pdf
  • Exploring the Unprecedented Privacy Risks of the Metaverse
    https://arxiv.org/pdf/2207.13176.pdf

Contact

Interested students are asked to submit their CV, academic transcript, and intended start date.

Verena Winterhalter

Viktorija Paneva

On-Body Security and Privacy Interfaces

The rapid integration of wearable sensors and head-mounted displays (HMDs) makes on-body computing increasingly relevant for security and privacy research. In this area, we focus on biometric authentication, privacy-preserving wearables, physiological sensing, and secure interaction paradigms for augmented reality (AR) and virtual reality (VR). Possible topics include the development of novel authentication methods for wearable devices, privacy-preserving approaches to continuous physiological monitoring, secure interaction concepts in AR and VR environments, and adaptive security/privacy mechanisms that enhance user trust and system reliability. By addressing current challenges and future opportunities, we aim to develop resilient, privacy-conscious, and user-friendly on-body systems that prioritize both security and seamless interaction experiences.

Recommended knowledge and interests

  • Interest in wearables / hardware prototyping
  • Knowledge of qualitative and/or quantitative research methods
  • Interest in conducting a thorough literature review
  • Willingness to learn (e.g., Unity)

Contact

Interested students are asked to submit their CV, academic transcript, and intended start date.

Oliver Hein

Tangible Security and Privacy User Interfaces

In the age of ubiquitous computing, users' IT security and privacy are at risk at almost any time. IT security and privacy assistants help users become aware of these risks and take appropriate measures to protect their data. However, these systems are often too complex, unintuitive, and not visually appealing. In order to enable even less technologically savvy or inexperienced individuals to use IT security and privacy assistants, such mechanisms must become tangible, i.e., physically manipulable and touchable by humans.

Recommended knowledge and interests

  • Interest in Usable Security
  • Knowledge in the field of Human-Computer Interaction and qualitative and/or quantitative research methods
  • Independent thinking and creative problem solving
  • For some projects: Interest in Fabrication (e.g., 3D modeling/printing, electronics, soldering)

Readings | Literature

  • Take Your Security and Privacy Into Your Own Hands! Why Security and Privacy Assistants Should be Tangible https://dl.gi.de/handle/20.500.12116/37360
  • Making Privacy Graspable: Can we Nudge Users to use Privacy Enhancing Techniques? https://arxiv.org/abs/1911.07701
  • Privacy Itch and Scratch: On Body Privacy Warnings and Controls https://dl.acm.org/doi/10.1145/2851581.2892475
  • Privacy Care: A Tangible Interaction Framework for Privacy Management https://dl.acm.org/doi/10.1145/3430506

Contact

Interested students are asked to submit their CV, academic transcript, and intended start date.

Sarah Delgado Rodriguez

Behavioral Biometrics

The use of biometric mechanisms—i.e., authentication based on unique features of a user's physiology or behavior—is a convenient and fast alternative to classical token- or knowledge-based authentication. Popular examples include fingerprint, facial recognition, or typing behavior biometrics. However, these systems typically rely on machine learning algorithms, making their decisions both difficult for the user to comprehend and subject to manipulation.

In this research area, we investigate novel approaches that enable users to understand and influence the results of biometric (black-box) systems, and develop new approaches with a focus on the user.

The following questions are particularly interesting:

  • How can users explore and understand influences on the decision-making process of biometric systems?
  • How can user interfaces for biometric systems be designed to more clearly communicate the robustness and accuracy of predictions?
  • How can users influence how they are recognized, i.e., by changing their behavior?
  • How can users be encouraged to exhibit more distinctive behavior?
  • How can biometric authentication be embedded in natural interaction?

Concrete research approaches include, among others, investigating (real) user behavior (e.g., through observations, interviews, surveys) and designing, implementing, and evaluating novel security and privacy concepts.

Recommended knowledge and interests

  • General interest in biometrics, authentication, and machine learning
  • Knowledge of qualitative and/or quantitative research methods
  • Solid programming skills (e.g., Python or Android)

Readings | Literature

  • Comparing passwords, tokens, and biometrics for user authentication http://www.nikacp.com/images/10.1.1.200.3888.pdf
  • An introduction to biometric recognition https://www.cse.msu.edu/~rossarun/pubs/RossBioIntro_CSVT2004.pdf
  • Touch me once and I know it’s you! Implicit Authentication based on Touch Screen Patterns https://www.medien.ifi.lmu.de/pubdb/publications/pub/deluca2012chi/deluca2012chi.pdf

Example Thesis

Reauthentication Concepts for Biometric Authentication Systems on Mobile Devices

Contact

Interested students are asked to submit their CV, academic transcript, and intended start date.

Lukas Mecke

Personalized Privacy / Security Interventions

My research focuses on personalized privacy and security interventions: how systems can adapt the way they protect, inform, and support users based on who they are and the situation they are in. I am interested in a wide range of personalization factors, including personality traits, prior experience, domain knowledge, current context, physiological signals, and demographic background. I also explore how different interface paradigms (from traditional UIs to conversational agents, ambient displays, or mixed-initiative systems) shape users’ understanding, trust, and behavior. I am particularly excited about “funky” or unconventional forms of personalization, especially when they allow us to investigate how LLMs and intelligent assistants can deliver tailored security and privacy support without overwhelming or misleading users.

A second major pillar of my work is privacy-preserving and cryptographic technologies, and especially how to make them usable and meaningful in real-world systems. Many powerful techniques, such as homomorphic encryption, secure multi-party computation, and private biometrics, promise strong privacy guarantees but remain difficult to understand, configure, and trust. I aim to bridge the gap between applied cryptography and usable security and privacy, studying how these technologies can be designed, communicated, and integrated into interfaces so that people can actually benefit from their protections. This combination of human-centered design and advanced security engineering defines the core of my research agenda.

Recommended knowledge and interests

  • Programming experience (e.g., Python, JavaScript, Java, or similar)
  • Experience with or curiosity about machine learning / LLMs
  • Basic knowledge of cryptography or security concepts (or willingness to learn)
  • UX, UI, or interaction design skills
  • Prototyping tools (e.g., Figma, web frameworks, mobile frameworks)
  • Critical thinking about ethics, privacy, and responsible technology

Contact

Interested students are asked to submit their CV, academic transcript, and intended start date.

Katharina Barlage


Details
BT/MT/PT Katharina Barlage
Can We Trust AI to Teach Security? Quality Assurance for Animated AI-Generated Cybersecurity Learning Content

Generative AI systems are increasingly used to create educational content, including animated learning materials that aim to explain complex cybersecurity concepts in an engaging way. While such systems can scale content production, they also introduce risks such as incorrect explanations, misleading visuals, or insecure recommendations.

In this thesis, you will work with an existing prototype that generates animated cybersecurity learning materials using AI. The goal is to systematically assess and improve the quality of these materials from both a security and user perspective.

You will:
  • Define a quality framework tailored to animated learning materials (e.g., correctness, clarity, visual accuracy, pedagogical effectiveness, engagement)
  • Generate sample learning units (e.g., phishing, password security, encryption basics)
  • Identify common issues (e.g., hallucinated explanations, misleading animations, oversimplifications)
  • Conduct a user study to evaluate how learners perceive and understand the generated content

The user study may investigate:

  • Learning outcomes (e.g., comprehension, retention)
  • Trustworthiness and credibility
  • Engagement and usability of animated AI-generated materials
  • Differences between AI-generated and curated (baseline) content

Details
BT/MT Jesse Grootjen
Adaptive RSVP System Based on Pupil Dilation

Description

Project Overview
This thesis project presents a unique opportunity for students to contribute to innovative research on adaptive RSVP (Rapid Serial Visual Presentation) systems, following up from [1]. The focus is on developing an intelligent RSVP system that adapts to user attention and cognitive load by leveraging pupil dilation data. Pupil dilation has been shown to correlate with cognitive processing, providing valuable insight into the user's mental state while reading or processing visual stimuli. By incorporating this biometric feedback into the RSVP system, the project aims to create a more intuitive and personalized reading experience, especially for users with attention challenges or disabilities.

Project Motivation

Traditional RSVP systems often rely on fixed speeds or manual adjustments, which may not suit every user's cognitive capacity. This project seeks to enhance user engagement and efficiency by using real-time pupil dilation data to adjust the speed and presentation style dynamically. By doing so, the RSVP system can become more responsive to individual reading habits, reducing cognitive overload and improving comprehension and retention of information. This work has important implications for accessibility, enabling better interaction for users with reading difficulties or neurological impairments.

Project Goals

This thesis will explore the development and evaluation of an adaptive RSVP system, with a focus on the following key objectives:

  • Experiment Design: Participants will engage with an RSVP system where reading speeds are adjusted in real-time based on pupil dilation data, providing insights into the correlation between cognitive load and visual presentation speed.
  • Model Development: Develop a model that interprets pupil dilation changes and optimizes the RSVP presentation in response to varying cognitive loads, ensuring that reading pace and comprehension are maximized.
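The adaptation loop described in the goals above can be pictured as a simple proportional controller. The function name, baseline normalization, gain, and speed limits below are illustrative assumptions for a sketch, not the project's actual design:

```python
# Hypothetical sketch: adapt RSVP speed (words per minute) from pupil dilation.
# All names and constants are illustrative assumptions.

def adapt_wpm(current_wpm, pupil_mm, baseline_mm,
              min_wpm=150, max_wpm=600, gain=200):
    """Slow down when the pupil dilates beyond baseline (higher cognitive load),
    speed up when it stays near or below baseline."""
    load = (pupil_mm - baseline_mm) / baseline_mm  # relative dilation
    new_wpm = current_wpm - gain * load            # proportional adjustment
    return max(min_wpm, min(max_wpm, new_wpm))     # clamp to a sane range

# Example: a 10% dilation above a 4 mm baseline slows a 300 wpm stream.
print(adapt_wpm(300, 4.4, 4.0))  # 300 - 200 * 0.1 = 280.0
```

A real system would additionally smooth the pupil signal over time and separate dilation caused by luminance changes from dilation caused by cognitive load.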

You will

  • Conduct a literature review on pupil dilation and cognitive load in relation to visual stimuli
  • Develop or modify an RSVP system to integrate real-time pupil tracking data
  • Implement a preprocessing pipeline to analyze pupil dilation data during RSVP tasks
  • Collect and analyze data, focusing on how pupil dilation correlates with reading performance and user engagement
  • Summarize findings in a thesis and present them to an audience
  • (Optional) Co-write a research paper based on the results

You need

  • Strong communication skills in English
  • Experience with eye-tracking technologies and software
  • Basic knowledge of machine learning for modeling data (e.g., Python, TensorFlow)

References

  • [1] Grootjen, J., Thalhammer, P., & Kosch, T. (2024). Your eyes on speed: Using pupil dilation to adaptively select speed-reading parameters in virtual reality. In Proceedings of the 26th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI '24). ACM. https://doi.org/10.1145/3676531

Details
BT/MT Jesse Grootjen, Prof. Dr. Sven Mayer
Investigating Gaze Estimation Accuracy in Collaborative Virtual Environments (CVEs)

Description

Project Overview

This thesis project offers an exciting opportunity for students to contribute to cutting-edge research on gaze estimation in interactive systems. The focus is on enhancing the accuracy of gaze interpretation within Collaborative Virtual Environments (CVEs), where effective communication often depends on understanding where participants are looking. Gaze serves as a vital non-verbal communication cue, yet people frequently struggle to accurately determine another person's gaze direction (i.e., where someone is looking), especially over distances.

Project Motivation

In CVEs, precise gaze estimation is crucial for natural and effective interaction. While previous research has explored distant pointing as an interaction mechanism, this project shifts the focus to gaze estimation. By addressing common inaccuracies in gaze prediction, this research aims to significantly improve how users interpret each other's gaze during virtual interactions, ultimately enhancing the overall immersive experience.

Project Goals

This thesis will investigate how accurately gaze estimation can be performed in CVEs, focusing on two main aspects:
  1. Gaze Estimation Experiments: Participants will perform gaze tasks directed at targets on a screen from two different distances. The data collected will help evaluate the performance of current gaze estimation methods in these scenarios.
  2. Model Development: Using the insights from distant pointing research, the project aims to develop a mathematical model to correct (potential) systematic displacements in gaze estimation.
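As a toy illustration of the kind of correction model meant here (the linear form and the invented data are assumptions for the sketch; the thesis would derive its own model), a least-squares fit that removes a systematic overshoot might look like:

```python
# Illustrative sketch: correct a systematic displacement in gaze estimates with
# a simple linear model (true ≈ a * estimated + b), fit by least squares.
# Data and model form are invented for illustration.

def fit_linear(estimated, true):
    """Ordinary least-squares fit of true ~ estimated; returns (slope, intercept)."""
    n = len(estimated)
    mx = sum(estimated) / n
    my = sum(true) / n
    sxx = sum((x - mx) ** 2 for x in estimated)
    sxy = sum((x - mx) * (y - my) for x, y in zip(estimated, true))
    a = sxy / sxx
    return a, my - a * mx

def correct(x, a, b):
    return a * x + b

# Example: estimates that systematically overshoot the target by 20%.
est  = [1.2, 2.4, 3.6, 4.8]   # degrees, as reported by the tracker
true = [1.0, 2.0, 3.0, 4.0]   # degrees, ground-truth target positions
a, b = fit_linear(est, true)
print(round(correct(6.0, a, b), 3))  # 5.0
```

In practice the displacement would likely depend on viewing distance and direction, so the real model could be fit per condition or use a richer functional form.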

You will

  • Perform a literature review
  • Modify an existing VR environment
  • Implement a preprocessing pipeline for eye-tracking data
  • Collect and analyze eye-tracking data, focusing on developing a model to correct potential systematic displacements in gaze estimation
  • Summarize your findings in a thesis and present them to an audience
  • (Optional) Co-write a research paper

You need

  • Strong communication skills in English
  • Good knowledge of Unity

References

  • [1] Schweigert, R., Schwind, V., & Mayer, S. (2019). EyePointing: A gaze-based selection technique. In Proceedings of Mensch und Computer 2019. ACM. https://doi.org/10.1145/3340764.3344897
  • [2] Mayer, S., Schwind, V., Schweigert, R., & Henze, N. (2018). The effect of offset correction and cursor on mid-air pointing in real and virtual environments. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 653:1–653:13). ACM. https://doi.org/10.1145/3173574.3174227
  • [3] Mayer, S., Wolf, K., Schneegass, S., & Henze, N. (2015). Modeling distant pointing for compensating systematic displacements. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 4165–4168). ACM. https://doi.org/10.1145/2702123.2702332

Details
BT/MT Teodora Mitrevska
Evaluating Presentation Methods for Cognitive Reflection

Description

Project Overview

Neurofeedback, or EEG biofeedback, is a non-invasive technique that supports self-regulation by helping users influence their brain activity through real-time feedback. Using electrodes placed on the scalp, systems measure brainwave activity and translate it into signals that users can respond to during training. However, raw EEG signals are difficult to translate into actionable insight. To prevent cognitive overload, ambiguity, and misinterpretation, consumer-facing systems require feedback designs that display neural outcomes in an understandable and trustworthy way. While prior work emphasizes signal acquisition and training protocols, fewer studies compare how different feedback representations influence interpretability, engagement, and trust in consumer contexts.

Project Goals

In this project, we will explore different visualization and data presentation techniques for cognitive feedback.

You will

  • Test an existing system for data interpretation.
  • Explore different designs and data visualizations.
  • Run a user study to evaluate them.
  • Summarize findings in a thesis and present them.
  • (Optional) Co-write a research paper based on the results.

You need

  • Strong communication skills in English.
  • Basic understanding of web apps.
  • Affinity for design.

Details
BT/MT Teodora Mitrevska
Aligning LLMs with Human Mental Models

Description

Project Overview

Mental models are internal cognitive representations that people construct to understand, reason about, and predict occurrences in their environment, reflecting both the structure of the external world and the individual’s prior knowledge. In interactions with LLMs where humans are generating content, the LLM usually generates output that is grammatically correct and contextually plausible. However, the outputs do not always match the expectations humans form during the dialogue. In this thesis, we will explore the alignment in human-AI interaction on a perceptual level.

Project Goals

In this project, we will explore the alignment between model-generated output and human expectations in a discourse completion task.

  • Experiment Design: Participants will be shown a sentence that they complete by typing on a keyboard. Then, either the same or a different completion will be presented to them, after which they rate on a scale how closely it matched their input.
  • Data Analysis: Preprocess the received data and analyze ERP components.
  • (Optional) Model Training: Train a model on the collected data that predicts different levels of semantic match.
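The ERP part of the analysis above boils down to averaging EEG epochs time-locked to stimulus onset and subtracting a pre-stimulus baseline. This is a minimal sketch under assumed epoch layout and baseline window, not the study's actual pipeline:

```python
# Minimal ERP sketch: average stimulus-locked EEG epochs and baseline-correct.
# Epoch layout and the two-sample baseline window are illustrative assumptions.

def average_erp(epochs, baseline_samples=2):
    """epochs: list of equal-length EEG segments (one per trial, in µV).
    Returns the baseline-corrected grand-average waveform."""
    n_trials = len(epochs)
    n_samples = len(epochs[0])
    # Grand average across trials, sample by sample.
    avg = [sum(trial[i] for trial in epochs) / n_trials for i in range(n_samples)]
    # Mean of the pre-stimulus samples serves as the baseline.
    baseline = sum(avg[:baseline_samples]) / baseline_samples
    return [v - baseline for v in avg]

# Two toy trials; the corrected average shows the post-stimulus deflection.
trials = [[1.0, 1.0, 3.0, 5.0],
          [1.0, 1.0, 5.0, 7.0]]
print(average_erp(trials))  # [0.0, 0.0, 3.0, 5.0]
```

A real analysis would filter and artifact-reject the raw signal first (e.g., with MNE-Python) before looking at components such as the N400, which is commonly linked to semantic mismatch.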

You will

  • Test an existing data collection system.
  • Conduct a user study with EEG and Eye Tracking.
  • Collect and analyze the study data.
  • Summarize findings in a thesis and present them.
  • (Optional) Co-write a research paper based on the results.

You need

  • Strong communication skills in English
  • Some Python understanding

References

  • Sara C. Sereno, Keith Rayner. “Measuring word recognition in reading: eye movements and event-related potentials”. https://www.cell.com/trends/cognitive-sciences/abstract/S1364-6613(03)00259-6?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS1364661303002596%3Fshowall%3Dtrue

Details
PT/BT/MT Viktorija Paneva
Prototyping Privacy Awareness Interfaces in Virtual Reality

Description

As VR technologies become increasingly embedded in everyday life, concerns around privacy and data collection grow more pressing. This project draws on the design concepts and guidelines presented in recent research on usable privacy in immersive environments to develop and evaluate novel VR user interfaces for privacy awareness and control for different use cases and contexts.

Project Goal

The goal of this project / thesis is to implement specific privacy interfaces across multiple application contexts (e.g., gaming, learning, social VR).

You will

  • Perform a literature review
  • Modify an existing privacy interface in VR
  • Adapt the UIs for multiple VR use cases / virtual environments
  • (thesis) Design and conduct a user study to evaluate the prototypes; data analysis
  • Summarize your findings in written form and present them to an audience
  • (Optional) Co-write a research paper

You need

  • Interest in VR, HCI, and privacy/usability topics
  • Good knowledge of Unity/C#
  • Basic knowledge of qualitative/quantitative research methods is a plus

References

  • [1] V. Paneva, M. Strauss, V. Winterhalter, S. Schneegass and F. Alt, "Privacy in the Metaverse," in IEEE Pervasive Computing, vol. 23, no. 3, pp. 73-78, July-Sept. 2024, https://doi.org/10.1109/MPRV.2024.3432953.

Details
BT Sophia Sakel
Understanding the Impact of BeReal on Real-World Social Interaction

Details
BT/MT Kathrin Schnizer
Does Belief Shape How We Read Time Series Charts? An Eye-Tracking Study

Description

In visualization comprehension research, the majority of work evaluates viewers' performance using accuracy-based measures [1-6]. Users are presented with a predefined task and assessed on whether they can correctly extract the requested information from a visualization. This approach has proven effective for quantifying basic chart-reading skills and has informed the development of widely used literacy assessments such as VLAT [1] and CALVI [3]. However, it captures only one dimension of how people engage with visualized data: whether they arrive at a correct answer.

Interacting with a data visualization is not limited to extracting individual values. Viewers also interpret relationships, trends, and patterns, and, critically, they bring their own knowledge and beliefs to the interaction. Most evaluation studies control for this by using arbitrary or unfamiliar data categories, ensuring that prior knowledge does not confound performance [7, 8]. But in real-world settings, visualizations carry meaning. A chart showing the relationship between vaccination rates and disease incidence, or between income and level of education, is not neutral to the viewer. Users may agree or disagree with the relationship depicted, and this may shape how they process the visualization.

Understanding whether and how agreement with visualized relationships influences viewing behavior has direct practical relevance. Characterizing how viewers react to information that confirms or contradicts their expectations contributes a complementary dimension of visualization comprehension beyond task accuracy. Furthermore, if agreement is reflected in gaze, this opens the long-term possibility of detecting disagreement or misconceptions from viewing behavior, enabling adaptive systems that respond to what users believe. Before any such applications are feasible, however, we need to establish whether agreement is reflected in gaze at all.

Prior work suggests that agreement may be detectable through gaze behavior. Gaze and facial expressions can be used to infer when users disagree with the output of a machine learning system [9], and eye-tracking attention maps differ systematically when viewers arrive at disagreeing interpretations of the same visual stimulus [10]. These findings have not yet been extended to the domain of data visualization.

In this thesis, we investigate whether the strength of a viewer's agreement with the content of a data visualization is reflected in their viewing behavior. To address this, we examine how gaze patterns relate to self-reported agreement when viewers inspect visualizations depicting relationships that align with or contradict their prior beliefs.

Research Phases

  1. Literature review: Review existing work on gaze behavior in the context of agreement, expectation violation, surprise, and cognitive conflict — focusing on eye-tracking studies in information processing and visualization comprehension. Identify which gaze metrics (e.g., fixation duration, revisits, scanpath patterns) have been linked to belief-congruent vs. belief-incongruent processing in related domains.
  2. Pilot study (online): Design and run an online survey (minimum 10 participants) to identify real-world data relationships (e.g., education and salary, smoking and life expectancy) for which there is strong population-level consensus on the expected direction. Select a balanced set of congruent and incongruent relationships to serve as the basis for stimulus design.
  3. Stimulus design: Create scatterplot visualizations depicting the selected relationships. For each relationship, produce both a congruent version (matching expected direction) and an incongruent version (opposing expected direction). Control for visual complexity, data legibility, and number of data points across stimuli.
  4. Experiment implementation: Implement the laboratory experiment using PsychoPy and EyeLink. Each trial presents a data visualization followed by a continuous agreement rating on a Likert scale, with an opt-out option ("I have no prior belief about this relationship"). Implement trial randomization and counterbalancing.
  5. Data collection: Conduct the laboratory study (minimum 30 participants).
  6. Data analysis: Preprocess gaze data using an existing feature extraction pipeline. Analyze the relationship between agreement strength (continuous Likert rating) and gaze metrics using regression-based methods.
  7. Thesis and presentation: Summarize motivation, method, results, and implications in a written thesis and present findings to an audience.
  8. (Optional) Co-author a research paper based on the study results.
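For the analysis phase above, the core question is whether a gaze metric varies with agreement strength. The study plans regression-based analysis in R; the Python snippet below, with invented, perfectly linear toy data, only illustrates the idea of correlating a per-trial gaze metric with the Likert rating:

```python
import math

# Toy sketch: relate a per-trial gaze metric (here, total fixation duration)
# to the continuous agreement rating. Data are invented; a real analysis would
# also model participant as a random effect.

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

agreement   = [1, 2, 3, 4, 5, 6, 7]                # Likert rating per trial
fixation_ms = [900, 850, 800, 750, 700, 650, 600]  # total fixation duration

# A negative r would mean: the less viewers agree, the longer they look.
print(pearson_r(agreement, fixation_ms))  # -1.0 on this perfectly linear toy data
```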

You Will

  • Conduct a literature review on gaze correlates of agreement, expectation violation, and cognitive conflict.
  • Design and run an online pilot study to select stimulus topics.
  • Create scatterplot stimuli for the laboratory experiment.
  • Implement the experiment in PsychoPy with EyeLink eye-tracking integration.
  • Run a laboratory study with a minimum of 30 participants.
  • Analyze gaze data in relation to agreement strength using regression-based methods.
  • Document your work in a thesis and present your findings.
  • (Optional) Contribute to co-authoring a research publication.

You Need

  • Good written and verbal communication skills in English.
  • Solid Python skills for stimulus generation and experiment implementation.
  • Basic knowledge of R for statistical analysis.
  • Familiarity with eye-tracking is a plus, but not required.

References

  • [1] S. Lee, S.-H. Kim, and B. C. Kwon, "VLAT: Development of a Visualization Literacy Assessment Test," IEEE Trans. Vis. Comput. Graph., vol. 23, no. 1, pp. 551–560, Jan. 2017, doi: 10.1109/TVCG.2016.2598920.
  • [2] S. Pandey and A. Ottley, "Mini-VLAT: A Short and Effective Measure of Visualization Literacy," Comput. Graph. Forum, vol. 42, no. 3, pp. 1–11, 2023, doi: 10.1111/cgf.14809.
  • [3] L. W. Ge, Y. Cui, and M. Kay, "CALVI: Critical Thinking Assessment for Literacy in Visualizations," in Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, in CHI '23. New York, NY, USA: Association for Computing Machinery, Apr. 2023, pp. 1–18. doi: 10.1145/3544548.3581406.
  • [4] Y. Cui, L. W. Ge, Y. Ding, F. Yang, L. Harrison, and M. Kay, "Adaptive Assessment of Visualization Literacy," Aug. 27, 2023, arXiv: arXiv:2308.14147. Accessed: Aug. 27, 2024. [Online]. Available: http://arxiv.org/abs/2308.14147
  • [5] J. Boy, R. A. Rensink, E. Bertini, and J.-D. Fekete, "A Principled Way of Assessing Visualization Literacy," IEEE Trans. Vis. Comput. Graph., vol. 20, no. 12, pp. 1963–1972, Dec. 2014, doi: 10.1109/TVCG.2014.2346984.
  • [6] G. J. Quadri, A. Z. Wang, Z. Wang, J. Adorno, P. Rosen, and D. A. Szafir, "Do You See What I See? A Qualitative Study Eliciting High-Level Visualization Comprehension," in Proceedings of the CHI Conference on Human Factors in Computing Systems, in CHI '24. New York, NY, USA: Association for Computing Machinery, May 2024, pp. 1–26. doi: 10.1145/3613904.3642813.
  • [7] D. Peebles and N. Ali, "Expert interpretation of bar and line graphs: The role of graphicacy in reducing the effect of graph format," Frontiers in Psychology, vol. 6, p. 1673, 2015.
  • [8] E. E. Firat, A. Joshi, and R. S. Laramee, "Interactive visualization literacy: The state-of-the-art," Information Visualization, vol. 21, no. 3, pp. 285–310, 2022.
  • [9] O. Bhatti, M. Barz, and D. Sonntag, "Leveraging implicit gaze-based user feedback for interactive machine learning," in German Conference on Artificial Intelligence (Künstliche Intelligenz). Cham: Springer International Publishing, 2022.
  • [10] S. Hindennach, L. Shi, and A. Bulling, "Explaining Disagreement in Visual Question Answering Using Eye Tracking," in Proceedings of the 2024 Symposium on Eye Tracking Research and Applications, 2024.

Details
BT/MT Philipp Thalhammer, Katharina Barlage
Physical AI Switch

Preliminary Abstract

Few technologies shape today's world as much as generative artificial intelligence (genAI). In a rush to be early adopters, companies try to include genAI in their products, neglecting their users' need for agency, control, and trust. We propose a layered approach that gives users control over whether, and to what degree, they want to use genAI. As tangible interfaces have been shown to increase trust in matters of privacy and security, we developed several tangible control mechanisms for users to switch between different stages and evaluated them in an exploratory study.

Goal

In this thesis, you will explore the design space of tangible interfaces for controlling generative AI access. Specifically, you will conceptualize and prototype several distinct physical "AI switches", each representing a different design language (e.g., user-centered design, critical design). The process includes the development of functional prototypes using microcontrollers (e.g., ESP32), electronic components (buttons, switches, displays), and digital fabrication techniques (e.g., 3D printing, laser cutting). The prototypes will be evaluated in a comparative, exploratory user study using a Wizard-of-Oz setup that simulates the different AI access levels without needing a fully functional backend.
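The layered-access idea behind the switch can be sketched as a small state model. The stage names and the mapping below are hypothetical illustrations, not part of the project brief; in the Wizard-of-Oz study, the selected stage would be relayed to the wizard instead of a real AI backend.

```python
from enum import IntEnum

class AIStage(IntEnum):
    """Hypothetical layered genAI access levels a physical switch could select."""
    OFF = 0          # no genAI involvement
    SUGGEST = 1      # genAI may propose, user confirms every action
    ASSIST = 2       # genAI acts, user can review and revert
    AUTONOMOUS = 3   # genAI acts freely within the product

def on_switch_change(position: int) -> AIStage:
    """Map a raw switch position (e.g., from an ESP32 rotary switch) to a stage."""
    if not 0 <= position <= max(AIStage):
        raise ValueError(f"unknown switch position: {position}")
    return AIStage(position)

print(on_switch_change(2).name)  # prints "ASSIST"
```

A physical prototype would call `on_switch_change` from the microcontroller's input handler and forward the stage over serial or Wi-Fi to the study apparatus.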

What we expect

  • You have experience with hardware (working with microcontrollers, 3D printing etc.)
  • You can work and solve problems independently
  • You are creative

What you get

  • Two committed supervisors, weekly meetings, and hands-on advice
  • Being part of state-of-the-art research on AI-user interaction
  • A bachelor's/master's thesis

Details
BT/MT Philipp Thalhammer, Thomas Weber
The Intelligent Rubber Duck

Preliminary Abstract

In software engineering, the concept of "rubber-ducking", where people verbally explain their problems to an inanimate object in order to externalise them, is a well-established debugging practice. While this can be helpful, traditional rubber-ducking remains entirely passive: the artifact does not respond, adapt, or provide feedback. We envision an AI-powered physical artifact that enables users to receive feedback in different stages of the debugging process and at different levels of detail through multimodal outputs, ranging from simple attention signals to actual help with coding problems. In this thesis, you will investigate what levels of feedback developers need from a rubber duck to support reflection and successful coding without disrupting their problem-solving process.

Goal

In this thesis, you will build a physical artifact in the form of a duck that supports voice input and offers multimodal forms of feedback. The system leverages existing AI technologies for software creation and debugging to generate situational feedback. This interface will then be evaluated in a user study (with CS students or developers) to find out how AI can support the problem-solving process in software engineering without completely taking over the process, therefore maintaining desirable properties like developer satisfaction, learning, and code understanding.
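The "different levels of detail" could, for instance, be driven by a simple escalation policy: the duck only becomes more intrusive while the developer stays stuck. The level names and reset behavior below are hypothetical design sketches, not a specification of the system to be built.

```python
from dataclasses import dataclass

# Hypothetical feedback levels, from least to most intrusive, mirroring the
# range from "simple attention signals" up to "actual help with coding problems".
LEVELS = ["attention_signal", "clarifying_question", "hint", "solution"]

@dataclass
class DuckSession:
    """Escalates feedback by one level per utterance while the developer is stuck."""
    level: int = -1  # -1 means no feedback given yet

    def developer_spoke(self, still_stuck: bool) -> str:
        if not still_stuck:
            self.level = -1          # reset once the problem moves forward
            return "idle"
        if self.level < len(LEVELS) - 1:
            self.level += 1          # escalate one step at a time
        return LEVELS[self.level]

duck = DuckSession()
print(duck.developer_spoke(True))   # prints "attention_signal"
print(duck.developer_spoke(True))   # prints "clarifying_question"
print(duck.developer_spoke(False))  # prints "idle"
```

In the physical artifact, each returned level would map to a multimodal output (LED cue, spoken question, code suggestion), with the actual content generated by the underlying AI tooling.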

What you will do

  • Find existing literature on software development and debugging with AI, human factors for software developers, and Tangible User Interfaces for software creation
  • Design and implement a physical artifact that supports developer interaction for rubber-ducking
  • Design and conduct an evaluation of your artifact
  • Write a thesis documenting your process and its findings

What we expect

  • You have experience with hardware prototyping (incl. working with microcontrollers, 3D printing, etc.)
  • You have experience with software development, particularly AI-assisted software creation
  • Solid skills in English reading and writing
  • You can work and solve problems independently
  • You are creative

What you get

  • Two committed supervisors, weekly meetings, and hands-on advice
  • Being part of state-of-the-art research on AI-user interaction
  • A bachelor's/master's thesis

Details
BT/MT Steeven Villa
Understanding the Role of External Feedback in Motor Learning using EMS and Robotic Actuation

Description

When learning complex motor skills, like playing piano or dancing, we refine our movements over time through repetition and conscious correction. Neuroscience suggests that this process relies on internal execution and reflection. However, recent findings challenge this view: a study using Electrical Muscle Stimulation (EMS) showed that externally actuating corrective movements can enhance both performance and learning. This project investigates a key question: is it the muscular stimulation or the external actuation that drives learning? You'll explore this by comparing EMS-based feedback with robotic physical guidance using our custom setup, which includes a Novint Falcon haptic device, dual EMS systems, and Unity3D. The work will involve building interactive scenarios, conducting controlled user studies, and analyzing behavioral outcomes.

You will Gain

  • Practical experience with EMS and robotic feedback systems
  • Skills in motor learning research, experimental design, and human-subject studies
  • The chance to contribute to understanding how humans learn with external support

Requirements / Willingness to Learn:

  • Experience or strong interest in Unity3D and haptic interfaces
  • Interest in neuroscience, motor control, or human augmentation
  • Comfort working with participants and running empirical studies

References

  • Steeven Villa, Finn Jacob Eliyah, Yannick Weiss, Robin Welsch, Thomas Kosch. 2025. Understanding the Influence of Electrical Muscle Stimulation on Motor Learning: Enhancing Motor Learning or Disrupting Natural Progression?

Details
BT/MT Steeven Villa
Translating Attitudes Toward Human Augmentation

Description

As technologies like AI, EMS, and XR enter our daily lives, new social dilemmas arise. Imagine taking a test while another student uses AI support you don't have access to. How do people perceive fairness, advantage, or identity in these contexts? In prior research, we developed a psychometric scale (SHAPE) to measure public attitudes toward technologically augmented humans. However, the tool is currently only available in English. This project asks: how can we make this tool accessible across cultures while preserving its meaning and scientific validity? You'll work on a standardized translation of the SHAPE scale into either German or Japanese, depending on your native language. The process includes linguistic validation, focus groups, expert panels, and online surveys to ensure cultural nuance and psychometric rigor.

You will Gain

  • Insight into cross-cultural HCI and public perception of emerging technologies
  • Experience with psychometric translation and qualitative research
  • A chance to shape international research on human augmentation

Requirements / Willingness to Learn:

  • Native fluency in German or Japanese, and excellent command of English
  • Interest in psychology, human augmentation, or cultural studies
  • Willingness to conduct focus groups and engage with diverse participants

References

  • Steeven Villa, Jasmin Niess, Takuro Nakao, Jonathan Lazar, Albrecht Schmidt, and Tonja-Katrin Machulla. 2023. Understanding Perception of Human Augmentation: A Mixed-Method Study.
  • Steeven Villa, Jasmin Niess, Albrecht Schmidt, Robin Welsch. Society's Attitudes Towards Human Augmentation and Performance Enhancement Technologies (SHAPE) Scale

Details
BT/MT Steeven Villa, Abdallah El Ali
Exploring Haptic Illusions from a Third-Person Perspective and Through Virtual Avatars

Description

Haptic illusions occur when people experience the sensation of touch through non-tactile senses like vision or hearing. For example, visual manipulation of hand movements can evoke perceptions of weight, stiffness, or friction, even in the absence of actual physical feedback. This project investigates a compelling question: can haptic illusions still occur when individuals view themselves from a third-person perspective or interact through a virtual avatar? You'll explore how immersive VR environments and avatar-based interactions influence the perception of haptic feedback. The work will involve experimental design, development in virtual reality, and user studies.

You will Gain

  • Hands-on experience in cutting-edge VR research
  • Skills in multimodal perception and human-computer interaction
  • An opportunity to contribute to the scientific understanding of virtual embodiment and haptics

Requirements / Willingness to Learn:

  • VR development experience using Unity3D
  • Familiarity with Meta Quest devices
  • Interest in learning how to control haptic interfaces, such as the Novint Falcon

References

  • Yannick Weiss, Steeven Villa, Albrecht Schmidt, Sven Mayer, Florian Muller. 2023. Using Pseudo-Stiffness to Enrich the Haptic Experience in Virtual Reality.
  • Albrecht Schmidt. Augmenting Human Intellect and Amplifying Perception and Cognition.

Details
BT/MT Thomas Weber
In-Sketch Auto-Completion vs. Conversational Sketch Generation for CAD

This thesis is available from October 2025 at the earliest. If you require a thesis before this date, please consider another topic.

Parametric CAD software such as Fusion360 or FreeCAD uses sketches to define 3D geometry and ultimately design complex parts. Creating these sketches can be a time-intensive, complex process. Modern AI systems promise to increase productivity in complex tasks like this by automating tedious and repetitive aspects.

In this thesis, your goal will be to explore how the presentation and interaction design of AI assistance in these situations affects the engineer's productivity. To this end, you will extend existing CAD tools to facilitate AI support in two ways: fine-grained AI completions at the level of individual sketch parameters (e.g., dimensions) and high-level generation from descriptions, e.g., through a conversational interface (roughly analogous to inline auto-completion and conversational interfaces in coding). You will then conduct a user study to evaluate the differences between these interaction paradigms with respect to productivity.


Details
BT Thomas Weber
Benchmarking Generative AI for 3D Modelling

This thesis is available from October 2025 at the earliest. If you require a thesis before this date, please consider another topic.

Generative AI has proven highly successful at generating code, excelling in many benchmarks. While code is typically used to define system behavior, it can also be used to generate, for example, 3D geometries. OpenSCAD is one example of such a scripting language for defining complex 3D objects.

In this thesis, your goal will be to evaluate how well different large language models generate 3D geometries through OpenSCAD or other means.
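A benchmark along these lines needs an automated way to judge model output. As a minimal sketch, the snippet below runs cheap static sanity checks on a candidate OpenSCAD source (the hard-coded `candidate` string stands in for an LLM response); a real benchmark would additionally render the source, e.g. with the `openscad` command-line tool, and compare the resulting geometry against a reference.

```python
import re

# A model-generated candidate (here hard-coded; in a benchmark it would come
# from an LLM prompted with e.g. "a 20 mm cube with a 5 mm hole through it").
candidate = """
difference() {
    cube([20, 20, 20], center = true);
    cylinder(h = 30, r = 5, center = true, $fn = 64);
}
"""

KNOWN_PRIMITIVES = {"cube", "sphere", "cylinder", "polyhedron"}

def quick_check(src: str) -> bool:
    """Cheap static checks before an expensive render: balanced braces and
    parentheses, and at least one known 3D primitive is invoked."""
    balanced = all(src.count(a) == src.count(b) for a, b in [("{", "}"), ("(", ")")])
    used = set(re.findall(r"\b(\w+)\s*\(", src))
    return balanced and bool(used & KNOWN_PRIMITIVES)

print(quick_check(candidate))  # prints "True"
```

Passing such a filter says nothing about geometric correctness, which is exactly why the thesis would need a proper rendering-based evaluation pipeline.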


Details

BT = bachelor thesis - PT = project thesis - MT = master thesis - PWAL = practical research course

Further Topics

Institut für Digitales Management und Neue Medien

Supervision there is possible for students with the minor subject Medienwirtschaft, in cooperation with Prof. Butz; the rules of the IfI (in particular on processing time and registration) apply. Further information

Lehrstuhl für Ergonomie (TUM-LFE)

The Chair of Ergonomics at the Technical University of Munich (TUM) (Prof. Bengler) offers student theses in topic areas including interaction with future assistance systems and highly automated systems, investigation of multimodal human-machine interaction, and digital human modeling.

Supervision there is possible in cooperation with Prof. Butz; the rules of the LMU (in particular on processing time and registration) apply. Further information

Lehrstuhl für Architekturinformatik

The Chair of Architectural Informatics at the Technical University of Munich (TUM) (Prof. Petzold) offers student theses in the topic area of gamification for cooperative planning.

Supervision there is possible in cooperation with Prof. Butz; the rules of the LMU (in particular on processing time and registration) apply. Further information

The contact person at the Chair of Architectural Informatics is Gerhard Schubert.

Lehrstuhl für Fahrzeugtechnik (FTM)

The Chair of Automotive Technology (FTM) at the Technical University of Munich (TUM) (Prof. Lienkamp) offers student work in the areas of autonomous driving, human-machine interaction, teleoperation, and driving simulation. The chair has a licensed autonomous Level 4 vehicle that can be used to conduct studies and tests.

Supervision there is possible in cooperation with Prof. Butz; the rules of the LMU (in particular on processing time and registration) apply. Further information

  • International and interdisciplinary teamwork
  • Preparation of student theses (bachelor/master thesis)
  • Support of the projects by industrial companies
  • Automated vehicle EDGAR
  • RoboRacer RC cars for user studies

Lehrstuhl für Medientechnik (LMT-TUM)

The Chair of Media Technology (LMT) at the Technical University of Munich (TUM) (Prof. Steinbach) offers student theses in topic areas including compression and coding of multimedia information.

Supervision there is possible in cooperation with Prof. Butz; the rules of the LMU (in particular on processing time and registration) apply. Further information

Lancaster University

In the United Kingdom, you can write your thesis at our partner university in Lancaster.

Supervision there is possible in cooperation with Prof. Butz; the rules of the LMU (in particular on processing time and registration) apply. Further information and a list of topics

Queensland University of Technology

You can also write your thesis in Australia, at our partner university QUT in Brisbane.

Supervision there is possible in cooperation with Prof. Butz; the rules of the LMU (in particular on processing time and registration) apply.
Imprint – Privacy – Contact  |  Last modified on 05.09.2025 by Rainer Fink (rev 44979)