Aligning LLMs with Human Mental Models
Bachelor/Master Thesis (BT/MT)
| Status | open |
| Student | N/A |
| Advisor | Teodora Mitrevska |
| Professor | Prof. Dr. Andreas Butz |
Project Overview
Mental models are internal cognitive representations that people construct to understand, reason about, and predict events in their environment; they reflect both the structure of the external world and the individual's prior knowledge. When humans generate content in interaction with LLMs, the model's output is usually grammatically correct and contextually plausible. However, it does not always match the expectations humans form during the dialogue. In this thesis, we will explore this alignment in human-AI interaction on a perceptual level.
Project Goals
In this project, we will explore the alignment between model-generated output and human expectations in a discourse completion task.
- Experiment Design: Participants will be shown a sentence that they complete by typing on a keyboard. Then the same or a different sentence will be presented, and participants will rate on a scale how closely it matched their input (see the trial sketch after this list).
- Data Analysis: Preprocess the collected EEG data and analyze ERP components (see the preprocessing sketch below).
- (Optional) Model Training: Train a model on the collected data that predicts different levels of semantic match (see the baseline sketch below).
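As a rough illustration of the trial structure, the following sketch implements a single trial with PsychoPy. All texts, keys, and timings are placeholder assumptions; the real study would additionally send synchronized triggers to the EEG and eye tracker, and the typing phase is reduced to a key press here.

```python
# Minimal sketch of one trial of the discourse completion task (PsychoPy).
# Texts, keys, and timings are assumptions, not project specifics.
from psychopy import visual, core, event

win = visual.Window(fullscr=False, color="grey", units="height")

# 1) Show the sentence stem the participant completes by typing.
stem = visual.TextStim(win, text="The coffee was too hot to ...", color="white")
stem.draw()
win.flip()
event.waitKeys(keyList=["return"])  # stand-in for the actual typing phase

# 2) Present a completion (the same or a different one).
completion = visual.TextStim(win, text="The coffee was too hot to touch.", color="white")
completion.draw()
win.flip()
core.wait(1.0)

# 3) Collect a 1-7 match rating via the number keys.
question = visual.TextStim(win, text="How well did this match your completion? (1-7)", color="white")
question.draw()
win.flip()
keys = event.waitKeys(keyList=[str(i) for i in range(1, 8)])
rating = int(keys[0])
print("rating:", rating)

win.close()
core.quit()
```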
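For the analysis step, a typical ERP pipeline could look like the following MNE-Python sketch. The file name, stimulus channel, event code, and epoch window are assumptions, not project specifics.

```python
# Minimal sketch of an ERP preprocessing pipeline using MNE-Python.
import mne

# Load raw EEG (hypothetical file path).
raw = mne.io.read_raw_fif("sub-01_task-completion_raw.fif", preload=True)

# Band-pass filter to remove slow drift and high-frequency noise.
raw.filter(l_freq=0.1, h_freq=30.0)

# Find events from the stimulus channel (channel name is an assumption).
events = mne.find_events(raw, stim_channel="STI 014")

# Epoch around completion onset; event code 1 is an assumption.
epochs = mne.Epochs(raw, events, event_id={"completion": 1},
                    tmin=-0.2, tmax=0.8, baseline=(None, 0), preload=True)

# Average epochs to obtain the ERP; the N400 window (~300-500 ms)
# is the classic component for semantic mismatch.
evoked = epochs.average()
evoked.plot()
```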
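For the optional modeling step, one simple baseline is to relate the rated match to the semantic similarity between the typed and presented sentences. The sketch below uses a sentence-transformers model; the model name and example sentences are assumptions, and the actual model would be trained on the collected ratings (and possibly the EEG features).

```python
# Minimal sketch of a semantic-match baseline using sentence embeddings.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # model choice is an assumption

typed = "The coffee was too hot to drink."   # participant's completion (example)
shown = "The coffee was too hot to touch."   # presented completion (example)

# Cosine similarity between the two completions serves as a simple
# baseline predictor of the reported match rating.
emb = model.encode([typed, shown])
similarity = util.cos_sim(emb[0], emb[1]).item()
print(f"Baseline semantic match (cosine): {similarity:.2f}")
```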
You will
- Test an existing data collection system.
- Conduct a user study with EEG and Eye Tracking.
- Collect and analyze the data.
- Summarize findings in a thesis and present them.
- (Optional) Co-write a research paper based on the results.
You need
- Strong communication skills in English
- Some Python understanding
References
- Sara C. Sereno and Keith Rayner. "Measuring word recognition in reading: eye movements and event-related potentials". Trends in Cognitive Sciences, 2003. https://www.cell.com/trends/cognitive-sciences/abstract/S1364-6613(03)00259-6?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS1364661303002596%3Fshowall%3Dtrue
