Can We Trust AI to Teach Security? Quality Assurance for Animated AI-Generated Cybersecurity Learning Content
BT/MT/PT
| Status | open |
| Advisor | Katharina Barlage |
| Professor | Prof. Dr. Florian Alt |
Task
Generative AI systems are increasingly used to create educational content, including animated learning materials that aim to explain complex cybersecurity concepts in an engaging way. While such systems can scale content production, they also introduce risks such as incorrect explanations, misleading visuals, or insecure recommendations.
In this thesis, you will work with an existing prototype that generates animated cybersecurity learning materials using AI. The goal is to systematically assess and improve the quality of these materials from both a security and a user perspective.
You will:
- Define a quality framework tailored to animated learning materials (e.g., correctness, clarity, visual accuracy, pedagogical effectiveness, engagement)
- Generate sample learning units (e.g., phishing, password security, encryption basics)
- Identify common issues (e.g., hallucinated explanations, misleading animations, oversimplifications)
- Conduct a user study to evaluate how learners perceive and understand the generated content
The user study may investigate:
- Learning outcomes (e.g., comprehension, retention)
- Trustworthiness and credibility
- Engagement and usability of animated AI-generated materials
- Differences between AI-generated and curated (baseline) content
