Seminar Data-Science II (Empirical Studies)
IS 723 for Master students (M.Sc. MMM, M.Sc. WiPäd, M.Sc. MMDS)
| Lecturer | Jana Jung, Jens Rupprecht |
| Course Format | Seminar |
| Offering | HWS/ |
| Credit Points | 6 ECTS / 4 ECTS |
| Language | English |
| Grading | Written report (40%), Report review (10%), Oral presentation (40%) and Discussion (10%) |
| Examination date | See schedule below |
| Information for Students | The course is limited to 12 participants. The registration process is explained below. |
Contact
For administrative questions, please contact Jana Jung.

Jana Jung
L 15, 1–6
3rd floor – Room 316
68161 Mannheim
Course Information
Course Description
The achievement of the learning goals is pursued by practicing on the basis of personally assigned in-depth scientific topics as well as by actively participating in the presentation dates. The organizer will choose subject areas within the field of Data-Science (see Topics) and provide scientific papers to students to work through.
Previous participation in the courses offered by our chair are recommended.
Topics
This seminar will be split into two main topic blocks. Every student will be assigned a research paper from only one of these blocks to work on. Yet, it is expected that students also actively participate in discussion on papers from the other topic blocks after they have been presented.
When applying for this seminar, please indicate whether you would be interested in only one or both topic blocks. The two topics we are going to discuss in the FSS 2026 are:
- Misinformation and Persuasion in Large Language Models. Large language models (LLMs) are being integrated into more and more areas of our lives, including decision-making, education, and social platforms. In these areas, persuasive communication can significantly influence human beliefs and behavior. Understanding how these models generate persuasive content—and how susceptible they are to misinformation themselves—is essential for ensuring ethical design, preventing misuse, and maintaining trust in AI systems. We will explore how LLMs generate, respond to, and are influenced by persuasive language and misinformation. This will reveal both their powerful communicative potential and the ethical challenges of using LLMs.
- Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts
- The Earth is Flat because...: Investigating LLMs' Belief towards Misinformation via Persuasive Conversation
- The Persuasive Power of Large Language Models
- What Evidence Do Language Models Find Convincing?
- “I understand your perspective”: LLM Persuasion through the Lens of Communicative Action Theory
- Sycophancy. Large language models exhibit a tendency known as sycophancy, where they align with a user's viewpoint, even if it is factually incorrect, often to appear more favorable or helpful. This behavior poses a significant risk to the reliability of information and can reinforce user biases. Current research focuses on quantifying this phenomenon and developing mitigation techniques, such as fine-tuning models on synthetic data to distinguish between a user's opinion and factual accuracy. Addressing sycophancy is a critical challenge for ensuring the development of trustworthy and robust AI systems.
- Chaos with Keywords: Exposing Large Language Models Sycophancy to Misleading Keywords and Evaluating Defense Strategies
- Measuring Sycophancy of Language Models in Multi-turn Dialogues
- SycEval: Evaluating LLM Sycophancy
- When Large Language Models contradict humans? Large Language Models’ Sycophantic Behaviour
Through this seminar, students will gain a comprehensive understanding of the ethical and social dimensions of LLMs, preparing them to critically engage with these technologies in their future work.
- Misinformation and Persuasion in Large Language Models. Large language models (LLMs) are being integrated into more and more areas of our lives, including decision-making, education, and social platforms. In these areas, persuasive communication can significantly influence human beliefs and behavior. Understanding how these models generate persuasive content—and how susceptible they are to misinformation themselves—is essential for ensuring ethical design, preventing misuse, and maintaining trust in AI systems. We will explore how LLMs generate, respond to, and are influenced by persuasive language and misinformation. This will reveal both their powerful communicative potential and the ethical challenges of using LLMs.
Objectives
On the basis of suitable literature, in particular original scientific articles, students independently familiarize themselves with a topic in data-science, classify and narrow down the topic appropriately and develop a critical evaluation. Students work out concepts, procedures and results of a given topic clearly and with appropriate formalisms in a timely manner and to a defined extent in depth in writing; Evidence of independent development by presenting self-selected examples. Descriptive oral presentation of an in-depth data science topic using suitable media and examples in a given format.
Schedule (for FSS 2026: tba)
Registration period until 28.08.25 (11.59 PM) see “Registration” Notification of acceptance/ rejection 04.09.25 Drop-out until 05.09.25 Kick-off meeting 10.09.25, 9:45–10:30
L15 1-6, room 314/
315 general information 1st Presentation Date 14.10. or 24.10.25, 08:30 – 11:30
L15 1-6, room 314/
315 presentations 2nd Presentation Date 28.10. or 07.11.25 , 08:30 -11:30
L15 1-6, room 314/
315 presentations Peer review of report drafts Week of 10.11.2025 Submission deadline 28.11.25, 23:59 Registration
If you are interested in this seminar, please apply to Jana Jung via email.
Please provide some details about your background, e.g., whether you have taken some relevant classes before and a short motivation to take this seminar. Also, make sure to indicate which ones of the two given topics you are interested in.