Seminar Data-Science II (Empirical Studies)
IS 723 for Master's students (M.Sc. MMM, M.Sc. WiPäd)
Lecturer | Jana Jung, Abigail Hayes, Marlene Lutz, Jens Rupprecht |
Course Format | Seminar |
Offering | HWS |
Credit Points | 6 ECTS |
Language | English |
Grading | Written report (50%), oral presentation (40%) and discussion (10%) |
Examination date | See schedule below |
Information for Students | The course is limited to 12 participants. The registration process is explained below. |
Contact
For administrative questions, please contact Jana Jung.

Jana Jung
L 15, 1–6
3rd floor, Room 316
68161 Mannheim
Course Information
Course Description
Students pursue the learning goals by engaging with individually assigned, in-depth scientific topics and by actively participating in the presentation sessions. The organizers will choose subject areas within the field of data science (see Topics) and provide scientific papers for students to work through.
Previous participation in the courses offered by our chair is recommended.
Topics
This seminar is split into four main topic blocks. Every student will be assigned a research paper from only one of these blocks to work on. Nevertheless, students are expected to actively participate in the discussion of papers from the other topic blocks after they have been presented.
When applying for this seminar, please indicate which of the four topic blocks you would be interested in. The topic blocks we are going to discuss in the HWS 2025 are:
- Misinformation and Persuasion in Large Language Models. Large language models (LLMs) are being integrated into more and more areas of our lives, including decision-making, education, and social platforms. In these areas, persuasive communication can significantly influence human beliefs and behavior. Understanding how these models generate persuasive content—and how susceptible they are to misinformation themselves—is essential for ensuring ethical design, preventing misuse, and maintaining trust in AI systems. We will explore how LLMs generate, respond to, and are influenced by persuasive language and misinformation. This will reveal both their powerful communicative potential and the ethical challenges of using LLMs.
- Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts
- The Earth is Flat because...: Investigating LLMs' Belief towards Misinformation via Persuasive Conversation
- The Persuasive Power of Large Language Models
- What Evidence Do Language Models Find Convincing?
- Large Language Models in Social Network Analysis. Large language models (LLMs) have the potential to analyze complex social networks by using the information implicitly contained in their textual training data. Unlike models designed specifically for social networks, which may require more social network data than is available for training, LLMs can interpret relationships, behaviors, and patterns embedded in language to simulate communication and influence. We will explore how LLMs can help uncover insights about the dynamics of social systems, as well as the challenges that remain in accurately capturing structured data and avoiding misinterpretations.
- Decoding Echo Chambers: LLM-Powered Simulations Revealing Polarization in Social Networks
- When LLM Meets Hypergraph: A Sociological Analysis on Personality via Online Social Networks
- Quantifying the uncertainty of LLM hallucination spreading in complex adaptive social networks
- Exploring the Potential of Large Language Models (LLMs) in Learning on Graphs
- Sycophancy. Large language models exhibit a tendency known as sycophancy, where they align with a user's viewpoint, even if it is factually incorrect, often to appear more favorable or helpful. This behavior poses a significant risk to the reliability of information and can reinforce user biases. Current research focuses on quantifying this phenomenon and developing mitigation techniques, such as fine-tuning models on synthetic data to distinguish between a user's opinion and factual accuracy. Addressing sycophancy is a critical challenge for ensuring the development of trustworthy and robust AI systems.
- Chaos with Keywords: Exposing Large Language Models Sycophancy to Misleading Keywords and Evaluating Defense Strategies
- Social Sycophancy: A Broader Understanding of LLM Sycophancy
- Measuring Sycophancy of Language Models in Multi-turn Dialogues
- SycEval: Evaluating LLM Sycophancy
- When Large Language Models contradict humans? Large Language Models’ Sycophantic Behaviour
- Personalization.
- Algorithmic Fidelity of Large Language Models in Generating Synthetic German Public Opinions: A Case Study
- When Harry Meets Superman: The Role of The Interlocutor in Persona-Based Dialogue Generation
- SynthesizeMe! Inducing Persona-Guided Prompts for Personalized Reward Models in LLMs
- Beyond Demographics: Fine-tuning Large Language Models to Predict Individuals' Subjective Text Perceptions
Through this seminar, students will gain a comprehensive understanding of the ethical and social dimensions of LLMs, preparing them to critically engage with these technologies in their future work.
Objectives
On the basis of suitable literature, in particular original scientific articles, students independently familiarize themselves with a topic in data science, classify and narrow it down appropriately, and develop a critical evaluation. In writing, students present the concepts, procedures, and results of their topic clearly, with appropriate formalisms, on schedule, and to a defined extent and depth, and they demonstrate independent work by presenting self-selected examples. Finally, students give a descriptive oral presentation of their in-depth data science topic in a given format, using suitable media and examples.
Schedule
Registration period | until 28.08.25 (11:59 PM), see "Registration" below |
Notification of acceptance/rejection | 04.09.25 |
Drop-out | until 05.09.25 |
Kick-off meeting (general information) | 10.09.25, 09:45–10:30, L15, 1–6, room 314/315 |
1st Presentation Date (presentations) | 14.10. or 24.10.25, 08:30–11:30, L15, 1–6, room 314/315 |
2nd Presentation Date (presentations) | 28.10. or 07.11.25, 08:30–11:30, L15, 1–6, room 314/315 |
Submission deadline | 28.11.25, 23:59 |
Registration
If you are interested in this seminar, please apply to Jana Jung via email.
Please provide some details about your background, e.g., whether you have taken relevant classes before, and a short motivation for taking this seminar. Also, make sure to indicate which of the four topic blocks you are interested in.