AI as a Team Member: Smarter Decisions or Greater Complexity?

The integration of artificial intelligence (AI) as an active member of teams composed of specialists and managers has the potential to fundamentally transform the world of work. AI no longer merely supports teams: it becomes part of them. The central question, therefore, is: Does this also lead to better decisions?
Opportunities and risks for team decisions
AI can generate content, provide information, and support complex decision-making processes. This creates a tension between opportunity and risk for the quality of team decisions: on the one hand, AI team members can contribute knowledge and structure deliberations. On the other hand, there is a risk that they will generate incomplete or distorted information (so-called hallucinations). Human team members often find it difficult to distinguish correct content from incorrect content. The result: decision quality may suffer rather than improve.
A recent study from the University of Mannheim by Dr. Désirée Zercher and Prof. Dr. Armin Heinzl, conducted with co-authors from TU Darmstadt and formerly KIT, shows that this risk increases particularly when AI recommendations are difficult to understand.
The crux of the matter is social validation: people only trust AI-generated information if they can verify it against their own knowledge. Without this ability, mistrust and rejection arise, and the potential benefits of AI are undermined.
Experimental Study
To better understand these relationships, the researchers conducted an experimental study based on the information asymmetry model. The model describes a central problem faced by many teams: relevant knowledge is distributed unevenly among members, which biases information processing during decision making and leads to suboptimal results. The goal was to investigate whether, and under what conditions, AI actually improves teams' decision quality, and to what extent this depends on the AI member's level of knowledge.
Three scenarios were compared:
- Purely human teams
- Teams with a centrally informed AI that had access to all relevant information
- Teams with an asymmetrically informed AI whose knowledge was incomplete, comparable to that of the human team members
Result: Teams with centrally informed AI benefit the most
The results show that incorporating AI team members reduced human decision-making biases in both knowledge configurations. Ultimately, however, only teams with the centrally informed AI benefited significantly. In these teams, the AI helped overcome two typical human tendencies: clinging to initial positions and discussing only information that was already known to everyone. By consistently recommending the alternative most strongly rejected by the human team members, it pushed the teams to broaden their focus and bring previously unconsidered data points into their deliberations. The AI thus acted as a catalyst for more objective information processing and compensated for human cognitive limitations.
However, one factor was decisive: trust. This arose only when team members could understand and verify the AI’s contributions. When this was the case, the AI’s information was integrated more deeply into the decision-making process, leading to better results.
When AI knowledge is limited: More doubt than support
A different picture emerged in teams with asymmetric AI knowledge. In these teams, the content provided by the AI could not be fully verified, and trust in the AI declined accordingly. Although the AI's contributions were discussed, they were often ultimately rejected. Instead, the teams relied on human assessments and majority preferences, even when these were flawed. The lack of social validation bred mistrust and an overly critical stance toward the AI. In these cases, AI did not solve problems in the decision-making process; it created additional uncertainty.
Implications for businesses
The results make it clear: Integrating AI into teams is not just a technological challenge.
When integrating AI into decision-making processes, psychological and social interaction effects between humans and machines must be taken into account. This includes creating the necessary conditions so that:
- deployed AI systems operate transparently and reliably,
- employees possess the necessary AI competencies to collaborate effectively with AI team members,
- the AI has access to relevant data, and
- AI contributions are comprehensible and verifiable for human team members.
Without these conditions, AI is more likely to be perceived as an error-prone tool than as a valuable team member.
Further Readings
- The full research paper can be read here.
- Research by the same authors on this topic: