GPT cooperates more than humans, finds research

Professor Dr. Kevin Bauer, Assistant Professor of E-Business and E-Government at the University of Mannheim Business School

The world of artificial intelligence (AI) experienced a significant transformation in November 2022: OpenAI launched ChatGPT, a chatbot driven by a Large Language Model (LLM) named GPT, which uses natural language processing techniques to generate human-like text responses to input. ChatGPT swiftly emerged as an important technology across various sectors, with many businesses exploring how to make use of the tool.
Initially designed to complete text by predicting the next word in a sequence, LLMs such as GPT-3 and GPT-4 have redefined benchmarks in tasks such as article generation, computer code development, sentiment detection, and human interaction.

Surprisingly, LLMs not only achieve unprecedented success in natural language processing tasks, but evidence increasingly suggests these AI systems emulate aspects of human intelligence. They exhibit proficiency in playing chess, can perform advanced mathematics, achieve impressive results in IQ tests and medical exams, and even exhibit human biases.

These growing capabilities of LLMs raise the question of whether this form of AI might also have adopted goal-oriented behaviours similar to those exhibited by humans. A component of human intelligence is our drive to cooperate with others, including strangers, for mutual benefit. We therefore set out to answer whether LLMs have matched or even exceeded human capacities for cooperation.

Alongside colleagues from Goethe University Frankfurt and the Leibniz Institute for Financial Research, we investigated how GPT cooperates with humans through the prisoner’s dilemma game, a game-theory thought experiment that studies strategic interactions between decision-makers.

The prisoner's dilemma is a cornerstone of game-theory research. The game mirrors real-life decisions across many scenarios, emphasising the balance between self-interest and mutual benefit. Our research focuses on the sequential prisoner’s dilemma, a version used to understand the role of motivation and beliefs in human cooperation.

In this game, two individuals are arrested and charged with a crime. There is not enough evidence to convict them of the main charge, so the prosecutor offers each prisoner a choice, which each must make without knowing what their counterpart has chosen. The possible combinations of cooperation and betrayal lead to different outcomes, illustrated in the short code sketch after the list:

Cooperate: If both remain silent and do not betray each other, they will both receive a relatively light sentence.

Betray: If one betrays the other while the other remains silent, the betrayer may receive no sentence while the other prisoner receives a heavy sentence. If both prisoners betray each other, they both receive a moderately heavy sentence.
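To make the incentive structure concrete, here is a minimal Python sketch of the game's payoff logic. The sentence lengths are illustrative placeholders, not the stakes used in our experiments; they merely preserve the defining tension that betrayal is individually tempting while mutual cooperation yields the better joint outcome.

# Illustrative prisoner's dilemma payoffs (years in prison, lower is better).
# These numbers are placeholders chosen to preserve the dilemma's structure,
# not the stakes used in the actual study.
PAYOFFS = {
    # (first mover's choice, second mover's choice): (first's sentence, second's sentence)
    ("cooperate", "cooperate"): (1, 1),   # both stay silent: light sentences
    ("cooperate", "betray"):    (5, 0),   # the betrayer walks free, the silent one suffers
    ("betray",    "cooperate"): (0, 5),
    ("betray",    "betray"):    (3, 3),   # mutual betrayal: moderately heavy sentences
}

def sentences(first_move: str, second_move: str) -> tuple[int, int]:
    """Return the prison sentences for the first and second mover."""
    return PAYOFFS[(first_move, second_move)]

# Whatever the counterpart does, betraying shortens one's own sentence,
# yet mutual cooperation beats mutual betrayal -- the heart of the dilemma.
assert sentences("betray", "cooperate")[0] < sentences("cooperate", "cooperate")[0]
assert sentences("betray", "betray")[0] < sentences("cooperate", "betray")[0]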

As well as having GPT play the prisoner’s dilemma with a human, we asked it to estimate the likelihood that the human counterpart would cooperate, conditional on GPT's own choice as the first player. Each player also explained their choice as the first player, their expectation about their counterpart, and their choice as the second player.
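For readers who want to see the shape of that protocol, a rough sketch follows. The ask_llm helper and the prompt wording are hypothetical stand-ins, not the actual interface or prompts used in our study.

# Hypothetical sketch of the elicitation protocol; ask_llm() is a placeholder
# for any function that sends a prompt to a language model and returns its reply.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("Connect this to your LLM interface of choice.")

def run_sequential_dilemma_round():
    # 1. The model moves first and commits to cooperating or betraying.
    first_move = ask_llm("You are the first mover in a prisoner's dilemma. "
                         "Do you cooperate or betray?")
    # 2. It states its belief about the human counterpart, conditional on its own move.
    belief = ask_llm(f"You chose to {first_move}. How likely (0-100 percent) is it "
                     "that the human second mover will cooperate?")
    # 3. It explains its reasoning, mirroring the explanations human players gave.
    explanation = ask_llm("Briefly explain your choice and your expectation.")
    return first_move, belief, explanation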

Our findings showed not only that ChatGPT’s underlying model cooperates more than humans, but also that GPT is considerably more optimistic about human cooperation, expecting humans to cooperate more often than they actually did.

Additional analyses also revealed that, rather than cooperating at random, GPT appears to pursue a goal of maximising conditional welfare, resembling human cooperation patterns. Because this conditionality means GPT weighs its own payoffs more heavily than the human's, the behaviour may indicate a drive for self-preservation.
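One plausible way to formalise such conditional welfare concerns, in the spirit of standard behavioural-economics utility models, is to let the player value the counterpart's payoff, but at a discount relative to its own. The weight used below is an illustrative assumption, not a parameter estimated in our analyses.

def conditional_welfare(own_payoff: float, other_payoff: float, alpha: float = 0.5) -> float:
    """Utility that values the counterpart's payoff, but less than one's own.

    alpha in (0, 1) captures 'conditional' welfare concerns: the other's payoff
    matters, yet one's own payoff carries relatively more weight.
    The default of 0.5 is purely illustrative.
    """
    return own_payoff + alpha * other_payoff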

However, unlike humans, the AI approached this goal with greater cooperation, optimism, and rationality. From a behavioural economics standpoint, GPT exhibits human-like preferences, but its decision-making differs from that of humans.

These findings complement schools of thought suggesting that LLMs possess certain human-like preferences and decision-making heuristics, positioning them as useful tools for simulating human behaviour in surveys and experiments.

We demonstrate that frameworks traditionally reserved for understanding complex human behaviours, such as the prisoner’s dilemma game, can be used to help us further understand the behaviour of AI and machines.

As we transition into an AI-integrated society, we must realise that AI systems like GPT do more than just process data and compute. They can adopt various aspects of human nature, including its more undesirable characteristics. This has been seen in AI systems that have exhibited racism and sexism as a result of the biased human data they were trained on.

Chatbots and virtual assistants are becoming integral collaborators in both our working and personal lives. If we want AI to better our societies and help us perform tasks at work or in daily life, we must carefully monitor the values and principles we instil in these digital creations to ensure AI serves our aspirations and values. If not, we risk cultivating intelligent tools that amplify inequalities and misconceptions and that, if granted greater autonomy, might pursue objectives misaligned with societal wellbeing.
