
 
Philip Brookins and Jason DeBacker
 
"Playing games with GPT: What can we learn about a large language model from canonical strategic games?"
(2024, Vol. 44, No. 1)
 
 
We aim to understand fundamental preferences over fairness and cooperation embedded in artificial intelligence (AI). We do this by having a large language model (LLM), GPT-3.5, play two classic games: the dictator game and the prisoner's dilemma. We compare the decisions of the LLM to those of humans in laboratory experiments. We find that the LLM replicates human tendencies toward fairness and cooperation. It does not choose the payoff-maximizing strategy in most cases; rather, it shows a tendency toward fairness in the dictator game, even more so than human participants. In the prisoner's dilemma, the LLM displays rates of cooperation much higher than human participants (about 65% versus 37% for humans). These findings aid our understanding of the ethics and rationality embedded in AI.
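The strategic tension behind the cooperation result can be sketched with a standard prisoner's dilemma payoff matrix. The payoff values below are illustrative, not taken from the paper; any payoffs satisfying the ordering T > R > P > S produce the same logic, namely that defection is each player's dominant strategy even though mutual cooperation yields a higher joint payoff.

```python
# Hypothetical prisoner's dilemma payoffs (row player's payoff listed first).
# These specific numbers are illustrative; only the ordering T > R > P > S matters.
PAYOFFS = {
    ("C", "C"): (3, 3),  # R: reward for mutual cooperation
    ("C", "D"): (0, 5),  # S, T: sucker's payoff vs. temptation
    ("D", "C"): (5, 0),  # T, S
    ("D", "D"): (1, 1),  # P: punishment for mutual defection
}

def best_response(opponent_action: str) -> str:
    """Return the row player's payoff-maximizing action against a fixed opponent action."""
    return max("CD", key=lambda a: PAYOFFS[(a, opponent_action)][0])

# Defection is dominant: it is the best response to either opponent action,
# which is why cooperation rates above zero reflect social preferences
# rather than payoff maximization.
assert best_response("C") == "D"
assert best_response("D") == "D"
```

Under this framing, the roughly 65% cooperation rate reported for the LLM means it frequently forgoes the dominant action, much as human subjects do, only more often.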
 
 
Keywords: Large language models (LLMs), Generative Pre-trained Transformer (GPT), Experimental Economics, Game Theory, AI
JEL: C7 - Game Theory and Bargaining Theory
C9 - Design of Experiments: General
 
Manuscript received: September 27, 2023. Manuscript accepted: March 30, 2024.
