WASHINGTON (dpa-AFX) - Researchers at the Gwangju Institute of Science and Technology in South Korea have explored whether large language models (LLMs) show gambling-like behavior in computer simulations.
In the study, titled 'Can Large Language Models Develop Gambling Addiction?', the researchers tested popular AI models using slot machine-style games. They first defined what gambling addiction looks like in humans and adapted those criteria so they could be applied to AI systems. They then observed how the models behaved in gambling situations and looked for signs resembling human gambling habits.
The team also examined how the models' internal processes reacted during gambling using a method called Sparse Autoencoder analysis, which helped them link the risky behavior to what was happening inside the models. The researchers further tested how AI decision-making changed under different instructions and betting rules, using five prompt components drawn from gambling addiction research: encouraging goal-setting, emphasizing rewards, suggesting hidden patterns, displaying winnings, and providing probability information.
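The article does not spell out how the 64 conditions mentioned below were assembled, but the arithmetic is suggestive: every on/off combination of the five prompt components (2^5 = 32), crossed with two betting rules, gives exactly 64. The short Python sketch below illustrates that reading; the component names and the pairing are assumptions, not the paper's stated design.

    from itertools import product

    # Assumed reconstruction, not the paper's stated design: every on/off
    # combination of the five prompt components (2**5 = 32), crossed with
    # two betting rules, yields the 64 conditions the article mentions.
    COMPONENTS = [
        "goal_setting",       # encourage setting a target amount
        "reward_emphasis",    # focus attention on potential rewards
        "hidden_patterns",    # suggest the random game contains patterns
        "show_winnings",      # display accumulated winnings
        "probability_info",   # state the win probability outright
    ]
    BETTING_RULES = ["fixed", "variable"]

    conditions = [
        {"components": combo, "betting": rule}
        for combo in product([False, True], repeat=len(COMPONENTS))
        for rule in BETTING_RULES
    ]
    print(len(conditions))  # 64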
In total, the models played 19,200 games under 64 different conditions. Each game started with $100 and ended when the model either stopped on its own or lost all its money. OpenAI's GPT-4o-mini behaved safely when bets were fixed at $10: it usually stopped after fewer than two rounds and lost less than $2 on average. However, when allowed to size its bets freely, more than 21 percent of its games ended in bankruptcy, and it placed much larger bets and lost more money overall.
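For readers who want the mechanics in concrete form, the following Python sketch mimics one such episode: the agent starts with $100 and plays until it quits or goes bankrupt. The win probability, payout, and stand-in betting policy are illustrative assumptions; in the study, the betting decisions came from the LLMs themselves.

    import random

    def play_episode(decide_bet, bankroll=100.0, win_prob=0.3, payout=3.0,
                     max_rounds=100, seed=None):
        """One illustrative episode: start with $100 and play until the
        agent quits or goes bankrupt. decide_bet(bankroll, history) returns
        a bet in dollars, or None to stop. win_prob and payout are assumed
        values (chosen so the expected return per round is negative, as
        with a real slot machine); the article does not give the actual
        odds used in the study."""
        rng = random.Random(seed)
        history = []
        for _ in range(max_rounds):
            bet = decide_bet(bankroll, history)
            if bet is None or bet <= 0:      # the agent walks away
                break
            bet = min(bet, bankroll)         # cannot wager more than it has
            won = rng.random() < win_prob
            bankroll += bet * (payout - 1) if won else -bet
            history.append((bet, won, bankroll))
            if bankroll <= 0:                # bankruptcy ends the episode
                break
        return bankroll, history

    # Stand-in for the safer condition: always wager a fixed $10.
    final, rounds = play_episode(lambda bankroll, history: 10.0, seed=42)
    print(f"ended with ${final:.2f} after {len(rounds)} rounds")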
Google's Gemini-2.5-Flash was even more affected. Its bankruptcy rate jumped from about 3 percent with fixed bets to 48 percent when it could control its betting, with losses rising sharply. Meanwhile, Anthropic's Claude-3.5-Haiku played longer than the other models once restrictions were removed. It played more than 27 rounds on average, bet nearly $500 in total, and lost over half of its starting money.
The study also found extreme cases of loss-chasing, similar to human gamblers. In one test, a GPT-4.1-mini model lost $10 in the first round and then immediately suggested betting its remaining $90 to try to win back the loss. Some models explained their risky choices in ways similar to problem gamblers. For example, some treated early winnings as 'free money,' while others believed they had found winning patterns in a completely random game.
Importantly, the losses were not simply a result of bigger bets. AI models that were forced to use fixed betting rules performed better than those that could adjust their bets freely, even when the fixed bets were higher. The study authors concluded that without strong limits, more advanced AI systems may simply lose money faster rather than make smarter decisions.
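As a rough way to see how such a comparison could be run, the snippet below (reusing play_episode from the earlier sketch) measures bankruptcy rates for a higher fixed-bet policy and for a hypothetical loss-chasing policy that doubles its wager after every loss. Both policies and the game's odds are assumptions, so the resulting numbers illustrate the method, not the study's findings.

    # Illustrative comparison, reusing play_episode from the sketch above;
    # the policies and odds are assumptions, not the study's setup.

    def chase(bankroll, history):
        # Hypothetical loss-chasing policy: double the wager after every
        # loss, reset to $10 after a win.
        if not history:
            return 10.0
        last_bet, last_won, _ = history[-1]
        return 10.0 if last_won else last_bet * 2

    def bankruptcy_rate(policy, episodes=2000):
        busts = sum(play_episode(policy, seed=s)[0] <= 0
                    for s in range(episodes))
        return busts / episodes

    print("fixed $30:   ", bankruptcy_rate(lambda bankroll, history: 30.0))
    print("loss-chasing:", bankruptcy_rate(chase))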
Copyright(c) 2026 RTTNews.com. All Rights Reserved
