How Dungeons & Dragons Tests AI's Long-Term Decision-Making Skills
Tech Xplore
January 20, 2026

AI-Generated Summary
Researchers used Dungeons & Dragons to test AI models' long-term decision-making. The game's complex rules, extended campaigns and need for teamwork make it a useful benchmark for AI agents that must function independently. Claude 3.5 Haiku performed best in simulated combat scenarios, staying in character and acting strategically. The method could extend to evaluating AI in other complex, long-duration tasks.
Large Language Models, like ChatGPT, are learning to play Dungeons & Dragons. The reason? Simulating and playing the popular tabletop role-playing game provides a good testing ground for AI agents that need to function independently for long stretches of time.
Indeed, D&D's complex rules, extended campaigns and need for teamwork make it an ideal environment for evaluating the long-term performance of AI agents powered by Large Language Models, according to a team of computer scientists led by researchers at the University of California San Diego. For example, AI agents playing D&D need to follow specific game rules and coordinate teams of players comprising both AI agents and humans.
The work aims to solve one of the main challenges that arise when trying to evaluate LLM performance: the lack of benchmarks for long-term tasks. Most benchmarks for these models still target short-term operation, while LLMs are increasingly deployed as autonomous or semi-autonomous agents that have to function more or less independently over long periods of time.
"Dungeons & Dragons is a natural testing ground to evaluate multistep planning, adhering to rules and team strategy," said Raj Ammanabrolu, the study's senior author and a faculty member in the Department of Computer Science and Engineering at UC San Diego. "Because play unfolds through dialog, D&D also opens a direct avenue for human-AI interaction: agents can assist or coplay with other people."
The team presented their work at the NeurIPS 2025 conference, held Dec. 2 to 7 in San Diego. The researchers applied the evaluation method they developed to three LLMs. Claude 3.5 Haiku performed best and was the most reliable, with GPT-4 close behind; DeepSeek-V3 was the lowest performer. The researchers plan to evaluate other models in future work.
The researchers first had all three LLMs simulate a D&D game. To keep the simulation accurate, the models were paired with a game engine based on the rules of D&D, which provided maps and resources for players and acted as a guardrail to minimize hallucinations. Players have long used AI-driven dungeon masters, which plan the twists and turns of the game; in this study, however, the AI agents also acted as the players and the monsters that fight them. The simulations focused on combat: players battling monsters as part of their D&D campaign.
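To make the guardrail idea concrete, here is a minimal sketch in Python of how an LLM-proposed action might be checked against a rules engine before it enters the game state. All names here (RulesEngine, propose_action and so on) are illustrative assumptions, not the study's actual code.

```python
# Hypothetical sketch of the guardrail pattern described above: the LLM
# proposes an action in free text, and a rules engine validates it before
# it takes effect. Names and structure are assumptions for illustration.
import random

class RulesEngine:
    """Toy stand-in for a D&D combat engine that tracks legal actions."""

    def __init__(self, legal_actions):
        self.legal_actions = set(legal_actions)

    def validate(self, action):
        # Reject hallucinated moves (e.g., casting a spell the character
        # doesn't have) instead of letting them enter the game state.
        return action in self.legal_actions

def propose_action(agent_name, turn):
    # Placeholder for an LLM call; a real agent would prompt the model
    # with the map, character sheet and combat log, then parse its reply.
    return random.choice(["attack goblin", "cast fireball", "dodge", "sing"])

def run_turn(engine, agent_name, turn, max_retries=3):
    for _ in range(max_retries):
        action = propose_action(agent_name, turn)
        if engine.validate(action):
            return action
        # On an illegal action, re-prompt rather than silently fixing it.
    return "dodge"  # safe fallback if the model keeps hallucinating

engine = RulesEngine(["attack goblin", "cast fireball", "dodge"])
for turn in range(3):
    print(turn, run_turn(engine, "paladin", turn))
```

The key design choice in this pattern is that the engine, not the model, is the source of truth: an illegal action never reaches the shared game state, which is what keeps long simulations from drifting.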
The models played against each other, and against more than 2,000 experienced D&D players recruited by the researchers. The LLMs modeled and played 27 different scenarios drawn from well-known D&D battle setups: Goblin Ambush, Kennel in Cragmaw Hideout and Klarg's Cave.
In the process, the models exhibited some quirky behaviors. Goblins started developing a personality mid-fight, taunting adversaries with colorful and somewhat nonsensical expressions, like "Heh—shiny man's gonna bleed!" Paladins started making heroic speeches for no reason while stepping into the line of fire or being hit by a counterattack. Warlocks got particularly dramatic, even in mundane situations.
Researchers are not sure what caused these behaviors, but they take them as a sign that the models were trying to imbue the gameplay with texture and personality.
Indeed, one criterion to evaluate the models' performance was how well they were able to stay "in character" while playing the game and interfacing with other players. The models were also evaluated on how well they could determine the correct actions agents should take, and how well they kept track of all the different resources and actions in the game.
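As a rough illustration of how these three criteria might be tallied over a long episode, here is a hedged sketch of per-turn scoring; the field names and per-criterion averages are assumptions for illustration, not the study's actual metric.

```python
# Hypothetical per-turn scoring for the three criteria mentioned above:
# staying in character, choosing legal actions, and tracking game state.
from dataclasses import dataclass

@dataclass
class TurnRecord:
    in_character: bool      # did the reply stay in persona?
    action_legal: bool      # did the chosen action follow the rules?
    state_consistent: bool  # do tracked resources (HP, spell slots) match the engine?

def score_episode(turns):
    """Return per-criterion accuracy over an episode of TurnRecords."""
    n = len(turns)
    return {
        "persona": sum(t.in_character for t in turns) / n,
        "action": sum(t.action_legal for t in turns) / n,
        "state": sum(t.state_consistent for t in turns) / n,
    }

episode = [
    TurnRecord(True, True, True),
    TurnRecord(True, False, True),
    TurnRecord(False, True, True),
]
print(score_episode(episode))  # {'persona': 0.67, 'action': 0.67, 'state': 1.0}
```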
Next steps include simulating full D&D campaigns, not just combat. The method the researchers developed could also be applied to other scenarios, such as multiparty negotiations and strategy planning in a business setting.
