Games people — and machines — play: Untangling strategic reasoning to advance AI

Assistant Professor Gabriele Farina mines the foundations of decision-making in complex multi-agent scenarios.

Gabriele Farina grew up in a small town in a hilly winemaking region of northern Italy. Neither of his parents had college degrees, and although both were convinced they “didn’t understand math,” Farina says, they bought him the technical books he wanted and didn’t discourage him from attending the science-oriented, rather than the classical, high school.

By around age 14, Farina had focused on an idea that would prove foundational to his career.

“I was fascinated very early by the idea that a machine could make predictions or decisions so much better than humans,” he says. “The fact that human-made mathematics and algorithms could create systems that, in some sense, outperform their creators, all while building on simple building blocks, has always been a major source of awe for me.”

At age 16, Farina wrote code to solve a board game he played with his 13-year-old sister.

“I used it game after game to compute the optimal move and prove to my sister that she had already lost long before either of us could see it ourselves,” Farina says, adding that his sister was less enthralled with his new system.

Now an assistant professor in MIT’s Department of Electrical Engineering and Computer Science (EECS) and a principal investigator at the Laboratory for Information and Decision Systems (LIDS), Farina combines concepts from game theory with such tools as machine learning, optimization, and statistics to advance theoretical and algorithmic foundations for decision-making.

Enrolling at Politecnico di Milano for college, Farina studied automation and control engineering. Over time, however, he realized that what activated his interest was not “just applying known techniques, but understanding and extending their foundations,” he says. “I gradually shifted more and more toward theory, while still caring deeply about demonstrating concrete applications of that theory.”

Farina’s advisor at Politecnico di Milano, Nicola Gatti, professor and researcher in computer science and engineering, introduced Farina to research questions in computational game theory and encouraged him to apply for a PhD. At the time, being the first in his immediate family to earn a college degree and living in Italy, where doctoral programs are structured differently, Farina says he didn’t even know what a PhD was.

Nevertheless, one month after graduating with his undergraduate degree, Farina began a doctoral degree in computer science at Carnegie Mellon University. There, he won distinctions for his research and dissertation, as well as a Facebook Fellowship in Economics and Computation.

As he was finishing his doctorate, Farina worked for a year as a research scientist in Meta’s Fundamental AI Research (FAIR) labs. One of his major projects was helping to develop Cicero, an AI agent able to beat human players at Diplomacy, a strategy game that involves forming alliances, negotiating, and detecting when other players are bluffing.

Farina says, “When we built Cicero, we designed it so that it would not agree to form an alliance if it was not in its interest, and it likewise understood when a player was likely lying, because doing as they proposed would go against their own incentives.”

A 2022 article in MIT Technology Review said Cicero could represent an advance toward AI systems that can solve complex problems requiring compromise.

After his year at Meta, Farina joined the MIT faculty. In 2025, he received a National Science Foundation CAREER Award. His work builds on game theory, whose mathematical language describes what happens when different parties have different objectives and quantifies the “equilibrium” at which no one has a reason to change their strategy. He aims to make such equilibria computable in massive, complex real-world scenarios where a naive calculation could take a billion years.

“I research how we can use optimization and algorithms to actually find these stable points efficiently,” he says. “Our work tries to shed new light on the mathematical underpinnings of the theory, better control and predict these complex dynamical systems, and use these ideas to compute good solutions to large multi-agent interactions.”
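To make the idea concrete, one of the simplest dynamics for finding such stable points is regret matching (due to Hart and Mas-Colell), a building block of the no-regret methods used in modern equilibrium computation. The sketch below is purely illustrative and is not Farina’s code: it runs regret matching on a small made-up zero-sum game, where each player mixes over actions in proportion to accumulated positive regret. In two-player zero-sum games, the players’ time-averaged strategies approach a Nash equilibrium; for this payoff matrix, both players converge toward playing their first action with probability 2/5.

```python
# Illustrative regret matching on a 2x2 zero-sum game.
# The payoff matrix and function names are invented for this example.
PAYOFF = [[2, -1],   # row player's payoff; the column player gets the negation
          [-1, 1]]   # Nash equilibrium: both players mix (0.4, 0.6)

def strategy_from_regrets(regrets):
    """Mix actions in proportion to positive cumulative regret."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    n = len(regrets)
    return [p / total for p in pos] if total > 0 else [1.0 / n] * n

def regret_matching(payoff, iters=10_000):
    n = len(payoff)
    regrets = [[0.0] * n, [0.0] * n]     # cumulative regret per player/action
    strat_sum = [[0.0] * n, [0.0] * n]   # running sum of mixed strategies

    for _ in range(iters):
        s0 = strategy_from_regrets(regrets[0])
        s1 = strategy_from_regrets(regrets[1])
        for a in range(n):
            strat_sum[0][a] += s0[a]
            strat_sum[1][a] += s1[a]

        # Expected payoff to the row player under the current strategies.
        value = sum(s0[i] * payoff[i][j] * s1[j]
                    for i in range(n) for j in range(n))
        for a in range(n):
            # Regret = what action a would have earned minus what was earned.
            cf_row = sum(payoff[a][j] * s1[j] for j in range(n))
            cf_col = sum(-payoff[i][a] * s0[i] for i in range(n))
            regrets[0][a] += cf_row - value
            regrets[1][a] += cf_col - (-value)

    # Time-averaged strategies approach equilibrium in zero-sum games.
    return ([s / iters for s in strat_sum[0]],
            [s / iters for s in strat_sum[1]])
```

State-of-the-art systems build on far more sophisticated variants of this idea (counterfactual regret minimization and its successors) to handle the enormous, imperfect-information games described below, but the core loop of accumulating regret and re-mixing is the same.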

Farina is especially interested in settings with “imperfect information,” which means that some agents have information that is unknown to other participants. In such scenarios, information has value, and participants must be strategic about acting on the information they possess so as not to reveal it and reduce its value. An everyday example occurs in the game of poker, where players bluff in order to conceal information about their cards.

According to Farina, “we now live in a world in which machines are far better at bluffing than humans.”

A situation with “massive amounts of imperfect information” has brought Farina back to his board-game beginnings. Stratego is a military strategy game that has inspired research efforts costing millions of dollars to produce systems capable of beating human players. Requiring complex risk calculation and misdirection, or bluffing, it was possibly the only classical game for which major efforts had failed to produce superhuman performance, Farina says.

With new algorithms and training costing less than $10,000, rather than millions, Farina and his research team were able to beat the best player of all time, with 15 wins, four draws, and one loss. Farina says he is thrilled to have produced such results so economically, and he hopes “these new techniques will be incorporated into future pipelines.”

“We have seen constant progress towards constructing algorithms that can reason strategically and make sound decisions despite large action spaces or imperfect information. I am excited about seeing these algorithms incorporated into the broader AI revolution that’s happening around us.”