Vijay Subramanian awarded $7.5M MURI to rethink game theory in dynamic environments
The interactions of today’s world are increasingly complex, as humans regularly interface with semi- and fully-autonomous artificial intelligence (AI) systems. Michigan ECE is taking the lead on improving our understanding, and predicting the outcomes, of these interactions through a $7.5M, five-year Multidisciplinary University Research Initiative (MURI) grant called New Game Theory for New Agents: Foundations and Learning Algorithms for Decision-Making Mixed-Agents.
“There are lots of different agents that are interacting, including the usual players––humans––which could be big entities, like corporations, governments, or other institutions,” explained Vijay Subramanian, professor of Electrical and Computer Engineering and project director. “But in today’s world, we have these new AI agents as well. What we want to understand is: how do these computational agents interact?”
Game theory models how individuals strategize and make decisions, either collaboratively or competitively. Each player attempts to maximize their progress toward an individual or shared goal, using the information available to them. This information could include the rules of the game––such as in poker, Go, or trading in the stock market––as well as any knowledge about the other players’ goals or intentions. When none of the players can improve their outcomes by changing their decisions alone, the game has reached a state called an equilibrium (in the classic setting, a Nash equilibrium).
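In standard textbook notation (our gloss, not language from the project), that equilibrium condition can be stated formally: a strategy profile $(s_1^*, \ldots, s_n^*)$ is a Nash equilibrium if, for every player $i$ and every alternative strategy $s_i$,

$$u_i(s_i^*, s_{-i}^*) \;\ge\; u_i(s_i, s_{-i}^*),$$

where $u_i$ is player $i$’s payoff and $s_{-i}^*$ denotes the strategies of everyone else. In words: holding all other players fixed, no single player gains by deviating.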
Over several decades, researchers in economics, mathematics, computer science, engineering, and even biology have developed game theory to predict the outcomes and equilibria of various scenarios. Now, AI systems are overtaking humans in their ability to process huge amounts of data quickly, adding an element of the unknown to these assessments.
“The existing theory makes very stringent assumptions on the computing or reasoning capabilities of agents––and the AI agents that I mentioned need not have all of those,” said Subramanian. “Our goal is to transcend that and develop new theory that can address this mixture of autonomous, semi-autonomous, algorithmic, and human agents.”
If the research team can predict the outcomes of interactions that involve AI agents, they can design environments and projects that are carried out more efficiently and accurately.
One real-world example of a scenario that would benefit from this type of analysis is the rescue and cleanup operations in a disaster zone––say, after an earthquake or airstrike. In a modern disaster zone, humans may work together with robots to clear debris from the area and provide medical care to injured survivors.
“In this case, those robots are AI agents, but they get signals from and have to follow the humans. And the humans have to react to these agents as well,” Subramanian said. “It’s important to understand how such systems would perform and come up with an algorithm to get the system to achieve your goals.”
“You will have some agents that are more capable and some that are less capable,” he added. “Can the more capable agents direct the systems toward achieving their objectives more often?”
In addition to the complexities introduced by the presence of multiple types of agents, the players must anticipate or react to any environmental changes produced by their actions. For example, in the context of rescue and cleanup operations in a disaster zone, as the area is cleared, it may become easier for humans and robots to move around; conversely, falling debris could create further obstacles that restrict movement or change the number of available workers.
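To make the idea concrete, here is a minimal Python sketch of such a dynamic, two-agent interaction. Everything in it––the class name, the payoffs, the naive strategies––is an illustrative assumption of ours, not the MURI team’s model; it only shows how a joint action rewrites the environment the agents face in the next round.

class DebrisGame:
    """Toy disaster-zone game: clearing a cell changes the shared state."""

    def __init__(self, n_cells=5):
        self.debris = [True] * n_cells  # True = cell still blocked

    def step(self, human_cell, robot_cell):
        """Both agents act at once; their joint action rewrites the state."""
        reward = 0.0
        for cell in {human_cell, robot_cell}:
            if self.debris[cell]:
                self.debris[cell] = False
                reward += 1.0          # shared reward for newly cleared debris
        cost = 0.1 * sum(self.debris)  # movement is cheaper as the area clears
        return reward - cost, list(self.debris)

game = DebrisGame()
state = list(game.debris)
for t in range(3):
    blocked = [i for i, b in enumerate(state) if b]
    human_cell, robot_cell = blocked[0], blocked[-1]  # naive fixed strategies
    payoff, state = game.step(human_cell, robot_cell)
    print(f"round {t}: state {state}, joint payoff {payoff:.1f}")

Because each round’s payoff depends on the state left behind by earlier rounds, this is a (toy) dynamic game rather than a one-shot one.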
These types of complex scenarios have challenged existing game theory. Subramanian’s team aims to unite decades of work on game theory in dynamic settings with the mixed capabilities of today’s AI agents.
Other examples of modern multi-agent systems include combating poachers; assessing the likelihood of, and thereafter preventing, systemic failures in the financial system, like the Great Depression (1930s) and the Great Recession (late 2000s); and deploying fleets of automated cars.
“We are thinking of the methodology as being composed of three core components,” Subramanian said. “Agents have to form models of each other, the environment, and themselves. Based on that, we have to create algorithms that estimate those models and make decisions. And thereafter, we must understand what outcomes result. These three things together predict equilibria––their interplay will determine what happens in the game.”
These steps happen in a loop, helping the researchers predict the outcomes of their modeled scenarios. If the outcome doesn’t satisfy their goals, they can change the algorithms, the communication between agents, or the incentives to direct the result toward preferred configurations.
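As a schematic illustration of that loop, the sketch below pairs the three components with placeholder functions of our own (a fictitious-play-style belief update; not the project’s actual algorithms):

def model_others(history):
    """Step 1: estimate a model of the other agent from past play
    (here: just the empirical frequency of action 1)."""
    return sum(history) / len(history) if history else 0.5

def decide(belief):
    """Step 2: choose an action given the current model (a best response)."""
    return 1 if belief > 0.5 else 0

def evaluate(my_action, their_action):
    """Step 3: score the joint outcome (a simple coordination payoff)."""
    return 1.0 if my_action == their_action else 0.0

history = []                   # observed actions of the other agent
their_play = [1, 1, 0, 1, 1]   # hypothetical opponent behavior
for their_action in their_play:
    belief = model_others(history)              # model
    my_action = decide(belief)                  # decide
    payoff = evaluate(my_action, their_action)  # evaluate
    history.append(their_action)
    print(f"belief={belief:.2f} action={my_action} payoff={payoff}")

Here the “model” is only an empirical frequency; the MURI aims to replace such stand-ins with models rich enough to capture agents of very different reasoning capabilities.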
The research will be conducted with MURI collaborators Dirk Bergemann (Yale University), Avrim Blum (Toyota Technological Institute at Chicago), Rahul Jain (University of Southern California), Elchanan Mossel (Massachusetts Institute of Technology), Milind Tambe (Harvard University), Omer Tamuz (California Institute of Technology), and Éva Tardos (Cornell University).