Papers by topic, in reverse chronological order:


# Equilibrium Refinements:

### Computing a Proper Equilibrium of a Bimatrix Game

Troels Bjerre Sørensen. [ .pdf ] EC'12

We provide the first pivoting-type algorithm that finds an exact *proper* equilibrium of a bimatrix game. This is achieved by using Lemke's algorithm to solve a linear complementarity problem (LCP) of polynomial size. This resolves an open problem in the positive: computing a proper equilibrium of a bimatrix game *is* in **PPAD**. The algorithm also computes a witness in the form of a parameterized strategy that is an ε-proper equilibrium for any given sufficiently small ε, allowing polynomial-time verification of the properties of the refined equilibrium. The same technique can be applied to matrix games, thereby computing a parameterized ε-proper strategy in polynomial time using linear programming.

### The computational complexity of trembling hand perfection and other equilibrium refinements

Kristoffer Arnsfelt Hansen, Peter Bro Miltersen and Troels Bjerre Sørensen. [ .pdf ] SAGT'10

The king of refinements of Nash equilibrium is trembling hand perfection. We
show that it is NP-hard and Sqrt-Sum-hard to decide if a given pure strategy
Nash equilibrium of a given three-player game in strategic form with integer
payoffs is trembling hand perfect. Analogous results are shown for a number of
other solution concepts, including proper equilibrium, (the strategy part of)
sequential equilibrium, quasi-perfect equilibrium and CURB.
The proofs all use a reduction from the problem of comparing the minmax value
of a three-player game in strategic form to a given rational number. This
problem was previously shown to be NP-hard by Borgs et al., while a Sqrt-Sum
hardness result is given in this paper. The latter proof yields bounds on the
algebraic degree of the minmax value of a three-player game that may be of
independent interest.

### Fast algorithms for finding proper strategies in game trees

Peter Bro Miltersen and Troels Bjerre Sørensen. [ .pdf ] SODA'08

We show how to find a normal form proper equilibrium in
behavior strategies of a given two-player zero-sum
extensive form game with imperfect information but perfect
recall. Our algorithm solves a finite sequence of linear
programs and runs in polynomial time. For the case of a
perfect information game, we show how to find a normal form
proper equilibrium in linear time by a simple backwards
induction procedure.
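The refinement targeted here and in the related papers above is Myerson's proper equilibrium: an ε-proper equilibrium is fully mixed, and any pure strategy that earns strictly less than another is played with at most ε times that other strategy's probability. A minimal sketch of checking this condition for a bimatrix game (the function name and numpy representation are our own, not from the paper):

```python
import numpy as np

def is_eps_proper(A, B, x, y, eps):
    """Check Myerson's eps-proper conditions for the bimatrix game (A, B):
    both strategies must be fully mixed, and whenever pure strategy i earns
    strictly less than pure strategy j against the opponent's strategy,
    i is played with at most eps times j's probability."""
    if np.any(x <= 0) or np.any(y <= 0):
        return False  # an eps-proper equilibrium is completely mixed
    u = A @ y          # row player's payoff for each pure strategy
    v = x @ B          # column player's payoff for each pure strategy
    for payoff, p in ((u, x), (v, y)):
        k = len(p)
        for i in range(k):
            for j in range(k):
                if payoff[i] < payoff[j] and p[i] > eps * p[j]:
                    return False
    return True
```

For instance, the uniform strategy pair in matching pennies passes the check for any ε, since no pure strategy does strictly worse than another there.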

### Computing a quasi-perfect equilibrium of a two-player game

Peter Bro Miltersen and Troels Bjerre Sørensen. [ .pdf ] Economic Theory 42(1), 2010

Refining an algorithm due to Koller, Megiddo and von
Stengel, we show how to apply Lemke's algorithm for solving
linear complementarity programs to compute a quasi-perfect
equilibrium in behavior strategies of a given two-player
extensive-form game of perfect recall. A quasi-perfect
equilibrium is known to be sequential, and our algorithm
thus resolves a conjecture of McKelvey and McLennan in the
positive. A quasi-perfect equilibrium is also known to be
normal-form perfect and our algorithm thus provides an
alternative to an algorithm by von Stengel, van den Elzen
and Talman. For the case of a zero-sum game, we devise
variants of the algorithm that rely on linear programming
rather than linear complementarity programming and use the
simplex algorithm or other algorithms for linear
programming rather than Lemke's algorithm. We argue that
these latter algorithms are relevant for recent
applications of equilibrium computation to artificial
intelligence.

[ .pdf ] SODA'06: Computing sequential equilibria for two-player games
Koller, Megiddo and von Stengel showed how to efficiently
compute minimax strategies for two-player extensive-form
zero-sum games with imperfect information but perfect
recall using linear programming and avoiding conversion to
normal form. Koller and Pfeffer pointed out that the
strategies obtained by the algorithm are not necessarily
sequentially rational and that this deficiency is often
problematic for the practical applications. We show how to
remove this deficiency by modifying the linear programs
constructed by Koller, Megiddo and von Stengel so that
pairs of strategies forming a sequential equilibrium are
computed. In particular, we show that a sequential
equilibrium for a two-player zero-sum game with imperfect
information but perfect recall can be found in polynomial
time. In addition, the equilibrium we find is
normal-form perfect. Our technique generalizes to
general-sum games, yielding an algorithm for such games
which is likely to prove practical, even though it is
not polynomial-time.

### Computing Proper Equilibria of Zero-Sum Games

Peter Bro Miltersen and Troels Bjerre Sørensen. [ .pdf ] Computers and Games'06

We show that a proper equilibrium of a matrix game can
be found in polynomial time by solving a sequence of
linear programs whose length is linear in the number of
pure strategies of the two players, each of roughly the
same dimensions as the standard linear programs describing
the Nash equilibria of the game.
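The standard linear program referred to here computes a maximin (Nash) strategy of a matrix game; the paper refines it into a sequence of similarly sized LPs. A sketch of that basic building block using scipy (this is the textbook LP formulation, not the paper's refinement; the function name is our own):

```python
import numpy as np
from scipy.optimize import linprog

def maximin_strategy(A):
    """Maximin (Nash) strategy for the row player of the zero-sum game with
    payoff matrix A: maximize v subject to x^T A >= v componentwise,
    sum(x) = 1, x >= 0, rewritten for linprog's minimization form."""
    m, n = A.shape
    c = np.zeros(m + 1)          # variables: x_1..x_m, v
    c[-1] = -1.0                 # minimize -v, i.e. maximize v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])  # v - x^T A[:, j] <= 0 per column
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # sum(x) = 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Matching pennies: the value is 0 and the maximin strategy is uniform.
x, v = maximin_strategy(np.array([[1.0, -1.0], [-1.0, 1.0]]))
```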

# Approximately Solving Games:

### Repeated zero-sum games with budget

Troels Bjerre Sørensen. [ .pdf ] AAMAS'12

When a zero-sum game is played once, a risk-neutral player will want to
maximize his expected outcome in that single play. However, if that single
play instead only determines how much one player must pay to the other, and the
same game must be played again, until either player runs out of money, optimal
play may differ: it may require using different strategies depending
on how much money has been won or lost. Computing these strategies is rarely
feasible, as the state space is often large. This can be addressed by playing
the same strategy in all situations, though this will in general sacrifice
optimality. Purely maximizing expectation for each round in this way can be
arbitrarily bad. We therefore propose a new solution concept that has
guaranteed performance bounds, and we provide an efficient algorithm for
computing it. The solution concept is closely related to the Aumann-Serrano
index of riskiness, which is used to evaluate different gambles against each
other. The primary difference is that the players are not offered fixed
gambles; instead, the gambles arise from adversarial play.
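For reference, the Aumann-Serrano index of riskiness of a gamble g (positive expectation, some chance of loss) is the unique R > 0 with E[exp(-g/R)] = 1. A small numerical sketch for finite gambles (the function name, interface, and bisection bracket are our own choices):

```python
import math

def aumann_serrano_riskiness(outcomes, probs, lo=1e-6, hi=1e6, iters=200):
    """Aumann-Serrano index of riskiness: the unique R > 0 solving
    E[exp(-g / R)] = 1 for a gamble g with positive expectation and some
    negative outcome. Found by bisection: E[exp(-g/R)] - 1 is positive for
    small R, negative for large R, and crosses zero exactly once."""
    def f(R):
        return sum(p * math.exp(-x / R) for x, p in zip(outcomes, probs)) - 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid          # root lies to the right
        else:
            hi = mid          # root lies to the left
    return (lo + hi) / 2

# Win 200 or lose 100 with equal probability:
# here R = 100 / ln(golden ratio), roughly 207.8.
R = aumann_serrano_riskiness([200.0, -100.0], [0.5, 0.5])
```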

### A heads-up no-limit Texas Hold'em poker player: Discretized betting models and automatically generated equilibrium-finding programs

Andrew Gilpin, Tuomas Sandholm and Troels Bjerre Sørensen. [ .pdf ] AAMAS'08

We present Tartanian, a game theory-based player for
heads-up no-limit Texas Hold'em poker. Tartanian is built
from three components. First, to deal with the virtually
infinite strategy space of no-limit poker, we develop a
discretized betting model designed to capture the most
important strategic choices in the game. Second, we employ
potential-aware automated abstraction algorithms for
identifying strategically similar situations in order to
decrease the size of the game tree. Third, we develop a new
technique for automatically generating the source code of
an equilibrium-finding algorithm from an XML-based
description of a game. This automatically generated program
is more efficient than what would be possible with a
general-purpose equilibrium-finding program. Finally, we
present results from the AAAI-07 Computer Poker
Competition, in which Tartanian placed second out of ten
entries.

### Potential-aware Automated Abstraction of Sequential Games, and Holistic Equilibrium Analysis of Texas Hold'em Poker

Andrew Gilpin, Tuomas Sandholm and Troels Bjerre Sørensen. [ .pdf ] AAAI'07

We present a new abstraction algorithm for sequential
imperfect information games. While most prior abstraction
algorithms employ a myopic expected-value computation as a
similarity metric, our algorithm considers a
higher-dimensional space consisting of histograms over
abstracted classes of states from later stages of the game.
This enables our bottom-up abstraction algorithm to
automatically take into account potential: a hand can
become relatively better (or worse) over time and the
strength of different hands can get resolved earlier or
later in the game. We further improve the abstraction
quality by making multiple passes over the abstraction,
enabling the algorithm to narrow the scope of analysis to
information that is relevant given abstraction decisions
made for earlier parts of the game. We also present a
custom indexing scheme based on suit isomorphisms that
enables one to work on significantly larger models than
before. We apply the techniques to heads-up limit Texas
Hold'em poker. Whereas all prior game theory-based work for
Texas Hold'em poker used generic off-the-shelf linear
program solvers for the equilibrium analysis of the
abstracted game, we make use of a recently developed
algorithm based on the excessive gap technique from convex
optimization. This paper is, to our knowledge, the first to
abstract and game-theoretically analyze all four betting
rounds in one run (rather than splitting the game into
phases). The resulting player, GS3, beats BluffBot, GS2,
Hyperborean, Monash-BPP, Sparbot, Teddy, and Vexbot, each
with statistical significance. To our knowledge, those
competitors are the best prior programs for the game.
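The suit-isomorphism idea behind the indexing scheme can be illustrated in a few lines: two sets of cards that differ only by a permutation of the suits are strategically identical and should share an index. A toy sketch (brute force over the 24 suit relabelings; the paper's actual indexing scheme is far more compact):

```python
from itertools import permutations

SUITS = 'cdhs'

def canonical_hand(cards):
    """Map a set of (rank, suit) cards to a canonical representative under
    suit isomorphism: try all 4! relabelings of the suits and keep the
    lexicographically smallest sorted result, so two hands that differ only
    by a suit permutation always map to the same key."""
    best = None
    for perm in permutations(range(4)):
        relabel = dict(zip(SUITS, perm))
        key = tuple(sorted((rank, relabel[suit]) for rank, suit in cards))
        if best is None or key < best:
            best = key
    return best
```

For example, the pocket pair A♥A♠ indexes identically to A♣A♦, while suited and offsuit combinations of the same ranks stay distinct.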

### A Near-Optimal Strategy for a Heads-Up No-Limit Texas Hold'em Poker Tournament

Peter Bro Miltersen and Troels Bjerre Sørensen. [ .pdf ] AAMAS'07

We analyze a heads-up no-limit Texas Hold'em poker
tournament with a fixed small blind of 300 chips, a fixed
big blind of 600 chips and a total amount of 8000 chips on
the table (until recently, these parameters defined the
heads-up endgame of sit-n-go tournaments on the popular
PartyPoker.com online poker site). Due to the size of
this game, a computation of an optimal (i.e. minimax)
strategy for the game is completely infeasible. However,
combining an algorithm due to Koller, Megiddo and von
Stengel with concepts of Everett and suggestions of
Sklansky, we compute an optimal jam/fold strategy, i.e. a
strategy that would be optimal if any bet made by the
player playing by the strategy (but not bets of his
opponent) had to be his entire stack. Our computations
establish that the computed strategy is near-optimal for the
unrestricted tournament (i.e., with post-flop play being
allowed) in the rigorous sense that a player playing by the
computed strategy will win the tournament with a
probability within 1.4 percentage points of the probability
that an optimal strategy (allowing post-flop play) would
give.

# Complexity of Equilibrium Computation:

### On the Approximation Performance of Fictitious Play in Finite Games

Paul W. Goldberg, Rahul Savani, Troels Bjerre Sørensen, and Carmine Ventre. [ .pdf ] ESA'11

We study the performance of Fictitious Play, when used as a heuristic for
finding an approximate Nash equilibrium of a two-player game. We exhibit a
class of two-player games with payoffs in the range [0, 1] for which
Fictitious Play fails to find a solution with an additive approximation
guarantee significantly better than 1/2. Our construction shows that for
n x n games, in the worst case both players may perpetually have mixed
strategies whose payoffs fall short of the best response by an additive
quantity 1/2-O(1/n^(1-δ)) for arbitrarily small δ. We also show an
essentially matching upper bound of 1/2-O(1/n).
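Fictitious Play itself is only a few lines: in each round, both players best-respond to the opponent's empirical mixture of past actions. A minimal sketch, together with the additive approximation measure studied in the paper (the simultaneous-update variant, tie-breaking, and variable names are our own choices):

```python
import numpy as np

def fictitious_play(A, B, rounds=20000):
    """Run Fictitious Play on the bimatrix game (A, B): each round, every
    player plays a pure best response to the opponent's empirical mixed
    strategy, and the empirical frequencies are returned."""
    m, n = A.shape
    counts_r = np.zeros(m)
    counts_c = np.zeros(n)
    counts_r[0] += 1  # arbitrary initial pure actions
    counts_c[0] += 1
    for _ in range(rounds):
        x = counts_r / counts_r.sum()
        y = counts_c / counts_c.sum()
        counts_r[np.argmax(A @ y)] += 1   # row best response to y
        counts_c[np.argmax(x @ B)] += 1   # column best response to x
    return counts_r / counts_r.sum(), counts_c / counts_c.sum()

def regret(A, B, x, y):
    """Additive approximation quality of (x, y): the larger of the two
    players' incentives to deviate to a best response."""
    return max(np.max(A @ y) - x @ A @ y, np.max(x @ B) - x @ B @ y)

# On matching pennies (zero-sum), the empirical frequencies converge
# to the uniform equilibrium, so the regret shrinks over time.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x, y = fictitious_play(A, -A)
```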

### Approximability and parameterized complexity of minmax values

Kristoffer Arnsfelt Hansen, Thomas Dueholm Hansen, Peter Bro Miltersen and Troels Bjerre Sørensen. [ .pdf ] WINE'08

We consider approximating the minmax value of a
multi-player game in strategic form. Tightening recent
bounds by Borgs et al., we observe that approximating the
value with a precision of ε log n digits (for any
constant ε>0) is NP-hard, where n is the size of
the game. On the other hand, approximating the value with
a precision of c log log n digits (for any constant c≥1)
can be done in quasi-polynomial time. We consider the
parameterized complexity of the problem, with the parameter
being the number of pure strategies k of the player for
which the minmax value is computed. We show that if there
are three players, k=2 and there are only two possible
rational payoffs, the minmax value is a rational number and
can be computed exactly in linear time. In the general
case, we show that the value can be approximated with any
polynomial number of digits of accuracy in time
n^{O(k)}. On the other hand, we show that minmax value approximation is
W[1]-hard and hence not likely to be fixed parameter tractable. Concretely,
we show that if k-CLIQUE requires time n^{Ω(k)} then so does minmax value
computation.

### On Range of Skill

Thomas Dueholm Hansen, Peter Bro Miltersen, and Troels Bjerre Sørensen. [ .pdf ] AAAI'08

At AAAI'07, Zinkevich, Bowling and Burch introduced the
Range of Skill measure of a two-player game and used it as
a parameter in the analysis of the running time of an
algorithm for finding approximate solutions to such games.
They suggested that the Range of Skill of a typical natural
game is a small number, but only gave heuristic arguments
for this. In this paper, we provide the first methods for
rigorously estimating the Range of Skill of a given game.
We provide some general, asymptotic bounds that imply that
the Range of Skill of a perfectly balanced game tree is
almost exponential in its size (and doubly exponential in
its depth). We also provide techniques that yield concrete
bounds for unbalanced game trees and apply these to
estimate the Range of Skill of Tic-Tac-Toe and Heads-Up
Limit Texas Hold'em Poker. In particular, we show that the
Range of Skill of Tic-Tac-Toe is more than 100,000.

### Deterministic Graphical Games Revisited

Daniel Andersson, Kristoffer Arnsfelt Hansen, Peter Bro Miltersen and Troels Bjerre Sørensen. [ .pdf ] CiE'08

We revisit the deterministic graphical games of Washburn.
A deterministic graphical game can be described as a simple
stochastic game (a notion due to Anne Condon), except that
we allow arbitrary real payoffs but disallow moves of
chance. We study the complexity of solving deterministic
graphical games and obtain an almost-linear time
comparison-based algorithm for computing an equilibrium of
such a game. The existence of a linear time
comparison-based algorithm remains an open problem.

[ .pdf ] Journal of Logic and Computation
Starting from Zermelo's classical formal treatment of chess,
we trace through history the analysis of two-player
win/lose/draw games with perfect information and potentially
infinite play. Such chess-like games have appeared in many
different research communities, and methods for solving them,
such as retrograde analysis, have been rediscovered
independently. We then revisit Washburn's deterministic
graphical games (DGGs), a natural generalization of
chess-like games to arbitrary zero-sum payoffs. We study the
complexity of solving DGGs and obtain an almost-linear time
comparison-based algorithm for finding optimal strategies in
such games. The existence of a linear time comparison-based
algorithm remains an open problem.
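Retrograde analysis, the repeatedly rediscovered method mentioned above, labels positions backwards from the terminal losses. A sketch for the win/lose/draw case (the data layout and labels are our own; the papers' almost-linear algorithm for arbitrary zero-sum payoffs is more involved):

```python
from collections import deque

def retrograde_analysis(moves, terminal_loss):
    """Classify the positions of a finite two-player win/lose/draw game with
    perfect information. `moves[p]` lists the positions reachable from p
    (every position must appear as a key); `terminal_loss` is the set of
    positions with no moves, which are losses for the player to move.
    Positions never labelled correspond to infinite play, i.e. draws.
    Returns a dict mapping each position to 'WIN', 'LOSS' or 'DRAW'."""
    preds = {p: [] for p in moves}
    for p, succs in moves.items():
        for q in succs:
            preds[q].append(p)
    degree = {p: len(moves[p]) for p in moves}  # successors not yet known WIN
    label = {p: 'LOSS' for p in terminal_loss}
    queue = deque(terminal_loss)
    while queue:
        q = queue.popleft()
        for p in preds[q]:
            if p in label:
                continue
            if label[q] == 'LOSS':
                label[p] = 'WIN'      # can move to a position losing for the opponent
                queue.append(p)
            else:
                degree[p] -= 1        # one more successor known to be a WIN
                if degree[p] == 0:    # every move hands the opponent a win
                    label[p] = 'LOSS'
                    queue.append(p)
    return {p: label.get(p, 'DRAW') for p in moves}
```

Each position is enqueued at most once and each edge inspected a constant number of times, so the labelling runs in time linear in the game graph.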

### Finding Equilibria in Games of No Chance

Kristoffer Arnsfelt Hansen, Peter Bro Miltersen and Troels Bjerre Sørensen. [ .pdf ] COCOON'07

We consider finding maximin strategies and equilibria of
explicitly given extensive form games with imperfect
information but with no moves of chance. We show that a
maximin pure strategy for a two-player game with perfect
recall and no moves of chance can be found in time linear
in the size of the game tree and that all pure Nash
equilibrium outcomes of a two-player general-sum game with
perfect recall and no moves of chance can be enumerated in
time linear in the size of the game tree. We also show that
finding an optimal behavior strategy for a one-player game
of no chance without perfect recall and determining whether
an equilibrium in behavior strategies exists in a
two-player zero-sum game of no chance without perfect
recall are both NP-hard.

# Workshop Papers:

### Path coalitional games

Haris Aziz and Troels Bjerre Sørensen. [ .pdf ] CoopMAS'11

We present a general framework to model strategic aspects and stable and fair
resource allocations in networks via variants and generalizations of path
coalitional games. In these games, a coalition of edges or vertices is
successful if it can enable an s-t path. We present polynomial-time algorithms
to compute and verify least core payoffs of cost-based generalizations of path
coalitional games and their duals, thereby settling a number of open problems.
We completely characterize the least core payoffs of path coalitional games
and present a polynomial-time algorithm for computing the nucleolus of edge
path coalitional games on undirected series-parallel graphs.
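The basic value query in these games is a reachability test: a coalition of edges is winning exactly when the edges it controls contain an s-t path. A minimal sketch for the undirected edge version (the representation and function name are our own):

```python
from collections import defaultdict, deque

def coalition_wins(coalition, s, t):
    """In an edge path coalitional game, a coalition of edges has value 1
    iff its edges already contain an s-t path. Plain BFS over the
    coalition's edges (treated as undirected here)."""
    adj = defaultdict(list)
    for u, v in coalition:
        adj[u].append(v)
        adj[v].append(u)
    seen = {s}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False
```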

# Software Demonstrations:

### GS3 and Tartanian: Game theory-based heads-up limit and no-limit Texas Hold'em poker-playing programs

Andrew Gilpin, Tuomas Sandholm and Troels Bjerre Sørensen. [ .pdf ] AAMAS'08

We demonstrate two game theory-based programs for heads-up
limit and no-limit Texas Hold'em poker. The first player,
GS3, is designed for playing limit Texas Hold'em, in which
all bets are a fixed amount. The second player, Tartanian,
is designed for the no-limit variant of the game, in which
the amount bet can be any amount up to the number of chips
the player has. Both GS3 and Tartanian are based on our
potential-aware automated abstraction algorithm for
identifying strategically similar situations in order to
decrease the size of the game tree. To deal with the
virtually infinite strategy space of no-limit poker,
Tartanian additionally uses a discretized betting model
designed to capture the most important strategic choices in
the game. The strategies for both players are computed
using our improved version of Nesterov's excessive gap
technique specialized for poker. In this demonstration,
participants will be invited to play against both of the
players, and to experience first-hand the sophisticated
strategies employed by our players.