
Chapter 8: Automated Market Makers

"The key idea is to use a cost function that an automated market maker uses to set prices. Traders trade against this function rather than against each other, guaranteeing that anyone who wants to trade can always find a counterparty." --- Robin Hanson, Logarithmic Market Scoring Rules for Modular Combinatorial Information Aggregation (2003)


Imagine you create a prediction market asking: "Will it rain tomorrow?" You set it up, post it online, and wait. One person shows up and wants to buy "Yes" shares at 40 cents. They wait. Nobody comes along to sell. Hours pass. The buyer leaves. Your market has failed --- not because the question was bad, not because people lacked opinions, but because there was no one on the other side of the trade.

This is the thin market problem, and it nearly killed prediction markets before they could prove their worth. The solution came from an economist named Robin Hanson, who asked a radical question: What if we replaced human counterparties with a mathematical function?

The answer was the automated market maker (AMM) --- an algorithm that always stands ready to buy or sell at mathematically determined prices. AMMs transformed prediction markets from fragile curiosities into robust information-gathering tools. Today, every major prediction market platform uses some form of automated market maker.

In this chapter, you will learn exactly how these algorithms work, from the elegant mathematics of Hanson's Logarithmic Market Scoring Rule to the constant-product formulas that power decentralized finance, and everything in between.


8.1 Why Automated Market Makers?

The Thin Order Book Problem

In Chapter 7, we explored how traditional order books match buyers and sellers. This works beautifully for liquid markets like the New York Stock Exchange, where thousands of traders are active every second. But prediction markets are different:

  • Questions are specific and time-limited. "Will the UK leave the EU by March 2019?" attracts far fewer traders than "Buy/sell Apple stock."
  • Markets are fragmented. A platform might host thousands of questions, each with only a handful of interested participants.
  • Traders arrive sporadically. Unlike stock markets, prediction market participants often trade once and leave.

The result is thin order books --- buy and sell orders sit unmatched for hours or days. This creates three serious problems:

  1. Wide bid-ask spreads. If the best buy offer is 30 cents and the best sell offer is 70 cents, the market price is essentially useless as a probability estimate.
  2. Delayed price discovery. News breaks, opinions shift, but prices cannot update until someone happens to place a matching order.
  3. Discouraging participation. Traders who cannot execute immediately simply leave, making the thin market even thinner --- a vicious cycle.

The AMM Solution: Algorithmic Liquidity

An automated market maker solves this by replacing the order book with a cost function --- a mathematical formula that determines the price of every trade. Here is the key insight:

An AMM is a robot trader with infinite patience and a mathematical pricing rule. It will always buy from you, and always sell to you, at a price determined by the current state of the market.

The AMM does not "want" to make money. It does not have opinions about probabilities. It simply enforces a pricing rule that has certain desirable mathematical properties. Someone (the market operator) funds the AMM with an initial subsidy, and in return, the market always has liquidity.

AMM vs. Central Limit Order Book (CLOB)

Let us compare these two approaches directly:

| Feature | Central Limit Order Book (CLOB) | Automated Market Maker (AMM) |
|---|---|---|
| Liquidity source | Other traders | Mathematical function |
| Counterparty | Another human | The algorithm itself |
| Can always trade? | No --- requires matching orders | Yes --- always available |
| Price determination | Supply and demand of orders | Cost function formula |
| Bid-ask spread | Varies with liquidity | Determined by parameters |
| Cost to operate | Low (just matching) | Requires initial subsidy |
| Best for | High-volume markets | Thin markets, many questions |
| Price manipulation | Harder (many participants) | Possible if subsidy is small |
| Capital efficiency | Higher (funds locked only in orders) | Lower (subsidy always at risk) |

When to Use Each Approach

Use an AMM when:

  • You expect thin trading volume
  • You need guaranteed liquidity from day one
  • You are running many markets simultaneously (like a platform with thousands of questions)
  • Speed of price discovery matters more than capital efficiency

Use a CLOB when:

  • You expect high trading volume
  • You want to minimize operator subsidy costs
  • Traders are sophisticated and will provide liquidity themselves
  • You need complex order types (limit orders, stop-loss, etc.)

Use a hybrid when:

  • You want an AMM as a backstop but prefer peer-to-peer trading when available
  • The market starts thin but may grow (begin with AMM, transition to CLOB)

Many real-world platforms use hybrids. Polymarket, for instance, uses an order book for its most liquid markets but AMM-style mechanisms for thinner ones. Manifold Markets has historically relied on an AMM variant for all its markets.


8.2 The Logarithmic Market Scoring Rule (LMSR)

Hanson's Invention

In 2003, Robin Hanson published a paper that would become the foundation of modern prediction markets. He proposed the Logarithmic Market Scoring Rule (LMSR), an automated market maker with elegant mathematical properties.

The key idea builds on proper scoring rules --- formulas that reward forecasters for honest probability assessments (which we explored in Chapter 4). Hanson's insight was to chain scoring rule payments together: each trader "updates" the market's probability, paying or receiving the difference in scores between the old and new states.

The Cost Function

The heart of LMSR is its cost function. For a market with outcomes $1, 2, \ldots, n$, where $q_i$ is the number of shares of outcome $i$ that have been purchased, the cost function is:

$$C(q_1, q_2, \ldots, q_n) = b \cdot \ln\left(\sum_{i=1}^{n} e^{q_i / b}\right)$$

Where:

  • $q_i$ = total quantity of shares purchased for outcome $i$ (can be negative if shares were sold)
  • $b$ = the liquidity parameter (a positive constant we choose)
  • $\ln$ = natural logarithm
  • $e$ = Euler's number (approximately 2.71828)

In plain language: The cost function tracks the cumulative cost of all shares purchased so far. When a trader wants to buy shares, they pay the difference in the cost function before and after their trade.

How Trades Work

Suppose a trader wants to buy $\Delta$ shares of outcome $i$. The cost they pay is:

$$\text{Trade Cost} = C(q_1, \ldots, q_i + \Delta, \ldots, q_n) - C(q_1, \ldots, q_i, \ldots, q_n)$$

This is simply: new cost state minus old cost state. The cost function acts like a running total, and each trade is priced as the marginal change.

The Price Function

The current price (instantaneous probability) of outcome $i$ is the partial derivative of $C$ with respect to $q_i$:

$$p_i = \frac{\partial C}{\partial q_i} = \frac{e^{q_i / b}}{\sum_{j=1}^{n} e^{q_j / b}}$$

If this formula looks familiar, it should --- this is the softmax function, widely used in machine learning. It has a beautiful property: all prices automatically sum to 1, meaning they can be directly interpreted as probabilities.

$$\sum_{i=1}^{n} p_i = 1$$

Worked Example: A Two-Outcome Market

Let us trace through a concrete example. We create a market: "Will Team A win the championship?" with two outcomes: Yes and No.

Setup:

  • Liquidity parameter: $b = 100$
  • Initial quantities: $q_{\text{Yes}} = 0$, $q_{\text{No}} = 0$

Initial prices:

$$p_{\text{Yes}} = \frac{e^{0/100}}{e^{0/100} + e^{0/100}} = \frac{1}{1 + 1} = 0.50$$

$$p_{\text{No}} = \frac{e^{0/100}}{e^{0/100} + e^{0/100}} = \frac{1}{1 + 1} = 0.50$$

Both outcomes start at 50% probability. This makes sense --- we have no information yet.

Trade 1: Alice buys 10 "Yes" shares.

First, the cost of this trade:

$$C_{\text{before}} = 100 \cdot \ln(e^{0/100} + e^{0/100}) = 100 \cdot \ln(2) \approx 69.315$$

$$C_{\text{after}} = 100 \cdot \ln(e^{10/100} + e^{0/100}) = 100 \cdot \ln(e^{0.1} + 1) = 100 \cdot \ln(1.10517 + 1) \approx 100 \cdot \ln(2.10517) \approx 74.440$$

$$\text{Alice pays} = 74.440 - 69.315 = 5.125$$

Alice pays about $5.12 for 10 "Yes" shares. Note this is slightly more than $5.00 (which would be the cost at exactly 50 cents per share), because the price rises as she buys.

New prices after Alice's trade:

$$p_{\text{Yes}} = \frac{e^{10/100}}{e^{10/100} + e^{0/100}} = \frac{1.10517}{1.10517 + 1} = \frac{1.10517}{2.10517} \approx 0.525$$

$$p_{\text{No}} \approx 0.475$$

The price of "Yes" has moved from 50% to about 52.5%, reflecting Alice's purchase.

Trade 2: Bob buys 20 "No" shares.

$$C_{\text{before}} \approx 74.440$$

$$C_{\text{after}} = 100 \cdot \ln(e^{10/100} + e^{20/100}) = 100 \cdot \ln(1.10517 + 1.22140) \approx 100 \cdot \ln(2.32657) \approx 84.440$$

$$\text{Bob pays} = 84.440 - 74.440 = 10.00$$

(The round number is no accident: $e^{10/100} + e^{20/100} = e^{0.1}(1 + e^{0.1})$, so the two logarithms differ by exactly $0.1$, and $100 \cdot 0.1 = 10$.)

New prices after Bob's trade:

$$p_{\text{Yes}} = \frac{e^{10/100}}{e^{10/100} + e^{20/100}} = \frac{1.10517}{2.32657} \approx 0.475$$

$$p_{\text{No}} \approx 0.525$$

Bob's larger purchase of "No" shares has pushed the probability of "Yes" back down to about 47.5%.

The Liquidity Parameter $b$

The parameter $b$ is the single most important design choice in LMSR. It controls how sensitive prices are to trades:

  • Small $b$ (e.g., 10): Prices move dramatically with each trade. A 10-share purchase might swing the price by 20 percentage points. The market is very reactive but volatile.
  • Large $b$ (e.g., 1000): Prices barely budge. The same 10-share purchase might move the price by only 0.5 percentage points. The market is stable but slow to respond.

Mathematically, the price impact of buying $\Delta$ shares of outcome $i$ is approximately:

$$\Delta p_i \approx \frac{\Delta}{4b} \quad \text{(for a two-outcome market near 50/50)}$$

So doubling $b$ cuts the price impact in half. We will explore this parameter in depth in Section 8.7.
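
This approximation is easy to check numerically. The sketch below uses only the softmax price formula from earlier in this section and compares the actual price move against $\Delta / (4b)$ for a small trade:

```python
import numpy as np

def lmsr_prices(q, b):
    """Softmax prices for an LMSR market in state q."""
    z = np.array(q) / b
    e = np.exp(z - z.max())  # stabilized softmax
    return e / e.sum()

b, delta = 100.0, 1.0
p_before = lmsr_prices([0.0, 0.0], b)[0]   # 0.5 at a 50/50 start
p_after = lmsr_prices([delta, 0.0], b)[0]
print(round(p_after - p_before, 6))  # actual move, ~0.0025
print(delta / (4 * b))               # approximation: 0.0025
```

For larger trades the approximation degrades, since the derivative $p(1-p)/b$ shrinks as the price moves away from 50/50.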

Bounded Loss Property

One of LMSR's most important properties is bounded loss. No matter how traders trade, the maximum amount the market maker can lose is:

$$\text{Maximum Loss} = b \cdot \ln(n)$$

Where $n$ is the number of outcomes. For a two-outcome market with $b = 100$:

$$\text{Maximum Loss} = 100 \cdot \ln(2) \approx \$69.31$$

This means the market operator knows in advance the worst-case cost of running the market. This is a subsidy paid for the privilege of gathering information through the market. We will explore this subsidy in Section 8.8.

Python LMSR Implementation

Here is a complete LMSR class (also available in code/example-01-lmsr.py):

import numpy as np

class LMSR:
    """Logarithmic Market Scoring Rule automated market maker."""

    def __init__(self, num_outcomes: int, b: float):
        """
        Initialize the LMSR market maker.

        Args:
            num_outcomes: Number of possible outcomes (e.g., 2 for Yes/No)
            b: Liquidity parameter (higher = more liquidity, less price impact)
        """
        self.num_outcomes = num_outcomes
        self.b = b
        self.quantities = np.zeros(num_outcomes)  # shares outstanding per outcome

    def cost(self, quantities: np.ndarray = None) -> float:
        """
        Compute the cost function C(q) = b * ln(sum(exp(q_i / b))).

        Args:
            quantities: Optional quantity vector; uses current state if None.

        Returns:
            The value of the cost function.
        """
        if quantities is None:
            quantities = self.quantities
        # Use log-sum-exp trick for numerical stability
        q_over_b = quantities / self.b
        max_q = np.max(q_over_b)
        return self.b * (max_q + np.log(np.sum(np.exp(q_over_b - max_q))))

    def prices(self) -> np.ndarray:
        """
        Compute current prices (probabilities) for all outcomes.
        p_i = exp(q_i / b) / sum(exp(q_j / b))   (softmax)

        Returns:
            Array of prices summing to 1.
        """
        q_over_b = self.quantities / self.b
        # Softmax with numerical stability
        max_q = np.max(q_over_b)
        exp_q = np.exp(q_over_b - max_q)
        return exp_q / np.sum(exp_q)

    def trade_cost(self, outcome: int, quantity: float) -> float:
        """
        Calculate the cost of buying `quantity` shares of `outcome`.
        Negative quantity means selling.

        Args:
            outcome: Index of the outcome (0-indexed).
            quantity: Number of shares to buy (positive) or sell (negative).

        Returns:
            The dollar cost of the trade (positive = trader pays, negative = trader receives).
        """
        cost_before = self.cost()
        new_quantities = self.quantities.copy()
        new_quantities[outcome] += quantity
        cost_after = self.cost(new_quantities)
        return cost_after - cost_before

    def execute_trade(self, outcome: int, quantity: float) -> float:
        """
        Execute a trade: buy or sell shares and update state.

        Args:
            outcome: Index of the outcome (0-indexed).
            quantity: Number of shares to buy (positive) or sell (negative).

        Returns:
            The dollar cost paid by the trader.
        """
        cost = self.trade_cost(outcome, quantity)
        self.quantities[outcome] += quantity
        return cost

    def max_loss(self) -> float:
        """
        Calculate the maximum possible loss for the market maker.
        Max loss = b * ln(n)

        Returns:
            The maximum subsidy cost.
        """
        return self.b * np.log(self.num_outcomes)

Using the class:

# Create a two-outcome market (Yes/No) with b=100
market = LMSR(num_outcomes=2, b=100)

print(f"Initial prices: {market.prices()}")
# Output: Initial prices: [0.5 0.5]

# Alice buys 10 "Yes" shares
cost = market.execute_trade(outcome=0, quantity=10)
print(f"Alice pays: ${cost:.2f}")
print(f"Prices after Alice: {market.prices()}")
# Output: Alice pays: $5.12
# Output: Prices after Alice: [0.52498 0.47502]

# Bob buys 20 "No" shares
cost = market.execute_trade(outcome=1, quantity=20)
print(f"Bob pays: ${cost:.2f}")
print(f"Prices after Bob: {market.prices()}")
# Output: Bob pays: $10.00
# Output: Prices after Bob: [0.47502 0.52498]

8.3 Constant Product Market Makers (CPMM)

The x * y = k Invariant

While LMSR emerged from economics and proper scoring rules, a completely different AMM design emerged from decentralized finance (DeFi): the Constant Product Market Maker (CPMM), popularized by Uniswap in 2018.

The core idea is beautifully simple. A CPMM maintains reserves of two assets and enforces an invariant --- a quantity that must remain constant before and after every trade:

$$x \cdot y = k$$

Where:

  • $x$ = reserve quantity of asset A (e.g., "Yes" shares)
  • $y$ = reserve quantity of asset B (e.g., "No" shares)
  • $k$ = the constant product (set at initialization and preserved by every trade)

When a trader wants to buy "Yes" shares, they deposit "No" shares (or a base currency) into the pool, and the contract gives them "Yes" shares, adjusting the reserves so that the product $x \cdot y$ still equals $k$.

Adapting CPMM for Prediction Markets

In DeFi, CPMM is used for swapping tokens. For prediction markets, we adapt it slightly:

  • The two reserves represent the two outcomes (e.g., "Yes" and "No" shares)
  • Traders buy one outcome by effectively selling the other
  • Alternatively, traders deposit a base currency, which the AMM uses to mint both share types, then keeps one and gives the other to the trader

For a two-outcome prediction market, the price of outcome A is:

$$p_A = \frac{y}{x + y}$$

$$p_B = \frac{x}{x + y}$$

In plain language: The price of an asset is inversely related to its reserve. If "Yes" shares are abundant in the pool (high $x$), "Yes" is cheap. If "Yes" shares are scarce (low $x$), "Yes" is expensive.

Worked Example

Setup:

  • Initial reserves: $x = 100$ (Yes shares), $y = 100$ (No shares)
  • Constant product: $k = 100 \times 100 = 10{,}000$

Initial prices:

$$p_{\text{Yes}} = \frac{100}{100 + 100} = 0.50$$

Trade: Alice buys 10 "Yes" shares.

Alice removes 10 "Yes" shares from the pool, so $x_{\text{new}} = 90$. To maintain the invariant:

$$90 \cdot y_{\text{new}} = 10{,}000$$

$$y_{\text{new}} = 111.11$$

Alice must deposit $111.11 - 100 = 11.11$ "No" shares into the pool (or the equivalent in base currency).

New prices:

$$p_{\text{Yes}} = \frac{111.11}{90 + 111.11} = \frac{111.11}{201.11} \approx 0.5525$$

The price moved from 50% to about 55.25% --- a larger jump than LMSR would produce for the same trade size with comparable liquidity settings.
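
This comparison is easy to verify with the closed-form price formulas from this chapter (a quick numeric check, not part of either implementation):

```python
import numpy as np

# LMSR, b = 100: buying 10 "Yes" shares from a 50/50 start (Section 8.2)
b, d = 100.0, 10.0
p_lmsr = np.exp(d / b) / (np.exp(d / b) + 1.0)
print(round(p_lmsr, 4))   # 0.525

# CPMM, 100/100 reserves: removing 10 "Yes" shares from the pool
x = 100.0 - d
y = 10000.0 / x           # maintain x * y = k
p_cpmm = y / (x + y)
print(round(p_cpmm, 4))   # 0.5525
```

With these particular settings the CPMM moves roughly twice as far; the gap depends on how "comparable" the liquidity settings are, which is exactly why choosing $b$ or the initial reserves matters.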

Slippage

Slippage is the difference between the price you expect to pay and the price you actually pay. In a CPMM, slippage is inherent because the price changes during your trade.

For a small trade, the price is approximately $p_A = y / (x + y)$. For a large trade that moves reserves significantly, you end up paying an average price that is worse than the starting price.

The effective price Alice paid for her 10 "Yes" shares is:

$$\text{Effective price} = \frac{11.11}{10} \approx 1.11 \text{ (in "No" share units)}$$

Converting to probability terms (a ratio of 1.111 "No" per "Yes" means $p_{\text{Yes}}/p_{\text{No}} = 1.111$, so $p_{\text{Yes}} \approx 0.526$), her average price is above the starting price of 0.50 but below the ending price of 0.5525 --- she paid a premium over the quoted starting price due to slippage.

Slippage Formula

For buying $\Delta x$ shares from a CPMM with reserves $(x, y)$:

$$\text{Cost} = \frac{y \cdot \Delta x}{x - \Delta x}$$

$$\text{Slippage} = \text{Cost} - p_{\text{start}} \cdot \Delta x$$

Note the important constraint: you cannot buy more than $x$ shares, because the reserve cannot go negative. This is fundamentally different from LMSR, where there is no hard limit on how many shares can be bought.
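
Plugging the worked example's numbers into this closed-form cost reproduces the invariant calculation (a standalone check):

```python
x, y = 100.0, 100.0        # initial reserves from the worked example
dx = 10.0                  # "Yes" shares purchased
cost = y * dx / (x - dx)   # closed-form: y * dx / (x - dx)
print(round(cost, 2))      # 11.11, matching the 111.11 - 100 deposit
slippage = cost - (y / (x + y)) * dx
print(round(slippage, 2))  # 6.11 beyond the quoted starting price
```

As $dx$ approaches the reserve $x$, the denominator goes to zero and the cost diverges, which is the hard limit mentioned above.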

LMSR vs. CPMM: Key Differences

| Feature | LMSR | CPMM |
|---|---|---|
| Invariant | Cost function | Constant product |
| Bounded loss | Yes: $b \cdot \ln(n)$ | No: loss can grow with volume |
| Price range | (0, 1) always | (0, 1) but approaches extremes slowly |
| Infinite liquidity | Yes (can always trade) | Limited by reserves |
| Parameter | $b$ (liquidity) | Initial reserve sizes |
| Origin | Scoring rules (Hanson, 2003) | DeFi (Uniswap, 2018) |
| Multi-outcome | Natural extension | Requires product of all reserves |

Python CPMM Implementation

class CPMM:
    """Constant Product Market Maker for a two-outcome prediction market."""

    def __init__(self, reserve_a: float, reserve_b: float):
        """
        Initialize with reserves for outcomes A and B.

        Args:
            reserve_a: Initial reserve of outcome A shares.
            reserve_b: Initial reserve of outcome B shares.
        """
        self.reserve_a = reserve_a
        self.reserve_b = reserve_b
        self.k = reserve_a * reserve_b  # the constant product invariant

    def prices(self) -> tuple:
        """
        Current prices (probabilities) for both outcomes.

        Returns:
            Tuple of (price_a, price_b).
        """
        total = self.reserve_a + self.reserve_b
        return (self.reserve_b / total, self.reserve_a / total)

    def cost_to_buy_a(self, quantity: float) -> float:
        """
        Cost (in B-shares or base currency) to buy `quantity` of A-shares.

        Args:
            quantity: Number of A-shares to buy.

        Returns:
            Cost in B-share equivalents.
        """
        if quantity >= self.reserve_a:
            raise ValueError("Cannot buy more than available reserve.")
        new_reserve_a = self.reserve_a - quantity
        new_reserve_b = self.k / new_reserve_a
        return new_reserve_b - self.reserve_b

    def cost_to_buy_b(self, quantity: float) -> float:
        """Cost (in A-shares or base currency) to buy `quantity` of B-shares."""
        if quantity >= self.reserve_b:
            raise ValueError("Cannot buy more than available reserve.")
        new_reserve_b = self.reserve_b - quantity
        new_reserve_a = self.k / new_reserve_b
        return new_reserve_a - self.reserve_a

    def execute_buy_a(self, quantity: float) -> float:
        """Buy A-shares, updating reserves. Returns cost paid."""
        cost = self.cost_to_buy_a(quantity)
        self.reserve_a -= quantity
        self.reserve_b += cost
        return cost

    def execute_buy_b(self, quantity: float) -> float:
        """Buy B-shares, updating reserves. Returns cost paid."""
        cost = self.cost_to_buy_b(quantity)
        self.reserve_b -= quantity
        self.reserve_a += cost
        return cost

    def slippage(self, outcome: str, quantity: float) -> float:
        """
        Calculate slippage for a trade.

        Args:
            outcome: 'A' or 'B'.
            quantity: Number of shares to buy.

        Returns:
            Slippage amount (extra cost beyond initial price * quantity).
        """
        price_a, price_b = self.prices()
        if outcome == 'A':
            cost = self.cost_to_buy_a(quantity)
            ideal_cost = price_a * quantity
        else:
            cost = self.cost_to_buy_b(quantity)
            ideal_cost = price_b * quantity
        return cost - ideal_cost

8.4 The Liquidity-Sensitive LMSR (LS-LMSR)

Motivation: Why Fixed $b$ Is Suboptimal

Standard LMSR has a fundamental tension. The liquidity parameter $b$ is set once when the market is created, and it never changes:

  • If $b$ is too small, the market is cheap to operate (low maximum loss) but prices swing wildly. Early trades can dominate the price, potentially deterring later participants.
  • If $b$ is too large, the market is stable but expensive. The operator subsidizes a large amount of liquidity that may never be needed.

The ideal behavior would be: start with low liquidity (and low cost) when the market is new, then automatically increase liquidity as more traders participate and more money flows in.

Othman's LS-LMSR

In 2010, Abraham Othman and colleagues (David Pennock, Daniel Reeves, and Tuomas Sandholm) proposed the Liquidity-Sensitive LMSR (LS-LMSR). The key idea: make the liquidity parameter $b$ a function of trading activity.

Specifically, $b$ becomes a function of the total quantity of shares traded:

$$b(\mathbf{q}) = \alpha \cdot \sum_{i=1}^{n} q_i$$

Where:

  • $\alpha$ is a scaling constant (the new parameter we must choose)
  • $\sum q_i$ is the total number of shares purchased across all outcomes
  • As more shares are bought, $b$ grows, providing more liquidity

The cost function for LS-LMSR becomes:

$$C(\mathbf{q}) = b(\mathbf{q}) \cdot \ln\left(\sum_{i=1}^{n} e^{q_i / b(\mathbf{q})}\right)$$

This is the same formula as standard LMSR, but with $b$ now depending on $\mathbf{q}$ rather than being constant.

How LS-LMSR Behaves

Let us trace through an example:

  1. Market opens. $q_{\text{Yes}} = 0, q_{\text{No}} = 0$. Total shares = 0. We need a minimum $b$ to avoid division by zero, so we set $b_{\text{min}} = 10$.

  2. Early trading. First few traders buy 20 total shares. $b = \max(\alpha \cdot 20, b_{\text{min}})$. With $\alpha = 1$, we get $b = 20$. Prices are responsive --- each trade has a noticeable impact.

  3. Active trading. After 500 total shares traded, $b = 500$. Now prices are much more stable. It takes a larger trade to move the price significantly.

  4. Heavy trading. After 5000 total shares, $b = 5000$. The market behaves almost like a deep CLOB --- enormous trades are needed to move the price.

This adaptive behavior is exactly what we want: the market's liquidity scales with its activity.
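
The adaptive schedule in steps 1-4 can be sketched directly (this mirrors the $b(\mathbf{q})$ formula with a floor $b_{\min}$; the helper name is ours, not from Othman's paper):

```python
import numpy as np

def adaptive_b(quantities, alpha, b_min):
    """b grows linearly with total (long) shares outstanding."""
    total = np.sum(np.maximum(quantities, 0))  # ignore net-short positions
    return max(alpha * total, b_min)

alpha, b_min = 1.0, 10.0
print(adaptive_b(np.array([0.0, 0.0]), alpha, b_min))      # 10.0: floor applies
print(adaptive_b(np.array([12.0, 8.0]), alpha, b_min))     # 20.0: early trading
print(adaptive_b(np.array([300.0, 200.0]), alpha, b_min))  # 500.0: active market
```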

Advantages of LS-LMSR

  1. Loss scales with activity (a trade-off, not a pure advantage). Unlike standard LMSR, where maximum loss is fixed at $b \cdot \ln(n)$, LS-LMSR's potential loss grows as $b$ grows. In exchange, the operator also collects more from trades as volume increases.
  2. Self-funding. In practice, LS-LMSR markets can earn enough from the spread to fund the increasing liquidity, making them closer to self-sustaining.
  3. Better price discovery. Early prices are more sensitive (responding quickly to initial information), while later prices are more stable (not easily manipulated).
  4. No need to guess optimal $b$. The market automatically adapts.

Python LS-LMSR Implementation

class LSLMSR:
    """Liquidity-Sensitive LMSR: b adapts based on trading volume."""

    def __init__(self, num_outcomes: int, alpha: float, b_min: float = 1.0):
        """
        Args:
            num_outcomes: Number of possible outcomes.
            alpha: Scaling factor for b relative to total shares.
            b_min: Minimum value of b (prevents division by zero).
        """
        self.num_outcomes = num_outcomes
        self.alpha = alpha
        self.b_min = b_min
        self.quantities = np.zeros(num_outcomes)

    def _compute_b(self, quantities: np.ndarray = None) -> float:
        """Compute the adaptive liquidity parameter."""
        if quantities is None:
            quantities = self.quantities
        total_shares = np.sum(np.maximum(quantities, 0))
        return max(self.alpha * total_shares, self.b_min)

    def cost(self, quantities: np.ndarray = None) -> float:
        """Compute cost function with adaptive b."""
        if quantities is None:
            quantities = self.quantities
        b = self._compute_b(quantities)
        q_over_b = quantities / b
        max_q = np.max(q_over_b)
        return b * (max_q + np.log(np.sum(np.exp(q_over_b - max_q))))

    def prices(self) -> np.ndarray:
        """Current prices using adaptive b."""
        b = self._compute_b()
        q_over_b = self.quantities / b
        max_q = np.max(q_over_b)
        exp_q = np.exp(q_over_b - max_q)
        return exp_q / np.sum(exp_q)

    def current_b(self) -> float:
        """Return the current adaptive b value."""
        return self._compute_b()

    def trade_cost(self, outcome: int, quantity: float) -> float:
        """Cost to buy `quantity` shares of `outcome`."""
        cost_before = self.cost()
        new_quantities = self.quantities.copy()
        new_quantities[outcome] += quantity
        cost_after = self.cost(new_quantities)
        return cost_after - cost_before

    def execute_trade(self, outcome: int, quantity: float) -> float:
        """Execute a trade and return cost."""
        cost = self.trade_cost(outcome, quantity)
        self.quantities[outcome] += quantity
        return cost

8.5 Cost Functions and Their Properties

What Makes a Valid Cost Function?

Not every mathematical function works as an AMM. A valid cost function for a prediction market must satisfy several important properties. Understanding these properties helps you evaluate any proposed AMM mechanism and ensures that the market behaves fairly.

Property 1: Convexity

The cost function $C(\mathbf{q})$ must be convex. In intuitive terms, this means buying more shares of an outcome gets progressively more expensive --- the price goes up as you buy.

Mathematically, for any two quantity vectors $\mathbf{q}_1$ and $\mathbf{q}_2$ and any $\lambda \in [0, 1]$:

$$C(\lambda \mathbf{q}_1 + (1 - \lambda) \mathbf{q}_2) \leq \lambda C(\mathbf{q}_1) + (1 - \lambda) C(\mathbf{q}_2)$$

Why it matters: Convexity prevents arbitrage. If the cost function were concave (prices decrease as you buy), traders could make risk-free profit by splitting trades into pieces.

Verification for LMSR: The Hessian matrix (matrix of second derivatives) of the LMSR cost function is positive semi-definite, confirming convexity.

Property 2: Path Independence

The total cost of reaching a given state must be the same regardless of the order of trades:

$$C(\mathbf{q}_{\text{final}}) - C(\mathbf{q}_{\text{initial}}) = \text{same regardless of intermediate steps}$$

In plain language: If Alice buys 10 "Yes" then Bob buys 5 "No", the total money collected by the AMM must be the same as if Bob had traded first and Alice second.

Why it matters: Path independence means traders cannot manipulate costs by timing or ordering their trades strategically. The AMM charges fairly regardless of sequence.

LMSR satisfies this because the cost of going from state $\mathbf{q}_0$ to state $\mathbf{q}_f$ is always $C(\mathbf{q}_f) - C(\mathbf{q}_0)$, regardless of the path taken.

Property 3: No-Arbitrage (Prices Sum to 1)

For a valid prediction market, the prices must always sum to exactly 1:

$$\sum_{i=1}^{n} p_i = 1$$

This means buying a complete set of shares (one of each outcome) always costs the same amount (typically $1), and there is no way to create or destroy value by trading combinations of outcomes.

Why it matters: If prices summed to more than 1, a trader could sell a complete set for a guaranteed profit. If they summed to less than 1, a trader could buy a complete set at a discount.
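
A one-line sketch of that arbitrage, with hypothetical mispriced quotes:

```python
prices = [0.55, 0.52]          # hypothetical quotes summing to 1.07
proceeds = sum(prices)         # sell one complete set (one share of each outcome)
payout_owed = 1.0              # exactly one outcome pays $1 at resolution
print(round(proceeds - payout_owed, 2))  # riskless profit per set: 0.07
```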

Property 4: Translational Invariance

Adding the same constant to all quantity elements does not change prices:

$$\text{prices}(q_1 + c, q_2 + c, \ldots, q_n + c) = \text{prices}(q_1, q_2, \ldots, q_n)$$

In plain language: If every trader buys one share of every outcome, prices do not change. Only relative demand matters.

Why it matters: This means the market responds to information, not to undirected trading volume. Buying all outcomes equally (which carries no information) leaves prices unchanged.

Property 5: Bounded Loss (for LMSR)

The total amount the market maker can lose is bounded:

$$\text{Loss} \leq b \cdot \ln(n)$$

Why it matters: The market operator can budget exactly how much they are willing to spend as a subsidy.

Python Verification of Properties

def verify_cost_function_properties(market, num_trials=100):
    """Verify key properties of an AMM cost function."""
    import numpy as np

    n = market.num_outcomes

    # Property 1: Prices always sum to 1
    print("=== Property: Prices sum to 1 ===")
    for _ in range(num_trials):
        # Set random quantities
        market.quantities = np.random.randn(n) * 50
        price_sum = np.sum(market.prices())
        assert abs(price_sum - 1.0) < 1e-10, f"Prices sum to {price_sum}, not 1"
    print("PASSED: Prices always sum to 1.0")

    # Property 2: Prices are always positive
    print("\n=== Property: Prices are positive ===")
    for _ in range(num_trials):
        market.quantities = np.random.randn(n) * 100
        prices = market.prices()
        assert np.all(prices > 0), f"Found non-positive price: {prices}"
    print("PASSED: All prices always positive")

    # Property 3: Path independence
    print("\n=== Property: Path independence ===")
    for _ in range(num_trials):
        target = np.random.rand(n) * 20

        # Path A: reach `target` by trading outcomes in order 0, 1, ..., n-1
        market.quantities = np.zeros(n)
        cost_a = 0.0
        for i in range(n):
            cost_a += market.execute_trade(i, target[i])

        # Path B: reach the same final state by trading in reverse order
        market.quantities = np.zeros(n)
        cost_b = 0.0
        for i in reversed(range(n)):
            cost_b += market.execute_trade(i, target[i])

        assert abs(cost_a - cost_b) < 1e-8, f"Path dependence found: {cost_a} vs {cost_b}"
    print("PASSED: Path independence holds")

    # Property 4: Translational invariance of prices
    print("\n=== Property: Translational invariance ===")
    for _ in range(num_trials):
        market.quantities = np.random.randn(n) * 50
        prices_before = market.prices().copy()

        c = np.random.randn() * 100
        market.quantities += c
        prices_after = market.prices()

        assert np.allclose(prices_before, prices_after, atol=1e-10), \
            f"Translational invariance violated"
    print("PASSED: Translational invariance holds")

    # Property 5: Convexity (buying more shares gets more expensive per share)
    print("\n=== Property: Convexity (marginal cost increases) ===")
    market.quantities = np.zeros(n)
    costs = []
    for i in range(20):
        cost = market.trade_cost(0, 5)
        market.execute_trade(0, 5)
        costs.append(cost)

    for i in range(1, len(costs)):
        assert costs[i] >= costs[i-1] - 1e-10, \
            f"Convexity violated at step {i}: {costs[i]} < {costs[i-1]}"
    print("PASSED: Marginal cost is non-decreasing")

    print("\nAll property checks passed!")

8.6 Pricing and Trading with AMMs

How to Calculate Trade Prices

Understanding how AMMs price trades is essential for anyone who wants to trade strategically in prediction markets. Let us walk through the different ways to think about trade prices.

Instantaneous (Marginal) Price. This is the price of an infinitesimally small trade --- the current "market price" displayed to traders:

$$p_i = \frac{\partial C}{\partial q_i}$$

For LMSR, this is the softmax of $q_i / b$. This is the price you see quoted on the platform.

Effective (Average) Price. For any trade of finite size, the actual average price per share is:

$$\text{Effective price} = \frac{\text{Total cost}}{\text{Shares purchased}} = \frac{C(\mathbf{q}_{\text{after}}) - C(\mathbf{q}_{\text{before}})}{\Delta q_i}$$

The effective price is always worse (higher for buying, lower for selling) than the instantaneous price, because the price moves against you as you trade.

Final (Post-Trade) Price. This is the new instantaneous price after the trade completes. It represents the market's updated probability estimate.
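
To make the three notions concrete, here is a minimal, self-contained sketch for a two-outcome LMSR (standalone helper functions, independent of the LMSR class used elsewhere in this chapter):

```python
import numpy as np

def lmsr_cost(q, b):
    """LMSR cost function C(q) = b * ln(sum(exp(q_i / b)))."""
    return b * np.log(np.sum(np.exp(np.asarray(q) / b)))

def lmsr_prices(q, b):
    """Instantaneous prices: softmax of q / b."""
    e = np.exp(np.asarray(q) / b)
    return e / e.sum()

b = 100.0
q = np.array([0.0, 0.0])

# Instantaneous price before the trade (the quoted "market price")
p_before = lmsr_prices(q, b)[0]                      # 0.5

# Buy 20 shares of outcome 0
q_after = q + np.array([20.0, 0.0])
total_cost = lmsr_cost(q_after, b) - lmsr_cost(q, b)

# Effective (average) price per share for this finite trade
p_effective = total_cost / 20.0

# Final (post-trade) instantaneous price
p_final = lmsr_prices(q_after, b)[0]

print(f"instantaneous: {p_before:.4f}")
print(f"effective:     {p_effective:.4f}")
print(f"final:         {p_final:.4f}")
assert p_before < p_effective < p_final
```

The effective price always lands between the instantaneous prices before and after the trade, which is another way of seeing why larger trades pay more per share.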

Slippage Calculation

Slippage is the cost you pay beyond the quoted price:

$$\text{Slippage} = \text{Effective price} - \text{Instantaneous price (before trade)}$$

For LMSR, slippage follows directly from the cost function. The helper below computes it for a proposed trade of $\Delta$ shares of outcome $i$ (it assumes the LMSR class built earlier in this chapter):

def calculate_slippage(market, outcome, quantity):
    """Calculate the slippage for a trade."""
    current_price = market.prices()[outcome]
    ideal_cost = current_price * quantity
    actual_cost = market.trade_cost(outcome, quantity)
    slippage = actual_cost - ideal_cost
    slippage_pct = (slippage / ideal_cost) * 100
    return {
        'current_price': current_price,
        'ideal_cost': ideal_cost,
        'actual_cost': actual_cost,
        'slippage_amount': slippage,
        'slippage_percent': slippage_pct,
        'effective_price': actual_cost / quantity
    }

Large Trade Impact

What happens when you try to move the price dramatically? Let us examine LMSR with $b = 100$ for a two-outcome market:

| Shares bought | Starting price | Ending price | Total cost | Effective price/share |
|---:|---:|---:|---:|---:|
| 1 | 0.500 | 0.502 | $0.50 | $0.501 |
| 10 | 0.500 | 0.525 | $5.13 | $0.513 |
| 50 | 0.500 | 0.622 | $28.09 | $0.562 |
| 100 | 0.500 | 0.731 | $62.01 | $0.620 |
| 200 | 0.500 | 0.881 | $143.38 | $0.717 |
| 500 | 0.500 | 0.993 | $431.36 | $0.863 |

Notice how the price escalates dramatically for large trades. Moving the price from 50% to 99.3% costs over $430. If that outcome then wins, the AMM pays out $500 against the money it collected, losing nearly the entire $b \cdot \ln(2) \approx \$69.31$ maximum subsidy.
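
This escalation is exactly what bounds the operator's risk. A standalone sketch (same two-outcome setup with $b = 100$) checks that no matter how far a one-sided trade pushes the price, the AMM's loss if that outcome wins stays below $b \cdot \ln(2)$:

```python
import math

b = 100.0

def cost(q1, q2):
    """Two-outcome LMSR cost function."""
    return b * math.log(math.exp(q1 / b) + math.exp(q2 / b))

for shares in [1, 10, 50, 100, 200, 500, 5000]:
    paid = cost(shares, 0) - cost(0, 0)    # what the trader pays in total
    price = math.exp(shares / b) / (math.exp(shares / b) + 1)
    loss_if_win = shares - paid            # AMM pays $1/share, keeps `paid`
    print(f"{shares:>5} shares: ending price {price:.3f}, "
          f"cost ${paid:.2f}, AMM loss if it wins ${loss_if_win:.2f}")
    assert loss_if_win <= b * math.log(2) + 1e-9
```

Even a 5,000-share trade, which pins the price at essentially 1.0, cannot cost the operator more than $b \cdot \ln(2)$.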

Multi-Step Trades

A trader might want to move the price to a target level. How many shares must they buy, and what will it cost?

For LMSR, to move the price of outcome $i$ from $p_{\text{start}}$ to $p_{\text{target}}$ in a two-outcome market:

$$\Delta q_i = b \cdot \ln\left(\frac{p_{\text{target}} / (1 - p_{\text{target}})}{p_{\text{start}} / (1 - p_{\text{start}})}\right)$$

This is the difference in log-odds, scaled by $b$.

def shares_to_reach_target(market, outcome, target_price):
    """
    Calculate shares needed to move outcome's price to target.
    Only works for two-outcome LMSR markets.
    """
    b = market.b
    current_price = market.prices()[outcome]

    # Log-odds formula
    current_logit = np.log(current_price / (1 - current_price))
    target_logit = np.log(target_price / (1 - target_price))

    shares_needed = b * (target_logit - current_logit)
    cost = market.trade_cost(outcome, shares_needed)

    return {
        'shares_needed': shares_needed,
        'total_cost': cost,
        'effective_price': cost / shares_needed if shares_needed != 0 else 0.0
    }
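
The log-odds formula can also be verified numerically. A standalone check (its own $b$ and starting quantities, chosen arbitrarily for illustration):

```python
import math

b = 150.0

def price(q1, q2):
    """Two-outcome LMSR price of outcome 1."""
    return math.exp(q1 / b) / (math.exp(q1 / b) + math.exp(q2 / b))

def logit(p):
    return math.log(p / (1 - p))

# Start with the price already away from 50/50
q = [30.0, 0.0]
p_start = price(*q)

p_target = 0.8
dq = b * (logit(p_target) - logit(p_start))   # shares of outcome 0 to buy

p_end = price(q[0] + dq, q[1])
print(f"buy {dq:.2f} shares to move {p_start:.3f} -> {p_end:.3f}")
assert abs(p_end - p_target) < 1e-9
```

Because the LMSR log-odds are exactly $(q_1 - q_2)/b$, buying the computed $\Delta q$ lands the price on the target to floating-point precision.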

Python Trade Simulator

Here is a complete trade simulator that tracks all relevant metrics:

class TradeSimulator:
    """Simulates and tracks trades on an AMM."""

    def __init__(self, market):
        self.market = market
        self.trade_history = []
        self.price_history = [market.prices().copy()]

    def execute(self, trader_name: str, outcome: int, quantity: float):
        """Execute a trade and record all metrics."""
        prices_before = self.market.prices().copy()
        cost = self.market.execute_trade(outcome, quantity)
        prices_after = self.market.prices().copy()

        eff_price = cost / quantity if quantity != 0 else 0.0
        record = {
            'trader': trader_name,
            'outcome': outcome,
            'quantity': quantity,
            'cost': cost,
            'effective_price': eff_price,
            'price_before': prices_before[outcome],
            'price_after': prices_after[outcome],
            # Buys pay above the pre-trade price; sells receive below it
            'slippage': ((eff_price - prices_before[outcome]) if quantity > 0
                         else (prices_before[outcome] - eff_price)),
            'all_prices_after': prices_after
        }
        self.trade_history.append(record)
        self.price_history.append(prices_after.copy())
        return record

    def summary(self):
        """Print a summary of all trades."""
        print(f"{'Trader':<10} {'Side':<6} {'Qty':>6} {'Cost':>8} "
              f"{'Eff.Price':>10} {'Slippage':>10} {'New Price':>10}")
        print("-" * 70)
        for t in self.trade_history:
            side = "BUY" if t['quantity'] > 0 else "SELL"
            print(f"{t['trader']:<10} {side:<6} {abs(t['quantity']):>6.1f} "
                  f"${t['cost']:>7.2f} {t['effective_price']:>10.4f} "
                  f"{t['slippage']:>10.4f} {t['price_after']:>10.4f}")

8.7 The Liquidity Parameter Deep Dive

The liquidity parameter $b$ in LMSR is the single most important design decision when creating a prediction market. Get it wrong, and your market will either be uselessly volatile or prohibitively expensive. This section gives you the tools to choose $b$ wisely.

How $b$ Affects the Bid-Ask Spread

In a traditional order book, the bid-ask spread is the gap between the best buy and sell prices. A fee-free AMM has no explicit spread: by path independence, buying and then immediately selling the same shares costs exactly zero. The analogous concept is how far the quoted price moves against a small trader: the gap between the quoted price just after buying $\epsilon$ shares and just after selling $\epsilon$ shares.

For LMSR around the midpoint (50/50), the price sensitivity is $\partial p / \partial q = p(1-p)/b = 1/(4b)$, so buying $\epsilon$ shares moves the price up by about $\epsilon/(4b)$ and selling moves it down by the same amount. The resulting gap is approximately:

$$\text{Spread} \approx \frac{\epsilon}{2b}$$

| $b$ value | Spread (for $\epsilon = 1$) | Character |
|---:|---:|---|
| 10 | 5.0% | Highly reactive, wide spread |
| 50 | 1.0% | Moderately reactive |
| 100 | 0.5% | Balanced |
| 500 | 0.1% | Very stable, tight spread |
| 1000 | 0.05% | Extremely stable |
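
One way to make the $\epsilon/(2b)$ figure concrete is to compare the quoted price just after buying $\epsilon$ shares with the quoted price just after selling $\epsilon$ shares, both starting from 50/50:

```python
import math

def price_after(delta_q, b):
    """Quoted price of outcome 1 after moving q1 by delta_q from a 50/50 start."""
    return math.exp(delta_q / b) / (math.exp(delta_q / b) + 1)

eps = 1.0
for b in [10, 50, 100, 500, 1000]:
    gap = price_after(+eps, b) - price_after(-eps, b)
    print(f"b={b:>4}: price gap {gap:.4%}  (eps/(2b) = {eps / (2 * b):.4%})")
```

The measured gap matches $\epsilon/(2b)$ to within the cubic terms of the expansion, reproducing the table above.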

How $b$ Affects Subsidy Cost

Remember, the maximum loss for the market maker is $b \cdot \ln(n)$. For a two-outcome market:

| $b$ value | Maximum loss | Interpretation |
|---:|---:|---|
| 10 | $6.93 | Very cheap to operate; suitable for casual markets |
| 50 | $34.66 | Moderate cost |
| 100 | $69.31 | Standard for small-stakes markets |
| 500 | $346.57 | Substantial subsidy required |
| 1000 | $693.15 | Expensive; suitable for high-importance questions |

How $b$ Affects Price Sensitivity

A useful rule of thumb: in a two-outcome market near 50/50, buying $b/2$ shares moves the price by roughly 12 percentage points (from 0.50 to about 0.62), and buying a full $b$ shares moves it to about 0.73.

More precisely, buying $\Delta$ shares starting from equal prices:

| Shares bought as fraction of $b$ | Approximate new price |
|---:|---:|
| 0.1b | 0.525 |
| 0.25b | 0.562 |
| 0.5b | 0.622 |
| 1.0b | 0.731 |
| 2.0b | 0.881 |
| 3.0b | 0.953 |
| 5.0b | 0.993 |

Choosing Optimal $b$

The right $b$ depends on several factors:

  1. Expected trading volume. If you expect 100 shares to be traded, setting $b = 1000$ means trading will barely move the price. Set $b$ to be roughly proportional to expected volume.

  2. Acceptable subsidy. If you can only afford $\$50$ of subsidy, then $b < 50 / \ln(2) \approx 72$ for a two-outcome market.

  3. Desired price sensitivity. If you want a single 10-share trade to move the price by about 2.5%, set $b \approx 100$.

  4. Number of outcomes. With more outcomes, the maximum loss grows as $b \cdot \ln(n)$, so you may need to reduce $b$ for multi-outcome markets.

A practical formula for choosing $b$:

$$b \approx \frac{\text{Expected total trading volume}}{10}$$

This ensures that the total expected trading can move prices across a meaningful range without exhausting the market maker's liquidity too quickly.
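
A sketch combining constraints 1, 2, and 4 above; `choose_b` is a hypothetical helper for illustration, not a standard API:

```python
import math

def choose_b(expected_volume, max_subsidy, num_outcomes=2):
    """Pick b from the volume rule of thumb, capped by the subsidy budget."""
    b_volume = expected_volume / 10                   # volume heuristic from the text
    b_budget = max_subsidy / math.log(num_outcomes)   # ensures b * ln(n) <= budget
    return min(b_volume, b_budget)

# Expect ~1,000 shares traded, but only $50 of subsidy available, 2 outcomes
b = choose_b(1000, 50)
print(f"b = {b:.1f}")   # budget-limited: 50 / ln(2), about 72.1
```

Here the budget binds before the volume heuristic does, reproducing the $b < 50/\ln(2) \approx 72$ figure from constraint 2.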

Empirical Analysis: Different $b$ Values

def analyze_b_values(b_values, trade_sequence, num_outcomes=2):
    """
    Analyze how different b values affect market behavior
    under the same trading sequence.
    """
    results = {}

    for b in b_values:
        market = LMSR(num_outcomes=num_outcomes, b=b)
        prices = [market.prices().copy()]
        costs = []
        total_cost = 0

        for outcome, qty in trade_sequence:
            cost = market.execute_trade(outcome, qty)
            costs.append(cost)
            total_cost += cost
            prices.append(market.prices().copy())

        results[b] = {
            'final_prices': market.prices(),
            'price_path': prices,
            'trade_costs': costs,
            'total_collected': total_cost,
            'max_loss': market.max_loss(),
            'quantities': market.quantities.copy()
        }

    return results

8.8 Market Maker Subsidy and Loss

Who Pays for AMM Liquidity?

An AMM always stands ready to trade, but this service is not free. The AMM's operator --- typically the platform --- bears the cost. Here is how the economics work:

  1. The operator funds the AMM with an initial subsidy. For LMSR, this is up to $b \cdot \ln(n)$.
  2. Traders pay the AMM when they buy shares. The AMM collects money.
  3. The AMM pays out when the market resolves. Holders of winning shares receive $1 per share.
  4. The AMM's profit/loss is: money collected from trades minus payouts to winners.

Maximum Loss Calculation

For LMSR, the worst-case scenario is that all trading pushes one outcome's price to (nearly) 1 and that outcome wins. In this case:

$$\text{Loss} = \text{Payouts to winners} - \text{Money collected from trades} \leq b \cdot \ln(n)$$

Proof sketch for two outcomes:

The cost function starts at $C_0 = C(0, 0) = b \cdot \ln(2)$. In the worst case, trading drives outcome 1 to quantity $Q$ while outcome 2 stays at 0. The money collected is $C(Q, 0) - C(0, 0)$, and the payout if outcome 1 wins is $Q$ dollars ($Q$ shares at $1 each). The loss is:

$$\text{Loss}(Q) = Q - [C(Q, 0) - C(0, 0)] = b \ln 2 - b \ln\left(1 + e^{-Q/b}\right)$$

The derivative with respect to $Q$ is $1 - p_1(Q)$, which is always positive, so the loss increases monotonically in $Q$ and approaches, but never exceeds, its supremum $b \cdot \ln(2)$ as $Q \to \infty$.
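
A quick numeric check of the bound (two outcomes, $b = 100$):

```python
import math

b = 100.0
bound = b * math.log(2)

def cost(q1, q2):
    """Two-outcome LMSR cost function."""
    return b * math.log(math.exp(q1 / b) + math.exp(q2 / b))

prev = -1.0
for Q in [10, 100, 500, 1000, 2000]:
    loss = Q - (cost(Q, 0) - cost(0, 0))
    assert loss > prev       # loss grows monotonically with Q...
    assert loss < bound      # ...but never reaches b * ln(2)
    prev = loss
    print(f"Q={Q:>5}: loss ${loss:.4f}  (bound ${bound:.4f})")
```

By $Q = 2000$ the loss agrees with $b \cdot \ln(2) \approx \$69.31$ to several decimal places, while remaining strictly below it.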

Subsidy as Information Elicitation Cost

Think of the AMM subsidy as paying for information. The market operator says: "I will spend up to $X to learn the crowd's probability estimate for this event."

This reframes the cost:

  • A weather prediction market with $b = 100$ costs at most $\$69.31$ to operate. In exchange, you get a continuously updated probability estimate from the crowd.
  • Compare this to hiring a forecaster ($500+/hour) or running a survey ($1000+).
  • The AMM subsidy is often remarkably cost-effective for information gathering.

Comparing Subsidy Costs Across Mechanisms

| Mechanism | Maximum loss formula | Loss for 2 outcomes | Loss for 10 outcomes |
|---|---|---:|---:|
| LMSR ($b = 100$) | $b \cdot \ln(n)$ | $69.31 | $230.26 |
| LMSR ($b = 500$) | $b \cdot \ln(n)$ | $346.57 | $1,151.29 |
| CPMM (reserves = 100 each) | Unbounded (grows with volume) | Varies | Varies |
| LS-LMSR ($\alpha = 1$) | Unbounded (but self-funding tendency) | Varies | Varies |

LMSR's bounded loss is a major advantage for budget-conscious operators. CPMM and LS-LMSR can potentially lose more, but they may also earn more from fees and spreads.

Actual vs. Maximum Loss

In practice, the AMM rarely loses the maximum amount. The maximum loss requires all trading to be on the winning side --- meaning every trader correctly predicted the outcome. In reality:

  • Some traders bet wrong, and the AMM profits from those bets
  • Trading on both sides partially offsets
  • Typical actual losses are 30-60% of maximum loss in empirical studies

The helper below computes the AMM's realized profit or loss once the market resolves:

def calculate_amm_pnl(market, winning_outcome):
    """
    Calculate the AMM's profit/loss after market resolution.

    Args:
        market: An LMSR market that has been traded.
        winning_outcome: Index of the outcome that occurred.

    Returns:
        Dictionary with P&L breakdown.
    """
    # Money collected = current cost - initial cost
    initial_cost = market.b * np.log(market.num_outcomes)
    current_cost = market.cost(market.quantities)
    money_collected = current_cost - initial_cost

    # Money paid out = shares of winning outcome
    payout = market.quantities[winning_outcome]

    # Net P&L for AMM (positive = profit)
    pnl = money_collected - max(payout, 0)

    return {
        'money_collected': money_collected,
        'payout': max(payout, 0),
        'net_pnl': pnl,
        'max_possible_loss': market.max_loss(),
        'loss_ratio': -pnl / market.max_loss() if pnl < 0 else 0
    }

8.9 Multi-Outcome AMMs

Extending LMSR to $n$ Outcomes

One of LMSR's greatest strengths is its natural extension to multiple outcomes. The formulas are exactly the same --- just with more outcomes:

$$C(q_1, q_2, \ldots, q_n) = b \cdot \ln\left(\sum_{i=1}^{n} e^{q_i / b}\right)$$

$$p_i = \frac{e^{q_i / b}}{\sum_{j=1}^{n} e^{q_j / b}}$$

For a market with 10 possible outcomes (e.g., "Which party will win the election?" with 10 parties), each outcome starts at 10% probability, and traders can buy shares in any outcome.

Computational Challenges

With many outcomes, numerical issues arise:

  1. Overflow. $e^{q_i / b}$ can become astronomically large if $q_i$ is much larger than $b$. Solution: use the log-sum-exp trick (subtract the maximum value before exponentiating).

  2. Underflow. For outcomes with very low probability, $e^{q_i / b}$ can become so small that floating-point arithmetic rounds it to zero. Solution: work in log-space where possible.

  3. Many outcomes. With 100+ outcomes, even basic operations like computing prices require summing 100+ exponentials. This is still fast in practice (microseconds), but matters for high-frequency applications.
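
A minimal sketch of the log-sum-exp trick from point 1; `lmsr_prices_stable` is an illustrative name, not part of the LMSR class above:

```python
import numpy as np

def lmsr_prices_stable(q, b):
    """Numerically stable LMSR prices via the log-sum-exp trick."""
    z = np.asarray(q, dtype=float) / b
    z -= z.max()      # shift so the largest exponent is 0: no overflow
    e = np.exp(z)     # the shift cancels in the ratio, so prices are unchanged
    return e / e.sum()

# A naive softmax overflows here (exp(5000) is inf); the stable version does not
q = np.array([500000.0, 0.0, -500000.0])
p = lmsr_prices_stable(q, b=100.0)
print(p)
assert np.isfinite(p).all() and abs(p.sum() - 1.0) < 1e-12
```

Subtracting the maximum multiplies numerator and denominator by the same constant, so the softmax is mathematically identical but every exponent is at most zero.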

Normalization

LMSR automatically normalizes prices to sum to 1, which is critical for multi-outcome markets. Compare this to running separate binary markets for each outcome, which requires manual normalization and can lead to arbitrage opportunities.

For example, in an election market with 5 candidates:

  • Separate binary markets: Each candidate has their own Yes/No market. Prices might sum to 1.15 (overround) or 0.90 (underround), requiring arbitrage to correct.
  • Single LMSR market: All 5 candidates in one market. Prices automatically sum to 1.00. No normalization needed.

Python $n$-Outcome LMSR

The LMSR class we built in Section 8.2 already handles $n$ outcomes. Here is how to use it:

# Election market: 5 candidates
candidates = ["Alice", "Bob", "Charlie", "Diana", "Eve"]
market = LMSR(num_outcomes=5, b=200)

print("Initial probabilities:")
for name, prob in zip(candidates, market.prices()):
    print(f"  {name}: {prob:.1%}")
# Each starts at 20%

# Poll shows Alice leading
cost = market.execute_trade(0, 50)  # Buy 50 "Alice" shares
print(f"\nAfter buying 50 Alice shares (cost: ${cost:.2f}):")
for name, prob in zip(candidates, market.prices()):
    print(f"  {name}: {prob:.1%}")

# Bob drops out, sell his shares
cost = market.execute_trade(1, -30)  # Sell 30 "Bob" shares
print(f"\nAfter selling 30 Bob shares (received: ${-cost:.2f}):")
for name, prob in zip(candidates, market.prices()):
    print(f"  {name}: {prob:.1%}")

Multi-Outcome CPMM

Extending CPMM to multiple outcomes uses the constant product of all reserves:

$$\prod_{i=1}^{n} x_i = k$$

For 3 outcomes with reserves $(x_1, x_2, x_3)$:

$$x_1 \cdot x_2 \cdot x_3 = k$$

The price for outcome $i$ becomes:

$$p_i = \frac{1/x_i}{\sum_{j=1}^{n} 1/x_j}$$

This works but has a significant drawback: with many outcomes, individual reserves can become very large or very small, leading to extreme slippage for some outcomes.
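
A small sketch of this price rule (`cpmm_prices` is a hypothetical helper):

```python
import numpy as np

def cpmm_prices(reserves):
    """Multi-outcome CPMM prices: p_i = (1/x_i) / sum_j (1/x_j).
    A scarcer reserve (more shares bought) means a higher price."""
    inv = 1.0 / np.asarray(reserves, dtype=float)
    return inv / inv.sum()

reserves = [50.0, 100.0, 100.0]   # outcome 0's reserve has been drained by buying
p = cpmm_prices(reserves)
print(p)   # outcome 0 is priced highest
assert abs(p.sum() - 1.0) < 1e-12
assert p[0] == max(p)
```

With reserves of 50/100/100 the prices come out to 0.50/0.25/0.25; a reserve driven toward zero would dominate the sum of inverses, which is the extreme-slippage drawback noted above.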


8.10 Comparing AMM Mechanisms

Head-to-Head Comparison

Let us run the same sequence of trades through LMSR, CPMM, and LS-LMSR and compare their behavior.

Scenario: A two-outcome market with 10 sequential trades.

| Property | LMSR ($b = 100$) | CPMM (reserves = 100) | LS-LMSR ($\alpha = 0.5$) |
|---|---|---|---|
| Initial price | 50/50 | 50/50 | 50/50 |
| Price after 10 "Yes" shares | 52.5% | 55.3% | ~54% (depends on total volume) |
| Price after 50 "Yes" shares | 62.2% | 75.0% | ~58% ($b$ has grown) |
| Slippage on 10-share trade | $0.13 | $1.11 | ~$0.20 |
| Max operator loss | $69.31 | Unbounded | Unbounded |
| Can always trade? | Yes | Limited by reserves | Yes |
| Price near extremes (99%) | Still tradeable | Very expensive | Still tradeable |
| Multi-outcome support | Excellent | Functional but less elegant | Excellent |

When to Use Each

Choose LMSR when:

  • You need predictable, bounded subsidy costs
  • You are operating many markets with a fixed budget
  • Multi-outcome markets are common
  • You want well-understood theoretical properties
  • Example: A company running internal prediction markets for strategic planning

Choose CPMM when:

  • The market will attract significant liquidity from participants
  • You want simplicity and familiarity (especially in DeFi contexts)
  • Fees can compensate for unbounded loss
  • Two-outcome markets dominate
  • Example: A decentralized prediction market on a blockchain

Choose LS-LMSR when:

  • You want adaptive liquidity without manual parameter tuning
  • Market activity is unpredictable (could be thin or heavy)
  • You are willing to accept potentially higher costs for better behavior
  • Example: A platform like Manifold Markets with diverse market types

Real-World Platform Choices

| Platform | Mechanism | Notes |
|---|---|---|
| Manifold Markets | Modified AMM (historically CPMM-like, evolved over time) | Transitioned through several AMM designs as the platform grew |
| Polymarket | Hybrid (CLOB + AMM backstop) | Uses order books for liquid markets; AMM for thin ones |
| Metaculus | Scoring rule (not AMM-based) | Uses aggregated probability distributions, not share trading |
| Kalshi | CLOB | Regulated exchange with traditional order matching |
| Augur | AMM (various) | Decentralized; has experimented with multiple AMM designs |
| Gnosis/Omen | LMSR and variations | Early DeFi prediction market; used LMSR |

8.11 Advanced: AMM Research Frontiers

This section surveys active research areas in AMM design. These topics are at the frontier of prediction market theory and practice.

Dynamic AMMs

Beyond LS-LMSR, researchers have explored AMMs whose entire cost function structure changes over time:

  • Time-weighted AMMs: Liquidity decreases as the market approaches its resolution date. Rationale: near resolution, information is most valuable, so prices should be more responsive.
  • Volatility-adjusted AMMs: $b$ increases during periods of rapid price movement (to prevent manipulation) and decreases during calm periods (to encourage trading).
  • Market-state-dependent AMMs: Different AMM formulas apply depending on whether the price is near 50%, near 0%, or near 100%.

Combinatorial AMMs

Many interesting questions are not independent. "Will it rain tomorrow?" and "Will the outdoor concert be cancelled?" are correlated. Combinatorial AMMs handle markets over combinations of events:

  • A market over $m$ binary events has $2^m$ possible combined outcomes
  • Naive LMSR over all $2^m$ outcomes is computationally intractable for even moderate $m$
  • Research focuses on efficient approximations and structured cost functions
  • Hanson's original paper proposed a combinatorial LMSR, but practical implementations remain challenging

AMMs with Fees

Standard LMSR collects no explicit fees --- the spread is implicitly built into the cost function. Adding explicit fees changes the dynamics:

  • Proportional fees: Charge $f\%$ on each trade. Revenue helps offset the subsidy.
  • Dynamic fees: Fees increase during high volatility (preventing manipulation) and decrease during calm periods (encouraging trading).
  • Fee distribution: In DeFi, fees are distributed to liquidity providers, creating incentives for others to fund the AMM.

The challenge: fees break some of LMSR's elegant theoretical properties (like exact path independence). The trade-off is worth it in practice for most platforms.
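
A standalone sketch of this effect with a hypothetical 2% proportional fee: without fees, a buy-then-sell round trip is exactly free by path independence; the fee makes the same round trip cost money, so what a trader pays now depends on how many times they cross the market, not just the endpoints:

```python
import math

b, fee = 100.0, 0.02   # 2% proportional fee charged on the cost of buys

def cost(q1, q2):
    """Two-outcome LMSR cost function (fee-free core)."""
    return b * math.log(math.exp(q1 / b) + math.exp(q2 / b))

# Without fees, a buy-then-sell round trip is exactly free (path independence)
buy = cost(10, 0) - cost(0, 0)     # pay to buy 10 shares
sell = cost(0, 0) - cost(10, 0)    # receive when selling them back (negative)
assert abs(buy + sell) < 1e-12

# With a fee on the buy leg, the same round trip costs the trader money
round_trip = buy * (1 + fee) + sell
print(f"round-trip cost with fee: ${round_trip:.4f}")
assert round_trip > 0
```

The fee-free round trip nets to zero to machine precision, while the fee-bearing one leaves the trader strictly out of pocket, which is exactly the broken path independence described above.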

Concentrated Liquidity

Inspired by Uniswap v3, concentrated liquidity AMMs allow the market maker to focus liquidity in a specific price range:

  • If the current probability is around 70%, concentrating liquidity in the 50%-90% range gives better prices for trades in that range.
  • Outside the range, liquidity drops off sharply.
  • This is more capital-efficient but requires active management.

For prediction markets, concentrated liquidity is particularly interesting near resolution time, when the true probability is likely near 0% or 100%.

Bayesian Market Makers

Some researchers have proposed AMMs that explicitly model beliefs using Bayesian updating:

  • The AMM starts with a prior distribution over the true probability.
  • Each trade is treated as evidence, updating the posterior.
  • Prices reflect the AMM's posterior mean (or median).

This approach has theoretical appeal but adds computational complexity and requires specifying a prior.

AMMs for Continuous Outcomes

Most AMMs handle discrete outcomes (Yes/No, or a finite set). But many interesting questions have continuous answers: "What will GDP growth be?" or "What will the temperature be on July 4th?"

Research in this area includes:

  • Discretizing the continuous range into bins and running a multi-outcome AMM
  • Using parametric distributions (e.g., the AMM maintains a Gaussian with mean and variance, and trades adjust these parameters)
  • Kernel-based approaches that maintain a non-parametric distribution


8.12 Chapter Summary

This chapter has covered the theory and practice of automated market makers for prediction markets. Here are the key ideas:

The Problem. Prediction markets suffer from thin order books --- not enough traders to ensure continuous liquidity. Without liquidity, prices are unreliable and markets fail.

The LMSR Solution. Robin Hanson's Logarithmic Market Scoring Rule provides algorithmic liquidity through a cost function $C(\mathbf{q}) = b \cdot \ln(\sum e^{q_i/b})$. Prices are the softmax of quantities scaled by $b$. LMSR has bounded loss ($b \cdot \ln(n)$), path independence, and natural multi-outcome support.

The CPMM Alternative. Constant Product Market Makers use the $x \cdot y = k$ invariant. They are simpler and familiar from DeFi, but have unbounded loss and less elegant multi-outcome support.

LS-LMSR Adaptation. The Liquidity-Sensitive LMSR makes $b$ adaptive, growing with trading volume. This gives the best of both worlds: responsive early prices and stable mature prices.

Cost Function Properties. Valid cost functions must be convex, path-independent, arbitrage-free (prices sum to 1), and translationally invariant. These properties ensure fair, consistent pricing.

The Liquidity Parameter. $b$ controls the trade-off between price sensitivity and subsidy cost. Choose $b$ based on expected volume, acceptable subsidy, and desired responsiveness.

Market Maker Subsidy. AMM liquidity is not free --- someone pays for it. The subsidy is the cost of information elicitation. LMSR's bounded loss makes this cost predictable.

Practical Choices. Real platforms choose among LMSR, CPMM, LS-LMSR, and hybrids based on their specific needs for budget predictability, multi-outcome support, adaptive liquidity, and integration with existing systems.


What's Next

In Chapter 9: Order Books and Matching Engines, we will dive deep into the alternative to AMMs: the central limit order book (CLOB). You will learn how order books work, how matching engines prioritize trades, and how professional market makers provide liquidity without algorithmic subsidies. We will also explore hybrid systems that combine the best of both worlds.

With AMMs and order books in hand, you will have a complete understanding of the two fundamental approaches to prediction market microstructure --- the foundation for everything that follows in Part III on trading strategies.


Key Equations Reference

| Concept | Formula |
|---|---|
| LMSR cost function | $C(\mathbf{q}) = b \cdot \ln\left(\sum_{i} e^{q_i/b}\right)$ |
| LMSR price | $p_i = e^{q_i/b} / \sum_j e^{q_j/b}$ |
| LMSR max loss | $b \cdot \ln(n)$ |
| CPMM invariant | $x \cdot y = k$ |
| CPMM price | $p_A = y / (x + y)$ |
| LS-LMSR adaptive $b$ | $b(\mathbf{q}) = \alpha \cdot \sum_i q_i$ |
| Trade cost | $C(\mathbf{q}_{\text{after}}) - C(\mathbf{q}_{\text{before}})$ |
| Slippage | Effective price $-$ instantaneous price |
| Price impact (2-outcome, near 50%) | $\approx \Delta / (4b)$ |