> "The question is no longer whether prediction markets work --- the evidence is overwhelming that they do. The question is where they work best and how to deploy them effectively."
In This Chapter
- 40.1 The Application Landscape
- 40.2 Corporate Prediction Markets
- 40.3 Scientific Forecasting
- 40.4 Pandemic and Public Health Forecasting
- 40.5 Geopolitical Forecasting
- 40.6 Economic and Financial Forecasting
- 40.7 Sports and Entertainment
- 40.8 Climate and Weather
- 40.9 Technology and Product Forecasting
- 40.10 Government and Policy
- 40.11 Building Your Own Application
- 40.12 Chapter Summary
Chapter 40: Real-World Applications
"The question is no longer whether prediction markets work --- the evidence is overwhelming that they do. The question is where they work best and how to deploy them effectively." --- Robin Hanson, Overcoming Bias, 2012
Prediction markets are not merely an academic curiosity or a speculative toy. They have been deployed --- sometimes quietly, sometimes with fanfare --- across an extraordinary range of domains. From the corridors of Google's Mountain View campus to the situation rooms of intelligence agencies, from pharmaceutical boardrooms evaluating clinical trial success to epidemiologists tracking pandemic trajectories, prediction markets have proven their value as aggregators of distributed knowledge.
This chapter provides an exhaustive tour of real-world prediction market applications. We examine what has worked, what has failed, and what lessons emerge for practitioners who want to deploy these tools in new contexts. We go beyond anecdote to present data, frameworks, and working code that you can adapt.
40.1 The Application Landscape
40.1.1 Mapping the Territory
Prediction markets have been applied, or proposed for application, in at least a dozen major domains. Before diving into each, it helps to see the full landscape.
Table 40.1: Prediction Market Application Domains
| Domain | Maturity | Example Platforms | Key Metrics |
|---|---|---|---|
| Elections / Politics | Mature | PredictIt, Polymarket, IEM | Brier scores vs. polls |
| Sports / Entertainment | Mature | Betfair, various bookmakers | Market efficiency |
| Corporate (internal) | Established | Google, HP, Microsoft, Intel | Decision improvement |
| Scientific Forecasting | Emerging | Replication Markets, DARPA | Calibration accuracy |
| Pandemic / Public Health | Emerging | Metaculus, Good Judgment Open | Timeliness, coverage |
| Geopolitical / Intelligence | Established | Good Judgment Project, IARPA | Brier scores vs. analysts |
| Economic / Financial | Mature | CME Fed funds futures, TIPS | Forecast accuracy |
| Climate / Weather | Early | CME weather derivatives | Hedging effectiveness |
| Technology | Early | Metaculus, Manifold | Coverage gaps |
| Government / Policy | Proposed/Early | DARPA (cancelled), Metaculus | Policy relevance |
40.1.2 The Adoption Curve
Prediction markets follow a characteristic adoption pattern across domains:
- Academic demonstration --- researchers show that markets outperform alternatives in a controlled setting.
- Pilot deployment --- a forward-thinking organization runs a small-scale internal market.
- Scaling challenges --- liquidity, participation, and incentive problems emerge at scale.
- Institutional integration --- the market becomes part of standard decision-making processes.
- Regulatory accommodation --- legal and regulatory frameworks catch up to practice.
Most domains sit between stages 2 and 4. Elections and sports betting have reached stage 5 in many jurisdictions, while domains like climate prediction markets remain at stage 1 or 2.
40.1.3 Proven vs. Emerging Use Cases
Proven use cases share several characteristics:
- Clear, verifiable resolution criteria
- Sufficient participant diversity
- Adequate incentive structures
- Institutional tolerance for the mechanism
- Questions that aggregate genuinely distributed information
Emerging use cases typically struggle with one or more of these requirements. Scientific forecasting, for example, has excellent resolution criteria (the paper replicates or it does not) but thin participation. Geopolitical forecasting has strong institutional demand but ambiguous resolution criteria (when exactly has a "conflict" begun?).
40.1.4 Adoption Barriers
Across all domains, prediction markets face common barriers:
Regulatory barriers. In the United States, real-money prediction markets face significant legal constraints under the Commodity Exchange Act. The CFTC has been reluctant to grant broad permission for event contracts, though its stance has evolved; the 2024 Kalshi ruling on election contracts marked a significant shift.
Cultural resistance. Many organizations resist the idea of "betting" on outcomes, particularly when those outcomes involve human welfare (pandemic deaths, conflict casualties). Framing matters enormously --- "forecasting tournament" is more palatable than "betting market."
Thin markets. Many potentially valuable prediction markets fail because they cannot attract enough participants to generate meaningful prices. This is the fundamental chicken-and-egg problem: markets need liquidity to be useful, but participants only join useful markets.
Incentive alignment. In corporate settings, employees may fear that revealing private information through market trades could have career consequences. A sales representative who bets against their own division's target may be seen as disloyal rather than honest.
Resolution ambiguity. Many of the most interesting questions --- "Will AI cause significant economic disruption by 2030?" --- are difficult to operationalize with clear resolution criteria. Markets on ambiguous questions tend to produce ambiguous signals.
40.2 Corporate Prediction Markets
40.2.1 The Promise of Internal Markets
Corporate prediction markets represent one of the most promising applications of the technology. Large organizations possess enormous amounts of distributed knowledge --- information scattered across thousands of employees, each with their own expertise, observations, and judgments. Traditional hierarchical information flows (reports moving up the chain of command) are slow, filtered, and subject to systematic distortions. Prediction markets offer a mechanism to aggregate this distributed knowledge rapidly and honestly.
The economic argument is straightforward. If a company can improve its quarterly sales forecasts by even a few percentage points, the value in terms of better inventory management, staffing decisions, and capital allocation can be enormous. A major technology company with $100 billion in annual revenue that improves forecast accuracy by 2% might save hundreds of millions in operational efficiency.
40.2.2 Case Studies
Google
Google has operated internal prediction markets since 2005, making it one of the longest-running corporate implementations. Bo Cowgill's research on Google's internal markets (published in 2009 and expanded in subsequent work) provides some of the most detailed evidence on corporate prediction market performance.
Key findings from Google's experience:
- Accuracy. Google's markets outperformed official internal forecasts for product launch dates, user adoption metrics, and various operational targets.
- Participation. Markets attracted broad participation across the company, though engineers were overrepresented relative to other roles.
- Biases. Markets exhibited detectable biases. Employees tended to be optimistic about their own projects and about Google overall. New employees were better calibrated than long-tenured ones, perhaps because they had less emotional investment.
- Information aggregation. The markets were particularly good at surfacing "bad news" that might not travel well through normal reporting channels. When a product was behind schedule, market prices often reflected this before official project updates did.
- Trading patterns. Trading activity spiked around internal announcements and product milestones, suggesting that participants were responding to real information rather than trading randomly.
The compensation structure used play money with prizes, which sidestepped regulatory issues while still providing meaningful incentives.
Hewlett-Packard
HP's prediction market experiments, led by Kay-Yut Chen and Charles Plott, are among the most rigorously studied in the corporate context. Beginning in the late 1990s, HP used internal markets to forecast printer sales.
Key findings:
- Superior accuracy. In 6 out of 8 test cases, the prediction market outperformed HP's official sales forecasts, which were produced by experienced analysts using sophisticated statistical models.
- Small participant pools. Remarkably, these markets operated with only 12-20 participants, challenging the conventional wisdom that markets need many participants to function well.
- Mechanism design. HP experimented with different market structures, including combinatorial markets that allowed trading on multiple dimensions simultaneously (e.g., sales volume AND profit margin).
- Persistence. Despite the positive results, HP's prediction markets did not become a permanent institution. Organizational changes, leadership transitions, and the difficulty of maintaining champion support all contributed to their eventual discontinuation.
Ford Motor Company
Ford used prediction markets to forecast demand for new vehicle models. The application was particularly challenging because product launch predictions involve long time horizons (years from concept to showroom) and high stakes.
Ford's experience highlighted:
- Long-horizon challenges. Markets on events far in the future tend to be illiquid and influenced by discounting effects.
- Integration with existing processes. Ford attempted to integrate market signals with existing demand forecasting models, using the market as one input among many.
- Cross-functional insight. Markets were most valuable when they attracted participants from different functional areas --- engineering, marketing, sales, finance --- each bringing different information to bear.
Microsoft
Microsoft has experimented with prediction markets in several contexts, most notably for predicting software quality metrics. In the mid-2000s, Microsoft ran markets on the number of bugs that would be found in different software products before release.
The "bug markets" produced several insights:
- Developer knowledge. Developers who worked on specific components had genuine private information about code quality, and markets helped surface this information.
- Incentive tensions. There was an inherent tension: developers who knew their code was buggy could profit from the market, but the act of revealing this information (through trading) might reflect poorly on their work. Microsoft addressed this partly through anonymity protections.
- Complementarity. The markets worked best as a complement to, not a replacement for, existing quality assurance processes.
Intel
Intel used prediction markets for supply chain forecasting and demand prediction for semiconductor products. Given the enormous capital costs of semiconductor manufacturing (a single fabrication facility costs $10-20 billion), even small improvements in demand forecasting can justify significant investment in prediction market infrastructure.
Intel's markets focused on:
- Yield predictions. Forecasting the percentage of usable chips from a manufacturing run.
- Demand timing. Predicting when specific customer segments would adopt new chip generations.
- Competitive intelligence. Aggregating employee assessments of competitor product timelines.
40.2.3 What Works and What Doesn't
What works in corporate prediction markets:
- Questions with clear resolution. "Will Product X ship by Date Y?" is much better than "Will Product X be successful?"
- Cross-functional questions. Markets add the most value when the question requires information from multiple departments.
- Short to medium time horizons. Questions resolving in 1-6 months tend to generate the most engagement and the best-calibrated prices.
- Executive sponsorship. Markets that have visible support from senior leadership attract more participation.
- Low-stakes anonymity. When employees can trade without fear of career consequences, information flows more freely.
What doesn't work:
- Questions everyone knows the answer to. Markets add no value when information is already centralized.
- Questions no one knows the answer to. Pure noise produces no signal, regardless of the aggregation mechanism.
- Small, homogeneous participant pools. If all traders share the same information set, the market cannot aggregate diversity.
- Markets without champions. Corporate prediction markets that lose their internal advocate tend to wither.
- Mandatory participation. Forcing employees to trade leads to random trading and poor price quality.
40.2.4 Implementation Guide
For organizations considering corporate prediction markets, the following framework provides a starting point:
Phase 1: Assessment (2-4 weeks)
- Identify 5-10 candidate questions where forecasting accuracy matters
- Assess cultural readiness and leadership support
- Determine regulatory constraints (real vs. play money)
- Estimate the potential participant pool

Phase 2: Design (4-8 weeks)
- Select a market mechanism (continuous double auction, LMSR, etc.)
- Design the incentive structure
- Build or acquire the platform
- Create resolution criteria for initial questions

Phase 3: Pilot (8-16 weeks)
- Launch with 50-200 participants
- Run 10-20 markets simultaneously to ensure engagement
- Monitor trading patterns for quality signals
- Compare market forecasts to official forecasts

Phase 4: Evaluation (2-4 weeks)
- Measure forecast accuracy against baselines
- Survey participants on experience
- Assess organizational impact
- Decide whether to scale

Phase 5: Scaling
- Expand participant pool
- Integrate market signals into decision processes
- Build internal expertise in market design
- Establish ongoing governance
40.2.5 Python Corporate PM Simulator
The following Python code (see code/example-01-corporate-pm.py) simulates a corporate prediction market with realistic features including heterogeneous information, strategic trading, and comparison to traditional forecasting methods.
```python
import numpy as np

class CorporatePMSimulator:
    """Simulates a corporate prediction market for sales forecasting.

    true_value is the true probability of the forecast event
    (e.g., that the quarterly sales target is met).
    """

    def __init__(self, true_value, n_traders=50, n_rounds=100):
        self.true_value = true_value
        self.n_traders = n_traders
        self.n_rounds = n_rounds
        self.prices = [0.5]  # Starting price

    def run(self):
        # Each trader has a noisy signal about the true value
        signals = np.clip(
            self.true_value + np.random.normal(0, 0.15, self.n_traders),
            0.01, 0.99
        )
        price = 0.5
        for r in range(self.n_rounds):
            # A random trader moves the price a step toward their signal
            trader = np.random.randint(self.n_traders)
            signal = signals[trader]
            trade_size = 0.05 * (signal - price)
            price = np.clip(price + trade_size, 0.01, 0.99)
            self.prices.append(price)
        return self.prices
```
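A quick way to sanity-check the simulator is to run it against a known ground truth and confirm that the price converges toward it. The parameter values below are illustrative:

```python
# Illustrative run: the true probability of hitting the sales target is 0.72
sim = CorporatePMSimulator(true_value=0.72, n_traders=50, n_rounds=200)
prices = sim.run()
print(f"Final market price: {prices[-1]:.3f}")  # typically lands near 0.72
```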
The full implementation includes multi-division simulation, comparison with analyst forecasts, and visualization tools.
40.3 Scientific Forecasting
40.3.1 The Replication Crisis
The replication crisis in science --- the discovery that many published findings cannot be reproduced --- represents one of the most significant challenges to scientific credibility in recent decades. Studies in psychology, medicine, economics, and other fields have found replication rates far below what the published literature would suggest.
Key findings:
- The Reproducibility Project: Psychology (2015) found that only 36% of 100 psychology studies replicated successfully.
- The SSRP (Social Sciences Replication Project, 2018) found that 62% of social science studies published in Nature and Science replicated.
- Drug development sees approximately 90% of compounds that enter clinical trials fail to reach approval.
These numbers suggest that the scientific community's own assessment of the reliability of its findings is poorly calibrated. Could prediction markets help?
40.3.2 Prediction Markets for Replicability
Several research projects have tested whether prediction markets can identify which studies will replicate before replication is attempted.
The Replication Markets Project
The Replication Markets project, funded by DARPA, asked participants to predict whether specific social and behavioral science claims would replicate. The project ran prediction markets alongside traditional surveys of expert opinion.
Key results:
- Markets outperformed surveys. Prediction markets were better calibrated than surveys of the same experts, suggesting that the market mechanism itself adds value beyond simply collecting opinions.
- Accuracy was good. Markets correctly identified non-replicating studies roughly 70% of the time, significantly above chance.
- Signals were interpretable. The market prices corresponded to meaningful features of the original studies: effect size, sample size, and methodological rigor all predicted both market prices and actual replication outcomes.
- Cost-effectiveness. Running prediction markets was far cheaper than conducting full replications, which cost roughly $50,000-$500,000 per study. Markets could serve as a triage tool, directing replication resources toward the most uncertain findings.
Mechanism
Participants in replication markets trade contracts that pay $1 if a study replicates and $0 otherwise. A market price of $0.35 for a given study implies the market collectively judges the probability of replication at 35%.
The mathematical framework for evaluating these markets uses standard scoring metrics. The Brier score for a set of $n$ predictions is:
$$\text{Brier} = \frac{1}{n} \sum_{i=1}^{n} (p_i - o_i)^2$$
where $p_i$ is the market's predicted probability and $o_i \in \{0, 1\}$ is the actual outcome. Lower is better, with 0 being perfect and 0.25 being the score of a constant 50% predictor.
For the Replication Markets project, the Brier score was approximately 0.17, compared to 0.21 for expert surveys --- a meaningful improvement.
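Computing the Brier score takes one line; the sketch below uses made-up forecasts and outcomes to show the calculation:

```python
import numpy as np

# Made-up market probabilities and realized binary outcomes
p = np.array([0.9, 0.2, 0.7, 0.6, 0.15])
o = np.array([1, 0, 1, 1, 0])

brier = np.mean((p - o) ** 2)
print(f"Brier score: {brier:.3f}")  # ~0.065; 0 is perfect, 0.25 matches a constant 0.5
```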
40.3.3 Clinical Trial Outcome Prediction
Predicting the success or failure of clinical trials is enormously valuable. Each phase of clinical development costs millions to hundreds of millions of dollars, and early identification of likely failures could save enormous resources.
Several studies have examined whether markets can predict clinical trial outcomes:
- Luckner and Weinhardt (2007) ran prediction markets on Phase III clinical trial outcomes and found moderate predictive power.
- Pharmaceutical company experiments (mostly proprietary and unpublished) have tested internal markets for prioritizing drug development pipelines.
- Academic work has shown that publicly available information (publication records, mechanism of action data, biomarker results) can be aggregated through market mechanisms to produce useful forecasts.
The challenge is that clinical trials involve deep scientific uncertainty, not merely information asymmetry. When no one truly knows whether a drug will work, prediction markets aggregate ignorance rather than knowledge. Markets perform best when someone in the participant pool has relevant information.
40.3.4 Research Prioritization and Grant Funding
A more speculative application is using prediction markets to allocate research funding. The current peer-review-based funding system has well-documented problems: conservatism (reviewers favor incremental work), bias (toward established researchers and institutions), and poor calibration (funded projects are not systematically more impactful than unfunded ones).
Prediction markets could address these problems by:
- Predicting impact. Markets on "Will this research program produce a high-impact publication within 5 years?" could help identify promising proposals.
- Predicting feasibility. Markets on "Will this proposed experiment produce the predicted result?" could identify overconfident proposals.
- Comparative assessment. Markets on relative impact ("Will Project A produce more citations than Project B?") might provide better signal than absolute assessments.
The DARPA Forecasting Science and Technology program (ForeST) explored some of these ideas, with mixed results. The fundamental challenge remains incentivizing participation by people with genuine scientific expertise.
40.3.5 Python Replication Market Simulator
```python
import numpy as np

def simulate_replication_market(n_studies=100, n_traders=30, n_rounds=50):
    """Simulate a prediction market for scientific replicability."""
    # Ground truth: each study has a true replication probability
    true_probs = np.random.beta(2, 3, n_studies)  # Skewed toward non-replication
    outcomes = np.random.binomial(1, true_probs)

    # Traders observe noisy signals of each study's replication probability
    predictions = np.full(n_studies, 0.5)
    for study in range(n_studies):
        price = 0.5
        for r in range(n_rounds):
            # Each round, a fresh noisy signal arrives and moves the price
            signal = np.clip(
                true_probs[study] + np.random.normal(0, 0.2), 0.01, 0.99
            )
            price += 0.1 * (signal - price)
            price = np.clip(price, 0.01, 0.99)
        predictions[study] = price

    brier = np.mean((predictions - outcomes) ** 2)
    return predictions, outcomes, brier
```
The full implementation in code/example-01-corporate-pm.py and related files extends this with calibration analysis and comparison to expert surveys.
40.4 Pandemic and Public Health Forecasting
40.4.1 The COVID-19 Test Case
The COVID-19 pandemic provided the most significant real-world test of prediction market-style forecasting for public health. As the pandemic unfolded from early 2020 onward, several platforms and projects generated forecasts on key pandemic parameters.
Metaculus hosted hundreds of pandemic-related questions, including:
- When would vaccines receive emergency use authorization?
- What would cumulative case counts reach by specific dates?
- Would specific interventions (lockdowns, mask mandates) be implemented?
- When would pandemic-era restrictions be lifted?
Good Judgment Open ran pandemic forecasting challenges, recruiting both amateur and expert forecasters.
Polymarket (and other crypto-based platforms) hosted markets on pandemic-related policy decisions.
40.4.2 Performance Assessment
How well did prediction markets and market-like mechanisms perform during the pandemic?
Strengths:
- Vaccine timeline. Forecasting platforms generally produced better estimates of vaccine development timelines than the mainstream expert consensus early in the pandemic. In March 2020, many experts suggested vaccines would take 18-24 months or longer; well-calibrated forecasters assigned significant probability to faster timelines, which proved correct.
- Aggregation speed. Markets incorporated new information (variant emergence, trial results, policy changes) faster than official forecasts from institutions like the WHO or national health agencies.
- Uncertainty quantification. Unlike point forecasts from epidemiological models, markets provided natural probability distributions, explicitly quantifying uncertainty.

Weaknesses:
- Thin participation. Many pandemic prediction markets had too few participants to generate reliable signals.
- Domain expertise gaps. Few virologists or epidemiologists participated in public prediction markets, limiting the domain-specific knowledge available for aggregation.
- Anchoring and herding. Early prices strongly influenced subsequent trading, and some markets showed herding behavior as participants followed the crowd rather than trading on independent information.
- Resolution challenges. Questions like "When will the pandemic end?" proved almost impossible to operationalize.
40.4.3 Case Study: Vaccine Timeline Forecasting
One of the most studied pandemic forecasting successes was the prediction of COVID-19 vaccine timelines. Let us trace the forecast evolution:
March 2020: Metaculus community median prediction for first vaccine EUA was approximately October 2021, broadly in line with expert opinion.
May 2020: After early Phase I trial results from Moderna and Pfizer-BioNTech, forecasts shifted earlier, to approximately June 2021.
July 2020: With positive Phase I/II results and Operation Warp Speed funding, forecasts moved to early 2021.
October 2020: With Phase III trials underway and interim data suggesting high efficacy, forecasts converged on December 2020-January 2021.
Actual outcome: Pfizer-BioNTech received EUA on December 11, 2020. Moderna followed on December 18, 2020.
The forecasting community --- operating through market-like mechanisms --- tracked the evolving evidence well and maintained appropriate uncertainty ranges throughout.
40.4.4 CDC Flu Forecasting
Before COVID-19, the most established public health forecasting effort was the CDC's FluSight challenge, which has run since the 2013-2014 flu season. Teams submit probabilistic forecasts of influenza-like illness activity at the national and regional level.
While most participating teams use statistical or mechanistic models, the ensemble approach --- combining forecasts from multiple models --- echoes the logic of prediction markets. The CDC's "ensemble" forecast, which averages across submitted forecasts, has consistently been among the most accurate, demonstrating the wisdom-of-crowds principle even when the "crowd" consists of models rather than human traders.
40.4.5 Lessons Learned
The pandemic experience yielded several key lessons for public health forecasting:
- Speed matters. In a crisis, even moderately accurate early forecasts are more valuable than highly accurate late ones. Markets' speed advantage is particularly important here.
- Incentive design for crisis contexts. During a pandemic, intrinsic motivation (civic duty, intellectual curiosity) may be sufficient to drive participation. However, accuracy incentives remain important to prevent markets from becoming mere opinion polls.
- Expert integration. The best pandemic forecasts came from mechanisms that combined domain expertise (epidemiologists, virologists) with forecasting skill (experienced superforecasters, quantitative analysts). Pure crowd wisdom without expert input performed poorly on technical questions.
- Communication challenges. Probabilistic forecasts are difficult to communicate to policymakers and the public. A market price of 0.30 on "Will there be a new variant of concern by Q2?" is not self-explanatory.
- Ethical considerations. Markets on pandemic death tolls raise ethical concerns. While the information is valuable, the framing of "betting on deaths" is troubling to many. Careful design can mitigate these concerns (e.g., focusing on policy outcomes rather than body counts).
40.4.6 Python Pandemic Dashboard
```python
import numpy as np

class PandemicForecaster:
    """Multi-signal pandemic forecasting aggregator."""

    def __init__(self):
        self.signals = {}   # name -> point forecast (probability)
        self.weights = {}   # name -> confidence weight

    def add_signal(self, name, forecast, confidence):
        self.signals[name] = forecast
        self.weights[name] = confidence

    def aggregate(self):
        """Confidence-weighted average of all registered signals."""
        if not self.signals:
            return None
        total_weight = sum(self.weights.values())
        weighted_sum = sum(
            self.signals[k] * self.weights[k] for k in self.signals
        )
        return weighted_sum / total_weight
```
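A minimal usage sketch, with hypothetical signal names and confidence weights:

```python
f = PandemicForecaster()
f.add_signal("prediction_market", 0.70, 3.0)       # highest confidence weight
f.add_signal("expert_survey", 0.55, 2.0)
f.add_signal("epidemiological_model", 0.60, 1.0)
print(f"Aggregate forecast: {f.aggregate():.3f}")  # weighted mean = 0.633
```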
The full pandemic dashboard implementation is available in the chapter's code directory.
40.5 Geopolitical Forecasting
40.5.1 The Intelligence Community's Forecasting Problem
Intelligence agencies face an extraordinarily difficult forecasting challenge: predicting events in complex, adversarial environments with incomplete and often deliberately misleading information. The consequences of poor forecasting --- the 9/11 intelligence failures, the Iraq WMD assessment, the surprise of the Arab Spring --- can be measured in lives and trillions of dollars.
Traditional intelligence analysis relies on individual analysts or small teams producing narrative assessments. These assessments are subject to well-documented cognitive biases: confirmation bias, anchoring, groupthink, and the "intelligence cycle" problem where collection priorities are shaped by existing hypotheses.
40.5.2 The Good Judgment Project
The Good Judgment Project (GJP), led by Philip Tetlock and Barbara Mellers, represents the most rigorous test of crowd-based forecasting for geopolitical questions. Funded by IARPA (Intelligence Advanced Research Projects Activity) as part of the ACE (Aggregative Contingent Estimation) program, the GJP tested whether crowds of interested amateurs could outperform professional intelligence analysts.
Design. The GJP recruited thousands of volunteers and asked them to make probabilistic forecasts on hundreds of geopolitical questions over four years (2011-2015). Questions covered topics like:
- "Will North Korea conduct a nuclear test before the end of 2013?"
- "Will the Syrian government use chemical weapons before June 2014?"
- "Will Scotland vote for independence in the 2014 referendum?"
Key findings:
- Superforecasters. A small fraction of participants (roughly 2%) were dramatically better than the rest. These "superforecasters" outperformed professional intelligence analysts with access to classified information by approximately 30% as measured by Brier scores.
- Team superiority. When superforecasters were grouped into teams, their performance improved further. The best teams outperformed the intelligence community's benchmark by over 40%.
- Aggregation methods matter. Simple averaging of forecasts was good; more sophisticated aggregation methods (extremizing the average, weighting by past performance) were better. The optimal aggregation algorithm was:

$$p_{\text{agg}} = \frac{\bar{p}^a}{\bar{p}^a + (1-\bar{p})^a}$$

where $\bar{p}$ is the simple average forecast and $a > 1$ is the extremizing parameter (typically $a \approx 2.5$). This transformation pushes the average toward 0 or 1, correcting for the tendency of averages to be insufficiently extreme (see the sketch after this list).
- Training helps. Brief training in probabilistic reasoning, cognitive debiasing, and base rate estimation improved forecast accuracy by approximately 10%.
- Updating frequency. Better forecasters updated their forecasts more frequently in response to new information, but avoided both under-reaction and over-reaction.
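A minimal sketch of the extremizing transformation, using an illustrative average forecast:

```python
def extremize(p_bar, a=2.5):
    """Push an average forecast toward 0 or 1 (GJP-style aggregation)."""
    return p_bar**a / (p_bar**a + (1 - p_bar)**a)

print(f"{extremize(0.70):.3f}")  # an average of 0.70 becomes ~0.893
```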
40.5.3 Conflict Prediction
Predicting armed conflicts is one of the highest-stakes applications of forecasting. Several approaches have been tested:
Statistical models. The Political Instability Task Force (PITF, formerly the State Failure Task Force) has developed statistical models of state failure and conflict onset using structural variables (regime type, infant mortality, neighborhood conflict). These models provide useful base rates but struggle with timing and specific trigger events.
Expert elicitation. Structured expert judgment, as in the Delphi method, produces calibrated forecasts but is slow and expensive.
Prediction markets. Markets on conflict questions have been run by Good Judgment Open, Metaculus, and other platforms. They tend to perform well on questions about continuation of existing conflicts but poorly on onset of new ones, reflecting the base rate problem (most country-years do not see conflict onset).
The mathematical challenge of conflict prediction can be framed as a signal detection problem. Let $s$ be the signal of impending conflict and $\theta$ be the decision threshold. The probability of detection (true positive rate) is:
$$P_D = P(s > \theta | \text{conflict}) = 1 - \Phi\left(\frac{\theta - \mu_1}{\sigma}\right)$$
And the probability of false alarm is:
$$P_{FA} = P(s > \theta | \text{no conflict}) = 1 - \Phi\left(\frac{\theta - \mu_0}{\sigma}\right)$$
where $\mu_1 > \mu_0$ are the signal means under conflict and no-conflict conditions, and $\Phi$ is the standard normal CDF. The ROC curve for conflict prediction systems reveals the fundamental trade-off between catching true positives and generating false alarms.
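The sketch below evaluates both rates at a few thresholds, using assumed signal parameters rather than values fitted to any real conflict data:

```python
from scipy.stats import norm

mu0, mu1, sigma = 0.0, 1.0, 1.0  # assumed signal means and spread

for theta in [0.0, 0.5, 1.0, 1.5]:
    p_d = 1 - norm.cdf((theta - mu1) / sigma)   # detection (true positive) rate
    p_fa = 1 - norm.cdf((theta - mu0) / sigma)  # false alarm rate
    print(f"theta={theta:.1f}  P_D={p_d:.2f}  P_FA={p_fa:.2f}")
```

Sweeping the threshold traces out the ROC curve: lowering it catches more true conflicts at the price of more false alarms.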
40.5.4 Election Forecasting Globally
Election forecasting is perhaps the most visible application of prediction markets. Markets have been used to forecast elections in:
- United States. The Iowa Electronic Markets (since 1988), PredictIt, Polymarket, and various others have forecast presidential, congressional, and state elections.
- United Kingdom. Betfair and other British bookmakers provide liquid markets on UK elections, Brexit referenda, and party leadership contests.
- European Union. Prediction markets have been used for European Parliament elections and national elections across EU member states.
- India. Small prediction markets and forecasting tournaments have been applied to Indian elections, which involve enormous complexity (hundreds of millions of voters, thousands of candidates, coalition politics).
Performance summary. Meta-analyses of election prediction market accuracy show:
- Markets outperform polls in approximately 75% of elections when the comparison is forecast accuracy in the final weeks before the election.
- Markets are particularly superior at longer time horizons (months before the election) when polls are less reliable.
- Markets are better at estimating the probability of victory than the margin of victory.
- The largest errors occur in elections with unusual dynamics (unexpected candidate entries, major scandals, pandemic-era voting changes).
40.5.5 Diplomatic Outcome Markets
An underexplored application is using prediction markets to forecast diplomatic outcomes: trade negotiations, treaty ratification, sanctions regimes, and international organization decisions. These events are often determined by a small number of decision-makers, making them potentially susceptible to manipulation but also information-rich for insiders.
40.5.6 Python Geopolitical Forecaster
```python
import numpy as np

class GeopoliticalForecaster:
    """Aggregate multiple forecaster signals for geopolitical events."""

    def __init__(self, n_forecasters=100):
        self.n_forecasters = n_forecasters
        # Per-forecaster error scores (e.g., running Brier scores);
        # lower scores earn higher weight in aggregation.
        self.track_records = np.ones(n_forecasters)

    def forecast(self, true_prob, expertise_levels=None):
        if expertise_levels is None:
            expertise_levels = np.random.uniform(0.1, 0.5, self.n_forecasters)
        # Each forecaster observes the truth plus expertise-dependent noise
        signals = np.clip(
            true_prob + np.random.normal(0, expertise_levels),
            0.01, 0.99
        )
        # Performance-weighted aggregation: weight inversely to error score
        weights = 1.0 / (self.track_records + 0.01)
        weighted_avg = np.average(signals, weights=weights)
        # Extremize the weighted average (a = 2.5, as in the GJP literature)
        a = 2.5
        extremized = weighted_avg**a / (weighted_avg**a + (1 - weighted_avg)**a)
        return extremized, weighted_avg, np.mean(signals)
```
40.6 Economic and Financial Forecasting
40.6.1 Interest Rate Predictions
Federal funds futures, traded on the CME, are among the most liquid and well-studied prediction markets in existence. These contracts settle based on the average effective federal funds rate during their contract month, making them implicit markets on Federal Reserve policy decisions.
Accuracy. Fed funds futures have been extensively studied and are generally well-calibrated at short horizons (1-3 months). At longer horizons, they reflect a combination of expectations and risk premia, making interpretation more complex.
The Fed funds futures-implied probability of a rate hike at the next meeting is calculated as:
$$P(\text{hike}) = \frac{r_{\text{futures}} - r_{\text{current}}}{r_{\text{hike}} - r_{\text{current}}}$$
where $r_{\text{futures}}$ is the rate implied by the futures contract, $r_{\text{current}}$ is the current target rate, and $r_{\text{hike}}$ is the expected rate after a hike (typically the current rate plus 25 basis points).
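A worked example with hypothetical rates: if the current target is 5.25%, a hike would move it to 5.50%, and the futures contract implies 5.45%, then:

```python
r_current, r_hike, r_futures = 5.25, 5.50, 5.45  # hypothetical rates, percent

p_hike = (r_futures - r_current) / (r_hike - r_current)
print(f"Implied probability of a 25 bp hike: {p_hike:.0%}")  # 80%
```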
CME FedWatch Tool. The CME Group publishes the FedWatch Tool, which translates futures prices into implied probabilities for each possible rate outcome at each upcoming FOMC meeting. This tool has become a standard reference for financial market participants and journalists.
40.6.2 Inflation Markets
Treasury Inflation-Protected Securities (TIPS) and the breakeven inflation rate derived from them function as a prediction market on future inflation. The breakeven rate is:
$$\text{Breakeven} = r_{\text{nominal}} - r_{\text{TIPS}}$$
This represents the market's expectation of average CPI inflation over the maturity of the bonds, subject to adjustments for liquidity premia and inflation risk premia.
Performance. TIPS breakevens have a mixed forecasting record. They tend to be reasonable at 5-10 year horizons but can be distorted by liquidity events (as during the 2008 financial crisis, when TIPS breakevens turned negative despite no expectation of sustained deflation).
Inflation swaps provide a cleaner measure of inflation expectations, as they are less affected by liquidity concerns. The 5-year, 5-year-forward inflation swap rate has become the Federal Reserve's preferred measure of long-term inflation expectations.
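A sketch of both calculations with hypothetical yields; the 5-year, 5-year-forward rate here is backed out from 5- and 10-year breakevens by compounding, a simplification that ignores the premia adjustments noted above:

```python
nominal_5y, tips_5y = 4.20, 2.00     # hypothetical yields, percent
nominal_10y, tips_10y = 4.00, 1.70

be_5y = nominal_5y - tips_5y         # breakeven inflation, years 0-5
be_10y = nominal_10y - tips_10y      # breakeven inflation, years 0-10

# Forward inflation for years 5-10 implied by compounding the breakevens
f_5y5y = ((1 + be_10y / 100)**10 / (1 + be_5y / 100)**5)**(1 / 5) - 1
print(f"5y5y forward inflation: {f_5y5y:.2%}")  # ~2.40%
```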
40.6.3 GDP Forecasting
GDP prediction markets have been less developed than interest rate or inflation markets, but several efforts have been made:
- Iowa Electronic Markets ran GDP growth markets as part of the Federal Reserve Bank of New York's experimental program.
- Consensus Economics surveys, while not markets, aggregate economist forecasts in a way that echoes market mechanisms.
- Prediction platforms like Metaculus host GDP growth questions that attract participation from economically literate forecasters.
The Survey of Professional Forecasters (SPF), maintained by the Federal Reserve Bank of Philadelphia, provides a useful benchmark. Research has shown that prediction market-style aggregation of SPF responses --- weighting by past accuracy, extremizing --- can improve upon simple averages.
40.6.4 Employment Data
Predicting employment figures (Non-Farm Payrolls, unemployment rate) is a major activity in financial markets. While explicit prediction markets for employment data are uncommon, the options market on Treasury bonds and equity indices contains implicit forecasts of employment outcomes through their sensitivity to these releases.
40.6.5 Comparison to Professional Forecasts
How do prediction markets compare to the Survey of Professional Forecasters and other expert forecast collections?
| Metric | Prediction Markets | SPF | Advantage |
|---|---|---|---|
| Interest rates (1-month) | Brier ~0.05 | Brier ~0.07 | Markets |
| Inflation (1-year) | RMSE ~0.8% | RMSE ~0.7% | SPF (slight) |
| GDP growth (1-quarter) | RMSE ~1.2% | RMSE ~1.1% | Comparable |
| Recession probability | Better calibrated | Under-predicts | Markets |
The general finding is that prediction markets are competitive with expert forecasts for most macroeconomic variables and superior for binary outcomes (recession vs. no recession, rate hike vs. hold).
40.7 Sports and Entertainment
40.7.1 The Largest Market Segment
By volume, sports betting is by far the largest prediction market application. Global sports betting revenue is estimated at over $200 billion annually (as of the mid-2020s), dwarfing all other prediction market domains combined.
This scale has important implications:
- Liquidity. Sports betting markets are extraordinarily liquid, with major football (soccer) matches attracting hundreds of millions of dollars in wagering.
- Efficiency. With so much money at stake, prices reflect available information very efficiently. Exploitable inefficiencies, while they exist, are small and quickly corrected.
- Infrastructure. The technology, regulation, and operational expertise developed for sports betting can be adapted to other prediction market applications.
40.7.2 Sports Prediction Markets
Betfair Exchange represents the purest prediction market form in sports, operating as a continuous double auction where bettors can both back and lay outcomes. Key features:
- Price discovery. Betfair prices have been shown to be well-calibrated: events priced at 50% happen roughly 50% of the time, events at 20% happen roughly 20% of the time.
- In-play trading. Betfair allows trading during events, with prices updating in real-time as the game unfolds. This creates a fascinating dynamic price process that reflects the arrival of new information (goals, injuries, red cards).
- Market microstructure. Betfair has been extensively studied by market microstructure researchers, who have used it to examine questions about price efficiency, order flow, and insider trading.
Findings from the academic literature:
- Favorite-longshot bias. Longshots (unlikely outcomes) tend to be slightly overpriced, and favorites slightly underpriced. This bias is smaller in exchange markets (like Betfair) than in traditional bookmaker markets.
- Late-informed trading. Prices become more efficient as event start time approaches, consistent with the gradual revelation of information (team news, weather conditions, etc.).
- Cross-market efficiency. Prices across different markets and bookmakers are generally consistent, with arbitrage opportunities rare and fleeting.
40.7.3 Entertainment Markets
Prediction markets for entertainment awards have a long history:
- Oscar markets on platforms like PredictIt and the now-defunct Hollywood Stock Exchange (HSX) have generated forecasts for Academy Award outcomes since the late 1990s.
- Emmy and Grammy predictions attract smaller but dedicated communities.
- Box office prediction --- forecasting opening weekend and cumulative gross --- was a major feature of the Hollywood Stock Exchange.
Entertainment markets present interesting challenges:
- Small voter pools. Oscar voting involves roughly 10,000 Academy members, making it potentially susceptible to insider information.
- Social dynamics. Award outcomes are influenced by campaigning, personal relationships, and social dynamics that are difficult to quantify.
- Limited liquidity. Most entertainment prediction markets have thin participation.
40.7.4 Daily Fantasy Sports Overlap
Daily Fantasy Sports (DFS) platforms like DraftKings and FanDuel occupy an interesting position between pure sports betting and prediction markets. While legally classified as "contests of skill" in many jurisdictions, DFS involves forecasting player performance, which is conceptually similar to prediction market trading.
The DFS industry has driven innovations in:
- Real-time data processing and pricing
- User interface design for probabilistic forecasting
- Regulatory navigation for prediction-like products
- Customer acquisition and retention in competitive markets
40.8 Climate and Weather
40.8.1 Weather Derivatives as Prediction Markets
Weather derivatives, traded on the CME and in over-the-counter markets, function as prediction markets on weather outcomes. The most common contracts are:
- Heating Degree Day (HDD) futures --- settle based on cumulative heating degree days during a contract month, providing a prediction of cold weather intensity.
- Cooling Degree Day (CDD) futures --- the summer equivalent, predicting hot weather intensity.
- Hurricane futures --- settled based on the CME Hurricane Index, providing predictions of hurricane activity.
These markets are used primarily by energy companies, agricultural firms, and municipalities for hedging, but they also provide genuine forecasts of weather outcomes.
The pricing of weather derivatives involves modeling the temperature process. A common model is:
$$T_t = \mu(t) + \sigma(t) \cdot X_t$$
where $T_t$ is the temperature at time $t$, $\mu(t)$ is the seasonal mean, $\sigma(t)$ is the seasonal volatility, and $X_t$ is an Ornstein-Uhlenbeck process:
$$dX_t = -\kappa X_t \, dt + dW_t$$
The mean-reversion parameter $\kappa$ captures the tendency of temperature anomalies to decay over time.
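A short simulation of the anomaly process makes the mean reversion concrete. The sketch below uses an Euler-Maruyama discretization with illustrative parameters and adds an explicit volatility term to the unit-variance process above:

```python
import numpy as np

kappa, sigma, dt, n_days = 0.3, 2.0, 1.0, 90  # illustrative parameters
rng = np.random.default_rng(42)

x = np.zeros(n_days)  # temperature anomaly around the seasonal mean
for t in range(1, n_days):
    # dX = -kappa * X dt + sigma dW, discretized daily
    x[t] = x[t - 1] - kappa * x[t - 1] * dt + sigma * np.sqrt(dt) * rng.normal()

print(f"Anomaly std: {x.std():.2f}  (stationary theory: {sigma / np.sqrt(2 * kappa):.2f})")
```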
40.8.2 Climate Target Prediction
Long-term climate prediction markets are a proposed application with enormous potential value. Questions like:
- "Will global average temperature exceed 1.5C above pre-industrial levels before 2030?"
- "Will atmospheric CO2 concentration exceed 450 ppm before 2035?"
- "Will the Arctic be ice-free in summer before 2040?"
These questions face two major challenges: (1) the long time horizons make market participation unattractive due to discounting, and (2) resolution depends on measurement methodologies that may themselves be contested.
Metaculus and other forecasting platforms have hosted climate-related questions, but participation remains thin and prices are weakly informative compared to expert climate models.
40.8.3 Disaster Probability Markets
Markets on natural disaster probabilities --- earthquake likelihood, hurricane landfall probabilities, flood risk --- could provide valuable information for insurance pricing, infrastructure investment, and emergency preparedness.
Catastrophe bonds (cat bonds) function as a crude form of prediction market for natural disasters. The spread on a cat bond reflects the market's assessment of the probability and severity of the triggering catastrophe:
$$\text{Spread} \approx \text{Expected Loss} + \text{Risk Premium}$$
The expected loss component provides an implicit probability forecast, while the risk premium reflects the market's aversion to correlated catastrophic risk.
40.8.4 Parametric Insurance Connection
Parametric insurance --- insurance that pays out based on a measurable parameter (wind speed, rainfall amount, earthquake magnitude) rather than actual losses --- creates natural prediction market-like structures. Pricing parametric insurance requires forecasting the probability distribution of the triggering parameter, which is equivalent to operating a prediction market on that parameter.
40.9 Technology and Product Forecasting
40.9.1 Product Launch Dates
Technology companies face enormous uncertainty about product launch dates. Hardware products involve complex supply chains with multiple potential failure points. Software products face the well-known difficulty of estimating development timelines.
Prediction markets for product launch dates can help by:
- Aggregating information from across the supply chain
- Surfacing known problems earlier
- Providing honest estimates free from the optimism bias that typically afflicts project management
40.9.2 Technology Adoption Curves
Predicting the adoption rate of new technologies is critical for investment, strategic planning, and policy. Prediction markets can provide early signals of technology adoption by aggregating assessments from:
- Technology developers and engineers
- Early adopters and beta testers
- Market analysts and industry experts
- Potential customers and users
The S-curve model of technology adoption provides a mathematical framework:
$$N(t) = \frac{K}{1 + e^{-r(t - t_0)}}$$
where $N(t)$ is the number of adopters at time $t$, $K$ is the maximum potential adoption, $r$ is the growth rate, and $t_0$ is the inflection point. Prediction markets can help estimate $r$ and $t_0$ before sufficient adoption data is available for statistical estimation.
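A minimal sketch that evaluates the curve for assumed parameters; real applications would infer $r$ and $t_0$ from market prices or early adoption data:

```python
import numpy as np

def adoption(t, K=1_000_000, r=0.9, t0=5.0):
    """Logistic S-curve: cumulative adopters at time t (illustrative parameters)."""
    return K / (1 + np.exp(-r * (t - t0)))

for year in [1, 3, 5, 7, 9]:
    print(f"year {year}: {adoption(year):,.0f} adopters")
# Adoption crosses K/2 at the inflection point t0 = 5
```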
40.9.3 AI Milestone Markets
AI milestone markets have become increasingly popular on platforms like Metaculus and Manifold Markets. Questions include:
- "When will an AI system pass a specific benchmark?"
- "When will autonomous vehicles reach a specific level of deployment?"
- "When will large language models achieve a specific capability?"
These markets have produced mixed results. They tend to be poorly calibrated on very long time horizons (decades) but reasonably useful for near-term predictions (1-3 years). The rapid pace of AI development makes these markets particularly volatile, with dramatic price swings following breakthrough announcements.
40.9.4 Startup Success Prediction
Can prediction markets forecast startup success? Several experiments have tested this:
- AngelList and similar platforms implicitly aggregate investor opinion through funding decisions, which function like a prediction market.
- Explicit markets on startup outcomes have been proposed but face challenges: resolution is slow (startups take years to succeed or fail), definitions of "success" are ambiguous, and insider information is highly asymmetric.
Research suggests that crowd-based forecasts of startup success are modestly predictive but far from reliable. The extreme uncertainty inherent in early-stage ventures limits the accuracy any forecasting method can achieve.
40.10 Government and Policy
40.10.1 Policy Impact Prediction
One of the most valuable potential applications of prediction markets is predicting the impact of proposed government policies. If a government is considering a new tax policy, environmental regulation, or healthcare reform, a prediction market could provide forecasts of the policy's effects on relevant outcomes (GDP growth, emissions levels, healthcare costs).
Conditional prediction markets are the key mechanism for policy evaluation. These markets ask: "What will outcome Y be, conditional on policy X being implemented?" By comparing the price of this contract with a parallel contract --- "What will outcome Y be, conditional on policy X NOT being implemented?" --- policymakers can estimate the causal effect of the policy.
Formally, the policy effect is estimated as:
$$\hat{\tau} = E[Y | X = 1] - E[Y | X = 0]$$
where the expectations are read from the two conditional market prices. Robin Hanson has argued that this "futarchy" approach could fundamentally improve governance, replacing legislation-by-debate with legislation-by-prediction.
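In code, the estimate is just the difference between the two conditional prices. The numbers below are hypothetical:

```python
# Hypothetical conditional market prices for expected GDP growth (percent),
# read from "growth | policy adopted" and "growth | policy rejected" markets
e_y_given_policy = 2.6
e_y_given_no_policy = 2.1

tau_hat = e_y_given_policy - e_y_given_no_policy
print(f"Estimated policy effect: {tau_hat:+.1f} percentage points")
```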
40.10.2 DARPA's Policy Analysis Market
The most (in)famous government prediction market was DARPA's Policy Analysis Market (PAM), proposed in 2003 as part of the FutureMAP program. PAM would have allowed trading on geopolitical and economic indicators in the Middle East, including questions related to political stability and conflict.
The program was cancelled after congressional opposition, with critics arguing that it would amount to a "terrorism futures market." The cancellation is widely regarded as an overreaction that set back prediction market development by years, as the proposed market would not have allowed betting on specific terrorist attacks and could have provided valuable intelligence signals.
40.10.3 Regulatory Outcome Markets
Markets on regulatory decisions --- whether a drug will receive FDA approval, whether a merger will pass antitrust review, whether a new environmental regulation will be promulgated --- provide valuable information for affected parties.
These markets face a unique challenge: market participants may include parties who can influence the outcome (lobbyists, regulated companies, regulators themselves). This creates potential for manipulation but also ensures that highly informed parties participate.
40.10.4 Court Decision Markets
Predicting Supreme Court decisions has been the subject of several forecasting projects:
- FantasySCOTUS runs a prediction tournament on Supreme Court case outcomes.
- Academic research has shown that simple statistical models (based on case characteristics, lower court decisions, and oral argument features) can predict Supreme Court outcomes with 70-75% accuracy.
- Expert surveys of legal scholars perform only slightly better than statistical models.
Prediction markets could potentially outperform both by combining legal expertise with statistical base rates.
40.10.5 Legislative Prediction
Predicting whether specific legislation will pass is valuable for lobbying strategy, regulatory planning, and policy analysis. Markets on legislative outcomes face the challenge that the legislative process is highly path-dependent and subject to last-minute negotiations and horse-trading.
Platforms like PredictIt have hosted markets on major legislation (Affordable Care Act repeal votes, infrastructure bills, etc.) with moderate accuracy.
40.11 Building Your Own Application
40.11.1 The Decision Framework
When considering whether to deploy a prediction market in a new domain, the following framework can guide the decision:
Step 1: Needs Assessment
Ask the following questions:
1. Is there a forecasting problem that matters? (If current forecasts are adequate, markets add no value.)
2. Is information genuinely distributed? (If one person or group has all the information, just ask them.)
3. Are there barriers to information flow? (Hierarchy, incentive misalignment, geographical dispersion.)
4. Can the question be operationalized? (Clear, verifiable resolution criteria.)
5. Is there a sufficient participant pool? (At minimum 20-30 diverse participants.)
Step 2: Design
Key design decisions:
- Market mechanism. LMSR (Logarithmic Market Scoring Rule) is appropriate for thin markets; continuous double auction for thicker markets.
- Incentive structure. Real money (strongest incentives, regulatory complexity), play money with prizes (moderate incentives, simpler), or reputation only (weakest incentives, simplest).
- Question design. Binary (yes/no), scalar (numeric outcome), or categorical (multiple exclusive outcomes).
- Time horizon. Shorter is generally better for engagement; longer may be more useful for planning.
- Resolution source. Automated (linked to data feed), manual (human judge), or consensus (participant agreement).
The LMSR cost function for a market with $n$ outcomes is:
$$C(\mathbf{q}) = b \cdot \ln\left(\sum_{i=1}^{n} e^{q_i / b}\right)$$
where $\mathbf{q} = (q_1, \ldots, q_n)$ is the vector of outstanding shares for each outcome and $b$ is the liquidity parameter. The current price for outcome $i$ is:
$$p_i = \frac{e^{q_i / b}}{\sum_{j=1}^{n} e^{q_j / b}}$$
The maximum loss to the market maker is $b \cdot \ln(n)$, which determines the cost of subsidizing the market.
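A minimal implementation of these three formulas, with an illustrative trade:

```python
import numpy as np

def lmsr_cost(q, b=100.0):
    """C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * np.log(np.sum(np.exp(np.asarray(q) / b)))

def lmsr_prices(q, b=100.0):
    """p_i = exp(q_i / b) / sum_j exp(q_j / b)."""
    z = np.exp(np.asarray(q) / b)
    return z / z.sum()

q = [30.0, 0.0]  # 30 YES shares outstanding, 0 NO
print(lmsr_prices(q))                          # YES price ~0.574
print(lmsr_cost([40.0, 0.0]) - lmsr_cost(q))   # cost of 10 more YES shares
print(100.0 * np.log(2))                       # max market-maker loss, ~69.3
```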
Step 3: Pilot
Run a small-scale pilot with the following goals:
- Test the technology platform
- Assess participant engagement
- Calibrate question difficulty
- Identify operational issues
- Gather baseline accuracy data
Step 4: Scale
Scaling involves:
- Increasing the participant pool
- Expanding the question set
- Integrating market signals into decision processes
- Building organizational capability
- Establishing governance and oversight
40.11.2 The Technology Stack
A prediction market application requires several components:
- Trading engine. Handles order matching, price calculation, and position management.
- User interface. Web or mobile frontend for participant interaction.
- Question management. Tools for creating, managing, and resolving questions.
- Data pipeline. Feeds for resolution data, historical prices, and performance analytics.
- Administration. User management, incentive tracking, and compliance tools.
For internal corporate markets, platforms like Manifold Markets (open source), Augur (blockchain-based), and various academic prototypes provide starting points.
40.11.3 Python Application Framework
The following code provides a framework for building a prediction market application in a new domain. See code/example-02-application-framework.py for the full implementation.
```python
import numpy as np
from dataclasses import dataclass, field
from typing import List, Dict, Optional
from datetime import datetime

@dataclass
class Market:
    question: str
    resolution_date: datetime
    current_price: float = 0.5
    trades: List[dict] = field(default_factory=list)
    resolved: bool = False
    outcome: Optional[bool] = None

class PredictionMarketPlatform:
    """Minimal LMSR-based platform: 'buy' acquires YES shares,
    'sell' acquires NO shares. Costs follow the LMSR cost function."""

    def __init__(self, liquidity_param=100.0):
        self.markets: Dict[str, Market] = {}
        self.b = liquidity_param  # LMSR liquidity parameter b
        self.user_balances: Dict[str, float] = {}

    def create_market(self, market_id, question, resolution_date):
        self.markets[market_id] = Market(
            question=question,
            resolution_date=resolution_date
        )

    def _shares(self, market):
        """Outstanding YES and NO shares implied by the trade log."""
        q_yes = sum(t["amount"] for t in market.trades if t["direction"] == "buy")
        q_no = sum(t["amount"] for t in market.trades if t["direction"] == "sell")
        return q_yes, q_no

    def _lmsr_cost(self, q_yes, q_no):
        """LMSR cost function C(q) = b * ln(e^(q_yes/b) + e^(q_no/b))."""
        return self.b * np.log(np.exp(q_yes / self.b) + np.exp(q_no / self.b))

    def trade(self, market_id, user_id, direction, amount):
        """Charge the trader the LMSR cost of `amount` new shares."""
        market = self.markets[market_id]
        q_yes, q_no = self._shares(market)
        if direction == "buy":
            cost = self._lmsr_cost(q_yes + amount, q_no) - self._lmsr_cost(q_yes, q_no)
        else:
            cost = self._lmsr_cost(q_yes, q_no + amount) - self._lmsr_cost(q_yes, q_no)
        market.trades.append({
            "user": user_id, "direction": direction,
            "amount": amount, "cost": cost,
            "timestamp": datetime.now()
        })
        # Re-quote the YES price from the LMSR price formula
        q_yes, q_no = self._shares(market)
        e_yes, e_no = np.exp(q_yes / self.b), np.exp(q_no / self.b)
        market.current_price = e_yes / (e_yes + e_no)
        return cost
```
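A hypothetical end-to-end usage of the framework:

```python
platform = PredictionMarketPlatform(liquidity_param=50.0)
platform.create_market(
    "launch-q3", "Will Product X ship by September 30?", datetime(2025, 9, 30)
)
cost = platform.trade("launch-q3", "alice", "buy", amount=20.0)
print(f"Alice paid {cost:.2f} for 20 YES shares")
print(f"New YES price: {platform.markets['launch-q3'].current_price:.3f}")
```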
40.11.4 Common Pitfalls
Based on the experience of dozens of prediction market deployments, the most common pitfalls are:
- Over-engineering. Start simple. A Google Form collecting probability estimates is a prediction market in its most primitive form. Technology should scale with demonstrated value.
- Under-incentivizing. Participants need a reason to trade thoughtfully. Even small prizes ($50-100 per quarter for top forecasters) dramatically improve engagement and accuracy.
- Poor question design. Ambiguous resolution criteria are the single most common cause of prediction market failure. Invest heavily in question wording and resolution rules.
- Ignoring the social context. Prediction markets exist within organizations and communities. Political dynamics, power structures, and cultural norms all affect participation and accuracy.
- Expecting perfection. Prediction markets are not oracles. They produce probabilistic forecasts that will sometimes be wrong. The right comparison is against alternative forecasting methods, not against perfect foresight.
40.12 Chapter Summary
This chapter has surveyed the landscape of real-world prediction market applications, from the well-established (sports betting, election forecasting, interest rate markets) to the emerging (scientific replicability, pandemic response, climate forecasting) to the speculative (policy futarchy, AI milestone markets, startup prediction).
Key themes across all domains:
- Information aggregation works. Across every domain studied, prediction markets successfully aggregate distributed information to produce forecasts that are at least competitive with, and often superior to, traditional forecasting methods.
- Design matters. The details of market design --- mechanism, incentive structure, question wording, resolution criteria --- have a large effect on forecast quality. There is no one-size-fits-all design.
- Participation is the bottleneck. The most common failure mode is insufficient participation. Markets need a critical mass of informed, motivated participants to function.
- Context determines value. Prediction markets add the most value where information is distributed, stakes are high, and existing forecasting methods are inadequate.
- Institutional barriers persist. Legal, regulatory, and cultural barriers remain significant obstacles to prediction market adoption in many domains.
- The technology is ready; the institutions are catching up. The technical infrastructure for prediction markets is well-developed. The limiting factors are regulatory, cultural, and organizational.
What's Next
In Chapter 41, we will examine the ethical dimensions of prediction markets in greater depth, exploring questions about moral hazard, manipulation, information asymmetry, and the normative implications of market-based governance. We will also consider the frontier applications that prediction markets might enable in the coming decades and the governance frameworks needed to realize their potential responsibly.
Key Equations Summary:
| Equation | Description |
|---|---|
| $\text{Brier} = \frac{1}{n}\sum(p_i - o_i)^2$ | Forecast evaluation metric |
| $p_{\text{agg}} = \frac{\bar{p}^a}{\bar{p}^a + (1-\bar{p})^a}$ | Extremized aggregation |
| $C(\mathbf{q}) = b \ln\left(\sum e^{q_i/b}\right)$ | LMSR cost function |
| $P(\text{hike}) = \frac{r_f - r_c}{r_h - r_c}$ | Fed funds implied probability |
| $N(t) = \frac{K}{1 + e^{-r(t-t_0)}}$ | Technology adoption S-curve |
| $\hat{\tau} = E[Y \mid X=1] - E[Y \mid X=0]$ | Conditional market policy effect |