The Cost of Information February 2019
with Philipp Strack and Omer Tamuz
We develop an axiomatic theory of costly information acquisition. Our axioms capture the idea of constant marginal costs in information production: the cost of generating two independent signals is the sum of their costs, and the cost of generating a signal with probability half equals half the cost of generating it deterministically. Together with monotonicity and continuity conditions, these axioms completely determine the cost of a signal up to a vector of parameters, one for each pair of states of nature. These parameters have a clear economic interpretation: they determine the difficulty of distinguishing between different states. The resulting cost function, which we call log-likelihood ratio cost, is a linear combination of the Kullback-Leibler divergences (i.e., the expected log-likelihood ratios) between the conditional signal distributions. We argue that this cost function is a versatile modeling tool, and that in various examples of information acquisition it leads to more realistic predictions than the approach based on Shannon entropy.
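As a sketch of the resulting functional form, in notation that is ours rather than necessarily the paper's: an experiment that produces a signal with conditional distribution \mu_i in state i is assigned the cost

C(\mu) \;=\; \sum_{i \neq j} \beta_{ij} \, D_{KL}(\mu_i \,\|\, \mu_j), \qquad D_{KL}(\mu_i \,\|\, \mu_j) \;=\; \mathbb{E}_{\mu_i}\!\left[ \log \frac{d\mu_i}{d\mu_j} \right],

where each coefficient \beta_{ij} \geq 0 measures the difficulty of distinguishing state i from state j. Both axioms are visible in this form: Kullback-Leibler divergence is additive over independent signals, and it scales linearly when the signal is generated only with some probability, an uninformative blank being observed otherwise.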
Stochastic Dominance under Independent Noise November 2018
with Philipp Strack and Omer Tamuz
(Forthcoming at Journal of Political Economy)
We show that the presence of statistically independent noise facilitates the ranking of random variables in terms of stochastic dominance. In particular, we address the following question: given two random variables X and Y, under what conditions is it possible to find a random variable Z, independent of X and Y, such that X + Z first-order stochastically dominates Y + Z?
We show that such a Z exists whenever X has higher expectation than Y. In addition, if X and Y have the same mean but X has lower variance, then Z can be chosen so that X + Z dominates Y + Z in terms of second-order stochastic dominance. We present applications to choice under risk, the axiomatization of mean-variance preferences, and mechanism design with risk-averse agents.
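In symbols, paraphrasing the statements above in our own notation: if \mathbb{E}[X] > \mathbb{E}[Y], there exists Z, independent of X and Y, such that

P(X + Z > t) \;\geq\; P(Y + Z > t) \quad \text{for all } t \in \mathbb{R},

which is first-order stochastic dominance. If instead \mathbb{E}[X] = \mathbb{E}[Y] and \mathrm{Var}(X) < \mathrm{Var}(Y), then Z can be chosen so that \mathbb{E}[u(X + Z)] \geq \mathbb{E}[u(Y + Z)] for every increasing concave utility function u, which is second-order stochastic dominance.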
Testable Forecasts September 2018
Predictions about the future are often evaluated through statistical tests. As the recent literature has shown, many known tests are subject to adverse selection problems and are ineffective at discriminating between competent forecasters and uninformed forecasters who predict strategically.
This paper presents necessary and sufficient conditions under which it is possible to discriminate between informed and uninformed forecasters. It is shown that optimal tests take the form of likelihood-ratio tests comparing forecasters’ predictions against the predictions of a hypothetical Bayesian outside observer. The paper also illustrates a novel connection between the problem of testing strategic forecasters and the classical Neyman-Pearson paradigm of hypothesis testing.
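Schematically, and with notation that is ours rather than the paper's: if p(\omega) denotes the probability that a forecaster's predictions assign to the realized sequence of outcomes \omega, and q(\omega) the probability assigned by the hypothetical Bayesian outside observer, a likelihood-ratio test of this kind passes the forecaster whenever

\frac{p(\omega)}{q(\omega)} \;\geq\; c

for some threshold c. As in the Neyman-Pearson lemma, where thresholding the likelihood ratio yields the most powerful test between two hypotheses, the ratio is the natural statistic for separating an informed forecaster from a strategic uninformed one.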
Stable Matching under Forward-Induction Reasoning 2018
A standing question in the theory of matching markets is how to define stability under incomplete information. The crucial obstacle is that a notion of stability must include a theory of how beliefs are updated in a blocking pair. This paper proposes a novel epistemic approach.
Agents negotiate through offers. Offers are interpreted according to the highest possible degree of rationality that can be ascribed to their proponents, in line with the principle of forward-induction reasoning. This approach leads to a new definition of stability. The main result shows an equivalence between this notion and “incomplete-information stability,” a cooperative solution concept recently put forward by Liu, Mailath, Postlewaite and Samuelson (2014). The result implies that forward-induction reasoning leads to efficient matchings under standard supermodularity conditions.
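For reference, a standard supermodularity condition of the kind invoked in the last sentence, stated in our notation for a match value function v over worker types \theta and firm types \tau:

v(\theta', \tau') + v(\theta, \tau) \;\geq\; v(\theta', \tau) + v(\theta, \tau') \quad \text{whenever } \theta' \geq \theta \text{ and } \tau' \geq \tau,

i.e., higher types are complements. This is the classical condition under which positive assortative matchings maximize total surplus and are therefore efficient.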
Aggregate Risk and the Pareto Principle 2017
with Nabil Al-Najjar
(R&R at Journal of Economic Theory)
A crucial distinction in the evaluation of public policies is between plans that involve purely idiosyncratic risk and plans that generate aggregate, correlated risk. While elementary, such a dichotomy is not captured by standard utilitarian aggregators. In this paper we revisit Harsanyi's (1955) celebrated theory of preference aggregation and develop a parsimonious generalization of utilitarianism. The theory we propose captures sensitivity to aggregate risk, is apt for studying large populations, and is characterized by two simple axioms of preference aggregation.
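A minimal example of the dichotomy, in our notation: a utilitarian aggregator evaluates a social plan P by

W(P) \;=\; \sum_{i=1}^{n} \mathbb{E}_P\!\left[ u_i(x_i) \right],

which depends only on the marginal distribution of each individual outcome x_i. A plan in which n agents face independent fair-coin gambles and a plan in which a single fair coin decides a common outcome for all n agents induce the same marginals, and hence receive the same value W, even though only the second exposes society to aggregate risk.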