causal & statistical inference |
|
J. Dubois, H. Oya, J. M. Tyszka, M. Howard III, F. Eberhardt and R. Adolphs (2018). Causal mapping of emotion networks in the human brain: Framework and initial findings. In Neuropsychologia.
Abstract:
Emotions involve many cortical and subcortical regions, prominently including the amygdala. It remains unknown how these multiple network components interact, and how they cause the behavioral, autonomic, and experiential effects of emotions. Here we describe a framework that combines a novel technique, concurrent electrical stimulation with fMRI (es-fMRI), with a novel analysis, inferring causal structure from fMRI data (causal discovery). We outline a research program for investigating human emotion with these new tools, and provide initial findings from two large resting-state datasets as well as case studies in neurosurgical patients with electrical stimulation of the amygdala. The overarching goal is to use causal discovery methods on fMRI data to infer causal graphical models of how brain regions interact, and then to further constrain these models with direct stimulation of specific brain regions and concurrent fMRI. We conclude by discussing limitations and future extensions. The approach could yield anatomical hypotheses about brain connectivity, motivate rational strategies for treating mood disorders with deep brain stimulation, and be extended to animal studies that use combined optogenetic fMRI.
|
[abstract] [pdf] [link] |
A. Hyttinen, S. Plis, M. Järvisalo, F. Eberhardt, and D. Danks (2017). A Constraint Optimization Approach to Causal Discovery from Subsampled Time Series Data. In International Journal of Approximate Reasoning 90: 208-225.
Abstract:
We consider causal structure estimation from time series data in which measurements are obtained at a coarser timescale than the causal timescale of the underlying system. Previous work has shown that such subsampling can lead to significant errors about the system's causal structure if not properly taken into account. In this paper, we first consider the search for system timescale causal structures that correspond to a given measurement timescale structure. We provide a constraint satisfaction procedure whose computational performance is several orders of magnitude better than previous approaches. We then consider finite-sample data as input, and propose the first constraint optimization approach for recovering system timescale causal structure. This algorithm optimally recovers from possible conflicts due to statistical errors. We then apply the method to real-world data, investigate the robustness and scalability of our method, consider further approaches to reduce underdetermination in the output, and perform an extensive comparison between different solvers on this inference problem. Overall, these advances build towards a full understanding of non-parametric estimation of system timescale causal structures from subsampled time series data.
|
[abstract] [pdf] [link] |
Zhalama, J. Zhang, F. Eberhardt & W. Mayer (2017). SAT-based causal discovery under weaker assumptions. In Proceedings of the 33rd Conference on Uncertainty in Artificial Intelligence (UAI).
Abstract:
Using the flexibility of recently developed methods for causal discovery based on Boolean satisfiability (SAT) solvers, we encode a variety of assumptions that weaken the Faithfulness assumption. The encoding results in a number of SAT-based algorithms whose asymptotic correctness relies on weaker conditions than are standardly assumed. This implementation of a whole set of assumptions in the same platform enables us to systematically explore the effect of weakening the Faithfulness assumption on causal discovery. An important effect, suggested by simulation results, is that adopting weaker assumptions greatly alleviates the problem of conflicting constraints and substantially shortens solving time. As a result, SAT-based causal discovery is potentially more scalable under weaker assumptions.
|
[abstract] [pdf] [link] |
K. Chalupka, F. Eberhardt & P. Perona (2017). Causal feature learning: an overview. In Behaviormetrika 44(1): 137-167.
Abstract:
Causal feature learning (CFL) is a causal inference framework rooted in the language of causal graphical models and computational mechanics. CFL is aimed at discovering high-level causal relations from low-level data, and at reducing the experimental effort to understand confounding among the high-level variables. We first review the scientific motivation for CFL, then present a detailed introduction to the framework, laying out the definitions and algorithmic steps. A simple example illustrates the techniques involved in the learning steps and provides visual intuition. Finally, we discuss the limitations of the current framework and list a number of open problems. |
[abstract] [pdf] [link][code] |
F. Eberhardt (2017). Introduction to the foundations of causal discovery. In International Journal of Data Science and Analytics 3(2): 81-91.
Abstract:
This article presents an overview of several known approaches to causal discovery. It is organized by relating the different fundamental assumptions that the methods depend on. The goal is to indicate that for a large variety of different settings the assumptions necessary and sufficient for causal discovery are now well understood. |
[abstract] [pdf] [link] |
F. Eberhardt, E. Bareinboim, M. Maathuis, J. Mooij & R. Silva, editors (2017). Proceedings of the UAI Workshop on Causation: Foundation to Application. In CEUR Workshop Proceedings Vol-1792.
|
[link] |
A. Hyttinen, S. Plis, M. Järvisalo, F. Eberhardt & D. Danks (2016). Causal Discovery from Subsampled Time Series Data by Constraint Optimization. In Proceedings of the International Conference on Probabilistic Graphical Models (PGM).
Abstract:
This paper focuses on causal structure estimation from time series data in which measurements are obtained at a coarser timescale than the causal timescale of the underlying system. Previous work has shown that such subsampling can lead to significant errors about the system's causal structure if not properly taken into account. In this paper, we first consider the search for the system timescale causal structures that correspond to a given measurement timescale structure. We provide a constraint satisfaction procedure whose computational performance is several orders of magnitude better than previous approaches. We then consider finite-sample data as input, and propose the first constraint optimization approach for recovering the system timescale causal structure. This algorithm optimally recovers from possible conflicts due to statistical errors. More generally, these advances allow for a robust and non-parametric estimation of system timescale causal structures from subsampled time series data. |
[abstract] [pdf] [link] |
K. Chalupka, T. Bischoff, P. Perona & F. Eberhardt (2016). Unsupervised Discovery of El Nino Using Causal Feature Learning on Microlevel Climate Data. In Proceedings of the 32nd Conference on Uncertainty in Artificial Intelligence (UAI).
Abstract:
We show that the climate phenomena of El Nino and La Nina arise naturally as states of macro-variables when our recent causal feature learning framework (Chalupka et al., 2015, 2016) is applied to micro-level measures of zonal wind (ZW) and sea surface temperatures (SST) taken over the equatorial band of the Pacific Ocean. The method identifies these unusual climate states on the basis of the relation between ZW and SST patterns without any input about past occurrences of El Nino or La Nina. The simpler alternatives of (i) clustering the SST fields while disregarding their relationship with ZW patterns, or (ii) clustering the joint ZW-SST patterns, do not discover El Nino. We discuss the degree to which our method supports a causal interpretation and use a low-dimensional toy example to explain its success over other clustering approaches. Finally, we propose a new robust and scalable alternative to our original algorithm (Chalupka et al., 2016), which circumvents the need for high-dimensional density learning. |
[abstract] [pdf] [link][code] |
K. Chalupka, P. Perona & F. Eberhardt (2016). Multi-Level Cause-Effect Systems. In Journal of Machine Learning, Workshop and Conference Proceedings (AISTATS 2016).
Abstract:
We present a domain-general account of causation that applies to settings in which macro-level causal relations between two systems are of interest, but the relevant causal features are poorly understood and have to be aggregated from vast arrays of micro-measurements. Our approach generalizes that of Chalupka et al. (2015) to the setting in which the macro-level effect is not specified. We formalize the connection between micro- and macro-variables in such situations and provide a coherent framework describing causal relations at multiple levels of analysis. We present an algorithm that discovers macro-variable causes and effects from micro-level measurements obtained from an experiment. We further show how to design experiments to discover macro-variables from observational micro-variable data. Finally, we show that under specific conditions, one can identify multiple levels of causal structure. Throughout the article, we use a simulated neuroscience multi-unit recording experiment to illustrate the ideas and the algorithms. |
[abstract] [pdf] [link] |
K. Chalupka, P. Perona & F. Eberhardt (2015). Visual Causal Feature Learning. In Proceedings of the 31st Conference on Uncertainty in Artificial Intelligence (UAI).
Abstract: We provide a rigorous definition of the visual cause of a behavior that is broadly applicable to the visually driven behavior in humans, animals, neurons, robots and other perceiving systems. Our framework generalizes standard accounts of causal learning to settings in which the causal variables need to be constructed from micro-variables. We prove the Causal Coarsening Theorem, which allows us to gain causal knowledge from observational data with minimal experimental effort. The theorem provides a connection to standard inference techniques in machine learning that identify features of an image that correlate with, but may not cause, the target behavior. Finally, we propose an active learning scheme to learn a manipulator function that performs optimal manipulations on the image to automatically identify the visual cause of a target behavior. We illustrate our inference and learning algorithms in experiments based on both synthetic and real data. |
[abstract] [pdf] [link][code] |
A. Hyttinen, F. Eberhardt & M. Järvisalo (2015). Do-calculus when the true graph is unknown. In Proceedings of the 31st Conference on Uncertainty in Artificial Intelligence (UAI).
Abstract: One of the basic tasks of causal discovery is to estimate the causal effect of some set of variables on another given a statistical data set. In this article we bridge the gap between causal structure discovery and the do-calculus by proposing a method for the identification of causal effects on the basis of arbitrary (equivalence) classes of semi-Markovian causal models. The approach uses a general logical representation of the equivalence class of graphs obtained from a causal structure discovery algorithm, the properties of which can then be queried by procedures implementing the do-calculus inference for causal effects. We show that the method is more efficient than determining causal effects using a naive enumeration of graphs in the equivalence class. Moreover, the method is complete with respect to the identifiability of causal effects for settings in which extant methods that do not require knowledge of the true graph offer only incomplete results. The method is entirely modular and easily adapted for different background settings. |
[abstract] [pdf] [link] |
F. Eberhardt (forthcoming). Green and Grue Causal Variables. In Synthese.
Abstract: The causal Bayes net framework specifies a set of axioms for causal discovery. This article explores the set of causal variables that function as relata in these axioms. Spirtes (2007) had shown how a causal system can be equivalently described by two different sets of variables that stand in a non-trivial translation-relation to each other, suggesting that there is no "correct" set of causal variables. I extend Spirtes' result to the general framework of linear structural equation models and then explore to what extent the possibility to intervene or a preference for simpler causal systems may help in selecting among sets of causal variables. |
[abstract]
|
A. Hyttinen, F. Eberhardt & M. Järvisalo (2014). Constraint-based Causal Discovery: Conflict Resolution with Answer Set Programming. In Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence (UAI).
Abstract: Recent approaches to causal discovery based on Boolean satisfiability solvers have opened new opportunities to consider search spaces for causal models with both feedback cycles and unmeasured confounders. However, the available methods have so far not been able to provide a principled account of how to handle conflicting constraints that arise from statistical variability. Here we present a new approach that preserves the versatility of Boolean constraint solving and attains a high accuracy despite the presence of statistical errors. We develop a new logical encoding of (in)dependence constraints that is both well suited for the domain and allows for faster solving. We represent this encoding in Answer Set Programming (ASP), and apply a state-of-the-art ASP solver for the optimization task. Based on different theoretical motivations, we explore a variety of methods to handle statistical errors. Our approach currently scales to cyclic latent variable models with up to seven observed variables and outperforms the available constraint-based methods in accuracy. |
[abstract] [pdf] [link][code] |
R. E. Tillman & F. Eberhardt (2014). Learning causal structure from multiple datasets with similar variable sets. In Behaviormetrika 41(1): 41-64.
Abstract: While randomized controlled experiments are often considered the gold standard for predicting causal relationships between variables, they are expensive if one is interested in understanding the complete set of causal relationships governing a large set of variables, and it may not be possible to manipulate certain variables due to ethical or practical constraints. To address these scenarios, procedures have been developed which use conditional independence relationships among variables when they are passively observed to predict which variables may or may not be causally related to other variables. Until recently, most of these procedures assumed that the data consisted of a single i.i.d. dataset of observations, but in practice researchers often have access to multiple similar datasets, e.g. from multiple labs studying the same problem, which measure slightly different variable sets and where recording conventions and procedures may vary. This paper discusses recent state-of-the-art approaches for predicting causal relationships using multiple observational and experimental datasets in these contexts. |
[abstract] [pdf] [link] |
F. Eberhardt (2013). Direct Causes and the Trouble with Soft Interventions. In Erkenntnis 79(4): 755-777.
Abstract: An interventionist account of causation characterizes causal relations in terms of changes resulting from particular interventions. I provide a new example of a causal relation for which there does not exist an intervention satisfying the common interventionist standard. I consider adaptations that would save this standard and describe their implications for an interventionist account of causation. No adaptation preserves all the aspects that make the interventionist account appealing. Part of the fallout is a clearer account of the difficulties in characterizing so-called "soft" interventions. |
[abstract] [pdf] [link] |
A. Hyttinen, F. Eberhardt & P. O. Hoyer (2013). Experiment Selection for Causal Discovery. In Journal of Machine Learning Research, 14:3041-3071.
Abstract: Randomized controlled experiments are often described as the most reliable tool available to scientists for discovering causal relationships among quantities of interest. However, it is often unclear how many and which different experiments are needed to identify the full (possibly cyclic) causal structure among some given set of variables. Recent results in the causal discovery literature have explored various identifiability criteria that depend on the assumptions one is able to make about the underlying causal process, but these criteria are not directly constructive for selecting the optimal set of experiments. Fortunately, many of the needed constructions already exist in the combinatorics literature, albeit under terminology which is unfamiliar to most of the causal discovery community. In this paper we translate the theoretical results, often just given in terms of various asymptotic bounds (and their proofs), to the concrete problem of experiment selection. For a variety of settings we give explicit constructions of the optimal set of experiments and adapt some of the general combinatorics results to answer questions relating to the problem of experiment selection. |
[abstract] [pdf] [link] |
F. Eberhardt (2013). Experimental Indistinguishability of Causal Structures. In Philosophy of Science (contributed paper to PSA 2012).
Abstract: Using a variety of different results from the literature, I show how causal discovery with experiments is limited unless substantive assumptions about the underlying causal structure are made. These results undermine the view that experiments, such as randomized controlled trials, can independently provide a gold standard for causal discovery. Moreover, I present a concrete example in which causal underdetermination persists despite exhaustive experimentation, and argue that such cases undermine the appeal of an interventionist account of causation as its dependence on other assumptions is not spelled out. |
[abstract] [pdf] [link] |
A. Hyttinen, P. O. Hoyer, F. Eberhardt & M. Järvisalo (2013). Discovering Cyclic Causal Models with Latent Variables: A General SAT-Based Procedure. In Proceedings of the 29th Conference on Uncertainty in Artificial Intelligence (UAI).
Abstract: We present a very general approach to learning the structure of causal models based on d-separation constraints, obtained from any given set of overlapping passive observational or experimental data sets. The procedure allows for both directed cycles (feedback loops) and the presence of latent variables. Our approach is based on a logical representation of causal pathways, which permits the integration of quite general background knowledge, and inference is performed using a Boolean satisfiability (SAT) solver. The procedure is complete in that it exhausts the available information on whether any given edge can be determined to be present or absent, and returns "unknown" otherwise. Many existing constraint-based causal discovery algorithms can be seen as special cases, tailored to circumstances in which one or more restricting assumptions apply. Simulations illustrate the effect of these assumptions on discovery and how the present algorithm scales. |
[abstract] [pdf] [link] |
A. Hyttinen, F. Eberhardt & P. O. Hoyer (2012). Learning Linear Cyclic Causal Models with Latent Variables. In Journal of Machine Learning Research, 13(Nov):3387-3439.
Abstract: Identifying cause-effect relationships between variables of interest is a central problem in science. Given a set of experiments we describe a procedure that identifies linear models that may contain cycles and latent variables. We provide a detailed description of the model family, full proofs of the necessary and sufficient conditions for identifiability, a search algorithm that is complete, and a discussion of what can be done when the identifiability conditions are not satisfied. The algorithm is comprehensively tested in simulations, comparing it to competing algorithms in the literature. Furthermore, we adapt the procedure to the problem of cellular network inference, applying it to the biologically realistic data of the DREAM challenges. The paper provides a full theoretical foundation for the causal discovery procedure first presented by Eberhardt et al. (2010) and Hyttinen et al. (2010). |
[abstract] [pdf] [link] |
A. Hyttinen, F. Eberhardt & P. O. Hoyer (2012). Causal Discovery of Linear Cyclic Models from Multiple Experimental Data Sets with Overlapping Variables. In Proceedings of the 28th Conference on Uncertainty in Artificial Intelligence (UAI).
Abstract: Much of scientific data is collected as randomized experiments intervening on some and observing other variables of interest. Quite often, a given phenomenon is investigated in several studies, and different sets of variables are involved in each study. In this article we consider the problem of integrating such knowledge, inferring as much as possible concerning the underlying causal structure with respect to the union of observed variables from such experimental or passive observational overlapping data sets. We do not assume acyclicity or joint causal sufficiency of the underlying data generating model, but we do restrict the causal relationships to be linear and use only second order statistics of the data. We derive conditions for full model identifiability in the most generic case, and provide novel techniques for incorporating an assumption of faithfulness to aid in inference. In each case we seek to establish what is and what is not determined by the data at hand. |
[abstract] [pdf] [link] |
A. Hyttinen, F. Eberhardt & P. O. Hoyer (2011). Noisy-OR Models with Latent Confounding. In Proceedings of the 27th Conference on Uncertainty in Artificial Intelligence (UAI).
Abstract: Given a set of experiments in which varying subsets of observed variables are subject to intervention, we consider the problem of identifiability of causal models exhibiting latent confounding. While identifiability is trivial when each experiment intervenes on a large number of variables, the situation is more complicated when only one or a few variables are subject to intervention per experiment. For linear causal models with latent variables Hyttinen et al. (2010) gave precise conditions for when such data are sufficient to identify the full model. While their result cannot be extended to discrete-valued variables with arbitrary cause-effect relationships, we show that a similar result can be obtained for the class of causal models whose conditional probability distributions are restricted to a `noisy-OR' parameterization. We further show that identification is preserved under an extension of the model that allows for negative influences, and present learning algorithms that we test for accuracy, scalability and robustness. |
[abstract] [pdf] [link] |
A. Hyttinen, F. Eberhardt & P. O. Hoyer (2010). Causal discovery for linear cyclic models with latent variables. In Proceedings of the 5th European Workshop on Probabilistic Graphical Models (PGM).
Abstract: We consider the problem of identifying the causal relationships among a set of variables in the presence of both feedback loops and unmeasured confounders. This is a challenging task which, for full identification, typically requires the use of randomized experiments. For linear systems, Eberhardt et al (2010) recently provided a procedure for integrating data from several experiments, and gave a corresponding, but demanding, identifiability condition. In this paper we (i) characterize the underdetermination of the model when the identifiability condition is not fully satisfied, (ii) show that their algorithm is complete with regard to the search space and the assumptions, and (iii) extend the procedure to incorporate the common assumption of faithfulness, and any prior knowledge. The resulting method typically resolves much additional structure and often yields full identification with many fewer experiments. We demonstrate our procedure using simulated data, and apply it to the protein signaling dataset of Sachs et al (2005). |
[abstract] [pdf] [link] |
F. Eberhardt, P. O. Hoyer & R. Scheines (2010). Combining Experiments to Discover Linear Cyclic Models with Latent Variables. In Journal of Machine Learning, Workshop and Conference Proceedings (AISTATS 2010), 9:185-192.
Abstract: We present an algorithm to infer causal relations between a set of measured variables on the basis of experiments on these variables. The algorithm assumes that the causal relations are linear, but is otherwise completely general: It provides consistent estimates when the true causal structure contains feedback loops and latent variables, while the experiments can involve surgical or `soft' interventions on one or multiple variables at a time. The algorithm is `online' in the sense that it combines the results from any set of available experiments, can incorporate background knowledge and resolves conflicts that arise from combining results from different experiments. In addition we provide a necessary and sufficient condition that (i) determines when the algorithm can uniquely return the true graph, and (ii) can be used to select the next best experiment until this condition is satisfied. We demonstrate the method by applying it to simulated data and the flow cytometry data of Sachs et al (2005). |
[abstract] [pdf] [link] |
F. Eberhardt (2010). Causal Discovery as a Game. In Journal of Machine Learning, Workshop and Conference Proceedings (NIPS 2008 causality workshop), 6:87-96.
Abstract: This paper presents a game theoretic approach to causal discovery. The problem of causal discovery is framed as a game of the Scientist against Nature, in which Nature attempts to hide its secrets for as long as possible, and the Scientist makes her best effort at discovery while minimizing cost. This approach provides a very general framework for the assessment of different search procedures and a principled way of modeling the effect of choices between different experiments. The framework builds on an approach to hypothesis testing developed by Abraham Wald, but generalizes it to sequences of experiments. |
[abstract] [pdf] [link] |
F. Eberhardt (2009). Introduction to the Epistemology of Causation. In The Philosophy Compass, 4(6):913-925.
Abstract: This survey presents some of the main principles involved in discovering causal relations. They belong to a large array of possible assumptions and conditions about causal relations, whose various combinations limit the possibilities of acquiring causal knowledge in different ways. How much and in what detail the causal structure can be discovered from what kinds of data depends on the particular set of assumptions one is able to make. The assumptions considered here provide a starting point to explore further the foundations of causal discovery procedures, and how they can be improved. |
[abstract] [pdf] [link] |
F. Eberhardt (2008). Almost Optimal Intervention Sets for Causal Discovery. In Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence (UAI), 161-168.
Abstract: We conjecture that the worst case number of experiments necessary and sufficient to discover a causal graph uniquely given its observational Markov equivalence class can be specified as a function of the largest clique in the Markov equivalence class. We provide an algorithm that computes intervention sets that we believe are optimal for the above task. The algorithm builds on insights gained from the worst case analysis in Eberhardt et al. (2005) for sequences of experiments when all possible directed acyclic graphs over N variables are considered. A simulation suggests that our conjecture is correct. We also show that a generalization of our conjecture to other classes of possible graph hypotheses cannot be given easily, and in what sense the algorithm is then no longer optimal. |
[abstract] [pdf] [link] |
F. Eberhardt (2008). A Sufficient Condition for Pooling Data. In Synthese, special issue, 163(3):433-442.
Abstract: We consider the problems arising from using sequences of experiments to discover the causal structure among a set of variables, none of which are known ahead of time to be an "outcome". In particular, we present various approaches to resolve conflicts in the experimental results arising from sampling variability in the experiments. We provide a sufficient condition that allows for pooling of data from experiments with different joint distributions over the variables. Satisfaction of the condition allows for more powerful independence tests that may resolve some of the conflicts in the experimental results. The pooling condition has its own problems, but should, due to its generality, be informative to techniques for meta-analysis. |
[abstract] [pdf] [link] |
F. Eberhardt (2007). Causation and Intervention (Ph.D. Thesis). Carnegie Mellon University.
Abstract: Accounts of causal discovery have traditionally split into approaches based on passive observational data and approaches based on experimental interventions that take control of (the distribution of) one or more variables. The former includes a vast number of techniques for the inference to causal structure on the basis of statistical features of data, while the latter provides in addition a methodology of how an experiment should be performed, in order to be informative about causal structure. In this thesis, the causal Bayes net framework is used to integrate these two approaches and general guidelines are provided not only on how experiments should be performed but also on which experiments should be performed to discover the causal structure among a potentially large number of random variables. In that sense this thesis aims to extend considerations found in experimental design from single experiments to sequences of experiments. To do so, the thesis provides a precise account of what constitutes an intervention that allows for, but does not necessitate, a role of agency in interventions. We describe a space of interventions that is broader than standard randomized controlled trials, and explore what implications follow for discovery when different types of interventions are used. Results pertaining to the methodology of causal discovery, its limits, the efficiency of its search strategies and the meta-analysis of experimental results are presented. This thesis analyses the combinatorics of sequences of experiments for causal discovery, ties the discovery problem into a game-theoretic framework and points to some of the (many) difficulties that remain open research questions. |
[abstract] [pdf] |
F. Eberhardt & R. Scheines (2007). Interventions and Causal Inference. In Philosophy of Science, 74:981-995.
Abstract: The literature on causal discovery has focused on interventions that involve randomly assigning values to a single variable. But such a randomized intervention is not the only possibility, nor is it always optimal. In some cases it is impossible or it would be unethical to perform such an intervention. We provide an account of "hard" and "soft" interventions, and discuss what they can contribute to causal discovery. We also describe how the choice of the optimal intervention(s) depends heavily on the particular experimental set-up and the assumptions that can be made. |
[abstract] [pdf] [link] |
F. Eberhardt, C. Glymour & R. Scheines (2006). N-1 Experiments Suffice to Determine the Causal Relations Among N Variables. In D. Holmes and L. Jain (eds.) Innovations in Machine Learning, Theory and Applications Series: Studies in Fuzziness and Soft Computing, Vol. 194, Springer-Verlag. See also Technical Report CMU-PHIL-161 (2005).
Abstract: By combining experimental interventions with search procedures for graphical causal models we show that under familiar assumptions, with perfect data, N-1 experiments suffice to determine the causal relations among N>2 variables when each experiment randomizes at most one variable. We show the same bound holds for adaptive learners, but does not hold for N>4 when each experiment can simultaneously randomize more than one variable. This bound provides a type of ideal for the measure of success of heuristic approaches in active learning methods of causal discovery, which currently use less informative measures. |
[abstract] [pdf] [link] |
F. Eberhardt, C. Glymour & R. Scheines (2005). On the Number of Experiments Sufficient and in the Worst Case Necessary to Identify All Causal Relations Among N Variables. In F. Bacchus and T. Jaakkola (eds.), Proceedings of the 21st Conference on Uncertainty in Artificial Intelligence (UAI), 178-184. (Note the error described in the errata below.)
Abstract: We show that if any number of variables are allowed to be simultaneously and independently randomized in any one experiment, log2(N)+1 experiments are sufficient and in the worst case necessary to determine the causal relations among N ≥ 2 variables when no latent variables, no sample selection bias and no feedback cycles are present. For all K, 0 < K < N/2, we provide an upper bound on the number of experiments required to determine causal structure when each experiment simultaneously randomizes K variables. For large N, these bounds are significantly lower than the N-1 bound required when each experiment randomizes at most one variable. For kmax < N, we show that (N/kmax - 1) + (N/(2kmax)) log2(kmax) experiments are sufficient and in the worst case necessary. We offer a conjecture as to the minimal number of experiments that are in the worst case sufficient to identify all causal relations among N observed variables that are a subset of the vertices of a DAG.
Errata: Theorem 3.7 concerning limited intervention sets is incorrect. The bound is only sufficient. The necessary bound is described in detail in our paper "Experiment Selection for Causal Discovery" (2013). |
[abstract] [pdf] [link] |
human learning |
|
D. Danks & F. Eberhardt (2011). Keeping Bayesian models rational: The need for an account of algorithmic rationality [Commentary on Jones & Love: Bayesian Fundamentalism or Enlightenment?]. In Behavioral and Brain Sciences, 34(4):197.
Abstract: We argue that the authors' call to integrate Bayesian models more strongly with algorithmic- and implementational-level models must go hand in hand with a call for a fully developed account of algorithmic rationality. Without such an account, the integration of levels would come at the expense of the explanatory benefit that rational models provide. |
[abstract] [link] |
F. Eberhardt & D. Danks (2011).
Confirmation in the Cognitive Sciences: The Problematic Case of Bayesian Models.
In Minds & Machines, 21(3):389-410.
Abstract: Bayesian models of human learning are becoming increasingly popular in cognitive science. We argue that their purported confirmation largely relies on a methodology that depends on premises that are inconsistent with the claim that people are Bayesian about learning and inference. Bayesian models in cognitive science derive their appeal from their normative claim that the modeled inference is in some sense rational. Standard accounts of the rationality of Bayesian inference imply predictions that an agent selects the option that maximizes the posterior expected utility. Experimental confirmation of the models, however, has been claimed because of groups of agents that "probability match" the posterior. Probability matching only constitutes support for the Bayesian claim if additional unobvious and untested (but testable) assumptions are invoked. The alternative strategy of weakening the underlying notion of rationality no longer distinguishes the Bayesian model uniquely. A new account of rationality -- either for inference or for decision-making -- is required to successfully confirm Bayesian models in cognitive science. |
[abstract] [pdf] [link] |
D. Danks & F. Eberhardt (2009).
Explaining Norms and Norms Explained [Commentary on precis of Bayesian Rationality by Oaksford & Chater].
In Behavioral and Brain Sciences, 32:86-87.
Abstract: Oaksford & Chater (O&C) aim to provide teleological explanations of behavior by giving an appropriate normative standard: Bayesian inference. We argue that there is no uncontroversial independent justification for the normativity of Bayesian inference, and that O&C fail to satisfy a necessary condition for teleological explanations: demonstration that the normative prescription played a causal role in the behavior's existence. |
[abstract] [pdf] [link] |
D. Danks & F. Eberhardt (2009).
Conceptual Problems in Statistics, Testing and Experimentation.
In J. Symons & F. Calvo (eds.) Routledge Companion to the Philosophy of Psychology, Routledge.
|
[pdf] [link] |
Hans Reichenbach |
|
F. Eberhardt (2011). Reliability via Synthetic A Priori - Reichenbach's Doctoral Thesis on Probability.
In Synthese, 181(1):125-136.
Abstract: Hans Reichenbach is well known for his limiting frequency view of probability, with his most thorough account given in The Theory of Probability in 1935/49. Perhaps less known are Reichenbach's early views on probability and its epistemology. In his doctoral thesis from 1915, Reichenbach espouses a Kantian view of probability, where the convergence limit of an empirical frequency distribution is guaranteed to exist thanks to the synthetic a priori principle of lawful distribution. Reichenbach claims to have given a purely objective account of probability, while integrating the concept into a more general philosophical and epistemological framework. A brief synopsis of Reichenbach's thesis and a critical analysis of the problematic steps of his argument will show that the roots of many of his most influential insights on probability and causality can be found in this early work. |
[abstract] [pdf] [link] |
F. Eberhardt & C. Glymour (2011).
Hans Reichenbach's Probability Logic.
In D. M. Gabbay, J. Woods & S. Hartmann (eds.), Handbook of the History of Logic, Vol. 10, Elsevier.
|
[pdf] [link] |
C. Glymour & F. Eberhardt (2008, updated 2012).
Hans Reichenbach.
Entry in the Stanford Encyclopedia of Philosophy.
|
[link] |
F. Eberhardt & C. Glymour (2008).
The Concept of Probability in the Mathematical Representation of Reality.
Translation of Hans Reichenbach's doctoral thesis, Open Court.
Abstract: Hans Reichenbach, a prominent member of the logical empiricist movement, and student and friend of Albert Einstein, was the foremost figure in philosophy of physics in the first half of the 20th century and one of the most influential advocates of the idea that the estimation of probabilities as limits of relative frequencies lies at the foundation of science. Recent scholarship has emphasized the Kantian sources of the logical positivist and scientific philosophy movement early in the last century. Reichenbach's lucid and original doctoral thesis, never before translated, throws new light on how the "Critique of Pure Reason" was understood in some quarters at the time. The thesis is the source of several of the themes in Reichenbach's still-influential posthumous book, "The Direction of Time", and shows the early focus of Reichenbach's thought on the interdependence of physics, probability, and epistemology, even before the appearance of the quantum theory. Of much more historical interest, Reichenbach's thesis anticipates in detail the substance of recent work in philosophy of science concerned with how stable macroscopic frequencies can be produced from microscopic causal relations, and attempts a derivation of a special case of the Markov condition relating causality and probability, a subject of widespread contemporary philosophical discussion.
|
[abstract] [link] |
general machine learning / data mining |
|
B. Bryan, F. Eberhardt & C. Faloutsos (2008).
Compact Similarity Joins.
In Proceedings of 24th International Conference on Data Engineering (ICDE), IEEE: 346-355.
Abstract: Similarity joins have attracted significant interest, with applications in Geographical Information Systems, astronomy, marketing analyses, and anomaly detection. However, all the past algorithms, although highly fine-tuned, suffer an output explosion if the query range is even moderately large relative to the local data density. Under such circumstances, the response time and the search effort are both almost quadratic in the database size, which is often prohibitive. We solve this problem by providing two algorithms that find a compact representation of the similarity join result, while retaining all the information in the standard join. Our algorithms have the following characteristics: (a) they are at least as fast as the standard similarity join algorithm, and typically much faster, (b) they generate significantly smaller output, (c) they provably lose no information, (d) they scale well to large data sets, and (e) they can be applied to any of the standard tree data structures. Experiments on real and realistic point-sets show that our algorithms are up to several orders of magnitude faster. |
[abstract] [pdf] [link] |
S. Autexier, F. Eberhardt, D. Hutter, M. Kohlhase & R. Angelhache (2003).
Distributed Knowledge Management and Version Control.
In Information Society Technologies Programme, Report n. D5.a.
Abstract: We propose an infrastructure for collaborative content management and version control for structured mathematical knowledge. This will enable multiple users to work jointly on mathematical theories with minimal interference.
We describe the API and the functionality needed to realize a CVS-like version control and distribution model. This architecture extends the CVS architecture in two ways, motivated by the specific needs of distributed management of structured mathematical knowledge on the Internet. On the one hand the one-level client/server model of CVS is generalized to a multi-level graph of client/server relations, and on the other hand the underlying change-detection tools take the math-specific structure of the data into account. |
[abstract] [pdf] |
none of the above |
|
F. Eberhardt (2015). Equidynamics and reliable reasoning about frequencies (review of M. Strevens' Tychomancy). In Metascience 24:180-183.
Abstract: Michael Strevens' Tychomancy presents a kind of Reichenbachian project. Reichenbach would have described it as a "coordination": an effort to align a theoretical framework with actual experiences of physical processes in the world. The aim is not to provide an interpretation of the theoretical framework, but to show how such an interpretation is applied in concrete cases. The focus is on what Strevens calls a "physical probability." A physical probability is an objective quantity concerning the occurrence of physical events given some generating process. Perhaps the least controversial example is that of atomic decay. The probability of decay of a particular particle is, supposedly, intrinsic to the physical process, and is not a matter of our lack of knowledge about that process. This line between probabilities that are free of an epistemic component and those that are not is notoriously difficult to draw. The distinction becomes more entangled in Tychomancy, since Strevens is interested in how we learn and reason about physical probabilities. |
[abstract] [link] |
C. Glymour, D. Danks, F. Eberhardt, B. Glymour, J. Ramsey, R. Scheines, P. Spirtes, C. Teng & J. Zhang (2010).
Actual Causation: A Stone Soup Essay.
In Synthese, 175(2):169-192.
|
[abstract] [pdf] [link] |