Cian Eastwood

I am an ELLIS PhD student in machine learning advised by Chris Williams at the University of Edinburgh and Bernhard Schölkopf at the Max Planck Institute for Intelligent Systems, Tübingen.

Broadly, I am interested in representation and transfer learning, which has recently led me to causality (see below).

Previously, I studied Computer Science (BSc) at the National University of Ireland, Maynooth—spending time on exchange at the University of Toronto—and Artificial Intelligence (MSc) at the University of Edinburgh.

Email   /   CV   /   Google Scholar   /   Twitter   /   Github

profile photo
Research

Machine learning (ML) methods have achieved remarkable successes on problems with independent and identically distributed (IID) data. However, real-world data is not IID—environments change, experimental conditions shift, new measurement devices are used, and selection biases are introduced. Current ML methods struggle when asked to transfer or adapt quickly to such out-of-distribution (OOD) data.

Causality [1] provides a principled mathematical framework for describing the distributional differences that arise from such system changes. In particular, it supposes that observed system changes arise from changes to just a few underlying modules or mechanisms, which function independently of one another [2].

In my PhD studies, I am exploring how best to exploit the invariances observed across multiple environments or experimental conditions by viewing them as imprints of (or clues about) the underlying causal mechanisms. The central hypothesis is that these invariances reveal how the system can change, and thus how best to prepare for shifts that may occur in the future. My two main focuses are causal representation learning [3]—the discovery of high-level, abstract causal variables from low-level observations—and the learning of invariant predictors [4,5] to enable OOD generalization (a minimal code sketch of this idea follows the references below). I am also excited by causal discovery, where causal relations are learned from heterogeneous data to understand, for example, cellular processes.

[1] Pearl, J. (2009). Causality. Cambridge University Press.
[2] Peters, J., Janzing, D., & Schölkopf, B. (2017). Elements of causal inference: foundations and learning algorithms. MIT Press.
[3] Schölkopf, B. et al. (2021). Toward causal representation learning. Proceedings of the IEEE, 109(5), 612-634.
[4] Peters, J., Bühlmann, P., & Meinshausen, N. (2016). Causal inference by using invariant prediction: identification and confidence intervals. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 78(5), 947-1012.
[5] Arjovsky, M., Bottou, L., Gulrajani, I., & Lopez-Paz, D. (2019). Invariant risk minimization. arXiv:1907.02893.
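
As a concrete illustration of the invariant-prediction idea in [4,5], below is a minimal sketch of an IRM-style training objective in PyTorch. It is a simplified, illustrative version for binary classification (the function names and details are my own), not code from any of the papers above.

    import torch
    import torch.nn.functional as F

    def irm_penalty(logits, targets):
        # Squared gradient of the risk w.r.t. a fixed "dummy" classifier scale; this is
        # small when the classifier on top of the features is already optimal for this environment.
        scale = torch.tensor(1.0, requires_grad=True)
        loss = F.binary_cross_entropy_with_logits(logits * scale, targets)
        grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
        return grad.pow(2).sum()

    def irm_objective(model, env_batches, penalty_weight=1.0):
        # env_batches: a list of (inputs, targets) pairs, one batch per environment.
        risk, penalty = 0.0, 0.0
        for x, y in env_batches:
            logits = model(x).squeeze(-1)
            risk = risk + F.binary_cross_entropy_with_logits(logits, y)
            penalty = penalty + irm_penalty(logits, y)
        n = len(env_batches)
        return risk / n + penalty_weight * (penalty / n)

Minimizing this objective trades off average predictive risk against a penalty that encourages features supporting a single predictor that is simultaneously (near-)optimal across all environments.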

News
  • 04/2022: Selected as a "Highlighted Reviewer" of ICLR 2022.
  • 03/2022: Our paper "Align-Deform-Subtract: An Interventional Framework for Explaining Object Differences" was accepted to the ICLR 2022 workshop on Objects, Structure and Causality.
  • 02/2022: Excited to spend 4 months (July–October) at Meta AI, New York, as an AI Research Intern.
  • 01/2022: Our paper "Source-Free Adaptation to Measurement Shift via Bottom-Up Feature Restoration" was accepted to ICLR 2022 (Spotlight).
  • 12/2021: Our paper "Unit-Level Surprise in Neural Networks" won the Didactic Paper Award at the NeurIPS 2021 workshop I Can't Believe It's Not Better! and was accepted for publication in PMLR.
  • 10/2021: Our paper "Unit-Level Surprise in Neural Networks" was accepted to the NeurIPS 2021 workshop I Can't Believe It's Not Better! (Spotlight).
  • 04/2021: Started my ELLIS exchange at the Max Planck Institute for Intelligent Systems, Tübingen with Bernhard Schölkopf to work on causal representation learning.
  • 09/2020: Our paper "Learning Object-Centric Representations of Multi-Object Scenes from Multiple Views" was accepted to NeurIPS 2020 (Spotlight).
  • 07/2020: Attended the Machine Learning Summer School (MLSS) 2020.
Publications
Align-Deform-Subtract: An Interventional Framework for Explaining Object Differences
Cian Eastwood*, Nanbo Li*, Chris Williams
ICLR 2022 Workshop: Objects, Structure and Causality

We propose a framework for explaining object-image differences in terms of the underlying object properties (e.g. pose, shape, appearance), leveraging image-space semantic alignments as counterfactual interventions on those properties.

Source-Free Adaptation to Measurement Shift via Bottom-Up Feature Restoration
Cian Eastwood*, Ian Mason*, Chris Williams, Bernhard Schölkopf
ICLR 2022 (Spotlight)
Code

We identify a type of domain shift (measurement shift) that can be resolved by restoring the same features (rather than learning new ones), and address it in the source-free setting by using softly-binned histograms to cheaply and flexibly align the marginal feature distributions.
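
For intuition, the marginal-alignment step could look roughly like the sketch below. This is my own simplification, assuming Gaussian soft-binning and a per-dimension KL alignment loss; the paper's exact formulation may differ (see the Code link above for the actual implementation).

    import torch

    def soft_histogram(values, bin_centers, bandwidth=0.1):
        # Softly assign each scalar feature value to histogram bins (Gaussian kernel),
        # then normalise so the bin masses sum to one.
        w = torch.exp(-((values[:, None] - bin_centers[None, :]) ** 2) / (2 * bandwidth ** 2))
        hist = w.sum(dim=0)
        return hist / hist.sum().clamp_min(1e-8)

    def marginal_alignment_loss(target_feats, source_hists, bin_centers):
        # Encourage each target-domain marginal feature distribution to match the
        # histogram stored for that feature dimension on the source domain.
        loss = 0.0
        for d in range(target_feats.shape[1]):
            p = source_hists[d]                                  # saved on the source domain
            q = soft_histogram(target_feats[:, d], bin_centers)  # current target batch
            loss = loss + torch.sum(p * (torch.log(p.clamp_min(1e-8)) - torch.log(q.clamp_min(1e-8))))
        return loss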

Unit-Level Surprise in Neural Networks
Cian Eastwood*, Ian Mason*, Chris Williams
NeurIPS 2021 Workshop: I Can't Believe It's Not Better! (Spotlight, Didactic Award) and PMLR
Code / Video

We argue that unit-level surprise should be useful for: (i) determining which (few) parameters should be updated for quick adaptation; and (ii) learning a modularization such that only a few modules need be adapted to transfer.
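
As a toy illustration of the general idea only (not the paper's actual surprise measure, which may differ; see the Code link above), per-unit surprise could be operationalised as a z-score of each unit's activation statistics on new data relative to statistics stored during training, with only the most surprised units selected for adaptation.

    import torch

    def unit_surprise(activations, source_mean, source_std, eps=1e-8):
        # Absolute z-score of each unit's mean activation on new data,
        # relative to the mean/std recorded for that unit on the training data.
        new_mean = activations.mean(dim=0)
        return (new_mean - source_mean).abs() / (source_std + eps)

    def select_units_to_adapt(activations, source_mean, source_std, k=10):
        # Pick the k most "surprised" units; only parameters feeding these units would be updated.
        surprise = unit_surprise(activations, source_mean, source_std)
        return torch.topk(surprise, k).indices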

Learning Object-Centric Representations of Multi-Object Scenes from Multiple Views
Nanbo Li, Cian Eastwood, Bob Fisher
NeurIPS 2020 (Spotlight)
Code / Video

We learn accurate, object-centric representations of 3D scenes by aggregating information from multiple 2D views/observations.

A Framework for the Quantitative Evaluation of Disentangled Representations
Cian Eastwood, Chris Williams
ICLR 2018
Code

We propose a framework and three metrics for quantifying the quality of "disentangled" representations—disentanglement, completeness and informativeness.

(Previously a spotlight presentation at the NeurIPS 2017 disentanglement workshop)
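
For context, the disentanglement (D) and completeness (C) scores are computed from a nonnegative importance matrix R, whose entry R[i, j] says how important code dimension i is for predicting ground-truth factor j (e.g. lasso or random-forest feature importances); informativeness is simply the predictors' test error. The sketch below is a rough rendering of that computation under these assumptions, not the released code (see the Code link above).

    import numpy as np

    def entropy(p, base):
        # Entropy of a normalised importance profile, in the given base.
        p = p / p.sum()
        p = p[p > 0]
        return float(-(p * np.log(p)).sum() / np.log(base))

    def disentanglement(R):
        # 1 minus the entropy of each code's importance profile over factors, weighted by
        # how important that code is overall. Assumes R is nonnegative with no all-zero rows.
        num_codes, num_factors = R.shape
        per_code = np.array([1.0 - entropy(R[i], base=num_factors) for i in range(num_codes)])
        code_weights = R.sum(axis=1) / R.sum()
        return float((per_code * code_weights).sum())

    def completeness(R):
        # 1 minus the entropy of each factor's importance profile over codes.
        num_codes, num_factors = R.shape
        per_factor = np.array([1.0 - entropy(R[:, j], base=num_codes) for j in range(num_factors)])
        return float(per_factor.mean())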

Website source