Cian Eastwood

I am an ELLIS PhD student in machine learning advised by Chris Williams at the University of Edinburgh and Bernhard Schölkopf at the Max Planck Institute for Intelligent Systems, Tübingen.

Broadly, I am interested in representation and transfer learning, which has recently led me to causality (see below).

Previously, I studied Computer Science (BSc) at the National University of Ireland, Maynooth—spending time on exchange at the University of Toronto—and Artificial Intelligence (MSc) at the University of Edinburgh.

Email   /   CV   /   Google Scholar   /   Twitter   /   GitHub

Research

Machine learning (ML) methods have achieved remarkable successes on problems with independent and identically distributed (IID) data. However, real-world data is rarely IID: environments change, experimental conditions shift, new measurement devices are used, and selection biases are introduced. Current ML methods struggle when asked to transfer or adapt quickly to such out-of-distribution (OOD) data.

Causality [1] provides a principled mathematical framework for describing the distributional differences that arise from such system changes. In particular, it posits that observed system changes arise from changes to just a few underlying modules or mechanisms, which function independently of one another [2].
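This principle can be made concrete via the causal factorization of [2], which writes the joint distribution as a product of causal conditionals (mechanisms):

    p(x_1, \dots, x_n) = \prod_{i=1}^{n} p(x_i \mid \mathrm{pa}_i)

where \mathrm{pa}_i denotes the causal parents of x_i. A system change, such as an intervention or a new environment, then amounts to replacing just a few of the factors p(x_i \mid \mathrm{pa}_i) while the remaining mechanisms stay invariant.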

In my PhD studies, I am exploring how best to exploit the invariances observed across multiple environments or experimental conditions by viewing them as imprints of (or clues about) the underlying causal mechanisms. The central hypothesis is that these invariances reveal how the system can change, and thus how best to prepare for shifts that may occur at test time. My two main focuses are causal representation learning [3], i.e. the discovery of high-level abstract causal variables from low-level observations, and the learning of invariant predictors [4,5] to enable OOD generalization. I am also excited by causal discovery, where causal relations are learned from heterogeneous data to understand, e.g., cellular processes.
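As a concrete example of the invariant-predictor idea, invariant risk minimization [5] seeks a representation \Phi such that a single classifier w is simultaneously optimal in every training environment e:

    \min_{\Phi, w} \sum_{e \in \mathcal{E}} R^e(w \circ \Phi) \quad \text{subject to} \quad w \in \operatorname*{arg\,min}_{\bar{w}} R^e(\bar{w} \circ \Phi) \;\; \text{for all } e \in \mathcal{E}

where \mathcal{E} is the set of training environments and R^e the risk in environment e. The hope is that a predictor which is optimal across all observed environments relies only on invariant (causal) features, and thus transfers to unseen, shifted environments.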

[1] Pearl, J. (2009). Causality. Cambridge University Press.
[2] Peters, J., Janzing, D., & Schölkopf, B. (2017). Elements of causal inference: foundations and learning algorithms. MIT Press.
[3] Schölkopf, B. et al. (2021). Toward causal representation learning. Proceedings of the IEEE, 109(5), 612-634.
[4] Peters, J., Bühlmann, P., & Meinshausen, N. (2016). Causal inference by using invariant prediction: identification and confidence intervals. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 78(5), 947-1012.
[5] Arjovsky, M., Bottou, L., Gulrajani, I., & Lopez-Paz, D. (2019). Invariant risk minimization. arXiv:1907.02893.

Publications
Unit-level surprise in neural networks
Cian Eastwood, Ian Mason, Chris Williams
NeurIPS 2021 Workshop: I Can't Believe It's Not Better! (Spotlight)
Code

We argue that unit-level surprise should be useful for: (i) determining which few parameters should be updated to adapt quickly; and (ii) learning a modularization such that only a few modules need to be adapted to transfer.
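A minimal sketch of one way to realise this, assuming unit-level surprise is scored as the deviation of a unit's activation from running statistics collected on the training data (the paper's exact measure may differ):

    import torch

    class UnitSurprise:
        """Track per-unit activation statistics on the training data, then
        score new activations by how surprising they are (z-score).
        Illustrative sketch only; not the paper's exact formulation."""

        def __init__(self, num_units, momentum=0.01):
            self.mean = torch.zeros(num_units)
            self.var = torch.ones(num_units)
            self.momentum = momentum

        def update(self, acts):
            # acts: (batch, num_units), activations on training data
            m = self.momentum
            self.mean = (1 - m) * self.mean + m * acts.mean(dim=0)
            self.var = (1 - m) * self.var + m * acts.var(dim=0, unbiased=False)

        def surprise(self, acts):
            # acts: (batch, num_units), activations on shifted data
            z = (acts - self.mean) / (self.var.sqrt() + 1e-8)
            return z.abs().mean(dim=0)  # per-unit surprise scores

Units (or modules) with high surprise scores are then natural candidates for the few parameters to update.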

Source-Free Adaptation to Measurement Shift via Bottom-Up Feature Restoration
Cian Eastwood, Ian Mason, Chris Williams, Bernhard Schölkopf
Preprint (under review), 2021
Code

We identify a type of domain shift, measurement shift, that can be resolved by restoring the same features rather than learning new ones, and address it in the source-free setting by using softly-binned histograms to cheaply and flexibly align the marginal feature distributions.
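A minimal sketch of the soft-binning idea, assuming values are softly assigned to fixed bin centers via a temperature-scaled softmax (the paper's exact binning scheme and alignment loss may differ):

    import torch

    def soft_histogram(x, num_bins=8, temperature=0.1):
        """Differentiable histogram of a 1-D feature x, assumed scaled to [0, 1].
        Each value is softly assigned to fixed bin centers via a softmax over
        negative squared distances. Illustrative sketch only."""
        centers = torch.linspace(0.0, 1.0, num_bins)           # (num_bins,)
        d2 = (x.unsqueeze(1) - centers.unsqueeze(0)) ** 2      # (n, num_bins)
        assignments = torch.softmax(-d2 / temperature, dim=1)  # soft one-hot per value
        return assignments.mean(dim=0)                         # (num_bins,) histogram

Because the histogram is differentiable, the marginal distribution of each feature on the target data can be aligned to a stored source histogram by minimising, e.g., a KL-type divergence between the two.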

Learning Object-Centric Representations of Multi-Object Scenes from Multiple Views
Nanbo Li, Cian Eastwood, Bob Fisher
NeurIPS 2020 (Spotlight)
Code / Video

We learn accurate, object-centric representations of 3D scenes by aggregating information from multiple 2D views/observations.

A Framework for the Quantitative Evaluation of Disentangled Representations
Cian Eastwood, Chris Williams
ICLR 2018
Code

We propose a framework and three metrics for quantifying the quality of "disentangled" representations: disentanglement, completeness, and informativeness.

(Previously presented at a NeurIPS 2017 workshop)
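A minimal sketch of how the three metrics can be computed, assuming a non-negative importance matrix R of shape (num_codes, num_factors) obtained by training one regressor per factor (e.g. random forests or lasso) to predict that factor from the codes:

    import numpy as np

    def disentanglement(R):
        """Each code should capture at most one factor: 1 minus the entropy of
        each code's importance distribution over factors, weighted by the
        code's share of total importance."""
        P = R / (R.sum(axis=1, keepdims=True) + 1e-12)
        H = -np.sum(P * np.log(P + 1e-12), axis=1) / np.log(R.shape[1])
        rho = R.sum(axis=1) / (R.sum() + 1e-12)
        return np.sum(rho * (1.0 - H))

    def completeness(R):
        """Each factor should be captured by a single code: 1 minus the entropy
        of each factor's importance distribution over codes, averaged."""
        P = R / (R.sum(axis=0, keepdims=True) + 1e-12)
        H = -np.sum(P * np.log(P + 1e-12), axis=0) / np.log(R.shape[0])
        return np.mean(1.0 - H)

    # Informativeness is the held-out prediction error of the per-factor
    # regressors themselves (lower error = more informative representation).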

Website source