Cian Eastwood
I am a PhD student with Chris Williams at the University of Edinburgh and Bernhard Schölkopf at the Max Planck Institute for Intelligent Systems, Tübingen.
Previously, I studied Computer Science (BSc) at the National University of Ireland, Maynooth—spending time on exchange at the University of Toronto—and Artificial Intelligence (MSc) at the University of Edinburgh.
Broadly, my research interests lie at the intersection of distribution shift, representation learning, and causality. This often involves exploring different ways to deal with distribution shift, such as domain generalization, domain adaptation, and causal/disentangled representation learning. I am also excited by methods that combine causal discovery and (Bayesian) experimental design to learn causal relations efficiently.
- 01/2023: Our paper "DCI-ES: An Extended Disentanglement Framework with Connections to
Identifiability" was accepted to ICLR 2023.
- 12/2022: I gave a talk on "Distribution shift and causal/disentangled representations" to the Computational Intelligence, Learning, Vision, and Robotics (CILVR) Lab at New York University.
- 11/2022: I gave a talk on our Quantile Risk Minimization paper to Jonas Peters' group (University of Copenhagen).
- 11/2022: I gave talks entitled "Shift Happens: How can we best prepare?" to the groups of Xavier Boix (MIT) and Krikamol Muandet (CISPA).
- 10/2022: Selected as a Top Reviewer for NeurIPS 2022.
- 09/2022: Our paper "Probable Domain Generalization via Quantile Risk Minimization" was accepted to NeurIPS 2022.
- 07/2022: Our paper "On the DCI Framework for Evaluating Disentangled Representations: Extensions and Connections to Identifiability" was accepted to the UAI 2022 workshop on Causal Representation Learning.
- 04/2022: Selected as a "Highlighted Reviewer" of ICLR 2022.
- 03/2022: Our paper "Align-Deform-Subtract: An Interventional Framework for Explaining Object Differences" was accepted to the ICLR 2022 workshop on Objects, Structure and Causality.
- 02/2022: Excited to spend 4 months (August–December) at Meta AI, New York, as an AI Research Intern.
- 01/2022: Our paper "Source-Free Adaptation to Measurement Shift via Bottom-Up Feature Restoration" was accepted to ICLR 2022 (Spotlight).
- 12/2021: Our paper "Unit-Level Surprise in Neural Networks" won the Didactic Paper Award at the NeurIPS 2021 workshop I Can't Believe It's Not Better! and was accepted for publication in PMLR.
- 10/2021: Our paper "Unit-Level Surprise in Neural Networks" was accepted to the NeurIPS 2021 workshop I Can't Believe It's Not Better! (Spotlight).
- 04/2021: Started my ELLIS exchange at the Max Planck Institute for Intelligent Systems, Tübingen with Bernhard Schölkopf to work on causal representation learning.
- 09/2020: Our paper "Learning Object-Centric Representations of Multi-Object Scenes from Multiple Views" was accepted to NeurIPS 2020 (Spotlight).
- 07/2020: Attended the Machine Learning Summer School (MLSS) 2020.
DCI-ES: An Extended Disentanglement Framework with Connections to Identifiability
Cian Eastwood*,
Andrei Nicolicioiu*,
Julius von Kügelgen*,
Armin Kekić,
Frederik Träuble,
Andrea Dittadi,
Bernhard Schölkopf
ICLR 2023
We extend the DCI framework for evaluating disentangled representations and connect it to identifiability.
The key idea is to quantify the "explicitness" of a representation by the functional capacity required to use it.
Probable Domain Generalization via Quantile Risk Minimization
Cian Eastwood*,
Alexander Robey*,
Shashank Singh,
Julius von Kügelgen,
Hamed Hassani,
George J. Pappas,
Bernhard Schölkopf
NeurIPS 2022
Code
We propose Quantile Risk Minimization for learning predictors that generalize with probability α, recovering the causal predictor as α → 1.
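To make the objective concrete: instead of minimizing the average or worst-case risk across training domains, QRM minimizes the α-quantile of the per-domain risk distribution. The sketch below is a minimal numerical illustration over empirical per-domain risks, not the paper's actual (smoothed, differentiable) implementation; the function name `qrm_objective` is my own.

```python
import numpy as np

def qrm_objective(per_domain_risks, alpha=0.9):
    """Empirical alpha-quantile of the risks across training domains.

    As alpha -> 1 this approaches the worst-case (max) domain risk,
    mirroring the recovery of the causal predictor in the limit.
    """
    return float(np.quantile(per_domain_risks, alpha))
```

For example, with risks [0.1, 0.2, 0.3, 1.0] and α = 1.0 the objective equals the worst domain risk, 1.0, while smaller α tolerates poor performance on a (1 − α) fraction of domains.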
Align-Deform-Subtract: An Interventional Framework for Explaining Object Differences
Cian Eastwood*,
Nanbo Li*,
Chris Williams
ICLR 2022 Workshop: Objects, Structure and Causality
We propose a framework for explaining object-image differences in terms of the underlying object properties (e.g. pose, shape, appearance), leveraging image-space semantic alignments as counterfactual interventions on the underlying object properties.
On the DCI Framework for Evaluating Disentangled Representations: Extensions and Connections to Identifiability
Cian Eastwood*,
Andrei Nicolicioiu*,
Julius von Kügelgen*,
Armin Kekić,
Frederik Träuble,
Andrea Dittadi,
Bernhard Schölkopf
UAI 2022 Workshop: Causal Representation Learning
We connect DCI disentanglement to identifiability, and propose a new complementary notion of disentanglement based on the functional capacity required to use a representation.
Source-Free Adaptation to Measurement Shift via Bottom-Up Feature Restoration
Cian Eastwood*,
Ian Mason*,
Chris Williams,
Bernhard Schölkopf
ICLR 2022 (Spotlight)
Code
We identify a type of domain shift which can be resolved by restoring the *same* features and address it in the source-free setting by using softly-binned histograms to cheaply and flexibly align the marginal feature distributions.
Unit-Level Surprise in Neural Networks
Cian Eastwood*,
Ian Mason*,
Chris Williams
NeurIPS 2021 Workshop: I Can't Believe It's Not Better! (Spotlight, Didactic Award) and PMLR
Code / Video
We argue that unit-level surprise should be useful for: (i) determining which few parameters should
update to adapt quickly; and (ii) learning a modularization such that few modules need be adapted to
transfer.
Learning Object-Centric Representations of Multi-Object Scenes from Multiple Views
Nanbo Li,
Cian Eastwood,
Bob Fisher
NeurIPS 2020 (Spotlight)
Code / Video
We learn accurate, object-centric representations of 3D scenes by aggregating information from
multiple 2D views/observations.
A Framework for the Quantitative Evaluation of Disentangled Representations
Cian Eastwood,
Chris Williams
ICLR 2018
Code
We propose a framework and three metrics for quantifying the quality of "disentangled"
representations—disentanglement (D), completeness (C) and informativeness (I).
(Previously a spotlight presentation at the NeurIPS 2017 disentanglement workshop.)
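For context, the D and C metrics are computed from a matrix of feature importances relating representation codes to ground-truth factors. The sketch below illustrates entropy-based disentanglement and completeness scores under that setup; it is an assumed minimal implementation, with function names and ε-smoothing my own, not the paper's reference code.

```python
import numpy as np

def dci_disentanglement(R, eps=1e-12):
    """Disentanglement (D): each code should capture at most one factor.

    R: non-negative importance matrix of shape (num_codes, num_factors),
    e.g. feature importances from regressors predicting each factor.
    """
    num_codes, num_factors = R.shape
    # Per-code distribution over factors (eps avoids division by zero / log 0).
    P = R / (R.sum(axis=1, keepdims=True) + eps)
    # 1 - entropy (base num_factors): 1 when a code explains a single factor.
    H = -np.sum(P * np.log(P + eps) / np.log(num_factors), axis=1)
    D_i = 1.0 - H
    # Weight each code by its share of the total importance.
    rho = R.sum(axis=1) / R.sum()
    return float(np.sum(rho * D_i))

def dci_completeness(R, eps=1e-12):
    """Completeness (C): each factor should be captured by a single code."""
    num_codes, num_factors = R.shape
    P = R / (R.sum(axis=0, keepdims=True) + eps)
    H = -np.sum(P * np.log(P + eps) / np.log(num_codes), axis=0)
    return float(np.mean(1.0 - H))
```

A perfectly one-to-one importance matrix (e.g. the identity) scores D ≈ 1 and C ≈ 1, while a uniform matrix, where every code spreads its importance over all factors, scores D ≈ 0.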