I am a PhD student at The University of Edinburgh, supervised by Prof. Robert Fisher.

I have been a member of Prof. Robert Fisher's computer vision lab at The University of Edinburgh since 2017, when I was studying for an MSc in Artificial Intelligence. After receiving my MSc degree (Distinction), I worked in the lab on robot vision as a research assistant for several months until I started my PhD in 2018. I received my bachelor's degree from Wuhan University of Technology in China (Outstanding Engineer, 2016).

Research

My research interests lie in machine learning and computer vision, particularly in probabilistic generative models. I am currently working on generative neural representation learning of multi-object scenes, where I look into scene factorisation at multiple levels (e.g. object-level and feature-level) to identify the underlying factors that explain the scene observations.

Nanbo Li

李南伯
nanbo.li@ed.ac.uk

Publications

Object-Centric Representation Learning with Generative Spatial-Temporal Factorization
Li Nanbo, Muhammad Ahmed Raza, Hu Wenbin, Zhaole Sun, Robert Fisher
NeurIPS, 2021
Paper / Code (to appear) / Video (to appear)

We propose a generative framework that factorises the entangled effects of observer motion and scene object dynamics from a sequence of observations, and constructs spatial representations of scene objects.

Duplicate Latent Representation Suppression for Multi-object Variational Autoencoders
Li Nanbo, Robert Fisher
BMVC, 2021
Paper / Code (to appear) / Video (to appear)

We propose a differentiable prior that can suppress duplicate latent object representations and achieve better variational posterior approximation.

Learning Object-Centric Representations of Multi-Object Scenes from Multiple Views
Li Nanbo, Cian Eastwood, Robert Fisher
NeurIPS, 2020   (Spotlight Presentation)
Paper / Supplemental / Code / Data / Video

We propose a generative framework for learning accurate, object-centric scene representations from multiple views.

SDF-MAN: Semi-Supervised Disparity Fusion with Multi-Scale Adversarial Networks
Can Pu, Radim Tylecek, Li Nanbo, Robert Fisher
Remote Sensing, 2019
Paper / Video

We fuse depth estimates from different kinds of depth sources to improve accuracy.

DUGMA: Dynamic Uncertainty-Based Gaussian Mixture Alignment
Can Pu, Li Nanbo, Radim Tylecek, Robert Fisher
3DV, 2018   (Oral Presentation)
Paper / Code

We propose an uncertainty-driven approach to address classic point set registration problems.

Hybrid Multi-camera Visual Servoing to Moving Target
Hanz Cuevas Velásquez*, Li Nanbo*, Radim Tylecek, Marcelo Saval-Calvo, Robert Fisher     (*: equal contribution)
IROS, 2018
Paper / Video

We built a system that fuses visual information from different cameras to perform accurate robot visual servoing.

Website source