|Date||May 06, 2014|
|Title||Learning Structured, Robust, and Multimodal Models|
|Abstract||Building intelligent systems that are capable of extracting meaningful representations from high-dimensional data lies at the core of solving many Artificial Intelligence tasks, including visual object recognition, information retrieval, speech perception, and language understanding.|
In this talk I will first introduce a broad class of hierarchical probabilistic models called Deep Boltzmann Machines (DBMs) and show that DBMs can learn useful hierarchical representations from large volumes of high-dimensional data, with applications in information retrieval, object recognition, and speech perception. I will then describe a new class of more complex models that combine Deep Boltzmann Machines with structured hierarchical Bayesian models, and show how these models can learn a deep hierarchical structure for sharing knowledge across hundreds of visual categories, which allows accurate learning of novel visual concepts from few examples. Finally, I will introduce deep models that are capable of extracting a unified representation that fuses together multiple data modalities. I will show that on several tasks, including modelling images and text, and video and sound, these models significantly improve upon many of the existing techniques.
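For readers unfamiliar with the model family, a Deep Boltzmann Machine is built by stacking Restricted Boltzmann Machines (RBMs). The sketch below trains a single RBM with one step of contrastive divergence (CD-1); it is an illustrative toy, not the speaker's implementation, and all class names, hyperparameters, and the toy dataset are assumptions chosen for demonstration.

```python
# Illustrative sketch: one Restricted Boltzmann Machine (RBM), the
# building block stacked to form Deep Boltzmann Machines. Names and
# hyperparameters are assumptions for demonstration only.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, size=(n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible biases
        self.c = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.c)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b)

    def cd1_step(self, v0):
        # CD-1: positive phase, one Gibbs reconstruction, negative phase.
        h0 = self.hidden_probs(v0)
        h_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_probs(h_sample)
        h1 = self.hidden_probs(v1)
        n = len(v0)
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b += self.lr * (v0 - v1).mean(axis=0)
        self.c += self.lr * (h0 - h1).mean(axis=0)
        return np.mean((v0 - v1) ** 2)  # reconstruction error

# Toy usage: binary data containing two repeated patterns.
data = np.array([[1, 1, 0, 0], [0, 0, 1, 1]] * 25, dtype=float)
rbm = RBM(n_visible=4, n_hidden=2)
errs = [rbm.cd1_step(data) for _ in range(200)]
```

In a DBM, several such layers are trained and then jointly refined, so higher layers model increasingly abstract structure in the data.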
|Bio||Ruslan Salakhutdinov received his PhD in machine learning (computer science) from the University of Toronto in 2009. After spending two post-doctoral years at the Massachusetts Institute of Technology Artificial Intelligence Lab, he joined the University of Toronto as an Assistant Professor in the Department of Computer Science and Department of Statistics. Dr. Salakhutdinov's primary interests lie in statistical machine learning, Bayesian statistics, Deep Learning, and large-scale optimization. He is the recipient of the Early Researcher Award, Connaught New Researcher Award, Alfred P. Sloan Research Fellowship, Microsoft Faculty Fellowship, and Google Faculty Award, and is a Fellow of the Canadian Institute for Advanced Research.|
NOTE: This is a joint ANC/ILCC Seminar. There will be a more technical talk on Monday at 11 a.m. in IF-4.31/4.33.