Invited Speakers

 

 

 

Prof. Adrian Hilton (University of Surrey, UK)

4D Vision for Interactive Content Production

Over the past decade, advances in computer vision have enabled the 3D reconstruction of dynamic scenes from multiple-view video. This has allowed video-based free-viewpoint rendering with interactive control of the camera viewpoint, achieving a visual realism comparable to the captured video. The technology, initially pioneered for highly controlled indoor scenes, has been extended to free-viewpoint rendering of outdoor scenes such as sports for TV production. Free-viewpoint rendering is, however, limited to replay of the captured performance. This talk will present results of recent research which has achieved 4D performance capture by spatio-temporal alignment of frames across multiple sequences of 3D reconstruction. 4D performance capture opens up the possibility of editing and reusing captured performance for animation. Recent research has introduced the 4D Parametric Motion Graph, which enables animation from 4D actor performance capture. Results of this approach demonstrate the potential for video-realistic animated content production with the interactivity of both motion and viewpoint associated with conventional computer characters, whilst maintaining the realism of video capture. Free-viewpoint animation and rendering on both PC and mobile platforms will be demonstrated. 4D content production allows high-quality characters to be produced efficiently from video capture without the requirement for extensive manual authoring or editing.
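For readers unfamiliar with the underlying structure, the sketch below is a hedged toy illustration of the classic motion-graph idea that the 4D Parametric Motion Graph builds on: captured motion segments become graph nodes, compatible transitions become edges, and interactive animation is a walk over the graph. The segment names and the graph itself are invented for illustration and are not taken from the talk.

# Toy sketch of a motion graph: nodes are motion segments, edges are
# transitions judged compatible; interactive animation is a walk over
# this graph. Names and structure are illustrative only.
import random

motion_graph = {
    "idle":       ["idle", "walk_start"],
    "walk_start": ["walk_loop"],
    "walk_loop":  ["walk_loop", "walk_stop", "turn_left"],
    "turn_left":  ["walk_loop"],
    "walk_stop":  ["idle"],
}

def play(graph, start="idle", steps=10, seed=7):
    """Produce a sequence of motion segments by walking the graph.
    A real system would pick the outgoing edge that best matches the
    user's requested motion and viewpoint instead of choosing at random."""
    rng = random.Random(seed)
    current, sequence = start, [start]
    for _ in range(steps):
        current = rng.choice(graph[current])
        sequence.append(current)
    return sequence

print(" -> ".join(play(motion_graph)))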

Adrian Hilton, BSc (Hons), DPhil, CEng, is Professor of Computer Vision and Graphics and Director of the Centre for Vision, Speech and Signal Processing at the University of Surrey, UK. He leads research investigating the use of computer vision for applications in entertainment content production, visual interaction and clinical analysis. His interest is in robust computer vision to model and understand real-world scenes, and his work on bridging the gap between real and computer-generated imagery combines the fields of computer vision, graphics and animation to investigate new methods for reconstruction, modelling and understanding of the real world from images and video. Applications include: sports analysis (soccer, rugby and athletics), 3D TV and film production, visual effects, character animation for games, digital doubles for film and facial animation for visual communication. Contributions include technologies for the first hand-held 3D scanner, modelling of people from images and 3D video for games, broadcast and film production. Current research is focused on video-based measurement in sports, multiple-camera systems in film and TV production, and 3D video for highly realistic animation of people and faces. Research is conducted in collaboration with UK companies and international institutions in the creative industries. Adrian is currently the Principal Investigator of the multi-million-pound EPSRC Programme Grant S3A: ‘Future Spatial Audio for Immersive Listener Experience at Home’ (2013-2018), and he also leads several EU and UK industry projects. Adrian currently holds a 5-year Royal Society Wolfson Research Merit Award (2013-2018).

 


Prof. Alan Chalmers (University of Warwick, UK)

Multisensory Virtual Experiences

Virtual environments (VEs) offer the possibility of simulating potentially complex, dangerous or threatening real-world experiences in a safe, repeatable and controlled manner. They can provide a powerful and fully customizable tool for a personalised experience and allow attributes of human behaviour in such environments to be examined. However, to accurately simulate reality, VEs need to be based on physical simulations and stimulate multiple senses (visuals, audio, smell, touch, temperature, etc.) in a natural manner. Such environments are known as Real Virtuality. Natural delivery of multiple senses is especially important as a human’s perception may be significantly affected by interactions between all these senses. In particular, cross-modalities (the influence of one sense on another) can substantially alter the way in which a scene is perceived and the way the user behaves. At present there is no simulator available that can offer such a full sensory real-world experience. A key reason is that today’s computers are not yet powerful enough to simulate the full physical accuracy of a real scene for multiple senses in real-time.

This talk discusses how it may be possible to achieve high-fidelity multisensory virtual experiences now, because of our brain’s inability to process all the sensory input we receive at any given moment.

Alan Chalmers is a Royal Society Industrial Fellow and Professor of Visualisation at WMG, University of Warwick, UK. He is Founder and Innovation Director of goHDR Ltd. He has an MSc with distinction from Rhodes University (1985) and a PhD from the University of Bristol (1991). He is Honorary President of Afrigraph and a former Vice President of ACM SIGGRAPH. He has published over 220 papers in journals and international conferences on high-fidelity virtual environments, multi-sensory perception, virtual archaeology and HDR imaging, and has successfully supervised 32 PhD students. Chalmers is Chair of EU COST Action “IC1005 HDR”, which is co-ordinating HDR research across Europe to develop a new, efficient open-source standard for HDR video to facilitate its widespread uptake. In addition, he is PI of the EPSRC-JLR Psi project entitled “Visualisation and Virtual Experiences”.


Prof. Anthony Steed (University College London, UK)

Beaming: Asymmetric Telepresence Systems

Beaming is the process of virtually teleporting to a destination. Based (loosely) on the idea from Star Trek, a visitor is transported to a destination so that they can interact with other people (locals) there. We achieve this using a combination of high-end virtual reality equipment, robotic platforms and scene reconstruction software.

In this talk, I will present some of the novel situated display configurations that we have developed at UCL for the Beaming project. I will discuss the requirements for having a remote person appear as if virtually present in the room, and present some novel multi-view display prototypes that we have built.

Professor Anthony Steed is Head of the Virtual Environments and Computer Graphics group at University College London. His research interests extend from virtual reality systems, through to mobile mixed-reality systems, and from system development through to measures of user response to virtual content. He has published over 160 papers in the area, and is the main author of the recent book Networked Graphics: Building Networked Games and Virtual Environments. He is also founder and CTO of Animal Systems, creators of Chirp (chirp.io).

 


Dr. Hiroshi Shimodaira (University of Edinburgh, UK)

Speech-driven talking head from articulatory features

We present a talking head in which the lips and head motions are controlled using articulatory movements estimated from speech. The advantage of articulatory features over acoustic speech features such as MFCCs and F0 is that articulatory features have a close link with head movements and can also drive the lips. A phone-sized HMM-based inversion mapping system is trained in a semi-supervised fashion to estimate articulatory features from speech. Experimental evaluation demonstrates that the proposed system outperforms a baseline system that employs low-level speech features.
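To make the data flow concrete, here is a minimal sketch in Python, assuming NumPy is available; the phone-sized HMM inversion mapping of the actual system is replaced by a plain least-squares regressor, and the mapping from articulatory trajectories to lip and head parameters is a hypothetical placeholder rather than the method described in the talk.

# Illustrative sketch only: a linear least-squares regressor stands in for the
# phone-sized HMM inversion mapping described in the abstract, and the final
# mapping to lip/head parameters is a hypothetical placeholder.
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: acoustic feature vectors (e.g. MFCC-like, 13-D) paired
# with articulatory feature vectors (e.g. EMA coil positions, 12-D).
T, A_DIM, X_DIM = 500, 13, 12
acoustic_train = rng.normal(size=(T, A_DIM))
true_map = rng.normal(size=(A_DIM, X_DIM))
articulatory_train = acoustic_train @ true_map + 0.05 * rng.normal(size=(T, X_DIM))

# "Train" the inversion mapping: ordinary least squares from acoustics to articulation.
W, *_ = np.linalg.lstsq(acoustic_train, articulatory_train, rcond=None)

def invert(acoustic_frames):
    """Estimate articulatory trajectories from acoustic frames (rows = frames)."""
    return acoustic_frames @ W

def drive_talking_head(articulatory_frames):
    """Hypothetical control mapping: derive lip opening and a head-pitch signal
    from the estimated articulatory trajectory (placeholder channel choices)."""
    lip_opening = articulatory_frames[:, 0]                        # e.g. lower-lip vertical channel
    head_pitch = 0.1 * articulatory_frames[:, 1:4].mean(axis=1)    # smoothed proxy signal
    return {"lip_opening": lip_opening, "head_pitch": head_pitch}

# Run the pipeline on unseen acoustic frames.
test_frames = rng.normal(size=(50, A_DIM))
controls = drive_talking_head(invert(test_frames))
print(controls["lip_opening"].shape, controls["head_pitch"].shape)  # (50,) (50,)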

Hiroshi Shimodaira is a lecturer in the School of Informatics at the University of Edinburgh, where he is a member of the Centre for Speech Technology Research (CSTR) and the Institute for Language, Cognition and Computation (ILCC). He received a Ph.D. degree in Information Science from Tohoku University in 1988. Before he joined CSTR, he was an associate professor and co-director of the Intelligent Information Processing Laboratory at the Japan Advanced Institute of Science and Technology (JAIST). His research fields cover speech recognition, online handwriting recognition and medical image processing, and he is particularly interested in statistical models for speech-driven 3D animated characters.


Dr. Hubert P. H. Shum (Northumbria University, UK)

Human Motion Synthesis for Computer Animation and Games

Due to recent advances in motion sensing and computer graphics hardware, the use of human motion in computer animation and games has become more popular in the past few years. On one hand, lightweight, real-time motion capture can be performed using depth cameras such as the Microsoft Kinect, which facilitates a natural user interface for human-computer interaction. On the other hand, computer hardware can now handle complex simulations involving human movement and body deformation, such that the motion of virtual characters can be generated in real-time. In this talk, I will present my research on human motion synthesis, focusing on the areas of posture reconstruction, movement synthesis and crowd simulation. I will show how artificial intelligence and machine learning approaches can be adapted to solve complex problems in the domain.

Dr. Hubert P. H. Shum is a Senior Lecturer (Associate Professor) at Northumbria University and the Programme Leader of BSc (Hons) Computer Animation and Visual Effects. Before joining the university, he worked as a Lecturer at the University of Worcester, a post-doctoral researcher at RIKEN, Japan, and a research assistant at the City University of Hong Kong. He received his PhD degree from the School of Informatics at the University of Edinburgh. He has received more than £110,000 from Northumbria University to hire PhD students and purchase research equipment, as well as £125,000 from the EPSRC to facilitate his research projects. His research interests include character animation, machine learning, human-computer interaction and physical simulations. More information can be found at: http://info.hubertshum.com

 


Dr. Kartic Subr (Heriot-Watt University, UK)

Photo-realistic Graphics in Real-time: A Speculative Perspective

I will present my talk in three phases. In the first phase, I will briefly describe my research focus and ongoing work in this direction. Then, I will present my expectation of futuristic technology that will rely on a combination of hardware and algorithms allowing computer imagery to be generated at real-time rates (40-60 images per second).

Next, I will mention some key challenges in the current approaches that would need to be overcome or side-stepped. Finally, I will compare the potential of two favourite approaches, rasterization and ray tracing, in the context of futuristic real-time graphics.

Kartic Subr is a Royal Society University Research Fellow and Associate Professor at the School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh. His research focuses on the analysis and development of stochastic sampling strategies applied to high-quality image synthesis, computational photography and computer vision. Kartic obtained his MS in Visual Computing and PhD (2008) in Computer Science from the University of California, Irvine. During his PhD, he had short stints at Rhythm & Hues Studios (Los Angeles) and NVIDIA Corporation (Santa Clara), and worked as an instructor at Columbia University (New York). Kartic later held post-doctoral positions at INRIA-Grenoble (2008-2010) and Walt Disney Research (2013-2014). He received a Newton International Fellowship from the Royal Society and the Royal Academy in 2011 and was an honorary research associate at University College London (2011-2013).

 


Prof. Kenji Mase (Nagoya University, Japan)

E-coaching: Wearable and Ubiquitous Sensing and Life-logging for Coaching

Ubiquitous and wearable sensing is spreading widely across daily living, business and sports scenes. The sensed data is a good source for the analysis, summarization and simulation of human living. In this talk, I will introduce three works: wearable sensing for health care, a manufacturing-skill coaching assistant, and a multi- and free-viewpoint sports viewer.

The health-care wearable sensing work is a collaborative project with e-textile and medical researchers. We have recently developed a smart stretch textile for spirometry sensing and a smart pressure bed-sheet for health-care purposes. They will visualize the vital conditions of patients for medical coaches, i.e. nurses and caregivers. Next, in the manufacturing-skill coaching assistant project, we have developed a system with cameras and wearable motion sensors to evaluate metal-filing skills. The scores and visualized motions are fed back to the trainees for discussion with trainers. Lastly, the sports-related project targets an end-to-end multi- and free-viewpoint viewing service. A sports-event summarization system built from many cameras and sensors, together with content-viewing interfaces, will benefit players, coaches and general audiences by sharing how to play and/or watch the games.

 


Dr. Kenny Mitchell (Disney Research)

Reality Mixer

Kenny Mitchell is an Imagineer and research head for the Walt Disney Company Ltd, with a lab located at Edinburgh University’s business campus (an outpost of Disney Research Zurich). Over the past 16 years he has shipped games using high-end graphics technologies including voxels, volumetric light scattering, motion blur and curved surfaces. His PhD introduced the use of real-time 3D for information visualisation on consumer hardware, including a novel recursive perspective projection technique.

 

In between contributing to the technically acclaimed racing game Split Second, Spielberg's Boom Blox (a BAFTA award winner), Disney Infinity and the Harry Potter franchise games, he is involved in developing new intellectual properties. His work on video games and mixed-reality technologies includes collaboration with all Disney business units and many successful funded university collaborations. He is a member of the EPSRC strategic advisory network and a number of computing school advisory boards. He is the most senior Disney Research representative in the UK.


Dr. Kenshi Takayama (National Institute of Informatics, Japan)

Sketch-based interfaces for computer graphics content creation

In this talk, I will present some of my past work on designing user interfaces for creating various types of visual content used in computer graphics, with an emphasis on sketch-based approaches. First, I will present a paintbrush interface for quickly copying and pasting geometric details from one surface mesh to another, in the context of 3D modeling through digital sculpting. Second, I will present a sketch-based interface for creating a high-quality coarse quad mesh from an input dense triangle mesh, an important task in typical CG production pipelines. Finally, I will present two approaches to volumetric modeling where 3D models can be cut arbitrarily and show their cross-sections consistently, one approach based on texture synthesis and the other inspired by 2D vector graphics representations.

Kenshi Takayama has been an assistant professor at the National Institute of Informatics (NII) since September 2014. His general research interest lies in geometric modeling and user interface design for computer graphics. Before joining NII, he completed his Bachelor's (2007), Master's (2009), and PhD (2012) degrees under the supervision of Prof. Takeo Igarashi, and then carried out postdoctoral research under the supervision of Prof. Olga Sorkine-Hornung at ETH Zurich, Switzerland.

 


Dr. Manfred Lau (Lancaster University, UK)

3D Modeling and User Interfaces for Digital Fabrication

The recent trend of rapid prototyping technologies such as 3D printers and laser cutters will lead to an increased demand for algorithms and user interfaces for digital fabrication. I will discuss a method for converting virtual 3D furniture models into fabricatable parts and connectors that can then be built into real-world furniture. Then, I will describe a number of natural user interface tools for digital fabrication. The motivation for such interfaces is that easy-to-use tools for 3D modeling and fabrication are still lacking, despite years of research in developing modeling tools for novice users. I will show a photo-based interface for sketching new objects, a sketch-based interface for designing and fabricating chairs, a tangible user interface for modeling 3D shapes in an augmented reality environment, and a mixed reality framework for modeling everyday objects with hand gestures.

Manfred Lau is an Assistant Professor (Lecturer in the UK) in the School of Computing and Communications at Lancaster University, UK. He was a post-doctoral researcher working with Prof. Takeo Igarashi in Tokyo, Japan, on the JST ERATO Igarashi Design Interface Project. He received his B.Sc. degree in Computer Science from Yale University, and his Ph.D. degree in Computer Science, supervised by Prof. James Kuffner, from Carnegie Mellon University. Manfred’s research interests are in computer graphics, human-computer interaction, digital fabrication, geometric modeling, and animation. His recent research in 3D modeling and fabrication focuses on building natural user interfaces for the layperson to model, design, and fabricate their own products. His work on 3D geometry modeling draws inspiration from the areas of tangible user interfaces and industrial design. His Ph.D. thesis work explores a combination of motion planning techniques and captured data to generate realistic crowd animation for games and films. He is also interested in the areas of robotics and machine learning.

 


Dr. Miguel Nacenta (University of St Andrews, UK)

Playing with perception, representation, and presentation: three ways to subvert current visualization techniques

In this talk I present three recent projects that address different aspects of the visualization pipeline. All three projects are examples that show how slightly different approaches to vision and graphics can result in very different ways to look at data and images. The first project, Gaze-Contingent Depth of Field, addresses alternative ways of representing 3D information on flat displays. The second project is Transmogrification, a technique that allows us to change the presentation of any graphic and turn any 2D representation into an arbitrary set of other 2D representations with a very flexible touch interface. Finally, FatFonts are a novel hybrid way to represent numbers, using a special type of digit that turns numeric tables into visualisations.

Miguel Nacenta is a co-founder of the St Andrews Human-Computer Interaction group (SACHI) and a lecturer in human-computer interaction at the School of Computer Science, University of St Andrews. His main expertise is in input and output techniques for displaying and accessing information. His research encompasses the use of multi-touch and multi-display interfaces, the perceptual aspects of information display, and information visualization. He is currently HCI Theme co-leader for the Scottish Informatics and Computer Science Alliance (SICSA), program co-chair for ITS 2013, and a Marie Curie fellow. His work has been featured in New Scientist, Wired.co.uk, Fast.Co Design, La Recherche, and other media, and he has exhibited in the Esker Foundation gallery.


Prof. Neil Dodgson (University of Cambridge, UK)

Conversion of trimmed NURBS surfaces to untrimmed Catmull-Clark subdivision surfaces

We have developed a way to convert trimmed NURBS surfaces to untrimmed subdivision surfaces with Bézier edge conditions. We take a NURBS surface and its trimming curves as input. From this, we compute a base mesh, the limit surface of which fits the trimmed NURBS surface to a specified tolerance. Our process has two stages: first, we construct the topology of the base mesh by performing a cross-field-based decomposition in parametric space; second, we calculate the control point positions in the base mesh based on the limit stencils of the subdivision scheme and constraints to achieve tangential continuity across the boundary. Our method can provide the user with either an editable base mesh or a fine mesh whose limit surface approximates the input within a certain tolerance. By integrating the trimming curve as part of the desired limit surface boundary, our conversion can produce gap-free models.
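As background (a standard property of Catmull-Clark subdivision rather than a claim from the abstract), this kind of tolerance-driven fitting is plausible because, away from extraordinary vertices, the limit surface of the base mesh reduces to the uniform bicubic B-spline surface of its control points,

S(u,v) = \sum_{i=0}^{3} \sum_{j=0}^{3} N_i(u)\, N_j(v)\, P_{ij},

where the N_i are uniform cubic B-spline basis functions and the P_{ij} form a 4x4 grid of base-mesh control points. Since a NURBS patch is itself a (rational) tensor-product B-spline, its geometry can be approximated by the limit surface of a suitable quad base mesh to a prescribed tolerance, with special handling needed only near extraordinary vertices and the trimmed boundary.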


Dr. Nobuyuki Umetani (Disney Research Zurich)

Interactive Design of Functional Shapes

Physical simulation allows validation of geometric designs without tedious physical prototyping. However, since geometric modeling and physical simulation are typically separated, simulations are mainly used for rejecting bad designs and, unfortunately, not for assisting creative exploration towards better designs. In this talk, I introduce several interactive approaches that integrate physical simulation into geometric modeling to actively support the creative design process. More specifically, I demonstrate the importance of (i) presenting simulation results in real-time during the user’s interactive shape editing, so that the user immediately sees the validity of the current design, and (ii) providing a guide to the user so that he or she can efficiently explore the valid design space. I present novel algorithms to achieve these requirements.
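As a toy illustration of this simulation-in-the-loop idea (not the algorithms of the talk), the sketch below re-runs a simple static-stability check each time a 2D design profile is edited, so an invalid edit is flagged immediately; the geometry and names are invented, and the deliberately simple stability test stands in for a full physical simulation.

# Toy illustration: simulation feedback inside an edit loop. A 2D "object
# profile" is edited; after each edit a static-stability check runs
# immediately, mimicking design validation while the shape is being edited.
import numpy as np

def centroid(profile):
    """Area centroid of a simple 2D polygon given as (N, 2) vertices (x, y), CCW order."""
    x, y = profile[:, 0], profile[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)
    cross = x * yn - xn * y
    area = cross.sum() / 2.0
    cx = ((x + xn) * cross).sum() / (6.0 * area)
    cy = ((y + yn) * cross).sum() / (6.0 * area)
    return np.array([cx, cy])

def is_statically_stable(profile):
    """The object stands if its centre of mass projects inside the ground contact
    span, i.e. between the leftmost and rightmost vertices lying at y == 0."""
    base = profile[np.isclose(profile[:, 1], 0.0)]
    cx = centroid(profile)[0]
    return base[:, 0].min() <= cx <= base[:, 0].max()

# A small table-like profile (counter-clockwise): a slab resting on a narrow leg.
profile = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 4.0], [3.0, 4.0],
                    [3.0, 5.0], [-2.0, 5.0], [-2.0, 4.0], [0.0, 4.0]])
print("initial design stable?", is_statically_stable(profile))   # True

# "Interactive edit": the user drags the slab's right edge far out;
# the check re-runs immediately and flags the design as invalid.
profile[3, 0] = profile[4, 0] = 12.0
print("after edit stable?    ", is_statically_stable(profile))   # False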

Nobuyuki Umetani is a postdoctoral researcher at Disney Research Zurich. The principal research question he addresses through his research is how to integrate real-time physical simulation into interactive geometric modeling procedures to facilitate creativity. He is broadly interested in physics simulation, especially the finite element method, applied to computer animation, biomechanics, and mechanical engineering. He earned his Ph.D. degree at the University of Tokyo in September 2012 under the supervision of Prof. Takeo Igarashi.

You can find more at: http://www-ui.is.s.u-tokyo.ac.jp/~ume/