Invited Speakers

 

 

 

Prof. Adrian Hilton (University of Surrey, UK)

4D Vision for Interactive Content Production

Over the past decade, advances in computer vision have enabled the 3D reconstruction of dynamic scenes from multiple view video. This has allowed video-based free-viewpoint rendering with interactive control of camera viewpoint, achieving a visual realism comparable to the captured video. The technology, initially pioneered for highly controlled indoor scenes, has been extended to free-viewpoint rendering of outdoor scenes such as sports for TV production. Free-viewpoint rendering is, however, limited to replay of the captured performance. This talk will present results of recent research which has achieved 4D performance capture by spatio-temporal alignment of frames across multiple sequences of 3D reconstructions. 4D performance capture opens up the possibility of editing and reusing captured performance for animation. Recent research has introduced the 4D Parametric Motion Graph, which enables animation from 4D actor performance capture. Results of this approach demonstrate the potential for video-realistic animated content production with the interactivity of both motion and viewpoint associated with conventional computer characters, whilst maintaining the realism of video capture. Free-viewpoint animation and rendering on both PC and mobile platforms will be demonstrated. 4D content production allows high-quality characters to be produced efficiently from video capture without the requirement for extensive manual authoring or editing.

Adrian Hilton, BSc (Hons), DPhil, CEng, is Professor of Computer Vision and Graphics and Director of the Centre for Vision, Speech and Signal Processing at the University of Surrey, UK. He leads research investigating the use of computer vision for applications in entertainment content production, visual interaction and clinical analysis. His interest is in robust computer vision to model and understand real-world scenes, and his work in bridging the gap between real and computer-generated imagery combines the fields of computer vision, graphics and animation to investigate new methods for reconstruction, modelling and understanding of the real world from images and video. Applications include: sports analysis (soccer, rugby and athletics), 3D TV and film production, visual effects, character animation for games, digital doubles for film and facial animation for visual communication. Contributions include technologies for the first hand-held 3D scanner, modelling of people from images and 3D video for games, broadcast and film production. Current research is focused on video-based measurement in sports, multiple camera systems in film and TV production, and 3D video for highly realistic animation of people and faces. Research is conducted in collaboration with UK companies and international institutions in the creative industries. Adrian is currently the Principal Investigator of the multi-million-pound EPSRC Programme Grant S3A: ‘Future Spatial Audio for Immersive Listener Experience at Home’ (2013-2018), and he also leads several EU and UK industry projects. Adrian currently holds a five-year Royal Society Wolfson Research Merit Award (2013-2018).

 


Prof. Alan Chalmers (University of Warwick)

Multisensory Virtual Experiences

Virtual environments (VEs) offer the possibility of simulating potentially complex, dangerous or threatening real world experiences in a safe, repeatable and controlled manner. They can provide a powerful and fully customizable tool for a personalised experience and allow attributes of human behaviour in such environments to be examined. However, to accurately simulate reality, VEs need to be based on physical simulations and stimulate multiple senses (visuals, audio, smell, touch, temperature etc.) in a natural manner. Such environments are known as Real Virtuality. Natural delivery of multiple senses is especially important as a human’s perception may be significantly affected by interactions between all these senses. In particular, cross-modalities (the influence of one sense on another) can substantially alter the way in which a scene is perceived and the way the user behaves. At present there is no simulator available that can offer such a full sensory real-world experience. A key reason is that today’s computers are not yet powerful enough to simulate the full physical accuracy of a real scene for multiple senses in real-time.

This talk discusses how it may be possible to achieve high-fidelity multisensory virtual experiences now, because of our brain’s inability to process all the sensory input we receive at any given moment.

Alan Chalmers is a Royal Society Industrial Fellow and Professor of Visualisation at WMG, University of Warwick, UK. He is Founder and Innovation Director of goHDR Ltd. He holds an MSc with distinction from Rhodes University (1985) and a PhD from the University of Bristol (1991). He is Honorary President of Afrigraph and a former Vice President of ACM SIGGRAPH. He has published over 220 papers in journals and international conferences on high-fidelity virtual environments, multi-sensory perception, virtual archaeology and HDR imaging, and has successfully supervised 32 PhD students. Chalmers is Chair of EU COST Action “IC1005 HDR”, which is co-ordinating HDR research across Europe to develop a new, efficient open-source standard for HDR video and facilitate its widespread uptake. In addition, he is PI of the EPSRC-JLR Psi project entitled “Visualisation and Virtual Experiences”.


Prof. Anthony Steed (University College London, UK)

Beaming: Asymmetric Telepresence Systems

Beaming is the process of virtually teleporting to a destination. Based (loosely) on the idea from Star Trek, a visitor is transported to a destination so that they can interact with other people (locals) there. We achieve this using a combination of high-end virtual reality equipment, robotic platforms and scene reconstruction software.

In this talk, I will present some of the novel situated display configurations that we have developed at UCL for the Beaming project. I will discuss the requirements for having a remote person appear as if virtually present in the room, and present some novel multi-view display prototypes that we have built.

Professor Anthony Steed is Head of the Virtual Environments and Computer Graphics group at University College London. His research interests extend from virtual reality systems through to mobile mixed-reality systems, and from system development through to measures of user response to virtual content. He has published over 160 papers in the area, and is the main author of the recent book Networked Graphics: Building Networked Games and Virtual Environments. He is also founder and CTO of Animal Systems, creators of Chirp (chirp.io).

 


Dr. Hiroshi Shimodaira (University of Edinburgh, UK)

Speech-driven talking head from articulatory features

We present a talking head in which the lips and head motion are controlled using articulatory movements estimated from speech. The advantage of articulatory features over acoustic speech features such as MFCCs and F0 is that articulatory features have a close link with head movements and can also drive the lips. A phone-sized HMM-based inversion mapping system is trained in a semi-supervised fashion to estimate articulatory features from speech. Experimental evaluation demonstrates that the proposed system outperforms a baseline system that employs low-level speech features.
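
The abstract's system maps speech to articulatory features with a phone-sized HMM-based inversion mapping; the sketch below is only a rough illustration of the general acoustic-to-articulatory inversion idea, using a frame-wise ridge regression on synthetic data. All names, dimensions and data are illustrative assumptions, not the authors' method.

```python
# Minimal illustration of acoustic-to-articulatory inversion as frame-wise
# regression.  The talk's system uses phone-sized HMMs trained in a
# semi-supervised fashion; this sketch only shows the general mapping idea
# on synthetic data.  All dimensions and names are illustrative.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_frames, n_mfcc, n_artic = 2000, 13, 6   # e.g. 13 MFCCs -> 6 articulator coords
acoustic = rng.normal(size=(n_frames, n_mfcc))          # stand-in for MFCC frames
true_map = rng.normal(size=(n_mfcc, n_artic))
articulatory = acoustic @ true_map + 0.1 * rng.normal(size=(n_frames, n_artic))

# Train on the first half (the "parallel" acoustic-articulatory data),
# then predict articulatory trajectories for unseen speech frames.
model = Ridge(alpha=1.0).fit(acoustic[:1000], articulatory[:1000])
pred = model.predict(acoustic[1000:])

rmse = np.sqrt(np.mean((pred - articulatory[1000:]) ** 2))
print(f"frame-wise RMSE on held-out frames: {rmse:.3f}")
# The predicted articulator trajectories could then drive the lips and head
# motion of the talking head, as described in the abstract.
```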

Hiroshi Shimodaira is a lecturer in the School of Informatics at the University of Edinburgh, where he is a member of the Centre for Speech Technology Research (CSTR) and the Institute for Language, Cognition and Computation (ILCC). He received a Ph.D. degree in Information Science from Tohoku University in 1988. Before he joined CSTR, he was an associate professor and co-director of the Intelligent Information Processing Laboratory at the Japan Advanced Institute of Science and Technology (JAIST). His research fields cover speech recognition, on-line handwriting recognition and medical image processing, and he is particularly interested in statistical models for speech-driven 3D animated characters.


Dr. Hubert P. H. Shum (Northumbria University, UK)

Human Motion Synthesis for Computer Animation and Games

Thanks to recent advances in motion sensing and computer graphics hardware, the use of human motion in computer animation and games has become more popular in the past few years. On one hand, lightweight, real-time motion capture can be performed using depth cameras such as the Microsoft Kinect, which facilitates a natural user interface for human-computer interaction. On the other hand, computer hardware can now handle complex simulations involving human movement and body deformation, such that the motion of virtual characters can be generated in real time. In this talk, I will present my research on human motion synthesis, focusing on the areas of posture reconstruction, movement synthesis and crowd simulation. I will show how artificial intelligence and machine learning approaches can be adapted to solve complex problems in the domain.

Dr. Hubert P. H. Shum is a Senior Lecturer (Associate Professor) at Northumbria University and the Programme Leader of BSc (Hons) Computer Animation and Visual Effects. Before joining the university, he worked as a Lecturer at the University of Worcester, a post-doctoral researcher at RIKEN, Japan, and a research assistant at the City University of Hong Kong. He received his PhD degree from the School of Informatics at the University of Edinburgh. He has received more than £110,000 from Northumbria University to hire PhD students and purchase research equipment, as well as £125,000 from the EPSRC to facilitate his research projects. His research interests include character animation, machine learning, human-computer interaction and physical simulations. More information can be found at: http://info.hubertshum.com

 


Dr. Kartic Subr (Heriot-Watt University, UK)

Photo-realistic Graphics in Real-time: A Speculative Perspective

I will present my talk in three phases. In the first phase, I will briefly describe my research focus and ongoing work in this direction. Then, I will present my expectations for future technology relying on a combination of hardware and algorithms that will allow computer imagery to be generated at real-time rates (40-60 images per second).

Next, I will mention some key challenges in the current approaches that would need to be overcome or side-stepped. Finally, I will compare the potential of two favourite approaches, rasterization and ray tracing, in the context of futuristic real-time graphics.

Kartic Subr is a Royal Society University Research Fellow and Associate Professor at the School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh. His research focuses on the analysis and development of stochastic sampling strategies applied to high-quality image synthesis, computational photography and computer vision. Kartic obtained his MS in Visual Computing and PhD (2008) in Computer Science from the University of California, Irvine. During his PhD, he had short stints at Rhythm & Hues Studios (Los Angeles) and NVIDIA Corporation (Santa Clara), and as an instructor at Columbia University (New York). Kartic later held post-doctoral positions at INRIA-Grenoble (2008-2010) and Walt Disney Research (2013-2014). He received a Newton International Fellowship from the Royal Society and Royal Academy in 2011 and was an honorary research associate at University College London (2011-2013).

 


Prof. Kenji Mase (Nagoya University, Japan)

E-coaching: Wearable and Ubiquitous Sensing and Life-logging for Coaching

Ubiquitous and wearable sensing is spreading widely across activities of daily living, business and sports. The sensed data is a good source for analysis, summarization and simulation of human living. In this talk, I will introduce three pieces of work: wearable sensing for health care, a manufacturing-skill coaching assistant, and a multi- and free-viewpoint sports viewer.

The health-care wearable sensing work is a collaborative project with e-textile and medical researchers. We have recently developed a smart stretch textile for spirometry sensing and a smart pressure bed-sheet for health-care purposes. These visualize the vital condition of patients for medical coaches, namely nurses and caregivers. Next, in the manufacturing-skill coaching assistant project, we have developed a system with cameras and wearable motion sensors to evaluate metal-filing skills. The scores and visualized motions are fed back to the trainees for discussion with trainers. Lastly, the sports-related project targets an end-to-end multi- and free-viewpoint viewing service. A sport-event summarization system drawing on many cameras and sensors, together with content-viewing interfaces, will benefit players, coaches and general audiences by sharing how to play and/or watch the games.

 


Dr. Kenny Mitchell (Disney Research)

Reality Mixer

Kenny Mitchell is an Imagineer and research head for the Walt Disney Company Ltd, with a lab located at Edinburgh University's business campus (an outpost of Disney Research Zurich). Over the past 16 years he has shipped games using high-end graphics technologies including voxels, volumetric light scattering, motion blur and curved surfaces. His PhD introduced the use of real-time 3D for information visualisation on consumer hardware, including a novel recursive perspective projection technique.

 

In between contributing to the technically acclaimed racing game Split Second, Spielberg's Boom Blox (BAFTA award winner), Disney Infinity and the Harry Potter franchise games, he is involved in developing new intellectual properties. His work on video games and mixed-reality technologies includes collaboration with all Disney business units and many successful funded university collaborations. He is a member of the EPSRC strategic advisory network and of a number of computing school advisory boards. He is the most senior Disney Research representative in the UK.


Dr. Kenshi Takayama (National Institute of Informatics, Japan)

Sketch-based interfaces for computer graphics content creation

In this talk, I will present some of my past work on designing user interfaces for creating various types of visual content used in computer graphics, with an emphasis on sketch-based approaches. First, I will present a paintbrush interface for quickly copying and pasting geometric details from one surface mesh to another, in the context of 3D modeling through digital sculpting. Second, I will present a sketch-based interface for creating a high-quality coarse quad mesh from an input dense triangle mesh, an important task in typical CG production pipelines. Finally, I will present two approaches to volumetric modeling in which 3D models can be cut arbitrarily and show their cross-sections consistently, one approach based on texture synthesis and the other inspired by 2D vector graphics representations.

Kenshi Takayama has been an assistant professor at the National Institute of Informatics (NII) since September 2014. His general research interest lies in geometric modeling and user interface design for computer graphics. Before joining NII, he completed his Bachelor's (2007), Master's (2009) and PhD (2012) degrees under the supervision of Prof. Takeo Igarashi, and then carried out postdoctoral research under the supervision of Prof. Olga Sorkine-Hornung at ETH Zurich, Switzerland.

 


Dr. Manfred Lau (Lancaster University, UK)

3D Modeling and User Interfaces for Digital Fabrication

The recent trend of rapid prototyping technologies such as 3D printers and laser cutters will lead to an increased demand for algorithms and user interfaces for digital fabrication. I will discuss a method for converting virtual 3D furniture models into fabricatable parts and connectors that can then be built into real-world furniture. Then, I will describe a number of natural user interface tools for digital fabrication. The motivation for such interfaces is that easy-to-use tools for 3D modeling and fabrication are still lacking, despite years of research in developing modeling tools for novice users. I will show a photo-based interface for sketching new objects, a sketch-based interface for designing and fabricating chairs, a tangible user interface for modeling 3D shapes in an augmented reality environment, and a mixed reality framework for modeling everyday objects with hand gestures.

Manfred Lau is an Assistant Professor (Lecturer in the UK) in the School of Computing and Communications at Lancaster University, UK. He was a post-doctoral researcher working with Prof. Takeo Igarashi in Tokyo, Japan, at the JST ERATO Igarashi Design Interface Project. He received his B.Sc. degree in Computer Science from Yale University, and his Ph.D. degree in Computer Science, supervised by Prof. James Kuffner, from Carnegie Mellon University. Manfred's research interests are in computer graphics, human-computer interaction, digital fabrication, geometric modeling and animation. His recent research in 3D modeling and fabrication focuses on building natural user interfaces for the layperson to model, design and fabricate their own products. His work on 3D geometry modeling draws inspiration from the areas of tangible user interfaces and industrial design. His Ph.D. thesis work explores a combination of motion planning techniques and captured data to generate realistic crowd animation for games and films. He is also interested in the areas of robotics and machine learning.

 


Dr. Miguel Nacenta (University of St. Andrews, UK)

Playing with perception, representation, and presentation: three ways to subvert current visualization techniques

In this talk I present three recent projects that address different aspects of the visualization pipeline. All three projects are examples that show how slightly different approaches to vision and graphics can result in very different ways to look at data and images. The first project, Gaze-Contingent Depth of Field, addresses alternative ways of representing 3D information on flat displays. The second project is Transmogrification, a technique that allows us to change the presentation of any graphic and turn any 2D representation into an arbitrary set of other 2D representations with a very flexible touch interface. Finally, FatFonts are a novel hybrid way to represent numbers, using a special type of digit that turns numeric tables into visualisations.

Miguel Nacenta is a co-founder of the St Andrews Human-Computer Interaction group (SACHI) and a lecturer in human-computer interaction at the School of Computer Science, University of St Andrews. His main expertise is in input and output techniques for displaying and accessing information. His research encompasses the use of multi-touch and multi-display interfaces, the perceptual aspects of information displays, and information visualization. He is currently HCI Theme co-leader for the Scottish Informatics and Computer Science Alliance (SICSA), program co-chair for ITS 2013, and a Marie Curie fellow. His work has featured in New Scientist, Wired.co.uk, Fast.Co Design, La Recherche, and other media, and he has exhibited in the Esker Foundation gallery.


Prof. Neil Dodgson (University of Cambridge, UK)

Conversion of trimmed NURBS surfaces to untrimmed Catmull-Clark subdivision surfaces

We have developed a way to convert trimmed NURBS surfaces to untrimmed subdivision surfaces with Bézier edge conditions. We take a NURBS surface and its trimming curves as input. From this, we compute a base mesh, the limit surface of which fits the trimmed NURBS surface to a specified tolerance. Our process has two stages: first, we construct the topology of the base mesh by performing a cross-field-based decomposition in parametric space; second, we calculate the control point positions in the base mesh based on the limit stencils of the subdivision scheme and constraints to achieve tangential continuity across the boundary. Our method can provide the user with either an editable base mesh or a fine mesh whose limit surface approximates the input within a certain tolerance. By integrating the trimming curve as part of the desired limit surface boundary, our conversion can produce gap-free models.
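
One hedged way to state the fitting requirement described above, in our own notation rather than the authors': N(u,v) is the input trimmed NURBS over its trimmed parameter domain, S∞ is the Catmull-Clark limit surface of the base mesh with control points P, and ε is the user-specified tolerance, with tangential continuity imposed along the trimming curve.

```latex
% Illustrative statement of the fitting problem (notation is ours, not the authors'):
%   N(u,v)            input trimmed NURBS surface over the trimmed domain \Omega'
%   S_\infty(P; u,v)  Catmull-Clark limit surface of the base mesh with control points P
%   \varepsilon       user-specified tolerance
\max_{(u,v)\in\Omega'} \bigl\| S_\infty(\mathbf{P}; u, v) - N(u, v) \bigr\| \le \varepsilon,
\qquad
\text{with } S_\infty \text{ meeting } N \text{ with tangent-plane continuity along the trimming curve.}
```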


Dr. Nobuyuki Umetani (Disney Research Zurich)

Interactive Design of Functional Shapes

Physical simulation allows validation of geometric designs without tedious physical prototyping. However, since geometric modeling and physical simulation are typically separated, simulations are mainly used for rejecting bad designs and, unfortunately, not for assisting creative exploration towards better designs. In this talk, I introduce several interactive approaches that integrate physical simulation into geometric modeling to actively support the creative design process. More specifically, I demonstrate the importance of (i) presenting simulation results in real time during the user's interactive shape editing, so that the user immediately sees the validity of the current design, and (ii) providing a guide to the user so that he or she can efficiently explore the valid design space. I present novel algorithms to achieve these requirements.

Nobuyuki Umetani is a postdoctoral researcher at Disney Research Zurich. The principal research question he addresses is how to integrate real-time physical simulation into interactive geometric modeling procedures to facilitate creativity. He is broadly interested in physics simulation, especially the finite element method, applied to computer animation, biomechanics and mechanical engineering. He earned his Ph.D. degree from the University of Tokyo in September 2012 under the supervision of Prof. Takeo Igarashi.

You can find more at:  http://www-ui.is.s.u-tokyo.ac.jp/~ume/

 


Dr. Peter Hall (University of Bath, UK)

Computer Vision for Computer Graphics

Computer Vision is increasingly being applied within Computer Graphics. This talk will outline why Computer Vision is of value in two areas of Computer Graphics taken from my own work. The first is non-photorealistic rendering, where it will be argued that tools able to make at least a weak semantic interpretation of a scene are to be preferred over those that process an image with no reference to semantic content. The second is the acquisition of complex assets: models that can be edited and reproduced. Here we show that data-driven models need not be constrained to video replay, but can in fact be used to generate brand new three-dimensional, dynamic models of complex natural phenomena (trees and water).


Prof. Shigeo Morishima (Waseda University, Japan)

Modeling personal characteristics in facial shape, skin and motion to generate a photo-realistic instant cast

Capturing and modeling personal characteristics of the face is a hot topic in computer graphics. In this talk, our recent research results on quickly modeling and retargeting personal characteristics in 3D face shape, skin rendering and motion for avatars are introduced and discussed. Our final goal is to generate a perfectly personalized character easily and quickly without any high-cost capture devices.

Dr. Shigeo Morishima was born in Japan on August 20, 1959. He received the B.S., M.S. and Ph.D. degrees, all in Electrical Engineering, from the University of Tokyo, Tokyo, Japan, in 1982, 1984 and 1987, respectively. From 1987 to 2001 he was an associate professor, and from 2001 to 2004 a professor, at Seikei University, Tokyo. Currently, he is a professor in the School of Advanced Science and Engineering, Waseda University.

His research interests include Computer Graphics, Computer Vision, Multimodal Signal Processing and Human Computer Interaction.

 

Prof. Sriram Subramanian (University of Bristol, UK)

Beyond Multitouch: Supporting haptics and multiple independent views for mid-air gestural interactions

Although multi-touch devices have become common in the consumer world, users have sacrificed the tactile feedback afforded by physical buttons. The Bristol Interaction and Graphics group has been exploring various technical solutions to create the next generation of touch interfaces that support multi-point haptic feedback as well as dynamic shape manipulation through surface actuation. In this talk I will present UltraHaptics, a multi-point haptic feedback system that allows users to experience haptic feedback simultaneously in multiple locations on an interactive surface. This feedback is created in mid-air, so users don’t have to touch or hold any device to experience it.
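
UltraHaptics creates mid-air tactile points by focusing ultrasound from an array of transducers. As a rough sketch of the underlying phased-array focusing principle only, and not of the group's actual implementation, the code below chooses an emission phase for each transducer so that all waves arrive in phase at a chosen focal point; the array layout, carrier frequency and focal position are illustrative assumptions.

```python
# Rough sketch of phased-array focusing, the physical principle behind
# mid-air ultrasound haptics.  This is NOT the UltraHaptics implementation;
# array layout, frequency and focal point are illustrative assumptions.
import numpy as np

c = 343.0    # speed of sound in air (m/s)
f = 40e3     # 40 kHz ultrasound carrier

# 16 x 16 transducer grid in the z = 0 plane, ~1 cm pitch
xs = np.arange(16) * 0.0105
ys = np.arange(16) * 0.0105
tx, ty = np.meshgrid(xs, ys)
transducers = np.stack([tx.ravel(), ty.ravel(), np.zeros(tx.size)], axis=1)

focus = np.array([0.08, 0.08, 0.20])   # desired focal point, 20 cm above the array

# Emit each transducer with a phase that cancels its propagation delay,
# so all contributions arrive at the focal point in phase.
dist = np.linalg.norm(transducers - focus, axis=1)
phases = (-2.0 * np.pi * f * dist / c) % (2.0 * np.pi)

def field_at(point):
    """Complex pressure (up to a constant) summed over all transducers."""
    d = np.linalg.norm(transducers - point, axis=1)
    return np.sum(np.exp(1j * (2.0 * np.pi * f * d / c + phases)) / d)

print("amplitude at focus      :", abs(field_at(focus)))
print("amplitude 3 cm off focus:", abs(field_at(focus + np.array([0.03, 0.0, 0.0]))))
# The focal amplitude is far larger: the array concentrates acoustic radiation
# pressure at the focus, which the skin can perceive as a tactile point.
```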

Sriram Subramanian is a Professor of Human-Computer Interaction at the University of Bristol with research interests in exploring new forms of interactive systems. He is specifically interested in rich and expressive input combining multi-touch, haptics and touchless gestures. Before joining the University of Bristol, he worked as a senior scientist at Philips Research Netherlands and as an Assistant Professor in the Department of Computer Science of the University of Saskatchewan, Canada. You can find more details of his research interests at his group's page: http://big.cs.bris.ac.uk

 


Dr. Taku Komura (University of Edinburgh, UK)

Relationship descriptors for computer graphics

In this talk, I will describe relationship descriptors that we have developed for scene recognition and synthesis in which characters can interact with one another and/or with objects in the environment. Using our representations, complex character movements can be easily synthesized and edited without suffering from collisions and penetrations. Also, scenes can be recognized based on context despite variation in the geometry of the objects and the postures of the characters. Along with experimental results, I will describe the advantages of our methods and outline future plans for our research.

Taku Komura is a Reader at the Institute of Perception, Action and Behaviour, School of Informatics, University of Edinburgh. He has also been awarded a Royal Society Industry Fellowship to work in collaboration with Disney Research. As the group leader of the Computer Animation and Visualization Unit, his research has focused on data-driven character animation, physically-based character animation, crowd simulation, cloth animation, anatomy-based modeling and robotics. Recently, his main research interests have been in indexing and animating complex close interactions, including character-character and character-object interactions.

 


Dr. Tetsunari Inamura (National Institute of Informatics, Japan)

Cloud-based Immersive VR Platform for Cognitive Social Robotics

Research on high-level human-robot interaction systems aiming at skill acquisition, concept learning, modification of dialogue strategy, and so on requires large-scale experience databases built from social and embodied interaction experiments. However, if we use real robot systems, the costs of developing the robots and performing the various experiments become prohibitive. If we instead choose a virtual robot simulator, limitations arise in the embodied interaction between virtual robots and real users. Our group therefore proposes an enhanced robot simulator that enables multiple users to connect to a central simulation world and to join the virtual world through immersive user interfaces such as the Kinect and the Oculus VR HMD. This system has been applied to RoboCup@Home simulation and to a novel type of rehabilitation. In this talk, I introduce the configuration of our simulator platform and the feasibility of the system in several applications.


Prof. Yoichi Sato (University of Tokyo)

Sensing and Utilizing Human Visual Attention

In this talk, I will describe our recent work on human visual attention from two aspects: sensing our visual attention and utilizing it. Visual saliency models predict our eye fixations as driven by the bottom-up control of our vision system triggered by visual stimuli, and it has been shown experimentally that a visual saliency map computed by such a model is highly correlated with the actual distribution of our fixation points. Based on this observation, we introduce a method for estimating gaze directions using visual saliency maps without explicit personal calibration. The key idea is to use the saliency maps of the video frames that a person is looking at as the probability distributions of the gaze points, so that we can avoid cumbersome calibration procedures that ask a user to fixate on calibration targets. I also briefly talk about our recent attempt to use the eye movements of multiple people viewing images as collective knowledge for high-level image analysis.
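
The key idea above treats a bottom-up saliency map as a probability distribution over likely gaze points. As a rough illustration only, not the authors' model, the sketch below computes a spectral-residual saliency map (the classic Hou & Zhang 2007 bottom-up model) for a frame and normalizes it into a distribution over pixels; function names and parameters are our own assumptions.

```python
# Illustrative sketch: turn a bottom-up saliency map into a probability
# distribution over gaze positions, as used in calibration-free gaze
# estimation.  The spectral-residual model (Hou & Zhang 2007) is a stand-in;
# the talk's work may use a different saliency model.
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(gray, out_size=64):
    """Bottom-up saliency map of a grayscale image (values sum to 1)."""
    # Work at a coarse resolution, as the original model does.
    h, w = gray.shape
    ys = np.linspace(0, h - 1, out_size).astype(int)
    xs = np.linspace(0, w - 1, out_size).astype(int)
    small = gray[np.ix_(ys, xs)].astype(float)

    spectrum = np.fft.fft2(small)
    log_amp = np.log(np.abs(spectrum) + 1e-9)
    phase = np.angle(spectrum)

    residual = log_amp - uniform_filter(log_amp, size=3)      # spectral residual
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = gaussian_filter(saliency, sigma=2.5)

    return saliency / saliency.sum()   # probability distribution over pixels

# Example with a synthetic frame containing one bright, "salient" blob.
frame = np.zeros((240, 320))
frame[100:120, 200:220] = 1.0
prob = spectral_residual_saliency(frame)

iy, ix = np.unravel_index(np.argmax(prob), prob.shape)
print("most probable gaze cell (row, col):", iy, ix)
# In the calibration-free setting, such per-frame distributions are paired
# with measured eye directions to learn the gaze mapping without asking the
# user to fixate on calibration targets.
```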


Dr. Yoshifumi Kitamura (Tohoku University)

Designing Interactive Content for Comfortable Interpersonal Communication

Since its foundation in 1935, the Research Institute of Electrical Communication (RIEC) of Tohoku University has been conducting research into the scientific principles and applications of science and technology to realize a new paradigm of communications that enriches people’s lives. It has made a succession of pioneering achievements in laying the foundations of modern information and communications technology, including antennas, magnetic recording, semiconductor devices and optical communication, and has continued to play a world-leading role.

Our research group was established in 2010 and is one of the newest in the RIEC. It is charged with the new application service layer of the future electrical communication system, aiming directly to contribute to and develop new communication technologies that promote the well-being of people. Although achieving this goal is not easy, we are carrying out projects to make interpersonal communication spaces and interaction more comfortable, i.e., active, enjoyable and efficient, by designing interactive content that utilizes unique hardware and software. The talk will summarize a series of our group’s projects, including 1) understanding the “atmosphere” from the verbal/nonverbal behaviours of people measured by sensors, 2) D-FLIP: a dynamic and flexible interactive photo viewer, 3) TransformTable: a self-actuated shape-changing digital table, and 4) IM3D: a magnetic motion tracking system for dexterous 3D interactions, among others.

Yoshifumi Kitamura is a Professor at the Research Institute of Electrical Communication, Tohoku University. He received the B.Sc., M.Sc. and Ph.D. degrees in Engineering from Osaka University in 1985, 1987 and 1996, respectively. Prior to Tohoku University, he was an Associate Professor at the Graduate School of Engineering and the Graduate School of Information Science and Technology, Osaka University (1997-2010), and before that he was a researcher at ATR Communication Systems Research Laboratories (1992-1996) and Canon Inc. (1987-1992).

 
