SGP 2017 is pleased to announce four invited speakers:
Geometric Reasoning in Machine Learning
Monday, 09:00-10:00
Optimal Transport for Imaging and Graphics
Wednesday, 09:00-10:00
Making Legs and Practicing Neurosurgery with Mesh Processing
Tuesday, 16:00-17:00
Capturing and Editing Models of the Real World in Motion
Tuesday, 09:00-10:00
Geometric Reasoning in Machine Learning
In recent years, computer graphics has enjoyed and benefited from advances in machine learning, and more and more graphics techniques are based on learning. The question is whether machine learning techniques applied to geometric problems can, in turn, benefit from geometric reasoning. In this talk, I will present two novel machine learning techniques based on geometric reasoning, namely non-parametric clustering and multi-dimensional scaling, and show how they are applied in graphics applications.
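As background to one of the tools named in the abstract, here is a minimal NumPy sketch of classical multi-dimensional scaling; it illustrates only the standard textbook technique (double-centering of squared distances followed by an eigendecomposition), not the novel method presented in the talk.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical MDS: embed points into k dimensions from pairwise distances.

    Double-centers the squared-distance matrix to recover a Gram matrix,
    then takes the top-k eigenvectors scaled by sqrt of the eigenvalues.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # Gram matrix of the embedding
    w, V = np.linalg.eigh(B)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]            # indices of the top-k eigenvalues
    L = np.sqrt(np.maximum(w[idx], 0.0))     # clip tiny negatives from round-off
    return V[:, idx] * L                     # n x k coordinates

# Sanity check: for exact 2-D Euclidean distances, classical MDS recovers
# the configuration up to rotation and translation, so distances match.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 2))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
Y = classical_mds(D, k=2)
D2 = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
```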
Daniel Cohen-Or is a professor in the School of Computer Science at Tel Aviv University. He received his B.Sc. cum laude in both mathematics and computer science (1985) and his M.Sc. cum laude in computer science (1986) from Ben-Gurion University, and his Ph.D. from the Department of Computer Science (1991) at the State University of New York at Stony Brook. He received the 2005 Eurographics Outstanding Technical Contributions Award. He has served on the editorial boards of a number of international journals and on the program committees of several international conferences, and is now on the editorial board of ACM Transactions on Graphics. His research interests are in computer graphics, in particular synthesis, processing, and modeling techniques. His main current interests are in a few areas: image synthesis, motion and transformations, shapes and surfaces, and analysis and reconstruction.
Optimal Transport for Imaging and Graphics
Optimal transport (OT) has become a fundamental mathematical tool at the interface between the calculus of variations, partial differential equations, and probability. It took much longer, however, for this notion to become mainstream in numerical applications, in large part because of the high computational cost of the underlying optimization problems. There is nevertheless a recent wave of activity on the use of OT-related methods in fields as diverse as computer vision, computer graphics, statistical inference, machine learning, and image processing. In this talk, I will review an emerging class of numerical approaches for the approximate resolution of OT-based optimization problems. These methods add an entropic regularization to the functionals being minimized, in order to unleash the power of optimization algorithms based on Bregman-divergence geometry. This results in fast, simple, and highly parallelizable algorithms, in sharp contrast with traditional solvers based on the geometry of linear programming. For instance, they make it possible, for the first time, to compute barycenters (with respect to OT distances) of probability distributions discretized on 2-D and 3-D computational grids with millions of points. This offers a new perspective for the application of OT in machine learning (to perform clustering or classification of bag-of-features data representations) and in the imaging sciences (to perform color transfer or shape and texture morphing). These algorithms also enable the computation of gradient flows for the OT metric, and can thus be applied, for instance, to simulate crowd motion with congestion constraints. I will also discuss various extensions of classical OT, such as handling unbalanced transport between arbitrary positive measures (the so-called Hellinger-Kantorovich/Wasserstein-Fisher-Rao problem) and the computation of OT between different metric spaces (the so-called Gromov-Wasserstein problem). This is joint work with M. Cuturi and J. Solomon.
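The entropic-regularization idea described in the abstract can be illustrated in a few lines of NumPy. This is a minimal sketch of the standard Sinkhorn scheme, not the speakers' implementation: regularizing the transport objective with entropy reduces the solver to alternating Bregman projections (matrix scalings) of a Gibbs kernel built from the cost matrix.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=1000):
    """Entropic-regularized OT between histograms a and b with cost matrix C.

    Alternates the two marginal-matching scalings (Sinkhorn iterations)
    on the Gibbs kernel K = exp(-C / eps); returns the transport plan.
    """
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)   # scale columns to match marginal b
        u = a / (K @ v)     # scale rows to match marginal a
    return u[:, None] * K * v[None, :]

# Two Gaussian-like histograms on a 1-D grid, squared-distance cost.
x = np.linspace(0.0, 1.0, 50)
a = np.exp(-((x - 0.2) ** 2) / 0.01); a /= a.sum()
b = np.exp(-((x - 0.7) ** 2) / 0.01); b /= b.sum()
C = (x[:, None] - x[None, :]) ** 2
P = sinkhorn(a, b, C)
# The marginals of the plan P approximately recover a and b.
```

The same scaling loop is what parallelizes so well on grids: for separable costs, the matrix-vector products with K reduce to small convolutions, which is what makes million-point 2-D and 3-D problems tractable.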
Gabriel Peyré is a senior researcher at the Centre National de la Recherche Scientifique (CNRS), working in the DMA at the École Normale Supérieure, Paris. His research focuses on developing mathematical and numerical tools in sparse regularization and optimal transport, with applications in computer vision, graphics, and the neurosciences. Since 2005, Gabriel Peyré has co-authored 55 papers in international journals, 70 conference proceedings in top vision and image processing conferences, and two books. He is the creator of the "Numerical Tours of Signal Processing" (www.numerical-tours.com), a popular online repository of Matlab/Python/Julia resources for teaching modern signal and image processing.
Making Legs and Practicing Neurosurgery with Mesh Processing
Meshmixer is a 3D design tool that was first released in 2009, acquired by Autodesk in 2011, and has since been downloaded over a million times by a userbase that includes both fourth graders and aerospace engineers. One result of this widespread adoption is that every day, thousands of people are using modern geometric tools such as Laplacian mesh processing to solve real-world problems. I will describe how various techniques developed by the geometry processing community have been adapted for use in Meshmixer, and present some use cases in biomedical engineering. This growing use of meshes in CAD is being driven by the acceptance of additive manufacturing (3D printing). I will also discuss some of the current challenges in the adoption of additive techniques, and how geometry processing could have a significant impact on advanced manufacturing.
Ryan Schmidt is the founder of gradientspace, a software studio developing 3D design tools in Toronto, Canada. He previously led the Design & Fabrication Group at Autodesk Research, after a PhD at the University of Toronto. His 3D design/print tool Meshmixer, acquired by Autodesk in 2011, has been downloaded over 1 million times, with a userbase ranging from schoolchildren to industry professionals. He also works with Nia Technologies to bring 3D-printed prosthetics to the developing world.
Capturing and Editing Models of the Real World in Motion
New methods for capturing highly detailed models of moving real-world scenes with cameras, i.e., models of detailed deforming geometry, appearance, or even material properties, are becoming more and more important in many application areas. They are needed in visual content creation, for instance in visual effects, to build highly realistic models of virtual human actors. Moreover, efficient, reliable, and highly accurate dynamic scene reconstruction is nowadays an important prerequisite for many other application domains, such as human-computer and human-robot interaction, autonomous robotics and autonomous driving, virtual and augmented reality, 3D and free-viewpoint TV, immersive telepresence, and even video editing.
The development of dynamic scene reconstruction methods has been a long-standing challenge in computer graphics and computer vision. Recently, the field has seen important progress. New methods were developed that capture, without markers or scene instrumentation, rather detailed models of individual moving humans or general deforming surfaces from video recordings, and even simple models of appearance and lighting. Despite this recent progress, however, the field is still at an early stage, and current technology remains severely constrained in many ways. Many of today's state-of-the-art methods are niche solutions designed to work under very constrained conditions, for instance only in controlled studios, with many cameras, for very specific object types, for very simple types of motion and deformation, or at processing speeds far from real-time.
In this talk, I will present some of our recent work on detailed marker-less dynamic scene reconstruction and performance capture, in which we advanced the capabilities of existing approaches in several ways. I will show new methods for marker-less skeletal motion and performance capture in less constrained environments, even in outdoor scenes and with a small number of cameras. I will also discuss new state-of-the-art methods for marker-less motion capture of hands, even when interacting with objects and background clutter, from a single RGB-D camera. Finally, I will show new approaches for high-quality capture of deforming surfaces and the human face from monocular color and depth cameras, even in real time. If time allows, I will also briefly show how dynamic scene reconstruction methods may be used for advanced video editing effects, and how combining machine learning and model-based methods may open up new possibilities.
Christian Theobalt is a Professor of Computer Science and the head of the research group "Graphics, Vision, & Video" at the Max-Planck-Institute for Informatics, Saarbrücken, Germany. From 2007 until 2009 he was a Visiting Assistant Professor in the Department of Computer Science at Stanford University. He received his MSc degree in Artificial Intelligence from the University of Edinburgh, his Diplom (MS) degree in Computer Science from Saarland University, and his PhD (Dr.-Ing.) from the Max-Planck-Institute for Informatics.
His research deals with algorithmic problems that lie on the boundary between the fields of Computer Vision and Computer Graphics, such as static and dynamic 3D scene reconstruction, marker-less motion and performance capture, virtual and augmented reality, computer animation, appearance and reflectance modelling, intrinsic video and inverse rendering, machine learning for graphics and vision, new sensors for 3D acquisition, advanced video processing, as well as image- and physically-based rendering. He is also interested in using reconstruction techniques for new ways of human computer interaction.
For his work, he has received several awards, including the Otto Hahn Medal of the Max Planck Society in 2007, the EUROGRAPHICS Young Researcher Award in 2009, the German Pattern Recognition Award in 2012, and a Google Glass Research Award in 2013. Further, in 2013 he was awarded an ERC Starting Grant by the European Union. In 2015, he was selected as one of the top 40 innovation leaders under 40 in Germany by the magazine Capital. He is a Principal Investigator and a member of the Steering Committee of the Intel Visual Computing Institute in Saarbrücken. He is also a co-founder of an award-winning spin-off company from his group, www.thecaptury.com, which is commercializing a new generation of marker-less motion and performance capture solutions.