Videos are now available below, or on YouTube!
The Symposium on Geometry Processing accompanies the graduate school the weekend before the conference. The graduate school will feature tutorials by experts at the frontiers of geometry processing, on topics such as shape analysis, shape optimization, parameterization, and fabrication, and is targeted at graduate students at all levels as well as professionals from industry wanting to refresh their knowledge of geometry processing.
|09:00-10:30||Keenan Crane - Conformal Geometry Processing|
|10:30-11:00||-- Coffee Break --|
|11:00-12:30||Amir Vaxman - Direction Fields|
|12:30-14:00||-- Lunch --|
|14:00-15:30||Marcel Campen - Quad Meshing|
|15:30-16:00||-- Coffee Break --|
|16:00-18:00||Alec Jacobson & Daniele Panozzo - Hands-on LibIGL Tutorial|
|09:00-10:30||Maks Ovsjanikov - Shape Correspondence and Functional Maps|
|10:30-11:00||-- Coffee Break --|
|11:00-12:30||Emanuele Rodola - Machine Learning Meets Geometry|
|12:30-14:00||-- Lunch --|
|14:00-15:30||Pierre Alliez - Reconstruction|
|15:30-16:00||-- Coffee Break --|
|16:00-17:30||Bernd Bickel & Niloy Mitra - Fabrication|
Digital geometry processing is the natural extension of traditional signal processing to three-dimensional geometric data. In recent years, methods based on so-called conformal (i.e., angle-preserving) transformations have proven to be a powerful paradigm for geometry processing, since (i) the numerical problems involved are typically linear, providing scalability and guarantees of correctness, and (ii) conformal descriptions of geometry are often dramatically simpler or lower-dimensional than traditional encodings. Conformal geometry is also linked to constitutive laws appearing in computational mechanics and 3D fabrication. This lecture will touch on both the mathematical foundations of conformal geometry and recent numerical techniques and applications in 3D geometry processing.
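As a concrete illustration of the linearity mentioned above: the cotangent Laplacian is the linear operator at the heart of many conformal parameterization methods. The sketch below is not part of the lecture material; the tiny mesh and function name are illustrative, and it builds a dense matrix purely for clarity:

```python
import numpy as np

def cotangent_laplacian(V, F):
    """Dense cotangent Laplacian of a small triangle mesh.
    V: (n,3) vertex positions, F: (m,3) triangle vertex indices."""
    n = len(V)
    L = np.zeros((n, n))
    for tri in F:
        for k in range(3):
            i, j, o = tri[k], tri[(k + 1) % 3], tri[(k + 2) % 3]
            # cotangent of the angle at vertex o, opposite edge (i, j)
            u, v = V[i] - V[o], V[j] - V[o]
            cot = np.dot(u, v) / np.linalg.norm(np.cross(u, v))
            w = 0.5 * cot
            L[i, j] += w; L[j, i] += w
            L[i, i] -= w; L[j, j] -= w
    return L

# two triangles sharing an edge: a unit square split along a diagonal
V = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
F = np.array([[0, 1, 2], [0, 2, 3]])
L = cotangent_laplacian(V, F)
print(np.allclose(L.sum(axis=1), 0))  # each row sums to zero -> True
```

The row-sum property reflects that the Laplacian annihilates constant functions; real implementations (e.g. in libigl) assemble the same weights into a sparse matrix.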
Bio: Keenan Crane is an Assistant Professor of Computer Science and Robotics at Carnegie Mellon University. His research draws on insights from differential geometry and computer science to develop fundamental algorithms for working with real-world geometric data. He received his BS from UIUC, was a Google PhD Fellow at Caltech, and an NSF Mathematical Sciences Postdoctoral Fellow at Columbia University.
Direction fields and vector fields play an increasingly important role in computer graphics and geometry processing. The synthesis of directional fields on surfaces, or other spatial domains, is a fundamental step in numerous applications, such as mesh generation, deformation, texture mapping, and many more. To facilitate these objectives, various representations, discretizations, and optimization strategies have been developed. These choices come with varying strengths and weaknesses. This short tutorial provides a taste of the main challenges in directional field synthesis for graphics applications, and the methods developed in recent years to address these challenges.
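To give a flavor of the representation choices involved: N-fold rotationally symmetric (N-RoSy) fields, such as the cross fields used for quad meshing, are commonly represented by raising unit complex numbers to the N-th power, which quotients out the symmetry. A toy sketch (not from the tutorial repository; the function name is made up):

```python
import cmath, math

def rosy_average(angles, N=4):
    """Average N-fold symmetric directions (e.g. cross fields, N=4).
    Raising unit complex numbers to the N-th power makes the
    representation invariant to rotations by 2*pi/N; averaging there
    and taking the N-th root maps back to a representative angle."""
    s = sum(cmath.exp(1j * N * a) for a in angles)
    return cmath.phase(s) / N

# Two crosses at 0 and 80 degrees: as a 4-RoSy, 80 deg is equivalent
# to -10 deg, so the average is -5 deg rather than the naive 40 deg.
avg = rosy_average([0.0, math.radians(80)])
print(round(math.degrees(avg), 1))  # -5.0
```

The same power-representation trick makes smoothness energies on such fields quadratic, which is one reason it is popular in practice.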
Students will get access to a GitHub repository containing material and demos for the tutorial, which should ideally run out of the box. To experiment with them in real time, a laptop with a development environment that CMake recognizes is advised, but not mandatory.
Bio: Amir Vaxman is a universitair docent (assistant professor) in the Division Virtual Worlds at the Department of Information and Computing Sciences at Utrecht University, The Netherlands. Before his position at UU, he was a postdoctoral fellow at TU Wien (Vienna) in the Geometric Modeling and Industrial Geometry group, where he also received the Lise Meitner fellowship. He earned his BSc in computer engineering and his PhD in computer science from the Technion-IIT. His research interests are geometry processing and discrete differential geometry, focusing on directional-field design, unconventional meshes, constrained shape spaces, architectural geometry, and medical applications.
Triangle meshes are a well-known and quite predominant representation for surfaces and other kinds of geometric data. Their simplicial structure makes handling, processing, and modification relatively easy and convenient. In certain domains, though, there is a strong preference for meshes composed of quadrilateral rather than triangular elements. Their specific structure allows for alignment to the directions of principal curvature, stress or strain, the characteristic directions of PDEs on surfaces, etc. These properties provide advantages particularly in areas such as animation and simulation. There is a decades-long history of manual and assisted quad mesh generation, a process that can be highly time-consuming and expensive. Recent advances in the field of geometry processing have spawned a variety of powerful and flexible automatic and semi-automatic quad mesh generation techniques that are now starting to improve efficiency in this area. In this tutorial, after a look at the historical development, we discuss in detail the modern techniques that enable the generation of quad meshes with respect to a variety of objectives and quality criteria. These may involve properties such as the size, anisotropy, orientation, and alignment of the individual quad elements, as well as the local and global structure of the quad mesh in terms of connectivity and topology. Despite the remarkable results to date, automatic quad mesh generation remains a very active research area. We elaborate on the open problems that are waiting to be addressed, related to reliability, robustness, optimality, and the move towards the incorporation of higher-level aspects. Finally, an outlook on the generalization of the discussed techniques to the next dimension is given, where hexahedral meshes are desired for the discretization of volumes when modeling just the surface is insufficient.
Date: 2pm 1st July 2017 (slides, not recorded)
Bio: Marcel Campen is a researcher at RWTH Aachen University. He previously was a postdoc at New York University (Courant Institute), after receiving his doctoral degree in 2014 from RWTH Aachen University. Marcel has authored numerous papers on the topic of quad mesh and quad layout generation and optimization, and is currently working on the challenging open problems in this area.
Libigl is a library of C++ code for geometry processing. Its wide functionality includes construction of common sparse discrete differential geometry operators such as the cotangent Laplacian, simple facet and edge-based topology data structures, mesh-viewing utilities for OpenGL and GLSL, and many core functions for matrix manipulation which make Eigen feel a lot more like MATLAB. In this tutorial, we will walk through an introduction of libigl via readymade examples spanning the gamut of geometry processing applications and tasks. Students will be able to follow along on their laptops. We will conclude with two live coding sessions demonstrating libigl's effectiveness at core geometry processing tasks such as parameterization or surface deformation.
The complete tutorial material including code can be downloaded from: http://libigl.github.io/libigl
Important! Please use this link to check out the source code:
```
git clone --recursive https://github.com/libigl/libigl.git
```
Bios: Alec Jacobson is an Assistant Professor of Computer Science at University of Toronto. Before that he was a post-doctoral researcher at Columbia University working with Prof. Eitan Grinspun. He received a PhD in Computer Science from ETH Zurich advised by Prof. Olga Sorkine-Hornung, and an MA and BA in Computer Science and Mathematics from the Courant Institute of Mathematical Sciences, New York University. His thesis on real-time deformation techniques for 2D and 3D shapes was awarded the ETH Medal and the Eurographics Best PhD award. Leveraging ideas from differential geometry and finite-element analysis, his work in geometry processing improves exposure of geometric quantities, while his novel user interfaces reduce human effort and increase exploration. He has published several papers in the proceedings of SIGGRAPH. He leads development of the widely used geometry processing library, libigl, winner of the 2015 SGP software award. In 2017, he received the Eurographics Young Researcher Award.
Daniele Panozzo is an Assistant Professor of Computer Science at the Courant Institute of Mathematical Sciences in New York University. Prior to joining NYU he was a postdoctoral researcher at ETH Zurich (2012-2015). He earned his PhD in Computer Science from the University of Genova (2012) and his doctoral thesis received the EUROGRAPHICS Award for Best PhD Thesis (2013). Daniele’s research interests are in digital fabrication, geometry processing, architectural geometry and discrete differential geometry. He received the EUROGRAPHICS Young Researcher Award in 2015, the NSF CAREER Award in 2017, and his work has been covered by Swiss National Television and various national and international printed media. Daniele is chairing the Graphics Replicability Stamp Initiative (www.replicabilitystamp.org), which is an initiative to promote reproducibility of research results and to allow scientists and practitioners to immediately benefit from state-of-the-art research results.
This course will introduce the audience to techniques for computing and processing correspondences between 3D shapes based on the functional map framework. This framework is based on the idea that it is often easier to establish correspondences between real-valued functions defined on the shapes than between points or triangles, and it has recently led to significant advances in non-rigid shape matching and other related areas. Our main goal is to introduce the necessary mathematical background and the computational methods based on the idea of functional maps. We will assume that participants have knowledge of basic linear algebra and some knowledge of differential geometry, to the extent of being familiar with the concept of a manifold. We will discuss in detail the functional approach to finding correspondences between non-rigid shapes, and mention some extensions such as the design and analysis of tangent vector fields, consistent map estimation, and shape variability analysis. As part of this course, we will also provide some basic code (in MATLAB) to help in getting started with practical implementations.
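The core computation of the framework can be illustrated in a few lines: once functions are written as coefficient vectors in a truncated spectral basis, estimating the map between two shapes reduces to linear least squares. The sketch below is not the course's MATLAB code; all data is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy functional-map estimation, entirely in a truncated spectral basis:
# the coefficient vectors stand in for projections onto k Laplace-Beltrami
# eigenfunctions of each shape.  All numbers here are synthetic.
k, d = 6, 10                           # basis size, number of descriptors
C_true = rng.standard_normal((k, k))   # "unknown" functional map
A = rng.standard_normal((k, d))        # descriptor coefficients on shape 1
B = C_true @ A                         # corresponding coefficients on shape 2

# Estimate C from corresponding functions: min_C ||C A - B||_F,
# a plain linear least-squares problem (a key appeal of the framework).
C = np.linalg.lstsq(A.T, B.T, rcond=None)[0].T
print(np.allclose(C, C_true))  # True: the map is recovered exactly here
```

With noisy descriptors on real shapes the same least-squares problem is typically augmented with regularizers (e.g. commutativity with the Laplacian), but the linear-algebraic core is as above.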
Bio: Maks Ovsjanikov is an Assistant Professor at Ecole Polytechnique in France with a CNRS chaire d’excellence and the Jean Marjoulet professorial chair. He received an Excellence in Research Award from the Institute for Computational and Mathematical Engineering at Stanford University for his work on spectral methods in shape analysis, and the Eurographics Young Researcher Award in 2014 "In recognition of his outstanding contributions to theoretical foundations of non-rigid shape matching." He has served on the technical program committees of various international conferences, is a member of the editorial board of Computer Graphics Forum and has co-chaired the Symposium on Geometry Processing in 2016.
The past decade of computer vision research has witnessed the re-emergence of "deep learning", and in particular convolutional neural network (CNN) techniques, which make it possible to learn powerful image feature representations from large collections of examples. CNNs have achieved breakthrough performance in a wide range of applications such as image classification, segmentation, detection, and annotation. Nevertheless, when attempting to apply the CNN paradigm to 3D shapes (feature-based description, similarity, correspondence, retrieval, etc.) one has to face fundamental differences between images and geometric objects. Shape analysis and geometry processing pose new challenges that are non-existent in image analysis, and deep learning methods have only recently started penetrating into our community. The purpose of this tutorial is to overview the foundations and the current state of the art on learning techniques for 3D shape analysis. Special focus will be put on deep learning techniques (CNN) applied to Euclidean and non-Euclidean manifolds for tasks of shape classification, retrieval and correspondence. The tutorial will present in a new light the problems of shape analysis and geometry processing, emphasizing the analogies and differences with the classical 2D setting, and showing how to adapt popular learning schemes in order to deal with 3D shapes. The tutorial will assume no particular background, beyond some basic working knowledge that is a common denominator for students and practitioners in graphics.
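One way such methods generalize convolution beyond images is spectral filtering: a filter is defined as a function of the eigenvalues of a graph or mesh Laplacian. A toy sketch on a 4-vertex cycle graph (illustrative only, not from the tutorial):

```python
import numpy as np

# Spectral "convolution" on a graph: filter a signal by a function of
# the graph Laplacian's eigenvalues -- the basic generalization of
# convolution that spectral CNNs build on.  Toy 4-cycle graph.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)
L = np.diag(A.sum(1)) - A      # combinatorial graph Laplacian
lam, U = np.linalg.eigh(L)     # eigenvalues and eigenvectors

def spectral_filter(x, g):
    """Apply the filter g (a function of eigenvalues) to signal x."""
    return U @ (g(lam) * (U.T @ x))

x = np.array([1.0, 0.0, 0.0, 0.0])          # impulse at one vertex
y = spectral_filter(x, lambda t: np.exp(-t))  # heat-kernel smoothing
print(np.allclose(y.sum(), x.sum()))  # True: smoothing preserves mass
```

In a learned setting, the fixed filter `g` is replaced by trainable coefficients; on meshes the graph Laplacian is replaced by a discrete Laplace-Beltrami operator.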
Bio: Emanuele Rodolà (PhD 2012) is a post-doctoral researcher at the University of Lugano, Switzerland. Previously he was an Alexander von Humboldt post-doctoral fellow at TU Munich (2013-2015) and a JSPS Research Fellow at the University of Tokyo (2012). His research interests include shape analysis, correspondence, reconstruction, and machine learning techniques applied to problems in these areas, and has authored about 50 papers on these topics. He received a number of Best Paper awards (3DPVT 2010, VMV 2015, SGP 2016) and was recognized as Outstanding Reviewer at top computer vision conferences, CVPR (2013, 2015, 2016), ECCV (2014, 2016) and ICCV (2015). His work on 3D shapes was featured by the national Italian television (RAI - Cose dell'altro Geo) in 2012.
The technological advances of geometric measurement devices have revolutionized our ability to digitize the world in 3D. This revolution has made possible the development of new applications such as automatic scene interpretation and the simulation of physical phenomena at the scale of entire cities.
A key issue at the heart of this revolution is surface reconstruction, which consists in converting raw measurements into a computerized surface representation. The objective is to reconstruct a surface from the measurements alone (most often point clouds), such that the topology and geometry of the reconstructed surface approximate the measured physical object well. Surface reconstruction is a severely ill-posed problem (with non-unique solutions), and the diversity of existing approaches reflects both the a priori knowledge assumed about physical surfaces (simple, smooth, piecewise smooth) and the properties sought for the reconstructed surface (watertight, intersection-free).
This course will offer an introduction to the main families of surface reconstruction methods, in terms of the assumptions used to make the problem better posed. I will then discuss the enduring problems posed by imperfect data (imprecise, sparse or incomplete). The quest for robustness has motivated variational methods and more recent approaches inspired by the notion of optimal transport. Finally, I will discuss the emerging scientific challenges relating to several novel acquisition paradigms (sensor networks, continuous digitization, community data).
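Many reconstruction pipelines begin by estimating a normal at each point from its local neighborhood, typically by fitting a plane via PCA. A sketch of this preprocessing step (not course material; the function name and parameters are illustrative):

```python
import numpy as np

def estimate_normal(points, p, k=8):
    """Estimate the surface normal at point p from its k nearest
    neighbors by PCA: the normal is the eigenvector of the local
    covariance matrix with the smallest eigenvalue."""
    d = np.linalg.norm(points - p, axis=1)
    nbrs = points[np.argsort(d)[:k]]        # k nearest neighbors of p
    cov = np.cov((nbrs - nbrs.mean(0)).T)   # 3x3 local covariance
    w, v = np.linalg.eigh(cov)              # eigenvalues in ascending order
    return v[:, 0]                          # smallest-variance direction

# Noisy samples of the plane z = 0: the estimated normal should be ~ +-z.
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(-1, 1, (50, 2)),
                       0.001 * rng.standard_normal(50)])
n = estimate_normal(pts, pts[0])
print(abs(n[2]) > 0.99)  # True
```

Note the sign ambiguity: PCA gives an unoriented normal, and consistently orienting normals across the cloud is itself a nontrivial step in reconstruction pipelines.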
Recent Reference: A Survey of Surface Reconstruction from Point Clouds. Berger, Tagliasacchi, Seversky, Alliez, Guennebaud, Levine, Sharf and Silva. Computer Graphics Forum, 36 (1), 2016.
Bio: Pierre Alliez is a senior researcher and team leader at Inria Sophia Antipolis. He teaches at Ecole des Ponts ParisTech and at the University of Nice. His current research focuses on geometric modeling and processing, with an emphasis on the resilience of algorithms to defect-laden data. In 2011 he was awarded a five-year €1.4M grant from the European Research Council (ERC).
Background required: applied maths, basic geometric data structures and algorithms.
Good to have: install the latest CGAL library (www.cgal.org/) and compile its polyhedron demo.
Web: team.inria.fr/titane/pierre-alliez/
In recent years, computer graphics researchers have contributed significantly to developing novel computational tools for 3D printing. Because many methods for designing objects with functional goals have been presented concurrently, a coherent analysis and discussion has so far been missing. This course starts by reviewing current state-of-the-art 3D printing hardware and software pipelines and analyzes their potential as well as their shortcomings. The course then focuses on computational specification-to-fabrication methods, which allow designing or computing an object's shape and material composition from a functional description. These approaches are grouped into two categories: automatic methods without user interaction, and interactive methods that keep the designer in the loop. We review automatic methods for translating functional specifications such as appearance and mechanical properties into material compositions that can be 3D printed, providing a coherent view of the underlying data structures, inverse problem formulations, and optimization strategies. We then describe recent efforts in interactive design and simulation methods for 3D printing. The aim of this course is to present a coherent review, common theory, and understanding of specification-to-fabrication methods, and to provide insights on current limitations on the software and hardware side that may inspire future work.
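As a toy illustration of the inverse problems involved: one can choose the mixing ratios of base materials so that a printed composite matches target properties. The property values below are invented for illustration; real pipelines add bound constraints, spatial variation, and simulation in the loop:

```python
import numpy as np

# Toy "specification to fabrication" inverse problem: find mixing
# ratios of two base materials so the composite matches a target
# property vector.  All property values are made up for illustration.
M = np.array([[1.0, 10.0],     # stiffness of base materials A and B
              [0.9,  0.1]])    # lightness of base materials A and B
target = np.array([5.5, 0.5])  # desired stiffness and lightness

# With as many properties as materials and a linear mixing model,
# this reduces to a linear solve.
w = np.linalg.solve(M, target)
print(np.round(w, 3))  # mixing ratios: [0.5 0.5]
```

Even this trivial example shows the shape of the problem: a forward model (material composition to properties) is inverted, and most of the research effort goes into richer, nonlinear forward models and constrained optimization.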
Bios: Bernd Bickel is an Assistant Professor, heading the Computer Graphics and Digital Fabrication group at IST Austria. He is a computer scientist interested in computer graphics and its overlap into animation, biomechanics, material science, and digital fabrication. His main objective is to push the boundaries of how digital content can be efficiently created, simulated, and reproduced.
Bernd obtained his Master's degree in Computer Science from ETH Zurich in 2006. For his PhD studies, Bernd joined the group of Markus Gross, a full professor of Computer Science at ETH Zurich and the director of Disney Research Zurich. From 2011 to 2012, Bernd was a visiting professor at TU Berlin, and in 2012 he became a research scientist and research group leader at Disney Research, where he investigated approaches for simulating, designing, and fabricating materials and 3D objects. In early 2015 he joined IST Austria.