updated on March 23, 2024

ISBN 978-3-8439-4694-0

72,00 € incl. VAT, plus shipping


978-3-8439-4694-0, Informatik series

Kripasindhu Sarkar
3D Shape Representations for Learning

189 pages, dissertation, Technische Universität Kaiserslautern (2020), hardcover, A5

Abstract

In recent years, machine learning based methods have achieved state-of-the-art results for many tasks involving 2D images. Despite the great progress in 2D convolutional neural networks (CNNs), applying the analogous ideas to 3D shapes is not straightforward, as a common parameterization of the 3D shapes must first be found. In this thesis, we propose novel representations of 3D shapes that enable learning, together with their subsequent applications to 3D shape processing. To this end, this thesis contributes in two ways.

The first contribution is appearance-based representations of 3D shapes, where we represent a 3D shape by a set of its rendered images. This gives us a large amount of data suitable for modern machine learning. Using realistic rendering along with a simple domain adaptation method, we perform the task of 3D-assisted image analysis and solve the problem of 3D object recognition in 2D images.
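To make the appearance-based idea concrete, the following is a minimal sketch (not the thesis pipeline), assuming a point-sampled shape and using plain NumPy as a stand-in for a realistic renderer: a single 3D shape is turned into a set of crude orthographic depth views that a standard 2D CNN could consume. The function name depth_views and all parameters are hypothetical.

```python
import numpy as np

def depth_views(points, n_views=12, size=64):
    """points: (N, 3) array; returns (n_views, size, size) depth images."""
    views = []
    for k in range(n_views):
        theta = 2 * np.pi * k / n_views
        # Rotate the shape around the z-axis to simulate a camera orbiting it.
        rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                        [np.sin(theta),  np.cos(theta), 0.0],
                        [0.0, 0.0, 1.0]])
        p = points @ rot.T
        # Orthographic projection onto the x-z plane; y acts as depth.
        xz = p[:, [0, 2]]
        mins, maxs = xz.min(axis=0), xz.max(axis=0)
        uv = ((xz - mins) / (maxs - mins + 1e-9) * (size - 1)).astype(int)
        img = np.full((size, size), np.inf, dtype=np.float32)
        for (u, v), depth in zip(uv, p[:, 1]):
            img[v, u] = min(img[v, u], depth)  # keep the nearest surface
        img[np.isinf(img)] = 0.0               # background pixels
        views.append(img)
    return np.stack(views)

# Each rendered view can then be fed to a standard 2D CNN, e.g. for recognition.
print(depth_views(np.random.rand(2000, 3)).shape)  # (12, 64, 64)
```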

The second contribution is geometry-centric representations of 3D shapes. To this end, we propose two types of representations. First, we represent 3D shapes by fixed-length, regular local patches. The locality of such features makes them suitable for fine-scale reconstruction tasks. By applying both matrix factorization and CNNs to the set of 3D patches, we show high-quality inpainting and denoising of 3D shapes. Second, we represent 3D shapes by a novel global feature descriptor: Multi-Layered Height-maps (MLH). After assigning a reference grid to the 3D shape, we store multiple layers of height values at each grid location, thereby representing 3D shape details that are hidden behind several layers of occlusion. Using this parameterization, we learn 3D shapes with 2D CNN models and show accurate classification results on the ModelNet dataset. Our MLH descriptor enables the use of well-investigated 2D CNNs in the context of 3D shapes, which is not possible with voxel-based representations and other 3D architectures.
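As a rough illustration of the MLH idea (a sketch under assumptions, not the thesis implementation), a point cloud can be projected onto a reference x-y grid while keeping k height values per cell, so that surfaces hidden behind the first layer remain represented. The layer rule used below (evenly spaced height quantiles per cell) and the function name mlh_descriptor are hypothetical simplifications.

```python
import numpy as np

def mlh_descriptor(points, grid_res=32, k=3):
    """points: (N, 3) array; returns a (grid_res, grid_res, k) descriptor."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    # Map x and y coordinates to grid indices in [0, grid_res - 1].
    scale = (maxs[:2] - mins[:2]) + 1e-9
    ij = ((points[:, :2] - mins[:2]) / scale * (grid_res - 1)).astype(int)

    desc = np.zeros((grid_res, grid_res, k), dtype=np.float32)
    for i in range(grid_res):
        for j in range(grid_res):
            mask = (ij[:, 0] == i) & (ij[:, 1] == j)
            if not mask.any():
                continue  # empty cells stay at zero
            z = np.sort(points[mask, 2])
            # k representative layers: evenly spaced quantiles of the heights.
            idx = np.linspace(0, len(z) - 1, k).astype(int)
            desc[i, j] = z[idx]
    return desc

# Example: a random point cloud yields a (32, 32, 3) image-like tensor that a
# standard 2D CNN can process, e.g. for shape classification.
cloud = np.random.rand(5000, 3)
print(mlh_descriptor(cloud).shape)  # (32, 32, 3)
```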