Title: Deep Convolutional Neural Networks for Pose Estimation in Image-Graphics Search
Authors: Eberts, Markus; Ulges, Adrian
Editors: Eibl, Maximilian; Gaedke, Martin
Date: 2017-08-28
ISBN: 978-3-88579-669-5
DOI: 10.18420/in2017_92
ISSN: 1617-5468
Language: en
Keywords: pose estimation; image retrieval; deep learning; transfer learning

Abstract: Deep Convolutional Neural Networks (CNNs) have recently been highly successful in various image understanding tasks, ranging from object category recognition and image classification to scene segmentation. We employ CNNs for pose estimation in a cross-modal retrieval system, which, given a photo of an object, allows users to retrieve the best match from a repository of 3D models. As our system is supposed to display retrieved 3D models from the same perspective as the query image (potentially with virtual objects blended over), the pose of the object relative to the camera needs to be estimated. To do so, we study two CNN models. The first is based on end-to-end learning, i.e. a regression neural network directly estimates the pose. The second uses transfer learning with a very deep CNN pre-trained on a large-scale image collection. In quantitative experiments on a set of 3D models and real-world photos of chairs, we compare both models and show that while the end-to-end learning approach performs well on the domain it was trained on (graphics), it struggles to generalize to a new domain (photos). The transfer learning approach, on the other hand, handles this domain shift much better, resulting in an average angle deviation from the ground truth angle of about 14 degrees on photos.
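The evaluation metric reported in the abstract (average angle deviation from the ground truth angle) can be sketched as follows. This is an illustrative reconstruction, not code from the paper; the function names are ours, and we assume angles are given in degrees with differences wrapped to the shortest arc on the circle.

```python
import math  # not strictly needed here, kept for further angle math

def angular_deviation(pred_deg, true_deg):
    """Smallest absolute difference between two angles, in degrees (0..180).

    Wrapping matters: a prediction of 350 deg against a ground truth of
    10 deg is only 20 deg off, not 340 deg.
    """
    d = abs(pred_deg - true_deg) % 360.0
    return min(d, 360.0 - d)

def mean_angular_deviation(preds_deg, truths_deg):
    """Average shortest-arc deviation over a set of pose predictions."""
    pairs = list(zip(preds_deg, truths_deg))
    return sum(angular_deviation(p, t) for p, t in pairs) / len(pairs)

# Hypothetical example: two pose predictions vs. ground-truth azimuths
print(angular_deviation(350.0, 10.0))                      # → 20.0
print(mean_angular_deviation([10.0, 350.0], [20.0, 10.0]))  # → 15.0
```

A score of about 14 under this metric, as reported for the transfer learning approach on photos, would mean predicted viewpoints deviate from the true viewpoint by roughly 14 degrees on average.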