Title: Multimodal Neural Network for Overhead Person Re-identification
Authors: Aske R. Lejbølle, Kamal Nasrollahi, Benjamin Krogh, Thomas B. Moeslund
Editors: Arslan Brömme, Christoph Busch, Antitza Dantcheva, Christian Rathgeb, Andreas Uhl
Date: 2017-09-26
ISBN: 978-3-88579-664-0
ISSN: 1617-5468
URI: https://dl.gi.de/handle/20.500.12116/4650
Language: en
Keywords: Multimodal; Person Re-identification; Convolutional Neural Networks; Feature Fusion

Abstract: Person re-identification has potential applications in forensics, flow analysis and queue monitoring. It is the process of matching persons across two or more camera views, most often by extracting colour- and texture-based hand-crafted features to identify similar persons. Because of challenges such as lighting changes between views, occlusion and privacy concerns, attention has increasingly turned to overhead and depth-based camera solutions. We have therefore developed a system based on a Convolutional Neural Network (CNN) that is trained on both depth and RGB modalities to provide a fused feature. Training on a locally collected dataset, we achieve a rank-1 accuracy of 74.69%, an increase of 16.00% over using a single modality. Furthermore, tests on two similar publicly available benchmark datasets, TVPR and DPI-T, show accuracies of 77.66% and 90.36%, respectively, outperforming state-of-the-art results by 3.60% and 5.20%.
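The abstract describes fusing features from an RGB stream and a depth stream and then matching persons across views, evaluated by rank-1 accuracy. A minimal sketch of that pipeline is shown below; it is not the paper's architecture, and the per-modality feature extractors are assumed to exist already (here they are just placeholder vectors). The fusion shown is simple late fusion — L2-normalize each modality's embedding, then concatenate — with matching by cosine similarity:

```python
import numpy as np

def l2_normalize(x, axis=1, eps=1e-12):
    # Normalize each row vector to unit length (avoids division by zero)
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def fuse(rgb_feat, depth_feat):
    # Hypothetical late fusion: normalize each modality's embedding so
    # neither dominates, then concatenate into one joint feature vector
    return np.concatenate([l2_normalize(rgb_feat),
                           l2_normalize(depth_feat)], axis=1)

def rank1_accuracy(query, gallery, query_ids, gallery_ids):
    # Cosine similarity between every query and every gallery feature;
    # rank-1 accuracy = fraction of queries whose best match has the same ID
    sims = l2_normalize(query) @ l2_normalize(gallery).T
    best = np.argmax(sims, axis=1)
    return float(np.mean(gallery_ids[best] == query_ids))

# Toy demonstration with random placeholder features for 5 identities
rng = np.random.default_rng(0)
rgb_gallery = rng.normal(size=(5, 8))
depth_gallery = rng.normal(size=(5, 8))
gallery = fuse(rgb_gallery, depth_gallery)

# Queries are the same identities seen again, with small perturbations
rgb_query = rgb_gallery + 0.01 * rng.normal(size=(5, 8))
depth_query = depth_gallery + 0.01 * rng.normal(size=(5, 8))
query = fuse(rgb_query, depth_query)

ids = np.arange(5)
acc = rank1_accuracy(query, gallery, ids, ids)
print(f"rank-1 accuracy: {acc:.2f}")
```

In a real system the placeholder vectors would be replaced by embeddings from the trained RGB and depth CNN streams; the fusion and evaluation logic stays the same.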