Title: Gaining insights into the information distribution of Light Fields and enabling adaptive Light Field processing
Author: Kremer, Robin
Publisher: Gesellschaft für Informatik e.V.
Year: 2022
Date: 2023-02-21
ISBN: 978-3-88579-752-4
ISSN: 1614-3213
URI: https://dl.gi.de/handle/20.500.12116/40239
Language: en
Keywords: Lightfields; Froxels; Light Fields; Frustum; Voxel; Neural Radiance Field; Ray Classification

Abstract: Thanks to smartphones with several cameras, capturing a scene from multiple viewpoints has become increasingly accessible. Together with the evolving computing capabilities of modern hardware, light field processing has gained considerable attention in recent years [Br20; Fl19; Mi20]. These techniques rely on neural networks to generate representations of the light field data. Other work assumes certain scene properties (such as Lambertian radiation) to enable light field processing. The work shown here uses depth maps to transform the light field into a froxel (frustum + voxel) [Ev15] centered representation, enabling unique post-processing steps and analysis of the ray distribution in a scene. More importantly, it paves the way to quantifying the information distribution within a scene. Based on this information, appropriate adaptive filtering techniques can be applied. The transformation into the froxel-centric representation is compatible with techniques such as NeRF.
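
The abstract describes binning light-field rays into a frustum-aligned voxel (froxel) grid using per-ray depth. The following is a minimal NumPy sketch of that idea, not the paper's implementation: the function name froxel_index, the grid resolution, the near/far planes and the logarithmic depth slicing are all illustrative assumptions.

import numpy as np

def froxel_index(u, v, depth, width, height,
                 near=0.1, far=100.0, grid=(32, 32, 64)):
    # Map a pixel (u, v) with metric depth to indices in a
    # frustum-aligned grid. Lateral bins follow the image plane,
    # so each froxel column corresponds to a bundle of rays.
    gx, gy, gz = grid
    ix = np.clip(u / width * gx, 0, gx - 1).astype(int)
    iy = np.clip(v / height * gy, 0, gy - 1).astype(int)
    # Depth slices spaced logarithmically between near and far,
    # a common choice for frustum voxel (froxel) grids.
    t = np.log(depth / near) / np.log(far / near)
    iz = np.clip(t * gz, 0, gz - 1).astype(int)
    return ix, iy, iz

# Bin all rays of one light-field view into froxels and count how
# many rays land in each cell (a simple ray-density statistic).
H, W = 480, 640
depth_map = np.random.uniform(0.5, 50.0, size=(H, W))  # stand-in depth map
v, u = np.mgrid[0:H, 0:W]
ix, iy, iz = froxel_index(u, v, depth_map, W, H)
counts = np.zeros((32, 32, 64), dtype=np.int64)
np.add.at(counts, (ix, iy, iz), 1)
print("occupied froxels:", np.count_nonzero(counts))

Aggregating per-froxel ray counts over all views, as in the example, is one simple way to expose how rays (and thus information) are distributed in the scene, which the abstract names as the basis for adaptive filtering.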