Today, most methods for image understanding tasks rely on feed-forward neural networks. While this approach offers empirical accuracy, efficiency, and adaptability to new tasks via fine-tuning, it also comes with fundamental disadvantages. Existing networks often struggle to generalize across different datasets, even on the same task. By design, these networks ultimately reason about high-dimensional scene features, which are difficult to analyze, especially when predicting 3D information from 2D images. We propose to recast 3D multi-object tracking from RGB cameras as an Inverse Rendering (IR) problem: using a differentiable rendering pipeline, we optimize over the latent space of pre-trained 3D object representations and retrieve the latents that best represent the object instances in a given input image. To this end, we optimize an image loss over generative latent spaces that inherently disentangle shape and appearance properties. Beyond offering an alternative take on tracking, our method also makes it possible to examine the generated objects, reason about failure situations, and resolve ambiguous cases. We validate our method's generalization and scaling capabilities by learning the generative prior exclusively from synthetic data and evaluating camera-based 3D tracking on the nuScenes and Waymo datasets, both of which are entirely unseen by our method and require no fine-tuning.

This work resulted in the paper "Inverse Neural Rendering for Explainable Multi-Object Tracking" by Julian Ost*, Tanushree Banerjee*, Mario Bijelic, and Felix Heide, currently under review at a conference (* denotes equal contribution).
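To make the inverse-rendering idea concrete, below is a minimal, hypothetical PyTorch sketch of test-time latent optimization: a frozen, pre-trained generative object model (stubbed here as `ToyGenerator`) renders an object from shape and appearance latents plus a pose, and we optimize those latents against an observed image crop with a photometric loss. All class and function names are illustrative placeholders, not the authors' released code, and the toy decoder stands in for the actual differentiable rendering pipeline.

```python
# Hypothetical sketch of test-time inverse rendering over a generative latent space.
# The generator below is a stand-in for a pre-trained 3D object representation
# with a differentiable rendering head; it is NOT the paper's actual model.
import torch
import torch.nn as nn


class ToyGenerator(nn.Module):
    """Placeholder generative prior: maps disentangled shape/appearance
    latents and a 6-DoF pose vector to a rendered RGB crop."""

    def __init__(self, latent_dim=64, image_size=64):
        super().__init__()
        self.image_size = image_size
        self.decode = nn.Sequential(
            nn.Linear(2 * latent_dim + 6, 256), nn.ReLU(),
            nn.Linear(256, 3 * image_size * image_size), nn.Sigmoid(),
        )

    def forward(self, z_shape, z_app, pose):
        x = torch.cat([z_shape, z_app, pose], dim=-1)
        return self.decode(x).view(-1, 3, self.image_size, self.image_size)


def fit_latents(generator, observed_crop, latent_dim=64, steps=200, lr=5e-2):
    """Optimize shape/appearance latents and pose so that the rendering
    matches the observed crop; the generative prior itself stays frozen."""
    z_shape = torch.zeros(1, latent_dim, requires_grad=True)
    z_app = torch.zeros(1, latent_dim, requires_grad=True)
    pose = torch.zeros(1, 6, requires_grad=True)  # translation + rotation (illustrative)
    optim = torch.optim.Adam([z_shape, z_app, pose], lr=lr)

    for _ in range(steps):
        optim.zero_grad()
        rendering = generator(z_shape, z_app, pose)
        loss = torch.nn.functional.l1_loss(rendering, observed_crop)  # image loss
        loss.backward()
        optim.step()
    return z_shape.detach(), z_app.detach(), pose.detach()


if __name__ == "__main__":
    gen = ToyGenerator()
    for p in gen.parameters():
        p.requires_grad_(False)            # keep the pre-trained prior fixed
    target = torch.rand(1, 3, 64, 64)      # placeholder for a detected object crop
    z_s, z_a, pose = fit_latents(gen, target)
    print("recovered latents:", z_s.shape, z_a.shape, pose.shape)
```

In this reading, tracking amounts to repeating such per-object latent fits across frames; because the recovered latents correspond to renderable 3D objects, failure cases can be inspected directly by looking at what the model generated.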