Orient Anything: Learning Robust Object Orientation Estimation from Rendering 3D Models

Zehan Wang1*       Ziang Zhang1*      Tianyu Pang2      Chao Du2      Hengshuang Zhao3      Zhou Zhao1
1Zhejiang University                2Sea AI Lab                3The University of Hong Kong
Note: To avoid ambiguity, Orient Anything only accepts images containing a single object as input. For multi-object scenes, we first use SAM to isolate each object and then predict the orientation of each one separately.
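For reference, here is a minimal sketch of this multi-object workflow. It assumes the official segment-anything package; predict_orientation is a hypothetical stand-in for Orient Anything inference, not an actual API of this repository.

# Sketch of the multi-object workflow described above: SAM isolates each
# object, and the orientation of each crop is predicted separately.
# `predict_orientation` is a hypothetical stand-in for model inference.
import numpy as np
from PIL import Image
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

image = np.array(Image.open("scene.jpg").convert("RGB"))
for mask in mask_generator.generate(image):
    x, y, w, h = (int(v) for v in mask["bbox"])  # XYWH box of one object
    crop = image[y:y + h, x:x + w]
    azimuth, polar, rotation = predict_orientation(crop)  # hypothetical call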

Orient Anything is a robust image-based object orientation estimation model. Trained on 2M rendered, labeled images, it achieves strong zero-shot generalization to images in the wild. In the visualizations, the object orientation is represented by the red axis, while the blue and green axes indicate the upward and left directions of the object.

Visualizations on Images in the Wild

Visualizations on Videos and 3D Objects


Qualitative Comparison

Orientation Understanding in MLLMs

Object Direction Recognition

Spatial Part Reasoning

Spatial Relation Reasoning

Abstract

Orientation is a key attribute of objects, crucial for understanding their spatial pose and arrangement in images. However, practical solutions for accurate orientation estimation from a single image remain underexplored. In this work, we introduce Orient Anything, the first expert and foundational model designed to estimate object orientation in single- and free-view images. Given the scarcity of labeled data, we propose extracting knowledge from the 3D world. By developing a pipeline that annotates the front face of 3D objects and renders images from random views, we collect 2M images with precise orientation annotations. To fully leverage this dataset, we design a robust training objective that models 3D orientation as probability distributions over three angles and predicts the object orientation by fitting these distributions. In addition, we employ several strategies to improve synthetic-to-real transfer. Our model achieves state-of-the-art orientation estimation accuracy on both rendered and real images and exhibits impressive zero-shot ability in various scenarios. More importantly, it enhances many applications, such as the comprehension and generation of complex spatial concepts and 3D object pose adjustment.

Data Collection

The orientation data collection pipeline consists of three steps: 1) Canonical 3D Model Filtering: removing 3D objects in tilted poses. 2) Orientation Annotating: an advanced 2D VLM identifies the front face from multiple orthogonal perspectives, with view symmetry used to narrow the candidate choices. 3) Free-view Rendering: images are rendered from random, free viewpoints, and the object orientation is represented by the polar, azimuthal, and rotation angles of the camera.

(Figure: the three-step orientation data collection pipeline)
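As a rough illustration of step 3, the sketch below samples a random camera pose on a viewing sphere around the canonical object and records the three angles that label the rendered image. The axis conventions, angle ranges, and radius here are illustrative assumptions, not the paper's exact settings.

# Sketch of the free-view labeling step: sample a random camera pose on a
# sphere around the canonical object and record the (polar, azimuth, rotation)
# angles as the orientation label of the rendered image.
import numpy as np

rng = np.random.default_rng(0)

def sample_view_label(radius=2.5):
    azimuth = rng.uniform(0.0, 2.0 * np.pi)        # horizontal angle around the object
    polar = rng.uniform(-np.pi / 3, np.pi / 3)     # elevation above/below the equator
    rotation = rng.uniform(-np.pi / 6, np.pi / 6)  # in-plane camera roll
    # Camera position on the viewing sphere (object at the origin).
    cam_pos = radius * np.array([
        np.cos(polar) * np.cos(azimuth),
        np.cos(polar) * np.sin(azimuth),
        np.sin(polar),
    ])
    # Render the object from cam_pos (e.g., with Blender) and store the image
    # together with its angle label.
    return cam_pos, np.degrees((azimuth, polar, rotation))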

Model Training

Orient Anything consists of a simple visual encoder and multiple prediction heads. It is trained to judge whether the object in the input image has a meaningful front face and to fit the probability distributions of its 3D orientation.

(Figure: Orient Anything model architecture and training objective)
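A short PyTorch sketch of this setup follows. The placeholder encoder, bin counts, and Gaussian smoothing width are illustrative assumptions; the actual model uses a pretrained visual encoder and the paper's own hyperparameters.

# Sketch of the training setup: a visual encoder, one head that judges whether
# the object has a meaningful front face, and three heads that each predict a
# probability distribution over a discretized angle.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OrientAnythingSketch(nn.Module):
    def __init__(self, dim=768, azi_bins=360, pol_bins=180, rot_bins=360):
        super().__init__()
        # Placeholder encoder standing in for the actual pretrained ViT.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim),
        )
        self.front_head = nn.Linear(dim, 2)       # meaningful front face or not
        self.azi_head = nn.Linear(dim, azi_bins)  # azimuth distribution
        self.pol_head = nn.Linear(dim, pol_bins)  # polar distribution
        self.rot_head = nn.Linear(dim, rot_bins)  # rotation distribution

    def forward(self, images):
        feat = self.encoder(images)
        return (self.front_head(feat), self.azi_head(feat),
                self.pol_head(feat), self.rot_head(feat))

def gaussian_target(gt_bin, n_bins, sigma=2.0):
    # Soft label: a circularly wrapped Gaussian centered on the ground-truth
    # bin, so near-miss predictions cost less than distant ones.
    bins = torch.arange(n_bins, dtype=torch.float32)
    dist = torch.minimum((bins - gt_bin).abs(), n_bins - (bins - gt_bin).abs())
    target = torch.exp(-0.5 * (dist / sigma) ** 2)
    return target / target.sum()

def angle_loss(logits, target):
    # Fit the predicted distribution to the smoothed target via KL divergence.
    return F.kl_div(F.log_softmax(logits, dim=-1), target, reduction="batchmean")

At inference time, each predicted angle can be read off from its fitted distribution, for example as the expectation or the argmax over bins.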

Citation

@article{orient_anything,
  title={Orient Anything: Learning Robust Object Orientation Estimation from Rendering 3D Models},
  author={Wang, Zehan and Zhang, Ziang and Pang, Tianyu and Du, Chao and Zhao, Hengshuang and Zhao, Zhou},
  journal={arXiv preprint arXiv:2412.18605},
  year={2024}
}