May 19, 2023

3D Computer Vision: Unlocking the Third Dimension

Francisco Pereira
Machine Learning Engineer

This article provides an overview of 3D computer vision and how the technology is being used today, written for a broad audience interested in the subject.

An introduction to 3D Computer Vision

In today’s fast-paced world of technology, it’s more important than ever to understand and interpret the details of our surroundings. In recent years, we’ve seen Convolutional Neural Networks (or CNNs, for short) completely change computer vision, allowing us to analyze images with incredible accuracy. As automation, robotics, and retail applications continue to grow, so does the demand for more advanced vision systems. This is where 3D Computer Vision shines, introducing depth information and a level of understanding that was once out of reach for traditional 2D computer vision systems.

In our upcoming series of blog posts, we’ll dive deep into the advantages of 3D computer vision and explore how this technology is transforming various sectors. By approaching this topic through the framework of a typical machine learning pipeline (Figure 1), we will gain insights into the process of capturing three-dimensional data, investigate the diverse sensors involved, and ultimately explore the multitude of methods for processing and extracting value from this information.

Figure 1: The machine learning pipeline. In this blog post series we dive into how 3D computer vision is done at each step of the way.

In this first part of the series, we uncover the exciting world of 3D computer vision, its real-life applications, and how it’s shaping the future of numerous industries.

2D vs 3D Computer Vision

To truly appreciate the benefits of 3D computer vision, it’s essential to understand the differences between 2D and 3D computer vision. At its core, computer vision is a technology that processes and interprets visual data. In 2D computer vision, data is analyzed based on pixel values, colors, and textures in a flat, two-dimensional image, much like how we view photographs. While it has been highly successful in tasks like image recognition and classification, it falls short when it comes to understanding spatial relationships and depth, making it less suitable for tasks that require accurate perception of real-world environments. By providing depth information, 3D computer vision can address many of the limitations faced by 2D computer vision, such as understanding spatial relationships, handling occlusion, and overcoming issues related to lighting and shadows.

To help you see the differences between 2D and 3D computer vision, let’s use a simple, everyday example. Picture yourself looking at a photo of a cozy living room, complete with furniture arranged in various spots. With 2D computer vision, it’s easy to identify and recognize the different pieces of furniture and their colors. However, figuring out the relative distances between the objects and their actual sizes can be tricky since there’s no depth information. As humans, we have to rely on visual cues (Figure 2) like shadows, perspective and overlapping objects to make sense of depth in a 2D image; but these cues aren’t always clear-cut.

Figure 2: Monocular depth cues. For an interesting example of how perspective and visual cues can be deceiving, check out the Ames room.

Now, imagine actually stepping into that same living room. Your understanding of the room, furniture, and their positions in relation to each other suddenly becomes much clearer, thanks to the binocular depth cues our vision provides (our ability to perceive depth using both eyes). This is the kind of enhanced perception that 3D computer vision offers to machines, making it easier for them to understand and interact with their surroundings. This ability is vital in various tasks, including robotic navigation, object manipulation, and accurate volume and shape measurements, enabling machines to interact with and respond to the world more effectively.
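To make the binocular intuition concrete, here is a toy Python sketch of stereo triangulation, the principle many 3D vision systems use to mimic our two-eyed depth perception: with two cameras a known distance apart, depth follows directly from how far a point shifts between the two images. All camera parameters below are invented for illustration.

```python
# Toy sketch of binocular (stereo) depth estimation via triangulation:
#   depth = focal_length * baseline / disparity
# where disparity is the horizontal pixel shift of a point
# between the left and right camera images.

def stereo_depth(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (in meters) of a point from its stereo disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# A nearby object shifts a lot between the two views (large disparity),
# a distant one shifts only slightly (small disparity).
near = stereo_depth(focal_length_px=700.0, baseline_m=0.12, disparity_px=42.0)
far = stereo_depth(focal_length_px=700.0, baseline_m=0.12, disparity_px=4.0)
print(round(near, 2), round(far, 2))  # 2.0 21.0
```

This is exactly the cue our eyes exploit: the "baseline" is the distance between our pupils, and the brain's matching of the two retinal images plays the role of the disparity computation.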

The depth information provided by 3D computer vision also plays a critical role in improving accuracy. While 2D computer vision can sometimes struggle to differentiate between objects in a cluttered environment, 3D computer vision leverages depth data to distinguish between them, ensuring tasks are carried out with greater precision and reliability (Figure 3).

Figure 3: Using 3D vision to distinguish different products in a cluttered environment (Source)
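As a toy illustration of how depth data separates a cluttered scene like the one in Figure 3, here is a minimal Python sketch on a hypothetical 6×6 depth image: two same-colored boxes overlap in the 2D view, so color-based segmentation would merge them into one blob, but their distinct depths pull them apart.

```python
import numpy as np

# Hypothetical 6x6 depth image (meters): background at 3.0 m,
# one box at 1.2 m and another at 0.6 m, overlapping in the image.
depth = np.array([
    [3.0, 3.0, 3.0, 3.0, 3.0, 3.0],
    [3.0, 1.2, 1.2, 3.0, 3.0, 3.0],
    [3.0, 1.2, 1.2, 0.6, 0.6, 3.0],
    [3.0, 3.0, 3.0, 0.6, 0.6, 3.0],
    [3.0, 3.0, 3.0, 3.0, 3.0, 3.0],
    [3.0, 3.0, 3.0, 3.0, 3.0, 3.0],
])

# Separate foreground pixels from the background, then group them
# by depth band -- in this toy scene, one band per object.
foreground = depth < 3.0
bands = np.unique(depth[foreground])
objects = [(float(band), int((depth == band).sum())) for band in bands]
print(objects)  # [(0.6, 4), (1.2, 4)]
```

Real systems use full 3D clustering on point clouds rather than a simple depth threshold, but the principle is the same: geometry disambiguates what appearance alone cannot.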

Another noteworthy advantage of 3D computer vision is its robustness to lighting and shadows. In the world of 2D computer vision, changes in lighting conditions and the presence of shadows can significantly impact performance, as it relies solely on color and intensity data. Utilizing depth information allows us to overcome these issues much more easily. Overall, 3D computer vision provides strong resilience across a wide range of environments and lighting conditions, allowing systems to perform more consistently and reliably.

So far, we have seen that 3D vision systems offer numerous advantages over 2D systems by providing an additional layer of information, which can improve performance. However, they also introduce complexities in terms of hardware setup, storage capacity, and processing times. It’s crucial to assess the specific application needs and determine if the benefits of using 3D vision outweigh the challenges. To help guide this decision-making process, in the following section, we explore how 3D data unlocks new possibilities and applications across multiple industries.

Real-World Applications and Trends

3D computer vision is making a significant impact across various industries by offering new possibilities and transforming traditional tasks. Much of this transformation has been driven by advances in deep learning, where new model architectures and ever-growing datasets continue to push the field forward. Let’s explore some of the exciting applications and trends in several key sectors.

Manufacturing and Quality Control

In manufacturing, 3D computer vision is enhancing robotics and automation with depth perception, allowing robots to better understand their surroundings and perform tasks with increased precision, such as picking and placing items or assembling components. Inline quality control and inspection also benefit greatly from combining 3D computer vision with machine learning. 3D deep learning models provide accurate object detection and recognition, helping systems identify defects, take precise measurements, and spot inconsistencies in manufactured products with greater reliability. This improved accuracy leads to higher product quality and reduced waste, which is crucial for maintaining a competitive edge in today’s fast-paced market. The integration of 3D computer vision with emerging technologies like Industry 4.0 and the Internet of Things (IoT) is paving the way for smart factories: systems are becoming faster and more efficient, and we can expect to see more real-time processes integrated seamlessly into manufacturing workflows.

Figure 4: Example of 3D quality inspection use case; measuring angle of lifted can tabs (Source).
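A measurement like the one in Figure 4 can be sketched in a few lines: given surface points from a 3D scan, the tilt of a lifted can tab is simply the angle between two direction vectors. All coordinates below are invented for illustration.

```python
import math

# Hypothetical sketch of a Figure-4 style inspection: the tilt of a
# lifted can tab, computed as the angle between two 3D direction
# vectors derived from scanned surface points.
def angle_deg(u, v):
    """Angle in degrees between two 3D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(dot / (nu * nv)))

can_top = (1.0, 0.0, 0.0)  # direction along the flat can lid
tab = (1.0, 0.0, 1.0)      # direction along the lifted tab
print(round(angle_deg(can_top, tab), 1))  # 45.0
```

With only a 2D image, this angle would have to be inferred from shading and perspective; with depth data it is a direct geometric computation.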

Autonomous Driving

In the automotive industry, 3D computer vision is essential for self-driving cars, as it enables them to accurately perceive and understand their environment. Companies like Waymo, Cruise and Zoox are using multimodal deep learning models and advanced 3D vision technology for obstacle detection, lane tracking, and navigation, paving the way for safer and more efficient transportation. You can check this video for an interesting breakdown of how Zoox uses computer vision to solve autonomous driving.

Figure 5: 3D mapping of surrounding environment for autonomous navigation (Source).


Healthcare

Various medical applications, such as surgical assistance, diagnostics, and medical imaging, make use of 3D computer vision. For example, an anatomical visualization service¹ creates 3D models of patients’ anatomy, assisting surgeons in planning and executing procedures. During surgery, the model can be viewed and manipulated on a console, improving surgical accuracy and efficiency.

Figure 6: 3D anatomical models allowing doctors to plan and execute procedures (Source).

Aerial Imagery

Drones equipped with 3D vision capabilities can provide detailed topographical data, facilitating tasks like mapping, surveying, and environmental monitoring². They also benefit agriculture by monitoring crop health, analyzing soil conditions, and optimizing resource usage. This enables precision farming practices, leading to increased yield and more sustainable agriculture. Combining drones with 3D vision also allows for the safe inspection of infrastructure and equipment like power grids, construction sites, and oil and gas refineries³. The 3D scanned models can then be fed to a 3D object detection model to automatically flag points of interest.

Figure 7: 3D inspection of a power grid (Source).


Retail and Logistics

Retail and logistics are also experiencing the transformative power of 3D computer vision. In inventory management, 3D computer vision can accurately recognize and track individual items, even in cluttered environments, making it easier to maintain accurate stock levels and optimize warehouse organization. Furthermore, it can be integrated into optimization problems, such as minimizing the cost of packaging and shipping operations by scanning objects’ dimensions and matching them with the available packaging space (e.g. in a container).
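The packing idea just described can be sketched as a simple fit test: given an object’s scanned bounding dimensions, check whether it fits a candidate box in any axis-aligned orientation. All dimensions below are hypothetical, in centimeters.

```python
from itertools import permutations

# Toy sketch of matching a scanned object to available packaging:
# try every axis-aligned orientation of the item against the box.
def fits(item_dims, box_dims):
    """True if the item fits the box under some axis permutation."""
    return any(
        all(i <= b for i, b in zip(orientation, box_dims))
        for orientation in permutations(item_dims)
    )

scanned_item = (30.0, 12.0, 45.0)  # bounding dimensions from a 3D scan
print(fits(scanned_item, (46.0, 31.0, 13.0)))  # True: fits when rotated
print(fits(scanned_item, (40.0, 40.0, 10.0)))  # False: box is too thin
```

Production systems solve much richer bin-packing formulations across many items and containers, but they all start from the accurate dimensions that 3D scanning provides.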


In retail, the technology is being integrated into customer-facing applications, such as virtual fitting rooms and augmented reality shopping experiences, offering a more engaging and personalized experience for consumers. Apple, for instance, has integrated LiDAR⁴ into the Pro versions of its iPhones, enabling a new range of applications. The IKEA Place app, for example, allows users to visualize products in their homes before making a purchase (check it out in this video).

Generative AI has also been making its way into the 3D space. Deep learning models like pix2pix3D⁵ and Imagine 3D⁶ enable the creation of 3D representations of objects using hand-drawn labels and textual prompts, respectively. Although still in its early stages, this technology holds the potential to unlock intriguing use-cases within the retail sector.

Figure 8: IKEA Place app allows users to try out different furniture in their own space (Source).

As 3D computer vision continues to evolve, we can expect to see even more innovative applications and trends emerging across various industries. The ability to accurately perceive depth and spatial relationships not only enhances existing processes but also unlocks new opportunities for businesses to improve their operations and stay ahead of the competition.


Conclusion

As we have seen, 3D computer vision offers a wealth of advantages over traditional 2D computer vision, opening new doors for innovation and improved performance across a multitude of industries. While the manufacturing sector stands to benefit significantly from the adoption of 3D computer vision technologies, its impact extends far beyond this industry. The future of 3D computer vision is marked by expanding possibilities and emerging applications in diverse sectors such as retail, logistics, and even healthcare. By embracing this transformative technology, companies can unlock new levels of efficiency, productivity, and innovation, leveling up not only their operations but also the industries they serve.

In conclusion, the adoption of 3D computer vision is not just a technological leap, but a strategic move for forward-thinking businesses. It’s time to explore the potential of 3D computer vision solutions for your organization and stay ahead of the curve in an increasingly competitive landscape.

This first part of the blog post series served as an introduction to the world of 3D vision. Keeping in mind the pipeline illustrated in Figure 1, our upcoming post will explore the capture and storage of data in greater detail. We will examine how this data is produced and consider how the choice of sensor type may be influenced by factors such as technical requirements, environmental considerations, and business constraints.


[1] — Iris, 3D anatomical visualization service.

[2] — Parrot Drones and Autonomous Photogrammetry.

[3] — DJI Aerial Inspection of Infrastructure.

[4] — LiDAR in Apple’s iPad Pro.

[5] — Deng et al. “3D-aware Conditional Image Synthesis”. CVPR 2023.

[6] — Imagine 3D v1.2.
