RGBD 3D Scanner And AR For Eyebrow Design

In this project, I developed a system that combines an RGBD camera with a 4K RGB camera to quickly generate a 3D scan of a person’s face. Using the integrated 3D AR module, the system enables precise eyebrow design tailored to each customer’s facial features. Once the ideal eyebrow shape is designed, the software automatically creates a printable 3D model to ensure accuracy for micro-blading or eyebrow transplant procedures.

Main Attributes

  • Face 3D scanning with an RGBD sensor (single-sensor fusion)
  • Real-time AI face recognition in 3D point cloud space
  • Face distance detection and facial measurement
  • Matching of the 3D scanned model onto live point cloud data (3D AR fusion)
  • Detection of the main facial landmarks in real-time 3D point cloud space
  • Detection of marker points in real-time 3D point cloud space
  • 3D AR using only real-time point cloud registration
  • 2D/3D AR that uses accurately detected markers to design eyebrows in real time and record the data
  • Automatic 3D mold generation from the designed eyebrow data for 3D printing

Why do we need this new system?

The previous software offered a wide range of features and was generally capable of handling all necessary tasks. However, there were two fundamental issues with the eyebrow services:

1. Generating the 3D photogrammetry scan file took a considerable amount of time.
2. The AR eyebrow placement on the person’s face needed to be far more accurate to ensure precise design and proper mold alignment.

Solution

Having completed many projects with the Microsoft Kinect in 2015, developing games and interactive software, I was already familiar with RGBD camera workflows. However, limitations such as the discontinuation of the Kinect, its relatively large physical size, and its need for a separate adapter prompted me to seek alternative solutions.
After extensive research into infrared and stereo cameras, I ultimately chose a LiDAR sensor for depth measurement. Its main advantages over conventional Time-of-Flight (ToF) cameras were its compact size, availability, ease of use, higher resolution, effective range, and speed. However, I found that the LiDAR sensor introduced significant noise, which made point cloud adaptation more challenging.
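Depth noise of this kind is commonly tamed with simple temporal filtering before the data enters registration. The helper below is a minimal sketch, not the project's actual pipeline: it applies a per-pixel temporal median across a short window of depth frames, and the frame layout (nested lists, millimeters, zero meaning an invalid reading) is an assumption for illustration.

```python
from statistics import median

def temporal_median_depth(frames):
    """Per-pixel temporal median over a list of depth frames.

    frames: list of equally sized 2D lists (rows x cols) of depth values
    in millimeters; 0 marks an invalid (dropped) reading and is ignored.
    Returns a single filtered frame; pixels with no valid samples stay 0.
    """
    rows, cols = len(frames[0]), len(frames[0][0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            samples = [f[r][c] for f in frames if f[r][c] > 0]
            if samples:
                out[r][c] = median(samples)
    return out
```

A median (rather than a mean) is the usual choice here because LiDAR outliers are large and one-sided, and a mean would smear them into neighboring depth values.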

I began by building a 3D scanner around this depth sensor. Initially, I implemented a conventional fusion method, but this approach required the operator to rotate the camera around the person, which was not user-friendly. To address this, I designed a new scanning system that only requires the person to rotate their head horizontally by ±40 degrees and vertically by ±15 degrees.
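One way to turn that head-rotation range into a small, well-distributed set of capture frames is to bin the tracked head pose and keep at most one frame per bin. The sketch below assumes a per-frame (yaw, pitch) estimate is available; the bin counts (5 × 3, i.e. at most 15 frames) and the ranges are illustrative choices, not the system's actual parameters.

```python
def select_keyframes(poses, yaw_bins=5, pitch_bins=3,
                     yaw_range=(-40, 40), pitch_range=(-15, 15)):
    """Pick at most one frame index per (yaw, pitch) bin.

    poses: list of (yaw_deg, pitch_deg) head orientations, one per frame.
    Returns sorted frame indices covering the angular grid.
    """
    def bin_of(v, lo, hi, n):
        if v < lo or v > hi:
            return None          # pose outside the scanning range
        # clamp the upper edge into the last bin
        return min(int((v - lo) / (hi - lo) * n), n - 1)

    chosen = {}
    for i, (yaw, pitch) in enumerate(poses):
        by = bin_of(yaw, *yaw_range, yaw_bins)
        bp = bin_of(pitch, *pitch_range, pitch_bins)
        if by is not None and bp is not None and (by, bp) not in chosen:
            chosen[(by, bp)] = i
    return sorted(chosen.values())
```

The grid guarantees angular coverage without the operator having to judge it by eye: frames outside the allowed range, or duplicating an already-covered bin, are simply skipped.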

Building such a system was quite challenging. After extensive study of specialized algorithms—including point cloud registration, point cloud fusion, mesh reconstruction, and various mathematical and point cloud matching techniques—I implemented many of these methods to identify the most effective solution. However, the need for greater processing speed ultimately led me to use the Point Cloud Library (PCL), which is written in C++.
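To illustrate the registration step at the heart of this pipeline, here is a minimal 2D rigid ICP: each iteration finds nearest-neighbour correspondences, then applies the closed-form least-squares rotation and translation (the 2D Kabsch solution). The real system registers 3D point clouds through PCL, so treat this as a sketch of the idea rather than the production code.

```python
import math

def icp_2d(source, target, iters=20):
    """Align 2D `source` points to `target` with rigid ICP.

    Each iteration: nearest-neighbour matching, then the closed-form
    least-squares rotation/translation for those matches.
    Returns the transformed source points (same order as `source`).
    """
    pts = [list(p) for p in source]
    for _ in range(iters):
        # nearest neighbour in target for every source point
        pairs = []
        for p in pts:
            q = min(target, key=lambda t: (t[0]-p[0])**2 + (t[1]-p[1])**2)
            pairs.append((p, q))
        # centroids of both sides of the correspondence set
        cpx = sum(p[0] for p, _ in pairs) / len(pairs)
        cpy = sum(p[1] for p, _ in pairs) / len(pairs)
        cqx = sum(q[0] for _, q in pairs) / len(pairs)
        cqy = sum(q[1] for _, q in pairs) / len(pairs)
        # closed-form optimal rotation angle for centered points
        s_cos = s_sin = 0.0
        for p, q in pairs:
            px, py = p[0] - cpx, p[1] - cpy
            qx, qy = q[0] - cqx, q[1] - cqy
            s_cos += px * qx + py * qy
            s_sin += px * qy - py * qx
        th = math.atan2(s_sin, s_cos)
        c, s = math.cos(th), math.sin(th)
        # rotate about the source centroid, then move onto the target centroid
        pts = [[c*(p[0]-cpx) - s*(p[1]-cpy) + cqx,
                s*(p[0]-cpx) + c*(p[1]-cpy) + cqy] for p in pts]
    return pts
```

The same structure—correspond, solve, transform, repeat—carries over to 3D, where the rotation is recovered via SVD instead of a single angle and the nearest-neighbour search uses a k-d tree for speed.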

To use this library from C#, I developed a custom wrapper for this project, exposing the essential algorithms, and built the 3D face scanner on top of it. The scanner requires only 15 frames of the face from different angles, so the entire scanning process completes in just 2 minutes. The result is a scanner that is not only simple and fast to use but also delivers high-quality, satisfying results.

After developing the 3D scanner, the next challenge was achieving a highly accurate AR facial overlay that met the strict accuracy requirements I had set. My initial approach was to use the RGBD camera so that the scanned point cloud could be registered in real time against the live point cloud data. Despite implementing the necessary algorithms and applying various smoothing techniques to reduce noise, I still observed a 2.5 mm error at horizontal angles greater than 40 degrees.

While this error was significantly lower than that of methods relying solely on artificial intelligence, my goal was an error of less than one millimeter. To reach this level of precision, I combined multiple approaches, including artificial intelligence, point tracking, and marker tracking. After considerable effort, integrating the RGBD camera with marker detection achieved the desired results. This setup also let me compensate for the low RGB resolution of the depth camera by adding a 4K RGB camera.
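The key geometric step in fusing a 2D marker detection with depth data is back-projecting the marker's pixel through the pinhole camera model, using the measured depth at that pixel. A minimal sketch of that step; the intrinsics (fx, fy, cx, cy) below are made-up values, not the actual camera calibration.

```python
def backproject(u, v, depth_mm, fx, fy, cx, cy):
    """Lift a pixel (u, v) with measured depth into camera-space 3D.

    Standard pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    fx, fy are focal lengths in pixels; cx, cy the principal point.
    Returns (X, Y, Z) in the depth camera's frame, in millimeters.
    """
    z = depth_mm
    return ((u - cx) * z / fx, (v - cy) * z / fy, z)
```

Once a marker is lifted to 3D this way, it can be re-projected into the 4K RGB camera (after applying the extrinsic transform between the two cameras) so the high-resolution image drives the final overlay while the depth camera anchors its position in space.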

Final Workflow

After the customer’s face is 3D scanned, the RGBD camera identifies the precise positions of the markers. Using these points together with the 4K RGB camera, the final AR overlay is rendered with an error margin of less than one millimeter. I also developed additional tools to make the system more user-friendly, such as proportion rulers that measure the eyebrows against Fibonacci (golden-ratio) proportions to ensure symmetry.
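The article does not spell out the ruler logic, so the following is only a plausible sketch of a golden-ratio proportion helper: splitting a brow line at its two golden-section points, e.g. as candidate arch positions. The function name and its use are hypothetical.

```python
PHI = (1 + 5 ** 0.5) / 2  # golden ratio, ~1.618

def golden_sections(start, end):
    """Split the segment [start, end] at its two golden-section points.

    Returns the points at 1/phi (~61.8%) and 1 - 1/phi (~38.2%) of the
    way from start; mirrored pairs like these can serve as symmetric
    reference marks on the left and right brows.
    """
    length = end - start
    a = start + length / PHI
    b = start + length * (1 - 1 / PHI)
    return a, b
```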


The system also automatically generates molds from the designed eyebrows that can be printed on a 3D printer.
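At its core, mold generation means extruding the closed 2D eyebrow outline into a solid and exporting it in a printable format. A minimal sketch, assuming a convex counter-clockwise outline (so a simple fan triangulation suffices) and ASCII STL output; the production meshing is certainly more involved.

```python
def extrude_to_stl(outline, height, name="eyebrow_mold"):
    """Extrude a convex, counter-clockwise 2D outline into an ASCII STL solid.

    outline: list of (x, y) vertices; height: extrusion thickness.
    Triangulation: fan for the top/bottom caps, two triangles per side wall.
    Normals are emitted as (0, 0, 0); most slicers recompute them.
    """
    tris = []
    n = len(outline)
    # bottom cap (z=0, reversed winding so it faces down) and top cap
    for i in range(1, n - 1):
        a, b, c = outline[0], outline[i], outline[i + 1]
        tris.append(((*a, 0), (*c, 0), (*b, 0)))                 # bottom
        tris.append(((*a, height), (*b, height), (*c, height)))  # top
    # side walls: one quad (two triangles) per outline edge
    for i in range(n):
        (x0, y0), (x1, y1) = outline[i], outline[(i + 1) % n]
        tris.append(((x0, y0, 0), (x1, y1, 0), (x1, y1, height)))
        tris.append(((x0, y0, 0), (x1, y1, height), (x0, y0, height)))
    lines = [f"solid {name}"]
    for t in tris:
        lines.append("  facet normal 0 0 0")
        lines.append("    outer loop")
        for v in t:
            lines.append(f"      vertex {v[0]} {v[1]} {v[2]}")
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)
```

For an n-vertex outline this produces 2(n − 2) cap triangles plus 2n wall triangles, a watertight solid any slicer can consume.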

I am very pleased that, after considerable effort and personal discipline, this project not only achieved its goals—satisfying the employer, customers, and end-users—but also significantly enhanced my own scientific knowledge and experience.

Currently, this system is operational in two main branches and several affiliated branches of the company. To date, it has been used for eyebrow design, micro-blading, and transplantation services for at least 1,500 individuals—a number that continues to grow every day.
