RGBD 3D Scanner and AR for Eyebrow Design

In this project, I built a system that combines an RGBD camera and a 4K RGB camera to produce a 3D scan of a person's face in a very short time, and then uses a 3D AR module to design eyebrows that suit the customer's face. Once the eyebrows are designed, a printable 3D mold is generated automatically for an accurate physical micro-blading or eyebrow transplant procedure.

Main Attributes

  • 3D face scanner with a single RGBD sensor (single-sensor fusion)
  • AI face recognition in real-time 3D point-cloud space
  • Face distance detection and face measurement
  • Matching of the scanned 3D model onto live point-cloud data (3D AR fusion)
  • Detection of the main facial landmark points in real-time 3D point-cloud space
  • Detection of marker points in real-time 3D point-cloud space
  • 3D AR using only real-time point-cloud registration
  • 2D/3D AR that uses accurately detected markers to design eyebrows in real time and record the data
  • Automatic 3D mold generation from the designed eyebrow data for 3D printing

Downsides of the Previous System

The previous software had many capabilities and, in general, could handle all the necessary work, but it had two fundamental problems for eyebrow services:

1- The 3D photogrammetry scan file took a long time to be ready.
2- The AR eyebrows needed to be displayed on the person's face far more accurately in order to design and precisely outline the mold.

Solution

Since I had done many projects in 2015 using Microsoft Kinect to build games and interactive software, I had a good command of RGBD cameras and their workflow. However, Kinect's limitations, such as its discontinued production, relatively large physical size, and the need for a separate adapter, led me to look at other cameras. After much research and a review of infrared and stereo cameras, I chose a LiDAR sensor to measure depth. Its main advantages over ToF sensors were its small size, availability, ease of use for the end user, higher resolution, effective range, and high speed. However, I found that this sensor produces significant noise, which made working with the point clouds harder.

So I started by building a 3D scanner with this depth sensor. At first, I implemented a Fusion-style method for 3D scanning, but because it required the user to move the camera around the person, it was difficult for the end user to operate. That is why I designed a different scanning system that only requires the person to rotate their face about ±40 degrees horizontally and ±15 degrees vertically.

Building such a system was very difficult. After extensive study of the relevant algorithms (point-cloud registration, point-cloud fusion, mesh reconstruction), the underlying mathematics, and point-cloud matching methods, I implemented many of them to find the best approach, but the need for more speed led me to the Point Cloud Library (PCL), which is written in C++.
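To give a concrete idea of the registration step, here is a minimal sketch of pairwise alignment with PCL. ICP is used here only as an example of point-cloud registration, and the file names and parameter values are illustrative rather than the production settings:

```cpp
#include <iostream>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/registration/icp.h>

int main()
{
    using Cloud = pcl::PointCloud<pcl::PointXYZ>;
    Cloud::Ptr frameA(new Cloud), frameB(new Cloud);
    pcl::io::loadPCDFile("face_frame_a.pcd", *frameA);   // illustrative file names
    pcl::io::loadPCDFile("face_frame_b.pcd", *frameB);

    // Downsample both frames so the alignment runs fast on noisy depth data.
    Cloud::Ptr dsA(new Cloud), dsB(new Cloud);
    pcl::VoxelGrid<pcl::PointXYZ> grid;
    grid.setLeafSize(0.002f, 0.002f, 0.002f);             // 2 mm voxels
    grid.setInputCloud(frameA); grid.filter(*dsA);
    grid.setInputCloud(frameB); grid.filter(*dsB);

    // Align frame B onto frame A with point-to-point ICP.
    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(dsB);
    icp.setInputTarget(dsA);
    icp.setMaxCorrespondenceDistance(0.01);                // ignore matches beyond 1 cm
    icp.setMaximumIterations(50);

    Cloud aligned;
    icp.align(aligned);

    if (icp.hasConverged())
    {
        std::cout << "fitness: " << icp.getFitnessScore() << "\n"
                  << icp.getFinalTransformation() << "\n";
    }
    return 0;
}
```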

To use this library from C#, I had to create a dedicated wrapper. So I developed a custom wrapper for this project that exposes many of the required algorithms, and implemented the 3D face scanner on top of it.
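The wrapper itself is not shown here, but the general pattern can be sketched as a native library exposing flat C functions that the C# side calls through P/Invoke. The function name, signature, and the choice to wrap a voxel-grid downsample are purely illustrative:

```cpp
// Compiled into a native DLL that the C# application loads via DllImport.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>

extern "C"
{
    // Downsamples a flat XYZ array (x0,y0,z0,x1,...) with a voxel grid and writes
    // the result into outPoints (caller-allocated, capacity outCapacity points).
    // Returns the number of points written.
    __declspec(dllexport) int __cdecl DownsampleCloud(const float* points, int count,
                                                      float leafSize,
                                                      float* outPoints, int outCapacity)
    {
        pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
        cloud->resize(count);
        for (int i = 0; i < count; ++i)
            (*cloud)[i] = pcl::PointXYZ(points[3 * i], points[3 * i + 1], points[3 * i + 2]);

        pcl::PointCloud<pcl::PointXYZ> filtered;
        pcl::VoxelGrid<pcl::PointXYZ> grid;
        grid.setInputCloud(cloud);
        grid.setLeafSize(leafSize, leafSize, leafSize);
        grid.filter(filtered);

        int n = static_cast<int>(filtered.size());
        if (n > outCapacity) n = outCapacity;
        for (int i = 0; i < n; ++i)
        {
            outPoints[3 * i]     = filtered[i].x;
            outPoints[3 * i + 1] = filtered[i].y;
            outPoints[3 * i + 2] = filtered[i].z;
        }
        return n;
    }
}

// The matching C# declaration would look roughly like:
// [DllImport("pcl_wrapper.dll", CallingConvention = CallingConvention.Cdecl)]
// static extern int DownsampleCloud(float[] points, int count, float leafSize,
//                                   float[] outPoints, int outCapacity);
```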

The scanner only needs 15 frames of the face from different angles, and the whole scanning process takes about two minutes. In the end, besides being very simple and fast to use, the scanner also produced satisfying scan quality.
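Once the registered frames have been merged into a single cloud, a surface still has to be reconstructed from it. A minimal sketch of that mesh-reconstruction step with PCL follows; Poisson reconstruction is used here purely as an example of the technique, and the file names and parameters are placeholders:

```cpp
#include <pcl/io/pcd_io.h>
#include <pcl/io/ply_io.h>
#include <pcl/point_types.h>
#include <pcl/common/io.h>
#include <pcl/features/normal_3d_omp.h>
#include <pcl/search/kdtree.h>
#include <pcl/surface/poisson.h>

int main()
{
    // Merged, registered face cloud produced by the scanning stage.
    pcl::PointCloud<pcl::PointXYZ>::Ptr merged(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::io::loadPCDFile("face_merged.pcd", *merged);     // illustrative file name

    // Estimate normals; surface reconstruction needs oriented normals.
    pcl::NormalEstimationOMP<pcl::PointXYZ, pcl::Normal> ne;
    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
    pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
    ne.setInputCloud(merged);
    ne.setSearchMethod(tree);
    ne.setRadiusSearch(0.01);                              // 1 cm neighbourhood
    ne.compute(*normals);

    pcl::PointCloud<pcl::PointNormal>::Ptr withNormals(new pcl::PointCloud<pcl::PointNormal>);
    pcl::concatenateFields(*merged, *normals, *withNormals);

    // Reconstruct a closed surface from the oriented point cloud.
    pcl::Poisson<pcl::PointNormal> poisson;
    poisson.setDepth(9);
    poisson.setInputCloud(withNormals);

    pcl::PolygonMesh mesh;
    poisson.reconstruct(mesh);
    pcl::io::savePLYFile("face_mesh.ply", mesh);
    return 0;
}
```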

After building the 3D scanner, it was time to display the face with accurate AR, and reaching the target accuracy was a huge challenge.
My idea was to use the RGBD camera so that the scanned point cloud would be registered in real time against the live point cloud (matching the scan to the live point-cloud data).
I implemented the required algorithms and reduced the noise with various smoothing filters, but I still saw a 2.5 mm error at angles greater than 40 degrees horizontally. That error was already far smaller than with the pure AI-based approach, but the goal was an error of less than one millimeter.
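For illustration, a noise-reduction pass of the kind mentioned above could look roughly like this in PCL, combining a statistical outlier filter with moving-least-squares smoothing (the file names and parameter values are placeholders, not the production settings):

```cpp
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/filters/statistical_outlier_removal.h>
#include <pcl/search/kdtree.h>
#include <pcl/surface/mls.h>

int main()
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr raw(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::io::loadPCDFile("live_frame.pcd", *raw);          // illustrative file name

    // 1) Drop isolated noise points that sit far from their neighbours.
    pcl::PointCloud<pcl::PointXYZ>::Ptr denoised(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::StatisticalOutlierRemoval<pcl::PointXYZ> sor;
    sor.setInputCloud(raw);
    sor.setMeanK(30);                  // neighbours used for the mean-distance statistic
    sor.setStddevMulThresh(1.0);       // reject points beyond 1 standard deviation
    sor.filter(*denoised);

    // 2) Smooth the remaining surface with moving least squares.
    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
    pcl::MovingLeastSquares<pcl::PointXYZ, pcl::PointXYZ> mls;
    mls.setInputCloud(denoised);
    mls.setSearchMethod(tree);
    mls.setSearchRadius(0.008);        // 8 mm smoothing radius
    mls.setPolynomialOrder(2);

    pcl::PointCloud<pcl::PointXYZ> smoothed;
    mls.process(smoothed);

    pcl::io::savePCDFileBinary("live_frame_smoothed.pcd", smoothed);
    return 0;
}
```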
Since that was still above the one-millimeter target, I combined this method with other techniques to reach the desired accuracy: a mix of artificial intelligence, point tracking, marker tracking, and more.
After much effort, combining the RGBD camera with marker detection gave me the result I wanted. This combination also let me compensate for the depth camera's weak RGB resolution by adding a separate 4K RGB camera.
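The marker-plus-4K idea can be sketched as follows: detect fiducial markers in the 4K frame, recover the 4K camera's pose from the markers' known 3D positions, and project the design points into the image. This sketch assumes OpenCV's classic aruco contrib module and hard-coded placeholder coordinates; the actual markers, camera intrinsics, and coordinate values in the product are different:

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/aruco.hpp>
#include <vector>

int main()
{
    cv::Mat frame4k = cv::imread("frame_4k.png");                   // illustrative input

    // Intrinsics of the 4K camera (placeholder values; obtained by calibration).
    cv::Mat K = (cv::Mat_<double>(3, 3) << 3000, 0, 1920,
                                           0, 3000, 1080,
                                           0, 0, 1);
    cv::Mat dist = cv::Mat::zeros(5, 1, CV_64F);

    // 1) Detect markers in the 4K image (ArUco is used here only as an example).
    cv::Ptr<cv::aruco::Dictionary> dict =
        cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);
    std::vector<int> ids;
    std::vector<std::vector<cv::Point2f>> corners;
    cv::aruco::detectMarkers(frame4k, dict, corners, ids);
    if (ids.empty()) return 1;

    // 2) 3D positions of the same marker corners, e.g. measured in the RGBD point
    //    cloud (hard-coded here just to keep the sketch self-contained).
    std::vector<cv::Point3f> objectPoints = { {0.00f, 0.00f, 0.40f},
                                              {0.02f, 0.00f, 0.40f},
                                              {0.02f, 0.02f, 0.41f},
                                              {0.00f, 0.02f, 0.41f} };
    std::vector<cv::Point2f> imagePoints = corners[0];

    cv::Mat rvec, tvec;
    cv::solvePnP(objectPoints, imagePoints, K, dist, rvec, tvec);

    // 3) Project a designed eyebrow point (in the same 3D coordinate frame)
    //    into the 4K image and draw it as part of the overlay.
    std::vector<cv::Point3f> designPoints = { {0.01f, -0.03f, 0.40f} };
    std::vector<cv::Point2f> projected;
    cv::projectPoints(designPoints, rvec, tvec, K, dist, projected);
    cv::circle(frame4k, projected[0], 4, cv::Scalar(0, 255, 0), -1);

    cv::imwrite("overlay.png", frame4k);
    return 0;
}
```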

Final Workflow

After the customer's face is 3D-scanned, the marker points are identified by the RGBD camera. Then, using these points together with the 4K RGB camera, the design is overlaid and the final AR view is rendered with an error of less than one millimeter.
I also developed more tools to make this system easier to work with, such as:

  • Proportion rulers that measure the eyebrows using Fibonacci (golden-ratio) proportions and check eyebrow symmetry (a small sketch of this calculation follows this list).
  • Automatic generation of molds from the designed eyebrows, ready for printing on a 3D printer.
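The proportion ruler is essentially the golden ratio applied to eyebrow landmarks. As a purely illustrative sketch (the landmark names and the specific rule are placeholders, not the exact formula used in the product), the arch peak can be placed at the golden-ratio point between the eyebrow head and tail, and symmetry can be checked by mirroring one eyebrow across the face's center line:

```cpp
#include <cmath>
#include <cstdio>

struct Point2 { double x, y; };

// Golden ratio used by the proportion rulers.
constexpr double kPhi = 1.6180339887;

// Illustrative rule: place the arch peak at the golden-ratio point
// between the eyebrow head (inner end) and tail (outer end).
Point2 archPeak(const Point2& head, const Point2& tail)
{
    const double t = 1.0 / kPhi;                 // ~0.618 of the way from head to tail
    return { head.x + (tail.x - head.x) * t,
             head.y + (tail.y - head.y) * t };
}

// Symmetry check: mirror a right-eyebrow point across the vertical center
// line of the face and measure its distance to the matching left-eyebrow point.
double symmetryError(const Point2& leftPt, const Point2& rightPt, double faceCenterX)
{
    const Point2 mirrored{ 2.0 * faceCenterX - rightPt.x, rightPt.y };
    return std::hypot(mirrored.x - leftPt.x, mirrored.y - leftPt.y);
}

int main()
{
    const Point2 head{ 10.0, 0.0 }, tail{ 60.0, 5.0 };   // millimetres, placeholder values
    const Point2 peak = archPeak(head, tail);
    std::printf("arch peak: (%.1f, %.1f) mm\n", peak.x, peak.y);

    const double err = symmetryError({ -35.0, 2.0 }, { 36.0, 2.5 }, 0.0);
    std::printf("symmetry error: %.2f mm\n", err);
    return 0;
}
```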
I am very happy that, after a lot of effort and personal discipline, the project succeeded and satisfied the employer, the customers, and the end users, while also greatly expanding my own knowledge and experience.

This system is currently active in two main branches and several affiliated branches of the company. So far, it has been used to provide design, micro-blading, and eyebrow transplantation services to at least 1,500 people, and that number grows every day.
