Purdue University Graduate School

Multi-Scale, Multi-Modal, High-Speed 3D Shape Measurement

posted on 2019-06-10, 17:58, authored by Yatong An
With robots expanding their applications into more and more scenarios, practical problems from different scenarios are challenging current 3D measurement techniques. For instance, infrastructure inspection robots need large-scale, high-spatial-resolution 3D data for crack and defect detection; medical robots need 3D data well registered with temperature information; and warehouse robots need multi-resolution 3D shape measurement to adapt to different tasks. In the past decades, much progress has been made in improving the performance of 3D shape measurement methods. Yet measurement scale, measurement speed, and the fusion of multiple modalities remain vital aspects to improve before robots can have a more complete perception of a real scene. In this dissertation, we focus on the digital fringe projection technique, which typically achieves high-accuracy 3D data, and expand its capability to complicated robot applications by 1) extending the measurement scale, 2) registering with multi-modal information, and 3) improving the measurement speed of the digital fringe projection technique.

The measurement scale of the digital fringe projection technique has mainly been limited to small scales, from several centimeters to tens of centimeters, due to the lack of a flexible and convenient calibration method for a large-scale digital fringe projection system. In this study, we first developed a flexible and convenient large-scale calibration method and then extended the measurement scale of the digital fringe projection technique to several meters. The meter scale is needed in many large-scale robot applications, including large infrastructure inspection. Our proposed method includes two steps: 1) accurately calibrate the intrinsic parameters (i.e., focal lengths and principal points) with a small calibration board at close range, where both the camera and the projector are out of focus; and 2) calibrate the extrinsic parameters (rotation and translation) from camera to projector with the assistance of a low-accuracy large-scale 3D sensor (e.g., Microsoft Kinect). This two-step strategy avoids fabricating a large yet accurate calibration target, which is usually expensive and inconvenient for pose adjustments. With a small calibration board and a low-cost 3D sensor, we calibrated a large-scale 3D shape measurement system with a FOV of 1120 x 1900 x 1000 mm^3 and verified the correctness of our method.
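The second step above reduces to estimating a rigid transform from 3D-3D point correspondences supplied by the low-accuracy sensor. As a minimal sketch of that sub-problem, the snippet below solves for rotation and translation with the standard SVD-based (Kabsch) solution; the function name and the numpy-only pipeline are illustrative assumptions, not the dissertation's actual implementation, which may differ in how correspondences are obtained and refined.

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate R, t such that dst ~= R @ src + t (Kabsch/Procrustes).

    src, dst: (N, 3) arrays of corresponding 3D points, e.g. points seen
    by the low-accuracy depth sensor and by the calibrated camera.
    """
    c_src = src.mean(axis=0)
    c_dst = dst.mean(axis=0)
    # 3x3 cross-covariance of the centered point sets
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    # guard against a reflection (det = -1) solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

In practice the correspondences from a low-cost sensor are noisy, so a least-squares solution over many points (as here) or a robust variant would be used rather than a minimal point set.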

Multi-modal information is required in applications such as medical robots, which may need both to capture the 3D geometry of objects and to monitor their temperature. To give robots a more complete perception of the scene, we further developed a hardware system that achieves real-time 3D geometry and temperature measurement. Specifically, we proposed a holistic approach to calibrate both a structured light system and a thermal camera under exactly the same world coordinate system, even though the two sensors do not share the same wavelength, along with a computational framework to determine the sub-pixel corresponding temperature for each 3D point and to discard occluded points. Since the 2D thermal imaging and 3D visible-light imaging systems do not share the same spectrum, they can sense simultaneously in real time. Our hardware system achieved real-time 3D geometry and temperature measurement at 26 Hz with 768 x 960 points per frame.
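The core of the per-point temperature mapping is projecting each reconstructed 3D point into the thermal image and sampling at sub-pixel locations. The sketch below illustrates that step with a distortion-free pinhole model and bilinear interpolation; the function name and interface are assumptions for illustration, and the occlusion test the dissertation describes is simplified here to a visibility/bounds check.

```python
import numpy as np

def sample_temperature(points, K, R, t, thermal_img):
    """Project 3D points (camera coords) into the thermal image and sample
    temperature with bilinear (sub-pixel) interpolation.

    points: (N, 3); K: 3x3 thermal intrinsics; R, t: camera-to-thermal
    extrinsics. Returns (N,) temperatures, NaN for points that fall
    outside the image or behind the thermal camera.
    """
    p = points @ R.T + t              # transform into the thermal frame
    uv = p @ K.T                      # pinhole projection (no distortion)
    u = uv[:, 0] / uv[:, 2]
    v = uv[:, 1] / uv[:, 2]
    h, w = thermal_img.shape
    temps = np.full(len(points), np.nan)
    valid = (u >= 0) & (u <= w - 1) & (v >= 0) & (v <= h - 1) & (p[:, 2] > 0)
    u0 = np.floor(u[valid]).astype(int).clip(0, w - 2)
    v0 = np.floor(v[valid]).astype(int).clip(0, h - 2)
    du, dv = u[valid] - u0, v[valid] - v0
    I = thermal_img
    temps[valid] = ((1 - du) * (1 - dv) * I[v0, u0]
                    + du * (1 - dv) * I[v0, u0 + 1]
                    + (1 - du) * dv * I[v0 + 1, u0]
                    + du * dv * I[v0 + 1, u0 + 1])
    return temps
```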

In dynamic applications, where the measured object or the 3D sensor may be in motion, measurement speed becomes an important factor. Previously, additional fringe patterns were projected for absolute phase unwrapping, which slowed down the measurement. To achieve higher measurement speed, we developed a method that unwraps the phase pixel by pixel using solely the geometric constraints of the structured light system, without requiring additional image acquisition. Specifically, an artificial absolute phase map $\Phi_{min}$, at a given virtual depth plane $z = z_{min}$, is created from the geometric constraints of the calibrated structured light system, such that the wrapped phase can be unwrapped pixel by pixel by referring to $\Phi_{min}$. Since $\Phi_{min}$ is defined in the projector space, the phase obtained from this method is an absolute phase for each pixel. Experimental results demonstrate the success of this novel absolute-phase unwrapping method. However, the geometric-constraint-based phase unwrapping with a virtual plane works only within a certain depth range. This limitation causes difficulties in two measurement scenarios: measuring an object with large depth variation, and measuring a dynamic object that may move beyond the depth range. To address this limitation, we further propose to take advantage of an additional 3D scanner and use its external information to extend the maximum measurement range of the pixel-wise phase unwrapping method. The additional 3D scanner provides a more detailed reference phase map $\Phi_{ref}$ that assists absolute phase unwrapping without the depth constraint. Experiments demonstrate that our method, assisted by an additional 3D scanner, works over a large depth range, and that the maximum speed of the low-cost 3D scanner is not necessarily an upper bound on the speed of the structured light system. Assisted by a Kinect V2, our structured light system achieved 53 Hz at a resolution of 1600 x 1000 pixels when measuring dynamic objects moving over a large depth range.
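The pixel-wise unwrapping rule can be sketched as follows: for each pixel, the fringe order $k$ is the smallest integer that lifts the wrapped phase to or above $\Phi_{min}$, so the absolute phase lies in $[\Phi_{min}, \Phi_{min} + 2\pi)$. A minimal numpy sketch of this rule (the function name is illustrative; building $\Phi_{min}$ itself requires the calibrated system geometry, which is assumed given here):

```python
import numpy as np

def unwrap_with_min_phase(phi_wrapped, Phi_min):
    """Pixel-wise absolute phase unwrapping against an artificial minimum
    phase map Phi_min (defined at the virtual plane z = z_min).

    phi_wrapped: wrapped phase in [0, 2*pi); Phi_min: same shape.
    Each pixel is unwrapped independently, so no additional patterns
    (and no neighborhood information) are needed.
    """
    k = np.ceil((Phi_min - phi_wrapped) / (2 * np.pi))  # fringe order
    return phi_wrapped + 2 * np.pi * k
```

This recovers the correct absolute phase only while the true phase stays within one $2\pi$ band above $\Phi_{min}$, which is exactly the depth-range limitation the reference map $\Phi_{ref}$ from the additional 3D scanner is introduced to remove.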

In summary, we significantly advanced 3D shape measurement technology toward a more complete robot perception of the scene by enhancing the digital fringe projection technique in measurement scale (space domain), measurement speed (time domain), and fusion with other modalities. This research can potentially enable robots to understand the scene well enough for more complicated tasks, and can broadly impact many other academic studies and industrial practices.


Degree Type

  • Doctor of Philosophy


Department

  • Mechanical Engineering

Campus location

  • West Lafayette

Advisor/Supervisor/Committee Chair

Song Zhang

Additional Committee Member 2

Xinyan Deng

Additional Committee Member 3

David Cappelleri

Additional Committee Member 4

Gary Cheng