WO2007015059A1 - Method and system for three-dimensional data capture - Google Patents

Method and system for three-dimensional data capture

Info

Publication number
WO2007015059A1
Authority
WO
WIPO (PCT)
Prior art keywords
shapes
array
projected
scene
training
Application number
PCT/GB2006/002715
Other languages
French (fr)
Inventor
James Paterson
Andrew Fitzgibbon
Original Assignee
Isis Innovation Limited
Application filed by Isis Innovation Limited filed Critical Isis Innovation Limited
Publication of WO2007015059A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/254 Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/145 Illumination specially adapted for pattern recognition, e.g. using gratings


Abstract

There is described a method of obtaining three-dimensional data relating to a physical scene, comprising (a) projecting a predetermined two-dimensional finite array of shapes onto the scene, the projected array having uniqueness properties in at least one dimension thereof; (b) capturing an image of the array projected onto the scene; (c) deriving correspondences between the shapes in the captured image to the finite array of projected shapes, based upon the uniqueness properties; and (d) obtaining three-dimensional data points from the correspondence between the projected array and the captured image array. There is also described a system for obtaining three-dimensional data relating to a physical scene.

Description

METHOD AND SYSTEM FOR THREE-DIMENSIONAL DATA CAPTURE
FIELD OF THE INVENTION
The present invention relates to a method and system for obtaining three-dimensional data relating to a physical scene.
BACKGROUND OF THE INVENTION
Three-dimensional capture techniques are used to obtain three-dimensional data relating to a physical scene or object based on two-dimensional images of the scene or object. These techniques are becoming increasingly important in the field of computer graphics for applications such as virtual reality and the film industry. Three-dimensional capture techniques can be conveniently categorised into techniques designed for the reconstruction of static scenes, and those designed to capture moving objects (dynamic scenes). Furthermore, some systems provide near instantaneous (real-time) reconstruction for continual feedback, whilst others rely on offline processing of a captured image sequence.
In some current schemes for three-dimensional capture, multiple images of the scene must be captured under different illuminations. The requirement for multiple images means that corresponding image points must be identified in each of the multiple images (i.e. the stereo correspondence problem). Furthermore, the need for multiple images with different illuminations imposes the limitation that the scene must remain static or move very slowly during capture. Thus, such techniques are usually only suitable for the reconstruction of static scenes. Furthermore, the use of high resolution capture devices, such as current digital stills cameras, is precluded because of the long delay between exposures.
It is known in the art to employ synchronised stereo cameras which view a structured light pattern projected onto a scene to allow three-dimensional capture of dynamic scenes. In such a scheme, the stereo correspondence problem is simplified because the structured light pattern codes the scene surface points nearly uniquely. However, the difficulty of generating a truly unique coding for every surface point using bar codes or random dots means that strategies such as space-time correlation must be employed to resolve the inevitable ambiguities.
Many of the recent schemes for dynamic capture make use of stripe encoding, where a series of horizontal or vertical lines is projected. However, a difficulty with such schemes is that the codes are of necessity very limited. In order to combat ambiguities caused by surface normal and reflectance variations, only a small number of distinct codes can be formed, so that stripe encoding schemes are prone to considerable ambiguity when obtaining three-dimensional data from only a single two-dimensional image (so-called "one-shot operation").
Other known schemes involve projecting a pattern consisting of a limited number of differently coloured spots, each of which can be uniquely identified via its near neighbours. However, the projection of coloured spots makes it difficult to acquire data from coloured objects.
The present invention seeks to address these and other such problems with the art.
SUMMARY OF THE INVENTION
According to a first aspect of the present invention, there is provided a method of obtaining three-dimensional data relating to a physical scene, comprising (a) projecting a predetermined two-dimensional finite array of shapes onto the scene, the projected array having uniqueness properties in at least one dimension thereof; (b) capturing an image of the array projected onto the scene; (c) deriving correspondences between the shapes in the captured image to the finite array of projected shapes, based upon the uniqueness properties; and (d) obtaining three-dimensional data points from the correspondence between the projected array and the captured image array.
The method therefore enables three-dimensional data relating to the scene to be captured from just one two-dimensional image. The method may be used to capture dynamic scenes since there is no need for camera synchronisation and the method is not limited by slow shutter speeds. Furthermore, the method requires only a single camera and projector so the system is relatively cheap as well as being easy to set up. The step of projecting an array of shapes onto the scene means that data may be acquired from coloured objects. A further benefit of the current invention is that high resolution data is acquired around the edges of each shape.
Advantageously, the projected array has uniqueness properties along mutually non-parallel lines. More advantageously, the mutually non-parallel lines are epipolar lines. By projecting an array of shapes having uniqueness properties along epipolar lines of the camera-projector system, the present invention is able to powerfully disambiguate the correspondence problem (i.e. the array need only have uniqueness properties in one direction).
Advantageously, the method further comprises obtaining calibration data. More advantageously, the calibration data comprises a fundamental matrix. Still more advantageously, the step of obtaining calibration data further comprises resolving a projective ambiguity.
Advantageously, the step (c) further comprises detecting edges of the shapes from the captured image array. More advantageously, the step of detecting edges of the shapes comprises determining edgels corresponding to intensity gradients within the captured image. Still more advantageously, the step (c) further comprises representing the edges of the imaged shapes as shape vectors. More advantageously again, the step (c) further comprises classifying the shapes using the shape vectors.
Advantageously, the method further comprises projecting training arrays of shapes onto a training scene; capturing training images of the training arrays projected onto the training scene; and comparing the training images with the training arrays to obtain training data. More advantageously, the step (c) further comprises classifying the shapes using the training data.
Advantageously, the step (c) further comprises rectifying the projected array and the captured image.
Advantageously, the step (c) further comprises grouping the imaged shapes according to respective lines of shapes in the projected image, the lines of projected shapes being oriented along lines having uniqueness properties. More advantageously, the method further comprises ordering the groups of shapes. Still more advantageously, the method further comprises aligning the ordered groups of imaged shapes with respective lines of projected shapes using the uniqueness properties. In a preferred embodiment, dynamic programming is used to align the ordered groups of imaged shapes with respective lines of projected shapes. Advantageously, the three-dimensional data points are obtained by triangulation.
Advantageously, the method further comprises regularising the three-dimensional data points using a weak smoothness constraint. Advantageously, the shapes are selected from the group consisting of circles, triangles, squares, diamonds and stars.
According to a second aspect of the present invention, there is provided a system for obtaining three-dimensional data relating to a physical scene, the system comprising a projector for projecting a predetermined two-dimensional finite array of shapes onto the scene, the projected array having uniqueness properties in at least one dimension thereof; a camera for capturing an image of the array projected onto the scene; and processing means for obtaining the three-dimensional data from the image.
BRIEF DESCRIPTION OF THE DRAWINGS
An embodiment of the present invention will now be described by way of example with reference to the accompanying drawings in which:
Figure 1 is a camera-projector system according to one embodiment of the present invention;
Figure 2 shows a flow chart describing a method of obtaining three-dimensional data relating to a physical scene in accordance with the present invention;
Figure 3 shows a flow chart describing one of the steps in the method illustrated in Figure 2, in further detail;
Figure 4a shows an example of a captured image of an array of shapes projected by the projector of Figure 1;
Figure 4b shows the image of Figure 4a, as captured by the camera of Figure 1, the image having been processed to show detected shape edges;
Figure 5 shows three examples of shape vectors obtained from captured images of shapes; and
Figure 6 shows a flow chart describing a further one of the steps in the method illustrated in Figure 2, in more detail.
DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT
Figure 1 shows an embodiment of a camera-projector system 10 for obtaining three-dimensional data relating to a three-dimensional physical scene 12, in accordance with an embodiment of the present invention. A projector 14 projects a known array of shapes 16 onto the scene 12. A camera 18 captures an image 20 of the array of shapes 16 projected onto the scene 12. Given the correspondence between the shapes in the projected array 16 and the shapes in the captured image 20, it is possible to obtain three-dimensional data relating to the scene 12. The projector 14 and the camera 18 are directed towards the scene 12 at an acute angle relative to one another. In this embodiment, the projector 14 and the camera 18 are located in the same vertical plane. In an alternative embodiment, the camera 18 is displaced horizontally with respect to the projector 14. Figure 2 shows a flow chart of the steps in the method of obtaining the three-dimensional data relating to the scene 12.
As an overview of the method steps, at step 30 of Figure 2, the camera-projector system 10 is calibrated to obtain calibration data concerning the positions and internal parameters of the projector 14 and camera 18. At 32, the known array of shapes 16 is projected onto the scene 12 with the projector 14. At 34, an image 20 of the projected array of shapes 16 is captured with the camera 18. At 36, the shapes in the captured image 20 are classified into shape types (e.g. circle, triangle, etc.). At 38, one-to-one shape correspondences are identified between shapes in the captured image 20 and the shapes in the projected array 16. At 40, the one-to-one shape correspondences are used to obtain three-dimensional data relating to the physical scene 12.
These method steps will now be described in more detail, still referring to Figure 2. At 30, the system is calibrated to determine calibration data comprising positions, orientations and internal parameters (e.g. focal length) of the projector 14 and the camera 18 as 3x4 projection matrices P_P and P_C respectively. The calibration step 30 need only be performed once for a given set-up of the camera-projector system 10. The calibration data may be determined by computing a fundamental matrix F of the system and by then resolving a projective ambiguity which remains after computation of the fundamental matrix F. In this embodiment, the fundamental matrix F is computed by projecting a sequence of images in which only a single pixel is illuminated. This enables the camera 18 to capture a corresponding sequence of images of single scene points. The coordinates of the captured image points are determined by image processing. In the ith such image, the projector pixel at p_i = (p_xi, p_yi) and the scene point detected in the camera 18 at c_i = (c_xi, c_yi) are related by the epipolar constraint, written here in homogeneous coordinates:

(c_xi, c_yi, 1) F (p_xi, p_yi, 1)^T = 0
Thus, a linear constraint is provided on the 3x3 fundamental matrix F. Each captured image provides another such p_i <-> c_i correspondence so that F may be determined from several such correspondences. P_P and P_C are then determined from F up to a projective ambiguity which comprises a choice of coordinates in projective space. The projective ambiguity may be understood by considering the projector 14 and the camera 18 together being moved relative to the scene 12 such that their projective relationship remains the same while their positions and orientations are altered in real space. The projective ambiguity is resolved by locating particular scene points in real space. For example, the projective ambiguity may be resolved by imaging a scene consisting of a calibration object having identifiable features across three axes, e.g. a planar target of known dimensions oriented at varying depths from the camera. A two-dimensional homography H is first derived mapping from the imaged plane of the target to the projector view. Next, using F, the three-dimensional location of, for example, the imaged corners of the planes is resolved up to the unknown projective ambiguity. A three-dimensional homography H_SPACE is then computed between the projectively ambiguous space and the known coordinate frame of the calibration object, which provides a general mapping from ambiguous space to the real space of the planar targets.
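By way of an illustrative sketch only (Python with NumPy and OpenCV; the function and variable names are not part of the disclosure, and the patent does not prescribe a particular estimation algorithm), the fundamental matrix and a projectively ambiguous camera pair P_P, P_C might be recovered from the single-pixel correspondences along the following lines:

```python
import numpy as np
import cv2

def calibrate_projective(proj_pts, cam_pts):
    """Estimate F from projector <-> camera point pairs and build a
    projectively ambiguous camera pair, following the standard canonical
    decomposition P = [I | 0], P' = [[e']x F | e']."""
    proj_pts = np.asarray(proj_pts, dtype=np.float64)   # N x 2 projector pixels
    cam_pts = np.asarray(cam_pts, dtype=np.float64)     # N x 2 camera pixels
    F, _ = cv2.findFundamentalMat(proj_pts, cam_pts, cv2.FM_8POINT)  # needs >= 8 pairs
    # Epipole in the camera image: the null vector of F^T (i.e. F^T e' = 0).
    _, _, Vt = np.linalg.svd(F.T)
    e_cam = Vt[-1]
    e_cross = np.array([[0, -e_cam[2], e_cam[1]],
                        [e_cam[2], 0, -e_cam[0]],
                        [-e_cam[1], e_cam[0], 0]])
    P_proj = np.hstack([np.eye(3), np.zeros((3, 1))])       # P_P = [I | 0]
    P_cam = np.hstack([e_cross @ F, e_cam.reshape(3, 1)])   # P_C = [[e']x F | e']
    return F, P_proj, P_cam
```

This pair reproduces F exactly but is defined only up to the projective ambiguity discussed above; the homography H_SPACE derived from the calibration object is still needed to reach real-space coordinates.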
After the system has been calibrated at 30, a known array of shapes 16 is projected onto the scene 12 at 32, and an image 20 of the array 16 is captured by the camera 18 at 34. The known array of shapes 16 comprises a predetermined two-dimensional pattern of a finite array of shapes, the array having uniqueness properties along the epipolar lines of the camera-projector system. In this embodiment, the epipolar lines are approximately vertical so that the array of shapes 16 has column uniqueness properties. In an alternative embodiment, the camera and projector are located in the same horizontal plane and the epipolar lines of the system are approximately horizontal so that the array of shapes 16 has row uniqueness properties. Further alternative arrangements are also possible.
The shapes in the finite array are selected from a finite number of shape types. The projected shapes are sufficiently distinct that they can be distinguished under moderate to severe distortion. Furthermore, the shapes are simple enough that blurring during image capture does not disguise small details of the shapes. In this embodiment, the shape types are circles, diamonds and triangles. In an alternative embodiment, squares and stars may additionally be used. It will be appreciated that other shape types are also possible. For example, other geometric shapes may be used, or the finite array of shapes may be made up from letters and/or numbers.
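The patent does not state how uniqueness along a column is constructed. Purely as one hypothetical way of achieving it, a de Bruijn sequence over the shape alphabet guarantees that every window of n consecutive shapes occurs at most once within a column, so that a short chain of classified shapes pins down its position; the sketch below (Python, with illustrative names) builds such a column from the circle/diamond/triangle alphabet of this embodiment:

```python
def de_bruijn(k, n):
    """Standard de Bruijn sequence B(k, n): every length-n window over a
    k-symbol alphabet occurs exactly once (cyclically)."""
    a = [0] * k * n
    seq = []
    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return seq

SHAPES = ["circle", "diamond", "triangle"]              # shape alphabet of this embodiment
column = [SHAPES[s] for s in de_bruijn(len(SHAPES), 3)]
# len(column) == 27; any run of three consecutive shapes identifies a unique
# position within the column (up to cyclic wrap-around).
```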
After the image 20 has been captured at 34, the shapes in the image 20 are classified into shape types at 36. The step 36 of classifying the shapes in the image 20 into shape types is shown in more detail in Figure 3. At 50, the edges of the imaged shapes are detected. Then, at 52, the edges of the imaged shapes are represented as shape vectors. At 54, the shape vectors are classified into shape types using training data obtained at 56. It will be appreciated that the steps shown in Figure 3 provide one possible method of classifying the shapes into shape types. In alternative embodiments, morphological operators or patch comparison may be used to detect and vectorize the imaged shapes. Nonetheless, the steps shown in Figure 3 are described in more detail below.
The image processing to detect edges of the imaged shapes at 50 is invariant to variations caused by surface reflectance changes and allows accurate localisation of the shape boundaries. One particular edge detection method is described below; however, it will be appreciated that other known edge detection methods could be used in alternative embodiments. In this embodiment, a local implementation of the Canny operator is applied to the captured image 20 to determine edgels corresponding to intensity gradients within the captured image 20. The edgels comprise sub-pixel accurate positions, directions and magnitudes. In this embodiment, an array of solid white shapes on a black background is projected to give maximum contrast between the shapes and the background. Alternatively, an array of solid black shapes on a white background may be used. The projection of a black and white array of shapes ensures that the method may be used to obtain three-dimensional data relating to a coloured scene.
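A minimal sketch of this stage, assuming OpenCV's Canny and Sobel operators and omitting the sub-pixel refinement mentioned above (the thresholds and names are illustrative, not taken from the patent):

```python
import numpy as np
import cv2

def detect_edgels(gray):
    """Return one row per edgel: x, y, gradient direction and magnitude.
    Sub-pixel localisation is omitted in this sketch."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = np.hypot(gx, gy)
    edges = cv2.Canny(gray, 50, 150)          # gray must be an 8-bit image
    ys, xs = np.nonzero(edges)
    direction = np.arctan2(gy[ys, xs], gx[ys, xs])
    return np.column_stack([xs, ys, direction, magnitude[ys, xs]])

# gray = cv2.imread("captured.png", cv2.IMREAD_GRAYSCALE)
# edgels = detect_edgels(gray)
```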
Once the Canny operator has been applied to the captured image 20, the output is an unconnected, non-ordered list of edgels. The next step is to link the edgels into groups corresponding to individual shapes in the captured image 20. Linking edgels into shape edges is a common process which will not be described here, but about 95% of shapes are cleanly detected on average. As an example, Figure 4a is an image 60 of a projected array of shapes, and Figure 4b is a corresponding processed image 62 showing detected shape edges. In this example, the physical scene is a human face. Many image shapes 64 are cleanly detected, other image shapes 66 merge over depth discontinuities, and an image shape 68 near an eyebrow is not detected due to the noisy three-dimensional surface of the eyebrow.
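The linking process is left unspecified in the description. As one possible stand-in (assuming OpenCV 4 and a thresholded image of the white-on-black shapes; not the method of the patent), the external contour of each bright region can serve as that shape's boundary group:

```python
import cv2

def group_shape_boundaries(binary):
    """Return one (N, 2) array of boundary pixel coordinates per detected shape."""
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    # Discard tiny contours that are unlikely to be projected shapes.
    return [c.reshape(-1, 2) for c in contours if len(c) >= 20]

# binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
# boundaries = group_shape_boundaries(binary)
```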
Having detected the edges of the imaged shapes at 50, the edges are next represented as one-dimensional shape vectors at 52. Figure 5 illustrates the step 52 for three example imaged shapes 70, 72 and 74. The groups of edgels corresponding to the individual shapes 70, 72 and 74 are fitted to ellipses which are then transformed into transformed ellipses 76, 78 and 80 respectively by mapping to the unit circle 82. The transformed two-dimensional locations of the points on the transformed ellipses 76, 78 and 80 are then converted to polar coordinates (r,θ). The right-hand column of Figure 5 shows graphs 84, 86 and 88 of r against θ for the three transformed ellipses 76, 78 and 80 respectively. For the graphs 84, 86 and 88, θ=0 is chosen to be the point at which r is a maximum. The graphs of r are then sampled at regular intervals in θ to obtain D-dimensional shape vectors. Values of D from 10 to 36 provide reasonable results, but it will be appreciated that other values of D are also possible.
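A sketch of this descriptor, using OpenCV's ellipse fit and NumPy (the normalisation details and the choice D=18 are illustrative assumptions):

```python
import numpy as np
import cv2

def shape_vector(edge_points, D=18):
    """Fit an ellipse to a shape's edge points, normalise it to the unit
    circle, and sample r(theta) at D regular intervals with theta = 0 placed
    at the maximum radius."""
    pts = np.asarray(edge_points, dtype=np.float32)
    (cx, cy), (w, h), angle = cv2.fitEllipse(pts)          # needs at least 5 points
    t = np.deg2rad(angle)
    R = np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])
    # Translate to the ellipse centre, undo its rotation, rescale both axes to 1.
    q = (R @ (pts - [cx, cy]).T).T / [w / 2.0, h / 2.0]
    r = np.hypot(q[:, 0], q[:, 1])
    theta = np.arctan2(q[:, 1], q[:, 0])
    theta = (theta - theta[np.argmax(r)]) % (2 * np.pi)    # theta = 0 at maximum r
    bins = np.linspace(0, 2 * np.pi, D, endpoint=False)
    return np.interp(bins, theta, r, period=2 * np.pi)
```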
At 54, the shape vectors are classified into shape types using training data obtained at 56. The training data are obtained by using the projector 14 to project training arrays of shapes onto a training scene. For example, the training arrays may comprise projected arrays of shapes of a single shape type. In this embodiment, three training arrays are used: a first training array comprising only circles, a second training array comprising only triangles, and a third training array comprising only diamonds. The camera 18 is used to capture training images of the training arrays projected onto the training scene. Shape vectors are calculated for the imaged training shapes in the same way as described above. Image shape vectors are classified into shape types using a nearest-neighbour classifier based on the shape vectors for the training images. In this embodiment, shapes are labelled with their shape type if a consensus among five nearest neighbours can be reached, or as "unknown" otherwise. Typically, about 95% of imaged shapes are correctly identified in this way.
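A small sketch of the labelling rule, reading "consensus among five nearest neighbours" as unanimity (a majority vote is an equally plausible reading; the names are illustrative):

```python
import numpy as np

def classify_shape(vec, train_vecs, train_labels, k=5):
    """Label a shape vector by its k nearest training vectors, or 'unknown'
    if the neighbours do not agree."""
    distances = np.linalg.norm(np.asarray(train_vecs) - vec, axis=1)
    nearest = np.asarray(train_labels)[np.argsort(distances)[:k]]
    return nearest[0] if np.all(nearest == nearest[0]) else "unknown"
```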
Having classified the shapes in the captured image 20 into shape types at 36, one-to-one correspondences are identified between shapes in the captured image 20 and shapes in the projected array 16 at 38. The step 38 of identifying one-to-one shape correspondences between the image 20 and the projected array 16 is shown in more detail in Figure 6. At 90, the projected array 16 and the captured image 20 are rectified. At 92, imaged shapes are grouped according to respective columns of the projected image. At 94, the groups of shapes are ordered into lists. At 96, the ordered lists of shapes are aligned with the known columns of the array of projected shapes 16. Thus, one-to-one shape correspondences are identified between the projected array 16 and the captured image 20.
At 90, the calibrated fundamental matrix F is used to rectify the projected array 16 and the captured image 20 so that the columns of the projected array 16 fall along epipolar lines of the camera-projector system 10. Epipolar lines corresponding to the horizontally central points of the columns in the projected array 16 are then identified. Shapes are then assigned to columns at 92 by scanning along the central column epipolar lines and finding shapes which intersect the scan lines. Shapes are then ordered along the scan lines by sorting on the vertical position of the shapes' centroids. Shapes are therefore able to be ordered along scan lines at 94 using only the column uniqueness properties of the array 16.

Thus, due to the projected array 16 having uniqueness properties along the epipolar lines of the camera-projector system, and due to the rectification step 90, the method powerfully disambiguates the correspondence problem because the array of shapes 16 only requires uniqueness properties in one dimension thereof. Furthermore, the requirement for uniqueness properties in only one dimension considerably simplifies the construction of the projected array of shapes 16.

A small percentage of imaged shapes may have been misclassified, and some imaged shapes may have been classified as "unknown". In addition, occlusion and/or the limitations of the shape classification step 36 may lead to missing shapes in the image 20. The extraction of shapes along scan lines is therefore usually imperfect. Accordingly, at 96, the ordered lists of imaged shapes are aligned with known lists of shapes corresponding to the columns of the array of projected shapes 16.

The alignment optimisation problem is well known. In this embodiment, the alignment is optimised using a dynamic programming technique. This dynamic programming technique does not form a part of the present invention and will therefore be described only briefly here. The DP problem is visualized in this application via a 2D graph structure, with the list of observed shapes on one axis and known projected shapes on the other. A correspondence between observed and known shape type provides a point on the graph, and typically the point is assigned a score indicating the confidence of the match; the DP task is thus to find the highest-scoring path from approximately the lower-left to the upper-right of the graph. As no explicit score function is provided by the classifier, the dynamic programming technique is enhanced using the uniqueness properties of the projected array 16. The uniqueness properties mean that aligned chains of adjacent shapes become less likely to occur randomly as the length of the chain increases. Therefore, the dynamic programming algorithm is written so as to be biased towards longer aligned chains of shapes. In this embodiment, the algorithm is biased towards aligned chains of length greater than two. The output of the dynamic programming algorithm is an optimised set of one-to-one correspondences between shapes in the projected array 16 and shapes in the captured image 20.
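As an illustrative sketch only, a Needleman-Wunsch-style alignment with an extra bonus for matched chains longer than two reproduces the behaviour described above. The scoring constants are assumptions, since no explicit score function is given:

```python
import numpy as np

def align_column(observed, known, match=2, mismatch=-2, gap=-1, chain_bonus=1):
    """Align an ordered list of observed shape labels against a known
    projected column and return (observed index, known index) pairs."""
    n, m = len(observed), len(known)
    score = np.zeros((n + 1, m + 1))
    run = np.zeros((n + 1, m + 1), dtype=int)    # length of the current matched chain
    back = np.zeros((n + 1, m + 1), dtype=int)   # 0 = diagonal, 1 = skip observed, 2 = skip known
    score[:, 0] = gap * np.arange(n + 1)
    score[0, :] = gap * np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            hit = observed[i - 1] != "unknown" and observed[i - 1] == known[j - 1]
            diag = score[i - 1, j - 1] + (match if hit else mismatch)
            if hit and run[i - 1, j - 1] >= 2:   # bias towards chains longer than two
                diag += chain_bonus
            moves = [diag, score[i - 1, j] + gap, score[i, j - 1] + gap]
            back[i, j] = int(np.argmax(moves))
            score[i, j] = moves[back[i, j]]
            run[i, j] = run[i - 1, j - 1] + 1 if (back[i, j] == 0 and hit) else 0
    pairs, i, j = [], n, m                       # trace back the best path
    while i > 0 and j > 0:
        if back[i, j] == 0:
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif back[i, j] == 1:
            i -= 1
        else:
            j -= 1
    return pairs[::-1]
```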
At 40, the set of one-to-one shape correspondences is used to obtain three-dimensional data relating to the physical scene 12. This is done by using the one-to-one shape correspondences to obtain one-to-one point correspondences between points in the projected array 16 and points in the captured image 20. Due to the rectification step 90, the epipolar lines intersect the projected shapes at only two points per shape. Similarly, the epipolar lines intersect the imaged shapes at only two points per shape.
The epipolar lines which intersect the projected and imaged shapes are densely sampled to obtain two point correspondences per shape per epipolar line. The point correspondences are then triangulated using the projection matrices P_P and P_C to obtain a dense three-dimensional representation of the shape boundaries up to the projective ambiguity. The three-dimensional representation may be transformed from ambiguous space coordinates to real space coordinates by multiplying by the three-dimensional homography H_SPACE. Thus, the method enables high resolution three-dimensional data to be obtained from a single two-dimensional image.
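A sketch of this step, using OpenCV's linear triangulation; the 4x4 homography argument stands in for H_SPACE, and the names are illustrative:

```python
import numpy as np
import cv2

def triangulate_boundaries(P_proj, P_cam, proj_pts, cam_pts, H_space=None):
    """Triangulate matched boundary points and, if a 4x4 homography is given,
    lift the projective reconstruction into real-space coordinates."""
    X = cv2.triangulatePoints(P_proj, P_cam,
                              np.asarray(proj_pts, dtype=float).T,   # 2 x N
                              np.asarray(cam_pts, dtype=float).T)    # 2 x N
    if H_space is not None:
        X = H_space @ X
    return (X[:3] / X[3]).T                                          # N x 3 points
```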
Having extracted dense three-dimensional data around the boundaries of the shapes, the surface of the scene visible to the camera-projector system is known within the constraints of imaging noise. Some noise may be removed by regularising the three-dimensional data using a weak smoothness constraint. Alternatively, the smoothness constraint need not be applied.
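The regulariser itself is not specified. One weak smoothness constraint, shown here purely as an assumption, nudges each sample of a depth map towards the mean of its neighbourhood (SciPy; the blending weight is illustrative):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def regularise_depth(depth, weight=0.1, valid=None):
    """Blend each depth value with its 3x3 neighbourhood mean; samples outside
    the optional validity mask are left untouched."""
    smoothed = uniform_filter(depth, size=3)
    out = (1.0 - weight) * depth + weight * smoothed
    return np.where(valid, out, depth) if valid is not None else out
```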
As the three-dimensional data is reconstructed, shape-by-shape triangulation is easily performed by connecting three-dimensional data only within shapes, giving a partial surface suitable for rendering on PC graphics hardware. An alternative is to triangulate the complete point set, in which case a complete surface area manifold is presented. In either case, the data presented can be considered as a height map or as a range/depth image.
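A sketch of triangulating the complete point set into a surface: a 2D Delaunay triangulation over the image coordinates supplies the connectivity, and the recovered 3D points supply the vertex positions (SciPy; not part of the disclosure):

```python
import numpy as np
from scipy.spatial import Delaunay

def mesh_from_points(points_3d, image_uv):
    """Return (vertices, triangles): the 3D points plus index triples taken
    from a Delaunay triangulation of their 2D image coordinates."""
    triangulation = Delaunay(np.asarray(image_uv, dtype=float))
    return np.asarray(points_3d, dtype=float), triangulation.simplices
```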
Although a preferred embodiment of the invention has been described, it is to be understood that this is by way of example only and that various modifications may be contemplated.

Claims

CLAIMS :
1. A method of obtaining three-dimensional data relating to a physical scene, comprising: (a) projecting a predetermined two-dimensional finite array of shapes onto the scene, the projected array having uniqueness properties in at least one dimension thereof;
(b) capturing an image of the array projected onto the scene; (c) deriving correspondences between the shapes in the captured image to the finite array of projected shapes, based upon the uniqueness properties; and
(d) obtaining three-dimensional data points from the correspondence between the projected array and the captured image array.
2. The method of claim 1 wherein the projected array has uniqueness properties along mutually non-parallel lines.
3. The method of claim 2 wherein the mutually non-parallel lines are epipolar lines.
4. The method of any preceding claim wherein the method further comprises obtaining calibration data.
5. The method of claim 4 wherein the calibration data comprises a fundamental matrix.
6. The method of claim 5 wherein the step of obtaining calibration data further comprises resolving a projective ambiguity.
7. The method of any preceding claim wherein the step (c) further comprises detecting edges of the shapes from the captured image array.
8. The method of claim 7 wherein the step of detecting edges of the shapes comprises determining edgels corresponding to intensity gradients within the captured image.
9. The method of claim 7 or claim 8 wherein the step (c) further comprises representing the edges of the imaged shapes as shape vectors.
10. The method of claim 9 wherein the step (c) further comprises classifying the shapes using the shape vectors.
11. The method of any preceding claim wherein the method further comprises: projecting training arrays of shapes onto a training scene; capturing training images of the training arrays projected onto the training scene; and comparing the training images with the training arrays to obtain training data.
12. The method of claim 11 wherein the step (c) further comprises classifying the shapes using the training data.
13. The method of any preceding claim wherein the step (c) further comprises rectifying the projected array and the captured image.
14. The method of any preceding claim wherein the step (c) further comprises grouping the imaged shapes according to respective lines of shapes in the projected image, the lines of projected shapes being oriented along lines having uniqueness properties.
15. The method of claim 14 wherein the method further comprises ordering the groups of shapes.
16. The method of claim 15 wherein the method further comprises aligning the ordered groups of imaged shapes with respective lines of projected shapes using the uniqueness properties.
17. The method of claim 16 wherein dynamic programming is used to align the ordered groups of imaged shapes with respective lines of projected shapes.
18. The method of any preceding claim wherein the three dimensional data points are obtained by triangulation.
19. The method of any preceding claim wherein the method further comprises regularising the three-dimensional data points using a weak smoothness constraint.
20. The method of any preceding claim wherein the shapes are selected from the group consisting of circles, triangles, squares, diamonds and stars.
21. A system for obtaining three-dimensional data relating to a physical scene, the system comprising: a projector for projecting a predetermined two-dimensional finite array of shapes onto the scene, the projected array having uniqueness properties in at least one dimension thereof; a camera for capturing an image of the array projected onto the scene; and processing means for obtaining the three-dimensional data from the image.
PCT/GB2006/002715 2005-08-02 2006-07-20 Method and system for three-dimensional data capture WO2007015059A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB0515915.7 2005-08-02
GBGB0515915.7A GB0515915D0 (en) 2005-08-02 2005-08-02 Method and system for three-dimensional data capture

Publications (1)

Publication Number Publication Date
WO2007015059A1 true WO2007015059A1 (en) 2007-02-08

Family

ID=34983971

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2006/002715 WO2007015059A1 (en) 2005-08-02 2006-07-20 Method and system for three-dimensional data capture

Country Status (2)

Country Link
GB (1) GB0515915D0 (en)
WO (1) WO2007015059A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000000926A1 (en) * 1998-06-30 2000-01-06 Intel Corporation Method and apparatus for capturing stereoscopic images using image sensors
US20030110610A1 (en) * 2001-11-13 2003-06-19 Duquette David W. Pick and place machine with component placement inspection

Cited By (67)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10845184B2 (en) 2009-01-12 2020-11-24 Intermec Ip Corporation Semi-automatic dimensioning with imager on a portable device
US10140724B2 (en) 2009-01-12 2018-11-27 Intermec Ip Corporation Semi-automatic dimensioning with imager on a portable device
US8988590B2 (en) 2011-03-28 2015-03-24 Intermec Ip Corp. Two-dimensional imager with solid-state auto-focus
US9253393B2 (en) 2011-03-28 2016-02-02 Intermec Ip, Corp. Two-dimensional imager with solid-state auto-focus
WO2013076583A3 (en) * 2011-11-25 2013-12-27 Universite De Strasbourg Active vision method for stereo imaging system and corresponding system
US10467806B2 (en) 2012-05-04 2019-11-05 Intermec Ip Corp. Volume dimensioning systems and methods
US9779546B2 (en) 2012-05-04 2017-10-03 Intermec Ip Corp. Volume dimensioning systems and methods
US9007368B2 (en) 2012-05-07 2015-04-14 Intermec Ip Corp. Dimensioning system calibration systems and methods
US9292969B2 (en) 2012-05-07 2016-03-22 Intermec Ip Corp. Dimensioning system calibration systems and methods
US10007858B2 (en) 2012-05-15 2018-06-26 Honeywell International Inc. Terminals and methods for dimensioning objects
US10635922B2 (en) 2012-05-15 2020-04-28 Hand Held Products, Inc. Terminals and methods for dimensioning objects
US10321127B2 (en) 2012-08-20 2019-06-11 Intermec Ip Corp. Volume dimensioning system calibration systems and methods
US10805603B2 (en) 2012-08-20 2020-10-13 Intermec Ip Corp. Volume dimensioning system calibration systems and methods
US9939259B2 (en) 2012-10-04 2018-04-10 Hand Held Products, Inc. Measuring object dimensions using mobile computer
US10908013B2 (en) 2012-10-16 2021-02-02 Hand Held Products, Inc. Dimensioning system
US9841311B2 (en) 2012-10-16 2017-12-12 Hand Held Products, Inc. Dimensioning system
US9784566B2 (en) 2013-03-13 2017-10-10 Intermec Ip Corp. Systems and methods for enhancing dimensioning
EP2966595A1 (en) * 2013-03-13 2016-01-13 Intermec IP Corp. Systems and methods for enhancing dimensioning, for example volume dimensioning
US9080856B2 (en) 2013-03-13 2015-07-14 Intermec Ip Corp. Systems and methods for enhancing dimensioning, for example volume dimensioning
EP2779027A1 (en) * 2013-03-13 2014-09-17 Intermec IP Corp. Systems and methods for enhancing dimensioning, for example volume dimensioning
US10228452B2 (en) 2013-06-07 2019-03-12 Hand Held Products, Inc. Method of error correction for 3D imaging device
US10203402B2 (en) 2013-06-07 2019-02-12 Hand Held Products, Inc. Method of error correction for 3D imaging device
US9239950B2 (en) 2013-07-01 2016-01-19 Hand Held Products, Inc. Dimensioning system
US9464885B2 (en) 2013-08-30 2016-10-11 Hand Held Products, Inc. System and method for package dimensioning
US9823059B2 (en) 2014-08-06 2017-11-21 Hand Held Products, Inc. Dimensioning system with guided alignment
US10240914B2 (en) 2014-08-06 2019-03-26 Hand Held Products, Inc. Dimensioning system with guided alignment
US10859375B2 (en) 2014-10-10 2020-12-08 Hand Held Products, Inc. Methods for improving the accuracy of dimensioning-system measurements
US10402956B2 (en) 2014-10-10 2019-09-03 Hand Held Products, Inc. Image-stitching for dimensioning
US10775165B2 (en) 2014-10-10 2020-09-15 Hand Held Products, Inc. Methods for improving the accuracy of dimensioning-system measurements
US10810715B2 (en) 2020-10-20 Hand Held Products, Inc. System and method for picking validation
US10121039B2 (en) 2014-10-10 2018-11-06 Hand Held Products, Inc. Depth sensor based auto-focus system for an indicia scanner
US10134120B2 (en) 2014-10-10 2018-11-20 Hand Held Products, Inc. Image-stitching for dimensioning
US9779276B2 (en) 2014-10-10 2017-10-03 Hand Held Products, Inc. Depth sensor based auto-focus system for an indicia scanner
US9752864B2 (en) 2014-10-21 2017-09-05 Hand Held Products, Inc. Handheld dimensioning system with feedback
US10218964B2 (en) 2014-10-21 2019-02-26 Hand Held Products, Inc. Dimensioning system with feedback
US9897434B2 (en) 2014-10-21 2018-02-20 Hand Held Products, Inc. Handheld dimensioning system with measurement-conformance feedback
US9762793B2 (en) 2014-10-21 2017-09-12 Hand Held Products, Inc. System and method for dimensioning
US10393508B2 (en) 2014-10-21 2019-08-27 Hand Held Products, Inc. Handheld dimensioning system with measurement-conformance feedback
US9557166B2 (en) 2014-10-21 2017-01-31 Hand Held Products, Inc. Dimensioning system with multipath interference mitigation
US10060729B2 (en) 2014-10-21 2018-08-28 Hand Held Products, Inc. Handheld dimensioner with data-quality indication
US9786101B2 (en) 2015-05-19 2017-10-10 Hand Held Products, Inc. Evaluating image values
US10593130B2 (en) 2015-05-19 2020-03-17 Hand Held Products, Inc. Evaluating image values
US11906280B2 (en) 2015-05-19 2024-02-20 Hand Held Products, Inc. Evaluating image values
US11403887B2 (en) 2015-05-19 2022-08-02 Hand Held Products, Inc. Evaluating image values
US10066982B2 (en) 2015-06-16 2018-09-04 Hand Held Products, Inc. Calibrating a volume dimensioner
US10247547B2 (en) 2015-06-23 2019-04-02 Hand Held Products, Inc. Optical pattern projector
US9857167B2 (en) 2015-06-23 2018-01-02 Hand Held Products, Inc. Dual-projector three-dimensional scanner
US10612958B2 (en) 2015-07-07 2020-04-07 Hand Held Products, Inc. Mobile dimensioner apparatus to mitigate unfair charging practices in commerce
US9835486B2 (en) 2015-07-07 2017-12-05 Hand Held Products, Inc. Mobile dimensioner apparatus for use in commerce
US11353319B2 (en) 2015-07-15 2022-06-07 Hand Held Products, Inc. Method for a mobile dimensioning device to use a dynamic accuracy compatible with NIST standard
US10393506B2 (en) 2015-07-15 2019-08-27 Hand Held Products, Inc. Method for a mobile dimensioning device to use a dynamic accuracy compatible with NIST standard
US11029762B2 (en) 2015-07-16 2021-06-08 Hand Held Products, Inc. Adjusting dimensioning results using augmented reality
US10094650B2 (en) 2015-07-16 2018-10-09 Hand Held Products, Inc. Dimensioning and imaging items
US10249030B2 (en) 2015-10-30 2019-04-02 Hand Held Products, Inc. Image transformation for indicia reading
US10225544B2 (en) 2015-11-19 2019-03-05 Hand Held Products, Inc. High resolution dot pattern
US10747227B2 (en) 2016-01-27 2020-08-18 Hand Held Products, Inc. Vehicle positioning and object avoidance
US10025314B2 (en) 2016-01-27 2018-07-17 Hand Held Products, Inc. Vehicle positioning and object avoidance
US10872214B2 (en) 2016-06-03 2020-12-22 Hand Held Products, Inc. Wearable metrological apparatus
US10339352B2 (en) 2016-06-03 2019-07-02 Hand Held Products, Inc. Wearable metrological apparatus
US9940721B2 (en) 2016-06-10 2018-04-10 Hand Held Products, Inc. Scene change detection in a dimensioner
US10417769B2 (en) 2016-06-15 2019-09-17 Hand Held Products, Inc. Automatic mode switching in a volume dimensioner
US10163216B2 (en) 2016-06-15 2018-12-25 Hand Held Products, Inc. Automatic mode switching in a volume dimensioner
US10909708B2 (en) 2021-02-02 Hand Held Products, Inc. Calibrating a dimensioner using ratios of measurable parameters of optically-perceptible geometric elements
US11047672B2 (en) 2017-03-28 2021-06-29 Hand Held Products, Inc. System for optically dimensioning
US10733748B2 (en) 2017-07-24 2020-08-04 Hand Held Products, Inc. Dual-pattern optical 3D dimensioning
US10584962B2 (en) 2020-03-10 Hand Held Products, Inc. System and method for validating physical-item security
US11639846B2 (en) 2019-09-27 2023-05-02 Honeywell International Inc. Dual-pattern optical 3D dimensioning

Also Published As

Publication number Publication date
GB0515915D0 (en) 2005-09-07

Similar Documents

Publication Publication Date Title
WO2007015059A1 (en) Method and system for three-dimensional data capture
US10902668B2 (en) 3D geometric modeling and 3D video content creation
CA2079817C (en) Real time three dimensional sensing system
US7103212B2 (en) Acquisition of three-dimensional images by an active stereo technique using locally unique patterns
EP1649423B1 (en) Method and system for the three-dimensional surface reconstruction of an object
EP2568253B1 (en) Structured-light measuring method and system
US20130106833A1 (en) Method and apparatus for optical tracking of 3d pose using complex markers
JP6596433B2 (en) Structured optical matching of a set of curves from two cameras
CN112097689A (en) Calibration method of 3D structured light system
WO2004044522A1 (en) Three-dimensional shape measuring method and its device
CN113505626A (en) Rapid three-dimensional fingerprint acquisition method and system
Tabata et al. High-speed 3D sensing with three-view geometry using a segmented pattern
Wenzel et al. High-resolution surface reconstruction from imagery for close range cultural Heritage applications
US20220092345A1 (en) Detecting displacements and/or defects in a point cloud using cluster-based cloud-to-cloud comparison
CN108645353B (en) Three-dimensional data acquisition system and method based on multi-frame random binary coding light field
JP2004077290A (en) Apparatus and method for measuring three-dimensional shape
Li et al. A camera on-line recalibration framework using SIFT
KR100872103B1 (en) Method and apparatus for determining angular pose of an object
JPH0814858A (en) Data acquisition device for three-dimensional object
JP2006058092A (en) Three-dimensional shape measuring device and method
Vuori et al. Three-dimensional imaging system with structured lighting and practical constraints
JP2005292027A (en) Processor and method for measuring/restoring three-dimensional shape
JP2916319B2 (en) 3D shape measuring device
JP6837880B2 (en) Image processing equipment, image processing systems, image processing methods, and programs
Matabosch et al. A refined range image registration technique for multi-stripe laser scanner

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: PCT application non-entry in European phase
    Ref document number: 06765045
    Country of ref document: EP
    Kind code of ref document: A1