US20020089675A1 - Three-dimensional input device - Google Patents

Three-dimensional input device

Info

Publication number
US20020089675A1
US20020089675A1 (application US09/334,918)
Authority
US
United States
Prior art keywords
light
scans
scan
detection light
measuring
Prior art date
Legal status
Granted
Application number
US09/334,918
Other versions
US6424422B1
Inventor
Koichi Kamon
Hideki Tanabe
Makoto Miyazaki
Toshio Norita
Current Assignee
Minolta Co Ltd
Original Assignee
Minolta Co Ltd
Priority date
Filing date
Publication date
Priority claimed from JP17120698A (published as JP2000002520A)
Priority claimed from JP19127898A (published as JP3740848B2)
Application filed by Minolta Co Ltd filed Critical Minolta Co Ltd
Assigned to MINOLTA CO., LTD. reassignment MINOLTA CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: TANABE, HIDEKI, JOKO, TAKUTO, KAMON, KOICHI, MIYAZAKI, MAKOTO, NORITA, TOSHIO
Publication of US20020089675A1
Application granted granted Critical
Publication of US6424422B1
Status: Expired - Fee Related

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures

Definitions

  • the present invention relates to a three-dimensional input device for scanning an object by projecting a detection light on the object and outputting data specifying the object shape.
  • Three-dimensional input devices of the non-contact type known as rangefinders are used for data input to computer graphics systems and computer-aided design systems, physical measurement, and visual recognition for robots and the like, due to their high speed measurement capability compared to contact type devices.
  • the slit projection method (also referred to as light-section method) is known as a suitable measurement method for rangefinders.
  • This method produces a distance image (three-dimensional image) by optically scanning an object, and is one type of active measurement method which senses an object by illuminating the object with a specific detection light.
  • a distance image is a collection of pixels expressing the three-dimensional position at a plurality of parts on an object.
  • the slit light used as the detection light is a projection beam having a linear band-like cross section.
  • a part of an object is illuminated at a specific moment during the scan, and the position of this illuminated part can be calculated by the trigonometric survey method from the light projection direction and the high luminance position on the photoreceptive surface (i.e., photoreception direction). Accordingly, a group of data specifying the object shape can be obtained by sampling the luminance of each pixel of the photoreceptive surface.
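  • As a rough illustration of the trigonometric survey principle described above, the sketch below intersects the projection ray with a pixel's line of sight to recover depth. The function name, angle conventions, and example numbers are illustrative assumptions, not the device's actual calculation (which uses the camera parameters listed later in this description).

```python
import math

def triangulate_depth(proj_angle_deg, recv_angle_deg, baseline_length):
    """Depth of an illuminated point from the projection angle of the slit
    light and the photoreception direction of the pixel that sees it.
    Both angles are measured from the baseline joining the projection origin
    and the lens principal point (an illustrative convention)."""
    a = math.radians(proj_angle_deg)
    b = math.radians(recv_angle_deg)
    # The point lies where the projection ray and the pixel's line of sight
    # cross; its distance from the baseline follows from the two tangents.
    return baseline_length * math.tan(a) * math.tan(b) / (math.tan(a) + math.tan(b))

# Purely illustrative numbers: 45 deg projection, 60 deg reception, 0.2 m baseline.
print(triangulate_depth(45.0, 60.0, 0.2))   # ~0.127 m
```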
  • In rangefinders, the distance to an object is measured and the detection light projection angle range is adjusted accordingly.
  • an operation control is executed to set the projection light start and end angle position so that the entirety of an object reflected on the photoreceptive surface becomes the scanning range in accordance with the image sensing magnification and the object distance.
  • In sampling the luminance of the photoreceptive surface, methods are known which limit the object of a single sampling not to the entire photoreceptive surface but to a partial area where the detection light is expected to enter, and shift this area for each sampling. Such methods are capable of scanning at high speed by reducing the time required per sample, and further reduce the load on the signal processing system by decreasing the amount of data.
  • a first objective of the present invention is to allow three-dimensional data input for an entire object with the same resolution as when the depth dimension is small even when the depth dimension of the object is large.
  • the detection light projection intensity is adjusted for the main measurement by performing a preliminary measurement.
  • an operation setting is executed wherein the detection light is projected to a part of the image sensing range and the projection intensity is optimized in accordance with the amount of the detection light entering the photoreceptive surface.
  • a second objective of the present invention is to provide a three-dimensional input device capable of obtaining three-dimensional data with a precision identical to that of an object with uniform reflectivity without receiving operation specifications a plurality of times even when there are marked differences in reflectivity depending on the part of the object.
  • the detection light projection angle range is automatically changed, and a plurality of scans are performed.
  • suitable photoreception information can be obtained by other scans even at the parts of the object from which effective photoreception information cannot be obtained by a particular scan.
  • the measurable range in the depth direction is theoretically expanded by a factor of the number of scans, since the measurable photoreception range of each scan is combined with those of the other scans.
  • a three-dimensional input device comprises a light projecting means for projecting detection light, and an image sensing means for receiving the detection light reflected by an object and converting said received light to electrical signals, which scans an object periodically while changing the projection direction of the detection light, and consecutively performs a plurality of scans at mutually different detection light projection angle ranges in accordance with the specifications at the start of operation.
  • the specifications at the start of the operation are set by control signals produced by operating a switch or button, or received from an external device.
  • the projection intensity is automatically changed, and a plurality of scans are performed.
  • suitable photoreception information can be obtained at other projection intensities even for an area on an object for which suitable photoreception information cannot be obtained at a particular projection intensity.
  • the amount of entrance light dependent on the reflectivity of a target area can be made to conform to the dynamic range of image sensing and signal processing in at least a single scan.
  • the objects of the present invention are attained at the moment photoreception information is obtained for a plurality of scans, and the selection of suitable photoreception information can be accomplished within the three-dimensional input device, or by an external device.
  • the photoelectric conversion signals obtained by a first scan are amplified by different amplification factors.
  • photoreception information is obtained which is equivalent to the information obtained by a plurality of scans at different projection intensities, and allows the selection of suitable photoreception information for each part of an object.
  • a three-dimensional input device comprises a light projecting means for projecting detection light, and an image sensing means for receiving the detection light reflected by an object and converting the received light to electrical signals, which scans an object periodically while changing the projection direction of the detection light, and consecutively projects the projection light at different intensities for each of a plurality of scans in accordance with specifications at the start of operation.
  • the specifications at the start of the operation are set by control signals produced by operating a switch or button, or received from an external device.
  • FIG. 1 illustrates an exemplary block diagram of the measuring system of the present invention.
  • FIG. 2 illustrates an exemplary external view of a three-dimensional camera of the present invention.
  • FIG. 3 is a block diagram illustrating the functional operation of the three-dimensional camera 2 .
  • FIG. 4 is a schematic diagram of the zoom unit 51 for photoreception in accordance with the first embodiment of the present invention.
  • FIG. 5 illustrates the principle of the three-dimensional position calculation by the measuring system 1 .
  • FIG. 6 illustrates the concept of the center ip.
  • FIG. 7 is an exemplary schematic diagram illustrating the positional relationship between the object and the principal point of the optical system.
  • FIG. 8 shows an example of the positional change of the reference plane Ss.
  • FIG. 9 illustrates an example of the measurable distance range
  • FIG. 10 illustrates an example of the measurable distance range
  • FIG. 11 illustrates the setting of the deflection parameters
  • FIG. 12 illustrates the setting of the deflection parameters
  • FIG. 13 is an exemplary flow chart illustrating the operation of the three-dimensional camera 2 ;
  • FIG. 14 illustrates an example of the monitor display content
  • FIG. 15 illustrates an exemplary sensor reading range
  • FIG. 16 illustrates the relationship between the frames and lines in the image sensing surface of the sensor
  • FIG. 17 illustrates an example of the recorded state of the photoreception data of each frame
  • FIG. 18 is a flow chart illustrating the processing sequence of the three-dimensional position calculation by the host
  • FIG. 19 is an exemplary schematic view of the zoom unit for photoreception
  • FIG. 20 illustrates the principle for calculating a three-dimensional position in the measuring system
  • FIG. 21 illustrates an exemplary positional relationship between the object and the principal point of the optical system
  • FIG. 22 is an exemplary flow chart showing the operation of the main measurement
  • FIG. 23 is an exemplary flow chart of the calculation of the slit light intensity suitable for the main measurement
  • FIG. 24 shows exemplary settings of the slit light intensity.
  • FIG. 25 illustrates the reading range of the sensor 53 ;
  • FIGS. 25 ( a )-( c ) illustrate representative examples of the relationship between the maximum value Xmax and the minimum value Xmin of the photoreception data
  • FIG. 26 is an exemplary flow chart of the output memory control
  • FIG. 27 is an exemplary block diagram illustrating a variation of the three-dimensional camera 2 b described above in the foregoing embodiment.
  • FIG. 28 is an exemplary flow chart of the main measurement executed by the three-dimensional camera 2 b.
  • FIG. 29 is an exemplary block diagram of the output processing circuit 62 b of FIG. 27.
  • FIG. 30 is an exemplary block diagram of the maximum value determination circuit 643 of FIG. 29.
  • FIG. 31 shows an example of the relationship between the threshold values XLb and XHb and the maximum value Xmax2 of the photoreception data corresponding to a single pixel.
  • FIG. 1 shows the construction of a measuring system 1 of the present invention.
  • the measuring system 1 comprises a three-dimensional camera (rangefinder) 2 for performing stereoscopic measurement using the slit projection method, and a host computer 3 for processing the data output from the three-dimensional camera 2 .
  • the three-dimensional camera 2 outputs measurement data specifying the three-dimensional positions of a plurality of sampling points on an object Q, and outputs data necessary for calibration and a two-dimensional image expressing the color information of the object Q.
  • the host computer 3 manages the calculation process for determining the coordinates of the sampling points using the trigonometric survey method.
  • the host computer 3 comprises a central processing unit (CPU) 3 a , a display 3 b , a keyboard 3 c , and a mouse 3 d .
  • Software for measurement data processing is included in the CPU 3 a .
  • Data can be transferred between the host computer 3 and the three-dimensional camera 2 either online or offline using a portable recording medium 4 .
  • a magneto-optic disk (MO), minidisk (MD), memory card and the like may be used as the recording medium 4 .
  • FIG. 2 illustrates an exemplary external view of a three-dimensional camera of the present invention.
  • a projection window 20 a and a reception window 20 b are provided on the front surface of a housing 20 .
  • the projection window 20 a is positioned on the top side relative to the reception window 20 b .
  • the slit light (a band-like laser beam of predetermined width w) U emitted from an internal optical unit OU is directed toward the object being measured (photographic object) through the projection window 20 a .
  • the radiation angle ⁇ in the lengthwise direction M 1 of the slit light U is fixed.
  • the optical unit OU is provided with a two-axis adjustment mechanism for optimizing the relative relationship between the projection light axis and the reception light axis.
  • the top surface of the housing 20 is provided with zoom buttons 25 a and 25 b , manual focusing buttons 26 a and 26 b , and a shutter button 27 .
  • the back surface of the housing 20 is provided with a liquid crystal display 21 , a cursor button 22 , a selector button 23 , a cancel button 24 , an analog output pin 32 , a digital output pin 33 , and an installation aperture 30 a for the recording medium 4 .
  • the liquid crystal display (LCD) 21 is used as a display for the operation screens, and as an electronic viewfinder. A photographer sets the photographic mode by means of the various buttons 22 through 24 on the back surface. Two-dimensional image signals are output in NTSC format from the analog output pin 32 .
  • the digital output pin 33 is, for example, a SCSI pin.
  • FIG. 3 is a block diagram illustrating the functional operation of the three-dimensional camera 2 .
  • the solid arrow in the drawing indicates the flow of the electrical signals, and the broken arrow indicates the flow of light.
  • the three-dimensional camera 2 is provided with two optical units 40 and 50 on the light projection side and the light reception side, which comprise the previously mentioned optical unit OU.
  • a laser beam having a wavelength of 670 nm emitted from a semiconductor laser (LD) 41 passes through a projection lens 42 to form the slit light, which is deflected by a galvano mirror (scanning mechanism) 43 .
  • a system controller 61 controls the driver 44 of the semiconductor laser 41 , the drive system 45 of the projection lens system 42 , and the drive system 46 of the galvano mirror 43 .
  • In the optical unit 50 , light focused by the zoom unit 51 is split by the beam splitter 52 .
  • the light in the oscillation wavelength range of the semiconductor laser 41 enters a measurement sensor 53 .
  • the light in the visible range enters the monitor color sensor 54 .
  • the sensor 53 and the color sensor 54 are both charge-coupled device (CCD) area sensors.
  • the zoom unit 51 is an internal focus type, which uses part of the entering light for autofocusing (AF).
  • the autofocus function is realized by an autofocus (AF) sensor 57 , a lens controller 58 , and a focusing drive system 59 .
  • the zooming drive system 60 is provided for non-manual zooming.
  • Image sensing information obtained by the sensor 53 is transmitted to the output processing circuit 62 synchronously with a clock signal from the driver 55 .
  • the image sensing information is temporarily stored in the memory 63 as 8-bit photoreception data.
  • the address specification in the memory 63 is performed by the memory controller 63 A.
  • the photoreception data are transmitted from the memory 63 to the center calculation circuit 73 with a predetermined timing, and the center calculation circuit 73 generates data (hereinafter referred to as “center ip”) used as a basis for the calculation of the three-dimensional position of a target object.
  • the center ip data are transmitted as a calculation result through the output memory 64 to the SCSI controller 66 .
  • the center ip data are output as monitor information from the center calculation circuit 73 to the display memory 74 , and used to display a distance image on the liquid crystal display (LCD) 21 .
  • the output processing circuit 62 generates the data (the “center ip”) used as a basis for the calculation of the three-dimensional position of a target object based on the input image sensing information, and outputs the center ip as a measurement result to the SCSI controller 66 .
  • the data generated by the output processing circuit 62 are output as monitor information to the display memory 74 , and used to display a distance image on the liquid crystal display 21 .
  • the output processing circuit 62 is described in detail later.
  • the image sensing information obtained by the color sensor 54 is transmitted to the color processing circuit 67 synchronously with clock signals from the driver 56 .
  • the image sensing information obtained by the color processing is output online through the NTSC conversion circuit 70 and the analog output pin 32 , or is binarized by the digital image generator 68 and stored in the color image memory 69 . Thereafter, the color image data are transmitted from the color image memory 69 to the SCSI controller 66 .
  • the SCSI controller 66 outputs the data from the output memory 64 (first embodiment) or the output processing circuit 62 (second embodiment) and the color image from the digital output pin 33 , or stores the data on the recording medium 4 .
  • the color image is an image having the same field angle as the distance image based on the output of the sensor 53 , and is used as reference information by application processing on the host computer 3 side.
  • the processes using color information include, for example, processes for generating a three-dimensional model by combining the measurement data of a plurality of groups having different camera perspectives, and processes for culling unnecessary peaks from a three-dimensional model.
  • the system controller 61 instructs a character generator (not shown in the drawings) to display suitable text and symbols on the screen of the liquid crystal display 21 .
  • FIG. 4 is a schematic diagram of the zoom unit 51 for photoreception.
  • the zoom unit 51 comprises a front image forming unit 515 , a variator 514 , a compensator 513 , a focusing unit 512 , a back image forming unit 511 , and a beam splitter 516 for guiding part of the incidence light to the AF sensor 57 .
  • the front image forming unit 515 and the back image forming unit 511 are fixed relative to the optical path.
  • the movement of the focusing unit 512 is managed by the focus drive system 59
  • the movement of the variator 514 is managed by the zoom drive system 60
  • the focus drive system 59 is provided with a focusing encoder 59 A for specifying the moving distance (lens feedout amount Ed) of the focusing unit 512
  • the zoom drive system 60 is provided with a zooming encoder 60 A for specifying the moving distance (lens notch value fp) of the variator 514 .
  • FIG. 5 illustrates the principle of the three-dimensional position calculation in measuring system 1
  • FIG. 6 illustrates the concept of the center ip.
  • the three-dimensional camera 2 conducts 32 samplings per pixel.
  • the slit light U projected on the object Q has a relatively wide width, spanning several pixels g at the pixel pitch pv on the image sensing surface S 2 of the sensor 53 .
  • the width of the slit light U is approximately 5 pixels.
  • the slit light U is deflected at equal angular speed about origin point A in vertical directions.
  • the slit light U reflected by the object Q passes through the principal point O (principal point H′ on the back side of the zoom unit) of image formation, and enters the image sensing surface S 2 .
  • the object Q (specifically, the object image) is scanned by periodic sampling of the amount of light received by the sensor 53 in the projection of the slit light U. Photoelectric conversion signals are output for each frame from sensor 53 in each sampling period (sensor actuation period).
  • the amount of light received by each pixel g of the image sensing surface S 2 of the sensor 53 reaches a maximum at the time Npeak at which the optical axis of the slit light U passes through the object surface area ag viewed by pixel g, and the temporal distribution of the received light approaches a normal distribution.
  • the time Npeak occurs between the No. n sample and the previous (n-1) sample.
  • one frame is designated as 32 lines corresponding to part of the image sensing surface S 2 of the sensor 53 . Accordingly, when one pixel g is targeted on the image sensing surface S 2 , 32 samplings are executed during the scan, and 32 individual photoreception data are obtained.
  • the center ip (time center) corresponding to the time Npeak is calculated by a center calculation using the photoreception data of the aforesaid 32 frames.
  • the center ip is the center on the time axis of the distribution of the 32 photoreception data obtained by the 32 samplings as shown in FIG. 6.
  • the 32 photoreception data of each pixel are provided a sampling number of 1 through 32.
  • the No. i photoreception data are expressed as xi, where i is an integer of 1 to 32. Here, i represents the frame number counted from when the target pixel enters the effective photoreception range comprising 32 lines.
  • the center ip of the photoreception data x 1 through x 32 is determined by dividing the total sum Σ(i·xi) of the data i·xi by the total sum Σxi of the data xi, i.e., ip = Σ(i·xi)/Σxi.
  • the center calculation circuit 73 calculates the center ip (i.e., the time center Npeak) using Equation 1.
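  • The center ip calculation of Equation 1 can be sketched as follows; the function and the example data are illustrative only and do not represent the center calculation circuit 73 itself.

```python
def time_centroid(samples):
    """Center ip of Equation 1: the temporal centroid of the 32 photoreception
    data of one pixel, with sample numbers i = 1..32."""
    weighted = sum(i * x for i, x in enumerate(samples, start=1))  # sum of i*xi
    total = sum(samples)                                           # sum of xi
    return weighted / total if total else None

# Roughly bell-shaped illustrative data peaking between samples 16 and 17.
samples = [0] * 12 + [10, 40, 90, 120, 120, 90, 40, 10] + [0] * 12
print(time_centroid(samples))   # 16.5
```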
  • the host 3 calculates the position (coordinates) of the object Q using the trigonometric survey method based on the relationship between the projection direction of the slit light U at the determined center ip and the entrance direction of the slit light U on the target pixel g. As such, it is possible to measure with higher resolution than the resolution stipulated by the pitch pv of the pixel array of the image sensing surface S 2 .
  • a user determines the camera position and direction and sets the field angle while viewing a color monitor image displayed on the LCD 21 . Either a zooming operation or focusing operation or both are performed as necessary at this time.
  • the variator 514 and the focusing unit 512 in the zoom unit 51 are moved, and the lens notch value fp and the lens feedout amount Ed at this time are successively transmitted to the system controller 61 through the lens controller 58 .
  • the system controller 61 calculates the object interval distance D using a conversion table generated beforehand based on Equation 2:
  • G represents a function
  • the field angle is set.
  • the slit light U deflection parameters (scan start angle, scan end angle, deflection angular speed) are set based on the determined object interval distance D.
  • FIG. 7 is a schematic diagram of the positional relationship between the object and the principal point of the optical system.
  • the offset Doff in the Z direction between the origin A and the anterior principal point H of the optical system, which is the measurement reference point of the object interval distance D, is considered.
  • a predetermined amount of overscan (e.g., 16 pixels) is allowed.
  • the scan start angle th 1 , the scan end angle th 2 , and the deflection angular speed ⁇ are expressed in the following equations:
  • pv represents the pixel pitch
  • np represents the effective pixel number in the Y direction of the image sensing surface S 2
  • L represents the base line length
  • the measurable distance range d is dependent on the number of lines of one frame in the readout operation by the sensor 53 .
  • the present embodiments are based on this principle and use 32 lines per frame as stated above.
  • when the object Q fits within the measurable distance range d, the object Q can be measured in a single scan. However, when the object Q has a large dimension in the depth direction (Z direction), such that part of the object Q extends beyond the measurable distance range, the shape data of that part cannot be obtained.
  • the three-dimensional camera 2 consecutively performs a plurality of scans (specifically, 3 ) while automatically changing the reference plane Ss. In this way shape data are obtained in a broader range than the measurable distance range d without reduction of resolution.
  • FIG. 8 shows an example of the positional change of the reference plane Ss.
  • In the first scan the reference plane Ss is set at position Z0 of the object interval distance D, in the second scan the reference plane Ss is set at position Z1 of the object interval distance D 1 (D 1 <D), and in the third scan the reference plane Ss is set at the position Z2 of the object interval distance D 2 (D 2 >D). That is, in the second scan the reference plane Ss is moved to the front side, and in the third scan the reference plane Ss is moved to the back side.
  • the sequence of the scan and the reference plane Ss position are optional, and the back side scan may be performed before the front side scan without problem.
  • the object interval distances D 1 and D 2 are set such that the measurable distance ranges d 2 and d 3 of the second and third scans partially overlap the measurable distance range d of the first scan. Overlapping the measurable distance ranges d, d 2 , d 3 allows the shape data obtained in each scan to be easily combined. Since the image sensing ratio is fixed in all three scans, the angle range for projecting the slit light U changes if the reference plane Ss position changes. Accordingly, the deflection parameters are calculated for each scan.
  • FIGS. 9 and 10 illustrate the measurable distance range.
  • the addresses in the Y direction are incremented 1, 2, 3 . . . , with the top address (scan start address) designated 1.
  • the scan angle ⁇ 2 at the end of the sampling of the pixel at address nc is determined by the following equation when the number of samples per pixel is designated j (i.e., 32 in the present example).
  • the measurable distance range d′ of the pixel at address nc is the depth range from the intersection point Z1 of the projection axis of the scan angle ⁇ 1 and the line of sight of the pixel at address nc to the intersection point Z2 of the projection axis of the scan angle ⁇ 2 and the line of sight of the pixel at address nc.
  • the scan angles ⁇ 1m and ⁇ 2m of the sampling start and end of address nm can be determined.
  • the measurable distance range dm of the pixel at address nm is the depth range from the intersection point Z1 of the projection axis of the scan angle θ 1m and the line of sight of the pixel at address nm to the intersection point Z2 of the projection axis of the scan angle θ 2m and the line of sight of the pixel at address nm, as shown in FIG. 10.
  • FIGS. 11 and 12 illustrate the setting of the deflection parameters.
  • the object interval distance D 1 is determined from Equation 7, and Equations 3 and 4 are applied.
  • the scan start angle th 1 a and the scan end angle th 2 a can be expressed by Equations 9 and 10:
  • th 1 a =tan −1 [( pv ( np/ 2+16)+ L )/( D 1 +Doff)]·180/π (9)
  • the boundary position Z2 on the back side (right side in the drawing) of the aforesaid measurable distance range d is used as a reference position to determine the scan start angle th 1 b and the scan end angle th 2 b.
  • the scan start angle th 1 b and the scan end angle th 2 b can be expressed by Equations 12 and 13:
  • th 1 b =tan −1 [( pv ( np/ 2+16)+ L )/( D 2 +Doff)]·180/π (12)
  • th 2 b =tan −1 [(− pv ( np/ 2+16)+ L )/( D 2 +Doff)]·180/π (13)
  • the range d′ from position Z11 to position Z22 becomes the measurable distance range of the pixel at address nc by performing three scans at the determined deflection parameters (th 1 , th 2 , ω), (th 1 a , th 2 a , ωa), (th 1 b , th 2 b , ωb) (refer to FIG. 8).
  • Two or more scans may be performed, or four or more scans may be performed to broaden the measurable distance range. If the deflection parameters are determined for the fourth and subsequent scans based on the deflection parameters of the second and third scans, the measurable distance range of each scan can be accurately overlapped.
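  • A minimal sketch of how a scan start/end angle pair could be computed for a given reference plane, following the form of Equation (9) above; the sign convention for the end angle, the helper name, and the example numbers are assumptions.

```python
import math

def scan_angles(D, Doff, L, pv, np_pixels, overscan=16):
    """Scan start/end angles in degrees for a reference plane at object
    interval distance D, following the form of Equation (9); the sign of the
    pv term in the end angle is assumed by symmetry."""
    half_height = pv * (np_pixels / 2 + overscan)
    th_start = math.degrees(math.atan((half_height + L) / (D + Doff)))
    th_end = math.degrees(math.atan((-half_height + L) / (D + Doff)))
    return th_start, th_end

# Illustrative values only; the true quantities come from the lens encoders
# and the conversion tables described in the text.
print(scan_angles(D=600.0, Doff=30.0, L=100.0, pv=0.25, np_pixels=200))
```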
  • an autofocus adjustment is executed to adjust the lens feedout amount Ed in accordance with the object interval distances D 1 and D 2 .
  • the system controller 61 determines the lens feedout amount Ed using a conversion table generated beforehand based on Equation 15, and this value is set as the control target value in the lens controller 58 .
  • the system controller 61 calculates the lens feedout amounts Ed 1 and Ed 2 for the second and third scans from the object interval distances D 1 and D 2 obtained when determining the first deflection parameters (th 1 , th 2 , ω), using Equations 16 and 17.
  • FIG. 13 is a flow chart briefly showing the operation of the three-dimensional camera 2 .
  • the system controller 61 calculates the deflection parameters for the first scan, and then calculates the deflection parameters for the second and third scans based on the first calculation result (Steps 50 to 52 ).
  • the deflection parameters of the first scan are set, and scanning starts (Steps 53 , 54 ).
  • the projection of the slit light U and the sampling of the amount of received light via sensor 53 are started.
  • the scan is executed until the deflection angle position attains the scan end angle th 2 .
  • the system controller 61 sets the deflection parameters for the second scan (Steps 55 , 56 ).
  • the lens feedout amount Ed 2 is determined for this deflection parameter, and the movement of focusing unit 512 is specified to the focusing drive system 59 via the lens controller 58 (Step 57 ).
  • the second scan starts (Steps 58 , 59 ).
  • the system controller 61 sets the deflection parameters for the third scan, and specifies the movement of the focusing unit 512 (Steps 60 to 62 ).
  • the third scan starts (Steps 63 , 64 ).
  • FIG. 14 shows an example of the monitor display content.
  • when the object of measurement extends over the range between the positions Z11 and Z22 as shown in FIG. 14 b , part of the object Q is measured in each scan.
  • the three distance images g 1 , g 2 , g 3 representing the result of each scan are shown aligned from left to right in the sequence of the reference plane position on the liquid crystal display 21 , as shown in FIG. 14 a . That is, the distance image g 1 corresponding to the first scan is shown in the center, the distance image g 2 corresponding to the second scan is shown on the left side, and the distance image g 3 corresponding to the third scan is shown on the right side.
  • An operator specifies whether or not the result of a scan is output by means of the cursor button 22 and the selector button 23 . Two or more scans may be specified, or one scan may be specified. The first scan and the second scan may be specified as indicated by the upward arrows in the example of FIG. 14.
  • the specified scan result (i.e., the center ip of a specific number of pixels) is output from the center calculation circuit 73 to the host 3 or the recording medium 4 via the output memory 64 and the SCSI controller 66 .
  • device information including the specifications of the sensors and the deflection parameters also are output.
  • Table 1 shows the main data transmitted by the three-dimensional camera 2 to the host 3 .
  • FIG. 15 shows the reading range of the sensor 53 .
  • the reading of one frame by the sensor 53 is not executed for the entire image sensing surface S 2 , but is executed only for the effective photoreception region (band-like image) Ae of part of the image sensing surface S 2 to facilitate high speed processing.
  • the effective photoreception region Ae is a region on the image sensing surface S 2 corresponding to the measurable distance range in a specific illumination timing, and shifts one pixel at a time for each frame in conjunction with the deflection of the slit light U.
  • the number of pixels in the shift direction of the effective photoreception region Ae is fixed at 32.
  • the method for reading only part of the sensed image of a CCD area sensor is disclosed in U.S. Pat. No. 5,668,631.
  • FIG. 16 illustrates the relationship between frames and the lines in the image sensing surface S 2 of the sensor 53
  • FIG. 17 shows the recorded state of the photoreception data of each frame.
  • frame 1 which is the first frame of the image sensing surface S 2 , includes the photoreception data of 32 lines by 200 pixels from line 1 through line 32 .
  • One line comprises 200 pixels.
  • Each frame is shifted one line, i.e., frame 2 includes line 2 through line 33 , and frame 3 includes line 3 through line 34 .
  • Frame 32 includes line 32 through line 63 .
  • the photoreception data of line 1 through line 32 are sequentially subjected to analog-to-digital (A/D) conversion, and stored in the memory 63 (refer to FIG. 3). As shown in FIG. 17, the photoreception data are stored in the sequence of frame 1 , frame 2 , frame 3 and the like, and the data of line 32 included in each frame shift upward one line per frame, i.e., line 32 is the 32nd line in frame 1 , the 31st line in frame 2 , and the like.
  • the sampling of the pixels of line 32 ends by storing the photoreception data of frame 1 through frame 32 in the memory 63 .
  • the photoreception data of each pixel at the end of sampling are sequentially read from the memory 63 for the center calculation. The content of the center calculation is described below.
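  • The frame-sequential storage described above can be sketched as follows: the 32 samples of every pixel on a given line are gathered by walking one row up per frame. The memory layout (nested lists), the function name, and the indexing convention are assumptions for illustration.

```python
def samples_for_line(frames, line_no, lines_per_frame=32):
    """Collect the 32 samples of every pixel on a given line from the
    frame-sequential storage: the line enters a frame at its bottom row and
    moves up one row in each subsequent frame.
    frames[f][r][c]: frame f, row r within that frame, column c (0-based).
    line_no is 1-based and must be >= lines_per_frame."""
    first_frame = line_no - lines_per_frame   # frame in which the line appears at the bottom
    n_cols = len(frames[0][0])
    per_pixel = []
    for c in range(n_cols):
        samples = []
        for i in range(lines_per_frame):      # i = 0..31 -> sample numbers 1..32
            row = lines_per_frame - 1 - i     # bottom row first, one row higher per frame
            samples.append(frames[first_frame + i][row][c])
        per_pixel.append(samples)
    return per_pixel
```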
  • the center ip specifies a position on the surface of the object Q.
  • Three-dimensional position calculation processing is executed in the host 3 , to calculate the three-dimensional position (coordinates X, Y, Z) of 200 ⁇ 200 sampling points (pixels).
  • the sampling points are the intersections of the camera line of sight (a straight line connecting the sampling point and the front principal point H) and the slit plane (the optical axis plane of the slit light U illuminating the sampling point).
  • FIG. 18 is a flow chart showing the processing sequence of the three-dimensional position calculation by the host.
  • a determination is made as to whether or not the total sum Σxi of xi transmitted from the three-dimensional camera 2 exceeds a predetermined value (Step 11 ). Since too much error is included when the value xi is small, i.e., when the total sum Σxi of the slit light component does not satisfy a predetermined reference, the three-dimensional position calculation is not executed for that pixel, and data expressing “error” are stored in memory for that pixel (Step 17 ). When Σxi exceeds the predetermined value, the three-dimensional position is calculated for that pixel because there is sufficient luminance.
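  • A sketch of the per-pixel host processing of Steps 11 through 17: weak pixels are marked as errors, and valid pixels are converted to the passage timing nop and handed to the triangulation of Equations 18-20, represented here by a hypothetical `solve_position` callable.

```python
ERROR = None   # marker stored for pixels whose slit light component is too weak

def process_pixel(samples, line_no, sum_threshold, solve_position):
    """Per-pixel host processing of Steps 11 through 17, sketched: reject weak
    pixels, convert the centroid to the passage timing nop, then hand off to
    the triangulation of Equations 18-20.  `solve_position` stands in for that
    solver and is hypothetical."""
    total = sum(samples)                       # sum of xi
    if total <= sum_threshold:                 # too much error when the slit component is weak
        return ERROR                           # Step 17
    ip = sum(i * x for i, x in enumerate(samples, 1)) / total   # center ip
    nop = ip + line_no                         # passage timing from the start of the scan (Step 12)
    return solve_position(nop)                 # (X, Y, Z) of the sampling point (Step 13)
```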
  • the slit light U passage timing nop is calculated (Step 12 ).
  • the center ip is converted to the passage timing nop from the start of the scan by adding the line number.
  • the line number of the pixel of line 32 calculated at the start is [ 32 ]
  • the line number of the next line 33 is [ 33 ].
  • the line number of the line of the target pixel increases by 1 with each one line advance.
  • suitable set values can be calibrated by canceling the rotation angle th 1 around the X axis and the angular speed th 4 around the X axis in the coefficients of Equation 20 described below.
  • In Step 13 the three-dimensional position is calculated.
  • the calculated three-dimensional position is stored in a memory area corresponding to the pixel (Step 14 ), and similar processing is then executed for the next pixel (Step 16 ).
  • the routine ends when processing is completed for all pixels (Step 10 ).
  • b represents the image distance
  • FH represents the front principal point
  • pu represents the pixel pitch in the horizontal direction of the image sensing surface
  • pv represents the pixel pitch in the vertical direction in the image sensing surface
  • u represents the pixel position in the horizontal direction in the image sensing surface
  • u 0 represents the center pixel position in the horizontal direction in the image sensing surface
  • v represents the pixel position in the vertical direction in the image sensing surface
  • v 0 represents the center pixel position in the vertical direction in the image sensing surface.
  • th 1 represents the rotation angle around the X axis
  • th 2 represents the incline angle around the Y axis
  • th 3 represents the incline angle around the Z axis
  • th 4 represents the angular speed around the X axis
  • nop represents the passage timing (center ip + line number)
  • L represents the base line length
  • s represents the offset of origin point A.
  • the geometric aberration is dependent on the field angle. Distortion is generated in a subject with the center pixel as the center. Accordingly, the amount of distortion is expressed as a function of the distance from the center pixel. In this case, the function is approximated by a cubic function of the distance.
  • the secondary correction coefficient is designated d 1 and the tertiary correction coefficient is designated d 2 .
  • the pixel positions u′ and v′ are applied to Equations 21 and 22:
  • u′ = u + d 1 ·t 2 ²·( u − u 0 )/ t 2 + d 2 ·t 2 ³·( u − u 0 )/ t 2 (21)
  • v′ = v + d 1 ·t 2 ²·( v − v 0 )/ t 2 + d 2 ·t 2 ³·( v − v 0 )/ t 2 (22)
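  • A sketch applying the distortion correction of Equations (21) and (22); taking t2 as the distance of (u, v) from the center pixel is an assumption, since its defining equation is not reproduced above.

```python
import math

def correct_distortion(u, v, u0, v0, d1, d2):
    """Distortion-corrected pixel position (u', v') per Equations (21)/(22).
    t2 is taken here as the distance of (u, v) from the center pixel; its
    defining equation is not reproduced above, so this is an assumption."""
    t2 = math.hypot(u - u0, v - v0)
    if t2 == 0.0:
        return u, v                     # the center pixel is not displaced
    u_corr = u + d1 * t2**2 * (u - u0) / t2 + d2 * t2**3 * (u - u0) / t2
    v_corr = v + d1 * t2**2 * (v - v0) / t2 + d2 * t2**3 * (v - v0) / t2
    return u_corr, v_corr
```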
  • a three-dimensional position which considers aberration can be determined by substituting u′ for u and v′ for v in Equations 18 and 19. Calibration is discussed in detail by Onodera and Kanaya, “Geometric correction of unnecessary images in camera positioning,” The Institute of Electronics, Information and Communications Engineers Research Data PRU 91-113, and Ueshiba, Yoshimi, Oshima, et al., “High precision calibration method for rangefinders based on a three-dimensional model of optical systems,” The Institute of Electronics, Information and Communications Engineers Journal D-II, vol. J74-D-II, No. 9, pp. 1227-1235, September 1991.
  • Although the object interval distance D is calculated by a passive type measurement detecting the positions of the focusing unit and the zoom unit in the aforesaid embodiment, it is also possible to use an active type measurement which calculates the object interval distance by the trigonometric survey method in a preliminary measurement, projecting the slit light U at a predetermined angle and detecting the incidence angle. Furthermore, the deflection parameters may be set based on an object interval distance preset without measurement, or based on an object interval distance input by a user.
  • a mode may be provided for performing only a single scan, so as to allow a user to switch between the single scan mode and a mode performing a plurality of scans.
  • the calculation to determine the coordinates from the center ip may also be executed within the three-dimensional camera 2 .
  • a function may be provided to analyze the results of a single scan to automatically determine whether or not two or more scans are necessary, and execute a plurality of scans only when required.
  • a construction may also be used to allow the host to calculate the center ip, by transmitting the data Σ(i·xi) and Σxi for each pixel as measurement results to the host 3 .
  • FIG. 19 is a block diagram of the output processing circuit 62 in accordance with a second embodiment of the present invention. It is noted that the output processing circuit 62 illustrated in FIG. 19 represents a modified embodiment of the output processing circuit illustrated in FIG. 3.
  • the photoelectric conversion signals S 53 output from the sensor 53 are placed on a sampling hold by the sampling hold circuit 621 , amplified by a predetermined amplification factor by the amplification circuit 622 , and subsequently converted to 8-bit photoreception data Xi by the analog-to-digital (A/D) converter 623 .
  • the photoreception data Xi are temporarily stored in a memory 624 , and transmitted via a predetermined timing to the maximum value determination circuit 626 and the center calculation circuit 627 .
  • the photoreception data Xi of 32 frames are recorded in the memory 624 .
  • the center is calculated for each pixel using a plurality of photoreception data Xi (32 items in the present example) obtained by shifting the image sensing cycle each sensor drive cycle.
  • the memory address specification of the memory 624 is controlled by the memory controller 625 .
  • the system controller 61 fetches the photoreception data of a predetermined pixel through a data bus (not shown in the drawings) in a preliminary measurement.
  • the center calculation circuit 627 calculates the center ip forming the basis of the three-dimensional position calculation by the host computer 3 , and transmits the calculation result to the output memory 628 .
  • the maximum value determination circuit 626 outputs a control signal CS 1 representing the suitability of the center ip for each pixel.
  • the output memory controller 629 controls the writing of the center ip to the output memory 628 in accordance with the control signal CS 1 .
  • a control signal CS 2 output from the system controller 61 specifies masking of the control signal CS 1
  • the center ip is written to memory regardless of the control signal CS 1 .
  • the output memory controller 629 is provided with a counter for address specification.
  • FIG. 20 is an exemplary block diagram of the maximum value determination circuit 626 .
  • the maximum value determination circuit 626 comprises a maximum value detector 6261 , a threshold value memory 6262 , and a comparator 6263 .
  • the 32 items of photoreception data Xi per pixel read from the memory 624 are input to the maximum value detector 6261 .
  • the maximum value detector 6261 detects and outputs the maximum value Xmax among the 32 items of photoreception data Xi per pixel.
  • the maximum value Xmax is compared to two threshold values XL and XH by the comparator 6263 , and the comparison result is output as the control signal CS 1 .
  • the threshold values XL and XH are fixed values stored in the threshold memory 6262 .
  • the three-dimensional camera 2 performs measurement sampling at 200 ⁇ 262 sampling points. That is, the image sensing surface S 2 has 262 pixels in the width direction of the slit light U, and the actual number of frames is 231.
  • a user determines the camera position and direction, and sets the field angle while viewing a color monitor image displayed on the liquid crystal display 21 . At this time a zooming operation is performed if necessary. Also at this time focusing is accomplished either manually or automatically by moving the focusing unit within the zoom unit 51 , and the approximate object interval distance (do) is measured during the focusing process.
  • the system controller 61 estimates the output of the semiconductor laser 41 (slit light intensity), and the deflection conditions (scan start angle, scan end angle, deflection angular speed) of the slit light U for the main measurement.
  • FIG. 21 illustrates a summary of the estimation of the slit light intensity Ls in the preliminary measurement.
  • the summary of the estimation of the slit light intensity is described below.
  • the projection angle is set so that the reflected light is received in the center of the sensor 53 in the presence of a flat surfaced object using an object interval distance (do) estimated by the focusing operation.
  • the slit light U is projected consecutively a total of three times at three intensity levels La, Lb, Lc at the set projection angle, the output of the sensor 53 is sampled, and the relationship between each intensity La, Lb, Lc and the output of the sensor 53 is determined. Then, the slit light intensity Ls is determined for the optimum value Ss of the sensor 53 output based on the determined relationships.
  • the sampling of the output of the sensor 53 does not target the totality of the image sensing surface S 2 , but rather targets only a part of the surface.
  • the object interval distance (d) is estimated by the trigonometric survey method based on the projection angle of the slit light U at intensities La, Lb, Lc, and the receiving position of the slit light U. Then, the scan start angle, scan end angle, and deflection angular speed are calculated so as to obtain a measurement result of a predetermined resolution based on the object interval distance (d), the optical conditions of the received light, and the operating conditions of the sensor 53 . The preliminary measurement continues until a suitable preliminary measurement result is obtained, and then the main measurement is executed.
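  • The estimation of the slit light intensity Ls from the three trial projections can be sketched as follows; a straight-line fit between trial intensity and sensor output is assumed here, since the device's actual interpolation method is not specified in the text.

```python
def estimate_slit_intensity(trial_levels, sensor_outputs, target_output):
    """Estimate the slit light intensity Ls that should produce the optimum
    sensor output Ss from the three trial projections (La, Lb, Lc).  A
    least-squares straight-line fit is assumed; the device's actual
    interpolation is not specified in the text."""
    n = len(trial_levels)
    mean_l = sum(trial_levels) / n
    mean_o = sum(sensor_outputs) / n
    slope = sum((l - mean_l) * (o - mean_o)
                for l, o in zip(trial_levels, sensor_outputs))
    slope /= sum((l - mean_l) ** 2 for l in trial_levels)
    intercept = mean_o - slope * mean_l
    return (target_output - intercept) / slope

# Illustrative numbers: outputs roughly proportional to the trial intensities.
print(estimate_slit_intensity([10.0, 20.0, 30.0], [40.0, 95.0, 150.0], 128.0))  # ~26
```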
  • FIG. 22 is an exemplary flow chart showing the operation of the main measurement.
  • an object is scanned a total of three times.
  • the deflection conditions of the slit light U are identical in all scans, but the slit light intensity is different in each scan.
  • the slit light intensity L 1 of the first scan is the slit light intensity Ls determined previously by the preliminary measurement.
  • the slit light intensity L 1 is set as the operating condition of the semiconductor laser 41 (Step 11 ), and the slit light U is projected in a range from the scan start angle to the scan end angle (Steps 12 , 13 ).
  • the slit light intensity L 2 is set, and a second scan performed (Steps 14 to 16 ), then the slit light intensity L 3 is set, and a third scan is performed (Steps 17 to 19 ).
  • Data loss in the processing performed by the output processing circuit 62 described later can be prevented by scanning at the slit light intensity Ls. Data loss is related to the reflectivity distribution of the object, the number of set levels of slit light intensity (number of scans), and the maximum value of the photoreception data of each pixel.
  • FIG. 23 is an exemplary flow chart of the calculation of the slit light intensity suitable for the main measurement
  • FIG. 24 shows exemplary settings of the slit light intensity.
  • the slit light intensity Ls is calculated based on the preliminary measurement as previously described (Step 21 ), and the slit light intensities L 2 and L 3 are determined in accordance with the magnitude relationship between the obtained value and the previously determined threshold values LL and LH.
  • the threshold value LL is the border value on the low side of the two borders dividing the variable range of the slit light intensity (minimum value Lmin to maximum value Lmax) into the three intensity ranges zL, zM, zH of low (L), medium (M), and high (H).
  • the threshold value LH is the border value on the high side.
  • when Ls is at or below the threshold value LL, the light intensity L 2 is set as the average value LMa of the intensity range zM,
  • and the intensity value L 3 is set as the average value LHa of the intensity range zH (Steps 22 , 23 ).
  • otherwise, the intensity value L 2 is set as the average value LLa of the intensity range zL (Step 24 ), and the intensity value L 3 is set as described below.
  • when Ls is at or below the threshold value LH, the value L 3 is set as LHa (Steps 25 , 26 );
  • when Ls>LH, the value L 3 is set as LMa (Step 27 ). That is, one slit light intensity is selected from each of the three intensity ranges zL, zM, zH.
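  • The selection of L 2 and L 3 in Steps 22 through 27 can be sketched as follows; the branch conditions are reconstructed from the description above and should be read as an illustration, not the literal flow chart.

```python
def select_scan_intensities(Ls, LL, LH, LLa, LMa, LHa):
    """Intensities for the three scans (Steps 22-27, sketched): the first scan
    uses Ls, and L2/L3 are chosen so that one intensity falls in each of the
    ranges zL, zM, zH.  The branch conditions are reconstructed from the
    description and are not the literal flow chart."""
    L1 = Ls
    if Ls <= LL:                 # Ls already lies in the low range zL
        L2, L3 = LMa, LHa
    else:
        L2 = LLa                 # always take one intensity from zL
        L3 = LHa if Ls <= LH else LMa   # Ls in zM -> add zH; Ls in zH -> add zM
    return L1, L2, L3
```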
  • Each of the values LL, LH, LLa, LMa, and LHa is expressed by the following equations.
  • the previously mentioned output processing circuit 62 calculates the center ip using a predetermined calculation on the output from the sensor 53 , and determines for each pixel, in accordance with the maximum value Xmax, which scan's center ip is used as valid.
  • FIG. 15 illustrates the reading range of the sensor 53 .
  • Reading of one frame by the sensor 53 is accomplished not for the entire image sensing surface S 2 , but rather the subject is only the effective photoreception area (band-like image) Ae of part of the image sensing surface S 2 to allow high speed reading.
  • the effective photoreception area Ae is the area on the image sensing surface S 2 corresponding to the measurable distance range of a specific illumination timing, and shifts pixel by pixel for each frame in accordance with the deflection of the slit light U.
  • the number of pixels of the effective photoreception area in the shift direction is fixed at 32 .
  • the method of reading part of the sensed image by the CCD area sensor is disclosed in Japanese Laid-Open Patent Application No. HEI 7-174536.
  • FIG. 16 illustrates the relationship of the line and frame on the image sensing surface S 2 of the sensor 53
  • FIG. 17 illustrates the recording state of the photoreception data of each frame.
  • the frame 1 which is the first frame on the image sensing surface S 2 contains photoreception data of the 32 (lines) by 200 pixels from line 1 through line 32 .
  • the frame 2 is shifted one line only and contains lines 2 through 33
  • the frame 3 is shifted one line only and contains lines 3 through 34 .
  • the frame 32 contains lines 32 through 63 .
  • One line comprises 200 pixels as previously described.
  • the photoreception information of frame 1 through frame 32 is sequentially converted by an A/D converter and stored in the memory 624 (refer to FIG. 19).
  • the photoreception data is stored in the memory 624 in the sequence of frame 1 , frame 2 , frame 3 and so on, and the data of line 32 included in each frame is shifted up one line per frame so as to be line 32 in frame 1 , line 31 in frame 2 and the like.
  • the sampling of each pixel of line 32 ends by storing the photoreception data from frame 1 to frame 32 in the memory 624 .
  • the photoreception data of each pixel are sequentially read from the memory 624 after sampling ends to calculate the center.
  • the center ip is the center on the time axis of the distribution of 32 individual photoreception data obtained by 32 samplings.
  • the 32 individual photoreception data of each pixel are designated by the sample numbers 1 through 32 .
  • the no. i sample data is represented by Xi, where i is an integer of 1 through 32 .
  • i represents the frame number of a single pixel after the pixel enters the effective photoreception range Ae.
  • the center ip of the photoreception data X 1 through X 32 of numbers 1 through 32 is determined in the same manner as described above.
  • the calculation process becomes inaccurate or impossible when the sampling data near the center ip attains a saturated level or a solid level, thereby causing a reduction in the accuracy of the target object position calculation.
  • the three-dimensional camera 2 of the second embodiment performs a plurality of scans at different slit light intensities and selects the scan data wherein the sampling data is not at a saturated level or a solid level among each scan of each pixel so as to accurately calculate the center relative to all pixels.
  • FIGS. 25 ( a )-( c ) illustrate representative examples of the relationship between the maximum value Xmax and the minimum value Xmin of the photoreception data.
  • the threshold values XL and XH are set at values below the saturation level Xt and above the solid level Xu in the operation characteristics of the sensor 53 .
  • FIG. 25( a ) shows an instance in which the maximum value Xmax satisfies Equation (24) described later among the sampled photoreception data of each pixel; in this case the center ip is permitted to be written to the output memory 628 .
  • FIG. 25( b ) shows an instance in which the maximum value Xmax exceeds the threshold value XH (tolerance upper limit), and the photoreception data are saturated. In this case, since there is concern that the center calculation cannot be performed accurately, the center ip is prohibited from being written to the output memory 628 .
  • FIG. 25( c ) shows an instance when the maximum value Xmax is less than the threshold value XL (tolerance lower limit); the photoreception data approach the solid level thereby greatly increasing the effects of background light and intra circuit noise, and narrowing the width of the photoreception data distribution. Since there is concern that the center calculation cannot be accurately performed in this case also, the center ip is not permitted to be written to the output memory 628 .
  • the comparator 6263 mentioned in FIG. 20 executes a comparison calculation to determine whether or not the input maximum value Xmax satisfies the conditions of Equation (24), and outputs a control signal CS 1 .
  • control signal CS 1 permits the center ip to be written to the output memory 628 when the maximum value Xmax of each pixel satisfies Equation (24), and prohibits the center ip from being written to memory when Equation (24) is not satisfied.
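  • The write-permission test behind the control signal CS 1 amounts to checking that the maximum photoreception value of a pixel lies inside the tolerance band [XL, XH]; a sketch with illustrative example values follows, noting that Equation (24) itself is not reproduced in the text.

```python
def write_permitted(samples, XL, XH):
    """Sketch of the write-permission test behind control signal CS1: the
    centroid of a pixel is written only when the maximum of its photoreception
    data lies inside the tolerance band (neither saturated nor at the solid
    level).  Equation (24) itself is not reproduced in the text."""
    x_max = max(samples)
    return XL <= x_max <= XH

# Illustrative values only.
print(write_permitted([5, 40, 180, 90, 10], XL=20, XH=230))   # True
print(write_permitted([5, 12, 15, 9, 3], XL=20, XH=230))      # False: near the solid level
```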
  • the output memory 628 is provided with a capacity capable of holding the center of all pixels of the sensor 53 .
  • FIG. 26 is an exemplary flow chart of the output memory control.
  • the maximum value Xmax is compared to the threshold value XL and XH, and the writing to the output memory 628 is either permitted or prohibited in accordance with the comparison result (Steps 37 - 40 ).
  • the center ip obtained in the second scan is written to the output memory 628 .
  • the center ip is written to the output memory 628 in the same manner as in the second scan.
  • the center ip of each pixel calculated in each of a plurality of scans in the output processing circuit 62 is updated for each scan if the maximum value Xmax satisfies the conditions of Equation (24).
  • the center calculation result is written to the output memory 628 for all pixels in the first scan to prevent data loss.
  • the slit light intensity Ls in the first scan is optimized relative to the reflectivity of the center of the object in the preliminary measurement as previously mentioned. Loss of data for other areas near the reflectivity of the center of the object is prevented by executing the first scan at the slit light intensity Ls.
  • the accuracy of the center calculation can be improved by selecting the slit light intensity appropriate to the reflectivity of each part of the target object by increasing the number of scans and the number of levels of the slit light intensity.
  • FIG. 27 is an exemplary block diagram illustrating a variation of the three-dimensional camera 2 b described above in the foregoing embodiment.
  • items common to the embodiment of FIG. 3 are identified by a “b” appended to their reference number.
  • the external view, basic internal construction, and method of use of the three-dimensional camera 2 b are identical to the previously described three-dimensional camera 2 (refer to FIGS. 1 through 3). The description of FIG. 27 is therefore abbreviated.
  • the optical unit 40 b is provided with a semiconductor laser 41 b , which emits a slit light U to scan an object which is the measurement target.
  • the optical unit 50 b is provided with a measurement sensor 53 b and a monitor color sensor 54 b , which forms the object image and converts this image to electrical signals.
  • a system controller 61 b controls the drive system 140 for operating the optical unit 40 b and the drive system 150 for driving the optical unit 50 b.
  • the photoelectric conversion signals S 53 b obtained by the sensor 53 b are transmitted to the output processing circuit 62 b .
  • the output processing circuit 62 b outputs a distance image to the display memory 74 b , and outputs data based on the calculation of the three-dimensional position to the SCSI controller 66 b .
  • the distance image is displayed on the liquid crystal display 21 b .
  • the photoelectric conversion signals obtained by the color sensor 54 b are input to the color data processing system 160 .
  • the color data processing system 160 outputs analog image signals to the pin 32 b , and transmits digital image data to the SCSI controller 66 b .
  • the SCSI controller 66 b controls data communications with external devices via the pin 33 b , and manages the access to the recording medium 4 b.
  • FIG. 28 is an exemplary flow chart of the main measurement executed by the three-dimensional camera 2 b .
  • the three-dimensional camera 2 b performs a preliminary measurement in the same sequence as the three-dimensional camera 2 , and the result of the preliminary measurement is reflected in the main measurement.
  • the system controller 61 b provides the deflection conditions for the drive system 140 , and provides the slit light intensity Ls as an output condition for the semiconductor laser 41 b (Step 51 ).
  • the scan starts when the scan start angle of the deflection conditions is satisfied (Step 52 ), and the scan is executed until the scan end angle is satisfied (Step 53 ).
  • the output processing circuit 62 b performs signal processing on the successive photoelectric conversion signals output from the sensor 53 b.
  • FIG. 29 is an exemplary block diagram of the output processing circuit 62 b of FIG. 27.
  • the photoelectric conversion signal S 53 b of each pixel transmitted from the sensor 53 b is placed on a sampling hold by the sampling hold circuit 630 , and is amplified by mutually different amplification factors A 1 , A 2 , A 3 in three amplifiers 631 , 632 , 633 .
  • the relationships among the three amplification factors A 1 , A 2 , A 3 are expressed by the equations below.
  • the output of the amplifier 631 is binarized by the A/D converter 634 , then transmitted to the memory 638 as photoreception data Xil.
  • the usage of the memory 638 is identical to that shown in FIG. 16, and the photoreception data Xi 1 are written to an address specified by the memory controller 637 .
  • the output of the amplifier 632 is binarized by the A/D converter 635 , then transmitted to the memory 639 as photoreception data Xi 2
  • the output of the amplifier 633 is binarized by the A/D converter 636 , then transmitted to the memory 640 as photoreception data Xi 3 .
  • the photoreception data Xi 1 , Xi 2 , Xi 3 are all 8-bit data.
  • the photoreception data Xi 1 , Xi 2 , Xi 3 are read from the memories 638 through 640 for each pixel when the writing of the 32 frames ends.
  • One among the photoreception data Xi 1 , Xi 2 , Xi 3 is selected by the selectors 641 and 642 , and transmitted to the center calculation circuit 644 .
  • the selectors 641 and 642 receive selector signals SL 1 and SL 2 from the maximum value determination circuit 643 .
  • FIG. 30 is an exemplary block diagram of the maximum value determination circuit 643 of FIG. 29, and FIG. 31 shows an example of the relationship between the threshold values XLb and XHb and the maximum value Xmax 2 of the 32 photoreception data corresponding to a single pixel.
  • the maximum value determination circuit 643 comprises a maximum value detector 6431 , a threshold memory unit 6432 , and two comparators 6433 and 6434 .
  • the photoreception data Xi 1 transmitted from the memory 638 are input to the maximum value detector 6431 .
  • the maximum value detector 6431 detects the maximum value Xmax 2 of the photoreception data Xi 1 (32 in the present embodiment) of each pixel.
  • the threshold memory 6432 provides the threshold values XLb and XHb for the comparator 6433 , and provides the threshold value XLb for the comparator 6434 .
  • the comparator 6433 outputs a selector signal SL 1 in accordance with the magnitude relationship of the maximum value Xmax 2 and the threshold values XLb and XHb.
  • the comparator 6434 outputs a selector signal SL 2 in accordance with the magnitude relationship of the maximum value Xmax 2 and the threshold value XLb.
  • the threshold values XLb and XHb are set below the saturation level Xtb and above the solid level Xub determined by the characteristics of the sensor 53 b .
  • the selector signal SL 1 becomes active, and the selector 641 selects the photoreception data Xi 1 from the memory 638 . That is, the photoreception data Xi 1 of amplification factor A 1 is used in the center calculation (refer to FIG. 28).
  • the selector 641 selects the photoreception data (Xi 2 or Xi 3 ) from the selector 642 .
  • the selector signal SL 2 becomes active, and the selector 642 selects the photoreception data Xi 2 from the memory 639 , and transmits the data to the selector 641 .
  • the selector 642 selects the photoreception data Xi 3 from the memory 640 .
  • the relationship Xi 3 < Xi 1 < Xi 2 obtains.
  • An accurate center calculation can be accomplished by the aforesaid process using photoreception data, obtained by a suitable amplification of the sensor output, that is at neither a saturated level nor a solid level.
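  • The selection logic above can be pictured with the following Python sketch (an illustration, not the patent's circuit); it assumes that SL 1 is asserted when Xmax 2 lies between XLb and XHb, that SL 2 is asserted when Xmax 2 falls below XLb, and that the amplification factors are ordered A 2 > A 1 > A 3, so a higher-gain channel serves dark pixels and a lower-gain channel serves saturated ones. The function name and threshold defaults are placeholders.

      # Choose, per pixel, which of the three amplified data streams feeds the
      # center calculation circuit 644 (names and thresholds are illustrative).
      def select_photoreception_data(xi1, xi2, xi3, xmax2, xlb=20, xhb=230):
          if xlb <= xmax2 <= xhb:
              return xi1     # amplification factor A1 already gives usable data
          if xmax2 < xlb:
              return xi2     # too dark at A1: take the more strongly amplified data
          return xi3         # saturated at A1: take the less strongly amplified data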
  • the three-dimensional cameras 2 and 2 b of the foregoing embodiments are systems which calculate the center ip and transmit the center ip to a host computer 3 .
  • the three-dimensional cameras 2 and 2 b may transmit to the host computer 3 the data Σi·Xi and ΣXi of each pixel as measurement results, and the center calculation may be performed by the host computer 3 .
  • the photoreception data of suitable amplification factor and suitable slit light intensity need not necessarily be selected pixel by pixel; suitable photoreception data may instead be selected in units of a plurality of pixels to perform the center calculation.
  • the actual dynamic range of the received light may be broadened by performing a plurality of scans at different slit light intensities, and processing the photoelectric conversion signals obtained in each scan by different amplification factors.
  • the present invention provides significant advantages over the prior art devices.
  • the present invention provides for the input of three-dimensional data for an entire object at the same resolution as when the depth dimension is small, even when the depth dimension of the object is large.
  • the present invention is capable of obtaining three-dimensional data with regard to an object as if the object exhibited uniform reflectivity, without receiving operation specifications a plurality of times, even when there are marked differences in the reflectivity of the various portions of the object.

Abstract

A three-dimensional input device including a light projecting device for projecting detection light, and an image sensing device for receiving the detection light reflected by an object and converting the received light to electrical signals. The detection light is controlled so as to scan the object periodically while changing the projection direction of the detection light, and to consecutively perform a plurality of scans at mutually different detection light projection angles in accordance with the predefined specifications at the start of operation.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention [0001]
  • The present invention relates to a three-dimensional input device for scanning an object by projecting a detection light on the object and outputting data specifying the object shape. [0002]
  • 2. Description of the Related Art [0003]
  • Three-dimensional input devices of the non-contact type known as rangefinders are used for data input to computer graphics systems and computer-aided design systems, physical measurement, and visual recognition for robots and the like, due to their high speed measurement capability compared to contact type devices. [0004]
  • The slit projection method (also referred to as light-section method) is known as a suitable measurement method for rangefinders. This method produces a distance image (three-dimensional image) by optically scanning an object, and is one type of active measurement method which senses an object by illuminating the object with a specific detection light. A distance image is a collection of pixels expressing the three-dimensional position at a plurality of parts on an object. In the slit projection method, the slit light used as the detection light is a projection beam having a linear band-like cross section. A part of an object is illuminated at a specific moment during the scan, and the position of this illuminated part can be calculated by the trigonometric survey method from the light projection direction and the high luminance position on the photoreceptive surface (i.e., photoreception direction). Accordingly, a group of data specifying the object shape can be obtained by sampling the luminance of each pixel of the photoreceptive surface. [0005]
  • In rangefinders the distance to an object is measured and the detection light projection angle range is adjusted. In other words, an operation control is executed to set the projection light start and end angle position so that the entirety of an object reflected on the photoreceptive surface becomes the scanning range in accordance with the image sensing magnification and the object distance. In sampling the luminance of the photoreceptive surface, methods are known which limit the target of a single sampling not to the entire photoreceptive surface but to a part of the area where the detection light is expected to enter, and shift this area for each sampling. Such methods are capable of scanning at high speed by reducing the time required per sample, and further reduce the load on the signal processing system by decreasing the amount of data. [0006]
  • However, in conventional rangefinders, when a measurement object extends beyond a measurable range in the depth direction, an operator must change the field angle by adjusting the focus or zooming, then re-measuring. As the field angle broadens, the measurable range in the depth direction also broadens, but resolution declines. When part of the photoreceptive surface is the target of a single sampling, the measurable range broadens if this target area broadens. As a result, scanning speed is reduced and the data processing load increases. [0007]
  • Accordingly, a first objective of the present invention is to allow three-dimensional data input for an entire object with the same resolution as when the depth dimension is small even when the depth dimension of the object is large. [0008]
  • In addition to the foregoing, in rangefinders the detection light projection intensity is adjusted for the main measurement by performing a preliminary measurement. In other words, an operation setting is executed wherein the detection light is projected to a part of the image sensing range and the projection intensity is optimized in accordance with the amount of the detection light entering the photoreceptive surface. [0009]
  • In conventional devices, however, when an object used as a measurement object has non-uniform reflectivity, i.e., when there is a large divergence between the high and low reflectivity levels of the object, the detection light reception information corresponding to that part of the object attains saturation or attains a solid level (non-responsive), such that the shape of that area cannot be measured. In this instance, measurement must be performed again to obtain the shape data of the entire object. [0010]
  • Accordingly, a second objective of the present invention is to provide a three-dimensional input device capable of obtaining three-dimensional data with a precision identical to that of an object with uniform reflectivity without receiving operation specifications a plurality of times even when there are marked differences in reflectivity depending on the part of the object. [0011]
  • SUMMARY OF THE PRESENT INVENTION
  • In accordance with the first objective noted above, in one exemplary embodiment of the present invention, the detection light projection angle range is automatically changed, and a plurality of scans are performed. As a result, suitable photoreception information can be obtained by other scans even at the parts of the object from which effective photoreception information cannot be obtained by a particular scan. In other words, the number of scans is theoretically expanded many fold by combining the measurable photoreception range in the depth direction with the measurable photoreception range in each scan. [0012]
  • In accordance with the foregoing embodiment of the present invention, a three-dimensional input device comprises a light projecting means for projecting detection light, and an image sensing means for receiving the detection light reflected by an object and converting said received light to electrical signals, which scans an object periodically while changing the projection direction of the detection light, and consecutively performs a plurality of scans at mutually different detection light projection angle ranges in accordance with the specifications at the start of operation. The specifications at the start of the operation are set by control signals produced by operating a switch or button, or received from an external device. [0013]
  • In accordance with the second objective noted above, in a second exemplary embodiment of the present invention, the projection intensity is automatically changed, and a plurality of scans are performed. As a result, suitable photoreception information can be obtained at other projection intensities even for an area on an object for which suitable photoreception information cannot be obtained at a particular projection intensity. In other words, the amount of entrance light dependent on the reflectivity of a target area can be made to conform to the dynamic range of image sensing and signal processing in at least a single scan. As such, it is possible to measure the shape of the entire area of the measurement range of an object by selecting suitable photoreception data for each part of an object from among photoreception information of a plurality of scans. The objects of the present invention are attained at the moment photoreception information is obtained for a plurality of scans, and the selection of suitable photoreception information can be accomplished within the three-dimensional input device, or by an external device. [0014]
  • Moreover, in the second exemplary embodiment of the present invention, the photoelectric conversion signals obtained by a first scan are amplified by different amplification factors. As a result, photoreception information is obtained which is equivalent to the information obtained by a plurality of scans at different projection intensities, and allows the selection of suitable photoreception information for each part of an object. [0015]
  • In accordance with the second exemplary embodiment of the present invention, a three-dimensional input device comprises a light projecting means for projecting detection light, and an image sensing means for receiving the detection light reflected by an object and converting the received light to electrical signals, which scans an object periodically while changing the projection direction of the detection light, and consecutively projects the projection light at different intensities for each of a plurality of scans in accordance with specifications at the start of operation. The specifications at the start of the operation are set by control signals produced by operating a switch or button, or received from an external device. [0016]
  • Additional objects and advantages and novel features of the present invention will be set forth in part in the description which follows, and in part will become apparent to those skilled in the art upon examination of the following. [0017]
  • The invention itself, together with further objects and advantages, can be better understood by reference to the following detailed description and the accompanying drawings.[0018]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying drawings, which are incorporated in and form a part of the specification, illustrate the embodiments of the present invention and, together with the description, serve to explain the principles of the invention. [0019]
  • FIG. 1 illustrates an exemplary block diagram of the measuring system of the present invention. [0020]
  • FIG. 2 illustrates an exemplary external view of a three-dimensional camera of the present invention. [0021]
  • FIG. 3 is a block diagram illustrating the functional operation of the three-[0022] dimensional camera 2.
  • FIG. 4 is a schematic diagram of the [0023] zoom unit 51 for photoreception in accordance with the first embodiment of the present invention.
  • FIG. 5 illustrates the principle of the three-dimensional position calculation by the [0024] measuring system 1.
  • FIG. 6 illustrates the concept of the center ip. [0025]
  • FIG. 7 is an exemplary schematic diagram illustrating the positional relationship between the object and the principal point of the optical system. [0026]
  • FIG. 8 shows an example of the positional change of the reference plane Ss. [0027]
  • FIG. 9 illustrates an example of the measurable distance range; [0028]
  • FIG. 10 illustrates an example of the measurable distance range; [0029]
  • FIG. 11 illustrates the setting of the deflection parameters; [0030]
  • FIG. 12 illustrates the setting of the deflection parameters; [0031]
  • FIG. 13 is an exemplary flow chart illustrating the operation of the three-[0032] dimensional camera 2;
  • FIG. 14 illustrates an example of the monitor display content; [0033]
  • FIG. 15 illustrates an exemplary sensor reading range; [0034]
  • FIG. 16 illustrates the relationship between the frames and line in the image sensing surface of the sensor; [0035]
  • FIG. 17 illustrates an example of the recorded state of the photoreception data of each frame; [0036]
  • FIG. 18 is a flow chart illustrating the processing sequence of the three-dimensional position calculation by the host; [0037]
  • FIG. 19 is an exemplary schematic view of the zoom unit for photoreception; [0038]
  • FIG. 20 illustrates the principle for calculating a three-dimensional position in the measuring system; [0039]
  • FIG. 21 illustrates an exemplary positional relationship between the object and the principal point of the optical system; [0040]
  • FIG. 22 is an exemplary flow chart showing the operation of the main measurement; [0041]
  • FIG. 23 is an exemplary flow chart of the calculation of the slit light intensity suitable for the main measurement; [0042]
  • FIG. 24 shows exemplary settings of the slit light intensity. [0043]
  • FIG. 25 illustrates the reading range of the [0044] sensor 53;
  • FIGS. [0045] 25(a)-(c) illustrate representative examples of the relationship between the maximum value Xmax and the minimum value Xmin of the photoreception data;
  • FIG. 26 is an exemplary flow chart of the output memory control; [0046]
  • FIG. 27 is an exemplary block diagram illustrating a variation of the three-dimensional camera [0047] 2 b described above in the foregoing embodiment.
  • FIG. 28 is an exemplary flow chart of the main measurement executed by the three-dimensional camera [0048] 2 b.
  • FIG. 29 is an exemplary block diagram of the output processing circuit [0049] 62 b of FIG. 27.
  • FIG. 30 is an exemplary block diagram of the maximum value determination circuit [0050] 643 of FIG. 29.
  • FIG. 31 shows an example of the relationship between the threshold values XLb and XHb and the maximum value Xmax2 of the photoreception data corresponding to a single pixel.[0051]
  • DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • In the following description, numerous specific details are set forth in order to provide a thorough and detailed understanding of the system of the present invention. It will be obvious, however, to one skilled in the art that these specific details need not be employed exactly as set forth herein to practice the present invention. [0052]
  • FIG. 1 shows the construction of a [0053] measuring system 1 of the present invention. The measuring system 1 comprises a three-dimensional camera (rangefinder) 2 for performing stereoscopic measurement using the slit projection method, and a host computer 3 for processing the data output from the three-dimensional camera 2.
  • The three-[0054] dimensional camera 2 outputs measurement data specifying the three-dimensional positions of a plurality of sampling points on an object Q, and outputs data necessary for calibration and a two-dimensional image expressing the color information of the object Q. The host computer 3 manages the calculation process for determining the coordinates of the sampling points using the trigonometric survey method.
  • The [0055] host computer 3 comprises a central processing unit (CPU) 3 a, a display 3 b, a keyboard 3 c, and a mouse 3 d. Software for measurement data processing is included in the CPU 3 a. Data can be transferred between the host computer 3 and the three-dimensional camera 2 either online or offline using a portable recording medium 4. A magneto-optic disk (MO), minidisk (MD), memory card and the like may be used as the recording medium 4.
  • FIG. 2 illustrates an exemplary external view of a three-dimensional camera of the present invention. A [0056] projection window 20 a and a reception window 20 b are provided on the front surface of a housing 20. The projection window 20 a is positioned on the top side relative to the reception window 20 b. The slit light (a band-like laser beam of predetermined width w) U emitted from an internal optical unit OU is directed toward the object being measured (photographic object) through the projection window 20 a. The radiation angle φ in the lengthwise direction M1 of the slit light U is fixed. The optical unit OU is provided with a two-axis adjustment mechanism for optimizing the relative relationship between the projection light axis and the reception light axis.
  • The top surface of the [0057] housing 20 is provided with zoom buttons 25 a and 25 b, manual focusing buttons 26 a and 26 b, and a shutter button 27. As shown in FIG. 2b, the back surface of the housing 20 is provided with a liquid crystal display 21, a cursor button 22, a selector button 23, a cancel button 24, an analog output pin 32, a digital output pin 33, and an installation aperture 30 a for the recording medium 4.
  • The liquid crystal display (LCD) [0058] 21 is used as a display for the operation screens, and as an electronic viewfinder. A photographer sets the photographic mode by means of the various buttons 22 through 24 on the back surface. Two-dimensional image signals are output in NTSC format from the analog output pin 32. The digital output pin 33 is, for example, a SCSI pin.
  • FIG. 3 is a block diagram illustrating the functional operation of the three-[0059] dimensional camera 2. The solid arrow in the drawing indicates the flow of the electrical signals, and the broken arrow indicates the flow of light.
  • The three-[0060] dimensional camera 2 is provided with two optical units 40 and 50 on the light projection side and the light reception side, which comprise the previously mentioned optical unit OU. In the optical unit 40, a laser beam having a wavelength of 670 nm emitted from a semiconductor laser (LD) 41 passes through a projection lens 42 to form the slit light, which is deflected by a galvano mirror (scanning mechanism) 43. A system controller 61 controls the driver 44 of the semiconductor laser 41, the drive system 45 of the projection lens system 42, and the drive system 46 of the galvano mirror 43.
  • In the [0061] optical unit 50, light focused by the zoom unit 51 is split by the beam splitter 52. The light in the oscillation wavelength range of the semiconductor laser 41 enters a measurement sensor 53. The light in the visible range enters the monitor color sensor 54. The sensor 53 and the color sensor 54 are both charge-coupled device (CCD) area sensors. The zoom unit 51 is an internal focus type, which uses part of the entering light for autofocusing (AF). The autofocus function is realized by an autofocus (AF) sensor 57, a lens controller 58, and a focusing drive system 59. The zooming drive system 60 is provided for non-manual zooming.
  • Image sensing information obtained by the [0062] sensor 53 is transmitted to the output processing circuit 62 synchronously with a clock signal from the drive 55. In accordance with a first embodiment of the present invention, the image sensing information is temporarily stored in the memory 63 as 8-bit photoreception data. The address specification in the memory 63 is performed by the memory controller 63A. The photoreception data are transmitted from the memory 63 to the center calculation circuit 73 with a predetermined timing, and the center calculation circuit 73 generates data (hereinafter referred to as “center ip”) used as a basis for the calculation of the three-dimensional position of a target object. The center ip data are transmitted as a calculation result through the output memory 64 to the SCSI controller 66. The center ip data are output as monitor information from the center calculation circuit 73 to the display memory 74, and used to display a distance image on the liquid crystal display (LCD) 21.
  • In accordance with a second embodiment of the present invention, the [0063] output processing circuit 62 generates the data (the “center ip”) used as a basis for the calculation of the three-dimensional position of a target object based on the input image sensing information, and outputs the center ip as a measurement result to the SCSI controller 66. The data generated by the output processing circuit 62 are output as monitor information to the display memory 74, and used to display a distance image on the liquid crystal display 21. The output processing circuit 62 is described in detail later.
  • The image sensing information obtained by the [0064] color sensor 54 is transmitted to the color processing circuit 67 synchronously with clock signals from the driver 56. The image sensing information obtained by the color processing is output online through the NTSC conversion circuit 70 and the analog output pin 32, or is binarized by the digital image generator 68 and stored in the color image memory 69. Thereafter, the color image data are transmitted from the color image memory 69 to the SCSI controller 66.
  • The [0065] SCSI controller 66 outputs the data from the output memory 64 (first embodiment) or the output processing circuit 62 (second embodiment) and the color image from the digital output pin 33, or stores the data on the recording medium 4. The color image is an image having the same field angle as the distance image based on the output of the sensor 53, and is used as reference information by application processing on the host computer 3 side. The processes using color information include, for example, processes for generating a three-dimensional model by combining the measurement data of a plurality of groups having different camera perspectives, and processes for culling unnecessary peaks from a three-dimensional model. The system controller 61 provides specifications for displaying suitable text and symbols on the screen of the liquid crystal display 21 relative to a character generator not shown in the drawings.
  • First Embodiment [0066]
  • In accordance with the first embodiment of the present invention, FIG. 4 is a schematic diagram of the [0067] zoom unit 51 for photoreception.
  • The [0068] zoom unit 51 comprises a front image forming unit 515, a variator 514, a compensator 513, a focusing unit 512, a back image forming unit 511, and a beam splitter 516 for guiding part of the incidence light to the AF sensor 57. The front image forming unit 515 and the back image forming unit 511 are fixed relative to the optical path.
  • The movement of the focusing unit [0069] 512 is managed by the focus drive system 59, and the movement of the variator 514 is managed by the zoom drive system 60. The focus drive system 59 is provided with a focusing encoder 59A for specifying the moving distance (lens feedout amount Ed) of the focusing unit 512. The zoom drive system 60 is provided with a zooming encoder 60A for specifying the moving distance (lens notch value fp) of the variator 514.
  • FIG. 5 illustrates the principle of the three-dimensional position calculation in measuring [0070] system 1, and FIG. 6 illustrates the concept of the center ip. It is noted that although only five photoreception samplings are shown in FIG. 5 to simplify the discussion, the three-dimensional camera 2 conducts 32 samplings per pixel.
  • As shown in FIG. 5, the slit light U projected on the object Q has a relatively wide width, spanning several pixels g at pitch pv in the slit width direction, on the [0071] image sensing surface S2 of the sensor 53. Specifically, the width of the slit light U is approximately 5 pixels. The slit light U is deflected at equal angular speed about origin point A in vertical directions. The slit light U reflected by the object Q passes through the principal point O (principal point H′ on the back side of the zoom unit) of image formation, and enters the image sensing surface S2. The object Q (specifically, the object image) is scanned by periodic sampling of the amount of light received by the sensor 53 in the projection of the slit light U. Photoelectric conversion signals are output for each frame from sensor 53 in each sampling period (sensor actuation period).
  • When the surface of the object Q is flat and there is no noise in the characteristics of the optical system, the amount of light received by each pixel g of the image sensing surface S[0072] 2 of the sensor 53 reaches a maximum at the time Npeak at which the optical axis of the slit light U passes through the object surface range ag viewed by pixel g, and the temporal distribution approaches a normal distribution. In the example shown in FIG. 5b, the time Npeak occurs between the No. n sample and the previous No. (n-1) sample.
  • In the present embodiment, one frame is designated as 32 lines corresponding to part of the image sensing surface S[0073] 2 of the sensor 53. Accordingly, when one pixel g is targeted on the image sensing surface S2, 32 samplings are executed during the scan, and 32 individual photoreception data are obtained. The center ip (time center) corresponding to the time Npeak is calculated by a center calculation using the photoreception data of the aforesaid 32 frames.
  • The center ip is the center on the time axis of the distribution of the 32 photoreception data obtained by the [0074] 32 samplings as shown in FIG. 6. The 32 photoreception data of each pixel are provided a sampling number of 1 through 32. The No. i photoreception data are expressed as xi, where i is an integer of 1 to 32. At this time, i represents the frame number counted from when the target pixel enters the effective photoreception range comprising 32 lines.
  • The center ip of the 1 through 32 photoreception data x[0075] 1 through x32 is determined by dividing the total sum Σi·xi of the data i·xi by the total sum Σxi of the data xi:
  • ip = ( Σ i·xi ) / ( Σ xi ), where the sums are taken over i = 1 to 32  (1)
  • The center calculation circuit [0076] 73 calculates the center ip (i.e., the time center Npeak) using Equation 1.
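  • A minimal Python sketch of Equation 1 is given below for illustration only; the function name is arbitrary.

      # Time center (centroid on the time axis) of the 32 photoreception data x1..x32
      # of one pixel, as in Equation (1).
      def time_center(samples):
          num = sum(i * x for i, x in enumerate(samples, start=1))   # Σ i·xi
          den = sum(samples)                                         # Σ xi
          return num / den if den else None                          # center ip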
  • The [0077] host 3 calculates the position (coordinates) of the object Q using the trigonometric survey method based on the relationship between the projection direction of the slit light U at the determined center ip and the entrance direction of the slit light U on the target pixel g. As such, it is possible to measure with higher resolution than the resolution stipulated by the pitch pv of the pixel array of the image sensing surface S2.
  • The measurement sequence and combination of the operation of the [0078] host 3 and the operation of the three-dimensional camera 2 of the first embodiment of the present invention are described below.
  • A user (photographer) determines the camera position and direction and sets the field angle while viewing a color monitor image displayed on the [0079] LCD 21. Either a zooming operation or focusing operation or both are performed as necessary at this time. In response to the operation, the variator 514 and the focusing unit 512 in the zoom unit 51 are moved, and the lens notch value fp and the lens feedout amount Ed at this time are successively transmitted to the system controller 61 through the lens controller 58. The system controller 61 calculates the object interval distance D using a conversion table generated beforehand based on Equation 2:
  • D=G(fp,Ed)  (2)
  • Where G represents a function. [0080]
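  • One possible realization of the conversion table for Equation 2 is sketched below in Python; the table layout and the simple grid lookup are assumptions made for illustration, not the device's actual table format.

      import bisect

      # D = G(fp, Ed) realized as a table indexed by the lens notch value fp and the
      # lens feedout amount Ed; the lookup picks the first tabulated grid point not
      # below the query (a real implementation would interpolate).
      def make_distance_lookup(fp_values, ed_values, d_table):
          """fp_values, ed_values: sorted grids; d_table[i][j] holds D for fp_values[i], ed_values[j]."""
          def lookup(fp, ed):
              i = min(bisect.bisect_left(fp_values, fp), len(fp_values) - 1)
              j = min(bisect.bisect_left(ed_values, ed), len(ed_values) - 1)
              return d_table[i][j]
          return lookup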
  • When a user turns ON the [0081] shutter button 27, the field angle is set. When this setting is received by the system controller 61, the slit light U deflection parameters (scan start angle, scan end angle, deflection angular speed) are set based on the determined object interval distance D.
  • FIG. 7 is a schematic diagram of the positional relationship between the object and the principal point of the optical system. [0082]
  • The basic content of setting the deflection parameters are described below. Assume the presence of a reference plane Ss at the position of the object interval distance D, and infer a region reflected (projected) on the image sensing surface S[0083] 2 in the reference plane Ss from the image sensing magnification. The scan start angle and the scan end angle are determined so that the slit light U is projected from the bottom edge of the whole region, and the deflection angular speed is determined so that the required time of the scan is a constant value.
  • In actual settings, the offset Doff between the origin A and the anterior principal point H of the optical system, which is the measurement reference point of the object interval distance D, in the Z direction, is considered. In order to assure a measurable distance range d similar to the center area even at the edges of the scan, a predetermined amount of over scan (e.g., 16 pixels) is used. The scan start angle th[0084] 1, the scan end angle th2, and the deflection angular speed ω are expressed in the following equations:
  • th1 = tan−1[(β×pv×(np/2+16)+L)/(D+Doff)]×180/π  (3)
  • th2 = tan−1[(−β×pv×(np/2+16)+L)/(D+Doff)]×180/π  (4)
  • ω = (th1−th2)/np  (5)
  • Where β represents the image sensing magnification (=D/real focal length freal), pv represents the pixel pitch, np represents the effective pixel number in the Y direction of the image sensing surface S[0085] 2, and L represents the base line length.
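  • For illustration, Equations (3) through (5) can be evaluated directly as in the Python sketch below; the function name is arbitrary, and the fixed 16-pixel over scan argument follows the text above.

      import math

      # Scan start angle th1, scan end angle th2 (in degrees) and the deflection
      # angular speed, from the object interval distance D, image sensing
      # magnification beta, pixel pitch pv, effective pixel number np, base line
      # length L and the offset Doff.
      def deflection_parameters(D, beta, pv, np_pixels, L, Doff, overscan=16):
          y = beta * pv * (np_pixels / 2 + overscan)
          th1 = math.degrees(math.atan((y + L) / (D + Doff)))     # Equation (3)
          th2 = math.degrees(math.atan((-y + L) / (D + Doff)))    # Equation (4)
          omega = (th1 - th2) / np_pixels                         # Equation (5)
          return th1, th2, omega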
  • The measurable distance range d is dependent on the number of lines of one frame in the readout operation by the [0086] sensor 53. The greater the number of lines, the broader the measurable distance range becomes, but the readout time lengthens, the data quantity increases and the processing load increases. The present embodiments are based on this principle and use 32 lines per frame as stated above.
  • If the object Q fits within the measurable distance range d, the object Q can be measured in a single scan. However, when the object Q has a large dimension in the depth direction (Z direction), such that part of the object Q extends beyond the measurable distance range, the shape data of that part cannot be obtained. [0087]
  • The three-[0088] dimensional camera 2 consecutively performs a plurality of scans (specifically, 3) while automatically changing the reference plane Ss. In this way shape data are obtained in a broader range than the measurable distance range d without reduction of resolution.
  • FIG. 8 shows an example of the positional change of the reference plane Ss. [0089]
  • In the first scan the reference plane Ss is set at position Z0 of the object interval distance D, in the second scan the reference plane Ss is set at position Z1 of the object interval distance D[0090] 1 (D1<D), and in the third scan the reference plane Ss is set at the position Z2 of the object interval distance D2 (D2>D). That is, in the second scan the reference plane Ss is moved to the front side, and in the third scan the reference plane Ss is moved to the back side. The sequence of the scan and the reference plane Ss position are optional, and the back side scan may be performed before the front side scan without problem. The object interval distances D1 and D2 are set such that the measurable distance ranges d2 and d3 of the second and third scans partially overlap the measurable distance range d of the first scan. Overlapping the measurable distance ranges d, d2, d3 allows the shape data obtained in each scan to be easily combined. Since the image sensing magnification is fixed in all three scans, the angle range for projecting the slit light U changes if the reference plane Ss position changes. Accordingly, the deflection parameters are calculated for each scan.
  • FIGS. 9 and 10 illustrate the measurable distance range. [0091]
  • When the reference plane Ss is set at position Z0 as in FIG. 9, and the address in the Y direction of the pixel through which the photoreception axis passes in the image sensing surface S[0092] 2 is designated nc, the scan angle θ1 at the start of sampling of the pixel at address nc can be expressed as shown below:
  • θ1= th 1−ω  (6)
  • The addresses in the Y direction are incremented 1, 2, 3 . . . , with the top address (scan start address) designated 1. [0093]
  • The scan angle θ2 at the end of the sampling of the pixel at address nc is determined by the following equation when the number of samples per pixel is designated j (i.e., 32 in the present example).[0094]
  • θ2 =θ1−ω×(j−1)  (7)
  • The measurable distance range d′ of the pixel at address nc is the depth range from the intersection point Z1 of the projection axis of the scan angle θ1 and the line of sight of the pixel at address nc to the intersection point Z2 of the projection axis of the scan angle θ2 and the line of sight of the pixel at address nc. [0095]
  • Similarly, other than the address nc, the scan angles θ1m and θ2m of the sampling start and end of address nm can be determined. The measurable distance range dm of the pixel at address nm is the depth range from the intersection point Z1 of the projection axis of the scan angle θ1m and the line of sight of the pixel at address nm to the intersection point Z2 of the projection axis of the scan angle θ2m and the line of sight of the pixel at address nm as shown in FIG. 10. [0096]
  • FIGS. 11 and 12 illustrate the setting of the deflection parameters. [0097]
  • Regarding the second scan, the boundary position Z1 on the front side (left side in the drawing) of the aforesaid measurable distance range d is used as a reference position to determine the scan start angle th[0098] 1 a and the scan end angle th2 a. Since tanθ1=L/(Doff+D1), the following relationship holds:
  • D 1=(L/tan θ1)−Doff  (8)
  • The object interval distance D[0099] 1 is determined from Equation 8, and Equations 3 and 4 are applied. The scan start angle th1 a and the scan end angle th2 a can be expressed by Equations 9 and 10:
  • th1a = tan−1[(β×pv×(np/2+16)+L)/(D1+Doff)]×180/π  (9)
  • th2a = tan−1[(−β×pv×(np/2+16)+L)/(D1+Doff)]×180/π  (10)
  • At this time, the deflection angular speed ωa is:[0100]
  • ωa=(th 1 a−th 2 a)/np  (11)
  • Regarding the third scan, the boundary position Z2 on the back side (right side in the drawing) of the aforesaid measurable distance range d is used as a reference position to determine the scan start angle th[0101] 1 b and the scan end angle th2 b. The scan start angle th1 b and the scan end angle th2 b can be expressed by Equations 12 and 13:
  • th1b = tan−1[(β×pv×(np/2+16)+L)/(D2+Doff)]×180/π  (12)
  • th2b = tan−1[(−β×pv×(np/2+16)+L)/(D2+Doff)]×180/π  (13)
  • At this time, the deflection angular speed ωb is:[0102]
  • ωb=(th 1 b−th 2 b)/np  (14)
  • As such, the range d′ from position Z11 to position Z22 becomes the measurable distance range of the pixel at address nc by performing three scans at the determined deflection parameters (th[0103] 1, th2, ω), (th1 a, th2 a, ωa), (th1 b, th2 b, ωb) (refer to FIG. 8).
  • Two or more scans may be performed, or four or more scans may be performed to broaden the measurable distance range. If the deflection parameters are determined for the fourth and subsequent scans based on the deflection parameters of the second and third scans, the measurable distance range of each scan can be accurately overlapped. [0104]
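  • A short Python sketch of Equations (8) through (11) follows, for illustration only; the function name is arbitrary, and the third scan of Equations (12) through (14) is computed in the same way with D2 in place of D1.

      import math

      # From the front-side boundary angle theta1 (in degrees) of the first scan's
      # measurable range, Equation (8) gives D1; the second scan's deflection
      # parameters are then the same expressions as Equations (3)-(5) evaluated at D1.
      def second_scan_parameters(theta1_deg, beta, pv, np_pixels, L, Doff, overscan=16):
          D1 = L / math.tan(math.radians(theta1_deg)) - Doff               # Equation (8)
          y = beta * pv * (np_pixels / 2 + overscan)
          th1a = math.degrees(math.atan((y + L) / (D1 + Doff)))            # Equation (9)
          th2a = math.degrees(math.atan((-y + L) / (D1 + Doff)))           # Equation (10)
          omega_a = (th1a - th2a) / np_pixels                              # Equation (11)
          return D1, th1a, th2a, omega_a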
  • In each scan, an autofocus adjustment is executed to adjust the lens feedout amount Ed in accordance with the object interval distances D[0105] 1 and D2. The system controller 61 determines the lens feedout amount Ed using a conversion table generated beforehand based on Equation 15, and this value is set as the control target value in the lens controller 58.
  • Ed=F(D,fp)  (15)
  • Where F represents a function. [0106]
  • In each scan, the [0107] variator 514 is fixed in the state set by the user, and the lens notch value fp=fp0. The system controller 61 calculates the lens feedout amounts Ed1 and Ed2 for the second and third scans, using the object interval distances D1 and D2 obtained from the first deflection parameters (th1, th2, ω) and Equations 16 and 17.
  • Ed 1=F( D 1,fp 0)  (16)
  • Ed 2=F( D 2,fp 0)  (17)
  • FIG. 13 is a flow chart briefly showing the operation of the three-[0108] dimensional camera 2. When the measurement start is specified by the shutter button 27, the system controller 61 calculates the deflection parameters for the first scan, and then calculates the deflection parameters for the second and third scans based on the first calculation result (Steps 50˜52).
  • The deflection parameters of the first scan are set, and scanning starts ([0109] Steps 53, 54). The projection of the slit light U and the sampling of the amount of received light via sensor 53 are started. The scan is executed until the deflection angle position attains the scan end angle th2.
  • When the first scan ends, the [0110] system controller 61 sets the deflection parameters for the second scan (Steps 55, 56). The lens feedout amount Ed2 is determined for this deflection parameter, and the movement of focusing unit 512 is specified to the focusing drive system 59 via the lens controller 58 (Step 57).
  • When the movement of the focusing unit [0111] 512 ends, the second scan starts (Steps 58, 59). When the second scan ends, the system controller 61 sets the deflection parameters for the third scan, and specifies the movement of the focusing unit 512 (Steps 60˜62). When focusing ends, the third scan starts (Steps 63, 64).
  • When the third scan ends, a distance image representing the three scan measurement results is displayed on the monitor display, and data processing control is executed to output the center ip based on the photoreception data obtained by the scans designated by the user ([0112] Steps 65, 66).
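  • For orientation, the scan sequence of FIG. 13 can be condensed into the following Python-style sketch; the camera object and its method names are placeholders, not the patent's terminology.

      # Three consecutive scans: the deflection parameters of the second and third
      # scans are derived from the first calculation result, and the focusing unit
      # is moved to the new reference-plane distance before each of those scans.
      def run_three_scans(camera):
          params = camera.compute_deflection_parameters()   # parameters for scans 1, 2, 3
          for scan_index, p in enumerate(params, start=1):
              camera.set_deflection_parameters(p)
              if scan_index > 1:
                  camera.move_focusing_unit(p.object_distance)   # lens feedout Ed1 / Ed2
              camera.scan()            # project the slit light U and sample the sensor 53
          camera.display_distance_images()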
  • FIG. 14 shows an example of the monitor display content. When the object of measurement is based on the positions Z11 and Z22 as shown in FIG. 14[0113] b, part of the object Q is measured in each scan. The three distance images g1, g2, g3 representing the result of each scan are shown aligned from left to right in the sequence of the reference plane position on the liquid crystal display 21, as shown in FIG. 14a. That is, the distance image g1 corresponding to the first scan is shown in the center, the distance image g2 corresponding to the second scan is shown on the left side, and the distance image g3 corresponding to the third scan is shown on the right side.
  • An operator specifies whether or not the result of a scan is output by means of the [0114] cursor button 22 and the selector button 23. Two or more scans may be specified, or one scan may be specified. The first scan and the second scan may be specified as indicated by the upward arrows in the example of FIG. 14.
  • The specified scan result (i.e., the center ip of a specific number of pixels) is output from the center calculation circuit [0115] 73 to the host 3 or the recording medium 4 via the output memory 64 and the SCSI controller 66. At the same time, device information including the specifications of the sensors and the deflection parameters also are output. Table 1 shows the main data transmitted by the three-dimensional camera 2 to the host 3.
    TABLE 1
    Data                  Content                                       Data Range
    Measurement data      Σxi                                           200 × 200 × 13 bit
                          Σi·xi                                         200 × 200 × 18 bit
    Photographic          Image distance b                              0.000˜200.000
    conditions            Front principal point position FH             0.00˜300.00
                          Slit deflection start angle th1
                          Deflection angular speed ω
    Device                Measurement pixel number                      ˜0.00516˜
    information           (number of samples in X, Y directions)
                          Sensor pixel pitch pu, pv                     0.00˜±90.00
                          Projection system posture                     0.00˜±300.00
                          (around X, Y, Z axes)
                          Projection system posture
                          (in X, Y, Z directions)
                          Lens distortion correction                    0.00˜256.00
                          coefficient d1, d2
                          Sensor center pixel u0, v0
    Two-dimensional       R plane 512 × 512 × 8 bit                     0˜255
    image                 G plane 512 × 512 × 8 bit                     0˜255
                          B plane 512 × 512 × 8 bit                     0˜255
  • The data processing executed in the [0116] measurement system 1 in accordance with the first embodiment is described below.
  • FIG. 15 shows the reading range of the [0117] sensor 53. The reading of one frame by the sensor 53 is not executed for the entire image sensing surface S2, but is executed only for the effective photoreception region (band image) Ae of part of the image sensing surface S2 to facilitate high speed processing. The effective photoreception region Ae is a region on the image sensing surface S2 corresponding to the measurable distance range in a specific illumination timing, and shifts one pixel at a time for each frame in conjunction with the deflection of the slit light U. In the present embodiment, the number of pixels in the shift direction of the effective photoreception region Ae is fixed at 32. The method for reading only part of the sensed image of a CCD area sensor is disclosed in U.S. Pat. No. 5,668,631.
  • FIG. 16 illustrates the relationship between frames and the lines in the image sensing surface S[0118] 2 of the sensor 53, and FIG. 17 shows the recorded state of the photoreception data of each frame.
  • As shown in FIG. 16, [0119] frame 1, which is the first frame of the image sensing surface S2, includes the photoreception data of 32 lines by 200 pixels from line 1 through line 32. One line comprises 200 pixels. Each frame is shifted one line, i.e., frame 2 includes line 2 through line 33, and frame 3 includes line 3 through line 34. Frame 32 includes line 32 through line 63.
  • The photoreception data of [0120] line 1 through line 32 are sequentially subjected to analog-to-digital (A/D) conversion, and stored in the memory 63 (refer to FIG. 3). As shown in FIG. 17, the photoreception data are stored in the sequence of frame 1, frame 2, frame 3 and the like, and the data of line 32 are shifted upward one line in each successive frame, i.e., the data of line 32 appear on the 32nd line of frame 1, on the 31st line of frame 2, and so on. The sampling of the pixels of line 32 ends by storing the photoreception data of frame 1 through frame 32 in the memory 63. The photoreception data of each pixel at the end of sampling are sequentially read from the memory 63 for the center calculation. The content of the center calculation is described below.
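  • The frame and line bookkeeping described above can be expressed, purely for illustration, by the Python sketch below (the array layout and function name are assumptions): frame f covers lines f through f+31, so a pixel on line n is sampled in frames n−31 through n, and within frame f its line sits at row n−f+1.

      # Gather the 32 samples x1..x32 of the pixel at (line, column) from the
      # stored frames, following the shift-by-one-line-per-frame pattern of FIG. 17.
      def collect_pixel_samples(frames, line, column, lines_per_frame=32):
          """frames[f-1][row-1][column] = photoreception data of frame f, row, column."""
          first_frame = line - lines_per_frame + 1
          samples = []
          for f in range(first_frame, line + 1):
              row = line - f + 1            # 32 in the first frame, 31 in the next, ...
              samples.append(frames[f - 1][row - 1][column])
          return samples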
  • The center ip specifies a position on the surface of the object Q. The closer a position on the surface of the object Q is to the three-[0121] dimensional camera 2, the larger the value of the center ip, and the farther away the position is from the camera 2, the smaller the value of the center ip. Accordingly, a distance distribution can be realized by displaying a variable density image using the center ip as density data.
  • Three-dimensional position calculation processing is executed in the [0122] host 3, to calculate the three-dimensional position (coordinates X, Y, Z) of 200×200 sampling points (pixels). The sampling points are the intersections of the camera line of sight (a straight line connecting the sampling point. and the front principal point H) and the slit plane (the optical axis plane of the slit light U illuminating the sampling point).
  • FIG. 18 is a flow chart showing the processing sequence of the three-dimensional position calculation by the host. First, a determination is made as to whether or not the total sum Σxi of xi transmitted from the three-[0123] dimensional camera 2 exceeds a predetermined value (Step 11). Since too much error is included when the value xi is small, i.e., when the total sum Σxi of the slit light component does not satisfy a predetermined reference, the three-dimensional position calculation is not executed for that pixel. Data expressing [error] is stored in memory for that pixel (Step 17). When Σxi exceeds a predetermined value, the three-dimensional position is calculated for that pixel because there is sufficient luminance.
  • Before the three-dimensional position calculation, the slit light U passage timing nop is calculated (Step [0124] 12). The passage timing nop is calculated by computing (Σi·xi)/(Σxi) (where i=1˜32) to determine the center ip (time center Npeak), and adding this value to the line number.
  • Since the calculated center ip is the timing within the 32 frames obtained by the pixel output, the center ip is converted to the passage timing nop from the start of the scan by adding the line number. Specifically, the line number of the pixel of [0125] line 32 calculated at the start is [32], and the line number of the next line 33 is [33]. The line number of the line of the target pixel increases by 1 with each one line advance. Other suitable values are also possible. The reason for this is that when calculating a three-dimensional position, suitable set values can be calibrated by canceling the rotation angle the1 around the X axis and the angular speed the4 around the X axis in the coefficient of Equation 20 described below.
  • Then, the three-dimensional position is calculated (Step [0126] 13). The calculated three-dimensional position is stored in a memory area corresponding to the pixel (Step 14), and similar processing is then executed for the next pixel (Step 16). The routine ends when processing is completed for all pixels (Step 10).
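  • A compact Python sketch of this per-pixel sequence is given below for illustration; the threshold value and the calculate_position callback are placeholders for the predetermined reference value and for the line-of-sight/slit-plane intersection described next.

      # Per-pixel processing of FIG. 18: reject pixels whose slit light component
      # Σxi is too small, otherwise form the passage timing nop = center ip + line
      # number and hand it to the three-dimensional position calculation.
      def process_pixel(samples, line_number, threshold, calculate_position):
          total = sum(samples)                                   # Σ xi
          if total <= threshold:
              return None                                        # store [error] for this pixel
          ip = sum(i * x for i, x in enumerate(samples, 1)) / total
          nop = ip + line_number                                 # passage timing
          return calculate_position(nop)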
  • The method of calculating the three-dimensional position is described below. The camera line of sight equations are [0127] Equations 18 and 19:
  • (u−u 0)=(xp)=(b/pu)×[X/(Z−FH)]  (18)
  • (v−v 0)=(yp)=(b/pv)×[Y/(Z−FH)]  (19)
  • Where b represents the image distance, FH represents the front principal point, pu represents the pixel pitch in the horizontal direction of the image sensing surface, pv represents the pixel pitch in the vertical direction in the image sensing surface, u represents the pixel position in the horizontal direction in the image sensing surface, u[0128] 0 represents the center pixel position in the horizontal direction in the image sensing surface, v represents the pixel position in the vertical direction in the image sensing surface, and v0 represents the center pixel position in the vertical direction in the image sensing surface.
  • The slit [0129] plane equation 20 is shown below.
  • [ cos(the3)  −sin(the3)  0 ]   [ cos(the2)   0  sin(the2) ]   [ 1  0                     0                    ]   [ 0 ]   [ X     ]
    [ sin(the3)   cos(the3)  0 ] × [ 0           1  0         ] × [ 0  cos(the1+the4·nop)  −sin(the1+the4·nop) ] × [ 1 ] · [ Y − L ] = 0  (20)
    [ 0           0          1 ]   [ −sin(the2)  0  cos(the2) ]   [ 0  sin(the1+the4·nop)   cos(the1+the4·nop) ]   [ 0 ]   [ Z − s ]
  • Where the[0130] 1 represents the rotation angle around the X axis, the2 represents the incline angle around the Y axis, the3 represents the incline angle around the Z axis, the4 represents the angular speed around the X axis, nop represents the passage timing (center ip + line number), L represents the base line length, and s represents the offset of origin point A.
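  • A hedged Python sketch of the position calculation implied by Equations (18) through (20) follows; it reads the slit plane normal as the Y-axis unit vector rotated as in Equation (20), takes (0, L, s) as a point on that plane, and intersects it with the camera line of sight of Equations (18) and (19). Parameter names follow the text, but the packaging into dictionaries is an assumption.

      import numpy as np

      def rot_x(a): c, s = np.cos(a), np.sin(a); return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
      def rot_y(a): c, s = np.cos(a), np.sin(a); return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
      def rot_z(a): c, s = np.cos(a), np.sin(a); return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

      def position_from_pixel(u, v, nop, cam, slit):
          """cam: dict with b, FH, pu, pv, u0, v0;  slit: dict with the1..the4, L, s."""
          # camera line of sight from Equations (18) and (19), parameterized by Z - FH
          dx = cam["pu"] * (u - cam["u0"]) / cam["b"]
          dy = cam["pv"] * (v - cam["v0"]) / cam["b"]
          origin = np.array([0.0, 0.0, cam["FH"]])
          direction = np.array([dx, dy, 1.0])
          # slit plane normal and a point on the plane, from Equation (20)
          angle = slit["the1"] + slit["the4"] * nop
          normal = rot_z(slit["the3"]) @ rot_y(slit["the2"]) @ rot_x(angle) @ np.array([0.0, 1.0, 0.0])
          point = np.array([0.0, slit["L"], slit["s"]])
          t = np.dot(normal, point - origin) / np.dot(normal, direction)
          return origin + t * direction          # (X, Y, Z) of the sampling point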
  • The geometric aberration is dependent on the field angle. Distortion is generated in a subject with the center pixel as the center. Accordingly, the amount of distortion is expressed as a function of the distance from the center pixel. In this case, the distance approaches a cubic function. The secondary correction coefficient is designated d[0131] 1 and the tertiary correction coefficient is designated d2. After correction, the pixel positions u′ and v′ are applied to Equations 21 and 22:
  • u′ = u + d1×t2^2×(u−u0)/t2 + d2×t2^3×(u−u0)/t2  (21)
  • v′ = v + d1×t2^2×(v−v0)/t2 + d2×t2^3×(v−v0)/t2  (22)
  • where: t[0132] 2 = (t1)^(1/2) and t1 = (u−u0)^2+(v−v0)^2
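  • The correction of Equations (21) and (22) can be written as the short Python sketch below; it assumes the reading t1 = (u−u0)^2 + (v−v0)^2 and t2 = (t1)^(1/2), with d1 the second-order and d2 the third-order correction coefficient, and the function name is arbitrary.

      import math

      # Distortion-corrected pixel position (u', v') around the center pixel (u0, v0).
      def correct_distortion(u, v, u0, v0, d1, d2):
          t1 = (u - u0) ** 2 + (v - v0) ** 2
          t2 = math.sqrt(t1)
          if t2 == 0.0:
              return u, v                      # the center pixel needs no correction
          u_prime = u + d1 * t2 ** 2 * (u - u0) / t2 + d2 * t2 ** 3 * (u - u0) / t2
          v_prime = v + d1 * t2 ** 2 * (v - v0) / t2 + d2 * t2 ** 3 * (v - v0) / t2
          return u_prime, v_prime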
  • A three-dimensional position which considers aberration can be determined by substituting u′ for u and v′ for v in [0133] Equations 18 and 19. Calibration is discussed in detail by Onodera and Kanaya, “Geometric correction of unnecessary images in camera positioning,” The Institute of Electronics, Information and Communications Engineers Research Data PRU 91-113, and Ueshiba, Yoshimi, Oshima, et al., “High precision calibration method for rangefinder based on three-dimensional model optical for optical systems,” The Institute of Electronics, Information and Communications Engineers Journal D-II, vol. J74-D-II, No. 9, pp. 1227-1235, September, 1991.
  • Although the object interval distance D is calculated by a passive type measurement detecting the position of the focusing unit and the zoom unit in the aforesaid embodiment, it is also possible to use an active type measurement to calculate the object interval distance using a trigonometric survey method by a preliminary measurement by projecting the slit light U at a predetermined angle and detecting the incidence angle. Furthermore, the deflection parameters may be set based on the object interval distance preset without measurement, or based on an object interval distance input by a user. [0134]
  • A mode may be provided for performing only a single scan, so as to allow a user to switch between the single scan mode and a mode performing a plurality of scans. The calculation to determine the coordinates from the center ip may also be executed within the three-[0135] dimensional camera 2. Using this construction, a function may be provided to analyze the results of a single scan to automatically determine whether or not two or more scans are necessary, and execute a plurality of scans only when required. A construction may also be used to allow the host to calculate the center ip by transmitting the data Σi·xi and Σxi for each pixel as measurement results to the host 3.
  • Second Embodiment [0136]
  • FIG. 19 is a block diagram of the [0137] output processing circuit 62 in accordance with a second embodiment of the present invention. It is noted that the output processing circuit 62 illustrated in FIG. 19 represents a modified embodiment of the output processing circuit 901 illustrated in FIG. 3.
  • The photoelectric conversion signals S[0138] 53 output from the sensor 53 are placed on a sampling hold by the sampling hold circuit 621 , amplified by a predetermined amplification factor by the amplification circuit 622 , and subsequently converted to 8-bit photoreception data Xi by the analog-to-digital (A/D) converter 623 . The photoreception data Xi are temporarily stored in a memory 624 , and transmitted via a predetermined timing to the maximum value determination circuit 626 and the center calculation circuit 627 . The photoreception data Xi of 32 frames are recorded in the memory 624 . In this way, the center is calculated for each pixel using a plurality of photoreception data Xi (32 items in the present example) obtained by shifting the image sensing cycle each sensor drive cycle. The memory address specification of the memory 624 is controlled by the memory controller 625 . The system controller 61 fetches the photoreception data of a predetermined pixel through a data bus (not shown in the drawings) in a preliminary measurement. The center calculation circuit 627 calculates the center ip forming the basis of the three-dimensional position calculation by the host computer 3 , and transmits the calculation result to the output memory 628 . The maximum value determination circuit 626 outputs a control signal CS 1 representing the suitability of the center ip for each pixel. The output memory controller 629 controls the writing of the center ip to the output memory 628 in accordance with the control signal CS 1 . When a control signal CS 2 output from the system controller 61 specifies masking of the control signal CS 1 , the center ip is written to memory regardless of the control signal CS 1 . The output memory controller 629 is provided with a counter for address specification.
  • FIG. 20 is an exemplary block diagram of the maximum value determination circuit 626. The maximum value determination circuit 626 comprises a maximum value detector 6261, a threshold value memory 6262, and a comparator 6263. The 32 items of photoreception data Xi per pixel read from the memory 624 are input to the maximum value detector 6261. The maximum value detector 6261 detects and outputs the maximum value Xmax among the 32 items of photoreception data Xi per pixel. The maximum value Xmax is compared to two threshold values XL and XH by the comparator 6263, and the comparison result is output as the control signal CS1. The threshold values XL and XH are fixed values stored in the threshold value memory 6262. [0139]
  • The operation of the three-dimensional camera 2 and the host computer 3, and the sequence of measurement in accordance with the second embodiment, are now described. [0140]
  • The three-dimensional camera 2 performs measurement sampling at 200×262 sampling points. That is, the image sensing surface S2 has 262 pixels in the width direction of the slit light U, and the actual number of frames is 231. [0141]
  • A user (photographer) determines the camera position and direction, and sets the field angle, while viewing a color monitor image displayed on the liquid crystal display 21. At this time a zooming operation is performed if necessary. Focusing is also accomplished at this time, either manually or automatically, by moving the focusing unit within the zoom unit 51, and the approximate object interval distance (do) is measured during the focusing process. [0142]
  • When a user presses the shutter button 27, a preliminary measurement is performed prior to the main measurement. In the preliminary measurement, the system controller 61 estimates the output of the semiconductor laser 41 (slit light intensity) and the deflection conditions (scan start angle, scan end angle, deflection angular speed) of the slit light U for the main measurement. [0143]
  • FIG. 21 illustrates a summary of the estimation of the slit light intensity Ls in the preliminary measurement, which proceeds as follows. The projection angle is set so that, for a flat-surfaced object at the object interval distance (do) estimated by the focusing operation, the reflected light is received at the center of the sensor 53. The slit light U is projected consecutively a total of three times at three intensity levels La, Lb, Lc at the set projection angle, the output of the sensor 53 is sampled each time, and the relationship between each intensity La, Lb, Lc and the output of the sensor 53 is determined. Then, the slit light intensity Ls which yields the optimum value Ss of the sensor 53 output is determined from this relationship. The sampling of the output of the sensor 53 does not target the entire image sensing surface S2, but rather only a part of the surface. [0144]
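A short numerical sketch of this intensity estimation is given below. It is illustrative only: the specification does not state the fitting model, so an approximately linear relationship between the projected intensity and the sensor output is assumed, and the function name estimate_slit_intensity and the example numbers are hypothetical.

    import numpy as np

    def estimate_slit_intensity(levels, outputs, target_output):
        """Estimate the slit light intensity Ls expected to yield the optimum
        sensor output Ss (target_output), given the sensor outputs sampled at
        the three test intensities La, Lb, Lc.

        Assumes an approximately linear intensity-to-output relationship; a
        least-squares line is fitted to the three samples.
        """
        levels = np.asarray(levels, dtype=float)     # [La, Lb, Lc]
        outputs = np.asarray(outputs, dtype=float)   # sensor output at each level
        slope, offset = np.polyfit(levels, outputs, 1)
        return (target_output - offset) / slope      # Ls such that output is about Ss

    # Example with arbitrary numbers: three test projections and a desired output Ss
    Ls = estimate_slit_intensity([10.0, 20.0, 30.0], [40.0, 95.0, 150.0], 128.0)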
  • In the estimation of the deflection conditions, the object interval distance (d) is estimated by the trigonometric survey method based on the projection angle of the slit light U at the intensities La, Lb, Lc and the receiving position of the slit light U. Then, the scan start angle, scan end angle, and deflection angular speed are calculated so as to obtain a measurement result of a predetermined resolution, based on the object interval distance (d), the optical conditions of the received light, and the operating conditions of the sensor 53. The preliminary measurement continues until a suitable preliminary measurement result is obtained, and then the main measurement is executed. [0145]
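For reference, a minimal sketch of the trigonometric survey step follows, under an assumed geometry in which the projector and the receiver are separated by a baseline of length B and both the projection angle and the receiving angle are measured from that baseline. The function name and the numeric example are illustrative and do not reproduce the optical layout defined elsewhere in this specification.

    import math

    def object_distance(baseline, projection_angle, receiving_angle):
        """Estimate the object distance d by triangulation.

        baseline          -- distance B between projector and receiver
        projection_angle  -- angle of the projected slit light ray, measured
                             from the baseline (radians)
        receiving_angle   -- angle of the received ray, measured from the
                             baseline (radians)

        The two rays intersect at the illuminated point; its perpendicular
        distance from the baseline is B / (cot(projection) + cot(receiving)).
        """
        cot_p = 1.0 / math.tan(projection_angle)
        cot_r = 1.0 / math.tan(receiving_angle)
        return baseline / (cot_p + cot_r)

    # Example: 0.2 m baseline, projection at 60 degrees, reception at 70 degrees
    d = object_distance(0.2, math.radians(60.0), math.radians(70.0))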
  • FIG. 22 is an exemplary flow chart showing the operation of the main measurement. In the main measurement of the second embodiment, an object is scanned a total of three times. The deflection conditions of the slit light U are identical in all scans, but the slit light intensity is different in each scan. [0146]
  • The slit light intensity L1 of the first scan is the slit light intensity Ls determined previously by the preliminary measurement. The slit light intensity L1 is set as the operating condition of the semiconductor laser 41 (Step 11), and the slit light U is projected over the range from the scan start angle to the scan end angle (Steps 12, 13). Similarly, the slit light intensity L2 is set and a second scan is performed (Steps 14 to 16), then the slit light intensity L3 is set and a third scan is performed (Steps 17 to 19). [0147]
  • Data loss in the processing performed by the output processing circuit 62 described later can be prevented by scanning at the slit light intensity Ls. Data loss is related to the reflectivity distribution of the object, the number of set levels of slit light intensity (i.e., the number of scans), and the maximum value of the photoreception data of each pixel. [0148]
  • FIG. 23 is an exemplary flow chart of the calculation of the slit light intensities suitable for the main measurement, and FIG. 24 shows exemplary settings of the slit light intensity. Initially, the slit light intensity Ls is calculated based on the preliminary measurement as previously described (Step 21), and the slit light intensities L2 and L3 are determined in accordance with the magnitude relationship between the obtained value and the previously determined threshold values LL and LH. The threshold values LL and LH are the two boundaries which divide the variable range of the slit light intensity (minimum value Lmin to maximum value Lmax) into three intensity ranges zL, zM, zH corresponding to the three levels low (L), medium (M), and high (H); LL is the boundary on the low side and LH is the boundary on the high side. [0149]
  • When Ls≦LL, the light intensity L2 is set to the average value LMa of the intensity range zM, and the intensity value L3 is set to the average value LHa of the intensity range zH (Steps 22, 23). When Ls>LL, the intensity value L2 is set to the average value LLa of the intensity range zL (Step 24), and the intensity value L3 is set as follows: if Ls≦LH, the value L3 is set to LHa (Steps 25, 26), and if Ls>LH, the value L3 is set to LMa (Step 27). That is, one intensity is selected from each of the three intensity ranges zL, zM, zH. The values LL, LH, LLa, LMa, LHa are expressed by the following equations; a sketch of this selection procedure is given after the equations. [0150]
  • LL=(Lmax−Lmin)/3+Lmin  (23a)
  • LH=2(Lmax−Lmin)/3+Lmin  (23b)
  • LLa=(LL−Lmin)/2+Lmin  (23c)
  • LMa=(LH−LL)/2+LL  (23d)
  • LHa=(Lmax−LH)/2+LH  (23e)
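A minimal sketch of the selection procedure described above, combining Equations (23a)-(23e) with the rules of Steps 21 through 27; the first scan is taken at Ls as stated in the text, and the function name plan_scan_intensities is hypothetical.

    def plan_scan_intensities(Ls, Lmin, Lmax):
        """Choose the slit light intensities L1, L2, L3 for the three scans of
        the main measurement: the first scan uses Ls, and the remaining scans
        use the mean values of the two intensity ranges (low/medium/high)
        other than the one containing Ls.
        """
        LL  = (Lmax - Lmin) / 3.0 + Lmin          # (23a) low/medium boundary
        LH  = 2.0 * (Lmax - Lmin) / 3.0 + Lmin    # (23b) medium/high boundary
        LLa = (LL - Lmin) / 2.0 + Lmin            # (23c) mean of low range zL
        LMa = (LH - LL) / 2.0 + LL                # (23d) mean of medium range zM
        LHa = (Lmax - LH) / 2.0 + LH              # (23e) mean of high range zH

        L1 = Ls
        if Ls <= LL:                              # Ls lies in the low range
            L2, L3 = LMa, LHa                     # Steps 22, 23
        else:
            L2 = LLa                              # Step 24
            L3 = LHa if Ls <= LH else LMa         # Steps 25-27
        return L1, L2, L3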
  • In each of the total of three scans at the slit light intensities set as described above, the previously mentioned output processing circuit 62 calculates the center ip by a predetermined calculation on the output from the sensor 53, and, for each pixel, the center ip of a scan is used effectively in accordance with the maximum value Xmax. [0151]
  • FIG. 15 illustrates the reading range of the sensor 53. Reading of one frame by the sensor 53 is accomplished not for the entire image sensing surface S2, but only for the effective photoreception area Ae (a band-like region) forming part of the image sensing surface S2, so as to allow high-speed reading. The effective photoreception area Ae is the area on the image sensing surface S2 corresponding to the measurable distance range at a specific illumination timing, and it shifts pixel by pixel for each frame in accordance with the deflection of the slit light U. In the present embodiment, the number of pixels of the effective photoreception area in the shift direction is fixed at 32. The method of reading part of the sensed image by the CCD area sensor is disclosed in Japanese Laid-Open Patent Application No. HEI 7-174536. [0152]
  • As stated above, FIG. 16 illustrates the relationship of the lines and frames on the image sensing surface S2 of the sensor 53, and FIG. 17 illustrates the recording state of the photoreception data of each frame. [0153]
  • As shown in FIG. 16, frame 1, which is the first frame on the image sensing surface S2, contains the photoreception data of the 32 lines × 200 pixels from line 1 through line 32. Frame 2 is shifted by one line and contains lines 2 through 33, and frame 3 is shifted by a further line and contains lines 3 through 34. Frame 32 contains lines 32 through 63. One line comprises 200 pixels, as previously described. [0154]
  • The photoreception information of frame 1 through frame 32 is sequentially converted by the A/D converter and stored in the memory 624 (refer to FIG. 4). As shown in FIG. 17, the photoreception data are stored in the memory 624 in the sequence of frame 1, frame 2, frame 3 and so on; the data of line 32, which is included in every one of these frames, shifts up by one line per frame, being the 32nd line of frame 1, the 31st line of frame 2, and so on. The sampling of each pixel of line 32 ends when the photoreception data from frame 1 to frame 32 have been stored in the memory 624. After sampling ends, the photoreception data of each pixel are sequentially read from the memory 624 to calculate the center. [0155]
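A short indexing sketch of this layout, assuming the frames are held in a 0-based array of shape (number of frames, 32 lines, 200 pixels) and that all frames containing the requested line have already been recorded; the function name is hypothetical.

    import numpy as np

    def gather_line_samples(frames, line):
        """Collect the 32 photoreception samples X1..X32 of every pixel on a
        given line from the stored frames.

        frames -- array of shape (n_frames, 32, 200): frame k (0-based) holds
                  lines k+1 .. k+32 of the image sensing surface
        line   -- 1-based line number (>= 32 here, so that all 32 frames
                  containing it have been captured)
        Returns an array of shape (32, 200): row i-1 holds sample i of each pixel.
        """
        samples = np.empty((32, 200))
        for i in range(1, 33):                    # sample number within Ae
            frame_no = line - 32 + i              # 1-based frame containing sample i
            row = line - frame_no                 # 0-based row of this line in that frame
            samples[i - 1] = frames[frame_no - 1, row]
        return samples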
  • Referring again to FIG. 6, the center ip is the center on the time axis of the distribution of the 32 individual photoreception data obtained by 32 samplings. The 32 individual photoreception data of each pixel are designated by the sample numbers 1 through 32. The i-th sample data is represented by Xi, where i is an integer from 1 through 32. Here, i represents the frame number counted from when the pixel enters the effective photoreception area Ae. [0156]
  • The center ip of the photoreception data X1 through X32 of numbers 1 through 32 is determined in the same manner as described above. [0157]
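In other words, the center ip is the weighted mean of the sample numbers, using the photoreception data as weights, which is consistent with the sums Σi·Xi and ΣXi mentioned later as an alternative output to the host. A minimal sketch, assuming the center is computed directly from the raw 32 samples without any background subtraction:

    import numpy as np

    def temporal_center(samples):
        """Compute the center ip on the time axis from the 32 photoreception
        data X1..X32 of one pixel, as the weighted mean of the sample numbers:
            ip = sum(i * Xi) / sum(Xi),  i = 1..32
        (the same sums that may alternatively be sent to the host computer).
        """
        x = np.asarray(samples, dtype=float)
        i = np.arange(1, x.size + 1)              # sample (frame) numbers 1..32
        total = x.sum()
        if total == 0.0:                          # no light received: center undefined
            return None
        return float((i * x).sum() / total)

    # Example with a synthetic peak around sample 17
    xi = np.exp(-0.5 * ((np.arange(1, 33) - 17.0) / 2.0) ** 2)
    ip = temporal_center(xi)                      # approximately 17.0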
  • In the center calculation process, the calculation becomes inaccurate or impossible when the sampling data near the center ip reach a saturated level or a solid level, thereby reducing the accuracy of the calculated position of the target object. The three-dimensional camera 2 of the second embodiment therefore performs a plurality of scans at different slit light intensities and, for each pixel, selects from among the scans the scan data in which the sampling data are at neither a saturated level nor a solid level, so as to accurately calculate the center for all pixels. [0158]
  • FIGS. 25(a)-(c) illustrate representative examples of the relationship between the maximum value Xmax and the minimum value Xmin of the photoreception data. The threshold values XL and XH are set at values below the saturation level Xt and above the solid level Xu in the operating characteristics of the sensor 53. [0159]
  • FIG. 25(a) shows an instance in which the maximum value Xmax satisfies Equation (24) described later among the sampled photoreception data of a pixel; in this case the center ip is permitted to be written to the output memory 628. [0160]
  • FIG. 25(b) shows an instance in which the maximum value Xmax exceeds the threshold value XH (tolerance upper limit), and the photoreception data are saturated. In this case, since there is concern that the center calculation cannot be performed accurately, the center ip is prohibited from being written to the output memory 628. [0161]
  • FIG. 25(c) shows an instance in which the maximum value Xmax is less than the threshold value XL (tolerance lower limit); the photoreception data approach the solid level, which greatly increases the effects of background light and intra-circuit noise and narrows the width of the photoreception data distribution. Since there is concern that the center calculation cannot be accurately performed in this case also, the center ip is not permitted to be written to the output memory 628. [0162]
  • The comparator 6263 shown in FIG. 20 executes a comparison calculation to determine whether or not the input maximum value Xmax satisfies the condition of Equation (24), and outputs the control signal CS1. [0163]
  • XL<Xmax<XH  (24)
  • The control signal CS1 permits the center ip to be written to the output memory 628 when the maximum value Xmax of a pixel satisfies Equation (24), and prohibits the center ip from being written to memory when Equation (24) is not satisfied. The output memory 628 is provided with a capacity capable of holding the centers of all pixels of the sensor 53. [0164]
  • FIG. 26 is an exemplary flow chart of the output memory control. When the first scan of the main measurement starts (Step 30), the control signal CS1 is masked by the control signal CS2 in the output memory controller 629 (Step 31), and the centers ip of all pixels are permitted to be written to the output memory 628 (Step 32). When the first scan, in which the centers ip of all pixels are written to the output memory 628, ends (Steps 33, 34), writing to the output memory 628 is prohibited (Step 35), and the mask on the control signal CS1 is cleared (Step 36). In the second scan, the maximum value Xmax is compared to the threshold values XL and XH, and writing to the output memory 628 is either permitted or prohibited in accordance with the comparison result (Steps 37-40). When writing is permitted, the center ip obtained in the second scan is written to the output memory 628. In the third scan, the center ip is written to the output memory 628 in the same manner as in the second scan. [0165]
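A compact sketch of this write control: the first scan writes the center of every pixel unconditionally (CS1 masked by CS2), and the later scans overwrite a pixel only when its maximum value satisfies Equation (24). The data layout, a list of (center, Xmax) pairs per scan, is an assumption made for illustration.

    def run_output_memory_control(scan_data, XL, XH):
        """Emulate the output memory control of FIG. 26 over three scans.

        scan_data -- list of scans; each scan is a list of (center_ip, x_max)
                     tuples, one per pixel
        Returns the final contents of the output memory (one center per pixel).
        """
        n_pixels = len(scan_data[0])
        output_memory = [None] * n_pixels

        for scan_index, scan in enumerate(scan_data):
            first_scan = (scan_index == 0)          # CS2 masks CS1 in the first scan
            for pixel, (center_ip, x_max) in enumerate(scan):
                if first_scan or XL < x_max < XH:   # Equation (24) check via CS1
                    output_memory[pixel] = center_ip
        return output_memory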
  • As described above, the center ip of each pixel calculated in each of the plurality of scans by the output processing circuit 62 is updated for each scan if the maximum value Xmax satisfies the condition of Equation (24). When a pixel does not satisfy the condition of Equation (24) in any scan, data loss occurs. In the present embodiment, the center calculation result is written to the output memory 628 for all pixels in the first scan to prevent such data loss. The slit light intensity Ls used in the first scan is optimized relative to the reflectivity of the central part of the object in the preliminary measurement, as previously mentioned. Executing the first scan at the slit light intensity Ls therefore prevents loss of data for other areas whose reflectivity is close to that of the central part of the object. [0166]
  • Although the slit light intensity is switched to perform three scans in the main measurement by the three-dimensional camera 2 of the present example, the accuracy of the center calculation can be further improved by increasing the number of scans and the number of levels of the slit light intensity, so that a slit light intensity appropriate to the reflectivity of each part of the target object can be selected. [0167]
  • FIG. 27 is an exemplary block diagram illustrating a three-dimensional camera 2 b which is a variation of the three-dimensional camera 2 described in the foregoing embodiment. In the drawing, items common to the embodiment of FIG. 3 are identified by a “b” appended to their reference number. The external view, basic internal construction, and method of use of the three-dimensional camera 2 b are identical to those of the previously described three-dimensional camera 2 (refer to FIGS. 1 through 3). The description of FIG. 27 is therefore abbreviated. [0168]
  • The optical unit 40 b is provided with a semiconductor laser 41 b, which emits the slit light U to scan an object which is the measurement target. The optical unit 50 b is provided with a measurement sensor 53 b and a monitor color sensor 54 b, which form the object image and convert this image to electrical signals. A system controller 61 b controls the drive system 140 for operating the optical unit 40 b and the drive system 150 for driving the optical unit 50 b. [0169]
  • The photoelectric conversion signals S53 b obtained by the sensor 53 b are transmitted to the output processing circuit 62 b. The output processing circuit 62 b outputs a distance image to the display memory 74 b, and outputs data based on the calculation of the three-dimensional position to the SCSI controller 66 b. The distance image is displayed on the liquid crystal display 21 b. The photoelectric conversion signals obtained by the color sensor 54 b are input to the color data processing system 160. The color data processing system 160 outputs analog image signals to the pin 32 b, and transmits digital image data to the SCSI controller 66 b. The SCSI controller 66 b controls data communications with external devices via the pin 33 b, and manages access to the recording medium 4 b. [0170]
  • The characteristics of the three-dimensional camera 2 b are described below in terms of the construction and operation of the output processing circuit 62 b. [0171]
  • FIG. 28 is an exemplary flow chart of the main measurement executed by the three-dimensional camera 2 b. The three-dimensional camera 2 b performs a preliminary measurement in the same sequence as the three-dimensional camera 2, and the result of the preliminary measurement is reflected in the main measurement. [0172]
  • When the process moves from the preliminary measurement to the main measurement, the system controller 61 b provides the deflection conditions to the drive system 140, and provides the slit light intensity Ls as an output condition for the semiconductor laser 41 b (Step 51). The scan starts at the scan start angle of the deflection conditions (Step 52), and is executed until the scan end angle is reached (Step 53). During the scan, the output processing circuit 62 b performs signal processing on the successive photoelectric conversion signals output from the sensor 53 b. [0173]
  • FIG. 29 is an exemplary block diagram of the output processing circuit 62 b of FIG. 27. The photoelectric conversion signal S53 b of each pixel transmitted from the sensor 53 b is sampled and held by the sampling hold circuit 630, and is amplified by mutually different amplification factors A1, A2, A3 in the three amplifiers 631, 632, 633. The relationships among the three amplification factors A1, A2, A3 are expressed by the equations below. [0174]
  • A1×b1=A2 (b1>1.0)
  • A1×b2=A3 (b2<1.0)
  • A3<A1<A2
  • The output of the amplifier 631 is converted to digital form by the A/D converter 634, and then transmitted to the memory 638 as photoreception data Xi1. The usage of the memory 638 is identical to that shown in FIG. 16, and the photoreception data Xi1 are written to an address specified by the memory controller 637. Similarly, the output of the amplifier 632 is converted by the A/D converter 635 and transmitted to the memory 639 as photoreception data Xi2, and the output of the amplifier 633 is converted by the A/D converter 636 and transmitted to the memory 640 as photoreception data Xi3. The photoreception data Xi1, Xi2, Xi3 are all 8-bit data. [0175]
  • The photoreception data Xi1, Xi2, Xi3 of each pixel are read from the memories 638 through 640 when the writing of the 32 frames ends. One of the photoreception data Xi1, Xi2, Xi3 is selected by the selectors 641 and 642, and transmitted to the center calculation circuit 644. The selectors 641 and 642 receive the selector signals SL1 and SL2 from the maximum value determination circuit 643. [0176]
  • FIG. 30 is an exemplary block diagram of the maximum value determination circuit 643 of FIG. 29, and FIG. 31 shows an example of the relationship between the threshold values XLb and XHb and the maximum value Xmax2 of the 32 photoreception data corresponding to a single pixel. [0177]
  • The maximum value determination circuit 643 comprises a maximum value detector 6431, a threshold memory 6432, and two comparators 6433 and 6434. The photoreception data Xi1 transmitted from the memory 638 are input to the maximum value detector 6431. The maximum value detector 6431 detects the maximum value Xmax2 of the photoreception data Xi1 (32 items in the present embodiment) of each pixel. The threshold memory 6432 provides the threshold values XLb and XHb to the comparator 6433, and provides the threshold value XLb to the comparator 6434. The comparator 6433 outputs the selector signal SL1 in accordance with the magnitude relationship between the maximum value Xmax2 and the threshold values XLb and XHb. The comparator 6434 outputs the selector signal SL2 in accordance with the magnitude relationship between the maximum value Xmax2 and the threshold value XLb. [0178]
  • As shown in FIG. 30, the threshold values XLb and XHb are set below the saturation level Xtb and above the solid level Xub determined by the characteristics of the sensor [0179] 53 b. When the maximum value Xmax2 satisfies the condition (XLb<Xmax2<XHb), the selector signal SL1 becomes active, and the selector 641 selects the photoreception data Xi1 from the memory 638. That is, the photoreception data Xi1 of amplification factor A1 is used in the center calculation (refer to FIG. 28). When the condition is not satisfied, the selector 641 selects the photoreception data (Xi2 or Xi3) from the selector 642. When the maximum value Xmax2 satisfies the condition (XLb>Xmax2), the selector signal SL2 becomes active, and the selector 642 selects the photoreception data Xi2 from the memory 639, and transmits the data to the selector 641. When the condition is not satisfied, the selector 642 selects the photoreception data Xi3 from the memory 640. When a single pixel is targeted, the relationship Xi3<Xi1<Xi2 obtains.
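A minimal sketch of this channel selection for one pixel, assuming the 32 samples of each channel are available as lists; the function name is hypothetical, and the selected channel would then be passed to the center calculation described earlier.

    def select_channel(xi1, xi2, xi3, XLb, XHb):
        """Select the photoreception data used for the center calculation,
        mirroring the selector logic of FIGS. 29 and 30: the channel of
        amplification factor A1 is used when its maximum lies between the
        solid and saturation thresholds; otherwise the more amplified
        channel (A2) is used for weak signals and the less amplified
        channel (A3) for saturated signals.
        """
        x_max2 = max(xi1)                      # maximum of the 32 samples of Xi1
        if XLb < x_max2 < XHb:
            return xi1                         # A1 data neither solid nor saturated
        if x_max2 < XLb:
            return xi2                         # too dark: use higher gain A2
        return xi3                             # saturated: use lower gain A3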
  • An accurate center calculation can be accomplished by the aforesaid process, which uses photoreception data that are at neither a saturated level nor a solid level, obtained by a suitable amplification of the sensor output. [0180]
  • Although the three-dimensional cameras 2 and 2 b of the foregoing embodiments are systems which calculate the center ip and transmit the center ip to a host computer 3, it is to be noted that the three-dimensional cameras 2 and 2 b may instead transmit to the host computer 3 the data Σi·Xi and ΣXi of each pixel as the measurement results, and the center calculation may be performed by the host computer 3. The photoreception data of suitable amplification factor and suitable slit light intensity need not necessarily be selected for each individual pixel; suitable photoreception data may instead be selected in units of a plurality of pixels to perform the center calculation. The dynamic range of the received light may effectively be broadened by performing a plurality of scans at different slit light intensities, and by processing the photoelectric conversion signals obtained in each scan with different amplification factors. [0181]
  • The present invention provides significant advantages over prior art devices. For example, the present invention provides for the input of three-dimensional data for an entire object at the same resolution as when the depth dimension is small, even when the depth dimension of the object is large. [0182]
  • In addition, the present invention is capable of obtaining three-dimensional data of an object as if the object exhibited uniform reflectivity, without requiring operation specifications to be received a plurality of times, even when there are marked differences in the reflectivity of the various portions of the object. [0183]
  • Variations of the specific embodiments of the present invention disclosed herein are possible. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. [0184]

Claims (40)

What is claimed is:
1. A three-dimensional input device for obtaining data relating to the three-dimensional shape of an object, said input device comprising:
a light projecting device for projecting detection light on an object, and
an image sensing device for receiving the detection light reflected by said object and converting said received light to electrical signals,
wherein said object is scanned periodically by said detection light by varying the projection angle of the light projecting device, and
a plurality of scans are consecutively executed, each of said plurality of scans encompassing a different light projection angle range.
2. The three-dimensional input device according to claim 1, wherein three scans of said object are consecutively executed.
3. The three-dimensional input device according to claim 1, where each of said light projection angle ranges overlap one another.
4. The three-dimensional input device according to claim 1, wherein said light projecting device is capable of varying the angle of projection of said detection light.
5. The three-dimensional input device according to claim 4, wherein said light projecting device further controls the speed of varying the angle of projection of said detection light such that each of said plurality of scans is performed at a different speed.
6. The three-dimensional input device according to claim 1, wherein each light projection angle range associated with a given one of said plurality of scans is predetermined and stored in memory.
7. The three-dimensional input device according to claim 1, further comprising a display device for displaying images generated on the basis of said detection light reflected by said object, each of said images corresponding to each of the plurality of scans.
8. The three-dimensional input device according to claim 1, further comprising a selector operable for selecting one of said images on the display.
9. A three-dimensional input device for obtaining data relating to the three-dimensional shape of an object, said input device comprising:
a light projecting device for scanning an object by projecting a detection light on said object while changing the projection direction of the detection light;
an image sensing device for receiving the detection light projected from said light projecting device and reflected by an object, and converting the received detection light to electrical signals; and
a control device for controlling said light projecting device to sequentially execute a plurality of scans using detection light of a different intensity for each said scan.
10. The three-dimensional input device according to claim 9, wherein three scans of said object are consecutively executed.
11. The three-dimensional input device according to claim 9, further comprising a display device for displaying images generated on the basis of said detection light reflected by said object, each of said images corresponding to each of the plurality of scans.
12. The three-dimensional input device according to claim 9, further comprising a selector operable for selecting one of said images on the display.
13. A three-dimensional input device comprising:
a light projecting device for scanning an object by projecting a detection light on said object while changing the projection direction of the detection light, said light projecting device performing a plurality of said scans;
an image sensing device for receiving the detection light projected from said light projecting device and reflected by said object, and converting the received detection light to electrical signals; and
a signal processing device for amplifying said electrical signals at mutually different amplification factors for each of said plurality of scans, and generating a plurality of photoreception data corresponding to each pixel.
14. The three-dimensional input device according to claim 13, wherein three scans of said object are consecutively executed.
15. The three-dimensional input device according to claim 13, further comprising a display device for displaying images generated on the basis of said detection light reflected by said object, each of said images corresponding to each of the plurality of scans.
16. The three-dimensional input device according to claim 13, further comprising a selector operable for selecting one of said images on the display.
17. A method of measuring a three-dimensional image, said method comprising the steps of:
projecting a detection light produced by a light projecting device on an object, and receiving the detection light reflected by said object and converting said received light to electrical signals,
said object being scanned periodically by said detection light by varying the projection angle of the light projecting device, and
a plurality of scans are consecutively executed, each of said plurality of scans encompassing a different light projection angle range.
18. The method of measuring a three-dimensional image according to claim 17, wherein three scans of said object are consecutively executed.
19. The method of measuring a three-dimensional image according to claim 17, where each of said light projection angle ranges overlap one another.
20. The method of measuring a three-dimensional image according to claim 17, wherein said light projecting device is capable of varying the angle of projection of said detection light.
21. The method of measuring a three-dimensional image according to claim 17, wherein said light projecting device further controls the speed of varying the angle of projection of said detection light such that each of said plurality of scans is performed at a different speed.
22. The method of measuring a three-dimensional image according to claim 17, wherein each light projection angle range associated with a given one of said plurality of scans is predetermined and stored in memory.
23. The method of measuring a three-dimensional image according to claim 17, further comprising the step of displaying images generated on the basis of said detection light reflected by said object, each of said images corresponding to each of the plurality of scans.
24. The method of measuring a three-dimensional image according to claim 17, further comprising the step of selecting one of said images on the display.
25. A method of measuring a three-dimensional image, said method comprising the steps of:
scanning an object by projecting a detection light produced by a light projecting device on said object while changing the projection direction of the detection light;
receiving the detection light projected from said light projecting device and reflected by an object, and converting the received detection light to electrical signals;
controlling said light projecting device to sequentially execute a plurality of scans using detection light of a different intensity for each said scan in accordance with specifications defined prior to the start of operation.
26. The method of measuring a three-dimensional image according to claim 25, wherein three scans of said object are consecutively executed.
27. The method of measuring a three-dimensional image according to claim 26, wherein the intensities of the light projecting device utilized during said three scans is determined based on a reference light intensity, Ls, which is determined prior to performing said three scans.
28. The method of measuring a three-dimensional image according to claim 27, wherein said reference light intensity, Ls, is determined by projecting said detection light at three different intensities at a set projection angle, and analyzing the output of a light sensor receiving said detection light after reflection by said object.
29. The method of measuring a three-dimensional image according to claim 27, wherein if said reference light intensity, Ls, is less than a low level threshold, the light intensity of said light projecting device is set equal to Ls for the first scan of said three consecutive scans, the light intensity of said light projecting device is set equal to an average of a medium intensity range of said light projecting device for the second scan of said three consecutive scans, and the light intensity of said light projecting device is set equal to an average of a high intensity range of said light projecting device for the third scan of said three consecutive scans.
30. The method of measuring a three-dimensional image according to claim 27, wherein if said reference light intensity, Ls, is between a low level threshold and a high level threshold, the light intensity of said light projecting device is set equal to an average of a low intensity range of said light projecting device for the first scan of said three consecutive scans, the light intensity of said light projecting device is set equal to Ls for the second scan of said three consecutive scans, and the light intensity of said light projecting device is set equal to an average of a high intensity range of said light projecting device for the third scan of said three consecutive scans.
31. The method of measuring a three-dimensional image according to claim 27, wherein if said reference light intensity, Ls, is greater than a high level threshold, the light intensity of said light projecting device is set equal to an average of a low intensity range of said light projecting device for the first scan of said three consecutive scans, the light intensity of said light projecting device is set equal to an average of a medium intensity range of said light projecting device for the second scan of said three consecutive scans, and the light intensity of said light projecting device is set equal to Ls for the third scan of said three consecutive scans.
32. The method of measuring a three-dimensional image according to claim 25, further comprising the step of prohibiting the writing of data representing the detection light reflected by the object and output by a light sensor for a given scan, unless the maximum value of the output of said light sensor during said scan is within a predefined range.
33. The method of measuring a three-dimensional image according to claim 32, wherein said data obtained during said given scan is stored in memory if the maximum value of said data is within said predefined range, and said data stored in memory is only updated during a subsequent scan if the maximum value of the data corresponding to said subsequent scan is within said predefined range.
34. A method of measuring a three-dimensional image according to claim 25, further comprising the step of displaying images generated on the basis of said detection light reflected by said object, each of said images corresponding to each of the plurality of scans.
35. A method of measuring a three-dimensional image according to claim 25, further comprising the step of selecting one of said images on the display.
36. A method of measuring a three-dimensional image, said method comprising:
scanning an object by projecting a detection light produced by a light projecting device on said object while changing the projection direction of the detection light;
receiving the detection light projected from said light projecting device and reflected by said object, and converting the received detection light into electrical signals; and
amplifying said electrical signals by mutually different amplification factors, and generating a plurality of photoreception data.
37. The method of measuring a three-dimensional image according to claim 36, wherein said electrical signals are amplified by a first predefined amplification factor if the maximum value of said electrical signals representing the detection light reflected by the object and output by a light sensor for a given scan is within a predefined range, are amplified by a second predefined amplification factor if the maximum value of said electrical signals is greater than a first predefined limit, and are amplified by a third predefined amplification factor if the maximum value of said electrical signals is less than a second predefined limit.
38. The method of measuring a three-dimensional image according to claim 36, wherein three scans of said object are consecutively executed.
39. The method of measuring a three-dimensional image according to claim 38, further comprising the step of displaying images generated on the basis of said detection light reflected by said object, each of said images corresponding to each of the plurality of scans.
40. The method of measuring a three-dimensional image according to claim 38, further comprising the step of selecting one of said images on the display.
US09/334,918 1998-06-18 1999-06-17 Three-dimensional input device Expired - Fee Related US6424422B1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP10-171206 1998-06-18
JP17120698A JP2000002520A (en) 1998-06-18 1998-06-18 Three-dimensional input apparatus
JP10-191278 1998-07-07
JP19127898A JP3740848B2 (en) 1998-07-07 1998-07-07 3D input device

Publications (2)

Publication Number Publication Date
US20020089675A1 true US20020089675A1 (en) 2002-07-11
US6424422B1 US6424422B1 (en) 2002-07-23

Family

ID=26494004

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/334,918 Expired - Fee Related US6424422B1 (en) 1998-06-18 1999-06-17 Three-dimensional input device

Country Status (1)

Country Link
US (1) US6424422B1 (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002077920A2 (en) * 2001-03-26 2002-10-03 Dynapel Systems, Inc. Method and system for the estimation and compensation of brightness changes for optical flow calculations
US20030043277A1 (en) * 2001-09-04 2003-03-06 Minolta Co., Ltd. Imaging system, photographing device and three-dimensional measurement auxiliary unit used for the system
WO2004063665A1 (en) * 2003-01-13 2004-07-29 Koninklijke Philips Electronics N.V. Method of and apparatus for determining height or profile of an object
US20100277330A1 (en) * 2007-11-12 2010-11-04 Datalogic Automation S.R.L. Optical code reader
US20110068164A1 (en) * 2009-09-24 2011-03-24 Trimble Navigation Limited Method and Apparatus for Barcode and Position Detection
US8500005B2 (en) * 2008-05-20 2013-08-06 Trimble Navigation Limited Method and system for surveying using RFID devices
US8800859B2 (en) 2008-05-20 2014-08-12 Trimble Navigation Limited Method and system for surveying using RFID devices
USD743397S1 (en) * 2014-06-20 2015-11-17 Datalogic Ip Tech S.R.L. Optical scanner
US20170170899A1 (en) * 2014-03-25 2017-06-15 Osram Sylvania Inc. Techniques for Raster Line Alignment in Light-Based Communication
USD852194S1 (en) * 2016-03-25 2019-06-25 Datalogic Ip Tech S.R.L. Code reader

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7098435B2 (en) * 1996-10-25 2006-08-29 Frederick E. Mueller Method and apparatus for scanning three-dimensional objects
JP4186347B2 (en) * 1999-10-14 2008-11-26 コニカミノルタセンシング株式会社 Three-dimensional input method and apparatus
US7072726B2 (en) * 2002-06-19 2006-07-04 Microsoft Corporation Converting M channels of digital audio data into N channels of digital audio data
WO2006034144A2 (en) * 2004-09-18 2006-03-30 The Ohio Willow Wood Company Apparatus for determining the three dimensional shape of an object
JP4238891B2 (en) * 2006-07-25 2009-03-18 コニカミノルタセンシング株式会社 3D shape measurement system, 3D shape measurement method
KR101666854B1 (en) * 2010-06-14 2016-10-17 삼성전자주식회사 Apparatus and method for depth unfolding based on multiple depth images
JP6290854B2 (en) * 2012-03-30 2018-03-07 ニコン メトロロジー エン ヴェー Improved optical scanning probe
KR101871235B1 (en) 2012-06-05 2018-06-27 삼성전자주식회사 Depth image generating method and apparatus, depth image processing method and apparatus
US9434181B1 (en) * 2015-06-19 2016-09-06 Roland Dg Corporation Printing device and printing method

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5024529A (en) * 1988-01-29 1991-06-18 Synthetic Vision Systems, Inc. Method and system for high-speed, high-resolution, 3-D imaging of an object at a vision station
CA1313040C (en) * 1988-03-31 1993-01-26 Mitsuaki Uesugi Method and apparatus for measuring a three-dimensional curved surface shape
US4939379A (en) * 1989-02-28 1990-07-03 Automation Research Technology, Inc. Contour measurement using time-based triangulation methods
US5668631A (en) 1993-12-20 1997-09-16 Minolta Co., Ltd. Measuring system with improved method of reading image data of an object
US6141105A (en) * 1995-11-17 2000-10-31 Minolta Co., Ltd. Three-dimensional measuring device and three-dimensional measuring method
JP3417222B2 (en) * 1996-08-07 2003-06-16 松下電器産業株式会社 Real-time range finder
US6252659B1 (en) * 1998-03-26 2001-06-26 Minolta Co., Ltd. Three dimensional measurement apparatus
JP4111592B2 (en) * 1998-06-18 2008-07-02 コニカミノルタセンシング株式会社 3D input device

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020163596A1 (en) * 2001-03-26 2002-11-07 Max Griessl Method and system for the estimation and compensation of brightness changes for optical flow calculations
WO2002077920A3 (en) * 2001-03-26 2003-09-18 Dynapel Systems Inc Method and system for the estimation and compensation of brightness changes for optical flow calculations
US6959118B2 (en) 2001-03-26 2005-10-25 Dynapel Systems, Inc. Method and system for the estimation and compensation of brightness changes for optical flow calculations
WO2002077920A2 (en) * 2001-03-26 2002-10-03 Dynapel Systems, Inc. Method and system for the estimation and compensation of brightness changes for optical flow calculations
US20030043277A1 (en) * 2001-09-04 2003-03-06 Minolta Co., Ltd. Imaging system, photographing device and three-dimensional measurement auxiliary unit used for the system
US6987531B2 (en) * 2001-09-04 2006-01-17 Minolta Co., Ltd. Imaging system, photographing device and three-dimensional measurement auxiliary unit used for the system
WO2004063665A1 (en) * 2003-01-13 2004-07-29 Koninklijke Philips Electronics N.V. Method of and apparatus for determining height or profile of an object
US20100277330A1 (en) * 2007-11-12 2010-11-04 Datalogic Automation S.R.L. Optical code reader
US8740079B2 (en) * 2007-11-12 2014-06-03 Datalogic Automation Srl Optical code reader
US8800859B2 (en) 2008-05-20 2014-08-12 Trimble Navigation Limited Method and system for surveying using RFID devices
US8500005B2 (en) * 2008-05-20 2013-08-06 Trimble Navigation Limited Method and system for surveying using RFID devices
US20110068164A1 (en) * 2009-09-24 2011-03-24 Trimble Navigation Limited Method and Apparatus for Barcode and Position Detection
US20170170899A1 (en) * 2014-03-25 2017-06-15 Osram Sylvania Inc. Techniques for Raster Line Alignment in Light-Based Communication
US9871589B2 (en) * 2014-03-25 2018-01-16 Osram Sylvania Inc. Techniques for raster line alignment in light-based communication
USD743397S1 (en) * 2014-06-20 2015-11-17 Datalogic Ip Tech S.R.L. Optical scanner
USD800120S1 (en) 2014-06-20 2017-10-17 Datalogic Ip Tech S.R.L. Optical scanner
USD852194S1 (en) * 2016-03-25 2019-06-25 Datalogic Ip Tech S.R.L. Code reader

Also Published As

Publication number Publication date
US6424422B1 (en) 2002-07-23

Similar Documents

Publication Publication Date Title
US6268918B1 (en) Three-dimensional input device
US6424422B1 (en) Three-dimensional input device
US6529280B1 (en) Three-dimensional measuring device and three-dimensional measuring method
JP4111166B2 (en) 3D shape input device
JP3873401B2 (en) 3D measurement system
US6172755B1 (en) Three dimensional measurement system and pickup apparatus
US20070150228A1 (en) Method and apparatus for three-dimensional measurement
IL138414A (en) Apparatus and method for optically measuring an object surface contour
US6233049B1 (en) Three-dimensional measurement apparatus
JP3493403B2 (en) 3D measuring device
US6614537B1 (en) Measuring apparatus and measuring method
US6421114B1 (en) Three-dimensional information measuring apparatus
US6616347B1 (en) Camera with rotating optical displacement unit
US6297881B1 (en) Three-dimensional measurement method and three-dimensional measurement device
Clark et al. Measuring range using a triangulation sensor with variable geometry
JPH07174537A (en) Image input camera
US7492398B1 (en) Three-dimensional input apparatus and image sensing control method
JP3324367B2 (en) 3D input camera
JP3360505B2 (en) Three-dimensional measuring method and device
Shafer Automation and calibration for robot vision systems
JP3740848B2 (en) 3D input device
JP2000275024A (en) Three-dimensional input apparatus
Golnabi Design and operation of a laser scanning system
JP3861475B2 (en) 3D input device
JP2000002520A (en) Three-dimensional input apparatus

Legal Events

Date Code Title Description
AS Assignment

Owner name: MINOLTA CO., LTD., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KAMON, KOICHI;TANABE, HIDEKI;MIYAZAKI, MAKOTO;AND OTHERS;REEL/FRAME:010165/0024;SIGNING DATES FROM 19990716 TO 19990723

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees
STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20140723