US20030190072A1 - Method and apparatus for processing images - Google Patents


Info

Publication number
US20030190072A1
Authority
US
United States
Prior art keywords
image
flow
images
parallax
aligned
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/255,746
Inventor
Sean Adkins
Keith Hanna
James Bergen
Rakesh Kumar
Harpreet Sawhney
Jeffrey Lubin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Imax Corp
Original Assignee
Sean Adkins
Keith Hanna
Bergen James R.
Rakesh Kumar
Harpreet Sawhney
Jeffrey Lubin
Application filed by Sean Adkins, Keith Hanna, James R. Bergen, Rakesh Kumar, Harpreet Sawhney, and Jeffrey Lubin
Priority to US10/255,746
Publication of US20030190072A1
Assigned to IMAX CORPORATION (assignment of assignors' interest; assignor: SARNOFF CORPORATION)
Legal status: Abandoned

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/50: Depth or shape recovery
    • G06T 7/55: Depth or shape recovery from multiple images
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106: Processing image signals
    • H04N 13/111: Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H04N 13/20: Image signal generators
    • H04N 13/204: Image signal generators using stereoscopic image cameras
    • H04N 13/239: Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H04N 13/243: Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • H04N 13/25: Image signal generators using two or more image sensors with different characteristics other than in their location or field of view, e.g. having different resolutions or colour pickup characteristics; using image signals from one sensor to control the characteristics of another sensor
    • H04N 13/296: Synchronisation thereof; Control thereof
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; Photographic image
    • G06T 2207/10012: Stereo images
    • H04N 13/15: Processing image signals for colour aspects of image signals
    • H04N 13/189: Recording image signals; Reproducing recorded image signals
    • H04N 13/246: Calibration of cameras
    • H04N 13/286: Image signal generators having separate monoscopic and stereoscopic modes
    • H04N 2013/0074: Stereoscopic image analysis
    • H04N 2013/0081: Depth or disparity estimation from stereoscopic image signals

Definitions

  • the invention relates to an image processing method and apparatus and, more particularly, the invention relates to a method and apparatus for enhancing the quality of an image.
  • [0008] Creation of an enhanced digital image by processing one or more frames of imagery from cameras and/or other sensors which have captured the imagery at the same time instant.
  • the synthesized frame represents the view of an enhanced synthetic camera located at the position of one of the real sensors.
  • the disadvantages associated with the prior art are overcome by the present invention for a method and apparatus for accurately computing image flow information as captured by imagery of a scene.
  • the invention computes the image flow information of each point in an image by computing the image flow within windows that are offset with respect to the point for which the image flow is being computed. Additionally, image flow computations are performed over multiple frames of imagery to ensure accuracy of the image flow computation and to facilitate correction of occluded imagery.
  • the image flow computation is constrained to compute parallax information.
  • the imagery and parallax (or flow) information can be used to enhance various image processing techniques such as image resolution enhancement, enhancement of focus, depth of field, color, and brightness.
  • the parallax (or flow) information can also be used to generate a synthetic high-resolution image that can be used in combination with the original image to form a stereo image.
  • the apparatus comprises an imaging device for producing images (e.g., video frame sequences) and a scene sensing device for producing information regarding the imaged scene.
  • An image processor uses the information from the scene sensing device to process the images produced by the imaging device. This processing produces parallax information regarding the imaged scene.
  • the imagery from the imaging device and the parallax information can be used to enhance any one of the above-mentioned image processing applications.
  • the invention includes a method that is embodied in a software routine, or a combination of software and hardware.
  • the inventive method comprises the steps of supplying image data having a first resolution and supplying image information regarding the scene represented by the image data.
  • the image data and information are processed by, for example, warping the first image data to form a synthetic image having a synthetic view, where the viewpoint of the synthetic image is different from the viewpoint represented in the image data.
  • the synthetic image and the original image can be used to compute parallax information regarding the scene. By using multiple frames from the original imagery and the synthetic view imagery, the inventive process improves the accuracy of the parallax computation.
  • Alternate embodiments of the invention include but are not limited to, utilizing multiple sensors in addition to the scene sensing device to provide greater amounts of scene data for use in enhancing the synthetic image, using a displacement device in conjunction with the second imaging device to create a viewpoint for the warped image that is at the location of the displacement device, and using a range finding device as the second imaging device to provide image depth information.
  • FIG. 1 depicts a block diagram of an imaging apparatus incorporating the image analysis method and apparatus of the invention
  • FIG. 2 depicts a block schematic of an imaging apparatus and an image analysis method used to produce one embodiment of the subject invention
  • FIG. 3 is a flow chart of the parallax computation method
  • FIG. 4 is a flow chart of the image warping method
  • FIG. 5 depicts a block diagram of an imaging apparatus and an image analysis method used to produce a second embodiment of the subject invention
  • FIG. 6 depicts a block diagram of an imaging apparatus and an image analysis method used to produce a third embodiment of the subject invention
  • FIG. 7 depicts a schematic view of multiple offset windows as used to compute parallax at points within an image
  • FIG. 8 depicts an illustration for a process to compute a quality measure for parallax computation accuracy.
  • FIG. 1 depicts a high-resolution synthetic image generation apparatus 100 of the present invention.
  • An input video sequence 112 is supplied to a computer 102 .
  • the computer 102 comprises a central processing unit (CPU) 104 , support circuits 106 , and memory 108 . Residing within the memory 108 is a high-resolution synthetic image generation routine 110 .
  • the high-resolution synthetic image generation routine 110 may alternately be readable from another source such as a floppy disk, CD, remote memory source or via a network.
  • the computer additionally is coupled to input/output accessories 118 .
  • an input video sequence 112 is supplied to the computer 102 , which after operation of the high-resolution synthetic image generation routine 110 , outputs a synthetic high-resolution image 114 .
  • the high-resolution synthetic image generation routine 110, hereinafter referred to as the routine 110, can be understood in greater detail by referencing FIG. 2.
  • Although the process of the present invention is discussed as being implemented as a software routine 110, some of the method steps that are disclosed therein may be performed in hardware as well as by the software controller. As such, the invention may be implemented in software as executed upon a computer system, in hardware as an application specific integrated circuit or other type of hardware implementation, or a combination of software and hardware.
  • each step of the routine 110 should also be construed as having an equivalent application specific hardware device (module), or hardware device used in combination with software.
  • the high-resolution synthetic image generation routine 110 of one illustrative embodiment of the invention receives the input 112 from a first image acquisition device 206 and a second image acquisition device 208 .
  • the first image acquisition device 206 views a scene 200 from a first viewpoint 216 while the second image acquisition device 208 views the scene 200 from a second viewpoint 218 .
  • the second viewpoint 218 may include the first viewpoint 216 (i.e., the first and second image acquisition devices 206 and 208 may view the scene 200 from the same position).
  • a displacement mechanism 232 (e.g., a mirror) positioned in a remote location 234 may be used to make the data captured by the second image acquisition device 208 appear as if the second image acquisition device 208 is positioned at the remote location 234.
  • the first image acquisition device 206 has an image resolution higher than that of the second image acquisition device 208 .
  • the first image acquisition device 206 may comprise a number of different devices having a number of different data output formats, as one skilled in the art will readily be able to adapt the process described by the teachings herein to any number of devices and data formats and/or protocols.
  • the first image acquisition device 206 is a high-definition camera, i.e., a camera with a resolution of at least 8000 by 6000 pixels/cm².
  • the second image acquisition device 208 may also comprise a varied number of devices, since one skilled in the art can readily adapt the routine 110 to various devices as discussed above.
  • the second image acquisition device 208 is a camera having a resolution lower than the resolution of the high-resolution device, i.e., a standard definition video camera.
  • the high resolution imagery may have 8000 by 6000 pixels/cm² and the lower resolution image may have 1000 by 1000 pixels/cm².
  • the routine 110 receives input data from the first image acquisition device 206 and corrects the spatial, intensity and chroma distortions in step 202 .
  • the chroma distortions are caused by, for example, lens distortion. This correction is desired in order to improve the accuracy of subsequent steps executed in the routine 110 .
  • Methods are known in the art for computing a parametric function that describes the lens distortion function. For example, the parameters are recovered in step 202 using a calibration procedure as described in H. S. Sawhney and R. Kumar, True Multi-Image Alignment and its Application to Mosaicing and Lens Distortion, Computer Vision and Pattern Recognition Conference proceedings, pages 450-456, 1997, incorporated by reference in its entirety herein.
  • step 202 also performs chrominance (chroma) and intensity corrections. This is necessary since image data from the second image acquisition device 208 is merged with data from the first image acquisition device 206, and any differences in the device response to scene color and intensity or due to lens vignetting, for example, result in image artifacts in the synthesized image 114.
  • the correction is performed by pre-calibrating the devices (i.e., the first image acquisition device 206 and the second image acquisition device 208 ) such that the mapping of chroma and intensity from one device to the next is known.
  • the measured chroma and intensity from each device is stored as a look-up table or a parametric function.
  • the look-up table or parametric function is then accessed to perform the chroma and intensity corrections in order to match the chroma and intensity of the other device.
  • Input data from the second image acquisition device 208 is also corrected for spatial, intensity and chroma distortions in step 204 .
  • the process for correcting the low-resolution distortions in step 204 follows the same process as the corrections performed in step 202.
  • the chroma and intensity correction between the high resolution and low resolution imaging devices may also be performed by automatically aligning images based on parallax or temporal optical flow computation either in a pre-calibration step using fixed patterns or through an online computation as a part of the frame synthesis process.
  • regions of alignment and misalignment are labeled using a quality of alignment metric.
  • parametric transformations are computed that represent color and intensity transformations between the cameras. With the knowledge of each parametric transformation, the source color pixels can be transformed into the destination color pixels that completely match the original destination pixels.
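  • As an illustration of such a parametric transformation, the sketch below fits a per-channel gain/offset model from source to destination colors using only pixels flagged as well aligned, and then applies it. The linear model, the function names, and the alignment mask are illustrative assumptions, not the specific parameterization used by the routine.

```python
import numpy as np

def fit_color_transform(src, dst, aligned_mask):
    """Least-squares gain/offset per channel, using only well-aligned pixels."""
    params = []
    for c in range(src.shape[2]):
        s = src[..., c][aligned_mask]
        d = dst[..., c][aligned_mask]
        A = np.stack([s, np.ones_like(s)], axis=1)       # model: d = gain*s + offset
        gain_offset, *_ = np.linalg.lstsq(A, d, rcond=None)
        params.append(gain_offset)
    return np.array(params)                              # shape (channels, 2)

def apply_color_transform(src, params):
    """Map source colors toward the destination device's response."""
    out = np.empty_like(src, dtype=np.float64)
    for c, (gain, offset) in enumerate(params):
        out[..., c] = gain * src[..., c] + offset
    return out
```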
  • The corrected high-resolution data from step 202 is subsequently filtered and subsampled in step 210.
  • the purpose of step 210 is to reduce the resolution of the high-resolution imagery such that it matches the resolution of the low-resolution image.
  • Step 210 is necessary since features that appear in the high-resolution imagery may not be present in the low-resolution imagery, and cause errors in a depth recovery process (step 306 detailed in FIG. 3 below). Specifically, these errors are caused since the depth recovery process 306 attempts to determine the correspondence between the high-resolution imagery and the low-resolution imagery, and if features are present in one image and not the other, then the correspondence process is inherently error-prone.
  • the step 210 is performed by first calculating the difference in spatial resolution between the high-resolution and low-resolution devices. From the difference in spatial resolution, a convolution kernel can be computed that reduces the high-frequency components in the high-resolution imagery such that the remaining frequency components match those components in the low-resolution imager. This can be performed using standard sampling theory (e.g., see P. J. Burt and E. H. Adelson, The Laplacian Pyramid as a Compact Image Code, IEEE Transactions on Communication, Vol. 31, pages 532-540, 1983, incorporated by reference herein in its entirety).
  • an appropriate filter kernel is [1,4,6,4,1]/16. This filter is applied first vertically, then horizontally.
  • the high-resolution image can then be sub-sampled by a factor of 2 so that the spatial sampling of the image data derived from the high-resolution imager matches that of the low-resolution imager.
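  • A minimal sketch of this filter-and-subsample step, assuming the separable [1,4,6,4,1]/16 kernel described above is applied vertically then horizontally, followed by subsampling by a factor of 2 per level; the function name and the use of scipy are illustrative choices.

```python
import numpy as np
from scipy.ndimage import convolve1d

def reduce_resolution(image: np.ndarray, levels: int = 1) -> np.ndarray:
    """Filter with the binomial kernel and subsample by 2, `levels` times."""
    kernel = np.array([1, 4, 6, 4, 1], dtype=np.float64) / 16.0
    out = image.astype(np.float64)
    for _ in range(levels):
        out = convolve1d(out, kernel, axis=0, mode='reflect')   # vertical pass
        out = convolve1d(out, kernel, axis=1, mode='reflect')   # horizontal pass
        out = out[::2, ::2]                                     # subsample by 2
    return out

# Usage sketch: bring a high-resolution image down several octaves so that its
# spatial sampling roughly matches the low-resolution imager.
# low_res_matched = reduce_resolution(high_res, levels=3)
```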
  • the parallax is computed in step 212 at each frame time to determine the relationship between viewpoint 216 and viewpoint 218 in the high-resolution and low-resolution data sets. More specifically, the parallax computation of step 212 computes the displacement of image pixels between the images taken from view point 216 and viewpoint 218 due to their difference in viewpoint of the scene 200 .
  • the pair of images can be left and right images (images from viewpoints 216 and 218 ) to form a stereo pair captured at the same time instant, or a pair of images captured at two closely spaced time intervals, or two images at different time instants during which no substantial independent object motion has taken place.
  • the parallax processing is accomplished using at least two images and, for more accurate results, uses many images, e.g., five.
  • Since this parallax information depends on the relationship between the at least two input images having different viewpoints (216 and 218, respectively) of a scene 200, it is initially computed at the spatial resolution of the lower resolution image. This is accomplished by resampling the high-resolution input image using an appropriate filtering and sub-sampling process, as described above in step 210.
  • the resolution of the input images may be the same. This is a special case of the more general variable resolution case.
  • the parallax computation techniques are identical for both the cases once the high resolution image has been filtered and subsampled to be represented at the resolution of the low resolution image.
  • The computation of step 212 is performed using more or less constrained algorithms depending on the assumptions made about the availability and accuracy of calibration information. In the uncalibrated extreme case, a two-dimensional flow vector is computed for each pixel in the image to which alignment is being performed. If it is known that the epipolar geometry is stable and accurately known, then the computation reduces to a single value for each image point.
  • the computation used to produce image flow information can be constrained to produce parallax information. The techniques described below can be applied to either the flow information or parallax information.
  • step 212 it is advantageous in step 212 to compute parallax with respect to some local parametric surface.
  • This method of computation is known as “plane plus parallax”.
  • the plane plus parallax representation can be used to reduce the size of per-pixel quantities that need to be estimated.
  • parallax may be computed in step 212 as a combination of planar layers with additional out-of-plane component of structure.
  • the procedure for performing the plane plus parallax method is detailed in U.S. patent application Ser. No. 08/493,632, filed Jun. 22, 1995; R.
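  • A rough sketch of the plane plus parallax representation under simplifying assumptions: the second image is first aligned to the reference view by the homography of a chosen plane, and the remaining per-pixel displacement is searched only along the direction toward the epipole, yielding a single residual parallax value per point. The homography, epipole, search range, and window size are assumed inputs, and this is only an illustration of the representation, not the procedure of the cited application.

```python
import numpy as np
from scipy.ndimage import map_coordinates, uniform_filter

def warp_by_homography(img, H):
    """Backward-warp a grayscale img by homography H (reference coords -> img coords)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    pts = np.stack([xs, ys, np.ones_like(xs)])            # homogeneous grid, 3 x h x w
    mapped = np.tensordot(H, pts, axes=1)
    mx, my = mapped[0] / mapped[2], mapped[1] / mapped[2]
    return map_coordinates(img, [my, mx], order=1, mode='nearest')

def residual_parallax(ref, plane_aligned, epipole, max_disp=8, win=5):
    """Per pixel, search a scalar shift along the pixel->epipole direction that
    best matches the plane-aligned image to the reference (windowed SSD)."""
    h, w = ref.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    dirx, diry = epipole[0] - xs, epipole[1] - ys
    norm = np.hypot(dirx, diry) + 1e-9
    dirx, diry = dirx / norm, diry / norm
    best_err = np.full((h, w), np.inf)
    parallax = np.zeros((h, w))
    for k in np.linspace(-max_disp, max_disp, 2 * max_disp + 1):
        shifted = map_coordinates(plane_aligned, [ys + k * diry, xs + k * dirx],
                                  order=1, mode='nearest')
        err = uniform_filter((ref - shifted) ** 2, size=win)   # windowed SSD
        better = err < best_err
        best_err[better] = err[better]
        parallax[better] = k
    return parallax
```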
  • Although step 212 can be satisfied by simply computing parallax using the plane plus parallax method described above, there are a number of techniques that can be used to make the basic two-frame stereo parallax computation of step 212 more robust and reliable. These techniques may be performed singularly or in combination to improve the accuracy of step 212.
  • the techniques are depicted in the block diagram of FIG. 3 and comprise augmentation routines 302, sharpening 304, routines that compute residual parallax 306, occlusion detection 308, and motion analysis 310.
  • the augmentation routines 302 make the basic two-frame stereo parallax computation robust and reliable.
  • One approach divides the images into tiles and, within each tile, the parameterization is of a dominant plane and parallax.
  • the dominant plane could be a frontal plane.
  • the planar parameterization for each tile is constrained through a global rotation and translation (which is either known through pre-calibration of the stereo set up or can be solved for using a direct method).
  • Another augmentation routine 302 handles occlusions and textureless areas that may induce errors into the parallax computation.
  • depth matching across two frames is done using varying window sizes, and from coarse to fine spatial frequencies.
  • a “window” is a region of the image that is being processed to compute parallax information for a point or pixel within the window. Multiple window sizes are used at any given resolution level to test for consistency of depth estimate and the quality of the correlation. Depth estimate is considered reliable only if at least two window sizes produce acceptable correlation levels with consistent depth estimates. Otherwise, the depth at the level which produces unacceptable results is not updated.
  • the depth estimate is ignored and a consistent depth estimate from a larger window size is preferred if available.
  • Areas in which the depth remains undefined are labeled as such so that they can be filled in either using preprocessing, i.e., data from the previous synthetic frame, or through temporal predictions using the low-resolution data, i.e., up-sampling low-resolution data to fill in the labeled area in the synthetic image 114.
  • FIG. 7 depicts an overall image region 702 that is being processed and a plurality of windows 700 A, 700 B, 700 C, 700 D, 700 E used to process the image region.
  • Each window 700 A-E contains the image point 704 for which the parallax information is being generated.
  • Window 700 E is centered on the point 704, while windows 700 A-D are not centered on the point 704 (i.e., the windows are offset from the point 704).
  • Parallax information is computed for each window 700 A-E and the parallax information corresponding to the window having a minimum alignment error and consistent depth estimates is selected as the parallax information for the image point 704 .
  • the size and shape of the windows 700 A-E are for illustrative purposes and do not cover all the possible window configurations that could be used to process the imagery. For example, windows not aligned with the coordinate axes (vertical and horizontal) are also used. In particular, these may be diagonal shaped windows.
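  • The sketch below illustrates the offset-window scheme of FIG. 7 for a single point: a matching error is evaluated over several windows that all contain the point but are centered differently, the disparity from the lowest-error window is kept, and the estimate is rejected when the two best windows disagree. The window offsets, the 1D disparity search, and the tolerance are illustrative assumptions.

```python
import numpy as np

# Window-center offsets (dy, dx) relative to the point, loosely mimicking
# windows 700A-E of FIG. 7: one centered window plus four offset windows.
OFFSETS = [(0, 0), (-3, -3), (-3, 3), (3, -3), (3, 3)]

def disparity_at_point(left, right, y, x, max_disp=16, half=3, tol=1.0):
    """Estimate a 1D disparity at (y, x) from several offset windows.

    Each window contains (y, x) but is centered elsewhere; the estimate with the
    lowest matching error is kept, and None is returned when the two best
    windows disagree by more than `tol` (the consistency test described above)."""
    h, w = left.shape
    candidates = []                               # (error, disparity) per usable window
    for dy, dx in OFFSETS:
        y0, y1 = y + dy - half, y + dy + half + 1
        x0, x1 = x + dx - half, x + dx + half + 1
        if y0 < 0 or x0 - max_disp < 0 or y1 > h or x1 > w:
            continue                              # window or its search range leaves the image
        patch = left[y0:y1, x0:x1]
        errs = [np.sum((patch - right[y0:y1, x0 - d:x1 - d]) ** 2)
                for d in range(max_disp + 1)]
        d_best = int(np.argmin(errs))
        candidates.append((errs[d_best], d_best))
    if len(candidates) < 2:
        return None
    candidates.sort()
    (_, d1), (_, d2) = candidates[0], candidates[1]
    return d1 if abs(d1 - d2) <= tol else None    # require two windows to agree
```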
  • An additional augmentation routine 302 provides an algorithm for computing image location correspondences. First, all potential correspondences at image locations are defined by a given camera rotation and translation at the furthest possible range, and then correspondences are continuously checked at point locations corresponding to successively closer ranges. Consistency between correspondences recovered between adjacent ranges gives a measure of the accuracy of the correspondence.
  • Another augmentation routine 302 avoids blank areas around the perimeter of the synthesized image. Since the high-resolution imagery is being warped such that it appears at a different location, the image borders of the synthesized image may not have a correspondence in the original imagery. Such areas may potentially be left blank.
  • This problem is solved using three approaches. The first approach is to display only a central window of the original and high-resolution imagery, such that the problem area is not displayed. The second approach is to use data from previous synthesized frames to fill in the region at the boundary. The third approach is to filter and up-sample the data from the low-resolution device, and insert that data at the image boundary.
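  • A small sketch of the third approach, assuming a mask of unfilled border pixels is available: those pixels of the synthesized frame are filled from an up-sampled copy of the low-resolution frame. The mask, the zoom factors, and the bilinear up-sampling are illustrative choices.

```python
import numpy as np
from scipy.ndimage import zoom

def fill_border_from_low_res(synth, unfilled_mask, low_res):
    """Fill unfilled pixels of the synthesized frame from up-sampled low-res data."""
    fy = synth.shape[0] / low_res.shape[0]
    fx = synth.shape[1] / low_res.shape[1]
    up = zoom(low_res, (fy, fx), order=1)                  # bilinear up-sampling
    # guard against one-pixel rounding differences from the zoom factors
    up = np.pad(up, ((0, max(0, synth.shape[0] - up.shape[0])),
                     (0, max(0, synth.shape[1] - up.shape[1]))), mode='edge')
    up = up[:synth.shape[0], :synth.shape[1]]
    out = synth.copy()
    out[unfilled_mask] = up[unfilled_mask]
    return out
```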
  • An additional augmentation routine 302 provides an algorithm that imposes global 3D and local (multi-)plane constraints. Specifically, the approach is to represent flow between frame pairs as tiled parametric (with soft constraints across tiles) and smooth residual flow. In addition, even the tiles can be represented in terms of a small number of parametric layers per tile. In the case when there is a global 3D constraint across the two frames (stereo), then the tiles are represented as planar layers where within a patch more than one plane may exist.
  • Another method for improving the quality of the parallax computation of step 212 is to employ a sharpening routine 304 .
  • For example, in the neighborhood of range discontinuities or other rapid transitions, there is typically a region of intermediate estimated parallax due to the finite spatial support used in the computation process 212. Explicit detection of such transitions and subsequent “sharpening” of the parallax field minimize these errors.
  • information from earlier (and potentially later) portions of the image sequence is used to improve synthesis of the high-resolution image 114 . For example, image detail in occluded areas may be visible from the high-resolution device in preceding or subsequent frames. Use of this information requires computation of motion information from frame to frame as well as computation of parallax. However, this additional computation is performed as needed to correct errors rather than on a continual basis during the processing of the entire sequence.
  • the parallax computation of step 212 can be improved by computing the residual parallax (depth) using a method described as follows or an equivalent method that computes residual parallax 306 .
  • One method monitors the depth consistency over time to further constrain depth/disparity computation when a motion stereo sequence is available as is the case, for example, with a high-resolution still image.
  • a rigidity constraint is valid and is exploited in the two-frame computation of depth outlined above.
  • optical flow is computed between the corresponding frames over time. The optical flow serves as a predictor of depth in the new frames.
  • depth computation is accomplished between the pair while being constrained with soft constraints coming from the predicted depth estimate. This can be performed forward and backwards in time. Therefore, any areas for which estimates are available at one time instant but not at another can be filled in for both the time instants.
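  • A sketch of this fill-in idea under assumed conventions: the depth map from one time instant is warped along the optical flow into the next frame's coordinates and used where the current estimate is missing (the same prediction could equally serve as a soft constraint). The flow convention and the validity mask are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def propagate_depth(prev_depth, flow, cur_depth, cur_valid):
    """Predict depth in the current frame from the previous one and fill holes.

    flow[..., 0] and flow[..., 1] give, for each current-frame pixel, the x and y
    displacement back into the previous frame (backward mapping convention)."""
    h, w = prev_depth.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    predicted = map_coordinates(prev_depth,
                                [ys + flow[..., 1], xs + flow[..., 0]],
                                order=1, mode='nearest')
    filled = np.where(cur_valid, cur_depth, predicted)
    return filled, predicted          # `predicted` can also act as a soft constraint
```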
  • Another method of computing residual parallax 306 is to use the optical flow constraint along with a rigidity constraint for simultaneous depth/disparity computation over multiple stereo pairs, i.e., pairs of images over time.
  • the temporal rigidity constraint is parameterized in the depth computation in exactly the same manner as the rigidity constraint between the two frames at the same time instant.
  • the optical flow constraint over time may be employed as a soft constraint as a part of the multi-time instant depth computation.
  • Another method of computing residual parallax 306 is to constrain depth as consistent over time to improve alignment and maintain consistency across the temporal sequence. For example, once depth is recovered at one time instant, the depth at the next frame time can be predicted by shifting the depth by the camera rotation and translation recovered between the old and new frames. This approach can also be extended by propagating the location of identified contours or occlusion boundaries in time to improve parallax or flow computation.
  • An additional approach for computing residual parallax 306 is to directly solve for temporally smooth stereo, rather than solve for instantaneous depth, and impose subsequent constraints to smooth the result.
  • This can be implemented using a combined epipolar and flow constraint. For example, assuming that previous synthesized frames are available, the condition imposed on the newly synthesized frame is that it is consistent with the instantaneous parallax computation and that it is smooth in time with respect to the previously generated frames. This latter condition can be imposed by making a flow-based prediction based on the previous frames and making the difference from that prediction part of the error term.
  • the parallax-based frame (i.e., the warped high-resolution image) can be compared with the flow-based temporally interpolated frame. This comparison can be used either to detect problem areas or to refine the parallax computation.
  • This approach can be used without making rigidity assumptions or in conjunction with a structure/power constraint. In this latter case, the flow-based computation can operate with respect to the residual motion after the rigid part has been compensated.
  • An extension of this technique is to apply the planar constraint across frames along with the global rigid motion constraint across all the tiles in one frame.
  • An additional approach is to enhance the quality of imagery using multiple frames in order to improve parallax estimates, as well as to produce imagery that has higher visual quality.
  • the approach is as follows:
  • [0058] Perform alignment over time using a batch of frames (for example, 11 frames) using the optical flow approaches described above, so that the images are in the same coordinate system.
  • the result is an enhanced image.
  • the approach can be extended so that the approach is performed on pre-filtered images, and not on the raw intensity images.
  • An example of a pre-filter is an oriented band-pass filter, for example, those described in “Two-Dimensional Signal and Image Processing” by Jae Lim, 1990, published by Prentice-Hall, Englewood Cliffs, N.J.
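  • A small sketch of this multi-frame enhancement, assuming the per-frame optical-flow fields that align each frame of the batch to the reference are already available: each frame is warped into the reference coordinate system and the warped frames are averaged (the same code could be run on pre-filtered images instead of raw intensities).

```python
import numpy as np
from scipy.ndimage import map_coordinates

def enhance_from_batch(frames, flows_to_ref):
    """Average a batch of frames after warping each into the reference frame.

    flows_to_ref[k][..., 0] and [..., 1] give, for every reference pixel, the x
    and y displacement into frame k (backward mapping convention assumed)."""
    h, w = frames[0].shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    acc = np.zeros((h, w), dtype=np.float64)
    for frame, flow in zip(frames, flows_to_ref):
        warped = map_coordinates(frame, [ys + flow[..., 1], xs + flow[..., 0]],
                                 order=1, mode='nearest')
        acc += warped
    return acc / len(frames)          # the enhanced (noise-reduced) image
```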
  • A method of computing residual parallax 306 that avoids a potential problem with instability in the three-dimensional structure of the synthetic stereo sequence composed using the synthetic image 114 is to limit the amount of depth change between frames. To reduce this problem, it is important to avoid temporal fluctuations in the extracted parallax structure using temporal smoothing. A simple form of this smoothing can be obtained by simply limiting the amount of change introduced when updating a previous estimate. To do this in a systematic way requires inter-frame motion analysis as well as intra-frame parallax computation to be performed.
  • Occlusion detection 308 is helpful in situations in which an area of the view to be synthesized is not visible from the position of the high-resolution camera. In such situations, it is necessary to use a different source for the image information in that area. Before this can be done, it is necessary to detect that such a situation has occurred. This can be accomplished by comparing results obtained when image correspondence is computed bi-directionally. That is, in areas in which occlusion is not a problem, the estimated displacements from computing right-left correspondence and from computing left-right correspondence agree. In areas of occlusion, they generally do not agree. This leads to a method for detecting occluded regions. Occlusion conditions can also be predicted from the structure of the parallax field itself. To the extent that this is stable over time, areas of likely occlusion can be flagged in the previous frame. The bi-directional technique can then be used to confirm the condition.
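  • A minimal sketch of the bi-directional consistency test, assuming left-to-right and right-to-left disparity maps have already been computed; the disagreement threshold is an illustrative parameter.

```python
import numpy as np

def occlusion_mask(disp_lr, disp_rl, threshold=1.0):
    """Flag pixels where left->right and right->left disparities disagree.

    disp_lr[y, x] maps left pixel x to right pixel x - disp_lr[y, x]; in the
    consistent (non-occluded) case, disp_rl at that right pixel matches disp_lr."""
    h, w = disp_lr.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    xr = np.clip(np.round(xs - disp_lr).astype(int), 0, w - 1)
    back = disp_rl[np.arange(h)[:, None], xr]
    return np.abs(disp_lr - back) > threshold     # True where occlusion is likely
```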
  • Areas of occlusion and more generally areas of mismatch between an original frame and a parallax/flow-warped frame are detected using a quality-of-alignment measure applied to the original and warped frames.
  • One method for generating such a measure is through normalized correlation between the pair of frames. Areas of low variance in both frames are ignored since they do not affect the warped frame. Normalized correlation is defined over a number of different image representations, some of which are: color, intensity, and outputs of oriented and scaled filters.
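  • One possible realization of such a measure is sketched below: windowed normalized correlation between two registered frames (here on a single intensity channel), with regions of low variance in both frames masked out. The window size and variance threshold are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def alignment_quality(a, b, win=7, min_var=1e-3):
    """Windowed normalized correlation between two registered frames.

    Returns a per-pixel correlation in [-1, 1]; pixels whose local variance is
    low in both frames (e.g., for images scaled to [0, 1]) are set to 1.0 so
    that textureless areas are ignored, as described above."""
    mean_a, mean_b = uniform_filter(a, win), uniform_filter(b, win)
    var_a = uniform_filter(a * a, win) - mean_a ** 2
    var_b = uniform_filter(b * b, win) - mean_b ** 2
    cov = uniform_filter(a * b, win) - mean_a * mean_b
    ncc = cov / np.sqrt(np.maximum(var_a * var_b, 1e-12))
    low_texture = (var_a < min_var) & (var_b < min_var)
    return np.where(low_texture, 1.0, np.clip(ncc, -1.0, 1.0))
```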
  • Motion analysis 310 also improves the parallax computation of step 212 .
  • Motion analysis 310 involves analyzing frame-to-frame motion within the captured sequence. This information can be used to solve occlusion problems because regions not visible at one point in time may have been visible (or may become visible) at another point in time. Additionally, the problem of temporal instability can be reduced by requiring consistent three-dimensional structure across several frames of the sequence.
  • Analysis of frame-to-frame motion generally involves parsing the observed image change into components due to viewpoint change (i.e., camera motion), three dimensional structure and object motion.
  • techniques for performing this decomposition and estimating the respective components include direct camera motion estimation, motion parallax estimation, simultaneous motion and parallax estimation, and layer extraction for representation of moving objects or multiple depth surfaces.
  • a key component of these techniques is the “plane plus parallax” representation.
  • parallax structure is represented as the induced motion of a plane (or other parametric surface) plus a residual per pixel parallax map representing the variation of induced motion due to local surface structure.
  • the parallax estimation techniques referred to above are essentially special cases of motion analysis techniques for the case in which camera motion is assumed to be given by the fixed stereo baseline.
  • Once the parallax field has been computed in step 212, it is used to produce the high-resolution synthesized image 114 in a warping step 214.
  • the reader is encouraged to simultaneously refer to FIG. 2 and FIG. 4 for the best understanding of the warping step 214 .
  • In step 214, the process of warping involves two steps: parallax interpolation and image warping. In practice, these two steps are usually combined into one operation as represented by step 214.
  • the computation of step 214 involves accessing a displacement vector specifying a location in the high-resolution source image from the first image acquisition device 206 (step 502 ), accessing the pixels in some neighborhood of the specified location and computing, based on those pixels (step 504 ), an interpolated value for the synthesized pixels that comprise the synthetic image 114 (step 506 ).
  • Step 214 should be performed at the full target image resolution.
  • the interpolation step 506 should be done using at least a bilinear or bicubic interpolation function.
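  • A compact sketch of the combined parallax-interpolation and warping operation: a per-pixel displacement field is applied to the high-resolution source by backward mapping, with bilinear interpolation by default (order=3 gives a cubic-spline variant). The displacement-field convention is an assumption.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(src, disp_x, disp_y, order=1):
    """Backward-warp src at the full target resolution.

    Output pixel (y, x) is sampled from src at (y + disp_y, x + disp_x) using
    bilinear (order=1) or cubic-spline (order=3) interpolation."""
    h, w = disp_x.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    return map_coordinates(src, [ys + disp_y, xs + disp_x],
                           order=order, mode='nearest')
```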
  • the resultant synthesized image 114 has an apparent viewpoint 230 .
  • the apparent viewpoint 230 may be chosen by the user to comprise all viewpoints other than the first viewpoint 216 .
  • Even more effective warping algorithms can make use of motion, parallax, and other information (step 508).
  • the location of depth discontinuities from the depth recovery process can be used to prevent spatial interpolation in the warping across such discontinuities. Such interpolation can cause blurring in such regions.
  • occluded areas can be filled in with information from previous or following frames using flow based warping. The technique described above in the discussion of plane plus parallax is applicable for accomplishing step 508 .
  • temporal scintillation of the synthesized imagery can be reduced using flow information to impose temporal smoothness (step 510 ).
  • This flow information can be both between frames in the synthesized sequence, as well as between the original and synthesized imagery.
  • Scintillation can also be reduced by adaptively peaking pyramid-based appearance descriptors for synthesized regions with the corresponding regions of the original high resolution frames. These can be smoothed over time to reduce “texture flicker.”
  • Temporal flicker in the synthesized frames is avoided by creating a synthesized frame from a window of original resolution frames rather than from just one frame.
  • a window of, for example, five frames is selected.
  • parallax/depth based correspondences are computed as described above.
  • parallax based correspondences are computed (again as described above).
  • quality of alignment maps are computed for each pair of low resolution/high resolution frames.
  • a synthetic high resolution frame is synthesized by compositing the multiple high resolution frames within the window after warping these with their corresponding correspondence maps.
  • the compositing process uses weights that are directly proportional to the quality of alignment at every pixel and the distance of the high resolution frame in time from the current frame. Frames further off in time are given less weight than closer frames.
  • The composite is the weighted average

$$I(p;t) \;=\; \frac{\displaystyle\sum_{t_k} w_c(p;t_k)\, w_t(t_k)\, I_w(p;t_k)}{\displaystyle\sum_{t_k} w_c(p;t_k)\, w_t(t_k)}$$

where I_w(p;t_k) is the high-resolution frame at time t_k warped into the coordinate system of frame t; w_c(p;t_k) is the quality-of-alignment weight between frames t and t_k (this variable is set to zero if the quality measure is below a pre-defined threshold); and w_t(t_k) is a weight that decreases as a function of time away from frame t. Any pixels that are left unfilled by this process are filled from the original (upsampled) frame as described above. An illustration of the concept of temporal windows is shown in FIG. 8.
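  • A sketch of this compositing under the formula above, assuming the warped high-resolution frames, their per-pixel alignment-quality maps, and a simple linearly decaying temporal weight are already available; unfilled pixels fall back to the up-sampled original frame as described.

```python
import numpy as np

def composite_window(warped, quality, frame_times, t, upsampled_original,
                     quality_threshold=0.5):
    """Weighted composite I(p;t) over a temporal window of warped frames.

    warped[k] and quality[k] are the high-resolution frame at time frame_times[k]
    warped to frame t and its per-pixel alignment quality; weights below the
    threshold are zeroed, and the temporal weight decays away from frame t."""
    num = np.zeros_like(warped[0], dtype=np.float64)
    den = np.zeros_like(warped[0], dtype=np.float64)
    max_dt = max(abs(tk - t) for tk in frame_times) + 1.0
    for img, q, tk in zip(warped, quality, frame_times):
        w_c = np.where(q >= quality_threshold, q, 0.0)   # quality-of-alignment weight
        w_t = 1.0 - abs(tk - t) / max_dt                 # decreases away from frame t
        num += w_c * w_t * img
        den += w_c * w_t
    return np.where(den > 0, num / np.maximum(den, 1e-12), upsampled_original)
```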
  • Temporal flicker is also reduced using the constraint that regions of error are typically consistent over time. For example, an occlusion boundary between two frames is typically present in subsequent frames, albeit in a slightly different image location.
  • the quality of alignment metric can be computed as described above and this quality metric itself can be tracked over time in order to locate the movement of problematic regions such as occlusion boundaries.
  • the flow estimation method described above can be used to track the quality metric and associated occlusion boundaries. Once these boundaries have been aligned, the compositing result computed above can be processed to reduce flicker. For example the compositing result can be smoothed over time.
  • the warping step 214 can also be performed using data collected over an image patch, rather than just a small neighborhood of pixels.
  • the image can be split up into a number of separate regions, and the resampling is performed based on the area covered by the region in the target image (step 512 ).
  • the depth recovery may not produce completely precise depth estimates at each image pixel. This can result in a difference between the desired intensity or chroma value and the values produced from the original high-resolution imagery.
  • the warping module can then choose to select one or more of the following options as a depth recovery technique (step 514 ), either separately, or in combination:
  • Use of Just Noticeable Difference (JND) measures performed on the synthesized sequence, comparing the difference between a low-resolution form of the synthesized data and data from the low-resolution camera.
  • Various JND measures are described in U.S. patent application Ser. No. 09/055,076, filed Apr. 3, 1998, Ser. No. 08/829,540, filed Mar. 28, 1997, Ser. No. 08/829,516, filed Mar. 28, 1997, and Ser. No. 08/828,161, filed Mar. 28, 1997 and U.S. Pat. Nos. 5,738,430 and 5,694,491, all of which are incorporated herein by reference in their entireties. Additionally, the JND can be performed between the synthesized high-resolution image data, and the previous synthesized high-resolution image after being warped by the flow field computed from the parallax computation in step 212.
  • the routine 110 receives the input 112 from a plurality of image acquisition devices 503 comprising the first image acquisition device 206 , the second image acquisition device 208 and a third low-resolution image acquisition device 502 . Additional low resolution image acquisition devices may be added as needed.
  • the first, second and third image acquisition devices, 206 , 208 and 502 view the scene 200 respectively from a first viewpoint 216 , a second viewpoint 218 and a third viewpoint 504 .
  • the routine 110 receives and processes the input data from the image acquisition devices, 206, 208 and 502 as discussed above with reference to steps 202, 204, 210, 212 and 214.
  • the additional image(s) received from the at least third image acquisition device 502 provides data that is used in concert with the data received from the second image acquisition device 208 during the parallax computation step 212 and the warping step 214 to enhance the quality of the synthetic image 114, particularly the ability to place the apparent viewpoint 230 in locations not containing one of the image acquisition devices (i.e., the greater number of image acquisition devices used results in having more lower-resolution data available to interpolate and fill in occluded or textureless areas in the synthesized image).
  • a third embodiment of the routine 110 can be understood in greater detail by referencing FIG. 6.
  • the routine 110 receives the input 112 from the first image acquisition device 206 and the second image acquisition device 208, wherein the low-resolution image acquisition device is one that captures range data, for example, a laser range finder.
  • the first image acquisition device 206 views the scene 200 from a first viewpoint 216 while the second image acquisition device 208 views the scene 200 from a second viewpoint 218 .
  • the routine 110 receives input data from the first image acquisition device 206 and corrects the spatial, intensity and chroma distortions in step 202 as discussed above.
  • the warping step 214 creates the synthesized image 114 by using the range (depth) data acquired from the second image acquisition device 208 .
  • the warping step 214 again is performed as discussed above.

Abstract

A method and apparatus for accurately computing parallax information as captured by imagery of a scene. The method computes the parallax information of each point in an image by computing the parallax within windows that are offset with respect to the point for which the parallax is being computed. Additionally, parallax computations are performed over multiple frames of imagery to ensure accuracy of the parallax computation and to facilitate correction of occluded imagery.

Description

  • This application claims the benefit under 35 United States Code §119 of U.S. Provisional Application No. 60/098,368, filed Aug. 28, 1998, and U.S. Provisional Application No. 60/123,615, filed Mar. 10, 1999, both of which are hereby incorporated by reference in their entirety. [0001]
  • This application contains related subject matter to that of U.S. patent application Ser. No. ______, filed simultaneously herewith (Attorney Docket Number SAR 13165), and incorporated herein by reference in its entirety. [0002]
  • The invention relates to an image processing method and apparatus and, more particularly, the invention relates to a method and apparatus for enhancing the quality of an image. [0003]
  • BACKGROUND OF THE DISCLOSURE
  • For entertainment and other applications, it is useful to obtain high-resolution stereo imagery of a scene so that viewers can visualize the scene in three dimensions. To obtain such high-resolution imagery, the common practice of the prior art is to use two or more high-resolution devices or cameras, displaced from each other. The first high-resolution camera captures an image or image sequence that can be merged with other high-resolution images taken from a viewpoint different than that of the first high-resolution camera, creating a stereo image of the scene. [0004]
  • However, creating stereo imagery with multiple high-resolution cameras can be difficult and very expensive. The number of high-resolution cameras used to record a scene can contribute significantly to the cost of producing the stereo image scene. Additionally, high-resolution cameras are large and unwieldy. As such, the high-resolution cameras are not easy to move about when filming a scene. Consequently, some viewpoints may not be able to be accommodated because of the size of the high-resolution cameras, thus limiting the viewpoints available for creating the stereo image. [0005]
  • Similarly, in other applications, given a collection of captured digital imagery, the need is to generate enhanced imagery for monocular or binocular viewing. Examples of such applications are resolution enhancement of video and other digital imagery; quality enhancement in terms of enhanced focus, depth of field, color and brightness/contrast enhancement; and creation of synthetic imagery from novel viewpoints based on captured digital imagery and videos. [0006]
  • All the above applications involve combining multiple co-temporal digital sensors (cameras, for example) and/or temporally separated sensors for the purpose of creation of synthetic digital imagery. The various applications can be broadly divided along the following lines (but are not limited to these): [0007]
  • 1. Creation of an enhanced digital image by processing one or more frames of imagery from cameras and/or other sensors which have captured the imagery at the same time instant. The synthesized frame represents the view of an enhanced synthetic camera located at the position of one of the real sensors. [0008]
  • 2. Creation of enhanced digital imagery by processing frames that have been captured over time and space (multiple cameras/sensors capturing video imagery over time). The synthesized frames represent enhanced synthetic cameras located at the position of one or more of the real sensors. [0009]
  • 3. Creation of enhanced digital imagery by processing frames that have been captured over time and space (multiple cameras/sensors capturing video imagery over time). The synthesized frames represent enhanced synthetic cameras that are located at positions other than those of the real sensors. [0010]
  • Therefore, a need exists in the art for a method and apparatus for creating a synthetic high-resolution image and/or enhancing images using only one high-resolution camera. [0011]
  • SUMMARY OF THE INVENTION
  • The disadvantages associated with the prior art are overcome by the present invention for a method and apparatus for accurately computing image flow information as captured by imagery of a scene. The invention computes the image flow information of each point in an image by computing the image flow within windows that are offset with respect to the point for which the image flow is being computed. Additionally, image flow computations are performed over multiple frames of imagery to ensure accuracy of the image flow computation and to facilitate correction of occluded imagery. [0012]
  • In one illustrative embodiment of the invention, the image flow computation is constrained to compute parallax information. The imagery and parallax (or flow) information can be used to enhance various image processing techniques such as image resolution enhancement, enhancement of focus, depth of field, color, and brightness. The parallax (or flow) information can also be used to generate a synthetic high-resolution image that can be used in combination with the original image to form a stereo image. Specifically, the apparatus comprises an imaging device for producing images (e.g., video frame sequences) and a scene sensing device for producing information regarding the imaged scene. An image processor uses the information from the scene sensing device to process the images produced by the imaging device. This processing produces parallax information regarding the imaged scene. The imagery from the imaging device and the parallax information can be used to enhance any one of the above-mentioned image processing applications. [0013]
  • The invention includes a method that is embodied in a software routine, or a combination of software and hardware. The inventive method comprises the steps of supplying image data having a first resolution and supplying image information regarding the scene represented by the image data. The image data and information are processed by, for example, warping the first image data to form a synthetic image having a synthetic view, where the viewpoint of the synthetic image is different from the viewpoint represented in the image data. The synthetic image and the original image can be used to compute parallax information regarding the scene. By using multiple frames from the original imagery and the synthetic view imagery, the inventive process improves the accuracy of the parallax computation. [0014]
  • Alternate embodiments of the invention include but are not limited to, utilizing multiple sensors in addition to the scene sensing device to provide greater amounts of scene data for use in enhancing the synthetic image, using a displacement device in conjunction with the second imaging device to create a viewpoint for the warped image that is at the location of the displacement device, and using a range finding device as the second imaging device to provide image depth information.[0015]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The teachings of the present invention can be readily understood by considering the following detailed description in conjunction with the accompanying drawings, in which: [0016]
  • FIG. 1 depicts a block diagram of an imaging apparatus incorporating the image analysis method and apparatus of the invention; [0017]
  • FIG. 2 depicts a block schematic of an imaging apparatus and an image analysis method used to produce one embodiment of the subject invention; [0018]
  • FIG. 3 is a flow chart of the parallax computation method; [0019]
  • FIG. 4 is a flow chart of the image warping method; [0020]
  • FIG. 5 depicts a block diagram of an imaging apparatus and an image analysis method used to produce a second embodiment of the subject invention; [0021]
  • FIG. 6 depicts a block diagram of an imaging apparatus and an image analysis method used to produce a third embodiment of the subject invention; [0022]
  • FIG. 7 depicts a schematic view of multiple offset windows as used to compute parallax at points within an image; and [0023]
  • FIG. 8 depicts an illustration for a process to compute a quality measure for parallax computation accuracy. [0024]
  • To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. [0025]
  • DETAILED DESCRIPTION
  • FIG. 1 depicts a high-resolution synthetic image generation apparatus 100 of the present invention. An input video sequence 112 is supplied to a computer 102. The computer 102 comprises a central processing unit (CPU) 104, support circuits 106, and memory 108. Residing within the memory 108 is a high-resolution synthetic image generation routine 110. The high-resolution synthetic image generation routine 110 may alternately be readable from another source such as a floppy disk, CD, remote memory source or via a network. The computer additionally is coupled to input/output accessories 118. As a brief description of operation, an input video sequence 112 is supplied to the computer 102, which after operation of the high-resolution synthetic image generation routine 110, outputs a synthetic high-resolution image 114. [0026]
  • The high-resolution synthetic image generation routine 110, hereinafter referred to as the routine 110, can be understood in greater detail by referencing FIG. 2. Although the process of the present invention is discussed as being implemented as a software routine 110, some of the method steps that are disclosed therein may be performed in hardware as well as by the software controller. As such, the invention may be implemented in software as executed upon a computer system, in hardware as an application specific integrated circuit or other type of hardware implementation, or a combination of software and hardware. Thus, the reader should note that each step of the routine 110 should also be construed as having an equivalent application specific hardware device (module), or hardware device used in combination with software. [0027]
  • The high-resolution synthetic image generation routine 110 of one illustrative embodiment of the invention receives the input 112 from a first image acquisition device 206 and a second image acquisition device 208. The first image acquisition device 206 views a scene 200 from a first viewpoint 216 while the second image acquisition device 208 views the scene 200 from a second viewpoint 218. The second viewpoint 218 may include the first viewpoint 216 (i.e., the first and second image acquisition devices 206 and 208 may view the scene 200 from the same position). Alternately, a displacement mechanism 232 (e.g., a mirror) positioned in a remote location 234 may be used to make the data captured by the second image acquisition device 208 appear as if the second image acquisition device 208 is positioned at the remote location 234. As such, the scene would be imaged by device 208 from the mirror 232 rather than directly. The first image acquisition device 206 has an image resolution higher than that of the second image acquisition device 208. The first image acquisition device 206 may comprise a number of different devices having a number of different data output formats, as one skilled in the art will readily be able to adapt the process described by the teachings herein to any number of devices and data formats and/or protocols. In one embodiment, the first image acquisition device 206 is a high-definition camera, i.e., a camera with a resolution of at least 8000 by 6000 pixels/cm². Similarly, the second image acquisition device 208 may also comprise a varied number of devices, since one skilled in the art can readily adapt the routine 110 to various devices as discussed above. In one embodiment, the second image acquisition device 208 is a camera having a resolution lower than the resolution of the high-resolution device, i.e., a standard definition video camera. For example, the high resolution imagery may have 8000 by 6000 pixels/cm² and the lower resolution image may have 1000 by 1000 pixels/cm². [0028]
  • The routine [0029] 110 receives input data from the first image acquisition device 206 and corrects the spatial, intensity and chroma distortions in step 202. The spatial distortions are caused by, for example, lens distortion. This correction is desired in order to improve the accuracy of subsequent steps executed in the routine 110. Methods are known in the art for computing a parametric function that describes the lens distortion function. For example, the parameters are recovered in step 202 using a calibration procedure as described in H. S. Sawhney and R. Kumar, True Multi-Image Alignment and its Application to Mosaicing and Lens Distortion, Computer Vision and Pattern Recognition Conference proceedings, pages 450-456, 1997, incorporated by reference in its entirety herein.
  • Additionally, step [0030] 202 also performs chrominance (chroma) and intensity corrections. This is necessary since image data from the second image acquisition device 208 is merged with data from the first image acquisition device 206, and any differences in the device response to scene color and intensity, or due to lens vignetting, for example, result in image artifacts in the synthesized image 114. The correction is performed by pre-calibrating the devices (i.e., the first image acquisition device 206 and the second image acquisition device 208) such that the mapping of chroma and intensity from one device to the next is known. The measured chroma and intensity from each device are stored as a look-up table or a parametric function. The look-up table or parametric equation is then accessed to perform the chroma and intensity corrections in order to match the chroma and intensity of the other device.
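As an illustration of the pre-calibrated correction just described (not part of the original disclosure), a minimal sketch in Python follows; it assumes 8-bit, three-channel images and a per-channel 256-entry look-up table measured during calibration, with all function and variable names being hypothetical.

```python
import numpy as np

def apply_chroma_intensity_lut(image, luts):
    """Map each channel of `image` through a pre-calibrated look-up table so
    that its chroma and intensity response matches the other device.

    image : uint8 array of shape (H, W, 3)
    luts  : uint8 array of shape (3, 256), one table per channel, measured
            during the pre-calibration step.
    """
    corrected = np.empty_like(image)
    for c in range(image.shape[2]):
        corrected[..., c] = luts[c][image[..., c]]
    return corrected
```

A parametric function (e.g., a gain and offset per channel) could be substituted for the look-up table in the same place.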
  • Input data from the second [0031] image acquisition device 208 is also corrected for spatial, intensity and chroma distortions in step 204. The process for correcting the low-resolution distortions in step 204 follows the same process as the corrections performed in step 202.
  • To clarify, the chroma and intensity correction between the high resolution and low resolution imaging devices, or between multiple same resolution imaging devices, may also be performed by automatically aligning images based on parallax or temporal optical flow computation either in a pre-calibration step using fixed patterns or through an online computation as a part of the frame synthesis process. After aligning corresponding frames using the methods described below, regions of alignment and misalignment are labeled using a quality of alignment metric. By using pixels between two or more cameras that have aligned well, parametric transformations are computed that represent color and intensity transformations between the cameras. With the knowledge of each parametric transformation, the source color pixels can be transformed into the destination color pixels that completely match the original destination pixels. [0032]
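A minimal sketch of the kind of parametric color and intensity transformation described above, fit only from pixels flagged as well aligned; the affine-per-channel form, the least-squares fit, and all names are illustrative assumptions rather than the specific transformation used by the disclosure.

```python
import numpy as np

def fit_channel_transform(src, dst, aligned_mask):
    """Estimate a per-channel gain and offset mapping src -> dst using only
    pixels that aligned well (aligned_mask == True)."""
    gains, offsets = [], []
    for c in range(src.shape[2]):
        s = src[..., c][aligned_mask].astype(np.float64)
        d = dst[..., c][aligned_mask].astype(np.float64)
        A = np.stack([s, np.ones_like(s)], axis=1)      # columns: [value, 1]
        (gain, offset), *_ = np.linalg.lstsq(A, d, rcond=None)
        gains.append(gain)
        offsets.append(offset)
    return np.array(gains), np.array(offsets)

def apply_channel_transform(src, gains, offsets):
    """Transform source color pixels toward the destination device's colors."""
    out = src.astype(np.float64) * gains + offsets
    return np.clip(out, 0, 255).astype(np.uint8)
```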
  • The corrected high-resolution data from [0033] step 202 is subsequently filtered and subsampled in step 210. The purpose of step 210 is to reduce the resolution of the high-resolution imagery such that it matches the resolution of the low-resolution image. Step 210 is necessary since features that appear in the high-resolution imagery may not be present in the low-resolution imagery, and cause errors in a depth recovery process (step 306 detailed in FIG. 3 below). Specifically, these errors are caused since the depth recovery process 306 attempts to determine the correspondence between the high-resolution imagery and the low-resolution imagery, and if features are present in one image and not the other, then the correspondence process is inherently error-prone.
  • The [0034] step 210 is performed by first calculating the difference in spatial resolution between the high-resolution and low-resolution devices. From the difference in spatial resolution, a convolution kernel can be computed that reduces the high-frequency components in the high-resolution imagery such that the remaining frequency components match those components in the low-resolution imager. This can be performed using standard sampling theory (e.g., see P. J. Burt and E. H. Adelson, The Laplacian Pyramid as a Compact Image Code, IEEE Transactions on Communication, Vol. 31, pages 532-540, 1983, incorporated by reference herein in its entirety).
  • For example, if the high-resolution and low-resolution imagery were different in spatial resolution by a factor of 2 vertically and horizontally, then an appropriate filter kernel is [1,4,6,4,1]/16. This filter is applied first vertically, then horizontally. The high-resolution image can then be sub-sampled by a factor of 2 so that the spatial sampling of the image data derived from the high-resolution imager matches that of the low-resolution imager. [0035]
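A small sketch of this filter-and-subsample step for a factor-of-two resolution difference, assuming a single-channel image stored as a NumPy array; the implementation details are illustrative.

```python
import numpy as np

def filter_and_subsample_by_2(image):
    """Apply the separable [1,4,6,4,1]/16 kernel vertically, then
    horizontally, and subsample by a factor of 2 in each direction so the
    high-resolution data matches the sampling of the low-resolution imager."""
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    img = image.astype(np.float64)
    img = np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, img)
    img = np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 1, img)
    return img[::2, ::2]
```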
  • Once the high-resolution image data has been filtered and subsampled in [0036] step 210, the parallax is computed in step 212 at each frame time to determine the relationship between viewpoint 216 and viewpoint 218 in the high-resolution and low-resolution data sets. More specifically, the parallax computation of step 212 computes the displacement of image pixels between the images taken from view point 216 and viewpoint 218 due to their difference in viewpoint of the scene 200.
  • The pair of images can be left and right images (images from [0037] viewpoints 216 and 218) to form a stereo pair captured at the same time instant, or a pair of images captured at two closely spaced time intervals, or two images at different time instants during which no substantial independent object motion has taken place. In any of these cases the parallax processing is accomplished using at least two images and, for more accurate results, uses many images, e.g., five.
  • Because this parallax information depends on the relationship between the at least two input images having different viewpoints ([0038] 216 and 218, respectively) of a scene 200, it is initially computed at the spatial resolution of the lower resolution image. This is accomplished by resampling the high-resolution input image using an appropriate filtering and sub-sampling process, as described above in step 210.
  • Generally speaking, the resolution of the input images may be the same. This is a special case of the more general variable resolution case. The parallax computation techniques are identical for both the cases once the high resolution image has been filtered and subsampled to be represented at the resolution of the low resolution image. [0039]
  • The computation of [0040] step 212 is performed using more or less constrained algorithms depending on the assumptions made about the availability and accuracy of calibration information. In the uncalibrated extreme case, a two-dimensional flow vector is computed for each pixel in the image to which alignment is being performed. If it is known that the epipolar geometry is stable and accurately known, then the computation reduces to a single value for each image point. The computation used to produce image flow information can be constrained to produce parallax information. The techniques described below can be applied to either the flow information or parallax information.
  • In many situations, particularly those in which parallax magnitudes are large, it is advantageous in [0041] step 212 to compute parallax with respect to some local parametric surface. This method of computation is known as “plane plus parallax”. The plane plus parallax representation can be used to reduce the size of per-pixel quantities that need to be estimated. For example, in the case where scene 200 comprises an urban scene with many approximately planar facets, parallax may be computed in step 212 as a combination of planar layers with an additional out-of-plane component of structure. The procedure for performing the plane plus parallax method is detailed in U.S. patent application Ser. No. 08/493,632, filed Jun. 22, 1995; R. Kumar et al., Direct Recovery of Shape From Multiple Views: A Parallax Based Approach, 12th ICPR, 1994; Harpreet Sawhney, 3D Geometry From Planar Parallax, CVPR 94, June 1994; and A. Shashua and N. Navab, Relative Affine Structure, Theory and Application to 3D Construction From 2D Views, IEEE Conference on Computer Vision and Pattern Recognition, June 1994, all of which are hereby incorporated by reference.
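For reference, the plane-plus-parallax decomposition used in the literature cited above is commonly written as

p' \cong H_{\pi}\, p + \gamma\, e'

where p and p' are corresponding image points in homogeneous coordinates, H_{\pi} is the homography induced by the reference (dominant) plane, e' is the epipole in the second image, and \gamma is the per-pixel residual parallax due to out-of-plane structure. This summary follows the cited references rather than text of the present disclosure.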
  • Other algorithms are available that can perform parallax analysis in-lieu of the plane plus parallax method. These algorithms generally use a coarse-fine recursive estimation process using multiresolution image pyramid representations. These algorithms begin estimation of image displacements at reduced resolution and then refine these estimates through repeated warping and residual displacement estimation at successively finer resolution levels. The key advantage of these methods is that they provide very efficient computation even when large displacements are present but also provide sub-pixel accuracy in displacement estimates. A number of published papers describe the underlying techniques employed in the parallax computation of [0042] step 212. Details of such techniques can be found in U.S. Pat. No. 5,259,040, issued Nov. 2, 1993; J. R. Bergen et al., Hierarchical Model-Based Motion Estimation, 2nd European Conference on Computer Vision, pages 237-252, 1992; K. J. Hanna, Direct Multi-Resolution Estimation of Ego-Motion and Structure From Motion, IEEE Workshop on Visual Motion, pages 156-162, 1991; K. J. Hanna and Neil E. Okamoto, Combining Stereo and Motion Analysis for Direct Estimation of Scene Structure, International Conference on Computer Vision, pages 357-356, 1993; R. Kumar et al., Direct Recovery of Shape from Multiple Views: A Parallax Based Approach, ICPR, pages 685-688, 1994; and S. Ayer and J. S. Sawhney, Layered Representation of Motion Video Using Robust Maximum-Likelihood Estimation of Mixture Models and MDL Encoding, International Conference on Computer Vision, pages 777-784, 1995, all of which are hereby incorporated by reference.
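A compact, unoptimized sketch of such a coarse-to-fine, pyramid-based estimation is given below for the special case of a one-dimensional (horizontal) disparity; the block-matching cost, window size, and search radius are illustrative assumptions, not the specific algorithms of the cited references.

```python
import numpy as np

def reduce(img):
    """One pyramid level: separable [1,4,6,4,1]/16 blur, then subsample by 2."""
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    img = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return img[::2, ::2]

def coarse_to_fine_disparity(left, right, levels=4, search=2, half=3):
    """Estimate per-pixel horizontal disparity by matching at the coarsest
    pyramid level and refining by +/- `search` pixels at each finer level."""
    pyrL, pyrR = [left.astype(np.float64)], [right.astype(np.float64)]
    for _ in range(levels - 1):
        pyrL.append(reduce(pyrL[-1]))
        pyrR.append(reduce(pyrR[-1]))
    disp = np.zeros_like(pyrL[-1])
    for lvl in range(levels - 1, -1, -1):
        L, R = pyrL[lvl], pyrR[lvl]
        if lvl < levels - 1:                 # upsample and rescale the estimate
            disp = np.kron(disp, np.ones((2, 2)))[:L.shape[0], :L.shape[1]] * 2.0
        refined = disp.copy()
        H, W = L.shape
        for y in range(half, H - half):
            for x in range(half, W - half):
                best_cost, best_d = np.inf, disp[y, x]
                for d in range(-search, search + 1):
                    xr = int(round(x + disp[y, x] + d))
                    if xr - half < 0 or xr + half >= W:
                        continue
                    diff = (L[y-half:y+half+1, x-half:x+half+1] -
                            R[y-half:y+half+1, xr-half:xr+half+1])
                    cost = float(np.sum(diff * diff))
                    if cost < best_cost:
                        best_cost, best_d = cost, disp[y, x] + d
                refined[y, x] = best_d
        disp = refined
    return disp
```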
  • Although the [0043] step 212 can be satisfied by simply computing parallax using the plane plus parallax method described above, there are a number of techniques that can be used to make the basic two-frame stereo parallax computation of step 212 more robust and reliable. These techniques may be performed singly or in combination to improve the accuracy of step 212. The techniques are depicted in the block diagram of FIG. 3 and comprise augmentation routines 302, sharpening 304, routines that compute residual parallax 306, occlusion detection 308, and motion analysis 310.
  • The [0044] augmentation routines 302 make the basic two-frame stereo parallax computation robust and reliable. One approach divides the images into tiles and, within each tile, the parameterization is of a dominant plane and parallax. In particular, the dominant plane could be a frontal plane. The planar parameterization for each tile is constrained through a global rotation and translation (which is either known through pre-calibration of the stereo set up or can be solved for using a direct method).
  • Another [0045] augmentation routine 302 handles occlusions and textureless areas that may induce errors into the parallax computation. To process occlusions and textureless areas, depth matching across two frames is done using varying window sizes, and from coarse to fine spatial frequencies. A “window” is a region of the image that is being processed to compute parallax information for a point or pixel within the window. Multiple window sizes are used at any given resolution level to test for consistency of depth estimate and the quality of the correlation. A depth estimate is considered reliable only if at least two window sizes produce acceptable correlation levels with consistent depth estimates. Otherwise, the depth at the level which produces unacceptable results is not updated. If the window under consideration does not have sufficient texture, the depth estimate is ignored and a consistent depth estimate from a larger window size is preferred if available. Areas in which the depth remains undefined are labeled as such so that they can be filled in either using preprocessing, i.e., data from the previous synthetic frame, or through temporal predictions using the low-resolution data, i.e., up-sampling low-resolution data to fill in the labeled area in the synthetic image 114.
  • Multiple windows are defined in terms of their sizes as well as relative location with respect to the pixel/region for which depth/parallax estimation is performed. Windows are defined both as centered on the pixel for which depth/parallax is desired and as off-centered windows. Along with selection of windows based on a consistent depth estimate, the selection is also accomplished on the basis of error in alignment; specifically, windows that are used to compute parallax information that leads to a minimum alignment error and consistent depth estimates are selected as the parallax information for the point in the image. An illustration of the multi-window concept is shown in FIG. 7. FIG. 7 depicts an [0046] overall image region 702 that is being processed and a plurality of windows 700A, 700B, 700C, 700D, 700E used to process the image region. Each window 700A-E contains the image point 704 for which the parallax information is being generated. Window 700E is centered on the point 704, while windows 700A-D are not centered on the point 704 (i.e., the windows are offset from the point 704). Parallax information is computed for each window 700A-E and the parallax information corresponding to the window having a minimum alignment error and consistent depth estimates is selected as the parallax information for the image point 704. The size and shape of the windows 700A-E are for illustrative purposes and do not cover all the possible window configurations that could be used to process the imagery. For example, windows not aligned with the coordinate axes (vertical and horizontal) are also used. In particular, these may be diagonally shaped windows.
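The following sketch illustrates the multi-window selection idea: the alignment error for a candidate parallax value is evaluated over one centered and several off-centered windows, and the estimate from the window with the minimum error is kept. The window layout, size, and sum-of-squared-differences cost are illustrative assumptions.

```python
import numpy as np

# Offsets (dy, dx) of each window's center relative to the pixel of interest:
# one centered window plus four off-centered ones (cf. windows 700A-700E).
WINDOW_OFFSETS = [(0, 0), (-3, -3), (-3, 3), (3, -3), (3, 3)]

def best_window_error(left, right, y, x, disparity, half=3,
                      offsets=WINDOW_OFFSETS):
    """Return the smallest alignment error over the candidate windows for
    pixel (y, x) at the given candidate disparity."""
    H, W = left.shape
    best = np.inf
    for dy, dx in offsets:
        cy, cx = y + dy, x + dx
        xr = int(round(cx + disparity))
        if (cy - half < 0 or cy + half >= H or cx - half < 0 or
                cx + half >= W or xr - half < 0 or xr + half >= W):
            continue
        a = left[cy-half:cy+half+1, cx-half:cx+half+1].astype(np.float64)
        b = right[cy-half:cy+half+1, xr-half:xr+half+1].astype(np.float64)
        best = min(best, float(np.sum((a - b) ** 2)))
    return best
```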
  • An additional approach for employing an [0047] augmentation routine 302 is to use Just Noticeable Difference (JND) models in the optimization for depth estimation. For example, typically image measures such as intensity difference are used to quantify the error in the depth representation. However, these measures can be supplemented with JND measures that attempt to measure errors that are most visible to a human observer. The approach for employing JND methods is discussed in greater detail below.
  • An [0048] additional augmentation routine 302 provides an algorithm for computing image location correspondences. First, all potential correspondences at image locations are defined by a given camera rotation and translation at the furthest possible range, and then correspondences are continuously checked at point locations corresponding to successively closer ranges. Consistency between correspondences recovered between adjacent ranges gives a measure of the accuracy of the correspondence.
  • Another [0049] augmentation routine 302 avoids blank areas around the perimeter of the synthesized image. Since the high-resolution imagery is being warped such that it appears at a different location, the image borders of the synthesized image may not have a correspondence in the original high-resolution imagery. Such areas may potentially be left blank. This problem is solved using three approaches. The first approach is to display only a central window of the original and high-resolution imagery, such that the problem area is not displayed. The second approach is to use data from previous synthesized frames to fill in the region at the boundary. The third approach is to filter and up-sample the data from the low-resolution device, and insert that data at the image boundary.
  • An [0050] additional augmentation routine 302 provides an algorithm that imposes global 3D and local (multi-)plane constraints. Specifically, the approach is to represent flow between frame pairs as tiled parametric (with soft constraints across tiles) and smooth residual flow. In addition, even the tiles can be represented in terms of a small number of parametric layers per tile. When there is a global 3D constraint across the two frames (stereo), the tiles are represented as planar layers where within a patch more than one plane may exist.
  • Another method for improving the quality of the parallax computation of [0051] step 212 is to employ a sharpening routine 304. For example, in the neighborhood of range discontinuities or other rapid transitions, there is typically a region of intermediate estimated parallax due to the finite spatial support used in the computation process 212. Explicit detection of such transitions and subsequent “sharpening” of the parallax field minimize these errors. As an extension to this basic process, information from earlier (and potentially later) portions of the image sequence is used to improve synthesis of the high-resolution image 114. For example, image detail in occluded areas may be visible from the high-resolution device in preceding or subsequent frames. Use of this information requires computation of motion information from frame to frame as well as computation of parallax. However, this additional computation is performed as needed to correct errors rather than on a continual basis during the processing of the entire sequence.
  • Additionally, the parallax computation of [0052] step 212 can be improved by computing the residual parallax (depth) using a method described as follows or an equivalent method that computes residual parallax 306. One method monitors the depth consistency over time to further constrain depth/disparity computation when a motion stereo sequence is available as is the case, for example, with a hi-resolution still image. Within two images captured at the same time instant, a rigidity constraint is valid and is exploited in the two-frame computation of depth outlined above. For multiple stereo frames, optical flow is computed between the corresponding frames over time. The optical flow serves as a predictor of depth in the new frames. Within the new frames, depth computation is accomplished between the pair while being constrained with soft constraints coming from the predicted depth estimate. This can be performed forward and backwards in time. Therefore, any areas for which estimates are available at one time instant but not at another can be filled in for both the time instants.
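One way to picture the temporal prediction described above is sketched below: the previous frame's depth map is warped by the temporal optical flow and then used as a soft constraint on the newly measured depth. The nearest-neighbour warp and the fixed blending weight are simplifying assumptions for illustration only.

```python
import numpy as np

def predict_depth_from_flow(prev_depth, flow):
    """Warp the previous frame's depth by the temporal flow (flow[...,0] = dx,
    flow[...,1] = dy, mapping previous-frame pixels to the current frame)."""
    H, W = prev_depth.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_x = np.clip(np.round(xs - flow[..., 0]).astype(int), 0, W - 1)
    src_y = np.clip(np.round(ys - flow[..., 1]).astype(int), 0, H - 1)
    return prev_depth[src_y, src_x]

def constrain_depth(measured, predicted, weight=0.3):
    """Blend the new measurement with the prediction as a soft constraint."""
    return (1.0 - weight) * measured + weight * predicted
```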
  • Another method of computing [0053] residual parallax 306 is to use the optical flow constraint along with a rigidity constraint for simultaneous depth/disparity computation over multiple stereo pairs, i.e., pairs of images over time. In particular, if large parts of the scene 200 are rigid, then the temporal rigidity constraint is parameterized in the depth computation in exactly the same manner as the rigidity constraint between the two frames at the same time instant. When there may be independently moving components in the scene 200, the optical flow constraint over time may be employed as a soft constraint as a part of the multi-time instant depth computation.
  • Another method of computing [0054] residual parallax 306 is to constrain depth as consistent over time to improve alignment and maintain consistency across the temporal sequence. For example, once depth is recovered at one time instant, the depth at the next frame time can be predicted by shifting the depth by the camera rotation and translation recovered between the old and new frames. This approach can also be extended by propagating the location of identified contours or occlusion boundaries in time to improve parallax or flow computation.
  • In order to compute a consistent depth map in a given reference frame, multiple frames over time can be used. Regions of the scene that are occluded in one pair (with respect to the reference frame) are generally visible in another image pair taken at some other instant of time. Therefore, in the coordinate system of a reference frame, matching regions from multiple frames can be used to derive a consistent depth/parallax map. [0055]
  • An additional approach for computing [0056] residual parallax 306 is to directly solve for temporally smooth stereo, rather than solve for instantaneous depth, and impose subsequent constraints to smooth the result. This can be implemented using a combined epipolar and flow constraint. For example, assuming that previous synthesized frames are available, the condition imposed on the newly synthesized frame is that it is consistent with the instantaneous parallax computation and that it is smooth in time with respect to the previously generated frames. This latter condition can be imposed by making a flow-based prediction based on the previous frames and making the difference from that prediction part of the error term. Similarly, if a sequence has already been generated, then the parallax-based frame (i.e., the warped high-resolution image) can be compared with the flow-based temporally interpolated frame. This comparison can be used either to detect problem areas or to refine the parallax computation. This approach can be used without making rigidity assumptions or in conjunction with a structure/power constraint. In this latter case, the flow-based computation can operate with respect to the residual motion after the rigid part has been compensated. An extension of this technique is to apply the planar constraint across frames along with the global rigid motion constraint across all the tiles in one frame.
  • An additional approach is to enhance the quality of imagery using multiple frames in order to improve parallax estimates, as well as to produce imagery that has higher visual quality. The approach is as follows: [0057]
  • Perform alignment over time using a batch of frames (11 is an example number of frames) using the optical flow approaches described above, so that the images are in the same coordinate system. [0058]
  • Sort the intensities for the batch of frames. [0059]
  • Perform a SELECTION process. An example is rejecting the top 2 and the lowest 2 intensities in the sorted list at each pixel. [0060]
  • Perform a COMBINATION process. An example is averaging the remaining pixels. [0061]
  • The result is an enhanced image. The approach can be extended so that it is performed on pre-filtered images, and not on the raw intensity images. An example of a pre-filter is an oriented band-pass filter, for example, those described in “Two-dimensional signal and image processing” by Jae Lim, 1990, published by Prentice-Hall, Englewood Cliffs, N.J. [0062]
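A minimal sketch of the select-and-combine enhancement just described, assuming the batch of frames has already been aligned into a common coordinate system; the rejection count of 2 follows the example in the text, and the names are hypothetical.

```python
import numpy as np

def select_and_combine(aligned_frames, reject=2):
    """aligned_frames: array of shape (N, H, W), e.g. a batch of 11 frames
    already warped into one coordinate system. At each pixel, sort the N
    intensities, reject the `reject` highest and lowest values (SELECTION),
    and average the remaining values (COMBINATION)."""
    stack = np.sort(np.asarray(aligned_frames, dtype=np.float64), axis=0)
    kept = stack[reject:stack.shape[0] - reject]
    return kept.mean(axis=0)
```

The same routine could be applied to pre-filtered (e.g., oriented band-pass filtered) versions of the frames, as noted above.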
  • Additionally, a method of computing [0063] residual parallax 306 that avoids potential instability in the three-dimensional structure of the synthetic stereo sequence composed using the synthetic image 114 is to limit the amount of depth change between frames. To reduce this problem, it is important to avoid temporal fluctuations in the extracted parallax structure using temporal smoothing. A simple form of this smoothing can be obtained by simply limiting the amount of change introduced when updating a previous estimate. To do this in a systematic way requires inter-frame motion analysis as well as intra-frame parallax computation to be performed.
  • The multi-window approach described above for the parallax computation is also valid for flow and/or parallax computation over time. Essentially window selection is accomplished based on criterion involving consistency of local displacement vector (flow vector over time) and minimum alignment error between frame pairs as in the case of two-frame parallax/depth computation. [0064]
  • [0065] Occlusion detection 308 is helpful in situations in which an area of the view to be synthesized is not visible from the position of the high-resolution camera. In such situations, it is necessary to use a different source for the image information in that area. Before this can be done, it is necessary to detect that such a situation has occurred. This can be accomplished by comparing results obtained when image correspondence is computed bi-directionally. That is, in areas in which occlusion is not a problem, the estimated displacements from computing right-left correspondence and from computing left-right correspondence agree. In areas of occlusion, they generally do not agree. This leads to a method for detecting occluded regions. Occlusion conditions can also be predicted from the structure of the parallax field itself. To the extent that this is stable over time areas of likely occlusion can be flagged in the previous frame. The bi-directional technique can then be used to confirm the condition.
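A sketch of the bi-directional consistency test for occlusion detection described above; the one-pixel tolerance and the sign convention are assumptions.

```python
import numpy as np

def occlusion_mask(disp_lr, disp_rl, tol=1.0):
    """Flag pixels where left->right and right->left disparities disagree.
    disp_lr[y, x] maps a left pixel to column x + disp_lr[y, x] in the right
    image; for consistent (non-occluded) pixels, disp_rl sampled there should
    be approximately -disp_lr."""
    H, W = disp_lr.shape
    ys, xs = np.mgrid[0:H, 0:W]
    xr = np.clip(np.round(xs + disp_lr).astype(int), 0, W - 1)
    mismatch = np.abs(disp_lr + disp_rl[ys, xr])
    return mismatch > tol          # True where occlusion is likely
```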
  • Areas of occlusion and more generally areas of mismatch between an original frame and a parallax/flow-warped frame are detected using a quality-of-alignment measure applied to the original and warped frames. One method for generating such a measure is through normalized correlation between the pair of frames. Areas of low variance in both the frames are ignored since they do not affect the warped frame. Normalized correlation is defined over a number of different image representations some of which are: color, intensity, outputs of oriented and scaled filters. [0066]
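A sketch of a windowed normalized-correlation quality-of-alignment measure over intensity, with low-variance regions ignored as suggested above; the window size and variance threshold are illustrative.

```python
import numpy as np

def alignment_quality(orig, warped, half=3, min_var=1e-3):
    """Normalized correlation over (2*half+1)^2 windows. Windows where both
    images have low variance are left as NaN (ignored)."""
    H, W = orig.shape
    quality = np.full((H, W), np.nan)
    for y in range(half, H - half):
        for x in range(half, W - half):
            a = orig[y-half:y+half+1, x-half:x+half+1].astype(np.float64)
            b = warped[y-half:y+half+1, x-half:x+half+1].astype(np.float64)
            va, vb = a.var(), b.var()
            if (va < min_var and vb < min_var) or va * vb == 0.0:
                continue                     # low-variance area: ignore
            quality[y, x] = ((a - a.mean()) * (b - b.mean())).mean() / np.sqrt(va * vb)
    return quality
```

The same measure can be computed over other representations (color channels or the outputs of oriented, scaled filters) by substituting those images for the intensity images.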
  • [0067] Motion analysis 310 also improves the parallax computation of step 212. Motion analysis 310 involves analyzing frame-to-frame motion within the captured sequence. This information can be used to solve occlusion problems because regions not visible at one point in time may have been visible (or may become visible) at another point in time. Additionally, the problem of temporal instability can be reduced by requiring consistent three-dimensional structure across several frames of the sequence.
  • Analysis of frame-to-frame motion generally involves parsing the observed image change into components due to viewpoint change (i.e., camera motion), three dimensional structure and object motion. There is a collection of techniques for performing this decomposition and estimating the respective components. These techniques include direct camera motion estimation, motion parallax estimation, simultaneous motion and parallax estimation, and layer extraction for representation of moving objects or multiple depth surfaces. A key component of these techniques is the “plane plus parallax” representation. In this approach, parallax structure is represented as the induced motion of a plane (or other parametric surface) plus a residual per pixel parallax map representing the variation of induced motion due to local surface structure. Computationally, the parallax estimation techniques referred to above are essentially special cases of motion analysis techniques for the case in which camera motion is assumed to be given by the fixed stereo baseline. [0068]
  • Once the parallax field has been computed in [0069] step 212, it is used to produce the high-resolution synthesized image 114 in a warping step 214. The reader is encouraged to simultaneously refer to FIG. 2 and FIG. 4 for the best understanding of the warping step 214.
  • Conceptually, the process of warping involves two steps: parallax interpolation and image warping. In practice, these two steps are usually combined into one operation as represented by [0070] step 214. In either case, for each pixel in the to-be-synthesized image, the computation of step 214 involves accessing a displacement vector specifying a location in the high-resolution source image from the first image acquisition device 206 (step 502), accessing the pixels in some neighborhood of the specified location (step 504), and computing, based on those pixels, an interpolated value for the synthesized pixels that comprise the synthetic image 114 (step 506). Step 214 should be performed at the full target image resolution. Also, to preserve the desired image quality in the synthesized image 114, the interpolation step 506 should be done using at least a bilinear or bicubic interpolation function. The resultant synthesized image 114 has an apparent viewpoint 230. The apparent viewpoint 230 may be chosen by the user to be any viewpoint other than the first viewpoint 216.
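A sketch of the combined parallax-interpolation/warping step for a single-channel image: each target pixel looks up its displacement vector and the high-resolution source is sampled there with bilinear interpolation. The array layout and names are assumptions.

```python
import numpy as np

def synthesize_by_warping(source, displacement):
    """source: high-resolution image (H, W). displacement: (H, W, 2) array
    giving, for each target pixel, the (x, y) offset into `source`.
    Returns the synthesized image using bilinear interpolation."""
    H, W = displacement.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    sx = np.clip(xs + displacement[..., 0], 0, source.shape[1] - 1.001)
    sy = np.clip(ys + displacement[..., 1], 0, source.shape[0] - 1.001)
    x0, y0 = np.floor(sx).astype(int), np.floor(sy).astype(int)
    fx, fy = sx - x0, sy - y0
    top = (1 - fx) * source[y0, x0] + fx * source[y0, x0 + 1]
    bottom = (1 - fx) * source[y0 + 1, x0] + fx * source[y0 + 1, x0 + 1]
    return (1 - fy) * top + fy * bottom
```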
  • Even more effective warping algorithms can make use of motion, parallax, and other information (step [0071] 508). For example, the location of depth discontinuities from the depth recovery process can be used to prevent spatial interpolation in the warping across such discontinuities. Such interpolation can cause blurring in such regions. In addition, occluded areas can be filled in with information from previous or following frames using flow-based warping. The technique described above in the discussion of plane plus parallax is applicable for accomplishing step 508.
  • Also, temporal scintillation of the synthesized imagery can be reduced using flow information to impose temporal smoothness (step [0072] 510). This flow information can be both between frames in the synthesized sequence, as well as between the original and synthesized imagery. Scintillation can also be reduced by adaptively matching pyramid-based appearance descriptors for synthesized regions with the corresponding regions of the original high resolution frames. These can be smoothed over time to reduce “texture flicker.”
  • Temporal flicker in the synthesized frames is avoided by creating a synthesized frame from a window of original resolution frames rather than from just one frame. For example, for the high resolution image synthesis application, a window of, for example, five frames is selected. Between the stereo image pair involving the current low resolution and high resolution frames, parallax/depth based correspondences are computed as described above. Furthermore, between the current low resolution and previous and future high resolution frames within the window, generalized flow and parallax based correspondences are computed (again as described above). Given the multiple correspondence maps between the current low resolution frame and the five high resolution frames within the window, quality of alignment maps are computed for each pair of low resolution/high resolution frames. Subsequently, a synthetic high resolution frame is synthesized by compositing the multiple high resolution frames within the window after warping these with their corresponding correspondence maps. The compositing process uses weights that are directly proportional to the quality of alignment at every pixel and that decrease with the distance of the high resolution frame in time from the current frame. Further off frames are given lesser weight than the closer frames. [0073]

  I(p;t) = \frac{\sum_{t_k} w_c(p;t_k)\, w_t(t_k)\, I_w(p;t_k)}{\sum_{t_k} w_c(p;t_k)\, w_t(t_k)}
  • where [0074] w_c(p;t_k) is the quality-of-alignment weight between frames t and t_k (this variable is set to zero if the quality measure is below a pre-defined threshold); w_t(t_k) is a weight that decreases as a function of time away from frame t; and I_w(p;t_k) is the high resolution frame at time t_k after warping into the coordinate system of frame t. Any pixels that are left unfilled by this process are filled from the original (upsampled) frame as described above. An illustration of the concept of temporal windows is shown in FIG. 8.
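A sketch of this quality- and time-weighted compositing over a temporal window of warped high-resolution frames; the exponential falloff used for w_t and the quality threshold are assumptions.

```python
import numpy as np

def composite_window(warped, wc, frame_times, t, tau=1.0, q_thresh=0.5):
    """warped: (N, H, W) high-resolution frames already warped to time t.
    wc: (N, H, W) per-pixel quality-of-alignment weights; values below
    q_thresh are zeroed, as described above. wt decays with temporal
    distance from t, so farther-off frames get less weight."""
    wc = np.where(wc >= q_thresh, wc, 0.0)
    wt = np.exp(-np.abs(np.asarray(frame_times, dtype=np.float64) - t) / tau)
    weights = wc * wt[:, None, None]
    denom = weights.sum(axis=0)
    num = (weights * np.asarray(warped, dtype=np.float64)).sum(axis=0)
    out = np.where(denom > 0, num / np.maximum(denom, 1e-12), np.nan)
    return out   # NaN pixels are later filled from the upsampled low-res frame
```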
  • For the video enhancement application, the same method can be applied to combine frames over time. Correspondences over time are established using flow estimation as described above. Multiple frames are then combined by quality weighted averaging as above. [0075]
  • Temporal flicker is also reduced using the constraint that regions of error are typically consistent over time. For example, an occlusion boundary between two frames is typically present in subsequent frames, albeit in a slightly different image location. The quality of alignment metric can be computed as described above and this quality metric itself can be tracked over time in order to locate the movement of problematic regions such as occlusion boundaries. The flow estimation method described above can be used to track the quality metric and associated occlusion boundaries. Once these boundaries have been aligned, the compositing result computed above can be processed to reduce flicker. For example the compositing result can be smoothed over time. [0076]
  • The warping [0077] step 214 can also be performed using data collected over an image patch, rather than just a small neighborhood of pixels. For example, the image can be split up into a number of separate regions, and the resampling is performed based on the area covered by the region in the target image (step 512).
  • The depth recovery may not produce completely precise depth estimates at each image pixel. This can result in a difference between the desired intensity or chroma value and the values produced from the original high-resolution imagery. The warping module can then select one or more of the following options as a depth recovery technique (step [0078] 514), either separately or in combination:
  • leave the artifact as it is (step [0079] 516)
  • insert data that has been upsampled from the low-resolution imagery (step 518) [0080]
  • use data that has been previously synthesized (step [0081] 520)
  • allow an operator to manually correct the problem (step [0082] 522).
  • A Just Noticeable Difference (JND) technique can be used for selecting the appropriate combination of choices. The JND measures are performed on the synthesized sequence by comparing the difference between a low-resolution form of the synthesized data and data from the low-resolution camera. Various JND measures are described in U.S. patent application Ser. No. 09/055,076, filed Apr. 3, 1989, Ser. No. 08/829,540, filed Mar. 28, 1997, Ser. No. 08/829,516, filed Mar. 28, 1997, and Ser. No. 08/828,161, filed Mar. 28, 1997 and U.S. Pat. Nos. 5,738,430 and 5,694,491, all of which are incorporated herein by reference in their entireties. Additionally, the JND can be performed between the synthesized high-resolution image data and the previous synthesized high-resolution image after being warped by the flow field computed from the parallax computation in [0083] step 212.
  • Depicted in FIG. 5 is a second embodiment of the routine [0084] 110. The routine 110 receives the input 112 from a plurality of image acquisition devices 503 comprising the first image acquisition device 206, the second image acquisition device 208 and a third low-resolution image acquisition device 502. Additional low resolution image acquisition devices may be added as needed. The first, second and third image acquisition devices, 206, 208 and 502, view the scene 200 respectively from a first viewpoint 216, a second viewpoint 218 and a third viewpoint 504. The routine 110 receives and processes the input data from the image acquisition devices, 206, 208 and 502 as discussed above with reference to steps 202, 204, 210, 212 and 214. The additional image(s) received from the at least third image acquisition device 502 provides data that is used in concert with the data received from the second image acquisition device 208 during the parallax computation step 212 and the warping step 214 to enhance the quality of the synthetic image 114, particularly the ability to place the apparent viewpoint 230 in locations not containing one of the image acquisition devices (i.e., the greater number of image acquisition devices used results in having more lower-resolution data available to interpolate and fill in occluded or textureless areas in the synthesized image).
  • A third embodiment of the routine [0085] 110 can be understood in greater detail by referencing FIG. 6. The routine 110, receives the input 112 from the first image acquisition device 206 and the second image acquisition device 208 wherein the low-resolution image acquisition device captures range data, for example, a laser range finder. The first image acquisition device 206 views the scene 200 from a first viewpoint 216 while the second image acquisition device 208 views the scene 200 from a second viewpoint 218. The routine 110 receives input data from the first image acquisition device 206 and corrects the spatial, intensity and chroma distortions in step 202 as discussed above.
  • After the high-resolution data has been corrected in [0086] step 202, the warping step 214 creates the synthesized image 114 by using the range (depth) data acquired from the second image acquisition device 208. The warping step 214 again is performed as discussed above.
  • Although embodiments which incorporate the teachings of the present invention have been shown and described in detail herein, those skilled in the art can readily devise many other varied embodiments that still incorporate these teachings and the spirit of the invention. [0087]

Claims (24)

What is claimed is:
1. A method for computing image flow information from a plurality of images comprising:
aligning a plurality of images to form an aligned image;
defining a plurality of windows, where each of said windows circumscribes an image region containing a point within said aligned image;
offsetting at least one of said windows from said point;
computing a flow estimation within each of said windows;
identifying the flow estimation having the lowest error; and
deeming said flow estimation associated with said lowest error as said flow information for said point.
2. The method of claim 1 wherein said flow information is constrained to produce parallax information.
3. The method of claim 1 wherein one of said windows is centered upon said point.
4. The method of claim 1 wherein said windows have different sizes.
5. The method of claim 1 wherein said plurality of images comprises a plurality of images and said windows are defined in said aligned images.
6. The method of claim 1 wherein said plurality of images are tiled and pairs of tiles form said plurality of images.
7. The method of claim 1 wherein each said plurality of images are imaged contemporaneously.
8. The method of claim 1 further comprising the steps of:
computing a flow estimate for each of said aligned images;
identifying a flow estimate having a lowest error;
identifying, in response to said flow estimate, errant information in a first aligned image; and
repairing said errant information in said first aligned image with information from at least one other aligned image.
9. The method of claim 1 wherein said flow estimate is constrained to form a parallax estimate.
10. The method of claim 1 wherein said flow estimation is corrected.
11. A method for enhancing regions within a plurality of images comprising:
aligning a plurality of images to form a plurality of aligned images;
computing a flow estimation for each of said aligned images;
identifying flow estimation having the lowest error;
identifying, in response to said flow estimation, regions in a first aligned image; and
enhancing said regions in said first aligned image with information from at least one other aligned image.
12. The method of claim 11 wherein said flow estimation is constrained to form a parallax estimation.
13. The method of claim 11 wherein said computing step further comprises:
computing an epipolar constraint for each of said aligned images; and
computing a flow field representing image changes from aligned image to aligned image.
14. The method of claim 11 wherein said computing step further comprises the step of:
computing a temporal constraint.
15. The method of claim 11 further comprising the steps of:
computing a flow estimation for a second aligned image; and
using the flow estimation from said second aligned image to correct a flow estimation for said first aligned image.
16. The method of claim 11 wherein said region is caused by noise and said enhancing step reduces said noise.
17. A method of determining image flow comprising the steps of:
aligning a plurality of pairs of images in said plurality of images to form a plurality of aligned images;
computing a flow estimation for each of said aligned images to produce a plurality of flow estimates;
weighting the flow estimates; and
compositing an image by combining the weighted flow estimates.
18. The method of claim 17 wherein said flow estimation is constrained to produce parallax estimation.
19. The method of claim 17 wherein said flow estimation is corrected.
20. The method of claim 17 wherein the weighting step weights flow estimates for images over time.
21. Apparatus for enhancing an image comprising:
a first imaging device for producing first images at a first resolution;
a second imaging device for producing second images at a second resolution;
an image processor coupled to said first and said second imaging devices, for using said second image to enhance said first image.
22. The apparatus of claim 21 wherein said image processor comprises:
an image flow generator.
23. The apparatus of claim 22 wherein said image flow generator is a parallax computer.
24. The apparatus of claim 23 wherein said parallax computer further comprises one or more augmentation modules selected from the group consisting of:
a module for dividing the images into tiles, a depth correlator, a module which performs Just Noticeable Differences, a correspondence checker, and a blank area avoidance module.
US10/255,746 1998-08-28 2002-09-26 Method and apparatus for processing images Abandoned US20030190072A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/255,746 US20030190072A1 (en) 1998-08-28 2002-09-26 Method and apparatus for processing images

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US9836898P 1998-08-28 1998-08-28
US12361599P 1999-03-10 1999-03-10
US09/384,118 US6269175B1 (en) 1998-08-28 1999-08-27 Method and apparatus for enhancing regions of aligned images using flow estimation
US09/888,693 US6490364B2 (en) 1998-08-28 2001-06-25 Apparatus for enhancing images using flow estimation
US10/255,746 US20030190072A1 (en) 1998-08-28 2002-09-26 Method and apparatus for processing images

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/888,693 Continuation US6490364B2 (en) 1998-08-28 2001-06-25 Apparatus for enhancing images using flow estimation

Publications (1)

Publication Number Publication Date
US20030190072A1 true US20030190072A1 (en) 2003-10-09

Family

ID=27378585

Family Applications (4)

Application Number Title Priority Date Filing Date
US09/384,118 Expired - Lifetime US6269175B1 (en) 1998-08-28 1999-08-27 Method and apparatus for enhancing regions of aligned images using flow estimation
US09/837,407 Expired - Lifetime US6430304B2 (en) 1998-08-28 2001-04-18 Method and apparatus for processing images to compute image flow information
US09/888,693 Expired - Lifetime US6490364B2 (en) 1998-08-28 2001-06-25 Apparatus for enhancing images using flow estimation
US10/255,746 Abandoned US20030190072A1 (en) 1998-08-28 2002-09-26 Method and apparatus for processing images

Family Applications Before (3)

Application Number Title Priority Date Filing Date
US09/384,118 Expired - Lifetime US6269175B1 (en) 1998-08-28 1999-08-27 Method and apparatus for enhancing regions of aligned images using flow estimation
US09/837,407 Expired - Lifetime US6430304B2 (en) 1998-08-28 2001-04-18 Method and apparatus for processing images to compute image flow information
US09/888,693 Expired - Lifetime US6490364B2 (en) 1998-08-28 2001-06-25 Apparatus for enhancing images using flow estimation

Country Status (5)

Country Link
US (4) US6269175B1 (en)
EP (1) EP1110178A1 (en)
JP (2) JP2003526829A (en)
CA (1) CA2342318A1 (en)
WO (1) WO2000013142A1 (en)

Cited By (100)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050053309A1 (en) * 2003-08-22 2005-03-10 Szczuka Steven J. Image processors and methods of image processing
US20060045383A1 (en) * 2004-08-31 2006-03-02 Picciotto Carl E Displacement estimation system and method
US20060050338A1 (en) * 2004-08-09 2006-03-09 Hiroshi Hattori Three-dimensional-information reconstructing apparatus, method and program
US20060120712A1 (en) * 2004-12-07 2006-06-08 Samsung Electronics Co., Ltd. Method and apparatus for processing image
US20060158730A1 (en) * 2004-06-25 2006-07-20 Masataka Kira Stereoscopic image generating method and apparatus
US20060245640A1 (en) * 2005-04-28 2006-11-02 Szczuka Steven J Methods and apparatus of image processing using drizzle filtering
US20080012856A1 (en) * 2006-07-14 2008-01-17 Daphne Yu Perception-based quality metrics for volume rendering
US20080049970A1 (en) * 2006-02-14 2008-02-28 Fotonation Vision Limited Automatic detection and correction of non-red eye flash defects
US20080101724A1 (en) * 2006-10-31 2008-05-01 Henry Harlyn Baker Constructing arbitrary-plane and multi-arbitrary-plane mosaic composite images from a multi-imager
US20080232711A1 (en) * 2005-11-18 2008-09-25 Fotonation Vision Limited Two Stage Detection for Photographic Eye Artifacts
US20080246759A1 (en) * 2005-02-23 2008-10-09 Craig Summers Automatic Scene Modeling for the 3D Camera and 3D Video
US20080298679A1 (en) * 1997-10-09 2008-12-04 Fotonation Vision Limited Detecting red eye filter and apparaus using meta-data
US20090080797A1 (en) * 2007-09-25 2009-03-26 Fotonation Vision, Ltd. Eye Defect Detection in International Standards Organization Images
US20090207236A1 (en) * 2008-02-19 2009-08-20 Bae Systems Information And Electronic Systems Integration Inc. Focus actuated vergence
US20100014780A1 (en) * 2008-07-16 2010-01-21 Kalayeh Hooshmand M Image stitching and related method therefor
US7738015B2 (en) * 1997-10-09 2010-06-15 Fotonation Vision Limited Red-eye filter method and apparatus
EP2202682A1 (en) * 2007-10-15 2010-06-30 Nippon Telegraph and Telephone Corporation Image generation method, device, its program and program recorded medium
US20100271511A1 (en) * 2009-04-24 2010-10-28 Canon Kabushiki Kaisha Processing multi-view digital images
US7865036B2 (en) 2005-11-18 2011-01-04 Tessera Technologies Ireland Limited Method and apparatus of correcting hybrid flash artifacts in digital images
US7869628B2 (en) 2005-11-18 2011-01-11 Tessera Technologies Ireland Limited Two stage detection for photographic eye artifacts
US20110007137A1 (en) * 2008-01-04 2011-01-13 Janos Rohaly Hierachical processing using image deformation
US7916190B1 (en) 1997-10-09 2011-03-29 Tessera Technologies Ireland Limited Red-eye filter method and apparatus
US20110074927A1 (en) * 2009-09-29 2011-03-31 Perng Ming-Hwei Method for determining ego-motion of moving platform and detection system
US7920723B2 (en) 2005-11-18 2011-04-05 Tessera Technologies Ireland Limited Two stage detection for photographic eye artifacts
US7962629B2 (en) 2005-06-17 2011-06-14 Tessera Technologies Ireland Limited Method for establishing a paired connection between media devices
US7965875B2 (en) 2006-06-12 2011-06-21 Tessera Technologies Ireland Limited Advances in extending the AAM techniques from grayscale to color images
US7995804B2 (en) 2007-03-05 2011-08-09 Tessera Technologies Ireland Limited Red eye false positive filtering using face location and orientation
US8000526B2 (en) 2007-11-08 2011-08-16 Tessera Technologies Ireland Limited Detecting redeye defects in digital images
US8036460B2 (en) 2004-10-28 2011-10-11 DigitalOptics Corporation Europe Limited Analyzing partial face regions for red-eye detection in acquired digital images
US8055067B2 (en) 2007-01-18 2011-11-08 DigitalOptics Corporation Europe Limited Color segmentation
US8081254B2 (en) 2008-08-14 2011-12-20 DigitalOptics Corporation Europe Limited In-camera based method of detecting defect eye with high accuracy
US20110311130A1 (en) * 2010-03-19 2011-12-22 Oki Semiconductor Co., Ltd. Image processing apparatus, method, program, and recording medium
WO2012005947A2 (en) * 2010-07-07 2012-01-12 Spinella Ip Holdings, Inc. System and method for transmission, processing, and rendering of stereoscopic and multi-view images
US8126208B2 (en) 2003-06-26 2012-02-28 DigitalOptics Corporation Europe Limited Digital image processing using face detection information
US8212864B2 (en) 2008-01-30 2012-07-03 DigitalOptics Corporation Europe Limited Methods and apparatuses for using image acquisition data to detect and correct image defects
US20120263373A1 (en) * 2010-05-04 2012-10-18 Bae Systems National Security Solutions Inc. Inverse stereo image matching for change detection
WO2012177166A1 (en) * 2011-06-24 2012-12-27 Intel Corporation An efficient approach to estimate disparity map
US20130088597A1 (en) * 2011-10-05 2013-04-11 L-3 Communications Mobilevision Inc. Multiple resolution camera system for automated license plate recognition and event recording
US20130114892A1 (en) * 2011-11-09 2013-05-09 Canon Kabushiki Kaisha Method and device for generating a super-resolution image portion
US8520093B2 (en) 2003-08-05 2013-08-27 DigitalOptics Corporation Europe Limited Face tracker and partial face tracker for red-eye filter method and apparatus
US8717418B1 (en) * 2011-02-08 2014-05-06 John Prince Real time 3D imaging for remote surveillance
US20140333731A1 (en) * 2008-05-20 2014-11-13 Pelican Imaging Corporation Systems and Methods for Performing Post Capture Refocus Using Images Captured by Camera Arrays
CN104584545A (en) * 2012-08-31 2015-04-29 索尼公司 Image processing device, image processing method, and information processing device
US9025894B2 (en) 2011-09-28 2015-05-05 Pelican Imaging Corporation Systems and methods for decoding light field image files having depth and confidence maps
US9041824B2 (en) 2010-12-14 2015-05-26 Pelican Imaging Corporation Systems and methods for dynamic refocusing of high resolution images generated using images captured by a plurality of imagers
US9049411B2 (en) 2008-05-20 2015-06-02 Pelican Imaging Corporation Camera arrays incorporating 3×3 imager configurations
US9100586B2 (en) 2013-03-14 2015-08-04 Pelican Imaging Corporation Systems and methods for photometric normalization in array cameras
US9100635B2 (en) 2012-06-28 2015-08-04 Pelican Imaging Corporation Systems and methods for detecting defective camera arrays and optic arrays
US9106784B2 (en) 2013-03-13 2015-08-11 Pelican Imaging Corporation Systems and methods for controlling aliasing in images captured by an array camera for use in super-resolution processing
US9123118B2 (en) 2012-08-21 2015-09-01 Pelican Imaging Corporation System and methods for measuring depth using an array camera employing a bayer filter
US9143711B2 (en) 2012-11-13 2015-09-22 Pelican Imaging Corporation Systems and methods for array camera focal plane control
US20150288945A1 (en) * 2014-04-08 2015-10-08 Semyon Nisenzon Generarting 3d images using multiresolution camera clusters
US9185276B2 (en) 2013-11-07 2015-11-10 Pelican Imaging Corporation Methods of manufacturing array camera modules incorporating independently aligned lens stacks
US9210392B2 (en) 2012-05-01 2015-12-08 Pelican Imaging Coporation Camera modules patterned with pi filter groups
US9214013B2 (en) 2012-09-14 2015-12-15 Pelican Imaging Corporation Systems and methods for correcting user identified artifacts in light field images
US9247117B2 (en) 2014-04-07 2016-01-26 Pelican Imaging Corporation Systems and methods for correcting for warpage of a sensor array in an array camera module by introducing warpage into a focal plane of a lens stack array
US9253380B2 (en) 2013-02-24 2016-02-02 Pelican Imaging Corporation Thin form factor computational array cameras and modular array cameras
US9264610B2 (en) 2009-11-20 2016-02-16 Pelican Imaging Corporation Capturing and processing of images including occlusions captured by heterogeneous camera arrays
US9412206B2 (en) 2012-02-21 2016-08-09 Pelican Imaging Corporation Systems and methods for the manipulation of captured light field image data
US9412007B2 (en) 2003-08-05 2016-08-09 Fotonation Limited Partial face detector red-eye filter method and apparatus
US9426361B2 (en) 2013-11-26 2016-08-23 Pelican Imaging Corporation Array camera configurations incorporating multiple constituent array cameras
US9438888B2 (en) 2013-03-15 2016-09-06 Pelican Imaging Corporation Systems and methods for stereo imaging with camera arrays
US9462164B2 (en) 2013-02-21 2016-10-04 Pelican Imaging Corporation Systems and methods for generating compressed light field representation data using captured light fields, array geometry, and parallax information
US9497370B2 (en) 2013-03-15 2016-11-15 Pelican Imaging Corporation Array camera architecture implementing quantum dot color filters
US9497429B2 (en) 2013-03-15 2016-11-15 Pelican Imaging Corporation Extended color processing on pelican array cameras
US20160337635A1 (en) * 2015-05-15 2016-11-17 Semyon Nisenzon Generarting 3d images using multi-resolution camera set
US9516222B2 (en) 2011-06-28 2016-12-06 Kip Peli P1 Lp Array cameras incorporating monolithic array camera modules with high MTF lens stacks for capture of images used in super-resolution processing
US9521319B2 (en) 2014-06-18 2016-12-13 Pelican Imaging Corporation Array cameras and array camera modules including spectral filters disposed outside of a constituent image sensor
US9519972B2 (en) 2013-03-13 2016-12-13 Kip Peli P1 Lp Systems and methods for synthesizing images from image data captured by an array camera using restricted depth of field depth maps in which depth estimation precision varies
US9521416B1 (en) 2013-03-11 2016-12-13 Kip Peli P1 Lp Systems and methods for image data compression
US9578259B2 (en) 2013-03-14 2017-02-21 Fotonation Cayman Limited Systems and methods for reducing motion blur in images or video in ultra low light with array cameras
US9633442B2 (en) 2013-03-15 2017-04-25 Fotonation Cayman Limited Array cameras including an array camera module augmented with a separate camera
US9741118B2 (en) 2013-03-13 2017-08-22 Fotonation Cayman Limited System and methods for calibration of an array camera
US9766380B2 (en) 2012-06-30 2017-09-19 Fotonation Cayman Limited Systems and methods for manufacturing camera modules using active alignment of lens stack arrays and sensors
US9774789B2 (en) 2013-03-08 2017-09-26 Fotonation Cayman Limited Systems and methods for high dynamic range imaging using array cameras
US9794476B2 (en) 2011-09-19 2017-10-17 Fotonation Cayman Limited Systems and methods for controlling aliasing in images captured by an array camera for use in super resolution processing using pixel apertures
US9813616B2 (en) 2012-08-23 2017-11-07 Fotonation Cayman Limited Feature based high resolution motion estimation from low resolution images captured using an array source
US9866739B2 (en) 2011-05-11 2018-01-09 Fotonation Cayman Limited Systems and methods for transmitting and receiving array camera image data
US9888194B2 (en) 2013-03-13 2018-02-06 Fotonation Cayman Limited Array camera architecture implementing quantum film image sensors
US9898856B2 (en) 2013-09-27 2018-02-20 Fotonation Cayman Limited Systems and methods for depth-assisted perspective distortion correction
US9936148B2 (en) 2010-05-12 2018-04-03 Fotonation Cayman Limited Imager array interfaces
US9942474B2 (en) 2015-04-17 2018-04-10 Fotonation Cayman Limited Systems and methods for performing high speed video capture and depth estimation using array cameras
US9955070B2 (en) 2013-03-15 2018-04-24 Fotonation Cayman Limited Systems and methods for synthesizing high resolution images using image deconvolution based on motion and depth information
US9986224B2 (en) 2013-03-10 2018-05-29 Fotonation Cayman Limited System and methods for calibration of an array camera
US10089740B2 (en) 2014-03-07 2018-10-02 Fotonation Limited System and methods for depth regularization and semiautomatic interactive matting using RGB-D images
US10122993B2 (en) 2013-03-15 2018-11-06 Fotonation Limited Autofocus system for a conventional camera that uses depth information from an array camera
US10119808B2 (en) 2013-11-18 2018-11-06 Fotonation Limited Systems and methods for estimating depth from projected texture using camera arrays
KR101937673B1 (en) 2012-09-21 2019-01-14 삼성전자주식회사 GENERATING JNDD(Just Noticeable Depth Difference) MODEL OF 3D DISPLAY, METHOD AND SYSTEM OF ENHANCING DEPTH IMAGE USING THE JNDD MODEL
US10250871B2 (en) 2014-09-29 2019-04-02 Fotonation Limited Systems and methods for dynamic calibration of array cameras
US10390005B2 (en) 2012-09-28 2019-08-20 Fotonation Limited Generating images from light fields utilizing virtual viewpoints
US10482618B2 (en) 2017-08-21 2019-11-19 Fotonation Limited Systems and methods for hybrid depth regularization
US11270110B2 (en) 2019-09-17 2022-03-08 Boston Polarimetrics, Inc. Systems and methods for surface modeling using polarization cues
US11290658B1 (en) 2021-04-15 2022-03-29 Boston Polarimetrics, Inc. Systems and methods for camera exposure control
US11302012B2 (en) 2019-11-30 2022-04-12 Boston Polarimetrics, Inc. Systems and methods for transparent object segmentation using polarization cues
US11525906B2 (en) 2019-10-07 2022-12-13 Intrinsic Innovation Llc Systems and methods for augmentation of sensor systems and imaging systems with polarization
US11580667B2 (en) 2020-01-29 2023-02-14 Intrinsic Innovation Llc Systems and methods for characterizing object pose detection and measurement systems
US11689813B2 (en) 2021-07-01 2023-06-27 Intrinsic Innovation Llc Systems and methods for high dynamic range imaging using crossed polarizers
US11792538B2 (en) 2008-05-20 2023-10-17 Adeia Imaging Llc Capturing and processing of images including occlusions focused on an image sensor by a lens stack array
US11797863B2 (en) 2020-01-30 2023-10-24 Intrinsic Innovation Llc Systems and methods for synthesizing data for training statistical models on different imaging modalities including polarized images
US11954886B2 (en) 2021-04-15 2024-04-09 Intrinsic Innovation Llc Systems and methods for six-degree of freedom pose estimation of deformable objects

Families Citing this family (174)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6269175B1 (en) * 1998-08-28 2001-07-31 Sarnoff Corporation Method and apparatus for enhancing regions of aligned images using flow estimation
US6476873B1 (en) * 1998-10-23 2002-11-05 Vtel Corporation Enhancement of a selectable region of video
DE19860038C1 (en) * 1998-12-23 2000-06-29 Siemens Ag Motion compensation for series of two-dimensional images
CA2369648A1 (en) * 1999-04-16 2000-10-26 Matsushita Electric Industrial Co., Limited Image processing device and monitoring system
US6731790B1 (en) * 1999-10-19 2004-05-04 Agfa-Gevaert Method of enhancing color images
JP4523095B2 (en) * 1999-10-21 2010-08-11 富士通テン株式会社 Information processing apparatus, information integration apparatus, and information processing method
US6714672B1 (en) * 1999-10-27 2004-03-30 Canon Kabushiki Kaisha Automated stereo fundus evaluation
US6466618B1 (en) * 1999-11-19 2002-10-15 Sharp Laboratories Of America, Inc. Resolution improvement for multiple images
US6813371B2 (en) * 1999-12-24 2004-11-02 Aisin Seiki Kabushiki Kaisha On-vehicle camera calibration device
US6513054B1 (en) * 2000-02-22 2003-01-28 The United States Of America As Represented By The Secretary Of The Army Asynchronous parallel arithmetic processor utilizing coefficient polynomial arithmetic (CPA)
US7016551B1 (en) * 2000-04-10 2006-03-21 Fuji Xerox Co., Ltd. Image reader
CA2316610A1 (en) * 2000-08-21 2002-02-21 Finn Uredenhagen System and method for interpolating a target image from a source image
US6987865B1 (en) * 2000-09-09 2006-01-17 Microsoft Corp. System and method for extracting reflection and transparency layers from multiple images
JP4608152B2 (en) * 2000-09-12 2011-01-05 ソニー株式会社 Three-dimensional data processing apparatus, three-dimensional data processing method, and program providing medium
US6784884B1 (en) * 2000-09-29 2004-08-31 Intel Corporation Efficient parametric surface binning based on control points
EP1354292B1 (en) * 2000-12-01 2012-04-04 Imax Corporation Method and apparatus for developing high-resolution imagery
JP2002224982A (en) * 2000-12-01 2002-08-13 Yaskawa Electric Corp Thin substrate transfer robot and detection method of the same
US6751362B2 (en) * 2001-01-11 2004-06-15 Micron Technology, Inc. Pixel resampling system and method for text
WO2002067235A2 (en) * 2001-02-21 2002-08-29 Koninklijke Philips Electronics N.V. Display system for processing a video signal
US6973218B2 (en) * 2001-04-25 2005-12-06 Lockheed Martin Corporation Dynamic range compression
US7103235B2 (en) * 2001-04-25 2006-09-05 Lockheed Martin Corporation Extended range image processing for electro-optical systems
US6901173B2 (en) * 2001-04-25 2005-05-31 Lockheed Martin Corporation Scene-based non-uniformity correction for detector arrays
US20040247157A1 (en) * 2001-06-15 2004-12-09 Ulrich Lages Method for preparing image information
CA2453056A1 (en) * 2001-07-06 2003-01-16 Vision Iii Imaging, Inc. Image segmentation by means of temporal parallax difference induction
US7113634B2 (en) * 2001-07-31 2006-09-26 Canon Kabushiki Kaisha Stereoscopic image forming apparatus, stereoscopic image forming method, stereoscopic image forming system and stereoscopic image forming program
JP4316170B2 (en) * 2001-09-05 2009-08-19 富士フイルム株式会社 Image data creation method and apparatus
KR100415313B1 (en) * 2001-12-24 2004-01-16 한국전자통신연구원 Computation apparatus of optical flow and camera motion using correlation and system model on sequential image
AU2002366985A1 (en) * 2001-12-26 2003-07-30 Yeda Research And Development Co. Ltd. A system and method for increasing space or time resolution in video
CA2478671C (en) * 2002-03-13 2011-09-13 Imax Corporation Systems and methods for digitally re-mastering or otherwise modifying motion pictures or other image sequences data
JP4075418B2 (en) * 2002-03-15 2008-04-16 ソニー株式会社 Image processing apparatus, image processing method, printed material manufacturing apparatus, printed material manufacturing method, and printed material manufacturing system
AU2003226081A1 (en) * 2002-03-25 2003-10-13 The Trustees Of Columbia University In The City Of New York Method and system for enhancing data quality
CA2380105A1 (en) 2002-04-09 2003-10-09 Nicholas Routhier Process and system for encoding and playback of stereoscopic video sequences
US20040001149A1 (en) * 2002-06-28 2004-01-01 Smith Steven Winn Dual-mode surveillance system
EP1567988A1 (en) * 2002-10-15 2005-08-31 University Of Southern California Augmented virtual environments
KR100446636B1 (en) 2002-11-21 2004-09-04 삼성전자주식회사 Apparatus and method for measuring ego motion of autonomous vehicles and 3D shape of object in front of autonomous vehicles
US6847728B2 (en) * 2002-12-09 2005-01-25 Sarnoff Corporation Dynamic depth recovery from multiple synchronized video streams
WO2004056133A1 (en) * 2002-12-16 2004-07-01 Sanyo Electric Co., Ltd. Stereoscopic video creating device and stereoscopic video distributing method
US7340099B2 (en) * 2003-01-17 2008-03-04 University Of New Brunswick System and method for image fusion
DE10302671A1 (en) * 2003-01-24 2004-08-26 Robert Bosch Gmbh Method and device for adjusting an image sensor system
US7345786B2 (en) * 2003-02-18 2008-03-18 Xerox Corporation Method for color cast removal in scanned images
US20040222987A1 (en) * 2003-05-08 2004-11-11 Chang Nelson Liang An Multiframe image processing
US8264576B2 (en) 2007-03-05 2012-09-11 DigitalOptics Corporation Europe Limited RGBW sensor array
US9160897B2 (en) * 2007-06-14 2015-10-13 Fotonation Limited Fast motion estimation method
US8180173B2 (en) 2007-09-21 2012-05-15 DigitalOptics Corporation Europe Limited Flash artifact eye defect correction in blurred images using anisotropic blurring
US7636486B2 (en) * 2004-11-10 2009-12-22 Fotonation Ireland Ltd. Method of determining PSF using multiple instances of a nominally similar scene
US8989516B2 (en) * 2007-09-18 2015-03-24 Fotonation Limited Image processing method and apparatus
US8199222B2 (en) * 2007-03-05 2012-06-12 DigitalOptics Corporation Europe Limited Low-light video frame enhancement
US8417055B2 (en) * 2007-03-05 2013-04-09 DigitalOptics Corporation Europe Limited Image processing method and apparatus
US7639889B2 (en) 2004-11-10 2009-12-29 Fotonation Ireland Ltd. Method of notifying users regarding motion artifacts based on image analysis
US7596284B2 (en) * 2003-07-16 2009-09-29 Hewlett-Packard Development Company, L.P. High resolution image reconstruction
US7593597B2 (en) * 2003-08-06 2009-09-22 Eastman Kodak Company Alignment of lens array images using autocorrelation
US20050036702A1 (en) * 2003-08-12 2005-02-17 Xiaoli Yang System and method to enhance depth of field of digital image from consecutive image taken at different focus
JP3838243B2 (en) * 2003-09-04 2006-10-25 ソニー株式会社 Image processing method, image processing apparatus, and computer program
US20050240612A1 (en) * 2003-10-10 2005-10-27 Holden Carren M Design by space transformation from high to low dimensions
US20050084135A1 (en) * 2003-10-17 2005-04-21 Mei Chen Method and system for estimating displacement in a pair of images
US20050176812A1 (en) * 2003-11-06 2005-08-11 Pamela Cohen Method of treating cancer
EP1542167A1 (en) * 2003-12-09 2005-06-15 Koninklijke Philips Electronics N.V. Computer graphics processor and method for rendering 3D scenes on a 3D image display screen
US20090102973A1 (en) * 2004-01-09 2009-04-23 Harris Scott C Video split device
US20050207486A1 (en) * 2004-03-18 2005-09-22 Sony Corporation Three dimensional acquisition and visualization system for personal electronic devices
US20050219642A1 (en) * 2004-03-30 2005-10-06 Masahiko Yachida Imaging system, image data stream creation apparatus, image generation apparatus, image data stream generation apparatus, and image data stream generation system
US8036494B2 (en) * 2004-04-15 2011-10-11 Hewlett-Packard Development Company, L.P. Enhancing image resolution
US7671916B2 (en) * 2004-06-04 2010-03-02 Electronic Arts Inc. Motion sensor using dual camera inputs
US20050285947A1 (en) * 2004-06-21 2005-12-29 Grindstaff Gene A Real-time stabilization
US7916173B2 (en) * 2004-06-22 2011-03-29 Canon Kabushiki Kaisha Method for detecting and selecting good quality image frames from video
JP2008511080A (en) * 2004-08-23 2008-04-10 サーノフ コーポレーション Method and apparatus for forming a fused image
JP4483483B2 (en) * 2004-08-31 2010-06-16 株式会社ニコン Imaging device
US7545997B2 (en) * 2004-09-10 2009-06-09 Xerox Corporation Simulated high resolution using binary sub-sampling
US7730406B2 (en) * 2004-10-20 2010-06-01 Hewlett-Packard Development Company, L.P. Image processing system and method
US7639888B2 (en) * 2004-11-10 2009-12-29 Fotonation Ireland Ltd. Method and apparatus for initiating subsequent exposures based on determination of motion blurring artifacts
JP4717728B2 (en) * 2005-08-29 2011-07-06 キヤノン株式会社 Stereo display device and control method thereof
TW200806040A (en) * 2006-01-05 2008-01-16 Nippon Telegraph & Telephone Video encoding method and decoding method, apparatuses therefor, programs therefor, and storage media for storing the programs
CA2636858C (en) * 2006-01-27 2015-11-24 Imax Corporation Methods and systems for digitally re-mastering of 2d and 3d motion pictures for exhibition with enhanced visual quality
CN101405767A (en) * 2006-03-15 2009-04-08 皇家飞利浦电子股份有限公司 Method for determining a depth map from images, device for determining a depth map
JP4116649B2 (en) * 2006-05-22 2008-07-09 株式会社東芝 High resolution device and method
IES20070229A2 (en) * 2006-06-05 2007-10-03 Fotonation Vision Ltd Image acquisition method and apparatus
KR100762670B1 (en) * 2006-06-07 2007-10-01 삼성전자주식회사 Method and device for generating disparity map from stereo image and stereo matching method and device therefor
US8340349B2 (en) * 2006-06-20 2012-12-25 Sri International Moving target detection in the presence of parallax
CA2653815C (en) 2006-06-23 2016-10-04 Imax Corporation Methods and systems for converting 2d motion pictures for stereoscopic 3d exhibition
WO2008029345A1 (en) * 2006-09-04 2008-03-13 Koninklijke Philips Electronics N.V. Method for determining a depth map from images, device for determining a depth map
JP4818053B2 (en) 2006-10-10 2011-11-16 株式会社東芝 High resolution device and method
DE102006055641B4 (en) * 2006-11-22 2013-01-31 Visumotion Gmbh Arrangement and method for recording and reproducing images of a scene and / or an object
US20080212895A1 (en) * 2007-01-09 2008-09-04 Lockheed Martin Corporation Image data processing techniques for highly undersampled images
US7773118B2 (en) * 2007-03-25 2010-08-10 Fotonation Vision Limited Handheld article with movement discrimination
EP2179398B1 (en) * 2007-08-22 2011-03-02 Honda Research Institute Europe GmbH Estimating objects proper motion using optical flow, kinematics and depth information
WO2009051062A1 (en) * 2007-10-15 2009-04-23 Nippon Telegraph And Telephone Corporation Image generation method, device, its program and recording medium stored with program
US8497905B2 (en) * 2008-04-11 2013-07-30 nearmap australia pty ltd. Systems and methods of capturing large area images in detail including cascaded cameras and/or calibration features
US8675068B2 (en) 2008-04-11 2014-03-18 Nearmap Australia Pty Ltd Systems and methods of capturing large area images in detail including cascaded cameras and/or calibration features
JP4843640B2 (en) * 2008-05-07 2011-12-21 日本放送協会 3D information generation apparatus and 3D information generation program
FR2932911A1 (en) * 2008-06-24 2009-12-25 France Telecom Method and device for filling the occlusion zones of a depth map or of disparities estimated from at least two images
JP4513906B2 (en) * 2008-06-27 2010-07-28 ソニー株式会社 Image processing apparatus, image processing method, program, and recording medium
JP2010034964A (en) * 2008-07-30 2010-02-12 Sharp Corp Image composition apparatus, image composition method and image composition program
JP5238429B2 (en) * 2008-09-25 2013-07-17 株式会社東芝 Stereoscopic image capturing apparatus and stereoscopic image capturing system
US8903191B2 (en) * 2008-12-30 2014-12-02 Intel Corporation Method and apparatus for noise reduction in video
US8478067B2 (en) * 2009-01-27 2013-07-02 Harris Corporation Processing of remotely acquired imaging data including moving objects
US8363067B1 (en) * 2009-02-05 2013-01-29 Matrox Graphics, Inc. Processing multiple regions of an image in a graphics display system
US8260086B2 (en) * 2009-03-06 2012-09-04 Harris Corporation System and method for fusion of image pairs utilizing atmospheric and solar illumination modeling
US8111300B2 (en) * 2009-04-22 2012-02-07 Qualcomm Incorporated System and method to selectively combine video frame image data
US8639046B2 (en) * 2009-05-04 2014-01-28 Mamigo Inc Method and system for scalable multi-user interactive visualization
US20120044327A1 (en) * 2009-05-07 2012-02-23 Shinichi Horita Device for acquiring stereo image
US9380292B2 (en) 2009-07-31 2016-06-28 3Dmedia Corporation Methods, systems, and computer-readable storage media for generating three-dimensional (3D) images of a scene
US8436893B2 (en) 2009-07-31 2013-05-07 3Dmedia Corporation Methods, systems, and computer-readable storage media for selecting image capture positions to generate three-dimensional (3D) images
US8508580B2 (en) * 2009-07-31 2013-08-13 3Dmedia Corporation Methods, systems, and computer-readable storage media for creating three-dimensional (3D) images of a scene
JP2011091527A (en) * 2009-10-21 2011-05-06 Panasonic Corp Video conversion device and imaging apparatus
AU2009243439A1 (en) * 2009-11-30 2011-06-16 Canon Kabushiki Kaisha Robust image alignment for distributed multi-view imaging systems
US20120120207A1 (en) * 2009-12-28 2012-05-17 Hiroaki Shimazaki Image playback device and display device
WO2011096136A1 (en) * 2010-02-02 2011-08-11 コニカミノルタホールディングス株式会社 Simulated image generating device and simulated image generating method
JP5387856B2 (en) * 2010-02-16 2014-01-15 ソニー株式会社 Image processing apparatus, image processing method, image processing program, and imaging apparatus
RS62794B1 (en) * 2010-04-13 2022-02-28 Ge Video Compression Llc Inheritance in sample array multitree subdivision
CN106412606B (en) 2010-04-13 2020-03-27 Ge视频压缩有限责任公司 Method for decoding data stream, method for generating data stream
HUE045693T2 (en) 2010-04-13 2020-01-28 Ge Video Compression Llc Video coding using multi-tree sub-divisions of images
WO2011128366A1 (en) 2010-04-13 2011-10-20 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Sample region merging
KR101665567B1 (en) * 2010-05-20 2016-10-12 삼성전자주식회사 Temporal interpolation of three dimension depth image method and apparatus
JP5627498B2 (en) * 2010-07-08 2014-11-19 株式会社東芝 Stereo image generating apparatus and method
JP5140210B2 (en) 2010-08-31 2013-02-06 パナソニック株式会社 Imaging apparatus and image processing method
JP5204349B2 (en) * 2010-08-31 2013-06-05 パナソニック株式会社 Imaging apparatus, playback apparatus, and image processing method
JP5204350B2 (en) * 2010-08-31 2013-06-05 パナソニック株式会社 Imaging apparatus, playback apparatus, and image processing method
US20120056982A1 (en) * 2010-09-08 2012-03-08 Microsoft Corporation Depth camera based on structured light and stereo vision
JP2012085252A (en) * 2010-09-17 2012-04-26 Panasonic Corp Image generation device, image generation method, program, and recording medium with program recorded thereon
KR101682137B1 (en) * 2010-10-25 2016-12-05 삼성전자주식회사 Method and apparatus for temporally-consistent disparity estimation using texture and motion detection
US10200671B2 (en) * 2010-12-27 2019-02-05 3Dmedia Corporation Primary and auxiliary image capture devices for image processing and related methods
US8274552B2 (en) * 2010-12-27 2012-09-25 3Dmedia Corporation Primary and auxiliary image capture devices for image processing and related methods
BR112013026538B1 (en) 2011-04-19 2022-06-07 Dolby Laboratories Licensing Corporation Highlight projector system, display system and method for displaying an image according to image data
JP2012249038A (en) * 2011-05-27 2012-12-13 Hitachi Consumer Electronics Co Ltd Image signal processing apparatus and image signal processing method
WO2012168322A2 (en) 2011-06-06 2012-12-13 3Shape A/S Dual-resolution 3d scanner
US20140114679A1 (en) * 2011-06-27 2014-04-24 High Tech Campus 5 Method of anatomical tagging of findings in image data
WO2013003276A1 (en) 2011-06-28 2013-01-03 Pelican Imaging Corporation Optical arrangements for use with an array camera
JP2013038602A (en) * 2011-08-08 2013-02-21 Sony Corp Image processor, image processing method, and program
AU2012307095B2 (en) * 2011-09-07 2017-03-30 Commonwealth Scientific And Industrial Research Organisation System and method for three-dimensional surface imaging
JP5912382B2 (en) * 2011-10-03 2016-04-27 ソニー株式会社 Imaging apparatus and video recording / reproducing system
JP5412692B2 (en) * 2011-10-04 2014-02-12 株式会社モルフォ Image processing apparatus, image processing method, image processing program, and recording medium
FR2983998B1 (en) * 2011-12-08 2016-02-26 Univ Pierre Et Marie Curie Paris 6 Method for 3D reconstruction of a scene using asynchronous sensors
JP6167525B2 (en) * 2012-03-21 2017-07-26 株式会社リコー Distance measuring device and vehicle
US9031357B2 (en) * 2012-05-04 2015-05-12 Microsoft Technology Licensing, Llc Recovering dis-occluded areas using temporal information integration
US9237326B2 (en) * 2012-06-27 2016-01-12 Imec Taiwan Co. Imaging system and method
CN103546736B (en) * 2012-07-12 2016-12-28 三星电子株式会社 Image processing equipment and method
JP2014027448A (en) * 2012-07-26 2014-02-06 Sony Corp Information processing apparatus, information processing method, and program
US10063757B2 (en) * 2012-11-21 2018-08-28 Infineon Technologies Ag Dynamic conservation of imaging power
WO2014083574A2 (en) * 2012-11-30 2014-06-05 Larsen & Toubro Limited A method and system for extended depth of field calculation for microscopic images
US9897792B2 (en) 2012-11-30 2018-02-20 L&T Technology Services Limited Method and system for extended depth of field calculation for microscopic images
TWI591584B (en) 2012-12-26 2017-07-11 財團法人工業技術研究院 Three dimensional sensing method and three dimensional sensing apparatus
CN103083089B (en) * 2012-12-27 2014-11-12 广东圣洋信息科技实业有限公司 Virtual scale method and system of digital stereo-micrography system
US9426451B2 (en) * 2013-03-15 2016-08-23 Digimarc Corporation Cooperative photography
US9886636B2 (en) * 2013-05-23 2018-02-06 GM Global Technology Operations LLC Enhanced top-down view generation in a front curb viewing system
WO2015005163A1 (en) * 2013-07-12 2015-01-15 三菱電機株式会社 High-resolution image generation device, high-resolution image generation method, and high-resolution image generation program
KR102125525B1 (en) * 2013-11-20 2020-06-23 삼성전자주식회사 Method for processing image and electronic device thereof
US10026010B2 (en) * 2014-05-14 2018-07-17 At&T Intellectual Property I, L.P. Image quality estimation using a reference image portion
US20230027499A1 (en) * 2014-05-15 2023-01-26 Mtt Innovation Incorporated Optimizing drive schemes for multiple projector systems
JP6788504B2 (en) * 2014-05-15 2020-11-25 Mtt Innovation Incorporated Optimizing drive scheme for multiple projector systems
US10306125B2 (en) 2014-10-09 2019-05-28 Belkin International, Inc. Video camera with privacy
US9179105B1 (en) 2014-09-15 2015-11-03 Belkin International, Inc. Control of video camera with privacy feedback
JP6474278B2 (en) * 2015-02-27 2019-02-27 株式会社ソニー・インタラクティブエンタテインメント Image generation system, image generation method, program, and information storage medium
US10713610B2 (en) * 2015-12-22 2020-07-14 Symbol Technologies, Llc Methods and systems for occlusion detection and data correction for container-fullness estimation
US9940730B2 (en) 2015-11-18 2018-04-10 Symbol Technologies, Llc Methods and systems for automatic fullness estimation of containers
JP6934887B2 (en) * 2015-12-31 2021-09-15 エムエル ネザーランズ セー.フェー. Methods and systems for real-time 3D capture and live feedback with monocular cameras
US20190026924A1 (en) * 2016-01-15 2019-01-24 Nokia Technologies Oy Method and Apparatus for Calibration of a Multi-Camera System
US9870638B2 (en) 2016-02-24 2018-01-16 Ondrej Jamriška Appearance transfer techniques
US9852523B2 (en) * 2016-02-24 2017-12-26 Ondrej Jamriška Appearance transfer techniques maintaining temporal coherence
JP6237811B2 (en) * 2016-04-01 2017-11-29 ソニー株式会社 Imaging apparatus and video recording / reproducing system
US10057562B2 (en) 2016-04-06 2018-08-21 Facebook, Inc. Generating intermediate views using optical flow
US9934615B2 (en) 2016-04-06 2018-04-03 Facebook, Inc. Transition between binocular and monocular views
US10027954B2 (en) 2016-05-23 2018-07-17 Microsoft Technology Licensing, Llc Registering cameras in a multi-camera imager
US10326979B2 (en) 2016-05-23 2019-06-18 Microsoft Technology Licensing, Llc Imaging system comprising real-time image registration
US10339662B2 (en) 2016-05-23 2019-07-02 Microsoft Technology Licensing, Llc Registering cameras with virtual fiducials
ES2846864T3 (en) * 2016-07-12 2021-07-29 Sz Dji Technology Co Ltd Image processing to obtain environmental information
JP6932487B2 (en) * 2016-07-29 2021-09-08 キヤノン株式会社 Mobile monitoring device
US10796425B1 (en) * 2016-09-06 2020-10-06 Amazon Technologies, Inc. Imagery-based member deformation gauge
JP7159057B2 (en) * 2017-02-10 2022-10-24 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Free-viewpoint video generation method and free-viewpoint video generation system
KR102455632B1 (en) * 2017-09-14 2022-10-17 삼성전자주식회사 Method and apparatus for stereo matching
CN108234988A (en) * 2017-12-28 2018-06-29 努比亚技术有限公司 Parallax map generating method, device and computer-readable storage medium
US10783656B2 (en) 2018-05-18 2020-09-22 Zebra Technologies Corporation System and method of determining a location for placement of a package
CN109263557B (en) * 2018-11-19 2020-10-09 威盛电子股份有限公司 Vehicle blind area detection method
US11450018B1 (en) * 2019-12-24 2022-09-20 X Development Llc Fusing multiple depth sensing modalities
US11741625B2 (en) * 2020-06-12 2023-08-29 Elphel, Inc. Systems and methods for thermal imaging
DE102021203812B4 (en) 2021-04-16 2023-04-13 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung eingetragener Verein Optical measuring device and method for determining a multidimensional surface model

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5265172A (en) * 1989-10-13 1993-11-23 Texas Instruments Incorporated Method and apparatus for producing optical flow using multi-spectral images
US5257209A (en) * 1990-06-26 1993-10-26 Texas Instruments Incorporated Optical flow computation for moving sensors
US5627905A (en) * 1994-12-12 1997-05-06 Lockheed Martin Tactical Defense Systems Optical flow detection system
JP3539788B2 (en) 1995-04-21 2004-07-07 パナソニック モバイルコミュニケーションズ株式会社 Image matching method
JPH09212648A (en) * 1996-01-31 1997-08-15 Toshiba Corp Moving image processing method
US6081606A (en) * 1996-06-17 2000-06-27 Sarnoff Corporation Apparatus and a method for detecting motion within an image sequence
JPH1091795A (en) * 1996-09-12 1998-04-10 Toshiba Corp Device for detecting mobile object and method therefor
US5949914A (en) * 1997-03-17 1999-09-07 Space Imaging Lp Enhancing the resolution of multi-spectral image data with panchromatic image data using super resolution pan-sharpening
US6043838A (en) 1997-11-07 2000-03-28 General Instrument Corporation View offset estimation for stereoscopic video coding
US6192156B1 (en) * 1998-04-03 2001-02-20 Synapix, Inc. Feature tracking using a dense feature array
US6298144B1 (en) * 1998-05-20 2001-10-02 The United States Of America As Represented By The National Security Agency Device for and method of detecting motion in an image

Patent Citations (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4683496A (en) * 1985-08-23 1987-07-28 The Analytic Sciences Corporation System for and method of enhancing images using multiband information
US4924521A (en) * 1987-12-18 1990-05-08 International Business Machines Corporation Image processing system and method employing combined black and white and gray scale image data
US5241372A (en) * 1990-11-30 1993-08-31 Sony Corporation Video image processing apparatus including convolution filter means to process pixels of a video image by a set of parameter coefficients
US5259040A (en) * 1991-10-04 1993-11-02 David Sarnoff Research Center, Inc. Method for determining sensor motion and scene structure and image processing system therefor
US5657402A (en) * 1991-11-01 1997-08-12 Massachusetts Institute Of Technology Method of creating a high resolution still image using a plurality of images and apparatus for practice of the method
US5680487A (en) * 1991-12-23 1997-10-21 Texas Instruments Incorporated System and method for determining optical flow
US5550937A (en) * 1992-11-23 1996-08-27 Harris Corporation Mechanism for registering digital images obtained from multiple sensors having diverse image collection geometries
US5768404A (en) * 1994-04-13 1998-06-16 Matsushita Electric Industrial Co., Ltd. Motion and disparity estimation method, image synthesis method, and apparatus for implementing same methods
US5668660A (en) * 1994-11-29 1997-09-16 Hunt; Gary D. Microscope with plural zoom lens assemblies in series
US5684491A (en) * 1995-01-27 1997-11-04 Hazeltine Corporation High gain antenna systems for cellular use
US5696848A (en) * 1995-03-09 1997-12-09 Eastman Kodak Company System for creating a high resolution image from a sequence of lower resolution motion images
US5963664A (en) * 1995-06-22 1999-10-05 Sarnoff Corporation Method and system for image combination using a parallax-based technique
US5706416A (en) * 1995-11-13 1998-01-06 Massachusetts Institute Of Technology Method and apparatus for relating and combining multiple images of the same scene or object(s)
US6075884A (en) * 1996-03-29 2000-06-13 Sarnoff Corporation Method and apparatus for training a neural network to learn and use fidelity metric as a control mechanism
US5738430A (en) * 1996-03-29 1998-04-14 David Sarnoff Research Center, Inc. Method and apparatus for predicting retinal illuminance
US5974159A (en) * 1996-03-29 1999-10-26 Sarnoff Corporation Method and apparatus for assessing the visibility of differences between two image sequences
US5953014A (en) * 1996-06-07 1999-09-14 U.S. Philips Image generation using three z-buffers
US6137904A (en) * 1997-04-04 2000-10-24 Sarnoff Corporation Method and apparatus for assessing the visibility of differences between two signal sequences
US5919516A (en) * 1997-12-04 1999-07-06 Hsieh; Chen-Hui Process of making joss-sticks
US5959914A (en) * 1998-03-27 1999-09-28 Lsi Logic Corporation Memory controller with error correction memory test application
US6011875A (en) * 1998-04-29 2000-01-04 Eastman Kodak Company Process for enhancing the spatial resolution of multispectral imagery using pan-sharpening
US6269175B1 (en) * 1998-08-28 2001-07-31 Sarnoff Corporation Method and apparatus for enhancing regions of aligned images using flow estimation
US20010036307A1 (en) * 1998-08-28 2001-11-01 Hanna Keith James Method and apparatus for processing images
US6430304B2 (en) * 1998-08-28 2002-08-06 Sarnoff Corporation Method and apparatus for processing images to compute image flow information
US6490364B2 (en) * 1998-08-28 2002-12-03 Sarnoff Corporation Apparatus for enhancing images using flow estimation
US6371610B1 (en) * 2000-01-28 2002-04-16 Seiren Co., Ltd. Ink-jet printing method and ink-jet printed cloth

Cited By (256)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7916190B1 (en) 1997-10-09 2011-03-29 Tessera Technologies Ireland Limited Red-eye filter method and apparatus
US7787022B2 (en) * 1997-10-09 2010-08-31 Fotonation Vision Limited Red-eye filter method and apparatus
US7847839B2 (en) 1997-10-09 2010-12-07 Fotonation Vision Limited Detecting red eye filter and apparatus using meta-data
US7847840B2 (en) 1997-10-09 2010-12-07 Fotonation Vision Limited Detecting red eye filter and apparatus using meta-data
US7738015B2 (en) * 1997-10-09 2010-06-15 Fotonation Vision Limited Red-eye filter method and apparatus
US7804531B2 (en) 1997-10-09 2010-09-28 Fotonation Vision Limited Detecting red eye filter and apparatus using meta-data
US8203621B2 (en) 1997-10-09 2012-06-19 DigitalOptics Corporation Europe Limited Red-eye filter method and apparatus
US7852384B2 (en) 1997-10-09 2010-12-14 Fotonation Vision Limited Detecting red eye filter and apparatus using meta-data
US8264575B1 (en) 1997-10-09 2012-09-11 DigitalOptics Corporation Europe Limited Red eye filter method and apparatus
US20080298679A1 (en) * 1997-10-09 2008-12-04 Fotonation Vision Limited Detecting red eye filter and apparatus using meta-data
US8126208B2 (en) 2003-06-26 2012-02-28 DigitalOptics Corporation Europe Limited Digital image processing using face detection information
US8224108B2 (en) 2003-06-26 2012-07-17 DigitalOptics Corporation Europe Limited Digital image processing using face detection information
US8131016B2 (en) 2003-06-26 2012-03-06 DigitalOptics Corporation Europe Limited Digital image processing using face detection information
US9412007B2 (en) 2003-08-05 2016-08-09 Fotonation Limited Partial face detector red-eye filter method and apparatus
US8520093B2 (en) 2003-08-05 2013-08-27 DigitalOptics Corporation Europe Limited Face tracker and partial face tracker for red-eye filter method and apparatus
US20050053309A1 (en) * 2003-08-22 2005-03-10 Szczuka Steven J. Image processors and methods of image processing
US8000521B2 (en) * 2004-06-25 2011-08-16 Masataka Kira Stereoscopic image generating method and apparatus
US20060158730A1 (en) * 2004-06-25 2006-07-20 Masataka Kira Stereoscopic image generating method and apparatus
US7720277B2 (en) * 2004-08-09 2010-05-18 Kabushiki Kaisha Toshiba Three-dimensional-information reconstructing apparatus, method and program
US20060050338A1 (en) * 2004-08-09 2006-03-09 Hiroshi Hattori Three-dimensional-information reconstructing apparatus, method and program
US20060045383A1 (en) * 2004-08-31 2006-03-02 Picciotto Carl E Displacement estimation system and method
US8036460B2 (en) 2004-10-28 2011-10-11 DigitalOptics Corporation Europe Limited Analyzing partial face regions for red-eye detection in acquired digital images
US8265388B2 (en) 2004-10-28 2012-09-11 DigitalOptics Corporation Europe Limited Analyzing partial face regions for red-eye detection in acquired digital images
US20060120712A1 (en) * 2004-12-07 2006-06-08 Samsung Electronics Co., Ltd. Method and apparatus for processing image
US20080246759A1 (en) * 2005-02-23 2008-10-09 Craig Summers Automatic Scene Modeling for the 3D Camera and 3D Video
US20060245640A1 (en) * 2005-04-28 2006-11-02 Szczuka Steven J Methods and apparatus of image processing using drizzle filtering
US7962629B2 (en) 2005-06-17 2011-06-14 Tessera Technologies Ireland Limited Method for establishing a paired connection between media devices
US7970184B2 (en) 2005-11-18 2011-06-28 Tessera Technologies Ireland Limited Two stage detection for photographic eye artifacts
US7920723B2 (en) 2005-11-18 2011-04-05 Tessera Technologies Ireland Limited Two stage detection for photographic eye artifacts
US7869628B2 (en) 2005-11-18 2011-01-11 Tessera Technologies Ireland Limited Two stage detection for photographic eye artifacts
US8126217B2 (en) 2005-11-18 2012-02-28 DigitalOptics Corporation Europe Limited Two stage detection for photographic eye artifacts
US7970183B2 (en) 2005-11-18 2011-06-28 Tessera Technologies Ireland Limited Two stage detection for photographic eye artifacts
US7953252B2 (en) 2005-11-18 2011-05-31 Tessera Technologies Ireland Limited Two stage detection for photographic eye artifacts
US8126218B2 (en) 2005-11-18 2012-02-28 DigitalOptics Corporation Europe Limited Two stage detection for photographic eye artifacts
US8180115B2 (en) 2005-11-18 2012-05-15 DigitalOptics Corporation Europe Limited Two stage detection for photographic eye artifacts
US8175342B2 (en) 2005-11-18 2012-05-08 DigitalOptics Corporation Europe Limited Two stage detection for photographic eye artifacts
US7970182B2 (en) 2005-11-18 2011-06-28 Tessera Technologies Ireland Limited Two stage detection for photographic eye artifacts
US8160308B2 (en) 2005-11-18 2012-04-17 DigitalOptics Corporation Europe Limited Two stage detection for photographic eye artifacts
US7865036B2 (en) 2005-11-18 2011-01-04 Tessera Technologies Ireland Limited Method and apparatus of correcting hybrid flash artifacts in digital images
US8131021B2 (en) 2005-11-18 2012-03-06 DigitalOptics Corporation Europe Limited Two stage detection for photographic eye artifacts
US20080232711A1 (en) * 2005-11-18 2008-09-25 Fotonation Vision Limited Two Stage Detection for Photographic Eye Artifacts
US8184900B2 (en) 2006-02-14 2012-05-22 DigitalOptics Corporation Europe Limited Automatic detection and correction of non-red eye flash defects
US20080049970A1 (en) * 2006-02-14 2008-02-28 Fotonation Vision Limited Automatic detection and correction of non-red eye flash defects
US7965875B2 (en) 2006-06-12 2011-06-21 Tessera Technologies Ireland Limited Advances in extending the AAM techniques from grayscale to color images
US20080012856A1 (en) * 2006-07-14 2008-01-17 Daphne Yu Perception-based quality metrics for volume rendering
US8019180B2 (en) * 2006-10-31 2011-09-13 Hewlett-Packard Development Company, L.P. Constructing arbitrary-plane and multi-arbitrary-plane mosaic composite images from a multi-imager
US20080101724A1 (en) * 2006-10-31 2008-05-01 Henry Harlyn Baker Constructing arbitrary-plane and multi-arbitrary-plane mosaic composite images from a multi-imager
US8055067B2 (en) 2007-01-18 2011-11-08 DigitalOptics Corporation Europe Limited Color segmentation
US8233674B2 (en) 2007-03-05 2012-07-31 DigitalOptics Corporation Europe Limited Red eye false positive filtering using face location and orientation
US7995804B2 (en) 2007-03-05 2011-08-09 Tessera Technologies Ireland Limited Red eye false positive filtering using face location and orientation
US8503818B2 (en) 2007-09-25 2013-08-06 DigitalOptics Corporation Europe Limited Eye defect detection in international standards organization images
US20090080797A1 (en) * 2007-09-25 2009-03-26 Fotonation Vision, Ltd. Eye Defect Detection in International Standards Organization Images
EP2202682A4 (en) * 2007-10-15 2011-06-01 Nippon Telegraph & Telephone Image generation method, device, its program and program recorded medium
US8346019B2 (en) 2007-10-15 2013-01-01 Nippon Telegraph And Telephone Corporation Image generation method and apparatus, program therefor, and storage medium which stores the program
TWI397023B (en) * 2007-10-15 2013-05-21 Nippon Telegraph & Telephone Image generation method and apparatus, program therefor, and storage medium for storing the program
EP2202682A1 (en) * 2007-10-15 2010-06-30 Nippon Telegraph and Telephone Corporation Image generation method, device, its program and program recorded medium
US20100208991A1 (en) * 2007-10-15 2010-08-19 Nippon Telegraph And Telephone Corporation Image generation method and apparatus, program therefor, and storage medium which stores the program
US8290267B2 (en) 2007-11-08 2012-10-16 DigitalOptics Corporation Europe Limited Detecting redeye defects in digital images
US8036458B2 (en) 2007-11-08 2011-10-11 DigitalOptics Corporation Europe Limited Detecting redeye defects in digital images
US8000526B2 (en) 2007-11-08 2011-08-16 Tessera Technologies Ireland Limited Detecting redeye defects in digital images
US8830309B2 (en) * 2008-01-04 2014-09-09 3M Innovative Properties Company Hierarchical processing using image deformation
US20110007137A1 (en) * 2008-01-04 2011-01-13 Janos Rohaly Hierarchical processing using image deformation
US8212864B2 (en) 2008-01-30 2012-07-03 DigitalOptics Corporation Europe Limited Methods and apparatuses for using image acquisition data to detect and correct image defects
US20090207236A1 (en) * 2008-02-19 2009-08-20 Bae Systems Information And Electronic Systems Integration Inc. Focus actuated vergence
WO2009105195A3 (en) * 2008-02-19 2009-12-30 Bae Systems Information And Electronic Systems Focus actuated vergence
US8970677B2 (en) 2008-02-19 2015-03-03 Bae Systems Information And Electronic Systems Integration Inc. Focus actuated vergence
US9049411B2 (en) 2008-05-20 2015-06-02 Pelican Imaging Corporation Camera arrays incorporating 3×3 imager configurations
US9049381B2 (en) 2008-05-20 2015-06-02 Pelican Imaging Corporation Systems and methods for normalizing image data captured by camera arrays
US10142560B2 (en) 2008-05-20 2018-11-27 Fotonation Limited Capturing and processing of images including occlusions focused on an image sensor by a lens stack array
US11792538B2 (en) 2008-05-20 2023-10-17 Adeia Imaging Llc Capturing and processing of images including occlusions focused on an image sensor by a lens stack array
US9094661B2 (en) 2008-05-20 2015-07-28 Pelican Imaging Corporation Systems and methods for generating depth maps using a set of images containing a baseline image
US9077893B2 (en) 2008-05-20 2015-07-07 Pelican Imaging Corporation Capturing and processing of images captured by non-grid camera arrays
US9060121B2 (en) * 2008-05-20 2015-06-16 Pelican Imaging Corporation Capturing and processing of images captured by camera arrays including cameras dedicated to sampling luma and cameras dedicated to sampling chroma
US9060120B2 (en) 2008-05-20 2015-06-16 Pelican Imaging Corporation Systems and methods for generating depth maps using images captured by camera arrays
US9060142B2 (en) 2008-05-20 2015-06-16 Pelican Imaging Corporation Capturing and processing of images captured by camera arrays including heterogeneous optics
US9188765B2 (en) 2008-05-20 2015-11-17 Pelican Imaging Corporation Capturing and processing of images including occlusions focused on an image sensor by a lens stack array
US9191580B2 (en) 2008-05-20 2015-11-17 Pelican Imaging Corporation Capturing and processing of images including occlusions captured by camera arrays
US9235898B2 (en) 2008-05-20 2016-01-12 Pelican Imaging Corporation Systems and methods for generating depth maps using light focused on an image sensor by a lens element array
US9060124B2 (en) * 2008-05-20 2015-06-16 Pelican Imaging Corporation Capturing and processing of images using non-monolithic camera arrays
US20140333731A1 (en) * 2008-05-20 2014-11-13 Pelican Imaging Corporation Systems and Methods for Performing Post Capture Refocus Using Images Captured by Camera Arrays
US20140368683A1 (en) * 2008-05-20 2014-12-18 Pelican Imaging Corporation Capturing and Processing of Images Using Non-Monolithic Camera Arrays
US9055213B2 (en) 2008-05-20 2015-06-09 Pelican Imaging Corporation Systems and methods for measuring depth using images captured by monolithic camera arrays including at least one bayer camera
US20150009362A1 (en) * 2008-05-20 2015-01-08 Pelican Imaging Corporation Capturing and Processing of Images Captured by Camera Arrays Including Cameras Dedicated to Sampling Luma and Cameras Dedicated to Sampling Chroma
US9485496B2 (en) 2008-05-20 2016-11-01 Pelican Imaging Corporation Systems and methods for measuring depth using images captured by a camera array including cameras surrounding a central camera
US9055233B2 (en) 2008-05-20 2015-06-09 Pelican Imaging Corporation Systems and methods for synthesizing higher resolution images using a set of images containing a baseline image
US11412158B2 (en) 2008-05-20 2022-08-09 Fotonation Limited Capturing and processing of images including occlusions focused on an image sensor by a lens stack array
US9576369B2 (en) 2008-05-20 2017-02-21 Fotonation Cayman Limited Systems and methods for generating depth maps using images captured by camera arrays incorporating cameras having different fields of view
US9049367B2 (en) 2008-05-20 2015-06-02 Pelican Imaging Corporation Systems and methods for synthesizing higher resolution images using images captured by camera arrays
US9712759B2 (en) 2008-05-20 2017-07-18 Fotonation Cayman Limited Systems and methods for generating depth maps using camera arrays incorporating monochrome and color cameras
US9749547B2 (en) 2008-05-20 2017-08-29 Fotonation Cayman Limited Capturing and processing of images using camera array incorperating Bayer cameras having different fields of view
US9049391B2 (en) 2008-05-20 2015-06-02 Pelican Imaging Corporation Capturing and processing of near-IR images including occlusions using camera arrays incorporating near-IR light sources
US9041829B2 (en) 2008-05-20 2015-05-26 Pelican Imaging Corporation Capturing and processing of high dynamic range images using camera arrays
US9124815B2 (en) 2008-05-20 2015-09-01 Pelican Imaging Corporation Capturing and processing of images including occlusions captured by arrays of luma and chroma cameras
US10027901B2 (en) 2008-05-20 2018-07-17 Fotonation Cayman Limited Systems and methods for generating depth maps using camera arrays incorporating monochrome and color cameras
US9041823B2 (en) * 2008-05-20 2015-05-26 Pelican Imaging Corporation Systems and methods for performing post capture refocus using images captured by camera arrays
US9049390B2 (en) 2008-05-20 2015-06-02 Pelican Imaging Corporation Capturing and processing of images captured by arrays including polychromatic cameras
US8600193B2 (en) * 2008-07-16 2013-12-03 Varian Medical Systems, Inc. Image stitching and related method therefor
US20100014780A1 (en) * 2008-07-16 2010-01-21 Kalayeh Hooshmand M Image stitching and related method therefor
US8081254B2 (en) 2008-08-14 2011-12-20 DigitalOptics Corporation Europe Limited In-camera based method of detecting defect eye with high accuracy
US20100271511A1 (en) * 2009-04-24 2010-10-28 Canon Kabushiki Kaisha Processing multi-view digital images
US8509558B2 (en) 2009-04-24 2013-08-13 Canon Kabushiki Kaisha Processing multi-view digital images
US20110074927A1 (en) * 2009-09-29 2011-03-31 Perng Ming-Hwei Method for determining ego-motion of moving platform and detection system
US10306120B2 (en) 2009-11-20 2019-05-28 Fotonation Limited Capturing and processing of images captured by camera arrays incorporating cameras with telephoto and conventional lenses to generate depth maps
US9264610B2 (en) 2009-11-20 2016-02-16 Pelican Imaging Corporation Capturing and processing of images including occlusions captured by heterogeneous camera arrays
US8917929B2 (en) * 2010-03-19 2014-12-23 Lapis Semiconductor Co., Ltd. Image processing apparatus, method, program, and recording medium
US20110311130A1 (en) * 2010-03-19 2011-12-22 Oki Semiconductor Co., Ltd. Image processing apparatus, method, program, and recording medium
US8837774B2 (en) * 2010-05-04 2014-09-16 Bae Systems Information Solutions Inc. Inverse stereo image matching for change detection
US20120263373A1 (en) * 2010-05-04 2012-10-18 Bae Systems National Security Solutions Inc. Inverse stereo image matching for change detection
US10455168B2 (en) 2010-05-12 2019-10-22 Fotonation Limited Imager array interfaces
US9936148B2 (en) 2010-05-12 2018-04-03 Fotonation Cayman Limited Imager array interfaces
WO2012005947A3 (en) * 2010-07-07 2014-06-26 Spinella Ip Holdings, Inc. System and method for transmission, processing, and rendering of stereoscopic and multi-view images
WO2012005947A2 (en) * 2010-07-07 2012-01-12 Spinella Ip Holdings, Inc. System and method for transmission, processing, and rendering of stereoscopic and multi-view images
US9361662B2 (en) 2010-12-14 2016-06-07 Pelican Imaging Corporation Systems and methods for synthesizing high resolution images using images captured by an array of independently controllable imagers
US9041824B2 (en) 2010-12-14 2015-05-26 Pelican Imaging Corporation Systems and methods for dynamic refocusing of high resolution images generated using images captured by a plurality of imagers
US10366472B2 (en) 2010-12-14 2019-07-30 Fotonation Limited Systems and methods for synthesizing high resolution images using images captured by an array of independently controllable imagers
US9047684B2 (en) 2010-12-14 2015-06-02 Pelican Imaging Corporation Systems and methods for synthesizing high resolution images using a set of geometrically registered images
US11875475B2 (en) 2010-12-14 2024-01-16 Adeia Imaging Llc Systems and methods for synthesizing high resolution images using images captured by an array of independently controllable imagers
US11423513B2 (en) 2010-12-14 2022-08-23 Fotonation Limited Systems and methods for synthesizing high resolution images using images captured by an array of independently controllable imagers
US8717418B1 (en) * 2011-02-08 2014-05-06 John Prince Real time 3D imaging for remote surveillance
US9866739B2 (en) 2011-05-11 2018-01-09 Fotonation Cayman Limited Systems and methods for transmitting and receiving array camera image data
US10742861B2 (en) 2011-05-11 2020-08-11 Fotonation Limited Systems and methods for transmitting and receiving array camera image data
US10218889B2 (en) 2011-05-11 2019-02-26 Fotonation Limited Systems and methods for transmitting and receiving array camera image data
WO2012177166A1 (en) * 2011-06-24 2012-12-27 Intel Corporation An efficient approach to estimate disparity map
US9454851B2 (en) 2011-06-24 2016-09-27 Intel Corporation Efficient approach to estimate disparity map
US9578237B2 (en) 2011-06-28 2017-02-21 Fotonation Cayman Limited Array cameras incorporating optics with modulation transfer functions greater than sensor Nyquist frequency for capture of images used in super-resolution processing
US9516222B2 (en) 2011-06-28 2016-12-06 Kip Peli P1 Lp Array cameras incorporating monolithic array camera modules with high MTF lens stacks for capture of images used in super-resolution processing
US10375302B2 (en) 2011-09-19 2019-08-06 Fotonation Limited Systems and methods for controlling aliasing in images captured by an array camera for use in super resolution processing using pixel apertures
US9794476B2 (en) 2011-09-19 2017-10-17 Fotonation Cayman Limited Systems and methods for controlling aliasing in images captured by an array camera for use in super resolution processing using pixel apertures
US10019816B2 (en) 2011-09-28 2018-07-10 Fotonation Cayman Limited Systems and methods for decoding image files containing depth maps stored as metadata
US10430682B2 (en) 2011-09-28 2019-10-01 Fotonation Limited Systems and methods for decoding image files containing depth maps stored as metadata
US9811753B2 (en) 2011-09-28 2017-11-07 Fotonation Cayman Limited Systems and methods for encoding light field image files
US9864921B2 (en) 2011-09-28 2018-01-09 Fotonation Cayman Limited Systems and methods for encoding image files containing depth maps stored as metadata
US9129183B2 (en) 2011-09-28 2015-09-08 Pelican Imaging Corporation Systems and methods for encoding light field image files
US11729365B2 (en) 2011-09-28 2023-08-15 Adeia Imaging LLC Systems and methods for encoding image files containing depth maps stored as metadata
US20180197035A1 (en) 2011-09-28 2018-07-12 Fotonation Cayman Limited Systems and Methods for Encoding Image Files Containing Depth Maps Stored as Metadata
US9042667B2 (en) 2011-09-28 2015-05-26 Pelican Imaging Corporation Systems and methods for decoding light field image files using a depth map
US10275676B2 (en) 2011-09-28 2019-04-30 Fotonation Limited Systems and methods for encoding image files containing depth maps stored as metadata
US9036931B2 (en) 2011-09-28 2015-05-19 Pelican Imaging Corporation Systems and methods for decoding structured light field image files
US9536166B2 (en) 2011-09-28 2017-01-03 Kip Peli P1 Lp Systems and methods for decoding image files containing depth maps stored as metadata
US9031335B2 (en) 2011-09-28 2015-05-12 Pelican Imaging Corporation Systems and methods for encoding light field image files having depth and confidence maps
US10984276B2 (en) 2011-09-28 2021-04-20 Fotonation Limited Systems and methods for encoding image files containing depth maps stored as metadata
US9025894B2 (en) 2011-09-28 2015-05-05 Pelican Imaging Corporation Systems and methods for decoding light field image files having depth and confidence maps
US9031343B2 (en) 2011-09-28 2015-05-12 Pelican Imaging Corporation Systems and methods for encoding light field image files having a depth map
US9025895B2 (en) 2011-09-28 2015-05-05 Pelican Imaging Corporation Systems and methods for decoding refocusable light field image files
US9147116B2 (en) * 2011-10-05 2015-09-29 L-3 Communications Mobilevision, Inc. Multiple resolution camera system for automated license plate recognition and event recording
US20130088597A1 (en) * 2011-10-05 2013-04-11 L-3 Communications Mobilevision Inc. Multiple resolution camera system for automated license plate recognition and event recording
US20130114892A1 (en) * 2011-11-09 2013-05-09 Canon Kabushiki Kaisha Method and device for generating a super-resolution image portion
US8971664B2 (en) * 2011-11-09 2015-03-03 Canon Kabushiki Kaisha Method and device for generating a super-resolution image portion
US10311649B2 (en) 2012-02-21 2019-06-04 Fotonation Limited Systems and method for performing depth based image editing
US9754422B2 (en) 2012-02-21 2017-09-05 Fotonation Cayman Limited Systems and method for performing depth based image editing
US9412206B2 (en) 2012-02-21 2016-08-09 Pelican Imaging Corporation Systems and methods for the manipulation of captured light field image data
US9706132B2 (en) 2012-05-01 2017-07-11 Fotonation Cayman Limited Camera modules patterned with pi filter groups
US9210392B2 (en) 2012-05-01 2015-12-08 Pelican Imaging Corporation Camera modules patterned with pi filter groups
US10334241B2 (en) 2012-06-28 2019-06-25 Fotonation Limited Systems and methods for detecting defective camera arrays and optic arrays
US9100635B2 (en) 2012-06-28 2015-08-04 Pelican Imaging Corporation Systems and methods for detecting defective camera arrays and optic arrays
US9807382B2 (en) 2012-06-28 2017-10-31 Fotonation Cayman Limited Systems and methods for detecting defective camera arrays and optic arrays
US11022725B2 (en) 2012-06-30 2021-06-01 Fotonation Limited Systems and methods for manufacturing camera modules using active alignment of lens stack arrays and sensors
US9766380B2 (en) 2012-06-30 2017-09-19 Fotonation Cayman Limited Systems and methods for manufacturing camera modules using active alignment of lens stack arrays and sensors
US10261219B2 (en) 2012-06-30 2019-04-16 Fotonation Limited Systems and methods for manufacturing camera modules using active alignment of lens stack arrays and sensors
US9240049B2 (en) 2012-08-21 2016-01-19 Pelican Imaging Corporation Systems and methods for measuring depth using an array of independently controllable cameras
US9123118B2 (en) 2012-08-21 2015-09-01 Pelican Imaging Corporation System and methods for measuring depth using an array camera employing a bayer filter
US10380752B2 (en) 2012-08-21 2019-08-13 Fotonation Limited Systems and methods for estimating depth and visibility from a reference viewpoint for pixels in a set of images captured from different viewpoints
US9858673B2 (en) 2012-08-21 2018-01-02 Fotonation Cayman Limited Systems and methods for estimating depth and visibility from a reference viewpoint for pixels in a set of images captured from different viewpoints
US9123117B2 (en) 2012-08-21 2015-09-01 Pelican Imaging Corporation Systems and methods for generating depth maps and corresponding confidence maps indicating depth estimation reliability
US9129377B2 (en) 2012-08-21 2015-09-08 Pelican Imaging Corporation Systems and methods for measuring depth based upon occlusion patterns in images
US9235900B2 (en) 2012-08-21 2016-01-12 Pelican Imaging Corporation Systems and methods for estimating depth and visibility from a reference viewpoint for pixels in a set of images captured from different viewpoints
US9147254B2 (en) 2012-08-21 2015-09-29 Pelican Imaging Corporation Systems and methods for measuring depth in the presence of occlusions using a subset of images
US10462362B2 (en) 2012-08-23 2019-10-29 Fotonation Limited Feature based high resolution motion estimation from low resolution images captured using an array source
US9813616B2 (en) 2012-08-23 2017-11-07 Fotonation Cayman Limited Feature based high resolution motion estimation from low resolution images captured using an array source
US20150248744A1 (en) * 2012-08-31 2015-09-03 Sony Corporation Image processing device, image processing method, and information processing device
CN104584545A (en) * 2012-08-31 2015-04-29 索尼公司 Image processing device, image processing method, and information processing device
US9600859B2 (en) * 2012-08-31 2017-03-21 Sony Corporation Image processing device, image processing method, and information processing device
US9214013B2 (en) 2012-09-14 2015-12-15 Pelican Imaging Corporation Systems and methods for correcting user identified artifacts in light field images
KR101937673B1 (en) 2012-09-21 2019-01-14 삼성전자주식회사 GENERATING JNDD(Just Noticeable Depth Difference) MODEL OF 3D DISPLAY, METHOD AND SYSTEM OF ENHANCING DEPTH IMAGE USING THE JNDD MODEL
US10390005B2 (en) 2012-09-28 2019-08-20 Fotonation Limited Generating images from light fields utilizing virtual viewpoints
US9749568B2 (en) 2012-11-13 2017-08-29 Fotonation Cayman Limited Systems and methods for array camera focal plane control
US9143711B2 (en) 2012-11-13 2015-09-22 Pelican Imaging Corporation Systems and methods for array camera focal plane control
US10009538B2 (en) 2013-02-21 2018-06-26 Fotonation Cayman Limited Systems and methods for generating compressed light field representation data using captured light fields, array geometry, and parallax information
US9462164B2 (en) 2013-02-21 2016-10-04 Pelican Imaging Corporation Systems and methods for generating compressed light field representation data using captured light fields, array geometry, and parallax information
US9374512B2 (en) 2013-02-24 2016-06-21 Pelican Imaging Corporation Thin form factor computational array cameras and modular array cameras
US9253380B2 (en) 2013-02-24 2016-02-02 Pelican Imaging Corporation Thin form factor computational array cameras and modular array cameras
US9743051B2 (en) 2013-02-24 2017-08-22 Fotonation Cayman Limited Thin form factor computational array cameras and modular array cameras
US9774831B2 (en) 2013-02-24 2017-09-26 Fotonation Cayman Limited Thin form factor computational array cameras and modular array cameras
US9917998B2 (en) 2013-03-08 2018-03-13 Fotonation Cayman Limited Systems and methods for measuring scene information while capturing images using array cameras
US9774789B2 (en) 2013-03-08 2017-09-26 Fotonation Cayman Limited Systems and methods for high dynamic range imaging using array cameras
US11272161B2 (en) 2013-03-10 2022-03-08 Fotonation Limited System and methods for calibration of an array camera
US10958892B2 (en) 2013-03-10 2021-03-23 Fotonation Limited System and methods for calibration of an array camera
US9986224B2 (en) 2013-03-10 2018-05-29 Fotonation Cayman Limited System and methods for calibration of an array camera
US11570423B2 (en) 2013-03-10 2023-01-31 Adeia Imaging Llc System and methods for calibration of an array camera
US10225543B2 (en) 2013-03-10 2019-03-05 Fotonation Limited System and methods for calibration of an array camera
US9521416B1 (en) 2013-03-11 2016-12-13 Kip Peli P1 Lp Systems and methods for image data compression
US9888194B2 (en) 2013-03-13 2018-02-06 Fotonation Cayman Limited Array camera architecture implementing quantum film image sensors
US10127682B2 (en) 2013-03-13 2018-11-13 Fotonation Limited System and methods for calibration of an array camera
US9106784B2 (en) 2013-03-13 2015-08-11 Pelican Imaging Corporation Systems and methods for controlling aliasing in images captured by an array camera for use in super-resolution processing
US9741118B2 (en) 2013-03-13 2017-08-22 Fotonation Cayman Limited System and methods for calibration of an array camera
US9519972B2 (en) 2013-03-13 2016-12-13 Kip Peli P1 Lp Systems and methods for synthesizing images from image data captured by an array camera using restricted depth of field depth maps in which depth estimation precision varies
US9800856B2 (en) 2013-03-13 2017-10-24 Fotonation Cayman Limited Systems and methods for synthesizing images from image data captured by an array camera using restricted depth of field depth maps in which depth estimation precision varies
US9733486B2 (en) 2013-03-13 2017-08-15 Fotonation Cayman Limited Systems and methods for controlling aliasing in images captured by an array camera for use in super-resolution processing
US10547772B2 (en) 2013-03-14 2020-01-28 Fotonation Limited Systems and methods for reducing motion blur in images or video in ultra low light with array cameras
US9100586B2 (en) 2013-03-14 2015-08-04 Pelican Imaging Corporation Systems and methods for photometric normalization in array cameras
US10412314B2 (en) 2013-03-14 2019-09-10 Fotonation Limited Systems and methods for photometric normalization in array cameras
US10091405B2 (en) 2013-03-14 2018-10-02 Fotonation Cayman Limited Systems and methods for reducing motion blur in images or video in ultra low light with array cameras
US9578259B2 (en) 2013-03-14 2017-02-21 Fotonation Cayman Limited Systems and methods for reducing motion blur in images or video in ultra low light with array cameras
US9787911B2 (en) 2013-03-14 2017-10-10 Fotonation Cayman Limited Systems and methods for photometric normalization in array cameras
US10455218B2 (en) 2013-03-15 2019-10-22 Fotonation Limited Systems and methods for estimating depth using stereo array cameras
US9955070B2 (en) 2013-03-15 2018-04-24 Fotonation Cayman Limited Systems and methods for synthesizing high resolution images using image deconvolution based on motion and depth information
US9602805B2 (en) 2013-03-15 2017-03-21 Fotonation Cayman Limited Systems and methods for estimating depth using ad hoc stereo array cameras
US9800859B2 (en) 2013-03-15 2017-10-24 Fotonation Cayman Limited Systems and methods for estimating depth using stereo array cameras
US10638099B2 (en) 2013-03-15 2020-04-28 Fotonation Limited Extended color processing on pelican array cameras
US9633442B2 (en) 2013-03-15 2017-04-25 Fotonation Cayman Limited Array cameras including an array camera module augmented with a separate camera
US10182216B2 (en) 2013-03-15 2019-01-15 Fotonation Limited Extended color processing on pelican array cameras
US10542208B2 (en) 2013-03-15 2020-01-21 Fotonation Limited Systems and methods for synthesizing high resolution images using image deconvolution based on motion and depth information
US10674138B2 (en) 2013-03-15 2020-06-02 Fotonation Limited Autofocus system for a conventional camera that uses depth information from an array camera
US9497429B2 (en) 2013-03-15 2016-11-15 Pelican Imaging Corporation Extended color processing on pelican array cameras
US9497370B2 (en) 2013-03-15 2016-11-15 Pelican Imaging Corporation Array camera architecture implementing quantum dot color filters
US10122993B2 (en) 2013-03-15 2018-11-06 Fotonation Limited Autofocus system for a conventional camera that uses depth information from an array camera
US9438888B2 (en) 2013-03-15 2016-09-06 Pelican Imaging Corporation Systems and methods for stereo imaging with camera arrays
US10540806B2 (en) 2013-09-27 2020-01-21 Fotonation Limited Systems and methods for depth-assisted perspective distortion correction
US9898856B2 (en) 2013-09-27 2018-02-20 Fotonation Cayman Limited Systems and methods for depth-assisted perspective distortion correction
US9426343B2 (en) 2013-11-07 2016-08-23 Pelican Imaging Corporation Array cameras incorporating independently aligned lens stacks
US9185276B2 (en) 2013-11-07 2015-11-10 Pelican Imaging Corporation Methods of manufacturing array camera modules incorporating independently aligned lens stacks
US9924092B2 (en) 2013-11-07 2018-03-20 Fotonation Cayman Limited Array cameras incorporating independently aligned lens stacks
US9264592B2 (en) 2013-11-07 2016-02-16 Pelican Imaging Corporation Array camera modules incorporating independently aligned lens stacks
US10119808B2 (en) 2013-11-18 2018-11-06 Fotonation Limited Systems and methods for estimating depth from projected texture using camera arrays
US11486698B2 (en) 2013-11-18 2022-11-01 Fotonation Limited Systems and methods for estimating depth from projected texture using camera arrays
US10767981B2 (en) 2013-11-18 2020-09-08 Fotonation Limited Systems and methods for estimating depth from projected texture using camera arrays
US9456134B2 (en) 2013-11-26 2016-09-27 Pelican Imaging Corporation Array camera configurations incorporating constituent array cameras and constituent cameras
US10708492B2 (en) 2013-11-26 2020-07-07 Fotonation Limited Array camera configurations incorporating constituent array cameras and constituent cameras
US9813617B2 (en) 2013-11-26 2017-11-07 Fotonation Cayman Limited Array camera configurations incorporating constituent array cameras and constituent cameras
US9426361B2 (en) 2013-11-26 2016-08-23 Pelican Imaging Corporation Array camera configurations incorporating multiple constituent array cameras
US10574905B2 (en) 2014-03-07 2020-02-25 Fotonation Limited System and methods for depth regularization and semiautomatic interactive matting using RGB-D images
US10089740B2 (en) 2014-03-07 2018-10-02 Fotonation Limited System and methods for depth regularization and semiautomatic interactive matting using RGB-D images
US9247117B2 (en) 2014-04-07 2016-01-26 Pelican Imaging Corporation Systems and methods for correcting for warpage of a sensor array in an array camera module by introducing warpage into a focal plane of a lens stack array
US20150288945A1 (en) * 2014-04-08 2015-10-08 Semyon Nisenzon Generating 3d images using multiresolution camera clusters
US9729857B2 (en) * 2014-04-08 2017-08-08 Semyon Nisenzon High resolution depth map computation using multiresolution camera clusters for 3D image generation
US9521319B2 (en) 2014-06-18 2016-12-13 Pelican Imaging Corporation Array cameras and array camera modules including spectral filters disposed outside of a constituent image sensor
US11546576B2 (en) 2014-09-29 2023-01-03 Adeia Imaging Llc Systems and methods for dynamic calibration of array cameras
US10250871B2 (en) 2014-09-29 2019-04-02 Fotonation Limited Systems and methods for dynamic calibration of array cameras
US9942474B2 (en) 2015-04-17 2018-04-10 Fotonation Cayman Limited Systems and methods for performing high speed video capture and depth estimation using array cameras
US20160337635A1 (en) * 2015-05-15 2016-11-17 Semyon Nisenzon Generating 3d images using multi-resolution camera set
US10326981B2 (en) * 2015-05-15 2019-06-18 Semyon Nisenzon Generating 3D images using multi-resolution camera set
US10482618B2 (en) 2017-08-21 2019-11-19 Fotonation Limited Systems and methods for hybrid depth regularization
US10818026B2 (en) 2017-08-21 2020-10-27 Fotonation Limited Systems and methods for hybrid depth regularization
US11562498B2 (en) 2017-08-21 2023-01-24 Adeia Imaging LLC Systems and methods for hybrid depth regularization
US11699273B2 (en) 2019-09-17 2023-07-11 Intrinsic Innovation Llc Systems and methods for surface modeling using polarization cues
US11270110B2 (en) 2019-09-17 2022-03-08 Boston Polarimetrics, Inc. Systems and methods for surface modeling using polarization cues
US11525906B2 (en) 2019-10-07 2022-12-13 Intrinsic Innovation Llc Systems and methods for augmentation of sensor systems and imaging systems with polarization
US11302012B2 (en) 2019-11-30 2022-04-12 Boston Polarimetrics, Inc. Systems and methods for transparent object segmentation using polarization cues
US11842495B2 (en) 2019-11-30 2023-12-12 Intrinsic Innovation Llc Systems and methods for transparent object segmentation using polarization cues
US11580667B2 (en) 2020-01-29 2023-02-14 Intrinsic Innovation Llc Systems and methods for characterizing object pose detection and measurement systems
US11797863B2 (en) 2020-01-30 2023-10-24 Intrinsic Innovation Llc Systems and methods for synthesizing data for training statistical models on different imaging modalities including polarized images
US11683594B2 (en) 2021-04-15 2023-06-20 Intrinsic Innovation Llc Systems and methods for camera exposure control
US11290658B1 (en) 2021-04-15 2022-03-29 Boston Polarimetrics, Inc. Systems and methods for camera exposure control
US11954886B2 (en) 2021-04-15 2024-04-09 Intrinsic Innovation Llc Systems and methods for six-degree of freedom pose estimation of deformable objects
US11953700B2 (en) 2021-05-27 2024-04-09 Intrinsic Innovation Llc Multi-aperture polarization optical systems using beam splitters
US11689813B2 (en) 2021-07-01 2023-06-27 Intrinsic Innovation Llc Systems and methods for high dynamic range imaging using crossed polarizers

Also Published As

Publication number Publication date
US6490364B2 (en) 2002-12-03
JP2005244916A (en) 2005-09-08
US6269175B1 (en) 2001-07-31
US20010036307A1 (en) 2001-11-01
JP2003526829A (en) 2003-09-09
CA2342318A1 (en) 2000-03-09
US20010019621A1 (en) 2001-09-06
EP1110178A1 (en) 2001-06-27
WO2000013142A9 (en) 2000-08-10
WO2000013142A1 (en) 2000-03-09
JP4302572B2 (en) 2009-07-29
US6430304B2 (en) 2002-08-06

Similar Documents

Publication Publication Date Title
US6430304B2 (en) Method and apparatus for processing images to compute image flow information
EP1418766A2 (en) Method and apparatus for processing images
CA2430591C (en) Techniques and systems for developing high-resolution imagery
WO2000013423A9 (en) Method and apparatus for synthesizing high-resolution imagery using one high-resolution camera and a lower resolution camera
Szeliski Prediction error as a quality metric for motion and stereo
US5963664A (en) Method and system for image combination using a parallax-based technique
US20190320154A1 (en) Electronic system including image processing unit for reconstructing 3d surfaces and iterative triangulation method
US20040165781A1 (en) Method and system for constraint-consistent motion estimation
EP0979487A1 (en) Method and apparatus for mosaic image construction
WO1998002844A9 (en) Method and apparatus for mosaic image construction
US8867826B2 (en) Disparity estimation for misaligned stereo image pairs
Irani et al. Direct recovery of planar-parallax from multiple frames
US10586345B2 (en) Method for estimating aggregation results for generating three dimensional images
US20230401855A1 (en) Method, system and computer readable media for object detection coverage estimation
Knorr et al. Stereoscopic 3D from 2D video with super-resolution capability
CA2463162C (en) Method and apparatus for processing images
Szeliski et al. Dense motion estimation
Kopernik et al. Improved disparity estimation for the coding of stereoscopic television
Patras et al. Construction of multiple views using jointly estimated motion and disparity fields

Legal Events

Date Code Title Description

AS Assignment
Owner name: IMAX CORPORATION, CANADA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SARNOFF CORPORATION;REEL/FRAME:014805/0071
Effective date: 20031022

STCB Information on status: application discontinuation
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION