WO2012074361A1 - Method of image segmentation using intensity and depth information

Method of image segmentation using intensity and depth information

Info

Publication number
WO2012074361A1
Authority
WO
WIPO (PCT)
Prior art keywords
intensity
image data
depth
depth value
region
Prior art date
Application number
PCT/MY2011/000123
Other languages
French (fr)
Inventor
Yen San Yong
Hock Woon Hon
Ching Hau Chan
Sheau Wei Chau
Siu Jing Then
Hafriza Zakaria Khairil
Original Assignee
Mimos Berhad
Priority date
Filing date
Publication date
Application filed by Mimos Berhad
Publication of WO2012074361A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Abstract

A method of image segmentation using intensity and depth value information is disclosed. The method of the present invention comprises steps of: a) segmenting an image data into intensity-based segmented regions based on the intensity value of each pixel in the image data; b) obtaining depth value and confidence level information of each pixel in the image data to form a depth map; c) comparing the intensity-based segmented regions against the depth map at the corresponding regions; d) determining if the respective intensity-based segmented regions consist of more than one depth value; e) further segmenting the respective intensity-based segmented regions when the respective intensity-based segmented regions consist of depth values with no confidence; and f) splitting the intensity-based segmented region to extract objects therefrom, when the depth values with confidence levels within the intensity-based segmented regions are determined.

Description

Method of Image Segmentation Using Intensity and Depth Information
Field of the Invention
[0001] The present invention relates to image processing. In particular, the present invention relates to a method of image segmentation using intensity and depth information.
Background
[0002] Image segmentation is a computer process that divides an image data into smaller segmented regions based on image characteristics such as intensity, color, texture, depth value and so forth. The main purpose of image segmentation is to locate a plurality of different objects within the image data. Typically, an object in the image data has similar characteristics, and therefore, in an image segmentation process, a resultant segmented region is usually associated with an object in the image data.
[0003] Intensity-based image segmentation is one of the common approaches to image segmentation. In intensity-based image segmentation, an intensity value of each pixel in the image data is determined. Pixels having similar intensity values are segmented as one region. This region is recognized as one particular object in the image data. Fig. 1A exemplifies an original image data 100 taken from an imaging device, and Fig. 1B exemplifies a processed image 110 of the image data 100 shown in Fig. 1A, generated from intensity-based image segmentation. Referring to the original image data 100, in a proper image segmentation process, the region 111 of Fig. 1B should be identified as two separate regions, as there are two different objects 101, 106 within the region 111. However, in the intensity-based segmentation process, the objects 101, 106 are identified as one region 111 because they have similar intensity values. In conclusion, the intensity-based method has difficulty extracting two overlapping objects having similar intensity values as two different regions.
[0004] Depth-based image segmentation is another known image segmentation technique. The depth value of each pixel in an image data is obtained and mapped in a depth map. In depth-based image segmentation, the technique works by simply detecting all objects within the same depth plane as one region, separating it from the background layer, which is located at a different depth plane. Such a technique does not offer local segmentation of a plurality of different objects located at the same depth plane within the image data. Fig. 1C exemplifies a processed image 120 of the image 100 of Fig. 1A, generated from depth-based image segmentation. Regardless of the plurality of objects residing within the foreground layer 102, the depth-based image segmentation segments the image 100 into two regions only: region 121 corresponds to the background layer 101 and region 122 corresponds to the foreground layer 102. Such a method may not be sufficient to extract details from a complex image data.
[0005] Although there are many available methods for image segmentation, the results are often unsatisfactory. An image segmentation method that could compensate for the weaknesses of the prior art is therefore required.
Summary
[0006] According to one aspect of the present invention, there is provided a method of image segmentation that combines intensity-based image segmentation method with depth-based image segmentation method in order to eliminate the weaknesses of each method. In particular, the present invention discloses a method to enhance the quality of intensity-based segmented image using depth information of the image.
[0007] In one aspect of the present invention, the method comprises steps of: a) segmenting an image data into intensity-based segmented regions based on the intensity value of each pixel in the image data; b) obtaining depth value and confidence level information of each pixel in the image data to form a depth map; c) comparing the intensity-based segmented regions against the depth map at the corresponding regions; d) determining if the respective intensity-based segmented regions consist of more than one depth value; e) further segmenting the respective intensity-based segmented regions when the respective intensity-based segmented regions consist of depth values with no confidence; and f) splitting the intensity-based segmented region to extract objects therefrom, when the depth values with confidence levels within the intensity-based segmented regions are determined.
[0008] In one embodiment, the intensity-based image segmentation of each pixel in the original image data comprises the steps of a) generating a histogram; b) determining valley points on the histogram; and c) segmenting the original image data based on the valley points of the histogram.
[0009] In another embodiment, the extraction of the depth value and confidence level of each pixel in the original image data comprises the steps of a) deriving a depth value for each pixel; b) calculating a confidence level for each depth value; c) forming a depth map based on the depth values with confidence levels; and d) removing depth values having a confidence level below a threshold confidence level.
[0010] In a further embodiment, the confidence level of the depth value of one image data is affected by background color and texture of the image data.
[0011] In another embodiment, after the original image data is segmented into the intensity-based segmented regions, one of the intensity-based segmented image regions is selected for further processing to extract objects.
[0012] In yet a further embodiment, the step of intensity-based segmentation of the original image data and the step of obtaining depth value and confidence level of the image data are processed simultaneously.
[0013] In yet another embodiment, the step of intensity-based segmentation of the original image data and the step of obtaining depth value and confidence level of the image data are processed independently.
Brief Description of the Drawings
[0014] This invention will be described by way of non-limiting embodiments of the present invention, with reference to the accompanying drawings, in which:
[0015] Fig. 1A exemplifies an input image to be processed;
[0016] Fig. 1B exemplifies a segmentation result of intensity-based image segmentation for the input image data shown in Fig. 1A;
[0017] Fig. 1C exemplifies a segmentation result of depth-based image segmentation for the input image data shown in Fig. 1A;
[0018] Fig. 2 is a flow chart of a method for image segmentation according to one embodiment of the present invention;
[0019] Fig. 3 exemplifies a process for intensity-based image segmentation for an image data;
[0020] Fig. 4 exemplifies a process for depth-based image segmentation;
[0021] Fig. 5A shows a flowchart of a segmentation process of the input image data shown in Fig. 1A according to the present invention;
[0022] Fig. 5B illustrates the result of each step taken during the image segmentation process of the input image data shown in Fig. 1A according to another embodiment of the present invention;
[0023] Fig. 6 exemplifies another input image data;
[0024] Fig. 6A shows a flowchart of the entire segmentation process of the input image data shown in Fig. 6 according to one embodiment of the present invention; and
[0025] Fig. 6B illustrates the result of each step taken during the image segmentation process of the input image data shown in Fig. 6 according to the present invention.
Detailed Description
[0026] The following descriptions of a number of specific and alternative embodiments are provided to understand the inventive features of the present invention. It shall be apparent to one skilled in the art, however, that this invention may be practiced without such specific details. Some of the details may not be described at length so as to not obscure the invention. For ease of reference, common reference numerals will be used throughout the figures when referring to the same or similar features common to the figures.
[0027] Fig. 1A shows an input image data 100, which is composed of a background layer 101 and a foreground layer 102, wherein the foreground layer 102 contains a plurality of different objects 103, 104, 105, 106, and 107 with different intensity values. The input image data 100 may be taken from an imaging device, such as a digital camera. One of the objects, object 106, residing within the foreground layer 102, has a similar intensity value to part of the background layer 101.
[0028] Fig. 1B exemplifies a processed image 110 of the input image data 100 shown in Fig. 1A. The processed image 110 is obtained by segmenting the input image data 100 based on intensity. The object 106 and the part of the background layer 101 which have similar intensity values are segmented as one region 111.
[0029] Fig. 1C exemplifies another processed image 120 of the input image data 100 shown in Fig. 1A. The processed image 120 is obtained by segmenting the input image data based on depth value. The input image data 100 is segmented into two regions, i.e. the region of the background layer 121 and the region of the foreground layer 122. No local segmentation occurs within the foreground layer 122 of the input image data 100, even though a plurality of different objects could be extracted from the foreground layer.
[0030] Fig. 2 illustrates a flow diagram of a method for image segmentation according to one embodiment of the present invention. Generally, the method combines intensity-based image segmentation with depth-based image segmentation, hence enhancing the quality of the segmented image. The method 200 of image segmentation comprises: inputting an image data at step 201; segmenting the image data into intensity-based segmented regions at step 202; extracting the depth value of each pixel in the input image data, mapping the depth values into a depth map, and calculating the confidence level of the depth values of the input image data at step 203; selecting one intensity-based segmented region at step 204; comparing the intensity-based segmented region against the corresponding depth value region of the respective intensity-based segmented region at step 206; determining whether the intensity-based segmented region consists of a uniform depth value at step 207; determining whether the intensity-based segmented region consists of depth values with no confidence at step 208; further segmenting the intensity-based segmented region if required at step 209; splitting the intensity-based segmented region into different objects if required at step 210; and proceeding to another intensity-based segmented region at step 211.
[0031] Referring to Fig. 2, after inputting/acquiring the input image data at step 201, the image process proceeds to step 202, i.e. segmenting the input image data obtained from step 201 based on intensity value. At step 202, the intensity value of each pixel in the input image data is determined and the input image data is segmented into smaller regions based on the intensity value of each pixel in the original image data. At step 203, the depth values of each pixel in the input image data are mapped into a depth map and the confidence level of the depth value for each pixel is calculated accordingly. The confidence level indicates how confident the depth value is and may be affected by the background colour and texture of the image data. Methods of intensity-based image segmentation, as well as of extracting depth values and their confidence levels, are widely known in the art; therefore no further description will be provided herein. Steps 202 and 203 are carried out independently of each other. They can therefore be carried out simultaneously, or one after another. After steps 202 and 203 are completed, one of the intensity-based segmented regions is selected for further processing at step 204.
[0032] At step 206, the selected intensity-based segmented region is compared against the corresponding depth value region within the depth map to determine whether the respective intensity-based segmented region consists of more than one depth value at step 207. There are two possible scenarios at step 207, i.e. when the intensity-based segmented region consists of one depth value and when the intensity-based segmented region consists of more than one depth value.
[0033] When the corresponding depth value region consists of more than one depth value, the confidence level of the depth values is evaluated at step 208. When a depth value has no confidence, it indicates that the respective intensity-based segmented region might actually consist of more than one object. Hence, the respective intensity-based segmented region is further segmented at step 209. On the other hand, when the respective intensity-based segmented region has a satisfactory confidence level, the respective intensity-based segmented region is split at step 210 to extract objects from the respective intensity-based segmented region. After evaluating the depth value information of the respective intensity-based segmented region, the image processing continues to process another intensity-based segmented region of the original image data at step 211 by looping back to step 206 and continuing the image processing therefrom. When all intensity-based segmented regions of the original image data have been evaluated, the image segmentation ends.
[0034] Another scenario that might happen at step 207 is that the corresponding depth value region consists of only one depth value. When the depth-value region consists of only one depth value, it indicates that the respective intensity-based segmented region consists of only one object; hence the respective intensity-based segmented region does not need further processing based on its depth value information. The image processing will then proceed to other intensity-based segmented regions of the image data at step 211 and the process will loop back to step 206. Once all intensity-based segmented regions of the image data have been processed, the image processing ends.
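The decision flow of steps 204 to 211 can be pictured in code. The following is a minimal Python sketch, assuming NumPy, a quantised depth map, and a boolean confidence mask; the function and helper names (segment_with_depth, histogram_regions) are illustrative and do not appear in the patent, and the median-based re-segmentation is only a stand-in for the histogram-based process of Fig. 3.

```python
import numpy as np

def histogram_regions(image, mask):
    """Assumed stand-in for re-running intensity segmentation inside a
    region (step 209); here it simply splits at the median intensity."""
    med = np.median(image[mask])
    return [mask & (image <= med), mask & (image > med)]

def segment_with_depth(image, intensity_labels, depth_map, conf):
    """Sketch of the Fig. 2 decision loop; all names are illustrative.
    `conf` is a boolean array: True where the depth value is confident."""
    work = [intensity_labels == r for r in np.unique(intensity_labels)]
    objects = []
    while work:                                     # steps 204 and 211
        mask = work.pop()
        has_unknown = not conf[mask].all()          # low-confidence depths?
        depths = np.unique(depth_map[mask & conf])  # step 206: vs depth map
        if depths.size <= 1 and not has_unknown:    # step 207: one depth value
            objects.append(mask)                    # -> region is one object
        elif has_unknown:                           # step 208: no confidence
            parts = [p for p in histogram_regions(image, mask)
                     if p.any() and p.sum() < mask.sum()]
            if parts:
                work.extend(parts)                  # step 209: re-segment
            else:
                objects.append(mask)                # cannot refine further
        else:                                       # step 210: confident depths
            objects.extend(mask & (depth_map == d) for d in depths)
    return objects
```

Regions produced by further segmentation are pushed back onto the work list, mirroring how region 504 in Fig. 5B is treated as another intensity-based segmented region to be processed by method 200.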
[0035] Fig. 3 exemplifies an intensity-based image segmentation process that can be carried out at step 202 of Fig. 2. The intensity-based image segmentation process is a histogram-based segmentation. The process 202 comprises the steps of inputting the input image data at step 201; getting the intensity value histogram of the input image data at step 302; getting the valley points of the histogram at step 303; and finally segmenting the image data based on the valley points at step 304.
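As a concrete illustration of steps 302 to 304, the sketch below builds the histogram, finds valley points, and labels each pixel by the intensity band between valleys. It assumes an 8-bit greyscale image; the smoothing width and the valley-detection rule are assumptions, since the patent does not prescribe a particular valley detector.

```python
import numpy as np

def valley_segmentation(image, smooth=5):
    """Sketch of Fig. 3: histogram (302), valley points (303), segment (304).
    `image` is a 2-D uint8 array; `smooth` is an assumed smoothing width."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))     # step 302
    hist = np.convolve(hist, np.ones(smooth) / smooth, "same")  # reduce noise
    # step 303: a valley is a bin strictly lower than both neighbours
    valleys = [i for i in range(1, 255)
               if hist[i] < hist[i - 1] and hist[i] < hist[i + 1]]
    # step 304: valleys act as thresholds cutting the grey range into
    # bands; each pixel is labelled by the band its intensity falls into
    return np.digitize(image, np.asarray(valleys))
```

A connected-component pass (e.g. scipy.ndimage.label) could follow to turn intensity bands into spatially separate regions, since pixels sharing a band are not necessarily adjacent.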
[0036] Fig. 4 exemplifies a flow chart for extracting the depth information of the input image data at step 203 of Fig. 2. The depth information extraction process at step 203 comprises the steps of inputting the image data at step 201; getting the depth values, which are mapped into a depth map, at step 402; calculating the confidence level for each depth value at step 403; and removing depth values which are below the threshold confidence level at step 404.
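Steps 402 to 404 amount to masking out unreliable depth estimates. A minimal sketch, assuming per-pixel depth and confidence arrays already produced by some stereo or range sensor and an illustrative threshold value, might look as follows; removed values are set to zero, matching the black pixels of region 120A in Fig. 5B.

```python
import numpy as np

def build_depth_map(depth, confidence, threshold=0.5):
    """Sketch of Fig. 4: form the depth map (402), score it (403), and
    remove values below the threshold confidence level (404)."""
    conf_mask = confidence >= threshold   # steps 403-404: keep confident pixels
    depth_map = depth.copy()
    depth_map[~conf_mask] = 0             # removed values appear as black pixels
    return depth_map, conf_mask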
[0037] The intensity-based image segmentation process and the depth information extraction process illustrated in Fig. 3 and Fig. 4 respectively are provided above by way of example only, not limitation. It is understood by a skilled person that many other algorithms or methods are available in the art and may be adapted in the present invention.
[0038] The method 200 of Fig. 2 is now further illustrated by applying the method 200 to the input image data 100. Fig. 5A in conjunction with Fig. 5B illustrates the application of method 200 to the input image data 100. Fig. 5A shows a flowchart of the segmentation process of the input image data 100 according to the present invention, whilst Fig. 5B illustrates the resultant image of each step taken during the image segmentation process of the input image data 100, in accordance with the present invention. As mentioned above, the input image data 100 contains two layers, i.e. background layer 101 and foreground layer 102. The foreground layer contains a plurality of objects 103, 104, 105, 106, and 107.
[0039] After inputting the image data 100 at step 531 shown in Fig. 5A, the image data 100 is first segmented based on intensity values in accordance with step 534 to obtain the intensity-based segmented image data 503 shown in Fig. 5B. The intensity-based segmented image data 503 consists of several regions, i.e. region 503A corresponds to object 103, region 503B corresponds to object 104, region 503C corresponds to object 105, region 503D corresponds to object 106 and part of background 101, and region 503E corresponds to object 107. Since part of the background layer 101 has similar intensity and is overlapping the object 106, they are segmented as one region 503D.
[0040] The depth information of the image data 100 is also extracted at step 541 of Fig. 5A, and a depth map 120 of the image data 100 is obtained. The depth map 120 contains two depth-value regions, i.e. region 120A corresponds to the background layer 101 and region 120B corresponds to the foreground layer 102. Referring to step 404 of Fig. 4, when any depth value within the depth map is below the threshold confidence level, the depth value is removed. Reviewing the image data 100, all depth values of the background layer 101 of the image data 100 are below the threshold confidence level. Therefore, the depth values are removed and the background layer 101 is represented as the black pixel region 120A in the depth map 120.
[0041] Still referring to Fig. 5A, at step 535, one intensity-based segmented region of the intensity-based segmented image data 503 is selected for further processing. For simplicity, the intensity-based segmented region 503D is selected first. At step 536, the intensity-based segmented region 503D is compared against a corresponding depth-value region 120C within the depth map 120. It is determined at step 537 that the depth-value region 120C consists of more than one depth value. It is then further determined, at step 538 of Fig. 5A, that the region 120C consists of depth values with no confidence due to the presence of black pixels in the depth-value region 120C of the depth map 120. Because the depth-value region 120C has no confidence, the corresponding intensity-based segmented region 503D is further segmented at step 539. After further segmentation at step 539, the process for this respective region 503D stops. Further segmentation of region 503D results in a new intensity-based segmented region 504 with its histogram 504A. This additional intensity-based segmented region 504 will be treated as another intensity-based segmented image region to be processed according to method 200 of Fig. 2. The image segmentation process of image data 100 then proceeds to the next intensity-based segmented region, for example region 503A of the intensity-based segmented image data 503, at step 540. The image segmentation process of the input image data 100 stops when all intensity-based segmented regions of the intensity-based segmented image data 503 have been processed.
[0042] To give a further illustration of method 200 of Fig. 2, another input image data 600, shown in Fig. 6, is processed. The image data 600 contains a background layer 611 and two foreground layers, 612 and 613. The image data contains a plurality of objects 601, 602, 603, 604, 605, 606, 607, 608, 609, and 610, wherein objects 605 and 608 have similar intensity and overlap each other.
[0043] Fig. 6A shows a flowchart of the segmentation process of the input image data 600, whilst Fig. 6B illustrates the resultant images of each step taken during the image segmentation process of the input image data 600. At step 641, the input image data 600 is acquired. The input image data 600 is then segmented based on the intensity value at step 642. The intensity-based segmented image data 620 is obtained from step 642, wherein the intensity-based segmented image data 620 consists of several regions, i.e. region 620A corresponds to the object 601, region 620B corresponds to the object 602, region 620C corresponds to the object 603, region 620D corresponds to the object 604, region 620E corresponds to the objects 605 and 608, region 620F corresponds to the object 606, region 620G corresponds to the object 607, region 620H corresponds to the object 609, and region 620I corresponds to the object 610. The objects 605 and 608 are segmented as one region 620E because both objects have similar intensity values. The depth map 630 of the input image data 600 is also extracted at step 643. The depth map 630 contains three depth-value regions, i.e. region 631 corresponds to the background layer 611; region 632 corresponds to the first foreground layer 612; and region 633 corresponds to the second foreground layer 613. At step 644, one intensity-based segmented region of the intensity-based segmented image data 620 is selected for further processing. For simplicity, the region 620E is selected first. The selected region 620E is then compared against a corresponding depth-value region 614 within the depth map 630 at step 645. From the corresponding depth-value region 614, it is determined at step 646 that the depth-value region 614 has more than one depth value. Next, at step 647 of Fig. 6A, it is further determined that the depth-value region 614 has a satisfactory confidence level. Hence, based on information obtained from the depth-value region 614, at step 648, the region 620E is split into two different regions, 625A and 625B, representing the two different objects 605 and 608 respectively. As image processing of the intensity-based segmented image region 620E is completed, the image segmentation process of the input image data 600 continues to process other intensity-based segmented regions of the intensity-based segmented image data 620 at step 649. Once all intensity-based segmented regions of the intensity-based segmented image data 620 have been processed, the image segmentation process of the input image data 600 stops.
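Step 648, splitting region 620E into 625A and 625B, can be pictured as grouping the region's pixels by their confident depth plane and then keeping spatially connected pieces apart. The sketch below assumes SciPy's ndimage.label for connected components; the function name and arguments are illustrative only.

```python
import numpy as np
from scipy import ndimage

def split_region_by_depth(mask, depth_map, conf_mask):
    """Sketch of step 648: split one intensity-based region into objects
    using its confident depth values (e.g. 620E -> 625A and 625B)."""
    parts = []
    for d in np.unique(depth_map[mask & conf_mask]):
        plane = mask & conf_mask & (depth_map == d)  # pixels on one depth plane
        labels, n = ndimage.label(plane)             # keep spatially connected
        parts.extend(labels == i for i in range(1, n + 1))  # pieces separate
    return parts
```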
[0044] The method according to the present invention helps to compensate for the confusion that might occur in either intensity-based image segmentation or depth-based image segmentation. In accordance with the present invention, the method helps to determine whether a segmented image region requires further segmentation or splitting into smaller regions using the depth information of an image data. The method is thus able to enhance the quality of the segmented image.
[0045] The above description illustrates various embodiments of the present invention along with examples of how aspects of the present invention may be implemented. While specific embodiments have been described and illustrated, it is understood that many changes, modifications, variations and combinations thereof could be made to the present invention without departing from the scope of the present invention. The above examples, embodiments, instruction semantics, and drawings should not be deemed to be the only embodiments, and are presented to illustrate the flexibility and advantages of the present invention as defined by the following claims.

Claims

1. A method of processing an original image data obtained through an imaging device, the method comprising: segmenting the original image data into intensity-based segmented regions based on the intensity value of each pixel in the original image data; obtaining a depth value and a confidence level of each pixel in the original image data to form a depth map; comparing the intensity-based segmented regions against the depth map at the corresponding regions; determining if the respective intensity-based segmented regions consist of more than one depth value; further segmenting the respective intensity-based segmented regions when the intensity-based segmented regions consist of depth values with no confidence; and splitting the intensity-based segmented regions to extract objects therefrom, when the depth values with confidence levels within the intensity-based segmented regions are determined.
2. A method according to claim 1, wherein segmenting the original image data comprises: generating a histogram; determining valley points on the histogram; and segmenting the original image data based on the valley points of the histogram.
3. The method according to claim 1, wherein the step of obtaining the depth value and confidence level of each pixel in the original image data comprises: deriving a depth value for each pixel; calculating a confidence level for each depth value; forming a depth map based on the depth values with confidence levels; and removing depth values having a confidence level below a threshold confidence level.
4. A method according to claim 1, wherein the confidence level of the depth value of one image data is affected by the background color and texture of the image data.
5. A method according to claim 1, further comprising, after the original image data is segmented into the intensity-based segmented regions, selecting one of the segmented image regions for further processing to extract objects.
6. A method according to claim 5, further comprising selecting another intensity-based segmented region for processing to extract objects.
7. A method according to claim 1, wherein the step of segmenting original image data and the step of obtaining depth value and confidence level are processed simultaneously.
8. A method according to claim 1, wherein the step of segmenting original image data and the step of obtaining depth value and confidence level are processed independently.
PCT/MY2011/000123 2010-12-03 2011-06-22 Method of image segmentation using intensity and depth information WO2012074361A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
MYPI2010005761A MY150361A (en) 2010-12-03 2010-12-03 Method of image segmentation using intensity and depth information
MYPI2010005761 2010-12-03

Publications (1)

Publication Number Publication Date
WO2012074361A1 true WO2012074361A1 (en) 2012-06-07

Family

ID=46172122

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/MY2011/000123 WO2012074361A1 (en) 2010-12-03 2011-06-22 Method of image segmentation using intensity and depth information

Country Status (2)

Country Link
MY (1) MY150361A (en)
WO (1) WO2012074361A1 (en)

Cited By (62)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014043641A1 (en) * 2012-09-14 2014-03-20 Pelican Imaging Corporation Systems and methods for correcting user identified artifacts in light field images
US20140140613A1 (en) * 2012-11-22 2014-05-22 Samsung Electronics Co., Ltd. Apparatus and method for processing color image using depth image
US8831367B2 (en) 2011-09-28 2014-09-09 Pelican Imaging Corporation Systems and methods for decoding light field image files
US8861089B2 (en) 2009-11-20 2014-10-14 Pelican Imaging Corporation Capturing and processing of images using monolithic camera array with heterogeneous imagers
US8866920B2 (en) 2008-05-20 2014-10-21 Pelican Imaging Corporation Capturing and processing of images using monolithic camera array with heterogeneous imagers
US8878950B2 (en) 2010-12-14 2014-11-04 Pelican Imaging Corporation Systems and methods for synthesizing high resolution images using super-resolution processes
US8885059B1 (en) 2008-05-20 2014-11-11 Pelican Imaging Corporation Systems and methods for measuring depth using images captured by camera arrays
US8928793B2 (en) 2010-05-12 2015-01-06 Pelican Imaging Corporation Imager array interfaces
US9100586B2 (en) 2013-03-14 2015-08-04 Pelican Imaging Corporation Systems and methods for photometric normalization in array cameras
US9100635B2 (en) 2012-06-28 2015-08-04 Pelican Imaging Corporation Systems and methods for detecting defective camera arrays and optic arrays
US9106784B2 (en) 2013-03-13 2015-08-11 Pelican Imaging Corporation Systems and methods for controlling aliasing in images captured by an array camera for use in super-resolution processing
US9124864B2 (en) 2013-03-10 2015-09-01 Pelican Imaging Corporation System and methods for calibration of an array camera
US9123118B2 (en) 2012-08-21 2015-09-01 Pelican Imaging Corporation System and methods for measuring depth using an array camera employing a bayer filter
US9128228B2 (en) 2011-06-28 2015-09-08 Pelican Imaging Corporation Optical arrangements for use with an array camera
US9143711B2 (en) 2012-11-13 2015-09-22 Pelican Imaging Corporation Systems and methods for array camera focal plane control
US9185276B2 (en) 2013-11-07 2015-11-10 Pelican Imaging Corporation Methods of manufacturing array camera modules incorporating independently aligned lens stacks
US9197821B2 (en) 2011-05-11 2015-11-24 Pelican Imaging Corporation Systems and methods for transmitting and receiving array camera image data
US9210392B2 (en) 2012-05-01 2015-12-08 Pelican Imaging Coporation Camera modules patterned with pi filter groups
US9247117B2 (en) 2014-04-07 2016-01-26 Pelican Imaging Corporation Systems and methods for correcting for warpage of a sensor array in an array camera module by introducing warpage into a focal plane of a lens stack array
US9253380B2 (en) 2013-02-24 2016-02-02 Pelican Imaging Corporation Thin form factor computational array cameras and modular array cameras
CN105741279A (en) * 2016-01-27 2016-07-06 西安电子科技大学 Rough set based image segmentation method for quickly inhibiting fuzzy clustering
US9412206B2 (en) 2012-02-21 2016-08-09 Pelican Imaging Corporation Systems and methods for the manipulation of captured light field image data
US9426361B2 (en) 2013-11-26 2016-08-23 Pelican Imaging Corporation Array camera configurations incorporating multiple constituent array cameras
US9438888B2 (en) 2013-03-15 2016-09-06 Pelican Imaging Corporation Systems and methods for stereo imaging with camera arrays
US9445003B1 (en) 2013-03-15 2016-09-13 Pelican Imaging Corporation Systems and methods for synthesizing high resolution images using image deconvolution based on motion and depth information
US9462164B2 (en) 2013-02-21 2016-10-04 Pelican Imaging Corporation Systems and methods for generating compressed light field representation data using captured light fields, array geometry, and parallax information
US9497429B2 (en) 2013-03-15 2016-11-15 Pelican Imaging Corporation Extended color processing on pelican array cameras
US9497370B2 (en) 2013-03-15 2016-11-15 Pelican Imaging Corporation Array camera architecture implementing quantum dot color filters
US9516222B2 (en) 2011-06-28 2016-12-06 Kip Peli P1 Lp Array cameras incorporating monolithic array camera modules with high MTF lens stacks for capture of images used in super-resolution processing
US9521416B1 (en) 2013-03-11 2016-12-13 Kip Peli P1 Lp Systems and methods for image data compression
US9519972B2 (en) 2013-03-13 2016-12-13 Kip Peli P1 Lp Systems and methods for synthesizing images from image data captured by an array camera using restricted depth of field depth maps in which depth estimation precision varies
US9521319B2 (en) 2014-06-18 2016-12-13 Pelican Imaging Corporation Array cameras and array camera modules including spectral filters disposed outside of a constituent image sensor
US9578259B2 (en) 2013-03-14 2017-02-21 Fotonation Cayman Limited Systems and methods for reducing motion blur in images or video in ultra low light with array cameras
US9633442B2 (en) 2013-03-15 2017-04-25 Fotonation Cayman Limited Array cameras including an array camera module augmented with a separate camera
US9638883B1 (en) 2013-03-04 2017-05-02 Fotonation Cayman Limited Passive alignment of array camera modules constructed from lens stack arrays and sensors based upon alignment information obtained during manufacture of array camera modules using an active alignment process
US9741118B2 (en) 2013-03-13 2017-08-22 Fotonation Cayman Limited System and methods for calibration of an array camera

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6661918B1 (en) * 1998-12-04 2003-12-09 Interval Research Corporation Background estimation and segmentation based on range and color

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WOO ET AL.: "Object Segmentation for Z-keying Using Stereo Images", PROCEEDINGS OF THE 5TH IEEE INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING, vol. 2, 25 August 2000 (2000-08-25), BEIJING, CHINA, pages 1249 - 1254 *

Cited By (184)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9060120B2 (en) 2008-05-20 2015-06-16 Pelican Imaging Corporation Systems and methods for generating depth maps using images captured by camera arrays
US9749547B2 (en) 2008-05-20 2017-08-29 Fotonation Cayman Limited Capturing and processing of images using camera array incorporating Bayer cameras having different fields of view
US9712759B2 (en) 2008-05-20 2017-07-18 Fotonation Cayman Limited Systems and methods for generating depth maps using camera arrays incorporating monochrome and color cameras
US9576369B2 (en) 2008-05-20 2017-02-21 Fotonation Cayman Limited Systems and methods for generating depth maps using images captured by camera arrays incorporating cameras having different fields of view
US8866920B2 (en) 2008-05-20 2014-10-21 Pelican Imaging Corporation Capturing and processing of images using monolithic camera array with heterogeneous imagers
US10027901B2 (en) 2008-05-20 2018-07-17 Fotonation Cayman Limited Systems and methods for generating depth maps using camera arrays incorporating monochrome and color cameras
US8885059B1 (en) 2008-05-20 2014-11-11 Pelican Imaging Corporation Systems and methods for measuring depth using images captured by camera arrays
US8896719B1 (en) 2008-05-20 2014-11-25 Pelican Imaging Corporation Systems and methods for parallax measurement using camera arrays incorporating 3 x 3 camera configurations
US8902321B2 (en) 2008-05-20 2014-12-02 Pelican Imaging Corporation Capturing and processing of images using monolithic camera array with heterogeneous imagers
US10142560B2 (en) 2008-05-20 2018-11-27 Fotonation Limited Capturing and processing of images including occlusions focused on an image sensor by a lens stack array
US9485496B2 (en) 2008-05-20 2016-11-01 Pelican Imaging Corporation Systems and methods for measuring depth using images captured by a camera array including cameras surrounding a central camera
US9188765B2 (en) 2008-05-20 2015-11-17 Pelican Imaging Corporation Capturing and processing of images including occlusions focused on an image sensor by a lens stack array
US9191580B2 (en) 2008-05-20 2015-11-17 Pelican Imaging Corporation Capturing and processing of images including occlusions captured by camera arrays
US11412158B2 (en) 2008-05-20 2022-08-09 Fotonation Limited Capturing and processing of images including occlusions focused on an image sensor by a lens stack array
US9124815B2 (en) 2008-05-20 2015-09-01 Pelican Imaging Corporation Capturing and processing of images including occlusions captured by arrays of luma and chroma cameras
US11792538B2 (en) 2008-05-20 2023-10-17 Adeia Imaging Llc Capturing and processing of images including occlusions focused on an image sensor by a lens stack array
US9094661B2 (en) 2008-05-20 2015-07-28 Pelican Imaging Corporation Systems and methods for generating depth maps using a set of images containing a baseline image
US9041823B2 (en) 2008-05-20 2015-05-26 Pelican Imaging Corporation Systems and methods for performing post capture refocus using images captured by camera arrays
US9077893B2 (en) 2008-05-20 2015-07-07 Pelican Imaging Corporation Capturing and processing of images captured by non-grid camera arrays
US9041829B2 (en) 2008-05-20 2015-05-26 Pelican Imaging Corporation Capturing and processing of high dynamic range images using camera arrays
US9060142B2 (en) 2008-05-20 2015-06-16 Pelican Imaging Corporation Capturing and processing of images captured by camera arrays including heterogeneous optics
US9049411B2 (en) 2008-05-20 2015-06-02 Pelican Imaging Corporation Camera arrays incorporating 3×3 imager configurations
US9060124B2 (en) 2008-05-20 2015-06-16 Pelican Imaging Corporation Capturing and processing of images using non-monolithic camera arrays
US9049381B2 (en) 2008-05-20 2015-06-02 Pelican Imaging Corporation Systems and methods for normalizing image data captured by camera arrays
US9049367B2 (en) 2008-05-20 2015-06-02 Pelican Imaging Corporation Systems and methods for synthesizing higher resolution images using images captured by camera arrays
US9049391B2 (en) 2008-05-20 2015-06-02 Pelican Imaging Corporation Capturing and processing of near-IR images including occlusions using camera arrays incorporating near-IR light sources
US9049390B2 (en) 2008-05-20 2015-06-02 Pelican Imaging Corporation Capturing and processing of images captured by arrays including polychromatic cameras
US9055233B2 (en) 2008-05-20 2015-06-09 Pelican Imaging Corporation Systems and methods for synthesizing higher resolution images using a set of images containing a baseline image
US9055213B2 (en) 2008-05-20 2015-06-09 Pelican Imaging Corporation Systems and methods for measuring depth using images captured by monolithic camera arrays including at least one bayer camera
US9060121B2 (en) 2008-05-20 2015-06-16 Pelican Imaging Corporation Capturing and processing of images captured by camera arrays including cameras dedicated to sampling luma and cameras dedicated to sampling chroma
US10306120B2 (en) 2009-11-20 2019-05-28 Fotonation Limited Capturing and processing of images captured by camera arrays incorporating cameras with telephoto and conventional lenses to generate depth maps
US8861089B2 (en) 2009-11-20 2014-10-14 Pelican Imaging Corporation Capturing and processing of images using monolithic camera array with heterogeneous imagers
US9264610B2 (en) 2009-11-20 2016-02-16 Pelican Imaging Corporation Capturing and processing of images including occlusions captured by heterogeneous camera arrays
US10455168B2 (en) 2010-05-12 2019-10-22 Fotonation Limited Imager array interfaces
US8928793B2 (en) 2010-05-12 2015-01-06 Pelican Imaging Corporation Imager array interfaces
US9936148B2 (en) 2010-05-12 2018-04-03 Fotonation Cayman Limited Imager array interfaces
US8878950B2 (en) 2010-12-14 2014-11-04 Pelican Imaging Corporation Systems and methods for synthesizing high resolution images using super-resolution processes
US11423513B2 (en) 2010-12-14 2022-08-23 Fotonation Limited Systems and methods for synthesizing high resolution images using images captured by an array of independently controllable imagers
US9047684B2 (en) 2010-12-14 2015-06-02 Pelican Imaging Corporation Systems and methods for synthesizing high resolution images using a set of geometrically registered images
US9041824B2 (en) 2010-12-14 2015-05-26 Pelican Imaging Corporation Systems and methods for dynamic refocusing of high resolution images generated using images captured by a plurality of imagers
US11875475B2 (en) 2010-12-14 2024-01-16 Adeia Imaging Llc Systems and methods for synthesizing high resolution images using images captured by an array of independently controllable imagers
US10366472B2 (en) 2010-12-14 2019-07-30 Fotonation Limited Systems and methods for synthesizing high resolution images using images captured by an array of independently controllable imagers
US9361662B2 (en) 2010-12-14 2016-06-07 Pelican Imaging Corporation Systems and methods for synthesizing high resolution images using images captured by an array of independently controllable imagers
US10742861B2 (en) 2011-05-11 2020-08-11 Fotonation Limited Systems and methods for transmitting and receiving array camera image data
US9866739B2 (en) 2011-05-11 2018-01-09 Fotonation Cayman Limited Systems and methods for transmitting and receiving array camera image data
US10218889B2 (en) 2011-05-11 2019-02-26 Fotonation Limited Systems and methods for transmitting and receiving array camera image data
US9197821B2 (en) 2011-05-11 2015-11-24 Pelican Imaging Corporation Systems and methods for transmitting and receiving array camera image data
US9578237B2 (en) 2011-06-28 2017-02-21 Fotonation Cayman Limited Array cameras incorporating optics with modulation transfer functions greater than sensor Nyquist frequency for capture of images used in super-resolution processing
US9128228B2 (en) 2011-06-28 2015-09-08 Pelican Imaging Corporation Optical arrangements for use with an array camera
US9516222B2 (en) 2011-06-28 2016-12-06 Kip Peli P1 Lp Array cameras incorporating monolithic array camera modules with high MTF lens stacks for capture of images used in super-resolution processing
US10375302B2 (en) 2011-09-19 2019-08-06 Fotonation Limited Systems and methods for controlling aliasing in images captured by an array camera for use in super resolution processing using pixel apertures
US9794476B2 (en) 2011-09-19 2017-10-17 Fotonation Cayman Limited Systems and methods for controlling aliasing in images captured by an array camera for use in super resolution processing using pixel apertures
US10430682B2 (en) 2011-09-28 2019-10-01 Fotonation Limited Systems and methods for decoding image files containing depth maps stored as metadata
US9864921B2 (en) 2011-09-28 2018-01-09 Fotonation Cayman Limited Systems and methods for encoding image files containing depth maps stored as metadata
US9042667B2 (en) 2011-09-28 2015-05-26 Pelican Imaging Corporation Systems and methods for decoding light field image files using a depth map
US9811753B2 (en) 2011-09-28 2017-11-07 Fotonation Cayman Limited Systems and methods for encoding light field image files
US9036928B2 (en) 2011-09-28 2015-05-19 Pelican Imaging Corporation Systems and methods for encoding structured light field image files
US8831367B2 (en) 2011-09-28 2014-09-09 Pelican Imaging Corporation Systems and methods for decoding light field image files
US10984276B2 (en) 2011-09-28 2021-04-20 Fotonation Limited Systems and methods for encoding image files containing depth maps stored as metadata
US9025894B2 (en) 2011-09-28 2015-05-05 Pelican Imaging Corporation Systems and methods for decoding light field image files having depth and confidence maps
US9031335B2 (en) 2011-09-28 2015-05-12 Pelican Imaging Corporation Systems and methods for encoding light field image files having depth and confidence maps
US9036931B2 (en) 2011-09-28 2015-05-19 Pelican Imaging Corporation Systems and methods for decoding structured light field image files
US10019816B2 (en) 2011-09-28 2018-07-10 Fotonation Cayman Limited Systems and methods for decoding image files containing depth maps stored as metadata
US20180197035A1 (en) 2011-09-28 2018-07-12 Fotonation Cayman Limited Systems and Methods for Encoding Image Files Containing Depth Maps Stored as Metadata
US11729365B2 (en) 2011-09-28 2023-08-15 Adeia Imaging LLC Systems and methods for encoding image files containing depth maps stored as metadata
US9536166B2 (en) 2011-09-28 2017-01-03 Kip Peli P1 Lp Systems and methods for decoding image files containing depth maps stored as metadata
US9031342B2 (en) 2011-09-28 2015-05-12 Pelican Imaging Corporation Systems and methods for encoding refocusable light field image files
US9031343B2 (en) 2011-09-28 2015-05-12 Pelican Imaging Corporation Systems and methods for encoding light field image files having a depth map
US9129183B2 (en) 2011-09-28 2015-09-08 Pelican Imaging Corporation Systems and methods for encoding light field image files
US10275676B2 (en) 2011-09-28 2019-04-30 Fotonation Limited Systems and methods for encoding image files containing depth maps stored as metadata
US9025895B2 (en) 2011-09-28 2015-05-05 Pelican Imaging Corporation Systems and methods for decoding refocusable light field image files
US10311649B2 (en) 2012-02-21 2019-06-04 Fotonation Limited Systems and method for performing depth based image editing
US9754422B2 (en) 2012-02-21 2017-09-05 Fotonation Cayman Limited Systems and method for performing depth based image editing
US9412206B2 (en) 2012-02-21 2016-08-09 Pelican Imaging Corporation Systems and methods for the manipulation of captured light field image data
US9210392B2 (en) 2012-05-01 2015-12-08 Pelican Imaging Corporation Camera modules patterned with pi filter groups
US9706132B2 (en) 2012-05-01 2017-07-11 Fotonation Cayman Limited Camera modules patterned with pi filter groups
US10334241B2 (en) 2012-06-28 2019-06-25 Fotonation Limited Systems and methods for detecting defective camera arrays and optic arrays
US9807382B2 (en) 2012-06-28 2017-10-31 Fotonation Cayman Limited Systems and methods for detecting defective camera arrays and optic arrays
US9100635B2 (en) 2012-06-28 2015-08-04 Pelican Imaging Corporation Systems and methods for detecting defective camera arrays and optic arrays
US11022725B2 (en) 2012-06-30 2021-06-01 Fotonation Limited Systems and methods for manufacturing camera modules using active alignment of lens stack arrays and sensors
US10261219B2 (en) 2012-06-30 2019-04-16 Fotonation Limited Systems and methods for manufacturing camera modules using active alignment of lens stack arrays and sensors
US9766380B2 (en) 2012-06-30 2017-09-19 Fotonation Cayman Limited Systems and methods for manufacturing camera modules using active alignment of lens stack arrays and sensors
US9123118B2 (en) 2012-08-21 2015-09-01 Pelican Imaging Corporation System and methods for measuring depth using an array camera employing a bayer filter
US10380752B2 (en) 2012-08-21 2019-08-13 Fotonation Limited Systems and methods for estimating depth and visibility from a reference viewpoint for pixels in a set of images captured from different viewpoints
US9858673B2 (en) 2012-08-21 2018-01-02 Fotonation Cayman Limited Systems and methods for estimating depth and visibility from a reference viewpoint for pixels in a set of images captured from different viewpoints
US9240049B2 (en) 2012-08-21 2016-01-19 Pelican Imaging Corporation Systems and methods for measuring depth using an array of independently controllable cameras
US9235900B2 (en) 2012-08-21 2016-01-12 Pelican Imaging Corporation Systems and methods for estimating depth and visibility from a reference viewpoint for pixels in a set of images captured from different viewpoints
US9147254B2 (en) 2012-08-21 2015-09-29 Pelican Imaging Corporation Systems and methods for measuring depth in the presence of occlusions using a subset of images
US9129377B2 (en) 2012-08-21 2015-09-08 Pelican Imaging Corporation Systems and methods for measuring depth based upon occlusion patterns in images
US9123117B2 (en) 2012-08-21 2015-09-01 Pelican Imaging Corporation Systems and methods for generating depth maps and corresponding confidence maps indicating depth estimation reliability
US10462362B2 (en) 2012-08-23 2019-10-29 Fotonation Limited Feature based high resolution motion estimation from low resolution images captured using an array source
US9813616B2 (en) 2012-08-23 2017-11-07 Fotonation Cayman Limited Feature based high resolution motion estimation from low resolution images captured using an array source
US9214013B2 (en) 2012-09-14 2015-12-15 Pelican Imaging Corporation Systems and methods for correcting user identified artifacts in light field images
WO2014043641A1 (en) * 2012-09-14 2014-03-20 Pelican Imaging Corporation Systems and methods for correcting user identified artifacts in light field images
US10390005B2 (en) 2012-09-28 2019-08-20 Fotonation Limited Generating images from light fields utilizing virtual viewpoints
US9749568B2 (en) 2012-11-13 2017-08-29 Fotonation Cayman Limited Systems and methods for array camera focal plane control
US9143711B2 (en) 2012-11-13 2015-09-22 Pelican Imaging Corporation Systems and methods for array camera focal plane control
US9202287B2 (en) * 2012-11-22 2015-12-01 Samsung Electronics Co., Ltd. Apparatus and method for processing color image using depth image
US20140140613A1 (en) * 2012-11-22 2014-05-22 Samsung Electronics Co., Ltd. Apparatus and method for processing color image using depth image
US9462164B2 (en) 2013-02-21 2016-10-04 Pelican Imaging Corporation Systems and methods for generating compressed light field representation data using captured light fields, array geometry, and parallax information
US10009538B2 (en) 2013-02-21 2018-06-26 Fotonation Cayman Limited Systems and methods for generating compressed light field representation data using captured light fields, array geometry, and parallax information
US9743051B2 (en) 2013-02-24 2017-08-22 Fotonation Cayman Limited Thin form factor computational array cameras and modular array cameras
US9774831B2 (en) 2013-02-24 2017-09-26 Fotonation Cayman Limited Thin form factor computational array cameras and modular array cameras
US9374512B2 (en) 2013-02-24 2016-06-21 Pelican Imaging Corporation Thin form factor computational array cameras and modular array cameras
US9253380B2 (en) 2013-02-24 2016-02-02 Pelican Imaging Corporation Thin form factor computational array cameras and modular array cameras
US9638883B1 (en) 2013-03-04 2017-05-02 Fotonation Cayman Limited Passive alignment of array camera modules constructed from lens stack arrays and sensors based upon alignment information obtained during manufacture of array camera modules using an active alignment process
US9917998B2 (en) 2013-03-08 2018-03-13 Fotonation Cayman Limited Systems and methods for measuring scene information while capturing images using array cameras
US9774789B2 (en) 2013-03-08 2017-09-26 Fotonation Cayman Limited Systems and methods for high dynamic range imaging using array cameras
US10225543B2 (en) 2013-03-10 2019-03-05 Fotonation Limited System and methods for calibration of an array camera
US10958892B2 (en) 2013-03-10 2021-03-23 Fotonation Limited System and methods for calibration of an array camera
US9124864B2 (en) 2013-03-10 2015-09-01 Pelican Imaging Corporation System and methods for calibration of an array camera
US11570423B2 (en) 2013-03-10 2023-01-31 Adeia Imaging Llc System and methods for calibration of an array camera
US11272161B2 (en) 2013-03-10 2022-03-08 Fotonation Limited System and methods for calibration of an array camera
US9986224B2 (en) 2013-03-10 2018-05-29 Fotonation Cayman Limited System and methods for calibration of an array camera
US9521416B1 (en) 2013-03-11 2016-12-13 Kip Peli P1 Lp Systems and methods for image data compression
US9519972B2 (en) 2013-03-13 2016-12-13 Kip Peli P1 Lp Systems and methods for synthesizing images from image data captured by an array camera using restricted depth of field depth maps in which depth estimation precision varies
US9888194B2 (en) 2013-03-13 2018-02-06 Fotonation Cayman Limited Array camera architecture implementing quantum film image sensors
US9800856B2 (en) 2013-03-13 2017-10-24 Fotonation Cayman Limited Systems and methods for synthesizing images from image data captured by an array camera using restricted depth of field depth maps in which depth estimation precision varies
US9733486B2 (en) 2013-03-13 2017-08-15 Fotonation Cayman Limited Systems and methods for controlling aliasing in images captured by an array camera for use in super-resolution processing
US9106784B2 (en) 2013-03-13 2015-08-11 Pelican Imaging Corporation Systems and methods for controlling aliasing in images captured by an array camera for use in super-resolution processing
US10127682B2 (en) 2013-03-13 2018-11-13 Fotonation Limited System and methods for calibration of an array camera
US9741118B2 (en) 2013-03-13 2017-08-22 Fotonation Cayman Limited System and methods for calibration of an array camera
US9100586B2 (en) 2013-03-14 2015-08-04 Pelican Imaging Corporation Systems and methods for photometric normalization in array cameras
US10091405B2 (en) 2013-03-14 2018-10-02 Fotonation Cayman Limited Systems and methods for reducing motion blur in images or video in ultra low light with array cameras
US10547772B2 (en) 2013-03-14 2020-01-28 Fotonation Limited Systems and methods for reducing motion blur in images or video in ultra low light with array cameras
US10412314B2 (en) 2013-03-14 2019-09-10 Fotonation Limited Systems and methods for photometric normalization in array cameras
US9578259B2 (en) 2013-03-14 2017-02-21 Fotonation Cayman Limited Systems and methods for reducing motion blur in images or video in ultra low light with array cameras
US9787911B2 (en) 2013-03-14 2017-10-10 Fotonation Cayman Limited Systems and methods for photometric normalization in array cameras
US9602805B2 (en) 2013-03-15 2017-03-21 Fotonation Cayman Limited Systems and methods for estimating depth using ad hoc stereo array cameras
US10674138B2 (en) 2013-03-15 2020-06-02 Fotonation Limited Autofocus system for a conventional camera that uses depth information from an array camera
US9497370B2 (en) 2013-03-15 2016-11-15 Pelican Imaging Corporation Array camera architecture implementing quantum dot color filters
US9497429B2 (en) 2013-03-15 2016-11-15 Pelican Imaging Corporation Extended color processing on pelican array cameras
US10182216B2 (en) 2013-03-15 2019-01-15 Fotonation Limited Extended color processing on pelican array cameras
US9633442B2 (en) 2013-03-15 2017-04-25 Fotonation Cayman Limited Array cameras including an array camera module augmented with a separate camera
US9445003B1 (en) 2013-03-15 2016-09-13 Pelican Imaging Corporation Systems and methods for synthesizing high resolution images using image deconvolution based on motion and depth information
US9438888B2 (en) 2013-03-15 2016-09-06 Pelican Imaging Corporation Systems and methods for stereo imaging with camera arrays
US9955070B2 (en) 2013-03-15 2018-04-24 Fotonation Cayman Limited Systems and methods for synthesizing high resolution images using image deconvolution based on motion and depth information
US10638099B2 (en) 2013-03-15 2020-04-28 Fotonation Limited Extended color processing on pelican array cameras
US9800859B2 (en) 2013-03-15 2017-10-24 Fotonation Cayman Limited Systems and methods for estimating depth using stereo array cameras
US10542208B2 (en) 2013-03-15 2020-01-21 Fotonation Limited Systems and methods for synthesizing high resolution images using image deconvolution based on motion and depth information
US10122993B2 (en) 2013-03-15 2018-11-06 Fotonation Limited Autofocus system for a conventional camera that uses depth information from an array camera
US10455218B2 (en) 2013-03-15 2019-10-22 Fotonation Limited Systems and methods for estimating depth using stereo array cameras
US9898856B2 (en) 2013-09-27 2018-02-20 Fotonation Cayman Limited Systems and methods for depth-assisted perspective distortion correction
US10540806B2 (en) 2013-09-27 2020-01-21 Fotonation Limited Systems and methods for depth-assisted perspective distortion correction
US9426343B2 (en) 2013-11-07 2016-08-23 Pelican Imaging Corporation Array cameras incorporating independently aligned lens stacks
US9185276B2 (en) 2013-11-07 2015-11-10 Pelican Imaging Corporation Methods of manufacturing array camera modules incorporating independently aligned lens stacks
US9924092B2 (en) 2013-11-07 2018-03-20 Fotonation Cayman Limited Array cameras incorporating independently aligned lens stacks
US9264592B2 (en) 2013-11-07 2016-02-16 Pelican Imaging Corporation Array camera modules incorporating independently aligned lens stacks
US10119808B2 (en) 2013-11-18 2018-11-06 Fotonation Limited Systems and methods for estimating depth from projected texture using camera arrays
US11486698B2 (en) 2013-11-18 2022-11-01 Fotonation Limited Systems and methods for estimating depth from projected texture using camera arrays
US10767981B2 (en) 2013-11-18 2020-09-08 Fotonation Limited Systems and methods for estimating depth from projected texture using camera arrays
US10708492B2 (en) 2013-11-26 2020-07-07 Fotonation Limited Array camera configurations incorporating constituent array cameras and constituent cameras
US9426361B2 (en) 2013-11-26 2016-08-23 Pelican Imaging Corporation Array camera configurations incorporating multiple constituent array cameras
US9456134B2 (en) 2013-11-26 2016-09-27 Pelican Imaging Corporation Array camera configurations incorporating constituent array cameras and constituent cameras
US9813617B2 (en) 2013-11-26 2017-11-07 Fotonation Cayman Limited Array camera configurations incorporating constituent array cameras and constituent cameras
US10574905B2 (en) 2014-03-07 2020-02-25 Fotonation Limited System and methods for depth regularization and semiautomatic interactive matting using RGB-D images
US10089740B2 (en) 2014-03-07 2018-10-02 Fotonation Limited System and methods for depth regularization and semiautomatic interactive matting using RGB-D images
US9247117B2 (en) 2014-04-07 2016-01-26 Pelican Imaging Corporation Systems and methods for correcting for warpage of a sensor array in an array camera module by introducing warpage into a focal plane of a lens stack array
US9521319B2 (en) 2014-06-18 2016-12-13 Pelican Imaging Corporation Array cameras and array camera modules including spectral filters disposed outside of a constituent image sensor
US11546576B2 (en) 2014-09-29 2023-01-03 Adeia Imaging Llc Systems and methods for dynamic calibration of array cameras
US10250871B2 (en) 2014-09-29 2019-04-02 Fotonation Limited Systems and methods for dynamic calibration of array cameras
US9942474B2 (en) 2015-04-17 2018-04-10 Fotonation Cayman Limited Systems and methods for performing high speed video capture and depth estimation using array cameras
CN105741279A (en) * 2016-01-27 2016-07-06 西安电子科技大学 Rough-set-based image segmentation method using fast suppressed fuzzy clustering
CN108230346A (en) * 2017-03-30 2018-06-29 北京市商汤科技开发有限公司 Method and device for segmenting semantic features of image and electronic equipment
CN108230346B (en) * 2017-03-30 2020-09-11 北京市商汤科技开发有限公司 Method and device for segmenting semantic features of image and electronic equipment
US10122912B2 (en) 2017-04-10 2018-11-06 Sony Corporation Device and method for detecting regions in an image
US10818026B2 (en) 2017-08-21 2020-10-27 Fotonation Limited Systems and methods for hybrid depth regularization
US11562498B2 (en) 2017-08-21 2023-01-24 Adeia Imaging LLC Systems and methods for hybrid depth regularization
US10482618B2 (en) 2017-08-21 2019-11-19 Fotonation Limited Systems and methods for hybrid depth regularization
US11699273B2 (en) 2019-09-17 2023-07-11 Intrinsic Innovation Llc Systems and methods for surface modeling using polarization cues
US11270110B2 (en) 2019-09-17 2022-03-08 Boston Polarimetrics, Inc. Systems and methods for surface modeling using polarization cues
US11525906B2 (en) 2019-10-07 2022-12-13 Intrinsic Innovation Llc Systems and methods for augmentation of sensor systems and imaging systems with polarization
US11302012B2 (en) 2019-11-30 2022-04-12 Boston Polarimetrics, Inc. Systems and methods for transparent object segmentation using polarization cues
US11842495B2 (en) 2019-11-30 2023-12-12 Intrinsic Innovation Llc Systems and methods for transparent object segmentation using polarization cues
US11580667B2 (en) 2020-01-29 2023-02-14 Intrinsic Innovation Llc Systems and methods for characterizing object pose detection and measurement systems
US11797863B2 (en) 2020-01-30 2023-10-24 Intrinsic Innovation Llc Systems and methods for synthesizing data for training statistical models on different imaging modalities including polarized images
WO2022143366A1 (en) * 2021-01-04 2022-07-07 北京沃东天骏信息技术有限公司 Image processing method and apparatus, electronic device, medium, and computer program product
US11683594B2 (en) 2021-04-15 2023-06-20 Intrinsic Innovation Llc Systems and methods for camera exposure control
US11290658B1 (en) 2021-04-15 2022-03-29 Boston Polarimetrics, Inc. Systems and methods for camera exposure control
US11954886B2 (en) 2021-04-15 2024-04-09 Intrinsic Innovation Llc Systems and methods for six-degree of freedom pose estimation of deformable objects
US11953700B2 (en) 2021-05-27 2024-04-09 Intrinsic Innovation Llc Multi-aperture polarization optical systems using beam splitters
US11689813B2 (en) 2021-07-01 2023-06-27 Intrinsic Innovation Llc Systems and methods for high dynamic range imaging using crossed polarizers
CN113822901B (en) * 2021-07-21 2023-12-12 南京旭锐软件科技有限公司 Image segmentation method and device, storage medium and electronic equipment
CN113822901A (en) * 2021-07-21 2021-12-21 南京旭锐软件科技有限公司 Image segmentation method, image segmentation device, storage medium and electronic equipment

Also Published As

Publication number Publication date
MY150361A (en) 2013-12-31

Similar Documents

Publication Publication Date Title
WO2012074361A1 (en) Method of image segmentation using intensity and depth information
US11922615B2 (en) Information processing device, information processing method, and storage medium
US8611728B2 (en) Video matting based on foreground-background constraint propagation
JP6355346B2 (en) Image processing apparatus, image processing method, program, and storage medium
JP2016505186A (en) Image processor with edge preservation and noise suppression functions
US20090290796A1 (en) Image processing apparatus and image processing method
IES20060564A2 (en) Improved foreground / background separation
CN110390643B (en) License plate enhancement method and device and electronic equipment
US9401027B2 (en) Method and apparatus for scene segmentation from focal stack images
CN109858438B (en) Lane line detection method based on model fitting
CN109903294B (en) Image processing method and device, electronic equipment and readable storage medium
JP2016200970A (en) Main subject detection method, main subject detection device and program
US20170243328A1 (en) Method and apparatus for image processing
CN112508923B (en) Weak and small target detection method
Zhu et al. Automatic object detection and segmentation from underwater images via saliency-based region merging
KR101836811B1 (en) Method, apparatus and computer program for matching between the images
EP3053137B1 (en) Method and apparatus for generating superpixel clusters
KR20180067909A (en) Apparatus and method for segmenting image
CN111080723A (en) Image element segmentation method based on Unet network
JP5914046B2 (en) Image processing apparatus and image processing method
CN111192286A (en) Image synthesis method, electronic device and storage medium
CN104573085A (en) Image retrieval method, image retrieval device and terminal
CN111161299B (en) Image segmentation method, storage medium and electronic device
Dimiccoli et al. Hierarchical region-based representation for segmentation and filtering with depth in single images
CN107248167B (en) Moving object shadow detection method and device and computer readable storage medium

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 11845303

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: PCT application non-entry in European phase

Ref document number: 11845303

Country of ref document: EP

Kind code of ref document: A1