WO2010045406A2 - Camera system with autonomous miniature camera and light source assembly and method for image enhancement - Google Patents

Camera system with autonomous miniature camera and light source assembly and method for image enhancement

Info

Publication number
WO2010045406A2
Authority
WO
WIPO (PCT)
Prior art keywords
sub
light source
camera
image
band
Prior art date
Application number
PCT/US2009/060745
Other languages
French (fr)
Other versions
WO2010045406A3 (en)
Inventor
Yu-Hwa Lo
Yoav Mintz
Truong Nguyen
Jack Tzeng
Original Assignee
The Regents Of The University Of California
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by The Regents Of The University Of California filed Critical The Regents Of The University Of California
Priority to US13/124,659 (issued as US8860793B2)
Publication of WO2010045406A2
Publication of WO2010045406A3

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/04 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
    • A61B 1/05 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances characterised by the image sensor, e.g. camera, being in the distal end portion
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/06 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor with illuminating arrangements
    • A61B 1/0661 Endoscope light sources
    • A61B 1/0676 Endoscope light sources at distal tip of an endoscope
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/06 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor with illuminating arrangements
    • A61B 1/0661 Endoscope light sources
    • A61B 1/0684 Endoscope light sources using light emitting diodes [LED]
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B 90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B 90/361 Image-producing devices, e.g. surgical cameras
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B 15/00 Special procedures for taking photographs; Apparatus therefor
    • G03B 15/02 Illuminating scene
    • G03B 15/03 Combinations of cameras with lighting apparatus; Flash units
    • G03B 15/05 Combinations of cameras with electronic flash apparatus; Electronic flash units
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B 90/30 Devices for illuminating a surgical field, the devices having an interrelation with other surgical devices or with a surgical procedure
    • A61B 2090/309 Devices for illuminating a surgical field, the devices having an interrelation with other surgical devices or with a surgical procedure using white LEDs
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B 2215/00 Special procedures for taking photographs; Apparatus therefor
    • G03B 2215/05 Combinations of cameras with electronic flash units
    • G03B 2215/0514 Separate unit
    • G03B 2215/0517 Housing
    • G03B 2215/0525 Reflector
    • G03B 2215/0528 Reflector movable reflector, e.g. change of illumination angle or illumination direction

Definitions

  • the present invention relates to camera systems and methods and, more particularly, to systems and methods of lighting that are employed as part of (or in combination with) such camera systems and methods in any of a variety of applications and environments including, for example, medical applications and environments.
  • MIS Minimally invasive surgery
  • Today this operation typically uses 3-6 abdominal skin incisions (each approximately 5-12 mm in length), through which optical fibers, cameras, and long operating instruments are inserted into the abdominal cavity.
  • the abdomen is usually insufflated (inflated) with carbon dioxide gas to create a working and viewing space and the operation is performed using these long instruments through the incisions.
  • One currently available camera for MIS is composed of a 12-15-inch long tube containing several lenses and optical fibers (the laparoscope).
  • the front section of the laparoscope enters the abdomen through a portal called a "trocar" and the back end is connected to a power source, light source and supporting hardware through large cables protruding from the rear of the scope.
  • the internal optical fibers convey xenon or halogen light via an external light source into the internal cavity and the lenses transfer the images from within the cavity to an externally connected portable video camera.
  • limitations of this imaging device include its two-dimensional view, lack of sufficient optical zoom, inability to adjust its angle of view or angle of illumination without a new incision, inability to control light intensity, and restrictions on its movement due to its large size and external cables.
  • This is especially significant in this era of MIS in which it is desired to minimize the number and extent of incisions, and wherein new technologies are being developed such as Natural Orifice Transluminal Endoscopic Surgery (NOTES) which use no abdominal incisions at all, but instead utilize a single trocar inserted into the mouth or vagina to carry all surgical instruments as well as the camera and light source.
  • NOTES Natural Orifice Transluminal Endoscopic Surgery
  • Lighting has been a challenge for laparoscopic cameras because they operate in an environment of extreme darkness where accessibility to sources of illumination is limited. Further, new cameras are being utilized which possess features such as a flexible field of view or optical zoom capability. Although such camera features are extremely desirable insofar as they allow surgeons to obtain both the global and the detailed view of the patient's abdomen as well as a large working space in which to operate, they impose stricter requirements on lighting.
  • the limited illumination angle of the light source may not cover the entire desired image area, leading to rapid fading of the acquired image towards the edge of the image field.
  • the center portion of the acquired image may be over illuminated and saturated, resulting in a penalty in resolution and dynamic range.
  • Tunable lenses refer to lenses with variable focal distances, and one type of tunable lens is a fluidic lens. By utilizing such tunable lenses, which in at least some embodiments are fluidic (or microfluidic) lenses, improved lighting particularly involving a tuned radiation pattern and/or a controlled beam angle can be achieved.
  • In at least one embodiment, the present invention relates to a camera system suitable for use in minimally invasive surgery (MIS).
  • MIS minimally invasive surgery
  • the camera system includes an autonomous miniature camera, a light source assembly providing features such as steerable illumination, a variable radiation angle and auto-controlled light intensity, and a control and processing unit for processing images acquired by the camera to generate improved images having reduced blurring using a deblurring algorithm which correlates information between colors.
  • the present invention relates to a camera system for MIS having one or more autonomous miniature cameras enabling simultaneous multi-angle viewing and a three dimensional (3D) view, and a light source assembly having a light source with a tunable radiation pattern and dynamic beam steering.
  • the camera is insertable into an abdominal cavity through the standard portal of the procedure, either through a trocar or via natural orifice.
  • the light source assembly is also insertable through the same portal.
  • the camera system also preferably includes a control and processing unit for independently controlling the camera (pan, tilt and zoom) and the light source assembly.
  • Wireless network links enable communication between these system components and the control and processing unit, eliminating bulky cables.
  • the control and processing unit receives location signals, transmits control signals, and receives acquired image data.
  • the control and processing unit processes the acquired image data to provide high quality images to the surgeon.
  • the tunable and steerable nature of the light source assembly allows light to be efficiently focused primarily on the camera field of view, thereby optimizing the energy directed to the region of interest.
  • the autonomous nature of the camera and light source assembly allows the camera to be freely movable within the abdominal cavity to allow multiple angles of view of a region of interest (such as target organs), with the light source assembly capable of being adjusted accordingly. In this manner, although camera motion tracking is required, the illumination can be adjusted to reduce glare and provide optimal lighting efficiency.
  • the camera includes a fluidic lens system which causes at least one color plane (channel) of acquired images to appear sharp and the other color planes (channels) to appear blurred. Therefore, another aspect of the invention provides a deblurring algorithm for correcting these blurred color planes of acquired images from the camera.
  • the algorithm uses an adapted perfect reconstruction filter bank that uses high frequency sub-bands of sharp color planes to improve blurred color planes. Refinements can be made by adding cascades and a filter to the system based on the channel characteristics. Blurred color planes have good shading information but poor edge information.
  • the filter bank structure allows the separation of the edge and shading information. During reconstruction, the edge information from the blurred color plane is replaced by the edge information from the sharp color plane, as illustrated in the sketch below.
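  • The following is a minimal illustrative sketch (not the patent's exact filter bank) of the idea described above: one decomposition level separates a "shading" (low-pass) sub-band from "edge" (high-pass) sub-bands, and the blurred plane is rebuilt using the edge sub-bands of the sharp plane. It assumes the PyWavelets package; the function and variable names are ours, not the patent's.

    import numpy as np
    import pywt

    def mesh_one_level(blurred_plane: np.ndarray, sharp_plane: np.ndarray) -> np.ndarray:
        # Keep the blurred plane's approximation (shading) sub-band, replace its
        # detail (edge) sub-bands with those of the sharp plane, then reconstruct.
        b_ll, _b_details = pywt.dwt2(blurred_plane, "haar")   # blurred: keep shading
        _g_ll, g_details = pywt.dwt2(sharp_plane, "haar")     # sharp: take edges
        return pywt.idwt2((b_ll, g_details), "haar")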
  • the present invention relates to a camera system configured to acquire image information representative of subject matter within a selected field of view.
  • the camera system includes a camera that receives reflected light and based thereon generates the image information, and a light source assembly operated in conjunction with the camera so as to provide illumination, the light source assembly including a first light source and a first tunable lens. At least some of the illumination output by the light source assembly is received back at the camera as the reflected light.
  • the present invention relates to a method of operating a camera system to acquire image information representative of subject matter within a selected field of view.
  • the method includes providing a camera and a light source assembly, where the light source assembly includes a light source and a tunable lens.
  • the method also includes controlling the tunable lens, and transmitting the light beam from the light source assembly.
  • the method additionally includes receiving reflected light at the camera, where at least some of the light of the light beam transmitted from the light source assembly is included as part of the reflected light, and where the image information is based at least in part upon the reflected light.
  • the tunable lens is controlled to vary the light beam output by the light source assembly, whereby the light beam can be varied to substantially match the selected field of view.
  • a fluidic lens camera system including an image sensor is used for acquiring image data of a scene within a field of view of the camera.
  • Image data is provided from each of a plurality of color channels of the image sensor, such as red (R), green (G), and blue (B) color channels.
  • a control and processing unit is operable to receive and process the acquired image data in accordance with image processing methods described below to correct the blurred image data.
  • wavelet or contourlet transforms can be used to decompose the image data corresponding to the different color channels, mesh the edge information from a sharp color channel with the non-edge information from a blurred color channel to form a set of sub-band output coefficients or reconstruction coefficients in the wavelet or contourlet domain, and then reconstruct a deblurred image using these sub-band output coefficients.
  • the decomposition and reconstruction steps use a modified perfect reconstruction filter bank and wavelet transforms.
  • the decomposition and reconstruction steps use a contourlet filter bank, contourlet transforms, and an ant colony optimization method for extracting relevant edge information.
  • a method for deblurring a blurred first color image corresponding to a first color channel of a camera that also produces a sharp second color image corresponding to a second color channel of the camera is provided, wherein the first and the second color images each include a plurality of pixels with each pixel having an associated respective value.
  • the method includes decomposing the first color image by filtering and downsampling to generate a first set of one or more first sub-band output coefficients, wherein each first sub-band output coefficient corresponds to a respective sub-band in a selected one of a wavelet and a contourlet domain, and decomposing the second color image by filtering and downsampling to generate a second set of second sub-band output coefficients, wherein each second sub-band output coefficient corresponds to a respective sub-band in the selected domain.
  • the method further includes selecting those second sub-band output coefficients that represent edge information, each of the selected second sub-band output coefficients corresponding to a respective selected sub-band, with the selected sub-bands together defining an edge sub-band set, and preparing a third set of sub-band output coefficients which includes the selected second sub-band output coefficients representing edge information and at least one first sub-band output coefficient corresponding to a sub-band other than those sub-bands in the edge sub-band set.
  • the present invention relates to a light source assembly.
  • the light source assembly includes an output port, a light source, and a tunable lens positioned between the light source and the output port.
  • the light source generates light that passes through the tunable lens and then exits the light source assembly via the output port as output light, and the tunable lens is adjustable to vary a characteristic of the output light exiting the light source assembly via the output port.
  • the light source includes an array of light emitting diodes (LEDs) that each can be controlled to be turned on or turned off and, based upon such control, a direction of light exiting the light source assembly can be varied.
  • FIG. 1 is a schematic view of an exemplary camera system in accordance with at least one embodiment of the present invention;
  • FIGS. 2A-2C respectively show three different schematic views of an exemplary light source assembly using an LED and a microfluidic lens, illustrating various tunable radiation patterns achievable by adjusting the lens;
  • FIGS. 3A-3C respectively show three different schematic views of another embodiment of a light source assembly using LED arrays and a microfluidic lens, illustrating dynamic beam steering by selection and energization of one or more LEDs in the array;
  • FIG. 4 illustrates a perfect reconstruction filter bank for decomposing and reconstructing an image;
  • FIG. 5 illustrates a modified perfect reconstruction filter bank;
  • FIG. 6 illustrates a one dimensional perfect reconstruction filter bank;
  • FIG. 7 illustrates a modified one dimensional perfect reconstruction filter bank;
  • FIG. 8 illustrates a contourlet transform and the resulting frequency division in a contourlet frequency domain;
  • FIG. 9 illustrates an exemplary two dimensional contourlet filter bank; and
  • FIGS. 10(a)-(g) illustrate various conditions for a contourlet filter bank.
  • an exemplary embodiment of a camera system 2 includes a miniature camera 4, two light source assemblies 6 and 8, respectively, and a control and processing unit 10 which includes a control unit 11 and a PZT (piezoelectric transducer) operator control unit 12 that are coupled to and in communication with one another.
  • the miniature camera 4 and light source assemblies 6, 8 are placed within an abdominal cavity 1 (or other body cavity or the like) of a patient, while the control and processing unit 10 remains exterior to the body of the patient.
  • the camera is insertable into an abdominal cavity through a 20 mm incision, such as on the abdominal wall, and the light source assembly is also insertable through an incision.
  • the miniature camera 4 is physically separated from each of the light source assemblies 6, 8, each of which is also physically separate from one another. Nevertheless, as represented by a dashed line 19, in other embodiments, it is possible for the miniature camera 4 and the light source assemblies 6, 8 (or, alternatively, one of those light source assemblies), to be physically connected or attached, or even to be housed within the same housing.
  • a field of view 14 of the camera 4 encompasses a desired region of interest (which in this example is a quadrilateral) 16 within the abdominal cavity 1, and thus encompasses subject matter within that desired region of interest (e.g., a particular organ or portion of an organ).
  • the light source assemblies 6, 8 are respectively adjusted to generate light beams 11, 13, which provide efficient illumination to the field of view 14 so as to effectively illuminate the desired region of interest 16 and subject matter contained therein. Some or all of the light directed to the desired region of interest 16 and generally to the field of view 14 is reflected off of the subject matter located there and consequently received by the miniature camera 4 as image information. In at least some embodiments, an effort is made to control the light provided by the light source assemblies 6, 8 so that the region illuminated by the light exactly corresponds to (or falls upon), or substantially corresponds to, the field of view 14. Given the adjustability of the light source assemblies in such manner, in at least some embodiments the light source assemblies (or light sources) can be referred to as "smart light source assemblies" or "smart light sources".
  • the miniature camera 4 includes a light sensor and a fluidic lens (e.g., microfluidic lens) system.
  • the light sensor can take a variety of forms, for example, that of a complementary metal-oxide-semiconductor (CMOS) sensor or that of a charge coupled device (CCD) sensor.
  • CMOS complementary metal-oxide-semiconductor
  • CCD charge coupled device
  • the fluidic lens system of the miniature camera 4 affects different color wavelengths non-uniformly.
  • control unit 11 and piezoelectric (PZT) operator control unit 12 can take a variety of forms depending upon the embodiment. In one embodiment, each of the control units 11, 12 includes a processing device (e.g., a microprocessor) and a memory device, among other components. Also, while the control and processing unit 10 is shown to include both the control unit 11 and the PZT operator control unit 12, in other embodiments the functioning of those two control units can be performed by a single device.
  • the miniature camera 4 of the camera system 2 acquires images (e.g., acquired image data) of the desired region of interest 16 within the field of view 14, which are transmitted to the control and processing unit 10.
  • the miniature camera 4 transmits, over 3 channels, two video images and additionally the optical zoom state.
  • the control unit 11 (e.g., a master computer) processes the received data, and can further transmit to the light source assemblies 6, 8 any changes that should be made in their operation so as to adjust the intensity and/or direction/angle of the light emitted from the light source assemblies.
  • control and processing unit 10 provides color corrections to the acquired images using a deblurring algorithm, as more fully described below.
  • the system can in other embodiments include any arbitrary number of light source assemblies and any arbitrary number of cameras.
  • the use of the multiple light sources 6, 8 so as to provide illumination from different angles in the present embodiment does afford greater efficiencies.
  • the use of multiple light sources from different angles can make possible additional types of image processing. For example, different lighting angles and shadows can be processed to create enhanced three-dimensional images and anatomical landmarks identification.
  • a two-way wireless communication network having wireless communication links 18 is provided that allows for the transmission of control and/or monitoring signals between/among components of the system 2.
  • the wireless communication links 18 among other things allow for communication between the control unit 11 and the miniature camera 4. More particularly in this regard, the wireless communication links 18 allow for signals to be transmitted from the miniature camera 4 back to the control unit 11, particularly signals representative of video images from inside the abdominal cavity as detected by the miniature camera. Also, the wireless communication links 18 allow for control signals to be provided from the control and processing unit 10 to the miniature camera 4 for the purpose of governing the pan, tilt, and zoom of the camera lens.
  • optical zoom information and any camera motion information can be further taken into account by the control unit 11 in its processing of image data.
  • the wireless communication links 18 allow for communication between the control unit 11 and the light source assemblies 6, 8 so as to allow for control over the on/off status, brightness, beam orientation and/or other operational characteristics of the light source assemblies.
  • the control and processing unit 10 receives video imaging and zoom data, processes it, and according to the change of optical zoom and motion estimation, transmits appropriate control signals to the lens and the light source assemblies.
  • control and processing unit 10 as well as the components internal to the abdominal cavity 1 (or other body cavity or other space), such as the miniature camera 4 and the light source assemblies 6, 8, can each include a wireless transceiver or other conventional wireless communications hardware allowing for the establishment of the wireless communication links 18.
  • the use of the wireless communication links 18 for the purpose of allowing for communication of control and monitoring signals between the control and processing unit 10 (and particularly the control unit 11) and each of the miniature camera 4 and light source assemblies 6, 8 eliminates the need for cables extending through the wall of the abdomen 1 for such purpose.
  • the control and processing unit 10 coordinates all wireless communications between the components of the camera system 2.
  • some or all wireless communications are achieved by way of conventional signal processing algorithms and networking protocols.
  • the miniature camera 4 transmits to the control and processing unit 10 in three channels the two video images (for example, the G and B channels) and the optical zoom state.
  • the wireless communication links 18 employ Wi-Fi or Bluetooth communication protocols/technologies. Additionally, depending on video resolution, frame rate, the distance between the control and processing unit 10 and the miniature camera 4 (or cameras), as well as the number of cameras in circumstances where more than one such camera is used, it can become desirable to compress the video bit streams.
  • Possible compression algorithms include, for example, the simple image coding JPEG standard, the high-end image coding JPEG2000 standard, and the high-end video coding H.264 standard, as well as Scalable Video Coding.
  • the algorithm choice depends on other factors such as algorithm complexity, overall delay, and compression artifacts.
  • the portions of the camera system 2 within the abdominal cavity 1 are autonomous and freely movable within the abdominal cavity, which among other things allows for the desired region of interest 16 to be viewed from multiple directions/angles. Allowing for multiple angles of views can be desirable in a variety of circumstances, for example, when the desired region of interest 16 includes multiple target organs, or target organs with complicated surface shapes.
  • an operator (e.g., a physician) using the PZT operator control unit 12 is able to provide instructions to the control and processing unit 10 (the PZT operator control unit can include input devices allowing for the operator to provide such instructions), based upon which the control and processing unit 10 in turn causes changes in the position(s) of one or both of the light source assemblies 6, 8 in relation to (and, in some cases, so as to become closer to) the desired region of interest 16.
  • the control and processing unit 10 can in some circumstances save energy and provide a longer illumination period notwithstanding battery powering of the light source assemblies.
  • the particular positions of the light source assemblies 6, 8 and the miniature camera 4 are set up at a particular time during an operation (e.g., by insertion into the abdominal cavity 1 by a physician early on during an operation) and then those positions are fixed during the remainder of the operation.
  • the relative locations of the light source assemblies 6, 8 and the miniature camera 4 can be computed at an initial set up stage.
  • the initial parameters for the light beams 11, 13 provided by the light source assemblies 6, 8 (e.g., angles and beam direction) and for the miniature camera 4 (e.g., viewing angles with respect to the light beams) can be determined at this initial set up stage.
  • additional movements of the light source assemblies 6, 8 and/or the miniature camera 4 can also be tracked.
  • location information used by the control and processing unit 10 is obtained by triangulation of the wireless signals captured from each component using any of various conventional triangulation techniques (one such technique is sketched below). Additionally, as discussed in further detail below, in at least some embodiments, the illumination emitted from the light source assemblies 6, 8 can further be adjusted as necessary.
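  • As a hedged illustration of one such conventional technique (the patent does not specify which is used), the sketch below performs linear least-squares trilateration from range estimates (e.g., derived from received signal strength) to a few known receiver positions; the anchor coordinates and ranges are hypothetical.

    import numpy as np

    def trilaterate(anchors: np.ndarray, ranges: np.ndarray) -> np.ndarray:
        # Estimate a 2-D position from >= 3 anchor positions (N x 2) and ranges (N,).
        x0, y0 = anchors[0]
        d0 = ranges[0]
        # Linearize by subtracting the first range equation from the others.
        A = 2.0 * (anchors[1:] - anchors[0])
        b = (d0**2 - ranges[1:]**2
             + np.sum(anchors[1:]**2, axis=1) - (x0**2 + y0**2))
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos

    # Example: three hypothetical receivers outside the abdominal wall (coordinates in cm).
    anchors = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0]])
    ranges = np.array([18.0, 22.0, 25.0])
    print(trilaterate(anchors, ranges))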
  • the miniature camera 4 provides several advantageous features.
  • the miniature camera 4 is small in size, has powerful optical zoom capabilities, and has few or no moving parts while zooming.
  • the powerful optical zooming afforded by the miniature camera 4 allows the miniature camera to be distant from the desired region of interest 16. Consequently, whereas a significant amount of illumination would potentially be wasted (in that it would not reach the desired region of interest 16) if the light source assemblies 6, 8 were physically connected to the miniature camera 4, in the present embodiment this need not occur.
  • the light source assemblies are separate from and can be moved independently of the miniature camera 4.
  • in FIGS. 2A-2C, one of the light source assemblies 6, 8, namely the light source assembly 6, is shown schematically in more detail (in cross-section) in three different operational circumstances.
  • each of the light source assemblies 6, 8 is identical in construction and operation, albeit in other embodiments the various light sources need not be identical.
  • the light source assembly 6 includes a tunable lens that in the present embodiment is a microfluidic lens 20 (albeit in other embodiments other types of tunable lenses can be employed as well), and additionally a light emitting diode (LED) 22.
  • LED light emitting diode
  • the LED 22 is small (typically about 1 mm² including package) and can be a widely available, off the shelf component that is of low cost, and that can easily be mounted on a printed circuit board (also not shown) so as to form a surface-mount LED or the like.
  • the light source assembly 6 further should be understood as including a power source such as a battery. Having an integrated power source to supply power to the light source assembly 6 is advantageous because, by using such a structure, there is no need to employ cables going through the abdominal wall into the abdominal cavity 1 (albeit this limits the amount of power available and further increases the need for focusing the light to the desired region of interest 16 rather than illuminating a large unrecorded field beyond the desired region of interest).
  • the microfluidic lens 20 (and, indeed, any microfluidic lens or lenses employed in the camera 4 as well) can be a microfluidic lens such as that described in U.S. Patent Application No. 11/683,141, which was filed on March 7, 2007 and is entitled "Fluidic Adaptive Lens Systems and Methods" (and which issued as U.S. Patent No. 7,453,646 on November 18, 2008), which is hereby incorporated by reference herein.
  • the microfluidic lens 20 is placed a few millimeters away from the LED 22 (although this separation distance is not shown in FIGS. 2A-2C) and has a tunable focal length and appropriate aperture typically several times the dimension of the LED die. For example, if the LED die is 1 mm x 1 mm (i.e., 1 mm²), the clear aperture of the tunable lens can be 3 mm by 3 mm (i.e., 9 mm²) or greater.
  • the radiation pattern (e.g., radiation angle) of each light source can be dynamically adjusted to obtain optimal illumination to match varying fields of view of the camera.
  • FIGS. 2A, 2B and 2C in particular respectively show medium, narrow and wide radiation patterns (particularly the extent of an angle θFWHM corresponding to full width at half maximum intensity) corresponding to three versions of the light beam 11 that are generated by the same light source assembly 6 when the microfluidic lens 20 is tuned to three different settings.
  • the narrow and medium radiation patterns are achieved when the microfluidic lens 20 is tuned so as to be highly convex and moderately convex, respectively, while the wide radiation pattern is achieved when the microfluidic lens is tuned so as to be concave. In the present embodiment, the distance between the LED 22 and the microfluidic lens 20 in the light source assembly 6 is close enough (e.g., 3-5 mm) so that, even for the shortest focal distance of the lens, the light of the LED is "defocused" rather than "focused".
  • the defocused LED light radiates at a divergent angle.
  • the radiation angle of illumination can be controlled to vary within a wide range of approximately 15 degrees to 140 degrees.
  • the optimal illumination condition is achieved when the radiation pattern produced by the light source assembly 6 is about 20% greater than the field of view 14 of the miniature camera 4.
  • the corresponding desired illumination angles range from 30 to 120 degrees, well within the capability of the above-described light source assembly 6.
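  • A small worked example of the "about 20% greater than the field of view" rule stated above: the helper below applies the 20% margin and clamps the result to the 15-140 degree tuning range quoted for the lens. The 20% margin and the angle ranges come from the text; the function itself is our own illustration, not the patent's.

    def illumination_angle_deg(camera_fov_deg: float,
                               margin: float = 1.2,
                               min_deg: float = 15.0,
                               max_deg: float = 140.0) -> float:
        # Radiation angle (FWHM, in degrees) to request from the tunable lens.
        return max(min_deg, min(max_deg, margin * camera_fov_deg))

    illumination_angle_deg(25.0)    # -> 30.0 degrees
    illumination_angle_deg(100.0)   # -> 120.0 degrees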
  • in FIGS. 3A-3C, another embodiment of a light source assembly 26 is shown in simplified schematic form in three different views (again in cross-section).
  • the light source assembly 26 can be used in place of either of the light source assemblies 6, 8 discussed above.
  • the light source assembly 26 in particular allows for steering of an illumination beam 28 emanating from the light source assembly, so as to allow for efficient illumination of various different desired regions of interest (e.g., the region of interest 16) without requiring mechanical movement of the light source so that it can shine upon those different regions. In some circumstances, such steering of the illumination beam 28 also allows for efficient illumination of a given desired region of interest from different directions/angles.
  • the light source assembly 26 (like the light source assembly 6 of FIGS. 2A-2C) again includes both a tunable fluidic (again in this embodiment, microfluidic) lens 24 and an LED light source.
  • the LED light source includes not merely one LED but rather includes an array of multiple LEDs 22 adjacent the lens 24. Depending upon the embodiment, the array 22 can take a variety of forms, employ any arbitrary number of LEDs, and/or employ LEDs of any arbitrary color(s).
  • the array 22 in particular has a center LED 30 aligned with a central axis 32 of the lens 24, and six additional LEDs 34 arranged along a concentric ring extending around the center LED 30 in a manner where each of the additional LEDs is spaced apart by the same distance from each of its three neighboring LEDs (that is, from the center LED and from each of the neighboring LEDs along the concentric ring).
  • the center LED 30 and two of the six additional LEDs are visible (it will be understood that the remaining four additional LEDs would be fore or aft of the three visible LEDs, that is, into or out of the page when viewing FIGS. 3A-3C).
  • the same light source assembly 26 given the same tuning of the lens 24 is capable of producing light beams 28 that have different orientations. More particularly, as shown in FIG. 3B, when the center LED 30 is turned on and the additional LEDs 34 are all shut off, the light beam 28 is aligned with (that is, centered about) the central axis 32 of the lens 24. By comparison, as shown in FIGS. 3A and 3C, when one of the additional LEDs 34 is energized instead, the light beams 28 produced are light cones that are angularly offset from the central axis 32 (by amounts θOFF). More particularly, FIG. 3A illustrates a circumstance where the additional LED 34 to the left of the center LED 30 is energized, while FIG. 3C illustrates a circumstance where the additional LED to the right of the center LED is energized.
  • the amount of angular offset in any given circumstance is determined by the particular spacing of the respective additional LEDs 34 (and also possibly, to some extent, by the tuning of the lens 24).
  • the particular angles of divergence ⁇ FWHM of the light beams 28 are also determined by the tuning of the lens 24. That is, the light beams 28 are of a tunable divergent angle.
  • beam steering can be achieved in the present embodiment without mechanical movement by selecting (and varying) which one(s) of the LEDs 30, 34 on or within the concentric ring are powered at any given time. While in the above example, only a single one of the LEDs 30, 34 is energized at a given time, in other embodiments multiple LEDs can also be powered simultaneously to create multiple beams along different directions, which can be desirable depending upon the particular desired regions of interest that are to be illuminated, or to provide illumination suitable for supporting multiple cameras.
  • any of a variety of LED arrays having a variety of formations with any arbitrary number of LEDs can be utilized depending upon the embodiment.
  • additional LEDs and/or additional tunable lenses can be added to the assembly to achieve more finely-graded or quasi-continuous beam steering. That is, as the number of LEDs increases and/or the spacing between LEDs decreases, there is an increased ability to achieve finer steering of the light beams that are generated.
  • Such quasi-continuous designs are more efficient and can allow for energy saving. Instead of illuminating a field that is composed of the margins between two LED areas by those two LEDs, the quasi-continuous design will illuminate the same area using only one LED (a simple selection rule is sketched below).
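  • A hedged sketch of the discrete beam steering described above: given a requested beam direction, pick which LED to energize, namely the center LED for an on-axis beam, otherwise the ring LED whose azimuth is closest to the requested direction. The six-LED ring geometry follows FIGS. 3A-3C; the angular threshold, names and return values are our own illustration, not the patent's control scheme.

    RING_AZIMUTHS_DEG = [0, 60, 120, 180, 240, 300]   # six LEDs around the center LED

    def select_led(desired_offset_deg: float, desired_azimuth_deg: float):
        # Returns the string 'center' for a negligible off-axis request,
        # otherwise the index of the ring LED closest in azimuth.
        if desired_offset_deg < 1.0:
            return "center"
        return min(range(len(RING_AZIMUTHS_DEG)),
                   key=lambda i: abs((RING_AZIMUTHS_DEG[i] - desired_azimuth_deg + 180) % 360 - 180))

    select_led(desired_offset_deg=10.0, desired_azimuth_deg=100.0)   # -> 2 (the 120-degree LED)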
  • more than one lens can be utilized.
  • two or more lenses can be positioned sequentially between the LED(s) and the outer surface of the light source assembly through which emitted light leaves the light source assembly.
  • the lenses can operate as a zoom lens system but in a reverse sense.
  • a dual tunable-lens system can continuously vary the orientation of the radiation pattern without physically tilting the device.
  • the emitted light can be controlled in two respects, namely, the area and the location of illumination.
  • a fluidic tunable lens positioned in front of the light source is controlled.
  • the emitted light will be adjusted to a narrow or wide radiation pattern by changing the shape of the lens.
  • control signals can be generated to select and power desired LEDs in an LED array.
  • the beam emanating from the light source assembly can be steered. Further, by tracking the location of the camera and the light source assemblies (using the control and processing unit 10), both the radiation pattern and the direction of the light beam can be varied in a manner suited for illuminating arbitrary desired regions of interest.
  • the camera 4 of the camera system 2 in at least some embodiments includes a tunable fluidic lens (for example, a microiluidie lens) system, and can be either a still camera or a video camera for acquiring a sequence of color images.
  • the camera 4 includes an image sensor such as a CMOS sensor for acquiring images in each of a plurality of color channels, such as a red color channel (R), a green color channel (G), and a blue color channel (B).
  • the images can be in the form of image data arrays, each corresponding to a respective one of the color channels and each comprising a plurality of pixels, with each pixel having an associated image value. As described below, some of the color channel images are blurred.
  • Data from the image sensor can be transferred to the control and processing unit 10 to be further processed, and the corresponding images from each color channel can be combined to form a composite color image.
  • the control and processing unit 10 can therefore perform a variety of image processing algorithms, including deblurring algorithms, which operate to enhance or correct any blurred images.
  • because the fluidic lens system of the camera affects different color wavelengths of light non-uniformly, different color wavelengths are focused at different focal depths, resulting in the different R, G, and B color channels having different amounts of blurring.
  • the fluidic lens system can also cause non-uniform blurring in the spatial domain, making objects at the center of the field of view more blurred than objects near the outer borders.
  • image enhancement and correction algorithms are desirable.
  • control and processing unit can be programmed to perform various image processing algorithms, including an image processing technique for correcting for image warping and a technique for correcting blurred images.
  • a warping correction technique can model the image warping taking into account both tangential and radial distortion components; a set of calibration parameters can be determined and the distortion can be inverted, as in the sketch below.
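  • A hedged sketch of that step: the patent only states that a model with radial and tangential distortion terms is calibrated and then inverted. One conventional way to do this is OpenCV's Brown-Conrady distortion model; the calibration numbers and file names below are placeholders, not values from the patent.

    import cv2
    import numpy as np

    camera_matrix = np.array([[800.0, 0.0, 320.0],
                              [0.0, 800.0, 240.0],
                              [0.0, 0.0, 1.0]])
    # (k1, k2, p1, p2, k3): k* are radial terms, p* are tangential terms.
    dist_coeffs = np.array([-0.25, 0.08, 0.001, -0.002, 0.0])

    frame = cv2.imread("acquired_frame.png")              # hypothetical captured image
    undistorted = cv2.undistort(frame, camera_matrix, dist_coeffs)
    cv2.imwrite("undistorted_frame.png", undistorted)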
  • By adjusting the fluidic lens system and/or image sensor of the camera such that one color channel is sharp (at least relative to the others) even though the other color channels are blurred, it is possible to extract the edge information from the sharp image and use it in conjunction with non-edge information of the blurred image to produce a deblurred image. This is possible because the blurred color channels have good shading information but poor edge information. In one embodiment, the fluidic lens system and/or CMOS sensor are adjusted such that the green channel is sharp and the red and blue channels are blurred, i.e., the corresponding images are out of focus and have more blurring distortion.
  • the acquired images from the camera can be processed (such as by the control and processing unit) to extract the edge information from the green image, and use it to correct the blurred images corresponding to each of the red and blue color channels.
  • the images corresponding to each color channel can then be combined to produce a deblurred composite color image.
  • the image corresponding to the green color channel and the image corresponding to the red (or blue) color channel are both decomposed by filtering and downsampling using a filter bank that allows for the separation of the edge and the shading information.
  • Edge information from the sharp green image is then meshed with the non-edge information for the red (or blue) image to form a set of sub-band output coefficients, and these are input to a reconstruction portion of the filter bank which includes upsampling, filtering and combining steps to generate a less blurred red (or blue) image and ultimately a less blurred composite color image.
  • decomposition is performed using wavelet decomposition and wavelet transforms, while in another embodiment contourlet decomposition and contourlet transforms are used. In the case of the wavelet decomposition, the selection of which sub-band output coefficients to use in an image reconstruction can be determined a priori.
  • the sharp image is decomposed to obtain certain sub-band output coefficients which are each assumed to represent edge information (i.e., corresponding to sub-bands having a high frequency component) and the blurred image is decomposed to obtain other sub-band output coefficients corresponding to sub-bands which are assumed to not represent edge information (i.e., those sub-bands not having a high frequency component). In other words, the sub-bands for these sets do not overlap.
  • both the blurred and the sharp images can be decomposed to generate a corresponding sub-band output coefficient for each sub-band in a respective set of sub-bands, and the two sets of sub-bands can be overlapping sets. Then the resultant sub-band output coefficients corresponding to the sharp image can be evaluated to distinguish between strong and weak edges.
  • an ant colony optimization technique can be utilized to determine edge information, although other edge detection techniques can also be employed.
  • a set of sub-band output coefficients corresponding to the blurred image are modified to replace some of the sub-band output coefficients in the set with corresponding sharp image sub-band output coefficients which correspond to edge information for the sharp image.
  • Some of the sub-band output coefficients in the set are not replaced but are retained, and these are sub-band output coefficients which correspond to non-edge information.
  • the resultant modified set of sub-band output coefficients can then be used in the reconstruction process. With respect to the wavelet decomposition and reconstruction, in one embodiment, the following steps are performed:
  • 2. Decompose the images into sub-bands by first filtering and down-sampling the rows of the image, then filtering and down-sampling the columns; 3. Form a set of reconstruction coefficients for the blue image including the B^LL coefficient and the band pass sub-band output coefficients corresponding to the green image (denoted by G^LH, G^HL, and G^HH) instead of using the band pass sub-band output coefficients of the blue image (denoted by B^LH, B^HL, and B^HH); and 4. Depending on the degree of blur, introduce more levels of decomposition by further down-sampling and filtering the B^LL component, so that the green color sub-band output coefficients can replace more of the corresponding blue color sub-band output coefficients (a sketch of these steps follows).
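  • A sketch of steps 2-4 above, assuming the PyWavelets package: wavedec2 performs the separable row/column filtering and down-sampling, and increasing the number of levels corresponds to the additional decomposition stages used for more strongly blurred images. It generalizes the one-level sketch given earlier; the function and variable names are ours, not the patent's.

    import pywt

    def wavelet_mesh(blue_blurred, green_sharp, levels: int = 2, wavelet: str = "haar"):
        # Keep the blue approximation B^LL, replace the blue band pass coefficients
        # (B^LH, B^HL, B^HH at every level) with the corresponding green coefficients,
        # then reconstruct a deblurred blue plane.
        b_coeffs = pywt.wavedec2(blue_blurred, wavelet, level=levels)
        g_coeffs = pywt.wavedec2(green_sharp, wavelet, level=levels)
        meshed = [b_coeffs[0]] + list(g_coeffs[1:])
        return pywt.waverec2(meshed, wavelet)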
  • control and processing unit 10 performs such a wavelet sub-band meshing image processing method using a modified perfect reconstruction filter bank to separate image edge information and shading information.
  • a modified perfect reconstruction filter bank begins with an understanding of a perfect reconstruction filter bank 40. In this case, this filter bank 40 has as its input a signal denoted by B^, which represents a blurred blue image from the blue color channel after passing through a lens, where L0 represents a lens blurring function and B represents an unblurred blue image.
  • Perfect reconstruction filter bank 40 includes a deconstruction portion 42 and a reconstruction portion 44.
  • the filter bank 40 operates to deconstruct signal B^ into a predetermined number of sub-band output coefficients corresponding to sub-bands in a wavelet domain (akin to a frequency domain) using the deconstruction portion 42.
  • the filter bank then operates to reconstruct a version of the signal B^ using these coefficients as input to the reconstruction portion 44, with a so-called "perfect" reconstruction upon appropriate selection of filter characteristics.
  • the decomposition portion 42 of the perfect reconstruction filter bank 40 includes two cascaded levels and decomposes the input signal into four sub-bands. Specifically, at each of the two different levels, decomposition of the signal B^ occurs by filtering (using decomposition filter H0, a low pass filter, or decomposition filter H1, a high pass filter) and downsampling (by a factor of two) to generate a respective sub-band output coefficient for each of the four different sub-bands at the intermediate section 43.
  • B^LH represents a sub-band output coefficient after filtering and down-sampling B^ twice, where L represents a low pass filter and H represents a high pass filter.
  • the resultant sub-band output coefficients for the illustrated filter bank include B^LL, which represents the shading information, and B^LH, B^HL, and B^HH, which represent the edge information.
  • This filter bank acts to at least partially decompose both blurred image B^, corresponding to the blurred image data of the blue color channel, and also image G, corresponding to the sharp image from the green color channel. Sharp image G has also passed through the lens, resulting in what can be denoted G^, but because little blurring occurs, it can be assumed that G^ is approximately the same as G. Blurred image B^ is decomposed over two levels using two low pass filters to extract its corresponding shading information in the form of a sub-band output coefficient denoted by B^LL.
  • the filter bank 50 acts to at least partially decompose sharp image data G (corresponding to the sharp image data of the green color channel) to extract its corresponding edge information in the form of the sub-band output coefficients denoted by G^LH, G^HL, and G^HH.
  • The sub-band output coefficients B^LL, G^LH, G^HL, and G^HH form a reconstruction coefficient set which is input to a reconstruction portion 54, which is the same as the reconstruction portion 44 of the perfect reconstruction filter bank 40.
  • a new deblurred image denoted by A* is then reconstructed by using this reconstruction set by upsampling, filtering, and combining over two levels. Image A* is an improvement over blurred image B^ and maintains the shading information of the blurred blue image but has sharper edges (a 1-D analogue is sketched below).
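  • The following is a numerically explicit 1-D analogue of that modified reconstruction (a sketch, not the patent's actual filters): the Haar analysis/synthesis pair below forms a perfect reconstruction filter bank, and the modification simply substitutes the sharp channel's high-pass sub-band before synthesis. The sample values are hypothetical.

    import numpy as np

    S = np.sqrt(2.0)

    def analyze(x):
        # Haar analysis: filter and down-sample into low-pass (a) and high-pass (d) sub-bands.
        x = x[: len(x) // 2 * 2]
        return (x[0::2] + x[1::2]) / S, (x[0::2] - x[1::2]) / S

    def synthesize(a, d):
        # Haar synthesis: up-sample, filter and combine the two sub-bands.
        y = np.empty(2 * len(a))
        y[0::2] = (a + d) / S
        y[1::2] = (a - d) / S
        return y

    blue_blurred = np.array([1.0, 1.1, 3.0, 3.2, 2.0, 2.1, 0.5, 0.4])
    green_sharp = np.array([1.0, 1.0, 3.0, 3.5, 2.0, 2.0, 0.5, 0.0])

    a_b, _d_b = analyze(blue_blurred)   # keep the blurred channel's low-pass (shading) sub-band
    _a_g, d_g = analyze(green_sharp)    # take the sharp channel's high-pass (edge) sub-band
    deblurred = synthesize(a_b, d_g)    # modified reconstruction, as in FIG. 7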
  • FIG. 6 illustrates a 1-D standard perfect reconstruction filter bank 60, and
  • FIG. 7 illustrates a modified perfect reconstruction filter bank 70.
  • L0(z) models the blurring effects of the lens on the true blue image B(z) as a low pass filter according to: B^(z) = L0(z)B(z).
  • FIG. 6 shows that a standard filter bank can be expressed as the sum of its sub-band reconstructions, B^r(z) = B^rL(z) + B^rH(z), where:
  • B^rL is a reconstructed output in a low frequency sub-band (L) of the blurred image B^.
  • the filter bank of FIG. 6 is modified by replacing B^rH(z) with G^rH(z) from the green image sub-band, such as shown in FIG. 7, so that the estimate becomes A*(z) = B^rL(z) + G^rH(z), where:
  • G^rH is a reconstructed output in a high frequency sub-band (H) of the image G^.
  • E B represents the error in the estimate of image B.
  • L0 represents the blur filter of the blue image.
  • An appropriate frequency response is as follows: L0(e^jω) is approximately 1 if |ω| ≤ π/4, and is approximately 0 if |ω| > π/4.
  • the method should use the blue sub-bands for all frequencies that the lens passes, and use the green sub-bands for all frequencies that the lens attenuates.
  • the method requires a two level decomposition, such as shown in FIGS. 5 and 7. Further:
  • more generally, L0(e^jω) is approximately 1 if |ω| is less than ω0, and is approximately 0 if |ω| is greater than ω0.
  • the decomposition level c can increase until π/2^c is less than or equal to ω0, and ω0 is less than or equal to π/2^(c-1). Choosing a large c means discarding parts of the frequency spectrum which the lens does not actually corrupt. Making c too small will increase the error because in the low band of the frequency spectrum, 0 is not equal to (1 - L0(z)) (a small worked example of this rule follows).
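  • A small worked example of that level-selection rule: the smallest c satisfying π/2^c ≤ ω0 ≤ π/2^(c-1) is c = ceil(log2(π/ω0)). The helper below is our own illustration, not a function from the patent.

    import math

    def decomposition_levels(omega_0: float) -> int:
        # Smallest c with pi/2**c <= omega_0 <= pi/2**(c-1).
        return max(1, math.ceil(math.log2(math.pi / omega_0)))

    decomposition_levels(math.pi / 4)    # -> 2, the two-level case discussed above
    decomposition_levels(math.pi / 16)   # -> 4, for a more aggressive low-pass blur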
  • the resulting expression for the error E_B includes four terms.
  • the first three terms are approximately zero by construction, because each contains a factor that is approximately zero.
  • the last term contains error. To reduce this error, G^rH(z) can replace B^rH(z) as discussed above. For the lower frequency sub-bands, the correlation does poorly and the reconstruction suffers.
  • a modified Wiener filter can be designed to reduce the error in all sub-bands with frequencies below ω0.
  • the filter bank design increases the level of decomposition so that the highest B^ sub-band used in reconstruction has a transition frequency that is arbitrarily close to ω0. In practice, complexity and image size limit the number of levels of decomposition.
  • a contourlet sub-band meshing method is used for deblurring an image in a first color channel, such as a blue (or red) channel, using information from a second sharp color channel, such as a green channel.
  • This method is similar to the wavelet-based meshing method described above in that decomposition and reconstruction are involved.
  • the contourlet sub-band meshing method uses a contourlet transform instead of a wavelet transform and generates an edge map for further analysis of the edges prior to substitution of some green coefficients for blue ones in the reconstruction of the deblurred blue image.
  • the contourlet transform was recently proposed by Do and Vetterli as a directional multi-resolution image representation that can efficiently capture and represent smooth object boundaries in natural images (as discussed in Minh N. Do and Martin Vetterli, "The contourlet transform: An efficient directional multiresolution image representation," IEEE Trans. on Image Processing, vol. 18, pp. 729-739, April 2009, which is hereby incorporated by reference herein). More specifically, the contourlet transform is constructed as an iterated double filter bank including a Laplacian pyramid stage and a directional filter bank stage.
  • the operation can be illustrated with reference to FIGS. 8(a)-(b).
  • a Laplacian pyramid iteratively decomposes a 2-D image into low pass and high pass sub-bands, and directional filter banks are applied to the high pass sub-bands to further decompose the frequency spectrum.
  • the process is iteratively repeated using a downsampled version of the low pass output as input to the next stages.
  • the contourlet transform will decompose the 2-D frequency spectrum into trapezoid-shaped sub-band regions as shown in FIG. 8(b).
  • a two dimensional decomposition portion 90 of a contourlet filter bank is schematically shown in FIG. 9, and is operationally somewhat similar to the operation of the decomposition portion of the filter banks described above.
  • a decomposition of the blue image produces three outputs: the low pass output B^LL, the band pass output B^LH, and the high pass output B^HH.
  • the green image is similarly decomposed to generate G^LH and G^HH, which can be substituted for the B^LH and B^HH coefficients as described below.
  • more levels or stages of iteration can be added to decompose the image into many more sub-bands.
  • the blue image and the green image are both decomposed using a desired number of levels or stages.
  • An ant colony optimization edge detection scheme of this method produces a binary edge map which is then dilated and used to decide which sub-band output coefficients corresponding to the blue channel will be replaced with green sub-band output coefficients and which sub-bands output coefficients will not be replaced.
  • Improved results compared to conventional methods can be obtained because the variable nature of contourlets allows for the natural contours of an image to be more accurately defined. Further, the edge detection scheme allows for the characterization of edges as strong or weak. Consequently, the contourlet sub-meshing method can be performed as follows:
  • this method replaces some of the blurred blue edges with sharp green edges but keeps those blurred blue edges which correspond to weak green edges.
  • This method assumes that a strong green edge indicates a similarly strong true blue edge and chooses a corresponding green coefficient. In areas with a weak green edge, the method assumes that the blurred blue edge better matches the true sharp blue image and chooses the corresponding blue coefficient.
  • Natural images usually adhere to this generalization and thus improved image reconstruction can be achieved.
  • An appropriate dilation radius (such as in the range of 5-25 pixels) is selected; the sketch below illustrates how such a dilated edge map can gate the coefficient substitution.
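  • The following is a hedged stand-in for this step. The patent uses a contourlet transform and an ant colony optimization edge detector; no standard library call for either is assumed here, so the sketch substitutes a simple gradient-magnitude edge map and wavelet detail sub-bands purely to show how a dilated binary edge map decides which blurred-channel coefficients get replaced. All names and thresholds are our own.

    import numpy as np
    import pywt
    from scipy import ndimage

    def gated_mesh(blue_blurred, green_sharp, dilation_radius: int = 10, thresh: float = 0.2):
        green_sharp = np.asarray(green_sharp, dtype=float)
        # Binary edge map from the sharp (green) plane, then dilate it.
        g_edges = ndimage.sobel(green_sharp, axis=0)**2 + ndimage.sobel(green_sharp, axis=1)**2
        edge_map = g_edges > thresh * g_edges.max()
        edge_map = ndimage.binary_dilation(edge_map, iterations=dilation_radius)

        b_ll, (b_lh, b_hl, b_hh) = pywt.dwt2(blue_blurred, "haar")
        _g_ll, (g_lh, g_hl, g_hh) = pywt.dwt2(green_sharp, "haar")
        mask = edge_map[::2, ::2][: b_lh.shape[0], : b_lh.shape[1]]   # down-sampled edge map

        def mesh(b, g):
            # Dilated (strong) edge locations take the green coefficient; elsewhere keep blue.
            return np.where(mask, g, b)

        details = (mesh(b_lh, g_lh), mesh(b_hl, g_hl), mesh(b_hh, g_hh))
        return pywt.idwt2((b_ll, details), "haar")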
  • MSE mean squared error
  • the numerator and the denominator of the expression above will typically have similar magnitude, satisfying the upper bound.
  • the expression above also suggests that in the high frequency sub-bands, the method will reduce the MSE when the green and clean blue coefficients have the same sign and a strong correlation exists between the color channels. In areas where they have different signs, the condition fails, and the poor correlation will result in color bleeding.
  • the blur kernel (LO) does not affect these frequencies, and thu ( ) is approximately 1.
  • B(ωa/2, ωb/2) is approximately equal to B^(ωa/2, ωb/2), and the equations of FIG. 10(g) are not satisfied.
  • The blur kernel does not affect the lower frequencies, and substituting in green sub-bands at these lower frequencies does not reduce the MSE.
  • The contourlet method is an improvement over the wavelet method.
  • As the input image quality decreases, the PSNR (peak signal-to-noise ratio) decreases. With the blue image more degraded, more of the green color channel coefficients can be used. This means that more levels of decomposition are required.
  • The camera system 2 has been described with respect to its use in imaging body cavities as can be employed in MIS. However, other applications for the camera system and color correction algorithm are also contemplated. For example, cameras with extensive zoom control are available today for use in outer space for homeland security purposes.
  • Light sources for enabling the use of such cameras remain an issue in conventional designs. Either the distance or the desire to remain unrecognized prevents the use of a light source directly from the location of the camera.
  • A camera system such as described herein, including light source assemblies and network communications, can resolve this difficulty.
  • A powerful camera can be positioned miles away from the target sites as long as the light source is able to adjust itself according to the movement of the camera.
  • Wireless communication between the light source assemblies and the camera can enable the use of the camera even when very distant from the target, for example, when airborne and mobile.
  • Embodiments of the present invention can also enable various systems that use only one powerful camera that is centrally located with the ability to acquire images from different areas, as long as it includes a light source for each area of interest.
  • Embodiments of the present invention are also suitable for various civilian purposes such as using a powerful light source with variable illumination pattern and steering capability among different groups. Having a powerful light source that can be used by different consumers will allow landing airplanes and docking ships to use the light source of the airport or the seaport for assistance in landing. This will enable the consumers to direct the light to where it is needed rather than have a standard stationary illumination.
  • A system such as the camera system 2 can be used for imaging spaces and regions other than cavities of the human body (or cavities in an animal body), including spaces or regions that are otherwise hard to reach. [0122] Additionally, while much of the above description relates to the camera system 2,
  • the present invention is also intended to encompass other applications or embodiments in which only one or more of these components is present.
  • The present invention is also intended to relate to a light source assembly that is independent of any camera and merely used to provide lighting for a desired target.
  • The present invention is also intended to encompass, for example, a stand-alone light source assembly such as that described above that employs (i) one or more tunable lenses for the purpose of controlling an output light pattern (e.g., amount of light divergence), and/or (ii) controllable LEDs that can be switched on and off to cause variations in light beam direction.
  • Any system with high edge correlation can benefit from the above-described deblurring algorithm.
  • Other future applications include uses in super resolution and video compression.
  • In super resolution, because all three color planes share the same edge information, the resulting color image has much sharper edges than with traditional techniques.
  • The edge information is redundant between color planes, such that one can use the edge information from one color plane as edge information for all three color planes. This redundancy can be used to save on the number of bits required without sacrificing much in terms of quality.
  • Another example is in systems where one sharp image sensor can improve the quality of an inexpensive sensor, such as an infrared sensor, which has high edge information.
  • This weapon could be located far from the fire zone or even airborne and could be used by multiple consumers.
  • The weapon can be locked onto a target by a camera carried by the consumer as long as its spatial orientation is registered and communication to the weapon is available. Motion estimation and distance to the target can be calculated in the processor near the weapon.
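As referenced above, the coefficient substitution step of the contourlet sub-meshing method can be illustrated with a minimal sketch. This is not the patented implementation: it assumes the blue and green high-pass sub-bands have already been computed by some directional (contourlet-style) decomposition, and it lets any binary strong-edge map for the green channel stand in for the ant colony optimization detector. The function name, the use of NumPy/SciPy, and the default dilation radius are choices made only for this sketch.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def mesh_highpass_subband(blue_hi, green_hi, green_edge_map, dilation_radius=10):
    """Substitute sharp green high-pass coefficients for blurred blue ones
    wherever the dilated green edge map marks a strong edge.

    blue_hi, green_hi : high-pass sub-band coefficient arrays of equal shape
    green_edge_map    : binary strong-edge map for the same sub-band (already
                        resampled to the sub-band resolution)
    dilation_radius   : pixels; the text suggests a radius in the 5-25 range
    """
    # Disk-shaped structuring element used to dilate the binary edge map.
    r = dilation_radius
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    disk = (x * x + y * y) <= r * r

    mask = binary_dilation(green_edge_map.astype(bool), structure=disk)

    # Strong (dilated) green edge -> take the green coefficient;
    # weak or no green edge       -> keep the blurred blue coefficient.
    return np.where(mask, green_hi, blue_hi)
```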

Abstract

The present invention relates to a camera system suitable for use in minimally invasive surgery (MIS), among other applications. In at least one embodiment, the camera system includes an autonomous miniature camera, a light source assembly providing features such as steerable illumination and a variable radiation angle, and a control and processing unit for processing images acquired by the camera to generate improved images having reduced blurring using a deblurring algorithm.

Description

CAMERA SYSTEM WITH AUTONOMOUS MINIATURE CAMERA AND LIGHT SOURCE ASSEMBLY AND METHOD FOR IMAGE ENHANCEMENT
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. provisional patent application no. 61/105,542 entitled "Camera System With Autonomous Miniature Camera And Light Source Assembly And Method For Image Enhancement Using Color Correlation" filed on October 15, 2008, which is hereby incorporated by reference herein.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
FIELD OF THE INVENTION
[0002] The present invention relates to camera systems and methods and, more particularly, to systems and methods of lighting that are employed as part of (or in combination with) such camera systems and methods in any of a variety of applications and environments including, for example, medical applications and environments.
BACKGROUND OF THE INVENTION
[0003] Minimally invasive surgery (MIS) is a modern surgical technique in which operations in the abdomen and thorax or elsewhere are performed through small incisions in the skin. Since 1981, when the first laparoscopic cholecystectomy (gallbladder removal) was performed, this surgical field has greatly improved due to various technological advances. Today this operation typically uses 3-6 abdominal skin incisions (each approximately 5-12 mm in length), through which optical fibers, cameras, and long operating instruments are inserted into the abdominal cavity. The abdomen is usually insufflated (inflated) with carbon dioxide gas to create a working and viewing space and the operation is performed using these long instruments through the incisions. Images acquired by way of one or more camera devices inserted into the working and viewing space are viewed on a TV monitor beside the patient as the surgery progresses. [0004] One currently available camera for MIS is composed of a 12-15-inch long tube containing several lenses and optical fibers (the laparoscope). The front section of the laparoscope enters the abdomen through a portal called a "trocar" and the back end is connected to a power source, light source and supporting hardware through large cables protruding from the rear of the scope. The internal optical fibers convey xenon or halogen light via an external light source into the internal cavity and the lenses transfer the images from within the cavity to an externally connected portable video camera. Disadvantages of this imaging device include its two-dimensional view, lack of sufficient optical zoom, inability to adjust its angle of view or angle of illumination without a new incision, inability to control light intensity, and restrictions on its movement due to its large size and external cables. This is especially significant in this era of MIS in which it is desired to minimize the number and extent of incisions, and wherein new technologies are being developed such as Natural Orifice Transluminal Endoscopic Surgery (NOTES) which use no abdominal incisions at all, but instead utilize a single trocar inserted into the mouth or vagina to carry all surgical instruments as well as the camera and light source.
[0005] Lighting has been a challenge for laparoscopic cameras because they operate in an environment of extreme darkness where accessibility to sources of illumination is limited. Further, new cameras are being utilized which possess features such as a flexible field of view or optical zoom capability. Although such camera features are extremely desirable insofar as they allow surgeons to obtain both the global and the detailed view of the patient's abdomen as well as a large working space to practice surgery, they result in stricter requirements on lighting.
[0006] More particularly, current light sources including halogen and LED lamps are designed to have a fixed radiation pattern characterized by a full-width half-intensity angle. For those cameras for which zoom is an option, when a surgeon wants to see the details of an organ in a zoom-in mode, the angle of illumination may be much greater than the camera's field-of-view and thus there is often insufficient lighting for the desired image, resulting in poor quality of the acquired image. On the other hand, when a surgeon operates the camera in zoom-out mode to achieve a wide field-of-view, the limited illumination angle of the light source may not cover the entire desired image area, leading to rapid fading of the acquired image towards the edge of the image field. In addition, the center portion of the acquired image may be over illuminated and saturated, resulting in a penalty in resolution and dynamic range.
[0007] For at least the above-described reasons, therefore, it would be advantageous if an improved system and method for illumination capable of being employed along with cameras as are used in a variety of applications including, for example, MIS applications, could be developed. [0008] Further, images corresponding to one or more color channels of the camera can be blurred, and it would be advantageous if an improved deblurring method could be developed to provide enhanced image quality.
SUMMARY OF THE INVENTION
[0009] The present inventors have recognized that such an improved system and method for illumination can be provided through the use of one or more tunable lenses as part of the light source. Tunable lenses refer to lenses with variable focal distances, and one type of a tunable lens is a fluidic lens. By utilizing such tunable lenses, which in at least some embodiments are fluidic (or microfluidic) lenses, improved lighting particularly involving a tuned radiation pattern and/or a controlled beam angle can be achieved. [0010] In at least one embodiment, the present invention relates to a camera system suitable for use in minimally invasive surgery (MIS). The camera system includes an autonomous miniature camera, a light source assembly providing features such as steerable illumination, a variable radiation angle and auto-controlled light intensity, and a control and processing unit for processing images acquired by the camera to generate improved images having reduced blurring using a deblurring algorithm which correlates information between colors. [0011] In at least one additional embodiment, the present invention relates to a camera system for MIS having one or more autonomous miniature cameras enabling simultaneous multi-angle viewing and a three dimensional (3D) view, and a light source assembly having a light source with a tunable radiation pattern and dynamic beam steering. The camera is insertable into an abdominal cavity through the standard portal of the procedure, either through a trocar or via a natural orifice. The light source assembly is also insertable through the same portal. The camera system also preferably includes a control and processing unit for independently controlling the camera (pan, tilt and zoom) and the light source assembly. Wireless network links enable communication between these system components and the control and processing unit, eliminating bulky cables. The control and processing unit receives location signals, transmits control signals, and receives acquired image data. The control and processing unit processes the acquired image data to provide high quality images to the surgeon.
[0012] In such an embodiment, the tunable and steerable nature of the light source assembly allows light to be efficiently focused primarily on the camera field of view, thereby optimizing the energy directed to the region of interest. The autonomous nature of the camera and light source assembly (in terms of location and movement) allows the camera to be freely movable within the abdominal cavity to allow multiple angles of view of a region of interest (such as target organs), with the light source assembly capable of being adjusted accordingly. In this manner, although camera motion tracking is required, the illumination can be adjusted to reduce glare and provide optimal lighting efficiency. [0013] In still another embodiment of the present invention, the camera includes a fluidic lens system which causes at least one color plane (channel) of acquired images to appear sharp and the other color planes (channels) to appear blurred. Therefore, another aspect of the invention provides a deblurring algorithm for correcting these blurred color planes of acquired images from the camera. The algorithm uses an adapted perfect reconstruction filter bank that uses high frequency sub-bands of sharp color planes to improve blurred color planes. Refinements can be made by adding cascades and a filter to the system based on the channel characteristics. Blurred color planes have good shading information but poor edge information. The filter bank structure allows the separation of the edge and shading information. During reconstruction, the edge information from the blurred color plane is replaced by the edge information from the sharp color plane.
[0014] Further, in at least one embodiment, the present invention relates to a camera system configured to acquire image information representative of subject matter within a selected field of view. The camera system includes a camera that receives reflected light and based thereon generates the image information, and a light source assembly operated in conjunction with the camera so as to provide illumination, the light source assembly including a first light source and a first tunable lens. At least some of the illumination output by the light source assembly is received back at the camera as the reflected light. Additionally, the tunable lens is adjustable to vary the illumination output by the light source assembly, whereby the illumination can be varied to substantially match the selected field of view. [0015] Further, in at least one additional embodiment, the present invention relates to a method of operating a camera system to acquire image information representative of subject matter within a selected field of view. The method includes providing a camera and a light source assembly, where the light source assembly includes a light source and a tunable lens. The method also includes controlling the tunable lens, and transmitting the light beam from the light source assembly. The method additionally includes receiving reflected light at the camera, where at least some of the light of the light beam transmitted from the light source assembly is included as part of the reflected light, and where the image information is based at least in part upon the reflected light. The tunable lens is controlled to vary the light beam output by the light source assembly, whereby the light beam can be varied to substantially match the selected field of view.
[0016] Further, in another aspect of the present invention, a fluidic lens camera system including an image sensor is used for acquiring image data of a scene within a field of view of the camera. Image data is provided from each of a plurality of color channels of the image sensor, such as red (R), green (G), and blue (B) color channels. A control and processing unit is operable to receive and process the acquired image data in accordance with image processing methods described below to correct the blurred image data. Briefly, wavelet or contourlet transforms can be used to decompose the image data corresponding to the different color channels, mesh the edge information from a sharp color channel with the non-edge information from a blurred color channel to form a set of sub-band output coefficients or reconstruction coefficients in the wavelet or contourlet domain, and then reconstruct a deblurred image using these sub-band output coefficients.
[0017] In one embodiment, the decomposition and reconstruction steps use a modified perfect reconstruction filter bank and wavelet transforms. In another embodiment, the decomposition and reconstruction steps use a contourlet filter bank, contourlet transforms, and an ant colony optimization method for extracting relevant edge information. [0018] In another embodiment, a method for deblurring a blurred first color image corresponding to a first color channel of a camera that also produces a sharp second color image corresponding to a second color channel of the camera is provided, wherein the first and the second color images each include a plurality of pixels with each pixel having an associated respective value. The method includes decomposing the first color image by filtering and upsampling to generate a first set of one or more first sub-band output coefficients, wherein each first sub-band output coefficient corresponds to a respective sub-band in a selected one of a wavelet and a contourlet domain, and decomposing the second color image by filtering and upsampling to generate a second set of second sub-band output coefficients, wherein each second sub-band output coefficient corresponds to a respective sub-band in the selected domain. The method further includes selecting those second sub-band output coefficients that represent edge information, each of the selected second sub-band output coefficients corresponding to a respective selected sub-band, with the selected sub-bands together defining an edge sub-band set, and preparing a third set of sub-band output coefficients which includes the selected second sub-band output coefficients representing edge information and at least one first sub-band output coefficient corresponding to a sub-band other than those sub-bands in the edge sub-band set. A deblurred first color image is reconstructed by upsampling and filtering using the third set of sub-band output coefficients as input.
[0019] Additionally, in at least some further embodiments, the present invention relates to a light source assembly. The light source assembly includes an output port, a light source, and a tunable lens positioned between the first light source and the output port. The light source generates light that passes through the tunable lens and then exits the light source assembly via the output port as output light, and the tunable lens is adjustable to vary a characteristic of the output light exiting the light source assembly via the output port. In at least some of the above embodiments, the light source includes an array of light emitting diodes (LEDs) that each can be controlled to be turned on or turned off and, based upon such control, a direction of light exiting the light source assembly can be varied.
BRIEF DESCRIPTION OF THE DRAWINGS
[0020] FIG. 1 is a schematic view of an exemplary camera system in accordance with at least one embodiment of the present invention; [0021] FIGS. 2A-2C respectively show three different schematic views of an exemplary light source assembly using an LED and a microfluidic lens, illustrating various tunable radiation patterns achievable by adjusting the lens;
[0022] FIGS. 3A-3C respectively show three different schematic views of another embodiment of a light source assembly using LED arrays and a microfluidic lens, illustrating dynamic beam steering by selection and energization of one or more LEDs in the array; [0023] FIG. 4 illustrates a perfect reconstruction filter bank for decomposing and reconstructing an image;
[0024] FIG. 5 illustrates a modified perfect reconstruction filter bank; [0025] FIG. 6 illustrates a one dimensional perfect reconstruction filter bank; [0026] FIG. 7 illustrates a modified one dimensional perfect reconstruction filter bank;
[0027] FIG. 8 illustrates a contourlet transform and the resulting frequency division in a contourlet frequency domain;
[0028] FIG. 9 illustrates an exemplary two dimensional contourlet filter bank; and [0029] FIGS. 10(a)-(g) illustrate various conditions for a contourlet filter bank.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
[0030] Referring to FIG. 1, an exemplary embodiment of a camera system 2 includes a miniature camera 4, two light source assemblies 6 and 8, respectively, and a control and processing unit 10 which includes a control unit 11 and a PZT (piezoelectric transducer) operator control unit 12 that are coupled to and in communication with one another. As shown schematically, the miniature camera 4 and light source assemblies 6, 8 are placed within an abdominal cavity 1 (or other body cavity or the like) of a patient, while the control and processing unit 10 remains exterior to the body of the patient. In at least one embodiment, the camera is insertable into an abdominal cavity through a 20 mm incision, such as on the abdominal wall, and the light source assembly is also insertable through an incision. In the present embodiment, it is envisioned that the miniature camera 4 is physically separated from each of the light source assemblies 6, 8, each of which is also physically separate from one another. Nevertheless, as represented by a dashed line 19, in other embodiments, it is possible for the miniature camera 4 and the light source assemblies 6, 8 (or, alternatively, one of those light source assemblies), to be physically connected or attached, or even to be housed within the same housing.
[0031] The camera system 2 is adjusted such that a field of view 14 of the camera 4 encompasses a desired region of interest (which in this example is a quadrilateral) 16 within the abdominal cavity 1, and thus encompasses subject matter within that desired region of interest (e.g., a particular organ or portion of an organ). As discussed further below, the light source assemblies 6, 8 are respectively adjusted to generate light beams 11, 13, respectively, which provide efficient illumination to the field of view 14 so as to effectively illuminate the desired region of interest 16 and subject matter contained therein. Some or all of the light directed to the desired region of interest 16 and generally to the field of view 14 is reflected off of the subject matter located there and consequently received by the miniature camera 4 as image information. In at least some embodiments, an effort is made to control the light provided by the light source assemblies 6, 8 so that the region illuminated by the light exactly corresponds to (or falls upon), or substantially corresponds to, the field of view 14. Given the adjustability of the light source assemblies in such manner, in at least some embodiments the light source assemblies (or light sources) can be referred to as "smart light source assemblies" or "smart light sources".
[0032] As will be described in more detail below with respect to FIGS. 2A-2C and 3A-3C, in the present embodiment, the miniature camera 4 includes a light sensor and a fluidic lens (e.g., microfluidic lens) system. The light sensor can take a variety of forms, for example, that of a complementary metal-oxide-semiconductor (CMOS) or that of a charge coupled device (CCD) sensor. In the present embodiment, the fluidic lens system of the miniature camera 4 affects different color wavelengths non-uniformly. Although in the present embodiment a fluidic lens system is employed, in other embodiments other types of lens systems including other tunable lens systems can be employed instead of (or in addition to) a fluidic lens system. Typically the lens system employed will afford the camera with at least some zooming capability. Further, the control unit 11 and piezoelectric (PZT) operator control unit 12 can take a variety of forms depending upon the embodiment. In one embodiment, each of the control units 11, 12 includes a processing device (e.g., a microprocessor) and a memory device, among other components. Also, while the control and processing unit 10 is shown to include both the control unit 11 and the PZT operator control unit 12, in other embodiments the functioning of those two control units can be performed by a single device.
[0033] During operation, the miniature camera 4 of the camera system 2 acquires images (e.g., acquired image data) of the desired region of interest 16 within the field of view 14, which are transmitted to the control and processing unit 10. In one embodiment, the miniature camera 4 transmits, over 3 channels, two video images and additionally the optical zoom state. The control unit 11 (e.g., a master computer) of the control and processing unit 10 processes the received data, and can further transmit to the light source assemblies 6, 8 any changes that should be made in their operation so as to adjust the intensity and/or direction/angle of the light emitted from the light source assemblies. Among other things, the control and processing unit 10 (particularly the control unit 11) provides color corrections to the acquired images using a deblurring algorithm, as more fully described below. [0034] Although shown with the single miniature camera 4 and the two light source assemblies 6, 8, the system can in other embodiments include any arbitrary number of light source assemblies and any arbitrary number of cameras. Although only a single light source assembly can be used in some embodiments, the use of the multiple light sources 6, 8 so as to provide illumination from different angles in the present embodiment does afford greater efficiencies. Further, the use of multiple light sources from different angles can make possible additional types of image processing. For example, different lighting angles and shadows can be processed to create enhanced three-dimensional images and anatomical landmarks identification.
[0035] Still referring to FIG. 1, in the present embodiment a two-way wireless communication network having wireless communication links 18 is provided that allows for the transmission of control and/or monitoring signals between/among components of the system 2. The wireless communication links 18 among other things allow for communication between the control unit 11 and the miniature camera 4. More particularly in this regard, the wireless communication links 18 allow for signals to be transmitted from the miniature camera 4 back to the control unit 11, particularly signals representative of video images from inside the abdominal cavity as detected by the miniature camera. Also, the wireless communication links 18 allow for control signals to be provided from the control and processing unit 10 to the miniature camera 4 for the purpose of governing the pan, tilt, and zoom of the camera lens. Optical zoom information and any camera motion information can be further taken into account by the control unit 11 in its processing of image data. [0036] Additionally, the wireless communication links 18 allow for communication between the control unit 11 and the light source assemblies 6, 8 so as to allow for control over the on/off status, brightness, beam orientation and/or other operational characteristics of the light source assemblies. In at least some embodiments, the control and processing unit 10 receives video imaging and zoom data, processes it, and according to the change of optical zoom and motion estimation, transmits appropriate control signals to the lens and the light source assemblies.
[0037] Although not shown, it will be understood that the control and processing unit 10 as well as the components internal to the abdominal cavity 1 (or other body cavity or other space), such as the miniature camera 4 and the light source assemblies 6, 8, can each include a wireless transceiver or other conventional wireless communications hardware allowing for the establishment of the wireless communication links 18. The use of the wireless communication links 18 for the purpose of allowing for communication of control and monitoring signals between the control and processing unit 10 (and particularly the control unit 11) and each of the miniature camera 4 and light source assemblies 6, 8 eliminates the need for cables extending through the wall of the abdomen 1 for such purpose. [0038] In at least some embodiments, the control and processing unit 10 coordinates all wireless communications between the components of the camera system 2. Further, in at least some embodiments, some or all wireless communications are achieved by way of conventional signal processing algorithms and networking protocols. For example, in one embodiment, the miniature camera 4 transmits to the control and processing unit 10 in three channels the two video images (for example, the G and B channels) and the optical zoom state. Further, for example, in one embodiment, the wireless communication links 18 employ Wi-Fi or Bluetooth communication protocols/technologies. Additionally, depending on video resolution, frame rate, the distance between the control and processing unit 10 and the miniature camera 4 (or cameras), as well as the number of cameras in circumstances where more than one such camera is used, it can become desirable to compress the video bit streams. Possible compression algorithms include, for example, the simple image coding JPEG standard, the high-end image coding JPEG2000 standard, and the high-end video coding H.264 standard, as well as Scalable Video Coding. The algorithm choice depends on other factors such as algorithm complexity, overall delay, and compression artifacts. [0039] Additionally, it will be understood that the portions of the camera system 2 within the abdominal cavity 1 (again, the miniature camera 4 and the light source assemblies 6, 8) are autonomous and freely movable within the abdominal cavity, which among other things allows for the desired region of interest 16 to be viewed from multiple directions/angles. Allowing for multiple angles of views can be desirable in a variety of circumstances, for example, when the desired region of interest 16 includes multiple target organs, or target organs with complicated surface shapes. In at least one embodiment, an operator (e.g., a physician) using the PZT operator control unit 12 is able to provide instructions to the control and processing unit 10 (the PZT operator control unit can include input devices allowing for the operator to provide such instructions), based upon which the control and processing unit 10 in turn causes changes in the position(s) of one or both of the light source assemblies 6, 8 in relation to (and, in some cases, so as to become closer to) the desired region of interest 16. Such modifications in the position(s) of the light source assemblies 6, 8 can in some circumstances save energy and provide a longer illumination period notwithstanding battery powering of the light source assemblies.
[0040] In at least some embodiments, the particular positions of the light source assemblies 6, 8 and the miniature camera 4 are set up at a particular time during an operation (e.g., by insertion into the abdominal cavity 1 by a physician early on during an operation) and then those positions are fixed during the remainder of the operation. In such cases, the relative locations of the light source assemblies 6, 8 and the miniature camera 4 can be computed at an initial set up stage. The initial parameters for the light beams 11, 13 provided by the light source assemblies 6, 8 (e.g., angles and beam direction) as well as for the miniature camera 4 (e.g., viewing angles with respect to the light beams) are also computed. In some embodiments, additional movements of the light source assemblies 6, 8 and/or the miniature camera 4 can also be tracked. In some such embodiments, to calculate the light angle modification (or a desired light angle modification), ongoing information regarding the spatial locations of the miniature camera 4, light source assemblies 6, 8, and in at least some cases also the control and processing unit 10 is obtained by triangulation of the wireless signals captured from each component using any of various conventional triangulation techniques. Additionally, as discussed in further detail below, in at least some embodiments, the illumination emitted from the light source assemblies 6, 8 can further be adjusted as necessary.
[0041] Given the particular features of the miniature camera 4 as described above and further below, the miniature camera provides several advantageous features. In particular, in the present embodiment the miniature camera 4 is small in size, has powerful optical zoom capabilities, and has few or no moving parts while zooming. The powerful optical zooming afforded by the miniature camera 4 allows the miniature camera to be distant from the desired region of interest 16. Consequently, while if the light source assemblies 6, 8 were physically connected to the miniature camera 4, a significant amount of illumination would potentially be wasted (in that it would not reach the desired region of interest 16), in the present embodiment this need not occur. Rather, instead of placing the light source assemblies 6, 8 on (or otherwise physically connecting those assemblies to) the miniature camera 4, in the present embodiment the light source assemblies are separate from and can be moved independently of the miniature camera 4.
[0042] Turning now to FIGS. 2A-2C, one of the light source assemblies 6, 8, namely, the light source assembly 6, is shown schematically in more detail (in cross-section) in three different operational circumstances. It will be understood that, in the present embodiment, each of the light source assemblies 6, 8 is identical in construction and operation, albeit in other embodiments the various light sources need not be identical. As shown in each of FIGS. 2A-2C, the light source assembly 6 includes a tunable lens that in the present embodiment is a microfluidic lens 20 (albeit in other embodiments other types of tunable lenses can be employed as well), and additionally a light emitting diode (LED) 22. The LED 22 is small (typically about 1 mm² including package) and can be a widely available, off the shelf component that is of low cost, and that can easily be mounted on a printed circuit board (also not shown) so as to form a surface-mount LED or the like. Although not shown, the light source assembly 6 further should be understood as including a power source such as a battery. Having an integrated power source to supply power to the light source assembly 6 is advantageous because, by using such a structure, there is no need to employ cables going through the abdominal wall into the abdominal cavity 1 (albeit this limits the amount of power available and further increases the need for focusing the light to the desired region of interest 16 rather than illuminating a large unrecorded field beyond the desired region of interest).
[0043] In the present embodiment, the microfluidic lens 20 (and, indeed, any microfluidic lens or lenses employed in the camera 4 as well) can be a microfluidic lens such as that described in U.S. Patent Application No. 11/683,141, which was filed on March 7, 2007 and is entitled "Fluidic Adaptive Lens Systems and Methods" (and which issued as U.S. Patent No. 7,453,646 on November 18, 2008), which is hereby incorporated by reference herein. The microfluidic lens 20 is placed a few millimeters away from the LED 22 (although this separation distance is not shown in FIGS. 2A-2C) and has a tunable focal length and appropriate aperture typically several times the dimension of the LED die. For example, if the LED die is 1 mm x 1 mm (i.e., 1 mm²), the clear aperture of the tunable lens can be 3 mm by 3 mm (i.e., 9 mm²) or greater. By adjusting the focal length of the microfluidic lens 20, the radiation pattern (e.g., radiation angle) of each light source can be dynamically adjusted to obtain optimal illumination to match varying fields of view of the camera. That is, the light beam 11 emanating from the light source assembly 6 can be varied in this manner. [0044] To illustrate this effect, FIGS. 2A, 2B and 2C in particular respectively show medium, narrow and wide radiation patterns (particularly the extent of an angle ΘFWHM corresponding to full width at half maximum intensity) corresponding to three versions of the light beam 11 that are generated by the same light source assembly 6 when the microfluidic lens 20 is tuned to three different settings. More particularly as shown, the narrow and medium radiation patterns are achieved when the microfluidic lens 20 is tuned so as to be highly-convex and moderately convex, respectively, while the wide radiation pattern is achieved when the microfluidic lens is tuned so as to be concave. In the present embodiment, the distance between the LED 22 and the microfluidic lens 20 in the light source assembly 6 is close enough (e.g., 3-5 mm) so that, even for the shortest focal distance of the lens, the light of the LED is "defocused" rather than being "focused". The defocused LED light radiates at a divergent angle.
[0045] Notwithstanding the exemplary radiation patterns shown in FIGS. 2A-2C, assuming that the microfluidic lens 20 has an ultrawide tuning range, the radiation angle of illumination can be controlled to vary within a wide range of approximately 15 degrees to 140 degrees. The optimal illumination condition is achieved when the radiation pattern produced by the light source assembly 6 is about 20% greater than the field-of-view 14 of the miniature camera 4. For a 4x optical zoom camera with a field-of-view of 25 to 100 degrees, the corresponding desired illumination angles range from 30 to 120 degrees, well within the capability of the above-described light source assembly 6. As a result, for any chosen field of view of a miniaturized camera, the subject matter within the field of view will be uniformly illuminated for the best image quality.
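As a rough illustration of the relationship just described, the lens control logic could derive its target radiation angle directly from the camera's current field of view. The sketch below simply applies the 20% margin and the approximately 15-140 degree tuning range stated above; the function name and parameter names are placeholders, not part of the described system.

```python
def illumination_angle_deg(camera_fov_deg, margin=0.20, lens_range_deg=(15.0, 140.0)):
    """Target radiation angle for the tunable lens: roughly 20% wider than the
    camera field of view, clamped to the lens's tunable range."""
    lo, hi = lens_range_deg
    return min(max(camera_fov_deg * (1.0 + margin), lo), hi)

# For the 4x optical zoom example (field of view 25-100 degrees), this gives
# illumination angles of roughly 30 and 120 degrees, matching the text.
print(illumination_angle_deg(25.0), illumination_angle_deg(100.0))
```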
[0046] Turning to FIGS. 3A-3C, another embodiment of a light source assembly 26 is shown in simplified schematic form in three different views (again in cross-section). The light source assembly 26 can be used in place of either of the light source assemblies 6, 8 discussed above. The light source assembly 26 in particular allows for steering of an illumination beam 28 emanating from the light source assembly, so as to allow for efficient illumination of various different desired regions of interest (e.g., the region of interest 16) without requiring mechanical movement of the light source so that it can shine upon those different regions. In some circumstances, such steering of the illumination beam 28 also allows for efficient illumination of a given desired region of interest from different directions/angles.
[0047] While in this embodiment the light source assembly 26 (like the light source assembly 6 of FIGS. 2A-2C) again includes both a tunable fluidic (again in this embodiment, microfluidic) lens 24 and a LED light source, in this embodiment the LED light source includes not merely one LED but rather includes an array of multiple LEDs 22 adjacent the lens 24. Depending upon the embodiment, the array 22 can take a variety of forms, employ any arbitrary number of LEDs, and/or employ LEDs of any arbitrary color(s). In the present exemplary embodiment, the array 22 in particular has a center LED 30 aligned with a central axis 32 of the lens 24, and six additional LEDs 34 arranged along a concentric ring extending around the center LED 30 in a manner where each of the additional LEDs is spaced apart by the same distance from each of its three neighboring LEDs (that is, from the center LED and from each of the neighboring LEDs along the concentric ring). Thus, in the cross-sectional views provided by FIGS. 3A-3C, the center LED 30 and two of the six additional LEDs are visible (it will be understood that the remaining four additional LEDs would be fore or aft of the three visible LEDs, that is, into or out of the page when viewing FIGS. 3A-3C). [0048] As further shown in FIGS. 3A-3C, by selectively turning on or off the different ones of the LEDs so that different LEDs are illuminated, the same light source assembly 26 given the same tuning of the lens 24 is capable of producing light beams 28 that have different orientations. More particularly, as shown in FIG. 3B, when the center LED 30 is turned on and the additional LEDs 34 are all shut off, the light beam 28 is aligned with (that is, centered about) the central axis 32 of the lens 24. By comparison, as shown in FIGS. 3A and 3C, given proper design of the lens aperture and the f-number of the lens 24, when the respective additional LEDs 34 on the concentric ring are energized (and the center LED 30 is shut off), the light beams 28 produced are light cones that are angularly offset from the central axis 32 (by amounts ΘOFF). More particularly, FIG. 3A illustrates a circumstance where the additional LED 34 to the left of the center LED 30 is energized, while FIG. 3C illustrates a circumstance where the additional LED to the right of the center LED is energized. The amount of angular offset in any given circumstance is determined by the particular spacing of the respective additional LEDs 34 (and also possibly, to some extent, by the tuning of the lens 24). As discussed with respect to FIGS. 2A-2C, the particular angles of divergence ΘFWHM of the light beams 28 are also determined by the tuning of the lens 24. That is, the light beams 28 are of a tunable divergent angle.
[0049] Given that different orientations of the light beams 28 can be achieved by energizing different ones of the LEDs 30, 34, it will be understood that beam steering can be achieved in the present embodiment without mechanical movement by selecting (and varying) which one(s) of the LEDs 30, 34 on or within the concentric ring are powered at any given time. While in the above example, only a single one of the LEDs 30, 34 is energized at a given time, in other embodiments multiple LEDs can also be powered simultaneously to create multiple beams along different directions, which can be desirable depending upon the particular desired regions of interest that are to be illuminated, or to provide illumination suitable for supporting multiple cameras. As already noted, any of a variety of LED arrays having a variety of formations with any arbitrary number of LEDs can be utilized depending upon the embodiment. In other embodiments, additional LEDs and/or additional tunable lenses can be added to the assembly to achieve more finely-graded or quasi-continuous beam steering. That is, as the number of LEDs increases and/or the spacing between LEDs decreases, there is an increased ability to achieve finer steering of the light beams that are generated. Such quasi-continuous designs are more efficient and can allow for energy saving. Instead of illuminating a field that is composed of the margins between two LED areas by these two LEDs, the quasi-continuous design will illuminate the same area using only one LED.
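One simple way such selection logic might be realized in software is sketched below. It assumes the layout described above (one center LED plus six LEDs evenly spaced on a ring) and picks the single LED whose direction best matches a requested beam offset; the indexing scheme, the 5 degree threshold, and the function name are illustrative assumptions rather than details taken from the description.

```python
# Hypothetical indexing: 0 = center LED, 1-6 = ring LEDs at 60 degree spacing.
RING_AZIMUTHS_DEG = [k * 60.0 for k in range(6)]

def select_led(offset_deg, azimuth_deg, min_offset_deg=5.0):
    """Choose which LED to energize for a requested beam offset.

    offset_deg  : desired angular offset of the beam from the lens axis
    azimuth_deg : direction of that offset around the axis (0-360 degrees)
    """
    if offset_deg < min_offset_deg:
        return 0  # center LED: beam stays centered on the lens axis (FIG. 3B)

    # Otherwise pick the ring LED whose azimuth is closest to the request.
    def angular_distance(a, b):
        return abs((a - b + 180.0) % 360.0 - 180.0)

    distances = [angular_distance(azimuth_deg, a) for a in RING_AZIMUTHS_DEG]
    return 1 + distances.index(min(distances))
```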
[0050] Further, depending upon the embodiment, more than one lens can be utilized. For example, two or more lenses can be positioned sequentially between the LED(s) and the outer surface of the light source assembly through which emitted light leaves the light source assembly. Additionally, in the case of two or more lenses, the lenses can operate as a zoom lens system but in a reverse sense. Like a zoom lens capable of continuously varying its magnification factor, a dual tunable-lens system can continuously vary the orientation of the radiation pattern without physically tilting the device. Through the use of multiple lenses in addition to multiple LEDs (particularly a LED array with a large number of LEDs), truly continuous beam steering can be achieved. Again, such truly continuous beam steering can be more energy-efficient in providing desired illumination.
[0051] To summarize, as illustrated by FIGS. 2A-3C, in at least some embodiments of the present invention the emitted light can be controlled in two respects, namely, the area and the location of illumination. In at least some embodiments, in order to control the area of illumination, a fluidic tunable lens positioned in front of the light source is controlled. Depending on the optical zoom used by the camera, the emitted light will be adjusted to a narrow or wide radiation pattern by changing the shape of the lens. Further, in at least some embodiments, in order to control the location of the center of the beam of emitted light relative to a center axis of the lens, control signals can be generated to select and power desired LEDs in an LED array. By appropriately turning on and off different LEDs of the LED array, and/or appropriately tuning one or more microfluidic lenses, the beam emanating from the light source assembly can be steered. Further, by tracking the location of the camera and the light source assemblies (using the control and processing unit 10), both the radiation pattern and the direction of the light beam can be varied in a manner suited for illuminating arbitrary desired regions of interest.
[0052] As mentioned above, the camera 4 of the camera system 2 in at least some embodiments includes a tunable fluidic lens (for example, a microfluidic lens) system, and can be either a still camera or a video camera for acquiring a sequence of color images. The camera 4 includes an image sensor such as a CMOS sensor for acquiring images in each of a plurality of color channels, such as a red color channel (R), a green color channel (G), and a blue color channel (B). The images can be in the form of image data arrays, each corresponding to a respective one of the color channels and each comprising a plurality of pixels, with each pixel having an associated image value. As described below, some of the color channel images are blurred. Data from the image sensor can be transferred to the control and processing unit 10 to be further processed, and the corresponding images from each color channel can be combined to form a composite color image. The control and processing unit 10 can therefore perform a variety of image processing algorithms, including deblurring algorithms, which operate to enhance or correct any blurred images. [0053] In particular, because the fluidic lens system of the camera affects different color wavelengths of light non-uniformly, different color wavelengths are focused at different focal depths resulting in the different R, G, B color channels having different amounts of blurring. The fluidic lens system can also cause non-uniform blurring in the spatial domain, making objects at the center of the field of view more blurred than objects near the outer borders. Thus, image enhancement and correction algorithms are desirable.
[0054] Thus, the control and processing unit can be programmed to perform various image processing algorithms, including an image processing technique for correcting for image warping and a technique for correcting blurred images. A warping correction technique can model the image warping taking into account both tangential and radial distortion components; a set of calibration parameters can be determined and the distortion can be inverted.
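The description does not spell out the warping model, but one common way to realize "radial plus tangential distortion with calibrated parameters, then invert" is the Brown-Conrady style model sketched below. The parameter names (k1, k2, p1, p2), the fixed-point inversion, and the function names are assumptions of this sketch, not details taken from the description.

```python
import numpy as np

def distort(xy, k1, k2, p1, p2):
    """Apply radial (k1, k2) and tangential (p1, p2) distortion to normalized
    image coordinates xy (an N x 2 array)."""
    x, y = xy[:, 0], xy[:, 1]
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return np.stack([xd, yd], axis=1)

def undistort(xy_distorted, k1, k2, p1, p2, iterations=10):
    """Invert the distortion by fixed-point iteration: repeatedly subtract the
    distortion estimated at the current guess."""
    xy = xy_distorted.copy()
    for _ in range(iterations):
        xy = xy_distorted - (distort(xy, k1, k2, p1, p2) - xy)
    return xy
```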
[0055] Various known deblurring techniques exist, such as Lucy-Richardson Deconvolution and Wiener filtering techniques, which have been used to correct for blur that occurs in glass lens systems. Both of these techniques require that an appropriate point spread function (PSF) be calculated, which can be a function of color, frequency, depth, and spatial location. Calculation of the PSF can be complex, cumbersome, and potentially inaccurate, making these techniques disadvantageous. Further, both techniques assume that the level of blur is the same for each of the color channels and do not account for the variations in blur between the color channels. Generally, the deblurring methods described herein rely on the realization that edges in natural images occur at the same location in the different color channels. By adjusting the fluidic lens system and/or image sensor of the camera such that one color channel is sharp (at least relative to the others) even though the other color channels are blurred, it is possible to extract the edge information from the sharp image and use it in conjunction with non-edge information of the blurred image to produce a deblurred image. This is possible because the blurred color channels have good shading information but poor edge information. [0057] In one embodiment, the fluidic lens system and/or CMOS sensor are adjusted such that the green channel is sharp and the red and blue channels are blurred, i.e., corresponding images are out of focus and have more blurring distortion. Then the acquired images from the camera can be processed (such as by the control and processing unit) to extract the edge information from the green image, and use it to correct the blurred images corresponding to each of the red and blue color channels. The images corresponding to each color channel can then be combined to produce a deblurred composite color image. [0058] Basically, the image corresponding to the green color channel and the image corresponding to the red (or blue) color channel are both decomposed by filtering and downsampling using a filter bank that allows for the separation of the edge and the shading information. Edge information from the sharp green image is then meshed with the non-edge information for the red (or blue) image to form a set of sub-band output coefficients, and these are input to a reconstruction portion of the filter bank which includes upsampling, filtering and combining steps to generate a less blurred red (or blue) image and ultimately a less blurred composite color image. In one embodiment, decomposition is performed using wavelet decomposition and wavelet transforms, while in another embodiment contourlet decomposition and contourlet transforms are used. In the case of the wavelet decomposition, the selection of which sub-band output coefficients to use in an image reconstruction can be determined a priori. In this regard, the sharp image is decomposed to obtain certain sub-band output coefficients which are each assumed to represent edge information (i.e., corresponding to sub-bands having a high frequency component) and the blurred image is decomposed to obtain other sub-band output coefficients corresponding to sub-bands which are assumed to not represent edge information (i.e., those sub-bands not having a high frequency component). In other words, the sub-bands for these sets do not overlap. The output
coefficients from each are then combined to form an appropriate reconstruction set without any evaluation of whether the green output coefficients actually represent edge information. [0059] In the case of contourlet decomposition, both the blurred and the sharp images can be decomposed to generate a corresponding sub-band output coefficient for each sub-band in a respective set of sub-bands, and the two sets of sub-bands can be overlapping sets. Then the resultant sub-band output coefficients corresponding to the sharp image can be evaluated to distinguish between strong and weak edges. For example, an ant colony optimization technique can be utilized to determine edge information, although other edge detection techniques can also be employed. In this case, a set of sub-band output coefficients corresponding to the blurred image are modified to replace some of the sub-band output coefficients in the set with corresponding sharp image sub-band output coefficients which correspond to edge information for the sharp image. Some of the sub-band output coefficients in the set are not replaced but are retained, and these are sub-band output coefficients which correspond to non-edge information. The resultant modified set of sub-band output coefficients can then be used in the reconstruction process. [0060] With respect to the wavelet decomposition and reconstruction, in one embodiment, the following steps are performed:
[0061] 1. Select a modified perfect reconstruction filter bank;
[0062] 2. Decompose the images into sub-bands by first filtering and down-sampling the rows of the image, then filtering and down-sampling the columns; [0063] 3. Form a set of reconstruction coefficients for the blue image including the B^LL coefficient and the band pass sub-band output coefficients corresponding to the green image (denoted by G^LH, G^HL, and G^HH) instead of using the band pass sub-band output coefficients of the blue image (denoted by B^LH, B^HL, and B^HH); [0064] 4. Depending on the degree of blur, introduce more levels of decomposition by further down-sampling and filtering the B^LL component. The green color sub-band output coefficients can replace more of the corresponding blue color sub-band output coefficients; and
[0065] 5. Reconstruct by up-sampling and filtering.
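A minimal sketch of steps 1-5 is shown below using an off-the-shelf wavelet library (PyWavelets) as a stand-in for the modified perfect reconstruction filter bank detailed next; the specific wavelet ("db2"), the single level of decomposition, and the function name are choices made only for this sketch. Per step 4, more levels can be passed in when the blur is more severe.

```python
import pywt  # PyWavelets, used here as a stand-in filter bank

def wavelet_mesh_deblur(blue_blurred, green_sharp, wavelet="db2", levels=1):
    """Keep the blue low-pass (shading) coefficients, substitute the green
    band-pass/high-pass (edge) coefficients, and reconstruct (steps 1-5)."""
    b = pywt.wavedec2(blue_blurred, wavelet, level=levels)
    g = pywt.wavedec2(green_sharp, wavelet, level=levels)
    # b[0] is the B^LL-type approximation; the remaining entries are the
    # (LH, HL, HH) detail coefficients at each level.
    meshed = [b[0]] + list(g[1:])
    rec = pywt.waverec2(meshed, wavelet)
    # waverec2 can return an array one pixel larger along odd dimensions.
    return rec[:blue_blurred.shape[0], :blue_blurred.shape[1]]
```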
[0066] More specifically, the control and processing unit 10 performs such a wavelet sub-band meshing image processing method using a modified perfect reconstruction filter bank to separate image edge information and shading information. With reference to FIG. 4, an understanding of a modified perfect reconstruction filter bank begins with an understanding of a perfect reconstruction filter bank 40. In this case, this filter bank 40 has as its input a signal denoted by B^, which represents a blurred blue image from the blue color channel after passing through a lens, where L0 represents a lens blurring function and B represents an unblurred blue image. Perfect reconstruction filter bank 40 includes a deconstruction portion
42 on the left side, a reconstruction portion 44 on the right side, and an intermediate section
43 at which sub-band output coefficients are output from the deconstruction portion 42 and input to the reconstruction portion 44.
[0067] The filter bank 40 operates to deconstruct signal B^ into a predetermined number of sub-band output coefficients corresponding to sub-bands in a wavelet domain (akin to a frequency domain) using the deconstruction portion 42. The filter bank then operates to reconstruct a version of the signal B^ using these coefficients as input to the reconstruction portion 44, with a so-called "perfect" reconstruction upon appropriate selection of filter characteristics.
[0068] As illustrated, the decomposition portion 42 of the perfect reconstruction filter bank 40 includes two cascaded levels and decomposes the input signal into four sub-bands. Specifically, at each of the two different levels, decomposition of the signal B^ occurs by filtering (using decomposition filter H0, a low pass filter, or decomposition filter H1, a high pass filter) and downsampling (by a factor of two) to generate a respective sub-band output coefficient for each of the four different sub-bands at the intermediate section 43. For example, B^LH represents a sub-band output coefficient after filtering and down-sampling B^ twice, where L represents a low pass filter and H represents a high pass filter. The resultant sub-band output coefficients for the illustrated filter bank include B^LL, which represents the shading information, and B^LH, B^HL, and B^HH, which represent the edge information.
[0069] These sub-band output coefficients are input to the reconstruction portion 44 and upsampled (by a factor of two) and filtered (using reconstruction filter F0, a low pass filter, or reconstruction filter F1, a high pass filter) and combined, in each of two levels, to reconstruct the image signal BΛ. The filters H0 (low pass), F0, and H1 (high pass), F1 make up a set of perfect reconstruction filter bank pairs, and appropriate selection of these filters can occur using known methods along with the constraints described below.
[0070] As shown in FIG. 5, a modified perfect reconstruction filter bank 50 also takes in image G, corresponding to a sharp image of the green color channel. This filter bank acts to at least partially decompose both blurred image BΛ, corresponding to the blurred image data of the blue color channel, and also image G, corresponding to the sharp image from the green color channel. Sharp image G has also passed through the lens, resulting in what can be denoted GΛ, but because little blurring occurs, it can be assumed that GΛ is approximately the same as G. Blurred image BΛ is decomposed over two levels using two low pass filters to extract its corresponding shading information in the form of a sub-band output coefficient denoted by BΛLL. Further, the filter bank 50 acts to at least partially decompose sharp image data G (corresponding to the sharp image data of the green color channel) to extract its corresponding edge information in the form of the sub-band output coefficients denoted by GΛLH, GΛHL, and GΛHH. The sub-band output coefficients BΛLL, GΛLH, GΛHL, and GΛHH form a reconstruction coefficient set which is input to a reconstruction portion 54, which is the same as the reconstruction portion 44 of the perfect reconstruction filter bank 40. A new deblurred image denoted by A* is then reconstructed by using this reconstruction set by upsampling, filtering, and combining over two levels. Image A* is an improvement over blurred image BΛ and maintains the shading information of the blurred blue image but has sharper edges. By using this modified filter bank 50, the color image edges can be improved without introducing false colors.
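Purely as an informal illustration of the sub-band meshing just described (the PyWavelets package, the helper name mesh_deblur, the wavelet choice, and the level count are all assumptions of this sketch, not features of the disclosure):

    # Keep the blurred blue LL (shading) and borrow the green detail sub-bands (edges),
    # then run the ordinary synthesis bank.
    import pywt

    def mesh_deblur(blue_blurred, green_sharp, wavelet="db2", levels=2):
        b = pywt.wavedec2(blue_blurred, wavelet, level=levels)
        g = pywt.wavedec2(green_sharp, wavelet, level=levels)
        meshed = [b[0]] + list(g[1:])      # lowest sub-band from blue, detail tuples from green
        return pywt.waverec2(meshed, wavelet)

Here blue_blurred and green_sharp would be the blue and green planes of a captured frame, and the returned array plays the role of the deblurred image A*.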
[0071] Refinements can be made by adding more cascaded levels to the reconstruction filter bank and one or more pre-filters based on the channel characteristics. As described below, a prefilter such as W0 can be added to improve results, such as to filter the image prior to decomposition. Further, the number of cascaded levels in the filter bank can be determined based on the frequency response of the initial system (filter bank and lens).
[0072] An analysis of a one-dimensional version of the system can be described with reference to FIGS. 6 and 7. Although this analysis applies to the 1-D case, these concepts can be easily extended to the 2-D case. This analysis assumes that all the filters have unit gain. FIG. 6 illustrates a 1-D standard perfect reconstruction filter bank 60, and FIG. 7 illustrates a modified perfect reconstruction filter bank 70. In both FIGS. 6 and 7, L0(z) models the blurring effects of the lens on the true blue image B(z) as a low pass filter according to:

    BΛ(z) = L0(z) B(z)
[0073] After the lens, FIG. 6 shows that the outputs of a standard filter bank can be expressed as follows:

    BΛrL(z) = (1/2) F0(z) [ H0(z) BΛ(z) + H0(-z) BΛ(-z) ]
    BΛrH(z) = (1/2) F1(z) [ H1(z) BΛ(z) + H1(-z) BΛ(-z) ]

where:
BΛrL is a reconstructed output in a low frequency sub-band (L) of the blurred image BΛ, and BΛrH is the corresponding reconstructed output in a high frequency sub-band (H).
[0074] The filter bank of FIG. 6 is modified by replacing BΛrH(z) with GΛrH(z) from the green image sub-band, such as shown in FIG. 7, and expressed by:

    A*(z) = BΛrL(z) + GΛrH(z) = BΛrL(z) + (1/2) F1(z) [ H1(z) GΛ(z) + H1(-z) GΛ(-z) ]

where:
GΛrH is a reconstructed output in a high frequency sub-band (H) of the image GΛ.
[0075] In order to reconstruct the original image data B, an estimate for the higher sub-band used in reconstruction must be close to the higher sub-band of the original B image data. From the optical properties of the lens, assume that GΛ better estimates the edges of the original blue signal, as follows:

    GΛrH(z) = BrH(z) + EG(z)
    BΛrH(z) = BrH(z) + EB(z)

where EG represents the error in the estimate of image G, and
EB represents the error in the estimate of image B.
[0076] The above two equations represent estimates of the true high pass sub-bands of B, where EG and EB are the errors of the two estimates. Because of high edge correlation, this model assumes that the absolute value of EG is less than or equal to the absolute value of EB.
[0077] The green color sub-band outputs are used to create a reconstructed image A*(z), where:

    A*(z) = BΛrL(z) + GΛrH(z)
          = (1/2) [ F0(z) H0(z) L0(z) + F1(z) H1(z) ] B(z) + (1/2) [ F0(z) H0(-z) L0(-z) + F1(z) H1(-z) ] B(-z) + EG(z)

[0078] It is desirable to remove the aliasing component B(-z) and reconstruct a delayed version of the original signal B(z). This leads to the following reconstruction conditions:

    F0(z) H0(-z) L0(-z) + F1(z) H1(-z) = 0
    F0(z) H0(z) L0(z) + F1(z) H1(z) = 2 z^(-d)
H0(z) and L0(z) are both low pass filters. If H0(z) has a lower transition frequency than that of L0(z), then the following two approximations hold:

    H0(z) L0(z) ≈ H0(z)
    H0(-z) L0(-z) ≈ H0(-z)
[0079] These approximations simplify the reconstruction conditions above to:

    F0(z) H0(-z) + F1(z) H1(-z) = 0
    F0(z) H0(z) + F1(z) H1(z) = 2 z^(-d)
[0080] Where the equality holds for the above conditions, these conditions match the perfect reconstruction conditions of a conventional two-channel filter bank. If the system uses a perfect reconstruction filter bank, then the output is simplified as follows:

    A*(z) = z^(-d) B(z) + EG(z)
[0081] This process will lead to an A*(z) that closely resembles the signal B(z) with a small error factor EG. EG changes spatially, and better results will be obtained in regions where the two images have high edge correlation.
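A one-dimensional toy example, offered only as an illustration (the signal values, the 9-tap moving-average lens model, the wavelet, and the level count are assumptions), shows the effect described above: the meshed output regains a sharp edge while keeping the blue shading.

    # 1-D analogue of FIG. 7 using PyWavelets; the "lens" L0 is modeled as a
    # 9-tap moving average, an assumption for demonstration only.
    import numpy as np
    import pywt

    n = 256
    edge = np.zeros(n)
    edge[131:] = 1.0                                  # shared step edge
    blue = 0.5 * edge + 0.3                           # true blue signal B
    green = 0.5 * edge + 0.1                          # sharp, edge-correlated green G
    lens = np.ones(9) / 9.0                           # low-pass lens model
    blue_hat = np.convolve(blue, lens, mode="same")   # blurred blue B^

    b = pywt.wavedec(blue_hat, "db2", level=3)        # [A3, D3, D2, D1]
    g = pywt.wavedec(green, "db2", level=3)
    a_star = pywt.waverec([b[0]] + g[1:], "db2")[:n]  # blue shading + green edges

    print(np.abs(np.diff(blue_hat)).max())            # small jump: edge smeared by lens
    print(np.abs(np.diff(a_star)).max())              # large jump: edge restored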
[0082] This method assumes that the lens can be modeled as a low-pass filter on the blue color channel. An analysis of the error begins with the realization that the process requires H0(z) to have a lower transition frequency than L0(z) so that the approximations set forth above are satisfied. For a two channel perfect reconstruction filter bank, H0 has a transition band centered at ω = 0.5π. If it is determined that the L0(z) transition frequency of the lens is less than 0.5π, then additional levels of sub-band decomposition can be added to the filter bank 50.
[0083] The above analysis suggests that there is a trade-off between the error created by using the green sub-band outputs and the error created by the lens when blurring the blue sub-bands. Rather than recovering the high frequencies, the deblurring method replaces this part of the spectrum with the corresponding green sub-band output coefficients; when the low-pass lens filter no longer passes the higher frequency sub-bands, the trade-off favors using the green sub-bands. The levels of decomposition, c, can be increased until the transition frequency of the blurring filter falls beyond the transition frequency of the lowest sub-band.
[0084] Determining the levels of decomposition involves the following considerations. Recall that L0(e^(jω)) represents the blur filter of the blue image. An appropriate frequency response is as follows: L0(e^(jω)) is approximately 1 if |ω| < π/4, and is approximately 0 if π/4 ≤ |ω| < π.
[0085] Thus, the method should use the blue sub-bands for all frequencies |ω| < π/4 that the lens passes, and use the green sub-bands for all frequencies π/4 ≤ |ω| < π. For this L0(e^(jω)), the method requires a two level decomposition, such as shown in FIGS. 5 and 7. Further, the reconstruction then uses the sub-band output coefficient set BΛLL, GΛLH, GΛHL, and GΛHH, as discussed below.
[0086] Instead of using the BΛLH, BΛHL, and BΛHH terms, this method uses instead the respective GΛLH, GΛHL, and GΛHH terms. The BΛLH, BΛHL, and BΛHH sub-band terms estimate the original B image poorly. The unfiltered sub-band terms GΛLH, GΛHL, and GΛHH estimate the B signal well because of strong edge correlation. The reconstruction error depends on the accuracy of the estimate of these sub-band terms.
[0087] Now consider the following change of variables:

    EBLL(e^(jω)) = BLL(e^(jω)) - BΛLL(e^(jω))

[0088] EBLL(e^(jω)) represents the error between BLL and BΛLL, and can be expressed as follows:

    EBLL(e^(jω)) = (1/4) Σ_(k=0..3) H0(e^(j(ω-2πk)/2)) H0(e^(j(ω-2πk)/4)) [ 1 - L0(e^(j(ω-2πk)/4)) ] B(e^(j(ω-2πk)/4))
Thus, four distinct terms (k = 0, 1, 2, 3) comprise EBLL(e^(jω)). For the k = 0 term, L0(e^(jω/4)) is approximately 1 when |ω| is less than π, thus this term is approximately 0. For the k = 2 term, H0 is a low-pass filter; thus H0(e^(j(ω/4 - π))) = H0(-e^(jω/4)) is approximately equal to zero by construction and that term is approximately 0. For the remaining two terms (k = 1 and k = 3), H0(-e^(jω/2)) is approximately 0 and those terms are approximately 0. In order to make EBLL(z) small, H0 should approximate an ideal low-pass filter as much as possible. This effect suggests that adding more coefficients to the filters will improve performance. Because L0(e^(jω)) passes the frequencies in this sub-band, the estimate produces a small error EBLL(e^(jω)).
[0090] Consider generalizing L0, such that L0(e^(jω)) is approximately 1 if |ω| is less than ω0, and is approximately 0 if |ω| is greater than or equal to ω0 and less than π.
[0091] In order to reduce the overall error, the decomposition level c can increase until π/2^c ≤ ω0 ≤ π/2^(c-1). Choosing a large c means discarding parts of the frequency spectrum which the lens does not actually corrupt. Making c too small will increase the error because in the low band of the frequency spectrum (1 - L0(z)) is not equal to 0.
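This level-selection rule can be stated compactly in code; the function name and the cap on the number of levels are illustrative assumptions:

    # Pick the smallest number of decomposition levels c with pi / 2**c <= w0,
    # so that the retained lowest sub-band lies inside the lens passband.
    import math

    def decomposition_levels(w0, max_levels=6):
        for c in range(1, max_levels + 1):
            if math.pi / 2 ** c <= w0:
                return c
        return max_levels

    print(decomposition_levels(math.pi / 4))   # -> 2, the two-level example above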
[0092] In other words, the optical properties of the lens blur out BΛLH, BΛHL, and BΛHH and yield a high approximation error. GΛLH, GΛHL and GΛHH better estimate the edges of the original blue signal:

    GΛLH ≈ BLH,  GΛHL ≈ BHL,  GΛHH ≈ BHH
[0093] Recall that GΛLH, GΛHL and GΛHH then pass through the reconstruction portion of the filter bank. Thus, the filter bank must satisfy known reconstruction conditions such as set forth in G. Strang and T. Q. Nguyen, "Wavelets and Filter Banks," Cambridge, MA: Wellesley-Cambridge, 1997, which is hereby incorporated by reference herein. The following expresses the output of the filter bank:

    A*(z) = z^(-d) B(z) + E'BLL(z) + E'GLH(z) + E'GHL(z) + E'GHH(z)

where each E' term denotes the corresponding sub-band estimation error after passing through the reconstruction filters.
[0094] Increasing the decomposition level c causes more EG terms to appear. To reduce EBLL without introducing extra EG terms, level c should be increased until L0(e^(jω)) is approximately 1 over the lowest sub-band.
[0095] Best results are obtained by limiting the number of EG terms, and using only the sub-bands with small EG terms. Similar to above, consider the error EBLH(e^(jω)) between BLH and BΛLH:

    EBLH(e^(jω)) = (1/4) Σ_(k=0..3) H1(e^(j(ω-2πk)/2)) H0(e^(j(ω-2πk)/4)) [ 1 - L0(e^(j(ω-2πk)/4)) ] B(e^(j(ω-2πk)/4))
[0096] Again, this expression includes four terms. The first three terms are approximately zero by construction, because for those terms either the factor [ 1 - L0(·) ] is approximately zero or the aliased filter responses H0(-·) are approximately zero. However, the last term contains error. To reduce this error, GΛLH can replace BΛLH as discussed above. For the lower frequency sub-bands, the correlation does poorly and the reconstruction suffers.
[0097] To reduce the error further, a pre-filter W0(z) can be added directly after L0(z) in FIG. 5. By adding this filter, the factor [ 1 - L0(·) ] in the fourth term of the prior equation changes to:

    [ 1 - W0(·) L0(·) ]

[0098] To make the last term approximately zero, the filter W0(z) needs to satisfy the following:

    W0(e^(jω)) L0(e^(jω)) ≈ 1 over the frequencies retained in the reconstruction.

[0099] Assume more is known about L0(e^(jω)), as for example that L0(e^(jω)) is approximately 1 up to a transition band and then rolls off toward its first zero. Here ωΔ represents the transition band. The first zero of L0(e^(jω)) has a higher frequency than ωΔ. A modified Wiener filter can be designed to reduce the error in all sub-bands with frequencies below ωΔ.
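One way to realize such a pre-filter, offered only as an illustrative sketch and not as the specific design of this disclosure, is a regularized (Wiener-style) inverse of the lens response, with an assumed noise-to-signal parameter:

    # W0 chosen so that W0 * L0 is close to 1 over the retained frequencies,
    # regularized to avoid amplifying frequencies where L0 is small.
    import numpy as np

    def wiener_prefilter(L0_freq, noise_to_signal=1e-2):
        L = np.asarray(L0_freq, dtype=complex)
        return np.conj(L) / (np.abs(L) ** 2 + noise_to_signal)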
The filter bank design increases the level of decomposition so that the highest BΛ sub-band used in reconstruction has a transition frequency that is arbitrarily close to ωΔ. In practice, complexity and image size limit the number of levels of decomposition.
[00100] In another embodiment, a contourlet sub-band meshing method is used for deblurring an image in a first color channel, such as a blue (or red) channel, using information from a second sharp color channel, such as a green channel. This method is similar to the wavelet-based meshing method described above in that decomposition and reconstruction are involved. However, the contourlet sub-band meshing method uses a contourlet transform instead of a wavelet transform and generates an edge map for further analysis of the edges prior to substitution of some green coefficients for blue ones in the reconstruction of the deblurred blue image.
[00101] The contourlet transform was proposed by Do and Vetterli as a directional multi-resolution image representation that can efficiently capture and represent smooth object boundaries in natural images (as discussed in Minh N. Do and Martin Vetterli, "The contourlet transform: An efficient directional multiresolution image representation," IEEE Trans. on Image Processing, vol. 14, no. 12, pp. 2091-2106, Dec. 2005, which is hereby incorporated by reference herein). More specifically, the contourlet transform is constructed as an iterated double filter bank including a Laplacian pyramid stage and a directional filter bank stage. Conceptually, the operation can be illustrated with reference to FIGS. 8(a)-(b). A Laplacian pyramid iteratively decomposes a 2-D image into low pass and high pass sub-bands, and directional filter banks are applied to the high pass sub-bands to further decompose the frequency spectrum. The process is iteratively repeated using a downsampled version of the low pass output as input to the next stages. Using ideal filters, the contourlet transform will decompose the 2-D frequency spectrum into trapezoid-shaped sub-band regions as shown in FIG. 8(b).
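For orientation only, one stage of the Laplacian pyramid half of the transform can be sketched as follows; the Gaussian low-pass stand-in, the decimation step, and the function name are assumptions, and the directional filter bank stage is omitted:

    # One Laplacian pyramid stage: low-pass, take the band-pass residual
    # (which would feed the directional filter bank), and downsample the
    # low-pass output for the next iteration.
    from scipy.ndimage import gaussian_filter

    def laplacian_pyramid_level(image, sigma=1.0):
        low = gaussian_filter(image, sigma)      # low-pass sub-band
        high = image - low                       # band-pass residual
        return low[::2, ::2], high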
[00102] A two dimensional decomposition portion 90 of a contourlet filter bank is schematically shown in FIG. 9, and is operationally somewhat similar to the operation of the decomposition portion of the filter banks described above. Using this two level contourlet decomposition filter bank for the blue channel, a decomposition of the blue image produces three outputs: the low pass output BΛLL, the band pass output BΛLH, and the high pass output BΛHH. The green image is similarly decomposed to generate GΛLH and GΛHH, which can be substituted for the BΛLH and BΛHH coefficients as described below. Of course, more levels or stages of iteration can be added to decompose the image into many more sub-bands. The blue image and the green image are both decomposed using a desired number of levels or stages.
[00103] An ant colony optimization edge detection scheme of this method produces a binary edge map which is then dilated and used to decide which sub-band output coefficients corresponding to the blue channel will be replaced with green sub-band output coefficients and which sub-band output coefficients will not be replaced. Improved results compared to conventional methods can be obtained because the variable nature of contourlets allows for the natural contours of an image to be more accurately defined. Further, the edge detection scheme allows for the characterization of edges as strong or weak.
[00104] Consequently, the contourlet sub-band meshing method can be performed as follows:
[00105] 1. Select a contourlet filter bank and decompose both the green and the blue image into sub-bands by filtering and down-sampling, to generate corresponding sub-band output coefficients for each image.
[00106] 2. Detect major edges in the green color channel based on an ant colony optimization (ACO) edge detection scheme to create a binary map in the contourlet domain. The binary edge map can be dilated to collect the area around detected major edges such that areas around the edges in the contourlet domain will also take a value of 1 rather than 0.
[00107] 3. Select each coefficient of the band-pass contourlet sub-bands according to the dilated binary edge map, which is used to determine the best coefficient (green or blue).
[00108] 4. Depending on the degree of blur, additional levels of decomposition can be introduced by further down-sampling and filtering the lowest frequency sub-band. The green color sub-bands can then replace more of the corresponding blue color sub-bands. The lowest sub-band should remain the original blurred blue color sub-band in order to reduce false coloring.
[00109] 5. Reconstruct the blue image by up-sampling, filtering, and combining as appropriate; an illustrative sketch of the coefficient-selection and reconstruction steps follows this list.
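In the sketch below, contourlet_decompose and contourlet_reconstruct are hypothetical helper names standing in for a contourlet filter bank implementation, which this disclosure does not tie to any particular software:

    # Per-coefficient selection by the dilated binary edge map M: take the green
    # band-pass coefficient where M = 1, keep the blue coefficient where M = 0,
    # and always keep the lowest (shading) sub-band from the blue image.
    import numpy as np

    def mesh_contourlet(blue_subbands, green_subbands, edge_maps):
        out = [blue_subbands[0]]                       # lowest sub-band stays blue
        for b, g, m in zip(blue_subbands[1:], green_subbands[1:], edge_maps):
            out.append(np.where(m == 1, g, b))         # dilated-map coefficient choice
        return out

    # deblurred = contourlet_reconstruct(mesh_contourlet(
    #     contourlet_decompose(blue), contourlet_decompose(green), edge_maps))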
[00110] Further with respect to the ACO edge detection scheme, this scheme is described in an article by Jing Tian, Weiyu Yu, and Shengli Xie, titled "An ant colony optimization algorithm for image edge detection," in IEEE Congress on Evolutionary Computation, 2008, pages 751-756, which is hereby incorporated by reference herein. The ACO scheme arises from the natural behavior of ants in making and following trails. Here, a binary edge map for the green channel can be generated with each sub-band having a value of 1 or 0 depending on whether a strong edge is present or not. Dilation allows more of the green coefficients to be used. Let M be the dilated green edge map in the contourlet domain. Then, a new coefficient for a sub-band is generated based on this map, for example:

    new coefficient(i, j) = GΛLH(i, j) if M(i, j) = 1
    new coefficient(i, j) = BΛLH(i, j) if M(i, j) = 0

(using the LH band-pass sub-band as an example).
[00111] Thus, this method replaces some of the blurred blue edges with sharp green edges but keeps those blurred blue edges which correspond to weak green edges. This method assumes that a strong green edge indicates a similarly strong true blue edge and chooses a corresponding green coefficient. In areas with a weak green edge, the method assumes that the blurred blue edge better matches the true sharp blue image and chooses the corresponding blue coefficient. Natural images usually adhere to this generalization and thus improved image reconstruction can be achieved.
[00112] An appropriate dilation radius (such as in the range of 5-25 pixels) is selected.
At a higher dilation radius, the border around the edges increases and fewer ghosting artifacts occur. Also, at a higher dilation radius, the edges will appear sharper, but the shading will differ from the clean image.
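The edge-map dilation can be illustrated as follows; the Canny detector from scikit-image is used purely as a stand-in for the ACO edge detector described above, and the disk radius corresponds to the dilation radius discussed here:

    # Build a binary edge map (stand-in detector) and dilate it with a
    # disk-shaped structuring element of the chosen radius.
    import numpy as np
    from scipy.ndimage import binary_dilation
    from skimage.feature import canny

    def dilated_edge_map(green_plane, radius=10):
        edges = canny(green_plane)                         # stand-in for the ACO edge map
        y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
        disk = x ** 2 + y ** 2 <= radius ** 2              # disk structuring element
        return binary_dilation(edges, structure=disk)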
[00113] More particularly, mixing coefficients between the two color channels creates ghosting artifacts. The mean squared error (MSE) between the reconstructed image and the original image becomes a function of these artifacts:

    MSE = (1/(N1 N2)) Σ_(i,j) ( A*(i, j) - B(i, j) )^2

where N1 x N2 is the image size.
[00114] A trade-off exists because the green coefficients produce sharp edges, but can result in false coloring. Mixing with blue coefficients produces less false coloring, but introduces ghosting artifacts. The goal is to balance this trade-off and produce a natural looking image.
[00115] Consider BΛLH expressed in terms of B(ω1, ω2). As illustrated in FIG. 10(a), the band-pass output can be expressed as shown, where HL is a low pass filter, HH is a high-pass filter, and HD is a group of directional filters. Only the term where m = 0, n = 0 matters, as all the other terms are close to 0. The clean blue image and green image have similar BLH and GΛLH terms. Thus, the method uses GΛLH when it has a smaller MSE, as expressed in FIG. 10(b). Many of the similar terms can be combined into a variable a, as shown in FIG. 10(c), which can then be simplified as shown in FIG. 10(d). The difference of squares then results in the inequality shown in FIG. 10(e), which leads to the two conditions shown in FIG. 10(f). If either of these conditions is satisfied, then a better MSE is achieved. By construction, BΛ(ω1, ω2) = L0(ω1, ω2) B(ω1, ω2) is the blurred blue image obtained from the camera. The last two conditions of FIG. 10(f) can be rewritten as shown in FIG. 10(g).
[00116] Similar conditions can be produced for each of the outputs of the deconstruction filter bank. In the high frequency sub-bands, L0(ω1/2, ω2/2) is approximately zero and blurs out these coefficients. Under these circumstances, the expression shown in FIG. 10(g) becomes:

    0 < GΛLH(ω1, ω2) / BLH(ω1, ω2) < 2
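The selection test just given can be checked coefficient-by-coefficient; the following is an illustrative rendering of that inequality (the function and variable names are assumptions), not code from this disclosure:

    # Substituting the green coefficient lowers the squared error whenever
    # 0 < G/B < 2 relative to the corresponding clean blue coefficient.
    import numpy as np

    def green_reduces_mse(g_coeff, b_clean_coeff, eps=1e-12):
        ratio = g_coeff / (b_clean_coeff + eps)
        return (ratio > 0.0) & (ratio < 2.0)

In practice the clean blue coefficients are not available, which is why the dilated green edge map described above serves as the practical selection rule.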
[00117] For the high frequency components, the numerator and the denominator of the expression above will typically have similar magnitude, satisfying the upper bound. The expression above also suggests that in the high frequency sub-bands, the method will reduce the MSE when the green and clean blue coefficients have the same sign and a strong correlation exists between the color channels. In areas where they have different signs, the condition fails, and the poor correlation will result in color bleeding.
[00118] For the lowest frequency components, the blur kernel (L0) does not affect these frequencies, and thus L0(ω1/2, ω2/2) is approximately 1. In this case, B(ω1/2, ω2/2) is approximately equal to BΛ(ω1/2, ω2/2), and the equations of FIG. 10(g) are not satisfied. Intuitively, the blur kernel does not affect the lower frequencies, and substituting in green sub-bands at these lower frequencies does not reduce the MSE.
[00119] The contourlet method is an improvement over the wavelet method. Further, as the size of the blur kernel increases, the input image quality decreases and the PSNR (Peak Signal to Noise Ratio) decreases. With the blue image more degraded, more of the green color channel coefficients can be used. This means that more levels of decomposition are required.
[00120] The camera system 2 has been described with respect to its use in imaging body cavities as can be employed in MIS. However, other applications for the camera system and color correction algorithm are also contemplated. For example, cameras with extensive zoom control are available today for use in outer space or for homeland security purposes. Light sources to enable the use of such cameras remain an issue in conventional designs. Either the distance or the desire to remain unrecognized prevents the use of a light source directly from the location of the camera. A camera system such as described herein, including light source assemblies and network communications, can resolve this difficulty. A powerful camera can be positioned miles away from the target sites as long as the light source is able to adjust itself according to the movement of the camera. Wireless communication between the light source assemblies and the camera can enable the use of the camera even when very distant from the target, for example, when airborne and mobile. Embodiments of the present invention can also enable various systems that use only one powerful camera that is centrally located with the ability to acquire images from different areas, as long as a light source is included for each area of interest.
[00121] Also for example, embodiments of the present invention are also suitable for various civilian purposes, such as using a powerful light source with variable illumination pattern and steering capability among different groups. Having a powerful light source that can be used by different consumers will allow landing airplanes and docking ships to use the light source of the airport or the seaport for assistance in landing. This will enable the consumers to direct the light to where it is needed rather than have a standard stationary illumination. Further, for example, a system such as the camera system 2 can be used for imaging spaces and regions other than cavities of the human body (or cavities in an animal body), including spaces or regions that are otherwise hard to reach.
[00122] Additionally, while much of the above description relates to the camera system 2, which has both the camera 4 as well as one or more of the light source assemblies 6, 8 (as well as the control and processing unit 10), the present invention is also intended to encompass other applications or embodiments in which only one or more of these components is present. For example, the present invention is also intended to relate to a light source assembly that is independent of any camera and merely used to provide lighting for a desired target. That is, the present invention is also intended to encompass, for example, a stand-alone light source assembly such as that described above that employs (i) one or more tunable lenses for the purpose of controlling an output light pattern (e.g., amount of light divergence), and/or (ii) controllable LEDs that can be switched on and off to cause variations in light beam direction.
[00123] Any system with high edge correlation can benefit from the above-described deblurring algorithm. Other future applications include uses in super resolution and video compression. For super resolution, all three color planes share the same edge information, so the resulting color image has much sharper edges than with traditional techniques. For video compression, the edge information is redundant between color planes, such that one can use the edge information from one color plane as the edge information for all three color planes. This redundancy can be used to save on the number of bits required without sacrificing much in terms of quality. Another example is in systems where one sharp image sensor can improve the quality of an inexpensive sensor, such as an infrared sensor, which has high edge information. Using the same concept of a wireless network and motion estimation, but with a powerful weapon in place of a light source, land forces can remain mobile and use a heavy weapon at the same time. This weapon could be located far from the fire zone or even airborne and could be used by multiple consumers. The weapon can be locked onto a target by a camera carried by the consumer as long as its spatial orientation is registered and communication to the weapon is available. Motion estimation and distance to the target can be calculated in the processor near the weapon.
[00124] It is specifically intended that the present invention not be limited to the embodiments and illustrations contained herein, but include modified forms of those embodiments including portions of the embodiments and combinations of elements of different embodiments as come within the scope of the following claims.

Claims

WE CLAIM:
1. A camera system configured to acquire image information representative of subject matter within a selected field of view, the camera system comprising: a camera that receives reflected light and based thereon generates the image information; and a light source assembly operated in conjunction with the camera so as to provide illumination, the light source assembly including a first light source and a first tunable lens, wherein at least some of the illumination output by the light source assembly is received back at the camera as the reflected light, and wherein the tunable lens is adjustable to vary the illumination output by the light source assembly, whereby the illumination can be varied to substantially match the selected field of view.
2. The camera system of claim 1, wherein the tunable lens is a microfluidic lens.
3. The camera system of claim 2, wherein adjusting of the microfluidic lens causes a variation in a radiation pattern of the illumination output by the light source assembly.
4. The camera system of claim 3, wherein the variation in the radiation pattern includes an increase or a decrease of a width of a light beam formed by the illumination, and wherein the light source assembly and camera are physically housed in a shared housing.
5. The camera system of claim 2, further comprising a second tunable lens positioned between the tunable lens and the light source, wherein the first and second tunable lenses are configured to operate in combination with one another to achieve zooming operation.
6. The camera system of claim 1, wherein the light source includes at least one light emitting diode (LED).
7. The camera system of claim 6, wherein the light source is an LED array including a plurality of LEDs, wherein the light source is also adjustable in that at least one of the LEDs of the LED array can be powered on or powered off, and wherein a change of direction of the illumination occurs when at least one of the LEDs is adjusted from being powered off to being powered on or from being powered on to being powered off.
8. The camera system of claim 7, wherein the plurality of LEDs of the LED array includes a ring of LEDs and wherein, by selecting one or more of the LEDs of the ring of LEDs to be powered on or off, a beam of light output by the light source can be effectively steered, the beam of light being at least some of the illumination.
9. The camera system of claim 7, wherein the plurality of LEDs of the LED array further includes a center LED positioned within the ring of LEDs, and wherein all of the LEDs are surface mounted on a printed circuit board.
10. The camera system of claim 1, wherein the camera is a miniature camera suitable for use in minimally-invasive surgery (MIS).
11. The camera system of claim 1, wherein a first physical location of the camera is independent from a second physical location of the light source assembly.
12. The camera system of claim 1, wherein the light source assembly further includes a second light source and a second tunable lens that is adjustable independently of adjustments to the first tunable lens.
13. The camera system of claim 1, wherein the camera includes a fluidic lens system.
14. The camera system of claim 1, further including a control and processing unit communicating via at least one wireless connection with the camera and the light source assembly, wherein the control and processing unit receives the image information from the camera via the at least one wireless connection.
15. The camera system of claim 14, wherein the control and processing unit includes at least one of: means for processing the imaging information; and means for receiving instructions from an operator according to which operations of the camera or the light source are controlled.
16. The camera system of claim 15, wherein image information is provided by the camera to a control and processing unit, and wherein a deblurring algorithm is employed by the control and processing unit so as to generate improved image information.
17. A method of operating a camera system to acquire image information representative of subject matter within a selected field of view, the method comprising: providing a camera and a light source assembly, wherein the light source assembly includes a light source and a tunable lens; controlling the tunable lens; transmitting a light beam from the light source assembly; and receiving reflected light at the camera, wherein at least some of the light of the light beam transmitted from the light source assembly is included as part of the reflected light, and wherein the image information is based at least in part upon the reflected light, wherein the tunable lens is controlled to vary the light beam output by the light source assembly, whereby the light beam can be varied to substantially match the selected field of view.
18. The method of claim 17, further comprising: controlling an on/off status of at least one LED of an LED array that is included by the light source, so as to steer the light beam.
19. The method of claim 18, further comprising: transmitting the image information or further information based thereon wirelessly from the camera to a control and processing unit that is in wireless communications with both the light source assembly and the camera.
20. The method of claim 19, further comprising: processing the image information or further information at the control and processing unit to obtain processed image information.
21. A method for deblurring a blurred first image corresponding to a first color channel of a camera that also produces a sharp second image corresponding to a second color channel of the camera, wherein the first and the second images each include a plurality of pixels with each pixel having an associated respective value, the method comprising: decomposing the first image by filtering and downsampling to generate a first set of one or more first sub-band output coefficients, wherein each first sub-band output coefficient corresponds to a respective sub-band in a selected one of a wavelet and a contourlet domain; decomposing the second image by filtering and downsampling to generate a second set of second sub-band output coefficients, wherein each second sub-band output coefficient corresponds to a respective sub-band in the selected domain; selecting those second sub-band output coefficients that represent edge information, each of the selected second sub-band output coefficients corresponding to a respective selected sub-band, with the selected sub-bands together defining an edge sub-band set; preparing a third set of sub-band output coefficients which includes the selected second sub-band output coefficients representing edge information and at least one first sub-band output coefficient corresponding to a sub-band other than those sub-bands in the edge sub-band set; and reconstructing a deblurred first image by upsampling and filtering using the third set of sub-band output coefficients as input.
22. The method of claim 21, wherein each decomposing step occurs over two or more levels and the reconstructing step occurs over the same number of levels.
23. The method of claim 22, wherein the number of levels of decomposition is determined based at least in part on a frequency response of a blurring lens of the camera.
24. The method of claim 21, wherein the first set and the second set are combined to form the third set.
25. The method of claim 21, wherein the sub-bands corresponding to the first set and the second set are predetermined and non-overlapping.
26. The method of claim 21, further including prefiltering the second image prior to decomposition of the second image.
27. The method of claim 26, wherein the prefiltering uses a Wiener filter.
28. The method of claim 21, wherein the decomposition steps occur in the contourlet domain and the selected second sub-band output coefficients are those which have been determined to represent edge information.
29. The method of claim 28, wherein the edge information is determined using an ant colony optimization scheme.
30. A method for deblurring a first image of a camera, wherein the camera generates the first image corresponding to a first color channel and a second image corresponding to a second color channel, wherein the first and the second images each includes a plurality of pixels with each pixel having an associated respective value, the method comprising: selecting a filter bank having a decomposition portion and a reconstruction portion, the decomposition portion having at least two cascaded levels for receiving two inputs and generating intermediate sub-band output coefficients for each of a predetermined number N of sub-bands, the reconstruction portion having at least two cascaded levels for receiving the intermediate sub-band output coefficients and generating an output, wherein the decomposition portion includes multiple decomposition stages at a first level connecting to N decomposition stages at a second level, each decomposition stage including at least one of a decomposition filter and a downsampler, wherein the reconstruction portion includes N reconstruction stages at a third level connecting to N/2 reconstruction stages at a fourth level, each reconstruction stage including at least one of an upsampler and a reconstruction filter; decomposing the first image using a first part of the decomposition portion of the filter bank to generate at least one first intermediate sub-band output coefficient corresponding to one of the N sub-bands; decomposing the second image using a second part of the decomposition portion of the filter bank to generate at least a second, a third, and a fourth intermediate sub-band output coefficient, each corresponding to a respective one of the N sub-bands; and reconstructing a deblurred image corresponding to the first color channel using at least the first, the second, the third, and the fourth sub-band output coefficients as input to the reconstruction side of the filter bank.
31. The method of claim 30, wherein each decomposing step occurs over two or more levels and the reconstructing step occurs over the same number of levels.
32. The method of claim 31 , wherein the number of levels of decomposition is determined based at least in part on a frequency response of a blurring lens of the camera.
33. The method of claim 30, wherein the sub-bands corresponding to intermediate sub-band output coefficients of the first image are non-overlapping with the sub-bands corresponding to the intermediate sub-band output coefficients of the second image.
34. The method of claim 30, further including prefiltering the second image prior to decomposition of the second color image.
35. The method of claim 34, wherein the prefiltering uses a Wiener filter.
36. A method for deblurring a blurred first image, wherein the blurred first image corresponds to a first color channel of a camera and a second image corresponds to a second color channel of the camera, wherein the first and the second images each comprise a plurality of pixels with each pixel having an associated respective value, the method comprising: decomposing the first image by filtering and downsampling in a decomposition portion of a filter bank to generate for a predetermined number of sub-bands in a contourlet domain a first set of first sub-band output coefficients, wherein each first sub-band output coefficient corresponds to a respective one of the sub-bands; decomposing the second image by filtering and downsampling in the decomposition portion of the filter bank to generate for the predetermined number of sub-bands a second set of second sub-band output coefficients, wherein each second sub-band output coefficient corresponds to a respective one of the sub-bands in the contourlet domain; determining which of the second sub-band output coefficients and corresponding respective sub-bands represent edge information; preparing a set of third sub-band output coefficients by modifying the first set of first sub-band output coefficients to replace each of those first sub-band output coefficients in the first set which correspond to a respective determined sub-band representing edge information with the corresponding second sub-band output coefficient from the second set; and reconstructing a deblurred image corresponding to the first image by upsampling and filtering the set of third sub-band output coefficients in a reconstruction portion of the filter bank.
37. The method of claim 36, further including combining the deblurred image with the second image to generate a composite color image.
38. The method of claim 36, wherein the edge information is determined using an ant colony optimization scheme.
39. The method of claim 38, further including preparing a binary edge map in the contourlet domain and using the binary edge map in the preparing step to select corresponding second sub-band output coefficients from the second set.
40. The method of claim 39, further including preparing a binary edge map in the contourlet domain, dilating the binary edge map, and using the dilated binary edge map in the preparing step to select corresponding second sub-band output coefficients from the second set.
41. A light source assembly comprising: an output port; a light source; and a tunable lens positioned between the light source and the output port, wherein the light source generates light that passes through the tunable lens and then exits the light source assembly via the output port as output light, and wherein the tunable lens is adjustable to vary a characteristic of the output light exiting the light source assembly via the output port.
42. The light source assembly of claim 41, wherein the light source includes an array of LEDs, wherein each of the LEDs can be controlled to be turned on or turned off so as to determine a predominant direction of the output light exiting the light source assembly.
PCT/US2009/060745 2008-10-15 2009-10-15 Camera system with autonomous miniature camera and light source assembly and method for image enhancement WO2010045406A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/124,659 US8860793B2 (en) 2008-10-15 2009-10-15 Camera system with autonomous miniature camera and light source assembly and method for image enhancement

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10554208P 2008-10-15 2008-10-15
US61/105,542 2008-10-15

Publications (2)

Publication Number Publication Date
WO2010045406A2 true WO2010045406A2 (en) 2010-04-22
WO2010045406A3 WO2010045406A3 (en) 2010-09-16

Family

ID=42107227

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2009/060745 WO2010045406A2 (en) 2008-10-15 2009-10-15 Camera system with autonomous miniature camera and light source assembly and method for image enhancement

Country Status (2)

Country Link
US (1) US8860793B2 (en)
WO (1) WO2010045406A2 (en)

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AT511251A1 (en) * 2011-03-18 2012-10-15 A Tron3D Gmbh DEVICE FOR TAKING PICTURES OF THREE-DIMENSIONAL OBJECTS
US9101287B2 (en) 2011-03-07 2015-08-11 Endochoice Innovation Center Ltd. Multi camera endoscope assembly having multiple working channels
US9101268B2 (en) 2009-06-18 2015-08-11 Endochoice Innovation Center Ltd. Multi-camera endoscope
US9314147B2 (en) 2011-12-13 2016-04-19 Endochoice Innovation Center Ltd. Rotatable connector for an endoscope
US9351629B2 (en) 2011-02-07 2016-05-31 Endochoice Innovation Center Ltd. Multi-element cover for a multi-camera endoscope
US9492063B2 (en) 2009-06-18 2016-11-15 Endochoice Innovation Center Ltd. Multi-viewing element endoscope
US9554692B2 (en) 2009-06-18 2017-01-31 EndoChoice Innovation Ctr. Ltd. Multi-camera endoscope
US9560953B2 (en) 2010-09-20 2017-02-07 Endochoice, Inc. Operational interface in a multi-viewing element endoscope
US9560954B2 (en) 2012-07-24 2017-02-07 Endochoice, Inc. Connector for use with endoscope
US9642513B2 (en) 2009-06-18 2017-05-09 Endochoice Inc. Compact multi-viewing element endoscope system
US9655502B2 (en) 2011-12-13 2017-05-23 EndoChoice Innovation Center, Ltd. Removable tip endoscope
US9706903B2 (en) 2009-06-18 2017-07-18 Endochoice, Inc. Multiple viewing elements endoscope system with modular imaging units
US9713415B2 (en) 2011-03-07 2017-07-25 Endochoice Innovation Center Ltd. Multi camera endoscope having a side service channel
US9713417B2 (en) 2009-06-18 2017-07-25 Endochoice, Inc. Image capture assembly for use in a multi-viewing elements endoscope
US9814374B2 (en) 2010-12-09 2017-11-14 Endochoice Innovation Center Ltd. Flexible electronic circuit board for a multi-camera endoscope
US9872609B2 (en) 2009-06-18 2018-01-23 Endochoice Innovation Center Ltd. Multi-camera endoscope
US9901244B2 (en) 2009-06-18 2018-02-27 Endochoice, Inc. Circuit board assembly of a multiple viewing elements endoscope
US9986899B2 (en) 2013-03-28 2018-06-05 Endochoice, Inc. Manifold for a multiple viewing elements endoscope
US9993142B2 (en) 2013-03-28 2018-06-12 Endochoice, Inc. Fluid distribution device for a multiple viewing elements endoscope
US10080486B2 (en) 2010-09-20 2018-09-25 Endochoice Innovation Center Ltd. Multi-camera endoscope having fluid channels
US10165929B2 (en) 2009-06-18 2019-01-01 Endochoice, Inc. Compact multi-viewing element endoscope system
US10182707B2 (en) 2010-12-09 2019-01-22 Endochoice Innovation Center Ltd. Fluid channeling component of a multi-camera endoscope
US10203493B2 (en) 2010-10-28 2019-02-12 Endochoice Innovation Center Ltd. Optical systems for multi-sensor endoscopes
CN109561819A (en) * 2016-08-08 2019-04-02 索尼公司 The control method of endoscope apparatus and endoscope apparatus
US10499794B2 (en) 2013-05-09 2019-12-10 Endochoice, Inc. Operational interface in a multi-viewing element endoscope
US11278190B2 (en) 2009-06-18 2022-03-22 Endochoice, Inc. Multi-viewing element endoscope
US11547275B2 (en) 2009-06-18 2023-01-10 Endochoice, Inc. Compact multi-viewing element endoscope system
US11864734B2 (en) 2009-06-18 2024-01-09 Endochoice, Inc. Multi-camera endoscope
US11889986B2 (en) 2010-12-09 2024-02-06 Endochoice, Inc. Flexible electronic circuit board for a multi-camera endoscope

Families Citing this family (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9402533B2 (en) 2011-03-07 2016-08-02 Endochoice Innovation Center Ltd. Endoscope circuit board assembly
KR20110064156A (en) * 2009-12-07 2011-06-15 삼성전자주식회사 Imaging device and its manufacturing method
JP2011209019A (en) * 2010-03-29 2011-10-20 Sony Corp Robot device and method of controlling the same
US9442285B2 (en) 2011-01-14 2016-09-13 The Board Of Trustees Of The University Of Illinois Optical component array having adjustable curvature
US9765934B2 (en) 2011-05-16 2017-09-19 The Board Of Trustees Of The University Of Illinois Thermally managed LED arrays assembled by printing
RU2455676C2 (en) * 2011-07-04 2012-07-10 Общество с ограниченной ответственностью "ТРИДИВИ" Method of controlling device using gestures and 3d sensor for realising said method
US8638989B2 (en) 2012-01-17 2014-01-28 Leap Motion, Inc. Systems and methods for capturing motion in three-dimensional space
US8693731B2 (en) 2012-01-17 2014-04-08 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging
US11493998B2 (en) 2012-01-17 2022-11-08 Ultrahaptics IP Two Limited Systems and methods for machine control
US9679215B2 (en) 2012-01-17 2017-06-13 Leap Motion, Inc. Systems and methods for machine control
US10691219B2 (en) 2012-01-17 2020-06-23 Ultrahaptics IP Two Limited Systems and methods for machine control
US9501152B2 (en) 2013-01-15 2016-11-22 Leap Motion, Inc. Free-space user interface and control using virtual constructs
US9285893B2 (en) * 2012-11-08 2016-03-15 Leap Motion, Inc. Object detection and tracking with variable-field illumination devices
US10609285B2 (en) 2013-01-07 2020-03-31 Ultrahaptics IP Two Limited Power consumption in motion-capture systems
US9626015B2 (en) 2013-01-08 2017-04-18 Leap Motion, Inc. Power consumption in motion-capture systems with audio and optical signals
US9459697B2 (en) 2013-01-15 2016-10-04 Leap Motion, Inc. Dynamic, free-space user interactions for machine control
US20140240477A1 (en) * 2013-02-26 2014-08-28 Qualcomm Incorporated Multi-spectral imaging system for shadow detection and attenuation
US8761594B1 (en) 2013-02-28 2014-06-24 Apple Inc. Spatially dynamic illumination for camera systems
US9702977B2 (en) 2013-03-15 2017-07-11 Leap Motion, Inc. Determining positional information of an object in space
US9916009B2 (en) 2013-04-26 2018-03-13 Leap Motion, Inc. Non-tactile interface systems and methods
US10281987B1 (en) 2013-08-09 2019-05-07 Leap Motion, Inc. Systems and methods of free-space gestural interaction
US10846942B1 (en) 2013-08-29 2020-11-24 Ultrahaptics IP Two Limited Predictive information for free space gesture control and communication
US9632572B2 (en) 2013-10-03 2017-04-25 Leap Motion, Inc. Enhanced field of view to augment three-dimensional (3D) sensory space for free-space gesture interpretation
US9996638B1 (en) 2013-10-31 2018-06-12 Leap Motion, Inc. Predictive information for free space gesture control and communication
CN105765441B (en) * 2014-01-15 2018-09-11 奥林巴斯株式会社 Endoscope apparatus
US9613262B2 (en) 2014-01-15 2017-04-04 Leap Motion, Inc. Object detection and tracking for providing a virtual device experience
US10232133B2 (en) * 2014-03-28 2019-03-19 Electronics And Telecommunications Research Institute Apparatus for imaging
US10567635B2 (en) * 2014-05-15 2020-02-18 Indiana University Research And Technology Corporation Three dimensional moving pictures with a single imager and microfluidic lens
US9118872B1 (en) * 2014-07-25 2015-08-25 Raytheon Company Methods and apparatuses for image enhancement
US9699453B1 (en) 2014-07-25 2017-07-04 Raytheon Company Methods and apparatuses for video enhancement and video object tracking
JP2016038889A (en) 2014-08-08 2016-03-22 リープ モーション, インコーポレーテッドLeap Motion, Inc. Extended reality followed by motion sensing
KR102376954B1 (en) * 2015-03-06 2022-03-21 삼성전자주식회사 Method for irradiating litght for capturing iris and device thereof
US10542961B2 (en) 2015-06-15 2020-01-28 The Research Foundation For The State University Of New York System and method for infrasonic cardiac monitoring
US10943333B2 (en) * 2015-10-16 2021-03-09 Capsovision Inc. Method and apparatus of sharpening of gastrointestinal images based on depth information
US11301964B2 (en) * 2016-03-29 2022-04-12 Sony Corporation Image processing apparatus, image processing method, and medical system to correct blurring without removing a screen motion caused by a biological body motion
US10213271B2 (en) * 2016-07-06 2019-02-26 Illumix Surgical Canada Inc. Illuminating surgical device and control element
CN106600657A (en) * 2016-12-16 2017-04-26 重庆邮电大学 Adaptive contourlet transformation-based image compression method
AU2018200147B2 (en) 2017-01-09 2018-11-15 Verathon Inc. Upgradable video laryngoscope system exhibiting reduced far end dimming
JP7021974B2 (en) * 2018-02-23 2022-02-17 Hoya株式会社 Endoscope and its manufacturing method
US10642053B2 (en) 2018-03-26 2020-05-05 Simmonds Precision Products, Inc. Scanned linear illumination of distant objects
US11331006B2 (en) 2019-03-05 2022-05-17 Physmodo, Inc. System and method for human motion detection and tracking
WO2020181136A1 (en) 2019-03-05 2020-09-10 Physmodo, Inc. System and method for human motion detection and tracking

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6813335B2 (en) * 2001-06-19 2004-11-02 Canon Kabushiki Kaisha Image processing apparatus, image processing system, image processing method, program, and storage medium
US20050041207A1 (en) * 2003-03-17 2005-02-24 The Az Board Regents On Behalf Of The Uni. Of Az Imaging lens and illumination system
US20070160304A1 (en) * 2001-07-31 2007-07-12 Kathrin Berkner Enhancement of compressed images
US7256943B1 (en) * 2006-08-24 2007-08-14 Teledyne Licensing, Llc Variable focus liquid-filled lens using polyphenyl ethers

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2183059B (en) 1985-11-05 1989-09-27 Michel Treisman Suspension system for a flexible optical membrane
GB2184562B (en) 1985-12-10 1989-10-11 Joshua David Silver Liquid or semi-solid lens or mirror with system for adjusting focal length
US5526446A (en) * 1991-09-24 1996-06-11 Massachusetts Institute Of Technology Noise reduction system
US5233470A (en) 1992-12-30 1993-08-03 Hsin Yi Foundation Variable lens assembly
US5446591A (en) 1993-02-08 1995-08-29 Lockheed Missiles & Space Co., Inc. Lens mounting for use with liquid lens elements
JP3206420B2 (en) 1996-02-22 2001-09-10 株式会社デンソー Camera device
AU2692097A (en) 1996-03-26 1997-10-17 Mannesmann Aktiengesellschaft Opto-electronic imaging system for industrial applications
FR2769375B1 (en) 1997-10-08 2001-01-19 Univ Joseph Fourier VARIABLE FOCAL LENS
FR2791439B1 (en) 1999-03-26 2002-01-25 Univ Joseph Fourier DEVICE FOR CENTERING A DROP
GB9805977D0 (en) 1998-03-19 1998-05-20 Silver Joshua D Improvements in variable focus optical devices
JP4078575B2 (en) 1998-06-26 2008-04-23 株式会社デンソー Variable focus lens device
US6702483B2 (en) 2000-02-17 2004-03-09 Canon Kabushiki Kaisha Optical element
US6806988B2 (en) 2000-03-03 2004-10-19 Canon Kabushiki Kaisha Optical apparatus
JP4553336B2 (en) 2000-11-30 2010-09-29 キヤノン株式会社 Optical element, optical apparatus and photographing apparatus
US6956975B2 (en) * 2001-04-02 2005-10-18 Eastman Kodak Company Method for improving breast cancer diagnosis using mountain-view and contrast-enhancement presentation of mammography
US6737646B2 (en) 2001-06-04 2004-05-18 Northwestern University Enhanced scanning probe microscope and nanolithographic methods using the same
US6715876B2 (en) 2001-11-19 2004-04-06 Johnnie E. Floyd Lens arrangement with fluid cell and prescriptive element
US7053953B2 (en) * 2001-12-21 2006-05-30 Eastman Kodak Company Method and camera system for blurring portions of a verification image to show out of focus areas in a captured archival image
KR101016253B1 (en) 2002-02-14 2011-02-25 코닌클리케 필립스 일렉트로닉스 엔.브이. Variable focus lens
US6891682B2 (en) 2003-03-03 2005-05-10 Lucent Technologies Inc. Lenses with tunable liquid optical elements
JP4125208B2 (en) * 2003-09-29 2008-07-30 キヤノン株式会社 Image processing apparatus and image processing method
US7367550B2 (en) 2003-11-18 2008-05-06 Massachusetts Institute Of Technology Peristaltic mixing and oxygenation system
US6999238B2 (en) 2003-12-01 2006-02-14 Fujitsu Limited Tunable micro-lens array
US8654201B2 (en) * 2005-02-23 2014-02-18 Hewlett-Packard Development Company, L.P. Method for deblurring an image

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6813335B2 (en) * 2001-06-19 2004-11-02 Canon Kabushiki Kaisha Image processing apparatus, image processing system, image processing method, program, and storage medium
US20070160304A1 (en) * 2001-07-31 2007-07-12 Kathrin Berkner Enhancement of compressed images
US20050041207A1 (en) * 2003-03-17 2005-02-24 The Az Board Regents On Behalf Of The Uni. Of Az Imaging lens and illumination system
US7256943B1 (en) * 2006-08-24 2007-08-14 Teledyne Licensing, Llc Variable focus liquid-filled lens using polyphenyl ethers

Cited By (55)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10165929B2 (en) 2009-06-18 2019-01-01 Endochoice, Inc. Compact multi-viewing element endoscope system
US9706903B2 (en) 2009-06-18 2017-07-18 Endochoice, Inc. Multiple viewing elements endoscope system with modular imaging units
US11547275B2 (en) 2009-06-18 2023-01-10 Endochoice, Inc. Compact multi-viewing element endoscope system
US9101268B2 (en) 2009-06-18 2015-08-11 Endochoice Innovation Center Ltd. Multi-camera endoscope
US10638922B2 (en) 2009-06-18 2020-05-05 Endochoice, Inc. Multi-camera endoscope
US11534056B2 (en) 2009-06-18 2022-12-27 Endochoice, Inc. Multi-camera endoscope
US9492063B2 (en) 2009-06-18 2016-11-15 Endochoice Innovation Center Ltd. Multi-viewing element endoscope
US9554692B2 (en) 2009-06-18 2017-01-31 EndoChoice Innovation Ctr. Ltd. Multi-camera endoscope
US10092167B2 (en) 2009-06-18 2018-10-09 Endochoice, Inc. Multiple viewing elements endoscope system with modular imaging units
US11471028B2 (en) 2009-06-18 2022-10-18 Endochoice, Inc. Circuit board assembly of a multiple viewing elements endoscope
US9642513B2 (en) 2009-06-18 2017-05-09 Endochoice Inc. Compact multi-viewing element endoscope system
US11278190B2 (en) 2009-06-18 2022-03-22 Endochoice, Inc. Multi-viewing element endoscope
US9706905B2 (en) 2009-06-18 2017-07-18 Endochoice Innovation Center Ltd. Multi-camera endoscope
US11864734B2 (en) 2009-06-18 2024-01-09 Endochoice, Inc. Multi-camera endoscope
US10912445B2 (en) 2009-06-18 2021-02-09 Endochoice, Inc. Compact multi-viewing element endoscope system
US9713417B2 (en) 2009-06-18 2017-07-25 Endochoice, Inc. Image capture assembly for use in a multi-viewing elements endoscope
US10905320B2 (en) 2009-06-18 2021-02-02 Endochoice, Inc. Multi-camera endoscope
US10791909B2 (en) 2009-06-18 2020-10-06 Endochoice, Inc. Image capture assembly for use in a multi-viewing elements endoscope
US9872609B2 (en) 2009-06-18 2018-01-23 Endochoice Innovation Center Ltd. Multi-camera endoscope
US9901244B2 (en) 2009-06-18 2018-02-27 Endochoice, Inc. Circuit board assembly of a multiple viewing elements endoscope
US10791910B2 (en) 2009-06-18 2020-10-06 Endochoice, Inc. Multiple viewing elements endoscope system with modular imaging units
US10799095B2 (en) 2009-06-18 2020-10-13 Endochoice, Inc. Multi-viewing element endoscope
US9986892B2 (en) 2010-09-20 2018-06-05 Endochoice, Inc. Operational interface in a multi-viewing element endoscope
US10080486B2 (en) 2010-09-20 2018-09-25 Endochoice Innovation Center Ltd. Multi-camera endoscope having fluid channels
US9560953B2 (en) 2010-09-20 2017-02-07 Endochoice, Inc. Operational interface in a multi-viewing element endoscope
US10203493B2 (en) 2010-10-28 2019-02-12 Endochoice Innovation Center Ltd. Optical systems for multi-sensor endoscopes
US11543646B2 (en) 2010-10-28 2023-01-03 Endochoice, Inc. Optical systems for multi-sensor endoscopes
US10898063B2 (en) 2010-12-09 2021-01-26 Endochoice, Inc. Flexible electronic circuit board for a multi camera endoscope
US11889986B2 (en) 2010-12-09 2024-02-06 Endochoice, Inc. Flexible electronic circuit board for a multi-camera endoscope
US10182707B2 (en) 2010-12-09 2019-01-22 Endochoice Innovation Center Ltd. Fluid channeling component of a multi-camera endoscope
US11497388B2 (en) 2010-12-09 2022-11-15 Endochoice, Inc. Flexible electronic circuit board for a multi-camera endoscope
US9814374B2 (en) 2010-12-09 2017-11-14 Endochoice Innovation Center Ltd. Flexible electronic circuit board for a multi-camera endoscope
US10070774B2 (en) 2011-02-07 2018-09-11 Endochoice Innovation Center Ltd. Multi-element cover for a multi-camera endoscope
US9351629B2 (en) 2011-02-07 2016-05-31 Endochoice Innovation Center Ltd. Multi-element cover for a multi-camera endoscope
US9713415B2 (en) 2011-03-07 2017-07-25 Endochoice Innovation Center Ltd. Multi camera endoscope having a side service channel
US11026566B2 (en) 2011-03-07 2021-06-08 Endochoice, Inc. Multi camera endoscope assembly having multiple working channels
US10292578B2 (en) 2011-03-07 2019-05-21 Endochoice Innovation Center Ltd. Multi camera endoscope assembly having multiple working channels
US9854959B2 (en) 2011-03-07 2018-01-02 Endochoice Innovation Center Ltd. Multi camera endoscope assembly having multiple working channels
US9101287B2 (en) 2011-03-07 2015-08-11 Endochoice Innovation Center Ltd. Multi camera endoscope assembly having multiple working channels
AT511251B1 (en) * 2011-03-18 2013-01-15 A Tron3D Gmbh Device for taking pictures of three-dimensional objects
AT511251A1 (en) * 2011-03-18 2012-10-15 A Tron3D Gmbh Device for taking pictures of three-dimensional objects
US9655502B2 (en) 2011-12-13 2017-05-23 EndoChoice Innovation Center, Ltd. Removable tip endoscope
US9314147B2 (en) 2011-12-13 2016-04-19 Endochoice Innovation Center Ltd. Rotatable connector for an endoscope
US10470649B2 (en) 2011-12-13 2019-11-12 Endochoice, Inc. Removable tip endoscope
US9560954B2 (en) 2012-07-24 2017-02-07 Endochoice, Inc. Connector for use with endoscope
US10905315B2 (en) 2013-03-28 2021-02-02 Endochoice, Inc. Manifold for a multiple viewing elements endoscope
US10925471B2 (en) 2013-03-28 2021-02-23 Endochoice, Inc. Fluid distribution device for a multiple viewing elements endoscope
US9993142B2 (en) 2013-03-28 2018-06-12 Endochoice, Inc. Fluid distribution device for a multiple viewing elements endoscope
US11793393B2 (en) 2013-03-28 2023-10-24 Endochoice, Inc. Manifold for a multiple viewing elements endoscope
US9986899B2 (en) 2013-03-28 2018-06-05 Endochoice, Inc. Manifold for a multiple viewing elements endoscope
US11925323B2 (en) 2013-03-28 2024-03-12 Endochoice, Inc. Fluid distribution device for a multiple viewing elements endoscope
US10499794B2 (en) 2013-05-09 2019-12-10 Endochoice, Inc. Operational interface in a multi-viewing element endoscope
US11266295B2 (en) 2016-08-08 2022-03-08 Sony Corporation Endoscope apparatus and control method of endoscope apparatus
CN109561819B (en) * 2016-08-08 2021-10-01 Sony Corporation Endoscope device and control method for endoscope device
CN109561819A (en) * 2016-08-08 2019-04-02 Sony Corporation Endoscope apparatus and control method of endoscope apparatus

Also Published As

Publication number Publication date
WO2010045406A3 (en) 2010-09-16
US20110261178A1 (en) 2011-10-27
US8860793B2 (en) 2014-10-14

Similar Documents

Publication Title
US8860793B2 (en) Camera system with autonomous miniature camera and light source assembly and method for image enhancement
US10932649B2 (en) Surgical system including a non-white light general illuminator
KR101813909B1 (en) Method and system for fluorescent imaging with background surgical image composed of selective illumination spectra
JP4951256B2 (en) Biological observation device
EP2498667A2 (en) Stereo imaging miniature endoscope with single imaging chip and conjugated multi-bandpass filters
US20200400795A1 (en) Noise aware edge enhancement in a pulsed laser mapping imaging system
US10965879B2 (en) Imaging device, video signal processing device, and video signal processing method
EP2862499B1 (en) Simultaneous display of two or more different sequentially processed images
CN110945399B (en) Signal processing apparatus, imaging apparatus, signal processing method, and memory
CN116134298A (en) Method and system for joint demosaicing and spectral feature map estimation
KR101512110B1 (en) Multi-angle rear-viewing endoscope and method of operation thereof
CN109068035B (en) Intelligent micro-camera array endoscopic imaging system
WO2019048492A1 (en) An imaging device, method and program for producing images of a scene
US11328811B2 (en) Medical image processing apparatus, medical observation apparatus, and image processing method
CN117314754B (en) Double-shot hyperspectral image imaging method and system and double-shot hyperspectral endoscope
CN114375173A (en) Portable ergonomic endoscope with disposable cannula
CN113487498A (en) Endoscope imaging image enhancement processing system

Legal Events

Code Description
121  EP: The EPO has been informed by WIPO that EP was designated in this application
     Ref document number: 09821227
     Country of ref document: EP
     Kind code of ref document: A2
NENP Non-entry into the national phase
     Ref country code: DE
WWE  WIPO information: entry into national phase
     Ref document number: 13124659
     Country of ref document: US
122  EP: PCT application non-entry in European phase
     Ref document number: 09821227
     Country of ref document: EP
     Kind code of ref document: A2