US20110075257A1 - 3-Dimensional electro-optical see-through displays - Google Patents

3-Dimensional electro-optical see-through displays

Info

Publication number
US20110075257A1
Authority
US
United States
Prior art keywords
display
active
eye
optical element
focal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/807,868
Inventor
Hong Hua
Sheng Liu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Arizona Board of Regents of University of Arizona
Original Assignee
Arizona Board of Regents of University of Arizona
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Arizona Board of Regents of University of Arizona filed Critical Arizona Board of Regents of University of Arizona
Priority to US12/807,868 (published as US20110075257A1)
Assigned to THE ARIZONA BOARD OF REGENTS ON BEHALF OF THE UNIVERSITY OF ARIZONA reassignment THE ARIZONA BOARD OF REGENTS ON BEHALF OF THE UNIVERSITY OF ARIZONA ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HUA, HONG, LIU, SHENG
Publication of US20110075257A1
Assigned to NATIONAL SCIENCE FOUNDATION reassignment NATIONAL SCIENCE FOUNDATION CONFIRMATORY LICENSE (SEE DOCUMENT FOR DETAILS). Assignors: UNIVERSITY OF ARIZONA
Priority to US14/729,195 (US11079596B2)
Priority to US17/123,789 (US11803059B2)

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B27/0172Head mounted characterised by optical features
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B26/00Optical devices or arrangements for the control of light using movable or deformable optical elements
    • G02B26/004Optical devices or arrangements for the control of light using movable or deformable optical elements based on a displacement or a deformation of a fluid
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B30/00Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G02B30/20Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes
    • G02B30/34Stereoscopes providing a stereoscopic pair of separated images corresponding to parallactically displaced views of the same object, e.g. 3D slide viewers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/302Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/322Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using varifocal lenses or mirrors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/366Image reproducers using viewer tracking
    • H04N13/383Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0127Head-up displays characterised by optical features comprising devices increasing the depth of field
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0132Head-up displays characterised by optical features comprising binocular systems
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0132Head-up displays characterised by optical features comprising binocular systems
    • G02B2027/0134Head-up displays characterised by optical features comprising binocular systems of stereoscopic type
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/014Head-up displays characterised by optical features comprising information/image processing systems
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0145Head-up displays characterised by optical features creating an intermediate image
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0147Head-up displays characterised by optical features comprising a device modifying the resolution of the displayed image
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0179Display position adjusting means not related to the information to be displayed
    • G02B2027/0187Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye

Definitions

  • This disclosure pertains to, inter alia, three-dimensional electro-optical displays that can be head-worn or otherwise placed relative to a person's eyes in a manner allowing the person to view images rendered by the display.
  • Interest in 3-dimensional (3-D) displays is long-standing and spans various fields including, for example, flight simulation, scientific visualization, education and training, tele-manipulation and tele-presence, and entertainment systems.
  • Various types of 3-D displays have been proposed in the past, including head-mounted displays (HMDs) (Hua and Gao, Applied Optics 46:2600-2610, May 2007; Rolland et al., Appl. Opt. 39:3209-3215, July 2000; Schowengerdt and Seibel, J. Soc. Info. Displ. 14:135-143, February 2006); projection-based immersive displays (Cruz-Neira et al., Proc. 20th Ann. Conf. Comp.
  • HMDs are desirable from the standpoints of cost and technical capabilities. For instance, HMDs provide mobile displays for wearable computing. For use in augmented reality, they can merge images of virtual objects with actual physical scenes. (Azuma et al., IEEE Comp. Graphics and Applics. 21:34-47, November/December 2001; Hua, Opt. Photonics News 17:26-33, October 2006.)
  • A volumetric display portrays a large number (e.g., millions) of voxels within a physical volume. Volumetric displays are conventionally classified as “true” 3-D displays.
  • The practical implementation of such technology has been hindered by several technical challenges, such as the low efficiency with which the large number of calculations required to update all the voxels can be performed, the limited rendering volume, and the poor ability to correctly render view-dependent lighting effects such as occlusion, specular reflection, and shading.
  • Another conventional approach is a “multi-focal plane” display that renders respective focus cues for virtual objects at different “depths” by forming respective images of light patterns produced at multiple focal planes by respective 2-D micro-displays located at respective discrete “depths” from the eyes.
  • Rolland et al. Appl. Opt. 39:3209-3215, 2000; Akeley et al., ACM Trans. Graphics 23:804-813, July 2004.
  • Each of the focal planes is responsible for rendering 3-D virtual objects at respective nominal depth ranges, and these discrete focal planes collectively render a volume of virtual 3-D objects with focus cues that are specific to a given viewpoint.
  • a multi-focal-plane display may be embodied via a “spatial-multiplexed” approach which uses multiple layers of 2-D micro-displays.
  • (See Rolland et al., cited above.)
  • Implementation of this approach has been hindered by the lack of practical technologies for producing micro-displays having sufficient transmittance to allow stacking them and passing light through the stack, and by the displays' demands for large computational power to render simultaneously a stack of 2-D images of a 3-D scene based on geometric depth.
  • Another conventional approach is a “time-multiplexed” multi-focal-plane display, in which multiple virtual focal planes are created time sequentially and synchronously with the respective depths of the objects being rendered.
  • McQuaide et al. Displays 24:65-72, August 2003.
  • a see-through retinal scanning display (RSD) including a deformable membrane mirror (DMM) was reported in which a nearly collimated laser beam is modulated and scanned across the field of view (FOV) to generate pixels on the retina.
  • Yet another conventional approach is a variable-focal-plane display, in which the focal distance of a 2-D micro-display is controllably changed synchronously with the respective depths of the objects correlated with the region of interest (ROI) of the viewer.
  • the region of interest of a viewer may be identified through a user feedback interface. See, e.g., Shiwa et al., J. Soc. Info. Displ. 4:255-261, December 1996; Shibata et al., J. Soc. Info. Displ. 13:665-671, August 2005.
  • Shiwa's device included a relay lens that, when physically displaced, changed the perceived depth position of a rendered virtual object.
  • Shibata achieved similar results by axially displacing the 2-D micro-display, which was mounted on a micro-controlled stage.
  • Although these approaches were capable of rendering adaptive accommodation cues, they were unable to render retinal blur cues in 3-D space and required user input to determine the ROI in real time.
  • certain aspects of the invention are directed to stereoscopic displays that can be head-mounted and that have addressable focal planes for improved depth perceptions but that require substantially less computational power than existing methods summarized above while providing more accurate focus cues to a viewer. More specifically, the invention provides, inter alia, vari-focal or time-multiplexed multi-focal-plane displays in which the focal distance of a light pattern produced by a 2-D “micro-display” is modulated in a time-sequential manner using a liquid-lens or analogous active-optical element.
  • An active-optical element configured as, for example, a “liquid lens” provides addressable accommodation cues ranging from optical infinity to as close as the near point of the eye.
  • That a liquid lens is refractive allows the display to be compact and practical, including for head-mounted use, without compromising the required accommodation range. It also requires no moving mechanical parts to render focus cues and uses conventional micro-display and graphics hardware.
  • Certain aspects of the invention are directed to see-through displays that can be monocular or binocular, head-mounted or not.
  • the displays have addressable means for providing focus cues to the user of the display that are more accurate than provided by conventional displays.
  • the user receives, from the display, images providing improved and more accurate depth perceptions for the user.
  • These images are formed in a manner that requires substantially less computational power than conventional displays summarized above.
  • the displays are for placement in an optical pathway extending from an entrance pupil of a person's eye to a real-world scene beyond the eye.
  • One embodiment of such a display comprises an active-optical element and at least one 2-D added-image source.
  • the added-image source is addressable to produce a light pattern corresponding to a virtual object and is situated to direct the light pattern toward the person's eye to superimpose the virtual object on an image of the real-world scene as perceived by the eye via the optical pathway.
  • the active-optical element is situated between the eye and the added-image source at a location that is optically conjugate to the entrance pupil and at which the active-optical element forms an intermediate image of the light pattern from the added-image source.
  • the active-optical element has variable optical power and is addressable to change its optical power to produce a corresponding change in perceived distance at which the intermediate image is formed, as an added image to the real-world scene, relative to the eye.
  • An exemplary added-image source is a micro-display comprising a 2-D array of light-producing pixels.
  • The pixels, when appropriately energized, produce a light pattern destined to become the virtual object added to the real-world scene.
  • the active-optical element is a refractive optical element, such as a lens that, when addressed, exhibits change in optical power or a change in refractive index.
  • An effective type of refractive optical element is a so-called “liquid lens” that operates according to the “electrowetting” effect, wherein the lens, when addressed by application of a respective electrical voltage (e.g., an AC voltage), exhibits a change in shape sufficient to effect a corresponding change in optical power.
  • Another type of refractive optical element is a liquid-crystal lens that is addressed by application of a voltage causing the liquid-crystal material to exhibit a corresponding change in refractive index.
  • the refractive active-optical element is situated relative to the added-image source such that light from the added-image source is transmitted through the optical element.
  • A liquid lens, being refractive, allows the display to be compact and practical, including for head-mounted use, without compromising the required accommodation range. It also requires no moving mechanical parts to render focus cues and uses conventional micro-display and graphics hardware.
  • The active-optical element alternatively can be a reflective optical element such as an adaptive-optics mirror, a deformable membrane mirror, a micro-mirror array, or the like.
  • the reflective active-optical element desirably is situated relative to the added-image source such that light from the added-image source is reflected from the optical element. As the reflective optical element receives an appropriate address, it changes its reflective-surface profile sufficiently to change its optical power as required or desired.
  • a refractive active-optical element is desirably associated with an objective lens that provides most of the optical power.
  • the objective lens typically operates at a fixed optical power, but the optical power can be adjustable.
  • the objective lens desirably is located adjacent the active-optical element on the same optical axis. Desirably, this optical axis intersects the optical pathway.
  • the added-image source also can be located on this optical axis.
  • a beam-splitter is situated in the optical pathway to receive light of the intermediate image from the active-optical element along the optical axis that intersects the optical pathway at the beam-splitter.
  • a mirror can be located on the axis on a second side of the beam-splitter to reflect light back to the beam-splitter that has passed through the beam-splitter from the active-optical element.
  • This mirror desirably is a condensing mirror, and can be spherical or non-spherical. If the mirror has a center of curvature and a focal plane, then the active-optical element can be situated at the center of curvature to produce a conjugate exit pupil through the beam-splitter.
  • the intermediate image is correspondingly moved along the optical pathway relative to the focal plane to produce a corresponding change in distance of the added image relative to the eye.
  • the distance at which the added image is formed can serve as an accommodation cue for the person with respect to the intermediate image.
  • a “stereoscopic” display is a display configured for use by both eyes of a user, and to display a scene having perceived depth as well as length and width.
  • “Accommodation” is an action by an eye to focus, in which the eye changes the shape of its crystalline lens as required to “see” objects sharply at different distances from the eye.
  • “Convergence” is an action by the eyes to rotate in their sockets in a coordinated manner to cause their respective visual axes to intersect at or on an object at a particular distance in 3-D space.
  • An “accommodation cue” is a visual stimulus (e.g., blurred image) that is perceived by a viewer to represent an abnormal accommodation condition and that, when so perceived, urges the eyes to correct the accommodation condition by making a corresponding accommodation change.
  • a “convergence cue” is a visual stimulus (e.g. binocular disparity, i.e., slightly shifted image features in a stereoscopic image pair) that is perceived by a viewer to represent an abnormal convergence condition and that, when so perceived, urges the eyes to correct the convergence condition by making a corresponding convergence change.
  • A “retinal blur cue” is a visual stimulus (e.g., a blurred image) that is perceived by a viewer to represent an out-of-focus condition and that, when so perceived, provides the eyes information for depth judgment and may urge the eyes to correct the accommodation condition by making a corresponding change. (Note that the eyes do not necessarily make an accommodation change; in many cases the retinal blur cue provides a sense of how far the apparently blurred object is from in-focus objects.)
  • a combination of an accommodation cue and a retinal blur cue provides a “focus cue” used by a person's eyes and brain to sense and establish good focus of respective objects at different distances from the eyes, thereby providing good depth perception and visual acuity.
  • An “addressable” parameter is a parameter that is controlled or changed by input of data and/or command(s). Addressing the parameter can be manual (performed by a person using a “user interface”) or performed by machine (e.g., a computer or electronic controller). Addressable also applies to the one or more operating modes of the subject displays. Upon addressing a desired mode, one or more operating parameters of the mode are also addressable.
  • an “accommodation cue” is a stimulus (usually an image) that stimulates the eye(s) to change or adjust its or their accommodation distance.
  • a “see-through” display allows a user to receive light from the real world, situated outside the display, wherein the light passes through the display to the user's eyes. Meanwhile, the user also receives light corresponding to one or more virtual objects rendered by the display and superimposed by the display on the image of the real world.
  • a “virtual object” is not an actual object in the real world but rather is in the form of an image artificially produced by the display and superimposed on the perceived image of the real world.
  • the virtual object may be perceived by the eyes as being an actual real-world object, but it normally does not have a co-existing material counterpart, in contrast to a real object.
  • An “added-image source” is any of various 2-D devices that are addressable to produce a light pattern corresponding to at least one virtual object superimposed by the display on the real-world view, as perceived by the user of the display.
  • the added-image source is a “micro-display” comprising an X-Y array of multiple light-producing pixels that, when addressed, collectively produce a light pattern.
  • Other candidate added-image sources include, but are not limited to, digital micro-mirror devices (DMDs) and ferroelectric liquid-crystal-on-silicon (FLCOS) devices.
  • the displays address focal distances in at least two possible operational modes.
  • One mode involves a single but variable-distance focal plane, and the other mode involves multiple focal planes at respective distances.
  • the latter mode addresses the active-optical element and a 2-D virtual-image source in a time-sequential manner.
  • the presenting of multiple full-color 2D images by a subject display from a 2-D added-image source in a time-sequential, image-by-image manner substantially reduces the address speed (from MHz to approximately 100 Hz) required for addressing all the pixels and the active-optical element(s).
  • As the response time of the active-optical element is decreased (e.g., from about 75 ms to less than 10 ms), the efficiency of the display is correspondingly increased.
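  • By way of a rough, illustrative comparison (using the 800×600, 85-Hz micro-display figures quoted in Example 1 below and two focal planes; the numbers are not a specification of the displays), the difference between pixel-sequential and image-sequential addressing can be sketched as follows:

    # Back-of-the-envelope comparison of addressing rates (illustrative only).
    pixels = 800 * 600          # pixels per frame of the 2-D added-image source
    frame_rate = 85             # Hz, micro-display refresh rate
    focal_planes = 2            # number of time-multiplexed focal planes

    # Pixel-sequential rendering (as in a retinal scanning display): every
    # pixel of every focal-plane image must be addressed individually.
    pixel_sequential_rate = pixels * frame_rate * focal_planes   # tens of MHz

    # Image-sequential rendering (as in the subject displays): the active-
    # optical element and the added-image source are addressed once per
    # focal-plane image, i.e., once per sub-frame.
    image_sequential_rate = frame_rate * focal_planes            # 170 Hz

    print(pixel_sequential_rate, image_sequential_rate)
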
  • FIG. 1 is a schematic diagram of a display according to a first representative embodiment.
  • The depicted display can be used as either a monocular or a binocular display, the latter requiring a second assembly, similar to the one shown, for the user's other eye (not shown).
  • FIGS. 2( a )- 2 ( d ) depict respective binocular viewing situations, including real-world ( FIG. 2( a )), use of a conventional stereoscopic display ( FIG. 2( b )), use of the embodiment for near convergence and accommodation ( FIG. 2( c )), and use of the embodiment for far convergence and accommodation ( FIG. 2( d )).
  • FIG. 2( e ) is a perspective depiction of operation in a multi-focal-plane mode. In this example, there are two selectable focal planes.
  • FIG. 3 is an unfolded optical diagram of the display of FIG. 1 .
  • FIG. 4( a ) is a plot of the optical power of the liquid lens used in Example 1, as a function of applied voltages.
  • FIG. 4( b ) is a plot of the accommodation cue produced by the display of Example 1, as a function of the voltage applied to the liquid lens.
  • FIGS. 5( a )- 5 ( c ) are respective images captured by a camcorder fitted to a display operating in the variable-single-focus-plane mode, showing the change in focus of a virtual torus achieved by changing the voltage applied to the liquid lens.
  • FIGS. 6( a )- 6 ( d ) are respective images of a simple mixed-reality application of a display operating in the variable-single-focus-plane mode.
  • Sharp images of the COKE can (virtual object) and coffee cup (real world) were obtained whenever the accommodation cue was matched to actual distance (rendered “depth” of the can is 40 cm in FIGS. 6( a ) and 6 ( b ) and 100 cm in FIGS. 6( c ) and 6 ( d )), and the camera obtaining the images was focused at 40 cm in FIGS. 6( a ) and 6 ( d ) and at 100 cm in FIGS. 6( b ) and 6 ( c ).
  • FIGS. 7( a )- 7 ( b ) are plots of a square-wave signal for driving the liquid lens of a display operating in the multi-focal-plane mode ( FIG. 7( a )) and the resulting rendering of the virtual object ( FIG. 7( b )).
  • the liquid lens is fast-switched between two selected driving voltages as separate image frames are displayed sequentially in a synchronous manner.
  • FIG. 8 is a plot of the time response of two liquid lenses.
  • FIG. 9( a ) is a schematic optical diagram of a display according to the second representative embodiment.
  • FIG. 9( b ) is a plot of the focus cue (z) as a function of voltage (U) applied to the liquid lens of the second representative embodiment.
  • FIG. 10( a ) is a time plot of an exemplary square wave of voltage applied to the liquid lens in the second representative embodiment, with fast switching between 49 and 37 V rms so as to time-multiplex the focal planes at 1D and 6D, respectively, in the second representative embodiment.
  • FIG. 10( b ) is a time plot of an exemplary rendering and display of images (Frame I and Frame II) of an object (torus) synchronously with energization of the liquid lens in the second representative embodiment.
  • the accompanying Frame I shows the superposition of a sphere and a mask for a torus in front of the sphere.
  • Frame II is a full image of the torus, with the sphere masked out.
  • FIG. 10( c ) is a time plot of a square wave, synchronous with energization of the liquid lens, including respective blank frames per cycle.
  • FIGS. 11( a ) and 11 ( b ) depict exemplary results of the display of the second representative embodiment operating at 37.5 Hz in the multi-focal-plane mode, according to the lens-driving scheme of FIGS. 10( a )- 10 ( b ).
  • In FIG. 11(a), when the camera was focused at the bar target at 6D, the torus (rendered at 6D) appears in focus while the sphere is blurred.
  • FIG. 11( b ) shows an image in which the camera was focused on the sphere at 1D, causing the sphere to appear in substantial focus.
  • FIGS. 11( c ) and 11 ( d ) show operation of the display of the second representative embodiment according to the rendering scheme of FIG. 10( c ), producing better focus cues.
  • FIG. 12 is a control diagram of a variable-focus gaze-contingent display including real-time POG (point of gaze) tracking and DOF (depth of focus) rendering, in the third representative embodiment operating in the single-variable-focal-plane mode.
  • POG point of gaze
  • DOF depth of focus
  • FIG. 13 is a schematic diagram of the eye-tracking as used in the third representative embodiment, wherein a pair of monocular trackers was used to triangulate the convergence point using respective lines of sight of a user's eyes.
  • FIGS. 14( a )- 14 ( f ) are example results obtained with the third representative embodiment configured as a VF-GCD (variable-focus gaze-contingent display).
  • FIG. 14( a ) is a rendered image of a virtual scene (rabbits) obtained using a standard pin-hole camera.
  • FIG. 14( b ) is a virtual image post-processed by applying a blur filter.
  • FIGS. 14( c ) and 14 ( e ) are degree-of-blur maps of the virtual scene with the eye focused at 3D and 1D, respectively.
  • FIGS. 14( d ) and 14 ( f ) are final rendered images of the 3-D scene with corresponding focus cues when the eye is focused at 3D and 1D, respectively.
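  • The degree-of-blur maps of FIGS. 14(c) and 14(e) express, per pixel, how defocused a scene point appears when the eye is focused at a given dioptric distance. The specification's rendering pipeline is not reproduced in this excerpt; the sketch below uses only the standard dioptric-defocus approximation (blur angle ≈ pupil diameter × defocus in diopters), and the pupil size and depth values are illustrative assumptions.

    import numpy as np

    def blur_diameter_arcmin(pixel_depth_d, focus_depth_d, pupil_mm=3.0):
        """Approximate angular diameter (arcminutes) of the retinal blur circle
        for a point at pixel_depth_d diopters when the eye is accommodated at
        focus_depth_d diopters."""
        defocus = np.abs(pixel_depth_d - focus_depth_d)   # diopters
        blur_rad = (pupil_mm * 1e-3) * defocus            # radians
        return np.degrees(blur_rad) * 60.0                # arcminutes

    # Degree-of-blur map for a small depth map (in diopters), eye focused at 3D:
    depth_map = np.array([[3.0, 2.0], [1.0, 0.5]])
    print(blur_diameter_arcmin(depth_map, focus_depth_d=3.0))
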
  • FIGS. 15( a )- 15 ( d ) are example results obtained with the third representative embodiment configured as a VC-GCD.
  • FIG. 15( a ) is a plot of eye-tracked convergence distances versus time.
  • FIG. 15( b ) is a real-time rendering of focus cues while tracking the convergence distance.
  • FIGS. 15( c ) and 15 ( d ) are optical see-through images of the VC-GCD captured with a camera, placed at the eye-pupil position, focused at 3D and 1D, respectively, while the optical power of the liquid lens was updated accordingly to match the focal distance of the display with the convergence distance.
  • FIG. 16 is a schematic diagram of a depth-fused display operating in the multi-focal-plane mode, as described in the fourth representative embodiment. Pixels on the front (A) and back (B) focal planes are located at z1 and z2, respectively, from the eye, and the fused pixel (C) is located at z (z2 ≤ z ≤ z1). All distances are in dioptric units.
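  • The fusing depicted in FIG. 16 divides a pixel's luminance between the front and back focal planes so that the fused pixel is perceived near an intermediate depth z. The non-linear fusing function of Eq. (11) is not reproduced in this excerpt; the sketch below shows only the simple linear depth-weighted fusing (the "linear filter" of FIG. 19) as an assumed illustration.

    def dfd_weights(z, z1, z2):
        """Linear depth-weighted fusing for a dual-plane depth-fused display.
        z, z1 (front plane), and z2 (back plane) are in diopters, z2 <= z <= z1.
        Returns (w_front, w_back); the weights sum to 1, and the two planes,
        viewed superimposed, yield a pixel perceived near depth z."""
        w_front = (z - z2) / (z1 - z2)
        return w_front, 1.0 - w_front

    # Fuse a pixel at 2.2 D between focal planes at 3 D (front) and 1 D (back):
    print(dfd_weights(2.2, z1=3.0, z2=1.0))   # (0.6, 0.4)
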
  • FIG. 17( a ) is a plot of modulation transfer functions (MTF) of a depth-fused display (operating in the multi-focal-plane mode as described in the fourth representative embodiment) as a function of dioptric spacings of 0.2D, 0.4D, 0.6D, 0.8D, and 1.0D.
  • MTF of an ideal viewing condition is plotted as a dashed line.
  • Also included are plots of defocused MTFs (+0.3D) and (−0.3D).
  • the sizes of the images are proportional to the relative sizes as viewed on the retina.
  • FIG. 19 provides plots of simulated filter curves of accommodation cue versus depth, obtained with the fourth representative embodiment.
  • In FIG. 19, z1 = 3D and z6 = 0D.
  • FIGS. 20( a )- 20 ( d ) show simulated retinal images, obtained as described in the fourth representative embodiment, of a 3-D scene through a six-focal-plane DFD display with depth-weighted non-linear fusing functions as given in Eq. (11), as well as the box filter ( FIG. 20( b )), linear filter ( FIG. 20( c )), and non-linear filter ( FIG. 20( d )) shown in FIG. 19.
  • FIG. 20( a ) is a depth map of the scene rendered by shaders.
  • FIGS. 21( a )- 21 ( g ) are comparative plots of MTFs in a dual-focal-plane DFD display using linear and non-linear depth-weighted fusing functions, respectively.
  • FIG. 22 is a schematic diagram of the experimental setup used in the depth-judgment subjective evaluations.
  • FIG. 23 is a bar graph of average error rate and subjective ranking on depth perception by all subjects under the viewing condition without presenting real reference targets (case A), as described in the subjective evaluations.
  • FIG. 24 is a plot of mean perceived depths among ten subjects as a function of accommodation cues rendered by the display operating in the variable-single-focal-plane mode, as described in the subjective evaluations.
  • FIG. 25 is a plot of averaged rankings on depth perception when the real target reference was not presented (solid bar) and when the real target reference was presented (hatched bar), as described in the subjective evaluations.
  • FIG. 26 is a plot of objective measurements of the accommodative responses to the accommodation cues presented by the see-through display, as described in the subjective evaluations.
  • FIG. 27 is a schematic diagram showing the first representative embodiment configured for use as a head-mounted display.
  • FIG. 28 is a schematic diagram of the first representative embodiment including driving electronics, controller, and user interface.
  • FIG. 29 is similar to FIG. 28 , but depicting a binocular display.
  • the various embodiments of displays address multiple focal planes in an optical see-through display.
  • a particularly desirable display configuration is head-mountable; however, head-mountability is not a mandatory feature.
  • contemplated as being within the scope of the invention are displays relative to which a viewer simply places his or her head or at least his or her eyes.
  • the displays include binocular (intended and configured for use with both eyes) as well as monocular displays (intended and configured for use with one eye).
  • Each of the various embodiments of displays described herein comprises an active-optical element that can change its focal length by application of an appropriate electrical stimulus (e.g., voltage) or command.
  • An active-optical element can be refractive (e.g., a lens) or reflective (e.g., a mirror).
  • a practical active-optical element in this regard is a so-called “liquid lens.”
  • a liquid lens operates according to the electrowetting phenomenon, and can exhibit a wide range of optical power.
  • Electrowetting is exemplified by placement of a small volume (e.g., a drop) of water on an electrically conductive substrate, wherein the water is covered by a thin layer of an electrical insulator.
  • a voltage applied to the substrate modifies the contact angle of the liquid drop relative to the substrate.
  • Currently available liquid lenses actually comprise two liquids having the same density. One liquid is an electrical insulator while the other liquid (water) is electrically conductive. The liquids are not miscible with each other but contact each other at a liquid-liquid interface.
  • Changing the applied voltage causes a corresponding change in curvature of the liquid-liquid interface, which in turn changes the focal length of the lens.
  • One commercial source of liquid lenses is Varioptic, Inc., Lyon, France.
  • The respective liquid lens exhibits an optical power ranging from −5 to +20 diopters (−5D to +20D) upon application of an AC voltage ranging from 32 Vrms to 60 Vrms, respectively.
  • Such a lens is capable of dynamically controlling the focal distance of a light pattern produced by a 2-D micro-display from infinity to as close as the near point of the eye.
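  • As a rough illustration of how such a lens might be addressed in software, the sketch below maps a drive voltage to optical power by linear interpolation between the end points quoted above. The real voltage-power relation is nonlinear (see FIG. 4(a)); the linearity here is purely an assumption for illustration.

    def liquid_lens_power(voltage_rms, v_min=32.0, v_max=60.0,
                          power_min=-5.0, power_max=20.0):
        """Placeholder mapping from drive voltage (Vrms) to optical power
        (diopters), linearly interpolated between the quoted end points."""
        v = min(max(voltage_rms, v_min), v_max)      # clamp to the valid range
        frac = (v - v_min) / (v_max - v_min)
        return power_min + frac * (power_max - power_min)

    print(liquid_lens_power(32.0), liquid_lens_power(60.0))   # -5.0 20.0 diopters
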
  • A representative embodiment of a stereoscopic display 10 is shown in FIG. 1, which depicts half of a binocular display.
  • the depicted display 10 is used with one eye while the other half (not shown) is used with the viewer's other eye.
  • The two halves are normally configured as mirror images of each other.
  • the display 10 is configured as an optical see-through (OST) head-mounted display (HMD) having multiple addressable focal planes.
  • OST optical see-through
  • HMD head-mounted display
  • See-through means the user sees through the display to the real world beyond the display. Superimposed on the image of the real world, as seen through the display, is one or more virtual objects formed and placed by the display.
  • the display 10 comprises a 2-D micro-display 12 (termed herein an “added-image source”), a focusing lens 14 , a beam-splitter (BS) 16 , and a condensing (e.g., concave spherical) mirror 18 .
  • the added-image source 12 generates a light pattern intended to be added, as an image, to the view of the “real world” being perceived by a user wearing or otherwise using the display 10 .
  • FIGS. 2( a )- 2 ( d ) depict viewing using this embodiment.
  • FIG. 2( a ) depicts normal viewing of the real world
  • FIG. 2( b ) depicts viewing using a conventional stereoscopic display
  • FIGS. 2( c ) and 2 ( d ) depict viewing using this embodiment.
  • two objects configured as boxes located near (Box A) and far (Box B) are shown.
  • In the real-world viewing situation ( FIG. 2( a )), the eyes alternately adjust focus between near and far distances while natural focus cues are maintained.
  • “distance” is outward along the optical axis of the display, as measured from the exit pupil of the eye.
  • For near convergence and accommodation ( FIG. 2( c )), the display's image plane is moved to the near distance accordingly, thereby rendering Box A in focus and rendering Box B with appropriate blur.
  • For far convergence and accommodation ( FIG. 2( d )), the image plane is translated to the far distance, thereby rendering Box B in focus and rendering Box A with appropriate blur. Therefore, the retinal images shown in the insets of FIGS. 2( c ) and 2 ( d ) simulate those of the real-world situation by concurrently adjusting the focal distance of the display to match the user's convergence distance and rendering retinal blur cues in the scene according to the current focal status of the eyes.
  • The focusing lens 14 is drawn as a singlet in FIG. 1, but it actually comprises, in this embodiment, an “accommodation lens” (i.e., the liquid lens) 14 a with variable optical power φA, and an objective lens 14 b having a constant optical power φo.
  • the two lenses 14 a, 14 b form an intermediate image 20 of the light pattern produced by the added-image source 12 on the left side of the mirror 18 .
  • the objective lens provides most of the optical power and aberration control for forming this intermediate image.
  • the liquid lens 14 a is optically conjugate to the entrance pupil of the eye 15 , which allows accommodative changes made by the eye 15 to be adaptively compensated by optical-power changes of the liquid lens.
  • the mirror 18 relays the intermediate image 20 toward the viewer's eye through the beam-splitter 16 .
  • Because the liquid lens 14 a is the limiting aperture of the display optics, it desirably is placed at the center of curvature (OSM) of the mirror 18 so that a conjugate exit pupil is formed through the beam-splitter 16.
  • As the accommodation lens 14 a changes its optical power from high (I) to low (II), the intermediate image 20 produced by the accommodation lens is displaced toward (I′) or away from (II′) the focal plane (fSM) of the mirror 18, respectively.
  • Correspondingly, the added image is formed either far from (I″) or close to (II″) the eye 15, or in between. Since the liquid lens 14 a is located optically conjugate to the entrance pupil, any change in power produced by the liquid lens does not change the apparent field of view.
  • the two lenses 14 a, 14 b together form an intermediate image of the light pattern produced by the added-image source 12 , and the mirror 18 relays and directs the intermediate image toward the viewer's eye via the beam-splitter 16 .
  • The mirror 18 is configured to ensure a conjugate exit pupil is formed at the eye of a person using the display 10. By placing the eye at the conjugate pupil position, the viewer sees both the image of the light pattern produced by the added-image source 12 and a view of the real world.
  • Although the mirror 18 in this embodiment is spherically concave, it will be understood that it alternatively could be aspherically concave.
  • In some embodiments, the mirror 18 can be omitted.
  • the main benefit of the mirror is its ability to fold the optical pathway and provide a compact optical system in the display. In certain situations such compactness may not be necessary.
  • the accommodation lens 14 a is a liquid lens in this embodiment, which is an example of a refractive active-optical element. It will be understood that any of several other types of refractive active-optical elements can alternatively be used, such as but not limited to a liquid-crystal lens. Further alternatively, the accommodation lens can be a reflective active-optical element, such as an actively deformable mirror. In other words, any of various optical elements can be used that have the capability of changing their focal length upon being addressed (i.e., upon command).
  • The accommodation cue, d, of the display 10 (i.e., the distance from the eye 15 to the image plane of the virtual object produced by the added-image source 12) is determined by the parameters defined below:
  • t is the axial separation between the objective lens 14 b and the accommodation lens 14 a; the object distance is measured axially from the 2-D added-image source 12 to the focusing lens 14; and R is the radius of curvature of the mirror 18. All distances follow the sign convention used in optical design.
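  • The closed-form expression for d appears as an equation image in the original document and is not reproduced in this excerpt. As an illustration only, the first-order vergence trace below follows the unfolded layout of FIG. 3 (source, objective lens, liquid lens at the mirror's center of curvature, concave mirror, conjugate exit pupil) and shows qualitatively how raising the liquid-lens power pushes the added image farther from the eye. Thin elements, an ideal beam-splitter fold, and the spacings s and t are assumptions, not values from the specification.

    def propagate(V, z):
        """Propagate a vergence V (diopters) through a distance z (meters) in air."""
        return V / (1.0 - z * V)

    def accommodation_cue(phi_A, phi_o=1 / 0.018, t=0.010, s=0.026, R=0.070):
        """Approximate accommodation cue (diopters) of the unfolded display.
        phi_A: liquid-lens power; phi_o: objective power (the 18-mm objective of
        Example 1); t: objective-to-liquid-lens spacing; s: source-to-objective
        spacing; R: mirror radius of curvature. Values near 0 correspond to an
        image near optical infinity; larger values mean nearer images."""
        V = -1.0 / s          # light diverging from the 2-D added-image source
        V += phi_o            # objective lens
        V = propagate(V, t)
        V += phi_A            # liquid (accommodation) lens, conjugate to the pupil
        V = propagate(V, R)   # liquid lens sits at the mirror's center of curvature
        V += 2.0 / R          # concave mirror of radius R has power 2/R
        V = propagate(V, R)   # mirror to the conjugate exit pupil (the eye)
        return -V             # diopters; 1/value is the cue distance in meters

    for phi_A in (0.0, 4.0, 8.0):
        print(phi_A, round(accommodation_cue(phi_A), 2))   # cue recedes as phi_A rises
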
  • This display 10 has multiple addressable focal planes for improved depth perceptions.
  • the liquid lens 14 a or other refractive active-optical element provides an addressable accommodation cue that ranges from infinity to as close as the near-point of the eye.
  • the transmissive nature of the liquid lens 14 a or other refractive active-optical element allows for a compact and practical display that has substantially no moving mechanical parts and that does not compromise the accommodation range.
  • FIG. 3 shows the unfolded optical path of the schematic diagram in FIG. 1 .
  • Focus cues are addressable with this embodiment in at least one of two modes.
  • One mode is a variable-single-focal-plane mode, and the other is a time-multiplexed multi-focal-plane mode.
  • In the variable-single-focal-plane mode, the accommodation cue of a displayed virtual object is continuously addressed from far to near distances, and vice versa.
  • the accommodation cue provided by a virtual object can be arbitrarily manipulated in a viewed 3-D world.
  • In the multi-focal-plane mode, the active-optical element, operating synchronously with the graphics hardware and software driving the added-image source, is driven time-sequentially to render both accommodation and retinal blur cues for virtual objects at different depths.
  • use in this embodiment of the 2-D added-image source to render multiple full-color 2-D images on a frame-sequential basis substantially eliminates any requirement for high addressing speeds.
  • This embodiment is head-mountable, as shown, for example, in FIG. 27 , in which the dashed line indicates a housing and head-band for the display.
  • FIG. 1 depicts a monocular display, used with one of a person's eyes.
  • the monocular display is also shown in FIG. 28 , which also depicts driving electronics connected to the “microdisplay” (added-image source), and a controller connected to the active-optical element.
  • driving electronics connected to the “microdisplay” (added-image source)
  • controller connected to the active-optical element.
  • A corresponding binocular display is shown in FIG. 29.
  • A monocular display was constructed, in which the accommodation lens 14 a was a liquid lens (“Arctic 320,” manufactured by Varioptic, Inc., Lyon, France) having a variable optical power from −5 to +20 diopters upon application of an AC voltage from 32 Vrms to 60 Vrms, respectively.
  • The liquid lens 14 a, having a clear aperture of 3 mm, was coupled to an objective lens 14 b having an 18-mm focal length.
  • The source of images to be placed in a viewed portion of the real world was an organic-LED, full-color, 2-D added-image source (“micro-display,” 0.59 inches square) having 800×600 pixels and a refresh rate of up to 85 Hz (manufactured by eMagin, Inc., Bellevue, Wash.).
  • the mirror 18 was spherically concave, with a 70-mm radius of curvature and a 35-mm clear aperture. Based on these parametric combinations, the display had an exit-pupil diameter of 3 mm, an eye-relief of 20 mm, a diagonal field of view (FOV) of about 28°, and an angular resolution of 1.7 arcmins.
  • the 28° FOV was derived by accounting for the chief-ray angle in the image space.
  • FIG. 4( a ) is an exemplary plot of the optical power of the liquid lens 14 a of this example as a function of applied voltages.
  • The curve was prepared by entering specifications of the liquid lens 14 a, under different driving voltages, into the optical-design software CODE V (http://www.opticalres.com). Two examples are shown in FIG. 4( a ).
  • At the lower of the two example voltages, the liquid lens 14 a produced 0 diopters of optical power, as indicated by the planarity of the liquid interface (lower inset).
  • At 49 V rms the liquid lens 14 a produced 10.5 diopters of optical power, as indicated by the strongly curved liquid interface (upper inset).
  • FIG. 4( b ) is a plot of the accommodation cue produced by the display as a function of the voltage applied to the liquid lens 14 a.
  • driving the liquid lens at 38 V rms and 49 V rms produced accommodation cues at 6 diopters and 1 diopter, respectively.
  • Changing the applied voltage from 32 Vrms to 51 Vrms changed the accommodation cue of the display from 12.5 cm (8 diopters) to infinity (0 diopters), respectively, thereby covering almost the entire accommodative range of the human visual system.
  • Thus, addressing the accommodation cue produced by the display is achieved by addressing the liquid lens 14 a; i.e., addressing the optical power of the liquid lens 14 a sets the corresponding accommodation cue produced by the display.
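  • In software, the calibration of FIG. 4(b) could be used to pick a drive voltage for a requested accommodation cue. The table below uses only the values quoted above (0 D at 51 Vrms, 1 D at 49 Vrms, 6 D at 38 Vrms, 8 D at 32 Vrms); the piecewise-linear interpolation between them is an illustrative assumption, since the measured curve is not linear.

    # Hypothetical calibration table: (accommodation cue in diopters, drive Vrms).
    CALIBRATION = [(0.0, 51.0), (1.0, 49.0), (6.0, 38.0), (8.0, 32.0)]

    def voltage_for_cue(cue_diopters):
        """Drive voltage (Vrms) placing the virtual image at the requested
        accommodation cue, by piecewise-linear interpolation of the table."""
        pts = sorted(CALIBRATION)
        if cue_diopters <= pts[0][0]:
            return pts[0][1]
        if cue_diopters >= pts[-1][0]:
            return pts[-1][1]
        for (c0, v0), (c1, v1) in zip(pts, pts[1:]):
            if c0 <= cue_diopters <= c1:
                f = (cue_diopters - c0) / (c1 - c0)
                return v0 + f * (v1 - v0)

    print(voltage_for_cue(1.0))   # 49.0 Vrms -> image at 1 m
    print(voltage_for_cue(6.0))   # 38.0 Vrms -> image at about 16.7 cm
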
  • the display 10 can be operated in at least one of two modes: variable-single-focal-plane mode and time-multiplexed multi-focal-plane mode.
  • the variable single-focal-plane mode meets specific application needs, for instance, matching the accommodation cue of virtual and real objects in mixed and augmented realities.
  • In the multi-focal-plane mode, the liquid lens 14 a is fast-switched among multiple discrete driving voltages to provide multiple respective focal distances, such as I″ and II″ in FIG. 1, in a time-sequential manner. Synchronized with this switching of the focal plane, the electronics used for driving the 2-D added-image source 12 are updated as required to render the added virtual object(s) at distances corresponding to the rendered focus cues of the display 10. The faster the response speed of the liquid lens 14 a and the higher the refresh rate of the added-image source 12, the more focal planes can be presented to the viewer at a substantially flicker-free rate.
  • FIG. 2( e ) is a perspective view of the display of this embodiment used in the multi-focal-plane mode, more specifically a dual-focal-plane mode.
  • the liquid lens is switched between two discrete operating voltages to provide two focal planes FPI and FPII.
  • the eye perceives these two focal planes at respective distances z 1 and z 2 .
  • the added images are similar to those shown in the insets in FIGS. 10( a ) and 10 ( b ), discussed later below.
  • the dioptric spacing between adjacent focal planes and the overall range of accommodation cues can be controlled by changing the voltages applied to the liquid lens 14 a. Switching among various multi-focal-plane settings, or between the variable-single-focal-plane mode and the multi-focal-plane mode, does not require any hardware modifications. These distinctive capabilities provide a flexible management of focus cues suited for a variety of applications, which may involve focal planes spanning a wide depth range or dense focal planes within a relatively smaller depth range for better accuracy.
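  • The synchronization described above can be summarized schematically. The sketch below shows only the control flow; the lens-driver and display calls are placeholders rather than an actual hardware API, and the dual-plane voltages and timing follow the examples given elsewhere in this document.

    import time

    # Hypothetical dual-focal-plane schedule: (liquid-lens voltage in Vrms, frame id).
    FOCAL_PLANES = [
        (49.0, "far-plane frame"),    # e.g., focal plane rendered at 1 D
        (38.0, "near-plane frame"),   # e.g., focal plane rendered at 6 D
    ]

    def run_multifocal_cycle(set_lens_voltage, show_frame, sub_frame_period_s):
        """One time-multiplexed cycle: switch the liquid lens, then display the
        image frame rendered for that focal plane, keeping the two in lock-step."""
        for voltage, frame in FOCAL_PLANES:
            set_lens_voltage(voltage)        # address the active-optical element
            show_frame(frame)                # address the 2-D added-image source
            time.sleep(sub_frame_period_s)   # sub-frame: lens settling + refresh

    # Example: a 37.5-Hz dual-plane cycle uses two sub-frames of about 13.3 ms each.
    run_multifocal_cycle(lambda v: print("lens ->", v, "Vrms"),
                         lambda f: print("display ->", f),
                         sub_frame_period_s=1 / 75)
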
  • Certain embodiments are operable in a mode that is essentially a combination of both operating modes summarized above.
  • The variable-single-focal-plane mode allows for the dynamic rendering of accommodation cues, which may vary with the viewer's position of interest in the viewing volume. Operation in this mode usually requires some form of feedback and thus some form of feedback control.
  • the feedback control need not be automatic.
  • the feedback can be generated by a user using the display and responding to accommodation and/or convergence cues provided by the display and feeding back his responses using a user interface. Alternatively, the feedback can be produced using sensors producing data that are fed to a computer or processor controlling the display.
  • a user interface also typically requires a computer or processor to interpret commands from the interface and produce corresponding address commands for the active-optical element.
  • the added-image source 12 produces a light pattern corresponding to a desired image to be added, as a virtual object, to the real-world view being produced by the display 10 .
  • the voltage applied to the liquid lens 14 a is dynamically adjusted to focus the added image of the light pattern at different focal distances, from infinity to as close as the near point of the eye, in the real-world view.
  • This dynamic adjustment can be achieved using a “user interface,” which in this context is a device manipulated by a user to produce and input data and/or commands to the display.
  • An example command is the particular depth at which the user would like the added image placed in the real-world view.
  • the image of the light pattern produced by the added-image source 12 is thus contributed, at the desired depth, to the view of the “real” world being provided by the display 10 .
  • Another user interface is a 3-D eye-tracker, for example, that is capable of tracking the convergence point of the left and right eyes in 3-D space.
  • a hand-held device offers easy and robust control of slowly changing points of interest, but usually lacks the ability to respond to rapidly updating points of interest at a pace comparable to the speed of moderate eye movements.
  • An eye-tracker interface, which may be applicable to images of virtual objects graphically rendered with depth-of-field effects, enables synchronous action between the focus cues of the virtual images and the viewer's eye movements.
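  • As described for FIG. 13, such an eye-tracker triangulates the binocular convergence point from the two monocular lines of sight. A minimal geometric sketch of that triangulation is given below; the eye positions, gaze directions, and the midpoint-of-closest-approach treatment are illustrative assumptions rather than the tracker implementation of the specification.

    import numpy as np

    def convergence_point(left_eye, left_dir, right_eye, right_dir):
        """Estimate the 3-D convergence point as the midpoint of the shortest
        segment between the two gaze rays (real gaze rays rarely intersect)."""
        p1, d1 = np.asarray(left_eye, float), np.asarray(left_dir, float)
        p2, d2 = np.asarray(right_eye, float), np.asarray(right_dir, float)
        d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
        # Minimize |(p1 + t1*d1) - (p2 + t2*d2)| over the ray parameters t1, t2.
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        w = p1 - p2
        denom = a * c - b * b
        t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
        t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
        return ((p1 + t1 * d1) + (p2 + t2 * d2)) / 2.0

    # Example: eyes 64 mm apart, both verged on a point about 0.5 m straight ahead.
    point = convergence_point([-0.032, 0, 0], [0.032, 0, 0.5],
                              [0.032, 0, 0], [-0.032, 0, 0.5])
    print(point)   # approximately [0, 0, 0.5]
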
  • One example of such a user interface is a hand-held device, e.g., the “SpaceTraveler” (3DConnexion, Inc., Fremont, Calif.), for manipulating accommodation cues of the display in 3-D space.
  • variable-single-focal-plane mode meets specific application needs, such as substantially matching the accommodation cues of virtual and real objects in mixed and augmented realities being perceived by the user of the display.
  • the accommodation and/or focus cues can be pre-programmed, if desired, to animate the virtual object to move in 3-D space, as perceived by the user.
  • the added-image source 12 was addressed to produce an image of a torus and to place the image of the torus successively, at a constant rate of change, along the visual axis of the display at 16 cm, 33 cm, and 100 cm from the eye, or in reverse order. Meanwhile, the voltage applied to the liquid lens 14 a was changed synchronously with the rate of change of the distance of the virtual torus from the eye. By varying the voltage between 38 V rms and 49 V rms , the accommodation cue of the displayed torus image was varied correspondingly from 6 diopters to 1 diopter.
  • The digital camcorder captured the images shown in FIGS. 5( a )- 5 ( c ). Comparing these figures, the virtual torus in FIG. 5( a ) appears in focus only when the voltage applied to the liquid lens was 38 Vrms (note that the camcorder in FIG. 5( a ) was constantly focused at a distance of 16 cm, or 6 diopters). Similarly, the virtual torus in each of FIGS. 5( b ) and 5 ( c ) appears in focus only when the driving voltage was 45 Vrms and 49 Vrms, respectively. These images clearly demonstrate the change of accommodation cue provided by the virtual object.
  • FIGS. 6( a )- 6 ( d ) show a simple mixed-reality application in the variable-single-focal-plane mode.
  • the real scene is of two actual coffee mugs, one located 40 cm from the viewer and the other located 100 cm from the viewer (exit pupil).
  • the virtual image was of a COKE® can rendered at two different depths, 40 cm and 100 cm, respectively.
  • a digital camera placed at the exit pupil served as the “eye.”
  • the digital camera was focused on the mug at 40 cm while the liquid lens was driven (at 49 V rms ) to render the can at a matching depth of 40 cm. Whenever the accommodation cue was matched to actual distance, a sharp image of the can was perceived.
  • the digital camera was focused on the mug at 100 cm while the liquid lens was driven (at 49 V rms ) to render the can at a depth of 40 cm.
  • the resulting mismatch of accommodation cue to actual distance produced a blurred image of the can.
  • the camera was focused on the mug at 100 cm while the liquid lens was driven (at 46 V rms ) to render the can at a depth of 100 cm.
  • the resulting match of accommodation cue to actual distance yielded a sharp image of the can.
  • the camera was focused on the mug at 40 cm while the liquid lens was driven (at 46 V rms ) to render the can at a depth of 100 cm.
  • the resulting mismatch of accommodation cue to actual distance produced a blurred image of the can.
  • the virtual image of the COKE can appeared realistically (in good focus) with the two mugs at a near and far distance, respectively.
  • the focusing cue may be dynamically modified to match its physical distance to the user, yielding a realistic augmentation of a virtual object or scene with a real scene.
  • accurate depth perceptions are produced in an augmented reality application.
  • A series of focus cues can be pre-programmed to animate a virtual object in the real-world view so that it moves smoothly in three-dimensional space.
  • variable-single-focal-plane mode is a useful mode for many applications
  • the multi-focal-plane mode addresses needs for a true 3-D display, in which depth perceptions are not limited by a single or a variable focal plane that may need an eye tracker or the like to track a viewer's point of interest in a dynamic manner.
  • the multi-focal-plane mode can be used without the need for feedback or feedback control.
  • a display operating in the multi-focal-plane mode balances accuracy of depth perception, practicability for device implementation, and accessibility to computational resources and graphics-rendering techniques.
  • the liquid lens 14 a is rapidly switched among multiple selectable driving voltages to provide multiple respective focal distances, such as I′′ and II′′ in FIG. 1 , in a time-sequential manner.
  • the pattern produced by the added-image source 12 is updated (“refreshed”) as required to render respective virtual objects at distances approximately matched to the respective accommodation cues being provided by the display, as produced by the liquid lens 14 a.
  • The faster the response speed of the liquid lens 14 a and the higher the refresh rate of the added-image source 12, the greater the number of focal planes that can be presented per unit time.
  • the presentation rate of focal planes can be sufficiently fast to avoid flicker.
  • the dioptric spacing between adjacent focal planes and the overall range of accommodation cue can be controlled by changing the respective voltages applied to the liquid lens 14 a.
  • This distinctive capability enables the flexible management of accommodation cues as required by a variety of applications requiring either focal planes spanning a wide depth range or dense focal planes within a relatively smaller depth range for better accuracy.
  • Operating the display in the time-multiplexed multi-focal-plane mode is made possible, for example, by using the liquid lens 14 a as an active-optical element to control the accommodation cue.
  • the subject embodiments of the display 10 use a liquid lens 14 a (a refractive active-optical element), rather than a reflective DMM device.
  • Use of the liquid lens 14 a provides a compact and practical display without compromising the range of accommodation cues.
  • Instead of addressing each pixel individually by a laser-scanning mechanism as in the RSD technique, the subject embodiments use a 2-D added-image source 12 to generate and present high-resolution images (typically in full color) in a time-sequential, image-by-image manner to respective focal planes. Consequently, the subject embodiments do not require the very high addressing speed (at the MHz level) conventionally required to render images pixel-by-pixel. Rather, the addressing speeds of the added-image source 12 and of the active-optical element 14 a are substantially reduced, e.g., to the 100-Hz level. In contrast, the pixel-sequential rendering approach used in a conventional RSD system requires MHz operating speeds for both the DMM device and the mechanism for scanning multiple laser beams.
  • the driving signal of the liquid lens 14 a and an exemplary manner of driving the production of virtual objects are shown in FIGS. 7( a ) and 7 ( b ), respectively.
  • the liquid lens 14 a is fast-switched between two selected driving voltages, as shown in FIG. 7( a ).
  • the accommodation cue provided by the display 10 is consequently fast-switched between selected far and near distances.
  • far and near virtual objects are rendered on two or more separate image frames and displayed sequentially, as shown in FIG. 7( b ).
  • the two or more image frames can be separated from each other by one or more “blank” frames. If the switching rate is sufficiently rapid to eliminate “flicker,” the blank frames are not significantly perceived.
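  • A minimal sketch of the time-sequential schedule suggested by FIGS. 7(a)-7(b) is given below. The helper name build_schedule, the dictionary representation of each frame slot, and the optional blank frames are illustrative assumptions; the two drive voltages are the 49 Vrms/38 Vrms values used in Example 1 below.

        # Alternate far/near focal states, optionally separated by blank frames, while the
        # liquid-lens drive voltage is switched in step with the displayed frame.
        def build_schedule(n_cycles, insert_blanks=False):
            far_state  = {"voltage_Vrms": 49.0, "frame": "far-object image"}
            near_state = {"voltage_Vrms": 38.0, "frame": "near-object image"}
            blank      = {"voltage_Vrms": None, "frame": "blank"}
            schedule = []
            for _ in range(n_cycles):
                schedule.append(far_state)
                if insert_blanks:
                    schedule.append(blank)
                schedule.append(near_state)
                if insert_blanks:
                    schedule.append(blank)
            return schedule

        for step in build_schedule(n_cycles=2, insert_blanks=True):
            print(step)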
  • the added-image source 12 and graphics electronics driving it desirably have frame rates that are at least two-times higher than their regular counterparts.
  • the liquid lens 14 a desirably has a compatible response speed.
  • the maximally achievable frame rate, f N , of a display 10 operating in the multi-focal-plane mode is given by f N = f min /N, where N is the total number of focal planes and f min is the lowest response speed (in Hz) among the added-image source 12 , the active-optical element 14 a, and the electronics driving these components.
  • the waveforms in FIGS. 7( a )- 7 ( b ) reflect operation of all these elements at ideal speed.
  • the liquid lens 14 a (Varioptic “Arctic 320”) was driven by a square wave oscillating between 49 V rms and 38 V rms . Meanwhile, the accommodation cue provided by the display 10 was fast-switched between depths of 100 cm and 16 cm, respectively.
  • the period, T, of the driving signal was adjustable in the image-rendering program. Ideally, T should be set to match the response speed of the slowest component in the display 10 , which determines the frame rate of the display operating in the dual-focal-plane mode.
  • the control electronics driving the liquid lens 14 a allows for a high-speed operational mode, in which the driving voltage is updated every 600 μs to drive the liquid lens. The response speed of this liquid lens 14 a is shown in FIG.
  • the speed at which the liquid lens 14 a can be driven is the limiting factor regarding the speed of the display 10 .
  • the data curve indicated by circles shows a 9-ms rise time for the Arctic 314 to reach 90% of its maximum optical power.
  • the highest achievable frequency of the display operating in the dual-focal-plane mode would be 56 Hz if the liquid lens were the limiting factor of speed in the display.
  • This frame rate is almost at the flicker-free frequency of 60 Hz.
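  • The frame-rate budget can be checked numerically with the relation f N = f min /N noted above (a small sketch; converting the 9-ms rise time directly to a component speed is an approximation):

        def volumetric_frame_rate(f_min_hz, n_planes):
            """Highest volumetric frame rate when the slowest component runs at f_min_hz."""
            return f_min_hz / n_planes

        # Liquid-lens-limited case: ~9-ms rise time -> ~111-Hz component speed, 2 focal planes.
        print(round(volumetric_frame_rate(1.0 / 0.009, 2)))   # ~56 Hz
        # Graphics-limited case of Example 2: 75-Hz driving electronics, 2 focal planes.
        print(volumetric_frame_rate(75, 2))                    # 37.5 Hz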
  • a display 30 according to this embodiment and example comprised a faster liquid lens 34 a than used in the first embodiment.
  • the faster liquid lens 34 a was the “Arctic 314” manufactured by Varioptic, Inc.
  • This liquid lens 34 a had a response speed of about 9 ms, which allowed the frame rate of the display 30 (operating in dual-focal-plane mode) to be increased to 37.5 Hz.
  • the display 30 (only the respective portion, termed a “monocular” portion, for one eye is shown; a binocular display would include two monocular portions for stereoscopic viewing) also included a spherical concave mirror 38 , a 2-D added-image source 32 , and a beam-splitter (BS) 36 .
  • the liquid lens 34 a had a clear aperture of 2.5 mm rather than the 3-mm clear aperture of the liquid lens 14 a. To compensate for the reduced clear aperture, certain modifications were made. As shown in FIG. 9( a ), the liquid lens 34 a was offset from the center of the radius of curvature O of the mirror 38 by Δ; thus the exit pupil of the display 30 was magnified by
  • the focus cue is specified by the distance z from the virtual image to the exit pupil of the display 30 , given as:
  • the liquid lens 34 a had a variable optical power ranging from ⁇ 5 to +20 diopters by applying an AC voltage, ranging from 32 V rms to 60 V rms , respectively.
  • the other optical components (e.g., the beam-splitter 36 and singlet objective lens 34 b ) were as used in Example 1.
  • the axial distance t between the objective lens 34 b and the liquid lens 34 a was 6 mm
  • the offset Δ was 6 mm
  • the object distance ( ⁇ u) was 34 mm.
  • the display 30 exhibited a 24° diagonal field-of-view (FOV) with an exit pupil of 3 mm.
  • a comparison of the Arctic 314 and Arctic 320 lenses is shown in Table 2.
  • FIG. 9( b ) is a plot of the focus cue (z) as a function of the voltage U applied to the liquid lens (the focus cue was calculated per Eq. (3)).
  • the speed requirements of the liquid lens 34 a, of the 2-D added-image source 32 , and of the driving electronics (“graphics card”) were proportional to the number of focal planes.
  • this example operated at up to 37.5 Hz, which is half the 75-Hz frame rate of the driving electronics.
  • the dual focal planes can be positioned as far as 0 diopter or as close as 8 diopters to the viewer by applying respective voltages ranging between 51 V rms and 32 V rms , respectively, to the liquid lens 34 a.
  • two time-multiplexed focal planes were positioned at 1 diopter and 6 diopters with application of 49 V rms and 37 V rms , respectively, to the liquid lens 34 a.
  • the liquid lens 34 a was driven by a square wave, with a period T of fast-switching between 49 V rms and 37 V rms to temporally multiplex the focal planes at 1 diopter and 6 diopters, respectively.
  • two frames of images (I and II), corresponding to far and near objects, respectively, were rendered and displayed sequentially as shown in FIG. 10( b ).
  • Correct occlusion can be portrayed by creating a stencil mask for near objects rendered on frame II.
  • frame I in FIG. 10( b ) shows the superposition of a sphere and the mask for a torus in front of the sphere.
  • the duration t 0 of both the far- and near-frames is one-half of the period T.
  • the minimum value of t 0 was 13.3 ms, and the highest refresh rate of the display was 37.5 Hz to complete the rendering of both far and near focal states.
  • a depth-weighted blending algorithm can be used to improve the focus-cue accuracy for objects located between two adjacent focal planes.
  • FIGS. 11( a ) and 11 ( b ) show experimental results produced by the display operating at 37.5 Hz in the multi-focal-plane mode.
  • the targets at 6 diopters (large size) and 1 diopter (small size) were used as references for visualizing the focus cues rendered by the display.
  • the target at 3 diopters (medium size) helped to visualize the transition of focus cues from far to near distances and vice versa. To obtain the respective pictures shown in FIGS. 11( a )- 11 ( d ), a camera was mounted at the eye location shown in FIG. 9( a ).
  • in FIG. 11( a ), when the camera was focused on the bar target at 6 diopters, the torus (rendered at 6D) appears in focus while the sphere shows noticeable out-of-focus blurring.
  • FIG. 11( b ) demonstrates a situation in which the camera was focused on the sphere at 1 diopter. The sphere appears to be in focus while the torus is not in focus.
  • the virtual objects were animated in such a way that they both moved along the visual axis at a constant speed from either 6 diopters to 1 diopter, or vice versa.
  • the voltage applied to the liquid lens 34 a was adjusted accordingly such that the locations of the two focal planes always corresponded to the respective depths of the two objects.
  • FIGS. 11( c ) and 11 ( d ) show operation of the display at near and far focus, respectively, using the rendering scheme of FIG. 10( c ).
  • in FIGS. 11( c ) and 11 ( d ), the in-focus virtual objects (i.e., the torus and the sphere, respectively) and the out-of-focus objects (i.e., the sphere and the torus, respectively) were rendered using the scheme of FIG. 10( c ); the insets of FIGS. 11( c ) and 11 ( d ), showing the same area as in FIG. 11( a ), demonstrate improved focus cues.
  • the occlusion cue became more prominent than shown in FIGS. 11( a ) and 11 ( b ), with a sharper boundary between the near torus and far sphere.
  • brightness level may be correspondingly lower, as quantified by:
  • a faster liquid lens and/or added-image source and higher-speed driving electronics are beneficial for producing accurate focus cues at a substantially flicker-free rate.
  • the liquid lens can be driven in an overshoot manner with decreased time-to-depth-of-field in an auto-focusing imaging system.
  • Other active-optical technologies such as high-speed DMM and liquid-crystal lenses, could also be used in the time-multiplexed multi-focal-plane mode to reduce flicker.
  • a display operating in the time-multiplexed multi-focal-plane mode was produced and operated in this example.
  • the display was capable of rendering nearly correct focus cues and other depth cues such as occlusion and shading, and the focus cues were presentable within a wide range, from infinity to as close as 8 diopters.
  • This embodiment is directed to a display that is gaze-contingent and that is capable of rendering nearly correct focus cues in real-time for the attended region of interest.
  • the display addresses accommodation cues produced in the variable-single-focal-plane mode in synchrony with the graphical rendering of retinal blur cues and tracking of the convergence distance of the eye.
  • VF-GCD: variable-focus gaze-contingent display
  • This embodiment utilizes a display operating in the variable-single-focal-plane mode and provides integrated convergence tracking to provide accurate rendering of real-time focus cues.
  • the VF-GCD automatically tracks the viewer's current 3-D point-of-gaze (POG) and adjusts the focal plane of the display to match the viewer's current convergence distance in real-time.
  • a display operating in the variable-single-focal-plane mode with a user interface typically exhibits a feedback delay, produced by the user mentally processing the feedback information and using that information to respond to accommodation and/or convergence cues.
  • the VF-GCD renders the projected 2-D image of the 3-D scene onto moving image planes, thereby significantly improving the rendering efficiency as well as taking full advantage of commercially available graphics electronics for rendering focus cues.
  • This embodiment incorporates three principles for rendering nearly correct focus cues: addressable accommodation cues, convergence tracking, and real-time rendering of retinal blur cues. Reference is made again to FIGS. 2( a )- 2 ( d ), discussed above.
  • the VF-GCD forms a closed-loop system that can respond in real-time to user feedback in the form of convergent or divergent eye rotations. See FIG. 12 .
  • the convergence distance can be computed, so that the accommodation cue rendered by the display can be matched accordingly.
  • This tracking can be performed using an “eye-tracker” which obtains useful information from the subject's gaze.
  • the scene elements can be rendered with appropriately simulated DOF effects using the graphics electronics.
  • the combination of eye-tracking together with an addressable active-optical element and DOF rendering provides visual feedback to the viewer in the form of updated focus cues, thereby closing the system in a feedback sense.
  • the focal plane moves in three dimensions, matching with the convergence depth of the viewer.
  • the addressable accommodation cue is realized by an active-optical element having variable optical power.
  • the active-optical element should satisfy the following conditions: (1) It should provide a variable range of optical power that is compatible with the accommodative range of the human eye. (2) It should be optically conjugate to the entrance pupil of the viewer, making the display appear to have a fixed FOV that is independent of focus changes. (3) It should have a response speed that substantially matches the speed of rapid eye movements.
  • the VF-GCD computes changes in the viewer's convergence distance using a binocular eye-tracking system adapted from a pair of 2-D monocular eye-trackers.
  • current monocular eye-trackers utilize one or more of non-imaging-based tracking, image-based tracking, and model-based tracking methods.
  • among image-based tracking methods, dark-pupil tracking is generally regarded as the simplest and most robust.
  • a pair of monocular trackers was used to triangulate the convergence point using the lines of sight of both eyes, as shown in FIG. 13 .
  • the 2-D gaze points (x 1 ′, y 1 ′) and (x 2 ′, y 2 ′) for left (E 1 ) and right (E 2 ) eyes, respectively are determined in the local coordinate system of a calibration plane (bold grey line in FIG. 12 ) at an established distance z 0 from the eye in 3-D space.
  • the frame of reference of the 3-D space has its origin O xyz , located at the mid-point between the eyes.
  • the points (x i ′, y i ′) may be transformed into their world-space correspondences (x i , y i , z 0 ) so that the convergence point (x, y, z) is given by:
  • IPD is the inter-pupillary distance of the viewer.
  • the convergence distance z is updated for the display optics and the image-rendering system, such that the image plane is translated to the same depth z for the presentation of the correct accommodation cue.
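  • A minimal sketch of the convergence-point computation is given below. Because the closed-form expression referenced above is not reproduced here, the sketch simply places the eyes at (±IPD/2, 0, 0), maps each gaze point onto the calibration plane at depth z 0 , and takes the midpoint of closest approach of the two lines of sight; the function name convergence_point and the example numbers are illustrative only.

        import numpy as np

        def convergence_point(gaze_left, gaze_right, ipd=0.064, z0=1.0):
            """gaze_left/right: (x, y) on the calibration plane, in metres; returns (x, y, z)."""
            e1 = np.array([-ipd / 2.0, 0.0, 0.0])            # left-eye centre
            e2 = np.array([+ipd / 2.0, 0.0, 0.0])            # right-eye centre
            p1 = np.array([gaze_left[0],  gaze_left[1],  z0])
            p2 = np.array([gaze_right[0], gaze_right[1], z0])
            d1, d2 = p1 - e1, p2 - e2                        # line-of-sight directions
            r = e1 - e2
            a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
            d, e = d1 @ r, d2 @ r
            denom = a * c - b * b
            s = (b * e - c * d) / denom
            t = (a * e - b * d) / denom
            return (e1 + s * d1 + e2 + t * d2) / 2.0         # midpoint of closest approach

        # Example: symmetric gaze points imply a convergence point on the mid-sagittal axis.
        print(convergence_point((-0.01, 0.0), (0.01, 0.0)))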
  • the VF-GCD also desirably includes an image-rendering system capable of simulating real-time retinal blur effects, which is commonly referred to as “DOF rendering.”
  • Depth-of-field effects improve the photo-realistic appearance of a 3-D scene by simulating a thin-lens camera model with a finite aperture, thereby inducing a circle of confusion into the rendered image for virtual objects outside the focal plane.
  • Virtual scenes rendered with DOF effects provide a more realistic appearance of the scene than images rendered with the more typical pinhole-camera model and can potentially reduce visual artifacts.
  • Real-time DOF has particular relevance in the VF-GCD since the focal distance of the display changes following the convergence distance of the viewer. Maintaining the expected blurring cues is thus important to preventing depth confusion as the viewer browses objects at varying depths in the scene.
  • Graphically rendering DOF effects can be done in any of several ways that differ from one another significantly in their rendering accuracy and speed. For instance, ray-tracing and accumulation-buffer methods provide good visual results on rendered blur cues but are typically not feasible for real-time systems. Single-layer and multiple-layer post-processing methods tend to yield acceptable real-time performance with somewhat lesser visual accuracy. The latter methods are made computationally feasible due to the highly parallel nature of their algorithms; this feasibility is suitable for implementation on currently available high-performance graphics processing units (GPUs).
  • We used a single-layer post-processing DOF method. To illustrate this DOF algorithm, note the rabbits rendered in FIGS. 14( a )- 14 ( f ).
  • the final blended images are given in FIGS. 14( d ) and 14 ( f ) for the eyes converging at 3D and 1D, respectively.
  • a key component of the DOF algorithm is the computation of the DOB (depth of blur) map, which is used for weighted blending of the pin-hole and blurred images.
  • the DOB map is created by normalizing the depth values Z′, which are retrieved from the z-buffer for the image, with respect to the viewer's current convergence distance Z given by the binocular eye-tracker:
  • DOB = |(Z′ − Z)/(Z near − Z far )|,   Z far ≤ Z′, Z ≤ Z near   (6)
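  • A minimal sketch of this DOB-based blending is given below. The clamping, the linear mix of the pin-hole and blurred renderings, and the toy depth values are illustrative assumptions rather than the disclosure's exact implementation.

        import numpy as np

        def dob_map(z_buffer, z_conv, z_near, z_far):
            """Normalized depth-of-blur per pixel; 0 = in focus, 1 = maximally blurred."""
            dob = np.abs((z_buffer - z_conv) / (z_near - z_far))
            return np.clip(dob, 0.0, 1.0)

        def blend_dof(pinhole_img, blurred_img, dob):
            """Weighted blend of the sharp and blurred renderings using the DOB map."""
            dob = dob[..., None]                      # broadcast over colour channels
            return (1.0 - dob) * pinhole_img + dob * blurred_img

        # Toy example: a 2x2 image with depths (in diopters) and the eye converged at 1 D.
        z_buf   = np.array([[1.0, 2.0], [3.0, 0.5]])
        sharp   = np.ones((2, 2, 3))
        blurred = np.zeros((2, 2, 3))
        print(blend_dof(sharp, blurred, dob_map(z_buf, z_conv=1.0, z_near=3.0, z_far=0.0)))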
  • VF-GCD comprising a variable-focus display, convergence tracking, and real-time DOF rendering.
  • the optical path for the VF-GCD was arranged perpendicularly, mainly for ergonomic reasons, to prevent the spherical mirror from blocking the center FOV of both eyes.
  • the key element for controlling focal distance in real-time was a liquid lens, which was coupled to an imaging lens to provide variable and sufficient optical power.
  • the entrance pupil of the viewer was optically conjugate with the aperture of the liquid lens.
  • the focus adjustment of the eye was optically compensated by the optical power change of the liquid lens, thus forming a closed-loop control system as shown in FIG. 12 .
  • NIR: near-infrared
  • the NIR camera has a pixel resolution of 640 × 480 pixels at 30 frames per second (fps), which is capable of tracking the 2-D POG in real-time.
  • FIGS. 15( a )- 15 ( d ) The capability of the VF-GCD was demonstrated in an experiment as outlined in FIGS. 15( a )- 15 ( d ).
  • three bar-type resolution targets were arranged along the visual axis of the VF-GCD at 3D, 2D, and 1D, respectively.
  • Three rabbits were graphically rendered at these corresponding locations, as shown in FIGS. 15( c ) and 15 ( d ).
  • the viewer alternatingly changed his focus from far (1D) to near (3D) distances and then from near to far.
  • FIG. 15( a ) shows the real-time tracking result on the convergence distance of the viewer versus time. As shown in FIG. 15( a ), the eye-tracked convergence distances approximately matched the distances of the real targets. (Any slight mismatch may be explained in part by the approximately 0.6D depth-of-field of the eyes.)
  • FIG. 15( b ) shows the synthetic-focus-cues effects in the VF-GCD. Similar to the images shown in FIGS. 14( a )- 14 ( f ), as the eye was focused at the far distance 1D, the rabbit at the corresponding distance was sharply and clearly rendered while the other two rabbits (at 2D and 3D, respectively) were out of focus and hence proportionately blurred with respect to the defocused distance from 1D; vice versa when the eye was focused at either 2D or 3D.
  • the rendering program ran on a desktop computer equipped with a 3.20 GHz Intel Pentium 4 CPU and a GeForce 8600 GS graphics card, which maintained a frame rate of 37.6 fps for rendering retinal blur cues.
  • FIGS. 15( c ) and 15 ( d ) provide further comparison of the addressable focus cues rendered by the VF-GCD against the focus cues of real-world targets.
  • a digital camera was disposed at the exit-pupil location of the VF-GCD. The camera was set at f/4.8, thereby approximately matching the speed of the human eye.
  • FIG. 15( c ) when the observer focused at the near distance 3D, the rabbit at 3D was rendered sharply and clearly while the rabbits at 2D and 1D were blurred. Meanwhile, the focal distance of the VF-GCD was adjusted to 3D using the liquid lens, thereby matching with the viewer's convergence distance (and vice versa in FIG.
  • FIGS. 15( c ) and 15 ( d ) simulate the retinal images of looking through the VF-GCD at different convergence conditions.
  • the virtual rabbits located at three discrete depths demonstrated nearly correct focus cues similar to those of the real resolution targets.
  • This embodiment is directed to a variable-focus gaze-contingent display that is capable of rendering nearly correct focus cues of a volumetric space in real-time and in a closed-loop manner.
  • the VF-GCD provided rendered focus cues more accurately, with reduced visual artifacts such as the conflict between convergence and accommodation.
  • the VF-GCD was much simpler and conserved hardware and computational resources.
  • This embodiment is directed to the multi-focal-plane mode that operates in a so-called “depth fused” manner.
  • a large number of focal planes and small dioptric spacings between them are desirable for improving image quality and reducing perceptual effects in the multi-focal-plane mode.
  • a depth-weighted blending technique can be implemented.
  • This technique can lead to a “depth-fused 3-D” (DFD) perception, in which two overlapped images displayed at two different respective depths may be perceived as a single-depth image. The luminance ratio between the two images may be modulated to change the perceived depth of the fused image.
  • the DFD effect can be incorporated into the multi-focal-plane mode.
  • Another concern addressed by this embodiment is the choice of diopter spacing between adjacent focal planes.
  • in this embodiment, a systematic approach is utilized to address these issues, based on quantitative evaluation of the modulation transfer functions (MTF) of DFD images formed on the retina.
  • the embodiment also takes into account most of the ocular factors, such as pupil size, monochromatic and chromatic aberrations, diffraction, Stiles-Crawford effect (SCE), and accommodation; and also takes into account certain display factors, such as dioptric midpoint, dioptric spacing, depth filter, and spatial frequency of the target.
  • FIG. 16 illustrates the depth-fusion concept of two images displayed on two adjacent focal planes separated by a dioptric distance of Δz.
  • the dioptric distance from the eye to the front focal plane is z 1 and to the rear plane is z 2 .
  • the luminance of the fused pixel (L) is summed from the front and rear pixels (L 1 and L 2 , respectively), and the luminance distribution between the front and back pixels is weighted by the rendered depth z of the fused pixel.
  • w 1 (z) and w 2 (z) are the depth-weighted fusing functions modulating the luminance of the front and back focal planes, respectively.
  • the peak luminance of individual focal planes is normalized to be uniform, without considering system-specific optical losses that may be present in some forms of multi-focal-plane displays (e.g., in spatially multiplexed displays where light may be projected through a thick stack of display panels).
  • Optical losses of a system should be characterized to normalize non-uniformity across the viewing volume before applying depth-weighted fusing functions.
  • the depth-fused 3-D perception effect indicates that, as the depth-weighted fusing functions (w 1 and w 2 ) change, the perceived depth ẑ of the fused pixel will change accordingly. This is formulated as:
  • the dioptric distances from the eye to the n focal planes are denoted as z 1 , z 2 , . . . , z n in distance order, where z 1 is the closest one to the eye.
  • the closest scene point corresponding to a specific pixel can typically be retrieved from the z-buffer in a computer graphics renderer.
  • the depth of the closest 3-D scene point projected onto a given pixel of the i th focal plane is z.
  • the luminance of the 3-D point is distributed between the (i−1) th and i th focal planes if z i−1 ≤ z ≤ z i , otherwise between the i th and (i+1) th focal planes if z i ≤ z ≤ z i+1 .
  • the luminance attribution to the i th focal plane is weighted by the depth z.
  • the depth-weighted fusing function, w i (z), of the i th focal plane can be defined as:
  • w i (z) = g i (z) for z i ≤ z < z i+1 (1 ≤ i < n); w i (z) = 1 − g i−1 (z) for z i−1 ≤ z < z i (2 ≤ i ≤ n).   (9)
  • the luminance levels of the multi-focal plane images can be modulated accordingly by the depth-weighted fusing functions in Eq. (9) to render pseudo-correct focus cues.
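  • The sketch below illustrates Eq. (9) for an n-plane display, assuming a simple linear g i (luminance interpolated in diopters between the two bracketing planes); the disclosure also considers box and non-linear forms of g i , so this is only one possible choice, and the function name fusing_weights is introduced here for illustration.

        import numpy as np

        def fusing_weights(z, planes):
            """planes: focal-plane depths in diopters, ordered nearest (largest D) first.
            Returns one luminance weight per plane for a pixel rendered at depth z (diopters)."""
            planes = np.asarray(planes, dtype=float)
            w = np.zeros_like(planes)
            if z >= planes[0]:          # nearer than the nearest plane
                w[0] = 1.0
                return w
            if z <= planes[-1]:         # farther than the farthest plane
                w[-1] = 1.0
                return w
            # find the pair (i, i+1) with planes[i] >= z > planes[i+1]
            i = np.searchsorted(-planes, -z, side="right") - 1
            g = (z - planes[i + 1]) / (planes[i] - planes[i + 1])   # linear g_i, in diopters
            w[i], w[i + 1] = g, 1.0 - g
            return w

        planes_d = [3.0, 2.4, 1.8, 1.2, 0.6, 0.0]    # the six-focal-plane example
        print(fusing_weights(2.1, planes_d))          # splits luminance between 2.4 D and 1.8 D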
  • the adjacent focal planes are separated in space by a considerable distance.
  • the retinal image quality is expected to be worse when the eye is accommodated at a distance between the front and back focal planes than when it is focused on the front or back focal plane.
  • both the dioptric spacing between adjacent focal planes and the depth-weighted fusing functions can be selected such that the perceived depth ẑ of the fused pixel closely matches the rendered depth z and the image-quality degradation is minimally perceptible as the observer accommodates to different distances between the focal planes.
  • the optical quality of a fused pixel in DFD displays may be quantitatively measured by the point spread function (PSF) of the retinal image, or equivalently by the modulation transfer function (MTF), which is characterized by the ratio of the contrast modulation of the retinal image to that of a sinusoidal object on the 3-D display.
  • PSF 1 (z, z 1 ) and PSF 2 (z, z 2 ) are the point spread functions of the front and back pixels, respectively, corresponding to the eye accommodated distance z.
  • the MTF of a DFD display can then be calculated via the Fourier Transform (FT) of the PSF 12 and subsequently the FT of the PSF 1 and PSF 2 .
  • Ocular factors are mostly related to the human visual system when viewing DFD images from a viewer's perspective. These variables, including pupil size, pupil apodization, reference wavelength, and accommodation state, should be carefully considered when modeling the eye optics.
  • Display factors are related to the practical configuration of the display with DFD operability, such as the covered depth range, dioptric midpoint of two adjacent focal planes to the eye, dioptric spacing between two adjacent focal planes, depth-weighted fusing functions, as well as the spatial frequency of a displayed target.
  • a schematic Arizona eye model was used to simulate and analyze the retinal image quality of simulated targets and to derive generalizable results.
  • various schematic eye models have been widely used to predict the performance of an optical system involved with human subjects.
  • the Arizona eye model was set up in CODE V.
  • the Arizona eye model is designed to match clinical levels of aberration, both on- and off-axis fields, and can accommodate to different distances.
  • the accommodative distance z, as shown in FIG. 16 , determines the lens shape, conic constant, and refractive index of the surfaces in the schematic eye.
  • the distances of the front and back focal planes, z 1 and z 2 , respectively, and their spacing Δz are varied to simulate different display configurations.
  • Ocular characteristics of the human visual system (HVS), such as depth of field, pupil size, diffraction, the Stiles-Crawford effect, monochromatic and chromatic aberrations, and accommodation, play important roles in the perceived image quality of a DFD display.
  • in previous studies, the treatment of the aforementioned factors lacks generality to average subjects and to a full-color DFD display with different display configurations. For instance, only monochromatic aberrations specific to one user's eye were considered and a linear depth-weighted fusing function was assumed.
  • the image source in the model was set up with polychromatic wavelengths, including F, d, and C components as listed in Table 3, to simulate a full-color DFD display.
  • LCA: longitudinal chromatic aberration
  • the display optics may be optimized to have an equivalent chromatic aberration to compensate the LCA of the visual system.
  • CODE V: optical modeling software
  • PSF 1 (z, z 1 ) and PSF 2 (z, z 2 ) for an on-axis point source are simulated separately in CODE V.
  • a series of PSF 12 (z) are computed by varying w 1 from 1 to 0, which corresponds to varying the rendered depth z from z 1 to z 2 .
  • the corresponding MTF 12 (z) of the DFD display is derived by taking the FT of PSF 12 .
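  • The sketch below illustrates this evaluation step, assuming (as implied by the weighted fusing above) that PSF 12 is the w 1 /w 2 -weighted sum of PSF 1 and PSF 2 , and using synthetic Gaussian PSFs purely as stand-ins for the CODE V simulations.

        import numpy as np

        def gaussian_psf(size, sigma):
            ax = np.arange(size) - size // 2
            xx, yy = np.meshgrid(ax, ax)
            psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
            return psf / psf.sum()

        def fused_mtf(psf_front, psf_back, w1):
            """MTF of the depth-fused pixel for a front-plane luminance weight w1."""
            psf12 = w1 * psf_front + (1.0 - w1) * psf_back
            mtf = np.abs(np.fft.fftshift(np.fft.fft2(psf12)))
            return mtf / mtf.max()                   # normalize to unity at zero frequency

        psf1 = gaussian_psf(64, sigma=1.0)           # eye nearly focused on the front plane
        psf2 = gaussian_psf(64, sigma=3.0)           # back plane defocused
        for w1 in (1.0, 0.5, 0.0):                   # sweep the rendered depth from z1 to z2
            print(w1, fused_mtf(psf1, psf2, w1)[32, 33])   # contrast at one spatial frequency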
  • a fused pixel that is rendered to be at the dioptric midpoint of two adjacent focal planes was expected to have the worst retinal image quality compared with other points between the focal planes. Therefore, in the following analysis, we used the retinal image quality of a fused pixel rendered at the midpoint of two adjacent focal planes as a criterion for determining appropriate settings for display designs.
  • the overall focal range of a DFD display covers the depth varying from 3D (z 1 ) to 0D (z n ).
  • dioptric spacing on DFD displays can be evaluated by setting the midpoint of a pair of adjacent focal planes at an arbitrary position within the depth range without loss of generality.
  • the midpoint of a focal-plane pair was set at 1D, and the dioptric spacing Δz was varied from 0.2D to 1D at an interval of 0.2D.
  • FIG. 17( a ) is a plot of the results corresponding to different dioptric spacings.
  • MTF ideal corresponds to the MTF of a real pixel placed at the midpoint
  • MTF +0.3D and MTF ⁇ 0.3D which correspond to the MTF of the eye model with +0.3D and ⁇ 0.3D defocus from the midpoint focus, respectively.
  • the ±0.3D defocus was chosen to match the commonly accepted DOF of the human eye.
  • MTF 12 consistently degraded with the increase of the spacing of the focal planes.
  • when Δz was no larger than 0.6D, MTF 12 fell within the region enclosed by MTF ideal (green dashed line) and the ±0.3D defocused MTFs (the overlapping blue and red dashed lines).
  • the DOF of the human eye under photopic viewing conditions can be selected as the threshold value of the dioptric spacing in a display operating in the multi-focal-plane mode, which ensures the degradation of the retinal image quality of a DFD display from an ideal display condition is minimally perceptible to average subjects.
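  • A small sketch of the resulting design rule follows (the even spacing and the 0.6D threshold reflect the analysis above; the helper name place_focal_planes is illustrative):

        import numpy as np

        def place_focal_planes(near_d, far_d, max_spacing_d):
            """Evenly spaced focal planes (in diopters) covering [far_d, near_d]."""
            n = int(np.ceil((near_d - far_d) / max_spacing_d)) + 1
            return np.linspace(near_d, far_d, n)

        print(place_focal_planes(3.0, 0.0, 0.6))   # -> [3.0, 2.4, 1.8, 1.2, 0.6, 0.0]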
  • Past studies of the effects of stimulus contrast and contrast gradient on eye accommodation in viewing real-world scenes have suggested that the accommodative response attempts to maximize the contrast of the foveal retinal image, and the contrast gradient helps stabilize the accommodation fluctuation of the eye on the target of interest. Therefore, pseudo-correct focus cues can be generated at the dioptric midpoint by applying an appropriate depth-fusing filter even without a real focal plane.
  • the accumulated results yielded the optimal ratios of the depth-weighted luminances (L 1 and L 2 ) of the front and back focal planes to the luminance of the fused target (L) as a function of the accommodation distance (z) for a focal-plane pair.
  • This evaluation can be extended to more than two focal planes covering a much larger depth range.
  • a 6-focal-plane DFD display covering a depth range from 3D to 0D.
  • six focal planes were placed at 3D (z 1 ), 2.4D (z 2 ), 1.8D (z 3 ), 1.2D (z 4 ), 0.6D (z 5 ), and 0D (z 6 ), respectively.
  • a periodic function g i (z) can be used to describe the dependence of the luminance ratio of the front focal plane in a given pair of focal planes upon the scene depth:
  • Δz′ characterizes the nonlinearity of g i (z).
  • z′ i,i+1 is equal to the dioptric midpoint z i,i+1 .
  • Table 4 lists detailed parameters of g i (z) for the six-focal-plane DFD display. As the distance of the focal planes from the eye increased from 2.7D to 0.3D, the difference between z i,i+1 and z′ i,i+1 increased from −0.013D to +0.024D.
  • the slight mismatch between z′ i,i+1 and z i,i+1 may be attributed to the dependence of spherical aberration on eye-accommodation distances.
  • the nonlinear fittings of the luminance ratio functions were plotted as red dashed curves in FIG. 19 with a correlation coefficient of 0.985 to the simulated black curves.
  • the depth-weighted fusing function w i as defined in Eq. (9), for each focal plane of an N-focal plane DFD display was then obtained.
  • FIGS. 20( a )- 20 ( d ) show the simulated retinal images of a 3-D scene through a 6-focal plane DFD display with depth-weighted nonlinear fusing functions given in Eq. (11), as well as with the box and linear filters shown in FIG. 19 .
  • the six focal planes were placed at 3, 2.4, 1.8, 1.2, 0.6, and 0D, respectively, and the accommodation of the observer's eye was assumed at 0.5D.
  • the 3-D scene consisted of a planar object extending from 3D to 0.5D at a slanted angle relative to the z-axis (depth-axis) and a green grid as ground plane spanning the same depth range.
  • the planar object was textured with a sinusoidal grating subtending a spatial frequency of 1.5 to 9 cpd from its left (front) to right (back) ends.
  • the entire scene subtended a FOV of 14.2 × 10.7 degrees.
  • the simulation of the DFD images required five steps.
  • a 2-D depth map ( FIG. 20( a )) of the same size as the 2-D perspective image is then generated by retrieving the depth (z) of each rendered pixel from the z-buffer in OpenGL shaders.
  • a set of six depth-weighted maps was generated, one for each of the focal planes, by applying the depth-weighted filtering functions in Eq. (11) to the 2-D depth map.
  • we rendered six focal-plane images by individually applying each of the depth-weighted maps to the 2-D perspective image rendered in the first step through an alpha-blending technique.
  • the resulting retinal images were then obtained by summing up the convolved images.
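  • The rendering and simulation steps can be summarized in the following condensed sketch; the psf_for_defocus and fusing_weights arguments are stand-ins for the eye-model PSFs and the depth-weighted fusing functions described above (fusing_weights as in the earlier sketch), so this is an outline under stated assumptions rather than the exact CODE V pipeline.

        import numpy as np
        from scipy.signal import fftconvolve

        def simulate_retinal_image(image, depth_map_d, planes_d, eye_focus_d,
                                   psf_for_defocus, fusing_weights):
            """image: HxW luminance; depth_map_d: HxW rendered depths in diopters.
            psf_for_defocus(defocus_d) -> 2-D PSF kernel; fusing_weights(z, planes) -> per-plane weights."""
            h, w = image.shape
            retinal = np.zeros((h, w), dtype=float)
            for i, plane_d in enumerate(planes_d):
                # per-pixel luminance weight of this focal plane (the depth-weighted map)
                weights = np.zeros((h, w))
                for r in range(h):
                    for c in range(w):
                        weights[r, c] = fusing_weights(depth_map_d[r, c], planes_d)[i]
                plane_image = weights * image              # alpha-blended focal-plane image
                # blur by the eye's PSF for this plane, given the accommodation state
                psf = psf_for_defocus(abs(plane_d - eye_focus_d))
                retinal += fftconvolve(plane_image, psf, mode="same")
            return retinal                                 # sum of the blurred focal-plane images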
  • FIGS. 20( b ), 20 ( c ), and 20 ( d ) show the simulated retinal images of the DFD display by employing a box, linear, and non-linear depth-weighted fusing function, respectively.
  • the 3-D scene rendered by the box filter ( FIG. 20( b )) indicated a strong depth-discontinuity effect around the midpoint of two adjacent focal planes, while those rendered by linear and non-linear filters showed smoothly rendered depths.
  • although the non-linear filters were expected to yield higher image contrast in general than the linear filters, the contrast differences were barely visible when comparing FIGS. 20( c ) and 20 ( d ), partially due to the low spatial frequency of the grating target.
  • FIGS. 21( a )- 21 ( g ) are plots of the respective MTFs of the retinal images simulated with the linear (green circle) and nonlinear (red square) depth-weighted fusing functions.
  • as shown in FIGS. 21( a )- 21 ( g ), the non-linear depth-weighted fusing functions shown in FIG. 19 can produce better retinal image quality than a linear filter. Consequently, a display incorporating these functions may better approximate the real 3-D viewing condition and further improve the accuracy of depth perception.
  • the non-linear form of depth filters appears to be better than a box filter in terms of improved depth continuity, and better than a linear filter in terms of retinal image contrast modulation.
  • although our evaluation did not take into account certain other ocular factors, such as scattering on the retina, and psychophysical factors, such as the neural response, it provides a systematic framework that can objectively predict the optical quality and guide efforts to configure DFD displays for operation in the multi-focal-plane mode.
  • the major purpose of the depth-judgment experiment was to determine the relationship of the perceived depths of virtual objects versus the accommodation cues rendered by the active optical element.
  • a depth-judgment task was devised to evaluate depth perceptions in the display in two viewing conditions. In Case A, a subject was asked to estimate subjectively the depth of a virtual stimulus without seeing any real target references. In Case B, a subject was asked to position a real reference target at the same perceived depth as the displayed virtual object.
  • FIG. 22 illustrates the schematic setup of the experiment.
  • the total FOV of the display is divided into left and right halves, each of which subtends about an 8-degree FOV horizontally.
  • the left region was either blocked by a black card (Case A) or displayed a real target (Case B), while the right region displayed a virtual object as a visual stimulus.
  • a resolution target similar to the Siemens star in the ISO 15775 chart was employed for both the real and virtual targets, shown as the left and right insets of FIG. 22 .
  • An aperture was placed in front of the beam-splitter, limiting the overall horizontal visual field to about 16 degrees to the subject's eye.
  • the display optics, together with the subject, were enclosed in a black box. The subject positioned his or her head on a chin rest and viewed the targets with only one eye (the dominant eye, with normal or corrected vision) through the limiting aperture.
  • the real target was mounted on a rail to allow movement along the visual axis of the display.
  • multiple light sources were employed to create a uniform illumination on the real target throughout the viewing space.
  • the rail was about 1.5 meters long, but due to the mechanical mounts, the real target could be as close as about 15 cm to the viewer's eye, specifying the measurement range of perceived depths from 0.66 diopters to about 7 diopters.
  • the accommodation distance of the virtual target was controlled by applying five different voltages to the liquid lens, 49, 46.8, 44.5, 42.3, and 40 V rms , which corresponded to rendered depths at 1, 2, 3, 4 and 5 diopters, respectively.
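  • For illustration, the quoted calibration pairs can be interpolated to obtain a drive voltage for an arbitrary rendered depth (a sketch; linear interpolation between the measured points is an assumption made here, whereas the disclosure derives the focus cue from Eq. (3)):

        import numpy as np

        depth_d   = np.array([1.0, 2.0, 3.0, 4.0, 5.0])          # rendered depth, diopters
        voltage_v = np.array([49.0, 46.8, 44.5, 42.3, 40.0])     # corresponding Vrms

        def voltage_for_depth(z_d):
            """Interpolated drive voltage (Vrms) for a rendered depth z_d in diopters."""
            return float(np.interp(z_d, depth_d, voltage_v))

        print(voltage_for_depth(2.5))   # roughly midway between 46.8 and 44.5 Vrms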
  • the depth-judgment task started with a 10-minute training session, followed by 25 consecutive trials.
  • the tasks were to subjectively (Case A) and objectively (Case B) determine the depth of a virtual target displayed at one of the five depths among 1, 2, 3, 4, and 5 diopters. Each of the five depths was repeated in five trials.
  • the subject was first asked to close his/her eyes.
  • the virtual stimulus was then displayed and the real target was placed randomly along the optical rail.
  • the experimenter blocked the real target with a black board and instructed the subject to open his/her eyes.
  • the subject was then asked to subjectively estimate the perceived depth of the virtual target and rate its depth as Far, Middle, or Near, accordingly. (Case A).
  • the blocker of the real target was then removed.
  • the experimenter moved the real target along the optical rail in directions in which the real target appeared to approach the depth of the virtual target.
  • the subject made a fine depth judgment by repeatedly moving the real target backward and forward from the initial judged position until he/she determined that the virtual and real targets appeared to collocate at the same depth.
  • the position of the real target was then recorded as the objective measurement of the perceived depth of the virtual display in Case B.
  • the subjective depth estimations for stimuli at 2 and 4 diopters were disregarded to avoid low-confidence, random guessing. Only virtual targets at 1, 3, and 5 diopters were considered as valid stimuli, corresponding to Far, Middle, and Near depths, respectively.
  • each subject was asked to fill out a questionnaire, asking how well he/she could perceive depth without (Case A) or with (Case B) seeing the real reference target. The subject was given three choices, ranking his/her sense of depth as Strong, Medium, or Weak in both Cases A and B.
  • FIG. 23 is a plot of the error rate (blue solid bars with deviations) for each of the subjects.
  • subjects S 4 , S 6 , and S 10 had relatively higher error rates (0.27, 0.27, and 0.27, respectively) than the other subjects, and they also gave lower rankings on depth perception (Weak, Weak, and Weak, respectively); subject S 9 had the lowest error rate of 0.07 and his rank on the perception of depth was Strong. Subjects S 1 and S 5 , however, had somewhat conflicting perception rankings relative to their error rates. The average ranking among the ten subjects for depth estimation without real references was within the Weak to Medium range, as will be shown later ( FIG. 25 ). Overall, based on a pool of ten subjects, and given the large standard deviation of the error rates in FIG. 23 , the ranking on depth perception correlated at least to some extent with the error rate of the subjective depth estimations.
  • the mean error rate for completing fifteen trials was 0.207 among ten subjects, corresponding to about one error on depth estimation within five trials on average. This indicated that the subjects could perceive the rendered depth to some extent of accuracy under the monocular viewing condition where all the depth cues except the accommodation cues were minimized.
  • FIG. 24 is a plot of the averaged perceived depths versus the rendered accommodation cues of the display.
  • the black diamonds indicate the mean value of the perceived depth at each of the accommodation cues.
  • a linear relationship was found, by linearly fitting the five data points, with a slope of 1.0169 and a correlation factor (R 2 ) of 0.9995, as shown by the blue line in FIG. 24 .
  • the major purpose of the accommodative response measurements was to quantify accommodative response of the human visual system to the depth cues presented through the subject display.
  • the accommodative responses of the eye were measured by a near-infrared (NIR) auto-refractor (RM-8000B, Topcon).
  • the auto-refractor has a measurement range of the refractive power from ⁇ 20 to 20 diopters, a measurement speed of about 2 sec and an RMS measurement error of 0.33 diopters.
  • the eye relief of the auto-refractor is about 50 mm.
  • the auto-refractor was placed right in front of the beam-splitter, so that the exit pupil of the auto-refractor coincided with that of the display. Throughout the data-acquisition procedure, the ambient lights were turned off to prevent their influences on accommodation responses.
  • a subject with normal vision was asked to focus on the virtual display, which was presented at 1 diopter, 3 diopters, and 5 diopters, respectively, in a three-trial test.
  • the accommodative response of the subject's eye was recorded every 2 sec, for up to nine measurement points.
  • the results for one subject are plotted in FIG. 26 for the three trials corresponding to three focal distances of the virtual display.
  • the data points are shown as three sets of blue diamonds.
  • the red solid lines in FIG. 26 correspond to the accommodation cues rendered by the liquid lens.
  • the average value of the nine measurements in each trial was 0.97 diopters, 2.95 diopters, and 5.38 diopters, with standard deviations of 0.33 diopters, 0.33 diopters, and 0.42 diopters, respectively.
  • the averages of the accommodative responses of the user matched with the accommodation cue stimuli presented by the display.

Abstract

An exemplary display is placed in an optical pathway extending from an entrance pupil of a person's eye to a real-world scene beyond the eye. The display includes at least one 2-D added-image source that is addressable to produce a light pattern corresponding to a virtual object. The source is situated to direct the light pattern toward the person's eye to superimpose the virtual object on an image of the real-world scene as perceived by the eye via the optical pathway. An active-optical element is situated between the eye and the added-image source at a location that is optically conjugate to the entrance pupil and at which the active-optical element forms an intermediate image of the light pattern from the added-image source. The active-optical element has variable optical power and is addressable to change its optical power to produce a corresponding change in perceived distance at which the intermediate image is formed, as an added image to the real-world scene, relative to the eye.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • This application claims priority to and the benefit of U.S. Provisional Patent Application No. 61/276,578, filed Sep. 14, 2009, which is incorporated herein by reference in its entirety.
  • ACKNOWLEDGEMENT OF GOVERNMENT SUPPORT
  • This invention was made with funding from grant nos. 05-34777 and 09-15035 from the National Science Foundation. The government has certain rights in the invention.
  • FIELD
  • This disclosure pertains to, inter alia, three-dimensional electro-optical displays that can be head-worn or otherwise placed relative to a person's eyes in a manner allowing the person to view images rendered by the display.
  • BACKGROUND
  • Interest in 3-dimensional (3-D) displays is long-standing and spans various fields including, for example, flight simulation, scientific visualization, education and training, tele-manipulation and tele-presence, and entertainment systems. Various types of 3-D displays have been proposed in the past, including head-mounted displays (HMDs) (Hua and Gao, Applied Optics 46:2600-2610, May 2007; Rolland et al., Appl. Opt. 39:3209-3215, July 2000; Schowengerdt and Seibel, J. Soc. Info. Displ. 14:135-143, February 2006); projection-based immersive displays (Cruz-Neira et al., Proc. 20th Ann. Conf. Comp. Graphics Interactive Techniques, pp. 135-142, ACM SIGGRAPH, ACM Press, September 1993); volumetric displays (Sullivan, SID Symp. Dig. Tech. Papers 34:1531-1533, May 2003; Favalora et al., Proc. SPIE, 4712:300-312, August 2002; Downing et al., Science 273:1185-1189, August 1996); and holographic displays (Heanue et al., Science 265:749-752, August 1994). HMDs are desirable from the standpoints of cost and technical capabilities. For instance, HMDs provide mobile displays for wearable computing. For use in augmented reality, they can merge images of virtual objects with actual physical scenes. (Azuma et al., IEEE Comp. Graphics and Appl. 21:34-47, November/December 2001; Hua, Opt. Photonics News 17:26-33, October 2006.)
  • Despite ongoing advances in stereoscopic displays, many persistent technical and usability issues prevent the current technology from being widely accepted for demanding applications and daily usage. For example, various visual artifacts and other problems are associated with long-term use of stereoscopic displays, particularly HMDs, such as apparent distortions and inaccuracies in perceived depth, visual fatigue, diplopic vision, and degradation of oculomotor responses. Although at least some of these artifacts may arise from engineering-related aspects of the display itself, such as poor image quality, limited eye relief, and inappropriate inter-pupillary distance (IPD), a key factor is the discrepancy between accommodation and convergence associated with use of a conventional display. Mon-Williams et al., Ophth. Physiol. Opt. 13:387-391, October 1993; Wann et al., Vis. Res. 35:2731-2736, October 1995.
  • In most people, accommodation and convergence are normally tightly coupled with each other so that convergence depth coincides with accommodation depth as required for three-dimensional (3-D) depth perception. Conventional stereoscopic displays, however, lack the ability to render focus cues correctly because such displays present stereoscopic images on a fixed image plane while forcing the eyes to converge at different distances to perceive objects at different depths. In other words, contrary to natural vision, whenever a viewer is using a conventional stereoscopic display, all objects (regardless of their actual locations relative to the viewer's eyes) are perceived to be in focus if the viewer focuses his eyes on the image plane of the display. Also, all objects (regardless of their actual locations relative to the viewer's eyes) are perceived as blurred if the viewer's accommodation varies with convergence. This results in a forced, and unnatural, decoupling of the accommodation and convergence cues, which results in an erroneous focus cue. An erroneous focus cue induces incorrect blurring of images formed on the retina that do not vary with the rendered depth of a virtual scene. As a result, unfaithful focus cues can cause, for example, under-estimation or mis-estimation of the rendered depth of a 3-D scene and visual fatigue after prolonged exposure to the stereoscopic environment produced by the display.
  • Significant interest has arisen in developing 3-D displays that can provide correct or nearly correct focus cues. One conventional approach is a “volumetric” display that portrays a large number (e.g., millions) of voxels within a physical volume. Volumetric displays are conventionally classified as “true” 3-D displays. The practical implementation of such technology, however, has been hindered by several technical challenges, such as the low efficiency with which the large number of calculations needed to update all the voxels can be made, the limited rendering volume, and the poor ability to render view-dependent lighting effects correctly, such as occlusion, specular reflection, and shading.
  • Another conventional approach is a “multi-focal plane” display that renders respective focus cues for virtual objects at different “depths” by forming respective images of light patterns produced at multiple focal planes by respective 2-D micro-displays located at respective discrete “depths” from the eyes. Rolland et al., Appl. Opt. 39:3209-3215, 2000; Akeley et al., ACM Trans. Graphics 23:804-813, July 2004. (As used herein, “depth” in this context means the optical-path distance from the viewer's eyes.) Each of the focal planes is responsible for rendering 3-D virtual objects at respective nominal depth ranges, and these discrete focal planes collectively render a volume of virtual 3-D objects with focus cues that are specific to a given viewpoint.
  • A multi-focal-plane display may be embodied via a “spatial-multiplexed” approach which uses multiple layers of 2-D micro-displays. For example, Rolland (cited above) proposed use of a thick stack of fourteen equally spaced planar (2-D) micro-displays to form respective focal planes in a head-mounted display that divided the entire volumetric space from infinity to 2 diopters. Implementation of this approach has been hindered by the lack of practical technologies for producing micro-displays having sufficient transmittance to allow stacking them and passing light through the stack, and by the displays' demands for large computational power to render simultaneously a stack of 2-D images of a 3-D scene based on geometric depth.
  • Another conventional approach is a “time-multiplexed” multi-focal-plane display, in which multiple virtual focal planes are created time sequentially and synchronously with the respective depths of the objects being rendered. See, e.g., Schowengerdt and Seibel, J. Soc. Info. Displ. 14:135-143, February 2006; McQuaide et al., Displays 24:65-72, August 2003. For example, in the work cited here, a see-through retinal scanning display (RSD) including a deformable membrane mirror (DMM) was reported in which a nearly collimated laser beam is modulated and scanned across the field of view (FOV) to generate pixels on the retina. Meanwhile, correct focusing cues are rendered on a pixel-by-pixel basis by defocusing the laser beam through the DMM. To achieve a practical full-color and flicker-free multi-focal-plane stereo display, extremely fast address speeds of both the laser beam and the DMM are required, up to MHz. Rendering each pixel by a beam-scanning mechanism limits the compatibility of the system with existing 2-D displays and rendering techniques.
  • Yet another conventional approach is a variable-focal-plane display, in which the focal distance of a 2-D micro-display is controllably changed synchronously with the respective depths of the objects correlated with the region of interest (ROI) of the viewer. The region of interest of a viewer may be identified through a user feedback interface. See, e.g., Shiwa et al., J. Soc. Info. Displ. 4:255-261, December 1996; Shibata et al., J. Soc. Info. Displ. 13:665-671, August 2005. Shiwa's device included a relay lens that, when physically displaced, changed the perceived depth position of a rendered virtual object. Shibata achieved similar results by axially displacing the 2-D micro-display using a micro-controlled stage on which the micro-display was mounted. Although these approaches were capable of rendering adaptive accommodation cues, they were unable to render retinal blur cues in 3-D space and required user input to determine the ROI in real time.
  • Despite all the past work on 3-D displays summarized above, none of the conventional displays, including conventional addressable-focus displays, has the capability of incorporating variable-focal-plane, multiple-focal plane, and depth-fused 3-D techniques into a cohesively integrated system allowing the flexible, precise, and real-time addressability of focus cues. There is still a need for a see-through display with addressable focal planes for improved depth perceptions and more natural rendering of accommodation and convergence cues. There is also a need for such displays that are head-mounted.
  • SUMMARY
  • In view of the limitations of conventional displays summarized above, certain aspects of the invention are directed to stereoscopic displays that can be head-mounted and that have addressable focal planes for improved depth perceptions but that require substantially less computational power than existing methods summarized above while providing more accurate focus cues to a viewer. More specifically, the invention provides, inter alia, vari-focal or time-multiplexed multi-focal-plane displays in which the focal distance of a light pattern produced by a 2-D “micro-display” is modulated in a time-sequential manner using a liquid-lens or analogous active-optical element. An active-optical element configured as, for example, a “liquid lens” provides addressable accommodation cues ranging from optical infinity to as close as the near point of the eye. The fact that a liquid lens is refractive allows the display to be compact and practical, including for head-mounted use, without compromising the required accommodation range. It also requires no moving mechanical parts to render focus cues and uses conventional micro-display and graphics hardware.
  • Certain aspects of the invention are directed to see-through displays that can be monocular or binocular, head-mounted or not. The displays have addressable means for providing focus cues to the user of the display that are more accurate than provided by conventional displays. Thus, the user receives, from the display, images providing improved and more accurate depth perceptions for the user. These images are formed in a manner that requires substantially less computational power than conventional displays summarized above. The displays are for placement in an optical pathway extending from an entrance pupil of a person's eye to a real-world scene beyond the eye.
  • One embodiment of such a display comprises an active-optical element and at least one 2-D added-image source. The added-image source is addressable to produce a light pattern corresponding to a virtual object and is situated to direct the light pattern toward the person's eye to superimpose the virtual object on an image of the real-world scene as perceived by the eye via the optical pathway. The active-optical element is situated between the eye and the added-image source at a location that is optically conjugate to the entrance pupil and at which the active-optical element forms an intermediate image of the light pattern from the added-image source. The active-optical element has variable optical power and is addressable to change its optical power to produce a corresponding change in perceived distance at which the intermediate image is formed, as an added image to the real-world scene, relative to the eye.
  • An exemplary added-image source is a micro-display comprising a 2-D array of light-producing pixels. The pixels, when appropriately energized, produce a light pattern destined to be the virtual object added to the real-world scene.
  • In some embodiments the active-optical element is a refractive optical element, such as a lens that, when addressed, exhibits change in optical power or a change in refractive index. An effective type of refractive optical element is a so-called “liquid lens” that operates according to the “electrowetting” effect, wherein the lens addressed by application thereto of a respective electrical voltage (e.g., an AC voltage) exhibits a change in shape sufficient to effect a corresponding change in optical power. Another type of refractive optical element is a liquid-crystal lens that is addressed by application of a voltage causing the liquid-crystal material to exhibit a corresponding change in refractive index. The refractive active-optical element is situated relative to the added-image source such that light from the added-image source is transmitted through the optical element. A liquid lens, being refractive, allows the display to be compact and practical, including for head-mounted use, without compromising the required accommodation range. It also requires no moving mechanical parts to render focus cues and uses conventional micro-display and graphics hardware.
  • In other embodiments the active optical element is a reflective optical element such as an adaptive-optics mirror, a deformable membrane mirror, a micro-mirror array, or the like. The reflective active-optical element desirably is situated relative to the added-image source such that light from the added-image source is reflected from the optical element. As the reflective optical element receives an appropriate address, it changes its reflective-surface profile sufficiently to change its optical power as required or desired.
  • A refractive active-optical element is desirably associated with an objective lens that provides most of the optical power. The objective lens typically operates at a fixed optical power, but the optical power can be adjustable. The objective lens desirably is located adjacent the active-optical element on the same optical axis. Desirably, this optical axis intersects the optical pathway. The added-image source also can be located on this optical axis. In an example embodiment a beam-splitter is situated in the optical pathway to receive light of the intermediate image from the active-optical element along the optical axis that intersects the optical pathway at the beam-splitter.
  • If the active-optical element is on a first side of the beam-splitter, then a mirror can be located on the axis on a second side of the beam-splitter to reflect light back to the beam-splitter that has passed through the beam-splitter from the active-optical element. This mirror desirably is a condensing mirror, and can be spherical or non-spherical. If the mirror has a center of curvature and a focal plane, then the active-optical element can be situated at the center of curvature to produce a conjugate exit pupil through the beam-splitter.
  • As the active-optical element addressably changes its optical power, the intermediate image is correspondingly moved along the optical pathway relative to the focal plane to produce a corresponding change in distance of the added image relative to the eye. The distance at which the added image is formed can serve as an accommodation cue for the person with respect to the intermediate image.
  • The following definitions are provided for respective terms as used herein:
  • A “stereoscopic” display is a display configured for use by both eyes of a user, and to display a scene having perceived depth as well as length and width.
  • “Accommodation” is an action by an eye to focus, in which the eye changes the shape of its crystalline lens as required to “see” objects sharply at different distances from the eye.
  • “Convergence” is an action by the eyes to rotate in their sockets in a coordinated manner to cause their respective visual axes to intersect at or on an object at a particular distance in 3-D space.
  • An “accommodation cue” is a visual stimulus (e.g., blurred image) that is perceived by a viewer to represent an abnormal accommodation condition and that, when so perceived, urges the eyes to correct the accommodation condition by making a corresponding accommodation change.
  • A “convergence cue” is a visual stimulus (e.g. binocular disparity, i.e., slightly shifted image features in a stereoscopic image pair) that is perceived by a viewer to represent an abnormal convergence condition and that, when so perceived, urges the eyes to correct the convergence condition by making a corresponding convergence change.
  • A “retinal blur cue” is a visual stimulus (e.g., a blurred image) that is perceived by a viewer to represent an out-of-focus condition and that, when so perceived, provides the eyes information for depth judgment and may urge the eyes to correct the accommodation condition by making a corresponding change. (Note that the eyes do not necessarily make an accommodation change; in many cases the retinal blur cue simply provides a sense of how far the apparently blurred object is from in-focus objects.)
  • Normally, a combination of an accommodation cue and a retinal blur cue provides a “focus cue” used by a person's eyes and brain to sense and establish good focus of respective objects at different distances from the eyes, thereby providing good depth perception and visual acuity.
  • An “addressable” parameter is a parameter that is controlled or changed by input of data and/or command(s). Addressing the parameter can be manual (performed by a person using a “user interface”) or performed by machine (e.g., a computer or electronic controller). Addressable also applies to the one or more operating modes of the subject displays. Upon addressing a desired mode, one or more operating parameters of the mode are also addressable.
  • An “accommodation cue” is a stimulus (usually an image) that stimulates the eye(s) to change or adjust its or their accommodation distance.
  • A “see-through” display allows a user to receive light from the real world, situated outside the display, wherein the light passes through the display to the user's eyes. Meanwhile, the user also receives light corresponding to one or more virtual objects rendered by the display and superimposed by the display on the image of the real world.
  • A “virtual object” is not an actual object in the real world but rather is in the form of an image artificially produced by the display and superimposed on the perceived image of the real world. The virtual object may be perceived by the eyes as being an actual real-world object, but it normally does not have a co-existing material counterpart, in contrast to a real object.
  • An “added-image source” is any of various 2-D devices that are addressable to produce a light pattern corresponding to at least one virtual object superimposed by the display on the real-world view, as perceived by the user of the display. In many embodiments the added-image source is a “micro-display” comprising an X-Y array of multiple light-producing pixels that, when addressed, collectively produce a light pattern. Other candidate added-image sources include, but are not limited to, digital micro-mirror devices (DMDs) and ferroelectric liquid-crystal-on-silicon (FLCOS) devices.
  • For producing accommodation cues, the displays address focal distances in at least two possible operational modes. One mode involves a single but variable-distance focal plane, and the other mode involves multiple focal planes at respective distances. The latter mode addresses the active-optical element and a 2-D virtual-image source in a time-sequential manner. Compared to a conventional time-multiplexed RSD that depends upon pixel-by-pixel rendering, presenting multiple full-color 2-D images from a 2-D added-image source in a time-sequential, image-by-image manner substantially reduces the address speed (from MHz to approximately 100 Hz) required for addressing all the pixels and the active-optical element(s). As the response time of the active-optical element is reduced (e.g., from about 75 ms to less than 10 ms), the efficiency of the display correspondingly increases.
  • The foregoing and additional advantages and features of the invention will be more apparent from the following detailed description, which proceeds with reference to the accompanying drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a schematic diagram of a display according to a first representative embodiment. The depicted display can be used as either a monocular or binocular display, the latter requiring an additional, similar assembly for the user's second eye (not shown).
  • FIGS. 2( a)-2(d) depict respective binocular viewing situations, including real-world (FIG. 2( a)), use of a conventional stereoscopic display (FIG. 2( b)), use of the embodiment for near convergence and accommodation (FIG. 2( c)), and use of the embodiment for far convergence and accommodation (FIG. 2( d)).
  • FIG. 2( e) is a perspective depiction of operation in a multi-focal-plane mode. In this example, there are two selectable focal planes.
  • FIG. 3 is an unfolded optical diagram of the display of FIG. 1.
  • FIG. 4( a) is a plot of the optical power of the liquid lens used in Example 1, as a function of applied voltages.
  • FIG. 4( b) is a plot of the accommodation cue produced by the display of Example 1, as a function of the voltage applied to the liquid lens.
  • FIGS. 5( a)-5(c) are respective images captured by a camcorder fitted to a display operating in the variable-single-focus-plane mode, showing the change in focus of a virtual torus achieved by changing the voltage applied to the liquid lens.
  • FIGS. 6( a)-6(d) are respective images of a simple mixed-reality application of a display operating in the variable-single-focus-plane mode. Sharp images of the COKE can (virtual object) and coffee cup (real world) were obtained whenever the accommodation cue was matched to actual distance (rendered “depth” of the can is 40 cm in FIGS. 6( a) and 6(b) and 100 cm in FIGS. 6( c) and 6(d)), and the camera obtaining the images was focused at 40 cm in FIGS. 6( a) and 6(d) and at 100 cm in FIGS. 6( b) and 6(c).
  • FIGS. 7( a)-7(b) are plots of a square-wave signal for driving the liquid lens of a display operating in the multi-focal-plane mode (FIG. 7( a)) and the resulting rendering of the virtual object (FIG. 7( b)). In this example, the liquid lens is fast-switched between two selected driving voltages as separate image frames are displayed sequentially in a synchronous manner.
  • FIG. 8 is a plot of the time response of two liquid lenses.
  • FIG. 9( a) is a schematic optical diagram of a display according to the second representative embodiment.
  • FIG. 9( b) is a plot of the focus cue (z) as a function of voltage (U) applied to the liquid lens of the second representative embodiment.
  • FIG. 10( a) is a time plot of an exemplary square wave of voltage applied to the liquid lens in the second representative embodiment, with fast switching between 49 and 37 Vrms so as to time-multiplex the focal planes at 1D and 6D, respectively, in the second representative embodiment.
  • FIG. 10( b) is a time plot of an exemplary rendering and display of images (Frame I and Frame II) of an object (torus) synchronously with energization of the liquid lens in the second representative embodiment. The accompanying Frame I shows the superposition of a sphere and a mask for a torus in front of the sphere. Frame II is a full image of the torus, with the sphere masked out.
  • FIG. 10( c) is a time plot of a square wave, synchronous with energization of the liquid lens, including respective blank frames per cycle.
  • FIGS. 11( a) and 11(b) depict exemplary results of the display of the second representative embodiment operating at 37.5 Hz in the multi-focal-plane mode, according to the lens-driving scheme of FIGS. 10( a)-10(b). In FIG. 11( a), when the camera was focused at the bar target at 6D, the torus (rendered at 6D) appears to be in focus while the sphere is blurred. FIG. 11( b) shows an image in which the camera was focused on the sphere at 1D, causing the sphere to appear in substantial focus.
  • FIGS. 11( c) and 11(d) show operation of the display of the second representative embodiment according to the rendering scheme of FIG. 10( c), producing better focus cues.
  • FIG. 12 is a control diagram of a variable-focus gaze-contingent display including real-time POG (point of gaze) tracking and DOF (depth of focus) rendering, in the third representative embodiment operating in the single-variable-focal-plane mode.
  • FIG. 13 is a schematic diagram of the eye-tracking as used in the third representative embodiment, wherein a pair of monocular trackers was used to triangulate the convergence point using respective lines of sight of a user's eyes.
  • FIGS. 14( a)-14(f) are example results obtained with the third representative embodiment configured as a VF-GCD (variable-focus gaze-contingent display). FIG. 14( a) is a rendered image of a virtual scene (rabbits) obtained using a standard pin-hole camera. FIG. 14( b) is a virtual image post-processed by applying a blur filter. FIGS. 14( c) and 14(e) are degree-of-blur maps of the virtual scene with the eye focused at 3D and 1D, respectively. FIGS. 14( d) and 14(f) are final rendered images of the 3-D scene with corresponding focus cues when the eye is focused at 3D and 1D, respectively.
  • FIGS. 15( a)-15(d) are example results obtained with the third representative embodiment configured as a VC-GCD. FIG. 15( a) is a plot of eye-tracked convergence distances versus time. FIG. 15( b) is a real-time rendering of focus cues while tracking the convergence distance. FIGS. 15( c) and 15(d) are optical see-through images of the VC-GCD captured with a camera, placed at the eye-pupil position, focused at 3D and 1D, respectively, while the optical power of the liquid lens was updated accordingly to match the focal distance of the display with the convergence distance.
  • FIG. 16 is a schematic diagram of a depth-fused display operating in the multi-focal-plane mode, as described in the fourth representative embodiment. Pixels on the front (A) and back (B) focal planes are located at z1 and z2, respectively, from the eye, and the fused pixel (C) is located at z (z2<z<z1). All distances are in dioptric units.
  • FIG. 17( a) is a plot of modulation transfer functions (MTF) of a depth-fused display (operating in the multi-focal-plane mode as described in the fourth representative embodiment) as a function of dioptric spacings of 0.2D, 0.4D, 0.6D, 0.8D, and 1.0D. MTF of an ideal viewing condition is plotted as a dashed line. Also included are plots of defocused MTFs (+0.3D) and (−0.3D).
  • FIG. 17( b) is a plot of MTFs as a function of accommodations with z=1.3, 1.2, 1.1, 1.0, 0.9, 0.8, 0.7D, obtained with the fourth representative embodiment. The medial focal plane is set up at 1D and the luminance ratio is L1/L=0.5.
  • FIGS. 18( a)-18(l) are simulated retinal images of a Snellen E target in a display operated in the depth-fused multi-focal-plane mode, as described in the fourth representative embodiment, with z1=1.3D, z2=0.7D, and w1=0.5. The accommodation distances are z=1.3D in FIGS. 18( a), 18(d), 18(g), and 18(j); z=1.0D in FIGS. 18( b), 18(e), 18(h), and 18(k); and z=0.7D in FIGS. 18( c), 18(f), 18(i), and 18(l), respectively. The target spatial frequencies are v=2 cpd in FIGS. 18( a), 18(b), and 18(c); v=5 cpd in FIGS. 18( d), 18(e), and 18(f); v=10 cpd in FIGS. 18( g), 18(h), and 18(i); and v=30 cpd in FIGS. 18( j), 18(k), and 18(l), respectively. The sizes of the images are proportional to the relative sizes as viewed on the retina.
  • FIG. 19 provides plots of simulated filter curves of accommodation cue versus depth, obtained with the fourth representative embodiment for a six-focal-plane display operating as a DFD, with z1=3D, z6=0D, and Δz=0.6D.
  • FIGS. 20( a)-20(d) show simulated retinal images, obtained as described in the fourth representative embodiment, of a 3-D scene through a six-focal-plane DFD display with depth-weighted non-linear fusing functions as given in Eq. (11), as well as with the box filter (FIG. 20( b)), linear filter (FIG. 20( c)), and non-linear filter (FIG. 20( d)) shown in FIG. 19. FIG. 20( a) is a depth map of the scene rendered by shaders.
  • FIGS. 21( a)-21(g) are comparative plots of MTFs in a dual-focal-plane DFD display using linear and non-linear depth-weighted fusing functions, respectively. Front and back focal planes are assumed at z1=1.8D and z2=1.2D, respectively. Accommodation distance is z=1.8D (FIG. 21( a)), 1.7D (FIG. 21( b)), 1.6D (FIG. 21( c)), 1.5D (FIG. 21( d)), 1.4D (FIG. 21( e)), 1.3D (FIG. 21( f)), and 1.2D (FIG. 21( g)), respectively.
  • FIG. 22 is a schematic diagram of the experimental setup used in the depth-judgment subjective evaluations.
  • FIG. 23 is a bar graph of average error rate and subjective ranking on depth perception by all subjects under the viewing condition without presenting real reference targets (case A), as described in the subjective evaluations.
  • FIG. 24 is a plot of mean perceived depths among ten subjects as a function of accommodation cues rendered by the display operating in the variable-single-focal-plane mode, as described in the subjective evaluations.
  • FIG. 25 is a plot of averaged rankings on depth perception when the real target reference was not presented (solid bar) and when the real target reference was presented (hatched bar), as described in the subjective evaluations.
  • FIG. 26 is a plot of objective measurements of the accommodative responses to the accommodation cues presented by the see-through display, as described in the subjective evaluations.
  • FIG. 27 is a schematic diagram showing the first representative embodiment configured for use as a head-mounted display.
  • FIG. 28 is a schematic diagram of the first representative embodiment including driving electronics, controller, and user interface.
  • FIG. 29 is similar to FIG. 28, but depicting a binocular display.
  • DETAILED DESCRIPTION
  • The following disclosure is presented in the context of representative embodiments that are not to be construed as being limiting in any way. This disclosure is directed toward all novel and non-obvious features and aspects of the various disclosed embodiments, alone and in various combinations and sub-combinations with one another. The disclosed methods, apparatus, and systems are not limited to any specific aspect or feature or combination thereof, nor do the disclosed embodiments require that any one or more specific advantages be present or problems be solved.
  • Although the operations of the disclosed methods are described in a particular, sequential order for convenient presentation, it should be understood that this manner of description encompasses rearrangement of the operations, unless a particular ordering is required by specific language set forth below. For example, operations described sequentially may in some cases be rearranged or performed concurrently. Moreover, for the sake of simplicity, the attached figures may not show the various ways in which the disclosed systems, methods, and apparatus can be used in conjunction with other things and methods.
  • The following explanations of terms are provided to better describe the present disclosure and to guide those of ordinary skill in the art in the practice of the present disclosure.
  • This disclosure sometimes uses terms like “produce,” “generate,” “select,” “receive,” “exhibit,” and “provide” to describe the disclosed methods. These terms are high-level abstractions of the actual operations that are performed. The actual operations that correspond to these terms may vary depending on the particular implementation and are readily discernible by one of ordinary skill in the art.
  • The singular forms “a,” “an,” and “the” include the plural forms unless the context clearly dictates otherwise. The term “includes” means “comprises.” Unless the context dictates otherwise, the term “coupled” means mechanically, electrically, or electromagnetically connected or linked and includes both direct connections or direct links and indirect connections or indirect links through one or more intermediate elements not affecting the intended operation of the described system.
  • Certain terms may be used such as “up,” “down,” “upper,” “lower,” and the like. These terms are used, where applicable, to provide some clarity of description when dealing with relative relationships. But, these terms are not intended to imply absolute relationships, positions, and/or orientations.
  • The term “or” refers to a single element of stated alternative elements or a combination of two or more elements, unless the context clearly indicates otherwise.
  • Unless explained otherwise, all technical and scientific terms used herein have the same meaning as commonly understood to one of ordinary skill in the art to which this disclosure belongs. Although methods and materials similar or equivalent to those described herein can be used in the practice or testing of the present disclosure, suitable methods and materials are described below. The materials, methods, and examples are illustrative only and not intended to be limiting. Other features of the disclosure are apparent from the following detailed description and the claims.
  • Unless otherwise indicated, all numbers expressing quantities of components, percentages, temperatures, times, and so forth, as used in the specification or claims are to be understood as being modified by the term “about” or “approximately.” Accordingly, unless otherwise indicated, implicitly or explicitly, the numerical parameters set forth are approximations that may depend on the desired properties sought and/or limits of detection under standard test conditions/methods. When directly and explicitly distinguishing embodiments from discussed prior art, the embodiment numbers are not approximates unless the word “about” is recited.
  • The various embodiments of displays address multiple focal planes in an optical see-through display. A particularly desirable display configuration is head-mountable; however, head-mountability is not a mandatory feature. For example, contemplated as being within the scope of the invention are displays relative to which a viewer simply places his or her head or at least his or her eyes. The displays include binocular (intended and configured for use with both eyes) as well as monocular displays (intended and configured for use with one eye).
  • Each of the various embodiments of displays described herein comprises an active-optical element that can change its focal length by application of an appropriate electrical stimulus (e.g., voltage) or command. An active-optical element can be refractive (e.g., a lens) or reflective (e.g., a mirror).
  • A practical active-optical element in this regard is a so-called “liquid lens.” A liquid lens operates according to the electrowetting phenomenon, and can exhibit a wide range of optical power. Electrowetting is exemplified by placement of a small volume (e.g., a drop) of water on an electrically conductive substrate, wherein the water is covered by a thin layer of an electrical insulator. A voltage applied to the substrate modifies the contact angle of the liquid drop relative to the substrate. Currently available liquid lenses actually comprise two liquids having the same density. One liquid is an electrical insulator while the other liquid (water) is electrically conductive. The liquids are not miscible with each other but contact each other at a liquid-liquid interface. Changing the applied voltage causes a corresponding change in curvature of the liquid-liquid interface, which in turn changes the focal length of the lens. One commercial source of liquid lenses is Varioptic, Inc., Lyon, France. In one example embodiment the respective liquid lens exhibits an optical power ranging from −5 to +20 diopters (−5D to 20D) by applying an AC voltage ranging from 32 Vrms to 60 Vrms, respectively. Such a lens is capable of dynamically controlling the focal distance of a light pattern produced by a 2-D micro-display from infinity to as close as the near point of the eye.
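  • For illustration only (this is not part of the disclosed embodiments), the voltage-to-power behavior of such a lens can be approximated in software by interpolating a calibration table. The Python sketch below uses just the two endpoint values quoted above (32 Vrms → −5 diopters, 60 Vrms → +20 diopters) as stand-in calibration points; the names CAL_VOLTS, CAL_POWER, lens_power_from_voltage, and voltage_from_lens_power are hypothetical, and a practical controller would interpolate a denser measured curve (such as the one plotted in FIG. 4(a)) because the actual response is not linear.

```python
import numpy as np

# Hypothetical calibration table: applied RMS voltage (V) versus optical power (diopters).
# Only the two endpoint values quoted in the text are used; a real table would be a
# denser set of measured points specific to the lens (cf. FIG. 4(a)).
CAL_VOLTS = np.array([32.0, 60.0])
CAL_POWER = np.array([-5.0, 20.0])

def lens_power_from_voltage(v_rms):
    """Estimate liquid-lens optical power (diopters) for an applied RMS voltage."""
    return float(np.interp(v_rms, CAL_VOLTS, CAL_POWER))

def voltage_from_lens_power(power_d):
    """Invert the calibration: RMS voltage expected to yield a desired optical power."""
    return float(np.interp(power_d, CAL_POWER, CAL_VOLTS))
```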
  • First Representative Embodiment
  • A representative embodiment of a stereoscopic display 10 is shown in FIG. 1, which depicts half of a binocular display. The depicted display 10 is used with one eye while the other half (not shown) is used with the viewer's other eye. The two halves are normally configured as mirror images of each other. The display 10 is configured as an optical see-through (OST) head-mounted display (HMD) having multiple addressable focal planes. “See-through” means the user sees through the display to the real world beyond the display. Superimposed on the image of the real world, as seen through the display, are one or more virtual objects formed and placed by the display.
  • The display 10 comprises a 2-D micro-display 12 (termed herein an “added-image source”), a focusing lens 14, a beam-splitter (BS) 16, and a condensing (e.g., concave spherical) mirror 18. The added-image source 12 generates a light pattern intended to be added, as an image, to the view of the “real world” being perceived by a user wearing or otherwise using the display 10.
  • To illustrate generally the operation of the display 10, reference is made to FIGS. 2( a)-2(d). FIG. 2( a) depicts normal viewing of the real world; FIG. 2( b) depicts viewing using a conventional stereoscopic display; and FIGS. 2( c) and 2(d) depict viewing using this embodiment. For simplicity, only two objects (configured as boxes) located near (Box A) and far (Box B) are shown. In the real-world viewing situation (FIG. 2( a)), the eyes alternatingly adjust focus between near and far distances while natural focus cues are maintained. As used herein, “distance” is outward along the optical axis of the display, as measured from the exit pupil of the eye. The accommodation and convergence distances are normally coupled to each other, and an object out of the current focal distance will appear blurred, as indicated by the simulated retinal images in the inset to the right. In a conventional stereoscopic display (FIG. 2( b)), assuming the image plane is fixed at a far distance, converging at the near distance will cause an unnatural conflict between convergence and accommodation, causing both rendered boxes to appear either in focus or blurred as the eyes accommodate at the far or near distance, respectively. This situation yields incorrect focus cues as shown in the corresponding inset images in FIG. 2( b). In contrast, the subject display 10 approximates the viewing condition of the real world, as shown in FIGS. 2( c) and 2(d). When the eyes converge at the near distance (Box A), the display's image plane is moved to the near distance accordingly, thereby rendering Box A in focus and rendering Box B with appropriate blur. When the eyes converge at the far distance (Box B), the image plane is translated to the far distance, thereby rendering Box B in focus and rendering Box A with appropriate blur. Therefore, the retinal images shown in the insets of FIGS. 2( c) and 2(d) simulate those of the real world situation by concurrently adjusting the focal distance of the display to match with the user's convergence distance and rendering retinal blur cues in the scene according to the current focal status of the eyes.
  • The focusing lens 14 is drawn as a singlet in FIG. 1, but it actually comprises, in this embodiment, an “accommodation lens” (i.e., the liquid lens) 14 a with variable optical power ΦA, and an objective lens 14 b having a constant optical power Φo. The two lenses 14 a, 14 b form an intermediate image 20 of the light pattern produced by the added-image source 12 on the left side of the mirror 18. (The objective lens provides most of the optical power and aberration control for forming this intermediate image.) The liquid lens 14 a is optically conjugate to the entrance pupil of the eye 15, which allows accommodative changes made by the eye 15 to be adaptively compensated by optical-power changes of the liquid lens. The mirror 18 relays the intermediate image 20 toward the viewer's eye through the beam-splitter 16. Since the liquid lens 14 a is the limiting aperture of the display optics, it desirably is placed at the center of curvature (OSM) of the mirror 18 so that a conjugate exit pupil is formed through the beam-splitter 16. The viewer, by positioning an eye 15 at the conjugate exit pupil, sees both the added image of the light pattern produced by the added-image source 12 and an image of the real world through the beam-splitter 16. As indicated by the dashed and solid lines, respectively, when the accommodation lens 14 a changes its optical power from high (I) to low (II), the intermediate image 20 produced by the accommodation lens is displaced toward (I′) or away from (II′) the focal plane (fSM) of the mirror 18. Correspondingly, the added image is formed far from (I″), close to (II″), or at an intermediate distance from the eye 15. Since the liquid lens 14 a is located optically conjugate to the entrance pupil, any change in power produced by the liquid lens does not change the apparent field of view.
  • Thus, the two lenses 14 a, 14 b together form an intermediate image of the light pattern produced by the added-image source 12, and the mirror 18 relays and directs the intermediate image toward the viewer's eye via the beam-splitter 16. The mirror 18 is configured to ensure a conjugate exit pupil is formed at the eye of a person using the display 10. By placing the eye at the conjugate pupil position, the viewer sees both the image of the light pattern produced by the added-image source 12 and a view of the real world. Although the mirror 18 in this embodiment is spherically concave, it will be understood that it alternatively could be aspherically concave.
  • In certain alternative configurations, the mirror 18 can be omitted. The main benefit of the mirror is its ability to fold the optical pathway and provide a compact optical system in the display. In certain situations such compactness may not be necessary.
  • The accommodation lens 14 a is a liquid lens in this embodiment, which is an example of a refractive active-optical element. It will be understood that any of several other types of refractive active-optical elements can alternatively be used, such as but not limited to a liquid-crystal lens. Further alternatively, the accommodation lens can be a reflective active-optical element, such as an actively deformable mirror. In other words, any of various optical elements can be used that have the capability of changing their focal length upon being addressed (i.e., upon command).
  • Based on first-order optics and use of a liquid lens as an active-optical element, the accommodation cue, d, of the display 10 (i.e., the distance from the eye 15 to the image plane of the virtual object produced by the added-image source 12) is determined by:
  • d = −uR / (2u + R + uRΦ)    (1)
  • where Φ = Φo + ΦA − ΦoΦAt is the combined optical power of the focusing lens, t is the axial separation between the objective lens 14 b and the accommodation lens 14 a, u is the axial distance from the 2-D added-image source 12 to the focusing lens 14, and R is the radius of curvature of the mirror 18. All distances follow the sign convention used in optical design.
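  • As a minimal numerical sketch (not taken from the disclosure itself), Eq. (1) can be evaluated directly once the design parameters are chosen. The helper below merely transcribes the formula; the function and parameter names are placeholders, and each input must carry the sign dictated by the optical-design convention noted above.

```python
def accommodation_cue(u, R, phi_o, phi_a, t):
    """Evaluate Eq. (1): distance d from the eye to the image plane of the virtual object.

    u     : axial distance from the 2-D added-image source to the focusing lens (m)
    R     : radius of curvature of the concave mirror (m)
    phi_o : optical power of the objective lens (diopters)
    phi_a : optical power of the accommodation (liquid) lens (diopters)
    t     : axial separation between the objective and accommodation lenses (m)
    Signs follow the usual optical-design convention.
    """
    phi = phi_o + phi_a - phi_o * phi_a * t   # combined power of the focusing lens
    return -u * R / (2 * u + R + u * R * phi)
```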
  • This display 10 has multiple addressable focal planes for improved depth perceptions. Similarly to the accommodative ability of the crystalline lens in the human visual system, the liquid lens 14 a or other refractive active-optical element provides an addressable accommodation cue that ranges from infinity to as close as the near-point of the eye. Unlike mechanical focusing methods, and unlike retinal scanning displays (RSDs) based on reflective deformable membrane mirrors (DMMs), the transmissive nature of the liquid lens 14 a or other refractive active-optical element allows for a compact and practical display that has substantially no moving mechanical parts and that does not compromise the accommodation range. FIG. 3 shows the unfolded optical path of the schematic diagram in FIG. 1.
  • Focus cues are addressable with this embodiment in at least one of two modes. One mode is a variable-single-focal-plane mode, and the other is a time-multiplexed multi-focal-plane mode. In the variable-single-focal-plane mode, the accommodation cue of a displayed virtual object is continuously addressed from far to near distances and vice versa. Thus, the accommodation cue provided by a virtual object can be arbitrarily manipulated in a viewed 3-D world. In the time-multiplexed multi-focal-plane mode, the active-optical element, operating synchronously with graphics hardware and software driving the added-image source, is driven time-sequentially to render both accommodation and retinal blur cues for virtual objects at different depths. In comparison to the conventional time-multiplexed RSD approach using individually addressable pixels, use in this embodiment of the 2-D added-image source to render multiple full-color 2-D images on a frame-sequential basis substantially eliminates any requirement for high addressing speeds.
  • This embodiment is head-mountable, as shown, for example, in FIG. 27, in which the dashed line indicates a housing and head-band for the display.
  • FIG. 1 depicts a monocular display, used with one of a person's eyes. The monocular display is also shown in FIG. 28, which also depicts driving electronics connected to the “microdisplay” (added-image source), and a controller connected to the active-optical element. As described in more detail below, also shown is a “user interface” that is manipulated by the user. The driving electronics, controller, and user interface are shown connected to a computer, but it will be understood that the controller can be used for top-level control without having also to use a computer. A corresponding binocular display is shown in FIG. 29.
  • EXAMPLE 1
  • In this example a monocular display was constructed, in which the accommodation lens 14 a was a liquid lens (“Arctic 320” manufactured by Varioptic, Inc., Lyon, France) having a variable optical power from −5 to +20 diopters by applying an AC voltage from 32 Vrms to 60 Vrms, respectively. The liquid lens 14 a, having a clear aperture of 3 mm, was coupled to an objective lens 14 b having an 18-mm focal length. The source of images to be placed in a viewed portion of the real world was an organic-LED, full-color, 2-D added-image source (“micro-display,” 0.59 inches square) having 800×600 pixels and a refresh rate of up to 85 Hz (manufactured by eMagin, Inc., Bellevue, Wash.). The mirror 18 was spherically concave, with a 70-mm radius of curvature and a 35-mm clear aperture. Based on these parametric combinations, the display had an exit-pupil diameter of 3 mm, an eye-relief of 20 mm, a diagonal field of view (FOV) of about 28°, and an angular resolution of 1.7 arcmins. The 28° FOV was derived by accounting for the chief-ray angle in the image space.
  • FIG. 4( a) is an exemplary plot of the optical power of the liquid lens 14 a of this example as a function of applied voltage. The curve was prepared by entering specifications of the liquid lens 14 a, under different driving voltages, into the optical-design software CODE V (http://www.opticalres.com). Two examples are shown in FIG. 4( a). At 38 Vrms of applied voltage, the liquid lens 14 a produced 0 diopter of optical power, as indicated by the planarity of the liquid interface (lower inset). At 49 Vrms the liquid lens 14 a produced 10.5 diopters of optical power, as indicated by the strongly curved liquid interface (upper inset).
  • Based on the parametric selections in this example and on Eq. (1), FIG. 4( b) is a plot of the accommodation cue produced by the display as a function of the voltage applied to the liquid lens 14 a. As denoted by two solid-triangular markers in FIG. 4( b), driving the liquid lens at 38 Vrms and 49 Vrms produced accommodation cues at 6 diopters and 1 diopter, respectively. Changing the applied voltage from 32 Vrms to 51 Vrms changed the accommodation cue of the display from 12.5 cm (8 diopters) to infinity (0 diopter), respectively, thereby covering almost the entire accommodative range of the human visual system.
  • As indicated by FIGS. 4( a)-4(b), addressing the accommodation cue being produced by the display is achieved by addressing the liquid lens 14 a. I.e., addressing the optical power of the liquid lens 14 a addresses the corresponding accommodation cue produced by the display. The display 10 can be operated in at least one of two modes: variable-single-focal-plane mode and time-multiplexed multi-focal-plane mode. The variable single-focal-plane mode meets specific application needs, for instance, matching the accommodation cue of virtual and real objects in mixed and augmented realities.
  • In the multi-focal-plane mode, the liquid lens 14 a is fast-switched among multiple discrete driving voltages to provide multiple respective focal distances, such as I″ and II″ in FIG. 1, in a time-sequential manner. Synchronized with this switching of the focal plane, the electronics used for driving the 2-D added-image source 12 are updated as required to render the added virtual object(s) at distances corresponding to the rendered focus cues of the display 10. The faster the response speed of the liquid lens 14 a and the higher the refresh rate of the added-image source 12, the more focal planes that can be presented to the viewer at a substantially flicker-free rate.
  • FIG. 2( e) is a perspective view of the display of this embodiment used in the multi-focal-plane mode, more specifically a dual-focal-plane mode. The liquid lens is switched between two discrete operating voltages to provide two focal planes FPI and FPII. The eye perceives these two focal planes at respective distances z1 and z2. The added images are similar to those shown in the insets in FIGS. 10( a) and 10(b), discussed later below.
  • In the multi-focal-plane mode, the dioptric spacing between adjacent focal planes and the overall range of accommodation cues can be controlled by changing the voltages applied to the liquid lens 14 a. Switching among various multi-focal-plane settings, or between the variable-single-focal-plane mode and the multi-focal-plane mode, does not require any hardware modifications. These distinctive capabilities provide a flexible management of focus cues suited for a variety of applications, which may involve focal planes spanning a wide depth range or dense focal planes within a relatively smaller depth range for better accuracy.
  • Certain embodiments are operable in a mode that is essentially a combination of both operating modes summarized above.
  • Variable-Single-Focal-Plane Mode
  • Operating the system under the variable-single-focal-plane mode allows for the dynamic rendering of accommodation cues, which may vary with the viewer's position of interest in the viewing volume. Operation in this mode usually requires some form of feedback and thus some form of feedback control. The feedback control need not be automatic. The feedback can be generated by a user using the display and responding to accommodation and/or convergence cues provided by the display and feeding back his or her responses using a user interface. Alternatively, the feedback can be produced using sensors producing data that are fed to a computer or processor controlling the display. A user interface also typically requires a computer or processor to interpret commands from the interface and produce corresponding address commands for the active-optical element.
  • In this mode the added-image source 12 produces a light pattern corresponding to a desired image to be added, as a virtual object, to the real-world view being produced by the display 10. Meanwhile, the voltage applied to the liquid lens 14 a is dynamically adjusted to focus the added image of the light pattern at different focal distances, from infinity to as close as the near point of the eye, in the real-world view. This dynamic adjustment can be achieved using a “user interface,” which in this context is a device manipulated by a user to produce and input data and/or commands to the display. An example command is the particular depth at which the user would like the added image placed in the real-world view. The image of the light pattern produced by the added-image source 12 is thus contributed, at the desired depth, to the view of the “real” world being provided by the display 10. Another user interface is a 3-D eye-tracker, for example, that is capable of tracking the convergence point of the left and right eyes in 3-D space. A hand-held device offers easy and robust control of slowly changing points of interest, but usually lacks the ability to respond to rapidly updating points of interest at a pace comparable to the speed of moderate eye movements. An eye-tracker interface, which may be applicable for images of virtual objects graphically rendered with the depth-of-field effects, enables synchronous action between the focus cues of the virtual images and the viewer's eye movements. In various experiments we adopted a hand-held device, e.g., “SpaceTraveler” (3DConnexion, Inc., Fremont, Calif.) for manipulating accommodation cues of the display in 3-D space.
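  • The feedback arrangement described above can be summarized, purely as a schematic sketch rather than any disclosed firmware, by the control loop below. The callables read_depth_of_interest, voltage_for_depth, drive_liquid_lens, and render_virtual_object are hypothetical placeholders for whatever user interface or eye tracker, lens calibration, lens-driver electronics, and renderer a particular implementation supplies.

```python
import time

def run_variable_focal_plane_loop(read_depth_of_interest, voltage_for_depth,
                                  drive_liquid_lens, render_virtual_object,
                                  update_hz=60.0):
    """Schematic control loop for the variable-single-focal-plane mode.

    read_depth_of_interest() -> depth of interest in diopters (user interface or 3-D eye tracker)
    voltage_for_depth(d)     -> liquid-lens drive voltage giving an accommodation cue at depth d
    drive_liquid_lens(v)     -> apply the voltage to the lens controller
    render_virtual_object(d) -> redraw the added image at the same rendered depth
    """
    period = 1.0 / update_hz
    while True:
        depth = read_depth_of_interest()             # e.g., tracked convergence distance
        drive_liquid_lens(voltage_for_depth(depth))  # move the display's focal plane there
        render_virtual_object(depth)                 # keep the rendered depth matched to the cue
        time.sleep(period)                           # crude pacing; real systems sync to the display refresh
```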
  • The variable-single-focal-plane mode meets specific application needs, such as substantially matching the accommodation cues of virtual and real objects in mixed and augmented realities being perceived by the user of the display. The accommodation and/or focus cues can be pre-programmed, if desired, to animate the virtual object to move in 3-D space, as perceived by the user.
  • To demonstrate the addressability of focus cues in the variable-single-focal-plane mode, three bar-type resolution targets were placed along the visual axis of an actually constructed display. The targets served as references to the virtual image with variable focus cues. As shown on the left side of each sub-image in FIGS. 5( a)-5(c), the bar targets were placed at 16 cm (largest target), 33 cm (mid-sized target), and 100 cm (smallest target), respectively, away from the exit pupil of the display (i.e., the eye position). The periods of the bar targets were inversely proportional to their respective distances from the eye so that the subtended angular resolution of the grating remained constant among all targets. A digital camcorder, with which the images in FIGS. 5( a)-5(c) were obtained, was situated at the eye position.
  • The added-image source 12 was addressed to produce an image of a torus and to place the image of the torus successively, at a constant rate of change, along the visual axis of the display at 16 cm, 33 cm, and 100 cm from the eye, or in reverse order. Meanwhile, the voltage applied to the liquid lens 14 a was changed synchronously with the rate of change of the distance of the virtual torus from the eye. By varying the voltage between 38 Vrms and 49 Vrms, the accommodation cue of the displayed torus image was varied correspondingly from 6 diopters to 1 diopter.
  • Meanwhile, the digital camcorder captured the images shown in FIGS. 5( a)-5(c). Comparing these figures, the virtual torus in FIG. 5( a) only appears in focus whenever the voltage applied to the liquid lens was 38 Vrms (note, the camcorder in FIG. 5( a) was constantly focused at 16 cm, or 6 diopters, distance). Similarly, the virtual torus in each of FIGS. 5( b) and 5(c) only appears in focus whenever the driving voltage was 45 Vrms and 49 Vrms, respectively. These images clearly demonstrate the change of accommodation cue provided by the virtual object.
  • FIGS. 6( a)-6(d) show a simple mixed-reality application in the variable-single-focal-plane mode. The real scene is of two actual coffee mugs, one located 40 cm from the viewer and the other located 100 cm from the viewer (exit pupil). The virtual image was of a COKE® can rendered at two different depths, 40 cm and 100 cm, respectively. A digital camera placed at the exit pupil served as the “eye.” In FIG. 6( a) the digital camera was focused on the mug at 40 cm while the liquid lens was driven (at 46 Vrms) to render the can at a matching depth of 40 cm. Whenever the accommodation cue was matched to actual distance, a sharp image of the can was perceived. In FIG. 6( b) the digital camera was focused on the mug at 100 cm while the liquid lens was driven (at 46 Vrms) to render the can at a depth of 40 cm. The resulting mismatch of accommodation cue to actual distance produced a blurred image of the can. In FIG. 6( c) the camera was focused on the mug at 100 cm while the liquid lens was driven (at 49 Vrms) to render the can at a depth of 100 cm. The resulting match of accommodation cue to actual distance yielded a sharp image of the can. In FIG. 6( d) the camera was focused on the mug at 40 cm while the liquid lens was driven (at 49 Vrms) to render the can at a depth of 100 cm. The resulting mismatch of accommodation cue to actual distance produced a blurred image of the can. Thus, by applying 46 Vrms or 49 Vrms, respectively, to the liquid lens, the virtual image of the COKE can appeared realistically (in good focus) with the two mugs at the near and far distance, respectively. In this example, while a user is interacting with the virtual object, the focusing cue may be dynamically modified to match its physical distance to the user, yielding a realistic augmentation of a virtual object or scene with a real scene. Thus, accurate depth perceptions are produced in an augmented reality application.
  • A series of focus cues can be pre-programmed to animate a virtual object so that it appears to move smoothly in three-dimensional space within the real-world view.
  • Multi-Focal-Plane Mode
  • Although the variable-single-focal-plane mode is useful for many applications, the multi-focal-plane mode addresses the need for a true 3-D display, in which depth perceptions are not limited by a single or variable focal plane that may need an eye tracker or the like to track a viewer's point of interest in a dynamic manner. In other words, the multi-focal-plane mode can be used without the need for feedback or feedback control. Compared to volumetric displays, a display operating in the multi-focal-plane mode balances accuracy of depth perception, practicability of device implementation, and accessibility of computational resources and graphics-rendering techniques.
  • In the multi-focal-plane mode, the liquid lens 14 a is rapidly switched among multiple selectable driving voltages to provide multiple respective focal distances, such as I″ and II″ in FIG. 1, in a time-sequential manner. Synchronously with switching of the focal plane, the pattern produced by the added-image source 12 is updated (“refreshed”) as required to render respective virtual objects at distances approximately matched to the respective accommodation cues being provided by the display, as produced by the liquid lens 14 a. The faster the response speed of the liquid lens 14 a and the higher the refresh rate of the added-image source 12, the greater the number of focal planes that can be presented per unit time. The presentation rate of focal planes can be sufficiently fast to avoid flicker. In the multi-focal-plane mode, the dioptric spacing between adjacent focal planes and the overall range of accommodation cues can be controlled by changing the respective voltages applied to the liquid lens 14 a. This distinctive capability enables flexible management of accommodation cues for a variety of applications, whether they require focal planes spanning a wide depth range or dense focal planes within a relatively smaller depth range for better accuracy.
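  • Conceptually, and only as an assumed sketch rather than the actual control scheme of the display 10, the time-sequential operation amounts to stepping through a fixed list of (drive voltage, image frame) pairs in lock-step. The names drive_liquid_lens and show_frame below are placeholders, and a practical system would synchronize to the display refresh (and allow for lens settling) rather than sleeping for a fixed period.

```python
import itertools
import time

def run_multi_focal_plane_loop(focal_states, drive_liquid_lens, show_frame,
                               frame_period_s=0.0133):
    """Schematic time-sequential driver for the multi-focal-plane mode.

    focal_states : list of (drive_voltage, frame) pairs, one per focal plane, e.g.
                   [(49.0, far_frame), (38.0, near_frame)] for a dual-focal-plane setup.
    Each voltage step and its matching image frame are issued together, so a virtual
    object is shown only while the display's focal plane sits at that object's depth.
    """
    for voltage, frame in itertools.cycle(focal_states):
        drive_liquid_lens(voltage)    # switch the accommodation cue to this focal plane
        show_frame(frame)             # present the image rendered for this focal plane
        time.sleep(frame_period_s)    # placeholder pacing only
```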
  • Use of the display in the time-multiplexed multi-focal-plane mode is made possible, for example, by using the liquid lens 14 a as an active-optical element to control the accommodation cue. There are a few major differences between this mode as used with certain of the displays described herein versus the conventional retinal scanning display (RSD) technique. Firstly, the subject embodiments of the display 10 use a liquid lens 14 a (a refractive active-optical element), rather than a reflective DMM device. Use of the liquid lens 14 a provides a compact and practical display without compromising the range of accommodation cues. Secondly, instead of addressing each pixel individually by a laser-scanning mechanism as in the RSD technique, the subject embodiments use a 2-D added-image source 12 to generate and present high-resolution images (typically in full color) in a time-sequential, image-by-image manner to respective focal planes. Consequently, the subject embodiments do not require the very high addressing speed (at the MHz level) conventionally required to render images pixel-by-pixel. Rather, the addressing speeds of the added-image source 12 and of the active-optical element 14 a are substantially reduced to, e.g., the 100-Hz level. In contrast, the pixel-sequential rendering approach used in a conventional RSD system requires MHz operation speeds for both the DMM device and the mechanism for scanning multiple laser beams.
  • For an example display in a dual-focal-plane mode (as an example of a multi-focal-plane mode), the driving signal of the liquid lens 14 a and an exemplary manner of driving the production of virtual objects are shown in FIGS. 7( a) and 7(b), respectively. Differently from the variable-single-focal-plane mode, in this mode the liquid lens 14 a is fast-switched between two selected driving voltages, as shown in FIG. 7( a). Thus, the accommodation cue provided by the display 10 is consequently fast-switched between selected far and near distances. In synchrony with the signal driving the liquid lens 14 a, far and near virtual objects are rendered on two or more separate image frames and displayed sequentially, as shown in FIG. 7( b). The two or more image frames can be separated from each other by one or more “blank” frames. If the switching rate is sufficiently rapid to eliminate “flicker,” the blank frames are not significantly perceived. To create a substantially flicker-free appearance of the virtual objects rendered sequentially at the two depths, the added-image source 12 and graphics electronics driving it desirably have frame rates that are at least two-times higher than their regular counterparts. Also, the liquid lens 14 a desirably has a compatible response speed. In general, the maximally achievable frame rate, fN, of a display 10 operating in the multi-focal-plane mode is given by:
  • fN = fmin / N    (2)
  • where N is the total number of focal planes and fmin is the lowest response speed (in Hz) among the added-image source 12, the active-optical element 14 a, and the electronics driving these components. The waveforms in FIGS. 7( a)-7(b) reflect operation of all these elements at ideal speed.
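  • Eq. (2) is simple enough to verify numerically. The fragment below is illustrative only; it reproduces, for example, the roughly 7-Hz dual-focal-plane limit associated with a liquid lens having a 74-ms response time (cf. Table 1 below).

```python
def max_display_frame_rate(f_min_hz, n_focal_planes):
    """Eq. (2): f_N = f_min / N, the rate at which the full set of N focal planes is refreshed."""
    return f_min_hz / n_focal_planes

lens_response_s = 0.074                            # ~74 ms response time (Arctic 320)
f_min = 1.0 / lens_response_s                      # ~13.5 Hz
print(round(max_display_frame_rate(f_min, 2), 1))  # ~6.8 Hz, i.e., about 7 Hz for two focal planes
```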
  • EXAMPLE 2
  • In this example, the liquid lens 14 a (Varioptic “Arctic 320”) was driven by a square wave oscillating between 49 Vrms and 38 Vrms, respectively. Meanwhile, the accommodation cue provided by the display 10 was fast-switched between the depths of 100 cm and 16 cm. The period, T, of the driving signal was adjustable in the image-rendering program. Ideally, T should be set to match the response speed of the slowest component in the display 10, which determines the frame rate of the display operating in the dual-focal-plane mode. For example, if T is set at 200 ms, matching the speed (fmin) of the slowest component in the display 10, the speed of the display will be 5 Hz, and the virtual objects at the two depths will appear alternatingly to a user of the display. If T is set at 20 ms (50 Hz), which is faster than the slowest component (in one example the highest refresh rate of the electronics driving the added-image source 12 is 75 Hz), then the virtual objects will be rendered at a speed of about fmin/2=37.5 Hz. In another example, the control electronics driving the liquid lens 14 a allow for a high-speed operational mode, in which the driving voltage is updated every 600 μs to drive the liquid lens. The response speed of this liquid lens 14 a (shown in FIG. 8 as the curve formed with diamond-shaped markers) is approximately 75 ms. The maximum refresh rate of the added-image source 12 is 85 Hz and of the electronics driving it is 75 Hz. Hence, in this example the speed at which the liquid lens 14 a can be driven is the limiting factor regarding the speed of the display 10.
  • This is shown in Table 1. In the left-hand column of Table 1, potential limiting factors to the maximum speed of the display operating in a dual-focal-plane mode are listed, including the liquid lens 14 a, the added-image source 12, and the driving electronics (“graphics card”). For example, if the particular liquid lens 14 a used in the display 10 is the “Arctic 320”, then the maximum achievable frame rate in the dual-focal-plane mode is 7 Hz. A more recent type of liquid lens, namely the “Arctic 314” from Varioptic, has a purported 5-10 times faster response speed than the Arctic 320. In FIG. 8, the curve indicated by circles shows a 9-ms rise time for the Arctic 314 to reach 90% of its maximum optical power. With this liquid lens, the highest achievable frequency of the display operating in the dual-focal-plane mode would be 56 Hz if the liquid lens were the limiting factor of speed in the display. This frame rate is almost at the flicker-free frequency of 60 Hz.
  • TABLE 1
    Limiting Factor              Hardware Speed (ms)    Max. Display Speed (Hz)
    Liquid Lens, Arctic 320      74                     7
    Graphics Card, 75 Hz         13.3                   37.5
    OLED Micro-display, 85 Hz    11.8                   42.5
    Liquid Lens, Arctic 314      9                      56
    Flicker-Free Frequency       8.4                    60
  • Second Representative Embodiment
  • EXAMPLE 3
  • A display 30 according to this embodiment and example comprised a faster liquid lens 34 a than used in the first embodiment. Specifically, the faster liquid lens 34 a was the “Arctic 314” manufactured by Varioptic, Inc. This liquid lens 34 a had a response speed of about 9 ms, which allowed the frame rate of the display 30 (operating in dual-focal-plane mode) to be increased to 37.5 Hz. Referring to FIG. 9( a), the display 30 (only the respective portion, termed a “monocular” portion, for one eye is shown; a binocular display would include two monocular portions for stereoscopic viewing) also included a spherical concave mirror 38, a 2-D added-image source 32, and a beam-splitter (BS) 36.
  • An alternative object-rendering scheme was used in this embodiment and example to reduce artifacts and further improve the accuracy of the convergence cues produced by the display 30. The liquid lens 34 a had a clear aperture of 2.5 mm rather than the 3-mm clear aperture of the liquid lens 14 a. To compensate for the reduced clear aperture, certain modifications were made. As shown in FIG. 9( a), the liquid lens 34 a was offset from the center of curvature O of the mirror 38 by Δ; thus the exit pupil of the display 30 was magnified by
  • mp = R / (R + 2Δ)
  • to the size of the clear aperture of the liquid lens 34 a. The focus cue is specified by the distance z from the virtual image to the exit pupil of the display 30, given as:
  • z = −R(u + Δ + uΔΦ) / [2(u + Δ + uΔΦ) + R(1 + uΦ)] + ΔR / (R + 2Δ)    (3)
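  • Under the same sign conventions as Eq. (1), and purely as an illustrative transcription that has not been checked against the as-built prototype, Eq. (3) can be coded as follows; the function and parameter names are placeholders.

```python
def focus_cue_with_offset_lens(u, R, delta, phi):
    """Evaluate Eq. (3): distance z from the virtual image to the exit pupil of the display.

    u     : axial distance from the 2-D added-image source to the focusing lens (m)
    R     : radius of curvature of the concave mirror (m)
    delta : axial offset of the liquid lens from the mirror's center of curvature (m)
    phi   : combined optical power of the focusing lens, as defined for Eq. (1) (diopters)
    Signs follow the usual optical-design convention; delta = 0 reduces Eq. (3) to Eq. (1).
    """
    a = u + delta + u * delta * phi
    return -R * a / (2 * a + R * (1 + u * phi)) + delta * R / (R + 2 * delta)
```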
  • The liquid lens 34 a had a variable optical power ranging from −5 to +20 diopters by applying an AC voltage ranging from 32 Vrms to 60 Vrms, respectively. The other optical components (e.g., the beam-splitter 36 and singlet objective lens 34 b) were as used in Example 1. The axial distance t between the objective lens 34 b and the liquid lens 34 a was 6 mm, the offset Δ was 6 mm, and the object distance (−u) was 34 mm. With these parameters, the display 30 exhibited a 24° diagonal field-of-view (FOV) with an exit pupil of 3 mm. A comparison of the Arctic 314 and Arctic 320 lenses is shown in Table 2.
  • TABLE 2
    Parameter                 ARCTIC 320                 ARCTIC 314
    Applied voltage           0-60 Vrms                  0-60 Vrms
    Optical Power             −5D~20D                    −5D~20D
    Effective aperture        3.0 mm                     2.5 mm
    Response time             75 msec (90% rise time)    9 msec (90% rise time)
    Operate wavelength        Visible                    Visible
    Linear range              38~49 Vrms                 38~49 Vrms
    Drive Freq.               1 kHz                      1 kHz
    Wavefront distort.        <0.5 μm                    80 nm (typ.)
    Transmittance @ 587 nm    >90% rms                   >97% rms
  • Given the dependence of the optical power Φ upon the voltage U applied to the liquid lens 34 a, FIG. 9( b) is a plot of the focus cue (z) as a function of the voltage U applied to the liquid lens (the focus cue was calculated per Eq. (3)). To produce a substantially flicker-free appearance of 3-D virtual objects rendered sequentially on multiple focal planes, the speed requirements of the liquid lens 34 a, of the 2-D added-image source 32, and of the driving electronics (“graphics card”) were proportional to the number of focal planes. Thus, this example operated at up to 37.5 Hz, which is half the 75-Hz frame rate of the driving electronics. FIG. 9( b) suggests that the dual focal planes can be positioned as far as 0 diopter or as close as 8 diopters to the viewer by applying respective voltages ranging between 51 Vrms and 32 Vrms, respectively, to the liquid lens 34 a. For example, in one experimental demonstration, two time-multiplexed focal planes were positioned at 1 diopter and 6 diopters with application of 49 Vrms and 37 Vrms, respectively, to the liquid lens 34 a.
  • As illustrated in FIG. 10( a), the liquid lens 34 a was driven by a square wave of period T, fast-switching between 49 Vrms and 37 Vrms to temporally multiplex the focal planes at 1 diopter and 6 diopters, respectively. In synchrony with energization of the liquid lens 34 a, two frames of images (I and II), corresponding to far and near objects, respectively, were rendered and displayed sequentially as shown in FIG. 10( b). Correct occlusion can be portrayed by creating a stencil mask for near objects rendered in frame II. As an example, frame I in FIG. 10( b) shows the superposition of a sphere and the mask for a torus in front of the sphere. In this rendering, the duration t0 of both the far and near frames is one-half of the period T. The refresh rate of the display 30 is given as f=1/T=1/(2t0), which specifies the speed at which the far and near focal states are rendered. Limited by the 75-Hz frame rate of the electronics in this example, the minimum value of t0 was 13.3 ms, and the highest refresh rate of the display was 37.5 Hz to complete the rendering of both far and near focal states. A depth-weighted blending algorithm can be used to improve the focus-cue accuracy for objects located between two adjacent focal planes.
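  • The depth-weighted blending mentioned above is discussed further in connection with the depth-fused (DFD) operation described later. As a minimal sketch only, a linear weighting in dioptric space divides each pixel's luminance between the two adjacent focal planes in proportion to the pixel's rendered depth; the function below assumes that linear form (cf. the linear fusing filter of FIG. 19) and is not a transcription of any particular algorithm in this disclosure.

```python
def linear_depth_weights(z, z_front, z_back):
    """Linear depth-weighted fusing between two adjacent focal planes (depths in diopters).

    z       : rendered depth of the pixel, with z_back <= z <= z_front
    z_front : dioptric distance of the nearer focal plane (larger diopter value)
    z_back  : dioptric distance of the farther focal plane (smaller diopter value)
    Returns (w_front, w_back): luminance fractions for the two planes, summing to 1.
    A pixel exactly midway between the planes receives the 0.5/0.5 luminance ratio.
    """
    w_front = (z - z_back) / (z_front - z_back)
    return w_front, 1.0 - w_front
```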
  • Using the lens-driving scheme of FIGS. 10( a) and 10(b), FIGS. 11( a) and 11(b) show experimental results produced by the display operating at 37.5 Hz in the multi-focal-plane mode. Three real bar-type resolution targets, shown on the left side of each of FIGS. 11( a)-11(d), were placed along the visual axis of the display. The targets at 6 diopters (large size) and 1 diopter (small size) were used as references for visualizing the focus cues rendered by the display. The target at 3 diopters (medium size) helped to visualize the transition of focus cues from far to near distances and vice versa. To obtain the respective picture shown in FIGS. 11( a)-11(d), a camera was mounted at the eye location shown in FIG. 9( a). Two virtual objects, a sphere and a torus, were rendered sequentially at 1 diopter and 6 diopters, respectively. As shown in FIG. 11( a), when the camera was focused on the bar target at 6 diopters, the torus (rendered at 6D) appears to be in focus while the sphere shows noticeable out-of-focus blurring. FIG. 11( b) demonstrates a situation in which the camera was focused on the sphere at 1 diopter. The sphere appears to be in focus while the torus is not in focus. The virtual objects were animated in such a way that they both moved along the visual axis at a constant speed from either 6 diopters to 1 diopter, or vice versa. Synchronously, the voltage applied to the liquid lens 34 a was adjusted accordingly such that the locations of the two focal planes always corresponded to the respective depths of the two objects. These results demonstrated correct correspondence of focus cues for the two virtual objects, matching with the focus-setting change of the camera.
  • In this example, since the response speed of the liquid lens 34 a was about 9 ms, longitudinal shifts of the focal planes during the settling time of the liquid lens were expected as the driving signal was switched between the two voltages. This phenomenon can produce minor image blur and less than ideally accurate depth representations. A liquid lens (or other adaptive optical element) having a faster response speed can reduce these artifacts and render more accurate focus cues at high speed.
  • Experiments were also performed to investigate another scheme for image rendering. As shown in FIG. 10( c), a blank frame (having a duration t1) was inserted to lead the rendering of each actual image frame (the duration of which being reduced to t2=t0−t1) to maintain synchrony with the liquid lens 34 a. Limited by the 75-Hz refresh rate of the graphics electronics, the minimum value for both t1 and t2 was 13.3 ms, and the highest refresh rate of the display 30 operating in the multi-focal-plane mode was f=1/(2t1+2t2)=18.75 Hz.
  • FIGS. 11( c) and 11(d) show operation of the display at near and far focus, respectively, using the rendering scheme of FIG. 10( c). Compared to FIGS. 11( a) and 11(b), the in-focus virtual objects in FIGS. 11( c) and 11(d) (i.e., the torus and the sphere, respectively) appear to be sharper than the out-of-focus objects (i.e., the sphere and the torus, respectively), matching well with the real reference targets at 1 diopter and 6 diopters. The insets of FIGS. 11( c) and 11(d), showing the same area as in FIG. 11( a), demonstrated improved focus cues. Furthermore, the occlusion cue became more prominent than shown in FIGS. 11( a) and 11(b), with a sharper boundary between the near torus and far sphere.
• Due to the shortened duration of the image frames, the brightness level may be correspondingly lower, as quantified by the relative brightness B:
• B = \frac{t_2}{t_1 + t_2} \qquad (4)
• If t1=t2=13.3 ms, the relative brightness level in FIGS. 11( c) and 11(d) is B=0.5, which is half the brightness of FIGS. 11( a) and 11(b), with B=1. Another possible artifact is flicker, which was more noticeable at 18.75 Hz than at 37.5 Hz.
• A faster liquid lens and/or added-image source and higher-speed driving electronics are beneficial for producing accurate focus cues at a substantially flicker-free rate. To reduce flicker, the liquid lens can also be driven in an overshoot (over-drive) manner, analogous to the technique used to shorten settling time in auto-focusing imaging systems. Other active-optical technologies, such as high-speed DMM and liquid-crystal lenses, could also be used in the time-multiplexed multi-focal-plane mode to reduce flicker.
  • In any event, by using a faster active-optical element, a display operating in the time-multiplexed multi-focal-plane mode was produced and operated in this example. The display was capable of rendering nearly correct focus cues and other depth cues such as occlusion and shading, and the focus cues were presentable within a wide range, from infinity to as close as 8 diopters.
• We compared two rendering schemes having different refresh rates: the first scheme had a higher refresh rate (e.g., f=37.5 Hz) and produced a brighter image (B=1.0), but with reduced image sharpness and focus-cue accuracy due to the limited response speed of the liquid lens; the second scheme produced sharper images and more accurate focus cues, but at a compromised speed (e.g., f=18.75 Hz) and image brightness (B=0.5) due to the limited frame rate of the driving electronics.
  • Third Representative Embodiment
  • This embodiment is directed to a display that is gaze-contingent and that is capable of rendering nearly correct focus cues in real-time for the attended region of interest. The display addresses accommodation cues produced in the variable-single-focal-plane mode in synchrony with the graphical rendering of retinal blur cues and tracking of the convergence distance of the eye.
• This embodiment is termed herein a “variable-focus gaze-contingent display” (VF-GCD). It can produce improved focus-cue presentation and better matching of accommodation and convergence in the variable-single-focal-plane mode. Thus, this embodiment utilizes a display operating in the variable-single-focal-plane mode with integrated convergence tracking to provide accurate rendering of real-time focus cues. Unlike conventional stereoscopic displays, which typically fix the distance of the focal plane in the visual space, the VF-GCD automatically tracks the viewer's current 3-D point-of-gaze (POG) and adjusts the focal plane of the display to match the viewer's current convergence distance in real-time. (In contrast, a display operating in the variable-single-focal-plane mode with a user interface typically has a feedback delay produced by the user mentally processing feedback information and utilizing that information in responding to accommodation and/or convergence cues.) Also, in contrast to volumetric displays that typically render the entire 3-D scene as a discretized space of voxels, the VF-GCD renders the projected 2-D image of the 3-D scene onto moving image planes, thereby significantly improving the rendering efficiency as well as taking full advantage of commercially available graphics electronics for rendering focus cues.
  • This embodiment incorporates three principles for rendering nearly correct focus cues: addressable accommodation cues, convergence tracking, and real-time rendering of retinal blur cues. Reference is made again to FIGS. 2( a)-2(d), discussed above.
  • By passively involving the viewer (user) for feedback purposes, the VF-GCD forms a closed-loop system that can respond in real-time to user feedback in the form of convergent or divergent eye rotations. See FIG. 12. In particular, by tracking the viewer's 3-D POG, the convergence distance can be computed, so that the accommodation cue rendered by the display can be matched accordingly. This tracking can be performed using an “eye-tracker” which obtains useful information from the subject's gaze. Likewise, the scene elements can be rendered with appropriately simulated DOF effects using the graphics electronics. The combination of eye-tracking together with an addressable active-optical element and DOF rendering provides visual feedback to the viewer in the form of updated focus cues, thereby closing the system in a feedback sense.
• In this embodiment the focal plane moves in three dimensions, matching with the convergence depth of the viewer. In practice, the addressable accommodation cue is realized by an active-optical element having variable optical power. From a practical standpoint, the active-optical element should satisfy the following conditions: (1) It should provide a variable range of optical power that is compatible with the accommodative range of the human eye. (2) It should be optically conjugate to the entrance pupil of the viewer, making the display appear to have a fixed FOV that is independent of focus changes. (3) It should have a response speed that substantially matches the speed of rapid eye movements.
  • The display of this embodiment comprises a liquid lens (Arctic 314 made by Varioptic), which has a variable optical power ranging from −5 diopters (−5D) (1 diopter=1/meter) to 20D, a clear aperture of ˜3 mm, and a response speed of about 10 msec.
  • To maintain proper focus cues, the VF-GCD computes changes in the viewer's convergence distance using a binocular eye-tracking system adapted from a pair of 2-D monocular eye-trackers. In general, current monocular eye-trackers utilize one or more of non-imaging-based tracking, image-based tracking, and model-based tracking methods. Among the image-based tracking methods, dark-pupil tracking is generally regarded as the simplest and most robust.
  • To compute the viewer's convergence distance in 3-D space, a pair of monocular trackers was used to triangulate the convergence point using the lines of sight of both eyes, as shown in FIG. 13. Using multi-points calibration, the 2-D gaze points (x1′, y1′) and (x2′, y2′) for left (E1) and right (E2) eyes, respectively, are determined in the local coordinate system of a calibration plane (bold grey line in FIG. 12) at an established distance z0 from the eye in 3-D space. The frame of reference of the 3-D space has its origin Oxyz, located at the mid-point between the eyes. By using the relative position (x0′, y0′), which is the orthogonal projection of the world origin onto the calibration plane, the points (xi′, yi′) may be transformed into their world-space correspondences (xi, yi, z0) so that the convergence point (x, y, z) is given by:
• \left\{ \begin{aligned} z &= \frac{IPD}{IPD + x_1 - x_2}\, z_0 \\ x &= \frac{x_1 + x_2}{2} \cdot \frac{z}{z_0} \\ y &= \frac{y_1 + y_2}{2} \cdot \frac{z}{z_0} \end{aligned} \right. \qquad (5)
  • where IPD is the inter-pupillary distance of the viewer. As shown in FIG. 13, as the eye-tracker tracks the 3-D POG in real-time, the convergence distance z is updated for the display optics and the image-rendering system, such that the image plane is translated to the same depth z for the presentation of the correct accommodation cue.
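• A minimal sketch of the triangulation in Eq. (5) is given below. It assumes the two gaze points have already been transformed into world-space coordinates on the calibration plane at distance z0; the function and argument names are illustrative.

```python
# Sketch of Eq. (5): triangulate the 3-D convergence point from the left/right gaze
# points (x1, y1) and (x2, y2) on the calibration plane at distance z0, given the
# viewer's inter-pupillary distance (IPD).  Units are arbitrary but must be consistent.

def convergence_point(p_left, p_right, ipd: float, z0: float):
    """Return the 3-D point of gaze (x, y, z)."""
    x1, y1 = p_left
    x2, y2 = p_right
    z = ipd / (ipd + x1 - x2) * z0        # convergence distance along the depth axis
    x = (x1 + x2) / 2.0 * z / z0
    y = (y1 + y2) / 2.0 * z / z0
    return x, y, z

if __name__ == "__main__":
    # Example: gaze points 5 mm apart on a calibration plane 1 m away, IPD = 65 mm.
    print(convergence_point((2.5, 0.0), (-2.5, 0.0), ipd=65.0, z0=1000.0))
```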
  • The VF-GCD also desirably includes an image-rendering system capable of simulating real-time retinal blur effects, which is commonly referred to as “DOF rendering.” Depth-of-field effects improve the photo-realistic appearance of a 3-D scene by simulating a thin-lens camera model with a finite aperture, thereby inducing a circle of confusion into the rendered image for virtual objects outside the focal plane. Virtual scenes rendered with DOF effects provide a more realistic appearance of the scene than images rendered with the more typical pinhole-camera model and can potentially reduce visual artifacts. Real-time DOF has particular relevance in the VF-GCD since the focal distance of the display changes following the convergence distance of the viewer. Maintaining the expected blurring cues is thus important to preventing depth confusion as the viewer browses objects at varying depths in the scene.
• Graphically rendering DOF effects can be done in any of several ways that differ from one another significantly in their rendering accuracy and speed. For instance, ray-tracing and accumulation-buffer methods provide good visual results on rendered blur cues but are typically not feasible for real-time systems. Single-layer and multiple-layer post-processing methods tend to yield acceptable real-time performance with somewhat lesser visual accuracy. The latter methods are made computationally feasible due to the highly parallel nature of their algorithms, which makes them well suited to implementation on currently available high-performance graphics processing units (GPUs). We used a single-layer post-processing DOF method. To illustrate this DOF algorithm, note the rabbits rendered in FIGS. 14( a)-14(f). Nearly correct retinal blur cues can be derived by blending the image rendered by the pinhole camera model (FIG. 14( a)) with another down-sampled and post-blurred image (FIG. 14( b)) using a depth map (also known as a degree-of-blur map; FIGS. 14( c) and 14(e)) to weight the relative contributions of each image, formulated as I′=I0+(I1−I0)×DOB. The final blended images are given in FIGS. 14( d) and 14(f) for the eyes converging at 3D and 1D, respectively.
• A key component of the DOF algorithm is the computation of the DOB (degree-of-blur) map, which is used for weighted blending of the pinhole and blurred images. The DOB map is created by normalizing the depth values Z′, which are retrieved from the z-buffer for the image, with respect to the viewer's current convergence distance Z given by the binocular eye-tracker:
• \mathrm{DOB} = \frac{\left| Z' - Z \right|}{Z_{near} - Z_{far}}, \qquad Z_{far} \le Z',\, Z \le Z_{near} \qquad (6)
• where Znear and Zfar indicate the nearest and furthest depths, respectively, of the rendered 3-D space from the viewer's eyes. Note that all distances expressed in capital letters in Eq. (6) are defined in dioptric rather than Euclidean space. Taking FIG. 14( c) as an example, when the eye is focused at the near distance of Z=Znear=3D, the rabbit at Z′=3D appears totally black (indicating zero blur), while the rabbit at Z′=1D appears white, indicating maximum blur.
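• The per-pixel blend described above can be sketched as follows. The array names (sharp, blurred, depth_map) and the NumPy implementation are illustrative assumptions, with all depths expressed in diopters as in Eq. (6).

```python
import numpy as np

# Sketch of the single-layer post-processing DOF blend: compute the degree-of-blur
# (DOB) map per Eq. (6) and blend the pinhole-camera rendering toward a pre-blurred
# copy in proportion to the DOB.  'depth_map' holds the per-pixel depth Z' (diopters)
# retrieved from the z-buffer; 'z_conv' is the tracked convergence distance Z.

def degree_of_blur(depth_map, z_conv, z_near, z_far):
    """Per-pixel DOB: 0 = in focus, 1 = maximally blurred."""
    dob = np.abs(depth_map - z_conv) / (z_near - z_far)
    return np.clip(dob, 0.0, 1.0)

def dof_blend(sharp, blurred, depth_map, z_conv, z_near=3.0, z_far=1.0):
    """Blend the pinhole (sharp) and pre-blurred images weighted by the DOB map."""
    dob = degree_of_blur(depth_map, z_conv, z_near, z_far)[..., None]
    return sharp + (blurred - sharp) * dob

if __name__ == "__main__":
    sharp = np.ones((4, 4, 3))
    blurred = np.zeros((4, 4, 3))
    depth = np.full((4, 4), 1.0)                        # scene content at 1 diopter
    out = dof_blend(sharp, blurred, depth, z_conv=3.0)  # eye converged at 3 diopters
    print(out[0, 0])                                    # fully blurred pixel (equals 'blurred')
```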
• We constructed a VF-GCD comprising a variable-focus display, convergence tracking, and real-time DOF rendering. The optical path for the VF-GCD was arranged perpendicularly, mainly for ergonomic reasons, to prevent the spherical mirror from blocking the center FOV of both eyes. The key element for controlling focal distance in real-time was a liquid lens, which was coupled to an imaging lens to provide variable and sufficient optical power. The entrance pupil of the viewer was optically conjugate with the aperture of the liquid lens. As a result, without affecting the size of the FOV, the focus adjustment of the eye was optically compensated by the optical power change of the liquid lens, thus forming a closed-loop control system as shown in FIG. 12. In addition, two commercial eye-trackers (ViewPoint, Arrington Research, Inc.) were attached to the VF-GCD, one for each eye, by setting up two near-infrared (NIR) cameras, with NIR LED illumination attached to each camera. Each NIR camera has a pixel resolution of 640×480 pixels at 30 frames per second (fps) and is capable of tracking the 2-D POG in real-time.
• The capability of the VF-GCD was demonstrated in an experiment as outlined in FIGS. 15( a)-15(d). To stimulate convergence changes by the viewer, three bar-type resolution targets were arranged along the visual axis of the VF-GCD at 3 diopters, 2 diopters, and 1 diopter, respectively. Three rabbits were graphically rendered at these corresponding locations, as shown in FIGS. 15( c) and 15(d). During the experiment, the viewer alternately changed his focus from far (1D) to near (3D) distances and then from near to far. FIG. 15( a) shows the real-time tracking result on the convergence distance of the viewer, versus time. As shown in FIG. 15( a), the eye-tracked convergence distances approximately matched the distances of the real targets. (Any slight mismatch may be explained in part by the approximately 0.6D depth-of-field of the eyes.) FIG. 15( b) shows the synthetic-focus-cue effects in the VF-GCD. Similar to the images shown in FIGS. 14( a)-14(f), as the eye was focused at the far distance 1D, the rabbit at the corresponding distance was sharply and clearly rendered while the other two rabbits (at 2D and 3D, respectively) were out of focus and hence proportionately blurred with respect to the defocus distance from 1D; and vice versa when the eye was focused at either 2D or 3D. The rendering program ran on a desk-top computer equipped with a 3.20 GHz Intel Pentium 4 CPU and a Geforce 8600 GS graphics card, which maintained a frame rate of 37.6 fps for rendering retinal blur cues.
  • FIGS. 15( c) and 15(d) provide further comparison of the addressable focus cues rendered by the VF-GCD against the focus cues of real-world targets. A digital camera was disposed at the exit-pupil location of the VF-GCD. The camera was set at f/4.8, thereby approximately matching the speed of the human eye. As shown in FIG. 15( c), when the observer focused at the near distance 3D, the rabbit at 3D was rendered sharply and clearly while the rabbits at 2D and 1D were blurred. Meanwhile, the focal distance of the VF-GCD was adjusted to 3D using the liquid lens, thereby matching with the viewer's convergence distance (and vice versa in FIG. 15( d)) as the viewer focused at 1D. The images in FIGS. 15( c) and 15(d) simulate the retinal images of looking through the VF-GCD at different convergence conditions. The virtual rabbits located at three discrete depths demonstrated nearly correct focus cues similar to those of the real resolution targets. The results indicated a viewing situation with the VF-GCD that was analogous to the real-world, with nearly correct focus cues being rendered interactively by the display hardware (i.e., liquid lens) and software (i.e., graphics card).
• This embodiment is directed to a variable-focus gaze-contingent display that is capable of rendering nearly correct focus cues of a volumetric space in real-time and in a closed-loop manner. Compared to a conventional stereoscopic display, the VF-GCD rendered focus cues more accurately, with reduced visual artifacts such as the conflict between convergence and accommodation. Compared to conventional volumetric displays, the VF-GCD was much simpler and conserved hardware and computational resources.
  • Although this embodiment and example were described in the context of a monocular system, the embodiment encompasses corresponding binocular systems that can provide both binocular and monocular depth cues.
  • Fourth Representative Embodiment
  • This embodiment is directed to the multi-focal-plane mode that operates in a so-called “depth fused” manner. A large number of focal planes and small dioptric spacings between them are desirable for improving image quality and reducing perceptual effects in the multi-focal-plane mode. But, to keep the number of focal planes to a manageable level, a depth-weighted blending technique can be implemented. This technique can lead to a “depth-fused 3-D” (DFD) perception, in which two overlapped images displayed at two different respective depths may be perceived as a single-depth image. The luminance ratio between the two images may be modulated to change the perceived depth of the fused image. The DFD effect can be incorporated into the multi-focal-plane mode. Another concern addressed by this embodiment is the choice of diopter spacing between adjacent focal planes.
  • In this embodiment a systematic approach is utilized to address these issues. It is based on quantitative evaluation of the modulation transfer functions (MTF) of DFD images formed on the retina. The embodiment also takes into account most of the ocular factors, such as pupil size, monochromatic and chromatic aberrations, diffraction, Stiles-Crawford effect (SCE), and accommodation; and also takes into account certain display factors, such as dioptric midpoint, dioptric spacing, depth filter, and spatial frequency of the target. Based on the MTFs of the retinal images of the display and the depth of field (DOF) of the human visual system under photopic viewing conditions, the optimal arrangement of focal planes was determined, and the depth-weighted fusing function between adjacent focal planes was characterized.
  • FIG. 16 illustrates the depth-fusion concept of two images displayed on two adjacent focal planes separated by a dioptric distance of Δz. The dioptric distance from the eye to the front focal plane is z1 and to the rear plane is z2. When the images shown on the two-layer displays are aligned such that each pixel on the front and rear planes subtends the same visual angle to the eye, the front and back pixels (e.g., A and B, respectively) are viewed as completely overlapped at the viewpoint and fused as a single pixel (e.g., C). The luminance of the fused pixel (L) is summed from the front and rear pixels (L1 and L2, respectively), and the luminance distribution between the front and back pixels is weighted by the rendered depth z of the fused pixel. These relationships may be expressed as:

• L = L_1(z) + L_2(z) = w_1(z)L + w_2(z)L \qquad (7)
• where w1(z) and w2(z) are the depth-weighted fusing functions modulating the luminance of the front and back focal planes, respectively. Typically, w1(z)+w2(z)=1 is enforced such that the luminance of the fused pixel is L1 when w1(z)=1 and is L2 when w2(z)=1. We hereafter assume the peak luminance of the individual focal planes is normalized to be uniform, without considering system-specific optical losses that may be present in some forms of multi-focal-plane displays (e.g., in spatially multiplexed displays where light may be projected through a thick stack of display panels). Optical losses of a system should be characterized to normalize non-uniformity across the viewing volume before applying depth-weighted fusing functions.
• The depth-fused 3-D perception effect indicates that, as the depth-weighted fusing functions (w1 and w2) change, the perceived depth ẑ of the fused pixel will change accordingly. This is formulated as:

• \hat{z} = f(w_1, w_2) \qquad (8)
• For instance, when w1(z)=1, the perceived depth should be z1, and should be z2 when w2(z)=1. In a generalized n-focal-plane DFD system, the dioptric distances from the eye to the n focal planes are denoted as z1, z2, . . . , zn in distance order, where z1 is the closest one to the eye. We assume that the 3-D scenes contained between a pair of adjacent focal planes are rendered only on this corresponding focal-plane pair. Under this assumption, a given focal plane at zi will render all the 3-D scenes contained between the (i−1)th and the (i+1)th focal planes. Within the depth range of zi−1 ≥ z ≥ zi+1, many scene points may be projected onto the same pixel of the ith focal plane, among which only the closest scene point to the eye is un-occluded and thus effectively determines the depth-weighted fusing function modulating the luminance of the specific pixel.
• The closest scene point corresponding to a specific pixel can typically be retrieved from the z-buffer in a computer graphics renderer. Let us assume the depth of the closest 3-D scene point projected onto a given pixel of the ith focal plane is z. Based on the depth-fused 3-D perception described above, the luminance of the 3-D point is distributed between the (i−1)th and ith focal planes if zi−1 ≥ z ≥ zi, or otherwise between the ith and (i+1)th focal planes if zi ≥ z ≥ zi+1. The luminance attribution to the ith focal plane is weighted by the depth z. It may be characterized by the ratio of the luminance attribution Li(z) on the ith focal plane at zi to that of the total scene luminance L(z), written as gi(z)=Li(z)/L(z), where L(z)=Li−1(z)+Li(z) if zi−1 ≥ z ≥ zi or L(z)=Li(z)+Li+1(z) if zi ≥ z ≥ zi+1. In general, the depth-weighted fusing function, wi(z), of the ith focal plane can be defined as:
• w_i(z) = \begin{cases} g_i(z), & z_i \ge z \ge z_{i+1} \quad (1 \le i \le n) \\ 1 - g_{i-1}(z), & z_{i-1} \ge z \ge z_i \quad (2 \le i \le n) \end{cases} \qquad (9)
  • In summary, by knowing the rendered depth z of a 3-D virtual scene, the luminance levels of the multi-focal plane images can be modulated accordingly by the depth-weighted fusing functions in Eq. (9) to render pseudo-correct focus cues.
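• The bookkeeping of Eqs. (7)-(9) amounts to finding the focal-plane pair that brackets the rendered depth and splitting the pixel luminance between those two planes. The sketch below assumes a simple linear luminance ratio gi purely for illustration; the optimized nonlinear form of gi is derived later in Eq. (11).

```python
# Sketch of the depth-weighted fusing function w_i(z) of Eq. (9) for a stack of focal
# planes given in diopters, ordered from nearest (largest diopter) to farthest.
# A linear g_i is assumed here purely for illustration.

def g_linear(z, z_front, z_back):
    """Fraction of luminance assigned to the front plane of the pair (z_front, z_back)."""
    return (z - z_back) / (z_front - z_back)

def fusing_weights(z, planes):
    """Return w_i(z) for every focal plane; the two weights of the bracketing pair sum to 1."""
    w = [0.0] * len(planes)
    for i in range(len(planes) - 1):
        if planes[i] >= z >= planes[i + 1]:        # scene point lies between planes i and i+1
            g = g_linear(z, planes[i], planes[i + 1])
            w[i] = g                                # front plane of the pair
            w[i + 1] = 1.0 - g                      # back plane of the pair
            break
    return w

if __name__ == "__main__":
    planes = [3.0, 2.4, 1.8, 1.2, 0.6, 0.0]         # six focal planes (diopters)
    print(fusing_weights(2.1, planes))              # midpoint of the 2.4D/1.8D pair -> 0.5 and 0.5
```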
• In displays comprising DFD operability, the adjacent focal planes are separated in space by a considerable distance. The retinal image quality is expected to be worse when the eye is accommodated at a distance in between the front and back focal planes than when focusing on the front or back focal planes. However, both the dioptric spacing between adjacent focal planes and the depth-weighted fusing functions can be selected such that the perceived depth of the fused pixel closely matches the rendered depth z and the image-quality degradation is minimally perceptible as the observer accommodates to different distances between the focal planes.
  • The optical quality of a fused pixel in DFD displays may be quantitatively measured by the point spread function (PSF) of the retinal image, or equivalently by the modulation transfer function (MTF), which is characterized by the ratio of the contrast modulation of the retinal image to that of a sinusoidal object on the 3-D display. Without loss of generality, hereafter a dual-focal plane display is assumed and the results therewith can be extended to n focal planes. Based on Eq. (7), when the eye is accommodated at the rendered distance z, the PSF of the fused pixel, PSF12, may be described as:

• \mathrm{PSF}_{12}(z) = w_1(z)\,\mathrm{PSF}_1(z, z_1) + w_2(z)\,\mathrm{PSF}_2(z, z_2) \qquad (10)
• where PSF1(z, z1) and PSF2(z, z2) are the point spread functions of the front and back pixels, respectively, corresponding to the eye-accommodation distance z. The MTF of a DFD display can then be calculated via the Fourier transform (FT) of PSF12 or, equivalently, from the FTs of PSF1 and PSF2.
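• Because the Fourier transform is linear, the MTF of the fused pixel can be computed directly from the weighted sum of the two simulated PSFs. The sketch below uses toy Gaussian PSFs in place of the ray-traced simulations; the array contents and function name are illustrative.

```python
import numpy as np

# Sketch of Eq. (10): blend the front- and back-plane PSFs with the fusing weights and
# take the magnitude of the 2-D Fourier transform of the blended PSF as MTF12,
# normalized to unity at zero spatial frequency.

def fused_mtf(psf_front, psf_back, w1):
    """Return MTF12 = |FT(w1*PSF1 + (1 - w1)*PSF2)|, normalized to its peak."""
    psf12 = w1 * psf_front + (1.0 - w1) * psf_back
    mtf = np.abs(np.fft.fftshift(np.fft.fft2(psf12)))
    return mtf / mtf.max()

if __name__ == "__main__":
    # Toy Gaussian PSFs standing in for the simulated front/back-plane PSFs.
    x = np.linspace(-1.0, 1.0, 64)
    xx, yy = np.meshgrid(x, x)
    psf1 = np.exp(-(xx**2 + yy**2) / 0.02)   # closer to the eye's focus (sharper)
    psf2 = np.exp(-(xx**2 + yy**2) / 0.08)   # more defocused (broader)
    print(fused_mtf(psf1, psf2, w1=0.5).shape)
```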
  • Multiple factors may affect the retinal image quality—PSF12 and MTF12—of a DFD display. Table 3 categorizes the parameters, along with their notation and typical range, into two types: ocular and display factors. Ocular factors are mostly related to the human visual system when viewing DFD images from a viewer's perspective. These variables, including pupil size, pupil apodization, reference wavelength, and accommodation state, should be carefully considered when modeling the eye optics. Display factors are related to the practical configuration of the display with DFD operability, such as the covered depth range, dioptric midpoint of two adjacent focal planes to the eye, dioptric spacing between two adjacent focal planes, depth-weighted fusing functions, as well as the spatial frequency of a displayed target.
• TABLE 3
    Type of Factors    Factors                     Notation                           Typical range
    Ocular             Pupil diameter              D                                  2 mm~8 mm
                       Stiles-Crawford effect      β                                  −0.116 mm−2
                       Reference wavelength        λ                                  F (486.1 nm), d (587.6 nm), C (656.3 nm)
                       Accommodation               z                                  zi+1 < z < zi
    Display            Focal range                 z1~zn                              3D
                       Medial focus                zi,i+1 = (zi + zi+1)/2             0D~3D
                       Dioptric spacing            Δz = zi − zi+1                     0D~1D
                       Depth filter                wi, wi+1                           0 ≤ wi, wi+1 ≤ 1
                       Target spatial frequency    v                                  1 cpd~30 cpd
• Instead of using observer- and display-specific measurements to evaluate the PSF and MTF of DFD displays, we adopted a schematic Arizona eye model to simulate and analyze the retinal image quality from simulated targets to derive generalizable results. In the fields of optical design and ophthalmology, various schematic eye models have been widely used to predict the performance of an optical system involved with human subjects. In this study, the Arizona eye model was set up in CODE V. The Arizona eye model is designed to match clinical levels of aberration, both on- and off-axis, and can accommodate to different distances. The accommodative distance z, as shown in FIG. 16, determines the lens shape, conic constant, and refractive index of the surfaces in the schematic eye. The distances of the front and back focal planes, z1 and z2, respectively, and their spacing Δz are varied to simulate different display configurations.
• Ocular characteristics of the HVS, such as depth of field, pupil size, diffraction, Stiles-Crawford effect, monochromatic and chromatic aberrations, and accommodation, play important roles in the perceived image quality of a DFD display. Although there have been investigations of image-quality dependence upon pupil size, high-order aberration, and accommodation, the treatment of the aforementioned factors lacks generality for average subjects and for a full-color DFD display with different display configurations. For instance, only monochromatic aberrations specific to one user's eye were considered and a linear depth-weighted fusing function was assumed.
• To simulate the PSF/MTF of the retinal images accurately in a DFD display, we first examined the dependence of the polychromatic MTF of a fused pixel upon eye-pupil diameter while fixing other ocular and display factors. Particularly, we examined the MTFs under the condition that the luminance of a rendered pixel is equally distributed between the front and back focal planes separated by 0.5D, and the eye is accommodated at the midpoint between the two focal planes. The midpoint is generally expected to have the worst retinal image quality for a fused pixel. Assuming the same pupil size, we further compared the MTFs of the fused pixel against that of a real pixel that is physically placed at the dioptric midpoint between the two focal planes. For pupil diameters no larger than 4 mm, we found the MTF differences of the fused pixel from a real pixel at the same distance are acceptable for spatial frequencies below 20 cpd, while a considerable degradation is observed for larger pupils. Therefore, we set the pupil diameter of the eye model to be 4 mm, which in fact corresponded well to the pupil size when viewing conventional HMD-like displays. Second, to account for the directional sensitivity of photoreceptors on the human retina, which is commonly referred to as the Stiles-Crawford effect (SCE), a Gaussian apodization filter was applied to the entrance pupil with an amplitude transmittance coefficient of β=−0.116 mm−2. Consequently, the SCE may induce a slightly contracted effective pupil, and thus reduce spherical aberration and improve the MTF.
• Furthermore, the image source in the model was set up with polychromatic wavelengths, including the F, d, and C components listed in Table 3, to simulate a full-color DFD display. To compensate for the longitudinal chromatic aberration (LCA) that commonly exists in human eyes, we inserted a zero-optical-power achromat at 15 mm from the cornea vertex with an LCA opposite to that of the Arizona eye model. In a practical DFD display, instead of inserting an achromat directly in front of the eye, the display optics may be optimized to have an equivalent chromatic aberration to compensate for the LCA of the visual system. Finally, the effect of diffraction was accounted for in the modeling software (CODE V) while simulating PSFs. The effect of accommodation is discussed below with depth filters.
• Based on the model setup described above, for a given eye accommodation status and display settings, PSF1(z, z1) and PSF2(z, z2) for an on-axis point source are simulated separately in CODE V. Using the relationship in Eq. (10), a series of PSF12(z) are computed by varying w1 from 1 to 0, which corresponds to varying the rendered depth z from z1 to z2. The corresponding MTF12(z) of the DFD display is derived by taking the FT of PSF12.
  • To evaluate the retinal image quality of a depth-fused pixel against a physical pixel placed at the same distance, we further simulated the PSF of a real point source placed at distance z, PSFideal(z), and computed the corresponding MTFideal(z). The degradation of MTF12(z) from MTFideal(z) was expected to vary with the dioptric spacing of the two adjacent focal planes, rendered depth z, as well as eye-specific parameters. Through comprehensive analysis of the retinal image quality of the DFD display, threshold values were established to ensure the degradation from a real display condition was minimally perceptible to average subjects. Optimal depth-weighted fusing functions were then obtained.
  • As mentioned earlier, a fused pixel that is rendered to be at the dioptric midpoint of two adjacent focal planes was expected to have the worst retinal image quality compared with other points between the focal planes. Therefore, in the following analysis, we used the retinal image quality of a fused pixel rendered at the midpoint of two adjacent focal planes as a criterion for determining appropriate settings for display designs.
• In this study to determine optimal dioptric spacing, the overall focal range of a DFD display covers the depth varying from 3D (z1) to 0D (zn). Within this range, we further assumed a constant dioptric spacing between two adjacent focal planes (e.g., zi and zi+1) independent of the dioptric midpoint of the focal-plane pair relative to the eye, denoted as zi,i+1=(zi+zi+1)/2 in Table 3. Using the simulation method described above, we validated this assumption by examining the dependence of the MTF of a fused pixel at the midpoint of two focal planes upon the dioptric distance of the midpoint to the eye while fixing other ocular and display factors (i.e., w1=w2=0.5, Δz=0.5D, z=zi,i+1). As expected, the MTF of a fused pixel at the midpoint varies as the midpoint gets closer to the eye, because ocular aberrations are highly correlated with accommodation. However, the average variation is less than 15% for spatial frequencies below 20 cpd within the 0D~3D range.
  • Under these assumptions, the effect of dioptric spacing on DFD displays can be evaluated by setting the midpoint of a pair of adjacent focal planes at an arbitrary position within the depth range without loss of generality. We thus chose 1D as the midpoint of a focal-plane pair and varied their dioptric spacing Δz from 0.2D to 1D at an interval of 0.2D. For each dioptric spacing condition, the MTF of a fused pixel at the dioptric midpoint (i.e., MTF12(z=zi,i+1)) of the two focal planes was calculated with the assumption that the luminance level was evenly divided between front and back focal planes. FIG. 17( a) is a plot of the results corresponding to different dioptric spacings. For comparison, on the same figure are also plotted MTFideal, which corresponds to the MTF of a real pixel placed at the midpoint, and the MTF+0.3D and MTF−0.3D, which correspond to the MTF of the eye model with +0.3D and −0.3D defocus from the midpoint focus, respectively. The ±0.3D defocus was chosen to match the commonly accepted DOF of the human eye. As expected, MTF12 consistently degraded with the increase of the spacing of the focal planes. However, when Δz was no larger than 0.6D, MTF12 fell within the region enclosed by MTFideal (green dashed line) and the ±0.3D defocused MTFs (the overlapped blue and red dashed lines). The results indicated that the DOF of the human eye under photopic viewing conditions can be selected as the threshold value of the dioptric spacing in a display operating in the multi-focal-plane mode, which ensures the degradation of the retinal image quality of a DFD display from an ideal display condition is minimally perceptible to average subjects. If better retinal image quality is required for certain applications, a smaller Δz may be used but at the expense of adding more focal planes. For instance, if Δz=0.6D is selected, six focal planes would be sufficient to cover the depth range from 3.0D to 0D, while nine focal planes would be necessary to cover the same range if Δz=0.4D were selected.
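• The focal-plane counts quoted above follow from simple arithmetic on the covered depth range and the chosen spacing, as in the short sketch below.

```python
import math

# Number of focal planes needed to cover a dioptric depth range at (at most) a given
# spacing: 3.0D at 0.6D spacing -> 6 planes; 3.0D at 0.4D spacing -> 9 planes.

def num_focal_planes(depth_range_d: float, spacing_d: float) -> int:
    return math.ceil(depth_range_d / spacing_d) + 1

if __name__ == "__main__":
    for dz in (0.6, 0.4):
        print(f"spacing {dz} D -> {num_focal_planes(3.0, dz)} focal planes")
```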
• By setting a dioptric spacing of Δz=0.6D and a dioptric midpoint of z12=1D from the eye, we further examined the MTF of a fused pixel while incrementally varying the eye accommodation distance from the front focal plane (z1=1.3D) to the back focal plane (z2=0.7D) at an increment of 0.1D, as shown in FIG. 17( b). As expected, an accommodation distance at the dioptric midpoint (z=z12=1D) would maximize the MTF of the fused pixel, while shifting the accommodation distance toward either the front or back focal plane will always decrease the MTF. For instance, the MTF value for a target spatial frequency of 10 cpd is reduced from 0.6 when z=1D to nearly 0 when z=1.3D or z=0.7D. Past studies of the effects of stimulus contrast and contrast gradient on eye accommodation in viewing real-world scenes have suggested that the accommodative response attempts to maximize the contrast of the foveal retinal image, and the contrast gradient helps stabilize the accommodation fluctuation of the eye on the target of interest. Therefore, pseudo-correct focus cues can be generated at the dioptric midpoint by applying an appropriate depth-fusing filter even without a real focal plane.
• To further demonstrate the pseudo-correct focus cues created using a DFD display, we configured a dual-focal-plane display similarly to that used in the previous paragraph (i.e., z12=1D, and Δz=0.6D). We simulated multiple retinal images of a Snellen E target by convolving the target with the PSF12(z) defined in Eq. (10), while the luminance of the target was evenly divided between the two focal planes (i.e., w1=w2=0.5). Thus, the fused target was expected to appear at the dioptric midpoint of the two focal planes. In FIG. 18, the left-to-right columns correspond to eye accommodation distances of z=1.3, 1, and 0.7D, respectively, while the top-to-bottom rows correspond to target spatial frequencies of v=2, 5, 10, and 30 cpd, respectively. As predicted by the results in FIG. 17( b), the retinal image contrast was higher when the eye was focused at z=1D rather than at either z=z1=1.3D or z=z2=0.7D. Meanwhile, at the same accommodation distance, the retinal-image contrast clearly depended on the spatial frequency of the target, where the targets with lower spatial frequencies (e.g., 2, 5, and 10 cpd) had better image contrast than the higher frequencies (e.g., v=30 cpd).
  • To derive the dependence of the rendered accommodation cue on the depth-weighted fusing function as described in Eq. (8), we extended the MTF simulation shown in FIG. 17( b) by incrementally varying w1 from 1 to 0 at an increment of 0.01 while having w2=1−w1. For each w1 increment, we simulated the MTF12 of a fused pixel while incrementally varying the eye accommodation distance from the front focal plane (z1=1.3D) to the back focal plane (z2=0.7D) at an increment of 0.02D. We selected the accommodation distance that maximizes the MTF12 to be the rendered accommodation cue corresponding to the given depth-weighted fusing factor (w1) of the front focal plane. The accumulated results yielded the optimal depth-weighted luminance (L1 and L2) of the front and back focal planes to the luminance of the fused target (L) as a function of the accommodation distance (z) for a focal-plane pair.
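• The search just described can be summarized in a short sketch: for each weight w1, sweep the accommodation distance between the two focal planes and keep the distance that maximizes MTF12 at the chosen spatial frequency. Here mtf12_at( ) is a hypothetical stand-in for the Eq. (10)/eye-model simulation, and the toy model in the usage example exists only to make the sketch runnable.

```python
import numpy as np

# Sketch of the rendered-accommodation-cue search: for each front-plane weight w1
# (1 -> 0 in steps of 0.01), find the accommodation distance z in [z_back, z_front]
# (steps of 0.02D) that maximizes MTF12 at the chosen spatial frequency.

def rendered_cue(mtf12_at, z_front=1.3, z_back=0.7, freq_cpd=10.0,
                 w_step=0.01, z_step=0.02):
    """Return a list of (w1, z*) pairs, where z* maximizes MTF12 for that w1."""
    z_grid = np.arange(z_back, z_front + z_step / 2, z_step)
    results = []
    for w1 in np.arange(1.0, -w_step / 2, -w_step):
        mtfs = [mtf12_at(w1, z, freq_cpd) for z in z_grid]
        results.append((round(float(w1), 2), float(z_grid[int(np.argmax(mtfs))])))
    return results

if __name__ == "__main__":
    # Toy stand-in: MTF peaks at the luminance-weighted dioptric position of the planes.
    toy = lambda w1, z, f: -abs(z - (w1 * 1.3 + (1.0 - w1) * 0.7))
    print(rendered_cue(toy)[:3])
```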
• This evaluation can be extended to more than two focal planes covering a much larger depth range. As an example, we chose a 6-focal-plane DFD display covering a depth range from 3D to 0D. By assuming a 0.6D dioptric spacing, six focal planes were placed at 3D (z1), 2.4D (z2), 1.8D (z3), 1.2D (z4), 0.6D (z5), and 0D (z6), respectively. In this display configuration, we repeated the above-described simulations independently for each adjacent pair of focal planes. The black solid curves in FIG. 19 are plots of the luminance ratio gi (i=1, 2, 3, 4, 5) of the front focal plane in each focal-plane pair (i, i+1) as a function of the rendered accommodation cue z. Also plotted in the same figure is a typical box filter (blue dashed curves), which corresponds to multi-focal-plane displays in which depth-weighted fusing is not applied, and a linear depth-weighted filter (green dashed curves). The fusing functions based on the maximal MTF12 values had some non-linearity. As mentioned above, since the retinal image quality is affected by defocus, the non-linearity could be due to the non-linear degradation of the retinal image quality with defocus.
  • Based on the simulated results shown in FIG. 19, a periodical function gi(z) can be used to describe the dependence of the luminance ratio of the front focal plane in a given pair of focal planes upon the scene depth:
• g_i(z) = L_i(z)/L = 1 - \frac{1}{1 + \exp\!\left( \frac{z - z'_{i,i+1}}{\Delta z'} \right)}, \qquad z_i \ge z \ge z_{i+1} \quad (1 \le i < 6) \qquad (11)
• where z′i,i+1 represents the pseudo-correct accommodation cue rendered by a luminance ratio of gi(z=z′i,i+1)=0.5, and Δz′ characterizes the nonlinearity of gi(z). Ideally, z′i,i+1 is equal to the dioptric midpoint zi,i+1. Table 4 lists detailed parameters of gi(z) for the six-focal-plane DFD display. As the distance of the focal planes from the eye increased from 2.7D to 0.3D, the difference between zi,i+1 and z′i,i+1 increased from −0.013D to +0.024D. The slight mismatch between z′i,i+1 and zi,i+1 may be attributed to the dependence of spherical aberration on eye-accommodation distances. The nonlinear fittings of the luminance-ratio functions were plotted as red dashed curves in FIG. 19, with a correlation coefficient of 0.985 to the simulated black curves. The depth-weighted fusing function wi, as defined in Eq. (9), for each focal plane of an N-focal-plane DFD display was then obtained.
• TABLE 4
    Parameters of Eq. (11) for a 6-focal-plane DFD display.
                              i
                              1         2         3         4         5
    zi,i+1 (diopters)         2.7       2.1       1.5       0.9       0.3
    z′i,i+1 (diopters)        2.7134    2.1082    1.5034    0.8959    0.2758
    Δz′ (diopters)            0.0347    0.0318    0.0366    0.0408    0.0534
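• A direct implementation of Eq. (11) with the fitted parameters of Table 4 is sketched below; the constants are taken from the table, and the index i = 1..5 selects the focal-plane pair.

```python
import math

# Sketch of the nonlinear luminance-ratio function g_i(z) of Eq. (11) for the
# six-focal-plane configuration, using z'_{i,i+1} and delta-z' from Table 4 (diopters).

Z_PRIME = [2.7134, 2.1082, 1.5034, 0.8959, 0.2758]   # z'_{i,i+1}
DZ_PRIME = [0.0347, 0.0318, 0.0366, 0.0408, 0.0534]  # delta-z'

def g(i: int, z: float) -> float:
    """Luminance ratio L_i/L of the front plane of pair i at rendered depth z (diopters)."""
    zp, dzp = Z_PRIME[i - 1], DZ_PRIME[i - 1]
    return 1.0 - 1.0 / (1.0 + math.exp((z - zp) / dzp))

if __name__ == "__main__":
    print(round(g(1, 2.7134), 3))   # at z = z'_{1,2} the luminance splits evenly -> 0.5
    print(round(g(1, 3.0), 3))      # at the front plane the front image carries the luminance -> ~1.0
```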
  • FIGS. 20( a)-20(d) show the simulated retinal images of a 3-D scene through a 6-focal plane DFD display with depth-weighted nonlinear fusing functions given in Eq. (11), as well as with the box and linear filters shown in FIG. 19. The six focal planes were placed at 3, 2.4, 1.8, 1.2, 0.6, and 0D, respectively, and the accommodation of the observer's eye was assumed at 0.5D. The 3-D scene consisted of a planar object extending from 3D to 0.5D at a slanted angle relative to the z-axis (depth-axis) and a green grid as ground plane spanning the same depth range. The planar object was textured with a sinusoidal grating subtending a spatial frequency of 1.5˜9 cpd from its left (front) to right (back) ends. The entire scene subtended a FOV of 14.2×10.7 degrees. The simulation of the DFD images required five steps. We first rendered a regular 2-D perspective image of a 3-D scene using computer-graphics-rendering techniques. A 2-D depth map (FIG. 20( a)) in the same size as that of the 2-D perspective image is then generated by retrieving the depth (z) of each rendered pixel from the z-buffer in OpenGL shaders. Next, a set of six depth-weighted maps was generated, one for each of the focal planes, by applying the depth-weighted filtering functions in Eq. (11) to the 2-D depth map. In the fourth step, we rendered six focal-plane images by individually applying each of the depth-weighted maps to the 2-D perspective image rendered in the first step through an alpha-blending technique. Finally, the six focal-plane images were convolved with the corresponding PSFs of the eye determined by the specific accommodation distance (z=0.5D) and the focal-plane distances. The resulting retinal images were then obtained by summing up the convolved images. FIGS. 20( b), 20(c), and 20(d) show the simulated retinal images of the DFD display by employing a box, linear, and non-linear depth-weighted fusing function, respectively. As expected, the 3-D scene rendered by the box filter (FIG. 20( b)) indicated a strong depth-discontinuity effect around the midpoint of two adjacent focal planes, while those rendered by linear and non-linear filters showed smoothly rendered depths. Whereas the non-linear filters were expected to yield higher image contrast in general than the linear filters, the contrast differences were barely visible by only comparing FIGS. 20( c) and 20(d), partially due to the low spatial frequency of the grating target.
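• The five rendering steps can be summarized as a data-flow sketch. The perspective image, depth map, per-plane weighting function, and eye PSFs are assumed to be supplied (the argument names below are hypothetical); only the order of operations described above is shown.

```python
import numpy as np
from scipy.signal import fftconvolve

# High-level sketch of the retinal-image simulation: weight the 2-D perspective render
# by each focal plane's depth-weighted map, blur each focal-plane image by that plane's
# eye PSF, and sum the results.

def simulate_retinal_image(rgb, depth_map, plane_depths, weight_fn, psfs):
    """rgb: HxWx3 perspective render; depth_map: HxW rendered depths (diopters);
    weight_fn(i, depth_map) -> HxW weight map for plane i; psfs[i]: eye PSF for plane i."""
    retinal = np.zeros(rgb.shape, dtype=float)
    for i, _depth in enumerate(plane_depths):
        w_i = weight_fn(i, depth_map)[..., None]       # depth-weighted map (step 3)
        plane_img = rgb * w_i                           # focal-plane image (step 4)
        for c in range(rgb.shape[-1]):                  # blur by the eye PSF and accumulate (step 5)
            retinal[..., c] += fftconvolve(plane_img[..., c], psfs[i], mode="same")
    return retinal
```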
• To quantitatively evaluate the retinal-image quality differences between the linear and nonlinear fusing functions, we further evaluated the MTFs of the retinal images simulated with the method described above. A display operating in the dual-focal-plane mode, with z1=1.8D and z2=1.2D, was assumed in the simulation without loss of generality. The eye-accommodation distance z was varied from z1 to z2 at an interval of 0.1D. For each eye-accommodation distance, FIGS. 21( a)-21(g) are plots of the respective MTFs of the retinal images simulated with the linear (green circle) and nonlinear (red square) depth-weighted fusing functions. As shown in FIGS. 21( a), 21(d), and 21(g), when the accommodation distance was at z1, z2, or z12, the MTFs obtained using the linear depth filter were nearly identical to those obtained using the non-linear filter. Meanwhile, at all other accommodation distances, the MTFs obtained using the nonlinear filter were consistently better than those using the linear filter, as indicated by FIGS. 21( b), 21(c), 21(e), and 21(f). Whereas conventional thinking would have included the assumption that the worst image quality occurs at the dioptric midpoint, our quantitative analysis showed that this assumption does not hold for a linear depth filter, while it does appear to hold for the nonlinear filter. For instance, the green-colored MTF in FIG. 21( b) (at z=1.7D) is even worse than that in FIG. 21( d) (at z=z12=1.5D).
  • In summary, the non-linear depth-weighted fusing functions shown in FIG. 19 can produce better retinal image quality compared to a linear filter. Consequently, a display incorporating these functions may better approximate the real 3-D viewing condition and further improve the accuracy of depth perception.
  • In this embodiment we presented an exemplary systematic method to address two issues in configuring a display for operation in the multi-focal-plane mode: (1) the appropriate dioptric spacing between adjacent focal planes; and (2) the depth-weighted fusing function to render a continuous 3-D volume. By taking account of both ocular and display factors, we determined the optimal spacing between two adjacent focal planes to be ˜0.6D to ensure the MTF of a fused pixel at the dioptric midpoint is comparable to the DOF effect of the HVS on the MTF of a real pixel at the same distance under photopic viewing conditions. We further characterized the optimal form of a set of depth-weighted fusing functions as a function of rendered accommodation cues. Based on simulation results, the non-linear form of depth filters appears to be better than a box filter in terms of improved depth continuity, and better than a linear filter in terms of retinal image contrast modulation. Although our evaluation did not take into account certain other ocular factors such as scattering on the retina and psychophysical factors such as the neuron response, it provides a systematic framework that can objectively predict the optical quality and guide efforts to configure DFD displays for operation in the multi-focal-plane mode.
  • Subjective Evaluations
  • To better understand how depth perception is affected by the displays disclosed herein, and how the human visual system responds to the addressable focal planes in the display, we performed two user studies. One was a depth-judgment experiment, in which we explored the perceived depth of the displayed virtual object with respect to the variable accommodation cues rendered by the display. The other was an accommodative response measurement, in which we quantitatively measured the accommodative response of a user to a virtual object being presented at different depths. Both experiments were carried out using a display operating in the variable-single-focal-plane mode, configured as a monocular bench prototype.
  • The major purpose of the depth-judgment experiment was to determine the relationship of the perceived depths of virtual objects versus the accommodation cues rendered by the active optical element. A depth-judgment task was devised to evaluate depth perceptions in the display in two viewing conditions. In Case A, a subject was asked to estimate subjectively the depth of a virtual stimulus without seeing any real target references. In Case B, a subject was asked to position a real reference target at the same perceived depth as the displayed virtual object.
• FIG. 22 illustrates the schematic setup of the experiment. The total FOV of the display is divided into left and right halves, each subtending about an 8-degree FOV horizontally. The left region was either blocked by a black card (Case A) or displayed a real target (Case B), while the right region displayed a virtual object as a visual stimulus. To minimize the influence of perspective depth cues on the depth judgment, a resolution target similar to the Siemens star in the ISO 15775 chart was employed for both the real and virtual targets, shown as the left and right insets of FIG. 22. An aperture was placed in front of the beam-splitter, limiting the overall horizontal visual field to about 16 degrees to the subject's eye. Therefore, if the real target was sufficiently large that the subject could not see the edge of the real target through the aperture, the subtended angle of each white/black sector remained constant and the real target appeared unchanged to the viewer, in spite of the varying distance of the target along the visual axis. On the other hand, since the liquid lens is the limiting stop of the optics, the chief rays of the virtual display did not change as the lens changed its optical power. Throughout the depth-judgment task, the display optics, together with the subject, were enclosed in a black box. The subject positioned his or her head on a chin rest and only viewed the targets with one eye (dominant eye with normal or corrected vision) through the limiting aperture. Therefore, perspective depth cues were minimized for both the real and the virtual targets as they moved along the visual axis. The white arms in the real and virtual targets together divided the 2π angular space into 16 evenly spaced triangular sectors. Consequently, from the center of the visual field to the edge, the spatial frequency in the azimuthal direction dropped from infinity to about 1 cycle/degree. Gazing around the center of the visual field was expected to give the most accurate judgment on perceived depths.
  • On an optical bench, the real target was mounted on a rail to allow movement along the visual axis of the display. To avoid the accommodative dependence on the luminance, multiple light sources were employed to create a uniform illumination on the real target throughout the viewing space. The rail was about 1.5 meters long, but due to the mechanical mounts, the real target could be as close as about 15 cm to the viewer's eye, specifying the measurement range of perceived depths from 0.66 diopters to about 7 diopters. The accommodation distance of the virtual target was controlled by applying five different voltages to the liquid lens, 49, 46.8, 44.5, 42.3, and 40 Vrms, which corresponded to rendered depths at 1, 2, 3, 4 and 5 diopters, respectively.
  • Ten subjects, 8 males and 2 females, participated in the depth-judgment experiments. The average age of all subjects was 28.6. Six subjects had previous experiences with stereoscopic displays, while the other four were from unrelated fields. All subjects had either normal or corrected vision.
  • The depth-judgment task started with a 10-minute training session, followed by 25 consecutive trials. The tasks were to subjectively (Case A) and objectively (Case B) determine the depth of a virtual target displayed at one of the five depths among 1, 2, 3, 4, and 5 diopters. Each of the five depths was repeated in five trials. In each trial, the subject was first asked to close his/her eyes. The virtual stimulus was then displayed and the real target was placed randomly along the optical rail. The experimenter blocked the real target with a black board and instructed the subject to open his/her eyes. The subject was then asked to subjectively estimate the perceived depth of the virtual target and rate its depth as Far, Middle, or Near, accordingly. (Case A). The blocker of the real target was then removed. Following the subject's instruction, the experimenter moved the real target along the optical rail in directions in which the real target appeared to approach the depth of the virtual target. The subject made a fine depth judgment by repeatedly moving the real target backward and forward from the initial judged position until he/she determined that the virtual and real targets appeared to collocate at the same depth. The position of the real target was then recorded as the objective measurement of the perceived depth of the virtual display in Case B. Considering that all the depth cues except the accommodation cue were minimized in the subjective experiment (Case A), we expected that the depth-estimation accuracy would be low. Therefore, the subjective depth estimations for stimuli at 2 and 4 diopters were disregarded to avoid low-confidence, random guessing. Only virtual targets at 1, 3, and 5 diopters were considered as valid stimuli, corresponding to Far, Middle, and Near depths, respectively.
• To counter potential learning effects, the order of the first five trials, with depths of 1D, 2D, 3D, 4D, and 5D, respectively, was counter-balanced among the ten subjects using a double Latin Square design. The remaining twenty trials for each subject were then generated in random order. An additional requirement was that two consecutive trials have different rendered depths. Overall, 10×25 trials were performed, with 150 valid data points being collected for the subjective experiment and 250 data points for the objective experiment.
  • After completing all the trials, each subject was asked to fill out a questionnaire, asking how well he/she could perceive depth without (Case A) or with (Case B) seeing the real reference target. The subject was given three choices, ranking his/her sense of depth as Strong, Medium, or Weak in both Cases A and B.
  • We firstly analyzed the data of the subjective assessments of the perceived depth in the viewing condition without the real target references (Case A). For each subject, we counted the number of correct and incorrect depth estimations among the 15 trials to compute the error rate. For example, when the virtual target was presented at 5 diopters, the correct count would increase by 1 only if the subject estimated the perceived depth as Near; otherwise (either Middle or Far) the error count would increase by 1. Similar counting methods were applied to stimuli displayed at 3 diopters and at 1 diopter. The average error rate for each subject was quantified by the overall error count divided by 15. FIG. 23 is a plot of the error rate (blue solid bars with deviations) for each of the subjects. The error rates among ten subjects varied between 0.07 and 0.33, with an average value of 0.207 and a standard deviation of 0.08. This corresponded to about one error per every five estimates, on average. The standard deviation of the error rate, however, varied significantly among the subjects, ranging from 0 (S3 and S8) to 0.23 (S2, S5, and S6). In the same figure, we also plotted the subjective ranking (red textured bars) on the sense of depth in Case A, obtained from the questionnaire responses. Interestingly, although the subjects were unaware of their performances on the depth estimation through the experiment, in the end, some of the subjects ranked the difficulty level on depth estimation in agreement with their average error rates. For instance, in FIG. 23, subjects S4, S6, and S10 corresponded to relatively higher error rates of 0.27, 0.27, and 0.27, respectively, than other subjects, and they also gave lower ranking on depth perceptions (Weak, Weak, and Weak, respectively); Subject S9 had the lowest error rate of 0.07 and his rank on the perception of depth was Strong. Subjects S1 and S5, however, had somewhat conflicting perception rankings against their error rates. The average ranking among the ten subjects for depth estimation without real references was within the Weak to Medium range, as will be shown later (FIG. 25). Overall, based on a pool of ten subjects and due to the large standard deviation of the error rates in FIG. 23, the ranking on depth perception correlated at least to some extent with the error rate of the subjective depth estimations. The mean error rate for completing fifteen trials was 0.207 among ten subjects, corresponding to about one error on depth estimation within five trials on average. This indicated that the subjects could perceive the rendered depth to some extent of accuracy under the monocular viewing condition where all the depth cues except the accommodation cues were minimized.
  • The objective measurement results of the perceived depth were then analyzed. For each subject, the perceived depth at each rendered depth (5, 4, 3, 2, and 1 diopters) was computed by averaging the measurements of the five repetitions of the virtual stimulus among the 25 trials. The results from the ten subjects were then averaged to compute the mean perceived depth. FIG. 24 is a plot of the averaged perceived depths versus the rendered accommodation cues of the display. The black diamonds indicate the mean value of the perceived depth at each of the accommodation cues. Linearly fitting the five data points yielded a linear relationship with a slope of 1.0169 and a coefficient of determination (R2) of 0.9995, shown as the blue line in FIG. 24. The results suggest that, in the presence of an appropriate real target reference, the perceived depth varied linearly with the rendered depth, creating a viewing condition similar to that of the real world. The depth perception was accurate, with an average standard deviation of about 0.1 diopters among the ten subjects. For a single subject, however, the standard deviation was somewhat larger, around 0.2 diopters, which agrees with the depth of field (DOF) of the human visual system of approximately 0.25 to 0.3 diopters. The much lower standard deviation in Case B may be explained by the presence of the real reference target, which added an extra focus cue (i.e., relative blurring) and helped the subjects judge the depth of the rendered display more finely. Compared with Case A, in which no real references were presented, the subjects appeared to perceive depth better when using the display in an augmented viewing configuration.
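The linear fit underlying FIG. 24 can be reproduced, in principle, with a few lines of numerical code. The sketch below uses placeholder perceived-depth values rather than the measured data; only the procedure (a least-squares slope and R2) mirrors the analysis described above.

```python
import numpy as np

# Rendered accommodation cues (diopters) and placeholder mean perceived
# depths; the perceived values are illustrative, not the data of FIG. 24.
rendered = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
perceived = np.array([1.02, 2.05, 3.04, 4.10, 5.08])

slope, intercept = np.polyfit(rendered, perceived, 1)   # least-squares line
fit = slope * rendered + intercept
ss_res = np.sum((perceived - fit) ** 2)
ss_tot = np.sum((perceived - perceived.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"slope = {slope:.4f}, R2 = {r_squared:.4f}")  # text reports 1.0169 and 0.9995
```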
  • Finally, we compared the subjective ranking data on depth perception in the two cases: without (Case A) and with (Case B) a real target reference. To analyze the ranking data from different users, we assigned values of 1, 2, and 3 to the rankings of Strong, Medium, and Weak, respectively, so that the average ranking and the standard deviation for each viewing condition could be computed over the ten subjects. The results are plotted in FIG. 25. As indicated by the blue solid bar, with an average ranking of 2.3 and a standard deviation of 0.67, the impression of depth fell within the Weak to Medium range in Case A. By contrast, as indicated by the textured red bar, with an average ranking of 1.3 and a standard deviation of 0.48, the impression of depth fell within the Medium to Strong range in Case B.
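The ranking analysis is simple arithmetic once Strong/Medium/Weak are coded as 1/2/3. The sketch below uses hypothetical responses chosen only so that they reproduce the group statistics quoted above (2.3 ± 0.67 and 1.3 ± 0.48); the actual questionnaire data are not reproduced here.

```python
from statistics import mean, stdev

SCORE = {"Strong": 1, "Medium": 2, "Weak": 3}

# Hypothetical questionnaire responses for ten subjects in each case.
case_a = ["Weak", "Medium", "Weak", "Medium", "Medium",
          "Weak", "Medium", "Medium", "Strong", "Weak"]
case_b = ["Strong", "Strong", "Medium", "Strong", "Strong",
          "Medium", "Strong", "Strong", "Strong", "Medium"]

for name, responses in (("Case A", case_a), ("Case B", case_b)):
    scores = [SCORE[r] for r in responses]
    print(name, round(mean(scores), 2), round(stdev(scores), 2))
```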
  • Although the depth-judgment tasks relied primarily on focus cues, the results indicated that, under the monocular viewing condition in which perspective and binocular depth cues were not presented, the perceived depth in Case A matched the rendered accommodation cue with good accuracy, and in Case B matched the rendered accommodation cues well. In contrast to usability studies of traditional stereoscopic displays, which have reported distorted and compressed perceived depths caused by rendering conflicting binocular disparity and focus cues, the user studies reported herein suggest that depth perception is improved by appropriately rendering accommodation cues in this display with addressable focal planes. The depth-judgment task described above demonstrated the potential of this optical see-through display with addressable focus cues for mixed- and augmented-reality applications, approximating the viewing conditions of the real world.
  • The major purpose of the accommodative response measurements was to quantify the accommodative response of the human visual system to the depth cues presented through the subject display. In this experiment, the accommodative responses of the eye were measured by a near-infrared (NIR) auto-refractor (RM-8000B, Topcon). The auto-refractor has a refractive-power measurement range of −20 to +20 diopters, a measurement time of about 2 seconds per reading, and an RMS measurement error of 0.33 diopters. The eye relief of the auto-refractor is about 50 mm. In the objective measurement, the auto-refractor was placed directly in front of the beam-splitter so that the exit pupil of the auto-refractor coincided with that of the display. Throughout the data-acquisition procedure, the ambient lights were turned off to prevent them from influencing the accommodative responses.
  • During the test, a subject with normal vision was asked to focus on the virtual display, which was presented at 1 diopter, 3 diopters, and 5 diopters, respectively, in a three-trial test. In each trial, after the subject set his or her focus on the virtual display, the accommodative response of the subject's eye was recorded every 2 seconds, for up to nine measurement points. The results for one subject are plotted in FIG. 26 for the three trials corresponding to the three focal distances of the virtual display. The data points are shown as three sets of blue diamonds. The red solid lines in FIG. 26 correspond to the accommodation cues rendered by the liquid lens. Although the measured accommodative response of the user fluctuated with time, the average values of the nine measurements in the three trials were 0.97 diopters, 2.95 diopters, and 5.38 diopters, with standard deviations of 0.33 diopters, 0.33 diopters, and 0.42 diopters, respectively. The averages of the accommodative responses of the user matched the accommodation cues presented by the display.
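Averaging the auto-refractor readings per trial is likewise straightforward; the sketch below uses placeholder readings, not the data of FIG. 26, to show how the per-trial means and standard deviations quoted above would be computed.

```python
from statistics import mean, stdev

# Placeholder auto-refractor readings (diopters): nine samples per trial,
# recorded every 2 seconds while the subject fixates the virtual display.
# These numbers are illustrative, not the measurements plotted in FIG. 26.
trials = {
    1.0: [0.8, 1.1, 0.9, 1.3, 0.7, 1.0, 1.2, 0.6, 1.1],
    3.0: [2.9, 3.2, 2.7, 3.1, 2.8, 3.3, 2.6, 3.0, 3.1],
    5.0: [5.1, 5.6, 5.0, 5.8, 5.2, 5.5, 4.9, 5.7, 5.3],
}

for cue, readings in trials.items():
    print(f"cue {cue} D: response {mean(readings):.2f} +/- {stdev(readings):.2f} D")
```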
  • Whereas the invention has been described in connection with various representative embodiments, it will be understood that it is not limited to those embodiments. On the contrary, it is intended to cover all alternatives, modifications, and equivalents as may be included within the spirit and scope of the invention as defined by the appended claims.

Claims (44)

1. A see-through display for placement in an optical pathway extending from an entrance pupil of a person's eye to a real-world scene beyond the eye, the display comprising:
at least one 2-D added-image source that is addressable to produce a light pattern corresponding to a virtual object and that is situated to direct the light pattern toward the person's eye to superimpose the virtual object on an image of the real-world scene as perceived by the eye via the optical pathway; and
an active-optical element situated between the eye and the added-image source at a location that is optically conjugate to the entrance pupil and at which the active-optical element forms an intermediate image of the light pattern from the added-image source, the active-optical element having variable optical power and being addressable to change its optical power to produce a corresponding change in perceived distance at which the intermediate image is formed, as an added image to the real-world scene, relative to the eye.
2. The display of claim 1, wherein the added-image source is a micro-display comprising a 2-D array of light-producing pixels.
3. The display of claim 1, wherein the active-optical element comprises a refractive active-optical element.
4. The display of claim 3, wherein the refractive active-optical element comprises a liquid lens.
5. The display of claim 4, wherein:
the active-optical element and added-image source are situated on an optical axis that intersects the optical pathway; and
the refractive active-optical element further comprises a fixed-power objective lens situated on the optical axis.
6. The display of claim 1, further comprising:
a beam-splitter situated in the optical pathway to receive light of the intermediate image from the active-optical element along an optical axis that intersects the optical pathway at the beam-splitter such that the active-optical element is on a first side of the beam-splitter; and
a mirror located on the axis on a second side of the beam-splitter to reflect light back to the beam-splitter that has passed through the beam-splitter from the active-optical element.
7. The display of claim 6, wherein the mirror is a condensing mirror.
8. The display of claim 7, wherein:
the mirror has a center of curvature and a focal plane; and
the active-optical element is situated at the center of curvature to produce a conjugate exit pupil through the beam-splitter.
9. The display of claim 7, wherein, as the active-optical element addressably changes its optical power, the intermediate image is correspondingly moved relative to the focal plane to produce a corresponding change in distance of the added image relative to the eye.
10. The display of claim 9, wherein the distance at which the added image is formed serves as an accommodation cue for the person with respect to the intermediate image.
11. The display of claim 6, wherein the beam-splitter is further situated to reflect light reflected from the mirror to the person's eye.
12. The display of claim 1, wherein the display is mountable on the person's head whenever the person is using the display.
13. The display of claim 1, wherein the display is binocular and comprises first and second optical pathways extending from respective eyes of the person to the real-world scene, first and second added-image sources associated with the respective optical pathways, and first and second active-optical elements, the first and second added-image sources and active-optical elements being situated relative to respective eyes of the person.
14. The display of claim 1, wherein the display is addressably operable in at least one of a variable-single-focal-plane mode and a multi-focal-plane mode.
15. The display of claim 14, wherein:
for operation in the variable-single-focal-plane mode the display further comprises a user interface coupled to the active-optical element; and
the active-optical element is addressable to change its power in response to feedback produced by and received from the user interface being operated by the person.
16. The display of claim 15, wherein:
the user interface is configured to receive, from the person, respective responses to accommodation and/or convergence cues perceived and interpreted by the person; and
the accommodation and/or convergence cues are provided by the display to the person interpreting a user-perceived distance of the intermediate image in the real-world view.
17. The display of claim 14, wherein:
for operation in the variable-single-focal-plane mode, the display further comprises an eye-tracker situated relative to the eye to detect and track a parameter of the eye related to accommodation and/or convergence; and
the active-optical element is addressable to change its power in response to feedback produced by and received from the eye-tracker.
18. The display of claim 17, wherein:
the display further comprises a controller connected to the eye-tracker and to the active-optical element; and
the controller receives data from the eye-tracker, interprets the data, and delivers corresponding address commands to the active-optical element to provide an accommodation and/or convergence cue regarding the virtual object as viewed by the person.
19. The display of claim 18, wherein the controller delivers address commands to the active-optical element in real-time as the person perceives the intermediate image.
20. The display of claim 14, wherein:
for operation in the multi-focal-plane mode, the display further comprises a controller connected to the active-optical element; and
the active-optical element is addressable by the controller to change its optical power in response to a respective command received by the active-optical element from the controller.
21. The display of claim 20, wherein the controller is configured to address the active-optical element in a time-multiplexed manner to cause the active-optical element to exhibit multiple respective discrete optical powers that form multiple respective discrete distances of the virtual object as perceived by the person.
22. The display of claim 21, wherein:
the controller is further connected to the added-image source to address the added-image source; and
the controller is configured to address the added-image source to cause the added-image source to produce respective light patterns at selected respective distances as perceived by the person.
23. The display of claim 22, further comprising multiple added-image sources each being connected to and addressable by the controller to produce respective light patterns and to direct the light patterns, at respective distances and at respective times to the person's eye.
24. The display of claim 23, wherein the respective light patterns produced by each added-image source are coordinated by the controller with respective focal distances exhibited by the active-optical element in response to respective addresses delivered to the active-optical element from the controller.
25. The display of claim 18, wherein:
the display is a binocular display comprising a respective eye-tracker for each eye; and
the controller is configured to interpret data and generate corresponding address commands for the active-optical element according to a variable-focus gaze-contingent algorithm comprising integrated convergence tracking of the person's eyes to provide the person with real-time focus cues regarding the virtual object.
26. The display of claim 21, wherein the controller is configured to establish the discrete powers according to a depth-fused 3-D algorithm.
27. The display of claim 14, wherein:
the display is a binocular display comprising a respective active-optical element for each eye;
for operation in the variable-single-focal-plane mode, the display further comprises a respective eye-tracker for each eye;
the display further comprises a controller connected to the active-optical elements and to the eye-trackers; and
the controller is configured to receive point-of-gaze data from the eye-trackers and to address the active-optical elements to match the person's perceived convergence distances in real-time based on a variable-focus gaze-contingent display algorithm.
28. The display of claim 1, further comprising:
a condensing mirror; and
a beam-splitter;
wherein the active-optical element is configured to form an intermediate image of the light pattern;
the mirror is configured to relay light from the intermediate image to the beam-splitter; and
the beam-splitter is configured to direct the light toward the eye.
29. The display of claim 6, wherein:
the condensing mirror has a center of curvature; and
the active-optical element is situated at the center of curvature.
30. The display of claim 14, wherein:
for operation in the variable-single-focal-plane mode, the display further comprises a feedback device; and
the active-optical element is addressable to change its power in response to feedback provided by the feedback device.
31. The display of claim 14, wherein, for operation in the multi-focal-plane mode, the display further comprises a controller connected to the active-optical element, the controller being programmed to address the active-optical element in a time-multiplexed manner to produce multiple intermediate images in the real-world view at different respective distances as perceived by the person.
32. A method for producing an image of a virtual object in a view of a real-world scene as provided to at least one eye of a person, the method comprising:
from a source other than the real-world scene, producing a light pattern corresponding to the virtual object;
directing the light pattern to an active-optical element, located optically conjugate to an entrance pupil of the eye to enable the active-optical element to form an intermediate image of the light pattern;
directing the intermediate image to the person's eye; and
as the person is viewing the real-world scene and the intermediate image, addressing the active-optical element to provide a selected optical power, from a selectable range of optical powers, to produce a corresponding perceived distance at which the intermediate image is formed relative to the person's eye in the real-world scene.
33. The method of claim 32, further comprising:
producing the light pattern from an addressable source; and
addressing the source to impart a change in the light pattern.
34. The method of claim 33, wherein the addressed change in the light pattern is coordinated with an addressed optical power provided by the active-optical element.
35. The method of claim 32, further comprising:
forming the intermediate image along a second axis intersecting the first axis; and
at an intersection of the first and second axes, combining light of the intermediate image with light from the real-world scene such that the intermediate image is perceived by the person as being in the real-world scene at a distance corresponding to the selected optical power.
36. The method of claim 32, wherein the active-optical element is addressed according to a mode selected from a variable-single-focal-plane mode and a multi-focal-plane mode.
37. The method of claim 36, further comprising, in the variable-single-focal-plane mode, addressing the active-optical element based on feedback data concerning at least one of accommodation and convergence exhibited by the person's eye.
38. The method of claim 37, wherein the feedback data is produced by the person.
39. The method of claim 37, wherein the feedback data is produced by monitoring an action of the eye as the eye views the intermediate image.
40. The method of claim 39, wherein monitoring of the eye is performed by VF-GCD tracking of the person's point of gaze, the method further comprising computing a convergence distance of the eye from tracking of the person's point of gaze, and using the computed convergence distance to address the active-optical element to provide an updated focus cue to the person's eye.
41. The method of claim 36, further comprising:
in the multi-focal-plane mode, producing the light pattern from an addressable source; and
controllably addressing the source and the active-optical element to provide intermediate images at respective distances from the person's eye, as perceived by the person.
42. The method of claim 41, wherein the source and active-optical element are addressed according to a depth-fused 3-D algorithm in a time-multiplexed manner.
43. The method of claim 41, wherein the active-optical element is addressed using a square-wave addressing command, in which discrete peaks of the square wave correspond to respective optical powers of the active-optical element.
44. The method of claim 43, wherein the square-wave addressing command includes at least one null portion.
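For readers interested in the multi-focal-plane operation recited in claims 20-21, 31, and 43-44, the following sketch illustrates one possible reading of a time-multiplexed, square-wave addressing command with a null portion. The drive levels, the 120 Hz sub-frame period, and the set_lens_power/show_frame callables are illustrative assumptions, not a specification of the controller or of the claimed display.

```python
import itertools
import time

# Two discrete optical powers (diopters) for the time-multiplexed focal
# planes, interleaved with a null portion in which the display is blanked.
FOCAL_POWERS = [1.0, 3.0]      # discrete peaks of the square wave (assumed)
FRAME_PERIOD_S = 1 / 120.0     # one sub-frame per step of the command (assumed)

def drive_square_wave(set_lens_power, show_frame, n_steps):
    """Cycle the active-optical element through the discrete powers, showing
    the image slice for each focal plane and blanking during null steps."""
    steps = []
    for power in FOCAL_POWERS:
        steps += [power, None]             # None marks the null portion
    for _, step in zip(range(n_steps), itertools.cycle(steps)):
        if step is None:
            show_frame(None)               # blank the added-image source
        else:
            set_lens_power(step)           # address the liquid lens
            show_frame(step)               # render the slice for this plane
        time.sleep(FRAME_PERIOD_S)

# Example run with print-based stand-ins for the lens driver and display:
drive_square_wave(lambda p: print("lens ->", p, "D"),
                  lambda f: print("frame for plane", f),
                  n_steps=8)
```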
US12/807,868 2009-09-14 2010-09-14 3-Dimensional electro-optical see-through displays Abandoned US20110075257A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US12/807,868 US20110075257A1 (en) 2009-09-14 2010-09-14 3-Dimensional electro-optical see-through displays
US14/729,195 US11079596B2 (en) 2009-09-14 2015-06-03 3-dimensional electro-optical see-through displays
US17/123,789 US11803059B2 (en) 2009-09-14 2020-12-16 3-dimensional electro-optical see-through displays

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US27657809P 2009-09-14 2009-09-14
US12/807,868 US20110075257A1 (en) 2009-09-14 2010-09-14 3-Dimensional electro-optical see-through displays

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/729,195 Continuation US11079596B2 (en) 2009-09-14 2015-06-03 3-dimensional electro-optical see-through displays

Publications (1)

Publication Number Publication Date
US20110075257A1 true US20110075257A1 (en) 2011-03-31

Family

ID=43780097

Family Applications (3)

Application Number Title Priority Date Filing Date
US12/807,868 Abandoned US20110075257A1 (en) 2009-09-14 2010-09-14 3-Dimensional electro-optical see-through displays
US14/729,195 Active 2031-08-01 US11079596B2 (en) 2009-09-14 2015-06-03 3-dimensional electro-optical see-through displays
US17/123,789 Active 2030-12-24 US11803059B2 (en) 2009-09-14 2020-12-16 3-dimensional electro-optical see-through displays

Family Applications After (2)

Application Number Title Priority Date Filing Date
US14/729,195 Active 2031-08-01 US11079596B2 (en) 2009-09-14 2015-06-03 3-dimensional electro-optical see-through displays
US17/123,789 Active 2030-12-24 US11803059B2 (en) 2009-09-14 2020-12-16 3-dimensional electro-optical see-through displays

Country Status (1)

Country Link
US (3) US20110075257A1 (en)

Cited By (207)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110085028A1 (en) * 2009-10-14 2011-04-14 Ramin Samadani Methods and systems for object segmentation in digital images
US20110169921A1 (en) * 2010-01-12 2011-07-14 Samsung Electronics Co., Ltd. Method for performing out-focus using depth information and camera using the same
US20120019557A1 (en) * 2010-07-22 2012-01-26 Sony Ericsson Mobile Communications Ab Displaying augmented reality information
CN102419631A (en) * 2010-10-15 2012-04-18 微软公司 Fusing virtual content into real content
US20120113092A1 (en) * 2010-11-08 2012-05-10 Avi Bar-Zeev Automatic variable virtual focus for augmented reality displays
US20130021658A1 (en) * 2011-07-20 2013-01-24 Google Inc. Compact See-Through Display System
WO2013028586A1 (en) * 2011-08-19 2013-02-28 Latta Stephen G Location based skins for mixed reality displays
US20130141549A1 (en) * 2010-06-29 2013-06-06 Cyclomedia Technology B.V. Method for Producing a Digital Photo Wherein at Least Some of the Pixels Comprise Position Information, and Such a Digital Photo
US20130194259A1 (en) * 2012-01-27 2013-08-01 Darren Bennett Virtual environment generating system
US20130234914A1 (en) * 2012-03-07 2013-09-12 Seiko Epson Corporation Head-mounted display device and control method for the head-mounted display device
US20130265220A1 (en) * 2012-04-09 2013-10-10 Omek Interactive, Ltd. System and method for combining three-dimensional tracking with a three-dimensional display for a user interface
WO2013170074A1 (en) * 2012-05-09 2013-11-14 Nokia Corporation Method and apparatus for providing focus correction of displayed information
WO2013170073A1 (en) * 2012-05-09 2013-11-14 Nokia Corporation Method and apparatus for determining representations of displayed information based on focus distance
US20130335404A1 (en) * 2012-06-15 2013-12-19 Jeff Westerinen Depth of field control for see-thru display
CN103472909A (en) * 2012-04-10 2013-12-25 微软公司 Realistic occlusion for a head mounted augmented reality display
US20140002491A1 (en) * 2012-06-29 2014-01-02 Mathew J. Lamb Deep augmented reality tags for head mounted displays
GB2504311A (en) * 2012-07-25 2014-01-29 Bae Systems Plc Head-up display using fluidic lens
WO2014033306A1 (en) * 2012-09-03 2014-03-06 SensoMotoric Instruments Gesellschaft für innovative Sensorik mbH Head mounted system and method to compute and render a stream of digital images using a head mounted system
US20140092006A1 (en) * 2012-09-28 2014-04-03 Joshua Boelter Device and method for modifying rendering based on viewer focus area from eye tracking
US20140098135A1 (en) * 2012-10-05 2014-04-10 Elwha Llc Correlating user reactions with augmentations displayed through augmented views
WO2014060736A1 (en) * 2012-10-15 2014-04-24 Bae Systems Plc Prismatic correcting lens
US20140145914A1 (en) * 2012-11-29 2014-05-29 Stephen Latta Head-mounted display resource management
CN103931179A (en) * 2011-09-19 2014-07-16 埃克兰斯波莱尔斯股份有限公司/波拉斯克琳斯股份有限公司 Method and display for showing a stereoscopic image
WO2014144989A1 (en) * 2013-03-15 2014-09-18 Ostendo Technologies, Inc. 3d light field displays and methods with improved viewing angle depth and resolution
US8868384B2 (en) 2012-03-15 2014-10-21 General Electric Company Methods and apparatus for monitoring operation of a system asset
JP2014219621A (en) * 2013-05-10 2014-11-20 株式会社タイトー Display device and display control program
US20140362110A1 (en) * 2013-06-08 2014-12-11 Sony Computer Entertainment Inc. Systems and methods for customizing optical representation of views provided by a head mounted display based on optical prescription of a user
US20150002374A1 (en) * 2011-12-19 2015-01-01 Dolby Laboratories Licensing Corporation Head-Mounted Display
US20150062311A1 (en) * 2012-04-29 2015-03-05 Hewlett-Packard Development Company, L.P. View weighting for multiview displays
US8988461B1 (en) 2011-01-18 2015-03-24 Disney Enterprises, Inc. 3D drawing and painting system with a 3D scalar field
US20150084986A1 (en) * 2013-09-23 2015-03-26 Kil-Whan Lee Compositor, system-on-chip having the same, and method of driving system-on-chip
US20150163480A1 (en) * 2012-05-25 2015-06-11 Hoya Corporation Simulation device
KR20150070195A (en) * 2012-10-18 2015-06-24 더 아리조나 보드 오브 리전츠 온 비핼프 오브 더 유니버시티 오브 아리조나 Stereoscopic displays with addressable focus cues
CN104808342A (en) * 2015-04-30 2015-07-29 杭州映墨科技有限公司 Optical lens structure of wearable virtual-reality headset capable of displaying three-dimensional scene
US9096920B1 (en) * 2012-03-22 2015-08-04 Google Inc. User interface method
WO2015126735A1 (en) * 2014-02-19 2015-08-27 Microsoft Technology Licensing, Llc Stereoscopic display responsive to focal-point shift
US9122053B2 (en) 2010-10-15 2015-09-01 Microsoft Technology Licensing, Llc Realistic occlusion for a head mounted augmented reality display
WO2015134740A1 (en) * 2014-03-05 2015-09-11 Arizona Board Of Regents On Behalf Of The University Of Arizona Wearable 3d augmented reality display with variable focus and/or object recognition
US9142056B1 (en) * 2011-05-18 2015-09-22 Disney Enterprises, Inc. Mixed-order compositing for images having three-dimensional painting effects
US20150279022A1 (en) * 2014-03-31 2015-10-01 Empire Technology Development Llc Visualization of Spatial and Other Relationships
US9164281B2 (en) 2013-03-15 2015-10-20 Honda Motor Co., Ltd. Volumetric heads-up display with dynamic focal plane
US20150306330A1 (en) * 2014-04-29 2015-10-29 MaskSelect, Inc. Mask Selection System
US9195053B2 (en) 2012-03-27 2015-11-24 Ostendo Technologies, Inc. Spatio-temporal directional light modulator
WO2015184412A1 (en) 2014-05-30 2015-12-03 Magic Leap, Inc. Methods and system for creating focal planes in virtual and augmented reality
US20160007015A1 (en) * 2013-12-12 2016-01-07 Boe Technology Group Co., Ltd. Open Head Mount Display Device and Display method Thereof
US20160011419A1 (en) * 2010-12-24 2016-01-14 Magic Leap, Inc. Methods and systems for displaying stereoscopy with a freeform optical system with addressable focus for virtual and augmented reality
US9239453B2 (en) 2009-04-20 2016-01-19 Beijing Institute Of Technology Optical see-through free-form head-mounted display
US20160019868A1 (en) * 2014-07-18 2016-01-21 Samsung Electronics Co., Ltd. Method for focus control and electronic device thereof
US9244277B2 (en) 2010-04-30 2016-01-26 The Arizona Board Of Regents On Behalf Of The University Of Arizona Wide angle and high resolution tiled head-mounted display device
US9251715B2 (en) 2013-03-15 2016-02-02 Honda Motor Co., Ltd. Driver training system using heads-up display augmented reality graphics elements
US9255813B2 (en) 2011-10-14 2016-02-09 Microsoft Technology Licensing, Llc User controlled real object disappearance in a mixed reality display
US20160042554A1 (en) * 2014-08-05 2016-02-11 Samsung Electronics Co., Ltd. Method and apparatus for generating real three-dimensional (3d) image
US9265458B2 (en) 2012-12-04 2016-02-23 Sync-Think, Inc. Application of smooth pursuit cognitive testing paradigms to clinical drug development
US9304319B2 (en) 2010-11-18 2016-04-05 Microsoft Technology Licensing, Llc Automatic focus improvement for augmented reality displays
US9310591B2 (en) 2008-01-22 2016-04-12 The Arizona Board Of Regents On Behalf Of The University Of Arizona Head-mounted projection display using reflective microdisplays
US9323325B2 (en) 2011-08-30 2016-04-26 Microsoft Technology Licensing, Llc Enhancing an object of interest in a see-through, mixed reality display device
US9330470B2 (en) 2010-06-16 2016-05-03 Intel Corporation Method and system for modeling subjects from a depth map
US20160150214A1 (en) * 2014-11-25 2016-05-26 Harold O. Hosea Device for creating and enhancing three-dimensional image effects
US20160147078A1 (en) * 2014-11-25 2016-05-26 Ricoh Company, Ltd. Multifocal Display
US20160156896A1 (en) * 2014-12-01 2016-06-02 Samsung Electronics Co., Ltd. Apparatus for recognizing pupillary distance for 3d display
US20160161739A1 (en) * 2011-07-27 2016-06-09 Microsoft Technology Licensing, Llc Variable-Depth Stereoscopic Display
US9378644B2 (en) 2013-03-15 2016-06-28 Honda Motor Co., Ltd. System and method for warning a driver of a potential rear end collision
US9380976B2 (en) 2013-03-11 2016-07-05 Sync-Think, Inc. Optical neuroinformatics
US9393870B2 (en) 2013-03-15 2016-07-19 Honda Motor Co., Ltd. Volumetric heads-up display with dynamic focal plane
US20160212404A1 (en) * 2013-08-23 2016-07-21 The Schepens Eye Research Institute, Inc. Prevention and Treatment of Myopia
CN105929537A (en) * 2016-04-08 2016-09-07 北京骁龙科技有限公司 Head-mounted display and eyepiece system thereof
US9443354B2 (en) 2013-04-29 2016-09-13 Microsoft Technology Licensing, Llc Mixed reality interactions
CN105940337A (en) * 2014-01-29 2016-09-14 谷歌公司 Dynamic lens for head mounted display
US20160320625A1 (en) * 2016-04-21 2016-11-03 Maximilian Ralph Peter von und zu Liechtenstein Virtual Monitor Display Technique for Augmented Reality Environments
US9509975B2 (en) * 2004-10-21 2016-11-29 Try Tech Llc Methods for acquiring, storing, transmitting and displaying stereoscopic images
DE102015007245A1 (en) 2015-06-05 2016-12-08 Audi Ag Method for operating a data-goggle device and data-goggle device
CN106327584A (en) * 2016-08-24 2017-01-11 上海与德通讯技术有限公司 Image processing method used for virtual reality equipment and image processing device thereof
CN106375694A (en) * 2015-12-31 2017-02-01 北京智谷睿拓技术服务有限公司 Light field display control method and device, and light field display equipment
US20170097511A1 (en) * 2015-10-02 2017-04-06 Jing Xu 3d image system, method, and applications
US9639964B2 (en) 2013-03-15 2017-05-02 Elwha Llc Dynamically preserving scene elements in augmented reality systems
US9652897B2 (en) * 2015-06-25 2017-05-16 Microsoft Technology Licensing, Llc Color fill in an augmented reality environment
US20170148215A1 (en) * 2015-11-19 2017-05-25 Oculus Vr, Llc Eye Tracking for Mitigating Vergence and Accommodation Conflicts
US20170154464A1 (en) * 2015-11-30 2017-06-01 Microsoft Technology Licensing, Llc Multi-optical surface optical design
US9671863B2 (en) 2012-10-05 2017-06-06 Elwha Llc Correlating user reaction with at least an aspect associated with an augmentation of an augmented view
US20170195661A1 (en) * 2015-12-31 2017-07-06 Beijing Zhigu Rui Tuo Tech Co., Ltd. Light field display control method and apparatus, and light field display device
US9720232B2 (en) 2012-01-24 2017-08-01 The Arizona Board Of Regents On Behalf Of The University Of Arizona Compact eye-tracked head-mounted display
CN107077218A (en) * 2016-03-25 2017-08-18 深圳前海达闼云端智能科技有限公司 The viewing reminding method and device of a kind of three-dimensional content
US9747898B2 (en) 2013-03-15 2017-08-29 Honda Motor Co., Ltd. Interpretation of ambiguous vehicle instructions
US20170276948A1 (en) * 2016-03-25 2017-09-28 Magic Leap, Inc. Virtual and augmented reality systems and methods
US20170276956A1 (en) * 2012-08-06 2017-09-28 Sony Corporation Image display apparatus and image display method
US9785231B1 (en) * 2013-09-26 2017-10-10 Rockwell Collins, Inc. Head worn display integrity monitor system and methods
US20170293146A1 (en) * 2016-04-07 2017-10-12 Oculus Vr, Llc Accommodation based optical correction
US20170330376A1 (en) * 2016-05-10 2017-11-16 Disney Enterprises, Inc. Occluded virtual image display
ITUA20163946A1 (en) * 2016-05-30 2017-11-30 Univ Pisa Wearable viewer for augmented reality
US9857591B2 (en) 2014-05-30 2018-01-02 Magic Leap, Inc. Methods and system for creating focal planes in virtual and augmented reality
US9865043B2 (en) 2008-03-26 2018-01-09 Ricoh Company, Ltd. Adaptive image acquisition and display using multi-focal display
US9866826B2 (en) 2014-11-25 2018-01-09 Ricoh Company, Ltd. Content-adaptive multi-focal display
US20180045965A1 (en) * 2013-11-27 2018-02-15 Magic Leap, Inc. Virtual and augmented reality systems and methods
WO2018039270A1 (en) * 2016-08-22 2018-03-01 Magic Leap, Inc. Virtual, augmented, and mixed reality systems and methods
US9910498B2 (en) 2011-06-23 2018-03-06 Intel Corporation System and method for close-range movement tracking
CN107783291A (en) * 2016-08-30 2018-03-09 北京亮亮视野科技有限公司 True three- dimensional panoramic show wear-type visual device
US9918066B2 (en) 2014-12-23 2018-03-13 Elbit Systems Ltd. Methods and systems for producing a magnified 3D image
US9915826B2 (en) 2013-11-27 2018-03-13 Magic Leap, Inc. Virtual and augmented reality systems and methods having improved diffractive grating structures
CN107810634A (en) * 2015-06-12 2018-03-16 微软技术许可有限责任公司 Display for three-dimensional augmented reality
EP3298952A1 (en) * 2016-09-22 2018-03-28 Essilor International Optometry device
US9934575B2 (en) * 2009-11-27 2018-04-03 Sony Corporation Image processing apparatus, method and computer program to adjust 3D information based on human visual characteristics
WO2018100237A1 (en) * 2016-12-01 2018-06-07 Varjo Technologies Oy Display apparatus and method of displaying using image renderers and optical combiners
US9996984B2 (en) 2016-07-05 2018-06-12 Disney Enterprises, Inc. Focus control for virtual objects in augmented reality (AR) and virtual reality (VR) displays
US10001648B2 (en) 2016-04-14 2018-06-19 Disney Enterprises, Inc. Occlusion-capable augmented reality display using cloaking optics
US20180199028A1 (en) * 2017-01-10 2018-07-12 Intel Corporation Head-mounted display device
US10025486B2 (en) 2013-03-15 2018-07-17 Elwha Llc Cross-reality select, drag, and drop for augmented reality systems
WO2017192887A3 (en) * 2016-05-04 2018-07-26 The Regents Of The University Of California Pseudo light-field display apparatus
US10088673B2 (en) 2016-03-15 2018-10-02 Deepsee Inc. 3D display apparatus, method, and applications
US10109075B2 (en) 2013-03-15 2018-10-23 Elwha Llc Temporal element restoration in augmented reality systems
US10120480B1 (en) 2011-08-05 2018-11-06 P4tents1, LLC Application-specific pressure-sensitive touch screen system, method, and computer program product
US10127727B1 (en) * 2017-01-10 2018-11-13 Meta Company Systems and methods to provide an interactive environment over an expanded field-of-view
CN108886612A (en) * 2016-02-11 2018-11-23 奇跃公司 Reduce the more depth plane display systems switched between depth plane
US10162412B2 (en) * 2015-03-27 2018-12-25 Seiko Epson Corporation Display, control method of display, and program
US10176961B2 (en) 2015-02-09 2019-01-08 The Arizona Board Of Regents On Behalf Of The University Of Arizona Small portable night vision system
US10198978B2 (en) * 2015-12-15 2019-02-05 Facebook Technologies, Llc Viewing optics test subsystem for head mounted displays
US10215583B2 (en) 2013-03-15 2019-02-26 Honda Motor Co., Ltd. Multi-level navigation monitoring and control
US10241569B2 (en) 2015-12-08 2019-03-26 Facebook Technologies, Llc Focus adjustment method for a virtual reality headset
US10269179B2 (en) 2012-10-05 2019-04-23 Elwha Llc Displaying second augmentations that are based on registered first augmentations
US10282912B1 (en) 2017-05-26 2019-05-07 Meta View, Inc. Systems and methods to provide an interactive space over an expanded field-of-view with focal distance tuning
US10289108B2 (en) 2012-03-15 2019-05-14 General Electric Company Methods and apparatus for monitoring operation of a system asset
US10317690B2 (en) 2014-01-31 2019-06-11 Magic Leap, Inc. Multi-focal display system and method
US10339711B2 (en) 2013-03-15 2019-07-02 Honda Motor Co., Ltd. System and method for providing augmented reality based directions based on verbal and gestural cues
US10335342B2 (en) 2015-07-23 2019-07-02 New Jersey Institute Of Technology Method, system, and apparatus for treatment of binocular dysfunctions
US10353212B2 (en) * 2017-01-11 2019-07-16 Samsung Electronics Co., Ltd. See-through type display apparatus and method of operating the same
US20190227311A1 (en) * 2018-01-22 2019-07-25 Symbol Technologies, Llc Systems and methods for task-based adjustable focal distance for heads-up displays
US10368049B2 (en) * 2015-12-31 2019-07-30 Beijing Zhigu Rui Tuo Tech Co., Ltd. Light field display control method and apparatus, and light field display device
US10371938B2 (en) * 2013-01-24 2019-08-06 Yuchen Zhou Method and apparatus to achieve virtual reality with a flexible display
US10386636B2 (en) 2014-01-31 2019-08-20 Magic Leap, Inc. Multi-focal display system and method
EP3409013A4 (en) * 2016-01-29 2019-09-04 Hewlett-Packard Development Company, L.P. Viewing device adjustment based on eye accommodation in relation to a display
US10429647B2 (en) 2016-06-10 2019-10-01 Facebook Technologies, Llc Focus adjusting virtual reality headset
US10440354B2 (en) * 2015-12-31 2019-10-08 Beijing Zhigu Rui Tuo Tech Co., Ltd. Light field display control method and apparatus, and light field display device
CN110325895A (en) * 2017-02-21 2019-10-11 脸谱科技有限责任公司 It focuses and adjusts more plane head-mounted displays
US10444501B2 (en) * 2017-06-20 2019-10-15 Panasonic Intellectual Property Management Co., Ltd. Image display device
US10442774B1 (en) * 2012-11-06 2019-10-15 Valve Corporation Adaptive optical path with variable focal length
US10445860B2 (en) 2015-12-08 2019-10-15 Facebook Technologies, Llc Autofocus virtual reality headset
US10459230B2 (en) 2016-02-02 2019-10-29 Disney Enterprises, Inc. Compact augmented reality / virtual reality display
US10466486B2 (en) 2015-01-26 2019-11-05 Magic Leap, Inc. Virtual and augmented reality systems and methods having improved diffractive grating structures
US10488921B1 (en) * 2017-09-08 2019-11-26 Facebook Technologies, Llc Pellicle beamsplitter for eye tracking
US10497175B2 (en) 2011-12-06 2019-12-03 Microsoft Technology Licensing, Llc Augmented reality virtual monitor
US20190369719A1 (en) * 2018-05-31 2019-12-05 Tobii Ab Robust convergence signal
US10514546B2 (en) * 2017-03-27 2019-12-24 Avegant Corp. Steerable high-resolution display
US10521013B2 (en) 2018-03-01 2019-12-31 Samsung Electronics Co., Ltd. High-speed staggered binocular eye tracking systems
WO2020009922A1 (en) * 2018-07-06 2020-01-09 Pcms Holdings, Inc. Method and system for forming extended focal planes for large viewpoint changes
US20200033613A1 (en) * 2018-07-26 2020-01-30 Varjo Technologies Oy Display apparatus and method of displaying using curved optical combiner
US10585284B1 (en) 2017-11-17 2020-03-10 Meta View, Inc. Systems and methods to provide an interactive environment over a wide field of view
EP3497508A4 (en) * 2016-08-12 2020-04-22 Avegant Corp. A near-eye display system including a modulation stack
US10713846B2 (en) 2012-10-05 2020-07-14 Elwha Llc Systems and methods for sharing augmentation data
US10728534B2 (en) * 2018-07-31 2020-07-28 Lightspace Technologies, SIA Volumetric display system and method of displaying three-dimensional image
EP3602583A4 (en) * 2017-03-22 2020-07-29 Magic Leap, Inc. Dynamic field of view variable focus display system
US10739578B2 (en) 2016-08-12 2020-08-11 The Arizona Board Of Regents On Behalf Of The University Of Arizona High-resolution freeform eyepiece design with a large exit pupil
US10775628B2 (en) * 2015-03-16 2020-09-15 Magic Leap, Inc. Methods and systems for diagnosing and treating presbyopia
US10782526B2 (en) * 2015-12-22 2020-09-22 E-Vision Smart Optics, Inc. Dynamic focusing head mounted display
US10809546B2 (en) 2016-08-12 2020-10-20 Avegant Corp. Digital light path length modulation
US10809802B2 (en) * 2018-11-13 2020-10-20 Honda Motor Co., Ltd. Line-of-sight detection apparatus, computer readable storage medium, and line-of-sight detection method
US10838583B2 (en) 2016-05-17 2020-11-17 General Electric Company Systems and methods for prioritizing and monitoring device status in a condition monitoring software application
US10866428B2 (en) 2016-08-12 2020-12-15 Avegant Corp. Orthogonal optical path length extender
WO2021003009A1 (en) * 2019-06-30 2021-01-07 Corning Incorporated Display optical systems for stereoscopic imaging systems with reduced eye strain
US10921593B2 (en) 2017-04-06 2021-02-16 Disney Enterprises, Inc. Compact perspectively correct occlusion capable augmented reality displays
US10944904B2 (en) 2016-08-12 2021-03-09 Avegant Corp. Image capture with digital light path length modulation
WO2021051067A1 (en) * 2019-09-15 2021-03-18 Arizona Board Of Regents On Behalf Of The University Of Arizona Digital illumination assisted gaze tracking for augmented reality near to eye displays
US10962855B2 (en) 2017-02-23 2021-03-30 Magic Leap, Inc. Display system with variable power reflector
US11002971B1 (en) * 2018-08-24 2021-05-11 Apple Inc. Display device with mechanically adjustable optical combiner
US11016307B2 (en) 2016-08-12 2021-05-25 Avegant Corp. Method and apparatus for a shaped optical path length extender
US11022803B2 (en) * 2017-05-27 2021-06-01 Moon Key Lee Eye glasses-type transparent display using mirror
EP3835878A1 (en) * 2019-12-11 2021-06-16 Samsung Electronics Co., Ltd. Holographic display apparatus for providing expanded viewing window
US11042048B2 (en) 2016-08-12 2021-06-22 Avegant Corp. Digital light path length modulation systems
US11048333B2 (en) 2011-06-23 2021-06-29 Intel Corporation System and method for close-range movement tracking
US11067797B2 (en) 2016-04-07 2021-07-20 Magic Leap, Inc. Systems and methods for augmented reality
US11079596B2 (en) 2009-09-14 2021-08-03 The Arizona Board Of Regents On Behalf Of The University Of Arizona 3-dimensional electro-optical see-through displays
US11106041B2 (en) 2016-04-08 2021-08-31 Magic Leap, Inc. Augmented reality systems and methods with variable focus lens elements
US11106276B2 (en) 2016-03-11 2021-08-31 Facebook Technologies, Llc Focus adjusting headset
US11113794B2 (en) * 2018-01-23 2021-09-07 Facebook Technologies, Llc Systems and methods for generating defocus blur effects
US11126261B2 (en) 2019-01-07 2021-09-21 Avegant Corp. Display control system and rendering pipeline
US11138793B2 (en) 2014-03-14 2021-10-05 Magic Leap, Inc. Multi-depth plane display system with reduced switching between depth planes
US11153512B1 (en) * 2019-05-29 2021-10-19 Facebook Technologies, Llc Imaging and display with ellipsoidal lensing structure
US11170563B2 (en) * 2018-01-04 2021-11-09 8259402 Canada Inc. Immersive environment with digital environment to enhance depth sensation
US11169358B1 (en) * 2018-06-29 2021-11-09 Facebook Technologies, Llc Varifocal projection display
US11169383B2 (en) 2018-12-07 2021-11-09 Avegant Corp. Steerable positioning element
US11252399B2 (en) * 2015-05-28 2022-02-15 Microsoft Technology Licensing, Llc Determining inter-pupillary distance
US20220079675A1 (en) * 2018-11-16 2022-03-17 Philipp K. Lang Augmented Reality Guidance for Surgical Procedures with Adjustment of Scale, Convergence and Focal Plane or Focal Point of Virtual Data
US11320900B2 (en) 2016-03-04 2022-05-03 Magic Leap, Inc. Current drain reduction in AR/VR display systems
US11353698B1 (en) 2019-05-29 2022-06-07 Facebook Technologies, Llc Dual Purkinje imaging with ellipsoidal lensing structure
US11385710B2 (en) * 2018-04-28 2022-07-12 Boe Technology Group Co., Ltd. Geometric parameter measurement method and device thereof, augmented reality device, and storage medium
US11415935B2 (en) 2020-06-23 2022-08-16 Looking Glass Factory, Inc. System and method for holographic communication
US20220261966A1 (en) * 2021-02-16 2022-08-18 Samsung Electronics Company, Ltd. Multiple point spread function based image reconstruction for a camera behind a display
US11449004B2 (en) 2020-05-21 2022-09-20 Looking Glass Factory, Inc. System and method for holographic image display
US11474355B2 (en) * 2014-05-30 2022-10-18 Magic Leap, Inc. Methods and systems for displaying stereoscopy with a freeform optical system with addressable focus for virtual and augmented reality
US11477434B2 (en) 2018-03-23 2022-10-18 Pcms Holdings, Inc. Multifocal plane based method to produce stereoscopic viewpoints in a DIBR system (MFP-DIBR)
US11474284B2 (en) 2017-04-05 2022-10-18 Corning Incorporated Liquid lens control systems and methods
US11480784B2 (en) 2016-08-12 2022-10-25 Avegant Corp. Binocular display with digital light path length modulation
US20220343876A1 (en) * 2021-04-22 2022-10-27 GM Global Technology Operations LLC Dual image plane hud with automated illuminance setting for ar graphics displayed in far virtual image plane
US11509877B2 (en) * 2020-01-14 2022-11-22 Samsung Electronics Co., Ltd. Image display device including moveable display element and image display method
US11546575B2 (en) 2018-03-22 2023-01-03 Arizona Board Of Regents On Behalf Of The University Of Arizona Methods of rendering light field images for integral-imaging-based light field display
US20230037046A1 (en) * 2018-08-03 2023-02-02 Magic Leap, Inc. Depth plane selection for multi-depth plane display systems by user categorization
US11575865B2 (en) 2019-07-26 2023-02-07 Samsung Electronics Co., Ltd. Processing images captured by a camera behind a display
US11586049B2 (en) 2019-03-29 2023-02-21 Avegant Corp. Steerable hybrid display using a waveguide
US11624921B2 (en) 2020-01-06 2023-04-11 Avegant Corp. Head mounted system with color specific modulation
US11662585B2 (en) 2015-10-06 2023-05-30 Magic Leap, Inc. Virtual/augmented reality system having reverse angle diffraction grating
US11689709B2 (en) 2018-07-05 2023-06-27 Interdigital Vc Holdings, Inc. Method and system for near-eye focal plane overlays for 3D perception of content on 2D displays
US11722796B2 (en) 2021-02-26 2023-08-08 Samsung Electronics Co., Ltd. Self-regularizing inverse filter for image deblurring
US11822095B2 (en) * 2016-12-09 2023-11-21 Lg Innotek Co., Ltd. Camera module including liquid lens, optical device including the module, and method for driving the liquid lens
US11849102B2 (en) 2020-12-01 2023-12-19 Looking Glass Factory, Inc. System and method for processing three dimensional images
US11880043B2 (en) 2018-07-24 2024-01-23 Magic Leap, Inc. Display systems and methods for determining registration between display and eyes of user
US11880033B2 (en) 2018-01-17 2024-01-23 Magic Leap, Inc. Display systems and methods for determining registration between a display and a user's eyes
US11883104B2 (en) 2018-01-17 2024-01-30 Magic Leap, Inc. Eye center of rotation determination, depth plane selection, and render camera positioning in display systems
US11893755B2 (en) 2018-01-19 2024-02-06 Interdigital Vc Holdings, Inc. Multi-focal planes with varying positions
US11906739B2 (en) 2015-10-05 2024-02-20 Magic Leap, Inc. Microlens collimator for scanning optical fiber in virtual/augmented reality system

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016020630A2 (en) 2014-08-08 2016-02-11 Milan Momcilo Popovich Waveguide laser illuminator incorporating a despeckler
EP3245444B1 (en) 2015-01-12 2021-09-08 DigiLens Inc. Environmentally isolated waveguide display
GB2536650A (en) 2015-03-24 2016-09-28 Augmedics Ltd Method and system for combining video-based and optic-based augmented reality in a near eye display
WO2016180048A1 (en) * 2015-05-13 2016-11-17 京东方科技集团股份有限公司 Display device and drive method therefor
US10690916B2 (en) 2015-10-05 2020-06-23 Digilens Inc. Apparatus for providing waveguide displays with two-dimensional pupil expansion
CN108700743A (en) 2016-01-22 2018-10-23 康宁股份有限公司 Wide visual field individual's display
CN106371216B (en) * 2016-10-14 2019-07-09 东莞市美光达光学科技有限公司 It is a kind of for observing the secondary display virtual real optical system of large screen
EP3416381A1 (en) 2017-06-12 2018-12-19 Thomson Licensing Method and apparatus for providing information to a user observing a multi view content
EP3416371A1 (en) * 2017-06-12 2018-12-19 Thomson Licensing Method for displaying, on a 2d display device, a content derived from light field data
US10976551B2 (en) 2017-08-30 2021-04-13 Corning Incorporated Wide field personal display device
US10768431B2 (en) * 2017-12-20 2020-09-08 Aperture In Motion, LLC Light control devices and methods for regional variation of visual information and sampling
US10175490B1 (en) * 2017-12-20 2019-01-08 Aperture In Motion, LLC Light control devices and methods for regional variation of visual information and sampling
CN109633905B (en) * 2018-12-29 2020-07-24 华为技术有限公司 Multi-focal-plane display system and apparatus
IL264045B2 (en) * 2018-12-31 2023-08-01 Elbit Systems Ltd Direct view display with transparent variable optical power elements
JP2022520472A (en) 2019-02-15 2022-03-30 ディジレンズ インコーポレイテッド Methods and equipment for providing holographic waveguide displays using integrated grids
US11852813B2 (en) * 2019-04-12 2023-12-26 Nvidia Corporation Prescription augmented reality display
US20220091420A1 (en) * 2019-04-23 2022-03-24 Directional Systems Tracking Limited Augmented reality system
CN116582661A (en) 2019-05-23 2023-08-11 奇跃公司 Mixed mode three-dimensional display system and method
WO2020247930A1 (en) 2019-06-07 2020-12-10 Digilens Inc. Waveguides incorporating transmissive and reflective gratings and related methods of manufacturing
KR20220054386A (en) 2019-08-29 2022-05-02 디지렌즈 인코포레이티드. Vacuum Bragg grating and manufacturing method thereof
JP2023505235A (en) 2019-12-06 2023-02-08 マジック リープ, インコーポレイテッド Virtual, Augmented, and Mixed Reality Systems and Methods
US11382712B2 (en) 2019-12-22 2022-07-12 Augmedics Ltd. Mirroring in image guided surgery
US11917119B2 (en) 2020-01-09 2024-02-27 Jerry Nims 2D image capture system and display of 3D digital image
US11823343B1 (en) * 2020-03-26 2023-11-21 Apple Inc. Method and device for modifying content according to various simulation characteristics
KR102284743B1 (en) * 2020-05-14 2021-08-03 한국과학기술연구원 Extended dof image display apparatus and method for controlling thereof
EP3923059A1 (en) * 2020-06-12 2021-12-15 Optotune AG Display unit and method for operating a display unit
WO2021262759A1 (en) * 2020-06-22 2021-12-30 Digilens Inc. Systems and methods for real-time color correction of waveguide based displays
CN111830714A (en) * 2020-07-24 2020-10-27 闪耀现实(无锡)科技有限公司 Image display control method, image display control device and head-mounted display equipment
EP4245029A1 (en) * 2020-11-13 2023-09-20 Nims, Jerry 2d digital image capture system, frame speed, and simulating 3d digital image sequence
US11896445B2 (en) 2021-07-07 2024-02-13 Augmedics Ltd. Iliac pin and adapter
WO2023141313A2 (en) * 2022-01-21 2023-07-27 Arizona Board Of Regents On Behalf Of The University Of Arizona Wavelength and diffractive multiplexed expansion of field of view for display devices

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7177083B2 (en) * 2003-02-17 2007-02-13 Carl-Zeiss-Stiftung Trading As Display device with electrooptical focussing
US7230583B2 (en) * 1998-11-09 2007-06-12 University Of Washington Scanned beam display with focal length adjustment
US20090168010A1 (en) * 2007-12-27 2009-07-02 Igor Vinogradov Adaptive focusing using liquid crystal lens in electro-optical readers

Family Cites Families (164)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3493290A (en) * 1966-01-14 1970-02-03 Mitre Corp Three-dimensional display
US3632184A (en) 1970-03-02 1972-01-04 Bell Telephone Labor Inc Three-dimensional display
JPS503354A (en) 1973-05-11 1975-01-14
DE3266147D1 (en) 1981-05-29 1985-10-17 Gec Avionics Night vision goggles
US4669810A (en) 1984-02-03 1987-06-02 Flight Dynamics, Inc. Head up display system
US4753522A (en) 1985-06-03 1988-06-28 Ricoh Company, Ltd. Plastic lens assembly for use in copying machines
US4863251A (en) 1987-03-13 1989-09-05 Xerox Corporation Double gauss lens for a raster input scanner
US5880888A (en) 1989-01-23 1999-03-09 Hughes Aircraft Company Helmet mounted display system
JPH02200074A (en) 1989-01-30 1990-08-08 Nippon Hoso Kyokai <Nhk> Solid-state image pickup device with image intensifier
GB8916206D0 (en) 1989-07-14 1989-11-08 Marconi Gec Ltd Helmet systems
JP2692996B2 (en) 1989-12-25 1997-12-17 オリンパス光学工業株式会社 Imaging lens
US5109469A (en) 1990-11-01 1992-04-28 Itt Corporation Phosphor screen for correcting luminous non-uniformity and method for making same
US5172275A (en) 1990-12-14 1992-12-15 Eastman Kodak Company Apochromatic relay lens systems suitable for use in a high definition telecine apparatus
WO1992018971A1 (en) 1991-04-22 1992-10-29 Evans & Sutherland Computer Corp. Head-mounted projection display system featuring beam splitter
DE69325607T2 (en) 1992-04-07 2000-04-06 Raytheon Co Wide spectral band virtual image display optical system
US6008781A (en) 1992-10-22 1999-12-28 Board Of Regents Of The University Of Washington Virtual retinal display
US5526183A (en) 1993-11-29 1996-06-11 Hughes Electronics Helmet visor display employing reflective, refractive and diffractive optical elements
US5416315A (en) 1994-01-24 1995-05-16 Night Vision General Partnership Visor-mounted night vision visor
US7262919B1 (en) 1994-06-13 2007-08-28 Canon Kabushiki Kaisha Head-up display device with curved optical surface having total reflection
US5621572A (en) 1994-08-24 1997-04-15 Fergason; James L. Optical system for a head mounted display using a retro-reflector and method of displaying an image
JPH08160345A (en) 1994-12-05 1996-06-21 Olympus Optical Co Ltd Head mounted display device
US5625495A (en) 1994-12-07 1997-04-29 U.S. Precision Lens Inc. Telecentric lens systems for forming an image of an object composed of pixels
JP3658034B2 (en) 1995-02-28 2005-06-08 キヤノン株式会社 Image observation optical system and imaging optical system
US5818632A (en) 1995-04-13 1998-10-06 Melles Griot, Inc Multi-element lens system
JP3599828B2 (en) 1995-05-18 2004-12-08 オリンパス株式会社 Optical device
EP0785457A3 (en) * 1996-01-17 1998-10-14 Nippon Telegraph And Telephone Corporation Optical device and three-dimensional display device
JP3556389B2 (en) 1996-05-01 2004-08-18 日本電信電話株式会社 Head mounted display device
JPH09218375A (en) 1996-02-08 1997-08-19 Canon Inc Fatigue deciding method and observing device using same
JPH09219832A (en) 1996-02-13 1997-08-19 Olympus Optical Co Ltd Image display
US5959780A (en) 1996-04-15 1999-09-28 Olympus Optical Co., Ltd. Head-mounted display apparatus comprising a rotationally asymmetric surface
JP3758265B2 (en) 1996-04-24 2006-03-22 ソニー株式会社 3D image display method and display device thereof
US5880711A (en) 1996-04-24 1999-03-09 Sony Corporation Three-dimensional image display method and its display apparatus
US6028606A (en) 1996-08-02 2000-02-22 The Board Of Trustees Of The Leland Stanford Junior University Camera simulation system
JP3924348B2 (en) 1996-11-05 2007-06-06 オリンパス株式会社 Image display device
US6034823A (en) 1997-02-07 2000-03-07 Olympus Optical Co., Ltd. Decentered prism optical system
JPH10307263A (en) 1997-05-07 1998-11-17 Olympus Optical Co Ltd Prism optical element and image observation device
US6760169B2 (en) 1997-05-07 2004-07-06 Olympus Corporation Prism optical element, image observation apparatus and image display apparatus
DE69824440T2 (en) 1997-11-05 2005-06-23 Omd Devices L.L.C., Wilmington FOCUS ERROR CORRECTION DEVICE
JP2001511266A (en) 1997-12-11 2001-08-07 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Image display device and head-mounted display including the image display device
US6236521B1 (en) 1998-02-09 2001-05-22 Canon Kabushiki Kaisha Objective lens and image pickup device using the same
US6198577B1 (en) 1998-03-10 2001-03-06 Glaxo Wellcome, Inc. Doubly telecentric lens and imaging system for multiwell plates
JP3279265B2 (en) 1998-03-26 2002-04-30 株式会社エム・アール・システム研究所 Image display device
US6704149B2 (en) 1998-04-21 2004-03-09 Minolta Co., Ltd. Lens optical system
JPH11326820A (en) 1998-05-18 1999-11-26 Olympus Optical Co Ltd Observing optical system and observing device using it
JP2000075240A (en) 1998-08-26 2000-03-14 Mr System Kenkyusho:Kk Composite display device
JP2000199853A (en) 1998-10-26 2000-07-18 Olympus Optical Co Ltd Image-formation optical system and observation optical system
JP2000171714A (en) 1998-12-07 2000-06-23 Olympus Optical Co Ltd Image-formation optical system
US6433760B1 (en) 1999-01-14 2002-08-13 University Of Central Florida Head mounted display with eyetracking capability
JP4550184B2 (en) 1999-07-02 2010-09-22 オリンパス株式会社 Observation optical system
JP2000231060A (en) 1999-02-12 2000-08-22 Olympus Optical Co Ltd Image-formation optical system
JP2000249974A (en) 1999-03-02 2000-09-14 Canon Inc Display device and stereoscopic display device
DE60041287D1 (en) 1999-04-02 2009-02-12 Olympus Corp Optical display and image display with this device
EP1054280A3 (en) 1999-05-20 2004-08-18 Konica Corporation Zoom lens
JP2001066543A (en) 1999-08-25 2001-03-16 Canon Inc Composite optical device
US6243199B1 (en) 1999-09-07 2001-06-05 Moxtek Broad band wire grid polarizing beam splitter for use in the visible wavelength region
JP3391342B2 (en) 1999-10-29 2003-03-31 ミノルタ株式会社 Imaging lens device
JP2001145127A (en) 1999-11-12 2001-05-25 Shunichi Suda Three-dimensional image display device
JP3854763B2 (en) 1999-11-19 2006-12-06 キヤノン株式会社 Image display device
KR100360592B1 (en) 1999-12-08 2002-11-13 동부전자 주식회사 Semiconductor device and method for fabricating it
JP2001238229A (en) 2000-02-21 2001-08-31 Nippon Hoso Kyokai <Nhk> Stereoscopic image photographing device and stereoscopic image display device, and stereoscopic image photographing and display system
KR100386725B1 (en) 2000-07-31 2003-06-09 주식회사 대양이앤씨 Optical System for Head Mount Display
JP4727025B2 (en) 2000-08-01 2011-07-20 オリンパス株式会社 Image display device
JP3658295B2 (en) 2000-08-09 2005-06-08 キヤノン株式会社 Image display device
EP1316225B1 (en) 2000-09-07 2006-11-15 Actuality Systems, Inc. Volumetric display system
JP4583569B2 (en) 2000-09-22 2010-11-17 オリンパス株式会社 Observation optical system and imaging optical system
JP4646374B2 (en) 2000-09-29 2011-03-09 オリンパス株式会社 Image observation optical system
US6563648B2 (en) 2000-10-20 2003-05-13 Three-Five Systems, Inc. Compact wide field of view imaging system
JP2002148559A (en) 2000-11-15 2002-05-22 Mixed Reality Systems Laboratory Inc Image observing device and image observing system using the device
JP4943580B2 (en) 2000-12-25 2012-05-30 オリンパス株式会社 Imaging optics
JP3658330B2 (en) 2001-02-21 2005-06-08 キヤノン株式会社 Composite display device and head mounted display device using the same
JP2002258208A (en) 2001-03-01 2002-09-11 Mixed Reality Systems Laboratory Inc Optical element and composite display device utilizing it
US6529331B2 (en) 2001-04-20 2003-03-04 Johns Hopkins University Head mounted display with full field of view and high resolution
US6999239B1 (en) 2001-05-23 2006-02-14 Research Foundation Of The University Of Central Florida, Inc Head-mounted display by integration of phase-conjugate material
US6731434B1 (en) 2001-05-23 2004-05-04 University Of Central Florida Compact lens assembly for the teleportal augmented reality system
US6963454B1 (en) 2002-03-01 2005-11-08 Research Foundation Of The University Of Central Florida Head-mounted display by integration of phase-conjugate material
JP4751534B2 (en) 2001-07-24 2011-08-17 大日本印刷株式会社 Optical system and apparatus using the same
JP4129972B2 (en) 2002-02-18 2008-08-06 オリンパス株式会社 Decentered optical system
KR100509370B1 (en) 2002-12-30 2005-08-19 삼성테크윈 주식회사 Photographing lens
US7046447B2 (en) * 2003-01-13 2006-05-16 Pc Mirage, Llc Variable focus system
JP2006519421A (en) 2003-03-05 2006-08-24 スリーエム イノベイティブ プロパティズ カンパニー Diffractive lens
JP4035476B2 (en) 2003-04-23 2008-01-23 キヤノン株式会社 Scanning optical system, scanning image display apparatus, and image display system
US7152977B2 (en) 2003-04-24 2006-12-26 Qubic Light Corporation Solid state light engine optical system
US7077523B2 (en) 2004-02-13 2006-07-18 Angstorm Inc. Three-dimensional display using variable focusing lens
US20070246641A1 (en) 2004-02-27 2007-10-25 Baun Kenneth W Night vision system with video screen
US7339737B2 (en) 2004-04-23 2008-03-04 Microvision, Inc. Beam multiplier that can be used as an exit-pupil expander and related system and method
JP2008508621A (en) 2004-08-03 2008-03-21 シルバーブルック リサーチ ピーティワイ リミテッド Walk-up printing
WO2006041596A2 (en) 2004-09-01 2006-04-20 Optical Research Associates Compact head mounted display devices with tilted/decentered lens element
JP4639721B2 (en) 2004-09-22 2011-02-23 株式会社ニコン 3D image display device
JP4560368B2 (en) 2004-10-08 2010-10-13 キヤノン株式会社 Eye detection device and image display device
US7249853B2 (en) 2005-04-13 2007-07-31 Eastman Kodak Company Unpolished optical element with periodic surface roughness
US7405881B2 (en) 2005-05-30 2008-07-29 Konica Minolta Holdings, Inc. Image display apparatus and head mount display
US7360905B2 (en) 2005-06-24 2008-04-22 Texas Instruments Incorporated Compact optical engine for very small personal projectors using LED illumination
CN101278565A (en) 2005-08-08 2008-10-01 康涅狄格大学 Depth and lateral size control of three-dimensional images in projection integral imaging
JP2007101930A (en) 2005-10-05 2007-04-19 Matsushita Electric Ind Co Ltd Method for forming and displaying element image of stereoscopic image and stereoscopic image display device
US20070109505A1 (en) 2005-10-05 2007-05-17 Matsushita Electric Industrial Co., Ltd. Projection three-dimensional display apparatus
US7522344B1 (en) 2005-12-14 2009-04-21 University Of Central Florida Research Foundation, Inc. Projection-based head-mounted display with eye-tracking capabilities
US8360578B2 (en) 2006-01-26 2013-01-29 Nokia Corporation Eye tracker device
KR101255209B1 (en) 2006-05-04 2013-04-23 삼성전자주식회사 High resolution autostereoscopic display apparatus with interlaced image
US20070273983A1 (en) 2006-05-26 2007-11-29 Hebert Raymond T Devices, methods, and systems for image viewing
US8102454B2 (en) 2006-06-08 2012-01-24 Shimadzu Corporation Image pickup apparatus
JP2006276884A (en) 2006-06-16 2006-10-12 Olympus Corp Eccentric prism optical system
US7515345B2 (en) 2006-10-09 2009-04-07 Drs Sensors & Targeting Systems, Inc. Compact objective lens assembly
WO2008089417A2 (en) 2007-01-18 2008-07-24 The Arizona Board Of Regents On Behalf Of The University Of Arizona A polarized head-mounted projection display
JP4906680B2 (en) 2007-11-02 2012-03-28 キヤノン株式会社 Image display device
GB2468997A (en) 2008-01-22 2010-09-29 Univ Arizona State Head-mounted projection display using reflective microdisplays
JP5169253B2 (en) 2008-01-29 2013-03-27 ブラザー工業株式会社 Image display device
FR2928034B1 (en) 2008-02-26 2010-03-19 New Imaging Technologies Sas MATRIX SENSOR FOR LIGHT AMPLIFIER TUBE
JP5329882B2 (en) 2008-09-17 2013-10-30 パイオニア株式会社 Display device
CN101359089B (en) 2008-10-08 2010-08-11 北京理工大学 Optical system of a lightweight, compact, wide-field-of-view free-form-surface prism helmet-mounted display
JP5341462B2 (en) 2008-10-14 2013-11-13 キヤノン株式会社 Aberration correction method, image processing apparatus, and image processing system
JP5464839B2 (en) 2008-10-31 2014-04-09 キヤノン株式会社 Image display device
CN101424788A (en) 2008-12-09 2009-05-06 中国科学院长春光学精密机械与物理研究所 Glasses type climbing helmet display optical system
US8331032B2 (en) 2009-02-19 2012-12-11 Drs Rsta, Inc. Compact objective lens assembly for simultaneously imaging multiple spectral bands
WO2010123934A1 (en) 2009-04-20 2010-10-28 The Arizona Board Of Regents On Behalf Of The University Of Arizona Optical see-through free-form head-mounted display
US8441733B2 (en) 2009-04-24 2013-05-14 David Kessler Pupil-expanded volumetric display
GB0909126D0 (en) 2009-05-27 2009-07-01 Qinetiq Ltd Eye tracking apparatus
US20110075257A1 (en) 2009-09-14 2011-03-31 The Arizona Board Of Regents On Behalf Of The University Of Arizona 3-Dimensional electro-optical see-through displays
JP2011085769A (en) 2009-10-15 2011-04-28 Canon Inc Imaging display device
EP3611555A1 (en) 2009-11-19 2020-02-19 eSight Corporation Image magnification on a head mounted display
US9182596B2 (en) 2010-02-28 2015-11-10 Microsoft Technology Licensing, Llc See-through near-eye display glasses with the optical assembly including absorptive polarizers or anti-reflective coatings to reduce stray light
JP2013521576A (en) 2010-02-28 2013-06-10 オスターハウト グループ インコーポレイテッド Local advertising content on interactive head-mounted eyepieces
US8467133B2 (en) 2010-02-28 2013-06-18 Osterhout Group, Inc. See-through display with an optical assembly including a wedge-shaped illumination system
EP2564259B1 (en) 2010-04-30 2015-01-21 Beijing Institute Of Technology Wide angle and high resolution tiled head-mounted display device
US20120013988A1 (en) 2010-07-16 2012-01-19 Hutchin Richard A Head mounted display having a panoramic field of view
AU2011278053A1 (en) 2010-07-16 2013-01-31 Mcgill Technology Limited Dispensing apparatus
US20120019557A1 (en) 2010-07-22 2012-01-26 Sony Ericsson Mobile Communications Ab Displaying augmented reality information
DE102010040030B4 (en) 2010-08-31 2017-02-02 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Lens and imaging system
JP5603716B2 (en) 2010-09-06 2014-10-08 オリンパス株式会社 PRISM OPTICAL SYSTEM, IMAGE DISPLAY DEVICE AND IMAGING DEVICE USING PRISM OPTICAL SYSTEM
US8503087B1 (en) 2010-11-02 2013-08-06 Google Inc. Structured optical surface
US9292973B2 (en) 2010-11-08 2016-03-22 Microsoft Technology Licensing, Llc Automatic variable virtual focus for augmented reality displays
CA2822978C (en) 2010-12-24 2019-02-19 Hong Hua An ergonomic head mounted display device and optical system
US10156722B2 (en) 2010-12-24 2018-12-18 Magic Leap, Inc. Methods and systems for displaying stereoscopy with a freeform optical system with addressable focus for virtual and augmented reality
US20120160302A1 (en) 2010-12-27 2012-06-28 Jeffrey Michael Citron Trough shaped fresnel reflector solar concentrator
DE112012001022T5 (en) 2011-02-28 2013-12-19 Osterhout Group, Inc. Alignment control in a head-worn augmented reality device
TWI490587B (en) 2011-04-12 2015-07-01 Ability Entpr Co Ltd Optical zoom lens
US11640050B2 (en) 2011-10-19 2023-05-02 Epic Optix Inc. Microdisplay-based head-up display system
CN104204904B (en) 2012-01-24 2018-05-18 亚利桑那大学评议会 Close-coupled eyes track head-mounted display
JP6111635B2 (en) 2012-02-24 2017-04-12 セイコーエプソン株式会社 Virtual image display device
US8985803B2 (en) 2012-03-21 2015-03-24 Microsoft Technology Licensing, Llc Freeform-prism eyepiece with illumination waveguide
JP6056171B2 (en) 2012-03-29 2017-01-11 富士通株式会社 Stereoscopic image display apparatus and method
US20130286053A1 (en) 2012-04-25 2013-10-31 Rod G. Fleck Direct view augmented reality eyeglass-type display
US20130285885A1 (en) 2012-04-25 2013-10-31 Andreas G. Nowatzyk Head-mounted light-field display
US20130300634A1 (en) 2012-05-09 2013-11-14 Nokia Corporation Method and apparatus for determining representations of displayed information based on focus distance
US8754829B2 (en) 2012-08-04 2014-06-17 Paul Lapstun Scanning light field camera and display
DE102013001097A1 (en) 2012-08-10 2014-02-13 Johnson Controls Gmbh Head-up display and method for operating a head-up display
JP6019918B2 (en) 2012-08-17 2016-11-02 セイコーエプソン株式会社 Virtual image display device
JP2015534108A (en) 2012-09-11 2015-11-26 マジック リープ, インコーポレイテッド Ergonomic head mounted display device and optical system
US9940901B2 (en) 2012-09-21 2018-04-10 Nvidia Corporation See-through optical image processing
IN2015DN02476A (en) 2012-10-18 2015-09-11 Univ Arizona State
US9858721B2 (en) 2013-01-15 2018-01-02 The University Of North Carolina At Chapel Hill Methods, systems, and computer readable media for generating an augmented scene display
US9201193B1 (en) 2013-02-18 2015-12-01 Exelis, Inc. Textured fiber optic coupled image intensified camera
WO2014144989A1 (en) 2013-03-15 2014-09-18 Ostendo Technologies, Inc. 3d light field displays and methods with improved viewing angle depth and resolution
US9405124B2 (en) 2013-04-09 2016-08-02 Massachusetts Institute Of Technology Methods and apparatus for light field projection
CN103605214A (en) 2013-11-21 2014-02-26 深圳市华星光电技术有限公司 Stereoscopic display device
US9857591B2 (en) 2014-05-30 2018-01-02 Magic Leap, Inc. Methods and system for creating focal planes in virtual and augmented reality
US10274731B2 (en) 2013-12-19 2019-04-30 The University Of North Carolina At Chapel Hill Optical see-through near-eye display using point light source backlight
JP6264878B2 (en) 2013-12-24 2018-01-24 セイコーエプソン株式会社 Light guide device, virtual image display device, and light guide device manufacturing method
US10244223B2 (en) 2014-01-10 2019-03-26 Ostendo Technologies, Inc. Methods for full parallax compressed light field 3D imaging systems
CN106029000A (en) 2014-02-21 2016-10-12 阿克伦大学 Imaging and display system for guiding medical interventions
AU2015227094B2 (en) 2014-03-05 2019-07-04 Arizona Board Of Regents On Behalf Of The University Of Arizona Wearable 3D augmented reality display with variable focus and/or object recognition
AU2015266670B2 (en) 2014-05-30 2019-05-09 Magic Leap, Inc. Methods and systems for displaying stereoscopy with a freeform optical system with addressable focus for virtual and augmented reality
WO2016033317A1 (en) 2014-08-29 2016-03-03 Arizona Board Of Regents On Behalf Of The University Of Arizona Ultra-compact head-up displays based on freeform waveguide
US20160239985A1 (en) 2015-02-17 2016-08-18 Osterhout Group, Inc. See-through computer display systems
CA3033651C (en) 2016-08-12 2023-09-05 Arizona Board Of Regents On Behalf Of The University Of Arizona High-resolution freeform eyepiece design with a large exit pupil

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7230583B2 (en) * 1998-11-09 2007-06-12 University Of Washington Scanned beam display with focal length adjustment
US7177083B2 (en) * 2003-02-17 2007-02-13 Carl-Zeiss-Stiftung trading as Carl Zeiss Display device with electrooptical focussing
US20090168010A1 (en) * 2007-12-27 2009-07-02 Igor Vinogradov Adaptive focusing using liquid crystal lens in electro-optical readers

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Hidenori Kuriyabashi, Munekazu Date, Shiro Suyama, Toyohiko Hatada, J. of the SID, 14/5, 2006, pp. 493-498. *
Hong Hua, Chunyu Gao, Frank Biocca, Jannick P. Rolland, Proc. of the Virtual Reality 2001 Conference (VR'01), 0-7695-0948-7/01. *
Love et al., "High-speed switchable lens enables the development of a volumetric stereoscopic display," Optics Express, Vol. 17, No. 18, pp. 15716-15725, Aug. 2009. *

Cited By (405)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9509975B2 (en) * 2004-10-21 2016-11-29 Try Tech Llc Methods for acquiring, storing, transmitting and displaying stereoscopic images
US11592650B2 (en) 2008-01-22 2023-02-28 Arizona Board Of Regents On Behalf Of The University Of Arizona Head-mounted projection display using reflective microdisplays
US9310591B2 (en) 2008-01-22 2016-04-12 The Arizona Board Of Regents On Behalf Of The University Of Arizona Head-mounted projection display using reflective microdisplays
US10495859B2 (en) 2008-01-22 2019-12-03 The Arizona Board Of Regents On Behalf Of The University Of Arizona Head-mounted projection display using reflective microdisplays
US11150449B2 (en) 2008-01-22 2021-10-19 Arizona Board Of Regents On Behalf Of The University Of Arizona Head-mounted projection display using reflective microdisplays
US9865043B2 (en) 2008-03-26 2018-01-09 Ricoh Company, Ltd. Adaptive image acquisition and display using multi-focal display
US9239453B2 (en) 2009-04-20 2016-01-19 Beijing Institute Of Technology Optical see-through free-form head-mounted display
US10416452B2 (en) 2009-04-20 2019-09-17 The Arizona Board Of Regents On Behalf Of The University Of Arizona Optical see-through free-form head-mounted display
US11300790B2 (en) 2009-04-20 2022-04-12 Arizona Board Of Regents On Behalf Of The University Of Arizona Optical see-through free-form head-mounted display
US11079596B2 (en) 2009-09-14 2021-08-03 The Arizona Board Of Regents On Behalf Of The University Of Arizona 3-dimensional electro-optical see-through displays
US11803059B2 (en) 2009-09-14 2023-10-31 The Arizona Board Of Regents On Behalf Of The University Of Arizona 3-dimensional electro-optical see-through displays
US8199165B2 (en) * 2009-10-14 2012-06-12 Hewlett-Packard Development Company, L.P. Methods and systems for object segmentation in digital images
US20110085028A1 (en) * 2009-10-14 2011-04-14 Ramin Samadani Methods and systems for object segmentation in digital images
US9934575B2 (en) * 2009-11-27 2018-04-03 Sony Corporation Image processing apparatus, method and computer program to adjust 3D information based on human visual characteristics
US9819931B2 (en) 2010-01-12 2017-11-14 Samsung Electronics Co., Ltd Method for performing out-focus using depth information and camera using the same
US11184603B2 (en) * 2010-01-12 2021-11-23 Samsung Electronics Co., Ltd. Method for performing out-focus using depth information and camera using the same
US9154684B2 (en) * 2010-01-12 2015-10-06 Samsung Electronics Co., Ltd Method for performing out-focus using depth information and camera using the same
US10659767B2 (en) 2010-01-12 2020-05-19 Samsung Electronics Co., Ltd. Method for performing out-focus using depth information and camera using the same
US20110169921A1 (en) * 2010-01-12 2011-07-14 Samsung Electronics Co., Ltd. Method for performing out-focus using depth information and camera using the same
US10809533B2 (en) 2010-04-30 2020-10-20 Arizona Board Of Regents On Behalf Of The University Of Arizona Wide angle and high resolution tiled head-mounted display device
US10281723B2 (en) 2010-04-30 2019-05-07 The Arizona Board Of Regents On Behalf Of The University Of Arizona Wide angle and high resolution tiled head-mounted display device
US9244277B2 (en) 2010-04-30 2016-01-26 The Arizona Board Of Regents On Behalf Of The University Of Arizona Wide angle and high resolution tiled head-mounted display device
US11609430B2 (en) 2010-04-30 2023-03-21 The Arizona Board Of Regents On Behalf Of The University Of Arizona Wide angle and high resolution tiled head-mounted display device
US9330470B2 (en) 2010-06-16 2016-05-03 Intel Corporation Method and system for modeling subjects from a depth map
US20130141549A1 (en) * 2010-06-29 2013-06-06 Cyclomedia Technology B.V. Method for Producing a Digital Photo Wherein at Least Some of the Pixels Comprise Position Information, and Such a Digital Photo
US10264239B2 (en) * 2010-06-29 2019-04-16 Cyclomedia Technology B.V. Method for producing a digital photo wherein at least some of the pixels comprise position information, and such a digital photo
US20120019557A1 (en) * 2010-07-22 2012-01-26 Sony Ericsson Mobile Communications Ab Displaying augmented reality information
US8884984B2 (en) 2010-10-15 2014-11-11 Microsoft Corporation Fusing virtual content into real content
US9122053B2 (en) 2010-10-15 2015-09-01 Microsoft Technology Licensing, Llc Realistic occlusion for a head mounted augmented reality display
CN102419631A (en) * 2010-10-15 2012-04-18 微软公司 Fusing virtual content into real content
US9588341B2 (en) 2010-11-08 2017-03-07 Microsoft Technology Licensing, Llc Automatic variable virtual focus for augmented reality displays
US20120113092A1 (en) * 2010-11-08 2012-05-10 Avi Bar-Zeev Automatic variable virtual focus for augmented reality displays
US9292973B2 (en) * 2010-11-08 2016-03-22 Microsoft Technology Licensing, Llc Automatic variable virtual focus for augmented reality displays
US9304319B2 (en) 2010-11-18 2016-04-05 Microsoft Technology Licensing, Llc Automatic focus improvement for augmented reality displays
US10055889B2 (en) 2010-11-18 2018-08-21 Microsoft Technology Licensing, Llc Automatic focus improvement for augmented reality displays
US10156722B2 (en) * 2010-12-24 2018-12-18 Magic Leap, Inc. Methods and systems for displaying stereoscopy with a freeform optical system with addressable focus for virtual and augmented reality
US20160011419A1 (en) * 2010-12-24 2016-01-14 Magic Leap, Inc. Methods and systems for displaying stereoscopy with a freeform optical system with addressable focus for virtual and augmented reality
US8988461B1 (en) 2011-01-18 2015-03-24 Disney Enterprises, Inc. 3D drawing and painting system with a 3D scalar field
US9142056B1 (en) * 2011-05-18 2015-09-22 Disney Enterprises, Inc. Mixed-order compositing for images having three-dimensional painting effects
US9910498B2 (en) 2011-06-23 2018-03-06 Intel Corporation System and method for close-range movement tracking
US11048333B2 (en) 2011-06-23 2021-06-29 Intel Corporation System and method for close-range movement tracking
US20130021658A1 (en) * 2011-07-20 2013-01-24 Google Inc. Compact See-Through Display System
US9091850B2 (en) 2011-07-20 2015-07-28 Google Inc. Compact see-through display system
US8508851B2 (en) * 2011-07-20 2013-08-13 Google Inc. Compact see-through display system
US20160161739A1 (en) * 2011-07-27 2016-06-09 Microsoft Technology Licensing, Llc Variable-Depth Stereoscopic Display
US10082669B2 (en) * 2011-07-27 2018-09-25 Microsoft Technology Licensing, Llc Variable-depth stereoscopic display
US10222893B1 (en) 2011-08-05 2019-03-05 P4tents1, LLC Pressure-based touch screen system, method, and computer program product with virtual display layers
US10222891B1 (en) 2011-08-05 2019-03-05 P4tents1, LLC Setting interface system, method, and computer program product for a multi-pressure selection touch screen
US10209809B1 (en) 2011-08-05 2019-02-19 P4tents1, LLC Pressure-sensitive touch screen system, method, and computer program product for objects
US10222895B1 (en) 2011-08-05 2019-03-05 P4tents1, LLC Pressure-based touch screen system, method, and computer program product with virtual display layers
US10203794B1 (en) 2011-08-05 2019-02-12 P4tents1, LLC Pressure-sensitive home interface system, method, and computer program product
US10209807B1 (en) 2011-08-05 2019-02-19 P4tents1, LLC Pressure sensitive touch screen system, method, and computer program product for hyperlinks
US10222894B1 (en) 2011-08-05 2019-03-05 P4tents1, LLC System, method, and computer program product for a multi-pressure selection touch screen
US10120480B1 (en) 2011-08-05 2018-11-06 P4tents1, LLC Application-specific pressure-sensitive touch screen system, method, and computer program product
US10209806B1 (en) 2011-08-05 2019-02-19 P4tents1, LLC Tri-state gesture-equipped touch screen system, method, and computer program product
US10162448B1 (en) 2011-08-05 2018-12-25 P4tents1, LLC System, method, and computer program product for a pressure-sensitive touch screen for messages
US10146353B1 (en) 2011-08-05 2018-12-04 P4tents1, LLC Touch screen system, method, and computer program product
US10209808B1 (en) 2011-08-05 2019-02-19 P4tents1, LLC Pressure-based interface system, method, and computer program product with virtual display layers
US10156921B1 (en) 2011-08-05 2018-12-18 P4tents1, LLC Tri-state gesture-equipped touch screen system, method, and computer program product
US10222892B1 (en) 2011-08-05 2019-03-05 P4tents1, LLC System, method, and computer program product for a multi-pressure selection touch screen
US8963956B2 (en) 2011-08-19 2015-02-24 Microsoft Technology Licensing, Llc Location based skins for mixed reality displays
WO2013028586A1 (en) * 2011-08-19 2013-02-28 Latta Stephen G Location based skins for mixed reality displays
US9323325B2 (en) 2011-08-30 2016-04-26 Microsoft Technology Licensing, Llc Enhancing an object of interest in a see-through, mixed reality display device
US9414049B2 (en) 2011-09-19 2016-08-09 Écrans Polaires Inc./Polar Screens Inc. Method and display for showing a stereoscopic image
CN103931179A (en) * 2011-09-19 2014-07-16 埃克兰斯波莱尔斯股份有限公司/波拉斯克琳斯股份有限公司 Method and display for showing a stereoscopic image
US9255813B2 (en) 2011-10-14 2016-02-09 Microsoft Technology Licensing, Llc User controlled real object disappearance in a mixed reality display
US10132633B2 (en) 2011-10-14 2018-11-20 Microsoft Technology Licensing, Llc User controlled real object disappearance in a mixed reality display
US10497175B2 (en) 2011-12-06 2019-12-03 Microsoft Technology Licensing, Llc Augmented reality virtual monitor
US10514542B2 (en) * 2011-12-19 2019-12-24 Dolby Laboratories Licensing Corporation Head-mounted display
US20150002374A1 (en) * 2011-12-19 2015-01-01 Dolby Laboratories Licensing Corporation Head-Mounted Display
US10606080B2 (en) 2012-01-24 2020-03-31 The Arizona Board Of Regents On Behalf Of The University Of Arizona Compact eye-tracked head-mounted display
US9720232B2 (en) 2012-01-24 2017-08-01 The Arizona Board Of Regents On Behalf Of The University Of Arizona Compact eye-tracked head-mounted display
US11181746B2 (en) 2012-01-24 2021-11-23 Arizona Board Of Regents On Behalf Of The University Of Arizona Compact eye-tracked head-mounted display
US10598939B2 (en) 2012-01-24 2020-03-24 Arizona Board Of Regents On Behalf Of The University Of Arizona Compact eye-tracked head-mounted display
US20180113316A1 (en) 2012-01-24 2018-04-26 Arizona Board Of Regents On Behalf Of The University Of Arizona Compact eye-tracked head-mounted display
US10969592B2 (en) 2012-01-24 2021-04-06 Arizona Board Of Regents On Behalf Of The University Of Arizona Compact eye-tracked head-mounted display
US9734633B2 (en) * 2012-01-27 2017-08-15 Microsoft Technology Licensing, Llc Virtual environment generating system
US20130194259A1 (en) * 2012-01-27 2013-08-01 Darren Bennett Virtual environment generating system
US9557566B2 (en) * 2012-03-07 2017-01-31 Seiko Epson Corporation Head-mounted display device and control method for the head-mounted display device
US20130234914A1 (en) * 2012-03-07 2013-09-12 Seiko Epson Corporation Head-mounted display device and control method for the head-mounted display device
US9274519B2 (en) 2012-03-15 2016-03-01 General Electric Company Methods and apparatus for monitoring operation of a system asset
US10289108B2 (en) 2012-03-15 2019-05-14 General Electric Company Methods and apparatus for monitoring operation of a system asset
US8868384B2 (en) 2012-03-15 2014-10-21 General Electric Company Methods and apparatus for monitoring operation of a system asset
US10055642B2 (en) 2012-03-22 2018-08-21 Google Llc Staredown to produce changes in information density and type
US9096920B1 (en) * 2012-03-22 2015-08-04 Google Inc. User interface method
US9600721B2 (en) 2012-03-22 2017-03-21 Google Inc. Staredown to produce changes in information density and type
US9195053B2 (en) 2012-03-27 2015-11-24 Ostendo Technologies, Inc. Spatio-temporal directional light modulator
US20130265220A1 (en) * 2012-04-09 2013-10-10 Omek Interactive, Ltd. System and method for combining three-dimensional tracking with a three-dimensional display for a user interface
US9477303B2 (en) * 2012-04-09 2016-10-25 Intel Corporation System and method for combining three-dimensional tracking with a three-dimensional display for a user interface
CN103472909A (en) * 2012-04-10 2013-12-25 微软公司 Realistic occlusion for a head mounted augmented reality display
US20150062311A1 (en) * 2012-04-29 2015-03-05 Hewlett-Packard Development Company, L.P. View weighting for multiview displays
JP2015525365A (en) * 2012-05-09 2015-09-03 ノキア コーポレイション Method and apparatus for performing focus correction of display information
WO2013170074A1 (en) * 2012-05-09 2013-11-14 Nokia Corporation Method and apparatus for providing focus correction of displayed information
WO2013170073A1 (en) * 2012-05-09 2013-11-14 Nokia Corporation Method and apparatus for determining representations of displayed information based on focus distance
US9967555B2 (en) * 2012-05-25 2018-05-08 Hoya Corporation Simulation device
US20150163480A1 (en) * 2012-05-25 2015-06-11 Hoya Corporation Simulation device
US9430055B2 (en) * 2012-06-15 2016-08-30 Microsoft Technology Licensing, Llc Depth of field control for see-thru display
US20130335404A1 (en) * 2012-06-15 2013-12-19 Jeff Westerinen Depth of field control for see-thru display
US20140002491A1 (en) * 2012-06-29 2014-01-02 Mathew J. Lamb Deep augmented reality tags for head mounted displays
US9417692B2 (en) * 2012-06-29 2016-08-16 Microsoft Technology Licensing, Llc Deep augmented reality tags for mixed reality
WO2014016577A2 (en) * 2012-07-25 2014-01-30 Bae Systems Plc Head up display fluidic lens
WO2014016577A3 (en) * 2012-07-25 2014-02-27 Bae Systems Plc Head up display fluidic lens
GB2504311A (en) * 2012-07-25 2014-01-29 Bae Systems Plc Head-up display using fluidic lens
US10670880B2 (en) * 2012-08-06 2020-06-02 Sony Corporation Image display apparatus and image display method
US20170276956A1 (en) * 2012-08-06 2017-09-28 Sony Corporation Image display apparatus and image display method
US9380287B2 (en) 2012-09-03 2016-06-28 Sensomotoric Instruments Gesellschaft Fur Innovative Sensorik Mbh Head mounted system and method to compute and render a stream of digital images using a head mounted display
WO2014033306A1 (en) * 2012-09-03 2014-03-06 SensoMotoric Instruments Gesellschaft für innovative Sensorik mbH Head mounted system and method to compute and render a stream of digital images using a head mounted system
US20140092006A1 (en) * 2012-09-28 2014-04-03 Joshua Boelter Device and method for modifying rendering based on viewer focus area from eye tracking
US10269179B2 (en) 2012-10-05 2019-04-23 Elwha Llc Displaying second augmentations that are based on registered first augmentations
US10665017B2 (en) 2012-10-05 2020-05-26 Elwha Llc Displaying in response to detecting one or more user behaviors one or more second augmentations that are based on one or more registered first augmentations
US10180715B2 (en) 2012-10-05 2019-01-15 Elwha Llc Correlating user reaction with at least an aspect associated with an augmentation of an augmented view
US9671863B2 (en) 2012-10-05 2017-06-06 Elwha Llc Correlating user reaction with at least an aspect associated with an augmentation of an augmented view
US9674047B2 (en) * 2012-10-05 2017-06-06 Elwha Llc Correlating user reactions with augmentations displayed through augmented views
US20140098135A1 (en) * 2012-10-05 2014-04-10 Elwha Llc Correlating user reactions with augmentations displayed through augmented views
US10713846B2 (en) 2012-10-05 2020-07-14 Elwha Llc Systems and methods for sharing augmentation data
US10254830B2 (en) 2012-10-05 2019-04-09 Elwha Llc Correlating user reaction with at least an aspect associated with an augmentation of an augmented view
AU2013333726B2 (en) * 2012-10-15 2017-01-19 Bae Systems Plc Prismatic correcting lens
WO2014060736A1 (en) * 2012-10-15 2014-04-24 Bae Systems Plc Prismatic correcting lens
JP2015535092A (en) * 2012-10-15 2015-12-07 BAE Systems plc Prism correction lens
JP2016502676A (en) * 2012-10-18 2016-01-28 アリゾナ ボード オブ リージェンツ オン ビハーフ オブ ザ ユニバーシティ オブ アリゾナ Stereoscopic display using addressable focus cues
KR20210010649A (en) * 2012-10-18 2021-01-27 더 아리조나 보드 오브 리전츠 온 비핼프 오브 더 유니버시티 오브 아리조나 Stereoscopic displays with addressable focus cues
KR20150070195A (en) * 2012-10-18 2015-06-24 더 아리조나 보드 오브 리전츠 온 비핼프 오브 더 유니버시티 오브 아리조나 Stereoscopic displays with addressable focus cues
JP7213002B2 (en) 2012-10-18 2023-01-26 アリゾナ ボード オブ リージェンツ オン ビハーフ オブ ザ ユニバーシティ オブ アリゾナ Stereoscopic display with addressable focal cues
JP2021047417A (en) * 2012-10-18 2021-03-25 アリゾナ ボード オブ リージェンツ オン ビハーフ オブ ザ ユニバーシティ オブ アリゾナ Three-dimensional view display using addressable focus clue
US11347036B2 (en) 2012-10-18 2022-05-31 The Arizona Board Of Regents On Behalf Of The University Of Arizona Stereoscopic displays with addressable focus cues
JP2019174815A (en) * 2012-10-18 2019-10-10 アリゾナ ボード オブ リージェンツ オン ビハーフ オブ ザ ユニバーシティ オブ アリゾナ Three-dimensional view display using focus clue in which address can be designated
US10598946B2 (en) 2012-10-18 2020-03-24 The Arizona Board Of Regents On Behalf Of The University Of Arizona Stereoscopic displays with addressable focus cues
KR102344903B1 (en) 2012-10-18 2021-12-28 더 아리조나 보드 오브 리전츠 온 비핼프 오브 더 유니버시티 오브 아리조나 Stereoscopic displays with addressable focus cues
US10394036B2 (en) 2012-10-18 2019-08-27 Arizona Board Of Regents On Behalf Of The University Of Arizona Stereoscopic displays with addressable focus cues
US9874760B2 (en) 2012-10-18 2018-01-23 Arizona Board Of Regents On Behalf Of The University Of Arizona Stereoscopic displays with addressable focus cues
KR102207298B1 (en) * 2012-10-18 2021-01-26 더 아리조나 보드 오브 리전츠 온 비핼프 오브 더 유니버시티 오브 아리조나 Stereoscopic displays with addressable focus cues
US10442774B1 (en) * 2012-11-06 2019-10-15 Valve Corporation Adaptive optical path with variable focal length
US11267793B1 (en) 2012-11-06 2022-03-08 Valve Corporation Adaptive optical path with variable focal length
US11767300B1 (en) 2012-11-06 2023-09-26 Valve Corporation Adaptive optical path with variable focal length
US9851787B2 (en) * 2012-11-29 2017-12-26 Microsoft Technology Licensing, Llc Display resource management
US20140145914A1 (en) * 2012-11-29 2014-05-29 Stephen Latta Head-mounted display resource management
US9265458B2 (en) 2012-12-04 2016-02-23 Sync-Think, Inc. Application of smooth pursuit cognitive testing paradigms to clinical drug development
US10371938B2 (en) * 2013-01-24 2019-08-06 Yuchen Zhou Method and apparatus to achieve virtual reality with a flexible display
US11006102B2 (en) 2013-01-24 2021-05-11 Yuchen Zhou Method of utilizing defocus in virtual reality and augmented reality
US9380976B2 (en) 2013-03-11 2016-07-05 Sync-Think, Inc. Optical neuroinformatics
US9393870B2 (en) 2013-03-15 2016-07-19 Honda Motor Co., Ltd. Volumetric heads-up display with dynamic focal plane
US10025486B2 (en) 2013-03-15 2018-07-17 Elwha Llc Cross-reality select, drag, and drop for augmented reality systems
WO2014144989A1 (en) * 2013-03-15 2014-09-18 Ostendo Technologies, Inc. 3d light field displays and methods with improved viewing angle depth and resolution
US20140347361A1 (en) * 2013-03-15 2014-11-27 Ostendo Technologies, Inc. 3D Light Field Displays and Methods with Improved Viewing Angle, Depth and Resolution
US10215583B2 (en) 2013-03-15 2019-02-26 Honda Motor Co., Ltd. Multi-level navigation monitoring and control
US9747898B2 (en) 2013-03-15 2017-08-29 Honda Motor Co., Ltd. Interpretation of ambiguous vehicle instructions
US10628969B2 (en) 2013-03-15 2020-04-21 Elwha Llc Dynamically preserving scene elements in augmented reality systems
US9452712B1 (en) 2013-03-15 2016-09-27 Honda Motor Co., Ltd. System and method for warning a driver of a potential rear end collision
US9164281B2 (en) 2013-03-15 2015-10-20 Honda Motor Co., Ltd. Volumetric heads-up display with dynamic focal plane
US10297071B2 (en) * 2013-03-15 2019-05-21 Ostendo Technologies, Inc. 3D light field displays and methods with improved viewing angle, depth and resolution
US10339711B2 (en) 2013-03-15 2019-07-02 Honda Motor Co., Ltd. System and method for providing augmented reality based directions based on verbal and gestural cues
US9251715B2 (en) 2013-03-15 2016-02-02 Honda Motor Co., Ltd. Driver training system using heads-up display augmented reality graphics elements
US9639964B2 (en) 2013-03-15 2017-05-02 Elwha Llc Dynamically preserving scene elements in augmented reality systems
US9400385B2 (en) 2013-03-15 2016-07-26 Honda Motor Co., Ltd. Volumetric heads-up display with dynamic focal plane
US10109075B2 (en) 2013-03-15 2018-10-23 Elwha Llc Temporal element restoration in augmented reality systems
TWI625551B (en) * 2013-03-15 2018-06-01 傲思丹度科技公司 3d light field displays and methods with improved viewing angle depth and resolution
US9378644B2 (en) 2013-03-15 2016-06-28 Honda Motor Co., Ltd. System and method for warning a driver of a potential rear end collision
US9754420B2 (en) 2013-04-29 2017-09-05 Microsoft Technology Licensing, Llc Mixed reality interactions
US9443354B2 (en) 2013-04-29 2016-09-13 Microsoft Technology Licensing, Llc Mixed reality interactions
US10510190B2 (en) 2013-04-29 2019-12-17 Microsoft Technology Licensing, Llc Mixed reality interactions
JP2014219621A (en) * 2013-05-10 2014-11-20 株式会社タイトー Display device and display control program
US20140362110A1 (en) * 2013-06-08 2014-12-11 Sony Computer Entertainment Inc. Systems and methods for customizing optical representation of views provided by a head mounted display based on optical prescription of a user
US20160212404A1 (en) * 2013-08-23 2016-07-21 The Schepens Eye Research Institute, Inc. Prevention and Treatment of Myopia
US20150084986A1 (en) * 2013-09-23 2015-03-26 Kil-Whan Lee Compositor, system-on-chip having the same, and method of driving system-on-chip
US9785231B1 (en) * 2013-09-26 2017-10-10 Rockwell Collins, Inc. Head worn display integrity monitor system and methods
US10643392B2 (en) * 2013-11-27 2020-05-05 Magic Leap, Inc. Virtual and augmented reality systems and methods
US11237403B2 (en) 2013-11-27 2022-02-01 Magic Leap, Inc. Virtual and augmented reality systems and methods
US10629004B2 (en) * 2013-11-27 2020-04-21 Magic Leap, Inc. Virtual and augmented reality systems and methods
US9915826B2 (en) 2013-11-27 2018-03-13 Magic Leap, Inc. Virtual and augmented reality systems and methods having improved diffractive grating structures
US10935806B2 (en) 2013-11-27 2021-03-02 Magic Leap, Inc. Virtual and augmented reality systems and methods
US20180045965A1 (en) * 2013-11-27 2018-02-15 Magic Leap, Inc. Virtual and augmented reality systems and methods
US11714291B2 (en) 2013-11-27 2023-08-01 Magic Leap, Inc. Virtual and augmented reality systems and methods
US10529138B2 (en) 2013-11-27 2020-01-07 Magic Leap, Inc. Virtual and augmented reality systems and methods
US10237544B2 (en) * 2013-12-12 2019-03-19 Boe Technology Group Co., Ltd. Open head mount display device and display method thereof
US20160007015A1 (en) * 2013-12-12 2016-01-07 Boe Technology Group Co., Ltd. Open Head Mount Display Device and Display method Thereof
US10247946B2 (en) * 2014-01-29 2019-04-02 Google Llc Dynamic lens for head mounted display
CN105940337A (en) * 2014-01-29 2016-09-14 谷歌公司 Dynamic lens for head mounted display
EP3100097A4 (en) * 2014-01-29 2018-02-14 Google LLC Dynamic lens for head mounted display
US10317690B2 (en) 2014-01-31 2019-06-11 Magic Leap, Inc. Multi-focal display system and method
US10386636B2 (en) 2014-01-31 2019-08-20 Magic Leap, Inc. Multi-focal display system and method
US11150489B2 (en) 2014-01-31 2021-10-19 Magic Leap, Inc. Multi-focal display system and method
US11209651B2 (en) * 2014-01-31 2021-12-28 Magic Leap, Inc. Multi-focal display system and method
EP4071537A1 (en) * 2014-01-31 2022-10-12 Magic Leap, Inc. Multi-focal display system
US11520164B2 (en) 2014-01-31 2022-12-06 Magic Leap, Inc. Multi-focal display system and method
US9313481B2 (en) 2014-02-19 2016-04-12 Microsoft Technology Licensing, Llc Stereoscopic display responsive to focal-point shift
KR20160123346A (en) * 2014-02-19 2016-10-25 마이크로소프트 테크놀로지 라이센싱, 엘엘씨 Stereoscopic display responsive to focal-point shift
KR102231910B1 (en) 2014-02-19 2021-03-25 마이크로소프트 테크놀로지 라이센싱, 엘엘씨 Stereoscopic display responsive to focal-point shift
WO2015126735A1 (en) * 2014-02-19 2015-08-27 Microsoft Technology Licensing, Llc Stereoscopic display responsive to focal-point shift
CN105992965A (en) * 2014-02-19 2016-10-05 微软技术许可有限责任公司 Stereoscopic display responsive to focal-point shift
US11350079B2 (en) 2014-03-05 2022-05-31 Arizona Board Of Regents On Behalf Of The University Of Arizona Wearable 3D augmented reality display
US10805598B2 (en) 2014-03-05 2020-10-13 The Arizona Board Of Regents On Behalf Of The University Of Arizona Wearable 3D lightfield augmented reality display
CN106662731A (en) * 2014-03-05 2017-05-10 亚利桑那大学评议会 Wearable 3d augmented reality display
WO2015134740A1 (en) * 2014-03-05 2015-09-11 Arizona Board Of Regents On Behalf Of The University Of Arizona Wearable 3d augmented reality display with variable focus and/or object recognition
JP2017516154A (en) * 2014-03-05 2017-06-15 アリゾナ ボード オブ リージェンツ オン ビハーフ オブ ザ ユニバーシティ オブ アリゾナ Wearable 3D augmented reality display with variable focus and / or object recognition
US10469833B2 (en) 2014-03-05 2019-11-05 The Arizona Board Of Regents On Behalf Of The University Of Arizona Wearable 3D augmented reality display with variable focus and/or object recognition
US11138793B2 (en) 2014-03-14 2021-10-05 Magic Leap, Inc. Multi-depth plane display system with reduced switching between depth planes
US10298911B2 (en) * 2014-03-31 2019-05-21 Empire Technology Development Llc Visualization of spatial and other relationships
US20150279022A1 (en) * 2014-03-31 2015-10-01 Empire Technology Development Llc Visualization of Spatial and Other Relationships
US20150306330A1 (en) * 2014-04-29 2015-10-29 MaskSelect, Inc. Mask Selection System
KR20170015375A (en) * 2014-05-30 2017-02-08 매직 립, 인코포레이티드 Methods and system for creating focal planes in virtual and augmented reality
JP2017522587A (en) * 2014-05-30 2017-08-10 マジック リープ, インコーポレイテッド Method and system for creating focal planes in virtual and augmented reality
US10234687B2 (en) 2014-05-30 2019-03-19 Magic Leap, Inc. Methods and system for creating focal planes in virtual and augmented reality
KR102225563B1 (en) * 2014-05-30 2021-03-08 매직 립, 인코포레이티드 Methods and system for creating focal planes in virtual and augmented reality
US11422374B2 (en) 2014-05-30 2022-08-23 Magic Leap, Inc. Methods and system for creating focal planes in virtual and augmented reality
WO2015184412A1 (en) 2014-05-30 2015-12-03 Magic Leap, Inc. Methods and system for creating focal planes in virtual and augmented reality
EP3149528A4 (en) * 2014-05-30 2018-01-24 Magic Leap, Inc. Methods and system for creating focal planes in virtual and augmented reality
CN106537219A (en) * 2014-05-30 2017-03-22 奇跃公司 Methods and system for creating focal planes in virtual and augmented reality
US11474355B2 (en) * 2014-05-30 2022-10-18 Magic Leap, Inc. Methods and systems for displaying stereoscopy with a freeform optical system with addressable focus for virtual and augmented reality
US9857591B2 (en) 2014-05-30 2018-01-02 Magic Leap, Inc. Methods and system for creating focal planes in virtual and augmented reality
US10627632B2 (en) 2014-05-30 2020-04-21 Magic Leap, Inc. Methods and system for creating focal planes in virtual and augmented reality
US20160019868A1 (en) * 2014-07-18 2016-01-21 Samsung Electronics Co., Ltd. Method for focus control and electronic device thereof
US10134370B2 (en) * 2014-07-18 2018-11-20 Samsung Electronics Co., Ltd. Smart mirror with focus control
US20160042554A1 (en) * 2014-08-05 2016-02-11 Samsung Electronics Co., Ltd. Method and apparatus for generating real three-dimensional (3d) image
US9565421B2 (en) * 2014-11-25 2017-02-07 Harold O. Hosea Device for creating and enhancing three-dimensional image effects
US9866826B2 (en) 2014-11-25 2018-01-09 Ricoh Company, Ltd. Content-adaptive multi-focal display
US20160147078A1 (en) * 2014-11-25 2016-05-26 Ricoh Company, Ltd. Multifocal Display
JP2016099631A (en) * 2014-11-25 2016-05-30 株式会社リコー Multi-focal display, method and controller
US9864205B2 (en) * 2014-11-25 2018-01-09 Ricoh Company, Ltd. Multifocal display
US20160150214A1 (en) * 2014-11-25 2016-05-26 Harold O. Hosea Device for creating and enhancing three-dimensional image effects
US20160156896A1 (en) * 2014-12-01 2016-06-02 Samsung Electronics Co., Ltd. Apparatus for recognizing pupillary distance for 3d display
US10742968B2 (en) * 2014-12-01 2020-08-11 Samsung Electronics Co., Ltd. Apparatus for recognizing pupillary distance for 3D display
US9918066B2 (en) 2014-12-23 2018-03-13 Elbit Systems Ltd. Methods and systems for producing a magnified 3D image
US10466486B2 (en) 2015-01-26 2019-11-05 Magic Leap, Inc. Virtual and augmented reality systems and methods having improved diffractive grating structures
US11009710B2 (en) 2015-01-26 2021-05-18 Magic Leap, Inc. Virtual and augmented reality systems and methods having improved diffractive grating structures
US10732417B2 (en) 2015-01-26 2020-08-04 Magic Leap, Inc. Virtual and augmented reality systems and methods having improved diffractive grating structures
US11487121B2 (en) 2015-01-26 2022-11-01 Magic Leap, Inc. Virtual and augmented reality systems and methods having improved diffractive grating structures
US10593507B2 (en) 2015-02-09 2020-03-17 Arizona Board Of Regents On Behalf Of The University Of Arizona Small portable night vision system
US11205556B2 (en) 2015-02-09 2021-12-21 Arizona Board Of Regents On Behalf Of The University Of Arizona Small portable night vision system
US10176961B2 (en) 2015-02-09 2019-01-08 The Arizona Board Of Regents On Behalf Of The University Of Arizona Small portable night vision system
US11474359B2 (en) 2015-03-16 2022-10-18 Magic Leap, Inc. Augmented and virtual reality display systems and methods for diagnosing health conditions based on visual fields
US10788675B2 (en) 2015-03-16 2020-09-29 Magic Leap, Inc. Methods and systems for diagnosing and treating eyes using light therapy
US11747627B2 (en) 2015-03-16 2023-09-05 Magic Leap, Inc. Augmented and virtual reality display systems and methods for diagnosing health conditions based on visual fields
US10775628B2 (en) * 2015-03-16 2020-09-15 Magic Leap, Inc. Methods and systems for diagnosing and treating presbyopia
US11156835B2 (en) 2015-03-16 2021-10-26 Magic Leap, Inc. Methods and systems for diagnosing and treating health ailments
US10983351B2 (en) 2015-03-16 2021-04-20 Magic Leap, Inc. Augmented and virtual reality display systems and methods for diagnosing health conditions based on visual fields
US10969588B2 (en) 2015-03-16 2021-04-06 Magic Leap, Inc. Methods and systems for diagnosing contrast sensitivity
US11256096B2 (en) 2015-03-16 2022-02-22 Magic Leap, Inc. Methods and systems for diagnosing and treating presbyopia
US10162412B2 (en) * 2015-03-27 2018-12-25 Seiko Epson Corporation Display, control method of display, and program
CN104808342A (en) * 2015-04-30 2015-07-29 杭州映墨科技有限公司 Optical lens structure of wearable virtual-reality headset capable of displaying three-dimensional scene
US20220132099A1 (en) * 2015-05-28 2022-04-28 Microsoft Technology Licensing, Llc Determining inter-pupillary distance
US11683470B2 (en) * 2015-05-28 2023-06-20 Microsoft Technology Licensing, Llc Determining inter-pupillary distance
US11252399B2 (en) * 2015-05-28 2022-02-15 Microsoft Technology Licensing, Llc Determining inter-pupillary distance
DE102015007245A1 (en) 2015-06-05 2016-12-08 Audi Ag Method for operating a data-goggle device and data-goggle device
CN107810634A (en) * 2015-06-12 2018-03-16 微软技术许可有限责任公司 Display for three-dimensional augmented reality
US20170221276A1 (en) * 2015-06-25 2017-08-03 Microsoft Technology Licensing, Llc Color fill in an augmented reality environment
US9652897B2 (en) * 2015-06-25 2017-05-16 Microsoft Technology Licensing, Llc Color fill in an augmented reality environment
US10204458B2 (en) * 2015-06-25 2019-02-12 Microsoft Technology Licensing, Llc Color fill in an augmented reality environment
US10335342B2 (en) 2015-07-23 2019-07-02 New Jersey Institute Of Technology Method, system, and apparatus for treatment of binocular dysfunctions
US9921413B2 (en) * 2015-10-02 2018-03-20 Deepsee Inc. 3D image system, method, and applications
US20170097511A1 (en) * 2015-10-02 2017-04-06 Jing Xu 3d image system, method, and applications
US11906739B2 (en) 2015-10-05 2024-02-20 Magic Leap, Inc. Microlens collimator for scanning optical fiber in virtual/augmented reality system
US11662585B2 (en) 2015-10-06 2023-05-30 Magic Leap, Inc. Virtual/augmented reality system having reverse angle diffraction grating
US20170148215A1 (en) * 2015-11-19 2017-05-25 Oculus Vr, Llc Eye Tracking for Mitigating Vergence and Accommodation Conflicts
US9984507B2 (en) * 2015-11-19 2018-05-29 Oculus Vr, Llc Eye tracking for mitigating vergence and accommodation conflicts
US20170154464A1 (en) * 2015-11-30 2017-06-01 Microsoft Technology Licensing, Llc Multi-optical surface optical design
US10204451B2 (en) * 2015-11-30 2019-02-12 Microsoft Technology Licensing, Llc Multi-optical surface optical design
CN108292043A (en) * 2015-11-30 2018-07-17 微软技术许可有限责任公司 More optical surface optical designs
US10241569B2 (en) 2015-12-08 2019-03-26 Facebook Technologies, Llc Focus adjustment method for a virtual reality headset
US10937129B1 (en) 2015-12-08 2021-03-02 Facebook Technologies, Llc Autofocus virtual reality headset
US10445860B2 (en) 2015-12-08 2019-10-15 Facebook Technologies, Llc Autofocus virtual reality headset
US10198978B2 (en) * 2015-12-15 2019-02-05 Facebook Technologies, Llc Viewing optics test subsystem for head mounted displays
US10782526B2 (en) * 2015-12-22 2020-09-22 E-Vision Smart Optics, Inc. Dynamic focusing head mounted display
US11668941B2 (en) 2015-12-22 2023-06-06 E-Vision Smart Optics, Inc. Dynamic focusing head mounted display
US11237396B2 (en) 2015-12-22 2022-02-01 E-Vision Smart Optics, Inc. Dynamic focusing head mounted display
US10708576B2 (en) * 2015-12-31 2020-07-07 Beijing Zhigu Riu Tech Co., Ltd. Light field display control method and apparatus, and light field display device
US10440354B2 (en) * 2015-12-31 2019-10-08 Beijing Zhigu Rui Tuo Tech Co., Ltd. Light field display control method and apparatus, and light field display device
CN106375694A (en) * 2015-12-31 2017-02-01 北京智谷睿拓技术服务有限公司 Light field display control method and device, and light field display equipment
US20170195661A1 (en) * 2015-12-31 2017-07-06 Beijing Zhigu Rui Tuo Tech Co., Ltd. Light field display control method and apparatus, and light field display device
US10582193B2 (en) * 2015-12-31 2020-03-03 Beijing Zhigu Rui Tuo Tech Co., Ltd. Light field display control method and apparatus, and light field display device
US10368049B2 (en) * 2015-12-31 2019-07-30 Beijing Zhigu Rui Tuo Tech Co., Ltd. Light field display control method and apparatus, and light field display device
US11006101B2 (en) 2016-01-29 2021-05-11 Hewlett-Packard Development Company, L.P. Viewing device adjustment based on eye accommodation in relation to a display
EP3409013A4 (en) * 2016-01-29 2019-09-04 Hewlett-Packard Development Company, L.P. Viewing device adjustment based on eye accommodation in relation to a display
US10459230B2 (en) 2016-02-02 2019-10-29 Disney Enterprises, Inc. Compact augmented reality / virtual reality display
CN108886612A (en) * 2016-02-11 2018-11-23 奇跃公司 Reduce the more depth plane display systems switched between depth plane
CN108886612B (en) * 2016-02-11 2021-05-25 奇跃公司 Multi-depth flat panel display system with reduced switching between depth planes
EP3414899A4 (en) * 2016-02-11 2019-11-06 Magic Leap, Inc. Multi-depth plane display system with reduced switching between depth planes
US11402898B2 (en) 2016-03-04 2022-08-02 Magic Leap, Inc. Current drain reduction in AR/VR display systems
US11320900B2 (en) 2016-03-04 2022-05-03 Magic Leap, Inc. Current drain reduction in AR/VR display systems
US11775062B2 (en) 2016-03-04 2023-10-03 Magic Leap, Inc. Current drain reduction in AR/VR display systems
US11106276B2 (en) 2016-03-11 2021-08-31 Facebook Technologies, Llc Focus adjusting headset
US10088673B2 (en) 2016-03-15 2018-10-02 Deepsee Inc. 3D display apparatus, method, and applications
US10698215B2 (en) * 2016-03-25 2020-06-30 Magic Leap, Inc. Virtual and augmented reality systems and methods
CN107077218A (en) * 2016-03-25 2017-08-18 深圳前海达闼云端智能科技有限公司 The viewing reminding method and device of a kind of three-dimensional content
US11467408B2 (en) 2016-03-25 2022-10-11 Magic Leap, Inc. Virtual and augmented reality systems and methods
WO2017161552A1 (en) * 2016-03-25 2017-09-28 深圳前海达闼云端智能科技有限公司 Viewing prompting method and apparatus for three-dimensional content
US20170276948A1 (en) * 2016-03-25 2017-09-28 Magic Leap, Inc. Virtual and augmented reality systems and methods
CN109154723A (en) * 2016-03-25 2019-01-04 奇跃公司 Virtual and augmented reality system and method
US11016301B1 (en) 2016-04-07 2021-05-25 Facebook Technologies, Llc Accommodation based optical correction
US20170293146A1 (en) * 2016-04-07 2017-10-12 Oculus Vr, Llc Accommodation based optical correction
US11067797B2 (en) 2016-04-07 2021-07-20 Magic Leap, Inc. Systems and methods for augmented reality
US10379356B2 (en) * 2016-04-07 2019-08-13 Facebook Technologies, Llc Accommodation based optical correction
US11614626B2 (en) 2016-04-08 2023-03-28 Magic Leap, Inc. Augmented reality systems and methods with variable focus lens elements
US11106041B2 (en) 2016-04-08 2021-08-31 Magic Leap, Inc. Augmented reality systems and methods with variable focus lens elements
CN105929537A (en) * 2016-04-08 2016-09-07 北京骁龙科技有限公司 Head-mounted display and eyepiece system thereof
US10001648B2 (en) 2016-04-14 2018-06-19 Disney Enterprises, Inc. Occlusion-capable augmented reality display using cloaking optics
US9726896B2 (en) * 2016-04-21 2017-08-08 Maximilian Ralph Peter von und zu Liechtenstein Virtual monitor display technique for augmented reality environments
US20160320625A1 (en) * 2016-04-21 2016-11-03 Maximilian Ralph Peter von und zu Liechtenstein Virtual Monitor Display Technique for Augmented Reality Environments
US20190137758A1 (en) * 2016-05-04 2019-05-09 The Regents Of The University Of California Pseudo light-field display apparatus
WO2017192887A3 (en) * 2016-05-04 2018-07-26 The Regents Of The University Of California Pseudo light-field display apparatus
US20170330376A1 (en) * 2016-05-10 2017-11-16 Disney Enterprises, Inc. Occluded virtual image display
US9922464B2 (en) * 2016-05-10 2018-03-20 Disney Enterprises, Inc. Occluded virtual image display
US10838583B2 (en) 2016-05-17 2020-11-17 General Electric Company Systems and methods for prioritizing and monitoring device status in a condition monitoring software application
WO2017208148A1 (en) * 2016-05-30 2017-12-07 Università Di Pisa Wearable visor for augmented reality
ITUA20163946A1 (en) * 2016-05-30 2017-11-30 Univ Pisa Wearable viewer for augmented reality
US10429647B2 (en) 2016-06-10 2019-10-01 Facebook Technologies, Llc Focus adjusting virtual reality headset
US9996984B2 (en) 2016-07-05 2018-06-12 Disney Enterprises, Inc. Focus control for virtual objects in augmented reality (AR) and virtual reality (VR) displays
US11016307B2 (en) 2016-08-12 2021-05-25 Avegant Corp. Method and apparatus for a shaped optical path length extender
US11025893B2 (en) 2016-08-12 2021-06-01 Avegant Corp. Near-eye display system including a modulation stack
US11852839B2 (en) 2016-08-12 2023-12-26 Avegant Corp. Optical path length extender
US11042048B2 (en) 2016-08-12 2021-06-22 Avegant Corp. Digital light path length modulation systems
US11852890B2 (en) 2016-08-12 2023-12-26 Avegant Corp. Near-eye display system
US10809546B2 (en) 2016-08-12 2020-10-20 Avegant Corp. Digital light path length modulation
US10866428B2 (en) 2016-08-12 2020-12-15 Avegant Corp. Orthogonal optical path length extender
US10739578B2 (en) 2016-08-12 2020-08-11 The Arizona Board Of Regents On Behalf Of The University Of Arizona High-resolution freeform eyepiece design with a large exit pupil
US11480784B2 (en) 2016-08-12 2022-10-25 Avegant Corp. Binocular display with digital light path length modulation
EP3497508A4 (en) * 2016-08-12 2020-04-22 Avegant Corp. A near-eye display system including a modulation stack
US10944904B2 (en) 2016-08-12 2021-03-09 Avegant Corp. Image capture with digital light path length modulation
AU2017317600B2 (en) * 2016-08-22 2021-12-09 Magic Leap, Inc. Virtual, augmented, and mixed reality systems and methods
WO2018039270A1 (en) * 2016-08-22 2018-03-01 Magic Leap, Inc. Virtual, augmented, and mixed reality systems and methods
US11823360B2 (en) 2016-08-22 2023-11-21 Magic Leap, Inc. Virtual, augmented, and mixed reality systems and methods
US10748259B2 (en) 2016-08-22 2020-08-18 Magic Leap, Inc. Virtual, augmented, and mixed reality systems and methods
US11151699B2 (en) 2016-08-22 2021-10-19 Magic Leap, Inc. Virtual, augmented, and mixed reality systems and methods
US10529063B2 (en) 2016-08-22 2020-01-07 Magic Leap, Inc. Virtual, augmented, and mixed reality systems and methods
AU2022201611B2 (en) * 2016-08-22 2022-12-01 Magic Leap, Inc. Virtual, augmented, and mixed reality systems and methods
CN106327584B (en) * 2016-08-24 2020-08-07 深圳市瑞云科技有限公司 Image processing method and device for virtual reality equipment
CN106327584A (en) * 2016-08-24 2017-01-11 上海与德通讯技术有限公司 Image processing method used for virtual reality equipment and image processing device thereof
CN107783291A (en) 2016-08-30 2018-03-09 北京亮亮视野科技有限公司 True three-dimensional panoramic show wear-type visual device
US11382500B2 (en) * 2016-09-22 2022-07-12 Essilor International Optometry device
WO2018054997A1 (en) * 2016-09-22 2018-03-29 Essilor International Optometry device
EP3298952A1 (en) * 2016-09-22 2018-03-28 Essilor International Optometry device
CN110023815A (en) * 2016-12-01 2019-07-16 阴影技术公司 Display device and the method shown using image renderer and optical combiner
WO2018100237A1 (en) * 2016-12-01 2018-06-07 Varjo Technologies Oy Display apparatus and method of displaying using image renderers and optical combiners
US11822095B2 (en) * 2016-12-09 2023-11-21 Lg Innotek Co., Ltd. Camera module including liquid lens, optical device including the module, and method for driving the liquid lens
US10482676B2 (en) 2017-01-10 2019-11-19 Meta View, Inc. Systems and methods to provide an interactive environment over an expanded field-of-view
US10127727B1 (en) * 2017-01-10 2018-11-13 Meta Company Systems and methods to provide an interactive environment over an expanded field-of-view
US11601638B2 (en) * 2017-01-10 2023-03-07 Intel Corporation Head-mounted display device
US20180199028A1 (en) * 2017-01-10 2018-07-12 Intel Corporation Head-mounted display device
US10353212B2 (en) * 2017-01-11 2019-07-16 Samsung Electronics Co., Ltd. See-through type display apparatus and method of operating the same
US10866418B2 (en) 2017-02-21 2020-12-15 Facebook Technologies, Llc Focus adjusting multiplanar head mounted display
CN110325895A (en) * 2017-02-21 2019-10-11 脸谱科技有限责任公司 Focus adjusting multiplanar head mounted display
EP3485319A4 (en) * 2017-02-21 2020-04-01 Facebook Technologies, LLC Focus adjusting multiplanar head mounted display
US10983354B2 (en) 2017-02-21 2021-04-20 Facebook Technologies, Llc Focus adjusting multiplanar head mounted display
US11300844B2 (en) 2017-02-23 2022-04-12 Magic Leap, Inc. Display system with variable power reflector
US11774823B2 (en) 2017-02-23 2023-10-03 Magic Leap, Inc. Display system with variable power reflector
US10962855B2 (en) 2017-02-23 2021-03-30 Magic Leap, Inc. Display system with variable power reflector
US10859812B2 (en) 2017-03-22 2020-12-08 Magic Leap, Inc. Dynamic field of view variable focus display system
EP3602583A4 (en) * 2017-03-22 2020-07-29 Magic Leap, Inc. Dynamic field of view variable focus display system
US11656468B2 (en) 2017-03-27 2023-05-23 Avegant Corp. Steerable high-resolution display having a foveal display and a field display with intermediate optics
US11163164B2 (en) 2017-03-27 2021-11-02 Avegant Corp. Steerable high-resolution display
US10514546B2 (en) * 2017-03-27 2019-12-24 Avegant Corp. Steerable high-resolution display
US11474284B2 (en) 2017-04-05 2022-10-18 Corning Incorporated Liquid lens control systems and methods
US11822100B2 (en) * 2017-04-05 2023-11-21 Corning Incorporated Liquid lens control systems and methods
US10921593B2 (en) 2017-04-06 2021-02-16 Disney Enterprises, Inc. Compact perspectively correct occlusion capable augmented reality displays
US10282912B1 (en) 2017-05-26 2019-05-07 Meta View, Inc. Systems and methods to provide an interactive space over an expanded field-of-view with focal distance tuning
US11022803B2 (en) * 2017-05-27 2021-06-01 Moon Key Lee Eye glasses-type transparent display using mirror
US10444501B2 (en) * 2017-06-20 2019-10-15 Panasonic Intellectual Property Management Co., Ltd. Image display device
US10488921B1 (en) * 2017-09-08 2019-11-26 Facebook Technologies, Llc Pellicle beamsplitter for eye tracking
US10585284B1 (en) 2017-11-17 2020-03-10 Meta View, Inc. Systems and methods to provide an interactive environment over a wide field of view
US11170563B2 (en) * 2018-01-04 2021-11-09 8259402 Canada Inc. Immersive environment with digital environment to enhance depth sensation
US11880033B2 (en) 2018-01-17 2024-01-23 Magic Leap, Inc. Display systems and methods for determining registration between a display and a user's eyes
US11883104B2 (en) 2018-01-17 2024-01-30 Magic Leap, Inc. Eye center of rotation determination, depth plane selection, and render camera positioning in display systems
US11893755B2 (en) 2018-01-19 2024-02-06 Interdigital Vc Holdings, Inc. Multi-focal planes with varying positions
US20190227311A1 (en) * 2018-01-22 2019-07-25 Symbol Technologies, Llc Systems and methods for task-based adjustable focal distance for heads-up displays
US10634913B2 (en) * 2018-01-22 2020-04-28 Symbol Technologies, Llc Systems and methods for task-based adjustable focal distance for heads-up displays
US11113794B2 (en) * 2018-01-23 2021-09-07 Facebook Technologies, Llc Systems and methods for generating defocus blur effects
US10521013B2 (en) 2018-03-01 2019-12-31 Samsung Electronics Co., Ltd. High-speed staggered binocular eye tracking systems
US11546575B2 (en) 2018-03-22 2023-01-03 Arizona Board Of Regents On Behalf Of The University Of Arizona Methods of rendering light field images for integral-imaging-based light field display
US11477434B2 (en) 2018-03-23 2022-10-18 Pcms Holdings, Inc. Multifocal plane based method to produce stereoscopic viewpoints in a DIBR system (MFP-DIBR)
US11385710B2 (en) * 2018-04-28 2022-07-12 Boe Technology Group Co., Ltd. Geometric parameter measurement method and device thereof, augmented reality device, and storage medium
US10809800B2 (en) * 2018-05-31 2020-10-20 Tobii Ab Robust convergence signal
US20190369719A1 (en) * 2018-05-31 2019-12-05 Tobii Ab Robust convergence signal
US11169358B1 (en) * 2018-06-29 2021-11-09 Facebook Technologies, Llc Varifocal projection display
US11689709B2 (en) 2018-07-05 2023-06-27 Interdigital Vc Holdings, Inc. Method and system for near-eye focal plane overlays for 3D perception of content on 2D displays
EP4270944A3 (en) * 2018-07-06 2024-01-03 InterDigital VC Holdings, Inc. Method and system for forming extended focal planes for large viewpoint changes
WO2020009922A1 (en) * 2018-07-06 2020-01-09 Pcms Holdings, Inc. Method and system for forming extended focal planes for large viewpoint changes
US11880043B2 (en) 2018-07-24 2024-01-23 Magic Leap, Inc. Display systems and methods for determining registration between display and eyes of user
US20200033613A1 (en) * 2018-07-26 2020-01-30 Varjo Technologies Oy Display apparatus and method of displaying using curved optical combiner
US10728534B2 (en) * 2018-07-31 2020-07-28 Lightspace Technologies, SIA Volumetric display system and method of displaying three-dimensional image
US20230037046A1 (en) * 2018-08-03 2023-02-02 Magic Leap, Inc. Depth plane selection for multi-depth plane display systems by user categorization
US11002971B1 (en) * 2018-08-24 2021-05-11 Apple Inc. Display device with mechanically adjustable optical combiner
US10809802B2 (en) * 2018-11-13 2020-10-20 Honda Motor Co., Ltd. Line-of-sight detection apparatus, computer readable storage medium, and line-of-sight detection method
US20220079675A1 (en) * 2018-11-16 2022-03-17 Philipp K. Lang Augmented Reality Guidance for Surgical Procedures with Adjustment of Scale, Convergence and Focal Plane or Focal Point of Virtual Data
US11169383B2 (en) 2018-12-07 2021-11-09 Avegant Corp. Steerable positioning element
US11927762B2 (en) 2018-12-07 2024-03-12 Avegant Corp. Steerable positioning element
US11126261B2 (en) 2019-01-07 2021-09-21 Avegant Corp. Display control system and rendering pipeline
US11650663B2 (en) 2019-01-07 2023-05-16 Avegant Corp. Repositionable foveal display with a fast shut-off logic
US11586049B2 (en) 2019-03-29 2023-02-21 Avegant Corp. Steerable hybrid display using a waveguide
US11353698B1 (en) 2019-05-29 2022-06-07 Facebook Technologies, Llc Dual Purkinje imaging with ellipsoidal lensing structure
US11153512B1 (en) * 2019-05-29 2021-10-19 Facebook Technologies, Llc Imaging and display with ellipsoidal lensing structure
WO2021003009A1 (en) * 2019-06-30 2021-01-07 Corning Incorporated Display optical systems for stereoscopic imaging systems with reduced eye strain
US11575865B2 (en) 2019-07-26 2023-02-07 Samsung Electronics Co., Ltd. Processing images captured by a camera behind a display
WO2021051067A1 (en) * 2019-09-15 2021-03-18 Arizona Board Of Regents On Behalf Of The University Of Arizona Digital illumination assisted gaze tracking for augmented reality near to eye displays
EP3835878A1 (en) * 2019-12-11 2021-06-16 Samsung Electronics Co., Ltd. Holographic display apparatus for providing expanded viewing window
US11796960B2 (en) 2019-12-11 2023-10-24 Samsung Electronics Co., Ltd. Holographic display apparatus for providing expanded viewing window
US11624921B2 (en) 2020-01-06 2023-04-11 Avegant Corp. Head mounted system with color specific modulation
US11509877B2 (en) * 2020-01-14 2022-11-22 Samsung Electronics Co., Ltd. Image display device including moveable display element and image display method
US11754975B2 (en) 2020-05-21 2023-09-12 Looking Glass Factory, Inc. System and method for holographic image display
US11449004B2 (en) 2020-05-21 2022-09-20 Looking Glass Factory, Inc. System and method for holographic image display
US11415935B2 (en) 2020-06-23 2022-08-16 Looking Glass Factory, Inc. System and method for holographic communication
US11849102B2 (en) 2020-12-01 2023-12-19 Looking Glass Factory, Inc. System and method for processing three dimensional images
US11721001B2 (en) * 2021-02-16 2023-08-08 Samsung Electronics Co., Ltd. Multiple point spread function based image reconstruction for a camera behind a display
US20220261966A1 (en) * 2021-02-16 2022-08-18 Samsung Electronics Company, Ltd. Multiple point spread function based image reconstruction for a camera behind a display
US11722796B2 (en) 2021-02-26 2023-08-08 Samsung Electronics Co., Ltd. Self-regularizing inverse filter for image deblurring
US11735138B2 (en) * 2021-04-22 2023-08-22 GM Global Technology Operations LLC Dual image plane HUD with automated illuminance setting for AR graphics displayed in far virtual image plane
US20220343876A1 (en) * 2021-04-22 2022-10-27 GM Global Technology Operations LLC Dual image plane hud with automated illuminance setting for ar graphics displayed in far virtual image plane

Also Published As

Publication number Publication date
US20210103148A1 (en) 2021-04-08
US11079596B2 (en) 2021-08-03
US11803059B2 (en) 2023-10-31
US20160147067A1 (en) 2016-05-26

Similar Documents

Publication Publication Date Title
US11803059B2 (en) 3-dimensional electro-optical see-through displays
US11710469B2 (en) Depth based foveated rendering for display systems
JP7213002B2 (en) Stereoscopic display with addressable focal cues
Liu et al. A novel prototype for an optical see-through head-mounted display with addressable focus cues
Hua Enabling focus cues in head-mounted displays
US11644669B2 (en) Depth based foveated rendering for display systems
US10192292B2 (en) Accommodation-invariant computational near-eye displays
US7428001B2 (en) Materials and methods for simulating focal shifts in viewers using large depth of focus displays
US10319154B1 (en) Methods, systems, and computer readable media for dynamic vision correction for in-focus viewing of real and virtual objects
Kramida Resolving the vergence-accommodation conflict in head-mounted displays
Akeley et al. A stereo display prototype with multiple focal distances
Reichelt et al. Depth cues in human visual perception and their realization in 3D displays
US20190137758A1 (en) Pseudo light-field display apparatus
US11835721B2 (en) Display device and method for producing a large field of vision
Zabels et al. Integrated head-mounted display system based on a multi-planar architecture
Watt et al. Real-world stereoscopic performance in multiple-focal-plane displays: How far apart should the image planes be?
Kimura et al. Multifocal stereoscopic projection mapping
Padmanaban Enabling Gaze-Contingent Accommodation in Presbyopia Correction and Near-Eye Displays
US11327313B2 (en) Method and system for rendering an image with a pupil enhanced accommodation of the eye
Hua et al. Depth-fused multi-focal plane displays enable accurate depth perception
Hoffman et al. Stereo display with time-multiplexed focal adjustment
Dunn Deformable Beamsplitters: Enhancing Perception with Wide Field of View, Varifocal Augmented Reality Displays
Konrad Focus and Ocular Parallax Cues for Virtual and Augmented Reality Displays
Başak Wide field-of-view dual-focal-plane augmented reality interactive display with gaze-tracker
Ghanbari Niaki Pinhole imaging based solutions for stereoscopic 3D and head worn displays

Legal Events

Date Code Title Description
AS Assignment

Owner name: THE ARIZONA BOARD OF REGENTS ON BEHALF OF THE UNIV

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HUA, HONG;LIU, SHENG;SIGNING DATES FROM 20101123 TO 20101124;REEL/FRAME:025490/0562

AS Assignment

Owner name: NATIONAL SCIENCE FOUNDATION, VIRGINIA

Free format text: CONFIRMATORY LICENSE;ASSIGNOR:UNIVERSITY OF ARIZONA;REEL/FRAME:026305/0524

Effective date: 20110207

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION