WO2008066772A2 - Processing and displaying 3d and 4d image data - Google Patents

Processing and displaying 3d and 4d image data

Info

Publication number
WO2008066772A2
WO2008066772A2 (PCT/US2007/024353)
Authority
WO
WIPO (PCT)
Prior art keywords
shape
interest
image
encompassed
dimensions
Prior art date
Application number
PCT/US2007/024353
Other languages
French (fr)
Other versions
WO2008066772A3 (en)
Inventor
Christopher Wood
Original Assignee
Clario Medical Imaging, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Clario Medical Imaging, Inc. filed Critical Clario Medical Imaging, Inc.
Publication of WO2008066772A2 publication Critical patent/WO2008066772A2/en
Publication of WO2008066772A3 publication Critical patent/WO2008066772A3/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/003 Reconstruction from projections, e.g. tomography
    • G06T 11/008 Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/08 Volume rendering

Abstract

Systems, devices, and methods that provide approaches able to easily depict selected object(s) of interest from a digital image in at least three dimensions. Such images and objects are typically medical images and objects as discussed herein, but the methods and systems can apply to any desired image(s) and object(s).

Description

METHODS AND SYSTEMS RELATING TO GEOMETRY ISOLATED MIPSUM
METHOD OF PROCESSING AND DISPLAYING 3D AND 4D IMAGE DATA
INCLUDING MEDICAL IMAGE DATA
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present application claims priority from United States provisional patent application Serial No. 60/860,913, filed 22 November 2006, which is incorporated herein by reference in its entirety and for all its teachings and disclosures.
BACKGROUND
[0002] Ray casting techniques such as volume rendering (VR) and maximum intensity projection (MIP) are commonly used in the interpretation of 3D and 4D medical image data sets. See, e.g., http://web.cs.wpi.edu/~matt/courses/cs563/talks/powwie/pl/ray-cast.htm, John Pawasauskas, CS563 - Advanced Topics in Computer Graphics, 18 February 1997; http://en.wikipedia.org/wiki/Maximum_intensity_projection.
[0003] As discussed in Pawasauskas, the term volume rendering can be used to describe techniques which allow the visualization of three-dimensional data. Volume rendering is a technique for visualizing sampled functions of three spatial dimensions by computing 2-D projections of a colored semitransparent volume.

[0004] Currently, the major application area of volume rendering is medical imaging, where volume data is available from X-ray Computed Tomography (CT) scanners and Positron Emission Tomography (PET) scanners. CT scanners produce three-dimensional stacks of parallel plane images, each of which consists of an array of X-ray absorption coefficients. Typically, X-ray CT images have a resolution of 512 * 512 * 12 bits, and there can be up to 50 slices in a stack. The slices are 1-5 mm thick, and are spaced 1-5 mm apart.
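For orientation only, here is a minimal Python/NumPy sketch (not part of the patent) of how such a stack of parallel plane images might be assembled into a single 3D volume. The synthetic slice data, slice count, and spacing values are illustrative assumptions matching the ranges quoted above.

```python
import numpy as np

# Hypothetical illustration: stack parallel CT slices into one 3D volume.
# The random slice contents are stand-ins for X-ray absorption coefficients.
slices = [np.random.randint(0, 4096, (512, 512), dtype=np.uint16)  # 12-bit values
          for _ in range(50)]                                      # up to 50 slices
volume = np.stack(slices, axis=0)   # shape (50, 512, 512)

slice_thickness_mm = 2.0            # assumed; text gives 1-5 mm
slice_spacing_mm = 2.0              # assumed; text gives 1-5 mm
print(volume.shape, volume.dtype)
```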
[0005] In the two-dimensional domain, these slices can be viewed one at a time. The advantage of CT images over conventional X-ray images is that they only contain information from that one plane. A conventional X-ray image, on the other hand, contains information from all the planes, and the result is an accumulation of shadows that are a function of the density of the tissue, bone, organs, etc., anything that absorbs the X-rays.

[0006] The availability of the stacks of parallel data produced by CT scanners prompted the development of techniques for viewing the data as a three-dimensional field rather than as individual planes. This gave the immediate advantage that the information could be viewed from any view point. There are a number of different methods that can be used, such as:
• Rendering voxels in binary partitioned space
• Marching cubes
• Ray casting
[0007] According to Pawasauskas, there are some problems with the first two of these methods. In the first method, choices are made for the entire voxel. This can produce a "blocky" image. It also has a lack of dynamic range in the computed surface normals, which produces images with relatively poor shading. The marching cubes approach solves this problem, but causes some others. Its biggest disadvantage is that it requires that a binary decision be made on the position of the intermediate surface that is extracted and rendered. Also, extracting an intermediate structure can cause false positives (artifacts that do not exist) and false negatives (discarding small or poorly defined features).

[0008] Still according to Pawasauskas, the basic goal of ray casting is to allow the best use of the three-dimensional data and not attempt to impose any geometric structure on it. It solves one of the most important limitations of surface extraction techniques, namely the way in which they display a projection of a thin shell in the acquisition space. Surface extraction techniques fail to take into account that, particularly in medical imaging, data may originate from fluid and other materials which may be partially transparent and should be modeled as such. Ray casting does not suffer from this limitation.

[0009] Currently, most volume rendering that uses ray casting is based on the Blinn/Kajiya model. In this model we have a volume which has a density D(x,y,z), penetrated by a ray R. At each point along the ray there is an illumination I(x,y,z) reaching the point (x,y,z) from the light source(s). The intensity scattered along the ray to the eye depends on this value, a reflection function or phase function P, and the local density D(x,y,z). The dependence on density expresses the fact that a few bright particles will scatter less light in the eye direction than a number of dimmer particles. The density function is parameterized along the ray as:
D(x(t), y(t), z(t)) = D(t)

and the illumination from the source as:

I(x(t), y(t), z(t)) = I(t)

and the illumination scattered along R from a point at distance t along the ray is:

I(t) D(t) P(cos θ)

where θ is the angle between R and L, the light vector, from the point of interest.
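The integral this model implies can be approximated by sampling along the ray. The following sketch discretizes the scattered-intensity term I(t)D(t)P(cos θ) just defined; the toy density, illumination, and phase functions are assumptions for illustration, and attenuation between each sample and the eye (which a full Blinn/Kajiya evaluation would include) is omitted.

```python
import numpy as np

def scattered_intensity(density, illumination, phase, cos_theta, t, dt):
    """Approximate the scattered-light integral along one ray as
    sum over samples of I(t) * D(t) * P(cos(theta)) * dt.
    density, illumination: callables D(t) and I(t) along the ray.
    phase: the phase function P; cos_theta: angle between R and L."""
    samples = [illumination(tk) * density(tk) * phase(cos_theta) for tk in t]
    return float(np.sum(samples) * dt)

# Hypothetical usage with toy functions:
t = np.linspace(0.0, 1.0, 64)
val = scattered_intensity(
    density=lambda tk: np.exp(-tk),       # toy density falloff
    illumination=lambda tk: 1.0,          # uniform illumination
    phase=lambda c: (1.0 + c) / 2.0,      # toy phase function
    cos_theta=0.5, t=t, dt=t[1] - t[0])
```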
[00010] Algorithms which implement the general ray casting technique described above involve a simplification of the integral which computes the intensity of the light arriving at the eye. A method by which this is done is called "additive reprojection." It essentially projects voxels along a certain viewing direction. Intensities of voxels along parallel viewing rays are projected to provide an intensity in the viewing plane. Voxels of a specified depth can be assigned a maximum opacity, so that the depth to which the volume is visualized can be controlled. Several different algorithms for ray casting exist. In one implementation used in several commercial applications, for every pixel in the output image, a ray is shot into the data volume. At a predetermined number of evenly spaced locations along the ray, the color and opacity values are obtained by interpolation. The interpolated colors and opacities are then merged with each other and with the background by compositing in either front-to-back or back-to-front order to yield the color of the pixel. (The order of compositing makes no difference in the final output image.)

[00011] Other references on ray casting include: Alan Watt, Mark Watt, Advanced Animation and Rendering Techniques: Theory and Practice, Addison-Wesley, Reading, Massachusetts, 1992; Marc Levoy, A Hybrid Ray Tracer for Rendering Polygon and Volume Data, IEEE Computer Graphics and Applications, pages 33-40, March 1990; Marc Levoy, Efficient Ray Tracing of Volume Data, ACM Transactions on Graphics, 9(3):245-261, July 1990; http://www.cs.sunysb.edu/~csilva/papers/rpe/node10.html; http://www-graphics.stanford.edu/projects/volume/.

[00012] An exemplary discussion of maximum intensity projection (MIP), http://en.wikipedia.org/wiki/Maximum_intensity_projection, indicates MIP is a computer visualization method for 3D data that projects in the visualization plane the voxels with maximum intensity that fall in the way of parallel rays traced from the viewpoint to the plane of projection. This implies that two MIP renderings from opposite viewpoints are symmetrical images.
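As a rough illustration of the compositing step in paragraph [00010] and the MIP definition in paragraph [00012], the sketch below shows front-to-back "over" compositing for one ray, and a MIP over a whole volume. Scalar per-sample colors and parallel, axis-aligned rays are simplifying assumptions, not the implementation of any particular commercial application.

```python
import numpy as np

def composite_front_to_back(colors, alphas):
    """Front-to-back 'over' compositing of interpolated samples along one ray.
    colors, alphas: per-sample sequences ordered from the eye into the volume.
    Done correctly, this yields the same pixel as back-to-front compositing."""
    out_color, out_alpha = 0.0, 0.0
    for c, a in zip(colors, alphas):
        out_color += (1.0 - out_alpha) * a * c   # contribution of this sample
        out_alpha += (1.0 - out_alpha) * a       # accumulate opacity
        if out_alpha >= 0.999:                   # early ray termination
            break
    return out_color

def mip_along_axis(volume, axis=0):
    """Maximum intensity projection: keep the brightest voxel on each ray.
    Assumes parallel rays aligned with one array axis for simplicity."""
    return volume.max(axis=axis)
```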
[00013] This technique is computationally fast, but the 2D results do not provide a good sense of depth of the original data. To improve the sense of 3D, animations are usually rendered of several MIP frames in which the viewpoint is slightly changed from one to the next, thus creating the illusion of rotation. This helps the viewer's perception to find the relative 3D positions of the object components. However, since the projection is orthographic, the viewer cannot distinguish left from right, front from back, or whether the object is rotating clockwise or anticlockwise.

[00014] MIP has been used for the detection of lung nodules in lung cancer screening programs which utilize computerized tomography scans. MIP can enhance the 3D nature of these nodules, making them stand out from pulmonary bronchi and vasculature.

[00015] MIP imaging was reportedly invented for use in Nuclear Medicine by Jerold Wallis, MD, in 1988, and subsequently published in IEEE Transactions on Medical Imaging (Three-dimensional display in nuclear medicine. IEEE Trans Med Imag 1989; 8:297-303). In the setting of Nuclear Medicine, it was originally called MAP (Maximum Activity Projection). Additional information can be found at J Nucl Med 1990; 31:1421-1430 and J Nucl Med 1991; 32:534-546.

[00016] Use of depth weighting during the production of rotating cines of MIP images can avoid the difficulty of distinguishing right from left, and clockwise from anticlockwise rotation; a sketch of one such weighting appears at the end of this Background. MIP imaging has reportedly been used routinely by physicians in interpreting PET (Positron Emission Tomography) imaging studies.

[00017] There has gone unmet a need for improved methods and systems relating to the interpretation of 3D and/or 4D medical image data sets. The present systems and methods provide these and/or other advantages.
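Following on paragraph [00016], one plausible form of depth weighting is a linear ramp that dims deeper voxels before taking the maximum, so that nearer structures render brighter and rotating cines convey front/back and rotation direction. The ramp and its endpoints are illustrative assumptions, not the specific weighting used in the cited work.

```python
import numpy as np

def depth_weighted_mip(volume, axis=0, near_weight=1.0, far_weight=0.6):
    """MIP with a linear depth weighting along the ray axis of a 3D volume.
    Nearer maxima come out brighter, giving a depth cue in rotating cines.
    The linear ramp is an assumed, illustrative choice."""
    n = volume.shape[axis]
    ramp = np.linspace(near_weight, far_weight, n)
    shape = [1, 1, 1]
    shape[axis] = n                      # broadcast the ramp along the ray axis
    weighted = volume * ramp.reshape(shape)
    return weighted.max(axis=axis)
```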
SUMMARY
[00018] The techniques discussed herein involve methods that create resulting images superior in at least one respect to those of previous methods. The methods include geometry based carving and MIPSUM.
[00019] In one aspect, the present discussion relates to methods of viewing an object in an image in at least three dimensions, the methods comprising: a) providing a digital image comprising an object; b) identifying an object of interest in the image; c) automatically substantially encompassing the object of interest with at least one of a geometric solid and a pre-determined shape having a configuration similar to the object of interest to provide an encompassed shape; d) automatically processing the encompassed shape using a MIP process to provide a processed encompassed shape having at least three dimensions; and, e) displaying the processed encompassed shape having at least three dimensions to a human viewer in a manner that depicts at least three dimensions of the object of interest.

[00020] The digital image comprising the object can be a two-dimensional image, a three-dimensional image or otherwise as desired. The pre-determined shape having the configuration similar to the object of interest can comprise a shape corresponding to an object of biological interest such as a liver, heart, brain, hippocampus, amygdala, spleen, bone, uterus, ovary, testes, colon, cochlea and kidney. The geometric solid can comprise at least one of a sphere, an elongated sphere, a cylinder, a cone and a frusto-cone.

[00021] Step b) can comprise automatically or manually identifying an object of interest in the image, and the displaying in e) can be rendered on a 2D screen.
[00022] Step d) can further comprise automatically processing the encompassed shape using a MIPSUM process to provide a processed encompassed shape having at least three dimensions, wherein unwanted data can be excluded from the image within the encompassed shape and a summation along a ray centered at a maximum value can be computed.
[00023] In one example, the thickness of the object can be computed as a height and the height can be expressed as a thickness on a viewing plane.
[00024] In other aspects, the discussion herein provides computer-readable memory media containing instructions that control a computer processor to implement the methods herein, as well as computers containing and capable of running such a computer-readable memory medium, as well as imaging and display systems comprising such computers and/or computer-readable memory media.
[00025] These and other aspects, features and embodiments are set forth within this application, including the following Detailed Description and attached drawings. Unless expressly stated otherwise or clear from the context, all embodiments, aspects, features, etc., can be mixed and matched, combined and permuted in any desired manner. In addition, various references are set forth herein, including in the Cross-Reference To Related Applications, that discuss certain systems, apparatus, methods and other information; all such references are incorporated herein by reference in their entirety and for all their teachings and disclosures, regardless of where the references may appear in this application.

BRIEF DESCRIPTION OF THE DRAWINGS
[00026] Figure 1 depicts a screenshot of an implementation of methods discussed herein comprising an image of an artery with a portion of the artery encompassed by a box for geometry based carving and MIPSUM processing.

[00027] Figure 2 depicts a screenshot of an implementation of methods discussed herein comprising the portion of the artery discussed in Figure 1 encompassed by the box for geometry based carving and MIPSUM processing.
[00028] Figure 3 depicts a screenshot of an implementation of methods discussed herein comprising the portion of the artery discussed in Figure 1 encompassed by the box for geometry based carving and MIPSUM processing.
[00029] Figure 4 depicts a screenshot of an implementation of methods discussed herein comprising the portion of the artery discussed in Figure 1 rendered for 3D or 4D viewing.
DETAILED DESCRIPTION

[00030] The present systems, devices, methods, etc., provide approaches able to easily depict selected object(s) of interest from a digital image in at least three dimensions. Such images and objects are typically medical images and objects as discussed herein, but the methods and systems can apply to any desired image(s) and object(s).

[00031] Turning first to geometry based carving, known software such as p3D visualization software (such as MIP software), see, e.g., Welling GFF Format Summary P3D, http://netghost.narod.ru/gff/graphics/summary/p3d.htm, commonly provides methods to isolate structures (such as blood vessels). In the past, users were typically allowed to manually draw a shape on a 3D image and exclude everything outside that shape, which is projected into three dimensions. The user can then rotate the 3D object and draw another shape. Repeated rotation and drawing results in a remaining "carved" 3D region of interest. This is user intensive and time consuming.
[00032] In one aspect, the methods, systems, devices, etc., discussed herein include geometry based carving, which involves inserting a predetermined shape into the 3D scene or image. Figure 1 shows a 3D MIP image 2 with a rectangular box 4 inserted into the image 2. The box 4 encloses a blood vessel 6. Other structures 8 are also depicted in the original image but are not enclosed within the box 4. While a box is used for illustration, this can be any shape, including shapes that are pre-designed for specific parts of the anatomy. Exemplary shapes include liver, heart, brain, brain substructures such as hippocampus and amygdala, spleen, bone, uterus, ovary, testes, colon, cochlea and kidney, as well as sub-structures of such organs and structures. When rotated, box 4 provides a three-dimensional geometrical solid such as a cylinder or frusto-cone; other three-dimensional shapes can be created through the use of other geometrical shapes such as circles/spheres, ovals/elongated spheres, triangles/cones and polygons/corresponding geometrical solids. In addition, as with the shapes relating to biological shapes, the shapes do not need to be rotated unless desired, so 3-D shapes such as boxes, polyhedrons, pyramids, frusto-pyramids, etc., can also be used.

[00033] Figure 2 shows the image which results from Figure 1 when a MIP is computed only within the box 4. Substantially no voxel outside the box, and typically none at all outside the box, is included in the MIP calculation. In some embodiments, voxels outside the box may still be shown, for example in a dark gray "mask," to differentiate them from the selected portion. This can provide added context to the overall relative location of the object inside the box.
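A minimal sketch of this geometry based carving step, assuming an axis-aligned rectangular box and parallel rays; the optional dim "mask" for voxels outside the box follows the dark gray mask mentioned in paragraph [00033]. The function name and box encoding are hypothetical, and the patent also contemplates anatomical shapes rather than boxes.

```python
import numpy as np

def carve_and_mip(volume, box, axis=0, mask_level=None):
    """Compute a MIP using only voxels inside an axis-aligned box
    (geometry based carving with a rectangular solid).
    box: ((z0, z1), (y0, y1), (x0, x1)) voxel index ranges.
    If mask_level is given, voxels outside the box are dimmed to at
    most mask_level, providing context instead of disappearing."""
    (z0, z1), (y0, y1), (x0, x1) = box
    inside = np.zeros(volume.shape, dtype=bool)
    inside[z0:z1, y0:y1, x0:x1] = True
    if mask_level is None:
        carved = np.where(inside, volume, volume.min())   # exclude outside voxels
    else:
        carved = np.where(inside, volume, np.minimum(volume, mask_level))
    return carved.max(axis=axis)
```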
[00034] Figure 3 shows the resulting image when the box is made transparent. This results in a borderless viewing window 10.
[00035] Turning to another aspect of the systems, devices, etc., herein, in a MIPSUM, after unwanted data is, optionally, excluded, a MIP image can be viewed from any angle. Morphological features, however, can still be lost. Because the location of the MIP voxel within the 3D volume is known, a summation along the ray centered at the maximum value can be computed. If the thickness of the vessel is computed as a height, the height can then be expressed as a thickness on a viewing plane, for example as shown in US Patent No. 7,283,654. Figure 4 shows an exemplary resulting 3D image based on the blood vessel depicted in Figures 1-3, as rendered on a 2D screen. In Figure 4, blood vessel 6 has a height 12 corresponding to the thickness of the vessel. In further embodiments, the display of the resulting images, with or without the MIPSUM feature, can comprise displaying a 4D image. Typically, the fourth dimension is time. Thus, for example, a heart can be rendered to view the beating of the heart or possibly the action of the valves within the heart as the heart pumps over several repetitions.
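A hedged sketch of one way the MIPSUM computation described above might look: for each ray, locate the maximum voxel, then sum values in a window centered on that maximum, returning both the sum and a crude per-pixel thickness that could be expressed as a height on the viewing plane. The fixed window size and the absence of a tissue threshold are assumptions; the patent does not specify these details.

```python
import numpy as np

def mipsum(volume, axis=0, half_window=3):
    """Per-ray summation centered at the maximum value (MIPSUM sketch).
    Returns (sums, thickness): the window sum per output pixel, and the
    window extent as a crude per-pixel 'height'. The fixed window is an
    illustrative assumption, not the patent's exact procedure."""
    vol = np.moveaxis(volume, axis, 0)            # rays run along axis 0
    n = vol.shape[0]
    idx = vol.argmax(axis=0)                      # depth of the MIP voxel per ray
    lo = np.clip(idx - half_window, 0, n)
    hi = np.clip(idx + half_window + 1, 0, n)
    depth = np.arange(n).reshape(-1, 1, 1)        # broadcast depth index
    window = (depth >= lo) & (depth < hi)         # window centered on the max
    sums = np.where(window, vol, 0).sum(axis=0)   # MIPSUM value per pixel
    thickness = hi - lo                           # crude height per pixel
    return sums, thickness
```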
[00036] The methods, etc., herein can, with adequate computing power and imagery, be effected in real time or based on stored images.

[00037] All terms used herein, including those specifically discussed below in this section, are used in accordance with their ordinary meanings unless the context or definition clearly indicates otherwise. Also, unless expressly indicated otherwise, the use of "or" includes "and" and vice-versa. Non-limiting terms are not to be construed as limiting unless expressly stated, or the context clearly indicates, otherwise (for example, "including," "having," and "comprising" typically indicate "including without limitation"). Singular forms, including in the claims, such as "a," "an," and "the" include the plural reference unless expressly stated, or the context clearly indicates, otherwise.

[00038] The scope of the present devices, systems and methods, etc., includes both means plus function and step plus function concepts. However, the claims are not to be interpreted as indicating a "means plus function" relationship unless the word "means" is specifically recited in a claim, and are to be interpreted as indicating a "means plus function" relationship where the word "means" is specifically recited in a claim. Similarly, the claims are not to be interpreted as indicating a "step plus function" relationship unless the word "step" is specifically recited in a claim, and are to be interpreted as indicating a "step plus function" relationship where the word "step" is specifically recited in a claim.

[00039] From the foregoing, it will be appreciated that, although specific embodiments have been discussed herein for purposes of illustration, various modifications may be made without deviating from the spirit and scope of the discussion herein. Accordingly, the systems and methods, etc., include such modifications as well as all permutations and combinations of the subject matter set forth herein and are not limited except as by the appended claims or other claims having adequate support in the discussion herein.

Claims

What is claimed is:
1. A method of viewing an object in an image in at least three dimensions, the method comprising: a) providing a digital image comprising an object; b) identifying an object of interest in the image; c) automatically substantially encompassing the object of interest with at least one of a geometric solid and a pre-determined shape having a configuration similar to the object of interest to provide an encompassed shape; d) automatically processing the encompassed shape using a MIP process to provide a processed encompassed shape having at least three dimensions; and, e) displaying the processed encompassed shape having at least three dimensions to a human viewer in a manner that depicts at least three dimensions of the object of interest.
2. The method of claim 1 wherein the digital image comprising the object is a two-dimensional image.
3. The method of claim 1 wherein the digital image comprising the object is a three-dimensional image.
4. The method of any one of claims 1 to 3 wherein the pre-determined shape having the configuration similar to the object of interest comprises a shape corresponding to an object of biological interest.
5. The method of claim 4 wherein the object of biological interest is at least one of a liver, heart, brain, hippocampus, amygdala, spleen, bone, uterus, ovary, testes, colon, cochlea and kidney.
6. The method of any one of claims 1 to 3 wherein the geometric solid comprises at least one of a sphere, an elongated sphere, a cylinder, a cone and a frusto-cone.
7. The method of any one of claims 1 to 6 wherein b) comprises automatically identifying an object of interest in the image.
8. The method of any one of claims 1 to 6 wherein b) comprises manually identifying an object of interest in the image.
9. The method of any one of claims 1 to 8 wherein the displaying in e) is rendered on a 2D screen.
10. The method of any one of claims 1 to 9 wherein d) further comprises automatically processing the encompassed shape using a MIPSUM process to provide a processed encompassed shape having at least three dimensions, wherein unwanted data is excluded from the image within the encompassed shape and a summation along a ray centered at a maximum value is computed.
11. The method of claim 10 wherein the thickness of the object is computed as a height and the height is expressed as a thickness on a viewing plane.
12. The method of claim 11 wherein the object is a biological object.
13. A computer-readable memory medium containing instructions that control a computer processor to implement a method according to any one of claims 1 to 12.
14. A computer containing and capable of running a computer-readable memory medium of claim 13.
15. An imaging and display system comprising a computer of claim 14.
PCT/US2007/024353 2006-11-22 2007-11-23 Processing and displaying 3d and 4d image data WO2008066772A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US86091306P 2006-11-22 2006-11-22
US60/860,913 2006-11-22

Publications (2)

Publication Number Publication Date
WO2008066772A2 (en) 2008-06-05
WO2008066772A3 WO2008066772A3 (en) 2009-05-14

Family

ID=39468479

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2007/024353 WO2008066772A2 (en) 2006-11-22 2007-11-23 Processing and displaying 3d and 4d image data

Country Status (1)

Country Link
WO (1) WO2008066772A2 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6111582A (en) * 1996-12-20 2000-08-29 Jenkins; Barry L. System and method of image generation and encoding using primitive reprojection
US20060181551A1 (en) * 2005-01-07 2006-08-17 Ziosoft Inc. Method, computer program product, and apparatus for designating region of interest
US20060251307A1 (en) * 2005-04-13 2006-11-09 Charles Florin Method and apparatus for generating a 2D image having pixels corresponding to voxels of a 3D image

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113133753A (en) * 2021-05-21 2021-07-20 重庆理工大学 Biological tissue blood flow real-time monitoring system and simulation monitoring system based on magnetic induction phase shift
CN113133753B (en) * 2021-05-21 2023-07-18 重庆理工大学 Biological tissue blood flow real-time monitoring system and simulation monitoring system based on magnetic induction phase shift

Also Published As

Publication number Publication date
WO2008066772A3 (en) 2009-05-14

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07862208

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07862208

Country of ref document: EP

Kind code of ref document: A2