(19) United States
(12) Patent Application Publication
(10) Pub. No.: US 2015/0145966 A1
Krieger et al.
(43) Pub. Date: May 28, 2015

(54) 3D CORRECTED IMAGING

(71) Applicant: Children's National Medical Center, Washington, DC (US)

(72) Inventors: Alex Krieger, Alexandria, VA (US); Peter C. W. Kim, Washington, DC (US); Ryan Decker, Baltimore, MD (US); Azad Shademan, Washington, DC (US)

(73) Assignee: Children's National Medical Center, Washington, DC (US)

(21) Appl. No.: 14/555,126

(22) Filed: Nov. 26, 2014

Related U.S. Application Data

(60) Provisional application No. 61/909,604, filed on Nov. 27, 2013.

Publication Classification

(51) Int. Cl.
H04N 13/02 (2006.01)
G06T 7/00 (2006.01)

(52) U.S. Cl.
CPC ....... H04N 13/0246 (2013.01); H04N 13/0239 (2013.01); H04N 13/0271 (2013.01); G06T 7/0051 (2013.01)

(57) ABSTRACT

A system and method for corrected imaging including an optical camera that captures at least one optical image of an area of interest, a depth sensor that captures at least one depth map of the area of interest, and circuitry that correlates depth information of the at least one depth map to the at least one optical image to generate a depth image, corrects the at least one optical image by applying a model to address alteration in the respective at least one optical image, the model using information from the depth image, and outputs the corrected at least one optical image for display in 2D and/or as a 3D surface.
[Representative figure: the flow diagram of FIG. 15, steps S1501 to S1506.]
[FIG. 1: flowchart 100. Start; depth (3D) map 2 and optical image 3, together with prior information 1, feed the distortion model 4 and the deformation model 5; balancing 6; display image 8; End.]
[FIG. 2]
[FIG. 3: flowchart 300 with an additional iterative step. Prior information 1 and the deformation model 5 feed a preview step; a new system parameter 11 loops back before display image 8.]
[FIG. 4]
[FIG. 5]
[FIG. 6]
[FIG. 7]
[FIG. 8]
[FIG. 9]
[FIG. 10]
[FIG. 11]
[FIG. 12]
[FIG. 13]
[FIG. 14: block diagram of an exemplary hardware configuration of a special purpose computer.]
[FIG. 15: exemplary flow diagram]
S1501: Read optical image, depth map, and prior information.
S1502: Use the optical image and depth map to inform the deformation model. The result is a 3D polygonal surface matching the imaged real object, informed mainly by the depth map but also by depth cues in the optical image.
S1503: Use all information, including prior information, to inform the distortion model. Calculate expected reflected light for the entire image, expected diffusion of light, and areas of unwanted occlusion and shadow.
S1504: Use the distortion and deformation models together to correct the image in a balancing step that weighs abnormalities due to optical effects and those arising from the 3D nature of the imaged object.
S1505: Allow previewing of the corrected image in 2D or 3D, and allow the operator the chance to manually adjust some previous parameters.
S1506: Display the image in 2D or 3D, including virtual lighting conditions.
3D CORRECTED IMAGING

CROSS REFERENCE TO RELATED APPLICATIONS

0001. This disclosure claims the benefit of U.S. Provisional Application No. 61/909,604, filed on Nov. 27, 2013, the disclosure of which is incorporated herein by reference in its entirety.
BACKGROUND

0002 1. Field of the Invention
0003. The present embodiments are directed to a system and method of correcting undesirable abnormalities in acquired images through the use of 3-dimensional information and a distortion model.
0004 2. Description of the Related Art
0005 Image acquisition and post-processing are currently limited by the knowledge level of the user. Once captured, images have limited information to be used in correcting their defects. Some automated processing algorithms exist with defined goals that often misrepresent the underlying information. For example, the quick-fix touch-up steps being developed by photo-sharing websites may change the saturation, gain, sharpness, and other characteristics to make the resulting images more pleasing to the eye. The lack of additional information makes correcting complex artifacts and abnormalities in images difficult. Image artifacts resulting from specular reflection, shadows, occlusion, and other physical phenomena are not able to be suitably corrected based on the information in a single camera image.
SUMMARY

0006 With information about the geometry being imaged, one may be able to infer more about the underlying physical processes behind the more complex image distortions, and subsequently correct images to show the underlying objects most clearly. With the ability to sense depth, an optical imager can determine much more about its environment. This knowledge, combined with knowledge of the camera and light source used in a scene, opens new possibilities for post-acquisition image correction.
0007. The light source in a scene is of key importance in evaluating and correcting for optical distortions. Specular reflection, shadows, and diffuse reflection depend on the position and intensity of illumination. Some information about the position and intensity of the light source can be inferred with knowledge of the 3D surface and the location of specular reflections and shadows, but the distance between the imaged object and the light source is much harder to estimate. With prior knowledge of the light source position, one could more accurately model the physical phenomena contributing to image distortion and correct for additional situations and additional types of illuminating radiation.
0008. With the ability to sense depth, a camera could better inform a variety of post-processing algorithms adjusting the content of an image for a variety of purposes. For example, a face may be illuminated evenly post-capture, even in the event of severe shadows. A body of water and what lies beneath could be better understood even in the case of large specular reflections. An astronomy image can be better interpreted with knowledge of the terrestrial depth map. More generally, increased information about the physical world an image is captured within, including the subject to be imaged, will better inform automatic approaches to increasing image quality by correcting for distortions that have a physical basis in reality. This information could be used to correct for object occlusion, shadows, reflections, and other undesired image artifacts. Such an approach offers many advantages to the medical community and others who desire maximal information from images for the purposes of image analysis.
0009. In addition to the qualitative improvements in image quality, 3D corrected imaging offers many advantages to an image analysis pipeline which relies on quantitative methods to extract information about underlying structures and details in images of interest. One major application of this quantitative analysis is in the field of medical imaging, where diseases and abnormalities are often quantified, categorized, and analyzed in terms of their risk to the patient. A more robust approach to assessing these conditions will help level the playing field and allow non-experts to make informed decisions regarding patient diagnosis. In addition to the information obtained from depth maps, additional insight can be gained through the use of multispectral or hyperspectral imaging. This will enable the observation and segmentation of previously obscured features, including blood vessels and wavelength-specific contrast agents.
0010 Medical imaging can be a subjective field. Often, experts are not completely sure about the underlying physical explanations for perceived abnormalities or features observed in acquired images. The possibility of better image visualization has led to the adoption of 3-dimensional technologies such as magnetic resonance imaging (MRI) and X-ray computed tomography (CT) and their use in medical imaging. Optically acquired images, however, still suffer from a lack of depth information and sensitivity to distortion, as well as image artifacts and reflections.
0011 Since optical images are an easily obtained, low-risk modality which can be used in real time intraoperatively, it would be useful to improve the accuracy of these types of images. This will enable those analyzing the images to better understand the underlying physical phenomena, more readily identify abnormal growths or function, more easily communicate these observations to those without experience in medical image analysis, and have greater confidence in treatment plans based on the acquired medical images.
0012. One approach to better understand optical images is to "flatten" the image of a 3D object onto a plane for further analysis. Most previous applications of this idea have been in the domain of computer graphics. For instance, U.S. Pat. No. 8,248,417 (incorporated herein by reference) discloses a computer-implemented method for flattening 3D images, using a plurality of polygons divided into patches, for the purpose of applying 2D texture maps to 3D surfaces. Many such applications are of this "forward projection" type, where a generated 2D image is draped over the 3D surface. Further, U.S. Pat. Pub. No. 2011/0142306 (incorporated herein by reference) discloses the flattening of 3D medical images for the purpose of determining myocardial wall thickness.
0013 The use of optical cameras allows real-time imaging without concern for radiation. The proliferation of laparoscopic tools, such as is described in U.S. Pat. Pub. No. 2011/0306832 (incorporated herein by reference), allows small imagers to be used inside the body and manipulated with dexterity. Previous applications of 3D flattening were mostly concerned with the projection of flat, pre-rendered textures onto a deformed 3D surface. The disclosed embodiments regard the opposite intent: a system and method to accurately flatten the existing textures and optical information from 3D surfaces with minimal distortion and maximal information retention. Subsequently, one may use information from the 3D depth map to correct abnormalities in the optical image. These flattened, corrected images may then be overlaid on the 3D image for visualization purposes, or used to provide corrected 2D images.
0014. The present embodiments provide a system and method to correct 3D images for undesirable artifacts, including reflections. According to one embodiment, the system comprises a camera, which may be configured for spectral sensing, to obtain images of the area of interest, a distortion model to predict the degree of image distortion due to reflection or other physical phenomena, and 3-dimensional spatial information obtained from a light-field camera or other suitable 3D camera arrangement. According to one embodiment there is described a method for registering the optical image data with the 3D spatial information. According to one embodiment there is described a method to correct for undesired image distortion or artifacts which is informed by the 3D depth information and other knowledge of the camera arrangement and physical environment. According to one embodiment there is described a method for determining the possible deformations of the 3D image data, satisfying a multitude of relevant parameters designed to minimize the loss of relevant information.
0015 The method includes an automatic or manual processing step where the desired distortion correction is weighed against the possible deformations of the 3D image data, and an output preview which may include a variety of options for displaying the distortion-corrected image on a 3D model. According to one embodiment there is described a method to suggest camera and illuminating light positions, orientations, parameters, and model types based on previous images, optimized to minimize distortions and artifacts in regions of interest, and a device to display the corrected image data.
0016 A further understanding of the functional and advantageous aspects of the invention can be realized by reference to the following detailed description and drawings.
BRIEF DESCRIPTION OF THE DRAWINGS

0017 FIG. 1 illustrates procedural flow to generate a corrected 3D image.
0018 FIG. 2 illustrates an exemplary embodiment where optical and light-field cameras are mounted to a laparoscopic tool, shown in closed and open form.
0019 FIG. 3 illustrates an alternate procedural flow with an additional iterative step taken to generate a corrected 3D image.
0020 FIG. 4 illustrates another embodiment where the laparoscopic tool includes a stereoscopic camera arrangement for depth-sensing.
0021 FIG. 5 illustrates another embodiment where the cameras are used externally.
0022 FIG. 6 illustrates the extension of the baseline of a stereoscopic camera arrangement for depth-sensing.
0023 FIG. 7 illustrates an embodiment wherein the imagers and light source are controlled by a robot connected to the image processing workstation.
0024 FIG. 8 illustrates an embodiment wherein the imagers and light source are tracked to determine position and orientation, and navigated by the workstation.
0025 FIG. 9 illustrates the potential benefits from multispectral imaging.
0026 FIG. 10 illustrates a depth map to optical image registration.
0027 FIG. 11 illustrates exemplary polygon generation with normal vectors on a sample sphere.
0028 FIG. 12 illustrates the polygon normal vector generation on a realistic organ.
0029 FIG. 13 illustrates images from capable depth-sensing circuitry, in this case a 3D light-field camera.
0030 FIG. 14 illustrates a block diagram showing an example of a hardware configuration of a special purpose computer according to the present embodiments.
0031 FIG. 15 illustrates an exemplary flow diagram.
DETAILED DESCRIPTION

0032. In the drawings, like reference numerals designate identical or corresponding parts throughout the several views. Further, as used herein, the words "a," "an," and the like generally carry a meaning of "one or more," unless stated otherwise. The drawings are generally drawn to scale unless specified otherwise or illustrating schematic structures or flowcharts.
0033. Furthermore, the terms "approximately," "proximate," "minor," and similar terms generally refer to ranges that include the identified value within a margin of 20%, 10%, 5%, or greater than 0%, and any values therebetween.
0034. Without limitation, the majority of the systems described herein are directed to the acquisition and analysis of medical images. As required, embodiments of medical imaging systems are disclosed herein. However, the disclosed embodiments are merely exemplary, and it should be understood that the disclosure may be embodied in many various and alternative forms. The systems and methods described herein may be applicable to any image acquired with any camera.
0035. The figures are not to scale, and some features may be exaggerated or minimized to show details of particular elements, while related elements may have been eliminated to prevent obscuring novel aspects. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present embodiments. For purposes of teaching and not limitation, the illustrated embodiments are directed to 3D corrected imaging.
0036. The flowchart in FIG. 1 shows an exemplary procedural workflow to perform the imaging corrections. The process can be performed when an object and an imaging system are in place. An optical image 3 is acquired along with a depth map 2 over at least a portion of a field-of-view (FOV). Information can be inferred from the optical image 3, such as areas of specular reflection, color information, and transform-invariant features. Information about the depth of corresponding optical image 3 pixels can be determined based on their position in the depth map 2, which can be obtained from a 3D-capable depth sensor. Example images developed from optical image 3 and depth map 2 are shown in FIGS. 9 and 13. FIG. 9 illustrates the images possible with multispectral imaging in tissue. The use of multispectral imaging allows the imaging system to focus on specific features which are better seen at different wavelengths of light, and combine these images to form a more complete overview of the object. FIG. 13 presents exemplary depth images 2 obtained from a light-field camera that would provide information about the 3D structure of the object of interest.
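By way of illustration, the depth-to-optical correlation described above (cf. FIG. 10) can be sketched as a back-projection of each depth pixel followed by reprojection into the optical camera, assuming pinhole models for both sensors. The parameter names (K_depth, K_opt, R, t) are hypothetical and not taken from the disclosure.

```python
import numpy as np

def register_depth_to_optical(depth_map, K_depth, K_opt, R, t, opt_shape):
    """Attach a depth value to each optical-image pixel (a minimal sketch).

    Assumed parameters: K_depth/K_opt are 3x3 intrinsic matrices and
    (R, t) maps depth-camera coordinates into optical-camera coordinates.
    """
    h, w = depth_map.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_map.ravel()
    valid = z > 0                      # ignore pixels with no depth reading
    # Back-project valid depth pixels to 3D points in the depth-camera frame.
    pix = np.stack([us.ravel(), vs.ravel(), np.ones(h * w)])[:, valid]
    pts = np.linalg.inv(K_depth) @ pix * z[valid]
    # Transform into the optical-camera frame and project with its intrinsics.
    pts_opt = R @ pts + t.reshape(3, 1)
    proj = K_opt @ pts_opt
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    depth_image = np.zeros(opt_shape)  # per-pixel depth for the optical image
    inside = (u >= 0) & (u < opt_shape[1]) & (v >= 0) & (v < opt_shape[0])
    depth_image[v[inside], u[inside]] = pts_opt[2, inside]
    return depth_image
```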
0037. In addition to optical image 3 and depth map 2 information, prior information 1, such as tissue properties, light polarization, wavelength, and electromagnetic field properties, is also input to the imaging system. Prior information 1 also includes camera parameters, light source information, and depth-sensing means information. Camera parameters may include focal length, FOV measurements, exposure, and other intrinsic or extrinsic camera parameters. Prior information 1 may include information relevant to the projected reflection distortion, such as the intensity and directionality of illumination, material composition, wavelength, polarization, and electric and magnetic fields. Prior information 1 may also include information obtained from prior images or relevant image databases. These three sources of information converge to generate an image distortion model 4, where the distortion is caused by the imaging system.
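Prior information 1, as enumerated above, can be thought of as a structured record passed into the distortion model. The following is a minimal sketch with illustrative field names only, since the disclosure does not prescribe any data format.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class PriorInformation:
    """Hypothetical container for prior information 1 (field names are illustrative)."""
    focal_length_mm: float                 # intrinsic camera parameter
    fov_deg: float                         # field-of-view measurement
    exposure_s: float                      # exposure time
    light_position: np.ndarray             # light source position (3-vector)
    light_intensity: float                 # illumination intensity
    wavelength_nm: float                   # illuminating wavelength
    polarization: str = "unpolarized"      # light polarization state
    tissue_properties: dict = field(default_factory=dict)  # e.g. reflectance, roughness
```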
0038. The distortion model 4 serves the dual purpose of informing the overall process of the goals for image correction and providing a physical basis for the desired image corrections. The distortion model 4 can be used to adjust the lighting in a scene after image acquisition with realistic outcomes. Both the optical and 3D images previously acquired also send information to a separate algorithm, the deformation model 5, which is responsible for assessing the deformations to apply to the 2D images to achieve a 3D representation.
0039. The distortion model 4 calculates the projected distortion and location of image artifacts based on current physical models of image acquisition and reflectance. The distortion model 4 uses the optical and depth image along with the prior information 1. The distortion model 4 includes a 2D image with depth at each pixel, giving a 3D surface in which each pixel contains additional information relevant to the correction of imaging abnormalities, such as the amount of extra light due to reflection, adjustment in illumination due to surface orientation and occlusion, and expected radiance and diffusion due to surface roughness and other material properties, such as irregularities in color inferred from adjacent areas. These are sensitive to the position and intensity of the light, which must be known at the time of image acquisition. This is accomplished by an encoded or tracked light source and a model of the light source, which may be a point or extended model. The distortion and deformation models are dynamic in nature and updatable over time as the user gains a better understanding of the specific underlying processes affecting image acquisition and quality in the particular context. One easily understood physical model is used to calculate the reflection from a surface. Knowing the surface orientation, which may be represented as a vector normal to a patch of the surface, and the angle of incoming light, the angle of reflected light can be predicted according to the law of reflection. The fundamental law of reflection states that the angle of incident light is equal to the angle of reflected light measured with respect to the surface normal (for instance, θi and θr, described below). A more complicated case may arise when tissue with varying optical properties is used, absorbing or allowing transmission of some light and reflecting other amounts. In this case, the specular-only reflectance model is not fully accurate and must be updated to include mechanisms of diffuse reflection. For example, the distortion model may incorporate more advanced lighting models such as the Oren-Nayar model, which takes into account the roughness of a surface. In many cases the assumption that a surface appears equally bright from all viewing angles (a Lambertian surface) is false. Such a surface would require the distortion model radiance to be calculated according to the Oren-Nayar or a similar model. In one embodiment, the Oren-Nayar model takes the following form:
ReflectedLight = (ρ/π)*cos(θi)*(A + B*sin(α)*tan(β))*E

0040 where,
0041 θi is the angle of incidence of light,
0042 θr is the angle of reflection of light,
0043 ρ is a reflection coefficient of the surface,
0044 A and B are constants determined by the surface roughness,
0045 α is the maximum of the angles of incidence and reflection,
0046 β is the minimum of the angles of incidence and reflection, and
0047 E is the irradiance when the surface is illuminated head-on.

In the case of a perfectly smooth surface, A=1 and B=0, and the equation reduces to the Lambertian model as:

ReflectedLight = (ρ/π)*cos(θi)*E.
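To make the reflectance formula above concrete, the following sketch evaluates the simplified Oren-Nayar form and checks its Lambertian limit. The mapping from a roughness parameter sigma to the constants A and B follows the commonly published Oren-Nayar definitions, which the text itself does not spell out.

```python
import numpy as np

def oren_nayar_reflected_light(theta_i, theta_r, rho, sigma, E):
    """Simplified Oren-Nayar reflectance, as in the equation above.

    theta_i, theta_r: angles of incidence/reflection (radians)
    rho: reflection coefficient of the surface
    sigma: surface roughness (radians); A and B below use the standard
           published Oren-Nayar constants (an assumption, not from the text)
    E: irradiance when the surface is illuminated head-on
    """
    A = 1.0 - 0.5 * sigma**2 / (sigma**2 + 0.33)
    B = 0.45 * sigma**2 / (sigma**2 + 0.09)
    alpha = max(theta_i, theta_r)   # larger of the two angles
    beta = min(theta_i, theta_r)    # smaller of the two angles
    return (rho / np.pi) * np.cos(theta_i) * (A + B * np.sin(alpha) * np.tan(beta)) * E

# A perfectly smooth surface (sigma = 0 gives A = 1, B = 0) reduces to Lambert:
smooth = oren_nayar_reflected_light(0.3, 0.6, rho=0.8, sigma=0.0, E=1.0)
lambert = (0.8 / np.pi) * np.cos(0.3) * 1.0
assert np.isclose(smooth, lambert)
```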
0048 E is determined first for the area of interest containing no objects. In this case the light illuminates a flat white surface head-on, and this information is used in subsequent steps for normalization of illumination. Also during this time it may be appropriate to calibrate the internal parameters of the optical and/or depth camera with a calibration procedure, typically utilizing a reference pattern or object to tease out distortions due to the cameras themselves. Calibration must also quantify the reflected light across the camera view under reference conditions, achieved by shining the light source perpendicular to a highly reflective, homogeneous surface. This can then be used to normalize reflected light while capturing subsequent images at the same light source location. The calibration step is necessary once to inform the intrinsic parameters used in the prior information that informs the distortion model. The combination of this prior information and sensory information can then be used to correct for undesired effects. For example, one factor in the distortion model, which can be assessed with knowledge of the 3D surface and lighting conditions, is occlusion. If a region is occluded, the region will appear darker due to shadows and have limited depth information. The distortion model recognizes such areas, knowing the lighting conditions and 3D surface, and will be used to generate possible surface features by interpolating characteristics of the surrounding unoccluded area, such as color, texture, and 3D surface information. Another example using the distortion model is the case of specular reflections. Again, with knowledge of the lighting conditions and 3D surface information, reflections can be predicted according to material properties. These predictions from the distortion model can be compared with the observed imaging subject and used to smartly reduce the negative effects of specular reflection, namely loss of underlying information, through an interpolation of the surrounding clear areas, selective adjustment of image post-processing parameters restricted to affected regions, or a combination of both.
Combining different embodiments of the distortion model approach allows the imaging subject to be better understood even with biases due to imaging or subject irregularities. It is even possible to correct for specular reflection without knowledge of the lighting conditions by capturing multiple images at different lighting angles and performing a data-dependent rotation of the color space. One exemplary use of the distortion model is using the optical properties of the imager to predict and correct for distortion due to the intrinsic camera parameters, like focal length, principal point, skew coefficient, lens arrangement, etc. For example, fisheye or barrel distortions are examples of effects caused by intrinsic camera parameters. Such distortion correction only requires prior information and no knowledge of the depth map or optical image.
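For the intrinsic-distortion example just described, the correction indeed needs only prior calibration information. A minimal sketch using OpenCV's standard lens model is shown below; the intrinsic matrix and distortion coefficients are hypothetical placeholders, not values from the disclosure.

```python
import numpy as np
import cv2

# Hypothetical prior information: intrinsic matrix K and distortion
# coefficients (k1, k2, p1, p2, k3) from a one-time calibration step.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])  # k1 < 0 models barrel distortion

img = cv2.imread("frame.png")              # any optical image
undistorted = cv2.undistort(img, K, dist)  # removes lens distortion only
cv2.imwrite("frame_undistorted.png", undistorted)
```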
0049. A deformation model 5, which uses the depth map and optical image but no prior information, breaks the joined optical image 3 and 3D image into a multitude of polygons (see example polygon 21 in FIG. 11) with known size and orientation, which may be grouped into discrete islands or patches. This information comes mainly from the 3D image obtained by a 3D camera, although there are techniques to further inform the deformation model 5, such as shape-from-shadow, monocular depth cues such as occlusion or relative size, or movement-produced cues in images obtained from the 2D optical imager. The deformation model 5 may be tuned to accommodate various desired characteristics, such as polygon size, deformed polygon transformation metrics, minimal deformation of certain regions, and minimization of total patch curvature. The size of the polygons is inherently related to the curvature of the region which it represents. FIG. 11 illustrates the polygon 21 generation on an exemplary surface, including the normal vectors 22. FIG. 12 illustrates this normal vector 22 generation on a more relevant surface, which is used for illustration in different figures in the embodiments of the present disclosure.
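One assumed way to realize the polygon and normal-vector generation of FIGS. 11 and 12 (the disclosure does not fix an algorithm) is to triangulate the registered depth image on its pixel grid and take per-triangle cross products:

```python
import numpy as np

def depth_to_triangles_with_normals(points):
    """points: (H, W, 3) array of 3D points from a registered depth image.

    Splits each pixel quad into two triangles and returns (tris, normals),
    where each normal is the unit cross product of two triangle edges.
    """
    h, w, _ = points.shape
    tris = []
    for v in range(h - 1):
        for u in range(w - 1):
            p00, p01 = points[v, u], points[v, u + 1]
            p10, p11 = points[v + 1, u], points[v + 1, u + 1]
            tris.append((p00, p01, p10))   # upper-left triangle of the quad
            tris.append((p01, p11, p10))   # lower-right triangle of the quad
    tris = np.array(tris)
    # Normal vector = normalized cross product of two edge vectors.
    e1 = tris[:, 1] - tris[:, 0]
    e2 = tris[:, 2] - tris[:, 0]
    n = np.cross(e1, e2)
    n /= np.linalg.norm(n, axis=1, keepdims=True) + 1e-12
    return tris, n
```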
0050. The deformation model 5 places polygons (see example polygons 21 in FIG. 11) in accordance with model parameters, typically to minimize the overall curvature of the resulting mesh. Where areas of high curvature are encountered, the mesh algorithm may choose to separate patches comprised of a multitude of polygons and deform these separately. The centroids of these patches are areas with relatively low curvature. The centroids may be manually placed or adjusted according to the desired features to be observed in the image. The deformation model 5 is constantly updated to include new information gained from the depth-sensing circuitry. In the case of the laparoscopic embodiment, discussed later, the constant updating allows the computer workstation to keep a relatively local, up-to-date 3D model of the environment which is in the FOV. This can be useful not only for the image correction applications, but also for intraoperative navigation of other tools.
0051. The output from the distortion model 4, the predicted distortion due to image acquisition, is reconciled with the deformation model output according to the 3D structure of the tissue sample in a balancing step 6. This balancing step takes the 2D images with additional pixel information that are the result of the deformation and distortion modeling, and uses the combination of the two to adjust post-processing image parameters to account not only for the distortion caused by imaging abnormalities, but also for image artifacts caused by the imaging of a 3D deformed surface. Areas with high degrees of relevant predicted distortion, such as specular reflection, are assessed in terms of their ability to be corrected with prior information or additional information in the optical and depth image 2. For example, if there is an area with an image artifact, the surrounding areas of acceptable quality may be extrapolated or interpolated in order to approximate the optical image 3 at the distorted region. These extrapolations or interpolations would be better informed if the sampled regions were of similar angles to the distorted region. In another case, if there is an area with dark shadows, the angles and locations of these regions may be calculated relative to the illuminating source model, and their gains may be adjusted to more closely match an evenly illuminated object. In another case, a pinhole camera model may produce an undesirable non-uniform scale, where closer objects appear larger than those farther away. With the information from a light-field camera or other depth-sensing circuitry, not only can the perspective of an optical camera be adjusted post-acquisition, but the entire pinhole camera model can be changed to a parallel model, representing objects with a uniform scale at every depth. This may be useful for feature size comparison across the depth FOV. In many cases, the reconciliation of the distortion and deformation models may be done automatically by weighting the confidence in their respective parameters and satisfying a cost function which prefers a certain balance between information from both models. In more special cases, a human operator may be involved in assessing the performance of different amounts of image correction by adjusting the contribution of these models and observing the results as they are displayed on an image.
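The automatic reconciliation described above can be read as a confidence-weighted blend of the two models' proposed corrections. The sketch below is one assumed formulation, with illustrative names and a simple per-pixel weighting; the disclosure does not specify the cost function.

```python
import numpy as np

def balance_corrections(img, distortion_corr, deformation_corr,
                        conf_distortion, conf_deformation):
    """Blend per-pixel corrections from the two models (balancing step 6).

    img: (H, W) image in [0, 1]; *_corr: per-pixel additive corrections
    proposed by distortion model 4 and deformation model 5; conf_*: per-pixel
    confidence maps in [0, 1]. All names and shapes are assumptions.
    """
    w = conf_distortion / (conf_distortion + conf_deformation + 1e-12)
    blended = w * distortion_corr + (1.0 - w) * deformation_corr
    return np.clip(img + blended, 0.0, 1.0)  # keep the result a valid image
```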
0052. The result of the balancing processing step 6 is one or a plurality of images which minimize the image distortion and sa
