Trials@uspto.gov
Tel: 571-272-7822

Paper 39
Entered: June 25, 2015


UNITED STATES PATENT AND TRADEMARK OFFICE
_______________

BEFORE THE PATENT TRIAL AND APPEAL BOARD
_______________

TRW AUTOMOTIVE US LLC,
Petitioner,

v.

MAGNA ELECTRONICS INC.,
Patent Owner.
_______________

Case IPR2014-00266
Patent 7,994,462 B2
_______________


Before JUSTIN T. ARBES, BENJAMIN D. M. WOOD, and
NEIL T. POWELL, Administrative Patent Judges.

WOOD, Administrative Patent Judge.


FINAL WRITTEN DECISION
35 U.S.C. § 318(a) and 37 C.F.R. § 42.73
I. INTRODUCTION

A. Background
TRW Automotive US LLC (“TRW”) filed a Petition (Paper 2, “Pet.”) to institute an inter partes review of claims 30–32, 34–41, 43–46, 48, and 49 of U.S. Patent No. 7,994,462 B2 (Ex. 1002, “the ’462 patent”). Magna Electronics Inc. (“Magna”) filed a Preliminary Response. Paper 9. In our Decision on Institution (Paper 17, “Dec.”), we instituted an inter partes review of claims 30, 34, and 38 based on the proposed ground that these claims were unpatentable under 35 U.S.C. § 102(b) as anticipated by Kenue.1 Dec. 16. We subsequently granted-in-part TRW’s Request for Rehearing and instituted review of claims 31 and 37 on the same proposed ground of unpatentability. Paper 21, 4–5.

After the Board instituted trial, Magna filed a Patent Owner Response (Paper 26, “PO Resp.”), to which TRW replied (Paper 29, “Pet. Reply”). An Oral Hearing was held on February 19, 2015, and the Hearing Transcript (Paper 38, “Tr.”) has been entered in the record.

We have jurisdiction under 35 U.S.C. § 6(c). This Final Decision is entered pursuant to 35 U.S.C. § 318(a). We determine that TRW has shown by a preponderance of the evidence that the challenged claims are unpatentable.
B. Related Proceedings
TRW discloses that the ’462 patent has been asserted in Magna Electronics, Inc. v. TRW Automotive Holdings Corp., Case No. 1:12-cv-00654-PLM (W.D. Mich. 2012). Pet. 7; Paper 2, 2.

1 U.S. Patent No. 4,970,653 to Kenue, Ex. 1004.
C. The ’462 Patent (Ex. 1002)
The ’462 patent, titled “Vehicular Image Sensing System,” describes a system for controlling a vehicle—e.g., dimming the vehicle’s headlights—in response to detecting “objects of interest” in front of the vehicle—e.g., the headlights of oncoming vehicles and the taillights of leading vehicles. Ex. 1002, Abst., 1:22–27. The system uses an image sensor that divides the scene in front of the vehicle into “a plurality of spatially separated sensing regions.” Id. at 2:16–19. A control circuit with a processor receives image data from the image sensor and determines if individual regions include light sources having particular characteristics, such as a “spectral characteristic” (color), or intensity. Id. at 1:65–2:9, 3:43–51. By comparing the lights’ characteristics with the distribution of the lights across the regions, such as the lights’ proximity to each other and to the vehicle’s central axis, the system can distinguish oncoming headlights and leading taillights from streetlights and other lights that are not of interest. Id. at 9:32–61, 10:53–56. The system also may detect traffic signs and lane markers, and assist the driver in other ways, such as alerting the driver to lane changes. Id. at 11:60–12:13.
D. Illustrative Claims
Claims 30 and 34 are independent, and each is drawn to an image-sensing system for a vehicle. Ex. 1002, 15:18–47, 15:60–16:22. These claims share at least three common limitations: (1) an image sensor comprising a two-dimensional array of light-sensing photosensor elements; (2) the image sensor being inside the vehicle on which it is mounted, having a forward field of view through the vehicle’s windshield; and (3) a control comprising a processor that processes the image data to identify objects of interest. Id.
Claim 30 is illustrative and is reproduced below:

30. An image sensing system for a vehicle, said image sensing system comprising:
an imaging sensor comprising a two-dimensional array of light sensing photosensor elements;
said imaging sensor having a forward field of view to the exterior of a vehicle equipped with said image sensing system and through the windshield of the equipped vehicle;
wherein said imaging sensor is operable to capture image data;
a control comprising an image processor;
wherein said image sensing system identifies objects in said forward field of view of said image sensor via processing of said captured image data by said image processor;
wherein said image processing comprises pattern recognition and wherein said pattern recognition comprises detection of at least one of (a) a headlight, (b) a taillight and (c) an object, and wherein said pattern recognition is based at least in part on at least one of (i) shape, (ii) reflectivity, (iii) luminance and (iv) spectral characteristic; and
wherein said control at least one of (a) controls a headlamp of the equipped vehicle as a function of a speed of the equipped vehicle, (b) controls a headlamp of the equipped vehicle in response to said image processing, (c) controls a speed of the equipped vehicle in response to said image processing, and (d) generates an alert to the driver of the equipped vehicle in response to said image processing.
II. ANALYSIS

A. Claim Construction
“A claim in an unexpired patent shall be given its broadest reasonable construction in light of the specification of the patent in which it appears.” 37 C.F.R. § 42.100(b); see In re Cuozzo Speed Tech., LLC, 778 F.3d 1271, 1281 (Fed. Cir. 2015) (“We conclude that Congress implicitly adopted the broadest reasonable interpretation standard in enacting the AIA.”). Under that standard, the claim language should be read in light of the specification as it would be interpreted by one of ordinary skill in the art. In re Suitco Surface, Inc., 603 F.3d 1255, 1260 (Fed. Cir. 2010). Thus, we generally give claim terms their ordinary and customary meaning. See In re Translogic Tech., Inc., 504 F.3d 1249, 1257 (Fed. Cir. 2007) (“The ordinary and customary meaning is the meaning that the term would have to a person of ordinary skill in the art in question.”) (internal quotation marks omitted).

We expressly interpret below only those claim terms that require analysis to resolve arguments related to the patentability of the challenged claims.
1. “pattern recognition”
In the Decision on Institution we construed “pattern recognition” to mean “detection of an object of interest based upon shape, reflectivity, luminance, or spectral characteristic.” Dec. 7. We based this construction on an express definition of the term in the Specification:

Pattern recognition may be used to further assist in the detection of headlights, taillights, and other objects of interest. Pattern recognition identifies objects of interest based upon their shape, reflectivity, luminance, and spectral characteristics. For example, the fact that headlights and taillights usually occur in pairs could be used to assist in qualifying or disqualifying objects as headlights and taillights. By looking for a triad pattern, including the center high-mounted stoplight required on the rear of vehicles, stoplight recognition can be enhanced.

Ex. 1002, 11:1–10 (emphasis added). We further noted that although the Specification describes pattern recognition as identifying objects of interest based on their shape, reflectivity, luminance, and spectral characteristics, i.e., based on all four of these characteristics, the claims require pattern recognition to be based on only one of the characteristics. For example, claim 30 requires the image processing to comprise “pattern recognition . . . wherein said pattern recognition is based at least in part on at least one of (i) shape, (ii) reflectivity, (iii) luminance and (iv) spectral characteristic.” Id. at 15:33–39 (emphasis added). Claim 34 includes similar language. In addition, the examples of pattern recognition provided in the Specification—e.g., recognizing headlights and taillights because they occur in pairs—seem to be limited to the shape of the object of interest rather than to any of its other characteristics. For these reasons we construed “pattern recognition” to mean detection of objects of interest based upon their shape, reflectivity, luminance, or spectral characteristic. Dec. 7.
Neither TRW nor Magna expressly disputes this construction. Furthermore, we find nothing in the record adduced subsequent to institution that contradicts it. Therefore, we adopt this construction as our final construction of this term.
2. “detection” and “identification”

a. The Parties’ Positions
Claims 30 and 34 recite an image sensing system that “identifies objects” via image “processing,” the image processing comprising “pattern recognition,” which itself comprises “detection” of, e.g., an object. Magna argues that “identification” and “detection” have different meanings, and that the claims require both “functions” to be performed. PO Resp. 7–9. According to Magna, “detection can be said to mean the discovery of the presence or existence of something,” whereas “‘[i]dentification’ requires establishing or indicating or knowing what an object is.” Magna explains:

That identification is not a synonym for detection is evidenced by the plain and ordinary meaning of the terms. “Detect” is dictionary-defined to mean “to discover or notice the presence of” (Merriam-Webster definition of “detect”, Exhibit 2035), whereas “Identify” is dictionary-defined to mean “to know and say who someone is or what something is” (Merriam-Webster definition of “identify”, Exhibit 2036.) Plainly, identification is different from and more than detection. The presence of an object may be detected but the object may not necessarily be identified.

Id. at 7 n.3 (emphasis added). Magna also provides a number of examples of how these terms are used in the Specification to support its position that the two terms have different meanings. Id. at 6–9.
TRW disagrees with Magna’s construction. In particular, TRW disagrees that identification of an object requires something more than detecting the object. TRW asserts that the claims do not merely recite detection of an unknown object, but rather recite detection of specific objects of interest, e.g., a headlight and taillight, in the first instance. Pet. Reply 3–4 (citing Ex. 1002, claims 30, 34). TRW reasons that

[U]sing Magna’s definitions [for detection and identification], to ‘detect a headlight’ (i.e., to determine that a headlight is present) the image sensing system must also ‘identify a headlight’ (i.e., recognize that the thing detected is a headlight). (Id.) Identifying an object as a headlight, therefore, does not “require[] more than” detecting a headlight—identifying a headlight and detecting a headlight are one and the same.

Id. at 4.

b. Analysis
In determining the ordinary and customary meaning of a claim term, the Board may consult a general dictionary definition of the word for guidance. Comaper Corp. v. Antec, Inc., 596 F.3d 1343, 1348 (Fed. Cir. 2010). Based on the dictionary passages submitted by Magna, there is substantial similarity between the definitions of “detect” and “identify.” According to these dictionary excerpts, one definition of “detect” is “to discover the true character of” (Ex. 2035), whereas a definition of “identify” is “to find out . . . what something is” (Ex. 2036). Given this similarity, it is understandable that the Specification uses the terms interchangeably. For example, according to the Specification, the invention “identifies” oncoming headlights and leading taillights, but it also “detects” oncoming headlights and leading taillights. Ex. 1002, 3:2–3, 6:3–12, 7:7–8. Likewise, the Specification describes “pattern recognition” as assisting in the “detection” of objects of interest, but it also “identifies” objects of interest. Id. at 11:1–3. Although different terms in a claim are presumed to have different meanings, that presumption may be rebutted when, as here, the Specification uses the terms interchangeably. See In re Magna Elecs., Inc., Case Nos. 2014-1798, 2014-1801, 2015 WL 2110525, at *5 (Fed. Cir. May 7, 2015) (rejecting Magna’s argument that a “positional relationship” has a different meaning than “an indication of a distance” because the patent “essentially treats the two terms coextensively”).
Even assuming arguendo that a distinction exists between “detect” and “identify,” and that Magna is correct that “the presence of an object may be detected but the object may not necessarily be identified” (PO Resp. 7 n.3), we disagree with Magna that identification requires more than detection in the context of the claims at issue. As stated above, claims 30 and 34 recite an image sensing system that “identifies” objects via image processing, the image processing comprising pattern recognition, the pattern recognition, in turn, comprising “detection” of, e.g., an object. In other words, the claimed system identifies objects by detecting them.2 The claim does not present the terms as two separate requirements (e.g., a system that “identifies and detects”), but rather is structured such that performing one satisfies the other. “[I]dentif[ying]” an object is satisfied when the system “detect[s]” an object, the only additional requirement being that the detection be the result of “pattern recognition . . . based at least in part on at least one of (i) shape, (ii) reflectivity, (iii) luminance and (iv) spectral characteristic.”
2 Conversely, claim 1 of U.S. Patent No. 7,459,664 B2 (“the ’664 patent”), the grandparent of the ’462 patent, recites a system that “detects” objects by “identifying” them—that is, the “detects” and “identifies” terminology is reversed compared to the analogous language in claims 30 and 34 of the ’462 patent. See IPR2015-00256, Ex. 1002, 13:35–36 (“said sensing system detecting objects by processing said image data to identify objects”). Generally, claims in patents in the same family that share a common written description are interpreted consistently. NTP, Inc. v. Research In Motion, Ltd., 418 F.3d 1282, 1293 (Fed. Cir. 2005), abrogated on other grounds by Zoltek Corp. v. United States, 672 F.3d 1309, 1323 (Fed. Cir. 2012) (en banc). The reversal of terms between claim 1 of the ’664 patent and claims 30 and 34 of the ’462 patent lends support to the notion that the drafter of these claims considered the terms to be interchangeable.

B. Kenue

Kenue describes a “computer vision system” that detects lane markers and obstacles in front of an automobile. Ex. 1004, 1:53–61. Figure 1 of Kenue, reproduced below, depicts Kenue’s system in block-diagram form:

[Figure 1 of Kenue: block diagram of the lane-sensing system.]

As depicted in Figure 1, Kenue’s system comprises black and white CCD [charge coupled device] video camera 10—mounted on a vehicle’s windshield to capture the driver’s view of the road in front of the vehicle—coupled to computer 14 via analog-to-digital converter 12. Id. at 2:28–34, Fig. 1. Computer 14 processes the image data received from the CCD camera, and drives output devices—i.e., display 16, obstacle warning alarm 18, and utilization circuit 20—in response to the data. Id. at 2:34–48, Fig. 1. The “raw image” from the camera is “digitized . . . into a 512x512x8 image.” Id. at 3:44–45. Computer 14 receives the digitized image data from the camera and, using one of two main algorithms, template matching and a Hough transform, “dynamically define[s] the search area for lane markers based on the lane boundaries of the previous [image] frame, and provide[s] estimates of the position of missing markers on the basis of current frame and previous frame information.” Id. at 2:32–48. The system also detects and alerts the driver to obstacles in the lane within about 50 feet of the vehicle. Id. at 2:48–51.
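For illustration only, and not as part of Kenue’s disclosure or the record, the kind of template matching Kenue describes (correlating a window with the image to produce a correlation matrix) can be sketched minimally; the function name and nested-list image format are assumptions for the example:

```python
def correlate(image, template):
    """Slide a template window over a 2D grayscale image and return a
    correlation matrix: the sum of pixel-by-pixel products at each offset.
    A high score marks where the image best matches the template."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    scores = [[0] * (iw - tw + 1) for _ in range(ih - th + 1)]
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            total = 0
            for i in range(th):
                for j in range(tw):
                    total += image[r + i][c + j] * template[i][j]
            scores[r][c] = total
    return scores
```

A detector in this style would then scan the correlation matrix for peaks inside a restricted search area, such as the lane-boundary regions Kenue describes.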
C. Claim 30—Anticipation—Kenue
TRW asserts that Kenue anticipates claim 30. Its contentions in this regard are summarized in the claim chart at pages 14–17 of the Petition. Magna disputes that claim 30 is anticipated. In particular, Magna argues that Kenue fails to disclose (1) a “two-dimensional array of light sensing photosensor elements,” (2) both “detection” and “identification,” and (3) “pattern recognition” based on either “shape” or “luminance.” PO Resp. 10–19. Magna further asserts that Kenue does not identify objects in “the forward field of view of [the] sensor,” and does not “process[] . . . image data.” Id. at 20–23.
1. “an imaging sensor comprising a two-dimensional array of light sensing photosensor elements”
Claim 30 recites “an imaging sensor comprising a two-dimensional array of light sensing photosensor elements.” TRW identifies Kenue’s CCD video camera as corresponding to the claimed imaging sensor. Pet. 14. In describing a block diagram of its system depicted in Figure 1, Kenue states that it uses a “black and white CCD video camera”; the raw image from the camera is “digitized . . . into a 512x512x8 image.” Ex. 1004, 2:27–30, 3:44–46, Fig. 1. This refers to a two-dimensional image that is 512 pixels high and 512 pixels wide, with each pixel represented by eight bits. Ex. 1020 ¶ 4. TRW argues that this shows that Kenue uses a two-dimensional array of light sensing photosensor elements to capture the raw image. Pet. 14.
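As a minimal illustrative sketch (not from the record), the “512x512x8” digitized image at the center of the parties’ dispute is simply a two-dimensional grid of 8-bit pixel values:

```python
# A "512x512x8" digitized image as described in Kenue: 512 rows by
# 512 columns, each pixel one 8-bit grayscale value (0-255).
HEIGHT = WIDTH = 512
image = [[0 for _ in range(WIDTH)] for _ in range(HEIGHT)]

# The two dimensions at issue: rows and columns of pixels.
assert len(image) == 512 and len(image[0]) == 512
assert all(0 <= px <= 255 for px in image[0])
```

The dispute below is whether such a grid must have been captured by a sensor that is itself two-dimensional.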
Magna disagrees. First, Magna argues that “a CCD camera is not explicitly, necessarily or inherently two-dimensional.” PO Resp. 11 (citing Ex. 2032 ¶¶ 35, 36–46). Magna notes, for example, that “a line-scan camera and a printer scanhead use a one-dimensional CCD.” Id. (citing Ex. 2032 ¶ 46). Second, Magna argues that “Kenue never discloses explicitly, necessarily or inherently that forming a two-dimensional image requires use of a two-dimensional sensor.” Id. Rather, Magna argues, “an image may be represented by a multidimensional matrix without the image sensor itself utilizing a two-dimensional array of light sensing photosensor elements.” Id. According to Magna, “Kenue’s search functions and algorithms could also be just as easily implemented using data from a one-dimensional array of light sensors.” Id. at 12–13 (citing Ex. 2032 ¶¶ 51–53).
TRW responds that a person of ordinary skill would understand from Kenue’s discussion of the creation of a two-dimensional digital image that Kenue uses a two-dimensional image sensor to capture the raw image from which the digital image is created. Citing its declarant, Jeffrey A. Miller, Ph.D., TRW asserts that “[b]y pointing out that the digitized image is 512 x 512 x 8, Kenue explicitly teaches to the skilled artisan use of a two-dimensional imaging array.” Pet. Reply 2 (citing Ex. 1020 ¶ 6). According to TRW, it would be difficult, if not impossible, to create a two-dimensional 512x512 image in real time using, e.g., a 1x512 sensor array, because such a sensor would have to take “512 distinct images . . . [that] would need to be individually pieced together.” Id. at 1–2. TRW further notes that Kenue describes digitizing a single “raw image” into a 512x512 image, rather than multiple “raw images,” indicating that the patent does not contemplate such piecing together of multiple raw images as would be required if a one-dimensional sensor were used. Id. at 2. Furthermore, TRW contends that “Magna’s own expert . . . admits Kenue does indeed teach a two dimensional image sensor.” Id.
“A claim is anticipated only if each and every element as set forth in the claim is found, either expressly or inherently described, in a single prior art reference.” Verdegaal Bros. v. Union Oil Co. of California, 814 F.2d 628, 631 (Fed. Cir. 1987). However, “the reference need not satisfy an ipsissimis verbis test.” In re Gleave, 560 F.3d 1331, 1334 (Fed. Cir. 2009) (internal quotation marks omitted). Rather, the reference must provide sufficient description of the claim elements so that a person of ordinary skill in the art would understand them to be present. See Akzo N.V. v. U.S. Int’l Trade Comm’n, 808 F.2d 1471, 1479 (Fed. Cir. 1986) (stating that the Commission did not apply an improper ipsissimis verbis test in its anticipation analysis, but rather considered whether the prior art reference disclosed the claimed invention to one of ordinary skill in the art); In re Paulsen, 30 F.3d 1475, 1480 (Fed. Cir. 1994) (holding that prior art references must be “considered together with the knowledge of one of ordinary skill in the pertinent art”). Thus, it is not dispositive that Kenue does not state that it uses a “two-dimensional array of light sensing photosensor elements” in so many words, as long as a person of ordinary skill would understand from the disclosure that the reference is describing such structure.
We find that Kenue discloses to a person of ordinary skill that its black and white CCD video camera system uses a two-dimensional array of photosensor elements. First, we credit the testimony of Dr. Miller that a person of ordinary skill would understand that, because Kenue’s system creates a two-dimensional digital image from a raw image, the system uses a two-dimensional photosensor array to capture the raw image. See Ex. 1020 ¶ 6. Further, we rely on Kenue’s disclosure that it digitizes a single raw image, rather than multiple raw images, to create the 512x512x8 digital image, indicating to a person of ordinary skill that the single raw image would have had to be captured by a 512x512 array of photosensor elements. Finally, although Magna’s declarant, Matthew A. Turk, Ph.D., originally testified that Kenue’s CCD camera does not “necessarily” include a two-dimensional array of photosensor elements (Ex. 2032 ¶ 36), and that “Kenue never discloses explicitly, necessarily or inherently that forming a two-dimensional image requires use of a two-dimensional sensor” (id. ¶ 47), Dr. Turk provided testimony in his deposition that supports the opposite conclusion. Dr. Turk noted that the “camera image plane” depicted in Figure 2 “correspond[s] to the image sensor in Kenue,” and that a plane is “a two-dimensional surface.” Ex. 1013, 198:1–13.
2. “said imaging sensor having a forward field of view to the exterior of a vehicle equipped with said image sensing system and through the windshield of the equipped vehicle”
Kenue’s CCD video camera is “mounted in a vehicle, say at the upper center of the windshield to capture the driver’s view of the road ahead.” Ex. 1004, 2:29–32. We agree with TRW that this teaching corresponds to the claimed requirement that the imaging sensor has “a forward field of view to the exterior of a vehicle . . . and through the windshield of the equipped vehicle.”
3. “said imaging sensor is operable to capture image data”

Kenue teaches that its computer “is programmed with algorithms for processing the images sensed by the camera.” Id. at 2:40–41. We agree with TRW that this teaching corresponds to the claimed requirement that the imaging sensor is operable to capture image data.
4. “a control comprising an image processor”

As depicted in Figure 1, Kenue’s system comprises computer 14, and “output devices driven by the computer.” Id. at 2:33–35. We agree with TRW that this teaching corresponds to the claimed requirement for a control that comprises an image processor.

5. “wherein said image sensing system identifies objects in said forward field of view of said image sensor via processing of said captured image data by said image processor”
TRW argues that Kenue teaches this limitation because it identifies lane markers and obstacles in the forward field of view of the equipped vehicle by digitizing raw image data and processing the digital image using either a template-matching algorithm or a Hough transform. Pet. 15 (citing Ex. 1004, 2:41–51, 3:44–46, 59–66).

Magna makes two arguments with respect to this limitation. First, Magna argues that Kenue does not identify objects in the “forward view of [the] sensor.” According to Magna, Kenue is limited to searching for lane markers only in a restricted view, and “‘forward field of view’ means more than th[is] restricted view.” PO Resp. 20–21. Magna derives this position from the fact that the Specification of the ’462 patent teaches that the invention is capable of “evaluation of light source characteristics made in each portion of the scene forward of the vehicle.” Id. (citing Ex. 1002, 1:65–67, 2:1–4).
As an initial matter, it is unclear from Magna’s Response whether it believes that this claim limitation requires that all objects present in the camera’s field of view be identified, or whether it just requires something more than searching for lane markers in Kenue’s limited search area. In any event, we do not find support in either the claim language or the Specification for Magna’s position. The limitation simply requires that “objects” be identified in the forward field of view of the image sensor, but does not otherwise specify what objects must be identified or where in the field of view the objects must be found (there does not seem to be any dispute that a lane marker is an “object” or that Kenue’s limited search areas are in the forward field of view of Kenue’s system). As the limitation does not expressly require more than this, we decline Magna’s invitation to read additional requirements into the claim.
Second, Magna argues that “Kenue does not ‘process[] . . . data’ because it does no more than convert analog to digital.” PO Resp. 22. TRW responds that Kenue’s computer 14 processes data as required by the claim. Pet. Reply 13. We agree. Kenue’s computer 14 receives digitized image data and processes it to detect lane markers using one of two algorithms: template matching or a Hough transform. Ex. 1004, 2:40–44. This corresponds to the claimed processing of image data.
6. “wherein said image processing comprises pattern recognition, and wherein said pattern recognition comprises detection of at least one of (a) a headlight, (b) a taillight and (c) an object, and wherein said pattern recognition is based at least in part on at least one of (i) shape, (ii) reflectivity, (iii) luminance and (iv) spectral characteristic”
TRW asserts that Kenue discloses “a template matching algorithm and a Hough transform algorithm for detecting lane markers (i.e., ‘an object’)[, and b]oth these image processing algorithms comprise pattern recognition and are based at least in part on at least one of shape and luminance.” Pet. 15–16.

Magna disputes that Kenue discloses this limitation, making two arguments in this regard. First, Magna argues that Kenue does not disclose both the “detection” and “identification” functions. According to Magna, “TRW collapses [‘detection’ and ‘identification’] into one [requirement] and does not explain how Kenue’s teachings ‘identify’ objects.” PO Resp. 13. According to Magna, “[t]his is because Kenue does not need to nor does it identify objects as all of Kenue’s teachings are only specific to lane markers.” Id.
TRW responds that the “skilled artisan would understand that detection and identification do not require the imaging system to perform two distinct functions.” Pet. Reply 6. TRW further argues that Kenue’s system identifies lane markers (i.e., more than one lane marker), and also identifies “objects as obstacles.” Id. According to TRW, the “claims do not require that the identified objects be identified with greater precision.” Id.
Magna’s argument is based on its contention that this limitation requires the system to both “identify” and “detect” objects, i.e., that identification and detection are two separate required “functions.” As discussed above, however, we disagree with this construction. Instead, this limitation makes clear that, as recited in claim 30, detection of an object satisfies the requirement that an object be identified. Here, Kenue clearly teaches detection of lane markers and obstacles, both of which are indisputably objects. See, e.g., Ex. 1004, 1:59–62 (“The invention is . . . a method of detecting lane markers.”); id. at 2:49–51 (“preprocessing procedures detect obstacles in the lane within about 50 feet of the vehicle”); id. at 3:3–4 (“broken line boxes 28 around the markers 24 define the area to be searched for detecting markers”).
Magna argues that “[a] system that only cares about one object does not need to, nor would it, identify objects as is recited in the claims.” PO Resp. 15. This argument implicitly reads into the claim the requirement that the system be able to identify more than one “type” of object; e.g., a lane marker and something other than a lane marker. But the claim language contains no such requirement. It merely requires that the system identify “objects,” which Kenue satisfies by identifying lane markers. Even if this limitation did contain such a requirement, Kenue would satisfy it because it detects both lane markers and “obstacles,” e.g., other vehicles, both of which would correspond to the claimed “object.”
Second, Magna argues that neither Kenue’s template matching algorithm nor its Hough transform discloses pattern recognition based on either shape or luminance. PO Resp. 16–20. Magna argues that the term “shape” in this limitation only “refers to the object of interest’s shape.” Id. at 18. According to Magna, “Kenue only checks edges of a pre-processed, digitized, and altered image,” or “the shape of a Hough transform,” but “does not detect the shape of an object or pattern.” Id. (citing Ex. 1004, 6:30–46; Ex. 2032 ¶ 73). Magna further argues that “intensity,” which is “the measurable amount of a property,” is not the same as “luminance,” which “describes the amount of light that passes through or is emitted from a particular area.” Id. at 19 (citing Ex. 1004, 4:49–66; Ex. 2032 ¶ 74). According to Magna, TRW has not explained how the correlation of a window of constant “gray level” with the “gray level” of the image relates to luminance. Id.
TRW responds that Dr. Turk admitted in his deposition that template matching is a form of pattern recognition, and that Kenue teaches that in template matching “‘a template or window of desired intensity and shape is correlated with the image to create a correlation matrix’ which is subsequently used to identify lane markers.” Pet. Reply 9–10 (citing Ex. 1004, 3:23–26). In addition, TRW asserts that “shape” may refer to more than the shape of the object of interest, and that Kenue’s teaching of using the desired shape of its template window falls within the scope of this limitation. Id. at 12 (citing Ex. 1002, claims 30 and 34; Ex. 1004, 3:23–26; Ex. 1020 ¶ 19). TRW further argues that the “lines (and the points to which they are transformed in the Hough space) have shapes, and these shapes are specifically utilized in the calculus.” Id.
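TRW’s point that lines map to points in Hough space can be illustrated with a minimal sketch (illustrative only, not Kenue’s implementation; the voting scheme shown is a textbook simplification): each edge pixel votes for every (rho, theta) line that could pass through it, and collinear pixels’ votes accumulate in a single cell.

```python
import math

def hough_best_line(points, theta_steps=180, rho_res=1.0):
    """Accumulate votes in (rho, theta) space. Each point (x, y) lies on
    every line rho = x*cos(theta) + y*sin(theta); collinear points agree
    on one (rho, theta) cell, which therefore collects the most votes."""
    votes = {}
    for x, y in points:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            rho = x * math.cos(theta) + y * math.sin(theta)
            cell = (round(rho / rho_res), t)
            votes[cell] = votes.get(cell, 0) + 1
    return max(votes, key=votes.get)  # (rho bin, theta index) of best line
```

For example, four points on the vertical line x = 2 vote most heavily for the cell with rho = 2 at theta index 0; the line is recovered as a single point in the transform space.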
We have reviewed the record, including the declarations submitted by Dr. Turk and Dr. Miller, and determine that TRW has established by a preponderance of the evidence that Kenue teaches pattern recognition based at least in part on shape. First, as Dr. Turk acknowledges, template matching is a type of pattern recognition.
