Exhibit 11

Case 1:22-cv-01257-GBW Document 1-11 Filed 09/22/22 Page 2 of 24 PageID #: 388
Infringement analysis of claims 1, 7, and 13 of U.S. Patent No. 7,336,814.

Claim 1: A method useful in machine-vision of objects, the method comprising:

Accused Product: The ABB User Manual discloses detailed structures and functions for machine-vision of objects using, e.g., the robot shown at page 108 (reproduced below).

Claim 1.1: acquiring a number of images of a first view of a training object from a number of image sensors;

Accused Product: “FlexVision™ allows you to generate a 3D calibration for a single camera or for all the cameras in the production environment simultaneously.” ABB User Manual, p. 137.

“After finishing calibration, the robot position needs to be adjusted back to the, original position for acquiring images of the part. Now, the user can configure the Feature detection tool.” ABB User Manual, p. 96.

“A 3D Vision Task is a vision process for acquiring images of the parts, like engine cylinder heads, transmission blocks, searching trained patterns, and estimate the 3D poses of the parts and then send the pose of the part to the robot.” ABB User Manual, p. 149.

A figure on page 171 of the ABB User Manual (reproduced below) shows software images from two cameras in which 9 features are visible on an object.
Claim 1.2: identifying a number of features of the training object in the acquired at least one image of the first view;

Accused Product: “A 3D Vision Task is a vision process for acquiring images of the parts, like engine cylinder heads, transmission blocks, searching trained patterns, and estimate the 3D poses of the parts and then send the pose of the part to the robot.” ABB User Manual, p. 149.

“Now, the user can configure the Feature detection tool.” ABB User Manual, p. 96.

“It is recommended to use at least 10 or 12 features.” ABB User Manual, p. 93.

A figure on page 171 of the ABB User Manual (reproduced below) shows a software image in which 9 features are visible on an object.
Claim 1.3: determining a number of additional views to be obtained

Accused Product: “vReference (vision reference) is used to acquire referencing images and triangulate the part model feature data.” ABB User Manual, p. 118.

“Positions {*}[:] The array of positions used for the calibration task.” ABB User Manual, p. 119.

“Referencing: This is where the user sets the acceptance criteria generating model points (referencing). … Num Robot Positions: It is the number of robot positions defined in RAPID code for the referencing. It should be equal or greater than 2.” ABB User Manual, p. 154.
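A standard way to “triangulate the part model feature data” from calibrated views, as the quoted passages describe, is linear (DLT) least-squares triangulation. The sketch below is a generic illustration that assumes known 3x4 projection matrices from the prior camera calibration; it is not ABB's actual implementation.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one feature seen in two calibrated views.

    P1, P2 -- assumed 3x4 camera projection matrices (from prior calibration)
    x1, x2 -- 2D pixel coordinates of the same feature in each view
    Returns the 3D point solved in the least-squares sense via SVD.
    """
    # Each view contributes two linear equations in the homogeneous point X:
    #   u * (P[2] @ X) - P[0] @ X = 0
    #   v * (P[2] @ X) - P[1] @ X = 0
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]            # right singular vector of smallest singular value
    return X[:3] / X[3]   # dehomogenize
```

Each additional view simply appends two more rows to A, which is why the number of views (robot positions) governs how overdetermined the system becomes.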
Claim 1.3.1: based at least in part on the number of image sensors,

Claim 1.3.2: the number of features identified, the number of features having an invariant physical relationship associated thereto, and

Accused Product (claims 1.3.1 and 1.3.2): “Num Views: It is the total number of Views that are required to be present under the VGR task. This number should not exceed the total number of cameras required for the task.” ABB User Manual, p. 154.

“Your part has features that can be reliably and accurately found in 2D images from one or more 3D-calibrated cameras.” ABB User Manual, p. 149.

“If there are more than one Views under a VGR Task, the corresponding PMAlign tasks should be identical.” ABB User Manual, p. 156.

“A View is a display from a camera. The number of views added to a calibration task cannot exceed the number of cameras connected to the system.” ABB User Manual, p. 7.

“It is important to train features that are unique and not easily confused with nearby edges or features. These features should have good contrast and not have a lot of noise in the area…If any defined feature is found missing for the image of the object, its search region is marked in red.” ABB User Manual, p. 93.
A figure from page 180 (shown below) shows 9 features identified on the part that have an invariant physical relationship to one another based on World (a type of robot coordinate frame) in millimeters:

Another figure on page 182 (shown below) shows 9 features identified on the part having an invariant physical relationship to one another based on an unnamed coordinate system:
Claim 1.3.3: a type of the invariant physical relationship associated with the features, sufficient to provide a system of equations and unknowns where the number of unknowns is not greater than the number of equations;

Accused Product: A figure on page 179 of the ABB User Manual shows the input data, derived from the extracted feature data together with the previously acquired camera calibration data, that ultimately produces the system of equations to be solved for pose estimation:
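The claim element's equation/unknown accounting can be made concrete with a standard projective-geometry count (an illustration of the general convention, not drawn from the ABB User Manual): each 2D observation of a feature contributes two projection equations, while a rigid six-degree-of-freedom pose contributes six unknowns, so for $n$ features observed in $m$ views the claimed condition corresponds to

```latex
\underbrace{2\,n\,m}_{\text{equations}} \;\geq\; \underbrace{6}_{\text{pose unknowns}}
```

For example, three non-collinear features observed in a single view already yield six equations for six unknowns, the classical perspective-three-point setting.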
Claim 1.4: acquiring at least one image of each of the number of additional views of the training object by the at least one camera; and

Claim 1.5: identifying at least some of the number of features of the training object in the acquired at least one image of the number of additional views of the training object

Accused Product (claims 1.4 and 1.5): “Feature tools should have a relatively small search area. It is important to train features that are unique and not easily confused with nearby edges or features. These features should have good contrast and not have a lot of noise in the area…If any defined feature is found missing for the image of the object, its search region is marked in red.” ABB User Manual, p. 93.

“Error Codes[:] 51 less features.” ABB User Manual, p. 193.

“The Robot should point to all the features defined in the VGR Task.” ABB User Manual, p. 173.

“Once the pattern has been trained, in order to test and see the results locate and click the 'Results' tab. To run the software press the red play button in the top-left corner of the window as depicted in figure 9 with a red border. This should display the results of the inputs that the user gave. The most important value will be the "Score," it will be a number from 0.000 to 1.000 with the latter being the most recognizable pattern.” ABB User Manual, p. 193.

Claim 1.6: employing at least one of a consistency of physical relationships between some of the identified features to set up the system of equations; and

Accused Product: “The referencing task generate the model points in 3D space, which are used by the VGR task for pose estimation.” ABB User Manual, p. 154.

“The relationship of the feature points is identified and based on the 3D model points, and the appearance of the features in the image, the pose of the part is estimated. It is the transformation from the Camera Coordinate System to the work frame used in RAPID code.” ABB User Manual, p. 99.
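The quoted passage describes estimating a pose as a transformation between coordinate frames using 3D model points. One common, generic way to recover such a rigid transform from corresponding 3D point sets is the least-squares Kabsch/SVD method sketched below; the 3D-to-3D correspondence setting and all names here are illustrative assumptions, not ABB's actual algorithm.

```python
import numpy as np

def estimate_pose(model_pts, observed_pts):
    """Least-squares rigid transform (Kabsch/SVD) between two 3D point sets.

    model_pts, observed_pts -- Nx3 arrays of corresponding 3D points
    Returns (R, t) such that observed_pts[i] ~= R @ model_pts[i] + t.
    """
    mu_m = model_pts.mean(axis=0)
    mu_o = observed_pts.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (model_pts - mu_m).T @ (observed_pts - mu_o)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the orthogonal factor.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_o - R @ mu_m
    return R, t
```

For exact correspondences of at least three non-collinear points the recovered rotation and translation are unique, which matches the claim's requirement that the system of equations be solvable.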
Claim 1.7: automatically computationally solving the system of equations.

Accused Product: “Referencing: This is where the user sets the acceptance criteria for generating model points (referencing). The referencing task generates the model points in 3D space, which are used by the VGR task for pose estimation. The Referencing task and the corresponding 3D VGR task share the same Vision tools for feature detection.” ABB User Manual, p. 154.

“The relationship of the feature points is identified and based on the 3D model points, and the appearance of the features in the image, the pose of the part is estimated. It is the transformation from the Camera Coordinate System to the work frame used in RAPID code.” ABB User Manual, p. 99.

Claim 7: A machine-vision system, comprising:

Accused Product: The ABB User Manual discloses detailed structures and functions for machine-vision of objects using, e.g., the robot shown at page 108 (reproduced below).
Claim 7.1: at least one image sensor operable to acquire images of a training object and of target objects;

Accused Product: “FlexVision™ allows you to generate a 3D calibration for a single camera or for all the cameras in the production environment simultaneously.” ABB User Manual, p. 137.

“A 3D Vision Task is a vision process for acquiring images of the parts, like engine cylinder heads, transmission blocks, searching trained patterns, and estimate the 3D poses of the parts and then send the pose of the part to the robot.” ABB User Manual, p. 149.

Claim 7.2: processor-readable medium storing instructions for facilitating machine-vision for objects having invariant physical relationships between a number of features on the objects, by:

Accused Product: “Temperature: 0°C to 50°C (w/HDD), Extended Temperature: -20°C to 55°C (-4 to 131°F) w/industrial SSD or CFast…The Processing Unit used for the current version of FlexVision™ is Matrix MXC-6300 Series from ADLINK Technology Inc.” ABB User Manual, p. 9.

Claim 7.2.1: acquiring a number of images of a first view of a training object from a number of image sensors;

Accused Product: “It is important to train features that are unique and not easily confused with nearby edges or features. These features should have good contrast and not have a lot of noise in the area…If any defined feature is found missing for the image of the object, its search region is marked in red.” ABB User Manual, p. 93.

“After finishing calibration, the robot position needs to be adjusted back to the, original position for acquiring images of the part. Now, the user can configure the Feature detection tool.” ABB User Manual, p. 96.

“A 3D Vision Task is a vision process for acquiring images of the parts, like engine cylinder heads, transmission blocks, searching trained patterns, and estimate the 3D poses of the parts and then send the pose of the part to the robot.” ABB User Manual, p. 149.

A figure on page 171 of the ABB User Manual (reproduced below) shows software images from two cameras in which 9 features are visible on an object.
Claim 7.2.2: identifying a number of features of the training object in the acquired at least one image of the first view;

Accused Product: “A 3D Vision Task is a vision process for acquiring images of the parts, like engine cylinder heads, transmission blocks, searching trained patterns, and estimate the 3D poses of the parts and then send the pose of the part to the robot.” ABB User Manual, p. 149.

“Now, the user can configure the Feature detection tool.” ABB User Manual, p. 96.

“It is recommended to use at least 10 or 12 features.” ABB User Manual, p. 93.

A figure on page 171 of the ABB User Manual (reproduced below) shows a software image in which 9 features are visible on an object.

Claim 7.2.3: determining a number of additional views to be obtained

Accused Product: “vReference (vision reference) is used to acquire referencing images and triangulate the part model feature data.” ABB User Manual, p. 118.

“Positions {*}[:] The array of positions used for the calibration task.” ABB User Manual, p. 119.

“Referencing: This is where the user sets the acceptance criteria generating model points (referencing). … Num Robot Positions: It is the number of robot positions defined in RAPID code for the referencing. It should be equal or greater than 2.” ABB User Manual, p. 154.
Claim 7.2.3.1: based at least in part on the number of image sensors,

Claim 7.2.3.2: the number of features identified, the number of features having an invariant physical relationship associated thereto, and

Accused Product (claims 7.2.3.1 and 7.2.3.2): “Num Views: It is the total number of Views that are required to be present under the VGR task. This number should not exceed the total number of cameras required for the task.” ABB User Manual, p. 154.

“Your part has features that can be reliably and accurately found in 2D images from one or more 3D-calibrated cameras.” ABB User Manual, p. 149.

“If there are more than one Views under a VGR Task, the corresponding PMAlign tasks should be identical.” ABB User Manual, p. 156.

“A View is a display from a camera. The number of views added to a calibration task cannot exceed the number of cameras connected to the system.” ABB User Manual, p. 7.

“It is important to train features that are unique and not easily confused with nearby edges or features. These features should have good contrast and not have a lot of noise in the area…If any defined feature is found missing for the image of the object, its search region is marked in red.” ABB User Manual, p. 93.

A figure from page 180 (shown below) shows 9 features identified on the part that have an invariant physical relationship to one another based on World (a type of robot coordinate frame) in millimeters:
Another figure on page 182 (shown below) shows 9 features identified on the part having an invariant physical relationship to one another based on an unnamed coordinate system:
Claim 7.2.3.3: a type of the invariant physical relationship associated with the features, sufficient to provide a system of equations and unknowns where the number of unknowns is not greater than the number of equations;

Accused Product: A figure on page 179 of the ABB User Manual shows the input data, derived from the extracted feature data together with the previously acquired camera calibration data, that ultimately produces the system of equations to be solved for pose estimation:
Claim 7.2.4: employing at least one of a consistency of physical relationships between some of the identified features to set up the system of equations; and

Accused Product: “The referencing task generate the model points in 3D space, which are used by the VGR task for pose estimation.” ABB User Manual, p. 154.

“The relationship of the feature points is identified and based on the 3D model points, and the appearance of the features in the image, the pose of the part is estimated. It is the transformation from the Camera Coordinate System to the work frame used in RAPID code.” ABB User Manual, p. 99.

Figures from pages 180, 181, and 182 (respectively, shown below) show 9 features identified on the part that have an invariant physical relationship to one another based on various coordinate systems, employing a consistency of physical relationships between the features in camera-pixel, robot-world (millimeters based on the robot), and local object frame coordinate systems (respectively).
Claim 7.2.5: automatically computationally solving the system of equations; and

Accused Product: “Referencing: This is where the user sets the acceptance criteria for generating model points (referencing). The referencing task generates the model points in 3D space, which are used by the VGR task for pose estimation. The Referencing task and the corresponding 3D VGR task share the same Vision tools for feature detection.” ABB User Manual, p. 154.

“The relationship of the feature points is identified and based on the 3D model points, and the appearance of the features in the image, the pose of the part is estimated. It is the transformation from the Camera Coordinate System to the work frame used in RAPID code.” ABB User Manual, p. 99.

Claim 7.3: a processor coupled to receive acquired images from the at least one image sensor and operable to execute the instructions stored in the processor-readable medium.

Accused Product: “The Processing Unit used for the current version of FlexVision™ is Matrix MXC-6300 Series from ADLINK Technology Inc.” ABB User Manual, p. 9.

Claim 13: A processor readable medium storing instructions for causing a processor to facilitate machine-vision for objects having invariant physical relationships between a number of features on the objects, by:

Accused Product: The ABB User Manual discloses detailed structures and functions for machine-vision of objects using, e.g., the robot shown at page 108 (reproduced below).

“FlexVision™ allows you to generate a 3D calibration for a single camera or for all the cameras in the production environment simultaneously.” ABB User Manual, p. 137.

“Temperature: 0°C to 50°C (w/HDD), Extended Temperature: -20°C to 55°C (-4 to 131°F) w/industrial SSD or CFast…The Processing Unit used for the current version of FlexVision™ is Matrix MXC-6300 Series from ADLINK Technology Inc.” ABB User Manual, p. 9.

“It is important to train features that are unique and not easily confused with nearby edges or features. These features should have good contrast and not have a lot of noise in the area…If any defined feature is found missing for the image of the object, its search region is marked in red.” ABB User Manual, p. 93.

“After finishing calibration, the robot position needs to be adjusted back to the, original position for acquiring images of the part. Now, the user can configure the Feature detection tool.” ABB User Manual, p. 96.
Claim 13.1: acquiring a number of images of a first view of a training object from a number of cameras;

Accused Product: “A 3D Vision Task is a vision process for acquiring images of the parts, like engine cylinder heads, transmission blocks, searching trained patterns, and estimate the 3D poses of the parts and then send the pose of the part to the robot.” ABB User Manual, p. 149.

A figure on page 171 of the ABB User Manual (reproduced below) shows software images from two cameras in which 9 features are visible on an object.
Claim 13.2: identifying a number of features of the training object in the acquired at least one image of the first view;

Accused Product: “A 3D Vision Task is a vision process for acquiring images of the parts, like engine cylinder heads, transmission blocks, searching trained patterns, and estimate the 3D poses of the parts and then send the pose of the part to the robot.” ABB User Manual, p. 149.

“Now, the user can configure the Feature detection tool.” ABB User Manual, p. 96.

“It is recommended to use at least 10 or 12 features.” ABB User Manual, p. 93.
A figure on page 171 of the ABB User Manual (reproduced below) shows a software image in which 9 features are visible on an object.

Claim 13.3: determining a number of additional views to be obtained

Accused Product: “vReference (vision reference) is used to acquire referencing images and triangulate the part model feature data.” ABB User Manual, p. 118.

“Positions {*}[:] The array of positions used for the calibration task.” ABB User Manual, p. 119.

“Referencing: This is where the user sets the acceptance criteria generating model points (referencing). … Num Robot Positions: It is the number of robot positions defined in RAPID code for the referencing. It should be equal or greater than 2.” ABB User Manual, p. 154.
Claim 13.3.1: based at least in part on the number of image sensors,

Claim 13.3.2: the number of features identified, the number of features having an invariant physical relationship associated thereto, and

Accused Product (claims 13.3.1 and 13.3.2): “Num Views: It is the total number of Views that are required to be present under the VGR task. This number should not exceed the total number of cameras required for the task.” ABB User Manual, p. 154.

“Your part has features that can be reliably and accurately found in 2D images from one or more 3D-calibrated cameras.” ABB User Manual, p. 149.

“If there are more than one Views under a VGR Task, the corresponding PMAlign tasks should be identical.” ABB User Manual, p. 156.

“A View is a display from a camera. The number of views added to a calibration task cannot exceed the number of cameras connected to the system.” ABB User Manual, p. 7.

“It is important to train features that are unique and not easily confused with nearby edges or features. These features should have good contrast and not have a lot of noise in the area…If any defined feature is found missing for the image of the object, its search region is marked in red.” ABB User Manual, p. 93.
Claim 13.3.3: a type of the invariant physical relationship associated with the features, sufficient to provide a system of equations and unknowns where the number of unknowns is not greater than the number of equations;

Accused Product: A figure on page 179 of the ABB User Manual shows the input data, derived from the extracted feature data together with the previously acquired camera calibration data, that ultimately produces the system of equations to be solved for pose estimation:

Claim 13.4: employing at least one of a consistency of physical relationships between some of the identified features to set up the system of equations; and

Accused Product: “The referencing task generate the model points in 3D space, which are used by the VGR task for pose estimation.” ABB User Manual, p. 154.

“The relationship of the feature points is identified and based on the 3D model points, and the appearance of the features in the image, the pose of the part is estimated. It is the transformation from the Camera Coordinate System to the work frame used in RAPID code.” ABB User Manual, p. 99.

Figures from pages 180, 181, and 182 (respectively, shown below) show 9 features identified on the part that have an invariant physical relationship to one another based on various coordinate systems, employing a consistency of physical relationships between the features in camera-pixel, robot-world (millimeters based on the robot), and local object frame coordinate systems (respectively).
Claim 13.5: automatically computationally solving the system of equations.

Accused Product: “Referencing: This is where the user sets the acceptance criteria for generating model points (referencing). The referencing task generates the model points in 3D space, which are used by the VGR task for pose estimation. The Referencing task and the corresponding 3D VGR task share the same Vision tools for feature detection.” ABB User Manual, p. 154.

“The relationship of the feature points is identified and based on the 3D model points, and the appearance of the features in the image, the pose of the part is estimated. It is the transformation from the Camera Coordinate System to the work frame used in RAPID code.” ABB User Manual, p. 99.