`
`
`
`UNITED STATES PATENT AND TRADEMARK OFFICE
`
`_____________________________
`
`BEFORE THE PATENT TRIAL AND APPEAL BOARD
`
`_____________________________
`
`ABB INC.,
`
`Petitioner,
`
v.
`
`ROBOTICVISIONTECH, INC.,
`
`Patent Owner
`
`_____________________________
`
`EXPERT DECLARATION OF SETH HUTCHINSON, PH.D.
`IN SUPPORT OF PETITION FOR INTER PARTES
`REVIEW OF U.S. PATENT NO. 8,095,237
`
`
`
`
`
`
`
`TABLE OF CONTENTS
`
`
`
I. INTRODUCTION AND QUALIFICATIONS .......................................... 1
II. UNDERSTANDING OF THE GOVERNING LAW ................................ 4
a. Invalidity by Obviousness .......................................................................... 4
b. Interpreting Claims Before the Patent Office ............................................. 8
c. Materials Relied on in Forming My Opinions ........................................... 9
III. BACKGROUND OF THE ART .............................................................. 10
a. Camera Calibration and Single Image Three-Dimensional Vision
Guided Robotics Were Well-Known Long Before the ’237 Patent ......... 10
IV. OVERVIEW OF THE ’237 PATENT .................................................... 15
a. Specification of the ’237 Patent ............................................................... 15
b. The Relevant Claims of the ’237 Patent ................................................... 20
c. The Prosecution History of the ’237 Patent ............................................. 30
d. The Priority Date of the ’237 Patent ........................................................ 32
V. STATE OF THE ART PRIOR TO THE ’237 PATENT ......................... 33
`a. The Person of Ordinary Skill in the Art ................................................... 33
`b. Corke ........................................................................................................ 33
`c. Wei-I ......................................................................................................... 35
`VI. CLAIM CONSTRUCTION ..................................................................... 36
`VII. SUMMARY OF POSITIONS .................................................................. 38
`VIII. CLAIMS 1–10 and 12–28 ARE INVALID ............................................. 39
`a. GROUND 1: CLAIMS 1-4, 6-10, 17-20, AND 24-28 ARE
`UNPATENTABLE AS OBVIOUS OVER CORKE IN VIEW OF
`THE KNOWLEDGE OF A POSITA ....................................................... 39
`1. Claim 1 ................................................................................................ 39
`2. Claim 2 ................................................................................................ 50
`3. Claims 3 and 4 ..................................................................................... 54
`4. Claim 6 ................................................................................................ 58
`5. Claim 7 ................................................................................................ 59
`6. Claim 8 ................................................................................................ 60
`7. Claim 9 ................................................................................................ 63
`
`
`
`
`
`
`8. Claim 10 .............................................................................................. 66
`9. Claims 17, 24, and 28 .......................................................................... 67
`10. Claim 18 .............................................................................................. 69
`11. Claim 19 .............................................................................................. 71
`12. Claims 20 and 25 ................................................................................. 73
`13. Claims 26 and 27 ................................................................................. 87
`b. GROUND 2: CLAIMS 5, 12–16, AND 21–24 ARE
`UNPATENTABLE AS OBVIOUS OVER CORKE IN VIEW OF
`WEI-I ........................................................................................................ 88
`1. Motivation to Combine Corke and Wei-I ........................................... 89
`2. Claims 5 and 12 ................................................................................... 91
`3. Claim 13 .............................................................................................. 96
`4. Claim 14 .............................................................................................. 97
`5. Claim 15 ............................................................................................100
`6. Claim 16 ............................................................................................102
`7. Claim 21 ............................................................................................106
`8. Claims 22 and 23 ...............................................................................111
`IX. OBJECTIVE INDICIA OF NON-OBVIOUSNESS..............................114
`
`
`
`
`
`
`I, Seth Hutchinson, hereby declare as follows:
`
`I. INTRODUCTION AND QUALIFICATIONS
`
`1.
`
`I have been retained on behalf of ABB Inc. (“ABB” or “Petitioner”) to
`
`provide my technical review, analysis, insights, and opinions concerning the validity
`
`of claims 1–10 and 12–28 of U.S. Patent No. 8,095,237 (“the ’237 Patent”)
`
`(EX1001) entitled “Method and apparatus for single image 3D vision guided
`
`robotics.” I understand that the ’237 Patent is assigned to RoboticVISIONTech, Inc.
`
`(“RVT”).
`
`2.
`
`I am a Professor and KUKA Chair for Robotics at the School of
`
`Interactive Computing at the Georgia Institute of Technology (“Georgia Tech”). I
`
`have held that position since 2018. At Georgia Tech, I have taught and developed
`
`the courses: Robot Motion Planning, Mobile Manipulation, and Introduction to
`
`Perception and Robotics. At Georgia Tech, I have advised six Ph.D. students.
`
`3.
`
`I also serve as the Executive Director of the Institute for Robotics and
`
`Intelligent Machines (“IRIM”) at Georgia Tech. I have held that position since 2019.
`
`Previously, I held the position of Associate Director at IRIM. IRIM is a center for
`
`robotics research and education at Georgia Tech. IRIM conducts research on
`
`mechanics, control, perception, artificial intelligence and cognition, interaction, and
`
`systems, including on field and service robots and human-centered robotics. This
`
`includes research on manipulation and locomotion, safe and resilient autonomy, and
`
`
`
`
`
`
`sensing and perception. IRIM hosts more than 80 faculty members, 150 graduate
`
`students, and 40 robotics labs.
`
`4.
`
`I was previously employed at
`
`the University of Illinois at
`
`Urbana-Champaign (the “University of Illinois”) from 1990 to 2018, where I still
`
`hold the position of Professor Emeritus of Electrical and Computer Engineering.
`
`During my time at the University of Illinois, I worked in the Electrical and Computer
`
`Engineering Department. From 1990 to 1996, I was an Assistant Professor of
`
`Electrical and Computer Engineering and a Research Assistant Professor at the
`
`Beckman Institute and Coordinated Science Laboratory. From 1996 to 2003, I was
`
`an Associate Professor of Electrical and Computer Engineering and a Research
`
`Associate Professor at the Beckman Institute and Coordinated Science Laboratory.
`
`From 2001 to 2007, I was the Associate Head for Undergraduate Affairs for
`
`Electrical and Computer Engineering. From 2003 to 2017, I was a Professor of
`
`Electrical and Computer Engineering and a Research Professor at the Beckman
`
`Institute and Coordinated Science Laboratory.
`
`5.
`
`At the University of Illinois, I taught and developed the courses:
`
`Introduction to Robotics, Robot Sensing, Introduction to Robotics, Advanced
`
`Robotic Planning, Control Systems, Control System Theory and Design,
`
`Introduction to Optimization, Senior Design Laboratory, Introduction to Computing
`
`Systems, Analog Signal Processing, Computer Engineering I, Probability with
`
`2
`
`ABB Inc. Exhibit 1003, Page 5 of 118
`ABB Inc. v. Roboticvisiontech, Inc.
` IPR2023-01426
`
`
`
`
`
`Engineering Applications, Logic Design, and Engineering Ethics. I also advised 18
`
`Ph.D. students in their thesis research.
`
`6.
`
`I have also been a visiting professor at various institutions between
`
`1989 and 2017, including Purdue University, Ecole Nationale Supérieure des
`
`Télécommunications, The Australian National University, Tecnológico de
`
Monterrey, Université de Rennes I, L’Institut Français de Mécanique Avancée, and

Università di Roma “La Sapienza.”
`
`7.
`
`I have conducted various tutorials and short courses since 1993,
`
`including several courses on Visual Servo Control. I have also taught short courses
`
`on Path Planning, Robot Motion Planning, Multisensor Fusion Under Uncertainty,
`
`Underactuated Robots, Probabilistic Methods in Robotics, and Robotics and
`
`Computer Vision.
`
`8.
`
`I attended Purdue University, where I received a Ph.D. in electrical
`
`engineering in 1988, an M.S. degree in electrical engineering in 1984, and a B.S.
`
`degree in electrical engineering in 1983.
`
`9.
`
`A large part of my research work has involved machine vision and
`
`robotics. My research interests include vision-based control, motion planning and
`
`control, planning under uncertainty, pursuit-evasion, localization and mapping,
`
`locomotion, and bio-inspired robotics.
`
`3
`
`ABB Inc. Exhibit 1003, Page 6 of 118
`ABB Inc. v. Roboticvisiontech, Inc.
` IPR2023-01426
`
`
`
`
`
`10.
`
`I have served on the advisory and editorial boards for multiple journals
`
`since 1997, including the International Journal of Robotics Research, the Journal of
`
`Intelligent Service Robotics, Transactions on Robotics, Transactions on Robotics
`
`and Automation, and the IEEE Robotics and Automation Society.
`
`11.
`
`I have published three textbooks on Robot Modeling and Principles of
`
`Robot Motion. I am an author on over 75 articles on similar topics. According to
`
`Google Scholar, my work has been cited more than 27,000 times.
`
`12.
`
`I am very involved in the Robotics and Automation Society of IEEE,
`
`including serving as its President until 2021.
`
`13. Attached as Exhibit 1012 is my curriculum vitae, which includes a more
`
`detailed list of my qualifications. My work on this case is being billed at a rate of
`
`$500 per hour, with reimbursement for actual expenses. I have no direct financial
`
`interest in the dispute between the Petitioner and RVT, and my compensation is not
`
`contingent upon the outcome of this inter partes review.
`
`14.
`
`I have not testified as an expert at trial or by deposition during the
`
`previous 4 years.
`
`II. UNDERSTANDING OF THE GOVERNING LAW
`
a. Invalidity by Obviousness
`
`15.
`
`I understand that a claim may be invalid under 35 U.S.C. § 103 if the
`
`subject matter described by the claim as a whole would have been obvious to a
`
`4
`
`ABB Inc. Exhibit 1003, Page 7 of 118
`ABB Inc. v. Roboticvisiontech, Inc.
` IPR2023-01426
`
`
`
`
`
`POSITA in view of a prior art reference, or in view of a combination of references
`
`at the time the claimed invention was made. I understand that obviousness is
`
`analyzed from the perspective of a POSITA at the time of the alleged invention. I
`
`also understand that a POSITA is presumed to have been aware of all pertinent prior
`
`art at the time of the alleged invention.
`
`16.
`
`I understand that an obviousness analysis involves comparing a claim
`
`to the prior art to determine whether the claimed invention as a whole would have
`
`been obvious to a POSITA in view of the prior art, and in light of the general
`
`knowledge in the art at the time the invention was made. I also understand that the
`
`invention may be deemed obvious when a POSITA would have reached the claimed
`
`invention through routine experimentation.
`
`17.
`
`I understand that obviousness can be established by combining or
`
`modifying the disclosures of the prior art to achieve the claimed invention. It is also
`
`my understanding that where there is a reason to modify or combine the prior art to
`
`achieve the claimed invention, there must also be a reasonable expectation of success
`
`in so doing to render the claimed invention obvious. I understand that the reason to
`
`combine prior art references can come from a variety of sources, not just the prior
`
`art itself or the specific problem the patentee was trying to solve. I also understand
`
`that the references themselves need not provide a specific hint or suggestion of the
`
`5
`
`ABB Inc. Exhibit 1003, Page 8 of 118
`ABB Inc. v. Roboticvisiontech, Inc.
` IPR2023-01426
`
`
`
`
`
`alteration needed to arrive at the claimed invention; the analysis may include
`
`recourse to logic, judgment, and common sense available to a POSITA.
`
`18.
`
`I understand that when there is some recognized reason to solve a
`
`problem, and there are a finite number of identified, predictable solutions, a POSITA
`
`has good reason to pursue the known options within his or her technical grasp. If
`
`such an approach leads to the anticipated success, it is likely the product not of
`
`innovation but of ordinary skill and common sense. In such a circumstance, when a
`
`patent simply arranges old elements with each performing the same function it had
`
`been known to perform and yields no more than one would expect from such an
`
`arrangement, I understand that the combination is obvious.
`
`19.
`
`I understand that when considering the obviousness of an invention,
`
`one should also consider whether there are any objective indicia that support the
`
`non-obviousness of the invention. I further understand that objective indicia of
`
`nonobviousness include failure of others, copying, unexpected results, information
`
`that “teaches away” from the claimed subject matter, perception in the industry,
`
`commercial success, and long-felt but unmet need. I also understand that in order for
`
`objective indicia of non-obviousness to be applicable, the indicia must have some
`
`sort of nexus to the subject matter in the claim that was not known in the art. I
`
`understand that such nexus includes a factual connection between the patentable
`
`subject matter of the claim and the objective indicia alleged. I also understand that
`
`6
`
`ABB Inc. Exhibit 1003, Page 9 of 118
`ABB Inc. v. Roboticvisiontech, Inc.
` IPR2023-01426
`
`
`
`
`
`an independently made invention that is made within a comparatively short period
`
`of time is evidence that the claimed invention was the product of ordinary skill.
`
`20. Finally, I understand that patent examiners at the U.S. Patent and
`
`Trademark Office (“USPTO”) rely upon certain exemplary rationales in reviewing
`
`patent applications to understand whether the subject matter of the claims is obvious.
`
`I understand that the following is the list of exemplary rationales relied upon by
`
`patent examiners at the USPTO:
`
`a. Combining prior art elements according to known methods to yield
`
`predictable results;
`
`b. Simple substitution of one known element for another to obtain
`
`predictable results;
`
`c. Use of a known technique to improve similar devices, methods, or
`
`products in the same way;
`
`d. Applying a known technique to a known device, method, or product
`
`ready for improvement to yield predictable results;
`
`e. “Obvious to try” – Choosing from a finite number of identified,
`
`predictable solutions, with a reasonable expectation of success;
`
`f. Known work in one field of endeavor may prompt variations of it for
`
`use in either the same field or a different one based on design incentives
`
`7
`
`ABB Inc. Exhibit 1003, Page 10 of 118
`ABB Inc. v. Roboticvisiontech, Inc.
` IPR2023-01426
`
`
`
`
`
`or other market forces if the variations are predictable to one of ordinary
`
`skill in the art; and
`
`g. Some teaching, suggestion, or motivation in the prior art that would
`
`have led one of ordinary skill to modify the prior art reference or to
`
`combine prior art reference teachings to arrive at the claimed invention.
`
b. Interpreting Claims Before the Patent Office
`
`21.
`
`I understand that inter partes review is a proceeding before the USPTO
`
`for evaluating the validity of issued patent claims. I understand that, in an inter
`
`partes review, a claim term is interpreted in a manner consistent with the standard
`
`used in patent litigation, as set forth in Phillips v. AWH Corp., 415 F.3d 1303 (Fed.
`
`Cir. 2005) (en banc). I understand that such standard generally construes the claims
`
`according to their “ordinary and customary” meaning in view of the claim language,
`
`specification, and file history, and where applicable, other relevant evidence.
`
`22.
`
`I understand that a patent’s “specification” includes all the figures,
`
`discussion, and claims within the patent. I understand that the USPTO will look to
`
`the specification and prosecution history to see if there is a definition for a given
`
`claim term, and if not, will apply the ordinary and customary meaning from the
`
`perspective of a POSITA at the time in which the alleged invention was made.
`
`
`
`
`
`8
`
`ABB Inc. Exhibit 1003, Page 11 of 118
`ABB Inc. v. Roboticvisiontech, Inc.
` IPR2023-01426
`
`
`
`
`
`c. Materials Relied on in Forming My Opinions
`
`23.
`
`In forming my opinions expressed in this declaration, I have relied on
`
`my own knowledge, experience, and expertise, as well as the knowledge of a
`
`POSITA in the relevant timeframe. In addition, I have reviewed and relied upon all
`
`documents referenced in this declaration including the following materials. I
`
`understand the documents have been given the following exhibit numbers in this
`
`proceeding:
`
`• U.S. Patent No. 8,095,237 (“the ’237 Patent”) (EX1001);
`
`• Prosecution History of the ’237 Patent (EX1002);
`
`• “Visual Control of Robots: High-Performance Visual Servoing,” by Peter I.
`
`Corke (“Corke”) (EX1004);
`
• Active Self-Calibration of Robotic Eyes and Hand-Eye Relationships with

Model Identification, Guo-Qing Wei et al., IEEE Transactions on Robotics

and Automation (“Wei-I”) (EX1005);
`
`• Multisensory Visual Servoing by a Neural Network, Guo-Qing Wei and Gerd
`
`Hirzinger, IEEE Transactions on Systems, Man and Cybernetics (“Wei-II”)
`
(EX1006);
`
`• U.S. Patent No. 4,146,924 to Birk et al. (“Birk”) (EX1008); and
`
`• U.S. Patent No. 5,959,425 to Bieman et al. (“Bieman”) (EX1009).
`
`
`
`9
`
`ABB Inc. Exhibit 1003, Page 12 of 118
`ABB Inc. v. Roboticvisiontech, Inc.
` IPR2023-01426
`
`
`
`
`
III. BACKGROUND OF THE ART
`
a. Camera Calibration and Single Image Three-Dimensional Vision Guided Robotics Were Well-Known Long Before the ’237 Patent
`
`24. The ’237 Patent relates to “a method and apparatus for single image
`
`three dimensional vision guided robotics.” EX1001, 1:13-15. As the ’237 Patent
`
`recognized, “machine vision [was] increasingly being used to guide robots in their
`
`tasks.” Id., 1:20-22. 3D vision guidance systems for robots were also known,
`
`including those mentioned in the various patents disclosed in the Background of the
`
`’237 Patent. Id., 1:25-30. Such systems “typically involved using two or more
`
`cameras.” Id., 1:26-30; see also EX1008, 4:41-42.
`
`25. Having only one camera decreased cost and took up less space, so it
`
`was viewed as “preferable.” EX1001, 1:30-32. In many single camera prior art
`
`systems, two or more 2D images from “different perspectives” were “used to convert
`
`the two-dimensional image data” to determine 3D location. EX1009, 3:37-49. These
`
`systems were not without problems. Some used laser triangulation and required
`
`“rigidly packaged” and “expensive specialized sensors.” EX1001, 1:32-35.
`
`“[S]ophisticated inter-tool calibration methods” were needed and the systems were
`
`often “susceptible to damage or misalignment when operating in industrial
`
`environments.” Id., 1:35-37.
`
`10
`
`ABB Inc. Exhibit 1003, Page 13 of 118
`ABB Inc. v. Roboticvisiontech, Inc.
` IPR2023-01426
`
`
`
`
`
`26. Known techniques to determine the “location in space of the target
`
`object using single or multiple cameras” included the use of “[t]arget points.”
`
`EX1001, 1:38-40. Some of those methods involved “computing the position of the
`
`object relative to a previous position, which requires knowledge of the 3D pose of
`
`the object at the starting point.” EX1001, 1:43-46. According to the ’237 Patent,
`
`such methods did not provide the accuracy and repeatability required for industrial
`
`applications. EX1001, 1:46-47.
`
`27. Certain robotic vision systems were able to determine the 3D pose of
`
`an object using a single camera mounted on a robot’s hand. Some systems used a
`
`look-and-move structure in an “open-loop fashion” where the camera would capture
`
`an image of an object, extract features from the image, determine the pose of the
`
`object using the extracted features and previous knowledge about the relationship
`
`between those features, and plan the robot’s motion based on that information.
`
`EX1004, pp.3, 151-54. Other systems used a visual servoing approach with a
`
`“closed” feedback loop where the above process repeated itself such that the robot’s
`
`position could be altered while the robot was in motion to increase task accuracy.
`
`EX1004, pp.3, 151-55.
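
For illustration only, the open-loop cycle described above can be sketched in a few lines of Python. The interfaces (camera.capture, robot.current_pose, robot.execute) and the helper functions (detect_features, estimate_pose, plan_motion) are hypothetical names of my own, not drawn from Corke or the ’237 Patent:

    # Minimal sketch of one open-loop "look-and-move" cycle: look once,
    # then move with no visual feedback during the motion.
    def look_and_move_cycle(camera, robot, detect_features, estimate_pose,
                            plan_motion):
        image = camera.capture()            # capture an image of the object
        features = detect_features(image)   # extract 2D image features
        pose = estimate_pose(features)      # pose from the features plus prior
                                            # knowledge of their geometry
        trajectory = plan_motion(robot.current_pose(), pose)
        robot.execute(trajectory)           # execute; await further instruction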
`
`28. Robotic machine vision is the ability of a computerized robot to see and
`
`interact with the 3D world around it. At the time the application leading to the ’237
`
`Patent was filed, this vision capability was typically enabled by one or more sensors
`
`11
`
`ABB Inc. Exhibit 1003, Page 14 of 118
`ABB Inc. v. Roboticvisiontech, Inc.
` IPR2023-01426
`
`
`
`
`
`connected to a computer to interpret the information and provide information back
`
`to the robot allowing it to maneuver. The most common type of sensor was a camera,
`
which could either be stationary or affixed to the robot’s arm and would capture
`
`images of an object of interest.
`
`29. Typically, machine vision involved the “extraction of a small number
`
`of generally numeric features from the image.” EX1004 at 123. These features
`
`would then be used by the machine vision system to gain further information. For
`
`instance, these features could be used along with “knowledge of the geometric
`
`relationship between feature points” on the object to determine the 3D pose of the
`
`object. EX1004 at 152.
`
`30.
`
` Many machine vision systems that existed as of the priority date had
`
`the ability to determine the 3D pose of an object using a single camera mounted on
`
`a moveable part of the robot. Some of these systems were of the “look and move”
`
`variety whereby the camera would capture an image of an object, extract features
`
`from the image, determine the pose of the object using previous knowledge about
`
`the relationship of those features to each other, plan the robot’s motion based on the
`
`pose information, and then await further instruction. EX1004 at 151-54.
`
`31. Broadly speaking, visual servoing “involves the use of one or more
`
`cameras and a computer vision system to control the position of the robot’s
`
end-effector relative to the workpiece.” EX1004, p.1. As of the earliest effective
`
`12
`
`ABB Inc. Exhibit 1003, Page 15 of 118
`ABB Inc. v. Roboticvisiontech, Inc.
` IPR2023-01426
`
`
`
`
`
`filing date, there were two classifications of visual servoing: position-based and
`
`image-based. In position-based visual servoing, a video camera captures images of
`
`a target that the system would process one at a time (at the camera’s frame rate) such
`
`that “features are extracted from the image and used in conjunction with a geometric
`
`model of the target to determine the pose of the target with respect to the camera.”
`
`EX1004, p.153 (the “camera frame rate” is “essentially the sample rate in a visual
`
`servo system”). The system sent the pose information to the robot to alter the robot’s
`
`motion, and the process would start again with the next image. EX1004, p.155.
`
`Image-based visual servoing was similar but lacked pose estimation. EX1004, p.155.
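
To make the position-based structure concrete, the following Python sketch, which is my own illustration and not Corke’s, repeats the estimate-and-correct cycle once per camera frame. cv2.solvePnP is a standard OpenCV routine for estimating pose from 2D-3D correspondences; camera, robot, and match_features are hypothetical interfaces:

    import cv2

    # Sketch of a position-based visual servoing loop: each new frame yields
    # a fresh pose estimate that is fed back while the robot is in motion.
    def servo_loop(camera, robot, match_features, model_points, K, dist):
        while not robot.at_goal():
            frame = camera.capture()              # one image per frame period
            image_points = match_features(frame)  # 2D feature locations
            # Pose of the target with respect to the camera, computed from
            # the features and a geometric model of the target.
            ok, rvec, tvec = cv2.solvePnP(model_points, image_points, K, dist)
            if ok:
                robot.correct_motion(rvec, tvec)  # closed-loop correction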
`
`32.
`
` The functionality of these two systems is shown in Figures 5.4 and 5.5
`
`of Corke:
`
`EX1004, p.155.
`
`13
`
`.
`
`ABB Inc. Exhibit 1003, Page 16 of 118
`ABB Inc. v. Roboticvisiontech, Inc.
` IPR2023-01426
`
`
`
`
`
`
`33. As of the priority date, many robotic vision systems required
`
`calibration. There were three main types of calibration: intrinsic, extrinsic, and
`
`hand-eye.
`
`34.
`
`Intrinsic calibration estimates the internal parameters of an image
`
`sensor and can be used to adjust for lens distortions and other imperfections that
`
`affect image quality and vision accuracy. EX1004, pp.139-46. Extrinsic calibration
`
`determines the position and orientation of the camera (specifically, the position and
`
`orientation of a 3D coordinate frame that is rigidly attached to the camera) relative
`
`to a reference 3D coordinate frame (e.g., the robot base frame, the Object Space, or
`
`a world coordinate frame). Together, this position and orientation information define
`
`the camera pose, which can be used to map, or “transform,” 3D points in the
`
`reference coordinate frame to two 3D points in the camera coordinate frame. Taken
`
`together, the intrinsic and extrinsic parameters define exactly the mathematical
`
`relationship between points in the robot’s workspace and their locations in the
`
`camera image. EX1004, pp.139-46. Hand-eye calibration is the process of
`
`determining the fixed transformation between the robot’s tool and the camera
`
`coordinate system, or the robot base and the world (Cartesian) coordinate system,
`
`and is typically required when the camera is mounted to the robot’s hand. EX1004,
`
`p.147. These transformations provide “positioning information” of the object,
`
`camera, and tool “directly in Cartesian or task space” (EX1004, p.3)—with the task
`
`14
`
`ABB Inc. Exhibit 1003, Page 17 of 118
`ABB Inc. v. Roboticvisiontech, Inc.
` IPR2023-01426
`
`
`
`or training space being a Cartesian coordinate frame “defined with respect to a point
`
`
`
`on the calibration template.” EX1001, 3:1-34.
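
For concreteness, these three calibration types correspond to standard routines in the OpenCV library. The sketch below is my illustration, assuming the template correspondences and robot poses have already been collected; the variable names are mine:

    import cv2

    def calibrate_all(obj_pts, img_pts, image_size,
                      R_tool2base, t_tool2base, R_template2cam, t_template2cam):
        # Intrinsic calibration: camera matrix K and lens distortion
        # coefficients from several views of a known calibration template.
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_pts, img_pts, image_size, None, None)
        # Extrinsic calibration: camera pose (rotation rvec, translation tvec)
        # relative to a reference frame defined on the template, from one view.
        ok, rvec, tvec = cv2.solvePnP(obj_pts[0], img_pts[0], K, dist)
        # Hand-eye calibration: the fixed camera-to-tool transform, recovered
        # from paired robot poses and camera poses at several stations.
        R_cam2tool, t_cam2tool = cv2.calibrateHandEye(
            R_tool2base, t_tool2base, R_template2cam, t_template2cam)
        return K, dist, (rvec, tvec), (R_cam2tool, t_cam2tool)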
`
IV. OVERVIEW OF THE ’237 PATENT
`
a. Specification of the ’237 Patent
`
`35. The ’237 Patent describes the use of machine vision for 3D pose
`
`estimation. The methods of the ’237 Patent include three steps: “a) calibration of the
`
`camera; b) teaching the features on the object; and c) finding the three dimensional
`
`pose of the object and using this information to guide the robot to approach the object
`
`to perform any operations (e.g. handling, cutting etc.).” EX1001, 2:60-67.
`
`36. Figure 1 of the ’237 Patent depicts a “vision-guided robot” 10 with a
`
`base 22 and manipulating arm 12 on which a camera 16 and tool 14—designed to
`
`manipulate a target object—are mounted. EX1001, 2:29, 2:53-59.
`
`15
`
`ABB Inc. Exhibit 1003, Page 18 of 118
`ABB Inc. v. Roboticvisiontech, Inc.
` IPR2023-01426
`
`
`
`
`
`
`
`37. The first step is to calibrate the camera. As was known in the art, the
`
`’237 Patent describes three types of calibrations: (1) intrinsic calibration, which
`
`involves finding the “camera intrinsic parameters” describing “how the camera
`
`forms an image,” including the focal length of the camera, a radial distortion
`
`coefficient, coordinates of the center of radial lens distortion, and a scale factor, (2)
`
`extrinsic calibration, which involves finding the camera’s position and orientation
`
`(i.e., “pose”) in the world coordinate frame by solving the transformation between
`
`the Camera Space—“a reference frame defined with respect to a point on, and
`
`therefore rigid to, the camera”—and the Training Space—a world coordinate
`
`“reference frame defined with respect to a point on the calibration template,” and (3)
`
`hand-eye calibration, which involves finding the position and orientation of the
`
`16
`
`ABB Inc. Exhibit 1003, Page 19 of 118
`ABB Inc. v. Roboticvisiontech, Inc.
` IPR2023-01426
`
`
`
`
`
`camera “relative to the tool of the robot.” EX1001, 3:36-38, 3:56-67, 4:1-10,
`
`5:51-65, 7:19-25, 7:48-49, 8:11-14, 8:30-39, 9:25-32, 9:44-65.
`
`38. The “first step” in calibration is to position the camera on the robot arm
`
`orthogonally to a calibration template “so the camera’s imaging plane is parallel to
`
`the template,” and “defining the ‘Training Space’ for the robot aligned with the
`
`template.” EX1001, 4:18-29. The calibration template is an object with a “series of
`
`fixed detectable features such as a grid of dots or squares.” EX1001, 4:22-25. Next,
`
`the camera is “moved to a plurality of stations,” the camera captures images at each
`
`station, and the camera intrinsic parameters (intrinsic calibration), the Camera
`
`Space-to-Training Space transformation (extrinsic calibration), and Camera
`
`Space-to-Tool Space
`
`transformation (hand-eye calibration) are determined.
`
`EX1001, 4:32-42; 4:57-5:11. The relevant coordinate spaces and frames are shown
`
in Figure 2 of the ’237 Patent.
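
A minimal sketch of this multi-station collection step follows; robot and camera are hypothetical interfaces, and cv2.findChessboardCorners stands in for detecting the template’s grid of dots or squares. The collected poses and features would then feed calibration routines like those sketched in Section III above:

    import cv2

    def collect_calibration_data(robot, camera, stations, pattern=(9, 6)):
        # Move the camera to each station, memorize the tool pose, capture
        # an image of the template, and extract the template's grid features.
        tool_poses, img_pts = [], []
        for station in stations:
            robot.move_to(station)                # reposition the camera
            tool_poses.append(robot.tool_pose())  # memorize the tool pose
            image = camera.capture()              # image of the template
            found, corners = cv2.findChessboardCorners(image, pattern)
            if found:
                img_pts.append(corners)           # detected template features
        return tool_poses, img_pts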
`
`17
`
`ABB Inc. Exhibit 1003, Page 20 of 118
`ABB Inc. v. Roboticvisiontech, Inc.
` IPR2023-01426
`
`
`
`
`
`
`39. The ’237 Patent also describes a process for “teaching” the features of
`
`the object, involving placing an object that will be manipulated by the robot in the
`
`“Training Space,” taking 2D images of the object, extracting features from the
`
`images, and computing “[r]eal world coordinates” for the selected features relative
`
`to the “Training Space.” EX1001, 5:12-45, FIG. 6.
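
One conventional way to compute such real-world coordinates from a single 2D image is to intersect each feature’s viewing ray with the template plane (Z = 0 in the Training Space). The sketch below is my illustration of that geometry, not the patent’s disclosed algorithm, and assumes the camera pose (R, t) from the extrinsic calibration:

    import numpy as np

    def feature_to_training_space(pixel, K, R, t):
        # Back-project an image feature onto the Z = 0 plane of the Training
        # Space, where R, t map Training Space points into Camera Space.
        ray_cam = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
        origin = -R.T @ t              # camera center in the Training Space
        direction = R.T @ ray_cam      # viewing ray in the Training Space
        s = -origin[2] / direction[2]  # scale at which the ray meets Z = 0
        return origin + s * direction  # 3D feature position on the template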
`
`40. The ’237 Patent discloses that the calibration and teaching steps “can
`
`be combined by using a self-calibration of robotic eye and hand-eye relationship
`
`with model identification as described in” Wei-I, which provides the “camera
`
`intrinsic parameters, hand-eye calibration and position of selected features in camera
`
`space.” EX1001, 8:30-39, 6:6-19. This method involves placing a part in front of the
`
`18
`
`ABB Inc. Exhibit 1003, Page 21 of 118
`ABB Inc. v. Roboticvisiontech, Inc.
` IPR2023-01426
`
`
`
`
`
`camera, selecting features from the part, “moving the robot to a set of stations”
`
`around the robot’s base, “memorizing the tool position relative to [the] base at each
`
`station,” acquiring an image of the part at each station, extracting features from each
`
`image, and calculating the position of those features “in a space that suits the
`
`application.” EX1001, 6:52-65. Using the method described in Wei-I, the 3D
`
`position of the selected features may be determined automatically, “without any
`
`prior knowledge about the part,” and the coordinates of those features in camera
`
`space “can be transposed in any other space that is related to it, such as training
`
`space.” EX1001, 6:6-13, 6:45-48.
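
Wei-I’s self-calibration with model identification is more involved than I can reproduce here, but the core multi-station idea, recovering feature positions without a prior model of the part, can be illustrated by two-view triangulation. cv2.triangulatePoints is a standard OpenCV routine; the per-station camera poses are assumed to come from the memorized tool positions and the hand-eye transform:

    import cv2
    import numpy as np

    def triangulate_features(K, R1, t1, R2, t2, pts1, pts2):
        # With the camera pose known at two stations, matched 2D features
        # (2xN arrays pts1, pts2) are triangulated into 3D positions.
        P1 = K @ np.hstack([R1, t1.reshape(3, 1)])  # projection, station 1
        P2 = K @ np.hstack([R2, t2.reshape(3, 1)])  # projection, station 2
        X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4xN homogeneous
        return X_h[:3] / X_h[3]  # camera-space coordinates, transposable to
                                 # any related space such as the training space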
`
`41. The independent claims of the ’237 Patent recite a method (Claim 1)
`
`and apparatus (Claims 20 and 25) for single image 3D vision guided robotics.
`
`EX1001, 1:13-15. The claims recite, for example, a method for pose estimation
`
`“with a single camera mounted to a movable portion of a robot” comprising the steps
`
`of capturing a 2D image of a target object, locating features in said image, and
`
`determining an object space to camera space transformation for the target object
`
`based on a position of some of the captured features from a single image. EX1001,
`
`11:54-67. The claims also recite determining intrinsic and extrinsic parameters of
`
`the camera from the images of the calibration object, positioning the camera
`
`orthogonally to a ruled calibration template, determining a camera space-to-tool
`
`space transformation, and training an operation path of the robot. EX1001, 12:1-13,
`
`19
`
`ABB Inc. Exhibit 1003, Page 22 of 118
`ABB Inc. v. Roboticvisiontech, Inc.
` IPR2023-01426
`
`
`
`
`
`12:17-21, 12:53-57, 13:18-21. The claims further recite an apparatus comprising a
`
`single camera capable of calibration and pose estimation as described in the method
`
`claims. EX1001, 13:51-14:65.
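
Viewed as rigid transforms, the claimed steps chain together: the single-image object-space-to-camera-space estimate, composed with the camera-space-to-tool-space (hand-eye) transform and the robot-reported tool-to-base pose, places the object in the robot’s base frame, where motion can be planned. The 4x4 homogeneous-matrix sketch below is my own illustration of that composition:

    import numpy as np

    def to_homogeneous(R, t):
        # Pack a rotation matrix and translation vector into a 4x4 transform.
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    def object_pose_in_base(T_obj_to_cam, T_cam_to_tool, T_tool_to_base):
        # Compose object->camera (from a single image), camera->tool
        # (hand-eye calibration), and tool->base (robot controller) to
        # obtain the object pose in the robot base frame.
        return T_tool_to_base @ T_cam_to_tool @ T_obj_to_cam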
`
b. The Relevant Claims of the ’237 Patent
`
`42. The ’237 Patent includes 28 claims, and claims 1, 20, and 25 are
`
`independent claims. See EX1001, 11:54-14:65. I have been asked to evaluate the
`
`patentability of claims 1–10 and 12–28 of the ’237 Patent. Those claims are
`
`reproduced in full below.
`
`43. Claim 1 recites:
`
1. A method useful in three-dimensional pose estimation for use
with a single camera mounted to a movable portion of a
robot, the method comprising:

capturing a two-dimensional image of a volume
containing a target object;

locating a number of features