V(t), and shifts to step ST16. The speed command value vector V contains the translational speed (tx, ty, tz) and the rotational speed (ωx, ωy, ωz) of the robot end point position P, and is expressed as V = (tx, ty, tz, ωx, ωy, ωz)^T. The speed command value vector V(t+Δt) at time (t+Δt) is obtained from formula (4).
[0053]
[Equation 4]

V(t+Δt) = −λ·L⁺·E(t) + T̂0(t)   ... (4)
[0054]λ is a speed gain. The rightmost term of the right-hand side of formula (4) is an estimate of the motion velocity of the subject 11, and is obtained from formula (5).
[0055]
[Equation 5]

T̂0(t) = (E(t) − E(t−Δt)) / Δt − V(t)   ... (5)
[0056]The robot controller 5 computes the error vector E(t) and the speed command value vector V(t) as mentioned above.
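Formulas (4) and (5) can be tied together in a short Python sketch like the one below. It is not the patent's implementation: the gain lam, the matrix L_pinv (reading the garbled middle term of formula (4) as a pseudo-inverse of the image Jacobian), and the application of L_pinv to the error difference in formula (5) for dimensional consistency are all assumptions.

```python
import numpy as np

def estimate_object_velocity(E_t, E_prev, V_t, L_pinv, dt):
    # Formula (5): the subject's velocity is estimated from the rate of
    # change of the error vector minus the robot's own commanded velocity.
    # Applying L_pinv here (mapping feature-space error into task space)
    # is our assumption; the printed formula leaves the mapping implicit.
    return L_pinv @ ((E_t - E_prev) / dt) - V_t

def speed_command(E_t, E_prev, V_t, L_pinv, lam, dt):
    # Formula (4): the feedback term -lam * L_pinv @ E(t) drives the
    # feature error to zero; the feedforward term T0_hat tracks the
    # moving subject so the gripper does not lag behind it.
    T0_hat = estimate_object_velocity(E_t, E_prev, V_t, L_pinv, dt)
    return -lam * (L_pinv @ E_t) + T0_hat
```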
[0057]In step ST16, the robot controller 5 performs trajectory generation, computes joint angular velocities and angular accelerations based on the computed speed command value vector V(t), controls the robot arm 4 according to these, and returns to step ST13.
[0058]In step ST17, reached when E(t) is judged small enough in step ST14, the robot controller 5 judges that the gripper 3 has followed the subject 11 and starts the approach to the subject 11. In the following explanation, making the gripper 3 follow a subject by processing the flow chart shown in drawing 7 is called visual servoing.
[0059]Here, let the time at which processing shifted from step ST14 to step ST17 be time t1. As shown, for example, in drawing 10, the speed command value vector at that time is Vactual(t1) and the robot end point position is P(t1).
[0060]At time t1, since the robot end point position P is following the subject 11, the speed command value vector Vactual(t1) at that time is equal to the velocity vector of the subject 11. Therefore, the robot controller 5 generates a speed pattern in which a speed pattern for moving by P0 from the end point position P(t1) is superimposed on the above-mentioned speed command value vector Vactual(t1). P0 is the positional attitude transformation matrix obtained by the teaching work. A speed pattern for moving by P0 can be computed with conventional techniques, such as a trapezoidal speed pattern, for example.
[0061]Noting that it takes time T0 to carry out the relative displacement by P0, the robot controller 5 computes Pa according to the following formula.
[0062]
Pa = P(t1) + P0 + Vactual(t1)·T0

The robot controller 5 controls the robot arm 4 based on the computed Pa. Thereby, as shown in drawing 11, the robot hand (gripper 3) can be moved from the end point position P(t1) to the position Pa at which it should grasp the subject 11. After such an approach is completed, processing shifts to step ST18.
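The computation of [0062], together with one conventional choice of speed pattern from [0060], can be sketched as follows. The function names are ours, positions are treated loosely as addable vectors in the same spirit as the patent's notation, and v_max and a_max are assumed motion limits:

```python
def approach_target(P_t1, P0, V_actual_t1, T0):
    # Formula of paragraph [0062]: Pa = P(t1) + P0 + Vactual(t1) * T0.
    # The subject keeps moving during the T0 seconds the P0 displacement
    # takes, so its expected displacement Vactual(t1) * T0 is added.
    return P_t1 + P0 + V_actual_t1 * T0

def trapezoid_speed(distance, v_max, a_max, t):
    # One conventional speed pattern for moving by P0: accelerate at
    # a_max, cruise at v_max, decelerate; falls back to a triangular
    # profile when the distance is too short to reach v_max.
    t_acc = v_max / a_max
    if a_max * t_acc ** 2 > distance:            # triangular case
        t_acc = (distance / a_max) ** 0.5
        v_peak, t_cruise = a_max * t_acc, 0.0
    else:
        v_peak = v_max
        t_cruise = (distance - a_max * t_acc ** 2) / v_max
    if t < t_acc:
        return a_max * t
    if t < t_acc + t_cruise:
        return v_peak
    if t < 2 * t_acc + t_cruise:
        return v_peak - a_max * (t - t_acc - t_cruise)
    return 0.0
```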
[0063]In step ST18, the robot controller 5 grasps the subject 11 with the gripper 3, and ends processing.
[0064]As mentioned above, the robot device can memorize the robot end point positions
Ps and Pd at the time of teaching work, follow the subject 11 under conveyance by visual servoing, and perform grip operations on the subject 11 using the robot end point positions Ps and Pd. Thereby, the robot device can grasp the subject 11 under conveyance easily, without being constrained by the position and attitude of the subject 11 under conveyance and without stopping the conveyor 10.
[0065]Since the work of finding beforehand the physical relationships among the installed position, the CCD camera 1, the gripper 3, and the subject 11 is unnecessary, the above-mentioned robot device can grasp the subject 11 under conveyance with only the simple teaching work mentioned above, even in a case where, for example, the installation conditions of the CCD camera 1 change. Similarly, only the simple teaching work is needed even in a case where the subjects 11 to be grasped differ, or where the same work is to be realized with a different robot device, CCD camera 1, and gripper 3.
[0066]Thereby, the operator can substantially reduce the load of conventional teaching work and programming. Even if changes arise in the CCD camera 1, the gripper 3, or the subject 11, the device can respond flexibly.
[0067]In the embodiment mentioned above, grasping the subject 11 conveyed by the conveyor 10 was explained as an example, but the robot device can operate in the same way when grasping a stationary subject 11. The teaching work is the same as in the explanation mentioned above.
[0068]In grip operations, in step ST14 shown in drawing 7, when the error vector E(t) becomes small enough, the speed command value vector V(t) becomes zero, and the speed Vactual(t1) of the actual robot end point position likewise becomes zero.
`[0069]Therefore, in step STI 7, the robot controller 5 can grasp the subject 11 only using
`change part Po of the present robot end point position P (t1) and a position at the time of
`instruction. That is, position Pa which should be grasped is calculated by the following
`formulas.
`[0070]Pa=P(t1)+P0 and the robot controller 5 can control the robot arm 4 based on
`computed Pa, and can grasp the subject 11 stationary by the gripper 3.
[0071](2nd embodiment) Below, a 2nd embodiment of this invention is described. The same numerals are given to the same parts as in the 1st embodiment, and overlapping explanation is omitted.
[0072]The 1st embodiment explained a perfect following state, in which the relative physical relationship between a subject under conveyance and the robot end point position P becomes almost the same as the relative physical relationship at the time of teaching work. The 2nd embodiment, on the other hand, explains how a robot device grasps a subject when it is not in a perfect following state.
[0073]Here, a following state is defined as a state in which the relative positional relation is kept constant. For example, in a case where a subject is moving at speed V0, the robot end point position P is also moving at speed V0. Next, two following states are defined: a "perfect following state" and an "imperfect following state".
[0074]A state in which the relative physical relationship between the robot end point position P and a subject has remained the constant value P0 for a fixed time, i.e., the state of being the same as the relative physical relationship at the time of teaching work, is called a "perfect following state". A state in which the relative physical relationship between the robot end point position P and a subject has remained a constant value P'0 (≠ P0) for a fixed time is called an "imperfect following state". That is,
are used in order to follow the subject and grasp it. That is, the operator need only perform easy teaching work to make the robot device grasp the subject under movement.

TECHNICAL FIELD

[Field of the Invention]This invention relates to a robot device and a method for controlling the same, and especially to a robot device which grasps an object under movement, and to a method for controlling such a device.

PRIOR ART

[Description of the Prior Art]When working on a subject using a robot (for example, grasping an object), information including the position and posture of the subject is acquired by a sensor, and many systems which work based on that sensor information have been put into practical use. Such equipment configurations are roughly divided into the following two methods.
[0003]For example, as indicated in JP,8-63214,A, there are systems which detect the position and posture of a subject from the picture acquired by a fixed camera imaging the subject conveyed on a conveyor, and transmit that information to a robot. The robot performs work such as grasping based on the position information of the subject.
[0004]There are also systems in which a sensor, such as a camera, is attached to the hand of a robot, the target position and attitude are detected from the camera picture as mentioned above, and the robot follows the moving subject based on that information and performs work such as grasping.

EFFECT OF THE INVENTION

[Effect of the Invention]According to the robot device concerning this invention and the method for controlling the same, the operator need only perform the following two things as teaching work. First, the gripping mechanism is moved to the 1st end point position, and the subject is grasped there. Second, the subject is released from the gripping mechanism, and the gripping mechanism is moved to the 2nd end point position, within the imaging range. The 1st and 2nd end point positions and the coordinates of the characteristic quantities of the subject at the 2nd end point position, which are memorized at this time, are used in order to follow the subject and grasp it. That is, the operator need only perform easy teaching work to make the robot device grasp the subject under movement.

TECHNICAL PROBLEM

[Problem(s) to be Solved by the Invention]However, when realizing work such as grasping a subject using a camera installed apart from the robot, as in the former method, work
which determines the installed positions of the robot and the camera correctly beforehand is needed. It is also necessary to perform, beforehand, the camera calibration which computes the focal distance, lens distortion coefficients, etc. of the camera.
[0006]In addition, in the example of JP,8-63214,A and the like, the camera must generally be installed so that its optic axis is perpendicular to the flat surface of the conveyor and so that the x axis or the y axis of the screen is parallel to the transportation direction of the conveyor. Since the subject is carried by a belt conveyor or the like, in addition to the position and attitude information of the subject detected from the camera picture, another means (for example, an encoder on the conveyor) is needed to detect the speed information of the conveyor, etc., and the command value to the robot must be generated based on both kinds of information. The trajectory for the robot to grasp the subject must be generated from the installation position information of the camera and the robot, the position information of the subject, and the speed information of the conveyor. Therefore, there is a problem that the procedures for teaching work and trajectory generation become very complicated.
[0007]The same problem can be said of the latter example of the conventional technology, in which the camera is installed on the hand of the robot. That is, when a subject is grasped by a gripper or the like attached to the hand of a robot based on the position and attitude information of the subject detected from the camera picture, what is called hand-eye calibration is needed: the position and attitude information of the subject acquired from the camera must be converted into position and attitude information seen from the standard coordinates of the robot, and the physical relationship between the robot hand and the camera must be determined precisely beforehand. Especially when a sensor such as a camera is installed on a robot hand, the positional relation between the camera and the robot hand frequently changes due to troubles during work, etc., and the calibration work must be done again in such cases. When the subject is moving, as in the former example, the position and attitude information of the subject seen from the robot standard coordinates changes every moment, so the trajectory generation for grasping becomes very complicated.
[0008]Moreover, with both the former and the latter methods, even if neither the components (the installation conditions of the conveyor, the conveyance speed, or the installed positional relation between the camera and the robot) nor the shape of the subject to be grasped changes, the above-mentioned highly precise calibration and complicated teaching work must be performed whenever the robot or sensor to be used changes.
[0009]This invention is proposed in order to solve the problems mentioned above. Its purpose is to provide a robot device which can save the time and effort of such work and calibration and can easily perform predetermined work on a subject, and a method for controlling the same.

MEANS

[Means for Solving the Problem]The invention according to claim 1 comprises: a holding means which has a movably constituted gripping mechanism that grasps a subject; an imaging means which is fixed so as to move with the gripping mechanism of said holding means and which images a subject; a feature amount extracting means which extracts the coordinates of one or more characteristic quantities from the picture of the subject which said imaging means imaged; a memory means which memorizes, as the 1st end point position, the position of the gripping mechanism at the time of making said gripping mechanism grasp said subject, which memorizes, as the 2nd end point position, the position of said gripping mechanism when said gripping mechanism is moved, after said subject is released, within the limits in which said imaging means can image the subject, and which memorizes the coordinates of each characteristic quantity of said subject which said imaging means imaged at said 2nd end point position and which said feature amount extracting means extracted; a calculating means which computes a movement matrix for moving from said 2nd end point position to the 1st end point position; a follow-up control means which carries out control that makes said gripping mechanism follow the subject under movement by moving said holding means so that the coordinates of each characteristic quantity of the subject under movement extracted by said feature amount extracting means agree with the coordinates of each characteristic quantity memorized by said memory means; and a grasping control means which, when the subject is followed while said gripping mechanism is moving, brings said gripping mechanism close to the subject under movement based on the movement matrix which said calculating means computed, and controls said holding means to make it grasp said subject.
[0011]The robot device according to claim 1 grasps a subject which moves relative to the robot device concerned. Therefore, it is applicable not only when said robot device is fixed and said subject moves, but also when said robot device moves and said subject is fixed, and when said robot device and said subject are each moving at different speeds. Said subject may be in uniform linear motion, uniformly accelerated linear motion, uniform circular motion, etc. relative to said robot device, or may be standing still relative to it.
[0012]First, the operator needs to perform the following two things as teaching work. First, the gripping mechanism is moved to the 1st end point position, and the subject is grasped there. Second, the subject is released from the gripping mechanism, and the gripping mechanism is moved to the 2nd end point position, within the limits in which the imaging means can image the subject. Such teaching work is performed, and the memory means is made to memorize the 1st and 2nd end point positions. The memory means is also made to memorize the coordinates of one or more characteristic quantities extracted from the picture which the imaging means imaged at the 2nd end point position. The imaging means serves as the sensor of the gripping mechanism and is fixed so that it moves with the holding means. A movement matrix for moving from the 2nd end point position to the 1st end point position is computed.
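What the memory means ends up holding after these two teaching steps can be pictured with a small record type like the following sketch. The field names and the 4x4 homogeneous-matrix representation of the end point positions are illustrative assumptions, not the patent's data layout:

```python
from dataclasses import dataclass, field
from typing import List, Tuple
import numpy as np

@dataclass
class TeachingData:
    end_point_1: np.ndarray              # 1st end point position (grasp pose)
    end_point_2: np.ndarray              # 2nd end point position (viewing pose)
    features: List[Tuple[float, float]]  # taught feature coordinates (Xi, Yi)
    move: np.ndarray = field(init=False) # movement matrix, 2nd -> 1st

    def __post_init__(self):
        # The movement matrix is computed once from the two taught poses.
        self.move = np.linalg.inv(self.end_point_2) @ self.end_point_1
```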
[0013]If such teaching work is completed, grip operations which grasp a subject under movement can be performed. The gripping mechanism is moved so that the coordinates of each characteristic quantity of the subject under movement agree with the coordinates of each characteristic quantity memorized by said memory means. That is, what is called a visual servo is performed so that the subject is followed while the gripping mechanism is moving. The relative physical relationship between the subject and the gripping mechanism at this time is the same as the relative physical relationship between the subject and the gripping mechanism at the time of teaching work. When the subject is followed while said gripping mechanism is moving, then, based on the movement matrix that said calculating means
computed, said gripping mechanism is brought close to the subject under movement and is made to grasp said subject.
[0014]It is desirable for the gripping mechanism to follow the subject under movement thoroughly while the visual servo is being performed. However, the gripping mechanism may instead follow the subject under movement while separated from it by a prescribed distance. In such a case, it is preferred that the re-teaching work indicated in claim 2 is performed. Specifically, while said follow-up control means makes said gripping mechanism follow the subject under movement separated from it by the prescribed distance, the coordinates of each characteristic quantity of said subject which said feature amount extracting means extracted are newly memorized by said memory means. What is necessary is then just to memorize, as the new 2nd end point position, the position of said gripping mechanism when said follow-up control means makes said gripping mechanism follow the subject thoroughly from said 2nd end point position, using the coordinates of each newly memorized characteristic quantity.
[0015]The invention according to claim 3 is a control method for a robot device which grasps a subject with a movably constituted gripping mechanism. The position of the gripping mechanism at the time of making said gripping mechanism grasp said subject is memorized as the 1st end point position. The position of said gripping mechanism when said subject is released and said gripping mechanism is moved within the limits in which the subject can be imaged is memorized as the 2nd end point position. At said 2nd end point position, said subject is imaged, the coordinates of one or more characteristic quantities are extracted from the picture of said imaged subject and memorized, and a movement matrix for moving from said 2nd end point position to the 1st end point position is computed. Then, with the imaging means fixed so as to move with said gripping mechanism, the subject under movement is imaged, the coordinates of one or more characteristic quantities are extracted from the picture of said imaged subject under movement, and said gripping mechanism is moved so that the coordinates of each characteristic quantity of said extracted subject under movement agree with the memorized coordinates of each characteristic quantity, whereby said gripping mechanism is made to follow the subject under movement. When the subject is followed while said gripping mechanism is moving, said gripping mechanism is brought close to the subject under movement based on said computed movement matrix, and said gripping mechanism is made to grasp said subject.
[0016]This invention is a method for controlling the robot device indicated in claim 1, and the operator need only perform the two things mentioned above as teaching work. That is, only by performing two easy teaching operations, the operator can make the gripping mechanism follow a moving subject and grasp it.
[0017]It is desirable for the gripping mechanism to follow the subject under movement thoroughly while the visual servo is being performed. However, the gripping mechanism may instead follow the subject under movement while separated from it by a prescribed distance. In such a case, it is preferred that the re-teaching work indicated in claim 4 is performed. Specifically, while said gripping mechanism is made to follow the subject under movement separated from it by the prescribed distance, the subject under movement is imaged by the imaging means fixed so as to move with said gripping mechanism, and the coordinates of one or more characteristic quantities are extracted from the picture of said imaged subject under
movement. The coordinates of each characteristic quantity of said extracted subject are newly memorized, and what is necessary is just to memorize, as the new 2nd end point position, the position of said gripping mechanism when it is moved from said 2nd end point position so as to follow the subject thoroughly, using the coordinates of each newly memorized characteristic quantity.
[0018]
[Embodiment of the Invention]Hereafter, embodiments of the invention are explained in detail with reference to the drawings.
[0019](1st embodiment) As shown in drawing 1, the robot device concerning this embodiment is itself fixed and grasps the subject 11 conveyed by the conveyor 10. This invention is not limited to such an embodiment: it is applicable whenever the subject 11 moves relative to the robot device. That is, it may be a case where the robot device moves and the subject 11 stands still, or a case where the robot device and the subject 11 move at different speeds. The same applies to the 2nd and 3rd embodiments.
[0020]The robot device is provided with the following:
The CCD camera 1, which images the subject 11.
The sensor controller 2, which extracts characteristic quantities from the picture imaged by CCD camera 1.
The gripper 3, which grasps the subject 11.
The robot arm 4, which moves the gripper 3 in three dimensions.
The robot controller 5, which performs coordinate conversion, trajectory generation, etc., has a servo circuit, an amplifier, etc. which are not illustrated, and controls the operation of the gripper 3 and the robot arm 4.
[0021]CCD camera 1 is attached near the gripper 3, which is the tip of the robot arm 4. The robot controller 5 computes the error vector E(t) and the speed command value vector V(t) based on the image characteristic quantities Fi obtained by the sensor controller 2, and according to these values controls the movement of the robot arm 4 and the grasping of the subject 11 by the gripper 3. The conveyor 10 conveys the subject 11 at a fixed speed.
[0022]Before the robot device of such composition actually grasps the subject 11 being conveyed, teaching work for grasping the subject 11 is performed.
[0023](Teaching work) First, an operator makes the robot device grasp the subject 11 and teaches the robot end point position P. The position which should be grasped may be restricted by the shapes of the gripping mechanism, such as the gripper 3, and of the subject 11. The position which should be grasped may also be specified beforehand for the next process (for example, attaching other parts to the subject 11, or supplying it to a parts box).
[0024]Here, as shown for example in drawing 2, the subject 11 is formed as a heptagonal prism and is provided with the seven side faces 111-117. Six attachment holes (for example, tapped holes) for attaching parts which are not illustrated are formed in the upper surface 118 of the subject 11. No such attachment holes are formed in the undersurface 119 of the subject 11. As for the portions by which the subject 11 is grasped, in consideration of stability when grasped, the facing sides 111 and 115, or the sides 113 and 116, among the seven sides are preferred. Although the subject 11 of
the above-mentioned shape is used in this embodiment, this invention is not limited to this shape, and the subject 11 may have other shapes. Teaching work is specifically performed according to the processing of step ST1 to step ST6 shown in drawing 3.
[0025]In step ST1, an operator makes the gripper 3 grasp the sides 111 and 115, or the sides 113 and 116, of the subject 11 using a teaching pendant or the like which is not illustrated, and then processing shifts to step ST2.
[0026]In step ST2, as shown in drawing 4, with the gripper 3 in the state of grasping the subject 11, the robot controller 5 memorizes the position of the gripper 3 in robot standard coordinates, i.e., the robot end point position Pd, and shifts to step ST3. The "position" referred to here is expressed with parameters of six degrees of freedom, including the posture. In the following, such a six-degree-of-freedom parameter set including the posture is called a "position".
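As an illustration of such a six-degree-of-freedom "position", one common parameterization is three translations plus three posture angles; the roll/pitch/yaw convention below is our assumption, since the patent does not fix an angle convention:

```python
from dataclasses import dataclass

@dataclass
class Position6DOF:
    # A "position" in the sense of paragraph [0026]: translation of the
    # end point plus its posture, six parameters in total.
    x: float
    y: float
    z: float
    roll: float
    pitch: float
    yaw: float
```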
[0027]In step ST3, the robot controller 5 releases the subject 11 which the gripper 3 grasped, moves the gripper 3 to an arbitrary position, and shifts to step ST4.
[0028]The position to which the gripper 3 moves must be within the visual field range of CCD camera 1. That is, it must be a position where the characteristic quantities of the subject 11 can be extracted from the picture imaged by CCD camera 1. Specifically, it must be a position from which the characteristic quantities to be extracted are suitably obtained from the picture, excluding, for example, a position where the subject 11 does not fit within the view of CCD camera 1, or a position from which CCD camera 1 images the subject 11 from directly beside.
[0029]In step ST4, the robot controller 5 memorizes, as shown in drawing 5, the position Ps of the gripper 3 in the present robot standard coordinates, i.e., the robot end point position, and shifts to step ST5.
[0030]Here, if the positional attitude transformation matrix for changing the robot end point position Ps into Pd is written P0, the following formula holds.
[0031]Ps·P0 = Pd, therefore P0 = Ps⁻¹·Pd

The robot controller 5 computes the positional attitude transformation matrix P0 at this time and memorizes it. Alternatively, the robot controller 5 may compute the positional attitude transformation matrix P0 at the time of the grip operations mentioned later.
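Treating Ps and Pd as 4x4 homogeneous transformation matrices (an assumed representation, consistent with calling P0 a positional attitude transformation matrix), the computation of [0031] is a one-liner:

```python
import numpy as np

def relative_transform(Ps, Pd):
    # Solve Ps @ P0 = Pd for P0, i.e. P0 = inv(Ps) @ Pd.
    return np.linalg.inv(Ps) @ Pd
```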
[0032]In step ST5, CCD camera 1 images the subject 11 at the robot end point position Ps and supplies this picture to the sensor controller 2. The sensor controller 2 extracts image characteristic quantities based on the picture from CCD camera 1, and shifts to step ST6. Here, CCD camera 1 acquires a picture of the subject 11 as shown, for example, in drawing 6.
[0033]In step ST6, the six attachment holes machined into the upper surface 118 of the subject 11 are extracted as the image characteristic quantities Fi (i = 1-6), the coordinate values (Xi, Yi) of the center position of each image characteristic quantity Fi are memorized, and the sensor controller 2 ends the processing of the teaching work.
[0034]As a measurement method for the center position of an attachment hole in a picture, publicly known techniques such as binarization image processing and edge processing may be used. Although the center positions of the attachment holes were used as the image characteristic quantities Fi here, other image characteristic quantities may be used, such as line segment corner points, corners, and the area centers of gravity of isolated regions of a binarized picture. An operator may also define the image characteristic quantities Fi manually at the time of instruction by an input means, such as a mouse or keyboard, which is not illustrated.
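A minimal sketch of such a binarization-plus-centroid measurement is shown below, here with OpenCV. The Otsu threshold, the inversion (holes assumed darker than the surface), and taking the six largest blobs as the attachment holes are illustrative choices, not the patent's method:

```python
import cv2

def hole_centers(gray, expected=6):
    # Binarize the grayscale image, then take the area centroid of each
    # of the `expected` largest blobs as the coordinates (Xi, Yi).
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in sorted(contours, key=cv2.contourArea, reverse=True)[:expected]:
        m = cv2.moments(c)
        if m["m00"] > 0:
            centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centers
```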
[0035](Grip operations) After the teaching work is completed, the robot device grasps the subject 11 under conveyance using the coordinates of the image characteristic quantities Fi memorized during that work and the robot end point positions Pd and Ps. In these grip operations, the processing from step ST11 to step ST18 shown in drawing 7 is performed.
[0036]In step ST11, as shown in drawing 8, CCD camera 1 images the transportation area of the subject 11, i.e., the top of the conveyor 10, and processing shifts to step ST12.
[0037]In step ST12, the sensor controller 2 judges whether the subject 11 has been detected. In order to detect the subject 11 and extract characteristic quantities from the picture, the whole subject 11 must enter within the limits of the view of CCD camera 1. Here, the luminosity of the picture is used for detection of the subject 11. For example, while the subject 11 is upstream on the conveyor 10 and not yet in the visual field range of CCD camera 1, the luminosity of the picture acquired by CCD camera 1 hardly changes. On the other hand, if the subject 11 is conveyed by the conveyor and enters the visual field range of CCD camera 1, the luminosity changes. Then, the sensor controller 2 provides a window, which is not illustrated, at a suitable position in the picture: when the luminosity in the window is beyond a predetermined threshold, it judges that the subject 11 is in the visual field range, and when the luminosity is less than the predetermined threshold, it judges that nothing is within the limits of the view. When the sensor controller 2 detects the subject 11, it shifts to step ST13, and when it does not detect it, it returns to step ST11.
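The window test can be sketched as below. The window geometry and threshold value are assumptions; the patent only says that a window is placed at a suitable position and its luminosity compared against a predetermined threshold:

```python
def subject_detected(gray, window, threshold):
    # gray: 2-D image array (e.g. a NumPy array); window: (x, y, w, h).
    # Mean luminosity inside the window changes once the subject enters
    # the field of view, so compare it against the threshold.
    x, y, w, h = window
    patch = gray[y:y + h, x:x + w]
    return float(patch.mean()) >= threshold
```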
[0038]In step ST13, the sensor controller 2 extracts the image characteristic quantities fi(t) (i = 1, 2, ...) at time t from the picture which CCD camera 1 imaged, and shifts to step ST14. Here, drawing 9 shows the picture imaged at the time of execution of grip operations. The image characteristic quantities fi(t) must be extracted from the picture imaged at time t during grip operations. However, a characteristic quantity which should essentially be extracted may, for some cause, fail to come out, or a spurious image characteristic quantity, f3(t) for example, may be extracted. Then, for the characteristic quantities f1(t)-f7(t), it is necessary to perform a corresponding point search which determines which of F1-F6 shown in drawing 6 each corresponds to.
[0039]This corresponding point search can be performed with so-called matching methods, for which various techniques such as dynamic programming and the maximum clique method have been proposed, as indicated, for example, in "Robot Vision" (Masahiko Yachida, Shokodo, 1990). Once the features among f1-f7 that correspond to F1-F6 are obtained, the image characteristic quantities at subsequent times can be calculated easily.
[0040]Specifically, if the cycle time of the repeated processing from step ST13 to step ST16 is Δt, it suffices to extract each image characteristic quantity fi(t+Δt) at the next time (t+Δt) near the image characteristic quantity fi(t) in the previous picture. Thereby, the stability of characteristic quantity extraction can be increased and the burden of computation eased.
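One simple realization of this neighborhood search is a greedy nearest-neighbor match between the previous and current feature sets. The distance bound max_dist is an assumed tuning parameter; the patent leaves the matching method open and cites dynamic programming and maximum-clique matching for the general case:

```python
import numpy as np

def match_features(prev_pts, new_pts, max_dist):
    # prev_pts: (N, 2) features fi(t); new_pts: (M, 2) candidates at t+dt.
    # For each previous feature, pick the nearest candidate within
    # max_dist, or -1 when none is close enough (feature lost/spurious).
    prev = np.asarray(prev_pts, dtype=float)
    new = np.asarray(new_pts, dtype=float)
    matches = []
    for p in prev:
        d = np.linalg.norm(new - p, axis=1)
        j = int(np.argmin(d))
        matches.append(j if d[j] <= max_dist else -1)
    return matches
```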
[0041]In step ST14, the sensor controller 2 judges whether the error vector E(t) at time t is small enough, specifically whether the criterion ||E(t)|| < epsilon is met. When it is small enough, processing shifts to step ST17, and when it is not, processing shifts to step ST15.
[0042]Here, the error vector E(t) is obtained based on the image characteristic quantities fi(t) extracted from the present picture of CCD camera 1 and the image characteristic quantities Fi memorized in step ST6 mentioned above.
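In the simplest reading, E(t) stacks the differences between the taught coordinates Fi and the currently extracted fi(t), and the test of step ST14 is a norm check. The stacking order and the use of the Euclidean norm are assumptions:

```python
import numpy as np

def error_vector(F_taught, f_current):
    # E(t): taught feature coordinates Fi minus matched features fi(t),
    # flattened into a single vector.
    return (np.asarray(F_taught, float) - np.asarray(f_current, float)).ravel()

def small_enough(E_t, epsilon):
    # Step ST14: ||E(t)|| < epsilon.
    return np.linalg.norm(E_t) < epsilon
```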
[0043]In this invention, literature

This document is available on Docket Alarm but you must sign up to view it.


Or .

Accessing this document will incur an additional charge of $.

After purchase, you can access this document again without charge.

Accept $ Charge
throbber

Still Working On It

This document is taking longer than usual to download. This can happen if we need to contact the court directly to obtain the document and their servers are running slowly.

Give it another minute or two to complete, and then try the refresh button.

throbber

A few More Minutes ... Still Working

It can take up to 5 minutes for us to download a document if the court servers are running slowly.

Thank you for your continued patience.

This document could not be displayed.

We could not find this document within its docket. Please go back to the docket page and check the link. If that does not work, go back to the docket and refresh it to pull the newest information.

Your account does not support viewing this document.

You need a Paid Account to view this document. Click here to change your account type.

Your account does not support viewing this document.

Set your membership status to view this document.

With a Docket Alarm membership, you'll get a whole lot more, including:

  • Up-to-date information for this case.
  • Email alerts whenever there is an update.
  • Full text search for other cases.
  • Get email alerts whenever a new case matches your search.

Become a Member

One Moment Please

The filing “” is large (MB) and is being downloaded.

Please refresh this page in a few minutes to see if the filing has been downloaded. The filing will also be emailed to you when the download completes.

Your document is on its way!

If you do not receive the document in five minutes, contact support at support@docketalarm.com.

Sealed Document

We are unable to display this document, it may be under a court ordered seal.

If you have proper credentials to access the file, you may proceed directly to the court's system using your government issued username and password.


Access Government Site

We are redirecting you
to a mobile optimized page.





Document Unreadable or Corrupt

Refresh this Document
Go to the Docket

We are unable to display this document.

Refresh this Document
Go to the Docket