A. A. Rizzi and D. E. Koditschek*

Center for Systems Science, Yale University
New Haven, CT 06520-1968

Abstract

In a continuing program of research in robotic control of intermittent dynamical tasks, we have constructed a three degree of freedom robot capable of "juggling" a ball falling freely in the earth's gravitational field. This work is a direct extension of that previously reported in [7, 3, 5, 4]. The present paper offers a comprehensive description of the new experimental apparatus and a brief account of the more general kinematic, dynamical, and computational understanding of the previous work that underlies the operation of this new machine.

`Introduction
`
`in our continuing research on dynamically dexterous robots we have recently completed the
`construction of a second generation juggling machine.Its forebear, a mechanically trivial system
`that used a single motor to rotate a bar parallel to a near-vertical frictionless plane was capable
`of juggling one or two pucks sensed by a grid of wires into a specified stable periodic motion
`through repeated batting (7, 3, 6]. In this second generation machine, a three degree of freedom
`direct drive arm (Figure 1) relies on a field rate stereo vision system to bat an artificially
`illuminated ping-pong ball into a specified periodic vertical motion. Despite the considerably
`greater kinematic, dynamical, and computational complexity of the new machine,its principle
`of operation represents a straightforward generalization of the ideas introduced in the previous
`planar study. Moreover, its empirical performance reveals strong robust stability properties
`similar to those predicted and empirically demonstrated in the original machine. The arm will
`successfully bring a wide diversity of initial conditions to the specified periodic veritical motion
`through repeated batting. Recovery from significant perturbations introduced by unmodeled
`external forces applied during the ball’s free flight is quite reliable. We typically log thousands
`and thousands of successive impacts before a random imperfection in the wooden paddle drives
`the ball out of the robot’s workspace.
`The work presented here represents thefirst application of the controllers developed in [3] to
`a multi-axis robot, and demonstrates the capabilities of the Biighler arm and the Cyclops vision
`system. Both of these systems have been developed at the Yale University Robotics Laboratory
`to facilitate our investigations into robot control of intermittent dynamical tasks. Thus, the
`present paper takes on the strong aspect of displaying the fruits of previous research. Weoffer
`a comprehensive description of the components of the new apparatus in Section 2, Section 3

Figure 1: The Yale Spatial Juggling System

briefly reviews the notion of a mirror algorithm, the sole repository of all "juggling intelligence" in our system, and displays its generalization to the present kinematics. Section 4 provides a system level "tour" describing the manner in which the machine's physical and computational architecture is coordinated to realize the desired task. The paper concludes with a brief outline of our near-term future research directions.

2 Juggling Apparatus

This section describes the constituent pieces of our juggling machine. The system, pictured in Figure 1, consists of three major components: an environment (the ball); the robot; and an environmental sensor (the vision system). We now describe in fairly specific terms the hardware underlying each component and propose a (necessarily simplified) mathematical model in each case that describes its properties in isolation.

2.1 Environment: Striking a Ball in Flight

The two properties of the ball relevant to juggling are its flight dynamics (behavior while away from the paddle) and its impact dynamics (how it interacts with the paddle/robot). For simplicity we have chosen to model the ball's flight dynamics as a point mass under the influence of gravity. This gives rise to the flight model

$\ddot b = \gamma,$    (1)

where $b \in \mathcal{B} = \mathbb{R}^3$, and $\gamma = (0, 0, -g)^T$ is the acceleration vector experienced by the ball due to gravity.
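The point-mass model integrates in closed form between impacts. As a small illustration (ours, not part of the original apparatus, and assuming g = 9.81 m/s²):

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity vector gamma, with assumed g = 9.81 m/s^2

def fly(b, bdot, t):
    """Closed-form flight of the point-mass model (1): (position, velocity) after t seconds."""
    return b + bdot * t + 0.5 * G * t ** 2, bdot + G * t
```

For example, a ball launched straight up at 9.81 m/s returns to its initial height, with its velocity reversed, after 2 seconds.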
Suppose a ball with trajectory $b(t)$ collides with the paddle in robot configuration $q \in Q$ at some point, $p$, on the paddle which has a linear velocity $v$. Letting $\mathcal{T} \triangleq \mathcal{B} \times Q$ denote the total configuration space of the problem, we seek a description of how the ball's phase, $(b, \dot b) \in T\mathcal{B}$, is changed by the robot's phase, $(q, \dot q) \in TQ$, at an impact.

As in [7, 6] we will assume that the components of the ball's velocity tangent to the paddle are unchanged by the impact, while the normal components are related by a coefficient of restitution law expressed as

$(\dot b'_n - v'_n) = -\alpha (\dot b_n - v_n),$    (2)

where $\dot b'_n$ and $v'_n$ denote the normal components of the ball and paddle velocities immediately after impact, while $\dot b_n$ and $v_n$ are the velocities prior to impact. Assuming that the paddle is much more massive than the ball (or that the robot has large torques at its disposal), we conclude that the velocity of the paddle will remain constant throughout the impact ($v' = v$). It follows that the coefficient of restitution law can now be re-written as

$\dot b'_n = \dot b_n + (1 + \alpha)(v_n - \dot b_n),$    (3)

and, hence,

$\dot b' = \dot b + (1 + \alpha)\, n n^T (v - \dot b),$    (4)

where $n$ denotes the unit normal vector to the paddle.
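A direct numerical rendering of (4) may clarify its content. The sketch below (our illustration, not the paper's code) reflects the ball's normal velocity component against the moving paddle, with the paddle velocity v, unit normal n, and restitution coefficient α supplied by the caller:

```python
import numpy as np

def impact(bdot, v, n, alpha):
    """Post-impact ball velocity per (4): the normal component is reflected
    against the moving paddle; tangential components pass through unchanged."""
    n = n / np.linalg.norm(n)                      # ensure a unit normal
    return bdot + (1 + alpha) * np.outer(n, n) @ (v - bdot)
```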
`
2.2 Robot Kinematics: An Almost Spherical Arm

At the heart of the juggling system resides a three degree of freedom robot, the Bühgler Arm¹, equipped at its end effector with a paddle. The revolute joints give rise to the familiar difficulties in computing and analyzing the robot's inverse kinematics. Moreover, as in our earlier work, the presence of revolute kinematics introduces a strongly nonlinear component to the "environmental control system", an abstract discrete dynamical system with respect to which we find it effective to encode the juggling task.
The robot kinematics relevant to the task of batting a ball relates the machine's configuration to the normal vector at a point on its paddle. In order to represent this formally we parametrize the paddle's surface geometry. Let $\bar p$ represent (in homogeneous coordinates) a planar transformation taking points in the unit box, $S \triangleq [0,1] \times [0,1]$, diffeomorphically onto the paddle's (finite) surface, expressed with respect to the gripper frame, $\mathcal{F}_g$. Associated with each point on the paddle's surface, $\bar p(s)$, is the unit normal, $\bar n(s)$, again the homogeneous coordinate representation of the vector with respect to $\mathcal{F}_g$. The paddle's "Gauss map" [15] is now parametrized as²

$N : S \to N(3) : s \mapsto [\bar n(s), \bar p(s)]; \qquad N(3) \triangleq S^2 \times \mathbb{R}^3.$    (5)
Denote by $H(q)$ the robot's forward kinematic map taking a configuration, $q \in Q$, to the homogeneous matrix representation of the gripper frame with respect to the base. The world frame representation of any paddle normal at a point is thus specified by the extended forward kinematic map,

$G : \bar Q \to N(3) : (q, s) \mapsto [n(q,s),\ p(q,s)] = H(q)\, N(s); \qquad \bar Q \triangleq Q \times S.$    (6)

At the cost of a little more notation, it will prove helpful to define the projections,

$\pi_Q(q, s) = q; \qquad \pi_S(q, s) = s.$

The linear velocity of the hit point due to the robot's motion may now be written explicitly as

$v = \sum_{i=1}^{\dim Q} \dot q_i\, D_{q_i} H(q)\, \bar p(s) = D_q p\, \dot q = Dp\, \Pi_Q\, \dot q; \qquad \Pi_Q \triangleq [D\pi_Q]^T = \begin{bmatrix} I_{\dim Q} \\ 0_{\dim S \times \dim Q} \end{bmatrix}.$    (7)
`
¹Pronounced byeog'-ler.
²The appearance of n in (4) suggests that it is really a force vector; thus we will define the possible normal

`
Additionally lying in the total configuration space is the contact submanifold, $C$ (the set of ball/robot configurations where the ball is in contact with the paddle), given by

$C \triangleq \{(b, q) \in \mathcal{T} : \exists\, s \in S,\ b = p(q, s)\},$

which is evidently the only place that the normal appearing in (4) becomes relevant. Since $p$ is one-to-one by assumption there is a map $s_c : C \to S$ such that

$b = p(q, s_c(b, q)).$    (8)

Combining (7) and (8) we may now rewrite the impact event (4) in terms of a "collision map," $c : TC \to T\mathcal{B}$, as

$\dot b' = \dot b + c(b, \dot b, q, \dot q); \qquad c(b, \dot b, q, \dot q) \triangleq -(1 + \alpha)\, n(q, s_c(b, q))\, n^T(q, s_c(b, q)) \left( \dot b - Dp\, \Pi_Q\, \dot q \right).$    (9)
Choosing a gripper frame, $\mathcal{F}_g$, for the Bühgler Arm depicted in Figure 1, located at the base of the paddle (the point of intersection of the second and third joints), whose x-axis is aligned with the paddle's normal and whose z-axis is directed along the paddle's major axis, we have

$N(s) = \begin{bmatrix} 1 & d_3 \\ 0 & s_1 \\ 0 & s_2 \\ 0 & 1 \end{bmatrix},$

and we will artificially set $s_1 = 0$, $s_2 = s \in [\underline{s}, \bar{s}]$ for reasons to be made clear below. The frame transformation, $H(q)$, is developed in [13], and yields a forward kinematic map of the form

$G(q, s) = [n(q, s),\ p(q, s)],$    (10)

$n(q, s) = \begin{bmatrix} \cos(q_1)\cos(q_2)\cos(q_3) - \sin(q_1)\sin(q_3) \\ \cos(q_2)\cos(q_3)\sin(q_1) + \cos(q_1)\sin(q_3) \\ -\cos(q_3)\sin(q_2) \end{bmatrix},$

$p(q, s) = \begin{bmatrix} -\sin(q_1)\, d_2 + \big(\cos(q_1)\cos(q_2)\cos(q_3) - \sin(q_1)\sin(q_3)\big)\, d_3 + \cos(q_1)\sin(q_2)\, s_2 \\ \cos(q_1)\, d_2 + \big(\cos(q_2)\cos(q_3)\sin(q_1) + \cos(q_1)\sin(q_3)\big)\, d_3 + \sin(q_1)\sin(q_2)\, s_2 \\ -\cos(q_3)\sin(q_2)\, d_3 + \cos(q_2)\, s_2 \end{bmatrix}.$
`
Analysis of the Jacobian of $p$ shows that it is rank three away from the surface defined by

$\psi_\eta(q, s) \triangleq \big(s_2 + \cos(q_2)\cos(q_3)\, d_3\big)\big(\sin^2(q_2) + \cos^2(q_3)\big) = 0;$

thus away from $\psi_\eta = 0$ we can define $Dp^\dagger$, the right inverse of $Dp$, and the workspace is now given by $\mathcal{W} \triangleq p(\bar Q - \mathcal{H})$, where

$\mathcal{H} \triangleq \{\bar q \in \bar Q : \psi_\eta(\bar q) = 0\}.$

Finally, the inverse kinematic image of a point $b \in \mathcal{W}$ may be readily computed as

$p^{-1}(b) = \begin{bmatrix} \mathrm{ArcTan2}(-b_y, -b_x) + \mathrm{ArcSin}\!\left(\dfrac{d_2}{\sqrt{b_x^2 + b_y^2}}\right) \\[6pt] -\dfrac{\pi}{2} + \mathrm{ArcTan2}\!\left(b_z,\ \sqrt{b_x^2 + b_y^2 - d_2^2}\right) - \mathrm{ArcSin}\!\left(\dfrac{\cos(q_3)\, d_3}{\sqrt{b^T b - d_2^2}}\right) \\[6pt] q_3 \end{bmatrix}, \qquad q_3 \in S^1,$    (11)
`
`
with the freely chosen parameter, $q_3$, describing the one dimensional set of robot configurations capable of reaching the point $b$. Having simplified the kinematics via the artificial joint constraint, $s_1 = 0$, the paddle contact map may simply be read off the inverse kinematics function,

$s_c(b, q) = \pi_S \circ p^{-1}(b) = \begin{bmatrix} 0 \\ \sqrt{b^T b - \sin^2(q_3)\, d_3^2 - \cos^2(q_3)\, d_2^2} \end{bmatrix}.$

`
`
`
`
`
2.3 Sensors: A Field Rate Stereo Vision System

Two RS-170 CCD television cameras with 1/2000 sec. electronic shutters constitute the "eyes" of the juggling system. In order to make this task tractable we have simplified the environment the vision system must interpret. The "world" as seen by the cameras contains only one white ball against a black background. The CYCLOPS vision system, described in Section 2.4, allows the straightforward integration of these cameras into the larger system.

Following Andersson's experience in real-time visual servoing [1] we employ the result of a first order moment computation applied to a small window of a threshold-sampled (that is, binary valued) image of each camera's output. Thresholding, of course, necessitates a visually structured environment, and we presently illuminate white ping-pong balls with halogen lamps while putting black matte cloth cowling on the robot and floor, and curtaining off any background scene.
2.3.1 Triangulation

In order to simplify the construction of a triangulator for this vision system, we have employed a simple projective camera model. Let $\mathcal{F}_c$ be a frame of reference whose origin is at the focal point and whose z-axis is directed toward the image plane of this camera. Let $\bar p = [p_x, p_y, p_z, 1]^T$ denote the homogeneous representation with respect to this frame of some spatial point. Then the camera, with focal length $f$, transforms this quantity as

$\bar u = P_\beta(\bar p) \triangleq \left[ f\,\dfrac{p_x}{p_z},\ f\,\dfrac{p_y}{p_z},\ f,\ 1 \right]^T.$    (12)

Here, $\bar u \in \mathbb{R}^4$ is the homogeneous vector, with respect to $\mathcal{F}_c$, joining the origin of $\mathcal{F}_c$ to the image plane coordinates of $\bar p$. Thus, for a camera whose position and orientation relative to the base frame, $\mathcal{F}_0$, are described by the homogeneous matrix ${}^cH_0$, the projection of a point, ${}^0\bar p$, is

$u = P_\beta({}^cH_0\, {}^0\bar p).$
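A sketch of this camera model (our illustration; the pose ᶜH₀ is taken to be a 4 × 4 homogeneous world-to-camera matrix, and the focal length value is an arbitrary assumption):

```python
import numpy as np

def project(p0, cH0, f=0.01):
    """Project a homogeneous world point p0 through a camera with pose cH0
    and focal length f; returns the image-plane coordinates (f x/z, f y/z)."""
    p = cH0 @ p0                           # express the point in the camera frame
    return np.array([f * p[0] / p[2], f * p[1] / p[2]])
```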
Given two such cameras separated in space, whose frames of reference with respect to $\mathcal{F}_0$ are represented by ${}^0H_r$ and ${}^0H_l$, it is straightforward to derive a triangulation function, $p^\dagger$, capable of reconstructing the spatial location of a point, given its projection in both images. In particular, if projection onto the right and left image planes is given by

${}^r u_r = P_{\beta_r}({}^rH_0\, {}^0\bar p), \qquad {}^l u_l = P_{\beta_l}({}^lH_0\, {}^0\bar p),$

respectively, a (by no means unique) triangulation function is given by the midpoint of the biperpendicular line segment joining the two lines defined by ${}^0\bar u_r$ and ${}^0\bar u_l$, where

${}^0\bar u_r = {}^0H_r\, {}^r\bar u_r, \qquad {}^0\bar u_l = {}^0H_l\, {}^l\bar u_l.$    (13)

Note that there is considerable freedom in the definition of $p^\dagger$, since it maps a four dimensional space (the two image plane vectors) onto a space of dimension three (ball position).
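The midpoint construction admits a compact least-squares rendering. In the sketch below (ours, with camera-to-world poses and back-projected homogeneous image vectors as assumed inputs) the two viewing rays are expressed in the base frame and the midpoint of their shortest connecting segment is returned:

```python
import numpy as np

def triangulate(H0r, H0l, ur, ul):
    """Midpoint triangulation: H0r, H0l are 4x4 camera-to-world poses; ur, ul are
    homogeneous image vectors in each camera frame. Returns the midpoint of the
    shortest (biperpendicular) segment joining the two viewing rays."""
    o_r, o_l = H0r[:3, 3], H0l[:3, 3]        # ray origins (camera centers)
    d_r = H0r[:3, :3] @ ur[:3]               # ray directions in the base frame
    d_l = H0l[:3, :3] @ ul[:3]
    # Solve for ray parameters minimizing ||(o_r + t_r d_r) - (o_l + t_l d_l)||
    A = np.stack([d_r, -d_l], axis=1)
    t_r, t_l = np.linalg.lstsq(A, o_l - o_r, rcond=None)[0]
    return 0.5 * ((o_r + t_r * d_r) + (o_l + t_l * d_l))
```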
Finally it is worth noting that although the implementation of a triangulation system of this type is simple, the measurement of the parameters required for its construction is quite difficult. A short description of the automatic method of calibration we have chosen to use in the juggling system can be found in Appendix A.

2.3.2 Signal Processing

In practice it is necessary to associate a signal processing system with the sensor to facilitate interpretation of the data. For the vision system in use here, sensor interpretation consists of estimating the ball's position and velocity, correcting for the latency of the vision system, and improving the data rate out of the sensor system: the 60 Hz of the vision system is far below the bandwidth of the robot control system.

Given reports of the ball's position from the triangulator it is straightforward to build a linear observer for the full state (positions and velocities), since the linear dynamical system defined by (1) is observable. In point of fact, it is not the ball's position, $b$, which is input to the observer, but the result of a series of computations applied to the cameras' image planes, and this "detail" comprises the chief source of difficulty in building cartesian sensors of this nature.
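A minimal discrete-time observer of this kind can be sketched as follows; the sample period is the system's 60 Hz field rate, while the gains and gravity value are illustrative assumptions rather than the tuned values used on the machine:

```python
import numpy as np

DT = 1.0 / 60.0                        # camera field rate (s)
G = np.array([0.0, 0.0, -9.81])        # assumed gravity vector (m/s^2)
L_P, L_V = 0.5, 5.0                    # assumed observer gains

def observer_step(b_hat, v_hat, b_meas):
    """One Luenberger-style update: predict ballistic flight over DT, then
    correct the position and velocity estimates with the triangulated position."""
    b_pred = b_hat + v_hat * DT + 0.5 * G * DT ** 2     # predict
    v_pred = v_hat + G * DT
    err = b_meas - b_pred                               # innovation
    return b_pred + L_P * err, v_pred + L_V * err       # correct
```

With these gains the discrete error dynamics are stable, so the estimates converge onto a ballistic trajectory within a few dozen fields.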
`
`2.4 Controller: A Flexibly Reconfigurable Computational Network
`
`All of the growing numberof experimental projects within the the Yale University Robotics
`Laboratory are controlled by widely various sized networks of Transputers produced by the
`INMOSdivision of SGS-Thomson. Pricing and availability of both hardware and software
`tools make this a natural choice as the building block for what we have come to think of as
`a computational “patch panel.” The recourse to parallel computation considerably boosts the
`processing power per unit cost that we can bring to bear on any laboratory application. At
`the same time the serial communication links have facilitated quick network development and
`modification.
`The choice of the INMOS product line represents a strategy which standardizes and places
`the burden of parallelism — inter-processor communications support, software, and development
`environment — around a commercial product, while customizing the computational “identity”
`of particular nodes by recourse to special purpose hardware. We provide here a brief sketch of
`the XP/DCS family of boards, a line of I/O and memory customized Transputer nodes devel-
`oped within the Yale Robotics Lab and presently employedin all our control experiments. The
`backbone of this system is the XP/DCS CPU, providing a transputer and bus extender. By
`coupling an XP/DCS to an 10/MOD a computational node can be customized for interfacing
`
`
`
`
`
`
The XP/DCS processor  The XP/DCS (produced by Evergreen Designs) was designed in conjunction with the Yale Robotics Laboratory in 1987 [9] in order to meet both the computational and I/O requirements presented by robotic tasks. The board is based on the INMOS T800 processor, a 32 bit scalar processor capable of 10 MIPS and 1.5 MFLOPS (sustained) with four bidirectional 20 MHz DMA driven communication links and 4 Kbytes of internal (1 cycle) RAM. The processor is augmented with an additional 1-4 Mbytes of dynamic RAM (3 cycle), and an I/O connector which presents the T800's bus to a daughter board.

IO/MOD  The IO/MOD (also produced by Evergreen Designs) allows an XP/DCS to "communicate" with custom hardware in a simple fashion. In order to properly implement the ideal "processing patch panel" it is essential that the integration of new sensors and actuators be simple and fast. The IO/MOD augments an XP/DCS by providing a 32 bit latched bidirectional data bus, six 4 bit wide digital output ports, and eight digital input signals, all of which are mapped directly into the memory space of the T800.

CYCLOPS Vision System  Much like the IO/MOD, the CYCLOPS system has been designed to augment a set of XP/DCS boards for a particular sensing task: vision. In actuality there are three major components to the vision system [8]:

Digitizer: Digitizes an incoming RS-170 video signal and outputs it in digital form over a pixel bus.

Filter: A filter board capable of performing real-time 2D convolution on an image may be placed on the pixel bus.

Frame Memory: In much the same fashion as the IO/MOD, the CYCLOPS Memory Board augments an XP/DCS with 128 Kbytes of video memory. By associating up to eight memory boards with a pixel bus it becomes easy to construct a real-time parallel processing vision system.
`
3 Juggling Algorithm

This section offers a brief review of how the juggling analysis and control methodology originally introduced for the planar system [7] may be extended in a straightforward manner to the present apparatus. After introducing the "environmental control system," an abstract dynamical system formed by composing the free flight and impact models, it becomes possible to encode an elementary dexterous task, the "vertical one-juggle," as an equilibrium state: a fixed point. A simple computation reveals that every achievable vertical one-juggle can be made a fixed point, and conversely, the only fixed points of the environmental control system are those that encode a vertical one-juggle. Leapfrogging the intermediate linearized analysis of our planar work [3], we then immediately follow with a description of a continuous robot reference trajectory generation strategy, the "mirror law," whose implementation gives rise to the juggling behavior.

3.1 Task Encoding

Denote by $\mathcal{V}$ the robot's choices of impact normal velocity for each workspace location. Suppose that the robot strikes the ball in state $w_i = (b_i, \dot b_i)$ at time $s_i$ with a normal velocity $v_i \in \mathcal{V}$, determined by the robot phase $(q, \dot q)$ at impact, and allows the ball to fly freely until time $s_i + t_i$. According to (9) derived in the previous section, composition with time of flight yields the "environmental control system,"

$f : T\mathcal{B} \times \mathcal{V} \times \mathbb{R} \to T\mathcal{B},$    (14)

that we will now be concerned with as a controlled system, with control inputs in $\mathcal{V} \times \mathbb{R}$ ($v_i$ and $t_i$).
Probably the simplest systematic behavior of this system imaginable (beyond the ball at rest on the paddle) is a periodic vertical motion of the ball. In particular, we want to be able to specify an arbitrary "apex" point and, from arbitrary initial conditions, force the ball to attain a periodic trajectory which passes through that apex point. This corresponds exactly to the choice of a fixed point, $w^*$, in (14), of the form

$w^* = \begin{bmatrix} b^* \\ \dot b^* \end{bmatrix}; \qquad b^* \in \mathcal{W}, \quad \dot b^* = \begin{bmatrix} 0 \\ 0 \\ v \end{bmatrix}, \quad v \in \mathbb{R}^+,$    (15)

denoting a ball state-at-impact occurring at a specified location, with a velocity which implies purely vertical motion and whose magnitude is sufficient to bring it to a pre-specified height during free flight. Denote this four degree of freedom set of vertical one-juggles by the symbol $\mathcal{J}$.
`
The question remains as to which tasks in $\mathcal{J}$ can be achieved by the robot's actions. In particular we wish to determine which elements of $\mathcal{J}$ can be made fixed points of (14). Analysis of the fixed point conditions imposes the following requirements on $w^*$:

$\dot b^* = [0,\ 0,\ v]^T, \qquad v \in \mathbb{R}^+,$    (16)

and for some $(q, \dot q) \in TQ$ and $\lambda \in \mathbb{R}^+$,

$p(q, s_c(b^*, q)) = b^* \quad \text{and} \quad c(b^*, \dot b^*, q, \dot q) = -\lambda \gamma.$    (17)

Every element of $\mathcal{J}$ satisfies (16), since this simply enforces that the task be a vertical one-juggle. For the Bühgler Arm, (17) necessitates that $n$ be aligned with $\gamma$ so as not to impart any horizontal velocity to the ball. From (10) it is clear that this will only be the case when $q \in Q^*$, where

$Q^* \triangleq \{q \in Q : \cos(q_3)\sin(q_2) = -1\}.$

Thus, we can conclude that only those elements of $\mathcal{J}$ satisfying the condition $b^* \in p(Q^*)$ will be fixable. In particular, $Q^*$ corresponds to the paddle being positioned parallel to the floor, and thus $p(Q^*)$ is an annulus above the floor, as is intuitively expected.

This simple analysis now furnishes the means of tuning the spatial locus and height of the desired vertical one-juggle. The fixed-point input satisfying these conditions, $u^*$, is given by

$u^* = \begin{bmatrix} \dfrac{1-\alpha}{1+\alpha}\, \|\dot b^*\| \\[6pt] \dfrac{2\,\|\dot b^*\|}{g} \end{bmatrix}.$
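For concreteness, the fixed-point input can be evaluated in closed form from the restitution law (3) (paddle horizontal, so only normal components matter) and symmetric ballistic flight. The sketch below is our illustration, with the gravity value and the apex-height parametrization of the impact speed assumed:

```python
import math

def one_juggle_input(apex_height, alpha, g=9.81):
    """Fixed-point input for the vertical one-juggle: the paddle normal speed
    that exactly reverses the ball's impact velocity, and the flight time
    between impacts, for a ball that must reach apex_height."""
    speed = math.sqrt(2 * g * apex_height)            # |b*| at impact
    paddle_speed = (1 - alpha) / (1 + alpha) * speed  # reverses the normal velocity
    flight_time = 2 * speed / g                       # up-and-down ballistic flight
    return paddle_speed, flight_time
```

With a perfectly elastic ball (α = 1) the required paddle speed is zero, as expected; the lossier the impact, the faster the paddle must move at contact.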
`
3.2 Controlling the Vertical One-Juggle via a Mirror Law

Say that the abstract feedback law for (14), $\varphi : \mathcal{W} \to \mathcal{V} \times \mathbb{R}$, is a vertical one-juggle strategy if it induces a closed loop system,

$f_\varphi(w) = f(w, \varphi(w)),$    (18)

whose fixed point $w^*$ is asymptotically stable, thereby accomplishing the vertical one-juggle task. A similar analysis has not yet been completed for the Bühgler Arm, although a similar result is expected. Experiments with the planar system revealed that the linearized perspective was inadequate: the domain of attraction resulting from locally stabilizing linear state feedback was smaller than the resolution of the robot's sensors [3].

Instead, in [7] a rather different juggling strategy was proposed that implicitly realized an effective discrete feedback policy, $\varphi$, by requiring the robot to track a distorted reflection of the ball's continuous trajectory. This policy, the "mirror law," may be represented as a map $m : T\mathcal{B} \to Q$, so that the robot's reference trajectory is determined by

$q_d(t) = m(w(t)).$

For a one degree of freedom environment it is not hard to show that this policy results in an (essentially) globally asymptotically stable fixed point [5]. For a two degree of freedom environment, we have shown that local asymptotic stability results [3]. The spatial analysis is in progress.
The juggling algorithm used in the present work is a direct extension of this "mirror law" to the spatial juggling problem. In particular, begin by using (11) to define the joint space position of the ball,

$\begin{bmatrix} \phi_b \\ \theta_b \\ s_b \end{bmatrix} \triangleq p^{-1}(b).$    (19)

We now seek to express formulaically a robot strategy that causes the paddle to respond to the motions of the ball in four ways:

(i) $q_{d1} = \phi_b$ causes the paddle to track under the ball at all times.

(ii) The paddle "mirrors" the vertical motion of the ball through the action of $\theta_b$ on $q_{d2}$, as expressed by the original planar mirror law [7].

(iii) Radial motion of the ball causes the paddle to raise and lower, resulting in the normal being adjusted to correct for radial deviation in the ball position.

(iv) Lateral motion of the ball causes the paddle to roll, again adjusting the normal so as to correct for lateral position errors.

To this end, define the ball's vertical energy and radial distance as

$\eta \triangleq \tfrac{1}{2}\dot b_z^2 + g\, b_z \quad \text{and} \quad \rho_b \triangleq \sin(\theta_b)\, s_b,$    (20)

respectively. The complete mirror law combines these two measures with a set point description ($\bar\eta$, $\bar\rho$, and $\bar\phi$) to form the function
`
$q_d = m(w) \triangleq \begin{bmatrix} \phi_b \\[4pt] \big(\kappa_0 + \kappa_1(\bar\eta - \eta)\big)\,\theta_b + \kappa_2(\rho_b - \bar\rho) \\[4pt] \kappa_3(\phi_b - \bar\phi) \end{bmatrix},$    (21)

where the first row implements behavior (i), the second combines the planar mirror (ii) with the radial correction (iii), and the third implements the lateral correction (iv); the gains $\kappa_i$ set the strength of each term.
`
Figure 2: One-juggle ball trajectory: (i) X-Y projection, (ii) X-Z projection, and (iii) Y-Z projection.
`
For implementation, the on-line reference trajectory formed by passing the ball's state trajectory, $w(t)$, through this transformation must be passed to the robot tracking controller. As described in Section 4.4, the high performance inverse dynamics tracking schemes that we presently employ require the availability of a target velocity and acceleration profile as well. By design $m(w)$ is differentiable and the time derivatives of $w$ are known, at least away from an impact event. Thus, denoting by

$F(w) \triangleq \begin{bmatrix} \dot b \\ \gamma \end{bmatrix}$

the spatial vector field corresponding to the ball's continuous free flight dynamics (1), we know that $q_d(t) = m(w(t))$ implies

$\dot q_d = Dm\, F, \qquad \ddot q_d = Dm\, DF\, F + [F \otimes I]^T D^2 m\, F.$

In practice, these terms are computed symbolically from (21) and $F$.
We have succeeded in implementing the one-juggle task as defined above on the Bühgler arm. The overall performance of the constituent pieces of the system (vision module, juggling algorithm, and robot controller) has been outstanding, allowing for performance that is gratifyingly similar to the impressive robustness and reliability of the planar juggling system. We typically record thousands of impacts (hours of juggling) before random system imperfections (electrical noise, paddle inconsistencies) result in failure. Figure 2 shows the three projections of the ball's trajectory for a typical run. As can be seen the system is capable of containing the ball within roughly 15 cm of the target position above the floor and 10 cm of the target height of 60 cm.

`
It is worth noting that in the x-z and y-z projections there is evidently spurious acceleration of the ball in both the x and y directions. Tracing this phenomenon through the system confirmed an earlier suspicion: our assumption that gravity is exactly aligned with the axis of rotation of the base motor is indeed erroneous. Correction of this calibration error requires the addition
`
`
`
4 The Yale Spatial Juggling System

This section describes how we have combined the components of Section 2 to produce a coordinated juggling robot system. An engineering block diagram for this system is depicted in Figure 3. Its implementation in a network of XP/DCS nodes is depicted in Figure 4. The juggling algorithm these diagrams realize is a straightforward application of contemporary robot tracking techniques to the mirror law presented in Section 3, as driven by the output of the vision system. Thus, there is no explicit pre-planning of robot motions. Instead, the ball's natural motion as perceived through the stereo vision system stimulates a "reflex" reaction in the robot that gives rise to intermittent collisions. In turn, these "geometrically programmed" collisions elicit the proper juggling behavior.

`
`
4.1 Vision: Environmental Sensing

The vision system must provide sufficient information regarding the state of the environment. In our present implementation we have so structured the visual environment as to make the task conceptually trivial and computationally tractable. In particular, the vision system need only extract the three dimensional position of a single white ball against a black background.

To perform even this apparently trivial task in real time we require two CYCLOPS vision systems (one for each "eye"), introducing a total of four nodes. In both CYCLOPS systems two memory boards, each with an associated XP/DCS processor, are attached to the digitizer. Computational limitations currently allow the system to process "windows" of less than 3000 pixels out of an image of 131,072 pixels (a 256 × 512 image).
Figure 5 depicts the flow of events on the five processors used for vision processing during an image cycle. The cycle begins with a field being delivered to one memory board associated with each camera (two of processors 20, 21, 30, 31). Note that these two events are guaranteed to occur simultaneously through the use of synchronized cameras. After the images have been deposited in the memory boards the centroid of the ball is estimated by calculating the first order moments over a window centered around the position of the ball, as determined by the most recent field (depicted by arrows in Figure 5). Upon completion, the image coordinate locations are passed to the neighboring pixel processors, for use in determining window location for the next field, and up to the triangulation process which is located on processor 00 of Figure 4. Once the low-level pixel processors have determined the ball location in a pair of images, the stereo triangulation introduced in Section 2.3 locates the position of the ball in space with respect to a fixed reference coordinate system.
`
`
`
Figure 5: Timing for CYCLOPS vision system (TRI = triangulation; OBS = linear observer)
`
4.2 Signal Processing

The signal processing block must "interpret" the environment and present it in a fashion that is acceptable for use by the remainder of the system. In this simple context, "interpretation" means producing good estimates of the ball's position and velocity at the current time. This is accomplished by connecting the output of the triangulator to a standard linear observer.

The timing diagram in Figure 5 shows that the vision block adds an unavoidable 1/30 sec. delay between the time an image is captured and the time a spatial position measurement has been formed. The ball's flight model presented in Section 2.1 is a sufficiently simple dynamical system that its future can be predicted with reasonable accuracy and, accordingly, a current
`
`
The data must now be passed to the interpolator. The task here involves stepping up the unacceptably slow data rate of the vision block (60 Hz): the time constant of the actuators is near 200 Hz. This interpolation stage uses the flight model of the ball; integrating the current state estimate of the ball forward over small time steps allows the data rate to be boosted from 60 Hz to 1 kHz.
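The interpolation stage amounts to repeatedly applying the flight model over millisecond steps. A sketch (our illustration, with the step count and gravity value assumed):

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])   # assumed gravity vector (m/s^2)

def interpolate(b, v, dt=1e-3, n=16):
    """Boost the estimator's output rate by integrating the ballistic flight
    model forward in dt steps; yields n intermediate (position, velocity) pairs."""
    out = []
    for _ in range(n):
        b = b + v * dt + 0.5 * G * dt ** 2
        v = v + G * dt
        out.append((b, v))
    return out
```

Because the per-step update is the exact solution of (1) over dt, the stepped trajectory coincides with the closed-form flight at every sample.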
This sequence of calculations is carried out on processor 00 (the coincidence with the triangulation process is incidental). The implementation of these signal processing functions is divided into two independent event driven processes. The first of these runs the observer and predictor, which are synchronized with the triangulation system and thereby with the cameras. Thus the sampling rate for the observer is set by the field rate of the cameras. The second process increases the effective data rate by scheduling itself at 1 msec intervals and updates its estimates of the ball state at each interval.

`
4.3 Juggling

The juggling algorithms developed in [7, 4] and Section 3 are implemented in this segment of the network. The evaluation of (21) is again carried out on processor 00, where both the high-level vision and signal processing are performed. The implementation consists of a single process which evaluates $q_d$, $\dot q_d$, and $\ddot q_d$ whenever new state information is received, yet another example of how we use naturally occurring events within the system to initiate computation. Since the input of this process is connected to the output of the interpolator, the reference trajectory fed to the controller will be updated at the same rate as the output of the interpolator (1 kHz).

`
4.4 Robot Control

The geometric transformation introduced in Section 3, when applied to the joint space coordinate representation of the ball's flight, results in a desired profile of joint locations over time, $q_d(t)$. For the planar juggling robot we have shown that if the robot were to track exactly this "reference signal," then collisions with the ball would occur in such a fashion that the desired periodic motion is asymptotically achieved [7, 3]. We conjecture the same will be true in the present setting. It now falls to the robot control block to ensure that the joint angles, $q(t)$, track the reference signal, $q_d(t)$.

We have implemented a large collection of feedback controllers on the Bühgler Arm, as reported in [17]. We find that as the juggling task becomes more complicated, e.g. simultaneously juggling two balls, it becomes necessary to move to a more capable controller. We have had good success with an inverse dynamics control law [17] of the form developed in [11],

$\tau = C(q, \dot q)\,\dot q + M(q)\left[\ddot q_d + K_v(\dot q_d - \dot q) + K_p(q_d - q)\right].$    (22)
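The structure of (22) can be sketched generically; here M and C are stand-in callables for the inertia and Coriolis models (the actual Bühgler model is developed in [13] and [17]), and the gain matrices are caller-supplied:

```python
import numpy as np

def inverse_dynamics_torque(q, qdot, qd, qd_dot, qd_ddot, M, C, Kp, Kv):
    """Computed-torque control in the form of (22): feed forward the
    velocity-dependent terms and drive the tracking error through a
    PD-shaped reference acceleration scaled by the inertia matrix."""
    a_ref = qd_ddot + Kv @ (qd_dot - qdot) + Kp @ (qd - q)
    return C(q, qdot) @ qdot + M(q) @ a_ref
```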
`
At the present time, all experiments have been run with a robot control block that includes three nodes (10, 11, and 12 in Figure 4). The model based portion of the control algorithm is implemented on processor 11 with an update rate of 400 Hz, while the feedback portion (along with uninteresting housekeeping and logging functions) is implemented on 10 with an update rate of 1 kHz, and a variety of message passing and buffering processes run on 12, which is really included in the network only for purposes of increasing the number of Transputer links converging on this most busy intersection of the entire network. There are two motivations for this seemingly lavish expenditure of hardware. First, in the interests of keeping the cross
`
`
`
`
`
of maintaining the "orthogonality" of the real-time and logging data flows, we gain a sufficient number of links at the controller to permit the dedicated assignment of logging channels back toward the user interface.

Nonblocking communication between this collection of processors is implemente