Chapter 13 - Knowing Where You Are

Introduction

After our first few months of experimenting with robotics using the MINDSTORMS kit, we began to wonder if there was a simple way to make our robot know where it was and where it was going. In other words, we wanted to create some kind of navigation system able to establish its position and direction. We started reading books and searching the Internet, and discovered that this is still one of the most demanding tasks in robotics and that there really isn't any single or simple solution.

In this chapter, we will introduce you to the concepts of navigation, which can get very complex. We will start by describing how positioning methods can be categorized into two general classes: absolute and relative positioning, the first based on external reference points, and the latter on internal measurements. Then we will provide some examples for both categories, showing solutions and tricks that suit the possibilities of the MINDSTORMS system. In discussing absolute positioning, we will introduce you to navigation on pads equipped with grids or gradients, and to the use of laser beams to locate your robot in a room. As for relative positioning, we will explain how to equip your robot for the proper measurements, and will provide the math to convert those measurements into coordinates.
Choosing Internal or External Guidance

As we mentioned, there is no single method for determining the position and orientation of a robot, but you can combine several different techniques to get useful and reliable results. All these techniques can be classified into two general categories: absolute and relative positioning methods. This classification refers to whether the robot looks to the surrounding environment for tracking progress, or just to its own course of movement.

Absolute positioning refers to the robot using some external reference point to figure out its own position. These can be landmarks in the environment, either natural landmarks recognized through some kind of artificial vision, or more often, artificial landmarks easily identified by your robot (such as colored tape on the floor). Another common approach includes using radio (or light) beacons as landmarks, like the systems used by planes and ships to find the route under any weather condition. Absolute positioning requires a lot of effort: You need a prepared environment, or some special equipment, or both.

Relative positioning, on the other hand, doesn't require the robot to know anything about the environment. It deduces its position from its previous (known)
position and the movements it made since the last known position. This is usually achieved through the use of encoders that precisely monitor the turns of the wheels, but there are also inertial systems that measure changes in speed and direction. This method is also called dead reckoning (short for deduced reckoning).

Relative positioning is quite simple to implement, and applies to our LEGO robots, too. Unfortunately, it has an intrinsic, unavoidable problem that makes it impossible to use by itself: it accumulates errors. Even if you put all possible care into calibrating your system, there will always be some very small difference due to slippage, load, or tire deformation that will introduce errors into your measurements. These errors accumulate very quickly, thus relegating the utility of relative positioning to very short movements. Imagine you have to measure the length of a table using a very short ruler: You have to put it down several times, every time starting from the point where you ended the previous measurement. Every placement of the ruler introduces a small error, and the final result is usually very different from the real length of the table.

The solution employed by ships and planes, which use beacons like Loran or Global Positioning System (GPS) systems, and more recently by the automotive industry, is to combine methods from the two groups: to use dead reckoning to continuously monitor movements and, from time to time, some kind of absolute positioning to zero the accumulated error and restart computations from a known location. This is essentially what human beings do: When you walk down a street while talking to a friend, you don't look around continuously to find reference points and evaluate your position; instead, you walk a few steps looking at your friend, then glance back at the street for an instant to get your bearings and make sure you haven't veered off course, then you look back to your friend again. You're even able to safely move a few steps in a room with your eyes shut, because you can deduce your position from your last known one. But if you walk more than a few steps without touching any familiar object, you will soon lose your orientation.

In the rest of the chapter, we will explore some methods for implementing absolute and relative positioning in LEGO robots. It's up to you to decide whether or not to use any one of them, or a combination, in your applications. Either way, you will discover that this undertaking is quite a challenge!
Looking for Landmarks: Absolute Positioning

The most convenient way to place artificial landmarks is to put them flat on the floor, since they won't obstruct the mobility of your robot and it can read them with a light sensor without any strong interference from ambient light. You can stick some self-adhesive tape directly on the floor of your room, or use a sheet of cardboard or other material over which you make your robot navigate.

Line following, which we have talked a lot about, is probably the simplest example of navigation based on using an artificial landmark. In the case of line following, your robot knows nothing about where it is, because its knowledge is based solely on whether it is to the right or left of the line. But lines are indeed an effective system to steer a robot from one place to another. Feel free to experiment with line following; for example, create some interruptions in a straight line and see if you are able to program your robot to find the line again after the break. It isn't easy. When the line ends, a simple line follower would turn around and go back to the other side of the line. You have to make your software more sophisticated to detect the sudden change and, instead of applying a standard route correction, start a new searching algorithm that drives the robot toward a piece of line further on. Your robot will have to go forward for a specific distance (or time) corresponding to the approximate length of the break, then turn left and right a bit to find the line again and resume standard navigation.
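The recovery logic just described can be sketched as a small state machine. Here is a Python sketch (run off the robot, not on the RCX): `samples` stands in for successive light-sensor readings (True meaning the line is seen), and the left-and-right sweep is omitted for brevity. The names and values are ours, not the book's.

```python
# Sketch of break recovery: follow the line while it is visible; when
# it disappears, drive blindly for roughly the length of the gap
# instead of applying the usual route correction.

def follow_with_breaks(samples, gap_steps):
    """Return the action chosen for each sensor sample."""
    actions = []
    blind_left = 0                    # steps left in the blind crossing
    for on_line in samples:
        if blind_left > 0:
            blind_left -= 1
            actions.append("cross")   # keep going straight over the gap
        elif on_line:
            actions.append("follow")  # normal line following
        else:
            blind_left = gap_steps - 1
            actions.append("cross")   # line lost: start crossing the break
    return actions

# Line, a three-step break, then line again:
print(follow_with_breaks([True, True, False, False, False, True], 3))
# → ['follow', 'follow', 'cross', 'cross', 'cross', 'follow']
```

A real robot would follow the blind run with the left-and-right sweep before resuming normal line following.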
When you're done and satisfied with the result, you can make the task even more challenging. Place a second line parallel to the first, with the same interruptions, and see if you can program the robot to turn 90 degrees, intercept the second line, and follow that one. If you succeed in the task, you're ready to navigate a grid of short segments, either following along the lines or crossing over them like a bar code.
You can improve your robot's navigation capabilities, and reduce the complexity of the software, using more elaborate markers. As we explained in Chapter 4, the LEGO light sensor is not very good at distinguishing different colors, but it is able to distinguish between differences in the intensity of the reflected light. You can play with black and gray tapes on a white pad, and use their shade as a source of information for the robot. Remember that a reading at the border between black and white can return the same value as another taken on plain gray. Move and turn your robot a bit to decode the situation properly, or employ more than a single light sensor if you have them.
Instead of placing marks on the pad, you can also print it with a special black and white gradient. For example, you can print a series of dots with an intensity proportional to their distance from a given point a. The closer to a, the darker the point; a itself is plain black (see Figure 13.1). On such a pad, your robot will be able to return to a from any point, by simply following the route until it reads the minimum intensity.
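This homing behavior amounts to greedy descent on the gradient. As a minimal Python sketch, with a hypothetical `intensity` function standing in for a light-sensor reading on the pad (the attractor sits at cell (0, 0) by assumption):

```python
# Greedy homing on a gradient pad: intensity grows with distance
# from the attractor at (0, 0); the robot always steps to the
# darkest neighboring cell and stops at the darkest spot.

def intensity(x, y):
    """Stand-in for a light-sensor reading on the gradient pad."""
    return (x * x + y * y) ** 0.5  # darker (lower) closer to (0, 0)

def home(x, y):
    """Return the cell where greedy descent stops, plus the steps taken."""
    steps = 0
    while True:
        here = intensity(x, y)
        # Probe the four neighboring cells, as a robot might by
        # nudging forward/back/left/right and re-reading the sensor.
        neighbors = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        best = min(neighbors, key=lambda p: intensity(*p))
        if intensity(*best) >= here:
            return (x, y), steps  # no darker neighbor: attractor reached
        x, y = best
        steps += 1

print(home(5, -3))  # → ((0, 0), 8)
```

On a real pad the readings are noisy, so a robot would average several samples per probe rather than trust a single one.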

Figure 13.1 A Gradient Pad with a Single Attractor
The same approach can be used with two points a and b, one being white and the other black. Searching for the whitest route, the robot arrives at a, while following the darkest it goes to b (Figure 13.2). We first saw this trick applied during the 1999 Mindfest gathering at the Massachusetts Institute of Technology (MIT): two robots were playing soccer, searching for a special infrared (IR) emitting ball. When one got the ball, it used the pad to find the proper white or black goal. Months later, we successfully replicated this setup with Marco Berti during a demonstration at an exhibition in Italy.

Figure 13.2 A Gradient Pad with Two Attractors
There are other possibilities. People have suggested using bar codes on the floor: When the robot finds one, it aligns and reads it, decoding its position from the value. Others have tried complex grids made out of stripes of different colors. Unfortunately, there isn't a simple solution valid for all cases, and you will very likely be forced to use some dead reckoning after a landmark to improve the search.
Following the Beam

In the real world, most positioning systems rely on beacons of some kind, typically radio beacons. By using at least three beacons, you can determine your position on a two-dimensional plane, and with four or more beacons you can compute your position in three-dimensional space. Generally speaking, there are two kinds of information a beacon can supply to a vehicle: its distance and its heading (direction of travel). Distances are computed using the amount of time that a radio pulse takes to go from the source to the receiver: the longer the delay, the larger the distance. This is the technique used in the Loran and GPS systems. Figure 13.3 shows why two stations are not enough to determine position: There are always two locations A and B that have the same distance from the two sources.
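You can see the ambiguity numerically with a short Python sketch: intersecting the two distance circles always yields two candidate positions. The station coordinates and distances here are made-up sample values, not anything from the text.

```python
import math

# Two stations at known positions, and the robot's measured distance
# to each: the two circles generally intersect in TWO points, which
# is why two distance beacons cannot fix a position unambiguously.

def circle_intersections(p1, r1, p2, r2):
    """Intersection points of two circles (assumes they do intersect)."""
    d = math.dist(p1, p2)
    # Distance from p1 to the foot of the chord joining the solutions:
    a = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)
    h = math.sqrt(r1 ** 2 - a ** 2)  # half-length of that chord
    ex, ey = (p2[0] - p1[0]) / d, (p2[1] - p1[1]) / d
    mx, my = p1[0] + a * ex, p1[1] + a * ey
    return (mx + h * -ey, my + h * ex), (mx - h * -ey, my - h * ex)

# Stations 10 units apart; the robot is 6 from one and 8 from the other.
A, B = circle_intersections((0, 0), 6.0, (10, 0), 8.0)
print(A, B)  # two mirror-image candidates, approximately (3.6, ±4.8)
```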

Figure 13.3 There are Two Locations with the Same Distance from Two Stations
Adding a third station, the system can determine the position without ambiguity, provided that the third station does not lie along the line that connects the previous two stations (see Figure 13.4).
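A few lines of Python show how the third station resolves the ambiguity: of the two mirror-image candidates produced by the first pair of stations, only one also matches the measured distance to station 3. The coordinates and candidate points below are made-up sample values.

```python
import math

# A third, non-aligned station removes the ambiguity: only one of
# the two candidates also matches the measured distance to it.

def pick_position(candidates, station3, r3, tol=1e-6):
    """Keep the candidate whose distance to station 3 matches r3."""
    for p in candidates:
        if math.isclose(math.dist(p, station3), r3, abs_tol=tol):
            return p
    return None

# Mirror-image candidates from two stations (across their baseline):
candidates = [(3.6, 4.8), (3.6, -4.8)]
# Station 3 off the baseline; the true position is (3.6, 4.8):
r3 = math.dist((3.6, 4.8), (5, 12))
print(pick_position(candidates, (5, 12), r3))  # → (3.6, 4.8)
```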
The stations of the VHF Omnidirectional Range (VOR) system cannot tell you the distance from the source of the beacon, but they do tell you their heading, that is, the direction of the route you should follow to reach each station. Provided that you also know your heading to the North, two VOR stations are enough to locate your vehicle in most cases. Three of them are required to cover the case where the vehicle is positioned along the line that connects the stations, and as for the Loran and GPS systems, it's essential that the third station does not lie along that line (see Figure 13.5).
Figure 13.4 Three Nonaligned Stations Allow for Positioning with No Ambiguities Using Distances
Figure 13.5 VOR-Like Systems Allow Positioning through the Headings of the Stations
Using three stations, you can do without a compass; that is, you don't need to know your heading to the North. The method requires that you know only the angles between the stations as you see them from your position (see Figure 13.6).

To understand how the method works, you can perform a simple experiment. Take a sheet of paper and mark three points on it that correspond to the three stations. Now take a sheet of transparent material and put it over the previous sheet. Spot a point anywhere on it to represent your vehicle, and draw three lines from it to the stations, extending the lines beyond the stations themselves. Mark the lines with the names of the corresponding stations. Now, move the transparent sheet and try to intersect the lines again with the stations. There's an unlimited number of positions which connect two of the three stations, but there's only one location which connects all three of them.
Figure 13.6 Three Nonaligned Stations Allow for Positioning with No Ambiguities Using Angles
If you want to give this approach a try, the first problem you have to solve is that there's currently nothing in the LEGO world that can emit beacons of any kind, so you have to look for some alternative device. Unless you're an electrical engineer and are able to design and build your own custom radio system, you had better stick with something simple and easy to find. The source need not necessarily be based on radio waves; light is effective as well, and we already have such a detector (the light sensor) ready to interface to the RCX.

By using light sources as small lighthouses, you can, in theory, make your robot find its way. But there are a few difficulties to overcome first:

■ The light sensor isn't directional; you must shield it somehow to narrow its angle.

■ Ambient light introduces interference, so you must operate in an almost lightless room.

■ For the robot to be able to tell the difference between the beacons, you must customize each one; for example, making them blink at different rates (as real lighthouses do).

Laser light is probably a better choice. It travels with minimum diffusion, so when it hits the light sensor, it is read at almost 100 percent. Laser sources are very common and very cheap. You can find small battery-powered pen laser pointers for just a few dollars.
WARNING

Laser light, even at low levels, is very damaging to eyes. Never direct it towards people or animals.

If you have chosen laser as a source of light, you don't need to worry about ambient light interference. But how exactly would you use a laser? Maybe by making some rotating laser lighthouses? Too complex. Let's see what happens if we invert the problem and put the laser source on the robot. Now you need just one laser, and can rotate it to hit the different stations. So, the next hurdle is to figure out how you know when you have hit one of those stations. If you place an RCX with a light sensor in every station, you can make it send back a message when it gets hit by the laser beam, and by using different messages for every station, make your robot capable of distinguishing one from another.
When we actually tried this, we discovered that the light sensor is a very small target to hit with a laser beam, and as a result, realized we had set an almost impossible goal. To stick with the concept but make things easier, we found you could build a sort of diffuser in front of the sensor to give it a wider detection area; Jonathan suggested a diffuser made of paper, which worked very well.
Now you have a working solution, but it's a pity you need so many RCXs: at least three for the stations and one for your robot. Isn't there a cheaper option? A different approach involves employing the simple plastic reflectors used on cars, bikes, and as cat's-eyes on the side of the road. They have the property of reflecting any incoming light precisely back in the direction from which it came. Using those as passive stations, when your robot hits them with its laser beam they reflect it back to the robot, where you have placed a light sensor to detect it.

This really seems the perfect solution, but it actually still has its weak spots. First, you have lost the ability to distinguish one station from another. You also have to rely on dead reckoning to estimate the heading of each station. We explained that dead reckoning is not very accurate and tends to accumulate errors, but it can indeed provide you with a good approximation of the expected heading of each station, enough to allow you to distinguish between them. After having received the actual readings, you will adjust the estimated heading to the measured one. The second flaw in the solution is that most of the returning beam tends to go straight back to the laser source. You must be able to very closely align the light sensor to the laser source to intercept the return beam, and even with that precaution, detecting the returning beam is not very easy.
To add to these difficulties, there is some math involved in deducing the position of the robot from the beacons, and it's the kind of math whose very name sends shivers down most students' spines: trigonometry! (Don't worry, you can find formulas in some of the resources referenced in Appendix A.) This leads to another problem: The standard firmware has no support for trig functions, and though in theory you could implement them in the language Not Quite C (NQC) using some tables and interpolation, the LEGO firmware does not give you enough variables to get useful results. If you want to proceed with using beacons, you really have to switch to leJOS or legOS, which both provide much more computational power.

If you're not in the mood to face the complexity of trigonometry and alternative firmware, you can experiment with simpler projects that still involve laser pointers and reflectors. For example, you can make a robot capable of "going home." Place the reflector at the home base of the robot, that is, the location where you want it to return. Program the robot to turn in place with the laser active, until the light beam intercepts the reflector and the robot receives the light back, then go straight in that direction, checking the laser target from time to time to adjust the heading.
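The scan-then-go part of this behavior can be sketched in a few lines of Python. Here `laser_sees_reflector` is a stand-in predicate, simulated geometrically; on a real robot it would be a light-sensor reading taken while the laser is on. The bearings and step size are arbitrary sample values.

```python
# "Going home" search: rotate in place in small steps until the
# laser return is detected, then that heading points at home base.

def laser_sees_reflector(heading, home_bearing, half_width):
    """True when the beam points within half_width degrees of home."""
    diff = abs((heading - home_bearing + 180) % 360 - 180)
    return diff <= half_width

def find_home_bearing(start_heading, home_bearing, step=5):
    """Rotate in `step`-degree increments until the beam returns."""
    heading = start_heading
    for _ in range(360 // step + 1):
        if laser_sees_reflector(heading, home_bearing, half_width=step):
            return heading
        heading = (heading + step) % 360
    return None  # full revolution with no return: reflector not visible

print(find_home_bearing(0, 137))  # → 135
```

Once the bearing is found, the robot drives straight along it, repeating the scan from time to time to correct the heading, as described above.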
Measuring Movement: Relative Positioning

Relative positioning, or dead reckoning, is based on the measurement of either the movements made by the vehicle or the forces involved (acceleration). We'll leave this latter category alone, as it requires expensive inertial sensors and gyroscopic compasses that are not easy to interface to the RCX.

The technique of measuring the movement of the vehicle, called odometry, requires an encoder that translates the turn of the wheels into the corresponding traveled distance. Choosing among LEGO supplies, the obvious candidate is the rotation sensor. However, you already know from Chapter 4 that you can emulate the rotation sensor with a touch or a light sensor. You typically will need two of them, though by using some of the special platforms seen in Chapter 8 (the dual differential drive and synchro drive) you can implement odometry with a single rotation sensor. If you don't have any, look in Chapter 4 for some possible alternatives.

The equations for computing the position from the decoded movements depend on the architecture of the robot. We will explain them here using the
example of the differential drive, once again referring you to Appendix A for further resources on the math used.

Suppose that your robot has two rotation sensors, each connected through gearing to one of the main wheels. Given D as the diameter of the wheel, R as the resolution of the encoder (the number of counts per turn), and G as the gear ratio between the encoder and the wheel, you can obtain the conversion factor F that translates each unit from the rotation sensor into the corresponding traveled distance:

F = (D × π) / (G × R)

The numerator of the ratio, D × π, expresses the circumference of the wheel, which corresponds to the distance that the wheel covers at each turn. The denominator of the ratio, G × R, defines the increment in the count of the encoder (number of ticks) that corresponds to a turn of the wheel. F results in the unit distance covered for every tick.

Say your robot uses the largest spoked wheels, which are 81.6mm in diameter. The rotation sensor has a resolution of 16 ticks per turn, and it is connected to the wheel with a 1:5 ratio (five turns of the sensor for one turn of the wheel). The resulting factor is:

F = 81.6 mm × 3.1416 / (5 × 16 ticks) ≈ 3.2 mm/tick

This means that every time the sensor counts one unit, the wheel has traveled about 3.2mm. In any given interval of time, the distance TL covered by the left wheel will correspond to the increment in the rotation sensor count IL multiplied by the factor F:

TL = IL × F

And similarly, for the right wheel:

TR = IR × F

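As a quick check of the arithmetic, the factor F and the per-wheel distances can be computed in a few lines of Python (the tick increments at the end are arbitrary sample values):

```python
import math

# Conversion factor from the worked example: wheel diameter D = 81.6 mm,
# gear ratio G = 5 (five sensor turns per wheel turn), and
# encoder resolution R = 16 ticks per turn.

def conversion_factor(d_mm, gear_ratio, ticks_per_turn):
    """F = (D * pi) / (G * R), in mm of travel per encoder tick."""
    return (d_mm * math.pi) / (gear_ratio * ticks_per_turn)

F = conversion_factor(81.6, 5, 16)
print(round(F, 1))  # → 3.2

# TL = IL * F and TR = IR * F, for sample increments of 100 ticks:
IL, IR = 100, 100
TL, TR = IL * F, IR * F
print(round(TL))  # → 320 (mm traveled by the left wheel)
```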
The centerpoint of the robot, the one that's in the middle of the ideal line that connects the drive wheels, has covered the distance TC:

TC = (TL + TR) / 2

To compute the change of orientation ΔO you need to know another parameter of your robot, the distance between the wheels B, or to be more precise, between the two points of the wheels that touch the ground:

ΔO = (TR - TL) / B

This formula returns ΔO in radians. You can convert radians to degrees using the relationship:

ΔODegrees = ΔORadians × 180 / π

You can now calculate the new relative heading of the robot, the new orientation O at time i based on the previous orientation at time i - 1 and the change of orientation ΔO. O is the direction your robot is pointed at, and results in the same unit (radians or degrees) you chose for ΔO:

Oi = Oi-1 + ΔO

Similarly, the new Cartesian coordinates of the centerpoint come from the previous ones incremented by the traveled distance:

xi = xi-1 + TC × cos Oi

yi = yi-1 + TC × sin Oi

The two trigonometric functions convert the vectored representation of the traveled distance into its Cartesian components.

Oh, this villainous trigonometry again! Unfortunately, you can't get rid of it when working with positioning. Thankfully, there are some special cases where you can avoid trig functions; for example, when you're able to make your robot turn in place precisely 90 degrees, and truly go straight when you expect it to. In this situation, either x or y remains constant, while the other takes all the increments (or decrements) of the traveled distance TC. Thinking back to Chapter 8, two platforms make ideal candidates for this: the dual differential drive and the synchro drive.

Using a dual differential drive you need just one rotation sensor, attached to either the right or the left wheel. The mechanics guarantees that when one motor is on, the robot drives perfectly straight, while when the other is on, the robot turns in place. In the first case, the rotation sensor will measure the traveled distance TC, while in the second, you must count its increments to turn exactly in multiples of 90 degrees. In Chapter 23, you will find a robot which turns this theory into a practical application, a dual differential drive that navigates using a single rotation sensor. We will go through the math again and will describe a complete NQC implementation of the formulas.

The synchro drive can be easily limited to turn its wheels only 90 degrees. With this physical constraint regarding change of direction, you can be sure it will proceed using only right angles. Connect the rotation sensor to the motion motor. Just as in the previous case, you will use it to measure the traveled distance. In this setup, as with that using the dual differential drive, one rotation sensor is enough.
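Putting the formulas of this section together, one dead-reckoning update can be sketched in Python (the book's own implementation, in Chapter 23, is in NQC; the wheel base B = 110 mm here is just an assumed sample value):

```python
import math

# One dead-reckoning update for a differential drive: given tick
# increments IL/IR, the factor F (mm/tick) and the wheel base B (mm),
# update the pose (x, y, O) of the robot's centerpoint.

def update_pose(x, y, o, il, ir, f=3.2, b=110.0):
    """Apply TL/TR, TC, dO, Oi, xi, yi from the text in one step."""
    tl, tr = il * f, ir * f          # per-wheel travel, mm
    tc = (tl + tr) / 2               # centerpoint travel, mm
    do = (tr - tl) / b               # change of orientation, radians
    o += do                          # Oi = Oi-1 + dO
    x += tc * math.cos(o)            # xi = xi-1 + TC * cos(Oi)
    y += tc * math.sin(o)            # yi = yi-1 + TC * sin(Oi)
    return x, y, o

# Straight segment: equal increments leave the heading unchanged.
x, y, o = update_pose(0.0, 0.0, 0.0, 50, 50)
print(round(x), round(y), o)  # → 160 0 0.0

# Spin in place: opposite increments change only the heading.
x, y, o = update_pose(0.0, 0.0, 0.0, -25, 25)
print(round(math.degrees(o)))  # → 83
```

Calling this once per sampling interval accumulates the pose; it also accumulates the measurement errors, which is exactly the drift problem discussed earlier in the chapter.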

Summary

We promised in the introduction that this was a difficult topic, and it was. Nevertheless, making a robot that has even a rough estimate of its position is a rewarding experience.

There are two categories of methods for estimating position, one based on absolute positioning and the other on relative positioning. The first category usually requires artificial landmarks or beacons as external references, both of which we explored. Artificial landmarks can range from a simple line to follow, to a grid of marks, to more complex gradient pads. All of them allow good control of navigation through the use of a light sensor. The MINDSTORMS line is not very well equipped for absolute positioning through the use of beacons; however, we described some possible approaches based on laser beams and introduced you to the difficulties that they entail.

Relative positioning is more easily portable to LEGO robots. We explained the math required to implement deduced reckoning in a differential drive robot, and suggested some alternative architectures that help in simplifying the involved calculations. Unfortunately, relative positioning methods have an unavoidable tendency to accumulate errors; you can reduce the magnitude of the problem, but you cannot eliminate it. This is the reason that real-life navigation systems, like those used in cars, planes, and ships, usually rely on a combination of methods taken from both categories: Dead reckoning isn't very computation intensive and provides good precision over a short range of distances, while absolute references are used to zero the errors it accumulates.