`
IEEE TRANSACTIONS ON
INTELLIGENT TRANSPORTATION SYSTEMS

A PUBLICATION OF THE IEEE INTELLIGENT TRANSPORTATION SYSTEMS COUNCIL

SEPTEMBER 2000    VOLUME 1    NUMBER 3    ITISFG    (ISSN 1524-9050)
`
`
`
`
SPECIAL ISSUE ON VISION APPLICATIONS AND TECHNOLOGY FOR INTELLIGENT VEHICLES—PART II: VEHICLES

Editorial—Special Issue on Vision Applications and Technology for Intelligent Vehicles—Part II: Vehicles .......... A. Broggi 133
Simultaneous Detection of Lane and Pavement Boundaries Using Model-Based Multisensor Fusion .......... B. Ma, S. Lakshmanan, and A. O. Hero, III 135
Stereo- and Neural Network-Based Pedestrian Detection .......... L. Zhao and C. E. Thorpe 148
Walking Pedestrian Recognition .......... C. Curio, J. Edelbrunner, T. Kalinke, C. Tzomakas, and W. von Seelen 155
Visual Perception of Obstacles and Vehicles for Platooning .......... A. Broggi, M. Bertozzi, A. Fascioli, C. Guarino Lo Bianco, and A. Piazzi 164
`
`
`
`
`
`
IEEE INTELLIGENT TRANSPORTATION SYSTEMS COUNCIL
`
`
`
The Intelligent Transportation Systems Council is an organization within the framework of the IEEE on behalf of the following Societies: Aerospace and Electronic Systems, Antennas and Propagation, Communications, Computer, Consumer Electronics, Control Systems, Electromagnetic Compatibility, Electron Devices, Industrial Electronics, Instrumentation and Measurement, Microwave Theory and Techniques, Power Electronics, Reliability, Robotics and Automation, Signal Processing, Systems, Man, and Cybernetics, Vehicular Technology. Its professional interest is in the application of information technology to transportation. Members of sponsoring IEEE Societies may subscribe to this TRANSACTIONS for $20.00 per year. For information on joining the IEEE and/or the participating societies, please contact the IEEE at the address below. Member copies are for personal use only.
`
OFFICERS

President: UMIT OZGUNER
Vice President (Conferences and Meetings): ICHIRO MASAKI
Vice President (Finance): RICHARD KLAFTER
Vice President (Publications): DANIEL J. DAILEY
Secretary: EMILY SOPENSKY
Past President: E. RYERSON CASE
`
IEEE TRANSACTIONS® ON INTELLIGENT TRANSPORTATION SYSTEMS

Editor
PROF. CHELSEA C. WHITE, III
Department of Industrial and Operations Engineering
The University of Michigan
Ann Arbor, MI 48109-2117
(734) 764-5723; (734) 747-6644 (FAX)
email: ccwiii@umich.edu
`
`
Associate Editors
ISMAIL CHABINI
ALBERTO BROGGI
TOSHIO FUKUDA
HIDEKI HASHIMOTO
PETROS A. IOANNOU
HANI S. MAHMASSANI
RYUJI KOHNO
YIU LIU
ICHIRO MASAKI
UMIT OZGUNER
WILLIAM T. SCHERER
HIROSHI TAKAHASHI
SHOICHI WASHINO
YILIN ZHAO
`
THE INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS, INC.

Officers
BRUCE A. EISENSTEIN, President
JOEL B. SNYDER, President-Elect
DAVID J. KEMP, Secretary
DAVID A. CONNER, Treasurer
MICHAEL S. ADLER, Vice President, Publication Activities
ANTONIO BASTOS, Vice President, Regional Activities
LYLE D. FEISEL, Vice President, Educational Activities
DONALD C. LOUGHRY, President, Standards Association
ROBERT A. DENT, Vice President, Technical Activities
MERRILL W. BUCKLEY, President, IEEE USA
JANIE M. FOUKE, Director, Division X—Systems and Control

Executive Staff
DANIEL J. SENESE, Executive Director
DONALD CURTIS, Human Resources
ANTHONY DURNIAK, Publications
JUDITH GORMAN, Standards Activities
CECELIA JANKOWSKI, Regional Activities
PETER A. LEWIS, Educational Activities
RICHARD D. SCHWARTZ, Business Administration
W. THOMAS SUTTLE, Professional Activities
MARY WARD-CALLAN, Technical Activities
JOHN WITSKEN, Information Technology
`
IEEE Periodicals
Transactions/Journals Department

Staff Director: FRAN ZAPPULLA
Editorial Director: DAWN M. MELLEY
Production Director: ROBERT SMREK
Transactions Manager: GAIL S. FERENC
Electronic Publishing Manager: STEPHEN COHEN
Managing Editor: MONA MITTRA
`
IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS (ISSN 1524-9050) is published quarterly by The Institute of Electrical and Electronics Engineers, Inc. Responsibility for the contents rests upon the authors and not upon the IEEE, the Society/Council, or its members. IEEE Corporate Office: 3 Park Avenue, 17th Floor, New York, NY 10016-5997. IEEE Operations Center: 445 Hoes Lane, P.O. Box 1331, Piscataway, NJ 08855-1331. NJ Telephone: +1 732 981 0060. Price/Publication Information: Individual copies: IEEE Members $10.00 (first copy only), nonmembers $20.00 per copy. (Note: Add $4.00 postage and handling charge to any order from $1.00 to $50.00, including prepaid orders.) Member and nonmember subscription prices available upon request. Available in microfiche and microfilm. Copyright and Reprint Permissions: Abstracting is permitted with credit to the source. Libraries are permitted to photocopy for private use of patrons, provided the per-copy fee indicated in the code at the bottom of the first page is paid through the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923. For all other copying, reprint, or republication permission, write to Copyrights and Permissions Department, IEEE Publications Administration, 445 Hoes Lane, P.O. Box 1331, Piscataway, NJ 08855-1331. Copyright © 2000 by The Institute of Electrical and Electronics Engineers, Inc. All rights reserved. Application to Mail at Periodicals Postage Rates is Pending at New York, NY and at additional mailing offices. Postmaster: Send address changes to IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, IEEE, 445 Hoes Lane, P.O. Box 1331, Piscataway, NJ 08855-1331. GST Registration No. 125634188. Printed in U.S.A.
`
`
Visual Perception of Obstacles and Vehicles for Platooning

Alberto Broggi, Member, IEEE, Massimo Bertozzi, Member, IEEE, Alessandra Fascioli, Member, IEEE, Corrado Guarino Lo Bianco, and Aurelio Piazzi, Member, IEEE
`
Abstract—This paper presents the methods for sensing obstacles and vehicles implemented on the University of Parma experimental vehicle (ARGO). The ARGO project is briefly described along with its main objectives; the prototype vehicle and its functionalities are presented. The perception of the environment is performed through the processing of images acquired from the vehicle. Details about the stereo vision-based detection of generic obstacles are given, along with a measurement of the performance of the method; then a new approach for leading vehicle detection is described, relying on symmetry detection in monocular images. This paper is concluded with a description of the current implementation of the control system, based on a gain scheduled controller, which allows the vehicle to follow the road or other vehicles.

Index Terms—Automatic steering, image processing, obstacle detection, platooning, vehicle detection, vision-based autonomous vehicles, visual servoing.
`
I. Introduction

Among the many functionalities an intelligent vehicle must perform, Obstacle and Vehicle Detection play a basic role. In fact, an autonomous vehicle must be able to detect vehicles and potential obstacles on its path in order to perform Road Following, namely, the automatic movement along a predefined path, or Platooning, namely, the automatic following of a preceding vehicle.

A number of different vision-based techniques have been proposed in the literature and tested on prototype vehicles. Several approaches to obstacle detection rely on the localization of specific patterns (possibly supported by features such as shape, symmetry, or edges) and are therefore based on the analysis of monocular images [1], [2]. They generally offer simple algorithmic solutions, allow fast processing, and do not suffer from vehicle movements. For example, the research group of the Istituto Elettrotecnico Nazionale “G. Ferraris” limits the processing to the image portion that is assumed to represent the road; borders that could represent a potential vehicle are looked for and examined [3]. People at the Universität der Bundeswehr enforce an edge detection process with obstacle modelization; the system is able to detect and track up to twelve objects around the vehicle [4].

Following a more general definition, an obstacle is defined as an object that can obstruct the vehicle’s driving path or, in other words, anything rising out significantly from the road surface. In this case the problem of Obstacle Detection is dealt with using more complex techniques, such as the analysis of the optical flow field [5], [6] or the processing of stereo images [7]–[9]. As an example, the ASSET-2 [10], [11] is based on optical flow only. Its main feature is that no restrictive assumptions are made about the world, the motion or the calibration of the camera, or other parameters. A different approach has been used for the UTA demonstrator car; in this case a feature-based stereo vision system has been developed and is able to run in real-time even on a 200 MHz PowerPC [7].

Besides their intrinsic higher computational complexity, caused by a significant increment in the amount of data to be processed, these techniques must also be robust enough to tolerate the noise caused by vehicle movements and drifts in the calibration of the multiple cameras’ setup. Optical flow-based techniques permit the computation of ego-motion and the obstacle’s relative speed but, as the presence of obstacles is indirectly derived from the analysis of the velocity field, they fail when both the vehicle and the obstacle have small or null speed. Conversely, from the analysis of stereo images, obstacles can be directly detected and a three-dimensional (3-D) reconstruction of the environment can be performed. Moreover, to decrease the complexity of stereo vision, some domain-specific constraints can be adopted.

On the ARGO autonomous vehicle, Obstacle Detection is reduced to the identification of the free-space (the area in which the vehicle can safely move). A geometrical transform is performed to detect the free space, using a pair of stereo images of the portion of the road in front of ARGO. This functionality has been thoroughly tested on different obstacles—with varying shape and size—displaced in front of the vehicle in different positions. Results have been collected and analyzed highlighting the characteristics and deficiencies of this approach.

While Obstacle Detection has been proven to be robust, allowing ARGO to reliably detect generic obstacles close to the vehicle, the tests demonstrated that its sensitivity is too low in the region far ahead of the vehicle. Therefore, a different approach is needed for finding and following a preceding vehicle.

For this reason, a Vehicle Detection functionality has been developed. This functionality is aimed at detecting vehicles only, therefore it is based on a search for specific patterns using shape, symmetry, and size constraints. Vehicles are localized and tracked using a single monocular image sequence whilst a distance refinement is computed using stereo vision.

Manuscript received March 7, 2000; revised August 14, 2000. This work was supported in part by the Italian National Research Council (CNR) in the framework of the MADESS2 Project. The Associate Editor for this paper was Dr. Charles E. Thorpe.
A. Broggi is with the Dip. di Informatica e Sistemistica, Università di Pavia, 27100 Pavia, Italy (e-mail: alberto.broggi@unipv.it).
M. Bertozzi, A. Fascioli, C. Guarino Lo Bianco, and A. Piazzi are with the Dip. di Ingegneria dell’Informazione, Università di Parma, 43100 Parma, Italy (e-mail: {bertozzi; fascioli; guarino; aurelio}@ce.unipr.it).
Publisher Item Identifier S 1524-9050(00)10334-5.
`
`1524-9050/00$ 10.00 © 2000 IEEE
`
`
The steering control for the Platooning functionality is based on a gain scheduled proportional controller whose error input is evaluated using the estimated position of the preceding vehicle.
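Although the controller itself is detailed in Section VI, its overall structure can be sketched generically. In the fragment below the speed bands, gain values, and the exact definition of the error signal are illustrative assumptions only, not the tuning used on ARGO:

```python
def steering_command(error, speed,
                     schedule=((5.0, 0.9), (15.0, 0.5), (float("inf"), 0.3))):
    """Gain-scheduled proportional steering law (illustrative sketch).

    `error` is the lateral error derived from the estimated position of
    the preceding vehicle (or of the lane), `speed` the vehicle speed in
    m/s. The proportional gain is scheduled on speed so that the steering
    response softens as speed grows; all numbers here are placeholders.
    """
    for v_max, k_p in schedule:        # bands sorted by increasing speed
        if speed <= v_max:
            return k_p * error         # proportional law with scheduled gain
```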
This paper is organized as follows: the next section presents the ARGO project and the prototype vehicle developed within this framework. Section III presents the Obstacle Detection functionality used in the last few years on ARGO as well as a critical analysis of this algorithm. The Vehicle Detection functionality is addressed in Section IV; Section V presents timing issues, while Section VI describes the control subsystem that drives ARGO. Section VII ends the paper with some remarks.
`
II. The ARGO Project

The main target of the ARGO Project [12] is the development of an active safety system which can also act as an automatic pilot for a standard road vehicle.

The capability of perceiving the environment is essential for an intelligent vehicle which is expected to use the existing road network with no need for specific infrastructures. Although very efficient in some fields of application, active sensors—besides polluting the environment—feature some specific problems in automotive applications due to inter-vehicle interference amongst the same type of sensors, and due to the wide variation in reflection ratios caused by many different reasons, such as obstacles’ shape or material. Moreover, the maximum signal level must comply with safety rules, i.e., it must be lower than a safety threshold. For this reason, in the implementation of the ARGO vehicle, the use of sensors has been restricted to passive ones, such as cameras.

A second design choice was to keep the system costs low. These costs include both production costs (which must be minimized to allow a widespread use of these devices) and operating costs, which must not exceed a certain threshold in order not to alter vehicle performance. Therefore, low-cost devices have been preferred, both for image acquisition and processing: the prototype is based on cheap cameras and a commercial PC.
`
A. The ARGO Vehicle Prototype

ARGO, shown in Fig. 1, is an experimental autonomous vehicle equipped with vision systems and automatic steering capability.

It is able to determine its position with respect to the lane, to compute road geometry, to detect generic obstacles on the path, and to localize a leading vehicle. The images acquired by a stereo rig placed behind the windshield are analyzed in real-time by the computing system located in the boot. The results of the processing are used to drive an actuator mounted onto the steering wheel and other driving assistance devices.

The system was initially conceived as a safety enhancement unit: in particular it is able to supervise the driver behavior and issue both optic and acoustic warnings or even take control of the vehicle when dangerous situations are detected. Further developments have extended the system functionalities to a complete automatic steering.

1) The Sensing System: Only passive sensors are used on ARGO to sense the surrounding environment:
`
Fig. 1. ARGO prototype vehicle.
`
• A stereoscopic vision system consisting of two low-cost synchronized cameras able to acquire pairs of grey level images simultaneously. The cameras lie inside the vehicle at the top corners of the windshield, so that the longitudinal distance between the two cameras is maximum.
• A speedometer which is used to detect the vehicle’s velocity. A Hall effect device has been chosen for its simplicity and reliability and has been interfaced to the computing system via a digital I/O board.

In addition, a button-based control panel has been installed enabling the driver to modify a few driving parameters, select the system functionality, issue commands, and interact with the system.
2) The Processing System: The architectural solution currently installed on the ARGO vehicle is based on a standard 450 MHz Pentium II processor. Thanks to recent advances in computer technologies, commercial systems nowadays offer sufficient computational power for this application. All the processing needed for the driving task (image feature extraction and vehicle trajectory control) is performed in real-time: 50 pairs of single-field images are processed every second.

3) The Output System: Several output devices have been installed on ARGO: acoustical (stereo loudspeakers) and optical (LED-based control panel) devices are used to issue warnings to the driver in case dangerous conditions are detected, while a color monitor is mainly used as a debugging tool.
`
`
A mechanical device provides autonomous steering capabilities. It is composed of an electric stepping motor coupled to the steering column by means of a belt. The output fed by the vision system is used to turn the steering wheel to maintain the vehicle inside the lane or follow the leading vehicle.
`
B. System Functionalities

Thanks to a control panel, the driver can select the level of system intervention. The following three driving modes are integrated.

1) Manual Driving: The system simply monitors and logs the driver’s activity.

2) Supervised Driving: In case of danger, the system warns the driver with acoustic and optical signals. A LED row on the control panel encodes the vehicle position within the lane, while the right or left speakers warn in case the vehicle is approaching too much to the right or left lane marking, respectively.

3) Automatic Steering: The system maintains full control of the vehicle’s trajectory, and the two following functionalities can be selected:

Road Following: the automatic movement of the vehicle inside the lane. It is based on Lane Detection (which includes the localization of the road, the determination of the relative position between the vehicle and the road, and the analysis of the vehicle’s heading direction) and Obstacle Detection (which is mainly based on localizing possible generic obstacles on the vehicle’s path).

Platooning: the automatic following of the preceding vehicle, which requires the localization and tracking of a target vehicle (Vehicle Detection and Tracking).
`
III. Obstacle Detection

The Obstacle Detection functionality is aimed at the localization of generic objects that can obstruct the vehicle’s path, without their complete identification or recognition. For this purpose a complete 3-D reconstruction is not required and a match with a given model suffices: the model represents the environment without obstacles, and any deviation from the model represents a potential obstacle.
`
A. The Inverse Perspective Mapping

The Obstacle Detection functionality is based on the removal of the perspective effect through the Inverse Perspective Mapping (IPM) [13]. The IPM removes the perspective effect from incoming images by remapping each pixel toward a different position. It exploits the knowledge of the acquisition parameters (camera orientation, position, optics, etc.) and the assumption of a flat road in front of the vehicle. The result is a new two-dimensional (2-D) array of pixels (the remapped image) that represents a bird’s-eye view of the road region in front of the vehicle [Fig. 2(c) shows the result of the application of the IPM technique to the image in Fig. 2(a)].
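As a concrete illustration (not taken from [13]), the following Python sketch remaps an image onto the road plane with a simple pinhole model: every cell of a ground-plane grid is projected into the source image and the corresponding pixel is copied. The camera height, pitch, focal length, and grid ranges are illustrative placeholders, not ARGO’s calibration:

```python
import numpy as np

def inverse_perspective_mapping(image, h=1.2, pitch=0.05, f=800.0,
                                x_range=(-5.0, 5.0), z_range=(5.0, 35.0),
                                out_shape=(200, 200)):
    """Bird's-eye remapping under a flat-road assumption (illustrative).

    h     : camera height above the road [m]   (assumed value)
    pitch : downward camera pitch [rad]        (assumed value)
    f     : focal length [pixels]              (assumed value)
    """
    H, W = image.shape[:2]
    cx, cy = W / 2.0, H / 2.0          # principal point at image center
    rows, cols = out_shape
    # Ground-plane grid: Z (ahead) decreases down the output, X (lateral) grows right.
    Z = np.linspace(z_range[1], z_range[0], rows)[:, None]
    X = np.linspace(x_range[0], x_range[1], cols)[None, :]
    s, c = np.sin(pitch), np.cos(pitch)
    zc = c * Z + s * h                 # depth along the optical axis
    yc = c * h - s * Z                 # downward camera coordinate
    u = np.rint(cx + f * X / zc).astype(int)                      # (rows, cols)
    v = np.broadcast_to(np.rint(cy + f * yc / zc).astype(int), (rows, cols))
    ok = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    out = np.zeros((rows, cols) + image.shape[2:], dtype=image.dtype)
    out[ok] = image[v[ok], u[ok]]      # nearest-neighbour pixel copy
    return out
```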
`
Fig. 2. Obstacle Detection: (a) the left stereo image; (b) the right stereo image; (c) and (d) the remapped images; (e) the difference image; (f) the angles of view overlapped with the difference image; (g) the polar histogram; and (h) the result of Obstacle Detection using a black marker superimposed on the original left image; the light-gray area represents the road region visible from both cameras.
`
B. Obstacle Detection Processing Steps

The application of IPM to stereo images [12], [13] plays a strategic role for Obstacle Detection.

Assuming a flat road, the IPM is performed on both stereo images.¹ The flat road model is checked through a pixel-wise difference between the two remapped images: in correspondence to a generic obstacle in front of the vehicle, namely anything rising up from the road surface, the difference image features sufficiently large clusters of nonzero pixels that possess a particular shape. Due to the stereo cameras’ different angles of view, an ideal homogeneous square obstacle produces two clusters of pixels with a triangular shape in the difference image, in correspondence to its vertical edges [13].
`
¹An alternative solution is to warp the left or right image to the domain of the other one. Nevertheless, the Lane Detection functionality [12], not described in this work, relies on the image generated by the IPM. Moreover, the mapping of both images onto the ground eases the computation of the object’s distance as well as other features.
`
`
Fig. 3. Obstacle Detection. Result is shown with a black marking superimposed onto a brighter version of the image captured by the left camera; a black thin line limits the portion of the road seen by both cameras.
`
Unfortunately, due to texture, irregular shape, and nonhomogeneous brightness of generic obstacles, in real cases the detection of the triangles becomes difficult. Nevertheless, in the difference image some clusters of pixels with a quasitriangular shape are anyway recognizable, even if they are not clearly disjointed. Moreover, in case two or more obstacles are present in the scene at the same time, more than two triangles appear in the difference image. A further problem is caused by partially visible obstacles, which produce a single triangle.

The low-level portion of the process (see Fig. 2) is consequently reduced to the computation of the difference between the two remapped images, a threshold, and a morphological opening aimed at removing small-sized details in the thresholded image.
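In code, this low-level stage is just a few array operations. The sketch below uses NumPy and SciPy; the threshold value and the structuring-element size are assumptions for illustration, since the paper gives no numeric values here:

```python
import numpy as np
from scipy import ndimage

def low_level_difference(left_remap, right_remap, threshold=20, open_size=3):
    """Difference, threshold, and morphological opening of the remapped
    stereo pair. Pixels violating the flat-road model survive as clusters;
    the opening removes small-sized details in the thresholded image.
    `threshold` and `open_size` are illustrative assumptions."""
    diff = np.abs(left_remap.astype(np.int16) - right_remap.astype(np.int16))
    binary = diff > threshold
    structure = np.ones((open_size, open_size), dtype=bool)
    return ndimage.binary_opening(binary, structure=structure)
```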
The following process is based on the localization of pairs of triangles in the difference image by means of a quantitative measurement of their shape and position [14]. It is divided into: computing a polar histogram for the detection of triangles, finding and joining the polar histogram’s peaks to determine the angle of view under which obstacles are seen, and estimating the obstacle distance.
1) Polar Histogram: A polar histogram is used for the detection of triangles: it is computed by scanning the difference image with respect to a point called the focus and counting the number of over-threshold pixels for every straight line originating from the focus. It is important to note that the image area considered when building the polar histogram is not uniform along the scanning angle: under small angles, the considered sector is short, while for angles close to 90°, it gets longer. Therefore, the polar histogram’s values are normalized applying a nonconstant threshold computed using the polar histogram of an image where all pixels are set. Finally, a low-pass filter is applied in order to decrease the influence of noise.
Fig. 4. Test-bed (black circles indicate the positions where obstacles have been placed).

The polar histogram’s focus is placed in the middle point between the projection of the two cameras onto the road plane; in this case the polar histogram presents an appreciable peak corresponding to each triangle [13]. Since the presence of an obstacle produces two disjointed triangles (corresponding to its edges) in the difference image, Obstacle Detection is limited to the search for pairs of adjacent peaks. The position of a peak in fact determines the angle of view under which the obstacle edge is seen.
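A minimal sketch of this scan follows. The number of angular bins and the smoothing window are our assumptions; the normalization against an all-set image mirrors the nonconstant threshold described above:

```python
import numpy as np

def polar_histogram(binary_diff, focus, n_bins=180, smooth=5):
    """Count over-threshold pixels along each direction through `focus`
    (placed between the projections of the two cameras onto the road
    plane), normalise by the count an all-set image would give for the
    same direction, and low-pass filter the result."""
    rows, cols = binary_diff.shape
    ys, xs = np.nonzero(binary_diff)
    angles = np.arctan2(focus[1] - ys, xs - focus[0])
    hist, edges = np.histogram(angles, bins=n_bins, range=(0.0, np.pi))
    # Reference: the same histogram for an image where all pixels are set.
    ys_all, xs_all = np.mgrid[0:rows, 0:cols]
    ref, _ = np.histogram(np.arctan2(focus[1] - ys_all, xs_all - focus[0]),
                          bins=n_bins, range=(0.0, np.pi))
    norm = hist / np.maximum(ref, 1)          # nonconstant normalisation
    kernel = np.ones(smooth) / smooth         # simple moving-average low-pass
    return np.convolve(norm, kernel, mode="same"), edges
```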
`
`
`27.15 ■■
`
`24.95 ■ ■
`
`22.75 ■ ■
`
`20.55 •
`
`18.70 ■ ■
`
`16.70 ■ ■
`
`14.70
`
`■
`
`12.70 • ■
`
`10.65 •
`
`0
`
`0
`
`48
`
`66
`
`0 0
`
`0 0
`
`0 0
`
`0 0
`
`0
`
`0
`
`48 0
`
`0 48 48
`
`36
`
`36
`
`66
`
`67 .65 24 24 65 67
`
`72
`33 66 72
`66 33
`^8 69 37 37 69 7#/
`
`\ 100 62 62 loop,'
`
`\ V83' 60 60 83 ' ,-'
`
`0
`
`0
`
`0
`
`0
`
`0
`
`0
`
`23 21
`
`29 26
`
`38 50
`
`21
`
`26
`
`SO
`
`23 0
`
`29 0
`
`38 0
`
`41 68
`
`68
`
`41 0
`
`44 78 78, 44 0
`
`43 ,86 86 : 43 0
`
`.12 39 94
`94
`\
`41 too 100
`
`39 12'
`41
`!
`
`\\41 -9.3 ,.93 41,-' /'
`
`0
`
`20
`12 26
`
`18
`
`27
`
`39
`
`18
`
`27
`
`39
`
`20
`
`0
`
`26 12
`
`31
`
`12
`
`30 St
`
`30 13
`51
`60 60 34
`
`12
`
`31
`
`34
`
`38
`
`12
`
`13
`
`12
`
`0
`
`0
`
`73
`
`454 86
`
`73
`
`86
`
`38
`
`45
`
`58
`
`0
`
`0 ;
`
`/
`
`\
`93
`58
`93
`\\66.4(8t-X(»
`
`Fig. 5. Measured sensitivity in a 0-100 scale for three different kind of obstacles: (a) small and short obstacle; (b) large and tall obstacle; and (c) human shape.
`
Peaks may have different characteristics, such as amplitude, sharpness, or width. This depends on the obstacle distance, angle of view, and difference of brightness and texture between the background and the obstacle itself.

2) Peaks Joining: Two or more peaks can be joined according to different criteria. Starting from the analysis of a large number of different situations, a criterion aimed at the grouping of peaks has been found that takes into account several characteristics, such as the peaks’ amplitude and width, the area they subtend, as well as their distance [15]. Obviously, a partially visible obstacle produces a single peak that cannot be joined to any other.

The amplitude and width of peaks, as well as the interval between joined peaks, are used to determine the angle of view under which the whole obstacle is seen.
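The published criterion involves several measured quantities and is given in [15]; purely as a schematic placeholder, a greedy pairing of adjacent peaks might look as follows (the angular-gap and area thresholds are invented for illustration):

```python
def join_peaks(peaks, max_angular_gap=0.35, min_area=10.0):
    """Greedy stand-in for the peak-grouping criterion of [15].

    `peaks` is a list of (angle, amplitude, width) tuples sorted by angle.
    Adjacent peaks close enough in angle and subtending enough area are
    joined into one obstacle; leftovers (e.g., from partially visible
    obstacles) remain single. All thresholds are illustrative."""
    groups, i = [], 0
    while i < len(peaks):
        if i + 1 < len(peaks):
            (a1, h1, w1), (a2, h2, w2) = peaks[i], peaks[i + 1]
            if (a2 - a1) <= max_angular_gap and (h1 * w1 + h2 * w2) >= min_area:
                groups.append((peaks[i], peaks[i + 1]))
                i += 2
                continue
        groups.append((peaks[i],))
        i += 1
    return groups
```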
3) Estimation of Obstacle Distance: The difference image can also be used to estimate the obstacle distance. For each peak of the polar histogram a radial histogram is computed by scanning a specific sector of the difference image. The width of this sector is determined from the width of the polar histogram peak [14]. The number of over-threshold pixels in the sector is computed as a function of the distance from the cameras and the result is normalized. The radial histogram is analyzed to detect the corners of the triangles, which represent the contact points between obstacles and the road plane, therefore allowing the determination of the obstacle distance through a simple threshold [13].
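A sketch of this distance estimate is given below. The bin count and the contact-point threshold are assumptions, and distances are left in pixel units of the remapped image (which has a known metric scale, so conversion is a multiplication):

```python
import numpy as np

def radial_histogram(binary_diff, focus, angle_lo, angle_hi,
                     n_bins=64, contact_threshold=0.5):
    """Scan the sector [angle_lo, angle_hi] of the difference image and
    histogram the over-threshold pixels by distance from `focus`. The
    first bin exceeding `contact_threshold` after normalisation marks the
    triangle corner, i.e., the obstacle/road contact point."""
    ys, xs = np.nonzero(binary_diff)
    angles = np.arctan2(focus[1] - ys, xs - focus[0])
    sel = (angles >= angle_lo) & (angles <= angle_hi)
    radii = np.hypot(xs[sel] - focus[0], ys[sel] - focus[1])
    hist, edges = np.histogram(radii, bins=n_bins)
    hist = hist / max(hist.max(), 1)          # normalise to [0, 1]
    over = np.nonzero(hist > contact_threshold)[0]
    contact_radius = edges[over[0]] if over.size else None
    return hist, edges, contact_radius
```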
`
C. Results

Concerning qualitative outcomes, Fig. 3 shows the results obtained in a number of different situations including multiple obstacles placed in different positions inside the stereo field of view. The result is displayed with black markings superimposed on a brighter version of the left image; markers encode both the obstacles’ distance and width.
`
`Fig. 6. Average values of the sensitivity for Obstacle Detection.
`
D. Performance Analysis

Due to its fundamental importance, the Obstacle Detection module must be extremely robust and must reliably detect objects in a given distance range (i.e., in 100% of all cases). In order to evaluate the performance of the algorithm implemented on ARGO and determine possible enhancements, extensive tests have been carried out. Previous experiments [15] demonstrated that Obstacle Detection is robust to errors in vision system calibration (i.e., vehicle movements or deviations from the flat road hypothesis like the ones that should be expected in a highway/freeway scenario).

Nevertheless, an extensive test has been carried out to determine the sensitivity of Obstacle Detection to the dimensions and position of obstacles.

1) The Test-Bed: Obstacles with different size and shape have been positioned in front of the vehicle at given distances and the sensitivity of the algorithm has been measured.
`
`
Fig. 7. Three-dimensional scene and projection of the obstacle on a linear profile of the image: (a) a small obstacle far from the camera; (b) a high obstacle far from the camera; (c) a small obstacle near the camera; and (d) a small obstacle near the camera but located on the right of the viewing region.
`
The obstacle’s characteristics that have been varied during the tests are the following:
• obstacle’s position: ahead distance and lateral offset, ranging from 10 to 30 m for the distance perpendicular to the camera’s stereo rig and from −4 to 4 m for the lateral offset;
• obstacle’s size: the tests included small obstacles (25 cm wide × 60 cm high) and large ones (50 cm wide × 90 cm high);
• obstacle’s height: the range varied from 60 to 180 cm in height.
Moreover, sensitivity to human shapes has been tested.

During the tests, the following set-up and assumptions were used:
• The vehicle was standing still. Since noise is generally due to drifts in the cameras’ calibration (generated by vehicle movements), this assumption made it possible to remove this kind of noise.
• The obstacle’s color has been selected to be homogeneous and different from the background.

Although many experiments were performed, this section reports on the tests made with the following three obstacles:
• small obstacle: 25 cm wide × 60 cm high;
• large obstacle: 50 cm wide × 90 cm high;
• human shape: 40 cm wide × 180 cm high.
The obstacles have been positioned on a grid, shown in Fig. 4.
2) Obstacle Detection Sensitivity: In order to determine the sensitivity ($S$) to obstacles, the height of the polar histogram peak ($H$) is analyzed and compared to the threshold ($T$) used for the decision whether the peak is due to an obstacle or to noise. In addition, the sensitivity is normalized using the height value ($H_{\max}$) of the highest peak, namely

$$ S = \begin{cases} 0, & \text{if } H < T \\ \dfrac{H}{H_{\max}}, & \text{if } H \geq T. \end{cases} $$

When two or more peaks are localized in the polar histogram, the highest is considered.

Since different illumination conditions can slightly affect the final result, several images have been acquired and processed for each obstacle’s position on the grid shown in Fig. 4. In case all the values were greater than the threshold their average was computed, otherwise a zero was taken.
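Stated as code, the per-position measurement reduces to the following small sketch (a direct transcription of the rules above; the variable names are ours):

```python
import numpy as np

def sensitivity(H, T, H_max):
    """S = 0 if the highest peak H is below the noise threshold T,
    otherwise H normalised by H_max (Fig. 5 rescales S to 0-100)."""
    return 0.0 if H < T else H / H_max

def grid_cell_sensitivity(samples, T, H_max):
    """Several acquisitions per grid position: the cell scores zero if
    any sample falls below the threshold, else the average is taken."""
    if any(H < T for H in samples):
        return 0.0
    return float(np.mean([H / H_max for H in samples]))
```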
Fig. 5 shows the results for three different obstacles: Fig. 5(a) shows a small-sized obstacle; Fig. 5(b) shows a large and tall obstacle; and Fig. 5(c) shows a human shape. For each single obstacle, the values representing the sensitivity are scaled between 0 and 100, therefore they are not directly comparable.

However, in order to give an overview of the system’s behavior, Fig. 6 graphically summarizes all the measurements; it has been computed as an average of all the tests performed on the different obstacles. It is clearly visible that the sensitivity to the presence of obstacles is high in the area right ahead of the vehicle (the cameras’ angular aperture is nearly 40°), and decreases—almost linearly—with the distance. The lateral regions have a lower sensitivity.
3) Analysis of the Results: The results obtained during the tests demonstrated that the sensitivity mainly depends on the obstacle’s height and position. Conversely, the obstacle’s width barely impacts on the sensitivity, affecting only the distance between the peaks of the polar histogram.

First of all, it is important to note that tall obstacles lying far from the camera share the same characteristics of short ones: this is due to the reduced region analyzed by the system, as can be seen by comparing Fig. 7(a) and (b). Therefore, the obstacle’s height only impacts on the result when the obstacle is short enough to be fully visible by the cameras, as shown in Fig. 7(c). In this case, the sensitivity to the obstacle’s height is linear with the distance. This is clearly shown in Fig. 5: the closer the obstacle to the camera, the more reliable its detection.

Due to the variable threshold along the polar histogram’s scanning angle, the system is much more sensitive to small obstacles when they lie on the sides of the viewing region. This behavior is explained by Fig. 7(d), which shows that in the case of lateral obstacles, the considered area (sector) of the image is shorter than for the in-front analysis. Therefore, since the image profile is shorter, the projection of an obstacle covers a larger percenta