Invited Paper
Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XIV, Gerald C. Holst, Editor,
Proceedings of SPIE Vol. 5076 (2003) © 2003 SPIE · 0277-786X/03/$15.00
Resolution requirements and the Johnson criteria revisited

Jon C. Leachtenauer*
J/M Leachtenauer Associates Inc., 1281 Still Meadow Ave., Charlottesville, VA 22901

Abstract
Since the 1950s, numerous studies have been performed within the surveillance and reconnaissance (S&R) and target acquisition (TA) communities in an attempt to predict information extraction performance as a function of image collection and quality parameters. In general, the work followed two separate paths. The TA community developed models to predict probabilities of detection, recognition, and identification as a function of target size, range, and collection system design/performance parameters (e.g., MRT, FLIR92, NVTHERM, MRC). The S&R community developed models to predict National Imagery Interpretability Rating Scale (NIIRS) values as a function of system design and collection parameters (e.g., IR GIQE). More recently, efforts have linked the two approaches such that NIIRS can be predicted from TA models and probabilities of identification can be predicted from NIIRS.

With both approaches, resolution is a dominant term. A considerable amount of variability and uncertainty results from target differences. The criteria used to define the NIIRS generalize target type, size, and level of identification specificity. The TA predictions use the Johnson recognition criteria to relate lines on the target to recognition performance.

A recent paper found that TA predictions differed substantially between the visible and IR. Further, the paper reported substantial differences among vehicles in terms of a confusion matrix. This finding was not surprising in light of other research, but it suggested the need for a more detailed examination and explanation of results. Accordingly, the current effort was undertaken. Data from a variety of past studies dealing with target recognition were examined relative to the Johnson criteria, along with a more detailed analysis of data from two recent TA studies. A hypothesis of target recognition performance was generated and partially validated using available data.

Key words: Johnson criteria, resolution, recognition performance

1. INTRODUCTION

The Johnson criteria provide the basis for current target acquisition models. Johnson presented various military targets to observers through electro-optical viewing devices.1 Target range was increased until the target could barely be identified (e.g., M-48 tank), recognized as to the type of target (e.g., tank, APC, truck), or detected. A bar pattern was placed in the same field of view and its spatial frequency increased until it could just be resolved at the same range as the target. In this manner, the number of resolution cycles required to achieve some level of performance for each target was defined. In subsequent Night Vision and Electronic Sensors Directorate (NVESD) studies, prediction models were refined.2-4 Corrections were made for the length of the target, and two-dimensional criteria were developed using the geometric mean of the target height and width (as seen by the sensor or observer). Table 1.1 shows the Johnson (cycles across minimum dimension) and current NVESD criteria (cycles across geometric mean).

A key element of the Johnson criteria is the requirement for adequate contrast and signal-to-noise ratio. In early work by others, this requirement was sometimes overlooked.5 The criteria are defined at the 50% performance value; an equation is used to define performance at other levels. It was originally defined as:

    P(N) = (N/N50)^E / [1 + (N/N50)^E],  where E = 2.7 + 0.7(N/N50)          (1.1)

* jcleachtr@aol.com; phone 1 434 973-9582; fax 1 434 973-9582
where N50 is the required number of cycles for a 50% level of performance and N is the actual number of cycles.2 The exponent is also defined as 1.73 and 3.73.6,7 With the exponent in Eq. 1.1, the number of cycles required to achieve 90% performance is 1.75 times the number for 50% performance. With the 1.73 exponent, the ratio is 3.56; with the 3.73 exponent it is 1.8.
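These ratios can be checked numerically. The sketch below (function and variable names are mine, not from the paper) implements Eq. 1.1 and recovers the 90%/50% cycle ratio for each exponent choice:

```python
def p_id(n, n50, exponent=None):
    """Probability of identification per Eq. 1.1.

    n: actual number of resolvable cycles on the target.
    n50: cycles required for 50% performance.
    exponent: fixed exponent; None uses the range-dependent
    form E = 2.7 + 0.7*(n/n50) from Eq. 1.1.
    """
    x = n / n50
    e = 2.7 + 0.7 * x if exponent is None else exponent
    return x**e / (1.0 + x**e)

def cycles_for(p_target, n50, exponent=None):
    """Invert Eq. 1.1 by bisection: cycles needed for p_target."""
    lo, hi = 1e-6, 100.0 * n50
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if p_id(mid, n50, exponent) < p_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

n50 = 6.4  # Johnson identification criterion (cycles)
print(p_id(n50, n50))                              # 0.5 by construction
print(cycles_for(0.9, n50) / n50)                  # ~1.75 (Eq. 1.1 exponent)
print(cycles_for(0.9, n50, exponent=1.73) / n50)   # ~3.56
print(cycles_for(0.9, n50, exponent=3.73) / n50)   # ~1.8
```

The 90%/50% ratio is independent of N50; only the exponent matters.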

Table 1.1
NVESD-Modified Johnson Criteria
______________________________________________________________________
Task            Description             Cycles across        Cycles across
                                        minimum dimension    geometric mean
Detection       Target is military      1.0±0.25             0.75
                vehicle
Recognition     Vehicle type            4.0±0.8              3.0
                (tank, APC, truck)
Identification  Vehicle model           6.4±1.5              6.0

The General Image Quality Equation (GIQE) uses sensor resolution and other physical quality factors to predict NIIRS values.8,9 NIIRS values define the tasks that can be performed on a given image in terms of object type, size, and level of identification.10 Recent studies have related the NVESD and GIQE predictions for the IR.11-13 A relationship has also been shown between the NIIRS and the Johnson criteria.14

A recent study using both visible and IR imagery showed substantial differences between the two image types relative to the Johnson/NVESD criteria.5 The same study also showed substantial variability as a function of target type. This prompted a review of the Johnson criteria, both in terms of this recent study and in terms of several previous studies relating resolution and recognition.

2. BACKGROUND

Johnson's criteria were initially published in 1958.1 Many similar studies were performed after Johnson's work, most investigators apparently unaware of the Johnson data. Whereas Johnson was concerned with direct viewing through optical systems, most of the later studies dealt with television and electro-optical line scan systems. Johnson, as well as several other investigators, was concerned with target acquisition from the ground. Others were concerned with observation from aerial platforms. A review of these other studies is of interest because of both similarities and differences relative to the Johnson criteria.

Although Johnson's study was one of the first relating resolution and recognition, efforts along these lines continued well into the seventies. A wide spectrum of target types was studied, ranging from simple shapes and Landolt Cs to industrial targets. Viewing aspect ranged from elevation views to plan views and included both low and high oblique views. It was recognized that visual subtense was an important factor, and thus target (or resolution line) subtense was varied in many of these studies. Target subtense is implicitly treated in the NVESD/Johnson approach by the requirement to resolve bar patterns. Results of these previous studies are briefly reviewed in the following sections.

2.1 Symbol Identification

Baker and Nicholson studied Landolt C and alphanumeric symbol recognition.15 For the Landolt Cs, they found a performance threshold (the point at which performance no longer improved) at 5 lines and 7 minutes of arc per C. Performance decreased at 4 lines and 5 minutes of arc. For letters and numbers, the threshold was 16 lines and 15 minutes of arc. In a third study, they used a set of silhouettes with objects ranging from a table fork to a heavy jet aircraft. Symbols varied in height/width ratio and were presented at two orientations (major axis vertical and horizontal). The performance threshold was reached at about 7 lines through the minor axis and 15 lines through the major axis.

Erickson and Hemingway studied geometric symbol recognition at three levels of lines per symbol and visual angle per symbol.16 At the smaller visual angles, probably not all of the lines could be resolved, and performance did not reach an asymptote. Performance on individual symbols varied from 50% to 97% correct, with performance seemingly varying as a function of symbol similarity to other symbols. For example, the circle and hexagon were the most frequently confused symbols. The differences that distinguished the symbols were much smaller than the symbols themselves, on the order of 20% of the symbol height or less.

In an unpublished study, this author studied Snellen letter recognition (10 letters) at three visual angles over a range of lines/symbol.17 Snellen letters have a 1:1 aspect ratio, and stroke width is 20% of overall width. Substantial
differences occurred between different letters, with performance ranging from 9 errors ("L") to 106 errors ("N"). For letter pairs, errors ranged from 0 (F vs. H, L, and T) to 60 (F vs. P). Error frequency was related to differences between letters. The differences between letters ranged from 0.5 to 3 stroke widths. If the difference could not be seen, recognition could not occur. The data indicated a requirement of one raster line per stroke (5 lines per letter) and 2 minutes of subtense per line or stroke.

2.2 Vehicle Identification

Rosell and Willson studied the effects of subtending scan lines on identification using an approach similar to Johnson's.18 Using oblique views of a sample of five tanks, they found that 13 TV lines (~6.5 cycles) were needed for identification. This value is virtually identical to the Johnson value of 6.4 cycles. However, a 42% difference in the number of required lines as a function of tank model was reported. It was attributed to differences in model similarity.

Erickson and Hemingway studied vehicle identification using target silhouettes.16 The number of raster lines per vehicle (3.7, 7, 10.8) and vehicle subtense (4.4, 6.0, and 10.2 minutes of arc) were varied. A confusion matrix indicated that performance as a function of vehicle type ranged from 61% to 99%. A performance threshold appeared at 7 lines per vehicle (3.5 cycles) for the 4.4 and 6.0 minute subtenses; maximum performance was achieved at the 10.8 lines/10.2 minute subtense condition.

In a follow-on study, they used oblique photos of vehicle models on two different backgrounds (sand and foliage).16 Vehicle subtense (minimum dimension) was 6, 10, and 14 minutes; the number of subtending lines was 6, 10, and 15. A performance asymptote was observed at 10 lines (5 cycles) for all angular subtenses; performance was highest at the 14 minute subtense. Performance was better for the foliage background than the sand. Performance as a function of vehicle ranged from 30% to 93%. Note that Erickson and Hemingway reported TV lines, whereas Johnson and Rosell and Willson reported the number of scan lines required to achieve 50% performance. Further, Johnson's criteria were in terms of cycles; the other studies reported in terms of scan lines. Although it is convenient to treat one cycle as two scan lines, this is only a crude approximation.
Wagenaar and van Meeteran studied vehicle identification on line scan imagery.19 They found a requirement for 5-10 lines per minimum vehicle dimension, but concluded that target type differences were important and were related to characteristics of the target other than overall size.

Scott, Hollanda, and Harabedian studied vehicle identification using vertical and oblique views of 25 different vehicles.20 The vehicles were oriented at 10 to 30 degrees from the scan lines so as to maintain a constant number of subtending scan lines across the minimum dimension. The number of subtending scan lines ranged from 4 to 30; scan line subtense was 9 minutes. It was concluded that about 20 scan lines were required to achieve 80-90% performance. In a follow-on study, Hollanda, Scott, and Harabedian studied the effects of scan lines and signal-to-noise ratio.21 A total of 20 vertical vehicle views were used. Ten of the vehicles were tanks and ten were "miscellaneous" vehicles (trucks, engineering equipment). Results showed a requirement for 30 lines for the tanks and at least 45 lines for the miscellaneous vehicles.

The Scott/Hollanda vehicles and those used in the other referenced studies differ significantly. The Scott and Hollanda vehicles tended to be plan views, whereas the other studies used side profiles or low oblique views. The plan views required use of internal detail; the vehicles used in the other studies could be identified largely on the basis of outline shape. Contrast and possibly size of the identifying details thus differed.

2.3 Aircraft Identification

Lacey conducted a study in which televised aircraft model photos were presented to 15 observers.22 The observers were asked to identify the photos as one of six different fighter aircraft. The 18 photos consisted of head-on, side, and oblique views of the six aircraft against uniform backgrounds. Observers were shown the original photos and asked to identify the televised images as one of the 18 aircraft/orientation combinations.

The camera zoom lens was set to display the aircraft images at 7.2, 10.1, and 14.4 scan lines (active TV lines) per aircraft height. Observers were seated at a defined distance from the display (using a head rest) such that the vertical dimension of the targets subtended 6, 10.2, and 14.4 minutes of arc.

Results indicate that performance was still improving at the best viewing condition, implying the need for additional scan lines or target subtense. Of equal interest was the effect of target type and orientation. Performance (across all conditions) ranged from 65% to 85% correct as a function of aircraft model and from 59% to 87% as a function of orientation. Performance was lowest on the oblique orientation and highest on the side. For the head-on and side views, height was measured from the bottom of the fuselage to the top of the tail. For the oblique view, it was
measured from the lowest point on the wing or horizontal stabilizer to the tip of the nose. The oblique views were at a smaller scale than the side and end views.

At each orientation, certain aircraft showed far more errors than did others. It is apparent that some aircraft were frequently confused for others. In the side view, for example, the F-14 and MiG-21 were frequently confused for each other and accounted for 27% of the total side view errors. The MiG-21 and MiG-23 confusion accounted for 26% of total errors. In the head-on view, the A-4 and F-4 were most frequently confused (44% of total errors). In the side view, they accounted for only 8% of the total errors.

Some insight can be gained by inspecting schematic drawings of the aircraft pairs. Figure 2.1 shows recognition errors for front views. Lines connecting aircraft pairs show the percentage of recognition errors. The A-4 and F-4 have two engine intakes above the wing; the F-14 and MiG-21 have rectangular intakes on both sides of the fuselage below the wings. This is reflected in the front view error rates.

Figure 2.1 Front view error rates.22 (Schematic: aircraft A-4, A-7, F-4, F-14, MiG-21, and MiG-23 connected by lines labeled with the percentage of recognition errors for each pair; the A-4/F-4 pair shows the highest rate, 44%.)
In two related studies (Jones, Leachtenauer, and Pyle,23 Leachtenauer and Jones24), observers were asked to determine the equivalent ground resolution (one cycle or two lines) at which aircraft identification features could be identified. Equivalent ground resolution was defined in terms of visual angle subtense (cycles/degree), assuming a 53 cycle/degree visual resolution capability. Equivalent ground resolution was varied by changing viewing distance. Plan view silhouettes were used in the first study, aerial photos in the second. The features required to uniquely identify 16 fighter and attack aircraft were defined and the ground resolved distance (GRD) for those features specified. GRD is the width of a cycle (bar and space) on the ground. Some aircraft could be identified at 16 foot GRD; 2 foot GRD was required to identify all but two of the aircraft.

The aircraft ranged in length from 37 to 95 feet and in wing span from 27 to 70 feet. The geometric mean of wingspan and length ranged from 34 to 74 feet. Using the geometric mean and one resolution cycle as two lines, the number of resolution lines needed for identification ranged from 3 to 44. There was thus no simple relationship between identification performance and number of subtending lines.
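The line counts above follow directly from the GRD definition: one cycle (two lines) has ground width GRD, so lines = 2 × GM / GRD across the geometric mean dimension. A minimal sketch (the specific GM/GRD pairings below are illustrative, not tied to particular aircraft in the study):

```python
def subtending_lines(geom_mean_ft, grd_ft):
    """Resolution lines across the geometric-mean dimension.

    GRD is the ground width of one resolution cycle (bar + space);
    one cycle is counted as two lines, so lines = 2 * GM / GRD.
    """
    return 2.0 * geom_mean_ft / grd_ft

# Smallest geometric mean in the sample (34 ft) at the coarsest useful GRD (16 ft):
print(subtending_lines(34, 16))   # 4.25 lines, near the low end (3) reported
# A hypothetical 44 ft geometric-mean aircraft requiring 2 ft GRD:
print(subtending_lines(44, 2))    # 44.0 lines, the high end reported
```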
In a later study, aircraft silhouettes were displayed on an 8-bit CRT at 0.5, 1, and 2 minutes per raster line.17 Observers (three) were required to identify which of 10 aircraft was presented, using a hardcopy key for comparison. The silhouettes were presented one at a time so relative size could not be used as a cue. An ascending and descending method of limits was used in which the number of subtending raster lines was both increased and decreased until 100% correct performance was achieved.

Results showed that the number of subtending lines did not accurately predict performance. A confusion matrix showed significant target differences as well as response biases. Some targets were more often misidentified than others, and where target "A" might frequently be called target "B", the reverse was not always true.

2.4 Other Target Types

Brainard, Hanford, and Marshall used models of buildings, bridges, storage tanks, and aircraft imaged by a TV system.25 Observers were required to identify targets (using a photo as a briefing aid) on a simulated flyover. The range
at which correct identification occurred was recorded and translated to visual subtense and number of subtending TV lines. Target subtense for 50% identification performance ranged from 28 to 35 minutes. The required number of scan lines ranged from 5.8 to 7.

Leachtenauer and Boucek studied detail analysis tasks on aerial photos at three levels of ground resolution and three levels of magnification.26 Observers were required to respond to questions regarding details in the scene such as aircraft wing shape, number of engines, location of roof vents, etc. The sizes of the cues needed to answer the questions were determined. It was concluded that the cue needed to be subtended by one to two cycles and a cycle needed to subtend one to two minutes of arc.

2.5 Summary

Taken together, the studies indicate a rather wide range in the number of resolution lines required for identification. Table 2.1 summarizes results. A wide variation in the required number of subtending lines is evident. Within some of the studies summarized, the range is even greater (3-44 for Reference 24). The variation appears to be greater when top or plan views of the target are used. It is also apparent that a resolution line must subtend 1-2 minutes of arc. The exception is the Brainard et al. data (Reference 25); target motion may have increased the required value.

Table 2.1
Results Summary

Target              View      Dimension     Req. Lines1  Subtense (min)  Sub./Line  % Correct2  Lines@50%3  Reference
Letters & numbers   --        Height        16           15              0.9        95          7.6         15
Symbols             --        Height        10           20              2          95          4.8         15
Symbols             --        Height        7            10              1.4        98          2.9         16
Snellen letters     --        Height        5            10              2          95          2.4         17
Military equip.     Side      Min. dimen.   12.8         N/A             N/A        50          12.8        1
Vehicles            Side      Min. dimen.   13           N/A             N/A        50          13          18
Vehicles            Side      Min. dimen.   7            10              1.4        95          3.3         16
Vehicles            Oblique   Min. dimen.   10           14              1.4        75          7.4         16
Vehicles            Top       Min. dimen.   20           N/A             N/A        90          11.4        20
Tanks               Top       Min. dimen.   32           N/A             N/A        98          13.3        21
Misc. vehicles      Top       Min. dimen.   48           N/A             N/A        92          26          21
Aircraft            F/S/Obl.  Min. dimen.   14.4         15              1          84          9.3         22
Aircraft            Top       Geom. mean    18           20              1.1        90          10.3        24
Aircraft            Top       Wing span     20           40              2          100         7.33        17
Various tgts.       Oblique   Min. dimen.   5.8 to 7     28 to 35        5          50          5.8 to 7    25
Features/cues       Top       Min. dimen.   3 to 4       3 to 4          1          100         1 to 1.43   26

1 Lines are raster lines, TV lines, or 1/2 resolution cycle. Divide lines by 2 to obtain cycles.
2 A value of 99% correct was used to compute the 50% threshold.
3 The 50% threshold was computed using Eq. 1.1.
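The Lines@50% column can be approximated by numerically inverting Eq. 1.1, substituting 99% for entries observed at 100% per footnote 2. A sketch (helper names are mine; small differences from the tabulated values reflect rounding and solver details):

```python
def p_id(n, n50):
    """Eq. 1.1 with the range-dependent exponent E = 2.7 + 0.7(N/N50)."""
    x = n / n50
    e = 2.7 + 0.7 * x
    return x**e / (1.0 + x**e)

def lines_at_50(req_lines, pct_correct):
    """Given observed lines and % correct, solve Eq. 1.1 for N50 by bisection."""
    p = min(pct_correct, 99.0) / 100.0  # 99% substituted for 100% (footnote 2)
    lo, hi = 1e-6, 10.0 * req_lines     # N50 bracket; p_id falls as n50 grows
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if p_id(req_lines, mid) > p:    # n50 too small: performance too high
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(lines_at_50(16, 95), 1))   # ~7.8 (tabulated as 7.6)
print(round(lines_at_50(20, 100), 1))  # ~7.4 (tabulated as 7.33)
```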

Johnson,1 Rosell and Willson,18 and Brainard et al.25 defined requirements at the 50% level of performance. The remainder of the referenced studies attempted to find a performance asymptote. Not all of these studies reached the asymptote, and few showed performance at the 50% correct level used by Johnson and Rosell and Willson. A conversion to the 50% level was therefore made using Eq. 1.1 and is shown in the table (Lines@50%).
In several of the studies, it appeared that variations in performance were related to the features that distinguished a given target from the remainder. Target confusion matrices were decidedly non-uniform. Although it appears obvious, targets that looked alike were more often confused with each other than those that did not. Logic would suggest that it is the ability to see distinguishing features that defines identification performance, and that this ability may not be uniformly related to overall target size.

Studies using symbols and letters appear to show that the feature distinguishing one letter or symbol from another must be subtended by at least one raster line (and this raster line must be of a size sufficient to be resolved). Where sampling is an issue, the requirement may increase to two lines or one cycle. It thus appears logical to extend this reasoning to more complex objects. Any two objects will have a set of distinguishing features at a given orientation. The features may relate to external shape, or to the presence of internal (to the object outline) features or detail. In order for identification to occur, the features must be resolved by both the imaging system and the observer.
3. VALIDATION STUDIES

As a means of attempting to validate the hypothesis regarding features and line requirements, two sets of data were examined in detail. The first data set represented responses from 10 trained observers viewing IR images of 12 military vehicles.13 The second used both IR and visible spectrum imagery of the same vehicles; nine observers participated.6 Figure 3.1 shows visible spectrum side views of the vehicles.

Figure 3.1 Visible image side views (2S3, M2, BMP, M109, M113, ZSU, M1A1, M551, M60, T55, T62, T72).

In the second study, silhouette views of the visible targets as well as images with the backgrounds removed were also tested. The silhouette views eliminated all internal detail and left only the vehicle shape. The views with the background removed left the internal detail and also increased the contrast of the vehicle outline. Figure 3.2 shows an example. The second study also removed some of the more obvious distinguishing features from some of the vehicles. In the two studies, vehicles were imaged from eight different orientations (both sides, front and rear, four oblique angles). Increases in viewing range (and decreases in number of subtending lines) were simulated by applying a Gaussian blur filter over varying numbers of pixels (5 to 30). Total blur was a combination of the Gaussian blur, display blur, and the contrast transfer function (CTF) of the human visual system. A counterbalanced experimental design was used such that for each target/blur combination, only two of the elevation/orientation conditions were used. All targets and all elevation/orientation conditions were represented at each blur condition. Blur and orientation were thus confounded. The number of cycles subtending the target was defined using the intersection of the human CTF and system MTF and a contrast of 0.25.
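The cycle calculation described in the last sentence can be sketched as follows. The Gaussian MTF and the exponential CTF below are illustrative stand-ins (the paper does not give its functional forms); only the method, finding the frequency at which displayed target modulation falls to the visual threshold and then counting cycles across the target's angular subtense, is taken from the text:

```python
import math

def system_mtf(f, sigma_deg):
    """Gaussian-blur MTF; f in cycles/degree at the eye, sigma in degrees."""
    return math.exp(-2.0 * math.pi**2 * sigma_deg**2 * f**2)

def human_ctf(f):
    """Toy contrast threshold: rises from 0.005 at DC to 1.0 at an
    assumed 53 cycle/degree limiting frequency (full contrast)."""
    return 0.005 * math.exp(f * math.log(1.0 / 0.005) / 53.0)

def limiting_frequency(target_contrast, sigma_deg):
    """Frequency where displayed modulation drops to the eye's threshold.

    target_contrast * MTF(f) decreases while CTF(f) increases, so the
    crossing is unique and bisection on [0, 53] converges to it.
    """
    lo, hi = 0.0, 53.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if target_contrast * system_mtf(mid, sigma_deg) > human_ctf(mid):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

f_star = limiting_frequency(0.25, 0.01)  # 0.25 target contrast, as in the study
cycles = f_star * 0.2                    # cycles on a target subtending 0.2 deg
print(f_star, cycles)
```

Heavier blur (larger sigma) lowers the crossing frequency and hence the number of cycles credited to the target, which is how increased range was simulated.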

Figure 3.2 Full image, silhouette, and background-removed image examples.

3.1 Overall Results

Analysis of variance performed on both sets of IR data and the visible data set showed that observer, target type, blur, and orientation had statistically significant effects on identification probabilities. Elevation did not. All two-way interactions were also statistically significant. Observer performance (percent correct) varied over a range of 25 percentage points (e.g., 40% to 65% correct). Performance was generally best at side orientations and worst at end-on orientations. Figure 3.3 shows the effects of orientation for Study #2 when orientations were categorized as side, end, and quartering.6 Since the blur factor was constant and size varied as a function of orientation (and target), target cross section was generally largest at the side orientation and smallest at the end orientation. The number of subtending lines
was thus greater for the side view than for the end view by about 28%. Performance on the two end-on views (0° vs. 180°) did not differ significantly for the IR, but did for the visible, favoring the front view. Performance on the two side views (90° vs. 270°) was nearly identical for both the visible and IR. The front quarter views were favored for the visible, the rear for the IR. The difference between the visible and IR results was significantly greater for the quartering views as opposed to the side and end views.

Figure 3.3 Effect of target orientation, Study #2. (Chart: proportion correct for visible and IR imagery at side, end, and quartering views.)

In Study #1, the 50% threshold was achieved at 8.9 cycles.13 Study #2 required 11.5 cycles for the IR and 7.5 for the visible.6 Performance on individual vehicles (proportion correct) varied by as much as 3:1. The two IR studies showed a low correlation when scores for individual targets were compared (R2 = 0.20 with all targets). The visible and IR data in Study #2 showed an R2 of 0.58 with all data and a value of 0.95 with one target (T62) removed. The T62 showed low performance on the IR and good performance on the visible. The blur/orientation confounding in the two IR studies did not match; the confounding was the same for the IR and visible in Study #2.

3.2 Effect of Vehicle Type

In order to reduce the confounding effects of orientation and blur, data were normalized in terms of the orientations present at each blur factor. The resultant data are plotted in Figure 3.4 for the visible. Even with the normalization, performance differed by a factor of three or more across vehicles. The T-55 required 18 cycles (15-20 pixel blur) to achieve the 50% performance threshold. The relationship between blur and cycles in this comparison is based on the average vehicle size across all orientations. The M-109 and M-113 were still well above the 50% criterion at 6.7 cycles. The remainder of the vehicles appeared to reach the threshold in the range of 5 to 8 cycles. The IR data showed similar target variability.
Confusion matrices were generated for the three sets of data. Entries were the proportion of the total number of errors for the two targets compared. Table 3.1 shows those target pairs having an error rate ≥3 times the expected rate of 0.015 (1 ÷ total possible pairs). A comparison of the error rates in the error matrices for the two IR data sets showed some similarities, but also some rather large differences. The M109/2S3 were frequently confused, as were the T55/T62/T72. The T55/M551 were confused in the visible, but not the IR. In all cases, it was evident that some target pairs were frequently confused and others almost never confused. A Chi-square test showed many of the differences to be statistically significant. Error rates for vehicle pairs differed by factors of 25:1 or more.

Also of interest were the vehicles or vehicle pairs that were seldom misidentified or confused. Vehicle pairs showing error rates one third or less of the expected rate were identified for each of the confusion matrices. The M-113 stood out in Study #2 as having low error rates for both the visible and IR; no target stood out in Study #1.
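The pair screening used for Table 3.1 can be sketched as follows. The 4-vehicle confusion matrix here is invented for illustration (the studies used 12 vehicles, giving 66 pairs and hence the 0.015 expected rate):

```python
# Rows = true target, columns = response; off-diagonal entries are errors.
labels = ["M109", "2S3", "T55", "T62"]   # illustrative subset, made-up counts
cm = [[90,  8,  1,  1],
      [ 7, 91,  1,  1],
      [ 1,  1, 96,  2],
      [ 1,  1,  2, 96]]

k = len(labels)
total_errors = sum(cm[i][j] for i in range(k) for j in range(k) if i != j)
expected = 1.0 / (k * (k - 1) // 2)      # uniform share per pair (0.015 for 12 vehicles)

# Pair error rate: both directions of confusion as a share of all errors.
flagged = []
for i in range(k):
    for j in range(i + 1, k):
        rate = (cm[i][j] + cm[j][i]) / total_errors
        if rate >= 3 * expected:         # the >=3x-expected screen of Table 3.1
            flagged.append((labels[i], labels[j], round(rate, 2)))

print(flagged)   # [('M109', '2S3', 0.56)]
```

Summing both directions (i confused for j and j for i) treats the matrix as symmetric for screening, even though, as noted in Section 2.3, response biases can make the raw matrix asymmetric.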
Figure 3.4 Effect of blur on orientation-normalized visible targets. (Two panels: proportion of correct responses vs. cycles for M2, M551, M-60, T55, T-62, T-72, M1A1 and for 2S3, BMP, M-109, M-113, ZSU, M2.)

Table 3.1
High Error Rate Vehicle Pairs

Study#1 IR   Error Rate   Study#2 IR   Error Rate   Study#2 Vis.   Error Rate
M109/2S3     0.05         M109/2S3     0.05         M109/2S3       0.06
T62/T72      0.07         T55/T62      0.07         T55/T62        0.05
M1A1/T62     0.05         T62/T72      0.06         T55/T72        0.07
                                                    M551/T55       0.07
                                                    2S3/M60        0.06

3.3 Effect of Image Type

Confusion matrices were generated for the visible silhouette data and the visible data with the background removed, and compared to the error rates for the full images. Table 3.2 shows the vehicle pairs and error rates for those pairs having rates three times the expected rate or more. The overall error rate was significantly higher (143%) for the silhouette data as opposed to the full image data. This suggests that more than overall shape was involved in the identification of the visible images. A comparison of the full image error matrix with the silhouette matrix showed both similarities and differences in error rates for individual vehicle pairs. In both cases, the M-109 and 2S3 were frequently confused for each other. On the other hand, the silhouettes of the M551 and M60 were frequently confused; the images were not. The correlation between the two sets of error proportions was R2 = 0.46. The number of errors with the background removed was about the same as when the background was present, and the error matrix was similar (R2 = 0.60). This suggests that background cues were not a major factor in identification, although they may have affected specific vehicle pairs. Also, backgrounds did not degrade performance.

3.4 Effect of Orientation

To further examine the data, errors for the side, end, and quartering views were examined separately for the visible images. The distribution was different for the three views, indicating that confusion between targets is a function of view (the target-by-view interaction term was statistically significant). There were 65 side view errors, 100 quartering view errors, and 180 end view errors (out of a total possible of 432 for each). The distribution of errors for the end views was somewhat more even than that for the side views. Table 3.3 shows the more frequently confused vehicle pairs for
the side, end, and quartering views. Note that there were few similarities between the end view confusion pairs and those for the side and quartering views.

Table 3.2
Effect of Features on High Error Rates

Full Image   Error Rate   Silhouette   Error Rate   No Back'gd   Error Rate
M109/2S3     0.05         M109/2S3     0.07         M109/2S3     0.07
T55/T62      0.05         T55/T62      0.07         T55/T62      0.10
T55/T72      0.06         T55/T72      0.09         T55/T72      0.06
M551/T55     0.06         T62/T72      0.08         T62/T72      0.05
2S3/M60      0.05         M551/M60     0.05         M1/T55       0.05

Table 3.3
Effect of View on Confusion Targets

Side View    Error Rate   End View     Error Rate   Quartering View   Error Rate
M109/2S3     0.08         M109/BMP     0.07         M109/2S3          0.09
T55/T62      0.14         M109/M1      0.07         T55/T62           0.05
T55/T72      0.08         T55/T72      0.08         T55/T72           0.08
M551/T55     0.17         BMP/M1       0.06         T55/M1A1          0.05
M551/M60     0.06         BMP/T72      0.05         T62/T72           0.08
T55/M1       0.06                                   M551/BMP          0.07
T55/ZSU      0.08                                   M551/M1A1         0.06
T62/ZSU      0.08                                   2S3/M60           0.06

Data for the si
