PTO/SB/16 (2-98)
Approved for use through 01/31/2001. OMB 0651-0037
Patent and Trademark Office; U.S. DEPARTMENT OF COMMERCE
Under the Paperwork Reduction Act of 1995, no persons are required to respond to a collection of information unless it displays a valid OMB control number.

PROVISIONAL APPLICATION FOR PATENT COVER SHEET
This is a request for filing a PROVISIONAL APPLICATION FOR PATENT under 37 CFR 1.53(c).
`
INVENTOR(S)

Given Name (first and middle [if any])   Family Name or Surname   Residence (City and either State or Foreign Country)
Michael W.                               Masters                  Fredericksburg, VA
Paul V.                                  Werme                    Dahlgren, VA
Lonnie R.                                Welch                    Athens, OH
[ ] Additional inventors are being named on the ___ separately numbered sheets attached hereto
`
TITLE OF THE INVENTION (280 characters max)
A Method And Apparatus For Resource Management
`
Direct all correspondence to:

CORRESPONDENCE ADDRESS

[X] Customer Number: 23501
OR
[ ] Firm or Individual Name

James B. Bechtel, Esq.
Naval Surface Warfare Center (Code CD222)
17120 Dahlgren Road
Dahlgren, VA 22448-5100
United States
Telephone: 540-653-8061   Fax: 540-653-7816
ENCLOSED APPLICATION PARTS (check all that apply)
[X] Specification   Number of Pages: 279
[ ] Drawing(s)   Number of Sheets: ___
[ ] Small Entity Statement
[ ] Other (specify): ___

METHOD OF PAYMENT OF FILING FEES FOR THIS PROVISIONAL APPLICATION FOR PATENT (check one)
[ ] A check or money order is enclosed to cover the filing fees
[X] The Commissioner is hereby authorized to charge filing fees or credit any overpayment to Deposit Account Number: 50-0967
FILING FEE AMOUNT ($): $150.00
`
`The invention was made by an agency of the United States Government or under a contract with an agency of the
`United States Government.
[ ] No.
[X] Yes, the name of the U.S. Government agency and the Government contract number are:
DEPT OF THE NAVY - NAVAL SURFACE WARFARE CENTER
`
Respectfully submitted,

SIGNATURE: [signature]
Date: 6/25/00
TYPED or PRINTED NAME: James B. Bechtel
REGISTRATION NO. (if appropriate): 29,890
Docket Number: NC 82155
TELEPHONE: 540-653-8061
USE ONLY FOR FILING A PROVISIONAL APPLICATION FOR PATENT
This collection of information is required by 37 CFR 1.51. The information is used by the public to file (and by the PTO to process) a provisional application. Confidentiality is governed by 35 U.S.C. 122 and 37 CFR 1.14. This collection is estimated to take 8 hours to complete, including gathering, preparing, and submitting the complete provisional application to the PTO. Time will vary depending upon the individual case. Any comments on the amount of time you require to complete this form and/or suggestions for reducing this burden, should be sent to the Chief Information Officer, U.S. Patent and Trademark Office, U.S. Department of Commerce, Washington, D.C., 20231. DO NOT SEND FEES OR COMPLETED FORMS TO THIS ADDRESS. SEND TO: Box Provisional Application, Assistant Commissioner for Patents, Washington, D.C., 20231.
`
`
`Ex.1009 / Page 1 of 280
`TESLA, INC.
`
`
`
`DRAFT
`
18 May 2000
`
`Description of Resource Management, NSWCDD patent case number TBD
`Michael W. Masters, NSWCDD, Code B35
`
Resource Management consists of a set of cooperating computer programs that provides the ability to dynamically allocate computing tasks to a collection of networked computing resources (computer processors interconnected on a network) based on the following measures: an application developer/user description of application computer program performance requirements; measured performance of each application program; measured workload (CPU processing load, memory accesses, disk accesses) of each computer in the network; and measured inter-computer message communication traffic on the network.
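For illustration only, the measures and the path concept described in this document might be represented as follows. This is a minimal sketch, not part of the filing; all type, field, and function names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class PerformanceGoal:
    app: str               # application program name (illustrative)
    max_cpu_time_s: float  # requested CPU execution time bound, in seconds

@dataclass
class AppMeasurement:
    app: str
    cpu_time_s: float      # observed CPU execution time, in seconds

def path_end_to_end(measurements, path):
    """End-to-end processing time of a path: the sum of the observed
    times of the programs that process data in sequence."""
    observed = {m.app: m.cpu_time_s for m in measurements}
    return sum(observed[app] for app in path)

def within_goal(goal, measurement):
    """True when an application's observed performance meets its goal."""
    return measurement.cpu_time_s <= goal.max_cpu_time_s
```

A path goal would then be checked by comparing `path_end_to_end` against a requested end-to-end bound, just as a per-application goal is checked against its requested CPU execution time.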
`
`The capabilities provided by Resource Management are as follows:
`
• Dynamically allocate computer programs to computers within a network based on a user statement of computer program performance goals
• Dynamically change allocation according to changing system loading conditions
• Change allocations based on manual operator direction
• Dynamically adjust to overall computer workload by balancing processing loads among a number of scalable, replicated load sharing programs
• Dynamically compensate for computer failures and network link failures by restarting copies of lost computer programs on surviving computers within the network
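The failure-compensation capability above depends on detecting lost computers. A heartbeat-style detector of the kind this document later describes can be sketched as follows; this is illustrative only, and the names and timeout policy are assumptions, not from the document.

```python
def detect_failures(last_heartbeat, now, timeout_s):
    """Given the time each computer's most recent periodic heartbeat
    message was received, report the computers presumed failed or
    unreachable: those silent for longer than the timeout."""
    return sorted(host for host, seen in last_heartbeat.items()
                  if now - seen > timeout_s)
```

In the document's terms, such a report would go to the instrumentation subsystem so that the allocator can restart the lost programs on surviving computers.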
`
Resource Management consists of the following computer program components:
`
• A Performance Specification Language whereby application developers/users define the performance goals they want Resource Management to ensure for each application. Application computer program performance requirements, or performance goals, consist of requested CPU execution times for each application. A performance goal may also be specified for the end-to-end processing time of a combination of several computer programs which are designed to process data in a sequence (referred to as a path). In a path, each computer program in sequence performs a defined set of processing steps and then passes its data to the next computer program in the path.
`
• A Specification Language Processor Program that converts application developers'/users' requirements into instructions for action by the remainder of Resource Management.
`
• An Operating System Instrumentation Subsystem that collects measured performance data from each computer in the network. This subsystem consists of two types of components. The first is an Operating System Instrumentation Data Collector Program, a copy of which runs on each computer in the network and collects computer performance data from the operating system on which it resides. The second is a centralized Operating System Measurement Repository Program that accumulates operating system instrumentation data from all the collector programs. The collector programs periodically report the data they have collected to the central operating system measurement repository program.
`
`• A System Health Monitor Subsystem, consisting of a heartbeat mechanism (periodic messages to
`all computers in the network). The System Health Monitor Subsystem detects the failure of any
`computer in the network or the loss of a network link within the overall network and reports this
`information to the Operating System Instrumentation Subsystem.
`
• An Application Instrumentation Subsystem that collects measured performance data from each application running under the scope and control of Resource Management. This subsystem consists of two types of components. The first is an Application Program Instrumentation Data Collector Program, a copy of which runs on each computer in the network and collects computer performance data from the application computer programs running on the computer on which the collector program resides. The second is a central Application Program Measurement Repository Program that accumulates application instrumentation data from all the collector programs. The collector programs periodically report the data they have collected to the central application measurement repository program.
`
`• A Resource Allocation Program that utilizes measurement information from both the Operating
`System Measurement Repository Program and the Application Program Instrumentation Data
`Collector Program to make decisions concerning the allocation or assignment of computer
`programs to computers within the network. It compares the observed performance of each
`application program with the application developer/user requested performance level. For each
`application, if the application's performance is within bounds specified by the application
`developer/user, the resource allocation program makes no change of allocation to the system
`(computers, network and applications). If one or more applications are found to be performing in
`a less than satisfactory manner compared to the performance goals specified by the application
`developer/user, or if based on trend analysis they are projected to begin performing in a less than
`satisfactory manner, or if a computer failure or network link failure has been detected in the
`network, then the Resource Allocation Program examines data on the measured loading and
`performance of each computer in the network from the operating system instrumentation data
collector program, applies an optimization algorithm, and selects a configuration change, or application computer program reassignment to a different computer, designed to restore the application's performance to the level specified by the application developer/user. The Resource Allocation Program sends the configuration change request to a Program Control Subsystem and its agents for implementation (see description of the program control component below). The Resource Allocation Program selects one of the following actions:
`
o If the computer program that is not meeting performance goals has been designed as a scalable, replicated load-sharing computer program, then the Resource Allocation Program will select a computer from the network which has sufficient reserve capacity to provide adequate processing services and will direct the Program Control Subsystem to load and initialize a second (and eventually a third, and a fourth, etc.) copy of the application program that is not meeting its performance goals.

o If the program that is not meeting its performance goals is not a scalable, replicated load-sharing program, then the Resource Allocation Program will direct that the Program Control Subsystem move it to a different computer. This move operation consists of starting a new copy of the application program that is not meeting its performance goal on a computer with the reserve capacity to run the program satisfactorily and then shutting down the copy of the application program that is not meeting its performance goals.

o If a computer or network link has failed, then the Resource Allocation Program selects one or more computers in the network with the capacity to run the applications on the computer or computers that have failed or that have been isolated from the rest of the network by the failure of the network link. It will direct the Program Control Subsystem to load and initialize copies of all application programs that have been rendered inoperable by the computer failures or network link failure.
`
`• A Program Control Subsystem that receives resource allocation configuration changes from the
`Resource Allocation Program and carries them out. The Program Control Subsystem consists of a
`Program Control Program and a set of Program Control Agents, one of which resides on each
`computer in the network. The Program Control Program has two modes of operation: a manual,
`Program Control Program Operator activated mode and an automatic mode commanded by the
`Resource Allocation Program. When the Program Control Program receives a configuration
`change directive, either from the Program Control Program Operator or the Resource Allocation
`Program, it sends a command to the Program Control Agent on the computer where the
`configuration change operation is to take place. The Program Control Agent on that computer
`performs the appropriate action by means of interaction with the operating system of the computer
`on which it resides and by means of interaction with the file system of the computer network.
`
`
o If the requested configuration change results in starting a new program on the designated computer, then the Program Control Agent sends commands to the file system causing the new program to be loaded across the network and initiated on the designated computer.

o If the requested configuration change results in shutting down a program on the designated computer, then the Program Control Agent sends commands to the operating system causing the program to be stopped.
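The three allocation choices described above (add a replica of a scalable program, move a non-scalable program, or restart programs lost to a failure) can be sketched as a single decision routine. This is an illustrative sketch only; the spare-capacity model, the action tuples, and all names are assumptions, not from the document.

```python
def plan_action(scalable, host_failed, spare_capacity, demand):
    """Select one configuration change for a program that is missing its
    performance goal, mirroring the three cases described above:
      ('restart',   host) - restart a program lost to a failed computer
      ('replicate', host) - start another copy of a scalable program
      ('move',      host) - start a copy elsewhere, then stop the old one
    spare_capacity maps each surviving computer to its reserve capacity;
    demand is the load the program is expected to place on its host."""
    # Prefer the computer with the most reserve capacity that can hold the load.
    candidates = [h for h, spare in spare_capacity.items() if spare >= demand]
    if not candidates:
        return None  # no computer has sufficient reserve capacity
    target = max(candidates, key=lambda h: spare_capacity[h])
    if host_failed:
        return ("restart", target)
    if scalable:
        return ("replicate", target)
    return ("move", target)
```

The selected action would then be handed to the Program Control Subsystem, whose agents carry out the start and stop commands on the designated computers.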
`
Based on long-term oversight and technical direction of the Resource Management capability from its inception as a part of the joint DARPA and Navy funded HiPer-D program and the DARPA follow-on Quorum program, it is my assessment that three individuals have contributed substantially to the invention of the concept and architecture of Resource Management. The initial concept and design were developed by the author, Michael W. Masters, and by Dr. Lonnie Welch while he was on sabbatical at NSWCDD as a visiting professor. Subsequently, Mr. Paul Werme added substantial technical detail to the architecture. Two individuals have been predominant in the detailed design of the implementation of the components of Resource Management described above and in the demonstration and verification that the Resource Management concept is realizable. These are Dr. Lonnie Welch and Mr. Paul Werme. In addition, Mr. Larry Fontenot may have contributed substantially to the invention of the Performance Specification Language and the Specification Language Processor Program.
`
This assessment, along with the technical accuracy and completeness of the description provided above, is solely that of the author and should be considered preliminary, subject to review and clarification by Dr. Welch and Mr. Werme. To the best of the author's knowledge, all work on Resource Management, from its inception, has been performed either by government employees or by non-government employees working under the direction of government employees through government contracts.
`
`
`
`
`EXECUTIVE SUMMARY ................................................................................................. 1
`1.0 INTRODUCTION ........................................................................................................ 3
`1.1 HiPer-D Phase I DARPA Technology Evaluation ..................................................... 3
1.1.1 Phase 1 Integrated Demonstration One (I1) ................................................................ 3
1.2 HiPer-D Phase 2 - DARPA/COTS Technology and Critical Issues Evaluation .............. 4
`1.2.1 HiPer-D Phase 2 Engineering Testbed One (Tl) Demonstration ................................ 6
`1.2.2 HiPer-D Phase 2 Engineering Testbed Two (T2) Demonstration ................................ 7
`1.2.3 HiPer-D Phase 2 Engineering Testbed Two A (T2A) Demonstration ......................... 7
`1.2.4 HiPer-D Phase 2 Engineering Testbed Three (T3) Demonstration .............................. 8
`1.3 Demo 98 Objectives .................................................................................................... 9
`2.0 STAND-ALONE ENGINEERING TESTS ................................................................ 11
`2.1 Evaluating the Performance of Multicast Communications ......................................... 11
`2.2 Data Distribution Experiment ..................................................................................... 11
`2.3 Windows NT Investigations ........................................................................................ 12
`3.0 ADVANCED COMPUTING TESTBED DEMO 98 INTEGRATED
`DEMONSTRATION DESCRIPTION ........................................................................ 13
3.1 AAW Subsystem Functional Description ................................................................... 17
3.1.1 Advanced Track Correlation and Filtering (ATCF) .................................................. 19
3.1.1.1 ATCF Overview ................................................................................................... 19
3.1.1.1.1 Standard Message Format .................................................................................. 20
3.1.1.1.2 MFAR Broker .................................................................................................... 20
3.1.1.1.3 IP Multicast Communications ............................................................................ 21
3.1.1.1.4 ATCF Fault Tolerance ....................................................................................... 24
3.1.1.1.5 Track Number Mapping ..................................................................................... 25
3.1.2 Air Engagement Control (AEC) ............................................................................... 25
3.1.2.1 AEC Component Summary ................................................................................... 26
3.1.2.2 AEC Display ......................................................................................................... 27
3.1.2.3 Display State Data Server ..................................................................................... 28
3.1.2.4 Manual Engage Control ........................................................................................ 28
3.1.2.5 Plan Server ........................................................................................................... 28
3.1.2.6 Semi-Auto ............................................................................................................ 29
3.1.2.7 Auto-SM .............................................................................................................. 29
3.1.2.8 Auto-Special ......................................................................................................... 29
3.1.2.9 Engagement Server ............................................................................................... 30
3.1.3 Track Data Services Components ............................................................................ 34
3.1.3.1 Radar Track Data Server (RTDS) ......................................................................... 35
3.1.3.2 CORBA Track Number Server (CTNS) ................................................................ 35
3.1.3.2.1 CTNS Overview ................................................................................................ 36
3.1.3.2.2 Overall Architecture .......................................................................................... 37
3.1.3.2.3 CTNS / TNSS Client Communications .............................................................. 38
3.1.3.2.4 CTNS Group Communication ............................................................................ 39
3.1.3.2.5 Startup Processing .............................................................................................. 40
3.1.3.2.6 Performance ....................................................................................................... 41
3.1.3.3 Sensor Rate Server (SRS) ..................................................................................... 41
3.2 Land Attack and C4I Subsystem Functional Description ............................................. 41
3.2.1 Advanced Tomahawk Weapons Control System (ATWCS) ..................................... 42
`
`
`
`
`3.2.2 Joint Maritime Command Information System (JMCIS) .......................................... 47
3.2.2.1 DIS to OTHGOLD Converter ............................................................................... 48
3.2.3 Advanced Battle Management and Execution (ABMX) System ............................... 49
3.2.4 Data Brokers - Legacy System Interface .................................................................. 49
3.2.4.1 JMCIS/AACT/AAW Interface .............................................................................. 50
3.2.4.2 Real-Time AAW Track Data Path ......................................................................... 50
3.2.4.3 OTH Track Data Path ........................................................................................... 51
3.2.4.4 Aegis Air Correlator Tracker (AACT) ................................................................... 51
3.3 Simulation and Support Components .......................................................................... 52
3.3.1 Environmental Simulations (EnvSims) ..................................................................... 54
3.3.1.1 Entity Simulations ................................................................................................. 55
3.3.1.2 Sensors Simulations .............................................................................................. 58
3.3.1.3 Displays ................................................................................................................ 59
3.3.2 Simulation Control (SIMCON) ................................................................................ 63
3.3.2.1 Modifications Description ..................................................................................... 63
3.3.2.2 Restrictions ........................................................................................................... 63
3.3.3 Kinematics Daemon (KINED) ................................................................................. 64
3.3.4 Weapons Control System Simulator (WCS Sim) ...................................................... 64
3.3.5 Identification Upgrade Simulator (IDU Sim) ............................................................ 64
3.3.6 NSFS Simulator (NSFSsim) ..................................................................................... 64
3.3.7 Digital Call For Fire Support Components ............................................................... 65
3.3.7.1 Remote Digital Data Link (RDDL) ....................................................................... 65
3.3.7.2 TACFIRE Processor ............................................................................................. 65
3.3.7.3 C3I Broker ............................................................................................................ 65
3.3.8 System Control ........................................................................................................ 66
3.3.9 Clock Synchronization ............................................................................................. 66
3.3.10 Near Real-time Data Collection/Display (JEWEL) ................................................. 67
3.3.11 Group Communications ......................................................................................... 69
3.4 Resource Management ................................................................................................ 69
3.4.1 System Monitoring ................................................................................................... 73
3.4.1.1 UNIX Operating System and Network Monitoring ................................................ 73
3.4.1.1.1 Methodology ...................................................................................................... 74
3.4.1.2 Windows NT Operating System and Network Monitoring ..................................... 76
3.4.1.2.1 Windows NT Statistics Retrieval ........................................................................ 76
3.4.1.2.2 Network Interface ............................................................................................... 77
3.4.1.3 Monitoring Status and History Servers ................................................................... 77
3.4.2 Dynamic Resource Management ............................................................................... 78
3.4.2.1 System Model ........................................................................................................ 79
3.4.2.2 Adaptive QoS and Resource Management ............................................................. 80
3.4.2.2.1 Path QoS Monitor ............................................................................................... 81
3.4.2.2.2 QoS Diagnosis .................................................................................................... 82
3.4.2.2.3 Resource QoS Monitor ....................................................................................... 82
3.4.2.2.4 Resource Allocation ............................................................................................ 83
3.4.2.3 Results ................................................................................................................... 83
3.4.3 Resource Control / Program Control ......................................................................... 83
3.4.3.1 Graphical User Interface ........................................................................................ 83
`
`
`
`
3.4.3.2 Subsystem Managers ............................................................................................. 88
3.4.3.3 Host Agents ........................................................................................................... 88
3.4.3.4 Summary ............................................................................................................... 89
3.4.4 QoS and System Specifications ................................................................................. 90
3.4.5 Visualization ............................................................................................................ 90
3.4.5.1 Host Display .......................................................................................................... 90
3.4.5.1.1 Host Display Design ........................................................................................... 92
3.4.5.1.2 Data Formats ...................................................................................................... 92
3.4.5.1.2.1 Host Configuration File .................................................................................... 92
3.4.5.1.2.2 Interface to Data Server ................................................................................... 93
3.4.5.1.2.2.1 Host Configuration Message ......................................................................... 93
3.4.5.1.2.2.2 Host Process Message ................................................................................... 94
3.4.5.1.3 Graph Display Interface ...................................................................................... 95
3.4.5.1.4 User Interface ..................................................................................................... 95
3.4.5.2 Path Display .......................................................................................................... 96
3.4.5.2.1 Path Display Design ............................................................................................ 96
3.4.5.2.2 Data Display ....................................................................................................... 96
3.4.5.2.2.1 Data Flow ........................................................................................................ 97
3.4.5.2.2.2 Application and Path Performance Data ........................................................... 98
3.4.5.2.3 User Interface ..................................................................................................... 99
3.4.5.3 Resource Management Decision Review Display ................................................. 100
3.4.5.3.1 Design .............................................................................................................. 100
3.4.5.3.2 Data Formats .................................................................................................... 101
3.4.5.3.2.1 Event Message ............................................................................................... 101
3.4.5.3.2.2 Scaleup Message ............................................................................................ 103
3.4.5.3.3 User Interface ................................................................................................... 104
3.5 Demo 98 Hardware Configuration ............................................................................. 104
3.6 Demo 98 Scenario ..................................................................................................... 106
3.7 Integrated System Demonstration .............................................................................. 107
3.7.1 Environmental Simulation ...................................................................................... 107
3.7.2 ATWCS Launch Control Real Time Group ............................................................ 111
3.7.3 Fault Tolerant Engagement Server .......................................................................... 113
3.7.3.1 Fault Injection Control ......................................................................................... 115
3.7.3.2 Fault Recovery and Performance Impact .............................................................. 116
3.7.3.3 Summary and Future ........................................................................................... 120
3.7.4 Digital Call for Fire (CFF) ...................................................................................... 120
3.7.4.1 FO/FAC Subsystem ............................................................................................. 121
3.7.4.2 CFF Initiation Sequence ...................................................................................... 125
3.7.4.3 Visual Deconfliction ............................................................................................ 125
3.7.4.4 OTH Track Injection ........................................................................................... 125
3.7.4.5 CFF Engagement Transmission ........................................................................... 126
3.7.4.6 Engagement Sequence ......................................................................................... 126
3.7.5 Demo 98 Resource Management Scenario .............................................................. 130
3.7.5.1 Overview ............................................................................................................. 130
3.7.5.2 Fault Tolerance of Resource Management Components ....................................... 131
3.7.5.3 Control of Application Scalability ........................................................................ 134
`
`
`
`
`3.7.5.4 Application Fault Detection and Recovery ........................................................... 138
`3.7.5.5 Summary ............................................................................................................. 139
`4.0 LESSONS LEARNED ........................................................................................... 140
`4.1 CORBA Plan Server Lessons Learned ...................................................................... 140
`4.1.1 Advantages Offered by the CORBA Technology ................................................... 140
`4.1.2 CORBA Learning Curve ........................................................................................ 140
4.1.3 Difficulties with Legacy Systems ............................................................................ 141
`4.1.4 CORBA Specifications versus Available ORB Implementations ............................ 141
`4.1.5 CORBA in Perspective .......................................................................................... 141
`4.2 CORBA TNS Lessons Learned ................................................................................. 142
`4.3 Engagement Server Lessons Learned ........................................................................ 143
`4.3.1 Synchronization and Determinism ......................................................................... 144
`4.3.2 Cross-Group Data Difficulties ............................................................................... 146
`4.3.3 Recovery Time and Group Coupling ...................................................................... 147
4.3.4 Precise Fault Injection



