(19) United States
(12) Patent Application Publication    (10) Pub. No.: US 2004/0090923 A1
     Kan et al.                        (43) Pub. Date: May 13, 2004

(54) NETWORK MONITORING SYSTEM RESPONSIVE TO CHANGES IN PACKET ARRIVAL VARIANCE AND MEAN

(76) Inventors: Chao Kan, Frisco, TX (US); Aziz Mohammed, Plano, TX (US); Wei Hao, Richardson, TX (US); Jimin Shi, Plano, TX (US)

Correspondence Address:
ALCATEL USA
INTELLECTUAL PROPERTY DEPARTMENT
3400 W. PLANO PARKWAY, MS LEGL2
PLANO, TX 75075 (US)

(21) Appl. No.: 10/412,127

(22) Filed: Apr. 11, 2003

Related U.S. Application Data

(60) Provisional application No. 60/424,495, filed on Nov. 7, 2002.

Publication Classification

(51) Int. Cl.7: H04J 1/16
(52) U.S. Cl.: 370/252; 370/329

(57) ABSTRACT
`
A network monitoring system (10) for monitoring a network along which network traffic flows in a form of packets. The system comprises circuitry (36, 42) for receiving a packet communicated along the network and for determining whether the received packet satisfies a set of conditions. The system further comprises circuitry (36/30, 46), responsive to a determination that the received packet satisfies the set, for determining a measure over a defined time interval, wherein the measure comprises a ratio of the variance of packet arrivals to the mean of packets arriving during the time interval, and for comparing the measure to a threshold. Lastly, the system comprises circuitry (36, 52), responsive to the measure exceeding the threshold, for adjusting network resources.
`
[Representative drawing (method 40 of FIG. 3): capture packet / does the packet satisfy a rule in the rule set(s)?; store packet information, including time of arrival, in the flow corresponding to the satisfied rule(s) (44, 46); for defined interval t, determine IDC for each flow; IDC > threshold?; QoS met for threshold-exceeding packet(s)?; re-adjust traffic parameters (52)]
`
`
[Sheet 1 of 2, US 2004/0090923 A1: FIG. 1, block diagram of network system 10, showing the core network/routers and network monitors NMX]
`
`
[Sheet 2 of 2, US 2004/0090923 A1: FIG. 3, flow chart of method 40: capture packet / does the packet satisfy a rule in the rule set(s)? (42); store packet information, including time of arrival, in the flow corresponding to the satisfied rule(s) (44, 46); for defined interval t, determine IDC for each flow; IDC > threshold?; QoS met for threshold-exceeding packet(s)?; re-adjust traffic parameters (52)]
`
`
`NETWORK MONITORING SYSTEM RESPONSIVE
`TO CHANGES IN PACKET ARRIVAL VARIANCE
`AND MEAN
`
`CROSS-REFERENCES TO RELATED
`APPLICATIONS
`
[0001] This application claims the benefit, under 35 U.S.C. §119(e)(1), of U.S. Provisional Application No. 60/424,495, filed Nov. 7, 2002, and incorporated herein by this reference.
`
`STATEMENT REGARDING FEDERALLY
`SPONSORED RESEARCH OR DEVELOPMENT
`
`[0002] Not Applicable.
`
`BACKGROUND OF THE INVENTION
`
`[0003] The present embodiments relate to computer net-
`works and are more particularly directed to a system for
`monitoring network performance and correcting network
`congestion by evaluating changes in packet arrival variance
`relative to mean packet arrival.
`
[0004] As the number of users and traffic volume continue to grow on the global Internet and other networks, an essential need has arisen for a set of mechanisms to monitor network performance and to take corrective measures in response to falling performance. Such performance may be evaluated in various forms, including but not limited to detecting and troubleshooting network congestion. Network congestion results from mismatches between network capacity and network demand. The mismatch may be long term, or it may occur at instantaneous time scales. Further, network capacity may appear to be ample when using tools that look at long-term traffic averages; however, these approaches are not always suitable because a more subtle problem may arise with short bursts of packets, or peak demand. With congestion analysis mechanisms, the reliability and availability of the network nodes (e.g., IP routers) and the given internet paths can be evaluated. This is especially true for Internet Service Providers ("ISPs") seeking to comply with the Service Level Agreements ("SLAs") that they now provide to customers. Additionally, such a need is prevalent for the underlying internet protocol ("IP") networks in the Internet.
`
[0005] The Internet is also evolving towards an advanced architecture that seeks to guarantee the quality of service ("QoS") for real-time applications. QoS permits the controlling of what happens to packets when there is congestion in a network, or more precisely when there is insufficient network capacity to deliver all of the offered load without any noticeable queuing delays. One type of QoS framework seeks to provide hard, specific network performance guarantees to applications, such as bandwidth/delay reservations for an imminent or future data flow. Such QoS is usually characterized in terms of the ability to guarantee an application-specified peak and average bandwidth, delay, jitter, and packet loss. Another type uses Class-of-Service ("CoS"), such as Differentiated Services ("Diff-Serv"), representing the less ambitious approach of giving preferential treatment to certain kinds of packets, but without making any performance guarantees.
`
[0006] During the QoS process to provide services better than the traditional best effort, network congestion detection often becomes the starting point for the network performance analysis. In the past, a number of congestion detection and control schemes have been investigated in data networks. One congestion detection scheme uses the transport-layer protocols to infer congestion from the estimated bottleneck service time, from changes in throughput or end-to-end delay, or from packet drops. Specifically, the Internet has traditionally relied on mechanisms in the Transport Control Protocol ("TCP"), such as sliding window control and retransmission timers, to avoid congestion. TCP operates to seek excess bandwidth by increasing transmission rates until the network becomes congested and then reducing the transmission rate once congestion occurs. A few limitations arise from this approach. First, TCP congestion detection at a first node requires an acknowledgement from a second node; that is, the increased transmission is continued until no acknowledgement is received from the second node. Thus, a feedback communication is required from another node, and that feedback also utilizes bandwidth on the network. Second, in its effort to identify available bandwidth, TCP necessarily causes the very congestion that it then seeks to minimize, where the congestion is caused as TCP increases the bandwidth to a point that exceeds the network capacity. Another type of congestion detection scheme involves network components such as routers in the entire process. As most network congestion occurs in routers, they may be considered an ideal position from which to monitor network load and congestion and respond thereto in a control scheme. Such network-based congestion control uses explicit signaling between routers to provide feedback congestion information to a transmitting router, where the transmitting router may then alter its behavior in response to the feedback, or an overall scheme can change the packet processing within one or more routers so as to reduce congestion. In any event, this latter scheme also requires a form of feedback from a recipient router, thereby increasing traffic on the network to accommodate the feedback and also requiring the transmitting router to rely on the integrity of a different router.
`
[0007] In view of the above, there arises a need to address the drawbacks of the prior art, as is accomplished by the preferred embodiments described below.
`
`BRIEF SUMMARY OF THE INVENTION
`
[0008] In the preferred embodiment, there is a network monitoring system for monitoring a network along which network traffic flows in a form of packets. The system comprises circuitry for receiving a packet communicated along the network and for determining whether the received packet satisfies a set of conditions. The system further comprises circuitry, responsive to a determination that the received packet satisfies the set, for determining a measure, and circuitry for comparing the measure to a threshold, wherein the measure is determined over a defined time interval and comprises a ratio of the variance of packet arrivals to the mean of packets arriving during the time interval. Lastly, the system comprises circuitry, responsive to the measure exceeding the threshold, for adjusting network resources.
`
`[0009] Other aspects are also described and claimed.
`
`
`
`
`BRIEF DESCRIPTION OF THE SEVERAL
`VIEWS OF THE DRAWING
`
`[0010] FIG. 1 illustrates a block diagram of a network
`system 10 into which the preferred embodiments may be
`implemented.
`
`[0011] FIG. 2 illustrates a block diagram of each network
`monitor NM1 through NM8 of FIG. 1.
`
`[0012] FIG. 3 illustrates a flow chart of the operation of
`each network monitor NM1 through NM8 of FIG. 2.
`
`DETAILED DESCRIPTION OF THE
`INVENTION
`
[0013] FIG. 1 illustrates a block diagram of a system 10 into which the preferred embodiments may be implemented. System 10 generally includes a number of stations ST1 through ST4, each coupled to a network 20 via a router, and each operable to send packets as a source or receive packets as a destination. By way of example, network 20 is an internet protocol ("IP") network such as the global Internet or another IP-using network, where each station and IP networks in general are well known in the art. One skilled in the art should appreciate that the use of the IP protocol is by way of illustration, and many of the various inventive teachings herein may apply to numerous other protocols, including by way of examples asynchronous transfer mode ("ATM"), token ring, Novell, AppleTalk, and still others. In any event, returning to network 20 as an IP network, and also by way of example, each station STX may be constructed and function as one of various different types of computing devices, all capable of communicating according to the IP protocol. Lastly, and also by way of example, only four stations STX are shown so as to simplify the illustration and example, where in reality each such station may be proximate other stations (not shown) yet at a geographic location a considerable distance from the other illustrated stations.
`
[0014] Continuing with FIG. 1, along the outer periphery of network 20 are shown a number of edge routers ER1 through ER11, while within network 20 are shown a number of core routers CR1 through CR4. The terms edge router and core router are known in the art and generally relate to the function and relative network location of a router. Typically, edge routers connect to remotely located networks and handle considerably less traffic than core routers. In addition, and due in part to the relative amount of traffic handled by core routers, they tend to perform less complex operations on data and instead serve primarily a switching function; in other words, because of the tremendous amount of throughput expected of the core routers, they are typically hardware bound as switching machines and are not given the capability to provide operations based on the specific data passing through the router. Indeed, core routers typically do not include much in the way of control mechanisms, as there could be 10,000 or more connections in a single trunk. Further, core routers typically do not involve their operations with TCP related items and instead deal at the IP level and below. In contrast, edge routers are able to monitor various parameters within data packets encountered by the respective router. In any event, the various routers in FIG. 1 are shown merely by way of example, where one skilled in the art will recognize that a typical network may include quite a different number of both types of routers. Finally, note that each core router CRX and each edge router ERX may
be constructed and function according to the art, with the exception that preferably selected ones of those routers may include additional functionality for purposes of traffic congestion detection and response based on packet arrival variance and mean, as described later. In addition, selected routers may be further constructed to respond to the traffic congestion detection that the router itself determines, as well as to the traffic congestion detection of another router in network 20. Moreover, in one approach, core routers may be configured to respond differently than edge routers in the case of detecting traffic congestion.

[0015] Completing the discussion of FIG. 1, note that the various stations, edge routers, and core routers therein are shown connected to one another in various fashions, also by way of example. Such connections are intended to illustrate an example for later discussion of the preferred operation and also to reflect a general depiction of how networks are generally established. Thus, each station STX is shown connected to a single edge router ERX, where that edge router ERX is connected to one or more core routers CRX. The core routers CRX, also by way of example, are shown connected to multiple ones of the other core routers CRX. By way of reference, the following Table 1 identifies each station and router shown in FIG. 1 as well as the other device(s) to which each is connected.
`
TABLE 1

station or router    connected nodes

ST1                  ER1
ST2                  ER10
ST3                  ER5
ST4                  ER7
ER1                  ST1; CR1
ER2                  CR1; CR2
ER3                  CR2
ER4                  CR2
ER5                  ST3; CR2; CR3
ER6                  CR3; CR4
ER7                  ST4; CR4
ER8                  CR4
ER9                  CR4
ER10                 ST2; CR1
ER11                 CR1
CR1                  ER1; ER11; ER10; ER2; CR2; CR3; CR4
CR2                  ER2; ER3; ER4; CR1; CR3; CR4; ER5
CR3                  ER5; ER6; CR2; CR1; CR4
CR4                  ER7; ER8; ER9; CR1; CR2; CR3; ER6
`
`[0016] Given the various illustrated connections as also set
`forth in Table 1, in general IP packets flow along the various
`illustrated paths of network 20, and in groups or in their
`entirety such packets are often referred to as network traffic.
`In this regard and as developed below, the preferred embodi-
`ments operate to identify and respond to congestion in such
`network traffic. Finally, note that FIG. 1 may represent a
`simplified version of a network or the Internet in that only
`a few stations and routers are shown, while one skilled in the
`art will readily appreciate that the inventive concepts in this
`document may be applied to a larger number of stations,
`routers, and the network interconnectivity between those
`devices.
`
[0017] FIG. 1 also illustrates a number of network monitors NM1 through NM8 according to the preferred embodiments, where the choice of eight such network monitors is only by way of example given the amount of other hardware that is shown for network 20. As detailed below, each network monitor NMX is operable to sample each packet that is received along the conductor(s) to which the network monitor is connected, and if corrective action is deemed useful then a routing table associated with a router that is also associated with the network monitor NMX may be modified to improve network performance. The components of each network monitor NMX are described below, but at this point the connections of each such monitor are noted in the following Table 2:
`
TABLE 2

network monitor    connected nodes

NM1                CR1; CR2
NM2                CR2; CR3
NM3                CR1; CR3
NM4                CR3; CR4
NM5                CR1; CR2; CR3; ER7; ER8; ER9
NM6                CR4; ST4
NM7                ST2; CR1
NM8                ER5; ST3
`
[0018] FIG. 1 and Table 2 demonstrate that each of network monitors NM1 through NM4 and NM8 is connected to sample packets passing along the conductor(s) between a pair of nodes, such as between routers or between a router and a station. However, network monitors NM5, NM6, and NM7 are each, by way of alternative examples, incorporated into respective routers CR4, ER7, and ER10. As a result, each of network monitors NM5, NM6, and NM7 is able to sample packets communicated with any of the nodes to which its respective router is connected; for example, network monitor NM5 may sample packets communicated with respect to any node to which core router CR4 is connected, namely, core routers CR1, CR2, CR3, and edge routers ER7, ER8, and ER9. Thus, the contrast of network monitors NM5, NM6, and NM7 to the other illustrated network monitors NM1 through NM4 is shown to demonstrate that in the preferred embodiment each network monitor NMX may sample packets as a stand-alone entity or may be combined with the hardware and software of an existing router; indeed, in the preferred embodiments a network monitor NMX also may be combined with network or element management systems. In any event, and by way of introduction to details provided later, in the preferred embodiments the sampling functionality of each network monitor NMX permits real-time monitoring, over a defined period of time, of a ratio of the packet arrival variance and mean for selected packets; in response, determinations may be made, and actions may be taken, based on thresholds exceeded by the ratio, thereby presenting an indication of likely network traffic congestion.
[0019] FIG. 2 illustrates a block diagram of each network monitor NM1 through NM4 and NM8 of FIG. 1, with the further understanding that functionally the following description also may be applied to any of network monitors NM5, NM6, and NM7, with the addition that certain functionality may be provided by the hardware and software already available from each respective router CR4, ER7, and ER10. Turning then to FIG. 2, a console 30 is associated with network monitor NMX, where in the preferred embodiment a single such console 30 communicates with multiple network monitors NMX. For example, returning briefly to FIG. 1, preferably each of network monitors NM1 through NM8 communicates with a single console 30, where such communications also may be by way of packets between console 30 and the network monitors NMX. Console 30 may be constructed by one skilled in the art using various forms of hardware and software, where the selection is a matter of implementation choice in order to achieve the functionality described in this document. Turning to that functionality, console 30 preferably provides an administration (configuration) function and a reporting function. To permit a user to perform these functions, various interfaces may be used, such as a graphical Web-based configuration tool or a command line tool, where one preferable approach implements console 30, by way of example, using the Apache Web server with the PHP server-side scripting language to provide a dynamic interface to a flow store 32. In any event, the user interface preferably provides different screens for each of the administration and reporting functions. The actual interface and screens, however, may be implemented in various forms, where for any such form it is desirable that an operator may properly control console 30 to perform its administration and reporting functions with respect to its flow store 32. Preferably, a network administrator or the like uses the administration functionality of console 30 to create a set of rules and to provide those rules, along with a methodology for determining an index of dispersion for counts ("IDC") and responding thereto, to an RTFM meter 36 described later. By way of introduction, the set of rules causes meter 36 to store the packet arrival time for each packet that satisfies one (or more) of the provided rules; that is, those packets are selected from among all packets that pass through the meter, with the arrival time of each selected packet being recorded. Thereafter, meter 36 determines the IDC for the selected packets over a specified time interval t. Also, once the system is configured and left to collect packet arrival information about the monitored packets, the network administrator can use the reporting screen to query the information so as to generate network status reports based on the collected information and to thereby analyze network congestion.
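For illustration only, the following minimal Python sketch shows one plausible representation of such a rule set and the packet-selection test it implies; the Rule structure and the header-field names are hypothetical, as the patent does not prescribe a concrete rule format.

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    """One rule: header-field criteria that must all match (hypothetical form)."""
    criteria: dict = field(default_factory=dict)

    def matches(self, packet: dict) -> bool:
        # A packet satisfies the rule when every criterion equals the packet's field.
        return all(packet.get(k) == v for k, v in self.criteria.items())

# A rule set groups rules; a packet is selected when it satisfies any rule in the set.
rule_set = [
    Rule({"protocol": "TCP", "dst_port": 80}),    # e.g., web traffic
    Rule({"protocol": "UDP", "dst_port": 5060}),  # e.g., VoIP signaling
]

def selected(packet: dict) -> bool:
    return any(rule.matches(packet) for rule in rule_set)

# The meter would record the arrival time of this packet, since it matches a rule.
print(selected({"protocol": "TCP", "src_port": 12345, "dst_port": 80}))  # True
```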
`
[0020] As introduced above, console 30 is connected to a flow store 32, which preferably represents a storage medium that stores a flow database relating to monitored packets. In the preferred embodiment, each network monitor NMX includes its own flow database, although alternative embodiments may be created where more than one network monitor NMX shares a common flow database. In the preferred embodiment the flow database in flow store 32 is an SQL-compatible database using the PostgreSQL relational database management system, although in alternative embodiments various other types of databases may be used in flow store 32. Using the preferred embodiment as an example, console 30 communicates with this database through the Web server's PHP link to the SQL database. Thus, any administration and configuration changes made via console 30 are passed directly to flow store 32. Given the preceding, one skilled in the art should appreciate that access to flow store 32 can be achieved by SQL queries, enabling network administrators to automate the configuration process or integrate report generation. As introduced above, in the preferred embodiment, flow store 32 also stores what is referred to in this document as a "rule set" (or "rule sets" when plural), which is initially provided to flow store 32 from console 30 as part of the administration function and which is also thereby conveyed to meter 36. As shown by example below, each rule set specifies one or more criteria against which meter 36 evaluates each incoming packet to determine if the criteria are satisfied. Additionally, in one embodiment, flow store 32 may store the packet arrival information for those criteria-satisfying packets in a monitored flow so that such information may be evaluated, including the determination of IDC information and possible responses thereto, by console 30. Moreover, and as also discussed below, flow store 32 may store numerous different sets of packet arrival information, each corresponding to a different set of flow criteria, that is, corresponding to one of the different specified rule sets. The stored information is therefore accessible by console 30 and permits other analyses of the flow information so as to provide information and reports that are useful for network engineering and management purposes.
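As a hedged illustration of the SQL access described above, the sketch below queries a hypothetical flows table in the PostgreSQL-based flow store for per-second arrival counts; the schema, connection string, and choice of the psycopg2 driver are assumptions for illustration, not details from the patent.

```python
import psycopg2  # PostgreSQL driver, matching the flow store's database choice

# Hypothetical schema: flows(flow_id, rule_set, arrival_time). The connection
# details are placeholders; the patent specifies PostgreSQL but no schema.
conn = psycopg2.connect("dbname=flowstore user=netadmin")
cur = conn.cursor()

# Per-second arrival counts for one monitored flow: the raw series of counts
# from which an IDC-based congestion report could be generated.
cur.execute(
    """
    SELECT date_trunc('second', arrival_time) AS bucket, COUNT(*) AS arrivals
    FROM flows
    WHERE rule_set = %s
    GROUP BY bucket
    ORDER BY bucket
    """,
    ("web-traffic",),
)
for bucket, arrivals in cur.fetchall():
    print(bucket, arrivals)

cur.close()
conn.close()
```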
`
`[0021] Continuing with FIG. 2, recorder layer 34 provides
`an interface between flow store 32 and a meter 36 (or
`meters) in use and coupled to the network. Generally, the
`applications of recorder layer 34 can be separated into two
`categories: manager applications and meter reading appli-
`cations. Manager applications configure meter 36, based on
`information, including rules in one or more rule sets, in flow
`store 32. Meter reading applications permit the data col-
`lected and/or determined by meter 36 to be provided in a
`data format usable by flow store 32 and, indeed, recorder
`layer 34 facilitates the passage of that data into the flow
`database of flow store 32. Recorder layer 34 applications
`may pass information between flow store 32 and the network
`probes of meter 36 either synchronously or asynchronously.
`This gives network administrators the flexibility of either
`using real-time network flow meters (i.e. NeTraMet) or
`importing data from other network analysis tools that are not
`able to provide real-time data (e.g. Cisco NetFlow data).
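The asynchronous path might be served by a meter-reading loop along the following lines; this is a sketch under assumed names (the queue and writer are hypothetical), not the recorder layer's actual interface.

```python
import queue
import threading
import time

# Records collected or determined by the meter are queued here (hypothetical).
meter_output: queue.Queue = queue.Queue()

def meter_reader(store_writer):
    """Drain meter records and hand them to the flow store asynchronously,
    so a slow store does not stall the real-time probe."""
    while True:
        record = meter_output.get()
        store_writer(record)  # e.g., an INSERT into the flow database

threading.Thread(target=meter_reader, args=(print,), daemon=True).start()
meter_output.put({"flow": "web-traffic", "arrival_time": time.time()})
time.sleep(0.1)  # allow the reader thread to drain the queue before exit
```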
[0022] In the preferred embodiment, meter 36 is a Real-Time Traffic Flow Measurement ("RTFM") meter, a concept from the Internet Engineering Task Force ("IETF"). As known in the RTFM art, RTFM meters have previously been used in systems for determining the service requested by IP packets that are passing through a network for purposes of collecting revenue, where such a service is identifiable by the transport port number specified in each IP packet. For example, RTFM meters are currently being considered for use in systems whereby an Internet user is charged based on the service he or she is using on the Internet; for example, a different fee may be charged to the user for each different Internet service, including mail, video, phone calls, and web browsing. However, as detailed in this document, the preferred embodiment implements the RTFM meter instead to analyze each packet and to determine if the packet satisfies a rule in the rule set and, if so, to store sufficient packet arrival time information corresponding to the defined interval t so that packet IDC may be determined and used as a basis to indicate, and respond to, network congestion. Thus, in real time, meter 36 physically probes the underlying network traffic, and each time meter 36 detects an IP packet on the network, it determines whether the packet satisfies a rule in the rule set(s). Also, during the real-time passage of numerous IP packets by the evaluating meter 36, meter 36 does not always copy a fixed portion of each such packet, such as the entire packet or the entire packet header, into a database; instead, each meter 36 evaluates the appropriate field(s) in the packet and, if a rule in the rule set(s) is satisfied, then meter 36 stores sufficient packet arrival time information so that the IDC corresponding to that packet may be determined. In addition, meter 36 may store additional information about each rule set-satisfying packet, as further explored later. Returning to the aspect of meter 36 storing information to determine the packet IDC, the present inventive scope contemplates two alternatives for the actual IDC determination: either meter 36 may itself determine the IDC, or meter 36 may store sufficient information and console 30 may determine the IDC from the stored information, where in either case the IDC is determined as detailed later. Further, the trade-off between these two alternative embodiments is that the meter-side implementation introduces overhead on the network processors but requires less messaging bandwidth to meter readers. The reader-side solution, on the other hand, actually functions as an application analysis component in the RTFM architecture; the overhead to the router processor is minimal, but the messaging bandwidth overhead for passing raw data to the IDC monitor for computation may be prohibitive.
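To make concrete what "sufficient packet arrival time information" might look like, here is a small sketch, assuming timestamps measured from the start of the measurement window, that bins arrival times into the equally spaced interval counts from which the IDC of the following paragraphs is estimated; the function name and interface are illustrative assumptions.

```python
def arrival_counts(timestamps, tau, n):
    """Bin packet arrival times (seconds since the window start) into n
    equally spaced intervals of width tau, yielding counts c_1..c_n."""
    counts = [0] * n
    for t in timestamps:
        i = int(t // tau)
        if 0 <= i < n:  # ignore arrivals outside the defined interval t = n * tau
            counts[i] += 1
    return counts

# Five 1-second bins over a 5-second window.
print(arrival_counts([0.1, 0.4, 1.2, 3.9, 4.5], tau=1.0, n=5))  # [2, 1, 0, 1, 1]
```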
`
[0023] The Index of Dispersion for Counts ("IDC") has heretofore been proposed to characterize packet burstiness in an effort to model Internet traffic, whereas in contrast, in the present inventive scope, IDC is instead combined with the other attributes described herein to detect and respond, in a real-time manner, to packet congestion. By way of background, in the prior art, in a document entitled "Characterizing The Variability of Arrival Processes with Index Of Dispersion" (IEEE Journal on Selected Areas in Communications, Vol. 9, No. 2, February 1991) by Riccardo Gusella, hereby incorporated herein by reference, there is discussion of using the IDC, which provides a measure of burstiness, so that a model may be described for Internet traffic. Currently in the art there is much debate about identifying the type of model, whether existing or newly developed, that will adequately describe Internet traffic. In the referenced document, IDC, as a measure of burstiness, is suggested for use in creating such a model. IDC is defined as the variance of the number of packet arrivals in an interval of length t divided by the mean number of packet arrivals in t. For example, assume that a given network node has an anticipation (i.e., a baseline) of receiving 20 packets per second ("pps"), and assume further that in five consecutive seconds this node receives 30 packets in second 1, 10 packets in second 2, 30 packets in second 3, 15 packets in second 4, and 15 packets in second 5. Thus, over the five seconds, the node receives 100 packets; on average, therefore, the node receives 20 packets per second, that is, the average receipt per second equals the anticipated baseline of 20 pps. However, for each individual second, there is a non-zero deviation between the number of packets received and the anticipated value of 20 pps. For example, in second 1 the deviation is +10, in second 2 the deviation is -10, and so forth. As such, the IDC provides a measure that reflects this variability, in the form of a ratio of the variance to the mean, and due to the considerable fluctuation of the receiving rate per second over the five-second interval, there is perceived to be considerable burstiness in the received packets, where the prior art describes an attempt to compile a model of this burstiness so as to model Internet traffic.
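The arithmetic of this example can be checked with a few lines of Python; the population variance is used below purely for illustration, since the paragraph does not specify an estimator (the unbiased form appears later in Equation 7).

```python
counts = [30, 10, 30, 15, 15]     # packets received in each of five seconds
mean = sum(counts) / len(counts)  # 100 packets / 5 s = 20 pps, the baseline

# Population variance of the per-second counts: deviations are +10, -10, +10, -5, -5.
var = sum((c - mean) ** 2 for c in counts) / len(counts)  # 350 / 5 = 70.0

print(mean, var, var / mean)  # 20.0 70.0 3.5  (variance-to-mean ratio)
```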
`
[0024] Looking now to the preferred embodiment, it uses IDC not solely as a predicate for Internet traffic modeling, but instead to provide an ongoing determination of whether selected packet traffic is causing congestion, where such congestion is detected in part in response to a threshold-exceeding IDC, and preferably further in view of additional considerations, such as in relation to quality of service ("QoS"). Having thus introduced IDC in its past application as well as in the present inventive scope, its actual determination is now explored in greater detail. Recalling that the IDC is defined as the variance of the number of packet arrivals in an interval of length t divided by the mean number of packet arrivals in t, it may be written as shown in the following Equation 1:

$$\mathrm{IDC}_t = \frac{\operatorname{var}(N_t)}{E(N_t)} \qquad \text{(Equation 1)}$$

[0025] In Equation 1, $N_t$ indicates the number of arrivals in an interval of length t. In the preferred embodiment, and for estimating the IDC of measured arrival processes, only the times at discrete, equally spaced instants $\tau_i$ ($i \geq 0$) are considered. Further, letting $c_i$ indicate the number of arrivals in the time interval $\tau_i - \tau_{i-1}$, the following Equation 2 may be stated:

$$\mathrm{IDC}_n = \frac{\operatorname{var}(c_1 + c_2 + \cdots + c_n)}{E(c_1 + c_2 + \cdots + c_n)} = \frac{n\operatorname{var}(c_\tau) + 2\displaystyle\sum_{j=1}^{n-1}\sum_{k=1}^{n-j}\operatorname{cov}(c_j,\,c_{j+k})}{nE(c_\tau)} \qquad \text{(Equation 2)}$$

[0026] In Equation 2, $\operatorname{var}(c_\tau)$ and $E(c_\tau)$ are the common variance and mean of the $c_i$, respectively, thereby assuming implicitly that the processes under consideration are at least weakly stationary, that is, that their first and second moments are time invariant and that the auto-covariance series depends only on the distance k, the lag, between samples: $\operatorname{cov}(c_i, c_{i+k}) = \operatorname{cov}(c_j, c_{j+k})$ for all i, j, and k.

[0027] Further in view of Equations 1 and 2, consider the following Equation 3:

$$\sum_{j=1}^{n-1}\sum_{k=1}^{n-j}\operatorname{cov}(c_j,\,c_{j+k})
= \underbrace{\operatorname{cov}(1) + \operatorname{cov}(2) + \cdots + \operatorname{cov}(n-1)}_{j=1}
+ \underbrace{\operatorname{cov}(1) + \cdots + \operatorname{cov}(n-2)}_{j=2}
+ \cdots
+ \underbrace{\operatorname{cov}(1) + \operatorname{cov}(2)}_{j=n-2}
+ \underbrace{\operatorname{cov}(1)}_{j=n-1}
= \sum_{j=1}^{n-1}(n-j)\operatorname{cov}(j) \qquad \text{(Equation 3)}$$

[0028] Further, the auto-correlation coefficient $\rho_{j,j+k}$ may be stated as in the following Equation 4:

$$\rho_{j,j+k} = \rho_k = \frac{\operatorname{cov}(c_j,\,c_{j+k})}{\operatorname{var}(c_\tau)} = \frac{\operatorname{cov}(k)}{\operatorname{var}(c_\tau)} \qquad \text{(Equation 4)}$$

[0029] Then, from Equation 4 and by using $\operatorname{cov}(k) = \rho_k \operatorname{var}(c_\tau)$, the following Equation 5 may be written:

$$\mathrm{IDC}_n = \frac{n\operatorname{var}(c_\tau) + 2\displaystyle\sum_{j=1}^{n-1}(n-j)\operatorname{cov}(j)}{nE(c_\tau)} = \frac{\operatorname{var}(c_\tau)}{E(c_\tau)}\left[\,1 + 2\sum_{j=1}^{n-1}\left(1 - \frac{j}{n}\right)\rho_j\right] \qquad \text{(Equation 5)}$$

[0030] Finally, therefore, the unbiased estimates of $E(c_\tau)$, $\operatorname{var}(c_\tau)$, and $\rho_j$ are as shown in the following respective Equations 6 through 8:

$$E(c_\tau) = \frac{1}{n}\sum_{i=1}^{n} c_i \qquad \text{(Equation 6)}$$

$$\operatorname{var}(c_\tau) = \frac{1}{n-1}\sum_{i=1}^{n}\bigl(c_i - E(c_\tau)\bigr)^2 \qquad \text{(Equation 7)}$$

$$\rho_j = \frac{\operatorname{cov}(j)}{\operatorname{var}(c_\tau)} = \frac{\dfrac{1}{n-j}\displaystyle\sum_{i=1}^{n-j}\bigl(c_i - E(c_\tau)\bigr)\bigl(c_{i+j} - E(c_\tau)\bigr)}{\operatorname{var}(c_\tau)} \qquad \text{(Equation 8)}$$

[0031] Thus, the IDC may be determined by the preferred embodiment using Equations 6 and 7, and further in view of Equation 8.
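A minimal Python sketch of Equations 5 through 8 follows, assuming the interval counts have already been collected as described above; it illustrates the mathematics and is not an implementation taken from the patent.

```python
def estimate_idc(c):
    """Estimate IDC_n from interval counts c[0..n-1] using the unbiased
    mean and variance (Equations 6, 7), the lag-j autocorrelation
    (Equation 8), and the weighted sum of Equation 5."""
    n = len(c)
    mean = sum(c) / n                                # Equation 6
    var = sum((x - mean) ** 2 for x in c) / (n - 1)  # Equation 7
    total = 1.0
    for j in range(1, n):
        cov_j = sum((c[i] - mean) * (c[i + j] - mean)
                    for i in range(n - j)) / (n - j)
        rho_j = cov_j / var                          # Equation 8
        total += 2.0 * (1.0 - j / n) * rho_j         # Equation 5 summation
    return (var / mean) * total                      # Equation 5

# Reusing the five per-second counts from the example in paragraph [0023]:
print(estimate_idc([30, 10, 30, 15, 15]))
```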
`
`[0032] FIG. 3 illustrates a flow chart of an overall method
`40 of operation for each network monitor NMX of FIGS. 1
`and 2. While method 40 is described below with respect to
`