US 20070061433

(19) United States
(12) Patent Application Publication    (10) Pub. No.: US 2007/0061433 A1
(43) Pub. Date: Mar. 15, 2007

Reynolds et al.

(54) METHODS AND APPARATUS TO SUPPORT DYNAMIC ALLOCATION OF TRAFFIC MANAGEMENT RESOURCES IN A NETWORK ELEMENT

(76) Inventors: Scott Reynolds, Vancouver (CA); Siegfried Johannes Luft, Vancouver (CA)

Correspondence Address:
BLAKELY SOKOLOFF TAYLOR & ZAFMAN
12400 WILSHIRE BOULEVARD
SEVENTH FLOOR
LOS ANGELES, CA 90025-1030 (US)

(21) Appl. No.: 11/224,275

(22) Filed: Sep. 12, 2005

Publication Classification

(51) Int. Cl.
     G06F 15/173 (2006.01)
(52) U.S. Cl. .............................................................. 709/223

(57) ABSTRACT

Methods and apparatus to support dynamic allocation of traffic management resources in a network element. Shared pools of traffic management resources, comprising an aggregation of local line card resources distributed across the line cards of a network element, are maintained by apparatus software. Incoming packets are classified into subscriber flows using a hierarchical classification scheme. In view of subscriber services and flow application types, traffic management resources are dynamically allocated from the shared pools, and traffic management policies associated with the subscriber services and application types are applied to the subscriber flows via the allocated resources. In response to detecting that a subscriber flow has terminated, the allocated resources are released and made available to be dynamically re-allocated to subsequent subscriber flows.

(Representative cover figure: packet processing datapath with Lexical Analysis (214) and Packet Forwarding blocks between the Subscriber Inbound/Outbound Traffic and Core Network Inbound/Outbound Traffic interfaces, supported by a Subscriber Database and a Service Traffic Parameter Database.)

(Sheet 1 of 11, FIG. 1: a conventional high-speed broadband network topology.)
(Sheet 2 of 11, FIG. 2: system architecture implementing the multi-level classification hierarchy.)
(Sheet 3 of 11, FIG. 3: further details of the Traffic Manager element of FIG. 2.)
(Sheet 4 of 11, FIG. 4: SIP protocol exchange for a VoIP session between a caller (400), a proxy (404), and a called party (402), including INVITE, RINGING (180), and OK (200) messages.)
(Sheet 5 of 11, FIG. 5: operations performed during processing and forwarding of inbound and outbound subscriber traffic.)
(Sheet 6 of 11, FIG. 6: configuration 600, with Traffic Blades and Compute Blades coupled through a full mesh interconnect across an ATCA chassis backplane using HiGig links.)
(Sheet 7 of 11, FIG. 7: one embodiment of a Compute Blade provisioned for an OAMP function.)
(Sheet 8 of 11, FIG. 8: one embodiment of a Traffic Blade.)
(Sheet 9 of 11, FIG. 9: one configuration of a service node (900) implemented via an ATCA chassis.)
(Sheet 10 of 11, FIG. 10: components associated with the Service Management Engine (SME), including a VM Manager.)
(Sheet 11 of 11, FIG. 11: an exemplary execution environment for a service node.)

METHODS AND APPARATUS TO SUPPORT DYNAMIC ALLOCATION OF TRAFFIC MANAGEMENT RESOURCES IN A NETWORK ELEMENT

FIELD OF THE INVENTION

[0001] The field of invention relates generally to congestion and flow control in converged full service communication systems, and, more specifically but not exclusively, relates to employing dynamic allocation of traffic management resources, including queues and compute resources, to support enhanced services and traffic management capabilities in a network element.

BACKGROUND INFORMATION

[0002] Incumbent telecommunication providers are using the capabilities of existing and next generation residential high-speed broadband connections to deliver services other than high-speed Internet (HSI) access. These new services include voice (utilizing Voice over IP technology) and streaming video. Such services may be offered at a price premium over and above the existing HSI access, improving the revenue generating capability of providers' networks.

[0003] Delivering streaming content (e.g., voice and video) requires specialized processing/treatment by the network to ensure acceptable service quality for these new applications. This specialized processing typically involves a Network Element (NE) identifying both the subscriber and the streaming media content and a) ensuring there exists sufficient bandwidth to accept the new service request; b) expediting the delivery of the content; and c) protecting the premium content from unregulated, greedy protocols and applications. Collectively, these functions can be aggregated into an "admission control" element and a "traffic management" element.

[0004] Admission control is responsible for identifying the service request and determining whether sufficient network resources exist to allow the request and honor the required quality guarantees. Admission control can be explicit, through techniques such as a signaling protocol (e.g., RSVP, SIP, etc.), or implicit, by dynamically identifying the service/application in real-time.

[0005] Traffic management (TM) is an umbrella term used to describe the allocation of network resources to competing services. It typically includes functions such as traffic queuing and servicing, traffic rate policing and shaping. Traffic management functions can be applied at various levels of granularity, ranging from traffic from individual applications and subscribers to aggregates that contain traffic of similar classes from hundreds or thousands of users. Depending on the dynamic nature of the network's load, an NE may dynamically manage TM properties in real-time or merely statically provision the TM properties in response to results from the admission control element. A traffic manager implements a resource allocation scheme based on both an implied hierarchy of importance of service types and a model of the current resource availability and allocation. As new service requests are processed, network resources may be allocated or re-allocated, taken from lower priority flows and given to higher priority requests.

[0006] Traffic management functions control the bandwidth, packet loss probability, delay and delay variation (jitter) for a given flow of (in this case) IP datagrams (also referred to herein as "packets"). Each service may require a unique combination of these parameters to deliver acceptable service quality, and each service request forces a re-evaluation of the resource allocation policy, potentially re-allocating the resources amongst all the competing flows.

[0007] Implicit to both admission control and traffic management is the process of traffic classification. Classification is the process of matching incoming traffic against a database of signatures in order to identify some descriptive property of the traffic, such as who the traffic is from (for subscriber identification) or what type of traffic is being transmitted (service type classification for traffic management). Classification is a necessary and critical component of both the admission control and traffic management elements described above.

[0008] FIG. 1 depicts a typical topology for a high-speed broadband network. At the service end, services such as video, voice, and Internet access are provided to subscribers 100 via an interface to an access network 102, such as a cable or DSL (Digital Subscriber Line) modem 104 and a router 106. Meanwhile, access network 102 is coupled to an aggregation network 108 via appropriate network elements, such as DSLAMs (Digital Subscriber Line Access Multiplexers) 110 and 112 and a CMTS (Cable Modem Termination System) element 114. An IP network element (NE) 116 is used to couple aggregation network 108 to the networks from which the services (typically) originate, such as a service provider network 118 and the Internet 120, and to provide various subscriber services. Service provider network 118 and Internet 120 are commonly referred to as "core" networks.

[0009] The IP Network Element in existing networks generally will be one of either a Broadband Remote Access Server (BRAS) or an Edge Router (ER). Typical reference architectures use a BRAS for residential broadband deployments and ERs to provide business leased-line and single-ended services, such as Internet access. Table 1 below summarizes the architectural differences between a BRAS, an ER, and proposed next-generation NEs, with the focus on traffic management capabilities.

TABLE 1

Function                  | BRAS                        | ER                            | Next Generation
--------------------------+-----------------------------+-------------------------------+-------------------------------------
Application               | Residential broadband       | Business leased line          | Residential broadband
                          | networks                    |                               | multi-service networks
Subscriber facing         | ATM, Ethernet               | PDH (DS1, DS3), Ethernet      | Gigabit Ethernet
interfaces                |                             |                               |
Trunk/core facing         | Ethernet, POS, Gigabit      | Ethernet, POS, Gigabit        | 10 Gigabit Ethernet
interfaces                | Ethernet                    | Ethernet                      |
Subscriber/customer       | Tunnels (PPPoA, PPPoE)      | Physical ports, timeslot or   | DHCP
identification            |                             | Layer 2 technique (e.g. VLAN, |
                          |                             | VPI/VCI, DLCI, etc.)          |
Traffic type              | Not Applicable              | L2: VLAN/802.1p, VPI/VCI      | L2 + L3 + L4 + application
identification            |                             | L3: DSCP/TOS                  |
                          |                             | L4: Socket                    |
Traffic Management        | Managing subscriber         | Managing port and/or CoS      | Managing service traffic
focus                     | traffic (virtual stream)    | traffic per port              | per subscriber
Traffic Management        | Fine: 1000's small pipes    | Coarse: 100's fatter pipes    | Fine: 10,000's queues, supporting
granularity               |                             |                               | both thin and fat pipes
Queues                    | 1000's, per subscriber      | Smaller: ports x CoS          | 100,000's, per subscriber x service
Queue allocation policy   | Fixed - per subscriber      | Fixed - CoS based             | <- Innovation required
TM sophistication         | Limited - ensure fair       | More sophisticated - ensure   | Sophisticated - ensure service
                          | allocation of bandwidth     | prioritization per port       | quality within a subscriber and
                          | between subscribers         |                               | service category

[0010] As broadband residential access networks evolve to deliver services other than HSI, the capabilities of the BRAS must extend to match. Similarly, ERs currently do not have the TM capabilities to handle thousands of subscribers, each demanding their own set of service queues. These evolving requirements are captured in the next generation column of Table 1.

[0011] From Table 1, it is clear that the area of TM requires the most significant changes. Typically, BRAS devices lack the sophisticated service-aware traffic management functions to provide dedicated queues per service, per subscriber. Secondly, the requirement to have a dedicated queue per subscriber, irrespective of whether the subscriber is on-line and using the service, fundamentally limits the number of subscribers an NE can support.

[0012] The ER approaches the problem differently. If only a small number of queues per interface are supported, an aggregate queuing model must be employed. In this model, all service-specific traffic (e.g., all voice traffic destined to all subscribers) is funneled or aggregated through a single service-specific queue. The number of queues required is thus limited to the number of discrete services supported by the network per port.

[0013] This model can only control the behavior of the aggregate queue (i.e., ensuring the aggregate bandwidth, aggregate packet loss, aggregate delay and jitter are sufficient), rather than the behavior of the constituent subscriber service flows. In this case, it is entirely possible (and likely) that although the aggregate quality of service is being met, the quality of service for the individual subscriber service flows may not be satisfied.

SUMMARY OF THE INVENTION

[0014] In accordance with aspects of the present invention, methods and apparatus to support dynamic allocation of traffic management resources in a network element are disclosed. Shared pools of traffic management resources are maintained by distributed software entities running on the blades (i.e., line cards) of a network element. The shared pools comprise an aggregation of local resources hosted by the various blades, and include queues and packet processing resources. Incoming packets are classified into subscriber flows using a hierarchical classification scheme. In view of subscriber services and flow application types, traffic management resources are dynamically allocated from the shared pools, and traffic management policies associated with the subscriber services and application types are applied to the subscriber flows via the allocated resources. In response to detecting a subscriber flow has terminated, the allocated resources are released and made available to be dynamically re-allocated to subsequent subscriber flows.

[0015] In another aspect of the present invention, architectures for implementing the method on a network element are disclosed. The architecture includes a plurality of traffic blades and compute blades, each having local processing and memory resources. The traffic blades are used for performing ingress and egress operations, while the compute blades are employed for traffic analysis and other management operations. A distributed set of software components are run on various processor resources on the blades, and cooperatively implement various packet processing operations associated with the methods.

BRIEF DESCRIPTION OF THE DRAWINGS

[0016] The foregoing aspects and many of the attendant advantages of this invention will become more readily appreciated as the same becomes better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings, wherein like reference numerals refer to like parts throughout the various views unless otherwise specified:

[0017] FIG. 1 is a schematic diagram of a conventional high-speed broadband network topology;

[0018] FIG. 2 is a schematic diagram of a system architecture used to implement a multi-level classification hierarchy mechanism, according to one embodiment of the invention;

[0019] FIG. 3 is a schematic diagram illustrating further details of the Traffic Manager element of FIG. 2;

[0020] FIG. 4 is a message flow diagram illustrating an exemplary protocol exchange performed during a typical VoIP session;

[0021] FIG. 5 is a schematic flow diagram illustrating various operations performed during processing and forwarding of inbound and outbound subscriber traffic;

[0022] FIG. 6 is a schematic diagram illustrating the communication interconnect between a Traffic Blade and a Compute Blade;

[0023] FIG. 7 is a schematic diagram illustrating one embodiment of a Compute Blade that is provisioned for an OAMP function;

[0024] FIG. 8 is a schematic diagram illustrating one embodiment of a Traffic Blade;

[0025] FIG. 9 is a schematic diagram illustrating one configuration of a service node implemented via an ATCA chassis;

[0026] FIG. 10 is a schematic diagram illustrating various components associated with a Service Management Engine (SME); and

[0027] FIG. 11 is a schematic diagram of an exemplary execution environment for a service node.

DETAILED DESCRIPTION

[0028] Embodiments of methods and apparatus for supporting dynamic allocation of traffic management resources in network elements are described herein. In the following description, numerous specific details are set forth to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.

[0029] Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

[0030] In the following description and claims, the term "coupled," along with its derivatives, is used. "Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other.

[0031] The embodiments described herein address the limitations of the prior art by using statistical properties of the traffic and subscribers to share a (relatively) small set of queue resources. The embodiments combine the use of several technologies, such as deep packet inspection-based classification and high performance computing, to identify both the subscribers and their service requests and dynamically provision a traffic management queue and associated compute resources for the duration of the service delivery. By dynamically allocating resources from shared pools, rather than statically provisioning them, far more subscribers can be supported per network element without increasing the queuing or traffic management system complexity.

[0032] In order to more clearly understand aspects of the invention, a generalized implementation of the aspects will first be discussed. Following this, details of implementing the techniques on an exemplary implementation environment are then discussed.

[0033] FIG. 2 illustrates components implemented as part of a packet processing datapath 200 on a network element referred to herein as the "Service Node." The datapath includes 5-tuple packet identifiers/classifiers, IP packet forwarders, traffic management elements and the respective statistic gathering components. These components are schematically depicted in FIG. 2 by corresponding functional blocks, which include 5-tuple identification/classification blocks 202 and 203, packet forwarding blocks 204 and 206, and traffic management blocks 208 and 210.

[0034] In one implementation of the Service Node (described below), the packet processing datapath is implemented within the bounds of a Network Processor Unit (NPU). The packet processing datapath has a strict processing budget bounded by the maximum inter-arrival rate of the incoming packets (i.e., the line rate of the NE). This processing budget fundamentally limits the amount of packet processing that can be performed while still meeting the line rate performance targets.

[0035] The amount of packet processing performed by the datapath is sometimes not sufficient to fully resolve the type of packet, the source of the packet, or how to process the packet. Accordingly, a second series of components shown in FIG. 2 illustrate a number of enhanced packet classification processes (including lexical analysis 214 and 216, protocol classification 218 and service classification 220), admission control processes (including subscriber classification 222 and authentication 224) and dynamic packet processing processes (including bandwidth management performed by a bandwidth manager 226 and traffic management performed by a traffic manager 228). In one embodiment of the Service Node architecture, these processes are implemented on high performance generalized compute resources.

[0036] FIG. 2 illustrates IP datagram dataflow using solid lines. The basic dataflow consists of IP datagrams (packets) ingressing from the Subscriber Inbound Traffic interface, through 5-tuple identification/classification block 202, through packet forwarding block 204 and traffic management block 208, out the Core Network Outbound Traffic interface. Since most IP sessions (TCP and UDP) involve the bi-directional flow of datagrams between a client and a server, traffic entering from the Core Network Inbound interface follows a symmetric path, which includes 5-tuple identification/classification block 203, packet forwarding block 206 and traffic management block 210.

[0037] Due to the complexities of packet classification, a packet classification hierarchy is implemented, as more fully discussed below. Each higher level of the classification hierarchy performs more complex packet analysis on a sub-sample of the packets that constitute a flow. A secondary datapath, called the "Bifurcated" datapath, duplicates packets matching specific 5-tuple filters to the higher layers of the classification hierarchy. Bifurcation offers the advantage of presenting the packet to the detailed classification and analysis algorithms while not introducing undue latency into the basic datapath. Furthermore, since only a subset of the entire packet traffic is bifurcated, significantly more detailed and arbitrarily complex analysis algorithms are possible while still maintaining the performance requirements of the sub-sampled bifurcated packet stream.

[0038] Traditionally, the rules for classifying a message (i.e., one or more associated packets) are called filters (or rules in firewall terminology), and the packet classification problem is to determine the lowest cost matching filter or rule for each incoming message at the network element. Under the well-known N-tuple classification scheme, the relevant information is contained in N distinct header fields (or partial header fields) in each packet.

[0039] The corresponding filter database consists of a finite set of filters, filt1, filt2 . . . filtN. Each filter is a combination of N values, one for each header field. Each field in a filter is allowed three kinds of matches: exact match, prefix match, or range match and wildcard. In an exact match, the header field of the packet should exactly match the filter field. In a prefix match, the filter field should be a prefix of the header field. In a range match or wildcard match, the header values should lie in the range specified by the filter (or be any value for a wildcard match). Each filter filti has an associated directive dispi, which specifies how to process a packet matching the filter.

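By way of illustration and not limitation (the specification itself contains no source code), the lowest-cost matching model just described may be sketched as follows; the class names, field widths, costs, and directive strings are assumptions introduced for the example:

```python
# Sketch of lowest-cost N-tuple filter matching; names and the cost model
# are illustrative assumptions, not part of the specification.
from dataclasses import dataclass

@dataclass
class FieldMatch:
    kind: str                 # "exact", "prefix", or "range" (wildcard = full range)
    value: int = 0            # comparison value for exact/prefix matches
    prefix_len: int = 0       # significant leading bits for a prefix match
    lo: int = 0               # inclusive lower bound for a range match
    hi: int = 0xFFFFFFFF      # inclusive upper bound (full range acts as wildcard)
    width: int = 32           # field width in bits

    def matches(self, field_value: int) -> bool:
        if self.kind == "exact":
            return field_value == self.value
        if self.kind == "prefix":
            shift = self.width - self.prefix_len
            return (field_value >> shift) == (self.value >> shift)
        return self.lo <= field_value <= self.hi    # range or wildcard

@dataclass
class Filter:
    fields: list              # one FieldMatch per header field
    cost: int                 # lower cost wins among multiple matching filters
    directive: str            # disp_i: how to handle a packet matching filt_i

def classify(header_fields, filters):
    """Return the directive of the lowest-cost filter matching all fields."""
    matching = [f for f in filters
                if all(m.matches(v) for m, v in zip(f.fields, header_fields))]
    return min(matching, key=lambda f: f.cost).directive if matching else "default"
```

Hardware classifiers typically reach the same result with TCAMs or decision trees, but the matching semantics are those described above.
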
[0040] Under the 5-tuple identification/classification scheme employed by 5-tuple identification/classification blocks 202 and 203, the relevant fields for an IPv4 packet comprise the Destination Address (32 bits), the Source Address (32 bits), the Destination Port (16 bits), the Source Port (16 bits), and the Protocol Field (layer 4 protocol type, 8 bits); the set of field values for a given packet is referred to as the 5-tuple signature. This L3/L4 classifier supports exact match, prefix match, ranges and wildcards on each of the search key elements. The 5-tuple identifier/classifier provides bounded search latency, and hence is performance-independent of packet length, making it suitable for inclusion into datapath 200.

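For illustration, the 5-tuple signature described above can be extracted from a raw IPv4 header along the following lines; this sketch assumes a TCP or UDP payload and treats the ports of other protocols as wildcards:

```python
# Sketch: extract the 5-tuple signature (destination address, source address,
# destination port, source port, protocol) from a raw IPv4 packet.
import struct

TCP, UDP = 6, 17

def five_tuple(packet: bytes):
    ihl = (packet[0] & 0x0F) * 4                  # IPv4 header length in bytes
    proto = packet[9]                             # layer 4 protocol type (8 bits)
    src_ip, dst_ip = struct.unpack_from("!II", packet, 12)
    src_port = dst_port = 0                       # wildcard for non-TCP/UDP
    if proto in (TCP, UDP):
        src_port, dst_port = struct.unpack_from("!HH", packet, ihl)
    return (dst_ip, src_ip, dst_port, src_port, proto)
```
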
[0041] As discussed above, packet forwarding blocks 204 and 206 perform packet forwarding operations. The element is common to all NEs that forward IP datagrams, and its operations are well-known in the art and beyond the scope of the present invention. At a minimum, the packet forwarding involves searching a table of IP address prefixes using the incoming packet's destination IP address. The result of the search is an entry in an IP adjacency table, indicating the correct egress link for the datagram to be forwarded.

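A minimal sketch of such a longest-prefix-match lookup follows; the prefix table contents and egress link names are invented for the example:

```python
# Sketch: longest-prefix-match forwarding lookup; table contents are invented.
import ipaddress

PREFIX_TABLE = {
    ipaddress.ip_network("10.0.0.0/8"):  "egress-link-1",
    ipaddress.ip_network("10.1.0.0/16"): "egress-link-2",
    ipaddress.ip_network("0.0.0.0/0"):   "default-link",
}

def lookup_egress(dst_ip: str) -> str:
    addr = ipaddress.ip_address(dst_ip)
    best = max((net for net in PREFIX_TABLE if addr in net),
               key=lambda net: net.prefixlen)     # longest matching prefix wins
    return PREFIX_TABLE[best]                     # the IP adjacency table entry

# e.g. lookup_egress("10.1.2.3") -> "egress-link-2"
#      lookup_egress("8.8.8.8")  -> "default-link"
```
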
[0042] Traffic Management blocks 208 and 210 implement traffic management functions such as traffic rate limiting (policing), queuing, queue servicing, queue congestion control and traffic rate shaping. The queues contained in this block are statistically multiplexed between different subscribers and their traffic based on usage requirements. Traffic Management blocks 208 and 210 are controlled by means of Traffic Manager 228, as described below.

[0043] An exemplary implementation scheme illustrating further details of operations performed by one embodiment of the Service Node is shown in FIG. 3. As packets arrive at a Traffic Management block (208 or 210), they will have already been classified into corresponding subscriber service and/or application flows. A set of Traffic Management policies and operations are applicable to each flow classification. This is illustrated by a set of Policer blocks 300_0-n, a set of Congestion Management blocks 302_0-n, and a set of (Traffic) Shaper blocks 304_0-n.

[0044] Each Policer block performs policing operations, such as traffic rate limiting, in view of policing policies applicable to the associated flow. Similarly, each Congestion Management block performs congestion management operations based on applicable congestion management policies for the associated flow, and each Traffic Shaper block performs traffic shaping operations in view of traffic shaping policies for the associated flow. For example, the respective operations performed for a flow classified to flow 1 include operations performed by Policer block 300_1, Congestion Management block 302_1, and Traffic Shaper block 304_1.

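As a hedged illustration of the policing function named above, the following sketch implements a single-rate token bucket rate limiter, a simplification of the dual-rate policers mentioned in the next paragraph; the parameters and names are invented for the example:

```python
# Sketch: a single-rate token-bucket policer for one flow. A simplification
# for illustration; rate and burst parameters are invented.
import time

class Policer:
    def __init__(self, rate_bps: float, burst_bytes: int):
        self.rate = rate_bps / 8.0           # token refill rate, bytes/second
        self.depth = float(burst_bytes)      # bucket depth (burst tolerance)
        self.tokens = self.depth
        self.last = time.monotonic()

    def conforms(self, pkt_bytes: int) -> bool:
        """True if the packet conforms; non-conforming packets are dropped
        or re-marked according to the flow's policing policy."""
        now = time.monotonic()
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True
        return False

# e.g. Policer(rate_bps=2_000_000, burst_bytes=16_000) for a 2 Mbit/s flow
```
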
[0045] Another aspect of traffic management operations relates to dynamic queue allocation and management. As described below in further detail, Traffic Manager 228 dynamically allocates queues 306_0-n for respective flows 0-n from a shared queue resource pool 308. In connection with queue allocation, each Traffic Management block also performs queue servicing 310. In one Service Node implementation, each of Traffic Management blocks 208 and 210 is capable of supporting 64,000 queues, each individually serviced with a hybrid priority/weighted round robin queue service discipline, 1M dual-rate token bucket Policers, and over 1K independent shapers. Other numbers of queues, policers, and shapers may also be implemented in a similar manner.

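The allocate-on-demand, release-on-termination behavior described here (and in the Abstract) can be sketched as follows; the pool size echoes the 64,000-queue figure above, while the method names and flow-key format are assumptions:

```python
# Sketch: dynamic queue allocation from a shared pool (cf. queues 306_0-n
# and shared queue resource pool 308). Queues are bound to a flow on demand
# and returned when the flow terminates.
class QueuePool:
    def __init__(self, size: int = 64_000):
        self.free = list(range(size))      # free queue IDs
        self.bound = {}                    # flow signature -> queue ID

    def allocate(self, flow_sig) -> int:
        """Bind a queue to the flow, allocating from the pool on first use."""
        if flow_sig not in self.bound:
            if not self.free:
                raise RuntimeError("queue pool exhausted")
            self.bound[flow_sig] = self.free.pop()
        return self.bound[flow_sig]

    def release(self, flow_sig) -> None:
        """Called when a subscriber flow is detected to have terminated."""
        qid = self.bound.pop(flow_sig, None)
        if qid is not None:
            self.free.append(qid)          # available for re-allocation
```
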
[0046] 5-tuple identification/classification blocks 202 and 203 are capable of matching the packet against basic layer 3 and layer 4 information. In some instances this is not sufficient to identify the actual application payload or the subscriber (e.g., due to the use of tunnels and/or NAT (Network Address Translation)). The Service Node employs a unique classification hierarchy architecture, where packets determined to be of interest at a given layer of the hierarchy are presented to a subsequent layer for further processing. If a packet can be fully resolved at a given layer, it does not need to be promoted to the next higher layer, resulting in a decreasing number of packets processed at subsequent layers. This lower packet processing rate requirement allows more complex (and hence time-consuming) processing algorithms to be employed while still maintaining the fundamental performance targets.

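The promotion logic can be illustrated abstractly as follows; the stage functions named in the usage comment stand in for the 5-tuple, protocol, and service classification layers and are not actual interfaces from the specification:

```python
# Sketch: hierarchical classification in which each layer either fully
# resolves a packet or promotes it to the next (more expensive) layer.
def classify_hierarchically(packet, layers):
    """layers: ordered analyzers, cheapest first; each returns a
    classification result, or None if the packet needs promotion."""
    for layer in layers:
        result = layer(packet)
        if result is not None:      # fully resolved at this layer;
            return result           # no promotion, saving upper-layer work
    return "unresolved"

# e.g. classify_hierarchically(pkt, [five_tuple_stage, protocol_stage, service_stage])
```
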
[0047] By way of example and not limitation, differing aspects of the classification hierarchy are illustrated with reference to the SIP (Session Initiation Protocol) protocol exchange of FIG. 4. In accordance with this VoIP example, a caller 400 employs the SIP protocol to establish a connection (i.e., communication session) with a called party 402 via proxy services provided by a proxy 404. In consideration of the foregoing diagrams, messages sent from caller 400 comprise subscriber inbound traffic, while messages sent from called party 402 represent core network inbound traffic.

[0048] To initiate the session, the caller sends a SIP call INVITE message 406A to proxy 404, which forwards the message as INVITE message 406B to called party 402. In response to INVITE message 406A, proxy 404 returns a TRYING message 408 to caller 400 containing a response code 100. Upon receipt of INVITE message 406B, called party 402 returns a RINGING message 410A to proxy 404 containing a response code 180, which forwards a corresponding RINGING message 410B to caller 400. In response to establishing a connection, the called party sends an OK message 412A containing a response code 200 to proxy 404, which forwards a corresponding OK message 412B to caller 400. In response, caller 400 sends an ACKnowledge message 414 directly to called party 402.

[0049] At this point, the point-to-point connection is established, enabling bi-directional voice traffic 416 to be transmitted between caller 400 and called party 402. At the conclusion of the call session, called party 402 sends a BYE message 418 to caller 400, which returns an OK message 420.

[0050] Now let's consider how this SIP exchange is handled from the viewpoint of the classification hierarchy. The initial SIP call INVITE message 406A may typically be encapsulated in a TCP packet. The 5-tuple identifier/classifier will perform a primary (i.e., first-level) classification of the packet using its 5-tuple signature. Based on the signature, the 5-tuple identifier/classifier will identify the packet as containing TCP and including a destination port that matches a well-known port number that is used for the SIP protocol. From a conceptual viewpoint, this first-level classification provides a filtering function that filters out packets matching a first-level set of rules.

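As an illustrative sketch of that first-level filter, using the conventional well-known SIP port (5060, an assumption here; the text says only "a well-known port number") and the 5-tuple signature ordering from the earlier sketch:

```python
# Sketch: first-level filter flagging TCP packets destined to the well-known
# SIP port for bifurcation to the higher classification layers.
TCP = 6
SIP_PORT = 5060   # conventional well-known SIP port (assumed for the example)

def is_sip_candidate(sig) -> bool:
    """sig: (dst_ip, src_ip, dst_port, src_port, proto) 5-tuple signature."""
    dst_ip, src_ip, dst_port, src_port, proto = sig
    return proto == TCP and dst_port == SIP_PORT
```
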
[0051] At t