United States Patent
Hluchyj et al.

US005231633A

[11] Patent Number: 5,231,633
[45] Date of Patent: Jul. 27, 1993

[54] METHOD FOR PRIORITIZING, SELECTIVELY DISCARDING, AND MULTIPLEXING DIFFERING TRAFFIC TYPE FAST PACKETS

[75] Inventors: Michael G. Hluchyj, Wellesley; Amit Bhargava, Somerville; Nanying Yin, Cambridge, all of Mass.

[73] Assignee: Codex Corporation, Canton, Mass.

[21] Appl. No.: 551,712

[22] Filed: Jul. 11, 1990

[51] Int. Cl.5 .......... H04L 12/56
[52] U.S. Cl. .......... 370/94.1; 370/60
[58] Field of Search .......... 370/94.1, 110.1, 60, 58, 56, 85.1, 103, 51, 58.1, 58.3; 340/825.5, 825.52, 825.51, 825.02; 395/650

[56] References Cited

U.S. PATENT DOCUMENTS

4,475,192 10/1984 Fernow et al. .......... 370/60
[two citations illegible in scan]
4,862,454 8/1989 Dias et al. .......... 370/60
4,868,813 9/1989 Suzuki .......... 370/60
4,894,824 1/1990 Hemmady et al. .......... 370/58.3
4,901,348 2/1990 Nichols et al. .......... 370/60
4,914,650 4/1990 Sriram .......... 370/94.1
5,001,702 3/1991 Teraslinna et al. .......... 370/94.1

OTHER PUBLICATIONS

[Title partially illegible in scan], J. N. Daigle, Jun. 1987, pp. 609-619.

Primary Examiner—Douglas W. Olms
Attorney, Agent, or Firm—Darleen J. Stockley; Steven G. Parmelee; Charles L. Warren

[57] ABSTRACT

A queueing and dequeueing mechanism for use in an integrated fast packet network, wherein fast packets from differing traffic types are multiplexed with one another through use of a weighted round-robin bandwidth allocation mechanism (517). Fast packets within a particular traffic type are selected for transmission through use of a head of line priority service (514), a packet discard mechanism (516), or both. The weighted round-robin bandwidth allocation mechanism functions, in part, based upon a credit count for each queue group, each group representing a particular traffic type.

31 Claims, 6 Drawing Sheets
`
`
`
[Representative drawing (matching FIG. 6): an enqueuer (505) routes packets from sources of one type into queues 511-513, each with watermarks A-C and a packet discarder (516); a head of line priority service (514) and a weighted round robin packet selector feed the trunk.]

`
[Sheet 1 of 6 — FIGS. 1A, 1B, 1C (prior art): bandwidth versus time for CBO, voice, and data traffic, showing the peak rate (RPEAK = RAVG for CBO), the average rate RAVG, and the required rate RREQUIRED; FIG. 2 (prior art): a switch with inputs 1 through M and internodal trunks 1 through N; FIGS. 3A and 3B (prior art).]

`
[Sheet 2 of 6 — FIG. 5: an enqueuer (505) routes packets from a switch output port to CBO queues (507-509), a voice queue (506) with a packet discarder (516), and data queues (511-513); head of line priority (514) within the CBO and data groups; a weighted round robin packet selector (517) feeds the internodal trunk.]

`
[Sheet 3 of 6 — FIG. 6: an enqueuer (505) routes packets from sources of one type into queues 511-513, each with watermarks A-C and a packet discarder (516); head of line priority (514) and a weighted round robin packet selector feed the trunk.]

`
[Sheet 4 of 6 — FIG. 7: enqueueing flowchart (700): set QID from the QID field of the packet; if arrival time discarding is being used, discard the first packet in the queue when the watermark is exceeded (706); enqueue the packet in QUEUE(QID) and set QUEUE_LEN(QID) = QUEUE_LEN(QID) + 1.]

`
[Sheet 5 of 6 — FIG. 8: dequeueing flowchart: set CREDIT[J] = MIN(CMAX[J], CREDIT[J] + CMAX[J]); scan the next queue in GROUP[J] in order of priority until a packet is found; if none is found, set K = MIN over all I of (-CREDIT[I]/CMAX[I]) and, for all I, set CREDIT[I] = CREDIT[I] + K*CMAX[I].]

`
[Sheet 6 of 6 — FIG. 9: transmission/discard flowchart: if departure time discarding is being used, read the discard priority DP of the packet and, while QUEUE_LEN(QID) + 1 > WATERMARK(QID,DP) (907), discard the first packet in the queue; otherwise transmit the packet, set QUEUE_LEN(QID) = QUEUE_LEN(QID) - 1, and decrement CREDIT[J] by the packet length in bytes; repeat while QUEUE_LEN(QID) > 0 and CREDIT[J] > 0.]

`
`
METHOD FOR PRIORITIZING, SELECTIVELY DISCARDING, AND MULTIPLEXING DIFFERING TRAFFIC TYPE FAST PACKETS

TECHNICAL FIELD

This invention relates generally to integrated fast packet networks, and more particularly to prioritization, at a network trunk node, of fast packets of a common traffic type, selective discarding of certain fast packets, and bandwidth allocation through multiplexing of fast packets from differing traffic type groups.

BACKGROUND OF THE INVENTION
`
Typical integrated fast packet networks carry at least three classes of traffic: continuous bit-stream oriented (CBO), voice, and data. FIGS. 1A-C illustrate the bandwidth characteristics and requirements of these differing traffic types, and their attributes are summarized as follows.

CBO: Packets from individual sources are fairly well-behaved and arrive at the internodal trunk queues more or less periodically. The peak rate (Rpeak) of multiplexed CBO sources is the same as the average rate (Ravg), and the trunk bandwidth required (Rreqd) is somewhat larger to keep the queueing delays small. Since Rpeak < Rreqd, no statistical gain is obtained in this case. CBO streams are sensitive to severe fluctuations in queueing delay and to packet losses, since both of these cause a loss of synchronization at the receiver. Packets from CBO sources with large packetization times have large delays when multiplexed together, as opposed to packets from sources with small packetization times. When multiplexed together, the delays are dominated by the large packetization time sources.
Packet Voice (with speech activity detection): The rate of multiplexed voice depends on the number of sources simultaneously in talk spurt and fluctuates between the maximum rate (Rpeak) and zero. The average rate (Ravg) is less than half of Rpeak (for conversational speech). The required rate (Rreqd) can be made to lie in between these two rates (rather than being made equal to Rpeak), making statistical gain (i.e., Rpeak/Rreqd) possible. Rreqd is chosen to keep the maximum delay and the packet loss rate under the given limits (a small loss is acceptable since degradation in voice quality can remain within acceptable limits).

Packets with excessive delays (typically a few hundred milliseconds) are also dropped at the destination based on an end-to-end delay limit, since voice is delay sensitive. This type of dropping results (with a high probability) in several packets being consecutively lost from the same voice call and seriously degrades the fidelity of the received voice signal.
Framed Data: This type of traffic can have large fluctuations in the rate and in the difference between Rpeak and Ravg. Rreqd is chosen to keep the end-to-end average frame delays acceptably small. Since the loss of a single fast packet results in the loss of an entire frame, it is not desirable to drop packets. However, since data traffic is bursty in nature, the loss of packets due to the non-availability of buffers cannot be prevented. Additionally, data traffic from different sources has differing quality of service (QOS) requirements; e.g., interactive traffic is relatively delay sensitive, whereas file transfers are less so.
`
Integrated networks considered here rely on fast packet technology to transport such packets between end systems. Before the flow of packets between the end systems begins, a connection (or virtual circuit) is established between them. This connection determines the path (i.e., the nodes and internodal trunks) that the fast packets will follow from end to end. FIG. 2 depicts a switch typically used at an intermediate node, which receives fast packets from one or more input trunks and switches them to one or more output trunks.
Packets coming in on an internodal trunk from each connection have a unique field in the header, called the Logical Channel Number (LCN), that corresponds to a logical channel on that trunk (see FIG. 3A). At the time of the establishment of the connection, a table is updated (at each node) that contains entries for the output Port number, the Queue ID (QID) of the output queue, and the new LCN. The LCN of each incoming packet is examined and this address is translated (by access to the table) to the output Port number, QID (for an output queue), and the new LCN for the next internodal trunk (see FIG. 3B).

The packets from the various output queues are transmitted (or discarded) in an order determined by the particular queueing discipline used in the network. (Though this invention is described with this environment in mind, the queueing discipline structure and methods described herein are applicable to a broad class of connection-oriented or connectionless networks and are not limited to an implementation using LCNs and table lookups.)
For example, one technique for queueing packets for transmission on an internodal trunk uses a first-in-first-out (FIFO) queue. Multiplexed voice and data traffic, however, experience overload periods when the instantaneous rate of packets entering a trunk becomes larger than Rreqd (which corresponds to the trunk bandwidth allocated to that traffic). The overload periods for voice can be expected to last from 10 ms to 1000 ms. For data, the periods can last from a few ms to several seconds. CBO sources are sensitive to fluctuations in queueing delays that vary between 1 ms and 32 ms per hop in typical T1 networks (when CBO sources alone are multiplexed into the same queue). Clearly, long overload periods for voice and data caused by a simple FIFO queue could make CBO queueing delay fluctuations as long as the overload periods, and would cause the end-to-end delays to be unacceptably high, since a longer smoothing delay would be required at the destination to prevent gaps from occurring during the bit-stream. Similarly, when data goes into overload, it dominates the trunk bandwidth and causes voice quality to deteriorate (since it would cause voice packets to be delayed or dropped). These clearly undesirable characteristics occur when all the traffic enters one queue.
Another approach uses pure Head Of Line Priority (HOLP) to give data priority over voice. However, HOLP does not solve the problem of data and voice queues affecting the QOS of each other and of CBO when they go into overload. In addition, CBO, voice, and data queues have typical busy periods (1 ms to 32 ms for CBO, 10 ms to 1 sec for voice, and up to a few secs for data) that are not only different but also large enough to cause one type of traffic to dominate the link bandwidth for relatively large amounts of time. For example, if CBO is given highest priority, even though it never goes into overload, a long busy period (say 32 ms) could affect the quality of the voice connections because voice packets could be blocked for 32 ms at that hop. If HOLP were used and voice was given either the highest or second highest priority (it cannot be given the lowest as it is delay sensitive), the voice queue would just take up more bandwidth when it went into overload. Instead of the voice queue growing and losing packets (via a discard mechanism), it would (unfairly) affect the quality of service of the lower priority queues.
Movable boundary schemes for multiplexing two traffic types, voice and data, have also been proposed. In such schemes, the multiplex structure consists of frames that are made of transmission time slots. At the start of the frame, there are S time slots reserved for voice and the next N slots for data. At the start of the frame, if there are S or more voice packets awaiting transmission, they are loaded into the S time slots and data packets are loaded into the remaining N slots (if there are more than N they are queued). If there are fewer than S voice packets at the start of the frame, data packets are permitted to use the extra time slots (thus leading to the term "movable boundary"). Such methods neither consider priorities within a class of traffic nor do they propose an efficient discard mechanism.
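The prior art frame-filling rule described above can be sketched as follows; this is only an illustration of the general scheme, with the function name, queue representation, and slot counts S and N all being assumptions of this example:

```python
def allocate_frame(voice_queue, data_queue, s_slots, n_slots):
    """Fill one frame of s_slots + n_slots transmission time slots.

    Voice packets are loaded into the first s_slots; data packets take
    the next n_slots.  If fewer than s_slots voice packets are waiting,
    data packets may use the leftover voice slots (the "movable
    boundary").
    """
    frame = []
    # Voice is served first, up to its reserved slots.
    while voice_queue and len(frame) < s_slots:
        frame.append(voice_queue.pop(0))
    # Data fills its own slots plus any voice slots left idle.
    while data_queue and len(frame) < s_slots + n_slots:
        frame.append(data_queue.pop(0))
    return frame
```

With S = 3 and N = 3, two waiting voice packets leave one voice slot free, so a fourth data packet can cross the boundary into that slot.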
In another scheme, timers are used to implement the movable boundary between voice and data queues and a block dropping scheme that assumes embedded coding is used for voice. One approach in particular proposes four classes of traffic (with HOLP within each class if required), along with a discard mechanism for the class corresponding to voice traffic. The movable boundary scheme is implemented by limiting the number of packets transmitted in succession from any traffic class.
Queueing structures for virtual circuit (VC) based networks have also been proposed. The queueing disciplines described in these methods separate the VCs into priority classes and use round-robin scheduling between VCs. This discipline is more appropriate for conventional networks that use link-by-link flow control (as opposed to fast packet networks that use end-to-end flow control), since the flow into a queue can be tightly controlled. In addition, this discipline does not address the problem of integrating different traffic types to match quality of service requirements and is geared instead to providing fair service to VCs by serving them in a round-robin order before they are multiplexed. This would not work in the environment of the invention since CBO and voice sources cannot be flow controlled and the HOLP discipline between CBO, voice, and data is insufficient (as described earlier).
Packet discarding has also been proposed to relieve congestion in an integrated voice/data network. In such mechanisms, new incoming packets are discarded if the queues become excessive (i.e., filled). A disadvantage of this mechanism is that it discards packets without considering the importance of the packet. With speech coding and speech activity detection, some of the packets contain particularly important information. To reduce the impact of this sort of discarding, a selective packet discarding technique has been proposed. Only the packets considered to be less important for voice waveform reconstruction are discarded in the face of congestion, which is identified by the queue depth or the number of calls in talk spurt (i.e., packet arrival rate). In selective packet discarding, packets are classified into two discard priorities. Both schemes described above use only a single threshold for the indication of congestion.
`
Pursuant to another packet discarding mechanism, if an output queue of an internodal trunk is full, an incoming packet of the queue is dropped. If the queue is not full but the queue depth exceeds a given threshold and the incoming packet is marked droppable, the packet is also dropped. Otherwise, the packet is placed in the queue waiting for transmission. An alternative for dropping marked packets, referred to as "output dropping," has also been suggested, in which all incoming packets may be placed in the queue if the queue is not full. When a marked packet moves up to the head of the queue and is ready for transmission, the threshold is checked, and the packet will be dropped or transmitted accordingly.

None of the above described methodologies provides fully satisfactory performance, particularly when applied in an integrated fast packet network that supports transmission of varying traffic types.
SUMMARY OF THE INVENTION

These needs and others are substantially met through provision of the fast packet priority queueing, selective discarding, and bandwidth allocation methodology disclosed herein. Pursuant to one embodiment in accordance with the invention, fast packets of differing traffic types are prioritized pursuant to differing prioritization methods vis-a-vis one another. The prioritized packets from each group are then multiplexed and transmitted. For example, a first prioritization method can utilize HOLP, and a second prioritization method can rely upon packet discarding. In the alternative, these approaches can be combined in various ways; for example, a first group of fast packets for a particular traffic type can be prioritized using HOLP alone, and a second group for a differing traffic type can be prioritized using both a packet discard mechanism and a HOLP methodology.

In another embodiment, the multiplexing can be accomplished through use of a weighted round-robin bandwidth allocation protocol.

Viewed another way, pursuant to one embodiment of the invention, fast packets for differing traffic types are received and prioritized as a function, at least in part, of a first prioritization method, and fast packets that are queued pursuant to the first prioritization method are then multiplexed in accordance with a bandwidth allocation protocol, such as weighted round-robin. In one embodiment, the first prioritization method could be, for example, HOLP.

In yet another embodiment in accordance with the invention, fast packets for differing traffic types are multiplexed by queuing fast packets for each traffic type in a separate queue, such that queues that contain fast packets for a particular traffic type are grouped together discrete from other traffic types. A first group of queues can then be selected, and a credit count associated with the first group of queues can be examined to determine whether any credit remains therein. If so, one fast packet buffered within a queue corresponding to the highest priority queue in the group that has a packet can be removed and transmitted. Upon transmission, the credit count can be altered in a manner corresponding to the length of the fast packet. For example, in one embodiment, the credit count can be decremented by an amount equal to the number of bytes in the transmitted fast packet. The bandwidth allocation is achieved by incrementing the credit count for a queue group by different amounts for different groups.
`
`

`
BRIEF DESCRIPTION OF THE DRAWINGS

FIGS. 1A-C comprise a depiction of prior art fast packet traffic types;

FIG. 2 comprises a block diagram depiction of a prior art fast packet switch with input analysis and output queue;

FIGS. 3A-B comprise a prior art depiction of an information header as initially received and as subsequently processed by the analysis block for the prior art switch;

FIG. 4 comprises a depiction of a post-switching queueing and multiplexing process in accordance with the invention;

FIG. 5 comprises a block diagram depiction of one possible queueing and multiplexing process in accordance with the invention;

FIG. 6 comprises a block diagram depiction of an alternative embodiment of a queuing and multiplexing arrangement in accordance with the invention;

FIGS. 7-9 comprise flow diagrams depicting operation of queuing, discarding, and multiplexing aspects of an internodal switch in accordance with the invention.
`
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
FIG. 4 provides a general depiction of the enqueueing and dequeueing process contemplated herein (400). Following an appropriate enqueueing process (401), fast packets from various traffic types, such as CBO, voice, and framed data, are buffered in appropriate queues, which queues are segregated into groups (402) that correspond to the differing traffic types. In this example, the first group (403) buffers CBO fast packets, the second group (404) buffers voice fast packets, and the third group (405) buffers framed data fast packets. (Additional description of the various queues will be provided below where appropriate.) The buffered fast packets are then dequeued through use of an appropriate dequeueing process (406) and provided to an internodal trunk to support continued transmission of the dequeued fast packets.
Turning now to FIG. 5, fast packets of different traffic types are routed by a switch and enqueueing process (505) (using the header information) to the appropriate queue. In this embodiment, voice packets are queued into a voice queue (506). The remaining two traffic type groups (CBO and framed data) are each provided with multiple queues (507-509 and 511-513, respectively). These multiple queues provide the differing quality of service that is required by CBO and data traffic with different characteristics. CBO sources typically have varying packetization times and packet lengths; hence, this system will separate out packets from different sources into more than one queue, because the worst case delay for any queue is determined by the largest packetization time for sources multiplexed into that queue. For data, the sources can generate short length data frames (for example, from an interactive terminal application) or large amounts of information (such as file transfers). Both types are not placed in a single queue since an interactive user expects quick response times whereas a file transfer can take longer.
A HOLP service discipline (514) between different data queues (with high priority (511) to short frames) or CBO queues (with a higher priority for queues (507) for sources with low burst size (packetization times)) provides an appropriate quality of service for the different types of traffic in those queues. For example, with three CBO queues (507-509), all packets from sources with packetization times less than 5 ms could be put into the highest priority queue (507), sources with packetization times between 5 ms and 25 ms in the intermediate priority queue (508), and sources with packetization times larger than 25 ms in the lowest priority queue (509). For data, the allocation is done similarly based on average frame lengths. As an example, sources with frame sizes less than 64 bytes could be assigned to the highest priority queue (511), frame sizes between 64 and 512 bytes to the intermediate priority queue (512), and larger than 512 bytes to the lowest priority queue (513).
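The example thresholds above can be captured in a small routing helper. This is only an illustrative sketch of the assignment rule, with the queue reference numerals taken from FIG. 5 and the handling of boundary values an assumption of this example:

```python
def cbo_queue(packetization_ms):
    """Map a CBO source to a priority queue by packetization time."""
    if packetization_ms < 5:
        return 507   # highest priority queue
    if packetization_ms <= 25:
        return 508   # intermediate priority queue
    return 509       # lowest priority queue


def data_queue(frame_bytes):
    """Map a data source to a priority queue by average frame size."""
    if frame_bytes < 64:
        return 511   # highest priority (short, interactive frames)
    if frame_bytes <= 512:
        return 512   # intermediate priority
    return 513       # lowest priority (bulk transfers)
```

For instance, an interactive terminal source with 40-byte frames would land in queue 511, while a file transfer with 1500-byte frames would land in queue 513.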
In general, a HOLP discipline tends to give superior service to the high priority queues (507 and 511) at the expense of degrading the performance of the lower priority queues. In this embodiment, however, the lower priority queues do not experience a severe degradation because the higher priority CBO queues and short data frames in the higher priority data queues have short busy periods.

The voice queue has traffic which is similar (even with different coding rates) and therefore only a single queue is needed.
It will be noted that, in this embodiment, a head of line priority mechanism has not been provided for the voice fast packets, since only one voice fast packet queue (506) has been provided. Instead, a packet discarder (516) has been provided. The functioning of this packet discarder will now be described in more detail with reference to FIG. 6.
FIG. 6 depicts an altered embodiment wherein the voice and framed data fast packets are processed through a packet discard mechanism as well as a HOLP service. There are two bits in the header of each fast packet that indicate a discard priority. The data packets are classified into four discard priorities: first, second, third, and last discarded. The voice packets are classified into three discard priorities: first, second, and last discarded. The discard priority is assigned to a packet based on its importance in reconstructing the voice waveform or its effect on the number of retransmitted data frames. A number of water-marks (A-C) corresponding to the discard priorities are provided in the voice and data queues. With this discarding mechanism, all arriving packets are allowed to get into the queue. If the queue is filled (i.e., the last water-mark is exceeded), the packet in the front of the queue is dropped no matter what the discard priority of this packet is. If the queue is not full, packets can be discarded from the front of the queue (based on the water-marks, queue depth, and discard priority of the packet at the front of the queue) at the instant of packet departure from the queue or at the instant of packet arrival to the queue. These are termed departure time discarding and arrival time discarding.
In departure time discarding, the packet discarding is done prior to the departure of a packet. Once the trunk is ready for a packet transmission, the packet in the front of the queue is checked. The discard priority of this packet is read and the current queue depth is compared with the water-mark related to the discard priority of this packet. If the queue depth exceeds the water-mark, the packet is discarded. The next packet in the front of the queue is then examined. The procedure repeats until the queue depth does not exceed the water-mark related to the discard priority of a packet in the front of the queue. Finally, this packet is transmitted.
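A minimal sketch of that departure-time loop, assuming each queue entry is a (packet, discard priority) pair and that watermark maps a discard priority to its depth threshold for this queue (both shapes are assumptions of this example, not the patent's data layout):

```python
def select_for_transmission(queue, watermark):
    """Departure time discarding: drop front packets whose watermark
    is exceeded by the current queue depth, then return the first
    surviving packet (or None if the queue empties).

    queue: list of (packet, discard_priority) pairs, front at index 0.
    watermark: dict mapping discard_priority -> queue-depth threshold.
    """
    while queue:
        _, dp = queue[0]
        if len(queue) > watermark[dp]:
            queue.pop(0)          # depth exceeds this packet's watermark: discard
        else:
            return queue.pop(0)   # depth acceptable: transmit this packet
    return None
```

Note that the loop re-checks the new front packet after every discard, exactly as the repeated procedure above requires.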
`
`

`
In arrival time discarding, the packet discarding is done after the arrival of a new packet. After the arrival of a new packet, the discard priority of the packet in the front of the queue is read and the current queue depth is compared with the water-mark related to the discard priority of that packet. If the queue depth exceeds the water-mark, the packet is discarded. Unlike departure time discarding, no more packets are checked for discarding. Once the trunk is ready for a packet transmission, the packet in the front of the queue is transmitted. (Packets in a higher priority queue can be discarded, not only based on the queue depth of the queue they are queued in, but also based on the depth of some other queue in the queuing structure. The measurement of the queue depth can be an instantaneous queue depth, or an average queue depth (over some window of time).)
Finally, returning to FIG. 5, this embodiment provides a weighted round-robin (WRR) packet selector (517) to provide a bandwidth allocation service. In effect, the packet selector functions to multiplex fast packets from the various queues and provide them to the trunk for continued transmission.

With WRR, the server serves the queues cyclically in a prescribed order. It examines each queue a specified number of times (proportional to its weight). If all the queues have packets to send, WRR has the effect of dividing the trunk bandwidth among the queues in the ratio of the weights. When some queues are idle, WRR does not waste bandwidth but continues to examine the queues in the prescribed order until it finds a packet. WRR is similar to a conventional TDM system except that timeslots do not go idle: allocated bandwidth unused by one queue can be used by another queue.
To efficiently multiplex additional arbitrary traffic types, one would add a new queue group (that will be served by WRR) if it is found that the traffic type cannot be placed into an existing queue group due to bandwidth usage incompatibilities with the other traffic in that group. This can be done by appropriately characterizing the rate fluctuations and the busy period behavior of the traffic type when identical sources of that type are multiplexed together. Following this, HOLP queues should be created within that group to finely adjust the quality of service (i.e., delay and packet loss) of sources of the same type but with different parameters (note that if these parameters are significantly different for some sources, new traffic types should be defined which are then placed in different queue groups).
The methodology described below determines the order of service of the different queues so that the overall service discipline is a hybrid of WRR and HOLP. Through use of this approach, the queue groups can have variable length packets and still be allocated the required fraction of trunk bandwidth on the average. Also, given that the smallest unit of transmission is a packet, the method attempts to interleave packets as finely as possible so that delay jitter for different queue groups is reduced.
As explained below, a credit counter is maintained for each queue group and this counter is incremented by a certain number of credits at each scan. A queue group can transmit packets when scanned if the number of credits it has is greater than zero. In this embodiment, the credits are measured in bytes. Every time a packet is transmitted, the credit counter is decremented by the number of bytes in the packet. The credit counter is allowed to go negative (if a large size packet is transmitted) but is never allowed to exceed a (positive) maximum value.
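A sketch of this credit accounting, where the function names and container shapes are assumptions of this example (each group's queues are ordered highest priority first, and each queue holds packet lengths in bytes):

```python
def replenish(credit, cmax):
    """At each scan the counter gains cmax credits, capped at cmax."""
    return min(cmax, credit + cmax)


def serve_group(queues, credit):
    """Send packets from a group's priority-ordered queues while the
    group's credit remains positive.

    queues: list of queues, highest priority first; each queue is a
    list of packet lengths in bytes.  Returns (lengths sent, remaining
    credit).  The credit may end up negative after a large packet.
    """
    sent = []
    while credit > 0:
        # HOLP within the group: take the highest-priority non-empty queue.
        queue = next((q for q in queues if q), None)
        if queue is None:
            break  # no packets waiting anywhere in this group
        length = queue.pop(0)
        sent.append(length)
        credit -= length  # one credit per byte transmitted
    return sent, credit
```

For example, a group with 100 credits can send two 60-byte packets, leaving its counter at -20; the deficit is made up on subsequent (capped) replenishments, so the group's long-run share still tracks its credit allotment.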
The method is described in terms of the enqueueing and dequeueing processes referred to above that perform the task of queueing packets into the output queues and removing a packet from an output queue, respectively. In this description, there is one enqueueing process that moves packets from a switch output port to output queues and one dequeueing process that moves packets from output queues to the output trunk.
A description of the variables used in the following explanation is as follows:

LCN: Logical Channel Number that is in the packet header at an input.

QID: The address of an output queue corresponding to the LCN, obtained from a lookup table.

DP: The discard priority of a packet.

QUEUE_LEN(QID): The number of packets queued at some instant in a queue with the address of QID.

WATERMARK(QID,DP): If the QUEUE_LEN(QID) for some queue with address QID is greater than WATERMARK(QID,DP), then a packet with discard priority DP can be discarded.

CREDIT[J]: The value of the credit counter for queue group J.

CMAX[J]: The maximum value that CREDIT[J] can attain. This is also the number of credits given to queue group J at each scan. CREDIT[J] is set to CMAX[J] initially.

MAX-Q-GRPS: The total number of queue groups that the enqueueing and dequeueing processes serve.

The proper selection of the CMAX[J] values allows the groups to have variable packet lengths and also enables the system to interleave traffic from the queue groups in a fine grained manner.
These are precomputed as follows. We define:

L[J] = average packet length (in bytes) in queue group J
f[J] = fraction of trunk bandwidth to be allocated to group J when all groups
have packets
CT = CMAX[1] + CMAX[2] + ... + CMAX[MAX-Q-GRPS].
Since f[J] = CMAX[J]/CT, we have the relation CMAX[I]/CMAX[J] = f[I]/f[J]
for all I,J. To obtain the maximum interleaving, the maximum number of
packets that can be transmitted together (when other groups have credits and
packets waiting) is made as small as possible (i.e., 1 packet). The maximum
number of packets that can be transmitted together is max(CMAX[J]/L[J]).
CMAX[J]/L[J] = f[J]*CT/L[J] attains its maximum when f[J]/L[J] attains its
maximum. The method described below makes use of these relationships to
compute the appropriate CMAX[J]s:

I = J for which f[J]/L[J] is maximum;
CMAX[I] = L[I];
for J = 1 to MAX-Q-GRPS do
    CMAX[J] = f[J]*CMAX[I]/f[I].
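As a sketch, the pseudocode above translates directly; zero-based list indices stand in for the 1-based group numbers, and the function name is ours.

```python
def compute_cmax(f, L):
    """Given bandwidth fractions f[J] and average packet lengths L[J],
    give the group with the largest f[J]/L[J] a credit maximum of one
    average packet length, and scale the remaining groups in
    proportion to their bandwidth fractions."""
    I = max(range(len(f)), key=lambda j: f[j] / L[j])  # group with max f/L
    return [f[j] * L[I] / f[I] for j in range(len(f))]  # CMAX[I] = L[I]
```

With this choice, max(CMAX[J]/L[J]) = CMAX[I]/L[I] = 1, so at most one packet from a group is transmitted before the scan moves on when other groups have credits and packets waiting.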
The enqueueing process (700) appears in FIG. 7. QID is first set as the QID
identified in the packet field (701). If arrival-time discarding is being used
(702), the discard priority (DP) of the packet at the front of the queue is
determined from the corresponding packet field (703). The appropriate queue
length is then compared to the appropriate watermark (704), and if the queue
length exceeds the relevant watermark, the packet at the front of the queue is
discarded (706). Otherwise, the incoming packet is enqueued (708) and the
queue length variable is increased by one packet (707). If arrival-time
discarding is not being used (702), then the process simply enqueues the
incoming packet (708) and again increases the queue length by 1 (707).
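A minimal sketch of these enqueueing steps, assuming a packet object with a `dp` field and a per-queue map from discard priority to watermark (both our notational choices). The excerpt does not spell out whether the arrival is still enqueued after a discard; this sketch assumes the arrival replaces the dropped head-of-line packet.

```python
from collections import deque, namedtuple

Packet = namedtuple("Packet", ["dp", "payload"])

def enqueue(queue, watermarks, packet, arrival_time_discard):
    """Sketch of FIG. 7: under arrival-time discarding, drop the
    head-of-line packet when the queue length exceeds the watermark
    for that packet's discard priority, then enqueue the arrival."""
    if arrival_time_discard and queue:
        head_dp = queue[0].dp                 # discard priority of head (703)
        if len(queue) > watermarks[head_dp]:  # compare to watermark (704)
            queue.popleft()                   # discard head-of-line packet (706)
    queue.append(packet)                      # enqueue arrival (708), length +1 (707)
```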
Referring now to FIG. 8, the dequeueing process appears generally as depicted
by reference numeral 800. To begin, CREDIT[J] is established as zero for all
J's (801). Next, variable J is initialized (802). The credit counter for the
selected queue group is then established (803), and the process determines
whether positive credit exists for the queue group in question and whether
there are any fast packets queued for transmission in the relevant group
(804). If true
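The excerpt breaks off at step (804), but the scan it describes can be sketched as follows, assuming, as the surrounding text implies, that step (803) replenishes each group's credit by CMAX[J] up to the maximum and that transmission continues while credit is positive and packets are waiting.

```python
def dequeue_scan(queues, credit, cmax):
    """One scan of FIG. 8's weighted round-robin service: replenish
    each queue group's credit (803), then transmit while credit is
    positive and packets are queued (804); the credit may go negative
    on the final packet, as described earlier.  Packets are dicts
    with a 'bytes' field (our convention).  Returns the packets in
    transmit order."""
    sent = []
    for j in range(len(queues)):
        credit[j] = min(credit[j] + cmax[j], cmax[j])  # replenish up to CMAX[J] (803)
        while credit[j] > 0 and queues[j]:             # credit and packets present (804)
            pkt = queues[j].pop(0)
            credit[j] -= pkt["bytes"]                  # may go negative
            sent.append(pkt)
    return sent
```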
