segment to the source. This TCP segment has the RST flag bit (see Section 3.5.2) set to 1. Thus, when a host sends a reset segment, it is telling the source "I don't have a socket for that segment. Please do not resend the segment." When a host receives a UDP packet whose destination port number doesn't match any ongoing UDP socket, the host sends a special ICMP datagram, as discussed in Chapter 4.
This completes our introduction to error control and flow control in TCP. In Section 3.7 we'll return to TCP and look at TCP congestion control in some depth. Before doing so, however, we first step back and examine congestion-control issues in a broader context.

3.6 Principles of Congestion Control

In the previous sections, we examined both the general principles and specific TCP mechanisms used to provide for a reliable data transfer service in the face of packet loss. We mentioned earlier that, in practice, such loss typically results from the overflowing of router buffers as the network becomes congested. Packet retransmission thus treats a symptom of network congestion (the loss of a specific transport-layer segment) but does not treat the cause of network congestion: too many sources attempting to send data at too high a rate. To treat the cause of network congestion, mechanisms are needed to throttle senders in the face of network congestion.

In this section, we consider the problem of congestion control in a general context, seeking to understand why congestion is a bad thing, how network congestion is manifested in the performance received by upper-layer applications, and various approaches that can be taken to avoid, or react to, network congestion. This more general study of congestion control is appropriate since, as with reliable data transfer, it is high on our "top-ten" list of fundamentally important problems in networking. We conclude this section with a discussion of congestion control in the available bit-rate (ABR) service in asynchronous transfer mode (ATM) networks. The following section contains a detailed study of TCP's congestion-control algorithm.
3.6.1 The Causes and the Costs of Congestion

Let's begin our general study of congestion control by examining three increasingly complex scenarios in which congestion occurs. In each case, we'll look at why congestion occurs in the first place and at the cost of congestion (in terms of resources not fully utilized and poor performance received by the end systems). We'll not (yet) focus on how to react to, or avoid, congestion but rather focus on the simpler issue of understanding what happens as hosts increase their transmission rate and the network becomes congested.

Scenario 1: Two Senders, a Router with Infinite Buffers

We begin by considering perhaps the simplest congestion scenario possible: Two hosts (A and B) each have a connection that shares a single hop between source and destination, as shown in Figure 3.42.
Let's assume that the application in Host A is sending data into the connection (for example, passing data to the transport-level protocol via a socket) at an average rate of λin bytes/sec. These data are original in the sense that each unit of data is sent into the socket only once. The underlying transport-level protocol is a simple one. Data is encapsulated and sent; no error recovery (for example, retransmission), flow control, or congestion control is performed. Ignoring the additional overhead due to adding transport- and lower-layer header information, the rate at which Host A offers traffic to the router in this first scenario is thus λin bytes/sec. Host B operates in a similar manner, and we assume for simplicity that it too is sending at a rate of λin bytes/sec. Packets from Hosts A and B pass through a router and over a shared outgoing link of capacity R. The router has buffers that allow it to store incoming packets when the packet arrival rate exceeds the outgoing link's capacity. In this first scenario, we assume that the router has an infinite amount of buffer space.
Figure 3.43 plots the performance of Host A's connection under this first scenario. The left graph plots the per-connection throughput (number of bytes per second at the receiver) as a function of the connection-sending rate. For a sending rate between 0 and R/2, the throughput at the receiver equals the sender's sending rate: everything sent by the sender is received at the receiver with a finite delay. When the sending rate is above R/2, however, the throughput is only R/2.

Figure 3.42 • Congestion scenario 1: Two connections sharing a single hop with infinite buffers

Figure 3.43 • Congestion scenario 1: Throughput and delay as a function of host sending rate

This upper limit on throughput is a consequence of the sharing of link capacity between two connections. The link simply cannot deliver packets to a receiver at a steady-state rate that exceeds R/2. No matter how high Hosts A and B set their sending rates, they will each never see a throughput higher than R/2.

Achieving a per-connection throughput of R/2 might actually appear to be a good thing, because the link is fully utilized in delivering packets to their destinations. The right-hand graph in Figure 3.43, however, shows the consequences of operating near link capacity. As the sending rate approaches R/2 (from the left), the average delay becomes larger and larger. When the sending rate exceeds R/2, the average number of queued packets in the router is unbounded, and the average delay between source and destination becomes infinite (assuming that the connections operate at these sending rates for an infinite period of time and there is an infinite amount of buffering available). Thus, while operating at an aggregate throughput of near R may be ideal from a throughput standpoint, it is far from ideal from a delay standpoint. Even in this (extremely) idealized scenario, we've already found one cost of a congested network: large queuing delays are experienced as the packet-arrival rate nears the link capacity.
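
To make the shape of these two curves concrete, here is a small Python sketch (not from the text) that evaluates the scenario 1 model numerically. The throughput cap of min(λin, R/2) follows directly from the discussion above; the delay expression is an illustrative stand-in chosen only to reproduce the qualitative blow-up as λin approaches R/2, since the text gives no closed-form delay formula.

    # A minimal numerical sketch of scenario 1 (illustrative, not from the text).
    # Per-connection throughput is capped at the R/2 fair share; the delay
    # formula below is an assumed stand-in that diverges as lambda_in -> R/2.

    R = 1.0  # shared link capacity, in normalized bytes/sec

    def throughput(lam_in):
        """Receive rate for one connection: everything gets through up to R/2."""
        return min(lam_in, R / 2)

    def avg_delay(lam_in):
        """Qualitative delay curve: finite below R/2, infinite at and above it."""
        if lam_in >= R / 2:
            return float("inf")
        return 1.0 / (R - 2 * lam_in)  # assumed form; only the shape matters

    for lam in (0.10, 0.30, 0.45, 0.49, 0.60):
        print(f"lambda_in = {lam:.2f}  throughput = {throughput(lam):.2f}  "
              f"delay = {avg_delay(lam):.1f}")

Running the sketch shows the delay climbing from 1.25 to 50 time units as λin rises from 0.10 to 0.49, while throughput never exceeds 0.50: the same tradeoff plotted in Figure 3.43.
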

Scenario 2: Two Senders and a Router with Finite Buffers

Let us now slightly modify scenario 1 in the following two ways (see Figure 3.44). First, the amount of router buffering is assumed to be finite. A consequence of this real-world assumption is that packets will be dropped when arriving to an already-full buffer. Second, we assume that each connection is reliable. If a packet containing a transport-level segment is dropped at the router, the sender will eventually retransmit it.

Figure 3.44 • Scenario 2: Two hosts (with retransmissions) and a router with finite buffers

Because packets can be retransmitted, we must now be more careful with our use of the term sending rate. Specifically, let us again denote the rate at which the application sends original data into the socket by λin bytes/sec. The rate at which the transport layer sends segments (containing original data and retransmitted data) into the network will be denoted λ'in bytes/sec. λ'in is sometimes referred to as the offered load to the network.
The performance realized under scenario 2 will now depend strongly on how retransmission is performed. First, consider the unrealistic case that Host A is able to somehow (magically!) determine whether or not a buffer is free in the router and thus sends a packet only when a buffer is free. In this case, no loss would occur, λin would be equal to λ'in, and the throughput of the connection would be equal to λin. This case is shown in Figure 3.45(a). From a throughput standpoint, performance is ideal: everything that is sent is received. Note that the average host sending rate cannot exceed R/2 under this scenario, since packet loss is assumed never to occur.

Consider next the slightly more realistic case that the sender retransmits only when a packet is known for certain to be lost. (Again, this assumption is a bit of a stretch. However, it is possible that the sending host might set its timeout large enough to be virtually assured that a packet that has not been acknowledged has been lost.) In this case, the performance might look something like that shown in Figure 3.45(b). To appreciate what is happening here, consider the case that the offered load, λ'in (the rate of original data transmission plus retransmissions), equals R/2. According to Figure 3.45(b), at this value of the offered load, the rate at which data are delivered to the receiver application is R/3.

Figure 3.45 • Scenario 2 performance with finite buffers

Thus, out of the 0.5R units of data transmitted, 0.333R bytes/sec (on average) are original data and 0.166R bytes/sec (on average) are retransmitted data. We see here another cost of a congested network: the sender must perform retransmissions in order to compensate for dropped (lost) packets due to buffer overflow.

Finally, let us consider the case that the sender may time out prematurely and retransmit a packet that has been delayed in the queue but not yet lost. In this case, both the original data packet and the retransmission may reach the receiver. Of course, the receiver needs but one copy of this packet and will discard the retransmission. In this case, the work done by the router in forwarding the retransmitted copy of the original packet was wasted, as the receiver will have already received the original copy of this packet. The router would have better used the link transmission capacity to send a different packet instead. Here then is yet another cost of a congested network: unneeded retransmissions by the sender in the face of large delays may cause a router to use its link bandwidth to forward unneeded copies of a packet. Figure 3.45(c) shows the throughput versus offered load when each packet is assumed to be forwarded (on average) twice by the router. Since each packet is forwarded twice, the throughput will have an asymptotic value of R/4 as the offered load approaches R/2.
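
The R/4 asymptote can be checked with a few lines of arithmetic. The sketch below (an assumed model, consistent with the discussion above) treats the router as forwarding at most R/2 per connection and counts only one useful copy out of every k copies forwarded.

    # Assumed goodput model for scenario 2: the router forwards at most R/2
    # per connection, and if each packet is forwarded k times on average,
    # only 1/k of the forwarded bytes are new data to the application.

    R = 1.0

    def goodput(offered_load, copies_per_packet):
        """Application-level delivery rate for one connection."""
        forwarded = min(offered_load, R / 2)   # router output is capped at R/2
        return forwarded / copies_per_packet   # one useful copy out of k

    # Each packet forwarded twice (k = 2): goodput tends to R/4 = 0.25.
    print(goodput(offered_load=R / 2, copies_per_packet=2))
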

Scenario 3: Four Senders, Routers with Finite Buffers, and Multihop Paths

In our final congestion scenario, four hosts transmit packets, each over overlapping two-hop paths, as shown in Figure 3.46. We again assume that each host uses a timeout/retransmission mechanism to implement a reliable data transfer service, that all hosts have the same value of λin, and that all router links have capacity R bytes/sec.

Figure 3.46 • Four senders, routers with finite buffers, and multihop paths

Let's consider the connection from Host A to Host C, passing through routers R1 and R2. The A-C connection shares router R1 with the D-B connection and shares router R2 with the B-D connection. For extremely small values of λin, buffer overflows are rare (as in congestion scenarios 1 and 2), and the throughput approximately equals the offered load. For slightly larger values of λin, the corresponding throughput is also larger, since more original data is being transmitted into the network and delivered to the destination, and overflows are still rare. Thus, for small values of λin, an increase in λin results in an increase in λout.

Having considered the case of extremely low traffic, let's next examine the case that λin (and hence λ'in) is extremely large. Consider router R2. The A-C traffic arriving to router R2 (which arrives at R2 after being forwarded from R1) can have an arrival rate at R2 that is at most R, the capacity of the link from R1 to R2, regardless of the value of λ'in. If λ'in is extremely large for all connections (including the B-D connection), then the arrival rate of B-D traffic at R2 can be much larger than that of the A-C traffic.
Because the A-C and B-D traffic must compete at router R2 for the limited amount of buffer space, the amount of A-C traffic that successfully gets through R2 (that is, is not lost due to buffer overflow) becomes smaller and smaller as the offered load from B-D gets larger and larger. In the limit, as the offered load approaches infinity, an empty buffer at R2 is immediately filled by a B-D packet, and the throughput of the A-C connection at R2 goes to zero. This, in turn, implies that the A-C end-to-end throughput goes to zero in the limit of heavy traffic. These considerations give rise to the offered load versus throughput tradeoff shown in Figure 3.47.
The reason for the eventual decrease in throughput with increasing offered load is evident when one considers the amount of wasted work done by the network. In the high-traffic scenario outlined above, whenever a packet is dropped at a second-hop router, the work done by the first-hop router in forwarding a packet to the second-hop router ends up being "wasted." The network would have been equally well off (more accurately, equally bad off) if the first router had simply discarded that packet and remained idle. More to the point, the transmission capacity used at the first router to forward the packet to the second router could have been much more profitably used to transmit a different packet. (For example, when selecting a packet for transmission, it might be better for a router to give priority to packets that have already traversed some number of upstream routers.) So here we see yet another cost of dropping a packet due to congestion: when a packet is dropped along a path, the transmission capacity that was used at each of the upstream links to forward that packet to the point at which it is dropped ends up having been wasted.

3.6.2 Approaches to Congestion Control

In Section 3.7, we'll examine TCP's specific approach to congestion control in great detail. Here, we identify the two broad approaches to congestion control that are taken in practice and discuss specific network architectures and congestion-control protocols embodying these approaches.

At the broadest level, we can distinguish among congestion-control approaches by whether the network layer provides any explicit assistance to the transport layer for congestion-control purposes:
• End-to-end congestion control. In an end-to-end approach to congestion control, the network layer provides no explicit support to the transport layer for congestion-control purposes. Even the presence of congestion in the network must be inferred by the end systems based only on observed network behavior (for example, packet loss and delay). We will see in Section 3.7 that TCP must necessarily take this end-to-end approach toward congestion control, since the IP layer provides no feedback to the end systems regarding network congestion. TCP segment loss (as indicated by a timeout or a triple duplicate acknowledgment) is taken as an indication of network congestion and TCP decreases its window size accordingly. We will also see that new proposals for TCP use increasing round-trip delay values as indicators of increased network congestion.

Figure 3.47 • Scenario 3 performance with finite buffers and multihop paths

• Network-assisted congestion control. With network-assisted congestion control, network-layer components (that is, routers) provide explicit feedback to the sender regarding the congestion state in the network. This feedback may be as simple as a single bit indicating congestion at a link. This approach was taken in the early IBM SNA [Schwartz 1982] and DEC DECnet [Jain 1989; Ramakrishnan 1990] architectures, was recently proposed for TCP/IP networks [Floyd TCP 1994; RFC 2481], and is used in ATM available bit-rate (ABR) congestion control as well, as discussed below. More sophisticated network feedback is also possible. For example, one form of ATM ABR congestion control that we will study shortly allows a router to inform the sender explicitly of the transmission rate it (the router) can support on an outgoing link.
For network-assisted congestion control, congestion information is typically fed back from the network to the sender in one of two ways, as shown in Figure 3.48. Direct feedback may be sent from a network router to the sender. This form of notification typically takes the form of a choke packet (essentially saying, "I'm congested!"). The second form of notification occurs when a router marks/updates a field in a packet flowing from sender to receiver to indicate congestion. Upon receipt of a marked packet, the receiver then notifies the sender of the congestion indication. Note that this latter form of notification takes at least a full round-trip time.

3.6.3 Network-Assisted Congestion-Control Example: ATM ABR Congestion Control

We conclude this section with a brief case study of the congestion-control algorithm in ATM ABR, a protocol that takes a network-assisted approach toward congestion control. We stress that our goal here is not to describe aspects of the ATM architecture in any detail, but rather to illustrate a protocol that takes a markedly different approach toward congestion control from that of the Internet's TCP protocol.

Figure 3.48 • Two feedback pathways for network-indicated congestion information

Indeed, we only present below those few aspects of the ATM architecture that are needed to understand ABR congestion control.
Fundamentally, ATM takes a virtual-circuit (VC) oriented approach toward packet switching. Recall from our discussion in Chapter 1 that this means that each switch on the source-to-destination path will maintain state about the source-to-destination VC. This per-VC state allows a switch to track the behavior of individual senders (e.g., tracking their average transmission rate) and to take source-specific congestion-control actions (such as explicitly signaling to the sender to reduce its rate when the switch becomes congested). This per-VC state at network switches makes ATM ideally suited to perform network-assisted congestion control.
ABR has been designed as an elastic data transfer service in a manner reminiscent of TCP. When the network is underloaded, ABR service should be able to take advantage of the spare available bandwidth; when the network is congested, ABR service should throttle its transmission rate to some predetermined minimum transmission rate. A detailed tutorial on ATM ABR congestion control and traffic management is provided in [Jain 1996].
Figure 3.49 shows the framework for ATM ABR congestion control. In our discussion we adopt ATM terminology (for example, using the term switch rather than router, and the term cell rather than packet). With ATM ABR service, data cells are transmitted from a source to a destination through a series of intermediate switches. Interspersed with the data cells are resource-management cells (RM cells); these RM cells can be used to convey congestion-related information among the hosts and switches.

Figure 3.49 • Congestion-control framework for ATM ABR service

When an RM cell arrives at a destination, it will be turned around and sent back to the sender (possibly after the destination has modified the contents of the RM cell). It is also possible for a switch to generate an RM cell itself and send this RM cell directly to a source. RM cells can thus be used to provide both direct network feedback and network-feedback-via-the-receiver, as shown in Figure 3.49.

ATM ABR congestion control is a rate-based approach. That is, the sender explicitly computes a maximum rate at which it can send and regulates itself accordingly. ABR provides three mechanisms for signaling congestion-related information from the switches to the receiver:
• EFCI bit. Each data cell contains an explicit forward congestion indication (EFCI) bit. A congested network switch can set the EFCI bit in a data cell to 1 to signal congestion to the destination host. The destination must check the EFCI bit in all received data cells. When an RM cell arrives at the destination, if the most recently received data cell had the EFCI bit set to 1, then the destination sets the congestion indication bit (the CI bit) of the RM cell to 1 and sends the RM cell back to the sender. Using the EFCI bit in data cells and the CI bit in RM cells, a sender can thus be notified about congestion at a network switch.
• CI and NI bits. As noted above, sender-to-receiver RM cells are interspersed with data cells. The rate of RM cell interspersion is a tunable parameter, with the default value being one RM cell every 32 data cells. These RM cells have a congestion indication (CI) bit and a no increase (NI) bit that can be set by a congested network switch. Specifically, a switch can set the NI bit in a passing RM cell to 1 under mild congestion and can set the CI bit to 1 under severe congestion conditions. When a destination host receives an RM cell, it will send the RM cell back to the sender with its CI and NI bits intact (except that CI may be set to 1 by the destination as a result of the EFCI mechanism described above).
• ER setting. Each RM cell also contains a two-byte explicit rate (ER) field. A congested switch may lower the value contained in the ER field in a passing RM cell. In this manner, the ER field will be set to the minimum supportable rate of all switches on the source-to-destination path.
An ATM ABR source adjusts the rate at which it can send cells as a function of the CI, NI, and ER values in a returned RM cell. The rules for making this rate adjustment are rather complicated and a bit tedious. The interested reader is referred to [Jain 1996] for details.
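
Although the full ABR source rules are beyond our scope, the sketch below gives a flavor of how a source might fold the CI, NI, and ER values of a returned RM cell into its allowed cell rate. The structure (decrease multiplicatively on CI, hold on NI, otherwise increase, and always respect ER, the peak rate, and the minimum rate) follows the mechanisms just described, but the parameter names and factors are illustrative, not the actual ABR rules from [Jain 1996].

    # Simplified sketch of how an ABR source might adjust its allowed cell
    # rate (ACR) from a returned RM cell. The real rules are more involved
    # (see [Jain 1996]); parameter names and factors here are assumptions.

    MCR, PCR = 0.1, 10.0       # minimum / peak cell rate (assumed values)
    RIF, RDF = 1 / 16, 1 / 4   # rate increase / decrease factors (assumed)

    def adjust_acr(acr, ci, ni, er):
        """Return the new allowed cell rate given the RM cell's CI, NI, ER."""
        if ci:                      # severe congestion: multiplicative decrease
            acr = acr - acr * RDF
        elif not ni:                # no congestion indicated: additive increase
            acr = acr + RIF * PCR
        # NI set (and CI clear): hold the current rate.
        acr = min(acr, er, PCR)     # never exceed the explicit rate or PCR
        return max(acr, MCR)        # never drop below the guaranteed minimum

    print(adjust_acr(acr=2.0, ci=False, ni=False, er=8.0))  # increases
    print(adjust_acr(acr=2.0, ci=True, ni=False, er=8.0))   # decreases
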

3.7 TCP Congestion Control

In this section we return to our study of TCP. As we learned in Section 3.5, TCP provides a reliable transport service between two processes running on different hosts. Another key component of TCP is its congestion-control mechanism. As indicated in the previous section, TCP must use end-to-end congestion control rather than network-assisted congestion control, since the IP layer provides no explicit feedback to the end systems regarding network congestion.

The approach taken by TCP is to have each sender limit the rate at which it sends traffic into its connection as a function of perceived network congestion. If a TCP sender perceives that there is little congestion on the path between itself and the destination, then the TCP sender increases its send rate; if the sender perceives that there is congestion along the path, then the sender reduces its send rate. But this approach raises three questions. First, how does a TCP sender limit the rate at which it sends traffic into its connection? Second, how does a TCP sender perceive that there is congestion on the path between itself and the destination? And third, what algorithm should the sender use to change its send rate as a function of perceived end-to-end congestion? We will examine these three issues in the context of the TCP Reno congestion-control algorithm, which is used in most modern operating systems [Padhye 2001]. To keep the discussion concrete, we'll suppose that the TCP sender is sending a large file.
Let's first examine how a TCP sender limits the rate at which it sends traffic into its connection. In Section 3.5 we saw that each side of a TCP connection consists of a receive buffer, a send buffer, and several variables (LastByteRead, RcvWindow, and so on). The TCP congestion-control mechanism has each side of a connection keep track of an additional variable, the congestion window. The congestion window, denoted CongWin, imposes a constraint on the rate at which a TCP sender can send traffic into the network. Specifically, the amount of unacknowledged data at a sender may not exceed the minimum of CongWin and RcvWindow, that is:

    LastByteSent - LastByteAcked ≤ min{CongWin, RcvWindow}

In order to focus on congestion control (as opposed to flow control), let us henceforth assume that the TCP receive buffer is so large that the receive-window constraint can be ignored; thus, the amount of unacknowledged data at the sender is solely limited by CongWin.

The constraint above limits the amount of unacknowledged data at the sender and therefore indirectly limits the sender's send rate. To see this, consider a connection for which loss and packet transmission delays are negligible. Then, roughly, at the beginning of every RTT, the constraint permits the sender to send CongWin bytes of data into the connection; at the end of the RTT the sender receives acknowledgments for the data. Thus the sender's send rate is roughly CongWin/RTT bytes/sec. By adjusting the value of CongWin, the sender can therefore adjust the rate at which it sends data into its connection.
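
A quick back-of-the-envelope check of this rate expression, using illustrative values (an assumed 100 ms RTT, and the 10-segment window used in the worked example later in this section):

    # Rough send-rate check: with negligible loss and delay, the sender
    # transmits CongWin bytes per RTT, so rate ~= CongWin / RTT bytes/sec.
    # The RTT and window values here are illustrative.

    cong_win = 14_600   # bytes (ten 1,460-byte segments)
    rtt = 0.100         # seconds

    rate = cong_win / rtt
    print(f"{rate:.0f} bytes/sec  (~{rate * 8 / 1e6:.2f} Mbit/sec)")
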
Let's next consider how a TCP sender perceives that there is congestion on the path between itself and the destination. Let us define a "loss event" at a TCP sender as the occurrence of either a timeout or the receipt of three duplicate ACKs from the receiver (recall our discussion in Section 3.5.4 of the timeout event in Figure 3.33 and the subsequent modification to include fast retransmit on receipt of three duplicate ACKs). When there is excessive congestion, then one (or more) router buffers along the path overflow, causing a datagram (containing a TCP segment) to be dropped. The dropped datagram, in turn, results in a loss event at the sender, either a timeout or the receipt of three duplicate ACKs, which is taken by the sender to be an indication of congestion on the sender-to-receiver path.
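
The duplicate-ACK half of this definition is easy to express in code. The sketch below (an assumed structure, not TCP Reno's actual implementation) counts repeated ACKs for the same acknowledgment number and reports a loss event on the third duplicate; timeout detection would be a separate timer, omitted here.

    # Sketch of triple-duplicate-ACK detection (assumed structure; real TCP
    # implementations differ). A loss event fires on the third ACK that
    # repeats an already-seen acknowledgment number.

    def on_ack(state, ack_num):
        """Update duplicate-ACK state; return True when a loss event fires."""
        if ack_num == state.get("last_ack"):
            state["dup_count"] = state.get("dup_count", 0) + 1
        else:
            state["last_ack"] = ack_num
            state["dup_count"] = 0
        return state["dup_count"] == 3   # third duplicate of the same ACK

    state = {}
    for ack in (100, 200, 200, 200, 200):
        if on_ack(state, ack):
            print("loss event: three duplicate ACKs for", ack)
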
Having considered how congestion is detected, let's next consider the more optimistic case when the network is congestion-free, that is, when a loss event doesn't occur. In this case, acknowledgments for previously unacknowledged segments will be received at the TCP sender. As we'll see, TCP will take the arrival of these acknowledgments as an indication that all is well (segments being transmitted into the network are being successfully delivered to the destination) and will use acknowledgments to increase its congestion window size (and hence its transmission rate). Note that if acknowledgments arrive at a relatively slow rate (e.g., if the end-end path has high delay or contains a low-bandwidth link), then the congestion window will be increased at a relatively slow rate. On the other hand, if acknowledgments arrive at a high rate, then the congestion window will be increased more quickly. Because TCP uses acknowledgments to trigger (or clock) its increase in congestion window size, TCP is said to be self-clocking.
We're now in a position to consider the details of the algorithm that a TCP sender uses to regulate its sending rate as a function of perceived congestion. This algorithm is the celebrated TCP congestion-control algorithm. The algorithm has three major components: (1) additive-increase, multiplicative-decrease, (2) slow start, and (3) reaction to timeout events.

Additive-Increase, Multiplicative-Decrease

The basic idea behind TCP congestion control is for the sender to reduce its sending rate (by decreasing its congestion window size, CongWin) when a loss event occurs. Since other TCP connections that are passing through the same congested routers are also likely to be experiencing loss events, they, too, are likely to reduce their sending rates by decreasing their own values of CongWin. The overall effect, then, is for sources with paths through congested routers to reduce the rate at which they send traffic into the network, which in turn should relieve congestion at the congested routers. But by how much should a TCP sender reduce its congestion window when a loss event occurs? TCP uses a so-called "multiplicative decrease" approach, halving the current value of CongWin after a loss event. Thus, if the value of a TCP sender's CongWin is currently 20 Kbytes and a loss is detected, CongWin is cut in half to 10 Kbytes. If another loss event occurs, CongWin is further reduced to 5 Kbytes. The value of CongWin may continue to drop, but it is not allowed to drop below 1 MSS. (This is a big-picture description of how the congestion window changes after a loss event. In truth, things are a bit more complicated. As we'll soon see, the value of CongWin actually drops to 1 MSS after a timeout event, and then quickly ramps back up to half its previous value.)
Having described how a TCP sender decreases its sending rate in the face of detected congestion, it is natural to consider next how TCP should increase its sending rate when it perceives no congestion, that is, when ACKs arrive for previously yet-to-be-acknowledged data. The rationale for an increase in rate is that if there is no detected congestion, then there is likely to be available (unused) bandwidth that could additionally be used by the TCP connection. In such circumstances, TCP increases its congestion window slowly, cautiously probing for additional available bandwidth in the end-to-end path. The TCP sender does this by incrementing CongWin a little each time it receives an acknowledgment, with the goal of increasing CongWin by 1 MSS every round-trip time [RFC 2581]. This can be accomplished in several ways; a common approach is for the TCP sender to increase its CongWin by MSS·(MSS/CongWin) bytes whenever a new acknowledgment arrives. For example, if MSS is 1,460 bytes and CongWin is 14,600 bytes, then 10 segments are being sent within an RTT. Each arriving ACK (assuming one ACK per segment) increases the congestion window size by 1/10 MSS, and thus, after acknowledgments for all 10 segments have been received, the value of the congestion window will have increased by MSS, as desired.
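
The two window adjustments just described fit in a few lines. The sketch below shows only the additive-increase and multiplicative-decrease rules (slow start and the timeout reaction are described separately); the starting window and MSS match the worked example above.

    # Sketch of the AIMD window rules described above (congestion-avoidance
    # behavior only; slow start and the timeout reaction are covered later).

    MSS = 1460.0  # bytes

    def on_new_ack(cong_win):
        """Additive increase: ~1 MSS per RTT, spread over the window's ACKs."""
        return cong_win + MSS * (MSS / cong_win)

    def on_loss(cong_win):
        """Multiplicative decrease: halve the window, never below 1 MSS."""
        return max(cong_win / 2, MSS)

    cwnd = 14_600.0                 # ten segments in flight
    for _ in range(10):             # one window's worth of ACKs arrives
        cwnd = on_new_ack(cwnd)
    print(round(cwnd))              # ~16,000: grew by roughly one MSS

    print(on_loss(20_000.0))        # 10000.0: halved after a loss event

Note that the per-ACK increments shrink slightly as the window grows during the round, so the window grows by just under one full MSS per RTT; the 1-MSS-per-RTT figure in the text is the intended approximation.
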
In summary, a TCP sender additively increases its rate when it perceives that the end-end path is congestion-free, and multiplicatively decreases its rate when it detects (via a loss event) that the path is congested. For this reason, TCP congestion control is often referred to as an additive-increase, multiplicative-decrease (AIMD) algorithm. The linear increase phase of TCP's congestion-control protocol is known as congestion avoidance.