rdist

March 24, 2008

A TCP stack swapout solves nothing

Filed under: Protocols — Nate Lawson @ 10:49 am

George Ou wrote a long post today about bandwidth fairness and congestion control. It was inspired by this draft article by Bob Briscoe, which spends a lot of time discussing the game theory behind bandwidth usage. While George’s post is interesting, it makes the wrong leap from “there are some problems” to “the Internet is broken, time to update all TCP stack implementations.” Since I enjoy protocol design, I’ve collected some historical documents that may be helpful to review if you need a refresher.

The claim is that there are two situations Van Jacobson’s TCP congestion control algorithm does not address: a single user opening multiple TCP connections, and persistence (a connection being kept open and reused for multiple transfers). Supposedly, peer-to-peer software exploits these two loopholes to use more than its fair share of bandwidth, which is why ISPs like Comcast have resorted to preventing BitTorrent seeding.

First, let’s analyze what these behaviors actually achieve. Multiple TCP connections will attempt to equally share bandwidth at their mutual bottleneck, whether initiated by the same user or by multiple hosts. For example, two connections from my PC to a website will result in each getting 50% of my available bandwidth, assuming my PC’s Internet connection is the bottleneck. It doesn’t matter whether each TCP connection has the same destination or not since it’s the source (client) that backs off when the bottleneck is hit.
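The equal sharing described above falls out of TCP’s additive-increase/multiplicative-decrease (AIMD) dynamics: each flow ramps up steadily and all flows cut their rate when the bottleneck fills. Here’s a toy simulation (function name and parameters are my own illustration, not any real stack’s code) showing two flows that start wildly unequal converging toward an even split:

```python
# A toy AIMD model (parameters are mine, not from any real TCP stack):
# each flow adds a fixed increment per round, and every flow halves
# its rate when the bottleneck's capacity is exceeded.
def aimd(rates, capacity, rounds=200, increase=1.0):
    rates = list(rates)
    for _ in range(rounds):
        rates = [r + increase for r in rates]   # additive increase
        if sum(rates) > capacity:               # congestion event
            rates = [r / 2 for r in rates]      # multiplicative decrease
    return rates

# Two flows that start at a 90/10 split end up with nearly even shares.
a, b = aimd([90.0, 10.0], capacity=100)
print(round(a, 1), round(b, 1))
```

Each congestion event halves the gap between the two flows, which is why the starting allocation doesn’t matter in the long run.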

Now, consider the case where a neighborhood gateway is the bottleneck, with two houses competing for it. If each opens one connection, each gets 50% of the gateway’s bandwidth. But if one house opens a second connection, it gets about 66% of the gateway’s bandwidth (two of the three competing connections). This is the only case that matters, since the previous example would only result in “unfair” allocation among a single user’s own applications, and there is little incentive to do that.
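The arithmetic behind both examples is just counting connections: under per-connection fair sharing, a user’s slice of the bottleneck is their fraction of the open connections. A trivial sketch (my own illustration):

```python
# With per-connection fair sharing at a bottleneck, a user's share is
# simply their fraction of all competing connections (my illustration).
def user_share(user_conns, other_conns):
    """Fraction of bottleneck bandwidth one user gets, assuming TCP
    divides the link equally among all competing connections."""
    total = user_conns + other_conns
    return user_conns / total

# One connection per house: a 50/50 split.
print(user_share(1, 1))            # 0.5
# One house opens a second connection: two of three shares, ~66%.
print(round(user_share(2, 1), 2))  # 0.67
```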

The second situation, where a connection is kept open and reused, has little impact and should be left out of the discussion. This behavior takes advantage of the fact that most TCP implementations are unnecessarily forgetful. If you close a connection to a website and then immediately reopen it, all the bandwidth estimation statistics are discarded and must be rebuilt from scratch via “slow start”. That’s wasteful, since it’s unlikely the network changed much in the split second between close and reopen. Some implementations cache bandwidth estimates across connection restarts, which has the same effect as keeping a persistent connection open.
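To see why restarting is wasteful, recall that slow start roughly doubles the congestion window every round trip until it rediscovers the available bandwidth. A minimal sketch (the function is my own illustration, not any stack’s code) of how many round trips that ramp-up costs:

```python
# A minimal sketch (mine, not any real stack's code) of the slow start
# ramp-up: the congestion window roughly doubles each round trip.
def rtts_to_reach(target_cwnd, initial_cwnd=1):
    """Round trips needed for slow start to grow the congestion
    window from initial_cwnd to target_cwnd, doubling per RTT."""
    cwnd, rtts = initial_cwnd, 0
    while cwnd < target_cwnd:
        cwnd *= 2
        rtts += 1
    return rtts

# A fresh connection takes 7 RTTs to rediscover a 100-segment window;
# a persistent (or cache-warmed) connection spends zero.
print(rtts_to_reach(100))  # 7
```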

In any case, this has nothing to do with fairness. As long as congestion control is working, the persistent connection will back off exponentially as soon as a newly started connection (say, from another house) causes congestion. This situation is exactly what led Jacobson to design his algorithm in the first place.

So we’re left with only one situation where there is any concern about unfairness: a user with multiple TCP connections sharing a bottleneck with another user. The simplest solution is to put a bandwidth cap at the entry point to the network. This would push the bottleneck back to the individual user. This was the assumption behind the original TCP congestion control design. However, most networks aren’t designed this way for business reasons. Instead, individual connections are made as fast as possible and the shared gateway is over-subscribed.
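The “cap at the entry point” idea is typically implemented with something like a token bucket, which meters each user’s traffic before it reaches the shared gateway. A toy sketch (class name, rates, and numbers are my own, not any ISP’s configuration):

```python
# A toy token bucket (mine, not any ISP's implementation): tokens
# refill at a fixed rate, and a packet is admitted only if a token
# is available, capping the user's sustained rate at the entry point.
class TokenBucket:
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def allow(self, now, size=1):
        """Refill tokens for the elapsed time, then admit the packet
        if enough tokens remain."""
        elapsed = now - self.last
        self.tokens = min(self.burst, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

tb = TokenBucket(rate=10, burst=5)  # 10 packets/sec, burst of 5
# Try to send 100 packets in one second; only the burst plus the
# refill rate gets through.
sent = sum(tb.allow(t / 100) for t in range(100))
print(sent)
```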

There’s a much simpler solution that fits the current business model: random early detection, or “RED”. With RED, a packet is randomly dropped (or marked with a congestion bit) with a probability that increases as the network gets more congested. The nice thing about probability is that it naturally targets the heavier user. If your house is sending twice as many packets as mine and the shared gateway drops each packet with probability 33%, you will lose twice as many packets as I do, on average.
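This proportionality is easy to demonstrate: if the gateway drops each packet independently with the same probability, expected losses scale with how much each sender transmits. A toy model (all names and numbers are mine):

```python
import random

# A toy model (assumptions mine) of RED's fairness property: with a
# uniform per-packet drop probability, expected drops scale with how
# many packets each sender pushes through the congested gateway.
def red_drops(packets_sent, drop_prob, rng):
    """Count how many of a sender's packets a RED-style gateway drops."""
    return sum(1 for _ in range(packets_sent) if rng.random() < drop_prob)

rng = random.Random(42)            # fixed seed for repeatability
heavy = red_drops(20000, 1 / 3, rng)  # house sending twice as much
light = red_drops(10000, 1 / 3, rng)
# The heavier sender absorbs roughly twice the drops.
print(round(heavy / light, 1))
```

No per-sender state is needed for this to work; the proportionality comes purely from the per-packet coin flip.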

The nice thing about this solution is that ISPs are already installing the necessary equipment. The TCP RST spoofing approach Comcast is already using requires equipment to be able to see the entire connection. So they certainly can implement RED based on IP address since that only requires looking at the header, not the contents of the TCP stream. Even if high-end users have multiple IP addresses, those can be grouped into a common pool based on PPPoE/RADIUS login information.

RED addresses another huge problem that George Ou tries unsuccessfully to design around. Since the TCP stack is client-side software, it can be patched or configured to exploit congestion control. Apps that use pure UDP instead of TCP to avoid congestion control entirely are a simple example. Any new scheme that depends on the client software being friendly will have this same problem. We all know you can’t trust the client.

So if RED is simpler than TCP RST spoofing and can be rolled out the same way, why would ISPs choose the more complicated option? I think the answer is simple: control. RED is completely fair since it pays no attention to the contents of the packets or streams. It targets users that are network hogs no matter what applications they are using. ISPs are all about control, the more fine-grained, the better. They want the ability to charge for how the network is being used, in addition to how much it is used.

As long as ISPs want to charge per-application instead of per-user rates, all these proposals are pointless. RED allows an ISP to over-subscribe the gateway and enforce fairness among users. They can even give users that pay extra more bandwidth by assigning them a more favorable drop ratio. But RED doesn’t discriminate how that bandwidth is used, so ISPs will continue deploying equipment that does discriminate, no matter what George Ou or Bob Briscoe say.

3 Comments

  1. While his prescription may be off, it’s nice to see someone with a soapbox making the case that there *IS* a problem to be solved. The loudest voices on this issue assume bad intentions on the part of Comcast and others and dismiss the technical concerns as distractions from the free speech concerns – of which there are very few actual examples. We need more folks who understand the technical issues making more noise.

    Comment by w0zz — March 25, 2008 @ 8:19 am

  2. Quote:
    So they certainly can implement RED based on IP address since that only requires looking at the header, not the contents of the TCP stream. Even if high-end users have multiple IP addresses, those can be grouped into a common pool based on PPPoE/RADIUS login information.

The equipment does not need to examine the IP header, or to group a user with multiple IP addresses. With (W)RED, packets are dropped probabilistically. The more packets a user sends, the higher the probability that one of their packets will be dropped. It doesn’t matter if they have multiple IP addresses or not.

    Quote:
    As long as ISPs want to charge per-application instead of per-user rates, all these proposals are pointless. RED allows an ISP to over-subscribe the gateway and enforce fairness among users. They can even give users that pay extra more bandwidth by assigning them a more favorable drop ratio. But RED doesn’t discriminate how that bandwidth is used, so ISPs will continue deploying equipment that does discriminate, no matter what George Ou or Bob Briscoe say.

    The ISPs can use WRED (Weighted RED) to have different drop probabilities for different classes of traffic based on QoS marking. The QoS marking can be based off of the applications that the users are using. With WRED the service providers should have the control that you state they want. It does allow them to discriminate how the bandwidth is used.

    Comment by Ben C — March 26, 2008 @ 8:39 pm

  3. Ben C: thanks for the response and hi w0zz!

You’re right that the IP address does not need to be examined to implement RED. I added that in to show that all of a user’s connections could still be grouped (i.e. WRED) and RED did not have to be blind. I expected people to argue RED alone wasn’t a solution since it wouldn’t allow differentiated service. I wrote about WRED in my second post before I saw your comment, but you’re definitely right.

    Comment by Nate Lawson — March 28, 2008 @ 9:54 am

