Late last week, Comcast officially disclosed to the FCC details of its network management practices which have been a subject of considerable discussion here on CircleID. (My thanks to Threat Level from Wired.com for providing a convenient copy of Comcast's "Attachment A" in which this disclosure is made.) There's not a lot of startling disclosure in this document, but it does provide some useful concrete facts and figures. I'll quote the more interesting parts of the document here, and offer comment on it. All citations refer to "Attachment A: Comcast Corporation Description of Current Network Management Practices" unless otherwise specified.
Comcast has approximately 3300 CMTSes deployed throughout our network, serving our 14.4 million HSI subscribers. [p.2]
These figures yield an average of approximately 4360 subscribers per CMTS.
Comcast's current congestion management practices focus solely on a subset of upstream traffic. [p.3]
More specifically, they focus on the "upload" channels of five particular file-sharing protocols, discussed later.
[I]n order to mitigate congestion, Comcast determined that it should manage only those protocols that placed excessive burdens on the network, and that it should manage those protocols in a minimally intrusive way utilizing the technology available at the time. More specifically, in an effort to avoid upstream congestion, Comcast established thresholds for the number of simultaneous unidirectional uploads that can be initiated for each of the managed protocols in any given geographic area; when the number of simultaneous sessions remains below those thresholds, uploads are not managed. [p.3-4]
By "protocol", the document here specifically means "application protocol". Comcast's approach to network management was thus to determine which applications were responsible for the most network load, and then manage those applications. The document offers nothing in the way of rationale for this approach: one might ask why they did not determine which customers were responsible for the most network load, and manage those customers.
The specific equipment Comcast uses to effectuate its network management practices is a device known as the Sandvine Policy Traffic Switch 8210 ("Sandvine PTS 8210"). [p.4]
Perhaps the decision to manage applications was born of an overriding business decision to use a particular vendor's appliances, and the scope of possible technical approaches was thus limited a priori. The document does not describe how Comcast came to choose this appliance, given the range of alternatives that exist.
On Comcast's network, the Sandvine PTS 8210 is deployed "out-of-line" (that is, out of the regular traffic flow) and is located adjacent to the CMTS. ... A "mirror" replicates the traffic flow that is heading upstream from the CMTS without otherwise delaying it and sends it to the Sandvine PTS 8210… [p.5]
There is one PTS 8210 per CMTS, except in some cases where "two small CMTSes located near each other may be managed by a single Sandvine PTS 8210." [p.5] The average number of subscribers per PTS 8210 is thus somewhere between one and two times the average number of subscribers per CMTS, or between approximately 4360 and 8730. These figures are significant, because each appliance's limits on simultaneous uploads apply across its entire pool of customers, not per subscriber.
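The arithmetic behind those per-CMTS and per-PTS figures is easy to verify, using only the numbers quoted from Attachment A:

```python
subscribers = 14_400_000   # HSI subscribers [p.2]
cmtses = 3_300             # deployed CMTSes [p.2]

per_cmts = subscribers / cmtses
# One PTS 8210 serves one or, occasionally, two CMTSes [p.5],
# so the per-appliance figure lies between 1x and 2x the per-CMTS average.
per_pts_high = 2 * per_cmts

print(round(per_cmts))      # ≈ 4364 subscribers per CMTS
print(round(per_pts_high))  # ≈ 8727 subscribers per PTS, at most
```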
[T]he Sandvine PTS uses technology that processes the addressing, protocol, and header information of a particular packet to determine the session type. [p.7]
Note that "header" information includes application layer elements, as clarified by Diagram 3 [p.8], so this is "deep packet inspection". Roughly speaking, protocol control messages are subject to scrutiny, but the bulk data so transported is not. Such a distinction between data and metadata is a relative one, and Comcast's cut-off point for analysis is a little hazy in parts. For example, Diagram 3 [p.8] notes that an "SMTP address" is subject to scrutiny, but "email body" is not. It's not immediately clear whether "email body" includes all message data, or just the part beyond the header fields.
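To make the idea of application-layer inspection concrete, here is a deliberately naive signature check of the general kind a DPI classifier might apply. The handshake prefix below is defined by the BitTorrent protocol itself, but the function is purely illustrative; it is not a description of how the Sandvine appliance actually classifies traffic:

```python
def looks_like_bittorrent_handshake(payload: bytes) -> bool:
    """Illustrative DPI-style check: a BitTorrent peer handshake begins
    with the length byte 19 followed by "BitTorrent protocol"."""
    return payload[:20] == b"\x13BitTorrent protocol"

print(looks_like_bittorrent_handshake(
    b"\x13BitTorrent protocol" + b"\x00" * 48))   # True
print(looks_like_bittorrent_handshake(
    b"GET / HTTP/1.1\r\n"))                        # False
```

Note that even this trivial check reads past the TCP header into application data, which is what distinguishes deep packet inspection from ordinary port- or address-based filtering.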
Deep packet inspection of the sort used here is not "minimally intrusive" [p.4] compared to some other approaches, but it may have been the least intrusive method of management available given a sufficient number of other arbitrary constraints.
[F]ive P2P protocols were identified to be managed: Ares, BitTorrent, eDonkey, FastTrack, and Gnutella. [p.8]
Note that Ares was a late entry (November 2007) [p.8], whereas management of the others commenced at roll-out in 2006 [p.5].
For each of the protocols, a session threshold is in place that is intended to provide for equivalently fair access between the protocols, but still mitigate the likelihood of congestion that could cause service degradation for our customers. [p.8-9]
Thresholds differ significantly between applications due to their inherently varied characteristics. See Table 1 [p.10] (but note a possible typo: the ratio for eDonkey is given as "~.3:1", but the actual ratio as computed from the other columns is "~1.3:1"). BitTorrent unidirectional flows have the lowest threshold, permitting only eight per PTS 8210. Bear in mind that each such device is managing thousands of customers, but that relatively few BitTorrent flows are unidirectional uploads (according to Table 1 [p.10]).
When the number of unidirectional upload sessions for any of the managed P2P protocols for a particular Sandvine PTS reaches the pre-determined session threshold, the Sandvine PTS issues instructions called "reset packets" that delay unidirectional uploads for that particular P2P protocol in the geographic area managed by that Sandvine PTS. The "reset" is a flag in the packet header used to communicate an error condition in communication between two computers on the Internet. As used in our current congestion management practices, the reset packet is used to convey that the system cannot, at that moment, process additional high-resource demands without creating risk of congestion. [p.10]
The above may be a true representation of Comcast's network management intentions in sending these reset segments, but the practice is in conflict with the TCP protocol specification. For one thing, only the two TCP endpoints (neither of which is the Sandvine PTS in this case) are considered to be participants in the protocol. If that isn't decisive in and of itself, the TCP specification has the following simple remarks on the subject of resets.
As a general rule, reset (RST) must be sent whenever a segment arrives which apparently is not intended for the current connection. A reset must not be sent if it is not clear that this is the case. [RFC 793, p.36]
Put simply, the TCP RST flag is not and was never intended to be a means of managing congestion. It is intended to convey a specific error condition, and the Sandvine appliances are issuing the message inappropriately so as to produce the side effects of this error condition as a means to influence application behaviour. The network management practices are thus in direct violation of basic Internet standards, which is distinctly unwelcome behaviour. It might be an understandable (if inelegant) strategy in an environment where the network provider sets policy as to what applications are permitted, such as a corporate network, but it is inappropriate for a general Internet Service Provider. This was the basis for many of the howls of protest when Comcast's network management practices were first discovered empirically.
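For readers unfamiliar with the mechanics: the RST "flag" is a single bit in the TCP header, which is why a middlebox that can forge packets can trigger the error condition at will. A minimal sketch of where that bit lives (standard 20-byte TCP header layout; the example header is hand-built for illustration, not captured traffic):

```python
import struct

def tcp_rst_set(tcp_header: bytes) -> bool:
    # In a TCP header, the flags occupy byte 13 (0-indexed);
    # RST is the bit with value 0x04.
    return bool(tcp_header[13] & 0x04)

# Build a bare 20-byte TCP header with only RST set:
# src port 80, dst port 12345, seq/ack 0, data offset 5 words,
# flags 0x04 (RST), window/checksum/urgent pointer zeroed.
hdr = struct.pack("!HHIIBBHHH", 80, 12345, 0, 0, 5 << 4, 0x04, 0, 0, 0)
print(tcp_rst_set(hdr))  # True
```

A receiving TCP stack that sees this bit on an established connection tears the connection down immediately, which is exactly the side effect the Sandvine appliance exploits.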
[A]s Comcast previously stated and as the Order now requires, Comcast will end these protocol-specific congestion management practices throughout its network by the end of 2008. [p.11]
I hope that their future network practices will, as a first priority, aim to give each customer a fair share of the available network resources, without discriminating on the basis of the applications that the customer chooses to use. I also hope that these practices will uphold long-standing principles of Internet traffic management, rather than use inelegant side effects of lower-layer protocol control flags to manipulate specific applications.