
No Fines for Comcast

Note: this is an update to my earlier story, which incorrectly said that the AP reported Chairman Martin was seeking to impose “fines” on Comcast. In fact, the story used the word “punish” rather than “fine,” and a headline writer at the New York Times escalated it to “penalty”: “F.C.C. Chairman Favors Penalty on Comcast.” (I won’t quote the story because I’m a blogger and the AP is the AP, so click through.) Much of the initial reaction to the story was obviously colored by the headline.

Martin’s concept of punishment is to order the company to do what it had already told the public it was doing, phasing out one system of traffic management in favor of another one. It’s a non-penalty punishment, akin to forcing a misbehaving child to eat the candies she’s already enjoying. Now back to our story.

At a press conference today, FCC Chairman Kevin Martin said he’s not seeking to fine Comcast. Rather, he will simply impose some reporting requirements on the company and order it to do what it has already started to do: phase out the current traffic management system in favor of an application-agnostic one.

Confusion continues to surround this story. Cnet recently reported that AT&T and Verizon spokesmen, on an NXTComm panel, recommended that the FCC take action against Comcast; Cnet had to correct the story after the corporate spokesmen clarified that they meant the FCC should take some action that didn’t punish Comcast but did establish that the agency has jurisdiction over broadband management practices. The original AP story on Comcast, written by Peter Svensson, created a mistaken impression as well, since it presented a very artificial test as a typical P2P transaction. Svensson is not the reporter who wrote the wire story on the pending FCC action.

The empty “punishment” allows both sides to claim victory.

One of the more intelligent commentaries on that report was from the Bits blog at the New York Times:

Comcast, the nation’s largest cable provider, admitted that it was slowing down certain traffic but claimed it was legitimately managing its network so that a few bandwidth hogs didn’t bog things down for everyone else.

Still, in response to critics, the company decided to work with BitTorrent and experiment with other traffic-management techniques to handle the loads on its network.

The dirty little secret of the Internet industry is that all the providers use software tools to manage their network traffic. Comcast got caught and may have been more aggressive than some rivals, but it’s certainly not alone.

So why should Comcast be punished for engaging in a practice that’s necessary for network stability, doesn’t violate any actual rules, and helps the vast majority of its customers? There’s no rational reason for the Commission to rule this way, and if they do the only real result will be a court challenge that throws the order in the trash can.

I’ve said from the beginning that the one aspect of the Comcast network management incident that’s questionable is the lack of disclosure in the early days of the controversy, but all that warrants is a slap on the wrist. The real problem with disclosure is that no carrier can really explain how it manages its network in a way the typical consumer can understand.

Levying fines for necessary network management practices would be ridiculous, and the recent Free Press filing with the FCC shows that they don’t actually believe they have a legal leg to stand on. The net neutrality and reasonable management rules are impossibly vague. I’m not a lawyer, but there has to be a common-law principle against punishing people for violating secret laws.

Free Press is enjoying the moment, according to reporter Nate Anderson of the Ars Technica blog:

In a statement sent out late Thursday night, Ammori summed up Free Press’ achievement in typical fashion. “Nine months ago, Comcast was exposed for blocking free choice on the Internet. At every turn, Comcast has denied blocking, lied to the public and tried to avoid being held accountable. We have presented an open and shut case that Comcast broke the law. The FCC now appears ready to take action on behalf of consumers. This is an historic test for whether the law will protect the open Internet. If the commission decisively rules against Comcast, it will be a remarkable victory for organized people over organized money.”

Free Press has an annual budget of $5 million, some of which it has spent organizing demonstrators at FCC hearings.

At the end of the day, this is much more a question about jurisdiction than it is about policy. Martin is the lapdog of the telcos and an avowed enemy of the cable industry. The telcos want to prevent Congress from passing either the Markey or the Conyers bill, as both lay out strict, easily-enforceable rules that are fundamentally ludicrous and unworkable. They figure the best way to block these bills is for the FCC to assert that it already has the authority to guarantee sweetness and light on the Internet through its ancillary authority and its Policy Statement (the “four freedoms”). Cable companies don’t trust Martin to give them a fair shake, so they deny that the Policy Statement is a rule, which, by its own terms, it isn’t.

The cable companies have a better feel for the tactical side of the question than the telcos do at the moment, as they’re the ones who’ve been taking the heat. Letting the FCC sanction Comcast will only embolden the neutrality mob, and they’ll damn sure push on to Congress regardless of the outcome in the FCC. The only way to beat them is to win every battle and take the wind out of their sails.

So, as a jurisdictional matter, this can only come out in two ways: either the FCC shows it can kick ass and take names, or Congress gets into the act with some very misguided legislation that’s going to cause five years of misery until the courts overturn it. So Comcast is in the position of a kid brother who’s been blamed for stealing the cookies his older brother ate: it can take a whipping, or be grounded for a week. I’d personally take the whipping, but neither prospect is appealing.

It’s unfortunate that the matter has come to this pass, but there’s a lesson in it about responding to false allegations quickly and strongly: the Swiftboat Lesson, once again. Free Press’ complaint is an ocean of mendacity, and that sort of thing needs to be nipped in the bud. The next time, I suspect the ISPs will be better prepared. And make no mistake: this is a war, not a battle, and there will be a next time, and a time after that, and so on.

FYI, Gigi Sohn of Public Knowledge admits today that her group will seek legislation anyway:

“At the same time, this case is limited in scope to one company and to one type of behavior. Even if the Commission ultimately issues an order against Comcast, there is still a need for legislation to prohibit discrimination by telephone and cable companies while preserving the rights of Internet users and companies that do business on the Internet.

You can’t please these people.

By Richard Bennett, Consultant

Richard is co-creator of the Ethernet and Wi-Fi standards.


Comments

Correction requested John Dunbar  –  Jul 12, 2008 2:44 PM

This is not correct: “The AP earlier reported that Kevin Martin was going to circulate an order to the FCC fining Comcast.”

The order is an enforcement action, it does not include a recommendation for a fine, nor did I report that. As for the first story that got facts “ass backwards”? Details would be appreciated.

John Dunbar

I'm constantly stunned at how wolfkeeper  –  Jul 12, 2008 11:19 PM

I’m constantly stunned at how badly Richard Bennett fails to get this stuff.

Comcast are currently using deep packet inspection to try to enforce network non-neutralities.

It’s a daft, losing strategy.

The reason it fails is that any company is outnumbered by its customers (by sheer economic necessity). That means the customers run rings around them. Customers simply change their packets until the DPI fails to properly categorise them. Only a strategy which *cannot* be evaded can hope to succeed.

Comcast know this; this is why they’re abandoning their non neutral strategy.

DPI works if you’re working *with* the customers. If you use DPI to make VOIP packets high priority so that VOIP works well, then customers don’t mind that, and if you tell them how many hours of VOIP per week they can use, then your network doesn’t break, because you can provision it accordingly; and if they go above that, you downgrade their VOIP to best effort. Everybody is happy.
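To make that concrete, here’s a toy sketch of the quota idea (the names, the quota, and the stand-in “DPI” predicate are all made up; no vendor’s actual system looks like this):

```python
# Toy quota-gated VOIP prioritization: classify VOIP, expedite it until
# the subscriber's weekly allowance runs out, then fall back to best
# effort. Names, the quota, and the "DPI" predicate are all invented.
WEEKLY_VOIP_QUOTA_SECS = 10 * 3600   # say, ten hours of VOIP per week

class Subscriber:
    def __init__(self):
        self.voip_secs_used = 0.0    # reset at the start of each week

def is_voip(packet: dict) -> bool:
    # Stand-in for the real classifier: RTP-looking UDP counts as VOIP.
    return packet.get("proto") == "udp" and packet.get("rtp", False)

def queue_for(packet: dict, sub: Subscriber, secs: float) -> str:
    if is_voip(packet) and sub.voip_secs_used < WEEKLY_VOIP_QUOTA_SECS:
        sub.voip_secs_used += secs
        return "priority"            # within quota: expedited queue
    return "best-effort"             # over quota, or not VOIP at all

sub = Subscriber()
print(queue_for({"proto": "udp", "rtp": True}, sub, 0.02))  # -> priority
```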

It’s that simple: it’s about agreeing with your customers what you’re going to provide, and providing it. If you, Richard Bennett, Network Architect (sic), don’t know how to do that, then you need to leave it to those who do.

It's advocacy The Famous Brett Watson  –  Jul 13, 2008 2:56 PM

Ian Woollard said:

I'm constantly stunned at how badly Richard Bennett fails to get this stuff.
I think you have a problem with your expectations. Richard is clearly an advocate or apologist for Comcast: his knowledge and competence are being applied to the task of making Comcast's position seem reasonable, and its opponents' positions seem unreasonable. Like a defence lawyer, his conclusions are quite foregone: "my client is innocent". The only potentially interesting part is how he reaches that conclusion from the facts themselves -- or at least the claims he chooses to present as fact in argument. This activity is normally associated with lawyers, spin doctors, and fanboys. This isn't a court, and I have no reason to think that Richard is a lawyer. It's less clear whether he's acting in the capacity of spin doctor or fanboy. I haven't seen any disclaimer or disclosure from him as relates to a possible financial interest -- not that it matters much, since advocacy is advocacy whether it's paid or not.

Black helicopters Richard Bennett  –  Jul 13, 2008 6:06 PM

I have all sorts of financial interests in this debate, the primary one being that I make my living doing network engineering and I don't want my profession equated with dealing drugs. That's why I've been active in the debate since 2003, three employers ago. The essence of the net neutrality debate is that we have a group of lawyers and law professors on one side who want networks to work a certain way (based on an over-simplified view of the Internet) and on the other we have engineers and business people who understand how they actually do work and want to retain the freedom to improve the technology. In the middle are a bunch of average citizens who don't get what all the fuss is about but just want services that are fast, fair, and affordable. There are a lot of livelihoods at stake in this debate, no less on the side of the people who stir up fear and anxiety than among anybody else.

Richard Bennett said:...I make my The Famous Brett Watson  –  Jul 14, 2008 1:38 AM

Richard Bennett said:

...I make my living doing network engineering and I don't want my profession equated with dealing drugs.
For one thing, I don't recall having seen that parallel drawn before. For another thing, it's a ridiculous assertion and I don't think you ought to grace it with any kind of response beyond a derisive snort. The worst that you could say about some engineers is that they wilfully collaborate with their employers in efforts which are designed to maximise profits without being in the genuine interests of their customers (except by a twisted argument that only a spin-doctor could love). The worst such engineers are those who turn a blind eye to customer safety issues, like dangerously defective cars. Network engineers aren't generally in this category, so it's really not appropriate to accuse a conniving network engineer of being anything worse than a mendacious weasel.
The essence of the net neutrality debate is that we have a group of lawyers and law professors on one side who want networks to work a certain way (based on an over-simplified view of the Internet) and on the other we have engineers and business people who understand how they actually do work and want to retain the freedom to improve the technology.
Sometimes law professors get it right, and sometimes engineers are guilty of their own egregious oversimplifications. An example of that latter point springs immediately to mind, in fact.

I'm yet to read a Richard Bennett  –  Jul 14, 2008 1:53 AM

I'm yet to read a comment from you, Brett, that puts forward a serious argument. Making personal attacks may be amusing, but it doesn't advance the debate.

The law professors who started the net neutrality campaign make a fundamental mistake about the relationship between network theory and practice. The Internet was created to facilitate experimentation in packet networking design, and its primary mode of operation has been guided from the inception by the results of the experiment. We've formulated theories, carried out experiments, and recorded the results in RFCs, especially in Best Current Practice documents. As new applications come on-line, new challenges appear for the transport and network protocols, and revisions of practice come about. The lawyers tend to view RFCs as they would books of legal code, a source of precedent that should bind present and future behavior by network operators; as a compass, in effect. But for those of us who are working engineers, the RFCs are more like a seismograph that records where we've been.

The experiment is ongoing, and will most likely never come to an end as long as the Internet is a functioning, operational entity. We currently have to deal with the issue of P2P applications which, by design, consume great amounts of bandwidth for long periods of time. This is a different pattern of consumption than we've seen previously, and it needs a novel management response. The IETF is working on the problem in the P2PI working group. There is no RFC that says how P2P should be handled right now, so a large set of options are on the table.

The lawyers never comment on the fact that this pattern of consumption is unique; rather, they blindly criticize management of over-consumption as if it took place in a vacuum. A balanced debate needs input from both sides, and the input needs to be intelligent. I'm trying to achieve that balance. What are you doing to advance the debate on Internet management?

So far as I can wolfkeeper  –  Jul 13, 2008 6:20 PM

Comment removed by CircleID Admin as per Codes of Conduct.

Mark Cuban said it best Richard Bennett  –  Jul 13, 2008 10:39 AM

In a recent Blog Maverick post, Mark Cuban said:

I just want to put it out there to save everyone and anyone who deals with me time. If at any point in time you utter the words “Just Don’t Get It” or “Just Doesn’t Get It” in any conversation with me, I will not do business with you.

If you try to justify your business, idea, proposal or whatever and in the course of conversation you utter these words, you have just proven to me that you are lazy. That rather than discussing the merits of another position, you think I’m stupid enough to dismiss that position because you want me to.

If you truly understand your topic, it’s really easy to stand behind your position with facts and well-thought-out concepts. If you have no idea what you are talking about, the other side “just doesn’t get it.”

If you have a serious point to argue, Ian (AKA Wolfkeeper, etc.), make an argument for it instead of hiding behind personal attacks, slurs, and lazy insults.

In fact, DPI has already given way to Traffic Stream Analysis in the packet classification systems used by Comcast and the other ISPs. These systems operate on traffic characteristics that can’t readily be spoofed or obfuscated, because they represent stream parameters: number of streams, stream lifetime, and volume of data in the streams. The so-called DPI systems (which never were all that deep) triggered on port numbers and protocol IDs, and were simply an easy way of classifying traffic for prioritization purposes; so easy, in fact, that they’ve been in universal use in interior routers since the late ’80s, as they’re essential to keep Jacobson’s Algorithm from its annoying tendency to cycle.
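To illustrate the difference, a stream-based classifier keys on flow statistics rather than payload bytes. Here’s a toy sketch (the thresholds and names are invented for illustration, not taken from any vendor’s product):

```python
# Toy stream-based classification: flag users by flow statistics
# (count, lifetime, volume) rather than payload inspection. Spoofing
# port numbers or payload bytes doesn't change these numbers.
# Thresholds are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Flow:
    lifetime_secs: float   # how long the stream has been alive
    bytes_moved: int       # total volume carried so far

def looks_like_p2p(user_flows: list[Flow]) -> bool:
    heavy = [f for f in user_flows
             if f.lifetime_secs > 300 and f.bytes_moved > 10_000_000]
    return len(user_flows) > 50 and len(heavy) > 20

# A user with 30 long, heavy streams plus 30 short ones gets flagged.
flows = [Flow(600, 50_000_000) for _ in range(30)] + \
        [Flow(5, 2_000) for _ in range(30)]
print(looks_like_p2p(flows))  # -> True
```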

Nonsense posing and philosophizing may wash in Wikipedia-land, but it’s not good enough to cut the mustard on this site, Ian.

Traffic analysis? Good thing there's wolfkeeper  –  Jul 13, 2008 5:46 PM

Traffic analysis? Good thing there are no known ways to tackle that, like 'traffic flow security'. That was invented in WWII. What could go wrong? Really, as soon as you start treating your own customers like they were the enemy in WWII, your business is going down. Network non-neutralities are inherently customer-hostile, but customers aren't stupid; they talk to each other, and they will often abandon service providers that use them at the first opportunity.

Richard Bennett wrote:I'm yet to The Famous Brett Watson  –  Jul 14, 2008 4:30 AM

Richard Bennett wrote:

I’m yet to read a comment from you, Brett, that puts forward a serious argument. Making personal attacks may be amusing, but it doesn’t advance the debate.

The lack of a “serious argument” on my part is explained by the fact that I’m not trying to advocate a particular viewpoint. Rather, I’m trying to understand the problem as a whole. I do not intend to launch a personal attack, but when you make ludicrously overbroad claims of fact, such as framing the debate as being between “lawyers and law professors” with an “over-simplified view of the Internet” on one side, and “engineers and business people who understand how they actually do work and want to retain the freedom to improve the technology” on the other, I will point out the irony of your own over-simplification. I’m sure there are many network engineers who resent the suggestion that they are simplistic if they don’t agree with you, and there may even be some law professors who are insulted for the converse reason. Whatever the case, it’s pretty rich that you’re criticising my lack of contribution to the debate while you’re engaging in this level of rhetoric.

As to the remainder of your comment, I won’t dwell on your straw-man characterisation of a law professor and his lack of understanding; rather, I’ll try to address the technical issues. I invested quite a bit of effort in understanding your position on a previous occasion, and I still haven’t reached a satisfactory understanding of why RST injection is technically superior to packet dropping. That’s the meat and potatoes of this debate, and I’m quite willing to focus on it, but it hasn’t proved very fruitful in the past.

You make all sorts of claims as to the uniqueness of this situation and how the RST injection approach is fundamentally necessary, and so on. I don’t wish to assert that the old principles are fundamental laws of nature, but I do think that the burden of proof should be on those (such as you) who claim that the new situation of peer-to-peer file sharing warrants an adjustment to well-established principles. Several of those principles are related and inter-woven: the dumb network principle; the end-to-end principle; the layering principle. The practice of inserting RSTs into a TCP stream on an application-oriented basis flies directly in the face of all of these. I want compelling evidence that the traditional remedy of application-agnostic packet dropping is not good enough.
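For concreteness, the “traditional remedy” I have in mind is something like Random Early Detection: the drop decision depends only on queue depth, never on which application sent the packet. A toy sketch, with purely illustrative constants:

```python
# Toy Random Early Detection (RED): an application-agnostic dropper.
# All constants are illustrative. The point is what it does NOT look
# at: ports, payloads, or applications.
import random

MIN_TH, MAX_TH, MAX_P = 5.0, 15.0, 0.10  # queue thresholds, max drop prob
WEIGHT = 0.2                              # EWMA weight for average queue
avg_queue = 0.0

def should_drop(instant_queue_len: int) -> bool:
    global avg_queue
    avg_queue = (1 - WEIGHT) * avg_queue + WEIGHT * instant_queue_len
    if avg_queue < MIN_TH:
        return False                      # light load: admit everything
    if avg_queue >= MAX_TH:
        return True                       # sustained overload: drop arrivals
    # In between, drop probability rises linearly toward MAX_P.
    p = MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p

# Feed a sustained burst: drops start appearing as the average climbs.
for qlen in [20] * 30:
    if should_drop(qlen):
        print("drop at avg queue %.1f" % avg_queue)
```

A TCP sender sees the loss, infers congestion, and slows down; that feedback loop is exactly what an injected RST severs rather than exercises.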

I do believe that Comcast has a network management problem, and they have chosen to address it with the RST-oriented approach. I see no need to invoke black helicopters and conspiracy theories. (I could be wrong, but I don’t want to cloud the technical issues if I am.) I also accept your earlier assertions that this network management problem relates to the DOCSIS architecture. I still do not accept that the use of RSTs constitutes a technically superior solution to the problem—although I do accept that it constitutes a technically expedient solution. You have presented a great many words in support of your argument that the RST approach is necessary, or at least superior. I have found those arguments lacking, and refer you to our previous discussions on the matter (linked above) for the details.

The argument over technical superiority raises the separate question of whether ISPs ought to employ application-neutral network management techniques in general, or whether it’s fair to interfere with the application that’s causing them the most difficulty. This latter question is where the law professors have an entirely valid point to raise within their own area of expertise. If we allow network management techniques to interfere with specific applications, we invite the service providers to sabotage those applications which most threaten their particular business models, under the pretext of “reasonable network management”. Such anti-competitive behaviour is not in the interests of consumers no matter how much “reasonable network management” wrapping adorns it.

Why packet drop doesn't work Richard Bennett  –  Jul 14, 2008 11:05 PM

Here's part of an e-mail I got the other day from Larry Roberts, the designer of ARPANET, that helps explain the problem, Brett:

"...P2P expands to fill any capacity. In fact, as I have been testing and modeling P2P I find it taking up even higher fractions of the capacity as the total capacity expands. This is because each P2P app. can get more capacity and it is designed to take all it can. In the Universities we have measured, the P2P grows to between 95-98% of their Internet usage. It does this by reducing the rate per flow lower and lower, which by virtue of the current network design where all flows get equal capacity, drives the average rate per flow for average users down to their rate. They then win by virtue of having more flows, up to 1000 per user. I suspect they do not do this on cable since the upstream capacity is only 10 Mbps and when it saturates, they must stop at about 80%. But raise the capacity per user and the capacity of the upstream choke point and watch out! P2P can consume virtually any capacity. Larry"

I hope that clears it up.
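The arithmetic behind Larry's point is easy to check. Under per-flow fair sharing, the user who opens the most flows wins (illustrative numbers, not measurements):

```python
# Per-flow fair sharing: every flow gets an equal slice of the link,
# so one P2P user running 1,000 flows dwarfs a web user running 4.
link_mbps = 100.0
p2p_flows, web_flows = 1000, 4
per_flow = link_mbps / (p2p_flows + web_flows)

print(f"P2P user gets {p2p_flows * per_flow:.1f} Mbps")  # ~99.6 Mbps
print(f"Web user gets {web_flows * per_flow:.1f} Mbps")  # ~0.4 Mbps
```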

Still not clear The Famous Brett Watson  –  Jul 15, 2008 1:37 AM

This is a re-statement of the problem, not an explanation. Furthermore, this re-statement seems to be slightly different to the problem we have already discussed: it does not mention the DOCSIS upstream congestion problem, and seems to exclude cable modems generally ("I suspect they do not do this on cable"). There are numerous other questions I could raise about this explanation, but they are of lesser importance than the lack of a clear problem statement, and I don't want to cloud the issue.

I'm afraid this has not made matters any clearer, Richard. A proper explanation starts with a very clear problem statement and works through to a solution, step by step. In this case, the explanation must also show why the novel network management technique (RST injection) is superior to the traditional approach (packet dropping). I thought we'd reached a firm problem statement, but this attempt at explanation has thrown that back into doubt.

Here's a rough outline of the problem as I understand it so far. P2P protocols tend to degrade network performance for other applications. How? At this point I'm not sure how to continue, because I'm no longer sure if the problem is one of DOCSIS upstream congestion, subversion of router traffic management policies, or maybe both.

It's not just DOCSIS... Dan Campbell  –  Jul 15, 2008 2:54 AM

Brett,

I’m not sure if I’m reading you right, but I sense that you have a bit of doubt that the P2P problem actually exists.  Trust me, it does.  The behavior of the protocols and the traffic they generate are staggering.  They are like a virus.  A few users can generate so much traffic that they completely step on all “regular” subscribers and overwhelm the network.  In a DOCSIS-based broadband platform I worked on, even traditional bandwidth management appliances that partitioned and rate-shaped traffic based on traditional QoS parameters were not sufficient to address the issue; we were forced to pull them out and put in P-Cube (now Cisco SCE) appliances that specialize in P2P throttling.  They work, and they work well.  The issue is more evident in the DOCSIS world because DOCSIS is a shared system up to the head end, not unlike an old Ethernet hub or even a TDMA-based system.  DOCSIS uses a scheduler to assign timeslots designating when subscribers can transmit.  Since this was years ago, it’s been a while since I looked deeply into DOCSIS so I’ll have to look back on notes to give you more insight on the DOCSIS aspect, but it really doesn’t matter.  The P2P problem could exist on a DSL network, although it would not occur on the last mile to the subscriber but rather on backbone links just past the DSLAM and aggregator.  It also could occur on any corporate network, at the Internet gateways or even uplinks in between if the traffic were bad enough.  Again, it is common in DOCSIS because of the shared nature of the cable Internet architecture up to the head end.  We basically came to the conclusion that in any scheduler based system, you had no choice but to perform traffic shaping, else your entire service goes down the drain and you may as well shut it down.
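To see why a scheduler-based shared upstream is so vulnerable, here's a grossly simplified request/grant sketch; the real DOCSIS MAC is far more involved, and the numbers here are invented:

```python
# Grossly simplified DOCSIS-flavored upstream: each cycle the head end
# grants transmit slots in proportion to what subscribers request, so
# a couple of P2P users asking for huge grants starve everyone else.
SLOTS_PER_CYCLE = 100

def grant_slots(requests: dict[str, int]) -> dict[str, int]:
    total = sum(requests.values())
    return {user: (SLOTS_PER_CYCLE * want) // total
            for user, want in requests.items()}

reqs = {"p2p-1": 500, "p2p-2": 500, **{f"web-{i}": 2 for i in range(10)}}
print(grant_slots(reqs))
# p2p users get ~49 slots each; every web user rounds down to 0
```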

One thing that is not clear is exactly how Comcast has their appliances (I think they are Sandvine) configured.  They may be throttling in both directions.  The one thing that is configurable in the appliances is to just limit or block P2P in the upstream direction (from the end user towards the Internet), since that bandwidth is more scarce.  This way you can discourage the P2P protocols from allowing others out on the Internet to pull files from the cable subscribers and wrecking the upstream, while allowing the cable subscribers to download from others out on the Internet (but not from cable subscribers on the same segment.)

Again, as I’ve said in other posts, what is missing from this entire Comcast / traffic shaping / network neutrality debate is a common understanding of the principles behind oversubscription, which governs our phone lines, cellular phones, previously dial-up Internet access and now broadband Internet access. Without oversubscription, you would not be paying $40/month for the equivalent of a T1 to your home. The business model doesn’t work. (Even the non-technical lawyers and law professors should be able to understand this.) If service providers had to truly guarantee the maximum bandwidth 24x7, they may as well be selling leased lines to the home, and those lines would be at a much higher price. If the FCC pushes too hard on broadband providers, we may very well end up with usage-based (per-MB) service, which would be a disaster for most subscribers. The free market is a better way to pass judgement on the practices and service quality of a service provider. If enough subscribers drop them for a competitor, it might convince them to alter their practice or, in the case of Comcast, expedite their upgrade to DOCSIS 3.0. In the meantime, I would rather continue to pay a mere $40/month and refrain from file sharing, while permitting Comcast to throttle file sharers around me, rather than have per-MB or much higher priced service.
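To put rough numbers on oversubscription (mine are made up, but the shape is right):

```python
# Back-of-the-envelope oversubscription economics. The $400/month cost
# figure for a dedicated T1-equivalent (~1.5 Mbps) is hypothetical;
# substitute your own wholesale numbers.
cost_per_dedicated_line = 400.0   # provider's monthly cost per 1.5 Mbps, $
retail_price = 40.0               # subscriber's monthly price, $

breakeven = cost_per_dedicated_line / retail_price
print(f"Subscribers who must share each line just to break even: {breakeven:.0f}")
# -> 10, i.e. a 10:1 oversubscription ratio before any profit at all
```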

Focus! Focus! The Famous Brett Watson  –  Jul 15, 2008 11:06 PM

I'm not sure if I'm reading you right, but I sense that you have a bit of doubt that the P2P problem actually exists.
I don't doubt that P2P is posing a problem. I don't doubt that it creates a lot of traffic. What I doubt is that this traffic pattern is so unique as to demand "traffic shaping" techniques which involve selectively killing TCP sessions in a manner unprecedented except for its use as a denial of service attack or censorship tool. When I first heard that Comcast was using forged TCP RSTs to address this issue, my reaction was like that of many other people: "that's not traffic management, that's sabotage". I was initially baffled as to why they would do this. I've since reached the conclusion that the problem is P2P traffic, and they are using this ghastly hack of a management tool because it was available immediately, and had the desired effect. (Well, almost the desired effect. They clearly didn't anticipate the public relations impact.)

Richard Bennett has been trying to persuade us that the RST injection method is not an ugly hack, but quite necessary given the nature of the problem. He claims that traditional traffic shaping techniques involving selective packet drop simply do not work. My interest is network protocols rather than network management, and if his claim is true, it may impact my work quite significantly. This is why I have been trying to pin down the exact nature of the problem and the alleged need for RST segments. His assertions challenge some fairly fundamental principles of packet network design.

Frankly, if Richard is right, it exposes a much deeper problem: the P2P protocol designers can always switch to a UDP-based system which simply has no concept of "RST", and what traffic management techniques can be applied then? I'm still highly sceptical that RST injection has any merit as a traffic shaping tool from an engineer's perspective, however.
Again, as I've said in other posts, what is missing from this entire Comcast / traffic shaping / network neutrality debate is a common understanding of the principles behind oversubscription
Actually, what's missing from this debate is an ability to focus on one issue at a time. Your other post is a veritable smorgasbord of issues, guaranteeing that no progress will be made on the matter as people argue directly past each other on entirely different points, and switch focus the moment that it seems like a convenient tactical manoeuvre. Yes, I realise that there is more than one issue on the table here, but do try to keep them somewhat separate unless you actually intend to hinder progress. Communication is hard at the best of times. We are simple creatures and need to simplify our problems before we can solve them.

Paying for Bandwidth David Wieda  –  Jul 28, 2008 2:31 PM

I believe consumers pay for a certain amount of bandwidth already. Shouldn't we be able to use that bandwidth as we see fit? Here's a simple analogy: a casino must have enough cash on site to cover all bets on the casino floor. Shouldn't an ISP like Comcast have enough bandwidth/infrastructure to support the advertised bandwidth if all customers are concurrently using all of their connections' bandwidth?

Shouldn't an ISP like Comcast The Famous Brett Watson  –  Jul 28, 2008 10:18 PM

Shouldn't an ISP like Comcast have enough bandwidth/infrastructure to support the advertised bandwidth if all customers are concurrently using all of their connections' bandwidth?
No. There would be no economy of scale worth mentioning if that were the case, and most of their network capacity would be wasted most of the time. This is the point: roughly speaking, you only need enough network capacity to cope with peak actual demand, and this is rarely anywhere near the theoretical maximum demand that could be placed on the network if all users simultaneously drove their connections to the limit. Of course, you can run into a problem when you sell your services as "unlimited", and too many people start to take you at your word. It's kind of like being an "all you can eat" buffet, and having a busload of Homer Simpson clones turn up. "Unlimited" is a draw-card, but it has its risks. If you were running the "all you can eat buffet", what would your strategy be? Would you change your pricing plan away from the "unlimited" model, or poison some of the food enough to make people ill and cut down demand?
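It's easy to put numbers on this. If each subscriber is independently active only a small fraction of the time, the observed peak sits far below the theoretical maximum. A toy simulation, with made-up parameters:

```python
# Toy statistical-multiplexing simulation: 1,000 subscribers, each
# independently active 5% of the time. The peak concurrent demand we
# ever observe is a small fraction of the theoretical maximum of 1,000.
import random

SUBS, P_ACTIVE, SAMPLES = 1000, 0.05, 1000

peak = max(sum(random.random() < P_ACTIVE for _ in range(SUBS))
           for _ in range(SAMPLES))
print(f"Peak concurrent users seen: {peak} of {SUBS}")  # typically ~70-80
```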


