Google Fiber: Technology Innovation Or Revenue Assurance?

Google’s announcement of its ‘Fiberhoods’ throughout Kansas City is yet another example of the thought leadership and innovation being brought forward by the popular advertising company. But what does this move say about the state of Internet access in America?

With Verizon FIOS, Comcast Xfinity and Time Warner Cable all vying for the nation’s top choice ISP, along with AT&T, Verizon Wireless, Sprint, and T-Mobile competing to be named the best 4G LTE network, it’s an interesting time to see a new player enter the commercial market.

So what is Google doing, anyway?

Google’s offering, up to 1000 Megabits per second (Mbps) to the home, enables customers to access the web at speeds far greater than competing ISPs can deliver. It even eclipses the benchmark set by Japan, where home Internet access for the average residential user runs at 61Mbps.

Undoubtedly, Google has done some amazing network engineering just to make their concept of Google Fiber operate at a city-wide scale. Broadband and mobile network operators are facing grave challenges in scaling their core and aggregation networks to keep pace with the speeds of the latest and greatest access technologies available.

Fiber to the home, DOCSIS 3.0, and 4G LTE are all enabling access to core networks at speeds easily over 100Mbps from the local node or mobile tower to the home or handheld device. However, the networks sitting behind local access nodes or towers, known as aggregation networks, are not currently able to scale to the necessary speeds.

I live in a modest 36-home development in New Hampshire. At 100Mbps DOCSIS 3.0 speeds, serving every home at once would require nearly 4Gbps of capacity into the neighborhood. That’s a lot. But if we had Google Fiber in our neighborhood, it would require nearly 40Gbps of capacity!

For reference, Juniper’s largest switching fabric, the QFX-series, caps out at 40Tbps of throughput. With each home having 1000Mbps access, that’s capacity for only 40,000 homes, not including backbone connection! That’s hardly enough even for a moderately sized city.
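
To put that back-of-the-envelope math in concrete terms, here is a minimal sketch in Python. The figures are illustrative assumptions taken from the last two paragraphs: 36 homes, a 40Tbps fabric, and the worst case in which every home saturates its access link simultaneously.

    HOMES_IN_NEIGHBORHOOD = 36

    def aggregation_need_gbps(homes, per_home_mbps):
        # Worst-case capacity needed into the neighborhood, in Gbps,
        # if every home runs its access link flat out at the same time.
        return homes * per_home_mbps / 1000

    print(aggregation_need_gbps(HOMES_IN_NEIGHBORHOOD, 100))   # ~3.6 Gbps with 100Mbps DOCSIS 3.0
    print(aggregation_need_gbps(HOMES_IN_NEIGHBORHOOD, 1000))  # ~36 Gbps with Google Fiber

    FABRIC_CAPACITY_TBPS = 40
    # Tbps -> Gbps gives the number of 1000Mbps homes one fabric could serve,
    # before accounting for any backbone uplinks.
    print(FABRIC_CAPACITY_TBPS * 1000)  # 40,000 homes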

How will networks handle this new requirement for capacity?

The potential load that these levels of service can place onto the network is huge and I’m not convinced that aggregation networks, core backbone networks or the critically important Internet exchange points (locations where IP traffic is exchanged between ISPs) are ready to handle the load.

Last November, I cited Comcast’s statement that they were already pushing 40Gbps in a single regional aggregation network in Philadelphia, PA. For comparison’s sake, one of the largest transpacific connections I’m aware of belongs to NTT Communications, currently built out to support 630Gbps of capacity from the US to Japan.

The load issue goes far beyond just the network layer and into the data center. Are content providers (Google, Yahoo, Microsoft, Netflix, Amazon, etc.) truly ready to see traffic flowing at 1Gbps to a single residence? It’s commonplace today for a single video server to push well over 4Gbps of streaming video, but spread across thousands of clients. In our Google Fiberhood, does that mean service for only 4 homes?
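
As a rough sketch of that division, assuming the 4Gbps server figure above and an illustrative 5Mbps HD stream rate (an assumption for the example, not a measurement from any particular provider):

    SERVER_OUTPUT_GBPS = 4.0

    def concurrent_clients(per_client_mbps):
        # How many simultaneous viewers one server can feed at a given per-client rate.
        return int(SERVER_OUTPUT_GBPS * 1000 // per_client_mbps)

    print(concurrent_clients(5))     # ~800 viewers at an assumed 5Mbps HD stream
    print(concurrent_clients(1000))  # only 4 homes if each pulls a full 1Gbps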

Beyond the physical network, we get into the protocol layer.

When Vint Cerf’s team at Stanford University designed TCP/IP between 1973 and 1974, they never imagined networks could go as fast as they do today. The congestion control algorithms developed by Van Jacobson may have saved the networks of the late 1980s and early 1990s, but until RFC 5681 was drafted and implemented, networks with high latency were unable to handle flows of this size.

It’s very important to note that while the physical infrastructure between two points may have the bandwidth to handle 1000Mbps transfers, if the end hosts’ TCP stacks don’t handle the latency introduced by the distance between them (i.e., the path’s bandwidth-delay product), achieving a 1000Mbps file transfer may not be possible. Operating systems and network software will need to keep evolving to generate and consume flows of the size Google’s very fast network makes possible.
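
A minimal sketch of that bandwidth-delay product math, using illustrative round-trip times; the 64 KB ceiling is TCP’s classic window limit in the absence of RFC 1323 window scaling:

    def bdp_bytes(bandwidth_mbps, rtt_ms):
        # Data that must be in flight to keep a path of this bandwidth and latency full.
        return bandwidth_mbps * 1e6 / 8 * (rtt_ms / 1000)

    for rtt in (5, 50, 150):  # rough RTTs: metro, cross-country, trans-Pacific
        print(f"1000Mbps at {rtt}ms RTT needs a ~{bdp_bytes(1000, rtt) / 1e6:.1f} MB window")

    # With only a 64 KB window, a single flow at 150ms RTT tops out around 3.5Mbps:
    print(64 * 1024 / 0.150 * 8 / 1e6)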

Beyond technology issues, is this just a run around the politics and economics of Inter-networking?

In 2008 and 2009, lengthy legislative sessions were devoted to analyzing the merits of “Net Neutrality” and its impact on the Internet ecosystem. We all know Google’s primary business model: advertising. In order to generate revenue (and profits), Google has to deliver its advertisements, and it does that over the Internet, injecting them into multiple Google services, from Gmail to YouTube.

In past years, we’ve seen ISPs consider (and implement) the practice of controlling and shaping different types of traffic across their networks, especially for higher bandwidth applications such as video streaming and peer-to-peer networking (as supported by Google+).

Traditional ISPs continue to be concerned about the capacity of their networks, especially in the aggregation layer as discussed above, and about how to ensure that they have the capacity needed to keep customers satisfied. One way to do this is to limit congestion from certain types of applications, such as video, or even video with advertisements running in it.

I’m suggesting that Google Fiber may be a way for Google to ensure its advertising revenue streams by controlling the content, the advertisements, and the network that get them to the users—a sure means of revenue protection against the threat of traffic shaping.

Undoubtedly, Google is making a bold move. I don’t question that the company has the technology, engineers, and know-how to roll out something like Google Fiber and provide an excellent service. What I will question is the long-term viability of ubiquitously deploying the service (given the intense capital costs of bringing fiber optics to the home), the resultant load on the Internet’s various backbones, and the economic and revenue implications of the project.

I’ll admit it: having Google Fiber service to my own home would be an enjoyable amenity, but what’s the macroeconomic picture here?

By Tom Daly, Chief Scientist and Co-Founder at Dyn Inc

Dyn is the Internet Infrastructure-as-a-Service (IaaS) leader, featuring a full suite of DNS and Email Delivery solutions. Follow on Twitter: @TomDynInc and @dyninc.

Comments

Frank Bulk  –  Jul 31, 2012 12:41 PM

Undoubtedly, Google has done some amazing network engineering just to make their concept of Google Fiber operate at a city-wide scale.

While access network services may be new for Google, they should hardly be categorized as “amazing”. Service providers across the Midwest and around the country are building, supporting, and expanding fiber-based networks all the time. I’m not aware of Google doing anything special or unique here.

The potential load that these levels of service can place onto the network is huge and I’m not convinced that aggregation networks, core backbone networks or the critically important Internet exchange points (locations where IP traffic is exchanged between ISPs) are ready to handle the load.

Yes, it’s potentially huge, but there are plenty of FTTH networks in place today that operate just fine. Just because the pipe is 50x bigger than before does not mean total consumption increases linearly. And content only goes as fast as the slowest link. I’d argue that there’s a lot of content out there that can’t be retrieved any more quickly at 1 Gbps than at 100 Mbps. IXPs will build and expand to meet capacity; it’s what they do.

The load issue goes far beyond just the network layer and into the data center. Are content providers (Google, Yahoo, Microsoft, Netflix, Amazon, etc.) truly ready to see traffic flowing at 1Gbps to a single residence? It’s commonplace today for a single video server to push well over 4Gbps of streaming video, but spread across thousands of clients. In our Google Fiberhood, does that mean service for only 4 homes?

Content providers probably aren’t ready, but since residential users won’t likely consume that much more, it’s almost a moot point.

Traditional ISPs continue to be concerned about the capacity of their networks, especially in the aggregation layer as discussed above, and about how to ensure that they have the capacity needed to keep customers satisfied. One way to do this is to limit congestion from certain types of applications, such as video, or even video with advertisements running in it.

While ISPs are always keeping an eye on their network capacity, they’re not picking and choosing winners.  That would violate Net Neutrality. 

I’m suggesting that Google Fiber may be a way for Google to ensure its advertising revenue streams by controlling the content, the advertisements, and the network that get them to the users — a sure means of revenue protection against the threat of traffic shaping.

There’s no threat because Net Neutrality is in place and no one wants the FCC sniffing at their network.
