
Don't Overlook the Network When Migrating to the Cloud

Jim Morin

The success or failure of public cloud services can be measured by whether they deliver levels of performance, security and reliability on par with, or better than, those available within enterprise-owned data centers. Underscoring the market's rapid growth, IDC forecasts that public cloud IT spending will increase from $40 billion in 2012 to $100 billion in 2016. To provide the performance, security and reliability needed, cloud providers are moving quickly to build a virtualized multi-data center service architecture, or a "data center without walls."

This approach federates the data centers of both the enterprise customer and the cloud service provider so that all compute, storage and networking assets are treated as a single, virtual pool with optimal placement, migration and interconnection of workloads and associated storage. This "data center without walls" architecture gives IT tremendous operational flexibility and agility to respond to and support business initiatives by transparently using both in-house and cloud-based resources. In fact, internal studies show that IT can achieve resource efficiency gains of 35 percent over isolated provider data center architectures.

However, this architecture is not without its challenges. The migration of workloads between the enterprise and the public cloud creates traffic between the two, as well as between clusters of provider data centers. In addition, transactional loads and demands placed on the backbone network, including self-service customer application operations (application creation, re-sizing or deletion in the cloud) and provider administrative operations, introduce variability and unpredictability in traffic volumes and patterns. To accommodate this variability, providers normally would have to over-provision the backbone to handle the sum of these peaks — an inefficient and costly approach.
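The cost of over-provisioning comes from sizing the backbone for the sum of per-flow peaks, even though those peaks rarely coincide. A toy calculation with made-up traffic numbers (not data from the article) illustrates the gap between dedicating capacity per flow and sharing one pool:

```python
import random

random.seed(1)
# Illustrative only: hourly demand (Gb/s) from three inter-data-center
# flows whose peaks fall at different hours of the day.
flows = [[random.uniform(1, 10) for _ in range(24)] for _ in range(3)]

sum_of_peaks = sum(max(f) for f in flows)              # over-provision each flow
peak_of_sums = max(sum(hour) for hour in zip(*flows))  # share one capacity pool

# Shared capacity is never larger than the sum of individual peaks,
# and it is usually substantially smaller.
assert peak_of_sums <= sum_of_peaks
print(round(peak_of_sums / sum_of_peaks, 2))
```

The ratio printed is the fraction of the over-provisioned capacity a shared pool would actually need for this toy traffic mix.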

Getting to Performance-on-Demand

In the future, rather than over-provisioning, service providers will employ intelligent networks that can be programmed to allocate bandwidth from a shared pool of resources where and when it is needed. This software-defined networking (SDN) framework consists of three layers: the infrastructure layer — the virtualized transport and switching network elements; the network control layer (or SDN controller) — the software that configures the infrastructure layer to accommodate service demands; and the application layer — the service-creation and delivery software, such as the cloud orchestrator, that drives the required network connectivity.

SDN enables cloud services to benefit from performance-on-demand

The logically-centralized control layer software is the linchpin of orchestrated performance-on-demand. Because the controller maintains a global view of network resources, the orchestrator can request allocation of those resources without needing to understand the complexity of the underlying network.

For example, the orchestrator may simply request a connection between specified hosts in two different data centers to handle the transfer of 1 TB with a minimum flow rate of 1 Gb/s and packet delivery ratio of 99.9999% to begin between the hours of 1:00 a.m. and 4:00 a.m. The SDN controller first verifies the request against its policy database, performs path computation to find the best resources for the request, and orchestrates the provisioning of those resources. It subsequently notifies the cloud orchestrator so that the orchestrator may initiate the inter-data center transaction.
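As a sanity check on the numbers in this example: 1 TB is 8×10^12 bits, which at 1 Gb/s takes about 8,000 seconds (roughly 2.2 hours), so the transfer fits comfortably in the three-hour window. A minimal sketch of that admission arithmetic (the function name and units are illustrative, not a real controller API):

```python
def transfer_feasible(volume_tb: float, rate_gbps: float, window_hours: float) -> bool:
    """Can `volume_tb` terabytes move at `rate_gbps` within the window?

    Illustrative arithmetic only; a real SDN controller would also verify
    the request against policy and available path capacity, as described
    in the text.
    """
    bits = volume_tb * 8e12                    # decimal TB -> bits
    seconds_needed = bits / (rate_gbps * 1e9)  # Gb/s -> bit/s
    return seconds_needed <= window_hours * 3600

# The request above: 1 TB at a minimum of 1 Gb/s in the 1:00-4:00 a.m. window.
print(transfer_feasible(1.0, 1.0, 3.0))  # True: ~8,000 s < 10,800 s
```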

The benefits of this approach include cost savings and operational efficiencies. Delivering performance-on-demand in this way can reduce cloud backbone capacity requirements by up to 50 percent compared to over-provisioning, while automation simplifies planning and operational practices and reduces the costs associated with these tasks.

The network control and cloud application layers also can work hand-in-hand to optimize the service ecosystem as a whole. The network control layer has sight of the entire landscape of all existing connections, anticipated connections, and unallocated resources, making it more likely to find a viable path if one is possible — even if nodes or links are congested along the shortest route.
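One way to picture that advantage: because the control layer sees spare capacity on every link globally, it can run a constrained shortest-path search that simply excludes congested links and still find a viable detour. A hedged sketch, where the graph encoding is my assumption rather than a real controller data model:

```python
import heapq

def feasible_path(graph, src, dst, demand_gbps):
    """Lowest-cost path using only links with enough spare capacity.

    graph: {node: [(neighbor, cost, free_gbps), ...]} -- an illustrative
    model of the control layer's global view. Returns a node list or None.
    """
    dist, prev, heap, done = {src: 0.0}, {}, [(0.0, src)], set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in done:
            continue
        done.add(node)
        if node == dst:                       # reached the target; walk back
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for nbr, cost, free in graph.get(node, []):
            if free < demand_gbps:            # skip congested links entirely
                continue
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    return None

# The shortest route A-B-D is congested for a 1 Gb/s demand; A-C-D still works.
net = {"A": [("B", 1, 0.5), ("C", 2, 10)],
       "B": [("D", 1, 0.5)],
       "C": [("D", 2, 10)],
       "D": []}
print(feasible_path(net, "A", "D", 1.0))  # ['A', 'C', 'D']
```

With a smaller demand (say 0.1 Gb/s) the same search returns the cheaper A-B-D route, which is the "viable path if one is possible" behavior described above.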

The cloud orchestrator can automatically respond to inter-data center workload requirements. Based on policy and bandwidth schedules, the orchestrator works with the control layer to connect destination data centers and schedule transactions to maximize the performance of the cloud service. Through communication with the network control layer, it can select the best combination of connection profile, time window and cost.
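Selecting the best combination of connection profile, time window and cost can be as simple as filtering the control layer's candidate offers and taking the cheapest one that still meets the flow-rate floor. A toy sketch with invented offers (the tuple shape and prices are assumptions, not a real orchestrator interface):

```python
# Hypothetical candidates returned by the control layer for one
# inter-data-center transfer: (time window, rate in Gb/s, cost).
offers = [
    ("01:00-04:00", 1.0, 120),   # off-peak, cheap
    ("09:00-10:00", 4.0, 400),   # business hours, fast but expensive
    ("22:00-01:00", 2.0, 180),
]

def pick_offer(offers, min_rate_gbps):
    """Cheapest offer that still meets the minimum flow rate, or None."""
    eligible = [o for o in offers if o[1] >= min_rate_gbps]
    return min(eligible, key=lambda o: o[2]) if eligible else None

print(pick_offer(offers, 2.0))  # ('22:00-01:00', 2.0, 180)
```

Tightening the rate floor to 2 Gb/s rules out the overnight bargain, so the policy trades cost for the next-cheapest window that still meets the SLA.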


Whether built with SDN or other technologies, an intelligent, scalable network can transform a static, facilities-only architecture into a fluid workload-orchestration system — one that offers performance-on-demand, assigning network quality and bandwidth per application.

This intelligent network is the key ingredient that enables enterprises to interconnect data centers with application-driven programmability and enhanced performance at optimal cost.

By Jim Morin, Product Line Director, Managed Services & Enterprise at Ciena

Related topics: Cloud Computing, Data Center



Great example Michael Bushong  –  Jun 14, 2013 8:41 AM PDT

I like that you are specifying application requirements in abstract terms that are not being converted all the way down into the networking primitives needed to do them. That is where I think things will end up as well. There is some focus on those abstractions now after a rather long look at low-level protocols like OpenFlow. I think the industry is correcting some.

One challenge will be the amount of trust required to specify things at a high level and believe that it will correctly translate into behavior. Should be interesting to see how DevOps, application guys, and network folks work through this trust thing.

I am also curious what will happen with reporting and troubleshooting tools. As you start to specify SLAs, it will be a trust but verify model. If you have any insight here, that would be excellent.

-Mike (@mbushong)

Great example Jim Morin  –  Jun 14, 2013 2:36 PM PDT

You bring up a very good point that we don't want a "Wild West" uncontrolled access to network resources.  Ciena is implementing key management tools, such as being able to allocate a portion of the network for dynamic behavior so that activity does not interrupt production workloads.  Dynamic job requests must also meet policy, and then go through a scheduler to ensure network resources are available for that task.  Tools such as these will help build the trust you mention between all parties involved even before SLA troubleshooting is required.

