
Don't Overlook the Network When Migrating to the Cloud

Jim Morin

The success or failure of public cloud services can be measured by whether they deliver high levels of performance, security and reliability that are on par with, or better than, those available within enterprise-owned data centers. Underscoring how rapidly the cloud market is growing, IDC forecasts that public cloud IT spending will increase from $40 billion in 2012 to $100 billion in 2016. To provide the performance, security and reliability needed, cloud providers are moving quickly to build a virtualized multi-data center service architecture, or a "data center without walls."

This approach federates the data centers of both the enterprise customer and the cloud service provider so that all compute, storage, and networking assets are treated as a single virtual pool, with optimal placement, migration, and interconnection of workloads and associated storage. This "data center without walls" architecture gives IT tremendous operational flexibility and agility to better respond to and support business initiatives by transparently using both in-house and cloud-based resources. In fact, internal studies show that IT can achieve resource efficiency gains of 35 percent over isolated provider data center architectures.

However, this architecture is not without its challenges. Migrating workloads between the enterprise and the public cloud creates traffic between the two, as well as between clusters of provider data centers. In addition, transactional loads placed on the backbone network, including self-service customer application operations (creating, re-sizing, or deleting applications in the cloud) and provider administrative operations, introduce variability and unpredictability in traffic volumes and patterns. To accommodate this variability, providers would normally have to over-provision the backbone to handle the sum of these peaks — an inefficient and costly approach.

Getting to Performance-on-Demand

In the future, rather than over-provisioning, service providers will employ intelligent networks that can be programmed to allocate bandwidth from a shared pool of resources where and when it is needed. This software-defined networking (SDN) framework consists of three layers: the infrastructure layer — the virtualized transport and switching network elements; the network control layer (or SDN controller) — the software that configures the infrastructure layer to accommodate service demands; and the application layer — the service-creation and delivery software that drives the required network connectivity, e.g. the cloud orchestrator.

SDN enables cloud services to benefit from performance-on-demand
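
As a rough illustration of how these layers relate, here is a minimal Python sketch; every class and method name in it is hypothetical, not drawn from any particular SDN controller or orchestrator product:

```python
# Minimal sketch of the three-layer SDN framework described above.
# All class and method names are illustrative, not a real product API.

class InfrastructureLayer:
    """Virtualized transport and switching network elements."""
    def apply(self, path, gbps):
        print(f"Configuring {gbps} Gb/s along {' -> '.join(path)}")

class SdnController:
    """Network control layer: maps service demands onto the infrastructure."""
    def __init__(self, infra):
        self.infra = infra
    def allocate(self, src, dst, gbps):
        path = [src, "backbone", dst]  # stand-in for real path computation
        self.infra.apply(path, gbps)

class CloudOrchestrator:
    """Application layer: drives the connectivity a cloud service needs."""
    def __init__(self, controller):
        self.controller = controller
    def migrate(self, src_dc, dst_dc, gbps=1):
        self.controller.allocate(src_dc, dst_dc, gbps)

CloudOrchestrator(SdnController(InfrastructureLayer())).migrate("dc-east", "dc-west")
```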

The logically centralized control layer software is the linchpin of orchestrated performance-on-demand. By exposing an abstracted view of the network's resources, it allows the orchestrator to request allocation of those resources without needing to understand the complexity of the underlying network.

For example, the orchestrator may simply request a connection between specified hosts in two different data centers to handle the transfer of 1 TB with a minimum flow rate of 1 Gb/s and packet delivery ratio of 99.9999% to begin between the hours of 1:00 a.m. and 4:00 a.m. The SDN controller first verifies the request against its policy database, performs path computation to find the best resources for the request, and orchestrates the provisioning of those resources. It subsequently notifies the cloud orchestrator so that the orchestrator may initiate the inter-data center transaction.
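
A hedged sketch of that exchange might look like the following Python; the ConnectionRequest fields mirror the example above, while the controller methods (policy_allows, compute_path, provision) and the stub controller are invented stand-ins rather than a real controller API:

```python
from dataclasses import dataclass

@dataclass
class ConnectionRequest:
    """Abstract service request from the cloud orchestrator."""
    src_host: str
    dst_host: str
    volume_tb: float        # total data to transfer
    min_rate_gbps: float    # minimum flow rate
    delivery_ratio: float   # required packet delivery ratio
    window: tuple           # (start_hour, end_hour), e.g. (1, 4)

def handle_request(controller, req):
    # 1. Verify the request against the controller's policy database.
    if not controller.policy_allows(req):
        raise PermissionError("request violates policy")
    # 2. Compute the best path satisfying the rate and delivery constraints.
    path = controller.compute_path(req.src_host, req.dst_host,
                                   req.min_rate_gbps, req.delivery_ratio)
    # 3. Provision resources along that path for the requested time window.
    controller.provision(path, req.min_rate_gbps, req.window)
    # 4. Notify the orchestrator so it can start the inter-data-center transfer.
    return {"status": "provisioned", "path": path, "window": req.window}

class StubController:
    """Toy controller so the sketch runs end to end."""
    def policy_allows(self, req):
        return req.min_rate_gbps <= 10          # toy policy ceiling
    def compute_path(self, src, dst, rate, ratio):
        return [src, "backbone-node-1", dst]    # placeholder path
    def provision(self, path, rate, window):
        print(f"Reserving {rate} Gb/s on {path} during {window}")

# 1 TB at >= 1 Gb/s, 99.9999% delivery, between 1:00 a.m. and 4:00 a.m.
req = ConnectionRequest("dc-east/host-a", "dc-west/host-b",
                        volume_tb=1.0, min_rate_gbps=1.0,
                        delivery_ratio=0.999999, window=(1, 4))
print(handle_request(StubController(), req))
```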

The benefits of this approach include cost savings and operational efficiencies. Delivering performance-on-demand in this way can reduce cloud backbone capacity requirements by up to 50 percent compared to over-provisioning, while automation simplifies planning and operational practices and reduces the costs associated with these tasks.

The network control and cloud application layers can also work hand-in-hand to optimize the service ecosystem as a whole. The network control layer has visibility into the entire landscape of existing connections, anticipated connections, and unallocated resources, making it more likely to find a viable path if one exists — even when nodes or links along the shortest route are congested.
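
One way to picture that global view is a constrained shortest-path search over the controller's network graph that skips links without enough spare capacity; the topology, costs, and capacities below are invented purely for illustration:

```python
import heapq

def feasible_path(graph, src, dst, needed_gbps):
    """Cheapest path using only links with enough spare capacity.

    graph: {node: [(neighbor, cost, spare_gbps), ...]}
    """
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, link_cost, spare in graph[node]:
            if spare >= needed_gbps and nbr not in seen:  # skip congested links
                heapq.heappush(pq, (cost + link_cost, nbr, path + [nbr]))
    return None  # no viable path exists

# Toy topology: the direct east-west link has only 0.2 Gb/s spare, so a
# 1 Gb/s request is routed over the longer but feasible northern path.
net = {
    "dc-east":  [("dc-west", 1, 0.2), ("dc-north", 2, 5.0)],
    "dc-north": [("dc-west", 2, 5.0)],
    "dc-west":  [],
}
print(feasible_path(net, "dc-east", "dc-west", needed_gbps=1.0))
# -> (4, ['dc-east', 'dc-north', 'dc-west'])
```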

The cloud orchestrator can automatically respond to inter-data center workload requirements. Based on policy and bandwidth schedules, the orchestrator works with the control layer to connect destination data centers and schedule transactions to maximize the performance of the cloud service. Through communication with the network control layer, it can select the best combination of connection profile, time window and cost.
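
A simple way to model that selection is to pick among candidate offers returned by the control layer; the offers, prices, and time window below are made up solely to illustrate the idea:

```python
# Hypothetical offers from the control layer for one inter-data-center
# transfer: connection profile, start hour, and price. Values are invented.
offers = [
    {"profile": "premium-low-latency", "start_hour": 1,  "price": 90},
    {"profile": "standard",            "start_hour": 2,  "price": 40},
    {"profile": "standard",            "start_hour": 14, "price": 55},
]

def pick_offer(offers, latest_start_hour=4):
    """Choose the cheapest offer that still starts within the time window."""
    viable = [o for o in offers if o["start_hour"] <= latest_start_hour]
    return min(viable, key=lambda o: o["price"]) if viable else None

print(pick_offer(offers))  # -> the 2 a.m. "standard" offer at price 40
```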

Summary

Whether built with SDN or other technologies, an intelligent, scalable network can transform a facilities-only architecture into a fluid workload orchestration system, offering performance-on-demand that assigns network quality and bandwidth per application.

This intelligent network is the key ingredient that enables enterprises to interconnect data centers with application-driven programmability and enhanced performance, at optimal cost.

By Jim Morin, Product Line Director, Managed Services & Enterprise at Ciena

Related topics: Cloud Computing, Data Center, Networks


Comments

Great example Michael Bushong  –  Jun 14, 2013 8:41 AM PDT

I like that you are specifying application requirements in abstract terms rather than converting them all the way down into the networking primitives needed to implement them. That is where I think things will end up as well. There is some focus on those abstractions now, after a rather long look at low-level protocols like OpenFlow; I think the industry is correcting course somewhat.

One challenge will be the amount of trust required to specify things at a high level and believe that they will correctly translate into behavior. Should be interesting to see how DevOps, application guys, and network folks work through this trust thing.

I am also curious what will happen with reporting and troubleshooting tools. As you start to specify SLAs, it will be a trust but verify model. If you have any insight here, that would be excellent.

-Mike (@mbushong)
Plexxi

Great example Jim Morin  –  Jun 14, 2013 2:36 PM PDT

Mike,
You bring up a very good point that we don't want "Wild West" uncontrolled access to network resources. Ciena is implementing key management tools, such as the ability to allocate a portion of the network for dynamic behavior so that this activity does not interrupt production workloads. Dynamic job requests must also meet policy, and then go through a scheduler to ensure network resources are available for the task. Tools such as these will help build the trust you mention among all parties involved, even before SLA troubleshooting is required.

