In this SPIE, Stratecast examines the concept of the bare metal cloud from the provider and the customer perspective. We compare benefits and challenges of bare metal cloud configurations with the more common virtualized cloud configurations. Finally, we look at the bare metal cloud offers from Internap and SoftLayer, an IBM company.
Does a cloud configuration require virtualization? It turns out, the answer is “no.”
In fact, the National Institute of Standards and Technology (NIST), whose cloud definition is widely accepted in the industry, omits virtualization as a criterion for cloud. NIST's "essential characteristics" include on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service, but not virtualization.
This may surprise many in the IT community who have always assumed that a virtualized server infrastructure was necessary to provide the flexibility and scalability associated with cloud. However, the emergence of "bare metal" clouds—that is, clouds that do not utilize virtualization—is forcing a re-examination of what it takes to offer a cloud service. The bare metal options provide the flexibility and scalability associated with virtualized offers, while promising higher levels of performance and consistency. Currently, two cloud leaders—SoftLayer (an IBM company) and Internap—have developed a bare metal option as part of their cloud portfolios. Both tout their bare metal services as a way to differentiate themselves from the crowded cloud service market. Both have also had success in attracting cloud-skeptical businesses and performance-sensitive workloads that previously may not have been considered ideal for cloud deployment.
Virtualization – The Value and the Cost
Server virtualization is well established in enterprise data centers and in hosting and cloud centers. More than half of businesses utilize server virtualization, according to the 2013 Stratecast | Frost & Sullivan Cloud User Survey.
Virtualization separates the logical from the physical components of the workload. Application code and associated operating system are packaged neatly into a virtual machine (VM). Multiple VMs, regardless of operating system, can share a physical server; a hypervisor installed on the server allocates resources and acts as a translator, making each VM believe it has full access to the server resources.
The virtualized workload is self-contained and highly portable. Like a turtle or a motor home, it carries all it needs on its back—operating system and application code—and isn’t fussy about where it sets up housekeeping. Thus, IT technicians do not have to custom-configure a server exoskeleton for a virtualized workload.
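The packaging and placement described above can be sketched in a toy model. All class names, operating systems, and capacity figures below are illustrative assumptions for the sake of the example, not any vendor's actual API:

```python
# Toy model of the virtualization concept: several self-contained VMs,
# each carrying its own OS and application, share one physical server,
# and a hypervisor allocates resources among them.

class VirtualMachine:
    """A self-contained workload: operating system plus application code."""
    def __init__(self, name, os, vcpus, ram_gb):
        self.name, self.os = name, os
        self.vcpus, self.ram_gb = vcpus, ram_gb

class Hypervisor:
    """Places VMs on a physical server and tracks remaining capacity."""
    def __init__(self, cpus, ram_gb):
        self.free_cpus, self.free_ram_gb = cpus, ram_gb
        self.vms = []

    def place(self, vm):
        # Mixed operating systems can coexist: the hypervisor mediates
        # all hardware access, so each VM behaves as if it owned the host.
        if vm.vcpus <= self.free_cpus and vm.ram_gb <= self.free_ram_gb:
            self.free_cpus -= vm.vcpus
            self.free_ram_gb -= vm.ram_gb
            self.vms.append(vm)
            return True
        return False  # no capacity left; the portable VM can move elsewhere

host = Hypervisor(cpus=16, ram_gb=64)
host.place(VirtualMachine("web", "Linux", vcpus=4, ram_gb=8))
host.place(VirtualMachine("erp", "Windows", vcpus=8, ram_gb=32))
print(host.free_cpus, host.free_ram_gb)  # 4 24
```

Because the VM is just data to the hypervisor, "moving" a workload is a matter of placing it on a different host with spare capacity, which is what makes the portability described above possible.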
As such, virtualization is associated with infrastructure conservation and flexibility. Top benefits of virtualization include:
• Deferral of capital expenses: By accommodating multiple virtualized workloads per physical server, virtualization optimizes server utilization, and reduces the need for additional servers or expanded floorspace.
• Faster time to deploy workloads: In a virtualized environment, VMs can be tested, deployed, spun down, and moved via a management console, without requiring on-site technicians to perform labor-intensive tasks to configure the servers. This rapid deployment reduces operating costs and decreases time to provision servers.
• Support for high availability environments: In a virtualized server environment, routine hardware maintenance or unexpected interruptions do not need to shut down applications. Because VMs are portable, they can be moved to another server, in house or outside, that has spare capacity.
The conclusion from these generalized benefits is that virtualization technologies offer the greatest value to the infrastructure owner. By optimizing hardware utilization, deferring costs, and allowing for flexibility, virtualization lets infrastructure be managed more efficiently and cost-effectively.
But in a cloud environment, infrastructure responsibility falls to the cloud service provider, so those benefits of virtualization do not automatically accrue to the customer. The enterprise customer can benefit indirectly from virtualization if the provider chooses to pass on cost savings in the form of lower rates, for example. Nonetheless, in terms of end-user experience or application performance, a virtualized workload offers no advantage over a non-virtualized one.
In fact, virtualization comes at a cost to the user. For some workloads, virtualization can offer infrastructure efficiency for the cloud service provider, at the cost of diminished performance for the customer. Primary sources of concern are “noisy neighbor syndrome” and the “hypervisor tax.”
As noted, virtualization is an excellent way to optimize use of server capacity. By loading multiple virtualized workloads on a shared physical server, overall resource utilization improves. However, the different applications all contend for the same processor and memory resources, which inevitably brings the risk that computing resources will not be available at the capacity level, and at the instant, they are needed. For many applications, the risk may be minimal; if an internal intranet page occasionally loads slowly, for example, employees will not go elsewhere. Moreover, the performance impact is likely to be sporadic and unpredictable, occurring only when multiple applications attempt to access the shared resources simultaneously. For latency-sensitive applications such as e-commerce, gaming, and streaming media, however, any delay can be intolerable.
In a private data center, the enterprise can control the risks of resource contention by deciding how VMs are assigned across available physical servers, monitoring and balancing loads as needed. That level of control is not possible for customers of a shared cloud, as only the provider has visibility across the entire multi-tenant environment. In a shared cloud environment, customers have little control over where their VMs are loaded and which other customers' workloads are sharing the processor. Furthermore, like an airline overbooking flights to ensure full planes, the cloud service provider has an incentive to "oversubscribe" each physical server: the greater the resource utilization, the more customers can be served at a lower cost per customer.
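The oversubscription incentive is simple arithmetic. The server costs and ratios below are assumptions chosen for illustration, not measured provider data:

```python
# Illustrative economics of oversubscription: at ratio R, the provider
# sells R virtual CPUs for every physical core, spreading a fixed server
# cost across more customers. Higher ratios also raise the odds that
# "noisy neighbors" contend for the same physical resources.

physical_cores = 32          # cores on one physical server (assumed)
monthly_server_cost = 800.0  # provider's cost to run that server, USD (assumed)
vcpus_per_customer = 4       # size of each customer's VM (assumed)

for oversub_ratio in (1.0, 2.0, 4.0):
    customers = int(physical_cores * oversub_ratio / vcpus_per_customer)
    cost_per_customer = monthly_server_cost / customers
    print(f"{oversub_ratio:.0f}x oversubscription: {customers} customers, "
          f"${cost_per_customer:.2f}/customer/month")
```

At 4x oversubscription the per-customer cost falls to a quarter of the 1x figure, which is precisely why a shared-cloud provider, like the overbooking airline, is tempted to fill every seat and then some.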
For customers eager to avoid the “noisy neighbor” risk, many providers offer a hosted private cloud or virtualized private cloud option. In these services, the server hardware and, perhaps, other infrastructure components are dedicated to a single enterprise. Thus, the virtualized workloads that share physical server resources all belong to the same enterprise, giving the enterprise some control over capacity utilization.
Even if there are no strangers sharing the facility (in a dedicated or private cloud environment, for example), virtualization exacts a toll on available capacity. The "hypervisor tax" is the share of processing capacity consumed by the hypervisor layer itself. While virtualization vendors have worked to make their hypervisor software as thin as possible, a hypervisor can still consume a meaningful share of a server's available capacity. For high-performance workloads that require large amounts of capacity, the tax can be significant, even degrading application performance.
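The effect of the hypervisor tax can be seen in a back-of-the-envelope calculation. The overhead percentages below are illustrative assumptions, not benchmark results:

```python
# Whatever share of a server the hypervisor consumes is unavailable to
# workloads; a bare metal server hands all of its capacity to the
# application. Overhead figures are assumed for illustration only.

server_cores = 48  # cores on the physical server (assumed)

for overhead in (0.05, 0.10):
    usable = server_cores * (1 - overhead)
    print(f"{overhead:.0%} hypervisor overhead -> {usable:.1f} of "
          f"{server_cores} cores left for workloads")

# On bare metal the same server offers all 48 cores to the workload.
```

For a workload sized to consume nearly the whole server, even a few percent of overhead translates into cores the application cannot use, which is the trade-off the bare metal model removes.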
In addition, as with every additional software layer, the hypervisor layer subjects data to delay; minuscule amounts, to be sure, but noticeable for latency-sensitive workloads.
Thus, enterprises are faced with trade-offs in running their high-capacity or high-performance workloads in the cloud; that is, trade optimal performance for the efficiency and low cost structure of the virtualized cloud, or trade efficiency and low cost for high performance in a dedicated hosting environment.
But suppose enterprises had the choice of a low-cost, scalable, easily managed hosting option without virtualization? This is the operating principle behind the bare metal cloud.