The Software-Defined Data Center (SDDC) Private Cloud Economy

Mar 21 2016 | by Yoav Mor


Generally speaking, traditional data centers are custom built, manually managed, and lack uniformity in their physical resources, which makes it difficult to forecast and plan for future requirements. These factors push organizations to over-purchase resources simply to stay on "the safe side."

Although enterprise IT leaders are aware of the need to increase data center efficiency, past studies have found that the average CPU utilization of typical on-premises physical servers is low; a McKinsey & Company research study estimated an average server utilization rate of just 6% to 12%. Moreover, the separation between the typical data center subsystems (compute, storage, and network) has left data center environments rigid and hard to scale: overall scalability ends up capped by the least scalable subsystem, generally storage or network. In addition, traditional on-premises data centers suffer from high management and maintenance costs, including dedicated experts for each subsystem.

The introduction of the cloud, and of the private cloud in particular, has produced solutions that mitigate many of these issues. In this article, we will cover a few of these technologies.

The Private Cloud Enablers

There is no doubt that virtualization is fundamental to the cloud, so it is no surprise that server virtualization is the first and most prominent technology used to enable private clouds. Below are additional technologies that have taken virtualization to the next level.

Converged Infrastructure – Converged, software-defined infrastructure is still in its early stages. It pools multiple resources (i.e., physical servers) into a single pool that the system manages automatically, instead of leaving each resource to be managed by hand. The next step is hyper-converged infrastructure, a software-centric architecture that tightly integrates the compute, storage, and network subsystems into a commodity hardware box. This new spin on data center infrastructure allows the three subsystems to interact better with one another; for example, the storage subsystem can communicate independently with the compute subsystem, which helps the system automatically allocate idle resources to overcome node failures (a minimal sketch of this idea follows).
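To make the failover idea concrete, here is a purely illustrative Python sketch of a pooled scheduler that redistributes a failed node's workload across the least-utilized healthy nodes. All node names, capacities, and the greedy strategy are hypothetical assumptions, not a description of any specific product.

```python
# Hypothetical sketch: a pooled scheduler that moves a failed node's
# workload onto the healthy nodes with the most free capacity.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    capacity: float        # total compute units
    used: float = 0.0      # compute units currently in use
    healthy: bool = True

    @property
    def free(self) -> float:
        return self.capacity - self.used

def reassign_on_failure(nodes: list[Node], failed: Node) -> None:
    """Greedily spread the failed node's load across healthy nodes."""
    failed.healthy = False
    load, failed.used = failed.used, 0.0
    for node in sorted(nodes, key=lambda n: n.free, reverse=True):
        if load <= 0:
            break
        if not node.healthy:
            continue
        take = min(node.free, load)
        node.used += take
        load -= take
    if load > 0:
        raise RuntimeError(f"Pool lacks capacity for {load} units")

pool = [Node("node-a", 100, 60), Node("node-b", 100, 30), Node("node-c", 100, 50)]
reassign_on_failure(pool, pool[0])
print([(n.name, n.used) for n in pool])
# [('node-a', 0.0), ('node-b', 90), ('node-c', 50)]
```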

Comprehensive Networking – Modern data centers embrace new network designs of the kind pioneered by Google, Facebook (check out the Facebook Fabric), and other cloud-native hyperscale giants. Modern architectures such as the Core and Pod network design alleviate network bandwidth bottlenecks, ultimately letting you unify your resources into a single facility that relies on the network layer.
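As a rough illustration of the bandwidth bottleneck these designs target, the following sketch computes a top-of-rack oversubscription ratio. Every port count and link speed here is a made-up figure, not taken from any specific fabric; pod-style designs aim to keep this ratio low.

```python
# Illustrative only: oversubscription ratio at a top-of-rack switch.
servers_per_rack = 40
downlink_gbps = 10           # per-server link into the rack switch
uplinks = 4
uplink_gbps = 40             # per uplink toward the core/spine

down = servers_per_rack * downlink_gbps   # 400 Gbps server-facing
up = uplinks * uplink_gbps                # 160 Gbps toward the core
print(f"Oversubscription: {down / up:.1f}:1")  # 2.5:1
```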

Distributed Storage – In contrast to traditional external appliances, storage in the modern data center is distributed across the compute nodes, which has a direct impact on costs. Physical SAN devices can cost hundreds of thousands of dollars, if not millions, not to mention the constant upkeep of managing the required capacity. Distributed storage runs on commodity hardware and integrates naturally with the compute subsystem, eliminating traditional storage bottlenecks. Modern software-defined storage systems provide high availability by automatically replicating data across the data center's nodes.
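As a back-of-the-envelope illustration with hypothetical figures, usable capacity in a replicated distributed store is roughly the raw capacity divided by the replica factor (a 3x replica factor is a common default):

```python
# Illustrative capacity math for a replicated distributed store.
nodes = 12
disk_per_node_tb = 8
replica_factor = 3           # common default; assumption for this example

raw_tb = nodes * disk_per_node_tb
usable_tb = raw_tb / replica_factor
print(f"Raw: {raw_tb} TB, usable at {replica_factor}x replication: {usable_tb:.0f} TB")
# Raw: 96 TB, usable at 3x replication: 32 TB
```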

Provisioning and Management – On top of the subsystems' smart software-defined solutions, resource provisioning and management deserve attention. SDDC private cloud solutions expose APIs that enable IT and R&D teams to provision resources through online consoles and CLI tools. This self-service provisioning, together with built-in availability and security features, reduces the need for traditional infrastructure experts.
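As one example of such an API, a private cloud built on OpenStack (one common SDDC platform, used here as an assumption since the article names no specific product) can be driven from Python with the official openstacksdk. The cloud, image, flavor, and network names below are placeholders for whatever your environment defines in clouds.yaml.

```python
# Hedged sketch: self-service server provisioning via openstacksdk.
import openstack

conn = openstack.connect(cloud="my-private-cloud")  # placeholder clouds.yaml entry

image = conn.compute.find_image("ubuntu-20.04")     # placeholder image name
flavor = conn.compute.find_flavor("m1.small")       # placeholder flavor name
network = conn.network.find_network("internal-net") # placeholder network name

server = conn.compute.create_server(
    name="dev-box-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(f"Provisioned {server.name} at {server.access_ipv4}")
```

A CI/CD pipeline can run the same few lines, which is exactly what removes the manual ticket-driven provisioning step.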

The Economics

These enabling factors directly affect a data center’s total cost of ownership (TCO). When discussing IT economics, two areas should be covered – capital expenses (CAPEX) and operational expenses (OPEX).

One of the major CAPEX benefits of an SDDC is maximized physical utilization across data center nodes, which can eventually eliminate the underutilized, purpose-built silos of legacy data centers. An intelligent resource management layer can assign more workloads to each physical node and consequently raise utilization levels (obviously, resource-sharing systems must consider performance and security as well). The direct impact is reduced purchasing of physical capacity and the elimination of over-provisioning. According to research performed by IDC, CAPEX savings from a private cloud can reach 46%, purely from physical capacity cost savings!
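To see how utilization drives CAPEX, consider this illustrative arithmetic; all figures are hypothetical and not drawn from the IDC research:

```python
# Illustrative CAPEX arithmetic: servers needed for a fixed workload
# at different average utilization levels. All numbers are hypothetical.
import math

workload_units = 1000        # aggregate demand, arbitrary compute units
units_per_server = 10        # what one fully loaded server can carry
cost_per_server = 8000       # USD, hypothetical

for utilization in (0.10, 0.50):
    servers = math.ceil(workload_units / (units_per_server * utilization))
    print(f"{utilization:.0%} utilization -> {servers} servers, "
          f"${servers * cost_per_server:,} CAPEX")
# 10% utilization -> 1000 servers, $8,000,000 CAPEX
# 50% utilization -> 200 servers, $1,600,000 CAPEX
```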

Operational costs, on the other hand, are slightly trickier to calculate. Constructing an SDDC may require an initial investment, though it is not a given. Once the SDDC is in place, most IT operations go through the software layer, making the manual management and provisioning processes of legacy data centers redundant. This directly reduces the number of staff required to help enterprise end users provision and manage resources. With an API-based solution, users can employ continuous integration and continuous delivery tools to automate configuration and deployment. In turn, the commonly cited administrator-to-server ratio shifts from one to dozens toward one to hundreds or even thousands. According to the same IDC report mentioned earlier, a private cloud can increase IT staff productivity by 58%!
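The headcount impact of that ratio shift is easy to sketch. The ratios and salary below are hypothetical, chosen only to make the arithmetic visible:

```python
# Illustrative OPEX arithmetic: admin headcount at different
# administrator-to-server ratios. All figures are hypothetical.
import math

servers = 2000
fully_loaded_salary = 120_000  # USD/year, hypothetical

for label, servers_per_admin in (("legacy", 30), ("SDDC", 500)):
    admins = math.ceil(servers / servers_per_admin)
    print(f"{label}: {admins} admins, ${admins * fully_loaded_salary:,}/year")
# legacy: 67 admins, $8,040,000/year
# SDDC: 4 admins, $480,000/year
```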

Final Notes

The promise of the private cloud comes with challenges that need to be mentioned, beginning with how to define your organization's cloud cost structures and models. Because of the dynamic nature of this kind of environment, it is important to have full visibility into your management layer so you can answer even the most basic questions, such as "what is our current utilization?" and "which resources are idle?". In addition, if different departments within your organization use the private cloud, visibility and control will let you allocate costs and charge back usage, keeping your IT organization efficient and its finances healthy; a minimal sketch of such a chargeback follows.
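As a closing illustration, here is a minimal sketch that aggregates metered usage per department and prices it with flat rates. The usage records and rates are entirely invented for the example:

```python
# Hypothetical chargeback sketch: sum metered usage per department
# and price it with flat rates. Records and rates are invented.
from collections import defaultdict

RATES = {"vcpu_hours": 0.02, "gb_storage_hours": 0.0001}  # USD, hypothetical

usage_records = [
    {"dept": "R&D", "vcpu_hours": 12_000, "gb_storage_hours": 400_000},
    {"dept": "Finance", "vcpu_hours": 1_500, "gb_storage_hours": 90_000},
    {"dept": "R&D", "vcpu_hours": 3_000, "gb_storage_hours": 50_000},
]

bills = defaultdict(float)
for rec in usage_records:
    for metric, rate in RATES.items():
        bills[rec["dept"]] += rec[metric] * rate

for dept, amount in sorted(bills.items()):
    print(f"{dept}: ${amount:,.2f}")
# Finance: $39.00
# R&D: $345.00
```

Read part two of this series.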

