Bernard Golden is a leading expert in cloud computing and was named one of the top cloud influencers and thought leaders by Wired.com. He is currently the Vice President of Strategy at ActiveState and is the cloud computing advisor for CIO Magazine’s highly revered and widely read blog. He has authored or coauthored four influential books on virtualization and cloud computing, including Virtualization for Dummies and Amazon Web Services for Dummies, and has keynoted and presented at cloud computing events and conferences across the globe.
Cloudyn, a cloud monitoring leader, invited the respected cloud blogger Ofir Nachmani to interview Bernard Golden. In their session, they dove deeply into the current state of the cloud and the challenges that cloud consumers face within their growing cloud environments. We invite you to enjoy their stimulating and insightful discussion.
Ofir Nachmani: “I’ve had the pleasure of knowing Bernard for some time now, and his insights have influenced me greatly over the years. He always provides a deep analysis of the cloud market and its trends. I was thrilled that Cloudyn invited me to conduct this discussion with a leading worldwide cloud evangelist. The interview mainly focused on cloud consumers’ level of experience and the need to maintain control and transparency within the cloud. We also touched on the maturity of the hybrid cloud and covered the RI pricing model changes that were recently announced by Amazon.”
Cloud Transparency and Maturity
Ofir Nachmani: Three or four years ago, the market was still immature. Cloud customers’ environments were small and enterprises weren’t in the picture yet. In your opinion, what changes have occurred over the past five years regarding the need for cloud transparency in cost optimization and security, and the need for the emergence of cloud cost optimization services like Cloudyn?
Bernard Golden: I think that we’re in the period of developing this maturity. The companies that have done significant work with the public cloud need the ability to track what they’re running, assign it to whoever is responsible, make sure they know what they’ve got, and understand the costs involved. This is often driven by the parts of the organization that are using the cloud significantly within the general IT sphere. These tend to be certain segments of the organization, often those leading digital initiatives, and so forth.
I think that the general awareness that transparency needs to extend across all infrastructures, not just public cloud infrastructures, is still developing, because enterprise IT comes from a legacy mindset and set of processes: it’s a cost center, a capital investment, and there isn’t a direct tie to business offerings. Let me give you an example. If you’re in the marketing department of your company, and your goal is to drive awareness of your products and services and to generate leads, the cost per lead is the most critical thing to consider. It would be a fantastic scenario if the lifetime value of a customer is a thousand dollars and the cost of the lead is two hundred. However, if the lead cost is fifteen hundred dollars, it wouldn’t be very beneficial. Ultimately, the ability to understand the marginal cost per unit is extremely important. If IT stays stuck in big capital investments and lumpy allocation, rather than transactional cost transparency, the ability to understand marginal costs gets bypassed.
ON: There are two things that come to mind after hearing the points you’ve made. One is that transparency does not yet exist across all organizations, but only in dispersed, local IT pockets. There still isn’t a single underlying tie among them that is managed by IT. I’d also like to highlight that the link between cloud costs and actual business benefits still appears to be broken. Or, continuing with your example, between cloud expenses and their impact on lead costs.
BG: I agree. I would say that it’s still not fully developed or connected. It doesn’t yet capture all of the underlying costs, but it does need to go in that direction.
There’s a very interesting organization called the Technology Business Management Council. I went to their first conference just over a year ago, and it was clear that most of the organizations that attended were just getting started with their cost tracking and assignment. Enterprises need to understand the true cost of service delivery of, for example, a generated lead or even a mailbox.
AWS’ Reserved Instances (RIs)
ON: It’s interesting to observe the evolution of Amazon’s payment model after the recent announcement of its new RI model. The RI model began by charging an initial upfront cost, which meant that capital expenses were involved. However, following AWS’ latest announcement, an upfront payment is no longer required; they only require customers’ commitment. What conclusions did you draw from this, and do you think it represents a real change?
BG: The way that I interpret this is that their previous RIs were great, but confusing. I think that it was really hard for organizations to evaluate what all of the options were, let alone which choice was the right one for them. An analogy that I draw is that cloud computing is like the airline industry. An airline is a very capital-intensive business, but the service is sold by the seat, and utilization management is absolutely critical. Amazon is starting to figure out the right way to adjust the knobs to ensure that they have a consistent level of utilization that covers their own costs. Part of what they’re dealing with is how they can set their pricing in such a way that their users can understand it. It doesn’t do much good to have a really sophisticated yield management system if it’s unclear to those it’s aimed at. It’d be as if an airline offered half of a seat on Tuesday and half of a seat on Wednesday, and required payment upfront. People wouldn’t understand why, and they’d take the train instead. Similarly, in Amazon’s case, people would be reluctant if they didn’t know what would properly match their needs, in addition to having to pay upfront. I think that Amazon simplified things, reducing people’s reluctance, all in the effort to continue to drive their data centers’ utilization to the exact levels they want. This is very important because they’re investing huge amounts, and they need to be confident that they can bring those data centers up to speed and drive high utilization as soon as possible.
ON: I would just like to add that Amazon can grow more safely by gaining commitment from their customers. They’ll be able to better forecast the capacity and utilization they need for a given number of years and ensure their own optimization levels.
BG: That’s a really good point. That gives them visibility for their upcoming investments and helps them with their demand forecast.
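The trade-off discussed above can be made concrete with a back-of-the-envelope comparison. The sketch below uses entirely hypothetical rates and a hypothetical 30% commitment discount (actual AWS RI pricing varies by instance type, region, and term) to show what a no-upfront RI trades: no capital outlay, but a committed rate paid for every hour of the term, so it only pays off above a break-even utilization level.

```python
# Hypothetical rates for illustration only -- not actual AWS pricing.
HOURS_PER_YEAR = 8760

def on_demand_cost(hourly_rate, hours_used):
    """Pay-as-you-go: cost scales directly with hours actually used."""
    return hourly_rate * hours_used

def no_upfront_ri_cost(committed_hourly_rate, term_hours):
    """No-upfront RI: no capital expense, but the committed rate is
    paid for every hour of the term, whether the instance runs or not."""
    return committed_hourly_rate * term_hours

od_rate = 0.10               # $/hour on-demand (hypothetical)
ri_rate = od_rate * 0.70     # assume a 30% one-year commitment discount

# An instance running the full year: the RI clearly wins.
full_year_od = on_demand_cost(od_rate, HOURS_PER_YEAR)
full_year_ri = no_upfront_ri_cost(ri_rate, HOURS_PER_YEAR)

# Break-even utilization: below this fraction of the year,
# on-demand is the cheaper option despite the higher hourly rate.
break_even = ri_rate / od_rate

print(f"On-demand, full year:   ${full_year_od:.2f}")
print(f"No-upfront RI, 1 year:  ${full_year_ri:.2f}")
print(f"Break-even utilization: {break_even:.0%}")
```

Under these illustrative numbers, the commitment saves money only if the workload runs more than about 70% of the hours in the term, which is exactly the forecasting question both sides of the deal care about.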
The Hybrid Cloud
ON: Is there already a significant hybrid cloud or is it just evolving? And if the so-called hybrid cloud is already there, do enterprises really know where their workloads need to be at any given time, inside or outside? Do enterprises have sufficient transparency in order to know what’s going on in their hybrid cloud environment?
BG: I think that there are a number of parts to your question. First, is there a reason to use a multiple-deployment environment? That’s the impulse of IT leaders today, and I think that the answer is yes. The impulse towards wanting to have choices is logical for many different reasons, including cost, regulatory needs, convenience, and network latency. I would say that it’s important to have transparency among all of these in order to make comparisons. One of the axes of comparison is the cost advantage of running a multiple-deployment environment. I think that the vision and impulse is challenged because most organizations consider on-premises to be one of their desired options, and that’s moving much more slowly than most organizations would expect, or like.
I was at the OpenStack summit in Paris this year, where they presented market traction statistics. It’s obvious that the majority of organizations have selected OpenStack as their chosen on-premises option, as opposed to CloudStack or VMware. OpenStack acts as the vehicle for the on-premises cloud, but it’s clear that, for most organizations, OpenStack implementations are still at what I would refer to as the “experimentation” or “validation” stage. They are on the order of fifty nodes or fewer, which is not what I would consider to be a truly significant computing infrastructure environment, but rather, more of an experiment environment. What’s compelling is that the market has shown a lot of interest in addressing the impulse of wanting a hybrid cloud in order to deploy workloads to multiple environments. These organizations want to build their own private cloud with OpenStack, and it’s taking a lot longer than expected. When organizations work with us at ActiveState, they get that workload deployment option, but at a higher level. They can deploy applications, like ours, directly onto vSphere, and they don’t need to have an infrastructure service layer. We have customers use both on-premises vSphere and off-premises Amazon Web Services or a hosted OpenStack from HP, or whatever cloud it might be. The benefit is the ability to choose where to put workloads without doing any of the heavy lifting that implementing a private cloud entails.
Cloudyn’s industry-award-winning SaaS solution delivers unprecedented insights into usage, performance, and cost, coupled with custom prescriptive actions for enhancing performance and reducing cloud spend. It helps maximize ROI on AWS, Google Cloud, and OpenStack deployments with tools for cloud monitoring, comparison, and optimization. We invite you to learn more about how Cloudyn supports cloud consumers with cloud transparency and cost optimization.