The Public Cloud
In an on-premises data center, infrastructure is generally finite in scale and fixed in cost. By the time a new physical host hits the floor, the capital has been spent and has taken its hit on the business’s bottom line. Thus, the desired state in an on-premises environment is to assure workload performance and maximize utilization of the sunk cost of capital infrastructure. In the public cloud, however, infrastructure is effectively infinite. Resources are paid for as they are consumed—usually from an operating expenses budget rather than a capital budget.
The underlying market abstraction in Workload Optimizer is extremely flexible, and it can easily adjust to optimize for the emphasis on operating expenses. In the public cloud, the desired state is to ensure workload performance and minimize spending. This is a subtle but key distinction: minimizing spending in the public cloud does not always mean placing a workload in the cloud instance type that perfectly matches its requirements for CPU, memory, storage, and so on; instead, it means placing that workload in the instance type that results in the lowest possible cost while still ensuring performance.
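That selection logic can be sketched as a two-step filter: keep only the instance types that satisfy the workload's requirements, then take the cheapest survivor. The following Python sketch illustrates the idea with a made-up catalog; the instance names and prices are assumptions for illustration, not real AWS or Azure offerings.

```python
# Hypothetical instance catalog; names and prices are illustrative only.
CATALOG = [
    {"name": "gp.small",  "vcpu": 2, "mem_gb": 8,  "usd_per_hr": 0.10},
    {"name": "gp.medium", "vcpu": 4, "mem_gb": 16, "usd_per_hr": 0.19},
    {"name": "gp.large",  "vcpu": 8, "mem_gb": 32, "usd_per_hr": 0.40},
    {"name": "mem.large", "vcpu": 8, "mem_gb": 64, "usd_per_hr": 0.55},
]

def cheapest_fit(vcpu_needed, mem_needed_gb, catalog=CATALOG):
    """Return the lowest-cost instance that still satisfies the workload's
    requirements: performance first, then minimum spend."""
    candidates = [i for i in catalog
                  if i["vcpu"] >= vcpu_needed and i["mem_gb"] >= mem_needed_gb]
    if not candidates:
        return None  # nothing in the catalog can ensure performance
    return min(candidates, key=lambda i: i["usd_per_hr"])
```

Note that a workload needing 3 vCPUs and 20 GB of memory does not land on a "perfect match" here; it lands on the cheapest instance that is big enough, even though that instance has headroom to spare.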
On-Demand Versus Reserved Instances
The public cloud’s vast array of instance sizes and types offers endless choices for cloud administrators, all with slightly different resource profiles and costs. There are hundreds of different instance options in AWS and Azure, and new options and pricing are emerging almost daily. To further complicate matters, administrators have the option of consuming instances in an on-demand fashion—that is, in a pay-as-you-use model—or via reserved instances (RIs) that are paid for in advance for a specified term (usually a year or more). RIs can be incredibly attractive as they are typically heavily discounted compared to their on-demand counterparts, but they are not without pitfalls.
The fundamental challenge of consuming RIs is that public cloud customers pay for the RIs whether they use them or not. In this respect, RIs are more like the sunk cost of a physical server on premises than like the ongoing cost of an on-demand cloud instance. You can think of on-demand instances as being well suited for temporary or highly variable workloads—analogous to city dwellers taking taxis, which is usually cost-effective for short trips. RIs are akin to leasing a car, which is often the right economic choice for longer-term, more predictable usage patterns (such as commuting an hour to work each day). As the consumption model changes, the flexibility of Workload Optimizer's underlying economic abstraction is up to the challenge.
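The taxi-versus-lease trade-off ultimately reduces to a break-even calculation. The sketch below is a deliberate simplification (a flat on-demand hourly rate and a single one-year, all-upfront RI discount; real RI pricing varies by term length, payment option, and region), but it captures why RIs only pay off above a certain level of utilization.

```python
HOURS_PER_YEAR = 8760

def annual_cost_on_demand(hourly_rate, hours_used):
    """Pay only for the hours actually consumed (the 'taxi' model)."""
    return hourly_rate * hours_used

def annual_cost_reserved(hourly_rate, discount):
    """A one-year RI is paid for whether used or not (the 'lease' model);
    modeled here as a full year at a discounted rate."""
    return hourly_rate * HOURS_PER_YEAR * (1 - discount)

def break_even_hours(hourly_rate, discount):
    """Hours of use per year above which the RI becomes the cheaper choice."""
    return annual_cost_reserved(hourly_rate, discount) / hourly_rate
```

With an assumed 40% RI discount, for example, the break-even point is 8760 × 0.6 = 5256 hours per year; a workload running fewer hours than that is cheaper on demand, while a steady 24×7 workload clearly favors the RI.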
When faced with myriad instance options, cloud administrators are generally forced down one of two paths: purchase RIs only for workloads that are deemed static and consume on-demand instances for everything else (hoping, of course, that static workloads really do remain that way), or choose a handful of RI instance types (for example, small, medium, and large) and shoehorn all workloads into the closest fit. Both methods leave a lot to be desired. In the first case, it’s not at all uncommon for the demand of a supposedly static workload to change over a year (or more) as new end users are added or new functionality comes online. In such cases, the workload needs to be relocated to a new instance type, and the administrator is left with an empty hole to fill in the form of the old, already paid-for RI (see the examples in Figure 9-11).
Figure 9-11 Fluctuating demand creates complexity with RI consumption
What should be done with that hole? What’s the best workload to move into it? Keep in mind that if that workload is coming from its own RI, the problem simply cascades downstream. The unpredictability and inefficiency of such headaches often negate the potential cost savings of RIs.
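One way to frame the empty-hole question: the RI is already paid for (a sunk cost), so to a first approximation the best candidate to move into it is the fitting workload whose current on-demand spend is highest. The Python sketch below shows that greedy, one-step heuristic; the field names are assumptions, and, as noted above, a one-step view ignores the cascading holes that a real analysis must track.

```python
def best_workload_for_vacated_ri(ri_vcpu, ri_mem_gb, on_demand_workloads):
    """Greedy heuristic: the RI is a sunk cost, so moving in the fitting
    workload with the highest current on-demand spend recovers the most
    money. Ignores downstream holes created if that workload leaves an RI."""
    fits = [w for w in on_demand_workloads
            if w["vcpu"] <= ri_vcpu and w["mem_gb"] <= ri_mem_gb]
    if not fits:
        return None  # the hole stays empty; its cost is simply lost
    return max(fits, key=lambda w: w["usd_per_hr"])
```

Even this toy version hints at why hand-managed RI inventories go wrong: every placement decision changes the candidate set for the next one, and the inputs (demand, pricing, inventory) shift continuously.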
In the second scenario, limiting the RI choices almost by definition means mismatching workloads to instance types, negatively affecting either workload performance or cost savings or both. In either case, human beings, even with complicated spreadsheets and scripts, will invariably get the answer wrong because the scale of the problem is too large, and everything keeps changing all the time—so the analysis done last week or even yesterday is likely to be invalid today.
Thankfully, Workload Optimizer understands both on-demand instances and RIs in detail through its direct API target integrations. Workload Optimizer constantly receives real-time data on consumption, pricing, and instance options from cloud providers, and it combines this data with knowledge of applicable customer-specific pricing and enterprise agreements to determine the best actions available at any given point in time (see Figure 9-12).
Figure 9-12 A pending action to purchase additional RI capacity in Azure
Not only does Workload Optimizer understand current and historical workload requirements and an organization’s current RI inventory, but it can also intelligently recommend the optimal consumption of existing RI inventory and recommend additional RI purchases to minimize future spending. Continuing with the previous car analogy, in addition to knowing whether it’s better to pay for a taxi or lease a car in any given circumstance, Workload Optimizer can even suggest a car lease (RI purchase) that can be used as a vehicle for ride sharing (that is, fluidly moving on-demand workloads in and out of a given RI to achieve the lowest possible cost while still ensuring performance).
Public Cloud Migrations
Finally, because Workload Optimizer understands both the on-premises and public cloud environments, it can bridge the gap between them. As noted in the previous section, the process of moving VM workloads to the public cloud can be simulated with a plan and the selection of specific VMs or VM groups to generate the optimal purchase actions required to run the workloads (see Figure 9-13).
Figure 9-13 Results of a cloud migration plan
The plan results offer two options: Lift & Shift and Optimized. The Lift & Shift column shows the recommended instances to buy and their costs, assuming no changes to the size of the existing VMs. The Optimized column allows for VM right-sizing in the process of moving to the cloud, which often results in a lower overall cost if current VMs are oversized relative to their workload needs. Software licensing (for example, bring your own versus buy from the cloud) and RI profile customizations are also available to further fine-tune the plan results.
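The difference between the two columns can be sketched as two cost projections over the same set of VMs: one priced at their current allocations, one priced after right-sizing to observed peak demand. The pricing function and VM fields below are assumptions for illustration, not real cloud rates or the plan's actual algorithm.

```python
# Illustrative linear pricing model; the rate constants are assumptions,
# not real cloud prices.
def hourly_price(vcpu, mem_gb):
    return 0.03 * vcpu + 0.005 * mem_gb

def plan_costs(vms):
    """Compare a Lift & Shift plan (keep current allocations) with an
    Optimized plan (right-size to observed peak demand before migrating)."""
    lift_and_shift = sum(hourly_price(v["alloc_vcpu"], v["alloc_mem_gb"])
                         for v in vms)
    optimized = sum(hourly_price(v["peak_vcpu"], v["peak_mem_gb"])
                    for v in vms)
    return lift_and_shift, optimized
```

For a VM allocated 8 vCPUs and 32 GB but peaking at 2 vCPUs and 8 GB, the Optimized projection is a quarter of the Lift & Shift cost under this toy model, which is the intuition behind right-sizing oversized VMs on the way to the cloud.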
Workload Optimizer’s unique ability to apply the same market abstraction and analysis to both on-premises and public cloud workloads in real time enables it to add value far beyond any cloud-specific or hypervisor-specific point-in-time tools that may be available. Besides being multivendor, multicloud, and real time by design, Workload Optimizer does not force administrators to choose between performance assurance and cost/resource optimization. In the modern application resource management paradigm of Workload Optimizer, performance assurance and cost/resource optimization are blended aspects of the desired state.