Last week, Microsoft announced the preview of Capacity Reservation for VMs. You can reserve VM capacity in your DR region to ensure that you have VM resources available to create or turn on your protected VMs using ASR. ASR by itself does not guarantee that your VMs can be turned on in your DR region in the event of a disaster, so capacity reservation is a welcome and much-needed feature. However, it also increases the cost of your solution, which now includes:
ASR-protected VM cost
Capacity reservation cost (the same as your actual VM cost)
Note: DR is not just the VMs; it includes other components as well. I did not detail those above because they apply to both options.
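To see the impact, here is a minimal sketch comparing the monthly DR cost with and without a capacity reservation. The prices are hypothetical placeholders, not real Azure rates; substitute your own figures from the Azure pricing calculator:

```python
# Hypothetical monthly prices; replace with real figures from the Azure pricing calculator.
VM_COMPUTE_COST = 140.0   # cost of one running VM in the DR region
ASR_LICENSE_COST = 25.0   # ASR protection fee per protected instance

def dr_monthly_cost(vm_count, with_capacity_reservation):
    """Rough monthly DR cost for `vm_count` protected VMs.

    A capacity reservation is billed at the same rate as the VM itself,
    so reserving capacity roughly doubles the compute portion of the bill.
    """
    cost = vm_count * ASR_LICENSE_COST           # ASR-protected VM cost
    if with_capacity_reservation:
        cost += vm_count * VM_COMPUTE_COST       # reservation billed like a running VM
    return cost

print(dr_monthly_cost(10, with_capacity_reservation=False))  # ASR only
print(dr_monthly_cost(10, with_capacity_reservation=True))   # ASR + reserved capacity
```

With these placeholder rates, reserving capacity for ten VMs raises the monthly bill from 250 to 1,650, which is the trade-off discussed above.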
Hmm… can we plan DR cost-effectively in Azure? Let's take a look:
This long-awaited feature is now in public preview. When an Azure region suffers a disaster, we need our VMs to be turned on in the secondary region, and this feature guarantees that recovery capacity. You may learn more about this feature at this URL.
As you already know, Azure Site Recovery does not 100% guarantee that your VMs will be turned on in the event of DR, so this feature addition helps you gain more confidence in your BCP.
Take a look at the Microsoft statement about the use cases for this feature:
Business-critical applications—use on-demand capacity reservations to protect your capacity, for example when taking these VMs offline to perform updates.
Disaster recovery—you now have the option to set aside compute capacity to ensure a seamless recovery in the event of a natural disaster. The compute capacity can be repurposed to run other workloads whenever DR is not in effect. The VM maintenance can be handled by keeping core images up to date without the need to deploy or maintain VMs outside of DR testing.
Special events—claiming capacity ahead of time provides assurance that your business can handle the extra demand.
I have been asking for this feature for a long time, and finally it is here. I am happy that Microsoft is listening to customers and partners. However, the first bullet point is a bit worrisome, as it implies that you are not guaranteed to get your VM back if it is taken offline for some time due to maintenance. Does Microsoft expect customers to buy reservations for all critical workloads? I hope they do not make things worse to that point.
Azure App Service supports connecting to an application (app) server and a database (DB) server hosted on an Azure VM or an on-premises server using the vNET integration that is part of the App Service Plan (ASP). You come across such scenarios often when migrating your application landscape from on-premises to Azure. You will have different situations: shared DBs, a DB team that is not yet ready to move DBs to PaaS, or no PaaS service available for databases like DB2 or Oracle. App Service vNET integration is a useful feature if you do not choose to go with App Service Environment (ASE).
When you configure vNET integration, your WebApp gets a private IP from the vNET. The App Service can then communicate with a VM in that vNET, in another vNET peered in the same region, or with an on-premises server over ExpressRoute or a site-to-site (S2S) VPN.
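As a quick sanity check while troubleshooting connectivity, you can verify that an address the app reports actually falls inside the integration subnet. This is a minimal sketch using Python's standard `ipaddress` module; the subnet range and sample addresses are made up for illustration:

```python
import ipaddress

# Hypothetical subnet delegated to the App Service Plan for vNET integration.
integration_subnet = ipaddress.ip_network("10.10.2.0/26")

def is_in_integration_subnet(addr: str) -> bool:
    """True if the given address was allocated from the integration subnet."""
    return ipaddress.ip_address(addr) in integration_subnet

print(is_in_integration_subnet("10.10.2.7"))    # a private IP handed to the WebApp
print(is_in_integration_subnet("20.44.16.5"))   # a public outbound address, not in the vNET
```

If the app is reaching the VM via a public address instead of one from the subnet, the integration is not being used for that traffic.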
We talked about Azure Reservations for VMs in my previous blog post. If you have not read it already, I suggest you do.
The VM RI discount applies only to the VM infrastructure cost; it does not include the disks or storage used. Azure Disk Storage reservations, combined with Azure reserved VM instances, help you reduce the total VM cost.
The common rule applies here as well: "use it or lose it".
Currently, Azure Disk Storage reservations are available only for selected Azure premium SSD SKUs. They do not apply to unmanaged disks, ultra disks, or page blob consumption.
The disk reservation is not based on capacity; it is based on the total number of disks per SKU. That means reservation consumption is counted in units of the disk SKU rather than the provisioned size. For example, you cannot use a P40 reservation for two P30 disks. It does not have the instance-size flexibility of VM reservations.
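A minimal sketch of how this matching behaves. The SKU names (P30, P40) are real premium SSD sizes, but the matching logic here is my own illustration of the rule, not Azure's billing code: a reservation for one SKU only offsets disks of that exact SKU.

```python
from collections import Counter

def covered_disks(reserved, deployed):
    """Count how many deployed disks each reservation actually covers.

    `reserved` and `deployed` map SKU name -> number of disks, e.g. {"P40": 1}.
    Matching is per exact SKU; there is no size flexibility, so a P40
    reservation covers zero P30 disks even though a P40 is larger.
    """
    reserved = Counter(reserved)
    deployed = Counter(deployed)
    return {sku: min(reserved[sku], deployed.get(sku, 0)) for sku in reserved}

# One P40 reserved, but two P30s deployed: nothing is covered.
print(covered_disks({"P40": 1}, {"P30": 2}))   # {'P40': 0}
# Reserve the SKU you actually deploy:
print(covered_disks({"P30": 2}, {"P30": 2}))   # {'P30': 2}
```

The practical takeaway: count your disks per SKU before buying, because an over-sized reservation is wasted under "use it or lose it".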
Most organizations today are keen on moving their workloads to the cloud for several reasons: their IT vision, reducing spend on hardware refreshes, data center consolidation, and so on.
Are they ready to move into the cloud? It is an important question that every organization should ask again and again before deciding to move in with a big bang. We see a trend among many customers of moving their existing legacy applications 'as is' to the cloud. Should we move into the cloud and utilize its benefits, or just move in without caring about those cloud features?
Let me start with an example. Take the case of four web servers and two clustered database servers available 24/7, with Dev, Test, and Prod environments, and suppose you want to move this workload to the cloud 'as is'. My question is: what objectives are you trying to achieve? If the answer is "our organization wants to move all workloads to the cloud for cost savings, to change from a CapEx to an OpEx model, and so on", then hold on. Let's calm down, think, look around, and plan again.
Lift and shift should not be our cloud migration strategy. We should make our applications live smartly in the cloud to utilize its benefits and reduce cost. Let's use the above example to explore this further:
Can we make this application horizontally scalable?
Can we make this application use cloud-native authentication?
Can we make this application stateless?
Can we make this application use distributed data storage?
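To make the stateless question concrete, here is a minimal sketch of the idea. A plain dict stands in for an external session store such as a distributed cache; the point is that no web server keeps session state locally, so any of the four instances can serve any request:

```python
# Stand-in for an external, shared session store (e.g. a distributed cache).
external_session_store = {}

class WebInstance:
    """One of the horizontally scaled web servers; it keeps no local session state."""
    def __init__(self, name, store):
        self.name = name
        self.store = store    # shared across instances, not per-instance memory

    def handle(self, session_id):
        # Read the session from the shared store, update it, write it back.
        count = self.store.get(session_id, 0) + 1
        self.store[session_id] = count
        return f"{self.name} served request {count} for {session_id}"

# Requests for the same user session can land on different instances.
a = WebInstance("web-a", external_session_store)
b = WebInstance("web-b", external_session_store)
print(a.handle("user-42"))   # web-a served request 1 for user-42
print(b.handle("user-42"))   # web-b served request 2 for user-42
```

Once the instances are interchangeable like this, horizontal scaling (adding or removing web servers behind a load balancer) becomes straightforward.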
Microsoft has now announced its long-pending Availability Zones in each region. The feature is currently in preview and recommended only for non-critical workloads, as Microsoft does not provide any SLA yet. With this, you can provision your workload across different data centers in the same region for resilience, since at GA you will have a minimum of three AZs to choose from in each region. For now, however, it is available only in East US 2 and West Europe for the preview.
AWS currently operates 44 AZs across 16 regions, with 14 more AZs already planned (44 + 14 = 58). Microsoft currently operates 36 regions, with 6 more to come. If you assume Microsoft brings a minimum of 3 AZs to each of these regions, Microsoft would have (36 + 6) × 3 = 126 AZs, which by itself is more than double AWS's footprint across the globe. I agree it does not make much sense to just play with the numbers, so Microsoft needs to bring services that help customers make use of availability zones and add value to their workloads hosted in Azure. AWS already offers multiple PaaS services with a Multi-AZ deployment model, so Microsoft still needs to do a good job of making sure more services are available for multi-AZ deployments.
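The back-of-envelope comparison above, written out as a sketch (the inputs are the figures quoted in this post, which reflect a point in time and will drift):

```python
# Figures as quoted above; these change as both providers expand.
aws_az_current, aws_az_planned = 44, 14
azure_regions_current, azure_regions_planned = 36, 6
min_az_per_region = 3   # assumed minimum AZs per Azure region at GA

aws_total = aws_az_current + aws_az_planned
azure_projected = (azure_regions_current + azure_regions_planned) * min_az_per_region

print(aws_total)                         # 58
print(azure_projected)                   # 126
print(azure_projected > 2 * aws_total)   # True: more than double
```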
It is not just the public cloud today; it is the hybrid cloud.
Microsoft is working on making our hybrid life less difficult by introducing Azure Stack. We all know the pain of getting Microsoft System Center integrated and working on-premises to enable a private cloud. Yes, I agree that System Center is not a fair candidate for comparison with Azure Stack. However, I believe Azure Stack will solve these issues and bring the cloud to your data center with a 'Pay as You Use' pricing model.
What is Azure Stack, as per Microsoft?
Microsoft Azure Stack is a hybrid cloud platform that lets you deliver Azure services from your organization’s datacenter. Bring the agility and fast-paced innovation of cloud computing to your on-premises environment with Azure Stack. This extension of Azure allows you to modernize your applications across hybrid cloud environments, balancing flexibility and control. Plus, developers can build applications using a consistent set of Azure services and DevOps processes and tools, then collaborate with operations to deploy to the location that best meets your business, technical and regulatory requirements.