We frequently ask questions to gather requirements and provide our designs and solutions. Many organizations standardize their questions for consistency among teams. Can chatbots handle these questions? Yes: AI services can be used to make decisions and trigger DevOps pipelines that deploy the desired design or service.
By using Infrastructure as Code (IaC), we can quickly deploy design templates, with AI assisting in choosing the appropriate design. This lets customers focus on workload migration instead of landing zone design. Not every customer is the same, of course; for example, a secure Azure VNet hub-and-spoke design can be deployed initially and improved upon while testing non-critical workloads in the cloud. We can move from a design workshop to a design selection workshop, where you help the customer pick one of the best available designs.
Another solution is deploying services on an existing subscription, where customers can quickly deploy VMs or PaaS services by answering a few questions posed by a chatbot, without waiting for a human response. This speeds up the deployment process and increases customer satisfaction.
In conclusion, by utilizing AI and chatbots, customers can avoid creating tickets and waiting for support: they can immediately provision resources by answering questions and confirming the AI's suggestions.
Last week, Microsoft announced the preview of Capacity Reservation for VMs. You can reserve VM capacity in your DR region to ensure that VM resources are available to create or turn on your protected VMs with ASR, since ASR does not guarantee that your VMs can be turned on in your DR region in the event of a disaster. Capacity reservation is therefore a welcome and much-needed feature. However, it also increases the cost of your solution, which now includes:
ASR protected VM cost
Capacity reservation cost (the same as your actual VM cost)
Note: DR involves more than just the VMs; it includes other components too. I have not detailed them above because they apply to both options.
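To make the cost impact concrete, here is a rough back-of-the-envelope sketch in Python. The per-VM price is a made-up placeholder, not a real Azure rate; use the Azure pricing calculator for real numbers.

```python
# Rough monthly DR compute cost sketch. The price is a hypothetical
# placeholder; use the Azure pricing calculator for real figures.

VM_COMPUTE_PER_MONTH = 150.0  # hypothetical pay-as-you-go compute cost of one VM


def dr_compute_cost(n_vms: int, with_capacity_reservation: bool) -> float:
    """Monthly DR-side compute cost for n replicated VMs (ASR licence,
    storage, and bandwidth excluded, as they apply to both options)."""
    # Without a reservation, DR-side VMs stay deallocated, so compute
    # cost is ~0, but capacity at failover time is not guaranteed.
    cost = 0.0
    if with_capacity_reservation:
        # An on-demand capacity reservation bills at the same rate as a
        # running VM of that SKU, whether or not the VM is turned on.
        cost += n_vms * VM_COMPUTE_PER_MONTH
    return cost


print(dr_compute_cost(10, with_capacity_reservation=False))  # 0.0
print(dr_compute_cost(10, with_capacity_reservation=True))   # 1500.0
```

With the reservation you effectively pay for the DR VMs as if they were running, which is the cost increase discussed above.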
Hmm… Can we plan DR cost-effectively in Azure? Let’s take a look:
The long-awaited feature is now in Public Preview. When an Azure region fails, we need our VMs to be turned on in the secondary region, and this feature guarantees the capacity for that recovery. You may learn about this feature at this URL.
As you already know, Azure Site Recovery does not 100% guarantee that your VMs will turn on in the event of DR. This feature addition helps you gain more confidence in your BCP.
Take a look at Microsoft's statement about the use cases for this feature.
Business-critical applications—use on-demand capacity reservations to protect your capacity, for example when taking these VMs offline to perform updates.
Disaster recovery—you now have the option to set aside compute capacity to ensure a seamless recovery in the event of a natural disaster. The compute capacity can be repurposed to run other workloads whenever DR is not in effect. The VM maintenance can be handled by keeping core images up to date without the need to deploy or maintain VMs outside of DR testing.
Special events—claiming capacity ahead of time provides assurance that your business can handle the extra demand.
I have been asking for this feature for a long time, and finally it is here. I am happy that Microsoft is listening to customers and partners. However, the first bullet point is a bit worrisome: it implies that you are not guaranteed to get your VM back if it is offline for some time due to maintenance. Does Microsoft expect customers to buy this for all critical workloads? I hope they do not make things worse to that point.
It is an interesting point to discuss. I am taking Azure as the example here, but it applies to other public clouds as well.
Azure Site Recovery is a great native tool that helps us enable disaster recovery (DR) by replicating VMs to another region in a few clicks. Microsoft lets you turn the VMs on during a disaster, or whenever you want to, which saves the running cost of VMs in the DR setup. However, can Microsoft turn all the VMs on in the secondary region if a region fails? How many of you have thought about that scenario?
My concerns around this grew last year during the early COVID-19 period, when utilization peaked to a new height. Many cases were reported of organizations unable to create new VMs because Microsoft data centers, including Azure regions, were running out of resources due to the sudden usage spike across the world. What would happen if thousands of customers in a region wanted to start their VMs in their secondary Azure region, resulting in hundreds of thousands of VMs starting on the same day?
What are the standards we should follow when we move on to Public Cloud?
We often get this question during public cloud conversations with different stakeholders. I would say ‘Change’ should be the standard. Ahh… What?
Be ready to accept a Change
Be ready to execute a Change.
Be ready to prepare for future Changes.
The change should start in people's minds. Developing a mindset that accepts change is the first step in the transformation journey. It is also the first barrier, because each of us is comfortable continuing with the existing system: change brings interruptions, and nobody likes interruptions.
Azure App Service supports connecting to an application (app) server or database (DB) server hosted in an Azure VM or on an on-premises server, using the VNet integration that is part of the App Service Plan (ASP). You encounter such scenarios often when migrating your application landscape to Azure from on-premises. You will see scenarios like shared DBs, a DB team that is not yet ready to move DBs to PaaS, or no PaaS DB service available for the likes of DB2 or Oracle. App Service VNet integration is a useful feature if you choose not to go with App Service Environment (ASE).
When you configure VNet integration, your web app gets a private IP from the VNet. The App Service can then communicate with a VM in that VNet, with another VNet peered in the same region, or with an on-premises server over ExpressRoute or a site-to-site VPN.
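As a sketch, regional VNet integration can also be configured through an ARM template. The web app name, VNet, subnet, and API version below are placeholders; verify them against the current Microsoft.Web ARM reference before use:

```json
{
  "type": "Microsoft.Web/sites/networkConfig",
  "apiVersion": "2021-02-01",
  "name": "mywebapp/virtualNetwork",
  "properties": {
    "subnetResourceId": "[resourceId('Microsoft.Network/virtualNetworks/subnets', 'my-vnet', 'integration-subnet')]",
    "swiftSupported": true
  }
}
```

The integration subnet must be delegated to App Service and cannot be shared with other resource types.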
Much awaited and badly wanted, but missing for a long time: it is now in Public Preview. Thanks, Microsoft, for adding it.
One of the main reasons I stayed away from recommending Azure Bastion to customers was this missing feature. I think it is time to change my mind and recommend Azure Bastion, as it now saves a lot of money: we are moving away from a per-VNet deployment model to per AAD tenant, or whatever the customer requires.
Let’s focus on Azure SQL Database reservations today. If you have not read my other blog post on VMs, please read it here. You can reduce SQL costs with Azure reserved capacity, which covers both Azure SQL Database and SQL Managed Instance. To buy reserved capacity you must be the owner of the subscription, or an EA admin for EA enrollments, or an admin agent or sales agent for CSP.
Importantly, a reservation covers only the compute charges of the instances in the subscription; it does not cover software, networking, or storage charges associated with the services. Keep the points below in mind when considering a reservation for Azure SQL Database or SQL Managed Instance. Combined with Azure Hybrid Benefit, it provides a good amount of cost saving.
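To illustrate how the reservation discount and Azure Hybrid Benefit stack on the compute charge, here is a sketch with made-up rates. None of these numbers are real Azure prices; they only show the order in which the savings apply.

```python
# Sketch: how reserved capacity and Azure Hybrid Benefit (AHB) stack.
# All rates below are hypothetical placeholders, not real Azure prices.

PAYG_VCORE_HOUR = 0.50       # hypothetical pay-as-you-go rate per vCore-hour
RESERVATION_DISCOUNT = 0.33  # hypothetical 3-year reservation discount
AHB_LICENCE_SHARE = 0.40     # hypothetical licence share of the compute rate


def effective_rate(reserved: bool, hybrid_benefit: bool) -> float:
    rate = PAYG_VCORE_HOUR
    if hybrid_benefit:
        # AHB removes the SQL Server licence portion of the compute rate.
        rate *= (1 - AHB_LICENCE_SHARE)
    if reserved:
        # The reservation discount applies to the remaining compute charge.
        rate *= (1 - RESERVATION_DISCOUNT)
    return round(rate, 4)


print(effective_rate(False, False))  # 0.5
print(effective_rate(True, True))    # 0.201
```

With these placeholder numbers, combining both benefits brings the effective compute rate to roughly 40% of pay-as-you-go.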
We talked about Azure Reservations for VMs in my previous blog post. If you have not read it already, I suggest you read it.
VM RI discounts apply only to the VM infrastructure cost; they do not include the disks or storage used. Azure Disk Storage reservations combined with Azure reserved VM instances help you reduce total VM cost.
The common rule applies here as well: “use it or lose it”.
Currently, Azure Disk Storage reservations are available only for selected Azure Premium SSD SKUs. They do not apply to unmanaged disks, Ultra Disks, or page blob consumption.
Disk reservations are not based on capacity; they are based on the total number of disks per SKU. That means reservation consumption is counted in units of a disk SKU rather than in provisioned size. For example, you cannot use a P40 reservation for two P30 disks. Disk reservations do not have the instance size flexibility that VM reservations offer.
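The per-SKU counting rule can be sketched as follows. The helper below is hypothetical, not an Azure API; it just models the "units of an exact SKU, no size flexibility" behaviour:

```python
# Sketch of the per-SKU rule: disk reservations are counted in units of an
# exact SKU, with no size flexibility. Hypothetical helper, not an Azure API.

def reservation_covers(reserved: dict, deployed: list) -> dict:
    """Return how many deployed disks each reservation covers.
    reserved: e.g. {"P40": 1} means one P40 disk reserved.
    deployed: list of disk SKU names actually provisioned."""
    covered = {}
    for sku, units in reserved.items():
        # Only disks of the *same* SKU consume the reservation.
        matching = sum(1 for d in deployed if d == sku)
        covered[sku] = min(units, matching)
    return covered


# One P40 reservation does NOT cover two P30 disks:
print(reservation_covers({"P40": 1}, ["P30", "P30"]))  # {'P40': 0}
# It covers exactly one P40 disk:
print(reservation_covers({"P40": 1}, ["P40", "P40"]))  # {'P40': 1}
```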
Let me try to bring some insight on Azure Storage reservations. If you have not read my previous blog post on VM reserved instances, please read it here. You can save on storage costs for blob data with Azure Storage reserved capacity, which covers both block blobs and Data Lake Storage Gen2. To buy Azure Blob Storage reserved capacity you must be the owner of the subscription, or an EA admin for EA enrollments, or an admin agent or sales agent for CSP. I have gathered information from several Microsoft articles into one place, as bullet points, to help you with this.
Importantly, a reservation covers only the data stored in the subscription; it does not cover other charges such as early deletion, operations, bandwidth, or data transfer. Keep the points below in mind when considering a reservation for Azure Storage.
We hear a lot about automation nowadays. Should we automate anything and everything? Yes, most of it. However, we must understand why we need automation. As far as I understand, a task is worth automating if it matches at least one of the points below.
Is it a repeated task?
Does it save time during deployment (for example, by reducing downtime)?
Does it avoid human error?
Does it bring standardization in repeated tasks?
I have seen engineers trying to codify everything, spending hours or even days on something that could have been done manually in minutes. I have done such things in the past myself, though for learning.
I am writing this blog to explain the different methods for connecting to Azure Database for PostgreSQL, for those who know Azure but have never worked with PostgreSQL. So I will not explain how to manage databases, only how to manage the PaaS service from the portal. You may review the Microsoft documentation for details such as the SKUs, the different plans, generations, and so on. I am just sharing some of my experience from dealing with this service for the first time.
Before we get into the connectivity methods, let's talk about deploying PostgreSQL with the General Purpose performance configuration, which I think is important. The available generation is 5; Generation 4 is not available for new deployments.
We can use either ARM templates or the portal to perform the deployment. There are a few ARM templates for PostgreSQL available on GitHub.
There are a few things to remember when you input the parameters using ARM; the items below might confuse you.
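One concrete example of a confusing detail: for the single-server flavor of Azure Database for PostgreSQL, the login you pass to any client must be in the form adminuser@servername, not just the admin user name. A small sketch (the server, user, and database names are made up):

```python
# Sketch: building connection settings for Azure Database for PostgreSQL
# (single server). All names are made-up examples. The gotcha: the user
# name passed to the client must be in the form adminuser@servername.

def pg_connection_settings(server: str, admin_user: str, database: str) -> dict:
    return {
        "host": f"{server}.postgres.database.azure.com",
        "user": f"{admin_user}@{server}",  # NOT just the admin user name
        "dbname": database,
        "sslmode": "require",              # SSL is enforced by default
        "port": 5432,
    }


settings = pg_connection_settings("mydemoserver", "pgadmin", "postgres")
print(settings["user"])  # pgadmin@mydemoserver
print(settings["host"])  # mydemoserver.postgres.database.azure.com
```

These settings can then be passed to any PostgreSQL client, for example psycopg2 or psql.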
I was exploring information on each Azure region, but I could not find a single page with everything listed. So I thought of creating a table and sharing it with our cloud community. The research for this blog made me realize some interesting facts about Azure regions. I think it will help architects who want to do quick fact checks when designing their Azure solutions.
I will try my best to keep the table updated so that you get the latest information. You can also verify this information from the URLs provided at the bottom of this blog.
What you get from this blog post:
Geography, Azure region, Availability Zones (AZ) in that region, location, and its paired region(s), all in a single table.
You can get all the resources in a geography by clicking on it in the first column. I have selected all the Azure native services available in that geography, so you get the services for all its regions in a single click.
I have marked featured regions in bold, and you can get details about a region by clicking on the region column where applicable.
Some facts about Azure regions:
Only the South India, South Central US, and US Gov Texas regions are paired with more than one region, but conditions apply.
West India's paired region is South India, but the pairing is only one-directional:
South India's secondary region is only Central India.
Brazil South's secondary region is South Central US, but South Central US's secondary region is not Brazil South.
Interestingly, US Gov Virginia's secondary region is US Gov Texas, while US Gov Texas's secondary region is US Gov Arizona.
Only 6 Azure regions have paired regions in different countries.
South Central US
Only three Azure regions are located in undisclosed locations in the US.
Switzerland North is currently available only to selected customers; you need to contact support to create resources there.
Switzerland West is reserved for customers requiring in-country disaster recovery. You may need to contact Azure Support to create resources there.
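The asymmetric pairings listed above can be captured in a small lookup table. This sketch is built only from the facts in this post; verify the pairings against Microsoft's paired-regions documentation.

```python
# A small lookup of the asymmetric pairings mentioned above.
# Built from the facts in this post; verify against Microsoft's
# cross-region replication (paired regions) documentation.

paired_region = {
    "West India": "South India",       # one-directional pairing
    "South India": "Central India",
    "Brazil South": "South Central US",
    "US Gov Virginia": "US Gov Texas",
    "US Gov Texas": "US Gov Arizona",
}


def is_symmetric(a: str, b: str) -> bool:
    """True only when each region is the other's pair."""
    return paired_region.get(a) == b and paired_region.get(b) == a


# West India pairs to South India, but South India pairs to Central India:
print(is_symmetric("West India", "South India"))  # False
print(paired_region["Brazil South"])              # South Central US
```

This asymmetry matters when you design DR: your secondary region's own pair may not be your primary.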
The illustration below is a nice mapping of security services from different cloud service providers. Azure is clearly winning, as you hardly see any third-party solutions mapped in its security product list. It does not tell you which service serves the various customer use cases better, though. It is interesting to see Alibaba catching up with its list of products.
Azure Dedicated Host enables you to run your organization's Linux and Windows virtual machines on single-tenant physical servers. It provides the visibility and control needed to help address corporate compliance and regulatory requirements.
You can find the documentation from Microsoft here.
AWS has had this feature for some time now; it is good that Microsoft is catching up and closing the gap.
Benefits of Dedicated Hosts.
Azure Hybrid benefit to Azure Dedicated Hosts – Microsoft offers on-premise Windows
Host level isolation
Underlying hardware infrastructure
Processor brand, capabilities and more
Type and size of the Azure
With an Azure Dedicated Host, you control all host-level platform maintenance initiated by Azure (e.g., host OS updates). An Azure Dedicated Host gives you the option to defer host maintenance operations and apply them within a defined 35-day maintenance window. During this self-maintenance window, you can apply maintenance to your hosts at your convenience, gaining full control over the sequence and velocity of the maintenance process.
Azure Reservations help you save money by committing to one or three years of virtual machines, SQL Database compute capacity, Azure Cosmos DB throughput, or other Azure resources, paid either upfront or monthly. The commitment gets you a discount on the resources you use, which can reduce your costs by up to 72% compared with pay-as-you-go prices.
I would like to talk about how best we can utilize reserved instances (RIs) and other techniques (such as runbooks) to bring more cost savings. We will also talk about how to decide between RIs and on-demand virtual machines (VMs).
Let’s look at some of the terminology and how it is used when buying RIs from Microsoft.
1 Year commitment – Paid upfront or monthly
3 Years commitment – Paid upfront or monthly
Microsoft recently announced monthly payments for RIs, which is a really welcome move. You can buy new reservations with a monthly payment frequency, and you can convert existing RIs to monthly billing when you renew them.
You get recommendations from Azure Advisor, which is available in the Azure portal for all subscriptions and is based on your usage. However, it is even better to plan and select the right VM SKUs yourself. I will talk about this.
One thing you must remember: the reservation discount is ‘USE IT OR LOSE IT’. You can’t carry forward unused reserved hours.
Generally, you do not get any benefit from an RI if the VM is not utilized above 60-70%. But I will talk about how we can bring additional benefits in such scenarios.
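The utilization rule of thumb can be sketched as a break-even calculation between an RI and an on-demand VM that is deallocated when idle (for example, via start/stop runbooks). The hourly rate and the 58% discount below are made-up placeholders; with these numbers the break-even sits at 42% utilization, and smaller discounts push it toward the 60-70% mentioned above.

```python
# Break-even sketch: at what utilization does a reserved instance beat
# pay-as-you-go plus deallocation (e.g. via start/stop runbooks)?
# The hourly rate and 58% discount are hypothetical placeholders.

PAYG_HOURLY = 0.20  # hypothetical pay-as-you-go rate
RI_DISCOUNT = 0.58  # hypothetical 3-year RI discount


def monthly_cost(utilization: float, reserved: bool, hours: int = 730) -> float:
    if reserved:
        # An RI bills for every hour, used or not: "use it or lose it".
        return round(hours * PAYG_HOURLY * (1 - RI_DISCOUNT), 2)
    # On demand, a deallocated VM stops accruing compute charges.
    return round(hours * utilization * PAYG_HOURLY, 2)


print(monthly_cost(0.40, reserved=False))  # 58.4  -> on demand wins at 40%
print(monthly_cost(0.40, reserved=True))   # 61.32
print(monthly_cost(0.50, reserved=False))  # 73.0  -> RI wins at 50%
```

The takeaway: below the break-even utilization, runbook-driven deallocation can beat an RI; above it, the RI wins.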
ASGs enable you to define fine-grained network security policies based on workloads, centered on applications, instead of explicit IP addresses. Implementing granular traffic controls improves the isolation of workloads and protects them individually. If a breach occurs, this technique limits an attacker's potential for lateral movement across your network.
You can find more details on the Microsoft site, which I do not want to copy and paste here. Let's talk about the use case and how we can make better use of ASGs.
Deny all communication and open only specific communication using ASGs. Yes, you can create a Deny All rule with a low priority (a high priority number, so it is evaluated last) within your vNET. Then you create rules that open specific ports, selecting an ASG as both source and destination. This opens communication only between the servers that have the specific ASG configured. Look at the pictures below (figures 1 and 2) to understand this better.
There is no option to add a server to an ASG directly; instead, you select the required ASG on the vNIC of the VM. You can also configure this in your ARM templates when you create the VM. This reduces the number of NSG changes you need to make every time you add a server: you simply select the required ASG while creating the VM.
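As a sketch, an NSG rule with ASGs on both sides might look like this in an ARM template. The rule name, ASG names, port, priority, and API version are placeholders; check the current ARM reference for Microsoft.Network before use:

```json
{
  "type": "Microsoft.Network/networkSecurityGroups/securityRules",
  "apiVersion": "2021-05-01",
  "name": "my-nsg/allow-app-tier-sql",
  "properties": {
    "priority": 200,
    "direction": "Inbound",
    "access": "Allow",
    "protocol": "Tcp",
    "sourcePortRange": "*",
    "destinationPortRange": "1433",
    "sourceApplicationSecurityGroups": [
      { "id": "[resourceId('Microsoft.Network/applicationSecurityGroups', 'asg-app')]" }
    ],
    "destinationApplicationSecurityGroups": [
      { "id": "[resourceId('Microsoft.Network/applicationSecurityGroups', 'asg-sql')]" }
    ]
  }
}
```

Combined with a low-precedence Deny All rule, this allows SQL traffic only between VMs whose vNICs carry the respective ASGs.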
You need to remember a few things about ASGs.
An ASG has no settings of its own; you can only add tags.
You can select only one ASG as the source or destination in each NSG rule.
You can select multiple ASGs for a single VM.
3,000 ASGs per subscription
20 ASGs per vNIC
4,000 IP configurations per ASG
You can only assign ASGs from the same subscription.
You cannot have VMs from different vNETs in one ASG.
Both the source and destination ASGs in your NSG rules must be in the same vNET.
Most organizations are keen to move their workloads to the cloud today, for reasons such as their IT vision, reducing spend on hardware refreshes, data center consolidation, and so on.
But are they ready to move into the cloud? It is an important question that every organization should ask again and again before deciding to move in with a big bang. We see a trend of many customers moving their existing legacy applications ‘as is’ to the cloud. Should we move into the cloud and utilize its benefits, or just move in and not care about those cloud features?
Let me start with an example. Take four web servers and two clustered database servers, available 24/7, across Dev, Test, and Prod environments, and suppose you want to move this workload to the cloud ‘as is’. My question is: what objectives are you trying to achieve? If the answer is "our organization wants to move all workloads to the cloud for cost savings, for moving from a Capex to an Opex model", then hold on… Let's calm down, think, look around, and plan again.
Lift and shift should not be our cloud migration strategy. We should make our applications live smartly in the cloud, utilizing cloud benefits and reducing cost. Let's use the above example to explore this further.
Can we make this application horizontally scalable?
Can we make this application use cloud-native authentication?
Can we make this application stateless?
Can we make this application use distributed data storage?
Microsoft has now announced its long-pending Availability Zones. The feature is currently in preview and recommended only for non-critical workloads, as Microsoft does not provide an SLA yet. With this, you can provision your workload across different data centers in the same region for resilience; at GA you will have a minimum of three AZs to choose from in each supported region. For now, the preview is available only in East US 2 and West Europe.
AWS currently operates 44 AZs across 16 regions, with 14 more AZs already planned (44 + 14 = 58). Microsoft currently operates 36 regions, with 6 more to come. If you assume Microsoft brings a minimum of 3 AZs to each of these regions, it would have (36 + 6) * 3 = 126 AZs, more than double AWS's count across the globe. I agree it does not make much sense to just play with numbers, so Microsoft needs to bring services that help customers make use of availability zones and add value to the workloads hosted in Azure. AWS already offers multiple PaaS services in its Multi-AZ deployment model, so Microsoft still needs to do a good job of making more services available for multi-AZ deployments.