I’ve always been fascinated by ideas. The kind of ideas that spark innovation, creativity, and problem-solving. The kind of ideas that make you want to jump out of bed and start working on them right away. That’s why I was drawn to the famous tagline from Idea Cellular: “An idea can change your life.” And it did.
Ideas are powerful. They can transform industries, create new solutions, and shape the future. But how do you cultivate a culture of innovation in your organization? How do you foster an environment where ideas are welcomed, nurtured, and implemented?
That’s where an idea engine comes in. An idea engine is a system that helps you generate, evaluate, and execute ideas effectively. It’s not just about brainstorming sessions or suggestion boxes. It’s about having a clear process that allows you to tap into the collective intelligence of your team and turn ideas into reality.
We frequently ask questions to gather requirements and provide our designs and solutions. Many organizations standardize their questions for consistency among teams. Can chatbots handle these questions? Yes, AI services can be utilized to make decisions and trigger DevOps pipelines to deploy the desired design or service.
By using Infrastructure as Code (IaC), we can quickly deploy design templates, using AI to assist in choosing the appropriate design. This allows customers to focus on workload migration instead of landing zone design. Of course, not every customer is the same; for example, a secure Azure VNet hub-and-spoke design can be deployed initially and improved upon while testing non-critical workloads in the cloud. We can move from a design workshop to a design selection workshop, where you help the customers pick one of the best designs available.
Another solution is deploying services on an existing subscription, where customers can quickly deploy VMs or PaaS services by answering a few questions posed by a chatbot, without waiting for a human response. This speeds up the deployment process and increases customer satisfaction.
In conclusion, by utilizing AI and chatbots, customers can avoid creating tickets and waiting for support, as they can immediately provision resources by answering questions and confirming with AI suggestions.
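As a rough sketch of this idea, a chatbot's collected answers could be mapped to a design template before triggering the DevOps pipeline. Everything below (the question keys, the template names, and the `choose_template` helper) is hypothetical, just to illustrate the decision step:

```python
# Hypothetical sketch: map chatbot answers to an IaC design template.
# Question keys and template names are illustrative, not real pipeline inputs.

TEMPLATES = {
    ("hub-spoke", True): "secure-hub-spoke-vnet",   # secure landing zone design
    ("hub-spoke", False): "basic-hub-spoke-vnet",
    ("single-vnet", True): "secure-single-vnet",
    ("single-vnet", False): "basic-single-vnet",
}

def choose_template(answers: dict) -> str:
    """Pick a design template from the customer's chatbot answers."""
    key = (answers["topology"], answers["needs_firewall"])
    return TEMPLATES[key]

answers = {"topology": "hub-spoke", "needs_firewall": True}
print(choose_template(answers))  # the pipeline would deploy this template
```

In a real system the chosen template name would be passed as a parameter to the DevOps pipeline that deploys the landing zone.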
Traditionally, Architects work on the end-to-end design and get it reviewed by different stakeholders at the end. You end up waiting several days to get it reviewed by many people in your organization and in your customers’ organization. It is clearly a serial activity, a waterfall model. In the era of the Public Cloud and the agile world, I think we should change our approach slightly to speed up and modernize our design process.
The world is pushing for Cloud Speed. When you design Public Cloud infrastructure for your customers, you must travel along. Most organizations are reshaping their structures to pool different skill sets and work as groups. So, we should also engage the different teams and their members at the different stages of the design. This will reduce conflicts and friction at the end of your design, because you are taking their inputs. You may even invite others to update some sections of the design guide, which will bring a lot of collaboration to the design.
It is a good idea to do several draft reviews with the team and customers. This reduces rework at the end, as you get instant feedback on your existing content. When your high-level diagram is ready, send it to stakeholders, including the customer, saying “Hey, take a look at what I am currently working on.” This helps you avoid creating content that may be incorrect for a specific requirement. You can even review a completed section with them while you work on other sections and receive feedback in parallel. With this approach, you still need a sign-off from the stakeholders on the final version, but that will be easier as they are already familiar with your content.
Another advantage of engaging other teams is that it helps the organization grow good Architects. Junior members of the team get the opportunity to work with senior Architects and contribute to the design process. This creates a pool of people who can become good Architects in the future.
Last week, Microsoft announced the Preview of Capacity Reservation for VMs. You can reserve VM capacity in your DR region to ensure that you have VM resources available to create or turn on your protected VMs using ASR. ASR does not guarantee that your VMs can be turned on in your DR region in the event of disaster recovery, so capacity reservation is a welcome and much-needed feature. However, it also increases the cost of your solution.
Cost factors
VM cost
ASR protected VM cost
Capacity Reservation cost (the same as your actual VM cost)
Other costs
Note: DR is not just the VMs; it includes other components as well. I did not detail them above because they apply to both options.
Hmm… Can we plan DR cost-effectively in Azure? Let’s take a look:
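To make the cost question concrete, here is a toy comparison of the monthly DR cost with and without capacity reservation. All prices are made-up placeholders, not Azure rates; the point is only that the reservation roughly doubles the per-VM compute cost of the DR setup:

```python
# Illustrative monthly DR cost comparison (all prices are made-up placeholders).

vm_compute = 150.0   # monthly compute cost of one protected VM (placeholder)
asr_license = 25.0   # ASR protection cost per VM (placeholder)
capacity_reservation = vm_compute  # reserved capacity is billed like the VM itself

def dr_cost(vm_count: int, reserve_capacity: bool) -> float:
    """Monthly DR cost for a fleet of protected VMs."""
    per_vm = vm_compute + asr_license
    if reserve_capacity:
        per_vm += capacity_reservation
    return vm_count * per_vm

print(dr_cost(10, reserve_capacity=False))  # ASR only
print(dr_cost(10, reserve_capacity=True))   # ASR + guaranteed capacity
```

With these placeholder numbers, guaranteeing capacity for 10 VMs nearly doubles the monthly DR bill, which is the trade-off discussed above.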
The long-awaited feature is now in Public Preview. When an Azure region goes down and disaster recovery kicks in, we need our VMs to be turned on in the secondary region. This feature guarantees you that recovery capacity. You may learn about this feature from the linked URL.
As you already know, Azure Site Recovery does not 100% guarantee turning on your VMs in the event of DR. This feature addition helps you gain more confidence in your BCP.
Take a look at the Microsoft statement about the use cases for this feature.
Business-critical applications—use on-demand capacity reservations to protect your capacity, for example when taking these VMs offline to perform updates.
Disaster recovery—you now have the option to set aside compute capacity to ensure a seamless recovery in the event of a natural disaster. The compute capacity can be repurposed to run other workloads whenever DR is not in effect. The VM maintenance can be handled by keeping core images up to date without the need to deploy or maintain VMs outside of DR testing.
Special events—claiming capacity ahead of time provides assurance that your business can handle the extra demand.
I have been asking for this feature for a long time, and finally, it is here. I am happy that Microsoft is listening to customers and partners. However, the first bullet point is a bit worrisome, as it implies you are not guaranteed to get your VM back if it is offline for some time due to maintenance. Will Microsoft push customers to buy this for all their critical workloads? I hope they do not make things worse to that point.
It is an interesting point to discuss. I am taking Azure as the example here, but it applies to other Public Clouds as well.
Azure Site Recovery is a great native tool that helps us enable disaster recovery (DR) by replicating VMs to another region with a few clicks. Microsoft allows you to turn the VMs on during a disaster recovery, or whenever you want to. It helps you save the running cost of VMs for the DR setup. However, can Microsoft turn all the VMs on in the secondary region if a region fails? How many of you have thought about that scenario?
My concerns around this grew more and more last year during the early Covid-19 period, when utilization peaked at a new height. There were many reported cases of organizations unable to create new VMs because Microsoft data centers, including Azure regions, were running out of resources due to the sudden usage spike across the world. What would happen if thousands of customers in a region wanted to start their VMs in their secondary Azure region, resulting in hundreds of thousands of VMs starting on the same day?
What are the standards we should follow when we move on to Public Cloud?
We often get this question during Public Cloud conversations with different stakeholders. I would say ‘Change’ should be the standard. Ahh… What?
Be ready to accept a Change.
Be ready to execute a Change.
Be ready to prepare for future Changes.
The change should start in the minds of people. That means developing a mindset to accept change is the first thing in the transformation journey. It is also the first barrier: each of us is comfortable continuing with the existing system, because change brings interruptions, and nobody likes interruptions.
Confidence makes you a good presenter. So, what gets you confidence? It is information: the more you know, the more confidence you build.
Let me share today the story of how I developed my presentation skills. Nowadays I do a lot of presentations to clients across the world. It is mostly technical content, predominantly around Azure, which is obvious as I am an Azure Architect. If I look a few years back, I used to get nervous about it. Over time, especially after I moved into the DXC Azure team, I have been given a lot of opportunities to present in front of internal as well as client senior leadership. In the initial days I struggled a bit, as I was in the grip of a fear we can call stage fright.
We often hear “We want to achieve Cloud Speed” when we talk about deployment and management of the Public Cloud. What does that mean? A VM can be deployed in a few minutes in the Public Cloud, so the question is, “why do you take longer to start managing it?”. It is an interesting question, isn’t it? How many of you deal with such questions on a daily basis? As far as I understand, it isn’t an easy task for any company or service provider to achieve and sustain. It requires a lot of preparation to gain that speed, because it is not just about deployment but also about taking care of many things: monitoring, management, security, billing, and so on.
Let’s take a look at some of the key areas we need to focus on. Vrooooomm……
Azure App Service supports connecting to an Application (App) Server and a Database (DB) Server hosted in an Azure VM or an on-premises server using vNET integration, which is part of the App Service Plan (ASP). You encounter such scenarios often when migrating your application landscape to Azure from on-premises. You will see scenarios like shared DBs, a DB team that is not yet ready to move DBs to PaaS, or no PaaS DB service available for the likes of DB2 or Oracle. App Service vNET integration is a useful feature if you choose not to go with App Service Environment (ASE).
When you configure the vNET integration, your WebApp gets a private IP from the vNET. The App Service will be able to communicate with a VM in that vNET, with a VM in another vNET peered in the same region, or with an on-premises server over ExpressRoute or a S2S VPN.
Much awaited, badly wanted, but missing for a long time. It is now in Public Preview. Thanks, Microsoft, for adding it.
One of the main reasons I stayed away from recommending Azure Bastion to customers was this missing feature. I think it is time to change my mind and recommend Azure Bastion, as it saves a lot of dollars now: we are moving away from a per-vNET deployment model to a per AAD tenant model, or whatever the customer requires.
Let’s focus on Azure SQL Database reservations today. If you have not read my other blog posts on VM reservations, please read them first. You can save on SQL costs with Azure reserved capacity. It covers both Azure SQL Database and SQL Managed Instance. You must be the owner of the subscription (or an EA admin for EA subscriptions), or an admin agent or sales agent, to buy the reserved capacity.
Importantly, a reservation covers only the compute charges of the instances in the subscription; it does not cover the software, networking, or storage charges associated with the services. Note the points below when you think about reserving Azure SQL Database or SQL Managed Instance. It provides a good amount of cost saving along with Azure Hybrid Benefit.
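As a back-of-the-envelope illustration of the saving, assuming a placeholder discount rate (actual rates vary by region, term, and tier, and remember the reservation covers compute only):

```python
# Rough saving estimate for reserved capacity vs pay-as-you-go.
# The discount percentage is a placeholder, not a published Azure rate.

def reserved_saving(paygo_monthly: float, months: int, discount: float) -> float:
    """Amount saved on compute over the term vs pay-as-you-go."""
    full_price = paygo_monthly * months
    return full_price * discount

# e.g. a 3-year term with an assumed 30% compute discount
print(round(reserved_saving(1000.0, 36, 0.30), 2))
```

Storage, networking, and software charges would still be billed on top, as noted above.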
We talked about Azure Reservations for VMs in my previous blog post. If you have not read it already, I suggest you read it.
The VM RI discount applies only to the VM infrastructure cost; it does not include the disks or storage used. Azure Disk Storage reservations combined with Azure reserved VM instances help you reduce the total VM cost.
The common rule applies here as well: “Use it or lose it.”
Currently, Azure Disk Storage reservations are available only for selected Azure premium SSD SKUs. They do not apply to unmanaged disks, ultra disks, or page blob consumption.
The disk reservation is not based on capacity; it is based on the total number of disks per SKU. That means you consume the reservation in units of a disk SKU rather than by provisioned size. For example, you cannot use a P40 reservation for two P30 disks. There is no instance size flexibility like in VM reservations.
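The per-SKU rule above can be sketched as a small matcher. The SKU names are real premium SSD sizes, but the matching logic is my own illustration of the rule, not an Azure API:

```python
# Illustration of the disk-reservation rule: reservations count whole disks
# per SKU, with no size flexibility across SKUs.

def covered_disks(reserved: dict, deployed: list) -> int:
    """Count deployed disks covered by reservations, matching exact SKU only."""
    remaining = dict(reserved)  # e.g. {"P40": 1} means one P40 disk reserved
    covered = 0
    for sku in deployed:
        if remaining.get(sku, 0) > 0:
            remaining[sku] -= 1
            covered += 1
    return covered

# A P40 reservation covers a P40 disk, but not two P30 disks:
print(covered_disks({"P40": 1}, ["P40"]))         # 1 disk covered
print(covered_disks({"P40": 1}, ["P30", "P30"]))  # 0 disks covered
```

This is the opposite of VM reservations, where instance size flexibility lets a reservation float across sizes in the same family.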
Let me try to bring some insight on Azure Storage reservations. If you have not read my previous blog posts on VM reserved instances, please read them first. You can save on storage cost for blob data with Azure Storage reserved capacity. It covers both block blobs and Data Lake Storage Gen2. You must be the owner of the subscription (or an EA admin for EA subscriptions), or an admin agent or sales agent, to buy Azure Blob Storage reserved capacity. I have tried to bring information from different MS articles into one place, with bullet points, to help you with this.
Importantly, a reservation covers only the data stored in the subscription; it does not cover other charges like early deletion, operations, bandwidth, and data transfer. Note the points below when you think about reserving Azure Storage.
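A sketch of how the billing might split once usage exceeds the reserved capacity; the prices below are placeholders, and the assumption is that overage is billed at pay-as-you-go rates:

```python
# Placeholder illustration: reserved capacity covers stored data up to the
# reserved amount; anything above is billed at pay-as-you-go prices.

def blob_bill(stored_tb: float, reserved_tb: float,
              reserved_price: float, paygo_price_per_tb: float) -> float:
    """Monthly storage bill: flat reservation price plus any overage."""
    overage = max(stored_tb - reserved_tb, 0.0)
    return reserved_price + overage * paygo_price_per_tb

# 120 TB stored against a 100 TB reservation (all prices are placeholders)
print(blob_bill(120, 100, reserved_price=1800.0, paygo_price_per_tb=20.0))
```

Operations, bandwidth, and early-deletion charges would still apply separately, as the text above notes.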
There lived a monster near a tiny village. Several men tried to fight the monster. When they attacked the monster with swords, it grabbed the weapon, pulled out another one twice as sharp and large, and attacked them back. This continued regardless of the methods they tried. However, one day a little boy went along with the others and offered the monster an apple. The monster grabbed it and returned two delicious apples, twice as red and large as the one the boy had offered. Soon, the villagers realized that the monster was not a curse but a blessing. I read this story in the book The Secret of Leadership by Prakash Iyer.
Work from home is nothing new for us, but amid the Covid-19 outbreak we are forced to WFH now. It is our responsibility to exercise social distancing to break the chain, and at the same time we need to support our company and our customers.
I have been doing WFH for more than a month now and have also cancelled all official and personal trips. I had already set up my home office, since I used to WFH a few days a week. I often tend to work long hours, so it was important to make sure I have a comfortable working experience at my home office.
Occasionally we work from home, but it is a different situation now. I think this situation will continue for another two or three months, if I analyse it correctly. So, it is important to have the necessary facilities at home to work 8 or more hours continuously. Let me list some important tips to get you a better working environment at home.
We hear a lot about automation nowadays. Should we automate anything and everything? Yes, most of it. However, we must understand why we need to bring in automation. As far as I understand, we should automate when at least one of the points below applies.
Is it a repeated task?
Does it save time during deployment (like reducing downtime)?
Does it avoid human error?
Does it bring standardization in repeated tasks?
I have seen engineers trying to codify everything, which might waste hours or even days on something that could have been done in a few minutes. I have done such things in the past too, though for learning.
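The checklist above can be captured as a tiny decision helper. The rule (automate when at least one item applies) mirrors the text, and the key names are my own:

```python
# The automation checklist from the text as a small decision helper.
# The question keys are my own labels for the four bullet points.

QUESTIONS = [
    "is_repeated",
    "saves_deployment_time",
    "avoids_human_error",
    "brings_standardization",
]

def should_automate(task: dict) -> bool:
    """Automate when at least one checklist item holds for the task."""
    return any(task.get(q, False) for q in QUESTIONS)

print(should_automate({"is_repeated": True}))             # True
print(should_automate({"saves_deployment_time": False}))  # False
```

A one-off task that ticks none of the boxes is exactly the case where codifying it wastes more time than it saves.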
I am writing this blog to explain different methods of connecting to Azure Database for PostgreSQL, for those who know Azure but have never worked with PostgreSQL. So, I will not be explaining how to manage databases, but how to manage the PaaS from the portal. You may review the details of this PaaS service in the Microsoft documentation to understand the SKUs, the different plans, the generations, and so on. I am just sharing some of my experience from when I dealt with this for the first time.
Before we get into the connectivity methods, let’s talk about deploying PostgreSQL with the General Purpose performance configuration, which I think is important. The available generation is Gen 5, as Gen 4 is not available for deployment.
We can use either ARM templates or the portal to perform the deployment. There are a few ARM templates for PostgreSQL available on GitHub.
There are a few things you need to remember when you input the parameters using ARM. The points below might confuse you.
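One thing that confused me the first time: the single-server flavor of Azure Database for PostgreSQL expects the login in the form adminuser@servername. A tiny sketch, with made-up server and user names:

```python
# Azure Database for PostgreSQL (single server) expects the login as
# "adminuser@servername". The names below are made up for illustration.

def pg_login(admin_user: str, server_name: str) -> str:
    """Build the login string the single-server service expects."""
    return f"{admin_user}@{server_name}"

host = "myserver.postgres.database.azure.com"  # hypothetical server FQDN
print(pg_login("pgadmin", "myserver"))
# e.g. psql "host=myserver.postgres.database.azure.com user=pgadmin@myserver ..."
```

Forgetting the @servername suffix is a common first-time connection failure with this service.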
I am starting my 2020 year with the review of Azure Bastion Host.
It was welcome to see Microsoft introduce the Azure Bastion host. It allows you to connect to your VMs without having a public IP configured on the VM. I reviewed the preview of this in my blog last year. I suggest you read my blog and the other Microsoft articles for the details, as I am not explaining them in this post.
I noticed Microsoft has added the below features with GA.
Extended to a few more regions.
Integrated with Log Analytics, which provides audit logs.
A small help can bring a smile to someone’s face. It doesn’t matter how big or small; what matters is that we do it. How can we make it part of our lifestyle?
Let’s decide that we will donate something when we achieve something in our life. For example, when you get a salary increment or a bonus, or it could be a certification or a degree.
Personally, I make a wish before I do things in my life that I will donate an amount to those who really need it. It may be a trek, travel, an exam, a promotion, a new job, or anything else. I do not know if there is someone hearing this or not, but it gives me a lot of happiness and peace. I suggest you try this at least once; I am sure it would bring a smile to someone, and to you as well.
It was almost a 2-year break from MS certifications before I tried AZ-500 early this week, and it was an interesting one. It was the first MS certification I have ever taken with a hands-on lab, though it was a bit of a surprise. I thought of sharing my exam experience, which might be helpful if you are trying to get this certification.
The exam is 3.5 hours in total, with 3 hours of exam time. I suggest you go through the exam skills outline before starting your preparation. I started with a course on Linux Academy. I found it especially good for Azure Active Directory, as it covers all the AAD features that are part of P2. The course covers almost all the subjects required for the exam, enough to start preparing. However, don’t stop there; you need to deep dive into each subject with the MS documentation. Importantly, you need to do a lot of hands-on work for each topic described in the exam skills outline.
Every failure is a lot of learning, and learning is key to success. The question is, can we learn without failure? Maybe. But failure is not bad if you don’t stop there.
You have seen this subject in many articles already, so I do not want to repeat it here. Instead, I would like to talk about building a culture in the organization that encourages people to take risks without the fear that failure will cost them their job. Leadership in the organization should be able to bring in innovation to accelerate growth.
If there is fear of losing the job due to failure, people will be scared to execute a change. Instead, they will stick to the safest route to protect themselves. That is a dangerous thing to happen to an organization, because that organization will die from lack of innovation. We have read about companies like Kodak, Blackberry, and Nokia, which forgot, or were too late, to make much-needed changes.
Do you see your ideas not accepted by your management because you are not as experienced as others? If yes, I think that is not right. Experience always plays a good role in making decisions and selecting the right solution, but it should not limit accepting ideas from others.
I just read the caption below at a restaurant in Chennai.
It was an interesting caption.
Taken from Parkway Inn Restaurant Chennai
It is important for every team to build a culture that encourages everybody in the team to come up with ideas, and to reward the good ones. It helps the team be more innovative, and innovation is key for success.
What is key for this? A good leadership team that does not get stuck on ‘egg or chicken first’.
I was exploring some information on each Azure region, but I could not find a single page with all of it listed. So I thought of creating a table and sharing it with our Cloud Community. The research for this blog made me realize some interesting facts about Azure regions. I think this will help Architects who want to quickly check facts while designing their Azure solutions.
I will try my best to keep the table updated to ensure you get the latest information. You can also verify this information from the URLs provided at the bottom of this blog.
What you get from this blog post:
Geography, Azure Region, Availability Zones (AZ) in that region, Location and its Paired Region(s) in single table.
You can get all the resources in a Geography by clicking on each Geography in the first column. I have selected all the Azure native services available in that area for you, so you get the services for all the regions in a single click.
I have marked Featured Regions in bold, and you can get details about a region by clicking on the region column where applicable.
Some facts about Azure regions:
Only the South India, South Central US, and US Gov Texas Azure regions are paired with more than one region. But conditions apply.
West India’s paired region is South India, but the pairing is only in one direction:
South India’s secondary region is Central India.
Brazil South’s secondary region is South Central US, but South Central US’s secondary region is not Brazil South.
Interestingly, US Gov Virginia’s secondary region is US Gov Texas, and US Gov Arizona is the secondary region of US Gov Texas.
Only 6 Azure regions have paired regions in different countries.
Brazil South
South Central US
North Europe
West Europe
East Asia
Southeast Asia
Only three Azure regions are located in undisclosed locations in the US.
Switzerland North is currently available only for selected customers. You need to contact support to create resources there.
Switzerland West is reserved for customers requiring in-country disaster recovery. You may need to contact Azure Support to create resources there.
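The pairing facts above can be put into a small lookup table. This is only a subset, and pairings can change, so verify against the Microsoft region-pair documentation before relying on it:

```python
from typing import Optional

# Subset of Azure region pairs taken from the facts above; pairings can change,
# so verify against Microsoft's region-pair documentation.
PAIRED_REGION = {
    "West India": "South India",       # paired in one direction only
    "South India": "Central India",
    "Brazil South": "South Central US",
    "North Europe": "West Europe",
    "West Europe": "North Europe",
    "East Asia": "Southeast Asia",
    "Southeast Asia": "East Asia",
}

def secondary_region(region: str) -> Optional[str]:
    """Return the secondary (paired) region, or None if not in this subset."""
    return PAIRED_REGION.get(region)

print(secondary_region("West India"))   # South India
print(secondary_region("South India"))  # Central India
```

The asymmetric entries (West India, Brazil South) are exactly the one-directional pairings called out in the list above.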
Below is a nice illustration mapping security services from different Cloud Service Providers. I see Azure clearly winning, as you hardly see any third-party solutions mapped in their security product list. It does not tell you which service serves the various customer use cases better. It is interesting to see Alibaba catching up with its list of products.