Azure VM Availability Sets


Intro

My notes on Availability Sets

Azure availability sets allow you to distribute virtual machines across multiple physical servers, racks, storage units, and networks within an Azure datacenter. This reduces the impact of hardware failures and maintenance events on the virtual machines in the availability set. When you assign virtual machines to an availability set, they are distributed evenly across the configured fault and update domains.


Documentation

 


Tips and Tidbits

 

  • An availability set is a logical grouping of VMs that allows Azure to understand how your application is built to provide for redundancy and availability.

    • It is recommended that two or more VMs be created within an availability set to provide for a highly available application and to meet the 99.95% Azure SLA.

    • There is no cost for the Availability Set itself, you only pay for each VM instance that you create.

    • Azure makes sure that the VMs within the same availability set run across multiple physical servers, compute racks, storage units, and network switches.

  • Each virtual machine in your availability set is assigned an update domain and a fault domain by the underlying Azure platform.

    • For a given availability set, five update domains are assigned by default to indicate groups of virtual machines and underlying physical hardware that can be rebooted at the same time.

    • When more than five virtual machines are configured within a single availability set, the sixth virtual machine is placed into the same update domain as the first virtual machine.

    • Virtual machines in the same update domain will be restarted together during planned maintenance.

    • Azure never restarts more than one update domain at a time.

  • Availability sets do not distribute network traffic or workload to the virtual machines. A network load-balancing solution is required for that.

 

  • Fault domains define the group of virtual machines that share a common power source and network switch.

    • By default, the virtual machines configured within your availability set are separated across up to three fault domains for Resource Manager deployments.

  • The maximum number of fault domains in an availability set is three, so in the best case a single fault-domain failure would take only one third of your virtual machines offline.

  • The maximum number of update domains is 20. This means that, at best, only five percent of your virtual machines would be affected by underlying maintenance.

    • Each availability set can be configured with up to three fault domains and twenty update domains.

  • You cannot change the fault or update domain configuration after you create the availability set.

  • You must associate a virtual machine with an availability set when you create the virtual machine (see the CLI sketch after this list).

    • You cannot move a virtual machine in or out of an availability set once you have created the virtual machine.

    • If the availability set must change, then you need to provision a new virtual machine.

  • In situations where virtual machines are required to have identical configurations, such as load-balancing scenarios, you can use custom script extensions or DSC to ensure that the configurations of virtual machines are the same.

    • You can apply custom script extensions and DSC after deployment.

  • If you want to add a new VM to an availability set, the VM must be in the same region and resource group as the availability set.

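  • A minimal CLI sketch of the points above, assuming hypothetical resource names and image alias (not from the lab): create an availability set with explicit fault and update domain counts, then create a VM inside it at creation time.

# 3 fault domains and 20 update domains are the maximums noted above
az vm availability-set create \
  --resource-group myRG \
  --name myAvailSet \
  --platform-fault-domain-count 3 \
  --platform-update-domain-count 20

# The VM must join the set when it is created; it cannot be moved in later
az vm create \
  --resource-group myRG \
  --name myVM0 \
  --image Ubuntu2204 \
  --availability-set myAvailSet \
  --admin-username azureuser \
  --generate-ssh-keys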

Proximity Placement Groups

  • A proximity placement group is a logical grouping used to make sure that Azure compute resources are physically located close to each other.

  • Proximity placement groups are useful for workloads where low latency is a requirement.

  • Placing VMs in a single region reduces the physical distance between the instances.

  • Placing them within a single availability zone will also bring them physically closer together.

    • However, as the Azure footprint grows, a single availability zone may span multiple physical datacenters, which may result in network latency that impacts your application.

    • To get VMs as close as possible, achieving the lowest possible latency, you should deploy them within a proximity placement group (see the sketch below).

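A rough sketch, assuming placeholder names and region: create a proximity placement group, then deploy a VM into it at creation time.

az ppg create \
  --resource-group myRG \
  --name myPPG \
  --location eastus

az vm create \
  --resource-group myRG \
  --name myVM0 \
  --image Ubuntu2204 \
  --ppg myPPG \
  --admin-username azureuser \
  --generate-ssh-keys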

Deploying An Availability Set From CLI

Lab source: https://github.com/MicrosoftLearning/AZ-303-Microsoft-Azure-Architect-Technologies/blob/master/Allfiles/Labs/05/azuredeploy30305suba.json

 

az provider register --namespace 'Microsoft.Insights'

 

Identify Azure regions

az account list-locations --query "[].{name:name}" -o table

Name
------------------
eastasia
southeastasia
centralus
eastus
eastus2
westus
northcentralus
southcentralus
northeurope
westeurope
japanwest
japaneast
brazilsouth
australiaeast
australiasoutheast
southindia
centralindia
westindia
jioindiawest
canadacentral
canadaeast
uksouth
ukwest
westcentralus
westus2
koreacentral
koreasouth
francecentral
francesouth
australiacentral
australiacentral2
uaecentral
uaenorth
southafricanorth
southafricawest
switzerlandnorth
switzerlandwest
germanynorth
germanywestcentral
norwaywest
norwayeast
brazilsoutheast
westus3

 

  • Configure a network watcher

$LOCATION="eastus"
PS C:\Users\Roger> az network watcher configure --resource-group NetworkWatcherRG --locations $LOCATION --enabled -o table

Location    Name                   ProvisioningState    ResourceGroup
----------  ---------------------  -------------------  ----------------
eastus      NetworkWatcher_eastus  Succeeded            NetworkWatcherRG

 

az deployment sub create \
  --location $LOCATION \
  --template-file azuredeploy30305suba.json \
  --parameters rgName=az30305a-labRG rgLocation=$LOCATION

 

az deployment group create \
  --resource-group az30305a-labRG \
  --template-file azuredeploy30305rga.json \
  --parameters @azuredeploy30305rga.parameters.json

 

  • It took a good 20 mins for the Network Watcher to see the Resource Group and create a topology.

 

 

  • Check the effective security rules on a NIC

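One way to pull these from the CLI (the NIC name az30305a-nic0 is a guess at the lab's naming convention):

az network nic list-effective-nsg \
  --resource-group az30305a-labRG \
  --name az30305a-nic0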
 

  • Check the HTTP 80 connection between the machines.

  • It may take a while. Apparently, a Network Watcher agent needs to be installed on the VMs.

    • Connection troubleshoot requires that the VM you troubleshoot from has the AzureNetworkWatcherExtension VM extension installed.

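Assuming the lab's VM names, the same check can be run from the CLI once the extension is in place:

az network watcher test-connectivity \
  --resource-group az30305a-labRG \
  --source-resource az30305a-vm0 \
  --dest-resource az30305a-vm1 \
  --protocol Tcp \
  --dest-port 80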
 

 

  • Availability Set shows fault and update domains

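To see the domain counts and a given VM's placement from the CLI (the availability set name az30305a-avset is assumed):

az vm availability-set show \
  --resource-group az30305a-labRG \
  --name az30305a-avset \
  --query "{faultDomains:platformFaultDomainCount, updateDomains:platformUpdateDomainCount}"

az vm get-instance-view \
  --resource-group az30305a-labRG \
  --name az30305a-vm0 \
  --query "{faultDomain:instanceView.platformFaultDomain, updateDomain:instanceView.platformUpdateDomain}"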
 

Load Balancer

 

  • Load balancer and its backend pool

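The pool membership can also be listed from the CLI (the load balancer name az30305a-lb is assumed):

az network lb address-pool list \
  --resource-group az30305a-labRG \
  --lb-name az30305a-lb \
  -o table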
  • Generate some HTTP traffic to the load balancer’s public IP address and note that the responses from the servers are delivered in round-robin fashion.

roger@Azure:~$ for i in {1..4}; do curl 52.255.183.130; done
Hello World from az30305a-vm0
Hello World from az30305a-vm1
Hello World from az30305a-vm0
Hello World from az30305a-vm1

 

  • Check out the load balancing rule in the load balancer.

 

  • By enabling session persistence, all traffic from one client will be directed to the same VM.

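A sketch of switching the rule to client-IP affinity from the CLI (load balancer and rule names assumed):

az network lb rule update \
  --resource-group az30305a-labRG \
  --lb-name az30305a-lb \
  --name myHTTPRule \
  --load-distribution SourceIP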

Inbound NAT Rules

 

  • This NAT rule maps port 33891 on the load balancer’s public IP to VM1’s RDP port 3389.

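A sketch of how such a rule could be created from the CLI (load balancer and rule names assumed):

az network lb inbound-nat-rule create \
  --resource-group az30305a-labRG \
  --lb-name az30305a-lb \
  --name RDP-vm1 \
  --protocol Tcp \
  --frontend-port 33891 \
  --backend-port 3389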
 

  • Test connectivity to the front-end port

 

curl -v telnet://<lb_IP_address>:33890

 


Creating An Availability Set From Portal

 

 

 


Create Linux virtual machines in an availability set
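
A minimal end-to-end sketch, assuming placeholder names and image alias: create the availability set, then loop over two Linux VM creations inside it.

az vm availability-set create \
  --resource-group myRG \
  --name myAvailSet

# Create two VMs in the set; --no-wait starts both deployments in parallel
for i in 0 1; do
  az vm create \
    --resource-group myRG \
    --name myLinuxVM$i \
    --availability-set myAvailSet \
    --image Ubuntu2204 \
    --admin-username azureuser \
    --generate-ssh-keys \
    --no-wait
done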