PowerShell, Credentials & Azure

Not the snappiest title, I know, but it describes perfectly what this post is about. I have been looking into handling credentials in PowerShell for some scripts I am writing.

I’m sure you will all have needed to include credentials in your scripts at some time or other, particularly if your scripts are creating resources in Azure. The following command lets you capture credentials in a PSCredential object for easy use within your script.

$Cred = Get-Credential

This command displays a dialog box where you can enter a username and password as shown below.

These details are then stored in the PSCredential object so that you can use them when required. You can even prepopulate the user name field by using the -Credential parameter.

$Cred = Get-Credential -Credential Domain\User01

But what if I don’t want a prompt?

The above dialog box is an easy and safe way to add credentials to a script, but what if your script is designed not to be interactive?

Well, instead of using Get-Credential, we can create the PSCredential object directly.

First of all we need to create an encrypted password by doing the following.

$Password = ConvertTo-SecureString "Password01" -AsPlainText -Force

We can then create the new PSCredential object.

$cred = New-Object System.Management.Automation.PSCredential ("User01", $Password)
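The resulting object behaves exactly like the one returned by Get-Credential. A quick way to convince yourself (and to see why a plain-text password in a script is a problem) is to inspect its members:

```powershell
# Build the credential exactly as above, then inspect it
$Password = ConvertTo-SecureString "Password01" -AsPlainText -Force
$Cred = New-Object System.Management.Automation.PSCredential ("User01", $Password)

$Cred.UserName                          # the user name is stored in clear
$Cred.Password                          # the password remains a SecureString
$Cred.GetNetworkCredential().Password   # ...but can be recovered when needed
```

Any cmdlet with a -Credential parameter will accept this object directly.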

Avoiding storing the password within the script

The problem with this approach is that the password appears within the script in plain text, so anyone with access to the script has access to the password.

To get around this problem the password needs to be stored outside of the script, preferably in such a way that the user running the script can’t see the password. Within Azure this can be done by using a Key Vault: an object used to store information that needs to be kept secure. The following should be done outside of your script.

Create your Key Vault.

New-AzKeyVault -Name "<your-unique-keyvault-name>" -ResourceGroupName "myResourceGroup" -Location "UK South"

Create your password and store it in the Key Vault.

$Password = ConvertTo-SecureString "Password01" -AsPlainText -Force
Set-AzKeyVaultSecret -VaultName "<your-unique-keyvault-name>" -Name "MyPassword" -SecretValue $Password

Then, within your script, get the password and create the credential.

$LocalPassword = (Get-AzKeyVaultSecret -VaultName "<your-unique-keyvault-name>" -Name "MyPassword").SecretValue
$Cred = New-Object System.Management.Automation.PSCredential ("User01", $LocalPassword)

If you want to hide the user name of the credential as well simply add an additional secret to the Key Vault and set it within your script in the same way as was done for the password.
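For example, assuming a second secret named MyUserName has been added to the vault, the fragment becomes something like the following (the secret names are illustrative, and the -AsPlainText switch needs a recent Az.KeyVault module):

```powershell
# Both values now come from the Key Vault rather than the script.
# "MyUserName" and "MyPassword" are illustrative secret names.
$UserName = Get-AzKeyVaultSecret -VaultName "<your-unique-keyvault-name>" -Name "MyUserName" -AsPlainText
$Password = (Get-AzKeyVaultSecret -VaultName "<your-unique-keyvault-name>" -Name "MyPassword").SecretValue
$Cred = New-Object System.Management.Automation.PSCredential ($UserName, $Password)
```

The user running the script now sees neither value, provided they have no direct access to the Key Vault secrets.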

Migrate a physical device to Azure

I was working with a client recently who needed to move away from Windows 7. Most of the existing devices could simply be replaced with Windows 11 ones, as the software they used was either already packaged for deployment or could be packaged to enable the migration. However, some devices ran software that couldn’t be repackaged. Often this was bespoke software, so it was impossible to get an updated version, and the original developer was long gone.

As leaving the Windows 7 devices on the network was not an option (support for Windows 7 was expiring), I suggested that we should look at moving these devices to Azure. Although this would not remove Windows 7 from the network, it would be much easier to isolate the devices from the rest of the network and only allow required access to resources. It would also help to ensure that users used their new Windows 11 devices for day-to-day work and only used the Azure device for its specific purpose. The initial downside of Azure cost can also be considered an upside, because it allows the additional costs to be charged back to the department owning the devices, thereby providing a financial incentive to migrate away from Windows 7.

Step One: Take an image of the volumes

The first thing to do is take an image of the disk or disks that need to be migrated. This is done using a tool called Disk2VHD, which can be downloaded from https://learn.microsoft.com/en-us/sysinternals/downloads/disk2vhd.

As its name suggests, it creates a VHD file from a physical disk.

Don’t forget to untick the “Use Vhdx” option, otherwise you will need to convert the resulting VHDX image to VHD to be able to perform the next step in the process. I had to add a USB disk to the device so that I had somewhere to create the large VHD file, as the Disk2VHD tool does not seem to support network drives.

I also ensured that RDP was enabled on the device, and created a local account and added it to the local Administrators group, so that I would be able to RDP to the device once it was in Azure.

If additional drives are used on the device, each one can be imaged and then uploaded to a Managed Disk in Azure.

Step Two: Upload the VHD into an Azure Managed Disk

Uploading the disk image to Azure is done using PowerShell. It needs to be done from a device with Hyper-V enabled, as the PowerShell command uses Hyper-V functionality during the process.

Start by installing the Az.Compute module

Install-Module Az.Compute

Connect to your Azure Tenant

Connect-AzAccount
Create a managed disk in Azure from your vhd file

Add-AzVhd -LocalFilePath <path to vhd file> -ResourceGroupName <resource group name> -Location <location (e.g. UKSOUTH)> -DiskName <name for the managed disk>

During the uploading process the command resizes the vhd file by creating a copy of the file, so ensure you have sufficient disk space to allow this to happen.

Step Three: Create the Virtual Machine

The Add-AzVHD command creates a new Managed Disk and uploads the VHD image to it. Once that has completed a VM needs to be created. This is done from within the Managed Disk, so open the Managed disk in the Azure Portal.

Check on the Overview blade that the Disk state is Unattached.

Then select the Create VM option at the top of the page. Fill in all the elements you normally need for a VM and the VM will be created, with the Managed disk attached.
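If you prefer to script this step too, the portal actions can be approximated in PowerShell. The sketch below is an assumption rather than what I actually ran: the resource names are placeholders, and it assumes you already have a subnet ID to attach the NIC to.

```powershell
# Fetch the managed disk that Add-AzVhd created (names are placeholders)
$Disk = Get-AzDisk -ResourceGroupName "myResourceGroup" -DiskName "MyMigratedDisk"

# A NIC for the VM; $SubnetId is assumed to hold the ID of an existing subnet
$Nic = New-AzNetworkInterface -Name "MyVM-nic" -ResourceGroupName "myResourceGroup" -Location "UK South" -SubnetId $SubnetId

# Build the VM configuration, attaching the existing OS disk rather than creating one
$VmConfig = New-AzVMConfig -VMName "MyVM" -VMSize "Standard_D2s_v3"
$VmConfig = Set-AzVMOSDisk -VM $VmConfig -ManagedDiskId $Disk.Id -CreateOption Attach -Windows
$VmConfig = Add-AzVMNetworkInterface -VM $VmConfig -Id $Nic.Id

New-AzVM -ResourceGroupName "myResourceGroup" -Location "UK South" -VM $VmConfig
```

The key detail is -CreateOption Attach, which tells Azure to boot from the uploaded disk as-is instead of provisioning from an image.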

PowerShell: Automatically Deploy an Azure Test or Development Environment as Code (IaC)

We often need to build testing, packaging or development environments in Azure. Often these are built in an ad-hoc way, manually. This is fine, if a little time-consuming, but Azure makes it easy to create scripts that automate the provisioning of these environments. In fact, automation should really be the preferred method of deployment, as it ensures that environments can be deployed or re-deployed rapidly whilst maintaining build quality.

The script that I am going to describe here uses PowerShell Azure cmdlets to provide a very flexible method of deploying the Azure environment, and deploying as many fully configured VMs (desktops or servers) as you may need.

This script relies on the Az PowerShell module, which should be installed on the workstation prior to running the script using the following command. The script uses functions, to minimise code and simplify reuse.

Install-Module -Name Az

Azure Environment and Networking

All the variables required for the script are set at the beginning, and I have added comments in the script describing all of these. All output is sent to the console.

The ConfigureNetwork function creates a Virtual Network, a subnet and a Network Security Group (NSG). The NSG also has a rule enabled for RDP; you can add other rules if you need them. If you don’t need the networking configured, simply comment out the ConfigureNetwork call later in the script.

function ConfigureNetwork {
    # Assumes $VNetPrefix and $SubnetPrefix (e.g. "10.0.0.0/16" and "10.0.0.0/24") are set with the other variables at the top of the script
    $virtualNetwork = New-AzVirtualNetwork -ResourceGroupName $RGName -Location $Location -Name $VNet -AddressPrefix $VNetPrefix
    $subnetConfig = Add-AzVirtualNetworkSubnetConfig -Name default -AddressPrefix $SubnetPrefix -VirtualNetwork $virtualNetwork
    $rule1 = New-AzNetworkSecurityRuleConfig -Name rdp-rule -Description "Allow RDP" -Access Allow -Protocol Tcp -Direction Inbound -Priority 100 -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange 3389
    $nsg = New-AzNetworkSecurityGroup -ResourceGroupName $RGName -Location $Location -Name $NsgName -SecurityRules $rule1 # $rule1, $rule2 etc
    If ($nsg.ProvisioningState -eq "Succeeded") {Write-Host "Network Security Group created successfully"} Else {Write-Host "*** Unable to create or configure Network Security Group! ***"}
    $Vnsc = Set-AzVirtualNetworkSubnetConfig -Name default -VirtualNetwork $virtualNetwork -AddressPrefix $SubnetPrefix -NetworkSecurityGroup $nsg
    $virtualNetwork = $virtualNetwork | Set-AzVirtualNetwork
    If ($virtualNetwork.ProvisioningState -eq "Succeeded") {Write-Host "Virtual Network created and associated with the Network Security Group successfully"} Else {Write-Host "*** Unable to create the Virtual Network, or associate it to the Network Security Group! ***"}
}

I wanted the script to support fully configured VMs, which can be configured using scripts and Azure’s Custom Script Extension functionality. This enables scripts to be run during, or after, VM deployment. These scripts need to be stored somewhere, so I have included a CreateStorageAccount function.

function CreateStorageAccount {
    If ($StorAccRequired -eq $True) {
        $storageAccount = New-AzStorageAccount -ResourceGroupName $RGName -AccountName $StorAcc -Location $Location -SkuName Standard_LRS
        $ctx = $storageAccount.Context
        $Container = New-AzStorageContainer -Name $ContainerName -Context $ctx -Permission Container
        If ($storageAccount.StorageAccountName -eq $StorAcc -and $Container.Name -eq $ContainerName) {Write-Host "Storage Account and container created successfully"} Else {Write-Host "*** Unable to create the Storage Account or container! ***"}
        #$BlobUpload = Set-AzStorageBlobContent -File $BlobFilePath -Container $ContainerName -Blob $Blob -Context $ctx
        Get-ChildItem -Path $ContainerScripts -File -Recurse | Set-AzStorageBlobContent -Container $ContainerName -Context $ctx
    } Else {
        Write-Host "Creation of Storage Account and Storage Container not required"
    }
}

This function creates a Storage Account and a Container (for holding the scripts). It then copies all files found in the path held in the $ContainerScripts variable up to the Container. Again, if this functionality is not required (e.g. you are storing your scripts in GitHub), you can comment out the CreateStorageAccount call later in the script.

Creating the VM/s

To provide the correct details for the image, the available SKUs need to be listed. This is done with the following command (don’t forget to enter the location that you want to use when creating the resource, as availability varies between regions).

Get-AzVMImageSku -Location "uksouth" -PublisherName "MicrosoftWindowsServer" -Offer "WindowsServer"

To identify available Windows 10 SKUs, use this command.

Get-AzVMImageSku -Location "uksouth" -PublisherName "MicrosoftWindowsDesktop" -Offer "Windows-10"

From this data we can put together the image name to be used for the VM, in the form Publisher:Offer:Sku:Version. For example, to deploy the latest 20h2 Windows 10 Pro image, we would use the following string for the ImageName parameter.

MicrosoftWindowsDesktop:Windows-10:20h2-pro:latest
The CreateVMp function creates the VM, sets the local Admin User and Password, and creates a Public IP address.

function CreateVMp($VMName) {
    $PublicIpAddressName = $VMName + "-ip"

    $Params = @{
        ResourceGroupName = $RGName
        Name = $VMName
        Size = $VmSize
        Location = $Location
        VirtualNetworkName = $VNet
        SubnetName = "default"
        SecurityGroupName = $NsgName
        PublicIpAddressName = $PublicIpAddressName
        ImageName = $VmImage
        Credential = $VMCred
    }

    $VMCreate = New-AzVm @Params
    If ($VMCreate.ProvisioningState -eq "Succeeded") {Write-Host "Virtual Machine $VMName created successfully"} Else {Write-Host "*** Unable to create Virtual Machine $VMName! ***"}
}

Configuring the VM/s

Configuring VMs is done by means of a script, run on the newly provisioned VM. Obviously, a script allows you to do anything that you want. In my example the script is run from the Storage Account but copies media from a GitHub location and then installs Orca. You can use the same technique to do pretty much anything that you need to.

The script is run using the Custom Script Extension functionality in Azure. The function, RunVMConfig, can be run against any Azure Windows VM; the VM does not have to have been created using this script.

function RunVMConfig($VMName, $BlobFilePath, $Blob) {

    $Params = @{
        ResourceGroupName = $RGName
        VMName = $VMName
        Location = $Location
        FileUri = $BlobFilePath
        Run = $Blob
        Name = "ConfigureVM"
    }

    $VMConfigure = Set-AzVMCustomScriptExtension @Params
    If ($VMConfigure.IsSuccessStatusCode -eq $True) {Write-Host "Virtual Machine $VMName configured successfully"} Else {Write-Host "*** Unable to configure Virtual Machine $VMName! ***"}
}

The script that will be run is taken from the Storage Account Container, but the function will also accept the raw path to a file in a GitHub repository, in which case you would not need to create the Storage Account.

Script Main Body

The main body of the script is actually quite short, due to the use of functions. The Resource Group is created, and then the ConfigureNetwork and CreateStorageContainer functions are called (if required, they can be commented out).

# Main Script

# Create Resource Group
$RG = New-AzResourceGroup -Name $RGName -Location $Location
If ($RG.ResourceGroupName -eq $RGName) {Write-Host "Resource Group created successfully"}Else{Write-Host "*** Unable to create Resource Group! ***"}

# Create VNet, NSG and rules (Comment out if not required)
ConfigureNetwork

# Create Storage Account and copy media (Comment out if not required)
CreateStorageAccount

# Build VM/s
$Count = 1
While ($Count -le $NumberOfVMs) {
    Write-Host "Creating and configuring $Count of $NumberOfVMs VMs"
    $VM = $VmNamePrefix + $VmNumberStart
    CreateVMp "$VM"
    RunVMConfig "$VM" "https://packagingstoracc.blob.core.windows.net/data/VMConfig.ps1" "VMConfig.ps1"
    # Shutdown VM if $VmShutdown is true
    If ($VmShutdown) {
        $Stopvm = Stop-AzVM -ResourceGroupName $RGName -Name $VM -Force
        If ($Stopvm.Status -eq "Succeeded") {Write-Host "VM $VM shutdown successfully"} Else {Write-Host "*** Unable to shutdown VM $VM! ***"}
    }
    $VmNumberStart++
    $Count++
}

In order to provision multiple VMs, a While loop is used to rerun the code until the required number of VMs has been provisioned. The VM name is created by concatenating variables, and the name is passed first to the CreateVMp function and then to the RunVMConfig function. A variable at the beginning of the script is used to check whether the newly provisioned VM should be shut down or not. This is very useful as an aid to managing costs.

The full script is located here.


The VM config script I used is written to output into the Windows Event Log for ease of fault finding. It is located here.


The script outputs to the console, and successful completion should look something like this.

Resource Group created successfully
Network Security Group created successfully
Virtual Network created and associated with the Network Security Group successfully
Storage Account and container created successfully
Account              SubscriptionName          TenantId                             Environment
-------              ----------------          --------                             -----------
graham@*********.*** Microsoft Partner Network ********-****-****-****-********73ab AzureCloud 

ICloudBlob                         : Microsoft.Azure.Storage.Blob.CloudBlockBlob
BlobType                           : BlockBlob
Length                             : 2034
IsDeleted                          : False
BlobClient                         : Azure.Storage.Blobs.BlobClient
BlobProperties                     : Azure.Storage.Blobs.Models.BlobProperties
RemainingDaysBeforePermanentDelete : 
ContentType                        : application/octet-stream
LastModified                       : 11/30/2020 10:03:49 AM +00:00
SnapshotTime                       : 
ContinuationToken                  : 
Context                            : Microsoft.WindowsAzure.Commands.Storage.AzureStorageContext
Name                               : VMConfig.ps1

Creating and configuring 1 of 1 VMs
Virtual Machine PVMMSI500 created successfully
Virtual Machine PVMMSI500 configured successfully
VM PVMMSI500 shutdown successfully

Phased Deployment of Azure Conditional Access Multi-factor Authentication (MFA) using PowerShell

I recently had a client that needed to deploy Azure MFA, controlled by a conditional access policy, to around 30,000 users. Now, obviously, from a technical point of view this is quite straightforward; however, because of the large number of users, and the fact that user communications and support needed to be managed, I wanted this to be done in a controlled way. I didn’t want to overload the helpdesk, or cause business problems for the users, due to unexpected issues.

I therefore wrote 3 scripts that would allow us to easily manage the deployment and ensure that things progressed in a tightly controlled way.

The process that was to be followed was this.

  1. Create the Conditional Access policy and assign to an MFA Deployment group.
  2. Identify all the affected users and gather their User Principal Names into multiple batches. Create files with, say, 500 users in each file. This ensures that users are enabled for MFA in small, manageable batches.
  3. Communications are sent to the users telling them to register their additional authentication methods, together with the URL of the correct web page (aka.ms/MFASetup *). Users will automatically be prompted to register additional authentication methods when they first log in once they are enabled for MFA; however, many users tend to ignore this if they can, so it’s better to try to get them to register before they need to. This avoids issues building up at the helpdesk when users wait until the last minute to register and then encounter problems.
  4. Run the CheckVerificationStatus.ps1 script. This script reads the UPNs from the Input File and lists each UPN together with a list of their registered Strong Authentication Methods. If nothing is listed for a user then they have not registered additional authentication methods for MFA. Communications could then be sent to those users to request that they register.
  5. Run the EnableMFA.ps1 script that takes the UPN names from a file, ensures that sufficient EMS licenses exist, provisions the user with an EMS license, then adds the user to the MFA deployment group.
  6. If irresolvable problems occur, run the DisableMFA.ps1 script which will back-out MFA for all users in the batch, or create a list of users to be backed out and run DisableMFA using that file.

* The aka.ms/MFASetup URL will send each user to the Additional Security Verification page for their own ID. If they haven’t registered additional security verification previously they will be prompted to configure it. If they have configured additional security verification previously they will be shown the page below.
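Step 2 above, gathering the UPNs into batch files, can be sketched with a small helper like the following. The function name, paths and batch size are my own illustrative choices, not part of the original scripts; it also skips blank lines, which the scripts below complain about.

```powershell
# Hypothetical helper: split a master UPN list into batch files of a given
# size, skipping blank lines so the batch files are clean.
function Split-UpnList {
    param(
        [string]$InputFile,
        [string]$OutputFolder,
        [int]$BatchSize = 500
    )
    # @() guards against Get-Content returning a single string for a one-line file
    $Upns  = @(Get-Content -Path $InputFile | Where-Object { $_.Trim() -ne "" })
    $Batch = 0
    for ($i = 0; $i -lt $Upns.Count; $i += $BatchSize) {
        $Batch++
        $End = [Math]::Min($i + $BatchSize, $Upns.Count) - 1
        $Upns[$i..$End] | Set-Content -Path (Join-Path $OutputFolder ("UpnBatch{0}.txt" -f $Batch))
    }
}

# Example: Split-UpnList -InputFile "C:\Data\AllUsers.txt" -OutputFolder "C:\Data" -BatchSize 500
```

Each UpnBatch file can then be used as the $InputFile for the scripts below.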

The input files that provide the UPN list are simply TXT files listing each UPN on its own line. Try to ensure that there are no blank lines at the end of the file, otherwise you will see the following error in the PowerShell console for each blank line. It doesn’t affect how the scripts work, though. The output file is simply a log of what the script has done.

Get-MsolUser : Cannot bind argument to parameter 'UserPrincipalName' because it is an empty string.
At line:10 char:52
+     $CurrentUser = Get-MsolUser -UserPrincipalName $UPN
+                                                    ~~~~
    + CategoryInfo          : InvalidData: (:) [Get-MsolUser], ParameterBindingValidationException
    + FullyQualifiedErrorId : ParameterArgumentValidationErrorEmptyStringNotAllowed,Microsoft.Online.Administration.Automation.GetUser


Import-Module -Name MSOnline

$InputFile = "C:\Data\UpnList.txt" # List of UPNs
$OutputFile = "C:\Data\Output.txt" # Output file

# Read in list of users who will be enabled for MFA
$UPNs = Get-Content -Path $InputFile

ForEach ($UPN in $UPNs) {
    $CurrentUser = Get-MsolUser -UserPrincipalName $UPN
    If ($CurrentUser.StrongAuthenticationMethods.Count -eq 0) {
        $UPN + "`t" + "nul" >> $OutputFile
    } Else {
        $UPN + "`t" + $CurrentUser.StrongAuthenticationMethods.MethodType >> $OutputFile
    }
}

After prompting for Azure credentials this script takes input from the $InputFile and outputs tab-separated data to the $OutputFile. This file can easily be opened in Excel for data manipulation. An example of the output is shown below, showing the Strong Authentication Methods registered for each user. Users with no methods listed have not registered any additional security verification methods.

graham.higginson@xxxxxx.com	OneWaySMS TwoWayVoiceMobile
joe.bloggs@xxxxxx.com	PhoneAppOTP PhoneAppNotification OneWaySMS TwoWayVoiceMobile


Import-Module -Name MSOnline
Import-Module -Name AzureAD # Needed for Get-AzureADSubscribedSku

$InputFile = "C:\Data\UpnList.txt" # List of UPNs
$OutputFile = "C:\Data\Output.txt" # Output file
$License = "xxxxx:EMS" # Name of license
$GroupID = "4adcacb1-xxxx-xxxx-xxxx-xxxxxxxxxxxx" # GUID of the MFA Deployment group

# Read in list of users who will be enabled for MFA
$UPNs = Get-Content -Path $InputFile

# Identify how many available EMS licenses exist
$Licenses = Get-AzureADSubscribedSku | where {$_.SkuPartNumber -eq 'EMS'}
$PrepaidLicenses = $Licenses.PrePaidUnits.Enabled
$ConsumedLicenses = $Licenses.ConsumedUnits
$AvailableLicenses = $PrepaidLicenses - $ConsumedLicenses

If ($UPNs.Count -lt $AvailableLicenses) {
     Write-Warning "Sufficient licenses are available"
     Write-Warning "Script continuing..."
} Else {
     Write-Warning "Insufficient licenses are available"
     Write-Warning "Script stopping!!"
     Exit
}

# Add EMS license to user and add to MFA group
ForEach ($UPN in $UPNs) {
    $CurrentUser = Get-MsolUser -UserPrincipalName $UPN
    If ($CurrentUser.Licenses.AccountSkuId -Contains $License) {
        "$UPN already has an EMS license assigned" >> $OutputFile
    } Else {
        $Error.Clear() # So that $Error[0] reflects only the next command
        Set-MsolUserLicense -UserPrincipalName $UPN -AddLicenses $License
        If ($Error[0]) {"Unable to add license to $UPN" >> $OutputFile} Else {"Successfully added license to $UPN" >> $OutputFile}
    }

    $Error.Clear()
    Add-MsolGroupMember -GroupObjectId $GroupID -GroupMemberType User -GroupMemberObjectId $CurrentUser.ObjectId
    If ($Error[0]) {"Unable to add $UPN to the MFA group" >> $OutputFile} Else {"Successfully added $UPN to MFA group" >> $OutputFile}
}

After prompting for Azure credentials this script reads in the list of user UPNs and checks that enough licenses are available for the number of users to be enabled for MFA. If there aren’t enough licenses the script will exit. Otherwise the script will assign an EMS license to each user, unless they already have a license assigned. The script then adds each user to the specified MFA Deployment group.


Import-Module -Name MSOnline

$InputFile = "C:\Data\UpnList.txt" # List of UPNs
$OutputFile = "C:\Data\Output.txt" # Output file
$License = "xxxxx:EMS" # Name of license
$GroupID = "4adcacb1-xxxx-xxxx-xxxx-xxxxxxxxxxxx" # GUID of the MFA Deployment group

# Read in list of users who will be enabled for MFA
$UPNs = Get-Content -Path $InputFile

ForEach ($UPN in $UPNs) {
    $CurrentUser = Get-MsolUser -UserPrincipalName $UPN
    $Error.Clear() # So that $Error[0] reflects only the next command
    Remove-MsolGroupMember -GroupObjectId $GroupID -GroupMemberType User -GroupMemberObjectId $CurrentUser.ObjectId
    If ($Error[0]) {"Unable to remove $UPN from MFA group" >> $OutputFile} Else {"Successfully removed $UPN from MFA group" >> $OutputFile}
    $Error.Clear()
    Set-MsolUserLicense -UserPrincipalName $UPN -RemoveLicenses $License
    If ($Error[0]) {"Unable to remove license from $UPN" >> $OutputFile} Else {"Successfully removed license from $UPN" >> $OutputFile}
    $UPN + "`t" + "Removed from Group" >> $OutputFile
}

After prompting for Azure credentials this script reads in the list of UPNs and then attempts to remove the user from the MFA Deployment group. It then attempts to remove the EMS license from the user.

*** You may want to comment out the license removal if many of your users already have EMS licenses assigned prior to rolling out MFA. ***

This solution was designed for a very particular use case, and may not be a perfect fit for every requirement, but I hope it provides food for thought. Feel free to take the bits that work for you, to produce your own solution.

Troubleshooting a Custom Script Extension as part of an ARM Template Deployment

I recently decided to test the use of a Custom Script Extension, as I wanted to deploy a VM with client software installed. I created a PowerShell script that downloaded, extracted and installed the MSI tool Orca. If I could get this to deploy and run successfully it would give me a basis to install any software I needed, plus perform any other configuration I might need.

I added lots of error logging to the script so that I would have plenty of information when it came to troubleshooting. The script I used is located here.

Create ARM Template

I took an existing ARM template that I used to create a Windows 10 VM, renamed it and added the Custom Script Extension code. The template is here and the parameter file I used is here.

The Custom Script Extension code itself is shown here.
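In outline, the extension resource in the template takes roughly the following shape. This is a sketch rather than my exact code: the apiVersion, handler version and file URI are illustrative placeholders.

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "apiVersion": "2020-06-01",
  "name": "[concat(parameters('virtualMachineName'),'/config-app')]",
  "location": "[parameters('location')]",
  "dependsOn": [
    "[resourceId('Microsoft.Compute/virtualMachines', parameters('virtualMachineName'))]"
  ],
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.10",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "fileUris": [ "<URL to VMConfig.ps1>" ],
      "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File VMConfig.ps1"
    }
  }
}
```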

To deploy the template I used the following PowerShell command (after installing the Az module).

New-AzResourceGroupDeployment -Name TestDeployment -ResourceGroupName HiggCon -TemplateFile C:\Dev\VM\VMcse.template.json -TemplateParameterFile C:\Dev\VM\VM.parameters.json

And that is where the problems started…

There is a part of me that always hopes that something like this will work first time, but I have to be honest, it hardly ever does! So the error I got back is below.

The deployment failed

I looked back through my Custom Script Extension code (which I had got, and edited, from the Microsoft documentation), and found the following line that I hadn’t noticed.

"name": "virtualMachineName/config-app",

Hmm, that didn’t look right, so I changed it to this.

"name": "[Concat(parameters('virtualMachineName'),'/config-app')]",

Hurrah, the deployment was then successful…but.

Looking in Azure, I could see that the VM appeared to have been created successfully. Unfortunately, after logging into the VM, Orca wasn’t installed.

I looked in the Event Log. There were no entries from my script, so it looked like the script hadn’t run. But why?

First I looked in the C:\WindowsAzure\Logs\WaAppAgent.Log file. It looked like some Custom Script Extensions had run successfully, but they looked Azure-specific, with no hint that my script had run. After some digging I managed to find the VMConfig.ps1 file (my script) in C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.10.8\Downloads\0. That showed that the script had been copied down correctly. I tried running the script manually…and got the “Running scripts is disabled” message. Doh!

I changed my command to execute to the following, deleted the VM and reran the deployment.

"commandToExecute": "powershell -ExecutionPolicy Unrestricted -File VMConfig.ps1"

Success! My Deployment worked AND my script worked. Orca was installed successfully. My script had also written successfully to the Event Viewer.

I looked in C:\WindowsAzure\Logs\WaAppAgent.Log and found these entries near the bottom.

It’s not exactly clear, but I think it is related to my script, as it wasn’t there previously. The middle line includes “ExtensionName: '' seqNo '0' reached terminal state”. My script was downloaded to a directory named 0, so my thinking is that they are related. The end of the last line, stating “completed with status: RunToCompletion”, sounds right too. Unfortunately it is difficult to know definitively.


So what have we learnt? Well, first of all, I think that when writing a script to deploy using the Custom Script Extension, it makes a lot of sense to include as much logging as possible, and to test it fully before trying to deploy, as it will be difficult to determine what is wrong otherwise.

The deployment can really be split into two sections, Deployment and Execution.


It is easy to misconfigure the ARM template, as I found to my cost. I used Visual Studio Code to edit the template, as it supports ARM templates and highlights syntax errors. Make sure all errors are resolved before trying to deploy. The New-AzResourceGroupDeployment PowerShell cmdlet will tell you if the deployment was successful or not, although the error messages aren’t always easy to interpret.
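It can also help to validate the template and parameter files before attempting a full deployment. Something along these lines (using the same files as above) reports template errors without creating any resources:

```powershell
# Validates the template and parameters against the resource group;
# returns nothing on success, or the validation errors otherwise
Test-AzResourceGroupDeployment -ResourceGroupName HiggCon `
    -TemplateFile C:\Dev\VM\VMcse.template.json `
    -TemplateParameterFile C:\Dev\VM\VM.parameters.json
```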


Once successfully deployed, if you don’t get the result you are expecting, check the

C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\<version>\Downloads

directory for a subdirectory holding your script. That way you know that it is actually there. Then try running the script manually and see what error messages you get.

I also looked in

C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\<version>\Status

and found a “0.status” file that gave me a little more information on my script, and a success status code.



I also found a file named “0”, which contained, amongst other things, the URL the script was downloaded from and the command to execute. So be aware of this if you are working in a secure environment or are using a public repository.