The following is my input parameter file (parameter.json):
{
  "VNetSettings": {
    "value": {
      "name": "VNet2",
      "addressPrefixes": "10.0.0.0/16",
      "subnets": [
        {
          "name": "sub1",
          "addressPrefix": "10.0.1.0/24"
        },
        {
          "name": "sub2",
          "addressPrefix": "10.0.2.0/24"
        }
      ]
    }
  }
}
The following is my ARM template that should deploy the subnets (deploy.json):
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "VNetSettings": { "type": "object" },
    "noofsubnets": {
      "type": "int"
    },
    "newOrExisting": {
      "type": "string",
      "allowedValues": [
        "new",
        "existing"
      ]
    }
  },
  "resources": [
    {
      "condition": "[equals(parameters('newOrExisting'),'new')]",
      "type": "Microsoft.Network/virtualNetworks",
      "mode": "Incremental",
      "apiVersion": "2015-06-15",
      "name": "[parameters('VNetSettings').name]",
      "location": "[resourceGroup().location]",
      "properties": {
        "addressSpace": {
          "addressPrefixes": ["[parameters('VNetSettings').addressPrefixes]"]
        },
        "copy": [
          {
            "name": "subnets",
            "count": "[parameters('noofsubnets')]",
            "input": {
              "name": "[parameters('VNetSettings').subnets[copyIndex('subnets')].name]",
              "properties": {
                "addressPrefix": "[parameters('VNetSettings').subnets[copyIndex('subnets')].addressPrefix]"
              }
            }
          }
        ]
      }
    }
  ]
}
What the template should be doing is add these two subnets (sub1 & sub2) to the VNet in addition to any subnets that already exist. But what it is actually doing is replacing the existing subnets with the two subnets from the input file. "mode": "Incremental" should take care of this, but I'm not sure whether I'm placing it in the right spot. I'm deploying this template using the following PowerShell command:
New-AzureRmResourceGroupDeployment -Name testing -ResourceGroupName rgname -TemplateFile C:\Test\deploy.json -TemplateParameterFile C:\Test\parameterfile.json
This is expected behavior; you should read up on 'idempotence'. What you need to do is create a separate subnet resource; that way you will work around it.
{
  "apiVersion": "2016-03-30",
  "name": "vnetName/subnetName",
  "type": "Microsoft.Network/virtualNetworks/subnets",
  "properties": {
    "addressPrefix": "xx.x.x.xx"
  }
}
vnetName has to be the VNet you want to create the subnet in.
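To tie this back to the parameter file above, here is a minimal sketch (my own, not code from the question) that deploys each entry in VNetSettings.subnets as its own Microsoft.Network/virtualNetworks/subnets child resource of the already-existing VNet; because each subnet is its own resource, the other subnets on the VNet are left untouched:
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "VNetSettings": { "type": "object" }
  },
  "resources": [
    {
      "copy": {
        "name": "subnetCopy",
        "count": "[length(parameters('VNetSettings').subnets)]",
        "mode": "serial",
        "batchSize": 1
      },
      "type": "Microsoft.Network/virtualNetworks/subnets",
      "apiVersion": "2016-03-30",
      "name": "[concat(parameters('VNetSettings').name, '/', parameters('VNetSettings').subnets[copyIndex()].name)]",
      "properties": {
        "addressPrefix": "[parameters('VNetSettings').subnets[copyIndex()].addressPrefix]"
      }
    }
  ]
}
The serial copy mode is there because parallel subnet updates against the same virtual network tend to conflict; it is optional if you only ever add one subnet at a time.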
I have a Logic App that uses the Azure Data Factory action "Create a pipeline run" that works perfectly.
This is what the Logic App looks like:
The authentication method to Azure Data Factory that I use is "System assigned" managed identity.
After creating and testing the Logic App, I now want to create an ARM template to save it in the code repository for deployment; however, I'm struggling to get the authentication part of the ARM template to work. I'm not sure what the syntax should be, and I can't find anything in the Microsoft documentation.
In the Logic App resource I have added:
"identity": {
"type": "SystemAssigned"
}
This is how the connections part of the Logic app resource looks like:
"$connections": {
"value": {
"azuredatafactory": {
"connectionId": "[parameters('connections_azuredatafactory_externalid')]",
"connectionName": "[parameters('connections_azuredatafactory_name')]",
"connectionProperties": {
"authentication": {
"type": "ManagedServiceIdentity"
}
},
"id": "[concat('/subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Web/locations/francecentral/managedApis/azuredatafactory')]"
}
}
}
And this is what the connector resource looks like (I think I'm missing something here?):
{
"type": "Microsoft.Web/connections",
"apiVersion": "2016-06-01",
"name": "[parameters('connections_azuredatafactory_name')]",
"location": "francecentral",
"kind": "V1",
"properties": {
"displayName": "[parameters('connections_azuredatafactory_displayname')]",
"alternativeParameterValues": {},
"parameterValueSet": {
"name": "managedIdentityAuth",
"values": {}
},
"statuses": [
{
"status": "Ready"
}
],
"api": {
"id": "[concat('/subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Web/locations/francecentral/managedApis/azuredatafactory')]"
}
}
}
The error message I get when trying to deploy this through Visual Studio 2022 is:
Template deployment returned the following errors:
Resource Microsoft.Logic/workflows 'logic-d365-dwh-01-ip-dev-rxlse' failed with message '{
"error": {
"code": "WorkflowManagedIdentityConfigurationInvalid",
"message": "The workflow connection parameter 'azuredatafactory' is not valid. The API connection 'azuredatafactory' is not configured to support managed identity."
}
}'
Anyone who knows what the problem could be?
1) I have created an Azure Logic App with 3 actions (HTTP request, Create a pipeline run, Response).
Here is the reference image:
2) Then, to connect to ADF, I used the system-assigned managed identity and gave the Logic App access to create pipeline runs in ADF.
Here is the reference image:
Then I tested it in the portal and it succeeded.
Then I exported the ARM template and downloaded it.
Then, in Visual Studio, I created a new project of type Azure Resource Group and edited logicapp.json and the Logic App parameters file based on the template.
Then I deployed it and it succeeded.
ARM template code which I have used for reference:
{
"$schema": "[https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#"](https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#%22 "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#%22"),
"contentVersion": "1.0.0.0",
"parameters": {
"workflows_so1LP_name": {
"defaultValue": "so1LP",
"type": "String"
},
"connections_azuredatafactory_1_externalid": {
"defaultValue": "/subscriptions/<subscription-id>/resourceGroups/so1/providers/Microsoft.Web/connections/azuredatafactory-1",
"type": "String"
}
},
"variables": {},
"resources": [
{
"type": "Microsoft.Logic/workflows",
"apiVersion": "2017-07-01",
"name": "[parameters('workflows_so1LP_name')]",
"location": "centralus",
"identity": {
"type": "SystemAssigned"
},
"properties": {
"state": "Enabled",
"definition": {
"$schema": "[https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#"](https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#%22 "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#%22"),
"contentVersion": "1.0.0.0",
"parameters": {
"$connections": {
"defaultValue": {},
"type": "Object"
}
},
"triggers": {
"manual": {
"type": "Request",
"kind": "Http",
"inputs": {}
}
},
"actions": {
"Create_a_pipeline_run": {
"runAfter": {},
"type": "ApiConnection",
"inputs": {
"host": {
"connection": {
"name": "#parameters('$connections')['azuredatafactory_1']['connectionId']"
}
},
"method": "post",
"path": "/subscriptions/#{encodeURIComponent('<subscription id>')}/resourcegroups/#{encodeURIComponent('so1')}/providers/Microsoft.DataFactory/factories/#{encodeURIComponent('sodf1')}/pipelines/#{encodeURIComponent('sopipeline')}/CreateRun",
"queries": {
"x-ms-api-version": "2017-09-01-preview"
}
}
},
"Response": {
"runAfter": {
"Create_a_pipeline_run": [
"Succeeded"
]
},
"type": "Response",
"kind": "Http",
"inputs": {
"statusCode": 200
}
}
},
"outputs": {}
},
"parameters": {
"$connections": {
"value": {
"azuredatafactory_1": {
"connectionId": "[parameters('connections_azuredatafactory_1_externalid')]",
"connectionName": "azuredatafactory-1",
"connectionProperties": {
"authentication": {
"type": "ManagedServiceIdentity"
}
},
"id": "/subscriptions/<subscription-id>/<Subscriotion id>providers/Microsoft.Web/locations/centralus/managedApis/azuredatafactory"
}
}
}
}
}
}
],
"outputs": {}
}
Here is the reference image:
NOTE: I am using a free subscription, so I don't have any restrictions, but in your case maybe you have some restrictions, and that could be why you are facing the issue.
The second reason may be with how you are granting access: after creating the Logic App you gave it access to ADF with the system-assigned identity, but check whether, after creating ADF, you also gave access to the Logic App. Maybe you are skipping one of the managed identity grants, and that's why you are getting an error in the ARM template deployment. So grant access in both directions, from ADF to the Logic App and from the Logic App to ADF (see the sketch after the steps below).
Here are some images for reference for Logic App to ADF:
Go to "Access control (IAM)" of the Logic App.
Select Owner as the role.
Select the managed identity of the data factory.
Here are some images for reference for ADF to Logic App:
Go to "Access control (IAM)" of the data factory.
Select Owner as the role.
Select the managed identity of the Logic App.
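If you would rather put the role assignment itself in the template instead of clicking through Access control (IAM), here is a hedged sketch of the "give the Logic App access to ADF" direction (the other direction is analogous with the scope and principal swapped). The dataFactoryName parameter is an assumption of this sketch, and the roleDefinitionId GUID is the built-in Contributor role; substitute Owner or a narrower Data Factory role as you prefer:
{
  "type": "Microsoft.Authorization/roleAssignments",
  "apiVersion": "2022-04-01",
  "name": "[guid(resourceGroup().id, parameters('workflows_so1LP_name'), parameters('dataFactoryName'))]",
  "scope": "[concat('Microsoft.DataFactory/factories/', parameters('dataFactoryName'))]",
  "dependsOn": [
    "[resourceId('Microsoft.Logic/workflows', parameters('workflows_so1LP_name'))]"
  ],
  "properties": {
    "principalId": "[reference(resourceId('Microsoft.Logic/workflows', parameters('workflows_so1LP_name')), '2017-07-01', 'full').identity.principalId]",
    "roleDefinitionId": "[subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'b24988ac-6180-42a0-ab88-20f7382dd24c')]",
    "principalType": "ServicePrincipal"
  }
}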
Did you try using "parameterValueType": "Alternative" instead of "parameterValueSet"?
{
  "type": "Microsoft.Web/connections",
  "apiVersion": "2016-06-01",
  "name": "[parameters('connections_azuredatafactory_name')]",
  "location": "francecentral",
  "kind": "V1",
  "properties": {
    "displayName": "[parameters('connections_azuredatafactory_displayname')]",
    "customParameterValues": {},
    "parameterValueType": "Alternative",
    "api": {
      "id": "[concat('/subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Web/locations/francecentral/managedApis/azuredatafactory')]"
    }
  }
}
I am trying to pass multiple parameters to a Custom Script Extension using an ARM template. Here is a snippet of the ARM template that currently works without issue:
{
"name": "Microsoft.CustomScriptExtension-20210604105657",
"apiVersion": "2015-01-01",
"type": "Microsoft.Resources/deployments",
"properties": {
"mode": "incremental",
"templateLink": {
"uri": "https://catalogartifact.azureedge.net/publicartifactsmigration/Microsoft.CustomScriptExtension-arm.2.0.56/Artifacts/MainTemplate.json"
},
"parameters": {
"vmName": {
"value": "real-vm-name"
},
"location": {
"value": "uksouth"
},
"fileUris": {
"value": "https://realwebsite/script.ps1"
},
"arguments": {
"value": "[parameters('param1')]"
}
}
},
But whenever I add another parameter to the arguments section, the template validation fails. This is what I have tried to do:
"arguments": {
"value": "[parameters('param1')], [parameters('param2')]"
}
Please can somebody help?
Can you try to concat or format the parameters, depending on which parameters you're using, like this (a fuller sketch follows the doc links below):
"value": "[concat(parameters('param1'), parameters('param2'))]"
"value": "[format(parameters('param1'), parameters('param2'))]"
The Azure doc I referred to:
ARM Deployment Scripts
Concat function for ARM templates
Format function for ARM templates
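For context, this is roughly how the combined value might sit inside the nested deployment's parameters block from the question (param1 and param2 are assumed to be parameters of the outer template; the literal space keeps them as two separate arguments to script.ps1):
"parameters": {
  "vmName": { "value": "real-vm-name" },
  "location": { "value": "uksouth" },
  "fileUris": { "value": "https://realwebsite/script.ps1" },
  "arguments": {
    "value": "[concat(parameters('param1'), ' ', parameters('param2'))]"
  }
}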
I was searching for a way to run an ECS task. I already have a cluster and task definition settings. I just want to trigger a task using a CloudFormation template. I know that I can run a task by clicking through the console and it works fine. For CloudFormation, the approach needs to be defined properly.
Check the attached screenshots. I want to run that task using CloudFormation and pass container-override environment variables. With my current templates, it is not letting me do the same thing I can do using the console. Using the console I just need to select the following options:
1. Launch type
2. Task Definition
Family
Revision
3. VPC and security groups
4. Environment variable overrides (the rest is selected automatically)
It works from the console, but how can we do that with a CloudFormation template? Is it possible, or is there no such feature?
"taskdefinition": {
"Type" : "AWS::ECS::TaskDefinition",
"DependsOn": "DatabaseMaster",
"Properties" : {
"ContainerDefinitions" : [{
"Environment" : [
{
"Name" : "TARGET_DATABASE",
"Value" : {"Ref":"DBName"}
},
{
"Name" : "TARGET_HOST",
"Value" : {"Fn::GetAtt": ["DatabaseMaster", "Endpoint.Address"]}
}
]
}],
"ExecutionRoleArn" : "arn:aws:iam::xxxxxxxxxx:role/ecsTaskExecutionRole",
"Family" : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"TaskRoleArn" : "arn:aws:iam::xxxxxxxxxxxxxxx:role/xxxxxxxxxxxxxxx-XXXXXXXXX"
}
},
"EcsService": {
"Type" : "AWS::ECS::Service",
"Properties" : {
"Cluster" : "xxxxxxxxxxxxxxxxx",
"LaunchType" : "FARGATE",
"NetworkConfiguration" : {
"AwsvpcConfiguration" : {
"SecurityGroups" : ["sg-xxxxxxxxxxx"],
"Subnets" : ["subnet-xxxxxxxxxxxxxx"]
}
},
"TaskDefinition" : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}
}
There is no validity error in the code; I am talking about the approach. I added the image name and container name, but now it is asking for memory and CPU. It should not ask, as that is already defined; we just need to run a task.
Edited
I wanted to run a task after creation of my database and wanted to pass those database values to the task to run and complete a job.
Here is a working example of what you can do if you want to pass variables and run a task. In my case, I wanted to run a task after creation of my database, but AWS does not directly provide a feature to do so with environment variables; this is a solution that can help you trigger your ECS task.
"IAMRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"events.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
},
"Description": "Allow CloudWatch Events to trigger ECS task",
"Policies": [
{
"PolicyName": "Allow-ECS-Access",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecs:*",
"iam:PassRole",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "*"
}
]
}
}
],
"RoleName": { "Fn::Join": [ "", ["CloudWatchTriggerECSRole-", { "Ref": "DBInstanceIdentifier" }]]}
}
},
"DummyParameter": {
"Type" : "AWS::SSM::Parameter",
"Properties" : {
"Name" : {"Fn::Sub": "${AWS::StackName}-${DatabaseMaster}-EndpointAddress"},
"Type" : "String",
"Value" : {"Fn::GetAtt": "DatabaseMaster.Endpoint.Address"}
},
"DependsOn": "TaskSchedule"
},
"TaskSchedule": {
"Type": "AWS::Events::Rule",
"Properties": {
"Description": "Trigger ECS task upon creation of DB instance",
"Name": { "Fn::Join": [ "", ["ECSTaskTrigger-", { "Ref": "DBName" }]]},
"RoleArn": {"Fn::GetAtt": "IAMRole.Arn"},
"EventPattern": {
"source": [ "aws.ssm" ],
"detail-type": ["Parameter Store Change"] ,
"resources": [{"Fn::Sub":"arn:aws:ssm:eu-west-1:XXXXXXX:parameter/${AWS::StackName}-${DatabaseMaster}-EndpointAddress"}],
"detail": {
"operation": ["Create"],
"name": [{"Fn::Sub": "${AWS::StackName}-${DatabaseMaster}-EndpointAddress"}],
"type": ["String"]
}
},
"State": "ENABLED",
"Targets": [
{
"Arn": "arn:aws:ecs:eu-west-1:xxxxxxxx:cluster/NameOf-demo",
"Id": "NameOf-demo",
"RoleArn": {"Fn::GetAtt": "IAMRole.Arn"},
"EcsParameters": {
"LaunchType": "FARGATE",
"NetworkConfiguration": {
"AwsVpcConfiguration": {
"SecurityGroups": {"Ref":"VPCSecurityGroups"},
"Subnets": {"Ref":"DBSubnetName"}
}
},
"PlatformVersion": "LATEST",
"TaskDefinitionArn": "arn:aws:ecs:eu-west-1:XXXXXXXX:task-definition/NameXXXXXXXXX:1"
},
"Input": {"Fn::Sub": [
"{\"containerOverrides\":[{\"name\":\"MyContainerName\",\"environment\":[{\"name\":\"VAR1\",\"value\":\"${TargetDatabase}\"},{\"name\":\"VAR2\",\"value\":\"${TargetHost}\"},{\"name\":\"VAR3\",\"value\":\"${TargetHostPassword}\"},{\"name\":\"VAR4\",\"value\":\"${TargetPort}\"},{\"name\":\"VAR5\",\"value\":\"${TargetUser}\"},{\"name\":\"VAR6\",\"value\":\"${TargetLocation}\"},{\"name\":\"VAR7\",\"value\":\"${TargetRegion}\"}]}]}",
{
"VAR1": {"Ref":"DBName"},
"VAR2": {"Fn::GetAtt": ["DatabaseMaster", "Endpoint.Address"]},
"VAR3": {"Ref":"DBPassword"},
"VAR4": "5432",
"VAR5": {"Ref":"DBUser"},
"VAR6": "value6",
"VAR7": "eu-west-2"
}
]}
}
]
}
}
For a Fargate task, we need to specify CPU in the task definition, and memory or memory reservation in either the task or the container definition.
Environment variables should be passed to each container in ContainerDefinitions and overridden when the task is run via ECS run-task from the console or CLI (see the overrides sketch after the following template).
{
"ContainerTaskdefinition": {
"Type": "AWS::ECS::TaskDefinition",
"Properties": {
"Family": "SomeFamily",
"ExecutionRoleArn": !Ref RoleArn,
"TaskRoleArn": !Ref TaskRoleArn,
"Cpu": "256",
"Memory": "1GB",
"NetworkMode": "awsvpc",
"RequiresCompatibilities": [
"EC2",
"FARGATE"
],
"ContainerDefinitions": [
{
"Name": "container name",
"Cpu": 256,
"Essential": "true",
"Image": !Ref EcsImage,
"Memory": "1024",
"LogConfiguration": {
"LogDriver": "awslogs",
"Options": {
"awslogs-group": null,
"awslogs-region": null,
"awslogs-stream-prefix": "ecs"
}
},
"Environment": [
{
"Name": "ENV_ONE_KEY",
"Value": "Valu1"
},
{
"Name": "ENV_TWO_KEY",
"Value": "Valu2"
}
]
}
]
}
}
}
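As a rough illustration of the override step mentioned before the template above, the document below is approximately what you would pass as --overrides to aws ecs run-task (or fill into the console's container override fields); the container name and variable names mirror the sketch above, and the values are placeholders:
{
  "containerOverrides": [
    {
      "name": "container name",
      "environment": [
        { "name": "ENV_ONE_KEY", "value": "override1" },
        { "name": "ENV_TWO_KEY", "value": "override2" }
      ]
    }
  ]
}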
EDIT (from discussion in comments):
ECS run-task is not a CloudFormation resource; it can only be invoked from the console or CLI.
But if we choose to run it from a CloudFormation resource, it can be done using a CloudFormation custom resource. However, once the task ends, we have a resource in CloudFormation without an actual resource behind it. So the custom resource needs to do the following (see the sketch after this list):
on create: run the task.
on delete: do nothing.
on update: re-run the task
Force an update by changing an attribute or logical id, every time we need to run the task.
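A hedged sketch of what that custom resource could look like in the template; RunTaskFunction is an assumed Lambda (not shown) that calls ecs:RunTask on Create/Update and simply signals success on Delete, and RunToken is an assumed parameter whose only purpose is to change whenever you want the task to run again:
"RunEcsTask": {
  "Type": "AWS::CloudFormation::CustomResource",
  "DependsOn": "DatabaseMaster",
  "Properties": {
    "ServiceToken": { "Fn::GetAtt": [ "RunTaskFunction", "Arn" ] },
    "Cluster": "arn:aws:ecs:eu-west-1:xxxxxxxx:cluster/NameOf-demo",
    "TaskDefinition": "arn:aws:ecs:eu-west-1:XXXXXXXX:task-definition/NameXXXXXXXXX:1",
    "RunToken": { "Ref": "RunToken" }
  }
}
Everything under Properties except ServiceToken is passed through to the Lambda in the custom resource event, so the handler can read the cluster and task definition ARNs from there.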
I am using a series of json ARM templates to deploy Azure VMs, and am having issues passing information from one resource deployment to another.
I deploy two resources using linked templates from blob storage, one which in and of itself deploys nothing, but returns an object populated with configuration settings, and a second which then passes that output of configuration settings to another template as a parameter:
"resources": [
{
"name": "[concat(deployment().name, '-config')]",
"type": "Microsoft.Resources/deployments",
"apiVersion": "2016-09-01",
"properties": {
"mode": "Incremental",
"templateLink": {
"uri": "[variables('configurationTemplate')]",
"contentVersion": "1.0.0.0"
},
"parameters": {
"subscriptionParameters": { "value": "[variables('subscriptionParameters')]" }
}
}
},
{
"name": "[concat(deployment().name, '-vm')]",
"type": "Microsoft.Resources/deployments",
"apiVersion": "2016-09-01",
"properties": {
"mode": "Incremental",
"templateLink": {
"uri": "[variables('vmTemplate')]",
"contentVersion": "1.0.0.0"
},
"parameters": {
"configuration": { "value": "[reference(concat(deployment().name, '-config').outputs.configuration.value)]" },
"vmName": { "value": "[parameters('vmName')]" },
"vmSize": { "value": "[parameters('vmSize')]" },
"os": { "value": "[parameters('os')]" },
"managedDiskTier": { "value": "[parameters('managedDiskTier')]" },
"dataDisksToProvision": { "value": "[parameters('dataDisksToProvision')]" },
"dataDiskSizeGB": { "value": "[parameters('dataDiskSizeGB')]" },
"domainJoined": { "value": "[parameters('domainJoined')]" },
"localAdminUsername": { "value": "[parameters('localAdminUsername')]" },
"localAdminPassword": { "value": "[parameters('localAdminPassword')]" },
"numberOfNics": { "value": "[parameters('numberOfNics')]" },
"subnetName": { "value": "[parameters('subnetName')]" },
"highlyAvailable": { "value": "[parameters('highlyAvailable')]" },
"availabilitySetName": { "value": "[parameters('availabilitySetName')]" },
"availabilitySetUpdateDomains": { "value": "[parameters('availabilitySetUpdateDomains')]" },
"availabilitySetFaultDomains": { "value": "[parameters('availabilitySetFaultDomains')]" }
}
}
}
],
"outputs": {
"configuration": {
"type": "object",
"value": "[reference(concat(deployment().name, '-config')).outputs.configuration.value]"
}
}
Deploying the first resource on its own succeeds, and the output [reference(concat(deployment().name, '-config')).outputs.configuration.value] is correctly returned; it contains all the correct information and is well formed.
If I then add the second resource into the mix, the deployment fails with the following error:
08:57:41 - [ERROR] New-AzureRmResourceGroupDeployment : 08:57:41 - Error: Code=InvalidTemplate;
08:57:41 - [ERROR] Message=Deployment template validation failed: 'The template resource
08:57:41 - [ERROR] 'rcss.test.vm-0502-0757-rcss-vm' at line '317' and column '6' is not valid:
08:57:41 - [ERROR] The language expression property 'Microsoft.WindowsAzure.ResourceStack.Frontdoor.Expression.Expressions.JTokenExpression' can't be evaluated.. Please see
08:57:41 - [ERROR] https://aka.ms/arm-template-expressions for usage details.'.
If I remove the "configuration" parameter from both this parameter set and from the referenced template (the referenced template has all contents commented out to ensure we are testing only the pass-through of the parameters), then the deployment succeeds, indicating that the issue is related to the parsing of the parameter string "[reference(concat(deployment().name, '-config').outputs.configuration.value)]".
Can anyone offer any insight as to whether I need to refer to output objects from deployment resources in a specific way in the context of a linked template parameter set?
So after examining this more closely, I found that the syntax I was using was incorrect, but not reported by the parser:
"[reference(concat(deployment().name, '-config').outputs.configuration.value)]"
Should have been:
"[reference(concat(deployment().name, '-config')).outputs.configuration.value]"
Schoolboy error.
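In context, only the configuration entry in the second nested deployment's parameter set changes; the rest stay as in the question:
"parameters": {
  "configuration": { "value": "[reference(concat(deployment().name, '-config')).outputs.configuration.value]" },
  "vmName": { "value": "[parameters('vmName')]" },
  "vmSize": { "value": "[parameters('vmSize')]" }
}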
As part of a template I want to retrieve the SharedKeys of an OMS / Operational Insights Workspace, rather than having to pass it in as a parameter.
Is this possible? I'm following the documentation here
It does not appear that the Microsoft.OperationalInsights/workspaces resource provider has any list* provider operations, and I can't find any other reference than:
Get-AzureRmProviderOperation -OperationSearchString * | where {$_.Operation -like "*operational*sharedkeys*"} | FT Operation
Microsoft.OperationalInsights/workspaces/sharedKeys/action
My desired usage:
"variables": { workspaceKey: "[listKeys(parameters('workspaceResourceId'), '2015-05-01-preview').primarySharedKey]" }
In the meantime, assuming this isn't actually supported, I added a request for it on the Log Analytics UserVoice site
Per Ryan Jones, [listKeys()] against the OMS Workspace will work as expected and return a JSON object with primarySharedKey & secondarySharedKey properties:
"outputs": {
"listKeys": {
"value": "[listKeys(parameters('workspaceResourceId'), '2015-11-01-preview')]",
"type": "object"
}
}
yields:
{
"primarySharedKey":"",
"secondarySharedKey":""
}
Important Caveat:
listKeys() cannot be specified in the variables section of an ARM template, since it derives its value from runtime state.
See this blog post for how to use a Linked Template, specified as a resource, in order to retrieve the output value and assign it to a property in another resource.
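A minimal sketch of that linked-template pattern, assuming a small helper template at the (hypothetical) URL below whose only job is to call listKeys against the workspace and expose the result as an output:
{
  "name": "getWorkspaceKeys",
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2016-09-01",
  "properties": {
    "mode": "Incremental",
    "templateLink": {
      "uri": "https://example.blob.core.windows.net/templates/listWorkspaceKeys.json",
      "contentVersion": "1.0.0.0"
    },
    "parameters": {
      "workspaceResourceId": { "value": "[parameters('workspaceResourceId')]" }
    }
  }
}
The parent template can then use [reference('getWorkspaceKeys').outputs.primarySharedKey.value] in a resource property (not in variables, for the reason above).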
Alternatively, you can use it directly. Here is my final template:
(don't actually keep the keys in the output!)
{
"$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"workspaceResourceId": { "type": "string" },
"virtualMachines": { "type": "array" }
},
"variables": {
"extensionType": {
"Windows": "MicrosoftMonitoringAgent",
"Linux": "OmsAgentForLinux"
}
},
"resources": [
{
"copy": {
"name": "VMMonitoringExtensionsCopy",
"count": "[length(parameters('virtualMachines'))]"
},
"type": "Microsoft.Compute/virtualMachines/extensions",
"apiVersion": "2015-05-01-preview",
"location": "[parameters('virtualMachines')[copyIndex()].location]",
"name": "[concat(parameters('virtualMachines')[copyIndex()].name, '/Microsoft.EnterpriseCloud.Monitoring')]",
"properties": {
"publisher": "Microsoft.EnterpriseCloud.Monitoring",
"type": "[variables('extensionType')[parameters('virtualMachines')[copyIndex()].osType]]",
"typeHandlerVersion": "1.0",
"autoUpgradeMinorVersion": true,
"settings": {
"workspaceId": "[reference(parameters('workspaceResourceId'), '2015-11-01-preview').customerId]"
},
"protectedSettings": {
"workspaceKey": "[listKeys(parameters('workspaceResourceId'), '2015-11-01-preview').primarySharedKey]"
}
}
}
],
"outputs": {
"workspaceCustomerId": {
"value": "[reference(parameters('workspaceResourceId'), '2015-11-01-preview').customerId]",
"type": "string"
},
"workspacePrimarySharedKey": {
"value": "[listKeys(parameters('workspaceResourceId'), '2015-11-01-preview').primarySharedKey]",
"type": "securestring"
},
"workspaceSecondarySharedKey": {
"value": "[listKeys(parameters('workspaceResourceId'), '2015-11-01-preview').secondarySharedKey]",
"type": "securestring"
}
}
}
The array parameter virtualMachines follows this schema:
[
{ "name": "", "location": "", "osType": "" }
]
listKeys requires that you put the resource type in. So have you tried this?
"variables": { workspaceKey: "[listKeys(resourceId('Microsoft.OperationalInsights/workspaces', parameters('workspaceResourceId'), '2015-05-01-preview').primarySharedKey]" }
Unfortunately, at the moment there is nothing at all in the Azure quickstart repo on that resource, so I'm not 100% sure...
But passing it in as a parameter would be fine. You could do this: in your deployment script, before you run New-AzureRmResourceGroupDeployment, create or reuse the existing workspace, get the key, and pass it in as a parameter, with primarySharedKey defined as a parameter in the template:
$workSpace = Get-AzureRmOperationalInsightsWorkspace -ResourceGroupName $RGName -Name $workSpaceName -ErrorAction SilentlyContinue
if($workSpace -eq $null){
New-AzureRmOperationalInsightsWorkspace -ResourceGroupName $RGName -Name $workSpaceName -Location $Location
}
$keys = Get-AzureRmOperationalInsightsWorkspaceSharedKeys -ResourceGroupName $RGName -Name $workSpaceName
New-AzureRmResourceGroupDeployment <other stuff here> -primarySharedKey $keys.PrimarySharedKey