How to reference a resource that is already created in an ARM template?

I'm trying to create alerts for my Cosmos DB account using an ARM template. The Cosmos DB account is already created, so I'm not able to use dependsOn to refer to the resource.
"resources": [
{
"type": "microsoft.insights/alertrules",
"name": "[parameters('alertrules_alert_name')]",
"apiVersion": "2014-04-01",
"location": "southcentralus",
"scale": null,
"properties": {
"name": "[parameters('alertrules_alert_name')]",
"description": null,
"isEnabled": true,
"condition": {
"odata.type": "Microsoft.Azure.Management.Insights.Models.ThresholdRuleCondition",
"dataSource": {
"odata.type": "Microsoft.Azure.Management.Insights.Models.RuleMetricDataSource",
"resourceUri": "[resourceId('Microsoft.DocumentDB/databaseAccounts', parameters('databaseAccounts_cosmosaccount_name_1'))]",
"metricNamespace": null,
"metricName": "Http 401"
},
"operator": "GreaterThan",
"threshold": 1,
"windowSize": "PT30M"
},
"action": null
}
}
],
"outputs": {}
}

Please review the following documentation to enable (Classic) Alerts and Diagnostic Settings via ARM template when creating a NEW Cosmos DB resource.
1) Create a classic metric alert with a Resource Manager template
2) Automatically enable Diagnostic Settings at resource creation using a Resource Manager template
3) Azure Cosmos DB diagnostic logging
Please upvote the existing entries for ARM template functionality or create a new User Voice entry that is specific to your use case: Azure Cosmos DB User Voice
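For an account that already exists in the same subscription, here is a minimal sketch of a classic alert template along the lines of the snippet in the question (parameter names are taken from that snippet and are illustrative; the resourceId() reference in resourceUri is all that is needed, and dependsOn is only required for resources deployed by the same template):
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "alertrules_alert_name": { "type": "string" },
    "databaseAccounts_cosmosaccount_name_1": { "type": "string" }
  },
  "resources": [
    {
      "type": "microsoft.insights/alertrules",
      "apiVersion": "2014-04-01",
      "name": "[parameters('alertrules_alert_name')]",
      "location": "southcentralus",
      "properties": {
        "name": "[parameters('alertrules_alert_name')]",
        "isEnabled": true,
        "condition": {
          "odata.type": "Microsoft.Azure.Management.Insights.Models.ThresholdRuleCondition",
          "dataSource": {
            "odata.type": "Microsoft.Azure.Management.Insights.Models.RuleMetricDataSource",
            "resourceUri": "[resourceId('Microsoft.DocumentDB/databaseAccounts', parameters('databaseAccounts_cosmosaccount_name_1'))]",
            "metricName": "Http 401"
          },
          "operator": "GreaterThan",
          "threshold": 1,
          "windowSize": "PT30M"
        }
      }
    }
  ]
}
Note that resourceId() used this way only resolves to an account in the deployment's own resource group; if the existing account lives in a different resource group, pass the full resource ID in as a parameter instead.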

Related

APIM logger resource referring to AI in other subscription

Trying to enable Application Insights on an API Management service. The Application Insights resource is in another subscription. The parameter "ApplicationInsightsInstanceRI" contains the full Application Insights resource ID.
Error:
InvalidResourceType: The resource type could not be found in the namespace 'Microsoft.Insights' for api version '2019-12-01'.
"type": "Microsoft.ApiManagement/service/loggers",
"name": "[concat(parameters('apiManagementServiceName'), '/', parameters('ApplicationInsightsInstanceName'))]",
"dependsOn": ["[resourceId('Microsoft.ApiManagement/service', parameters('apiManagementServiceName'))]"],
"apiVersion": "2018-06-01-preview",
"properties": {
"loggerType": "applicationInsights",
"description": "Logger resources to APIM",
"resourceid": "[parameters('ApplicationInsightsInstanceRI')]"
"credentials": {
"instrumentationKey": "[reference(resourceId('Microsoft.Insights/component', parameters('ApplicationInsightsInstanceName')), '2019-12-01', 'Full').properties.InstrumentationKey]",
Any idea of why this error occurs?
This error was due to an invalid instrumentation key. After specifying the instrumentation key directly in my template, I was able to get the desired result and the API call is working fine.
Below is the template that worked for me.
{
  "type": "Microsoft.ApiManagement/service/loggers",
  "apiVersion": "2022-04-01-preview",
  "name": "[concat(parameters('service_HelloWorld_APimanagement_name'), '/sangammigrationmetrics')]",
  "dependsOn": [
    "[resourceId('Microsoft.ApiManagement/service', parameters('service_HelloWorld_APimanagement_name'))]"
  ],
  "properties": {
    "loggerType": "applicationInsights",
    "credentials": {
      "instrumentationKey": "{{<INSTRUMENTATION_KEY>}}"
    },
    "isBuffered": true,
    "resourceId": "[parameters('components_SangamMigrationMetrics_externalid')]"
  }
},

NotAuthorizedOrNotFound when pushing custom metric

When I try to push a custom metric to the Oracle Cloud Monitoring service using the Oracle Cloud CLI, I receive the following error:
ServiceError:
{
  "code": "NotAuthorizedOrNotFound",
  "message": "Authorization failed or requested resource not found.",
  "opc-request-id": "request id",
  "status": 404
}
This occurs when using the Administrator account and when using an instance principal which has monitoring permission.
Here is the JSON that I am pushing to the Monitoring service:
[
  {
    "namespace": "myFirstNamespace",
    "compartmentId": "tenant id",
    "resourceGroup": "myFirstResourceGroup",
    "name": "successRate",
    "dimensions": {
      "resourceId": "ocid1.exampleresource.region1.phx.exampleuniqueID",
      "appName": "myAppA"
    },
    "metadata": {
      "unit": "percent",
      "displayName": "MyAppA Success Rate"
    },
    "datapoints": [
      {
        "timestamp": "2021-06-01T22:19:20Z",
        "value": 83.0
      }
    ]
  }
]
The CLI command that I am using is:
oci monitoring metric-data post --metric-data file://metric-data.json
The OCI CLI command should be:
oci monitoring metric-data post --metric-data file://metric-data.json --endpoint https://telemetry-ingestion.{{ region }}.oraclecloud.com
replacing {{ region }} with your region. The --endpoint parameter needs to be added because custom metric data is posted to the telemetry-ingestion endpoint rather than the default monitoring endpoint.
This looks like an authorization issue. Please cross-check that the instance principal has all the required permissions assigned. Please review these documents: Publishing Custom Metrics and Overview of Monitoring.
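For reference, publishing to a custom namespace with an instance principal usually requires an IAM policy roughly along these lines (the dynamic-group and compartment names below are placeholders, and the namespace is taken from this example; check the Publishing Custom Metrics document for the exact statement your tenancy needs):
Allow dynamic-group custom-metrics-writers to use metrics in compartment myCompartment where target.metrics.namespace = 'myFirstNamespace'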

When I deploy my Azure ARM template it creates a storage account, not an alert

I created an Azure template for an alert because I want to upload the script (.json) together with the new microservice at the same time. But when I deploy this .json file it creates a new storage account, not an alert. I used the PowerShell command New-AzureRmResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup -TemplateFile c:\MyTemplates\storage.json -storageAccountType Standard_GRS. In my template I need to define the parameter kind, which only accepts a value of Storage or BlobStorage, but I want neither of these two. So how can I create an alert using a .json script file, and does anybody have a template, because MS isn't providing the correct one?
EDIT: Here is the .json file:
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "[concat('storage', uniqueString(resourceGroup().id))]",
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2016-01-01",
      "sku": {
        "name": "Standard_LRS"
      },
      "kind": "Storage",
      "id": "/subscriptions/subscriptionID/resourceGroups/resourceGroupName/providers/Microsoft.Storage/storageAccounts/storageName",
      "location": "westeurope",
      "properties": {
        "name": "tryAgain",
        "description": null,
        "isEnabled": true,
        "condition": {
          "$type": "Microsoft.WindowsAzure.Management.Monitoring.Alerts.Models.ThresholdRuleCondition, Microsoft.WindowsAzure.Management.Mon.Client",
          "odata.type": "Microsoft.Azure.Management.Insights.Models.ThresholdRuleCondition",
          "dataSource": {
            "$type": "Microsoft.WindowsAzure.Management.Monitoring.Alerts.Models.RuleMetricDataSource, Microsoft.WindowsAzure.Management.Mon.Client",
            "odata.type": "Microsoft.Azure.Management.Insights.Models.RuleMetricDataSource",
            "resourceUri": "/subscriptions/subscriptionID/resourcegroups/resourceGroupName/providers/microsoft.web/sites/name",
            "resourceLocation": null,
            "metricNamespace": null,
            "metricName": "AverageMemoryWorkingSet",
            "legacyResourceId": null
          },
          "operator": "GreaterThanOrEqual",
          "threshold": 120000000,
          "windowSize": "PT10M",
          "timeAggregation": "Average"
        },
        "actions": [
          {
            "$type": "Microsoft.WindowsAzure.Management.Monitoring.Alerts.Models.RuleWebhookAction, Microsoft.WindowsAzure.Management.Mon.Client",
            "odata.type": "Microsoft.Azure.Management.Insights.Models.RuleWebhookAction",
            "serviceUri": "Logic-app URL",
            "properties": {
              "$type": "Microsoft.WindowsAzure.Management.Common.Storage.CasePreservedDictionary`1[[System.String, mscorlib]], Microsoft.WindowsAzure.Management.Common.Storage",
              "logicAppResourceId": "/subscriptions/subscriptionID/resourceGroups/Default-Storage-WestEurope/providers/Microsoft.Logic/workflows/Microsoft-Teams-Notifier"
            }
          }
        ]
      }
    }
  ]
}
Please refer to the following reference for creating a metric alert via an Azure Resource Manager template. If you want a single ARM template that creates a storage account and then a metric alert to monitor it, make sure the alert has a dependsOn entry so that the alert rule is only created after the storage account exists. The following document covers the newer metric alerts, as opposed to the classic metric alerts.
https://learn.microsoft.com/en-us/azure/monitoring-and-diagnostics/monitoring-create-metric-alerts-with-templates
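As a rough sketch of that pattern, the resources array could pair the storage account with a Microsoft.Insights/metricAlerts resource like the following (the alert name, metric, threshold, and variables('storageName') are illustrative placeholders, not taken from the question):
"resources": [
  {
    "name": "[variables('storageName')]",
    "type": "Microsoft.Storage/storageAccounts",
    "apiVersion": "2016-01-01",
    "location": "westeurope",
    "sku": { "name": "Standard_LRS" },
    "kind": "Storage",
    "properties": {}
  },
  {
    "type": "Microsoft.Insights/metricAlerts",
    "apiVersion": "2018-03-01",
    "name": "storage-transactions-alert",
    "location": "global",
    "dependsOn": [
      "[resourceId('Microsoft.Storage/storageAccounts', variables('storageName'))]"
    ],
    "properties": {
      "severity": 3,
      "enabled": true,
      "scopes": [
        "[resourceId('Microsoft.Storage/storageAccounts', variables('storageName'))]"
      ],
      "evaluationFrequency": "PT5M",
      "windowSize": "PT15M",
      "criteria": {
        "odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria",
        "allOf": [
          {
            "criterionType": "StaticThresholdCriterion",
            "name": "HighTransactionCount",
            "metricName": "Transactions",
            "operator": "GreaterThan",
            "threshold": 1000,
            "timeAggregation": "Total"
          }
        ]
      },
      "actions": []
    }
  }
]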

How to check if a name already exists? Azure Resource Manager Template

Is it possible to check, in an ARM template, if the name for my virtual machine already exists?
I am developing a solution template for the Azure Marketplace. Maybe it is possible to mark a parameter in the UiDefinition as unique?
The goal is to reproduce this green check mark.
A couple notes...
VM Names only need to be unique within a resourceGroup, not within the subscription
Solution Templates must be deployed to empty resourceGroups, so collisions with existing resources aren't possible
For solution templates the preference is that you simply name the VMs for the user, rather than asking - use something that is appropriate for the workload (e.g. jumpbox) - not all solutions do this but we're trying to improve that experience
Given that, it's not likely we'll ever build a control that checks for naming collisions on resources without globally unique constraints.
That help?
This looks impossible, according to the documentation.
There are no such validation scenarios.
I assume that you should be using the Microsoft.Common.TextBox UI element in your createUiDefinition.json.
I have tried to reproduce the green check by creating a simple createUiDefinition.json with a Microsoft.Common.TextBox UI element, as shown below.
{
  "$schema": "https://schema.management.azure.com/schemas/0.1.2-preview/CreateUIDefinition.MultiVm.json",
  "handler": "Microsoft.Compute.MultiVm",
  "version": "0.1.2-preview",
  "parameters": {
    "basics": [
      {
        "name": "textBoxA",
        "type": "Microsoft.Common.TextBox",
        "label": "VM Name",
        "defaultValue": "",
        "toolTip": "Please enter a VM name",
        "constraints": {
          "required": true
        },
        "visible": true
      }
    ],
    "steps": [],
    "outputs": {}
  }
}
I am able to reproduce the green check beside the VM Name textbox as shown below:
However, this green check DOES NOT imply the VM Name is Available.
This is because based on my testing, even if I use an existing VM Name in the same subscription, it is still showing the green check.
Based on the official documented constraints that are supported by the Microsoft.Common.TextBox UI element, it DOES NOT VALIDATE Name Availability.
Hope this helps!
While bmoore's point is correct that it's unlikely you would ever need this for a VM (nor is there an API for it), there are other compute resources that do have global naming requirements.
As of 2022, this is now possible with the ArmApiControl UI element. It allows you to call ARM APIs as part of validation in the createUiDefinition.json. Here is an example using the check-name-availability API for an Azure App Service.
{
  "$schema": "https://schema.management.azure.com/schemas/0.1.2-preview/CreateUIDefinition.MultiVm.json#",
  "handler": "Microsoft.Azure.CreateUIDef",
  "version": "0.1.2-preview",
  "parameters": {
    "basics": [
      {}
    ],
    "steps": [
      {
        "name": "domain",
        "label": "Domain Names",
        "elements": [
          {
            "name": "domainInfo",
            "type": "Microsoft.Common.InfoBox",
            "visible": true,
            "options": {
              "icon": "Info",
              "text": "Pick the domain name that you want to use for your app."
            }
          },
          {
            "name": "appServiceAvailabilityApi",
            "type": "Microsoft.Solutions.ArmApiControl",
            "request": {
              "method": "POST",
              "path": "[concat(subscription().id, '/providers/Microsoft.Web/checknameavailability?api-version=2021-02-01')]",
              "body": "[parse(concat('{\"name\":\"', concat('', steps('domain').domainName), '\", \"type\": \"Microsoft.Web/sites\"}'))]"
            }
          },
          {
            "name": "domainName",
            "type": "Microsoft.Common.TextBox",
            "label": "Domain Name Word",
            "toolTip": "The name of your app service",
            "placeholder": "yourcompanyname",
            "constraints": {
              "validations": [
                {
                  "regex": "^[a-zA-Z0-9]{4,30}$",
                  "message": "Alphanumeric, between 4 and 30 characters."
                },
                {
                  "isValid": "[not(equals(steps('domain').appServiceAvailabilityApi.nameAvailable, false))]",
                  "message": "[concat('Error with the url: ', steps('domain').domainName, '. Reason: ', steps('domain').appServiceAvailabilityApi.reason)]"
                },
                {
                  "isValid": "[greater(length(steps('domain').domainName), 4)]",
                  "message": "The unique domain suffix should be longer than 4 characters."
                },
                {
                  "isValid": "[less(length(steps('domain').domainName), 30)]",
                  "message": "The unique domain suffix should be shorter than 30 characters."
                }
              ]
            }
          },
          {
            "name": "section1",
            "type": "Microsoft.Common.Section",
            "label": "URLs to be created:",
            "elements": [
              {
                "name": "domainExamplePortal",
                "type": "Microsoft.Common.TextBlock",
                "visible": true,
                "options": {
                  "text": "[concat('https://', steps('domain').domainName, '.azurewebsites.net - The main app service URL')]"
                }
              }
            ],
            "visible": true
          }
        ]
      }
    ],
    "outputs": {
      "desiredDomainName": "[steps('domain').domainName]"
    }
  }
}
You can copy the above code and test it in the createUiDefinition.json sandbox that Azure provides.

Data Factory: AzureSQL in- and output for pipeline activity type AzureMLBatchExecution

In Azure Data Factory, I'm trying to call an Azure Machine Learning model from a Data Factory pipeline. I want to use an Azure SQL table as input and another Azure SQL table for the output.
First I deployed a Machine Learning (classic) web service. Then I created an Azure Data Factory pipeline, using a linked service (type 'AzureML', using the Request URI and API key of the ML web service) and an input and an output dataset (type 'AzureSqlTable').
Deployment and provisioning succeeded. The pipeline starts as scheduled, but stays 'Running' without any result. The pipeline activity is not shown in Monitor & Manage: Activity Windows.
On different sites and tutorials, I only find JSON scripts using the activity type 'AzureMLBatchExecution' with blob inputs and outputs. I want to use Azure SQL input and output but I can't get this working.
Can someone provide a sample JSON script or tell me what's possibly wrong with the code below?
Thanks!
{
  "name": "Predictive_ML_Pipeline",
  "properties": {
    "description": "use MyAzureML model",
    "activities": [
      {
        "type": "AzureMLBatchExecution",
        "typeProperties": {},
        "inputs": [
          {
            "name": "AzureSQLDataset_ML_Input"
          }
        ],
        "outputs": [
          {
            "name": "AzureSQLDataset_ML_Output"
          }
        ],
        "policy": {
          "timeout": "02:00:00",
          "concurrency": 3,
          "executionPriorityOrder": "NewestFirst",
          "retry": 1
        },
        "scheduler": {
          "frequency": "Week",
          "interval": 1
        },
        "name": "My_ML_Activity",
        "description": "prediction analysis on ML batch input",
        "linkedServiceName": "AzureMLLinkedService"
      }
    ],
    "start": "2017-04-04T09:00:00Z",
    "end": "2017-04-04T18:00:00Z",
    "isPaused": false,
    "hubName": "myml_hub",
    "pipelineMode": "Scheduled"
  }
}
With a little help from a Microsoft technician, I've got this working. The JSON script above only needed to be changed in the schedule section:
"start": "2017-04-01T08:45:00Z",
"end": "2017-04-09T18:00:00Z",
A pipeline is active only between its start time and end time. Because the scheduler is set to weekly, the pipeline is triggered at the start of the week, and that date must fall between the start and end dates. For more details about scheduling, see: https://learn.microsoft.com/en-us/azure/data-factory/data-factory-scheduling-and-execution
The Azure SQL Input dataset should look like this:
{
  "name": "AzureSQLDataset_ML_Input",
  "properties": {
    "published": false,
    "type": "AzureSqlTable",
    "linkedServiceName": "SRC_SQL_Azure",
    "typeProperties": {
      "tableName": "dbo.Azure_ML_Input"
    },
    "availability": {
      "frequency": "Week",
      "interval": 1
    },
    "external": true,
    "policy": {
      "externalData": {
        "retryInterval": "00:01:00",
        "retryTimeout": "00:10:00",
        "maximumRetry": 3
      }
    }
  }
}
I added the external and policy properties to this dataset (see script above) and after that, it worked.
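For completeness, a sketch of what the matching output dataset could look like, assuming it points at a table such as dbo.Azure_ML_Output (a hypothetical name) through the same Azure SQL linked service; the external and policy settings are left out because this dataset is produced by the pipeline rather than supplied from outside:
{
  "name": "AzureSQLDataset_ML_Output",
  "properties": {
    "published": false,
    "type": "AzureSqlTable",
    "linkedServiceName": "SRC_SQL_Azure",
    "typeProperties": {
      "tableName": "dbo.Azure_ML_Output"
    },
    "availability": {
      "frequency": "Week",
      "interval": 1
    }
  }
}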