Deploying ARM template with 2 hostNameBindings returns conflict error: cannot modify because another operation is in progress

I am trying to deploy an ARM template through Azure DevOps. I've tried doing a test deployment (Test-AzResourceGroupDeployment) through PowerShell without any issues.
This issue has persisted for several weeks, and I've read some posts stating it disappeared after a few hours or after a day; however, this has not been the case for me.
In Azure DevOps my build succeeds just fine, but when I try to create a release through my release pipeline using the "Azure resource group deployment" task, it fails with the error:
"Code": "Conflict",
"Message": "Cannot modify this site because another operation is in progress. Details: Id: 4f18af87-8848-4df5-82f0-ec6be47fb599, OperationName: Update, CreatedTime: 9/27/2019 8:55:26 AM, RequestId: 691b5183-aa8b-4a38-8891-36906a5e2d20, EntityType: 3"
Update
I have since noticed that the error surfaces when trying to deploy the hostNameBindings for the site.
I have 2 different hostNameBindings in my template, which causes the failure.
It apparently fails because it tries to deploy both of them at the same time, though I am not aware of a fix for this, so any help would still be appreciated!
I tried to use the copy function, but as far as I know that will make an exact copy for both hostNameBindings, which is not what I need: they have different names and properties. Does anyone have a fix for this?

Make one hostNameBindings resource depend on the other host name binding. Then they will be executed one after another, and you should not get the same error message.
"dependsOn": [
"[resourceId('Microsoft.Web/sites/', variables('websitename'))]",
"[resourceId('Microsoft.Web/sites/hostNameBindings/',variables('websitename'), variables('firstbindingame-aftertheslash-sowithoutthewebsitename'))]"
],
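For completeness, a minimal sketch of what the two binding resources could look like with that dependency in place (websitename, firstHostname and secondHostname are my own placeholder variables, and the apiVersion is an assumption):

{
    "type": "Microsoft.Web/sites/hostNameBindings",
    "apiVersion": "2019-08-01",
    "name": "[concat(variables('websitename'), '/', variables('firstHostname'))]",
    "dependsOn": [
        "[resourceId('Microsoft.Web/sites', variables('websitename'))]"
    ],
    "properties": {
        "siteName": "[variables('websitename')]"
    }
},
{
    "type": "Microsoft.Web/sites/hostNameBindings",
    "apiVersion": "2019-08-01",
    "name": "[concat(variables('websitename'), '/', variables('secondHostname'))]",
    "dependsOn": [
        // Waiting on the first binding forces the two deployments to run one after another.
        "[resourceId('Microsoft.Web/sites/hostNameBindings', variables('websitename'), variables('firstHostname'))]"
    ],
    "properties": {
        "siteName": "[variables('websitename')]"
    }
}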

It looks like people have already noticed this issue and are trying to fix it.
https://status.azure.com/

I had the same issue when using the copy function to add multiple custom domains. Thanks to David Gnanasekaran's blog, I was able to fix it.
By default, the copy loop executes in parallel. By setting the mode to serial and the batchSize to 1, I did not receive any "operation is in progress" errors.
Here is the piece of my ARM template that sets the custom domains.
"copy": {
"name": "hostNameBindingsCopy",
"count": "[length(parameters('customDomainNames'))]",
"mode": "Serial",
"batchSize": 1
},
"apiVersion": "[variables('webApiVersion')]",
"name": "[concat(variables('webAppName'), '/', parameters('customDomainNames')[copyIndex()])]",
"type": "Microsoft.Web/sites/hostNameBindings",
"kind": "string",
"location": "[resourceGroup().location]",
"condition": "[greater(length(parameters('customDomainNames')), 0)]",
"dependsOn": [
"[resourceId('Microsoft.Web/sites', variables('webAppName'))]"
],
"properties": {
"customHostNameDnsRecordType": "CName",
"hostNameType": "Verified",
"siteName": "parameters('webAppName')"
}
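Note that the loop assumes a customDomainNames array parameter, whose declaration is not shown above; it would sit in the template's parameters section along these lines (the example hostnames in the comment are my own):

"parameters": {
    "customDomainNames": {
        "type": "array",
        // e.g. ["www.contoso.com", "contoso.com"]
        "defaultValue": []
    }
}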

Related

Issue with CI/CD release pipeline in Azure Data Factory deployment

I have created a release pipeline in Azure DevOps where I am moving the dev resources (pipeline, dataset, trigger, Integration runtime) to the Prod data factory.
The challenge is that in my dev ADF I am using an integration runtime, say ir-myonpremdata, that is self-hosted and shared, and the same IR is also present in the prod data factory as a linked self-hosted IR.
Whenever I deploy the pipelines, I have to update the generated ARM template and parameters JSON files with the following:
1. ARMTemplateForFactory.json:
{
    "name": "[concat(parameters('factoryName'), '/ir-myonpremdata')]",
    "type": "Microsoft.DataFactory/factories/integrationRuntimes",
    "apiVersion": "2018-06-01",
    "properties": {
        "type": "SelfHosted",
        "typeProperties": {
            "linkedInfo": {
                "resourceId": "[parameters('ir-myonpremdata_properties_typeProperties_linkedInfo_resourceId')]",
                "authorizationType": "Rbac"
            }
        }
    },
    "dependsOn": []
}
2. ARMTemplateParametersForFactory.json:
"ir-myonpremdata_properties_typeProperties_linkedInfo_resourceId": {
    "type": "string",
    "defaultValue": "/subscriptions/d9368466-XXXX-4d83-XXXX-bb9f336fb6a7/resourcegroups/ModernDataPlatform/providers/Microsoft.DataFactory/factories/cpo-adf-dev/integrationruntimes/ir-myonpremdata"
}
Then I am able to deploy the ADF; otherwise it keeps throwing error messages. But this is very painful: whenever someone publishes the master branch again, that manually added piece of code is removed from ARMTemplateForFactory.json and ARMTemplateParametersForFactory.json in the adf_publish branch, and the release pipeline keeps failing from then on.
I don't want to keep updating my JSON files whenever someone publishes from the ADF.
I have a similar issue... The best-practice suggestion in this situation is to have a shared IR in a separate ADF that exists just to host IRs, and then use a linked IR in all of the dev/test/prod ADFs.
https://learn.microsoft.com/en-us/azure/data-factory/continuous-integration-deployment#best-practices-for-cicd
I am trying to do this and will update on whether it worked or not.
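To illustrate the linked arrangement (the factory and subscription names below are my own placeholders): the shared IR is hosted in one dedicated factory, and the dev/test/prod factories each declare a linked IR whose linkedInfo.resourceId points at it, just like the fragment in the question:

{
    "name": "[concat(parameters('factoryName'), '/ir-myonpremdata')]",
    "type": "Microsoft.DataFactory/factories/integrationRuntimes",
    "apiVersion": "2018-06-01",
    "properties": {
        "type": "SelfHosted",
        "typeProperties": {
            "linkedInfo": {
                // Points at the IR hosted in the dedicated shared factory (adf-shared here),
                // so every environment links to the same self-hosted runtime.
                "resourceId": "/subscriptions/<subscription-id>/resourcegroups/<resource-group>/providers/Microsoft.DataFactory/factories/adf-shared/integrationruntimes/ir-myonpremdata",
                "authorizationType": "Rbac"
            }
        }
    },
    "dependsOn": []
}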

Unable to extend schema within a verified subdomain directory

I live in an enterprise environment where most of our production domains are currently non-routable (e.g. .local).
I tried extending the schema, but the non-routable domain cannot be verified, and I don't think the default .onmicrosoft.com domain can be either. My enterprise allows me to easily create subdomains, so I attached one and verified it for testing purposes, and ran into the same verified-domain error.
Per the documentation, I should be able to either use the ID of my domain name or just the schema name and get 8 random alpha characters added. Neither approach works in this case.
POST: https://graph.microsoft.com/v1.0/schemaExtensions
{
    "id": "idmdomain.sub.domain.net_Owners",
    "description": "Owners of the group",
    "targetTypes": [
        "Group"
    ],
    "properties": [
        {
            "name": "PrimaryOwners",
            "type": "String"
        },
        {
            "name": "SecondaryOwners",
            "type": "String"
        }
    ]
}
Message Received:
{
    "code": "BadRequest",
    "message": "Your organization must own the namespace idmdomain.sub.domain.net as a part of one of the verified domains.",
    "request-id": "1c7363f9-d54b-408a-8b29-2c0d2a94280a",
    "date": "2018-03-22T21:47:22"
}
From the documentation:
If you already have a vanity .com, .net, .gov, .edu or .org domain that you have verified with your tenant, you can use the domain name along with the schema name to define a unique name, in this format: {domainName}_{schemaName}.
For example, if your vanity domain is contoso.com, you can define an id of contoso_mySchema. This is the preferred option.
So in your example, idmdomain.sub.domain.net_Owners should simply be domain_Owners. It shouldn't include idmdomain, sub, net, or any '.' characters.
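In other words, the request body would become something like this (the same payload as above, with only the id changed):

POST: https://graph.microsoft.com/v1.0/schemaExtensions
{
    "id": "domain_Owners",
    "description": "Owners of the group",
    "targetTypes": [
        "Group"
    ],
    "properties": [
        {
            "name": "PrimaryOwners",
            "type": "String"
        },
        {
            "name": "SecondaryOwners",
            "type": "String"
        }
    ]
}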
Thank you Marc for pointing me in the correct direction. Even though my app had the correct delegated permissions (Directory.AccessAsUser.All), I now understand that I needed to execute this change in the user context instead of the application context, as the application context is not supported.
For those that come behind me: {domainName}_{schemaName} works if you validate your domain; if you don't, and you just use the schema name, then the generated id works as documented. I recommend reviewing the two links below, as they were what finally unlocked the puzzle for me.
Helped me understand how this is working (authentication vs authorization)
https://developer.microsoft.com/en-us/graph/docs/concepts/rest
Helped me set up Postman to quickly validate
https://blogs.msdn.microsoft.com/softwaresimian/2017/10/05/using-postman-to-call-the-graph-api-using-azure-active-directory-aad/
I should add, for the Postman route, a few changes:
Auth URL
https://login.microsoftonline.com/yourtenantid/oauth2/authorize?resource=https%3A%2F%2Fgraph.microsoft.com
Access Token URL
https://login.microsoftonline.com/yourtenantid/oauth2/token
Scope = Directory.AccessAsUser.All

How to execute a script on packer build failure?

In my current template.json, I am executing a number of scripts with the shell provisioner. For example:
{
    "type": "shell",
    "scripts": [
        "scriptA.sh",
        "scriptB.sh"
    ]
},
{
    "type": "shell",
    "scripts": [
        "scriptC.sh"
    ]
}
The current behavior I have observed is that, if there is an error in scriptA.sh, subsequent scripts do not get executed. However, I would like scriptC.sh to run no matter what. Is there another setting, or a proper location for scriptC.sh, that would allow it to be executed on success or failure of packer build?
Although 'packer build -on-error=cleanup' is available, I have not seen an option to add any additional cleanup behavior on error.
Edit 1: I suppose I could combine all the scripts into one, perform a "try, catch, finally", and provide an exit code. However, for the sake of readability and reusability, I would prefer to keep the scripts separate.
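One option worth sketching, assuming Packer 1.6 or newer (which added the top-level error-cleanup-provisioner field to JSON templates): keep scriptC.sh as the last normal provisioner so it runs on success, and also register it as the error cleanup provisioner so it runs when an earlier provisioner fails.

{
    "provisioners": [
        {
            "type": "shell",
            "scripts": [
                "scriptA.sh",
                "scriptB.sh"
            ]
        },
        {
            "type": "shell",
            "scripts": [
                "scriptC.sh"
            ]
        }
    ],
    "error-cleanup-provisioner": {
        "type": "shell",
        "script": "scriptC.sh"
    }
}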

Azure Functions: How do I control development/production/staging app settings?

I've just started experimenting with Azure Functions and I'm trying to understand how to control the app settings depending on the environment.
In .NET Core you could have appsettings.json, appsettings.development.json, etc., and as you moved between different environments the config would change.
However, from looking at the Azure Functions documentation, all I can find is that you can set up config in the Azure portal; I can't see anything about setting up config in the solution.
So what is the best way to manage the build environment?
Thanks in advance :-)
The best way, in my opinion, is using a proper build and release system, like VSTS.
What I've done in one of my solutions is create an ARM template of my Function App and deploy it using a release pipeline with VSTS RM.
This way you can just add a value to the template.json, like the one below.
"appSettings": [
// other entries
{
"name": "MyValue",
"value": "[parameters('myValue')]"
}
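For [parameters('myValue')] to resolve, template.json also needs a matching declaration in its parameters section; a minimal sketch (the description text is mine):

"parameters": {
    "myValue": {
        "type": "string",
        "metadata": {
            "description": "Value for the MyValue app setting"
        }
    }
}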
You will need another file, called parameters.json, which will hold the values. This file looks like this (at the moment):
{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "name": {},
        "storageName": {},
        "location": {},
        "subscriptionId": {}
    }
}
Back in VSTS you can just change/override the values of these parameters in the portal.
By using such a workflow you get a professional CI/CD implementation where no one has to bother with the actual secrets; they are only known to the system administrators.

Context Broker, ONTIMEINTERVAL subscribe immediately sends request to reference

The problem is that even if I set condValues to PT10S, when I send the request to the Context Broker it calls the reference URL right away, not after 10 seconds, and then it continues to send requests every 10 seconds.
My question: is there a way to avoid the first initial request?
Here is the body of the request that I send to the server where the Context Broker is installed.
{
    "entities": [{
        "type": "Cycle",
        "isPattern": "false",
        "id": "someid"
    }],
    "attributes": [
        ...
    ],
    "reference": "someurl",
    "duration": "P1M",
    "notifyConditions": [{
        "type": "ONTIMEINTERVAL",
        "condValues": [
            "PT10S"
        ]
    }]
}
At the present moment (Orion 1.1) the initial notification cannot be avoided. However, being able to configure that behaviour would be an interesting feature to develop in the future and, consequently, a GitHub issue was created some time ago about it.
In addition, note that ONTIMEINTERVAL subscriptions are no longer supported, so you should avoid using them:
ONTIMEINTERVAL subscriptions have several problems (introduce state in CB, thus making horizontal scaling configuration much harder, and makes it difficult to introduce pagination/filtering). Actually, they aren't really needed, as any use case based on ONTIMEINTERVAL notification can be converted to an equivalent use case in which the receptor runs queryContext at the same frequency (and taking advantage of the features of queryContext, such as pagination or filtering).
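For reference, the polling equivalent would be the receptor sending an NGSI v1 queryContext request like the following every 10 seconds (a sketch reusing the entity from the question):

POST /v1/queryContext
{
    "entities": [{
        "type": "Cycle",
        "isPattern": "false",
        "id": "someid"
    }]
}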
EDIT: the possibility of avoiding the initial notification has finally been implemented in Orion. Details are in this section of the documentation. It is now in the master branch (so if you use the fiware/orion:latest docker you will get it) and will be included in the next Orion version (2.2.0).