Can I have scripts call each other? How does Packer make it possible to compose many scripts?

I haven't found any examples of this, so I am not sure it's possible.
I want to have utility scripts that other Packer scripts can use. Can scripts call each other directly?
For example, if script1.ps1 calls script2.ps1 via a relative path, will it be there? Does Packer upload both files to the same location so that the scripts can access each other?
{
"script": "./script1.ps1",
"type": "powershell"
},
{
"script": "./script2.ps1",
"type": "powershell"
}
What if I have them in different folders like this? Will I be able to access script2 from inside script1 with a relative path like ./../subdir/script2.ps1?
{
"script": "./otherdir/script1.ps1",
"type": "powershell"
},
{
"script": "./subdir/script2.ps1",
"type": "powershell"
}

Yes, you can certainly do that, although I am not sure what the real advantage is over running the two PowerShell provisioners as you already are.
For the first script to call the second, the second script will already have to be present on the target machine. You can do the following:
{
"type": "file",
"source": "./script2.ps1",
"destination": "C:/Windows/Temp/script2.ps1"
},
{
"script": "./script1.ps1",
"type": "powershell"
}
Call script2 from script1 and then remove it:
& C:\Windows\Temp\script2.ps1
Remove-Item C:\Windows\Temp\script2.ps1
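The same pattern works for the layout in your second example: the file provisioner's destination is the path script1 should call, regardless of where the sources live locally. A sketch reusing the paths from the question:
{
"type": "file",
"source": "./subdir/script2.ps1",
"destination": "C:/Windows/Temp/script2.ps1"
},
{
"script": "./otherdir/script1.ps1",
"type": "powershell"
}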

Substituting service URL in ARM template

I have an ARM template that deploys APIs to an API Management instance. Here is an example of one API:
{
"properties": {
"authenticationSettings": {
"subscriptionKeyRequired": false
},
"subscriptionKeyParameterNames": {
"header": "Ocp-Apim-Subscription-Key",
"query": "subscription-key"
},
"apiRevision": "1",
"isCurrent": true,
"subscriptionRequired": true,
"displayName": "DDD.CRM.PostLeadRequest",
"serviceUrl": "https://test1/api/FuncCreateLead?code=XXXXXXXXXX",
"path": "CRMAPI/PostLeadRequest",
"protocols": [
"https"
]
},
"name": "[concat(variables('ApimServiceName'), '/mms-crm-postleadrequest')]",
"type": "Microsoft.ApiManagement/service/apis",
"apiVersion": "2019-01-01",
"dependsOn": []
}
When I deploy this to different environments I would like to be able to substitute the service URL depending on the environment. I'm wondering what the best approach is.
Can I read in a config file or something like that?
At the time of deployment I have a variable that tells me the environment, so I can base decisions on that. I'm just not sure of the best way to do it.
See the documentation on ARM template parameters: https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates#parameters
They can be specified in a separate file, so you will have a single template but environment-specific parameter files.
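A minimal sketch of that approach, assuming the URL is exposed as a serviceUrl parameter (the parameter name is an assumption). In the template:
"parameters": {
"serviceUrl": {
"type": "string",
"metadata": {
"description": "The backend service URL for this environment"
}
}
},
...
"serviceUrl": "[parameters('serviceUrl')]",
...
Then keep one parameter file per environment, for example test.parameters.json:
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"serviceUrl": {
"value": "https://test1/api/FuncCreateLead?code=XXXXXXXXXX"
}
}
}
At deployment time you pass the parameter file that matches the environment variable you already have.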

How can I retrieve the query key for Bing Maps API for Enterprise in an Azure Resource Group Template?

I am working on an ARM template that deploys an entire infrastructure from scratch:
The resource group
App Service plans
Application Insights
and so forth...
At some point I get to the part where I write the scripts for deploying my App Service (for hosting and deploying my web app later on) to my resource group. Prior to that, I have my Bing Maps API deployed in the same script.
I am stuck at the part where I am setting the Application Settings for my web app:
"type": "Microsoft.Web/sites",
"properties": {
"siteConfig": {
"appSettings": [
{
"name": "SomeKey",
"value": "SomeValue"
}, //rest of the code omitted
I would like to know how I could retrieve my Bing Maps query key within an ARM template.
I have tried something like the following, and have a feeling that it might be close:
"value": "[reference(resourceId('Microsoft.BingMaps/mapApis', variables('bingMapsName')), '2016-08-18').queryKey]"
Anybody who has done this before? Many thanks in advance! Cheers
If you want to access the query key in your ARM template for your web app settings, I would suggest using something like the below:
{
"name": "appsettings",
"type": "config",
"apiVersion": "2015-08-01",
"dependsOn": [
"[concat('Microsoft.Web/sites/', variables('webSiteName'))]"
],
"tags": {
"displayName": "WebAppSettings"
},
"properties": {
"key1": "[parameters('AppSetting_Key1_Value')]",
"key2": "value2"
}
}
Then, in your template.parameters.json file, you can declare the key AppSetting_Key1_Value with the value of your Bing Maps query key.
Specify the Parameter Value
After the Parameter has been added to the ARM Template and it’s being used to populate an Application Setting, the final step is to define the Parameter value within the ARM Templates Parameter file used for deployments. In the Azure Resource Group project template in Visual Studio the Parameters file for the default deployment is the file that ends with “.parameters.json”.
Here's the "WebSite.parameters.json" file created in the previous articles in this series, with the "AppSetting_Key1_Value" Parameter set to a value:
{
"$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"hostingPlanName": {
"value": "WebApp1HostingPlan"
},
"WebApplication1PackageFolder": {
"value": "WebApplication1"
},
"WebApplication1PackageFileName": {
"value": "package.zip"
},
"WebApp_ConnString1": {
"value": "Server=myServerAddress;Database=myDataBase;Trusted_Connection=True;"
},
"AppSetting_Key1_Value": {
"value": "Template Value 1"
}
}
}
For a security-compliant solution, you can move all your secure keys and connection strings to Azure Key Vault if you are not comfortable having keys in the parameter file.
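If you go the Key Vault route, the parameter file can reference a secret instead of holding the raw value. A sketch, where the vault resource ID and secret name are placeholders and the template parameter is assumed to be declared as securestring:
"AppSetting_Key1_Value": {
"reference": {
"keyVault": {
"id": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<vault-name>"
},
"secretName": "BingMapsQueryKey"
}
}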
This should work. Hope it helps.

How to pass argument in packer provision script?

I am struggling to pass an input parameter to a Packer provisioning script. I have tried various options but no joy.
The objective is that my provision.sh should accept an input parameter which I send during packer build:
packer build -var role=abc test.json
I am able to get the user variable in the JSON file, however I am unable to pass it to the provisioning script. I have to make a decision based on the input parameter.
I tried something like
"provisioners": [
{
"type": "shell",
"scripts": [
"provision.sh {{user `role`}}"
]
}
]
But Packer validation itself fails with a "no such file or directory" error message.
It would be a real help if someone could help me with this.
Thanks in advance.
You should use the environment_vars option; see the docs: Shell Provisioner - environment_vars.
Example:
"provisioners": [
{
"type": "shell",
"environment_vars": [
"HOSTNAME={{user `vm_name`}}",
"FOO=bar"
],
"scripts": [
"provision.sh"
]
}
]
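For the role variable from the original question, the same pattern applies: pass it as, say, "ROLE={{user `role`}}" in environment_vars and read it inside provision.sh as an environment variable. A sketch, where the value being checked is an assumption:
#!/bin/sh
# ROLE is exported into the script's environment by Packer's environment_vars option
if [ "$ROLE" = "abc" ]; then
  echo "Configuring role abc"
fi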
If your script is already configured to use arguments, you simply need to run it inline rather than in the scripts array.
In order to do this, the script must already exist on the system; you can accomplish this by copying it there with the file provisioner.
"provisioners": [
{
"type": "file",
"source": "scripts/provision.sh",
"destination": "/tmp/provision.sh"
},
{
"type": "shell",
"inline": [
"chmod u+x /tmp/provision.sh",
"/tmp/provision.sh {{user `vm_name`}}"]
}
]
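With this approach the value arrives as a positional argument, so inside provision.sh it is read from $1. A sketch:
#!/bin/sh
# The first positional argument passed by the inline provisioner
ARG="$1"
echo "Provisioning with argument: $ARG"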

Azure RM Templates. How to upload assets automatically with VS instead of fetching them from GitHub

I would like to be able to deploy a complex ARM template that utilizes DSC extensions and nested templates from my local Visual Studio.
The example is set to download assets from GitHub:
https://github.com/Azure/azure-quickstart-templates/tree/master/active-directory-new-domain-ha-2-dc
What changes do I have to make so that I can tie the assets to my local Visual Studio project and use them instead of downloading them from GitHub?
Here is the stripped-down version of the template responsible for downloading:
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
...
"adPDCVMName": {
"type": "string",
"metadata": {
"description": "The computer name of the PDC"
},
"defaultValue": "adPDC"
},
"assetLocation": {
"type": "string",
"metadata": {
"description": "The location of resources such as templates and DSC modules that the script is dependent"
},
"defaultValue": "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/active-directory-new-domain-ha-2-dc"
}
...
},
"variables": {
...
"adPDCModulesURL": "[concat(parameters('assetLocation'),'/DSC/CreateADPDC.ps1.zip')]",
"adPDCConfigurationFunction": "CreateADPDC.ps1\\CreateADPDC",
...
},
"resources": [
...
{
"name": "[parameters('adPDCVMName')]",
"type": "Microsoft.Compute/virtualMachines",
...
"resources": [
{
"name": "[concat(parameters('adPDCVMName'),'/CreateADForest')]",
"type": "Microsoft.Compute/virtualMachines/extensions",
...
"properties": {
...
"settings": {
"ModulesUrl": "[variables('adPDCModulesURL')]",
"ConfigurationFunction": "[variables('adPDCConfigurationFunction')]",
...
}
}
}
}
}
]
}
]
}
Do the following in your 'Azure Resource Group' project in Visual Studio:
1. Copy the files to your project in Visual Studio using the same directory structure, so a DSC directory and a nestedtemplates directory with the files that belong there.
2. Set the files in those directories as Content (azuredeploy.json is not needed, only the files you are referring to). This way the PowerShell script that deploys the templates will upload them to a storage account in Azure.
3. Make it possible to use the files uploaded to the storage account. In this case the template you are referring to does not follow the common naming convention, so you need to change it a bit: in azuredeploy.json, rename the parameter assetLocation to _artifactsLocation, and add a second parameter, _artifactsLocationSasToken, as a securestring. These two parameters are filled in automatically by the PowerShell script in Visual Studio.
Part of the azuredeploy.json:
"parameters": {
"_artifactsLocation": {
"type": "string",
"metadata": {
"description": "The base URI where artifacts required by this template are located. When the template is deployed using the accompanying scripts, a private location in the subscription will be used and this value will be automatically generated."
}
},
"_artifactsLocationSasToken": {
"type": "securestring",
"metadata": {
"description": "The SAS token to access the storage account"
}
},
Because the original azuredeploy.json is not using the _artifactsLocationSasToken parameter, you need to change all the variables where the assetLocation is used. Change each variable so that it uses _artifactsLocation and add a part that appends the _artifactsLocationSasToken.
One sample:
"vnetTemplateUri": "[concat(parameters('_artifactsLocation'),'/nestedtemplates/vnet.json', parameters('_artifactsLocationSasToken'))]",
After you have changed all the variables, you can deploy the template from Visual Studio using the resources in your project instead of from GitHub.
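As another example, the DSC module URL variable from the stripped-down template in the question would change the same way (a sketch; it assumes the DSC folder keeps its original layout in your project):
"adPDCModulesURL": "[concat(parameters('_artifactsLocation'),'/DSC/CreateADPDC.ps1.zip', parameters('_artifactsLocationSasToken'))]",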

How to specify metadata for GCE in packer?

I'm trying to create a GCE image from a Packer template.
Here is the part that I use for that purpose.
"builders": [
...
{
"type": "googlecompute",
"account_file": "foo",
"project_id": "bar",
"source_image": "centos-6-v20160711",
"zone": "us-central1-a",
"instance_name": "packer-building-image-centos6-baz",
"machine_type": "n1-standard-1",
"image_name": "centos6-some-box-name",
"ssh_username": "my_username",
"metadata": {
"startup-script-log-dest": "/opt/script.log",
"startup-script": "/opt/startup.sh",
"some_other_custom_metadata_key": "some_value"
},
"ssh_pty": true
}
],
...
I have also created the required files. Here is that part:
"provisioners": [
...
{
"type": "file",
"source": "{{user `files_path`}}/startup.sh",
"destination": "/opt/startup.sh"
},
...
{
"type": "shell",
"execute_command": "sudo sh '{{.Path}}'",
"inline": [
...
"chmod ugo+x /opt/startup.sh"
]
}
...
Everything works for me without the "metadata" field: I can create an image/instance with the provided parameters. But when I try to create an instance from the image, I can't find the provided metadata, and consequently I can't run my startup script, set the log file, or use the other custom metadata.
Here is the documentation that I am using: https://www.packer.io/docs/builders/googlecompute.html#metadata
Any suggestion would be helpful.
Thanks in advance.
The metadata tag startup-script should contain the actual script, not a path. Provisioners run after the startup script has been executed (or at least started).
Instead use startup_script_file in Packer and supply a path to a startup script.
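A sketch of how the builder from the question might look with that option; it reuses the question's values, and startup_script_file points at the local script, so the startup-script metadata entries and the file provisioner for it are no longer needed:
{
"type": "googlecompute",
"account_file": "foo",
"project_id": "bar",
"source_image": "centos-6-v20160711",
"zone": "us-central1-a",
"instance_name": "packer-building-image-centos6-baz",
"machine_type": "n1-standard-1",
"image_name": "centos6-some-box-name",
"ssh_username": "my_username",
"startup_script_file": "{{user `files_path`}}/startup.sh",
"metadata": {
"some_other_custom_metadata_key": "some_value"
},
"ssh_pty": true
}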