I'm trying to get HashiCorp's Packer (1.5.5) to deploy multiple disks for a specialized (USGCB-compliant) template, and I need it to have four separate disks. I'm using the vsphere-iso builder. The "disk_size" parameter deploys the root disk fine. I found the "storage" parameter, which I think is used to deploy the other disks...
storage ([]DiskConfig) - A collection of one or more disks to be provisioned along with the VM
But I can't figure out the format that the data needs to be in within the JSON file. I tried...
"storage": [
{
"storage[1]": "20673",
"storage[2]": "16384",
"storage[3]": "4096"
}
]
and
"storage": [
{
"20673",
"16384",
"4096"
}
]
but neither of those worked.
The storage section contains just the additional disks; you need the following:
"disk_controller_type": "pvscsi",
"disk_size": "20673",
"disk_thin_provisioned": true,
"storage": [
{
"disk_size": "16384",
"disk_thin_provisioned": true
},
{
"disk_size": "4096",
"disk_thin_provisioned": true
}
],
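Since the question mentions four separate disks, note that each object in the storage array provisions one additional disk beyond the root disk defined by disk_size. A sketch assuming the root disk keeps its own size (shown here as "40960", an illustrative placeholder) and the question's three sizes are all data disks:
"disk_controller_type": "pvscsi",
"disk_size": "40960",
"disk_thin_provisioned": true,
"storage": [
  {
    "disk_size": "20673",
    "disk_thin_provisioned": true
  },
  {
    "disk_size": "16384",
    "disk_thin_provisioned": true
  },
  {
    "disk_size": "4096",
    "disk_thin_provisioned": true
  }
],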
I have an ARM template that deploys APIs to an API Management instance.
Here is an example of one API:
{
  "properties": {
    "authenticationSettings": {
      "subscriptionKeyRequired": false
    },
    "subscriptionKeyParameterNames": {
      "header": "Ocp-Apim-Subscription-Key",
      "query": "subscription-key"
    },
    "apiRevision": "1",
    "isCurrent": true,
    "subscriptionRequired": true,
    "displayName": "DDD.CRM.PostLeadRequest",
    "serviceUrl": "https://test1/api/FuncCreateLead?code=XXXXXXXXXX",
    "path": "CRMAPI/PostLeadRequest",
    "protocols": [
      "https"
    ]
  },
  "name": "[concat(variables('ApimServiceName'), '/mms-crm-postleadrequest')]",
  "type": "Microsoft.ApiManagement/service/apis",
  "apiVersion": "2019-01-01",
  "dependsOn": []
}
When I deploy this to different environments, I would like to be able to substitute the service URL depending on the environment. I'm wondering what the best approach is.
Can I read in a config file or something like that?
At deployment time I have a variable that tells me the environment, so I can base decisions on that. I'm just not sure of the best way to do it.
See ARM template parameters: https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates#parameters. They can be specified in a separate file, so you will have a single template but environment-specific parameter files.
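For example, a minimal sketch (the parameter name serviceUrl is illustrative): in the template, declare a parameter and reference it where the API's serviceUrl is set:
"parameters": {
  "serviceUrl": {
    "type": "string"
  }
},
...
"serviceUrl": "[parameters('serviceUrl')]",
Then each environment gets its own parameter file, e.g. something like a test.parameters.json:
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "serviceUrl": {
      "value": "https://test1/api/FuncCreateLead?code=XXXXXXXXXX"
    }
  }
}
At deployment time, you pass the parameter file that matches the environment variable you already have.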
I am working on an ARM template that deploys an entire infrastructure from scratch:
The resource group
App Service plans
Application Insights
and so forth...
At some point I get to the part where I write the scripts for deploying my App Service (for hosting and deploying my web app later on) to my resource group. Prior to that, I have my Bing Maps API deployed in the same script.
I am stuck at the part where I am setting the Application Settings for my web app:
"type": "Microsoft.Web/sites",
"properties": {
"siteConfig": {
"appSettings": [
{
"name": "SomeKey",
"value": "SomeValue"
}, //rest of the code omitted
I would like to know how I can retrieve my Bing Maps query key within an ARM template.
I have tried something like the following, and I have a feeling it might be close to it:
"value": "[reference(resourceId('Microsoft.BingMaps/mapApis', variables('bingMapsName')), '2016-08-18').queryKey]"
Has anybody done this before? Many thanks in advance! Cheers
If you want to access the query key in your ARM template for your web app settings, I would suggest using something like the below:
{
  "name": "appsettings",
  "type": "config",
  "apiVersion": "2015-08-01",
  "dependsOn": [
    "[concat('Microsoft.Web/sites/', variables('webSiteName'))]"
  ],
  "tags": {
    "displayName": "WebAppSettings"
  },
  "properties": {
    "key1": "[parameters('AppSetting_Key1_Value')]",
    "key2": "value2"
  }
}
Then, in your template.parameters.json file, you can declare the key AppSetting_Key1_Value with the value of your Bing Maps query key.
Specify the Parameter Value
After the Parameter has been added to the ARM Template and is being used to populate an Application Setting, the final step is to define the Parameter value within the ARM Template's Parameter file used for deployments. In the Azure Resource Group project template in Visual Studio, the Parameters file for the default deployment is the file that ends with ".parameters.json".
Here is the "WebSite.parameters.json" file with the "AppSetting_Key1_Value" Parameter set to a value:
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "hostingPlanName": {
      "value": "WebApp1HostingPlan"
    },
    "WebApplication1PackageFolder": {
      "value": "WebApplication1"
    },
    "WebApplication1PackageFileName": {
      "value": "package.zip"
    },
    "WebApp_ConnString1": {
      "value": "Server=myServerAddress;Database=myDataBase;Trusted_Connection=True;"
    },
    "AppSetting_Key1_Value": {
      "value": "Template Value 1"
    }
  }
}
For a security-compliant solution, you can move all your secure keys and connection strings to Azure Key Vault if you are not comfortable having keys in the parameter file.
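For example, a parameters-file entry can reference a Key Vault secret instead of holding the literal value (a sketch; the vault resource ID and the secret name BingMapsQueryKey are placeholders):
"AppSetting_Key1_Value": {
  "reference": {
    "keyVault": {
      "id": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<vault-name>"
    },
    "secretName": "BingMapsQueryKey"
  }
}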
This should work. Hope it helps.
I create an EFS file system in a VPC via CloudFormation; this VPC contains a few EC2 instances.
I also need to create mount targets for each subnet.
This CloudFormation template can be executed in different AWS accounts.
How do I know how many mount target resources to create?
I can have a VPC in account 1 with 3 subnets, another VPC in account 2 with 2 subnets...
How do I make the template generic so that it adapts to every account's environment?
"FileSystem" : {
"Type" : "AWS::EFS::FileSystem",
"Properties" : {
"FileSystemTags" : [
{
"Key" : "Name",
"Value" : "FileSystem"
}
]
}
},
"MountTarget1": {
"Type": "AWS::EFS::MountTarget",
"Properties": {
"FileSystemId": { "Ref": "FileSystem" },
"SubnetId": { "Ref": "Subnet1" },
"SecurityGroups": [ { "Ref": "MountTargetSecurityGroup" } ]
}
},
"MountTarget2": {
"Type": "AWS::EFS::MountTarget",
"Properties": {
"FileSystemId": { "Ref": "FileSystem" },
"SubnetId": { "Ref": "Subnet2" },
"SecurityGroups": [ { "Ref": "MountTargetSecurityGroup" } ]
}
},
As I understand it, you want to be able to re-use this same template across accounts and create the appropriate number of MountTargets based on the number of subnets required by an account.
There are many variables and conditions that would apply depending on the number of subnets (ACLs, route tables, etc.). You could perhaps accomplish it with a large number of conditions and parameters, as sketched below, but the template would get quite messy and it is not an elegant solution.
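For illustration, that approach might look roughly like this; the parameter and condition names are made up, and you would need one condition per optional mount target:
"Parameters": {
  "SubnetIds": { "Type": "List<AWS::EC2::Subnet::Id>" },
  "SubnetCount": { "Type": "Number", "Default": "2", "AllowedValues": ["1", "2", "3"] }
},
"Conditions": {
  "CreateMountTarget2": { "Fn::Not": [ { "Fn::Equals": [ { "Ref": "SubnetCount" }, "1" ] } ] },
  "CreateMountTarget3": { "Fn::Equals": [ { "Ref": "SubnetCount" }, "3" ] }
},
...
"MountTarget2": {
  "Type": "AWS::EFS::MountTarget",
  "Condition": "CreateMountTarget2",
  "Properties": {
    "FileSystemId": { "Ref": "FileSystem" },
    "SubnetId": { "Fn::Select": [ "1", { "Ref": "SubnetIds" } ] },
    "SecurityGroups": [ { "Ref": "MountTargetSecurityGroup" } ]
  }
},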
The better approach would be to create your template using Troposphere. Here's an example for EFS to get you started: https://github.com/cloudtools/troposphere/blob/master/examples/EFS.py
I'm trying to create a GCE image from a Packer template.
Here is the part that I use for that purpose:
"builders": [
...
{
"type": "googlecompute",
"account_file": "foo",
"project_id": "bar",
"source_image": "centos-6-v20160711",
"zone": "us-central1-a",
"instance_name": "packer-building-image-centos6-baz",
"machine_type": "n1-standard-1",
"image_name": "centos6-some-box-name",
"ssh_username": "my_username",
"metadata": {
"startup-script-log-dest": "/opt/script.log",
"startup-script": "/opt/startup.sh",
"some_other_custom_metadata_key": "some_value"
},
"ssh_pty": true
}
],
...
I have also created the required files. Here is that part
"provisioners": [
...
{
"type": "file",
"source": "{{user `files_path`}}/startup.sh",
"destination": "/opt/startup.sh"
},
...
{
"type": "shell",
"execute_command": "sudo sh '{{.Path}}'",
"inline": [
...
"chmod ugo+x /opt/startup.sh"
]
}
...
Everything works for me without the "metadata" field: I can create an image/instance with the provided parameters. But when I try to create an instance from the image, I can't find the provided metadata, and consequently I can't run my startup script, set the logging file, or use the other custom metadata.
Here is the documentation that I am using: https://www.packer.io/docs/builders/googlecompute.html#metadata.
Any suggestion would be helpful.
Thanks in advance
The metadata key startup-script should contain the actual script, not a path. Also note that provisioners run after the startup script has been executed (or at least started).
Instead, use startup_script_file in Packer and supply the path to a startup script.
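A sketch of how the builder section might look with that change; the custom metadata key is kept, the two startup-script metadata entries are dropped, and startup_script_file points at the local script (the path reuses the question's files_path user variable):
{
  "type": "googlecompute",
  "account_file": "foo",
  "project_id": "bar",
  "source_image": "centos-6-v20160711",
  "zone": "us-central1-a",
  "instance_name": "packer-building-image-centos6-baz",
  "machine_type": "n1-standard-1",
  "image_name": "centos6-some-box-name",
  "ssh_username": "my_username",
  "startup_script_file": "{{user `files_path`}}/startup.sh",
  "metadata": {
    "some_other_custom_metadata_key": "some_value"
  },
  "ssh_pty": true
}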
I'm playing around with Google's managed VM feature and finding you can fairly easily create some interesting setups. However, I have yet to figure out whether it's possible to use persistent disks to mount a volume on the container, and it seems not having this feature limits the usefulness of managed VMs for stateful containers such as databases.
So the question is: how can I mount the persistent disk that Google creates for my Compute engine instance, to a container volume?
Attaching a persistent disk to a Google Compute Engine instance
Follow the official persistent-disk guide:
Create a disk
Attach to an instance during instance creation, or to a running instance
Use the tool /usr/share/google/safe_format_and_mount to mount the device file /dev/disk/by-id/google-...
As noted by Faizan, use docker run -v /mnt/persistent_disk:/container/target/path to include the volume in the docker container
Referencing a persistent disk in Google Container Engine
In this method, you specify the volume declaratively (after initializing it as mentioned above...) in the Replication Controller or Pod declaration. The following is a minimal excerpt of a replication controller JSON declaration. Note that the volume has to be declared read-only, because a persistent disk can be attached in read-write mode to no more than one instance at a time.
{
  "id": "<id>",
  "kind": "ReplicationController",
  "apiVersion": "v1beta1",
  "desiredState": {
    "replicas": 3,
    "replicaSelector": {
      "name": "<id>"
    },
    "podTemplate": {
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "<id>",
          "containers": [
            {
              "name": "<id>",
              "image": "<docker_image>",
              "volumeMounts": [
                {
                  "name": "persistent_disk",
                  "mountPath": "/pd",
                  "readOnly": true
                }
              ],
              ...
            }
          ],
          "volumes": [
            {
              "name": "persistent_disk",
              "source": {
                "persistentDisk": {
                  "pdName": "<persistent_disk>",
                  "fsType": "ext4",
                  "readOnly": true
                }
              }
            }
          ]
        }
      },
      "labels": {
        "name": "<id>"
      }
    }
  },
  "labels": {
    "name": "<id>"
  }
}
If your persistent disk is already attached and mounted to the instance, I believe you can use it as a data volume with your Docker container. I was able to find Docker documentation that explains how to manage data in containers.