I'm trying to create a GCE image from a Packer template.
Here is the part that I use for that purpose:
"builders": [
...
{
"type": "googlecompute",
"account_file": "foo",
"project_id": "bar",
"source_image": "centos-6-v20160711",
"zone": "us-central1-a",
"instance_name": "packer-building-image-centos6-baz",
"machine_type": "n1-standard-1",
"image_name": "centos6-some-box-name",
"ssh_username": "my_username",
"metadata": {
"startup-script-log-dest": "/opt/script.log",
"startup-script": "/opt/startup.sh",
"some_other_custom_metadata_key": "some_value"
},
"ssh_pty": true
}
],
...
I have also created the required files. Here is that part:
"provisioners": [
...
{
"type": "file",
"source": "{{user `files_path`}}/startup.sh",
"destination": "/opt/startup.sh"
},
...
{
"type": "shell",
"execute_command": "sudo sh '{{.Path}}'",
"inline": [
...
"chmod ugo+x /opt/startup.sh"
]
}
...
Everything works for me without the "metadata" field: I can create an image/instance with the provided parameters. But when I create an instance from the resulting image, the provided metadata is missing, so I can't run my startup script, set the log file, or read the other custom metadata.
Here is the documentation I'm following: https://www.packer.io/docs/builders/googlecompute.html#metadata.
Any suggestion will be helpful.
Thanks in advance
The metadata key startup-script should contain the actual script content, not a path. Also note that provisioners run after the startup script has been executed (or at least started).
Instead use startup_script_file in Packer and supply a path to a startup script.
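A minimal sketch of the builder with that change, reusing the values from the question (untested; adjust paths to your layout):
"builders": [
  {
    "type": "googlecompute",
    "account_file": "foo",
    "project_id": "bar",
    "source_image": "centos-6-v20160711",
    "zone": "us-central1-a",
    "machine_type": "n1-standard-1",
    "image_name": "centos6-some-box-name",
    "ssh_username": "my_username",
    "startup_script_file": "{{user `files_path`}}/startup.sh",
    "metadata": {
      "some_other_custom_metadata_key": "some_value"
    },
    "ssh_pty": true
  }
]
With startup_script_file, Packer uploads and runs the script during the build, so the separate file provisioner step is no longer needed just to get the script executed.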
I haven't found any examples of this, so I am not sure it's possible.
I want to have utility scripts that other Packer scripts use. Can scripts call each other directly?
For example, if script1.ps1 calls script2.ps1 via a relative path, will it be there? Does Packer upload both files to the same location so the scripts can access each other?
{
  "script": "./script1.ps1",
  "type": "powershell"
},
{
  "script": "./script2.ps1",
  "type": "powershell"
}
What if I have them in different folders like this; will I be able to access script2 inside script1 as ./../subdir/script2.ps1?
{
  "script": "./otherdir/script1.ps1",
  "type": "powershell"
},
{
  "script": "./subdir/script2.ps1",
  "type": "powershell"
}
Yes, you can certainly do that, although I am not sure what the real advantage is over running the two PowerShell provisioners as you already are.
For the first script to call the second, the second script must already be present on the target machine. You can do the following:
{
  "type": "file",
  "source": "./script2.ps1",
  "destination": "C:/Windows/Temp/script2.ps1"
},
{
  "script": "./script1.ps1",
  "type": "powershell"
}
Call script2 from script1 and then remove it:
& C:\Windows\Temp\script2.ps1
Remove-Item C:\Windows\Temp\script2.ps1
I have an ARM template that deploys APIs to an API Management instance.
Here is an example of one API:
{
  "properties": {
    "authenticationSettings": {
      "subscriptionKeyRequired": false
    },
    "subscriptionKeyParameterNames": {
      "header": "Ocp-Apim-Subscription-Key",
      "query": "subscription-key"
    },
    "apiRevision": "1",
    "isCurrent": true,
    "subscriptionRequired": true,
    "displayName": "DDD.CRM.PostLeadRequest",
    "serviceUrl": "https://test1/api/FuncCreateLead?code=XXXXXXXXXX",
    "path": "CRMAPI/PostLeadRequest",
    "protocols": [
      "https"
    ]
  },
  "name": "[concat(variables('ApimServiceName'), '/mms-crm-postleadrequest')]",
  "type": "Microsoft.ApiManagement/service/apis",
  "apiVersion": "2019-01-01",
  "dependsOn": []
}
When deploying this to different environments I would like to substitute the service URL depending on the environment. What is the best approach?
Can I read in a config file or something like that?
At deployment time I have a variable that tells me the environment, so I can base decisions on that; I'm just not sure of the best way to do it.
See ARM template parameters: https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates#parameters. They can be specified in a separate file, so you have a single template but environment-specific parameter files.
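For example, a sketch (the parameter name serviceUrl and the file name are illustrative): declare a parameter in the template and reference it where the question hard-codes the URL:
{
  "parameters": {
    "serviceUrl": {
      "type": "string",
      "metadata": {
        "description": "Environment-specific backend URL"
      }
    }
  },
  ...
}
and in the API resource:
"serviceUrl": "[parameters('serviceUrl')]",
Then supply one parameter file per environment, e.g. parameters.test.json:
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "serviceUrl": {
      "value": "https://test1/api/FuncCreateLead?code=XXXXXXXXXX"
    }
  }
}
Select the matching file per environment at deployment time (e.g. via the --parameters option of the Azure CLI).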
I am struggling to pass an input parameter to a Packer provisioning script. I have tried various options but no joy.
The objective is for my provision.sh to accept an input parameter that I pass during packer build:
packer build -var role=abc test.json
I am able to read the user variable in the JSON file; however, I am unable to pass it to the provisioning script. I have to make a decision based on the input parameter.
I tried something like
"provisioners":
{
"type": "shell",
"scripts": [
"provision.sh {{user `role`}}"
]
}
But packer validate itself fails with a "no such file or directory" error.
It would be a real help if someone could assist me with this.
Thanks in advance.
You should use the environment_vars option; see the docs: Shell Provisioner - environment_vars.
Example:
"provisioners": [
{
"type": "shell"
"environment_vars": [
"HOSTNAME={{user `vm_name`}}",
"FOO=bar"
],
"scripts": [
"provision.sh"
],
}
]
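Inside provision.sh the values then arrive as ordinary environment variables, e.g. this sketch (the FOO check is illustrative):
#!/bin/sh
# HOSTNAME and FOO are injected by the shell provisioner's environment_vars
echo "Provisioning host: $HOSTNAME"
if [ "$FOO" = "bar" ]; then
  echo "FOO is bar, running the bar-specific steps"
fi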
If your script is already written to take arguments, you can simply run it inline rather than via the scripts array.
For this to work the script must already exist on the system; you can accomplish that by copying it there with the file provisioner:
"provisioners": [
{
"type": "file",
"source": "scripts/provision.sh",
"destination": "/tmp/provision.sh"
},
{
"type": "shell",
"inline": [
"chmod u+x /tmp/provision.sh",
"/tmp/provision.sh {{user `vm_name`}}"]
}
]
I have a Packer JSON like:
"builders": [{...}],
"provisioners": [
{
"type": "file",
"source": "packer/myfile.json",
"destination": "/tmp/myfile.json"
}
],
"variables": {
"myvariablename": "value"
}
and myfile.json is:
{
  "var": "{{ user `myvariablename`}}"
}
The variable in the file does not get replaced; is a sed replacement via a shell provisioner after the file provisioner the only option available here?
Using Packer version 0.12.0.
You have to pass these as environment variables. For example:
"provisioners": [
{
"type": "shell"
"environment_vars": [
"http_proxy={{user `proxy`}}",
],
"scripts": [
"some_script.sh"
],
}
],
"variables": {
"proxy": null
}
And in the script you can use $http_proxy.
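For example, some_script.sh might look like this (a sketch):
#!/bin/sh
# http_proxy was set via environment_vars from the `proxy` user variable
echo "Using proxy: $http_proxy"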
So far the only solution I've come up with is to use the file and shell provisioners together: upload the file, then replace the variables in it via a shell provisioner, which can be fed from template variables provided by e.g. HashiCorp Vault.
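A minimal sketch of that approach, assuming the uploaded file uses a plain placeholder such as __MYVAR__ rather than Packer template syntax (names are illustrative):
"provisioners": [
  {
    "type": "file",
    "source": "packer/myfile.json",
    "destination": "/tmp/myfile.json"
  },
  {
    "type": "shell",
    "environment_vars": [
      "MYVAR={{user `myvariablename`}}"
    ],
    "inline": [
      "sed -i \"s/__MYVAR__/$MYVAR/g\" /tmp/myfile.json"
    ]
  }
]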
You may use the OS export function to set an environment variable and pass it to Packer.
Here is a config that uses the OS ENV_NAME value to choose the local folder to copy from;
export ENV_NAME=dev will set the local folder to dev:
{
  "variables": {
    ...
    "env_folder": "{{env `ENV_NAME`}}"
  },
  "builders": [{...}],
  "provisioners": [
    {
      "type": "file",
      "source": "files/{{user `env_folder`}}/",
      "destination": "/tmp/"
    },
    {...}
  ]
}
User variables must first be defined in a variables section within your template. Even if you want a user variable to default to an empty string, it must be defined. This explicitness helps reduce the time it takes for newcomers to understand what can be modified using variables in your template.
The variables section is a key/value mapping of the user variable name to a default value. A default value can be the empty string. An example is shown below:
{
  "variables": {
    "aws_access_key": "",
    "aws_secret_key": ""
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    // ...
  }]
}
See the Packer documentation on user variables for more information.
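You then supply the values at build time with -var (template.json is a placeholder name):
packer build -var 'aws_access_key=YOUR_KEY' -var 'aws_secret_key=YOUR_SECRET' template.json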
I am trying to move an S3 bucket from one account (A) to another (B).
I succeeded with that operation and removed the bucket from account A.
I am now trying to move the new bucket to another bucket in account B, but I am learning that, aside from the bucket itself, I have no access to the files.
After much fighting with the s3 CLI and its permissions, I checked the s3api commands and found out that the files (surprise, surprise) still hold the old ownership.
I am now trying to change that, but I have come to a standstill with put-bucket-acl: the JSON file isn't working for the s3api command.
I tried running the command in debug mode, but didn't make much of the output.
Does anybody know what to do?
Maybe there is a better way to solve this issue?
What I did so far:
The command:
aws s3api put-bucket-acl --bucket my-bucket --cli-input-json file://1.json
(Same with put-object-acl)
1.json file:
"Grantee": {
"DisplayName": "account_B",
"EmailAddress": "user#mail.com",
"ID": "111111hughalphnumericnumber22222",
"Type": "CanonicalUser",
"Permission": "FULL_CONTROL"
}
The errors I get:
Unknown parameter in input: "Grantee", must be one of: ACL, AccessControlPolicy, Bucket, ContentMD5, GrantFullControl, GrantRead, GrantReadACP, GrantWrite, GrantWriteACP
Unknown parameter in input: "Permission", must be one of: ACL, AccessControlPolicy, Bucket, ContentMD5, GrantFullControl, GrantRead, GrantReadACP, GrantWrite, GrantWriteACP
UPDATE:
AssumeRole between the two accounts doesn't work in my case.
CLI (s3cmd, s3api), GUI (MCSTools, Bucket Explorer), and ACLs via request headers/body (Postman) did not help either.
I'm contacting AWS support and hoping for the best.
I'll update when I have a solution.
So, AWS support came to the rescue... I'm leaving this here for others to see, so they won't have to waste two days like I did trying to figure out what the hell went wrong:
aws s3api get-object-acl --bucket <bucket_on_B> --key <Key_on_B_Owned_by_A> --profile IAM_User_A > A_to_B.json
apply the outcome of:
aws s3api get-bucket-acl --bucket <Bucket_on_B> --profile IAM_User_B
onto the JSON file that was created, and then run:
aws s3api put-object-acl --bucket <Bucket_on_B> --key <Key_on_B_Owned_by_A> --access-control-policy file://A_to_B.json --profile IAM_User_A
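For reference, the merged A_to_B.json is an access-control policy of the usual Grants/Owner shape (see the skeleton below), roughly like this sketch; IDs and display names are placeholders:
{
  "Owner": {
    "DisplayName": "user_A",
    "ID": "CANONICAL_ID_FROM_GET_OBJECT_ACL"
  },
  "Grants": [
    {
      "Grantee": {
        "DisplayName": "account_B",
        "ID": "CANONICAL_ID_FROM_GET_BUCKET_ACL",
        "Type": "CanonicalUser"
      },
      "Permission": "FULL_CONTROL"
    }
  ]
}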
Your JSON is wrong. According to the documentation for the put-bucket-acl option, you can generate a valid JSON template ("skeleton") using --generate-cli-skeleton. For example:
aws s3api put-bucket-acl --bucket BUCKETNAME --generate-cli-skeleton
And here is the output:
{
  "ACL": "",
  "AccessControlPolicy": {
    "Grants": [
      {
        "Grantee": {
          "DisplayName": "",
          "EmailAddress": "",
          "ID": "",
          "Type": "",
          "URI": ""
        },
        "Permission": ""
      }
    ],
    "Owner": {
      "DisplayName": "",
      "ID": ""
    }
  },
  "Bucket": "",
  "ContentMD5": "",
  "GrantFullControl": "",
  "GrantRead": "",
  "GrantReadACP": "",
  "GrantWrite": "",
  "GrantWriteACP": ""
}
For anyone who's still looking to do this: the OP probably looked at the right AWS doc but overlooked the right command. I'm just glad I got to the right command because of this Stack Overflow page :)
https://docs.aws.amazon.com/cli/latest/reference/s3api/put-bucket-acl.html
The JSON syntax with an example is present there; instead of --cli-input-json, use --access-control-policy.
{
  "Grants": [
    {
      "Grantee": {
        "DisplayName": "string",
        "EmailAddress": "string",
        "ID": "string",
        "Type": "CanonicalUser"|"AmazonCustomerByEmail"|"Group",
        "URI": "string"
      },
      "Permission": "FULL_CONTROL"|"WRITE"|"WRITE_ACP"|"READ"|"READ_ACP"
    }
    ...
  ],
  "Owner": {
    "DisplayName": "string",
    "ID": "string"
  }
}
I had the policy as a JSON file and used this command; it worked just fine:
aws s3api put-bucket-acl --bucket bucketname --access-control-policy file://yourJson.json
One more thing to note: I wasn't able to add permissions on top of the existing ones; the old ACL was overwritten. So any permission you want to add needs to be in the JSON policy file along with the existing policy. It is easier if you first use a command to describe the existing ACLs.
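For example, to dump the current ACL before composing the new policy file:
aws s3api get-bucket-acl --bucket bucketname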
The syntax is the following (with example):
aws s3api put-bucket-acl --bucket bucket_name --access-control-policy file://grant.json
grant.json file:
{
  "Grants": [
    {
      "Grantee": {
        "ID": "CANONICAL_ID_TO_GRANT",
        "Type": "CanonicalUser"
      },
      "Permission": "WRITE"
    },
    {
      "Grantee": {
        "ID": "CANONICAL_ID_TO_GRANT",
        "Type": "CanonicalUser"
      },
      "Permission": "READ"
    }
  ],
  "Owner": {
    "DisplayName": "example_owner",
    "ID": "CANONICAL_ID_OWNER"
  }
}