AWS S3 permissions - error with put-bucket-acl - json

I am trying to move an S3 bucket from one account (A) to another (B).
I have succeeded with that operation and removed the bucket from account A.
I am now trying to copy the new bucket's contents to another bucket in account B, but I am learning that, apart from the bucket itself, I have no access to the files.
After much fighting with the s3 CLI and its permissions, I checked the s3api commands and found out that the files (surprise, surprise) still hold the old ownership.
I am now trying to change that, but have come to a standstill with put-bucket-acl: the JSON file isn't accepted by the s3api command.
I tried running the command in debug mode, but didn't make much out of it.
Does anybody know what to do? Or maybe a better way to solve this issue?
What I did so far:
the command:
aws s3api put-bucket-acl --bucket my-bucket --cli-input-json file://1.json
(Same with put-object-acl)
1.json file:
"Grantee": {
"DisplayName": "account_B",
"EmailAddress": "user#mail.com",
"ID": "111111hughalphnumericnumber22222",
"Type": "CanonicalUser",
"Permission": "FULL_CONTROL"
}
The errors I get :
Unknown parameter in input: "Grantee", must be one of: ACL, AccessControlPolicy, Bucket, ContentMD5, GrantFullControl, GrantRead, GrantReadACP, GrantWrite, GrantWriteACP
Unknown parameter in input: "Permission", must be one of: ACL, AccessControlPolicy, Bucket, ContentMD5, GrantFullControl, GrantRead, GrantReadACP, GrantWrite, GrantWriteACP
UPDATE:
AssumeRole between the two accounts doesn't work in my case.
CLI (s3cmd, s3api), GUI (MCSTools, BucketExplorer), and setting the ACL via request headers/body (Postman) did not help either.
I'm contacting AWS support and hoping for the best.
I'll update when I have a solution.

So, AWS support came to the rescue... I'm leaving this here for others to see, so they won't have to waste two days like I did trying to figure out what the hell went wrong...
aws s3api get-object-acl --bucket <bucket_on_B> --key <Key_on_B_Owned_by_A> --profile IAM_User_A > A_to_B.json
Merge the output of:
aws s3api get-bucket-acl --bucket <Bucket_on_B> --profile IAM_User_B
into the JSON file that was created, and then run:
aws s3api put-object-acl --bucket <Bucket_on_B> --key <Key_on_B_Owned_by_A> --access-control-policy file://A_to_B.json --profile IAM_User_A
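To script the merge step, one option is jq; a minimal sketch, assuming jq is available and that account B's canonical ID has been copied from the get-bucket-acl output (the ID below is a placeholder):

# Canonical user ID of account B, taken from the get-bucket-acl output above.
B_ID="CANONICAL_ID_OF_ACCOUNT_B"

# Append a FULL_CONTROL grant for account B to the object's ACL and keep only
# the Owner/Grants keys that --access-control-policy expects.
jq --arg id "$B_ID" \
   '.Grants += [{"Grantee": {"ID": $id, "Type": "CanonicalUser"}, "Permission": "FULL_CONTROL"}]
    | {Owner, Grants}' A_to_B.json > A_to_B_patched.json

aws s3api put-object-acl --bucket <Bucket_on_B> --key <Key_on_B_Owned_by_A> \
    --access-control-policy file://A_to_B_patched.json --profile IAM_User_A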

Your JSON is wrong. According to the documentation for put-bucket-acl, you can generate a valid JSON template ('skeleton') using --generate-cli-skeleton. For example:
aws s3api put-bucket-acl --bucket BUCKETNAME --generate-cli-skeleton
And here is the output:
{
    "ACL": "",
    "AccessControlPolicy": {
        "Grants": [
            {
                "Grantee": {
                    "DisplayName": "",
                    "EmailAddress": "",
                    "ID": "",
                    "Type": "",
                    "URI": ""
                },
                "Permission": ""
            }
        ],
        "Owner": {
            "DisplayName": "",
            "ID": ""
        }
    },
    "Bucket": "",
    "ContentMD5": "",
    "GrantFullControl": "",
    "GrantRead": "",
    "GrantReadACP": "",
    "GrantWrite": "",
    "GrantWriteACP": ""
}
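Comparing the skeleton with the JSON from the question shows the problem: with --cli-input-json, Grantee and Permission must be nested under AccessControlPolicy.Grants rather than sit at the top level. A minimal filled-in sketch, with placeholder names and IDs:

{
    "Bucket": "my-bucket",
    "AccessControlPolicy": {
        "Grants": [
            {
                "Grantee": {
                    "DisplayName": "account_B",
                    "ID": "CANONICAL_ID_OF_ACCOUNT_B",
                    "Type": "CanonicalUser"
                },
                "Permission": "FULL_CONTROL"
            }
        ],
        "Owner": {
            "DisplayName": "account_B",
            "ID": "CANONICAL_ID_OF_ACCOUNT_B"
        }
    }
}

Unused keys from the skeleton (ACL, the Grant* shortcuts, ContentMD5) can simply be omitted.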

For anyone who's still looking to do this - the OP probably looked at the right AWS doc but overlooked the right command. I'm just glad I got to the right command because of this Stack Overflow page :)
https://docs.aws.amazon.com/cli/latest/reference/s3api/put-bucket-acl.html
^^ The JSON syntax with an example is present there; instead of --cli-input-json, use --access-control-policy
{
    "Grants": [
        {
            "Grantee": {
                "DisplayName": "string",
                "EmailAddress": "string",
                "ID": "string",
                "Type": "CanonicalUser"|"AmazonCustomerByEmail"|"Group",
                "URI": "string"
            },
            "Permission": "FULL_CONTROL"|"WRITE"|"WRITE_ACP"|"READ"|"READ_ACP"
        }
        ...
    ],
    "Owner": {
        "DisplayName": "string",
        "ID": "string"
    }
}
I had the policy as a JSON file and used this command; it worked just fine.
aws s3api put-bucket-acl --bucket bucketname --access-control-policy file://yourJson.json
One more thing to note: I wasn't able to add permissions alongside the existing ones; the old ACL was being overwritten. Any permission you want to add needs to be in the JSON policy file together with the existing grants, so it is easier to fetch the current ACL first and extend it, as in the sketch below.
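A minimal sketch of that fetch-and-extend flow, assuming jq is available (the canonical ID is a placeholder):

# put-bucket-acl replaces the whole ACL, so fetch the current one first.
aws s3api get-bucket-acl --bucket bucketname > current-acl.json

# Append the new grant to the existing ones and keep only the Owner/Grants
# keys that --access-control-policy expects.
jq '.Grants += [{"Grantee": {"ID": "CANONICAL_ID_TO_GRANT", "Type": "CanonicalUser"}, "Permission": "READ"}]
    | {Owner, Grants}' current-acl.json > merged-acl.json

aws s3api put-bucket-acl --bucket bucketname --access-control-policy file://merged-acl.json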

The syntax is the following (with example):
aws s3api put-bucket-acl --bucket bucket_name --access-control-policy file://grant.json
grant.json file:
{
    "Grants": [
        {
            "Grantee": {
                "ID": "CANONICAL_ID_TO_GRANT",
                "Type": "CanonicalUser"
            },
            "Permission": "WRITE"
        },
        {
            "Grantee": {
                "ID": "CANONICAL_ID_TO_GRANT",
                "Type": "CanonicalUser"
            },
            "Permission": "READ"
        }
    ],
    "Owner": {
        "DisplayName": "example_owner",
        "ID": "CANONICAL_ID_OWNER"
    }
}

Related

How to save a Terraform output variable into a GitHub Actions environment variable

My project uses Terraform for setting up the infrastructure and GitHub Actions for CI/CD. After running terraform apply, I would like to save the value of a Terraform output variable as a GitHub Actions environment variable, to be used later by the workflow.
According to the GitHub Actions docs, this is the way to create or update environment variables using workflow commands.
Here is my simplified GitHub Actions workflow:
name: Setup infrastructure
jobs:
  run-terraform:
    name: Apply infrastructure changes
    runs-on: ubuntu-latest
    steps:
      ...
      - run: terraform output vm_ip
      - run: echo TEST=$(terraform output vm_ip) >> $GITHUB_ENV
      - run: echo ${{ env.TEST }}
When running locally, the command echo TEST=$(terraform output vm_ip) outputs exactly TEST="192.168.23.23", but in the GitHub Actions log the output comes out mangled.
I've tried with single quotes and double quotes. At some point I changed strategy and tried to use jq instead, adding the following steps to export all Terraform outputs to a JSON file and parse it with jq:
- run: terraform output -json >> /tmp/tf.out.json
- run: jq '.vm_ip.value' /tmp/tf.out.json
But now it throws the following error:
parse error: Invalid numeric literal at line 1, column 9
Even though the JSON generated is perfectly valid:
{
    "cc_host": {
        "sensitive": false,
        "type": "string",
        "value": "private.c.db.ondigitalocean.com"
    },
    "cc_port": {
        "sensitive": false,
        "type": "number",
        "value": 1234
    },
    "db_host": {
        "sensitive": false,
        "type": "string",
        "value": "private.b.db.ondigitalocean.com"
    },
    "db_name": {
        "sensitive": false,
        "type": "string",
        "value": "XXX"
    },
    "db_pass": {
        "sensitive": true,
        "type": "string",
        "value": "XXX"
    },
    "db_port": {
        "sensitive": false,
        "type": "number",
        "value": 1234
    },
    "db_user": {
        "sensitive": false,
        "type": "string",
        "value": "XXX"
    },
    "vm_ip": {
        "sensitive": false,
        "type": "string",
        "value": "206.189.15.70"
    }
}
The commands terraform output -json >> /tmp/tf.out.json and jq '.vm_ip.value' /tmp/tf.out.json work as expected locally.
After hours searching I've finally figured it out.
It seems that Terraform's GitHub Action (hashicorp/setup-terraform) offers an additional parameter called terraform_wrapper, which needs to be set to false if you plan on using the outputs in shell commands. You can read a more in-depth article here.
Otherwise, the outputs will be automatically exposed as the step's outputs and can be accessed like steps.<step_id>.outputs.<variable>. You can read more about them here and here.
For me, what worked was using terraform-bin output instead of terraform output.
More info here.
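For reference, a minimal sketch of the wrapper fix, assuming the workflow installs Terraform via hashicorp/setup-terraform (the version pin and the -raw flag are my additions):

- uses: hashicorp/setup-terraform@v2
  with:
    terraform_wrapper: false   # let `terraform output` print raw values to stdout
- run: echo "TEST=$(terraform output -raw vm_ip)" >> $GITHUB_ENV
- run: echo ${{ env.TEST }}

terraform output -raw also strips the surrounding quotes that the plain command prints for string values.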

How to export multiple parameters from JSON to AWS SSM Parameter store

I am trying to copy SSM parameters from one account to a different account and different region. I have hundreds of parameters, which I have retrieved using get-parameters-by-path.
Now I want to load them into a different region in a different account. When I add them one after the other using:
aws ssm put-parameter --cli-input-json file:///../parameters.json --region us-east-2
With parameters.json as:
{
    "Name": "/env/../../..",
    "Type": "String",
    "Value": ".."
}
it works without any issues, but I would like to know how I could load more than one at a time; I want them all to be loaded at once.
Here is the sample parameters.json which does not work. It doesn't throw any error, but just prints the input back.
{
    "Name": "/env/../../..",
    "Type": "String",
    "Value": ".."
},
{
    "Name": "/env/../../..",
    "Type": "String",
    "Value": ".."
},
{
    "Name": "/env/../../..",
    "Type": "String",
    "Value": " "
}
I cannot use aws-ssm-copy because the two are in different regions in different accounts, and I am also modifying the imported values before exporting them to the new account, which is not possible with aws-ssm-copy.
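--cli-input-json expects the arguments for a single API call, so a file with several comma-separated objects like the one above won't fan out into multiple puts. One workaround is to loop; a minimal sketch, assuming the parameters are stored as a JSON array and jq is available (the --overwrite flag is my addition):

# parameters.json holds an array: [{"Name": ..., "Type": ..., "Value": ...}, ...]
# Issue one put-parameter call per entry.
jq -c '.[]' parameters.json | while read -r param; do
    aws ssm put-parameter --cli-input-json "$param" --region us-east-2 --overwrite
done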

Substituting service url in an ARM template

I have an ARM template that deploys APIs to an API Management instance.
Here is an example of one API
{
    "properties": {
        "authenticationSettings": {
            "subscriptionKeyRequired": false
        },
        "subscriptionKeyParameterNames": {
            "header": "Ocp-Apim-Subscription-Key",
            "query": "subscription-key"
        },
        "apiRevision": "1",
        "isCurrent": true,
        "subscriptionRequired": true,
        "displayName": "DDD.CRM.PostLeadRequest",
        "serviceUrl": "https://test1/api/FuncCreateLead?code=XXXXXXXXXX",
        "path": "CRMAPI/PostLeadRequest",
        "protocols": [
            "https"
        ]
    },
    "name": "[concat(variables('ApimServiceName'), '/mms-crm-postleadrequest')]",
    "type": "Microsoft.ApiManagement/service/apis",
    "apiVersion": "2019-01-01",
    "dependsOn": []
}
When deploying this to different environments, I would like to be able to substitute the service URL depending on the environment. I'm wondering about the best approach.
Can I read in a config file or something like that?
At the time of deployment I have a variable that tells me the environment, so I can base decisions on that; I'm just not sure of the best way to do it.
See ARM template parameters: https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates#parameters They can be specified in a separate file, so you will have a single template but environment-specific parameter files.
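A minimal sketch of that approach: declare a serviceUrl string parameter in the template, reference it from the API resource with "serviceUrl": "[parameters('serviceUrl')]", and supply the value per environment in a parameter file like this (the parameter name and value are placeholders):

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "serviceUrl": {
            "value": "https://test1/api/FuncCreateLead?code=XXXXXXXXXX"
        }
    }
}

You would then pass a different file per environment, e.g. --parameters @parameters.test.json with the Azure CLI.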

How to specify metadata for GCE in packer?

I'm trying to create a GCE image from a Packer template.
Here is the part that I use for that purpose:
"builders": [
...
{
"type": "googlecompute",
"account_file": "foo",
"project_id": "bar",
"source_image": "centos-6-v20160711",
"zone": "us-central1-a",
"instance_name": "packer-building-image-centos6-baz",
"machine_type": "n1-standard-1",
"image_name": "centos6-some-box-name",
"ssh_username": "my_username",
"metadata": {
"startup-script-log-dest": "/opt/script.log",
"startup-script": "/opt/startup.sh",
"some_other_custom_metadata_key": "some_value"
},
"ssh_pty": true
}
],
...
I have also created the required files. Here is that part:
"provisioners": [
...
{
"type": "file",
"source": "{{user `files_path`}}/startup.sh",
"destination": "/opt/startup.sh"
},
...
{
"type": "shell",
"execute_command": "sudo sh '{{.Path}}'",
"inline": [
...
"chmod ugo+x /opt/startup.sh"
]
}
...
Everything works for me without the "metadata" field: I can create an image/instance with the provided parameters. But when I try to create an instance from the image, I can't find the provided metadata, and consequently I can't run my startup script, set the logging file, or use the other custom metadata.
Here is the documentation that I used: https://www.packer.io/docs/builders/googlecompute.html#metadata.
Any suggestion will be helpful.
Thanks in advance
The metadata key startup-script should contain the actual script, not a path. Provisioners run after the startup script has been executed (or at least started).
Instead, use startup_script_file in Packer and supply a path to the startup script, as in the sketch below.
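A minimal sketch of the builder with that change, reusing the values from the question (everything else stays as it was):

{
    "type": "googlecompute",
    "account_file": "foo",
    "project_id": "bar",
    "source_image": "centos-6-v20160711",
    "zone": "us-central1-a",
    "image_name": "centos6-some-box-name",
    "ssh_username": "my_username",
    "startup_script_file": "{{user `files_path`}}/startup.sh",
    "metadata": {
        "some_other_custom_metadata_key": "some_value"
    },
    "ssh_pty": true
}

Note that with startup_script_file the script runs during the Packer build; instance metadata set at build time is not carried over to instances later created from the image, so anything they need at boot has to be set again at instance creation.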

sensu mailer and pipe

I'm switching over from Nagios to Sensu. I'm using Chef to automate the process. Everything is working great except the mailer; actually, I narrowed it down to the "pipe" that is supposed to redirect the JSON output from the check to the handler. It doesn't. When I use
{
    "handlers": {
        "email": {
            "type": "pipe",
            "command": "mail -s \"sensu alert\" alert@example.com",
            "severities": [
                "ok",
                "critical"
            ]
        }
    }
}
I get a blank email. When I use the mailer.rb handler, I get no email whatsoever. I made sure to include mail_to and mail_from in mailer.json, and I see the logs have the correct information for the handler and email parameters.
So I've concluded the "pipe" isn't working. Can anybody help with that? I would greatly appreciate it. I wish there was a Sensu community, but it may be too new to have one.
With regard to mailer.rb, have you checked the server logs (by default in /var/log/sensu/sensu-server.log) for errors? If there is an error in any of the handlers, it will show up in those logs.
mailer.rb requires several gems in order to run. To find out whether you are using Sensu's embedded Ruby, check /etc/default/sensu for EMBEDDED_RUBY. If it is false, you will need to make sure your system Ruby has all those gems (sensu-handler, mail, timeout) installed. If it is set to true, do the same with Sensu's embedded Ruby:
/opt/sensu/embedded/bin/gem list
Make sure the gems are installed, try again, and check sensu-server.log for errors.
If you have more issues, there is in fact a community - check out #sensu on Freenode.
You can write your own event data JSON and pass it through a pipe as follows:
cat event.json | /opt/sensu/embedded/bin/ruby mailer.rb
The easiest way to get the event.json file is from the sensu-server.log.
To use mailer.rb you need your own mail server! If you post the Sensu server logs, I think I can help you.
I've done some testing, and the mail-into-pipe approach does not work with GNU mail/mailx (I assume you're using Ubuntu or something similar?).
Two solutions:
1) Install BSD mail:
sudo apt-get install bsd-mailx
2) Or modify the command slightly; to get mail to read from stdin you'll need to do something like:
{
    "handlers": {
        "email": {
            "type": "pipe",
            "command": "echo $(cat) > /tmp/mail.txt; mail -s \"sensu alert\" alert@example.com < /tmp/mail.txt"
        }
    }
}
The idea is normally that you read the event JSON from stdin within a scripting language and then pull out the bits of the event that you want to send; the above will email out the entire JSON file.
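As an illustration of that idea, a minimal pipe-handler sketch in shell, assuming jq is installed (the field names follow the standard Sensu event format; the address is a placeholder):

#!/bin/sh
# Read the event JSON from stdin, extract the interesting fields,
# and mail a short summary instead of the raw JSON.
event=$(cat)
client=$(echo "$event" | jq -r '.client.name')
check=$(echo "$event" | jq -r '.check.name')
output=$(echo "$event" | jq -r '.check.output')
echo "$output" | mail -s "sensu alert: $check on $client" alert@example.com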
You can use the Sensu mailer handler. Please find the setup steps below:
sensu-install -p sensu-plugins-mailer
apt-get install postfix
/etc/init.d/postfix start
cd /etc/sensu/conf.d/
When we install this plugin we get 3 Ruby files; this time we are using handler-mailer.rb.
First we need to create the handler config file in /etc/sensu/conf.d/:
vim handler-mailer.json
{
    "mailer": {
        "admin_gui": "http://127.0.0.1:3000/",
        "mail_from": "localhost",
        "mail_to": ["yourmailid-1", "yourmailid-2"],
        "smtp_address": "localhost",
        "smtp_port": "25"
    }
}
Now we need to create a mail handler file in /etc/sensu/conf.d/:
{
    "handlers": {
        "mymailer": {
            "type": "pipe",
            "command": "/opt/sensu/embedded/bin/handler-mailer.rb",
            "severities": [
                "critical",
                "unknown"
            ]
        }
    }
}
In the above file the handler name is mymailer; we need to use this handler name in our checks, as in the sketch below.
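For example, a check definition referencing that handler might look like this (the check name, command, and subscription are placeholders):

{
    "checks": {
        "check-disk": {
            "command": "check-disk-usage.rb",
            "subscribers": ["production"],
            "interval": 60,
            "handlers": ["mymailer"]
        }
    }
}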
Use bin/handler-mailer-mailgun.rb or bin/handler-mailer-ses.rb or bin/handler-mailer.rb
Example:
echo '{
    "id": "ef6b87d2-1f89-439f-8bea-33881436ab90",
    "action": "create",
    "timestamp": 1460172826,
    "occurrences": 2,
    "check": {
        "type": "standard",
        "total_state_change": 11,
        "history": ["0", "0", "1", "1", "2", "2"],
        "status": 2,
        "output": "No keepalive sent from client for 230 seconds (>=180)",
        "executed": 1460172826,
        "issued": 1460172826,
        "name": "keepalive",
        "thresholds": {
            "critical": 180,
            "warning": 120
        }
    },
    "client": {
        "timestamp": 1460172596,
        "version": "1.0.0",
        "socket": {
            "port": 3030,
            "bind": "127.0.0.1"
        },
        "subscriptions": [
            "production"
        ],
        "environment": "development",
        "address": "127.0.0.1",
        "name": "client-01"
    }
}' | /opt/sensu/embedded/bin/handler-mailer-mailgun.rb
Output:
mail -- sent alert for client-01/keepalive to your.email@example.com