Why does this IAM policy have a syntax error?

I'm new to AWS and currently learning with LocalStack in lieu of the real service.
arn:aws:s3:::my-bucket/path/to/foo.json is a valid ARN for an object in a newly created S3 bucket. Apart from that one uploaded file, the bucket is pristine and nothing in it is externally accessible. I'm trying to learn IAM by working through examples to create a policy that grants read access to parts of an S3 bucket.
I created the following policy file based on this example from the AWS CLI reference:
$ cat ./policy
{
  "Version": "2020-04-27",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket/path/to/foo.json"
      ]
    }
  ]
}
From the same linked example, I tried to create my policy with this command, which failed:
$ aws --endpoint-url=http://localhost:4593 iam create-policy --policy-name my-bucket-policy --policy-document file://policy
An error occurred (MalformedPolicyDocument) when calling the CreatePolicy operation: Syntax errors in policy.
Why did this command fail, or is there a way I can get a more-descriptive error message?
(policy is a file in the cwd where I execute the aws CLI command)
By my reading, the error message implies malformed JSON, but linters like https://jsonlint.com/ indicate the text is valid JSON. Moreover, the changes from the source example are minimal and seem reasonable: "Version" was changed to today's date, and the "Resource" ARN was changed to the one relevant to me.

The Version value is incorrect. It is not a document date but the fixed policy-language version identifier, and it must be "2012-10-17".
Edit: I was mistaken about Principal (see comments). Principal is required only for resource-based policies:
Principal (Required in only some circumstances) – If you create a resource-based policy, you must indicate the account, user, role, or federated user to which you would like to allow or deny access. If you are creating an IAM permissions policy to attach to a user or role, you cannot include this element. The principal is implied as that user or role.
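For reference, a minimal sketch of the fix: the policy file is unchanged apart from the Version string, and the create-policy call is the same one from the question (including the LocalStack endpoint, taken as-is):

  cat > ./policy <<'EOF'
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": ["s3:Get*", "s3:List*"],
        "Resource": ["arn:aws:s3:::my-bucket/path/to/foo.json"]
      }
    ]
  }
  EOF
  # Re-run the original command, now pointing at the corrected policy file.
  aws --endpoint-url=http://localhost:4593 iam create-policy --policy-name my-bucket-policy --policy-document file://policy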

Related

Unauthenticated user error when exporting database table from MySQL in Gcloud

I am running the following command in my gcloud shell, which is intended to export a specific table from my database to a storage bucket:
gcloud sql export sql databasen gs://my_bucket/file_nam,e.sql --async --database=database --table=table --offload
I keep on getting an error:
{
  "error": {
    "code": 401,
    "message": "Request is missing required authentication credential. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.",
    "errors": [
      {
        "message": "Login Required.",
        "domain": "global",
        "reason": "required",
        "location": "Authorization",
        "locationType": "header"
      }
    ],
    "status": "UNAUTHENTICATED",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.ErrorInfo",
        "reason": "CREDENTIALS_MISSING",
        "domain": "googleapis.com",
        "metadata": {
          "method": "google.cloud.sql.v1beta4.SqlOperationsService.Get",
          "service": "sqladmin.googleapis.com"
        }
      }
    ]
  }
}
I have already authenticated myself in the shell with gcloud auth login and granted the CloudAdmin IAM role to the service account for the SQL instance. I also have all the Cloud SQL APIs enabled.
I am at a loss and would appreciate any direction here. Thank you!
Every request your application sends to the Cloud SQL Admin API needs to identify your application to Google.
This error can occur if you haven't set up the OAuth header properly. To address this issue, repeat the steps to verify your initial setup.
When you run the code from your local machine, you need to make your own OAuth token accessible to the code by using one of the following authentication methods:
-- Using a service account:
Create a service account and download a JSON key.
Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of the downloaded service account key file (JSON).
Client libraries that use Application Default Credentials (google.auth.default) check this environment variable and pick up the key automatically.
You can check whether the service account is the active credential by running the following command:
gcloud auth list
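A minimal sketch of those steps from the shell, assuming the downloaded key lives at /path/to/key.json and using placeholder instance, database, table, and bucket names in place of the ones from the question:

  # Make the key available to client libraries, and make gcloud itself use the service account.
  export GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json
  gcloud auth activate-service-account --key-file="$GOOGLE_APPLICATION_CREDENTIALS"
  # Re-run the export with the service account as the active credential.
  gcloud sql export sql INSTANCE_NAME gs://MY_BUCKET/file_name.sql --database=DATABASE_NAME --table=TABLE_NAME --offload --async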
I would also recommend checking the official documentation for Export SQL and Best Practices.
Also check these examples of similar implementations:
Authorize Cloud sql admin api call
Google Cloud admin authentication
Permissions for google cloud sql export

How to figure out why VS Code considers the certificate expired whereas it is not?

In VS Code, there is an error loading a particular JSON schema (Renovate Bot):
Unable to load schema from 'https://docs.renovatebot.com/renovate-schema.json': certificate has expired.(768)
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "...": "..."
}
I've also tried associating the file with the schema via workspace settings, with the same result.
The web server's certificate seems to be valid.
Other schemas are loaded successfully, for example for firebase.json (set in workspace settings).
"json.schemas": [
{
"fileMatch": ["firebase.json"],
"url": "https://raw.githubusercontent.com/firebase/firebase-tools/master/schema/firebase-config.json"
}
],
How can I figure out why VS Code considers the certificate expired when it is not? I have not found any details on this in any of the Output panels.
This has been annoying me for longer than I can remember. However, it turns out I only encountered this issue when inside my company's network, where there is a self-signed certificate between me and the internet. On a whim I removed all of the expired self-signed certificates from "Manage Computer Certificates" and this error disappeared.
Therefore, for reasons I can't comprehend, VS Code (or whatever is actually running under the hood) is attempting to use expired system CA certificates and reporting an error as a result, despite a good certificate being present.
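One way to see exactly what certificate chain the server presents, independently of VS Code (a sketch; assumes openssl is installed):

  # Print the validity dates, issuer, and subject of the certificate served for the schema URL.
  openssl s_client -connect docs.renovatebot.com:443 -servername docs.renovatebot.com </dev/null 2>/dev/null \
    | openssl x509 -noout -dates -issuer -subject

If the dates here are valid, the expiry error is coming from a certificate in your local trust store rather than from the server.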

CLI always returns NotAuthorizedOrNotFound

I am trying to get the CLI working on Ubuntu 16.04.1, but I keep running into this error:
(cli_env) rnayak#ubuntuvm:~$ bmcs network vcn list -c c21
ServiceError:
{
  "code": "NotAuthorizedOrNotFound",
  "message": "Authorization failed or requested resource not found.",
  "opc-request-id": "9F219FA4DBAB4E95B3A6F1025DF17507/14CE5DEB567A43B68CC8694D24023497/DD9D0EB116C04F76ACDF93DCFEA06A08",
  "status": 404
}
Here is what I have done:
- Ran bmcs setup config.
- Entered the user OCID, tenancy OCID, and region.
- Generated a key pair.
- Went to the console and added an API key (the public key generated by the CLI in the previous step).
But every invocation of bmcs results in "NotAuthorizedOrNotFound" / "Authorization failed or requested resource not found."
What am I missing? Any pointers appreciated.
-c (--compartment-id) takes a compartment id (ocid), not a compartment name.
So you'd want to do something like:
C=ocid1.compartment.oc1..aaaaaarhifmvrvuqtye5q65flzp3pp2jojdc6rck6copzqck3ukcypxfga
bmcs network vcn list -c $C
Where C is set to your compartment's id. Please see Using the CLI for more info.
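If you don't know the compartment OCID offhand, it can be looked up by name. A rough sketch, assuming your tenancy OCID is in $TENANCY (exact output formatting may differ between CLI versions):

  # List compartments in the tenancy and note the "id" of the one you want.
  bmcs iam compartment list -c $TENANCY
  # Then pass that OCID to the vcn list command.
  bmcs network vcn list -c <compartment-ocid-from-the-output>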

packer ssh_private_key_file is invalid

I am trying to use the OpenStack provisioner API in packer to clone an instance. So far I have developed the script:
{
  "variables": {
  },
  "description": "This will create the baked vm images for any environment from dev to prod.",
  "builders": [
    {
      "type": "openstack",
      "identity_endpoint": "http://192.168.10.10:5000/v3",
      "tenant_name": "admin",
      "domain_name": "Default",
      "username": "admin",
      "password": "****************",
      "region": "RegionOne",
      "image_name": "cirros",
      "flavor": "m1.tiny",
      "insecure": "true",
      "source_image": "0f9b69ee-4e9f-4807-a7c4-6a58355c37b1",
      "communicator": "ssh",
      "ssh_keypair_name": "******************",
      "ssh_private_key_file": "~/.ssh/id_rsa",
      "ssh_username": "root"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sleep 60"
      ]
    }
  ]
}
But upon running the script using packer build script.json I get the following error:
User:packer User$ packer build script.json
openstack output will be in this color.
1 error(s) occurred:
* ssh_private_key_file is invalid: stat ~/.ssh/id_rsa: no such file or directory
My id_rsa is a file starting and ending with:
------BEGIN RSA PRIVATE KEY------
key
------END RSA PRIVATE KEY--------
I thought that meant it was a PEM-related file, so this seemed weird, and I made a pastebin of my PACKER_LOG: http://pastebin.com/sgUPRkGs
My initial analysis tells me that the only error is a missing packerconfig file. Upon googling this, the top results say that if Packer doesn't find one it falls back to defaults. Is this why it is not working?
Any help would be of great assistance. Apparently there are similar problems on the GitHub issues page (https://github.com/mitchellh/packer/issues), but I don't understand some of the solutions posted or whether they apply to me.
I've tried to be as informative as I can. Happy to provide any information where I can!
Thank you.
* ssh_private_key_file is invalid: stat ~/.ssh/id_rsa: no such file or directory
The "~" character isn't special to the operating system. It's only special to shells and certain other programs which choose to interpret it as referring to your home directory.
It appears that the Packer OpenStack builder doesn't treat "~" as special, and it's looking for a key file with the literal pathname "~/.ssh/id_rsa". It's failing because it can't find a key file with that literal pathname.
Update the ssh_private_key_file entry to list the actual pathname to the key file:
"ssh_private_key_file": "/home/someuser/.ssh/id_rsa",
Of course, you should also make sure that the key file actually exists at the location that you specify.
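If you'd rather not hard-code your home directory in the template, one option is to let the shell expand the path at build time. A sketch: key_file is a made-up user variable here, which the template would declare under "variables" and reference as {{user `key_file`}} for ssh_private_key_file.

  # $HOME is expanded by the shell before Packer ever sees the value.
  packer build -var "key_file=$HOME/.ssh/id_rsa" script.json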
Have to leave a post here as this just bit me: I was using a variable set to ~/.ssh/id_rsa, changed it to the full path, and it still failed. It turned out the variable value passed in from the command line via a Makefile had a trailing space, which was causing this error. Hope this saves someone some time.
Kenster's answer got you past your initial question, but from your comment it sounds like you were still stuck.
Per my reply to your comment, Packer doesn't seem to support supplying a passphrase, but you CAN tell it to ask the running SSH agent for a decrypted key if the correct passphrase was supplied when the key was loaded. This should allow you to use Packer to build with a passphrase-protected SSH key, as long as you've loaded the key into the SSH agent before attempting the build.
https://www.packer.io/docs/templates/communicator.html#ssh_agent_auth
The SSH communicator connects to the host via SSH. If you have an SSH agent configured on the host running Packer, and SSH agent authentication is enabled in the communicator config, Packer will automatically forward the SSH agent to the remote host.
The SSH communicator has the following options:
ssh_agent_auth (boolean) - If true, the local SSH agent will be used to authenticate connections to the remote host. Defaults to false.
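In practice that means loading the key into an agent before the build and enabling agent auth in the communicator config. A sketch, using the key path from the question and assuming the template sets "ssh_agent_auth": true instead of ssh_private_key_file:

  # Start an agent if one isn't already running, load the passphrase-protected key, then build.
  eval "$(ssh-agent -s)"
  ssh-add ~/.ssh/id_rsa      # prompts for the passphrase once
  packer build script.json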

Can't download file from S3 in AWS Lambda without any HTTP error

This is my code in AWS lambda:
import boto3

def worker_handler(event, context):
    s3 = boto3.resource('s3')
    s3.meta.client.download_file('s3-bucket-with-script', 'scripts/HelloWorld.sh', '/tmp/hw.sh')
    print "Connecting to "
I just want to download a file stored in S3, but when I run the code, the function just runs until it times out and prints nothing.
This is the log output:
START RequestId: 8b9b86dd-4d40-11e6-b6c4-afcc5006f010 Version: $LATEST
END RequestId: 8b9b86dd-4d40-11e6-b6c4-afcc5006f010
REPORT RequestId: 8b9b86dd-4d40-11e6-b6c4-afcc5006f010 Duration: 300000.12 ms Billed Duration: 300000 ms Memory Size: 128 MB Max Memory Used: 28 MB
2016-07-18T23:42:10.273Z 8b9b86dd-4d40-11e6-b6c4-afcc5006f010 Task timed out after 300.00 seconds
This Lambda function has the following role policy, which shows that I have permission to get files from S3:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateNetworkInterface",
        "ec2:DescribeNetworkInterfaces",
        "ec2:DeleteNetworkInterface"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    }
  ]
}
Is there any other setup I missed, or any way I can get this program to work?
Since you mentioned a timeout, I would check the network configuration. If the function runs inside a VPC, this may be caused by the lack of a route to the internet, which can be solved using a NAT Gateway or an S3 VPC endpoint. The video below explains the required configuration.
Introducing VPC Support for AWS Lambda
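A quick way to confirm whether the function is attached to a VPC at all (a sketch; my-function is a placeholder name). An empty VpcConfig means the function already has direct internet access to S3; a populated one means you need the NAT Gateway or S3 endpoint route.

  # Show only the VPC configuration of the function.
  aws lambda get-function-configuration --function-name my-function --query VpcConfig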
Per the docs your code should be a little different:
import boto3
# Get the service client
s3 = boto3.client('s3')
# Download object at bucket-name with key-name to tmp.txt
s3.download_file("bucket-name", "key-name", "tmp.txt")
Also, note that Lambda storage is ephemeral, so downloading the file does not accomplish much on its own: you download it, the Lambda shuts down, and the file is gone. If you want to keep it, you need to send it somewhere else after downloading it to Lambda.
You may also need to raise your timeout setting.
As indicated in another answer, you may need a NAT Gateway or an S3 VPC endpoint. I needed one because my Lambda was in a VPC so it could access RDS. I started going through the trouble of setting up a NAT Gateway until I realized that a NAT Gateway is currently $0.045 per hour, or about $1 ($1.08) per day, which is way more than I wanted to spend.
Then I considered an S3 VPC endpoint. This sounded like setting up another VPC, but it is not a VPC, it is a VPC endpoint. If you go into the VPC section of the console there is an "Endpoints" section (on the left) along with subnets, routes, NAT gateways, etc. For all the complexity (in my opinion) of setting up the NAT Gateway, the endpoint was surprisingly simple.
The only tricky part was selecting the service. You'll notice the service names are tied to the region you are in. For example, mine is "com.amazonaws.us-east-2.s3"
But then you may notice you have two options, a "gateway" and an "interface". On Reddit someone claimed that they charge for interfaces but not gateways, so I went with gateway and things seem to work.
https://www.reddit.com/r/aws/comments/a6yppu/eli5_what_is_the_difference_between_interface/
If you don't trust that Reddit user, I later found that AWS currently says this:
"Note: To avoid the NAT Gateway Data Processing charge in this example, you could setup a Gateway Type VPC endpoint and route the traffic to/from S3 through the VPC endpoint instead of going through the NAT Gateway. There is no data processing or hourly charges for using Gateway Type VPC endpoints. For details on how to use VPC endpoints, please visit VPC Endpoints Documentation."
https://aws.amazon.com/vpc/pricing/
Note, I also updated the pathing type per an answer in this other question, but I'm not sure that really mattered.
https://stackoverflow.com/a/44478894/764365
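For completeness, the same gateway endpoint can be created from the CLI. A sketch: the VPC ID and route table ID are placeholders, and the service name should match your region as noted above.

  # Gateway-type S3 endpoint: no hourly or data-processing charge; routes S3 traffic inside the VPC.
  aws ec2 create-vpc-endpoint --vpc-id vpc-0123456789abcdef0 --vpc-endpoint-type Gateway \
    --service-name com.amazonaws.us-east-2.s3 --route-table-ids rtb-0123456789abcdef0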
Did you check whether your timeout was set correctly? I had the same issue, and it was timing out because my default value was 3 seconds and the file took longer than that to download.
You can raise the timeout in the function's configuration settings.
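For example, from the CLI (a sketch; the function name and the 60-second value are placeholders):

  # Raise the function timeout (in seconds) so the download has time to finish.
  aws lambda update-function-configuration --function-name my-function --timeout 60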