Deny S3 access if the request does not come from a specific role - json

I have the policy below. If the request comes from one of those roles it should allow the delete; otherwise access should be denied, including for IAM users.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Principal": "*",
"Action": "s3:DeleteObject",
"Resource": [
"arn:aws:s3:::abc-bucket",
"arn:aws:s3:::abc-bucket/*"
],
"Condition": {
"StringNotEquals": {
"aws:PrincipalArn": [
"arn:aws:iam::000000000:role/EC2",
"arn:aws:iam::000000000:role/Eventbridge"
]
}
}
}
]
}
After applying this I still get Access Denied. Am I doing something wrong here?
I tried deleting the object with this command:
aws s3 rm s3://abc-bucket/tmp-test-delete
delete failed: s3://abc-bucket/test-delete An error occurred (Access Denied) when calling the DeleteObject operation: Access Denied

It appears that you have two requirements:
Allow specific roles to Delete objects from the bucket
This can be done by adding an Allow policy on the IAM Role itself (without using a Bucket Policy). It would look something like:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:DeleteObject",
"Resource": "arn:aws:s3:::abc-bucket/*",
}
]
}
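If you prefer to attach this from the command line, one way (just a sketch; the policy name is illustrative and it assumes the JSON above is saved as allow-delete.json) is to add it as an inline policy on the role:
aws iam put-role-policy --role-name EC2 --policy-name AllowDeleteAbcBucket --policy-document file://allow-delete.json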
Deny other users/roles from being able to Delete objects on the bucket
Amazon S3 is "Deny by Default". Therefore, if you are not currently granting any IAM Users or IAM Roles the ability to Delete objects off this (or all) S3 bucket, then there is no need to do anything -- they will be denied the ability to delete objects by default.
However, if you have an existing policy that grants this permission to users (eg Admin users have s3:* permissions on all buckets), then you would need to use the Bucket Policy that you have shown. The Deny will overrule the Allow, thereby preventing them from being able to Delete objects from the bucket.
However, think carefully about why they were given the permissions in the first place -- it might be that users have been granted too much access and the existing Allow policies should be reduced, rather than adding Deny policies.
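If you do decide to apply the bucket policy from the question, it can be put in place with something like this (a sketch, assuming that Deny statement is saved as deny-delete.json):
aws s3api put-bucket-policy --bucket abc-bucket --policy file://deny-delete.json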

Related

Unauthenticated user error when exporting a database table from MySQL in Gcloud

I am running the following code in my gcloud cmd shell, which is intended to export a specific table from my database to a storage bucket:
gcloud sql export sql databasen gs://my_bucket/file_name.sql --async --database=database --table=table --offload
I keep on getting an error:
{
"error": {
"code": 401,
"message": "Request is missing required authentication credential. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.",
"errors": [
{
"message": "Login Required.",
"domain": "global",
"reason": "required",
"location": "Authorization",
"locationType": "header"
}
],
"status": "UNAUTHENTICATED",
"details": [
{
"#type": "type.googleapis.com/google.rpc.ErrorInfo",
"reason": "CREDENTIALS_MISSING",
"domain": "googleapis.com",
"metadata": {
"method": "google.cloud.sql.v1beta4.SqlOperationsService.Get",
"service": "sqladmin.googleapis.com"
}
}
]
}
}
I have already authenticated myself in the shell with gcloud auth login and given CloudAdmin IAM authority to the service account for the sql instance. I also have all the cloudsql cloud APIs enabled.
I am at loss and would appreciate any direction here. Thank you!
Every request your application sends to the Cloud SQL Admin API needs to identify your application to Google.
This error can occur if you haven't set up the OAuth header properly. To address this issue, repeat the steps to verify your initial setup.
When you run the code from your local machine, you need to make your own OAuth token accessible to the code by using one of the following authentication methods:
-- Using a service account:
Create a Service Account and download a JSON key.
Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of the downloaded service account key file (JSON).
The google.auth.default() flow described in the documentation also checks this environment variable and picks up the JSON key automatically.
You can check if the service account is the active one on the instance by running the following command:
gcloud auth list
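As a concrete sketch of those steps (SA_NAME, PROJECT_ID, and the key path here are placeholders):
gcloud iam service-accounts keys create key.json --iam-account=SA_NAME@PROJECT_ID.iam.gserviceaccount.com
gcloud auth activate-service-account --key-file=key.json
export GOOGLE_APPLICATION_CREDENTIALS="$PWD/key.json"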
I would also recommend checking the official documentation for Export SQL and Best Practices.
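For completeness, the export command itself generally takes this shape (the instance, bucket, database, and table names below are placeholders):
gcloud sql export sql INSTANCE_NAME gs://BUCKET_NAME/dump.sql --database=DATABASE_NAME --table=TABLE_NAME --offload --async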
Also check these examples for similar implementations:
Authorize Cloud sql admin api call
Google Cloud admin authentication
Permissions for google cloud sql export

How to create a role via the aws cli with inline JSON instead of an explicit file

My operation is very simple: I'd like to create a role using the aws cli, but without an external JSON file. For example:
aws iam create-role --role-name authenticateDeviceCrossAccountAssumeRole --assume-role-policy-document "{
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Principal": { "AWS": "arn:aws:iam::984589850232:root" },
"Action": "sts:AssumeRole"
}
}"
The above doesn't work; it says: An error occurred (MalformedPolicyDocument) when calling the CreateRole operation: This policy contains invalid Json
If I try removing the double quotes, I get this error in the terminal: zsh: parse error near `}'
However, if I create a file explicitly and run that, it works fine. For example:
aws iam create-role --role-name authenticateDeviceCrossAccountAssumeRole --assume-role-policy-document file://$STAGE.json
But for some reason I can't add a file and run it; I want to run it with plain JSON. Can someone help me achieve this?
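One approach that usually avoids the quoting problem is to wrap the whole document in single quotes, so the inner double quotes reach the AWS CLI untouched (a sketch of the same command with the policy inlined):
aws iam create-role --role-name authenticateDeviceCrossAccountAssumeRole --assume-role-policy-document '{"Version": "2012-10-17", "Statement": {"Effect": "Allow", "Principal": {"AWS": "arn:aws:iam::984589850232:root"}, "Action": "sts:AssumeRole"}}'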

Why does this IAM policy have a syntax error?

I'm new to/learning about AWS, currently using LocalStack in lieu of real live AWS.
arn:aws:s3:::my-bucket/path/to/foo.json is a valid S3 key to an object in a newly-created S3 bucket. Because the bucket is newly created and pristine, other than the one file upload, nothing in it is externally accessible. I'm trying to learn about IAM by working through examples to create a policy that grants read-access to parts of a S3 bucket.
I created the following policy file based on this example from the AWS CLI reference:
$ cat ./policy
{
"Version": "2020-04-27",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:Get*",
"s3:List*"
],
"Resource": [
"arn:aws:s3:::my-bucket/path/to/foo.json"
]
}
]
}
From the same linked example, I tried to create my policy with this command, which failed:
$ aws --endpoint-url=http://localhost:4593 iam create-policy --policy-name my-bucket-policy --policy-document file://policy
An error occurred (MalformedPolicyDocument) when calling the CreatePolicy operation: Syntax errors in policy.
Why did this command fail, or is there a way I can get a more-descriptive error message?
(policy is a file in the cwd where I execute the aws CLI command)
By my reading, the error message implies malformed JSON, but linters like https://jsonlint.com/ indicate that the text is valid JSON. Moreover, the changes from the source example are minimal and would appear reasonable: "Version" is changed to today's date, and the "Resource" ARN is changed to what's relevant to me.
An incorrect version is given. It should be: "2012-10-17"
Edit: Mistake with Principal. See comments. Principal is required for resource-based policies:
Principal (Required in only some circumstances) – If you create a resource-based policy, you must indicate the account, user, role, or federated user to which you would like to allow or deny access. If you are creating an IAM permissions policy to attach to a user or role, you cannot include this element. The principal is implied as that user or role.
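For reference, the same policy with only the Version value corrected:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket/path/to/foo.json"
      ]
    }
  ]
}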

Can't download file from S3 in AWS Lambda without any HTTP error

This is my code in AWS lambda:
import boto3

def worker_handler(event, context):
    s3 = boto3.resource('s3')
    s3.meta.client.download_file('s3-bucket-with-script', 'scripts/HelloWorld.sh', '/tmp/hw.sh')
    print "Connecting to "
I just want to download a file stored in S3, but when I run the code, the program just runs until it times out and prints nothing.
This is the Logs file
START RequestId: 8b9b86dd-4d40-11e6-b6c4-afcc5006f010 Version: $LATEST
END RequestId: 8b9b86dd-4d40-11e6-b6c4-afcc5006f010
REPORT RequestId: 8b9b86dd-4d40-11e6-b6c4-afcc5006f010 Duration: 300000.12 ms Billed Duration: 300000 ms Memory Size: 128 MB Max Memory Used: 28 MB
2016-07-18T23:42:10.273Z 8b9b86dd-4d40-11e6-b6c4-afcc5006f010 Task timed out after 300.00 seconds
I have this role on the Lambda function; it shows that I have permission to get files from S3:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:CreateNetworkInterface",
"ec2:DescribeNetworkInterfaces",
"ec2:DeleteNetworkInterface"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "arn:aws:logs:*:*:*"
},
{
"Sid": "AllowPublicRead",
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::*"
]
}
]
}
Is there any other setup I missed, or any way I can get this program to continue?
As you mentioned a timeout, I would check the network configuration. If you are going through a VPC, this may be caused by the lack of route to the internet. This can be solved using a NAT Gateway or S3 VPC endpoint. The video below explains the configuration required.
Introducing VPC Support for AWS Lambda
Per the docs your code should be a little different:
import boto3
# Get the service client
s3 = boto3.client('s3')
# Download object at bucket-name with key-name to tmp.txt
s3.download_file("bucket-name", "key-name", "tmp.txt")
Also, note that Lambda has an ephemeral file system, so downloading the file achieves little on its own: once the function finishes, the downloaded copy is gone. You need to send it somewhere else after downloading it if you want to keep it.
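If you do need to keep the file, a rough sketch of pushing it back out to S3 before the function exits (reusing the client from the snippet above; the destination bucket and key are made up):
# upload the local copy somewhere durable before the Lambda finishes
s3.upload_file("/tmp/hw.sh", "some-output-bucket", "processed/hw.sh")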
Also, you may need to tweak your timeout settings to be higher.
As indicated in another answer you may need a NAT Gateway or a S3 VPC endpoint. I needed it because my Lambda was in a VPC so it could access RDS. I started going through the trouble of setting up a NAT Gateway until I realized that a NAT Gateway is currently $0.045 per hour, or about $1 ($1.08) per day, which is way more than I wanted to spend.
Then I needed to consider an S3 VPC endpoint. This sounded like setting up another VPC, but it is not a VPC, it is a VPC endpoint. If you go into the VPC section there is an "Endpoints" section (on the left) along with subnets, routes, NAT gateways, etc. For all the complexity (in my opinion) of setting up the NAT gateway, the endpoint was surprisingly simple.
The only tricky part was selecting the service. You'll notice the service names are tied to the region you are in. For example, mine is "com.amazonaws.us-east-2.s3"
But then you may notice you have two options, a "gateway" and an "interface". On Reddit someone claimed that they charge for interfaces but not gateways, so I went with gateway and things seem to work.
https://www.reddit.com/r/aws/comments/a6yppu/eli5_what_is_the_difference_between_interface/
If you don't trust that Reddit user, I later found that AWS currently says this:
"Note: To avoid the NAT Gateway Data Processing charge in this example, you could setup a Gateway Type VPC endpoint and route the traffic to/from S3 through the VPC endpoint instead of going through the NAT Gateway. There is no data processing or hourly charges for using Gateway Type VPC endpoints. For details on how to use VPC endpoints, please visit VPC Endpoints Documentation."
https://aws.amazon.com/vpc/pricing/
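For reference, a gateway endpoint can also be created from the CLI; this is only a sketch with placeholder IDs, and the service name depends on your region:
aws ec2 create-vpc-endpoint --vpc-id vpc-0123456789abcdef0 --vpc-endpoint-type Gateway --service-name com.amazonaws.us-east-2.s3 --route-table-ids rtb-0123456789abcdef0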
Note, I also updated the pathing type per an answer in this other question, but I'm not sure that really mattered.
https://stackoverflow.com/a/44478894/764365
Did you check whether your timeout is set correctly? I had the same issue; it was timing out because the default value was 3 seconds and the file took longer than that to download. The timeout is configured in the Lambda function's settings.

HTTP password setup on mod_register_web ejabberd

I have configured the mod_register_web module in ejabberd in the following way. I added this configuration in the listen part:
{5281, ejabberd_http, [
%%tls, %% currently https not implemented
%%{certfile, "/etc/ejabberd/certificate.pem"},
{request_handlers, [
{["register"], mod_register_web}
]}
]},
I added the module in the modules part:
{mod_register_web, []}
Then I tried:
http://localhost:5281/register/
and the page becomes available without any authentication, meaning anyone can access it and add users. Then I tried to make it secure with different combinations like:
{5281, ejabberd_http, [
http_bind,
http_poll,
web_admin,
{access, configure, [{allow, admin}]} %% actually admin has password
{request_handlers, [
{["register"], mod_register_web}
]}
]},
But it is still not asking for any password, while port 5280 for the admin pages is password protected. Can anyone guide me on how to apply security to the mod_register_web module, so that whenever someone accesses it over IP it prompts for a username and password?
It can be done by modifying the source code (mod_register_web.erl). As in 'ejabberd_web_admin.erl', call get_auth_admin() and check its result in process().