I am trying to use the Cost Explorer API with IAM user credentials, but I am getting an access denied error.
Below is the policy attached to the IAM user. Is any other permission required? Where am I going wrong?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ce:"
      ],
      "Resource": [
        ""
      ]
    }
  ]
}
Your policy is probably missing an asterisk (*), which is why you are getting an access denied error. You can use the policy below to access Cost Explorer:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ce:*",
      "Resource": "*"
    }
  ]
}
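With the corrected policy attached, a quick way to confirm access is a small boto3 call. This is a minimal sketch, assuming the IAM user's credentials are configured in the environment; the date range is a placeholder:
import boto3

# Cost Explorer is served from a single endpoint in us-east-1.
ce = boto3.client('ce', region_name='us-east-1')

# Placeholder dates; adjust to a period with actual usage.
response = ce.get_cost_and_usage(
    TimePeriod={'Start': '2019-01-01', 'End': '2019-02-01'},
    Granularity='MONTHLY',
    Metrics=['UnblendedCost'],
)
print(response['ResultsByTime'])
If this call succeeds, the policy is in effect; if it still returns AccessDenied, check that the policy is attached to the same user whose credentials you are using.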
First, make sure you are using root user credentials.
You can enable Cost Explorer only if you are the owner of the AWS account and you signed in to the account with your root credentials. If you are the owner of the master account in an organization, enabling Cost Explorer enables it for all of the organization's accounts; in other words, all member accounts in the organization are also granted access. You can't grant or deny access individually.
Cost Explorer and IAM Users
An AWS account owner who is not using consolidated billing has full access to all Billing and Cost Management information, including Cost Explorer. After you enable Cost Explorer, you should interact with Cost Explorer as an IAM user. If you have permission to view the Billing and Cost Management console, you can use Cost Explorer.
An IAM user must be granted explicit permission to view pages in the Billing and Cost Management console. With the appropriate permissions, the IAM user can view costs for the AWS account to which the IAM user belongs. For the policy that grants the necessary permissions to an IAM user, see Controlling Access.
For more details, read this.
I have a similar setup to the one described in the question "I can't enable MFA for Oracle Identity Cloud Service user", but a different problem: I cannot enable Multi-Factor Authentication for any user.
On the Oracle Cloud Infrastructure (OCI) console, I do see the "Enable Multi-Factor Authentication" option in one of the accounts under Identity >> Users >> User Details. After following all the steps, including scanning the barcode and entering the verification code, when I click the Verify button on OCI I get this error: "Multi-factor authentication can only be enabled by the user."
What does this mean? I thought I was the user! I've searched online for this error and looked at the documentation, but found no clue.
MFA can only be enabled for your own account.
Tenancy administrators have no way to enable MFA for other users in OCI, but administrators can disable MFA for other users.
"Your own account" means the one you used to log in.
For example, in the snapshot below from OCI, I am trying to enable MFA for another user; I am the administrator for this tenancy.
I am trying to hit an Amazon SP-API endpoint. Whenever I hit the API endpoint, I get a 403 Forbidden error with the message "Access to requested resource is denied.":
{
  "errors": [
    {
      "message": "Access to requested resource is denied.",
      "code": "Unauthorized",
      "details": ""
    }
  ]
}
Steps I did:
I created the IAM user, IAM role, and IAM policies.
I created the seller account and the developer account as well.
I am using the role ARN in my Seller Central app.
I am using the access token to sign the API request (see the sketch below).
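For reference, this is roughly how I obtain the role credentials before signing; it is a minimal sketch, and the role ARN is a placeholder:
import boto3

# Hypothetical role ARN registered in the Seller Central app (placeholder).
ROLE_ARN = 'arn:aws:iam::123456789012:role/SellingPartnerApiRole'

sts = boto3.client('sts')

# The temporary credentials returned here (not the IAM user's long-term
# keys) are what should SigV4-sign the SP-API request, alongside the LWA
# access token sent in the x-amz-access-token header.
creds = sts.assume_role(
    RoleArn=ROLE_ARN,
    RoleSessionName='sp-api-session',
)['Credentials']

print(creds['AccessKeyId'], creds['Expiration'])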
I'm using Amazon Elastic Beanstalk with a VPC and I want to have multiple environments (workers) with different IP addresses. I don't need them to be static, I would actually prefer them to change regularly if possible.
Is there a way to have multiple environments with dynamic external IP addresses?
It's hard to understand the use case for wanting to change the instance IP addresses of an Elastic Beanstalk environment. The fundamental advantage of a managed service like Elastic Beanstalk is abstraction over the underlying architecture of a deployment. You are given a CNAME to access the environment's (your application's) API, and you shouldn't rely on the internal IP addresses or load balancer URLs for anything, as they can be added or removed by the Beanstalk service at will.
That being said, there is a way to get changing IPs for the underlying instances.
An Elastic Beanstalk Rebuild Environment destroys the existing resources, including EC2 instances, and creates new ones, so your instances end up with new IP addresses. This works provided that a scheduled downtime of a few minutes (depending on your resources) is not a problem for your use case.
You can use one of the following two ways to schedule an environment rebuild.
Solution 1:
You can schedule the environment rebuild using a simple Lambda function.
import boto3

# Elastic Beanstalk environment IDs to rebuild (placeholder ID).
ENV_IDS = ['e-awsenvidid']

client = boto3.client('elasticbeanstalk')

def handler(event, context):
    for env_id in ENV_IDS:
        try:
            client.rebuild_environment(EnvironmentId=env_id.strip())
            print('Rebuilding environment %s' % env_id)
        except Exception as e:
            print('Failed to rebuild environment %s: %s' % (env_id, e))
In order to do this you will have to create an IAM role with the required permissions.
You can find a comprehensive guide in this AWS Guide.
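The function also needs something to trigger it on a schedule. One way, sketched below with placeholder names and ARNs, is a scheduled CloudWatch Events (EventBridge) rule that invokes the Lambda on the 1st of every month:
import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

# Placeholder function ARN and rule name.
FUNCTION_ARN = 'arn:aws:lambda:us-east-1:123456789012:function:rebuild-env'
RULE_NAME = 'monthly-eb-rebuild'

# Fire at 12:00 AM UTC on the 1st of every month.
events.put_rule(Name=RULE_NAME, ScheduleExpression='cron(0 0 1 * ? *)')
events.put_targets(
    Rule=RULE_NAME,
    Targets=[{'Id': 'rebuild-env-target', 'Arn': FUNCTION_ARN}],
)

# Allow the rule to invoke the function.
lambda_client.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId='allow-monthly-rebuild',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
)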
Solution 2:
You can use a cron job to rebuild the environment using aws-cli. You can follow the steps below to achieve this.
Create an EC2 instance
Create an IAM role with permission to rebuild the environment
The following example policy would work:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "elasticbeanstalk:RebuildEnvironment"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
Attach the IAM role to the EC2 instance
Add a cron job using the command crontab -e
The following example cron job rebuilds the environment at 12:00 AM on the 1st of every month:
0 0 1 * * aws elasticbeanstalk rebuild-environment --environment-name my-environment-name
Save the cronjob and exit.
Rebuilding the environment unnecessarily is not recommended, but as of now there is no explicit way to achieve your particular requirement. Hope this helps!
Further Reading:
https://docs.aws.amazon.com/cli/latest/reference/elasticbeanstalk/rebuild-environment.html
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environment-management-rebuild.html
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html
https://awspolicygen.s3.amazonaws.com/policygen.html
Is it possible to give a Google Compute Engine instance permission to delete itself without also giving it permission to delete other instances?
That is, I'd like instance name ABC to be able to run:
gcloud compute instances delete ABC
using its own name, ABC, but no other name.
From the delete instance API docs, to delete any instance in the project you have to have:
The compute.instances.delete permission
One of the following OAuth scopes: https://www.googleapis.com/auth/compute or https://www.googleapis.com/auth/cloud-platform
This seems to mean that you either have permission to delete any instance in the project or none at all.
No. It is the service account assigned to the instance that runs the gcloud command, not the instance itself.
Permissions are granted by setting policies that grant roles to a user, group, or service account as a member of your project.
Example: the Compute Instance Admin role can create, modify, and delete virtual machine instances; that means all the instances in your project. You cannot restrict it to a specific instance.
The gcloud command below can be applied to the ABC instance or to any other instance in your project:
gcloud compute instances delete ABC --zone <zone>
The permission compute.instances.delete is in these roles:
Compute Admin
Compute Instance Admin
Project Editor
Project Owner
You can also create a custom role that has mixed permissions and assign it to a service account, but you need to be sure that you set every permission required for the action.
Scopes select the type and level of API access that you grant to the VM.
By default: read-only access to Storage and Service Management, write access to Stackdriver Logging and Monitoring, and read/write access to Service Control.
But you can select which Cloud APIs the VM (that is, its service account) can access.
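To illustrate the point: if the instance's service account holds compute.instances.delete, the instance can delete itself by reading its own name and zone from the metadata server, but only because it could equally delete any other instance in the project. A sketch, assuming the google-api-python-client library and a placeholder project ID:
import requests
import googleapiclient.discovery

METADATA = 'http://metadata.google.internal/computeMetadata/v1/instance/'
HEADERS = {'Metadata-Flavor': 'Google'}

# The instance discovers its own name and zone from the metadata server.
name = requests.get(METADATA + 'name', headers=HEADERS).text
zone = requests.get(METADATA + 'zone', headers=HEADERS).text.split('/')[-1]

PROJECT = 'my-project-id'  # placeholder project ID

# This works only because the service account can delete ANY instance
# in the project; the permission cannot be scoped to a single VM.
compute = googleapiclient.discovery.build('compute', 'v1')
compute.instances().delete(project=PROJECT, zone=zone, instance=name).execute()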
With a successful subscription in Orion Context Broker, a listening accumulator server fails to receive any data under any circumstances we have found to test.
We are using an Ubuntu virtual machine that has a nested virtual machine with FIWARE Orion in it. Having subscribed to the Orion Context Broker, confirmed the subscription was successful by checking the database, and confirmed that data is successfully updated, we find that the accumulator server still fails to respond. We are unable to tell whether this is a failure to send by Orion or a failure to receive by the accumulator, and we are unsure how to check and continue, so we humbly beg the wisdom of the Stack Overflow community.
We have run the accumulator server both on a virtual machine on the same PC and on another PC running non-VM Ubuntu. The script we are using to subscribe is presented below:
Orion VM
{
  "duration": "P1M",
  "entities": [
    {
      "type": "Thing",
      "id": "Sensor_GV_01",
      "isPattern": "false"
    }
  ],
  "throttling": "PT1S",
  "reference": "http://10.224.24.236:1028/accumulate",
  "attributes": [
    "temperature",
    "pressure"
  ]
}
EDIT 1
Upon using GET /v2/subscriptions we see that the subscription is present, but it gives only basic info and no timesSent values. It is pretty much the same thing we receive when we query MongoDB directly.
Also, we forgot to mention: the Orion version we are using is 1.9.0.
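One way to see whether Orion is attempting (and failing) the notifications is to read the subscription's notification metadata over NGSIv2; a sketch with a placeholder Orion address is shown below. The exact fields exposed (status, timesSent, lastFailure) can vary by Orion version:
import requests

ORION = 'http://localhost:1026'  # placeholder Orion address; adjust host/port

# List all subscriptions and print their delivery metadata.
for sub in requests.get(ORION + '/v2/subscriptions').json():
    notification = sub.get('notification', {})
    print(sub['id'], sub.get('status'),
          'timesSent=%s' % notification.get('timesSent'),
          'lastFailure=%s' % notification.get('lastFailure'))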