How can I get the addresses of all the EC2 instances tied to a LoadBalancer using Boto?

I'd like to get all the instances that are on a LoadBalancer with boto. How can I accomplish that?
This is what I've got so far:
import boto
from boto.regioninfo import RegionInfo
from boto import ec2

ACCESS_KEY_ID = '*****'
SECRET_ACCESS_KEY = '********'

reg = RegionInfo(
    name='ap-southeast-1',
    endpoint='elasticloadbalancing.ap-southeast-1.amazonaws.com')

conn = boto.connect_elb(
    aws_access_key_id=ACCESS_KEY_ID,
    aws_secret_access_key=SECRET_ACCESS_KEY,
    region=reg)

ec2_connection = boto.ec2.connection.EC2Connection(
    aws_access_key_id=ACCESS_KEY_ID,
    aws_secret_access_key=SECRET_ACCESS_KEY,
    region=reg)

instances = [instance.id for instance in conn.get_all_load_balancers()[3].instances]
# instances is now [u'i-62448d36'], so far so good.
ec2_connection.get_all_instances(instances)
Which ends with:
<ErrorResponse xmlns="http://webservices.amazon.com/AWSFault/2005-15-09">
<Error>
<Type>Sender</Type>
<Code>InvalidAction</Code>
<Message>Could not find operation DescribeInstances for version 2012-03-01</Message>
</Error>
<RequestId>c6aab70d-b22b-11e1-a990-a747bbde9f63</RequestId>
</ErrorResponse>
I'm using boto 2.4.1.

This is what I ended up with (the original error came from building the EC2 connection with the ELB region endpoint; the EC2 connection needs an EC2 region):
import boto
from boto import regioninfo
from boto import ec2

ACCESS_KEY_ID = '***********'
SECRET_ACCESS_KEY = '***********'

elb_region = boto.regioninfo.RegionInfo(
    name='ap-southeast-1',
    endpoint='elasticloadbalancing.ap-southeast-1.amazonaws.com')

elb_connection = boto.connect_elb(
    aws_access_key_id=ACCESS_KEY_ID,
    aws_secret_access_key=SECRET_ACCESS_KEY,
    region=elb_region)

ec2_region = ec2.get_region(
    aws_access_key_id=ACCESS_KEY_ID,
    aws_secret_access_key=SECRET_ACCESS_KEY,
    region_name='ap-southeast-1')

ec2_connection = boto.ec2.connection.EC2Connection(
    aws_access_key_id=ACCESS_KEY_ID,
    aws_secret_access_key=SECRET_ACCESS_KEY,
    region=ec2_region)

load_balancer = elb_connection.get_all_load_balancers(load_balancer_names=['MediaPopClients'])[0]
instance_ids = [instance.id for instance in load_balancer.instances]
reservations = ec2_connection.get_all_instances(instance_ids)
instance_addresses = [i.public_dns_name for r in reservations for i in r.instances]
Gives:
[u'ec2-46-137-194-58.ap-southeast-1.compute.amazonaws.com']

I think something like this should work:
>>> import boto
>>> elb = boto.connect_elb()
>>> load_balancer = elb.get_all_load_balancers(['my_lb_name'])[0]
>>> for instance_info in load_balancer.instances:
... print instance_info.id
The instances attribute of the LoadBalancer object is a list of InstanceInfo objects. These are not actual Instance objects, but each one has the instance ID, so you can then look up the Instance object associated with it.
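For example, combining the two steps (a minimal sketch against boto 2; the load balancer name is a placeholder, and connect_elb/connect_ec2 pick up credentials from your environment or boto config):
import boto

elb = boto.connect_elb()
ec2 = boto.connect_ec2()

lb = elb.get_all_load_balancers(['my_lb_name'])[0]
instance_ids = [info.id for info in lb.instances]    # InstanceInfo -> instance ids
reservations = ec2.get_all_instances(instance_ids)   # look up the real Instance objects
instances = [i for r in reservations for i in r.instances]
print([i.public_dns_name for i in instances])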

Related

Inserting json file into postgres with SQLAlchemy flask

I am trying to insert a JSON file into a Postgres DB using Flask/SQLAlchemy; however, I'm running into:
TypeError: 'farmers' is an invalid keyword argument for Farmer
This is my upload script:
import os
import flask
from flask_sqlalchemy import SQLAlchemy
from flask import Flask, jsonify, send_from_directory
from sqlalchemy.dialects.postgresql import JSON

APP = Flask(__name__)
APP.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://postgres:admin@localhost:5432/flaskwebapp'
db = SQLAlchemy(APP)

class Farmers(db.Model):
    Farmers = db.Column(db.Integer, primary_key=True)
    W = db.Column(db.Integer)
    Z = db.Column(db.Integer)
    J1_I = db.Column(db.Integer)

def insert_data():
    f1 = Farmers(farmers={"farmers":[{"W":1000000,"Z":22758,"J1_I":0.66},{"W":3500000,"Z":21374,"J1_I":2.69},{"W":2500000,"Z":14321,"J1_I":0.76},{"W":2500000,"Z":14321,"J1_I":0.76}]})
    db.session.add(f1)
    db.session.commit()
    print('Data inserted to DB!')

insert_data()
This is the JSON file in question; however, I don't think this is the issue, as I'm able to read it in cmd:
{
  "farmers": [
    {
      "W": 1000000,
      "Z": 22758,
      "J1_I": 0.66
    },
    {
      "W": 3500000,
      "Z": 21374,
      "J1_I": 2.69
    },
    {
      "W": 2500000,
      "Z": 14321,
      "J1_I": 0.76
    },
    {
      "W": 2500000,
      "Z": 14321,
      "J1_I": 0.76
    }
  ]
}
Any ideas why I'm not able to upload the data?
The model constructor won't accept a list; you need to pass each dictionary individually:
farmer_data = {"farmers":[{"W":1000000,"Z":22758,"J1_I":0.66},{"W":3500000,"Z":21374,"J1_I":2.69},{"W":2500000,"Z":14321,"J1_I":0.76},{"W":2500000,"Z":14321,"J1_I":0.76}]}
farmers = [Farmer(**data) for data in farmer_data["farmers"]]
db.session.add_all(farmers)
db.session.commit()
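If the goal is to read those rows straight from the JSON file on disk rather than hard-coding the dictionary, a minimal sketch could look like this (assuming it lives in the same script as the Farmers model and db object from the question, and that the file is saved as farmers.json; both names are assumptions):
import json

def insert_from_file(path='farmers.json'):  # 'farmers.json' is an assumed file name
    with open(path) as f:
        payload = json.load(f)
    # one row per dictionary in the "farmers" list
    rows = [Farmers(**entry) for entry in payload['farmers']]
    db.session.add_all(rows)
    db.session.commit()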

AWS API re-deployment using ansible

I have an existing API in my AWS account. Now I am trying to use Ansible to redeploy the API after introducing any resource policy changes.
According to AWS, I need to use the CLI command below to redeploy the API:
- name: deploy API
  command: >
    aws apigateway update-stage --region us-east-1
    --rest-api-id <rest-api-id>
    --stage-name 'stage'
    --patch-operations op='replace',path='/deploymentId',value='<deployment-id>'
Above, the 'deploymentId' from the previous deployment will be different after every deployment, which is why I'm trying to capture it as a variable so the redeployment steps can be automated.
I can get the previous deployment information using the CLI below:
- name: Get deployment information
  command: >
    aws apigateway get-deployments
    --rest-api-id 123454ne
    --region us-east-1
  register: deployment_info
And the output looks like this:
deployment_info.stdout_lines:
- '{'
- ' "items": ['
- ' {'
- ' "id": "abcd",'
- ' "createdDate": 1228509116'
- ' }'
- ' ]'
- '}'
I was using deployment_info.items.id as the deploymentId but couldn't make it work. I'm now stuck on how to extract the id from this output and use it as the deploymentId in the deployment command.
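One way to get the id out of the registered result is to parse stdout with the from_json filter and store it with set_fact; the command output is plain text, so it has to go through from_json first, and bracket notation for ['items'] avoids the clash with the dict .items() method that tripped up deployment_info.items.id. A minimal sketch, assuming the first entry in items is the deployment you want to reuse:
- name: Extract the deployment id from the CLI output
  set_fact:
    deployment_id: "{{ (deployment_info.stdout | from_json)['items'][0]['id'] }}"

- name: Redeploy the API by pointing the stage at that deployment
  command: >
    aws apigateway update-stage --region us-east-1
    --rest-api-id 123454ne
    --stage-name 'stage'
    --patch-operations op='replace',path='/deploymentId',value='{{ deployment_id }}'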
I created a small Ansible module which you might find useful:
#!/usr/bin/python
# Creates a new deployment for an API GW stage
# See https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-deployments.html
# Based on https://github.com/ansible-collections/community.aws/blob/main/plugins/modules/aws_api_gateway.py
# TODO needed?
# from __future__ import absolute_import, division, print_function
# __metaclass__ = type
import json
import traceback

try:
    import botocore
except ImportError:
    pass  # Handled by AnsibleAWSModule

from ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict
from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
from ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry


def main():
    argument_spec = dict(
        api_id=dict(type='str', required=True),
        stage=dict(type='str', required=True),
        deploy_desc=dict(type='str', required=False, default='')
    )
    module = AnsibleAWSModule(
        argument_spec=argument_spec,
        supports_check_mode=True
    )
    api_id = module.params.get('api_id')
    stage = module.params.get('stage')
    client = module.client('apigateway')

    # Update stage if not in check_mode
    deploy_response = None
    changed = False
    if not module.check_mode:
        try:
            deploy_response = create_deployment(client, api_id, **module.params)
            changed = True
        except (botocore.exceptions.ClientError, botocore.exceptions.EndpointConnectionError) as e:
            msg = "Updating api {0}, stage {1}".format(api_id, stage)
            module.fail_json_aws(e, msg)

    exit_args = {"changed": changed, "api_deployment_response": deploy_response}
    module.exit_json(**exit_args)


retry_params = {"retries": 10, "delay": 10, "catch_extra_error_codes": ['TooManyRequestsException']}


#AWSRetry.jittered_backoff(**retry_params)
def create_deployment(client, rest_api_id, **params):
    result = client.create_deployment(
        restApiId=rest_api_id,
        stageName=params.get('stage'),
        description=params.get('deploy_desc')
    )
    return result


if __name__ == '__main__':
    main()
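For reference, a task using a module like this might look roughly as follows (the module would need to be saved somewhere Ansible can find it, e.g. a library/ directory next to the playbook; the module name apigw_deploy and the values shown are hypothetical):
- name: Create a new deployment for the stage
  apigw_deploy:
    api_id: 123454ne
    stage: stage
    deploy_desc: "Redeploy after resource policy update"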

This XML file does not appear to have any style information associated with it when try to download the file in Google Cloud Storage

I have a Cloud Function that copies a file from one Standard bucket to a Nearline bucket. I also tried to save the file by opening it as a dataframe and writing it out as a Dask dataframe. Both worked, but every time I try to download the file through the GUI I get the XML error message stated below. Does anyone know why this is happening? How can I prevent it from happening?
This XML file does not appear to have any style information associated with it
import base64
import json
from google.cloud import storage
import dask.dataframe as dd
import pandas as pd


def hello_pubsub(event, context):
    """Triggered from a message on a Cloud Pub/Sub topic.
    Args:
        event (dict): Event payload.
        context (google.cloud.functions.Context): Metadata for the event.
    """
    print('here')
    print(event)
    pubsub_message = base64.b64decode(event['data']).decode('utf-8')
    payload = json.loads(pubsub_message)
    bucket_name = payload['data']['bucket_name']
    print(bucket_name)
    blob_name = payload['data']['file_name']
    print(blob_name)
    destination_bucket_name = 'infobip-email-uploaded'
    #destination_blob_name = blob_name[0:10]+'.csv'
    destination_blob_name = 'ddf-*.csv'
    df = pd.read_excel('gs://'+bucket_name+'/'+blob_name, sheet_name='Data', engine='xlrd')
    print('excel has been read')
    ddf = dd.from_pandas(df, npartitions=1, sort=True)
    print('dataframe has been transformed into dask')
    path = 'gs://'+destination_bucket_name+'/'+destination_blob_name
    print('path is')
    print(path)
    ddf.to_csv(path, index=False, sep=',', header=False)
    destination_blob_name = blob_name[0:10]+'.xlsx'
    copy_blob(bucket_name, blob_name, destination_bucket_name, destination_blob_name)
    print('File has been successfully copied')
    delete_blob(bucket_name, blob_name)
    print('File has been successfully deleted')
    return '200'


def copy_blob(bucket_name, blob_name, destination_bucket_name, destination_blob_name):
    """Copies a blob from one bucket to another with a new name."""
    # bucket_name = "your-bucket-name"
    # blob_name = "your-object-name"
    # destination_bucket_name = "destination-bucket-name"
    # destination_blob_name = "destination-object-name"
    storage_client = storage.Client()
    source_bucket = storage_client.bucket(bucket_name)
    source_blob = source_bucket.blob(blob_name)
    destination_bucket = storage_client.bucket(destination_bucket_name)
    blob_copy = source_bucket.copy_blob(
        source_blob, destination_bucket, destination_blob_name
    )
    print(
        "Blob {} in bucket {} copied to blob {} in bucket {}.".format(
            source_blob.name,
            source_bucket.name,
            blob_copy.name,
            destination_bucket.name,
        )
    )


def delete_blob(bucket_name, blob_name):
    """Deletes a blob from the bucket."""
    # bucket_name = "your-bucket-name"
    # blob_name = "your-object-name"
    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)
    blob = bucket.blob(blob_name)
    blob.delete()
    print("Blob {} deleted.".format(blob_name))

Modifying Python 3 code to deal with RSA key fingerprint

The following code fails to connect to a Cisco switch because of this prompt:
RSA key fingerprint is 3e:b7:7b:55:6b:a3:xx:xx:xx:xx
Are you sure you want to continue connecting (yes/no)? yes
#!/usr/bin/env python
from __future__ import print_function
from netmiko import ConnectHandler
import sys
import time
import select
import paramiko
import re

fd = open(r'output_twinax.log', 'w')  # Where you want the file to save to.
old_stdout = sys.stdout
sys.stdout = fd

platform = 'cisco_ios'
username = 'username'  # edit to reflect
password = 'password'  # edit to reflect

# a simple list of IP addresses you want to connect to, each one on a new line
ip_add_file = open(r'IP-list', 'r')

for host in ip_add_file:
    host = host.strip()
    device = ConnectHandler(device_type=platform, ip=host, username=username, password=password)
    find_hostname = device.find_prompt()
    hostname = find_hostname.replace(">", "")
    print(hostname)
    output = device.send_command('terminal length 0')
    output = device.send_command('enable')  # Editable to be whatever is needed
    output = device.send_command('sh int status | i SFP')
    print(output)

fd.close()
Please help me modify it to account for the RSA key prompt. Thank you very much.
Did you try the use_keys keyword argument?
#!/usr/bin/env python
from __future__ import print_function
from netmiko import ConnectHandler
import sys
import time
import select
import paramiko
import re

fd = open(r'output_twinax.log', 'w')  # Where you want the file to save to.
old_stdout = sys.stdout
sys.stdout = fd

platform = 'cisco_ios'
username = 'username'  # edit to reflect
password = 'password'  # edit to reflect

# List of IP addresses in each line
ip_add_file = open(r'IP-list', 'r')
key_file = "./rsa_key.txt"

for host in ip_add_file:
    host = host.strip()
    device = ConnectHandler(device_type=platform,
                            ip=host,
                            username=username,
                            key_file=key_file,
                            use_keys=True)
    find_hostname = device.find_prompt()
    hostname = find_hostname.replace(">", "")
    print(hostname)
    output = device.send_command('terminal length 0')
    output = device.send_command('enable')
    output = device.send_command('sh int status | i SFP')
    print(output)

fd.close()
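If the interactive prompt in the question is about accepting the switch's host key rather than key-based authentication, another option (using plain paramiko, which the script already imports) is to set a missing-host-key policy. This is only a minimal sketch with a placeholder host and credentials, and auto-accepting unknown host keys weakens protection against man-in-the-middle attacks:
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()
# Automatically accept unknown host keys instead of prompting (use with care)
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('192.0.2.10', username='username', password='password', look_for_keys=False)

stdin, stdout, stderr = client.exec_command('show version')
print(stdout.read().decode())
client.close()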

AWS: Boto3: AssumeRole example which includes role usage

I'm trying to use AssumeRole in such a way that I'm traversing multiple accounts and retrieving assets for those accounts. I've made it to this point:
import boto3

sts_client = boto3.client('sts')
assumedRoleObject = sts_client.assume_role(
    RoleArn="arn:aws:iam::account-of-role-to-assume:role/name-of-role",
    RoleSessionName="AssumeRoleSession1")
Great, I have the assumedRoleObject. But now I want to use that to list things like ELBs or something that isn't a built-in low-level resource.
How does one go about doing that? If I may ask, please code out a full example, so that everyone can benefit.
Here's a code snippet from the official AWS documentation where an S3 resource is created for listing all S3 buckets. boto3 resources or clients for other services can be built in a similar fashion.
# create an STS client object that represents a live connection to the
# STS service
sts_client = boto3.client('sts')

# Call the assume_role method of the STSConnection object and pass the role
# ARN and a role session name.
assumed_role_object = sts_client.assume_role(
    RoleArn="arn:aws:iam::account-of-role-to-assume:role/name-of-role",
    RoleSessionName="AssumeRoleSession1"
)

# From the response that contains the assumed role, get the temporary
# credentials that can be used to make subsequent API calls
credentials = assumed_role_object['Credentials']

# Use the temporary credentials that AssumeRole returns to make a
# connection to Amazon S3
s3_resource = boto3.resource(
    's3',
    aws_access_key_id=credentials['AccessKeyId'],
    aws_secret_access_key=credentials['SecretAccessKey'],
    aws_session_token=credentials['SessionToken'],
)

# Use the Amazon S3 resource object that is now configured with the
# credentials to access your S3 buckets.
for bucket in s3_resource.buckets.all():
    print(bucket.name)
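Since the question specifically mentions ELBs, the same temporary credentials can be handed to any other client in exactly the same way; a minimal sketch (the elbv2 client, region and printed field are just examples):
# Reuse the temporary credentials for an ELB (v2) client instead of S3
elb_client = boto3.client(
    'elbv2',
    region_name='us-east-1',
    aws_access_key_id=credentials['AccessKeyId'],
    aws_secret_access_key=credentials['SecretAccessKey'],
    aws_session_token=credentials['SessionToken'],
)
for lb in elb_client.describe_load_balancers()['LoadBalancers']:
    print(lb['LoadBalancerName'])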
To get a session with an assumed role:
import botocore
import boto3
import datetime
from dateutil.tz import tzlocal

assume_role_cache: dict = {}


def assumed_role_session(role_arn: str, base_session: botocore.session.Session = None):
    base_session = base_session or boto3.session.Session()._session
    fetcher = botocore.credentials.AssumeRoleCredentialFetcher(
        client_creator=base_session.create_client,
        source_credentials=base_session.get_credentials(),
        role_arn=role_arn,
        extra_args={
            # 'RoleSessionName': None  # set this if you want something non-default
        }
    )
    creds = botocore.credentials.DeferredRefreshableCredentials(
        method='assume-role',
        refresh_using=fetcher.fetch_credentials,
        time_fetcher=lambda: datetime.datetime.now(tzlocal())
    )
    botocore_session = botocore.session.Session()
    botocore_session._credentials = creds
    return boto3.Session(botocore_session=botocore_session)


# usage:
session = assumed_role_session('arn:aws:iam::ACCOUNTID:role/ROLE_NAME')
ec2 = session.client('ec2')  # ... etc.
The resulting session's credentials will be automatically refreshed when required, which is quite nice.
Note: my previous answer was outright wrong but I can't delete it, so I've replaced it with a better and working answer.
You can assume a role using an STS token, like this:
from boto3.session import Session


class Boto3STSService(object):
    def __init__(self, arn):
        sess = Session(aws_access_key_id=ARN_ACCESS_KEY,
                       aws_secret_access_key=ARN_SECRET_KEY)
        sts_connection = sess.client('sts')
        assume_role_object = sts_connection.assume_role(
            RoleArn=arn, RoleSessionName=ARN_ROLE_SESSION_NAME,
            DurationSeconds=3600)
        self.credentials = assume_role_object['Credentials']
This will give you a temporary access key and secret key, along with a session token. With these temporary credentials, you can access any service. For example, if you want to access ELB, you can use the code below:
self.tmp_credentials = Boto3STSService(arn).credentials

def get_boto3_session(self):
    tmp_access_key = self.tmp_credentials['AccessKeyId']
    tmp_secret_key = self.tmp_credentials['SecretAccessKey']
    security_token = self.tmp_credentials['SessionToken']
    boto3_session = Session(
        aws_access_key_id=tmp_access_key,
        aws_secret_access_key=tmp_secret_key, aws_session_token=security_token
    )
    return boto3_session

def get_elb_boto3_connection(self, region):
    sess = self.get_boto3_session()
    elb_conn = sess.client(service_name='elb', region_name=region)
    return elb_conn
With reference to the solution by @jarrad, which is not working as of Feb 2021, here is a solution that does not use STS explicitly:
import boto3
import botocore.session
from botocore.credentials import AssumeRoleCredentialFetcher, DeferredRefreshableCredentials


def get_boto3_session(assume_role_arn=None):
    session = boto3.Session(aws_access_key_id="abc", aws_secret_access_key="def")
    if not assume_role_arn:
        return session

    fetcher = AssumeRoleCredentialFetcher(
        client_creator=_get_client_creator(session),
        source_credentials=session.get_credentials(),
        role_arn=assume_role_arn,
    )
    botocore_session = botocore.session.Session()
    botocore_session._credentials = DeferredRefreshableCredentials(
        method='assume-role',
        refresh_using=fetcher.fetch_credentials
    )
    return boto3.Session(botocore_session=botocore_session)


def _get_client_creator(session):
    def client_creator(service_name, **kwargs):
        return session.client(service_name, **kwargs)
    return client_creator
The function can be called as follows:
ec2_client = get_boto3_session(assume_role_arn='my_role_arn').client('ec2', region_name='us-east-1')
If you want a functional implementation, this is what I settled on:
from typing import Optional

import boto3


def filter_none_values(kwargs: dict) -> dict:
    """Returns a new dictionary excluding items where value was None"""
    return {k: v for k, v in kwargs.items() if v is not None}


def assume_session(
    role_session_name: str,
    role_arn: str,
    duration_seconds: Optional[int] = None,
    region_name: Optional[str] = None,
) -> boto3.Session:
    """
    Returns a session with the given name and role.
    If not specified, duration will be set by AWS, probably at 1 hour.
    If not specified, region will be left unset.
    Region can be overridden by each client or resource spawned from this session.
    """
    assume_role_kwargs = filter_none_values(
        {
            "RoleSessionName": role_session_name,
            "RoleArn": role_arn,
            "DurationSeconds": duration_seconds,
        }
    )
    credentials = boto3.client("sts").assume_role(**assume_role_kwargs)["Credentials"]
    create_session_kwargs = filter_none_values(
        {
            "aws_access_key_id": credentials["AccessKeyId"],
            "aws_secret_access_key": credentials["SecretAccessKey"],
            "aws_session_token": credentials["SessionToken"],
            "region_name": region_name,
        }
    )
    return boto3.Session(**create_session_kwargs)


def main() -> None:
    session = assume_session(
        "MyCustomSessionName",
        "arn:aws:iam::XXXXXXXXXXXX:role/TheRoleIWantToAssume",
        region_name="us-east-1",
    )
    client = session.client(service_name="ec2")
    print(client.describe_key_pairs())
import json
import boto3

roleARN = 'arn:aws:iam::account-of-role-to-assume:role/name-of-role'

client = boto3.client('sts')
response = client.assume_role(RoleArn=roleARN,
                              RoleSessionName='RoleSessionName',
                              DurationSeconds=900)

dynamodb_client = boto3.client('dynamodb', region_name='us-east-1',
                               aws_access_key_id=response['Credentials']['AccessKeyId'],
                               aws_secret_access_key=response['Credentials']['SecretAccessKey'],
                               aws_session_token=response['Credentials']['SessionToken'])

response = dynamodb_client.get_item(
    Key={
        'key1': {
            'S': '1',
        },
        'key2': {
            'S': '2',
        },
    },
    TableName='TestTable')
print(response)
#!/usr/bin/env python3
import boto3

sts_client = boto3.client('sts')
assumed_role = sts_client.assume_role(RoleArn="arn:aws:iam::123456789012:role/example_role",
                                      RoleSessionName="AssumeRoleSession1",
                                      DurationSeconds=1800)

session = boto3.Session(
    aws_access_key_id=assumed_role['Credentials']['AccessKeyId'],
    aws_secret_access_key=assumed_role['Credentials']['SecretAccessKey'],
    aws_session_token=assumed_role['Credentials']['SessionToken'],
    region_name='us-west-1'
)

# now we make use of the role to retrieve a parameter from SSM
client = session.client('ssm')
response = client.get_parameter(
    Name='/this/is/a/path/parameter',
    WithDecryption=True
)
print(response)
Assuming that 1) the ~/.aws/config or ~/.aws/credentials file is populated with each of the roles that you wish to assume and that 2) the default role has AssumeRole defined in its IAM policy for each of those roles, then you can simply (in pseudo-code) do the following and not have to fuss with STS:
import boto3

# get all of the roles from the AWS config/credentials file using a config file parser
profiles = get_profiles()

for profile in profiles:

    # this is only used to fetch the available regions
    initial_session = boto3.Session(profile_name=profile)

    # get the regions
    regions = boto3.Session.get_available_regions('ec2')

    # cycle through the regions, setting up session, resource and client objects
    for region in regions:
        boto3_session = boto3.Session(profile_name=profile, region_name=region)
        boto3_resource = boto3_session.resource(service_name='s3', region_name=region)
        boto3_client = boto3_session.client(service_name='s3', region_name=region)
        [ do something interesting with your session/resource/client here ]
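The get_profiles() call above is left as pseudo-code; a minimal sketch using the standard library's configparser (assuming the profiles live in the default ~/.aws/credentials location) could be:
import configparser
import os

def get_profiles(path=os.path.expanduser('~/.aws/credentials')):
    # each section header in the credentials file is a profile name
    parser = configparser.ConfigParser()
    parser.read(path)
    return parser.sections()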
Credential Setup (boto3 - Shared Credentials File)
Assume Role Setup (AWS)
After a few days of searching, this is the simplest solution I have found. It is explained here, but it does not have a usage example.
import boto3

for profile in boto3.Session().available_profiles:
    boto3.DEFAULT_SESSION = boto3.session.Session(profile_name=profile)
    s3 = boto3.resource('s3')
    for bucket in s3.buckets.all():
        print(bucket)
This will switch the default role you will be using. To avoid making the profile the default, just do not assign it to boto3.DEFAULT_SESSION; instead, do the following:
testing_profile = boto3.session.Session(profile_name='mainTesting')
s3 = testing_profile.resource('s3')
for bucket in s3.buckets.all():
    print(bucket)
It is important to note that the .aws credentials file needs to be set up in a specific way:
[default]
aws_access_key_id = default_access_id
aws_secret_access_key = default_access_key
[main]
aws_access_key_id = main_profile_access_id
aws_secret_access_key = main_profile_access_key
[mainTesting]
source_profile = main
role_arn = Testing role arn
mfa_serial = mfa_arn_for_main_role
[mainProduction]
source_profile = main
role_arn = Production role arn
mfa_serial = mfa_arn_for_main_role
I don't know why, but the mfa_serial key has to be on the role profiles for this to work, instead of on the source account, which would make more sense.
Here's the code snippet I used:
sts_client = boto3.client('sts')
assumed_role_object = sts_client.assume_role(
    RoleArn=<arn of the role to assume>,
    RoleSessionName="<role session name>"
)
print(assumed_role_object)
credentials = assumed_role_object['Credentials']
session = Session(
    aws_access_key_id=credentials['AccessKeyId'],
    aws_secret_access_key=credentials['SecretAccessKey'],
    aws_session_token=credentials['SessionToken']
)
self.s3 = session.client('s3')