AWS API re-deployment using ansible - json

I have an existing API in my AWS account. Now I am trying to use Ansible to redeploy the API after introducing resource policy changes.
According to AWS, I need to use the CLI command below to redeploy the API:
- name: deploy API
  command: >
    aws apigateway update-stage --region us-east-1 \
        --rest-api-id <rest-api-id> \
        --stage-name 'stage' \
        --patch-operations op='replace',path='/deploymentId',value='<deployment-id>'
Above, the 'deploymentId' from the previous deployment will be different after every deployment, which is why I am trying to capture it as a variable so the redeployment steps can be automated.
I can get the previous deployment information using the CLI below:
- name: Get deployment information
  command: >
    aws apigateway get-deployments \
        --rest-api-id 123454ne \
        --region us-east-1
  register: deployment_info
And the output looks like this:
deployment_info.stdout_lines:
- '{'
- ' "items": ['
- ' {'
- ' "id": "abcd",'
- ' "createdDate": 1228509116'
- ' }'
- ' ]'
- '}'
I was using deployment_info.items.id as the deploymentId but couldn't make it work. Now I am stuck on what the Ansible syntax should be to extract the id from this output and use it as the deploymentId in the deployment command.
How can I use this id as the deploymentId in the deployment command?
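A minimal sketch of one way to do this: parse the registered stdout with the from_json filter and index into the result. The rest-api-id and stage name below are the placeholders from the question; ['items'] is used instead of .items because .items collides with the dict method in Jinja2, and the sort is only needed if more than one deployment is returned:
- name: Get deployment information
  command: >
    aws apigateway get-deployments --rest-api-id 123454ne --region us-east-1
  register: deployment_info

- name: Extract the deployment id from the JSON output
  set_fact:
    deployment_id: "{{ ((deployment_info.stdout | from_json)['items'] | sort(attribute='createdDate', reverse=True) | first).id }}"

- name: deploy API
  command: >
    aws apigateway update-stage --region us-east-1
    --rest-api-id 123454ne
    --stage-name 'stage'
    --patch-operations op='replace',path='/deploymentId',value='{{ deployment_id }}'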

I created a small Ansible module which you might find useful:
#!/usr/bin/python
# Creates a new deployment for an API GW stage
# See https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-deployments.html
# Based on https://github.com/ansible-collections/community.aws/blob/main/plugins/modules/aws_api_gateway.py

# TODO needed?
# from __future__ import absolute_import, division, print_function
# __metaclass__ = type

import json
import traceback

try:
    import botocore
except ImportError:
    pass  # Handled by AnsibleAWSModule

from ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict
from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
from ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry


def main():
    argument_spec = dict(
        api_id=dict(type='str', required=True),
        stage=dict(type='str', required=True),
        deploy_desc=dict(type='str', required=False, default='')
    )

    module = AnsibleAWSModule(
        argument_spec=argument_spec,
        supports_check_mode=True
    )

    api_id = module.params.get('api_id')
    stage = module.params.get('stage')
    client = module.client('apigateway')

    # Update stage if not in check_mode
    deploy_response = None
    changed = False
    if not module.check_mode:
        try:
            deploy_response = create_deployment(client, api_id, **module.params)
            changed = True
        except (botocore.exceptions.ClientError, botocore.exceptions.EndpointConnectionError) as e:
            msg = "Updating api {0}, stage {1}".format(api_id, stage)
            module.fail_json_aws(e, msg)

    exit_args = {"changed": changed, "api_deployment_response": deploy_response}
    module.exit_json(**exit_args)
retry_params = {"retries": 10, "delay": 10, "catch_extra_error_codes": ['TooManyRequestsException']}


@AWSRetry.jittered_backoff(**retry_params)
def create_deployment(client, rest_api_id, **params):
    result = client.create_deployment(
        restApiId=rest_api_id,
        stageName=params.get('stage'),
        description=params.get('deploy_desc')
    )
    return result


if __name__ == '__main__':
    main()
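A minimal sketch of how the module could be used from a playbook, assuming the file above is saved as library/api_gw_deploy.py next to the playbook (the module file name and the stage/description values are assumptions; AWS credentials are picked up from the environment by boto3 as usual):
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Create a new API Gateway deployment for the stage
      api_gw_deploy:
        api_id: "123454ne"
        stage: "stage"
        deploy_desc: "Redeploy after resource policy change"
      register: deploy_result

    - debug:
        var: deploy_result.api_deployment_response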

Context Function error while using Jinja2 when trying to build templates

I'm following ChristopherGS's tutorial for FastAPI, but I'm stuck on part 6 because I believe his syntax may already be deprecated.
I get AttributeError: module 'jinja2' has no attribute 'contextfunction' at the end, when the program stops. How do I solve this? I've been stuck here for 3 days.
This is my code:
from fastapi import FastAPI, APIRouter, Request, HTTPException
from fastapi.templating import Jinja2Templates
from typing import Optional, Any
from pathlib import Path

from app.schemas import RecipeSearchResults, Recipe, RecipeCreate
from app.recipe_data import RECIPES

BASE_PATH = Path(__file__).resolve().parent
TEMPLATES = Jinja2Templates(directory=str(BASE_PATH / "templates"))

app = FastAPI(title="Recipe API", openapi_url="/openapi.json")
api_router = APIRouter()


# Updated to serve a Jinja2 template
# https://www.starlette.io/templates/
# https://jinja.palletsprojects.com/en/3.0.x/templates/#synopsis
@api_router.get("/", status_code=200)
def root(request: Request) -> dict:
    """
    Root GET
    """
    return TEMPLATES.TemplateResponse(
        "index.html",
        {"request": request, "recipes": RECIPES},
    )


@api_router.get("/recipe/{recipe_id}", status_code=200, response_model=Recipe)
def fetch_recipe(*, recipe_id: int) -> Any:
    """
    Fetch a single recipe by ID
    """
    result = [recipe for recipe in RECIPES if recipe["id"] == recipe_id]
    if not result:
        # the exception is raised, not returned - you will get a validation
        # error otherwise.
        raise HTTPException(
            status_code=404, detail=f"Recipe with ID {recipe_id} not found"
        )
    return result[0]


if __name__ == "__main__":
    # Use this for debugging purposes only
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8001, log_level="debug")
It could be due to a version mismatch between Jinja2 and Starlette (FastAPI).
I faced a similar issue with the latest FastAPI Docker image (python3.9). It was resolved by installing an older version of Jinja2.
Try downgrading jinja2 if you are using jinja2 > 3.0.3:
pip install jinja2==3.0.3
Another option would be to upgrade fastapi/starlette.
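For instance, a pinned requirements.txt along these lines (the fastapi version below is an assumption, check the changelogs; the point is either keeping jinja2 below 3.1, where contextfunction was removed, or moving to a fastapi/starlette release that uses pass_context instead):
jinja2==3.0.3        # stay below 3.1, which removed contextfunction
# or, instead of pinning jinja2, upgrade the framework:
# fastapi>=0.71.0    # assumed version; any release whose starlette uses pass_context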
Ref:
FastAPI Jinja2Templates - Error while running initialising templates?
https://github.com/pallets/jinja/blob/1b714c7e82c73575d1dba48f560db07fe9a5cb74/CHANGES.rst#version-310

JINA#4428[C]:Can not fetch the URL of Hubble from `api.jina.ai`

I was trying out the Semantic Wikipedia Search from jina-ai.
This is the error I am getting after running python app.py -t index.
app.py is used to index the data.
JINA#4489[C]:Can not fetch the URL of Hubble from api.jina.ai
HubIO#4489[E]:Error while pulling jinahub+docker://TransformerTorchEncoder:
JSONDecodeError('Expecting value: line 1 column 1 (char 0)')
This is app.py:
__copyright__ = "Copyright (c) 2021 Jina AI Limited. All rights reserved."
__license__ = "Apache-2.0"

import os
import sys

import click
import random
from jina import Flow, Document, DocumentArray
from jina.logging.predefined import default_logger as logger

MAX_DOCS = int(os.environ.get('JINA_MAX_DOCS', 10000))


def config(dataset: str):
    if dataset == 'toy':
        os.environ['JINA_DATA_FILE'] = os.environ.get('JINA_DATA_FILE', 'data/toy-input.txt')
    elif dataset == 'full':
        os.environ['JINA_DATA_FILE'] = os.environ.get('JINA_DATA_FILE', 'data/input.txt')
    os.environ['JINA_PORT'] = os.environ.get('JINA_PORT', str(45678))
    cur_dir = os.path.dirname(os.path.abspath(__file__))
    os.environ.setdefault('JINA_WORKSPACE', os.path.join(cur_dir, 'workspace'))
    os.environ.setdefault('JINA_WORKSPACE_MOUNT',
                          f'{os.environ.get("JINA_WORKSPACE")}:/workspace/workspace')


def print_topk(resp, sentence):
    for doc in resp.data.docs:
        print(f"\n\n\nTa-Dah🔮, here's what we found for: {sentence}")
        for idx, match in enumerate(doc.matches):
            score = match.scores['cosine'].value
            print(f'> {idx:>2d}({score:.2f}). {match.text}')


def input_generator(num_docs: int, file_path: str):
    with open(file_path) as file:
        lines = file.readlines()
        num_lines = len(lines)
        random.shuffle(lines)
        for i in range(min(num_docs, num_lines)):
            yield Document(text=lines[i])


def index(num_docs):
    flow = Flow().load_config('flows/flow.yml')
    data_path = os.path.join(os.path.dirname(__file__), os.environ.get('JINA_DATA_FILE', None))
    with flow:
        flow.post(on='/index', inputs=input_generator(num_docs, data_path),
                  show_progress=True)


def query(top_k):
    flow = Flow().load_config('flows/flow.yml')
    with flow:
        text = input('Please type a sentence: ')
        doc = Document(content=text)
        result = flow.post(on='/search', inputs=DocumentArray([doc]),
                           parameters={'top_k': top_k},
                           line_format='text',
                           return_results=True,
                           )
        print_topk(result[0], text)


@click.command()
@click.option(
    '--task',
    '-t',
    type=click.Choice(['index', 'query'], case_sensitive=False),
)
@click.option('--num_docs', '-n', default=MAX_DOCS)
@click.option('--top_k', '-k', default=5)
@click.option('--dataset', '-d', type=click.Choice(['toy', 'full']), default='toy')
def main(task, num_docs, top_k, dataset):
    config(dataset)
    if task == 'index':
        if os.path.exists(os.environ.get("JINA_WORKSPACE")):
            logger.error(f'\n +---------------------------------------------------------------------------------+ \
                 \n | 🤖🤖🤖 | \
                 \n | The directory {os.environ.get("JINA_WORKSPACE")} already exists. Please remove it before indexing again. | \
                 \n | 🤖🤖🤖 | \
                 \n +---------------------------------------------------------------------------------+')
            sys.exit(1)
        index(num_docs)
    elif task == 'query':
        query(top_k)


if __name__ == '__main__':
    main()
This is flow.yml:
version: '1'                  # This is the yml file version
with:                         # Additional arguments for the flow
  workspace: $JINA_WORKSPACE  # Workspace folder path
  port_expose: $JINA_PORT     # Network Port for the flow
executors:                    # Now, define the executors that are run on this flow
  - name: transformer         # This executor computes an embedding based on the input text documents
    uses: 'jinahub+docker://TransformerTorchEncoder'  # We use a Transformer Torch Encoder from the hub as a docker container
  - name: indexer             # Now, index the text documents with the embeddings
    uses: 'jinahub://SimpleIndexer'                   # We use the SimpleIndexer for this purpose
And when I try to execute python app.py -t index, this is the error:
JINA#3803[C]:Can not fetch the URL of Hubble from `api.jina.ai` HubIO#3803[E]:Error while pulling jinahub+docker://TransformerTorchEncoder: JSONDecodeError('Expecting value: line 1 column 1 (char 0)')
I think this just happened because the API was down. It should work now.
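If it happens again, a quick way to check whether the endpoint is reachable from your machine at all (just a plain HTTP reachability check, not an official Jina diagnostic) is:
curl -I https://api.jina.ai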

Zabbix Web scenarios variables random number or other function

I need to post a variable with a random number value. How can I generate a random variable in a web scenario? Can I run a script or macro to generate a random value for a scenario or step?
There is no native way to do it; as you guessed, you can make it work with a macro and a custom script.
You can define a {$RANDOM} host macro and use it in the web scenario step as a post field value.
Then you have to change it periodically with a crontabbed script; a Python sample:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Set a random macro to a value.
Provide user from the commandline or from Env var support:
# export ZABBIX_SERVER='https://your_zabbix_host/zabbix/'
# export ZABBIX_USERNAME='admin'
# export ZABBIX_PASSWORD='secretPassword'
$ ./setRandomMacro.py -u admin -p zabbix -Z http://yourzabbix -H yourHost -M '{$RANDOM}'
Connecting to http://yourzabbix
Host yourHost (Id: ----)
{$RANDOM}: current value "17" -> new value "356"
$ ./setRandomMacro.py -u admin -p zabbix -Z http://yourzabbix -H yourHost -M '{$RANDOM}'
Connecting to http://yourzabbix
Host yourHost (Id: ----)
{$RANDOM}: current value "356" -> new value "72"
"""
from zabbix.api import ZabbixAPI
import json
import argparse
import getopt
import sys
import os
import random


# Class for argparse env variable support
class EnvDefault(argparse.Action):
    # From https://stackoverflow.com/questions/10551117/
    def __init__(self, envvar, required=True, default=None, **kwargs):
        if not default and envvar:
            if envvar in os.environ:
                default = os.environ[envvar]
        if required and default:
            required = False
        super(EnvDefault, self).__init__(default=default, required=required,
                                         **kwargs)

    def __call__(self, parser, namespace, values, option_string=None):
        setattr(namespace, self.dest, values)


def jsonPrint(jsonUgly):
    print(json.dumps(jsonUgly, indent=4, separators=(',', ': ')))


def ArgumentParser():
    parser = argparse.ArgumentParser()
    parser.add_argument('-Z',
                        required=True,
                        action=EnvDefault,
                        envvar='ZABBIX_SERVER',
                        help="Specify the zabbix server URL ie: http://yourserver/zabbix/ (ZABBIX_SERVER environment variable)",
                        metavar='zabbix-server-url')
    parser.add_argument('-u',
                        required=True,
                        action=EnvDefault,
                        envvar='ZABBIX_USERNAME',
                        help="Specify the zabbix username (ZABBIX_USERNAME environment variable)",
                        metavar='Username')
    parser.add_argument('-p',
                        required=True,
                        action=EnvDefault,
                        envvar='ZABBIX_PASSWORD',
                        help="Specify the zabbix password (ZABBIX_PASSWORD environment variable)",
                        metavar='Password')
    parser.add_argument('-H',
                        required=True,
                        help="Hostname",
                        metavar='hostname')
    parser.add_argument('-M',
                        required=True,
                        help="Macro to set",
                        metavar='macro')
    return parser.parse_args()


def main(argv):
    # Parse arguments and build work variables
    args = ArgumentParser()
    zabbixURL = args.Z
    zabbixUsername = args.u
    zabbixPassword = args.p
    hostName = args.H
    macroName = args.M

    # API Connect
    print('Connecting to {}'.format(zabbixURL))
    zapi = ZabbixAPI(url=zabbixURL, user=zabbixUsername,
                     password=zabbixPassword)
    hostObj = zapi.host.get(search={'host': hostName}, output='hostids')
    print('Host {} (Id: {})'.format(hostName, hostObj[0]['hostid']))
    currentMacro = zapi.usermacro.get(
        hostids=hostObj[0]['hostid'], filter={'macro': macroName})
    if (currentMacro):
        newMacroValue = random.randint(1, 1001)
        print('{}: current value "{}" -> new value "{}"'.format(macroName,
                                                                currentMacro[0]['value'], newMacroValue))
        zapi.usermacro.update(
            hostmacroid=currentMacro[0]['hostmacroid'], value=newMacroValue)
    else:
        print('No {} macro found on host {}'.format(macroName, hostName))


if __name__ == "__main__":
    main(sys.argv[1:])
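A crontab entry along these lines could then drive it, assuming the script is installed as /usr/local/bin/setRandomMacro.py (the path, credentials, and the 5-minute interval are assumptions):
*/5 * * * * /usr/local/bin/setRandomMacro.py -Z http://yourzabbix -u admin -p zabbix -H yourHost -M '{$RANDOM}' >/dev/null 2>&1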

Python Google Drive API file-delete() method broken

I cannot get the Google Drive files().delete() method to work via the Python API.
It is acting broken.
I offer some info about my setup:
Ubuntu 16.04
Python 3.5.2 (default, Nov 12 2018, 13:43:14)
google-api-python-client (1.7.9)
google-auth (1.6.3)
google-auth-httplib2 (0.0.3)
google-auth-oauthlib (0.3.0)
Below, I list a Python script which can reproduce the bug:
"""
googdrive17.py
This script should delete files named 'hello.txt'
Ref:
https://developers.google.com/drive/api/v3/quickstart/python
https://developers.google.com/drive/api/v3/reference/files
Demo (Ubuntu):
sudo apt install python3-pip
sudo pip3 install --upgrade google-api-python-client
sudo pip3 install --upgrade google-auth-httplib2
sudo pip3 install --upgrade google-auth-oauthlib
python3 googdrive17.py
"""
import pickle
import os.path
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
# I s.declare a very permissive scope (for training only):
SCOPES = ['https://www.googleapis.com/auth/drive']
creds = None
# The file token.pickle stores the user's access and refresh tokens, and is
# created automatically when the authorization flow completes for the first time.
if os.path.exists('token.pickle'):
with open('token.pickle', 'rb') as fh:
creds = pickle.load(fh)
# If there are no (valid) credentials available, let the user log in.
if not creds or not creds.valid:
if creds and creds.expired and creds.refresh_token:
creds.refresh(Request())
else:
flow = InstalledAppFlow.from_client_secrets_file(
'credentials.json', SCOPES)
creds = flow.run_local_server()
# Save the credentials for the next run
with open('token.pickle', 'wb') as token:
pickle.dump(creds, token)
# I s.create a file so I can upload it:
with open('/tmp/hello.txt','w') as fh:
fh.write("hello world\n")
# From my laptop, I s.upload a file named hello.txt:
drive_service = build('drive', 'v3', credentials=creds)
file_metadata = {'name': 'hello.txt'}
media = MediaFileUpload('/tmp/hello.txt', mimetype='text/plain')
create_response = drive_service.files().create(body=file_metadata,
media_body=media,
fields='id').execute()
file_id = create_response.get('id')
print('new /tmp/hello.txt file_id:')
print(file_id)
# Q: With googleapiclient, how to filter files list()-response?
# A1: https://developers.google.com/drive/api/v3/reference/files/list
# A2: https://developers.google.com/drive/api/v3/search-files
list_response = drive_service.files().list(
orderBy = "createdTime desc",
q = "name='hello.txt'",
pageSize = 22,
fields = "files(id, name)"
).execute()
items = list_response.get('files', [])
if items:
for item in items:
print('I will try to delete this file:')
print(u'{0} ({1})'.format(item['name'], item['id']))
del_response = drive_service.files().delete(fileId=item['id'])
print('del_response.body:')
print( del_response.body)
print('I will try to emptyTrash:')
trash_response = drive_service.files().emptyTrash()
print('trash_response.body:')
print( trash_response.body)
else:
print('hello.txt not found in your google-drive account.')
When I run the script I see output similar to that listed below:
$ python3 googdrive17.py
new /tmp/hello.txt file_id:
1m8nKOfIeB0E5t60F_-9bKwIJds8PSvYY
I will try to delete this file:
hello.txt (1m8nKOfIeB0E5t60F_-9bKwIJds8PSvYY)
del_response.body:
None
I will try to delete this file:
hello.txt (1Ow4fcUBgEYUy3ezYScDKlLSMbp-hyOLT)
del_response.body:
None
I will try to delete this file:
hello.txt (1TiUrLgQdY1Cb9w0UWHjnmj7HZBaFsKcp)
del_response.body:
None
I will try to emptyTrash:
trash_response.body:
None
$
I see that two of the API calls work well:
files.list()
files.create()
Two calls appear broken:
files.delete()
files.emptyTrash()
Perhaps, though, I call them incorrectly?
How about this modification?
First, the official documentation for the Files: delete method and the Files: emptyTrash method says the following:
If successful, this method returns an empty response body.
Because of this, when the file is deleted and the trash is cleared, the returned del_response and trash_response are empty.
Modified script:
From your question, I understand that files.list() and files.create() work. So I would like to propose modifications for files.delete() and files.emptyTrash(). Please modify your script as follows.
From:
for item in items:
    print('I will try to delete this file:')
    print(u'{0} ({1})'.format(item['name'], item['id']))
    del_response = drive_service.files().delete(fileId=item['id'])
    print('del_response.body:')
    print(del_response.body)
print('I will try to emptyTrash:')
trash_response = drive_service.files().emptyTrash()
print('trash_response.body:')
print(trash_response.body)
To:
for item in items:
    print('I will try to delete this file:')
    print(u'{0} ({1})'.format(item['name'], item['id']))
    del_response = drive_service.files().delete(fileId=item['id']).execute()  # Modified
    print('del_response.body:')
    print(del_response)
print('I will try to emptyTrash:')
trash_response = drive_service.files().emptyTrash().execute()  # Modified
print('trash_response.body:')
print(trash_response)
execute() was added for drive_service.files().delete() and drive_service.files().emptyTrash().
References:
Files: delete
Files: emptyTrash
If this was not the result you want, I apologize.

Using python argparse arguments as variable values within a json file

I've googled this quite a bit and am unable to find helpful insight. Basically, I need to take the user input from my argparse arguments in a Python script (as shown below) and plug those values into a JSON file (packerfile.json) located in the same working directory. I have been experimenting with the subprocess, invoke, and plumbum libraries without being able to "find the shoe that fits".
From the following code, I have removed everything except the arguments, to clean it up:
#!/usr/bin/python
import os, sys, subprocess
import argparse
import json
from invoke import run
import packer

parser = argparse.ArgumentParser()
parser._positionals.title = 'Positional arguments'
parser._optionals.title = 'Optional arguments'
parser.add_argument("--access_key",
                    required=False,
                    action='store',
                    default=os.environ['AWS_ACCESS_KEY_ID'],
                    help="AWS access key id")
parser.add_argument("--secret_key",
                    required=False,
                    action='store',
                    default=os.environ['AWS_SECRET_ACCESS_KEY'],
                    help="AWS secret access key")
parser.add_argument("--region",
                    required=False,
                    action='store',
                    help="AWS region")
parser.add_argument("--guest_os_type",
                    required=True,
                    action='store',
                    help="Operating system to install on guest machine")
parser.add_argument("--ami_id",
                    required=False,
                    help="AMI ID for image base")
parser.add_argument("--instance_type",
                    required=False,
                    action='store',
                    help="Type of instance determines overall performance (e.g. t2.medium)")
parser.add_argument("--ssh_key_path",
                    required=False,
                    action='store',
                    default=os.environ['HOME'] + '/.ssh',
                    help="SSH key path (e.g. ~/.ssh)")
parser.add_argument("--ssh_key_name",
                    required=True,
                    action='store',
                    help="SSH key name (e.g. mykey)")
args = parser.parse_args()

print(vars(args))
JSON example code:
{
  "variables": {
    "aws_access_key": "{{ env `AWS_ACCESS_KEY_ID` }}",
    "aws_secret_key": "{{ env `AWS_SECRET_ACCESS_KEY` }}",
    "magic_reference_date": "{{ isotime \"2006-01-02\" }}",
    "aws_region": "{{ env 'AWS_REGION' }}",
    "aws_ami_id": "ami-036affea69a1101c9",
    "aws_instance_type": "t2.medium",
    "image_version" : "0.1.0",
    "guest_os_type": "centos7",
    "home": "{{ env `HOME` }}"
  },
So, the user input for --region as shown in the Python script should get plugged into the value for aws_region in the JSON file.
I am aware of how to print the value of args. The full command that I am providing to the script is python packager.py --region us-west-2 --guest_os_type rhel7 --ssh_key_name test_key, and the printed result is {'access_key': 'REDACTED', 'secret_key': 'REDACTED', 'region': 'us-west-2', 'guest_os_type': 'rhel7', 'ami_id': None, 'instance_type': None, 'ssh_key_path': '/Users/REDACTED/.ssh', 'ssh_key_name': 'test_key'}. What I need is to import those values into the packerfile.json variables list, preferably in a way that I can reuse (so it mustn't overwrite the file).
Note: I have also been experimenting with using Python to export local environment variables and then having the JSON file pick them up, but that doesn't really seem like a viable solution.
I think the best solution might be to take all of these arguments, export them to their own JSON file called variables.json, and import those variables from JSON (variables.json) into JSON (packerfile.json) as a separate process. I could still use guidance here, though. :)
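For what it's worth, a minimal sketch of that approach, appended after the parser.add_argument calls in the script above. It assumes a hand-written mapping from the argparse names to the packer variable names used in packerfile.json (the mapping below only covers the variables shown in the snippets above); packerfile.json itself is never modified, only a separate variables.json is written and passed to packer with -var-file:
# map argparse attribute names to the variable names packerfile.json expects
name_map = {
    'region': 'aws_region',
    'ami_id': 'aws_ami_id',
    'instance_type': 'aws_instance_type',
    'guest_os_type': 'guest_os_type',
}

args = parser.parse_args()
# keep only the arguments the user actually provided
variables = {packer_name: getattr(args, arg_name)
             for arg_name, packer_name in name_map.items()
             if getattr(args, arg_name) is not None}

with open('variables.json', 'w') as fh:
    json.dump(variables, fh, indent=4)

# then: packer build -var-file=variables.json packerfile.json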
You might use the __dict__ attribute of the Namespace object returned by ArgumentParser.parse_args(). Like so:
import json

parsed = parser.parse_args()
with open('packerfile.json', 'w') as f:
    json.dump(parsed.__dict__, f)
If required, you could use add_argument(dest='attrib_name') to customise attribute names.
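For example (a sketch; the dest names are chosen to match the packer variable names in the question's packerfile.json):
parser.add_argument("--region", dest="aws_region", required=False,
                    action='store', help="AWS region")
parser.add_argument("--instance_type", dest="aws_instance_type", required=False,
                    action='store', help="Type of instance (e.g. t2.medium)")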
I was actually able to come up with a pretty simple solution.
args = parser.parse_args()
json_formatted = json.dumps(vars(args), indent=4)
print(json_formatted)
s.call("echo '%s' > variables.json && packer build -var-file=variables.json packerfile.json" % json_formatted, shell=True)
The arguments are captured under the variable args and dumped with json.dumps, while vars makes sure the arguments are dumped together with their key names. I currently have to run my code with >> vars.json, but I'll insert logic to have Python handle that.
Note: s == subprocess in s.call
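A variant of the same idea without shell=True, reusing the json and subprocess imports from the script above: write variables.json directly from Python, then invoke packer as a list of arguments (packer must be on the PATH; the file names follow the question):
with open('variables.json', 'w') as fh:
    json.dump(vars(args), fh, indent=4)

subprocess.call(["packer", "build", "-var-file=variables.json", "packerfile.json"])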