Pass JSON into AWS CLI: Invalid JSON

I have a script that adds some custom metric data, and it works great if I write the metric data to a file and then read that in like:
aws cloudwatch put-metric-data --namespace "ec2" --metric-data file://metric2.json
But if I have the script just print the JSON and call it like this:
aws cloudwatch put-metric-data --namespace "ec2" --metric-data $(python aws-extra-metrics.py)
I get the following error:
Error parsing parameter '--metric-data': Invalid JSON:
Is there any way around this? I would prefer not to write it to a file every time, as this will be run from a cron job.
We are running Ubuntu.

Is the Python script generating the JSON file? The difference is between passing a file name and passing the file content.
You could try:
python aws-extra-metrics.py > metric2.json && aws cloudwatch put-metric-data --namespace "ec2" --metric-data file://metric2.json
or
aws cloudwatch put-metric-data --namespace "ec2" --metric-data "$(python aws-extra-metrics.py)"
Note the quotes around the command substitution: without them, the shell splits the JSON output on whitespace and the CLI receives only a fragment, which produces the Invalid JSON error.
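Besides quoting the substitution, another way to sidestep the shell's word splitting is to have the script emit compact JSON with no spaces at all. A sketch of that idea (the metric name and values below are made up, not taken from aws-extra-metrics.py):

```python
import json

# Hypothetical metric data, standing in for what aws-extra-metrics.py builds
metric_data = [
    {
        "MetricName": "DiskUsage",
        "Value": 42.0,
        "Unit": "Percent",
    }
]

# separators=(",", ":") removes the spaces json.dumps inserts by default,
# so the printed JSON contains no whitespace for $(...) to split on
print(json.dumps(metric_data, separators=(",", ":")))
```

Quoting the substitution is still the safer habit; compact output just makes the unquoted form less fragile.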

Related

Start execution of existing SageMaker pipeline using Python SDK

The SageMaker documentation explains how to run a pipeline, but it assumes I have just defined it and have the pipeline object available.
How can I run an existing pipeline with Python SDK?
I know how to read a pipeline with the AWS CLI (i.e. aws sagemaker describe-pipeline --pipeline-name foo). Can the same be done with Python code? Then I would have a pipeline object ready to use.
If the Pipeline has been created, you can use the Python Boto3 SDK to make the StartPipelineExecution API call.
response = client.start_pipeline_execution(
    PipelineName='string',
    PipelineExecutionDisplayName='string',
    PipelineParameters=[
        {
            'Name': 'string',
            'Value': 'string'
        },
    ],
    PipelineExecutionDescription='string',
    ClientRequestToken='string',
    ParallelismConfiguration={
        'MaxParallelExecutionSteps': 123
    }
)
If you prefer AWS CLI, the most basic call is:
aws sagemaker start-pipeline-execution --pipeline-name <name-of-the-pipeline>

Amazon AWS CLI not allowing valid JSON in payload parameter

I am getting an error when I try and invoke a lambda function from the AWS CLI. I am using version 2 of the CLI.
I understand that I should pass the --payload argument as a string containing a JSON object.
aws lambda invoke --function-name testsms --invocation-type Event --payload '{"key": "test"}' response.json
I get the following error:
Invalid base64: "{"key": "test"}"
I have tried all sorts of variants of JSON escaping characters, etc. I have also tried the file://test.json option, but I receive the same error.
As @MCI said, AWS CLI v2 defaults to base64 input. For your case to work, simply add the --cli-binary-format raw-in-base64-out parameter to your command, so it'd be:
aws lambda invoke --function-name testsms \
--invocation-type Event \
--cli-binary-format raw-in-base64-out \
--payload '{"key": "test"}' response.json
It looks like awscli v2 requires some parameters to be base64-encoded.
By default, the AWS CLI version 2 now passes all binary input and binary output parameters as base64-encoded strings. A parameter that requires binary input has its type specified as blob (binary large object) in the documentation.
The payload parameter to lambda invoke is one of these blob types that must be base64-encoded.
--payload (blob)
The JSON that you want to provide to your Lambda function as input.
One solution is to use openssl base64 to encode your payload.
echo '{"key": "test"}' > clear_payload
openssl base64 -out encoded_payload -in clear_payload
aws lambda invoke --function-name testsms --invocation-type Event --payload file://~/encoded_payload response.json
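The same encoding can be done without openssl. For example, a small Python sketch (the payload value is just the illustration from the question):

```python
import base64

payload = b'{"key": "test"}'

# base64-encode the raw JSON bytes; the result is what --payload expects
# when the CLI is in its default base64 input mode
encoded = base64.b64encode(payload).decode("ascii")
print(encoded)

# decoding recovers the original JSON bytes exactly
assert base64.b64decode(encoded) == payload
```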
Firstly, a string is valid JSON.
In my case I had this problem
$ aws --profile diegosasw lambda invoke --function-name lambda-dotnet-function --payload "Just Checking If Everything is OK" out
An error occurred (InvalidRequestContentException) when calling the Invoke operation: Could not parse request body into json: Could not parse payload into json: Unrecognized token 'Just': was expecting ('true', 'false' or 'null')
at [Source: (byte[])"Just Checking If Everything is OK"; line: 1, column: 6]
and it turns out the problem was due to the AWS CLI trying to convert it to JSON. Escaping the double quotes did the trick
$ aws --profile diegosasw lambda invoke --function-name lambda-dotnet-function --payload "\"Just Checking If Everything is OK\"" out
{
"StatusCode": 200,
"ExecutedVersion": "$LATEST"
}
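The escaped-quotes trick works because a double-quoted string is itself a valid JSON value. A quick way to see this, and to produce the escaping programmatically, is Python's json module:

```python
import json

message = "Just Checking If Everything is OK"

# json.dumps wraps the string in double quotes, turning it into
# a JSON string value, which is exactly what --payload "\"...\"" passes
payload = json.dumps(message)
print(payload)
```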
On Windows, I tried the following, which worked for me:
aws lambda invoke --function-name testsms --invocation-type Event --cli-binary-format raw-in-base64-out --payload {\"key\": \"test\"} response.json
Note that --cli-binary-format raw-in-base64-out was added to the command, and each " in the payload was escaped to \".
This solution worked for me and I find it simpler than having to remember/check the man page for the correct flags each time.
aws lambda invoke --function-name my_func --payload $(echo "{\"foo\":\"bar\"}" | base64) out
On my windows PowerShell running LocalStack I had to use:
--payload '{\"key\": \"test\"}' response.json

Filtering with Azure CLI and JMESPath for network vnet peering list

I'm using the Azure CLI to get a list of vnet peerings: az network vnet peering list. This returns json in the following structure:
[
{
...
"name": "prefix1-name",
...
},
{
...
"name": "prefix2-name",
...
}
]
I am trying to filter the results by names starting with some prefix. I have tried various combinations of the following:
az network vnet peering list --resource-group my-rg --vnet-name my-vnet --query "[?starts_with(name,'prefix1-')].{name}"
However this always fails with a message like ].{name} was unexpected at this time.
What am I missing?
Try to use az network vnet peering list --resource-group my-rg --vnet-name my-vnet --query "[?starts_with(name,'prefix1-')].name". You do not need the {} around name. This works on my side.
Edit
For the error message, I can reproduce it with these Azure CLI commands in PowerShell locally on a Windows 10 machine, but it does not appear in my local Linux Bash shell, nor in PowerShell or Bash in the Azure Cloud Shell.
From my testing, it looks like a scenario specific to local PowerShell. When filtering with JMESPath, a starts_with or contains query works as long as there is a space somewhere around the function call, like these:
--query "[?starts_with(name, 'vnet')].name"
--query "[?starts_with (name,'vnet')].name"
--query "[?starts_with(name,'vnet') ].name"
--query "[?starts_with(name,'vnet')] .name"
but this, with no extra space anywhere, does not work:
--query "[?starts_with(name,'vnet')].name"
This is a Windows PowerShell issue: https://github.com/Azure/azure-cli/blob/dev/doc/use_cli_effectively.md#argument-parsing-issue-in-powershell. To workaround it, insert --% after az to force PowerShell to treat the remaining characters in the line as a literal.
The answer was apparently putting a space between the ) and ], so az network vnet peering list --resource-group my-rg --vnet-name my-vnet --query "[?starts_with(name,'prefix1-') ].name" worked for me. No idea why, and I have not found any documentation that mentions anything like this. If I remove that space I get the ].name was unexpected at this time message.
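For reference, the query [?starts_with(name,'prefix1-')].name is semantically the same as this plain Python filter (the peering names below are made up to match the question's structure):

```python
# Sample data shaped like `az network vnet peering list` output
peerings = [
    {"name": "prefix1-name"},
    {"name": "prefix2-name"},
]

# Equivalent of --query "[?starts_with(name,'prefix1-')].name":
# keep only objects whose name starts with the prefix, then project name
names = [p["name"] for p in peerings if p["name"].startswith("prefix1-")]
print(names)
```

This can help confirm that the JMESPath expression itself is right and that any failure is in how the shell hands it to the CLI.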

Bash causes invalid JSON for EC2 CLI request

I am using a bash script to dynamically create an EC2 CLI request. When the bash script is executed the AWS CLI returns Error parsing parameter '--launch-specification': Invalid JSON:, but if I copy the CLI string and submit it directly via the CLI, the CLI command works no problems.
Could / is the bash script generating some code characters that cause the CLI request to fail that are not present when I use copy/paste in the terminal?
BASH SCRIPT CODE
CMD01=("aws --profile ${myProf} --region ${myRegion} ec2 request-spot-instances --spot-price ${PRICE} --instance-count ${6} --type \"one-time\" --launch-specification \"{\\\"ImageId\\\":\\\"${1}\\\",\\\"KeyName\\\":\\\"${2}\\\",\\\"InstanceType\\\":\\\"${!5}\\\",\\\"IamInstanceProfile\\\":{\\\"Arn\\\":\\\"${16}\\\"},\\\"Placement\\\":{\\\"AvailabilityZone\\\":\\\"${18}\\\",\\\"GroupName\\\":\\\"${11}\\\"},\\\"NetworkInterfaces\\\":[{\\\"DeviceIndex\\\":0,\\\"SubnetId\\\":\\\"${4}\\\",\\\"AssociatePublicIpAddress\\\":${17}}],\\\"UserData\\\":\\\"string\\\"}\" --dry-run")
echoed via
echo "$CMD01"
aws --profile myProfile --region eu-west-1 ec2 request-spot-instances --spot-price 0.004 --instance-count 1 --type "one-time" --launch-specification "{\"ImageId\":\"ami-9c7ad8eb\",\"KeyName\":\"myKey\",\"InstanceType\":\"t1.micro\",\"IamInstanceProfile\":{\"Arn\":\"arn:aws:iam::000000000000:instance-profile/myprofile\"},\"Placement\":{\"AvailabilityZone\":\"eu-west-1c\",\"GroupName\":\"myGroup\"},\"NetworkInterfaces\":[{\"DeviceIndex\":0,\"SubnetId\":\"subnet-xxxyyy\",\"AssociatePublicIpAddress\":true}],\"UserData\":\"string\"}" --dry-run
executed via ${CMD01[@]} > $logFile
generates error
Error parsing parameter '--launch-specification': Invalid JSON:
"{\"ImageId\":\"ami-9c7ad8eb\",\"KeyName\":\"myKey\",\"InstanceType\":\"t1.micro\",\"IamInstanceProfile\":{\"Arn\":\"arn:aws:iam::000000000000:instance-profile/myprofile\"},\"Placement\":{\"AvailabilityZone\":\"eu-west-1c\",\"GroupName\":\"myGroup\"},\"NetworkInterfaces\":[{\"DeviceIndex\":0,\"SubnetId\":\"subnet-xxxyyy\",\"AssociatePublicIpAddress\":true}],\"UserData\":\"string\"}"
now if I take the earlier echo echo "$CMD01" from the terminal and do a simple copy/paste, the CLI output
A client error (DryRunOperation) occurred when calling the RequestSpotInstances operation: Request would have succeeded, but DryRun flag is set.
So it seems the JSON is valid, but when executed from the bash script it is invalid. What am I doing wrong?
I think you have a bash error.
Try with:
CMD01=$(aws commands commands...)
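A more robust alternative to hand-escaping all those backslashed quotes is to build the launch specification with a JSON serializer and pass the result to the CLI as a single quoted argument. A sketch of that idea in Python (the field values below are placeholders mirroring the question, not real resources):

```python
import json

# Placeholder values standing in for the script's positional parameters
spec = {
    "ImageId": "ami-9c7ad8eb",
    "KeyName": "myKey",
    "InstanceType": "t1.micro",
    "Placement": {"AvailabilityZone": "eu-west-1c", "GroupName": "myGroup"},
    "NetworkInterfaces": [
        {"DeviceIndex": 0, "SubnetId": "subnet-xxxyyy",
         "AssociatePublicIpAddress": True}
    ],
}

# json.dumps guarantees syntactically valid JSON,
# with no manual backslash escaping to get wrong
print(json.dumps(spec))
```

The printed string can then be passed as the --launch-specification value; the shell never re-parses quotes embedded in a variable, which is why the hand-built string failed.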

How to capture JSON result from Azure CLI within NodeJS script

Is there a way to capture the JSON objects from the Azure NodeJS CLI from within a NodeJS script? I could do something like exec('azure vm list') and write a promise to process the deferred stdout result, or I could hijack the process.stream.write method. But looking at the CLI code, which is quite extensive, I thought there might be a way to pass a callback to the cli function, or some other option that might directly return the JSON result. I see you are using the winston logger module; perhaps there is a hook there that could be used.
azure vm list does have a --json option:
C:\>azure vm list -h
help: List Azure VMs
help:
help: Usage: vm list [options]
help:
help: Options:
help: -h, --help output usage information
help: -s, --subscription <id> use the subscription id
help: -d, --dns-name <name> only show VMs for this DNS name
help: -v, --verbose use verbose output
help: --json use json output
You can get the JSON result in the callback of an exec(...) call. Would this work for you?
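The capture-and-parse pattern the answer describes is language-agnostic; here is the same idea sketched in Python for illustration, with echo standing in for the real azure vm list --json call (run_cli_json is a hypothetical helper, not part of any CLI):

```python
import json
import subprocess

def run_cli_json(cmd):
    """Run a CLI command and parse its stdout as JSON."""
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return json.loads(result.stdout)

# `echo` stands in here for e.g. ["azure", "vm", "list", "--json"]
vms = run_cli_json(["echo", '[{"VMName": "vm1"}]'])
print(vms)
```

The key point is simply that the --json flag makes stdout machine-parseable, so exec plus a JSON parse is enough.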
Yes you can, check this gist: https://gist.github.com/4415326 and you'll see how to do it without exec. You basically override the logger hanging off the CLI.
As a side note I am about to publish a new module, azure-cli-buddy that will make it easy to call the CLI using this technique and to receive results in JSON.