aws cli lambda - Could not parse request body into json

I have created an AWS Lambda function in .NET Core and deployed it.
I have tried executing the function in the AWS console with a test case and it works, but I am not able to achieve the same with the CLI command:
aws lambda invoke --function-name "mylambda" --log-type Tail --payload file://D:/Files/lamdainputfile.json file://D:/Files/response.txt
I get the following error with the CLI command:
An error occurred (InvalidRequestContentException) when calling the Invoke operation: Could not parse request body into json: Unexpected character ((CTRL-CHAR, code 138)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')
at [Source: (byte[])"�zn�]t�zn�m�"; line: 1, column: 2]
I tried passing the JSON inline:
aws lambda invoke --function-name "mylambda" --log-type Tail --payload "{'input1':'100', 'input2':'200'}" file://D:/Files/response.txt
but it's not working either.
The Lambda function executes in the AWS console with a test case and gives the correct result. I have added the same input to a local JSON file and tried it with the CLI command.
JSON input:
{
    "input1": "100",
    "input2": "200"
}
EDIT:
After correcting the inline JSON I am getting an error for the output file:
Unknown options: file://D:/Files/response.txt
Is there any way to print the output in the CLI only?

The documentation has not been updated since CLI version 1. For AWS CLI version 2 you need to base64-encode the payload.
Mac:
payload=`echo '{"input1": 100, "input2": 200 }' | openssl base64`
aws lambda invoke --function-name myfunction --payload "$payload" SomeOutFile &
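On Linux the same idea works with the coreutils base64 tool (a minimal sketch; the function name and output file are placeholders):
payload=$(echo '{"input1": 100, "input2": 200}' | base64)
aws lambda invoke --function-name myfunction --payload "$payload" response.json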

Adding the option --cli-binary-format raw-in-base64-out will allow you to pass raw JSON in the invoke command.
aws lambda invoke \
--cli-binary-format raw-in-base64-out \
--function-name "mylambda" \
--payload '{"input1": "100", "input2": "200"}' \
D:/Files/response.txt
Note that the outfile positional argument is a plain path; it should not carry the file:// prefix (that prefix is only for reading parameter values from files, e.g. --payload file://...).
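For the question's EDIT: the invoke command always expects an outfile argument, but on Unix-like shells you can pass /dev/stdout as that argument to print the response directly in the terminal (a hedged workaround; the CLI's own status metadata also goes to stdout, so both appear together):
aws lambda invoke \
--cli-binary-format raw-in-base64-out \
--function-name "mylambda" \
--payload '{"input1": "100", "input2": "200"}' \
/dev/stdout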

Based on the AWS CLI invoke command options, --payload only accepts inline blob arguments (i.e. JSON). In other words, the --payload parameter cannot be used to read input from a file, so --payload file://D:/Files/lamdainputfile.json will not work.
In the example provided, what probably happens is that --payload is ignored, file://D:/Files/lamdainputfile.json is treated as <outfile>, and an error is raised for file://D:/Files/response.txt because it is an unexpected positional argument.
What is required is reading the contents of D:/Files/lamdainputfile.json with a separate command. How this is done depends on the type of shell used. Bash example:
aws lambda invoke --payload "$(cat /path/to/input.json)" ...
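Putting it together for this question (a hedged sketch: it assumes a Bash-compatible shell such as Git Bash on Windows, and the --cli-binary-format flag is only needed on AWS CLI v2, as in the answer above):
aws lambda invoke \
--function-name "mylambda" \
--cli-binary-format raw-in-base64-out \
--payload "$(cat D:/Files/lamdainputfile.json)" \
D:/Files/response.txt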
Original answer:
I don't know about the first case (--payload file:///...), however the second case is not valid JSON, as JSON requires strings to be double quoted. Try the following JSON:
{
    "input1": "100",
    "input2": "200"
}
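If the command is typed into Windows cmd (as the D: paths suggest), one hedged way to pass valid JSON inline is to escape the inner double quotes with backslashes (add --cli-binary-format raw-in-base64-out on AWS CLI v2, as shown above):
aws lambda invoke --function-name "mylambda" --log-type Tail --payload "{\"input1\": \"100\", \"input2\": \"200\"}" D:/Files/response.txt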

Related

aws ec2 create-launch-template-version passing variables into json array

I have 4 environment variables on my laptop: ami_id, instance_type, key_name and security_group_ids. I am trying to create a launch template version using these variables, but I do not know how to pass them into the JSON array properly:
aws ec2 create-launch-template-version --launch-template-id lt-xxx --launch-template-data '{"ImageId":"$ami_id", "InstanceType": "$instance_type", "KeyName": "$key_name", "SecurityGroupIds": ["$security_group_ids"]}'
An error occurred (InvalidAMIID.Malformed) when calling the CreateLaunchTemplateVersion operation: The image ID '$ami_id' is not valid. The expected format is ami-xxxxxxxx or ami-xxxxxxxxxxxxxxxxx.
Using a here-document allows you to feed some readable text into a variable while expanding the shell variables, like this:
#!/bin/sh
ami_id=ami1234
instance_type=t3.nano
key_name=key1
security_group_ids=sg123,sg456
template_data=$(cat <<EOF
{
    "ImageId": "$ami_id",
    "InstanceType": "$instance_type",
    "KeyName": "$key_name",
    "SecurityGroupIds": ["$security_group_ids"]
}
EOF
)
echo "$template_data"
You can then test the JSON syntax with jq:
./template.sh | jq -c
{"ImageId":"ami1234","InstanceType":"t3.nano","KeyName":"key1","SecurityGroupIds":["sg123,sg456"]}
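Note that with this approach "sg123,sg456" ends up as a single array element. A hedged alternative is to build the JSON with jq itself, which also splits the group IDs into separate array elements and escapes any awkward characters in the variables (--arg and split are standard jq features):
template_data=$(jq -n \
    --arg ami "$ami_id" \
    --arg type "$instance_type" \
    --arg key "$key_name" \
    --arg sgs "$security_group_ids" \
    '{ImageId: $ami, InstanceType: $type, KeyName: $key, SecurityGroupIds: ($sgs | split(","))}')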

Invalid JSON while submitting spark submit job via NiFi

I am trying to submit a Spark job in which I set a date argument in the conf property, and I run it through a script in NiFi. However, when I run the script I get an error.
Spark Submit Code in the script:
aws emr add-steps --cluster-id "$1" --steps '[{"Args":["spark-submit","--deploy-mode","cluster","--jars","s3://tvsc-lumiq-edl/jars/ojdbc7.jar","--executor-memory","10g","--driver-memory","10g","--conf","spark.hadoop.yarn.timeline-service.enabled=false","--conf","currDate='\"$5\"'","--class",'\"$2\"','\"$3\"','\"$4\"'],"Type":"CUSTOM_JAR","ActionOnFailure":"CONTINUE","Jar":"command-runner.jar","Properties":"","Name":"Spark application"}]' --region "$6"
and after I run it, I get the below error:
ExecuteStreamCommand[id=5b08df5a-1f24-3958-30ca-2e27a6c4becf] Transferring flow file StandardFlowFileRecord[uuid=00f844ee-dbea-42a3-aba3-0edcabfc50a2,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1607082757752-507103, container=default, section=223], offset=29, length=-1],offset=0,name=6414901712887990,size=0] to nonzero status. Executable command /bin/bash ended in an error:
Error parsing parameter '--steps': Invalid JSON:
[{"Args":["spark-submit","--deploy-mode","cluster","--jars","s3://tvsc-lumiq-edl/jars/ojdbc7.jar","--executor-memory","10g","--driver-memory","10g","--conf","spark.hadoop.yarn.timeline-service.enabled=false","--conf","currDate="Fri
Where am I going wrong?
You can use JSONLint to validate your JSON, which makes it easier to see why it's wrong.
In your case, you are wrapping the final 3 values in single quotes ' rather than double quotes "
Your steps JSON should look like:
[{
    "Args": [
        "spark-submit",
        "--deploy-mode",
        "cluster",
        "--jars",
        "s3://tvsc-lumiq-edl/jars/ojdbc7.jar",
        "--executor-memory",
        "10g",
        "--driver-memory",
        "10g",
        "--conf",
        "spark.hadoop.yarn.timeline-service.enabled=false",
        "--conf",
        "currDate='\"$5\"'",
        "--class",
        "\"$2\"",
        "\"$3\"",
        "\"$4\""
    ],
    "Type": "CUSTOM_JAR",
    "ActionOnFailure": "CONTINUE",
    "Jar": "command-runner.jar",
    "Properties": "",
    "Name": "Spark application"
}]
Specifically, these 3 lines:
"\"$2\"",
"\"$3\"",
"\"$4\""
Instead of the original:
'\"$2\"',
'\"$3\"',
'\"$4\"'
The practical difference: in '\"$2\"' the single quotes close before the escape, so the variable expands outside of any quotes; a value containing spaces (such as the date in $5) is then word-split by the shell, tearing the JSON apart, which is exactly what the truncated currDate="Fri in the error message shows. In "\"$2\"" the expansion stays inside double quotes and is passed through intact.
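A quick hedged demonstration of the difference, using printf to make the argument boundaries visible:
set -- a b c d 'Fri Aug 21'
printf '<%s> ' currDate=\"$5\"; echo      # unquoted expansion: <currDate="Fri> <Aug> <21">
printf '<%s> ' currDate="\"$5\""; echo    # quoted expansion: <currDate="Fri Aug 21">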

Bash JSON string into variable

My idea is to put a JSON string into the variable JSON. I have a command that creates a console login for an AWS IAM user: aws iam create-login-profile --cli-input-json file://create-login-profile.json
The file create-login-profile.json has the following content:
{
    "UserName": "roberto.viquezzz",
    "Password": "aaaaaaaaaaa",
    "PasswordResetRequired": true
}
I tried to write a bash script that contains the JSON variable JSON, like the following code:
JSON="{\"UserName\": \"roberto.viquezzz\",\"Password\": \"aaaaaaaaaaa\",\"PasswordResetRequired\" : true}"
aws iam create-login-profile --cli-input-json $JSON
When I type ./file.sh in the console, the script should create the console user, but when I execute it I get an error:
Unknown options: "aaaaaaaaaaa","PasswordResetRequired", :, true}, "roberto.viquezzz","Password":
But if I execute the same command directly from the command line, like:
aws iam create-login-profile --cli-input-json "{\"UserName\": \"roberto.viquezzz\",\"Password\": \"aaaaaaaaaaa\",\"PasswordResetRequired\": true}"
everything is OK. Does anyone know what's wrong? Please suggest!
Put quotes around $JSON:
aws iam create-login-profile --cli-input-json "$JSON"
The quotes that are there during the assignment get consumed by the shell. You can verify this by issuing echo $JSON.
By adding the quotes you will make sure that the entire string is passed to the command "aws" as a single argument.
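A quick hedged way to see the difference is printf, which prints each argument it receives separately, making the word splitting visible:
JSON="{\"UserName\": \"roberto.viquezzz\",\"Password\": \"aaaaaaaaaaa\",\"PasswordResetRequired\": true}"
printf '<%s> ' $JSON; echo      # unquoted: split into several arguments
printf '<%s> ' "$JSON"; echo    # quoted: one single argument of valid JSON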

Error while Passing Json string in scala using Curl

I am trying to post a JSON string using curl in Scala. My curl command works fine when executed from a Linux box, but from Scala it always throws an error ("message": "Must provide query string.").
My working curl command in Linux:
curl http://laptpad1811:5000/graphql -H "Content-Type: application/json" -X POST -d '{"query":"mutation CreateFileReceivedEvent($createFileReceivedEventInput: CreateFleReceivedEventInput!) { createFileReceivedEvent(input: $createFileReceivedEventInput) { clientMutationId }}","variables":{"createFileReceivedEventInput":{"clientMutationId":"Test","fileReceivedEvent":{"file":{"fileTrackingId":"83a86c44-66a5-4de0-9b7f-c6995877279d","name":"textfile_2017-08-21T15:58:45Z","fileType":{"code":"textfile"}},"eventTimestamp":"2017-08-21T15:59:30Z"}}},"operationName":"CreateFileReceivedEvent"}'
My Scala code:
Step 1: copy the entire JSON string (payload) to a txt file:
'{"query":"mutation CreateFileReceivedEvent($createFileReceivedEventInput: CreateFleReceivedEventInput!) { createFileReceivedEvent(input: $createFileReceivedEventInput) { clientMutationId }}","variables":{"createFileReceivedEventInput":{"clientMutationId":"Test","fileReceivedEvent":{"file":{"fileTrackingId":"83a86c44-66a5-4de0-9b7f-c6995877279d","name":"textfile_2017-08-21T15:58:45Z","fileType":{"code":"textfile"}},"eventTimestamp":"2017-08-21T15:59:30Z"}}},"operationName":"CreateFileReceivedEvent"}'
Step 2:
val data=fromFile("/usr/test/data.txt").getLines.mkString
Step 3:
val cmd = Seq("curl", "http://laptpad1811:5000/graphql", "-H",
"'Content-Type:application/json'" ,"-X", "POST", "-d" , data)
Step 4:
cmd.!!
I get the below error
String =
"{
"errors": [
{
"message": "Must provide query string.",
"stack": "BadRequestError: Must provide query string.\n
I have tried changing " to ' and multiple combinations of the JSON string, but I always get the same error.
I suspect that your issue is that sys.process doesn't pass commands through the shell (e.g. bash), so quotes that are necessary in the shell become unnecessary in Scala (and get passed through to the command, which in the case of Unix-style utilities will probably result in unexpected behavior).
So try:
val cmd = Seq("curl", "http://laptpad1811:5000/graphql", "-H", "Content-Type: application/json", "-X", "POST", "-d", data)
Likewise remove the single quote wrapping from your text file.
I would, however, counsel against spawning curl from Scala and advise using one of the existing HTTP client libraries (I personally like Gigahorse).

Invalid numeric literal with jq

I have a large amount of JSON from a 3rd-party system which I would like to pre-process with jq, but I am having difficulty composing the query. Test case follows:
$ cat test.json
{
    "a": "b",
    "c": "d",
    "e": {
        "1": {
            "f": "g",
            "h": "i"
        }
    }
}
$ cat test.json|jq .e.1.f
jq: error: Invalid numeric literal at EOF at line 1, column 3 (while parsing '.1.') at <top-level>, line 1:
.e.1.f
How would I get "g" as my output here? Or how do I cast that 1 to a "1" so it is handled correctly?
From the jq manual:
You can also look up fields of an object using syntax like .["foo"]
(.foo above is a shorthand version of this, but only for
identifier-like strings).
You also need quotes around the filter, and -r if you want raw output:
jq -r '.e["1"].f' test.json
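Depending on your jq version, the quoted-field shorthand from the same manual section may also work (a hedged alternative; requires a reasonably recent jq):
jq -r '.e."1".f' test.json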
I wrote a shell script function that calls the curl command, and pipes it into the jq command.
function getName {
    curl "http://localhost:123/getname/$1" | jq .;
}
export -f getName
When I ran this from the CLI,
getName jarvis
I was getting this response:
parse error: Invalid numeric literal at line 1, column 72
I tried removing the | jq from the curl command, and I got back the result without jq parsing:
<Map><timestamp>1234567890</timestamp><status>404</status><error>Not Found</error><message>....
I first thought that I had a bad character in the curl command, or that I was using the function parameter $1 wrong.
Then I counted the number of characters in the result string, and I noticed that the 72nd character was the space between "Not" and "Found".
The underlying issue was that I didn't yet have a method named getname in my Spring REST controller, so the response was coming back as 404 Not Found, and as XML rather than JSON. jq couldn't parse the non-JSON response, and the parse failed at that space.
I'm new to jq, so maybe there is a way to handle this more gracefully, but that's for another day.
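One hedged way to harden the function so jq never sees a non-JSON error page is to let curl fail on HTTP error statuses (--fail suppresses the error body and makes curl exit non-zero, so nothing is piped to jq):
function getName {
    curl --fail --silent --show-error "http://localhost:123/getname/$1" | jq .;
}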