How do I pass JSON as a parameter to AWS Lambda?

I have a CloudFormation template that consists of a Lambda function that reads messages from an SQS queue.
The Lambda function will read the message from the queue and transform it using a JSON template (which I want to be injected externally).
I will deploy different stacks for different products, and for each product I will provide a different JSON template to be used for the transformation.
I have different options but couldn't decide which one is better:
1. I can write all JSON files under the project, pack them together, and pass the related JSON name as a parameter to the Lambda.
2. I can store the JSON files on S3 and pass the S3 URL to the Lambda so I can read them at runtime.
3. I can store the JSON files in DynamoDB and read them from there, using the same approach as option 2.
The first one seems like a better approach as I don't need to read from an external file on every Lambda execution, but I will need to pack all templates together.
The last two are a cleaner approach but require an external call to read the JSON on every invocation.
Another approach could be (I'm not sure if it is possible) to inject a JSON file into the Lambda on deploy from an S3 bucket or something similar, and have the Lambda function read it like an environment variable.

As you can see from the CloudFormation documentation, Lambda environment variables can only be a map of strings, so the actual value you pass to the function as an environment variable must be a string. You could pass your JSON as a string, but the problem is that the maximum size for all environment variables combined is 4 KB.
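If the template is small enough, a minimal sketch of that environment-variable approach could look like this (the TEMPLATE_JSON variable name is just a placeholder):

import json
import os

# The template arrives as a plain string environment variable (remember the
# 4 KB limit across all variables), so it only needs to be parsed once.
TEMPLATE = json.loads(os.environ.get("TEMPLATE_JSON", "{}"))

def handler(event, context):
    # transform the incoming SQS records using TEMPLATE here
    return {"template_keys": list(TEMPLATE.keys())}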
If your templates are bigger and you don't want to call S3 or DynamoDB at runtime, you could use a workaround like writing a simple shell script that copies the correct template file into the Lambda folder before building and deploying the stack. This way the Lambda gets deployed as a package containing the code and only the desired JSON template.

I decided to go with the S3 setup and also improved efficiency by storing the JSON in a global variable (after reading it the first time), so I read it once and use it for the lifetime of the Lambda container.
I'm not sure this is the best solution, but it works well enough for my scenario.
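For reference, a minimal sketch of that pattern (the bucket/key environment variable names are placeholders; boto3 is assumed to be available, as it is in the Lambda runtime):

import json
import os
import boto3

s3 = boto3.client("s3")
_template = None  # cached for the lifetime of the Lambda container

def get_template():
    global _template
    if _template is None:
        obj = s3.get_object(Bucket=os.environ["TEMPLATE_BUCKET"],
                            Key=os.environ["TEMPLATE_KEY"])
        _template = json.loads(obj["Body"].read())
    return _template

def handler(event, context):
    template = get_template()
    # transform the SQS messages in `event` using `template` here
    return {"loaded": bool(template)}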

Related

Terraform JSON configuration

I am trying to take a JSON input, complete some validation/transforms on it, and apply it as a Terraform template. What would be most convenient is to pass the Terraform template JSON definitions of resources (in environment variables) and have Terraform apply those; however, I cannot get this to work. (I have a variables.tf file which uses the TF_VAR variables to set configuration, and I am passing in JSON config per the Terraform documentation.)
Is there a standard way of doing this? (Passing Terraform JSON definitions via environment variables for Terraform to apply?)
I'm not really sure what the error is, since there is no code, error log, or reproducible example.
I will assume you already know how to pass and access environment variables (if not, check this medium post), and that you are receiving a large "string" inside Terraform and want to read it as JSON.
The solution would be to use the jsondecode/jsonencode functions.
Just a little reminder: when posting a question, pasting the code with it makes it easier for people to understand what you are trying to do, and the quality of answers will be higher.

Dynamically update the request JSON and send it as multipart form data in Karate [duplicate]

In my Karate tests I need to write response IDs to txt files (or any other file format, such as JSON). I was wondering if Karate has any capability to do this; I haven't seen it in the documentation. If not, is there a simple JavaScript function to do so?
Try the karate.write(value, filename) API, but we don't encourage it. Also, the file will be written only to the current "build" directory, which will be target for Maven projects / the stand-alone JAR.
value can be any data type, and Karate will write the bytes (or plain text) out. There is no built-in support for any other format.
Here is an example.
EDIT: for others coming across this answer in the future, the right thing to do is:
- Don't write files in the first place; you never need to do this. This question is typically asked by inexperienced folks who for some reason think that the only way to "save" a response before validation is to write it to a file. Please don't waste your time: just match against the response. You can save it (or parts of it) to variables while you make other HTTP requests. And do not write your tests so that scenarios (or features) depend on other scenarios; this is a very bad practice. Also note that by default, Karate will dump all HTTP requests and responses in the log file (typically target/karate.log) and also in the HTML report.
- See if karate.write() works for you, as per this answer.
- Write a custom Java function (or a JS function that uses the JVM) to do what you want, using Java interop.
Also note that you can use karate.toCsv() to convert JSON into CSV if needed.
My justification for writing to a file is a different one. I am using Karate explicitly to implement a mock. I want to expose an endpoint to which the upstream system will send some basic data as a JSON payload using a POST/PUT method; Karate will construct the subsequent payload file and store it in a specific folder, and this newly created payload file will be exposed through another GET call.

Google Cloud Functions: loading GCS JSON files into BigQuery with non-standard keys

I have a Google Cloud Storage bucket where a legacy system drops newline-delimited JSON files that need to be loaded into BigQuery.
I wrote a Google Cloud Function that takes the JSON file and loads it into BigQuery. The function works fine with sample JSON files; the problem is that the legacy system is generating JSON with a non-standard key:
{
  "id": 12345,
  "#address": "XXXXXX"
  ...
}
Of course the "#address" key throws everything off and the cloud function errors out ...
Is there any option to "ignore" the JSON fields that have non-standard keys? Or to provide a mapping and ignore any JSON field that is not in the map? I looked around to see if I could deactivate the autodetect and provide my own mapping, but the online documentation does not cover this situation.
I am contemplating the option of:
1. Loading the file in memory into a string variable
2. Replacing #address with address
3. Converting the newline-delimited JSON into a list of dictionaries
4. Using the BigQuery streaming insert to insert the rows into BQ
But I'm afraid this will take a lot longer, the file size may exceed the 2 GB maximum for functions, I'll have to deal with Unicode when loading the file into a variable, etc.
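For what it's worth, a rough Python sketch of those four steps (assuming the google-cloud-storage and google-cloud-bigquery clients; bucket, file, and table names are placeholders):

import json
from google.cloud import bigquery, storage

def load_cleaned(bucket_name, file_name, table_id):
    # 1. load the file into memory as a string
    text = storage.Client().bucket(bucket_name).blob(file_name).download_as_text()
    rows = []
    for line in text.splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        # 2-3. strip the leading '#' from offending keys and build a list of dicts
        rows.append({key.lstrip("#"): value for key, value in record.items()})
    # 4. stream the rows into BigQuery
    errors = bigquery.Client().insert_rows_json(table_id, rows)
    if errors:
        raise RuntimeError(errors)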
What other options do I have?
And no, I cannot modify the legacy system to rename the "#address" field :(
Thanks!
I'm going to assume the error that you are getting is something like this:
Errors: query: Invalid field name "#address". Fields must contain only letters, numbers, and underscores, start with a letter or underscore, and be at most 128 characters long.
This is an error message on the BigQuery side, because cols/fields in BigQuery have naming restrictions. So, you're going to have to clean your file(s) before loading them into BigQuery.
Here's one way of doing it, which is completely serverless:
1. Create a Cloud Function to trigger on new files arriving in the bucket. You've already done this part by the sounds of things.
2. Create a templated Cloud Dataflow pipeline that is triggered by the Cloud Function when a new file arrives. It simply passes the name of the file to process to the pipeline.
3. In said Cloud Dataflow pipeline, read the JSON file in a ParDo and, using a JSON parsing library (e.g. Jackson if you are using Java), read the object and get rid of the "#" before creating your output TableRow object (see the sketch after these steps).
4. Write the results to BigQuery. Under the hood, this will actually invoke a BigQuery load job.
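For illustration only, here is a rough sketch of the key-cleaning step using the Beam Python SDK rather than the Java/Jackson approach mentioned above (table and file names are placeholders, and the Dataflow template/runner options are omitted):

import json
import apache_beam as beam

def clean_keys(record):
    # drop the leading '#' so the field names satisfy BigQuery's naming rules
    return {key.lstrip("#"): value for key, value in record.items()}

def run(input_file, table_spec):
    with beam.Pipeline() as pipeline:
        (pipeline
         | "Read" >> beam.io.ReadFromText(input_file)
         | "Parse" >> beam.Map(json.loads)
         | "CleanKeys" >> beam.Map(clean_keys)
         | "Write" >> beam.io.WriteToBigQuery(
               table_spec,
               write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
               create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER))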
To sum up, you'll need the following in the conga line:
File > GCS > Cloud Function > Dataflow (template) > BigQuery
The advantages of this:
- Event driven
- Scalable
- Serverless/no-ops
- You get monitoring and alerting out of the box with Stackdriver
- Minimal code
See:
Reading nested JSON in Google Dataflow / Apache Beam
https://cloud.google.com/dataflow/docs/templates/overview
https://shinesolutions.com/2017/03/23/triggering-dataflow-pipelines-with-cloud-functions/
Disclosure: the last link is to a blog post written by one of the engineers I work with.

JMeter: multiple response data values used in requests

I have a test plan that has many POST calls, like
/api/v1/budgets
Each of those calls has a response which returns a UUID from the database; I extract that using a JSON path extractor and save it to a variable.
After doing all the POST calls, I need to do the same number of calls but with DELETE, using the UUIDs I got from the responses.
Is there an efficient way to extract those UUIDs? For now I had to add a JSON path extractor manually to each call.
And after that, is there a way to save them and loop over the saved variables, sending the next one each time?
Also, I'm going to use multiple users for each thread, so I don't know if JMeter will be able to handle that or if I need to manage the threads and the users per thread myself.
JMeter provides the ForEach Controller, which can iterate over variables having a numeric postfix, like:
uuid_1
uuid_2
uuid_3
etc.
So you can store the UUIDs that way, using for example the __counter() function, and use a single HTTP Request under the ForEach Controller in order to delete them.
I would also recommend getting familiar with the Here's What to Do to Combine Multiple JMeter Variables article to learn how to work with compound variables in JMeter scripts.

JBPM REST calls with JSON

We want to start a process in JBPM6 using the REST API. We need to pass an object as a process variable.
We know how to do it with JAXB and the execute call, but we want to do it with JSON and /runtime/{deploymentId}/process/{processDefId}/start
Is it possible? We tried and had no success.
I am not sure whether my answer exactly addresses the question, but for someone's use in the future I'll place a couple of lines here.
If you want to set a process variable when starting a process using the RESTful API, you can do it like this.
If your variable name is myVar, just add the value as a URL parameter by prefixing the parameter name with "map_". That means the parameter name should now be map_myVar. For an example, see the request below.
http://<host>:<port>/jbpm-console/rest/runtime/{deploymentId}/process/{processDefId}/start?map_myVar=myValue
You can confirm whether the value is set by adding the code below to a script task.
Object var = kcontext.getVariable("myVar");
System.out.println("myVar : " + var);
See the 17.1.1.3. Map query parameters section of JBPM6 documentation.
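For example, a minimal sketch of such a call using Python's requests library (host, credentials, deployment ID, and process definition ID are placeholders):

import requests

# start the process and pass myVar=myValue as a process variable via the map_ prefix
url = ("http://localhost:8080/jbpm-console/rest/runtime/"
       "com.web:work:1.0/process/work.worload/start")
response = requests.post(url, params={"map_myVar": "myValue"},
                         auth=("user", "password"))
response.raise_for_status()
print(response.text)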
After talking to the dev who is responsible for the REST API, I was able to confirm how it works.
The
/runtime/{deploymentId}/process/{processDefId}/start
endpoint is a POST request where all the contents of the payload are ignored. The variables are written as key=value pairs in the query string.
With deployment id: com.web:work:1.0
With processDefId: work.worload
2 variables: var1 and var2
For example:
/runtime/com.web:work:1.0/process/work.worload/start?var1=a&var2=b
I'm still trying to understand how to define objects with the remote API.
Meanwhile, I also confirmed that it is impossible to define objects this way. The only way to pass objects is by using JAXB, which uses the "/execute" path.