I need to merge 2 JSON objects in an Azure Logic App. Is that possible? The properties of one object are known, but the other's are not.
There are different options available to merge JSON objects in a Logic App.
Logic App - Compose Action: very easy to implement.
Logic App - Inline Code: you can write JavaScript code.
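For the Compose option, the union() expression merges two objects in one step, e.g. union(outputs('Object_A'), outputs('Object_B')), where the second object's properties win on collisions (the action names are placeholders). For the Inline Code option, a minimal sketch might look like the following, again with placeholder action names:

    // Inline Code action: merge two JSON objects produced by earlier actions.
    // 'Known_Object' and 'Unknown_Object' are hypothetical action names; depending
    // on the action type you may need .outputs or .outputs.body.
    var known = workflowContext.actions.Known_Object.outputs;
    var unknown = workflowContext.actions.Unknown_Object.outputs;

    // Properties of the unknown object win on key collisions.
    return Object.assign({}, known, unknown);

Because the second object's properties simply overwrite the first's, it does not matter that its shape is not known in advance.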
I have backend APIs with multiple controllers that split up operations: some are for 3rd parties, others are for frontend proxies / other micro-services, and then there is a support/admin controller. I don't want all of these in the same APIM API / Product.
Currently I either have to manually edit the OpenAPI definition of the API before importing it into APIM, or manually create the API in APIM and then use the dev tools extractor to export the templates for other environments.
My stack is:
dotnet 5.0 / 6.0 (aspnet) with NSwag to document the API, using the Azure APIM development toolkit to automate the bits we can.
I'd like to have an automated pipeline where the backend API is built, with separate OpenAPI definitions or a way to filter controllers during import; the output from that goes into a pipeline for APIM, which then updates the dev environment APIM and can be auto-deployed into other environments when needed.
Does anyone else do this type of thing, or do you put the whole API into a single APIM API/Product? Or do you have completely separate backend APIs that make up a microservice? Or something else?
I want to share what we do at work.
The key idea is to have a custom OpenAPI processor (e.g. a command-line tool, so you can call it in a pipeline) combined with OpenAPI extension specs. The processor is basically a YAML or JSON parser, depending on your API spec format.
You create a master OpenAPI spec file that contains all the operations in your controller.
Create an extension, say x-api-product: "Product A", add this to the relevant API operations.
Your custom processor takes in the master spec file (e.g. YAML), groups the operations by x-api-product, and outputs a set of new OpenAPI specs (a rough sketch of such a processor follows after this list).
Import the output files into APIM.
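A minimal sketch of such a processor in Node.js, assuming js-yaml is available and a master file named master-openapi.yaml (both names are assumptions; adapt to your setup):

    // Hypothetical splitter: group operations by the x-api-product extension
    // and emit one OpenAPI spec per product.
    const fs = require('fs');
    const yaml = require('js-yaml');

    const HTTP_VERBS = ['get', 'post', 'put', 'patch', 'delete', 'head', 'options'];
    const master = yaml.load(fs.readFileSync('master-openapi.yaml', 'utf8'));
    const byProduct = {};

    for (const [route, pathItem] of Object.entries(master.paths || {})) {
      for (const verb of HTTP_VERBS) {
        const operation = pathItem[verb];
        if (!operation) continue;
        const product = operation['x-api-product'];
        if (!product) continue; // operation not assigned to any product
        byProduct[product] = byProduct[product] || { ...master, paths: {} };
        byProduct[product].paths[route] = byProduct[product].paths[route] || {};
        byProduct[product].paths[route][verb] = operation;
      }
    }

    // Write one spec per product, e.g. "Product A" -> product-a.yaml
    for (const [product, spec] of Object.entries(byProduct)) {
      const file = product.toLowerCase().replace(/\s+/g, '-') + '.yaml';
      fs.writeFileSync(file, yaml.dump(spec));
    }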
The thing to think about is how you manage the master API spec file. We follow the API Spec First approach, where we manually create and modify the YAML file, and use OpenAPI Generator to code gen the controllers.
Hope this gives you some ideas.
In my Karate tests I need to write response IDs to txt files (or any other file format such as JSON). I was wondering if Karate has any capability to do this; I haven't seen otherwise in the documentation. If not, is there a simple JavaScript function to do so?
Try the karate.write(value, filename) API, but we don't encourage it. Also, the file will be written only to the current "build" directory, which will be target for Maven projects / the stand-alone JAR.
value can be any data-type, and Karate will write the bytes (or plain-text) out. There is no built-in support for any other format.
Here is an example.
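A minimal sketch, assuming the response has been assigned to a variable named response (the file name is made up):

    // Illustrative only: karate.write() called from a JS function defined in a
    // feature or in karate-config.js; the file lands under the build directory.
    var saveResponse = function (response) {
      karate.write(response, 'response-ids.json');
    };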
EDIT: for others coming across this answer in the future, the right thing to do is:
don't write files in the first place. You never need to do this; this question is typically asked by inexperienced folks who for some reason think that the only way to "save" a response before validation is to write it to a file. Please don't waste your time - just match against the response. You can save it (or parts of it) to variables while you make other HTTP requests. And do not write your tests so that scenarios (or features) depend on other scenarios; this is a very bad practice. Also note that by default, Karate will dump all HTTP requests and responses in the log file (typically target/karate.log) and also in the HTML report.
see if karate.write() works for you as per this answer
write a custom Java (or JS) function that uses the JVM to do what you want, via Java interop (a rough sketch follows below)
Also note that you can use karate.toCsv() to convert JSON into CSV if needed.
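A rough sketch of the Java-interop option, in case you really do need a file; the helper name and path are made up:

    // Hypothetical JS helper using Karate's Java interop to write text to a file.
    var writeTextFile = function (path, content) {
      var FileWriter = Java.type('java.io.FileWriter');
      var fw = new FileWriter(path);
      fw.write(content);
      fw.close();
    };
    // usage from a scenario: writeTextFile('target/ids.txt', karate.pretty(response))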
My justification for writing to a file is a different one. I am using Karate explicitly to implement a mock. I want to expose an endpoint where the upstream system will send some basic data through a JSON payload using a POST/PUT method; Karate will construct the subsequent payload file and store it in a specific folder, and this newly created payload file will be exposed through another GET call.
I have a CloudFormation template that consists of a Lambda function that reads messages from an SQS queue.
The Lambda function will read the message from the queue and transform it using a JSON template (which I want to be injected externally).
I will deploy different stacks for different products and for each product I will provide different JSON templates to be used for transformation.
I have different options but couldn't decide which one is better:
I can write all JSON files under the project, pack them together, and pass the related JSON name as a parameter to the Lambda.
I can store JSON files on S3 and pass the S3 URL to the Lambda so I can read them at runtime.
I can store JSON files in DynamoDB and read from there, using the same approach as option 2.
The first one seems like a better approach, as I don't need to read from an external file on every Lambda execution. But I will need to pack all templates together.
The last two are a clearer approach but require an external call to read the JSON on every invocation.
Another approach could be (I'm not sure if it is possible) to inject a JSON file into the Lambda on deploy from an S3 bucket or something similar, and the Lambda function would read it like an environment variable.
As you can see from the CloudFormation documentation, Lambda environment variables can only be a Map of Strings, so the actual value you can pass to the function as an environment variable must be a String. You could pass your JSON as a string, but the problem is that the max size for all environment variables is 4 KB.
If your templates are bigger and you don't want to call S3 or DynamoDB at runtime, you could do a workaround like writing a simple shell script that copies the correct template file into the Lambda folder before building and deploying the stack. This way the Lambda gets deployed in a package with the code and only the desired JSON template.
I decided to go with the S3 setup and also improved efficiency by storing the JSON in a global variable (after reading it the first time), so I read it once and use it for the lifetime of the Lambda container.
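A minimal sketch of that caching pattern, assuming a Node.js Lambda and AWS SDK v3; the bucket/key environment variable names are placeholders:

    // Read the template from S3 once per container and cache it in a global.
    const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3');

    const s3 = new S3Client({});
    let template = null; // survives across invocations in the same container

    async function loadTemplate() {
      if (template) return template; // already loaded in this container
      const res = await s3.send(new GetObjectCommand({
        Bucket: process.env.TEMPLATE_BUCKET, // placeholder env var names
        Key: process.env.TEMPLATE_KEY,
      }));
      template = JSON.parse(await res.Body.transformToString());
      return template;
    }

    exports.handler = async (event) => {
      const tpl = await loadTemplate();
      // ...transform each SQS record in event.Records using tpl...
    };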
I'm not sure this is the best solution but works well enough for my scenario.
I am starting an Ember application from scratch that will connect to many non-standard JSON APIs which I don't control and from which I only need bits and pieces of data. My first attempt was to use jQuery alone but the code quickly became hard to read and maintain.
I want to use the Ember-Data RESTAdapter with some Serializers. I may need multiple Adapters and Serializers for the different APIs.
I am trying to figure out a good way to break down the work into logical steps.
What process should I follow?
For example:
"Start with what I need" approach:
Model ALL my objects using the FixtureAdapter as the ApplicationAdapter
Implement sample app using the models to ensure it's logically correct
Switch the FixtureAdapter for the RESTAdapter
Extend the RESTAdapter for each Model to map to the different APIs
Create a Serializer for each Model Adapter
-or-
"Start with what I can get" approach:
Extend a SINGLE ModelAdapter at a time, mapping it to the necessary API end-point
Create the Model for my ModelAdapter
Create the Serializer for that ModelAdapter (a sketch of a per-model adapter/serializer pair follows after these steps)
Implement model in the app
Repeat
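Either way, the per-model pieces end up looking roughly like this (an illustrative sketch using classic Ember Data; the model name, host, and payload shape are made up):

    // Hypothetical per-model adapter pointing at one of the external APIs.
    App.ArticleAdapter = DS.RESTAdapter.extend({
      host: 'https://api.example.com',
      namespace: 'v2'
    });

    // Hypothetical per-model serializer that reshapes the non-standard payload
    // into the structure Ember Data expects.
    App.ArticleSerializer = DS.RESTSerializer.extend({
      extractSingle: function (store, type, payload, id) {
        var normalized = { article: payload.data.item };
        return this._super(store, type, normalized, id);
      }
    });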
I've been using Google Tools (library, templating) for almost a year... and I came to the point where I have to connect the backend with all the templates I've been working on. The backend receives the data in JSON format.
Here's my problem: I want to submit JSON that represents my object model in the backend, and I know the Closure Library offers this...
var json = goog.json.serialize(goog.dom.forms.getFormDataMap(form).toObject());
The problem is that the method getFormDataMap returns a goog.structs.Map, which works like a hash map... It means that all values of the submitted form are nested in arrays.
I was wondering if anyone has found a solution to this. I know there are libraries that do the trick, like this one (https://github.com/maxatwork/form2js), but I can't believe that Closure doesn't have anything to deal with this problem.
Thanks a lot!
Why not access the data yourself and build the data structure you require? It's not like this will be a bottleneck of any sort.
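A minimal sketch of that suggestion, unwrapping the single-element arrays that getFormDataMap produces (the function name is made up; multi-valued fields such as checkbox groups stay as arrays):

    // Assumes goog.require('goog.array'), goog.require('goog.dom.forms'),
    // goog.require('goog.json').
    function formToJson(form) {
      var map = goog.dom.forms.getFormDataMap(form);
      var result = {};
      goog.array.forEach(map.getKeys(), function (key) {
        var values = map.get(key);
        result[key] = (values.length === 1) ? values[0] : values;
      });
      return goog.json.serialize(result);
    }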