Document or project criteria needed to use JSON Diff? - json

We use JSON Diff when we run web projects that use API features. Does JSON Diff not work on projects that don't use an API? Are there any special criteria?

Yes, we use JSON Diff to find the differences between two code files.
The criteria for using JSON Diff are:
The project uses an API feature.
You want to compare two files in JSON format (see the sketch below).
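As a concrete illustration, here is a minimal sketch of diffing two JSON files in Java using the zjsonpatch library; the library choice and the file names are my assumptions, not part of the original answer:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.flipkart.zjsonpatch.JsonDiff;

import java.io.File;

public class JsonDiffExample {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // Parse the two JSON files to compare (file names are placeholders).
        JsonNode before = mapper.readTree(new File("before.json"));
        JsonNode after = mapper.readTree(new File("after.json"));
        // Produces an RFC 6902 JSON Patch listing the differences.
        JsonNode patch = JsonDiff.asJson(before, after);
        System.out.println(patch.toPrettyString());
    }
}

An empty patch array means the two files are identical, so the same sketch doubles as an equality check.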

Related

Best data processing software to parse CSV file and make API call per row

I'm looking for ideas for an Open Source ETL or Data Processing software that can monitor a folder for CSV files, then open and parse the CSV.
For each CSV row the software will transform the CSV into a JSON format and make an API call to start a Camunda BPM process, passing the cell data as variables into the process.
Looking for ideas,
Thanks
You can use a Java WatchService or Spring FileSystemWatcher as discussed here with examples:
How to monitor folder/directory in spring?
referencing also:
https://www.baeldung.com/java-nio2-watchservice
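As a starting point, here is a minimal folder-watching sketch with the NIO.2 WatchService; the directory path is a placeholder:

import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchEvent;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;

public class CsvFolderWatcher {
    public static void main(String[] args) throws Exception {
        Path dir = Paths.get("/data/inbox"); // folder to monitor (placeholder)
        WatchService watcher = FileSystems.getDefault().newWatchService();
        dir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);

        while (true) {
            WatchKey key = watcher.take(); // blocks until an event arrives
            for (WatchEvent<?> event : key.pollEvents()) {
                Path file = dir.resolve((Path) event.context());
                if (file.toString().endsWith(".csv")) {
                    // hand the file off to the CSV-to-process converter here
                    System.out.println("New CSV detected: " + file);
                }
            }
            if (!key.reset()) break; // stop if the directory becomes inaccessible
        }
    }
}

Spring's FileSystemWatcher (mentioned above) achieves the same effect by polling, which can be more reliable on network file systems.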
Once you have picked up the CSV you can use my example here as inspiration or extend it: https://github.com/rob2universe/csv-process-starter specifically
https://github.com/rob2universe/csv-process-starter/blob/main/src/main/java/com/camunda/example/service/CsvConverter.java#L48
The example starts a configurable process for every row in the CSV and includes the content of the row as JSON process data.
I wanted to limit the dependencies of this example, so the CSV parsing logic is very simple. Commas inside field values may break the example, and special characters may not be handled correctly. A more robust implementation could replace the simple Java String.split(",") with an existing CSV parser library such as OpenCSV, as sketched below.
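A hedged sketch of that replacement using OpenCSV (the input file name is a placeholder):

import com.opencsv.CSVReader;
import java.io.FileReader;

public class OpenCsvExample {
    public static void main(String[] args) throws Exception {
        // OpenCSV copes with quoted fields and embedded commas,
        // which a plain String.split(",") breaks on.
        try (CSVReader reader = new CSVReader(new FileReader("input.csv"))) {
            String[] row;
            while ((row = reader.readNext()) != null) {
                System.out.println(String.join(" | ", row));
            }
        }
    }
}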
The file watcher would actually be a nice extension to the example. I may add it when I get around to it, but would also accept a pull request in case you fork my project.

How can we read a JSON file only once for an entire feature file in Karate

I am calling multiple JSON and JS files in the Background of my feature file, which is required for every scenario in my feature file.
def test = read('classpath:testData/responseFiles/test.json')
The problem is that it runs/reads for each scenario. Is there something I can do so that it reads only once per feature file and can be used by all scenarios? I am using Karate version 9.0.0.
callonce only works for calling a feature file, not a JSON file.
Read the JSON file in a called feature and then use callonce.
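A minimal sketch of that pattern in Karate syntax; the JSON path comes from the question, while the helper feature name is hypothetical:

# read-test-data.feature (hypothetical helper) - reads the JSON once
Feature: load shared test data
Scenario:
    * def test = read('classpath:testData/responseFiles/test.json')

# in your main feature file
Feature: my scenarios
Background:
    # callonce runs the helper a single time and caches its variables
    * def shared = callonce read('classpath:testData/responseFiles/read-test-data.feature')
    * def test = shared.test

Because callonce caches the result of the called feature, every scenario after the first reuses the already-loaded JSON instead of re-reading the file.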

How properties.db is used in Forge Viewer?

The sqlite database file properties.db is usually the biggest file in the output from https://extract.autodesk.io/.
What is it used for in Forge Viewer, and if it's not used, why is it available in the ZIP file?
The reason this example copies both is that the purpose of the sample is to demo how to extract the 'bubble' from the Autodesk server. The design file's properties are extracted in two formats: JSON (json.gz) and SQLite (sdb/db).
The Autodesk Viewer only uses the JSON format, but other systems may prefer SQLite. The JSON approach makes things easier when your code executes in client browsers.
It is fairly easy to modify the sample to exclude the SQLite database if you are not interested in getting that file. I can point you to the code you would need to modify if that's something you want to do.
That file contains the component properties as a SQLite database; the same properties are also contained in objects_xxx.json.gz. The viewer only uses the JSON format.
That article shows how you can easily run the extraction code on your side; it doesn't extract the .db file:
Forge SVF Extractor in Node.js

MarkLogic Java API batch upload files (.csv)

I'm trying out the MarkLogic Java API and want to bulk upload some files with the extension .csv.
I'm not sure what to use, since the Java API only supports JSON, XML, and TXT files.
How do I batch upload files using the MarkLogic Java API? Do I convert everything to JSON?
Do I convert everything to JSON?
Yes, that is a common way to do it.
If you would like additional examples of how you can wrangle CSV with the Java Client API, check out OpenCSVBatcherExample and JacksonDatabindTest.testDatabindingThirdPartyPojoWithMixinAnnotations. The first demonstrates converting the CSV to XML and using a custom REST extension. The second example (well, unit test...) demonstrates converting the CSV to JSON and using the batch upload (Bulk Writes) capabilities Justin linked to.
If you have CSV files on your filesystem, I’d start with mlcp, as suggested above. It will handle all of the parsing and splitting into multiple transactions/batches for you. Take a look at the mlcp documentation for more details and some example configurations.
If you’d like more control over the parsing and splitting logic than mlcp gives you out-of-the-box or you’re getting CSV from some other source (i.e. not files on the filesystem), you can use the Java Client API. The Java Client API allows you to efficiently write batches using a WriteSet. Take a look at the “Bulk Writes” example.
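For orientation, a minimal hedged sketch of a WriteSet batch write; the connection details, URIs, and document contents are placeholders:

import com.marklogic.client.DatabaseClient;
import com.marklogic.client.DatabaseClientFactory;
import com.marklogic.client.document.DocumentWriteSet;
import com.marklogic.client.document.JSONDocumentManager;
import com.marklogic.client.io.Format;
import com.marklogic.client.io.StringHandle;

public class BulkJsonWrite {
    public static void main(String[] args) {
        // Connection details are placeholders; adjust for your environment.
        DatabaseClient client = DatabaseClientFactory.newClient(
                "localhost", 8000,
                new DatabaseClientFactory.DigestAuthContext("user", "password"));
        JSONDocumentManager docMgr = client.newJSONDocumentManager();
        DocumentWriteSet batch = docMgr.newWriteSet();

        // In practice, each JSON string would be produced from one CSV row.
        batch.add("/csv/row-1.json",
                new StringHandle("{\"name\":\"a\"}").withFormat(Format.JSON));
        batch.add("/csv/row-2.json",
                new StringHandle("{\"name\":\"b\"}").withFormat(Format.JSON));

        docMgr.write(batch); // the whole set is written in a single transaction
        client.release();
    }
}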
According to your reply to Justin, you cannot use MLCP because it is command line and you need to integrate it into a web portal.
Well, MLCP is released as open source software under the Apache 2 license. So if you are happy with that license, then you have the source to integrate.
But what I see as your main problem statement is more specific:
How can I create multiple XML or JSON documents from a CSV file [allowing the use of the Java API to then upload them as documents in MarkLogic]?
With that specific problem statement:
1) Have a look at SplitDelimitedTextReader.java from the mlcp source.
2) Try some Java libraries built for this purpose, such as http://jsefa.sourceforge.net/quick-tutorial.html
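As one more option along the same lines (my addition, not from the original answers): a hedged sketch using Jackson's jackson-dataformat-csv module to turn each CSV row into a JSON document ready for upload:

import com.fasterxml.jackson.databind.MappingIterator;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.csv.CsvMapper;
import com.fasterxml.jackson.dataformat.csv.CsvSchema;

import java.io.File;
import java.util.Map;

public class CsvToJsonDocs {
    public static void main(String[] args) throws Exception {
        CsvMapper csvMapper = new CsvMapper();
        // Treat the first CSV line as the header that names each column.
        CsvSchema schema = CsvSchema.emptySchema().withHeader();
        ObjectMapper jsonMapper = new ObjectMapper();

        try (MappingIterator<Map<String, String>> rows = csvMapper
                .readerFor(Map.class)
                .with(schema)
                .readValues(new File("input.csv"))) { // placeholder path
            int i = 0;
            while (rows.hasNext()) {
                // One JSON document per CSV row, ready to hand to a WriteSet.
                String json = jsonMapper.writeValueAsString(rows.next());
                System.out.println("/csv/row-" + (++i) + ".json -> " + json);
            }
        }
    }
}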

Convert Swagger JSON to RAML/YAML

How do I convert Swagger JSON to RAML/YAML and validate it? I am not looking for a programmatic way, just a one off conversion.
Here are the steps:
Export Swagger JSON into a file on your drive. This JSON should be published on your server at the following URI: /swagger/docs/v1
Go to http://editor.swagger.io/#/
In the top left corner, select File -> Import File... and point to the local Swagger JSON file you exported in step #1 to open it in the Swagger Editor.
Select the Generate Client -> Swagger YAML option from the menu.
It will generate the YAML, which you can validate at the http://www.yamllint.com/ site.
To convert an API spec between various formats (e.g. Swagger/OpenAPI, RAML, Postman), you can use the following free and open-source tools:
https://github.com/lucybot/api-spec-converter
https://github.com/stoplightio/api-spec-converter
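For example, the first tool ships an npm CLI; a hedged invocation follows (the exact format identifiers and flags depend on the tool version, so check its README):

npm install -g api-spec-converter
# e.g. convert a Swagger 2.0 JSON spec to OpenAPI 3 in YAML syntax
api-spec-converter --from=swagger_2 --to=openapi_3 --syntax=yaml swagger.json > api.yaml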
It's actually pretty simple.
The web version of the Swagger Editor gives you the flexibility to import your existing Swagger file (JSON/YAML) and to download the configuration currently being shown. So just combine these two steps.
Note: converting JSON to YAML is possible this way, but not JSON to RAML.
First, import your Swagger JSON at http://editor.swagger.io/#/ (File > Import File).
Once you see your configurations, just download the corresponding YAML version (File > Download YAML).
The YAML version of the JSON you just uploaded will be downloaded.
Conversion
If you are looking to convert any version of Swagger to RAML 0.8, then APITransformer.com can do it for you. We're almost done with RAML 1.0 export and will release it in a week's time.
Validation
The converted description comes out of the same code-gen engine that APIMatic uses to validate an API description before generating SDKs/Client libraries. Therefore, the converted RAML will be validated by default.
API descriptions in a variety of formats can also be validated via APIMatic's CLI or APIMatic's API.
While I wish there were a command-line tool, this company seems to have made a converter:
https://apitransformer.com/