Generate JSON test data from RAML

Is there a tool that allows me to create random JSON test data starting from a RAML file?
Example: starting from a RAML file describing an API, generate random static JSON responses to register as WireMock mock-server mappings, so that I can run automated tests against the API.
I'm working with Java but tools/libraries in other languages would fit too.
Thanks

Maybe one of these suits your needs:
RAML Mock Server: Library for validating MockServer calls against a RAML API specification
RAML Tester: Test if a request/response matches a given RAML definition
soapui-raml-plugin
Allows you to import RAML files into SoapUI for testing your REST APIs
Allows you to generate a REST Mock Service for a RAML file being imported
Otherwise check: http://raml.org/projects/projects and filter by type 'test' and language.

Perhaps https://github.com/wavesoft/raml-lipsum might be useful.
Its description suggests it can do what you ask. The tools in the other answers don't seem to generate sample requests/responses for a RAML service, which is what the question asks for (and what I was asked at work today).
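If none of those tools fit, the core of what the question asks for is small enough to sketch by hand. Below is a Python sketch (not from any of the tools above; the simplified type declaration and helper names are my own, and it assumes you have already extracted a JSON-schema-like type from the RAML file). It generates random data for a type and wraps it in a WireMock stub-mapping document, which can be POSTed to WireMock's /__admin/mappings endpoint:

```python
import json
import random
import string

def random_value(schema):
    """Generate a random value for a (simplified) type declaration."""
    t = schema.get("type", "string")
    if t == "object":
        return {name: random_value(prop)
                for name, prop in schema.get("properties", {}).items()}
    if t == "array":
        return [random_value(schema.get("items", {"type": "string"}))
                for _ in range(random.randint(1, 3))]
    if t == "integer":
        return random.randint(0, 1000)
    if t == "boolean":
        return random.choice([True, False])
    # default: a random lowercase string
    return "".join(random.choices(string.ascii_lowercase, k=8))

def wiremock_mapping(method, url, schema):
    """Wrap a generated body in a WireMock stub-mapping document."""
    return {
        "request": {"method": method, "url": url},
        "response": {
            "status": 200,
            "headers": {"Content-Type": "application/json"},
            "jsonBody": random_value(schema),
        },
    }

# Illustrative: a type that might have been extracted from a RAML file
user_type = {"type": "object",
             "properties": {"id": {"type": "integer"},
                            "name": {"type": "string"},
                            "active": {"type": "boolean"}}}

mapping = wiremock_mapping("GET", "/users/1", user_type)
print(json.dumps(mapping, indent=2))
```

A real generator would of course parse the RAML types (e.g. with a RAML parser library) rather than hand-writing the schema dict.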

Related

Render HAL formatted links in openapi3 json with SpringDoc

I'm running a Spring Boot REST application with Spring HATEOAS support and generating OpenAPI v3 docs with the Springdoc Maven plugin. When I call my REST endpoints I get links in HAL-formatted JSON ("_links"), but the generated OpenAPI v3 documentation gives me a different format for the links ("links").
How can I get the generated OpenAPIv3 docs to match the HAL formatted links?
The only resource I've found is this link: https://github.com/springdoc/springdoc-openapi/issues/446
However, the solution given there involves using spring-data-rest which I am not using (do I need to?)
I've also tried adding @EnableHypermediaSupport, which says it configures the JSON rendering, but that had no effect on the OpenAPI v3 docs.
The answer was simple enough: I needed to pull in the springdoc-openapi-hateoas dependency (https://springdoc.org/#spring-hateoas-support). After pulling this in, the JSON documentation was generated correctly with no additional configuration (I did not need @EnableHypermediaSupport or spring-data-rest).
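For reference, pulling that dependency in with Maven looks roughly like this (the version property is a placeholder; use the springdoc version that matches your Spring Boot release, per springdoc.org):

```xml
<dependency>
    <groupId>org.springdoc</groupId>
    <artifactId>springdoc-openapi-hateoas</artifactId>
    <version>${springdoc.version}</version>
</dependency>
```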
However, if you are using the Swagger UI, be aware that it will automatically generate bogus 'additionalProperty' links as an example for the resource schema. This is only in the Swagger UI; if you look at the generated OpenAPI v3 JSON, the structure is correct (cf. https://github.com/springdoc/springdoc-openapi/issues/237). To remedy this you can provide your own example for the resource schema.

Azure APIM - import single controller from an API or split controllers from single API into different APIM APIs

I have backend APIs with multiple controllers that split up operations: some are for 3rd parties, others are for frontend proxies / other microservices, and then there is a support/admin controller. I don't want all of these in the same APIM API / Product.
Currently I either have to manually edit the OpenAPI definition of the API before importing it into APIM, or manually create the API in APIM and then use the dev tools extractor to export the templates for other environments.
My stack is
.NET 5.0 / 6.0 (ASP.NET) with NSwag to document the API, using the Azure APIM development toolkit to automate the bits we can.
I'd like an automated pipeline where the backend API is built with separate OpenAPI definitions (or a way to filter controllers during import), the output goes into a pipeline for APIM which updates the dev environment, and then can be auto-deployed into other environments when needed.
Does anyone else do this type of thing, or do you put the whole API into a single APIM API/Product? Or do you have completely separate backend APIs that make up a microservice? Or something else?
I want to share what we do at work.
The key idea is to have a custom OpenAPI processor (be it a command line tool so that you can call that in a pipeline) combined with OpenAPI Extension specs. The processor is basically a YAML or JSON parser based on your API spec format.
You create a master OpenAPI spec file that contains all the operations in your controller.
Create an extension, say x-api-product: "Product A", add this to the relevant API operations.
Your custom processor takes in the master spec file (e.g. YAML), and groups the operations by x-api-product then outputs a set of new OpenAPI specs.
Import the output files into APIM.
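The grouping step above can be sketched in a few lines. Here is a stdlib-only Python sketch that works on a JSON OpenAPI document (the x-api-product values and the master spec are illustrative; a real processor would also need to handle YAML, shared components, and path-level keys such as parameters):

```python
import copy
from collections import defaultdict

def split_by_product(spec):
    """Group operations by their x-api-product extension and emit
    one OpenAPI document per product."""
    grouped = defaultdict(dict)  # product -> {path: {method: operation}}
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            product = op.get("x-api-product", "Uncategorized")
            grouped[product].setdefault(path, {})[method] = op

    specs = {}
    for product, paths in grouped.items():
        out = copy.deepcopy(spec)          # keep info, servers, components, ...
        out["paths"] = paths               # but only this product's operations
        out["info"] = dict(out.get("info", {}), title=product)
        specs[product] = out
    return specs

# Illustrative master spec with per-operation product tags
master = {
    "openapi": "3.0.0",
    "info": {"title": "Master API", "version": "1.0"},
    "paths": {
        "/orders": {"get": {"x-api-product": "Product A",
                            "responses": {"200": {"description": "ok"}}}},
        "/admin/users": {"get": {"x-api-product": "Admin",
                                 "responses": {"200": {"description": "ok"}}}},
    },
}

for product, doc in split_by_product(master).items():
    print(product, "->", list(doc["paths"]))
```

Each output document can then be written to its own file and imported into its own APIM API.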
The thing to think about is how you manage the master API spec file. We follow the API Spec First approach, where we manually create and modify the YAML file, and use OpenAPI Generator to code gen the controllers.
Hope this gives you some ideas.

dynamically update the request json and send it as multipart form data in karate [duplicate]

In my Karate tests I need to write response IDs to .txt files (or any other file format, such as JSON). I was wondering if Karate has any capability to do this; I haven't seen any in the documentation. If not, is there a simple JavaScript function to do so?
Try the karate.write(value, filename) API, but we don't encourage it. Also, the file will be written only to the current "build" directory, which will be target for Maven projects / the stand-alone JAR.
value can be any data-type, and Karate will write the bytes (or plain-text) out. There is no built-in support for any other format.
Here is an example.
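In case the linked example isn't handy, a minimal Karate feature snippet would look something like this (the URL, the id field, and the file name are all illustrative):

```gherkin
Scenario: write a response id to a file
  Given url 'http://localhost:8080/things/1'
  When method get
  Then status 200
  # the file lands in the current build directory, e.g. target/thing-id.txt
  * karate.write(response.id, 'thing-id.txt')
```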
EDIT: for others coming across this answer in the future the right thing to do is:
don't write files in the first place. You never need to do this; this question is typically asked by inexperienced folks who for some reason think that the only way to "save" a response before validation is to write it to a file. No, please don't waste your time: just match against the response. You can save it (or parts of it) to variables while you make other HTTP requests. And do not write your tests so that scenarios (or features) depend on other scenarios; this is a very bad practice. Also note that by default, Karate will dump all HTTP requests and responses into the log file (typically target/karate.log) and also into the HTML report.
see if karate.write() works for you as per this answer
write a custom Java (or JS function that uses the JVM) to do what you want using Java interop
Also note that you can use karate.toCsv() to convert JSON into CSV if needed.
My justification for writing to a file is a different one. I am using Karate explicitly to implement a mock. I want to expose an endpoint where the upstream system sends some basic data in a JSON payload via POST/PUT; Karate constructs the subsequent payload file and stores it in a specific folder, and this newly created payload file is then exposed through another GET call.

How does a .json file work in a API call?

I've been doing research in order to write an API for a school project, and when referencing the API documentation of YouTube and Twitter, I see API URLs like this
https://api.twitter.com/1.1/account/settings.json
My understanding was that you execute a method on the back end which returns information to the caller, but I thought those files had to have an extension like .py or .java (whatever language you're using), and that JSON was just the return type. I've been unable to find any information on how a .json file works in this example. Is there code in settings.json that is being executed?
JSON is just a format for your data, which you can then use, for example, in JavaScript.
It is back-end language independent. By this I mean that the front end of the application does not care who produced the .json file.
Maybe it was a Java application, maybe Python, or a PHP application; it does not matter. It can also be a static file with fixed content that just happens to be in JSON format.
After you receive such a thing in the front end, you can do with it whatever you want. From your perspective it will probably be some kind of nested array.
In the example you provided, I get:
{"errors":[{"code":215,"message":"Bad Authentication data."}]}
And that is fine; it's just the data you get. It is in JSON format, that is true. But you don't care that the path has .json in the URL; it could have any extension. What is important is what's inside.
That is the beauty of JSON. You can prepare a static file with mocked data in JSON format and use it while developing the front end of the application. When you wish, you can have a back-end application that returns real data for your app.
You can also see here how to return JSON from a PHP script:
Returning JSON from a PHP Script
Or here for how to do it in Python (the Django framework):
Creating a JSON response using Django and Python
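To make the point concrete, here is a tiny self-contained sketch using only Python's standard library (the path and payload are illustrative, modeled on the Twitter URL from the question). The ".json" suffix is just part of the path string the server matches; the response is produced by code, not read from a file on disk:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from threading import Thread
from urllib.request import urlopen

# Illustrative payload; in a real API this would be computed per user.
SETTINGS = {"language": "en", "protected": False}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # ".json" here is only a routing convention, not a file on disk
        if self.path == "/1.1/account/settings.json":
            body = json.dumps(SETTINGS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/1.1/account/settings.json"
data = json.loads(urlopen(url).read())
print(data)
server.shutdown()
```

The same idea applies whether the back end is Python, PHP, Java, or anything else: the URL is matched by routing code, and the handler decides what bytes to return.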

Single serialization layer to Json with Casbah/Salat

I am trying to create a serialization layer which allows me to:
Store my classes in a MongoDB data source
Convert them to JSON to use them in a REST API.
Some classes are clearly not case classes (because they come from a Java codebase), and I would have to write ad-hoc code for those. Is registering a BSON hook for my non-standard type the correct approach, and does it provide JSON serialization?
Salat maintainer here.
You might prefer to create a Salat custom transformer instead of registering a BSON hook with Casbah.
See simple example and spec.
If you run into any issues, feel free to ping the mailing list with a small sample Github project that demonstrates what isn't working.