Unable to import swagger JSON or YAML into Postman

Problem
Unable to convert a Swagger 2.0 spec into a format that is accepted by Postman's import functionality.
The spec is generated via /swagger.json|yaml.
The Swagger endpoint is exposed via Dropwizard (Jetty) using Swagger:
swagger-core: 1.5.17
swagger-jaxrs: 1.5.17
swagger-jersey2-jaxrs: 1.5.17
swagger-models: 1.5.17
Attempts
Tried manually importing the JSON or YAML versions via the import screen:
import file
import from link
paste raw text
Tried converting to different formats using: api-spec-converter and swagger2-postman-generator
Result
Error on import: "Must contain an info object"
Question
Has anyone managed to get around this issue to allow the import?

In Swagger 2.0, the info field is mandatory. Just add the following at your YAML root:
info:
  title: 'EmptyTitle'
  description: 'EmptyDescription'
  version: 0.1.0
Or like this if you have it in JSON format (again at the root):
"info": {
"title": "EmptyTitle",
"description": "EmptyDescription",
"version": "0.1.0"
}
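If you'd rather not edit the file by hand, a small script can patch the export before importing it into Postman. This is just a sketch; the file names are made up, and only the Python standard library is used:

import json

with open("swagger.json") as f:
    spec = json.load(f)

# Postman's importer rejects Swagger 2.0 documents without an info object,
# so add a placeholder one only if it is missing.
spec.setdefault("info", {
    "title": "EmptyTitle",
    "description": "EmptyDescription",
    "version": "0.1.0",
})

with open("swagger-patched.json", "w") as f:
    json.dump(spec, f, indent=2)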
Hope it helps!

Have you tried converting to Postman v2?
The swagger2-postman-generator you tried converts Swagger v2 to Postman v1. This one converts Swagger v2 to Postman v2: https://www.npmjs.com/package/swagger2-postman2-converter, as used in this tutorial.

My setup: Spring Boot-generated swagger-ui, which gives me the raw OpenAPI documentation.
In my case
"info": {
"title": "EmptyTitle"
}
was already present in the JSON that the Spring Boot OpenAPI integration generated for me, but it was missing the other two fields that @BBerastegui mentions in his answer:
"description": "EmptyDescription",
"version": "0.1.0"
I added them, so that the result looks like this, which works:
"info": {
"title": "EmptyTitle",
"description": "EmptyDescription",
"version": "0.1.0"
}

Related

Persistent error when trying to change a Google Cloud trigger with the REST API

I'm trying to change a trigger using the REST API, specifically https://cloud.google.com/build/docs/api/reference/rest/v1/projects.triggers/patch. Note that I'm able to use curl and list all the triggers. I also tried downloading the trigger as JSON using https://cloud.google.com/build/docs/api/reference/rest/v1/projects.triggers/get (which works perfectly), but when I try to upload the same file the error is always:
{
  "error": {
    "code": 400,
    "message": "exactly 1 build config required, got: 0",
    "status": "INVALID_ARGUMENT"
  }
}
If I try to upload invalid JSON, it correctly reports a JSON parsing error, so it is definitely parsing the payload as JSON.
So I tried the same experiment using the "Try it!" button on the Google page, which opens the Google APIs Explorer. Same results. The interface warned me that some fields are output only, so I also tried removing those fields, but I got the same error.
The file I'm trying to upload is below (I changed some strings to remove the company name):
{
  "description": "Push to any branch",
  "github": {
    "push": {
      "branch": ".*"
    },
    "owner": "company",
    "name": "repo-utils"
  },
  "tags": [
    "github-default-push-trigger"
  ],
  "name": "default-push-trigger-127"
}
I think I found the issue. The Google API seems to require either build or filename to be passed to specify how to build. The web interface, on the other hand, offers an Autodetect option for the build, which looks for either cloudbuild.yaml or a Dockerfile in the root directory. If you pick Autodetect in the web interface, the resulting JSON configuration contains neither build nor filename, so when you try to import that configuration back it fails.
I tried passing filename as an empty string; the web interface then shows cloudbuild.yaml (which is present), but execution of the trigger fails.
So it seems there is no way to create an Autodetect-mode trigger using the API.
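For reference, a sketch of that workaround in Python: take the exported trigger JSON, drop the output-only fields, and pin an explicit build config file instead of relying on Autodetect. The file names, and the exact set of output-only fields, are my assumptions based on the API Explorer warning above:

import json

with open("trigger.json") as f:
    trigger = json.load(f)

# Output-only fields the API will likely reject on write (assumption).
for field in ("id", "createTime"):
    trigger.pop(field, None)

# Supply an explicit build config so the API gets its "exactly 1 build config".
trigger["filename"] = "cloudbuild.yaml"

with open("trigger-patched.json", "w") as f:
    json.dump(trigger, f, indent=2)

The patched file can then be re-uploaded with the same curl call used for the original attempt.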

Angular: using JSON files for blog posts vs. a database (BBDD)

I'm building an app with Angular, since it's easy for me to make it work as a progressive web app, etc. My app has a blog section.
The problem is that I need to load JSON files dynamically, because I'm using them to create new post pages as blog entries.
I need to know if I can create something like a folder named json, drop in a JSON file with the content of each post, and have Angular scan that folder for new JSON files at run time. Or must I use a backend with some MongoDB or MySQL database?
You need a backend to implement this.
Have the service respond with a JSON array like the one below:
{
  "posts": [
    {
      "author": "Chinua Achebe",
      "content": "abc"
    },
    {
      "author": "Hans Christian Andersen",
      "content": "def"
    }
  ]
}
Just iterate over the posts array to display your posts.
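A minimal sketch of such a backend endpoint, assuming Flask (the route and folder layout are made up): it reads every JSON file in a posts/ folder at request time, so new posts appear without redeploying the Angular app.

import json
from pathlib import Path

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/posts")
def posts():
    # Re-scan the folder on every request, so newly dropped
    # JSON files show up immediately.
    docs = [
        json.loads(p.read_text())
        for p in sorted(Path("posts").glob("*.json"))
    ]
    return jsonify({"posts": docs})

if __name__ == "__main__":
    app.run(port=5000)

Your Angular service then just GETs /api/posts and iterates over the posts array as described above.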

Grafana: import a simple JSON file as a data source

I have a bunch of data in a JSON file.
I would like to use that as (static) data source in Grafana, but I don't know how to do that.
I have installed Grafana (in a Docker container) and added the Simple JSON plugin. But to my understanding that plugin takes a URL as input... not a JSON file :( How can I do that?
I've had a look at the Fake JSON data source example. I see it implements a web server that answers some typical requests like /search or /query, but I don't understand how to adapt that. I am pretty new to Grafana, as you can see...
This is what my JSON looks like:
{"eventid": "cowrie.direct-tcpip.request", "timestamp": "2019-01-15T10:03:24.604331Z", "session": "f3f60d4e", "src_port": 0, "message": "direct-tcp connection request to xxxx:443 from ::1:0", "system": "SSHService ssh-connection on HoneyPotSSHTransport,874,xxxxxx", "isError": 0, "src_ip": "xxxxxxxx", "dst_port": 443, "dst_ip": "xxxx", "sensor": "90a9ea4c9756"}
Thanks for your help.

What is the practical difference between using JSON and YAML in Swagger?

It appears that JSON includes the path information and the HTTP request verb, whereas YAML seems to define just a tree structure.
What is the difference between them? Or am I mixing up different concepts/hierarchies here? I'm a newbie to Swagger and just started learning.
If YAML is a superset of JSON, what specifically does the superset add here: URL paths and HTTP verbs? Is adding an example also something that YAML adds to JSON for Swagger?
According to the OpenAPI Specification,
An OpenAPI document that conforms to the OpenAPI Specification is itself a JSON object, which may be represented either in JSON or YAML format.
So feature-wise, there is no difference between using JSON or YAML. What YAML, as a superset of JSON, adds here is merely a different syntax. Using an example from the specification, these two documents are identical:
{
  "servers": [
    {
      "url": "https://development.gigantic-server.com/v1",
      "description": "Development server"
    },
    {
      "url": "https://staging.gigantic-server.com/v1",
      "description": "Staging server"
    },
    {
      "url": "https://api.gigantic-server.com/v1",
      "description": "Production server"
    }
  ]
}
and
servers:
  - url: https://development.gigantic-server.com/v1
    description: Development server
  - url: https://staging.gigantic-server.com/v1
    description: Staging server
  - url: https://api.gigantic-server.com/v1
    description: Production server
The first document is valid JSON and valid YAML (since YAML is a superset of JSON). The second document is valid YAML and structurally identical to the first document.
So to answer the question about what the YAML superset adds here: It adds different syntax for specifying the same structures. Nothing more.
YAML does include some features that are not mappable to JSON, but they are irrelevant here because Swagger/OpenAPI doesn't use them.
JSON does not support comments, while YAML does!
Looked at from another angle, YAML is also neater and easier to read and write than JSON.
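To make the "same structure, different syntax" point concrete, here is a quick check (a sketch assuming PyYAML is installed) that both syntaxes parse to the same structure:

import json

import yaml  # pip install pyyaml

json_doc = """
{
  "servers": [
    {"url": "https://development.gigantic-server.com/v1",
     "description": "Development server"}
  ]
}
"""

yaml_doc = """
servers:
  - url: https://development.gigantic-server.com/v1
    description: Development server
"""

# Both documents parse into the identical Python dict.
assert json.loads(json_doc) == yaml.safe_load(yaml_doc)
print("Identical structure")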

Bulk loading JSON objects as documents into Elasticsearch

Is there a way to bulk load the data below into Elasticsearch without modifying the original content? I POST each object as a single document. At the moment I'm using Python to parse out the individual objects and POST them one at a time.
{
  {"name": "A"},
  {"name": "B"},
  {"name": "C"},
  {"name": "D"},
}
Doing this type of processing in production, from REST servers into Elasticsearch, takes a lot of time.
Is there a single POST/curl command that can upload the file above in one go, with Elasticsearch parsing it and turning each object into its own document?
We're using Elasticsearch 1.3.2.
Yes, you can use the bulk API via curl with the _bulk endpoint, but there is no custom parsing: whatever process creates the file would have to format it to the ES bulk specification, if that is an option. See here:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-bulk.html
There is also bulk support in Python via a helper. See here:
http://elasticsearch-py.readthedocs.org/en/master/helpers.html
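For the Python side, a minimal sketch using that bulk helper (the index and type names here are made up, and you'd want an elasticsearch-py client version that matches Elasticsearch 1.3.2):

from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

es = Elasticsearch()  # defaults to localhost:9200

# Suppose the objects have already been extracted into a list of dicts:
docs = [{"name": "A"}, {"name": "B"}, {"name": "C"}, {"name": "D"}]

actions = [
    {
        "_index": "myindex",  # hypothetical index name
        "_type": "mydoc",     # doc types are still required on ES 1.3.2
        "_source": doc,
    }
    for doc in docs
]

# One bulk round-trip instead of one POST per document.
success, errors = bulk(es, actions)
print("indexed:", success, "errors:", errors)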