WP REST API v2 JSON endpoints appear difficult to read

I find the WP REST API very useful for building custom functionality on WordPress websites. However, I find it hard to read my JSON endpoints' results.
The normal output of a JSON endpoint is wrapped in html and pre tags, and the result appears as one long line of compressed text.
I need to integrate my website with a mobile app that another developer will build, and I would like the API endpoints (e.g. link) to render as a regular, pretty-printed JSON object.
I'm trying to find a workaround, such as a hook or a filter, to make the JSON results appear the way I want. Equivalent AJAX-related code, like the sketch below, would also be fine.
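On the AJAX side, one option is to fetch the endpoint yourself and re-serialize the response with indentation. A minimal sketch, assuming a stock WP REST API v2 route (the example.com URL and the posts route are placeholders):
// Fetch a WP REST API v2 endpoint and pretty-print the JSON response.
fetch('https://example.com/wp-json/wp/v2/posts')
  .then((response) => response.json())
  .then((data) => {
    // JSON.stringify with an indent of 2 yields the readable,
    // multi-line output described in the question.
    document.body.innerHTML =
      '<pre>' + JSON.stringify(data, null, 2) + '</pre>';
  })
  .catch((err) => console.error('Request failed:', err));
This keeps the endpoint itself untouched; only the page doing the fetch changes.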

I use the JSON Formatter Chrome extension to view the results; it prints them with readability in mind.
https://github.com/callumlocke/json-formatter

Related

Initialize an Angular page with large json from client?

I am used to creating web applications with Wicket. There it is possible to generate an HTML page using POST, receiving a large JSON payload from the client (browser) to generate some charts. This can be done, for example, with cURL.
In Angular, I could not find a similar approach.
What is the recommended way to render a chart based on the JSON that a browser provides? A URL parameter is not really the way to go, as URL length is limited.
I can think of a workaround where I first POST the data to some web service, receive an id, and then pass that id to a URL in Angular, but that seems like a lot of work for something simple :)
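For what it's worth, that workaround can stay small. A rough framework-agnostic sketch of the post-then-redirect idea (the /api/charts endpoint and the /chart/:id route are hypothetical; in Angular you would typically use HttpClient and the Router rather than fetch and window.location):
// 1. POST the large JSON payload to a (hypothetical) backing service
//    and receive a short id in return.
async function storeChartData(payload) {
  const response = await fetch('/api/charts', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  });
  const { id } = await response.json();
  return id;
}

// 2. Navigate to the chart page, passing only the short id in the
//    URL, which sidesteps the URL length limit mentioned above.
const largeJson = { /* the chart data the browser provides */ };
storeChartData(largeJson).then((id) => {
  window.location.href = '/chart/' + id;
});

// 3. The chart page loads the stored JSON back by id and renders it.
async function loadChartData(id) {
  const response = await fetch('/api/charts/' + id);
  return response.json();
}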

Tool to generate web client using JSON Rest Interface

Do any of the front-end frameworks (like Vue.js) have the ability to generate a prototype form/view directly from an endpoint?
I want to quickly knock up a few forms to capture and submit data to a set of JSON REST POST APIs, and display pages to render data retrieved from other JSON REST GET APIs.
I don't want to go through the pain of mapping each and every field out in a set of .js files; it would be great if the boilerplate could just be generated from the interface, in a similar way to what Apache Isis does from a domain model.
The above would allow me/clients to generate a UI directly off the interface and interact with it using a web browser, much as one would with Postman, without having to install and understand Postman.
You can try one of these:
https://vue-crud.github.io/
https://github.com/dionmaicon/vue-crudgen
https://github.com/ais-one/vue-crud-x - described in https://codeburst.io/vue-crud-x-a-highly-customisable-crud-component-using-vuejs-and-vuetify-2b1539ce2054
https://api-platform.com/docs/distribution/

Is there a way to retrieve Liquipedia page content in JSON?

I have this page - https://liquipedia.net/dota2/Players_(all) - which I want to get in JSON format. Are there any open Liquipedia (which is actually a MediaWiki-based site) APIs I can use? Or maybe there are approaches to convert the HTML to JSON?
There appears to be a Python API that would do what you're looking for at https://github.com/c00kie17/liquipediapy#dota_get_players
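Since Liquipedia runs on MediaWiki, another option is to query the standard MediaWiki action API directly and ask for JSON. A sketch, assuming Liquipedia exposes the stock api.php endpoint for its dota2 wiki (check their API usage rules and rate limits before relying on this):
// Ask the (assumed) MediaWiki action API to parse the page and
// return the result in a JSON envelope.
const url = 'https://liquipedia.net/dota2/api.php'
  + '?action=parse'
  + '&page=Players_(all)'
  + '&format=json'
  + '&origin=*'; // standard MediaWiki CORS parameter

fetch(url)
  .then((response) => response.json())
  .then((data) => {
    // data.parse.text['*'] holds the rendered HTML of the page,
    // which you would still need to walk to pull out the player rows.
    console.log(data.parse.title, data.parse.text['*'].length);
  })
  .catch((err) => console.error('Request failed:', err));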

Returning MarkLogic EVAL REST service output as JSON

I am working on a demo using MarkLogic to store emails exported from Outlook as XML, so that they stay searchable and accessible when I move away from Outlook.
I am using an AngularJS front-end calling either the native MarkLogic REST services or my own REST services written in Java using Jersey.
The MarkLogic SEARCH REST service works very well for getting back a list of references to documents based on various search criteria, but I also want to display information stored inside the found documents.
I would like to avoid multiple REST calls and to get back only the needed information, so I am trying to use the EVAL REST service to run an XQuery.
It works well for getting XML back (inside a multipart/mixed message), but I don't seem to be able to get JSON instead, which would be much more convenient and is very easy with most other MarkLogic REST services.
I could use json:transform-to-json() in my XQuery, or transform the XML to JSON in my Java code, but neither looks very elegant to me.
Is there a more efficient method to get where I am trying to go?
First, json:transform-to-json seems plenty elegant to me. But of course it's not always the right answer.
I see three options you haven't mentioned.
server-side transforms - REST search supports server-side transforms, which transform each document when you perform a bulk read by query. Those server-side transforms could generate any JSON you need.
search extract-document-data - this is the simplest way to extract portions of documents. It seems best if your documents are JSON, to match your JSON response. Otherwise you get XML in your JSON response . . . unless you're ok with that.
custom search snippets - another very powerful way to customize what search returns
None of these options requires the privileges that eval requires, which is a very good thing. Since eval allows execution of arbitrary code on your server, it requires special privileges and should be used with great care. Two other options to consider before you use eval are (1) custom XQuery installed in an HTTP server, and (2) REST extensions.
The answers from Sam are what I would suggest. Specifically, I would set a search option for extract-document-data (this is a Search API option: if you are POSTing the request, you can include the option in the XML you post; if you are using GET, you need to register the option ahead of time and reference it by name). Relevant URLs to assist, followed by a sketch of the POST variant:
https://docs.marklogic.com/guide/rest-dev/search#id_48838
https://docs.marklogic.com/guide/search-dev/appendixa#id_44222
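A rough sketch of that POST variant: send a combined query carrying an extract-document-data option and ask for JSON back. The qtext and the /x extract-path below are just examples matching the sample documents later in this answer; credentials and error handling are omitted, and a MarkLogic version that supports extract-document-data (ML8 here) is assumed:
// POST a combined query (query text + options) to the MarkLogic
// REST search endpoint, requesting JSON results.
const combinedQuery =
  '<search xmlns="http://marklogic.com/appservices/search">' +
    '<qtext>watermellon</qtext>' +
    '<options>' +
      '<extract-document-data selected="include">' +
        '<extract-path>/x</extract-path>' +
      '</extract-document-data>' +
    '</options>' +
  '</search>';

fetch('http://localhost:8000/v1/search?format=json', {
  method: 'POST',
  headers: { 'Content-Type': 'application/xml' },
  body: combinedQuery,
})
  .then((response) => response.json())
  .then((results) => {
    // Each result now includes the extracted portions of the matched
    // document, so no follow-up call per document is needed.
    console.log(results);
  });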
As for JSON: ML8 will transform the content for you. Use the Accept header, or just add format=json to your request...
Example - XML, which is what my content is stored as:
http://localhost:8000/v1/search?q=watermellon
...
<search:result index="1" uri="/sample/collections/1.xml" path="fn:doc(&quot;/sample/collections/1.xml&quot;)" score="34816" confidence="0.5982239" fitness="0.6966695" href="/v1/documents?uri=%2Fsample%2Fcollections%2F1.xml" mimetype="application/xml" format="xml">
  <search:snippet>
    <search:match path="fn:doc(&quot;/sample/collections/1.xml&quot;)/x">
      <search:highlight>watermellon</search:highlight>
    </search:match>
  </search:snippet>
</search:result>
...
Example - the same search against my XML content, returned as JSON:
http://localhost:8000/v1/search?q=watermellon&format=json
...
"index":1,
"uri":"/sample/collections/1.xml",
"path":"fn:doc(\"/sample/collections/1.xml\")",
"score":34816,
"confidence":0.5982239,
"fitness":0.6966695,
"href":"/v1/documents?uri=%2Fsample%2Fcollections%2F1.xml",
"mimetype":"application/xml",
"format":"xml",
"matches":[
{
"path":"fn:doc(\"/sample/collections/1.xml\")/x",
"match-text":[
{
"highlight":"watermellon"
}
]
}
]
}
...
For real heavy lifting, you can use server-side transforms as in Sam's description. One note about this: server-side transformations are not part of the Search API, but of the REST API. Just mentioning it so you have some idea of which tool you are using in each case.

Is it possible to parse a Google+ (Google Plus) profile page?

If you view the source of a Google+ profile page, it appears rather complex. It seems most of the data is kept in huge JSON-like objects. However, they don't seem to be real JSON, since they aren't recognized when I try to decode them. I am hoping the format is clearer to other people here. How would you go about parsing it? It seems it would be fairly trivial if you knew where to start.
Here is a sample profile, for example: http://plus.google.com/104560124403688998123
Here's a PHP API I'm working on. It can download and parse the data for a profile page and people's public relationships.
https://github.com/jmstriegel/php.googleplusapi
The JSON piece is a bit mangled. To generate valid JSON, you basically have to remove the first 5 characters (an anti-XSRF prefix) and then add back all the nulls that have been stripped out. Here's the code specific to parsing the weird Google Plus JSON responses:
https://github.com/jmstriegel/php.googleplusapi/blob/master/lib/GooglePlus/GoogleUtil.php
Call GoogleUtil::FetchGoogleJSON( $url ) and you'll get back a giant array that you can pull data from. Using this, it should be trivial to make a proxy service that translates the data into valid JSON(P) for use in your own apps.
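For reference, the same clean-up is easy to sketch in JavaScript. This just illustrates the two steps described above (strip the 5-character anti-XSRF prefix, restore elided nulls); the regexes are naive and could mangle commas inside string values, so treat it as a sketch rather than a robust parser:
// Turn a raw Google+ response into parseable JSON.
function cleanGooglePlusJson(raw) {
  // 1. Drop the first 5 characters (the anti-XSRF prefix).
  let text = raw.slice(5);
  // 2. Restore the nulls that were elided between commas and at
  //    array edges: ",," -> ",null,", "[," -> "[null,", ",]" -> ",null]".
  text = text.replace(/,(?=\s*,)/g, ',null');
  text = text.replace(/\[\s*,/g, '[null,');
  text = text.replace(/,\s*\]/g, ',null]');
  return JSON.parse(text);
}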
I don't have access to Google+ yet, so I'll just answer the general question - that is, how to parse JSON.
JSON is just JavaScript, so parsing it is as simple as evaluating the script. To do this, use the eval() JavaScript function, wrapping the text in parentheses so the object literal is parsed as an expression rather than as a block:
var obj = eval('(' + '{"JSON":"goes here"}' + ')');
Another option is to leverage a console tool. Popular modern browsers pretty much all have them. I recommend Firebug for Firefox in particular.
Using Firefox, log into Google+, then open the Firebug console. You can use the console's dir() command to create a browseable representation of the data. Ex:
console.dir(eval('(' + '{"JSON":"goes here"}' + ')'));
Sorry I can't be more specific about how to get a handle on Google+'s JSON in particular; without access to the service, this is about the best I can do blind. Good luck!
Thanks to Jason for the excellent PHP class, which reads a profile page into an array.
I've used this class as a base and then parsed the result, based upon Russell Beattie's Python code from the original appspot RSS feed application.
Code here
A few notes:
I use this to merge G+ and WP feeds, hence writing posts into an intermediate array ($items).
I have a convention of creating a pseudo-title in Google Plus posts by emboldening a line and adding two newlines before writing the rest of the post. The function getTitle strips this out as a better-formatted title for my website, and getSummary produces the rest of the post without duplicating the title; a sketch of the idea follows.
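A rough JavaScript sketch of that convention (the original code is PHP and linked above; getTitle/getSummary here just illustrate the split on the first blank line, and the *...* bold markers are an assumption about Google+'s plain-text formatting):
// Split a post following the "bold first line + two newlines"
// convention into a title and a summary.
function getTitle(post) {
  // Everything before the first blank line is the pseudo-title;
  // strip the surrounding bold markers.
  const head = post.split('\n\n')[0];
  return head.replace(/^\*+|\*+$/g, '').trim();
}

function getSummary(post) {
  // Everything after the first blank line is the body, so the
  // title is not repeated in the summary.
  const idx = post.indexOf('\n\n');
  return idx === -1 ? '' : post.slice(idx + 2).trim();
}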
The page data is made up of a number of parts: an object describing your Picasa images, one describing the fields on your profile, and one describing your friends.
Most of the long numbers are the internal IDs of people, posts and photos. For instance, my ID is 105249724614922381234. Other than that, it could be parsed if you needed to.