I'm looking at the Reddit API and I'm not finding any information on how to pull subcomment JSON.
I want to get a comment and all of its children.
http://www.reddit.com/r/subreddit/comments/commentid/anytext.json will return any particular post on Reddit (if you put in the subreddit and the ID of the article), but using this same method to try to get a single comment won't work.
Is there any way to get this data? I'm doing this in JavaScript, but this is an endpoint issue above all else, so that doesn't really matter.
Add .json to the comment permalink, e.g. https://www.reddit.com/r/test/comments/3azr6z/doge/cshhtsi.json
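For illustration, here's a minimal JavaScript sketch of consuming that endpoint (assuming an environment with a global fetch, such as a browser or Node 18+; the response shape shown is the usual pair of listings Reddit returns, which may vary):
// Sketch: fetch a single comment and its children as JSON.
const url = 'https://www.reddit.com/r/test/comments/3azr6z/doge/cshhtsi.json';

fetch(url)
  .then((res) => res.json())
  .then((listings) => {
    // listings[0] is the post, listings[1] holds the requested comment subtree.
    const comment = listings[1].data.children[0].data;
    console.log(comment.body);    // the comment text
    console.log(comment.replies); // nested listing of child comments, if any
  })
  .catch(console.error);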
This is my first post in the community and I'm not a native English speaker, so please pardon my English and any mistakes I might make in this post.
I'm creating a Node.js application that searches Wikipedia for a planet name and returns the first result's description and image in JSON format.
My requirements are:
Must be in JSON format;
Must be done with only ONE API call.
Of course I searched Google and Stack Overflow for a solution before posting.
Following the Wikipedia API docs (https://www.mediawiki.org/wiki/API:Opensearch and https://www.mediawiki.org/wiki/API:Main_page), I tried this query:
https://en.wikipedia.org/w/api.php?action=opensearch&search=planet%20mars&limit=1&namespace=0&format=json
This only gives me the title and the link of the article.
If I try the same query in XML format:
https://en.wikipedia.org/w/api.php?action=opensearch&search=planet%20mars&limit=1&namespace=0&format=xml
As you can see, by changing the format to XML it works: I can get the image tag! But my application won't accept XML format (for security reasons)!
How can I get the same result, but in JSON format?
Is there any other way to fetch the description and image of a search result from Wikipedia?
I found a solution, so I will answer my own question; maybe it will help someone one day.
The API call I must use is: https://en.wikipedia.org/w/api.php?format=json&action=query&prop=extracts|pageimages&exintro&explaintext&generator=search&gsrsearch=intitle:planet%20mars&gsrlimit=1&redirects=1
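As a rough, hedged illustration of consuming that call from Node.js (assuming Node 18+ with a global fetch; the extract and thumbnail fields come from the extracts and pageimages props used in the query):
// Sketch: get the intro text and thumbnail for "planet mars" in a single API call.
const url = 'https://en.wikipedia.org/w/api.php?format=json&action=query' +
  '&prop=extracts|pageimages&exintro&explaintext' +
  '&generator=search&gsrsearch=intitle:planet%20mars&gsrlimit=1&redirects=1';

async function getPlanet() {
  const res = await fetch(url);
  const data = await res.json();
  // query.pages is keyed by page ID; gsrlimit=1 means there is only one entry.
  const page = Object.values(data.query.pages)[0];
  return {
    title: page.title,
    description: page.extract,                            // from prop=extracts
    image: page.thumbnail ? page.thumbnail.source : null, // from prop=pageimages
  };
}

getPlanet().then(console.log).catch(console.error);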
I’m in the middle of making an Express app. It’s just a learning project.
I'm getting some info from an anime API called jikan.me; it provides info about different anime series, like a picture URL and synopsis.
For example one is at https://api.jikan.me/anime/16 .
Now, the Jikan API might have a JSON endpoint at anime/1, but there's nothing at anime/2.
I want to find a list of all the numbers (https://api.jikan.me/anime/[numbers]) that actually contain endpoints.
I've tried simply going to https://api.jikan.me/anime but it returns error: No ID/Path Given.
I'm expecting there is likely no absolute answer to this problem but that I might learn something about server-side code along the way.
Where would I begin to look to find this info?
This is a bit late, but Jikan is an unofficial REST API for MyAnimeList. The IDs correspond to the IDs on MAL. For example, https://myanimelist.net/anime/1 can be parsed through https://api.jikan.moe/anime/1, but the ID 2 does not exist on MAL. It's a 404, hence that error.
To initially get some IDs, you can try the search endpoint.
Furthermore, I'll be releasing REST 2.2 quite soon (this month), which will give you the ability to parse from pages like these, and thus you'll get another endpoint that provides a handful of IDs to fetch data from.
Source: I'm the developer of Jikan
If it's not in the documentation, it's probably information that isn't available to you. A REST API needs to be specifically configured to offer certain endpoints; that number at the end might just be an ID that's looked up in an internal database, and there's no way for the client to know in advance whether something will be there. All the API can do is return an error message for you to handle, as is the case here.
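For completeness, a hedged sketch of probing IDs against the /anime/:id endpoint from the question, treating a non-OK response (such as the 404 described above) as a missing entry; the search endpoint remains the better way to discover IDs in bulk:
// Sketch (Node 18+ with global fetch): collect the IDs that actually resolve.
async function findExistingIds(maxId) {
  const existing = [];
  for (let id = 1; id <= maxId; id++) {
    const res = await fetch(`https://api.jikan.moe/anime/${id}`);
    if (res.ok) {
      existing.push(id); // the MAL ID exists, so Jikan has data for it
    }
    // Be gentle with the API: pause between requests.
    await new Promise((resolve) => setTimeout(resolve, 1000));
  }
  return existing;
}

findExistingIds(5).then((ids) => console.log('Existing IDs:', ids));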
I'm working with the Behance API to build a plugin. I got the API key and built the URL to get my project list data in JSON format.
The weird thing is that the JSON is not complete at all: many projects are missing, 14 to be specific.
Does someone have any idea?
Thanks in advance.
Found the solution here: https://help.behance.net/hc/en-us/community/posts/202357274-Number-of-Behance-API-request-results-limited-to-25-
It seems that by default the number of items is limited to 25. To get more, you need to paginate by adding query parameters to the URL, e.g.:
http://www.behance.net/v2/users/gokhanaltinigne/projects?api_key=##&per_page=25&page=2
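A hedged JavaScript sketch of walking those pages (Node 18+ with global fetch; the projects field and the stopping condition are assumptions about the response shape, and YOUR_API_KEY is a placeholder):
// Sketch: collect all projects by requesting successive pages until one comes back empty.
async function fetchAllProjects(user, apiKey) {
  const all = [];
  for (let page = 1; ; page++) {
    const url = `https://www.behance.net/v2/users/${user}/projects` +
      `?api_key=${apiKey}&per_page=25&page=${page}`;
    const res = await fetch(url);
    const data = await res.json();
    const projects = data.projects || []; // assumed field name in the response
    if (projects.length === 0) break;     // no more pages
    all.push(...projects);
  }
  return all;
}

fetchAllProjects('gokhanaltinigne', 'YOUR_API_KEY').then((p) => console.log(p.length));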
This question is about the JSON API specification and how to properly form a request.
(I'm using Ruby on Rails and the jsonapi-resources gem, but this is a general question anyway; I know how to implement it, I just want to follow the rules of JSON API at http://jsonapi.org/format/.)
Situation 1:
I want to get all shelves
I want to include all books that are on those shelves
The GET request I'm supposed to use in this case is:
www.library.com/shelves?include=books
Situation 2:
I want to get all books but only books that are marked as unread
The GET request I'm supposed to use is:
www.library.com/books?filter[unread]=true
What would be the correct way of designing a request for all shelves with included unread books?
I can't figure this one out:
www.library.com/shelves?include=books&filter[books.unread]=true ?
www.library.com/shelves?include=unread_books ? <- would have to specify another resource, books that are unread
www.library.com/shelves?filter[books.unread]=true ?
What's the most correct way of doing this?
EDIT
After speaking with my tech lead and a few other programmers, the first option is favoured the most in such cases.
I would bet on the first one:
www.library.com/shelves?include=books&filter[books.unread]=true
JSON API currently does not support filtering includes, but this doesn't mean you have to be strict on the definition (check https://github.com/cerebris/jsonapi-resources/issues/314)
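Purely as an illustration of building that request on the client side (not part of the original answer), URLSearchParams takes care of encoding the bracketed filter key; the host is the example domain from the question:
// Sketch: assemble the "shelves with included unread books" URL.
const params = new URLSearchParams({
  include: 'books',
  'filter[books.unread]': 'true',
});
const shelvesUrl = `https://www.library.com/shelves?${params.toString()}`;
console.log(shelvesUrl);
// => https://www.library.com/shelves?include=books&filter%5Bbooks.unread%5D=true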
I would go with the same approach as brian:
www.library.com/shelves?include=books&filter[books.unread]=true
I just wanted to give some more background to the answer.
If you view the source of a Google+ profile page, it appears rather complex. It seems most of the data is kept in huge JSON-like objects. However, they don't seem to be real JSON, since they aren't recognized when I try to decode them. I am hoping the format is clearer to other people here. How would you go about parsing it? It seems it would be fairly trivial if you knew where to start.
Here is a sample profile, for example: http://plus.google.com/104560124403688998123
Here's a PHP API I'm working on. It can download and parse the data for a profile page and people's public relationships.
https://github.com/jmstriegel/php.googleplusapi
The JSON piece is a bit mangled. To generate valid JSON, you basically have to remove the first 5 characters that prevent XSRF attacks and then add in all the nulls that have been removed. Here's the code specific to handling parsing the weird Google Plus JSON responses:
https://github.com/jmstriegel/php.googleplusapi/blob/master/lib/GooglePlus/GoogleUtil.php
Call GoogleUtil::FetchGoogleJSON( $url ) and you'll get back a giant array that you can then pull data from. Using this, it should be trivial to make a proxy service to translate stuff into valid json(p) for you to use in your own apps.
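As a rough JavaScript equivalent of those two steps (strip the anti-XSRF prefix, then restore the elided nulls), assuming the prefix is the first 5 characters and that omitted values show up as consecutive commas; it does not handle commas inside string values:
// Sketch: turn a mangled Google-style JSON payload into something JSON.parse accepts.
function parseGoogleJson(raw) {
  // 1. Drop the anti-XSRF prefix (the first 5 characters, per the answer above).
  let text = raw.slice(5);
  // 2. Re-insert elided nulls: "[1,,2]" becomes "[1,null,2]", "[1,]" becomes "[1,null]".
  text = text.replace(/([\[,])(?=[,\]])/g, '$1null');
  return JSON.parse(text);
}

// Hypothetical mangled payload, for illustration only.
const raw = ")]}'\n[1,,[,\"x\"],]";
console.log(parseGoogleJson(raw)); // [ 1, null, [ null, 'x' ], null ]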
I don't have access to Google+ yet, so I'll just answer the general question - that is, how to parse JSON.
JSON is just JavaScript syntax, so parsing it can be as simple as evaluating the string with the eval() JavaScript function. Note that the string needs to be wrapped in parentheses so the braces are parsed as an object literal rather than a block:
var obj = eval('({"JSON":"goes here"})');
Another option is to leverage a console tool. Popular modern browsers pretty much all have them. I recommend Firebug for Firefox in particular.
Using Firefox, log into Google+, then open the Firebug console. You can use the console's dir() command to create a browseable representation of the data. Ex:
console.dir(eval('({"JSON":"goes here"})'));
Sorry I can't be more specific about how to get a handle on Google+'s JSON in particular; without access to the service, this is about the best I can do blind. Good luck!
Thanks to Jason for the excellent PHP class which reads a profile page into an array.
I've used this class as a base and then parsed the result, based upon Russell Beattie's Python code from the original appspot RSS feed application.
Code here
A few notes:
I use this to merge G+ and WP feeds, hence writing posts into an intermediate array ($items).
I have a convention of creating a pseudo title in Google Plus posts, by emboldening a line and adding two newlines before writing the rest of the post. The function getTitle strips this out to use as a better-formatted title on my website, and getSummary produces the rest of the post without duplicating the title, as sketched below.
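The linked code isn't reproduced here, but as a hedged JavaScript sketch of that convention (assuming the bold line uses Google+'s *asterisk* markup; getTitle and getSummary are the names mentioned above, the rest is illustrative):
// Sketch: split a post of the form "*Title*\n\nBody..." into a title and a summary.
const TITLE_RE = /^\*(.+?)\*\n\n/;

function getTitle(post) {
  const match = post.match(TITLE_RE);
  return match ? match[1] : null; // null when the post has no pseudo title
}

function getSummary(post) {
  return post.replace(TITLE_RE, ''); // the rest of the post, without repeating the title
}

const post = '*Hello from G+*\n\nThis is the body of the post.';
console.log(getTitle(post));   // "Hello from G+"
console.log(getSummary(post)); // "This is the body of the post."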
It's made up of a number of parts: an object describing your Picasa images, one describing the fields on your profile, and one describing your friends.
Most of the long numbers are the internal IDs of people, posts and photos. For instance, my ID is 105249724614922381234. Other than that, it could be parsed if you needed to.