I was browsing the gaana.com music website, which has also released a developer API at api.gaana.com. The API documentation is here: http://developer.gaana.com/resources/meta-data-api/tracks/
I want to query the database, but I am struggling with the syntax and cannot follow the documentation. Trial and error got me a JSON result, but I don't know how to add conditions.
For example, I want to search for all tracks where the artist name is "kishore kumar" and the rating/popularity of the track is 10. I tried the URL below, but it does not filter on the artist name. Can someone help me with how to use this API?
http://api.gaana.com?type=song&subtype=most_popular&token=b2e6d7fbc136547a940516e9b77e5990&format=JSON&order=alltime&language=hindi
In the Search API (Search Song) you can see:
APIURL/?type=search&subtype=search_song&key=disco deewane
Just replace disco deewane with kishore kumar.
For example, http://api.gaana.com/?type=search&subtype=search_song&key=kishore%20kumar&token=b2e6d7fbc136547a940516e9b77e5990&format=JSON&order=alltime&language=hindi
There are 6486 tracks listed.
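A minimal sketch of calling that search URL from Python and then filtering the result on popularity client-side. The "tracks" list and the "popularity"/"track_title" field names are assumptions; print the raw JSON once and rename them to whatever keys the response actually uses.
import requests
TOKEN = "b2e6d7fbc136547a940516e9b77e5990"  # your Gaana API token
params = {
    "type": "search",
    "subtype": "search_song",
    "key": "kishore kumar",
    "token": TOKEN,
    "format": "JSON",
    "order": "alltime",
    "language": "hindi",
}
data = requests.get("http://api.gaana.com/", params=params).json()
# Assumption: matches come back under a "tracks" list and each track
# carries "popularity" and "track_title" fields; adjust to the real keys.
for track in data.get("tracks", []):
    if str(track.get("popularity")) == "10":
        print(track.get("track_title"))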
It doesn't seem like I can log a question on the IBM forums without having a support contract.
For QRadar, I'm using this: https://www.ibm.com/docs/en/qradar-common?topic=endpoints-get-asset-modelassets
which returns the asset data.
My questions are: is it possible to retrieve the following details from one of the APIs?
Asset "1278" has 327 vulnerabilities. What are they, and how can I get the titles of those 327 vulnerabilities specific to asset 1278?
A product/piece of software that is installed has ID 48846 and variant ID 97265. How can I map that to what it actually is, like "Adobe"?
From what I've read, I don't know whether this is possible or whether that information is in the Postgres database, which is inaccessible from the API.
Does anyone know?
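For reference, a minimal sketch of pulling that same asset list from Python, assuming the usual QRadar REST conventions (an authorized service token sent in the SEC header against https://<console>/api). The console hostname, token, and the printed field names are placeholders/assumptions, and whether the vulnerability titles are reachable this way is exactly the open question above.
import requests
CONSOLE = "qradar.example.com"               # placeholder console hostname
SEC_TOKEN = "YOUR-AUTHORIZED-SERVICE-TOKEN"  # placeholder token
headers = {"SEC": SEC_TOKEN, "Accept": "application/json"}
# Same endpoint as the docs link above: GET /asset_model/assets
resp = requests.get("https://{}/api/asset_model/assets".format(CONSOLE),
                    headers=headers, verify=False)
resp.raise_for_status()
for asset in resp.json():
    # Field names are assumptions; dump one record to see the real shape.
    print(asset.get("id"), asset.get("vulnerability_count"))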
I'd like to have a list of just the current titles for all questions on one of the smaller (fewer than 10,000 questions) Stack Exchange sites. I tried the interactive utility here: https://api.stackexchange.com/docs/questions and it both reports the result as JSON at the bottom and shows the requesting URL at the top. For example:
https://api.stackexchange.com/2.2/questions?order=desc&sort=activity&tagged=apples&site=cooking
returns this JSON in my browser:
{"items":[{"tags":["apples","crumble"],"owner":{ ...
...
...],"has_more":true,"quota_max":300,"quota_remaining":252}
What is quota? It was 10,000 on one search on one site, but suddenly it's only 300 here.
I won't be doing this very often; what I'd like is the quickest way to edit that (or a similar) URL so I can get a list of all of the titles on a small site. I don't understand how to use paging, and I don't need any of the other fields. I don't care if I get them, but I'm thinking that if I exclude them I can get more at once.
If I need to script it, python (2.7) is my preferred (only) language.
quota_max is the number of requests your application is allowed per day. 300 is the default for an unregistered application. This used to be mentioned directly on the page describing throttles, but seems to have been removed. Here is historical information describing the default.
To increase this to 10,000, you need to register an application and then authenticate by passing an access token in your script.
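If you call the API directly rather than through a library, the key and access token are just query parameters on the normal request; a rough sketch with placeholder credentials:
import requests
params = {
    "order": "desc",
    "sort": "activity",
    "site": "cooking",
    "key": "YOUR_APP_KEY",                # from registering the application
    "access_token": "YOUR_ACCESS_TOKEN",  # from the OAuth flow
}
data = requests.get("https://api.stackexchange.com/2.2/questions", params=params).json()
print(data["quota_max"], data["quota_remaining"])  # should now report the higher quota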
To get all titles on a site, you can use a Python library to help:
StackAPI (the rest of this answer uses this library; disclaimer: I wrote it)
Py-StackExchange
SEAPI
StackPy
Assuming you have registered your application and authenticated, we can proceed.
First, install StackAPI (documentation):
pip install stackapi
This code will then grab the 10,000 most recent questions (max_pages * page_size) for the site hardwarerecs. Each page costs you one API hit, so the more items per page, the fewer API calls.
from stackapi import StackAPI
SITE = StackAPI('hardwarerecs')
SITE.page_size = 100   # items per page (100 is the API maximum)
SITE.max_pages = 100   # 100 pages x 100 items = up to 10,000 questions
# Filter to only get question title and link
filter = '!BHMIbze0EQ*ved8LyoO6rNjkuLgHPR'
questions = SITE.fetch('questions', filter=filter)
The questions variable holds a dictionary that looks very similar to the raw API output, except that the library has done all the paging for you. Your data is in questions['items'] and, in this case, contains a list of dictionaries that look like this:
[
...
{u'link': u'http://hardwarerecs.stackexchange.com/questions/29/sound-board-to-replace-a-gl2200-in-a-house-of-worship-foh-setting',
u'title': u'Sound board to replace a GL2200 in a house-of-worship FOH setting?'},
{ u'link': u'http://hardwarerecs.stackexchange.com/questions/31/passive-gps-tracker-logger',
u'title': u'Passive GPS tracker/logger'}
...
]
This result set is limited to only the title and the link because of the filter we applied. You can find the appropriate filter by adjusting what fields you want in the web UI and copying the filter field.
The hardwarerecs value that is passed when creating the SITE object is the first part of the site's domain URL. Alternatively, you can find it by looking at the api_site_parameter for your site on the /sites endpoint.
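If you are not sure of the right value, one quick way is to page through the /sites endpoint and print each site's api_site_parameter; this also illustrates the page/has_more mechanics mentioned in the question:
import requests
page = 1
while True:
    data = requests.get("https://api.stackexchange.com/2.2/sites",
                        params={"page": page, "pagesize": 100}).json()
    for site in data["items"]:
        print(site["api_site_parameter"], "-", site["name"])
    if not data["has_more"]:
        break
    page += 1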
I'm currently looking at reading out posts and related JSON data from a given number of Instagram users using the following URL:
https://www.instagram.com/[user-login]/media/
This only brings back the latest 20 posts. I have done some hunting around and I am unable to see how to form the URL to bring back the next 20 results. I've seen some places suggest using max_timestamp, but I can't see how to make this work.
For various reasons I do not wish to use the standard Instagram API.
You should use the max_id parameter for pagination.
Example: https://www.instagram.com/[user-login]/media/?max_id=[last-min-id], where [last-min-id] is the smallest id on the previous page. That id is not repeated on the new page.
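A rough sketch of that paging loop in Python, assuming the response shape the endpoint used to return (an items list where each item has an id, plus a more_available flag); as noted below, the endpoint has since been disabled, so treat this purely as an illustration of the max_id mechanism.
import requests
BASE = "https://www.instagram.com/[user-login]/media/"  # [user-login] is a placeholder
posts = []
max_id = None
while True:
    params = {"max_id": max_id} if max_id else {}
    data = requests.get(BASE, params=params).json()
    items = data.get("items", [])            # assumed response key
    posts.extend(items)
    if not items or not data.get("more_available"):
        break
    max_id = items[-1]["id"]                 # smallest id on this page
print(len(posts))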
The endpoint 'https://www.instagram.com/[user-login]/media/' has been turned off within the last few days; I'm unsure exactly when.
If you are dependent on it, you might want to check it now in your apps.
e.g. https://www.instagram.com/fosterandpartners/media/
I'm trying to get the hours of TF2 played from Steam profiles for an application I'm developing. I'm not very experienced at manipulating JSON, so I'm not sure if the API is bad or if I'm bad.
According to this: https://developer.valvesoftware.com/wiki/Steam_Web_API#GetOwnedGames_.28v0001.29, I can pass include_played_free_games to show TF2. However, when I make a web request using this: http://api.steampowered.com/IPlayerService/GetOwnedGames/v1/?key=XXXXXXXXXXXXXXXXXXXXXXX&include_played_free_games=true&format=json&steamid=XXXXXXXXXXXXXXXXXXXXXXX
The request is valid; however, TF2 (appid 440) doesn't show up. So am I going crazy, or should this be working?
The user has to have played the game at some point for it to be returned when specifying 'include_played_free_games'.
From the API Documentation:
include_played_free_games: By default, free games like Team Fortress 2
are excluded (as technically everyone owns them). If
include_played_free_games is set, they will be returned if the player
has played them at some point. This is the same behavior as the games
list on the Steam Community.
The URL requires the numeric value '1' for these parameters and will not work if you use 'true'. The following URL worked for me when using my own Steam ID and web key:
http://api.steampowered.com/IPlayerService/GetOwnedGames/v0001/?key=XXXXXXXXXXXXXXXX&include_played_free_games=1&include_appinfo=1&format=json&steamid=XXXXXXXXXXX
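A small sketch of the same call from Python, with placeholder key and Steam ID, that checks whether appid 440 (TF2) came back:
import requests
API_KEY = "XXXXXXXXXXXXXXXX"   # placeholder web API key
STEAM_ID = "XXXXXXXXXXX"       # placeholder 64-bit Steam ID
params = {
    "key": API_KEY,
    "steamid": STEAM_ID,
    "include_played_free_games": 1,   # must be numeric, not 'true'
    "include_appinfo": 1,
    "format": "json",
}
resp = requests.get("http://api.steampowered.com/IPlayerService/GetOwnedGames/v0001/",
                    params=params)
games = resp.json().get("response", {}).get("games", [])
tf2 = [g for g in games if g.get("appid") == 440]
print(tf2[0].get("playtime_forever", 0) if tf2 else "TF2 not returned")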
Is there a way to retrieve the number of times a certain URL was "dented" (shared on identi.ca, status.net, and the like)?
For Twitter, there are several services that give this information:
Twitter itself: http://urls.api.twitter.com/1/urls/count.json?url=http://example.com&callback=twttr.receiveCount
Tweetmeme: http://api.tweetmeme.com/url_info.jsonc?url=http://example.com
Topsy: http://otter.topsy.com/stats.js?url=http://example.com&callback=?
I don't need the fancy extra information that Tweetmeme or Topsy deliver, only the count.
I am aware that this is problematic, given the "distributed" nature of status.net: it will only give a count from one single silo, e.g. identi.ca. However, for me, for now, that would be enough.
Is there such an endpoint that gives me such JSON?
I don't think so. There's a file table in StatusNet databases that holds references to dented URLs (so it wouldn't be hard to count them if you had access to the database or could write a plugin -- i.e., you wouldn't have to parse all notices, just look up the file table), but it's not exposed through the API.
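If you do have database access, the counting itself is simple. A rough sketch, assuming a MySQL-backed StatusNet install and the stock file / file_to_post tables; the table and column names here are assumptions, so check them against your actual schema:
import pymysql
conn = pymysql.connect(host="localhost", user="statusnet",
                       password="secret", db="statusnet")
try:
    with conn.cursor() as cur:
        # Assumed schema: file(id, url) holds each referenced URL once, and
        # file_to_post(file_id, post_id) links it to every notice that dented it.
        cur.execute(
            "SELECT COUNT(*) FROM file_to_post ftp "
            "JOIN file f ON f.id = ftp.file_id "
            "WHERE f.url = %s",
            ("http://example.com",))
        (count,) = cur.fetchone()
        print(count)
finally:
    conn.close()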
The list of API possible calls for StatusNet is here: http://status.net/wiki/TwitterCompatibleAPI
In addition, there's a proposed Google Summer of Code project on this subject: Social Analytics plugin