I'm trying to get the hours of TF2 played from Steam profiles for an application I'm developing. I'm not very experienced at manipulating JSON, so I'm not sure if the API is bad or if I'm bad.
According to this: https://developer.valvesoftware.com/wiki/Steam_Web_API#GetOwnedGames_.28v0001.29 I can pass include_played_free_games to have TF2 included. However, when I make a web request using this: http://api.steampowered.com/IPlayerService/GetOwnedGames/v1/?key=XXXXXXXXXXXXXXXXXXXXXXX&include_played_free_games=true&format=json&steamid=XXXXXXXXXXXXXXXXXXXXXXX
The request is valid, but TF2 (appid 440) doesn't show up. So am I going crazy, or should this be working?
The user has to have played the game at some point for it to be returned when specifying 'include_played_free_games'.
From the API Documentation:
include_played_free_games: By default, free games like Team Fortress 2
are excluded (as technically everyone owns them). If
include_played_free_games is set, they will be returned if the player
has played them at some point. This is the same behavior as the games
list on the Steam Community.
The URL requires the numeric value '1' for these parameters and will not work if you use 'true'. The following URL worked for me when using my own Steam ID and Web API key:
http://api.steampowered.com/IPlayerService/GetOwnedGames/v0001/?key=XXXXXXXXXXXXXXXX&include_played_free_games=1&include_appinfo=1&format=json&steamid=XXXXXXXXXXX
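For completeness, here is a minimal Python sketch of that call using the requests library. The key and SteamID are placeholders, and it assumes the documented response shape in which playtime_forever is reported in minutes:
import requests

# Placeholders: substitute your own Web API key and 64-bit SteamID
API_KEY = 'XXXXXXXXXXXXXXXX'
STEAM_ID = 'XXXXXXXXXXX'

resp = requests.get(
    'http://api.steampowered.com/IPlayerService/GetOwnedGames/v0001/',
    params={
        'key': API_KEY,
        'steamid': STEAM_ID,
        'include_played_free_games': 1,   # must be 1, not 'true'
        'include_appinfo': 1,
        'format': 'json',
    })
games = resp.json()['response'].get('games', [])

# playtime_forever is reported in minutes
tf2 = next((g for g in games if g['appid'] == 440), None)
if tf2 is not None:
    print('%.1f hours of TF2' % (tf2['playtime_forever'] / 60.0))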
I'd like to have a list of just the current titles of all questions on one of the smaller (fewer than 10,000 questions) Stack Exchange sites. I tried the interactive utility here: https://api.stackexchange.com/docs/questions and it both reports the result as JSON at the bottom and shows the requesting URL at the top. For example:
https://api.stackexchange.com/2.2/questions?order=desc&sort=activity&tagged=apples&site=cooking
returns this JSON in my browser:
{"items":[{"tags":["apples","crumble"],"owner":{ ...
...
...],"has_more":true,"quota_max":300,"quota_remaining":252}
What is quota? It was 10,000 on one search on one site, but suddenly it's only 300 here.
I won't be doing this very often; what I'd like is the quickest way to edit that URL (or a similar one) so I can get a list of all the titles on a small site. I don't understand how to use paging, and I don't need any of the other fields. I don't mind if I get them, but I'm thinking that if I exclude them I can get more results at once.
If I need to script it, python (2.7) is my preferred (only) language.
quota_max is the number of requests your application is allowed per day. 300 is the default for an unregistered application. This used to be mentioned directly on the page describing throttles, but seems to have been removed. Here is historical information describing the default.
To increase this to 10,000, you need to register an application and then authenticate by passing an access token in your script.
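If you ever call the API directly instead of through a library, the key and access token go in as ordinary query parameters. A rough sketch with placeholder credentials:
import requests

params = {
    'site': 'cooking',
    'order': 'desc',
    'sort': 'activity',
    'key': 'YOUR_APP_KEY',          # placeholder from app registration
    'access_token': 'YOUR_TOKEN',   # placeholder from authentication
}
resp = requests.get('https://api.stackexchange.com/2.2/questions', params=params)
data = resp.json()
print('quota: %s of %s remaining' % (data['quota_remaining'], data['quota_max']))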
To get all titles on a site, you can use one of the existing Python libraries:
StackAPI (the rest of this answer uses this library; disclaimer: I wrote it)
Py-StackExchange
SEAPI
StackPy
Assuming you have registered your application and authenticated, we can proceed.
First, install StackAPI (documentation):
pip install stackapi
This code will then grab the 10,000 most recent questions (max_pages * page_size) for the site hardwarerecs. Each page costs you one API hit, so the more items per page, the fewer API calls you need.
from stackapi import StackAPI
SITE = StackAPI('hardwarerecs')
SITE.page_size = 100
SITE.max_pages = 100
# Filter to only get question title and link
filter = '!BHMIbze0EQ*ved8LyoO6rNjkuLgHPR'
questions = SITE.fetch('questions', filter=filter)
The questions variable holds a dictionary that looks very similar to the API output, except that the library has done all the paging for you. Your data is in questions['data'] and, in this case, contains a list of dictionaries that look like this:
[
...
{u'link': u'http://hardwarerecs.stackexchange.com/questions/29/sound-board-to-replace-a-gl2200-in-a-house-of-worship-foh-setting',
 u'title': u'Sound board to replace a GL2200 in a house-of-worship FOH setting?'},
{u'link': u'http://hardwarerecs.stackexchange.com/questions/31/passive-gps-tracker-logger',
 u'title': u'Passive GPS tracker/logger'}
...
]
This result set is limited to only the title and the link because of the filter we applied. You can find the appropriate filter by adjusting what fields you want in the web UI and copying the filter field.
The hardwarerecs value passed when creating the SITE object is the first part of the site's domain URL. Alternatively, you can find it by looking at the api_site_parameter for your site at the /sites endpoint.
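To end up with just the titles the question asked for, one extra step on top of the fetch above, using the questions['data'] structure described earlier:
titles = [q['title'] for q in questions['data']]
print('\n'.join(titles))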
I'm currently looking at reading out posts and related JSON data from a given number of Instagram users using the following URL:
https://www.instagram.com/[user-login]/media/
This only brings back the latest 20 posts. I have done some hunting around and I am unable to see how to form the URL to bring back the next 20 results. Some places suggest using max_timestamp, but I can't see how to make this work.
For various reasons I do not wish to use the standard Instagram API.
You should use the max_id parameter for pagination.
Example: https://www.instagram.com/[user-login]/media/?max_id=[last-min-id], where [last-min-id] is the smallest id from the previous page. That id is not repeated in the new page.
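A rough Python sketch of that loop, assuming the endpoint still responds and that the response uses 'items', 'more_available' and per-post 'id' keys (those key names are assumptions about this undocumented endpoint, so inspect a real response first):
import requests

URL = 'https://www.instagram.com/%s/media/' % 'fosterandpartners'   # any public account

posts = []
max_id = None
while True:
    params = {'max_id': max_id} if max_id else {}
    data = requests.get(URL, params=params).json()
    items = data.get('items', [])            # key name assumed; check the real response
    if not items:
        break
    posts.extend(items)
    if not data.get('more_available'):       # key name assumed
        break
    max_id = items[-1]['id']                 # the last (smallest) id becomes the next max_id
print(len(posts))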
The endpoint 'https://www.instagram.com/[user-login]/media/' was turned off in the last few days; I'm not sure exactly when.
If you are dependent on it, you might want to check it now in your apps.
e.g. https://www.instagram.com/fosterandpartners/media/
I have recently installed FreePBX with Asterisk included. I activated the REST interface, so I can open /ari/asterisk/info and it responds with JSON. Now I want to see all my call recordings. I configured recordings and the server saves them in wav format. That works, but how can I access them through JSON/REST? I tried opening /ari/asterisk/recordings, but it responds with "resource not found".
As you can see in the docs, you can use:
GET /recordings/stored/{recordingName}
EDIT: You can see the list of stored recordings with
GET /recordings/stored
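For example, from Python with the requests library (the host, port and ARI credentials below are placeholders; use whatever you configured in ari.conf and http.conf):
import requests

ARI = 'http://localhost:8088/ari'    # default Asterisk HTTP port; adjust to your setup
AUTH = ('ariuser', 'aripass')        # hypothetical ARI user from ari.conf

for rec in requests.get(ARI + '/recordings/stored', auth=AUTH).json():
    print(rec['name'])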
You are missing the point here: the ARI recordings interface isn't meant to be used with the files that you have stored via FreePBX. The recordings API is meant to let you manage recordings from within a Stasis application. That means starting a recording from a Stasis application and managing it there. If the recording was performed outside of Stasis, the ARI engine will not be aware of it.
Well, at least that's what it's supposed to do.
Nir
This is partly doable. FreePBX doesn't seem to use the native Asterisk recording APIs, so you can only retrieve the filename.
First get all the channels:
GET /ari/channels
Find your channel's ID from the response's id field
Then you can request the variable CALLFILENAME from the channel's variable endpoint:
GET /ari/channels/{id}/variable?variable=CALLFILENAME
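Putting the two calls together in Python (host, port and credentials are again placeholders for your own ARI configuration):
import requests

ARI = 'http://localhost:8088/ari'    # adjust host/port to your setup
AUTH = ('ariuser', 'aripass')        # hypothetical ARI user from ari.conf

for chan in requests.get(ARI + '/channels', auth=AUTH).json():
    resp = requests.get(ARI + '/channels/%s/variable' % chan['id'],
                        params={'variable': 'CALLFILENAME'},
                        auth=AUTH)
    if resp.ok:
        print('%s: %s' % (chan['id'], resp.json()['value']))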
Is it possible to get all items with their tags, like
(Rarity, Quality, Hero, Slot, Type, Description),
for Dota 2 (570), TF2 (440), CS:GO (730) and Steam (753)?
I haven't found any API that returns all items available for a particular game. If anyone knows how to get this, please reply to my question.
There's no official API (e.g. the Web API) to get all information for all games. The Web API only supports Dota 2 (IEconItems_570) and TF2 (IEconItems_440). There's also an interface for CS:GO (IEconItems_730), but it's rudimentary and doesn't include weapon skins.
Because of that lack of official APIs, Steam Condenser doesn't include a way to do this.
There's a way to mimic Steam's own web interface and mobile apps, which use a JSON interface, e.g. http://steamcommunity.com/id/koraktor/inventory/json/730/2/ (where 730 is the app ID and 2 is the item type). Steam also uses types other than 2, such as 3, 6 and 7. The data structure is almost self-explanatory.
The language can be changed by setting the GET parameter l to the name of the language, e.g. english, german or french.
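A small Python sketch for fetching that JSON; since the endpoint is undocumented, it only prints the top-level keys so you can explore the structure yourself:
import requests

url = 'http://steamcommunity.com/id/koraktor/inventory/json/730/2/'
data = requests.get(url, params={'l': 'english'}).json()
print(sorted(data.keys()))    # inspect the top-level structure before digging deeper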
Is there a way to retrieve the number of times a certain URL was "dented" (shared on identi.ca, status.net and/or the like)?
For twitter there are several services that give this information.
Twitter itself: http://urls.api.twitter.com/1/urls/count.json?url=http://example.com&callback=twttr.receiveCount
Tweetmeme: http://api.tweetmeme.com/url_info.jsonc?url=http://example.com
Topsy: http://otter.topsy.com/stats.js?url=http://example.com&callback=?
I don't need the fancy extra information that Tweetmeme or Topsy deliver, only the amount.
I am aware that this is problematic, given the "distributed" nature of status.net: it will only give a count from one single silo, e.g. identi.ca. However, for me, for now, that would be enough.
Is there such an endpoint that gives me such JSON?
I don't think so. There's a file table in StatusNet databases that holds references to dented URLs (so it wouldn't be hard to count them if you had access to the database or could write a plugin; i.e., you wouldn't have to parse all notices, just look up the file table), but it's not exposed through the API.
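If you do have database access, a very rough sketch of what that count could look like; the table and column names are assumptions about the StatusNet schema, so verify them against your own installation before relying on them:
import MySQLdb    # assumes a MySQL-backed StatusNet install

conn = MySQLdb.connect(host='localhost', user='statusnet',
                       passwd='secret', db='statusnet')
cur = conn.cursor()
# 'file' and 'file_to_post' table/column names are assumptions; check your schema
cur.execute("""
    SELECT COUNT(*)
    FROM file_to_post ftp
    JOIN file f ON f.id = ftp.file_id
    WHERE f.url = %s
""", ('http://example.com',))
print(cur.fetchone()[0])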
The list of API possible calls for StatusNet is here: http://status.net/wiki/TwitterCompatibleAPI
In addition, there's a proposed Google Summer of Code project on this subject: Social Analytics plugin