Instagram Media Endpoint Paging - json

I'm currently looking at reading out posts and related json data from a given number of Instagram users using the following URL:
https://www.instagram.com/[user-login]/media/
This only brings back the latest 20 posts. I have done some hunting around but I am unable to see how to form the URL to bring back the next 20 results. Some places suggest using max_timestamp, but I can't see how to make that work.
For various reasons I do not wish to use the standard Instagram API.

You should use the max_id parameter for pagination.
Example: https://www.instagram.com/[user-login]/media/?max_id=[last-min-id], where [last-min-id] is the smallest id on the previous page. Ids are not repeated on the next page.
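A minimal Python sketch of that loop, assuming the endpoint is reachable and still returns its historical JSON shape with an items list and a more_available flag (both assumptions; see the note below about the endpoint being turned off):

import requests

def fetch_media(user, limit=100):
    # Page through /media/ by passing the last seen id as max_id.
    url = 'https://www.instagram.com/%s/media/' % user
    posts, max_id = [], None
    while len(posts) < limit:
        params = {'max_id': max_id} if max_id else {}
        data = requests.get(url, params=params).json()
        items = data.get('items', [])
        if not items:
            break
        posts.extend(items)
        max_id = items[-1]['id']  # the smallest id on this page
        if not data.get('more_available'):
            break
    return posts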

The endpoint 'https://www.instagram.com/[user-login]/media/' was turned off within the last few days; I'm not sure exactly when.
If you are dependent on it, you might want to check your apps now.
e.g. https://www.instagram.com/fosterandpartners/media/

Related

Amazon: product advertising api pagination top sellers

Is this a limitation of the amazon API?
I would like to pull data similar to this page: amazon.com/Best-Sellers-Home-Improvement-Pumps-Plumbing-Equipment/zgbs/hi/13749581/ref=zg_bs_nav_hi_1_hi
I am using:
operation: 'BrowseNodeLookup',
response_group: "BrowseNodeInfo,TopSellers"
The TopSeller response group only returns 10 items and does not respond to ItemPage.
Is there a way to do item lookup without a query using a browse node and sorting by popularity?
The AWS documentation on the BrowseNodeLookup API and the TopSellers response group indicates that it only includes the top 10, and there is no mention of pagination.
The TopSellers response group returns the ASINs and titles of the 10 best sellers within a specified browse node.
However, the results from TopSellers are basically equivalent to the results of an ItemSearch with Sort set to salesrank. Therefore, you can solve pagination requirements as follows:
1. On initial load (such as a user loading a web page or opening a particular view in a mobile application), issue BrowseNodeLookup and retrieve TopSellers. Populate some portion of the UI with information from the browse node and some other portion with the TopSellers results.
2. If the user never goes past the first page, do nothing more. (There is no need to spend time on an additional service call.)
3. As the user navigates to subsequent pages, issue ItemSearch with Sort set to salesrank and ItemPage set to the page number, as sketched below. Use these results to update the portion of the page/view that was previously populated from the browse node's TopSellers.
Note that you will still only be able to retrieve up to 10 pages worth of results. This is an ItemSearch API limitation.
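Here is a rough sketch of that flow in Python, using the bottlenose client for the Product Advertising API. The credentials are placeholders and the SearchIndex value is an assumption you would need to match to your browse node; parsing the returned XML is left out:

import bottlenose

# Placeholder credentials
ACCESS_KEY, SECRET_KEY, ASSOCIATE_TAG = 'ACCESS', 'SECRET', 'mytag-20'
amazon = bottlenose.Amazon(ACCESS_KEY, SECRET_KEY, ASSOCIATE_TAG)

def best_sellers_page(browse_node_id, page):
    # Returns raw XML for one page of best sellers (pages 1-10 only).
    if page == 1:
        # First page: browse-node info plus the TopSellers top 10.
        return amazon.BrowseNodeLookup(
            BrowseNodeId=browse_node_id,
            ResponseGroup='BrowseNodeInfo,TopSellers')
    # Later pages: ItemSearch sorted by salesrank is roughly equivalent.
    return amazon.ItemSearch(
        SearchIndex='HomeGarden',  # assumption: use the index for your node
        BrowseNode=browse_node_id,
        Sort='salesrank',
        ItemPage=str(page),  # the API rejects ItemPage > 10
        ResponseGroup='ItemAttributes')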

How to restrict fields returned by stackexchange api, and turn off paging?

I'd like to have a list of just the current titles for all questions in one of the smaller (less than 10,000 questions) Stack Exchange sites. I tried the interactive utility here: https://api.stackexchange.com/docs/questions and it both reports the result as JSON at the bottom and produces the request URL at the top. For example:
https://api.stackexchange.com/2.2/questions?order=desc&sort=activity&tagged=apples&site=cooking
returns this JSON in my browser:
{"items":[{"tags":["apples","crumble"],"owner":{ ...
...
...],"has_more":true,"quota_max":300,"quota_remaining":252}
What is quota? It was 10,000 on one search on one site, but suddenly it's only 300 here.
I won't be doing this very often. What I'd like is the quickest way to edit that (or a similar) URL so I can get a list of all of the titles on a small site. I don't understand how to use paging, and I don't need any of the other fields. I don't care if I get them, but I'm thinking that if I exclude them I can fetch more at once.
If I need to script it, python (2.7) is my preferred (only) language.
quota_max is the number of requests your application is allowed per day. 300 is the default for an unregistered application. This used to be mentioned directly on the page describing throttles, but seems to have been removed. Here is historical information describing the default.
To increase this to 10,000, you need to register an application and then authenticate by passing an access token in your script.
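With the raw HTTP API, that means adding key (and, where needed, access_token) as query parameters. A sketch using the requests library, with placeholder values:

import requests

resp = requests.get(
    'https://api.stackexchange.com/2.2/questions',
    params={
        'site': 'cooking',
        'order': 'desc',
        'sort': 'activity',
        'key': 'YOUR_APP_KEY',         # issued when you register the app
        'access_token': 'YOUR_TOKEN',  # from the OAuth flow, if required
    })
print(resp.json()['quota_max'])  # 10000 with a valid key, 300 without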
To get all titles on a site, you can use one of several Python libraries:
StackAPI (this answer uses it; disclaimer: I wrote this library)
Py-StackExchange
SEAPI
StackPy
Assuming you have registered your application and authenticated, we can proceed.
First, install StackAPI (documentation):
pip install stackapi
This code will then grab the 10,000 most recent questions (max_pages * page_size) for the site hardwarerecs. Each page costs one API hit, so the more items per page, the fewer API calls.
from stackapi import StackAPI
SITE = StackAPI('hardwarerecs')
SITE.page_size = 100  # items per request (100 is the API maximum)
SITE.max_pages = 100  # fetch up to 100 pages, i.e. 10,000 questions
# Filter to only get the question title and link
filter = '!BHMIbze0EQ*ved8LyoO6rNjkuLgHPR'
questions = SITE.fetch('questions', filter=filter)
The questions variable now holds a dictionary that looks very similar to the raw API output, except that the library has done all the paging for you. Your data is in questions['items'] and, in this case, is a list of dictionaries that look like this:
[
...
{u'link': u'http://hardwarerecs.stackexchange.com/questions/29/sound-board-to-replace-a-gl2200-in-a-house-of-worship-foh-setting',
u'title': u'Sound board to replace a GL2200 in a house-of-worship FOH setting?'},
{ u'link': u'http://hardwarerecs.stackexchange.com/questions/31/passive-gps-tracker-logger',
u'title': u'Passive GPS tracker/logger'}
...
]
This result set is limited to only the title and the link because of the filter we applied. You can find the appropriate filter by adjusting what fields you want in the web UI and copying the filter field.
The hardwarerecs value passed when creating the SITE object is the first part of the site's domain URL. Alternatively, you can find it by looking at the api_site_parameter for your site at the /sites endpoint.
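From there, reducing the result to a plain list of titles is a one-line comprehension (assuming, as above, that the paged result mirrors the API's items key):

titles = [q['title'] for q in questions['items']]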

How to get more than 1 stock information per call on Google Financials?

I'm using Google Script and Google Financials to get information for a list of stocks I have in a text file. The problem is that the class FinanceApp just seems to be able to get one stock at a time and since I have to do this for more than 250 stocks I reach the maximum call limit.
Is there a better way to do this?
Since there are limitations and you are making repeated tests, I suggest using a cache: you can then repeat the test without hitting the limit (assuming you always request the same data for the same date, i.e. using StockInfoSnapshot).
You do it by wrapping FinanceApp.getHistoricalStockInfo() so that it serves from the cache when possible, and adds to the cache when the info is not yet there.
The cache could conveniently reside in the "script-related storage": https://developers.google.com/apps-script/script_user_properties
Good luck!
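FinanceApp only exists inside Apps Script, so the wrapper itself has to live there, but the pattern is language-independent. A minimal Python sketch of the same idea, with fetch standing in for the FinanceApp.getHistoricalStockInfo() call and shelve playing the role of the script storage:

import shelve

def cached_stock_info(symbol, date, fetch):
    # Serve from the on-disk cache if present; otherwise fetch and store.
    cache = shelve.open('stock_cache')  # persistent key/value store
    try:
        key = '%s|%s' % (symbol, date)
        if key not in cache:
            cache[key] = fetch(symbol, date)  # only hit the API on a miss
        return cache[key]
    finally:
        cache.close()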

determining if a status update is a retweet using api 1.1

I want a simple, robust way to identify retweets in a hashtag search using twitter api 1.1.
For example, if I send the following request with the proper authentication:
https://api.twitter.com/1.1/search/tweets.json?q=%23stackoverflow
I'll get the last 15 tweets tagged with #stackoverflow.
It looks like only retweeted status updates have the 'retweeted_status' property. Is checking whether the tweet has a 'retweeted_status' property a reliable way to determine if it is a retweet?
'retweeted' and 'retweet_count' don't give me what I need.
It sounds rather like you've answered your own question. retweeted_status is present when the retweeter has used Twitter's official Retweet function.
However, people still use the old-style RT: <quote> approach, which won't give you any solid data to bind to in what the API returns. The only way to handle these is to compare the text and see if the original text is contained. If they've modified the text then you're stuck, but then if they've modified the text it's technically not a retweet - it's just plagiarism ;)
Thought I'd share my solution...
if (eventMsg.retweeted_status == null) {
    // not a retweet - handle it as an original tweet
}
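For completeness, the same check in Python over a decoded search/tweets.json response might look like this; the RT-prefix fallback for manual retweets is a heuristic, not part of the API:

def is_retweet(tweet):
    # Official retweets carry a nested 'retweeted_status' object.
    if 'retweeted_status' in tweet:
        return True
    # Heuristic fallback for old-style manual "RT @user: ..." retweets.
    return tweet.get('text', '').startswith('RT @')

# In a decoded search response the tweets sit under 'statuses':
# originals = [t for t in data['statuses'] if not is_retweet(t)]
print(is_retweet({'text': 'RT @user: original text'}))  # True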

Tweet counter for identi.ca

Is there a way to retrieve the number of times a certain URL was "dented" (shared on identi.ca, StatusNet, and the like)?
For twitter there are several services that give this information.
Twitter itself: http://urls.api.twitter.com/1/urls/count.json?url=http://example.com&callback=twttr.receiveCount
Tweetmeme: http://api.tweetmeme.com/url_info.jsonc?url=http://example.com
Topsy: http://otter.topsy.com/stats.js?url=http://example.com&callback=?
I don't need the fancy extra information that Tweetmeme or Topsy deliver, only the count.
I am aware that this is problematic given the "distributed" nature of StatusNet: it will only give a count from one single silo, e.g. identi.ca. However, for me, for now, that would be enough.
Is there such an endpoint that gives me such JSON?
I don't think so. There's a file table in StatusNet databases that holds references to dented URLs, so it wouldn't be hard to count them if you had access to the database or could write a plugin (i.e., you wouldn't have to parse all notices, just look up the file table), but it's not exposed through the API.
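If you do have database access, the count is a single query. A hypothetical Python sketch; the file and file_to_post table and column names are inferred from the description above and must be checked against your instance:

import pymysql

def dent_count(url):
    # Count notices that attached the given URL (schema names are guesses).
    conn = pymysql.connect(host='localhost', user='statusnet',
                           password='secret', database='statusnet')
    try:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT COUNT(*) FROM file_to_post ftp "
                "JOIN file f ON f.id = ftp.file_id "
                "WHERE f.url = %s", (url,))
            return cur.fetchone()[0]
    finally:
        conn.close()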
The list of possible API calls for StatusNet is here: http://status.net/wiki/TwitterCompatibleAPI
In addition, there's a proposed Google Summer of Code project on this subject: Social Analytics plugin