I want a simple, robust way to identify retweets in a hashtag search using the Twitter API 1.1.
For example, if I send the following request with the proper authentication:
https://api.twitter.com/1.1/search/tweets.json?q=%23stackoverflow
I'll get the last 15 tweets tagged with #stackoverflow.
It looks like only retweeted status updates have the 'retweeted_status' property. Is checking to see if the tweet has a 'retweeted_status' property a reliable way to determine if it is a retweet?
'retweet' and 'retweet_count' don't give me what I need.
Sounds rather like you've answered your own question: retweeted_status is present when the retweeter has used Twitter's official Retweet function.
However, people still use the old-style RT: <quote> approach, which won't give you any solid data bindings in the data returned from the API. The only way to handle these is to compare the text and see if the original text is contained. If they've modified the text then you're stuck, but then if they've modified the text then technically it's not a Retweet - it's just plagiarism ;)
Thought I'd share my solution...
if (eventMsg.retweeted_status == null) {
    // not a retweet - run code
}
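For anyone doing the same check against the search endpoint rather than a streamed event, here is a minimal Python sketch. The credentials are placeholders, and requests_oauthlib is just one convenient OAuth 1.0a client, not a requirement:

# Minimal sketch: split a v1.1 search result into retweets and originals
# by checking for the 'retweeted_status' property. Credentials are placeholders.
from requests_oauthlib import OAuth1Session

session = OAuth1Session("CONSUMER_KEY", "CONSUMER_SECRET",
                        "ACCESS_TOKEN", "ACCESS_SECRET")   # placeholder credentials

resp = session.get("https://api.twitter.com/1.1/search/tweets.json",
                   params={"q": "#stackoverflow"})
statuses = resp.json().get("statuses", [])

# A tweet is a retweet exactly when it carries the 'retweeted_status' property.
retweets = [t for t in statuses if "retweeted_status" in t]
originals = [t for t in statuses if "retweeted_status" not in t]
print(len(retweets), len(originals))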
I logged in today to make another logic app, and I noticed that the return output for (in this example) an HTTP call has changed.
Before, I remember the whole object showing in the output of an action in the workflow. Now I just see this:
Picture below:
The output body is only a string in some kind of encoding...
Does the Workflow Definition Language expression where I want to reach one specific value in the JSON body still work? Or was this update a major overhaul?
I'm lost here.
Does the Workflow Definition Language expression where I want to reach one specific value in the JSON body still work?
Yes, we can still do it. It looks like you are at the designer overview of a run; if you want to see the full inputs and outputs, click the Run details tab at the top of that screen.
The content is base64 encoded. If you want to decode it within the logic app for any other action's input, we can use the base64 functions, which are explained here: https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-workflow-definition-language#functions
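Outside of Logic Apps, the decoding itself is trivial. Here is a minimal Python sketch of what that base64 step amounts to; the encoded string below is just a placeholder that decodes to {"name": "value"}:

# Minimal illustration of the decoding step; `encoded_body` stands in for the
# encoded body string shown in the run output.
import base64
import json

encoded_body = "eyJuYW1lIjogInZhbHVlIn0="            # placeholder value
decoded = base64.b64decode(encoded_body).decode("utf-8")
payload = json.loads(decoded)                        # back to a normal JSON object
print(payload["name"])                               # -> value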
I'd like to have a list of just the current titles for all questions on one of the smaller (less than 10,000 questions) Stack Exchange sites. I tried the interactive utility here: https://api.stackexchange.com/docs/questions and it both reports the result as JSON at the bottom and produces the requesting URL at the top. For example:
https://api.stackexchange.com/2.2/questions?order=desc&sort=activity&tagged=apples&site=cooking
returns this JSON in my browser:
{"items":[{"tags":["apples","crumble"],"owner":{ ...
...
...],"has_more":true,"quota_max":300,"quota_remaining":252}
What is quota? It was 10,000 on one search on one site, but suddenly it's only 300 here.
I won't be doing this very often. What I'd like is the quickest way to edit that (or a similar) URL so I can get a list of all of the titles on a small site. I don't understand how to use paging, and I don't need any of the other fields. I don't care if I get them, but I'm thinking that if I exclude them I can get more at once.
If I need to script it, python (2.7) is my preferred (only) language.
quota_max is the number of requests your application is allowed per day. 300 is the default for an unregistered application. This used to be mentioned directly on the page describing throttles, but seems to have been removed. Here is historical information describing the default.
To increase this to 10,000, you need to register an application and then authenticate by passing an access token in your script.
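For illustration, this is roughly what such a request looks like with plain requests; both credential values below are placeholders, not real ones:

# Rough sketch: passing the registered app's key (and an access token, if you have
# one) as ordinary query parameters. Both values are placeholders.
import requests

params = {
    "order": "desc",
    "sort": "activity",
    "site": "cooking",
    "key": "YOUR_APP_KEY",           # from registering the application
    "access_token": "YOUR_TOKEN",    # from the OAuth flow, if applicable
}
r = requests.get("https://api.stackexchange.com/2.2/questions", params=params)
print(r.json()["quota_max"])         # should now report the raised quota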
To get all titles on a site, you can use a Python library to help:
StackAPI (the answer below will use this library; disclaimer: I wrote it)
Py-StackExchange
SEAPI
StackPy
Assuming you have registered your application and authenticated, we can proceed.
First, install StackAPI (documentation):
pip install stackapi
This code will then grab the 10,000 most recent questions (max_pages * page_size) for the site hardwarerecs. Each page costs you one API hit, so the more items per page, the fewer API calls you need.
from stackapi import StackAPI

SITE = StackAPI('hardwarerecs')
SITE.page_size = 100   # items per page (the API maximum)
SITE.max_pages = 100   # 100 pages x 100 items = up to 10,000 questions
# Filter to only get question title and link
filter = '!BHMIbze0EQ*ved8LyoO6rNjkuLgHPR'
questions = SITE.fetch('questions', filter=filter)
The questions variable holds a dictionary that looks very similar to the API output, except that the library did all the paging for you. Your data is in questions['items'] and, in this case, contains a list of dictionaries that look like this:
[
...
{u'link': u'http://hardwarerecs.stackexchange.com/questions/29/sound-board-to-replace-a-gl2200-in-a-house-of-worship-foh-setting',
u'title': u'Sound board to replace a GL2200 in a house-of-worship FOH setting?'},
{ u'link': u'http://hardwarerecs.stackexchange.com/questions/31/passive-gps-tracker-logger',
u'title': u'Passive GPS tracker/logger'}
...
]
This result set is limited to only the title and the link because of the filter we applied. You can find the appropriate filter by adjusting what fields you want in the web UI and copying the filter field.
The hardwarerecs value passed when creating the SITE object is the first part of the site's domain URL. Alternatively, you can find it by looking at the api_site_parameter for your site at the /sites endpoint.
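If you would rather not pull in a library at all, the paging loop is also short with plain requests. This is only a sketch: it reuses the title/link filter from above, and the key is a placeholder (omit it to stay on the default 300-request quota):

# Sketch of manual paging: request page after page until has_more is False
# or the quota runs out, collecting only the titles.
import time
import requests

titles = []
page = 1
while True:
    r = requests.get(
        "https://api.stackexchange.com/2.2/questions",
        params={
            "site": "hardwarerecs",
            "pagesize": 100,
            "page": page,
            "filter": "!BHMIbze0EQ*ved8LyoO6rNjkuLgHPR",   # title + link only
            "key": "YOUR_APP_KEY",                         # placeholder
        },
    )
    data = r.json()
    titles.extend(item["title"] for item in data.get("items", []))
    if not data.get("has_more") or data.get("quota_remaining", 1) <= 0:
        break
    if "backoff" in data:          # the API asks clients to pause this many seconds
        time.sleep(data["backoff"])
    page += 1

print(len(titles))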
I'm currently looking at reading out posts and related JSON data for a given number of Instagram users using the following URL:
https://www.instagram.com/[user-login]/media/
This will only bring back the latest 20 posts. I have done some hunting around and I am unable to see how to form the url to bring back the next 20 results. I've seen some places that have suggested using max_timestamp, but I can't see how to make this work.
For various reasons I do not wish to use the standard Instagram API.
You should use the max_id parameter for pagination.
Example: https://www.instagram.com/[user-login]/media/?max_id=[last-min-id], where [last-min-id] is the minimum id from the previous page. That id is not repeated in the new page.
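As noted in the next answer, this endpoint has since been turned off, so the following Python sketch is only illustrative; the response field names (items, id, more_available) are my recollection of the old, undocumented response shape, not an official API:

# Rough, unofficial sketch of paging the old /media/ endpoint with max_id.
import requests

def fetch_all_media(user_login):
    posts, max_id = [], None
    while True:
        params = {"max_id": max_id} if max_id else {}
        data = requests.get(
            "https://www.instagram.com/%s/media/" % user_login, params=params
        ).json()
        items = data.get("items", [])
        if not items:
            break
        posts.extend(items)
        if not data.get("more_available"):
            break
        max_id = items[-1]["id"]   # last (minimum) id of this page feeds the next request
    return posts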
This endpoint, 'https://www.instagram.com/[user-login]/media/', was turned off within the last few days; I'm unsure exactly when.
If you are dependent on it, you might want to check it now in your apps.
e.g. https://www.instagram.com/fosterandpartners/media/
I have a personal blog I built using Rails. I want to add a section to my site that displays my current streak of GitHub contributions. What would be the best way to go about doing this?
edit: for clarification, here is what I want:
just the number of days is all that is necessary for me.
Considering the GitHub API for Users doesn't yet expose that particular information (the number of days in the current streak of contributions), you might have to:
scrape it (extract it by reading the user's GitHub page)
As klamping mentions in his answer (upvoted), the URL to scrape would be:
https://github.com/users/<username>/contributions_calendar_data
https://github.com/users/<username>/contributions
(for public repos only, though)
SherlockStd has updated (May 2017) parsing code below:
https://github-stats.com/api/user/streak/current/:username
try projects which are using https://github.com/users/<username>/contributions_calendar_data (as listed in Marques Johansson's answer, upvoted)
IonicaBizau/git-stats:
akerl/githubchart (Github contribution SVG generator)
akerl/githubstats (Github contribution statistics)
build that graph yourself: see the GitHub project git-cal
git-cal is a simple script to view commits calendar (similar to GitHub contributions calendar) on command line.
Each block in the graph corresponds to a day and is shaded with one of the 5 possible colors, each representing relative number of commits on that day.
or establish a service that will report, each day, any new commit for that given day to a Google Calendar (using the Google Calendar API through a project like nf/streak).
You can then read that information and report it in your blog.
You can find various example of scraping that information:
github_team_calendar.py
weekend-commits.js
As in:
$.getJSON('https://github.com/users/' + location.pathname.replace(/\//g, '') + '/contributions_calendar_data', weekendWork);
leaderboard.rb:
Like:
leaderboard = members.map do |u|
  user_stats = get("https://github.com/users/#{u}/contributions_calendar_data")
  total = user_stats.map { |s| s[1] }.reduce(&:+)
  [u, total]
end
... (you get the idea)
The URL for the plain JSON data was:
https://github.com/users/[username]/contributions_calendar_data
[Edit: Looks like this URL no longer works]
There is a URL which generates the SVG, which other answers have indicated. That is here:
https://github.com/users/[username]/contributions
Simply replace [username] with your GitHub username in the URL and you should be able to see the chart. See the other answers for more in-depth explanations.
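If you want to turn that SVG into a streak number yourself, here is a rough Python sketch. It assumes the SVG exposes one rect per day with data-count and data-date attributes, which GitHub has changed over the years, so treat it as a starting point rather than a robust parser:

# Rough sketch: count the run of consecutive non-zero days at the end of the
# contributions SVG. The data-count/data-date attributes are an assumption.
import re
import requests

def current_streak(username):
    svg = requests.get("https://github.com/users/%s/contributions" % username).text
    days = re.findall(r'data-count="(\d+)"[^>]*data-date="[\d-]+"', svg)
    streak = 0
    for count in reversed(days):       # walk backwards from the most recent day
        if int(count) > 0:
            streak += 1
        else:
            break
    return streak

print(current_streak("some-username"))  # placeholder username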
If you want something that matches the visual appearance of GitHub's chart, check out these projects, which use https://github.com/users/<username>/contributions_calendar_data but also apply other factors based on GitHub's logic.
https://github.com/akerl/githubchart
https://github.com/akerl/githubstats
[Project deprecated and unavailable for now, will be back online soon.]
Since the URL https://github.com/users/<username>/contributions_calendar_data doesn't work anymore, you have to parse the SVG from https://github.com/users/<username>/contributions.
Unfortunately, GitHub loves security and CORS is disabled on their server.
To solve this issue, I've set up an API for me and everyone who needs it: just GET https://github-stats.com/api/user/streak/current/{username} (CORS allowed), and you'll get an answer like:
{
"success":true,
"currentStreak": 3
}
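For example, a minimal Python sketch that reads the streak from the endpoint and response shape shown above:

# Minimal sketch built on the URL and JSON shown above.
import requests

def github_streak(username):
    url = "https://github-stats.com/api/user/streak/current/%s" % username
    data = requests.get(url).json()
    return data["currentStreak"] if data.get("success") else None

print(github_streak("some-username"))  # placeholder username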
https://github-stats.com will soon implement more stats endpoints :)
Please ask for new endpoints at https://github.com/SherloxFR/github-stats.com/issues; it will be a pleasure to find a way to implement them!
Hi, I have been having some problems using JSON data in Flash Builder lately, and I was hoping someone could help me out here.
I have spent the past month working solidly on this issue, so I HAVE been looking around, HAVE been trying out everything I can find or think of. I am simply stuck.
I have been working on a Flex mobile application for the BlackBerry PlayBook tablet with Adobe Flash Builder 4.6. It is a reddit app, designed to give users the main reddit feed, subreddits, a search function, hopefully login functionality, etc. Of course I need the aid of the reddit API to access this information; the documentation can be found here: https://github.com/reddit/reddit/wiki/API/ The API uses either XML or JSON formatted data.
Now onto my problem. As mentioned above, I want to display the reddit feed inside the app, and I want to be able to use an item renderer to customize the data that is shown within each entry of the list.
One entry would consist of:
1) a thumbnail of the image in the post
2) The title of the post
3) a 'like/dislike' button, but this is unimportant at the moment.
Of course, to start out, I placed a Spark List component onto the design view. Then I configured a new HTTP data service using the Data/Services panel. I specified http://www.reddit.com/r/all.json for the URL, configured the return type, and then did a Test Operation. Everything connected just fine; all the data came through as normal. I will give you an idea of what the data comes back as so that you may understand my issue later on.
Test Operation Results (json data structure):
NoName1
    data
        after
        before
        children
            [0]
                data
                    media_embed
                    score
                    id
                    title
                    thumbnail
                    url
                    (etc etc...)
                kind
            [1]
                data
                    media_embed
                    score
                    title
                    thumbnail
                    (etc etc...)
                kind
            [2] (array continues)
        modhash
    kind
As you can see, to get to the thumbnail, for example, you would have to go through data.children[].data.thumbnail. When I tried to bind this data to the Spark List component, I specified the data service to be the one above. Then I specified the Data provider option to be children[], as this value is typically set to the array. Now this is where the trouble begins. The final option, Label Field, only gave me one value to choose from: 'kind'. So as you can tell, it wasn't expecting the data to be nested any further. It stops at the two values just within each array item, which would be data and kind, though it only offers me kind. I need to go one level further to access title and thumbnail. This is my problem.
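To make the nesting clearer outside of Flex, here is a quick Python sketch (illustrative only) of the path I need to reach:

# Illustration of the nesting: each list entry lives at data.children[i].data.
import requests

feed = requests.get(
    "http://www.reddit.com/r/all.json",
    headers={"User-Agent": "nesting-demo"},   # reddit rejects default user agents
).json()

for child in feed["data"]["children"]:
    post = child["data"]
    print("%s -> %s" % (post["title"], post["thumbnail"]))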
Now, I have analyzed the code for the binding, and I have tried altering it to accommodate the further-nested value, with no success whatsoever. The following is the code that the binding generates:
<s:List
    id="myList" width="100%" height="100%" change="myList_changeHandler(event)"
    creationComplete="myList_creationCompleteHandler(event)" labelField="kind">
    <s:AsyncListView list="{TypeUtility.convertToCollection(redditFeedJSONResult.lastResult.data.children)}"/>
</s:List>
Obviously I would want to have something along the lines of:
"TypeUtility.convertToCollection(redditFeedJSONResult.lastResult.data.children.data)" and then set labelField="title" or "thumbnail".
I certainly hope someone can help me with this; I am at a loss as to what to do. If you need any further clarification I would be happy to provide it. I tried to explain the situation above as clearly as possible. Thank you so much.
Ted
I often have this situation: I get XML or JSON data from the server, then try to use it as a dataProvider for a spark.components.List or an mx.controls.Menu, and they just won't display the data as I want, because something in the data is different from what they expect. And then they display the wrong XML children or [Object,Object,etc.].
In such cases I just create an empty ArrayCollection(), which serves as the dataProvider instead (and can be sorted and/or filtered too). And when data comes from the server, I add new Objects into it:
import mx.collections.ArrayCollection;

[Bindable]
private var _data:ArrayCollection = new ArrayCollection();

public function update(xlist:XMLList):void {
    _data.removeAll();   // clear the old items (ArrayCollection has no writable length)
    for each (var xml:XML in xlist)
        _data.addItem({label: xml, event: xml.@event});   // @event reads the XML attribute
}
This always works. And if you get your next problem - flickering of the List, then that is solvable by merging data too.
Good luck with your Playbook development, which is a cool piece of hardware :-)