How to count retweets - actionscript-3

I am trying to create a Flash retweet button, and I want to know how to count the retweets for a specific status. Can anybody help me?
Thanks in advance,
Alex

Have a look at the Twitter API documentation on retweets.
Here's an example:
http://api.twitter.com/1/statuses/retweets/16208928355.json
Just finish the URL with the tweet id followed by the format you want (id.format).
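If it helps, here is a minimal sketch (in Python rather than ActionScript, just to illustrate the call) that fetches that endpoint and counts the entries returned. The tweet id is the example id from the URL above; the endpoint may have changed or may require authentication since this was written.

import json
import urllib2  # Python 2, matching the era of this API

# Tweet id from the example URL above; swap in your own status id.
tweet_id = "16208928355"
url = "http://api.twitter.com/1/statuses/retweets/%s.json" % tweet_id

response = urllib2.urlopen(url)
retweets = json.loads(response.read())

# The endpoint returns a list of retweet objects (capped at 100),
# so the retweet count is simply the length of that list.
print len(retweets)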

I just thought I should add (from the Twitter API Retweet documentation):
statuses/retweets
Returns up to 100 of the first retweets of a given tweet.
So it looks like there is a limit of 100. There is also rate-limiting on this part of the Twitter API:
API rate limited: 1 call per request (see the rate-limiting page):
http://apiwiki.twitter.com/Rate-limiting
Sorry to provide a problem and not a solution. To get around this you might be able to use Tweetmeme with the Tweetmeme API:
http://www.webmaster-source.com/2009/11/23/count-your-retweets-with-the-tweetmeme-api/
This would only work if you had some sort of URL in the Tweet (e.g. a shortened bit.ly URL); sorry if I have misunderstood the question.
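For completeness, a rough sketch of the Tweetmeme idea follows. Note that the url_info endpoint and the url_count field are written from memory of that API (which has since shut down), so treat both as assumptions rather than a working recipe.

import json
import urllib2

# Hypothetical: the Tweetmeme url_info endpoint as I remember it.
# Both the endpoint and the 'url_count' field are assumptions.
page_url = "http://bit.ly/your-short-link"
api = "http://api.tweetmeme.com/url_info.json?url=" + page_url

data = json.loads(urllib2.urlopen(api).read())
# Number of tweets/retweets Tweetmeme has seen for that URL.
print data["story"]["url_count"]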
As an aside, I am looking at using this approach to allow website users to Tweet a particular bit of content from a page (similar to using the page meta-tags with Facebook Open Graph 'Like' buttons):
http://ejohn.org/blog/retweet/
So I guess what I'm advocating for is tracking the shortened link as a method for tracking retweets, and also as a means of tracking 'tweets' from a landing page. Has anyone done anything similar to this?

Related

How to get an RSS feed of tweets WITH images?

I'm trying to get an RSS feed of a list of tweets with a given hashtag, including the images that may be attached to the tweets.
I've used several different scripts out there, but none include the media_url entity that I believe I need, according to Twitter's docs on API entities. They do include other necessary things like author, tweet description, author profile pic, etc.
I've used labnol's script, no luck.
I'm currently using Twitter-RSS-Parser, which doesn't give me an image link either.
I'm not very familiar with any of the actual coding, just trying to piece together other people's findings.
Is there a way to edit either of these scripts to provide a link to the image attached to each tweet, or any other script out there that already does this?
Thanks!
Those labnol scripts will need the following parameter added to them: &include_entities=true
That will ensure that tweets which have photos have their entity metadata returned.
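As a rough illustration (this is not the labnol script itself), here is how you might request tweets with include_entities=true and pull out the media_url of any attached photo. The search endpoint shown is the old search.twitter.com JSON API, so adjust for whichever API version you are actually using.

import json
import urllib2

# Old-style search API call; the include_entities=true parameter
# is what makes Twitter return the media metadata for photos.
hashtag = "%23example"  # hypothetical hashtag, URL-encoded
url = ("http://search.twitter.com/search.json"
       "?q=" + hashtag + "&include_entities=true")

data = json.loads(urllib2.urlopen(url).read())
for tweet in data.get("results", []):
    media = tweet.get("entities", {}).get("media", [])
    for item in media:
        # media_url is the direct link to the attached photo.
        print item["media_url"]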
I ended up using tweedledee (can't find a link anymore!) scripts, which allow for specific queries and output in JSON. From there I was able to format the JSON data as needed.

Facebook Graph api Search post with Picture

If I post a public status update on Facebook with a photo, I want to use the Graph Search API to find it.
Here is the link I've been using:
https://graph.facebook.com/search?q=%23tacomaevent&type=post
I am hoping to be able to use a hashtag such as #tacomaevent so I can search for public text and picture posts.
Thanks for your help.
I know this is an old question, but I would like to point out that, as of now, the search you proposed returns a JSON string containing a list of Facebook Post objects, which may have a property named picture. This property will contain a URL of the picture if one is available.
It has many other properties and they're documented here.
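As a sketch of what that looks like in practice (field names as documented at the time; the Graph API has changed considerably since, and an access token may be required), you could fetch the search URL and read the picture property off each post:

import json
import urllib2

# The same search URL as in the question.
url = "https://graph.facebook.com/search?q=%23tacomaevent&type=post"

posts = json.loads(urllib2.urlopen(url).read())
for post in posts.get("data", []):
    # 'picture' is only present when the post actually has a photo.
    if "picture" in post:
        print post["picture"]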

How do I change the amount of items retrieved from a flickr feed?

Does anyone know the parameter to change (decrease) the number of items retrieved from the Flickr feed?
It always returns 20 items by default. Example:
http://api.flickr.com/services/feeds/photos_public.gne?format=json
The documentation is rather scarce:
http://www.flickr.com/services/feeds/
Not possible, it seems.
There doesn't seem to be a way to control this from the Flickr API. However, I did find another site which offers RSS versions of Flickr feeds (precisely because the Flickr feeds are limiting):
http://www.degraeve.com/flickr-rss/
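Alternatively, since the feed itself can't be told to return fewer items, one workaround is to request the default 20 and slice client-side. A minimal sketch: the nojsoncallback=1 parameter asks Flickr for plain JSON; if it isn't honoured for feeds, you would need to strip the jsonFlickrFeed(...) wrapper manually.

import json
import urllib2

# Fetch the public photo feed as plain JSON.
url = ("http://api.flickr.com/services/feeds/photos_public.gne"
       "?format=json&nojsoncallback=1")
feed = json.loads(urllib2.urlopen(url).read())

# The feed returns about 20 items; just keep the first N yourself.
wanted = 5
for item in feed["items"][:wanted]:
    print item["title"], item["link"]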

Extracting *relevant* image from a web-page

I have a couple of Twitter-powered news aggregation websites. I have been planning to add images from the articles that I find on Twitter.
If I download the page and extract images using the <img> tag, I get a bunch of images, not all of them relevant to the article. For example, images of buttons, icons, ads, etc. are captured. How do I extract the image accompanying the article? I know there is a solution -- Facebook link sharer does this pretty well.
Mithun
Duplicate of: How to find and extract "main" image in website
Download all images from the page,
blacklist all images coming from an ad server,
then find some heuristic which will get you the correct image...
I think something like:
Biggest resolution += 5pts
Biggest filesize += 10 pts
Jpeg += 2 pts
then take the image with the most points and throw the rest away
Probably works for the majority of sites.
(It would require some fiddling with the heuristics, though.)
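A rough sketch of that scoring idea in Python: the point values are just the heuristics listed above, and reading image dimensions with Pillow is an assumption about your toolchain.

import os
from PIL import Image  # assumes Pillow is installed for reading image sizes

def pick_main_image(image_paths):
    """Apply the point scheme above to a list of downloaded image files."""
    if not image_paths:
        return None
    scores = dict.fromkeys(image_paths, 0)

    # Biggest resolution += 5 pts
    def resolution(path):
        width, height = Image.open(path).size
        return width * height
    scores[max(image_paths, key=resolution)] += 5

    # Biggest filesize += 10 pts
    scores[max(image_paths, key=os.path.getsize)] += 10

    # Jpeg += 2 pts
    for path in image_paths:
        if path.lower().endswith((".jpg", ".jpeg")):
            scores[path] += 2

    # Take the image with the most points and throw the rest away.
    return max(scores, key=scores.get)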
It's been a long time, but this may help next time.
You can use this API: https://urlmeta.org/
It's very simple to use, and the result is exactly what's needed here.
An example of using the API:
<?php
// The article to extract the main image from.
$url = "http://timesofindia.indiatimes.com/business/india-business/Raghuram-Rajan-not-fit-to-be-RBI-Governor-Subramanian-Swamy/articleshow/52236298.cms";

// Call the urlmeta.org API and decode the JSON response.
$result = file_get_contents('https://api.urlmeta.org/?url=' . $url);
$array = json_decode($result, 1);

// The main image URL that urlmeta extracted for the page.
print_r($array['meta']['image']);
?>
And that's the result you needed.
I came up with a solution that is a bit hacky but works for me. Here is what I do to get thumbnails.
Say the headline of the page I find is "this is a headline".
I use this as a query to the Google Image API and then extract the first thumbnail I find.
It actually works quite well for the majority of cases. Check it out for yourself: http://cricketfresh.in
Mithun
PS: I think this is a good answer. I will give credit to someone who comes up with a more elegant answer.
I would guess that Facebook has a link extractor for the various sites it supports. Something like id="content" -> img (1st).
It turns out I was wrong: Facebook uses the Open Graph Protocol to define which image (og:image) and which metadata to use.
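For reference, pulling og:image out of a page is only a few lines with BeautifulSoup (the same parser used in a later answer here); the meta tag name comes from the Open Graph Protocol, so this only works on pages that declare it.

import urllib2
from BeautifulSoup import BeautifulSoup

def og_image(url):
    """Return the Open Graph image declared by a page, or None."""
    soup = BeautifulSoup(urllib2.urlopen(url).read())
    tag = soup.find("meta", attrs={"property": "og:image"})
    return tag["content"] if tag else None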

Parsing a website and getting the info I need

Hi, I need to retrieve the URL of the first article for a term I search on nytimes.com.
For example, if I search for Apple, this link would return the result:
http://query.nytimes.com/search/sitesearch?query=Apple&srchst=cse
You just replace Apple with the term you are searching for.
If you click on that link, you will see that NYTimes asks you if you mean Apple Inc.
I want to get the URL for that link and go to it.
There you will get a lot of information on Apple Inc., and if you scroll down you will see the articles related to Apple.
What I ultimately want is the URL of the first article on that page.
I really do not know how to go about this. Do I use Java, or what do I use? Any help would be greatly appreciated; I would put a bounty on this later, but I need the answer ASAP.
Thanks
EDIT: Can we do this in Java?
You can use Python with the standard urllib2 module to fetch the pages and the excellent HTML parser BeautifulSoup to obtain the information you need from them.
From the documentation of BeautifulSoup, here's sample code that fetches a web page and extracts some info from it:
import urllib2
from BeautifulSoup import BeautifulSoup

# Fetch the page and parse it.
page = urllib2.urlopen("http://www.icc-ccs.org/prc/piracyreport.php")
soup = BeautifulSoup(page)

# Walk the table cells that hold each incident and print the details.
for incident in soup('td', width="90%"):
    where, linebreak, what = incident.contents[:3]
    print where.strip()
    print what.strip()
    print
This is a nice and detailed article on the topic.
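Adapted to this question, a sketch might look like the following. Note the assumption that the first result link lives inside a ul with class "results" (the same selector the C# answer below relies on), which could break whenever the NYTimes markup changes.

import urllib2
from BeautifulSoup import BeautifulSoup

def first_result_url(term):
    """Return the href of the first result on the NYTimes site search."""
    url = "http://query.nytimes.com/search/sitesearch?query=%s&srchst=cse" % term
    soup = BeautifulSoup(urllib2.urlopen(url).read())

    # Assumption: results are listed in <ul class="results">.
    results = soup.find("ul", attrs={"class": "results"})
    if results is None:
        return None
    first_link = results.find("a")
    return first_link["href"] if first_link else None

print first_result_url("Apple")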
You certainly can do it in Java. Look at the HttpURLConnection class. Basically, you give it a URL, call the connect function, and you get back an input stream with the contents of the page, i.e. HTML text. You can then process that and parse out whatever information you want.
You're facing two challenges in the project you are describing. The first, and probably the lesser of the two, is figuring out the mechanics of how to connect to a web page and get hold of its text within your program. The second and probably bigger challenge is figuring out exactly how to extract the information you want from that text.
I'm not clear on the details of your requirements, but you're going to have to sort through a ton of text to find what you're looking for. Without actually looking at the NY Times site at the moment, I'm sure it has all sorts of decorations like pretty pictures, the company logo, and headlines, plus menus, advertisements, and all sorts of other stuff. I sincerely doubt that the NY Times, or almost any other commercial web site, is going to return a search page that includes nothing but a link to the article you are interested in. Somehow your program will have to figure out that the first link is to the "subscribe on line" page, the second is to an advertisement, the third is to customer service, the fourth and fifth are additional advertisements, the sixth is to the home page, and so on, until you finally get to the one you're actually interested in. How will you identify the interesting link? There are probably headings or formatting that make it recognizable to a human being, but a human uses a lot of intuition to screen out the clutter, and that intuition can be difficult to reproduce in a program.
Good luck!
You can do this in C# using the HTML Agility Pack, or using LINQ to XML if the site is valid XHTML. EDIT: It isn't valid XHTML; I checked.
The following (tested) code will get the URL of the first search result:
var doc = new HtmlWeb().Load(@"http://query.nytimes.com/search/sitesearch?query=Apple&srchst=cse");
var url = HtmlEntity.DeEntitize(doc.DocumentNode.Descendants("ul")
    .First(ul => ul.Attributes["class"] != null
              && ul.Attributes["class"].Value == "results")
    .Descendants("a")
    .First()
    .Attributes["href"].Value);
Note that if their website changes, this code might stop working.