What's the point of oEmbed API endpoints and URL schemes vs. link tags?

The oEmbed specification mentions two different ways of finding the oEmbed content of a URL:
Knowing the website's API endpoint and passing it, through a GET parameter, the URL you want info about, provided that URL matches the pattern the provider declared.
Discovering the URL of the oEmbed version via a <link rel="alternate" type="application/json+oembed" ... /> (or text/xml+oembed) element in the page's HTML head.
The 2nd way seems more generic, as you don't have to store and maintain a whole list of providers. Moreover, lists of providers are a sign of a centralized internet where only a few actors exist; that approach is hardly scalable.
I can see a use for the 1st approach, though, for websites that can parse resources made available by someone else. For example, I can provide an oEmbed version of video pages from website Foo. However, for several reasons, mainly security-related, I wouldn't trust someone who says "I can parse resource X for you" unless X's author is OK with that, which brings us back to approach 2.
So my question is: what did I miss here? What's the use of the 1st method of dealing with oEmbed? For instance, why store (and maintain up-to-date) a whole list of endpoints and patterns like oohEmbed does if you have a generic way of discovering it on-the-fly and for virtually any resource on the internet?
As a very closely related question, which I think may be asked at the same time (please correct me if I'm wrong): what happens if one doesn't provide a central endpoint for oEmbed content, but rather, say, expects a '?version=oembed' parameter on each URL, which returns the oEmbed version instead of the standard one?

If I recall correctly, supporting both mechanisms was a compromise that we figured would help drive adoption. It's much easier to persuade large web properties to add a single endpoint vs. adding markup (that's irrelevant to most clients) to every response body. It was a pragmatic choice.
Longer term we planned to leverage some of the work Eran Hammer-Lahav was doing around discovery rather than re-inventing it (poorly, again). Unfortunately, his ideas still haven't gotten much traction and the web still lacks a good, standardized way to do this sort of thing.

I was hoping to find an answer here but it looks like everyone else is as confused as we are. The advantage of using option 1, in my opinion, is that it only takes one JSON request instead of a potentially expensive HTML request followed by the JSON request. You can always use option 2 as a fallback in case you can't match a pattern in your pre-baked list of oEmbed providers.
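To make that concrete, here's a rough sketch of the two routes (Python with requests and BeautifulSoup; the provider table is illustrative, not a real registry):

    import re
    import requests
    from bs4 import BeautifulSoup

    # Option 1: a pre-baked table of URL patterns -> oEmbed endpoints.
    PROVIDERS = [
        (re.compile(r"https?://(www\.)?youtube\.com/watch.*"),
         "https://www.youtube.com/oembed"),
    ]

    def fetch_oembed(url):
        # Known provider: a single JSON request.
        for pattern, endpoint in PROVIDERS:
            if pattern.match(url):
                return requests.get(endpoint,
                                    params={"url": url, "format": "json"}).json()
        # Option 2 fallback: fetch the HTML and look for the discovery <link> tag.
        soup = BeautifulSoup(requests.get(url).text, "html.parser")
        link = soup.find("link", type="application/json+oembed")
        if link and link.get("href"):
            return requests.get(link["href"]).json()
        return None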

oEmbed discovery is a major security concern. WordPress, for example, has a whitelist of supported oEmbed providers.
Suppose that any random URL on the internet could trigger an oEmbed embed. That would mean anyone could hack your site.
Steps:
Create a new site and add oEmbed discovery markup to it.
Post the URL to a form on your site. Now your site performs the oEmbed request on my behalf.
Exploit:
by denial of service (DoS): e.g. redirect the URL to a tarpit or feed it a 1 GB JSON response.
by cross-site scripting (XSS): inject arbitrary HTML into pages that other people can see.
by stealing the admin's session cookie via XSS: now the attacker can log in to your CMS, upload files, and exploit even further.
It's XSS to the max, with little to stop it. The only sane thing to do is whitelist proper endpoints. That's why oEmbed endpoints are explicitly listed.
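As a rough illustration of the whitelisting idea (the host list is made up, and you would still have to treat the returned "html" field as untrusted):

    import json
    from urllib.parse import urlparse
    import requests

    ALLOWED_ENDPOINT_HOSTS = {"www.youtube.com", "vimeo.com"}  # example whitelist
    MAX_BYTES = 1_000_000  # don't let a hostile provider feed you gigabytes

    def safe_oembed(endpoint, target_url):
        if urlparse(endpoint).hostname not in ALLOWED_ENDPOINT_HOSTS:
            raise ValueError("oEmbed endpoint not whitelisted")
        resp = requests.get(endpoint, params={"url": target_url, "format": "json"},
                            timeout=5, stream=True)
        body = b""
        for chunk in resp.iter_content(chunk_size=8192):
            body += chunk
            if len(body) > MAX_BYTES:
                raise ValueError("oEmbed response too large")
        return json.loads(body)  # the "html" field still needs sanitizing/sandboxing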
If you want something scalable, you might like www.noembed.com and www.embedly.com. They provide oEmbed support for various sites which don't do oEmbed themselves.

Related

REST And Can Delete/Update etc actions

I am writing a REST API. However, one of the requirements is to allow the caller to determine if an action may be performed (so that, for example, a button can be enabled or disabled, etc.)
The action might not be allowed for several reasons - perhaps user permissions, but possibly because, for example, you can't delete a shared object, or you can't create an item with the same name as another item or an array of other business rules.
All the logic to determine if something can be deleted should be determined in the back end, but the front end must show this in the GUI.
I am trying to find the right pattern for this in REST, and am coming up a bit short. I could create a parallel API so that for every entity endpoint there was an EntityPermissions endpoint, but that seems like overkill. I could also do something like add an HTTP header indicating that the request is only meant to check permissions, not perform the action, but that seems a bit dubious and likely to mess up the HTTP cache.
Can anyone point me to a common pattern for doing something like this? Does it have a name? Or a web page that discusses it? I'm sure everyone has their own ideas on this (like my dumb ideas), but this seems to be a common enough requirement that I figure there must be a common pattern for it. But Google didn't help much.
There are going to be multiple opinionated answers about this. I'll share mine. It might not be the best for your problem, but it's a valid solution.
If you followed the real definition of REST, you would be building a hypermedia/HATEOAS-style web service. URLs would not be hardcoded; they would be discovered, and available actions would be indicated by the existence of a link.
If an action may not be performed, you simply omit the link. When a client fetches the resource, it sees all the available actions right there.
A popular format for hypermedia APIs is HAL. You might decorate the links further with more information from HTTP Link hints.
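As a rough sketch of what that looks like (HAL-style, with made-up field names and business rules), the server only emits the delete link when the action is actually allowed:

    # Hypothetical sketch: the client enables its delete button only if the
    # "delete" link is present in the representation.
    def represent_document(doc, current_user):
        resource = {
            "id": doc.id,
            "name": doc.name,
            "_links": {
                "self": {"href": f"/documents/{doc.id}"},
            },
        }
        # Business rules stay on the back end; the client never re-implements them.
        if current_user.can_delete(doc) and not doc.is_shared:
            resource["_links"]["delete"] = {"href": f"/documents/{doc.id}"}
        return resource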
If this is the first time you've heard of hypermedia APIs, there might be a bit of a learning curve, but the results of learning this can be very beneficial.

Best practice for email links that will set a DB flag?

Our business wants to email our customers a survey after they work with support. For internal reasons, we want to ask them the first question in the body of the email. We'd like to have a link for each answer. The link will go to a web service, which will store the answer, then present the rest of the survey.
So far so good.
The challenge I'm running into: making a server-side change based on an HTTP GET is bad practice, but you can't do a POST from a link. The options seem to be:
Use an HTTP GET instead, even though that's not correct and could cause problems (https://twitter.com/rombulow/status/990684453734203392)
Embed an HTML form in the email and style some buttons to look like links (likely not compatible with a number of email platforms)
Don't include the first question in the email (not possible for business reasons)
Use HTTP GET, but have some sort of mechanism which prevents a link from altering the server state more than once
Does anybody have any better recommendations? Googling hasn't turned up much about this specific situation.
One thing to keep in mind is that HTTP specifies semantics, not implementation. If you want to change the state of your server on receipt of a GET request, you can. See RFC 7231:
This definition of safe methods does not prevent an implementation from including behavior that is potentially harmful, that is not entirely read-only, or that causes side effects while invoking a safe method. What is important, however, is that the client did not request that additional behavior and cannot be held accountable for it. For example, most servers append request information to access log files at the completion of every response, regardless of the method, and that is considered safe even though the log storage might become full and crash the server. Likewise, a safe request initiated by selecting an advertisement on the Web will often have the side effect of charging an advertising account.
Domain-agnostic clients are going to assume that GET is safe, which means your survey results could get distorted by web spiders crawling the links, browsers pre-loading resources to reduce perceived latency, and so on.
Another possibility that works in some cases is to treat the path through the graph as the resource. Each answer link acts like a breadcrumb trail, encoding into itself the history of the client's answers. So a client that answered A and B to the first two questions is looking at /survey/questions/questionThree?AB, while the client that answered C to both is looking at /survey/questions/questionThree?CC. In other words, you aren't changing the state of the server; you are just guiding the client through a pre-generated survey graph.
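A rough sketch of how those links might be generated (the paths follow the example above; the function name is just illustrative):

    # Each answer link points at the next question with the running answer history
    # carried in the URL, so following a link changes nothing on the server.
    def answer_links(next_question, history):
        # history is e.g. "A" (answered A to the previous question)
        return {
            choice: f"/survey/questions/{next_question}?{history}{choice}"
            for choice in ("A", "B", "C")
        }

    # answer_links("questionThree", "A")
    # -> {"A": ".../questionThree?AA", "B": ".../questionThree?AB", "C": ".../questionThree?AC"}

Presumably only the final step of the survey actually records the accumulated path (e.g. with a POST), so the single unsafe request happens where it belongs.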

RESTful API and web navigation - are they compatible?

Maybe I'm confusing things or over-complicating them, but I'm struggling to develop a RESTful API that supports both HTML and JSON content types. Take for example a simple user management feature. I would expect to have an API that looks like this:
GET /users: lists all users
GET /users/{id}: views a single user
POST /users: creates a new user
A programmatic client posting to /users with a JSON payload would expect a 201 Created response with a Location header specifying the URL to the newly created user, e.g. /users/1. However, a person creating a user through his web browser would post to the same URL with a form-encoded payload and would expect to be redirected to the user list page, requiring the API to return a 302/303 redirect with a Location header of /users.
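Here's a rough sketch of the dual behaviour I'm describing, assuming a Flask-style framework (save_user is a hypothetical placeholder):

    from flask import Flask, request, jsonify, redirect

    app = Flask(__name__)

    @app.route("/users", methods=["POST"])
    def create_user():
        if request.is_json:
            user = save_user(request.get_json())      # hypothetical persistence helper
            resp = jsonify(user)
            resp.status_code = 201
            resp.headers["Location"] = f"/users/{user['id']}"
            return resp
        else:
            save_user(request.form.to_dict())
            return redirect("/users", code=303)       # browser goes back to the list page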
From a purely conceptual point of view, I find it surprising that an API would react differently based on the submitted content type, and wonder if this is bad design. Then again, maybe it's a mistake to consider the programmatic API and the web-centric API to be the same API and one shouldn't worry about such concerns and worry more about providing a good experience to the different clients.
What do you think?
You've stumbled upon two separate issues.
One, the typical web browser is a pretty lousy REST client.
Two, web application APIs are not necessarily REST APIs (see #1).
And thus, your conundrum of trying to serve two masters.
Arguably representation has little to do with application semantics when it comes to details such as workflow, particularly if you have equally rich media types (vs a constrained media type such as an image, or something else).
So, in those terms, it's really not appropriate to have the application behave differently given similar media types.
On the other hand, the media type is yet another aspect of the request which can influence behaviour on the back end. You could, for example, be requesting an elided "lite" representation that may well not offer links to other parts of the API that a richer media type would; or your authorization level may be a factor in what data you can view, what other relations are available, or even what media types are supported at all.
So it's fair to say that every aspect of the request can have an impact on the particular semantics and effect of any given request to the server. In that light, your scenario is not really off the mark.
In the end, it's down to documentation to clarify your intent as an API designer.

Anyone ever tried to use Twitter to replace comments sections on web apps?

Here's the scenario I'm imagining.
Simple blog, users typically post comments in a comments form at the bottom of each blog article. Instead of that, use the Twitter API to pull tweets based on a hashtag. Base the hashtag on the article id (e.g. #site10201), where site is a prefix and the number is the article id.
Then provide a link to post a tweet using the hashtag, which would then get picked up by your Twitter API pull.
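In rough pseudocode the pull would look something like this (assuming the v1.1-era search API and app auth; the exact endpoint, auth scheme, and limits vary and have changed over time):

    import requests

    def fetch_comments(article_id, bearer_token):
        hashtag = f"#site{article_id}"
        resp = requests.get(
            "https://api.twitter.com/1.1/search/tweets.json",  # illustrative endpoint
            params={"q": hashtag, "count": 50},
            headers={"Authorization": f"Bearer {bearer_token}"},
            timeout=10,
        )
        resp.raise_for_status()
        return [t["text"] for t in resp.json().get("statuses", [])]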
I'm imagining horrible spam issues, but other than that, bad idea?
It has some drawbacks compared to more run-of-the-mill database systems:
Additional network overhead. Most self-hosted blogs typically rely on the database and blog being on the same server (physical or virtual), so a DB lookup is fast (and reliable) compared to Twitter API requests.
Caching issues. One host is only allowed X requests to Twitter at a time (the next request is going to end up a 404), and how are you going to manage that from your website for a scenario which becomes steadily more complex as articles are added? Presumably you need to authenticate, so the easy way out is a security liability. (The easy way out being to use JavaScript in the browser to perform the actual request, which neatly circumvents the problem in 20/80 fashion.) Granted, most blogs don't get that kind of traffic. ;)
You tie your precious (or not so precious) comments to the mercy of the fail whale, which is kind of odd considering a self-hosted blog basically means you wanted that kind of control in the first place by not using a service like Blogger.
Is it possible to ensure uniqueness of hashtags in the general case? What are you going to do if someone else had the same bright idea, only took the name of the tag 5 ms before you did? Would you end up pulling the drivel of someone else's blog comments rather than the brilliance you have come to expect from yours? ;)
Lesser point: you rely on others having a Twitter account. Anonymous replies are off the table.
TOS and other considerations that may be imposed on you by Twitter, either now or in the future. Item (2) above is actually a major point in Twitter's TOS.

Screen scraping gotchas

When screen-scraping, what are the "gotcha"s to look out for?
The inspiration for this is: my spouse's co-worker asked me to scrape all the pages from a Blogger-hosted blog that her friend with cancer kept in her final months and this lady wanted to keep all of the posts in case the blog were ever deleted. I eventually found a free tool that was barely good enough.
One issue with scraping many Blogger pages is that there's often a navigation menu where you can click on the triangles to expand the post lists by year or month. These little buggers created insane amounts of duplicate content because you'd have the same page over and over again with different combinations of the menus being expanded/collapsed. In Blogger's case I'm not sure this is avoidable since the links are all formatted as real http links and not obvious JavaScript calls. Still, it got me thinking:
If you were to scrape a website, what kinds of potentially non-obvious things would you compensate for?
Do not use regex to scrape
While regular expressions can be good for a large variety of tasks, I find they usually fall short when parsing the HTML DOM. The problem with HTML is that the structure of your document is so variable that it is hard to accurately (and by accurately I mean a 100% success rate with no false positives) extract a tag.
What I recommend you do is use a DOM parser such as BeautifulSoup or equivalent (SimpleHTMLDom in PHP).
Some may think this is overkill, but in the end, it will be easier to maintain and also allows for more extensibility.
A regular expression could be devised to achieve the same goal, but it would be limited. For example, developing a regex to get the src and alt attributes would force the alt attribute to be either before or after src, and overcoming this limitation would add more complexity to the regular expression.
Also, consider the following. To properly match an <img> tag using regular expressions and to get only the src attribute (captured in group 2), you need the following regular expression:
<\s*?img\s+?[^>]*?\s*?src\s*?=\s*?(["'])((\\?+.)*?)\1[^>]*?>
And then again, the above can fail if:
The attribute or tag name is in capitals and the i modifier is not used.
Quotes are not used around the src attribute.
Another attribute than src uses the > character somewhere in its value.
Some other reason I have not foreseen.
So again, simply don't use regular expressions to parse an HTML document.
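For comparison, the same extraction with a DOM parser is short and order-independent (a sketch with BeautifulSoup; SimpleHTMLDom in PHP is similar):

    from bs4 import BeautifulSoup

    html = '<p>text <IMG alt="logo" src="/img/logo.png"> more</p>'
    soup = BeautifulSoup(html, "html.parser")
    for img in soup.find_all("img"):           # tag matching is case-insensitive
        print(img.get("src"), img.get("alt"))  # attribute order doesn't matter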
I screen scrape a lot. Some advice:
Emulate a User-Agent string for some browser you want to use. Different websites frequently return very different results depending on what your user agent is. If they don't recognize the User-Agent they will often revert to the lowest common denominator, so it's usually best to start with some recent browser. (For example, the World of Warcraft Armory returns beautiful, easy-to-parse XML if it thinks you're a recent Firefox; if it doesn't know what you are, it sends terrible HTML.)
Be polite to the site you're scraping; don't hit it too hard. Your scraper will go faster if you multi-thread it, making many requests at once, but that will annoy the site owner.
Be smart about error handling. Do not write code like while (1) { makeRequest(); }. If your code or the server throws an error a loop like this will immediately fetch another request, generating another error. It can get ugly quickly. Handle errors well and consider putting in sleeps or exits if you see a lot of errors.
When developing your parsing code, test against a cached version rather than hitting the server every time. This will make your development go faster and gives you the basis of a simple test suite.
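A minimal sketch tying these pieces of advice together (names and values are just illustrative): a browser-like User-Agent, a delay between live requests, loud error handling, and a local cache so the parsing code can be developed against saved copies:

    import hashlib, os, time
    import requests

    CACHE_DIR = "cache"
    HEADERS = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) Gecko/20100101 Firefox/115.0"}

    def fetch(url, delay=2.0):
        os.makedirs(CACHE_DIR, exist_ok=True)
        path = os.path.join(CACHE_DIR, hashlib.sha1(url.encode()).hexdigest() + ".html")
        if os.path.exists(path):              # develop and test against the cached copy
            with open(path, encoding="utf-8") as f:
                return f.read()
        time.sleep(delay)                     # be polite: throttle live requests
        resp = requests.get(url, headers=HEADERS, timeout=30)
        resp.raise_for_status()               # fail loudly instead of looping blindly
        with open(path, "w", encoding="utf-8") as f:
            f.write(resp.text)
        return resp.text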
First, I'd check for an RSS feed. On blogger, you just have to add /rss to the root url, if I remember correctly.
Then I'd check if there isn't already some tool to scrape blogger.
Then if there's no RSS feed, and no existing tool, I'd give up and do it by hand with copy/paste. Unless we're talking 5000 pages, it's much faster and easier that way. Take it from someone who's tried.
If you have access to the actual account, blogger has an export function.
edit: Or of course, you could try mechanical turk.
As far as gotchas are concerned, it's usually a good idea to limit the number of requests made over a certain period of time. Smashing a site with a lot of requests in a short space of time is a good way to have your requests rejected.
Aside from the technical considerations, make sure you're not putting yourself at legal risk. Most large sites have specific language in their terms of use that disallows programmatic access to their services via an automated program, and there are also the obvious copyright concerns.
From a technical standpoint, definitely use a DOM parser library and you'll save loads of time. Many provide the ability to read HTML into an XML structure that can be queried using XPath to find exactly what you need.
If you know someone who has access to the account, they can use Blogger's "Export blog" feature.