What is the difference between the two names? Thanks.
A pingback automatically finds URIs in the post and pings them, while a trackback requires the URI to be entered manually.
See this Whitepaper: Pingback vs Trackback
Both Trackback and Pingback allow you to notify other URLs (webpages) that you linked to them from your page (e.g. a blog post).
The differences between the two are:
Trackback uses a simple HTTP POST request to notify the other side, while Pingback uses XML-RPC.
Trackback support on the remote site has to be discovered by parsing the HTML source for some commented RDF, while the Pingback URL is sent as an HTTP header.
By using the HTTP header, pingbacks are possible for files other than HTML, e.g. images and videos.
The Trackback specification leaves many things undefined, while the Pingback spec is very clear.
The pingback specification requires pingback receivers to check if the original URL contains a link to them. This is not necessary for trackbacks.
What the others said about "automatic": this has nothing to do with either spec. It's your blogging software that (automatically or not) sends requests to the remote servers for any links in your blog post.
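To make the discovery difference concrete, here is a minimal sketch in Python of how a client might locate each endpoint for a page it has linked to. The target URL is made up; the X-Pingback header name and the trackback:ping RDF attribute are the ones the respective specs use.

import re
import urllib.request

# Hypothetical page we linked to from our post.
target = "http://example.com/some-post"

resp = urllib.request.urlopen(target)
html = resp.read().decode("utf-8", errors="replace")

# Pingback: the endpoint is advertised in a plain HTTP response header
# (a <link rel="pingback" ...> element in the HTML is the fallback).
pingback_endpoint = resp.headers.get("X-Pingback")

# Trackback: the endpoint has to be dug out of an RDF block embedded
# (usually inside an HTML comment) in the page source.
m = re.search(r'trackback:ping="([^"]+)"', html)
trackback_endpoint = m.group(1) if m else None

print("pingback endpoint: ", pingback_endpoint)
print("trackback endpoint:", trackback_endpoint)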
Both Pingback and Trackback give bloggers a way to notify each other's sites when they link to one another.
Pingback:
A writes a post on A's blog.
B writes a post on B's blog, mentioning/linking to A's article.
B's blogging software will automatically send a pingback to A.
A's blogging software will receive the pingback. It will then automatically fetch B's post to confirm that the pingback really originates there, i.e. that the link is present (a sketch of this exchange follows these steps).
A can then choose to display B's pingback as a comment on A's post. It will be shown solely as a link to B's site.
Pingbacks also work within your own site: if one of your posts links to another of your posts, WordPress will send a self-ping. This can get really annoying.
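A rough sketch of steps 3 and 4, again in Python. The blog URLs and the XML-RPC endpoint are hypothetical; pingback.ping(sourceURI, targetURI) is the single method the Pingback spec defines.

import urllib.request
import xmlrpc.client

source = "http://b-blog.example/post-linking-to-a"   # B's post (hypothetical)
target = "http://a-blog.example/original-post"       # A's post (hypothetical)

# Step 3: B's software calls A's XML-RPC pingback endpoint
# (discovered via the X-Pingback header, as shown earlier).
server = xmlrpc.client.ServerProxy("http://a-blog.example/xmlrpc")
server.pingback.ping(source, target)

# Step 4 (on A's side): before accepting the ping, fetch the claimed
# source page and verify that it really links to the target.
html = urllib.request.urlopen(source).read().decode("utf-8", errors="replace")
if target in html:
    print("pingback verified, show it as a comment")
else:
    print("no link found, reject it as spam")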
Trackback:
A writes a post on A's blog.
B wants to comment on A's post, but B wants his/her own readers to see the response and be able to comment on it.
B then writes a post on his/her own blog and sends a trackback to A's blog post (a sketch of such a ping follows these steps).
A receives B's trackback and chooses whether to display it as a comment. The comment will be shown as a title, an excerpt, and a link to B's blog post.
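For comparison, B's trackback ping in step 3 is just a form-encoded HTTP POST to A's trackback URL. A minimal sketch with hypothetical URLs; the parameter names (url, title, excerpt, blog_name) are the ones the TrackBack spec uses.

from urllib.parse import urlencode
import urllib.request

trackback_url = "http://a-blog.example/trackback/123"   # hypothetical

data = urlencode({
    "url": "http://b-blog.example/response-post",
    "title": "My response to A",
    "excerpt": "A short summary shown under A's post...",
    "blog_name": "B's blog",
}).encode("utf-8")

req = urllib.request.Request(trackback_url, data=data)   # POST because data is set
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))                   # a small XML document with an <error> code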
Pingback and Trackback use different protocols. Pingback is automatic and Trackback is manual. You can read up on the difference on Geeklog.
The trackback URL receives the originating server's address; if enabled, a pingback-style request is then sent to the originating URL to verify that the trackback is not spam.
It's basically an OAuth2/OIDC (IdentityServer4) problem that happens when the user's browser loads the Identity Service (Site A) page, which FORM POSTs the authorization code and id_token back to the relying party site (Site B).
The relying party site (Site B) receives the FORM POST request, but the request doesn't include the cookies that Site B previously placed on the user's browser, which makes the verification process fail.
I have tried setting the Access-Control-Allow-Origin header, but it didn't seem to help with the FORM POST scenario (it is not an AJAX call).
In a common OAuth2/OIDC integration, should I not expect cookies to be posted back to the relying party site (Site B) along with the authorization code and id_token? Or, to describe it more generally: when FORM POSTing from Site A to Site B, should I not expect Site B's cookies to be part of the request to Site B?
That's not CORS; it's a different issue, most likely related to the SameSite cookie policy, and it is very browser-specific.
If your Site B is ASP.NET Core, you can either set:
services.ConfigureApplicationCookie(opts => { opts.Cookie.SameSite = SameSiteMode.None; });
// and
app.UseCookiePolicy(new CookiePolicyOptions { MinimumSameSitePolicy = SameSiteMode.None });
// note: recent browsers also require the Secure flag (HTTPS) for cookies marked SameSite=None
(see the longer discussion on the ASP.NET Core GitHub)
or use the smarter and more secure trick of turning the cross-site POST into a same-site POST, described by the IdentityServer author.
I am running a website with affiliate links.
When the visitors of mydomain.com/page.php click on such an affiliate link,
they are sent to a link on a domain owned by the affiliate network (network.com/link), and then redirected through the affiliate network to the relevant page in the store (store.com/page.asp).
Over the last two months, the reports from the affiliate network indicate that about 13,000 clicks that I sent to such links carried mydomain.com/page.php as the referring URL, as I would expect.
However, about 20 other clicks carried abnormal referring URLs, such as:
http://app.mam.vaccint.com/getapp/CT3297962/mam.html
http://www.store.com/page.asp
http://www.network.com/link
http://apnwidgets.ask.com/widget/everest/radio/4/radio-button.html
http://search.yahoo.com/search
http://www.google.com/webhp
http://www.bing.com/
http://192.168.1.1/spyware/blockpage
Unfortunately, this has led the compliance team of my affiliate network to believe that I have a hidden traffic source apart from my website. They claim it appears as if I am using some kind of third-party software to send traffic to store.com, which of course is not true.
They are holding me accountable for this and require me to provide an explanation.
What could have caused my website visitors to arrive at network.com / store.com while carrying the above referring URLs?
I'm not sure, but looking at the referring URLs it is quite likely that these pages had your content embedded or listed on them. For example:
google.com/webhp - a search result, cached copy, or image result listing your webpage
bing.com - another search-result page (generally a web cache)
192.168.1.1/spyware/blockpage - it looks like someone tried to access your page but ended up on this custom firewall page; somehow the affiliate widget still got loaded, presumably because the firewall permitted it
store.com/page.asp & network.com/link - these look like internal redirect URLs that passed traffic on to the relevant page (store.com/page.asp)
(the rest) - the other links probably have a similar story: they ended up sending traffic to your affiliate network but carried a different URL
I'm sure that if you reproduce this in front of them via the Google or Bing cache, they will get a better understanding of the issue.
Otherwise, try to identify the source referrer of network.com/link, which is probably under their control, so they should have access to the logs.
I am working on a project involving finding out what http requests were made by the user.
I have all the HTTP request and response headers (but not the data), and I need to find out which content was requested by the user and which content was fetched automatically (e.g. ads, background streaming, and all sorts of irrelevant content).
When recording the network traffic (even for a short period) a lot of content gets generated, and most of it is not relevant.
Since I'm no expert in HTTP, I'd like some guidance on which headers I can safely use (assuming most web pages send them) and which headers might be omitted, so that it would not be safe to rely on them.
My current idea:
Find all the HTML files, determine which were the main HTML files (no referrer, or a search-engine referrer), and then recursively mark all the files requested by these HTML files as relevant and discard the rest.
The problem with this is that I've been told I can't trust the Referer header, and I have no idea how to identify which HTML pages were actually clicked by the user.
Any help will be appreciated. Sorry if the post is not formatted well; this is my first question here.
EDIT:
I've been told the question isn't clear enough, so all I'm asking for is some way to determine which requests were triggered by the user and which requests were made automatically.
To determine which request was sent by the user, look at the first request sent through the connection and at its response body.
All external files referenced in that first body, which are then fetched one after another, were most likely requested automatically without the user's interaction.
The time passing between requests could also be a factor worth looking at.
Another thing you already mentioned yourself is the Referer header. As far as RFC 2616 section 14.36 goes, it can be trusted to a degree: the Referer header must not be sent if the Request-URI was obtained from user input (e.g. typed into the address bar). However, there can also be automatically fetched content that does not carry a Referer header, since the header is optional.
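To illustrate the heuristic, here is a rough sketch in Python. It assumes you have already parsed the capture into a list of records containing the request and response headers; the record layout and the search-engine list are made up, and the classification is only a heuristic, for exactly the reasons discussed above.

# `requests` is assumed to be a list of dicts like
# {"url": ..., "request_headers": {...}, "response_headers": {...}},
# in the order the requests were observed.
def classify(requests):
    user_initiated, automatic = [], []
    seen_pages = set()
    for r in requests:
        ctype = r["response_headers"].get("Content-Type", "")
        referer = r["request_headers"].get("Referer")
        is_html = "text/html" in ctype
        from_search = bool(referer) and any(
            s in referer for s in ("google.", "bing.", "yahoo."))
        if is_html and (not referer or from_search):
            # typed/bookmarked address, or a click on a search result
            user_initiated.append(r)
            seen_pages.add(r["url"])
        elif is_html and referer in seen_pages:
            # an HTML page reached from a page we already saw:
            # could be a user click or an automatic frame -- ambiguous
            user_initiated.append(r)
            seen_pages.add(r["url"])
        else:
            # images, scripts, ads, tracking beacons, background streams, ...
            automatic.append(r)
    return user_initiated, automatic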
I've seen articles and posts all over (including SO) on this topic, and the prevailing commentary is that same-origin policy prevents a form POST across domains. The only place I've seen someone suggest that same-origin policy does not apply to form posts, is here.
I'd like to have an answer from a more "official" or formal source. For example, does anyone know the RFC that addresses how same-origin does or does not affect a form POST?
Clarification: I am not asking whether a GET or POST can be constructed and sent to any domain. I am asking:
if Chrome, IE, or Firefox will allow content from domain 'Y' to send a POST to domain 'X'
if the server receiving the POST will actually see any form values at all. I say this because the majority of online discussion records testers saying the server received the post, but the form values were all empty / stripped out.
What official document (i.e. RFC) explains what the expected behavior is (regardless of what the browsers have currently implemented).
Incidentally, if same-origin does not affect form POSTs, then it becomes somewhat more obvious why anti-forgery tokens are necessary. I say "somewhat" because it seems too easy to believe that an attacker could simply issue an HTTP GET to retrieve a form containing the anti-forgery token, and then make an illicit POST which contains that same token. Comments?
The same-origin policy applies only to browser-side programming languages (e.g. JavaScript). So if you try to post to a different server than the origin server using JavaScript, the same-origin policy comes into play, but if you post directly from the form, i.e. the action points to a different server, like:
<form action="http://someotherserver.com" method="post">
and there is no JavaScript involved in posting the form, then the same-origin policy is not applicable.
See wikipedia for more information
It is possible to build an arbitrary GET or POST request and send it to any server accessible to a victim's browser. This includes devices on your local network, such as printers and routers.
There are many ways of building a CSRF exploit. A simple POST-based CSRF attack can be sent using the .submit() method. More complex attacks, such as cross-site file-upload CSRF attacks, exploit CORS' use of the xhr.withCredentials behavior.
CSRF does not violate the Same-Origin Policy for JavaScript because the SOP is concerned with JavaScript reading the server's response to a client's request. CSRF attacks don't care about the response; they care about a side effect, or state change, produced by the request, such as adding an administrative user or executing arbitrary code on the server.
Make sure your requests are protected using one of the methods described in the OWASP CSRF Prevention Cheat Sheet. For more information about CSRF consult the OWASP page on CSRF.
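For reference, here is a minimal sketch of the synchronizer-token idea from that cheat sheet, in framework-agnostic Python; the session object is a stand-in for whatever your web stack provides.

import secrets

def issue_csrf_token(session):
    # Generate an unguessable token, store it server-side in the session,
    # and embed the same value in a hidden field of the form you render.
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def validate_csrf_token(session, submitted_token):
    # On every state-changing POST, compare the submitted hidden field
    # against the session copy in constant time.
    expected = session.get("csrf_token", "")
    return bool(submitted_token) and secrets.compare_digest(expected, submitted_token)

An attacker's page on another origin can still make the browser send the POST, but it cannot read the GET response that contains the token, which is why the token defeats the scenario raised in the question above.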
The same-origin policy has nothing to do with sending a request to another URL (different protocol, domain, or port).
It is all about restricting access to (reading) the response data from another URL.
So JavaScript code within a page can post to an arbitrary domain, or submit forms within that page to anywhere (unless the form is in an iframe with a different URL).
But what makes these POST requests ineffective is that they lack anti-forgery tokens, so they are ignored by the other URL. Moreover, if the JavaScript tries to get such a security token by sending an AJAX request to the victim URL, the Same-Origin Policy prevents it from accessing that data.
A good example: here
And a good documentation from Mozilla: here
I don't understand: how are web servers and trackers like Google Analytics able to track referrals?
Is it part of HTTP?
Is it some (un)specified behavior of the browsers?
Apparently every time you click a link on a web page, the original web page's address is passed along with the request.
What is the exact mechanism behind that? Is it specified by some spec?
I've read a few docs and I've played with my own Tomcat server and my own Google Analytics account, but I don't understand how the "magic" happens.
Bonus (totally related) question: if, on my own website (served by Tomcat), I put a link to another site, does the other site see my website as the "referrer" without me doing anything special in Tomcat?
Referer (misspelled in the spec) is an HTTP header. It's a standard header that all major HTTP clients support (though some proxy servers and firewalls can be configured to strip it or mangle it). When you click on a link, your browser sends an HTTP request that contains the page being requested and the page on which the link was found, among other things.
Since this is a client (request) header, the server is irrelevant. So yes, clicking a link on a page hosted on your own server will result in that page's URL being sent to the other site's server, though your server may not necessarily be reachable from that other site, depending on your network configuration.
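To see the header for yourself, here is a tiny echo server using only the Python standard library, purely for illustration; in a Tomcat servlet the equivalent would be reading the same header with request.getHeader("referer").

from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoReferer(BaseHTTPRequestHandler):
    def do_GET(self):
        # The browser sends the previous page's URL, if any, in this header.
        referer = self.headers.get("Referer", "(none sent)")
        body = f"Referer: {referer}\n".encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("localhost", 8000), EchoReferer).serve_forever()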
One detail to add to what's already been said about how browsers send it: HTTPS changes the behavior a bit. I am not aware whether it is in any spec, but if you jump from HTTPS to HTTP, whether you stay on the same domain or go to a different one, sometimes the referrer is not sent. I don't know the exact rules, but I've observed this in the wild. If there's some spec or description of this, it would be great.
EDIT: ok, the RFC says plainly:
Clients SHOULD NOT include a Referer header field in a (non-secure) HTTP request if the referring page was transferred with a secure protocol.
So, if you go from an HTTPS page to an HTTP link, the referrer info is not sent.
From http://en.wikipedia.org/wiki/HTTP_referrer:
The referrer field is an optional part of the HTTP request sent by the browser program to the web server.
From RFC 2616:
The Referer[sic] request-header field allows the client to specify, for the server's benefit, the address (URI) of the resource from which the Request-URI was obtained (the "referrer", although the header field is misspelled.)
If you request a web page using a browser, your browser will send the HTTP Referer header along with the request.
Your browser passes referrer with each page request.
It seems unusual that JavaScript has access to this as well, but it does.
Yes, the browser sends the previous page in the HTTP headers. This is defined in the HTTP/1.1 spec:
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.36
The answer to your question is yes, as the browser sends the referer.
"The referrer field is an optional part of the HTTP request sent by the browser program to the web server."
http://en.wikipedia.org/wiki/HTTP_referrer
When you click on a link the browser adds a Referer header to the request. It is part of HTTP. You can read more about it here.