I am sending an HttpOnly/Secure cookie to the client from my server, which runs on the default port. The client's request does not specify a port number, and the cookie comes back in the response.
When another call is made to the same server on a different port, the cookies are not sent to the server, whereas if I make the call without a port number, the cookies are sent.
What am I missing here? Does anything need to be enabled for the cookie to be sent cross-domain? According to RFC 6265, cookies are not port-specific, so is it browser behavior that's preventing this? I have tried Firefox and Chrome, and it doesn't work in either.
Although this is an old question, I'm posting this for people who encounter the problem and find themselves here. This is likely caused by the cross-origin policy: a different port on the same host counts as a different origin.
It can be worked around by making sure your server sends the Access-Control-Allow-Credentials CORS header (with a non-wildcard Access-Control-Allow-Origin), and by sending your XHR with the withCredentials flag set.
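A minimal sketch of both sides, assuming an Express-style Node server; the origins, port, and route are illustrative, not from the question:

    // Server side (illustrative): allow the page's origin explicitly.
    // Access-Control-Allow-Origin cannot be "*" when credentials are used.
    import express from "express";

    const app = express();
    app.use((req, res, next) => {
      res.setHeader("Access-Control-Allow-Origin", "https://example.com");
      res.setHeader("Access-Control-Allow-Credentials", "true");
      next();
    });
    app.get("/api/data", (req, res) => res.json({ ok: true }));
    app.listen(8443);

    // Client side: opt in to sending cookies on the cross-origin request.
    const xhr = new XMLHttpRequest();
    xhr.open("GET", "https://example.com:8443/api/data");
    xhr.withCredentials = true; // without this, cookies are not attached
    xhr.send();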
Users have started having problems with Flash-based traffic under Chrome 80: Cookies are not being sent with POST requests.
I'm aware of the SameSite updates, but our traffic is all same-domain, so I assumed this wouldn't affect us.
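For reference, my understanding of the SameSite change: cookies without an explicit SameSite attribute are now treated as SameSite=Lax, and a cookie that should still be sent on cross-site requests has to be marked explicitly, along these lines (the cookie name and value here are made up):

    Set-Cookie: session=abc123; Secure; HttpOnly; SameSite=None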
Debugging the request headers in the browser's dev tools:
Here's what I note:
In Chrome 73 (an older version):
there are no Sec-Fetch-* headers
Origin header is always correct
cookies are sent properly
In Chrome 80, GET requests:
Origin is correct, and cookies are sent
Sec-Fetch-* headers are now present
the Sec-Fetch-Site header says cross-site -- is this right? This is determined by the browser, correct? Why would Chrome label the traffic as cross-site? The request URL is the same as my page, same as window.location.hostname.
In Chrome 80, POST requests:
the Sec-Fetch-* headers are the same as for GET
the Origin header is null - wait, why? This is also assigned by the browser, right? Why null?
cookies are not sent
This makes absolutely no sense to me. It has always worked, we don't use multiple domains, and our cookies are Secure and HttpOnly. Can someone help me understand:
why Chrome 80 would label my requests as Sec-Fetch-Site: cross-site?
why Chrome 80 would send Origin: null and no cookies for POSTs?
I am experiencing the same problem. For POST requests from Flash, sometimes the request contains no cookies at all, and sometimes it contains only one cookie key (I have multiple keys in my cookie). It looks like a bug in Chrome 80, unless they did it on purpose, because they have wanted to kill Flash for a long time.
I have a page on domain A which loads a web worker script from domain B. The web worker fetches some PNGs from domain A's server.
In Firefox, the request to get the PNGs contains the cookie for my site (domain A).
In Chrome, it does not include the cookie for my site, so the request fails, because it must come from a logged-in user (which requires the session cookie to be sent with the request).
Which browser is behaving correctly, and can I do anything to make Chrome send the cookie for the current domain from within a web worker?
UPDATE:
I pulled all the files from domain B and hosted them on my server at domain A, so the web worker file is now on the same domain as the site itself, but Chrome still does not send the session cookie with the requests from the web worker.
With regard to the first problem, it looks like Firefox is incorrect; you shouldn't be able to instantiate a Worker from another domain. To quote the spec:
"If the scheme component of worker URL is not "data", and the origin
of worker URL is not the same as the origin specified by the incumbent
settings object, then throw a SecurityError exception and abort these
steps."
With regard to Chrome, Workers run in a separate context, but they work for me, and without seeing more code it's hard to answer. But if you visit this demo, break before the postMessage to the worker, and set document.cookie='test=1', you will see that the cookie is set when the request goes out from the worker.
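For completeness, when the worker script is served from the same origin, a fetch from inside it can request cookies explicitly; a minimal sketch (the message shape is made up):

    // Inside the worker: fetch a resource with cookies attached.
    // credentials: "same-origin" is the default in modern browsers, but
    // being explicit removes one variable when debugging missing cookies.
    onmessage = async (e: MessageEvent<string>) => {
      const res = await fetch(e.data, { credentials: "same-origin" });
      postMessage(await res.blob());
    };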
I have this app that uses EWS to access mail, using the standard /EWS/Exchange.asmx SOAP endpoint.
One of my users' mail servers is protected by Microsoft Forefront, and the initial HTTP request to
https://server_name/EWS/Exchange.asmx
is redirected (HTTP 302) to:
https://server_name/CookieAuth.dll?GetLogon?curl=Z2FEWSZ2FExchange.asmx&reason=0&formdir=3
which is a regular HTML page, the point of which, I guess, is to make the user authenticate "manually".
I hadn't heard about Forefront until today, and I'm not sure how to handle it.
Is this normal behavior for Forefront (i.e. it always redirects the initial HTTP request), or is it triggered by something in my app? For example, user-agent?
If it's normal, how am I supposed to get past this page and access /EWS/Exchange.asmx?
If it's triggered by something my app is doing, how can I find out what it is?
My code runs on Android and forms its own XML requests without using any SOAP library. At the transport level, I use Apache HTTP client components. The code works fine with Office 365/Exchange Online, and, according to user reports, "self-hosted" corporate Exchange servers with NTLM.
However, in this case, I'm not even getting an HTTP 401: the HTTP 302 is returned by the very first HTTP roundtrip.
Trying to preemptively authenticate the initial request using Basic authentication didn't make any difference.
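For what it's worth, the detection logic I'm considering looks roughly like this; it's sketched in TypeScript with fetch for brevity (my real code is Java with Apache HTTP client, and the function name is made up):

    // Probe the EWS endpoint without following redirects, so the 302 to
    // Forefront's CookieAuth.dll form-login page can be recognized directly.
    // (On a non-browser runtime, redirect: "manual" surfaces the raw 302;
    // in a browser it becomes an opaque redirect instead.)
    async function probeEws(url: string): Promise<boolean> {
      const res = await fetch(url, { redirect: "manual" });
      const location = res.headers.get("Location") ?? "";
      return res.status === 302 && location.includes("CookieAuth.dll");
    }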
The user who reported this issue also mentioned that another EWS-based app works, so there must be a solution to it.
If your API and the website making Ajax calls to that API are on the same server (even the same domain), how would you secure that API?
I only want requests from the same server to be allowed! No remote requests from any other domain. I already have SSL installed; does this mean I am safe?
I think you have some confusion that I want to help you clear up.
By the very fact that you are talking about "making Ajax calls", you are talking about your application making remote requests to your server. Even if your website is served from the same domain, you are making a remote request.
I only want requests from the same server to be allowed!
Therein lies the problem. You are not talking about making a request from server-to-server. You are talking about making a request from client-to-server (Ajax), so you cannot use IP restrictions (unless you know the IP address of every client that will access your site).
Restricting Ajax requests does not need to be any different from restricting other requests. How do you keep unauthorized users from accessing "normal" web pages? Typically you would have the user authenticate, create a user session on the server, and pass a session cookie back to the client that is then submitted on every request, right? All of that works for Ajax requests too.
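As a minimal sketch of that idea, assuming an Express-style server (the framework, store, and cookie name are illustrative):

    import express from "express";
    import cookieParser from "cookie-parser";

    const app = express();
    app.use(cookieParser());

    // Hypothetical in-memory session store; use a durable store in practice.
    const sessions = new Map<string, { userId: string }>();

    // Every /api route, Ajax or not, passes through the same session check.
    app.use("/api", (req, res, next) => {
      if (!sessions.has(req.cookies["session_id"])) {
        res.status(401).json({ error: "not authenticated" });
        return;
      }
      next();
    });

    app.get("/api/data", (req, res) => res.json({ ok: true }));
    app.listen(8080);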
If your API is exposed on the internet there is nothing you can do to stop others from trying to make requests against it (again, unless you know all of the IPs of allowed clients). So you have to have server-side control in place to authorize remote calls from your allowed clients.
Oh, and having TLS in place is a step in the right direction. I am always amazed by the number of developers who think they can do without TLS. But TLS alone is not enough.
Look at the Referer header in the HTTP request. That tells you where the request came from (though clients can omit or spoof it).
It depends what you want to secure it from.
Third parties getting their visitors to request data from your API using the credentials those visitors have on your site
Browsers will protect you against this automatically (via the same-origin policy) unless you take steps to disable that protection.
Third parties getting their visitors to request changes to your site using your API and the visitors' credentials
Nothing Ajax-specific about this. Implement the usual defences against CSRF (a minimal sketch follows at the end of this answer).
Third parties requesting data using their own client
Again, nothing Ajax-specific about this. You can't prevent the requests from being made. You need authentication/authorisation (e.g. password protection).
I already have SSL installed does this mean I am safe
No. That protects data from being intercepted en route. It doesn't prevent other people from requesting the data, or from accessing it at the endpoints.
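To make the CSRF point above concrete, here is a minimal double-submit-token sketch, Express-style (the route names and storage are illustrative):

    import crypto from "node:crypto";
    import express from "express";
    import cookieParser from "cookie-parser";

    const app = express();
    app.use(cookieParser());

    // Hand the page a token that is also set as a cookie.
    app.get("/form", (req, res) => {
      const token = crypto.randomBytes(32).toString("hex");
      res.cookie("csrf_token", token, { sameSite: "strict" });
      res.json({ csrfToken: token });
    });

    // State-changing requests must echo the token in a header;
    // a third-party page cannot read the cookie to forge it.
    app.post("/api/change", (req, res) => {
      if (req.get("X-CSRF-Token") !== req.cookies["csrf_token"]) {
        res.status(403).json({ error: "bad CSRF token" });
        return;
      }
      res.json({ ok: true });
    });

    app.listen(8080);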
You can check the IP address. If you want to accept requests only from the same server, place an .htaccess file in the API directory (or a directive in your virtual host configuration) to allow only 127.0.0.1 or localhost. The exact configuration depends on which web server you have.
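For Apache 2.4, for example, the .htaccess could be as small as this (adjust to your server and layout):

    # .htaccess in the API directory: accept requests from this machine only
    Require ip 127.0.0.1 ::1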
I don't understand: how are web servers and trackers like Google Analytics able to track referrals?
Is it part of HTTP?
Is it some (un)specified behavior of the browsers?
Apparently every time you click a link on a web page, the original page's address is passed along with the request.
What is the exact mechanism behind that? Is it specified by some spec?
I've read a few docs and I've played with my own Tomcat server and my own Google Analytics account, but I don't understand how the "magic" happens.
Bonus (totally related) question: if, on my own website (served by Tomcat), I put a link to another site, does the other site see my website as the "referrer" without me doing anything special in Tomcat?
Referer (misspelled in the spec) is an HTTP header. It's a standard header that all major HTTP clients support (though some proxy servers and firewalls can be configured to strip it or mangle it). When you click on a link, your browser sends an HTTP request that contains the page being requested and the page on which the link was found, among other things.
Since this is a client/request header, the server is irrelevant, and yes, clicking a link on a page hosted on your own server would result in that page's URL being sent to the other site's server, though your server may not necessarily be accessible from that other site, depending on your network configuration.
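For illustration, the request a browser might send after clicking a link found on http://example.com/page.html (all URLs made up):

    GET /target.html HTTP/1.1
    Host: other.example.org
    Referer: http://example.com/page.html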
One detail to add to what's already been said about how browsers send it: HTTPS changes the behavior a bit. I'm not aware whether it's in any spec, but if you jump from HTTPS to HTTP, whether you stay on the same domain or go to a different one, sometimes the referrer is not sent. I don't know the exact rules, but I've observed this in the wild. If there's a spec or description of this, that would be great.
EDIT: OK, the RFC says plainly:
Clients SHOULD NOT include a Referer header field in a (non-secure) HTTP request if the referring page was transferred with a secure protocol.
So, if you go from an HTTPS page to an HTTP link, referrer info is not sent.
From: http://en.wikipedia.org/wiki/HTTP_referrer
The referrer field is an optional part of the HTTP request sent by the browser program to the web server.
From RFC 2616:
The Referer[sic] request-header field allows the client to specify, for the server's benefit, the address (URI) of the resource from which the Request-URI was obtained (the "referrer", although the header field is misspelled.)
If you request a web page using a browser, your browser will send the HTTP Referer header along with the request.
Your browser passes the referrer with each page request.
It seems unusual that JavaScript has access to this as well, but it does.
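For example, a page script can read it directly (it is an empty string when there is no referring page):

    // The same information the Referer header carried, exposed to scripts:
    console.log(document.referrer);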
Yes, the browser sends the previous page in the HTTP headers. This is defined in the HTTP/1.1 spec:
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.36
The answer to your question is yes: the browser sends the Referer.
"The referrer field is an optional part of the HTTP request sent by the browser program to the web server."
http://en.wikipedia.org/wiki/HTTP_referrer
When you click on a link, the browser adds a Referer header to the request. It is part of HTTP. You can read more about it here.