How does one programmatically determine if a given proxy is elite?
What is the general method, and which headers are checked for?
One method would be to send an HTTP request to yourself, via the proxy. Make the request something uniquely identifiable... perhaps with a dummy query string with a unique signature.
Then, check the access log for the request. Did the request appear to come from your own IP address? If so, the proxy is not elite. Otherwise... it is!
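As a concrete illustration, here is a minimal Python sketch of that idea. It assumes you control an endpoint (the hypothetical https://example.com/echo-ip below) that simply returns the client IP it sees; the proxy address is a placeholder.

import uuid
import requests  # third-party HTTP client

ECHO_URL = "https://example.com/echo-ip"   # hypothetical endpoint you control; returns the caller's IP
PROXY = "http://203.0.113.10:8080"         # the proxy under test (placeholder address)

def looks_elite(proxy_url):
    signature = uuid.uuid4().hex  # unique tag so the request is easy to find in your access log
    params = {"sig": signature}
    direct_ip = requests.get(ECHO_URL, params=params, timeout=10).text.strip()
    proxied_ip = requests.get(
        ECHO_URL,
        params=params,
        proxies={"http": proxy_url, "https": proxy_url},
        timeout=10,
    ).text.strip()
    # If your own address still shows up, the proxy leaked it; otherwise it behaved like an elite proxy.
    return proxied_ip != direct_ip

print(looks_elite(PROXY))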
It's my experience that you can't detect elite proxy use from HTTP response headers. In the case of onion routers, the headers will show you an IP address that makes it seem as if the traffic originated from the onion router's exit node. And with regard to Tor, which I would imagine is the most widely used onion router, the Tor Project publishes its exit nodes and allows the list to be accessed via an API. I know that some sites fetch this list and then block any IP address that appears on it.
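If Tor specifically is your concern, the kind of check those sites do looks roughly like this sketch (assuming the Tor Project still publishes its bulk exit list at the URL below):

import requests

EXIT_LIST_URL = "https://check.torproject.org/torbulkexitlist"  # published list of Tor exit node IPs

def is_tor_exit(ip):
    exit_ips = set(requests.get(EXIT_LIST_URL, timeout=10).text.split())
    return ip in exit_ips

print(is_tor_exit("203.0.113.99"))  # placeholder address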
First of all, I want to use the Google Places API for autocomplete. I have created an API key and it works fine. I make the API calls from the client, so I need to protect or restrict the key. I tried to use an HTTP referrer restriction, but it doesn't work with the Places API. The docs recommend using an IP restriction instead, but that requires some proxy server to make the API calls. So which way is right? Do I need a proxy server with an IP restriction to make the API calls? Or is there some way to make secure API calls from the client?
Normally, when you are calling the API from the client side, the key should be restricted via HTTP referrers; IP address restrictions are used when you are calling it from the server side, which has a static IP address. If you're calling from the client side and your HTTP referrer restrictions are not working, it would be best to file a support case via https://console.cloud.google.com/google/maps-apis/support to open a personalized communication channel, as this sounds like an isolated case and might have something to do with the configuration in your GCP console.
I would also recommend checking the sample HTTP referrer restriction below:
example.com
*.example.com
These two entries will allow your API key to be used on all subdomains and paths of your website.
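For completeness, the "proxy server with IP restriction" path mentioned in the question would look roughly like the sketch below: the browser calls your own backend, and the backend (whose static IP is the one the key is restricted to) calls the Places Autocomplete web service. Flask is an assumed framework here, and the route name and key are placeholders.

from flask import Flask, request, jsonify
import requests

app = Flask(__name__)
API_KEY = "YOUR_SERVER_SIDE_KEY"  # placeholder; restricted by IP address in the GCP console

@app.route("/places/autocomplete")
def autocomplete():
    # The key never reaches the browser; only this server's IP talks to Google.
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/place/autocomplete/json",
        params={"input": request.args.get("input", ""), "key": API_KEY},
        timeout=5,
    )
    return jsonify(resp.json())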
Let's say I make an Elm app; it requests data over a websocket to check the price of Bitcoin from, say, poloniex.com. I compile it to an .html file and deploy it to, say, Heroku or whatever server I like on the backend.
When a user comes to my website, requests that .html file, and is then looking at the Bitcoin price from the websocket request, is the user's IP address making that websocket request, or is it the backend's (e.g. Heroku's, in this case) IP address making the websocket request?
I ask because I was considering two different designs: either have my backend pull the Bitcoin price data and then serve that to my users, or have the users request the price directly from the source itself (i.e. Poloniex in this case). The latter would be less of a headache, but it won't be possible if all the requests end up coming from the backend and therefore from one IP address (Poloniex would have request limits).
Assuming you are using the standard Elm websocket package, elm-lang/websocket, the websocket connects to whatever URL you point it at. If you set it up like this:
import WebSocket exposing (listen)

subscriptions model =
    listen "ws://echo.websocket.org" Echo
Then the client browser will connect directly to echo.websocket.org. The target of that websocket connection will likely see your application as the referrer (the Origin of the handshake), but the connection itself will come from the IP address of the user's browser, which is acting as the client.
If you instead want your backend server application to act as a proxy, you would point listen at your backend's URL:
subscriptions model =
    listen "ws://myapp.com" ...
Consider OAuth-2.0 Authorization Code Grant protocol.
As described in the standard draft http://tools.ietf.org/html/ietf-oauth-v2-26, Figure 3: Authorization Code Flow, the Client obtains a token on the strength of an Authorization Code received from the User-Agent. Suppose that a User-Agent intentionally sends wrong codes to the Client, and that the Authorization Server protects against brute-force attempts to obtain an Access Token by banning the Client for some reasonable amount of time (by IP or by Redirection URI host name). If the Client is supposed to process a horde of requests from many different User-Agents, the Client will stop serving all of its users because of a single malicious one.
So the Client becomes a bottleneck in a situation described above.
==== EDITED ====
Any ideas how to evade the bottleneck problem?
I believe you're asking:
"how to evade this problem and NOT to expose Authorization Code to User-Agent?"
This is not possible. The OAuth request flows through the user's browser so you can't prevent exposing the authorization code to the user.
If you're a victim of an attack like this, I'd suggest putting the same protection into your Client that the OAuth provider puts into its Authorization Server. Namely, stop accepting new authorization codes from a User-Agent that's abusing your service. If it sends more than, say, 3 invalid codes per hour, ban it for an hour or two (by IP address). Of course, this could lead to you denying access to your site from a proxy server because of one bad user behind that proxy, but that's life.
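A minimal sketch of that kind of throttle in Python (the thresholds and data structures are my own illustration, not part of the spec):

import time
from collections import defaultdict, deque

MAX_FAILURES = 3     # invalid codes tolerated per window
WINDOW = 3600        # look back over the last hour
BAN_SECONDS = 7200   # ban for two hours

failures = defaultdict(deque)  # ip -> timestamps of recent invalid codes
banned_until = {}              # ip -> unix time the ban expires

def is_banned(ip):
    return banned_until.get(ip, 0) > time.time()

def record_invalid_code(ip):
    now = time.time()
    recent = failures[ip]
    recent.append(now)
    # Drop failures that fell out of the window.
    while recent and recent[0] < now - WINDOW:
        recent.popleft()
    if len(recent) >= MAX_FAILURES:
        banned_until[ip] = now + BAN_SECONDS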
If your API and the website making Ajax calls to that API are on the same server (even the same domain), how would you secure that API?
I only want requests from the same server to be allowed! No remote requests from any other domain. I already have SSL installed; does this mean I am safe?
I think you have some confusion that I want to help you clear up.
By the very fact that you are talking about "making Ajax calls", you are talking about your application making remote requests to your server. Even if your website is served from the same domain, you are making a remote request.
I only want requests from the same server to be allowed!
Therein lies the problem. You are not talking about making a request from server-to-server. You are talking about making a request from client-to-server (Ajax), so you cannot use IP restrictions (unless you know the IP address of every client that will access your site).
Restricting Ajax requests does not need to be any different than restricting other requests. How do you keep unauthorized users from accessing "normal" web pages? Typically you would have the user authenticate, create a user session on the server, and pass a session cookie back to the client that is then submitted on every request, right? All that stuff works for Ajax requests too.
If your API is exposed on the internet there is nothing you can do to stop others from trying to make requests against it (again, unless you know all of the IPs of allowed clients). So you have to have server-side control in place to authorize remote calls from your allowed clients.
Oh, and having TLS in place is a step in the right direction. I am always amazed by the number of developers who think they can do without TLS. But TLS alone is not enough.
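To make that concrete, here is a minimal sketch of the session-based approach described above, using Flask as an assumed framework (route names and values are illustrative). The same idea applies to any stack: the Ajax call carries the session cookie, and the server rejects calls that have no authenticated session.

from flask import Flask, session, jsonify, abort

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"  # placeholder

@app.route("/login", methods=["POST"])
def login():
    # ... verify credentials here (omitted) ...
    session["user_id"] = 42  # illustrative value
    return jsonify({"ok": True})

@app.route("/api/data")
def api_data():
    # The browser sends the session cookie with the Ajax request automatically.
    if "user_id" not in session:
        abort(401)
    return jsonify({"data": "only for authenticated users"})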
Look at the Referer field in your HTTP request headers. That tells you where the request claims to have come from.
It depends what you want to secure it from.
Third parties getting their visitors to request data from your API using the credentials those visitors have on your site
Browsers' same-origin policy will protect you from this automatically, unless you take steps to disable that protection (e.g. with a permissive CORS policy).
Third parties getting their visitors to request changes to your site using your API and the visitors' credentials
Nothing Ajax-specific about this. Implement the usual defences against CSRF (see the sketch after this answer).
Third parties requesting data using their own client
Again, nothing Ajax specific about this. You can't prevent the requests being made. You need authentication/authorisation (e.g. password protection).
I already have SSL installed; does this mean I am safe?
No. That protects data from being intercepted en route. It doesn't prevent other people from requesting the data, or from accessing it at the endpoints.
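As a sketch of the "usual defences against CSRF" mentioned above (Flask assumed, names illustrative): issue a random token tied to the session and require it on state-changing Ajax requests.

import secrets
from flask import Flask, session, request, jsonify, abort

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"  # placeholder

@app.route("/csrf-token")
def csrf_token():
    if "csrf" not in session:
        session["csrf"] = secrets.token_urlsafe(32)
    return jsonify({"csrf": session["csrf"]})

@app.route("/api/update", methods=["POST"])
def update():
    # State-changing Ajax calls must echo the token, e.g. in an X-CSRF-Token header.
    if request.headers.get("X-CSRF-Token", "") != session.get("csrf"):
        abort(403)
    return jsonify({"ok": True})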
You can check the IP address. If you want to accept requests only from the same server, place an .htaccess file in the API directory, or add a directive to the virtual host configuration, that allows only 127.0.0.1 or localhost. The exact configuration depends on which web server you have.
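For Apache 2.4, for example, the restriction could look like this (adjust for your web server):

# .htaccess in the api/ directory, or inside the vhost's <Directory> block
Require ip 127.0.0.1 ::1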
In my example, I want to build an application that sends users who join a network some kind of interface, and manage this at a central station (possibly the router, or a central server). The new user's input to this interface will be sent back to the central station and handled there.
How plausible is this? Is sending something to a newly discovered IP realistic?
As long as you control the DNS server, you can send them to any web server you like.
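For instance, with dnsmasq as the router's DNS server (an assumption about your setup), a single wildcard entry answers every lookup with your own web server's address:

# /etc/dnsmasq.conf: resolve every hostname to the captive interface's server
address=/#/192.168.1.1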
Completely plausible, but you'll need a router with open-source firmware, you'll need to program in the language that firmware is written in, and you'll need the toolchain to build the binary for the firmware.
The only thing I can think of is NoCatAuth and friends. The user has to use their web browser, but most are accustomed to that.
Are you trying to FORCE the users to use your application (e.g. by selling these routers via an ISP), or are you expecting users to co-operate (e.g. inside an organisation's WAN)?
If the latter, it may be sufficient to set the DHCP server inside the router to serve the address of an HTTP proxy. That will get picked up by most operating systems and browsers. The proxy can then be used to control web traffic: which pages users can see, and which ones are redirected to your own web app.
If the user is considered an adversary, it would be trivial for them to override the proxy settings. In a LAN/WAN situation, you need to make sure nothing is connecting them to the outside world, except through the proxy.
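If the router runs dnsmasq (again an assumption), advertising the proxy via DHCP can be as simple as serving a WPAD/PAC URL; clients with proxy auto-detection enabled will pick it up:

# /etc/dnsmasq.conf: DHCP option 252 points auto-detecting clients at your PAC file
dhcp-option=252,"http://192.168.1.1/proxy.pac"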