I am trying to route all requests using:
var config = {
  mode: "fixed_servers",
  rules: {
    singleProxy: {
      scheme: "https",
      host: "localhost"
    },
    bypassList: ["foobar.com"]
  }
};
chrome.proxy.settings.set(
  {value: config, scope: 'regular'},
  function() {});
This works wonderfully for all http:// websites but not for https:// ones. In those cases Chrome doesn't even seem to connect to the proxy; it simply returns ERR_EMPTY_RESPONSE (no packets from Chrome appear in the VPN).
HTTPS proxies differ from HTTP proxies. Here the scheme describes the protocol Chrome uses to talk to the proxy itself, not the protocol of the sites being proxied.
https://en.wikipedia.org/wiki/HTTPS
As I understand it, you run the proxy on your local machine, so with the "https" scheme you would need proxy software that itself speaks TLS, something like https://www.npmjs.com/package/https-proxy-agent
http:// and https:// denote the protocol, just like ftp, pop, smtp, ed2k, bitcoin, and so on.
Don't use the "https" scheme; use "http" instead.
Even with the "http" scheme, the proxy will still carry https:// traffic, so https sites work through it as well.
Google Chrome version 103.0.5060.114 64 bit
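Under that reading, the fix is a one-word change to the config from the question. A minimal sketch (the port value is illustrative and assumes the local proxy listens there; the chrome.proxy call requires an extension with the "proxy" permission, so it is left commented out):

```javascript
// Same config as in the question, but with the "http" scheme:
// Chrome speaks plain HTTP to the proxy, which can still tunnel
// https:// sites for it.
var config = {
  mode: "fixed_servers",
  rules: {
    singleProxy: {
      scheme: "http",    // protocol used to reach the proxy itself
      host: "localhost",
      port: 8080         // illustrative; defaults to 80 for "http"
    },
    bypassList: ["foobar.com"]
  }
};

// Inside an extension with the "proxy" permission:
// chrome.proxy.settings.set({value: config, scope: 'regular'}, function() {});
```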
I am trying to get the client IP (the real IP).
I am using a third-tier cloud provider with basic services, and there is an LB in front.
My current nginx-ingress controller config is:
data:
  allow-snippet-annotations: "true"
  ssl-protocols: "TLSv1 TLSv1.1 TLSv1.2 TLSv1.3"
  real_ip_recursive: "on"
  real-ip-header: "X-Real-IP"
  use-proxy-protocol: "false"
  compute-full-forwarded-for: "true"
  use-forwarded-headers: "true"
And yes, I have already turned on the below in the service:
externalTrafficPolicy: Local
My ingress resource has this:
nginx.ingress.kubernetes.io/configuration-snippet: |
  more_set_headers "X-Forwarded-For $remote_addr";
However, I have tried toggling many of the options above and still cannot retrieve the client IP: nginx ingress always gives me the node's private IPv4 as both the X-Forwarded-For header and $remote_addr. Note that with Cloudflare's Proxied option turned on it works fine; with it turned off, I get the private IPv4 of the k8s node. Because I also manage DNS in other tools, using Cloudflare's proxy is not always an option for me.
My Node backend is hosted on Heroku and the React frontend on Netlify. In Firefox everything works well, but in Chrome I can log in (POST request) and then can't navigate my page because the CORS policy blocks my GET requests to the server:
Access to XMLHttpRequest at 'https://xxx.herokuapp.com/api/fetchPurchasedPrizes' from origin 'https://xxx.netlify.app' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
My backend CORS setup:
app.use(
  cors({
    credentials: true,
    origin: 'xxx.netlify.app',
  }),
);
On localhost both browsers worked fine.
The solution turned out to be simple: I just had to add
cookie: {
  sameSite: 'none',
  secure: true,
},
to my session config.
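For cross-site setups like this (a Netlify frontend calling a Heroku backend), a sketch of the relevant option objects, assuming the cors and express-session middleware (the origin value is the hypothetical frontend URL; note it includes the scheme):

```javascript
// Options for the cors middleware: the origin must match the page's
// origin exactly, scheme included.
const corsOptions = {
  credentials: true,
  origin: 'https://xxx.netlify.app',  // hypothetical frontend origin
};

// Options for express-session: a SameSite=None cookie must also be
// Secure, otherwise Chrome drops it on cross-site requests.
const sessionOptions = {
  secret: 'change-me',          // hypothetical secret
  resave: false,
  saveUninitialized: false,
  cookie: {
    sameSite: 'none',  // send the cookie on cross-site requests
    secure: true,      // required by browsers when SameSite=None
  },
};
```

These would be wired up with app.use(cors(corsOptions)) and app.use(session(sessionOptions)). On Heroku, app.set('trust proxy', 1) may also be needed so the secure cookie survives the TLS-terminating router.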
I have a simple back-end Kotlin application that runs a Netty server on a Google Cloud virtual machine. It receives WebSocket connections and sends some simple messages to clients. I also have an nginx server running on the same machine; it listens on port 443 and proxies requests to my application (127.0.0.1:8080). Here is the nginx configuration:
server {
  listen 443 ssl;
  server_name www.mydomain.com;

  ssl_certificate /etc/nginx/certs/my-cert.crt;
  ssl_certificate_key /etc/nginx/certs/my-key.key;
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;

  location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_read_timeout 86400;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header Host $http_host;
  }
}
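(Note: $connection_upgrade is not a built-in nginx variable; a config like the one above assumes the http block also defines the standard map, something like:

```nginx
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
```

Without it, the Connection header is sent empty and the upgrade handshake can fail.)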
The SSL certificate in the config is valid and issued by a real CA.
Now I'm trying to write a simple front-end Angular app that connects to my proxy:
return webSocket({
  url: `wss://www.mydomain.com/${path}`,
  closeObserver: {
    next: (event: CloseEvent) => { console.log(event) }
  }
}).multiplex(() => {}, () => {}, () => true)
I am subscribing to an Observable returned by this method and printing incoming messages.
All this works fine in every browser except Google Chrome (I tried Firefox, Opera, Chromium, Edge). Everything also works in the Chrome extension Smart Websocket Client. It even works fine in Chrome's incognito mode, but fails in regular mode.
In Chrome I get
WebSocket connection to 'wss://mydomain.com/something' failed.
The CloseEvent that I log isn't very helpful: it just says the code is 1006, and the reason field is empty.
When I bypass the proxy and connect directly to my app with ws://www.mydomain.com:8080/something, it works fine on chrome.
I guess something is wrong in my nginx config, but I can't really tell what. All the guides for configuring nginx for WebSockets say this is how it should be configured.
I spent two days searching for information about this and didn't find any real answers as to why this is happening or what I can do to fix it.
Does anyone have any ideas why this is happening?
UPDATE
Here is another interesting thing. I wrote a simple script that connects to my proxy, just like my Angular app, but using the standard WebSocket API.
let ws = new WebSocket("wss://mydomain.com/something");
ws.onmessage = (ev) => {
  console.log(ev.data);
  document.getElementById("result").innerHTML += ev.data;
};
ws.onerror = (err) => {
  console.log(err);
};
When I just open this file in chrome using file://, everything works. It connects to my ws server and prints incoming messages on screen. But when I run local Apache server and serve the same file on localhost:80, I get the same error as before in Chrome. (Other browsers and Incognito mode still work fine when the file is accessed through localhost)
So this issue doesn't have much to do with Angular.
You mentioned that it works in private/incognito mode. Have you tried disabling all extensions and then connecting to the WebSocket?
I am trying to connect to wss://mydomain.com/ws from an Electron app (renderer process), but I get:
events.js:177 Uncaught Error: unable to verify the first certificate
at TLSSocket.onConnectSecure (_tls_wrap.js:1317)
at TLSSocket.emit (events.js:200)
at TLSSocket._finishInit (_tls_wrap.js:792)
at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:606)
I am using the same code, which works in plain Chrome and Firefox browsers:
const WebSocket = require('isomorphic-ws')
const cookie = require('cookie')  // needed for cookie.serialize below

const ws = new WebSocket('wss://mydomain.com/',
  [],
  {
    headers: {
      Cookie: cookie.serialize('X-Authorization', bearerToken),
    },
  },
);
I used https://www.ssllabs.com/ssltest/analyze.html?d=mydomain.com
to check certificate and it says:
Protocols
TLS 1.3 No
TLS 1.2 Yes
TLS 1.1 Yes
TLS 1.0 Yes
SSL 3 No
SSL 2 No
For TLS 1.3 tests, we only support RFC 8446.
However, I can't see the request in the Electron devtools, so I can't verify the TLS version.
Using https://www.ssllabs.com/ssltest/ I discovered
Chain issues.........Incomplete
So I fixed it on the nginx side. Before, I used:
ssl_certificate /www/xx/certs/mydomain_com.crt;
ssl_certificate_key /www/xx/certs/live_server.key;
Now I did:
cat mydomain-com.crt mydomain-com.ca-bundle > mydomain_com.ca-bundle.crt
And changed in nginx:
ssl_certificate /www/xx/certs/mydomain_com.ca-bundle.crt;
ssl_certificate_key /www/xx/certs/live_server.key;
I am looking for a way to make incoming requests to a browser. Installing an extension in the browser is OK. The goal is to allow another machine to connect to the extension to control a game, without needing an intermediary server.
Is this feasible? Is it possible for a Chrome or Firefox extension to open a port and accept incoming requests?
What you are asking for are server sockets. For Chrome the answer is "no", Chrome extensions can only open client connections. Firefox extensions on the other hand can use nsIServerSocket interface to listen for incoming TCP connections on a port. If you use the Add-on SDK you would need to use the chrome package. Something like this:
var {Cc, Ci} = require("chrome");
var socket = Cc["@mozilla.org/network/server-socket;1"]
             .createInstance(Ci.nsIServerSocket);
socket.init(12345, false, -1);
socket.asyncListen({
  onSocketAccepted: function(socket, transport)
  {
    ...
  },
  onStopListening: function(socket, status)
  {
  }
});