I am trying to configure nginx as a TCP proxy for ejabberd.
The nginx configuration for TCP proxying is shown below:
stream {
    upstream ejabberd-servers {
        server ejabberd:5222;
    }
    server {
        listen 5222;
        proxy_pass ejabberd-servers;
    }
}
Here ejabberd is the ejabberd server's node name (its hostname on the Docker network), since this is running in a Docker environment.
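For context, a minimal sketch of where this block lives in the overall configuration, assuming nginx 1.9.0 or later built with the stream module; the stream block has to sit at the top level of nginx.conf, alongside the http block, not inside it:

# /etc/nginx/nginx.conf (sketch; names are illustrative)
events { }

stream {
    upstream ejabberd-servers {
        server ejabberd:5222;        # Docker service name of the ejabberd container
    }
    server {
        listen 5222;                 # XMPP client-to-server port
        proxy_pass ejabberd-servers; # raw TCP pass-through, no TLS termination here
    }
}

http {
    # regular web server / reverse proxy configuration goes here
}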
When I connect to nginx using the Smack client library, I get the error SOCKS5 socket fail.
When I try connecting using SOCKS4, I get Server's response VN 60.
The reason I am doing this is that I do not want to expose ejabberd directly to the Internet; I need a proxy to load balance connections and also to mitigate DDoS attacks.
Setting up ejabberd for the Internet is nicely explained here: how to open ejabberd server to public.
Has anybody done so successfully?
Related
I'm currently building a monitoring system with Zabbix. In particular, I'm trying to set up a Zabbix proxy.
As far as I know, if I set the Zabbix proxy to active, the Zabbix server does not need to initiate a connection to the proxy; the proxy connects to the server instead.
Now, what I wonder is if each Zabbix proxy must have its own static IP.
If the proxies are set to active, there is no need for them to have static IP addresses; all that is needed is that each proxy can open a TCP connection to the server.
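For reference, a minimal zabbix_proxy.conf sketch for an active proxy; the host names here are illustrative, not from your setup:

# zabbix_proxy.conf (sketch; names are illustrative)
ProxyMode=0                     # 0 = active proxy: the proxy initiates the connection to the server
Server=zabbix.example.com       # Zabbix server the proxy connects to
Hostname=zbx-proxy-01           # must match the proxy name configured on the Zabbix server
ConfigFrequency=3600            # how often the proxy pulls its configuration, in seconds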
I have ELB balancing TCP traffic to my Node.js processes. When ELB is balancing TCP connections it does not send the X-Forwarded-Proto header like it does with http connections. But I still need to know if the connection is using SSL/TLS so I can respond with a redirect from my Node process if it is not a secure connection.
Is there a way to make ELB send this header when balancing TCP connections?
Thanks
You can configure the proxy protocol for your ELB to get connection-related information. In the case of HTTP, the ELB adds headers with the client information; in the case of TCP, however, AWS ELB simply passes the traffic through without adding anything, which causes the back-end server to lose the client connection information, as is happening in your case.
To enable the proxy protocol for your ELB, you will have to do it via the API; there is currently no way to do it via the UI.
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html
The doc above is a step-by-step guide on how to do this; I don't want to paste the same here, as that information might change over time.
EDIT:
As it turns out, Amazon implements only version 1 of the proxy protocol, which does not give away SSL information. It does, however, give the port number the client requested, so a convention can be adopted along the lines of: if the request came in over port 443, then it was SSL. I don't like it, as it is indirect, requires hardcoding, and needs coordination between devops and developers, but it seems to be the only way for now. Let's hope AWS ELB soon starts supporting version 2 of the proxy protocol, which does carry SSL information.
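For illustration (addresses and ports below are made up), with proxy protocol version 1 enabled, each TCP connection the ELB opens to the back end starts with a single human-readable header line: protocol, client IP, proxy IP, client port, then the destination port the client connected to, which is what the port-443 heuristic keys off:

PROXY TCP4 203.0.113.45 10.0.1.20 49152 443\r\n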
I got my Dropwizard application running in an OpenShift DIY cartridge.
The application uses HTTPS and binds to port 8080. I can access the application with curl from within an SSH connection via rhc ssh appname.
What do I have to configure so that I can access my Dropwizard application via the appname-username.rhcloud.com domain?
I always get a proxy error 502: Error reading from remote server.
Any suggestion is greatly appreciated.
tmy
In OpenShift your application is deployed behind a proxy server, and this proxy server can only communicate with your application using http.
The OpenShift proxy server allows you to use both http and https connections, and to communicate which type of connection was used, the proxy server adds X-Forwarded headers to the request sent to your application.
To configure Dropwizard, you will need to configure the http connector on port 8080 (the default) and set useForwardedHeaders to true (also the default). See http://dropwizard.io/manual/configuration.html#http for more information.
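As a sketch, assuming a Dropwizard version that uses the server/applicationConnectors layout (0.7 and later; older versions configure http: at the top level instead), the relevant part of config.yml would look roughly like this:

# config.yml (sketch)
server:
  applicationConnectors:
    - type: http
      port: 8080
      useForwardedHeaders: true   # trust the X-Forwarded-* headers set by the OpenShift proxy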
At this point Dropwizard is aware of whether an http or https connection was used. The thing I did not find is how to mark content as "confidential" so that the Jetty container inside Dropwizard redirects the client to the https connector served by the OpenShift proxy server when the client tries to connect to your application over http.
I am looking to deploy an nginx/DNS server on a VPS proxy that maps to the real back-end in a different geographical location. The back-end runs Apache, MySQL, Dovecot, and Postfix. It is a pay-for mail server. Users are registered through Apache/PHP into MySQL, and when users set up IMAP, Dovecot/Postfix pulls them from MySQL and delivers mail or handles outbound SMTP.
I read that in the nginx.conf file I can declare the mail hostname on the proxy like so:
mail {
    server_name mail.example.com;
    ...
}
Is this mail.example.com the actual MX for the example.com mail exchanger listed in DNS? Here is where that came from:
"As you can see, we declared the name of this server at the top of the mail context.
This is because we want each of our mail services to be addressed as mail.example.
com. Even if the actual hostname of the machine on which NGINX runs is different,
and each mail server has its own hostname, we want this proxy to be a single point
of reference for our users. This hostname will in turn be used wherever NGINX
needs to present its own name, for example, in the initial SMTP server greeting."
So from my understanding, the physical hostname of the proxy should be something other than mail.example.com. So in DNS on the proxy, can I define it as anyhost.example.com? The proxy also proxies back to my Apache on the back-end.
Finally, on the back-end, how do I set up my DNS for that? What hostname do I choose for the actual box running Apache, MySQL, Dovecot, and Postfix? It's all on one box. I understand that at the registrar I point to two nameservers; these should be the two proxies, so that running a dig would only pull up the proxies and the MX, which should be "known" to be on the proxy.
In your case, where all of the services are on one box including the proxy, you can make Apache, MySQL, and the other services accessible only from localhost (127.0.0.1). Then in nginx you define upstreams along these lines:
upstream apache_backend { server 127.0.0.1:80; }
upstream mysql_backend  { server 127.0.0.1:3306; }
so that nginx serves the front-end requests and forwards them to the designated services.
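A hedged sketch of how that could look in nginx.conf (names, addresses, and ports are illustrative); note that a plain http upstream only covers Apache, while raw TCP such as MySQL needs the nginx stream module:

# nginx.conf (sketch; only the proxying parts are shown)
http {
    upstream apache_backend {
        server 127.0.0.1:80;            # Apache listening on localhost only
    }
    server {
        listen 203.0.113.10:80;         # public IP of the box (illustrative)
        server_name example.com;
        location / {
            proxy_pass http://apache_backend;
        }
    }
}

stream {
    server {
        listen 3306;                    # only expose this if remote MySQL access is really needed
        proxy_pass 127.0.0.1:3306;      # MySQL listening on localhost only
    }
}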
I am using Squid as a proxy server for web caching in my local network. I have developed a utility in VB.NET that requires a remote connection to a MySQL database on a remote server over the Internet. I am able to connect to the remote server if I disable the proxy, but cannot if the proxy is enabled.
I don't know whether I can use MySQL Proxy in this scenario on my local proxy server, or what configuration I would have to make.
Below is my Squid configuration:

# ACLs to define ports allowed to pass through Squid
acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 3306        # mysql remote connection
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access allow Safe_ports
http_access allow CONNECT !SSL_ports
What alternative do I have to achieve a similar setup, i.e. a web cache plus a remote connection to a MySQL database?
Squid can't proxy MySQL at all.
You have to configure your firewall (or use a direct connection, NAT, etc.) for the remote connection.
I've been looking into proxying database traffic, too. Squid can't proxy MySQL traffic, but if you do want to proxy MySQL traffic, you can try SQLProxy, which is an IIS plug-in that proxies MySQL traffic.
A Java-based solution that runs on Windows, Mac, and Linux is TcpCatcher. It's primarily intended to monitor and change TCP traffic, but it can also be used as a pure proxy server.
If you are open to a *nix-based solution, there's High Availability Proxy ("HAProxy"), which is a TCP/HTTP load balancer that can be used to proxy MySQL database connections as well as HTTP connections.
There's a tutorial and information page on using HAProxy to proxy MySQL connections at http://www.severalnines.com/resources/clustercontrol-mysql-haproxy-load-balancing-tutorial (as of June 2013). Here's an example of using HAProxy to proxy a single MySQL connection: http://flavio.tordini.org/a-more-stable-mysql-with-haproxy.
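As a rough illustration of the HAProxy approach (addresses are made up), a minimal haproxy.cfg running on the local proxy box could forward port 3306 to the remote MySQL server; clients then point their MySQL connection at the proxy:

# haproxy.cfg (sketch; addresses are illustrative)
defaults
    mode tcp                     # MySQL is plain TCP, not HTTP
    timeout connect 5s
    timeout client  1m
    timeout server  1m

listen mysql
    bind *:3306                          # LAN clients connect here
    balance roundrobin
    server remote-db 203.0.113.50:3306 check   # remote MySQL server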
I managed to get it working with Microsoft Forefront TMG. An access rule has to be created allowing outbound connections on port 3306 from internal to external for all users. The Firewall Client should also be installed on the client machines.
Squid and Polipo cannot be used in an environment where remote connections to MySQL are required. In such a scenario, setting up a local server in front of the proxy with some sync mechanism to the remote server, or using a VPN/SSH tunnel, is a possibility.
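For the SSH option, a hedged one-line sketch (the host name is made up); it forwards a local port to MySQL on the remote box, so the application connects to 127.0.0.1:3306 instead of the remote host:

# forward local port 3306 through SSH to MySQL on the remote server
ssh -N -L 3306:127.0.0.1:3306 user@db.example.com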
Hope this helps other readers.