I have what is going to be a production MySQL database, and we want to access it remotely, but we haven't found a secure way to do it.
Docker Swarm does not support host-bound ports such as 127.0.0.1:3303:3303, although normal (non-swarm) mode does. Publishing a port in swarm mode makes it public on every swarm node.
Using firewalls is not really an option since we would have to configure these on every single node in the swarm.
We have only two options on the table:
Opening the port and allowing connections only over TLS, enforcing the REQUIRE ISSUER and SUBJECT options for one single, probably read-only, user (see the sketch below). This still seems highly insecure because of the open port.
Creating a temporary dockerized sshd service and making it available on the same network as the MySQL service. It is more of a hassle to manage these SSH containers, but it is still more secure since the service would only be turned on when needed.
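For illustration, here is a rough sketch of what both options might look like; the user name, certificate subjects, and host names below are placeholders, not our real setup.

    # Option 1 (sketch): one read-only account that must present a client
    # certificate issued by our CA with a specific subject (values are placeholders)
    mysql -u root -p -e "
      CREATE USER 'report'@'%' IDENTIFIED BY '...'
        REQUIRE SUBJECT '/CN=report-client' AND ISSUER '/CN=internal-ca';
      GRANT SELECT ON mydb.* TO 'report'@'%';"

    # Option 2 (sketch): start a throwaway sshd container on the same overlay
    # network only when needed, then tunnel 3306 through it
    ssh -N -L 3306:mysql:3306 tunnel@swarm-node.example.com &
    mysql -h 127.0.0.1 -P 3306 -u report -p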
Question: Are there any other/better options for approaching this? And how insecure is it really to have an open port with TLS-only connections?
If you have a good argument against accessing MySQL remotely at all, I would appreciate that as well.
I'm currently trying to build my services on Kubernetes using Istio, and I'm having trouble whitelisting all the host IPs that are allowed to connect to the MySQL database via the mysql.user table.
I always get the following error after a new deployment:
Host 'X.X.X.X' is not allowed to connect to this MySQL server
Every time I deploy my service, a new pod IP appears and I have to replace the old user entry with the new host IP. I would really like to avoid using '%' for the host.
Is there any way I could just register the node IP instead, so that it stays persistent?
Both Kubernetes and Istio provide network-level protections and setting the allowed hosts to "all" is safe.
A Kubernetes network policy is probably the best cluster-level match for what you're looking for. You'd set the database itself to accept connections from all addresses, but then would set a network policy to refuse connections except from pods that have a specific set of labels. Since you control this by label, any new pods that have the appropriate set of labels will be automatically granted access without manual changes.
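As a concrete sketch (the labels and namespace here are made up for illustration), a policy like this only lets pods labeled app: my-service reach the MySQL pods on 3306; note that it only takes effect if your CNI plugin enforces network policies.

    # minimal NetworkPolicy sketch; label and namespace values are hypothetical
    kubectl apply -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-app-to-mysql
      namespace: default
    spec:
      podSelector:
        matchLabels:
          app: mysql
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: my-service
          ports:
            - protocol: TCP
              port: 3306
    EOF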
Depending on your needs, the default protection given by a ClusterIP service may be enough for you. If a service is ClusterIP but not any other type, it is unreachable from outside the cluster; there is no network path to make it accessible. This is often enough to prevent casual network snoopers from finding your database.
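For reference, ClusterIP is just the default Service type; a Service like this sketch (names assumed) gets a cluster-internal IP and has no route from outside the cluster.

    # ClusterIP is the default; this sketch is reachable only from inside the cluster
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: mysql
    spec:
      type: ClusterIP
      selector:
        app: mysql
      ports:
        - port: 3306
          targetPort: 3306
    EOF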
Istio's authorization system is a little more powerful and robust at a network level. It can limit calls by the Kubernetes service account of the caller, and it uses TLS certificates rather than just IP addresses to identify the caller. However, it doesn't come enabled by default, and in my limited experience it's very easy to accidentally configure it to do things like block Kubernetes health checks or Prometheus metric probes. If you're satisfied with IP-level security, this might be more power than you need.
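If you do go down that road, an Istio AuthorizationPolicy along these lines is the usual shape; the namespace, labels, and service account name in this sketch are assumptions.

    # sketch: only the my-service service account may call the MySQL workload
    kubectl apply -f - <<'EOF'
    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: mysql-allow-my-service
      namespace: default
    spec:
      selector:
        matchLabels:
          app: mysql
      action: ALLOW
      rules:
        - from:
            - source:
                principals: ["cluster.local/ns/default/sa/my-service"]
    EOF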
I have a basic stack of containers on their own user-defined network with a subnet of 172.21.0.0/16. My MySQL container's address is 172.21.0.2 and the PHP/Apache container's address is 172.21.0.3.
Until this point I had to permit MySQL to allow incoming connections via PHP from 172.21.0.3, which made perfect sense. Now, it seems as though the connections are coming from 172.21.0.1, the gateway, and this doesn't make much sense to me. My (basic to intermediate) understanding suggests that the gateway should only be used when traffic is destined for an address outside of its local network - but obviously in this case MySQL and PHP/Apache are on the same network.
Two of our environments have started acting like this, and while it's a simple fix to permit connections from the gateway address, I'm hesitant to proceed without an understanding as to what has happened and why. This also seems to add extra delay to database queries within the application.
Logging in to an affected environment via phpMyAdmin displays "User: root@172.21.0.1" in the "Database Server" information pane. An unaffected environment displays "root@phpmyadmin_1.test_default" (user@[container].[network]).
Both environments are using the exact same images, and the same version of Docker - 18.06.1-ce. Other than a version upgrade of Docker, nothing else has changed with regards to the docker-compose.yml I was using.
Why has my environment started acting like this? Should I prefer the connection coming in from the actual source, and not via the gateway? How can I return to that way of operation?
Thank you for any guidance or knowledge.
For anyone else that experiences a similar rut, I'm of the mind that this was caused by an upgrade of Docker from 18.03.1-ce to 18.06.1-ce via Docker's own repository. Performing a server reboot after this operation has (for now) restored sense to the networking of the stack.
The connection to my MySQL container is now correctly coming from the PHP/Apache container and not from the gateway address of the bridge network. The lag this introduced is gone, and I'm able to remove the privilege associated with the gateway address.
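For anyone debugging the same thing, one way to confirm which client host MySQL actually sees (container name and credentials below are placeholders) is to look at the live connections, and then drop the gateway entry once traffic comes from the right address again.

    # show the client host MySQL sees for each open connection (names are examples)
    docker exec -it mysql_1 mysql -u root -p \
      -e "SELECT user, host FROM information_schema.processlist;"

    # once connections show the PHP container again, the gateway entry can go
    docker exec -it mysql_1 mysql -u root -p \
      -e "DROP USER 'root'@'172.21.0.1';"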
By default, when you open the bind-address to listen to the outside, communication between the MySQL client and server is not secured, which means anyone who can mount a MitM attack can view every transaction made.
There are options out there to protect against this type of attack (SSH tunneling or enabling SSL in MySQL), but from what I understand, Amazon RDS doesn't implement any SSL security by default.
So I'm wondering: when you create an RDS instance, is it like installing MySQL on a server and opening port 3306, or am I missing something?
A few points. Firstly, AWS RDS for MySQL does support SSL. This is discussed here:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_MySQL.html#MySQL.Concepts.SSLSupport
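For example, a connection that verifies the server certificate against the RDS CA bundle looks roughly like this; the endpoint, user, and file path are placeholders, and the exact SSL flags vary a bit between client versions.

    # sketch: verify the RDS server certificate with the downloaded CA bundle
    mysql -h mydb.abc123.us-east-1.rds.amazonaws.com -u admin -p \
      --ssl-ca=/path/to/rds-combined-ca-bundle.pem \
      --ssl-mode=VERIFY_CA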
Second, the usual way to arrange servers in an AWS VPC is to have "private" and "public" subnets. The private subnets route to other private hosts and perhaps to other hosts in the same VPC, but they have no Elastic IPs and no direct access to the Internet Gateway. It is usual to put databases on private subnets so that their ports are not exposed to the Internet.
There is a nice diagram on this page showing this concept
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Security.html
Lastly, AWS RDS exists within the philosophy of the shared responsibility model.
This tries to make clear what security the AWS services provide and what is supposed to be the responsibility of the customer.
If you're creating an instance, you also have to allow port 3306 to be open at your endpoints. This means you also have to configure your security settings to specify which IPs are allowed to make this connection. Regarding SSL or SSH security, as a good practice you should rely on SSH keys protected with a passphrase.
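For instance, restricting the inbound rule to a single client address can be done with the AWS CLI along these lines; the security group ID and CIDR here are placeholders.

    # sketch: open 3306 only to one known client IP (values are placeholders)
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp --port 3306 \
      --cidr 203.0.113.10/32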
I have built an application in VB.NET that needs to connect to a MySQL database. This all works fine from my own network and several other home networks.
But if I want to use the application on my company's network, I get the error:
Unable to connect to any of the specified MySQL hosts
I thought this was caused by the network's firewall.
But I used the "automatic update" option and published the application on an online server. That works fine on my company's network.
So the application can download updates over the network, but can't connect to the MySQL server. What could cause this issue?
The most common situation that would cause this is selective egress filtering. Specifically, the firewall is most likely only allowing HTTP/HTTPS port connections out.
Try changing MySQL to listen on port 443, then try again using 443 instead. The firewall may allow the traffic since it uses 443 like web traffic instead of 3306 (the MySQL default).
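If you try that, it's only the server's port setting plus a restart, roughly like below; the config path and service name are assumptions and vary by distribution, and binding a port below 1024 requires root.

    # sketch: make mysqld listen on 443 instead of 3306
    # in /etc/mysql/my.cnf (path varies):
    #   [mysqld]
    #   port = 443
    sudo systemctl restart mysql

    # then connect using the new port
    mysql -h db.example.com -P 443 -u myuser -p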
If you're testing it locally, it's because you need to whitelist the IP that you're currently on.
On live sites, the IP of the server doesn't change, so you use that IP with the correct permissions to allow MySQL to work.
So basically, figure out which IPs are allowed to talk to the DB, find your local IP, and modify the list accordingly. Incorrect ports can be a problem as well.
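Concretely, that usually means creating (or updating) the account entry for the exact client IP; a sketch with placeholder names and addresses:

    # sketch: allow this account only from one client IP (names/IPs are placeholders)
    mysql -u root -p -e "
      CREATE USER 'appuser'@'203.0.113.25' IDENTIFIED BY '...';
      GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'appuser'@'203.0.113.25';"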
The MySQL server and client are currently on the same machine. In time, they will be on separate machines. We want to establish secure protocols from the get-go.
Does it make sense to require SSL on database connections? Or put another way, is there any reason NOT to use SSL?
If I were you, I'd refrain from connecting to localhost, and instead connect to your local machine by using its explicit hostname. I think you're also wise to use TLS / SSL to connect in this configuration if that's what you're expecting to use when you deploy in production.
You may want to ask yourself whether that's worth the trouble, though. If your app-to-mysqld connection is on a private backend network (as it may be), using TLS / SSL may be overkill. It's called "transport layer security" and it pretty much protects against bad guys intercepting data going to and from MySQL. Your app system will probably have other vulnerabilities that render TLS protection uninteresting. For example, if it's a web app, the MySQL password is probably hardcoded in a config file someplace. If a bad guy wants to look at your data, he need only grab the password and log in to the mysqld. To keep your info safe you need to keep bad guys off your private network.
It's always a good idea to parameterize the hostname, port number, and production password of your MySQL database. If those things are parameterized, you can deploy to a staging or production server simply by changing those parameters.
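A small sketch of that idea, using environment variables (the names are arbitrary) so the same client invocation works against localhost today and a remote host later, with SSL required either way:

    # sketch: connection settings come from the environment, not the code
    export DB_HOST=db.example.com   # placeholder values
    export DB_PORT=3306
    export DB_USER=appuser

    mysql -h "$DB_HOST" -P "$DB_PORT" -u "$DB_USER" -p --ssl-mode=REQUIRED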