I am using Couchbase and would like to restrict access to localhost only.
Any pointer to the correct documentation would be helpful; I tried to find the info on the Couchbase site but could not. Currently the admin port is listening on all interfaces:
tcp 0 0 0.0.0.0:8091 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:60168 127.0.0.1:8091 ESTABLISHED
tcp 0 0 127.0.0.1:8091 127.0.0.1:60168 ESTABLISHED
Currently Couchbase does not have internal access controls at the IP level. Bucket authentication via SASL is available, but you will need to control connection access with your firewall. Couchbase provides a list of ports that are required to be open in the "security outside Couchbase" section of the documentation. If you set your policy to deny by default and open only those ports, you will have achieved what you are looking for.
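As a minimal sketch, assuming a single-node install where the application also runs locally (iptables shown; the authoritative port list should come from that documentation page):

    # Default-deny inbound; allow loopback, established replies, and SSH.
    # Couchbase's ports (8091 and friends) then remain reachable from
    # 127.0.0.1 via the loopback rule but are dropped for remote hosts.
    iptables -P INPUT DROP
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -j ACCEPT
    # For a multi-node cluster, add ACCEPT rules for the documented
    # Couchbase ports restricted to the other nodes' source IPs.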
I should mention that it is not recommended to run your application on the same machine as Couchbase, as it severely limits your flexibility to scale the parts of your stack independently.
To add to what Adam said, I might even go a step further and create a private network to run the Couchbase cluster in, and then only open that network up to the application server. This way, when you add a node to the cluster to scale later on, it inherits the protection from the network layer. I say this because, depending on how you manage each server's firewall, it can become difficult to maintain over time, and that is usually when security gets lax. So if you can, set the ACLs at the network level.
Also, as Adam said, do not run the app on the same node as Couchbase in production if you can help it; you will most likely get resource contention. That is generally a bad idea regardless of the database.
I need to access a Postgres database, which resides in an OpenShift cluster, from my Java code, without manually initiating port forwarding through the oc port-forward command.
I have tried using the OpenShift Java client's connection factory class to get the connection, passing the server URL and the username/password I use to log in to the console, but it didn't help.
(This is mostly just a more detailed version of Will Gordon's comment, so credit to him.)
It sounds like you are trying to expose a service (specifically Postgres) outside of your cluster. This is very common.
However, the best method does depend a bit on your physical infrastructure, because we are by definition integrating with your networking. Look at the docs for Getting Traffic into your Cluster. Routes are probably not what you want, because Postgres speaks a plain TCP protocol rather than HTTP. One of the other options in that chapter (LoadBalancer, ExternalIP, or NodePort) is probably your best bet, depending on your networking infrastructure and needs.
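As a rough sketch of the NodePort variant (the service name, selector label, and port number below are assumptions to adapt): after applying it, your Java code can use a plain JDBC URL such as jdbc:postgresql://<any-node-ip>:30432/yourdb, with no oc port-forward involved.

    # Expose an assumed Postgres pod (label app=postgres) on port 30432
    # of every cluster node, using the oc CLI you already log in with.
    oc apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: postgres-external
    spec:
      type: NodePort
      selector:
        app: postgres
      ports:
        - port: 5432        # service port inside the cluster
          targetPort: 5432  # container port
          nodePort: 30432   # opened on every node (30000-32767 range)
    EOF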
I'm currently trying to build my services on Kubernetes using Istio, and I'm having trouble whitelisting, via the mysql.user table, all host IPs that are allowed to connect to the MySQL database.
I always get the following error after a new deployment:
Host 'X.X.X.X' is not allowed to connect to this MySQL server
Every time I deploy my service, a new pod IP pops up, and I have to replace the old user entry with one for the new host IP. I would really like to avoid using '%' for the host.
Is there any way I could register the node IP instead, to keep it persistent?
Both Kubernetes and Istio provide network-level protections, so setting the allowed host to '%' is safe.
A Kubernetes network policy is probably the best cluster-level match for what you're looking for. You'd set the database itself to accept connections from all addresses, but then would set a network policy to refuse connections except from pods that have a specific set of labels. Since you control this by label, any new pods that have the appropriate set of labels will be automatically granted access without manual changes.
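A minimal sketch of such a policy (the labels app=mysql and role=mysql-client are assumptions, and it only takes effect if your cluster's network plugin enforces NetworkPolicy):

    # Allow MySQL ingress only from pods labeled role=mysql-client.
    kubectl apply -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: mysql-allow-clients
    spec:
      podSelector:
        matchLabels:
          app: mysql            # the database pods being protected
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  role: mysql-client   # any pod with this label may connect
          ports:
            - protocol: TCP
              port: 3306
    EOF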
Depending on your needs, the default protection given by a ClusterIP service may be enough for you. If a service is ClusterIP but not any other type, it is unreachable from outside the cluster; there is no network path to make it accessible. This is often enough to prevent casual network snoopers from finding your database.
Istio's authorization system is a little more powerful and robust at the network level. It can limit calls by the Kubernetes service account of the caller, and it uses TLS certificates rather than just IP addresses to identify the caller. However, it isn't enabled by default, and in my limited experience it's very easy to accidentally configure it to block things like Kubernetes health checks or Prometheus metric probes. If you're satisfied with IP-level security, this might be more power than you need.
I have what is going to be a production MySQL database, and we want to access it remotely, but we haven't found a secure way to do it.
Docker Swarm does not support host-bound ports such as 127.0.0.1:3303:3303, although normal (non-swarm) mode does. Publishing a port makes it public on all swarm nodes.
Using firewalls is not really an option, since we would have to configure them on every single node in the swarm.
We have only two options on the table:
Opening the port and allowing connections only over TLS, enforcing the REQUIRE options ISSUER and SUBJECT for one single, probably read-only, user (see the sketch after this list). This still seems highly insecure due to the open port.
Creating a temporary dockerized sshd service available on the same network as the MySQL service. Managing these SSH containers is more hassle, but it is more secure, since the service is only turned on when needed.
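For reference, the certificate pinning in the first option looks roughly like this (the user name, database, and DN strings are placeholders):

    # A single read-only user that can only log in over TLS with a client
    # certificate carrying a specific subject, signed by our own CA.
    mysql -u root -p <<'EOF'
    CREATE USER 'report'@'%' IDENTIFIED BY 'CHANGE_ME'
      REQUIRE SUBJECT '/CN=report-client/O=Example'
          AND ISSUER  '/CN=example-ca/O=Example';
    GRANT SELECT ON appdb.* TO 'report'@'%';
    EOF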
Question: Are there any other/better options to approach this? And how insecure is an open port with TLS-only connections?
If you have a good argument against accessing MySQL remotely at all, I would appreciate it.
I was wondering if someone could tell me whether any potential security breaches could occur by connecting to a MySQL database that does not reside at 'localhost', i.e. via an IP address?
Yes, breaches do occur when the connection to your database is not protected. This is a network security question more than an application security question, so the answer depends entirely on your network topology.
If a segment of your network may be accessible to an attacker, then you must protect yourself with cryptography. For instance, if a malicious individual has compromised a machine on your network, they can conduct an ARP-spoofing attack to sniff or even man-in-the-middle devices on a switched network. This could be used to see all data flowing in and out of your database, or to modify the database's response to a specific query (like a login!). If the network connection to your database is a single RJ45 twisted-pair cable to your httpd server, all residing inside a locked cabinet, then you don't have to worry about a hacker sniffing it. But if your httpd is on a Wi-Fi network connecting to a database in China, then you should think about encryption.
You should connect to your MySQL database using MySQL's built-in SSL support. This ensures that all transferred data is strongly protected. You can create self-signed X.509 certificates and pin them in your configuration; this is free, and you don't need a CA like Verisign for it. If a certificate exception then occurs, there is a MITM, and the check stops you from spilling the password.
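On the client side, with the stock mysql client, verification against your own CA looks like this (the paths and host name are placeholders; --ssl-mode requires a MySQL 5.7+ client):

    # Refuse the connection unless the server presents a certificate
    # signed by our pinned, self-signed CA.
    mysql --ssl-mode=VERIFY_CA \
          --ssl-ca=/etc/mysql/certs/ca.pem \
          --ssl-cert=/etc/mysql/certs/client-cert.pem \
          --ssl-key=/etc/mysql/certs/client-key.pem \
          -h db.example.com -u appuser -p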
Another option is a VPN, which is better suited if you have multiple daemons that require secure point-to-point connections.
The bigger problem usually lies the other way round: vulnerabilities in the MySQL server being exploited by untrustworthy clients.
However, yes, there have also been client vulnerabilities in the past that would allow an untrustworthy server to attack the client.
Naturally you should keep your MySQL client libraries up to date to avoid such possibilities, as well as updating the server.
If your connection to the server goes over the internet (rather than a private network), you should consider running it over an encrypted link, either MySQL's own SSL scheme or a tunnel. Otherwise any man-in-the-middle could tamper with the data going in and out of the database, and any client or server vulnerabilities could also be targeted.
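An SSH tunnel is the quickest form of such a link (the host and user names are placeholders): the client connects to a local forwarded port, and everything on the wire is encrypted.

    # Forward local port 3306 to the MySQL server's loopback over SSH,
    # then point the client at 127.0.0.1 as if the DB were local.
    ssh -N -L 3306:127.0.0.1:3306 deploy@db.example.com &
    mysql -h 127.0.0.1 -P 3306 -u appuser -p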
If the servers are in the same rack, you can use a dedicated cable for MySQL traffic or VLAN isolation on the switch, and harden the database OS. In a cloud, with a virtual cloud network, you can connect the machines so that ARP spoofing is not possible. For geo-IP replication you can use user/password authentication and a firewall, measure performance, then set up an encrypted tunnel and measure again; if the overhead is acceptable, it may be worth it against unknown threats, or at least a good use of spare CPU cycles.
As a rule of thumb, SQL servers have to be on an isolated network, never published to the public: you never expose an open database connection to anyone. Keep it behind serious firewall filtering, on a separate subnet made for handling sensitive data, with very good ARP-spoofing protection; otherwise it is crackable, and major parts of the system can be compromised through several techniques. Controlling, monitoring, and policing the MySQL traffic at the hardware layer really does the job and makes a real difference.
Optionally, you can keep the database on an encrypted hard drive in a physically safe place along with the switch, so that if the power is cut the machine switches off and the private key is erased; that secures both layer 1 and layer 2.
On the switch, using a static ARP table plus filtering that ties each static entry to its port is easy to do, because the port number is also part of the physical layer.
Is there a performance difference between TCP connections to:
localhost / 127.0.0.1
a domain which resolves to the local machine
Or more specifically, do the latter connections go through the loopback device, or over the actual network?
The reason I'm asking is I'm thinking about changing database settings in many PHP apps so they use a full domain instead of localhost. That way we could more easily move the database to a different server, if the need arises.
This is implementation- and operating-system-dependent. On Windows, anything connecting to a local IP address, even an outside-facing one, will go over loopback. This is a documented problem for applications such as packet sniffers, because you can't sniff the loopback (Windows doesn't treat loopback as a "device"; it is handled at the network level). In this case, however, it works in your favor.
Linux, in contrast, will follow whatever you have in your routing table, so packets destined for your local machine would go out over the network if the routing table were misconfigured. However, in 99% of cases the routing will be configured properly: Linux keeps a separate "local" routing table for the machine's own addresses, so packets addressed to any local IP, even the one assigned to your Ethernet device, are delivered internally over the loopback path and never touch the wire.
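You can check this on Linux for any address assigned to the machine (192.0.2.10 stands in for one of your own interface addresses; output trimmed):

    $ ip route get 192.0.2.10
    local 192.0.2.10 dev lo src 192.0.2.10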
In a properly configured environment, the only bottleneck of using a domain name would be DNS resolution time. Contacting an outside DNS server can add latency. However, if you add the domain name to your /etc/hosts file (C:\Windows\System32\drivers\etc\hosts on Windows), your system skips the DNS resolution phase and obtains the IP directly, making this cost moot.
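For example (db.example.com standing in for your real database domain):

    # /etc/hosts -- the resolver consults this file before asking DNS
    127.0.0.1    localhost
    127.0.0.1    db.example.com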
That depends on how the names are resolved. The procedure is typically /etc/hosts first and then DNS if that fails. If localhost is in your /etc/hosts, putting whatever.wherever in the file as well will make it resolve with the same speed.