Setting up servers with fault tolerance using Go and MySQL (failover)

I am working on a project where we are using Go for the web server and MySQL for the database.
We have been told to implement fault tolerance to handle a hardware crash. We were given two servers, each running MySQL and the Go server.
We have successfully set up replication in MySQL, but we are struggling with the failover part. Our thought was to get an extra server running HAProxy, so that we have a primary server and can fail over to the backup server.
We also considered using MySQL failover, but did not see how we could redirect the traffic using it.
Is this a reasonable plan? Or what would you recommend that we do instead?

If you want two identical servers connecting to their local MySQL instances, you need a way of deciding which one is the production server. There are a number of solutions for that, including:
- Setting up a reverse proxy, as you mention, but then the proxy itself becomes a single point of failure (SPOF).
- Using a floating IP, also known as a failover IP, but this only works if your host supports it. Cloud providers typically support them, as do some bare-metal server providers.
There is nothing specific to Go as far as I know.
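On the application side, one simple complement (not a replacement for a proxy or a floating IP) is to have the Go server itself fall back to the backup database when the primary is unreachable. Here is a minimal sketch using the go-sql-driver/mysql driver; the host names primary-db and backup-db, the credentials, and the database name are placeholders for your own setup:

```go
package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/go-sql-driver/mysql" // registers the "mysql" driver
)

// openWithFailover tries each DSN in order and returns the first
// connection that actually responds to a ping.
func openWithFailover(dsns []string) (*sql.DB, error) {
	var lastErr error
	for _, dsn := range dsns {
		db, err := sql.Open("mysql", dsn)
		if err != nil {
			lastErr = err
			continue
		}
		db.SetConnMaxLifetime(time.Minute) // recycle connections regularly
		if err := db.Ping(); err != nil {  // Ping actually dials the server
			db.Close()
			lastErr = err
			continue
		}
		return db, nil
	}
	return nil, lastErr
}

func main() {
	// Placeholder DSNs: primary first, backup second.
	db, err := openWithFailover([]string{
		"app:secret@tcp(primary-db:3306)/mydb",
		"app:secret@tcp(backup-db:3306)/mydb",
	})
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	log.Println("connected")
}
```

Note that a sketch like this only picks a server when the application (re)starts; for transparent failover of live connections you still want something like HAProxy or a floating IP in front of MySQL.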

Related

How can I connect a local MySQL database to the IBM Node-Red platform

I am using MySQL Workbench on Windows, and I want to connect the database to a Node-RED instance running on the IBM Cloud. Since they don't run on the same server, using host 127.0.0.1 and port 3306 does not work. What permissions should I give?
I'm going to make a LOT of assumptions here, because there really isn't enough information in your question.
First assumption: by "running on IBM" you mean that Node-RED is running on the IBM Cloud hosting service.
The short answer is you cannot do what you want.
The longer version is that you probably could make this work, but doing so is a REALLY bad idea.
Second assumption: you are doing this from home (even if you are doing it from an office location, the same problems are likely to apply). This means you are connected to a local LAN using an RFC 1918 address range (e.g. 192.168.0.x), behind a router that is performing NAT (Network Address Translation). You will therefore need to set up port forwarding on the router so that when traffic arrives at the router, it is sent on to your Windows machine. How you do this will depend on your router.
Next problem: your broadband connection probably doesn't have a static IP address, which means the address will change every time your connection drops. There are workarounds for this using things like Dynamic DNS, but that's too complicated to get into here.
Assuming you get all of that sorted out, you still have the problem that you have now exposed your MySQL database to the internet, so you need to make sure you have enabled all the right security measures to prevent people from logging in and, at best, seeing all your data.
There are two much better solutions to this:
- Run Node-RED on the same machine, or at least on the same local network, as the database.
- Use one of IBM Cloud's hosted database solutions; these are a lot easier to connect to an IBM Cloud instance of Node-RED.
If you do not want to open ports on your network, I recommend using a free remote MySQL server.
A simple website is https://remotemysql.com
Just take a screenshot of your database credentials after registration.
Keep in mind that if your database stays empty it will get deleted after some time.

Connecting to Database on Virtual Machine?

Simple question: can a Java service layer running on Tomcat 7 on a host machine connect to a persistent data store (MySQL) running inside a VirtualBox VM with port forwarding? I want to know whether the Hibernate or JDBC connection strings from the host machine work if the MySQL server is installed inside VirtualBox.
Also, if it does work, can I expect behavioral deviations in terms of speed and connection pooling when everything is packaged into one single system and deployed on a real-world web server in a single environment?
The short answer is yes, it is possible and will work. You will likely have to play with the firewall settings on your VirtualBox instance. You don't specify the OS, so it's hard to tell you exactly what you'll need to tweak.
As far as deploying this in a real-world environment, if you mean production, you probably should NOT do that. This is a great setup to build on, but not something I would run in production.
To be clear, there won't be any issues behaviorally speaking; it will act as MySQL always acts. But it will absolutely be slower than running on 'bare metal'. How much slower will vary based on hardware, workload, etc., and it is generally not a great design for a production deployment.
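If you want to sanity-check the forwarded port before wiring up Hibernate, a tiny probe like the one below helps. It is written in Go purely as an illustration (the principle is the same for a JDBC URL such as jdbc:mysql://127.0.0.1:3307/yourdb), and it assumes a hypothetical VirtualBox NAT rule that forwards host port 3307 to the guest's 3306:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Hypothetical port-forwarding rule: host 3307 -> guest 3306.
	// Adjust to whatever you configured in the VirtualBox NAT settings.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:3307", 3*time.Second)
	if err != nil {
		fmt.Println("forwarded port not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("forwarded port is reachable; point your JDBC/Hibernate URL at 127.0.0.1:3307")
}
```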

Run MySQL and PostgreSQL on same server

The application running for our customer uses a MySQL database. However, this server has no monitoring. I want to install OpenNMS (which uses PostgreSQL) to monitor the solution and send traps to the main NMS system.
Is there any problem having both on the same server?
No, there is no technical problem. They listen on different default ports (3306 for MySQL, 5432 for PostgreSQL).
The only problem that could arise is that each individual DB might be slower compared to an installation on separate physical machines, because they both share (and compete for) the same resources (I/O, memory, CPU, network, ...).

MySQL connection and security

I was wondering if someone could tell me whether there are any potential security breaches that could occur by connecting to a MySQL database that does not reside at 'localhost', i.e. via an IP address?
Yes, breaches do occur when the connection to your database is not protected. This is a network security question more so than an application security question, so the answer is entirely dependent on your network topology.
If a segment of your network may be accessible to an attacker, then you must protect yourself with cryptography. For instance, if a malicious individual has compromised a machine on your network, they can conduct an ARP spoofing attack to sniff or even man-in-the-middle (MITM) devices on a switched network. This could be used to see all data that flows in and out of your database, or to modify the database's response to a specific query (like a login!). If the network connection to your database is a single RJ45 twisted-pair cable to your httpd server, all residing inside a locked cabinet, then you don't have to worry about a hacker sniffing it. But if your httpd is on a Wi-Fi network and connecting to a database in China, then you might want to think about encryption.
You should connect to your MySQL database using MySQL's built-in SSL ability. This ensures that all data transferred is highly protected. You should create self-signed X.509 certificates and hard-code them. This is free, and you don't need a CA like Verisign for it. If there is a certificate exception, then there is a MITM, and the failed verification stops you from spilling the password.
Another option is a VPN, which is better suited if you have multiple daemons that require secure point-to-point connections.
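To make the built-in SSL suggestion concrete, here is a minimal client-side sketch. It uses the go-sql-driver/mysql package since the question doesn't specify a client language, and the certificate path, credentials and host name are placeholders:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"database/sql"
	"log"
	"os"

	"github.com/go-sql-driver/mysql" // driver plus RegisterTLSConfig
)

func main() {
	// Load the self-signed CA certificate that signed the server cert.
	// Path and DSN below are placeholders.
	pem, err := os.ReadFile("/etc/myapp/ca-cert.pem")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(pem) {
		log.Fatal("failed to parse CA certificate")
	}

	// Register a named TLS config and reference it in the DSN via tls=custom.
	if err := mysql.RegisterTLSConfig("custom", &tls.Config{RootCAs: pool}); err != nil {
		log.Fatal(err)
	}

	db, err := sql.Open("mysql", "app:secret@tcp(db.example.com:3306)/mydb?tls=custom")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := db.Ping(); err != nil {
		log.Fatal(err) // fails if the server certificate does not verify
	}
	log.Println("encrypted connection established")
}
```

Because the driver verifies the server certificate against the pinned CA before authentication, a man-in-the-middle presenting a different certificate makes Ping fail before the credentials are ever sent.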
It's usually the other way around where the bigger problem lies: vulnerabilities in the MySQL server being exploited by untrustworthy clients.
However, yes, there have also been client vulnerabilities in the past that would allow an untrustworthy server to attack the client.
Naturally, you should keep your MySQL client libraries up to date to avoid such possibilities, as well as updating the server.
If your connection to the server is going over the internet (rather than a private network), you should consider running it over an encrypted link (either MySQL's own SSL scheme or using a tunnel). Otherwise any man-in-the-middle could fiddle with the data going in and out of the database, and if there are client or server vulnerabilities those could also be targeted.
If the servers are in the same rack, you can use a dedicated high-speed cable for MySQL traffic or VLAN isolation on the switch, and harden the database OS. In the cloud, a virtual cloud network can be set up so that ARP spoofing is not possible. For geo-IP replication you can use user/password authentication and a firewall, measure the performance, then set up an encrypted tunnel and measure the performance again; if the overhead is acceptable, it may be worth it against unknown threats, or simply a useful way to spend spare CPU cycles.
Simply put, SQL servers have to be on an isolated network, never exposed to the public. As a rule of thumb, you never publish an open database connection to anyone; keep it behind seriously good firewall filtering, on a separate subnet made for handling sensitive data, with very good ARP spoofing protection. Otherwise it is crackable, and major parts of the system can be compromised using several techniques. It is often easy to handle it this way, e.g. controlling, monitoring and applying policy to the MySQL traffic at the hardware layer, and it really does the job and makes a real difference.
Optionally, you can keep the database on an encrypted hard drive in a physically safe place along with the switch, so that when the power is cut it switches off and the private key is erased; hence both layer 1 and layer 2 are secured.
On the switch, using a static ARP table plus filtering that ties the static entries to specific ports is very easy to do, because the port number is also part of the physical layer.

Port LAMP application to EC2

Any good resource on how to port a LAMP stack to EC2?
Mainly I'm concerned about storage, the MySQL part. The existing app works against a single store. Do I need to port all my storage to S3? Will the EC2 instances be able to share a single MySQL database? Alternatively, I could partition my data and have a separate database for each EC2 image, but I would still need a global user-account database for authentication, and if the data is partitioned, requests have to be routed to the proper image. I'm not sure how this is achieved in EC2.
To wrap up: where should I start?
These Tips for deploying a LAMP stack on Amazon EC2 are, IMO, a really good starting point. I'd suggest reading them first (I'm not sure I understand your concerns about the storage part); maybe things will be clearer after.
I know this is old, but for anyone who's in this situation check out: http://www.robotmedia.net/2011/04/how-to-create-an-amazon-ec2-instance-with-apache-php-and-mysql-lamp/
That is the most straightforward tutorial I've found for implementing a LAMP stack on Amazon EC2.
Using S3 isn't required, although it is an affordable way to host files. Yes, multiple instances can share a single database, and you can use database replication for additional availability. Here's a great tutorial for that: http://aciddrop.com/2008/01/10/step-by-step-how-to-setup-mysql-database-replication/