Bitnami MySQL Chart - Replication and TLS

Quick question for anyone familiar with MySQL Helm charts by Bitnami:
Does anyone know if it's possible to configure a replication cluster with TLS enabled between the primary/secondary instances?
I was able to easily get a replication cluster up and running without TLS, but I can't see anything baked into the chart about enabling TLS for replication purposes. I tried using an init script to accomplish this based on the instructions here, but could not log in as root on the replica instances at the point that init scripts run.
I almost wonder if I need to create another container that waits for them to start, then connects and runs that script?
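Roughly what I have in mind for that extra container is something like the sketch below. The hostnames and the MYSQL_ROOT_PASSWORD variable are placeholders for whatever the chart actually creates, not something I've confirmed:

    # wait until the replica answers, then force the replication channel onto TLS
    until mysqladmin ping -h mysql-secondary -u root -p"$MYSQL_ROOT_PASSWORD" --silent; do
      sleep 5
    done
    mysql -h mysql-secondary -u root -p"$MYSQL_ROOT_PASSWORD" \
      -e "STOP SLAVE; CHANGE MASTER TO MASTER_SSL=1; START SLAVE;"

(On the primary I'd presumably also mark the replication user with REQUIRE SSL.)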

Related

Creating a Staging VM in Google Compute Engine

I'm trying to set up a staging VM for a site that's in production that I have just inherited. The site is running WordPress/WooCommerce and has not been updated in a while. The VM it's hosted on is running an old version of PHP. Obviously, this all needs to be fixed up, but I'm unfamiliar with GCP Compute Engine. Also, any attempt to run backup/clone plugins crashes the site and requires a restore from the daily snapshot, which is very annoying.
Is it possible to clone the VM/disk to a new instance, point that at a temporary domain, and test/update the site? I have been trying to do this for a while now without much luck; any suggestions would be much appreciated. Thanks.
Creating a clone of an existing VM is possible and quite easy.
Create a snapshot of the VM. If possible, stop the VM before doing this to ensure 100% accuracy - this way you will have an exact snapshot of the drive without any errors. You can do it while the VM is running too if stopping it is out of the question.
Create a VM from the snapshot - select the snapshot you've just created as the boot disk. Remember to assign a static public IP to this VM (unless you want it to change after a VM restart, and since you're going to do some configuration this would likely happen). You can change the VM's specs at this time too - nothing stops you from adding/removing CPUs, RAM etc. It may well be that your VM is underutilised and you can use something smaller to save costs. Or the opposite.
Start the machine. Now you can modify your WP configuration to point to the new domain. Depending on the SSL certificate, you can either use an external one or the one provided by GCP (the most convenient solution).
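If you prefer doing this from the command line instead of the console, the same steps look roughly like this - the zone, machine type and resource names are placeholders for your own values:

    # snapshot the production boot disk (stop the VM first if you can)
    gcloud compute disks snapshot prod-vm --snapshot-names=prod-snap --zone=us-central1-a

    # reserve a static external IP for the staging VM
    gcloud compute addresses create staging-ip --region=us-central1

    # build a new disk from the snapshot and boot the staging VM from it
    gcloud compute disks create staging-disk --source-snapshot=prod-snap --zone=us-central1-a
    gcloud compute instances create staging-vm --zone=us-central1-a \
      --machine-type=e2-medium --disk=name=staging-disk,boot=yes --address=staging-ip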
If you already own a domain you want to use for staging you can host it in Cloud DNS or at some other provider - just point it to the external IP you just reserved.
If you will be hosting your domain in Cloud DNS, you will find the necessary information in the documentation about managed zones (domains).
You can also consider creating a new VM, using it as a template for a managed, autoscaled instance group, and putting an external HTTPS load balancer in front of it. But this adds some complexity, so it's only an idea in case you need to handle a lot more traffic.
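If the staging domain does end up in Cloud DNS, pointing it at the reserved IP is just a record set change (the zone name, record name and IP below are placeholders):

    gcloud dns record-sets transaction start --zone=staging-zone
    gcloud dns record-sets transaction add "203.0.113.10" \
      --zone=staging-zone --name=staging.example.com. --ttl=300 --type=A
    gcloud dns record-sets transaction execute --zone=staging-zone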

Does traffic get discarded if a google cloud endpoint is redeployed?

Let's say for argument's sake that I have a VM instance which is configured with an endpoint config_id in its metadata that is set to an existing working Cloud Endpoint.
Can someone please explain to me what happens to the incoming requests if the Cloud Endpoint is redeployed? Obviously, I will get a new config_id, but if I haven't yet applied this config_id to the VM instance, does the traffic just get discarded?
If this is the case, what are some viable solutions to prevent service interruption for my users?
Thanks!
The traffic keeps going to the old configuration until you update endpoints-service-config-id in the instance metadata with the new config_id.
Then SSH into the VM instance with gcloud compute ssh [INSTANCE-NAME] and run sudo /etc/init.d/nginx restart.
In conclusion, traffic won't be discarded. It just keeps using the old config deployment. See redeploying
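For completeness, the two steps are roughly the following (the instance name and config_id below are placeholders):

    # point the instance at the new Endpoints configuration
    gcloud compute instances add-metadata my-instance \
      --metadata endpoints-service-config-id=2017-02-01r0

    # then restart the proxy on the instance so it picks up the new config
    gcloud compute ssh my-instance
    sudo /etc/init.d/nginx restart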

Gear to gear connection (Please read the full description first)

I have checked almost all solutions, both in the OpenShift forum and here on Stack Overflow, but couldn't solve the problem.
Here is the situation
I have a PHP server with load balancing in one gear.
I have a second gear for a MySQL server along with phpMyAdmin. At present OpenShift does not support load balancing for phpMyAdmin, so my second gear does not have any scaling feature.
Now I want to host a PHP app in the first gear and the database in the second gear. So how do I connect them internally (it would be better if I could do it without port forwarding)? I need all the commands from beginning to end, unfortunately.
Thank you.
You should just add the MySQL cartridge to your scaled application. It will still put the MySQL database on its own gear, but it will be accessible from your scaled application using the standard MySQL environment variables. You can view those variables by sshing into your application and running env | grep mysql. If you decide to run your own second gear for the MySQL database (you still had to install a web cartridge to do that anyway, right?) then you will either HAVE to use port forwarding for direct access, or you will have to write an API on that server that will allow your application to access the MySQL database.
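In concrete terms, that first option looks something like this (the app name is a placeholder; the variable names are the ones the MySQL cartridge normally exports):

    # add the MySQL cartridge to the existing scaled app
    rhc cartridge add mysql-5.5 -a myapp

    # ssh in and look at the connection variables it exports
    rhc ssh myapp
    env | grep -i mysql
    # OPENSHIFT_MYSQL_DB_HOST, OPENSHIFT_MYSQL_DB_PORT,
    # OPENSHIFT_MYSQL_DB_USERNAME, OPENSHIFT_MYSQL_DB_PASSWORD, ...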

Mysql: How to configure mysql proxy for an existing master-slave setup

I want to configure mysql proxy in my test environment to observe the following:
1. The behavior of the proxy.
2. How load and CPU usage vary on my test server with read/write distribution.
I googled and was able to install the proxy on my Ubuntu Linux box.
But I didn't see anything on configuring it step by step, or on how to start and stop it.
Could someone elaborate on this? It would be of great help to me.
Thanks in advance
Regards,
UDAY
By default, if you run the proxy on the same machine as the server, it will listen on port 4040 and query a backend server on the MySQL default port of 3306. Other port numbers and server locations can be configured from the command line or with a configuration file.
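As a starting point, the listener and the backend can be set either on the command line or in a configuration file, roughly like this (host names and paths are placeholders):

    # listen on port 4040 and forward everything to the master on 3306
    mysql-proxy --proxy-address=:4040 \
                --proxy-backend-addresses=master.example.com:3306

    # or keep the same settings in a file and start the proxy with it
    mysql-proxy --defaults-file=/etc/mysql-proxy.cnf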
To distribute queries across servers, add monitoring, profiling etc. you need to provide a Lua script to mysql-proxy. See the example / tutorial scripts in /usr/local/share/docs that came with the installation download. There is work to do for a production implementation.
The basics of how the scripting works can be found here under MySQL Proxy Scripting.
Don't be worried about Lua. The syntax is quite readable given the tutorial examples to work from. As and when you need it, lua.org has more details on Lua.
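For the read/write distribution part of your question, the bundled rw-splitting.lua example is the usual starting point. Invoking it looks something like this - the host names are placeholders and the script's exact path depends on your installation:

    mysql-proxy --proxy-address=:4040 \
                --proxy-backend-addresses=master.example.com:3306 \
                --proxy-read-only-backend-addresses=slave.example.com:3306 \
                --proxy-lua-script=/usr/local/share/doc/mysql-proxy/rw-splitting.lua

    # then point your test clients at the proxy instead of the server
    mysql -h 127.0.0.1 -P 4040 -u testuser -p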

Integrate different Nagios webservers

I have different sites running with 4 to 5 servers at each location. Each location has one monitoring server with Nagios. Now I want to create a central location and combine all the Nagios services running at each location. Can anyone please point me to some documentation for this type of job?
There are two approaches that you can take.
Install a new Nagios core as you did at each location and perform active checks on each of the remote hosts. You'll likely end up installing NRPE on each of the remote hosts at each location and can read this document for the details: http://nagios.sourceforge.net/docs/nrpe/NRPE.pdf. If your remote servers are Windows servers, you can use NSClient to do much of the same things that NRPE does for Linux hosts. This effectively centralizes your monitoring. I also wrote some how-to style entries for using NRPE to run privileged commands http://blog.gnucom.cc/?p=479 or to run event handlers http://blog.gnucom.cc/?p=458. If you get tired of installing NRPE, you can use my script here http://blog.gnucom.cc/?p=185. I also have instructions to install NSClient here http://blog.gnucom.cc/?p=201.
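As a concrete example, a single NRPE check ends up looking roughly like this (the host address, thresholds and paths are placeholders):

    # on the remote host, in nrpe.cfg: the command NRPE is allowed to run
    #   command[check_load]=/usr/local/nagios/libexec/check_load -w 15,10,5 -c 30,25,20

    # on the central Nagios server, test it by hand before wiring it into a service definition
    /usr/local/nagios/libexec/check_nrpe -H 192.168.1.10 -c check_load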
Install a new Nagios core as you did at each location and perform passive checks by instructing the remote Nagios cores to feed their results to the new central Nagios core's passive command file. I haven't done this myself, so I'm going to point you to the community's documentation here: http://nagios.sourceforge.net/docs/2_0/passivechecks.html. You could probably look at my event handler post to set up event handlers that send checks to the main server.
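The passive approach boils down to getting result lines written into the central core's external command file (via NSCA or a custom event handler). A hand-crafted submission looks like this - the host name, service description and command file path are placeholders:

    # format: [timestamp] PROCESS_SERVICE_CHECK_RESULT;<host>;<service>;<return code>;<output>
    # return codes: 0=OK 1=WARNING 2=CRITICAL 3=UNKNOWN
    now=$(date +%s)
    printf "[%s] PROCESS_SERVICE_CHECK_RESULT;web01;load;0;OK - load is fine\n" "$now" \
      > /usr/local/nagios/var/rw/nagios.cmd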
From my personal experience, the first option I mentioned is easier to implement and far easier to administer. However, as your server fleet grows you'll start seeing major CPU bottlenecks on the main Nagios core. This is where passive checks become beneficial, as the main Nagios core simply waits for check results to be sent to it rather than having to run the checks itself.
Hope this helps. :)
A centralized view tool may be what you are looking for. There are a number of different options available.
Nagios Fusion
MK Livestatus
Nagcen
Thruk