aws - ec2 - mysql - instance stop/reboot - other users' passwords changed

So I am facing this problem whereby, whenever I stop my MySQL server (which is running on an EC2 free-tier micro instance), my non-root users' passwords get changed by themselves.
I need to reset their respective passwords every time I stop and reboot my MySQL EC2 instance.

See the following screenshot:
Use the Image / Create Image function in the EC2 console. Give it a meaningful image name and description. For the description, help your future self by being as verbose as possible, e.g. "from 20160401 build plus Scala 2.12 and vsftpd configured". The request to save the custom AMI will be accepted and may take a short time to complete. Typically, when you are just starting out with small instances, it completes in a few minutes. When it is done, the image will be visible in the left pane under Images / AMIs.
See the AWS manual page entitled Step 3: Deploy Your App, specifically the section "Create a Custom AMI" near the bottom.
In short, without saving your work and the current state of your server, all work is lost on a stop and reboot. You also need to manage, clean up, and discard prior AMIs that would otherwise cause confusion later; that is why the description field is your best friend. Naturally, only discard images that are no longer of value.
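If you prefer scripting this instead of clicking through the console, the same image can be created with the AWS CLI. A minimal sketch; the instance ID, image name and description below are placeholders you would replace with your own:
# Create a custom AMI from a running (or stopped) instance.
aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "mysql-ec2-20160401" \
    --description "from 20160401 build plus Scala 2.12 and vsftpd configured"
# List your private AMIs to confirm the new image shows up once it is available.
aws ec2 describe-images --owners self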

Related

Minor MySQL DB upgrade on GCP

There is a bug in MySQL 5.7.14 regarding password hashes, which has been fixed in version 5.7.19. But MySQL on GCP doesn't offer any option to do a minor upgrade. So can anyone suggest how to go about this issue?
Version 5.7.25, which includes the fix for this bug, will be in the next maintenance release later this month.
No, you cannot do minor upgrades by yourself in Cloud SQL because it is a fully managed service by Google, and all updates and upgrades are done behind the scenes for their customers' instances. These updates can be done at any time during the next maintenance cycle. However, you can control the day and time by specifying a maintenance window for the instance in question.
When you specify a maintenance window, Cloud SQL will not initiate updates outside of that window. This way you can pick a window when there is little or no traffic on your applications, which helps reduce the disruptive side effects of the maintenance. Maintenance usually takes between 1 and 3 minutes for the new update to be pushed and for the instance to become available again.
To specify a maintenance window:
1- Go to the project page and select a project.
2- Click an Instance name.
3- On the Cloud SQL Instance details page, click Edit maintenance preferences.
4- Under Configuration options, open Maintenance.
5- Configure the following options:
Preferred window. Set the day and hour range when updates can occur on this instance.
Order of update. Set the order for updating this instance, in relation to updates to other instances. Set timing to Any, Earlier, or Later. Earlier instances receive updates up to a week earlier than later instances within the same location.
You can read more on maintenance windows in the Cloud SQL documentation.
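The same maintenance window can also be set from the command line with the gcloud CLI. A minimal sketch, assuming an instance named my-instance (a placeholder):
# Set the preferred maintenance window to Sunday at hour 23.
gcloud sql instances patch my-instance \
    --maintenance-window-day=SUN \
    --maintenance-window-hour=23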

openshift: is it possible to create multiple mysql cartridges?

OpenShift offers scalability. However, it seems to me that if you are using MySQL, in the end MySQL queries/hits will be the bottleneck (if you are lucky enough to generate traffic that needs scalability, and considering the max-connections limit on OpenShift).
Suppose I want to use OpenShift: is it possible to create multiple MySQL cartridges to balance the load, and to create dynamic environment variables to assign requests to different MySQL cartridges? (Suppose I send an id or something, and the environment variable for MySQL is set to "dbname" plus the last digit of this id.)
This is a simplified example which should multiply the database capacity by ten (if the data is unrelated). Can it be done?
I hope some OpenShift guy or girl will clarify this for me....
cheers
Edit: Thanks mbaird for your info:
To clarify:
I wasn't talking about auto-scaling but about using, for instance, 11 static/persistent db cartridges which would never scale up or down.
Then you could store user information in any of them depending on (also for instance) the last digit of their id.
The 11th database cartridge could be used as a lookup table to get the user's id and then redirect that user to the right database (if the last digit = 0, db = db0; if the last digit = 1, db = db1; etc.). This would enable me to call the right database for the right user.
Of course, this is not auto-scaling, but it would multiply the database capacity by (roughly) ten.
However, this would require the ability to create multiple MySQL cartridges and corresponding environment variables to gain access to all these MySQL cartridges.
It seems to me this is not possible right now, so I will investigate your suggestions.
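For illustration only, here is a rough shell sketch of the routing idea described above. It assumes one connection URL per database in environment variables named DB_URL_0 through DB_URL_9 - hypothetical names you would have to define yourself, since OpenShift does not provide them for multiple cartridges:
#!/bin/sh
# Route a user to one of ten databases by the last digit of their id.
USER_ID=$1
LAST_DIGIT=$(( USER_ID % 10 ))
eval "DB_URL=\$DB_URL_${LAST_DIGIT}"
echo "user $USER_ID -> $DB_URL"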
The OpenShift database tier currently doesn't scale. Further, even if you could add a second MySQL cartridge it wouldn't give you a scalable database, it would give you a new, empty database. What you are looking for is the ability to scale the MySQL cartridge across multiple gears, not adding another cartridge.
I've actually seen some comments from OpenShift (although I can't seem to find them now) that the databases on OpenShift are for development only and you should look for another service to host your database if you have a mission-critical application that requires database fail-over and scalability.
Since you are specifically using MySQL, I would look into using Amazon RDS (either MySQL or the new Aurora engine which is MySQL compatible) or ScaleDB.

How to perform targeted select queries on main DB instance when using Amazon MySQL RDS and Read replica?

I'm considering using Amazon MySQL RDS with Read Replicas. The only thing bothering me is replica lag and eventual inconsistency. For example, imagine the case where a user modifies his profile (the UPDATE is performed on the main DB instance) and then refreshes the page to see the changed info (the SELECT might be served from a replica which has not received the change yet due to replica lag).
By accident, I found an Amazon article which mentions it's possible to perform targeted queries. To me it sounds like we can add some parameter or other to tell Amazon to execute a SELECT on the main DB instance instead of on a replica. The user-profile example is quite trivial, but the same problem occurs in more realistic cases, for example checkout, where a user performs several steps and needs to see updated info on the next screens. Yes, the application could cache the entire data set on its own, but it would be great if anybody knows how to perform targeted queries on the main DB instance.
I read the link you referenced and didn't find any mention of "target" or anything like that.
But this line might be what you're referring to:
Otherwise, you should spread out the load and read from one of the Read Replicas. You can make this decision on a query-by-query basis within your application. You will probably want to maintain some sort of registry of available Read Replicas within your application, choosing from among them on a round-robin or randomly distributed basis.
If so, then I interpret that line to suggest that you can balance reads in your application by just picking one server from a pool and hitting that one. But it would be all in your application logic.
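In practice that means the read/write split lives entirely in your own code: writes and read-after-write queries go to the primary endpoint, everything else goes to a replica you pick yourself. A minimal sketch using the mysql command-line client, where all hostnames, credentials and table names are placeholders:
# Placeholder endpoints - substitute your own RDS instance endpoints.
PRIMARY="mydb.xyz.us-east-1.rds.amazonaws.com"
REPLICAS="mydb-replica-1.xyz.us-east-1.rds.amazonaws.com mydb-replica-2.xyz.us-east-1.rds.amazonaws.com"
# Writes, and reads that must see the latest write, go to the primary.
mysql -h "$PRIMARY" -u app -p"$DB_PASS" mydb -e "UPDATE users SET name='Bob' WHERE id=42"
mysql -h "$PRIMARY" -u app -p"$DB_PASS" mydb -e "SELECT name FROM users WHERE id=42"
# Everything else can go to a randomly chosen replica.
READ_HOST=$(echo $REPLICAS | tr ' ' '\n' | shuf -n 1)
mysql -h "$READ_HOST" -u app -p"$DB_PASS" mydb -e "SELECT COUNT(*) FROM users"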

Best way to migrate servers without losing any data and with no downtime(?)

This is a methodology question from a freelancer, with a corollary on MySQL. Is there a way to migrate from an old dedicated server to a new one without losing any data in between, and with no downtime? In the past, I've had to lose MySQL data between the time when the new server goes up (i.e., all files transferred, system up and ready) and when I take the old server down (data is still being written to the old one until the new one takes over). There is also a short period where both are down while DNS, etc., refreshes.
Is there a way for MySQL/root to easily transfer all data that was updated/inserted between a certain time frame?
I'd make a sorry page, put it up on the old server, transfer all the data to the new one and then switch DNS. There will be some downtime, though.
What I like to do is close the site, start moving the DB to the other server, then move all the files (PHP etc.) to the other server (important if you store data or files change every hour, like image uploads), and point the old server at the new DB server while the DNS change propagates everyone to the new server.
The longest downtime comes from the DNS switch - it can take several hours or even days until all clients' caches expire.
To avoid it:
Set up the application on the new server to access the DB on the old one, or just proxy HTTP requests with nginx to the old one, depending on what is more acceptable.
Then do the DNS switch. Some clients go to the old server, some to the new; here you can wait 24+ hours to make sure all requests go to the new server.
While DNS switches, rehearse the MySQL transition:
Make a 'sorry/maintenance page'; there are plenty of guides on how to do that using rewrites. You'll need it anyway.
Measure how fast you can dump, transfer and restore the DB. If the time is acceptable, this is the simplest approach, but remember to give yourself some margin.
If the previous step is too slow, you can try the binlog method suggested in the previous answer.
Minimal downtime can be achieved by making the new server a MySQL slave of the old one. Under the hood the slave just downloads the binlog from the master on the fly, so you save time on transferring the whole log; most probably, during minimal load the slave will be just a few seconds behind the master and will catch up very quickly once the app is taken down. See how to force a slave to catch up (a rough sketch of this setup is shown below).
Write a script that does the whole transition for you: enables maintenance mode, locks the master DB, waits until the slave catches up, makes the slave the new master, replaces the app config with the new DB, disables maintenance, switches the app, etc. This way you save time on typing commands yourself; test it on a staging environment to avoid possible errors (also remember to set a larger MySQL timeout, just in case the slave is far behind).
Then comes the transition itself, by running the script from the previous step.
Also, if you use file uploads to a local filesystem, these need to be synced too, and with lots of files this is more painful than the DB, because even an rsync scan for changes can take a long time.
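A minimal sketch of the slave-based approach described above, assuming the old server is reachable as old.example.com and a replication user repl/secret created for this purpose (all names and credentials are placeholders):
# On the old server: make sure binary logging is enabled (log_bin in my.cnf),
# create a replication user, and take a dump that records the binlog coordinates.
mysql -u root -p -e "CREATE USER 'repl'@'%' IDENTIFIED BY 'secret'; GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';"
mysqldump -u root -p --all-databases --single-transaction --master-data=1 > dump.sql
# On the new server: import the dump, point it at the old master and start replicating.
# --master-data=1 already wrote the CHANGE MASTER TO log file/position into dump.sql.
mysql -u root -p < dump.sql
mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='old.example.com', MASTER_USER='repl', MASTER_PASSWORD='secret'; START SLAVE;"
# Watch Seconds_Behind_Master drop to 0 before switching the application over.
mysql -u root -p -e "SHOW SLAVE STATUS\G"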
Check out the MySQL binary log.
Sure. Enable bin logging on the source server. After that is started, make a DB dump and transfer it to the new server and import it. Then, when you're ready to make the switch, change DNS (let the change propagate while you're working), then take the site down on both servers. Copy the binlogs to the new server and run them again starting at the date/time of the dump.
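As a sketch of that binlog approach (hostnames and the timestamp are placeholders):
# On the old server, with binary logging enabled (log_bin in my.cnf):
mysqldump -u root -p --all-databases --single-transaction --flush-logs > dump.sql
# Import the dump on the new server and note the date/time of the dump.
# During the switchover, copy the binlogs over and replay everything written since then.
mysqlbinlog --start-datetime="2016-04-01 03:00:00" mysql-bin.0* | mysql -h new-server.example.com -u root -p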

Can a webserver determine if it's the active node of an HA failover system without hard coding anything on the server itself?

I can think of a few hacks using ping, the box name, and the HA shared name but I think that they are leading to data leakage.
Should a box even know it's part of an HA cluster, or what that cluster's name is? Is this more a function of DNS? Is there some API exposed for boxes to join an HA cluster and request the id of the currently active node?
I want to differentiate between the inactive node and active node in alerting mechanisms for a running program. If the active node is alerting I want to hit a pager and on the inactive node I want to send an email. Pushing the determination into the alerting layer moves the same problem elsewhere.
EASY SOLUTION: Polling the server from an external agent that connects through the network makes any shell game of who is the active node a moot point. To clarify: the only thing that will page is the remote agent monitoring the real server. Each box can send emails all day long for all I care.
It really depends on the HA system you're using.
For example, if your system uses a shared IP and the traffic is managed by some hardware box, then it can be hard to determine whether a certain box is the master or a slave. That will depend on the specific solution, really... As long as you can add a custom script to the supervisor, you should be OK - for example, the controller can ping a daemon on the master server every second. In the alerting script, simply check whether the time since the last ping is < 2 sec...
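A minimal sketch of that last-ping check, assuming the controller touches a file such as /var/run/ha-ping on the monitored box every second (the path and threshold are made up for illustration):
#!/bin/sh
# Consider this box active if the controller pinged it within the last 2 seconds.
PING_FILE=/var/run/ha-ping
NOW=$(date +%s)
LAST=$(stat -c %Y "$PING_FILE" 2>/dev/null || echo 0)
if [ $(( NOW - LAST )) -lt 2 ]; then
    echo "active - page someone"
else
    echo "inactive - send an email instead"
fi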
If your system doesn't have a supervisor/controller node, and each node instead tries to determine the state itself, you can have more problems. If a split brain occurs, you can end up with two slaves or two masters, so your alerting software will be wrong in both cases. Tools that can ensure only one live node (STONITH and others) could help.
On the other hand, in the second scenario, if the HA software works properly on both hosts, you should be able to obtain the master/slave information straight from it. It has to know its own state at any time, because that's one of its main functions. In most HA solutions you should be able to either query the current state or add some code to run when the state changes. Heartbeat offers both.
I wouldn't worry about the edge cases like a split brain though. Almost any situation when you lose connection between the clustered nodes will be more important than the stuff that happens on the separate nodes :)
If the thing you care about is really only logging/alerting, then ideally you could have a separate logger box which gets all the information about the current network/cluster status. An external box will probably have a better idea of how to deal with the situation. If your cluster gets DoS'ed, disconnected from the network, or loses power, you won't get any alert. A redundant pair of independent monitors can save you from that.
I'm not sure why you mentioned DNS - due to its refresh time it shouldn't be a source of any "real-time" cluster information.
One way is to get the box to export its idea of whether it is active into your monitoring. From there you can predicate paging/emailing on this status (with a race condition around failover), and alert when none or too many systems believe they are active.
Another option is to monitor the active system via a DNS alias (or some other method to address the active system) and page on that. Then also monitor all the systems, both active and inactive, and email on that. This will cause duplicate alerts for the active system, but that's probably okay.
It's hard to be more specific without knowing more about your setup.
As a rule, the machines in an HA cluster shouldn't really know which one is active. There's one exception, mind, and that's cronjobs. At work, we have an HA cluster on top of which some rather important services run. Some of those services have cronjobs, and we only want them running on the active box. To do that, we use this shell script:
#!/bin/sh
# Run the given command only if this box currently holds the cluster's shared IP.
HA_CLUSTER_IP=0.0.0.0   # replace with the external IP of your HA cluster
if ip addr | grep "$HA_CLUSTER_IP" >/dev/null; then
    eval "$@"
fi
(Note that this is running on Debian.) What this does is check to see if the current box is the active one within the cluster (replace 0.0.0.0 with the external IP of your HA cluster), and if so, executes the command passed in as arguments to the script. This ensures that one and only one box is ever actually executing the cronjobs.
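For example, assuming the script above is saved as /usr/local/bin/if-active (a name chosen here for illustration), a crontab entry installed on every node could look like this, and the job would only actually run on the active one:
# Installed on every node; executes only where the cluster IP is present.
*/5 * * * * /usr/local/bin/if-active /usr/local/bin/nightly-report.sh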
Other than that, there's really no reason I can think of why you'd need to know which box is the active one.
UPDATE: Our HA cluster uses Heartbeat to assign the cluster's external IP address as a secondary address to the active machine in the cluster. Programmatically, you can check whether your machine is the currently active box by calling gethostbyname() and iterating over the data returned until you either reach the end or find the cluster's IP in the list.
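A rough shell equivalent of that check (the cluster name is a placeholder): resolve the cluster's DNS name and see whether one of the resulting addresses is configured locally.
#!/bin/sh
# cluster.example.com is a placeholder for your HA cluster's DNS name.
CLUSTER_IP=$(getent hosts cluster.example.com | awk '{ print $1; exit }')
if ip addr | grep -q "$CLUSTER_IP"; then
    echo "this box is the active node"
fi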
Without hard-coding...? I assume you mean some native Heartbeat query; not sure. However, you could use ifconfig: HA creates a virtual interface on whatever interface it is configured to run on. For instance, if HA was configured on eth0, it would create a virtual interface eth0:0, but only on the active node.
Therefore you could do a simple query of the ifconfig output to determine whether the server was the active node or not, for example if eth0 is the configured interface:
ACTIVE_NODE=`ifconfig | grep -c 'eth0:0'`
That will set the $ACTIVE_NODE variable to 1 (active) or 0 (standby). Hope that may help.