There is a bug in MySQL 5.7.14 regarding password hashing that was fixed in version 5.7.19. But Cloud SQL on GCP doesn't offer any option to perform a minor version upgrade yourself. Can anyone suggest how to go about this issue?
Version 5.7.25, which includes the fix for this bug, will be in the next maintenance release later this month.
No, you cannot do minor upgrades yourself in Cloud SQL, because it is a fully managed service: Google applies all updates and upgrades behind the scenes to its customers' instances. These updates can happen at any time during the next maintenance cycle. However, you can control the day and time by specifying a maintenance window for the instance in question.
When you specify a maintenance window, Cloud SQL will not initiate updates outside of that window. This way you can pick a window when there is little or no traffic on your applications, which helps reduce the disruptive side effects of maintenance. Maintenance usually takes 1-3 minutes for the update to be pushed, after which the instance becomes available again.
To specify a maintenance window:
1- Go to the project page and select a project.
2- Click an Instance name.
3- On the Cloud SQL Instance details page, click Edit maintenance preferences.
4- Under Configuration options, open Maintenance.
5- Configure the following options:
Preferred window. Set the day and hour range when updates can occur on this instance.
Order of update. Set the order for updating this instance, in relation to updates to other instances. Set timing to Any, Earlier, or Later. Earlier instances receive updates up to a week earlier than later instances within the same location.
You can read more about maintenance windows in the Cloud SQL documentation.
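If you prefer the command line, the same preferences can be set with gcloud. A minimal sketch, assuming an instance named myinstance (the day and hour are example values, the hour is in UTC, and if I remember correctly the preview release channel corresponds to Earlier and production to Later):

    # set the preferred maintenance window to Sundays at 04:00 UTC
    gcloud sql instances patch myinstance \
        --maintenance-window-day=SUN \
        --maintenance-window-hour=4
    # optionally set the update order relative to other instances
    gcloud sql instances patch myinstance --maintenance-release-channel=production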
I identified an issue with some infrastructure I created on Google Cloud Platform and would like to ask the community for help.
A charge was made that I did not expect; in theory it should be almost impossible for me to exceed the free tier limits. But I noticed that my database is huge, at 1 GB and growing, and there are some very heavy buckets.
My database is managed by a Django app, and in its admin panel there are only 2 records in production. There were several backups and things like that, but I didn't create them.
Can anyone give me a guide on how to solve this?
I would assume that you manage the database yourself, i.e. it is not Cloud SQL. Databases pre-allocate files in larger chunks in order to be efficient on writes. You can check this yourself: write an additional 100k records, and most likely the size will not change.
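If it is MySQL and you have shell access, you can check what is actually taking up the space with a query against information_schema. A sketch (connection credentials omitted):

    # per-schema size in MB, data plus indexes
    mysql -e "SELECT table_schema,
                     ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS size_mb
              FROM information_schema.tables
              GROUP BY table_schema;"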
Go to Cloud SQL and it will show how many SQL instances you have. If you see the "Create instance" prompt, that means you don't have any Google-managed SQL instances and the one we're talking about is a standalone one.
You can also check Deployment Manager to see whether you deployed one from the Marketplace, or whether it's a standalone VM with MySQL installed manually.
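Both checks can also be done from the command line; a sketch, assuming gcloud is authenticated against the project in question:

    # list Google-managed Cloud SQL instances in the current project
    gcloud sql instances list
    # list Deployment Manager deployments (a Marketplace MySQL stack would show up here)
    gcloud deployment-manager deployments list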
I made a simple experiment and deployed (from the Marketplace) MySQL 5.6 on top of Ubuntu 14.04 LTS. The initial DB size was over 110 MB, but there are some records in it from the start (schema, permissions etc.), so it's not "empty" at all.
It is still weird that your DB is already 1 GB. Try deploying a new one (if your use case permits). Maybe this is some old DB that was used for some purpose, all the content was deleted afterwards, but some "trash" still remains and takes a lot of space.
Or, it may well be exactly as NeatNerd said: the DB pre-allocates space for performance and optimisation reasons.
If you can share more details, I can give you a better explanation.
I have an Azure SQL database where I am executing a change with a C# call (using await db.SaveChangesAsync();).
This works fine and I can see the update in the table, and in the APIs that I call which pull the data. However, roughly 30-40 minutes later, I run the API again and the value is back to the initial value; checking the database confirms it has indeed reverted.
I can't figure out why this is happening, and I'm not sure how to go about tracking it down. I tried using SQL Server's change tracking feature, but it doesn't give me any insight into WHY the change is happening, or in what process, just that it is happening.
By the way, this is a test Azure instance that nobody has access to but me, and there are no other processes. I'm assuming this is some kind of delayed transaction rollback, but it would be nice to know how to verify that.
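For what it's worth, this is roughly how change tracking can be enabled and queried from the shell via sqlcmd; the server, database, table names, and credentials below are all placeholders:

    # connection settings (placeholders)
    CONN="-S tcp:myserver.database.windows.net -d MyTestDb -U dbadmin -P secret"
    # enable change tracking at the database level
    sqlcmd $CONN -Q "ALTER DATABASE MyTestDb SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON)"
    # enable it for the table that keeps reverting
    sqlcmd $CONN -Q "ALTER TABLE dbo.MyTable ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON)"
    # list rows changed since version 0 (shows that changes happened, not who made them)
    sqlcmd $CONN -Q "SELECT * FROM CHANGETABLE(CHANGES dbo.MyTable, 0) AS ct"

Note that change tracking only records that rows changed; to see who or what issued the statements you would need something like Azure SQL auditing.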
I figured out the issue.
I'm using an Azure Free Tier service, which runs on a shared virtual machine. When the app went inactive, it was shut down, then restarted on demand when I issued a new request.
In addition, I had a Seed method in my Entity Framework migrations configuration that set the particular record I was changing back to 0, and when the app restarted, it re-ran the migration, because my web config was set up to do so.
Simply disabling the EF migrations and republishing does the trick (and when I upgrade to a better tier for real production, the problem will also go away). I verified that records other than those expressly mentioned in the migration Seed method were not affected, so that was clearly the cause, and after disabling the migrations I am not seeing it any more.
I want to update the machine type of my Google Cloud SQL instance, but this takes several minutes (second-generation instance), and the instance is unavailable until it has restarted. Because of this downtime, we have to change the machine type at night, so our visitors are troubled as little as possible.
Is there a workflow to minimise this downtime to zero, or maybe just a few seconds? I have already thought about possible solutions like adding a temporary failover replica or making use of a read replica.
I contacted Google Cloud support about this question and they told me that Cloud SQL isn't built to perform this change without downtime. If I want to be able to make such changes, I should look at Cloud Spanner, which is a horizontally scalable SQL solution provided by Google.
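For reference, the machine type change itself is a single patch call, and that call is what triggers the restart and the downtime. A sketch, with the instance name and target tier as placeholder values:

    # change the machine type; Cloud SQL restarts the instance to apply it
    gcloud sql instances patch myinstance --tier=db-n1-standard-4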
I have recently been doing cold migrations, which means that I make it impossible at the application level to read from or write to the database while the migration runs (a maintenance page).
This way, errors can't be caused by changes to the structure, and if there is a lot of load I don't risk MySQL crashing in the middle of the migration.
My setup is that every client gets their own database. The only downside to this approach is that there can be 15-45 minutes of downtime, depending on how many changes are made.
My solution to this would be the following:
Have two copies of the code running at the same time. I have code that detects which version of the program each client is on: if they are still on the old version, they see the old code; if they are on the new version, they see the new code.
The only part that scares me is that if someone launches a denial-of-service attack in the middle of the migration, I could have serious problems.
I have about 360 databases right now.
Is the hot method recommended? I just worry about a denial of service in the middle of it, or some sort of MySQL query error because there could be data changes going on. I did have this happen once before, but luckily it was just before I started the migration.
Your idea would work only if the "new code base" is 100% compatible with the "old DB version"; otherwise it might crash while the DB migration is in progress. It also requires that at no stage during the DB migration is the database in an inconsistent state (perhaps by wrapping migration steps in appropriate transactions).
If resources allow, I would:
install and configure the new code base under a new virtual host, pointing at the new database (see below)
put the "old" site in read-only mode
duplicate the current database, on the same DB server
migrate the duplicate database to the new version (see the sketch after this list)
switch the virtual host to the new code base (and make sure you lift the maintenance mode :)
let the new version mature for a few hours, and then drop the old code base, DB, and virtual host.
(you can even skip the tinkering with virtual hosts and use symbolic links instead)
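A minimal sketch of steps 3 and 4 on the DB server, with the database names and the migration script as placeholders:

    # duplicate the current database on the same server
    mysql -e "CREATE DATABASE app_db_new"
    mysqldump --single-transaction app_db | mysql app_db_new
    # run the new version's migration against the copy only
    mysql app_db_new < migrate_to_new_version.sql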
Currently I have two Linux servers running MySQL: one sitting on a rack right next to me under a 10 Mbit/s upload pipe (main server), and another a couple of miles away on a 3 Mbit/s upload pipe (mirror).
I want to be able to replicate data on both servers continuously, but have run into several roadblocks. One of them: under MySQL master/slave configurations, every now and then some statements drop (!), meaning some people logging on to the mirror URL don't see data that I know is on the main server, and vice versa. Let's say this happens to a meaningful block of data once a month, so I can live with it and assume it's a "lost packet" issue (i.e., god knows, but we'll compensate).
The other most important (and annoying) recurring issue is that when, for some reason, we do a major upload or update (or reboot) on one end and have to sever the link, LOAD DATA FROM MASTER doesn't work, and I have to manually dump on one end and load on the other: quite a task nowadays, moving some 0.5 TB worth of data.
Is there software for this? I know MySQL (the "corporation") offers this as a VERY expensive service (full database replication). What do people out there do? The way it's structured, we run an automatic failover where, if one server is not up, the main URL just resolves to the other server.
We at Percona offer free tools to detect discrepancies between master and slave, and to get them back in sync by re-applying minimal changes.
pt-table-checksum
pt-table-sync
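Typical usage looks roughly like this, run against the master; the host name and credentials are placeholders, and you would normally preview before executing:

    # compute per-chunk checksums on the master; they propagate to slaves via replication
    pt-table-checksum --replicate=percona.checksums h=master_host
    # preview the minimal statements needed to bring the slaves back in sync
    pt-table-sync --replicate percona.checksums h=master_host --print
    # apply them once the preview looks right
    pt-table-sync --replicate percona.checksums h=master_host --execute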
GoldenGate is a very good solution, but probably as expensive as the MySQL replicator.
It basically tails the journal and applies changes based on what's committed. It supports bi-directional replication (a hard task) and replication between heterogeneous systems.
Since they work by processing the journal file, they can do large-scale distributed replication without affecting performance on the source machine(s).
I have never seen dropped statements, but there is a bug where network problems could cause relay log corruption. Make sure you don't run MySQL without this fix.
Documented in the 5.0.56, 5.1.24, and 6.0.5 changelogs as follows:
    Network timeouts between the master and the slave could result in corruption of the relay log.
http://bugs.mysql.com/bug.php?id=26489
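To check whether your servers already carry the fix, and to inspect the relay log state on the slave, something like:

    # verify the server version is at least 5.0.56 / 5.1.24
    mysql -e "SELECT VERSION()"
    # on the slave: relay log coordinates and any replication errors
    mysql -e "SHOW SLAVE STATUS\G"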