Database high CPU utilisation during Magento indexing - MySQL

Magento is creating 700+ connections whenever the cache is flushed or indexing is triggered, which brings the database down. The production site stays down for about 20 minutes until all of the connections clear.
All of the connections fire the same query and sit in the "Creating sort index" state, even though we are using a very high database configuration.
The DB is on Amazon RDS. Any help is appreciated; this is breaking our production site.
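A quick way to see this for yourself (standard MySQL, nothing Magento-specific) is to inspect the process list:

SHOW FULL PROCESSLIST;

-- Or count only the connections stuck building sort indexes:
SELECT COUNT(*) FROM information_schema.PROCESSLIST
WHERE STATE = 'Creating sort index';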

This is exactly the kind of problem a load balancer in front of a MySQL master-slave architecture solves. Let me explain how it works:
1) There is a master database.
2) There are multiple slave (replica) databases connected to the master.
3) Whenever there is a write on the master, the slaves are updated as well, so they hold the same up-to-date data as the master.
4) Whenever you want to perform any optimization, or the master goes down, you can switch over to one of the slaves and avoid the downtime. When the master is up and running again, you can switch back to it at any time.
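As a rough sketch of how a slave is pointed at a master (the host, user, password, and binlog coordinates below are placeholders; on Amazon RDS you would normally create a read replica instead of doing this by hand):

-- On the slave: tell it where the master is and start replicating
CHANGE MASTER TO
  MASTER_HOST = 'master.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'secret',
  MASTER_LOG_FILE = 'mysql-bin.000001',
  MASTER_LOG_POS = 4;
START SLAVE;

-- Check that replication is running and how far behind it is
SHOW SLAVE STATUS\G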
Check this link:
https://severalnines.com/blog/how-cluster-magento-nginx-and-mysql-multiple-servers-high-availability
Hope this helps you.

Related

Connect and Sync 2 MySQL Databases

Is it possible to sync 2 MySQL databases so that if you write into one, the change is also added to the other, and the other way around? I've seen some programs you can run on your local PC, but I need a script (for example PHP) that I can upload to my VPS, which checks every minute for new data and syncs it.
Is that possible?
Have you tried MySQL replication strategies? See MySQL Replication for High Availability,
especially Master with Backup Master (Multiple Replication)
or Master with Active Master (Circular Replication, also called a Ring topology).
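As a minimal sketch of the configuration side of an active master-master pair (the server IDs and offsets are illustrative; each server also needs a CHANGE MASTER TO pointing at the other one):

# my.cnf on master A
server-id = 1
log-bin = mysql-bin
auto_increment_increment = 2   # two masters writing
auto_increment_offset = 1      # A generates odd ids

# my.cnf on master B
server-id = 2
log-bin = mysql-bin
auto_increment_increment = 2
auto_increment_offset = 2      # B generates even ids

The auto_increment settings keep the two masters from generating colliding primary keys while both accept writes.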

Can I have a HA MySQL/MariaDB Slave?

Weird question, I know. I have a master MySQL database which I'm not allowed to touch and need to build a slave for. I would like the slave to be as close to a real-time replica of the master as possible, and I would like the slave to be HA.
Does MySQL (or MariaDB) replication work when run on a cluster, say, can I make a Galera cluster and make it replicate from a master out of the box or must I use binlog-esque tools?
For the curious; this new slave cluster will be on a different network and will have many large, important queries made against it regularly - the aim of the game is to reduce load on the master and reduce network traffic.
If you are not planning on doing modifications to the downstream slave server, then you can just set up multiple slave servers. This way if one of the slaves goes down you can use another one. This will place a small load on the master for each added slave but whether this added load is even measurable depends on your setup.
Galera could work but I believe you would have to reconfigure one of the nodes to act as the slave if the current one goes down. This would place a minimal load on the master but it would require a manual intervention whenever the current "slave" node goes down.
Parallel replication should also help speed up replication for MariaDB servers.
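For the parallel replication mentioned above, a minimal sketch on a MariaDB 10.x slave (the thread count is just an example, and the slave must be stopped while changing it):

STOP SLAVE;
SET GLOBAL slave_parallel_threads = 4;  -- number of parallel applier threads
START SLAVE;
SHOW SLAVE STATUS\G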

MySQL master-master replication, adding a new master without downtime

I have set up master-master MySQL replication on 2 different nodes. If I am going to add one more node, i.e. a 3rd master, do I need to have exactly the same copy of the database on the new server as on node-1 and node-2?
These are high-traffic servers that update the database every second, so we would like to do this without downtime. Is there any way to do this without downtime?
Since MySQL 5.1.18, it is possible to use MySQL Cluster in multi-master replication, including circular replication between a number of MySQL Clusters.
A detailed explanation is here:
http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster-replication-multi-master.html
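Outside of MySQL Cluster, extending a plain replication ring to a third node follows the same idea: every node logs the updates it receives from its master so they keep travelling around the ring. A rough sketch (host names, credentials, and ids are placeholders):

# my.cnf on every node in the ring
server-id = 3              # unique per node
log-bin = mysql-bin
log-slave-updates          # required so replicated changes are passed on

-- Re-point the ring, e.g. node1 -> node2 -> node3 -> node1; on node3:
CHANGE MASTER TO
  MASTER_HOST = 'node2.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'secret';
START SLAVE;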

Strategies for copying data from a live MySQL database to a staging server

I have a live MySQL DB which is too large for me to regularly copy all of its data to the staging server.
Is there a way of getting just the changes from the last week with the intention of running that script weekly? Would every table have to have an updated timestamp field added to it to achieve this?
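If you go the timestamp route the question mentions, a minimal sketch (the table and column names here are hypothetical) looks like this; every table you want to track would need a similar column, and deleted rows would still be missed:

-- Add a last-modified column that MySQL maintains automatically
ALTER TABLE orders
  ADD COLUMN updated_at TIMESTAMP NOT NULL
  DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP;

-- Weekly job: pull only the rows touched in the last 7 days
SELECT * FROM orders
WHERE updated_at >= NOW() - INTERVAL 7 DAY;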
I don't know how large "too large to regularly copy" is, but I use SQLyog to synchronize databases. It intelligently does insert/update/deletes for only the records that have changed. I recommend it highly.
One way of going about this would be to make the staging server a replication slave of the production server. However, if you don't want the staging machine to be constantly up to date with the production master, you can keep the slave mode turned off.
Then, weekly, run a script that starts the slave for a few hours, allows it to bring itself up to date with the master, and stops the slave again.
START SLAVE;
-- Wait a while
-- Trial and error to determine how long it takes to come into sync
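-- (checking Seconds_Behind_Master in SHOW SLAVE STATUS until it reaches 0 is a more reliable signal than guessing)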
STOP SLAVE;
This will save it in a state consistent with the master for the current week. On the other hand, if you don't really need it as a weekly snapshot you can just leave the slave running all the time so it just stays in sync.

MySQL 4.x LOAD DATA FROM MASTER; slave

I have a scenario where there are multiple mysql 4.x servers. These databases were supposed to be replicating to another server. After checking things out on a slave it appears that this slave has not replicated any databases in some time.
Some of these databases are > 4G in size and one is 43G (which resides on another server). Has anyone out there replicated databases without creating a snapshot to copy over to a slave? I cannot shut down the master server because of the downtime. It will probably take over an hour and 40 minutes to create a snapshot, so this is out of the question.
I was going to perform a LOAD DATA FROM MASTER on the slave to pull everything from scratch. Any idea how long this will take on databases ranging from 1-4G? The 43G database will be for another day. All of the tables on the master are MyISAM, so I don't think I will have a problem with the LOAD DATA FROM MASTER method.
What are the best methods on the slave to clean things up or reset things so I can just start from a clean slate?
Any suggestions?
Thanks in advance
You need a snapshot to start replication. Snapshots require the database to be locked, or at least made read-only, so that you have a consistent point to start from.
Downtime is a necessary thing; customers usually understand it as long as it doesn't happen too often.
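A minimal sketch of building that consistent starting point by hand, assuming MyISAM tables and using mysqldump rather than LOAD DATA FROM MASTER (host and binlog values are placeholders):

-- On the slave: throw away the old, broken replication state
STOP SLAVE;
RESET SLAVE;

-- On the master: hold writes just long enough to note the binlog position
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;   -- record File and Position
-- ... take the dump or copy the data files while the lock is held ...
UNLOCK TABLES;

-- Back on the slave, after loading the dump:
CHANGE MASTER TO
  MASTER_HOST = 'master.example.com',
  MASTER_LOG_FILE = 'mysql-bin.000123',  -- File noted above
  MASTER_LOG_POS = 456;                  -- Position noted above
START SLAVE;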