Aegir stuck clone/migration/install tasks - mysql

I'm working with Aegir to migrate/clone sites from platform1/db1 to platform2/db2. Platform1, platform2, db1 and db2 all verify successfully, but when I try to migrate/clone a site, the task spins forever. When I go onto the server I can see that the database has been created on db2 and the site has been created on platform2, but the task never finishes and stays in progress. I cancelled the task and re-verified the platform, which didn't help.
Is there anything I'm missing, or am I doing something wrong?

You might wish to compare site creation times on both servers. It could take as long to migrate a site over as it does to create one.
If that works (creating a site on db2), I would try disabling unstable/custom modules until it works. It sounds as though something is interfering.
(By the way, in my opinion you should post at least the last lines of the task log, as well as how long you waited, how big the site's backups are, and any relevant info on the db engines etc. I notice you also didn't mention whether plain installs on db2 work.)

Related

WordPress: too many queries to the database

I have just moved a WP installation from one hosting provider to another. Everything went fine except for a problem I have with the new installation. Please note that I have moved from a regular VPS to a kinda powerful and fast dedicated machine.
The thing is that now the website is slower than it was on the previous server. It takes 6-7 seconds to load a page and, according to Chrome's Dev Tools network panel, there is a 3-4 second wait to get the first response byte (TTFB), which is insane.
I have tried the following with no success:
Review database for anomalies
Disable all plugins (and delete them)
Disable template (and delete it)
With these last two actions I lowered the loading time to 5-6 seconds, which is a lot for a small site (a few hundred posts and 50-60 pages) with no comments enabled. I still have the 3-4 second TTFB.
After that, I installed the Query Monitor plugin and found out that, at every page load, WP performs hundreds of database queries (ranging from 400 to 800) and, in some cases, even 1500. OMG!
Honestly, I am quite lost here. I mean, on one hand I have this strange database behavior I cannot really understand. And on the other hand, I cannot help wondering how it was faster on the previous & slower server.
By the way, I have moved from MySQL to MariaDB, which should be even faster too. Indexes are kept when dumping & importing the file. I am lost. :(
Any help is greatly appreciated. Apologies for my English (not my language) and please let me know if there is some important information missing. I will be glad to provide whatever information helps us troubleshoot this.
Thanks in advance!
I think you should optimize your MySQL config (my.cnf on Linux or my.ini on Windows). To spot problems in MySQL you can try running the MySQLTuner script: https://github.com/major/MySQLTuner-perl.
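Before touching my.cnf, a few read-only statements can show whether the configuration is a likely culprit. These are standard MySQL/MariaDB commands; the variables below are only the usual starting points, not a definitive checklist:

```sql
-- How much memory the InnoDB buffer pool gets (the main knob for InnoDB workloads).
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- If the tables are MyISAM, the key buffer matters instead.
SHOW VARIABLES LIKE 'key_buffer_size';

-- A steadily climbing counter here means queries are spilling sorts/joins to disk.
SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';

-- Queries slower than long_query_time end up in the slow query log, which is
-- the quickest way to see which of those 400-1500 queries actually hurt.
SHOW VARIABLES LIKE 'slow_query_log%';
SHOW VARIABLES LIKE 'long_query_time';
```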

How to Deploy Database Changes to a Live Server?

I have a script that is part of my deployment process to push DB changes to the production server. If the script corrupts my data for some reason (a bad update), it is tough to recover.
One way to solve this is to shut down the application to users while updating, so if a problem occurs, just go back to the backup I made before deploying.
But I have heard of others who deploy and keep their site live... how would you go about doing this, and if you failed, how could you recover the data that came in since you took your backup before deploying?
This is a tricky problem in general, like with many things in database administration. There are basically three ways to approach this:
Avoid failure at all costs.
Lock everything down (and make the upgrade really fast).
It's OK to lose data.
If you have a complex system, isolate your components according to these or similar categories.
Have a staging system to test upgrades. The staging system is more or less a copy of the production system; it's separate from the test system. Another thing is to have an audit or logging system that you can refer to if you need to replay data.
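As a rough illustration of the audit idea (MySQL syntax; every table, column and trigger name here is invented), an append-only log like this gives you something to replay from if an upgrade goes wrong:

```sql
-- Hypothetical append-only audit table; adjust names and types to your schema.
CREATE TABLE audit_log (
    id          BIGINT AUTO_INCREMENT PRIMARY KEY,
    changed_at  TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    table_name  VARCHAR(64) NOT NULL,
    row_pk      VARCHAR(64) NOT NULL,
    op          ENUM('INSERT','UPDATE','DELETE') NOT NULL,
    old_values  TEXT,
    new_values  TEXT
);

-- Example trigger keeping the log current for one (made-up) table.
CREATE TRIGGER orders_audit AFTER UPDATE ON orders
FOR EACH ROW
    INSERT INTO audit_log (table_name, row_pk, op, old_values, new_values)
    VALUES ('orders', OLD.id, 'UPDATE',
            CONCAT('status=', OLD.status), CONCAT('status=', NEW.status));
```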
The real problem is if you notice much later that your upgrade was faulty. Then you're quite screwed.
How big is your database? Can you afford to lose data that was updated while the customers were using it, before you had to go back to the backup? Every plan for deployment involves some compromises somewhere, and you've got to decide which compromises are the least painful for what you want to do.
For simple websites running just pgsql, you can disconnect clients and run the entire update in one big transaction. If any part fails, the whole thing rolls back and it's as if you never did anything. Sadly this doesn't work exactly the same for other DBs, but with Flashback or whatever Oracle calls it you can get something similar.
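A minimal sketch of that single-transaction pattern in PostgreSQL, with an invented table and columns; because PostgreSQL DDL is transactional, a failure anywhere aborts the whole upgrade and leaves the schema untouched:

```sql
BEGIN;

-- Schema change, backfill and index all run as one unit of work.
ALTER TABLE accounts ADD COLUMN plan_tier TEXT NOT NULL DEFAULT 'free';
UPDATE accounts SET plan_tier = 'legacy' WHERE created_at < '2010-01-01';
CREATE INDEX accounts_plan_tier_idx ON accounts (plan_tier);

-- If any statement above had failed, the transaction would roll back instead.
COMMIT;
```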
For bigger, complex websites running on top of a replicated db server set, things get much more complex much more quickly. Where I've worked we've used Slony, and it doesn't play nicely with others when you're deploying DDL changes, so you pretty much HAVE to take all the customers offline while deploying DDL. However, the downtime is measured in minutes for us, even with databases approaching 1 TB in size.

How to benchmark and optimize a really database-intensive Rails action?

There is an action in the admin section of a client's site, say Admin::Analytics (that I did not build but have to maintain) that compiles site usage analytics by performing a couple dozen, rather intensive database queries. This functionality has always been a bottleneck to application performance whenever the analytics report is being compiled. But, the bottleneck has become so bad lately that, when accessed, the site comes to a screeching halt and hangs indefinitely. Until yesterday I never had a reason to run the "top" command on the server, but doing so I realized that Admin::Analytics#index causes mysqld to spin at upwards of 350+% CPU power on the quad-core, production VPS.
I have downloaded fresh copies of the production data and the production log. However, when I access Admin::Analytics#index locally on my development box, using the production data, it loads in about 10-12 seconds (and utilizes ~150+% of my dual-core CPU), which sadly is normal. I suppose there could be a discrepancy in MySQL settings that has suddenly come into play. Also, a mysqldump of the database is now 531 MB, when it was only 336 MB 28 days ago. Anyway, I do not have root access on the VPS, so tweaking mysqld performance would be cumbersome, and I would really like to get to the exact cause of this problem. However, the production logs don't contain info on the queries; they merely report how long these requests took, averaging out to a few minutes apiece (although they seem to have stalled mysqld for much longer than that; in one instance I had to ask our host to reboot mysqld just to get our site back up).
I suppose I can try upping the log level in production to capture the database queries being performed by Admin::Analytics#index, but at the same time I'm afraid to replicate this behavior in production because I don't feel like calling our host up to restart mysqld again! This action has a single database request in its controller and a couple dozen prepared statements embedded in its view!
How would you proceed to benchmark/diagnose and optimize/fix this action?!
(Aside: Obviously I would like to completely replace this functionality with Google Analytics or a similar solution, but I need to fix this problem before proceeding.)
I'd recommend taking a look at this article:
http://axonflux.com/building-and-scaling-a-startup
Particularly, query_reviewer and newrelic have been a life-saver for me.
I appreciate all the help with this, but the fix turned out to be adding a couple of indexes on the analytics table to cater to the queries in this action. A simple Rails migration to add the indexes, and the action now loads in under a second both on my dev box and in prod!
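For reference, the raw SQL behind such a migration looks roughly like the following. The table and column names here are guesses; index whatever columns actually appear in the slow queries' WHERE and ORDER BY clauses:

```sql
-- Hypothetical example: index the columns the analytics queries filter on.
ALTER TABLE analytics ADD INDEX index_analytics_on_recorded_at (recorded_at);
ALTER TABLE analytics ADD INDEX index_analytics_on_page_and_recorded_at (page_id, recorded_at);

-- EXPLAIN confirms whether the new indexes are actually being used.
EXPLAIN SELECT COUNT(*) FROM analytics WHERE recorded_at > '2011-01-01';
```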

How to keep databases synchronized between hosting account and a local testing server?

I have several databases hosted on a shared server, and a local testing server which I use for development.
I would like to keep both sets of databases somewhat synchronized (more or less daily).
So far, my ideas to solve the problem seem very clumsy. Anyway, for reference, here is what I have considered so far:
Make a database dump from the online databases, trash the local databases, and recreate them from the dump. It's a lot of work and requires a lot of download time (which guarantees I won't do it as often as it should be done).
Write a small web service to access the new data, and write a small application locally to communicate with said web service, download the newest data, and update the local databases.
Both solutions sound like a lot of work for a problem that is probably already solved a zillion times over. Or maybe it's even an existing feature which I completely overlooked.
Is there an easy way to keep databases more or less in synch? Ideally something that I can set up once, schedule and forget about.
I am using MySQL 5 (MyISAM) databases on both servers.
=============
Edit: I had a look at replication, but it seems I can't go that route because the shared hosting does not give me enough control over the server itself (I have most permissions on my databases, but not on the MySQL server itself).
I only need to keep the data synchronized, nothing else. Is there any other solution that doesn't require full control over the server?
Edit 2:
Sorry, I forgot to mention I am running on a LAMP stack on the shared server, so Windows-only solutions won't work.
I am surprised to see that there is no obvious off-the-shelf solution for this problem.
Have you considered replication? It's not to be trifled with but may be what you want. See here for more details... http://dev.mysql.com/doc/refman/5.0/en/replication-configuration.html
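For completeness, the replica side of that setup boils down to a couple of statements (host, credentials and log coordinates below are placeholders), though as the edits above note it also needs binary logging and a replication user on the master, which shared hosting rarely allows:

```sql
-- On the replica (MySQL 5.x syntax); all values are placeholders.
CHANGE MASTER TO
    MASTER_HOST     = 'shared-host.example.com',
    MASTER_USER     = 'repl_user',
    MASTER_PASSWORD = 'secret',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS  = 4;

START SLAVE;

-- Slave_IO_Running and Slave_SQL_Running should both report 'Yes'.
SHOW SLAVE STATUS\G
```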
Take a look at the Microsoft Sync Framework - you will need to code in .NET, but it can resolve your issues.
http://msdn.microsoft.com/en-in/sync/default(en-us).aspx
Here is a sample for SQL Server, but it can be adapted to MySQL as well using the ADO.NET provider for MySQL.
http://code.msdn.microsoft.com/sync/Release/ProjectReleases.aspx?ReleaseId=4835
You will need the additional tables for change tracking and anchors (keeping track of the last synchronization) in your MySQL database for this to work, but you won't need full control as long as you can access the db.
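A rough MySQL sketch of what such tracking and anchor tables might look like; every name here is made up, and the real Sync Framework provisioning generates its own structures:

```sql
-- Per-table change tracking: one row per changed business row.
CREATE TABLE customers_tracking (
    customer_id  INT        NOT NULL PRIMARY KEY,
    last_change  TIMESTAMP  NOT NULL DEFAULT CURRENT_TIMESTAMP
                            ON UPDATE CURRENT_TIMESTAMP,
    is_tombstone TINYINT(1) NOT NULL DEFAULT 0   -- set to 1 when the row is deleted
);

-- One anchor row per replica: remembers how far the last sync got.
CREATE TABLE sync_anchor (
    replica_id     VARCHAR(64) NOT NULL PRIMARY KEY,
    last_synced_at TIMESTAMP   NOT NULL
);

-- The next sync only needs rows changed since the stored anchor.
SELECT customer_id, is_tombstone
FROM   customers_tracking
WHERE  last_change > (SELECT last_synced_at FROM sync_anchor
                      WHERE replica_id = 'local-dev');
```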
Replication would have been simpler :), but this might just work in your case.

Development and Production Database?

I'm working with PHP & mySQL. I've finally got my head around source control and am quite happy with the whole development (testing) v production v repository thing for the PHP part.
My new quandary is what to do with the database. Do I create one for the test environment and one for the production environment? I currently have just the one which both environments use, leaving my test data sitting there. I kind of feel that I should have two, but I'm nervous in terms of making sure that my production database looks and feels exactly the same as my test one.
Any thoughts on which way to go? And, if you think the latter, what the best way is to keep the two databases the same (apart from the data, of course...)?
Each environment should have a separate database. Script all of the database objects (tables, views, procedures, etc) and store the scripts in source control. The scripts are applied first to the development database, then promoted to test (QA, UAT, etc), then production. By applying the same scripts to each database, they should all be the same in the end.
If you have data that needs to be loaded (code tables, lookup values, etc), script that data load as part of the database creation process.
By scripting everything and keeping it in source control, a database structure can be recreated at any time for any given build level.
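As a small hypothetical illustration (object names and data invented), each script is written so it can be re-run against any environment without error:

```sql
-- 010_order_statuses.sql -- object script kept in source control,
-- safe to re-run against dev, test, or production.
CREATE TABLE IF NOT EXISTS order_statuses (
    code  VARCHAR(20) NOT NULL PRIMARY KEY,
    label VARCHAR(50) NOT NULL
);

-- Scripted lookup data, loaded as part of database creation.
INSERT IGNORE INTO order_statuses (code, label) VALUES
    ('new',     'New'),
    ('shipped', 'Shipped'),
    ('closed',  'Closed');
```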
You should definitely have two. As far as keeping them in sync, you should always create DDL scripts for your database objects. Treat these scripts as you do your PHP code - keep them in version control. Any time you have to modify the test database, make a script to do so and check it in. Then you can propagate those changes to the production system once you are ready.
As a minimum, one database for each development workstation and one for production. Besides that, you should have one for the test environment, unless you are the only developer and have a setup similar to the production environment.
See also
How do you version your database schema?
It's a common question and has been asked and answered many times.
Thomas Owens: Replication is not usable for versioning schemas - it is for duplicating data. You never want to replicate from dev to production or vice versa.
Once I've deployed my database, any changes made to my development database(s) are done in an SQL script (not with a tool), and the script is saved and numbered.
deploy.001.description.sql
deploy.002.description.sql
deploy.003.description.sql
... etc..
Then I run each of those scripts in order when I deploy.
Then I archive them into a directory called something like
\deploy.YYMMDD\
And start all over.
If I make a mistake, I never go back to the previous deploy script, I'll create a new script and put my fix in there.
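A script in that scheme is just plain SQL; for example (file name, table and columns invented for illustration):

```sql
-- deploy.004.add_invoice_email.sql -- a made-up example of one numbered script.
-- Forward-only: if this turns out to be wrong, the fix goes into deploy.005.*,
-- never back into this file.
ALTER TABLE customers ADD COLUMN invoice_email VARCHAR(255) NULL;
UPDATE customers SET invoice_email = email WHERE invoice_email IS NULL;
```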
Good luck
One thing I've been working with is creating a VM with the database installed. You can save the VM as a playfile, including its data. What you can do then is take a snapshot of the playfile and start up as many different VMs as you want. They can all be identical, or you can modify one or another. Here's the good thing: assuming you have a dev version of the database that you want to push out, you can simply start that VM on your production server instead of the current one.
It's another problem altogether if you have production data that is not on your dev machines. In that case, one thing you can do is set up a tracking VM: run replication from your main DB to the tracking VM. When you get to the point where you need to run some ALTERs on the production database, first stop the slave and save a snapshot.
Start an instance of that snapshot, take it out of slave mode entirely, apply your changes, and point your QA box at that database. If it works as intended, you can run the patches against your main production database. If not, bring up the snapshot, and get it replicating off the master again until you are ready to repeat the update test.
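In MySQL terms, that stop/snapshot/test dance is roughly the following; the snapshot itself is taken with whatever your VM or filesystem provides, and the ALTER at the end is a placeholder:

```sql
-- On the tracking slave: pause replication and quiesce tables before snapshotting.
STOP SLAVE;
FLUSH TABLES WITH READ LOCK;
-- ... take the VM or filesystem snapshot here ...
UNLOCK TABLES;
START SLAVE;

-- On the instance started from the snapshot: detach it from the master entirely
-- so replicated events cannot overwrite the test.
STOP SLAVE;
RESET SLAVE;

-- Apply the candidate change (table/column are placeholders) and point QA here.
ALTER TABLE orders ADD COLUMN shipped_at DATETIME NULL;
```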
I was having the same dilemma. I got stuck thinking that there was a clear dichotomy between the production db and the development db, i.e. they were two sides of a coin and never the twain shall meet.
A lot of problems disappeared when I stopped making my application 'think' in terms of "either production db OR development db". Instead, my application uses a local db.
When it's running on my virtual (dev) machine, that local db happens to be a dev db. My application doesn't really 'know' that, though.
So, for the main part, the problem disappears.
But sometimes I want to run tests using live data, or move data from the code into the live production db and see the results quickly.
This is when I added the concept of a live-read-only db connection. The application treats this differently. It's a bit like how your application might treat a web service like Google Apps: it's 'some external resource that your app uses'.
By default my app uses the local db, and under some very special conditions (in the test suite) it also uses the live-read-only db. (Because it's a read-only connection, I don't fear making a mess of the live data during tests.)
So rather than asking the question "dev db OR production db?", my app asks "local db OR live-read-only db".
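One way to make the live-read-only idea concrete at the database level is a MySQL account that is only ever granted SELECT; the user and schema names below are placeholders:

```sql
-- On the production server: an account that can read but never write.
CREATE USER 'live_readonly'@'%' IDENTIFIED BY 'choose-a-strong-password';
GRANT SELECT ON production_db.* TO 'live_readonly'@'%';
```

The test suite then opens its second connection with those credentials, so even a buggy test physically cannot modify the live data.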
Obviously my situation could be different to yours, but I found this 'breakthrough in understanding' to be most helpful for me.