I'm trying to figure out why a MySQL database is abnormally slow on a live server. Deleting a row from a table with fewer than 100 rows can take anywhere from 1 to 20 seconds. I've checked the running processes and can't see anything that would eat all the CPU or memory.
Also the website is not launched yet so there's just me on it.
In these conditions, what could be the reason for the database to be so slow? Is there any way to diagnose this kind of problem?
Are you sure that it's the DB that is slow?
Connect to your server using the command line, launch mysql, and run a few sample queries from there. If it's plenty fast there (which it should be, unless you're swapping like mad or have a gazillion funky triggers), you can safely eliminate SQL as the culprit. If not, there is likely a problem with your schema, your database configuration (does it have enough memory?), or your server (is the RAM broken?).
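As a rough sketch of that check (the table name and id are placeholders; SET profiling works throughout the 5.x line, though it is deprecated in later versions):

SET profiling = 1;                       -- per-statement timing for this session
DELETE FROM some_table WHERE id = 42;    -- stand-in for one of your slow statements
SHOW PROFILES;                           -- wall-clock duration of recent statements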
Another source of slowness might be latency. Examples:
Time needed to do a DNS lookup (e.g. on occasion it's faster to connect to 127.0.0.1 than to localhost; a quick comparison is sketched after this list)
Lag due to the DB being located on a separate server (especially if the DB is at the other end of the world)
Time needed to retrieve the results back from the DB, if blobs are involved.
Dreadfully slow NFS:
http://lists.freebsd.org/pipermail/freebsd-fs/2013-April/017125.html
etc.
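For the DNS point in the list above, a quick shell comparison makes the overhead visible (credentials are placeholders); if the localhost form is consistently slower, the skip_name_resolve option in my.cnf may be worth a look:

time mysql -h localhost -u someuser -p -e "SELECT 1;"
time mysql -h 127.0.0.1 -u someuser -p -e "SELECT 1;"

Note that on Unix, connecting to localhost typically uses the socket while 127.0.0.1 uses TCP, so this also compares transports, not just name resolution.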
I have two identically configured MySQL 5.6.10 servers and needed to move the data files fast from one to the other. Is this an OK procedure?
Here is what I did:
1) Shut down both servers
2) Moved all the files from one box to the other (DATA is on a separate drive on both machines)
3) Turned the second server on
4) Connected it back to the app server
It took about 5 minutes to move all files (~50GB) and all seems to work. I just wonder if I missed anything?
Thanks much for your feedback.
If both server versions are the same then I think it's perfectly fine, not just OK; I have done the same many times without any data loss. But this method comes with costs:
You have to shut down the MySQL server (not ideal if it's a production server)
You have to make sure the permissions on the data (mysql) directory match the previous machine
You will have to monitor the MySQL error log while starting the second server (a minimal sketch of these last two steps follows)
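On the target machine, assuming a stock Linux layout (paths and service names vary by distribution):

# make sure mysqld can read the copied files
chown -R mysql:mysql /var/lib/mysql
# start the server and watch the error log for InnoDB recovery messages
service mysql start
tail -f /var/log/mysql/error.log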
You can use mysqldump, but if you don't want to, MySQL Workbench's migration wizard takes care of everything.
A much safer and recommended way would be database backup and recovery: do a full backup from server1 and restore it to server2. From then on, you can take differential backups.
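For the backup route, a minimal mysqldump round trip looks roughly like this (user names are placeholders; --single-transaction keeps InnoDB tables consistent without locking them):

mysqldump -u root -p --all-databases --single-transaction > full_backup.sql   # on server1
mysql -u root -p < full_backup.sql                                            # on server2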
I will be writing a Delphi program that reads from and writes to MySQL database tables on a regular basis, roughly every 5 seconds. Is this going to be CPU intensive, or could it get to the point where the computer freezes completely? I know that nonstop reading and writing to a hard drive can freeze everything on a computer; I'm not sure about a MySQL database.
Databases are designed to handle many frequent transactions, but it really depends on the queries you run. A simple SELECT on a couple of rows is unlikely to cause an issue, while large-scale updates targeting many tables, or queries with multiple joins, can slow performance.
This all depends on the computer and the complexity of the query.
As David has said, it really does depend on the hardware and queries you are processing.
I would suggest measuring the processing time of each query to determine whether the writes will stack up against the queries of the next 5-second interval.
You can find information on how to measure your MySQL processes here.
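As a cheap complement to timing, EXPLAIN shows whether a statement will scan more rows than expected; the table and columns below are made up for illustration:

EXPLAIN SELECT value, taken_at
FROM sensor_readings
WHERE sensor_id = 7
ORDER BY taken_at DESC
LIMIT 1;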
MySQL shows high CPU usage (80%-150%). This is a shop with 10,000 products and at most 15 users online.
my.cnf
http://pastebin.com/xJNuWTWT
mysqltuner log:
http://pastebin.com/HxWwucE2
console:
Firstly, run the following query while the CPU load is high:
SHOW PROCESSLIST;
I can suggest turning off persistent connections.
Then check the MySQL users, just to make sure it's not possible for anyone to connect from a remote server.
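A query along these lines lists the accounts that are allowed in from anywhere other than the local machine (run it as a privileged user):

SELECT user, host
FROM mysql.user
WHERE host NOT IN ('localhost', '127.0.0.1');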
At the same time, turn on the MySQL slow query log to keep an eye on queries that take a long time, and use it to make sure you don't have any queries locking up key tables for too long.
http://dev.mysql.com/doc/refman/5.0/en/slow-query-log.html
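Per that 5.0-era manual page, the slow query log is enabled in my.cnf roughly like this (the path and threshold are illustrative; newer versions use slow_query_log instead):

[mysqld]
log-slow-queries = /var/log/mysql/slow.log
long_query_time  = 2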
Kindly take a look here too:
http://dev.mysql.com/doc/refman/5.0/en/memory-use.html
MySQL is only one part of the performance problem. MySQL on its own is not suited to a high-traffic or data-heavy website; you should look for a caching solution to deal with your problem. 10,000 products is large enough to slow down your website, especially if there is no cache, or if the server is standard virtual hosting rather than a dedicated machine, etc.
To summarize, you should rework the architecture to take into account the large database, user performance, and dynamic pages.
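If reworking the architecture isn't immediately feasible, MySQL's built-in query cache (present in the 5.x line, removed in 8.0) is one possible stop-gap; the sizes here are illustrative, not tuned:

[mysqld]
query_cache_type = 1     # cache eligible SELECT results
query_cache_size = 64M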
Is there any reason to run JETCOMP.EXE or any other compaction method on an MDB file if it is relatively small (i.e. 0-200MB)?
I can understand that if it is approaching a size limit (e.g. 1GB for some older MDB formats) then it would make sense, but is there any need otherwise?
Can a failure to run JETCOMP (or similar) bring about data corruption problems?
It is good practice to run a regular compact on an Access database of any size, as it has several benefits. First is file size: not an issue at around 200MB, but it can't hurt. It also re-indexes all your tables, which is especially important if you have amended any table structures or added new tables. And it re-evaluates queries to ensure they have the fastest execution plan.
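For what it's worth, and if memory serves on the switch names, Jetcomp can be run from a batch file so the compact happens on a schedule (paths are placeholders):

rem compact into a new file; the source is left untouched
JETCOMP.EXE -src:C:\data\mydb.mdb -dest:C:\data\mydb_compacted.mdb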
In terms of a failure during the compact causing corruption: the first thing the compact does is create a backup copy of the original file, so if it fails you still have the original to revert to, and you shouldn't run into any corruption issues.
Access databases are very prone to data corruption, especially when the database is shared over a network. It won't hurt to run Compact and Repair every now and then, just to make sure the DB is in optimum condition.
Do you have any experience with this? I currently have 1,900 MySQL databases in a single domain in my Plesk control panel, and I wonder whether my MySQL server will get overloaded or go out of service due to such a high number of databases on the system.
Do you have any suggestions? Each database is for a user in my service by the way.
MySQL itself doesn't place any restrictions on the number of databases you can have, and I doubt Plesk does either; I'm sure it just displays all the databases present on the MySQL server.
However, your host may have a limit (which you'd have to ask them about), or if you start getting a huge number of databases, you may actually run into a filesystem limit. As the MySQL documentation says, each database is stored as a directory, so you could hypothetically hit the filesystem's upper limit for how many subdirectories are allowed.
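If you want to see how close you are, each database is one directory under the data directory, so something like this gives the count (the path is the common Linux default):

ls -1 /var/lib/mysql | wc -l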
I've got well over 5,000 databases running on a Linux-based Plesk cluster (one DB server, one web server) and it's running fine, though I have had to increase open-files limits due to the huge number of files. I can't run the MySQL tuning primer any more; well, I can, but it takes about 4 hours.
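On the open-files point, the ceiling MySQL was started with and its current usage can be checked from within the server itself:

SHOW VARIABLES LIKE 'open_files_limit';
SHOW GLOBAL STATUS LIKE 'Open_files';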