MySQL CPU usage is high (80%-150%). This is a shop with 10,000 products and at most 15 users online.
my.cnf: http://pastebin.com/xJNuWTWT
MySQLTuner log: http://pastebin.com/HxWwucE2
Firstly, one thing you can check is to run the following query while the CPU load is high:
SHOW PROCESSLIST;
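If the Info column comes back truncated, the FULL variant shows each statement in full; this is standard MySQL syntax:

    SHOW FULL PROCESSLIST;  -- without FULL, only the first 100 characters of each query are shown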
I can suggest turning off persistent connections.
Then check the MySQL users, just to make sure it's not possible for anyone to connect from a remote server.
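For example, the standard grant table shows which hosts each account may connect from:

    SELECT user, host FROM mysql.user;  -- host values other than localhost/127.0.0.1 allow remote connections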
At the same time, you want to turn on the MySQL slow query log to keep an eye on any queries that are taking a long time, and use it to make sure you don't have any queries locking up key tables for too long.
http://dev.mysql.com/doc/refman/5.0/en/slow-query-log.html
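A minimal sketch of enabling it at runtime, assuming MySQL 5.1 or later (on 5.0 it can only be enabled in my.cnf via log-slow-queries); the log path is illustrative:

    SET GLOBAL slow_query_log = 'ON';
    SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';  -- hypothetical path
    SET GLOBAL long_query_time = 2;                              -- log statements slower than 2 seconds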
See also:
http://dev.mysql.com/doc/refman/5.0/en/memory-use.html
MySQL is only one part of the performance problem. MySQL alone is not suited to a high-traffic or data-heavy website; you should find a caching solution to deal with your problem. 10,000 products is large enough to slow down your website, especially if there is no cache, if the server is standard virtual hosting rather than a dedicated server, and so on.
To summarize, you should rebuild the architecture to take into consideration the large database, user performance, and dynamic pages.
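If you stay on plain MySQL, one low-effort option at the database layer is the query cache that MySQL 5.x ships with; the size below is purely illustrative, not a recommendation:

    SHOW VARIABLES LIKE 'query_cache%';        -- current query cache settings
    SET GLOBAL query_cache_size = 67108864;    -- e.g. 64 MB; 0 disables the cache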
I'm currently setting up Ghost on my server. I will host my own blog and probably a few more for my friends.
Ghost uses SQLite by default. SQLite is good for small applications and development environments.
I plan to run my blog for at least 1-2 years, or longer if Ghost works out well. A blog contains a lot of images and text, so the SQLite database will grow over time with more and more images and so on.
Is it OK to use SQLite for this purpose for several years? MySQL would be much more powerful but also more complex to set up.
What would be the best choice for a Ghost Blog?
Please note that database performance depends not so much on the amount of data (until you run out of local disk space) but on the amount of concurrency.
The SQLite documentation says:
SQLite usually will work great as the database engine for low to medium traffic websites (which is to say, 99.9% of all websites). The amount of web traffic that SQLite can handle depends, of course, on how heavily the website uses its database. Generally speaking, any site that gets fewer than 100K hits/day should work fine with SQLite. The 100K hits/day figure is a conservative estimate, not a hard upper bound. SQLite has been demonstrated to work with 10 times that amount of traffic.
[…]
But if your website is so busy that you are thinking of splitting the database component off onto a separate machine, then you should definitely consider using an enterprise-class client/server database engine instead of SQLite.
I am developing an enterprise application in Java EE and I think it will have a huge amount of stored data. It is similar to a university management application in which all college students are registered and have their own profiles.
I am using a MySQL database. I tried to explore on the internet and found some tips on this link.
What are the best practices for developing huge databases so that performance does not degrade?
Thanks in advance.
First of all, your database is not huge but small to medium sized. A huge database is one where you need to deal with terabytes of data and millions of operations per second. Considering your case, MySQL (MyISAM) is enough, and rather than optimization you should focus on correct database design (optimization is the next step).
Let me share some tips with you (a schema sketch follows the list):
- scale your hardware (not so important in your case)
- identify relations (normalize) and use correct datatypes (e.g. TINYINT instead of BIGINT where the values fit)
- try to avoid NULL if possible
- use VARCHAR instead of TEXT/BLOB if possible
- index your tables (but remember that indexes slow down UPDATE/DELETE/INSERT operations)
- design your queries so they can use those indexes
- always use transactions
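Here is a minimal sketch of several of these tips applied to a hypothetical students table; all names and sizes are illustrative, not taken from the question:

    CREATE TABLE student (
        id          INT UNSIGNED NOT NULL AUTO_INCREMENT,  -- INT is plenty; no need for BIGINT
        college_id  SMALLINT UNSIGNED NOT NULL,            -- small value range, small datatype
        full_name   VARCHAR(100) NOT NULL,                 -- VARCHAR rather than TEXT, NOT NULL where possible
        enrolled_on DATE NOT NULL,
        PRIMARY KEY (id),
        KEY idx_college (college_id)                       -- index the column used for lookups and joins
    );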
Once you have designed and developed your database, if the performance is still not sufficient, think about optimization:
- check EXPLAIN plans and tune the SQL (see the example after this list)
- check hardware utilization and tune either system or MySQL parameters (e.g. the query cache).
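For example, prefixing a query with EXPLAIN shows whether it can use an index; this reuses the hypothetical student table from the sketch above:

    EXPLAIN SELECT id, full_name FROM student WHERE college_id = 42;
    -- the key column of the output should show idx_college rather than NULL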
Please also check this link:
http://dev.mysql.com/doc/refman/5.0/en/optimization.html
I'm trying to figure out why a MySQL database is abnormally slow on a live server. Deleting a row from a table (with fewer than 100 rows) can take anywhere between 1 and 20 seconds. I've checked the running processes and cannot see anything that would eat up all the CPU or memory.
Also, the website has not launched yet, so I'm the only user.
In these conditions, what could be the reason for the database to be so slow? Is there any way to diagnose this kind of problem?
Are you sure that it's the DB that is slow?
Connect to your server using the command line, launch mysql, and run a few sample queries from there. If it's plenty fast there (which it should be, unless you're swapping like mad or have a gazillion funky triggers), you can safely eliminate SQL as the culprit. If not, there is likely a problem with your schema, your database configuration (does it have enough memory?), or your server (is the RAM broken?).
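For example, run the slow statement directly in the mysql client, which prints the elapsed time after every statement (the table name here is hypothetical):

    -- compare the client-reported timing with what the application sees
    DELETE FROM products WHERE id = 1;  -- the client prints e.g. "Query OK, 1 row affected (0.02 sec)"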
Another source of slowness might be latency. Examples:
- Time needed to do a DNS lookup (e.g. on occasion, it's faster to connect to 127.0.0.1 than to localhost)
- Lag due to the DB being located on a separate server (especially if the DB is at the other end of the world)
- Time needed to retrieve the results back from the DB, if blobs are involved
- Dreadfully slow NFS:
  http://lists.freebsd.org/pipermail/freebsd-fs/2013-April/017125.html
- etc.
I have an issue on a website made with Joomla 2.5 and the JA Teline IV template that has 300 concurrent users. It is a soccer magazine, so the articles are updated often, even minute by minute during matches.
I have a server with 16 GB of RAM and a quad-core processor, but the website freezes when 300 users access it.
I've done all the frontend optimization; the last optimization left could be enabling caching.
My questions are:
- can caching be enabled for logged-in users as well?
- cache timing: with this type of article, can I set the cache to expire after 1 minute? Is that a good option? Could it improve performance?
Can you suggest what to do? Any other possible optimizations?
16 GB should be enough to handle 300 concurrent users... this smells like a MySQL server that is not fine-tuned.
Run this script against your MySQL server:
https://github.com/rackerhacker/MySQLTuner-perl
You have to find the bottleneck: processor? memory? disk? database? network?
After you find the issue, you have to choose the right solution: a bigger processor, more memory, a faster disk, a DB index, memory caching, network caching, etc.
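On the MySQL side, a few standard status counters can hint at where the bottleneck is (which thresholds matter depends on your workload):

    SHOW GLOBAL STATUS LIKE 'Threads_connected';        -- clients connected right now
    SHOW GLOBAL STATUS LIKE 'Slow_queries';             -- statements exceeding long_query_time
    SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';  -- temporary tables spilling to disk
    SHOW GLOBAL VARIABLES LIKE 'query_cache_size';      -- a parameter tuning scripts often flag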
Do you have any experience with this question? I currently have 1,900 MySQL databases under a single domain in my Plesk control panel, and I wonder whether my MySQL server could get overloaded or go out of service due to such a high number of databases in the system.
Do you have any suggestions? Each database belongs to a user of my service, by the way.
MySQL itself doesn't place any restrictions on the number of databases you can have, and I doubt Plesk does either; I'm sure it just displays all the databases present on the MySQL server.
However, your host may have a limit (which you'd have to ask them about), or if you start getting a huge number of databases, you may actually run into a filesystem limit. As the MySQL documentation says, each database is stored as a directory, so you could hypothetically hit the filesystem's upper limit for how many subdirectories are allowed.
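For what it's worth, you can count how many databases the server currently holds from information_schema (available since MySQL 5.0):

    SELECT COUNT(*) AS db_count FROM information_schema.SCHEMATA;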
I've got well over 5,000 databases running on a Linux-based Plesk cluster (one DB server, one web server) and it's running fine, though I have had to increase open-file limits due to the huge number of files. I can't run the MySQL tuning primer any more, though; well, I can, but it takes about 4 hours.