Auto-deleting MySQL query cache? - mysql

I'm the owner of a web game with a high-traffic database and a lot of queries.
I'm now optimizing MySQL, and tools like mysqltuner keep saying that I have insufficient query cache memory.
+-------------------------+-----------+
| Variable_name           | Value     |
+-------------------------+-----------+
| Qcache_free_blocks      |       218 |
| Qcache_free_memory      | 109155712 |
| Qcache_hits             |  60955602 |
| Qcache_inserts          |  38923475 |
| Qcache_lowmem_prunes    |    100051 |
| Qcache_not_cached       |  10858099 |
| Qcache_queries_in_cache |     19384 |
| Qcache_total_blocks     |     39134 |
+-------------------------+-----------+
That's from about 1.5 days of running the server.
Ideally Qcache_lowmem_prunes would be 0, of course, but a great many queries are user-specific, like:
SELECT username FROM users WHERE id='1';
SELECT username FROM users WHERE id='12';
SELECT username FROM users WHERE id='12453';
SELECT username FROM users WHERE id='122348';
As about 3,000 different users log in each day, many queries like this end up in memory.
I know about the ON DEMAND rule, but I'm avoiding it because we have a lot of queries and I don't want to go over them all.
Increasing the memory wouldn't be a fix either, since we have so many queries.
Then I came up with the idea of a cronjob that automatically runs RESET QUERY CACHE whenever Qcache_lowmem_prunes rises above zero (which will probably be about once an hour).
What's the best option?
1. Automatically reset the query cache once an hour.
2. Buy more RAM and increase the cache by 1 GB, for example (which has its downsides... I'd rather not do that).
3. Specify per query which should be kept in the cache and which should not.
4. Just keep it like this.
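A quick sanity check on those counters is to compute the cache hit ratio and the prune-to-insert ratio. A minimal sketch (the numbers are copied from the SHOW STATUS output above):

```python
# Counters copied from the SHOW STATUS output above.
hits = 60_955_602
inserts = 38_923_475
not_cached = 10_858_099
prunes = 100_051

# Fraction of SELECTs answered straight from the cache.
hit_ratio = hits / (hits + inserts + not_cached)

# Fraction of cache inserts later evicted for lack of memory.
prune_ratio = prunes / inserts

print(f"hit ratio:   {hit_ratio:.1%}")    # prints 55.0%
print(f"prune ratio: {prune_ratio:.2%}")  # prints 0.26%
```

With a prune ratio well under 1%, option 4 (just keep it like this) may already be defensible; an hourly RESET QUERY CACHE would discard far more cached results than the prunes currently cost.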

Related

Upgrading Aurora RDS: does it automatically adjust the parameters?

I am going to upgrade my RDS instance to a much bigger instance type (r3.large to r3.2xlarge), and I would like to know if AWS will adjust the MySQL parameters accordingly. What are the main concerns I should have with this procedure? Is existing fine-tuned customization lost?
instance       CPU  Memory (GiB)  PIOPS-Optimized  Network Performance  Price
db.r3.large      2            15  No               Moderate             $0.32
db.r3.2xlarge    8            61  Yes              High                 $1.28
My main concern is regarding the caching configuration.
innodb_buffer_pool_size is currently 7 GB and I'm thinking of setting it to 50 GB after the upgrade. Will this be adjusted automatically?
Just to complement the question, I am upgrading due to lack of memory, as shown here:
mysql> show status like '%qcache%';
+-------------------------+----------+
| Variable_name           | Value    |
+-------------------------+----------+
| Qcache_free_blocks      |      134 |
| Qcache_free_memory      |   148080 |
| Qcache_hits             | 42459186 |
| Qcache_inserts          | 14059268 |
| Qcache_lowmem_prunes    |  2455035 |
| Qcache_not_cached       | 22422639 |
| Qcache_queries_in_cache |   241632 |
| Qcache_total_blocks     |   772112 |
+-------------------------+----------+
8 rows in set (0.01 sec)
As it shows, MySQL cannot cache some queries and has loads of prunes.
The value of the innodb_buffer_pool_size parameter is, by default, assigned using the formula {DBInstanceClassMemory*3/4}. So if you change the DB instance class (upgrade or downgrade), the value changes accordingly. This holds as long as you have not manually set the parameter to a numeric value (i.e., without using the formula).
In your case, if you are upgrading the instance class and you have set the parameter's value manually (without using the formula), that same value is preserved after the DB instance class is upgraded.
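For a sanity check, the default can be computed from the memory figures in the instance table above (this sketch assumes DBInstanceClassMemory is simply the instance's RAM; in practice RDS subtracts some OS overhead first):

```python
def default_buffer_pool_gib(instance_memory_gib: float) -> float:
    """RDS default for innodb_buffer_pool_size: {DBInstanceClassMemory*3/4}."""
    return instance_memory_gib * 3 / 4

# Memory figures from the instance comparison above.
print(default_buffer_pool_gib(15))  # db.r3.large   -> prints 11.25
print(default_buffer_pool_gib(61))  # db.r3.2xlarge -> prints 45.75
```

So leaving the parameter on the formula would land near 46 GB on the r3.2xlarge, close to the 50 GB you were planning to set.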

MySQL - Find the largest historical result?

I am running a MySQL server and trying to tune my cache mechanism. Looking at my cache statistics below, I am concerned about the number of lowmem prunes as well as the not-cached stat. I believe I have dedicated enough memory, but I suspect my maximum cacheable query size may be too small.
mysql> SHOW STATUS LIKE "qcache%";
+-------------------------+----------+
| Variable_name           | Value    |
+-------------------------+----------+
| Qcache_free_blocks      |      297 |
| Qcache_free_memory      | 15375480 |
| Qcache_hits             | 24724191 |
| Qcache_inserts          | 23954609 |
| Qcache_lowmem_prunes    |  2011492 |
| Qcache_not_cached       |  6987151 |
| Qcache_queries_in_cache |     6004 |
| Qcache_total_blocks     |    12386 |
+-------------------------+----------+
8 rows in set (0.00 sec)
Is there a way to get the server to report a historical statistic for the largest result set ever returned? My intention is to discover how large the returned query data is, in order to better tune the cache. I suspect the not-cached number is too large, and that it stems from the maximum cacheable query size not being large enough.

Performance metrics calculation for SQL queries

I have two databases A and B on MySQL server.
A is the original database, and B is derived from A by changing the format of some tables. So for each table_A in db A there is a corresponding table_B in db B, and for each row in table_A there is a corresponding row in table_B representing the exact same entry in a different format.
I'm pretty sure the details of this "format difference" are irrelevant to what I'm going to ask.
I use Java, JDBC actually, to interface with MySQL server.
I have a number of "SELECT" queries for db A and the equivalent queries for db B. I want to execute them repeatedly and calculate some metrics, like so:
execute SELECT query on db A and calculate metrics;
execute equivalent SELECT query on db B and calculate metrics;
UPDATE data stored in db A and db B by a percentage
loop
The final goal is to compare the performance of the "same" queries on the two twin dbs, to see what effect the "format difference" has in query performance.
My questions:
How can I calculate the CPU time of the query execution? Currently what I do is:
long startTime = System.nanoTime();                // monotonic clock: measures elapsed wall-clock time, not CPU time
ResultSet rs = stmt.executeQuery(QUERY);
long elapsedNanos = System.nanoTime() - startTime; // prefer nanoTime over currentTimeMillis for intervals
Is this accurate?
How can I calculate other metrics, such as memory usage, cache usage, disk reads, disk writes, and buffer gets?
Could anyone suggest any other metrics to compare the performance of the "same" queries on the two databases?
There are a lot of metrics you cannot get, but here is a set I like to collect:
FLUSH STATUS;
SELECT ...; -- or whatever query
SHOW SESSION STATUS LIKE 'Handler%';
The last command might give something like
mysql> SHOW SESSION STATUS LIKE 'Handler%';
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| Handler_commit             |     1 |
| Handler_delete             |     0 |
| Handler_discover           |     0 |
| Handler_external_lock      |     2 |
| Handler_mrr_init           |     0 |
| Handler_prepare            |     0 |
| Handler_read_first         |     1 |
| Handler_read_key           |     1 |
| Handler_read_last          |     0 |
| Handler_read_next          |  5484 | -- rows in the table; so it did a table scan
| Handler_read_prev          |     0 |
| Handler_read_rnd           |     7 |
| Handler_read_rnd_next      |    14 |
| Handler_rollback           |     0 |
| Handler_savepoint          |     0 |
| Handler_savepoint_rollback |     0 |
| Handler_update             |     0 |
| Handler_write              |    13 | -- wrote 13 rows to a tmp table after a GROUP BY
+----------------------------+-------+
18 rows in set (0.00 sec)
Caching comes and goes, so timings can vary even by a factor of 10. Handlers, on the other hand, are very consistent. They give me insight into what is happening.
If you are running through JDBC, execute the FLUSH as you would any non-SELECT statement, and run the SHOW like a SELECT that returns two columns.
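An equivalent way to do the bookkeeping without FLUSH STATUS is to snapshot the Handler counters before and after the query and diff them. A minimal sketch (the snapshots are hard-coded dicts here; in a real program they would come from two SHOW SESSION STATUS result sets read over JDBC):

```python
def handler_delta(before: dict, after: dict) -> dict:
    """Per-query Handler activity: the difference of two status snapshots."""
    return {k: after[k] - before[k] for k in after if k.startswith("Handler")}

# Hard-coded example snapshots; real values come from SHOW SESSION STATUS.
before = {"Handler_read_key": 10, "Handler_read_next": 0, "Handler_write": 2}
after  = {"Handler_read_key": 11, "Handler_read_next": 5484, "Handler_write": 15}

delta = handler_delta(before, after)
# A Handler_read_next delta close to the table's row count suggests a full scan.
print(delta)  # prints {'Handler_read_key': 1, 'Handler_read_next': 5484, 'Handler_write': 13}
```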

MySQL Queries taking too long to load

I have a database table with 300,000 rows, 113.7 MB in size. My database runs on Ubuntu 13.10 with 8 cores and 8 GB of RAM. As things are now, the MySQL server uses an average of 750% CPU and 6.5% MEM (results obtained by running top in the CLI). Note also that it runs on the same server as an Apache2 web server.
Here's what I get on the Mem line:
Mem: 8141292k total, 6938244k used, 1203048k free, 211396k buffers
When I run: show processlist; I get something like this in return:
2098812 | admin | localhost | phpb | Query | 12 | Sending data | SELECT * FROM items WHERE thumb = 'Halloween 2013 Horns/thumbs/Halloween 2013 Horns (Original).png'
2098813 | admin | localhost | phpb | Query | 12 | Sending data | SELECT * FROM items WHERE thumb = 'Halloween 2013 Witch Hat/thumbs/Halloween 2013 Witch Hat (Origina
2098814 | admin | localhost | phpb | Query | 12 | Sending data | SELECT * FROM items WHERE thumb = 'Halloween 2013 Blouse/thumbs/Halloween 2013 Blouse (Original).png
2098818 | admin | localhost | phpb | Query | 11 | Sending data | SELECT * FROM items WHERE parent = 210162 OR auto = 210162
Some queries are taking in excess of 10 seconds to execute. This is not the top of the list but somewhere in the middle, just to give some perspective on how many queries are stacking up. I feel it may have something to do with my query cache configuration. Here are the statistics shown by running SHOW STATUS LIKE 'Qc%';
+-------------------------+----------+
| Variable_name           | Value    |
+-------------------------+----------+
| Qcache_free_blocks      |      434 |
| Qcache_free_memory      |  2037880 |
| Qcache_hits             | 62580686 |
| Qcache_inserts          | 10865474 |
| Qcache_lowmem_prunes    |  4157011 |
| Qcache_not_cached       |  3140518 |
| Qcache_queries_in_cache |     1260 |
| Qcache_total_blocks     |     4440 |
+-------------------------+----------+
I noticed that Qcache_lowmem_prunes seems a bit high; is this normal?
I've been searching around StackOverflow, but I couldn't find anything that would solve my problem. Any help with this would be greatly appreciated, thank you!
This is probably one for http://dba.stackexchange.com. That said...
Why are your queries running slow? Do they return a large result set, or are they just really complex?
Have you tried running one of these queries using EXPLAIN SELECT column FROM ...?
Are you using indexes correctly?
How have you configured MySQL in your my.cnf file?
What table types are you using?
Are you getting any errors?
Edit: Okay, looking at your query examples: what data type is items.thumb? VARCHAR? TEXT? Is it at all possible to query this table using another method than literal text matching (e.g. an ID number)? Does this column have an index?
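On the Qcache_lowmem_prunes question specifically: relating the prunes to the inserts suggests the cache really is starved. A quick back-of-the-envelope check with the counters from the question:

```python
# Counters from the SHOW STATUS output in the question.
inserts = 10_865_474
prunes = 4_157_011
free_memory = 2_037_880  # Qcache_free_memory in bytes: only ~2 MB of headroom

prune_ratio = prunes / inserts
print(f"{prune_ratio:.0%} of cache inserts were later pruned")  # prints 38% ...
```

Roughly 38% of everything inserted is evicted before it can be reused, so either query_cache_size is too small for this workload or many of the cached entries are one-off queries.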

Very poor performance from MySQL InnoDB

Lately my MySQL 5.5.27 has been performing very poorly. I have changed just about everything in the config to see if it makes a difference, with no luck. Tables are constantly getting locked up, reaching 6-9 locks per table, and my SELECT queries take forever: 300-1200 seconds.
Moved Everything to PasteBin because it exceeded 30k chars
http://pastebin.com/bP7jMd97
SYS ACTIVITIES
90% UPDATES AND INSERTS
10% SELECT
My slow query log is backed up. Below is my MySQL info. Please let me know if there is anything I should add that would help.
Server version 5.5.27-log
Protocol version 10
Connection XX.xx.xxx via TCP/IP
TCP port 3306
Uptime: 21 hours 39 min 40 sec
Uptime: 78246 Threads: 125 Questions: 6764445 Slow queries: 25 Opens: 1382 Flush tables: 2 Open tables: 22 Queries per second avg: 86.451
SHOW OPEN TABLES
+----------+---------------+--------+-------------+
| Database | Table         | In_use | Name_locked |
+----------+---------------+--------+-------------+
| aridb    | ek            |      0 |           0 |
| aridb    | ey            |      0 |           0 |
| aridb    | ts            |      4 |           0 |
| aridb    | tts           |      6 |           0 |
| aridb    | tg            |      0 |           0 |
| aridb    | tgle          |      2 |           0 |
| aridb    | ts            |      5 |           0 |
| aridb    | tg2           |      1 |           0 |
| aridb    | bts           |      0 |           0 |
+----------+---------------+--------+-------------+
I've hit a brick wall and need some guidance. thanks!
From looking through your log, it would seem the problem (as I'm quite sure you're aware) is the huge number of locks present, given the amount of data being updated / selected / inserted, possibly at the same time.
It is really hard to give performance tips without first knowing a lot of information you don't provide, such as table sizes, schema, hardware, config, topology, etc. SO probably isn't the best place for such a broad question anyway!
I'll keep my answer as generic as I can, but possible things to look at or try would be:
Run EXPLAIN on the SELECT queries and make sure they are finding data selectively and not performing full table scans or reading huge amounts of data.
Leave the server to do its inserts and updates, but create a read replica for reporting; this way the reporting reads won't be blocked by the writes.
If you're updating many rows at a time, think about updating with a LIMIT supplied, to stop so much data being locked at once.
If you are able to, delay the inserts to relieve pressure.
Look at a hardware fix, such as solid-state disks for I/O performance, and more memory so that more indexes / data can be held in memory or a larger buffer pool used.
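The "update with a limit" suggestion amounts to chunking one big UPDATE into small batches so each batch holds its locks only briefly. A database-free sketch of the loop shape (the chunk size and row list are made up for illustration; in SQL this corresponds to repeating UPDATE ... LIMIT n until the affected-row count reaches zero):

```python
def update_in_batches(row_ids: list, chunk_size: int = 1000) -> int:
    """Process rows in fixed-size batches, mirroring an UPDATE ... LIMIT loop.

    Returns the number of batches issued.
    """
    batches = 0
    for start in range(0, len(row_ids), chunk_size):
        batch = row_ids[start:start + chunk_size]
        # ... here a real program would issue one bounded UPDATE for `batch`;
        # locks are acquired and released per batch instead of held throughout.
        batches += 1
    return batches

print(update_in_batches(list(range(4500))))  # prints 5
```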