MySQL 10x slower on one server compared to another

I have a live server and my dev server, and I am finding that queries on my LIVE (not dev) server run 10x slower, even though the live server is more powerful and both are under comparable load. It's not a database structure issue, because I load the backup from the live server into my dev server.
Does anybody have any ideas on where I could look for the discrepancy? Could it be a MySQL config thing? Where should I start looking?
Live Server:
mysql> SELECT count(`Transaction`.`id`) as count,
           sum(`Transaction`.`amount`) as sum,
           sum(Transaction.citiq_margin + rounding + Transaction.citiq_margin_vat) as revenue
       FROM `transactions` AS `Transaction`
       LEFT JOIN `meters` AS `Meter` ON (`Transaction`.`meter_id` = `Meter`.`id`)
       LEFT JOIN `units` AS `Unit` ON (`Meter`.`unit_id` = `Unit`.`id`)
       WHERE (NOT (`Unit`.`building_id` IN ('1', '85')) AND NOT (`Transaction`.`state` >= 90))
         AND DAY(`Transaction`.`created`) = DAY(NOW())
         AND YEAR(`Transaction`.`created`) = YEAR(NOW())
         AND MONTH(`Transaction`.`created`) = MONTH(NOW());
+-------+---------+---------+
| count | sum     | revenue |
+-------+---------+---------+
|   413 | 3638550 |  409210 |
+-------+---------+---------+
1 row in set (2.62 sec)
[root@mises ~]# uptime
17:11:57 up 55 days, 1 min, 1 user, load average: 0.45, 0.56, 0.60
Dev Server (result count is different because of slight time delay from backup):
mysql> SELECT count(`Transaction`.`id`) as count,
           sum(`Transaction`.`amount`) as sum,
           sum(Transaction.citiq_margin + rounding + Transaction.citiq_margin_vat) as revenue
       FROM `transactions` AS `Transaction`
       LEFT JOIN `meters` AS `Meter` ON (`Transaction`.`meter_id` = `Meter`.`id`)
       LEFT JOIN `units` AS `Unit` ON (`Meter`.`unit_id` = `Unit`.`id`)
       WHERE (NOT (`Unit`.`building_id` IN ('1', '85')) AND NOT (`Transaction`.`state` >= 90))
         AND DAY(`Transaction`.`created`) = DAY(NOW())
         AND YEAR(`Transaction`.`created`) = YEAR(NOW())
         AND MONTH(`Transaction`.`created`) = MONTH(NOW());
+-------+---------+---------+
| count | sum     | revenue |
+-------+---------+---------+
|   357 | 3005550 |  338306 |
+-------+---------+---------+
1 row in set (0.22 sec)
[www@smith test]$ uptime
18:11:53 up 12 days, 1:57, 4 users, load average: 0.91, 0.75, 0.62
Live Server (2 x Xeon Quadcore):
processor : 7
vendor_id : GenuineIntel
cpu family : 6
model : 44
model name : Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
stepping : 2
cpu MHz : 2395.000
cache size : 12288 KB
physical id : 0
siblings : 8
core id : 10
cpu cores : 4
Dev Server (1 x Quadcore):
processor : 3
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name : Intel(R) Core(TM)2 Quad CPU Q8300 @ 2.50GHz
stepping : 10
microcode : 0xa07
cpu MHz : 1998.000
cache size : 2048 KB
physical id : 0
siblings : 4
core id : 3
cpu cores : 4
Live Server:
CentOS 5.7
MySQL ver 5.0.95
Dev Server:
ArchLinux
MySQL ver 5.5.25a

The obvious first thing to check would be your MySQL configuration file, to make sure you are allotting an appropriate amount of memory for queries, such as key_buffer, sort_buffer, and so on. There are far smarter people than me out there with entire blogs dedicated to configuring MySQL.
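For example, you can compare the relevant settings on the two servers straight from the mysql client (a quick sketch; which variables matter depends on your storage engine, and some names differ between 5.0 and 5.5):
SHOW VARIABLES LIKE 'key_buffer_size';
SHOW VARIABLES LIKE 'sort_buffer_size';
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
If the live server's buffers turn out to be much smaller than the dev server's, that alone could account for a large gap.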
You can also prepend your query with EXPLAIN to see how MySQL executes it and where the time is likely going... but that might just be something for general use later on.
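For instance, prefix the exact query from the question with EXPLAIN and run it on both servers (shown trimmed down here as a sketch), then compare the key and rows columns:
EXPLAIN SELECT count(`Transaction`.`id`)
FROM `transactions` AS `Transaction`
LEFT JOIN `meters` AS `Meter` ON (`Transaction`.`meter_id` = `Meter`.`id`)
LEFT JOIN `units` AS `Unit` ON (`Meter`.`unit_id` = `Unit`.`id`)
WHERE DAY(`Transaction`.`created`) = DAY(NOW());
If one server reports key = NULL with a huge rows estimate while the other uses an index, the difference is in the query plan, not the hardware.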
In reality, your "live" server has larger caches and twice as many cores to run these queries, so it should have more than enough horsepower and memory; raw hardware alone does not explain the difference in query times between the servers.

So, I ran the same database and queries on a virtual machine running CentOS with 1 CPU and 512 MB of memory: it answers that query in 0.3 seconds, at a system load of 0.4 :/
The only real difference seems to be that that server runs MySQL 5.5. So it appears there really is a 10x performance improvement in my case from MySQL 5.0 to MySQL 5.5.
I will only know for sure once I have migrated my live servers from MySQL 5.0 to MySQL 5.5; I will confirm the results once I have done that.

Related

Grafana & MySQL - Visualise data in a table tracking weights

I have a very simple MySQL table, which tracks the weights of some animals:
id | name   | weight | date
---+--------+--------+-----------
1  | Brillo | 400    | 2022-12-01
2  | Barli  | 200    | 2022-12-01
3  | Bueno  | 350    | 2022-12-01
4  | Brillo | 410    | 2022-12-10
5  | Barli  | 197    | 2022-12-10
6  | Bueno  | 362    | 2022-12-10
So in the example above, I weigh my 3 animals on the 1st, then again on the 10th.
I would like to visualize this data in Grafana with a timeseries panel. I get the exact data I want, if I query the database once per pet:
SELECT name, weight as 'Brillo', date FROM animal.weights WHERE name='Brillo'
SELECT name, weight as 'Bueno', date FROM animal.weights WHERE name='Bueno'
SELECT name, weight as 'Barli', date FROM animal.weights WHERE name='Barli'
This gives me the panel I want.
Whilst this works, doing one query per animal feels like the wrong approach. I will eventually have 20+ animals on here, so running 20 queries against the database every time feels incorrect.
My question is this; Is there a way I can get the same results from my table into a Grafana timeseries panel in a single query?
SELECT name, weight, date FROM animals.weights
You need aggregation per name and date, plus a time filter (otherwise you will have a problem once there are a lot of records in the DB).
This is a good start:
SELECT
  $__timeGroup(date, '1d', NULL),
  name AS "metric",
  AVG(weight) AS "value"
FROM animals.weights
WHERE
  $__timeFilter(date)
GROUP BY 1, 2
ORDER BY 1
See the Grafana MySQL data source documentation for the macros used.
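For reference, outside Grafana the macro query expands to plain MySQL roughly like this (a sketch; the actual bucket size and date range come from the dashboard's settings):
SELECT
  UNIX_TIMESTAMP(date) DIV 86400 * 86400 AS "time",
  name AS "metric",
  AVG(weight) AS "value"
FROM animals.weights
WHERE date BETWEEN '2022-12-01' AND '2022-12-31'
GROUP BY 1, 2
ORDER BY 1;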

Delete and Insert take a long time in MySQL (InnoDB)

I have a set of numbers, and each number in it has a few numbers associated with it, so I store them in a table like this:
NUMBERS   ASSOCIATEDNUMBERS
1         3
1         7
1         8
2         11
2         7
7         9
8         13
11        17
14        18
17        11
17        18
Thus a number can have many associated numbers, and vice versa. Both columns are indexed (enabling me to find a number's associated numbers, and vice versa).
My create table looks like this:
CREATE TABLE `TABLE_B` (
`NUMBERS` bigint(20) unsigned NOT NULL,
`ASSOCIATEDNUMBERS` bigint(20) unsigned NOT NULL,
UNIQUE KEY `unique_number_associatednumber_constraint` (`NUMBERS`,`ASSOCIATEDNUMBERS`),
KEY `fk_AssociatedNumberConstraint` (`ASSOCIATEDNUMBERS`),
CONSTRAINT `fk_AssociatedNumberConstraint` FOREIGN KEY (`ASSOCIATEDNUMBERS`) REFERENCES `table_a` (`SRNO`),
CONSTRAINT `fk_NumberConstraint` FOREIGN KEY (`NUMBERS`) REFERENCES `table_a` (`SRNO`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
Here TABLE_A has the column SRNO, which is an AUTO_INCREMENT PRIMARY KEY and the first column in the table. (As per the MySQL manual, I haven't defined separate indexes on TABLE_B.NUMBERS and TABLE_B.ASSOCIATEDNUMBERS, since the foreign key constraints define them automatically.)
PROBLEM:
Whenever I need to change the ASSOCIATEDNUMBERS for a number (in `NUMBERS`), I just delete the existing rows for that number from the table:
DELETE FROM TABLE_B WHERE NUMBERS= ?
and then insert rows for the new set of ASSOCIATEDNUMBERS:
INSERT INTO TABLE_B (NUMBERS, ASSOCIATEDNUMBERS) VALUES ( ?, ?), (?, ?), (?, ?), ...
However, this takes a long time, especially when my multi-threaded application opens multiple connections to the database (one per thread), each running the above two queries (each with a different number).
For example, if I open 40 connections, each deleting the existing rows and inserting 250 new associated numbers, it takes up to 10 to 15 seconds. If I increase the number of connections, the time increases as well.
Other Information:
SHOW GLOBAL STATUS LIKE 'Threads_running';
shows up to 40 threads.
InnoDB parameters:
innodb_adaptive_flushing, ON
innodb_adaptive_flushing_lwm, 10
innodb_adaptive_hash_index, ON
innodb_adaptive_max_sleep_delay, 150000
innodb_additional_mem_pool_size, 2097152
innodb_api_bk_commit_interval, 5
innodb_api_disable_rowlock, OFF
innodb_api_enable_binlog, OFF
innodb_api_enable_mdl, OFF
innodb_api_trx_level, 0
innodb_autoextend_increment, 64
innodb_autoinc_lock_mode, 1
innodb_buffer_pool_dump_at_shutdown, OFF
innodb_buffer_pool_dump_now, OFF
innodb_buffer_pool_filename, ib_buffer_pool
innodb_buffer_pool_instances, 8
innodb_buffer_pool_load_abort, OFF
innodb_buffer_pool_load_at_startup, OFF
innodb_buffer_pool_load_now, OFF
innodb_buffer_pool_size, 1073741824
innodb_change_buffer_max_size, 25
innodb_change_buffering, all
innodb_checksum_algorithm, crc32
innodb_checksums, ON
innodb_cmp_per_index_enabled, OFF
innodb_commit_concurrency, 0
innodb_compression_failure_threshold_pct, 5
innodb_compression_level, 6
innodb_compression_pad_pct_max, 50
innodb_concurrency_tickets, 5000
innodb_data_file_path, ibdata1:12M:autoextend
innodb_data_home_dir,
innodb_disable_sort_file_cache, OFF
innodb_doublewrite, ON
innodb_fast_shutdown, 1
innodb_file_format, Antelope
innodb_file_format_check, ON
innodb_file_format_max, Antelope
innodb_file_per_table, ON
innodb_flush_log_at_timeout, 1
innodb_flush_log_at_trx_commit, 2
innodb_flush_method, normal
innodb_flush_neighbors, 1
innodb_flushing_avg_loops, 30
innodb_force_load_corrupted, OFF
innodb_force_recovery, 0
innodb_ft_aux_table,
innodb_ft_cache_size, 8000000
innodb_ft_enable_diag_print, OFF
innodb_ft_enable_stopword, ON
innodb_ft_max_token_size, 84
innodb_ft_min_token_size, 3
innodb_ft_num_word_optimize, 2000
innodb_ft_result_cache_limit, 2000000000
innodb_ft_server_stopword_table,
innodb_ft_sort_pll_degree, 2
innodb_ft_total_cache_size, 640000000
innodb_ft_user_stopword_table,
innodb_io_capacity, 200
innodb_io_capacity_max, 2000
innodb_large_prefix, OFF
innodb_lock_wait_timeout, 50
innodb_locks_unsafe_for_binlog, OFF
innodb_log_buffer_size, 268435456
innodb_log_compressed_pages, ON
innodb_log_file_size, 262144000
innodb_log_files_in_group, 2
innodb_log_group_home_dir, .\
innodb_lru_scan_depth, 1024
innodb_max_dirty_pages_pct, 75
innodb_max_dirty_pages_pct_lwm, 0
innodb_max_purge_lag, 0
innodb_max_purge_lag_delay, 0
innodb_mirrored_log_groups, 1
innodb_monitor_disable,
innodb_monitor_enable,
innodb_monitor_reset,
innodb_monitor_reset_all,
innodb_old_blocks_pct, 37
innodb_old_blocks_time, 1000
innodb_online_alter_log_max_size, 134217728
innodb_open_files, 300
innodb_optimize_fulltext_only, OFF
innodb_page_size, 16384
innodb_print_all_deadlocks, OFF
innodb_purge_batch_size, 300
innodb_purge_threads, 1
innodb_random_read_ahead, OFF
innodb_read_ahead_threshold, 56
innodb_read_io_threads, 64
innodb_read_only, OFF
innodb_replication_delay, 0
innodb_rollback_on_timeout, OFF
innodb_rollback_segments, 128
innodb_sort_buffer_size, 1048576
innodb_spin_wait_delay, 6
innodb_stats_auto_recalc, ON
innodb_stats_method, nulls_equal
innodb_stats_on_metadata, OFF
innodb_stats_persistent, ON
innodb_stats_persistent_sample_pages, 20
innodb_stats_sample_pages, 8
innodb_stats_transient_sample_pages, 8
innodb_status_output, OFF
innodb_status_output_locks, OFF
innodb_strict_mode, OFF
innodb_support_xa, ON
innodb_sync_array_size, 1
innodb_sync_spin_loops, 30
innodb_table_locks, ON
innodb_thread_concurrency, 0
innodb_thread_sleep_delay, 10000
innodb_undo_directory, .
innodb_undo_logs, 128
innodb_undo_tablespaces, 0
innodb_use_native_aio, OFF
innodb_use_sys_malloc, ON
innodb_version, 5.6.28
innodb_write_io_threads, 16
UPDATE:
Here is "SHOW ENGINE InnoDB STATUS" output: http://pastebin.com/raw/E3rK4Pu5
UPDATE 2:
The reason behind this was elsewhere, not actually the DB. Some other function in my code was eating lots of CPU, causing MySQL (which runs on the same machine) to slow down. Thanks for all your answers and help.
It seems you are acquiring a lock on the row/table before deleting/inserting, and that is what is causing the issue.
Check
SELECT * FROM information_schema.GLOBAL_VARIABLES;
Also, check locks on tables using
SHOW OPEN TABLES FROM <database name> WHERE In_use > 0;
and lock type using
SELECT * FROM INFORMATION_SCHEMA.INNODB_LOCKS
When you run your queries, keep these checks running under watch; you can also use the tee command to store the output in a file.
One more thing that can cause this: although you have indexed the columns, there are limitations to when MySQL can actually use an index.
To install watch on OS X, see http://osxdaily.com/2010/08/22/install-watch-command-on-os-x/; run the queries above under watch with a one-second delay while you run your MySQL queries. If you want to store the output in a file, pipe it through tee, and then tail the file to follow the data.
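Alternatively, for capturing the output, the mysql client itself has a built-in tee command (a sketch; the log path and database name are illustrative):
mysql> tee /tmp/locks.log
mysql> SHOW OPEN TABLES FROM mydb WHERE In_use > 0;
mysql> notee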

SQL popularity algorithm with weighted score

I'm implementing an algorithm that returns the posts that are popular right now, given their likes and dislikes.
To do this, for each post I add up all of its likes (1) and dislikes (-1) to get its score, but each like/dislike is weighted: the more recent, the heavier. For example, at the moment a user likes a post, the like weighs 1. After 1 day it weighs 0.95 (or -0.95 if it's a dislike), after 2 days 0.90, and so on, down to a minimum of 0.01 reached after 21 days. (PS: these are approximate values.)
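In other words, a vote aged d days carries weight 1 - min(d/21, 0.99), which is what the query below computes. A quick sanity check of that curve in SQL:
SELECT 1 - IF(1/21 > 0.99, 0.99, 1/21)   AS after_1_day,   -- ≈ 0.952
       1 - IF(30/21 > 0.99, 0.99, 30/21) AS after_30_days; -- the 0.01 floor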
Here is how my tables are structured:
Posts table
id | Title        | user_id | ...
---+--------------+---------+----
1  | Random post  | 10      | ...
2  | Another post | 36      | ...
n  | ...          | n       | ...
Likes table
id | vote | post_id | user_id | created
----------------------------------------
1 | 1 | 2 | 10 | 2014-08-18 15:34:20
2 | -1 | 1 | 24 | 2014-08-15 18:54:12
3 | 1 | 2 | 54 | 2014-08-17 21:12:48
Here is the SQL query I'm currently using, which does the job:
SELECT Post.*, `Like`.*,
       SUM(`Like`.vote *
           (1 - IF((TIMESTAMPDIFF(MINUTE, `Like`.created, NOW()) / 60 / 24) / 21 > 0.99,
                   0.99,
                   (TIMESTAMPDIFF(MINUTE, `Like`.created, NOW()) / 60 / 24) / 21))
       ) AS score
FROM posts Post
LEFT JOIN likes `Like` ON (Post.id = `Like`.post_id)
GROUP BY Post.id
ORDER BY score DESC
PS: I'm using TIMESTAMPDIFF with MINUTE rather than DAY directly, because with DAY it returns an integer and I want a float value, so the weight decays gradually over time rather than jumping day by day. TIMESTAMPDIFF(MINUTE, `Like`.created, NOW())/60/24 gives me the number of days since the like was created, including the decimal part.
Here are my questions :
Look at the IF(expr1, expr2, expr3) part: it is necessary in order to set a minimum value for the like's weight, so that it never goes below 0.01 or turns negative (a like, even an old one, still has a little weight). But I'm calculating the same thing twice: the expression in expr1 is repeated as expr3. Isn't there a way to avoid this duplication? (See the sketch after these questions.)
I was going to cache this query and update it every 5 minutes, as I think it will be pretty heavy on big Post and Like tables. Is the cache really necessary or not? I'm aiming to run this query on a table with 50,000 entries, each with 200 associated likes (that makes a 10,000,000-entry Like table).
Should I create an index in the Like table for post_id? And for created?
Thank you!
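Regarding question 1, for what it's worth: MySQL's LEAST() caps a value without repeating the expression, so the score term could be written as follows (a sketch, untested against the schema above):
SUM(`Like`.vote *
    (1 - LEAST((TIMESTAMPDIFF(MINUTE, `Like`.created, NOW()) / 60 / 24) / 21, 0.99))
) AS score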
EDIT: Imagine a Post can have multiple tags, and each tag can belong to multiple posts. If I want to get the popular Posts for a given Tag or set of Tags, I can't cache each query, as there is a huge number of possible combinations. Is the query still viable then?
EDIT, FINAL SOLUTION: I finally did some tests. I created a Post table with 30,000 entries and a Like table with 250,000 entries.
Without indexes the query was incredibly slow (it timed out at over 10 minutes), but with indexes on Post.id (primary), Like.id (primary), and Like.post_id it took ~0.5 s.
So I'm neither caching the data nor updating it every 5 minutes. If the tables keep growing, that is still a possible solution (anything over 1 s is not acceptable).
2: I was going to cache this query and update it every 5 minutes, as I think it will be pretty heavy on big Post and Like tables. Is the cache really necessary or not? I'm aiming to run this query on a table with 50,000 entries, each with 200 associated likes (that makes a 10,000,000-entry Like table).
50,000 and 10,000,000 rows are considered small on current hardware. With those table sizes you probably won't need any cache, unless the query will run several times per second.
Anyway, I would do a performance test before deciding to add a cache.
3: Should I create an index in the Like table for post_id? And for created?
I would create an index on (post_id, created, vote). That way the query can get all the information it needs from the index and doesn't have to read the table at all.
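A sketch of creating that covering index (the index name is illustrative):
ALTER TABLE likes ADD INDEX idx_post_created_vote (post_id, created, vote);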
Edit (response to comments):
An extra index will slow down inserts/updates slightly. In the end, the path you choose will dictate what you need in terms of CPU/RAM/disk I/O.
If you have enough RAM that you expect the entire Like table to stay cached in memory, you might be better off with an index on just post_id.
In terms of total load, you need to consider the ratio between inserts and selects, and the relative cost of each with and without the index.
My gut feeling is that the total load will be lower with the index.
Regarding your question on concurrency (selecting and inserting simultaneously): what happens depends on the isolation level. The general advice is to keep inserts/updates as short as possible. If you don't do unnecessary things between the start of the insert and the commit, you should be fine.
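As a minimal sketch of what "short" means here (reusing the Likes schema above; the values are illustrative):
START TRANSACTION;
INSERT INTO likes (vote, post_id, user_id, created) VALUES (1, 2, 10, NOW());
COMMIT;  -- commit immediately; do any application work outside the transaction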

Logical error in MySQL query while selecting data

Table pc -
code  model  speed  ram  hd    cd   price
1     1232   500    64   5.0   12x  600.0000
10    1260   500    32   10.0  12x  350.0000
11    1233   900    128  40.0  40x  980.0000
12    1233   800    128  20.0  50x  970.0000
2     1121   750    128  14.0  40x  850.0000
3     1233   500    64   5.0   12x  600.0000
4     1121   600    128  14.0  40x  850.0000
5     1121   600    128  8.0   40x  850.0000
6     1233   750    128  20.0  50x  950.0000
7     1232   500    32   10.0  12x  400.0000
8     1232   450    64   8.0   24x  350.0000
9     1232   450    32   10.0  24x  350.0000
Desired output -
model  speed  hd
1232   450    10.0
1232   450    8.0
1232   500    10.0
1260   500    10.0
Query 1 -
SELECT model, speed, hd
FROM pc
WHERE cd = '12x' AND price < 600
OR
cd = '24x' AND price < 600
Query 2 -
SELECT model, speed, hd
FROM pc
WHERE cd = '12x' OR cd = '24x'
AND price < 600
Query 1 is definitely working correctly; however, when I tried to shorten the query so that price appears only once, it stopped showing the correct result. Let me know what I am missing in the logic.
Find the model number, speed and hard drive capacity of the PCs having
12x CD and prices less than $600 or having 24x CD and prices less than
$600.
Since AND has higher precedence than OR, your query is being interpreted as:
WHERE (cd = '12x') OR ((cd = '24x') AND (price < 600))
Or, in words: All PCs having 12x CD, or PCs < $600 having 24x CD
You need to use parentheses to specify order of operations:
WHERE (cd = '12x' OR cd = '24x') AND price < 600
Or, you can use IN:
WHERE cd IN ('12x', '24x') AND price < 600
See Also: http://dev.mysql.com/doc/refman/5.5/en/operator-precedence.html
Your table may contain duplicate rows; try using a GROUP BY clause as shown below, and let me know the output after trying it. Thanks...
SELECT model, speed, hd
FROM PC
WHERE cd IN ('12x','24x') AND price < 600
GROUP BY model, speed, hd
Try using IN:
SELECT model, speed, hd
FROM PC
WHERE cd IN ('12x','24x') AND price < 600
Good luck.

Fine-tuning MySQL server configuration for better performance

I need help tuning my MySQL server for better performance. It has plenty of resources but is still performing poorly. I have only 3.5 million records in the one table that I hit the most.
I need help deciding which settings to change for better performance.
A simple query like
SELECT label,
COUNT(ObjectKey) AS labelcount
FROM db.results
GROUP BY label
ORDER BY labelcount DESC
LIMIT 30
EXPLAIN output:
id: 1, select_type: SIMPLE, table: results, type: index, possible_keys: NULL, key: label_index, key_len: 258, ref: NULL, rows: 9093098, Extra: Using index; Using temporary; Using filesort
It takes 44 seconds.
Here are my settings:
SHOW VARIABLES LIKE '%buffer%';
'bulk_insert_buffer_size', '8388608'
'innodb_buffer_pool_instances', '1'
'innodb_buffer_pool_size', '16106127360'
'innodb_change_buffering', 'all'
'innodb_log_buffer_size', '10485760'
'join_buffer_size', '131072'
'key_buffer_size', '304087040'
'myisam_sort_buffer_size', '70254592'
'net_buffer_length', '16384'
'preload_buffer_size', '32768'
'read_buffer_size', '65536'
'read_rnd_buffer_size', '262144'
'sort_buffer_size', '262144'
'sql_buffer_result', 'OFF'
SHOW VARIABLES LIKE 'innodb%'
innodb_data_home_dir,
innodb_doublewrite, ON
innodb_fast_shutdown, 1
innodb_file_format, Antelope
innodb_file_format_check, ON
innodb_file_format_max, Antelope
innodb_file_per_table, OFF
innodb_flush_log_at_trx_commit, 0
innodb_flush_method,
innodb_force_load_corrupted, OFF
innodb_force_recovery, 0
innodb_io_capacity, 200
innodb_large_prefix, OFF
innodb_lock_wait_timeout, 50
innodb_locks_unsafe_for_binlog, OFF
innodb_log_buffer_size, 10485760
innodb_log_file_size, 536870912
innodb_log_files_in_group, 2
innodb_log_group_home_dir, .\
innodb_max_dirty_pages_pct, 75
innodb_max_purge_lag, 0
innodb_mirrored_log_groups, 1
innodb_old_blocks_pct, 37
innodb_old_blocks_time, 0
innodb_open_files, 300
innodb_purge_batch_size, 20
innodb_purge_threads, 0
innodb_random_read_ahead, OFF
innodb_read_ahead_threshold, 56
innodb_read_io_threads, 4
innodb_replication_delay, 0
innodb_rollback_on_timeout, OFF
innodb_rollback_segments, 128
innodb_spin_wait_delay, 6
innodb_stats_method, nulls_equal
innodb_stats_on_metadata, ON
innodb_stats_sample_pages, 8
innodb_strict_mode, OFF
innodb_support_xa, ON
innodb_sync_spin_loops, 30
innodb_table_locks, ON
innodb_thread_concurrency, 10
innodb_thread_sleep_delay, 10000
innodb_use_native_aio, ON
innodb_use_sys_malloc, ON
innodb_version, 1.1.8
innodb_write_io_threads, 4
Here is my table definition:
http://pastebin.com/PX6qDHCd
As this depends on many factors, you could use the MySQL tuning primer and/or the MySQLTuner script to find decent settings for your server.
Since MySQL needs to create a temporary table to perform the query (note the Using temporary in the EXPLAIN output), the slowness could also be related to poor disk I/O.
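One quick way to test the temporary-table theory (a sketch; run it before and after the query and compare the counters):
SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';
-- If Created_tmp_disk_tables grows with each run, the GROUP BY/filesort is
-- spilling to disk; raising tmp_table_size / max_heap_table_size may help.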