I have a set of numbers, and each number in it has a few other numbers associated with it. I store this in a table like this:
NUMBERS   ASSOCIATEDNUMBERS
1         3
1         7
1         8
2         11
2         7
7         9
8         13
11        17
14        18
17        11
17        18
So a number can have many associated numbers, and vice versa. Both columns are indexed, which lets me look up a number's associated numbers and also do the reverse lookup.
My create table looks like this:
CREATE TABLE `TABLE_B` (
`NUMBERS` bigint(20) unsigned NOT NULL,
`ASSOCIATEDNUMBERS` bigint(20) unsigned NOT NULL,
UNIQUE KEY `unique_number_associatednumber_constraint` (`NUMBERS`,`ASSOCIATEDNUMBERS`),
KEY `fk_AssociatedNumberConstraint` (`ASSOCIATEDNUMBERS`),
CONSTRAINT `fk_AssociatedNumberConstraint` FOREIGN KEY (`ASSOCIATEDNUMBERS`) REFERENCES `table_a` (`SRNO`),
CONSTRAINT `fk_NumberConstraint` FOREIGN KEY (`NUMBERS`) REFERENCES `table_a` (`SRNO`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
Here TABLE_A has a column SRNO, which is an AUTO_INCREMENT PRIMARY KEY and the first column in the table. (As per the MySQL manual I haven't defined separate indexes on TABLE_B.NUMBERS and TABLE_B.ASSOCIATEDNUMBERS, since the foreign key constraints create them automatically.)
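To double-check which indexes actually exist on TABLE_B (the unique key plus the key backing fk_AssociatedNumberConstraint), SHOW INDEX lists them:
SHOW INDEX FROM TABLE_B;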
PROBLEM:
Whenever I need to change the ASSOCIATEDNUMBERS for a number (in `NUMBERS`), I just delete the existing rows for that number from the table:
DELETE FROM TABLE_B WHERE NUMBERS= ?
and then insert rows for new set of ASSOCIATEDNUMBERS:
INSERT INTO TABLE_B (NUMBERS, ASSOCIATEDNUMBERS) VALUES ( ?, ?), (?, ?), (?, ?), ...
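For example, for number 1 from the sample table above (the literal values here are just the sample data, not my real data), one thread's round trip looks like:
DELETE FROM TABLE_B WHERE NUMBERS = 1;
INSERT INTO TABLE_B (NUMBERS, ASSOCIATEDNUMBERS) VALUES (1, 3), (1, 7), (1, 8);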
However, this takes a long time, especially when my multi-threaded application opens multiple connections (one per thread) to the database, each running the above two queries (but each for a different number).
For example, if I open 40 connections, each deleting the existing rows and inserting 250 new associated numbers, it takes up to 10 to 15 seconds. If I increase the number of connections, the time increases as well.
Other Information:
SHOW GLOBAL STATUS LIKE 'Threads_running';
shows up to 40 threads running.
Innodb parameters:
innodb_adaptive_flushing, ON
innodb_adaptive_flushing_lwm, 10
innodb_adaptive_hash_index, ON
innodb_adaptive_max_sleep_delay, 150000
innodb_additional_mem_pool_size, 2097152
innodb_api_bk_commit_interval, 5
innodb_api_disable_rowlock, OFF
innodb_api_enable_binlog, OFF
innodb_api_enable_mdl, OFF
innodb_api_trx_level, 0
innodb_autoextend_increment, 64
innodb_autoinc_lock_mode, 1
innodb_buffer_pool_dump_at_shutdown, OFF
innodb_buffer_pool_dump_now, OFF
innodb_buffer_pool_filename, ib_buffer_pool
innodb_buffer_pool_instances, 8
innodb_buffer_pool_load_abort, OFF
innodb_buffer_pool_load_at_startup, OFF
innodb_buffer_pool_load_now, OFF
innodb_buffer_pool_size, 1073741824
innodb_change_buffer_max_size, 25
innodb_change_buffering, all
innodb_checksum_algorithm, crc32
innodb_checksums, ON
innodb_cmp_per_index_enabled, OFF
innodb_commit_concurrency, 0
innodb_compression_failure_threshold_pct, 5
innodb_compression_level, 6
innodb_compression_pad_pct_max, 50
innodb_concurrency_tickets, 5000
innodb_data_file_path, ibdata1:12M:autoextend
innodb_data_home_dir,
innodb_disable_sort_file_cache, OFF
innodb_doublewrite, ON
innodb_fast_shutdown, 1
innodb_file_format, Antelope
innodb_file_format_check, ON
innodb_file_format_max, Antelope
innodb_file_per_table, ON
innodb_flush_log_at_timeout, 1
innodb_flush_log_at_trx_commit, 2
innodb_flush_method, normal
innodb_flush_neighbors, 1
innodb_flushing_avg_loops, 30
innodb_force_load_corrupted, OFF
innodb_force_recovery, 0
innodb_ft_aux_table,
innodb_ft_cache_size, 8000000
innodb_ft_enable_diag_print, OFF
innodb_ft_enable_stopword, ON
innodb_ft_max_token_size, 84
innodb_ft_min_token_size, 3
innodb_ft_num_word_optimize, 2000
innodb_ft_result_cache_limit, 2000000000
innodb_ft_server_stopword_table,
innodb_ft_sort_pll_degree, 2
innodb_ft_total_cache_size, 640000000
innodb_ft_user_stopword_table,
innodb_io_capacity, 200
innodb_io_capacity_max, 2000
innodb_large_prefix, OFF
innodb_lock_wait_timeout, 50
innodb_locks_unsafe_for_binlog, OFF
innodb_log_buffer_size, 268435456
innodb_log_compressed_pages, ON
innodb_log_file_size, 262144000
innodb_log_files_in_group, 2
innodb_log_group_home_dir, .\
innodb_lru_scan_depth, 1024
innodb_max_dirty_pages_pct, 75
innodb_max_dirty_pages_pct_lwm, 0
innodb_max_purge_lag, 0
innodb_max_purge_lag_delay, 0
innodb_mirrored_log_groups, 1
innodb_monitor_disable,
innodb_monitor_enable,
innodb_monitor_reset,
innodb_monitor_reset_all,
innodb_old_blocks_pct, 37
innodb_old_blocks_time, 1000
innodb_online_alter_log_max_size, 134217728
innodb_open_files, 300
innodb_optimize_fulltext_only, OFF
innodb_page_size, 16384
innodb_print_all_deadlocks, OFF
innodb_purge_batch_size, 300
innodb_purge_threads, 1
innodb_random_read_ahead, OFF
innodb_read_ahead_threshold, 56
innodb_read_io_threads, 64
innodb_read_only, OFF
innodb_replication_delay, 0
innodb_rollback_on_timeout, OFF
innodb_rollback_segments, 128
innodb_sort_buffer_size, 1048576
innodb_spin_wait_delay, 6
innodb_stats_auto_recalc, ON
innodb_stats_method, nulls_equal
innodb_stats_on_metadata, OFF
innodb_stats_persistent, ON
innodb_stats_persistent_sample_pages, 20
innodb_stats_sample_pages, 8
innodb_stats_transient_sample_pages, 8
innodb_status_output, OFF
innodb_status_output_locks, OFF
innodb_strict_mode, OFF
innodb_support_xa, ON
innodb_sync_array_size, 1
innodb_sync_spin_loops, 30
innodb_table_locks, ON
innodb_thread_concurrency, 0
innodb_thread_sleep_delay, 10000
innodb_undo_directory, .
innodb_undo_logs, 128
innodb_undo_tablespaces, 0
innodb_use_native_aio, OFF
innodb_use_sys_malloc, ON
innodb_version, 5.6.28
innodb_write_io_threads, 16
UPDATE:
Here is "SHOW ENGINE InnoDB STATUS" output: http://pastebin.com/raw/E3rK4Pu5
UPDATE2:
The reason behind this was elsewhere, not actually the DB. Some other function in my code was eating a lot of CPU, causing MySQL (which runs on the same machine) to slow down. Thanks for all your answers and help.
It seems you are acquiring locks on the rows/table while deleting/inserting, and that is what is causing the issue.
Check
SELECT * from information_schema.GLOBAL_VARIABLES;
Also, check locks on the table using
SHOW OPEN TABLES from <database name> where In_use>0
and lock type using
SELECT * FROM INFORMATION_SCHEMA.INNODB_LOCKS
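If the DELETEs and INSERTs really are blocking each other, a sketch like the following (the standard join of INNODB_LOCK_WAITS and INNODB_TRX, available from MySQL 5.5 onwards) shows which connection is waiting on which:
SELECT r.trx_mysql_thread_id AS waiting_thread,
       r.trx_query           AS waiting_query,
       b.trx_mysql_thread_id AS blocking_thread,
       b.trx_query           AS blocking_query
FROM information_schema.INNODB_LOCK_WAITS w
JOIN information_schema.INNODB_TRX r ON r.trx_id = w.requesting_trx_id
JOIN information_schema.INNODB_TRX b ON b.trx_id = w.blocking_trx_id;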
So when you run your queries, put these diagnostics on watch; you can also use the tee command to store the output in a file.
One more thing that can cause this: even though you have indexed the columns, MySQL has limitations on when it can actually use an index, depending on the data; see the MySQL documentation on those limitations.
Read this to install watch: http://osxdaily.com/2010/08/22/install-watch-command-on-os-x/. Run watch with a delay of 1 second and execute the MySQL queries inside it. If you want to store the output in a file, use the tee command to write it out; you can then tail the file to follow the data.
What is the most accurate way to estimate how big a database would be with the following characteristics:
MySQL
1 Table with three columns:
id --> bigint
field1 --> varchar 32
field2 --> char 32
there is an index on field2
You can assume varchar 32 is fully populated (all 32 characters). How big would it be if each field is populated and there are:
1 Million rows
5 Million rows
1 Billion rows
5 Billion rows
My rough estimate works out to: 1 byte for id, 32 bits each for the other two fields. Making it roughly:
1 + 32 + 32 = 65 * 1 000 000 = 65 million bytes for 1 million rows
= 62 Megabyte
Therefore:
62 Mb
310 Mb
62 000 Mb = +- 60 Gb
310 000 Mb = +- 302 Gb
Is this an accurate estimation?
If you want to know the current size of a database you can try this:
SELECT table_schema "Database Name"
, SUM(data_length + index_length) / (1024 * 1024) "Database Size in MB"
FROM information_schema.TABLES
GROUP BY table_schema
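If you want the same breakdown per table rather than per database, a variation like this works (replace 'your_database', a placeholder here, with your schema name):
SELECT table_name
     , data_length  / (1024 * 1024) "Data Size in MB"
     , index_length / (1024 * 1024) "Index Size in MB"
FROM information_schema.TABLES
WHERE table_schema = 'your_database'
ORDER BY data_length + index_length DESC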
My rough estimate works out to: 1 byte for id, 32 bits each for the other two fields.
You're way off. Please refer to the MySQL Data Type Storage Requirements documentation. In particular:
A BIGINT is 8 bytes, not 1.
The storage required for a CHAR or VARCHAR column will depend on the character set in use by your database (!), but will be at least 32 bytes (not bits!) for CHAR(32) and 33 for VARCHAR(32).
You have not accounted at all for the size of the index. The size of this will depend on the database engine, but it's definitely not zero. See the documentation on the InnoDB row structure for more information.
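To put rough numbers on that (assuming a single-byte character set and ignoring row, page and engine overhead): 8 (BIGINT) + 33 (VARCHAR(32)) + 32 (CHAR(32)) = 73 bytes of data per row, so roughly 73 MB per million rows before the index. On InnoDB, for example, each entry in the index on field2 carries the indexed value plus the primary key, so if id is the primary key the index adds very roughly another 32 + 8 = 40 bytes per row.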
On the MySQL website you'll find quite comprehensive information about storage requirements:
http://dev.mysql.com/doc/refman/5.6/en/storage-requirements.html
It also depends on whether you use utf8 or not.
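For example, with the utf8 character set MySQL reserves up to 3 bytes per character, so a CHAR(32) column can take up to 32 × 3 = 96 bytes rather than 32.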
I have a live server and my dev server, and I am finding that queries on my LIVE (not dev) server run 10x slower, even though the live server is more powerful and both are running under comparable load. It's not a database structure issue, because I load the backup from the live server into my dev server.
Does anybody have any ideas on where I could look for the discrepancy? Could it be a MySQL config thing? Where should I start looking?
Live Server:
mysql> SELECT count(`Transaction`.`id`) as count, sum(`Transaction`.`amount`) as sum, sum(Transaction.citiq_margin+rounding + Transaction.citiq_margin_vat) as revenue FROM `transactions` AS `Transaction` LEFT JOIN `meters` AS `Meter` ON (`Transaction`.`meter_id` = `Meter`.`id`) LEFT JOIN `units` AS `Unit` ON (`Meter`.`unit_id` = `Unit`.`id`) WHERE (NOT (`Unit`.`building_id` IN ('1', '85')) AND NOT (`Transaction`.`state` >= 90)) AND DAY(`Transaction`.`created`) = DAY(NOW()) AND YEAR(`Transaction`.`created`) = YEAR(NOW()) AND (MONTH(`Transaction`.`created`)) = MONTH(NOW());
+-------+---------+---------+
| count | sum | revenue |
+-------+---------+---------+
| 413 | 3638550 | 409210 |
+-------+---------+---------+
1 row in set (2.62 sec)
[root@mises ~]# uptime
17:11:57 up 55 days, 1 min, 1 user, load average: 0.45, 0.56, 0.60
Dev Server (result count is different because of slight time delay from backup):
mysql> SELECT count(`Transaction`.`id`) as count, sum(`Transaction`.`amount`) as sum, sum(Transaction.citiq_margin+rounding + Transaction.citiq_margin_vat) as revenue FROM `transactions` AS `Transaction` LEFT JOIN `meters` AS `Meter` ON (`Transaction`.`meter_id` = `Meter`.`id`) LEFT JOIN `units` AS `Unit` ON (`Meter`.`unit_id` = `Unit`.`id`) WHERE (NOT (`Unit`.`building_id` IN ('1', '85')) AND NOT (`Transaction`.`state` >= 90)) AND DAY(`Transaction`.`created`) = DAY(NOW()) AND YEAR(`Transaction`.`created`) = YEAR(NOW()) AND (MONTH(`Transaction`.`created`)) = MONTH(NOW());
+-------+---------+---------+
| count | sum | revenue |
+-------+---------+---------+
| 357 | 3005550 | 338306 |
+-------+---------+---------+
1 row in set (0.22 sec)
[www@smith test]$ uptime
18:11:53 up 12 days, 1:57, 4 users, load average: 0.91, 0.75, 0.62
Live Server (2 x Xeon Quadcore):
processor : 7
vendor_id : GenuineIntel
cpu family : 6
model : 44
model name : Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
stepping : 2
cpu MHz : 2395.000
cache size : 12288 KB
physical id : 0
siblings : 8
core id : 10
cpu cores : 4
Dev Server (1 x Quadcore)
processor : 3
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name : Intel(R) Core(TM)2 Quad CPU Q8300 @ 2.50GHz
stepping : 10
microcode : 0xa07
cpu MHz : 1998.000
cache size : 2048 KB
physical id : 0
siblings : 4
core id : 3
cpu cores : 4
Live Server:
CentOS 5.7
MySQL ver 5.0.95
Dev Server:
ArchLinux
MySQL ver 5.5.25a
The obvious first thing to check would be your MySQL configuration file, to make sure you are allocating an appropriate amount of memory for queries, such as key_buffer, sort_buffer, etc. There are far smarter people than me out there with entire blogs dedicated to configuring MySQL.
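For example, to compare a few of the usual suspects on both servers (these are standard server variables; extend the list as needed):
SHOW VARIABLES LIKE 'key_buffer_size';
SHOW VARIABLES LIKE 'sort_buffer_size';
SHOW VARIABLES LIKE 'query_cache_size';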
You can also prepend your query with EXPLAIN to see the execution plan, i.e. which indexes are used and roughly how many rows are examined, although that may be more useful for general tuning later on.
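For instance, using the query from the question as-is:
EXPLAIN SELECT count(`Transaction`.`id`) as count,
       sum(`Transaction`.`amount`) as sum,
       sum(Transaction.citiq_margin + rounding + Transaction.citiq_margin_vat) as revenue
FROM `transactions` AS `Transaction`
LEFT JOIN `meters` AS `Meter` ON (`Transaction`.`meter_id` = `Meter`.`id`)
LEFT JOIN `units` AS `Unit` ON (`Meter`.`unit_id` = `Unit`.`id`)
WHERE (NOT (`Unit`.`building_id` IN ('1', '85')) AND NOT (`Transaction`.`state` >= 90))
  AND DAY(`Transaction`.`created`) = DAY(NOW())
  AND YEAR(`Transaction`.`created`) = YEAR(NOW())
  AND (MONTH(`Transaction`.`created`)) = MONTH(NOW());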
In reality, your live server has caching capabilities and twice the number of cores available for these queries, so raw horsepower and memory alone are unlikely to explain the difference in query times between the servers.
So, I ran the same database and queries on a virtual machine running CentOS with 1 CPU and 512 MB of memory: it returns the answer to that query in 0.3 seconds, with a system load of 0.4 :/
The only real difference seems to be that I am running MySQL 5.5 on that server, so it seems that there really is a 10x performance improvement in my case from MySQL 5.0 to MySQL 5.5.
I will only know for sure once I have migrated my live servers from MySQL 5.0 to MySQL 5.5; I will confirm the results once I have done that.
While I was creating stress data for a table, I found that the following files were generated.
-rw-rw---- 1 mysql mysql 8858 Jul 28 06:47 card.frm
-rw-rw---- 1 mysql mysql 7951695624 Jul 29 20:48 card.MYD
-rw-rw---- 1 mysql mysql 51360768 Jul 29 20:57 card.MYI
I actually inserted 1985968 records into this table, but the index file size is unbelievable.
Structure of the table is
create table card(
company_id int(10),
emp_number varchar(100),
card_date varchar(10),
time_entry text,
total_ot varchar(15),
total_per varchar(15),
leave_taken double,
total_lop double,
primary key (company_id,emp_number,card_date),
index (company_id,card_date)
);
Is there any way to reduce the filesize of the MYD?
Please note that .MYI is your index, and .MYD is your data. The only way to reduce the size of your .MYD is to delete rows or alter your column sizes.
50MB for an index on 2 million rows is not large.
Let's look at the size breakdown of your table:
company_id - 4 Bytes
emp_number - 101 Bytes
card_date - 11 Bytes
total_ot - 17 Bytes
total_per - 17 Bytes
leave_taken - 9 Bytes
total_lop - 9 Bytes
time_entry - avg(length(time_entry)) + 3 Bytes
This gives us a row length of 172 + time_entry bytes. If time_entry averages out at 100 bytes, you're looking at 272 * 2000000 = 544MB.
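To plug a real number into the time_entry line, a quick check like this works (assuming the table is named card, as in the question):
SELECT AVG(LENGTH(time_entry)) FROM card;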
Of significance to me is the number of VARCHARs. Does the employee number need to be a varchar(100), or even a varchar at all? You're duplicating that data in its entirety in your index on (company_id,emp_number,card_date), as you're indexing the whole column.
You probably don't need a varchar here, and you possibly don't need it included in the primary key.
Do you really need time_entry to be a TEXT field? This is likely the biggest consumer of space in your database.
Why are you using varchar(10) for card date? If you used DATETIME you'd only use 8 Bytes instead of 11, TIMESTAMP would be 4 Bytes, and DATE would be 3 Bytes.
You're also adding 1 Byte for every column that can be NULL.
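Pulling those points together, a possible slimmer definition might look like the sketch below (the narrower emp_number and the choice of DATE are illustrative assumptions, not requirements):
create table card(
    company_id int(10) not null,
    emp_number varchar(20) not null,   -- or an integer surrogate, if employee numbers are numeric
    card_date date not null,           -- 3 bytes instead of an 11-byte varchar(10)
    time_entry text,
    total_ot varchar(15),
    total_per varchar(15),
    leave_taken double,
    total_lop double,
    primary key (company_id,emp_number,card_date),
    index (company_id,card_date)
);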
Also try running the ANALYZE/REPAIR/OPTIMIZE TABLE commands.
A lot depends on how big that time_entry text field can be. I'm going to assume it's small, less than 100 bytes. Then you have roughly 4 + 100 + 10 + 100 + 15 + 15 + 8 + 8 ≈ 260 bytes of data per record, call it roughly 300 bytes with row overhead. You have 2 million records, so I'd expect the database to be about 600 megabytes. In fact you are showing 8000 megabytes of data in the MYD on disk, or a factor of 12x. Something's not right.
Your best diagnostic tool is show table status. In particular check Avg_row_length and Data_length, they will give you some insight into where the space is going.
If you're using MyISAM tables, you may find that myisamchk will help make the table smaller. This tool particularly helps if you inserted and then deleted a lot of rows from the database. "optimize table" can help too. MyISAM does support read-only compressed tables via myisampack. I'd treat that as a last resort, though.
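As a concrete starting point (table name as in the question):
-- See where the space is going (check Avg_row_length, Data_length, Index_length):
SHOW TABLE STATUS LIKE 'card';

-- Rebuild the MyISAM data file and reclaim space left by deleted rows:
OPTIMIZE TABLE card;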