Heavy writes to InnoDB - MySQL

For every day, we build a mapping from tweet user id to the list of tweet ids of tweets made by that user. The storage engine we are using is Percona XtraDB ("5.1.63-rel13.4 Percona Server (GPL), 13.4, Revision 443").
We are unsatisfied with the maximum throughput in terms of row inserts per second. Our maximum throughput with XtraDB is around 6,000 ~ 8,000 tweets per second, so if we ever have to rebuild a day's data from scratch, we have to wait for almost a day.
For the most part we can keep up in real time with the full Twitter feed (which is roughly 4,000 ~ 5,000 tweets per second).
We have narrowed the bottleneck of our application down to the MySQL InnoDB inserts. The application reads the feed from disk and parses it with Jackson (which runs at about 30,000 tweets per second), then processes the tweets in batches. We partition the set of authors of these tweets into 8 groups (simple partitioning by user id modulo 8); a table is allocated for each group, and one thread is allocated to write the data to that table. Every day there are roughly 26 million unique users producing tweets, and therefore each table has roughly 4 million rows.
For each group of users we use a single transaction for both the read and the update. The group size is tunable at runtime; we have tried sizes from 8 to 64,000 and settled on 256 as a good batch size.
The schema of each table is:
CREATE TABLE `2012_07_12_g0` ( `userid` bigint(20) NOT NULL, `tweetId` longblob, PRIMARY KEY (`userid`)) ENGINE=InnoDB DEFAULT CHARSET=utf8
where tweetId is the list of tweet ids (long integers), compressed with Google Snappy.
Each thread uses
Select userid,tweetId from <tablename> where userid IN (....)
to read back the existing data for the userids in a batch, and then uses
INSERT INTO <tablename> (userid,tweetId) VALUES (...) ON DUPLICATE KEY UPDATE tweetId=VALUES(tweetId)
to update the rows with the new tweet ids.
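For illustration, one batch against one of the group tables ends up looking roughly like this (the ids and blob values here are placeholders for the pattern described above, not our real data):

START TRANSACTION;
-- read back the existing compressed lists for the ~256 users in this batch
SELECT userid, tweetId FROM `2012_07_12_g0` WHERE userid IN (1001, 1002, 1003);
-- the application appends the new tweet ids, re-compresses with Snappy,
-- and writes all rows of the batch back in one statement
INSERT INTO `2012_07_12_g0` (userid, tweetId)
VALUES (1001, x'AB12'), (1002, x'CD34'), (1003, x'EF56')
ON DUPLICATE KEY UPDATE tweetId = VALUES(tweetId);
COMMIT;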
We have tried setting various XtraDB parameters
innodb_log_buffer_size = 4M
innodb_flush_log_at_trx_commit = 2
innodb_max_dirty_pages_pct = 80
innodb_flush_method = O_DIRECT
innodb_doublewrite = 0
innodb_use_purge_thread = 1
innodb_thread_concurrency = 32
innodb_write_io_threads = 8
innodb_read_io_threads = 8
#innodb_io_capacity = 20000
#innodb_adaptive_flushing = 1
#innodb_flush_neighbor_pages = 0
The table size for each day is roughly 8G for all tables, and InnoDB is given 24GB to work with.
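The per-table footprint can be checked with a standard information_schema query (the schema name here is a placeholder):

SELECT table_name,
       ROUND(data_length / 1024 / 1024)  AS data_mb,
       ROUND(index_length / 1024 / 1024) AS index_mb
FROM information_schema.tables
WHERE table_schema = 'tweets' AND table_name LIKE '2012_07_12_g%';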
We are using:
6-disk (crucial m4 SSD, 512 GB, 000F firmware) software RAID5.
MySQL InnoDB data and tablespaces on the SSD partition
ext4 mount with noatime,nodiratime,commit=60
centos 6.2
sun jdk 1.6.30
Any tips for making our insert go faster would be greatly appreciated, thanks.

InnoDB is given 24GB
Do you mean this is the innodb_buffer_pool_size? (You didn't say how much memory you have, nor what CPUs you are using.) If so, then you should probably be using a larger innodb_log_buffer_size. What's your setting for innodb_log_file_size? It should probably be in the region of 96MB.
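For example, the current values and the redo log write rate can be checked with standard commands (sample Innodb_os_log_written twice, a minute apart, to estimate MB written per minute):

SHOW VARIABLES LIKE 'innodb_log_file_size';
SHOW VARIABLES LIKE 'innodb_log_buffer_size';
SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';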
innodb_write_io_threads = 8
ISTR that ext3 has some concurrency problems with multiple writers - but I don't know about ext4
Have you tried changing innodb_flush_method?
Which I/O scheduler are you using (in the absence of a smart disk controller, usually deadline is fastest, sometimes CFQ)?
Switching off the ext4 barriers will help with throughput - it's a bit more risky, so make sure you've got checksums enabled in JBD2. Similarly, setting innodb_flush_log_at_trx_commit=0 should give a significant increase, but it is also riskier.
Since you're obviously not bothered about maintaining your data in a relational format, then you might consider using a noSQL database.

My initial suggestions would be:
As you don't have a RAID card with write cache, you may want to comment out the innodb_flush_method = O_DIRECT line to let the system cache writes
As you disabled the doublewrite buffer, you could also set innodb_flush_log_at_trx_commit to 0, which would be faster than 2
Set innodb_log_buffer_size to cover at least one second of writes (approx. 12MB for 30K tweets)
In case you use binary logs, make sure you have sync_binlog = 0 (a sketch of applying these settings follows below)
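A rough sketch of applying these (the first two are dynamic; innodb_log_buffer_size and innodb_flush_method are not, so they have to go in my.cnf followed by a restart):

SET GLOBAL innodb_flush_log_at_trx_commit = 0;
SET GLOBAL sync_binlog = 0;
-- not dynamic, adjust in my.cnf and restart:
-- innodb_log_buffer_size = 16M    (or whatever covers one second of writes)
-- #innodb_flush_method = O_DIRECT (commented out to let the OS cache writes)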
On the hardware side I would strongly suggest trying a RAID card with at least 256MB of RAM and a battery backup unit (BBU) to improve write speed. There are RAID cards on the market that support SSDs.
Hope this helps. Please let me know how it goes.

Related

mysql indexing status and running index in memory

I would like to know the progress of indexing my primary column "URL" (the table has around 1 million rows) - whether it is finished or still in progress. I would also like to run the indexing in memory to reduce HDD I/O; more importantly, I want the process to take no more than 10 minutes, because I need to delete all the data and insert new rows every day.
Innodb_buffer_pool_size = 3G
Mysql mytable structure
URL (primary) varchar(255)
filename varchar(200)
mytable: 1,065,380 rows, InnoDB, latin1_swedish_ci, 1 GiB
Space usage
Data 404 MiB
Index 660 MiB
Total 1 GiB
Row Statistics
Format dynamic
Collation latin1_swedish_ci
Creation Nov 18, 2017 at 02:35 PM
How much RAM do you have? For a 3G buffer_pool, I hope you have at least 6GB of RAM. If not, the buffer_pool is configured so big that you are probably swapping. This is a serious performance killer.
If you already have PRIMARY KEY(URL), there is no reason to add INDEX(URL). The PK is already an index, and UNIQUE.
What query are you hoping to speed up?
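If you did add a separate index on URL, you can confirm and drop it, and a long-running index build shows up in the processlist (the table and index names below are taken from your description and may differ):

SHOW INDEX FROM mytable;
-- an extra single-column index on URL duplicates the PRIMARY KEY and can go:
ALTER TABLE mytable DROP INDEX URL;
-- progress / state of a long-running ALTER or CREATE INDEX:
SHOW FULL PROCESSLIST;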

Issues with wikipedia dump table pagelinks

I downloaded the enwiki-latest-pagelinks.sql.gz dump from dumps.wikimedia.org/enwiki/latest/.
I unpacked the file; its uncompressed size is 37G.
The table structure is this:
SHOW CREATE TABLE wp_dump.pagelinks;
CREATE TABLE `pagelinks` (
`pl_from` int(8) unsigned NOT NULL DEFAULT '0',
`pl_namespace` int(11) NOT NULL DEFAULT '0',
`pl_title` varbinary(255) NOT NULL DEFAULT '',
`pl_from_namespace` int(11) NOT NULL DEFAULT '0',
UNIQUE KEY `pl_from` (`pl_from`,`pl_namespace`,`pl_title`),
KEY `pl_namespace` (`pl_namespace`,`pl_title`,`pl_from`),
KEY `pl_backlinks_namespace` (`pl_from_namespace`,`pl_namespace`,`pl_title`,`pl_from`)
) ENGINE=InnoDB DEFAULT CHARSET=binary
I imported the table into a new, empty database:
mysql -D wp_dump -u root -p < enwiki-latest-pagelinks.sql
The computer I am running the task on has 16G of RAM and the mysql database is located on a SSD, so I was assuming that despite the table's size the import would not take too long.
However, the task has been running for over a day and is still not finished. There are no other processes accessing mysql and there is no workload on the computer.
The database file itself is now 79G.
ls -lh
-rw-r----- 1 mysql mysql 65 May 11 17:40 db.opt
-rw-r----- 1 mysql mysql 8,6K May 12 07:06 pagelinks.frm
-rw-r----- 1 mysql mysql 79G May 13 16:59 pagelinks.ibd
The table now has over 500 million rows.
SELECT table_name, table_rows FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'wp_dump';
+------------+------------+
| table_name | table_rows |
+------------+------------+
| pagelinks | 520919860 |
+------------+------------+
I am wondering:
Is the pagelinks table really over 79G large?
Does pagelinks really contain over 500 million rows?
Does it really take that long to import the pagelinks table?
Can you provide some metrics on the expected table size and row count, please?
Update, May 14th, 2017:
The insert is still running; the pagelinks.ibd file is now 130G; the number of rows is now almost 700 million.
Update, May 16th, 2017:
The insert is still running; the pagelinks.ibd file is now 204G; the number of rows is now over 1.2 billion.
I calculated the rows inserted per second over the last two days:
rows/sec = 3236
And: there are many thousands of rows per INSERT statement in the sql script (head -41 enwiki-latest-pagelinks.sql | tail -1 | grep -o "(" | wc -l gives 30471).
So, my follow-up / modified questions:
Are the number of rows and the .ibd file size to be expected, given the sql file size of 37G and the table structure (as listed above)?
Is 3236 rows/sec a good value (meaning that it takes a few days to insert the table)?
What may be the limiting speed factor / how can I speed up the import?
Disable the indexes (and calculate them after the insert)?
Optimize transactions (commit (nothing set in script) / autocommit (now ON))?
Optimize variable settings (e.g. innodb_buffer_pool_size, now 134217728)?
@Sim Betren: I am currently importing the same table and can get about 7,700 rows/s, which means about 600,000,000 rows a day. Probably the most important thing is to get the right settings on InnoDB:
https://dba.stackexchange.com/questions/83125/mysql-any-way-to-import-a-huge-32-gb-sql-dump-faster
innodb_buffer_pool_size = 4G
innodb_log_buffer_size = 256M
innodb_log_file_size = 1G
innodb_write_io_threads = 16
innodb_flush_log_at_trx_commit = 0
Those settings work well. From what I read and tried, InnoDB loves high memory settings. Ideally, one would use a 16GB or even 32GB machine and increase these settings even more. But I got 7,700 rows/s on a modest setup that's almost 10 years old:
Intel Q6700 quad
8 Gb DDR2 memory
I combined that 10 year old hardware with a 2017 model 500GB SSD, which is dedicated to the job and handles both the reading and the writing. The reason for using the older hardware is that the SSD is the most important part of the setup (because of IOPS), plus by using older hardware I saved a bit of money. However, the hardware is limited to 8GB of DDR2. A newer dedicated machine with 32GB or 64GB of internal memory could really fly, I reckon.
Software setup:
Linux Mint 64bit
MySQL Server 5.7.18 for Ubuntu
MySQL Workbench for importing
I also tried this on Windows 10 and the speed is almost the same on both. So you could try Windows too.
Note: I did try changing the engine to MyISAM. MyISAM can be pretty fast, also around 8000 rows/sec or more. But the import always became corrupted for some reason. So I would stick to InnoDB
Update 17-06-2017:
Finished the import. The table "pagelinks" is about 214GB with 1,200 million rows. About 112GB is raw data, 102GB is indexes. The original uncompressed file was about 37GB.
It took about 2 days and 6 hours to import. Avg speed = 5350 rows/second. With high end equipment (huge memory, preferably 64Gb or more) and optimal settings, it can probably be done faster. But I let it run on a dedicated machine 24/7 and I wasn't in a hurry, so 2 days seems OK.
Update 18-06-2017:
Also imported "page.sql" because this contains the names connected to the ID's. The uncompressed file is about 5Gb, import took 1 hour. Which seems quick: the pagelink file is about 37Gb which is 7x bigger than "page.sql". Yet takes 50x longer to import. So, there are a few reasons why "pagelinks" takes so long: (A) probably because it doesn't fit in memory (B) The table structure, many data per insert (C) Settings. But most likely it's memory.
Conclusion: try to get a PC with 32GB or 64GB of internal memory, maybe even more. And use SSDs that can keep up with that memory, 500GB or more. The SSD is more important than the memory, so try that first.
@Sim Betren:
I want to open a completely new answer, since I have discovered a new solution. Splitting up the file is probably the best answer. As discussed in the other answer, InnoDB works best when the entire model fits in memory. The delays start when it needs to swap stuff on disk. The pagelinks file is 37GB, and that's simply too big for most machines to fit easily into memory. Maybe a $1000+ dedicated machine with endless memory can do it; most desktops can't. So what you can do:
The plan is to split the file. First thing to do, is to separate the SQL structure from the data.
There's probably better ways to do that, but a program I found was this:
SqlDumpSplitter2
That dump splitter program may be old, but it worked on pagelinks. It is Windows-only though. I simply told it to split the unpacked 37GB file into 37 chunks of 1GB and it dutifully did. I checked the data and it seems to be working. You could also use 74 chunks of 500MB.
The import of each file takes maybe 10 to 20 minutes per 1Gb.
Total time: about 1 to 2 hours for splitting the 37GB file, and about 6 to 12 hours for importing. That easily beats the previous answer I gave.
When importing, use the same big-data settings as in the previous answer. And try to find a machine that has lots of memory, 16GB or 32GB preferred.
What is most important here is: it doesn't really matter that much how you split it. Just split the file any way you can. Then build it up by re-creating the structure and the data separately. This way the import could be down from 2 days to maybe as little as a few hours. Given a big dedicated machine it can probably be done in just 1 to 6 hours.
37GB of data --> 79GB of InnoDB table seems reasonable...
Title: 2 quotes and 1 comma --> 1 byte for length
Ints: several bytes, plus comma --> 4 bytes for INT (regardless of the (...) after INT; see MEDIUMINT).
20-30 bytes overhead per row
20-40% overhead for BTrees.
UNIQUE index becomes PRIMARY KEY and clusters with data --> very little overhead.
The other two indexes: each is virtually the same size as the data. This more than accounts for the increased size.
Adding it all together, I would expect the table to be more than 120GB. So, there are probably some details missing. A guess: The dump is one row per INSERT, instead of the less verbose many-rows-per-INSERT.
As for performance, it all depends on the SELECTs. Set innodb_buffer_pool_size to somewhere around 11G. This may work efficiently enough for caching the 79G.
More
Change UNIQUE to PRIMARY, for clarity and because InnoDB really needs a PK.
Check the source data. Is it in (pl_from,pl_namespace,pl_title) order? If not, can you sort the file before loading? If you can, that, alone, should significantly help the speed.
128MB for the buffer_pool is also significantly hampering progress.
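One more thing that usually helps with a dump this size is to relax the per-statement overhead for the duration of the load. This is the standard InnoDB bulk-loading advice, sketched here for the mysql command-line client; only use unique_checks=0 if you trust the dump to contain no duplicate keys:

SET autocommit = 0;
SET unique_checks = 0;
SET foreign_key_checks = 0;
SOURCE enwiki-latest-pagelinks.sql;
COMMIT;
SET unique_checks = 1;
SET foreign_key_checks = 1;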

difference between opened files and open files in mysql

In the status below, the Opened_files count is 95349.
This value is increasing rapidly.
mysql> show global status like 'open_%';
Open_files = 721
Open_streams = 0
Open_table_definitions = 706
Open_tables = 741
Opened_files = 95349
Opened_table_definitions = 701
Opened_tables = 2851
Also see this:
mysql>show variables like '%open%';
have_openssl = DISABLED
innodb_open_files = 300
open_files_limit = 8502
table_open_cache = 4096
and
max_connection = 300
Is there any relation between open files and opened files? Will there be any performance issues because of the increasing Opened_files value? This is a server with 8 GB RAM and a 500 GB hard disk, with an Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz processor. It is a dedicated MySQL server.
For the command
ulimit -n
the count was 1024.
The server hangs often. Using some online tools I have already optimised some parameters. I need to know what else should be optimized. In what case will the opened files count reduce? Is it necessary that the opened files count stays within some limit? If so, how do I find the appropriate limit for my server? If I am not clear somewhere, please ask me more questions.
Opened_files is a counter of how many times you have opened a table since the last time you restarted mysqld (see status variable Uptime for the number of seconds since last restart).
Open_files is not a counter; it's the current number of open files.
If your Opened_files counter is increasing rapidly, you may be able to improve performance by increasing the size of the table_open_cache.
For some tips on the performance implications of this variable (and some cautions about setting it too high), see:
http://www.mysqlperformanceblog.com/2009/11/16/table_cache-negative-scalability/ (the problem described there seems to be solved finally in MySQL 5.6)
Re your comments:
You misunderstand the purpose of the counter. It always increases. It counts the number of times a particular operation has occurred since the last restart of mysqld. In this case, opening a file for a table.
Having a high value in a counter isn't necessarily a problem. It could mean simply that your mysqld has been running for many days or weeks without a restart. So you have to look at that number compared to your Uptime (that is, MySQL status variable Uptime, not Linux uptime).
What is more meaningful is the rate of increase of a counter, that is how fast does it grow in a given interval of time. That could indicate that you are re-opening tables rapidly.
Normally, MySQL shouldn't have to re-open tables, because it retains an open table handle for each table. But it can only have a finite number of those. That's what table_open_cache is for. In your case, your MySQL instance can "remember" that it has already opened up to 4096 tables at a time. If you need another table opened, it closes one of the file descriptors and opens the table you requested.
So if you have many thousands of tables (or partitions of tables) and you access a wide variety of them rapidly, you could see a lot of turnover in that table open cache. That would be indicated by the counter Opened_tables increasing rapidly.
Therefore sizing the table_open_cache higher means that MySQL can retain more open table handles, and possibly decrease the rate of turnover.
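To see whether that turnover is actually happening, compare the counters against uptime; table_open_cache is dynamic on recent MySQL versions, so a larger value can be tried without a restart (8192 below is just an example figure):

SHOW GLOBAL STATUS LIKE 'Uptime';
SHOW GLOBAL STATUS LIKE 'Opened_tables';
-- Opened_tables / Uptime = table re-opens per second since the last restart
SET GLOBAL table_open_cache = 8192;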
So the solution is either to upgrade my hardware (especially RAM) so that I can increase the table_open_cache beyond 4096, or to optimize the queries.

Improving MySQL I/O Performance (Hardware & Partitioning)

I need to improve I/O performance for my database. I'm using the "2xlarge" HW described below & considering upgrading to the "4xlarge" HW (http://aws.amazon.com/ec2/instance-types/). Thanks for the help!
Details:
CPU usage is fine (usually under 30%), and uptime load averages are anywhere from 0.5 to 2.0 (but I believe I'm supposed to divide that by the number of CPUs), so that looks okay as well. However, the I/O is bad: iostat shows favorable service times, but the time spent in queue (I suppose this means waiting to access the disk) is far too high. I've configured MySQL to flush to disk every 1 sec instead of every write, which helps, but not enough. Profiling shows there are a handful of tables that are the culprits for most of the load (both read and write operations). Queries are already indexed and optimized, but not partitioned. Average MySQL states are: Sending data @ 45%, statistics @ 20%, Updating @ 15%, Sorting result @ 8%.
Questions:
How much performance will I get by upgrading HW?
Same question, but if I partition the high-load tables?
Machines:
m2.2xlarge
64-bit
4 vCPU
13 ECU
34.2 Gb Mem
EBS-Optimized
Network Performance: "Moderate"
m2.4xlarge
64-bit
6 vCPU
26 ECU
68.4 Gb Mem
EBS-Optimized
Network Performance: "High"
In my experience, the biggest boost in MySQL performance comes from I/O. You have a lot of RAM. Try setting up a RAM drive and pointing tmpdir to it.
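Before and after doing that, it is worth checking how much temp-table traffic actually goes to disk (standard status counters; a high ratio of the first to the second suggests the tmpdir change will pay off):

SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';
SHOW GLOBAL STATUS LIKE 'Created_tmp_tables';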
I have several MySQL servers that are very busy. My settings are below - maybe this can help you tweak your settings.
My Setup is:
- Dual 2.66GHz CPU, 8-core box with a 6-drive RAID-1E array (1.3TB).
- InnoDB logs on separate SSD drives.
- tmpdir is on a 2GB tmpfs partition.
- 32GB of RAM
InnoDB settings:
innodb_thread_concurrency=16
innodb_buffer_pool_size = 22G
innodb_additional_mem_pool_size = 20M
innodb_log_file_size = 400M
innodb_log_files_in_group=8
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 2 (This is a slave machine - 1 is not required for my purposes)
innodb_flush_method=O_DIRECT
Current Queries per second avg: 5185.650
I am using Percona Server, which is quite a bit faster than other MySQL builds in my testing.

Is tuning the innodb_buffer_pool_size important on Solaris ZFS?

We're running a moderate size (350GB) database with some fairly large tables (a few hundred million rows, 50GB) on a reasonably large server (2 x quad-core Xeons, 24GB RAM, 2.5" 10k disks in RAID10), and are getting some pretty slow inserts (e.g. simple insert of a single row taking 90 seconds!).
Our innodb_buffer_pool_size is set to 400MB, which would normally be way too low for this kind of setup. However, our hosting provider advises that this is irrelevant when running on ZFS. Is he right?
(Apologies for the double post on https://dba.stackexchange.com/questions/1975/is-tuning-the-innodb-buffer-pool-size-important-on-solaris-zfs, but I'm not sure how big the audience is over there!)
Your hosting provider is incorrect. There are various things you should tune differently when running MySQL on ZFS, but reducing the innodb_buffer_pool_size is not one of them. I wrote an article on the subject of running MySQL on ZFS and gave a lecture on it a while back. Specifically regarding innodb_buffer_pool_size, what you should do is set it to whatever would be reasonable on any other file system, and because O_DIRECT doesn't mean "don't cache" on ZFS, you should set primarycache=metadata on your ZFS file system containing your datadir. There are other optimisations to be made, which you can find in the article and the lecture slides.
I would still set the innodb_buffer_pool_size much higher than 400M. The reason? The InnoDB buffer pool will still cache the data and index pages you need for frequently accessed tables.
Run this query to get the recommended innodb_buffer_pool_size in MB:
SELECT CONCAT(ROUND(KBS/POWER(1024,IF(pw<0,0,IF(pw>3,0,pw)))+0.49999),SUBSTR(' KMG',IF(pw<0,0,IF(pw>3,0,pw))+1,1)) recommended_innodb_buffer_pool_size FROM (SELECT SUM(data_length+index_length) KBS FROM information_schema.tables WHERE engine='InnoDB') A,(SELECT 2 pw) B;
Simply use either the result of this query or 80% of installed RAM (in your case 19660M), whichever is smaller.
I would also set innodb_log_file_size to 25% of the InnoDB buffer pool size. Unfortunately, the maximum value of innodb_log_file_size is 2047M (1M short of 2G). Thus, set innodb_log_file_size to 2047M, since 25% of my recommended innodb_buffer_pool_size setting would be 4915M.
Yet another recommendation is to relax ACID compliance. Use either 0 or 2 for innodb_flush_log_at_trx_commit (the default is 1, which supports ACID compliance). This will produce faster InnoDB writes AT THE RISK of losing up to 1 second's worth of transactions in the event of a crash.
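innodb_flush_log_at_trx_commit is dynamic, so you can try it without a restart (add it to my.cnf as well if you want it to survive one):

SET GLOBAL innodb_flush_log_at_trx_commit = 2;
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';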
It may be worth reading slow-mysql-inserts if you haven't already. Also see this link to the MySQL docs on the matter - especially with regard to using a transaction if you are doing multiple inserts into a large table.
More relevant is this MySQL article on the performance of InnoDB and ZFS, which specifically considers the buffer pool size.
The headline conclusion is;
With InnoDB, the ZFS performance curve suggests a new strategy of "set the buffer pool size low, and let ZFS handle the data buffering."
You may wish to add some more detail such as the number / complexity of the indexes on the table - this can obviously make a big difference.
Apologies for this being rather generic advice rather than from personal experience; I haven't run ZFS in anger, but I hope some of those links might be of use.