Very big data in a MySQL table; even SELECT statements take a long time

I am working on a database, and it's a pretty big one with 1.3 billion rows and around 35 columns. Here is what I get after checking the status of the table:
Name:Table Name
Engine:InnoDB
Version:10
Row_format:Compact
Rows:12853961
Avg_row_length:572
Data_length:7353663488
Max_data_length:0
Index_length:5877268480
Data_free:0
Auto_increment:12933138
Create_time:41271.0312615741
Update_time:NULL
Check_time:NULL
Collation:utf8_general_ci
Checksum:NULL
Create_options:
Comment:InnoDB free: 11489280 kB
The problem I am facing is that even a single SELECT query takes too much time to process; for example, Select * from Table_Name limit 0,50000 takes around 2.48 minutes.
Is that expected?
I have to make a report in which I have to use the whole historical data, that is, all 1.3 billion rows. I could do this batch by batch, but then I would have to run these already-slow queries over and over again.
With a simple query taking this long, I am not able to run any of the more complex queries that need joins and CASE statements.

A common practice, if you have a huge amount of data, is that you ...
should not SELECT *: select only the columns you actually need;
should limit your fetch range to a smaller number: I bet you won't handle 50,000 records at the same time. Try to fetch them batch by batch (see the sketch below).
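As a rough illustration of both points (col_a, col_b and the id ranges are placeholders, assuming the table's auto-increment column is called id and is usable as a batch key):

-- Fetch only the columns the report needs, in batches keyed on the primary key.
SELECT col_a, col_b
FROM Table_Name
WHERE id BETWEEN 1 AND 10000;
-- next batch: WHERE id BETWEEN 10001 AND 20000, and so on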

A common problem many database administrators face. The solution: caching.
Break the queries into simpler, smaller queries. Use Memcached or another caching tool. Memcached stores key-value pairs: check for the data in the cache; if it is available, use it. If not, fetch it from the database, use it, and cache it. The next time, the data will be available from the cache.
You will have to develop your own logic and change some queries. Memcached is available here:
http://memcached.org/
Many tutorials are available on the Web.

Enable the slow query log in your my.cnf (queries slower than N seconds), then execute some queries and watch this log. This gives you some clues, and maybe you could add some indexes to this table.
Or run some queries with EXPLAIN: http://hackmysql.com/case1
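For example (the threshold and log path are just examples; the same settings can live in my.cnf as slow_query_log and long_query_time):

SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 2;          -- log anything slower than 2 seconds
SET GLOBAL slow_query_log_file = '/var/log/mysql/mysql-slow.log';

-- then check how a suspicious query is executed:
EXPLAIN SELECT * FROM Table_Name LIMIT 0, 50000;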

A quick note that is usually an easy win ...
If you have any columns that are large text blobs, try selecting everything except for those fields. I've seen varchar(max) fields absolutely kill query efficiency.

You have a very wide average row size and 35 columns. You could try vertically partitioning the table, that is, splitting it into smaller tables that are related to each other 1:1, each holding a subset of the columns. InnoDB stores rows in pages and is not efficient for very wide rows.
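A rough sketch of what that could look like; the table and column names below are invented purely for illustration:

-- "Hot" table with the handful of columns most queries touch ...
CREATE TABLE readings_hot (
  id          BIGINT UNSIGNED NOT NULL PRIMARY KEY,
  recorded_at DATETIME NOT NULL,
  metric_a    INT NOT NULL,
  metric_b    INT NOT NULL
) ENGINE=InnoDB;

-- ... and a 1:1 "cold" table, sharing the same id, for the wide or rarely used columns.
CREATE TABLE readings_cold (
  id        BIGINT UNSIGNED NOT NULL PRIMARY KEY,
  long_note TEXT,
  raw_blob  BLOB
) ENGINE=InnoDB;

-- Queries that only need the narrow columns never pull the wide pages off disk:
SELECT recorded_at, metric_a FROM readings_hot WHERE id BETWEEN 1 AND 10000;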
If the data is append-only consider looking at ICE.
You might also look at TokuDB because it supports good compression.
You can consider using partitioning and Shard-Query (http://code.google.com/p/shard-query) to access data in parallel. You can also split data over more than one server for parallelism using Shard-Query.

Try adding a WHERE clause: WHERE 1=1.
If it doesn't have any effect, then you should change your engine type to MyISAM.

Related

MySQL: what if there is too much data in a table?

Data is increasing in one table every day, which might lower performance. I was thinking I could create a trigger which moves table A into A1 and creates a new table A periodically, so that inserts or updates to table A would stay fast. Is this the right way to preserve performance? If not, what should I do?
(For example, if I insert or update 1000 rows per second in table A, what will the performance be after 3 years?)
We are designing software for a factory. There are product lines on which PCB boards are made. We need to insert almost 60 PCB records per second, for years. (1000 rows was an exaggeration.)
First, you are talking about several terabytes for a single table. Is your disk that big? Yes, MySQL can handle that big a table.
Will it slow down? It depends on
The indexes. If you have 'random' indexes, the INSERTs will slow down to about 1 insert per disk hit. On a spinning HDD, that is only about 100 per second. SSD might be able to handle 1000/sec. Please provide SHOW CREATE TABLE.
Does the table have an AUTO_INCREMENT? If so, it needs to be BIGINT, not INT. But, if possible, get rid of it altogether (to save space). Again, let's see the SHOW.
"Point" queries (load one row via an index) are mostly unaffected by the size of the table. They will be about twice as slow in a trillion-row table as in a million-row table. A point query will take milliseconds or tens of milliseconds; no big deal.
A table scan will take hours or days; hopefully you are not doing that.
A billion-row scan of part of the table will take days or weeks unless you are using the PRIMARY KEY or have a "covering" index. Let's see the queries and the SHOW.
The best technique is not to store the data. Summarize it as it arrives, save the summaries, then toss the raw data. (OK, you might store the raw in a csv file just in case you need to build a new summary table or fix a bug in an existing one.)
Having a few summary tables instead of the raw data would shrink the data to under 1TB and allow the relevant queries to run 10 times as fast. (OK, point queries would be only slightly faster.)
PARTITIONing (or otherwise splitting up the table)? It depends. Let's see the queries and the SHOW. In many situations, PARTITIONing does not speed up anything.
Will you be deleting or modifying existing rows? I hope not. That adds more dimensions of problems. If, on the other hand, you need to purge 'old' data, then that is an excellent use for PARTITIONing. For 3 years' worth of data, I would PARTITION BY RANGE(TO_DAYS(..)) and have monthly partitions. Then a monthly DROP PARTITION would be very fast.
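A hedged sketch of that monthly RANGE partitioning; the table, columns and partition boundaries below are invented for illustration:

CREATE TABLE pcb_log (
  id         BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  created_at DATETIME NOT NULL,
  payload    VARCHAR(255),
  PRIMARY KEY (id, created_at)            -- the partition column must appear in every unique key
) ENGINE=InnoDB
PARTITION BY RANGE (TO_DAYS(created_at)) (
  PARTITION p2023_01 VALUES LESS THAN (TO_DAYS('2023-02-01')),
  PARTITION p2023_02 VALUES LESS THAN (TO_DAYS('2023-03-01')),
  PARTITION p_future VALUES LESS THAN MAXVALUE
);

-- Purging a month of old data is then near-instant:
ALTER TABLE pcb_log DROP PARTITION p2023_01;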
Very large amounts of data can decrease server performance, so here is one way to handle it:
1) Create another table to store archived (old) data using the ARCHIVE storage engine (https://dev.mysql.com/doc/refman/8.0/en/archive-storage-engine.html).
2) Create a MySQL event/scheduler job to move older records to the archive table, scheduled in a time slot when the server is mostly idle.
3) After moving the older records to the archive table, re-index the original table.
This will serve the purpose of performance; a rough sketch follows.
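A sketch under invented names (main_table with a created_at column); the event requires event_scheduler=ON and, in the mysql client, a DELIMITER change around the BEGIN ... END body:

CREATE TABLE main_table_archive (
  id         BIGINT UNSIGNED NOT NULL,
  created_at DATETIME NOT NULL,
  payload    VARCHAR(255)
) ENGINE=ARCHIVE;                          -- ARCHIVE: compressed, insert/select only

CREATE EVENT archive_old_rows
ON SCHEDULE EVERY 1 DAY STARTS '2024-01-01 03:00:00'   -- pick an idle time slot
DO
BEGIN
  DECLARE cutoff DATETIME DEFAULT NOW() - INTERVAL 90 DAY;
  INSERT INTO main_table_archive
    SELECT id, created_at, payload
    FROM main_table
    WHERE created_at < cutoff;
  DELETE FROM main_table
    WHERE created_at < cutoff;
  -- OPTIMIZE TABLE main_table;   -- optionally rebuild the original table afterwards
END;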
It is unlikely that a table receiving 1000 rows per second performs sufficiently poorly that doing a table copy every once in a while is an overall net gain. And anyway, what would the new table have that the old one did not which would improve performance?
The key to having tables perform efficiently is intelligent table design and management of indexes. That is how zillion row tables are effective in geospatial work, library catalogs, astronomy, and how internet search engines find useful data, etc.
Each index defined adds overhead in MySQL, especially at row-insert time. Assuming there are more reads than inserts, this is an advantage, because most queries are rapidly completed thanks to a suitable index.
Indexes are best defined with a thorough understanding of the queries made against the table—both in quality and quantity. And, if there is any tendency for the nature of the queries to trend over months or years, then the indexes would need additions, modifications, or—yes—even deletions.
It seems to me there is something inherently wrong with the way you are using MySQL to begin with.
A database system is supposed to manage the data that your application requires in order to work. If you think flushing the table every so often is acceptable, then that doesn't seem to be the case here.
Perhaps you are better off just using log files. Split them by date, delete old ones if and when you decide they are no longer relevant or need the disk space. It's even safer to do that way from a recovery perspective.
If you need a better suggestion, then improve your question to include exactly what you are trying to accomplish so we can help you with it.

MySQL Large Datasets

I have large sets of data, over 40 GB, that I loaded into a MySQL table. I am trying to perform simple queries like select * from tablename, but it takes a gazillion minutes to run and eventually times out. If I set a limit, the execution is fairly fast, e.g. select * from tablename limit 1000.
The table has over 200 million records.
Tried creating indexes on some columns and that failed too after 3 hours of execution.
Any tips on working with these types of datasets?
The first thing you need to do is completely ignore all answers and comments advising some other, awesome, mumbo-jumbo technology. It's absolute bullshit. Those things can't work any differently, because they are all constrained by the same problem: hardware.
Now, let's get back to MySQL. The problem with LIMIT shows up as soon as you page deeper into the table: with LIMIT offset, count, MySQL has to generate and discard every row up to the offset before it returns the rows you asked for, and with an ORDER BY that can't use an index it may have to sort the whole result first. On a 200-million-row table, that means reading and throwing away more and more rows on every page.
Yes, it takes time, and yes, it looks dumb. But MySQL doesn't know where your "page" starts or ends, so it can't skip ahead until you tell it how, via an indexed condition.
To improve your search, you can use something like this (assuming you have a numeric primary key):
SELECT * FROM tablename WHERE id < 10000 LIMIT 1000;
In this case, instead of 200 million rows, MySQL will only work with the rows whose PK is below 10,000. Much easier, much quicker, and also readable. The numbers can be tweaked at any point, and if you do pagination of some sort in a scripting language, you can always pass along the last numeric id that was seen, so MySQL can start from that id onwards in its search.
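For example, a paging loop driven by the last id seen (:last_seen_id is a placeholder bound by your application):

SELECT *
FROM tablename
WHERE id > :last_seen_id      -- 0 on the first pass, then the largest id from the previous batch
ORDER BY id
LIMIT 1000;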
Also, you should be using the InnoDB engine, and tune it via innodb_buffer_pool_size, which is the magic sauce that makes MySQL fly.
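As a point of reference (the 8 GB figure is only an example, and before MySQL 5.7.5 the variable is not dynamic, so it has to be set in my.cnf and the server restarted):

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';              -- current value, in bytes
SET GLOBAL innodb_buffer_pool_size = 8 * 1024 * 1024 * 1024;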
For large databases, one should consider using alternative solutions such as Apache Spark. MySQL reads the data from disk, which is a slow operation. Nothing works as fast as a technology based on MapReduce. Take a look at this answer. It is true that with large databases, queries get very challenging.
Anyway, assuming you want to stick with MySQL: first of all, if you are using MyISAM, make sure to convert your tables to InnoDB. This is especially important if you have lots of read/write operations.
It is also important to partition, which breaks the table into more manageable, smaller pieces. It will also improve index performance.
Do not be too generous when adding indexes; define them wisely. If an index does not need to be UNIQUE, do not define it as one. If an index does not need to include multiple fields, do not include multiple fields.
Most importantly, start monitoring your MySQL instance. Use SHOW ENGINE INNODB STATUS to investigate the performance of your MySQL instance.
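A couple of starting points for that monitoring (the \G terminator in the mysql client just makes the verbose output readable):

SHOW ENGINE INNODB STATUS\G
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';   -- reads served from disk vs. the buffer pool
SHOW GLOBAL STATUS LIKE 'Slow_queries';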

Benefit of duplicating a MySQL table to spread the load

Case 1: I have a table A with 1 insert per second.
From my admin interface I need to do some heavy reads and deletes on this table to perform statistics and maintenance.
Does it make sense to insert incoming data into 2 different tables, A and B, and use table B for my administration? The goal is to not overload table A.
Case 2:
Another example, to fully understand the logic: I have a table (tmpA) dedicated to holding search results. Each time there is a search, the result is inserted into this table and helps with pagination. At night, old results are deleted.
Currently I have 5 requests per second on this table, so approximately 500 rows * 5 = 2500 rows per second.
Does it make sense to create more tables (tmpA, tmpB, tmpC, etc.) to spread the inserts and avoid overload?
For case 1, if it makes sense to duplicate, what is the difference between "manually" inserting incoming data into 2 (or more) different tables and using MySQL replication?
Thanks to you,
jess
This is kinda difficult to answer, as it depends on your setup hardware-wise.
One insert per second isn't that much. A properly set up server should be able to handle it.
Reads on a table are non-blocking, so gathering info to do statistics (assuming you don't do the calculations for the statistics in the database) shouldn't influence the performance of your database.
Deletes, on the other hand, are blocking, and will add to the load on a table with heavy inserts.
For Case 1, I do not understand how you would want to split the load across different tables. Generally speaking, there is database-server load, not a specific per-table load (unless we count blocking processes as table load).
I gather from the comments that Case 1 is about user signups/registration. Splitting user information over two tables is horrid from a maintenance perspective, and the coupling of the two tables that inevitably needs to happen only increases overhead (load) instead of decreasing it. Deleting data (users?) is also a major issue if the data is divided over two tables. Can you explain how you would administer your data if it were divided over two tables? I'm probably missing something.
Looking at the above, I do not recommend splitting this data between tables.
What I do recommend is:
Use InnoDB as the table type. It uses row-level locking, whereas MyISAM locks the whole table.
Optimize your RAM/memory settings for MySQL. Proper memory settings allow for very quick reads and writes.
Optimize your indexes. The EXPLAIN statement can show which ones are used for each query.
Case 2
I don't fully understand the use case, but it might make sense to split this data up into several tables. Depending on why you want to push the data into these temp tables, splitting might happen per user, keyword, or other significant feature.
Depending on the use case, try limiting the search results (and thus paginating) with LIMIT clauses. That way you don't need to store the results for pagination, or store the results at all. Can you explain why you want to store these results? 2500 rows/sec is a lot.
Replication is a whole other topic, much more complicated, and it is not achieved by copying tables but by copying servers. I can't help you with that; I've never done it, as I never needed it. (My largest MySQL server was approx. 80 GB, 350 million rows, with inserts peaking at 224 rows per second.)
Can you paste the structure of the tables you currently use, and some sample data? That might make the cases a tad clearer.

Sorting 1 billion rows by one varchar column in MySQL quickly

I have 1 billion rows stored in MySQL, and I need to output them alphabetically by a varchar column. What's the most efficient way to go about it? Using other Linux utilities like sort and awk is allowed.
MySQL can deal with a billion rows. Efficiency depends on 3 main factors: Buffers, Indexes and Joins.
Some suggestions:
Try to fit the data set you're working with in memory
Processing in memory is much faster, and it solves a whole bunch of problems by itself. Use multiple servers to host portions of the data set, store the portion of data you're going to work with in a temporary table, etc.
Prefer full table scans to index accesses
For large data sets, full table scans are often faster than range scans and other types of index lookups. Even if you look at 1% of rows or less, a full table scan may be faster.
Avoid joins to large tables
Joining large data sets using nested loops is very expensive. Try to avoid it. Joins to smaller tables are OK, but you might want to preload them into memory before the join so there is no random IO needed to populate the caches.
Be aware of MySQL limitations which require you to be extra careful when working with large data sets. In MySQL, a query runs as a single thread (with the exception of MySQL Cluster) and MySQL issues IO requests one by one during query execution, which means that if single-query execution time is your concern, many hard drives and a large number of CPUs will not help.
Sometimes it is a good idea to manually split a query into several, run them in parallel, and aggregate the result sets.
You did not give much info on your setup or your dataset, but this should give you a couple of clues on what to watch out for. In my opinion having the (properly tuned) database sort this for you would be faster than doing it programmatically unless you have very specific needs not mentioned in your post.
Have you just tried indexing the column and dumping them out? I'd try that first to see if the performance was inadequate before going exotic.
It depends on how you define efficient: CPU, memory, IO, time, coding effort. What is important in this case?
"select * from big_table order by the_varchar_column" That is probably the most efficient use of developer resources. Adding an index might make it run a lot faster.

What are some optimization techniques for MySQL table with 300+ million records?

I am looking at storing some JMX data from JVMs on many servers for about 90 days. This data would be statistics like heap size and thread count. This will mean that one of the tables will have around 388 million records.
From this data I am building some graphs so you can compare the stats retrieved from the Mbeans. This means I will be grabbing some data at an interval using timestamps.
So the real question is: is there any way to optimize the table or the queries so that you can run them in a reasonable amount of time?
Thanks,
Josh
There are several things you can do:
Build your indexes to match the queries you are running. Run EXPLAIN to see the types of queries that are run and make sure that they all use an index where possible.
Partition your table. Partitioning is a technique for splitting a large table into several smaller ones by a specific (aggregate) key. MySQL supports this natively from version 5.1.
If necessary, build summary tables that cache the costlier parts of your queries. Then run your queries against the summary tables. Similarly, temporary in-memory tables can be used to store a simplified view of your table as a pre-processing stage.
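A hedged sketch of such a summary table; the raw table and column names (jmx_stats, collected_at, heap_used, thread_count) are only guesses based on the question:

CREATE TABLE jmx_stats_hourly (
  server_id   INT NOT NULL,
  hour_start  DATETIME NOT NULL,
  avg_heap    BIGINT NOT NULL,
  max_heap    BIGINT NOT NULL,
  avg_threads INT NOT NULL,
  PRIMARY KEY (server_id, hour_start)
) ENGINE=InnoDB;

-- Roll up recent raw samples to one row per server per hour; a real job would
-- handle re-runs (e.g. REPLACE or ON DUPLICATE KEY UPDATE).
INSERT INTO jmx_stats_hourly
SELECT server_id,
       DATE_FORMAT(collected_at, '%Y-%m-%d %H:00:00'),
       AVG(heap_used),
       MAX(heap_used),
       AVG(thread_count)
FROM jmx_stats
WHERE collected_at >= NOW() - INTERVAL 1 DAY
GROUP BY server_id, DATE_FORMAT(collected_at, '%Y-%m-%d %H:00:00');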
3 suggestions:
index
index
index
P.S. For timestamps you may run into performance issues; depending on how MySQL handles DATETIME and TIMESTAMP internally, it may be better to store timestamps as integers (seconds since 1970, or whatever).
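For instance (the table and columns below are made up), with UNIX_TIMESTAMP() and FROM_UNIXTIME() doing the conversion at the edges:

CREATE TABLE jvm_sample (
  id           BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  collected_at INT UNSIGNED NOT NULL,     -- seconds since 1970
  heap_used    BIGINT NOT NULL
) ENGINE=InnoDB;

INSERT INTO jvm_sample (collected_at, heap_used) VALUES (UNIX_TIMESTAMP(), 123456789);

-- convert back only when displaying:
SELECT FROM_UNIXTIME(collected_at) AS collected_at, heap_used
FROM jvm_sample
WHERE collected_at >= UNIX_TIMESTAMP('2024-01-01');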
Well, for a start, I would suggest you use "offline" processing to produce 'graph ready' data (for most of the common cases) rather than trying to query the raw data on demand.
If you are using MySQL 5.1, you can use its new features,
but be warned that they contain a lot of bugs.
First, you should use indexes.
If that is not enough, you can try to split the table up using partitioning.
If that also doesn't work, you can try load balancing.
A few suggestions.
You're probably going to run aggregate queries on this stuff, so after (or while) you load the data into your tables, you should pre-aggregate it, for instance pre-compute totals by hour, by user, or by week (whatever, you get the idea), and store that in cache tables that you use for your reporting graphs. If you can shrink your dataset by an order of magnitude, then good for you!
This means I will be grabbing some data at an interval using timestamps.
So this means you only use data from the last X days?
Deleting old data from tables can be horribly slow if you have a few tens of millions of rows to delete; partitioning is great for that (just drop the old partition). It also groups all records from the same time period close together on disk, so it's a lot more cache-efficient.
Now, if you use MySQL, I strongly suggest using MyISAM tables. You don't get crash-proofness or transactions, and locking is dumb, but the table is much smaller than with InnoDB, which means it can fit in RAM, which means much quicker access.
Since big aggregates can involve lots of rather sequential disk IO, a fast IO system like RAID10 (or SSD) is a plus.
Is there anyway to optimize the table or query so you can perform these queries
in a reasonable amount of time?
That depends on the table and the queries; I can't give any advice without knowing more.
If you need complicated reporting queries with big aggregates and joins, remember that MySQL does not support any fancy JOIN algorithms, hash aggregates, or anything else really useful; basically the only thing it can do is a nested-loop index scan, which is good on a cached table and absolutely atrocious in other cases if any random access is involved.
I suggest you test with Postgres. For big aggregates, the smarter optimizer does work well.
Example :
CREATE TABLE t (id INTEGER PRIMARY KEY AUTO_INCREMENT, category INT NOT NULL, counter INT NOT NULL) ENGINE=MyISAM;
INSERT INTO t (category, counter) SELECT n%10, n&255 FROM serie;
(serie contains 16M lines with n = 1 .. 16000000)
MySQL    Postgres
58 s     100 s      INSERT
75 s     51 s       CREATE INDEX on (category, id)  (useless)
9.3 s    5 s        SELECT category, sum(counter) FROM t GROUP BY category;
1.7 s    0.5 s      SELECT category, sum(counter) FROM t WHERE id > 15000000 GROUP BY category;
On a simple query like this pg is about 2-3x faster (the difference would be much larger if complex joins were involved).
EXPLAIN Your SELECT Queries
LIMIT 1 When Getting a Unique Row
SELECT * FROM user WHERE state = 'Alabama'            -- wrong: returns every matching row
SELECT 1 FROM user WHERE state = 'Alabama' LIMIT 1    -- right: stops after the first match
Index the Search Fields
Indexes are not just for the primary keys or the unique keys. If there are any columns in your table that you will search by, you should almost always index them.
Index and Use Same Column Types for Joins
If your application contains many JOIN queries, you need to make sure that the columns you join by are indexed on both tables. This affects how MySQL internally optimizes the join operation.
Do Not ORDER BY RAND()
If you really need random rows out of your results, there are much better ways of doing it. Granted, it takes additional code, but you will prevent a bottleneck that gets exponentially worse as your data grows. The problem is that MySQL has to perform the RAND() operation (which takes processing power) for every single row in the table before sorting them and giving you just 1 row.
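One of those better ways, sketched here on the assumption that the user table from the earlier snippet has a reasonably dense AUTO_INCREMENT id (gaps bias the pick slightly toward the rows that follow them):

SELECT u.*
FROM user AS u
JOIN (SELECT FLOOR(MIN(id) + RAND() * (MAX(id) - MIN(id))) AS rand_id FROM user) AS r
  ON u.id >= r.rand_id
ORDER BY u.id
LIMIT 1;
-- RAND() is evaluated once in the derived table, then a single indexed lookup
-- finds the next row at or after that random id.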
Use ENUM over VARCHAR
ENUM type columns are very fast and compact. Internally they are stored like TINYINT, yet they can contain and display string values.
Use NOT NULL If You Can
Unless you have a very specific reason to use a NULL value, you should always set your columns as NOT NULL.
"NULL columns require additional space in the row to record whether their values are NULL. For MyISAM tables, each NULL column takes one bit extra, rounded up to the nearest byte."
Store IP Addresses as UNSIGNED INT
In your queries you can use INET_ATON() to convert an IP to an integer, and INET_NTOA() for the reverse. There are also similar functions in PHP called ip2long() and long2ip().
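A small sketch with a made-up visits table (IPv4 only; INET_ATON()/INET_NTOA() do not handle IPv6):

CREATE TABLE visits (
  id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  ip INT UNSIGNED NOT NULL                 -- 4 bytes instead of a 15-character string
) ENGINE=InnoDB;

INSERT INTO visits (ip) VALUES (INET_ATON('192.168.0.1'));

SELECT id, INET_NTOA(ip) AS ip_address
FROM visits
WHERE ip = INET_ATON('192.168.0.1');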