MySQL: create index on a table with 100 million rows

I have a few MySQL tables - these have around 300 columns and 100 million rows each. They store data from log files, hence the size. I am using the InnoDB engine. A few queries involving joins of these tables obviously do not work well. I tried adding indexes to them, but the index creation never finishes.
I wanted to know if there is any other way to speed up performance, or some way to make the 'create index' work on the tables?
Thank you.

Creating an index takes time, proportional to the number of rows in the table. 100 million rows is quite a lot for a MySQL table. It will probably take many hours to create an index on that table. Exactly how long varies, based on other factors including your server hardware, the data type of the columns you are creating the index for, other current load on the database, etc.
One tool that can help you is pt-online-schema-change. It actually takes longer to build the index, but you can continue to read and write the original table while it's working. Test with a smaller table so you get some experience with using this tool.
You can view a webinar about this tool here: Zero-Downtime Schema Changes in MySQL (free to view, but requires registration).
Another technique is to create an empty table like your original, create the index on that empty table, and then copy data from the original table into the new one gradually. If this is a log table, you likely write to it much more than you read from it, so you can probably swap the tables immediately, start logging new events to the new table right away, and backfill the old data over time.
A tool like pt-archiver can help you copy data gradually without putting too much load on the server. Simply doing INSERT INTO ... SELECT is not good for your database server's health if you try to copy 100 million rows in one transaction, and it also takes locks on the original table. pt-archiver works by copying a bite-sized chunk of rows at a time, so it avoids the high cost of such a large transaction.
If you use an auto-increment primary key, take care to set the new table's auto-increment value higher than the max value in the original table before you let log events start writing to it, so you don't accidentally reuse id values.
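A rough sketch of that swap (the table, column, and index names here are made up for illustration):
-- empty copy, indexed while it has no rows
CREATE TABLE logs_new LIKE logs;
ALTER TABLE logs_new ADD INDEX idx_created_at (created_at);

-- bump the auto-increment past the old table's current maximum id
-- (use a literal value taken from SELECT MAX(id) FROM logs)
ALTER TABLE logs_new AUTO_INCREMENT = 100000001;

-- atomic swap: new log events go to the indexed table, backfill logs_old later
RENAME TABLE logs TO logs_old, logs_new TO logs;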

Use
create table newtable like oldtable;
Then apply the index to the newtable while it is empty.
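For example, assuming the column you need indexed is called col1:
alter table newtable add index idx_col1 (col1);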
Then
insert into newtable select * from oldtable;
This may also take a long time to finish.

The Bug
On MySQL tables using the MyISAM engine there are some problems creating a new secondary index.
A known issue with the MyISAM engine on some MySQL versions, such as 5.7.24 (shipped with WAMP, for instance), means that creating an index not only causes a table scan, as expected, but requires a full table rebuild. If you just drop an index, the table is also rebuilt :-(
Ref: https://bugs.mysql.com/bug.php?id=93530
Alternative
Sometimes you cannot upgrade MySQL, or cannot ask customers to do so just to run your solution. Changing the engine to InnoDB can lead to other problems if you don't need all the features InnoDB provides.
The Index Table
So there is an approach that consists of creating an "index table" manually, with the benefit that you can keep only the records you really need, as I explain below:
Imagine you have a table with 100M records of companies of the world, where about 30M are US companies and 10M are Canadian, plus companies from other countries.
Each company has a COUNTRY and a STATE field that you want to index, because you need to search US or Canadian companies by state.
So in MySQL, if you create an index on Country and State, all 100M records will be indexed, even those with NULL states.
To solve this you create an index-table and a real index, like this:
create table index_tb_companies (
company_id int unique,
company_country char(2), -- US/CA
company_state char(2) -- AL/AK/.../WI/WY
);
create index index_tb_companies_index
on index_tb_companies (company_country, company_state);
Fill the Index Table
Now you can import the original data into the index table with a simple INSERT INTO or REPLACE INTO using a filtered SELECT.
replace into index_tb_companies(
company_id, company_country, company_state)
(select
company_id, company_country, company_state
from original_company_table
where country in ('US', 'CA')
);
This will take a while, since you probably don't have an index on country yet and the SELECT needs a full table scan. But the final index table will be smaller than a regular MySQL index over the whole table, since only US/CA data will be in there.
How to Select
Now, the final part is to make use of the index table in your specific reports on US and CA companies; other countries are not covered by the index.
select o.*
from
original_company_table o INNER JOIN
index_tb_companies idx ON idx.company_id = o.company_id
where
idx.company_country = 'US'
and idx.company_state = 'NY'
This approach is particularly good when you want to index a tiny portion of your data on MySQL, so the index size is small.
Partial Index
Other databases, like PostgreSQL, have "partial indexes": you can create a regular index and pass a WHERE clause at creation time.
PG Partial Indexes: https://www.postgresql.org/docs/8.0/indexes-partial.html
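For comparison, a PostgreSQL partial index over the same example data would be a single statement (same table and column names as above):
create index idx_companies_us_ca_state
on original_company_table (company_country, company_state)
where company_country in ('US', 'CA');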
Like and share this solution if you learned something from it; I'm producing some material about databases and appreciate the feedback.

Related

Does it make sense to split a large table into smaller ones to reduce the number of rows (not columns)? [duplicate]

In a Rails app, I have a table whose data already has hundreds of millions of records. I'm going to split the table into multiple tables, hoping this can speed up reads and writes.
I found the gem octopus, but it is for master/slave setups; I just want to split the big table.
Or what else can I do when the table is too big?
Theoretically, a properly designed table with just the right indexes can handle very large row counts quite easily. As the table grows, the slowdown in queries and insertion of new records is supposed to be negligible. But in practice we find that it doesn't always work that way! However, the solution definitely isn't to split the table in two. The solution is to partition.
Partitioning takes this notion a step further, by enabling you to distribute portions of individual tables across a file system according to rules which you can set largely as needed. In effect, different portions of a table are stored as separate tables in different locations. The user-selected rule by which the division of data is accomplished is known as a partitioning function, which in MySQL can be the modulus, simple matching against a set of ranges or value lists, an internal hashing function, or a linear hashing function.
If you merely split a table, your code is going to become infinitely more complicated: each time you do an insert or a retrieval you need to figure out which split to run that query against. When you use partitions, MySQL takes care of that detail for you, and as far as the application is concerned it's still one table.
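A minimal sketch of what that looks like in MySQL (table, column, and partition names are invented for illustration):
CREATE TABLE big_log (
  id BIGINT NOT NULL AUTO_INCREMENT,
  created_at DATETIME NOT NULL,
  message VARCHAR(255),
  PRIMARY KEY (id, created_at)
)
PARTITION BY RANGE (YEAR(created_at)) (
  PARTITION p2012 VALUES LESS THAN (2013),
  PARTITION p2013 VALUES LESS THAN (2014),
  PARTITION pmax  VALUES LESS THAN MAXVALUE
);
Queries that filter on created_at only touch the relevant partitions, and the application still sees a single big_log table.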
Do you have an ID on each row? If the answer is yes, you could do something like:
CREATE TABLE table2 AS (SELECT * FROM table1 WHERE id >= (SELECT COUNT(*) FROM table1)/2);
The above statement creates a new table with roughly half of the records from table1 (assuming the id values are more or less contiguous, starting from 1).
I don't know if you've already tried it, but an index should help with speed on a big table.
CREATE INDEX index_name ON table1 (id)
Note: if you created the table using unique constraint or primary key, there's already an index.

Duplicate table fields vs indexing only

I have a huge and very busy table (a few thousand INSERTs per second). The table stores login logs; it has a BIGINT ID which is not generated by MySQL but rather by a pseudorandom generator on the MySQL client.
Simply put, the table has loginlog_id, client_id, tons,of,other,columns,with,details,about,session....
I have a few indexes on this table, such as PRIMARY KEY(loginlog_id) and INDEX(client_id).
In some other part of our system I need to fetch client_id based on loginlog_id. This does not happen that often (just a few hundred SELECT client_id FROM loginlogs WHERE loginlog_id=XXXXXX per second). The loginlogs table is read by various other scripts now and then, and different columns are needed each time. But the most frequent read is, for sure, the above-mentioned get client_id by loginlog_id.
My question is: should I create another table loginlogs_clientids and duplicate loginlog_id, client_id in there (this means another few thousand INSERTs, as every loginlogs INSERT gets this new companion one)? Or should I be happy with InnoDB handling my lookups by PRIMARY KEY efficiently?
We have tons of RAM (128 GB, most of which is used by MySQL). MySQL load is between 40% and 350% CPU (we have a 12-core CPU). When I tried using the new table, I did not see any difference. But I am asking for the future: if our usage grows even more, what is the suggested approach? Duplicate or index?
Thanks!
No.
Looking up table data for a single row using the primary key is extremely efficient, and will take the same time for both tables.
Exceptions to that might be very large row sizes (e.g. 8 KB+), or a client_id that is e.g. a VARCHAR stored off-page, in which case you might need to read an additional data block, which at least theoretically could cost you some milliseconds.
Even if this strategy would have an advantage, you would not actually do it by creating a new table, but by adding an index (loginlog_id, client_id) to your original table. InnoDB stores everything, including the actual data, in an index structure, so that adding an index is basically the same as adding a new table with the same columns, but without (you) having the problem of synchronizing those two "tables".
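If you ever did want to try it, that alternative is a one-liner (the index name is made up):
ALTER TABLE loginlogs ADD INDEX idx_loginlog_client (loginlog_id, client_id);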
Having a structure with a smaller row size can have some advantages for range scans, e.g. MySQL will evaluate select count(*) from tablename using the smallest index of the table, as it has to read fewer bytes. You already have such a small index (on client_id), so even in that regard, adding such an additional table/index shouldn't have an effect. If you have any range scan on the primary key (which is probably unlikely for pseudorandom data), you may want to consider it, or keep it in mind for when you do.

Make a view permanent

I have an 8.6M-row table with full text search, but it is still impossible to use. I could split this table in two, but I think there must be a better way to use it.
I tried to create a view with the TEMPTABLE algorithm, but it didn't create a physical table.
Table 1 - 8.6M rows
id name age
1 john 20
2 jean 25
View 1 - 200K rows - Only records where age = 25.
id name age
2 jean 25
In MySQL, views are not "materialized views." Every time you query the view, it's like querying the base table. Some other RDBMS products have materialized views, where the subset of the table is also stored, but MySQL does not have this feature.
You have misunderstood the temptable algorithm for views. That means it creates a temporary table every time you query the view. This is probably not going to improve performance.
I'm not sure why you say that it's impossible to use the table. Do you mean that queries are not fast enough? That's not really the biggest table that MySQL can handle. There are tables that have hundreds of millions of rows and are still usable.
You may need different indexes to serve your query. You might benefit from partitioning (although I don't think table partitioning is compatible with fulltext indexes). You might need server hardware with more RAM or CPU horsepower.
If fulltext searches are important, you could also consider copying searchable data to a fulltext search technology like Sphinx Search. See my presentation Full Text Search Throwdown.
If you want a view, then create a VIEW as described here: http://dev.mysql.com/doc/refman/5.0/en/create-view.html
CREATE VIEW ....
If you want a temporary table, then create a temporary table as described here: http://dev.mysql.com/doc/refman/5.1/en/create-table.html
CREATE TEMPORARY TABLE ...
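For the example in the question (only the rows where age = 25), and assuming the base table is called people, that would look something like:
CREATE VIEW view_age_25 AS
SELECT id, name, age FROM people WHERE age = 25;

CREATE TEMPORARY TABLE tmp_age_25 AS
SELECT id, name, age FROM people WHERE age = 25;
The temporary table is an actual physical copy (so it can be indexed), but it disappears when the connection closes.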
A view is permanent, but it is a view, not a table, meaning it only executes a SQL statement for you in the background each time you access it. Its purpose is not just to make your queries look prettier, but, for example, to act like a filter that hides certain information.
A view will be accessible all the time. A temporary table has to be created within the DB connection's lifetime in order to be used.
Hopefully this answer will help you to decide what you really need - What to use? View or temporary Table

MySQL temporary indexes for user-defined queries

I am building an analytics platform where users can create reports and such against a MySQL database. Some of the tables in this database are pretty huge (billions of rows), so for all of the features so far I have indexes built to speed up each query.
However, the next feature is to add the ability for a user to define their own query so that they can analyze data in ways that we haven't pre-defined. They have full read permission to the relevant database, so basically any SELECT query is a valid query for them to enter. This creates problems, however, if a query is defined that filters or joins on a column we haven't currently indexed - sometimes to the point of taking over a minute for a simple query to execute - something as basic as:
SELECT tbl1.a, tbl2.b, SUM(tbl3.c)
FROM
tbl1
JOIN tbl2 ON tbl1.id = tbl2.id
JOIN tbl3 ON tbl1.id = tbl3.id
WHERE
tbl1.d > 0
GROUP BY
tbl1.a, tbl1.b, tbl3.c, tbl1.d
Now, assume that we've only created indexes on columns not appearing in this query so far. Also, we don't want too many indexes slowing down inserts, updates, and deletes (otherwise the simple solution would be to build an index on every column accessible by the users).
My question is, what is the best way to handle this? Currently, I'm thinking that we should scan the query, build indexes on anything appearing in a WHERE or JOIN that isn't already indexed, execute the query, and then drop the indexes that were built afterwards. However, the main things I'm unsure about are a) is there already some best practice for this sort of use case that I don't know about? and b) would the overhead of building these indexes be enough that it would negate any performance gains the indexes provide?
If this strategy doesn't work, the next option I can see working is to collect statistics on what types of queries the users run, and have some regular job periodically check what commonly used columns are missing indexes and create them.
If you are using MyISAM, then performing an ALTER statement on tables with large row counts (billions of rows) in order to add an index will take a considerable amount of time, probably far longer than the one minute you've quoted for the statement above (and you'll need another ALTER to drop the index afterwards). During that time, the table will be locked, meaning other users can't execute their own queries.
If your tables use the InnoDB engine and you're running MySQL 5.1+, then CREATE / DROP index statements shouldn't lock the table, but it still may take some time to execute.
There's a good rundown of the history of ALTER TABLE behaviour available online.
I'd also suggest that automated query analysis to identify and build indexes would be quite difficult to get right. For example, what about cases such as selecting by foo.a but ordering by foo.b? This kind of query often needs a covering index over multiple columns, otherwise you may find your server attempts a filesort on a huge result set, which can cause big problems.
Giving your users an "explain query" option would be a good first step. If they know enough SQL to perform custom queries then they should be able to analyse EXPLAIN in order to best execute their query (or at least realise that a given query will take ages).
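For example, prefixing the query from the question with EXPLAIN:
EXPLAIN
SELECT tbl1.a, tbl2.b, SUM(tbl3.c)
FROM tbl1
JOIN tbl2 ON tbl1.id = tbl2.id
JOIN tbl3 ON tbl1.id = tbl3.id
WHERE tbl1.d > 0
GROUP BY tbl1.a, tbl1.b, tbl3.c, tbl1.d;
A row with type = ALL and key = NULL in the output means a full table scan with no usable index, which is usually the first thing to look for.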
So, going further with my idea, I propose you segment your data into well-identified views. You used abstract names so I can't reuse your business model, but I'll take a made-up example.
Say you have 3 tables:
customer (gender, social category, date of birth, ...)
invoice (date, amount, ...)
product (price, date of creation, ...)
You would create a sort of materialized view for specific segments. It's like adding a business layer on top of the raw data representation layer.
For example, we could identify the following segments:
seniors having at least 2 invoices
invoices of 2013 with more than 1 product
How to do that? And how to do that efficiently? Regular views won't help your problem because they will have poor explain plans on random queries. What we need is a real physical representation of these segments. We could do something like this:
CREATE TABLE MV_SENIORS_WITH_2_INVOICES AS
SELECT ... /* select from the existing tables */
;
/* add indexes: */
ALTER TABLE MV_SENIORS_WITH_2_INVOICES ADD CONSTRAINT...
... etc.
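As a more concrete sketch (the column names, the join key, and the "65 years old" cut-off are assumptions, not taken from the question):
CREATE TABLE MV_SENIORS_WITH_2_INVOICES AS
SELECT c.customer_id,
       c.date_of_birth,
       COUNT(i.invoice_id) AS invoice_count
FROM customer c
JOIN invoice i ON i.customer_id = c.customer_id
WHERE c.date_of_birth <= DATE_SUB(CURDATE(), INTERVAL 65 YEAR)
GROUP BY c.customer_id, c.date_of_birth
HAVING COUNT(i.invoice_id) >= 2;

ALTER TABLE MV_SENIORS_WITH_2_INVOICES ADD PRIMARY KEY (customer_id);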
So now, your users just have to query MV_SENIORS_WITH_2_INVOICES instead of the original tables. Since there are fewer records, and probably more targeted indexes, performance will be better.
We're done! Oh wait, no :-)
We need to refresh this data, a bit like a FAST REFRESH in Oracle. MySQL does not have (not that I know of... someone correct me?) a similar system, so we have to create some triggers for that.
CREATE TRIGGER ... AFTER INSERT ON `seniors`
FOR EACH ROW BEGIN
... /* insert the data into MV_SENIORS_WITH_2_INVOICES if it matches the segment */
END;
Now we're done!

What are some optimization techniques for MySQL table with 300+ million records?

I am looking at storing some JMX data from JVMs on many servers for about 90 days. This data would be statistics like heap size and thread count. This will mean that one of the tables will have around 388 million records.
From this data I am building some graphs so you can compare the stats retrieved from the Mbeans. This means I will be grabbing some data at an interval using timestamps.
So the real question is: is there any way to optimize the table or query so you can perform these queries in a reasonable amount of time?
Thanks,
Josh
There are several things you can do:
Build your indexes to match the queries you are running. Run EXPLAIN to see the types of queries that are run and make sure that they all use an index where possible.
Partition your table. Partitioning is a technique for splitting a large table into several smaller ones by a specific (aggregate) key. MySQL supports this natively from version 5.1.
If necessary, build summary tables that cache the costlier parts of your queries. Then run your queries against the summary tables. Similarly, temporary in-memory tables can be used to store a simplified view of your table as a pre-processing stage.
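For example, a summary table that pre-aggregates the raw samples per server and hour might look like this (table and column names are invented for illustration):
CREATE TABLE jmx_stats_hourly (
  server_id INT NOT NULL,
  stat_hour DATETIME NOT NULL,
  avg_heap_mb DOUBLE,
  max_thread_count INT,
  PRIMARY KEY (server_id, stat_hour)
);

-- run periodically, e.g. once per hour for the last day of raw data
INSERT INTO jmx_stats_hourly (server_id, stat_hour, avg_heap_mb, max_thread_count)
SELECT server_id,
       DATE_FORMAT(sampled_at, '%Y-%m-%d %H:00:00'),
       AVG(heap_mb),
       MAX(thread_count)
FROM jmx_stats_raw
WHERE sampled_at >= CURDATE() - INTERVAL 1 DAY
GROUP BY server_id, DATE_FORMAT(sampled_at, '%Y-%m-%d %H:00:00');
The reporting graphs then read from jmx_stats_hourly, which is orders of magnitude smaller than the raw table.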
3 suggestions:
index
index
index
p.s. for timestamps you may run into performance issues -- depending on how MySQL handles DATETIME and TIMESTAMP internally, it may be better to store timestamps as integers. (# secs since 1970 or whatever)
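If you do go the integer route, MySQL's UNIX_TIMESTAMP() and FROM_UNIXTIME() make the conversion easy (the table and column names here are just an illustration):
-- store the sample time as epoch seconds
INSERT INTO jvm_stats (sampled_at_epoch, heap_mb) VALUES (UNIX_TIMESTAMP(NOW()), 512);

-- convert back for display, and range-scan on plain integers
SELECT FROM_UNIXTIME(sampled_at_epoch), heap_mb
FROM jvm_stats
WHERE sampled_at_epoch BETWEEN UNIX_TIMESTAMP('2013-01-01') AND UNIX_TIMESTAMP('2013-01-31');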
Well, for a start, I would suggest you use "offline" processing to produce 'graph ready' data (for most of the common cases) rather than trying to query the raw data on demand.
If you are using MySQL 5.1 you can use the new features, but be warned that they contain a lot of bugs.
First, you should use indexes.
If this is not enough, you can try to split the tables using partitioning.
If that also doesn't work, you can also try load balancing.
A few suggestions.
You're probably going to run aggregate queries on this stuff, so after (or while) you load the data into your tables, you should pre-aggregate it: for instance, pre-compute totals by hour, by user, or by week - whatever, you get the idea - and store that in cache tables that you use for your reporting graphs. If you can shrink your dataset by an order of magnitude, good for you!
This means I will be grabbing some data at an interval using timestamps.
So this means you only use data from the last X days ?
Deleting old data from tables can be horribly slow if you've got a few tens of millions of rows to delete; partitioning is great for that (just drop the old partition). It also groups all records from the same time period close together on disk, so it's a lot more cache-efficient.
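For example, with a table partitioned by month, purging a month of old data is a near-instant metadata operation instead of a massive DELETE (the table and partition names are assumed):
ALTER TABLE stats DROP PARTITION p2012_06;
-- versus the slow equivalent:
-- DELETE FROM stats WHERE sampled_at < '2012-07-01';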
Now if you use MySQL, I strongly suggest using MyISAM tables. You don't get crash-proofness or transactions and locking is dumb, but the size of the table is much smaller than InnoDB, which means it can fit in RAM, which means much quicker access.
Since big aggregates can involve lots of rather sequential disk IO, a fast IO system like RAID10 (or SSD) is a plus.
Is there anyway to optimize the table or query so you can perform these queries
in a reasonable amount of time?
That depends on the table and the queries ; can't give any advice without knowing more.
If you need complicated reporting queries with big aggregates and joins, remember that MySQL does not support any fancy JOINs, or hash aggregates, or anything else particularly useful; basically the only thing it can do is a nested-loop index scan, which is good on a cached table and absolutely atrocious in other cases if random access is involved.
I suggest you test with Postgres. For big aggregates the smarter optimizer does work well.
Example :
CREATE TABLE t (id INTEGER PRIMARY KEY AUTO_INCREMENT, category INT NOT NULL, counter INT NOT NULL) ENGINE=MyISAM;
INSERT INTO t (category, counter) SELECT n%10, n&255 FROM serie;
(serie contains 16M lines with n = 1 .. 16000000)
MySQL   Postgres
 58 s      100 s   INSERT
 75 s       51 s   CREATE INDEX on (category, id)   (useless)
9.3 s        5 s   SELECT category, sum(counter) FROM t GROUP BY category;
1.7 s      0.5 s   SELECT category, sum(counter) FROM t WHERE id > 15000000 GROUP BY category;
On a simple query like this pg is about 2-3x faster (the difference would be much larger if complex joins were involved).
EXPLAIN Your SELECT Queries
LIMIT 1 When Getting a Unique Row
SELECT * FROM user WHERE state = 'Alabama'          -- wrong
SELECT 1 FROM user WHERE state = 'Alabama' LIMIT 1  -- better
Index the Search Fields
Indexes are not just for the primary keys or the unique keys. If there are any columns in your table that you will search by, you should almost always index them.
Index and Use Same Column Types for Joins
If your application contains many JOIN queries, you need to make sure that the columns you join by are indexed on both tables. This affects how MySQL internally optimizes the join operation.
Do Not ORDER BY RAND()
If you really need random rows out of your results, there are much better ways of doing it. Granted it takes additional code, but you will prevent a bottleneck that gets exponentially worse as your data grows. The problem is, MySQL will have to perform RAND() operation (which takes processing power) for every single row in the table before sorting it and giving you just 1 row.
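One common alternative, assuming a numeric id column without large gaps, is to pick a random id first and fetch a single row from there:
SELECT u.*
FROM user u
JOIN (SELECT FLOOR(RAND() * (SELECT MAX(id) FROM user)) AS rand_id) r
  ON u.id >= r.rand_id
ORDER BY u.id
LIMIT 1;
RAND() is evaluated once in the derived table, and the outer query is satisfied by a quick primary-key range read, instead of computing RAND() for every row before sorting.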
Use ENUM over VARCHAR
ENUM type columns are very fast and compact. Internally they are stored like TINYINT, yet they can contain and display string values.
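For example (the table name and the set of values are made up):
CREATE TABLE members (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  status ENUM('active', 'inactive', 'banned') NOT NULL DEFAULT 'active'
);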
Use NOT NULL If You Can
Unless you have a very specific reason to use a NULL value, you should always set your columns as NOT NULL.
"NULL columns require additional space in the row to record whether their values are NULL. For MyISAM tables, each NULL column takes one bit extra, rounded up to the nearest byte."
Store IP Addresses as UNSIGNED INT
In your queries you can use INET_ATON() to convert an IP to an integer, and INET_NTOA() for the reverse. There are also similar functions in PHP called ip2long() and long2ip().
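For example (the visits table and ip column are assumptions):
-- ip is stored as INT UNSIGNED
INSERT INTO visits (ip) VALUES (INET_ATON('192.168.0.1'));

SELECT INET_NTOA(ip) FROM visits WHERE ip = INET_ATON('192.168.0.1');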