We have a MySQL master-slave architecture with around 1,000 tables. Five or six tables in our DB are around 30 to 40 GB each. We cannot join one 30 GB table to another 30 GB table, as the query never returns a result.
What we do: select the required data from one table, then find the matching data in the other table in chunks. This gets us results, but it is slow.
After joining the two tables in chunks we process them further, with a few more joins as the use case requires.
Current DB architecture: 5 master servers, 100 slave servers.
1. How can we make this faster? Indexing is not an issue here; we are already using it.
2. Do we need a big-data approach to get faster results?
EDIT: Query Details Below
Query: SELECT COUNT(*) FROM A, B WHERE A.id = B.uid;
Table A: 30 GB, 51 columns. id is the primary key, an auto-increment integer.
Table B: 27 GB, 48 columns. uid (INT(11)) has a non-unique index.
The MySQL MyISAM engine is used.
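As a sketch of the chunking described above, applied to this query (the 1-million-row chunk size is an assumption; the per-chunk counts are summed on the client):

SELECT COUNT(*) FROM A JOIN B ON B.uid = A.id WHERE A.id >= 1 AND A.id < 1000001;
SELECT COUNT(*) FROM A JOIN B ON B.uid = A.id WHERE A.id >= 1000001 AND A.id < 2000001;
-- ...and so on, walking A's auto-increment PK in fixed ranges;
-- each chunk touches a bounded slice of A and of B's uid index,
-- so it has a chance of being served from memory instead of disk.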
That's an awful query. It will either:
1. Scan all of A.
2. For each id, look up (randomly) the uid in B's index.
or:
1. Scan all of B's index on uid.
2. For each uid, look up (randomly) the id in A (in the PK, hence in the data).
In either case,
the 30GB of A will all be touched
much of the uid index of B will be touched
Step 1 will be a linear scan
Step 2 will be random probes, presumably involving lots of I/O.
Please explain the intent of the query; maybe we can help you reformulate it to achieve the same or a similar purpose.
Meanwhile, how much RAM do you have? What is the setting of innodb_buffer_pool_size? And are the tables InnoDB?
The query will eventually return a result, unless some "timeout" kills it.
Is id an AUTO_INCREMENT? Or is uid a "UUID"? (UUIDs make performance worse, but there are some minor tips to help.)
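(For reference, a sketch of checking and setting the buffer pool being asked about; the 64 GB box and the ~70%-of-RAM sizing are illustrative assumptions, not figures from the question:)

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';  -- what the server is using now

# my.cnf, on a dedicated server with, say, 64 GB of RAM:
[mysqld]
innodb_buffer_pool_size = 45G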
Related
I have a database table which is around 700 GB with 1 billion rows; the data is approximately 500 GB and the index is 200 GB.
I am trying to delete all the data from before 2021.
There are roughly 298,970,576 rows from 2021, and 708,337,583 rows remaining.
To delete this I am running a non-stop query in my Python shell:
DELETE FROM table_name WHERE id < 1762163840 LIMIT 1000000;
id = 1762163840 marks the first row from 2021. Deleting 1 million rows takes almost 1,200-1,800 seconds.
Is there any way I can speed this up? The current approach has been running for more than 15 days, has not deleted much data so far, and is going to take many more days.
I thought that if I made a table with just the ids of all the records I want to delete, I could then do an exact match like:
DELETE FROM table_name WHERE id IN (SELECT id FROM _tmp_table_name);
Will that be fast? Is it going to be faster than first making a new table with all the records to keep and then dropping the old one?
The database is set up on RDS; the instance class is db.r3.large (2 vCPU, 15.25 GB RAM), with only 4-5 connections running.
I would suggest recreating the data you want to keep -- if you have enough space:
create table keep_data as
select *
from table_name
where id >= 1762163840;
Then you can truncate the table and re-insert new data:
truncate table table_name;
insert into table_name
select *
from keep_data;
This will recreate the index.
The downside is that re-inserting the data will still take a while (simply renaming keep_data into place would be faster, as sketched below). But it should be much faster than deleting the rows.
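A minimal sketch of the rename route (old_table is a throwaway name assumed here; note that CREATE TABLE ... AS SELECT does not copy indexes, so add them to keep_data before the swap):

RENAME TABLE table_name TO old_table, keep_data TO table_name;
DROP TABLE old_table;   -- after verifying the swapped-in data

The RENAME itself is an atomic metadata change, so the table is only briefly unavailable.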
AND . . . this gives you the opportunity to partition the table so future deletes can be handled much faster. You should look into table partitioning when you have a table this large.
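As an illustration, a hedged sketch of what that could look like (partition names and the upper boundary are made up; note that PARTITION BY on an existing 700 GB table is itself a full rebuild, so it is best folded into the reload above):

ALTER TABLE table_name
PARTITION BY RANGE (id) (
    PARTITION p_pre2021 VALUES LESS THAN (1762163840),
    PARTITION p_2021    VALUES LESS THAN (2000000000),
    PARTITION p_max     VALUES LESS THAN MAXVALUE
);

ALTER TABLE table_name DROP PARTITION p_pre2021;  -- removes all pre-2021 rows as a quick metadata operation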
Multiple techniques for big deletes: http://mysql.rjweb.org/doc.php/deletebig
It points out that LIMIT 1000000 is unnecessarily big and causes more locking than might be desirable.
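A sketch of the batched delete with a smaller LIMIT, in the spirit of that link (the 1,000-row batch and the looping are illustrative; the link recommends walking the PK in ranges instead, which avoids re-scanning the same rows):

DELETE FROM table_name WHERE id < 1762163840 LIMIT 1000;
-- repeat until the statement reports 0 rows affected; a smaller batch
-- holds fewer locks and lets replication and other queries breathe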
In the long run, PARTITIONing would be beneficial; the link discusses that too.
If you do Gordon's technique (rebuilding table with what you need), you lose access to the table for a long time; I provide an alternative that has essentially zero downtime.
id IN (SELECT ...) can be terribly slow -- both because of the inefficiency of the in-SELECT and because the DELETE will hang on to a huge number of rows for transactional integrity.
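If you do build that temp table, a join-based DELETE is usually a better formulation than IN (SELECT ...) (a sketch; assumes _tmp_table_name has an index or primary key on id):

DELETE t
FROM table_name AS t
JOIN _tmp_table_name AS tmp ON tmp.id = t.id;

Even then, run it in id-range slices rather than as one huge transaction.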
If I had a table called "users" that contained 1,000,000 users, how long would it take to run a query?
Scenario 1 (1 million records)
SELECT * FROM "USERS" WHERE "ID" = 290000
Scenario 2 (10 million records)
UPDATE USERS SET lastname='Doe' WHERE "ID"=5525
Scenario 3 (100 million records)
SELECT * FROM "USERS" LIMIT 10 OFFSET 15
So basically my question is: how big can a table get before a performance hit is taken, and what query times should I expect?
If a performance hit is taken, how do I manage a very large database?
Notes:
Let's say I had 64 GB of RAM and that was not an issue.
Let's also say I'm using an SSD.
If you use indexes (for your queries) and partitioning, then you do not need to worry about the table size. At some point the index won't fit into memory and you might have some performance issues, but with 64 GB and 100,000,000 rows you are not there yet.
For your first two queries, you want an index on id, which you will get automatically if it is a primary key.
The third is just taking arbitrary rows, so an index doesn't help.
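If that third query is really paging rather than grabbing arbitrary rows, keyset pagination (a technique added here for illustration, not part of the original answer) stays index-driven no matter how deep you page:

SELECT * FROM users
WHERE id > 15            -- the last id seen on the previous page
ORDER BY id
LIMIT 10;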
I've been running a website that handles a large amount of data.
Users send data such as ip, id, and date to the server, and it is stored in a MySQL database. Each entry is stored as a single row in a table.
Right now there are approximately 24 million rows in the table.
Problem 1:
Things are getting slow now; a full table scan can take many minutes, even though I have already indexed the table.
Problem 2:
When one user pulls a SELECT from the table, it can potentially block all other users' access to the site until the query completes (because the table is locked).
Our server
32 GB RAM
12-core / 24-thread CPU
the table uses the MyISAM engine
EXPLAIN SELECT SUM(impresn), SUM(rae), SUM(reve), `date` FROM `publisher_ads_hits` WHERE date between '2015-05-01' AND '2016-04-02' AND userid='168' GROUP BY date ORDER BY date DESC
To pick up the comment from @Max P.: if you write to MyISAM tables, ALL SELECTs are blocked, because there is only a table-level lock. If you use InnoDB there is a row-level lock that only locks the rows that are needed. Also, show us the EXPLAIN of your queries; you may well need to create some new indexes. MySQL can generally use only one index per table in a query, so if you use several fields in the WHERE condition it can be useful to have a COMPOSITE INDEX over those fields.
According to the EXPLAIN, the query doesn't use an index. Try adding a composite index on (userid, date).
If you have many UPDATE and DELETE operations, try changing the engine to InnoDB.
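A sketch of both suggestions against the table from the query above (the index name is made up):

ALTER TABLE publisher_ads_hits
    ADD INDEX idx_userid_date (userid, `date`);

ALTER TABLE publisher_ads_hits ENGINE = InnoDB;  -- row locks instead of table locks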
The basic problem is the full table scan. Some suggestions:
Partition the table based on date, and don't keep more than 6-12 months of data in the live system (see the sketch below).
Add an index on userid.
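A hedged sketch of the date-based partitioning suggested above (the boundaries are examples; MySQL requires the partitioning column to be part of every unique key, so the primary key may need to include `date`):

ALTER TABLE publisher_ads_hits
PARTITION BY RANGE COLUMNS (`date`) (
    PARTITION p2015h2 VALUES LESS THAN ('2016-01-01'),
    PARTITION p2016h1 VALUES LESS THAN ('2016-07-01'),
    PARTITION p_max   VALUES LESS THAN (MAXVALUE)
);

ALTER TABLE publisher_ads_hits DROP PARTITION p2015h2;  -- retire old data cheaply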
My question is simple: let's say that I hypothetically have 18446744073709551615 records in one table (the maximum number), but I want to select only one of those records, something like this:
SELECT * FROM TABLE1 WHERE ID = 5
1. Will the result be very slow to appear?
Or, if I have another table with only five records and I run the same query:
SELECT * FROM TABLE2 WHERE ID = 5
2. Will the result appear at the same speed as in the first SELECT, or will it be much faster in this one?
Thanks.
Let's assume for simplicity that the ID column is a fixed-width primary key. The row will be found in roughly 64 index lookups, since log2(2^64) = 64 (Wolfram Alpha on that). Since MySQL / InnoDB uses B-trees, the number of disk seeks will be somewhat less than that.
Searching among a million rows would take you roughly 20 index lookups (log2(10^6) ≈ 20). Seeking among 5 values will take 3 index lookups, and the whole page will probably fit into one block.
Most of the speed difference will come from data being read from disk. The index branching should be a relatively fast operation, and functionally you would not notice the difference once the values were cached in RAM. That is to say, the first time you select from your 2^64 rows it will take a little while to read from a spinning disk, but the speed is essentially the same for the 5-row and the 2^64-row table if you repeat the query (even ignoring the query cache).
No, the first one will almost certainly be slower than the second, but probably not that much slower, provided you have an index on the ID column.
With an index, you can efficiently find the first record meeting the condition and then all the other records will be close by (in the index structures anyway, not necessarily the data area).
I'd say you're more likely to run out of disk storage with the first one before you run out of database processing power :-)
A table with 3 columns and 1,000,000 records; another table with 20 columns and 5,000,000 records. Which of these tables returns results faster when queried, provided both tables have an auto-increment primary key?
To put it more clearly:
Let's say table1 has 3 columns with 1 million records and one field indexed, and table2 has 30 columns with 10 lakh (1 million) records and 5 fields indexed. If I run a query to select data from table1 and then a query to fetch data from table2 (the columns are indexed on both tables), which table gives its output more quickly?
Based on the sizes you mentioned, the tables are so small that it won't matter.
Generally speaking, though, MyISAM will be a bit faster than InnoDB for pretty much any table, although it seems like that gap is closing all the time.
Keep in mind though that for a small performance penalty, InnoDB gives you a lot in terms of ACID compliance.