Okay so here are my table schemas.
I have two tables, say Table A and Table B. The primary key of Table A is PriKeyA bigint(50) and the primary key of Table B is PriKeyB varchar(255). Both PriKeyA and PriKeyB contain the same kind of data.
The relevant field of Table A for this problem is Last_login_date_in_A (date); for Table B it is the primary key itself.
What I need to do is get those PriKeyA values in A which are not present in Table B's PriKeyB column, and whose Last_login_date_in_A is more than 30 days before the current date. Basically I need the difference of Table A and Table B along with one extra condition (the date condition in this problem).
Here is my SQL command:
SELECT A.PriKeyA FROM A
LEFT JOIN B ON A.PriKeyA = B.PriKeyB
WHERE B.PriKeyB IS NULL AND DATEDIFF(CURRENT_DATE, Last_login_date_in_A) > 30;
However, when I run this MySQL command it takes a ridiculously long time (about 3 hours). The sizes of Table A and Table B are 2,50,000 and 42,000 records respectively. I thought this problem could arise from the fact that PriKeyA and PriKeyB are different datatypes, so I also used CAST(PriKeyB AS UNSIGNED) in the query. But that didn't really work either; there was only a marginal performance improvement.
What could the possible problems be? I've used LEFT JOINs before and they have never taken this long.
The expense of the query appears to come down to these reasons:
The SQL datatype for A's PK and B's PK aren't the same.
Table A probably doesn't have an index on Last_login_date_in_A
What this means is that ALL rows in table A MUST be examined one row at a time in order to determine whether the "> 30 days ago" criterion is true. This is especially true if A has 2,500,000 rows (as evidenced by how you placed the commas in A's row count) instead of 250,000.
Adding an index on Last_login_date_in_A might help you out here, but it will also slightly slow down insert/update/delete statements on the table, since the additional index has to be maintained.
Additionally, you should use EXPLAIN to see MySQL's actual chosen query plan for your query; see the MySQL query plan documentation.
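For example, here is a rough sketch of the same anti-join written so that the date column is compared directly against a constant expression (and can therefore use an index) instead of being wrapped inside DATEDIFF(). The index name is made up for illustration, and the commented-out statement at the end assumes B.PriKeyB really holds only numeric values:
-- A hypothetical supporting index (the name is made up):
ALTER TABLE A ADD INDEX idx_a_last_login (Last_login_date_in_A);

-- Same anti-join, but with Last_login_date_in_A compared to a constant
-- expression, so the index above can be considered:
SELECT A.PriKeyA
FROM A
LEFT JOIN B ON A.PriKeyA = B.PriKeyB
WHERE B.PriKeyB IS NULL
  AND A.Last_login_date_in_A < CURRENT_DATE - INTERVAL 30 DAY;

-- If B.PriKeyB really contains the same numeric values as A.PriKeyA, aligning
-- the column types would also remove the implicit conversion on the join:
-- ALTER TABLE B MODIFY PriKeyB BIGINT NOT NULL;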
Related
I have a table with 32 columns, of which 6 columns make up the primary key and 2 more columns are indexed.
Explain statement provides the below output
I have observed that every time the number of rows in the EXPLAIN output increases, the SELECT query takes seconds to retrieve data from the DB. The above SELECT query returned only 310 rows but it had to scan 382546 rows.
The time taken was measured by enabling MariaDB's slow query log.
Create table query
I would like to understand what is wrong with the table or the query that is slowing down the SELECT execution so considerably.
Your rows are relatively large (around 300 bytes, depending on the content of your varchar columns). Using the primary key means (for InnoDB) that MySQL will read the whole row. Assuming the estimate of 400k rows is right (which it probably isn't, but you can check by removing the and country_code = 1506 from your query to get a better count), MySQL may end up reading more than 100 MB from disk, which can reasonably take several seconds.
Adding a proper index should fix this; in your case I would suggest (country_code, lcr_run_id, tier_type) (which would, with your primary key, actually be the same as just (country_code)).
If most of your queries have that form (e.g. use at least these three columns for lookup), you could think about changing the order of your primary key to start with those three columns; it should give you another speed boost. That operation will take some time, though.
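A minimal sketch of adding that index (the question does not show the table name, so rates below is a made-up placeholder, as is the index name):
ALTER TABLE rates
  ADD INDEX idx_country_lcr_tier (country_code, lcr_run_id, tier_type);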
Hash partitioning is useless for performance; get rid of it. Ditto for subpartitioning.
Specifying which partition to use defeats the purpose of letting the Optimizer do it for you.
You simply need INDEX(tier_type, lcr_run_id, country_code) with the columns in any desired order.
Plan A: Have the PRIMARY KEY start with those 3 columns (again, the order is not important)
Plan B: Have a "secondary" index with those 3 columns, but not being the same as the start of the PK. (This index could have more columns on the end; let's see some more queries to advise further.)
Either way, it will scan only 310 rows if you also get rid of all partitioning. (Hence, resolving your "returned only 310 rows but it had to scan 382546 rows". Anyway, the '382546' may have been a poor estimate by Explain.)
The important issue here is that indexing works with the leftmost columns in the INDEX. (The PK is an index.) Your SELECT had a match on the first 2 columns, but country_code came later in the list, and the intervening columns were not tested with =.
The three 35M values make me wonder if the PK is over-specified. For example, if a "zone" is composed of several "countries", then "zone" is irrelevant in specifying the PK.
The table has only 382K rows, but it is much fatter than it needs to be. Partitioning has a lot of overhead. Also, most columns have (I think) much bigger datatypes than needed. BIGINT takes 8 bytes; INT takes 4 bytes. For example, if there are only a small number of "zones", use TINYINT UNSIGNED, which takes only 1 byte (and allows values 0..255). (See also other 'int' variants.)
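As a hedged example of that kind of shrinking, assuming zone really has at most 255 distinct values (the table name rates is again a made-up placeholder):
-- Shrink an 8-byte BIGINT column down to a 1-byte TINYINT UNSIGNED:
ALTER TABLE rates MODIFY zone TINYINT UNSIGNED NOT NULL;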
Oops, I missed something else. Since zone is not in the WHERE, the query can't even get past the primary partitioning (no partition pruning is possible).
I am currently using MySQL.
I have two tables called person and zim_list_id; both tables have over 2 million rows.
I want to update the person table using the zim_list_id table.
The query I am using is:
update person p JOIN zim_list_id z on p.person_id = z.person_id
set p.office_name = z.`Office Name`;
I have also created indexes on the zim_list_id and person tables; the queries I executed were:
create index idx_person_office_name on person(`Office_name`);
create index idx_zim_list_id_office_name on zim_list_id(`Office name`);
The query execution is taking very long. Is there any way to reduce the execution time?
The indexes on Office Name do nothing at all for this query. All you've done with those indexes is make inserts and updates slower, as now the database has to update the index any time that column changes.
What you really need, if you don't already have them, are indexes on the person_id field in those tables, to make the join more efficient.
You might also consider adding Office_Name as a second column on the zim_list_id table's index, as this will allow the database to fulfill that part of the query entirely from the index. But I wouldn't do that until I had checked the results after setting up the plain person_id indexes first.
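A minimal sketch of those indexes (the index names are made up; the person-side index is only needed if person_id isn't already the primary key there):
-- Join lookup on zim_list_id; adding `Office Name` as the second column also
-- lets this UPDATE read the value straight from the index:
CREATE INDEX idx_zim_list_id_person ON zim_list_id (person_id, `Office Name`);

-- Only needed if person.person_id is not already the primary key:
CREATE INDEX idx_person_person_id ON person (person_id);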
Finally, I'm curious how much memory is in that server (especially relative to the total size of the database), how much of it is allocated to your MySQL innodb_buffer_pool_size setting, and what other work that server might be doing... there could always be an environmental factor as well.
I have a very simple MySQL query on a remote Windows 7 server on which I cannot change most of the parameters. I need to execute it only once now, to create a table, but in upcoming projects I'll be confronted with the same issue.
The query is the following, and has been running for 24 hours now; it's a basic filtering query:
CREATE TABLE compute_us_gum_2013P13
SELECT A.HHID, UPC, DIVISION, yearweek, CAL_DT, RETAILER, DEAL, RAW_PURCH_QTY,
UNITS,VOL,GROSS_DOL,NET_DOL, CREATE_DATE
FROM work_us_gum_2013P13_digital_purchases_with_yearweek A
INNER JOIN compute_us_gum_2013_digital_panelists B
on A.hhid = B.hhid;
Table A is quite big, around 250 million rows.
Table B is 5 million rows.
hhid is indexed on both tables. I haven't put a unique index on table B, though I could; would that change things dramatically?
My 12 GB of RAM is completely saturated (actually there's 1 GB free, but I think MySQL can't touch it). Of course I closed everything I could, and the processor is basically idle. The status of the query has been stuck on "Sending data" for most of the time.
Table A also has a covering index on 7 columns that I could drop, as it's not used, but I don't think that would change anything, would it?
One big issue is that I cannot test a lot of things, because I wouldn't know whether something works until it works, and I think this query will take long no matter what. Also, I don't want to throw away the computation time that has already been spent.
If it helps, I could also keep only the columns HHID, UPC and yearweek (resp. bigint(20), bigint(20) and int(11)); the columns I would drop are only decimals and dates.
And what if I split table B into several parts? The operation is only a filtering one, so it can be done in several steps. Would I gain time? Even if I don't gain time but don't lose any either, at least I could see my progress.
Another possibility would be to directly delete rows from table A (and, if really necessary, columns), so I wouldn't have to write another table. Would that be faster?
I can change some database parameters if I send an email to my client, but that takes some time and is not suitable for a lot of tweaking and testing.
Any solution would be appreciated, even the dirtiest one :), I'm really stuck here.
EDIT:
Explain gives me this:
id  select_type  table  type   possible_keys  key    key_len  ref             rows     Extra
1   SIMPLE       B      index  hhidx          hhidx  8        NULL            5003865  Using index
1   SIMPLE       A      ref    hhidx          hhidx  8        ncsmars.B.hhid  6
What is the Engine? Is it InnoDB?
What are the primary keys for both tables?
Did you start both primary keys with your HHID? (If HHID is not a candidate key for a table, you can create a composite key and put that field first.)
If you start both PKs with HHID and then join your tables on that field, disk seeks will be reduced dramatically, so you should see much better performance. If you cannot alter both tables, start with the smaller one: alter its PK to have HHID in first place and then check the execution plan.
ALTER TABLE compute_us_gum_2013_digital_panelists ADD PRIMARY KEY(HHID, [other necessary fields (if any)])
I suppose it will be better than before.
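For example, to check the plan after the change, you can wrap the same join in EXPLAIN (a sketch; the column list is shortened here):
EXPLAIN
SELECT A.HHID, A.UPC, A.yearweek
FROM work_us_gum_2013P13_digital_purchases_with_yearweek A
INNER JOIN compute_us_gum_2013_digital_panelists B
  ON A.hhid = B.hhid;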
I have a query of the following form:
SELECT * FROM MyTable WHERE Timestamp > [SomeTime] AND Timestamp < [SomeOtherTime]
I would like to optimize this query, and I am thinking about putting an index on timestamp, but am not sure if this would help. Ideally I would like to make timestamp a clustered index, but MySQL does not support clustered indexes, except for primary keys.
MyTable has 4 million+ rows.
Timestamp is actually of type INT.
Once a row has been inserted, it is never changed.
The number of rows with any given Timestamp is on average about 20, but could be as high as 200.
Newly inserted rows have a Timestamp that is greater than most of the existing rows, but could be less than some of the more recent rows.
Would an index on Timestamp help me to optimize this query?
No question about it. Without the index, your query has to look at every row in the table. With the index, the query will be pretty much instantaneous as far as locating the right rows goes. The price you'll pay is a slight performance decrease in inserts; but that really will be slight.
You should definitely use an index. MySQL has no clue what order those timestamps are in, and in order to find a record for a given timestamp (or timestamp range) it needs to look through every single record. And with 4 million of them, that's quite a bit of time! Indexes are your way of telling MySQL about your data -- "I'm going to look at this field quite often, so keep a list of where I can find the records for each value."
Indexes in general are a good idea for regularly queried fields. The main downside to defining indexes is that they use extra storage space, so unless you're really tight on space, you should try to use them. If they don't apply, MySQL will just ignore them anyway.
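A minimal sketch of adding that index (the index name is made up):
ALTER TABLE MyTable ADD INDEX idx_timestamp (Timestamp);
With that in place, the range query from the question can locate the matching rows via the index instead of scanning all 4 million rows.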
I don't disagree with the importance of indexing to improve SELECT query times, but if you can index on other keys (and form your queries with those indexes), an index on timestamp may not be needed.
For example, if you have a table with timestamp, category, and userId, it may be better to create an index on userId instead. In a table with many different users this will reduce considerably the remaining set on which to search the timestamp.
...and if I'm not mistaken, the advantage of this would be avoiding the overhead of maintaining the timestamp index on each insertion -- in a table with high insertion rates and highly unique timestamps, this could be an important consideration.
I'm struggling with the same problem of indexing based on timestamps and other keys. I still have testing to do so I can put proof behind what I say here; I'll try to post back with my results.
A scenario for better explanation:
timestamp 99% unique
userId 80% unique
category 25% unique
Indexing on timestamp will quickly reduce query results to 1% of the table size
Indexing on userId will quickly reduce query results to 20% of the table size
Indexing on category will quickly reduce query results to 75% of the table size
Insertion with indexes on timestamp will have high overhead **
Despite knowing that our insertions will have ever-incrementing timestamps, I don't see any discussion of MySQL optimisation based on incremental keys.
Insertion with indexes on userId will have reasonably high overhead.
Insertion with indexes on category will have reasonably low overhead.
** I'm sorry, I don't know the calculated overhead of insertion with indexing.
If your queries mainly use this timestamp, you could test this design (enlarging the primary key, with the timestamp as its first part):
CREATE TABLE perf (
  ts INT NOT NULL
, oldPK INT NOT NULL    -- placeholder: keep whatever definition your current PK has
, ... other columns
, PRIMARY KEY (ts, oldPK)
, UNIQUE (oldPK)
) ENGINE=InnoDB;
This will ensure that the queries like the one you posted will be using the clustered (primary) key.
The disadvantage is that your inserts will be a bit slower. Also, if you have other indexes on the table, they will use a bit more space (as they will include the primary key, which is now 4 bytes wider).
The biggest advantage of such a clustered index is that queries with big range scans, e.g. queries that have to read large parts of the table or the whole table, will find the related rows sequentially and in the wanted order (by timestamp), which will also be useful if you want to group by day, week, month or year.
The old PK can still be used to identify rows by keeping a UNIQUE constraint on it.
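With that layout, the range query from the question becomes a sequential scan of a slice of the clustered index (the literal bounds below are just placeholders for [SomeTime] and [SomeOtherTime]):
SELECT *
FROM perf
WHERE ts > 1356998400   -- placeholder for [SomeTime]
  AND ts < 1359676800;  -- placeholder for [SomeOtherTime]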
You may also want to have a look at TokuDB, a MySQL (and open source) variant that allows multiple clustered indices.
I have a MySQL MYISAM table (say tbl) consisting of 2 unsigned int fields, say, f1 and f2. There is an index on f2 and the table is very large (approximately 320,000,000+ rows). I update this table periodically (with approximately 100,000 new rows a week), and, in order to be able to search this table without doing an ORDER BY (which would be very time consuming in real-time queries), I physically ORDER the table according to the way in which I want to retrieve its rows.
So, I perform an ALTER TABLE tbl ORDER BY f1 DESC. (I know I have enough physical space on the server for a copy of the table.) I have read that during this operation a temporary table is created and that SELECT statements on the existing rows are not affected.
However, I have experienced that this is not the case: SELECT statements on the table that run at the same time as the ALTER TABLE are blocked and do not terminate. After the ALTER TABLE tbl completes (about 40 minutes on the production server), the SELECT statements on tbl start executing fine again.
Is there any reason why the "ALTER table tbl ORDER BY f1 DESC" seems to be blocking other clients from querying tbl?
Altering a table will always grab a lock on the table, preventing SELECTs from running.
I'll admit that I didn't even know you could do that with an ALTER TABLE.
What are you trying to get from the table? For example, all records in a given range? 320 million rows is not a trivial number. I'll give you my gut reactions:
1. Switch to InnoDB (allows #2, also gives transactions, but without #2 it may hurt performance)
2. Partition the table (makes it act like a number of slightly smaller tables); see the sketch at the end of this answer
3. Consider a redesign, such as having a "working set" table and a "historical" table, basically partitioning manually. If you usually look for recently inserted data, this (along with partitioning) will help a lot. If your lookups are evenly distributed, this probably won't make a difference.
4. Consider adding a new column you could use in conjunction with the existing criteria to narrow down selects (so instead of searching on date alone, search on date and customer ID)
Since I don't know what you're storing, some of these (such as #4) may not apply.
There are some other things you could try. OPTIMIZE TABLE might help and take less time, but I doubt it; I think internally it's implemented as a dump/reload, at least on the InnoDB side.
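As a rough sketch of #1 and #2, assuming tbl has no primary key or other unique keys (RANGE partitioning requires the partitioning column to be part of every unique key); the partition names and boundaries below are placeholders:
-- #1: convert the storage engine (this rebuilds the table).
ALTER TABLE tbl ENGINE = InnoDB;

-- #2: range-partition on f1 so lookups over an f1 range only touch a few
-- partitions; the boundaries here are made up.
ALTER TABLE tbl
  PARTITION BY RANGE (f1) (
    PARTITION p0   VALUES LESS THAN (1000000000),
    PARTITION p1   VALUES LESS THAN (2000000000),
    PARTITION pmax VALUES LESS THAN MAXVALUE
  );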