Slow Updates for Single Records by Primary Key - mysql

I am using MySQL 5.5.
I have an InnoDB table definition as follows:
CREATE TABLE `table1` (
`col1` int(11) NOT NULL AUTO_INCREMENT,
`col2` int(11) DEFAULT NULL,
`col3` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
`col4` int(11) DEFAULT NULL,
`col5` datetime DEFAULT NULL,
`col6` tinyint(1) NOT NULL DEFAULT '0',
`col7` datetime NOT NULL,
`col8` datetime NOT NULL,
`col9` int(11) DEFAULT NULL,
`col10` tinyint(1) NOT NULL DEFAULT '0',
`col11` tinyint(1) DEFAULT '0',
PRIMARY KEY (`col1`),
UNIQUE KEY `index_table1_on_ci_ai_tn_sti` (`col2`,`col4`,`col3`,`col9`),
KEY `index_shipments_on_applicant_id` (`col4`),
KEY `index_shipments_on_shipment_type_id` (`col9`),
KEY `index_shipments_on_created_at` (`col7`),
KEY `idx_tracking_number` (`col3`)
) ENGINE=InnoDB AUTO_INCREMENT=7634960 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
The issue is UPDATES. There are about 2M rows in this table.
A typical UPDATE query would be:
UPDATE table1 SET col6 = 1 WHERE col1 = 7634912;
We have about 5-10k QPS on this production server. These queries are often in the "Updating" state when viewed in the process list. The InnoDB lock information shows many record locks, but no gap locks, on index_table1_on_ci_ai_tn_sti. No transaction is waiting for a lock.
My feeling is that the unique index is causing the lag, but I'm not sure why. This is the only table we have that is defined this way, using a unique index.
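For reference, these observations come from something like the following (a sketch; "rec but not gap" is the phrasing used in SHOW ENGINE INNODB STATUS output):
SHOW FULL PROCESSLIST;                              -- threads stuck in "Updating"
SHOW ENGINE INNODB STATUS\G                         -- TRANSACTIONS section lists "lock_mode X locks rec but not gap"
SELECT * FROM information_schema.INNODB_TRX;        -- currently running transactions
SELECT * FROM information_schema.INNODB_LOCK_WAITS; -- confirms no transaction is waiting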

I don't think the UNIQUE key has any impact (in this case).
Are you really setting a DATETIME to "1"? (Please check for other typos -- they could make a big difference.)
Are you trying to do 10K UPDATEs per second?
Is innodb_buffer_pool_size bigger than the table, but no bigger than 70% of available RAM?
What is the value of innodb_flush_log_at_trx_commit? 1 is default and secure, but slower than 2.
Can you put a bunch of updates into a single transaction? That would cut down the transaction overhead.
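For the last three points, a minimal sketch: check the two settings, then batch the single-row updates into one transaction (the id values below are just examples):
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';

-- Batching updates into one transaction means one log flush per COMMIT
-- instead of one per statement:
START TRANSACTION;
UPDATE table1 SET col6 = 1 WHERE col1 = 7634912;
UPDATE table1 SET col6 = 1 WHERE col1 = 7634913;
UPDATE table1 SET col6 = 1 WHERE col1 = 7634914;
COMMIT;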

Related

mysql insert too slow and high io/cpu usage sometimes

The table has about one hundred million rows; sometimes the IO "bps" is about 150 and the IOPS about 4k.
os version: CentOS Linux 7
MySQL version: docker mysql:5.6
server_id=3310
skip-host-cache
skip-name-resolve
max_allowed_packet=20G
innodb_log_file_size=1G
init-connect='SET NAMES utf8mb4'
character-set-server = utf8mb4
collation-server = utf8mb4_general_ci
innodb_flush_log_at_trx_commit=0
innodb_buffer_pool_size=5120M
expire-logs-days=7
log_bin=webser
binlog_format=ROW
back_log=1024
slow_query_log
slow_query_log_file=slow-log
tmpdir=/var/log/mysql
sync_binlog=1000
The CREATE TABLE statement:
CREATE TABLE `device_record` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`os` tinyint(9) DEFAULT NULL,
`uid` int(11) DEFAULT '0',
`idfa` varchar(50) DEFAULT NULL,
`adv` varchar(8) DEFAULT NULL,
`oaid` varchar(100) DEFAULT NULL,
`appId` tinyint(4) DEFAULT NULL,
`agent` varchar(100) DEFAULT NULL,
`channel` varchar(20) DEFAULT NULL,
`callback` varchar(1500) DEFAULT NULL,
`activeAt` datetime DEFAULT NULL,
`chargeId` int(11) DEFAULT '0',
`createAt` datetime DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `idfa_record_index_oaid` (`oaid`),
UNIQUE KEY `index_record_index_agent` (`agent`) USING BTREE,
UNIQUE KEY `idfa_record_index_idfa_appId` (`idfa`) USING BTREE,
KEY `index_record_index_uid` (`uid`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=1160240883 DEFAULT CHARSET=utf8mb4
The INSERT statement (a MyBatis-style mapper annotation):
@Insert(
"insert into idfa_record (os,idfa,oaid,appId,agent,channel,callback,adv,createAt) "
+ "values(#{os},#{idfa},#{oaid},#{appId},#{agent},#{channel},#{callback},#{adv},now()) on duplicate key "
+ "update createAt=if(uid<=0,now(),createAt),activeAt=if(uid<=0 and channel != #{channel},null,activeAt),channel=if(uid<=0,#{channel},channel),"
+ "adv=if(uid<=0,#{adv},adv),callback=if(uid<=0,#{callback},callback),appId=if(uid<=0,#{appId},appId)")
100M rows, but the auto_increment is already at 1160M? This is quite possible, but...
Most importantly, the table is more than halfway to overflowing INT SIGNED.
Are you doing inserts that "burn" ids?
Does the existence of 4 Unique keys cause many rows to be skipped?
This seems excessive: max_allowed_packet=20G.
How much RAM is available?
Does swapping occur?
How many rows are inserted per second? What is "bps"? (I am pondering why there are 4K writes. I would expect about 2 IOPS per unique key per INSERT, but that does not add up to 4K unless you have about 500 inserts/sec.)
Are the Inserts coming from different clients? (This feeds into "burning" ids, sluggishness, etc.)
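A sketch of how to measure the insert rate and the id "burn" rate yourself (sampling over 60 seconds; TABLE_ROWS is only an estimate, but it is cheap):
SHOW GLOBAL STATUS LIKE 'Com_insert';   -- note the value
SELECT SLEEP(60);
SHOW GLOBAL STATUS LIKE 'Com_insert';   -- (second - first) / 60 = inserts/sec

-- Compare AUTO_INCREMENT growth to actual row growth over the same period:
SELECT AUTO_INCREMENT, TABLE_ROWS
    FROM information_schema.TABLES
    WHERE TABLE_SCHEMA = DATABASE()
      AND TABLE_NAME = 'device_record';
-- AUTO_INCREMENT advancing faster than TABLE_ROWS means ids are being "burned";
-- INSERT ... ON DUPLICATE KEY UPDATE burns an id on every duplicate hit.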

MySQL Partitioning Query Performance

I have created partitions on the pricing table. Below is the ALTER statement:
ALTER TABLE `price_tbl`
PARTITION BY HASH(man_code)
PARTITIONS 87;
One partition consists of 435,510 records; the total number of records in price_tbl is 6 million.
EXPLAIN shows that only one partition is used for the query. Still, the query takes 3-4 sec to execute. Below is the query:
EXPLAIN SELECT vrimg.image_cap_id,vm.man_name,vr.range_code,vr.range_name,vr.range_url, MIN(`finance_rental`) AS from_price, vd.der_id AS vehicle_id FROM `range_tbl` vr
LEFT JOIN `image_tbl` vrimg ON vr.man_code = vrimg.man_code AND vr.type_id = vrimg.type_id AND vr.range_code = vrimg.range_code
LEFT JOIN `manufacturer_tbl` vm ON vr.man_code = vm.man_code AND vr.type_id = vm.type_id
LEFT JOIN `derivative_tbl` vd ON vd.man_code=vm.man_code AND vd.type_id = vr.type_id AND vd.range_code=vr.range_code
LEFT JOIN `price_tbl` vp ON vp.vehicle_id = vd.der_id AND vd.type_id = vp.type_id AND vp.product_type_id=1 AND vp.maintenance_flag='N' AND vp.man_code=164
AND vp.initial_rentals_id =(SELECT rental_id FROM `rentals_tbl` WHERE rental_months='9')
AND vp.annual_mileage_id =(SELECT annual_mileage_id FROM `mileage_tbl` WHERE annual_mileage='8000')
WHERE vr.type_id = 1 AND vm.man_url = 'audi' AND vd.type_id IS NOT NULL GROUP BY vd.der_id
Result of EXPLAIN.
Same query without partitioning takes 3-4 sec.
Query with partitioning takes 2-3 sec.
How can we increase the query performance? It is still too slow.
The CREATE TABLE structures are attached below.
price table - This contains 6 million records
CREATE TABLE `price_tbl` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`lender_id` bigint(20) DEFAULT NULL,
`type_id` bigint(20) NOT NULL,
`man_code` bigint(20) NOT NULL,
`vehicle_id` bigint(20) DEFAULT NULL,
`product_type_id` bigint(20) DEFAULT NULL,
`initial_rentals_id` bigint(20) DEFAULT NULL,
`term_id` bigint(20) DEFAULT NULL,
`annual_mileage_id` bigint(20) DEFAULT NULL,
`ref` varchar(255) DEFAULT NULL,
`maintenance_flag` enum('Y','N') DEFAULT NULL,
`finance_rental` decimal(20,2) DEFAULT NULL,
`monthly_rental` decimal(20,2) DEFAULT NULL,
`maintenance_payment` decimal(20,2) DEFAULT NULL,
`initial_payment` decimal(20,2) DEFAULT NULL,
`doc_fee` varchar(20) DEFAULT NULL,
PRIMARY KEY (`id`,`type_id`,`man_code`),
KEY `type_id` (`type_id`),
KEY `vehicle_id` (`vehicle_id`),
KEY `term_id` (`term_id`),
KEY `product_type_id` (`product_type_id`),
KEY `finance_rental` (`finance_rental`),
KEY `type_id_2` (`type_id`,`vehicle_id`),
KEY `maintenanace_idx` (`maintenance_flag`),
KEY `lender_idx` (`lender_id`),
KEY `initial_idx` (`initial_rentals_id`),
KEY `man_code_idx` (`man_code`)
) ENGINE=InnoDB AUTO_INCREMENT=5830708 DEFAULT CHARSET=latin1
/*!50100 PARTITION BY HASH (man_code)
PARTITIONS 87 */
derivative table - This contains 18k records.
CREATE TABLE `derivative_tbl` (
`type_id` bigint(20) DEFAULT NULL,
`der_cap_code` varchar(20) DEFAULT NULL,
`der_id` bigint(20) DEFAULT NULL,
`body_style_id` bigint(20) DEFAULT NULL,
`fuel_type_id` bigint(20) DEFAULT NULL,
`trans_id` bigint(20) DEFAULT NULL,
`man_code` bigint(20) DEFAULT NULL,
`range_code` bigint(20) DEFAULT NULL,
`model_code` bigint(20) DEFAULT NULL,
`der_name` varchar(255) DEFAULT NULL,
`der_url` varchar(255) DEFAULT NULL,
`der_intro_year` date DEFAULT NULL,
`der_disc_year` date DEFAULT NULL,
`der_last_spec_date` date DEFAULT NULL,
KEY `der_id` (`der_id`),
KEY `type_id` (`type_id`),
KEY `man_code` (`man_code`),
KEY `range_code` (`range_code`),
KEY `model_code` (`model_code`),
KEY `body_idx` (`body_style_id`),
KEY `capcodeidx` (`der_cap_code`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
range table - This contains 1k records
CREATE TABLE `range_tbl` (
`type_id` bigint(20) DEFAULT NULL,
`man_code` bigint(20) DEFAULT NULL,
`range_code` bigint(20) DEFAULT NULL,
`range_name` varchar(255) DEFAULT NULL,
`range_url` varchar(255) DEFAULT NULL,
KEY `range_code` (`range_code`),
KEY `type_id` (`type_id`),
KEY `man_code` (`man_code`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
PARTITION BY HASH is essentially useless if you are hoping for improved performance. BY RANGE is useful in a few use cases.
In most situations, improvements in indexes are as good as trying to use partitioning.
Some likely problems:
No explicit PRIMARY KEY for InnoDB tables. Add a natural PK, if applicable, else an AUTO_INCREMENT.
No "composite" indexes -- they often provide a performance boost. Example: The LEFT JOIN between vr and vrimg involves 3 columns; a composite index on those 3 columns in the 'right' table will probably help performance.
Blind use of BIGINT when smaller datatypes would work. (This is an I/O issue when the table is big.)
Blind use of 255 in VARCHAR.
Consider whether most of the columns should be NOT NULL.
That query may be a victim of the "explode-implode" syndrome: the JOIN(s) first create a big intermediate table, and then a GROUP BY brings the row count back down. (See the sketch after this list.)
Don't use LEFT unless the 'right' table really is optional. (I see LEFT JOIN vd ... vd.type_id IS NOT NULL.)
Don't normalize "continuous" values (annual_mileage and rental_months). It is not really beneficial for "=" tests, and it severely hurts performance for "range" tests.
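To illustrate explode-implode with hypothetical tables a and b (not from this schema), the usual cure is to aggregate before joining:
-- Explode-implode: the JOIN inflates the row count, the GROUP BY deflates it.
SELECT a.id, MIN(b.price) AS min_price
    FROM a
    JOIN b  ON b.a_id = a.id
    GROUP BY a.id;

-- Often faster: aggregate b down to one row per a_id first, then join.
SELECT a.id, m.min_price
    FROM a
    JOIN ( SELECT a_id, MIN(price) AS min_price
               FROM b
               GROUP BY a_id
         ) AS m  ON m.a_id = a.id;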
Same query without partitioning takes 3-4 sec. Query with partitioning takes 2-3 sec.
The indexes almost always need changing when switching between partitioning and non-partitioning. With the optimal indexes for each case, I predict that performance will be close to the same.
Indexes
These should help performance whether or not it is partitioned:
vm: (man_url)
vr: (man_code, type_id) -- either order
vd: (man_code, type_id, range_code, der_id)
-- `der_id` 4th, else in any order (covering)
vrimg: (man_code, type_id, range_code, image_cap_id)
-- `image_cap_id` 4th, else in any order (covering)
vp: (type_id, vehicle_id, product_type_id, maintenance_flag,
initial_rentals_id, annual_mileage_id, man_code, finance_rental)
-- `finance_rental` last (for the MIN); the rest in any order (covering)
A "covering" index is an extra boost, in that it can do all the work just in the index's BTree, without touching the data's BTree.
Implement a bunch of what I recommend, then come back (in another Question) for further tweaking.
Usually the "partition key" should be last in a composite index.
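Applied to the tables above, those suggestions might read as follows (a sketch; the index names are arbitrary):
ALTER TABLE manufacturer_tbl ADD INDEX man_url_idx (man_url);
ALTER TABLE range_tbl ADD INDEX man_type_idx (man_code, type_id);
ALTER TABLE derivative_tbl ADD INDEX man_type_range_der_idx
    (man_code, type_id, range_code, der_id);
ALTER TABLE image_tbl ADD INDEX man_type_range_img_idx
    (man_code, type_id, range_code, image_cap_id);
ALTER TABLE price_tbl ADD INDEX price_covering_idx
    (type_id, vehicle_id, product_type_id, maintenance_flag,
     initial_rentals_id, annual_mileage_id, man_code, finance_rental);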

Alter table to apply partitioning by key in mysql

I have a table with millions of rows, and the growth rate will probably increase in the future; so far about 4.3 million rows are added per month, causing the database to slow down. I have already applied indexing, but it's not really optimizing the speed. Is applying partitioning to such data favorable?
Also, how can I apply partitioning to a table with millions of rows? I know it will look something like this:
ALTER TABLE gpsloggs
PARTITION BY KEY(DeviceCode)
PARTITIONS 10;
The problem is that I was partitioning on DeviceCode, which is not part of the primary key, so partitioning isn't permitted.
DROP TABLE IF EXISTS `gpslogss`;
CREATE TABLE `gpslogss` (
`Id` int(11) NOT NULL AUTO_INCREMENT,
`DeviceCode` varchar(255) DEFAULT NULL,
`Latitude` varchar(255) DEFAULT NULL,
`Longitude` varchar(255) DEFAULT NULL,
`Speed` double DEFAULT NULL,
`rowStamp` datetime DEFAULT NULL,
`Date` varchar(255) DEFAULT NULL,
`Time` varchar(255) DEFAULT NULL,
`AlarmCode` int(11) DEFAULT NULL,
PRIMARY KEY `Id` (`Id`) USING BTREE,
KEY `DeviceCode` (`DeviceCode`) USING BTREE
);
So I altered the definition and created the table in a new database with 0 records this way, and it worked fine:
DROP TABLE IF EXISTS `gpslogss`;
CREATE TABLE `gpslogss` (
`Id` int(11) NOT NULL AUTO_INCREMENT,
`DeviceCode` varchar(255) DEFAULT NULL,
`Latitude` varchar(255) DEFAULT NULL,
`Longitude` varchar(255) DEFAULT NULL,
`Speed` double DEFAULT NULL,
`rowStamp` datetime DEFAULT NULL,
`Date` varchar(255) DEFAULT NULL,
`Time` varchar(255) DEFAULT NULL,
`AlarmCode` int(11) DEFAULT NULL,
KEY `Id` (`Id`) USING BTREE,
KEY `DeviceCode` (`DeviceCode`) USING BTREE
) PARTITION BY KEY(`DeviceCode`)
PARTITIONS 10;
How should I write the statements so that I can apply partitioning to the existing table with millions of rows? How should I drop keys and alter the table to apply partitioning without damaging data?
Short answer: Don't.
Long answer: PARTITION BY KEY does not provide any performance benefit (that I know of). And why else use PARTITION?
Other notes:
You should use InnoDB for virtually all tables.
InnoDB tables should have an explicit PRIMARY KEY.
There is a DATETIME datatype; don't use VARCHAR for date or time, and don't split them.
latitude and longitude are numeric; don't use VARCHAR. FLOAT is a likely candidate (precise enough to differentiate vehicles, but not people).
Your real question is about speed. Let's see the slow SELECTs and work backward from them. Adding PARTITIONing is rarely a solution to performance.
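Putting those notes together, a cleaned-up definition might look roughly like this (a sketch; the LoggedAt column, combining your Date and Time, is a name I made up):
CREATE TABLE `gpslogss` (
    `Id` INT NOT NULL AUTO_INCREMENT,
    `DeviceCode` VARCHAR(255) DEFAULT NULL,
    `Latitude` FLOAT DEFAULT NULL,      -- numeric, not VARCHAR
    `Longitude` FLOAT DEFAULT NULL,
    `Speed` DOUBLE DEFAULT NULL,
    `rowStamp` DATETIME DEFAULT NULL,
    `LoggedAt` DATETIME DEFAULT NULL,   -- replaces the VARCHAR Date and Time pair
    `AlarmCode` INT DEFAULT NULL,
    PRIMARY KEY (`Id`),                 -- explicit PK for InnoDB
    KEY `DeviceCode` (`DeviceCode`)
) ENGINE=InnoDB;                        -- and no PARTITION BY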

optimize mysql schema, need advice

I was asked to help check a table and improve performance.
The table is a large one with 2,000,000 rows, and it is growing fast.
There are lots of users hitting this table with many UPDATE / INSERT and DELETE queries.
Maybe you can give me some good advice to improve performance and reliability.
Here is the table definition:
CREATE TABLE `calculate` (
`GROUP_LINE_ID` BIGINT(250) NOT NULL DEFAULT '0',
`GROUP_LINE_PARENT_ID` BIGINT(250) NOT NULL DEFAULT '0',
`MOEDER_LINE_CODE` BIGINT(250) NOT NULL DEFAULT '0',
`CALC_ID` BIGINT(250) NOT NULL DEFAULT '0',
`GROUP_ID` BIGINT(250) NOT NULL DEFAULT '0',
`CODE` VARCHAR(250) DEFAULT NULL,
`DESCRIPTION` VARCHAR(250) DEFAULT NULL,
`RAW_AMOUNT` DECIMAL(50,3) NOT NULL DEFAULT '0.000',
`AMOUNT` DECIMAL(50,3) NOT NULL DEFAULT '0.000',
`UNIT` VARCHAR(100) DEFAULT NULL,
`MEN_HOURS` DECIMAL(50,3) NOT NULL DEFAULT '0.000',
`PRICE_PER_UNIT` DECIMAL(50,3) NOT NULL DEFAULT '0.000',
`CONTRACTOR_UNIT` DECIMAL(50,3) NOT NULL DEFAULT '0.000',
`POSTS_PER_UNIT` DECIMAL(50,3) NOT NULL DEFAULT '0.000',
`SORT_INDEX` BIGINT(250) NOT NULL DEFAULT '0',
`FACTOR` DECIMAL(50,4) NOT NULL DEFAULT '0.0000',
`FACTOR_TYPE` INT(2) NOT NULL DEFAULT '0',
`ROUND_AT` DECIMAL(50,2) NOT NULL DEFAULT '0.00',
`MATERIAL_ID` BIGINT(250) NOT NULL DEFAULT '0',
`MINIMUM` DECIMAL(50,2) NOT NULL DEFAULT '0.00',
`LINE_TYPE` INT(1) NOT NULL DEFAULT '0',
`ONDERDRUKT` INT(5) NOT NULL DEFAULT '0',
`MARKED` INT(5) NOT NULL DEFAULT '0',
`IS_TEXT` INT(5) NOT NULL DEFAULT '0',
`BRUTO_PRICE` DECIMAL(20,2) NOT NULL DEFAULT '0.00',
`AMOUNT_DISCOUNT` DECIMAL(20,3) NOT NULL DEFAULT '0.000',
`FROM_CONSTRUCTOR` INT(5) NOT NULL DEFAULT '0',
`CHANGE_DATE` DATETIME NOT NULL DEFAULT '0000-00-00 00:00:00',
`BEREKENING_VALUE` INT(5) NOT NULL DEFAULT '0',
`MAATVOERING_ID` BIGINT(250) NOT NULL DEFAULT '0',
`KOZIJN_CALC_ID` BIGINT(250) NOT NULL DEFAULT '0',
`IS_KOZIJN_CALC_TOTALS` INT(5) NOT NULL DEFAULT '0',
`EAN_CODE` VARCHAR(150) DEFAULT NULL,
`UURLOON_ID` BIGINT(20) NOT NULL DEFAULT '0',
`ORG_PRICE_PER_UNIT` DECIMAL(50,3) NOT NULL DEFAULT '0.000',
`ORG_CONTRACTOR_UNIT` DECIMAL(50,3) NOT NULL DEFAULT '0.000',
`BTWCode` INT(5) NOT NULL DEFAULT '0',
`IS_CONTROLE_GETAL` INT(5) NOT NULL DEFAULT '0',
`AttentieRegel` INT(5) NOT NULL DEFAULT '0',
`KozijnSelectionRowId` BIGINT(250) NOT NULL DEFAULT '0',
`OfferteTekst` TEXT,
`VerliesFactor` DECIMAL(15,4) NOT NULL DEFAULT '0.0000',
PRIMARY KEY (`GROUP_LINE_ID`),
KEY `GROUP_LINE_PARENT_ID` (`GROUP_LINE_PARENT_ID`),
KEY `MOEDER_LINE_CODE` (`MOEDER_LINE_CODE`),
KEY `CALC_ID` (`CALC_ID`),
KEY `GROUP_ID` (`GROUP_ID`),
KEY `MATERIAL_ID` (`MATERIAL_ID`),
KEY `MAATVOERING_ID` (`MAATVOERING_ID`),
KEY `KOZIJN_CALC_ID` (`KOZIJN_CALC_ID`),
KEY `IS_KOZIJN_CALC_TOTALS` (`IS_KOZIJN_CALC_TOTALS`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
In my opinion:
change the table engine to InnoDB
bigint(250) can be changed to int(10) or bigint(10)?
Please give me some advice.
Yes, you are right.
1. Remember that (in InnoDB) the primary key is implicitly appended to every other key,
so keep your primary key as small as possible. I think INT is sufficient -- it can handle up to 2 billion records -- and BIGINT is not needed.
2. Increase key_buffer_size (up to 25% of your memory); if your server has mostly or only MyISAM tables, you can increase it to 60-70%.
Try a manual key cache:
SET GLOBAL keycache1.key_buffer_size=256*1024;
CACHE INDEX t1,t2 IN keycache1;
LOAD INDEX INTO CACHE t1, t2 IGNORE LEAVES;
(The IGNORE LEAVES modifier causes only blocks for the nonleaf nodes of the index to be preloaded.)
3. As you mentioned, your table is growing fast; partitioning the table may improve performance.
4. Perform table maintenance frequently: ANALYZE TABLE to refresh the index statistics, and OPTIMIZE TABLE, plus CHECK TABLE and REPAIR TABLE if there are any errors. (OPTIMIZE frequently, because, as you said, there are many deletes. A sketch follows this answer.)
5. If you don't want to change the engine, turn on the delay_key_write variable (specific to MyISAM), which delays writing the keys until the table is closed.
6. Run PROCEDURE ANALYSE(), which suggests the best data types.
7. Create FULLTEXT indexes to take advantage of MyISAM, if possible and only if it is useful.
8. Examine your query cache (set the query cache to on-demand mode) and make sure the slow query log is activated.
9. Examine (and rewrite if needed) all the queries that use this table.
Finally, if you do want to change the engine:
Change the table's storage engine to InnoDB and increase innodb_buffer_pool_size; it may help a little.
If the table is accessed heavily, it is better to switch to InnoDB, because MyISAM does table-level locking, due to which some slow queries never appear in the slow query log (the time spent waiting to acquire the lock is not counted as execution time in MySQL).
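A sketch of the maintenance tasks from point 4, against this table (run them in quiet hours; OPTIMIZE rebuilds the table):
ANALYZE TABLE calculate;    -- refresh index statistics
OPTIMIZE TABLE calculate;   -- defragment; worthwhile after many deletes
CHECK TABLE calculate;      -- look for corruption
REPAIR TABLE calculate;     -- MyISAM only; run it only if CHECK reports problems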
Turn on the MySQL Slow Query Log to find the slowest queries.
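For example, it can be enabled at runtime (a sketch; the 1-second threshold is just a starting point):
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 1;   -- log anything slower than 1 second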
If they are selects then run them through EXPLAIN to find out what indexes (if any) are being used. You may be able to turn some indexes into multi-column indexes and find some improvements that way. See Multiple Indexes vs Multi-Column Indexes for a good discussion on the differences.
Inserts, Updates, and Deletes are probably slowed due to your indexes. You need to figure out which indexes are not being used and drop them. Unfortunately there's not a simple way to do this other than running through your most popular queries.
Reducing the size of columns that are oversized is a good idea.
The only reason I know of these days for using MyISAM is when doing full text search. You should, as you suggest, switch to InnoDB. (ALTER TABLE calculate ENGINE = InnoDB;)
Is this a flattened table to avoid joins? Do you have to have the OfferteTekst column in this table? Even extracting that into a related table may help, but not if you'd only end up joining against it.
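If you do extract it, the split might look roughly like this (a sketch; the calculate_text table name is made up):
CREATE TABLE calculate_text (
    GROUP_LINE_ID BIGINT NOT NULL,
    OfferteTekst TEXT,
    PRIMARY KEY (GROUP_LINE_ID)
) ENGINE=InnoDB;

INSERT INTO calculate_text (GROUP_LINE_ID, OfferteTekst)
    SELECT GROUP_LINE_ID, OfferteTekst
        FROM calculate
        WHERE OfferteTekst IS NOT NULL;

ALTER TABLE calculate DROP COLUMN OfferteTekst;   -- after verifying the copy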

Why do I HAVE to optimize tables?

I have a pretty big table which contains about 3 million records.
When running a very simple query, joining this table to a few others (all with indexes and/or primary keys), the query takes about 25 seconds to complete!
The value of "Handler_read_next" is about 7 million!
Number of requests to read the next row in key order, incremented if you are querying an index column with a range constraint or if you are doing an index scan.
This problem has only started since this table began to grow big.
Now if I do an "OPTIMIZE TABLE" on this table, the query runs in about 0.02 seconds and "Handler_read_next" has a value of about 1500.
How can the difference be so extreme, and do I really have to set up a scheduled job to optimize this table once a week or so? Even so, I would like to know the reason behind this and why MySQL behaves like this. Sure, rows are deleted and updated quite a lot in this table, but should it get so badly fragmented in only one week that the query goes from 0.02 sec to 25 sec?
Edit: As requested, here is the query in question:
SELECT *
FROM budget_expenses
JOIN budget_categories
ON budget_categories.BudgetAreaId = budget_expenses.BudgetAreaId
AND budget_categories.BudgetCategoryId = budget_expenses.BudgetCategoryId
LEFT JOIN budget_types
ON budget_types.BudgetAreaId = budget_expenses.BudgetAreaId
AND budget_types.BudgetCategoryId = budget_expenses.BudgetCategoryId
AND budget_types.BudgetTypeId = budget_expenses.BudgetTypeId
WHERE budget_expenses.BudgetId = 1
AND budget_expenses.ExpenseDate >= '2012-11-25'
AND budget_expenses.ExpenseDate <= '2012-12-24'
AND budget_expenses.BudgetAreaId = 2
ORDER BY budget_expenses.ExpenseDate DESC,
budget_expenses.ExpenseTime IS NULL ASC,
budget_expenses.ExpenseTime DESC
(BudgetAreaId, BudgetCategoryId) is the primary key in budget_categories, and (BudgetAreaId, BudgetCategoryId, BudgetTypeId) is the primary key in budget_types. In budget_expenses these 3 columns are indexed, and ExpenseDate also has an index. This query returns about 20 rows.
Show create table:
CREATE TABLE `budget_areas` (
`BudgetAreaId` int(11) NOT NULL,
`Name` varchar(255) DEFAULT NULL,
PRIMARY KEY (`BudgetAreaId`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
CREATE TABLE `budget_categories` (
`BudgetAreaId` int(11) NOT NULL,
`BudgetCategoryId` int(11) NOT NULL AUTO_INCREMENT,
`Name` varchar(255) DEFAULT NULL,
`SortOrder` int(11) DEFAULT NULL,
PRIMARY KEY (`BudgetAreaId`,`BudgetCategoryId`),
KEY `BudgetAreaId` (`BudgetAreaId`,`BudgetCategoryId`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1
CREATE TABLE `budget_types` (
`BudgetAreaId` int(11) NOT NULL,
`BudgetCategoryId` int(11) NOT NULL,
`BudgetTypeId` int(11) NOT NULL,
`Name` varchar(255) DEFAULT NULL,
`SortId` int(11) DEFAULT NULL,
PRIMARY KEY (`BudgetAreaId`,`BudgetCategoryId`,`BudgetTypeId`),
KEY `BudgetAreaId` (`BudgetAreaId`,`BudgetCategoryId`,`BudgetTypeId`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
CREATE TABLE `budget_expenses` (
`ExpenseId` int(11) NOT NULL AUTO_INCREMENT,
`BudgetId` int(11) NOT NULL,
`TempId` int(11) DEFAULT NULL,
`BudgetAreaId` int(11) DEFAULT NULL,
`BudgetCategoryId` int(11) DEFAULT NULL,
`BudgetTypeId` int(11) DEFAULT NULL,
`Company` varchar(255) DEFAULT NULL,
`ImportCompany` varchar(255) DEFAULT NULL,
`Sum` double(50,2) DEFAULT NULL,
`ExpenseDate` date DEFAULT NULL,
`ExpenseTime` time DEFAULT NULL,
`Inserted` datetime DEFAULT NULL,
`Changed` datetime DEFAULT NULL,
`InsertType` int(1) DEFAULT NULL,
`AccountId` int(11) DEFAULT NULL,
`BankCardId` int(11) DEFAULT NULL,
PRIMARY KEY (`ExpenseId`),
KEY `BudgetId` (`BudgetId`),
KEY `AccountId` (`AccountId`),
KEY `Company` (`Company`) USING BTREE,
KEY `ExpenseDate` (`ExpenseDate`),
KEY `BudgetAreaId` (`BudgetAreaId`),
KEY `BudgetCategoryId` (`BudgetCategoryId`),
KEY `BudgetTypeId` (`BudgetTypeId`),
CONSTRAINT `budget_expenses_ibfk_1` FOREIGN KEY (`BudgetId`) REFERENCES `budgets` (`BudgetId`)
) ENGINE=InnoDB AUTO_INCREMENT=3604462 DEFAULT CHARSET=latin1
After I copy-pasted this, I changed the budget_categories table from MyISAM to InnoDB.
Edit: The change from MyISAM to InnoDB didn't make any difference. The query is very slow again, just 12 hours after I optimized the budget_expenses table!
Here is the EXPLAIN for the query, which now takes about 9 seconds:
http://jsfiddle.net/dmVPY/1/
Ahhh MyISAM....
Try changing the table type (aka 'storage engine') to InnoDB instead.
If you do this, make sure innodb_buffer_pool_size in your my.cnf is a sensible value - the default is too small.
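A minimal sketch of both steps (the 2G figure is only an example; size the buffer pool to your RAM and working set):
ALTER TABLE budget_categories ENGINE=InnoDB;   -- the remaining MyISAM table

-- in my.cnf, under [mysqld]:
-- innodb_buffer_pool_size = 2G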