I have a table with a composite primary key, with the following structure:
CREATE TABLE field_name_test (
id_type varchar(128) NOT NULL DEFAULT '',
`desc` varchar(128) NOT NULL DEFAULT '',
deleted tinyint(4) NOT NULL DEFAULT '0',
type_id int(10) unsigned NOT NULL,
rev_id int(10) unsigned NOT NULL,
lang varchar(32) NOT NULL DEFAULT '',
delta int(10) unsigned NOT NULL,
fname_value varchar(255) DEFAULT NULL,
fname_format varchar(255) DEFAULT NULL,
PRIMARY KEY (id_type,type_id,rev_id,deleted,delta,lang),
KEY id_type (id_type),
KEY `desc` (`desc`),
KEY deleted (deleted),
KEY type_id (type_id),
KEY rev_id (rev_id),
KEY lang (lang),
KEY fname_format (fname_format)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
I'm running pt-online-schema-change (pt-osc) to change the collation of the table. It works fine on other tables, but this one fails with the error below:
pt-online-schema-change --execute --password=#### --user=#### --socket=#### --port=#### --chunk-time=1 --recursion-method=none --no-drop-old-table --alter "CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci, CHANGE `desc` `desc` varchar(128) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci, CHANGE id_type id_type varchar(128) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci, CHANGE lang lang varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci, ROW_FORMAT=DYNAMIC, LOCK=SHARED, ALGORITHM=COPY" D=db,t=field_name_test,h=localhost
No slaves found. See --recursion-method if host ###### has slaves.
Not checking slave lag because no slaves were found and --check-slave-lag was not specified.
Operation, tries, wait:
copy_rows, 10, 0.25
create_triggers, 10, 1
drop_triggers, 10, 1
swap_tables, 10, 1
update_foreign_keys, 10, 1
Altering db.field_name_test...
Creating new table...
Created new table db._field_name_test_new OK.
Altering new table...
Altered db._field_name_test_new OK.
2017-09-15T09:18:47 Creating triggers...
2017-09-15T09:18:47 Created triggers OK.
2017-09-15T09:18:47 Copying approximately 3843064 rows...
2017-09-15T09:18:47 Dropping triggers...
2017-09-15T09:18:47 Dropped triggers OK.
2017-09-15T09:18:47 Dropping new table...
2017-09-15T09:18:47 Dropped new table OK.
db.field_name_test was not altered.
2017-09-15T09:18:47 Error copying rows from db.field_name_test to db._field_name_test_new: 2017-09-15T09:18:47 Error copying rows at chunk 1 of db.field_name_test because MySQL used only 390 bytes of the PRIMARY index instead of 497. See the --[no]check-plan documentation for more information.
I'm running the above on a 3-node Galera cluster.
I have two questions about pt-osc:
1) What are possible solutions for cases like this?
2) Is it possible to run multiple pt-osc operations in parallel on the same database?
Please let me know if you need any other input. Thanks in advance.
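Edit: from the --[no]check-plan documentation, one workaround seems to be rerunning with the plan check disabled (a sketch, assuming the chunk queries are otherwise safe to run without that safety net; the --alter string is the same as above):
pt-online-schema-change --execute --no-check-plan --password=#### --user=#### --socket=#### --port=#### --chunk-time=1 --recursion-method=none --no-drop-old-table --alter "..." D=db,t=field_name_test,h=localhost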
I am trying to back up a database using mysqldump but I got this error:
Trying to backup MySQL database... mysqldump: Couldn't execute 'show create table `transaction_registry`':
Table 'mysql.transaction_registry' doesn't exist in engine (1932)
The problem was first with innodb_index_stats and innodb_table_stats; I followed the instructions for those and it worked, but then I got this further problem.
I tried 1 and 2, but I'm still getting the same error. Any ideas?
What I ended up doing was this:
Removed the corrupted table file:
rm -rf /var/lib/mysql/mysql/transaction_registry.ibd
Recreated the table:
CREATE TABLE transaction_registry (
transaction_id bigint(20) unsigned NOT NULL,
commit_id bigint(20) unsigned NOT NULL,
begin_timestamp timestamp(6) NOT NULL DEFAULT '0000-00-00 00:00:00.000000',
commit_timestamp timestamp(6) NOT NULL DEFAULT '0000-00-00 00:00:00.000000',
isolation_level enum('READ-UNCOMMITTED','READ-COMMITTED','REPEATABLE-READ','SERIALIZABLE') COLLATE utf8_bin NOT NULL,
PRIMARY KEY (transaction_id),
UNIQUE KEY commit_id (commit_id),
KEY begin_timestamp (begin_timestamp),
KEY commit_timestamp (commit_timestamp,transaction_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0;
Ref: https://mariadb.com/kb/en/mysqltransaction_registry-table/
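To confirm the recreated table is healthy before rerunning mysqldump, a minimal sanity check might look like this:
CHECK TABLE mysql.transaction_registry;
SELECT COUNT(*) FROM mysql.transaction_registry;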
I've gone over this many times but I couldn't find a way to make it faster. I have a table with about 4 million records and I want to grab rows from a specific date range (which would only yield about 10,000 results). My query takes 10 seconds to execute... why!?
SELECT *
FROM banjo_live.actions_activity
where userid IN (102,164,94,140)
AND actionsid=4
AND (actions_activity_timestamp between '2021-06-01 00:00:00'
AND '2021-06-31 23:23:23')
AND new_statusid NOT IN (10,13)
LIMIT 0, 50000
Surely this shouldn't take 10 seconds. What could be the issue?
Thanks
My table:
DROP TABLE IF EXISTS `actions_activity`;
CREATE TABLE `actions_activity` (
`actions_activity_id` int(11) NOT NULL AUTO_INCREMENT,
`orderid` int(11) NOT NULL,
`barcodeid` int(11) NOT NULL,
`skuid` int(11) NOT NULL,
`sku_code` varchar(50) CHARACTER SET latin1 COLLATE latin1_swedish_ci NULL DEFAULT NULL,
`actionsid` int(11) NOT NULL,
`action_note` text CHARACTER SET latin1 COLLATE latin1_swedish_ci NOT NULL,
`starting_count` int(11) NOT NULL,
`new_count` int(11) NOT NULL,
`old_statusid` int(11) NOT NULL COMMENT 'Old Status',
`new_statusid` int(11) NOT NULL COMMENT 'New Status',
`userid` int(11) NOT NULL COMMENT 'Handled By',
`actions_activity_timestamp` timestamp(0) NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP(0),
`actions_activity_created_at` timestamp(0) NOT NULL DEFAULT CURRENT_TIMESTAMP,
`sessionid` int(11) NULL DEFAULT NULL,
PRIMARY KEY (`actions_activity_id`) USING BTREE,
INDEX `FetchingIndex`(`barcodeid`) USING BTREE,
INDEX `skuindex`(`skuid`) USING BTREE,
INDEX `searchbysession`(`sessionid`) USING BTREE,
FULLTEXT INDEX `sku_code`(`sku_code`)
) ENGINE = InnoDB AUTO_INCREMENT = 4336767 CHARACTER SET = latin1 COLLATE = latin1_swedish_ci ROW_FORMAT = Dynamic;
23:23:23?? -- Gordon's rewrite avoids typos like this ('2021-06-31' isn't even a valid date). Or, I prefer this:
actions_activity_timestamp >= '2021-06-01' AND
actions_activity_timestamp < '2021-06-01' + INTERVAL 1 MONTH
Add a 2-column index where the second column is whichever of the other things in the WHERE is most selective:
INDEX(actionsid, ...)
Once you add an ORDER BY (cf. The Impaler), there may be a better index.
Are you really expecting 10K rows of output? That will choke most clients. Maybe there is some processing you could have SQL do so the output won't be as bulky?
First, I assume you intend:
SELECT *
FROM banjo_live.actions_activity
WHERE userid IN (102,164,94,140) AND
actionsid = 4 AND
actions_activity_timestamp >= '2021-06-01' AND
actions_activity_timestamp < '2021-07-01' AND
new_statusid NOT IN (10, 13)
LIMIT 0, 50000;
You want a composite index. Without knowing the sizes of the fields, I would suggest an index on (actionsid, userid, actions_activity_timestamp, new_statusid).
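As a concrete sketch of that suggestion (the index name is arbitrary):
ALTER TABLE actions_activity
    ADD INDEX idx_action_user_ts_status (actionsid, userid, actions_activity_timestamp, new_statusid);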
I have partitioned a MySQL table containing 53 rows. Now when I query the number of records across all partitions, the total is almost 3 times what I expect. Even phpMyAdmin thinks there are 156 records.
Have I done something wrong in my table design or partitioning?
(Screenshots omitted: one showing the count of records in each partition, and one of phpMyAdmin's row count.)
Finally, this is my table:
CREATE TABLE cl_inbox (
id int(11) NOT NULL AUTO_INCREMENT,
user int(11) NOT NULL,
contact int(11) DEFAULT NULL,
sdate timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
body text NOT NULL,
userstatus tinyint(4) NOT NULL DEFAULT 1 COMMENT '0: new, 1:read, 2: deleted',
contactstatus tinyint(4) NOT NULL DEFAULT 0,
class tinyint(4) NOT NULL DEFAULT 0,
attachtype tinyint(4) NOT NULL DEFAULT 0,
attachsrc varchar(255) DEFAULT NULL,
PRIMARY KEY (id, user),
INDEX i_class (class),
INDEX i_contact_user (contact, user),
INDEX i_contactstatus (contactstatus),
INDEX i_user_contact (user, contact),
INDEX i_userstatus (userstatus)
)
ENGINE = INNODB
AUTO_INCREMENT = 69
AVG_ROW_LENGTH = 19972
CHARACTER SET utf8
COLLATE utf8_general_ci
ROW_FORMAT = DYNAMIC
PARTITION BY KEY (`user`)
(
PARTITION partition1 ENGINE = INNODB,
PARTITION partition2 ENGINE = INNODB,
PARTITION partition3 ENGINE = INNODB,
.....
PARTITION partition128 ENGINE = INNODB
);
Those numbers are approximations, just as with SHOW TABLE STATUS and EXPLAIN.
Meanwhile, you will probably find that PARTITION BY KEY provides no performance improvement. If you find otherwise, I would be very interested to hear about it.
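If you want exact counts, count the rows directly instead of trusting the statistics (a sketch; the explicit PARTITION clause requires MySQL 5.6 or later):
SELECT COUNT(*) FROM cl_inbox;                         -- exact total
SELECT COUNT(*) FROM cl_inbox PARTITION (partition1);  -- exact count for one partition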
Why do I get an error of the form:
Error in query: Duplicate entry '10' for key 1
...when doing an INSERT statement like:
INSERT INTO wp_abk_period (pricing_id, apartment_id) VALUES (13, 27)
...with 13 and 27 being valid ids for existing pricing and apartment rows, and the table is defined as:
CREATE TABLE `wp_abk_period` (
`id` int(11) NOT NULL auto_increment,
`apartment_id` int(11) NOT NULL,
`pricing_id` int(11) NOT NULL,
`type` enum('available','booked','unavailable') collate utf8_unicode_ci default NULL,
`starts` datetime default NULL,
`ends` datetime default NULL,
`recur_type` enum('daily','weekly','monthly','yearly') collate utf8_unicode_ci default NULL,
`recur_every` char(3) collate utf8_unicode_ci default NULL,
`timedate_significance` char(4) collate utf8_unicode_ci default NULL,
`check_in_times` varchar(255) collate utf8_unicode_ci default NULL,
`check_out_times` varchar(255) collate utf8_unicode_ci default NULL,
PRIMARY KEY (`id`),
KEY `fk_period_apartment1_idx` (`apartment_id`),
KEY `fk_period_pricing1_idx` (`pricing_id`),
CONSTRAINT `fk_period_apartment1` FOREIGN KEY (`apartment_id`) REFERENCES `wp_abk_apartment` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION,
CONSTRAINT `fk_period_pricing1` FOREIGN KEY (`pricing_id`) REFERENCES `wp_abk_pricing` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION
) ENGINE=InnoDB AUTO_INCREMENT=10 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
Isn't 'key 1' the id column in this case, and isn't having it on auto_increment sufficient for me not to have to specify it?
Note: If I just provide an unused value for id, like INSERT INTO wp_abk_period (id, pricing_id, apartment_id) VALUES (3333333, 13, 27) it works fine, but then again, it is set as auto_increment so I shouldn't need to do this!
Note 2: OK, this is a complete "twilight zone" moment: after running the query above with the huge number for id, things started working normally, with no more duplicate entry errors. Can someone explain to me what MySQL was doing to produce this weird behavior?
It could be that the AUTO_INCREMENT value for the table and the actual values in the id column have got out of whack.
This might help:
Step 1 - Get Max id from table
select max(id) from wp_abk_period
Step 2 - Align the AUTO_INCREMENT counter on table
ALTER TABLE wp_abk_period AUTO_INCREMENT = <value from step 1 + 100>;
Step 3 - Retry the insert
As for why the AUTO_INCREMENT got out of whack, I don't know. Was auto_increment added after data was already in the table? Was the auto_increment value altered after data was inserted? (This would also explain your Note 2: inserting an explicit id of 3333333 pushed the table's AUTO_INCREMENT counter past the conflicting values, so subsequent inserts worked.)
Hope it helps.
I had the same problem and here is my solution:
My id column had a bad type. It was TINYINT, and MySQL wanted to write the 128th row (127 is the maximum for a signed TINYINT).
Sometimes what you think is your biggest problem is only a tiny parameter...
Late to the party, but I just ran into this tonight - duplicate key '472817' and the provided answers didn't help.
On a whim I ran:
repair table wp_abk_period
which output
Number of rows changed from 472816 to 472817
Seems like MySQL had the row count wrong, and the issue went away.
My environment:
mysql Ver 14.14 Distrib 5.1.73, for Win64 (unknown)
Create table syntax:
CREATE TABLE `env_events` (
`tableId` int(11) NOT NULL AUTO_INCREMENT,
`deviceId` varchar(50) DEFAULT NULL,
`timestamp` int(11) DEFAULT NULL,
`temperature` float DEFAULT NULL,
`humidity` float DEFAULT NULL,
`pressure` float DEFAULT NULL,
`motion` int(11) DEFAULT NULL,
PRIMARY KEY (`tableId`)
) ENGINE=MyISAM AUTO_INCREMENT=528521 DEFAULT CHARSET=latin1
You can check the current value of the auto_increment with the following command:
show table status
Then check the max value of the id and see if it looks right. If not, change the auto_increment value of your table.
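For example, against the env_events table above (<max id + 1> is a placeholder):
SHOW TABLE STATUS LIKE 'env_events';    -- the Auto_increment column shows the counter
SELECT MAX(tableId) FROM env_events;    -- the actual highest id in the table
ALTER TABLE env_events AUTO_INCREMENT = <max id + 1>;  -- realign if the counter is behind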
When debugging this problem, check the table name case sensitivity (especially if you run MySQL on something other than Windows).
E.g., one script uses upper case in 'CREATE TABLE my_table' while another script tries to 'INSERT INTO MY_TABLE'. These two tables might have different contents and different file system locations, which might lead to the described problem.
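A quick way to see how the server handles identifier case (0 means table names are stored and compared case-sensitively, the usual Linux default):
SHOW VARIABLES LIKE 'lower_case_table_names';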
I've got a query which seems impossible to optimise further (with regards to execution time). It's a plain and simple query, indexes are in place, and I've tried tuning InnoDB settings... but nothing really seems to help.
Tables
The query is a JOIN between the three tables trk, auf and paf.
trk : temporary table holding id's representing tracks.
auf : table representing audio files associated with the tracks.
paf : table holding the id's of published audio files. Acts as a "filter".
-- 'trk' table
CREATE TEMPORARY TABLE auf_713340 (
`id` char(36),
PRIMARY KEY (id)
) ENGINE=MEMORY;
-- 'auf' table
CREATE TABLE `file` (
`id` char(36) NOT NULL,
`track_id` char(36) NOT NULL,
`type` varchar(3) DEFAULT NULL,
`quality` int(1) DEFAULT '0',
`size` int(20) DEFAULT '0',
`duration` float DEFAULT '0',
`bitrate` int(6) DEFAULT '0',
`samplerate` int(5) DEFAULT '0',
`tagwritten` datetime DEFAULT NULL,
`tagwriteattempts` int(3) NOT NULL DEFAULT '0',
`audiodataread` datetime DEFAULT NULL,
`audiodatareadattempts` int(3) NOT NULL DEFAULT '0',
`converted` datetime DEFAULT NULL,
`convertattempts` int(3) NOT NULL DEFAULT '0',
`waveformgenerated` datetime DEFAULT NULL,
`waveformgenerationattempts` int(3) NOT NULL DEFAULT '0',
`flag` int(1) NOT NULL DEFAULT '0',
`status` int(1) NOT NULL DEFAULT '0',
`updated` datetime NOT NULL DEFAULT '2000-01-01 00:00:00',
PRIMARY KEY (`id`),
KEY `FK_file_track` (`track_id`),
KEY `file_size` (`size`),
KEY `file_type` (`type`),
KEY `file_quality` (`quality`),
CONSTRAINT `file_ibfk_1` FOREIGN KEY (`track_id`) REFERENCES `track` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
-- 'paf' table
CREATE TABLE `publishedfile` (
`file_id` varchar(36) NOT NULL,
`data` varchar(255) DEFAULT NULL,
`file_updated` datetime NOT NULL,
PRIMARY KEY (`file_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
The query usually takes between 1500 ms and 2500 ms to execute, with somewhere between 50 and 100 ids in the trk table. The auf table holds about 1.1 million rows, and the paf table holds about 900,000 rows.
The MySQL server runs on a 4GB Rackspace Cloud Server instance.
The Query
SELECT auf.*
FROM auf_713340 trk
INNER JOIN file auf
ON auf.track_id = trk.id
INNER JOIN publishedfile paf
ON auf.id = paf.file_id
The Query w/EXPLAIN
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE trk ALL NULL NULL NULL NULL 60
1 SIMPLE auf ref PRIMARY,FK_file_track FK_file_track 108 func 1 Using where
1 SIMPLE paf eq_ref PRIMARY PRIMARY 110 trackerdatabase_development.auf.id 1 Using where; Using index
The InnoDB configuration
[mysqld]
# The size of memory used to cache table data and indexes. The larger
# this value is, the less I/O is needed to access data in tables.
# Default value is 8MB. Recommendations point towards 70% - 80% of
# available system memory.
innodb_buffer_pool_size=2850M
# Recommendations point towards using O_DIRECT to avoid double buffering.
# innodb_flush_method=O_DIRECT
# Recommendations point towards using 256M.
# #see http://www.mysqlperformanceblog.com/2006/07/03/choosing-proper-innodb_log_file_size/
innodb_log_file_size=256M
# The size in bytes of the buffer that InnoDB uses to write to the log files
# on disk. Recommendations point towards using 4MB.
innodb_log_buffer_size=4M
# The size of the buffer used for MyISAM index blocks.
key_buffer_size=128M
Now, the question is: what can I do to get the query to perform better? After all, the tables in question are not that big and the indexes are in place...?
In the auf table, make the id field int(11) and make it auto-increment. For all int fields whose display width is greater than 11, change them to 11.
Thanks
Ripa Saha
Try this:
SELECT auf.*
FROM file auf
WHERE EXISTS
( SELECT *
FROM auf_713340 trk
WHERE auf.track_id = trk.id
)
AND EXISTS
( SELECT *
FROM publishedfile paf
WHERE auf.id = paf.file_id
) ;
I would also test and compare efficiency with the temporary table defined with the InnoDB engine, or with the (primary) index declared as a BTREE index. MEMORY tables use HASH indexes by default, not BTREE, if I remember correctly.
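A sketch of the two variants suggested above (same table, different storage/index choices):
-- variant 1: keep MEMORY, but request a BTREE index explicitly
CREATE TEMPORARY TABLE auf_713340 (
id char(36),
PRIMARY KEY (id) USING BTREE
) ENGINE=MEMORY;
-- variant 2: use InnoDB, whose primary key is a BTREE by nature
CREATE TEMPORARY TABLE auf_713340 (
id char(36),
PRIMARY KEY (id)
) ENGINE=InnoDB;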