I have a table that contains all translations of words:
CREATE TABLE `localtexts` (
`Id` int(11) NOT NULL,
`Lang` char(2) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT 'pe',
`Text` varchar(300) DEFAULT NULL,
`ShortText` varchar(100) NOT NULL,
`DbVersion` timestamp NOT NULL DEFAULT current_timestamp(),
`Status` int(11) NOT NULL DEFAULT 1
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
As example there is a table that refers to localtexts:
CREATE TABLE `composes` (
`Status` int(11) NOT NULL DEFAULT 1,
`Id` int(11) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
The table above has a foreign key Id referencing localtexts.Id. When I need to get the word in English, I do:
SELECT localtexts.text,
composes.status
FROM composes
LEFT JOIN localtexts ON composes.Id = localtexts.Id
WHERE localtexts.Lang = 'en';
I'm concerned about the performance of this approach when there are a lot of tables that need to join with localtexts.
You might find that adding the following index to the localtexts table would speed up the query:
CREATE INDEX idx ON localtexts (Lang, Id, Text);
This index covers the WHERE clause, join, and SELECT.
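To check whether MySQL actually uses it as a covering index, you can run EXPLAIN on the query (a quick sketch using the columns from your schema); when the index covers the query, the Extra column typically shows "Using index":
EXPLAIN
SELECT localtexts.Text, composes.Status
FROM composes
LEFT JOIN localtexts ON composes.Id = localtexts.Id
WHERE localtexts.Lang = 'en';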
I can't get this to work
CREATE TABLE `oc_tax_class` (
`tax_class_id` int(11) NOT NULL,
`title` varchar(255) NOT NULL,
`description` varchar(255) NOT NULL,
`date_added` datetime NOT NULL,
`date_modified` datetime NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
-- --------------------------------------------------------
--
-- Table structure for table `oc_tax_rate`
--
CREATE TABLE `oc_tax_rate` (
`tax_rate_id` int(11) NOT NULL,
`geo_zone_id` int(11) NOT NULL DEFAULT 0,
`name` varchar(255) NOT NULL,
`rate` decimal(15,4) NOT NULL DEFAULT 0.0000,
`type` char(1) NOT NULL,
`date_added` datetime NOT NULL,
`date_modified` datetime NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
-- --------------------------------------------------------
--
-- Table structure for table `oc_tax_rule`
--
CREATE TABLE `oc_tax_rule` (
`tax_rule_id` int(11) NOT NULL,
`tax_class_id` int(11) NOT NULL,
`tax_rate_id` int(11) NOT NULL,
`based` varchar(10) NOT NULL,
`priority` int(5) NOT NULL DEFAULT 1
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
Three tables. I want to set oc_tax_class.title equal to oc_tax_rate.name.
I believe, although I'm not sure, that I should
INSERT INTO oc_tax_class(title)
or
UPDATE oc_tax_class SET title = ...
SELECT oc_tax_rate.name, oc_tax_rule.tax_class_id
FROM oc_tax_rate
JOIN oc_tax_rule ON oc_tax_rate.tax_rate_id = oc_tax_rule.tax_rate_id
And then I don't know what to do next.
I need to copy values from a column in one table into another table, passing through a linking table.
MySQL supports a multi-table UPDATE syntax, but the documentation (https://dev.mysql.com/doc/refman/en/update.html) has pretty sparse examples of it.
In your case, this may work:
UPDATE oc_tax_class
JOIN oc_tax_rule USING (tax_class_id)
JOIN oc_tax_rate USING (tax_rate_id)
SET oc_tax_class.title = oc_tax_rate.name;
I did not test this. I suggest you test it first on a sample of your data, to make sure it works the way you want it to.
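One way to preview what the UPDATE would write, before running it, is a SELECT over the same joins that shows the current and the new title side by side (a sketch, nothing more):
SELECT oc_tax_class.tax_class_id,
       oc_tax_class.title AS current_title,
       oc_tax_rate.name   AS new_title
FROM oc_tax_class
JOIN oc_tax_rule USING (tax_class_id)
JOIN oc_tax_rate USING (tax_rate_id);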
I see that my query does a full table scan and takes a lot of time. I heard that adding indexes would speed this up, and I have added some to the tables. Are there any other indexes I should create to make this query faster?
My query is:
SELECT p.id, n.people_type_id, n.full_name, n.post, p.nick,
p.key_name, p.email, p.internal_user_id FROM email_routing e
JOIN people_emails p ON p.id=e.receiver_email_id
JOIN people n ON n.id = p.people_id
WHERE e.message_id = 897360 AND e.basket=1
Here is the EXPLAIN result:
EXPLAIN SELECT p.id, n.people_type_id, n.full_name, n.post, p.nick,
p.key_name, p.email, p.internal_user_id FROM email_routing e
JOIN people_emails p ON p.id=e.receiver_email_id
JOIN people n ON n.id = p.people_id
WHERE e.message_id = 897360 AND e.basket=1
id select_type table partitions type possible_keys key key_len ref rows filtered Extra
1 SIMPLE n NULL ALL PRIMARY NULL NULL NULL 1 100.00 NULL
1 SIMPLE p NULL ALL PRIMARY NULL NULL NULL 3178 10.00 Using where; Using join buffer (Block Nested Loop)
1 SIMPLE e NULL ref bk1 bk1 4 server.p.id 440 1.00 Using where; Using
And here are the table structures:
SHOW CREATE TABLE people_emails;
CREATE TABLE `people_emails` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`nick` varchar(255) NOT NULL,
`email` varchar(255) NOT NULL,
`key_name` varchar(255) NOT NULL,
`people_id` int(11) NOT NULL,
`status` int(11) NOT NULL DEFAULT '0',
`activity` int(11) NOT NULL,
`internal_user_id` int(11) NOT NULL,
PRIMARY KEY (`id`),
FULLTEXT KEY `email` (`email`)
) ENGINE=MyISAM AUTO_INCREMENT=22114 DEFAULT CHARSET=utf8
SHOW CREATE TABLE email_routing;
CREATE TABLE `email_routing` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`message_id` int(11) NOT NULL,
`sender_email_id` int(11) NOT NULL,
`receiver_email_id` int(11) NOT NULL,
`basket` int(11) NOT NULL,
`status` int(11) NOT NULL,
`popup` int(11) NOT NULL DEFAULT '0',
`tm` int(11) NOT NULL DEFAULT '0',
KEY `id` (`id`),
KEY `bk1` (`receiver_email_id`,`status`,`sender_email_id`,`message_id`,`basket`),
KEY `bk2` (`sender_email_id`,`tm`)
) ENGINE=InnoDB AUTO_INCREMENT=1054618 DEFAULT CHARSET=utf8
SHOW CREATE TABLE people;
CREATE TABLE `people` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`fname` varchar(255) CHARACTER SET cp1251 NOT NULL,
`lname` varchar(255) CHARACTER SET cp1251 NOT NULL,
`patronymic` varchar(255) CHARACTER SET cp1251 NOT NULL,
`gender` tinyint(1) NOT NULL,
`full_name` varchar(255) NOT NULL DEFAULT ' ',
`category` int(11) NOT NULL,
`people_type_id` int(255) DEFAULT NULL,
`tags` varchar(255) CHARACTER SET cp1251 NOT NULL,
`job` varchar(255) CHARACTER SET cp1251 NOT NULL,
`post` varchar(255) CHARACTER SET cp1251 NOT NULL,
`profession` varchar(255) CHARACTER SET cp1251 DEFAULT NULL,
`zip` varchar(16) CHARACTER SET cp1251 NOT NULL,
`country` int(11) DEFAULT NULL,
`region` varchar(10) NOT NULL,
`city` varchar(255) CHARACTER SET cp1251 NOT NULL,
`address` varchar(255) CHARACTER SET cp1251 NOT NULL,
`address_date` date DEFAULT NULL,
`inner` tinyint(4) NOT NULL,
`contact_through` varchar(255) DEFAULT '',
`next_call` date NOT NULL,
`additional` text CHARACTER SET cp1251 NOT NULL,
`user_id` int(11) NOT NULL,
`changed` datetime NOT NULL,
`status` int(11) DEFAULT NULL,
`nick` varchar(255) DEFAULT NULL,
`birthday` date DEFAULT NULL,
`last_update_ts` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`area` text NOT NULL,
`reviewed_` tinyint(4) NOT NULL,
`phones_old` text NOT NULL,
`post_sticker` text NOT NULL,
`permissions` int(120) NOT NULL DEFAULT '0',
`internal_user_id` int(11) NOT NULL,
PRIMARY KEY (`id`),
KEY `most_used` (`category`,`status`,`city`,`lname`,`next_call`),
KEY `registrars` (`category`,`status`,`contact_through`,`next_call`),
FULLTEXT KEY `lname` (`lname`),
FULLTEXT KEY `fname` (`fname`),
FULLTEXT KEY `mname` (`patronymic`),
FULLTEXT KEY `Full Name` (`full_name`)
) ENGINE=MyISAM AUTO_INCREMENT=415009 DEFAULT CHARSET=utf8
How do I choose columns for building indexes? Should I pick text columns too, or does that only work with numeric columns?
The table email_routing seems to have 1,054,618 rows, and you are trying to find one row, by message_id:
e.message_id = 897360
But message_id must be indexed to speed up the query. message_id is part of the index bk1, but that is not enough, because message_id is not the first column of that index.
email_routing needs
INDEX ( message_id, basket, -- first, in either order
receiver_email_id ) -- for "covering"
Your bk1 starts with receiver_email_id; this is not nearly as good.
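Concretely, the suggested index could be added like this (the index name is just illustrative):
ALTER TABLE email_routing
  ADD INDEX idx_message_basket_receiver (message_id, basket, receiver_email_id);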
Include column(s) in WHERE that are tested with =.
Include other columns from WHERE, GROUP BY, and ORDER BY (none in your case); the order is important, but beyond the scope of this discussion.
Include any other columns of the same table used anywhere in the query -- this is to make it a "covering" index. But don't bother if this would lead to more than, say, 5 columns or would involve TEXT, which cannot be in an index.
Then move on to the other tables. In both JOINs, it seems that they would be hit by their PRIMARY KEYs (JOIN x ON x.id = ...)
More discussion: Cookbook for creating indexes
On other issues...
You really should move to InnoDB. As of 5.6, it includes FULLTEXT, but there are some differences. In particular, you may need more fulltext indexes. For example, MATCH(lname, fname) requires FULLTEXT(lname, fname).
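For example, a query that does MATCH(lname, fname) AGAINST(...) would need a combined index such as this after the conversion (a sketch; the index name is arbitrary):
ALTER TABLE people ADD FULLTEXT INDEX ft_lname_fname (lname, fname);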
Do you really want to stick to cp1251? It limits your internationalization mostly to English, Russian, Bulgarian, Serbian, and Macedonian. And it is unclear how well FULLTEXT (MyISAM or InnoDB) will work with those non-English languages.
INTs are always 4 bytes; consider using smaller versions.
Is there really only one row in people? The Optimizer decided that was the best table to start with, but it wasn't. I'm hoping my improved index on email_routing will trick it into starting with that table, which will definitely be optimal.
My problem is a slow search query with a one-to-many relationship between the tables. My tables look like this.
Table Assignment
CREATE TABLE `Assignment` (
`Id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`ProjectId` int(10) unsigned NOT NULL,
`AssignmentTypeId` smallint(5) unsigned NOT NULL,
`AssignmentNumber` varchar(30) NOT NULL,
`AssignmentNumberExternal` varchar(50) DEFAULT NULL,
`DateStart` datetime DEFAULT NULL,
`DateEnd` datetime DEFAULT NULL,
`DateDeadline` datetime DEFAULT NULL,
`DateCreated` datetime DEFAULT NULL,
`Deleted` datetime DEFAULT NULL,
`Lat` double DEFAULT NULL,
`Lon` double DEFAULT NULL,
PRIMARY KEY (`Id`),
KEY `idx_assignment_assignment_type_id` (`AssignmentTypeId`),
KEY `idx_assignment_assignment_number` (`AssignmentNumber`),
KEY `idx_assignment_assignment_number_external`
(`AssignmentNumberExternal`)
) ENGINE=InnoDB AUTO_INCREMENT=5280 DEFAULT CHARSET=utf8;
Table ExtraFields
CREATE TABLE `ExtraFields` (
`assignment_id` int(10) unsigned NOT NULL,
`name` varchar(30) NOT NULL,
`value` text,
PRIMARY KEY (`assignment_id`,`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
My search query
SELECT
`Assignment`.`Id`, COL_5_72, COL_5_73, COL_5_74, COL_5_75, COL_5_76,
COL_5_77 FROM (
SELECT
`Assignment`.`Id`,
`Assignment`.`AssignmentNumber` AS COL_5_72,
`Assignment`.`AssignmentNumberExternal` AS COL_5_73 ,
`AssignmentType`.`Name` AS COL_5_74,
`Assignment`.`DateStart` AS COL_5_75,
`Assignment`.`DateEnd` AS COL_5_76,
`Assignment`.`DateDeadline` AS COL_5_77,
CASE WHEN `ExtraField`.`Name` = "WorkDistrict" THEN
`ExtraField`.`Value` END AS COL_5_78
FROM `Assignment`
LEFT JOIN `ExtraFields` as `ExtraField` on
`ExtraField`.`assignment_id` = `Assignment`.`Id`
WHERE `Assignment`.`Deleted` IS NULL -- Assignment should not be removed.
AND (1=1) -- Add assignment filters.
) AS q1
GROUP BY `Assignment`.`Id`
HAVING 1 = 1
AND COL_5_78 LIKE '%Amsterdam East%'
ORDER BY COL_5_72 ASC, COL_5_73 ASC;
When the table has only around 3,500 records, my query takes a couple of seconds to execute and return the results.
What is a better way to search in the related data? Should I just add a JSON field to the Assignment table and use the MySQL 5.7 JSON query features? Or did I make a mistake in designing my database?
You are selecting from a subquery, which forces MySQL to create an unindexed temporary table for each execution. Remove the subquery (you really don't need it here) and it will be much faster.
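A possible flattened form, as a sketch only (the AssignmentType join is left out because its definition isn't shown, and the district filter moves from HAVING into the join and WHERE):
SELECT
    `Assignment`.`Id`,
    `Assignment`.`AssignmentNumber`         AS COL_5_72,
    `Assignment`.`AssignmentNumberExternal` AS COL_5_73,
    `Assignment`.`DateStart`                AS COL_5_75,
    `Assignment`.`DateEnd`                  AS COL_5_76,
    `Assignment`.`DateDeadline`             AS COL_5_77
FROM `Assignment`
JOIN `ExtraFields` AS `ExtraField`
    ON  `ExtraField`.`assignment_id` = `Assignment`.`Id`
    AND `ExtraField`.`name` = 'WorkDistrict'   -- uses the (assignment_id, name) primary key
WHERE `Assignment`.`Deleted` IS NULL
  AND `ExtraField`.`value` LIKE '%Amsterdam East%'
ORDER BY COL_5_72 ASC, COL_5_73 ASC;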
I have the following table:
CREATE TABLE `Triples` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`Subject` longtext COLLATE utf8mb4_unicode_ci,
`Predicate` longtext COLLATE utf8mb4_unicode_ci,
`Object` longtext COLLATE utf8mb4_unicode_ci,
`SubHash` binary(16) DEFAULT NULL,
`PredHash` binary(16) DEFAULT NULL,
`ObHash` binary(16) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `PredHash` (`PredHash`),
KEY `ObHash` (`ObHash`),
KEY `SubHash` (`SubHash`)
) ENGINE=InnoDB
It contains about 800 million rows.
Now I want to create two other tables.
One table is Nodes:
CREATE TABLE `Nodes` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`val_hash` binary(16) NOT NULL,
`val` longtext COLLATE utf8mb4_unicode_ci NOT NULL,
`subjectCount` bigint(20) unsigned NOT NULL DEFAULT '0',
`objectCount` bigint(20) unsigned NOT NULL DEFAULT '0',
PRIMARY KEY (`id`)
)
It will store each Subject and Object from Triples exactly once (val_hash will become a key after the data is inserted).
The question is: which of these INSERTs performs better?
INSERT INTO Nodes(val,val_hash)
SELECT j.val,j.val_hash FROM (SELECT Subject as val, SubHash as val_hash
FROM Triples GROUP BY val_hash
UNION
SELECT Object as val, ObHash as val_hash
FROM Triples GROUP BY val_hash) as j
GROUP BY j.val_hash
OR The following:
ALTER TABLE Nodes
ADD UNIQUE KEY(val_hash);
INSERT IGNORE INTO Nodes(val,val_hash)
SELECT Subject,SubHash FROM Triples GROUP BY SubHash;
INSERT IGNORE INTO Nodes(val,val_hash)
SELECT Object,ObHash FROM Triples GROUP BY ObHash;
I ask because a key adds complexity to inserts, but a union requires a temporary table, and I don't know which of the two is better in this situation.
I've got a question about MySQL performance.
These are my tables:
(about 140,000 records)
CREATE TABLE IF NOT EXISTS `article` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`label` varchar(256) COLLATE utf8_unicode_ci NOT NULL,
`title` varchar(256) COLLATE utf8_unicode_ci NOT NULL,
`intro` text COLLATE utf8_unicode_ci NOT NULL,
`content` text COLLATE utf8_unicode_ci NOT NULL,
`date` int(11) NOT NULL,
`active` int(1) NOT NULL,
`language_id` int(11) NOT NULL,
`category_id` int(11) NOT NULL,
`indexed` int(1) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=132911 ;
(about 400,000 records)
CREATE TABLE IF NOT EXISTS `article_category` (
`article_id` int(11) NOT NULL,
`category_id` int(11) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
RUNNING THIS COUNT QUERY:
SELECT SQL_NO_CACHE COUNT(id) as total
FROM (`article`)
LEFT JOIN `article_category` ON `article_category`.`article_id` = `article`.`id`
WHERE `article`.`language_id` = 1
AND `article_category`.`category_id` = '<catid>'
This query takes a lot of resources, so I am wondering how to optimize it.
After executing, it is being cached, so after the first run I am fine.
RUNNING THE EXPLAIN FUNCTION:
AFTER CREATING AN INDEX:
ALTER TABLE `article_category` ADD INDEX ( `article_id` , `category_id` ) ;
After adding indexes and changing the LEFT JOIN to a JOIN, the query runs a lot faster!
Thanks for these fast replys :)
QUERY I USE NOW (I removed the language_id because it was not that necessary):
SELECT COUNT(id) as total
FROM (`article`)
JOIN `article_category` ON `article_category`.`article_id` = `article`.`id`
AND `article_category`.`category_id` = '<catid>'
I've read something about forcing an index, but I think that's not necessary anymore because the tables are already indexed, right?
Thanks a lot!
Martijn
You haven't created the necessary indexes on the tables:
Table article_category - create a compound index on (article_id, category_id)
Table article - create a compound index on (id, language_id)
If this doesn't help, post the EXPLAIN output.
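For example (a sketch; the index names are just illustrative):
ALTER TABLE article_category ADD INDEX idx_article_category (article_id, category_id);
ALTER TABLE article ADD INDEX idx_article_id_language (id, language_id);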
The columns used in a JOIN condition should have an index, so you need to index article_id.