Is it possible for mysqldump to dump one query per line?
For example, it currently dumps CREATE TABLE statements like so:
--
-- Table structure for table `post`
--
CREATE TABLE IF NOT EXISTS `post` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`title` varchar(160) NOT NULL,
`slug` varchar(255) NOT NULL,
`url` varchar(600) NOT NULL,
`domain` varchar(90) NOT NULL,
`author` int(11) NOT NULL,
`description` text,
`category` int(11) DEFAULT NULL,
`score` int(11) NOT NULL,
`ip` varchar(255) DEFAULT NULL,
`created` datetime NOT NULL,
`comment_count` int(11) NOT NULL DEFAULT '0',
PRIMARY KEY (`id`),
UNIQUE KEY `uc_slug` (`slug`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;
-- --------------------------------------------------------
--
-- Table structure for table `users`
--
CREATE TABLE IF NOT EXISTS `users` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`login` varchar(12) NOT NULL,
`password` varchar(45) NOT NULL,
`email` varchar(150) NOT NULL,
`about` varchar(300) DEFAULT NULL,
`last_visit` datetime DEFAULT NULL,
`ip` varchar(255) DEFAULT NULL,
`created` datetime NOT NULL,
`perm_mod` int(11) DEFAULT NULL,
`perm_admin` int(11) DEFAULT NULL,
`post_count` int(11) NOT NULL DEFAULT '0',
`comment_count` int(11) NOT NULL DEFAULT '0',
`vote_count` int(11) NOT NULL DEFAULT '0',
`voted_count` int(11) NOT NULL DEFAULT '0',
`forgot_key` varchar(150) DEFAULT NULL,
`cookie_key` varchar(40) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=3;
Instead, I want it to dump like this:
--
-- Table structure for table `post`
--
CREATE TABLE IF NOT EXISTS `post` (`id` int(11) NOT NULL AUTO_INCREMENT,`title` varchar(160) NOT NULL,`slug` varchar(255) NOT NULL,`url` varchar(600) NOT NULL,`domain` varchar(90) NOT NULL,`author` int(11) NOT NULL,`description` text,`category` int(11) DEFAULT NULL,`score` int(11) NOT NULL,`ip` varchar(255) DEFAULT NULL,`created` datetime NOT NULL,`comment_count` int(11) NOT NULL DEFAULT '0',PRIMARY KEY (`id`),UNIQUE KEY `uc_slug` (`slug`)) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;
-- --------------------------------------------------------
--
-- Table structure for table `users`
--
CREATE TABLE IF NOT EXISTS `users` (`id` int(11) NOT NULL AUTO_INCREMENT,`login` varchar(12) NOT NULL,`password` varchar(45) NOT NULL,`email` varchar(150) NOT NULL,`about` varchar(300) DEFAULT NULL,`last_visit` datetime DEFAULT NULL,`ip` varchar(255) DEFAULT NULL,`created` datetime NOT NULL,`perm_mod` int(11) DEFAULT NULL,`perm_admin` int(11) DEFAULT NULL,`post_count` int(11) NOT NULL DEFAULT '0',`comment_count` int(11) NOT NULL DEFAULT '0',`vote_count` int(11) NOT NULL DEFAULT '0',`voted_count` int(11) NOT NULL DEFAULT '0',`forgot_key` varchar(150) DEFAULT NULL,`cookie_key` varchar(40) DEFAULT NULL,PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=3;
I have been going through the args for mysqldump and can't find anything that would do this.
Indeed, there is no command-line switch to achieve this. Instead, I recommend doing the dump normally and then opening the dump file in a good text editor like Notepad++.
First select the block to convert to a single line, then select the menu option shown in the following image.
This is the result (repeat for every block you want to convert to a single line):
I had a similar task: formatting a mysqldump output file so that it is compatible with the MySQL init_file option. Here is the solution:
sed ':a;N;$!ba;s/\n/THISISUNIQUESTRING/g' | sed -e 's/;THISISUNIQUESTRING/;\n/g' | sed -e 's/THISISUNIQUESTRING//g'
The first sed replaces all newline characters with a placeholder string that is not present in the file. The next sed replaces the combination of this string and a semicolon with a semicolon and a newline character. This is needed because table data can contain string values that include ";", so straightforwardly removing all newline characters and re-adding one after every semicolon is not universally safe.
After that, the remaining placeholder strings are simply removed from the text.
By the way, I was running mysqldump with the --comments=false option, since comments are not allowed in an init_file script either.
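Putting it together, the whole conversion might look like the following (the database and file names are placeholders; --comments=false is the option mentioned above):
mysqldump --comments=false mydb > dump.sql
sed ':a;N;$!ba;s/\n/THISISUNIQUESTRING/g' dump.sql | sed -e 's/;THISISUNIQUESTRING/;\n/g' | sed -e 's/THISISUNIQUESTRING//g' > init.sql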
If you truly want individual queries, the most pragmatic solution in my book is to use mysqldump's built-in functionality. The --skip-extended-insert flag is what you want; it is not made clear in the documentation, but it gives you the dump format you want.
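For example, a minimal invocation would be the following (the database and output file names are placeholders); with this flag, each table row is written as its own INSERT statement on a single line:
mysqldump --skip-extended-insert mydb > dump.sql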
As sometimes happens, a block of code fails and, no matter how hard you try, you can't figure out where the problem is. In these cases a second pair of eyes sometimes sees what the brain doesn't register. I think this is one of those cases. It's almost certainly my fault and I did something wrong, but I honestly can't figure out where.
This is the SELECT I wrote:
SELECT
`a.dev_act_id`,
`a.dev_act_code`,
`a.dev_act_desc`,
`a.dev_act_type`,
`a.lang_code`,
`pa.dev_plan_act_id`,
`pa.action_status`,
`pa.action_expiration`,
`cb.competence_id`,
`cb.credits` AS `avail_credits`,
`w.credits` AS `settled_credits`
FROM
`pbq_idp_plan_actions` AS pa
INNER JOIN `pbq_idp_dev_actions` AS a ON `pa.dev_act_id` = `a.dev_act_id`
INNER JOIN `pbq_idp_credit_bags` AS cb ON `pa.dev_plan_act_id` = `cb.dev_plan_act_id`
INNER JOIN `pbq_idp_wallets` AS w ON `a.dev_act_id` = `w.dev_act_id`
WHERE
`pa.dev_plan_id` = 0
ORDER BY
`cb.competence_id`
The structure of the pbq_idp_dev_actions table, whose alias is 'a', is as follows:
CREATE TABLE `pbq_idp_dev_actions` (
`dev_act_id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`wallet_id` bigint(20) unsigned NOT NULL,
`dev_act_code` text COLLATE utf8mb4_unicode_520_ci NOT NULL,
`dev_act_desc` longtext COLLATE utf8mb4_unicode_520_ci NOT NULL,
`dev_act_type` tinyint(1) unsigned NOT NULL DEFAULT '0',
`lang_code` varchar(7) COLLATE utf8mb4_unicode_520_ci NOT NULL,
PRIMARY KEY (`dev_act_id`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_520_ci
It would seem that everything is fine, but I get the following error and I don't understand why. The alias is correct, the field exists, yet the system can't find it.
WordPress database error: [Unknown column 'a.dev_act_id' in 'field list']
By request, here are the other three tables:
CREATE TABLE `pbq_idp_plan_actions` (
`dev_plan_act_id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`dev_act_id` bigint(20) unsigned NOT NULL,
`dev_plan_id` bigint(20) unsigned NOT NULL,
`action_status` tinyint(2) unsigned NOT NULL DEFAULT '0',
`not_earlier` datetime DEFAULT NULL,
`deadline` datetime DEFAULT NULL,
PRIMARY KEY (`dev_plan_act_id`),
KEY `dev_act_id` (`dev_act_id`,`dev_plan_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_520_ci
CREATE TABLE `pbq_idp_credit_bags` (
`credit_bag_id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`dev_plan_act_id` bigint(20) unsigned NOT NULL,
`competence_id` varchar(4) COLLATE utf8mb4_unicode_520_ci NOT NULL,
`credits` tinyint(3) NOT NULL DEFAULT '0',
PRIMARY KEY (`credit_bag_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_520_ci
CREATE TABLE `pbq_idp_wallets` (
`wallet_id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`dev_act_id` bigint(20) unsigned NOT NULL,
`competence_id` varchar(4) COLLATE utf8mb4_unicode_520_ci NOT NULL,
`credits` tinyint(3) NOT NULL DEFAULT '0',
PRIMARY KEY (`wallet_id`),
KEY `dev_act_id` (`dev_act_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_520_ci
The problem is the incorrect usage of backticks, as can be seen in this example; also, the column action_expiration doesn't exist in the pbq_idp_plan_actions table.
Backticks are used in MySQL to quote schema object names. Do not put the alias and the column name together inside a single pair of backticks; that makes MySQL look for a column literally named a.dev_act_id.
Not valid
`a.dev_act_id`
Valid
a.`dev_act_id`
In your case, the backticks are excessive; you can simply omit them.
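For illustration, a corrected version of the query from the question might look like this (backticks fixed, and the nonexistent pa.action_expiration column removed per the note above):
SELECT
    a.`dev_act_id`,
    a.`dev_act_code`,
    a.`dev_act_desc`,
    a.`dev_act_type`,
    a.`lang_code`,
    pa.`dev_plan_act_id`,
    pa.`action_status`,
    cb.`competence_id`,
    cb.`credits` AS `avail_credits`,
    w.`credits` AS `settled_credits`
FROM
    `pbq_idp_plan_actions` AS pa
    INNER JOIN `pbq_idp_dev_actions` AS a ON pa.`dev_act_id` = a.`dev_act_id`
    INNER JOIN `pbq_idp_credit_bags` AS cb ON pa.`dev_plan_act_id` = cb.`dev_plan_act_id`
    INNER JOIN `pbq_idp_wallets` AS w ON a.`dev_act_id` = w.`dev_act_id`
WHERE
    pa.`dev_plan_id` = 0
ORDER BY
    cb.`competence_id`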
I'm trying to set up a user profile page for a job site. The database I plan to use is MySQL.
After looking into a few database designs, I came up with this schema.
First, the user management tables:
CREATE TABLE `user` (
`user_id` int(11) NOT NULL,
`firstname` varchar(32) NOT NULL,
`lastname` varchar(32) NOT NULL,
`email` varchar(96) NOT NULL,
`mobile_number` varchar(32) NOT NULL,
`password` varchar(40) NOT NULL,
`salt` varchar(9) NOT NULL,
`address_id` int(11) NOT NULL DEFAULT '0',
`ip` varchar(40) NOT NULL,
`status` tinyint(1) NOT NULL,
`approved` tinyint(1) NOT NULL,
`registration_date` datetime NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
CREATE TABLE `user_address` (
`user_id` int(11) NOT NULL,
`city` varchar(128) NOT NULL,
`work_city` varchar(128) NOT NULL,
`postal_code` varchar(10) NOT NULL,
`country_id` int(11) NOT NULL DEFAULT '0'
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
CREATE TABLE `user_description` (
`user_id` int(11) NOT NULL,
`description` text NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
and then the education and work experience tables:
CREATE TABLE `education_detail` (
`user_id` int(11) NOT NULL,
`certificate_degree_name` varchar(255) DEFAULT NULL,
`major` varchar(255) DEFAULT NULL,
`institute_university_name` varchar(255) DEFAULT NULL,
`start_date` date NOT NULL DEFAULT '0000-00-00',
`completion_date` date NOT NULL DEFAULT '0000-00-00'
);
CREATE TABLE `experience_detail` (
`user_id` int(11) NOT NULL,
`is_current_job` int(2) DEFAULT NULL,
`start_date` date NOT NULL DEFAULT '0000-00-00',
`end_date` date NOT NULL DEFAULT '0000-00-00',
`job_title` varchar(255) DEFAULT NULL,
`company_name` varchar(255) DEFAULT NULL,
`job_location_city` varchar(255) DEFAULT NULL,
`job_location_state` varchar(255) DEFAULT NULL,
`job_location_country` varchar(255) DEFAULT NULL,
`job_description` varchar(255) DEFAULT NULL
);
Note that user_id in the user_address, user_description, education_detail, and experience_detail tables is a foreign key referencing the user table.
There are a few more tables, such as skills, certifications, etc., in which I also plan to use user_id as an FK.
My question: is this database design good enough? Can you suggest what more should be done to make the design better?
Keep in mind that not everyone will have work experience; some may be freshers.
Use InnoDB, not MyISAM. (There are many Q&A explaining 'why'.)
NULL or an empty string is perfectly fine for a missing description; do you have any particular argument against that? (Meanwhile, InnoDB is more efficient at handling optional big strings.)
Every table should have a PRIMARY KEY; you don't seem to have any. The first table probably needs user_id as the PK. Read about AUTO_INCREMENT.
As with description, why is address in a separate table?
May I suggest this for country name/code/id:
country_code CHAR(2) CHARACTER SET ascii
Education is 1:many from users, so user_id cannot be the PK. Ditto for jobs.
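To make those suggestions concrete, here is a rough sketch of how the user and education_detail tables might look with them applied; the column lists are abbreviated and the exact types are assumptions, not a definitive design:
CREATE TABLE `user` (
  `user_id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `firstname` varchar(32) NOT NULL,
  `lastname` varchar(32) NOT NULL,
  `email` varchar(96) NOT NULL,
  -- address columns folded back into the user row, per the advice above
  `city` varchar(128) NOT NULL,
  `postal_code` varchar(10) NOT NULL,
  `country_code` char(2) CHARACTER SET ascii NOT NULL,
  -- NULL (or an empty string) when the user has not written a description
  `description` text,
  `registration_date` datetime NOT NULL,
  PRIMARY KEY (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE `education_detail` (
  -- its own surrogate PK; user_id alone cannot be the PK because education is 1:many
  `education_id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `user_id` int(11) unsigned NOT NULL,
  `certificate_degree_name` varchar(255) DEFAULT NULL,
  `major` varchar(255) DEFAULT NULL,
  `institute_university_name` varchar(255) DEFAULT NULL,
  `start_date` date DEFAULT NULL,
  `completion_date` date DEFAULT NULL,
  PRIMARY KEY (`education_id`),
  KEY `idx_user_id` (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;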
Is there a way I can speed this up? Right now it takes an unbelievably long time to run.
SELECT trades.*, trader1.user_name as trader1_name,
trader2.user_name as trader2_name FROM trades
LEFT JOIN logs_players trader1 ON trader1.user_id = trader1_account_id
LEFT JOIN logs_players trader2 ON trader2.user_id = trader2_account_id
ORDER BY time_added
LIMIT 20 OFFSET 0;
I've done as much as I could in terms of searching online for a solution, or even just trying to get more information about why it's taking so long to execute.
The query takes about 45 seconds or so to complete.
Create statements:
CREATE TABLE `trades` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`trader1_account_id` int(11) DEFAULT NULL,
`trader2_account_id` int(11) DEFAULT NULL,
`trader1_value` bigint(20) DEFAULT NULL,
`trader2_value` bigint(20) DEFAULT NULL,
`trader1_ip` varchar(16) DEFAULT NULL,
`trader2_ip` varchar(16) DEFAULT NULL,
`world` int(11) DEFAULT NULL,
`x` int(11) DEFAULT NULL,
`z` int(11) DEFAULT NULL,
`level` int(11) DEFAULT NULL,
`trader1_user` varchar(12) DEFAULT NULL,
`trader2_user` varchar(12) DEFAULT NULL,
`time_added` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8
CREATE TABLE `logs_players` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`user_id` int(11) DEFAULT NULL,
`user_name` varchar(20) DEFAULT NULL,
`world_stage` varchar(20) DEFAULT NULL,
`world_type` varchar(20) DEFAULT NULL,
`bank` longtext,
`inventory` longtext,
`equipment` longtext,
`total_wealth` mediumtext,
`total_play_time` mediumtext,
`rights` int(11) DEFAULT NULL,
`icon` int(11) DEFAULT NULL,
`ironmode` int(11) DEFAULT NULL,
`x` int(11) DEFAULT NULL,
`z` int(11) DEFAULT NULL,
`level` int(11) DEFAULT NULL,
`last_ip` varchar(16) DEFAULT NULL,
`last_online` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`muted_until` timestamp NULL DEFAULT NULL,
`banned_until` timestamp NULL DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8
I filled a sample database with 10k rows in each table and found that a few indexes were what you needed:
ALTER TABLE `logs_players` ADD INDEX(`user_id`);
ALTER TABLE `trades` ADD INDEX(`time_added`);
The main index we need is the one on user_id. With that change alone, the query time went from 20.1390 seconds to 0.0130 seconds.
We can get that down even further by also adding the index on time_added, which makes the sorting a lot faster; with both in place, we ended up with an impressive query time.
Do some research on indexes! A simple EXPLAIN would have shown you that the query was using a filesort (which is rather bad!).
After adding the indexes, the EXPLAIN output looks a lot better.
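For reference, the check is just the original query from the question prefixed with EXPLAIN; before the indexes are added, the Extra column should show "Using filesort", and afterwards it should not:
EXPLAIN SELECT trades.*, trader1.user_name as trader1_name,
    trader2.user_name as trader2_name FROM trades
    LEFT JOIN logs_players trader1 ON trader1.user_id = trader1_account_id
    LEFT JOIN logs_players trader2 ON trader2.user_id = trader2_account_id
    ORDER BY time_added
    LIMIT 20 OFFSET 0;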
I have an iOS app that currently uses MySQL as the database backend to store about 2,000 records and 10,000 photos. I want to refactor my Objective-C code to use Parse instead of MySQL, and I'm wondering what would be the best way to move my MySQL data to Parse.
Here is the current structure of my MySQL database.
SET SQL_MODE="NO_AUTO_VALUE_ON_ZERO";
--
-- Database: `mysqlToParsePlatform`
--
-- --------------------------------------------------------
--
-- Table structure for table `awesome_authentication`
--
CREATE TABLE `awesome_authentication` (
`authentication_id` int(11) NOT NULL auto_increment,
`username` varchar(100) NOT NULL,
`password` varchar(100) NOT NULL,
`name` varchar(100) NOT NULL,
`role_id` int(11) NOT NULL,
`is_deleted` int(11) NOT NULL,
`deny_access` int(11) NOT NULL,
PRIMARY KEY (`authentication_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=6 ;
-- --------------------------------------------------------
--
-- Table structure for table `awesome_categories`
--
CREATE TABLE `awesome_categories` (
`category_id` int(11) NOT NULL auto_increment,
`category` varchar(100) NOT NULL,
`category_icon` varchar(100) NOT NULL,
`created_at` int(11) NOT NULL,
`updated_at` int(11) NOT NULL,
`is_deleted` int(11) NOT NULL,
PRIMARY KEY (`category_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=4 ;
-- --------------------------------------------------------
--
-- Table structure for table `awesome_news`
--
CREATE TABLE `awesome_news` (
`news_id` int(11) NOT NULL auto_increment,
`news_content` text NOT NULL,
`news_title` varchar(100) NOT NULL,
`news_url` varchar(100) NOT NULL,
`photo_url` varchar(200) NOT NULL,
`created_at` int(11) NOT NULL,
`updated_at` int(11) NOT NULL,
`is_deleted` int(11) NOT NULL,
PRIMARY KEY (`news_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=3 ;
-- --------------------------------------------------------
--
-- Table structure for table `awesome_photos`
--
CREATE TABLE `awesome_photos` (
`photo_id` int(11) NOT NULL auto_increment,
`photo_url` varchar(200) NOT NULL,
`thumb_url` varchar(200) NOT NULL,
`store_id` int(11) NOT NULL,
`created_at` int(11) NOT NULL,
`updated_at` int(11) NOT NULL,
`is_deleted` int(11) NOT NULL,
PRIMARY KEY (`photo_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=10167 ;
-- --------------------------------------------------------
--
-- Table structure for table `awesome_ratings`
--
CREATE TABLE `awesome_ratings` (
`rating_id` int(11) NOT NULL auto_increment,
`rating` int(11) NOT NULL,
`user_id` int(11) NOT NULL,
`created_at` int(11) NOT NULL,
`updated_at` int(11) NOT NULL,
`store_id` int(11) NOT NULL,
PRIMARY KEY (`rating_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=32 ;
-- --------------------------------------------------------
--
-- Table structure for table `awesome_reviews`
--
CREATE TABLE `awesome_reviews` (
`review_id` int(11) NOT NULL auto_increment,
`review` text NOT NULL,
`store_id` int(11) NOT NULL,
`user_id` int(11) NOT NULL,
`created_at` int(11) NOT NULL,
`updated_at` int(11) NOT NULL,
`is_deleted` int(11) NOT NULL,
PRIMARY KEY (`review_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=5 ;
-- --------------------------------------------------------
--
-- Table structure for table `awesome_stores`
--
CREATE TABLE `awesome_stores` (
`store_id` int(11) NOT NULL auto_increment,
`store_name` varchar(100) NOT NULL,
`store_address` varchar(160) NOT NULL,
`store_desc` text NOT NULL,
`lat` varchar(20) NOT NULL,
`lon` varchar(20) NOT NULL,
`sms_no` varchar(30) NOT NULL,
`phone_no` varchar(30) NOT NULL,
`email` varchar(30) NOT NULL,
`website` varchar(100) NOT NULL,
`category_id` int(11) NOT NULL,
`created_at` int(11) NOT NULL,
`updated_at` int(11) NOT NULL,
`featured` int(11) NOT NULL,
`is_deleted` int(11) NOT NULL,
PRIMARY KEY (`store_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=2027 ;
-- --------------------------------------------------------
--
-- Table structure for table `awesome_users`
--
CREATE TABLE `awesome_users` (
`user_id` int(11) NOT NULL auto_increment,
`full_name` varchar(100) NOT NULL,
`username` varchar(40) NOT NULL,
`password` varchar(40) NOT NULL,
`login_hash` varchar(200) NOT NULL,
`facebook_id` text NOT NULL,
`twitter_id` text NOT NULL,
`email` varchar(100) NOT NULL,
`deny_access` int(11) NOT NULL,
`thumb_url` varchar(100) NOT NULL,
`photo_url` varchar(100) NOT NULL,
PRIMARY KEY (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=19 ;
After spending a lot of time looking for answers, trying to research migration services, and looking into writing custom scripts, I decided to try the easiest route first. And that worked!
Hopefully this will help others looking to move similar records from MySQL to Parse.
In order to get my 2,000+ store records and 10,000+ photo records out of the MySQL database, I went to phpMyAdmin and exported the awesome_stores and awesome_photos tables to two separate CSV files using the settings pictured below.
Once you have your CSV files, open the Parse Data Browser, go to the Core tab, look under the Data section pictured below, and click Import.
That will bring up the import dialog box, which looks like this. Name your new custom class and then add your .csv file. Make sure that phpMyAdmin does not add an extra line to the head of your .csv; if it does, you will get an error when trying to import into Parse.
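If you prefer the command line to phpMyAdmin for the export step, a roughly equivalent CSV export can be done with SELECT ... INTO OUTFILE (the output path is a placeholder, and the server's secure_file_priv setting must permit writing there):
SELECT * FROM awesome_stores
INTO OUTFILE '/tmp/awesome_stores.csv'
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n';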
I have 3 tables that I'm performing joins on, inserting the resulting data into another table. The query takes anywhere between 15 and 30 minutes depending on the dataset. The tables I'm selecting from and joining on are at least 25k records each, but they will quickly grow to 500k+.
I tried adding indexes on the fields, but that still isn't helping much. Are there any other things I can try, or are joins at this scale just going to take this long?
Here is the query I'm trying to perform:
INSERT INTO audience.topitem
(runs_id, total_training_count, item, standard_index_value, significance, seed_count, nonseed_count, prod, model_type, level_1, level_2, level_3, level_4, level_5)
SELECT 5, seed_count + nonseed_count AS total_training_count,
ii.item, standard_index_value, NULL, seed_count, nonseed_count,
standard_index_value * seed_count AS prod, 'site', topic_L1, topic_L2, topic_L3, topic_L4, topic_L5
FROM audience.item_indexes ii
LEFT JOIN audience.usercounts uc ON ii.item = uc.item AND ii.runs_id = uc.runs_id
LEFT JOIN categorization.categorization at on ii.item = at.url
WHERE ii.runs_id = 5
Table: audience.item_indexes
CREATE TABLE `item_indexes` (
`item` varchar(1024) DEFAULT NULL,
`standard_index_value` float DEFAULT NULL,
`runs_id` int(11) DEFAULT NULL,
`model_type` enum('site','term','combo') DEFAULT NULL,
KEY `item_idx` (`item`(333))
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
Table: audience.usercounts
CREATE TABLE `usercounts` (
`item` varchar(1024) DEFAULT NULL,
`seed_count` int(11) DEFAULT NULL,
`nonseed_count` int(11) DEFAULT NULL,
`significance` float(19,6) DEFAULT NULL,
`runs_id` int(11) DEFAULT NULL,
`model_type` enum('site','term','combo') DEFAULT NULL,
KEY `item_idx` (`item`(333))
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
Table: audience.topitem
CREATE TABLE `topitem` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`total_training_count` int(11) DEFAULT NULL,
`item` varchar(1024) DEFAULT NULL,
`standard_index_value` float(19,6) DEFAULT NULL,
`significance` float(19,6) DEFAULT NULL,
`seed_count` int(11) DEFAULT NULL,
`nonseed_count` int(11) DEFAULT NULL,
`prod` float(19,6) DEFAULT NULL,
`cat_type` varchar(32) DEFAULT NULL,
`cat_level` int(11) DEFAULT NULL,
`conf` decimal(19,9) DEFAULT NULL,
`level_1` varchar(64) DEFAULT NULL,
`level_2` varchar(64) DEFAULT NULL,
`level_3` varchar(64) DEFAULT NULL,
`level_4` varchar(64) DEFAULT NULL,
`level_5` varchar(64) DEFAULT NULL,
`runs_id` int(11) DEFAULT NULL,
`model_type` enum('site','term','combo') DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=825 DEFAULT CHARSET=utf8;
Table: categorization.categorization
CREATE TABLE `AT_categorization` (
`url` varchar(760) NOT NULL,
`language` varchar(10) DEFAULT NULL,
`category` text,
`entity` text,
`source` varchar(255) DEFAULT NULL,
`topic_L1` varchar(45) NOT NULL DEFAULT '',
`topic_L2` varchar(45) NOT NULL DEFAULT '',
`topic_L3` varchar(45) NOT NULL DEFAULT '',
`topic_L4` varchar(45) NOT NULL DEFAULT '',
`topic_L5` varchar(45) NOT NULL DEFAULT '',
`last_refreshed` datetime DEFAULT NULL,
PRIMARY KEY (`url`,`topic_L1`,`topic_L2`,`topic_L3`,`topic_L4`,`topic_L5`),
UNIQUE KEY `inx_url` (`url`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
If you add the following indexes, your query will run faster:
CREATE INDEX runs_idx ON audience.item_indexes (runs_id);
ALTER TABLE audience.usercounts
DROP INDEX item_idx,
ADD INDEX item_idx (runs_id, item(333));
Also, item_indexes is utf8 but AT_categorization is latin1, which prevents the index on url from being used for the join. To address this issue, convert AT_categorization to utf8:
ALTER TABLE AT_categorization CONVERT TO CHARACTER SET utf8;
Lastly, for the AT_categorization table, the two indexes
PRIMARY KEY (`url`,`topic_L1`,`topic_L2`,`topic_L3`,`topic_L4`,`topic_L5`),
UNIQUE KEY `inx_url` (`url`)
are redundant, so you could drop both and simply make the url field the primary key:
ALTER TABLE AT_categorization
DROP PRIMARY KEY,
DROP KEY `inx_url`,
ADD PRIMARY KEY (url);
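After these changes, you can verify that the join is actually using the new indexes by running EXPLAIN on the SELECT part of the INSERT from the question; ideally each joined table then shows ref access on the new indexes rather than a full table scan (type: ALL):
EXPLAIN SELECT 5, seed_count + nonseed_count AS total_training_count,
    ii.item, standard_index_value, NULL, seed_count, nonseed_count,
    standard_index_value * seed_count AS prod, 'site', topic_L1, topic_L2, topic_L3, topic_L4, topic_L5
FROM audience.item_indexes ii
LEFT JOIN audience.usercounts uc ON ii.item = uc.item AND ii.runs_id = uc.runs_id
LEFT JOIN categorization.categorization at ON ii.item = at.url
WHERE ii.runs_id = 5;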