I have a query that grabs tags for a list of articles and limits them to fewer than 5 tags per article. This works pretty well.
Here's the query:
SET @rank = NULL, @val = NULL;
SELECT * FROM (
    SELECT r.article_id, c.`category_name`, c.`category_id`,
        @rank := IF( @val = r.article_id, @rank + 1, 1 ) AS rank,
        @val := r.article_id
    FROM `article_category_reference` r
    INNER JOIN `articles_categorys` c ON c.category_id = r.category_id
    WHERE r.article_id IN ( 1, 2 )
    ORDER BY r.`article_id` ASC
) AS a
WHERE rank < 5
However, there are specific tags I want to show up first. They are flagged by a `show_first` column (0/1), and I want them included first and counted toward the limit.
I've tried doing:
ORDER BY CASE WHEN (c.`show_first` = 1) THEN 0 ELSE 1 END, r.`article_id` ASC
This breaks the rank counting, so all tags end up showing.
Any pointers would be appreciated.
The tables:
CREATE TABLE `article_category_reference` (
`ref_id` int(11) NOT NULL,
`article_id` int(11) NOT NULL,
`category_id` int(11) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
--
-- Indexes for table `article_category_reference`
--
ALTER TABLE `article_category_reference`
ADD PRIMARY KEY (`ref_id`),
ADD KEY `category_id` (`category_id`),
ADD KEY `article_id` (`article_id`);
CREATE TABLE `articles_categorys` (
`category_id` int(11) NOT NULL,
`category_name` varchar(32) CHARACTER SET utf8 NOT NULL,
`quick_nav` tinyint(1) NOT NULL DEFAULT '0',
`is_genre` tinyint(1) NOT NULL DEFAULT '0',
`show_first` tinyint(1) NOT NULL DEFAULT '0'
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
--
-- Indexes for table `articles_categorys`
--
ALTER TABLE `articles_categorys`
ADD PRIMARY KEY (`category_id`);
-- And some data:
INSERT INTO `articles_categorys` (`category_id`, `category_name`, `quick_nav`, `is_genre`, `show_first`) VALUES
(1, 'one', 1, 0, 0),
(2, 'two', 1, 0, 0),
(3, 'three', 1, 0, 0),
(4, 'four', 0, 0, 0),
(5, 'five', 0, 0, 0),
(6, 'six', 0, 0, 0),
(7, 'seven', 0, 0, 1),
(8, 'eight', 0, 0, 1);
INSERT INTO `article_category_reference` (`ref_id`, `article_id`, `category_id`) VALUES
(1, 1, 1),
(2, 1, 2),
(3, 1, 3),
(4, 1, 4),
(5, 1, 5),
(6, 1, 6),
(7, 1, 7),
(8, 1, 8),
(9, 2, 1),
(10, 2, 2),
(11, 2, 3),
(12, 2, 4),
(13, 2, 5),
(14, 2, 6),
(15, 2, 7),
(16, 2, 8);
Fiddle of how it works: http://sqlfiddle.com/#!9/1de99/1/0
Fiddle of it not working with me wanting some to always show first: http://sqlfiddle.com/#!9/0d36b7/1 (adding in a second group seems to break the ranking system)
Your issue is not in the WHERE condition; it's in the ranking you are creating.
As you will see in my answer, I have created an inner query that returns the records in the required order before the ranking is applied.
If you check your inner query, you'll see that all rows end up with the same rank, and that is due to the ordering issue.
So I have added the ORDER BY clause to the innermost query, and then kept only the records whose rank1 is less than 5.
SET @rank1 = NULL, @val = NULL;
SELECT * FROM (
    SELECT a.article_id, a.`category_name`, a.`category_id`,
        @rank1 := IF( @val = a.article_id, @rank1 + 1, 1 ) AS rank1,
        @val := a.article_id
    FROM (
        SELECT r.article_id, c.`category_name`, c.`category_id`
        FROM `article_category_reference` r
        INNER JOIN `articles_categorys` c ON c.category_id = r.category_id
        GROUP BY r.article_id, c.`category_name`, c.`category_id`
        ORDER BY r.`article_id`, CASE WHEN (c.`show_first` = 1) THEN 0 ELSE 1 END ASC
    ) AS a
) Z
WHERE Z.rank1 < 5;
You can check here.
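As a side note (not part of the original answer): on MySQL 8.0 or later, the same per-article ranking can be done with a window function instead of user variables. A minimal sketch against the tables above:
SELECT article_id, category_name, category_id
FROM (
    SELECT r.article_id, c.category_name, c.category_id,
           -- numbering restarts per article; show_first = 1 tags come first
           ROW_NUMBER() OVER (
               PARTITION BY r.article_id
               ORDER BY c.show_first DESC, c.category_id
           ) AS rn
    FROM article_category_reference r
    INNER JOIN articles_categorys c ON c.category_id = r.category_id
    WHERE r.article_id IN (1, 2)
) ranked
WHERE rn < 5;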
I have a problem with my UPDATE query. I need to change notify_admin from 0 to 1, but only for each user's last row where action_type = 'abuse'. (The result should be rows with id=9 and id=13.)
I'm trying something like this:
UPDATE user_log SET notify_admin = 1
WHERE id IN (
SELECT DISTINCT user_id FROM (SELECT user_id FROM user_log) as UNIKALNE
) AND action_type LIKE 'abuse'
Unfortunately, it updates only one row (id=3).
Here is my table:
CREATE TABLE IF NOT EXISTS `user_log` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`user_id` int(10) unsigned NOT NULL,
`action_type` enum('login','logout','abuse') CHARACTER SET latin1 NOT NULL,
`notify_admin` tinyint(1) NOT NULL DEFAULT '0',
`saved` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=15;
INSERT INTO `user_log` (`id`, `user_id`, `action_type`, `notify_admin`, `saved`) VALUES
(1, 1, 'login', 0, '2015-11-02 12:13:14'),
(2, 1, 'logout', 0, '2015-11-02 13:12:11'),
(3, 1, 'abuse', 0, '2016-01-03 14:10:02'),
(4, 2, 'abuse', 0, '2016-01-04 17:47:03'),
(5, 2, 'login', 0, '2016-01-04 18:11:55'),
(6, 1, 'abuse', 0, '2016-01-04 18:23:57'),
(7, 1, 'abuse', 0, '2016-01-04 18:24:23'),
(8, 2, 'logout', 0, '2016-01-04 18:25:24'),
(9, 1, 'abuse', 0, '2016-01-04 18:25:32'),
(10, 1, 'login', 0, '2016-01-05 21:02:59'),
(11, 3, 'login', 0, '2016-01-05 21:28:43'),
(12, 3, 'logout', 0, '2016-01-05 21:52:01'),
(13, 2, 'abuse', 0, '2016-01-05 22:00:35'),
(14, 1, 'logout', 0, '2016-01-05 22:12:09');
You first need to find the most recent saved value per user among the 'abuse' rows, and then update those rows.
UPDATE user_log
JOIN
(
    -- ids of each user's most recent 'abuse' row
    select id from user_log JOIN (
        select user_id, max(saved) max_saved
        from user_log
        where action_type = 'abuse'
        group by user_id
    ) t
    ON t.user_id = user_log.user_id AND t.max_saved = user_log.saved
) t2
ON user_log.id = t2.id
SET notify_admin = 1
Here you are selecting user_id from the table with DISTINCT, so it returns only 1, 2, 3; those values are then compared against id together with the 'abuse' action type, so only row 3 matches and gets updated.
Replace user_id with id if you want to update all 'abuse' rows:
UPDATE user_log SET notify_admin = 1 WHERE id IN (SELECT DISTINCT id FROM (SELECT id FROM user_log) as UNIKALNE) AND action_type LIKE 'abuse'
Here is my table and sample data.
CREATE TABLE `articles`
(
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`title` varchar(100) NOT NULL,
PRIMARY KEY (`id`)
);
CREATE TABLE `tags`
(
`id` int(11) NOT NULL,
`name` varchar(100) NOT NULL,
PRIMARY KEY (`id`)
);
CREATE TABLE `article_tags`
(
`article_id` int(11) NOT NULL,
`tag_id` int(11) NOT NULL
);
INSERT INTO `tags` (`id`, `name`) VALUES
(1, 'Wap Stories'),
(2, 'App Stories');
INSERT INTO `articles` (`id`, `title`) VALUES
(1, 'USA'),
(2, 'England'),
(3, 'Germany'),
(4, 'India'),
(5, 'France'),
(6, 'Dubai'),
(7, 'Poland'),
(8, 'Japan'),
(9, 'China'),
(10, 'Australia');
INSERT INTO `article_tags` (`article_id`, `tag_id`) VALUES
(1, 1),
(1, 2),
(4, 1),
(5, 1),
(2, 2),
(2, 1),
(6, 2),
(7, 2),
(8, 1),
(9, 1),
(3, 2),
(9, 2),
(10, 2);
How can I get the output below? I have tried using the GROUP_CONCAT function, but it returns all the results. My requirement is that I need to group-concat the values as follows:
a. The combination 1,2 can be there, and 1 alone can be there, but 2 alone cannot be there.
b. The combination 2,1 can be there, and 2 alone can be there, but 1 alone cannot be there.
Below is the output I need
id, title, groupconcat
--------------------------
1, USA, 1,2
2, England, 1,2
4, India, 1
5, France, 1
8, Japan, 1
9, China, 1,2
SqlFiddle Link
The query I am using is:
select id, title, group_concat(tag_id order by tag_id) as 'groupconcat' from articles a
left join article_tags att on a.id = att.article_id
where att.tag_id in (1,2)
group by article_id order by id
You can try it like this: the HAVING clause looks at the first element of the concatenated list (SUBSTRING_INDEX(groupconcat, ',', 1)) and discards any group that starts with '2', i.e. an article that has tag 2 without tag 1.
SELECT id, title, GROUP_CONCAT(tag_id ORDER BY tag_id) AS 'groupconcat'
FROM articles a
LEFT join article_tags att on a.id = att.article_id
WHERE att.tag_id in (1,2)
GROUP BY article_id
HAVING SUBSTRING_INDEX(groupconcat,',',1) !='2'
ORDER BY id
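An alternative sketch (not from the original answer) that avoids inspecting the concatenated string: require tag 1 to be present in the group via a conditional SUM, which expresses requirement (a) directly:
SELECT a.id, a.title,
       GROUP_CONCAT(att.tag_id ORDER BY att.tag_id) AS groupconcat
FROM articles a
JOIN article_tags att ON a.id = att.article_id
WHERE att.tag_id IN (1, 2)
GROUP BY a.id, a.title
-- SUM(att.tag_id = 1) counts the tag-1 rows in the group;
-- requiring it to be > 0 drops articles that only have tag 2
HAVING SUM(att.tag_id = 1) > 0
ORDER BY a.id;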
I have a database with these tables: applicant (a candidate for a job), application (a candidate applied for a certain job), test, selected_test (each application has a defined set of tests) and test_result.
When I need to show which applicant scored what result for each application and test, I use this query:
SELECT applicant.first_name, applicant.last_name, application.job, test.name, test_result.score
FROM applicant
INNER JOIN application ON application.applicant_id=applicant.id
INNER JOIN selected_test ON application.id=selected_test.application_id
INNER JOIN test ON selected_test.test_id=test.id
INNER JOIN test_result ON selected_test.test_id=test_result.test_id AND applicant.id=test_result.applicant_id
What I need to accomplish is sorting by certain test types (test.name) together with their scores.
This is what I mean:
SELECT a.first_name, a.last_name, app.job, iq.score AS iqScore, math.score AS mathScore, personality.score AS personalityScore, logic.score AS logicScore
FROM applicant a
INNER JOIN application app ON a.id=app.applicant_id
LEFT JOIN
(SELECT app.id AS appId, tr.score
FROM applicant a
INNER JOIN application app ON app.applicant_id=a.id
INNER JOIN selected_test st ON app.id=st.application_id
INNER JOIN test t ON st.test_id=t.id AND t.name='iq'
INNER JOIN test_result tr ON st.test_id=tr.test_id AND a.id=tr.applicant_id) AS iq ON app.id=iq.appId
LEFT JOIN
(SELECT app.id AS appId, tr.score
FROM applicant a
INNER JOIN application app ON app.applicant_id=a.id
INNER JOIN selected_test st ON app.id=st.application_id
INNER JOIN test t ON st.test_id=t.id AND t.name='math'
INNER JOIN test_result tr ON st.test_id=tr.test_id AND a.id=tr.applicant_id) AS math ON app.id=math.appId
LEFT JOIN
(SELECT app.id AS appId, tr.score
FROM applicant a
INNER JOIN application app ON app.applicant_id=a.id
INNER JOIN selected_test st ON app.id=st.application_id
INNER JOIN test t ON st.test_id=t.id AND t.name='personality'
INNER JOIN test_result tr ON st.test_id=tr.test_id AND a.id=tr.applicant_id) AS personality ON app.id=personality.appId
LEFT JOIN
(SELECT app.id AS appId, tr.score
FROM applicant a
INNER JOIN application app ON app.applicant_id=a.id
INNER JOIN selected_test st ON app.id=st.application_id
INNER JOIN test t ON st.test_id=t.id AND t.name='logic'
INNER JOIN test_result tr ON st.test_id=tr.test_id AND a.id=tr.applicant_id) AS logic ON app.id=logic.appId
ORDER BY mathScore DESC, iqScore DESC, logicScore DESC
The query returns a set of applications, showing applicant data, job, test names and scores.
For instance, if I want applications with the highest 'math' score on top, followed by the highest scores in 'iq' and then in 'logic', the ORDER BY clause looks like the above.
The query works correctly, but the problem is that in a real situation it deals with large data sets, and I need a way to shorten/refactor this query.
Example database it works on is here:
CREATE TABLE IF NOT EXISTS `applicant` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`first_name` varchar(255) NOT NULL,
`last_name` varchar(255) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=8 ;
--
-- Dumping data for table `applicant`
--
INSERT INTO `applicant` (`id`, `first_name`, `last_name`) VALUES
(2, 'Jack', 'Redburn'),
(4, 'Barry', 'Leon'),
(6, 'Elisabeth', 'Logan'),
(7, 'Jane', 'Doe');
-- --------------------------------------------------------
--
-- Table structure for table `application`
--
CREATE TABLE IF NOT EXISTS `application` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`applicant_id` int(11) NOT NULL,
`job` varchar(255) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=10 ;
--
-- Dumping data for table `application`
--
INSERT INTO `application` (`id`, `applicant_id`, `job`) VALUES
(2, 2, 'Salesman'),
(4, 4, 'Policeman'),
(6, 6, 'Journalist'),
(8, 6, 'Hostess'),
(9, 7, 'Journalist');
-- --------------------------------------------------------
--
-- Table structure for table `selected_test`
--
CREATE TABLE IF NOT EXISTS `selected_test` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`application_id` int(11) NOT NULL,
`test_id` int(11) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=24 ;
--
-- Dumping data for table `selected_test`
--
INSERT INTO `selected_test` (`id`, `application_id`, `test_id`) VALUES
(1, 1, 1),
(2, 1, 2),
(3, 1, 3),
(5, 2, 1),
(6, 2, 2),
(7, 2, 3),
(8, 2, 4),
(9, 3, 4),
(10, 3, 2),
(11, 4, 1),
(12, 4, 2),
(13, 4, 3),
(14, 4, 4),
(15, 5, 2),
(16, 5, 3),
(17, 6, 1),
(18, 6, 4),
(19, 7, 3),
(20, 7, 2),
(21, 7, 1),
(22, 8, 2),
(23, 8, 3);
-- --------------------------------------------------------
--
-- Table structure for table `test`
--
CREATE TABLE IF NOT EXISTS `test` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(255) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=5 ;
--
-- Dumping data for table `test`
--
INSERT INTO `test` (`id`, `name`) VALUES
(1, 'math'),
(2, 'logic'),
(3, 'iq'),
(4, 'personality');
-- --------------------------------------------------------
--
-- Table structure for table `test_result`
--
CREATE TABLE IF NOT EXISTS `test_result` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`applicant_id` int(11) NOT NULL,
`test_id` int(11) NOT NULL,
`score` int(11) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=24 ;
--
-- Dumping data for table `test_result`
--
INSERT INTO `test_result` (`id`, `applicant_id`, `test_id`, `score`) VALUES
(2, 2, 1, 6),
(3, 4, 1, 7),
(6, 6, 1, 3),
(7, 7, 1, 8),
(9, 2, 2, 15),
(11, 4, 2, 12),
(13, 6, 2, 11),
(14, 7, 2, 9),
(15, 7, 3, 105),
(16, 6, 3, 112),
(18, 4, 3, 108),
(20, 2, 3, 117),
(22, 4, 4, 70);
And here is what the results look like: the first query just shows how the data is related, while the large query shows the score data horizontally, so it is possible to sort by test name and score.
Caveat: I don't know MySQL.
Googling "mysql pivot" gives this result: http://en.wikibooks.org/wiki/MySQL/Pivot_table
So if we apply the same logic, using test.id as the seed number (which is `exam` in the example from the Google search), we get this:
SQLFIDDLE
select first_name, last_name, job,
    -- 1 - abs(sign(testid - N)) is 1 when testid = N and 0 otherwise,
    -- so each sum() keeps only the score of that one test per group
    sum(score*(1-abs(sign(testid-1)))) as math,
    sum(score*(1-abs(sign(testid-2)))) as logic,
    sum(score*(1-abs(sign(testid-3)))) as iq,
    sum(score*(1-abs(sign(testid-4)))) as personality
from
(
    SELECT applicant.first_name, applicant.last_name, application.job, test.name, test_result.score, test.id as testid
    FROM applicant
    INNER JOIN application ON application.applicant_id=applicant.id
    INNER JOIN selected_test ON application.id=selected_test.application_id
    INNER JOIN test ON selected_test.test_id=test.id
    INNER JOIN test_result ON selected_test.test_id=test_result.test_id AND applicant.id=test_result.applicant_id
) t
group by first_name, last_name, job
Now that you've got your short query, you can apply whatever sorting you need; you can also use a CASE statement in your ORDER BY to change the order dynamically, as in the sketch below.
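For instance, here is a sketch that reproduces the ordering from the original query on top of the pivot (the @sort_by mentioned in the trailing comment is purely hypothetical, used only to illustrate the CASE idea):
select first_name, last_name, job,
    sum(score*(1-abs(sign(testid-1)))) as math,
    sum(score*(1-abs(sign(testid-2)))) as logic,
    sum(score*(1-abs(sign(testid-3)))) as iq,
    sum(score*(1-abs(sign(testid-4)))) as personality
from
(
    SELECT applicant.first_name, applicant.last_name, application.job,
           test_result.score, test.id as testid
    FROM applicant
    INNER JOIN application ON application.applicant_id=applicant.id
    INNER JOIN selected_test ON application.id=selected_test.application_id
    INNER JOIN test ON selected_test.test_id=test.id
    INNER JOIN test_result ON selected_test.test_id=test_result.test_id
                          AND applicant.id=test_result.applicant_id
) t
group by first_name, last_name, job
-- fixed ordering, mirroring ORDER BY mathScore DESC, iqScore DESC, logicScore DESC
order by math desc, iq desc, logic desc;
-- To switch the primary sort column at runtime, the first sort key could be a
-- CASE over a (hypothetical) variable, e.g.:
--   order by case @sort_by when 'iq' then iq else math end desc, logic desc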
I noticed that you have only defined primary keys. You should see a noticeable performance improvement when you index other fields. Index at least the following: application.applicant_id, selected_test.application_id, selected_test.test_id, test_result.applicant_id, test_result.test_id, test_result.score.
You might be surprised how much this speeds things up for you. In fact, MySQL tells us this is the best way to improve performance: https://dev.mysql.com/doc/refman/5.5/en/optimization-indexes.html.
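A sketch of how those indexes could be added (the index names here are made up for illustration):
-- illustrative names; adjust to your own naming convention
ALTER TABLE application   ADD INDEX idx_app_applicant  (applicant_id);
ALTER TABLE selected_test ADD INDEX idx_st_application (application_id),
                          ADD INDEX idx_st_test        (test_id);
ALTER TABLE test_result   ADD INDEX idx_tr_applicant   (applicant_id),
                          ADD INDEX idx_tr_test        (test_id),
                          ADD INDEX idx_tr_score       (score);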
I wish to divide values from the same column using MySQL.
Is there a better way of doing this? Do I really need three SELECT statements?
SELECT (SELECT FixAM From Fixes WHERE Id = 1) / (SELECT FixAM From Fixes WHERE Id = 2)
My table structure is as follows:
CREATE TABLE IF NOT EXISTS `Fixes` (
`Id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'PK',
`CurrencyId` int(11) NOT NULL COMMENT 'FK',
`MetalId` int(11) NOT NULL COMMENT 'FK',
`FixAM` decimal(10,5) NOT NULL,
`FixPM` decimal(10,5) DEFAULT NULL,
`TimeStamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`Id`),
KEY `CurrencyId` (`CurrencyId`),
KEY `MetalId` (`MetalId`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci AUTO_INCREMENT=13 ;
--
-- Dumping data for table `Fixes`
--
INSERT INTO `Fixes` (`Id`, `CurrencyId`, `MetalId`, `FixAM`, `FixPM`, `TimeStamp`) VALUES
(1, 1, 1, '1592.50000', '1586.25000', '2013-02-25 15:10:21'),
(2, 2, 1, '1051.84900', '1049.59300', '2013-02-25 15:10:21'),
(3, 3, 1, '1201.88700', '1194.10600', '2013-02-25 15:10:21'),
(4, 1, 2, '29.17000', NULL, '2013-02-25 13:54:02'),
(5, 2, 2, '19.27580', NULL, '2013-02-25 13:54:02'),
(6, 3, 2, '21.98190', NULL, '2013-02-25 13:54:02'),
(7, 1, 3, '1627.00000', '1620.00000', '2013-02-25 14:28:59'),
(8, 2, 3, '1074.65000', '1072.50000', '2013-02-25 14:28:59'),
(9, 3, 3, '1229.30000', '1218.95000', '2013-02-25 14:28:59'),
(10, 1, 4, '747.00000', '748.00000', '2013-02-25 14:28:59'),
(11, 2, 4, '493.40000', '495.20000', '2013-02-25 14:28:59'),
(12, 3, 4, '564.40000', '562.85000', '2013-02-25 14:28:59');
I think this is what you're looking for:
SELECT MetalId,
MAX(CASE WHEN CurrencyId = 1 THEN FixAM END) /
MAX(CASE WHEN CurrencyId = 2 THEN FixAM ELSE 1 END) Output
FROM Fixes
GROUP BY MetalId
For MetalId = 1 this produces 1592.50000 / 1051.84900. If you want the opposite ratio, swap the currency ids.
SQL Fiddle Demo
In case you don't have a CurrencyId = 2 row for a metal, I defaulted the dividing value to 1 so the division wouldn't just return NULL.
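If it helps, the same conditional-aggregation query can be parameterized; @num_currency and @den_currency are hypothetical names used only for this sketch:
-- pick which two FixAM values to divide, per metal
SET @num_currency = 1, @den_currency = 2;
SELECT MetalId,
       MAX(CASE WHEN CurrencyId = @num_currency THEN FixAM END) /
       MAX(CASE WHEN CurrencyId = @den_currency THEN FixAM ELSE 1 END) AS Output
FROM Fixes
GROUP BY MetalId;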
Hello, I am having issues with the execution time of a query that searches for users (from the users table) that have at least one interest from a specified set of interests and a location from a specified set of locations. So I have this test DB:
CREATE TABLE IF NOT EXISTS `interests` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(255) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=10 ;
--
-- Dumping data for table `interests`
--
INSERT INTO `interests` (`id`, `name`) VALUES
(1, 'auto'),
(2, 'moto'),
(3, 'health'),
(4, 'garden'),
(5, 'house'),
(6, 'music'),
(7, 'video'),
(8, 'games'),
(9, 'it');
-- --------------------------------------------------------
--
-- Table structure for table `locations`
--
CREATE TABLE IF NOT EXISTS `locations` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(50) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=11 ;
--
-- Dumping data for table `locations`
--
INSERT INTO `locations` (`id`, `name`) VALUES
(1, 'engalnd'),
(2, 'austia'),
(3, 'germany'),
(4, 'france'),
(5, 'belgium'),
(6, 'italy'),
(7, 'russia'),
(8, 'poland'),
(9, 'norway'),
(10, 'romania');
-- --------------------------------------------------------
--
-- Table structure for table `users`
--
CREATE TABLE IF NOT EXISTS `users` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`email` varchar(255) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=11 ;
--
-- Dumping data for table `users`
--
INSERT INTO `users` (`id`, `email`) VALUES
(1, 'email1@test.com'),
(2, 'email2@test.com'),
(3, 'email3@test.com'),
(4, 'email4@test.com'),
(5, 'email5@test.com'),
(6, 'email6@test.com'),
(7, 'email7@test.com'),
(8, 'email8@test.com'),
(9, 'email9@test.com'),
(10, 'email10@test.com');
-- --------------------------------------------------------
--
-- Table structure for table `users_interests`
--
CREATE TABLE IF NOT EXISTS `users_interests` (
`user_id` int(11) NOT NULL,
`interest_id` int(11) NOT NULL,
PRIMARY KEY (`user_id`,`interest_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
--
-- Dumping data for table `users_interests`
--
INSERT INTO `users_interests` (`user_id`, `interest_id`) VALUES
(1, 1),
(1, 2),
(2, 5),
(2, 7),
(2, 8),
(3, 1),
(4, 1),
(4, 5),
(4, 6),
(4, 7),
(4, 8),
(5, 1),
(5, 2),
(5, 8),
(6, 3),
(6, 7),
(6, 8),
(7, 7),
(7, 9),
(8, 5);
-- --------------------------------------------------------
--
-- Table structure for table `users_locations`
--
CREATE TABLE IF NOT EXISTS `users_locations` (
`user_id` int(11) NOT NULL,
`location_id` int(11) NOT NULL,
PRIMARY KEY (`user_id`,`location_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
--
-- Dumping data for table `users_locations`
--
INSERT INTO `users_locations` (`user_id`, `location_id`) VALUES
(2, 5),
(2, 7),
(2, 8),
(3, 1),
(4, 1),
(4, 5),
(4, 6),
(4, 7),
(4, 8),
(5, 1),
(5, 2),
(5, 8),
(6, 3),
(6, 7),
(6, 8),
(7, 7),
(7, 9),
(8, 5);
Is there a better way to query it than this:
SELECT email,
GROUP_CONCAT( DISTINCT ui.interest_id ) AS interests,
GROUP_CONCAT( DISTINCT ul.location_id ) AS locations
FROM `users` u
LEFT JOIN users_interests ui ON u.id = ui.user_id
LEFT JOIN users_locations ul ON u.id = ul.user_id
GROUP BY u.id
HAVING IF( interests IS NOT NULL , FIND_IN_SET( 2, interests )
OR FIND_IN_SET( 3, interests ) , 1 )
AND IF( locations IS NOT NULL , FIND_IN_SET( 2, locations )
OR FIND_IN_SET( 3, locations ) , 1 )
This is the best solution I have found, but it is still slow with 500k to 1 million rows in the relational tables (locations and interests), especially when matching against a large set of values (say, more than 50 locations and interests).
So I am trying to achieve the result this query produces, but a bit faster:
email interests locations
email1@test.com 1,2 [BLOB - 0B]
email5@test.com 1,2,8 1,2,8
email6@test.com 3,7,8 3,7,8
email9@test.com [BLOB - 0B] [BLOB - 0B]
email10@test.com [BLOB - 0B] [BLOB - 0B]
I also tried joining against a SELECT ... UNION derived table for the matching set, but it was even slower. Like this:
SELECT *
FROM `users` u
LEFT JOIN users_interests ui ON u.id = ui.user_id
LEFT JOIN users_locations ul ON u.id = ul.user_id
LEFT JOIN (SELECT 2 as interest UNION SELECT 3 as interest) as `is` ON ui.interest_id = is.interest
LEFT JOIN (SELECT 2 as location UNION SELECT 3 as location ) as `ls` ON ul.location_id = ls.location
WHERE IF(ui.user_id IS NOT NULL, `is`.interest IS NOT NULL,1) AND
IF(ul.user_id IS NOT NULL, ls.location IS NOT NULL,1)
GROUP BY u.id
I am using this for a basic targeting system.
I would very much appreciate any suggestions. Thank you!
You have `is`, which is a reserved word in MySQL.
Also, the GROUP BY can slow your query down, and I don't see much point in grouping by u.id here, since u.id is already a unique id.
Look at the demo.
Try putting backticks around it:
SELECT *
FROM `users` u
LEFT JOIN users_interests ui ON u.id = ui.user_id
LEFT JOIN users_locations ul ON u.id = ul.user_id
LEFT JOIN (SELECT 2 as interest UNION SELECT 3 as interest) as `is`
ON ui.interest_id = `is`.interest
LEFT JOIN (SELECT 2 as location UNION SELECT 3 as location ) as `ls`
ON ul.location_id = `ls`.location
WHERE IF(ui.user_id IS NOT NULL, `is`.interest IS NOT NULL,1)
AND
IF(ul.user_id IS NOT NULL, `ls`.location IS NOT NULL,1)
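On the performance side of the question, here is an untested sketch of the same "either no rows at all, or at least one row in the target set" logic written with EXISTS / NOT EXISTS. It skips the GROUP BY and GROUP_CONCAT work entirely (so it returns only the matching users, not the concatenated lists), and the (user_id, interest_id) / (user_id, location_id) primary keys already cover these lookups:
-- 2 and 3 stand in for the interest/location sets from the original query
SELECT u.id, u.email
FROM users u
WHERE ( NOT EXISTS (SELECT 1 FROM users_interests ui WHERE ui.user_id = u.id)
     OR EXISTS (SELECT 1 FROM users_interests ui
                WHERE ui.user_id = u.id AND ui.interest_id IN (2, 3)) )
  AND ( NOT EXISTS (SELECT 1 FROM users_locations ul WHERE ul.user_id = u.id)
     OR EXISTS (SELECT 1 FROM users_locations ul
                WHERE ul.user_id = u.id AND ul.location_id IN (2, 3)) );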