MySQL Matching "Same" Emails

I have a table with 2 columns email and id. I need to find emails that are closely related. For example:
john.smith12@example.com
and
john.smith12@some.subdomains.example.com
These should be considered the same because the username (john.smith12) and the base domain (example.com) are the same. They are currently 2 different rows in my table. I've written the expression below, which should do that comparison, but it takes hours to execute (possibly/probably because of the regex). Is there a better way to write this:
select c1.email, c2.email
from table as c1
join table as c2
on (
c1.leadid <> c2.leadid
and
c1.email regexp replace(replace(c2.email, '.', '[.]'), '@', '@[^@]*'))
The explain of this query comes back as:
id, select_type, table, type, possible_keys, key, key_len, ref, rows, Extra
1, SIMPLE, c1, ALL, NULL, NULL, NULL, NULL, 577532, NULL
1, SIMPLE, c2, ALL, NULL, NULL, NULL, NULL, 577532, Using where; Using join buffer (Block Nested Loop)
The create table is:
CREATE TABLE `table` (
`ID` int(11) NOT NULL AUTO_INCREMENT,
`Email` varchar(100) DEFAULT NULL,
KEY `Table_Email` (`Email`),
KEY `Email` (`Email`)
) ENGINE=InnoDB AUTO_INCREMENT=667020 DEFAULT CHARSET=latin1
I guess the indices aren't being used because of the regexp.
The regex comes out as:
john[.]smith12@[^@]*example[.]com
which should match both addresses.
Update:
I've modified the on to be:
on (c1.email <> '' and c2.email <> '' and c1.leadid <> c2.leadid and substr(c1.email, 1, (locate('@', c1.email) - 1)) = substr(c2.email, 1, (locate('@', c2.email) - 1))
and
substr(c1.email, locate('@', c1.email) + 1) like concat('%', substr(c2.email, locate('@', c2.email) + 1)))
and the explain with this approach is at least using the indices.
id, select_type, table, type, possible_keys, key, key_len, ref, rows, Extra
1, SIMPLE, c1, range, table_Email,Email, table_Email, 103, NULL, 288873, Using where; Using index
1, SIMPLE, c2, range, table_Email,Email, table_Email, 103, NULL, 288873, Using where; Using index; Using join buffer (Block Nested Loop)
So far this has been running for 5 minutes; I'll update if there is a vast improvement.
Update 2:
I've split the email so the username is a column and domain is a column. I've stored the domain in reverse order so the index of it can be used with a trailing wildcard.
CREATE TABLE `table` (
`ID` int(11) NOT NULL AUTO_INCREMENT,
`Email` varchar(100) DEFAULT NULL,
`domain` varchar(100) CHARACTER SET utf8 DEFAULT NULL,
`username` varchar(500) CHARACTER SET utf8 DEFAULT NULL,
KEY `Table_Email` (`Email`),
KEY `Email` (`Email`),
KEY `domain` (`domain`)
) ENGINE=InnoDB AUTO_INCREMENT=667020 DEFAULT CHARSET=latin1
Query to populate new columns:
update table
set username = trim(SUBSTRING_INDEX(trim(email), '@', 1)),
domain = reverse(trim(SUBSTRING_INDEX(SUBSTRING_INDEX(trim(email), '@', -1), '.', -3)));
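For illustration, this is roughly what the two sample addresses look like after that transformation (same SUBSTRING_INDEX/REVERSE expressions as the update above, applied to literals):
SELECT REVERSE(TRIM(SUBSTRING_INDEX(SUBSTRING_INDEX('john.smith12@some.subdomains.example.com', '@', -1), '.', -3))) AS d1,
       REVERSE(TRIM(SUBSTRING_INDEX(SUBSTRING_INDEX('john.smith12@example.com', '@', -1), '.', -3))) AS d2;
-- d1 = 'moc.elpmaxe.sniamodbus', d2 = 'moc.elpmaxe'
-- d1 LIKE CONCAT(d2, '%') is true, so the pair is considered a match.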
New query:
select c1.email, c2.email, c2.domain, c1.domain, c1.username, c2.username, c1.leadid, c2.leadid
from table as c1
join table as c2
on (c1.email is not null and c2.email is not null and c1.leadid <> c2.leadid
and c1.username = c2.username and c1.domain like concat(c2.domain, '%'))
New Explain Results:
1, SIMPLE, c1, ALL, table_Email,Email, NULL, NULL, NULL, 649173, Using where
1, SIMPLE, c2, ALL, table_Email,Email, NULL, NULL, NULL, 649173, Using where; Using join buffer (Block Nested Loop)
From that explain it looks like the domain index is not being used. I also tried to force its use with USE INDEX, but that didn't work either; it resulted in no indices being used:
select c1.email, c2.email, c2.domain, c1.domain, c1.username, c2.username, c1.leadid, c2.leadid
from table as c1
USE INDEX (domain)
join table as c2
USE INDEX (domain)
on (c1.email is not null and c2.email is not null and c1.leadid <> c2.leadid
and c1.username = c2.username and c1.domain like concat(c2.domain, '%'))
Explain with use:
1, SIMPLE, c1, ALL, NULL, NULL, NULL, NULL, 649173, Using where
1, SIMPLE, c2, ALL, NULL, NULL, NULL, NULL, 649173, Using where; Using join buffer (Block Nested Loop)

You told us that the table has 700K rows.
This is not much, but you are joining it to itself, so in the worst case the engine would have to process 700K * 700K = 490 000 000 000 = 490B rows.
An index can definitely help here.
The best index depends on the data distribution.
What does the following query return?
SELECT COUNT(DISTINCT username)
FROM table
If the result is large, say 100K or more, it means there are a lot of different usernames and you'd better focus on them rather than on the domain. If the result is low, say 100, then indexing username is unlikely to be useful.
I hope there are a lot of different usernames, so I'd create an index on username, since the query joins on that column using a simple equality comparison and the join would greatly benefit from the index.
Another option to consider is a composite index on (username, domain), or even a covering index on (username, domain, leadid, email). The order of columns in the index definition is important.
I'd delete all other indexes, so that the optimiser can't make another choice, unless there are other queries that may need them.
Most likely it won't hurt to define a primary key on the table as well.
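A sketch of that DDL, assuming the column names from the updated table above (the prefix lengths are illustrative; adjust to your data):
-- Sketch only: composite index for the username/domain join, plus a primary key.
ALTER TABLE `table`
  ADD PRIMARY KEY (`ID`),
  DROP INDEX `Table_Email`,
  DROP INDEX `Email`,
  DROP INDEX `domain`,
  ADD INDEX `idx_username_domain` (`username`(100), `domain`(100));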
There is one more, less important, thing to consider. Does your data really have NULLs? If not, define the columns as NOT NULL. Also, in many cases it is better to have empty strings rather than NULLs, unless you have very specific requirements and need to distinguish between NULL and ''.
The query would be slightly simpler:
select
c1.email, c2.email,
c1.domain, c2.domain,
c1.username, c2.username,
c1.leadid, c2.leadid
from
table as c1
join table as c2
on c1.username = c2.username
and c1.domain like concat(c2.domain, '%')
and c1.leadid <> c2.leadid

No REGEXP_REPLACE needed, so it will work in all versions of MySQL/MariaDB:
UPDATE tbl
SET email = CONCAT(SUBSTRING_INDEX(email, '@', 1),
'@',
SUBSTRING_INDEX(
SUBSTRING_INDEX(email, '@', -1),
'.',
-2));
Since no index is useful, you may as well not bother with a WHERE clause.
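Once the addresses are normalised like that, the duplicates fall out of a plain GROUP BY. A sketch, using the same tbl name as above and the table's id column:
SELECT email, COUNT(*) AS copies, GROUP_CONCAT(id) AS ids
FROM tbl
GROUP BY email
HAVING COUNT(*) > 1;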

If you are searching for related data, you should have a look at data-mining tools, or Elasticsearch for instance, which work the way you need.
I have another possible "database-only" solution, but I don't know whether it would work or whether it'd be the best solution. If I had to do this, I would try to make a table of "word references", filled by splitting all emails on every non-alphanumeric character.
In your example, this table would be filled with: john, smith12, some, subdomains, example and com. Each word gets a unique id. Then another table, a union table, would link each email with its own words. Indexes would be needed on both tables.
To search for closely related emails, you would split the source email with a regex and loop over each sub-word (like the CONNECT BY does in the example below), then for each word, find it in the word-references table, and then use the union table to find the emails which match it.
On top of this query, you could make a select which groups by email, counting the number of words each found email matches, and keep only the best-matched email (excluding the source one, of course).
Sorry for this "not-sure" answer, but it was too long for a comment. I'm going to try to make an example.
Here is an example (written in Oracle; the CONNECT BY parts are Oracle-specific, but the idea carries over — see the MySQL note after the example) with some data:
---------------------------------------------
-- Table containing emails and people info
CREATE TABLE PEOPLE (
ID NUMBER(11) PRIMARY KEY NOT NULL,
EMAIL varchar2(100) DEFAULT NULL,
USERNAME varchar2(500) DEFAULT NULL
);
-- Table containing word references
CREATE TABLE WORD_REF (
ID number(11) NOT NULL PRIMARY KEY,
WORD varchar2(20) DEFAULT NULL
);
-- Table containing ids from both previous tables
CREATE TABLE UNION_TABLE (
EMAIL_ID number(11) NOT NULL,
WORD_ID number(11) NOT NULL,
CONSTRAINT EMAIL_FK FOREIGN KEY (EMAIL_ID) REFERENCES PEOPLE (ID),
CONSTRAINT WORD_FK FOREIGN KEY (WORD_ID) REFERENCES WORD_REF (ID)
);
-- Here is my oracle sequence to simulate the auto increment
CREATE SEQUENCE MY_SEQ
MINVALUE 1
MAXVALUE 999999
START WITH 1
INCREMENT BY 1
CACHE 20;
---------------------------------------------
-- Some data in the people table
INSERT INTO PEOPLE (ID, EMAIL, USERNAME) VALUES (MY_SEQ.NEXTVAL, 'john.smith12@example.com', 'jsmith12');
INSERT INTO PEOPLE (ID, EMAIL, USERNAME) VALUES (MY_SEQ.NEXTVAL, 'john.smith12@some.subdomains.example.com', 'admin');
INSERT INTO PEOPLE (ID, EMAIL, USERNAME) VALUES (MY_SEQ.NEXTVAL, 'john.doe@another.domain.eu', 'jdo');
INSERT INTO PEOPLE (ID, EMAIL, USERNAME) VALUES (MY_SEQ.NEXTVAL, 'nathan.smith@example.domain.com', 'nsmith');
INSERT INTO PEOPLE (ID, EMAIL, USERNAME) VALUES (MY_SEQ.NEXTVAL, 'david.cayne@some.domain.st', 'davidcayne');
COMMIT;
-- Word reference data from the people data
INSERT INTO WORD_REF (ID, WORD)
(select MY_SEQ.NEXTVAL, WORD FROM
(select distinct REGEXP_SUBSTR(EMAIL, '\w+',1,LEVEL) WORD
from PEOPLE
CONNECT BY REGEXP_SUBSTR(EMAIL, '\w+',1,LEVEL) IS NOT NULL
));
COMMIT;
-- Union table filling
INSERT INTO UNION_TABLE (EMAIL_ID, WORD_ID)
select words.ID EMAIL_ID, word_ref.ID WORD_ID
FROM
(select distinct ID, REGEXP_SUBSTR(EMAIL, '\w+',1,LEVEL) WORD
from PEOPLE
CONNECT BY REGEXP_SUBSTR(EMAIL, '\w+',1,LEVEL) IS NOT NULL) words
left join WORD_REF on word_ref.word = words.WORD;
COMMIT;
---------------------------------------------
-- Finally, the query which ranks the emails matching the source email 'john.smith12@example.com'
SELECT COUNT(1) email_match
,email
FROM (SELECT word_ref.id
,words.word
,uni.email_id
,ppl.email
FROM (SELECT DISTINCT regexp_substr('john.smith12@example.com'
,'\w+'
,1
,LEVEL) word
FROM dual
CONNECT BY regexp_substr('john.smith12@example.com'
,'\w+'
,1
,LEVEL) IS NOT NULL) words
LEFT JOIN word_ref
ON word_ref.word = words.word
LEFT JOIN union_table uni
ON uni.word_id = word_ref.id
LEFT JOIN people ppl
ON ppl.id = uni.email_id)
WHERE email <> 'john.smith12@example.com'
GROUP BY email
ORDER BY email_match DESC;
The request results :
4 john.smith12@some.subdomains.example.com
2 nathan.smith@example.domain.com
1 john.doe@another.domain.eu
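Note that CONNECT BY is Oracle-only; on MySQL 8+ the same word-splitting can be done with a recursive CTE. A minimal sketch for a single address (the CASTs keep the recursive columns wide enough):
WITH RECURSIVE words (word, rest) AS (
  SELECT CAST('' AS CHAR(200)),
         CAST(CONCAT(REPLACE('john.smith12@example.com', '@', '.'), '.') AS CHAR(200))
  UNION ALL
  SELECT SUBSTRING_INDEX(rest, '.', 1),
         SUBSTRING(rest, LOCATE('.', rest) + 1)
  FROM words
  WHERE rest <> ''
)
SELECT word FROM words WHERE word <> '';
-- Returns: john, smith12, example, com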

You get the name (i.e. the part before '@') with
substring_index(email, '@', 1)
You get the domain with
substring_index(replace(email, '@', '.'), '.', -2)
(because if we substitute the '@' with a dot, it's always the part after the second-to-last dot).
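For example, with the longer address from the question:
SELECT substring_index(replace('john.smith12@some.subdomains.example.com', '@', '.'), '.', -2);
-- -> 'example.com'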
Hence you find duplicates with
select *
from users
where exists
(
select *
from users other
where other.id <> users.id
and substring_index(other.email, '@', 1) =
substring_index(users.email, '@', 1)
and substring_index(replace(other.email, '@', '.'), '.', -2) =
substring_index(replace(users.email, '@', '.'), '.', -2)
);
If this is too slow, then you may want to create a computed column on the two combined and index it:
alter table users add main_email varchar(200) as
(concat(substring_index(email, '@', 1), '@', substring_index(replace(email, '@', '.'), '.', -2)));
create index idx on users(main_email);
select *
from users
where exists
(
select *
from users other
where other.id <> users.id
and other.main_email = users.main_email
);
Of course you can just as well have the two separated and index them:
alter table users add email_name varchar(100) as (substring_index(email, '@', 1));
alter table users add email_domain varchar(100) as (substring_index(replace(email, '@', '.'), '.', -2));
create index idx on users(email_name, email_domain);
select *
from users
where exists
(
select *
from users other
where other.id <> users.id
and other.email_name = users.email_name
and other.email_domain = users.email_domain
);
And of course, if you allow for both upper and lower case in the email address column, you will also want to apply LOWER to it in the above expressions (lower(email)).

Related

MySQL query to efficiently return combined rows excluding duplicated info

So this is likely something simple, but I'm pulling my hair out trying to figure out an efficient way of doing this. I've looked at many other Q&A's, and I've messed with DISTINCT, GROUP BY, sub-queries, etc.
I've tried to super-simplify this example. (for the purpose of the example, there's no DB normalization) Here's a SQL fiddle:
http://sqlfiddle.com/#!9/948be7c/1
CREATE TABLE IF NOT EXISTS `orders` (
`id` int NOT NULL,
`name` varchar(90) NULL,
`email` varchar(200) NULL,
`phone` varchar(200) NULL,
PRIMARY KEY (`id`)
) DEFAULT CHARSET=utf8;
INSERT INTO `orders` (`id`, `name`, `email`, `phone`) VALUES
('1', 'Bob', 'bob@email.com', NULL),
('2', 'Bobby', 'bob@email.com', '1115551111'),
('3', 'Robert', 'robert@email.com', '1115551111'),
('4', 'Fred', 'fred@email.com', '1115552222'),
('5', 'Freddy', 'fred@email.com', '1115553333')
If I just run a simple select, I'll get all five rows back.
But I'd like to "de-duplicate" any results that have the same email address or that have the same phone number - because they will be the same people, even if there are multiple ID's for them, and even if their names are spelled different. And then consolidate those results (one of the "distinct" email addresses and one of the "distinct" phone numbers along with one of the names and one of the ID's.)
So that for the above, I'd end up with just one consolidated row per person.
Any suggestions?
I think that you can do what you want by filtering with a correlated subquery:
select o.*
from orders o
where o.id = (
select o1.id
from orders o1
where o1.email = o.email or o1.phone = o.phone
order by o1.phone is not null desc, o1.email is not null desc, id
limit 1
)
This retains just one row out of those that have the same phone or email, while giving priority to the row whose phone and email are not null. Ties are broken by picking the lowest id.
For your sample data, this returns:
id name email phone
2 Bobby bob@email.com 1115551111
4 Fred fred@email.com 1115552222
There are a number of different ways your requirements could be interpreted.
One way would be to reframe it as a constraint: only return a record if one of these is true:
it has a non-null email and phone, and no record exists with the same email and phone and a lower id
it has a non-null email but null phone, and no record exists with the same email and a non-null phone, and no record exists with the same email and a null phone and a lower id
it has a non-null phone but null email, and no record exists with the same phone and a non-null email, and no record exists with the same phone and a null email and a lower id
This translates easily into a couple of joins, no group by or distinct required.
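A sketch of that interpretation, written here with NOT EXISTS rather than explicit joins (orders table as in the question; untested):
SELECT o.*
FROM orders o
WHERE (o.email IS NOT NULL AND o.phone IS NOT NULL
       AND NOT EXISTS (SELECT 1 FROM orders x
                       WHERE x.email = o.email AND x.phone = o.phone AND x.id < o.id))
   OR (o.email IS NOT NULL AND o.phone IS NULL
       AND NOT EXISTS (SELECT 1 FROM orders x
                       WHERE x.email = o.email AND x.phone IS NOT NULL)
       AND NOT EXISTS (SELECT 1 FROM orders x
                       WHERE x.email = o.email AND x.phone IS NULL AND x.id < o.id))
   OR (o.phone IS NOT NULL AND o.email IS NULL
       AND NOT EXISTS (SELECT 1 FROM orders x
                       WHERE x.phone = o.phone AND x.email IS NOT NULL)
       AND NOT EXISTS (SELECT 1 FROM orders x
                       WHERE x.phone = o.phone AND x.email IS NULL AND x.id < o.id));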

MYSQL: Create Unique ID based on Another Column

If I have the following column in database:
Email
aaa@gmail.com
ccc@gmail.com
ddd@gmail.com
ccc@gmail.com
bbb@gmail.com
aaa@gmail.com
I would like to ALTER THE TABLE and create a unique ID column based on the 'Email' column. Like the following:
Email Email_ID
aaa@gmail.com 001
ccc@gmail.com 002
ddd@gmail.com 003
ccc@gmail.com 002
bbb@gmail.com 004
aaa@gmail.com 001
I would suggest that you use an integer for the value -- rather than a string. Then, you can use variables for the assignment:
alter table t add email_id int;
update t join
(select email, (@rn := @rn + 1) as rn
from (select distinct email from t order by email) t cross join
(select @rn := 0) params
) tt
on t.email = tt.email
set t.email_id = tt.rn;
If you run the subquery, you will see that it assigns a distinct number to each email.
The outer query then assigns this number to the email_id column. In MySQL 8+, you could also write:
alter table t add email_id int;
update t join
(select email, row_number() over (order by email) as rn
from (select distinct email from t order by email) t
) tt
on t.email = tt.email
set t.email_id = tt.rn;
If you are using MySQL version 8 or later, then DENSE_RANK provides a nice option here:
SELECT
Email,
LPAD(CAST(DENSE_RANK() OVER (ORDER BY Email) AS CHAR(3)), 3, '0') AS Email_ID
FROM yourTable
ORDER BY
Email;
I would have suggested maybe just adding an auto increment column to your table, but that wouldn't quite meet your requirements, because an auto increment column would always be unique.
Here's what I'd do...
Create a new table with a unique email column
CREATE TABLE `emails` (
id INT(3) PRIMARY KEY AUTO_INCREMENT,
email VARCHAR(255) UNIQUE
);
Seed it with your current data
INSERT INTO `emails` (`email`)
SELECT DISTINCT `email` FROM `some_mystery_table`
ORDER BY `email`;
Alter your existing tables to reference emails(id) as a foreign key. This could be a little tricky as you'd need to (probably)
Add a new int column email_id where required
Update your data with the id value corresponding to the email address
UPDATE some_mystery_table
INNER JOIN emails ON some_mystery_table.email = emails.email
SET some_mystery_table.email_id = emails.id;
Remove the email column
Add a foreign key where email_id references emails(id)
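A sketch of those surrounding steps (the UPDATE above sits between the first and second statements; the constraint name is arbitrary):
ALTER TABLE some_mystery_table ADD COLUMN email_id INT;
-- ... run the UPDATE above to populate email_id ...
ALTER TABLE some_mystery_table DROP COLUMN email;
ALTER TABLE some_mystery_table
  ADD CONSTRAINT fk_some_mystery_table_email
  FOREIGN KEY (email_id) REFERENCES emails (id);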
When displaying your data and you need a zero-padded email id, join the emails table, eg
SELECT a.whatever, e.email, LPAD(e.id, 3, '0') AS email_id
FROM some_mystery_table a
INNER JOIN emails e ON a.email_id = e.id;
When adding new email records, you add them to emails first, then use the generated id in any other related tables.
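For example, a sketch of that insert flow, relying on the UNIQUE key on email (the second table and its columns are placeholders):
INSERT INTO emails (email) VALUES ('new.person@example.com')
ON DUPLICATE KEY UPDATE id = LAST_INSERT_ID(id);  -- reuse the existing id if the address is already there

INSERT INTO some_mystery_table (email_id, whatever)
VALUES (LAST_INSERT_ID(), 'some value');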
ALTER TABLE emails add column `email_id` int(5) ZEROFILL PRIMARY KEY AUTO_INCREMENT;
SET @x := 0;
UPDATE emails SET email_id = LPAD(@x := (@x + 1), 4, '0') WHERE 1=1;
We first added the column email_id to the table emails and set it as primary key, using this query:
ALTER TABLE emails add column `email_id` int(5) ZEROFILL PRIMARY KEY AUTO_INCREMENT;
Then we declared a user variable called @x with a default value of 0:
SET @x := 0;
And finally, we filled the column with the incremental zero-filled id:
UPDATE emails SET email_id = LPAD(@x := (@x + 1), 4, '0') WHERE 1=1;
We used LPAD to zero fill.

SELECT FROM Table WHERE exact number not partial is in a string SQL

I have a table that contains a bunch of numbers separated by a comma.
I would like to retrieve rows from table where an exact number not a partial number is within the string.
EXAMPLE:
CREATE TABLE IF NOT EXISTS `teams` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(255) NOT NULL,
`uids` text NOT NULL,
`islive` tinyint(1) NOT NULL DEFAULT '1',
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=5 ;
INSERT INTO `teams` (`id`, `name`, `uids`, `islive`) VALUES
(1, 'Test Team', '1,2,8', 1),
(3, 'Test Team 2', '14,18,19', 1),
(4, 'Another Team', '1,8,20,23', 1);
I would like to search where 1 is within the string.
At present, if I use CONTAINS or LIKE it brings back all rows containing 1, but 18, 19, etc. are not 1 even though they have a 1 within them.
I have setup a sqlfiddle here
Do I need to do a regex?
You only need 1 condition:
select *
from teams
where concat(',', uids, ',') like '%,1,%'
I would search for all four possible locations of the ID you are searching for:
As the only element of the list.
As the first element of the list.
As the last element of the list.
As an inner element of the list.
The query would look like:
select *
from teams
where uids = '1' -- only
or uids like '1,%' -- first
or uids like '%,1' -- last
or uids like '%,1,%' -- inner
You could probably catch them all with an OR
SELECT ...
WHERE uids LIKE '1,%'
OR uids LIKE '%,1'
OR uids LIKE '%, 1'
OR uids LIKE '%,1,%'
OR uids = '1'
You didn't specify which version of SQL Server you're using, but if you're using 2016+ you have access to the STRING_SPLIT function which you can use in this case. Here is an example:
CREATE TABLE #T
(
id int,
string varchar(20)
)
INSERT INTO #T
SELECT 1, '1,2,8' UNION
SELECT 2, '14,18,19' UNION
SELECT 3, '1,8,20,23'
SELECT * FROM #T
CROSS APPLY string_split(string, ',')
WHERE value = 1
Your SQL Fiddle is using MySQL and your syntax is consistent with MySQL. There is a built-in function to use:
select t.*
from teams t
where find_in_set(1, uids) > 0;
Having said that, FIX YOUR DATA MODEL SO YOU ARE NOT STORING LISTS IN A SINGLE COLUMN. Sorry that came out so loudly, it is just an important principle of database design.
You should have a table called teamUsers with one row per team and per user on that team (a sketch follows the list below). There are numerous reasons why your method of storing the data is bad:
Numbers should be stored as numbers, not strings.
Columns should contain a single value.
Foreign key relationships should be properly declared.
SQL (in general) has lousy string handling functions.
The resulting queries cannot be optimized.
Simple things like listing the uids in order or removing duplicates are unnecessarily hard.
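A sketch of that junction table and the query it enables (the column names are assumptions):
CREATE TABLE teamUsers (
  team_id INT NOT NULL,
  user_id INT NOT NULL,
  PRIMARY KEY (team_id, user_id),
  FOREIGN KEY (team_id) REFERENCES teams (id)
);

-- The original search then becomes an indexable equality:
SELECT t.*
FROM teams t
JOIN teamUsers tu ON tu.team_id = t.id
WHERE tu.user_id = 1;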

Aggregate joined table results in an SQL SELECT on MySQL 5

I have the following schema, and would like to do a query that returns one row for each entry in the articles table, with its corresponding content column from the content table, and a column with each of that article's tags, such as you might get by using concat.
The query should SELECT only rows that match a certain tag. So if the tag atdi was provided, the result set would look something like:
id content tags
1 on my way nails broke and fell song,atdi,invalid
3 im all alone so far up here and my oxygen is all gone song,atdi,hourglass
4 you know your insides true better than i do song,atdi,starslight
I've tried a few different ways with subqueries, but keep getting errors - it's quite frustrating.
Here's the schema:
CREATE TABLE articles (
id int not null default 0,
published datetime,
author int not null default 0,
primary key (id)
);
INSERT INTO articles
(id, published, author)
VALUES
(1, CURRENT_TIMESTAMP, 1),
(2, CURRENT_TIMESTAMP, 1),
(3, CURRENT_TIMESTAMP, 1),
(4, CURRENT_TIMESTAMP, 1);
CREATE TABLE content (
id int not null default 0,
content varchar(250) not null default '',
primary key (id)
);
INSERT INTO content
(id,content)
VALUES
(1,'on my way nails broke and fell'),
(2,'exo skeleton junction at the railroad delayed'),
(3,'im all alone so far up here and my oxygen is all gone'),
(4,'you know your insides true better than i do');
CREATE TABLE tags (
id int not null default 0,
tag varchar(100) not null default '',
primary key (id,tag)
);
INSERT INTO tags
(id,tag)
VALUES
(1,"song"),
(2,"song"),
(3,"song"),
(4,"song"),
(1,"atdi"),
(2,"mars"),
(3,"atdi"),
(4,"atdi"),
(1,"invalid"),
(2,"roulette"),
(3,"hourglass"),
(4,"starslight");
Try something like this one:
select a.id, a.content, b.tags_1
from content as a inner join (
select id, GROUP_CONCAT(tag SEPARATOR ',') as tags_1 FROM tags group by id
) as b on a.id = b.id
INNER JOIN tags AS c ON a.id = c.id
WHERE c.tag = 'atdi'
Using the GROUP_CONCAT() method

mysql group_concat one table to another

I would like to have a query that will solve my problem in native SQL.
I have a table named "synonym" which holds words and the words' synonyms.
id, word, synonym
1, abandon, forsaken
2, abandon, desolate
...
As you can see, words are repeated in this table lots of times, and this makes the table unnecessarily big. I would like to have a table named "words" which doesn't have duplicate words, like:
id, word, synonyms
1, abandon, 234|90
...
note: "234" and "90" here are the id's of forsaken and desolate in newly created words table.
so i already created a new "words" table with unique words from word field at synonym table. what i need is an sql query that will look at the synonym table for each word's synonyms then find their id's from words table and update the "synonyms" field with vertical line seperated ids. then i will just drop the synonym table.
just like:
UPDATE words SET synonyms= ( vertical line seperated id's (id's from words table) of the words at the synonyms at synonym table )
i know i must use group_concat but i couldn't achieved this.
hope this is clear enough. thanks for the help!
Your proposed schema is plain horrible.
Why not use a many-to-many relationship?
Table words
id word
1 abandon
234 forsaken
Table synonyms
wid sid
1 234
1 90
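A sketch of that layout and how to read the synonyms back out (names follow the tables above):
CREATE TABLE words (
  id   INT NOT NULL PRIMARY KEY,
  word VARCHAR(100) NOT NULL UNIQUE
);

CREATE TABLE synonyms (
  wid INT NOT NULL,
  sid INT NOT NULL,
  PRIMARY KEY (wid, sid),
  FOREIGN KEY (wid) REFERENCES words (id),
  FOREIGN KEY (sid) REFERENCES words (id)
);

-- All synonyms of 'abandon':
SELECT s.word AS synonym
FROM words w
JOIN synonyms rel ON rel.wid = w.id
JOIN words s ON s.id = rel.sid
WHERE w.word = 'abandon';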
You can avoid using update and do it using the queries below:
TRUNCATE TABLE words;
INSERT INTO words
SELECT (@rowNum := @rowNum + 1),
a.word,
SUBSTRING(REPLACE(a.syns, a.id + '|', ''), 2) syns
FROM (
SELECT a.*,group_concat(id SEPARATOR '|') syns
FROM synonyms a
GROUP BY word
) a,
(SELECT @rowNum := 0) b
Test Script:
CREATE TABLE `ts_synonyms` (
`id` INT(11) NULL DEFAULT NULL,
`word` VARCHAR(20) NULL DEFAULT NULL,
`synonym` VARCHAR(2000) NULL DEFAULT NULL
);
CREATE TABLE `ts_words` (
`id` INT(11) NULL DEFAULT NULL,
`word` VARCHAR(20) NULL DEFAULT NULL,
`synonym` VARCHAR(2000) NULL DEFAULT NULL
);
INSERT INTO ts_synonyms
VALUES ('1','abandon','forsaken'),
('2','abandon','desolate'),
('3','test','tester'),
('4','test','tester4'),
('5','ChadName','Chad'),
('6','Charles','Chuck'),
('8','abandon','something');
INSERT INTO ts_words
SELECT (@rowNum := @rowNum + 1),
a.word,
SUBSTRING(REPLACE(a.syns, a.id + '|', ''), 2) syns
FROM (
SELECT a.*,
GROUP_CONCAT(id SEPARATOR '|') syns
FROM ts_synonyms a
GROUP BY word
) a,
(SELECT @rowNum := 0) b;
SELECT * FROM ts_synonyms;
SELECT * FROM ts_words;