I am building a bot that matches users based on a score. The score is calculated, at the user's request, from data stored in a database.
The database has a single table with a few columns (user, age, genre, language, format, etc.).
When a user clicks the "find match" button in the chatbot, that user's data, which is already in the database, should be compared to the other users' data in the same table, column by column for each row.
For example, the user's genre preference is compared to the genre preference in each other row; when there is a match, 1 point is added. Then the languages are compared, and 1 point is given when there's a match. This continues for every column of every row. In the end, the users with the highest matching scores are recommended to this user.
What's the best approach to do that?
I am using nodejs and mysql database.
Thank you.
I see this as a self join and conditional expressions:
select t.*,
(t1.genre = t.genre) + (t1.language = t.language) + (t1.format = t.format) as score
from mytable t
inner join mytable t1 on t1.user <> t.user
where t1.user = ?
order by score desc
The question mark is a placeholder for the id of the currently logged-in user, for whom you want to find matching users. The query returns all other users and counts how many values each has in common with that user across the compared columns: each matching value increases the score by 1 (in MySQL, a true comparison evaluates to 1 and a false one to 0). Results are sorted by descending score.
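One caveat worth noting: in MySQL, comparing a column to NULL with = yields NULL, and adding NULL into the sum makes the whole score NULL. If any of the preference columns can be NULL, a NULL-safe sketch of the same query (using MySQL's <=> operator, with the same assumed table and columns as above) avoids that, and a LIMIT caps the output to the best few matches:

select t.*,
       -- <=> is MySQL's NULL-safe equality: it always returns 1 or 0, never NULL
       (t1.genre <=> t.genre) + (t1.language <=> t.language) + (t1.format <=> t.format) as score
from mytable t
inner join mytable t1 on t1.user <> t.user
where t1.user = ?
order by score desc
limit 5;

Note that <=> also counts two NULLs as a match, which may or may not be what you want for "no preference" values.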
Here's a screenshot describing the keys in the table that I use:
Table keys (screenshot)
Each row in the table is a representation of purchases by a specific client in a specific hour.
So a typical row would be like:
typical row (screenshot)
I need to merge two clients' data, so that one client ends up with the purchase values of both summed in its rows, for each hour.
In pseudocode, what I want to perform is:
For every hour (row), add the 'purchase amount' of the row that has
client id '526' to the row that has client id '518'.
At first, I tried to execute this but then got an error due to the multiple keys configured in the table:
UPDATE purchases SET client_id = 518 WHERE client_id = 526;
Since client '518' already has rows for the same hours, I cannot perform the above query: it would create rows that collide with the table's keys.
How should I tackle this?
You will require three queries:
One to do the sum if a record exists for both customers with the same time value:
update purchase p1
inner join purchase p2 on p2.client_id=528 and p2.date=p1.date
set p1.amount = p1.amount + p2.amount
where p1.client_id=526;
A second one to handle the records where only the merged-away client has a row for an hour (and the surviving client does not):
insert into purchase (client_id, date, amount)
select 526, p1.date, p1.amount
from purchase p1
where p1.client_id = 528
  and not exists (select *
                  from purchase p2
                  where p2.client_id = 526
                    and p2.date = p1.date);
Note - the above can probably also be done (and more elegantly) using an update query.
And a final query to remove the merged records:
delete from purchase where client_id=528;
Note - I used client_id values 526 and 528 throughout, with 526 as the surviving client; swap in your own ids (518 as the survivor and 526 as the one merged away) to fit your purpose.
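As hinted in the note above, if the table really does have a unique key covering (client_id, date) - which the error in the question suggests - the first two queries can probably be collapsed into a single MySQL INSERT ... ON DUPLICATE KEY UPDATE. A sketch, keeping the same 526/528 ids:

insert into purchase (client_id, date, amount)
select 526, src.date, src.amount
from purchase as src
where src.client_id = 528
-- on a (client_id, date) collision, add the amounts instead of inserting
on duplicate key update amount = amount + values(amount);

delete from purchase where client_id = 528;

(VALUES() in this position is deprecated as of MySQL 8.0.20 in favor of row aliases, but it still works.)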
I have a query in Microsoft Access that pulls from a SQL Server view. There is a revision-level field of type char(2), so an object can have multiple revisions starting with "01", then "02", and on up. I would like to get just the latest-revision record for each object. Is there a way to set up the query to do this?
Try something like:
SELECT DataTable.ItemName, DataTable.Revision
FROM DataTable
WHERE DataTable.Revision = (SELECT Max(Revision) FROM DataTable AS T WHERE T.ItemName = DataTable.ItemName);
You cannot do this in a single pass: you first need to find the right revision for each product, and then fetch the records that match those product/revision pairs.
To find the Revision for each product, you want to group by product and find the highest related revision:
SELECT Test.ProductID, Max(Test.Revision) AS MaxOfRevision
FROM Test
GROUP BY Test.ProductID;
Now you can go back to the original dataset and get every record that matches a newly found ProductID/Revision pair (this assumes the first query was saved in Access under the name subQ):
SELECT Test.TestID, Test.ProductID, Test.Description
FROM Test
INNER JOIN
subQ
ON (Test.Revision = subQ.MaxOfRevision) AND (Test.ProductID = subQ.ProductID);
Of course you can also create a single query that contains both:
SELECT Test.TestID, Test.ProductID, Test.Description
FROM Test
INNER JOIN
(SELECT Test.ProductID, Max(Test.Revision) AS MaxOfRevision
FROM Test
GROUP BY Test.ProductID) AS subq
ON (Test.Revision = subQ.MaxOfRevision) AND (Test.ProductID = subQ.ProductID);
Explanation:
The only way to find the highest value in a column is with Max(). SELECT Max(Revision) FROM table; gives us exactly one value: the highest occurring revision in all records. The DBEngine looks at every revision it finds and takes the highest one.
If we want to have the highest revision of every Product, we need to group by product. The DBEngine now walks through the table once for every ProductID, looks at each revision it finds in the respective records, takes the highest one and associates it with its Product-Group.
That is why the resulting records of the first query cannot contain all the desired data: they are not a subset of the original records, but new records that each carry one group designator (the ProductID) and one revision value: the highest of all revision values the DBEngine could find for that product.
Sidenote
As long as your revisions keep their leading zeroes you can use Max() directly, because the text values happen to sort the same way the numbers would ("09" < "10"). In a future application you might want to replace that field with a numeric field, or convert it explicitly to a number before using it numerically (Max, sorting, ...).
I found that the following sub-select also works.
SELECT *
FROM DataTable
INNER JOIN (
    SELECT JobNumber,
           MAX(CAST(RevisionLevel AS INT)) AS MaxRevision
    FROM DataTable
    GROUP BY JobNumber
) AS x
    ON x.JobNumber = DataTable.JobNumber
   AND CAST(DataTable.RevisionLevel AS INT) = x.MaxRevision
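If you can run the statement on SQL Server itself (for example in the view's definition or an Access pass-through query), a window function is another common way to express this; a sketch using the column names from the first answer above:

SELECT ItemName, Revision
FROM (
    SELECT ItemName, Revision,
           -- number the revisions per item, newest first
           ROW_NUMBER() OVER (PARTITION BY ItemName
                              ORDER BY CAST(Revision AS INT) DESC) AS rn
    FROM DataTable
) AS ranked
WHERE rn = 1;

Access's own SQL dialect does not support window functions, so this only helps on the SQL Server side.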
So I have three databases (all on one server) that I need to join tables on. Essentially I do a join across two tables to determine the identification of particular users who do a certain thing after a certain date. It works fine:
SELECT a.THING, a.ANOTHERTHING, b.IDENTIFICATION, b.RELEVANTDATE
FROM FIRSTDATABASE.TABLE a
JOIN SECONDDATABASE.TABLE b
ON a.THING = b.THING
WHERE ANOTHERTHING = '----' AND IDENTIFICATION <> 'NULL'
AND b.RELEVANTDATE > date('YYYY-MM-DD')
At present I'm also running a second query by its lonesome - this is one table, on a third database - to get all users with a certain amount of an item. It also works:
SELECT ITEM, AMOUNT, IDENTIFICATION
FROM TABLE
WHERE ITEM = '----' AND AMOUNT > '0' AND IDENTIFICATION <> 'NULL'
GROUP BY AMOUNT
I then use VLOOKUP, with the first result set as my guide, to attach the AMOUNT from the second query to each user IDENTIFICATION that met the first query's criteria (did a certain thing after a certain date).
My question is, how would I join these two into one large query?
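One way to fold the second query in is to join the third table on IDENTIFICATION. A sketch, assuming the third database's table is reachable as THIRDDATABASE.TABLE (a placeholder mirroring the naming above) and that IDENTIFICATION is the shared key:

SELECT a.THING, a.ANOTHERTHING, b.IDENTIFICATION, b.RELEVANTDATE, c.ITEM, c.AMOUNT
FROM FIRSTDATABASE.TABLE a
JOIN SECONDDATABASE.TABLE b
    ON a.THING = b.THING
JOIN THIRDDATABASE.TABLE c
    ON c.IDENTIFICATION = b.IDENTIFICATION
WHERE a.ANOTHERTHING = '----'
  AND b.IDENTIFICATION <> 'NULL'
  AND b.RELEVANTDATE > date('YYYY-MM-DD')
  AND c.ITEM = '----'
  AND c.AMOUNT > '0';

This replaces the VLOOKUP step: each row of the first result now carries its AMOUNT directly.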
In a large user database with the following format and sample data, we are trying to identify duplicated people:
id   first_name   last_name   email
---------------------------------------------------
1    chris        baker
2    chris        baker       chris@gmail.com
3    chris        baker       chris@hotmail.com
4    chris        baker       crayzyguy@crazy.com
5    carl         castle      castle@npr.org
6    mike         rotch       fakeuser@sample.com
I am using the following query:
SELECT
GROUP_CONCAT(id) AS "ids",
CONCAT(UPPER(first_name), UPPER(last_name)) AS "name",
COUNT(*) AS "duplicate_count"
FROM
users
GROUP BY
name
HAVING
duplicate_count > 1
This works great; I get a list of duplicates with the id numbers of the involved rows.
We would re-assign any associated data tied to a duplicate to the actual person (set user_id = 2 where user_id = 3), then we delete the duplicating user row.
The trouble comes after we make this report the first time, as we clean up the list after manually verifying that they are indeed duplicates -- some ARE NOT duplicates. There are 2 Chris Bakers that are legitimate users.
We don't want to keep seeing Chris Baker in subsequent duplicate reports until the end of time, so I am looking for a way to flag that user id 1 and user id 4 are NOT duplicates of each other for future reports, but they could be duplicated by new users added later.
What I tried
I added an is_not_duplicate field to the user table, but then if a new duplicate "Chris Baker" gets added to the database, this situation will not show on the duplicate report; the is_not_duplicate flag improperly excludes one of the accounts. My HAVING clause would not meet the > 1 threshold until there are -two- duplicates of Chris Baker, plus the "real" one marked is_not_duplicate.
Question Summed Up
How can I build exceptions into the above query without looping results or multiple queries?
Sub-queries are fine, but the size of the dataset makes every query count and I'd like the solution to be as performant as possible.
Try adding the is_not_duplicate boolean field and modifying your query as follows:
SELECT
GROUP_CONCAT(id) AS "ids",
CONCAT(UPPER(first_name), UPPER(last_name)) AS "name",
COUNT(*) AS "duplicate_count",
SUM(is_not_duplicate) AS "real_count"
FROM
users
GROUP BY
name
HAVING
duplicate_count > 1
AND
duplicate_count - real_count > 0
Newly added duplicates will have is_not_duplicate = 0, so the real_count for that name will be less than the duplicate_count and the row will be shown.
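With the sample data from the question, flagging the two legitimate Chris Bakers (ids 1 and 4) would then be a one-off update like this sketch:

update users set is_not_duplicate = 1 where id in (1, 4);

After that, the name only reappears in the report once a new, unflagged Chris Baker is inserted.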
My brain is too fried to come up with the actual query for this at the moment, but I might be able to give you a nudge down a path that should work :)
What if you added another column (or maybe a table of valid duplicated users instead? both accomplish the same thing) and ran a subquery that counts up all of the valid duplicates, which you could then compare against the count in your current query? You would exclude any users whose counts match and pull in any whose counts are higher. Hopefully that makes sense; here is a use case:
Chris Bakers with ids 1 and 4 are marked as valid duplicates
There are 4 Chris Bakers in the system
You get a count of the valid Chris Bakers
You get a count of all Chris Bakers
valid_count <> total_count, so return Chris Baker
You could probably even modify the query so that it does not list the already-validated ids at all (even when only 1 id is marked), rather than having to re-check which ones are valid. That would be a little more complicated; without it, you at least ignore Chris Baker until another one enters the system.
I have written up the basic query; excluding specific ids is something I will try to roll in tonight. But this at least solves your initial need. If you do not need the more complicated query, do let me know so that I do not waste my time on it :)
SELECT
    GROUP_CONCAT(id) AS "ids",
    CONCAT(UPPER(first_name), UPPER(last_name)) AS "name",
    COUNT(*) AS "duplicate_count"
FROM
    users
WHERE NOT EXISTS
(
    SELECT 1
    FROM
    (
        SELECT
            CONCAT(UPPER(first_name), UPPER(last_name)) AS "name",
            COUNT(*) AS "valid_duplicate_count"
        FROM
            users
        WHERE
            is_valid_duplicate = 1 -- true
        GROUP BY
            name
        HAVING
            valid_duplicate_count > 1
    ) AS duplicate_users
    WHERE
        duplicate_users.name = CONCAT(UPPER(users.first_name), UPPER(users.last_name))
        -- the outer duplicate_count is not visible here, so recompute
        -- the total count for this name inline
        AND valid_duplicate_count = (SELECT COUNT(*)
                                     FROM users u2
                                     WHERE CONCAT(UPPER(u2.first_name), UPPER(u2.last_name))
                                           = duplicate_users.name)
)
GROUP BY
    name
HAVING
    duplicate_count > 1
Below is a query that should do the same as above, except the final list only prints the ids that are not in the valid list. This ended up being simpler than I thought. It is mostly the same as above; I kept the first version to preserve both options, and in case I messed this one up... it does get complicated, with many nested queries. If CTEs or even temp tables are available to you, breaking it up might make the query more expressive :). Hopefully this helps and is what you are looking for.
SELECT GROUP_CONCAT(id) AS "ids",
       CONCAT(UPPER(first_name), UPPER(last_name)) AS "name",
       COUNT(*) AS "final_duplicate_count"
       -- this count could actually be 1 due to the nature of the query
FROM
    users
-- get the list of duplicated user names
WHERE EXISTS
(
    SELECT
        CONCAT(UPPER(first_name), UPPER(last_name)) AS "name",
        COUNT(*) AS "total_duplicate_count"
    FROM
        users AS total_dup_users
    -- ignore valid users whose count still matches
    WHERE NOT EXISTS
    (
        SELECT 1
        FROM
        (
            SELECT
                CONCAT(UPPER(first_name), UPPER(last_name)) AS "name",
                COUNT(*) AS "valid_duplicate_count"
            FROM
                users AS valid_users
            WHERE
                is_valid_duplicate = 1 -- true
            GROUP BY
                name
            HAVING
                valid_duplicate_count > 1
        ) AS duplicate_users
        WHERE
            -- join inner table to outer table
            duplicate_users.name = CONCAT(UPPER(total_dup_users.first_name),
                                          UPPER(total_dup_users.last_name))
            -- valid count check: recompute the total for this name inline
            AND valid_duplicate_count = (SELECT COUNT(*)
                                         FROM users u2
                                         WHERE CONCAT(UPPER(u2.first_name), UPPER(u2.last_name))
                                               = duplicate_users.name)
    )
    -- join inner table to outer table
    AND CONCAT(UPPER(total_dup_users.first_name), UPPER(total_dup_users.last_name))
        = CONCAT(UPPER(users.first_name), UPPER(users.last_name))
    GROUP BY
        name
    HAVING
        total_duplicate_count > 1
)
-- ignore users that are valid when doing the actual counts
AND NOT EXISTS
(
    SELECT 1
    FROM users AS valid
    WHERE
        -- join inner table to outer table
        CONCAT(UPPER(users.first_name), UPPER(users.last_name))
            = CONCAT(UPPER(valid.first_name), UPPER(valid.last_name))
        -- only valid users
        AND valid.is_valid_duplicate = 1 -- true
)
GROUP BY
    name
Since this is basically a many-to-many relationship, I would add a new table not_duplicate with fields user1 and user2.
I would probably add two rows for each not_duplicate relationship, one row for 2 -> 3 and a symmetric row for 3 -> 2, to ease querying. That can introduce data inconsistencies, so make sure you delete both rows at the same time (or keep only one row and write the query in your script accordingly).
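A minimal sketch of that table, with the symmetric-pair convention handled by paired inserts (the primary key also blocks duplicate pair entries):

create table not_duplicate (
    user1 int not null,
    user2 int not null,
    primary key (user1, user2)
);

-- record that users 2 and 3 are confirmed distinct, in both directions
insert into not_duplicate (user1, user2) values (2, 3), (3, 2);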
It seems to me that the is_not_duplicate column is not expressive enough to hold the information you want to store: from what I understand, you want to manually tell your detection that two specific users are not duplicates of each other. So either you create a column like is_not_duplicate_of holding the other user's id, or, if you want to keep open the possibility that one user can be manually declared not a duplicate of more than one other user, you need a separate table with two user-id columns.
The query reporting the non-overridden duplicates probably has to be a bit more complex than the one you suggested; I cannot think of one that works with GROUP BY/HAVING logic. The only thing that comes to mind is something like:
SELECT u1.* FROM users u1
INNER JOIN users u2
    ON u1.id <> u2.id
    AND UPPER(u2.first_name) = UPPER(u1.first_name)
    AND UPPER(u2.last_name) = UPPER(u1.last_name)
WHERE NOT EXISTS (
    SELECT *
    FROM users_non_dups un
    WHERE (un.id1 = u1.id AND un.id2 = u2.id)
       OR (un.id1 = u2.id AND un.id2 = u1.id)
)
If you were to correct all duplicates each time you run the report, then a very simple solution might be to modify the query:
SELECT
GROUP_CONCAT(id) AS "ids",
MAX(id) AS "max_id",
CONCAT(UPPER(first_name), UPPER(last_name)) AS "name",
COUNT(*) AS "duplicate_count"
FROM
users
GROUP BY
name
HAVING
duplicate_count > 1
AND
max_id > MAX_ID_LAST_TIME_DUPLICATE_REPORT_WAS_GENERATED;
I would go ahead and make the "confirmed_unique" column, defaulted to "False".
Then, to avoid the problems you mentioned, I would select all elements that look like duplicates and still have a "False" entry for "confirmed_unique".
I am not sure if this will work, but could you consider the reverse logic of adding an is_duplicate_of column? That way you can mark a duplicate by entering in this column the ID of the record it duplicates, which will be greater than zero. The records you wish to retain will have a 0 in this field, and you can default unchecked records to -1 to keep track of the validation status of each record.
Afterwards you can keep executing a query that compares new records only against the confirmed records having is_duplicate_of = 0.
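A sketch of that comparison query, assuming the -1/0 convention just described:

select n.id, c.id as possible_duplicate_of
from users n
join users c
    on c.is_duplicate_of = 0   -- confirmed, retained records only
   and upper(c.first_name) = upper(n.first_name)
   and upper(c.last_name) = upper(n.last_name)
where n.is_duplicate_of = -1;  -- records not yet validated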
If you are OK with a slight change to the format of the report, you could do a self-join like this:
SELECT
CONCAT(u1.id,",", u2.id) AS "ids",
CONCAT(UPPER(u1.first_name), UPPER(u1.last_name)) AS "name"
FROM
users u1, users u2
WHERE
u1.id < u2.id AND
UPPER(u1.first_name) = UPPER(u2.first_name) AND
UPPER(u1.last_name) = UPPER(u2.last_name) AND
CONCAT(u1.id,",", u2.id) NOT IN (SELECT ids from not_dupe)
which reports duplicates as follows:
ids | name
----|--------
1,2 | CHRISBAKER
1,3 | CHRISBAKER
...
And the not_dupe table would have rows like below:
ids
------
1,2
3,4
...
I think it would make sense to create a lookup table storing the ids of the users that are not duplicates. Confirmed non-duplicates are thus removed, and the query only needs a small extra lookup against that table for the duplicates it actually finds.
For instance, in this example we would have:
id_1 | id_2
-----|-----
  2  |  4
if crayzyguy@crazy.com and chris@gmail.com are different persons.
If I were you, I would add some geolocation tables/fields to my database schema.
The probability that two end users have the same name AND live in the same place is very low - except in very big towns - but you can also split the geolocation into smaller areas; it's a matter of granularity.
Good luck.
I would suggest you create a couple of things:
A Boolean column to flag confirmed users
A String column to save ids
A trigger that checks whether the first name and last name are already there, sets the flag accordingly, and saves in the string column the ids of all records this one possibly duplicates.
Then build a report that looks for rows flagged true and decodes the string field to find the possible duplicates.
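A rough sketch of such a trigger; the flag and string column names here (possible_duplicate, possible_duplicate_ids) are made up for illustration, and note that a MySQL trigger may read, though not modify, its own table:

delimiter //
create trigger users_duplicate_check before insert on users
for each row
begin
    declare existing_ids text;
    -- collect the ids of existing users with the same name, if any
    select group_concat(id) into existing_ids
    from users
    where upper(first_name) = upper(new.first_name)
      and upper(last_name) = upper(new.last_name);
    if existing_ids is not null then
        set new.possible_duplicate = 1;                -- the Boolean flag column
        set new.possible_duplicate_ids = existing_ids; -- the String ids column
    end if;
end//
delimiter ;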
I gave Justin Pihony +1 as the 1st to suggest comparing the duplicate count with the not duplicate count, and Hrant Khachatrian +1 for being the 1st to show an efficient way of doing that.
Here is a slightly different method, plus some renaming to make everything a bit more self-explanatory, and some extra columns in the query to make it obvious which records need to be compared as potential duplicates.
I would call the new column "CONFIRMED_UNIQUE" instead of "IS_NOT_DUPLICATE". Like Hrant I would make it Boolean (tinyint(1) with 0=FALSE and 1=TRUE).
The "potential_duplicate_count" is the maximum number of records that would have to be deleted.
select
group_concat(case when not confirmed_unique then id end) as potential_duplicate_ids,
group_concat(case when confirmed_unique then id end) as confirmed_unique_ids,
concat(upper(first_name), upper(last_name)) as name,
sum( case when not confirmed_unique then 1 end ) - (not max(confirmed_unique)) as potential_duplicate_count
from
users
group by
name
having
potential_duplicate_count > 0
I see someone else has been voted down for suggesting merging, but nothing in your problem statement says the data needs to stay in place. The OP followed up with their own solution, which happens to be a pure SQL one, but that doesn't mean every solution needs to be limited to that.
The issue, as I understand it, is that contacts have multiple, similar, but not necessarily identical records in your database, which has cost and reputational implications, so you're looking to deduplicate these records.
I would write a batch job that searches for potential duplicates (this can be as complicated or as simple as you like), then closes the two records it identifies as dupes and creates a new record.
To enable that you'd need four new columns:
Status, which would be Open, Merged, Split, or Closed
RelatedId, which would hold the id of the record it was merged with
ChainId, the new record Id
DateStatusChanged, obvious enough
Open would be the default status
Merged would be when the record is merged (effectively closed and replaced)
Split would be if the merge was reversed
Closed would be for a synthetic merged record that is retired when its own merge is reversed (see the second table below)
So, as an example, go through all of the records that have the same name and merge them in pairs. If you have three Chris Bakers, records 1, 2 and 3, merge 1 and 2 to make record 4, and then 3 and 4 to make record 5. Your table would end up something like:
ID NAME STATUS RELATEDID CHAINID DATESTATUSCHANGED [other columns omitted]
1 Chris Baker MERGED 2 4 27-AUG-2012
2 Chris Baker MERGED 1 4 27-AUG-2012
3 Chris Baker MERGED 4 5 28-AUG-2012
4 Chris Baker MERGED 3 5 28-AUG-2012
5 Chris Baker OPEN
This way you have a full record of what has happened to your data and can reverse any changes by unmerging. If, for example, contacts 1 and 2 weren't the same, you would reverse the merge of 3 and 4, then reverse the merge of 1 and 2, and end up with this:
ID NAME STATUS RELATEDID CHAINID DATESTATUSCHANGED
1 Chris Baker SPLIT 2 4 29-AUG-2012
2 Chris Baker SPLIT 1 4 29-AUG-2012
3 Chris Baker SPLIT 4 5 29-AUG-2012
4 Chris Baker CLOSED 3 5 29-AUG-2012
5 Chris Baker CLOSED 29-AUG-2012
You could then manually merge, as you'd probably not want your job to automatically remerge split records.
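A compressed sketch of the first merge from the worked example (column names follow the list above; a real job would compute the merged field values rather than simply copying record 1's):

-- create the surviving record from records 1 and 2
insert into users (first_name, last_name, status)
select first_name, last_name, 'OPEN'
from users
where id = 1;

set @new_id = last_insert_id();

-- close out the merged pair, pointing each at its partner and at the new record
update users
set status = 'MERGED', related_id = 2, chain_id = @new_id, date_status_changed = now()
where id = 1;
update users
set status = 'MERGED', related_id = 1, chain_id = @new_id, date_status_changed = now()
where id = 2;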
Is there a good reason for not merging duplicate accounts into a single account?
From the comments, it seems the information is mostly used as contact information, so merging should be relatively painless and low risk. Once you merge users they will no longer appear in your duplicate report. Furthermore, your users table will actually shrink, which could help with performance.
Add an is_not_duplicate column of datatype bit to your table, set its value for the confirmed rows, and then use the query below:
SELECT GROUP_CONCAT(id) AS "ids",
CONCAT(UPPER(first_name), UPPER(last_name)) AS "name"
FROM users
GROUP BY name
HAVING COUNT(*) > SUM(CAST(is_not_duplicate AS SIGNED))
The query above compares the total number of duplicate rows per name against the number of rows flagged as valid.
Why don't you make the email column a unique identifier in this case? After you cleanse your records once, duplicates would not be allowed from there onwards.
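If you go that route, the constraint itself is a one-liner; note that MySQL permits multiple NULLs in a UNIQUE index, so rows like id 1 in the sample data, which has no email, remain legal:

alter table users add unique key uq_users_email (email);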