Hi, I have to delete duplicates from my table where an item is duplicated for a user.
Example table:
Id | User | item  | count
1  | max  | coco  | 2
2  | max  | nut   | 4
3  | max  | image | 1
4  | max  | coco  | 4
How can I write an SQL query to delete all such duplicates? There are a lot of users.
I tried to find the duplicates with:
SELECT id, user, item, COUNT(id) AS licznik
FROM Users
GROUP BY user, item
HAVING licznik > 1;
If you don't care which row remains, you can use a query such as the following to keep the row with the minimum id:
delete u
from users u
join (
    select user, item, min(id) as min_id
    from users
    group by user, item
    having count(*) > 1
) ui using (user, item)
where u.id > ui.min_id;
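Once the duplicates are gone, a unique key over the pair would keep new ones from being inserted; a minimal sketch, assuming the users table and columns above (the key name is made up):
-- hypothetical key name; prevents the same (user, item) pair from being inserted twice
alter table users
  add unique key uq_user_item (`user`, item);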
Try using a common table expression (CTE) to assign a row number over the specified columns (a window function), and then delete the rows with a row number larger than 1. I don't have a MySQL database to check whether it works, but on Microsoft SQL Server the query below works like a charm. I read the MySQL documentation and it should also work there.
;with cte as (
    select row_number() over (partition by [User], item order by Id) as rn
    from Users
)
delete c from cte c where rn > 1;
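On MySQL 8.0 the window function itself is available, but a CTE cannot be the target of a DELETE there, so one workaround (a sketch, assuming Id is the primary key of Users) is to join the numbered rows back to the base table:
delete u
from Users u
join (
    select Id,
           row_number() over (partition by `User`, item order by Id) as rn
    from Users
) d on d.Id = u.Id
where d.rn > 1;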
I know there are a ton of similar questions about finding and removing duplicate values in MySQL, but my question is a bit different:
I have a table with columns ID, Timestamp and Price. A script scrapes data from another webpage and saves it to the database every 10 seconds. Sometimes the data ends up like this:
| id | timestamp | price |
|----|-----------|-------|
| 1 | 12:13 | 100 |
| 2 | 12:14 | 120 |
| 3 | 12:15 | 100 |
| 4 | 12:16 | 100 |
| 5 | 12:17 | 110 |
As you can see, there are 3 rows with the same price, and removing the row with ID = 4 would shrink the table without damaging data integrity. I need to remove consecutive duplicated records except the first one (the one with the lowest ID or Timestamp).
Is there an efficient way to do it? (There are about a million records.)
I edited my scraping script so it checks for a duplicated price before inserting, but I still need to shrink and maintain my old data.
Since MySQL 8.0 you can use the window function LAG() in the following way:
delete tbl.* from tbl
join (
    -- use lag(price) to get the value from the previous row
    select id, lag(price) over (order by id) as price from tbl
) l
-- join rows whose price equals the previous row's price; these will be deleted
on tbl.id = l.id and tbl.price = l.price;
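Since the question mentions about a million rows, it may be worth previewing what would be deleted before running it; a sketch that reuses the same derived table:
-- preview: rows whose price equals the immediately preceding row's price
select t.id, t.price
from tbl t
join (
    select id, lag(price) over (order by id) as prev_price
    from tbl
) l on l.id = t.id
where t.price = l.prev_price;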
I am just grouping by price and keeping only one record per group; the lowest id gets displayed. Hope the below helps.
select id, timestamp, price from yourTable group by price having count(price) > 0;
My query is based on @Tim Biegeleisen's one.
-- delete records
DELETE
FROM yourTable t1
-- where an older record with the same price exists
WHERE EXISTS (SELECT 1
              FROM yourTable t2
              WHERE t2.price = t1.price
                AND t2.id < t1.id
                -- but no record with a different price exists between the two
                AND NOT EXISTS (SELECT 1
                                FROM yourTable t3
                                WHERE t1.price <> t3.price
                                  AND t3.id > t2.id
                                  AND t3.id < t1.id));
It deletes records for which an older record with the same price exists and there is no record with a different price between the two.
The timestamp column could be used instead if the id column is not numeric and ascending.
I have a table that contains custom user analytics data. I was able to pull the number of unique users with a query:
SELECT COUNT(DISTINCT(user_id)) AS 'unique_users'
FROM `events`
WHERE client_id = 123
And this will return 16728
This table also has a DATETIME column that I would like to group the counts by. However, if I add a GROUP BY to the end of it, everything seems to group properly except that the totals don't match. My new query is this:
SELECT COUNT(DISTINCT(user_id)) AS 'unique_users', DATE(server_stamp) AS 'date'
FROM `events`
WHERE client_id = 123
GROUP BY DATE(server_stamp)
Now I get the following values:
|-----------------------------|
| unique_users | date |
|---------------|-------------|
| 2650 | 2019-08-26 |
| 3486 | 2019-08-27 |
| 3475 | 2019-08-28 |
| 3631 | 2019-08-29 |
| 3492 | 2019-08-30 |
|-----------------------------|
Totaling to 16734. I tried using a subquery to get the distinct users and then count and group in the main query, but no luck there. Any help with this would be greatly appreciated. Let me know if further information would help diagnose the issue.
A user who is connected with events on multiple days (e.g. a session that starts before midnight and ends afterwards) will occur once for each of those days in the new query. This is because the first query performs the DISTINCT over all rows at once, while the second only removes duplicates inside each group; identical values in different groups stay untouched.
So if you have a combination of DISTINCT in the select clause and a GROUP BY, the GROUP BY is executed before the DISTINCT. Thus, without any restrictions, you cannot assume that the COUNT(DISTINCT user_id) of the first query equals the sum of COUNT(DISTINCT user_id) over all groups.
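If it helps to confirm this against the actual data (a sketch, assuming the same events table, client_id filter, and server_stamp column as in the question), the gap of 16734 - 16728 = 6 should equal the total number of extra user-day appearances:
-- one extra appearance per additional day a user is active on
SELECT SUM(day_count - 1) AS extra_appearances
FROM (SELECT user_id, COUNT(DISTINCT DATE(server_stamp)) AS day_count
      FROM `events`
      WHERE client_id = 123
      GROUP BY user_id) per_user;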
Xandor is absolutely correct. If a user logged in on 2 different days, there is no way your 2nd query can remove them across groups. If you need the data grouped by date, you can try the query below:
SELECT COUNT(user_id) AS unique_users, MIN_DATE AS `date`
FROM (SELECT user_id, MIN(DATE(server_stamp)) AS MIN_DATE -- might be MAX
      FROM `events`
      WHERE client_id = 123
      GROUP BY user_id) X
GROUP BY MIN_DATE;
For example, I have the following table called Information:
user_id | item
-------------------------
45 | camera
36 | smartphone
23 | camera
1 | glucose monitor
3 | smartwatch
2 | smartphone
7 | smartphone
2 | camera
2 | glucose monitor
2 | smartwatch
How can I check which user_id has at least one of every item?
The list of items is not static and may be different every time. However, in this example there are 4 unique items: camera, smartphone, smartwatch, glucose monitor.
Expected Result:
Because user_id : 2 has at least one of every item, the result will be:
user_id
2
Here is what I have attempted so far; however, if the list of items changes from 4 unique items to 3 unique items, I don't think it works anymore.
SELECT *
FROM Information
GROUP BY Information.user_id
having count(DISTINCT item) >= 4
One approach would be to aggregate by user_id, and then assert that the distinct item count matches the total distinct item count from the entire table.
SELECT user_id
FROM Information
GROUP BY user_id
HAVING COUNT(DISTINCT item) = (SELECT COUNT(DISTINCT item) FROM Information);
You can try a join between the per-user distinct item count and the total distinct item count:
SELECT t1.user_id
FROM (
    SELECT user_id, COUNT(DISTINCT item) cnt
    FROM T
    GROUP BY user_id
) t1 JOIN (SELECT COUNT(DISTINCT item) cnt FROM T) t2
WHERE t1.cnt = t2.cnt
Or with EXISTS:
SELECT t1.user_id
FROM (
SELECT user_id,COUNT(DISTINCT item) cnt
FROM T
GROUP BY user_id
) t1
WHERE exists(
SELECT 1
FROM T tt
HAVING COUNT(DISTINCT tt.item) = t1.cnt
)
Results:
| user_id |
|---------|
| 2 |
One more way of solving this problem is by using a CTE and the DENSE_RANK window function.
This also performs well on MySQL. DENSE_RANK ranks every item within each user; I count the number of distinct items and pick the users who have the maximum number of distinct items.
With Main as (
    Select user_id,
           item,
           Dense_Rank() over (
               Partition by user_id
               Order by item
           ) as Dense_item
    From information
)
Select user_id
From Main
Where Dense_item = (Select Count(Distinct item) From information);
I need to count the number of duplicate emails in a MySQL database, but without counting the first one (considered the original). In this table, the query result should be the single value "3" (2 duplicates of x#q.com plus 1 duplicate of f#q.com).
TABLE
ID | Name | Email
1 | Mike | x#q.com
2 | Peter | p#q.com
3 | Mike | x#q.com
4 | Mike | x#q.com
5 | Frank | f#q.com
6 | Jim | f#q.com
My current query produces not one number, but multiple rows, one per email address regardless of how many duplicates of this email are in the table:
SELECT value, COUNT(lds1.leadid)
FROM leads_form_element lds1
LEFT JOIN leads lds2 ON lds1.leadID = lds2.leadID
WHERE lds2.typesID = '31' AND lds1.formElementID = '97'
GROUP BY lds1.value
HAVING COUNT(lds1.value) > 1
It's not one query so I'm not sure if it would work in your case, but you could do one query to select the total number of rows, a second query to select distinct email addresses, and subtract the two. This would give you the total number of duplicates...
select count(*) from someTable;
select count(distinct Email) from someTable;
In fact, I don't know if this will work, but you could try doing it all in one query:
select (count(*)-(count(distinct Email))) from someTable
Like I said, untested, but let me know if it works for you.
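For what it's worth, the arithmetic checks out against the sample table above: 6 rows minus 3 distinct emails gives 3, which matches the expected result.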
Try doing a group by in a sub query and then summing up. Something like:
select sum(tot)
from
(
    select email, count(1) - 1 as tot
    from `table`
    group by email
    having count(1) > 1
) t;
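Applied to the asker's original tables and filters (a sketch that keeps the join and column names from the question; the dup alias is made up), that would look roughly like:
select sum(cnt - 1) as duplicate_count
from (
    select lds1.value, count(*) as cnt
    from leads_form_element lds1
    left join leads lds2 on lds1.leadID = lds2.leadID
    where lds2.typesID = '31' and lds1.formElementID = '97'
    group by lds1.value
    having count(*) > 1
) dup;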
I have the following table:
id | user | urgency | problem | solved
The information in there has different users, but these users all have multiple entries:
1 | marco | 0 | MySQL problem | n
2 | marco | 0 | Email problem | n
3 | eddy | 0 | Email problem | n
4 | eddy | 1 | MTV doesn't work | n
5 | frank | 0 | out of coffee | y
What I want to do is this: normally I would check everybody's oldest problem first. I use this query to get the IDs of the oldest problems:
select min(id) from db group by user
This gives me a list of the oldest problem IDs. But I want people to be able to make a certain problem more urgent, so for each user I want the ID of the problem with the highest urgency.
Getting max(urgency) won't give me the ID of the problem; it will give me the max urgency.
To be clear: I want to get this as a result
row | id
0 | 1
1 | 4
The last entry should not be in the results since it's solved.
Select ...
From SomeTable As T
Join (
Select T1.User, Min( T1.Id ) As Id
From SomeTable As T1
Join (
Select T2.User, Max( T2.Urgency ) As Urgency
From SomeTable As T2
Where T2.Solved = 'n'
Group By T2.User
) As MaxUrgency
On MaxUrgency.User = T1.User
And MaxUrgency.Urgency = T1.Urgency
Where T1.Solved = 'n'
Group By T1.User
) As Z
On Z.User = T.User
And Z.Id = T.Id
There are lots of esoteric ways to do this, but here's one of the clearer ones.
First build a query to get your min id and max urgency:
SELECT
user,
MIN(id) AS min_id,
MAX(urgency) AS max_urgency
FROM
db
GROUP BY
user
Then incorporate that as a logical table into a larger query for your answers:
SELECT
user,
min_id,
max_urgency,
( SELECT MIN(id) FROM db
WHERE user = a.user
AND urgency = a.max_urgency
) AS max_urgency_min_id
FROM
(
SELECT
user,
MIN(id) AS min_id,
MAX(urgency) AS max_urgency
FROM
db
GROUP BY
user
) AS a
Given the obvious indexes, this should be pretty efficient.
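For reference, the "obvious indexes" here would presumably include a composite index that covers both the grouping and the correlated lookup; a sketch with a made-up name:
-- hypothetical index: serves GROUP BY user and the (user, urgency) lookup with MIN(id)
create index idx_db_user_urgency_id on db (`user`, urgency, id);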
The following will get you exactly one row back -- the most urgent, probably oldest problem in your table.
select id from my_table where id = (
select min(id) from my_table where urgency = (
select max(urgency) from my_table
)
)
I was about to suggest adding a create_date column to your table so that you could get the oldest problem first for those problems of the same urgency level. But I'm now assuming you're using the lowest ID for that purpose.
But now I see you wanted a list of them. For that, you'd sort the results by ID:
select id from my_table where urgency = (
select max(urgency) from my_table
) order by id;
[Edit: Left out the order by!]
I forget, honestly, how to get the row number. Someone on the interwebs suggests something like this, but no idea if it works:
select @rownum := @rownum + 1 as `row`, id from my_table where ...
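For reference, a minimal sketch of that user-variable idea, assuming the same my_table and the max-urgency filter from the earlier query (on MySQL 8.0+, ROW_NUMBER() OVER (ORDER BY id) would do the same job without a variable):
select @rownum := @rownum + 1 as `row`, x.id
from (
    select id
    from my_table
    where urgency = (select max(urgency) from my_table)
    order by id
) x
cross join (select @rownum := 0) init;  -- initialize the counter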