I have a huge list of roads and the place each road belongs to, like below:
StreetName,PlaceName,xcoord,ycoord
Ovayok Road,Cambridge Bay,-104.99656,69.12876
Ovayok Road,Cambridge Bay,-104.99693,69.12865
Ovayok Road,Cambridge Bay,-104.99794,69.12842
Ovayok Road,Cambridge Bay,-104.99823,69.12835
Hikok Drive,Kugluktuk,-115.09433,67.82674
Hikok Drive,Kugluktuk,-115.09570,67.82686
Hikok Drive,Kugluktuk,-115.09593,67.82689
Hikok Drive,Kugluktuk,-115.09630,67.82695
Sivulliq Avenue,Rankin Inlet,-92.08252,62.81265
Sivulliq Avenue,Rankin Inlet,-92.08276,62.81265
Sivulliq Avenue,Rankin Inlet,-92.08461,62.81262
How can I delete rows that have duplicate data in the first and second columns? All the numbers (coordinates) are different.
If you don't have any column by which you can uniquely identify your data (such as an ID column),
then fetch one record per (StreetName, PlaceName) pair into a copy of the table, and rename this temp table to the original table name.
Note that SELECT DISTINCT * would not help here: the coordinates differ on every row, so no two rows are fully identical. Grouping on the first two columns keeps one set of coordinates per street (here, the smallest values).
The query for that is below -
CREATE TABLE street_details_temp LIKE street_details;
INSERT INTO street_details_temp
SELECT StreetName, PlaceName, MIN(xcoord), MIN(ycoord)
FROM street_details
GROUP BY StreetName, PlaceName;
DROP TABLE street_details;
RENAME TABLE street_details_temp TO street_details;
I need to validate the last inserted row in a table every time, using constraints given by the user on a particular column of that table (e.g. age > 50).
My idea was to insert each row into both a temporary table and my normal table. Then I'd run a query of the form SELECT * FROM tb_name WHERE user_constraint [ e.g. SELECT * FROM student WHERE age > 50 ] against the temporary table. If the result is not empty, the last row satisfies the user constraint; otherwise it fails. Afterwards I'd delete the last added row from the temporary table and repeat the process for the next row.
I don't know whether this approach is good or bad, or whether there is a more efficient way to do this.
Edit:
The table has 5 columns
stud_id,
subject_id,
age,
dob,
marks
Here stud_id and subject_id act as foreign keys to two tables tbl_student and tbl_subject
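A simpler alternative to the temporary-table approach is to check only the newest row directly. A minimal sketch, assuming the table is named tbl_marks and has an AUTO_INCREMENT primary key mark_id (both names are hypothetical; the question doesn't give them):

```sql
-- Count how many rows match both "is the last inserted row" and the
-- user-supplied constraint; 1 means the row passes, 0 means it fails.
SELECT COUNT(*) AS passes
FROM tbl_marks
WHERE mark_id = LAST_INSERT_ID()  -- id of the most recent insert on this connection
  AND age > 50;                   -- the user-supplied constraint goes here
```

This avoids maintaining a second copy of the data and deleting rows after each check.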
I have a situation like this:
I have 3 tables, for example:
table phones
table computers
table printers
Every table has a column named "Address", and each table contains the same value "06-00-00-00-00-00" (a duplicate record).
Now, I was wondering whether it's possible to check all the records across all of the tables and delete the duplicate records from the tables "computers" and "printers", but leave the record in the table "phones".
In other words: delete all the duplicate records from every table except one chosen table (in this case, the table "phones").
Thanks a lot.
For deleting the records (note that DELETE TABLE ... WHERE ADDRESS = (SELECT ...) is not valid syntax, and = fails if the subquery returns more than one row; use DELETE FROM with IN):
DELETE FROM computers
WHERE Address IN (SELECT Address FROM phones);
DELETE FROM printers
WHERE Address IN (SELECT Address FROM phones);
This is a correction of an earlier question, now that I properly understand what I need to do.
I have a table with films and the dates they were shown on along with other info in other columns, in MySQL.
So relevant columns are...
FilmID
FilmName
DateShown
The dates are stored as Unix timestamps.
I currently have multiple instances of films that were shown on different dates yet all other information is the same.
I need to copy the dates of the duplicate films into a new table matching them up to the film ID. Then I need to remove the duplicate film rows from the original table.
So I have created a new table, Film_Dates with the columns
FilmDateID
FilmID
Date
Can anyone help with the actual SQL to do this?
Thank you.
to start with:
insert into film_dates (filmid, `date`)
select filmid, dateshown
from films
and that should populate your new table.
alter ignore table films
add unique (filmid)
This will enforce uniqueness for filmid, and drop all duplicates, keeping just the one row. If this fails with a 'duplicate entry' error, you will need to run this command, and then try the alter again.
set session old_alter_table=1
It seems MySQL is moving away from being able to do it this way (ALTER IGNORE was removed in MySQL 5.7).
Lastly, you need to get rid of your dateshown column.
alter table films
drop column dateshown
Please make sure you have a backup before you attempt any of this. Always best to be safe.
Since filmid is not duplicated (only filmname is), there are some extra steps.
First, create the filmdates table:
create table filmdates as
select filmname, dateshown
from films;
Then add a filmid column:
alter table filmdates add column filmid integer;
And a unique index on (filmname, dateshown)
alter ignore table filmdates add unique(filmname, dateshown);
Then we add a unique index on films(filmname), since it's the only value that really gets duplicated.
alter ignore table films add unique(filmname);
Now that we're set up, we populate the filmid column in the new table with matching values from the old.
update films f
inner join filmdates fd
on f.filmname = fd.filmname
set fd.filmid = f.filmid;
Now we just need to cleanup, and get rid of the redundant columns (films.dateshown and filmdates.filmname).
alter table filmdates drop column filmname;
alter table films drop column dateshown;
I have this table email_addr_bean_rel with these fields: id, email_address_id, bean_id, bean_module, primary_address, reply_to_address, date_created, date_modified, deleted.
Of these, only the values in the bean_id column are duplicated (each appears twice).
I have tried this, but it doesn't work
CREATE TABLE email_addr_bean_rel_V AS SELECT DISTINCT * FROM email_addr_bean_rel;
DROP TABLE email_addr_bean_rel;
RENAME TABLE email_addr_bean_rel_V TO email_addr_bean_rel;
It still contains same number of records.
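SELECT DISTINCT * compares every column, and since id (and likely the date columns) differs between the two copies of each bean_id, no row is ever considered a duplicate, so nothing is removed. A sketch that instead deduplicates on bean_id alone, keeping the row with the lowest id per bean_id (assuming that is the copy you want to keep):

```sql
-- Delete any row that has another row with the same bean_id but a smaller id.
DELETE e1
FROM email_addr_bean_rel e1
JOIN email_addr_bean_rel e2
  ON e1.bean_id = e2.bean_id
 AND e1.id > e2.id;
```

As always with a destructive statement, back up the table before running it.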
Part 1:
In MySQL, suppose I have Table A, which has more columns than Table B. I want to transfer values from Table B to Table A where the id in A matches the id in B, updating the values in Table A with the values from Table B.
Part 2:
Table B is a superset of Table A, so how does one insert ids and their corresponding values from Table B into Table A while also updating the ids that are already in Table A?
Like FreshPrinceOfSO already mentioned in the comments, you won't get code for free here.
But here are at least the steps. There are two possibilities: either you split the work into two statements (one UPDATE, then one INSERT), or you work with
INSERT ... ON DUPLICATE KEY UPDATE ...
You would have to have a unique index on the table for this to work.
For the first solution, you'd inner-join the tables for the UPDATE; that's trivial. Then for the INSERT you'd use a SELECT with a LEFT JOIN and an IS NULL check to find the entries that are not already in the table.
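The two approaches described above might be sketched as follows, assuming both tables have an id primary key and a single data column val (hypothetical names, since the real columns aren't given):

```sql
-- Option 1: single-statement upsert (requires a unique index on a.id).
INSERT INTO a (id, val)
SELECT id, val FROM b
ON DUPLICATE KEY UPDATE val = VALUES(val);

-- Option 2: two statements.
-- First update the rows that already exist in A:
UPDATE a
INNER JOIN b ON a.id = b.id
SET a.val = b.val;

-- Then insert the rows from B that A doesn't have yet:
INSERT INTO a (id, val)
SELECT b.id, b.val
FROM b
LEFT JOIN a ON a.id = b.id
WHERE a.id IS NULL;
```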
Good luck...