This is my first question on this site, so I'm sorry if I used any wrong keywords. I have been stuck on one problem for quite a few days.
The problem: I have a MySQL table named property to which I wanted to add a reference number that is a unique, non-incremental 6-digit number. So I altered the table to add a new column, uniqueIdentifier, with a default value of 1:
ALTER TABLE property ADD uniqueIdentifier INT DEFAULT (1) ;
I then wrote a script that first generates a number, checks the table to see whether it already exists, and if not, updates the row with the random number.
Here is the snippet I tried:
with cte as (
    select subIdentifier, id from (
        SELECT id, LPAD(FLOOR(RAND() * (999999 - 100000) + 100000), 6, 0) AS subIdentifier
        FROM property as p1
        WHERE "subIdentifier" NOT IN (SELECT uniqueIdentifier FROM property as p2)
    ) as innerTable group by subIdentifier
)
UPDATE property SET uniqueIdentifier = (
    select subIdentifier from cte as c where c.id = property.id
) where property.id != '';
This query fills in values for almost all the rows, but out of the 20,000 rows in the table only about 19,000 get a value and the rest are left null.
Here is the current output:
[current result picture]
If anyone can help, I would be extremely thankful.
Thanks
Instead of trying to randomly generate unique numbers that do not exist in the table, I would try the approach of randomly generating numbers using the ID column as a seed; as long as the ID number is unique, the new number will be unique as well. This is not technically fully "random" but it may be sufficient for your needs.
https://www.db-fiddle.com/f/iqMPDK8AmdvAoTbon1Yn6J/1
update Property set
    UniqueIdentifier = round(rand(id) * 1000000)
where UniqueIdentifier is null;
SELECT id, round(rand(id)*1000000) as UniqueIdentifier FROM test;
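Since ROUND(RAND(id) * 1000000) can still map two different ids to the same value after rounding, it may be worth a quick uniqueness check afterwards (a sketch, using the table and column names above):
-- List any generated identifiers that collide; an empty result means all are unique.
SELECT UniqueIdentifier, COUNT(*) AS cnt
FROM Property
GROUP BY UniqueIdentifier
HAVING cnt > 1;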
Related
I know, deleting duplicates from MySQL is often discussed here. But none of the solutions work for my case.
So, I have a DB with address data, roughly like this:
ID; Anrede; Vorname; Nachname; Strasse; Hausnummer; PLZ; Ort; Nummer_Art; Vorwahl; Rufnummer
ID is the primary key and unique.
And I have entries, for example, like this:
1;Herr;Michael;Müller;Testweg;1;55555;Testhausen;Mobile;012345;67890
2;Herr;Michael;Müller;Testweg;1;55555;Testhausen;Fixed;045678;877656
The different phone numbers are not the problem, because they are not relevant to me. So I just want to delete the duplicates in last name, street, and zip code; in that case, ID 1 or ID 2, and which of the two doesn't matter.
I actually tried it like this with DELETE:
DELETE db
FROM Import_Daten db,
Import_Daten dbl
WHERE db.id > dbl.id AND
db.Lastname = dbl.Lastname AND
db.Strasse = dbl.Strasse AND
db.PLZ = dbl.PLZ;
And inserting into a copy table:
INSERT INTO Import_Daten_1
SELECT MIN(db.id),
db.Anrede,
db.Firstname,
db.Lastname,
db.Branche,
db.Strasse,
db.Hausnummer,
db.Ortsteil,
db.Land,
db.PLZ,
db.Ort,
db.Kontaktart,
db.Vorwahl,
db.Durchwahl
FROM Import_Daten db,
Import_Daten dbl
WHERE db.lastname = dbl.lastname AND
db.Strasse = dbl.Strasse And
db.PLZ = dbl.PLZ;
The complete table contains over 10 million rows. The size is actually my problem. MySQL runs on a MAMP server on a MacBook with 1.5 GHz and 4 GB RAM, so it is not really fast. SQL statements run in phpMyAdmin; I have no other system options.
You can write a stored procedure that each time selects a different chunk of data (for example by row number between two values) and deletes only from that range. This way you will slowly, bit by bit, delete your duplicates; one possible shape for such a procedure is sketched below.
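A minimal sketch of that idea, assuming Import_Daten has an integer id primary key and matching duplicates on the same three columns as the DELETE above (the procedure name and chunk size are illustrative):
DELIMITER //
CREATE PROCEDURE delete_duplicates_chunked()
BEGIN
  DECLARE lo BIGINT DEFAULT 0;
  DECLARE max_id BIGINT;
  SELECT MAX(id) INTO max_id FROM Import_Daten;
  WHILE lo <= max_id DO
    -- Delete duplicates only within the current id range.
    DELETE db
    FROM Import_Daten db
    JOIN Import_Daten dbl
      ON db.Lastname = dbl.Lastname
     AND db.Strasse = dbl.Strasse
     AND db.PLZ = dbl.PLZ
     AND db.id > dbl.id
    WHERE db.id BETWEEN lo AND lo + 99999;  -- chunk of 100,000 ids
    SET lo = lo + 100000;
  END WHILE;
END//
DELIMITER ;
Each DELETE then only considers rows in a bounded id range, so the work is split into many smaller transactions instead of one huge self-join.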
A more effective two-table solution can look like the following.
We can store only the data we really need to delete, and only the fields that contain the duplicate information.
Let's assume we are looking for duplicate data in the Lastname, Branche, and Hausnummer fields.
Create a table to hold the duplicate data, dropping any earlier copy first:
DROP TABLE IF EXISTS data_to_delete;
Populate the table with the data we need to delete (I assume all fields have type VARCHAR(255)):
CREATE TABLE data_to_delete (
id BIGINT COMMENT 'this field will contain ID of row that we will not delete',
cnt INT,
Lastname VARCHAR(255),
Branche VARCHAR(255),
Hausnummer VARCHAR(255)
) AS SELECT
min(t1.id) AS id,
count(*) AS cnt,
t1.Lastname,
t1.Branche,
t1.Hausnummer
FROM Import_Daten AS t1
GROUP BY t1.Lastname, t1.Branche, t1.Hausnummer
HAVING count(*)>1 ;
Now let's delete the duplicate data, keeping only one record from each duplicate set:
DELETE Import_Daten
FROM Import_Daten LEFT JOIN data_to_delete
ON Import_Daten.Lastname=data_to_delete.Lastname
AND Import_Daten.Branche=data_to_delete.Branche
AND Import_Daten.Hausnummer = data_to_delete.Hausnummer
WHERE Import_Daten.id != data_to_delete.id;
DROP TABLE data_to_delete;
You can add a new column, e.g. uq, and make it UNIQUE.
ALTER TABLE Import_Daten
ADD COLUMN `uq` BINARY(16) NULL,
ADD UNIQUE INDEX `uq_UNIQUE` (`uq` ASC);
When this is done, you can execute an UPDATE query like this:
UPDATE IGNORE Import_Daten
SET
uq = UNHEX(
MD5(
CONCAT(
Import_Daten.Lastname,
Import_Daten.Street,
Import_Daten.Zipcode
)
)
)
WHERE
uq IS NULL;
Once all entries are updated, running the query again changes nothing: the remaining duplicates still have uq = NULL, and those rows can be removed.
The result then is:
0 row(s) affected, 1 warning(s): 1062 Duplicate entry...
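The removal step can then be as simple as this (a sketch; it assumes every row you want to keep received a non-NULL uq in the UPDATE above):
-- The duplicates are exactly the rows whose uq could not be set (error 1062).
DELETE FROM Import_Daten
WHERE uq IS NULL;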
For newly added rows, always create the uq hash, and consider using it as the primary key once all entries are unique.
I've got a SQL 2008 R2 table defined like this:
CREATE TABLE [dbo].[Search_Name](
[Id] [bigint] IDENTITY(1,1) NOT NULL,
[Name] [nvarchar](300) NULL,
CONSTRAINT [PK_Search_Name] PRIMARY KEY CLUSTERED ([Id] ASC))
Performance querying the Name field using CONTAINS and FREETEXT works well.
However, I'm trying to keep the values of my Name column unique. Searching for an existing entry in the Name column is unbelievably slow for a large number of names (usually batches of 1,000), even with an index on the Name field. Query plans indicate I'm using the index as expected.
To search for an existing value, my query looks like this:
SELECT TOP 1 Id, Name from Search_Name where Name = 'My Name Value'
I've tried duplicating the Name column to another column and searching on the new column, but the net effect was the same.
At this point, I'm thinking I must be mis-using this feature.
Should I just stop trying to prevent duplication? I'm using a linking table to join these search name values to the underlying data. It seems somehow 'dirty' to just store a whole bunch of duplicate values...
...or is there a faster way to take a list of 1,000 names and see which ones are already stored in the database?
The first change to make is to get the entire list to SQL Server at one time. Regardless of how you add the names to the existing table, doing it as a set operation will make a big difference in performance.
Passing the list as a table-valued parameter (TVP) is a clean way to handle it. Have a look here for an example. You can still use an OUTPUT clause to track which rows did or didn't make the cut, for example:
-- Some sample existing names.
declare @Search_Name as Table ( Id Int Identity, Name VarChar(32) );
insert into @Search_Name ( Name ) values ( 'Bob' ), ( 'Carol' ), ( 'Ted' ), ( 'Alice' );
select * from @Search_Name;
-- Some (prospective) new names.
declare @New_Names as Table ( Name VarChar(32) );
insert into @New_Names ( Name ) values ( 'Ralph' ), ( 'Alice' ), ( 'Ed' ), ( 'Trixie' );
select * from @New_Names;
-- Add the unique new names.
declare @Inserted as Table ( Id Int, Name VarChar(32) );
insert into @Search_Name ( Name )
  output inserted.Id, inserted.Name into @Inserted
  select New.Name
  from @New_Names as New left outer join
    @Search_Name as Old on Old.Name = New.Name
  where Old.Id is NULL;
-- Results.
select * from @Search_Name;
-- The names that were added and their id's.
select * from @Inserted;
-- The names that were not added.
select New.Name
from @New_Names as New left outer join
  @Inserted as I on I.Name = New.Name
where I.Id is NULL;
Alternatively, you could use a MERGE statement and OUTPUT the names that were added, those that weren't, or both.
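For instance, a minimal MERGE sketch reusing the table variables above ($action reports what happened to each row; note this captures only the inserted names, since rows that merely matched produce no OUTPUT unless you also add a WHEN MATCHED clause):
declare @Changes as Table ( Change NVarChar(10), Id Int, Name VarChar(32) );
merge @Search_Name as Old
using @New_Names as New
  on Old.Name = New.Name
when not matched by target then
  insert ( Name ) values ( New.Name )
output $action, inserted.Id, inserted.Name into @Changes;
-- 'INSERT' rows are the names that were added.
select * from @Changes;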
I'm currently working on a project with a MySQL DB of more than 8 million rows. I have been provided with a part of it to test some queries on. It has around 20 columns, of which 5 are of use to me, namely: First_Name, Last_Name, Address_Line1, Address_Line2, Address_Line3, RefundID.
I have to create a unique but random RefundID for each row; that is not the problem. The problem is to create the same RefundID for those rows whose First_Name, Last_Name, Address_Line1, Address_Line2, and Address_Line3 are the same.
This is my first real piece of work with MySQL at such a large row count. So far I have created these queries:
-- Creating temporary table --
CREATE TEMPORARY TABLE tempT (
    SELECT tt.First_Name, count(tt.Address_Line1) as a1,
           count(tt.Address_Line2) as a2, count(tt.Address_Line3) as a3, tt.RefundID
    FROM `tempTable` tt
    GROUP BY First_Name
    HAVING a1 >= 2 AND a2 >= 2 AND a3 >= 2
);
-- Updating Rows with First_Name from tempT --
UPDATE `tempTable` SET RefundID = FLOOR(RAND()*POW(10,11))
WHERE First_Name IN (SELECT First_Name FROM tempT WHERE First_Name is not NULL);
This update query keeps running and never ends; tempT has more than 30K rows. The query will then be run on the main DB with more than 800K rows.
Can someone help me out with this?
Regards
The solutions that seem obvious to me...
Don't use a random value; use a hash:
UPDATE yourtable
SET refundid = MD5(CONCAT('some static salt', First_Name,
    Last_Name, Address_Line1, Address_Line2, Address_Line3))
The problem is that if you are using an integer value for the refundId then there's a good chance of getting a collision (hint: CONV(SUBSTR(MD5(...),1,16),16,10) to get a SIGNED BIGINT, as sketched below). But you didn't say what the type of the field is, nor how strict the 'unique' requirement is. It does carry out the update in a single pass, though.
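A sketch of that hint, trimmed to 15 hex digits (60 bits) so the result always fits a SIGNED BIGINT; the salt and column list mirror the UPDATE above:
-- Derive a signed-BIGINT-safe integer from the first 15 hex digits of the MD5.
UPDATE yourtable
SET refundId = CONV(SUBSTR(MD5(CONCAT('some static salt', First_Name,
    Last_Name, Address_Line1, Address_Line2, Address_Line3)), 1, 15), 16, 10);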
An alternate approach, which creates a densely packed sequence of numbers, is to create a temporary table with the unique values from the original table and a random value. Order by the random value and set a monotonically increasing refundId, then use this as a lookup table or to update the original table:
CREATE TABLE temptable AS
SELECT First_Name, Last_Name, Address_Line1, Address_Line2, Address_Line3,
       RAND() AS randomvalue, CAST(NULL AS SIGNED) AS refundId
FROM yourtable
GROUP BY First_Name, Last_Name, Address_Line1, Address_Line2, Address_Line3;

SET @counter = -1;

UPDATE temptable
SET refundId = (@counter := @counter + 1)
ORDER BY randomvalue;
There are other solutions too - but the more efficient ones rely on having multiple copies of the data and/or using a procedural language.
Try using the following:
UPDATE `tempTable` x SET RefundID = FLOOR(RAND()*POW(10,11))
WHERE exists (SELECT 1 FROM tempT y WHERE First_Name is not NULL and x.First_Name=y.First_Name);
In MySQL, it is often more efficient to use join with update than to filter through the where clause using a subquery. The following might perform better:
UPDATE `tempTable` join
       (SELECT distinct First_Name
        FROM tempT
        WHERE First_Name is not NULL
       ) fn
    on `tempTable`.First_Name = fn.First_Name
SET RefundID = FLOOR(RAND()*POW(10,11));
A JobID goes as follows: ALC-YYYYMMDD-001. The first three characters are a company's initials; the last three are an incrementing number that resets daily and increments throughout the day as jobs are added, for a maximum of 999 jobs in a day. It is these last three digits that I am trying to work with.
I am trying to get a before-insert trigger to look up the day's maximum JobID and add one, so the trigger can derive the proper JobID. For the first job of the day, it will of course return null. So here is what I have so far.
Through the following I can get a result of '000'.
set @maxjobID =
    (select SUBSTRING(
        (Select MAX(
            SUBSTRING((Select JobID FROM jobs WHERE SUBSTRING(JobID,5,8) = date_format(curdate(), '%Y%m%d')), 4, 12)
        )),
        14, 3)
    );
select lpad((select ifnull(@maxjobID,0)),3,'0')
But I really need to add one to this, keeping the leading zeros, to increment the first and subsequent jobs of the day. My problem is that as soon as I try to add '1', I get a return of 'BLOB'. That is:
select lpad((select ifnull(@maxjobID,0)+1),3,'0')
returns 'BLOB'.
I need it to return '001' so I can concatenate that result with the company initials and the current date.
Try casting the VARCHAR back to INTEGER:
SELECT LPAD(COALESCE(CAST(@maxjobID AS SIGNED), 0) + 1, 3, '0');
If you're using the MyISAM storage engine, you can implement exactly this with AUTO_INCREMENT, without denormalising your data into a delimited string:
For MyISAM tables, you can specify AUTO_INCREMENT on a secondary column in a multiple-column index. In this case, the generated value for the AUTO_INCREMENT column is calculated as MAX(auto_increment_column) + 1 WHERE prefix=given-prefix. This is useful when you want to put data into ordered groups.
In your case:
Normalise your schema:
ALTER TABLE jobs
ADD initials CHAR(3) NOT NULL FIRST,
ADD date DATE NOT NULL AFTER initials,
ADD seq SMALLINT(3) UNSIGNED NOT NULL AFTER date;
Normalise your existing data:
UPDATE jobs SET
initials = SUBSTRING_INDEX(JobID, '-', 1),
date = STR_TO_DATE(SUBSTRING(JobID, 5, 8), '%Y%m%d'),
seq = SUBSTRING_INDEX(JobID, '-', -1)
;
Set up the AUTO_INCREMENT:
ALTER TABLE jobs
DROP PRIMARY KEY,
DROP JobID,
MODIFY seq SMALLINT(3) UNSIGNED NOT NULL AUTO_INCREMENT,
ADD PRIMARY KEY(initials, date, seq)
;
You can then recreate your JobID as required on SELECT (or even create a view from such a query):
SELECT CONCAT_WS(
'-',
initials,
DATE_FORMAT(date, '%Y%m%d'),
LPAD(seq, 3, '0')
) AS JobID,
-- etc.
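For instance, a view along these lines would expose the derived JobID (the view name is illustrative):
CREATE VIEW jobs_with_jobid AS
SELECT CONCAT_WS(
    '-',
    initials,
    DATE_FORMAT(date, '%Y%m%d'),
    LPAD(seq, 3, '0')
) AS JobID,
jobs.*
FROM jobs;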
If you're using InnoDB, whilst you can't generate sequence numbers in this fashion I'd still recommend normalising your data as above.
So, I found a query that works (thus far).
Declare maxjobID VARCHAR(16);
Declare jobincrement SMALLINT;

SET maxjobID =
    (Select MAX(
        ifnull(SUBSTRING(
            (Select JobID FROM jobs WHERE SUBSTRING(JobID,5,8) = date_format(curdate(), '%Y%m%d')),
            5, 12), 0)
    ));

if maxjobID = 0 then
    set jobincrement = 1;
else
    set jobincrement = (select substring(maxjobID,10,3)) + 1;
end if;

Set NEW.JobID = concat(
    New.AssignedCompany, '-',
    date_format(curdate(), '%Y%m%d'), '-',
    (select lpad(jobincrement,3,'0'))
);
Thanks for the responses! Especially eggyal for pointing out the auto_increment capabilities in MyISAM.
I have the following SQL statement, which was working perfectly until I moved it to another server. The middle query (enclosed between the comment lines) does not seem to work; I am getting an error saying that 'AUTO' is an incorrect integer value. If I remove it altogether, it says that I have an incorrect number of fields. I am trying to copy data from one table to another while letting the destination table auto-increment its ID number.
SET sql_safe_updates=0;
START TRANSACTION;
DELETE FROM shares
WHERE asset_id = '$asset_ID';
/*************************************************************/
INSERT INTO shares
SELECT 'AUTO', asset_ID, member_ID, percent_owner, is_approved
FROM pending_share_changes
WHERE asset_ID = '$asset_ID';
/*************************************************************/
DELETE FROM pending_share_changes
WHERE asset_ID = '$asset_ID';
DELETE FROM shares
WHERE asset_ID = '$asset_ID' AND percent_owner = '0';
COMMIT;
Based on this page of the MySQL docs, you have to do:
INSERT INTO shares
(column_name1, column_name2, column_name3, column_name4) -- changed!
SELECT asset_ID, member_ID, percent_owner, is_approved
FROM pending_share_changes
WHERE asset_ID = '$asset_ID';
The difference is that the column names of the "receiving" table are explicitly listed after the name of the receiving table.
The docs say:
AUTO_INCREMENT columns work as usual.
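An alternative sketch, if you'd rather keep the column counts aligned than list the target columns: inserting NULL into an AUTO_INCREMENT column also makes MySQL generate the next value (this assumes the auto-increment ID is the first column of shares):
INSERT INTO shares
SELECT NULL, asset_ID, member_ID, percent_owner, is_approved
FROM pending_share_changes
WHERE asset_ID = '$asset_ID';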