I want a MySQL field to be able to reference the same field in another row and return that field's value when loaded into PHP.
For instance, if row with id = 1 has value "bar" in column foo, the row with id = 3 should have value "[1]" (or something similar; the 1 is pointing to the row with id = 1) in column foo and MySQL should replace that value with "bar" when returning the array.
I'm not talking about an UPDATE query. I am trying to build a SELECT query that makes the appropriate replacement. I want [1] to remain the stored value of that row, so the result reflects whatever foo in the referenced row happens to be.
You could try something like this, but without knowing more, your mileage may vary.
Also, simply use 1, not [1], as the reference value:
SELECT a.id, b.foo
FROM `foo_table` a
INNER JOIN `foo_table` b
ON CAST(a.foo AS UNSIGNED) = b.id
WHERE a.id = 3 -- or whatever
Update
I'd be more inclined to make your table design more specific. For example, try this structure
CREATE TABLE `foo_table` (
id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
foo VARCHAR(255),
parent_id INT NULL,
FOREIGN KEY (parent_id) REFERENCES `foo_table` (id)
ON DELETE CASCADE
) ENGINE=INNODB;
Then your query can be
SELECT a.id, COALESCE(b.foo, a.foo) AS foo
FROM foo_table a
LEFT JOIN foo_table b
ON a.parent_id IS NOT NULL
AND a.parent_id = b.id;
Related
I'm a bit new to SQL and having some trouble coming up with this query. I have two tables, a parent x table and a child y table which references the parent table via an x_id foreign key:
x table:

x_id | col_to_update
-----+--------------
   1 | 0
   2 | 0

y table:

x_id | testing_enum
-----+-------------
   1 | 1
   2 | 0
I'd like to add the new column col_to_update and set its default value based on whether there exists a row in y with the same x_id and a value set for testing_enum. For example, for x_id = 1, since there is a row in y for that x_id with a value set for testing_enum, I want to default col_to_update for that x_id to 1. Hopefully that makes sense. I think this involves a JOIN clause, but I'm unsure of how everything is supposed to come together.
One-time update:
UPDATE x
SET col_to_update = EXISTS ( SELECT NULL
                             FROM y
                             WHERE y.x_id = x.x_id
                               AND y.testing_enum )
To keep the value current on the fly:
CREATE TRIGGER tr_ai_y
AFTER INSERT ON y
FOR EACH ROW
  UPDATE x
  SET col_to_update = EXISTS ( SELECT NULL
                               FROM y
                               WHERE y.x_id = NEW.x_id
                                 AND y.testing_enum )
  WHERE x.x_id = NEW.x_id;
and create the same triggers AFTER DELETE (using OLD.x_id) and AFTER UPDATE (using both, i.e. two UPDATEs).
After creating the triggers, run the one-time UPDATE once to bring the current values up to date.
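For example, the AFTER DELETE counterpart could be sketched like this (the trigger name tr_ad_y is just a placeholder; the AFTER UPDATE trigger would run the same UPDATE twice, once with OLD.x_id and once with NEW.x_id):
CREATE TRIGGER tr_ad_y
AFTER DELETE ON y
FOR EACH ROW
  UPDATE x
  SET col_to_update = EXISTS ( SELECT NULL
                               FROM y
                               WHERE y.x_id = OLD.x_id
                                 AND y.testing_enum )
  WHERE x.x_id = OLD.x_id;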
I know deleting duplicates from MySQL is often discussed here, but none of the solutions works for my case.
So, I have a DB with address data roughly like this:
ID; Anrede; Vorname; Nachname; Strasse; Hausnummer; PLZ; Ort; Nummer_Art; Vorwahl; Rufnummer
ID is the primary key and unique.
And I have entries like this, for example:
1;Herr;Michael;Müller;Testweg;1;55555;Testhausen;Mobile;012345;67890
2;Herr;Michael;Müller;Testweg;1;55555;Testhausen;Fixed;045678;877656
The different phone numbers are not a problem because they are not relevant for me. I just want to delete the duplicates in last name, street, and zip code; in that case ID 1 or ID 2, and which of the two doesn't matter.
I actually tried it like this with DELETE:
DELETE db
FROM Import_Daten db,
Import_Daten dbl
WHERE db.id > dbl.id AND
db.Lastname = dbl.Lastname AND
db.Strasse = dbl.Strasse AND
db.PLZ = dbl.PLZ;
And insert into a copy table:
INSERT INTO Import_Daten_1
SELECT MIN(db.id),
db.Anrede,
db.Firstname,
db.Lastname,
db.Branche,
db.Strasse,
db.Hausnummer,
db.Ortsteil,
db.Land,
db.PLZ,
db.Ort,
db.Kontaktart,
db.Vorwahl,
db.Durchwahl
FROM Import_Daten db,
Import_Daten dbl
WHERE db.lastname = dbl.lastname AND
db.Strasse = dbl.Strasse And
db.PLZ = dbl.PLZ;
The complete table contains over 10 million rows. The size is actually my problem. MySQL runs on a MAMP server on a MacBook with 1.5 GHz and 4 GB RAM, so it's not really fast. The SQL statements are run in phpMyAdmin. Unfortunately, I have no other system available.
You can write a stored procedure that selects a different chunk of data each time (for example by row number between two values) and deletes only within that range. This way you will slowly, bit by bit, delete your duplicates.
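A minimal sketch of that idea, assuming the duplicate columns from the question (Lastname, Strasse, PLZ) and an arbitrary chunk size of 50,000 ids per pass; the procedure name is a placeholder, and in phpMyAdmin you may need to set the delimiter in the field below the SQL box rather than via the DELIMITER statement:
DELIMITER $$
CREATE PROCEDURE delete_duplicates_in_chunks()
BEGIN
  DECLARE v_lo   INT DEFAULT 0;
  DECLARE v_max  INT;
  DECLARE v_step INT DEFAULT 50000;  -- assumed chunk size, adjust to your hardware

  SELECT MAX(id) INTO v_max FROM Import_Daten;

  WHILE v_lo <= v_max DO
    -- delete only the duplicates whose id falls into the current chunk,
    -- always keeping the row with the lowest id of each duplicate group
    DELETE db
    FROM Import_Daten db
    JOIN Import_Daten dbl
      ON  db.Lastname = dbl.Lastname
      AND db.Strasse  = dbl.Strasse
      AND db.PLZ      = dbl.PLZ
      AND db.id > dbl.id
    WHERE db.id >= v_lo AND db.id < v_lo + v_step;

    SET v_lo = v_lo + v_step;
  END WHILE;
END$$
DELIMITER ;

CALL delete_duplicates_in_chunks();
An index on (Lastname, Strasse, PLZ) would help a lot here; without it every chunk still scans the whole table looking for duplicate partners.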
A more effective two-table solution can look like the following.
We can store only the data we really need to delete, and only the fields that contain the duplicate information.
Let's assume we are looking for duplicate data in the Lastname, Branche, and Hausnummer fields.
Create a table to hold the duplicate data (dropping any previous copy first):
DROP TABLE IF EXISTS data_to_delete;
Then populate it with the data we need to delete (I assume all fields have type VARCHAR(255)):
CREATE TABLE data_to_delete (
id BIGINT COMMENT 'this field will contain ID of row that we will not delete',
cnt INT,
Lastname VARCHAR(255),
Branche VARCHAR(255),
Hausnummer VARCHAR(255)
) AS SELECT
min(t1.id) AS id,
count(*) AS cnt,
t1.Lastname,
t1.Branche,
t1.Hausnummer
FROM Import_Daten AS t1
GROUP BY t1.Lastname, t1.Branche, t1.Hausnummer
HAVING count(*)>1 ;
Now let's delete the duplicate data, leaving only one record from each set of duplicates:
DELETE Import_Daten
FROM Import_Daten LEFT JOIN data_to_delete
ON Import_Daten.Lastname=data_to_delete.Lastname
AND Import_Daten.Branche=data_to_delete.Branche
AND Import_Daten.Hausnummer = data_to_delete.Hausnummer
WHERE Import_Daten.id != data_to_delete.id;
DROP TABLE data_to_delete;
You can add a new column, e.g. uq, and make it UNIQUE.
ALTER TABLE Import_Daten
ADD COLUMN `uq` BINARY(16) NULL,
ADD UNIQUE INDEX `uq_UNIQUE` (`uq` ASC);
When this is done, you can execute an UPDATE query like this:
UPDATE IGNORE Import_Daten
SET
uq = UNHEX(
MD5(
CONCAT(
Import_Daten.Lastname,
Import_Daten.Strasse,
Import_Daten.PLZ
)
)
)
WHERE
uq IS NULL;
Once all entries are updated and the query is executed again, any remaining duplicates will still have uq = NULL and can be removed.
The result then is:
0 row(s) affected, 1 warning(s): 1062 Duplicate entry...
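At that point the leftover duplicates are exactly the rows whose uq is still NULL (assuming none of the three concatenated columns is NULL), so as a sketch they can be removed with:
DELETE FROM Import_Daten
WHERE uq IS NULL;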
For newly added rows, always create the uq hash, and consider using it as the primary key once all entries are unique.
Consider the following schema:
create schema testSchema;
use testSchema;
create table A (
id int not null auto_increment,
name varchar (50),
primary key(id));
create table B (
id int not null auto_increment,
name varchar (50),
primary key(id));
create table C (
id int not null auto_increment,
name varchar (50),
primary key(id));
create table main (
id int not null auto_increment,
typeId int,
type varchar(50),
tableMappingsId int,
primary key (id)
);
create table tableMappings (
id int not null auto_increment,
tableName varchar(50),
primary key (id)
);
insert into A (name) values
('usa'),
('uk'),
('uno');
insert into B (name) values
('earth'),
('mars'),
('jupiter');
insert into C (name) values
('1211'),
('12543'),
('345');
insert into main (typeId, type, tableMappingsId) values
(1,'tableA',1),
(2,'tableB',2),
(3,'tableC',3);
insert into tableMappings (tableName) values ('A'),('B'),('C');
Description:
I have three tables A, B, and C which each have id and name.
In the main table, type tells which table (A, B, or C) we have to read the name property from, and typeId gives the id within that table. To support this I have created a tableMappings table which holds the table names, and in the main table a column tableMappingsId which points to tableMappings.
Is this the correct approach? And how can I write a query like the following in MySQL:
select the name property from the table that this row points to, as mapped by tableMappings?
About your design
If we think in an object-oriented manner, you have a base class made up of the attributes recorded in the main table, and derived classes A, B, and C with additional attributes.
You want to avoid having many attributes in a single table with NULLs depending on the record type. This is a good approach, but your method of implementing it can be improved.
Answer to your question
You want to select from a table (A, B, or C) depending on the value of a field. As far as I know this can't be done without "preparing" the query.
"Preparing" the query can be done in several ways:
using prepared statements ("pure-SQL" method) : https://dev.mysql.com/doc/refman/5.7/en/sql-syntax-prepared-statements.html
in a stored procedure or function, for example: selecting the type, then testing it and selecting from the right table (see the sketch after this list)
or building the query in two steps via a scripting language
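For example, the stored-procedure route (the second option) could be sketched like this; the procedure name get_name_by_main_id is my own placeholder, and it looks the row up by typeId as defined in the question:
DELIMITER $$
CREATE PROCEDURE get_name_by_main_id(IN p_id INT)
BEGIN
  DECLARE v_type   VARCHAR(50);
  DECLARE v_typeId INT;

  -- read the discriminator, then select from the matching table
  SELECT type, typeId INTO v_type, v_typeId
  FROM main
  WHERE id = p_id;

  IF v_type = 'tableA' THEN
    SELECT name FROM A WHERE id = v_typeId;
  ELSEIF v_type = 'tableB' THEN
    SELECT name FROM B WHERE id = v_typeId;
  ELSEIF v_type = 'tableC' THEN
    SELECT name FROM C WHERE id = v_typeId;
  END IF;
END$$
DELIMITER ;

CALL get_name_by_main_id(2);  -- should return 'mars'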
Example with a prepared statement:
SET #idToSelect = 2;
SELECT
CONCAT('SELECT name FROM ', tableMappings.tableName, ' WHERE Id = ', main.typeId)
INTO #statement
FROM main
INNER JOIN tableMappings ON tableMappings.tableName = REPLACE(main.type, 'table', '')
WHERE main.id = #idToSelect;
PREPARE stmt FROM #statement;
EXECUTE stmt;
Note: we have to translate 'tableA', 'tableB', ... in main.type to match 'A', 'B', ... in tableMappings.tableName, which is not ideal.
But this is neither very convenient nor efficient.
Other approaches and comments
Selecting from multiple tables : not necessarily a big deal
Basically, you want to avoid SELECTing from tables you don't need to read from. But keep in mind that if your schema is correctly indexed, this is not necessarily a big deal; MySQL runs a query optimizer. You could simply LEFT JOIN all of the tables and pick the value from the right table depending on the type value:
SET #idToSelect = 2;
SELECT
IFNULL(A.name, IFNULL(B.name, C.name)) AS name
FROM main
LEFT JOIN A ON main.type = 'tableA' AND A.id = main.typeId
LEFT JOIN B ON main.type = 'tableB' AND B.id = main.typeId
LEFT JOIN C ON main.type = 'tableC' AND C.id = main.typeId
WHERE main.id = #idToSelect;
Note that I didn't use the tableMappings table.
The tableMappings trick is unnecessary
You can avoid using this kind of mapping by using the same id in the "child" tables as in the main table. This is how many ORMs implement inheritance. I will give an example later in my answer.
A slightly unrealistic example
In your question, you want to select the "name" property regardless of the type of the record. But I bet that if you really have different types of records, each type holds a different set of properties. If "name" is a property common to all the types, it should be in the main table. I assume you provided "name" as a simplified example.
But I think that in a real case you'll rarely have to select a field regardless of the type of the object.
One other thing: in the example data, you provide records for the A, B, and C tables that do not match the main records.
Final proposition
drop schema testSchema;
create schema testSchema;
use testSchema;
create table main (
id int not null auto_increment,
typeId int,
common_data VARCHAR(50),
primary key (id)
);
create table A (
id int not null,
specific_dataA varchar (50),
primary key(id),
foreign key FK_A (id) references main (id)
);
create table B (
id int not null,
specific_dataB varchar (50),
primary key(id),
foreign key FK_B (id) references main (id)
);
create table C (
id int not null,
specific_dataC varchar (50),
primary key(id),
foreign key FK_C (id) references main (id)
);
insert into main (typeId, common_data) values
(1, 'ABC'),
(2, 'DEF'),
(3, 'GHI');
insert into A (id, specific_dataA) values
(1, 'usa');
insert into B (id, specific_dataB) values
(2, 'mars');
insert into C (id, specific_dataC) values
(3, '345');
Some comments :
typeId in the main table is optional, but depending on the queries you have to run it can be useful for retrieving the type of an object. One field is enough; you don't need both the typeId integer and the type varchar.
The ids in the A, B, and C tables are not auto_increment because they have to match the main ids.
This design is pointless if there are no common attributes, so I put a common data field in the main table.
I materialized the relations by defining foreign keys.
Query examples :
I want the common data field for id 1 :
SELECT common_data FROM main WHERE id = 1;
I know that id 2 is from type B and I want the specific data B :
SELECT specific_dataB FROM B WHERE id = 2;
I know that id 3 is from type C and I want the common data and the specific data C :
SELECT common_data, specific_dataC FROM main INNER JOIN C ON C.id = main.id WHERE main.id = 3;
(best match to your case) I don't know the type of object 3 but I want the specific data depending on its type :
SELECT IFNULL(
A.specific_dataA,
IFNULL(
B.specific_dataB,
C.specific_dataC
)
)
FROM main
LEFT JOIN A on A.id = main.id
LEFT JOIN B on B.id = main.id
LEFT JOIN C on C.id = main.id
WHERE main.id = 3
I have two tables with the same schema:
create table test1 (
a INT NOT NULL ,
b INT NOT NULL ,
c INT,
PRIMARY KEY (a,b)
);
create table test2 (
a INT NOT NULL ,
b INT NOT NULL ,
c INT,
PRIMARY KEY (a,b)
);
I want to insert values from the test2 table into test1, but if a row with the same primary key already exists, update it. I know that in MySQL you can do a similar thing with ON DUPLICATE KEY UPDATE, like:
INSERT INTO test1 VALUES (1,2,3)
ON DUPLICATE KEY UPDATE c=3;
But I don't know how to do the above query with a SELECT from another table. What I am looking for is a query of the form:
INSERT INTO test2
SELECT a, b, c FROM test1
ON DUPLICATE KEY UPDATE c = c + t.c
(Select a, b, c from tests1)t;
This query is obviously invalid. I would appreciate it if somebody could make a valid query out of it.
This should work for you:
INSERT INTO test2
SELECT a, b, c as c1 FROM test1
ON DUPLICATE KEY UPDATE c = c + c1
You could do it with this SQL:
INSERT INTO test1 (a, b, c)
SELECT t.a as a, t.b as b, t.c AS c FROM test2 AS t
ON DUPLICATE KEY UPDATE c=t.c;
It depends. I use the PK auto-increment feature as the row's unique identifier. I split a big table, so the small one has changes I can't miss. I read several opinions; even merging and remaking the PK did not work for me. It seems that if you don't mention the full column names you will get errors. I did the following and it worked; I hope somebody has a better and smaller solution:
INSERT
INTO t_traffic_all
SELECT
t_traffic.file_id,
t_traffic.cust,
t_traffic.supp,
... (a lot...)
t_traffic.i_traffic_type,
t_traffic.date_posted,
t_traffic.date_update
FROM t_traffic
ON DUPLICATE KEY UPDATE
t_traffic_all.file_id = t_traffic.file_id,
t_traffic_all.cust = t_traffic.cust,
t_traffic_all.supp = t_traffic.supp,
t_traffic_all.imprt = t_traffic.imprt,
...
t_traffic_all.i_traffic_type = t_traffic.i_traffic_type,
t_traffic_all.date_posted = t_traffic.date_posted,
t_traffic_all.date_update = t_traffic.date_update

Affected rows: 11128
Time: 29.085s
The total rows processed were 18608, so it inserted 11128 and updated 7480 (it should have been 7554). These numbers are not very precise, but that was the result.
I have an older database for which (for some really questionable and obscure reasons I do not want to go into here) I want to randomize or shuffle the primary keys.
Right now I have auto-increment fields in the MySQL database tables.
I do not have many relations, and those that exist are not defined as foreign keys. The relationships do not need to be preserved.
All I'm looking for is to take the current values of the primary keys and replace them with a random value drawn from that same set, like:
ID := new(ID)
where the new() function returns a value from the set of all old IDs with a 1:1 mapping. E.g.
2 := 3
3 := 2
But not
2 := 3
3 := 3
Is there a way to change the data in the database with (ideally) a single SQL query per table?
Edit: I do not have really strict requirements. Assume I have exclusive access to the database if that helps, including changing constraints on the primary key back and forth (e.g. alter the table, do the operation, alter the table back to the previous schema). It is also possible to add another column for the new (or old) PK value.
Just a sketch of the procedure. Create two temporary tables:
CREATE TABLE temp_old
( ai INT NOT NULL AUTO_INCREMENT
, id INT NOT NULL
, PRIMARY KEY (ai)
, INDEX old_idx (id, ai)
) ENGINE = InnoDB ;
CREATE TABLE temp_new
( ai INT NOT NULL AUTO_INCREMENT
, id INT NOT NULL
, PRIMARY KEY (ai)
, INDEX new_idx (id, ai)
) ENGINE = InnoDB ;
Copy the id values in different orders to the two tables (randomly in the 2nd table):
INSERT INTO temp_old
(id)
SELECT id
FROM tableX
ORDER BY id ;
INSERT INTO temp_new
(id)
SELECT id
FROM tableX
ORDER BY RAND() ;
Then we drop the primary key:
ALTER TABLE tableX
DROP PRIMARY KEY ;
so that we can run the actual UPDATE statement:
UPDATE tableX AS t
JOIN temp_old AS o
ON o.id = t.id
JOIN temp_new AS n
ON n.ai = o.ai
SET t.id = n.id ;
Then recreate the primary key and drop the temp tables:
ALTER TABLE tableX
ADD PRIMARY KEY (id) ;
DROP TABLE temp_old ;
DROP TABLE temp_new ;
Tested in SQL-Fiddle
Here's a technique that builds a list of your ids in table order along with a sequential number starting at 1, and another list of your ids in random order, also numbered sequentially from 1. It then updates the ids by matching on that sequential number.
There are issues with the performance of ORDER BY RAND() (and with its randomness).
If your keys are already sequential starting from 1, you can simplify this (see the sketch after the query below).
Update
Test as t
Inner Join (
Select
@rownum2 := @rownum2 + 1 as rank,
t2.id
From
Test t2,
(Select @rownum2 := 0) a1
) as o on t.id = o.id
Inner Join (
Select
@rownum := @rownum + 1 as rank,
t3.id
From
(Select id from Test order by Rand()) t3,
(Select @rownum := 0) a2
) as n on o.rank = n.rank
Set
t.id = n.id
http://sqlfiddle.com/#!2/3f354/1
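If the ids already form the unbroken sequence 1..N, a simplified sketch can skip the first derived table and join the random ranking straight onto the id (same caveat as in the other answer: the primary key may have to be dropped first if the swap produces transient duplicates):
Update
Test as t
Inner Join (
Select
@rn := @rn + 1 as rank,
t3.id
From
(Select id from Test order by Rand()) t3,
(Select @rn := 0) a1
) as n on t.id = n.rank
Set
t.id = n.id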
You could create a stored procedure that builds a temporary table containing all of the ids, then loops over each record, replacing its id with an id taken from the temp table and removing that id from the temp table. I don't believe there is a way to do what you are talking about in a single query, though. A rough sketch follows.
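This is only a sketch of that idea, with all names as placeholders; to avoid clobbering a row whose old id equals another row's freshly assigned id, it stages the new values in a helper column (which the question explicitly allows) and applies them in one final statement:
ALTER TABLE tableX ADD COLUMN new_id INT NULL;

DELIMITER $$
CREATE PROCEDURE shuffle_ids()
BEGIN
  DECLARE v_old INT;
  DECLARE v_new INT;
  DECLARE done INT DEFAULT 0;
  DECLARE cur CURSOR FOR SELECT id FROM tableX ORDER BY id;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

  -- pool of ids still available to hand out
  DROP TEMPORARY TABLE IF EXISTS id_pool;
  CREATE TEMPORARY TABLE id_pool (id INT PRIMARY KEY)
    AS SELECT id FROM tableX;

  OPEN cur;
  read_loop: LOOP
    FETCH cur INTO v_old;
    IF done THEN LEAVE read_loop; END IF;

    -- take one random id out of the pool and stage it for this row
    SELECT id INTO v_new FROM id_pool ORDER BY RAND() LIMIT 1;
    DELETE FROM id_pool WHERE id = v_new;
    UPDATE tableX SET new_id = v_new WHERE id = v_old;
  END LOOP;
  CLOSE cur;

  DROP TEMPORARY TABLE id_pool;
END$$
DELIMITER ;

CALL shuffle_ids();

-- apply the staged ids; drop and re-add the primary key around this step
-- (as in the accepted approach) if the swap produces transient duplicates
UPDATE tableX SET id = new_id;
ALTER TABLE tableX DROP COLUMN new_id;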