MySQL: INSERT INTO a table only if the table exists

In my project I have two code paths that interact with MySQL during first-time setup. The first step is database structure creation: the user can pick and choose the features they want, so some tables may not end up being created, depending on what they select.
In the second step, I need to preload the tables that did get created with some basic data. How could I go about inserting rows into these tables only if the table exists?
I know of IF NOT EXISTS, but as far as I know that only works when creating tables. I am trying to do something like this:
INSERT INTO table_a ( `key`, `value` ) VALUES ( "", "" ) IF EXISTS table_a;
This is loaded through a file that contains a lot of entries, so letting it throw an error when the table does not exist is not an option.

IF (SELECT COUNT(*) FROM information_schema.tables
    WHERE table_schema = 'databasename' AND table_name = 'tablename') > 0
THEN
    -- INSERT statement here
END IF;
Use information_schema to check for the existence of the table. (Note that IF ... THEN ... END IF is only valid inside a stored program, such as a procedure.)

If you know that a particular table exists with at least one record (or you can create a dummy table with a single record), then you can do a conditional insert this way without a stored procedure.
INSERT INTO table_a (`key`, `value`)
SELECT "", "" FROM known_table
WHERE EXISTS (SELECT *
              FROM information_schema.TABLES
              WHERE TABLE_SCHEMA = 'your_db_name'
                AND TABLE_NAME = 'table_a')
LIMIT 1;
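The check-the-catalog-then-insert pattern can also be driven from application code. Here is a minimal, hedged sketch in Python using SQLite's `sqlite_master` catalog as a stand-in for MySQL's `information_schema.TABLES` (the table and column names are illustrative, not from the original question's schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE table_a ("key" TEXT, "value" TEXT)')

def insert_if_table_exists(con, table, rows):
    """Insert rows only when the target table is present in the catalog."""
    # sqlite_master plays the role of information_schema.TABLES here
    exists = con.execute(
        "SELECT 1 FROM sqlite_master WHERE type = 'table' AND name = ?",
        (table,),
    ).fetchone()
    if not exists:
        return 0  # table was never created: silently skip, no error raised
    con.executemany(
        f'INSERT INTO "{table}" ("key", "value") VALUES (?, ?)', rows
    )
    return len(rows)

inserted = insert_if_table_exists(con, "table_a", [("k1", "v1")])
skipped = insert_if_table_exists(con, "table_b", [("k1", "v1")])
print(inserted, skipped)  # 1 0 (table_b does not exist, so nothing happens)
```

This avoids the parse error the asker hit, since the existence check is an ordinary query rather than an `IF EXISTS` clause bolted onto `INSERT`.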

Related

Records get deleted from a specific table, leaving exactly the same amount of records (54) left each time

Okay, I've just inherited a project from a previous developer. Before I came in, they had a problem where a particular table keeps losing records, with exactly the same number of records left each time; the records are erased completely. I noticed that there are lots of DELETE statements in the code as well, but I can't find the script that deletes the records.
For now I run a CRON job twice a day to back up the database.
I have checked for CASCADE DELETE using this SQL
USE information_schema;
SELECT table_name
FROM referential_constraints
WHERE constraint_schema = 'my_database_name'
  AND referenced_table_name IN
      (SELECT table_name
       FROM information_schema.tables
       WHERE table_schema = 'my_database_name')
  AND delete_rule = 'CASCADE';
It lists all the tables in my database and checks for any possibilities of a CASCADE DELETE, but so far it returns empty.
I use SQL a lot because I'm a back-end developer but I'm not an expert at it. So I could really use some help because it's getting quite embarrassing each time it happens. It's a MySQL database. Thanks.
I once faced a similar situation. I created an SQL TRIGGER that stored the rows in another table before they were deleted. That way:
I was able to restore the lost data each time it happened.
I was able to study the rows being deleted and the information helped in resolving the situation.
Here's a sample for backing up the records before they are deleted:
CREATE TABLE `history_table` LIKE `table`;
ALTER TABLE `history_table`
MODIFY COLUMN `id` INT UNSIGNED NOT NULL;
ALTER TABLE `history_table` DROP PRIMARY KEY;
DELIMITER $$
CREATE TRIGGER `deleted_table` BEFORE DELETE ON `table`
FOR EACH ROW
BEGIN
    INSERT INTO `history_table`
    SELECT * FROM `table` WHERE id = OLD.id;
END$$
DELIMITER ;
For restoring the table:
INSERT INTO `table`
SELECT * FROM `history_table`;
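The whole archive-on-delete idea can be exercised end to end. Below is a small Python sketch using SQLite, which supports the same `BEFORE DELETE` trigger shape; the `items`/`history_table` names are made up for the example and the history table deliberately has no primary key, so the same id can be archived more than once:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT);
    -- mirrors items, but without the PRIMARY KEY constraint
    CREATE TABLE history_table (id INTEGER, name TEXT);
    -- capture each row just before it disappears
    CREATE TRIGGER archive_deleted BEFORE DELETE ON items
    BEGIN
        INSERT INTO history_table VALUES (OLD.id, OLD.name);
    END;
""")
con.execute("INSERT INTO items VALUES (1, 'first'), (2, 'second')")
con.execute("DELETE FROM items WHERE id = 1")

archived = con.execute("SELECT id, name FROM history_table").fetchall()
print(archived)  # [(1, 'first')]

# Restoring is just an INSERT ... SELECT back from the archive
con.execute("INSERT INTO items SELECT * FROM history_table")
```

As the answer notes, the archive serves double duty: it lets you restore the data, and inspecting what lands in it tells you which rows the mystery script is deleting.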

How to use IF EXISTS to check whether table exists before removing data in that table

I wanted to check whether a table exists before deleting the values inside it. In SQL Server we can do it as simply as this:
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'table_to_be_edited')
BEGIN
DELETE FROM table_to_be_edited;
END;
but how do we do it in MySQL ?
I am using MySQL Workbench V8.0.
When deleting, one option is simply to attempt the DELETE and ignore the table-not-found error. This also eliminates the race condition where the table is created (or dropped) between the existence test and the delete. Always consider such races when doing SQL operations.
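The attempt-and-ignore approach is easy to show from client code. A hedged Python sketch, using SQLite where the error surfaces as `OperationalError: no such table` (MySQL clients raise error 1146 instead; the `present`/`absent` table names are invented for the example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE present (id INTEGER)")

def delete_all(con, table):
    """Attempt the DELETE; swallow only the 'table is missing' error."""
    try:
        con.execute(f'DELETE FROM "{table}"')
        return True
    except sqlite3.OperationalError as exc:
        if "no such table" in str(exc):
            return False  # table does not exist: treat as a no-op
        raise  # any other error is still a real problem

ok = delete_all(con, "present")
missing = delete_all(con, "absent")
print(ok, missing)  # True False
```

Because there is no separate existence check, there is no window in which another session can drop or create the table between check and delete.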

SQL - delete row if another table exists

I'm trying to delete a row from a table if another table doesn't exist. I've tried using the following statement
IF (NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'user3project3'))
BEGIN
DELETE FROM user1table WHERE id=3
END
However I get the following error:
Unrecognized statement type. (near "IF" at position 0)
I'm using phpMyAdmin with XAMPP, if that matters.
Thanks a lot in advance!
The IF statement is only allowed in programming blocks, which in practice means in stored procedures, functions, and triggers.
You could express this logic in a single query:
DELETE FROM user1table
WHERE id = 3 AND
NOT EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'user3project3');
That said, you have a very questionable data model if you are storing a separate table for each user.

Is there a way to ignore columns that don't exist on INSERT?

I'm using a MySQL GUI to migrate some sites to a new version of a CMS by selecting certain tables and running the INSERT statement generated from a backup dump into an empty table (the new schema). There are a few columns in the old tables that don't exist in the new one, so the script stops with an error like this:
Script line: 1 Unknown column 'user_id' in 'field list'
Cherry-picking the desired columns to export, or editing the dump file would be too tedious and time consuming. To work around this I'm creating the unused columns as the errors are generated, importing the data by running the query, then dropping the unused columns when I'm done with that table. I've looked at INSERT IGNORE, but this seems to be for ignoring duplicate keys (not what I'm looking for).
Is there any way to perform an INSERT while ignoring columns that don't exist in the target table? I'm looking for something "painless", like some existing SQL functionality.
To clarify, I'm working with a bunch of backup files and importing the data to a local database for testing before moving it to the live server. Example of the kind of solution I'm hoping for:
-- Don't try to insert data from columns that don't exist in "new_table"
INSERT INTO `new_table` {IGNORE UNKNOWN COLUMNS} (`id`, `col1`, `col2`) VALUES
(1, '', ''),
(2, '', '');
If something like this simply doesn't exist, I'm happy to accept that as an answer and continue to use my current workaround.
Your current technique seems practical enough, with just one small change: rather than waiting for an error and then creating columns one by one, you can export the schema, diff the two, and find all the missing columns in all the tables at once.
That way it is less work.
Your GUI may be capable of exporting just the schema; otherwise the following switch on mysqldump will help you find all the missing columns.
mysqldump --no-data -uuser -ppassword --database dbname1 > dbdump1.sql
mysqldump --no-data -uuser -ppassword --database dbname2 > dbdump2.sql
Diffing the dbdump1.sql and dbdump2.sql will give you all the differences in both the databases.
You can write a stored function like:
sf_getcolumns(table_name VARCHAR(100))
that returns a string containing the field list, like this:
'field_1,field_2,field_3,...'
then create a stored procedure
sp_migrate (IN src_db VARCHAR(50), IN target_db VARCHAR(50))
that runs through the tables and, for each table, gets the field list and builds a string like
cmd = 'INSERT INTO <target_db>.<table_name> (<fields_list>) SELECT <fields_list> FROM <src_db>.<table_name>'
then executes the string for each table.
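The same build-the-column-list idea works from any client language. A hedged Python sketch, using SQLite's `PRAGMA table_info` in place of `information_schema.columns` (the `src`/`dst` tables are invented for the example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE src (id INTEGER PRIMARY KEY, a TEXT, b TEXT, d TEXT);
    CREATE TABLE dst (id INTEGER PRIMARY KEY, a TEXT, b TEXT, c TEXT);
    INSERT INTO src (a, b, d) VALUES ('a1', 'b1', 'd1');
""")

def columns(con, table):
    # PRAGMA table_info plays the role of information_schema.columns;
    # row[1] is the column name
    return [row[1] for row in con.execute(f'PRAGMA table_info("{table}")')]

# Intersect the two column lists, then build the INSERT ... SELECT string
common = [c for c in columns(con, "src") if c in columns(con, "dst")]
field_list = ", ".join(f'"{c}"' for c in common)
cmd = f"INSERT INTO dst ({field_list}) SELECT {field_list} FROM src"
con.execute(cmd)
print(cmd)
```

Column `d` exists only in `src` and column `c` only in `dst`, so neither appears in the generated statement; that is exactly the "ignore unknown columns" behavior the asker wanted.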
Is there any way to perform an INSERT while ignoring columns that don't exist in the target table? I'm looking for something "painless", like some existing SQL functionality.
No, there is no "painless" way to do so.
Instead, you must explicitly handle those columns which do not exist in the final tables. For example, you must remove them from the input stream, drop them after the fact, play dirty tricks (engine=BLACKHOLE + triggers to INSERT only what you want to the true target schema), whatever.
Now, this doesn't necessarily need to be manual -- there are tools (as Devart noted) and ways to query the db catalog to determine column names. However, it's not as easy as simply annotating your INSERT statements.
Perhaps the CMS vendor can supply a reasonable migration script?
dbForge Studio for MySQL will give you an opportunity to compare and synchronize data between two databases.
By default data comparison is performed only for the objects with the same names; you can use automatic mapping or map database objects manually. dbForge Studio allows you to customize mapping of tables and columns, so you can compare data of objects with non-equal names, owners, and structure. You may also map columns with different types, however this may result in data truncation, rounding, and errors during synchronization for certain types.
I carefully read all these posts because I have the same challenge. Please review my solution:
I did it in C#, but you can do it in any language.
Check the INSERT statement for column names. If any are missing from your actual table, ADD them as TEXT columns, since a TEXT column can hold anything.
When you have finished inserting into that table, remove the added columns.
Done.
I found your question interesting.
I knew that there was a way to select the column names from a table in MySQL: it's SHOW COLUMNS FROM tablename. What I'd forgotten was that all of the MySQL table metadata is held in a special database, called "information_schema".
This is the logical solution, but it doesn't work:
mysql> insert into NEW_TABLE (select column_name from information_schema.columns where table_name='NEW_TABLE') values ...
I'll keep looking, though. If it's possible to grab a comma-delimited value from the select column_name query, you might be in business.
Edit:
You can use the SELECT ... INTO OUTFILE command to generate a one-line CSV, like the following:
mysql> select column_name from information_schema.columns where table_name='NEW_TABLE' into outfile 'my_file' fields terminated by '' lines terminated by ', '
I don't know how to get this to output to the MySQL CLI stdout, though.
If your tables and databases are on the same server and you know that the common columns are compatible, you can easily do this with GROUP_CONCAT, INFORMATION_SCHEMA.COLUMNS, and a prepared statement.
This example creates two similar tables
and inserts the data from the common columns in
table_a into table_b. Since the two tables have
a common primary key column, it is excluded. (Note: the database [table_schema] I am using is called 'test')
create table if not exists `table_a` (
`id` int(11) not null auto_increment,
`a` varchar(2) default null,
`b` varchar(2) default null,
`c` varchar(2) default null,
`d` varchar(2) default null,
PRIMARY KEY (`id`)
);
create table if not exists `table_b` (
`id` int(11) not null auto_increment,
`a` varchar(2) default null,
`b` varchar(2) default null,
`c` varchar(2) default null,
PRIMARY KEY (`id`)
);
insert into table_a
(a, b, c, d)
values
('a1', 'b1', 'c1', 'd1'),
('a2', 'b2', 'c2', 'd2'),
('a3', 'b3', 'c3', 'd3');
-- This creates a comma delimited list of common
-- columns in table_a and table_b. It also excludes
-- any columns you don't want from table_a
set @common_columns = (
select
group_concat(column_name)
from
information_schema.columns
where
table_schema = 'test'
and
table_name = 'table_a'
and
column_name not in ('id')
and
column_name in (
select
column_name
from
information_schema.columns
where
table_schema = 'test'
and
table_name = 'table_b'
)
);
set @stmt = concat(
'insert into table_b (', @common_columns, ') ',
'select ', @common_columns, ' from table_a;'
);
prepare stmt from @stmt;
execute stmt;
deallocate prepare stmt;
select * from table_b;
The prepared statement ends up looking like this:
insert into table_b (a, b, c)
select a, b, c from table_a;
Don't forget to change the values for table_name and table_schema to match your tables and database (table_schema). If it is useful you could create a stored procedure to do this task.

MySQL query across 2 databases with different users

I have a mysql query that joins data across 2 databases and inserts new records into one of the databases. The query below works if the same mysql user has access to both databases and both are on the same server. However, how would you re-write if each database has different user credentials?
PHP Script snippet:
$hard = mysql_connect($hostname_hard, $username_hard, $password_hard)
or trigger_error(mysql_error(),E_USER_ERROR);
# Insert artists:
mysql_select_db($database_hard, $hard);
$query_artsync = "insert ignore into {$joomla_db}.jos_muscol_artists
(artist_name, letter, added,url,class_name)
select distinct
artist
, left(artist,1) as letter
, now()
, website, artist
from {$sam_db}.songlist
where (songtype='s' AND artist <>\"\")";
mysql_query($query_artsync, $hard) or die(mysql_error());
echo "<br>Artist tables merged! <br><br> Now merging albums<br><br>";
So..in the above the {$sam_db} database is accessed by a different user than the {$joomla_db} user...
There are more complex inserts following this one, but I think if I can get the above to work, then I'll likely be able to apply the same principles to the other insert queries...
You're talking about using two different connections in the same query, which, unfortunately, is not possible. What you'll have to do is get (SELECT) all the information you need from the one database, and use that info to construct your INSERT query on the other database (two separate queries).
Something like this:
$result = mysql_query("SELECT DISTINCT artist, LEFT(artist,1) AS letter, now() as added, website
FROM {$sam_db}.songlist
WHERE (songtype='s' AND artist <> \"\")", $sam_con);
while(($row = mysql_fetch_assoc($result)) != false)
{
$artist = mysql_real_escape_string($row['artist']);
$letter = mysql_real_escape_string($row['letter']);
$added = mysql_real_escape_string($row['added']);
$website = mysql_real_escape_string($row['website']);
mysql_query("INSERT IGNORE INTO {$joomla_db}.jos_muscol_artists
(artist_name, letter, added,url,class_name)
VALUES
('$artist', '$letter', '$added', '$website', '$artist')", $joomla_con);
}
where $sam_con and $joomla_con are your connection resources.
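The select-from-one, insert-into-the-other loop looks the same in any client language. A minimal Python sketch with two SQLite connections standing in for the two differently-credentialed MySQL connections (the schema is trimmed to three columns for brevity, and parameter placeholders replace mysql_real_escape_string):

```python
import sqlite3

# Two separate connections, as with the two sets of MySQL credentials
sam_con = sqlite3.connect(":memory:")
joomla_con = sqlite3.connect(":memory:")

sam_con.execute("CREATE TABLE songlist (artist TEXT, website TEXT, songtype TEXT)")
sam_con.execute(
    "INSERT INTO songlist VALUES ('Muse', 'muse.mu', 's'), ('', 'x', 's')"
)
joomla_con.execute(
    "CREATE TABLE jos_muscol_artists (artist_name TEXT, letter TEXT, url TEXT)"
)

# Step 1: read everything needed through the first connection
rows = sam_con.execute(
    "SELECT DISTINCT artist, substr(artist, 1, 1), website "
    "FROM songlist WHERE songtype = 's' AND artist <> ''"
).fetchall()

# Step 2: write through the second connection, using placeholders so no
# manual escaping is required
joomla_con.executemany(
    "INSERT INTO jos_muscol_artists (artist_name, letter, url) VALUES (?, ?, ?)",
    rows,
)
copied = joomla_con.execute(
    "SELECT artist_name, letter FROM jos_muscol_artists"
).fetchall()
print(copied)  # [('Muse', 'M')]
```

The row with an empty artist is filtered out in step 1, matching the `artist <> ""` condition in the original query.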
There is no problem querying tables from different databases.
SELECT a.*, b.*
FROM db1.table1 a
INNER JOIN db2.table2 b ON (a.id = b.id)
Will run with no problems, as will your insert query.
However the user that starts the query needs to have proper access to the databases and tables involved.
That means that user1 (who does the insert) has to be granted insert+select rights to table {$joomla_db}.jos_muscol_artists and select rights to {$sam_db}.songlist
If you don't want to expand the rights of your existing users, then you can just create a new inserter user that has the proper access rights to use both databases in the proper manner.
Only use this user to do inserts.
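For a sense of how a single connection with visibility into both databases behaves, here is a hedged Python sketch; SQLite's ATTACH plays the role of the MySQL user granted rights on both schemas, and the `db1.table1`/`db2.table2` qualified names mirror the SQL above (file names and columns are invented for the example):

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
db1 = os.path.join(tmp, "db1.sqlite")
db2 = os.path.join(tmp, "db2.sqlite")

# Populate two physically separate database files
c = sqlite3.connect(db1)
c.execute("CREATE TABLE table1 (id INTEGER, name TEXT)")
c.execute("INSERT INTO table1 VALUES (1, 'left')")
c.commit()
c.close()

c = sqlite3.connect(db2)
c.execute("CREATE TABLE table2 (id INTEGER, note TEXT)")
c.execute("INSERT INTO table2 VALUES (1, 'right')")
c.commit()
c.close()

# One connection that can see both: the cross-database join just works
con = sqlite3.connect(db1)
con.execute(f"ATTACH DATABASE '{db2}' AS db2")
joined = con.execute(
    "SELECT a.name, b.note FROM table1 a JOIN db2.table2 b ON a.id = b.id"
).fetchall()
print(joined)  # [('left', 'right')]
```

As with the dedicated "inserter" user in MySQL, the point is that access is a property of the connection's identity, not of the query text.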
Alternative solution without adding users
Another option is to create a blackhole table on db1 (the db you select from)
CREATE TABLE db1.bh_insert_into_db2 (
f1 integer,
f2 varchar(255),
f3 varchar(255)
) ENGINE = BLACKHOLE;
And attach a trigger to that that does the insert into db2.
DELIMITER $$
CREATE TRIGGER ai_bh_insert_into_db2_each AFTER INSERT ON bh_insert_into_db2 FOR EACH ROW
BEGIN
INSERT INTO db2.table2 (f1,f2,f3) VALUES (NEW.f1, NEW.f2, NEW.f3);
END $$
DELIMITER ;
The insert into db2.table2 will happen with the access rights of the user who created the trigger.
http://dev.mysql.com/doc/refman/5.1/en/grant.html
With PHP, for example, you will have to create two separate MySQL connections, one per database, then use both and run the queries individually.
Set up connections to both databases using mysql_connect, so that you have two connection variables.
In mysql_query you pass the relevant one, e.g. mysql_query($query, $connect1) or mysql_query($query, $connect2).
From there you can extract and insert using code.