Has anyone experienced a situation like this?
I accidentally typed a DELETE command with the wrong syntax (MySQL version 8.0.22). The command should never have worked, but not only did it work, it also deleted all the data from the table:
Syntax: DELETE FROM test WHERE 123456;
Note that neither a column name nor a comparison operator was specified, but even so the command executed without errors and deleted all the data from the table.
The value 123456 is just an example; it could be any number.
Test it on any version of MySQL:
CREATE TABLE `test` (
`cod` int NOT NULL AUTO_INCREMENT,
`name` varchar(50),
PRIMARY KEY (`cod`),
KEY `ix_tmp_autoinc` (`cod`)
);
INSERT INTO `test`
(`name`)
VALUES
('MySQL bug');
INSERT INTO `test`
(`name`)
VALUES
('MySQL bug 2');
DELETE FROM test WHERE 123456;
SELECT COUNT(*) FROM test;
This is a perfectly valid statement, absolutely in accordance with the syntax for the DELETE command described at DELETE statement. The DELETE statement page says that WHERE must be followed by a where_condition, which is described on the SELECT statement page. There we find that a where_condition can be either a function/operator or an expression. Looking at the Expression page, we find the following hierarchy:
expr -> boolean_primary -> predicate -> bit_expr -> simple_expr -> literal
So a where_condition can be a literal, which is exactly what you gave it. It may not have been what you meant, and it may not have done what you intended, but from the standpoint of MySQL syntax it's perfectly legal.
What is playing out here are MySQL's implicit type-casting rules. When you ran the following query:
DELETE FROM test WHERE 123456;
MySQL expected a boolean expression following WHERE. It didn't find one, but it did find an integer literal. MySQL treats any nonzero number as "truthy," so the condition evaluates to true for every row, and every row is deleted.
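You can see the same rule at work with a harmless SELECT first (a quick sketch against the test table above):
SELECT COUNT(*) FROM test WHERE 123456; -- nonzero literal: matches every row
SELECT COUNT(*) FROM test WHERE 0;      -- zero is "falsy": matches no rows
-- so the accidental statement behaved exactly like:
DELETE FROM test WHERE 123456 <> 0;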
I'm seeing a weird behavior when I INSERT some data into a table and then run a SELECT query on the same table. This table has an auto-increment primary key (uid), and this problem occurs when I try to then select results where 'uid IS NULL'.
I've golfed this down to the following SQL commands:
DROP TABLE IF EXISTS test_users;
CREATE TABLE test_users (uid INTEGER PRIMARY KEY AUTO_INCREMENT, name varchar(20) NOT NULL);
INSERT INTO test_users(name) values('foo');
SELECT uid FROM test_users WHERE uid IS NULL; -- unexpectedly returns the new row's uid
SELECT uid FROM test_users WHERE uid IS NULL; -- no output from this query
I'd expect SELECT uid FROM test_users WHERE uid IS NULL to never return anything, but it does, sometimes. Here's what I've found:
The version of MySQL/MariaDB seems to matter. The machine having this problem is running MySQL 5.1.73 (CentOS 6.5, both 32-bit and 64-bit); my other machine, running 5.5.37-MariaDB (Fedora 19, 64-bit), is not affected. Both run default configs, aside from being configured to use MyISAM tables.
Only the first SELECT query after the INSERT is affected.
If I specify a value for uid rather than let it auto-increment, then it's fine.
If I disconnect and reconnect between the INSERT and SELECT, then I get the expected result: no rows. This is easiest to see in something like Perl, where I manage the connection object. I have a test script demonstrating this at https://gist.github.com/avuserow/1c20cc03c007eda43c82
This behavior is by design.
It's evidently equivalent to SELECT * FROM t1 WHERE id = LAST_INSERT_ID(), which would also work only from the connection where you just did the insert, exactly as you described.
It's apparently a workaround that you can use in some environments that make it difficult to fetch the last inserted (by your connection) row's auto-increment value in a more conventional way.
To be precise, it's actually the auto_increment value assigned to the first row inserted by your connection's last insert statement. That's the same thing when you only inserted one row, but it's not the same thing when you insert multiple rows with a single insert statement.
http://dev.mysql.com/doc/connector-odbc/en/connector-odbc-usagenotes-functionality-last-insert-id.html
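For comparison, the conventional way looks like this (a small sketch using the question's test_users table):
INSERT INTO test_users(name) VALUES('foo');
SELECT LAST_INSERT_ID();                      -- the uid just generated, scoped to this connection
SELECT uid FROM test_users WHERE uid IS NULL; -- the ODBC-compatibility shortcut: works once, on the first SELECT after the INSERT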
I am seeing a curious name dependency in the following MySQL table definition. When I code the table as shown, it seems to break MySQL. When I perform "select * from dictionary_pair_join" from MySQLQueryBrowser, the status bar says "No resultset returned" -- no column names and no errors. When I insert a row into the table, the status bar says "1 row affected by the last command, no resultset returned", and subsequent selects give the same "No resultset returned" response.
When I enclose the tablename in backticks in the "select" statement, all works fine. Surely there are no mysql entities named "dictionary_pair_join"!
Here is the table definition:
DROP TABLE IF EXISTS dictionary_pair_join;
CREATE TABLE dictionary_pair_join (
version_id int(11) UNSIGNED NOT NULL default '0',
pair_id int(11) UNSIGNED default NULL,
KEY (version_id),
KEY (pair_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Here is the broken select statement:
select * from dictionary_pair_join;
Here is its working counterpart:
select * from `dictionary_pair_join`;
Why are backticks required in the select statement?
Update: This also fails in the Python mysqldb interface, which is why I started looking at it. I can put the backticks into my Python "select" generators, but I was hoping this was some stupid and easily-changed nit. I suppose I can also find a different name.
I've upvoted the comments from Quassnoi and Software Guy; together they've persuaded me that it's just a bug in mysql/mysqldb/mysqlquerybrowser.
I changed the table name (to "dictionary_pair_cons") and the problem went away, in both the query browser and mysqldb.
Suppose I have an attribute called phone number, and I would like to enforce certain validity constraints on entries to this field. Can I use a regular expression for this purpose, since regular expressions are very flexible at defining constraints?
Yes, you can. MySQL supports regular expressions (http://dev.mysql.com/doc/refman/5.6/en/regexp.html), and for data validation you should use a trigger, since MySQL doesn't enforce CHECK constraints (you can always move to PostgreSQL as an alternative). NB: even though MySQL does have a CHECK constraint construct, as of 5.6 it does not validate data against it. According to http://dev.mysql.com/doc/refman/5.6/en/create-table.html: "The CHECK clause is parsed but ignored by all storage engines."
You can emulate the check for a column phone with a BEFORE INSERT trigger:
CREATE TABLE data (
phone varchar(100)
);
DELIMITER $$
CREATE TRIGGER trig_phone_check BEFORE INSERT ON data
FOR EACH ROW
BEGIN
IF (NEW.phone REGEXP '^(\\+?[0-9]{1,4}-)?[0-9]{3,10}$' ) = 0 THEN
SIGNAL SQLSTATE '12345'
SET MESSAGE_TEXT = 'Wroooong!!!';
END IF;
END$$
DELIMITER ;
INSERT INTO data VALUES ('+64-221221442'); -- should be OK
INSERT INTO data VALUES ('+64-22122 WRONG 1442'); -- will fail with the error: #1644 - Wroooong!!!
However you should not rely merely on MySQL (data layer in your case) for data validation. The data should be validated on all levels of your app.
MySQL 8.0.16 (2019-04-25) and MariaDB 10.2.1 (2016-04-18) now not only parse CHECK constraints but also enforce them.
MySQL: https://dev.mysql.com/doc/refman/8.0/en/create-table-check-constraints.html
MariaDB: https://mariadb.com/kb/en/constraint/
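On those versions the trigger above can become a real CHECK constraint (a sketch reusing the same regex; the table name data2 is mine):
CREATE TABLE data2 (
phone varchar(100),
CONSTRAINT chk_phone CHECK (phone REGEXP '^(\\+?[0-9]{1,4}-)?[0-9]{3,10}$')
);
INSERT INTO data2 VALUES ('+64-221221442');        -- OK
INSERT INTO data2 VALUES ('+64-22122 WRONG 1442'); -- fails: Check constraint 'chk_phone' is violated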
Actually, we can set regular expressions within check constraints in MySQL.
For example:
create table fk
(
empid int not null unique,
age int check(age between 18 and 60),
email varchar(20) default 'N/A',
secondary_email varchar(20) check(secondary_email RLIKE '^[a-zA-Z]@[a-zA-Z0-9]\\.[a-zA-Z]{2,4}'),
deptid int check(deptid in(10,20,30))
)
;
This INSERT query will work:
insert into fk values(1,19,'a@a.com','a@b.com', 30);
This INSERT query will not work:
insert into fk values(2,19,'a@a.com','a@bc.com', 30);
I'm using a MySQL GUI to migrate some sites to a new version of a CMS by selecting certain tables and running the INSERT statement generated from a backup dump into an empty table (the new schema). There are a few columns in the old tables that don't exist in the new one, so the script stops with an error like this:
Script line: 1 Unknown column 'user_id' in 'field list'
Cherry-picking the desired columns to export, or editing the dump file would be too tedious and time consuming. To work around this I'm creating the unused columns as the errors are generated, importing the data by running the query, then dropping the unused columns when I'm done with that table. I've looked at INSERT IGNORE, but this seems to be for ignoring duplicate keys (not what I'm looking for).
Is there any way to perform an INSERT while ignoring columns that don't exist in the target table? I'm looking for something "painless", like some existing SQL functionality.
To clarify, I'm working with a bunch of backup files and importing the data to a local database for testing before moving it to the live server. Example of the kind of solution I'm hoping for:
-- Don't try to insert data from columns that don't exist in "new_table"
INSERT INTO `new_table` {IGNORE UNKNOWN COLUMNS} (`id`, `col1`, `col2`) VALUES
(1, '', ''),
(2, '', '');
If something like this simply doesn't exist, I'm happy to accept that as an answer and continue to use my current workaround.
Your current technique seems practical enough. Just one small change.
Rather than waiting for an error and then creating columns one by one, you can export both schemas, diff them, and find all the missing columns in all the tables at once.
That way it's less work.
Your GUI may be able to export just the schema; otherwise the --no-data switch on mysqldump will do it:
mysqldump --no-data -uuser -ppassword --databases dbname1 > dbdump1.sql
mysqldump --no-data -uuser -ppassword --databases dbname2 > dbdump2.sql
Diffing dbdump1.sql and dbdump2.sql will show you all the differences between the two databases.
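For example, with plain diff (any diff tool will do):
diff dbdump1.sql dbdump2.sql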
You can write a stored function like:
sf_getcolumns(table_name varchar(100))
that returns a string containing the field list, like this:
'field_1,field_2,field_3,...'
Then create a stored procedure
sp_migrate (IN src_db varchar(50), IN target_db varchar(50))
that runs through the tables, fetches the field list for each one, and builds a string like
cmd = CONCAT('INSERT INTO ', target_db, '.', table_name, ' (', fields_list, ') SELECT ', fields_list, ' FROM ', src_db, '.', table_name)
then executes the string for each table. (Note that MySQL concatenates with CONCAT() rather than ||, and dynamic SQL runs via PREPARE/EXECUTE.)
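A minimal sketch of the helper, assuming the list comes from information_schema (the body is my assumption, and I've renamed the parameter so it doesn't collide with the table_name column):
DELIMITER $$
CREATE FUNCTION sf_getcolumns(tbl_schema VARCHAR(64), tbl_name VARCHAR(64))
RETURNS TEXT READS SQL DATA
BEGIN
-- returns e.g. 'field_1,field_2,field_3'; raise group_concat_max_len for very wide tables
RETURN (SELECT GROUP_CONCAT(column_name)
FROM information_schema.columns
WHERE table_schema = tbl_schema AND table_name = tbl_name);
END$$
DELIMITER ;
SELECT sf_getcolumns('dbname1', 'test');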
Is there any way to perform an INSERT while ignoring columns that don't exist in the target table? I'm looking for something "painless", like some existing SQL functionality.
No, there is no "painless" way to do so.
Instead, you must explicitly handle the columns which do not exist in the target tables. For example, you can remove them from the input stream, drop them after the fact, or play dirty tricks (ENGINE=BLACKHOLE plus triggers that INSERT only what you want into the true target schema).
Now, this doesn't necessarily need to be manual -- there are tools (as Devart noted) and ways to query the db catalog to determine column names. However, it's not as easy as simply annotating your INSERT statements.
Perhaps the CMS vendor can supply a reasonable migration script?
dbForge Studio for MySQL lets you compare and synchronize data between two databases.
By default, data comparison is performed only for objects with the same names; you can use automatic mapping or map database objects manually. dbForge Studio allows you to customize the mapping of tables and columns, so you can compare data of objects with non-equal names, owners, and structure. You may also map columns with different types; however, this may result in data truncation, rounding, and errors during synchronization for certain types.
I carefully read all these posts because I have the same challenge. Here is my solution:
I did it in C#, but you can do it in any language.
Check the INSERT statement for column names. If any are missing from your actual table, ADD them as TEXT columns, since TEXT can hold anything.
When you're finished inserting into that table, remove the added columns.
Done.
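In SQL terms the workaround looks like this (column name borrowed from the error message quoted earlier in the thread):
ALTER TABLE new_table ADD COLUMN user_id TEXT; -- absorb the extra column from the dump
-- ... run the INSERT statements from the dump ...
ALTER TABLE new_table DROP COLUMN user_id;     -- clean up once the import is done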
I found your question interesting.
I knew that there was a way to select the column names from a table in MySQL: SHOW COLUMNS FROM tablename. What I'd forgotten was that all of the MySQL table metadata is held in a special database called information_schema.
This is the logical solution, but it doesn't work:
mysql> insert into NEW_TABLE (select column_name from information_schema.columns where table_name='NEW_TABLE') values ...
I'll keep looking, though. If it's possible to grab a comma-delimited value from the select column_name query, you might be in business.
Edit:
You can use the SELECT ... INTO OUTFILE command to generate a one-line CSV, like the following:
mysql> select column_name from information_schema.columns where table_name='NEW_TABLE' into outfile 'my_file' fields terminated by '' lines terminated by ', '
I don't know how to get this to output to the MySQL CLI stdout, though.
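It turns out GROUP_CONCAT gives you that comma-delimited value directly in the CLI (a sketch; NEW_TABLE as in the example above):
select group_concat(column_name) from information_schema.columns where table_name = 'NEW_TABLE';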
If your tables and databases are on the same server and you know that the common columns are compatible, you can easily do this with GROUP_CONCAT, INFORMATION_SCHEMA.COLUMNS, and a prepared statement.
This example creates two similar tables and inserts the data from the common columns in table_a into table_b. Since the two tables have a common primary key column, it is excluded. (Note: the database [table_schema] I am using is called 'test'.)
create table if not exists `table_a` (
`id` int(11) not null auto_increment,
`a` varchar(2) default null,
`b` varchar(2) default null,
`c` varchar(2) default null,
`d` varchar(2) default null,
PRIMARY KEY (`id`)
);
create table if not exists `table_b` (
`id` int(11) not null auto_increment,
`a` varchar(2) default null,
`b` varchar(2) default null,
`c` varchar(2) default null,
PRIMARY KEY (`id`)
);
insert into table_a
(a, b, c, d)
values
('a1', 'b1', 'c1', 'd1'),
('a2', 'b2', 'c2', 'd2'),
('a3', 'b3', 'c3', 'd3');
-- This creates a comma delimited list of common
-- columns in table_a and table_b. It also excludes
-- any columns you don't want from table_a
set @common_columns = (
select
group_concat(column_name)
from
information_schema.columns
where
table_schema = 'test'
and
table_name = 'table_a'
and
column_name not in ('id')
and
column_name in (
select
column_name
from
information_schema.columns
where
table_schema = 'test'
and
table_name = 'table_b'
)
);
set @stmt = concat(
'insert into table_b (', @common_columns, ') ',
'select ', @common_columns, ' from table_a;'
);
prepare stmt from @stmt;
execute stmt;
deallocate prepare stmt;
select * from table_b;
The prepared statement ends up looking like this:
insert into table_b (a, b, c)
select a, b, c from table_a;
Don't forget to change the values for table_name and table_schema to match your tables and database (table_schema). If it is useful you could create a stored procedure to do this task.
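If you do want a reusable stored procedure, a sketch might look like this (the name and parameters are my own invention, generalizing the statements above):
DELIMITER $$
CREATE PROCEDURE copy_common_columns(
IN src_schema VARCHAR(64), IN src_table VARCHAR(64),
IN dst_schema VARCHAR(64), IN dst_table VARCHAR(64))
BEGIN
-- comma-delimited list of columns present in both tables, excluding id
SET @common_columns = (
SELECT GROUP_CONCAT(a.column_name)
FROM information_schema.columns a
JOIN information_schema.columns b
ON b.table_schema = dst_schema
AND b.table_name = dst_table
AND b.column_name = a.column_name
WHERE a.table_schema = src_schema
AND a.table_name = src_table
AND a.column_name NOT IN ('id'));
SET @stmt = CONCAT('insert into ', dst_schema, '.', dst_table,
' (', @common_columns, ') select ', @common_columns,
' from ', src_schema, '.', src_table);
PREPARE stmt FROM @stmt;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
END$$
DELIMITER ;
CALL copy_common_columns('test', 'table_a', 'test', 'table_b');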