Remove all zero dates from MySQL database across all Tables - mysql

I have plenty of tables in MySQL whose datetime columns contain the zero date 0000-00-00 00:00:00.
Is it possible, via some sort of admin setting, to disable zero dates and replace all the zeros with a static value, say 1-1-1900?
EDIT:
I am working on a database migration that involves moving more than 100 MySQL tables to SQL Server.
Can I avoid running scripts against each table manually by setting up a database mode?
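(The "database mode" referred to here is presumably MySQL's sql_mode. A minimal sketch, assuming MySQL 5.6 or later, of disallowing zero dates for future writes; note this only rejects new zero dates and does not rewrite rows that already contain 0000-00-00:)
-- Replaces the global sql_mode outright; append to @@GLOBAL.sql_mode instead if other flags must be kept.
SET GLOBAL sql_mode = 'STRICT_TRANS_TABLES,NO_ZERO_DATE,NO_ZERO_IN_DATE';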

To change existing values you could use a query like this:
UPDATE tablename SET date_column = '1900-01-01' WHERE date_column = '0000-00-00';
If you want to automate the UPDATE query you can use a prepared statement:
SET @sql_update = CONCAT_WS(' ', 'UPDATE', CONCAT(_schema, '.', _table),
                            'SET', _column, '=', '\'1900-01-01\'',
                            'WHERE', _column, '=', '\'0000-00-00\'');
PREPARE stmt FROM @sql_update;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
And you can loop through all columns in all tables of the current schema that are declared with a date type:
SELECT
  table_schema,
  table_name,
  column_name
FROM
  information_schema.columns
WHERE
  table_schema = DATABASE() AND data_type LIKE 'date%';
To loop through all columns you could use a stored procedure:
DELIMITER //
CREATE PROCEDURE update_all_tables()
BEGIN
  DECLARE done BOOLEAN DEFAULT FALSE;
  DECLARE _schema VARCHAR(255);
  DECLARE _table VARCHAR(255);
  DECLARE _column VARCHAR(255);
  DECLARE cur CURSOR FOR SELECT
    CONCAT('`', REPLACE(table_schema, '`', '``'), '`'),
    CONCAT('`', REPLACE(table_name, '`', '``'), '`'),
    CONCAT('`', REPLACE(column_name, '`', '``'), '`')
  FROM
    information_schema.columns
  WHERE
    table_schema = DATABASE() AND data_type LIKE 'date%';
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done := TRUE;

  OPEN cur;
  columnsLoop: LOOP
    FETCH cur INTO _schema, _table, _column;
    IF done THEN
      LEAVE columnsLoop;
    END IF;

    SET @sql_update = CONCAT_WS(' ', 'UPDATE', CONCAT(_schema, '.', _table),
                                'SET', _column, '=', '\'1900-01-01\'',
                                'WHERE', _column, '=', '\'0000-00-00\'');
    PREPARE stmt FROM @sql_update;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
  END LOOP columnsLoop;
  CLOSE cur;
END//
DELIMITER ;
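Once the procedure exists, it can be run with a single call (and dropped afterwards if it is only needed for the migration):
CALL update_all_tables();
DROP PROCEDURE update_all_tables;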

This is an old question, but I was running into a similar problem, except I was trying to set the 0000-00-00 values to NULL.
I tried this query:
UPDATE tablename SET date_column = NULL WHERE date_column = '0000-00-00';
and got the following error:
Incorrect date value: '0000-00-00' for column 'date_column' at row 1
It turns out the same query without quotes around 0000-00-00 worked!
UPDATE tablename SET date_column = NULL WHERE date_column = 0000-00-00;
(Without quotes, 0000-00-00 is evaluated as the arithmetic expression 0 - 0 - 0, i.e. the number 0, so the comparison sidesteps the date-literal validation that rejects '0000-00-00' under strict SQL modes.)

You can change the existing values by running this query:
update your_table
set date_column = '1900-01-01'
where date_column = '0000-00-00'
And you can change the column definition to a specific default value, or to allow NULL, like this:
ALTER TABLE your_table
CHANGE date_column date_column date NOT NULL DEFAULT '1900-01-01'

You have two options.
Option One - In the programming language of your choice (you can even do this with Stored Procedures):
Loop through your INFORMATION_SCHEMA (probably COLUMNS) and build a query to get back the tables you need to affect, i.e.
SELECT TABLE_NAME FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME='date' AND TABLE_SCHEMA='<YOUR DB NAME>'
or maybe even better
SELECT TABLE_NAME, COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS
WHERE DATA_TYPE IN ('timestamp','date','datetime')
AND TABLE_SCHEMA='<YOUR DB NAME>'
Store the results and then loop through them. On each loop, create a new query. In MySQL that would be a Stored Procedure with Prepared Statements, i.e.:
SET @string = CONCAT("UPDATE ", @table_name, " SET ", @column_name, "='1900-01-01' WHERE ", @column_name, "='0000-00-00 00:00:00'");
PREPARE stmt FROM @string;
EXECUTE stmt;
That wouldn't be too tough to write up.
Option Two - Another approach, while certainly more low tech, may be no less effective. After doing a mysqldump and before importing the file, you can do a simple search-and-replace in it. Vim or any other text editor would handle this quite expertly and would allow you to replace 0000-00-00 00:00:00 with 1900-01-01 00:00:00. Because you are almost certainly not going to find situations where you DON'T want that replacement, this could be the easiest option for you. Just throwing it out there!

In my opinion, the simplest way is to generate all the updates:
select
concat('UPDATE ',TABLE_NAME,' SET ',COLUMN_NAME,'=NULL WHERE ',COLUMN_NAME,'=0;')
from information_schema.COLUMNS
where TABLE_SCHEMA = 'DATABASE_NAME' and DATA_TYPE in ('datetime', 'date', 'time');
Just replace DATABASE_NAME with your DB name, and execute all of the generated updates.
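For a hypothetical table orders with a datetime column created_at, one of the generated rows would look like this:
UPDATE orders SET created_at=NULL WHERE created_at=0;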

Alter your table as follows:
ALTER TABLE `test_table`
CHANGE COLUMN `created_dt` `created_dt` date NOT NULL DEFAULT '1900-01-01';
but before altering the table you need to update the existing values, as juergen d said:
update test_table
set created_dt= '1900-01-01'
where created_dt= '0000-00-00'

You can update your table by filtering on the rows where the date equals 0, and you can define a default value for the column.

Preface: you might want to look at the concept of ETL in data warehousing; there are tools that will do simple conversions like this for you, including open-source ones like Kettle/Pentaho.
But this one is easy with any programming language capable of composing SQL queries. I have made an example in Perl, but PHP or Java would also do the job:
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

my $user = 'geheim';
my $pass = 'secret';
my $dbh = DBI->connect("dbi:mysql:host=localhost:database=to_convert:port=3306", $user, $pass)
    or die $DBI::errstr;

# Prints out all the statements needed; they can be reviewed before being executed
my @tables = @{ $dbh->selectall_arrayref("show tables") };
foreach my $tableh (@tables) {
    my $tabname = $tableh->[0];
    my $sth = $dbh->prepare("explain $tabname");
    $sth->execute();
    while (my $colinfo = $sth->fetchrow_hashref) {
        if ($colinfo->{'Type'} =~ /date/i && $colinfo->{'Null'} =~ /yes/i) {
            print("update \`$tabname\` set \`" . $colinfo->{'Field'} . "\` = '1990-01-01' where \`" . $colinfo->{'Field'} . "\` IS NULL; \n");
            print("alter table \`$tabname\` change column \`" . $colinfo->{'Field'} . "\` \`" . $colinfo->{'Field'} . "\` " . $colinfo->{'Type'} . " not null default '1990-01-01'; \n");
        }
    }
}
This does not change anything, but when the database has tables like:
localmysql [localhost]> explain dt;
+-------+------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+------+------+-----+---------+-------+
| a     | date | YES  |     | NULL    |       |
+-------+------+------+-----+---------+-------+
1 row in set (0.00 sec)

localmysql [localhost]> explain tst;
+-------+----------+------+-----+---------+-------+
| Field | Type     | Null | Key | Default | Extra |
+-------+----------+------+-----+---------+-------+
| id    | int(11)  | YES  |     | NULL    |       |
| atime | datetime | YES  |     | NULL    |       |
+-------+----------+------+-----+---------+-------+
2 rows in set (0.00 sec)
it produces the Statements:
update `dt` set `a` = '1990-01-01' where `a` IS NULL;
alter table `dt` change column `a` `a` date not null default '1990-01-01';
update `tst` set `atime` = '1990-01-01' where `atime` IS NULL;
alter table `tst` change column `atime` `atime` datetime not null default '1990-01-01';
This list can then be reviewed and executed as Statements.
Hope that Helps!

As this is for a migration, I would suggest that you simply wrap your tables in views that perform the conversion as you export the data. I have used the concept below when moving data from MySQL to Postgres, which has the same problem.
Each table should be proxied by something like this:
CREATE VIEW migration_mytable AS
  SELECT field1, field2,
    CASE field3
      WHEN '0000-00-00 00:00:00'
      THEN '1900-01-01 00:00:00'
      ELSE field3
    END AS field3
  FROM mytable;
You should be able to write a script that generates this for you from the catalog, in case you have a great deal of tables to take care of.
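A sketch of such a generator, assuming the zero values only occur in datetime/timestamp columns and every other column is passed through unchanged (untested; you may need to raise group_concat_max_len for very wide tables):
SELECT CONCAT(
         'CREATE VIEW migration_', table_name, ' AS SELECT ',
         GROUP_CONCAT(
           IF(data_type IN ('datetime', 'timestamp'),
              CONCAT('CASE WHEN `', column_name, '` = 0 THEN \'1900-01-01 00:00:00\' ELSE `', column_name, '` END AS `', column_name, '`'),
              CONCAT('`', column_name, '`'))
           ORDER BY ordinal_position SEPARATOR ', '),
         ' FROM `', table_name, '`;') AS create_view_stmt
FROM information_schema.columns
WHERE table_schema = DATABASE()
GROUP BY table_name;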
You should then be able to import the data into your SQL Server table (using a bridge like this), and simply run a query like:
INSERT INTO sqlserver.mytable SELECT * FROM mysql.migration_mytable;

Related

How to loop through all the tables on a database to update columns

I'm trying to update a column (in this case, a date) that is present on most of the tables on my database. Sadly, my database has more than 100 tables already created and full of information. Is there any way to loop through them and just use:
UPDATE SET date = '2016-04-20' WHERE name = 'Example'
on the loop?
One painless option would be to create a query which generates the UPDATE statements you want to run on all the tables:
SELECT CONCAT('UPDATE ', a.table_name, ' SET date = "2016-04-20" WHERE name = "Example";')
FROM information_schema.tables a
WHERE a.table_schema = 'YourDBNameHere'
You can copy the output from this query, paste it in the query editor, and run it.
Update:
As @PaulSpiegel pointed out, the above solution might be inconvenient if one is using an editor such as HeidiSQL, because it would require manually copying each record in the result set. Employing a trick with GROUP_CONCAT() gives a single string containing every desired UPDATE query:
SELECT GROUP_CONCAT(t.query SEPARATOR '; ')
FROM
(
SELECT CONCAT('UPDATE ', a.table_name,
' SET date = "2016-04-20" WHERE name = "Example";') AS query,
'1' AS id
FROM information_schema.tables a
WHERE a.table_schema = 'YourDBNameHere'
) t
GROUP BY t.id
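One caveat worth noting: GROUP_CONCAT() silently truncates its result at group_concat_max_len (1024 bytes by default), so with a large number of tables you may want to raise that limit for the session before running the query above:
SET SESSION group_concat_max_len = 1000000;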
You can use the SHOW TABLES command to list all tables in the database. Then you can check whether a column is present in a table with the SHOW COLUMNS command. It can be used this way:
SHOW COLUMNS FROM `table_name` LIKE 'column_name'
If this query returns a result, the column exists and you can perform the UPDATE query on it.
Update
You can check this procedure on sqlfiddle.
CREATE PROCEDURE UpdateTables (IN WhereColumn VARCHAR(10),
                               IN WhereValue VARCHAR(10),
                               IN UpdateColumn VARCHAR(10),
                               IN UpdateValue VARCHAR(10))
BEGIN
  DECLARE Finished BOOL DEFAULT FALSE;
  DECLARE TableName VARCHAR(10);
  DECLARE TablesCursor CURSOR FOR
    SELECT c1.TABLE_NAME
    FROM INFORMATION_SCHEMA.COLUMNS c1
    JOIN INFORMATION_SCHEMA.COLUMNS c2 ON (c1.TABLE_SCHEMA = c2.TABLE_SCHEMA AND c1.TABLE_NAME = c2.TABLE_NAME)
    WHERE c1.TABLE_SCHEMA = DATABASE()
      AND c1.COLUMN_NAME = WhereColumn
      AND c2.COLUMN_NAME = UpdateColumn;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET Finished = TRUE;

  OPEN TablesCursor;
  MainLoop: LOOP
    FETCH TablesCursor INTO TableName;
    IF Finished THEN
      LEAVE MainLoop;
    END IF;
    SET @queryText = CONCAT('UPDATE ', TableName, ' SET ', UpdateColumn, '=', QUOTE(UpdateValue), ' WHERE ', WhereColumn, '=', QUOTE(WhereValue));
    PREPARE updateQuery FROM @queryText;
    EXECUTE updateQuery;
    DEALLOCATE PREPARE updateQuery;
  END LOOP;
  CLOSE TablesCursor;
END
This is just an example of how to iterate through all the tables in a database and perform some action on them. The procedure can be changed according to your needs.
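For the exact update from the question, the call would presumably look like this (parameter order: where-column, where-value, update-column, update-value):
CALL UpdateTables('name', 'Example', 'date', '2016-04-20');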
Assuming you are using MySQL, you can use a Stored Procedure.
This post is very helpful:
Mysql-loop-through-tables

MySQL create view across all prefixed databases' table

I have databases named company_abc, company_xyz, etc. Those company_* databases all have the same structure and each contains a users table.
What I need to do is aggregate all the users data from the company_* databases and replicate this view to another server. The view would just be something like
COMPANY NAME | USERNAME
abc          | user@email.com
abc          | user1@email.com
xyz          | user2@email.com
company3     | user3@email.com
Is something like that possible in MySQL?
The databases are created dynamically, as well as the users so I can't create a view with just a static set of databases.
As you say you want to create the view with dynamic database names, the result you want is not possible in current versions of MySQL.
So you have, for example, the following options:
Option 1
If you want to get the result of all the databases' users tables, you can define a stored procedure that uses a prepared statement. This procedure takes a parameter db_prefix, which in your case is company_%. Basically the procedure selects from information_schema all tables named users whose database name is like the db_prefix parameter value. After that it loops through the results, builds a query string that UNION ALLs the users tables, and executes that query. When building the query string I also add a field called source, so I can identify which database each row is coming from. In my example all the databases use the default collation utf8_unicode_ci.
In this case you can define a procedure, for example getAllUsers:
-- Dumping structure for procedure company_abc1.getAllUsers
DELIMITER //
CREATE DEFINER=`root`@`localhost` PROCEDURE `getAllUsers`(IN `db_prefix` TEXT)
    DETERMINISTIC
    COMMENT 'test'
BEGIN
  DECLARE qStr TEXT DEFAULT '';
  DECLARE cursor_VAL VARCHAR(255) DEFAULT '';
  DECLARE done INTEGER DEFAULT 0;
  DECLARE cursor_i CURSOR FOR
    SELECT DISTINCT (table_schema)
    FROM information_schema.tables
    WHERE table_name = 'users' AND table_schema LIKE db_prefix COLLATE utf8_unicode_ci;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

  OPEN cursor_i;
  read_loop: LOOP
    FETCH cursor_i INTO cursor_VAL;
    IF done = 1 THEN
      LEAVE read_loop;
    END IF;
    IF qStr != '' THEN
      SET qStr = CONCAT(qStr, ' UNION ALL ');
    END IF;
    SET qStr = CONCAT(qStr, ' SELECT *, \'', cursor_VAL, '\' as source FROM ', cursor_VAL, '.users');
  END LOOP;
  CLOSE cursor_i;

  SET @qStr = qStr;
  PREPARE stmt FROM @qStr;
  EXECUTE stmt;
  DEALLOCATE PREPARE stmt;
  SET @qStr = NULL;
END//
DELIMITER ;
Now you can get the all-users result as:
CALL getAllUsers('company_%');
In my example database it results as:
id  name    source
1   User 1  company_abc1
2   User 2  company_abc1
3   User 3  company_abc1
1   User 1  company_abc2
2   User 2  company_abc2
3   User 3  company_abc2
1   User 1  company_abc3
2   User 2  company_abc3
3   User 3  company_abc3
1   User 1  company_abc4
2   User 2  company_abc4
3   User 3  company_abc4
1   User 1  company_abc5
2   User 2  company_abc5
3   User 3  company_abc5
Option 2
If you really, really need a view, then you can modify the first procedure and, instead of executing the select, create a view. For example like this:
-- Dumping structure for procedure company_abc1.createAllUsersView
DELIMITER //
CREATE DEFINER=`root`@`localhost` PROCEDURE `createAllUsersView`(IN `db_prefix` TEXT)
    DETERMINISTIC
    COMMENT 'test'
BEGIN
  DECLARE qStr TEXT DEFAULT '';
  DECLARE cursor_VAL VARCHAR(255) DEFAULT '';
  DECLARE done INTEGER DEFAULT 0;
  DECLARE cursor_i CURSOR FOR
    SELECT DISTINCT (table_schema)
    FROM information_schema.tables
    WHERE table_name = 'users' AND table_schema LIKE db_prefix COLLATE utf8_unicode_ci;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

  OPEN cursor_i;
  read_loop: LOOP
    FETCH cursor_i INTO cursor_VAL;
    IF done = 1 THEN
      LEAVE read_loop;
    END IF;
    IF qStr != '' THEN
      SET qStr = CONCAT(qStr, ' UNION ALL ');
    END IF;
    SET qStr = CONCAT(qStr, ' SELECT *, \'', cursor_VAL, '\' as source FROM ', cursor_VAL, '.users');
  END LOOP;
  CLOSE cursor_i;

  SET @qStr = CONCAT('CREATE OR REPLACE VIEW allUsersView AS ', qStr);
  PREPARE stmt FROM @qStr;
  EXECUTE stmt;
  DEALLOCATE PREPARE stmt;
  SET @qStr = NULL;
END//
DELIMITER ;
In this stored procedure we create/replace a view called allUsersView, so basically every time you execute this procedure it will update the view.
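It is called the same way as the first procedure, e.g.:
CALL createAllUsersView('company_%');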
In my test case it creates a view like this:
CREATE OR REPLACE VIEW `allusersview` AS
SELECT *, 'company_abc1' as source FROM company_abc1.users
UNION ALL SELECT *, 'company_abc2' as source FROM company_abc2.users
UNION ALL SELECT *, 'company_abc3' as source FROM company_abc3.users
UNION ALL SELECT *, 'company_abc4' as source FROM company_abc4.users
UNION ALL SELECT *, 'company_abc5' as source FROM company_abc5.users ;
And now you can use the view:
SELECT * FROM allusersview
And the result is the same as in the first option.
All tested on MySQL 5.6.16.
To find the list of database names:
SELECT SCHEMA_NAME
FROM information_schema.`SCHEMATA`
WHERE SCHEMA_NAME LIKE 'company%';
If you can code in something like PHP, the rest is pretty easy -- build a UNION of SELECTs from each database. But, if you must do it just in SQL...
To build the UNION, write a Stored Procedure. It will do the above query in a CURSOR. Inside the loop that walks through the cursor, CONCAT() a constructed SELECT onto a UNION you are building.
When the loop is finished, PREPARE and EXECUTE the constructed UNION. That will deliver something like the output example you had.
But, if you now need to INSERT the results of that into another server, you should leave the confines of the Stored Procedure and use some other language.
OK, OK, if you must stay in SQL, then you need some setup: Create a "Federated" table that connects to the other server. Now, in your SP, concatenate INSERT INTO fed_tbl in front of the UNION. Then the execute should do the entire task.
If you have trouble with the FEDERATED Engine, you may need to switch to FederatedX in MariaDB.
"The details are left as an exercise to the reader."
I already marked this as duplicate of Mysql union from multiple database tables
(SELECT *, 'abc' as COMPANY_NAME from company_abc.users)
union
(SELECT *, 'xyz' as COMPANY_NAME from company_xyz.users)
union
(SELECT *, 'company3' as COMPANY_NAME from company_company3.users)
...
I think that the only way to do this is to write a stored procedure that reads all the database and table names from information_schema.tables, builds a string like select * from company_abc.users union all select * from company_xyz.users, and then executes the command with a prepared statement: http://dev.mysql.com/doc/refman/5.0/en/sql-syntax-prepared-statements.html

Alternative to using Prepared Statement in Trigger with MySQL

I'm trying to create a MySQL BEFORE INSERT trigger with the following code, which would do what I want if I could find a way to execute the prepared statement generated by the trigger.
Are there any alternative ways to execute prepared statements from within triggers? Thanks
BEGIN
  SET @CrntRcrd = (SELECT AUTO_INCREMENT FROM information_schema.TABLES
                   WHERE TABLE_SCHEMA=DATABASE()
                     AND TABLE_NAME='core_Test');
  SET @PrevRcrd = @CrntRcrd - 1;
  IF (NEW.ID IS NULL) THEN
    SET NEW.ID = @CrntRcrd;
  END IF;
  SET @PrevHash = (SELECT Hash FROM core_Test WHERE Record=@PrevRcrd);
  SET @ClmNms = (SELECT CONCAT('NEW.', GROUP_CONCAT(column_name
                        ORDER BY ORDINAL_POSITION SEPARATOR ',NEW.'), '')
                 FROM information_schema.columns
                 WHERE table_schema = DATABASE()
                   AND table_name = 'core_Test');
  SET @Query = CONCAT("SET @Query2 = CONCAT_WS(',','", @PrevHash, "','", @CrntRcrd, "',", @ClmNms, ");");
  PREPARE stmt1 FROM @Query;
  EXECUTE stmt1;
  DEALLOCATE PREPARE stmt1;
  SET NEW.Hash = @Query2;
END
UPDATE / CLARIFICATION: The data will be stored in a table as below.
+------------+-----+------+----------------+
| Record (AI)| ID | Data | HASH |
+------------+-----+------+----------------+
| 1 | 1 | ASDF | =DHFBGKJSDFHBG | (Hash Col 1)
| 2 | 2 | NULL | =UEGFRYJKSDFHB | (Hash Col 1 + Col 2)
| 3 | 1 | VBNM | =VKJSZDFVHBFJH | (Hash Col 2 + Col 3)
| 4 | 4 | TYUI | =KDJFGNJBHMNVB | (Hash Col 3 + Col 4)
| 5 | 5 | ZXCV | =SDKVBCVJHBJHB | (Hash Col 4 + Col 5)
+------------+-----+------+----------------+
On each insert, the table generates a Hash value for that row by appending the previous row's Hash value to a CONCAT() of the entire new row, then re-hashing the whole string. This creates a running record of hash values for auditing purposes / use in another part of the application.
My constraints are that this has to be done before the INSERT as rows cannot be updated afterwards.
UPDATE: I'm currently using the following code until I can find a way to pass the column names to CONCAT dynamically:
BEGIN
  SET @Record = (
    SELECT AUTO_INCREMENT FROM information_schema.TABLES
    WHERE TABLE_SCHEMA=DATABASE()
      AND TABLE_NAME='core_Test' # <--- UPDATE TABLE_NAME HERE
  );
  SET @PrevRecrd = @Record - 1;
  IF (new.ID IS NULL) THEN
    SET new.ID = @Record;
  END IF;
  SET @PrevHash = (
    SELECT Hash FROM core_Test # <--- UPDATE TABLE_NAME HERE
    WHERE Record=@PrevRecrd
  );
  SET new.Hash = SHA1(CONCAT_WS(',', @PrevHash, @Record,
    /* --- UPDATE TABLE COLUMN NAMES HERE (EXCLUDE "new.Record" AND "new.Hash") --- */
    new.ID, new.Name, new.Data
  ));
END
The short answer is that you can't use dynamic SQL in a TRIGGER.
I'm confused by the query of the auto_increment value, and assigning a value to the ID column. I don't understand why you need to set the value of the ID column. Isn't that the column that is defined as the AUTO_INCREMENT? The database will handle the assignment.
It's also not clear that your query is guaranteed to return unique values, especially when concurrent inserts are run. (I've not tested, so it might work.)
But the code is peculiar.
It looks as if what you're trying to accomplish is to get the value of a column from the most recently inserted row. I think there are some restrictions on querying the same table the trigger is defined on. (I know for sure there is in Oracle; MySQL may be more liberal.)
If I needed to do something like that, I would try something like this:
SELECT @prev_hash := t.hash AS prev_hash
FROM core_Test t
ORDER BY t.ID DESC LIMIT 1;
SET NEW.hash = @prev_hash;
But again, I'm not sure this will work (I would need to test). If it works on a simple case, that's not proof that it works all the time, in the case of concurrent inserts, in the case of an extended insert, et al.
I wrote the query the way I did so that it can make use of an index on the ID column to do a reverse scan operation. If it doesn't use the index, I would try rewriting the query (probably as a JOIN) to get the best possible performance.
SELECT @prev_hash := t.hash AS prev_hash
FROM ( SELECT r.ID FROM core_Test r ORDER BY r.ID DESC LIMIT 1 ) s
JOIN core_Test t
  ON t.ID = s.ID
Excerpt from MySQL 5.1 Reference Manual E.1 Restrictions on Stored Programs
SQL prepared statements (PREPARE, EXECUTE, DEALLOCATE PREPARE) can be
used in stored procedures, but not stored functions or
triggers. Thus, stored functions and triggers cannot use
dynamic SQL (where you construct statements as strings and then
execute them).
[sic]

MySQL sorting table by column names

I have already built a table with field names in arbitrary order. I want those field names to be in alphabetical order so that I can use them in my dropdown list. Is it possible with a query?
Select columns from a specific table using INFORMATION_SCHEMA.COLUMNS and sort alphabetically with ORDER BY:
SELECT column_name
FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_schema = '[schemaname]'
AND table_name = '[tablename]'
ORDER BY column_name
Note: The following code will alter the specified table and reorder its columns alphabetically.
This should do the trick. It's a bit messy and lengthy, and you'll have to change the database name and table name, but the only requirement is that there is a database named "test" and that you run these commands in it:
Let's create the tables we need:
-- CREATE TESTING TABLE IN A DATABASE NAMED "test"
DROP TABLE IF EXISTS alphabet;
CREATE TABLE alphabet (
d varchar(10) default 'dee' not null
, f varchar(21)
, e tinyint
, b int NOT NULL
, a varchar(1)
, c int default '3'
);
-- USE A COMMAND STORAGE TABLE
DROP TABLE IF EXISTS loadcommands;
CREATE TABLE loadcommands (
id INT NOT NULL AUTO_INCREMENT
, sqlcmd VARCHAR(1000)
, PRIMARY KEY (id)
);
Now let's create the two stored procedures required for this to work:
I'm separating them because one is responsible for loading the commands, and including a cursor that immediately works with them isn't feasible (at least for me and my MySQL version):
-- PROCEDURE TO LOAD COMMANDS FOR REORDERING
DELIMITER //
CREATE PROCEDURE reorder_loadcommands ()
BEGIN
  DECLARE limitoffset INT;
  SET @rank = 0;
  SET @rankmain = 0;
  SET @rankalter = 0;

  SELECT COUNT(column_name) INTO limitoffset
  FROM INFORMATION_SCHEMA.COLUMNS
  WHERE table_schema = 'test'
    AND table_name = 'alphabet';

  INSERT INTO loadcommands (sqlcmd)
  SELECT CONCAT(t1.cmd, t2.position) AS commander FROM (
    SELECT @rankalter:=@rankalter+1 AS rankalter, CONCAT('ALTER TABLE '
      , table_name, ' '
      , 'MODIFY COLUMN ', column_name, ' '
      , column_type, ' '
      , CASE
          WHEN character_set_name IS NOT NULL
          THEN CONCAT('CHARACTER SET ', character_set_name, ' COLLATE ', collation_name, ' ')
          ELSE ' '
        END
      , CASE
          WHEN is_nullable = 'NO' AND column_default IS NULL
          THEN 'NOT NULL '
          WHEN is_nullable = 'NO' AND column_default IS NOT NULL
          THEN CONCAT('DEFAULT \'', column_default, '\' NOT NULL ')
          WHEN is_nullable = 'YES' THEN 'DEFAULT NULL '
        END
      ) AS cmd
      , column_name AS columnname
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE table_schema = 'test'
      AND table_name = 'alphabet'
    ORDER BY columnname
  ) t1
  INNER JOIN (
    SELECT @rankmain:=@rankmain+1 AS rownum, position FROM (
      SELECT 0 AS rownum, 'FIRST' AS position
        , '' AS columnname
      UNION
      SELECT @rank:=@rank+1 AS rownum, CONCAT('AFTER ', column_name) AS position
        , column_name AS columnname
      FROM INFORMATION_SCHEMA.COLUMNS
      WHERE table_schema = 'test'
        AND table_name = 'alphabet'
      ORDER BY columnname
      LIMIT limitoffset
    ) inner_table
  ) t2 ON t1.rankalter = t2.rownum;
END//
DELIMITER ;
If anyone thinks/sees that I'm missing any important column attributes in the ALTER command, please don't hesitate to mention it! Now to the next procedure. This one just executes the commands, following the order of the id column in the loadcommands table:
-- PROCEDURE TO RUN EACH REORDERING COMMAND
DELIMITER //
CREATE PROCEDURE reorder_executecommands ()
BEGIN
  DECLARE sqlcommand VARCHAR(1000);
  DECLARE isdone INT DEFAULT FALSE;
  DECLARE reorderCursor CURSOR FOR
    SELECT sqlcmd FROM loadcommands ORDER BY id;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET isdone = TRUE;

  OPEN reorderCursor;
  read_loop: LOOP
    FETCH reorderCursor INTO sqlcommand;
    IF isdone THEN
      LEAVE read_loop;
    END IF;
    SET @sqlcmd = sqlcommand;
    PREPARE stmt FROM @sqlcmd;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
  END LOOP read_loop;
  CLOSE reorderCursor;
END//
DELIMITER ;
The SQL is long, so if someone can point out ways (and has tested them) to make this shorter, I'd gladly use them, but for now this at least works on my end. I also didn't need to put dummy data in the alphabet table. Checking the results can be done with SHOW CREATE TABLE.
The last part:
-- TO TEST; AFTER RUNNING DDL COMMANDS:
SHOW CREATE TABLE alphabet; -- SEE ORIGINAL ORDER
CALL reorder_loadcommands(); -- PREPARE COMMANDS
CALL reorder_executecommands(); -- RUN COMMANDS
SHOW CREATE TABLE alphabet; -- SEE NEW ORDER
Perhaps later on I could make reorder_loadcommands dynamic and accept table and schema parameters, but I guess this is all for now..

How do I delete blank rows in Mysql?

I have a table with more than 100000 rows, but there are almost 350 blank rows within it. How do I delete these blank rows using phpMyAdmin? Deleting them manually is a tedious task.
The general answer is:
DELETE FROM table_name WHERE some_column = '';
or
DELETE FROM table_name WHERE some_column IS NULL;
See: http://dev.mysql.com/doc/refman/5.0/en/delete.html
More info when you post your tables!
Also, be sure to do:
SELECT * FROM table_name WHERE some_column = '';
before you delete, so you can see which rows you are deleting! I think in phpMyAdmin you can even just do the select and then "select all" and delete, but I'm not sure. This would be pretty fast, and very safe.
I am doing the MySQL operation in the command prompt on Windows. The basic queries:
delete from table_name where column=''
and
delete from table_name where column='NULL'
don't work (the second one compares against the literal string 'NULL', not a real NULL). I don't know whether they work in the phpMyAdmin SQL command builder. Anyway:
delete from table_name where column is NULL
works fine.
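The reason the quoted variants fail is that NULL never compares equal to anything, not even to the string 'NULL'; a quick way to see that:
SELECT NULL = 'NULL';   -- NULL (not true), so such a WHERE clause never matches
SELECT NULL IS NULL;    -- 1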
I have a PHP script that automatically removes empty rows based on column data types.
That allows me to define "emptiness" differently for different column types.
e.g.
table
first_name (varchar) | last_name (varchar) | some_qty ( int ) | other_qty (decimal)
DELETE FROM `table` WHERE
(`first_name` IS NULL OR `first_name` = '')
AND
(`last_name` IS NULL OR `last_name` = '')
AND
(`some_qty` IS NULL OR `some_qty` = 0)
AND
(`other_qty` IS NULL OR `other_qty` = 0)
Since "0" values are meaningless in my system, I count them as empty. But I found out that if you do (first_name = 0) then you will always get true, because strings always == 0 in MySQL. So I tailor the definition of "empty" to the data type.
This procedure will delete any row in which all columns are NULL, ignoring the primary key column that may be set as an ID. I hope it helps you.
DELIMITER //
CREATE PROCEDURE DeleteRowsAllColNull(IN tbl VARCHAR(64))
BEGIN
  SET @tbl = tbl;
  SET SESSION group_concat_max_len = 1000000;
  SELECT CONCAT('DELETE FROM `', @tbl, '` WHERE ',
                (REPLACE(GROUP_CONCAT(CONCAT('`', COLUMN_NAME, '` is NULL')), ',', ' AND ')), ';')
  FROM INFORMATION_SCHEMA.COLUMNS
  WHERE table_name = @tbl AND COLUMN_KEY NOT LIKE 'PRI'
  INTO @delete_all;
  PREPARE delete_all FROM @delete_all;
  EXECUTE delete_all;
  DEALLOCATE PREPARE delete_all;
END //
DELIMITER ;
Execute the procedure like this.
CALL DeleteRowsAllColNull('your table');
I know this has already been answered and has an accepted answer, but I wrote a small function for doing this and thought it might be useful to other people.
I call my function with an array so that I can use the same function for different tables.
$tableArray = array("Address", "Email", "Phone"); // These are the column names
$this->deleteBlankLines("tableName",$tableArray);
and here is the function which takes the array and builds the delete string
private function deleteBlankLines($tablename, $columnArray){
    $Where = "";
    foreach($columnArray as $line):
        $Where .= "(`".$line."`=''||`".$line."` IS NULL) && ";
    endforeach;
    $Where = rtrim($Where, '&& ');
    $query = "DELETE FROM `{$tablename}` WHERE ".$Where;
    $stmt = $this->db->prepare($query);
    $stmt->execute();
}
You can use this function for multiple tables. You just need to send in a different table name and array and it will work.
My function checks for a whole row of empty or NULL columns at the same time. If you don't need it to check for NULL, you can remove that part.