Converting column type in MySQL

I had to bring in a whole bunch of tables from CSV files. A lot of these files had columns that should be INT but contained null values. To speed up the import I just made all of the columns VARCHAR. Now I have all this data in the tables but need to change the types. I'm able to do this in MySQL Workbench except for one problem: it errors because of the null/blank values. Is there some sort of SQL magic that will let me convert these column types and either ignore the nulls or replace them with the correct 'null value' for that data type?

You can update the columns to set blank fields as NULL as follows:
UPDATE mytable SET mycolumn=NULL WHERE TRIM(mycolumn)='';
Then do your normal table alters as follows:
ALTER TABLE mytable MODIFY mycolumn INT DEFAULT NULL;
The 'DEFAULT NULL' is optional, as columns allow NULL by default. This should let you convert the columns to whatever data types you wish without any problem, except where a column holds mixed data -- such as both numbers and strings -- and you wish to make that column FLOAT.
The above example also does not take whitespace such as carriage returns into account: if a column contains only "\n" or "\r\n", the TRIM above will not set it to NULL, but you can adjust the TRIM expression to meet those requirements if you have them, e.g.:
UPDATE mytable SET mycolumn=NULL WHERE TRIM('\n' FROM mycolumn)='';
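Putting the two steps together, a minimal sketch (assuming a table `mytable` with a VARCHAR column `mycolumn` that holds integer data):

```sql
-- Blank out empty strings first so the type change doesn't choke on them
UPDATE mytable SET mycolumn = NULL WHERE TRIM(mycolumn) = '';

-- Then convert; any remaining non-numeric values would still fail in strict mode
ALTER TABLE mytable MODIFY mycolumn INT DEFAULT NULL;
```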

Related

MySQL AES_DECRYPT of mediumblob column with charset latin1

I have a database with three tables, each of which has a column that is AES_ENCRYPTed.
One table has a column which is a VARCHAR(255) with charset latin1. It contains some text encrypted.
Applying AES_DECRYPT to an UNHEX of the column value shows the real value properly.
Another table has a column which is a mediumblob with charset utf8. It contains some large text encrypted.
Applying just the AES_DECRYPT and then casting it to char displays the original value properly too.
But the 3rd table has a column which is also a mediumblob, with charset latin1. It contains large stringified JSON data.
Now when I apply just AES_DECRYPT('<column_name>', '') it outputs NULL. Applying UNHEX or casting the column to other types before the decryption did not help.
For the first 2 tables, just applying AES_DECRYPT without the conversion also outputs something.
But the 3rd table does not output anything; it just shows NULL.
Any idea what is happening here? It would be very helpful if someone with DB expertise could point me in the right direction as to why the output is NULL and what needs to be done in the query to get the real output.
EDIT:
The DB columns are populated by a microservice which uses Java Hibernate's ColumnTransformer for the writes and reads:
write = AES_ENCRYPT(, ), read = AES_DECRYPT(, )
The values posted this way are also returned properly in the GET response. But the same query does not output the 3rd column's value and prints NULL as described.
With this structure and these commands:
UPDATE encryption SET `encrypted` = AES_encrypt(CONVERT(`json` USING latin1), "key");
UPDATE encryption SET `decrypted` = AES_decrypt(encrypted, "key");
This works well for me.
However, blobs don't have character sets...
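As a diagnostic sketch (table, column, and key names are placeholders): AES_DECRYPT returns NULL whenever the ciphertext or key doesn't match, so checking the stored ciphertext length and forcing an explicit charset interpretation of the decrypted bytes can help narrow down where the mismatch happens:

```sql
-- AES ciphertext is always a multiple of 16 bytes; if this isn't 0,
-- the stored bytes were mangled somewhere (e.g. by a charset conversion)
SELECT id, LENGTH(encrypted) % 16 AS len_mod_16
FROM third_table;

-- If decryption succeeds, interpret the plaintext bytes explicitly as latin1
SELECT CONVERT(AES_DECRYPT(encrypted, 'key') USING latin1)
FROM third_table;
```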

mysql - Invalid date error when creating stored generated column

I am trying to add a generated column for an existing table.
Within the table, there is a varchar column containing data like 321njkfvds_10911342
If I add the generated column as VIRTUAL, it works well.
ALTER TABLE my_table
ADD COLUMN `PartitionKey` INT
GENERATED ALWAYS AS (IFNULL(TO_DAYS(SUBSTRING_INDEX(`Sequence`, '_', -1)), 0)) VIRTUAL
AFTER `Sequence`;
But if I try to add it as STORED generated column, it fails.
ALTER TABLE my_table
ADD COLUMN `PartitionKey` INT
GENERATED ALWAYS AS (IFNULL(TO_DAYS(SUBSTRING_INDEX(`Sequence`, '_', -1)), 0)) STORED
AFTER `Sequence`;
Error Code: 1292
Incorrect datetime value: '10911342'
I know 10911342 is not a valid date, but at least the generated column is 0 when VIRTUAL is specified.
So why can't I add the generated column as STORED when VIRTUAL works? Is there some way to fix it?
Version: 10.3.27-MariaDB-log
Generated (Virtual and Persistent/Stored) Columns :: Making Stored Values Consistent
...
When a generated column is PERSISTENT or indexed, the value of the expression needs to be consistent regardless of the SQL Mode flags in the current session. If it is not, then the table will be seen as corrupted when the value that should actually be returned by the computed expression and the value that was previously stored and/or indexed using a different sql_mode setting disagree.
...
See dbfiddle.
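One possible workaround (a sketch, not tested here; the `%Y%m%d` format is an assumption about what the `Sequence` suffix encodes) is to parse the suffix with STR_TO_DATE, which returns NULL for invalid dates rather than raising an error, before passing it to TO_DAYS. Whether MariaDB accepts this expression for a STORED column still depends on it being treated as sql_mode-independent, so verify on your version:

```sql
-- STR_TO_DATE yields NULL for strings that don't parse as dates,
-- so IFNULL can still map them to 0 without tripping strict mode
ALTER TABLE my_table
ADD COLUMN `PartitionKey` INT
GENERATED ALWAYS AS (
  IFNULL(TO_DAYS(STR_TO_DATE(SUBSTRING_INDEX(`Sequence`, '_', -1), '%Y%m%d')), 0)
) STORED
AFTER `Sequence`;
```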

Update a single BLOB column for a specific row with a Null or any other value using a sql syntax

I have 2 identical tables (same column count, names, and settings).
I have 1 identical row on each table, identical information.
One of the columns is a BLOB type column and contains an Image (bmp).
Both rows / tables have an id column, auto increment, so id's are identical for every row in both tables.
The column set to blob type CAN be NULL (it's set).
I'm using an action that has the following query's:
dbmodule.arhivaQuery.SQL.Clear;
dbmodule.arhivaQuery.SQL.Add('UPDATE `database_name`.`2nd_table_name` SET `column_name`=deleted WHERE `id`=''' + inttostr(dbmodule.comenziDataSetid.Value) + ''';');
dbmodule.arhivaQuery.ExecSql(true);
This should theoretically update the row in the 2nd table by removing the bmp from the blob column, or rather, replacing the bmp with the word "deleted". It doesn't, the image is still in the column / row.
A few things to clarify:
dbmodule is the name of a data module that contains the dataset, data source, sql connection and query. (TSimpleDataSet, TDataSource, TSQLConnection, TSQLQuery).
arhivaQuery is the name of the query I'm using (a TSQLQuery)
column_name = name of the column, I've edited in this paste so you can get a clearer picture.
You notice at the end I use the id of the row from the 1st table to change the data in the 2nd, so that's why that is there (ids are identical for the row in both tables).
When I execute this it should keep both rows in both tables but update just the row in the 2nd table by removing its image from the blob column.
So after this I should have row in 1st table with the image in the blob column and same row in the 2nd table with no image in the blob column.
I'm guessing my sql syntax is wrong (got some warnings saying so too), can anyone correct it for me please?
If you want to clear the content of a blob field, assign it the value NULL.
dbmodule.arhivaQuery.SQL.Add('UPDATE `database_name`.`2nd_table_name` SET `column_name` = NULL WHERE `id`=''' + inttostr(dbmodule.comenziDataSetid.Value) + ''';');
If you want to display deleted for those columns that have no value, do that in your SELECT statement when retrieving the content using IFNULL() or COALESCE(), whichever your DBMS supports.
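For the display side, a minimal sketch (database, table, and column names are placeholders matching the question):

```sql
-- Show the literal string 'deleted' wherever the BLOB has been NULLed out
SELECT id,
       IFNULL(`column_name`, 'deleted') AS column_name
FROM `database_name`.`2nd_table_name`;
```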
An additional improvement you could make (both for coding ease and prevention of SQL injection) is to stop concatenating your SQL and switch to using parameters. It also means you can stop with all of the ''' double/triple/quadruple quoting nonsense and data type conversions, because the DB driver will take care of all of that for you.
dbmodule.arhivaQuery.SQL.Add('UPDATE `database_name`.`2nd_table_name` SET `column_name` = NULL WHERE `id`= :ID;');
// Use AsInteger, AsString, AsFloat, or AsBoolean, whichever fits your
// column data type. Notice no quotes, no call to IntToStr or FloatToStr.
dbmodule.arhivaQuery.ParamByName('ID').AsInteger := dbmodule.comenziDataSetid.Value;
NOTE: Some DB drivers will need Params.ParamByName instead. If one doesn't work, the other will.
Finally, break your long lines of SQL into manageable pieces so you can stop all of the scrolling around to read it.
dbmodule.arhivaQuery.SQL.Add('UPDATE `database_name`.`2nd_table_name`');
dbmodule.arhivaQuery.SQL.Add('SET `column_name` = NULL WHERE `id`= :ID;');
dbmodule.arhivaQuery.ParamByName('ID').AsInteger := dbmodule.comenziDataSetid.Value;

Convert MySQL to postgres and retain default value

I have a MySQL database that I wish to convert to Postgres. One issue I encountered is converting tinyint(1) columns (a synonym for boolean in MySQL) into true booleans while retaining the MySQL column's default value, which can be either 0 or 1, whereas the respective Postgres values are true and false. The SQL I'm trying:
ALTER TABLE "payments" ALTER COLUMN "is_automatic" TYPE boolean USING CAST("is_automatic" as boolean);
The error message:
ERROR: default for column "is_automatic" cannot be cast automatically to type boolean
I would think it would be possible to cast this value somehow. Is this possible to do or do I have to manually add this to the migration script?
Edit: I realise I might have explained the issue a bit vaguely, sorry about that. I am using this script (https://github.com/lanyrd/mysql-postgresql-converter) to convert the MySQL database. The values are converted into true Postgres booleans by the script just fine, but the columns that were originally booleans in MySQL (represented by tinyint(1)) get their default value dropped. This happens on row 157 in the script, and removing the "DROP DEFAULT" part of the command generates the error above, because the default can't be cast (I guess). My question is better asked this way: in the process of converting a tinyint(1) column, can the default value be "remembered" and later applied again with a "SET DEFAULT" command?
The postgresql ALTER TABLE reference page has an example exactly covering this scenario:
.. when the column has a default expression that won't automatically
cast to the new data type:
ALTER TABLE foo
ALTER COLUMN foo_timestamp DROP DEFAULT,
ALTER COLUMN foo_timestamp TYPE timestamp with time zone
USING
timestamp with time zone 'epoch' + foo_timestamp * interval '1 second',
ALTER COLUMN foo_timestamp SET DEFAULT now();
So, you need to drop the old default, alter the type, then add the new default.
Note that the USING expression does not have any bearing on the default. It is purely used to convert existing values in the table. But in any case, there is no direct cast between integer and boolean, so you need a slightly more advanced USING expression.
Your statement might look like this:
ALTER TABLE payments
ALTER COLUMN is_automatic DROP DEFAULT,
ALTER COLUMN is_automatic TYPE BOOLEAN
USING is_automatic!=0,
ALTER COLUMN is_automatic SET DEFAULT TRUE;
The using expression might need a little tweaking, I am assuming here that your existing data has a value of 0 for false and something else for true.
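To "remember" the old default before dropping it, one option (a sketch; table and column names are placeholders) is to read it from information_schema on the Postgres side after the initial import, then re-apply the translated value:

```sql
-- Look up the current default expression for the column (e.g. '1')
SELECT column_default
FROM information_schema.columns
WHERE table_name = 'payments' AND column_name = 'is_automatic';

-- Then map 0 -> false, 1 -> true when re-applying it after the type change
ALTER TABLE payments
  ALTER COLUMN is_automatic DROP DEFAULT,
  ALTER COLUMN is_automatic TYPE boolean USING is_automatic != 0,
  ALTER COLUMN is_automatic SET DEFAULT true;
```

A migration script would do that mapping automatically; done by hand, it is one lookup and one ALTER per boolean column.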

Changing over from double data type to decimal

I have a database that currently uses the double data type, and I want to change it to the decimal data type, as I've heard that double can't really be trusted for storing monetary data because it's approximate.
So I'm just wondering: should I expect any issues if I just change the data type? Will everything convert OK with no loss of data?
Yes, you do have a risk of losing data unless planned carefully, but as long as you ensure that your data is already consistent and you choose correct length for the field you should be fine. Consider the following:
ALTER TABLE yourtable MODIFY COLUMN yourcolumn DECIMAL;
This will convert yourcolumn into DECIMAL but it'll result in DECIMAL(10,0) which is practically integer column. If you go ahead and convert to DECIMAL(10,2) instead:
ALTER TABLE yourtable MODIFY COLUMN yourcolumn DECIMAL(10,2);
You will lose everything beyond the 2nd decimal place. For example, the value 10.025 will be rounded to 10.03. If all your values already have only two decimal positions, you should be fine.
All above holds for MySQL 5.5.25
One safe way to do it:
alter table your_table add column `test_col` decimal(8,2);
Then copy the existing values to the new col as
update `your_table` set test_col = `col_double_datatype`;
Then check that the data was copied properly; if it looks good, drop col_double_datatype and rename test_col to the name you were using.
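The final drop-and-rename step might look like this (a sketch; `RENAME COLUMN` requires MySQL 8.0+, on older versions use `CHANGE` with the full column definition instead):

```sql
-- Remove the old double column, then give the decimal copy its name
ALTER TABLE your_table DROP COLUMN col_double_datatype;
ALTER TABLE your_table RENAME COLUMN test_col TO col_double_datatype;
```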
None of the above solutions worked for me on MySQL 8.
My solution (with a decimal(14, 4) field):
UPDATE your_table SET test_col = cast((col_double_datatype * 1000000 / 1000000) as decimal(14, 4));