Data Corruption with MediumText and ASP/MySQL - mysql

I have a website written in ASP with a MySQL database. ASP connects to the database using the ODBC 5.1 driver. Inside the database there is a varchar(8000) column (the length started small, but the application has evolved A LOT since its conception). Anyway, it recently became evident that the varchar column should be changed into a MEDIUMTEXT column. I made the change and everything appeared alright. However, whenever I do an UPDATE statement, the data in that column for that specific row gets corrupted. Due to the nature of the website, I am unable to provide data or example queries, but the queries are not using any functions or anything; just a straight UPDATE.
Everything works fine with the varchar, but blows up when I make the field a MEDIUMTEXT. The corruption I'm talking about is as follows:
ٔڹ���������������ߘ����ߘ��������
Any ideas?

Have you checked encodings (ASP+HTML+DB)? Using UTF8?
Not using UTF8, and that text is not English, right?
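If it helps, the character-set chain on the MySQL side can be inspected like this (a rough sketch; "mytable" stands in for the real table name):
-- connection/server/database character sets and collations
SHOW VARIABLES LIKE 'character_set%';
SHOW VARIABLES LIKE 'collation%';
-- per-column character set and collation for the affected table
SHOW CREATE TABLE mytable;
SHOW FULL COLUMNS FROM mytable;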

You might have a version-specific bug. I searched for "mysql alter table mediumtext corruption" and there were some bugs specifically having to do with code pages and non-latin1 character sets.
Your best bet is to conduct a survey of the table, comparing it against a backup. If this is a MyISAM table, you might want to recreate the table with the CHECKSUM option enabled. What does a CHECK TABLE tell you? If an ALTER TABLE isn't working for you, you could consider partitioning the mediumtext field into its own table, or duplicating the table contents using a variation of an INSERT...SELECT:
CREATE TABLE b LIKE a;
ALTER TABLE b MODIFY something MEDIUMTEXT;
INSERT INTO b SELECT * FROM a LIMIT x,1000;
-- now check those 1000 rows --
By inserting a few rows at a time and then checking them, you might be able to tease out what kind of input isn't converting well.
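For the CHECK TABLE and CHECKSUM suggestions above, a minimal sketch against the original table a might look like this:
CHECK TABLE a EXTENDED;
CHECKSUM TABLE a;
-- MyISAM only: maintain a live per-row checksum on the table
ALTER TABLE a CHECKSUM = 1;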
Check dmesg and syslog output to see if you've got ram or disk issues. I have seen table corruptions occur due to ECC errors, bad raid controllers, bad sectors and faulty network transmission. You might attempt the ALTER TABLE on a comparable machine and see if it checks out.

Related

Converting varbinary to longblob in MySQL

We are storing data in an InnoDB table with a varbinary column. However, our data size requirement has grown to over 1 MB, so I converted the column to longblob.
alter table mytable modify column d longblob;
Everything seems to be working as expected after I converted the column. However, I'd like to know from people who have done this before whether anything more is required beyond converting the column as shown above, especially:
Are there any MySQL/MariaDB version-specific issues with longblob that I should take care of? There is no index on the column.
We use mysqldump to take regular backups. Do we need to change anything, since the blob storage mechanism seems to be different from varbinary?
Any other precautions/suggestions?
Thank you for your guidance
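Not an authoritative answer, but two things commonly checked after this kind of conversion are that max_allowed_packet is large enough for the biggest value you read or write, and that mysqldump is run with --hex-blob so binary values survive the dump/reload round trip. A rough sanity check might look like this ("id" is an assumed primary-key column):
-- confirm the new column definition
SHOW CREATE TABLE mytable;
-- the server must accept packets at least as large as the biggest blob
SHOW VARIABLES LIKE 'max_allowed_packet';
-- spot-check that existing values survived the conversion intact
SELECT id, LENGTH(d), MD5(d) FROM mytable LIMIT 10;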

MySQL - Create Index Statement prints "[HY000][1880] TIME/TIMESTAMP/DATETIME columns of old format have been upgraded to the new format"

My database has a MyISAM table on a MySQL server using 5.6.41-log.
I created a composite index on the table that references a varchar column and a datetime column:
create index ix_orders_region_date on orders (region_code, order_date);
Upon execution, I received the following error message:
[HY000][1880] TIME/TIMESTAMP/DATETIME columns of old format have been upgraded to the new format.
The documentation states that this error pertains to the symbol ER_OLD_TEMPORALS_UPGRADED, but there is no further explanation.
I've established the following:
The create index statement succeeded and the index is being used by queries.
SHOW FULL PROCESSLIST doesn't show anything out of the ordinary and the application appears to be functioning.
There is nothing suspect in the error log.
My questions are:
Is it possible that the create index statement broke something?
If so, how do I diagnose what, if anything, is wrong?
The temporal datatypes (TIME, DATETIME, and TIMESTAMP) support fractional seconds starting from MySQL 5.6.4, which means their storage encoding is slightly different from what it was in older versions. Basic operations continue to work on old-format temporal columns, and they're automatically upgraded to the new format when a table containing them is rebuilt, for example by an ALTER TABLE (including ALTER TABLE ... FORCE) or a CREATE INDEX issued against it.
Historically I’ve not seen any issues where this automatic process results in data corruption or alteration. That said, you may want to look at updating all of the tables in the database to use the newer data type, as these changes came about almost a decade ago. By updating the definitions, you’ll reduce the risk of unexpected failures later if you upgrade to a much newer version of MySQL that rejects the older format.
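If you do decide to rebuild the remaining tables proactively, one approach (a sketch, not the only way) is to force a null rebuild, which rewrites old-format temporal columns in the new format as a side effect:
-- rebuild the table in place; old-format TIME/DATETIME/TIMESTAMP columns
-- are upgraded to the new format during the rebuild
ALTER TABLE orders FORCE;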

ERROR #1406: Data too long for column, but both source and destination columns are of type char(1)

I have three databases that are all meant to be the same but unfortunately run different versions of MySQL (not my decision, and I'm unable to change that currently). Only on MariaDB 10.4 do I get an error when trying to send all data from a view into a historical table.
INSERT INTO
`destination`.historical_table
SELECT * FROM
`source`.daily_table
There are no triggers and the flagged column has the same datatype CHAR(1) on both tables.
The source table is actually a view; could that be the problem? It works on my other two DBs...
UPDATE: So this was actually my own fault, but it's an interesting issue that MariaDB 10.4.10 caught and MySQL 5.6.33 and MariaDB 10.1.38 did not.
The table definitions were not identical; two columns were switched between source and destination, causing the failure. On the other databases, however, the values were merely truncated to the declared size (thus inserting incorrect data as well).
More my fault than anything else but interesting and something to take note of nonetheless.
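As a guard against exactly this, listing the columns explicitly on both sides of the INSERT ... SELECT removes the dependence on column order (the column names below are made up for illustration):
INSERT INTO `destination`.historical_table (order_id, status_flag)
SELECT order_id, status_flag
FROM `source`.daily_table;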

Sql Server issue

I hope this is not off-topic, but I have a real problem that I could use some advice on.
I have an application that upgrades its own SQL Server database (from previous versions) on startup. Normally this works well, but a new version has to alter several nvarchar column widths.
On live databases with a large amount of data in the table this takes a very long time. There appear to be two problems: one is that SQL Server seems to be processing the data (possibly rewriting it), even though it isn't actually being changed, and the other is that the transaction log gobbles up a huge amount of space.
Is there any way to circumvent this issue? It's only a plain Alter Table... Alter Column command, changing nvarchar(x) to nvarchar(x+n), nothing fancy, but it is causing an 'issue' and much dissatisfaction in the field. If there was a way of changing the column width without processing the existing data, and somehow suppressing the transaction log stuff, that would be handy.
It doesn't seem to be a problem with Oracle databases.
An example command:
IF EXISTS (SELECT 1 FROM information_schema.COLUMNS WHERE table_name='ResourceBookings' AND column_name = ('ResourceBookerKey1') AND character_maximum_length <= 50)
ALTER TABLE [ResourceBookings] ALTER COLUMN [ResourceBookerKey1] NVARCHAR(80) NULL
As you can see, the table is only changed if the column width needs to be increased.
TIA
Before upgrading, make sure the SQL Server database's Recovery Model is set to "Simple". In SSMS, right-click the database, select Properties, and then click the Options page. Record the "Recovery Model" value. Set the Recovery Model to "Simple" if it's not already (I assume it's set to FULL).
Then run the upgrade. After the upgrade, you can restore the value back to what it was.
Alternately you can script it with something like this:
Before upgrade:
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;
After upgrade:
ALTER DATABASE MyDatabase SET RECOVERY FULL;

Ensure MySQL table charset & collation

Situation: there's a table that is managed by application A. Application A inserts and updates data in the table throughout the day. Once per week it DROPs the table, recreates it, and inserts all data.
Problem: application A creates the table as utf8. Application B, which relies on this table, requires it to be ascii_bin. I did not design either application, nor do I have access to modify their requirements.
What's needed: a way to ensure that the table is in ascii_bin. I considered writing a script and running it via cron, which would check the current charset and fix it if needed. Is there a better way of achieving this?
Since ALTER is one of the statements that causes an implicit COMMIT, I do not believe it is possible to do it as part of a trigger after INSERT or UPDATE.
You can set ascii (with the ascii_bin collation) as the default character set for your database schema. Then all tables will get this charset and collation when created, unless you explicitly specify another one.
Refer to MySQL documentation on how to set the default charset: http://dev.mysql.com/doc/refman/5.0/en/charset-database.html
See SET NAMES at http://dev.mysql.com/doc/refman/5.0/en/charset-connection.html
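For completeness, a sketch of setting the schema default (ascii_bin is a collation of the ascii character set, so both are named; "mydb" is a placeholder):
ALTER DATABASE mydb DEFAULT CHARACTER SET ascii DEFAULT COLLATE ascii_bin;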
MySQL proxy might be a solution here.
You can rewrite the create statement when it goes through the proxy.
Alternatively, maybe you could remove privileges from Application A so it can't drop the table.
An ALTER statement that makes no changes is basically ignored, so if the conversion to ascii_bin is run multiple times it's not going to be much effort for the server. Putting it in cron, or in an existing stored procedure that Application B calls, or something else clever, isn't so bad.
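A minimal sketch of what such a scheduled check-and-fix could run ("mydb" and "mytable" are placeholders):
-- report the table's current default collation
SELECT TABLE_COLLATION
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb' AND TABLE_NAME = 'mytable';
-- if it is not ascii_bin, convert the table definition and existing columns
ALTER TABLE mydb.mytable CONVERT TO CHARACTER SET ascii COLLATE ascii_bin;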