Ensure MySQL table charset & collation

Situation: there is a table managed by application A. Application A inserts and updates data in the table throughout the day. Once per week it DROPs the table, recreates it, and re-inserts all the data.
Problem: application A creates the table as utf8. Application B, which relies on this table, requires it to be ascii_bin. I did not design either application, nor can I modify their requirements.
What's needed: a way to ensure that the table is in ascii_bin. I considered writing a script, run via cron, which would check the current charset and set it if needed. Is there a better way of achieving this?
Since ALTER is one of the statements that causes an implicit COMMIT, I do not believe it is possible to do this in a trigger after INSERT or UPDATE.

You can set ascii_bin as the default charset for your database schema. All tables created in that schema will then use this charset, unless you explicitly specify another one.
Refer to the MySQL documentation on how to set the default charset: http://dev.mysql.com/doc/refman/5.0/en/charset-database.html
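For example, a minimal statement assuming a schema named mydb (hypothetical name); note that ascii_bin is a collation of the ascii character set:
ALTER DATABASE mydb CHARACTER SET ascii COLLATE ascii_bin;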

See SET NAMES at http://dev.mysql.com/doc/refman/5.0/en/charset-connection.html
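For illustration, SET NAMES changes the connection character set (what the client sends and receives), not the table definition itself:
SET NAMES 'ascii' COLLATE 'ascii_bin';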

MySQL Proxy might be a solution here.
You can rewrite the CREATE statement as it goes through the proxy.
Alternatively, maybe you could remove privileges from Application A so it can't drop the table.
An ALTER statement that makes no changes is essentially ignored, so if the conversion to ascii_bin is run multiple times it costs the server very little. Putting it in cron, or in an existing stored procedure that Application B calls, or something else clever, isn't so bad.
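As a concrete illustration, the idempotent statement such a scheduled job would run might look like this (schema and table names are placeholders):
ALTER TABLE mydb.managed_table CONVERT TO CHARACTER SET ascii COLLATE ascii_bin;
-- under strict SQL mode this errors out if existing rows contain non-ASCII data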

Related

Making sure that a table is constructed correctly

I have a database schema and a web application. I want the web application to be able to select, insert and remove rows in a table, but the table may not exist (for example in a testing environment), and the table may be missing columns, most likely because the web application has been updated.
I want to make sure that the table is ready to accept the data the web application sends to it for as long as the application is running.
The idea I had is that the application (written in Java) would have the table structure embedded in it, and on startup it would copy all of the data in the table (if it exists) to a temporary table, delete the old table, create a new one from the temporary table's data, and then drop the temporary table. As you can tell, it's nowhere near innovative.
Another idea I had is to use the SHOW COLUMNS command to correct any missing columns, together with SHOW TABLES LIKE to check whether the table exists, but I feel like Stack Overflow would have a better solution. Is that all I can do?
There are many ways to solve the problem of keeping the database version consistent with the version of the application.
In a production database, however, this situation is unacceptable.
I think the simplest ways are the best.
To ensure such compliance, it is enough to execute a script that updates the database before running the tests.
START TRANSACTION;
DROP TABLE IF EXISTS ...;
CREATE TABLE ...;
COMMIT;
Remember to use IF EXISTS and to have the DROP privilege!
Such a script can easily be managed by placing it in version control (e.g. RCS) and tracking the version number the application needs.
You can also save this version number in a table in the database itself and, when the application starts, check whether the number matches the expected one; if it does not, call the database update script.
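A minimal sketch of that version check (table and column names are hypothetical):
CREATE TABLE IF NOT EXISTS schema_version (version INT NOT NULL);
-- at application startup:
SELECT version FROM schema_version;
-- if the stored version is lower than the one the application expects,
-- run the migration script before serving requests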
Have a look at JPA and Hibernate. There is an hbm2ddl.auto property; the "update" option looks like it does what you want.
For more details, see:
What are the possible values of the Hibernate hbm2ddl.auto configuration and what do they do
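For illustration, the setting goes in the Hibernate configuration, e.g. a hibernate.properties line:
hibernate.hbm2ddl.auto=update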

How to convert rds mysql latin1 to utf8

I want to convert our production database from latin1 to utf8.
We are using Amazon RDS MySQL.
Please provide step-by-step instructions. Will there be any downtime?
ALTER DATABASE database_name CHARACTER SET = utf8mb4 COLLATE = utf8mb4_unicode_ci;
ALTER TABLE table_name CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
I use the queries above to convert each table.
Is this a good way? Do I need to do it one table at a time, or is there a way to do it in one step?
Changes on a production server are always critical. Many considerations need to be weighed before taking the final decision. The first question is: is this really a show-stopper, a deadlock situation, or a performance issue? If, with all that considered, you decide this is a must, take some precautions when making the changes:
Step-1: Take a full database backup.
Step-2: Make sure the backup is restorable. Make several copies of the backup and keep them on different servers. This will help you restore your old data in case of any accidental data loss.
Step-3: Make the changes on a development server first. Check that your application performs as before without any issues, especially where it accesses the changed area.
Step-4: Check that all database objects (stored procedures, functions) that use the table still perform as expected.
Step-5: It is better if you can engage a QA resource before making changes in the LIVE environment.
Step-6: If all of the above goes fine, you can go ahead with the LIVE changes.
Step-7: Engage the QA resource again to make sure the LIVE changes were also applied without any issues.
Note: No significant downtime is required, but it is always best to stop application access to the database while the changes run. This ensures no data is affected by inserts/edits/deletes from the application during the conversion.
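On the "one step" part of the question: rather than typing the ALTER for every table, a common approach is to generate the statements from information_schema (a sketch; mydb is a placeholder schema name):
SELECT CONCAT('ALTER TABLE `', table_name,
              '` CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;')
FROM information_schema.tables
WHERE table_schema = 'mydb' AND table_type = 'BASE TABLE';
-- run the statements this query prints, e.g. by piping the output back into the mysql client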

SQL Server issue

I hope this is not off-topic, but I have a real problem that I could use some advice on.
I have an application that upgrades its own SQL Server database (from previous versions) on startup. Normally this works well, but a new version has to alter several nvarchar column widths.
On live databases with large amounts of data in the table this is taking a very long time. There appear to be two problems: one is that SQL Server seems to be processing the data (possibly rewriting it), even though it isn't actually being changed, and the other is that the transaction log gobbles up a huge amount of space.
Is there any way to circumvent this issue? It's only a plain ALTER TABLE ... ALTER COLUMN command, changing nvarchar(x) to nvarchar(x+n), nothing fancy, but it is causing an 'issue' and much dissatisfaction in the field. If there were a way of changing the column width without processing the existing data, and somehow suppressing the transaction log activity, that would be handy.
It doesn't seem to be a problem with Oracle databases.
An example command:
IF EXISTS (SELECT 1 FROM information_schema.COLUMNS WHERE table_name='ResourceBookings' AND column_name = ('ResourceBookerKey1') AND character_maximum_length <= 50)
ALTER TABLE [ResourceBookings] ALTER COLUMN [ResourceBookerKey1] NVARCHAR(80) NULL
As you can see, the table is only changed if the column width needs to be increased.
TIA
Before upgrading, make sure the SQL Server database's recovery model is set to "Simple". In SSMS, right-click the database, select Properties, and then click the Options page. Record the "Recovery model" value. Set the recovery model to "Simple" if it is not already (I assume it is set to Full).
Then run the upgrade. Afterwards, you can restore the value to what it was.
Alternatively, you can script it with something like this:
Before upgrade:
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;
After upgrade:
ALTER DATABASE MyDatabase SET RECOVERY FULL;
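If you script the switch, you can also record the current setting first instead of looking it up in SSMS (sys.databases is the standard catalog view):
SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'MyDatabase';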

Managing mysql schema changes with SQL scripts and transactions

I am working with multiple databases in a PHP/MySQL application. I have development, testing, staging and production databases to keep in sync.
Currently we're still building the thing, so it's easy to keep them in sync: I use my dev db as the master, and when I want to update the others I just nuke them and recreate them from mine. However, in the future, once there's real data, I can't do this.
I would like to write SQL scripts as text files that I can version in svn alongside the PHP changes that accompany them, then apply the scripts to each db instance as I update it.
I would like to use transactions so that if there are any errors during the script, it rolls back any partial changes made. All tables are InnoDB.
When I try to add a column that already exists, plus one new column, like this:
SET FOREIGN_KEY_CHECKS = 0;
START TRANSACTION;
ALTER TABLE `projects` ADD COLUMN `foo1` varchar(255) NOT NULL after `address2`;
ALTER TABLE `projects` ADD COLUMN `foo2` varchar(255) NOT NULL after `address2`;
COMMIT;
SET FOREIGN_KEY_CHECKS = 1;
... it still commits the new column even though it failed to add the first one, of course, because I issued COMMIT instead of ROLLBACK.
I need it to issue the ROLLBACK command conditionally upon error. How can I do this in an ad hoc SQL script?
I know of the DECLARE ... EXIT HANDLER feature of stored procedures, but I don't want to store this; I just want to run it as an ad hoc script.
Do I need to make it into a stored procedure anyway in order to get conditional rollbacks, or is there another way to make the whole transaction atomic in a single ad hoc SQL script?
Any links to examples are welcome - I've googled but am only finding stored procedure examples so far.
Many thanks
Ian
EDIT - This is never going to work; ALTER TABLE causes an implicit commit when encountered: http://dev.mysql.com/doc/refman/5.0/en/implicit-commit.html Thanks to Brian for the reminder.
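To make the failure mode concrete, here is a minimal sketch of what actually happens with the script above (using the same column names):
START TRANSACTION;
ALTER TABLE `projects` ADD COLUMN `foo2` varchar(255) NOT NULL AFTER `address2`; -- DDL: the open transaction is implicitly committed here
ROLLBACK; -- has no effect; the new column is already permanent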
I learned the other day that data definition language statements always take effect immediately in MySQL and cause any open transaction to be committed when they are applied. I think you'll probably have to do this interactively if you want to be sure of success.
I can't find the question on this website where this was discussed (it was only a couple of days ago).
If you need to keep multiple databases in synch, you could look into replication. Although replication isn't to be trifled with, it may be what you need. See http://dev.mysql.com/doc/refman/5.0/en/replication-features.html

Data Corruption with MediumText and ASP/MySQL

I have a website written in ASP with a MySQL database. ASP uses the ODBC 5.1 driver to connect to the database. Inside the database there is a varchar(8000) column (the length started small, but the application has evolved A LOT since its conception). Anyway, it recently became evident that the varchar column should be changed into a MEDIUMTEXT column. I made the change and everything appeared to be alright. However, whenever I run an UPDATE statement, the data in that column for that specific row gets corrupted. Due to the nature of the website, I am unable to provide data or example queries, but the queries do not use any functions or anything; just a straight UPDATE.
Everything works fine with the varchar, but blows up when I make the field a MEDIUMTEXT. The corruption I'm talking about looks like this:
ٔڹ���������������ߘ����ߘ��������
Any ideas?
Have you checked encodings (ASP + HTML + DB)? Are you using UTF-8?
You're not using UTF-8, and that text is not English, right?
You might have hit a version-specific bug. I searched for "mysql alter table mediumtext corruption" and there were some bugs specifically having to do with code pages and non-latin1 character sets.
Your best bet is to conduct a survey of the table, comparing it against a backup. If this is a MyISAM table, you might want to recreate the table with the CHECKSUM option enabled. What does a CHECK TABLE tell you? If an ALTER TABLE isn't working for you, you could consider splitting the mediumtext field into its own table, or duplicating the table contents using a variation of an INSERT ... SELECT:
CREATE TABLE b LIKE a;
ALTER TABLE b MODIFY something MEDIUMTEXT;
INSERT INTO b SELECT * FROM a LIMIT x,1000;
-- now check those 1000 rows --
By inserting a few rows at a time and then checking them, you might be able to tease out what kind of input isn't converting well.
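For reference, the integrity checks mentioned above are one-liners (a is the table from the sketch):
CHECK TABLE a EXTENDED;
CHECKSUM TABLE a;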
Check dmesg and syslog output to see if you've got RAM or disk issues. I have seen table corruption occur due to ECC errors, bad RAID controllers, bad sectors and faulty network transmission. You might attempt the ALTER TABLE on a comparable machine and see if it checks out.