SQL Server issue - sql-server-2008

I hope this is not off-topic, but I have a real problem that I could use some advice on.
I have an application that upgrades its own SQL Server database (from previous versions) on startup. Normally this works well, but a new version has to alter several nvarchar column widths.
On live databases with large amounts of data in the table, this is taking a very long time. There appear to be two problems: one is that SQL Server seems to be processing the data (possibly rewriting it), even though it isn't actually being changed, and the other is that the transaction log gobbles up a huge amount of space.
Is there any way to circumvent this issue? It's only a plain ALTER TABLE ... ALTER COLUMN command, changing nvarchar(x) to nvarchar(x+n), nothing fancy, but it is causing an 'issue' and much dissatisfaction in the field. If there were a way to change the column width without processing the existing data, and somehow suppress the transaction log activity, that would be handy.
It doesn't seem to be a problem with Oracle databases.
An example command:
IF EXISTS (SELECT 1 FROM information_schema.COLUMNS WHERE table_name = 'ResourceBookings' AND column_name = 'ResourceBookerKey1' AND character_maximum_length <= 50)
ALTER TABLE [ResourceBookings] ALTER COLUMN [ResourceBookerKey1] NVARCHAR(80) NULL
As you can see, the table is only changed if the column width needs to be increased.
TIA

Before upgrading, make sure the SQL Server database's recovery model is set to Simple. In SSMS, right-click the database, select Properties, and then click the Options page. Record the "Recovery model" value. Set the recovery model to Simple if it's not already (I assume it's set to Full).
Then run the upgrade. After the upgrade, you can restore the value back to what it was.
Alternatively, you can script it with something like this:
Before upgrade:
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;
After upgrade:
ALTER DATABASE MyDatabase SET RECOVERY FULL;
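If you'd rather not record the setting by hand in SSMS, you can read it from the sys.databases catalog view (standard since SQL Server 2005; MyDatabase is the same placeholder as above):
SELECT recovery_model_desc
FROM sys.databases
WHERE name = 'MyDatabase';
-- returns SIMPLE, FULL, or BULK_LOGGED; restore that value after the upgrade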

Related

Run Update statement on information_schema.COLUMNS

A previous DBA made many spurious design decisions when creating the schema for the database that I am now administering. Basically, every column in the database that has a default value is also not nullable. This plays havoc with just about any ORM. I'd like to be able to run an UPDATE statement on the COLUMNS table of the information_schema database and set IS_NULLABLE to 'YES' if the column has a default value, but naturally I don't have access to that table, nor does root. Is this even possible, or do I need to manually alter thousands of columns?
Without root privileges, and with no other way to modify the columns (other than manually touching each one), you could do the following:
get a backup of the database
restore the backup to a new database that you have full access to
make the changes to the newly restored database (see the sketch below)
delete the original
rename the restored database to the original database's name
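For the "make the changes" step, rather than hand-editing thousands of columns, a query like this could generate the ALTER statements for you. This is only a sketch: 'mydb' is a placeholder schema name, and the generated statements ignore details such as column comments and per-column character sets, so review the output before running it:
SELECT CONCAT('ALTER TABLE `', TABLE_NAME, '` MODIFY `', COLUMN_NAME, '` ',
       COLUMN_TYPE, ' NULL DEFAULT ', QUOTE(COLUMN_DEFAULT), ';') AS stmt
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'mydb'
  AND IS_NULLABLE = 'NO'
  AND COLUMN_DEFAULT IS NOT NULL;
-- run the generated statements against the restored copy, then verify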
I would do this a few times in your dev environment first and feel very confident of the changes before doing it in a production environment.
Also, depending on the size of your database this could be quite an expensive action to perform on the database. If it takes hours or days to run then this might not be a viable solution.
Good luck.

Reverse an SQL statement?

I have an SQL file with a lot of creates, alterations, and modifications to a database. If I need to back out at some point (up to a day, maybe) after executing the SQL script, is there an easy way to do that? For example, is there any tool that can read an SQL script and produce a 'rollback' script from it?
I am using SQLyog as well, in case there happen to be any such features built in (I haven't found any).
No, sorry, there are many statements that cannot be reversed from looking at the SQL command.
DROP TABLE (what was in the table that was dropped?)
UPDATE mytable SET timestamp = NOW() (what was the timestamp before?)
INSERT INTO mytable (id) VALUES (NULL) (assuming id is auto-increment, what row was created?)
Many others...
If you want to recover the database from before your day's worth of changes, take a backup before you begin changing it.
You can also do point-in-time recovery using binary logs, to restore the database to any moment since your last backup.
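A point-in-time restore could look something like this from the shell (the log file path, datetime, and database name are placeholders; restore your last full backup first, then replay the binary log up to just before the script ran):
mysqlbinlog --stop-datetime="2009-06-01 08:59:59" /var/log/mysql/mysql-bin.000042 | mysql -u root -p mydb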

Ensure MySQL table charset & collation

Situation: there's a table that is managed by application A. Application A inserts and updates data in the table throughout the day. Once per week it DROPs the table, recreates it, and inserts all data.
Problem: application A creates the table as utf8. Application B, which relies on this table, requires it to be ascii_bin. I did not design either application, nor can I modify their requirements.
What's needed: a way to ensure that the table is in ascii_bin. I considered writing a script and running it via cron, which would check the current charset and set it if needed. Is there a better way of achieving this?
Since ALTER is one of the statements that causes an implicit COMMIT, I do not believe it is possible to do it as part of a trigger after INSERT or UPDATE.
You can set ascii as the default character set (with the ascii_bin collation) for your database schema. Then all created tables will have that charset, unless you explicitly specify another.
Refer to MySQL documentation on how to set the default charset: http://dev.mysql.com/doc/refman/5.0/en/charset-database.html
See SET NAMES at http://dev.mysql.com/doc/refman/5.0/en/charset-connection.html
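A minimal example of setting the default (mydb is a placeholder; note that ascii_bin is a collation of the ascii character set, so both are specified):
ALTER DATABASE mydb DEFAULT CHARACTER SET ascii COLLATE ascii_bin;
-- or at creation time:
CREATE DATABASE mydb DEFAULT CHARACTER SET ascii COLLATE ascii_bin;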
MySQL Proxy might be a solution here.
You can rewrite the create statement when it goes through the proxy.
Alternatively, maybe you could remove privileges from Application A so it can't drop the table.
An ALTER statement that makes no changes is basically ignored. So, if the conversion to ascii_bin is run multiple times, it's not going to be much effort on the server. So, putting it in cron, or an existing stored procedure that Application B calls, or something else clever, isn't so bad.
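A sketch of what the cron'd check could run (schema and table names are placeholders); this mirrors the "check, then set if needed" plan from the question:
-- see what collation the table currently has
SELECT TABLE_COLLATION
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb' AND TABLE_NAME = 'shared_table';
-- if it isn't ascii_bin, convert the table and its character columns
ALTER TABLE mydb.shared_table CONVERT TO CHARACTER SET ascii COLLATE ascii_bin;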

Managing mysql schema changes with SQL scripts and transactions

I am working with multiple databases in a PHP/MySQL application. I have development, testing, staging and production databases to keep in sync.
Currently we're still building the thing, so it's easy to keep them in sync. I use my dev db as the master and, when I want to update the others, I just nuke them and recreate them from mine. However, in the future, once there's real data, I can't do this.
I would like to write SQL scripts as text files that I can version with the PHP changes that accompany them in svn, then apply the scripts to each db instance as I update them.
I would like to use transactions so that if there are any errors during the script, it will roll back any partial changes. All tables are InnoDB.
When I try to add a column that already exists, followed by one new column, like this:
SET FOREIGN_KEY_CHECKS = 0;
START TRANSACTION;
ALTER TABLE `projects` ADD COLUMN `foo1` varchar(255) NOT NULL after `address2`;
ALTER TABLE `projects` ADD COLUMN `foo2` varchar(255) NOT NULL after `address2`;
COMMIT;
SET FOREIGN_KEY_CHECKS = 1;
... it still commits the new column even though it failed to add the first one, of course, because I issued COMMIT instead of ROLLBACK.
I need it to issue the rollback command conditionally upon error. How can I do this in an adhoc SQL script?
I know of the 'declare exit handler' feature of stored procs but I don't want to store this; I just want to run it as an adhoc script.
Do I need to make it into a stored proc anyway in order to get conditional rollbacks, or is there another way to make the whole transaction atomic in a single adhoc SQL script?
Any links to examples welcome - I've googled but am only finding stored proc examples so far
Many thanks
Ian
EDIT - This is never going to work; ALTER TABLE causes an implicit commit when encountered (http://dev.mysql.com/doc/refman/5.0/en/implicit-commit.html). Thanks to Brian for the reminder.
I learned the other day that data definition language (DDL) statements in MySQL are always acted on immediately, and they commit the current transaction when they are applied. I think you'll probably have to do this interactively if you want to be sure of success.
I can't find the question on this website where this was discussed (it was only a couple of days ago).
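You can see the implicit commit in action with the asker's own script pattern (foo3 is a hypothetical new column):
START TRANSACTION;
ALTER TABLE `projects` ADD COLUMN `foo3` varchar(255) NULL; -- DDL: implicitly commits the transaction
ROLLBACK; -- too late; the ALTER is already permanent
SHOW COLUMNS FROM `projects` LIKE 'foo3'; -- the column is still there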
If you need to keep multiple databases in sync, you could look into replication. Although replication isn't to be trifled with, it may be what you need. See http://dev.mysql.com/doc/refman/5.0/en/replication-features.html

Data Corruption with MediumText and ASP/MySQL

I have a website written in ASP with a MySQL database. ASP uses the ODBC 5.1 driver to connect to the database. Inside the database there is a varchar(8000) column (the length started small, but the application has evolved a LOT since its conception). Anyway, it recently became evident that the varchar column should be changed into a MEDIUMTEXT column. I made the change and everything appeared alright. However, whenever I do an UPDATE statement, the data in that column for that specific row gets corrupted. Due to the nature of the website, I am unable to provide data or example queries, but the queries are not using any functions or anything; just a straight UPDATE.
Everything works fine with the varchar, but blows up when I make the field a MEDIUMTEXT. The corruption I'm talking about is as follows:
ٔڹ���������������ߘ����ߘ��������
Any ideas?
Have you checked encodings (ASP + HTML + DB)? Using UTF-8?
Not using UTF-8, and that text is not English, right?
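One way to check the database side (table and column names are placeholders; both statements are standard MySQL):
SHOW CREATE TABLE mytable; -- shows the table default charset and per-column collations
SELECT CHARACTER_SET_NAME, COLLATION_NAME
FROM information_schema.COLUMNS
WHERE TABLE_NAME = 'mytable' AND COLUMN_NAME = 'mycolumn';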
You might have a version specific bug. I searched for "mysql alter table mediumtext corruption" and there were some bugs specifically having to do with code pages and non-latin1 character sets.
Your best bet is to conduct a survey of the table, comparing it against a backup. If this is a MyISAM table, you might want to recreate the table with the CHECKSUM option enabled. What does a CHECK TABLE tell you? If an ALTER TABLE isn't working for you, you could consider splitting the MEDIUMTEXT field into its own table, or duplicating the table contents using a variation of an INSERT ... SELECT:
CREATE TABLE b LIKE a;
ALTER TABLE b MODIFY something MEDIUMTEXT;
INSERT INTO b SELECT * FROM a LIMIT x,1000;
-- now check those 1000 rows --
By inserting a few rows at a time and then checking them, you might be able to tease out what kind of input isn't converting well.
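For the checking step, something along these lines would do (a and b as in the sketch above; the id join column is a hypothetical primary key):
CHECK TABLE b EXTENDED; -- full row-by-row integrity check
-- compare the converted column directly against the original
SELECT a.id FROM a JOIN b USING (id) WHERE a.something <> b.something;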
Check dmesg and syslog output to see if you've got ram or disk issues. I have seen table corruptions occur due to ECC errors, bad raid controllers, bad sectors and faulty network transmission. You might attempt the ALTER TABLE on a comparable machine and see if it checks out.