InnoDB working some of the time but not others - MySQL

I'm in the process of swapping over a database for a rewrite of my program and part of that is writing both a conversion script and a script to create new tables.
I'm renaming tables, changing indexes, and generally altering most of the tables in some way; part of that is changing from MyISAM to InnoDB.
The conversion script works flawlessly but the script to create new tables falls over at a specific point.
Query:
create table team_resources (
    amount double not null default 0,
    resource int unsigned not null default 0,
    team int unsigned not null default 0,
    primary key (resource, team)
) ENGINE = InnoDB;
I get error 121, which is the error given when a table cannot be created. The script is run from a Python file, but I get the same error in my SQL client and in phpMyAdmin, both with the raw query and with the table wizard form.
The tables all converted to InnoDB just fine, so I'm stumped as to why it has issues creating new ones. This query works if I take out the InnoDB part.
Any suggestions?

Bug 26507 sheds some light on this. Looks like creating/dropping tables isn't quite atomic.
One option is to do a mysqldump and try loading into a freshly installed database.
Another way to handle this is described at the end of Bug 17546, but you should verify the issue is with the .frm file.
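For reference, the shape of that workaround looks roughly like this (a sketch; the scratch database name, the dummy definition, and the filesystem copy step are illustrative, not quoted from the bug report):

-- errno 121 usually means InnoDB's data dictionary still holds an
-- entry under that table name, e.g. left over from a non-atomic
-- CREATE/DROP. Look for "orphaned table" messages in:
SHOW ENGINE INNODB STATUS;

-- Along the lines of Bug 17546: create a same-named table in a
-- scratch database, copy its .frm file over the broken one on disk,
-- then drop the table so the dictionary entry is cleared.
CREATE DATABASE IF NOT EXISTS scratch;
CREATE TABLE scratch.team_resources (dummy INT) ENGINE = InnoDB;
-- (filesystem step: copy scratch/team_resources.frm into the original
--  database's data directory, then:)
DROP TABLE your_db.team_resources;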

I'm able to run that statement fine on a MySQL 5.0.32 install. It may be a bug that's been fixed.
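If it helps to rule the version in or out, you can check what the failing server runs with:

SELECT VERSION();  -- compare against the 5.0.32 install mentioned above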

Related

Sync SQL Binary column to MySQL table

I'm attempting to use a piece of software (Layer2 Cloud Connector) to sync a local SQL table (Sage software) to a remote MySQL database, where the data is used in reports generated via the company's web app. We are doing this with about 12 tables, and have been doing so for almost two years without any issues.
Background:
I'm using a simple piece of software that uses a SELECT statement to sync records from one table to another using ODBC, in this case from SQL (SQLTable) to MySQL (MySQLTable). To do so, the software requires a SELECT statement for each table, a PK field, and, being ODBC-based, a provider. For SQL I'm using the Actian Zen 4.5 driver, and for MySQL the MySQL ODBC 5.3 driver.
Here is a screenshot of what the setup screen looks like for each of the tables. I have omitted the other column names that I'm syncing to make the SELECT statement more readable. The other columns are primarily varchar or int types.
Problem
For unrelated reasons, we must now sync a new table. Like most of the other tables, it has a primary key column named rGUID of type binary. When initially setting up the other tables, I tried to sync the primary key as a binary type to a MySQL binary column, but it failed when attempting to verify the SELECT statement on the SQLServer side with the error “Cannot remove this column, because it is a part of the constraint Constraint1 on the table SQLTable”.
Example of what I see for the GUID/rGUID primary key values stored in the SQLTable via Access, or in MySQL after syncing as string:
¡狻➽䪏蚯㰛蓪
Ҝ諺䖷ᦶ肸邅
ब惈蠷䯧몰吲론�
ॺ䀙㚪䄔麽骧⸍薉
To get around this, I use CAST in the SQLTable SELECT statement to CAST the binary value as a string using: CAST(GUID as nchar(8)) as GUID, and then set up the MySQL column as a VARCHAR(32) using utf8_general_ci collation.
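For concreteness, the receiving side of that setup would look roughly like this (a sketch; GUID and MySQLTable are the names used in the question, the rest is illustrative):

-- MySQL column for the CAST'ed key, as described above.
CREATE TABLE MySQLTable (
    GUID VARCHAR(32) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL,
    -- OtherColumns ...
    PRIMARY KEY (GUID)
);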
This has worked great for every other table since we originally set this up. But this additional table has considerably more records (about 120,000 versus 5,000-10,000), and though I’m able to sync 10,000 – 15,000 successfully, when I try to sync the entire table I get about 10-12 errors such as:
The metabase record 'd36d2dbe-fa89-4712-be4c-6b212367004b' is marked to be added. The table 'SQLTable' does not contain a corresponding row. Changes made to this metabase record will be reset to the initial state.
I don't understand what is causing the above error or how to work past it.
What I’ve tried so far:
I've confirmed the SQLTable has no other unique fields that could be used as the PK in place of the rGUID column.
I've tried using different type, length, and collation settings on the MySQL table, and have had mixed success, but ultimately still get errors when attempting to sync the entire table.
I've also tried tweaking the CAST settings for the SQL SELECT statement, but nchar(8) seems to work best for the other tables.
I've tried syncing using HASHBYTES('SHA1', GUID) as GUID and syncing the value of that, but get the below ODBC error.
I was thinking perhaps I could convert the SQL GUID to its value, then sync that as a varchar (or a binary), but my attempts at using CONVERT in the SQLTable SELECT statement have failed.
Settings I used for all the other tables:
SQL SELECT Statement: SELECT CAST(GUID as nchar(8)) as GUID, OtherColumns FROM SQLTable;
MySQL SELECT Statement: SELECT GUID, OtherColumns FROM MySQLTable;
Primary Key Field: GUID
Primary Key Field Type: String
MySQL Column Type/Collation: VARCHAR(32), utf8_general_ci
Any help or suggestions at all would be great. I've been troubleshooting this in my spare time for a couple of weeks now, and have not had much success. I'm not particularly familiar with the binary type, and am hoping someone might have an idea on how I might be able to successfully sync this SQL table to MySQL without these errors.
Given the small size of the datasets involved I would select as CHAR(36) from SQL Server and store in a CHAR(36) in MySQL.
If you are able to control the way the data is inserted by Layer2 Cloud Connector then you could set your MySQLTable GUID column as BINARY(16) -
SELECT CAST(GUID AS CHAR(36)) AS GUID, OtherColumns FROM SQLTable;
INSERT INTO MySQLTable (GUID) VALUES (UUID_TO_BIN(GUID));
SELECT BIN_TO_UUID(GUID) AS GUID, OtherColumns FROM MySQLTable;
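One caveat: UUID_TO_BIN() and BIN_TO_UUID() only exist from MySQL 8.0 onward. On an older server the same round trip can be written with UNHEX/HEX (a sketch; the UUID literal is the one from the error message above, and GUID is assumed to be BINARY(16)):

INSERT INTO MySQLTable (GUID)
VALUES (UNHEX(REPLACE('d36d2dbe-fa89-4712-be4c-6b212367004b', '-', '')));

SELECT LOWER(CONCAT_WS('-',
    HEX(SUBSTR(GUID,  1, 4)),
    HEX(SUBSTR(GUID,  5, 2)),
    HEX(SUBSTR(GUID,  7, 2)),
    HEX(SUBSTR(GUID,  9, 2)),
    HEX(SUBSTR(GUID, 11, 6)))) AS GUID
FROM MySQLTable;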

MySQL auto-increment column has different behaviour with different databases

I have a working project in PHP and MySQL,
in which I use the same syntax for all my auto-increment columns, like below -
CREATE TABLE `mytable` (
    `ID` int(11) NOT NULL AUTO_INCREMENT,
    `sometext` varchar(255) NOT NULL,
    PRIMARY KEY (`ID`)
)
And for inserting records into this table, throughout my whole project I use the syntax below -
INSERT INTO mytable(ID,sometext)
VALUES(0,'Sometext')
And this is working fine.
But when I copied the same DB and project, this code stopped working.
So I changed my insert to the one below:
INSERT INTO mytable( sometext)
VALUES( 'Sometext')
But this is very weird... in the previous project the old syntax works fine, but for the new one I would have to make code changes in hundreds of places.
Can somebody tell me what's wrong with the new MySQL DB that it stopped supporting the old syntax?
The difference is probably that your new database server has the configuration option sql_mode=NO_AUTO_VALUE_ON_ZERO. With that mode set, only NULL (not 0) causes an auto-increment value to be generated.
Read https://dev.mysql.com/doc/refman/8.0/en/sql-mode.html for full explanation of how sql modes affect your database server.
To avoid having to make code changes, you can change the server option.
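A quick way to confirm and undo it (a sketch; the replacement mode list is illustrative, copy your server's current list minus the flag):

-- See which modes the new server runs with:
SELECT @@GLOBAL.sql_mode;

-- Set the list explicitly without NO_AUTO_VALUE_ON_ZERO; also update
-- my.cnf so the change survives a restart.
SET GLOBAL sql_mode = 'STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION';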

Need readable pure SQL equivalent of SQL Closures code

Have you heard of SQL Closures, or of any library that implements them?
They allow you to execute this script in a SQL command window (or to put it into a stored procedure):
exec closure,"
rec{select db=name from sys.databases where name like 'corp_'},{
use |db|
rec{select tbl=name from sys.tables where name like 'user_'},{
for{col},{Created,Modified},{
def_col {
|tbl|.|col| datetime not null default(getdate()) ix
}
}
def_col {|tbl|.deleted datetime ix}
}
}
"
This script will make sure that indexed Created (not null), Modified (not null), and Deleted columns exist in all tables with the prefix user_ in all databases with the prefix corp_.
def_col will create a new column or alter an existing column to match the desired definition. It will also create/recreate a non-unique ascending index for each of these columns.
def_col will drop and recreate dependencies as needed (constraints, indexes, foreign keys, schema-bound views and functions).
rec, for, and def_col will catch errors and log them into an error table or raise immediately, depending on context options, for easy debugging and tracking of errors during script execution should they happen.
As you can see, the script can be executed many times without failures; it's just that the second time it will not change anything.
Is there a more readable, supportable, and compact way to achieve the same functionality in MS-SQL?
If yes - please post an example in your answer.
Is a more readable, supportable, and compact way available in MySQL, Oracle, or other major flavors of the SQL language?
I do not see any reason you could not create simple SQL Server groups in SSMS, register your servers against those groups, and run your DDL from there. You could also do it with SQLCMD or PowerShell.
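For comparison, the per-database body of such a script can be written in plain T-SQL, roughly like this (a sketch covering one column only; names follow the question, and the outer loop over corp_ databases is omitted):

-- Build an ALTER for every user_ table missing the column, then run
-- the batch; re-running it changes nothing, much like def_col above.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'
IF COL_LENGTH(''' + s.name + N'.' + t.name + N''', ''Created'') IS NULL
    ALTER TABLE ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name) +
    N' ADD Created datetime NOT NULL DEFAULT (getdate());'
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE t.name LIKE 'user[_]%';

EXEC sp_executesql @sql;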

Export DB Tables via phpMyAdmin In Non-Alphabetical Order

I have a MySQL database from a Joomla MultiSite installation where it has a set of tables with different prefixes for each Joomla site. When I export the db via phpMyAdmin it creates a SQL file where the tables are created and populated in alphabetical order. The problem is that the tables for the slave sites have dependencies on the tables for the master site, but alphabetically their prefixes are ahead of the master site. So the export works fine but when I try importing I get error after error and have to manually move sections around in the SQL file to make sure that the dependent tables are created/populated first.
So, is it possible to export a db via phpMyAdmin with the tables in a specific order?
EDIT: Here's the error I'm getting which should clarify things:
Error
SQL query:
--
-- Dumping data for table `j1_content_rating`
--
-- --------------------------------------------------------
--
-- Table structure for table `j1_core_acl_aro`
--
CREATE ALGORITHM = UNDEFINED DEFINER = `bookings_bpjms`@`localhost` SQL SECURITY DEFINER VIEW `bookings_bpjms`.`j1_core_acl_aro` AS SELECT `bookings_bpjms`.`js0_core_acl_aro`.`id` AS `id` , `bookings_bpjms`.`js0_core_acl_aro`.`section_value` AS `section_value` , `bookings_bpjms`.`js0_core_acl_aro`.`value` AS `value` , `bookings_bpjms`.`js0_core_acl_aro`.`order_value` AS `order_value` , `bookings_bpjms`.`js0_core_acl_aro`.`name` AS `name` , `bookings_bpjms`.`js0_core_acl_aro`.`hidden` AS `hidden`
FROM `bookings_bpjms`.`js0_core_acl_aro` ;
MySQL said:
#1146 - Table 'bookings_bpjms.js0_core_acl_aro' doesn't exist
The js0_ portions of the import script come after the j1_ portions, and so this error occurs. If I edit this file in a text editor (30+ megs and growing every day) I can find the js0_ portions and move them to the top, but this is tedious, time consuming and error prone.
Is the problem foreign key checks (in which case a SET FOREIGN_KEY_CHECKS=0 at the start of the file should work), or is the problem simply importing in a live environment?
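If it is the former, the fix is just two lines added around the dump's contents (note this only helps with foreign keys; it will not help with the CREATE VIEW in the error above, since a view's base tables must exist when the view is defined):

SET FOREIGN_KEY_CHECKS = 0;
-- ... the dump's CREATE TABLE / INSERT statements ...
SET FOREIGN_KEY_CHECKS = 1;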
With mysqldump it seems the tables are dumped in the order you give them in (if you specify tables instead of just a database), but this is undocumented behavior as far as I know and hence should not be relied upon.

Resetting AUTO_INCREMENT on MyISAM without rebuilding the table

Please help, I am in major trouble with our production database. I accidentally inserted a key with a very large value into an auto-increment column, and now I can't seem to change this value without a huge rebuild time.
ALTER TABLE tracks_copy AUTO_INCREMENT = 661482981
Is super-slow.
How can I fix this in production? I can't get this to work either (has no effect):
myisamchk tracks.MYI --set-auto-increment=661482982
Any ideas?
Basically, no matter what I do I get an overflow:
SHOW CREATE TABLE tracks
CREATE TABLE tracks (
...
) ENGINE=MYISAM AUTO_INCREMENT=2147483648 DEFAULT CHARSET=latin1
After struggling with this for hours, I was finally able to resolve it. The auto_increment info for MyISAM is stored in TableName.MYI; see state->auto_increment in http://forge.mysql.com/wiki/MySQL_Internals_MyISAM. So fixing that file was the right way to go.
However, myisamchk definitely has an overflow bug somewhere in the update_auto_increment function or what it calls, so it does not work for large values -- or rather if the current value is already > 2^31, it will not update it (source file here -- http://www.google.com/codesearch/p?hl=en#kYwBl4fvuWY/pub/FreeBSD/distfiles/mysql-3.23.58.tar.gz%7C7yotzCtP7Ko/mysql-3.23.58/myisam/mi_check.c&q=mySQL%20%22AUTO_INCREMENT=%22%20lang:c)
After discovering this, I ended up just using "xxd" to dump the MYI file into a hexfile, edit around byte 60, and replace the auto_increment value manually in the hexfile. "xxd -r" then restores the binary file from the hex file. To discover exactly what to edit, I just used ALTER TABLE on much smaller tables and looked at the effects using diffs. No fun, but it worked in the end. There seems to be a checksum in the format, but it seems to be ignored.
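One way to make the server pick up the patched value and verify it (a sketch; run while nothing is writing to the table):

FLUSH TABLES tracks;               -- make mysqld reread the .MYI header
SHOW TABLE STATUS LIKE 'tracks';   -- Auto_increment should now show the
                                   -- patched value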
Have you dropped the record with the very large key? I don't think you can change the auto_increment to a lower value if that record still exists.
From the docs on myisamchk:
Force AUTO_INCREMENT numbering for new records to start at the given value (or higher, if there are existing records with AUTO_INCREMENT values this large)
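A sketch of the suggestion above, dropping the oversized record before resetting (the column name id and the cutoff value are assumptions for illustration):

SELECT MAX(id) FROM tracks;                     -- locate the runaway key
DELETE FROM tracks WHERE id >= 661482981;       -- remove the accidental row(s)
ALTER TABLE tracks AUTO_INCREMENT = 661482981;  -- now the reset can take effect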