SQL: cross-DB application maintaining only one generic schema - MySQL

I currently have an application based on MySQL, with an InnoDB schema (with constraints...).
My co-workers need to import this schema, so I export it as SQL files.
For example:
DROP TABLE IF EXISTS `admins`;
CREATE TABLE `admins` (
`id` smallint(5) unsigned NOT NULL AUTO_INCREMENT,
`username` varchar(45) NOT NULL,
`password` varchar(45) NOT NULL,
`email` varchar(45) DEFAULT NULL,
`creation_date` datetime NOT NULL,
`close_date` datetime DEFAULT NULL,
`close_reason` varchar(45) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=4 ;
Now I would like to make the application cross-DB, so:
I tried to import my previous SQL files into PostgreSQL, but it didn't work; my SQL files are MySQL-specific (for example, the use of the ` character...).
I tried to export my schema with mysqldump and the compatibility mode --compatible=ansi. My goal: one generic SQL file compatible with all major DBMSs. But it didn't work: PostgreSQL returns syntax errors.
--compatible=ansi returns:
DROP TABLE IF EXISTS "admins";
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE "admins" (
"id" smallint(5) unsigned NOT NULL AUTO_INCREMENT,
"username" varchar(45) NOT NULL,
"password" varchar(45) NOT NULL,
"email" varchar(45) DEFAULT NULL,
"creation_date" datetime NOT NULL,
"close_date" datetime DEFAULT NULL,
"close_reason" varchar(45) DEFAULT NULL,
PRIMARY KEY ("id")
);
/*!40101 SET character_set_client = @saved_cs_client */;
I even tried to export with --compatible=postgresql:
DROP TABLE IF EXISTS "admins";
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE "admins" (
"id" smallint(5) unsigned NOT NULL,
"username" varchar(45) NOT NULL,
"password" varchar(45) NOT NULL,
"email" varchar(45) DEFAULT NULL,
"creation_date" datetime NOT NULL,
"close_date" datetime DEFAULT NULL,
"close_reason" varchar(45) DEFAULT NULL,
PRIMARY KEY ("id")
);
/*!40101 SET character_set_client = @saved_cs_client */;
But that didn't work either...
I know there are tools to convert a MySQL schema to a PostgreSQL schema, but that isn't the goal...
My question: is it possible to have only one SQL file compatible with MySQL, PostgreSQL, SQLite... and not maintain a SQL file for each DBMS?
Thank you

My question: is it possible to have only one SQL file compatible with MySQL, PostgreSQL, SQLite... and not maintain a SQL file for each DBMS?
Not easily with raw SQL, unless you wish to use a pathetic subset of the databases' supported features.
SELECTs and DML in SQL can be moderately portable, but DDL is generally a hopeless nightmare for all but the total basics. You'll want an abstraction tool that generates the SQL for you, handling database specific differences in sequences/generated keys, type naming, constraints, index creation, etc.
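To make that concrete, here is a hedged sketch of the asker's admins table cut down to a roughly portable subset; note how much has to go:

-- A rough lowest-common-denominator version of the admins table.
-- Lost in translation: AUTO_INCREMENT (every DBMS spells generated
-- keys differently), unsigned and display widths like smallint(5),
-- the ENGINE/CHARSET table options, and backtick quoting.
-- Even TIMESTAMP is shaky: MySQL attaches auto-update semantics to it.
CREATE TABLE admins (
    id            SMALLINT    NOT NULL,
    username      VARCHAR(45) NOT NULL,
    password      VARCHAR(45) NOT NULL,
    email         VARCHAR(45) DEFAULT NULL,
    creation_date TIMESTAMP   NOT NULL,
    close_date    TIMESTAMP   DEFAULT NULL,
    close_reason  VARCHAR(45) DEFAULT NULL,
    PRIMARY KEY (id)
);

This parses on MySQL, PostgreSQL, and SQLite only because it gives up the generated key entirely, which is rarely acceptable.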
As just one example, let's look at auto-incrementing values / sequences, as frequently used for synthetic keys (see the dialect sketch after this list):
MySQL: integer AUTO_INCREMENT
PostgreSQL: SERIAL (shorthand for a sequence)
MS-SQL: int IDENTITY(1,1)
Oracle (below 12c): No direct support, use a sequence.
Oracle (12c and above): NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY
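To make the divergence concrete, a hedged sketch of the same one-column table in each dialect (the table name t is illustrative):

-- MySQL
CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY);
-- PostgreSQL (SERIAL expands to an integer column backed by a sequence)
CREATE TABLE t (id SERIAL PRIMARY KEY);
-- MS-SQL
CREATE TABLE t (id INT IDENTITY(1,1) PRIMARY KEY);
-- Oracle below 12c: no direct support; create a sequence and use
-- t_seq.NEXTVAL in INSERTs (or a trigger)
CREATE TABLE t (id NUMBER PRIMARY KEY);
CREATE SEQUENCE t_seq;
-- Oracle 12c and above
CREATE TABLE t (id NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY PRIMARY KEY);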
And that's just for the very common task of a generated key. Lots of other fun differences exist. For example, MySQL has tinyint and unsigned int; PostgreSQL does not. PostgreSQL has bool, bit(n) bitfields, range types, PostGIS types, etc., which most other DBs don't have. Even for things that are shared, quirks abound: specifying "4-byte signed integer" across all DBs isn't even trivial.
One option to help is Liquibase, which I've heard good things about. Some people instead use an ORM to manage their DDL generation - though those tend to use, again, only the most primitive of database features.

Related

MySQL database import causing issues with special characters (ě ř č ů)

Hi, I recently changed the hosting provider for my website. When doing this, I exported the MySQL database I had in my previous cPanel phpMyAdmin. It had CHARACTER SET latin1 and COLLATE latin1_swedish_ci. After importing it into my new phpMyAdmin, I saw there was an issue with displaying the characters written in Czech (ě ř č ů), which appeared as question marks or weird symbols, etc. I also wasn't able to insert these letters at first, but after changing the table CHARSET to utf8 I am able to insert them. But how do I export the data from my old database and import it into the new one without messing up the data? Here's what the database looks like:
SET SQL_MODE = "NO_AUTO_VALUE_ON_ZERO";
SET AUTOCOMMIT = 0;
START TRANSACTION;
SET time_zone = "+00:00";
/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8mb4 */;
--
-- Database: `sambajiu_samba`
--
-- --------------------------------------------------------
CREATE TABLE `bookings` (
`id` int(11) NOT NULL,
`fname` varchar(100) NOT NULL,
`surname` varchar(100) DEFAULT NULL,
`email` varchar(255) NOT NULL,
`telephone` varchar(100) NOT NULL,
`age_group` varchar(100) DEFAULT NULL,
`hear` varchar(100) DEFAULT NULL,
`experience` text,
`subscriber` tinyint(1) DEFAULT NULL,
`booking_date` varchar(255) DEFAULT NULL,
`lesson_time` varchar(255) NOT NULL,
`booked_on` datetime DEFAULT CURRENT_TIMESTAMP
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
ALTER TABLE `bookings` ADD PRIMARY KEY (`id`);
ALTER TABLE `bookings` MODIFY `id` int(11) NOT NULL AUTO_INCREMENT, AUTO_INCREMENT=345;
Czech is not handled by latin1. It would be better to use utf8mb4 (which can handle virtually everything in the world). Outside of MySQL, it is called "UTF-8".
How did you do the "export" and "import"? What is in the file? Can you get the hex of a small portion of the exported file -- we need to check what encoding was used for the Czech characters.
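One hedged way to pull that hex straight from the table (the fname column comes from the bookings definition in the question):

-- Inspect the raw bytes actually stored, to identify the encoding.
-- é appears as E9 in latin1, C3A9 in UTF-8, and C383C2A9 when double-encoded.
SELECT fname, HEX(fname)
FROM bookings
LIMIT 5;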
As for "as question mark or weird symbols", see question marks and Mojibake in Trouble with UTF-8 characters; what I see is not what I stored .
Your hex probably intended to say
Rezervovat trénink zda
In the middle of the hex is
C383 C2A9
Which is UTF-8 for Ã©, the mojibake form of é. When you display the data, you might see that, or you might see the desired é. In the latter case, the browser is probably "helping" you by decoding the data twice. For further discussion, see "double encoding" in the link above.
"Fixing the data" is quite messy:
SELECT CONVERT(BINARY(CONVERT(CONVERT(
    UNHEX('52657A6572766F766174207472C383C2A96E696E6B207A6461')
    USING utf8mb4) USING latin1)) USING utf8mb4);
==> 'Rezervovat trénink zda'
But I don't think we are finished: that acute-e is a valid character in latin1, yet you mentioned 4 Czech accented letters that, I think, are not in latin1. Latin5 and dec8 may be relevant.

Where to add innodb_large_prefix

I am running into a bit of an issue.
You see I have made a WordPress website locally using WAMP and everything seemed to be working fine, until I tried to get the MySQL database imported onto the new live site where it gave an error:
"#1709 - Index column size is to large, The maximum column size is 767 bytes"
Now I have found some answers to what may be causing this here:
MySQL error: The maximum column size is 767 bytes
And here:
mysql change innodb_large_prefix
And although I understand what needs to be implemented code-wise, I am none the wiser as to where the code actually needs to be placed.
Aside from importing, exporting, and editing the database credentials, I have never had to do anything else with MySQL, so it is all a bit foreign to me.
And though I am more than happy to look into it more deeply later, at this point I just want my live site to work.
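For reference, the settings discussed in those two linked answers are server settings; a hedged sketch of applying them at runtime on MySQL 5.6/5.7 (the same three settings can instead go under [mysqld] in my.ini):

-- Allow index key prefixes up to 3072 bytes instead of 767.
SET GLOBAL innodb_file_format = Barracuda;
SET GLOBAL innodb_file_per_table = ON;
SET GLOBAL innodb_large_prefix = ON;
-- Tables must also use ROW_FORMAT=DYNAMIC (or COMPRESSED) to benefit.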
Well, I figured it out: apparently I had to edit the SQL file itself and add ROW_FORMAT=DYNAMIC to every CREATE TABLE query that uses the InnoDB engine.
So I changed this:
CREATE TABLE `xxx` (
`visit_id` bigint(20) NOT NULL AUTO_INCREMENT,
`visitor_cookie` mediumtext NOT NULL,
`user_id` bigint(20) NOT NULL,
`subscriber_id` bigint(20) NOT NULL,
`url` mediumtext NOT NULL,
`ip` tinytext NOT NULL,
`date` datetime NOT NULL,
PRIMARY KEY (`visit_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
/*!40101 SET character_set_client = @saved_cs_client */;
Into
CREATE TABLE `xxx` (
`visit_id` bigint(20) NOT NULL AUTO_INCREMENT,
`visitor_cookie` mediumtext NOT NULL,
`user_id` bigint(20) NOT NULL,
`subscriber_id` bigint(20) NOT NULL,
`url` mediumtext NOT NULL,
`ip` tinytext NOT NULL,
`date` datetime NOT NULL,
PRIMARY KEY (`visit_id`)
) ROW_FORMAT=DYNAMIC ENGINE=InnoDB DEFAULT CHARSET=utf8;
/*!40101 SET character_set_client = @saved_cs_client */;
Then I re-imported the file onto the local server and then did a new export to the live server... and it is live now...finally.
I still find it a bit strange that MySQL doesn't automatically set rows to DYNAMIC once you exceed a certain size (767 bytes), and that it still works inside the existing local database even though it shouldn't... but maybe WAMP just has different environment settings than the live server.
Anyway thanks all!

View definitions in MariaDB are not created with mysqldump

I have a DB in MariaDB 10.1.25, and in it I have many tables and 20 views.
When I try to back up my DB using mysqldump, it works fine for tables, but for the views it fails to emit a CREATE VIEW statement the way it emits CREATE TABLE statements for tables.
The code generated is this:
--
-- Temporary table structure for view `qry_clientes`
--
DROP TABLE IF EXISTS `qry_clientes`;
/*!50001 DROP VIEW IF EXISTS `qry_clientes`*/;
SET @saved_cs_client = @@character_set_client;
SET character_set_client = utf8;
/*!50001 CREATE TABLE `qry_clientes` (
`Id` tinyint NOT NULL,
`Cliente` tinyint NOT NULL,
`Direccion` tinyint NOT NULL,
`Ciudad` tinyint NOT NULL,
`Fono` tinyint NOT NULL,
`Fax` tinyint NOT NULL,
`Email` tinyint NOT NULL,
`Ruc` tinyint NOT NULL,
`tipo` tinyint NOT NULL
) ENGINE=MyISAM */;
SET character_set_client = @saved_cs_client;
and in this there are no view definitions. I have all the privileges granted.
Usually, in a mysqldump backup script, the views are first created as tables, and those placeholder tables are then dropped at the bottom of the script as each view is created.
Sometimes there is an error in this process, because a view is created with a user as its DEFINER, and that statement may fail if the user does not exist in the database.
Please verify that the view drop/create script exists at the end of the dump, post the error you are getting (if any), and run the import with the -v option for more logging.
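A hedged way to test the DEFINER theory directly (the view name is taken from the dump above):

-- See which DEFINER the view was created with...
SHOW CREATE VIEW qry_clientes;
-- ...and check that this user actually exists on the server.
SELECT user, host FROM mysql.user;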

"ERROR 1406: 1406: Data too long for column" but it shouldn't be?

I have the following table structure:
DROP TABLE IF EXISTS `tblusers`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `tblusers` (
`UserID` int(5) NOT NULL AUTO_INCREMENT,
`ContactPersonID` int(5) NOT NULL,
`NameOfUser` varchar(70) NOT NULL,
`LegalForm` varchar(70) DEFAULT NULL,
`Address` varchar(70) DEFAULT NULL,
`City` varchar(50) DEFAULT NULL,
`Postal` int(8) DEFAULT NULL,
`Country` varchar(50) DEFAULT NULL,
`VatNum` int(10) DEFAULT NULL,
`Username` varchar(30) NOT NULL,
`Password` varchar(20) NOT NULL,
`Email` varchar(40) NOT NULL,
`Website` varchar(40) DEFAULT NULL,
`IsSeller` bit(1) DEFAULT NULL,
`IsBuyer` bit(1) DEFAULT NULL,
`IsAdmin` bit(1) DEFAULT NULL,
`Description` text,
PRIMARY KEY (`UserID`),
KEY `ContactPersonID` (`ContactPersonID`),
CONSTRAINT `tblusers_tblpersons` FOREIGN KEY (`ContactPersonID`) REFERENCES `tblpersons` (`PersonID`)
) ENGINE=InnoDB AUTO_INCREMENT=87 DEFAULT CHARSET=latin1;
/*!40101 SET character_set_client = @saved_cs_client */;
Then, once I create a user from the UI of my application, I have to manually set the very first admin (granting admin privileges). This is the only time I do it directly in the DB; all the rest is meant to be done from the UI:
UPDATE `tblusers` SET `IsAdmin`='1' WHERE `UserID`='79';
but then I get:
Operation failed: There was an error while applying the SQL script to the database.
Executing:
UPDATE `trace`.`tblusers` SET `IsAdmin`='1' WHERE `UserID`='79';
ERROR 1406: 1406: Data too long for column 'IsAdmin' at row 1
SQL Statement:
UPDATE `trace`.`tblusers` SET `IsAdmin`='1' WHERE `UserID`='79'
This doesn't make sense, because I am doing the exact same thing on other machines and it works like a charm. The only difference is that this machine runs a MySQL 5.7 server, whereas the machines where it does work run 5.6.
I tried the following solution, but it didn't work for me. Besides that, the my.ini file is unchanged on the 5.6 machine where it does work.
Downgrading to 5.6 is out of the question. I need a real solution here, please.
IsAdmin is a column of type bit, and you are storing a varchar value in it, which is larger than a bit. Modify the query as follows:
UPDATE `tblusers` SET `IsAdmin`=b'1' WHERE `UserID`='79';
IsAdmin has the datatype bit(1), yet you are assigning the string '1' to it. Indicate that you are assigning a bit value by preceding the '1' with b, or use the 0b format:
UPDATE `tblusers` SET `IsAdmin`=b'1' WHERE `UserID`='79';
or
UPDATE `tblusers` SET `IsAdmin`=0b1 WHERE `UserID`='79';
The reason for this behaviour is probably that the strict_all_tables or strict_trans_tables setting is enabled on the MySQL 5.7 server:
Strict mode controls how MySQL handles invalid or missing values in data-change statements such as INSERT or UPDATE. A value can be invalid for several reasons. For example, it might have the wrong data type for the column, or it might be out of range. A value is missing when a new row to be inserted does not contain a value for a non-NULL column that has no explicit DEFAULT clause in its definition. (For a NULL column, NULL is inserted if the value is missing.) Strict mode also affects DDL statements such as CREATE TABLE.
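A hedged check: strict mode is part of sql_mode, which is strict by default on 5.7 but not on 5.6, so comparing the two servers should show the difference:

-- Run on both the 5.6 and the 5.7 server and compare;
-- look for STRICT_TRANS_TABLES / STRICT_ALL_TABLES in the output.
SELECT @@GLOBAL.sql_mode, @@SESSION.sql_mode;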
The BIT data type is used to store bit values. A type of BIT(M) enables storage of M-bit values. M can range from 1 to 64.
UPDATE tblusers SET IsAdmin=b'1' WHERE UserID='012';
UPDATE tblusers SET IsAdmin=b'0' WHERE UserID='012';
I had the same problem when I synchronized a model's table from MySQL Workbench to a MySQL server that had old tables with data: the data in the old column types was longer than the new column types allow (for example, the old column type was char(43) but the new one is binary(32), so the new column can't contain all of the old data).
My solution: drop the old table and then synchronize the new model with the old database.

How can you speed up making changes to large tables (200k+ rows) in mysql databases?

I have a table in my MySQL database which I constantly need to alter and insert rows into, but it runs slowly whenever I make changes, which is a problem because there are over 200k entries. I tested another table which has very few rows and it is quick, so it's not the server or the database itself; it's that particular table that has a tough time. I need all of the table's rows and cannot find a way around the load issues.
DROP TABLE IF EXISTS `articles`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `articles` (
`id` int(11) NOT NULL auto_increment,
`content` text NOT NULL,
`author` varchar(255) NOT NULL,
`alias` varchar(255) NOT NULL,
`topic` varchar(255) NOT NULL,
`subtopics` varchar(255) NOT NULL,
`keywords` text NOT NULL,
`submitdate` timestamp NOT NULL default CURRENT_TIMESTAMP,
`date` varchar(255) NOT NULL,
`day` varchar(255) NOT NULL,
`month` varchar(255) NOT NULL,
`year` varchar(255) NOT NULL,
`time` varchar(255) NOT NULL,
`ampm` varchar(255) NOT NULL,
`ip` varchar(255) NOT NULL,
`score_up` int(11) NOT NULL default '0',
`score_down` int(11) NOT NULL default '0',
`total_score` int(11) NOT NULL default '0',
`approved` varchar(255) NOT NULL,
`visible` varchar(255) NOT NULL,
`searchable` varchar(255) NOT NULL,
`addedby` varchar(255) NOT NULL,
`keyword_added` varchar(255) NOT NULL,
`topic_added` varchar(255) NOT NULL,
PRIMARY KEY (`id`),
KEY `score_up` (`score_up`),
KEY `score_down` (`score_down`),
FULLTEXT KEY `SEARCH` (`content`),
FULLTEXT KEY `asearch` (`author`),
FULLTEXT KEY `topic` (`topic`),
FULLTEXT KEY `keywords` (`content`,`keywords`,`topic`,`author`),
FULLTEXT KEY `content` (`content`,`keywords`),
FULLTEXT KEY `new` (`keywords`),
FULLTEXT KEY `author` (`author`)
) ENGINE=MyISAM AUTO_INCREMENT=290823 DEFAULT CHARSET=latin1;
/*!40101 SET character_set_client = @saved_cs_client */;
With indexes it depends:
more indexes = faster selects, slower inserts
fewer indexes = slower selects, faster inserts
This is because the indexes have to be updated when inserting, and the more data the table holds, the more work MySQL has to do to maintain them.
So maybe you could remove the indexes you don't need; that should speed up your inserts (see the sketch below).
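For instance, in the schema above, `asearch` and `author` are two identical FULLTEXT indexes on (author), so one of them can go; a hedged cleanup:

-- asearch duplicates the author index exactly; dropping it loses nothing.
ALTER TABLE articles DROP INDEX asearch;
-- The other FULLTEXT keys may each still be needed: MATCH(...) only
-- uses an index whose column list matches the query exactly.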
Another option is to partition your table into several smaller ones; this removes the bottleneck.
Try passing the changes in an update script. It is slow when it creates tables; try updating only the tables where changes have been made. For example, collect all the changes in a variable in your program and insert them with a single query. That should be fast enough for most programs, but as we all know, speed depends on how much data is processed.
Let me know if you need anything else.
This may or may not help you directly, but I notice that you have a lot of VARCHAR(255) columns in your table. Some of them seem totally unnecessary (do you really need all those date / day / month / year / time / ampm columns?), and many could be replaced by more compact datatypes (see the sketch after this list):
Dates could be stored as a DATETIME (or TIMESTAMP).
IP addresses could be stored as INTEGERs, or as BINARY(16) for IPv6.
Instead of storing usernames in the article table, you should create a separate user table and reference it using INTEGER keys.
I don't know what the approved, visible and searchable fields are, but I bet they don't need to be VARCHAR(255)s.
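A hedged sketch of what those replacements might look like on the articles table (this assumes approved/visible/searchable are yes/no flags, and existing data would need converting first):

-- submitdate is already a TIMESTAMP, so the redundant
-- date/day/month/year/time/ampm columns could simply be dropped.
ALTER TABLE articles
    MODIFY ip INT UNSIGNED NOT NULL,               -- store INET_ATON('1.2.3.4')
    MODIFY approved TINYINT(1) NOT NULL DEFAULT 0,
    MODIFY visible TINYINT(1) NOT NULL DEFAULT 0,
    MODIFY searchable TINYINT(1) NOT NULL DEFAULT 0;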
I'd also second Adrian Cornish's suggestion to split your table. In particular, you really want to keep frequently changing and frequently accessed metadata, such as up/down vote scores, separate from rarely changing and infrequently accessed bulk data like article content. See for example http://20bits.com/articles/10-tips-for-optimizing-mysql-queries-that-dont-suck/
"I have a table inside of my mysql database which I constantly need to alter and insert rows into but it continues"
Try InnoDB for this table if your application performs a lot of concurrent updates and inserts; row-level locking pays off there.
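A minimal sketch of the conversion, with one caveat:

-- Row-level locking instead of MyISAM's table locks.
-- Caveat: InnoDB only supports FULLTEXT indexes from MySQL 5.6 on;
-- on older servers the FULLTEXT keys above must be dropped first
-- or moved to a separate MyISAM search table.
ALTER TABLE articles ENGINE=InnoDB;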
I recommend splitting that "big table" (not that big actually, but for MySQL it may be) into several tables to make the most of the query cache. Any time you update a record in a table, all query cache entries for that table are invalidated. You can also try reducing the isolation level, but that is a little more complicated.
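A hedged sketch of the kind of split both this answer and the previous one suggest, keeping the hot vote counters away from the bulk article data (names reuse the schema above):

-- Frequently updated counters live in their own small table, so vote
-- updates no longer touch, or invalidate cached queries against, the
-- bulk article data (content, keywords, ...).
CREATE TABLE article_scores (
    article_id  INT NOT NULL,
    score_up    INT NOT NULL DEFAULT 0,
    score_down  INT NOT NULL DEFAULT 0,
    total_score INT NOT NULL DEFAULT 0,
    PRIMARY KEY (article_id)
) ENGINE=InnoDB;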