I am running into a bit of an issue.
You see, I built a WordPress website locally using WAMP and everything seemed to be working fine, until I tried to import the MySQL database into the new live site, where it gave an error:
"#1709 - Index column size is to large, The maximum column size is 767 bytes"
Now I have found some answers to what may be causing this here:
MySQL error: The maximum column size is 767 bytes
And here:
mysql change innodb_large_prefix
And although I understand what needs to be implemented code-wise, I am none the wiser as to where the code actually needs to be placed.
Aside from importing, exporting, and editing the database credentials, I have never had to do anything else with MySQL, so it is all a bit foreign to me.
And though I am more than happy to look into it more deeply at a later point in time, right now I would rather just get my live site working.
Well, I figured it out: apparently I had to edit the SQL file itself and add ROW_FORMAT=DYNAMIC to the end of every CREATE TABLE query that uses the InnoDB engine.
So I changed this:
CREATE TABLE `xxx` (
`visit_id` bigint(20) NOT NULL AUTO_INCREMENT,
`visitor_cookie` mediumtext NOT NULL,
`user_id` bigint(20) NOT NULL,
`subscriber_id` bigint(20) NOT NULL,
`url` mediumtext NOT NULL,
`ip` tinytext NOT NULL,
`date` datetime NOT NULL,
PRIMARY KEY (`visit_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
/*!40101 SET character_set_client = @saved_cs_client */;
Into
CREATE TABLE `xxx` (
`visit_id` bigint(20) NOT NULL AUTO_INCREMENT,
`visitor_cookie` mediumtext NOT NULL,
`user_id` bigint(20) NOT NULL,
`subscriber_id` bigint(20) NOT NULL,
`url` mediumtext NOT NULL,
`ip` tinytext NOT NULL,
`date` datetime NOT NULL,
PRIMARY KEY (`visit_id`)
) ROW_FORMAT=DYNAMIC ENGINE=InnoDB DEFAULT CHARSET=utf8;
/*!40101 SET character_set_client = @saved_cs_client */;
Then I re-imported the file into the local server, did a new export to the live server... and it is live now... finally.
I still find it a bit strange that MySQL doesn't automatically set the row format to dynamic once an index exceeds a certain size (767 bytes), and that the schema still works inside the existing local database even though it supposedly shouldn't... but maybe WAMP just has different environment settings than the live server.
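For what it's worth, the same error can often be fixed on the server side instead of editing the dump, by enabling large index prefixes before importing. A hedged sketch, for MySQL 5.6/5.7 only (these variables were removed in 8.0, where large prefixes are the default; you need the SUPER privilege, and the tables still have to use ROW_FORMAT=DYNAMIC or COMPRESSED):
-- Check the current settings:
SHOW VARIABLES LIKE 'innodb_large_prefix';
SHOW VARIABLES LIKE 'innodb_file_format';
-- Allow index key prefixes up to 3072 bytes:
SET GLOBAL innodb_file_format = 'Barracuda';
SET GLOBAL innodb_file_per_table = ON;
SET GLOBAL innodb_large_prefix = ON;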
Anyway thanks all!
I use MySQL 5.7, but I do not know how to configure it to display Vietnamese correctly.
I have set
CREATE DATABASE brt
DEFAULT CHARACTER SET utf8 COLLATE utf8_vietnamese_ci;
After that I used "LOAD DATA LOCAL INFILE" to load data written in Vietnamese into the database.
But I often get results with Vietnamese characters displayed incorrectly.
For the detailed code and files, please check my GitHub via the following link:
https://github.com/fivermori/mysql
Please show me how to solve this. Thanks.
As @ysth suggests, using utf8mb4 will save you a world of trouble going forward. If you change your CREATE statements to look like this, you should be good:
CREATE DATABASE `brt` DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
USE `brt`;
DROP TABLE IF EXISTS `fixedAssets`;
CREATE TABLE IF NOT EXISTS `fixedAssets` (
`id` int(11) UNSIGNED NOT NULL AUTO_INCREMENT,
`code` varchar(250) UNIQUE NOT NULL DEFAULT '',
`name` varchar(250) NOT NULL DEFAULT '',
`type` varchar(250) NOT NULL DEFAULT '',
`createdDate` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
CREATE INDEX `idx_fa_main` ON `fixedAssets` (`code`);
I've tested this using the data that you provided and get the expected query results:
name
----------------------------------------------------------------
Mould Terminal box cover BN90/112 612536030 39 tháng
Mould W2206-045-9911-VN #3 ( 43 tháng)
Mould Flange BN90/B5 614260271 ( 43 tháng)
Mould 151*1237PH04pC11 ( 10 năm)
Transfer 24221 - 2112 ( sửa chữa nhà xưởng Space T 07-2016 ) BR2
Using the utf8mb4 character set and the utf8mb4_unicode_ci collation is usually one of the simpler ways to ensure that your database can correctly handle everything from plain ASCII to modern emoji.
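One caveat worth adding: the table character set only covers storage; the client connection has to speak utf8mb4 too, or the text will still come out garbled. A quick check using standard statements:
-- Make this session talk utf8mb4 to the server
-- (LOAD DATA can also take an explicit CHARACTER SET clause):
SET NAMES utf8mb4;
-- These should all report utf8mb4 for client/connection/results:
SHOW VARIABLES LIKE 'character_set%';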
Disclaimer: YES, I've done my search on Stack Overflow, and NO, I couldn't find an answer for this case.
I'm migrating data from a forum which has some legacy in its MySQL database. One of the issues is the storage of emojis.
Donor database:
-- Server: 5.5.41-MariaDB
CREATE TABLE `forumtopicresponse` (
`id` int(10) UNSIGNED NOT NULL,
`topicid` int(10) UNSIGNED NOT NULL DEFAULT '0',
`userid` int(10) UNSIGNED NOT NULL DEFAULT '0',
`message` text NOT NULL,
`created` int(10) UNSIGNED NOT NULL DEFAULT '0'
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
In the message column I've got a message like this: Success!ðŸ‘ðŸ‘, which should read "Success!👍👍".
Laravel target database:
-- Server: MySQL 5.7.x
CREATE TABLE `answers` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`topic_id` int(10) unsigned NOT NULL,
`user_id` int(10) unsigned NOT NULL,
`body` text CHARACTER SET utf8mb4,
`created_at` timestamp NULL DEFAULT NULL,
`updated_at` timestamp NULL DEFAULT NULL,
...keys & indexes
) ENGINE=InnoDB AUTO_INCREMENT=1254419 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
In HTML the document has a <meta charset="utf-8"> and to display the field, I'm using
{!! nl2br(e($answer->body)) !!}
And with this it just displays as Success!ðŸ‘ðŸ‘ and not the emojis.
Question
How can I migrate this data cleanly, as valid UTF-8, into my fresh database? I think I need some encoding conversion, but I can't figure out which.
UPDATE! THE SOLUTION
Got it fixed. The only solution was to alter the table in the donor database:
ALTER TABLE forumtopicresponse CHANGE message message LONGTEXT CHARACTER SET latin1;
ALTER TABLE forumtopicresponse CHANGE message message LONGBLOB;
Do NOT change the LONGBLOB back to LONGTEXT afterwards: I lost data that way.
When I migrate the LONGBLOB data into the Laravel target database, everything gets migrated correctly: all special characters and emojis come through as valid UTF-8.
The Emoji 👍 is hex F09F918D. That is, it is a 4-byte string.
MySQL's CHARACTER SET = utf8 does not handle 4-byte UTF-8 sequences, only 3-byte ones, thereby excluding many of the emoji and some Chinese characters.
When interpreted as latin1, those bytes come out as ðŸ‘ (plus a 4th, but unprintable, character). Showing gibberish like that is called "Mojibake".
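You can reproduce the Mojibake mechanism in miniature with a single query (a hedged illustration; run it from a utf8mb4 connection):
-- Take the UTF-8 bytes of 👍 (F0 9F 91 8D) and reinterpret them
-- as latin1, one character per byte:
SELECT CONVERT(CAST('👍' AS BINARY) USING latin1);
-- -> ðŸ‘ plus the unprintable 0x8D byte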
So, you have 2 problems (both sketched just below):
Need to change the storage to utf8mb4 so you can store the Emoji.
Need to announce to MySQL that your client is speaking UTF-8, not latin1.
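A minimal sketch of both fixes, using the donor table from the question. Note that these only make storage and the connection correct going forward; rows that are already mangled need the repair procedures described in the links below:
-- 1. Make the storage 4-byte capable:
ALTER TABLE forumtopicresponse
  CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
-- 2. Announce that the client speaks UTF-8 on each connection:
SET NAMES utf8mb4;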
See "Best Practice" in Trouble with UTF-8 characters; what I see is not what I stored
And also see UTF-8 all the way through
Here's my list of fixes, but you must first correctly identify which case you have. Applying the wrong fix makes things worse.
There may be a 3rd mistake -- in moving the data from 5.5 to 5.7. Please provide those details.
I have the following table structure:
DROP TABLE IF EXISTS `tblusers`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `tblusers` (
`UserID` int(5) NOT NULL AUTO_INCREMENT,
`ContactPersonID` int(5) NOT NULL,
`NameOfUser` varchar(70) NOT NULL,
`LegalForm` varchar(70) DEFAULT NULL,
`Address` varchar(70) DEFAULT NULL,
`City` varchar(50) DEFAULT NULL,
`Postal` int(8) DEFAULT NULL,
`Country` varchar(50) DEFAULT NULL,
`VatNum` int(10) DEFAULT NULL,
`Username` varchar(30) NOT NULL,
`Password` varchar(20) NOT NULL,
`Email` varchar(40) NOT NULL,
`Website` varchar(40) DEFAULT NULL,
`IsSeller` bit(1) DEFAULT NULL,
`IsBuyer` bit(1) DEFAULT NULL,
`IsAdmin` bit(1) DEFAULT NULL,
`Description` text,
PRIMARY KEY (`UserID`),
KEY `ContactPersonID` (`ContactPersonID`),
CONSTRAINT `tblusers_tblpersons` FOREIGN KEY (`ContactPersonID`) REFERENCES `tblpersons` (`PersonID`)
) ENGINE=InnoDB AUTO_INCREMENT=87 DEFAULT CHARSET=latin1;
/*!40101 SET character_set_client = @saved_cs_client */;
Then, once I create a user from the UI of my application, I have to manually set the very first admin. This is the only time I do anything directly in the DB; everything else, including granting admin privileges, is meant to be done from the UI:
UPDATE `tblusers` SET `IsAdmin`='1' WHERE `UserID`='79';
but then I get:
Operation failed: There was an error while applying the SQL script to the database.
Executing:
UPDATE `trace`.`tblusers` SET `IsAdmin`='1' WHERE `UserID`='79';
ERROR 1406: 1406: Data too long for column 'IsAdmin' at row 1
SQL Statement:
UPDATE `trace`.`tblusers` SET `IsAdmin`='1' WHERE `UserID`='79'
This doesn't make sense to me, because I do the exact same thing on other machines and it works like a charm. The only difference is that this machine runs a MySQL 5.7 server, whereas the machines where it does work run 5.6.
I tried the following solution, but it didn't work for me. Besides that, the my.ini file is unchanged on the 5.6 machine where it does work.
Downgrading to 5.6 is out of the question. I need a real solution here, please.
IsAdmin is a column of type BIT, and you are storing a varchar value in it, which is larger than a bit. Modify the query as follows:
UPDATE `tblusers` SET `IsAdmin`=b'1' WHERE `UserID`='79';
IsAdmin has the datatype BIT(1), yet you are assigning the string '1' to it. Indicate that you are assigning a bit value by preceding the '1' with b, or use the 0b format:
UPDATE `tblusers` SET `IsAdmin`=b'1' WHERE `UserID`='79';
or
UPDATE `tblusers` SET `IsAdmin`=0b1 WHERE `UserID`='79';
The reason for this behaviour is probably that the STRICT_ALL_TABLES or STRICT_TRANS_TABLES setting is enabled on the MySQL 5.7 server:
Strict mode controls how MySQL handles invalid or missing values in data-change statements such as INSERT or UPDATE. A value can be invalid for several reasons. For example, it might have the wrong data type for the column, or it might be out of range. A value is missing when a new row to be inserted does not contain a value for a non-NULL column that has no explicit DEFAULT clause in its definition. (For a NULL column, NULL is inserted if the value is missing.) Strict mode also affects DDL statements such as CREATE TABLE.
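To confirm that strict mode really is the difference between the 5.6 and 5.7 machines, a quick check (standard statements; loosening sql_mode would also make the error go away, but using a proper bit literal is the better fix):
-- See which modes each server enforces:
SELECT @@GLOBAL.sql_mode, @@SESSION.sql_mode;
-- MySQL 5.7 enables STRICT_TRANS_TABLES by default; 5.6 did not,
-- so there the string '1' was clamped into BIT(1) with only a warning.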
The BIT data type is used to store bit values. A type of BIT(M) enables storage of M-bit values. M can range from 1 to 64.
UPDATE tblusers SET IsAdmin=b'1' WHERE UserID='012';
UPDATE tblusers SET IsAdmin=b'0' WHERE UserID='012';
I had the same problem when I synchronized a model's table from MySQL Workbench to a MySQL server that still had old tables with data. The old columns held data longer than the new column types allowed (for example, the old column type was char(43) but the new one was binary(32), so the new column couldn't hold all of the old data).
My solution: drop the old table and then synchronize the new model with the old database.
I have an application based on MySQL, with an InnoDB schema (with constraints...).
My co-workers need to import this schema, so I export it as SQL files.
For example:
DROP TABLE IF EXISTS `admins`;
CREATE TABLE `admins` (
`id` smallint(5) unsigned NOT NULL AUTO_INCREMENT,
`username` varchar(45) NOT NULL,
`password` varchar(45) NOT NULL,
`email` varchar(45) DEFAULT NULL,
`creation_date` datetime NOT NULL,
`close_date` datetime DEFAULT NULL,
`close_reason` varchar(45) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=4 ;
Now, I would like to have a cross-db application, so:
I tried to import my previous SQL files into PostgreSQL, but it didn't work: my SQL files are MySQL-specific (for example, the use of the ` character...).
I tried to export my schema with mysqldump in the compatibility mode --compatible=ansi, my goal being one generic SQL file compatible with all major DBMSs. But it didn't work either: PostgreSQL returns syntax errors.
--compatible=ansi returns:
DROP TABLE IF EXISTS "admins";
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE "admins" (
"id" smallint(5) unsigned NOT NULL AUTO_INCREMENT,
"username" varchar(45) NOT NULL,
"password" varchar(45) NOT NULL,
"email" varchar(45) DEFAULT NULL,
"creation_date" datetime NOT NULL,
"close_date" datetime DEFAULT NULL,
"close_reason" varchar(45) DEFAULT NULL,
PRIMARY KEY ("id")
);
/*!40101 SET character_set_client = @saved_cs_client */;
I even tried to export with --compatible=postgresql:
DROP TABLE IF EXISTS "admins";
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE "admins" (
"id" smallint(5) unsigned NOT NULL,
"username" varchar(45) NOT NULL,
"password" varchar(45) NOT NULL,
"email" varchar(45) DEFAULT NULL,
"creation_date" datetime NOT NULL,
"close_date" datetime DEFAULT NULL,
"close_reason" varchar(45) DEFAULT NULL,
PRIMARY KEY ("id")
);
/*!40101 SET character_set_client = @saved_cs_client */;
But that didn't work either...
I know there are tools to convert a MySQL schema to a PostgreSQL schema, but that isn't the goal...
My question: is it possible to have a single SQL file that is compatible with MySQL, PostgreSQL, SQLite, etc., rather than maintaining a separate SQL file for each DBMS?
Thank you
My question: is it possible to have a single SQL file that is compatible with MySQL, PostgreSQL, SQLite, etc., rather than maintaining a separate SQL file for each DBMS?
Not easily with raw SQL, unless you wish to use a pathetic subset of the databases' supported features.
SELECTs and DML in SQL can be moderately portable, but DDL is generally a hopeless nightmare for all but the total basics. You'll want an abstraction tool that generates the SQL for you, handling database specific differences in sequences/generated keys, type naming, constraints, index creation, etc.
As just one example, let's look at auto-incrementing values / sequences, as frequently used for synthetic keys:
MySQL: integer AUTO_INCREMENT
PostgreSQL: SERIAL (shorthand for a sequence)
MS-SQL: int IDENTITY(1,1)
Oracle (below 12c): No direct support, use a sequence.
Oracle (12c and above): NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY
.. and that's just for the very common task of a generated key. Lots of other fun differences exist. For example, MySQL has tinyint and unsigned integers; PostgreSQL does not. PostgreSQL has bool, bit(n) bitfields, range types, PostGIS types, etc., which most other DBs don't have. Even for things that are shared, quirks abound: specifying "4-byte signed integer" across all DBs isn't even trivial.
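To make the divergence concrete, here is the same trivial table written twice; a sketch of each dialect, not a canonical form:
-- MySQL dialect:
CREATE TABLE admins (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  username VARCHAR(45) NOT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
-- PostgreSQL dialect: no ENGINE/CHARSET table options, no unsigned
-- integers, and the generated key comes from SERIAL (a sequence):
CREATE TABLE admins (
  id SERIAL PRIMARY KEY,
  username VARCHAR(45) NOT NULL
);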
One option that can help is Liquibase, which I've heard good things about. Some people instead use an ORM to manage their DDL generation, though those tend to use, again, only the most primitive of database features.
I have a table in my MySQL database which I constantly need to alter and insert rows into, but it keeps running slowly whenever I make changes, which is a real problem because there are over 200k entries. I tested another table which has very few rows and it is fast, so it's not the server or the database itself but that particular table. I need all of the table's rows and cannot find a way around the load issues.
DROP TABLE IF EXISTS `articles`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `articles` (
`id` int(11) NOT NULL auto_increment,
`content` text NOT NULL,
`author` varchar(255) NOT NULL,
`alias` varchar(255) NOT NULL,
`topic` varchar(255) NOT NULL,
`subtopics` varchar(255) NOT NULL,
`keywords` text NOT NULL,
`submitdate` timestamp NOT NULL default CURRENT_TIMESTAMP,
`date` varchar(255) NOT NULL,
`day` varchar(255) NOT NULL,
`month` varchar(255) NOT NULL,
`year` varchar(255) NOT NULL,
`time` varchar(255) NOT NULL,
`ampm` varchar(255) NOT NULL,
`ip` varchar(255) NOT NULL,
`score_up` int(11) NOT NULL default '0',
`score_down` int(11) NOT NULL default '0',
`total_score` int(11) NOT NULL default '0',
`approved` varchar(255) NOT NULL,
`visible` varchar(255) NOT NULL,
`searchable` varchar(255) NOT NULL,
`addedby` varchar(255) NOT NULL,
`keyword_added` varchar(255) NOT NULL,
`topic_added` varchar(255) NOT NULL,
PRIMARY KEY (`id`),
KEY `score_up` (`score_up`),
KEY `score_down` (`score_down`),
FULLTEXT KEY `SEARCH` (`content`),
FULLTEXT KEY `asearch` (`author`),
FULLTEXT KEY `topic` (`topic`),
FULLTEXT KEY `keywords` (`content`,`keywords`,`topic`,`author`),
FULLTEXT KEY `content` (`content`,`keywords`),
FULLTEXT KEY `new` (`keywords`),
FULLTEXT KEY `author` (`author`)
) ENGINE=MyISAM AUTO_INCREMENT=290823 DEFAULT CHARSET=latin1;
/*!40101 SET character_set_client = @saved_cs_client */;
With indexes, it's a trade-off:
more indexes = faster selects, slower inserts
fewer indexes = slower selects, faster inserts
That's because the indexes have to be updated on every insert, and the more data the table holds, the more work MySQL has to do to keep them current.
So consider removing the indexes you don't need; that should speed up your inserts.
Another option is to partition your table into several smaller ones; this removes the bottleneck.
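A hedged sketch of that cleanup, using the index names from the dump above (several of the seven FULLTEXT indexes overlap):
-- List every index on the table:
SHOW INDEX FROM articles;
-- Drop duplicates, e.g. `asearch` and `author` are both
-- FULLTEXT indexes on the same single column:
ALTER TABLE articles DROP INDEX author;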
Try passing the changes in an update script, but don't recreate the tables; that is what makes it slow. Instead, update only the tables where changes have been made. For example, collect all the changes your program makes in a variable and insert them with a single query. That should be fast enough for most programs, but as we all know, speed depends on how much data is processed.
Let me know if you need anything else.
This may or may not help you directly, but I notice that you have a lot of VARCHAR(255) columns in your table. Some of them seem totally unnecessary (do you really need all those date / day / month / year / time / ampm columns?) and many could be replaced by more compact datatypes, as sketched after this list:
Dates could be stored as a DATETIME (or TIMESTAMP).
IP addresses could be stored as INTEGERs, or as BINARY(16) for IPv6.
Instead of storing usernames in the article table, you should create a separate user table and reference it using INTEGER keys.
I don't know what the approved, visible and searchable fields are, but I bet they don't need to be VARCHAR(255)s.
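A hedged sketch of those swaps, using column names from the dump (the dates and IPs already stored as strings would need a conversion pass first, and INET_ATON/INET_NTOA only cover IPv4):
-- The submitdate timestamp already covers date/day/month/year/time/ampm,
-- so those could simply be dropped:
ALTER TABLE articles
  DROP COLUMN `date`, DROP COLUMN `day`, DROP COLUMN `month`,
  DROP COLUMN `year`, DROP COLUMN `time`, DROP COLUMN `ampm`,
  -- IPv4 packs into 4 bytes; write with INET_ATON(), read with INET_NTOA():
  MODIFY ip int unsigned NOT NULL,
  -- flags don't need 255 characters:
  MODIFY approved tinyint(1) NOT NULL DEFAULT 0,
  MODIFY visible tinyint(1) NOT NULL DEFAULT 0;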
I'd also second Adrian Cornish's suggestion to split your table. In particular, you really want to keep frequently changing and frequently accessed metadata, such as up/down vote scores, separate from rarely changing and infrequently accessed bulk data like article content. See for example http://20bits.com/articles/10-tips-for-optimizing-mysql-queries-that-dont-suck/
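A minimal sketch of that split, with a hypothetical table name; votes then touch only the small table, leaving the bulky article rows (and their FULLTEXT indexes) alone:
CREATE TABLE article_scores (
  article_id int(11) NOT NULL,
  score_up int(11) NOT NULL DEFAULT 0,
  score_down int(11) NOT NULL DEFAULT 0,
  total_score int(11) NOT NULL DEFAULT 0,
  PRIMARY KEY (article_id),
  KEY score_up (score_up),
  KEY score_down (score_down)
) ENGINE=InnoDB;
-- An upvote becomes a tiny row-level update:
UPDATE article_scores
SET score_up = score_up + 1, total_score = total_score + 1
WHERE article_id = 12345;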
"I have a table inside of my mysql database which I constantly need to alter and insert rows into but it continues"
Try InnoDB for this table if your application performs a lot of concurrent updates and inserts; InnoDB's row-level locking pays off there.
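The switch itself is a one-liner, with one caveat: it rebuilds the whole table, and FULLTEXT indexes were MyISAM-only until MySQL 5.6, so on older servers the FULLTEXT indexes would have to go first:
-- Rebuilds the table with row-level locking instead of
-- MyISAM's table-level locks:
ALTER TABLE articles ENGINE=InnoDB;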
I recommend splitting that "big table" (not that big actually, but big enough for MySQL to struggle) into several tables to make the most of the query cache. Any time you update a record in that table, every query-cache entry that references it is invalidated. You could also try reducing the isolation level, but that is a little more complicated.