I am moving a MySQL database from a now-inaccessible server to a new one. The dump contains tables with binary blob columns, which seem to cause trouble for the MySQL command-line client. When trying to restore the database, I get the following error:
ERROR at line 694: Unknown command '\''.
I inspected the line at which the error occurs and found that it is a huge INSERT statement (approx. 900k characters long) that seems to insert binary blobs into a table.
Now, I have found two questions that seem to be related to mine. However, neither answer solved my issue. Adding --default-character-set=utf8 or even --default-character-set=latin1 didn't change anything, and creating a dump with --hex-blob is not possible because the source database server is no longer accessible.
Is there any way to restore this backup via the MySQL command-line client? If so, what do I need to do?
Please let me know if you need any additional information.
Thanks in advance.
EDIT: I am using MySQL 5.6.35. In addition to the attempts outlined above, I have already tried increasing the max_allowed_packet system variable to its maximum value, on both server and client, but to no avail.
If I remember correctly, you need to set max_allowed_packet in your my.cnf to a value large enough to accommodate the largest blob in your dump file, and restart the MySQL server.
Then you can use a restore command like this one:
mysql --max_allowed_packet=64M < your_dumpfile.sql
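If you can edit the server configuration, the corresponding my.cnf entry would look roughly like this (the 256M figure is an assumption; pick any value larger than your biggest blob, then restart the server):
[mysqld]
max_allowed_packet=256M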
More info here:
https://dev.mysql.com/doc/refman/5.6/en/server-system-variables.html#sysvar_max_allowed_packet
No solution, just confirming that I have seen the same behavior with a TEXT field that contains a long JSON string. The SQL (backup) file that mysqldump generates has an INSERT statement that truncates that particular text field at "about" 64K (there are many escaped quotes/double quotes and various UTF-8 characters), without issuing any warning that the truncation occurred.
Naturally, the restore into a JSON column fails because of the premature termination of the JSON-formatted string.
What was odd in this case was that the column in the backed-up table was defined as TEXT, which indeed should be limited to 64 KB. On a hunch, I changed the column in the backed-up table to MEDIUMTEXT. After that, mysqldump no longer truncated the string in the INSERT statement somewhere beyond 64K.
It appears that mysqldump doesn't simply output the entire column; it truncates to whatever it thinks the maximum string length should be, based on schema information, and does NOT issue a warning when it truncates.
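If you hit the same thing, the workaround described above is a one-line schema change before dumping (table and column names here are hypothetical):
ALTER TABLE my_table MODIFY my_column MEDIUMTEXT;
-- re-run mysqldump afterwards; the INSERT should no longer be cut off near 64K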
Related
I am facing a problem where mysqldump behaves strangely with just one table in the database. The table has no unusual configuration and all the columns are basic stuff, except that it has two columns with the longtext/text datatypes, if that matters somehow. The longtext column contains valid XML strings and the text column contains valid JSON strings.
The problem is that the produced dump file contains no INSERTs at all for that specific table. The dump file has SQL lines for everything else. It even has the CREATE TABLE lines for the problematic table and all the other MySQL stuff, but the INSERT lines are totally missing for that table. Instead of inserts, there is just a huge number of empty lines in the middle of the dump file. The dump is done like this:
mysqldump -u root -ppassword dbname > dump.sql
I did notice that when I add
--extended-insert=false
to my dump command, the dump contains some insert rows. Rows with IDs 1-7 are present, 8-9 are missing, 10 is present, 11 is missing, 12-13 are present, etc. The missing rows' INSERT lines are replaced by empty lines in the dump file.
Does anyone have any clue what is happening here? To me, the data does not seem corrupted, and it can be browsed via the phpMyAdmin interface.
Some facts about the case:
mysql Ver 14.14 Distrib 5.5.31, for debian-linux-gnu (x86_64) using readline 6.2
The problem table's engine is InnoDB
The dump file size is around 1.7G
The problem table has around 230,000 rows
The problem was solved by inspecting the dump file more closely with a different program. LogExpert has a known bug of not showing lines that contain too many characters. The dump file was just fine, and the problem did not actually exist.
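If you suspect the same viewer bug, a quick way to confirm the INSERT lines really exist without opening the file in an editor (the table name is hypothetical):
grep -c "INSERT INTO \`problem_table\`" dump.sql   # prints the number of matching lines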
Importing from the dump file failed because of a too-small InnoDB buffer size in the php.ini file. So there were two separate problems that confused me.
For me, after long hours of troubleshooting a blank dump file, it turned out to be a disk space issue on the server where I was trying to take the DB dump.
Seems silly, but posting in case it helps someone.
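A quick sanity check before dumping (the path is an assumption; point it at the directory the dump is written to):
df -h /path/to/dump/dir   # free space should exceed the expected dump size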
I have converted a MySQL database to Postgres. During the conversion, the picture column in Postgres was created as bytea.
This Xojo code works in MySQL but not Postgres.
Dim mImage As Picture
mImage = rs.Field("Picture").PictureValue
Any ideas?
I don't know about this particular issue, but here's what you can do to find out yourself, perhaps:
Pictures are stored as BLOBs in the database. This means that the column must also be declared as BLOB (or a similar binary type). If it was accidentally declared as TEXT, this would still work as long as the database does not get exported by other means, i.e., as long as only your Xojo code reads and writes the record using the PictureValue functions, which keep the data in BLOB form. But if you then convert to another database, the BLOB data would be read as text, and in that process it might get mangled.
So, it may be relevant to let us know how you converted the DB. Did you perform an export as SQL commands and then import it into Postgres by running those commands again? Do you still have the export file? If so, find a record with picture data in it and see if that data starts with x' and then contains hex byte code, e.g. x'45FE1200... and so on. If it doesn't, that's another indicator for my suspicion.
So, check the type of the Picture column in your old DB first. If it specifies a binary data type, then the above probably does not apply.
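Assuming the old database is MySQL, a quick way to check the declared type (the table name is hypothetical):
SHOW COLUMNS FROM my_table LIKE 'Picture';
-- the Type column should read blob/mediumblob/longblob, not text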
Next, you can look at the actual binary data that Xojo reads. To do that, get the BlobValue instead of the PictureValue and store it in a MemoryBlock. Do the same for a single picture in both the old and the new database. The MemoryBlocks should contain the same bytes. If not, that would suggest the data was not transferred correctly. Why? Well, that depends on how you converted it.
Let's say that I have an HTML form (actually I have an editor, TinyMCE) which, through PHP, inserts a bunch of text into a MySQL table.
I want to know the following:
If I have the TINYTEXT data type in a MySQL column, what happens if the user tries to put more than 255 bytes of text into the table?
Does the application save the first 255 bytes and cut off the rest? Or does nothing get saved into the table while MySQL issues a warning? Or none of the above?
Actually, what I want and intend to do is the following:
Limit the size of user form input by setting the column data type in MySQL to TEXT, which can hold a maximum of 64 KB of text. I want to limit the amount of text that gets passed from the user to the database, so that the user can't push too much data to the server at once.
So, basically, I want to know what happens if the user puts more than 65,535 bytes of text through the TinyMCE editor, assuming the TEXT data type in the MySQL table.
MySQL, by default, truncates the data if it's too long and issues a warning:
SHOW WARNINGS;
Data truncated for column 'foo' at row 1
Just to be clear: the data will be saved, but you will be missing the part that was too large.
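You can see this for yourself with a throwaway table (a minimal demonstration, assuming the default non-strict sql_mode):
CREATE TABLE t (c TINYTEXT);
INSERT INTO t VALUES (REPEAT('x', 300));  -- 300 bytes into a 255-byte column
SHOW WARNINGS;                            -- Warning 1265: Data truncated for column 'c' at row 1
SELECT LENGTH(c) FROM t;                  -- 255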
The default MySQL configuration truncates the data if the value is longer than the maximum column size, which produces a non-blocking warning.
If you want a blocking error instead, you have to set sql_mode to STRICT_ALL_TABLES:
https://dev.mysql.com/doc/refman/5.0/en/server-sql-mode.html#sqlmode_strict_all_tables
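For example, for the current session (add it under sql_mode in my.cnf to make it permanent):
SET SESSION sql_mode = 'STRICT_ALL_TABLES';
-- oversized values now raise an error instead of being silently truncated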
IMHO, the best way is to handle this error in the application software.
Hope this helps.
If you enter too much data into a TEXT field in MySQL, it will insert the row anyway, but with that field truncated to the maximum length, and it will issue a warning.
Even if MySQL did prevent the row from being added, that would not be a good way of limiting the length of data a user can enter. You should check the length of the POSTed string in PHP and not run the query at all if it is too long, and perhaps tell the user why their data wasn't saved.
As well as this, you can prevent the user from entering too many characters on the client side (although you should always do the check server-side too, because someone could bypass the client-side limit). There appears to be no built-in way of doing this in TinyMCE, but it is possible by writing a callback: Limit the number of characters in tinyMCE.
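A minimal sketch of that server-side check (the field name 'content' and the error handling are assumptions):
<?php
$maxLen = 65535;                                        // byte limit of a TEXT column
$content = isset($_POST['content']) ? $_POST['content'] : '';
if (strlen($content) > $maxLen) {                       // strlen counts bytes, which is what matters here
    // Do not run the query; tell the user why instead.
    exit('Your text is too long; please shorten it and try again.');
}
// ...safe to run the INSERT here, ideally as a prepared statement.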
I have seven 1G MySQL binlog files that I have to use to retrieve some "lost" information. I only need certain INSERT statements from the log (e.g., statements that start with "INSERT INTO table SET field1="). If I just run mysqlbinlog (even per database and with --short-form), I get a text file that is several hundred megabytes, which makes it almost impossible to parse with any other program.
Is there a way to retrieve only certain SQL statements from the log? I don't need any of the ancillary information (timestamps, auto-increment values, etc.). I just need a list of SQL statements that match a certain string. Ideally, I would like a text file that just lists those SQL statements, such as:
INSERT INTO table SET field1='a';
INSERT INTO table SET field1='tommy';
INSERT INTO table SET field1='2';
I could get that by running mysqlbinlog to a text file and then parsing the results based on a string, but the text file is way too big. It times out any script I run and is even impossible to open in a text editor.
Thanks for your help in advance.
I never received an answer, but I will tell you what I did to get by.
1. Ran mysqlbinlog to a text file
2. Created a PHP script that uses fgets to read each line of the log
3. While looping through each line, the script parses it using the stristr function
4. If the line matches the string I am looking for, it logs the line to a file
It takes a while to run mysqlbinlog and the PHP script, but it no longer times out. I originally used fread in PHP, but that reads the entire file into memory and caused the script to crash on large (1G) log files. Now it takes several minutes to run (I also increased the max_execution_time setting), but it works like a charm. fgets reads one line at a time, so it doesn't use nearly as much memory.
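A sketch of that loop (the file names and the search string are assumptions):
<?php
$in  = fopen('binlog.txt', 'r');   // output of: mysqlbinlog binlog.000001 > binlog.txt
$out = fopen('matches.sql', 'w');
while (($line = fgets($in)) !== false) {   // one line at a time keeps memory usage low
    if (stristr($line, "INSERT INTO table SET field1=") !== false) {
        fwrite($out, $line);               // keep only the matching statements
    }
}
fclose($in);
fclose($out);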
Just moved a site over to a new server and we started getting some errors, mainly that some data we were passing into a MySQL table was too long for the field. Having checked through the DB, it seems the old server was truncating the data to fit the table, yet the new server throws a TEP STOP. Any ideas what the setting is to switch this back on, to temporarily get things working again?
Thanks!
MySQL used to be famous/infamous for silently "correcting" data it could not store directly (overlong strings, invalid dates, etc.). That has fortunately changed in recent versions.
You can now configure this behaviour using "Server SQL Modes". You probably want to switch off STRICT_ALL_TABLES or STRICT_TRANS_TABLES.
See MySQL's docs for details.
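A sketch of the temporary fix (session scope shown; use SET GLOBAL or my.cnf to affect all connections):
SELECT @@sql_mode;          -- see which strict flags the new server runs with
SET SESSION sql_mode = '';  -- permissive mode: truncate and warn, as the old server did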