How can I get MySQL backup size? - mysql

I have to take a backup of my database every day, and I use mysqldump with shell commands to create it.
I want to know the progress of the backup process, so I need to know the expected backup file size and also which file is being created as the backup.
How can I get these?
Any answers will be appreciated.

The MySQL information_schema database will give you meta-information about a database, including the total size of each table. See: http://dev.mysql.com/doc/refman/5.0/en/tables-table.html
There is an example in the first comment on that page of calculating the size of an entire database.
Note however that your mysqldump output will have overhead depending on your output format: integer values are represented as text, you'll have extra SQL or XML stuff, etc.
You may need to take the sizes provided and scale them up by a fudge factor to get an estimate for the dump size.
As for the dump file name: that's chosen by you (or the shell script you're using) as an argument to mysqldump.
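For illustration, a rough way to pull that number from the shell (a sketch only; 'mydb' is a placeholder database name, and the figure is the on-disk size, not the dump size):

# Approximate size of one database in MB (data + indexes)
mysql -u root -p -e "
    SELECT table_schema AS db,
           ROUND(SUM(data_length + index_length)/1024/1024, 1) AS size_mb
    FROM information_schema.TABLES
    WHERE table_schema = 'mydb'
    GROUP BY table_schema;"

Scaling that up by a fudge factor as described above gives a rough estimate of the eventual dump size.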

You can use the --show-progress-size argument of mysqldump.exe and periodically read the standard output.
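If your mysqldump build doesn't support that option, a common alternative (a sketch, assuming the pv utility is installed and reusing the size estimate from the previous answer) is to pipe the dump through pv, which shows bytes written and throughput, and a percentage/ETA if given an expected size:

# Show bytes written and throughput while the dump runs
mysqldump -u root -p mydb | pv > mydb_backup.sql

# With an expected size (e.g. from the information_schema estimate), pv adds a percentage and ETA
mysqldump -u root -p mydb | pv --size 500m > mydb_backup.sql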

Related

Restoring a MySQL dump with binary blobs

I am moving a MySQL database from a now inaccessible server to a new one. The dump contains tables which in turn contain binary blobs, which seems to cause trouble with the MySQL command line client. When trying to restore the database, I get the following error:
ERROR at line 694: Unknown command '\''.
I inspected the line at which the error is occurring and found that it is a huge insert statement (approx. 900k characters in length) which seems to insert binary blobs into a table.
Now, I have found these two questions that seem to be connected to mine. However, neither answer solved my issue. Adding --default-character-set=utf8 or even --default-character-set=latin1 didn't change anything, and creating a dump with --hex-blob is not possible because the source database server is no longer accessible.
Is there any way how I can restore this backup via the MySQL command line client? If yes, what do I need to do?
Please let me know if you need any additional information.
Thanks in advance.
EDIT: I am using MySQL 5.6.35. Also, in addition to the attempts outlined above, I have already tried increasing the max_allowed_packet system variable to its maximum value - on both server and client - but to no avail.
If I remember correctly, you need to set max_allowed_packet in your my.cnf to a value large enough to accommodate the largest data blob in your dump file, and then restart the MySQL server.
Then you can use a restore command like this one:
mysql --max_allowed_packet=64M < your_dumpfile.sql
More info here: https://dev.mysql.com/doc/refman/5.6/en/server-system-variables.html#sysvar_max_allowed_packet
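A minimal sketch of the whole sequence (the 1 GB value and database name are placeholders; remember the limit must be raised on the server side as well as on the client):

# Check the current server-side limit
mysql -u root -p -e "SHOW VARIABLES LIKE 'max_allowed_packet';"

# Raise it for new connections (or set it in my.cnf and restart the server)
mysql -u root -p -e "SET GLOBAL max_allowed_packet = 1073741824;"

# Restore with a matching client-side limit
mysql --max_allowed_packet=512M -u root -p your_database < your_dumpfile.sql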
No solution, just confirming that I have seen the same behavior with a "text" field type that contains a long JSON string. The SQL (backup) file that mysqldump generates has an INSERT statement that truncates that particular text field to "about" 64K (there are many escaped quotes/double-quotes and various UTF-8 characters), without issuing a warning that such truncation has occurred.
Naturally the restore into a JSON column fails because of the premature termination of the JSON formatted string.
What was odd in this case was that the column in the backed-up table was defined as TEXT, which indeed should be limited to 64 KB. On a hunch, I changed the schema for the backed-up table to MEDIUMTEXT. After THAT, mysqldump no longer truncated that string in the INSERT statement somewhere beyond 64K.
It appears as if MySQLdump doesn't just output the entire column, but truncates to whatever it thinks the maximum string length should be based on schema information, and does NOT issue warnings when it does truncate.
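If you run into this and the source server is still reachable, a possible workaround along the lines described above (the table and column names below are placeholders) is to widen the column before taking the dump:

# Inspect the current definition of the table
mysql -u root -p -e "SHOW CREATE TABLE mydb.mytable\G"

# Widen the column so mysqldump does not clip it at the TEXT limit,
# then re-run mysqldump as usual
mysql -u root -p -e "ALTER TABLE mydb.mytable MODIFY long_json_col MEDIUMTEXT;"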

How do I import/handle large text files for MS SQL?

I have a 1.7GB txt file (about 1.5 million rows) that is apparently formatted in some way into columns and rows, though I don't know the delimiter. I will need to be able to import this data into MySQL and MS SQL databases to run queries on it.
I can't even open it in notepad to see a sample of the data.
For future reference, how does one handle and manipulate very large data files? What file format is best? To my knowledge Excel and CSV do not support unlimited numbers of rows.
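To peek at a file that size without loading all of it (a sketch assuming a Unix-like shell, Git Bash, or WSL; bigfile.txt is a placeholder name):

# Show the first few lines to identify the delimiter and column layout
head -n 5 bigfile.txt

# Count the lines as a sanity check on the row count
wc -l bigfile.txt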
You can use bcp in as shown below:
bcp yourtable in C:\Data\yourfile.txt -c -t, -S localhost -T
Since you know the column names from MySQL, you can create a table with that structure beforehand in SQL Server.
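A rough end-to-end sketch (database, table, column, and file names are placeholders; -t, assumes the file is comma-delimited and -T assumes Windows authentication):

REM Create the target table first, with columns matching the text file
sqlcmd -S localhost -E -Q "CREATE TABLE YourDb.dbo.yourtable (col1 INT, col2 VARCHAR(255), col3 VARCHAR(255));"

REM Bulk-load the file; -c = character mode, -t, = field terminator, -e writes rejected rows to an error file
bcp YourDb.dbo.yourtable in C:\Data\yourfile.txt -c -t, -S localhost -T -e C:\Data\bcp_errors.log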

Big data migration from Oracle to MySQL

I received over 100GB of data with 67 million records from one of the retailers. My objective is to do some market-basket analysis and CLV. This data is a direct SQL dump from one of the tables, with 70 columns. I'm trying to find a way to extract information from this data, as managing it on a small laptop/desktop setup is becoming time consuming. I considered the following options:
Parse the data and convert it to CSV format. The file size might come down to around 35-40GB, as more than half of the information in each record is column names. However, I may still have to use a DB, as I can't use R or Excel with 66 million records.
Migrate the data to a MySQL DB. Unfortunately I don't have the schema for the table, so I'm trying to recreate it by looking at the data. I may have to replace to_date() in the data dump with str_to_date() to match the MySQL format.
Is there any better way to handle this? All I need to do is extract the data from the SQL dump by running some queries. Hadoop etc. are options, but I don't have the infrastructure to set up a cluster. I'm considering MySQL as I have storage space and some memory to spare.
Suppose I go down the MySQL path, how would I import the data? I'm considering one of the following:
Use sed to replace to_date() with the appropriate str_to_date() inline (see the sketch after this question). Note that I need to do this for a 100GB file. Then import the data using the mysql CLI.
Write a python/perl script that will read the file, convert the data, and write to MySQL directly.
Which would be faster? Thank you for your help.
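For option 1, here is a sketch of what the sed pass might look like. It assumes GNU sed and that the dump consistently writes dates as to_date('...','YYYY-MM-DD'); you would have to adjust the pattern for other format masks or for uppercase TO_DATE:

# Rewrite to_date('2014-05-01','YYYY-MM-DD') as str_to_date('2014-05-01','%Y-%m-%d')
sed -E "s/to_date\(('[^']*'), *'YYYY-MM-DD'\)/str_to_date(\1, '%Y-%m-%d')/g" oracle_dump.sql > mysql_dump.sql

# Then load the converted dump
mysql -u root -p targetdb < mysql_dump.sql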
In my opinion writing a script will be faster, because you get to skip the sed pass.
I think you should set up a server on a separate PC and run the script from your laptop.
Also, use tail to quickly grab a chunk from the bottom of this large file, so you can test your script on that part before running it on the full 100GB file.
I decided to go with the MySQL path. I created the schema by looking at the data (I had to increase a few column sizes as there were unexpected variations in the data) and wrote a Python script using the MySQLdb module. The import completed in 4hr 40min on my 2011 MacBook Pro, with 8154 failures out of 67 million records; those failures were mostly data issues. Both client and server are running on my MBP.
@kpopovbg, yes, writing a script was faster. Thank you.

cut off data in a dump file

I am using MySQL.
I have a MySQL dump file (large_data.sql). I can create a database and load data from this dump file into the created database. No problem there.
Now, I feel the data in the dump file is too large (for example, it contains 300,000 rows/objects in one table, and other tables also contain large amounts of data).
So, I decided to make another dump (based on the large dump) which contains a small amount of data (for example, 30 rows/objects per table).
With only that big dump file, what is the correct and efficient way to cut down the data and create a new dump file which contains only a small amount of data?
------------------------- More -----------------------------------
(Opening the large dump in a text editor is not a good option, since the dump is very large and takes a long time to open.)
If you want to work only on the textual dump files, you could use some textual tools (like awk or sed, or perhaps a perl or python or ocaml script) to handle them.
But maybe your big database was already loaded from the big dump file, and you want to work with MySQL incremental backups?
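As a concrete example of the awk route: if the dump was taken with one INSERT per row (i.e. without extended inserts, which is an assumption here; multi-row INSERTs would need their VALUES lists split instead), something like this keeps the schema plus the first 30 rows of each table:

# Copy everything, but keep only the first 30 INSERT statements per table
awk '/^INSERT INTO/ { if (++seen[$3] > 30) next } { print }' large_data.sql > small_data.sql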
I recommend the free file splitter: http://www.filesplitter.org/.
Only problem: it may cut a query into two parts, so you need to manually edit the file afterwards, but it works like a charm.
Example:
My file is:
BlaBloBluBlw
BlaBloBluBlw
BlaBloBluBlw
Result will be:
File 1:
BlaBloBluBlw
BlaBloBl
File 2:
uBlw
BlaBloBluBlw
So you need to fix up the cut, but it works like a charm and is very quick. I used it today on a 9.5 million row table.
BUT!! Best argument: the time this takes is small compared to the time spent importing (or waiting on) something big... it is quick and efficient even though you need to edit the file manually, since you have to rebuild the last and first queries.
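An alternative sketch that avoids cutting a statement in half: since mysqldump normally writes one SQL statement per line, splitting on line boundaries with GNU split keeps every query intact (the chunk size and file names here are just examples):

# 500 lines per chunk, numeric suffixes: dump_part_00, dump_part_01, ...
split -l 500 -d large_dump.sql dump_part_

# Load the chunks in order (you will be prompted for the password for each chunk)
for f in dump_part_*; do mysql -u root -p mydb < "$f"; done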

Exported databases have different sizes

If I export a database with phpMyAdmin, its size is 18MB.
If I export it from the terminal using this command, its size is only 11MB:
/usr/bin/mysqldump --opt -u root -ppassword ${DB} | gzip > ${DB}.sql.gz
Could you explain why? Is it because of the --opt parameter?
How can I be sure the database has been successfully exported? Should I inspect it? Still, that is not a reliable check. Thanks.
With the details you've given, there are a number of possibilities as to why the sizes may differ. Assuming the output from phpMyAdmin is also gzipped (otherwise the obvious reason for the difference would be that one is compressed and the other isn't), the following could affect the size to some degree:
Different ordering of INSERT statements causing differences in the compressibility of the data
One using extended inserts, the other using only standard inserts (this seems most likely given the difference in sizes).
More comments added by the phpMyAdmin export tool
etc...
I'd suggest looking at the export to determine completeness (perhaps restore it to a test database and verify that the row counts on all tables are the same).
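A quick sketch of that row-count check from the shell ('testdb' is a placeholder; note that table_rows is only an estimate for InnoDB tables, so use SELECT COUNT(*) on tables where you need exact figures):

mysql -u root -p -e "
    SELECT table_name, table_rows
    FROM information_schema.TABLES
    WHERE table_schema = 'testdb'
    ORDER BY table_name;"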
I don't have enough points to comment so I'm adding my comments in this answer...
If you look at the uncompressed contents of the export files from a phpmyadmin export and a mysqldump they will be quite different.
You could use diff to compare the two sql files:
diff file1.sql file2.sql
However, in my experience that will NOT be helpful in this case.
You can simply open the files in your favorite editor and compare them to see for yourself.
As mentioned by Iridium in the previous answer, the use of inserts can be different. I created two new empty databases and imported into each (via phpMyAdmin) one of the two exports mentioned above (one from phpMyAdmin and the other via mysqldump).
The import using the mysqldump export file recreated the database containing 151 tables with 1484 queries.
The import using the phpmyadmin export file recreated the database containing 151 tables with 329 queries.
Of course these numbers apply only to my example, but it seems to be in line with what Iridium was talking about earlier.
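If you want to compare the two export files directly rather than restoring them, a rough sketch (file names are placeholders; gunzip the files first if they are compressed) is to count INSERT statements and compare uncompressed sizes:

# Number of INSERT statements in each export
grep -c "^INSERT" phpmyadmin_export.sql
grep -c "^INSERT" mysqldump_export.sql

# Uncompressed sizes side by side
ls -lh phpmyadmin_export.sql mysqldump_export.sql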