Exported databases have different sizes - mysql

If I export a database with phpMyAdmin, its size is 18 MB.
If I export it from the terminal using this command, its size is only 11 MB:
/usr/bin/mysqldump --opt -u root -ppassword ${DB} | gzip > ${DB}.sql.gz
Could you explain why? Is it because of the --opt parameter?
How can I be sure the database has been successfully exported? Should I inspect it? Still, that is not a reliable evaluation. Thanks.

With the details you've given, there are a number of possibilities as to why the sizes may differ. Assuming the output from phpMyAdmin is also gzipped (otherwise the obvious reason for the difference would be that one is compressed and the other isn't), the following could affect the size to some degree:
Different ordering of INSERT statements, causing differences in the compressibility of the data
One using extended inserts, the other using only standard inserts (this seems most likely given the difference in sizes)
More comments added by the phpMyAdmin export tool
etc...
I'd suggest looking at the export to determine completeness (perhaps restore it to a test database and verify that the row counts on all tables are the same).
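As a rough illustration of the extended-insert point, here is a minimal sketch with made-up rows and stand-in file names: the same three rows written as one extended INSERT versus three single-row INSERTs, compared before and after gzip.

```shell
# The same three rows, written the two ways a dump tool might emit them.
printf "INSERT INTO t VALUES (1,'a'),(2,'b'),(3,'c');\n" > extended.sql
printf "INSERT INTO t VALUES (1,'a');\nINSERT INTO t VALUES (2,'b');\nINSERT INTO t VALUES (3,'c');\n" > standard.sql

# Standard inserts repeat the statement prefix for every row, so the
# file is larger before compression, and usually after it as well.
wc -c extended.sql standard.sql
gzip -c extended.sql > extended.sql.gz
gzip -c standard.sql > standard.sql.gz
wc -c extended.sql.gz standard.sql.gz
```

On a real 18 MB vs 11 MB pair the per-row overhead adds up across thousands of rows, which is why the insert style alone can plausibly account for the gap.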

I don't have enough points to comment, so I'm adding my comments in this answer...
If you look at the uncompressed contents of the export files from a phpMyAdmin export and a mysqldump, they will be quite different.
You could use diff to compare the two sql files:
diff file1.sql file2.sql
However, in my experience that will NOT be helpful in this case.
You can simply open the files in your favorite editor and compare them to see for yourself.
As mentioned by Iridium in the previous answer, the use of inserts can be different. I created two new empty databases and imported into each (via phpMyAdmin) one of the two exports mentioned above (one from phpMyAdmin and the other via mysqldump).
The import using the mysqldump export file recreated the database containing 151 tables with 1484 queries.
The import using the phpMyAdmin export file recreated the database containing 151 tables with 329 queries.
Of course these numbers apply only to my example, but it seems to be in line with what Iridium was talking about earlier.

Related

Changing database in already exported .sql.gz file

So I have an exported database, database.sql.gz, from a database I can't access personally. Simply uploading it into phpMyAdmin gives errors, and uploading it with zcat {directory} | mysql -u {user} -p {database} also gives the error "Row size too large (> 8126)". After reading through the file with Vi, I realized I can't simply change some row file formats around to make it fit, as the specific table has 67 rows. I also found out that this table (along with others) doesn't get filled at all and gets dropped at the end of the document. I tried commenting out the CREATE TABLE of the too-large table, and there don't seem to have been any related errors from that, but I did run into some different errors at CREATE ALGORITHM commands.
Long story short: is there a better way to remove this giant table from my file that doesn't involve exporting it again (as I don't have direct access to the database) or commenting out every bit that has to do with that table?
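One sketch of an approach, assuming the dump contains the usual "-- Table structure for table ..." comment headers that mysqldump writes before each table (the table name `big_table` and the file names below are stand-ins): filter the unwanted table's whole section out with awk, skipping from its header until the next table's header. Note this keeps nothing that appears after `big_table` only if `big_table` is the last section, so check the tail of the output.

```shell
# A tiny stand-in dump with three tables (a real one would come from
# zcat database.sql.gz):
cat > dump.sql <<'EOF'
-- Table structure for table `a`
CREATE TABLE `a` (id INT);
-- Table structure for table `big_table`
CREATE TABLE `big_table` (id INT);
INSERT INTO `big_table` VALUES (1);
-- Table structure for table `b`
CREATE TABLE `b` (id INT);
EOF

# Skip from big_table's header until the next table's header appears.
awk '/^-- Table structure for table `big_table`/{skip=1; next}
     /^-- Table structure for table `/{skip=0}
     !skip' dump.sql > trimmed.sql

cat trimmed.sql
```

The trimmed file can then be fed to mysql as before; the same filter works on the gzipped original via `zcat database.sql.gz | awk ... > trimmed.sql`.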

Incorrect database diagram - MySQL Workbench options

I have a database diagram designed in MySQL Workbench, and I have my database fully exported to a file (.sql); I also have later versions (incremental backups).
In summary, I have the following files:
database.mwb
database.sql
updateA.sql
updateB.sql
updateC.sql
updateD.sql
updateE.sql
updateF.sql
The problem is that the diagram "database.mwb" does not match any of the databases (surely someone else has modified it and never exported the changes).
I have tested the differences...
... between "database.mwb" and "database.sql"
... between "database.mwb" and a file that I created with the contents of all the update files (copied and pasted manually by me)
... between "database.mwb" and a phpMyAdmin export (database.sql + updateA + updateB)
In conclusion, I want to have my "database.mwb" diagram updated, and I do not know what to do. Maybe reverse engineering to generate a new diagram, but there are more than 500 tables to organize again.
Is there any way to tell MySQL Workbench to modify the diagram based on the SQL file?
What should work is:
1. Create your schema from the original model file on the target server.
2. Apply the next update script on the server.
3. Synchronize your model with the server, taking over all changes from there.
4. Fix the model (layout etc.).
5. Repeat steps 2-4 for each update script.

Opening a huge MySQL database

I want to open a huge SQL file (20 GB) on my system. I tried phpMyAdmin and BigDump, but it seems BigDump does not support SQL files larger than 1 GB. Is there any script or software that I can use to open, view, search and edit it?
MySQL Workbench should work fine; it works well for large DBs and is very useful...
https://www.mysql.com/products/workbench/
Install it, then basically you just create a new connection, and then double-click it on the home screen to get access to the DB. Right-click on a table and click Select 1000 for a quick view of the table data.
More info: http://mysqlworkbench.org/2009/11/mysql-workbench-5-2-beta-quick-start-tutorial/
Try using the mysql command line to do basic SELECT queries.
$ mysql -u myusername -p
mysql> SHOW DATABASES;   -- lists the databases
mysql> USE databasename;   -- selects the database to query
mysql> SHOW TABLES;   -- displays the tables in the database
mysql> SELECT * FROM tablename WHERE column = 'somevalue';
It totally depends on the structure of the database. One way of handling this is by exporting each table in a separate SQL file; as for editing the file, you're limited to opening the raw SQL files in Notepad or any other text editor. But you probably already knew that.
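For searching and splitting without a GUI, plain streaming tools also work at any size. A sketch with a tiny stand-in file (a real 20 GB dump streams the same way, since grep and split read line by line and keep memory use flat):

```shell
# Stand-in for the huge dump file:
printf 'CREATE TABLE t1 (id INT);\nCREATE TABLE t2 (id INT);\n' > huge.sql

grep -n 'CREATE TABLE' huge.sql   # locate table definitions by line number
split -b 1M huge.sql part_        # split into chunks a text editor can open
ls part_*
```

For a real 20 GB file you would pick a larger chunk size, e.g. `split -b 500M`, and open only the chunk containing the lines grep pointed you to.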
What settings were used to export the database? People often forget that there's also an option to include comments; for big databases it makes sense to turn that off.
To get a more detailed answer, have you tried asking at https://dba.stackexchange.com/?

How can I get MySQL backup size?

I have to take a backup of my database every day, and I use mysqldump with shell commands to back up the database.
I want to know the progress of the backup process, so I need to know the backup file size and also which file is being created as the backup.
How can I get these? Any answers will be appreciated.
The MySQL information_schema database will give you meta-information about your databases, including the total size of each table. See: http://dev.mysql.com/doc/refman/5.0/en/tables-table.html
There is an example in the first comment on that page of calculating the size of an entire database.
Note, however, that your mysqldump output will have overhead depending on your output format: integer values are represented as text, you'll have extra SQL or XML around the data, etc.
You may need to take the sizes provided and scale them up by a fudge factor to get an estimate for the dump size.
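As a sketch of that calculation (the common information_schema pattern; treat the result only as a rough baseline for the dump size, for the overhead reasons above):

```sql
-- Approximate data + index size per database, in MB
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.TABLES
GROUP BY table_schema;
```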
And as for the dump file name: that's chosen by you (or the shell script you're using) as an argument to mysqldump.
You can use the --show-progress-size argument of mysqldump.exe and read the standard output periodically.

MySQL database - backup problem

Hi, I need to back up a MySQL database and then deploy it on another MySQL server.
The problem is, I need the backup without data, just a script which creates the database, tables, procedures, users, resets auto-increments, etc. ...
I tried the MySQL Administrator tool (Windows) and unchecked the "complete inserts" check box, but it still created them...
Thanks in advance
Use mysqldump with the option -d or --no-data.
Don't forget the option -R to get the procedures.
This page could help you: http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html
From within phpMyAdmin you can export the structure, with or without the data. The only thing I'm not sure of is whether it exports users as well. If you like, I can test that tomorrow morning. It exports users too. You can check all sorts of options.
According to the page, there isn't a good way to dump the routines and have them easily recreated.
What they suggest is to dump the mysql.proc table directly, including all the data.
Then use your myback.sql to restore the structure, and then restore the mysql.proc table with all of its data.
"... If you require routines to be re-created with their original timestamp attributes, do not use --routines. Instead, dump and reload the contents of the mysql.proc table directly, using a MySQL account that has appropriate privileges for the mysql database. ..."