I want to open a huge SQL file (20 GB) on my system. I tried phpMyAdmin and BigDump, but it seems BigDump does not support SQL files larger than 1 GB. Is there any script or software that I can use to open, view, search, and edit it?
MySQL Workbench should work fine; it handles large databases well and is very useful...
https://www.mysql.com/products/workbench/
Install it, then create a new connection and double-click it on the home screen to get access to the DB. Right-click on a table and choose "Select Rows - Limit 1000" for a quick view of the table data.
More info http://mysqlworkbench.org/2009/11/mysql-workbench-5-2-beta-quick-start-tutorial/
Try using the mysql command line to run basic SELECT queries.
$ mysql -u myusername -p
mysql> SHOW DATABASES;             -- lists the databases
mysql> USE databasename;           -- selects the database to query
mysql> SHOW TABLES;                -- lists the tables in the database
mysql> SELECT * FROM tablename WHERE column = 'somevalue';
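The 20 GB dump has to be imported into a running MySQL server before you can query it like this; a rough sketch, where mydb and huge_dump.sql are hypothetical names (importing a file that size will take a while, so run it from a session that won't time out):
$ mysql -u myusername -p -e 'CREATE DATABASE mydb'
$ mysql -u myusername -p mydb < huge_dump.sql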
It totally depends on the structure of the database. One way of handling this is to export each table into a separate .sql file. As for editing the file, you're limited to opening the raw SQL in Notepad or any other text editor, but you probably already knew that.
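A sketch of that per-table export from the shell; myuser, secret, and mydb are placeholders, and --skip-comments drops mysqldump's comment header:
for t in $(mysql -u myuser -p'secret' -N -e 'SHOW TABLES' mydb); do
    mysqldump -u myuser -p'secret' --skip-comments mydb "$t" > "$t.sql"
done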
What settings were used to export the database? People often forget that there's also an option to include comments; for big databases it makes sense to turn that off.
To get a more detailed answer have you tried asking at https://dba.stackexchange.com/?
How do I rename a MySQL database (change schema name)?
How can I change the name of my database?
I tried to use the RENAME DATABASE command, but the documentation says it is dangerous to use. So what should I do to rename my database?
For example, renaming it like this:
database1 -> database2?
Follow the steps below:
shell> mysqldump -hlocalhost -uroot -p database1 > dump.sql
mysql> CREATE DATABASE database2;
shell> mysql -hlocalhost -uroot -p database2 < dump.sql
If you want to drop database1, run the following; otherwise leave it:
mysql> DROP DATABASE database1;
Note: shell> denotes the command prompt and mysql> denotes the mysql prompt.
I don't think it's possible.
You can use mysqldump to dump the data, then create a schema with your new name and load the dump into that new database.
Unfortunately, MySQL does not explicitly support that (other than dumping and reloading the database).
From http://dev.mysql.com/doc/refman/5.1/en/rename-database.html:
13.1.32. RENAME DATABASE Syntax
RENAME {DATABASE | SCHEMA} db_name TO new_db_name;
This statement was added in MySQL 5.1.7 but was found to be dangerous and was removed in MySQL 5.1.23. ... Use of this statement could result in loss of database contents, which is why it was removed. Do not use RENAME DATABASE in earlier versions in which it is present.
"As long as two databases are on the same file system, you can use RENAME TABLE to move a table from one database to another"
-- ensure the char set and collate match the existing database.
SHOW VARIABLES LIKE 'character_set_database';
SHOW VARIABLES LIKE 'collation_database';
CREATE DATABASE `database2` DEFAULT CHARACTER SET = `utf8` DEFAULT COLLATE = `utf8_general_ci`;
RENAME TABLE `database1`.`table1` TO `database2`.`table1`;
RENAME TABLE `database1`.`table2` TO `database2`.`table2`;
RENAME TABLE `database1`.`table3` TO `database2`.`table3`;
http://dev.mysql.com/doc/refman/5.7/en/rename-table.html
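If the database has a lot of tables, you don't have to write each RENAME TABLE statement by hand; a sketch that generates them from information_schema (run the generated lines afterwards):
SELECT CONCAT('RENAME TABLE `database1`.`', TABLE_NAME,
              '` TO `database2`.`', TABLE_NAME, '`;')
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'database1' AND TABLE_TYPE = 'BASE TABLE';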
You can change the database name using the phpMyAdmin interface.
Go to http://www.hostname.com/phpmyadmin
Go to the database you want to rename, then go to the Operations tab. There you will find the input field to rename the database.
InnoDB supports the RENAME TABLE statement to move a table from one database to another. To use it programmatically and rename a database with a large number of tables, I wrote a couple of procedures to get the job done.
You can check it out here - SQL script #Gist
To use it simply call the renameDatabase procedure.
CALL renameDatabase('old_name', 'new_name');
Tested on MariaDB; it should work on any MySQL-compatible server using the InnoDB transactional engine.
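The linked Gist isn't reproduced here, but the general shape of such a procedure looks roughly like this; a sketch only, assuming plain base tables (views, triggers, events, and grants would need separate handling):
DELIMITER //
CREATE PROCEDURE renameDatabase(IN old_name VARCHAR(64), IN new_name VARCHAR(64))
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE tbl VARCHAR(64);
  -- all base tables in the source schema
  DECLARE cur CURSOR FOR
    SELECT TABLE_NAME FROM information_schema.TABLES
    WHERE TABLE_SCHEMA = old_name AND TABLE_TYPE = 'BASE TABLE';
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

  -- create the target schema if it does not exist yet
  SET @ddl = CONCAT('CREATE DATABASE IF NOT EXISTS `', new_name, '`');
  PREPARE stmt FROM @ddl; EXECUTE stmt; DEALLOCATE PREPARE stmt;

  -- move every table with RENAME TABLE (a metadata-only operation)
  OPEN cur;
  move_loop: LOOP
    FETCH cur INTO tbl;
    IF done THEN LEAVE move_loop; END IF;
    SET @ddl = CONCAT('RENAME TABLE `', old_name, '`.`', tbl,
                      '` TO `', new_name, '`.`', tbl, '`');
    PREPARE stmt FROM @ddl; EXECUTE stmt; DEALLOCATE PREPARE stmt;
  END LOOP;
  CLOSE cur;
END //
DELIMITER ;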
I agree with the above answers and tips, but there is also a way to change the database name with phpMyAdmin:
Renaming the Database
From cPanel, click on phpMyAdmin. (It should open in a new tab.)
Click on the database you wish to rename in the left hand column.
Click on the Operations tab.
Where it says "Rename database to:" enter the new database name.
Click the Go button.
When it asks whether you want to create the new database and drop the old one, click OK to proceed. (This is a good time to make sure you spelled the new name correctly.)
Once the operation is complete, click OK when asked if you want to reload the database.
Here's a video tutorial:
http://support.hostgator.com/articles/specialized-help/technical/phpmyadmin/how-to-rename-a-database-in-phpmyadmin
Another way to rename the database (or to take a snapshot of it) is to use the reverse engineering option in the Database menu of MySQL Workbench. It will create an ER diagram for the database; rename the schema there.
After that, go to the File menu, choose Export, and forward engineer the database.
Then you can import the database.
The Sequel Ace database client has a rename database function. Select the database you would like to edit, click Database in the menu bar, and then click Rename Database from the dropdown. Enter the new name and click Rename. Done!
After much aggravation, this is what I have found to work "simply".
First, I am using MySQL Workbench, and the import would not work as it should: the imported dump would always revert to the original schema name.
I spent several hours trying everything to no avail, all because of a spelling error.
I solved the issue by opening one of the .sql dump files in Notepad and hand-editing the schema name (it appears three times near the beginning of the file; take care to rename every instance), saving the file, and then importing it. This worked perfectly for me, and I hope it will help others looking for a simple way to change database/schema names.
One more tip I have found true: when programs do not do as they should, go to the "source", literally, and look at the source code.
Hope this helps someone.
My rep is too low to comment on the previous/next answer (it keeps changing rank or position), so I added it here: reverse engineering will work fine as long as there is no data in the server's tables. If data exists and you try to update the server after the name change, it will either throw an error or just create a new database/schema with no data; I know, I tried ten times to no avail.
The approach above works simply and avoids headaches, as one can review the SQL code for other errors, if any, or change table names or creation data.
The .sql file is just plain SQL text, so in theory one could also run it through PHP or the script console of the database management tool.
You can use the command below, but note that this ALTER DATABASE ... MODIFY NAME syntax is SQL Server, not MySQL:
ALTER DATABASE Testing MODIFY NAME = LearningSQL;
Old database name = Testing, new database name = LearningSQL.
Go to the data directory and try this:
mv database1 database2
It worked for me on a 900 MB database. Note that renaming the data directory like this is only safe when every table in the database is MyISAM; InnoDB tables will break.
Try:
RENAME DATABASE database1 TO database2;
I am using MySQL v5.1. (As quoted above, this statement only existed from MySQL 5.1.7 to 5.1.22 and was removed because it could destroy data, so don't rely on it.)
I am developing a Rails app and writing a Ruby script to copy a database. So far, I have fetched an array of table names; there are 2090 tables. I need to create all the tables in a new database. My code looks like:
# "table_names" was fetched by executing a 'SHOW TABLES' SQL command
table_names.each { |tbl_name|
ActiveRecord::Base.connection.execute("CREATE TABLE #{new_db_name}.#{tbl_name} LIKE #{old_db_name}.#{tbl_name}")
}
This code works, but it takes a long time to complete, because it has to execute the CREATE TABLE statements one by one and there are 2090 tables to create.
I am wondering: is there any way to create tables in bulk (like bulk-inserting data) in SQL to save time? If not, how can I improve the speed of creating the tables? That is, copying all 2090 tables from one database to another.
P.S. I don't want to hard-code all 2090 table names in a SQL file.
The simplest method in MySQL is to do a mysqldump of the database in question and then restore it into the new database, e.g.:
mysqldump -pPASSWORD -uUSERNAME name_of_old_db > name_of_db.sql
mysql -pPASSWORD -uUSERNAME name_of_new_db < name_of_db.sql
The dump file will contain all the necessary DDL/DML statements to recreate the database, and it disables foreign key checks and the like so that the dump can be loaded without causing any foreign key problems while the restored DB is in a halfway state.
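Since the question only needs the table definitions (CREATE TABLE ... LIKE copies structure, not data), you can also skip the data and pipe the dump straight into the new database; a sketch with placeholder names, assuming both databases live on the same server:
mysqldump -uUSERNAME -pPASSWORD --no-data name_of_old_db | mysql -uUSERNAME -pPASSWORD name_of_new_db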
It sounds like what you're looking for is a schema-compare tool as opposed to a data-compare tool. (For SQL Server this is built into Visual Studio.) For MySQL, this tool will do that: http://toadformysql.com/index.jspa
If I export a database with phpMyAdmin, its size is 18 MB.
If I export it from the terminal using this command, its size is only 11 MB:
/usr/bin/mysqldump --opt -u root -ppassword ${DB} | gzip > ${DB}.sql.gz
Could you explain why? Is it because of the --opt parameter?
How can I be sure the database has been successfully exported? Should I inspect it? Still, that is not a reliable check. Thanks.
With the details you've given, there are a number of possibilities as to why the sizes may differ. Assuming the output from phpMyAdmin is also gzipped (otherwise the obvious reason for the difference would be that one is compressed and the other isn't), the following could affect the size to some degree:
Different ordering of INSERT statements causing differences in the compressibility of the data
One using extended inserts, the other using only standard inserts (this seems most likely given the difference in sizes).
More comments added by the phpMyAdmin export tool
etc...
I'd suggest looking at the export to determine completeness (perhaps restore it to a test database and verify that the row counts on all tables are the same).
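A quick sketch of that row-count check against a hypothetical test_restore database; note that TABLE_ROWS is only an estimate for InnoDB tables, so use SELECT COUNT(*) per table where exact numbers matter:
SELECT TABLE_NAME, TABLE_ROWS
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'test_restore';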
I don't have enough points to comment so I'm adding my comments in this answer...
If you look at the uncompressed contents of the export files from a phpMyAdmin export and a mysqldump export, they will be quite different.
You could use diff to compare the two sql files:
diff file1.sql file2.sql
However, in my experience that will NOT be helpful in this case.
You can simply open the files in your favorite editor and compare them to see for yourself.
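Since the command-line export in the question is gzipped, you can decompress it on the fly before comparing; a sketch, where phpmyadmin_export.sql is a hypothetical filename for the other dump:
zcat ${DB}.sql.gz | diff - phpmyadmin_export.sql | less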
As mentioned by Iridium in the previous answer, the use of inserts can be different. I created two new empty databases and imported one of the two exports mentioned above into each (via phpMyAdmin): one from phpMyAdmin and the other via mysqldump.
The import using the mysqldump export file recreated the database containing 151 tables with 1484 queries.
The import using the phpmyadmin export file recreated the database containing 151 tables with 329 queries.
Of course these numbers apply only to my example, but it seems to be in line with what Iridium was talking about earlier.
I have a MySQL database with several tables on a live server, and now I would like to migrate this database to another server. The migration I mean here also involves some changes to the database tables, for example: adding some new columns to several tables, adding some new tables, etc.
Right now, the only method I can think of is to use a PHP or Python script (the two languages I know) to connect to both databases, dump the data from the old database, and then write it into the new database. However, this method is not efficient at all. For example: in the old database, table A has 28 columns; in the new database, table A has 29 columns, and the extra column should have the default value 0 for all the old rows. My script still needs to dump the data row by row and insert each row into the new database.
Using mysqldump etc. won't work. Here is the detail: I have FOUR old databases; call them 'DB_a', 'DB_b', 'DB_c', and 'DB_d'. The old table A has 28 columns, and I want to add each row of table A to the new database with a new column 'DB_x' (where x indicates which database the row came from). Since I can't tell which database a row came from by its content, the only way to identify them is through some user-input parameters.
Is there any tool, or a better method, than writing a script myself? I don't need to worry about multithreaded writes and the like; the old database will be down (not open to public use, only for the upgrade) for a while.
Thanks!!
I don't entirely understand your situation with the columns (wouldn't it be more sensible to add any new columns after migration?), but one of the arguably fastest ways to copy a database across servers is mysqlhotcopy. It can copy MyISAM tables only and has a number of other requirements, but it's awfully fast because it skips the create dump / import dump step completely.
Generally when you migrate a database to new servers, you don't apply a bunch of schema changes at the same time, for the reasons that you're running into right now.
MySQL has a dump tool called mysqldump that can be used to easily take a snapshot/backup of a database. The snapshot can then be copied to a new server and installed.
You should figure out all the changes that have been done to your "new" database, and write out a script of all the SQL commands needed to "upgrade" the old database to the new version that you're using (e.g. ALTER TABLE a ADD COLUMN x, etc). After you're sure it's working, take a dump of the old one, copy it over, install it, and then apply your change script.
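For example, the extra source column from the question could simply be one line in that change script; a sketch with hypothetical table and column names (existing rows pick up the default automatically):
ALTER TABLE table_a ADD COLUMN source_db VARCHAR(16) NOT NULL DEFAULT 'DB_a';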
Use mysqldump to dump the data, then load it on the new server (mysql dbname < output.sql). Now the old data is on the new server. Manipulate as necessary.
Sure, there are tools that can help you achieve what you're trying to do. mysqldump is a prime example of such a tool. Just take a glance here:
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
What you could do is:
1) Make a dump of the current DB using mysqldump (with the --no-data option) to fetch the schema only
2) Alter the schema you have dumped, adding the new columns
3) Create your new schema (mysql < dump.sql; just google "mysql backup restore" for more help on the syntax)
4) Dump your data using the mysqldump --complete-insert option (see the link above)
5) Import your data using mysql < data.sql (see the sketch below)
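Roughly, those five steps as commands; a sketch only, with placeholder credentials and database names:
mysqldump -u root -p --no-data old_db > dump.sql
# hand-edit dump.sql to add the new columns (step 2), then:
mysql -u root -p -e 'CREATE DATABASE new_db'
mysql -u root -p new_db < dump.sql
mysqldump -u root -p --no-create-info --complete-insert old_db > data.sql
mysql -u root -p new_db < data.sql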
This should do the job for you, good luck!
Adding extra columns can be done on a live database:
ALTER TABLE [table-name] ADD [column-name] MEDIUMINT(8) DEFAULT 0;
MySQL will fill all existing rows with the default value.
So here is what I would do:
1) Make a copy of your old database with the mysqldump command.
2) Run the resulting SQL file against your new database; now you have an exact copy.
3) Write a migration.sql file that will modify your database with ALTER TABLE commands and, for complex conversions, some temporary MySQL procedures.
4) Test your script (if it fails, go back to step 2).
5) If all is OK, go back to step 1 and go live with your new database.
These are all valid approaches, but I believe what you want is to write a SQL statement that generates the INSERT statements needed to support the new columns you have.
Hi, I need to back up a MySQL database and then deploy it on another MySQL server.
The problem is, I need the backup without data: just a script which creates the database, tables, procedures, and users, resets auto-increments, etc.
I tried the MySQL Administrator tool (Windows) and unchecked the "complete inserts" check box, but it still included the data...
Thanks in advance
Use mysqldump with the option -d or --no-data.
Don't forget the option -R (--routines) to get the procedures.
This page could help you: http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html
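Putting those options together, a sketch (the credentials and database name are placeholders):
mysqldump -u root -p --no-data --routines mydb > mydb_structure.sql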
From within phpMyAdmin you can export the structure, with or without the data. The only thing I'm not sure of is whether it exports users as well; if you like, I can test that tomorrow morning. (It exports users too.) You can check all sorts of options.
According to that page, there isn't a good way to dump the routines and have them recreated with their original timestamps.
What they suggest is to dump the mysql.proc table directly, including all of its data.
Then use your myback.sql to restore the structure, and afterwards restore the mysql.proc table with all of its data.
"... If you require routines to be re-created with their original timestamp attributes, do not use --routines. Instead, dump and reload the contents of the mysql.proc table directly, using a MySQL account that has appropriate privileges for the mysql database. ..."