I've seen a few posts about inserting a table into another table (with the same columns), but I'm having trouble figuring out how to do this across different databases. I see that you reference the database first with the dot operator and then the table, i.e. database.table, but my databases are completely separate instances (separate login credentials, etc.), so when I connect to one database, the other isn't visible. What would be the best way to accomplish what I'm trying to do?
I'm running the DBs on AWS, if that helps.
Once logged in, there is no way to connect to another server's database from within the same session (as far as I know - please correct me if I'm wrong), so you have to export your data first and then import it.
mysqldump is the tool for exporting, e.g. mysqldump -h HOST -u USER -p database table > export.sql
Then import it via, e.g., mysql -h HOST -u USER -p database < export.sql
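If the two servers can reach each other over the network, you can also skip the intermediate file and pipe the dump straight into the second server. A minimal sketch, assuming both instances accept remote connections; the host, user, and password names are placeholders:

mysqldump -h SOURCE_HOST -u SOURCE_USER -pSOURCE_PASSWORD database table | mysql -h TARGET_HOST -u TARGET_USER -pTARGET_PASSWORD database

Putting the password directly after -p (no space) stops both tools prompting at once, at the cost of exposing it in your shell history.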
I was working on a site offline. Now, for some reason, it's not working. I've tried almost everything. To save time, I have set up a fresh MySQL and WordPress site.
Now, I have theme files and database folder files like db.opt and wp_addonlibrary_addons.frm. How can I properly upload them?
I tried to do it manually, but I got an error saying these tables already exist.
Please advise.
I assume your question is about importing the DB into MySQL again?
Are you still able to export the database from your non-working site? Then you could do this (source):
mysqldump --add-drop-table -u user -p databasename > dumpfile.sql
And import it again on your new site with this command:
mysql -u user -p databasename < dumpfile.sql
You won't get the error about already existing tables.
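The reason this works: with --add-drop-table, mysqldump writes a DROP TABLE IF EXISTS before each CREATE TABLE, so existing tables are replaced instead of triggering the "already exists" error. The generated file contains statements roughly like this (the table name here is just an example):

DROP TABLE IF EXISTS `wp_posts`;
CREATE TABLE `wp_posts` (
  ...
);
INSERT INTO `wp_posts` VALUES (...);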
If you can't reach the old database anymore or aren't able to export it, drop the current database on your new site first (make sure you made a backup):
DROP DATABASE databasename;
CREATE DATABASE databasename;
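If you prefer to do this from the shell rather than the mysql prompt, the client's -e flag runs statements non-interactively. A small sketch, using the same databasename placeholder:

mysql -u user -p -e "DROP DATABASE databasename; CREATE DATABASE databasename;"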
You can also drop all tables instead of dropping the database. You can use this script: https://www.cyberciti.biz/faq/how-do-i-empty-mysql-database/
Or, my preferred method is using phpMyAdmin to drop all tables.
Then re-import with:
mysql -u user -p databasename < dumpfile.sql
I am trying to create a table on my local machine that has the same definition as a table on a remote machine. I just want to create the table with the same columns; I don't care about the row data.
The table has around 150 columns, so it's very tedious to write the CREATE TABLE command by hand. Is there an elegant way of doing this?
Are you referring to something like:
SHOW CREATE TABLE table_name;
which shows the SQL statement used to create that table.
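For example, running it on the remote server produces a statement you can paste straight into your local mysql client (myTable here is a placeholder):

SHOW CREATE TABLE myTable;

which prints the full definition, e.g.:

CREATE TABLE `myTable` (
  `id` int NOT NULL AUTO_INCREMENT,
  ...
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4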
You can get the description of your table on host1 with DESCRIBE by calling something like
DESCRIBE myTable;
This needs some manual effort to build up the CREATE TABLE command, but has the advantage of showing all the important details in one view.
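To illustrate, DESCRIBE returns one row per column with its type and attributes; on a hypothetical table it looks something like:

mysql> DESCRIBE myTable;
+-------+-------------+------+-----+---------+----------------+
| Field | Type        | Null | Key | Default | Extra          |
+-------+-------------+------+-----+---------+----------------+
| id    | int         | NO   | PRI | NULL    | auto_increment |
| name  | varchar(64) | YES  |     | NULL    |                |
+-------+-------------+------+-----+---------+----------------+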
Another way is to dump the structure of your database on host 1 with mysqldump. To do so, call mysqldump with the -d (no data) option, similar to
mysqldump -d [options_like_user_and_host] databasename tablename
You will get a file which can be used almost directly on your host 2. Attention: some relations, e.g. to other tables, might not be included.
You can use mysqldump to export the database table structure only. Use -d or --no-data to achieve it (see the official documentation). For example:
mysqldump -d -h hostname -u yourmysqlusername -p databasename > tablestruc.sql
If your local machine is accessible from the remote one, change the hostname and execute this command from the remote machine.
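As a minimal sketch of the import side (the user and database names are placeholders), load the structure file into your local server with:

mysql -u localusername -p localdatabasename < tablestruc.sql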
I'm transferring a complete database from an online server to a localhost server.
But not all records are transferring. Is there any way to transfer the complete data with the same rows?
I tried Navicat, exporting and importing single tables, and importing and exporting .sql and gzip files, but all the results are different.
My hosting is shared.
The software on localhost is XAMPP.
You can try mysqldump.
mysqldump -h hostname -u user -pPassWord --skip-triggers --single-transaction --complete-insert --extended-insert --quote-names --disable-keys dataBaseName > DUMP_dataBaseName.sql
then move your file DUMP_dataBaseName.sql to your localhost, and:
mysql -h Host -u User -pPass -D dataBaseName < DUMP_dataBaseName.sql
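A note on the flags used above, as I understand them: --single-transaction takes a consistent snapshot of InnoDB tables without locking them, --extended-insert bundles many rows into each INSERT so the import runs faster, --complete-insert writes the column names into every INSERT, --quote-names backtick-quotes identifiers, --disable-keys defers non-unique index updates until after each table's data is loaded (mainly effective for MyISAM), and --skip-triggers leaves triggers out of the dump.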
The result is probably not missing any rows. If your tables use InnoDB, the row counts shown by most tools are only estimates; count the rows directly to see the exact number. :)
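For example, to get an exact count for one table (the table name is a placeholder):

SELECT COUNT(*) FROM your_table;

Comparing these counts on the source and the destination is a more reliable check than the approximate row counts InnoDB reports in SHOW TABLE STATUS or phpMyAdmin.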
One issue I have run into countless times when moving WordPress sites is special characters (specifically single quotes and double quotes). The database will export fine, but upon import, it breaks at an "illegal" quote. My workflow now consists of exporting the database, running a find and replace on the SQL file to filter out the offending characters, and then importing.
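As a rough sketch of that find-and-replace step, assuming GNU sed, a UTF-8 locale, and that the offending characters are "smart" (curly) quotes, something like this would normalize them to plain ASCII quotes before importing:

sed -i "s/[‘’]/'/g; s/[“”]/\"/g" dump.sql

Adjust the character list to whatever actually breaks your import; the exact culprits vary from site to site.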
Without knowing more about your specific situation, that's just something I would look into.
I'm trying to "clone" a MySQL database. The problem is with views. When I export a view, the .sql defines the view as database_name.view_name. It doesn't do this for tables, just views. This obviously creates a problem when importing to the second database - the view doesn't get created.
I think I've found the answer. The problems I was running into were being created by phpMyAdmin. From the command line (make sure to create the target database first):
mysqldump -u [username] -p[password] [old_database_name] > dump.sql
mysql -u [username] -p[password] [new_database_name] < dump.sql
No problems.
One thing you may want to try is SQLyog Community; I use it all the time for MySQL, and it seems to do a great job of copying entire databases from one server to another, or even on the same server.
I have a new database, similar to the old one but with more columns and tables. The data in the old tables is still usable and needs transferring.
The old database is on a different server to the new one. I want to transfer the data from one database to the other.
I have Navicat, but it seems to take forever using the host-to-host data transfer. Downloading an SQL file and then executing it also takes too long (it executes about 4 inserts per second).
The downloaded SQL file is about 40 MB (with complete insert statements). The final one would probably be 60 to 80 MB.
What's the best way to transfer this data? (a process I will need to repeat a few times for testing)
Doing a mysqldump on the source machine and then slurping it in on the other side, even with a 40-100 MB file, is well within reason. Do it from the command line.
(source machine)
mysqldump -u user -ppassword database > database.sql
...transfer file to recipient machine...
(recipient machine)
mysql -u user -ppassword database < database.sql
Can you not transfer only a portion of the data for testing first? Then, later, transfer the entire thing when you're satisfied with test-results?
(it executes about 4 inserts per second)
That sounds more like there's something wrong with your database. Are you sure that's alright?
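On the slow-insert point: one common remedy, offered here as a sketch rather than a diagnosis, is to turn off per-statement overhead during the import and commit once at the end. Inside the mysql client:

SET autocommit=0;
SET unique_checks=0;
SET foreign_key_checks=0;
SOURCE database.sql
COMMIT;
SET unique_checks=1;
SET foreign_key_checks=1;

With autocommit on, every INSERT is its own transaction, which can easily throttle an import to a handful of statements per second.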
Cody, thank you for the direction. For some reason it did not work for me, but the below did on my Red Hat Linux server:
(recipient machine)
mysql -u [username] -p -h localhost database < database.sql
(source machine)
I just used phpMyAdmin.
Is there a command that can be run to pull the DB from another server? Something along the lines of: mysqldump -u [username] -p -h [host address] [dbname] > [filename].sql
Thanks
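As far as I know, yes: exactly that command works, since mysqldump connects to whatever server -h points at and writes the dump locally. The one assumption is that the remote MySQL server accepts connections from your machine (bind-address and user grants permitting), which shared hosts often disallow; in that case you'd run the dump on the remote box and copy the file over instead.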