Select from second MySQL Server

I would like to select data from a second MySQL database in order to migrate data from one server to another.
I'm looking for syntax like
SELECT * FROM username:password#serverip.databaseName.tableName
Is this possible? I would be able to do this in Microsoft SQL Server using linked servers, so I'm assuming it's possible in MySQL as well.

You can create a table using the FEDERATED storage engine:
CREATE TABLE tableName (id INT NOT NULL, …)
ENGINE=FEDERATED
CONNECTION='mysql://username:password@serverip/databaseName/tableName';
SELECT *
FROM tableName;
Basically, it will serve as a view over the remote tableName.
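Once the FEDERATED table exists, it can also be used to pull rows across servers with INSERT ... SELECT. A minimal sketch, assuming a local table localTableName with the same columns already exists on the destination server (localTableName is a made-up name):
INSERT INTO localTableName (id)
SELECT id
FROM tableName;  -- reads from the remote server through the FEDERATED engine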

There are generally two approaches you can take, although neither of them sounds like what you're after:
Use replication and set up a master/slave relationship between the two databases.
Simply dump the data (using the command line mysqldump tool) from the 1st database and import it into the 2nd.
However, both of these will ultimately migrate all of the data (i.e. not a subset), although you can restrict the dump to specific tables with mysqldump (see the sketch below). Additionally, if you use the mysqldump approach and you're not using InnoDB, you'll need to ensure that the source database isn't being modified while the dump is created, so that the dump is consistent.
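For example, a rough sketch of dumping a single table from the first server and loading it into the second (host names, credentials and the database/table names are placeholders):
mysqldump -h old-server -u user -p sourceDb tableName > tableName.sql
mysql -h new-server -u user -p targetDb < tableName.sql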

You can't do this directly, but as someone else alluded to in a comment, you can use mysqldump to export the contents of a table as a SQL script.
At that point you could run the script on the new server to create the table, or if more manipulation of the data is required, import that data into a table with a different name on the new server, then write a query to copy the data from there.
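For instance, if the dump was imported under the name tableName_import, a short copy query can move the rows into the real table (the names and column list here are hypothetical):
INSERT INTO tableName (id, name)
SELECT id, name
FROM tableName_import;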

Is it possible to make the insert command from existing table data

I have a table with a few rows of data:
   name  score
1  AAA   100
2  BBB   98
3  CCC   85
Now I want to generate the INSERT statement, such as
insert into pepolescore(name,score) VALUES ("CCC",85)
automatically.
Is there a command or function to do this, from the mysql command line or phpMyAdmin?
MySQL queries can address another schema on the same MySQL Server instance by using qualified table names. See https://dev.mysql.com/doc/refman/8.0/en/identifier-qualifiers.html
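For example, on a single instance a query can read from one schema and write into another just by qualifying the names (the schema and table names below are placeholders):
INSERT INTO target_db.customers
SELECT * FROM source_db.customers;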
But this does not work if the tables are on separate MySQL Servers. A given SQL query can only address schemas on the same server.
Here are a few workarounds:
Use mysqldump to export data from one table and then use mysql to import it to the other table on the other instance. You need to be careful not to let mysqldump output the DROP TABLE command, so read about the options here: https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html
MySQL supports a table engine called FEDERATED, where a table can function as a sort of proxy to a table on another MySQL Server. Then you can use INSERT ... SELECT syntax as if the tables were co-located on the same MySQL Server. The Federated engine has limitations, so read https://dev.mysql.com/doc/refman/8.0/en/federated-storage-engine.html and its subsections to learn more.
Use a community tool such as pt-archiver to copy data from one MySQL instance to the other (an example invocation is sketched after this list). Read the manual to learn more: https://docs.percona.com/percona-toolkit/pt-archiver.html
Write your own custom code in a client application. Create two connections, one for each MySQL Server. Fetch query results from the first server, and store the resulting rows in variables in your application. Then use these rows as the tuples to insert using the second connection to the other MySQL Server. This involves writing more code, but you get a lot of flexibility.
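As a rough sketch of the pt-archiver approach mentioned above (host names, credentials and table names are placeholders; check the manual for the exact options your version supports):
pt-archiver --source h=old-server,D=sourceDb,t=tableName,u=user,p=secret \
  --dest h=new-server,D=targetDb,t=tableName \
  --where "1=1" --no-delete --limit 1000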

Accessing data from one MySQL database to another in MySQL Workbench

I have two different databases. I have to access data from one database and insert it into another (with some data processing included; it is not only copying data). Also, the schema is really complex and each table has many rows, so copying the data into a schema in the second database is not an option. I have to do this using MySQL Workbench, so I have to do it with SQL queries. Is there a way to create a connection from one database to another and access its data?
While MySQL Workbench can be used to transfer data between servers (e.g. as part of a migration process), it is not useful when you have to process the data first. Instead you have two other options:
Use a dedicated tool you write yourself to do that (as eddwinpaz mentioned).
Use the capabilities of your server. That is, copy the data to the target server into a temporary table (using dump and restore), then use queries to modify the data as you need it, and finally copy it to the target table (a rough SQL sketch of this follows below).
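A minimal sketch of that second option, assuming the dump was restored into a staging table on the target server (all table, column and processing choices below are made up):
CREATE TABLE orders_staging LIKE orders;
-- restore the dumped rows into orders_staging here
INSERT INTO orders (id, customer_name, total)
SELECT id, UPPER(customer_name), total * 1.1   -- example processing step
FROM orders_staging;
DROP TABLE orders_staging;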

Trying to copy entire MySQL schema using SQL commands

After searching SO, I found answers to the following:
How to copy an entire MySQL schema using mysqldump
How to copy an entire MySQL schema using PHP
How to copy an entire MySQL schema using the enterprise edition of MySQL
How to copy an entire Microsoft SQL Server schema using the menus.
I also found a few hints about copying a MySQL schema using SQL commands.
My question: If I use the following SQL commands to copy a MySQL schema, what parts of the old schema would not be copied? Indexes? Constraints? Views? Anything else?
CREATE SCHEMA new_schema DEFAULT CHARACTER SET utf8;
CREATE TABLE new_schema.table1 LIKE old_schema.table1;
CREATE TABLE new_schema.table2 LIKE old_schema.table2;
CREATE TABLE new_schema.table3 LIKE old_schema.table3;
...;
INSERT INTO new_schema.table1 SELECT * FROM old_schema.table1;
INSERT INTO new_schema.table2 SELECT * FROM old_schema.table2;
INSERT INTO new_schema.table3 SELECT * FROM old_schema.table3;
...;
The CREATE TABLE ... LIKE will take care of indexes and constraints.
You should take care to SET FOREIGN_KEY_CHECKS=0 while you run this, because if table1 has a foreign key to table2, then creating table1 will fail. Likewise inserting data into the tables in the wrong order will fail.
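For example, the whole copy script can be wrapped like this so that creation and insert order no longer matter:
SET FOREIGN_KEY_CHECKS=0;
-- ... the CREATE TABLE ... LIKE and INSERT ... SELECT statements go here ...
SET FOREIGN_KEY_CHECKS=1;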
Your script does not cover:
Views
Triggers
Stored procedures
Stored functions
Events
There are no CREATE ... LIKE ... statements for these other objects. You'll have to use SHOW CREATE ... for each of them and then run the output in the context of the new schema. See the various SHOW CREATE ... statements here: http://dev.mysql.com/doc/refman/5.6/en/show.html
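For example (my_view and my_proc are hypothetical object names):
SHOW CREATE VIEW old_schema.my_view;
SHOW CREATE PROCEDURE old_schema.my_proc;
-- then run the returned CREATE statements while new_schema is the default schema:
USE new_schema;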
I also caution that the way you INSERT INTO... SELECT FROM... will work, but can fill up your rollback segment if the table is very large. Tools like pt-archiver try to copy tables in batches, ascending along the primary key, to avoid this problem.
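A rough illustration of batching by primary key instead of using one huge statement (the id ranges and batch size are arbitrary):
INSERT INTO new_schema.table1
SELECT * FROM old_schema.table1
WHERE id BETWEEN 1 AND 10000;
-- repeat with the next id range (10001-20000, and so on) until all rows are copied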
I think routines can't be copied directly with SQL commands (as far as I know there's nothing like CREATE PROCEDURE myProc LIKE old.myProc).
I would recommend you use mysqldump, since it takes care of copying everything, including the data (if you don't want to copy the data, you can use the -d switch to prevent creating the insert statements).
If you want to create a "template" (a database that is exactly like another database, but without the data), you can use the following:
mysqldump [connectionParameters] -d -R -v yourOldDatabase > databaseTemplate.sql
The options explained:
[connectionParameters]: host, user and password
-d: Don't copy data
-R: Include routines in the dump
-v: Output what mysqldump is doing to the console
You can open this "light" sql script to check how the objects were created.
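To build the new database from this template, the dump can simply be replayed into an empty database (the database name is a placeholder):
mysql [connectionParameters] -e "CREATE DATABASE yourNewDatabase"
mysql [connectionParameters] yourNewDatabase < databaseTemplate.sql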
Hope this helps

How to accomplish "MySQL cross database reference" with PostgreSQL

We will migrate the database from MySQL to PostgreSQL in our product (through Java), so we need to change the MySQL queries to PostgreSQL queries in the Java application. How do we create a table as databasename.tablename in PostgreSQL?
In MySQL, we can create the table directly, e.g. create table information.employee.
Here the database name is "information" and the table name is "employee". Is it possible to achieve the same query in PostgreSQL?
I searched Google and it says cross-database references are not possible. Please help me.
I looked at the pg_class table; it contains the table names in a specific database. Likewise, are database and table relationships stored in any other table?
This is normally done using schemas rather than databases, which is more or less like how MySQL organizes it anyway.
Instead of
create database xyz
use
create schema xyz
When you create tables, create them with the schema prefix:
create table xyz.myTable
You will need to update your search path to see them in the psql command line tool, or if you want to query them without using the schema explicitly. The default schema is public, so when you create a table without a schema name, it ends up in public. If you modify your search_path as below, the default schema becomes the first in the list: xyz.
set search_path=xyz,public,pg_catalog;
and you must not have spaces in that statement. You can do it globally for a user/role too:
alter role webuser set search_path=xyz,public,pg_catalog;
Also, don't forget that postgresql string matches are case sensitive by default (this one catches people out a lot).
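A quick illustration (ILIKE or lower() can be used when you want case-insensitive matching):
SELECT 'Alice' LIKE 'alice';   -- false: LIKE is case sensitive
SELECT 'Alice' ILIKE 'alice';  -- true: ILIKE ignores case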
If you want to have different physical locations for the files for each schema, you can do that with tablespaces. If you have a look at the postgresql documentation page, they have info on how to do it, it's pretty easy.
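A rough sketch (the tablespace name and directory are made up; the directory must already exist, be empty and be owned by the postgres user, and CREATE TABLESPACE needs superuser privileges):
CREATE TABLESPACE fast_space LOCATION '/mnt/fastdisk';
CREATE TABLE xyz.someTable (id integer) TABLESPACE fast_space;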
database in MySQL == schema in PostgreSQL. So you will most probably want to migrate all your mysql dbs into one postgres db. Then you will be able to do "cross-database" queries.
See my answer to this question: Relationship between catalog, schema, user, and database instance

question about MySQL database migration

I have a MySQL database with several tables on a live server, and I would like to migrate this database to another server. The migration I mean here also involves some changes to the database tables, for example: adding some new columns to several tables, adding some new tables, etc.
Now, the only method I can think of is to use a PHP/Python script (the two languages I know), connect to both databases, dump the data from the old database, and then write it into the new database. However, this method is not efficient at all. For example: in the old database, table A has 28 columns; in the new database, table A has 29 columns, and the extra column will have the default value 0 for all the old rows. My script would still need to dump the data row by row and insert each row into the new database.
Using mysqldump etc. won't work. Here is the detail: I have FOUR old databases, which I can name 'DB_a', 'DB_b', 'DB_c', 'DB_d'. The old table A has 28 columns, and I want to add each row of table A to the new database with a new column value 'DB_x' (x indicating which database the row comes from). Since I can't tell the source database from a row's content, the only way I can identify them is through some user input parameters.
Is there any tool, or a better method than writing a script myself? I don't need to worry about concurrent-write problems etc.; the old database will be down (not open to public usage, only for the upgrade) for a while.
Thanks!!
I don't entirely understand your situation with the columns (wouldn't it be more sensible to add any new columns after migration?), but one of the arguably fastest methods to copy a database across servers is mysqlhotcopy. It can copy MyISAM tables only and has a number of other requirements, but it's awfully fast because it skips the create dump / import dump step completely.
Generally when you migrate a database to new servers, you don't apply a bunch of schema changes at the same time, for the reasons that you're running into right now.
MySQL has a dump tool called mysqldump that can be used to easily take a snapshot/backup of a database. The snapshot can then be copied to a new server and installed.
You should figure out all the changes that have been done to your "new" database, and write out a script of all the SQL commands needed to "upgrade" the old database to the new version that you're using (e.g. ALTER TABLE a ADD COLUMN x, etc). After you're sure it's working, take a dump of the old one, copy it over, install it, and then apply your change script.
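A hedged sketch of such an upgrade script, including the per-database source column described in the question (column names and types are guesses):
ALTER TABLE A ADD COLUMN new_col INT NOT NULL DEFAULT 0;
ALTER TABLE A ADD COLUMN db_source CHAR(4) NOT NULL DEFAULT 'DB_a';  -- change the default per source database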
Use mysqldump to dump the data, then load the dump file on the new server with mysql < output.sql. Now the old data is on the new server. Manipulate as necessary.
Sure there are tools that can help you achieving what you're trying to do. Mysqldump is a premier example of such tools. Just take a glance here:
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
What you could do is:
1) You make a dump of the current db, using mysqldump (with the --no-data option) to fetch the schema only
2) You alter the schema you have dumped, adding new columns
3) You create your new schema (mysql < dump.sql - just google for mysql backup restore for more help on the syntax)
4) Dump your data using the mysqldump complete-insert option (see link above)
5) Import your data, using mysql < data.sql
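A command-line sketch of steps 1-5 above (host names, user names and database names are placeholders):
mysqldump -h old-host -u user -p --no-data old_db > dump.sql
# edit dump.sql here to add the new columns (step 2)
mysql -h new-host -u user -p new_db < dump.sql
mysqldump -h old-host -u user -p --no-create-info --complete-insert old_db > data.sql
mysql -h new-host -u user -p new_db < data.sql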
This should do the job for you, good luck!
Adding extra columns can be done on a live database:
ALTER TABLE [table-name] ADD COLUMN [column-name] MEDIUMINT(8) DEFAULT 0;
MySQL will apply the default value to all existing rows.
So here is what I would do:
1) Make a copy of your old database with the mysqldump command.
2) Run the resulting SQL file against your new database; now you have an exact copy.
3) Write a migration.sql file that modifies your database with ALTER TABLE commands and, for complex conversions, some temporary MySQL procedures.
4) Test your script (when it fails, go back to (2)).
5) If all is OK, go back to (1) and go live with your new database.
These are all valid approaches, but I believe what you want is to write a SQL statement that generates the INSERT statements for the new columns you have.
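For example, a statement like the following produces the INSERT statements as text, which can then be edited or run against the new server (the table, columns and the 'DB_a' tag are hypothetical):
SELECT CONCAT(
  'INSERT INTO new_db.A (col1, col2, db_source) VALUES (',
  QUOTE(col1), ', ', QUOTE(col2), ', ''DB_a'');'
) AS insert_stmt
FROM old_db.A;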