I have two different servers, server1 and server2; db1 is on server1 and db2 is on server2. I am trying to join two tables across them in MySQL like this:
Select a.field1,b.field2
FROM [server1, 3306].[db1].table1 a
Inner Join [server2, 3312].[db2].table2 b
ON a.field1=b.field2
But I am getting an error. Is this possible in MySQL?
Yes, it is possible in MySQL.
There are similar questions asked previously too. You have to use the FEDERATED storage engine to do this. The idea goes like this:
You create a federated table locally that is based on the table at the remote location. The structure of the federated table has to exactly match the remote table's structure.
CREATE TABLE federated_table (
id INT(20) NOT NULL AUTO_INCREMENT,
name VARCHAR(32) NOT NULL DEFAULT '',
other INT(20) NOT NULL DEFAULT '0',
PRIMARY KEY (id),
INDEX name (name),
INDEX other_key (other)
)
ENGINE=FEDERATED
DEFAULT CHARSET=latin1
CONNECTION='mysql://fed_user@remote_host:9306/federated/test_table';
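With the federated table in place on server1, the cross-server join becomes an ordinary local query. A minimal sketch, assuming a federated mirror of server2's table2 named federated_table2 (that name, and the shell wrapper around the SQL, are illustrative, not from the original answer):

```shell
#!/usr/bin/env bash

# Once CREATE TABLE ... ENGINE=FEDERATED has mapped server2's table2 into
# server1 as federated_table2, the join from the question runs on server1
# alone. Table and column names follow the question.
join_sql() {
    cat <<'SQL'
SELECT a.field1, b.field2
FROM db1.table1 AS a
INNER JOIN federated_table2 AS b
    ON a.field1 = b.field2;
SQL
}

# join_sql | mysql -u root -p    # run on server1 (assumed credentials)
join_sql
```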
Replication is an alternative, and can be a suitable solution:
server1 - db1 -> replicate to server2. (Now db1 and db2 are on the same server, server2, and the join is easy.)
NOTE: This only makes sense if server2 is capable enough to take on db1's load in terms of storage, processing, etc. As @brilliand mentioned, the FEDERATED approach involves more manual work and is slower.
It's kind of a hack, and it's not a join, but I use bash functions to make it feel like I'm doing cross-server queries:
The explicit version:
tb2lst(){
  # turn a one-column mysql result (header + rows) into "(v1,v2,...)"
  echo -n "("
  tail -n +2 - | paste -sd, | tr -d "\n"
  echo ")"
}
id_list=$(mysql -h'db_a.hostname' -u me -p'ass' -e "SELECT id FROM foo;" | tb2lst)
mysql -h'db_b.hostname' -u me -p'ass' -e "SELECT * FROM bar WHERE foo_id IN $id_list"
+--------+-----+
| foo_id | val |
+--------+-----+
| 1 | 3 |
| 2 | 4 |
+--------+-----+
I wrote some wrapper functions which I keep in my bashrc, so from my perspective it's just one command:
db_b "SELECT * FROM bar WHERE foo_id IN $(db_a "SELECT id FROM foo;" | tb2lst);"
+--------+-----+
| foo_id | val |
+--------+-----+
| 1 | 3 |
| 2 | 4 |
+--------+-----+
At least for my use case, this stitches the two queries together quickly enough that the output is equivalent to the join, and then I can pipe the output into whatever tool needs it.
Keep in mind that the id list from one query ends up as query text in the other query. If you "join" too much data this way, your OS might limit the length of the query (https://serverfault.com/a/163390), so be aware that this is a poor solution for very large datasets. I have found that doing the same thing with a MySQL library like pymysql works around this limitation.
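The wrapper functions mentioned above are not shown in the answer. A minimal sketch, assuming the hosts and credentials from the explicit version (the db_a/db_b names are assumptions):

```shell
#!/usr/bin/env bash

# Convert a one-column mysql result (header line + value rows)
# into a parenthesized, comma-separated list usable in IN (...).
tb2lst() {
    echo -n "("
    tail -n +2 - | paste -sd, | tr -d "\n"
    echo ")"
}

# Hypothetical wrappers: each bakes in a host and credentials so a
# cross-server "join" reads as a single command (names are assumptions).
db_a() { mysql -h 'db_a.hostname' -u me -p'ass' -e "$1"; }
db_b() { mysql -h 'db_b.hostname' -u me -p'ass' -e "$1"; }

# tb2lst itself needs no database; a header plus two ids becomes a list:
printf 'id\n1\n2\n' | tb2lst   # prints (1,2)
```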
Is there a way to create a copy of a database with sample rows using mysqldump command?
mysqldump -u <username> -h <host> -p <database name> [<table name> ...]
I have a fairly large DB and need to create a copy so that a developer can work with the App. Instead of dumping the entire DB, is there a way to randomly sample rows and create a copy of the db?
mysqldump does support row-level filtering, via its --where option. Here is what the reference manual says about it:
--where='where_condition', -w 'where_condition'
Dump only rows selected by the given WHERE condition. Quotes around the condition are mandatory if it contains spaces or other characters that are special to your command interpreter.
But it is not that user-friendly. Even if we put a subquery in the WHERE clause, we still face restrictions that may be impossible to overcome. For instance, let's use the actor table from the sakila sample database. It is legitimate to execute this in the mysql CLI:
select * from actor
where actor_id in (select *
from (select actor_id from actor order by rand() limit 5) t
);
+----------+-------------+-----------+---------------------+
| actor_id | first_name | last_name | last_update |
+----------+-------------+-----------+---------------------+
| 19 | BOB | FAWCETT | 2006-02-15 04:34:33 |
| 91 | CHRISTOPHER | BERRY | 2006-02-15 04:34:33 |
| 11 | ZERO | CAGE | 2006-02-15 04:34:33 |
| 120 | PENELOPE | MONROE | 2006-02-15 04:34:33 |
| 109 | SYLVESTER | DERN | 2006-02-15 04:34:33 |
+----------+-------------+-----------+---------------------+
However, it's erroneous to use the same WHERE clause when using mysqldump.
mysqldump -uroot -p sakila actor --where="actor_id in (select * from (select actor_id from actor order by rand() limit 5) t )" > /tmp/actor_bck.sql
-- error message:
mysqldump: Couldn't execute 'SELECT /*!40001 SQL_NO_CACHE */ * FROM `actor` WHERE actor_id in (select * from (select actor_id from actor order by rand() limit 5) t )': Table 'actor' was not locked with LOCK TABLES (1100)
Besides, as Shadow stated, retaining referential integrity is an issue when using mysqldump. We don't want broken relationships between tables and an unusable dataset. In short, please do not use mysqldump for random row-level sampling.
Under the circumstances, the best I can come up with is to use a stored procedure to do the row-level backup into a new database, with contents like:
create database sakila_bck;
create table sakila_bck.actor select * from sakila.actor order by rand() limit 10;
create table sakila_bck.film_actor select * from sakila.film_actor where actor_id in (select actor_id from sakila_bck.actor);
-- Note: CREATE TABLE xx SELECT * FROM yy does not create keys for the backup table. By the way, if you want to retrieve a random number of rows, you can use a PREPARED statement to supply the LIMIT clause with a randomly generated number beforehand.
The whole process is definitely not a pushover, as we have to keep the table relationships in mind. But once the job is done, we can safely use mysqldump to dump sakila_bck at the database level.
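The sampling-then-dump flow above can also be driven from a small script. A sketch, assuming the sakila sample database (with its film_actor junction table) and local credentials:

```shell
#!/usr/bin/env bash

# Emit the sampling SQL that builds the sakila_bck database described above.
# Keeping it in a function makes the SQL easy to inspect before running it.
sample_sql() {
    cat <<'SQL'
CREATE DATABASE IF NOT EXISTS sakila_bck;
CREATE TABLE sakila_bck.actor
    SELECT * FROM sakila.actor ORDER BY RAND() LIMIT 10;
CREATE TABLE sakila_bck.film_actor
    SELECT * FROM sakila.film_actor
    WHERE actor_id IN (SELECT actor_id FROM sakila_bck.actor);
SQL
}

# Run the sampling on the server, then dump the small database normally.
# (The mysql/mysqldump invocations assume a local server and valid credentials.)
# sample_sql | mysql -u root -p
# mysqldump -u root -p --databases sakila_bck > /tmp/sakila_bck.sql
sample_sql
```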
How can this be possible with two servers? I'm using MySQL and C#.NET; the INSERT is done perfectly, but now I don't know how to do the SELECT!
server: 127.0.0.1
tbl_student
roll_no| stu_name
1 | abc
2 | def
3 | xyz
Server:127.0.0.2
tbl_room
room_id| room_name
1 | A1
2 | A2
3 | A3
tbl_info (on server:127.0.0.2)
id | roll_no | room_id
1 | 1 |2
2 | 2 |3
3 | 3 |3
select i.id, i.roll_no, s.stu_name, r.room_name
from tbl_student as s, tbl_room as r, tbl_info as i
where i.roll_no = s.roll_no and i.room_id = r.room_id
I don't know which version you are using. Try researching "DB-Link"; that is the term for what you need.
In a quick search I saw that there is an open ticket in the MySQL dev worklog:
http://dev.mysql.com/worklog/task/?id=1150
You need the FEDERATED storage engine to link a table on the second server to the first.
If your main server is 127.0.0.2, you can map the table tbl_student that lives on server 127.0.0.1 into it. First you need to create a mirror table (pseudo code):
CREATE TABLE `tbl_student` (`roll_no` INT, `stu_name` VARCHAR(100)) ENGINE=FEDERATED
CONNECTION='mysql://127.0.0.1:3306/dbname/tbl_student';
Now you can run your queries against the main server only.
The FEDERATED storage engine supports SELECT, INSERT, UPDATE, DELETE, and indexes. It does not support ALTER TABLE, or any Data Definition Language statements that directly affect the structure of the table, other than DROP TABLE. The current implementation does not use prepared statements.
Performance on a FEDERATED table is slower.
For more info:
http://dev.mysql.com/doc/refman/5.0/en/federated-use.html
I hope this helps.
I have run mysql -u root -p gf < ~/gf_backup.sql to restore my db. However, when I look at the process list I see that one query has been idle for a long time, and I do not know why.
mysql> show processlist;
+-----+------+-----------+-------------+---------+-------+-----------+------------------------------------------------------------------------------------------------------+
| Id | User | Host | db | Command | Time | State | Info |
+-----+------+-----------+-------------+---------+-------+-----------+------------------------------------------------------------------------------------------------------+
| 662 | root | localhost | gf | Query | 18925 | query end | INSERT INTO `gf_1` VALUES (1767654,'90026','Lddd',3343,34349),(1 |
| 672 | root | localhost | gf | Query | 0 | NULL | show processlist |
+-----+------+-----------+-------------+---------+-------+-----------+------------------------------------------------------------------------------------------------------+
Please check free space with the df -h command (if you are under Linux/Unix). If you are out of space, do not kill or restart MySQL; let it catch up with the changes once you free some space.
You may also want to check the max_allowed_packet setting in my.cnf and set it to something like 256M; please refer to http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_max_allowed_packet
Probably your dump is very large and contains a lot of normalized data (records split across a bunch of tables, with a bunch of foreign key constraints, indexes, and so on).
If so, you may try removing all constraint and index definitions from the SQL file, importing the data, and then re-creating the removed definitions. This is a well-known trick to speed up imports, because INSERT commands that skip constraint validation are a lot faster, and building the indexes afterwards can be done in a single pass.
See also: http://support.tigertech.net/mysql-large-inserts
Of course, you should kill the stuck query first, and remove any fragments it has already created.
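A lighter-weight variant of the same trick is to disable constraint checks only for the duration of the import, instead of editing the dump file. A sketch (the wrap_fast_import name and file names are assumptions):

```shell
#!/usr/bin/env bash

# Wrap a dump file with statements that turn off per-row foreign key and
# uniqueness validation while the INSERTs run, then re-enable it at the end.
# foreign_key_checks and unique_checks are standard MySQL session variables.
wrap_fast_import() {
    echo 'SET foreign_key_checks = 0;'
    echo 'SET unique_checks = 0;'
    cat "$1"
    echo 'SET unique_checks = 1;'
    echo 'SET foreign_key_checks = 1;'
}

# Usage (assumed file and database names):
#   wrap_fast_import ~/gf_backup.sql | mysql -u root -p gf
```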