Federated Table Clarification - mysql

In my prior job, copying data from our production environment was a breeze, using a statement like the following:
INSERT INTO tablename@UNIXTEST2
SELECT * FROM tablename@UNIXPROD2
My current job's databases aren't set up in this fashion, so I did some research on MySQL 5.0+, because that's what we are using for one of our customers. I came across FEDERATED tables, and as I was reading, I found this (here):
As of MySQL 5.0.46, FEDERATED performs bulk-insert handling such that multiple rows are sent to the remote table in a batch. This provides a performance improvement. Also, if the remote table is transactional, it enables the remote storage engine to perform statement rollback properly should an error occur. This capability has the following limitations:
To me, this indicates that (A) I can copy the data from our prod database to our test database; and (B) any actions performed on the federated table will also be processed on the source table, which is not what I want. I have some scripts that I need to run, and I want to run them against actual prod data to make sure they work before I use them in the prod environment.
My question: Is my interpretation correct?
Assuming it is, I've tried:
select * from database.tablename@ipaddress, but I received an error message telling me to check the MySQL manual for the version I'm running, which is what I'm going to do after I hit "Post Your Question."
I would appreciate any help in this matter.
EDIT: After further research, I think I might be able to do what I need using SELECT ... INTO OUTFILE and LOAD DATA INFILE, whereby I would export the rows from the prod table(s) and then load them into the test table(s). Thoughts?
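For reference, a minimal sketch of that approach, assuming a table named tablename and a hypothetical file path /tmp/tablename.csv (the file has to be copied between servers, and secure_file_priv must allow reading/writing it):
-- on the prod server: export the rows to a delimited file
SELECT * FROM tablename
INTO OUTFILE '/tmp/tablename.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';
-- on the test server: load the file into the matching table
LOAD DATA INFILE '/tmp/tablename.csv'
INTO TABLE tablename
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';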

My answer:
A - correct
B - correct.
You could set the user's permissions to read-only, but in your situation I would not use federated tables; instead, dump the whole DB into a file and then restore it on the other server. The easiest way is to use MySQL Workbench.
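For example, a hedged command-line sketch using mysqldump (the host, user, and schema names are placeholders):
# dump the prod schema, then load it into the test schema
mysqldump -h prod-host -u dump_user -p prod_db > prod_db.sql
mysql -h test-host -u load_user -p test_db < prod_db.sql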
And some info about federated tables:
You need FEDERATED enabled only on server B
You can access a view on A by making a federated table on B
You can do INSERT, UPDATE, and DELETE on a federated table
If you need read-only access you can limit the user privileges (see the GRANT sketch after this list)
BUT! You can't use aggregate functions on a view that will be federated (e.g. COUNT(), MAX(), UNION...); well, you can, but it will lag
Remember to set the KEYs on the federated table you are creating (or it will lag horribly)
Remember to use ALGORITHM=MERGE on views
Remember to grant access to USERNAME (from the connection string) on server A
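A minimal sketch of that read-only grant on server A, reusing the USERNAME/PASSWORD placeholders from the connection string and the baseSchema.baseTable names used in the examples below (adjust to your own objects):
CREATE USER 'USERNAME'@'%' IDENTIFIED BY 'PASSWORD';
-- SELECT only, so writes through the federated table on server B will be refused
GRANT SELECT ON `baseSchema`.`baseTable` TO 'USERNAME'@'%';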
example of a federated table on server B:
delimiter $$
CREATE TABLE `schemaName`.`tableName`(
`keyName` VARCHAR(10) NOT NULL,
`key2Name` DATE DEFAULT '2012-01-01',
KEY `keyName` (`keyName`)
)
ENGINE=FEDERATED
DEFAULT CHARSET=utf8
CONNECTION='mysql://USERNAME:PASSWORD@IPADDRESS:PORTNUMBER/baseSchema/baseTable'
$$
And the view on server A:
CREATE
ALGORITHM = MERGE
DEFINER = `ANOTHERUSERNAME`@`%`
SQL SECURITY DEFINER
VIEW `baseSchema`.`baseTable` AS
SELECT
... AS `keyName`,
... AS `key2Name`
FROM
...

Related

How can I scan for new data in one database and send it to another one?

I have two databases on two servers.
The first one contains many tables which contains many codes.
An example is "Products" Table which contains the column "ProductCode".
Let's say there are 5 distinct records in that column, i.e. ProductCode1 -> ProductCode5.
The second database contains all the fields from each table defined in the first database.
I use the second database to provide definitions for each code found in the first database. I have migrated all the data from all the tables in the first DB over to the new one manually, via an Excel file and a script.
However, I would like to create an SQL function which scans the first database and when it finds new rows of data, it adds that data to the second database.
This would save me the hassle of querying all the tables individually and then adding them manually, as I originally did.
Please note that both databases are stored on separate servers.
Is this possible to achieve?
Or are there any better options?
While there are backup, recovery, and replication methods, if the two databases maintain different data there is no single, convenient SQL function to migrate new data in all tables from one database to another. However, you can build a .sql script or stored procedure that runs duplicate-avoiding queries for new data.
Consider the following steps, where steps 1 and 2 are to be run for each table:
Create a FEDERATED table pointing at the remote MySQL database, so it is locally available for querying while the physical storage remains in the remote database. See the overview of the Federated Storage Engine. Note: this step only needs to be run once for each needed table.
CREATE TABLE federated_table (
id INT(20) NOT NULL AUTO_INCREMENT,
name VARCHAR(32) NOT NULL DEFAULT '',
other INT(20) NOT NULL DEFAULT '0',
PRIMARY KEY (id), INDEX name (name),
INDEX other_key (other)
)
ENGINE=FEDERATED
DEFAULT CHARSET=utf8mb4
CONNECTION='mysql://fed_user@remote_host:9306/federated/test_table';
The federated table's schema must be identical to the remote table's. Therefore, align the data types in the CREATE TABLE with the output of SHOW CREATE TABLE on the remote database.
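For example, on the remote (first) database, using the Products table from the question:
-- copy the column definitions from this output into the local FEDERATED CREATE TABLE
SHOW CREATE TABLE Products;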
Run SQL duplicate-avoiding queries, e.g. NOT IN vs. NOT EXISTS vs. LEFT JOIN / IS NULL (a LEFT JOIN / IS NULL version is sketched after the EXCEPT example). One other method is EXCEPT, from the same family as the UNION and INTERSECT operators (EXCEPT requires MySQL 8.0.31 or later):
INSERT INTO Products (Col1, Col2, Col3, ...)
SELECT Col1, Col2, Col3, ...
FROM my_federated_products_table
EXCEPT
SELECT Col1, Col2, Col3, ...
FROM Products
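For comparison, a hedged sketch of the LEFT JOIN / IS NULL variant, assuming ProductCode uniquely identifies a row (the Col* names are placeholders, as above):
INSERT INTO Products (ProductCode, Col2, Col3)
SELECT f.ProductCode, f.Col2, f.Col3
FROM my_federated_products_table AS f
LEFT JOIN Products AS p ON p.ProductCode = f.ProductCode
WHERE p.ProductCode IS NULL;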
Automate step 2 for each table into a single stored procedure (or .sql script) to be run multiple times in the future.
DELIMITER $
CREATE PROCEDURE migrate_new_data()
BEGIN
-- ALL INSERT INTO STATEMENTS FROM FEDERATED TABLES
END $
DELIMITER ;
Run the procedure each time from Excel, Workbench, the command line, or anywhere else; this can serve as your "SQL function":
CALL migrate_new_data;
Looks like FEDERATED storage is what you are looking for.
From the docs:
The FEDERATED storage engine lets you access data from a remote MySQL database without using replication or cluster technology. Querying a local FEDERATED table automatically pulls the data from the remote (federated) tables. No data is stored on the local tables.
Here is an article that shows how to configure it: https://medium.com/@techrandomthoughts/setting-up-federated-tables-in-mysql-8a17520b988c

MySQL Cluster 7.4.15 - ndb_restore Fails Because of an Orphan Fragment

I want to know if it's possible to drop a table fragment that is preventing me from performing a restore with the ndb_restore tool.
When I run the restore, it throws the following error:
Create table db_died_maestro/def/NDB$FKM_3194_0_mae_tipo_reg_evaluacion failed: 721: Schema object with given name already exists
Restore: Failed to restore table: db_died_maestro/def/NDB$FKM_3194_0_mae_tipo_reg_evaluacion ... Exiting
NDBT_ProgramExit: 1 - Failed
I have already dropped the DB_DIED_MAESTRO database prior to running the restore, but this fragment is not being dropped along with the database.
I have checked that the fragment is in the database catalog using these queries:
select * from ndbinfo.operations_per_fragment
where fq_name like 'db_died_maestro%';
And this query:
select * from ndbinfo.memory_per_fragment
where fq_name like '%FKM_3194_0_mae_tipo_reg_evaluacion';
This fragment was created on a previous run of the NDB_RESTORE tool. Please help me.
The table is a foreign key 'mock' table (indicated by the NDB$FKM name prefix).
Foreign key mock tables are created transiently in some cases to implement the foreign_key_checks = 0 feature of MySQL. This feature requires storage engines to support unordered creation of tables with partially defined foreign key constraints, which can be arbitrarily enabled (without revalidation) at a later time.
Foreign key mock tables are normally entirely managed by the Ndb storage engine component of MySQL, and so should not be visible unless there has been a failure or bug of some kind.
If you can share information about activities occurring before this problem then that would help us understand how this happened and whether it can be avoided.
As a workaround, it should be possible for you to use the ndb_drop_table utility to drop this table before re-attempting the failing restore (see the sketch below). You may have to escape or quote the $ in the name passed as a command-line argument from a shell. You should probably also check for any other NDB$FKM tables in a similar situation.
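A hedged sketch of that workaround (mgmt_host is a placeholder for the management node's connect string; the single quotes keep the shell from expanding the $):
ndb_drop_table -c mgmt_host -d db_died_maestro 'NDB$FKM_3194_0_mae_tipo_reg_evaluacion'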

MySQL Queries from the MySQL Workbench won't replicate

I am preparing to make some changes to a database I manage and was unsure whether what I wanted to do would replicate properly, so I ran some tests in a test environment. It turns out the changes do replicate, but only as long as I do not run the commands from MySQL Workbench.
For example, if I have a database named db_test and a table in that database named test_a with only a single column id, and I try to execute this from Workbench: INSERT INTO db_test.test_a (id) VALUES (114);
I get the expected row in the master database, but it never replicates to the slave.
When I perform a SHOW SLAVE STATUS, it shows everything is fine and current. If I then use a different SQL client such as Sequel Pro and insert another row the same way (but obviously with a different id), it shows up in the master and replicates to the slave.
This has me baffled, and concerned as I want to understand what the difference is so I can avoid performing actions that never replicate.
If you have set --replicate-do-db on the slave to filter replication to database db_test, note that replication is filtered based on the default database, so make sure that you issue USE db_test before the INSERT. Your clients may be handling the default database differently, or you may be issuing different statements from each.
Using --replicate-do-db set to db_test on the slave, this will replicate:
USE db_test;
INSERT INTO test_a (id) VALUES (114);
but this will not:
USE other_db;
INSERT INTO db_test.test_a (id) VALUES (114);
To get replication to work regardless of the current default database, use --replicate-wild-do-table to configure the database and table to replicate, or don't filter at all.
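A minimal my.cnf sketch of that option on the slave (the db_test.% pattern is an assumption; adjust it to the tables you actually need):
[mysqld]
replicate-wild-do-table=db_test.%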
Also, make sure that you are connected to the Master database server.

MySQL - Trigger or Replication is better?

I want to replicate a certain table from one database into another database on the same server. Both tables contain exactly the same fields.
I was considering using MySQL replication to replicate that table, but some people said that it would increase I/O, so I found another way: creating three triggers (INSERT, UPDATE, and DELETE) that perform exactly what I expect.
My question is: which way is better? Is using MySQL replication better even though it's on the same server, or is using triggers to replicate the data better?
Thanks.
I don't know what your goal is, but I achieved mine by making use of the VIEW functionality.
I had two different applications with separate databases on the same MySQL server. Application2 needed to get a bit of data from Application1. In general, this is a trivial situation that you can handle with USE DB1; or USE DB2; as needed, but my programming framework does not work very well with multiple DBs.
So, let's see my solution...
Here is my select query to retrieve this data:
SELECT id, name FROM DB1.customers;
So, using DB2 as default schema, I've created a VIEW:
USE DB2;
CREATE VIEW app1_customers AS SELECT id, name FROM DB1.customers;
Now I can retrieve this data in DB2 as a regular table with a regular SELECT statement.
SELECT * FROM DB2.app1_customers;
Hope it's useful. BR
Assuming you have two databases on the same server, i.e. DB1 and DB2, and the table is called tbl1 and sits in DB1, you can query the table like this:
USE DB1;
SELECT * FROM tbl1;
USE DB2;
SELECT * FROM DB1.tbl1;
This way you won't need to copy the data or worry about extra space and extra code. You can query a table in another database on the same server. Replication and triggers are not your answer here. You could also create a view to encapsulate the SQL statement.
Definitely, triggers are the way to go. Having another server (slave) means setting aside several MB for installation and logs, plus CPU and memory usage.
I'd use triggers to keep both tables equal. If you want to create a table with the same column definitions and data, use:
USE db2;
CREATE TABLE t1 AS SELECT * FROM db1.t1;
After that, go ahead and create the triggers for the UPDATE, INSERT and DELETE statements (a sketch of the INSERT one follows).
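A minimal sketch of the INSERT trigger, assuming hypothetical columns id and name; the UPDATE and DELETE triggers follow the same pattern:
-- keep db2.t1 in sync whenever a row is inserted into db1.t1
CREATE TRIGGER db1.t1_after_insert
AFTER INSERT ON db1.t1
FOR EACH ROW
  INSERT INTO db2.t1 (id, name) VALUES (NEW.id, NEW.name);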
Also, you could ALTER the new table to a different engine like MEMORY, or add indexes, to see if you can improve something.

I need to join table from other database and sometimes other server

My project has its own database. I also use a table of users, which is in another database. Two offices have their data on the same server, but the third one has its own users table on another server.
So, in lots of queries I need to join either the table some_db.users or other_server.some_db.users.
What solution would you advise for this scenario?
I use MySQL.
There are FEDERATED tables in MySQL:
The FEDERATED storage engine lets you access data from a remote MySQL database without using replication or cluster technology. Querying a local FEDERATED table automatically pulls the data from the remote (federated) tables. No data is stored on the local tables.
First, you must have a table on the remote server that you want to access by using a FEDERATED table. Suppose that the remote table is in the sakila database and is defined like this:
CREATE TABLE test_table (
id INT(20) NOT NULL AUTO_INCREMENT,
name VARCHAR(32) NOT NULL,
PRIMARY KEY (id)
)
ENGINE=MyISAM
DEFAULT CHARSET=latin1;
Next, create a FEDERATED table on the local server for accessing the remote table:
CREATE TABLE federated_table (
id INT(20) NOT NULL AUTO_INCREMENT,
name VARCHAR(32) NOT NULL,
PRIMARY KEY (id)
)
ENGINE=FEDERATED
DEFAULT CHARSET=latin1
CONNECTION='mysql://fed_user:fed_user@197.186.1.199:3306/sakila/test_table';
Sample connection strings:
CONNECTION='mysql://username:password@hostname:port/database/tablename'
CONNECTION='mysql://username@hostname/database/tablename'
CONNECTION='mysql://username:password@hostname/database/tablename'
The basic structure of this table should match that of the remote table, except that the ENGINE table option should be FEDERATED.
Execute:
show variables like '%federated%';
to check if FEDERATED storage engine is available on your local server.
The table federated_table on localhost becomes a virtual table for test_table on the remote server.
Now you can use a JOIN between tables in a DB on the localhost server. If there is a table called test on your localhost server, and you want to JOIN it with the former sakila.test_table, which is on the remote server, write a query like the one shown below:
SELECT * FROM `federated_table` JOIN `test`;
The federated_table in the query will actually refer to test_table on the remote server.
On enabling FEDERATED Storage Engine
The FEDERATED storage engine is not enabled by default in the running server; to enable FEDERATED, you must start the MySQL server binary using the --federated option.
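Equivalently, you can enable it in the server's option file instead of on the command line (a minimal my.cnf sketch):
[mysqld]
federated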
NOTE:
Optional storage engines require privileges and will fail to load when --skip-grant-tables is specified.
As a result, the entire server will fail to load, and the following error will appear in the logs:
110318 21:37:23 [ERROR] /usr/local/libexec/mysqld: unknown option '--federated'
This in turn means that an upgrade from 5.x needs to be done in two steps if you have federated tables: once with --skip-grant-tables and without --federated, then once without --skip-grant-tables and with --federated.
Source: The FEDERATED Storage Engine
Please mention the database also.
In SQL Server you can use a linked server: http://msdn.microsoft.com/en-us/library/ms188279.aspx
In MySQL, you can join tables from different databases using fully qualified names like
`database_name1`.`table_name1` JOIN `database_name2`.`table_name2`
But I fear you can't join tables from different servers, because for that you would need to have two different connections, and as far as I know there are no fully qualified connection names that can be used in the query.
Alternatively, you can create local temporary table(s) on one of the servers and run the query there (a rough sketch follows). But in this case you will need to transfer data from one server to the other. You can use a MySQL GUI tool like SQLyog or MySQL Admin to transfer data from one server to another and to synchronize the databases on the two servers.
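A rough sketch of the temporary-table approach, with placeholder column names and a hypothetical my_project_table / user_id on the local server:
-- stage the transferred users rows locally, then join as usual
CREATE TEMPORARY TABLE tmp_users (
  id INT NOT NULL PRIMARY KEY,
  name VARCHAR(64) NOT NULL
);
-- ...load the rows exported from the other server into tmp_users...
SELECT p.*, u.name
FROM my_project_table AS p
JOIN tmp_users AS u ON u.id = p.user_id;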
Hope it helps....
Federated tables are your solution for tables on other servers. They are very slow though if you perform joins on them.
If you just want to read data from another database on the same server you can use a view. This way you have all tables virtually in one database and you have to open only one connection in your application.
CREATE
VIEW `my_db`.`table_name`
AS
(SELECT * FROM `other_db`.`table_name`);