mysql proxy r/w replication and temporary tables - mysql

I am doing master/slave replication on MySQL 5.1, with read/write splitting through mysql-proxy 0.8.x.
It works fine except with temporary tables: MySQL throws an error that the temporary table does not exist.
This is the query log for the master server:
CREATE TEMPORARY TABLE IF NOT EXISTS sh ( ad_id MEDIUMINT( 8 ) UNSIGNED NOT NULL, score float , INDEX ( `ad_id` ), INDEX ( `score` )) ENGINE = MEMORY
INSERT INTO sh
SELECT cl.ID, 1
FROM classifieds cl
WHERE cl.advertiser_id = '40179'
This is the query log for the slave:
CREATE TEMPORARY TABLE IF NOT EXISTS sh ( ad_id MEDIUMINT( 8 ) UNSIGNED NOT NULL, score float , INDEX ( `ad_id` ), INDEX ( `score` )) ENGINE = MEMORY
This is the MySQL error message:
Occured during executing INSERT INTO sh SELECT cl.ID, 1 FROM classifieds cl WHERE cl.advertiser_id = '40179' statement
Error: 1146 Table 'dbname.sh' doesn't exist
If I query the master directly (changing the PHP DB connection to point at the master instead of mysql-proxy), it works without problems.
I am using this mysql proxy config:
[mysql-proxy]
daemon = true
pid-file = /home/mysqladm/mysql-proxy.pid
log-file = /home/mysqladm/mysql-proxy.log
log-level = debug
proxy-address = 192.168.0.109:3307
proxy-backend-addresses = 192.168.0.108:3306
proxy-read-only-backend-addresses = 192.168.0.109
proxy-lua-script = /usr/local/mysql-proxy/share/doc/mysql-proxy/rw-splitting.lua
Does anybody have an idea how to fix this? Thank you for any help!
// edit next day
I believe I know why this isn't working:
MySQL Proxy sends the CREATE TEMPORARY TABLE and INSERT ... SELECT statements to the master, which replicates them correctly to the slave. Then, in the next step, the SELECT is sent to the slave. Unfortunately, in MySQL a temporary table is only visible to the connection that created it, so the temporary table created by replication is not visible to the second connection that mysql-proxy opens on the slave.
I am now trying to solve this by changing my application so that connections using temporary tables go directly to the master.
Please let me know if you believe there is a better solution.

Yes, that's exactly the problem. This is one of the pitfalls of splitting read queries with MySQL Proxy instead of having the application layer make that determination for itself.
It sounds like what you're doing is putting that determination back into the application layer, but for these tables only. That's a fine workaround. If you find yourself making more exceptions that require pointing a dbh directly at a database, consider abstracting that code and giving your application a way to request a dbh for a particular functionality. In this case, you'd like your code to ask a library "give me a dbh that I can perform TEMPORARY TABLE queries on."
Another way would be to give all TEMPORARY TABLEs recognizable names (maybe make them all start with "tmp_") which would give Proxy a fighting chance to send SELECTs on them to the right place.
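A sketch of that naming convention, reusing the table from the question (the tmp_ prefix is the only change, and the routing rule itself is left to the proxy script):

```sql
-- The prefix marks the table as temporary for any name-based routing rule
CREATE TEMPORARY TABLE IF NOT EXISTS tmp_sh (
    ad_id MEDIUMINT(8) UNSIGNED NOT NULL,
    score FLOAT,
    INDEX (`ad_id`),
    INDEX (`score`)
) ENGINE = MEMORY;

-- A proxy-side rule could match "tmp_%" in the statement text
-- and keep this SELECT on the master connection
SELECT ad_id, score FROM tmp_sh ORDER BY score DESC;
```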

Related

How to work with MySQL temp tables in a Galera Cluster?

I am scaling my application from a single MySQL server to a 3-node MySQL Galera Cluster. Apparently temporary tables do not get replicated.
Is there a common workaround for this problem?
My current code looks like this:
$stmt = "
CREATE TEMPORARY TABLE tmp (`city_id` MEDIUMINT( 8 ) UNSIGNED NOT NULL ,INDEX ( `city_id` ) )
";
db_query($stmt);
# travel tips
$stmt = "
INSERT INTO tmp
SELECT city_id
FROM $DB.$T33 g
WHERE g.country_code = '$country[code]'
GROUP BY city_id
";
execute_stmt($stmt, $link);
The error message is:
Error: 1146 Table 'test.tmp' doesn't exist
CREATE TEMPORARY TABLE creates a table visible only to the session where it was created. No other connections can see it.
A single connection stays connected to one node.
With those two in mind, it does not matter whether such a table is replicated.
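A minimal demonstration of that session scoping, assuming two separate client connections (even to the same node):

```sql
-- Connection 1
CREATE TEMPORARY TABLE tmp (city_id MEDIUMINT(8) UNSIGNED NOT NULL);
INSERT INTO tmp VALUES (1);
SELECT * FROM tmp;   -- returns the row

-- Connection 2 (a different session, same node)
SELECT * FROM tmp;   -- ERROR 1146: Table 'test.tmp' doesn't exist
```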
ROW based replication is a requirement of Galera.
MyISAM tables are not replicated.
With those two additional bullet items, it does not even matter if the TEMPORARY TABLE is ENGINE=MyISAM (or MEMORY).
Back to your problem. What do db_query and execute_stmt do?
Do they also connect to the server? Not good. Have only one connection for your program.
Do they go through some form of Proxy before getting to a Galera node? It would not be good for it to be switching nodes.

MySQL Replication fails intermittently with HA_ERR_KEY_NOT_FOUND when inserting from a tmp table

I'm running MySQL 5.6 on AWS RDS. I have a MySQL slave via a read replica in RDS as well. Replication to the slave fails with an error when running a stored proc that inserts from a temporary table into a non-temporary table.
My reading of the mysql documentation is that this case is handled as long as we use mixed mode binlogging (which we do). http://dev.mysql.com/doc/refman/5.6/en/binary-log-mixed.html
Is this a mysql bug or am I missing something? Is this approach simply not supported when using mysql slaves?
The stored proc that is causing trouble is doing something like this, where MySummaryTable is the non-temp table and tmp_locations_table is the temp one:
CREATE TEMPORARY TABLE tmp_locations_table
INSERT INTO MySummaryTable (
accountID,
locationID
)
SELECT
row_data.accountID,
row_data.locationID
FROM tmp_locations_table row_data
GROUP BY
row_data.accountID,
row_data.locationID
ON DUPLICATE KEY UPDATE
locationID=123
The exact mysql error I'm seeing isn't particularly helpful: Could not execute Update_rows event on table myschema.MySummaryTable; Can't find record in 'MySummaryTable', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log blahblahblah

MySQL Row-Based Replication (RBR) - Log file containing unexpected statement

I just configured two MySQL servers for MASTER-MASTER replication.
I chose RBR replication for several reasons.
My configuration on server ONE:
#replication
server-id=1
binlog_do_db = db1
binlog_ignore_db = db2
log-bin="C:/ProgramData/MySQL/my56"
auto_increment_increment = 2
auto_increment_offset = 1
binlog_format=ROW
replicate_do_db=db1
and on the server TWO :
#replication
server-id=2
binlog_do_db = db1
log-bin="C:/ProgramData/MySQL/my56"
auto_increment_increment = 2
auto_increment_offset = 2
binlog_format=ROW
replicate_do_db=db1
With this, the replication works.
For example, on server ONE, if I execute :
USE db1;
INSERT INTO db1.table1 values (foo,bar);
It works on server TWO.
If on server ONE I execute :
USE db1;
INSERT INTO second_db.table2 values (foo,bar);
The insert is not executed on server TWO, which is good.
If on server ONE I execute :
CREATE table db1.tableFoo(id INT NOT NULL AUTO_INCREMENT PRIMARY KEY);
The CREATE TABLE is not executed on server TWO, which is good: since I chose row-based replication, I have to manually execute the CREATE statement on server TWO. That's exactly what I want.
Now, here is my problem:
If on the server ONE I execute :
USE db1;
CREATE table db1.tableFoo(id INT NOT NULL AUTO_INCREMENT PRIMARY KEY);
The CREATE TABLE is executed on server TWO, which is NOT good! Normally, with row-based replication, the CREATE statement should not be replicated.
Worse: if after the USE db1; I create a table in another database, the CREATE TABLE is still replicated to server TWO, and the slave aborts there because the database doesn't exist...
Do you have any idea? I don't want any CREATE / ALTER / CREATE USER ... statement sent to my replica, even when I use a USE db1;
I based my work on the MySQL documentation, especially this page: http://dev.mysql.com/doc/refman/5.6/en/replication-options-binary-log.html
Thank you, and merry Xmas!
DDL statements are always logged using statement-based replication, regardless of the binlog format you've chosen. As a result, the default (current) database matters when you execute CREATE TABLE statements.
From http://dev.mysql.com/doc/refman/5.6/en/replication-options-binary-log.html#option_mysqld_binlog-do-db :
DDL statements such as CREATE TABLE and ALTER TABLE are always logged as statements, without regard to the logging format in effect, so the following statement-based rules for --binlog-do-db always apply in determining whether or not the statement is logged.
...Only those statements are written to the binary log where the default database (that is, the one selected by USE) is db_name.
This suggests that the behavior you observe is expected, although it is a bit odd.
If possible, I'd suggest that you USE an unreplicated DB (for example mysql) before executing DDL that you do not want to replicate in your application.
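A sketch of that workaround, assuming the mysql schema (or any schema not listed in binlog_do_db) is acceptable as a temporary default database:

```sql
USE mysql;  -- default database is not db1, so with binlog_do_db=db1 this DDL is not logged
CREATE TABLE db1.tableFoo (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY);
USE db1;    -- switch back before running statements that should replicate
```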

Federated Table Clarification

In my prior job, I was able to copy data from our production environment in a breeze using statements like:
INSERT INTO tablename@UNIXTEST2
SELECT * FROM tablename@UNIXPROD2
My current job's databases aren't set up this way.
So I did some research on MySQL 5.0+, since that's what we use for one of our customers, and I came across FEDERATED tables. As I was reading, I found this (here):
As of MySQL 5.0.46, FEDERATED performs bulk-insert handling such that multiple rows are sent to the remote table in a batch. This provides a performance improvement. Also, if the remote table is transactional, it enables the remote storage engine to perform statement rollback properly should an error occur. This capability has the following limitations:
To me, this indicates that (A) I can copy the data from our prod database to our test database; and (B) any actions performed on the federated table will also be performed on the source table, which is not what I want. I have some scripts I need to run, and I want to run them against actual prod data to make sure they work before using them in the prod environment.
My question: Is my interpretation correct?
Assuming it is, I've tried:
select * from database.tablename@ipaddress, but received an error message telling me to check the MySQL manual for the version I'm running, which is what I'm going to do after I hit "Post Your Question."
I would appreciate any help in this matter.
EDIT: After further research, I think I might be able to do what I need using OUTFILE and INFILE, whereby I would use OUTFILE on the prod table(s) and then INFILE those rows into the test table(s). Thoughts?
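A rough sketch of that OUTFILE/INFILE approach (the file path, table, and column names are placeholders):

```sql
-- On the production server: dump the rows to a file
SELECT id, name
INTO OUTFILE '/tmp/tablename.csv'
FIELDS TERMINATED BY ','
FROM tablename;

-- On the test server, after copying the file over:
LOAD DATA INFILE '/tmp/tablename.csv'
INTO TABLE tablename
FIELDS TERMINATED BY ',';
```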
My answer:
A - correct
B - correct.
You could set the user permissions to read-only, but in your situation I would not use federated tables; instead, dump the whole DB into a file and then restore it on the other server. Easiest way: use MySQL Workbench.
And some info about federated tables:
You need FEDERATED enabled only on server B
You can access a view on A by making a federated table on B
You can do INSERT / UPDATE / DELETE on a federated table
If you need read-only access, you can limit the user privileges
BUT! You can't use aggregate functions (e.g. COUNT(), MAX(), UNION...) on a view which will be federated (you can, but it will lag)
Remember to set the KEYs on the federated table you are creating (or it will lag horribly)
Remember to use ALGORITHM=MERGE on views
Remember to grant access to USERNAME (from the connection string) on server A
example of a federated table on server B:
delimiter $$
CREATE TABLE `schemaName`.`tableName`(
`keyName` VARCHAR(10) NOT NULL,
`key2Name` DATE DEFAULT '2012-01-01',
KEY `keyName` (`keyName`)
)
ENGINE=FEDERATED
DEFAULT CHARSET=utf8
CONNECTION='mysql://USERNAME:PASSWORD@IPADDRESS:PORTNUMBER/baseSchema/baseTable'
$$
And the view on server A:
CREATE
ALGORITHM = MERGE
DEFINER = `ANOTHERUSERNAME`@`%`
SQL SECURITY DEFINER
VIEW `baseSchema`.`baseTable` AS
SELECT
... AS `keyName`,
... AS `key2Name`
FROM
...

I need to join table from other database and sometimes other server

My project has its own database. I also use a users table that lives in another database. Two offices keep their data on the same server, but the third one has its own users table on another server.
So, in lots of queries I need to join either the table some_db.users or other_server.some_db.users.
What solution would you advise for this scenario?
I use MySQL.
There are FEDERATED tables in MySQL:
The FEDERATED storage engine lets you access data from a remote MySQL
database without using replication or cluster technology. Querying a
local FEDERATED table automatically pulls the data from the remote
(federated) tables. No data is stored on the local tables.
First, you must have a table on the remote server that you want to access by using a FEDERATED table. Suppose that the remote table is in the sakila database and is defined like this:
CREATE TABLE test_table (
id INT(20) NOT NULL AUTO_INCREMENT,
name VARCHAR(32) NOT NULL,
PRIMARY KEY (id)
)
ENGINE=MyISAM
DEFAULT CHARSET=latin1;
Next, create a FEDERATED table on the local server for accessing the remote table:
CREATE TABLE federated_table (
id INT(20) NOT NULL AUTO_INCREMENT,
name VARCHAR(32) NOT NULL,
PRIMARY KEY (id)
)
ENGINE=FEDERATED
DEFAULT CHARSET=latin1
CONNECTION='mysql://fed_user:fed_user@197.186.1.199:3306/sakila/test_table';
Sample connection strings:
CONNECTION='mysql://username:password@hostname:port/database/tablename'
CONNECTION='mysql://username@hostname/database/tablename'
CONNECTION='mysql://username:password@hostname/database/tablename'
The basic structure of this table should match that of the remote table, except that the ENGINE table option should be FEDERATED.
Execute:
show variables like '%federated%';
to check if FEDERATED storage engine is available on your local server.
The table federated_table on localhost becomes a virtual table for test_table on the remote server.
Now you can use the JOIN between the tables in a DB in the localhost server. If there is a table called test in your localhost server, and you want to JOIN with the former sakila.test_table which is in the remote server, write a query like the one shown below:
SELECT * FROM `federated_table` JOIN `test`;
The federated_table in the query will actually refer to test_table in remote server.
On enabling FEDERATED Storage Engine
The FEDERATED storage engine is not enabled by default in the running server; to enable FEDERATED, you must start the MySQL server binary using the --federated option.
NOTE:
Optional storage engines require privileges and will fail to load when --skip-grant-tables is specified.
As a result, the entire server will fail to start, and the following error will appear in the logs:
110318 21:37:23 [ERROR] /usr/local/libexec/mysqld: unknown option '--federated'
This in turn means that an upgrade from 5.x needs to be done in two steps if you have federated tables: once with --skip-grant-tables and without --federated, then once without --skip-grant-tables and with --federated.
Source: The FEDERATED Storage Engine
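As an alternative to the command-line flag, the engine can also be enabled via the [mysqld] section of the server option file:

```
[mysqld]
federated
```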
Please mention the database also.
In SQL Server you can use a linked server: http://msdn.microsoft.com/en-us/library/ms188279.aspx
In MySQL, you can join tables from different databases using fully qualified names like
`database_name1`.`table_name1` JOIN `database_name2`.`table_name2`
But I fear you can't join tables from different servers, because that would require two different connections, and as far as I know there are no fully qualified connection names that can be used in a query.
Alternatively, you can create local temporary table(s) on one of the servers and run the query there. In that case, though, you will need to transfer data from one server to the other. You can use a MySQL GUI tool like SQLyog or MySQL Administrator to transfer data between servers and to synchronize databases on the two servers.
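For instance, with hypothetical table and column names, a same-server cross-database join looks like:

```sql
SELECT p.title, u.name
FROM project_db.posts AS p
JOIN some_db.users AS u ON u.id = p.user_id;
```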
Hope it helps....
Federated tables are your solution for tables on other servers. They are very slow, though, if you perform joins on them.
If you just want to read data from another database on the same server you can use a view. This way you have all tables virtually in one database and you have to open only one connection in your application.
CREATE
VIEW `my_db`.`table_name`
AS
(SELECT * FROM `other_db`.`table_name`);
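Once the view exists, it can be joined like any local table; with hypothetical column names:

```sql
SELECT t.*, v.name
FROM my_db.local_table AS t
JOIN my_db.table_name AS v ON v.id = t.user_id;
```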