Two MySQL servers using the same database - mysql

I have a MySQL database running on our server at this location.
However, the internet connection at this location is slow (especially when several users are connected remotely).
We also have a remote web server on a very fast internet connection.
Can I run another MySQL server on the remote server and still be able to run queries and updates on it?
I want to have two servers because
- Users at this location can connect via LAN (fast)
- Users working remotely can connect to synced remote server (fast)
Is this possible? From what I understand, replication does not work this way. What is replication used for then? Backups?
Thanks for your help!
[Edit]
After doing some more reading, I am a little worried about setting up multi-master replication, because I had not considered multi-master when designing the database and conflicts could be an issue.
The good news, though, is that most time-consuming operations are queries, not updates.
I also found out that there is a driver that handles master-slave connections:
http://dev.mysql.com/doc/refman/5.1/en/connector-j-reference-replication-connection.html
That way writes will be sent to the master and reads can come from the faster connection.
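For reference, a minimal sketch of the replication URL from the linked Connector/J page (host names, port and database name are placeholders); the driver routes statements to the master while the connection is read-write and to a slave once Connection.setReadOnly(true) is called:

    jdbc:mysql:replication://master-host:3306,slave-host:3306/mydb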
Has anyone tried doing this before? My one concern: if I write an update to the master and then run a query expecting to see that update on the slave, will it be there right away? Or will the slow connection make this solution just as slow as using the master for both reads and writes?

What you're asking for, I believe, is called multi-master replication, in which both servers serve as replication masters to each other. Changes on either server are replicated to the other as soon as possible. MySQL can be configured to do this; however, I'm not sure how the difference in connection speeds would affect your performance and data integrity.
http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster-replication-multi-master.html
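If you do try it, here is a minimal sketch of the usual two-master settings; the server IDs and binlog name are placeholders, and each server is additionally pointed at the other with CHANGE MASTER TO. The auto_increment pair keeps the two masters from generating colliding key values:

    # my.cnf on server A
    [mysqld]
    server-id                = 1
    log-bin                  = mysql-bin
    auto_increment_increment = 2   # step by 2: two masters in total
    auto_increment_offset    = 1   # server B uses server-id = 2, offset = 2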

Related

MySQL/MariaDB single database replication with read-write split only for this single database

In my setup there are two Debian servers. The first is the old production server and the second is the new one. On the first (old) one runs a MySQL v5.5 DB server and an old application that lacks support; it cannot be ported easily to the new server. The new server runs MariaDB v10.1, and all the other applications were ported from the old server to this new one. These applications also have to work with the data of the application that cannot be ported.
The ported applications can only access local databases, so there is no easy way of pointing them at the old DB server.
My idea:
I want to replicate (master->slave) the data of the one database (used by the old, non-portable application) from the MySQL v5.5 server to the MariaDB v10.1 server.
No problem so far.
But the applications on the new server not only read the data of the old application, they can also modify it. And they also have their own databases that exist only on the new server. As far as I know this is a problem and can break replication in some situations, if the applications write to the replicated database on the slave.
My next thought was to use an SQL dispatcher proxy, and I found some interesting ones (MariaDB MaxScale, HAProxy, ProxySQL). As far as I understood, they can split read and write operations, but I couldn't find a way to route write operations for different databases to different servers.
Can anybody give me a hint how to solve this problem?
Setting:
Server 1 - MySQL v5.5 - database_1
Server 2 - MariaDB v10.1 - database_1, database_2, database_3
An application on server 1 is writing and reading data from database_1 on server 1.
Other applications on server 2 are reading and writing data to database_1 on server 2.
So the data of database_1 has to be replicated from server 1 to server 2 and can be changed there.
A master-master replication instead of master-slave could work, but because auto_increment fields could break the replication, and because the changed data from server 2 does not have to exist on server 1, I don't think this is the way to go. (I'm aware that I could set the auto_increment interval to two to avoid this problem, but it's an already-running production system, so changes like this are not so easy.)
At the moment we're doing backups by hand and copying them over, but that's way too slow and I'm sure there is a better way ;)
You can write to a replication slave (server 2) in databases like database_2 and database_3 that will never appear in the replication stream.
If you started updating database_1 on the slave, you probably would end up in trouble.
You are replicating between two database servers that are more than a major version apart, so there is a possibility that a deprecated SQL statement gets replicated to a server where it has been removed, and replication will stop. Keep an eye out for this in the weeks after deployment. binlog_format=ROW may mitigate some of the SQL that could go across incorrectly.
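As a hedged sketch, limiting what the slave applies can be done with a replication filter in the slave's my.cnf (database names taken from the question). Note that under statement-based replication, replicate-do-db matches on the default database, which is one more reason to consider binlog_format=ROW:

    # my.cnf on server 2 (the slave)
    [mysqld]
    server-id       = 2
    replicate-do-db = database_1   # database_2 and database_3 stay local-only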

Local files (XAMPP) using a remote MySQL - very slow

I have a local XAMPP with a PHP application running on PHP 5.6, and the MySQL database is remote.
The issue is that my application opens very slowly in the browser. Is there a way to improve speed?
MySQL is on an SSD server (fast)...
Thanks
It's always going to be slower over a remote connection. How much slower really depends on a number of factors related to your connection, not really to MySQL.
A few things to consider: do you really need to be working off the remote DB? Can you use a local copy and sync changes later? It might even be faster to have your code remote and suffer the slightly longer save times when updating your code. Highly situational, though, depending on your dev setup.
Another option, which would be a bit more complicated, is to set up your local DB as a read-only slave. That way you at least get more of an async update: your local DB may lag behind the remote master a bit, but any reads you do locally will be back to your "native" local performance. You would have the additional complexity of setting up master/slave replication with different connections for reads and writes, but that may be something you want to do for production anyway. (You can do master/master, but I wouldn't recommend it over a remote connection.)
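A minimal sketch of attaching such a local read-only slave to the remote master (host, credentials and binlog coordinates are placeholders), run on the local MySQL:

    CHANGE MASTER TO
      MASTER_HOST='remote.example.com',
      MASTER_USER='repl',
      MASTER_PASSWORD='secret',
      MASTER_LOG_FILE='mysql-bin.000001',  -- from SHOW MASTER STATUS on the master
      MASTER_LOG_POS=4;
    START SLAVE;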

Looking for MySQL backup/sync suggestions (multiple servers syncing to 1)

I have 3 MySQL servers that I need to back up daily. Each server uses just one database with multiple tables.
I've scripted a mysqldump on each server, but this time I want each MySQL server backing up to a 4th server (the master server, which is at a remote location).
The master server will serve as a mirror for all 3 servers, so that we can view the data of the other servers even if one of them goes down, because the master server will be on a more reliable internet connection.
NOTES and LIMITATIONS:
1) EACH SERVER needs to "send" its backups to the MASTER SERVER, because the master server cannot make incoming connections to the slave servers (port forwarding is not supported on the slaves).
2) I'd prefer that only the "changes" are backed up, to make things lighter on the network (synchronization? incremental?).
3) All are running Windows 7 at the moment, because for now I'm using Navicat for MySQL's synchronization features. I would prefer a PHP-script-based solution so I can migrate things to *nix. I've read about replication and all that stuff, but I kinda wanted a ready solution, perhaps software I could download or buy. I have no time to code my own sync/replication scripts; I just want to get over this remote sync hurdle and move on with the project.
Regards to all
"i've read about replication and all that stuff, but I kinda wanted a ready solution"
But replication is a ready solution: just type a few commands and change a few configuration settings.
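For what it's worth, the "few commands" on each source server amount to little more than creating a replication user and noting the binlog position (user name and password below are placeholders); the mirror is then attached with a CHANGE MASTER TO per source. Note that one mirror pulling from several masters needs multi-source support (MariaDB 10.0+ or MySQL 5.7+):

    CREATE USER 'repl'@'%' IDENTIFIED BY 'secret';
    GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
    SHOW MASTER STATUS;   -- note File and Position for the mirror side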

Proxy in front of MySQL for redundancy removal

I'm trying to implement a proxy layer in front of a MySQL server that will catch redundant SQL queries and send them only once to the server. In other words, I have many clients (in PHP and Perl, on different web nodes) that talk to MySQL and very often repeat the same SELECT queries. When traffic goes up, MySQL very often goes down.
The question is: are you aware of any open source (or commercial) tool that can help? I tried MySQL Proxy, but it looks like it can't help.
Two suggestions:
MySQL Proxy
This is a front-end proxy from MySQL which, as far as I know, does what you want.
vtocc
From the Vitess project, used in YouTube's MySQL environment; it does a similar thing. Query consolidation: the ability to reuse the results of an in-flight query for any subsequent requests that were received while the query was still executing.
You may want to look into HAProxy and how it works.
Here are two additional suggestions.
SUGGESTION #1: Set up a cluster
If your data is all InnoDB, you should try Percona XtraDB Cluster and use HAProxy in conjunction with it. You can load balance across all servers in the cluster, including the write master.
SUGGESTION #2: Set up a cluster via MySQL Replication to 1 or more DB servers
Use HAProxy to load balance your reads across the read slaves.
If you are on a budget and your data is relatively small, set up multiple MySQL instances on one server.
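Back on the original question of collapsing redundant SELECTs: ProxySQL (mentioned earlier in this thread) can also serve repeated queries from its own result cache. A hedged sketch, run in the ProxySQL admin interface; the '^SELECT' pattern and the 5000 ms TTL are example values to tune:

    INSERT INTO mysql_query_rules (rule_id, active, match_digest, cache_ttl, apply)
    VALUES (1, 1, '^SELECT', 5000, 1);
    LOAD MYSQL QUERY RULES TO RUNTIME;
    SAVE MYSQL QUERY RULES TO DISK;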

MySQL Replication in an internet environment - limitations, constraints?

We have a few servers deployed at different ISPs (Internet Service Providers).
There is real-time data that needs to be synchronized to these servers constantly, and I think MySQL replication may be a good candidate for the job (we use MySQL on the servers).
I know replication works on an intranet, but I'm not sure whether it works across the more complicated network topology of the internet and ISP subnets.
Some facts:
Need to run as master-slaves: the master receives the data, and there are about ten slave DBs.
We don't care much about replication time lag; 5 minutes is fine.
There is not much data and not many transactions to synchronize per hour.
We run a Java web application on each server.
It works fine. You usually want to either run it over a VPN or use SSL in the MySQL connection if it's going over the public internet.
If your write updates take more bandwidth than you have available, that will of course be a limitation, as the replication log contains basically every byte used in INSERT, UPDATE and REPLACE statements.
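A minimal sketch of turning on SSL for the replication link, run on each slave (host and CA path are placeholders, and the master must have SSL enabled):

    CHANGE MASTER TO
      MASTER_HOST='master.example.com',
      MASTER_SSL=1,
      MASTER_SSL_CA='/etc/mysql/certs/ca.pem';
    START SLAVE;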