I'm planning to create a system which tracks visitor clicks in a database. I'm expecting around 1M inserts/day into the database.
On the backend, I'll have an analytics system which will analyze all the data that's been collected over the days/weeks/months/years.
My question is: is it a practical approach to have 2 different MySQL servers + 1 web server? MySQL Server A would insert the clicks into its DB and would be connected to MySQL Server B by group replication, so that whenever I create reports etc. on MySQL Server B, it doesn't load Server A heavily.
These 2 database servers would then be connected to the web server, which would handle all the click requests and also display the backend reports.
Is this a practical solution, or is it better to have one bigger server handle all the MySQL data? Or multiple MySQL servers that load-balance each other? Anything else, perhaps?
1M inserts/day is not a high load by modern standards. That's less than 12 per second on average.
On sufficiently powerful servers with fast storage and proper tuning of MySQL options, you can expect to support at least 100x that load with a single MySQL server.
A better reason to use multiple MySQL servers is redundancy. Inevitably, any MySQL server needs to be upgraded, or you might have hardware failures and need to replace a disk, or other components. To avoid downtime, you should have a standby database server, which stays in sync with the primary server, either using MySQL replication or by disk-level replication like DRBD.
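At that scale, a standby kept in sync with ordinary MySQL replication is plenty. Setting it up boils down to something like the following on the standby server (a minimal sketch, assuming GTID-based replication is enabled on both servers and a replication user already exists; the hostname and credentials are placeholders):

    -- run on the standby server
    CHANGE MASTER TO
        MASTER_HOST = 'primary.example.com',
        MASTER_USER = 'repl',
        MASTER_PASSWORD = 'repl_password',
        MASTER_AUTO_POSITION = 1;  -- let GTIDs pick the right position
    START SLAVE;

    -- confirm it is replicating and watch Seconds_Behind_Master
    SHOW SLAVE STATUS\G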
My setup:
Two MySQL servers running with master-master replication using the third-party Tungsten Replicator (for legacy reasons; I can't change that now).
Typically this cluster is used as active-standby. In normal operation, all queries should hit the first server; only if the first DB server fails should queries hit the secondary server. Master-master is for the convenience of not needing any master-failover scripting. When the primary server comes back online, all queries should be sent to it again.
I'm currently using Galera Load Balancer (glbd) configured in active-standby mode with a simple health check (no MySQL ping for x times = skip this server), and it works OK.
Problem:
I'd like to migrate from glbd to ProxySQL and replicate my setup. I started with the two hosts given very different weights, i.e. 100000 vs 1.
But apparently ProxySQL uses the weights to spread traffic: 100000 queries go to the primary, the next one goes to the secondary, and so on. This causes problems when replication lag is high, because 1 in every ~100000 queries hits the secondary server, which may have stale data.
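For reference, the weighted setup described above looks roughly like this in the ProxySQL admin interface (a sketch only; the hostgroup number and hostnames are placeholders):

    -- run against the ProxySQL admin interface (port 6032 by default)
    INSERT INTO mysql_servers (hostgroup_id, hostname, port, weight)
    VALUES (0, 'primary.example.com',   3306, 100000),
           (0, 'secondary.example.com', 3306, 1);
    LOAD MYSQL SERVERS TO RUNTIME;
    SAVE MYSQL SERVERS TO DISK;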
How can I configure ProxySQL to send all queries only to my primary server while the health check says it's OK, and to the secondary server only if the primary is unhealthy? When the primary comes back up, all queries should move back to it.
I have 5 different schemas, and eventually I want to separate them onto different servers so I can assign RAM and CPU to each one according to its load.
How can I configure things so that a schema from a different server shows up on a "front" MySQL server?
MySQL Proxy:
The MySQL Proxy is an application that communicates over the network using the MySQL network protocol and provides communication between one or more MySQL servers and one or more MySQL clients.
However, note:
Warning
MySQL Proxy is currently an Alpha release and should not be used within production environments.
The FEDERATED Storage Engine:
The FEDERATED storage engine lets you access data from a remote MySQL database without using replication or cluster technology. Querying a local FEDERATED table automatically pulls the data from the remote (federated) tables. No data is stored on the local tables.
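For illustration, a local FEDERATED table that points at a remote table might look something like this (a sketch; the FEDERATED engine must be enabled on the local server, the column definitions have to match the remote table, and every name and credential below is a placeholder):

    -- local table that proxies a table living on another MySQL server
    CREATE TABLE clicks (
        id  INT UNSIGNED NOT NULL AUTO_INCREMENT,
        url VARCHAR(255) NOT NULL,
        ts  DATETIME     NOT NULL,
        PRIMARY KEY (id)
    )
    ENGINE=FEDERATED
    CONNECTION='mysql://fed_user:fed_pass@remote-host:3306/analytics/clicks';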
Replication:
Replication enables data from one MySQL database server (the master) to be replicated to one or more MySQL database servers (the slaves).
However, note:
In this environment, all writes and updates must take place on the master server. Reads, however, may take place on one or more slaves.
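If you go the replication route, one common way to back up the "writes only on the master" rule is to make each slave read-only (a sketch; plain read_only is ignored for accounts with the SUPER privilege):

    -- on each replication slave
    SET GLOBAL read_only = ON;
    -- MySQL 5.7+ can also block SUPER accounts:
    -- SET GLOBAL super_read_only = ON;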
I have an application that runs on server A, and the database is on the same server.
There is a backup server B which I use in case server A is down.
The application will remain unchanged, but the data in the DB is changing constantly.
Is there a way to synchronize those 2 databases automatically in real time?
Currently I wait until all the users are gone so I can manually back up and restore on the backup server.
Edit: When I said real-time I didn't mean it literally; I can handle up to one hour of delay, but the faster the sync the better.
My databases are located on 2 servers on the same local network.
Both of them are SQL Server 2008; the main DB is on Windows Server 2008
and the backup is on Windows Server 2003.
A web application (intranet) is using the DB.
I can use SQL Server Agent (if that can help).
I don't know what kind of details would be useful to solve this, so kindly tell me what would help. Thanks.
Edit: I need to sync all the tables, and tables only.
The second database is writable, not read-only.
I think what you want is Peer to Peer Transactional Replication.
From the link:
Peer-to-peer replication provides a scale-out and high-availability solution by maintaining copies of data across multiple server instances, also referred to as nodes. Built on the foundation of transactional replication, peer-to-peer replication propagates transactionally consistent changes in near real-time. This enables applications that require scale-out of read operations to distribute the reads from clients across multiple nodes. Because data is maintained across the nodes in near real-time, peer-to-peer replication provides data redundancy, which increases the availability of data.
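Very roughly, enabling a publication for peer-to-peer looks like the following on each node (a sketch only; in practice the Configure Peer-to-Peer Topology wizard in Management Studio is the usual route, the database must already be enabled for publication with a distributor configured, the feature requires Enterprise Edition, and the names below are placeholders):

    -- run against the published database on each node
    EXEC sp_addpublication
        @publication = N'P2P_Publication',
        @enabled_for_p2p = N'true',
        @allow_initialize_from_backup = N'true',
        @status = N'active';
    -- articles are then added with sp_addarticle, and the peers are wired
    -- together with sp_addsubscription / sp_addpushsubscription_agent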
I'm trying to implement a proxy layer in front of a MySQL server that will catch redundant SQL queries and send them to the server only once. In other words, I have many clients (in PHP and Perl, on different web nodes) that talk to MySQL and very often repeat the same SELECT queries. When traffic goes up, MySQL very often goes down.
The question is: are you aware of any open-source (or commercial) tool that can help? I tried MySQL Proxy, but it doesn't look like it can help.
Two suggestions:
MySQL Proxy
This is a front-end proxy from MySQL which, as far as I know, does what you want.
vtocc
From the Vitess project, used in YouTube's MySQL environment; it does a similar thing. Query consolidation: the ability to reuse the results of an in-flight query for any subsequent requests that were received while the query was still executing.
You may want to look into HAProxy and how it works.
Here are two additional suggestions.
SUGGESTION #1: Set up a cluster
If your data is all InnoDB, you should try Percona XtraDB Cluster and use HAProxy in conjunction with it. You can load balance across all servers in the cluster, including the write master.
SUGGESTION #2: Set up a cluster via MySQL replication to 1 or more DB servers
Use HAProxy to load balance your reads across the read slaves.
If you are on a budget and your data is relatively small, set up multiple MySQL instances on one server.
I have a MySQL database running on our server at this location.
However, the internet connection at this location is slow (especially when several users are connected remotely).
We also have a remote web server on a very fast internet connection.
Can I run another MySQL server on the remote server and still be able to run queries and updates on it?
I want to have two servers because:
- users at this location can connect via LAN (fast)
- users working remotely can connect to a synced remote server (fast)
Is this possible? From what I understand, replication does not work this way. What is replication used for, then? Backups?
Thanks for your help!
[Edit]
After doing some more reading, I am a little worried about setting up multi-master replication, because I had not considered multi-master when designing the database and conflicts could be an issue.
The good news though is that most time consuming operations are queries not updates.
Also, I found out that there is a driver that handles master-slave connections:
http://dev.mysql.com/doc/refman/5.1/en/connector-j-reference-replication-connection.html
That way writes will be sent to the master and reads can come from the faster connection.
Has anyone tried doing this before? My one concern is that if I write an update to the master and then run a query expecting to see it on the slave, will it be there right away? Or will the slow connection make this solution just as slow as using the master for both reads and writes?
What you're asking for, I believe, is called multi-master replication, whereby both servers serve as replication masters to each other. Changes on either server are replicated back to the other as soon as possible. MySQL can be configured to do this; however, I'm not sure how the difference in connection speed would affect your performance and data integrity.
http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster-replication-multi-master.html
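On the conflict worry mentioned in the edit above: a common precaution in a master-master setup is to stagger AUTO_INCREMENT generation so the two servers can never hand out the same key (a sketch; the values are illustrative and are normally set permanently in my.cnf rather than with SET GLOBAL):

    -- on server 1
    SET GLOBAL auto_increment_increment = 2;
    SET GLOBAL auto_increment_offset    = 1;
    -- on server 2
    SET GLOBAL auto_increment_increment = 2;
    SET GLOBAL auto_increment_offset    = 2;

This only avoids duplicate auto-generated keys; it does nothing about both sides updating the same row, so the read-mostly workload you describe is what really makes this workable.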