MySQL Replication Question

Currently I have this scenario:
Multiple desktop clients with a MySQL DB installed on their Windows machines.
They need to sync over to one server hosted on the web, for reporting purposes.
I just need one-way sync (client to web).
Client IPs are always changing, since they use standard ADSL with no fixed IP.
Each client DB will sync to its own standalone DB on the server (hosted on the web).
Can this syncing run on a scheduler, say every 3 hours?
I'm thinking of using MySQL replication, but I have some questions about how to set this up. Should I set it up as master-to-slave, or master-to-master?
I assume that the clients will be masters and the server will be the slave, since the server is only used for reporting. But looking at lots of MySQL replication guides, it seems the replication is initiated from the slave (I see settings like master-host=ip in the slave server's configuration). That defeats the purpose, since the server can't know the clients' IPs...

Perhaps this is totally off the mark given some of the items you're mentioning (slave/master/etc.), but in an app I am developing, I have a similar architecture, with a single source feeding multiple clients of unknown/dynamic IP. My solution was to include an extra column holding a timestamp of when each row was last updated. To sync, the clients search their local DB for the MAX of that column and send it as a variable to a web service, which then returns all rows with a more recent timestamp. The client then parses the response data and runs REPLACE INTO against its local DB, so that old data is overwritten.
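As a sketch in SQL, assuming a hypothetical items table with an updated_at column (all table and column names here are placeholders, not from the question):

```sql
-- Client side: find the newest row we already have.
SELECT MAX(updated_at) AS last_sync FROM items;

-- Server side (inside the web service), with the client's value bound
-- as @since:
SELECT id, name, price, updated_at
FROM items
WHERE updated_at > @since;

-- Client side again: upsert each returned row. REPLACE first deletes any
-- existing row with the same primary key, so stale local data is overwritten.
REPLACE INTO items (id, name, price, updated_at)
VALUES (42, 'widget', 9.99, '2014-05-01 12:34:56');
```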
One detail I did not address (as my scenario does not need it) is how to communicate that an item has been deleted. Perhaps when a row is deleted, an entry is made in another table with the row's primary id and a timestamp of deletion, and the web service could then also return all rows from that table with a more recent timestamp.
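A minimal sketch of that tombstone table in MySQL, again with placeholder names:

```sql
-- Hypothetical tombstone table: one row per deleted item.
CREATE TABLE items_deleted (
  item_id    INT       NOT NULL PRIMARY KEY,
  deleted_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- Record a tombstone automatically on every delete. REPLACE copes with
-- an id that was deleted, re-created, and deleted again.
CREATE TRIGGER items_after_delete
AFTER DELETE ON items
FOR EACH ROW
  REPLACE INTO items_deleted (item_id) VALUES (OLD.id);
```

The web service would then also return tombstones newer than the client's last sync timestamp, and the client would delete those ids locally.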

Related

Sync multiple local databases to one remotely

I need to create a system with local web servers on Raspberry Pi 4 running Laravel, for API calls, websockets, etc. Each RPi will be installed at multiple customers' sites.
For this project I want the ability to save/sync the database to a remote server (when the local system is connected to the internet).
Multiple local databases => one customer-partitioned remote database.
The question is: how do I synchronize the databases, properly identify each customer's data, and render it in a shared remote dashboard?
My first thought was to set a customer_id or a team_id on each table, but that seems dirty.
The other way is to create multiple databases on the remote server for the synchronization, plus one extra database to hold customer ids and database connection information...
Has anyone already experimented with something like this? Is there a reliable and clean way to do it?
From what you have said, you have two options at the central site: the central database can either store information from all the remote databases in a single table, with an additional column indicating which remote site each row came from, or you can set up a separate table (or database) for each remote site.
How do you want to use the data?
If you only ever want to work with the data from one remote site at a time, it doesn't really matter - in both scenarios you need to identify which data you want to work with and build your SQL statement either to filter by the appropriate column or to target the appropriate table(s).
If you want to work on data from multiple remote sites at the same time, then using different tables requires that you use UNION queries to extract the data, and this is unlikely to scale well. In that case you would be better off using a column to mark each record with the remote site it came from.
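To make the trade-off concrete, a sketch against a hypothetical readings table (names are placeholders):

```sql
-- Option A: one shared table; per-site work is a WHERE clause.
SELECT * FROM readings WHERE site_id = 'customer_42';

-- ...and cross-site work is a plain aggregate.
SELECT site_id, COUNT(*) FROM readings GROUP BY site_id;

-- Option B: one table per site; cross-site work needs a UNION that
-- grows with every site you add.
SELECT 'customer_1' AS site_id, COUNT(*) FROM readings_customer_1
UNION ALL
SELECT 'customer_2', COUNT(*) FROM readings_customer_2;
```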
I recommend that you consider using UUIDs as primary keys - key collisions may not be an issue in your scenario, but if they become one, trying to alter the design retrospectively is likely to be quite a bit of work.
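A sketch of such a key (the DEFAULT (UUID()) expression needs MySQL 8.0.13 or later; on older versions the application would generate the UUID itself):

```sql
-- UUID primary keys avoid collisions when rows from many
-- sites land in one central table.
CREATE TABLE readings (
  id      CHAR(36)      NOT NULL DEFAULT (UUID()) PRIMARY KEY,
  site_id VARCHAR(32)   NOT NULL,
  value   DECIMAL(10,2) NOT NULL,
  read_at TIMESTAMP     NOT NULL DEFAULT CURRENT_TIMESTAMP
);
```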
You also asked how to synchronize the databases. That will depend on what type of connection you have between the sites and the capabilities of your software, but typically you would have the local system periodically talk to a web service at the central site. Assuming you are collecting sensor data or some such, the dialogue would be something like:
Client - Hello Server, my last sensor reading is timestamped xxxx
Server - Hello Client, [ send me sensor readings from yyyy | I don't need any data ]
You can include things like a signature check (for example, an MD5 sum of the records within a time period) if you want to, but that may be overkill.
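The server side of that dialogue might look like this sketch, with the bound variables standing in for whatever the web service receives:

```sql
-- Return everything newer than the timestamp the client reported.
SELECT id, value, read_at
FROM readings
WHERE site_id = @client_site
  AND read_at > @client_last
ORDER BY read_at;

-- Optional signature check: both sides hash the same window of rows
-- and compare the result to detect divergence.
SELECT MD5(GROUP_CONCAT(id ORDER BY id)) AS window_signature
FROM readings
WHERE site_id = @client_site
  AND read_at BETWEEN @window_start AND @window_end;
```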

MySQL group replication multi-primary mode with multiple bootstrapped

I have 7 MySQL servers in different locations. All servers have the same database with the same structure. All tables use UUID-based primary keys (no auto-increment values).
One (central) server is always connected to the network (internet).
All other 6 servers can be connected to or disconnected from the network at any time.
All 6 servers must be able to work individually (read/write) and locally when not connected to the internet.
They must replicate with each other when connected to the network.
Once all databases are completely replicated, all databases must have the same contents (including the main server).
I referred to one server as the main server here, but there is really no main server; it only acts as the main one when the other 6 are not connected, because the head office uses it to query past reports.
I have read about MySQL group replication (multi-primary mode). Is it possible to use it for my requirement? Please advise if you already have experience with this.
Group replication assumes all servers will contain the same data, and when you join a new server it will fetch from the group the data it is missing.
However, if the server has more data than the group, it won't be able to join.
So, in theory your setup will only work if these 6 servers don't receive writes and diverge while "offline", because if they do, you can no longer add them back to a group (without extra reconciliation operations).
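If that reconciliation question ever comes up, one way to check whether a returning member has diverged is to compare GTID sets; a sketch with a placeholder group GTID set:

```sql
-- Run on the returning member. GTID_SUBSET is true when the member's
-- executed transactions are all contained in the group's set, i.e. it
-- has not diverged and can still join.
SELECT GTID_SUBSET(
         @@GLOBAL.gtid_executed,
         '8e2f6c1a-aaaa-bbbb-cccc-ddddeeeeffff:1-100'  -- placeholder: the group's gtid_executed
       ) AS can_rejoin;
```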

Easy way to sync Firebird and MySQL [duplicate]

I am looking for a tip on how to synchronize data from a local Firebird database into an online DB. A few comments:
On the local machine I use sales software which keeps its data in a Firebird DB. There is an internet connection, but I want to avoid direct DB access (as the PC is turned off after 9pm).
I would like to create an online app (based on Foundation + PHP + a database) in which I will be able to view daily sales and explore past data.
From the local DB I will need to pull data from several different tables, and I would like to keep it in the online/final DB as a single table (with fields: #id, transaction date, transaction value, sales manager).
While I mostly know how to create the frontend of the app, and partially the backend, I still wonder what would be the best choice of DB. MySQL (my first thought)? Or should I rather focus on NoSQL?
What's your recommendation on data sync? Should I use SymmetricDS (pretty hard to configure) or an equivalent, or should I write a script that pushes data from Firebird as JSON/XML? I'm appealing to your knowledge and best practices.
Put a scheduled job in place that invokes a simple data pump / replication script.
From the script, connect to the source sales DB, retrieve the joined data added since the last replication, and insert it into the "online" database.
You may also keep Firebird as the online DB, as it works great with PHP.
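In SQL terms the pump amounts to two statements; the table names below are made up for illustration, and the :named placeholders are driver-level parameters (e.g. PDO), not literal SQL:

```sql
-- 1. On the source (Firebird): everything added since the last run.
SELECT s.id, s.tx_date, s.tx_value, m.name AS sales_manager
FROM sales s
JOIN managers m ON m.id = s.manager_id
WHERE s.tx_date > :last_replicated;

-- 2. On the target (MySQL): insert the joined rows; IGNORE skips any
--    row a previous partial run already copied.
INSERT IGNORE INTO transactions (id, tx_date, tx_value, sales_manager)
VALUES (:id, :tx_date, :tx_value, :sales_manager);
```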
Firebird, as of version 2.5, also has all the technology built in to implement fully functional replication. We have implemented this in our largest installation, for a big restaurant company, with about 0.6 billion records, about 1 million new records daily, and 150 locations where replicated servers work online or offline with the back-office software.
If you simply want to upload the data from your local DB to a remote DB, you can rent a virtual server at a provider you like, install Firebird there, and create a secure connection (we use SSH, but any TCP over VPN can be used). Copy your local database to the remote server and, if required, open the Firebird port (3050 or other) in the firewall. If you have a low number of writes on your local database, simply implement a trigger on each table that performs the same insert/update/delete with the same values, using the "execute statement on external" feature.
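A sketch of such a trigger in Firebird 2.5 PSQL (the table, columns, connection string and credentials are all placeholders):

```sql
SET TERM ^ ;
CREATE TRIGGER sales_replicate FOR sales
ACTIVE AFTER INSERT POSITION 0
AS
BEGIN
  -- Re-run the same insert on the remote database.
  EXECUTE STATEMENT
    ('INSERT INTO sales (id, tx_date, tx_value) VALUES (:id, :d, :v)')
    (id := NEW.id, d := NEW.tx_date, v := NEW.tx_value)
    ON EXTERNAL 'remotehost:/data/online.fdb'
    AS USER 'SYSDBA' PASSWORD 'masterkey';
END^
SET TERM ; ^
```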
When your local database has a higher workload, it is better to put the change data (table name and PK values) from the trigger into a log table, and let a second connection upload the records to the target DB, where the same "execute statement on external" feature can be used.
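The log-table variant might be sketched like this (again Firebird PSQL with placeholder names; the second connection that replays the log is omitted):

```sql
CREATE TABLE change_log (
  table_name VARCHAR(63) NOT NULL,
  pk_value   VARCHAR(36) NOT NULL,
  op         CHAR(1)     NOT NULL,                          -- 'I', 'U' or 'D'
  logged_at  TIMESTAMP   DEFAULT CURRENT_TIMESTAMP NOT NULL,
  uploaded   CHAR(1)     DEFAULT 'N' NOT NULL
);

SET TERM ^ ;
-- One multi-event trigger per replicated table, recording what changed.
CREATE TRIGGER sales_log FOR sales
ACTIVE AFTER INSERT OR UPDATE OR DELETE POSITION 0
AS
BEGIN
  INSERT INTO change_log (table_name, pk_value, op)
  VALUES ('SALES',
          COALESCE(NEW.id, OLD.id),   -- NEW is NULL in the delete case
          CASE WHEN INSERTING THEN 'I'
               WHEN UPDATING  THEN 'U'
               ELSE 'D' END);
END^
SET TERM ; ^
```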
This is just a hint at how to do it; if budget allows, we can do it for you. But stopping the database PC in the evening seems typical only of smaller companies.

Auto-Deletion of Table Rows

I'm new to MySQL, but I need it to work, as it will be at the center of my new SANS (Server Address Name System) system. The reason for this system is to provide a replacement for gameservers, since the default GameSpy service that some games use is being switched off at the end of next month.
The function of MySQL in SANS is to store the IPs and ports of active gameservers (which are patched to send info to MySQL), and then make the clients (again, patched to retrieve the information from MySQL) add the servers to their in-game server lists.
Of course, the issue here is that gameservers can easily go offline for any one of 1,000+ reasons, and we don't really want the client's game showing gameservers that are offline, mainly because:
If we need to block any fake gameservers, these fake gameservers will still be in the server list (and also the MySQL database)
It will clog up the server list very quickly
Temporary servers such as home, development and test servers will still be in the list
If a server's IP and/or port changes for any reason (for example, the server IP is dynamic), there will be duplicate servers in the list, and clients may not know which one to pick.
I've thought of a couple of solutions, including making the client ping each gameserver in turn to check whether it is online, but this is not ideal for a couple of reasons:
The server computers' administrator may have WAN ping switched off, meaning that although our gameserver may be online, it won't show in the list
The pings of clients may be seen as suspicious behaviour to the various server administrators that administrate the networks that the server computers sit on, meaning that the client could be blocked because of this.
I've thought of a simple solution: get MySQL (or phpMyAdmin) to remove each table row 10 seconds after it has been added.
Is this sort of behaviour even possible?
I'm on Windows Server 2008 R2, with the latest MySQL server and XAMPP.
I think you could use a MySQL trigger to accomplish this (I'm not sure about the 10 second delay), but I believe there's a better solution:
You could add a column called Status to whichever table stores the gameserver information.
Then you could use flags to differentiate types of gameservers: fake, test, active, inactive, etc.
Next you would filter what the user sees to only show active gameservers.
If a server doesn't report back within 10 seconds, its flag is simply set to inactive.
And finally you could schedule a job to run once a day to clean up records older than 24 hours.
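For the time-based parts, MySQL's event scheduler (rather than a trigger or phpMyAdmin) is a natural fit. A sketch, assuming a gameservers table with status and last_seen columns (names are placeholders):

```sql
-- Enable the scheduler (needs the appropriate privilege).
SET GLOBAL event_scheduler = ON;

-- Flag servers that have stopped reporting in.
CREATE EVENT mark_inactive
  ON SCHEDULE EVERY 10 SECOND
  DO UPDATE gameservers
     SET status = 'inactive'
     WHERE status = 'active'
       AND last_seen < NOW() - INTERVAL 10 SECOND;

-- Once a day, purge anything that has been silent for 24 hours.
CREATE EVENT purge_stale
  ON SCHEDULE EVERY 1 DAY
  DO DELETE FROM gameservers
     WHERE last_seen < NOW() - INTERVAL 24 HOUR;
```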
If this doesn't work for your particular problem, let me know and I'll look into coding the trigger.

Database synchronization Server to Local

I have a local app that uses SQLite. Whenever it has internet access, it requests the whole database from the server and recreates the local one from it. The local and server databases have the same structure; basically, the point of the local one is to guarantee functioning even when no internet is available.
This is a very inefficient way of doing this.
My question is, how to ask for only data that is missing?
Should I send the last ID from each local table and have the server send data from that ID onward?
What happens if an existing ID was modified? This would mean that all data should be checked, but sending the whole database for checking and getting back the modifications or additions also seems stupid.
The configuration is Local SQLite, Server MySQL. I could probably change the server to SQLite if it's recommended.
EDIT:
Multiple clients make requests to the same MySQL database on the server; PHP processes each request and replies.
How would you tackle this?
Thank you.
I'd either timestamp the rows in the database and fetch by date, or use rsync (or librsync or similar) to synchronize the database files.
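A sketch of the timestamp approach on the MySQL server side, with a hypothetical orders table:

```sql
-- Keep a last-modified timestamp that MySQL maintains automatically.
ALTER TABLE orders
  ADD COLUMN updated_at TIMESTAMP NOT NULL
  DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP;

-- The client sends its newest local timestamp; the server returns both
-- brand-new rows and modified ones, which a "last ID" scheme would miss.
SELECT * FROM orders WHERE updated_at > @client_last_sync;
```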