Sync multiple local databases to one remote database - MySQL

I need to create a system of local webservers on Raspberry Pi 4 running Laravel for API calls, websockets, etc. Each RPi will be installed at the premises of multiple customers.
For this project I want the ability to save/sync the database to a remote server (whenever the local system is connected to the internet).
Multiple local databases => one remote database, organized per customer.
The question is: how do I synchronize the databases, properly identify each customer's data, and render it in a shared remote dashboard?
My first thought was to set a customer_id or a team_id on each table, but that seems dirty.
The other way is to create multiple databases on the remote server for the synchronization, plus one extra database holding customer IDs and database connection information...
Has anyone already experimented with something like this? Is there a reliable and clean way to do it?

You refer to "locale" but I am assuming you mean "local".
From what you have said, you have two options at the central site. The central database can either store information from the remote databases in a single table with an additional column that indicates which remote site it came from, or you can set up a separate table (or database) for each remote site.
How do you want to use the data?
If you only ever want to work with the data from one remote site at a time, it doesn't really matter: in both scenarios you need to identify the data you want to work with and build your SQL statement either to filter by the appropriate column or to target the appropriate table(s).
If you want to work on data from multiple remote sites at the same time, then using different tables requires that you use UNION queries to extract the data, and this is unlikely to scale well. In that case you would be better off using a column to mark each record with the remote site it came from.
I recommend that you consider using UUIDs as primary keys - key collisions may never be an issue in your scenario, but if they become one, trying to alter the design retrospectively is likely to be quite a bit of work.
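As a rough illustration, a table designed that way might look like the following Laravel migration (a minimal sketch - the sensor_readings table and its columns are hypothetical): the UUID primary key avoids key collisions when rows from many sites are merged, and the customer_id column marks the originating site so the dashboard can filter by it.

    <?php
    // Sketch of a central table: UUID primary key plus a column that
    // identifies the remote site each row came from (names are illustrative).
    use Illuminate\Database\Migrations\Migration;
    use Illuminate\Database\Schema\Blueprint;
    use Illuminate\Support\Facades\Schema;

    return new class extends Migration {
        public function up(): void
        {
            Schema::create('sensor_readings', function (Blueprint $table) {
                $table->uuid('id')->primary();                       // generated on the RPi, safe to merge
                $table->unsignedBigInteger('customer_id')->index();  // which remote site the row belongs to
                $table->decimal('value', 10, 2);
                $table->timestamp('recorded_at');
            });
        }

        public function down(): void
        {
            Schema::drop('sensor_readings');
        }
    };

The dashboard then simply adds WHERE customer_id = ? to every query instead of UNIONing per-site tables.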
You also asked how to synchronize the databases. That will depend on the type of connection you have between the sites and the capabilities of your software, but typically you would have the local system periodically talk to a web service at the central site. Assuming you are collecting sensor data or something similar, the dialogue would be something like:
Client - Hello Server, my last sensor reading is timestamped xxxx
Server - Hello Client, [ send me sensor readings from yyyy | I don't need any data ]
You can include things like a signature check (for example, an MD5 sum of the records within a time period) if you want to, but that may be overkill.
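As a sketch of what the client side of that dialogue could look like in PHP (the /sync endpoints, the customer_id value, and the sensor_readings table are all hypothetical):

    <?php
    // Client-side sync sketch: ask the central server what it already has,
    // then push anything newer. Endpoint and names are illustrative.
    $pdo = new PDO('mysql:host=127.0.0.1;dbname=local_db', 'app', 'secret');

    // "Hello Server, what is your last reading for me?"
    $since = file_get_contents('https://central.example.com/sync/last-timestamp?customer_id=42');

    $stmt = $pdo->prepare(
        'SELECT id, customer_id, value, recorded_at
           FROM sensor_readings
          WHERE recorded_at > ?'
    );
    $stmt->execute([$since]);
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

    // "Here are the readings you are missing."
    if ($rows) {
        $ch = curl_init('https://central.example.com/sync/readings');
        curl_setopt_array($ch, [
            CURLOPT_POST           => true,
            CURLOPT_POSTFIELDS     => json_encode($rows),
            CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
            CURLOPT_RETURNTRANSFER => true,
        ]);
        curl_exec($ch);
        curl_close($ch);
    }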

Related

How can I set up my dev environment for MySQL data on one dev machine in order to develop two different apps that start from the same source database?

I have a LAMP development stack with a MySQL 8 database on my local dev machine. The data set is fairly large: over 10 databases with 5-10 tables each, so roughly 100 or more tables are referenced throughout the code in various complex SQL queries.
I need to create a duplicate version of the app I am developing, to be hosted on a completely separate server with its own database and code. App1 will be on one server, App2 will be hosted on another server. Each will have its own separate codebase and data, where the data will be greatly modified on App2, even though the initial data for App2 will be identical to that of App1.
For example, database.table will start out the same on both, but later App1 will have one set of values and App2 will start having a different set. And so on for about a hundred other tables.
Note: the actual data rows and values will change greatly, but schema changes to the database structure and fields will be small and mostly non-existent (left as-is); still, the need for two separate databases is essential.
My problem is this: how do I develop both apps on my local dev machine when the two apps have different data needs but start out with the same database names? In MySQL a schema is just a synonym for a database, so I cannot simply copy the database tables into another namespace and reuse the table names that way.
One rough solution could be to copy the databases over with a prefix added, like app2_database.table. It feels like a wonky solution, since it pollutes the namespace, so to speak, with duplicate database tables on my dev machine. I want to ask if there is a better solution available to address my concern.
What can I do to be able to develop both apps concurrently on one dev machine, where the apps have different data needs but start off with the same collection of databases ("database" in the MySQL sense here)? My goal is to be able to develop both apps concurrently (i.e. work on one app one day and the other the next) on the same machine.
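If you go the prefix route, one way to keep it from leaking all over the code (purely a sketch - the APP_NAME switch and all names are hypothetical) is to resolve database names through a single helper, so only one place knows about the app2_ prefix:

    <?php
    // Sketch: centralize the database-name prefix so queries never
    // hard-code app2_. APP_NAME and all names are hypothetical.
    function dbName(string $base): string
    {
        $prefix = getenv('APP_NAME') === 'app2' ? 'app2_' : '';
        return $prefix . $base;
    }

    $pdo  = new PDO('mysql:host=127.0.0.1', 'dev', 'secret');
    $stmt = $pdo->query('SELECT COUNT(*) FROM ' . dbName('database') . '.`table`');
    echo $stmt->fetchColumn();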

How can I connect to and fetch data from multiple MySQL databases on multiple servers?

I want to fetch data from multiple MySQL databases which sit on multiple servers.
I'm using phpMyAdmin (MySQL). All the databases are MySQL (same vendor), spread across multiple servers. First I want to connect to those databases, then fetch data from them, then put the results into a central database.
For example: remote_db_1 on server 1, remote_db_2 on server 2, remote_db_3 on server 3, and I have a central database where I want to store the data that comes from the different databases.
Query: select count(user) from user where profile != 2; - the same query will be run against all the databases.
central_db.school_distrct_info_table:

    id | school_district_id | total_user
    ---+--------------------+-----------
     1 |                  2 |         50
     2 |                 55 |        100
     3 |                100 |        200
I've tried the FEDERATED engine but it doesn't fit our requirements. What can be done in this situation - any tool, alternative method, or anything else?
In future the number of databases on different servers will increase - it might be 50, 100, maybe more - and exporting the tables from each source server and then loading them into the central DB will be a hard task. So I'm also looking for some kind of ETL tool which can fetch data directly from multiple source databases and send it to the destination database. In the central DB the table structure, datatypes, and columns will all be different, and sometimes we might need to add an extra column to store some data. I know this can be achieved with an ETL tool; in the past I've used SSDT, which works with SQL Server, but here this is MySQL.
The easiest way to handle this problem is with federated servers. But you say that won't work for you.
So, your next best way to handle the problem is to export the tables from the source servers and then load them into your central server. But that's much harder. This sort of operation is sometimes called extract / transform / load, or ETL.
You'll write a program in the programming language of your choice (Python, PHP, Java, Perl, Node.js...) to connect to each database separately, query it, and then put the information into a central database.
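A bare-bones sketch of such a program in PHP, using the exact query from the question; the host names, credentials, and the district-id mapping are assumptions:

    <?php
    // ETL sketch: run the same query on every source server and load the
    // counts into the central table. Hosts and credentials are illustrative.
    $sources = [
        ['host' => 'server1', 'db' => 'remote_db_1', 'district_id' => 2],
        ['host' => 'server2', 'db' => 'remote_db_2', 'district_id' => 55],
        ['host' => 'server3', 'db' => 'remote_db_3', 'district_id' => 100],
    ];

    $central = new PDO('mysql:host=central;dbname=central_db', 'etl', 'secret');
    $insert  = $central->prepare(
        'INSERT INTO school_distrct_info_table (school_district_id, total_user)
         VALUES (?, ?)
         ON DUPLICATE KEY UPDATE total_user = VALUES(total_user)'
    );

    foreach ($sources as $src) {
        // Extract: the same query on every source, as in the question.
        $remote = new PDO("mysql:host={$src['host']};dbname={$src['db']}", 'reader', 'secret');
        $count  = $remote->query('SELECT COUNT(user) FROM user WHERE profile != 2')
                         ->fetchColumn();

        // Load: one row per district in the central table (assumes a
        // unique key on school_district_id for the upsert to work).
        $insert->execute([$src['district_id'], $count]);
    }

As the number of servers grows, the $sources array is the only thing that changes; it could itself be read from a table in central_db.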
Getting this working properly is, sad to say, incompatible with "really urgent": it's tricky to get working and to test.
May I suggest you write another question explaining why server federation won't meet your needs, and asking for help? Maybe somebody can help you configure it so it does. Then you'll have a chance to finish this project promptly.

Easy way to sync Firebird and MySQL [duplicate]

I am looking for a tip on how to synchronize data from a local Firebird database into an online DB. A few comments:
On the local machine I use sales software which keeps its data in a Firebird DB. There is an internet connection, but I want to avoid direct DB access (as the PC is turned off after 9pm).
I would like to create an online app (based on Foundation + PHP + a database) in which I will be able to view daily sales and explore past data.
From the local DB I will need to pull data from several different tables, and I would like to keep it in the online/final DB as a single table (with fields: #id, transaction date, transaction value, sales manager).
While I mostly know how to create the frontend of the app, and partially the backend, I still wonder what would be the best choice of DB - MySQL (it was my first thought)? Or should I rather focus on NoSQL?
What is your recommendation on data sync? Should I use SymmetricDS (pretty hard to configure) or an equivalent, or should I write a script which pushes data from Firebird into JSON/XML? I defer to your knowledge and best practices.
Set up a scheduled job that invokes a simple data pump / replication script.
From the script, connect to the source sales DB, retrieve the joined data added since the last replication, and insert it into the "online" database.
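A sketch of such a script, assuming PHP with the pdo_firebird extension, MySQL as the online DB, and hypothetical table/column names built around the fields listed in the question (#id, transaction date, transaction value, sales manager):

    <?php
    // Data pump sketch: Firebird (local sales DB) -> MySQL (online DB).
    // Requires the pdo_firebird extension; all names are illustrative.
    $fb = new PDO('firebird:dbname=localhost:/data/sales.fdb', 'SYSDBA', 'masterkey');
    $my = new PDO('mysql:host=online.example.com;dbname=sales_online', 'pump', 'secret');

    // Where did the last replication stop?
    $since = $my->query("SELECT COALESCE(MAX(transaction_date), '1970-01-01') FROM sales")
                ->fetchColumn();

    // Join the source tables into the single flat shape the online DB wants.
    $src = $fb->prepare(
        'SELECT t.id, t.tx_date, t.tx_value, m.name AS sales_manager
           FROM transactions t
           JOIN managers m ON m.id = t.manager_id
          WHERE t.tx_date > ?'
    );
    $src->execute([$since]);

    $ins = $my->prepare(
        'INSERT INTO sales (id, transaction_date, transaction_value, sales_manager)
         VALUES (?, ?, ?, ?)'
    );
    foreach ($src->fetchAll(PDO::FETCH_ASSOC) as $row) {
        // Firebird returns column names in uppercase by default.
        $ins->execute([$row['ID'], $row['TX_DATE'], $row['TX_VALUE'], $row['SALES_MANAGER']]);
    }

Run it from cron (or the Windows Task Scheduler) some time before the PC is switched off at 9pm.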
You may also keep Firebird as the online DB, as it works great with PHP.
Firebird, as of version 2.5, has all the technology already built in to implement fully functional replication. We have implemented this in the largest installation for a big restaurant company, with about 0.6 billion records, around 1 million new records daily, and 150 locations where replicated servers work online or offline with the back office software.
If you simply want to upload the data from your local DB to a remote DB, you can rent a virtual server from a provider you like, install Firebird there, and create a secure connection (we use SSH, but any TCP over VPN can be used). Copy your local database to the remote server and, if required, open the Firebird port (3050 or other) in the firewall. If you have a low number of writes on your local database, simply implement a trigger on each table that performs the same insert/update/delete with the same values, using the "execute statement on external" feature.
When your local database has a higher workload, it is better to have the trigger write the change data (table name and PK values) into a log table and let a second connection upload those records to the target DB, where the same "execute statement on external" feature can be used.
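As a rough illustration of that second connection (a sketch with hypothetical log-table and column names; instead of "execute statement on external", it replays the changes over a direct second PDO connection to the remote server):

    <?php
    // Sketch: drain the trigger-fed change log and replay each row on the
    // remote server over a second connection. All names are hypothetical.
    $local  = new PDO('firebird:dbname=localhost:/data/sales.fdb', 'SYSDBA', 'masterkey');
    $remote = new PDO('firebird:dbname=vps.example.com:/data/sales.fdb', 'SYSDBA', 'masterkey');

    $log = $local->query('SELECT log_id, table_name, pk_value FROM change_log ORDER BY log_id');

    foreach ($log->fetchAll(PDO::FETCH_ASSOC) as $entry) {
        // Fetch the current row state locally, then upsert it remotely.
        // (Real code would whitelist table names and handle deletes too.)
        $row = $local->query(
            "SELECT * FROM {$entry['TABLE_NAME']} WHERE id = {$entry['PK_VALUE']}"
        )->fetch(PDO::FETCH_ASSOC);

        if ($row !== false) {
            $cols  = implode(', ', array_keys($row));
            $marks = implode(', ', array_fill(0, count($row), '?'));
            // UPDATE OR INSERT is Firebird's upsert; it matches on the primary key.
            $remote->prepare("UPDATE OR INSERT INTO {$entry['TABLE_NAME']} ($cols) VALUES ($marks)")
                   ->execute(array_values($row));
        }

        $local->prepare('DELETE FROM change_log WHERE log_id = ?')
              ->execute([$entry['LOG_ID']]);
    }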
This is just a hint at how to do it. If budget allows, we can do it for you - but shutting down the database PC in the evening seems typical only of smaller companies.

Constants over multiple servers

The question is: how do I update a constant? This sounds like a stupid question, but let's look at the background of my issue:
Background
I manage a network of servers, which includes a MySQL server, multiple HTTP servers, and a Minecraft server (a self-hosted server that gamers who have installed Minecraft can connect to and play together). All of the user-end services (HTTP servers, Minecraft server, user apps) are directly or indirectly related to the MySQL server. The MySQL database stores different data for each player account, for example, the online/offline status of players, etc.
In programming, constants are used to create a general reference to a value that will not change across a runtime, especially for software-internal identifiers such as data flags, bitmasks, etc. In my case, I also use constants to store specific data, such as the MySQL server's address and other credentials. So when I want to change the server address, I only need to modify it in one place, for example an internal constants.php on the server.
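For example, such a constants.php might look like this (a minimal sketch; the values are placeholders):

    <?php
    // constants.php - the single place where server-wide values live (sketch).
    const DB_HOST     = 'db.example.com';
    const DB_USER     = 'game';
    const DB_PASSWORD = 'change-me';

    // Software-internal identifiers, e.g. the rank bitmask flags below:
    const RANK_VIP       = 0x01;
    const RANK_MODERATOR = 0x10;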
Problem
When I migrate my MySQL database to another host or change the password, I have to update the details on every server. It is not possible to create a centralized data provider that serves the server address, because the MySQL server is itself the centralized data provider. That means every time I change the value, I must update all servers. I must also maintain a very private, local list of all these places (probably written on a memo stuck to my computer!), because it is really hard to locate all the references. So, my question is: is there a better way of managing this that lets me change the values in one place? Note that the servers are on different hosts, so it is not possible to put the value in a local file, and it doesn't sound reasonable to create a centralized data provider (call it a password provider) for access to the real centralized data provider (MySQL) either, since whenever I need to change the MySQL database details, I will have the same need to change the password provider's details as well.
This is less of a concern, but since it is a similar question, I am putting it down here too. I use integer bitmasks to store player ranks. For example, if a player is a VIP he has the 0x01 flag, if he is a moderator he has the 0x10 flag, 0x11 if he is both, and so on. I want to refactor the bitmask values as well, but it would be great trouble, because I would need to shut down all servers, update the MySQL values, update the constants on every server, and then restart all servers, to avoid a potential security vulnerability during the update window. Is there a more convenient way to do that?
This is a network management question too, but I consider it more programming-related.
You are talking about a deployment system. For example, you can use Capistrano: https://github.com/capistrano/capistrano. Keep constants.php in git and create a Capistrano task to deploy this file to each server. I use this tool for deploying projects that are among the 50 busiest sites of the Russian segment of the Internet :)
You are also talking about data migration. There are several ways to do it, some with downtime and some without (sometimes it depends on the situation).
Data migration without downtime (a sketch of step 1 follows the list):
1) Modify your app so it understands both the old and the new variant of the player bitmask.
2) Deploy the modified app.
3) Update the bitmasks in your databases.
4) Modify your app so it understands only the new variant of the bitmask.
5) Deploy the modified app.
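Step 1 might look something like this in PHP (a sketch; the old flag values are taken from the question, the new ones are hypothetical):

    <?php
    // Transition-period reader that accepts both bitmask variants (sketch).
    // Old flags (from the question): VIP = 0x01, moderator = 0x10.
    // New flags (hypothetical):      VIP = 0x02, moderator = 0x04.
    const NEW_VIP = 0x02;
    const NEW_MOD = 0x04;

    function normalizeRank(int $mask, bool $isOldFormat): int
    {
        if (!$isOldFormat) {
            return $mask; // row already stores the new format
        }
        $new = 0;
        if ($mask & 0x01) { $new |= NEW_VIP; }
        if ($mask & 0x10) { $new |= NEW_MOD; }
        return $new;
    }

    // During the transition, a per-row flag (or a schema version column)
    // tells the app which format a stored mask uses:
    var_dump(normalizeRank(0x11, true) === (NEW_VIP | NEW_MOD)); // bool(true)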

How to store sensitive data of different clients in SQL server?

I work at a small company and I am trying to figure out a solution for storing sensitive data of multiple clients in Microsoft SQL server. Actually, I feel like this is a general database question and it is not specific to MSSQL.
Until now we have been using a proprietary database where the client data is stored as db files (flat files) in the client’s root directories in the file system. So the operating system permissions guarantee that the application used by client X can never fetch data from client Y’s database. Please note that there is no database server/instance/engine here…
However, for my project I want to use SQL database. But the security folks are expressing concerns over putting data of different clients on a single database.
One option is to create separate database instances for different clients. However, I am not sure if this idea is scalable.
So my questions are:
1) Is there any mechanism in MSSQL that enables you to store databases ‘separately’ in different files used by the SQL server?
2) Let’s say I have only one database instance where I have databases of client X and client Y. How can I make sure that client X’s requests can never (accidentally) get misdirected to client Y’s database? I do not want to rely on some parameter in my code to determine which database to fetch from! :)
So, is there any solid authentication scheme to guarantee that my queries could not be misdirected to fetch from an incorrect client table?
I think this is a very common problem and there has to be a good solution for this. What are other companies doing?
Please let me know if there are any good articles to read up on this.
Different databases are always stored in different files in SQL Server, so you don't even have to do anything special for this. However, NTFS permissions will not help you here, as the clients never access the files directly on disk.
One possible solution in SQL Server is to create separate sets of Windows user IDs and map those to separate SQL logins for each customer. You then grant each login access only to the appropriate database. For example, if you were hosting web sites for client X and client Y, you would set up the connection string(s) in the web.config for client X's web site to use the appropriate login(s) for client X's database, and vice versa for client Y. This guarantees that, barring a hard-coded login, the code on client X's site can never access client Y's database.
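Illustrated with connection strings (a sketch using PHP's pdo_sqlsrv driver; the database and login names are hypothetical - the same idea applies to a web.config connection string):

    <?php
    // Sketch: each client's code only ever holds credentials for its own
    // login, and the server grants that login access to one database only.
    // Names are hypothetical; requires the pdo_sqlsrv driver.

    // Deployed with client X's site:
    $clientX = new PDO(
        'sqlsrv:Server=dbhost;Database=ClientX_DB',
        'clientx_login',   // mapped only to ClientX_DB on the server
        'clientx-password'
    );

    // Deployed with client Y's site:
    $clientY = new PDO(
        'sqlsrv:Server=dbhost;Database=ClientY_DB',
        'clienty_login',   // mapped only to ClientY_DB on the server
        'clienty-password'
    );

    // Even a buggy query in client X's code cannot reach ClientY_DB:
    // the server rejects it because clientx_login has no access there.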
You can have over 32,000 databases on a single instance of SQL Server, and having separate databases enables a number of improved serviceability scenarios (such as restoring a single customer's DB after a data problem without affecting all of your other customers).
http://technet.microsoft.com/en-us/library/ms143432.aspx