I am looking for tips on how to synchronize data from a local Firebird database into an online DB. A few comments:
On the local machine I use sales software which keeps its data in a Firebird DB. There is an internet connection, but I want to avoid direct DB access (the PC is turned off after 9 pm).
I would like to create an online app (based on Foundation + PHP + a database) in which I will be able to view daily sales and explore past data.
From the local DB I will need to pull data from several different tables, and I would like to keep it in the online/final DB as a single table (with fields: #id, transaction date, transaction value, sales manager).
While I mostly know how to create the frontend of the app, and partially the backend, I still wonder what the best choice would be in terms of the DB. MySQL (it was my first thought)? Or should I rather focus on NoSQL?
What's your recommendation on data sync? Should I use SymmetricDS (pretty hard to configure) or an equivalent, or should I write a script which pushes the data from Firebird into JSON/XML? I'm counting on your knowledge and best practices.
Set up a scheduled job that invokes a simple data pump / replication script.
From the script, connect to the source sales DB, retrieve the joined data added since the last replication, and insert it into the "online" database.
You may also keep Firebird as the online DB; it works great with PHP.
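A minimal sketch of such a pump script in PHP, assuming the PDO Firebird and MySQL drivers are available; every host, table, and column name below is a hypothetical placeholder for your real schema:

```php
<?php
// Nightly data pump: copy new joined sales rows from the local Firebird DB
// to the online MySQL DB. All names and credentials are illustrative assumptions.
$src = new PDO('firebird:dbname=localhost:/data/sales.fdb', 'SYSDBA', 'masterkey');
$dst = new PDO('mysql:host=online-host;dbname=reports', 'report_user', 'secret');

// High-water mark: the newest transaction date already present online.
$last = $dst->query('SELECT MAX(transaction_date) FROM sales_flat')->fetchColumn() ?: '1970-01-01';

// Pull only rows added since the last replication, already joined into the flat shape.
$rows = $src->prepare(
    'SELECT t.id, t.trans_date, t.trans_value, m.manager_name
       FROM transactions t
       JOIN managers m ON m.id = t.manager_id
      WHERE t.trans_date > ?'
);
$rows->execute([$last]);

$ins = $dst->prepare(
    'INSERT INTO sales_flat (id, transaction_date, transaction_value, sales_manager)
     VALUES (?, ?, ?, ?)'
);
foreach ($rows as $r) {
    // PDO_Firebird returns column names in uppercase by default.
    $ins->execute([$r['ID'], $r['TRANS_DATE'], $r['TRANS_VALUE'], $r['MANAGER_NAME']]);
}
```

Schedule it with cron (or Windows Task Scheduler) to run shortly before the PC is switched off, e.g. `0 20 * * * php /opt/sync/pump.php`.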
Firebird, as of version 2.5, also has all the technology built in to implement fully functional replication. We have implemented this in our largest installation, for a big restaurant company with about 0.6 billion records, roughly 1 million new records per day, and 150 locations whose replicated servers work online or offline with the back-office software.
If you simply want to upload the data from your local DB to a remote DB, you can rent a virtual server from a provider you like, install Firebird there, and create a secure connection (we use SSH, but any TCP over VPN can be used). Copy your local database to the remote server and, if required, open the Firebird port (3050 or other) in the firewall. When you have a low number of writes on your local database, simply implement a trigger on each table that performs the same insert/update/delete with the same values on the remote side, using the EXECUTE STATEMENT ... ON EXTERNAL feature, as sketched below.
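A hedged sketch of such a trigger, installed here from PHP; the table, remote connect string, and credentials are assumptions, and the PSQL uses Firebird 2.5's EXECUTE STATEMENT ... ON EXTERNAL syntax:

```php
<?php
// Install a hypothetical replication trigger on the local Firebird database.
// 'sales', its columns, the remote path, and the credentials are placeholders.
$local = new PDO('firebird:dbname=localhost:/data/sales.fdb', 'SYSDBA', 'masterkey');

$local->exec(<<<'SQL'
CREATE TRIGGER sales_replicate_ai FOR sales
AFTER INSERT
AS
BEGIN
  -- Replay the same insert on the remote server (Firebird 2.5+).
  EXECUTE STATEMENT
    ('INSERT INTO sales (id, trans_date, trans_value) VALUES (:id, :d, :v)')
    (id := NEW.id, d := NEW.trans_date, v := NEW.trans_value)
    ON EXTERNAL 'remote-host:/data/sales.fdb'
    AS USER 'SYSDBA' PASSWORD 'masterkey';
END
SQL);
```

Analogous AFTER UPDATE and AFTER DELETE triggers complete the picture. Keep in mind that a failing remote connection will then make the local write fail too, which is why the log-table variant below is safer under load.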
When your local database has a higher workload, it is better to have the trigger write the change data (table name and PK values) into a log table and to let a second connection upload those records to the target DB, where the same EXECUTE STATEMENT ... ON EXTERNAL can be used.
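A minimal sketch of that uploader, with a hypothetical CHANGE_LOG table (ID, TABLE_NAME, PK_VALUE, OP) that the triggers fill; here the replay is done from PHP with two connections, an equivalent and easier-to-test variant:

```php
<?php
// Replay logged changes from the local Firebird DB against the remote one.
// CHANGE_LOG and all other names are assumptions; triggers are expected to fill it.
$local  = new PDO('firebird:dbname=localhost:/data/sales.fdb', 'SYSDBA', 'masterkey');
$remote = new PDO('firebird:dbname=remote-host:/data/sales.fdb', 'SYSDBA', 'masterkey');

foreach ($local->query('SELECT id, table_name, pk_value, op FROM change_log ORDER BY id') as $c) {
    if ($c['OP'] === 'D') {
        // Deletes only need the primary key.
        $remote->prepare("DELETE FROM {$c['TABLE_NAME']} WHERE id = ?")
               ->execute([$c['PK_VALUE']]);
    } else {
        // For inserts/updates, re-read the current local row and upsert it remotely.
        $sel = $local->prepare("SELECT * FROM {$c['TABLE_NAME']} WHERE id = ?");
        $sel->execute([$c['PK_VALUE']]);
        if ($row = $sel->fetch(PDO::FETCH_ASSOC)) {
            $cols = implode(', ', array_keys($row));
            $ph   = rtrim(str_repeat('?, ', count($row)), ', ');
            // UPDATE OR INSERT is Firebird's upsert; it matches on the primary key.
            $remote->prepare("UPDATE OR INSERT INTO {$c['TABLE_NAME']} ($cols) VALUES ($ph)")
                   ->execute(array_values($row));
        }
    }
    // Only clear the log entry once the remote write has succeeded.
    $local->prepare('DELETE FROM change_log WHERE id = ?')->execute([$c['ID']]);
}
```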
This is just a hint at how to do it; if your budget allows, we can do it for you. But turning the database PC off in the evening seems to be typical only of smaller companies.
I need to create a system with local web servers on Raspberry Pi 4 running Laravel for API calls, websockets, etc. An RPi will be installed at each of multiple customers' sites.
For this project I want the ability to save/sync the database to a remote server (when the local system is connected to the internet).
Multiple locale databases => one remote database, customer-based.
The question is how to synchronize the databases, identify each customer's data properly, and render it in a shared remote dashboard.
My first thought was to set a customer_id or a team_id on each table, but it seems dirty.
The other way is to create multiple databases on the remote server for the synchronization, plus one extra database to hold the customer IDs and database connection information...
Has someone already experimented with something like that? Is there a reliable and clean way to do this?
You refer to locale but I am assuming you mean local.
From what you have said, you have two options at the central site. The central database can either store information from the remote databases in a single table with an additional column that indicates which remote site each row comes from, or you can set up a separate table (or database) for each remote site.
How do you want to use the data?
If you only ever want to work with the data from one remote site at a time, it doesn't really matter: in both scenarios you need to identify which data you want to work with and build your SQL statement either to filter by the appropriate column or to target the appropriate table(s).
If you want to work on data from multiple remote sites at the same time, then using different tables requires UNION queries to extract the data, and this is unlikely to scale well. In that case you would be better off using a column to mark each record with the remote site it references.
I also recommend that you consider using UUIDs as primary keys. Key collisions may not be an issue in your scenario, but if they become one, trying to alter the design retrospectively is likely to be quite a bit of work. A sketch combining both ideas follows.
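Since the question mentions Laravel, here is a minimal migration sketch; the table and column names are hypothetical, but it shows the two recommendations together (a UUID primary key plus a per-customer marker column):

```php
<?php
// Hypothetical central table: UUID keys avoid collisions between RPis,
// customer_id marks which remote site each record came from.
use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration {
    public function up(): void
    {
        Schema::create('sensor_readings', function (Blueprint $table) {
            $table->uuid('id')->primary();             // generated on the RPi, collision-safe
            $table->unsignedBigInteger('customer_id'); // which remote site owns the row
            $table->timestamp('read_at');
            $table->json('payload');
            $table->index(['customer_id', 'read_at']); // dashboard queries filter on these
        });
    }

    public function down(): void
    {
        Schema::dropIfExists('sensor_readings');
    }
};
```

The dashboard then needs no UNIONs: `WHERE customer_id = ?` scopes any query to one site, and omitting it aggregates across all sites.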
You also asked how to synchronize the databases. That will depend on the type of connection you have between the sites and the capabilities of your software, but typically you would have the local system periodically talk to a web service at the central site. Assuming you are collecting sensor data or some such, the dialogue would be something like:
Client - Hello Server, my last sensor reading is timestamped xxxx
Server - Hello Client, [ send me sensor readings from yyyy | I don't need any data ]
You can include things like a signature check (for example, an MD5 sum of the records within a time period) if you want to, but that may be overkill.
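Client-side, that dialogue can be as small as the sketch below; the endpoint URLs, field names, and the `load_local_readings_since()` helper are all hypothetical:

```php
<?php
// Hypothetical sync handshake: tell the server our newest reading, then
// upload whatever range it asks for. Not a real API, just the shape of one.
$latest = '2024-05-01T09:30:00Z'; // timestamp of the newest local reading

function postJson(string $url, array $body): array
{
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => json_encode($body),
        CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
        CURLOPT_RETURNTRANSFER => true,
    ]);
    $reply = curl_exec($ch);
    curl_close($ch);
    return json_decode($reply, true) ?: [];
}

// "Hello Server, my last sensor reading is timestamped xxxx"
$reply = postJson('https://central.example.com/api/sync/handshake',
                  ['customer_id' => 42, 'latest' => $latest]);

// "Hello Client, send me sensor readings from yyyy" (or nothing to do)
if (!empty($reply['from'])) {
    $rows = load_local_readings_since($reply['from']); // hypothetical helper
    postJson('https://central.example.com/api/sync/upload',
             ['customer_id' => 42, 'readings' => $rows]);
}
```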
I want to fetch data from multiple MySQL databases which are on multiple servers.
I'm using phpMyAdmin (MySQL). All the databases will be MySQL databases (same vendor) on multiple servers. First I want to connect to those server databases, then fetch data from them, and then put the result into a central database.
For example: remote_db_1 on server 1, remote_db_2 on server 2, remote_db_3 on server 3; and I have a central database where I want to store the data which comes from the different databases.
Query: select count(user) from user where profile != 2; the same query will be run against all the databases.
central_db, table school_distrct_info_table:

id   school_district_id   total_user
1    2                    50
2    55                   100
3    100                  200
I've tried the FEDERATED engine, but it doesn't fit our requirements. What can be done in this situation: any tool, any alternative method, anything?
In the future the number of databases on different servers will increase; it might be 50, 100, maybe more, and exporting the tables from the source servers and then loading them into the central DB would be a hard task. So I'm also looking for some kind of ETL tool which can fetch data directly from multiple source databases and then send it to the destination database. In the central DB table the structure, datatypes, and columns will all be different, and sometimes we might need to add an extra column to store some data. I know this can be achieved with an ETL tool; in the past I've used SSDT, which works with SQL Server, but here this is MySQL.
The easiest way to handle this problem is with federated servers. But you say that won't work for you.
So your next best way to handle the problem is to export the tables from the source servers and then load them into your central server. But that's much harder. This sort of operation is sometimes called extract / transform / load, or ETL.
You'll write a program in the programming language of your choice (Python, PHP, Java, Perl, Node.js?) to connect to each database separately, query it, and then put the information into the central database, along the lines of the sketch below.
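A minimal ETL sketch in PHP; the host names, credentials, and the district-ID mapping are hypothetical, while the query and target table come straight from the question:

```php
<?php
// Connect to each source DB in turn, run the same query, load results centrally.
// The $sources map (district id => DSN) and all credentials are assumptions.
$sources = [
    2   => 'mysql:host=server1;dbname=remote_db_1',
    55  => 'mysql:host=server2;dbname=remote_db_2',
    100 => 'mysql:host=server3;dbname=remote_db_3',
];

$central = new PDO('mysql:host=central;dbname=central_db', 'etl_user', 'secret');
// Assumes a unique key on school_district_id so re-runs update rather than duplicate.
$upsert  = $central->prepare(
    'INSERT INTO school_distrct_info_table (school_district_id, total_user)
     VALUES (?, ?)
     ON DUPLICATE KEY UPDATE total_user = VALUES(total_user)'
);

foreach ($sources as $districtId => $dsn) {
    $remote = new PDO($dsn, 'readonly_user', 'secret');
    // The same query runs against every source database.
    $total = $remote->query('SELECT COUNT(user) FROM user WHERE profile != 2')->fetchColumn();
    $upsert->execute([$districtId, $total]);
}
```

Adding a new source server is then just one more entry in the `$sources` map, which keeps the 50-or-100-servers case manageable.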
Getting this working properly is, sad to say, incompatible with "really urgent". It's tricky to get working and to test.
May I suggest you write another question explaining why server federation won't meet your needs, and asking for help? Maybe somebody can help you configure it so it does. Then you'll have a chance to finish this project promptly.
I want to build a desktop application, somewhat like a POS. The user can enter data into it and save the data to a local database instead of accessing a remote database on the server. The reason I want to do this is to reduce traffic and make my application more responsive, since it avoids the overhead of accessing a remote database.
I want to build at least 5 clients of this desktop application, and each of them will have a local database. Along with these clients, I will set up a server database which I will use for reports or for online access that displays all the status and data of all my clients.
For example, when a specific user uses a client machine, all of his data will be stored in the local database before it is transported to the server database for synchronization. It would seem like this system doesn't give real-time data updates to the server, but that is what I need. Since the server database is only used for reporting purposes, there is no information on the server that is manipulated by a client.
You may be looking for a JumpMind product called SymmetricDS. It syncs data from branch offices to a central office, sharing data with all offices and syncing subsets of data with specific offices.
I am working with a client who is syncing between SQL Server and MySQL, both containing the exact same schema and data. We want to centralize that data into one database. Other than performance and maintainability issues, what else is bad about the original design?
You can create a linked server instance in SQL Server pointing at the MySQL instance.
Despite being completely proprietary, one of the nice connectivity features offered in SQL Server is the ability to query other servers through a linked server. Essentially, a linked server is a method of directly querying another RDBMS; this often happens through the use of an ODBC driver installed on the server.
Refer to this article for the step-by-step process of setting up a SQL Server linked server to MySQL.
Provided you grant the MySQL user you connect on behalf of the proper permissions, you can also write to the MySQL instance. So you can update your stored procedures to do an additional step and insert records into MySQL, as in the sketch below.
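A hedged sketch of such a write-through, driven here from PHP; the linked server name 'MYSQL_LINK', the table, and the values are assumptions, and the linked server itself must already be configured (for example via sp_addlinkedserver and a MySQL ODBC DSN):

```php
<?php
// Push a row to MySQL through a SQL Server linked server.
// 'MYSQL_LINK', 'mirror_table', and all credentials are hypothetical.
$pdo = new PDO('sqlsrv:Server=sqlhost;Database=sales', 'app_user', 'secret');

// OPENQUERY is the usual way to write through a linked server to MySQL:
// SQL Server sends the inner query to MySQL and inserts into its result set.
$pdo->exec("
    INSERT INTO OPENQUERY(MYSQL_LINK, 'SELECT id, name, total FROM mirror_table')
    VALUES (1, 'example', 99.50)
");
```

The same INSERT INTO OPENQUERY(...) statement can of course live inside the existing stored procedures instead of a PHP script; that is the "additional step" described above.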
A much easier solution is to use a commercial application: Omega Sync from Spectral Core.
Omega Sync can compare and synchronize both database schema and table data. You can even synchronize data of heterogeneous databases (for example, compare your local SQL Server database with a MySQL replica on your web site - and synchronize all the differences in just a few minutes).
On the other hand, I think you've already mentioned the possible problems you may encounter when synchronizing two DBs at the same time. Aside from those two, I think there is also the matter of resources: since different RDBMSs are serving the application, each keeps its own resources, so when I update a particular record of a user, the system still needs to check which resource it actually lives in. But I'd love to hear more from other people out there; this is a really interesting topic to discuss. ;)