I have two servers. One is located in our office, and its MySQL database contains our offers, our clients, etc.
The second server is located at our hosting provider's datacenter. It uses the same database structure and the same offers, and I use it for our website.
I was synchronizing these two servers manually, by sending JSON from one server to the other every few hours, but now I need real-time synchronization.
Which approach should I use?
Master-slave replication from the company server to the website server. The problem is that the slave (website) database has writable tables of its own, for example orders, user sessions, view counts and so on, and I would somehow need to get those tables back to the master server at the office.
Use only one database for both servers. The problem is that there can be up to 100 queries per pageview, and I think running every query over the internet could be quite slow.
We cannot use a single server for everything because we are unable to provide a stable low-latency internet connection at our office. So whenever the internet connection goes down, either our site or our CRM system would be down too.
Maybe there is a third, better way to do this?
You can try the Data Comparison tool in dbForge Studio for MySQL. It connects to two different MySQL servers (using a simple connection, or an SSL, SSH, or HTTP tunnel), compares them, and shows the differences; it then offers to run the synchronization script, or to view/save it.
There is also a stand-alone dbForge Data Compare tool.
I am working at a company that has some CRM software running on a remote Windows XP server that uses a SQL Anywhere 9 database to store its data; I have access to this remote server with an administrator account.
I would like to extract the database into a .sql file so that I can run it locally on my machine without affecting the running database on the server (since it is key to the company's day-to-day operation).
The reason I need this is that we are going to test some BI Software and we need data from this database to test it, but we don't know the structure of the database since the developers of the CRM software didn't give us any documentation on it. So we need to have the database locally so that, without affecting the running CRM, we can:
understand the structure by looking at the DDL
make queries to it to get sample data
I researched a bit, and the most common solution to my problem was to use dbunload on the remote server to unload the db into a reload.sql file that contained what I needed. But most tutorials on the subject mention that I have to stop the db first (which would be catastrophic). If this is the only option, then I guess I am willing to do it on the weekend when the CRM is not used, but I wanted to know if there was another solution first.
If there is no other solution, can you point me to where I can find the proper and safer way to do this?
I have researched a lot, but prior to today I had never even heard of SQL Anywhere, so I really need all the help I can get. My main concern is doing something that negatively impacts the CRM software.
Thank you.
You can run dbunload across the network; you just have to tell it to do an "external" unload. The default is an internal unload, which only works from the machine where the database server is running.
I don't have SQL Anywhere 9 documentation right now to look up the exact switch, but dbunload -? should show you all the possible switches.
Edit:
-an will create a new database and load the schema and data from another database.
-xi will do an external unload and an internal reload.
-c specifies the parameters used to connect to your remote database.
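Putting those together, a rough, untested invocation might look like the line below. The server name, credentials, host and file path are placeholders for illustration only, and since I don't have the version 9 docs at hand, verify the exact switch names with dbunload -? before running anything against the production server:

    dbunload -c "ENG=crm_server;DBN=crm;UID=dba;PWD=secret;LINKS=tcpip(HOST=192.168.1.10)" -an c:\temp\crm_copy.db -xi

Here -c carries the connection parameters for the running remote database, -xi asks for an external unload with an internal reload, and -an names the new database file to create, so the production database never has to be stopped.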
I have a situation where I would like a desktop application to be useable whether an internet connection is present or not.
I have MySQL on a web server available, and I could work with a local MySQL database, or maybe an MS Access database on the local drive, and then just push updates when the connection is restored. My issues are as follows.
Syncing local changes to the remote server: in a multi-site/multi-user scenario, how do I keep the databases in sync when the connection is restored without losing changes other users have made to the server data?
Syncing remote changes to local: in a multi-site/multi-user scenario, how do I keep the databases in sync when the connection is restored without losing changes made locally while updating from the server data?
Currently I am using XML files and LINQ to XML queries, but continuing with these files is unsatisfactory, so a better solution is required.
Any help identifying which technology would work best, and how to keep the databases in sync, would be appreciated.
Thanks in advance.
"Jet Replication Objects (JRO)", the replication features of the Access Database Engine, have been deprecated (ref: here). I believe that the related management features have also been completely removed from Access 2013. Therefore, native Access replication should no longer be considered a viable option.
Microsoft's recommendation would be to use SQL Server and its replication features. However, SQL Server Express has limitations on how much it can do (e.g., it can be a "Subscriber" but not a "Publisher" or "Distributor", ref: here) so presumably there would have to be a non-free copy of SQL Server involved somehow.
I haven't yet had the occasion to use MySQL replication myself, but it is probably worth considering. Chances are good that you could still use Access as a front-end (via ODBC linked tables).
I have recently started using MySQL Workbench, so I apologise if this is not the proper platform to ask this question. I tried to figure out a solution on my own, but could not find an appropriate one.
Here is my situation: at my workplace we have a huge set of data on operational and financial figures such as sales, employees, profit, etc. for European companies, spread over the past 7-8 years, and new data keeps coming in regularly. However, the problem is that we work from different remote locations, me in one city and my two colleagues in a different city. Normally we share our work files (.xls/.doc etc.) through Dropbox. So we thought of creating a database in MySQL to which we can all submit/edit/add this data, so that we can filter and analyse it in several ways once the collection is complete, and we plan to use and access it thereafter. We believe this will ease a lot of our work. So all I want to know is: can all three of us collaborate simultaneously (in order to add or edit the data) through Workbench's server administration, the way we collaborate on our work through Dropbox? I want to be the host (like the administrator) and then grant access to my colleagues.
Thank you for your time and answer. You may also refer me to any site or link to read more about it.
I think you are a bit confused about what MySQL Workbench is.
MySQL Workbench is just a data viewer and administration tool that connects to a MySQL server; there is no data "stored" in MySQL Workbench itself, all the data is stored on the server.
MySQL Workbench can:
Connect to a MySQL server
Send SQL instructions and show the results: You can create and drop databases, send SQL queries, create and execute stored procedures and functions... all assuming you have the right privileges.
Perform administration tasks: You can create and drop users, grant or revoke permissions, and so on.
But the fact is: everything is stored on a MySQL server... so the answer to your question is: yes, you can work simultaneously with your colleagues, if and only if all of you can connect to the same database server (as Mike W commented).
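To make that concrete: assuming the machine running the MySQL server is reachable from your colleagues' computers, the administrator account can create a user for each of them and grant privileges on the shared database. The user name, password and database name below are purely illustrative:

    -- run as the administrative (root) user on the MySQL server
    CREATE USER 'colleague1'@'%' IDENTIFIED BY 'a_strong_password';
    GRANT SELECT, INSERT, UPDATE, DELETE ON company_data.* TO 'colleague1'@'%';
    -- '%' allows connections from any host; you can restrict it to a
    -- specific IP or hostname if you prefer

Each colleague then creates a connection in MySQL Workbench pointing at your server's hostname or IP, using the account created for them.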
Addressing your comments, and clarifying more details:
MySQL is a database server. When you install it on a computer, all the data is stored on that computer (aside from replication and other fine details). You should make regular backups of your data (MySQL has tools for that; one is mysqldump). If you want to access the data stored on your database server, you can do it:
By using the command-line client,
By using MySQL workbench or another GUI client program, or
By any program that can connect to the database server (via ODBC or specific libraries).
Focusing on MySQL Workbench, and addressing your specific question: if your machine breaks down, you can install MySQL Server on any other machine and load the backup into it. You will have to configure that new machine so that any of your coworkers can connect to it (which may mean creating a new set of connection parameters).
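As a rough illustration of that backup-and-restore step (the database name and file name are placeholders), mysqldump writes the whole database to a plain SQL file that can then be loaded on the replacement machine:

    # on the old machine (or as a scheduled backup): dump the database
    mysqldump -u root -p company_data > company_data_backup.sql

    # on the new machine: create the database and load the dump
    mysql -u root -p -e "CREATE DATABASE company_data"
    mysql -u root -p company_data < company_data_backup.sql

After that, recreate the user accounts and give the new hostname/IP to your coworkers so they can update their connections.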
It's one of those cases where you have a desktop application whose database is on a remote server. In my case it's MySQL, and the application is written in Delphi XE3. When the client wants his data available both offline and online (for speed and security), we need to:
Log in using the remote server's information (the more up-to-date copy);
Sync the online database down to the offline copy;
Do the work in the application against the local database;
Sync the offline database back to the online one.
My question: is there a standard way to do this, via MySQL itself or some other automatic mechanism, or am I going to have to code all the rules myself?
Luckily there is no need for code here.
Replication has been built into MySQL for many years.
The trick is to set up the remote host as the master and the local copy as the slave.
All updates go to the master,
and the slave reads them from the remote.
The documentation is here: http://dev.mysql.com/doc/refman/5.7/en/replication.html
Here's a tutorial: http://www.howtoforge.com/mysql_master_master_replication
Note that there can really be only one master; otherwise the setup will get too complicated to be workable.
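As a very rough sketch of what the setup involves (server IDs, account names, host name and log coordinates below are placeholders; the manual linked above is the authoritative reference):

    # my.cnf on the remote host (master)
    [mysqld]
    server-id = 1
    log-bin   = mysql-bin

    # my.cnf on the local copy (slave)
    [mysqld]
    server-id = 2

    -- on the master: create an account the slave can replicate with
    CREATE USER 'repl'@'%' IDENTIFIED BY 'repl_password';
    GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

    -- on the slave: point it at the master and start replicating
    -- (take MASTER_LOG_FILE / MASTER_LOG_POS from SHOW MASTER STATUS on the master)
    CHANGE MASTER TO
      MASTER_HOST = 'remote.example.com',
      MASTER_USER = 'repl',
      MASTER_PASSWORD = 'repl_password',
      MASTER_LOG_FILE = 'mysql-bin.000001',
      MASTER_LOG_POS = 4;
    START SLAVE;

Remember that in this arrangement the application must send its writes to the master (the remote host); writes made directly on the slave will not flow back.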
You could look at client datasets (since you need to update the local copy, and I do not believe MySQL allows multiple masters). Basically, your application connects to MySQL when it is online, and if it goes offline you store the data and the pending changes in a local XML file. Once back online, you apply the updates. The downsides of client datasets are: no SQL locally, and your local changes might conflict with changes made by other users, so the apply-updates step must include logic to reconcile those conflicts. Also, client datasets are quite involved; I'm still trying to get my head around them myself.
I'm developing an app which will have a central database users can add entries to. The database will have to be on a server somewhere, but I want the users to be able to add entries offline. The app will sync to the main database when a connection is available. So I suppose I need two databases: the main one sitting on a server (preferably Linux), and a small one on each client machine to use as a buffer when offline. The app will be coded in C# for Windows. I'm having trouble deciding which databases to use and whether I can leverage any replication technology to make this easier. Also, I don't want to pay for anything ;) So I guess my questions are...
Will I have any trouble writing code in ADO.NET to move data from something like SQL Compact Edition to MySQL?
Are there any replication solutions which will move stuff from the local to the main database for me?
I've recently discovered IBM's DB2 Express-C, but I'm not sure whether it is serverless as well as server-installed. Does anyone know?
Firebird can be server or serverless. Can I replicate between them? Is the server mode capable of heavy use?
"Firebird can be server or serverless. Can I replicate between them?"
Yes.
"Is the server mode capable of heavy use?"
Define 'heavy use'. I've had production systems with 200 simultaneous users pumping 20 transactions/minute each on databases in the 10-20 GB range. I'm sure there are many larger deployments out there.
Also, what you describe seems like the 'briefcase model'. You should look into it if you haven't already done so. Maybe the solution is not replication at the database level, but rather a smarter fat client.
Just answering two of your questions; I don't know about DB2 or Firebird.
Will I have any trouble writing code in ADO.NET to move data from something like SQL Compact Edition to MySQL?
That should be very trivial; install MySQL Connector/NET and you're good to go.
Are there any replication solutions which will move stuff from the local to the main database for me?
SQL Server replication is made for this, but I don't suppose it would work with MySQL.