Query handling when there is no connection to the database - MySQL

Suppose we have a situation in which many clients run the same Windows application and use the same database, but network connectivity in that region is poor, so the application won't be able to reach the database server all the time. Can we store SQL queries during this time and then execute them later?
And how would we maintain data consistency across all the clients in this situation?

Can we store SQL queries during this time and then execute them later?
You could, but it might not be the best way. I recommend solving this not in the data access layer, but rather in the business logic. So instead of storing an SQL statement to be executed later, I would rather store objects that represent the business action that should be performed.
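A loose sketch of that idea, assuming a local SQLite file as the offline queue; the BusinessAction class, the pending_actions.db file, and the apply callback are all illustrative names, not part of any particular framework:

```python
# A loose sketch of storing business actions instead of SQL. The
# BusinessAction class, the local pending_actions.db queue, and the
# apply callback are illustrative assumptions, not a real framework.
import json
import sqlite3
from dataclasses import asdict, dataclass

@dataclass
class BusinessAction:
    kind: str        # e.g. "create_order"
    payload: dict    # the data the action needs

LOCAL_QUEUE = sqlite3.connect("pending_actions.db")
LOCAL_QUEUE.execute(
    "CREATE TABLE IF NOT EXISTS pending (id INTEGER PRIMARY KEY, body TEXT)")

def enqueue(action: BusinessAction) -> None:
    """Persist the action locally while the server is unreachable."""
    LOCAL_QUEUE.execute("INSERT INTO pending (body) VALUES (?)",
                        (json.dumps(asdict(action)),))
    LOCAL_QUEUE.commit()

def replay(apply) -> None:
    """Once connectivity returns, apply each action in order, then drop it."""
    rows = LOCAL_QUEUE.execute(
        "SELECT id, body FROM pending ORDER BY id").fetchall()
    for row_id, body in rows:
        apply(BusinessAction(**json.loads(body)))  # becomes SQL on the server
        LOCAL_QUEUE.execute("DELETE FROM pending WHERE id = ?", (row_id,))
        LOCAL_QUEUE.commit()
```

Replaying semantic actions rather than raw SQL also helps with the consistency question: when an action finally reaches the server, the server can validate it against the current state and reject or merge conflicting ones, which is much harder to do with opaque SQL strings.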

Related

Replication from MySQL to SQL Server

I have a system into which data is written constantly. It runs on MySQL. I also have a second system that runs on SQL Server and uses some parameters from the first database.
Question: how is it possible (is it even possible) to continuously transfer values from one database (MySQL) to the other (SQL Server)? Consolidating onto a single database is not an option. As I understand it, it would be necessary to write a program, for example in Delphi, which would transfer values from one database to the other.
You have a number of options.
SQL Server can access another database using ODBC, so you could set up SQL Server (via a linked server) to obtain the information it needs directly from tables held in MySQL.
MySQL supports replication using log files, so you could configure MySQL replication (which does not have to cover all tables) to write the relevant transactions to a log file. You would then need to process that log file (which you could do in (almost) real time, as standard MySQL replication does) to identify what needs to be written to the MS SQL Server. Typically this would produce a set of statements to run against the MS SQL Server; you can use any number of languages to process the log file and issue the updates.
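As a rough illustration of that log-file option, here is a Python sketch using the third-party python-mysql-replication package; it assumes binlog_format=ROW on the MySQL side, and the parameters table, the credentials, and the forward_to_mssql() handoff are placeholders, not a real schema:

```python
# Sketch: tail the MySQL binlog and turn row changes into statements
# for SQL Server, via the third-party python-mysql-replication package
# (assumed installed). Table names and credentials are illustrative.
from pymysqlreplication import BinLogStreamReader
from pymysqlreplication.row_event import UpdateRowsEvent, WriteRowsEvent

def forward_to_mssql(sql: str, params: tuple) -> None:
    """Placeholder: run the statement against SQL Server, e.g. via pyodbc."""
    print(sql, params)

stream = BinLogStreamReader(
    connection_settings={"host": "mysql-host", "port": 3306,
                         "user": "repl", "passwd": "secret"},
    server_id=100,                      # must be unique among replicas
    only_events=[WriteRowsEvent, UpdateRowsEvent],
    only_tables=["parameters"],         # replicate just the relevant table
    blocking=True,                      # keep tailing in (almost) real time
)

for event in stream:
    for row in event.rows:
        if isinstance(event, WriteRowsEvent):
            v = row["values"]
            forward_to_mssql(
                "INSERT INTO parameters (name, value) VALUES (?, ?)",
                (v["name"], v["value"]))
        else:  # UpdateRowsEvent
            after = row["after_values"]
            forward_to_mssql(
                "UPDATE parameters SET value = ? WHERE name = ?",
                (after["value"], after["name"]))
```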
You could instead have a scheduled task that reads the required parameters from MySQL and posts them to MS SQL, but this would leave a window of time during which the two may not be in sync. Given that parsing log files and posting the updates may turn out to be troublesome, you may still want to implement this as a fallback even if you do process log files.
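A minimal sketch of that scheduled-task fallback, assuming pymysql and pyodbc are available; the table, columns, and DSN are placeholders:

```python
# Scheduled one-shot sync: copy the required parameters from MySQL to
# SQL Server. pymysql and pyodbc are assumed installed; the parameters
# table and the DSN are placeholders.
import pymysql
import pyodbc

def sync_parameters() -> None:
    src = pymysql.connect(host="mysql-host", user="reader",
                          password="secret", database="app")
    dst = pyodbc.connect("DSN=mssql_target;UID=writer;PWD=secret")
    try:
        with src.cursor() as cur:
            cur.execute("SELECT name, value FROM parameters")
            rows = cur.fetchall()
        dcur = dst.cursor()
        for name, value in rows:
            # update-then-insert keeps the copy idempotent between runs
            dcur.execute(
                "UPDATE parameters SET value = ? WHERE name = ?", value, name)
            if dcur.rowcount == 0:
                dcur.execute(
                    "INSERT INTO parameters (name, value) VALUES (?, ?)",
                    name, value)
        dst.commit()
    finally:
        src.close()
        dst.close()

# Run from cron / Task Scheduler every N minutes, accepting that the
# two databases can drift by up to one interval.
```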
If the SQL Server and the MySQL server are on the same network, the external-tables method is likely to be the simplest and lowest-maintenance option, but depending on the amount of data involved, the overhead of the external connection and queries could affect the overall performance of queries made against the MS SQL Server.

MySQL frequent queries and persistent connections

I am going to be hosting a gaming website for a bunch of card games that are pretty popular in my country. Since there is really no other convenient way to do this apart from WebSockets, that's the path I'm going to take. Anyhow, there are a few concerns I have.
I plan on having multiple servers for each type of game, and each server is going to host about 100-200 people. That said, players need to be able to see information about a server before joining, such as how many players are connected and what the average wait time is. To do this I could use either files or a database. I would much rather go with the database, but I'd like to ask a few questions about MySQL:
I know that MySQL was not built for real-time applications, but what is an acceptable interval for each server to update its status in the database?
Are there any problems I may run into when keeping a persistent connection from each server to the MySQL server?
Are there any benefits to preparing the statement that updates the database once and then executing it every N seconds, or should I prepare it each time (a sketch follows below)? I am asking because I don't know exactly what happens when a statement is prepared, so keeping a prepared statement on a persistent connection may not be a good idea.
Using InnoDB, is there any need to create a separate MySQL server solely for this purpose, or could I use the server that already serves the site? I'm not really sure whether those updates every N seconds would affect anything.
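For reference on the prepared-statement question, here is a rough sketch of what one game server's status loop might look like with mysql-connector-python; the server_status table, the interval, and the stat functions are placeholder assumptions:

```python
# A rough sketch of one game server's status loop, using
# mysql-connector-python (assumed). The server_status table, SERVER_ID,
# and the stat functions are placeholders for this server's real state.
import time
import mysql.connector

SERVER_ID = 1
N_SECONDS = 5

def current_players() -> int:
    return 0    # placeholder: count of connected websocket clients

def average_wait() -> float:
    return 0.0  # placeholder: measured matchmaking wait time

cnx = mysql.connector.connect(host="db-host", user="game",
                              password="secret", database="lobby")
cur = cnx.cursor(prepared=True)  # parsed by the server once, reused below

UPDATE_STATUS = (
    "UPDATE server_status "
    "SET players = %s, avg_wait_seconds = %s, updated_at = NOW() "
    "WHERE server_id = %s"
)

while True:
    cur.execute(UPDATE_STATUS, (current_players(), average_wait(), SERVER_ID))
    cnx.commit()
    time.sleep(N_SECONDS)
```

With prepared=True the statement is sent to the server once and only the parameter values travel on each execute; the main thing to guard against on a long-lived connection is the server's wait_timeout dropping an idle connection, so a production loop should be ready to reconnect.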

MySQL with Node.js: Does it make sense to have Node.js save/load stuff to/from the database all the time?

So I have a small game in Node.js (only the server, of course) which has map data and player accounts stored in a MySQL database. Right now I have constructed it in a way that minimizes the number of queries: data is loaded from the database, kept in JavaScript objects/arrays or whatever seems appropriate, and only written back to the database when needed.
Now I was thinking: is this really worth it? In many cases it would be a lot better (the data would be safer and far more up to date) to store hardly any data in the server and just load it from the database when needed (and write it back whenever it changes).
My question is: is it efficient/safe/recommendable to have the server read from and write to the database often, rather than keeping data from the database in JavaScript variables on the server?
Additional info:
- The Node.js server and my MySQL server are on the same machine, and a query usually takes less than 1 ms, or maybe 3 ms for big queries like loading room data.
- I am using a module simply called mysql.
- If needed I will include extra info; just ask in a comment.
It really depends on your use case. Generally speaking, I would not add another layer of caching in Node.js, but would rather handle that in your DB with a bigger cache and optimized queries.
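To illustrate the "keep the database authoritative" shape of that advice (sketched in Python for brevity; the structure is the same with the node mysql module), assuming pymysql and a hypothetical players table:

```python
# Keep the database authoritative: read rows on demand and write
# changes immediately instead of batching them in process memory.
# pymysql and the players table are assumptions for illustration.
import pymysql

conn = pymysql.connect(host="localhost", user="game", password="secret",
                       database="game", autocommit=True)

def load_player(player_id: int) -> dict:
    """Read-through: always fetch the current row, never a stale copy."""
    with conn.cursor(pymysql.cursors.DictCursor) as cur:
        cur.execute("SELECT * FROM players WHERE id = %s", (player_id,))
        return cur.fetchone()

def save_position(player_id: int, x: int, y: int) -> None:
    """Write immediately, so a crash loses at most the in-flight update."""
    with conn.cursor() as cur:
        cur.execute("UPDATE players SET x = %s, y = %s WHERE id = %s",
                    (x, y, player_id))
```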

Strategy for exporting data from one database to a standard set of DB vendors

I have an application using MySQL as the DBMS, but many of our clients want the data to be kept up to date in their existing databases, which run on different DBMSs. So we have to export data from our DBMS to theirs on a scheduled basis, and every client has their own DBMS.
What would be the best strategy for designing a framework such that the data available in our DBMS can be exported to each client's DBMS, given that each client may use a different one?
PS: Sorry for not being elaborate.
Say, for example, a client needs only a couple of columns from our DBMS updated into their DBMS, and those columns differ from client to client. So the clients won't necessarily share the same architecture, but we can design a framework with which most DBMSs will work, and perhaps the rest will work with minimal changes.
If you can run some code on your client's server, then write a script on their server which reads from your DB and inserts into the client's DB. Make sure you can trigger this script from your server.
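A hedged sketch of such a per-client script, using SQLAlchemy so that only the connection URL has to change per vendor; the URLs, the products table, and the column list are placeholders, not a real schema:

```python
# Pull the agreed columns from our MySQL database and refresh them in
# whatever DBMS the client runs. With SQLAlchemy, only the connection
# URL differs per vendor; everything here is a placeholder schema.
from sqlalchemy import create_engine, text

SOURCE_URL = "mysql+pymysql://export:secret@our-host/app"
# Per-client target; could equally be postgresql://, mssql+pyodbc://, ...
TARGET_URL = "postgresql+psycopg2://client:secret@client-host/theirdb"

def export_once(columns=("id", "price", "updated_at")) -> None:
    src = create_engine(SOURCE_URL)
    dst = create_engine(TARGET_URL)
    cols = ", ".join(columns)
    with src.connect() as s, dst.begin() as d:
        rows = s.execute(text(f"SELECT {cols} FROM products")).fetchall()
        d.execute(text("DELETE FROM products_mirror"))  # simple full refresh
        for row in rows:
            d.execute(
                text("INSERT INTO products_mirror (id, price, updated_at) "
                     "VALUES (:id, :price, :updated_at)"),
                dict(zip(columns, row)))
```

The design choice here is to confine the vendor differences to the connection URL and driver, so onboarding a new client's DBMS is mostly configuration rather than new code.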

SQLAlchemy MySQL Caching?

I am developing a heavily used financial MySQL database (Django + SQLAlchemy) which is queried and updated constantly. My DB contains a lot of date-value pairs. I keep loading more and more data as time progresses, but historical values don't change, which is why I think caching could really improve performance for me.
Is Beaker really my best option, or should I implement my own caching over Redis? I would love to hear some ideas for caching architectures - thanks!
The MySQL query cache stores the text of a SELECT statement together with the corresponding result that was sent to the client.
If an identical statement is received later, the server retrieves the results from the query cache rather than parsing and executing the statement again. The query cache is shared among sessions, so a result set generated by one client can be sent in response to the same query issued by another client.
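Note that the query cache was removed entirely in MySQL 8.0, so an application-side cache is worth considering regardless. Since the historical date-value pairs are immutable, a hand-rolled read-through cache over Redis never goes stale; this sketch assumes the redis package, a placeholder connection URL, and a hypothetical date_values table:

```python
# A hand-rolled read-through cache over Redis; because historical
# date-value pairs never change, cached entries never go stale. The
# redis package, the URLs, and the date_values table are assumptions.
import json
import redis
from sqlalchemy import create_engine, text

engine = create_engine("mysql+pymysql://app:secret@localhost/finance")
cache = redis.Redis(host="localhost", port=6379, db=0)

def get_value(series: str, day: str):
    key = f"dv:{series}:{day}"
    hit = cache.get(key)
    if hit is not None:
        return json.loads(hit)           # served from Redis, no SQL at all
    with engine.connect() as conn:
        row = conn.execute(
            text("SELECT value FROM date_values "
                 "WHERE series = :s AND day = :d"),
            {"s": series, "d": day}).fetchone()
    if row is not None:
        # Historical values are immutable, so no TTL is needed; add one
        # if that assumption ever weakens.
        cache.set(key, json.dumps(float(row[0])))
        return float(row[0])
    return None
```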