I am looking for a solution for automatic database failover in a Grails application. I have two databases residing on two different sites connected over a WAN. On Site A we have Database 01, which is the primary database for the application on the same site (Application 1), while it is treated as the secondary database for the application on the other site (Application 2), and vice versa.
Please refer to the image below for more details:
So, according to the design described above:
The application deployed on Site A should primarily use the database at Site A
If the database at Site A is somehow not accessible, then the application deployed at Site A should use the database at Site B (as a secondary database)
Once the database at Site A comes back up, the application deployed at Site A should again start using the database at Site A (as the primary database)
In the same way, it should work for the application at Site B: the database at Site B should be used primarily (as the primary database) and the database at Site A as the secondary database.
Note: when the database at the corresponding site comes back up, the application at that site should again start using it, to avoid unnecessary communication over the WAN.
Does Grails provide any support for specifying two data sources, one as primary and another as secondary (the latter used only when the primary isn't accessible)?
If there is no such mechanism, then we would need to continually check whether the primary data source is up and running and switch to the secondary one otherwise, which seems inappropriate; I would rather not go down that route.
The real answer is not to handle this in your code at all, but to implement a load balancer that routes traffic accordingly. While you could do this in your own code (in theory), whatever solution you come up with will fall far short of what an actual load balancer is capable of when it comes to failover.
To answer your question, Grails does not have anything that will address this concern. This concern should be addressed outside of your application and instead be part of your network/infrastructure topology design/implementation.
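One partial mitigation worth knowing about, if both sites run MySQL and are kept in sync by replication: MySQL Connector/J lets you list a failover host directly in the JDBC URL. Below is a sketch of what that could look like in a Grails DataSource.groovy; the host names and credentials are illustrative, and this is not a substitute for a proper load balancer (in particular, failing back to the primary is governed by driver settings, not by your application logic).

```groovy
// grails-app/conf/DataSource.groovy -- sketch only; host names are made up
dataSource {
    pooled = true
    driverClassName = "com.mysql.jdbc.Driver"
    // Connector/J tries the hosts from left to right; failOverReadOnly=false
    // keeps the connection writable after failing over to the second host
    url = "jdbc:mysql://db-site-a:3306,db-site-b:3306/appdb?failOverReadOnly=false"
    username = "app"
    password = "secret"
}
```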
The question is, how to update a constant? This sounds like a stupid question, but let's look at the background of my issue:
Background
I manage a network of servers, which includes a MySQL server, multiple HTTP servers, and a Minecraft server (a self-hosted server that gamers who have installed Minecraft can connect to and play together). All of the user-end services (HTTP servers, Minecraft server, user apps) are directly or indirectly related to the MySQL server. The MySQL database stores different data for each player account, for example, the online/offline status of players, etc.
In programming, constants are used to create a general reference to a value that will not change during a runtime, especially for software-internal identifiers such as data flags, bitmasks, etc. In my case, I also use constants to store specific data, such as the MySQL server's address and other credentials. So when I want to change the server address, I only need to modify it in one place, for example an internal constants.php on the server.
Problem
When I migrate my MySQL database to another host or change its password, I have to update the details on every server. It is not possible to create a centralized data provider that serves the server address, because the MySQL server itself is the centralized data provider. That means every time I change a value, I must update all servers. I must also maintain a very private, local list (probably written on a memo stuck to my computer!) of all these places, because it is really hard to locate all the references. So, my question is: is there a better way of managing this that allows me to change the values in one place? Note that the servers are on different hosts, so it is not possible to put the values in a local file; and it doesn't sound reasonable to create a centralized data provider (call it a password provider) to provide access to the real centralized data provider (MySQL) either, since whenever I need to change the MySQL database details, I will have the same need to change the password provider's details.
This is less of a concern, but since it is a similar question, I am including it here too. I use integer bitmasks to store player ranks. For example, if a player is a VIP he has the 0x01 flag, if he is a moderator he has the 0x10 flag, 0x11 if he is both, and so on. I want to refactor the bitmask values as well, but that would be great trouble, because I would need to shut down all servers, update the MySQL values, update the constants on every server, then restart everything, to avoid potential security vulnerabilities during the update window. Is there a more convenient way to do that?
This is a network management question too, but I consider it more programming-related.
For the constants problem, you need a deployment system. For example, Capistrano: https://github.com/capistrano/capistrano. Commit constants.php to git and create a Capistrano task that deploys this file to each server. I use this tool for deploying projects that are among the 50 busiest sites of the Russian segment of the Internet :)
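As a sketch, a Capistrano 3 task that pushes constants.php to every server could look like the following (the file path and task name are illustrative, not from the original answer):

```ruby
# lib/capistrano/tasks/constants.rake -- illustrative sketch
namespace :deploy do
  desc "Upload constants.php to all servers"
  task :upload_constants do
    on roles(:all) do
      # upload! comes from SSHKit, which Capistrano is built on
      upload! "config/constants.php", "#{shared_path}/constants.php"
    end
  end
end
```

Then a single `cap production deploy:upload_constants` pushes the updated credentials everywhere, so the memo list of servers lives in the Capistrano config instead of on your monitor.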
For the bitmask refactoring, this is a data migration, and there are several ways to do it, some with downtime and some without (it depends on the situation).
Data migration without downtime:
modify your app so it understands both the old and the new variant of the player bitmask
deploy the modified app
update the bitmasks in your databases
modify your app so it understands only the new variant of the bitmask
deploy the modified app
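The first step above (understanding both variants at once) can be sketched as a small translation function. The flag values here are hypothetical, assuming you are moving the moderator flag from 0x10 down to 0x02:

```python
# Hypothetical flag layouts -- the real values live in constants.php.
OLD_VIP, OLD_MOD = 0x01, 0x10   # old layout, as in the question
NEW_VIP, NEW_MOD = 0x01, 0x02   # new, contiguous layout

def normalize_rank(mask: int) -> int:
    """Translate an old-layout rank bitmask to the new layout.

    Masks already in the new layout pass through unchanged, so the app
    can read both variants during the migration window.
    """
    if mask & OLD_MOD:           # only old-layout masks carry the 0x10 bit
        new = NEW_MOD
        if mask & OLD_VIP:
            new |= NEW_VIP
        return new
    return mask                  # 0x01 (VIP alone) means the same in both layouts
```

Once every stored mask has been rewritten, the old constants and this shim can be deleted in the final deploy.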
Suppose I want to keep millions of recharge codes in a table in a separate database (named A). I want to design another database (named B) which will be used by a web application.
I want to keep database A separate and as secure as possible, preferably not exposed to the network, so that nobody can access or hack the huge amount of sensitive data.
But I also have to populate one table of database B with codes from the table in database A, as needed or as requested by the web application.
I am using MySQL as the database and Apache Tomcat as the web server.
Can you please suggest the best and most secure way of designing the databases, keeping in mind that:
1) the safety of the codes in database A is the top priority;
2) the tables will contain millions of rows, so quick response times are also a requirement.
I'm adding this as an answer because it is too long for a comment.
I think this is more about app design and layering than about the database design as such. In terms of DB design, you just need the tables to have indexes covering all the keys you will query; database access will be sub-second.
In terms of app design, I suppose your app will know when to look at table-B and when it has to retrieve from table-A.
So, the key issue is: how to access A. The simplest way would be for the app to connect to A, and read it via SQL. The problem with this is that a hacker who is on your app server could then see your connection details. You could try to obscure the connection details from app-server to A. This would be "security through obscurity" and would be something, but would not stop a good hacker.
If you're serious about control, you could have an app running on A. You can block all ways for apps from outside A to access the database on A, leaving the app on A as the sole point of access.
By its very uniqueness, the app could provide another level of obscurity. For instance, the app could insist on knowing the customer ID for whom the code is being requested, and could check this against some info (even on B). But there are better reasons to use one...
The app on A could
impose controls: e.g. only 1000 codes given out per hour
send alerts: e.g. email an operator if more than 500 codes have been requested in the hour
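Sketched in Python, the controls above could look like the class below. The class, limits, and callbacks are hypothetical; the real gatekeeper would run on A and fetch codes via SQL:

```python
import time
from collections import deque

ISSUE_LIMIT_PER_HOUR = 1000   # hard cap from the first bullet
ALERT_THRESHOLD = 500         # email an operator past this point

class CodeDispenser:
    """Gatekeeper app on host A: the sole path to the codes table."""

    def __init__(self, fetch_code, send_alert):
        self._fetch_code = fetch_code  # e.g. a SELECT against database A
        self._send_alert = send_alert  # e.g. an email to the operator
        self._issued = deque()         # timestamps of recent issues
        self._alerted = False

    def request_code(self, customer_id):
        now = time.time()
        # Forget issues older than an hour.
        while self._issued and now - self._issued[0] > 3600:
            self._issued.popleft()
        if len(self._issued) >= ISSUE_LIMIT_PER_HOUR:
            raise RuntimeError("hourly code limit reached")
        if len(self._issued) >= ALERT_THRESHOLD and not self._alerted:
            self._send_alert(f"{len(self._issued)} codes issued this hour")
            self._alerted = True
        self._issued.append(now)
        return self._fetch_code(customer_id)
```

The point is that throttling and alerting live in one process that you fully control, rather than being scattered across web-app code on B.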
I have been doing some research into servers for a website I want to launch. I have in mind a server configuration with RAID 10, backed up by a NAS which also uses RAID 10. This should keep the data safe in 99.99%+ of cases.
My problem appeared when I thought about needing a second server. If I ever require more processing power, and thus more storage for users, how can I connect a second server to my primary one and make them act as one as far as the database (MySQL) is concerned?
I mean, I don't want to replicate my first DB on the second server and load-balance the requests; I want to use just one DB (maybe external) and let both servers use it at the same time. Is this possible? And is backing up MySQL data on a NAS a viable option?
The most common configuration (once scaling up from a single box) is to put the database on its own server. In many web applications, the database is the bottleneck (rather than the web server); so the first hardware scale-up step tends to be to put the DB on its own server.
This also allows you to put additional security between the database and web server - firewalls are common; different user accounts etc. are pretty much standard.
You can then add web servers to the load balancer, all talking to the same database, as long as your database can keep up.
Having more than one web server also helps with resilience - you can have a catastrophic hardware event on one webserver and the load balancer will direct the traffic to the remaining machines.
Scaling the database server performance is a whole different story - though typically you use very beefy machines for the database, and relative lightweights for the web servers.
To add resilience to the database layer, you can introduce clustering - this is a fairly complex thing to keep running, but protects you against catastrophic failure of a single machine.
Yes, you can back up MySQL to a NAS.
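For example, assuming the NAS is mounted at /mnt/nas (the paths and user name are illustrative), a consistent dump of InnoDB tables can be taken without locking the site:

```
# --single-transaction gives a consistent InnoDB snapshot without table locks
mysqldump --single-transaction --all-databases -u backup -p \
  | gzip > /mnt/nas/mysql-backup-$(date +%F).sql.gz
```

Dumping first and writing the compressed result to the NAS avoids the problems of copying live database files directly.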
So, we want to move away from AIR (Adobe is dropping support, and the SQLite API implementation is really bad, among other things).
I want to do 3 things:
Connect with a flash (not web) application to a local MySQL database.
Connect with a flash (not web) application to a remote MySQL database.
Connect with a flash (web) application to a remote MySQL database.
All of this can be done without any problem, however:
1 and 2 can be done (WITHOUT using a webserver) using for example this:
http://code.google.com/p/assql/
3 can also be done using the above, as far as I understand.
The questions are:
If you can connect to the MySQL server with a socket, why use a web server (for example with PHP) as an intermediate connection? Why not connect directly?
I have done this a lot of times, using AMFPHP for example, but wouldn't going directly be faster?
In the case of accessing the local machine, deployment would be simpler: it would only require the flash application + MySQL server, with no need to also install a web server.
Is this assumption correct?
Thanks a lot in advance.
The necessity of a separate data-access layer usually stems from the way people build applications: the layered architecture, the distribution of workload, etc. SQL servers usually don't provide a very robust API for user management, session management and the like, so one uses an intermediate layer between the database and the client application so that that layer can handle the issues not related directly to storing the data. Security plays a significant role here too. There are other concerns as well: for example, sometimes you would like to close all access to the database for maintenance, but if you don't have an intermediate layer to notify users about your intention, you'd leave them wondering whether your application is still alive. The data-access layer can also do a lot of caching, actually saving you trips to the database that you would otherwise have to make from the client (of course, the client can do some caching too, but ymmv).
However, in some simple cases, an intermediate layer is overhead. More than that, I'd say that if you can, do without an intermediate layer: less code makes better programs. But chances are you will find yourself needing that layer for one reason or another.
Because connecting remotely over the internet poses huge, huge, huge security problems. You should never deploy an application that connects directly to a database over the internet. That's why AIR and Flex don't have remote MySQL drivers: they should never be used except for building development-type tools. And even if you did build a tool that could connect directly, any decent network admin is going to block access to the database from anywhere outside the DMZ and the internal network.
First, in order for your application to connect to the database, the database port has to be exposed to the world. That means I don't have to hack your application to get your data; I just need to hack your database, and I can cut your application out of the picture entirely because you were careless enough to leave your database port open to me.
Second, most databases don't encrypt credentials or data traveling over the wire. While most databases support SSL connections, most people don't turn it on, because applications want super-fast data access and don't want to pay the SSL encryption overhead, blah blah blah. Furthermore, most applications sit in the DMZ with their database behind a firewall, so it is unlikely that anything could eavesdrop on the conversation between the server and the database. However, if you connected directly from an AIR app to the database, it would be very easy for me to insert myself in the middle and watch the traffic coming out of your database, because you're not using SSL.
There are a whole host of problems around privacy and data integrity that you can't guarantee when you allow an RIA direct access to the database it's using.
Then there are some smaller nagging issues: modern features like publishing reports to a central server (so users don't have to install your software to see them), sending out email, social features, web-service integration, cloud storage, collaboration, and real-time messaging are all things you don't get if you don't use a web application. Middleware also gives you control over your database, so you can pool connections to handle larger loads. Using a web application brings more to the table than just security.
I have a website using cPanel on a dedicated account. I would like to be able to automatically sync the website to a second hosting company, or perhaps to a local (in-house) server.
Basically this is a type of replication. The website is database-driven (MySQL), so ideally it would sync everything (content, database, emails, etc.), but syncing the website files and the database is the most important part.
I'm not so much looking for a fail-over solution as an automatic replication solution, so if the primary site (Server) goes off-line, I can manually bring up the replicated site quickly.
I'm familiar with tools like unison and rsync, but most of these only sync files and do not cope well with open database connections.
Don't use one tool when two are better: use rsync for the files, but use MySQL replication for the database.
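A minimal replication setup is mostly two my.cnf fragments plus a couple of statements on the replica. The server IDs below are illustrative, and the statement names follow the classic MySQL 5.x master/slave syntax:

```
# master my.cnf
[mysqld]
server-id = 1
log-bin   = mysql-bin

# replica my.cnf
[mysqld]
server-id = 2
```

On the replica you would then run `CHANGE MASTER TO MASTER_HOST='primary.example.com', MASTER_USER='repl', MASTER_PASSWORD='...', MASTER_LOG_FILE='...', MASTER_LOG_POS=...;` followed by `START SLAVE;`, taking the log file and position from `SHOW MASTER STATUS` on the master (host and user here are made up). For the site files, something like `rsync -az /home/site/ backuphost:/home/site/` on a cron schedule covers the other half.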
If, for some reason, you don't want to use replication, you might want to consider using DRBD. This is of course only applicable if you're running Linux. DRBD is now part of the main kernel (since version 2.6.33).
And yes, I am aware of at least one large enterprise deployment of DRBD which is used, among other things, to store MySQL database files. In fact, the MySQL website even has a relevant page on this topic.
You might also want to Google for articles against the DRBD/MySQL combination; I remember reading a few posts on that.