Multiple read-only databases (backup connections) on Sails.js - MySQL

I have two read-only databases on Amazon AWS RDS. Since read-only databases can't be Multi-AZ, I need to manage them manually. My question is: is it possible for Sails to use my readonly1 database by default for each connection and, when that one fails to connect, start using the readonly2 database?
Thanks.

What @Sangharsh said, but also: Sails is great when your use case is fairly simple. When it isn't, it's better to use more elaborate methods of interfacing with the database (the mysql package on npm) and structure this into adapters/services/helpers.
On a simpler scope, if all you need is to read records and fall back to the alternative database when the connection fails, then you could have something like a MySQLService with a read() method that does exactly that.
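If it helps, here is a minimal, framework-agnostic sketch of that read-with-fallback idea, written in Go with database/sql purely to illustrate the pattern (the DSNs, query, and helper name are placeholders, not Sails APIs; in Sails you would wrap the same logic inside a service):

package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql" // MySQL driver
)

// readWithFallback queries the primary read replica first and falls back to
// the secondary replica if the primary query fails.
func readWithFallback(primary, secondary *sql.DB, query string, args ...interface{}) (*sql.Rows, error) {
	rows, err := primary.Query(query, args...)
	if err == nil {
		return rows, nil
	}
	log.Printf("primary replica failed (%v), falling back to secondary", err)
	return secondary.Query(query, args...)
}

func main() {
	// Placeholder DSNs for the two read-only replicas.
	readonly1, err := sql.Open("mysql", "user:pass@tcp(readonly1.example.com:3306)/app")
	if err != nil {
		log.Fatal(err)
	}
	readonly2, err := sql.Open("mysql", "user:pass@tcp(readonly2.example.com:3306)/app")
	if err != nil {
		log.Fatal(err)
	}
	rows, err := readWithFallback(readonly1, readonly2, "SELECT id, name FROM users")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
}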
That being said, you should also do some research into load balancing the servers in AWS, as you can do this there without any Sails-related code :-)

Related

What options do we have to build our own MySQL private cloud?

My company has a requirement to build a private DB cloud service for internal use.
Requirements are:
Users can easily request a new MySQL instance and terminate it.
Each MySQL instance is isolated from the others.
One solution we have is simply to create a different user and schema for each user, similar to what cPanel does.
But I wonder, is there a better option available?
Honestly, I don't think putting everybody on a single big MySQL instance is a good idea.
First, we can't do much about resource management. And I'm afraid a problem with the database (not being able to boot it up, for example) is going to affect everybody.
To minimize the risk of a single point of failure, I am looking for something like Amazon RDS or Azure MySQL. What we want is very similar to that.
Does anybody know how they do that? Is there an open source or commercial version we can buy?
Thank you.
Without knowing the why, it is hard to give you a good answer to your question. If Amazon RDS or Azure MySQL looks like a good solution to you, I would suggest using those. Building such a service yourself and making sure it scales well will probably cost a lot more money.
I mean, sure, you could set up Kubernetes or HashiCorp Nomad and deploy containers there, but you would need to figure out how these tools work, how to run MySQL in a scalable fashion, and build some kind of UI to easily launch and stop MySQL instances.

How to connect to an arbitrary database using FaaS?

I just did some reading about serverless computing and FaaS. When using FaaS to access an arbitrary database, we need to establish and close a database connection each time. In, let's say, a Node application, we would usually establish the connection once and reuse it for multiple requests.
Correct?
I have a hosted MongoDB at mLab and am thinking about implementing a REST API with Google's Cloud Functions service. I don't know how to handle the database connection efficiently.
For sure, things will get clearer while coding and testing, but I would like to know my chances of success before spending a lot of time.
Thanks
Stefan
Serverless platforms reuse the underlying containers between distinct function invocations whenever possible. Hence you can set up a database connection pool in the global function scope and reuse it for subsequent invocations - as long as the container stays warm. GCP has a guide here using MySQL but I imagine the same applies to MongoDB.
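As a rough illustration of that pattern, here is a sketch in Go (Cloud Functions also supports Go HTTP functions; the DSN and query are placeholders, and the same idea applies to a MongoDB client): the pool lives at package scope, so warm invocations reuse it.

package function

import (
	"database/sql"
	"fmt"
	"net/http"

	_ "github.com/go-sql-driver/mysql"
)

// db is created once per container instance, not once per request.
// Warm invocations reuse it; only a cold start pays the connection cost again.
var db = mustOpen()

func mustOpen() *sql.DB {
	// Placeholder DSN; in practice this would come from an environment variable.
	d, err := sql.Open("mysql", "user:pass@tcp(db.example.com:3306)/app")
	if err != nil {
		panic(err)
	}
	d.SetMaxOpenConns(2) // keep the pool small, since many containers may run in parallel
	return d
}

// Handler is an HTTP-triggered function using the standard net/http signature.
func Handler(w http.ResponseWriter, r *http.Request) {
	var count int
	if err := db.QueryRow("SELECT COUNT(*) FROM users").Scan(&count); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	fmt.Fprintf(w, "users: %d\n", count)
}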

Microservices centralized database model

Currently we have a few microservices; each has its own database model and migrations, provided by the GORM Golang package. We also have a big old MySQL database, which goes against microservice principles, but we can't replace it. I'm afraid that when the number of microservices starts growing, we will get lost among the many database models. When I add a new column in a microservice, I just type service migrate in the terminal (because there is a CLI for run and migrate commands), and it refreshes the database.
What is the best practice to manage this? For example, if I have 1000 microservices, no one is going to run service migrate every time someone updates the models. I'm thinking about a centralized database service, where we just add a new column and it stores all the models with all the migrations. The only problem is: how will the services get to know about database model changes? This is how we store, for example, a user in a service:
// User is the GORM model for the users table (sql.NullString comes from database/sql).
type User struct {
	ID       uint           `gorm:"column:id;not null" sql:"AUTO_INCREMENT"`
	Name     string         `gorm:"column:name;not null" sql:"type:varchar(100)"`
	Username sql.NullString `gorm:"column:username;not null" sql:"type:varchar(255)"`
}

// TableName tells GORM which table this model maps to.
func (u *User) TableName() string {
	return "users"
}
Depending on your use cases, MySQL Cluster might be an option. The two-phase commit used by MySQL Cluster makes frequent writes impractical, but if write performance isn't a big issue, then I would expect MySQL Cluster to work out better than connection pooling or queuing hacks. Certainly worth considering.
If I'm understanding your question correctly, you're trying to still use one MySQL instance but with many microservices.
There are a couple of ways to make an SQL system work:
1. You could create a microservice type that handles data inserts/reads from the database and takes advantage of connection pooling, and have the rest of your services do all their data reads/writes through these services. This will definitely add a bit of extra latency to all your reads/writes and will likely be problematic at scale.
2. You could look for a multi-master SQL solution (e.g. CitusDB) that scales easily, so you can use a central schema for your database and just make sure to handle edge cases for data insertion (de-duping etc.).
3. You can use data-streaming architectures like Kafka or AWS Kinesis to transfer your data to your microservices and make sure they only deal with data through these streams. This way, you can decouple your database from your data.
The best way to approach it in my opinion is #3. This way, you won't have to think about your storage at the computation layer of your microservice architecture.
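For a sense of what #3 can look like, here is a rough Go sketch using the segmentio/kafka-go client: instead of writing to the shared database directly, a service publishes a change event, and a downstream consumer (or a single writer service) applies it to storage. The broker address, topic name, and event type are made up for illustration.

package main

import (
	"context"
	"encoding/json"
	"log"

	"github.com/segmentio/kafka-go"
)

// UserChanged is the event a service publishes instead of touching the shared DB directly.
type UserChanged struct {
	ID   uint   `json:"id"`
	Name string `json:"name"`
}

func main() {
	// Placeholder broker and topic.
	w := &kafka.Writer{
		Addr:     kafka.TCP("broker-1.example.com:9092"),
		Topic:    "user-changes",
		Balancer: &kafka.LeastBytes{},
	}
	defer w.Close()

	payload, err := json.Marshal(UserChanged{ID: 42, Name: "Alice"})
	if err != nil {
		log.Fatal(err)
	}
	// Keyed by user ID so all events for one user land in the same partition.
	err = w.WriteMessages(context.Background(),
		kafka.Message{Key: []byte("42"), Value: payload},
	)
	if err != nil {
		log.Fatal(err)
	}
}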
Not sure what service you're using for your microservices, but StdLib enforces a few conventions (e.g. only transferring data through HTTP) that help folks wrap their heads around it all. AWS Lambda also works very well with Kinesis as an event source to trigger the function, which could help with the #3 approach.
Disclaimer: I'm the founder of StdLib.
If I understand your question correctly it seems to me that there may be multiple ways to achieve this.
One solution is to have a schema version stored somewhere in the database that your microservices periodically check. When your database schema changes, you increase the schema version. If a service then notices that the database schema version is higher than its own current schema version, it can migrate the schema in code, which GORM allows.
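A bare-bones sketch of that idea with GORM v1 (the SchemaVersion table, the currentVersion constant, and the reuse of the User model from the question are assumptions for illustration):

package main

import (
	"log"

	"github.com/jinzhu/gorm"
)

// currentVersion is the schema version this service's models were written against.
const currentVersion = 4

// SchemaVersion is a small bookkeeping table shared by all services; its single
// row is bumped whenever the database schema changes.
type SchemaVersion struct {
	Version int `gorm:"column:version"`
}

// migrateIfNeeded compares the stored version with the service's own version and
// runs GORM's AutoMigrate for this service's models (e.g. the User model shown
// in the question) when the service is behind.
func migrateIfNeeded(db *gorm.DB) error {
	var sv SchemaVersion
	if err := db.First(&sv).Error; err != nil {
		return err
	}
	if sv.Version > currentVersion {
		log.Printf("database schema is at version %d, service at %d: migrating", sv.Version, currentVersion)
		return db.AutoMigrate(&User{}).Error
	}
	return nil
}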
Other options could depend on how you run your microservices. For example if you run them using some orchestration platform (e.g. Kubernetes) you could put the migration code somewhere to run when your service initializes. Then once you update the schema you can force a rolling refresh of your containers which would in turn trigger the migration.

MySQL connection pool with failover

Question 1:
I am using MySQL Connector/J to connect to MySQL. I am creating a connection for every request. I need to use a connection pool. Should I choose c3p0, or can I use the MysqlConnectionPool class provided by the connector library?
Question 2:
I may need to load balance / fail over between two MySQL database servers. I could use jdbc:mysql://host,host2/dbname to get failover automatically. I want to use a connection pool and failover in combination. How should I achieve this?
I'd recommend using c3p0 or something else. It'll integrate into a Java EE app server better, and it's database agnostic.
Your second question is a good deal more complicated. Load balancing is usually done with an appliance of some kind, like an F5 or ACE, that stands between the client and the load-balanced instances. Is that how you're doing it? How do you plan to keep the data in sync if you load balance between the two? If the connections aren't "sticky", you can expect to find INSERTed data in both instances.
Maybe this reference can help you get started:
http://www.howtoforge.com/loadbalanced_mysql_cluster_debian

EclipseLink JPA secondary server

Hoping someone can tell me if there is a way to provide automatic redundancy using JPA. We're currently using EclipseLink (but can change should another provider have a suitable solution), and we need to ensure that we switch to our backup database should our primary database become unavailable (since it's not located in the same building as our application). Thanks for your input.
The easiest way is to change the JDBC connection URL, as explained in the MySQL documentation. For example:
jdbc:mysql://master.server.com:3306,backup.server.com:3306/dbname
In this scenario, if master.server.com fails, the driver will redirect commands to backup.server.com. I strongly suggest you read the whole documentation, as there are a lot of properties that change the failover behaviour, in particular the section on High Availability and Clustering.