I want to create a stateless server, so that if any server goes down the load balancer can redirect the request to other servers. But if the session is created on one server and that server goes down, how do I persist it? I am using mysqlstore to persist my session in the database, but each server creates its own record in the database, so the session id is not shared across servers. I need a mechanism for making the servers stateless.
I'm guessing you're using express-session, since you didn't indicate otherwise.
You're on the right track with mysqlstore, but the way to make your servers truly stateless is to ditch express-session and instead encrypt the session data and store it in a client-side cookie. You can then decrypt the session data on each request and validate it against your database using a key carried in the cookie (or create a new session/cookie pair if none exists).
The most popular Node.js middleware for this is cookie-session. Great documentation there as well.
https://github.com/expressjs/cookie-session
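Here's a minimal sketch of that setup, assuming Express; the key strings and route are placeholders:

```javascript
// Stateless sessions: the session data lives in a signed client cookie,
// so any server behind the load balancer can read it.
const express = require('express');
const cookieSession = require('cookie-session');

const app = express();

app.use(cookieSession({
  name: 'session',
  // Share these keys across all servers; rotate them to invalidate old cookies.
  keys: ['key-currently-in-use', 'old-key-still-accepted'],
  maxAge: 24 * 60 * 60 * 1000 // 24 hours
}));

app.get('/', (req, res) => {
  req.session.views = (req.session.views || 0) + 1;
  res.send(`${req.session.views} views`);
});

app.listen(3000);
```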
As a side note, since it sounds like you're at a pretty scalable place right now with multiple servers, it's worth ditching express-session anyway. By default, express-session uses MemoryStore, which has a known issue with memory leaks. It's fine for smaller projects, but should probably be reconsidered for larger ones.
I am building an app that receives a bunch of static data that is read only. The user does not change the data, or send any data to the server. The app just gets the data and presents it to the user in various views.
For example, a parts list with part numbers and prices. This data is currently stored in MongoDB.
I have a few options for getting the data to the client. I could just use Meteor's publication system and have the client subscribe to the data it needs.
Or I could map all the data the client needs into one JSON file, save that JSON file to Amazon S3, and have the client make a simple GET request to grab the data.
If we wanted this app to scale to many, many users, would it be best not to use Meteor publications? Or would either method perform similarly? Using Meteor's publication system would be the easiest, but I'm worried that going down this route would lead to performance issues if a lot of clients request the data. If the performance of publishing and GET requests is about the same, I'd just stick with publications, as that's the easiest.
In this case Meteor will provide better performance. If your data flow is mostly server-to-client driven, then clients do not have to worry about polling the server and the server does not have to worry about handling requests.
Also, Meteor requires very few resources to send data to the client because the connection is persistent. Take an app like CodeFights, which is built on Meteor: it constantly has thousands of connections to and from it, and its performance is great.
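If you go the publication route, it's only a few lines. A minimal sketch, assuming a Parts collection holding the parts list:

```javascript
// Shared (client + server): the read-only collection.
const Parts = new Mongo.Collection('parts');

// Server: publish only the fields the views need.
Meteor.publish('parts', function () {
  return Parts.find({}, { fields: { partNumber: 1, price: 1 } });
});

// Client: subscribe once; the data is then available in the local cache.
Meteor.subscribe('parts');
const parts = Parts.find().fetch(); // call after the subscription is ready
```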
As a side note, if you are ready to serve your static data as a JSON file from a separate server (AWS S3), then it means you do not expect that data to be very big: it can be handled in a single file and loaded entirely into the client's memory.
In that case, you might even want to reconsider the need to perform any separate request (whether HTTP or Meteor Pub/Sub).
For instance, you could simply embed the data in your app bundle, or serve it through SSR / the Fast Render package.
Then, if you are really concerned about scalability, you might even reconsider the need to use Meteor, since you do not seem to need any client-server interactivity (no real need for Pub/Sub, no reactivity…). Once your prototype is ready, you could rework it as a separate, static SPA, so that you do not even need to serve it through Node / Meteor.
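For comparison, the S3 option from the question boils down to a single static GET; the bucket URL here is a placeholder:

```javascript
// Fetch the pre-built JSON file once and render from memory.
async function loadParts() {
  const res = await fetch('https://my-bucket.s3.amazonaws.com/parts.json');
  return res.json();
}
```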
I have a realtime HTML5 canvas game that runs off a node backend. Players are connected via Websocket (socket.io). The problem is sometimes I need to deploy new code (hotfixes for instance) and restart the server but I don't want to disconnect players.
My idea for this was to divide the websocket server and application server into separately deployable components and add a message queue in the middle to decouple the 2 components. That way if the application server was rebooting there would just be a short delay while the messages bunch up but nothing would be lost. Is this a good strategy? Is there an alternative?
It's very possible for websocket based applications to be restarted without the user noticing anything (that's the case for my chat server for example).
To make that possible, the solution isn't to have the websocket application isolated and never restarted. In fact, that would be very optimistic (are you sure you could ensure its API never changes?).
A solution is:
- to ensure the client reconnects if disconnected (this is standard if you use socket.io for websocketing; see the sketch after this list)
- to make the server ask the client for its id (or session id) on a client-initiated reconnection
- to persist the state of the application. This is usually done with a database. If your server has no state other than the queue between clients (which is a little unlikely), you might look for an existing persistent queue implementation or build your own over a fast local storage (redis comes to mind)
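Here's a minimal sketch of that flow with socket.io; the event names, sessionId handling, and redis client are assumptions, not a fixed API:

```javascript
// Client: socket.io reconnects automatically; re-identify on every (re)connect.
const socket = io('https://game.example.com');
socket.on('connect', () => {
  socket.emit('identify', { sessionId: localStorage.getItem('sessionId') });
});

// Server: restore the player's state from persistent storage on reconnection.
io.on('connection', (socket) => {
  socket.on('identify', async ({ sessionId }) => {
    const state = await redis.get(`session:${sessionId}`); // assumed redis client
    socket.emit('restore', JSON.parse(state || '{}'));
  });
});
```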
I am writing my first .NET MVC application and I am using the Code-First approach. I have recently learned how to configure two SQL Servers installations for High Availability using a Mirror Database and a Witness (not to be confused with Failover Clusters) to do the failover process. I think this would be a great time to practice both things by mounting my web app into a highly-available DB.
Now, from what I've learned (correct me if I'm wrong), in the mirror configuration the witness fails over to the secondary DB if the first one goes down... but your application will also need to change its connection string to reference the secondary server.
What is the best approach to have both addresses in the Web.config (or somewhere else) and choosing the right connection string?
I have zero experience with connecting to mirrored databases, so this is all hearsay! :)
The short of it is that you may not have to do anything special, as long as you pass along the FailoverPartner attribute in your connection string. The long of it is that you may need additional error handling to attempt a new connection, so the data provider will actually use the FailoverPartner name in the new connection.
There seems to be some good information with Connecting Clients to a Database Mirroring Session to get started. Have you had a chance to check that out?
If not, it's there under Making the Initial Connection, where they introduce the FailoverPartner attribute of the ConnectionString property.
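For reference, the relevant connection string keyword is Failover Partner; the server and database names below are placeholders:

```
Data Source=SqlPrimary;Failover Partner=SqlMirror;Initial Catalog=MyAppDb;Integrated Security=True
```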
Reconnecting to a Database Mirroring Session suggests that on any client disconnect due to failover, the client will need to trap this exception and be prepared to reconnect:
"The application must become aware of the error. Then, the application needs to close the failed connection and open a new connection using the same connection string attributes."
If the FailoverPartner attribute is available, this process should be relatively transparent to the client.
If the above doesn't work, then you might need to actually introduce some logic at the application tier to track who is the primary node, the failover node, and connection strings for each, and be prepared to persist that information somewhere - much like the data access provider should be doing for us (eyes wide open).
There is also this ServerFault post on database mirroring with SQL Server, which has additional reference information and might be of interest from an operational viewpoint.
Hopefully someone with actual experience will back up any of this!
This may be totally off base, but what if you had a load balancer between your web server and the database servers?
The load balancer would have both databases in its pool, using basic health-check techniques (e.g. ping).
Your configuration would then only need to point to the IP of the Load Balancer, and wouldn't need to change.
This is what these network devices are good for. It's not the job of the programming framework (ASP.NET) to make decisions on the health of servers.
I have a client software program used to launch alarms through a central server. At first it stored configuration data in registry entries, now in a configuration XML file. This configuration information consists of Alarm number, alarm group, hotkey combinations, and such.
This client connects to a server using a TCP socket, which it uses to communicate this configuration to the server. In the next generation of this program, I'm considering moving all configuration information to the server, which stores all of its information in a SQL database.
I envision using some form of web interface to communicate with the server and set up the clients, rather than the current method, which is to either configure the client software on the machine through a control panel, or on install to either push out an XML file or pass command-line parameters to the MSI. I'm thinking now the only information I would want to specify at install time would be the path to the server. Each workstation would be identified by computer name and configured through the server.
Are there any problems or potential drawbacks of this approach? The main goal is to centralize configuration and make it easier to make changes later, because our software is usually managed by one or two people at most.
Other than allowing the client to function offline (if such a possibility makes sense for your application), there doesn't appear to be any drawback to moving the configuration to a centralized location. Indeed, even with a centralized location, a feature can be added to the client to cache the last known configuration, for use when the client is offline.
If you implement a [centralized] database design, I suggest considering an Entity-Attribute-Value (EAV) structure for the configuration parameters, as this schema is particularly well suited to them. In particular, it allows easy addition and removal of individual parameters, and it handles parameters as a list (paving the way for a list-oriented display in the UI as well, so no UI changes are needed when new types of parameters are introduced).
Another reason why configuration parameter collections and EAV schemas work well together is that even with very many users and configuration points, the configuration data remains small enough that it doesn't suffer from some of the limitations of EAV with "big" tables.
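As an illustration, an EAV parameter table can be as simple as one row per (client, attribute, value) triple; the table and column names here are hypothetical, and the sketch uses Node's mysql2 just to show the shape of the read:

```javascript
// Read all parameters for one client into a plain object.
// Table (hypothetical): config_params(client_name, attribute, value)
const mysql = require('mysql2/promise');

async function loadConfig(clientName) {
  const conn = await mysql.createConnection({
    host: 'localhost', user: 'app', database: 'alarms' // placeholders
  });
  const [rows] = await conn.execute(
    'SELECT attribute, value FROM config_params WHERE client_name = ?',
    [clientName]
  );
  // One row per parameter: adding a new parameter type needs no schema change.
  return Object.fromEntries(rows.map(r => [r.attribute, r.value]));
}
```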
The only thing that comes to mind is the security of the information, though you probably have that issue in either case. A database would probably be easier to interface with, as everything would be in one spot.
I have a service that accepts callbacks from a provider.
Motivation: I do not want to EVER lose any callbacks (unless of course my network becomes unreachable).
Let's suppose the impossible happens and my MySQL server becomes unreachable for some time.
I want to fall back to a secondary persistence store once I've retried several times and failed.
What are my options? Queues, in-memory cache ?
You say you're receiving "callbacks", but you've not made clear what they are. What is the protocol? Is it over a network?
If it were HTTP, then I would say the best approach is: if your application is unable to write the data to permanent storage, it should return an error ("try again later", if that exists in the protocol) to the caller, who should retry later.
An asynchronous process like a callback should always be able to cope with failures downstream and queue its requests.
I've worked with a payment provider where this has been the case (Paypal). If you're unable to completely process the request, just send an error back to the caller.
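For an HTTP callback endpoint, that boils down to answering 503 when the write fails. A minimal sketch, assuming Express and a saveToMysql persistence function:

```javascript
const express = require('express');
const app = express();
app.use(express.json());

app.post('/callback', async (req, res) => {
  try {
    await saveToMysql(req.body); // assumed persistence function
    res.sendStatus(200);
  } catch (err) {
    // Couldn't persist: ask the provider to retry instead of dropping the data.
    res.set('Retry-After', '60');
    res.sendStatus(503);
  }
});
```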
I recommend some sort of job queue server. I personally use Starling and have had great results with it. It speaks the memcache protocol so it is easy to use as a persistent queue.
Starling on Github
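Since Starling speaks the memcache protocol, any memcache client works: a set() on a queue name pushes a job and a get() pops one. A sketch using the memjs client (the port is an assumption):

```javascript
const memjs = require('memjs');
const queue = memjs.Client.create('localhost:22122'); // Starling's default port, as an assumption

// Producer: push the callback payload onto the queue.
async function enqueue(payload) {
  await queue.set('callbacks', JSON.stringify(payload), {});
}

// Consumer: pop the next job; value is null when the queue is empty.
async function dequeue() {
  const { value } = await queue.get('callbacks');
  return value ? JSON.parse(value.toString()) : null;
}
```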
I've put a queue in SQLite for this before. Though, in my case, it was to protect against loss of the network link to the MySQL server — the data was locally-generated.
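If you go that route, something like this sketch works (assuming the better-sqlite3 package and a writeToMysql function of your own):

```javascript
const Database = require('better-sqlite3');
const db = new Database('fallback-queue.db');
db.exec('CREATE TABLE IF NOT EXISTS queue (id INTEGER PRIMARY KEY, payload TEXT)');

// When MySQL writes fail, park the callback locally...
function enqueue(payload) {
  db.prepare('INSERT INTO queue (payload) VALUES (?)').run(JSON.stringify(payload));
}

// ...and drain the local queue once MySQL is reachable again.
function drain(writeToMysql) { // writeToMysql is an assumed function
  for (const row of db.prepare('SELECT id, payload FROM queue').all()) {
    writeToMysql(JSON.parse(row.payload));
    db.prepare('DELETE FROM queue WHERE id = ?').run(row.id);
  }
}
```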
You can have a backup MySQL server and switch your connection to it if the primary one breaks down. If it's going to be only a fail-over store, you could probably run it locally on the application server.
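A minimal sketch of that manual failover, with placeholder hosts:

```javascript
const mysql = require('mysql2/promise');

const primary = { host: 'db-primary.internal', user: 'app', database: 'app' };
const backup  = { host: '127.0.0.1', user: 'app', database: 'app' }; // local fail-over store

async function connectWithFailover() {
  try {
    return await mysql.createConnection(primary);
  } catch (err) {
    // Primary unreachable: fall back to the local backup server.
    return await mysql.createConnection(backup);
  }
}
```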