Global variables and sessions in ASP.NET - MySQL

I'm new to web development, and coming from the world of Java and Android I have a few questions. (I'm using ASP.NET.)
Let's assume I have a simple webpage with a label showing a number and a button. When any user presses the button, the number gets incremented automatically for all the users viewing the site, even if they do not refresh the page. Would I use sessions to achieve this, or is there another concept I should look into?
I have two types of counters, which I store in a MySQL table with the following schema:
Counter_ID Increment_Value
Each counter is active for a set amount of time, and only one instance of a counter can be active at any point in time. After this time, the counter is reset to 0 and a new instance of the counter is created. I store all instances, active as well as past, in a table with this schema:
Instance_ID Counter_ID Counter_Value Status(Active/Complete) Time_Remaining
When a user opens a page dedicated to one of the two counter types, the information about the currently running instance of that counter needs to be loaded. Would I just execute a SQL query to read the information for active counters every time the counter page is loaded, or is there a way to store this information on the site so that the site "knows" which instance is currently active and does not require a SQL query for each request (a global variable of sorts)? Obviously, the situations described above are simplified examples I use to explain my issue.

You can use ApplicationState to cache global values that are not user-specific. In your first example, since the number is incremented for all users you can transactionally store it in the database whenever it is incremented, and also cache it in ApplicationState so that it can be read quickly when rendering pages on the server. You will have to be careful to ensure you are handling concurrency properly so that each time the number is incremented the Database AND the cache are updated atomically.
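A language-neutral sketch of that idea (Python here, with plain dicts standing in for the database and for ApplicationState; the class and field names are purely illustrative): a single lock guards both writes, so no reader can ever observe the store and the cache disagreeing.

```python
import threading

class CounterService:
    """Keeps a durable store (stand-in for the database) and an
    in-process cache (stand-in for ApplicationState) in sync."""

    def __init__(self):
        self._lock = threading.Lock()
        self._db = {"counter": 0}      # pretend durable store
        self._cache = {"counter": 0}   # pretend ApplicationState

    def increment(self):
        # One critical section covers both writes, so the DB and
        # the cache are updated atomically with respect to readers.
        with self._lock:
            self._db["counter"] += 1                      # commit first
            self._cache["counter"] = self._db["counter"]  # then refresh cache
            return self._cache["counter"]

    def read(self):
        # Fast path: pages render from the cache, never the DB.
        return self._cache["counter"]
```

In a real ASP.NET app the lock and cache would live in application scope and the "pretend durable store" would be a transactional MySQL update, but the shape of the solution is the same.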
It's a little unclear from your question, but if your requirement is to also publish changes to the number in real-time to all users who are currently using your website you will need to look at real-time techniques. Websockets are good for this (if available on the server and client browser). Specifically, on the .NET platform SignalR is a great way to implement real-time communication from server to client and with graceful fall-back in case WebSockets are not supported.
Just to be clear, you would not use Session storage for this scenario (unless I have misinterpreted your question). Session is per-user and should typically not affect other users in the system. Your example is all about global values so Session is not the correct choice in this case.
For your second example, using ApplicationState and transactional DB commits you should be able to cache which counter is currently active and switch them around at will provided you lock all your resources while you perform the switch between them.
Hopefully that's enough information to get you heading in the right direction.

Related

How do modern web applications implement caching and data persistence with large amounts of rapidly changing data?

For example, consider something like Facebook or Twitter. All the user tweets / posts are retained indefinitely (so they must ultimately be stored within a static database). At the same time, they can rapidly change (e.g. with replies, likes, etc), so some sort of caching layer is necessary (e.g. you obviously can't be writing directly to the database every time a user "likes" a post).
In a case like this, how are the database / caching layers designed and implemented? How are they tied together?
For example, is it typical to begin by implementing the database in its entirety, and then add the caching layer afterward?
What about the other way around? In other words, begin by implementing the majority of functionality into the cache layer, and then write another layer which periodically flushes the cache to the database (at some point when its activity has gone down)? In this scenario, for current / rapidly changing data, the entire application would essentially be stored in cache.
Or perhaps implement some sort of cache-ranking algorithm based on access / update frequency?
How then should it be handled when a user accesses less frequent data (which isn't currently in cache)? Simply bypass cache completely / query the database directly, or should all data be cached before it's sent to users?
In cases like this, does it make sense to design the database schema with the caching layer in mind, or should it be designed independently?
I'm not necessarily asking for direct answers to all these questions, but they're just to give an idea of where I'm coming from.
I've found quite a bit of information / books on implementing the database, and implementing the caching layer independent of one another, but not a whole lot of information on using them in conjunction / tying them together.
Any information, suggestions, general patterns, articles, or books would be much appreciated. It's just difficult to find some direction here.
Thanks
Probably not the best solution, but I worked on a personal project using OpenResty, where I used its shared memory zones as a cache to avoid the overhead of connecting to something like Redis on every request, and then used Redis as the backend DB.
When a user loads a resource, it checks the shared dict; on a miss it loads the resource from Redis and writes it to the cache on the way back.
If a resource is created or updated, it's written to the cache, and also queued to a shared dict queue.
A background worker ticks away waiting for new items in the queue, writing them to Redis and then sending an event to other servers to either invalidate the resource in their cache if they have it, or even pre-cache it if needed.
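The read-through/write-behind flow described above can be sketched like this (Python, with plain dicts standing in for both the shared dict and Redis; the class and method names are made up for illustration):

```python
from collections import deque

class WriteBehindCache:
    """Cache-aside reads plus a write-behind queue: reads fill the
    cache on a miss; writes land in the cache immediately and are
    queued for a background worker to flush to the backing store."""

    def __init__(self, backing):
        self.backing = backing   # stand-in for Redis
        self.cache = {}          # stand-in for the shared dict
        self.queue = deque()     # pending writes for the worker

    def get(self, key):
        if key in self.cache:
            return self.cache[key]          # hit
        value = self.backing.get(key)       # miss: load from backend
        if value is not None:
            self.cache[key] = value         # populate on the way back
        return value

    def put(self, key, value):
        self.cache[key] = value             # visible to readers now
        self.queue.append((key, value))     # flushed later

    def flush_one(self):
        """One tick of the background worker."""
        if self.queue:
            key, value = self.queue.popleft()
            self.backing[key] = value
            # Here you would also notify peer servers so they can
            # invalidate (or pre-cache) the resource.
```

The important property is that readers never block on the backend for data that was just written: the cache is the source of truth until the worker drains the queue.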

How to detect sql table updates with multiple open browsers or windows?

-I have an html page with a textbox element with autocomplete feature.
-The autocomplete list is filled from Mysql table called X.
-A user can open this page from multiple browsers or windows at the same time.
-The user is able to add new records or update existing records to table X from the same page.
Now as he adds new records, I want the other windows or browsers to detect that a change happened in the table and refresh the autocomplete list so it is visible there too.
How can I achieve this?
I am thinking of checking whether the table changed on every keypress of the textbox, but I am afraid that will slow the page down.
The other solution I was thinking of: can I apply a trigger in this case?
I know this is used a lot; for example, you can open your Gmail account from multiple browsers or windows, and if you edit anything you will be able to see it from the rest.
I appreciate your help, as I searched a lot about this but couldn't find a solution.
This is a very broad question and has many, many answers. It also depends on your database back end. A couple of noteworthy options: if you use a bus of some sort in the back end, you can push your change to the DB and then to the bus, and your web client can consume it from there so it knows to refresh. The other is to use a trigger (if you're using MSSQL) to push the change, via a CLR assembly you created, to an MSMQ queue and consume it from there; that will reduce the constant polling of the DB. Personally I always use the bus for this kind of thing, but it depends on your setup.
A SQL trigger wouldn't help here - that's just for running logic inside the DB. The issue is that you don't have a way to push changes down to the client (except perhaps Web sockets or something, but that would probably be a lot of work), so you would have to resort to polling the server for updates. Doing so on key press might be excessive - perhaps on focus and/or periodically (every minute?)? To lessen the load, you could have the client make the request using some indicator of the state that it last successfully fetched, and have the server only return changes (deletions and insertions - an update would be a combination of the two) so then rather than the full list every time it is only a delta.
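The "only return changes" idea can be sketched as follows (Python; the append-only log format and the function name are assumptions for illustration, the real server would query its tables by a modification sequence number):

```python
def changes_since(log, client_version):
    """Return only the entries the client has not yet seen, plus the
    new version cursor. `log` is an append-only list of
    (version, op, item) tuples maintained by the server; the client
    stores the last version it applied and sends it with each poll."""
    delta = [(op, item) for version, op, item in log if version > client_version]
    latest = log[-1][0] if log else client_version
    return delta, latest
```

The client replaces its local list by applying the delta (an update is modeled as a deletion plus an insertion) and remembers the returned cursor for its next poll, so most polls transfer nothing at all.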
Within a single browser you may be able to incorporate local storage as well, but that won't help across multiple browsers.
Another option would be to not store the autocomplete options locally and always fetch from the server (on key press). Typically you would not send the request when the input length is less than some threshold (say, 3 characters) to try to keep the result size reasonable. You can also throttle the key press event so that multiple presses in quick succession get combined into only one request sent, and also store and cancel any outstanding asynchronous requests before sending a new one. This approach will guarantee you always get the most current data from the database, and while it will add a degree of latency to the autocomplete in my experience it is rarely an issue.
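A sketch of the gatekeeping side of that approach (Python; the class name, threshold, and window are illustrative, and in a real page this would be a debounced event handler in the browser): queries shorter than the minimum length are skipped, and key presses inside the throttle window are dropped so a burst of typing collapses into one request.

```python
import time

class AutocompleteClient:
    """Client-side gatekeeper for server-backed autocomplete."""

    def __init__(self, send, min_length=3, interval=0.25):
        self.send = send            # whatever issues the AJAX call
        self.min_length = min_length
        self.interval = interval    # throttle window in seconds
        self._last_sent = 0.0

    def on_keypress(self, text, now=None):
        now = time.monotonic() if now is None else now
        if len(text) < self.min_length:
            return False            # too short: no request
        if now - self._last_sent < self.interval:
            return False            # within throttle window: drop
        self._last_sent = now
        self.send(text)
        return True
```

A production version would also cancel any still-outstanding request before sending the next one, as the answer suggests.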

What database/technology to use for a notification system on a node.js site?

I'm looking to implement notifications within my node.js application. I currently use mysql for relational data (users, submissions, comments, etc). I use mongodb for page views only.
To build a notification system, does it make more sense (from a performance standpoint) to use mongodb vs MySQL?
Also, what's the convention for showing new notifications to users? At first, I was thinking that I'd have a notification icon, and they click on it and it does an ajax call to look for all new notifications from the user, but I want to show the user that the icon is actually worth clicking (either with some different color or a bubble with the number of new notifications like Google Plus does).
I could do it when the user logs in, but that would mean the user would only see new notifications when they logged out and back in (because it'd be saved in their session). Should I poll for updates? I'm not sure if that's the recommended method, as it seems like overkill to show a single digit (or more, depending on the number of notifications).
If you're using node then you can 'push' notifications to a connected user via websockets. The linked document is an example of one well known websocket engine that has good performance and good documentation. That way your application can send notifications to any user, or sets of users, or everyone based on simple queries that you setup.
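The push model can be sketched in a few lines (Python, with an in-memory message list standing in for each open websocket; in a real node app the websocket engine, e.g. socket.io, manages the connections and delivery for you):

```python
class NotificationHub:
    """In-memory sketch of server push: each connected user has a
    delivery queue (standing in for an open websocket), and the
    server pushes into the queues of whichever users a query selects."""

    def __init__(self):
        self.connections = {}   # user_id -> pending messages

    def connect(self, user_id):
        self.connections[user_id] = []

    def push(self, user_ids, message):
        # Send to any user or set of users currently connected.
        for uid in user_ids:
            if uid in self.connections:
                self.connections[uid].append(message)

    def push_all(self, message):
        self.push(list(self.connections), message)
```

This is what lets the badge count update the moment a notification is created, with no polling: the server decides who should see it and pushes it down the open connections.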
Data storage is a different question. Generally MySQL does have poorer performance at high scale, and Mongo generally has a quicker read-query response, but it depends on what data structure you wish to use. If your data is a simple key-value structure with no real need for relational features, then perhaps a memory store such as Redis would be the most suitable.
This answer has more information on your question too if you want to follow up and investigate more.

Database strategy for synchronization based on changes

I have a Spring+Hibernate+MySQL backend that exposes my model (8 different entities) to a desktop client. To keep synchronized, I want the client to regularly ask the server for recent changes. The process may be as follows:
Point A: The client connects for the first time and retrieves the entire model from the server.
Point B: The client asks the server for all changes since Point A.
Point C: The client asks the server for all changes since Point B.
To retrieve the changes (points B & C) I could create an HQL query that returns all rows in all my tables that have been modified since my previous retrieval. However, I'm afraid this can be a heavy query and degrade performance if executed often.
For this reason I was considering other alternatives, such as keeping a separate table with recent updates for fast access. I have looked at using the L2 query cache, but it doesn't seem to serve my purpose.
Does someone know a good strategy for my purpose? My initial thought is to keep control of synchronization and avoid using "automatic" synchronization tools.
Many thanks
You can store changes in a queue table. Triggers can populate the queue on insert, update, and delete; this preserves the order of the changes (insert, update, update, delete). Empty the queue after download.
Emptying the queue would cause issues if you have multiple clients, though, so you may need to think about a design that handles that case.
There are several designs you can go with, all with trade-offs. I have used the queue design before, but only for copying data to a single destination, not multiple.
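One way around the multiple-clients issue is to keep the queue as an append-only change log and give each client its own cursor, so nothing is ever deleted on download. A sketch (Python; the row shape and names are assumptions, and in the real system the rows would be written by the insert/update/delete triggers):

```python
class ChangeLog:
    """Append-only change table with a per-client cursor instead of
    a queue that gets emptied: any number of clients can sync
    independently, each from its own position in the log."""

    def __init__(self):
        self.rows = []      # (seq, table, op, row_id)
        self.cursors = {}   # client_id -> last seq applied
        self._seq = 0

    def record(self, table, op, row_id):
        # In production, a trigger would insert this row.
        self._seq += 1
        self.rows.append((self._seq, table, op, row_id))

    def sync(self, client_id):
        last = self.cursors.get(client_id, 0)
        delta = [r for r in self.rows if r[0] > last]
        if delta:
            self.cursors[client_id] = delta[-1][0]
        return delta
```

Old rows can be pruned once every known client's cursor has moved past them, which replaces "empty the queue after download" with a safe garbage-collection step.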

How do you handle/react to user input concurrency on the GUI layer?

What are good ways to handle user input concurrency?
As the answers to this question already rule out database locking, how do you handle concurrent user inputs in general?
Is locking always a bad idea, even if it is not implemented by row locking? Are there best practices which are not use case dependant?
What were your experiences with your strategies?
EDIT: I'm aware of handling concurrency on the data level through transactions: if two users simultaneously trigger a complex data change, the transaction will handle it.
But I'm interested in handling or at least reacting to them on the GUI layer. What if the data change is part of a lengthy operation with user interaction?
Let's say two or more users are editing the same file over a web interface. At some point one of the users hits the save button. What happens to the other users?
Will they get notified and/or forced to reload? Or will they eventually overwrite the changes of the first user?
Shall I lock the file and prevent multiple users editing the same file?
Can I put the whole editing process in a transaction (I highly doubt it, but who knows...)
What is the best way to handle this and similar situations? Are there any other strategies?
The best strategy depends on what should happen from a (business) process perspective. Also important: what users would normally expect and what would surprise them least, and, of course, whether it is feasible to implement what they expect.
Your example of editing a file over web can be broken down as follows:
user1 checks out/gets/downloads/opens file v0
user2 checks out/gets/downloads/opens file v0
user1 makes changes to his copy of file v0
user2 makes changes to his copy of file v0
user1 saves file version v1 to the server
user2 saves file version v2 to the server
Note that it is typical for web applications, and indeed for normal desktop office programs too, that the newest changes a user makes only become available to others after saving them; this is not a case of a colleague's typing appearing on top of yours in the copy of the file you are editing.
A classic version control approach to this is that for user1 nothing changes as compared to normal desktop editing/saving process.
For user2, however, when he attempts to save v2 to the server, the application must check whether there have been any changes to the file since version v0, which he last downloaded. Since there have been, a version control system would typically show him both versions (v1 and v2) on screen side by side and let him merge them, saving the resulting version (v3) to the server.
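This check-on-save is classic optimistic concurrency, and its core can be sketched in a few lines (Python; the class and method names are illustrative):

```python
class VersionedFile:
    """Optimistic concurrency: each save states which version it was
    based on; a save based on a stale version is rejected, so the
    second user can merge instead of silently overwriting the first
    user's changes."""

    def __init__(self, content=""):
        self.version = 0
        self.content = content

    def checkout(self):
        # The user records the version he started editing from.
        return self.version, self.content

    def save(self, based_on, new_content):
        if based_on != self.version:
            return False        # stale base: caller must merge and retry
        self.version += 1
        self.content = new_content
        return True
```

user1's save succeeds because the file is still at the version he checked out; user2's save is rejected, and after merging he retries against the current version.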
For text files there exist a number of tools and systems both on Unix and Windows that try to automate the process so that if the areas of file edited do not overlap, the changes are merged automatically.
The alternative is locking file for user2 until user1 has finished editing it.
Putting editing in a transaction is typically of no relevance. It is the final operation, which attempts to overwrite the existing file with the new version, that matters. Editing happens independently on each user's workstation and does not touch the server until the last point (saving).
Your example is, by the way, distinctly different from another situation such as booking airplane tickets or booking an appointment to a doctor.
When booking tickets, there is a limited number of seats on a plane. Because data transfer is not actually instantaneous, it is possible for more than one person to put a reservation on the same last seat on a plane.
Therefore, booking should be at least a two-step process:
system shows free slots;
user asks for one of the free slots (s1);
system tells the user whether the slot is really still free and, if so, reserves it for them;
user completes the booking.
The "really still free" step exists because the information on the web page the user views is typically not updated in real time, so between steps 1 and 2 it is possible that another user has applied for the free slot.
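That re-check can be sketched as a conditional reservation inside a critical section (Python; the SeatMap name and seat labels are made up for illustration):

```python
import threading

class SeatMap:
    """Two-step booking: the free list the user saw may be stale,
    so the reserve step re-checks the seat under a lock before
    committing (the "really still free" check)."""

    def __init__(self, seats):
        self.free = set(seats)
        self._lock = threading.Lock()

    def show_free(self):
        # Step 1: a possibly-stale snapshot shown on the web page.
        return sorted(self.free)

    def reserve(self, seat):
        # Step 2: between the user seeing the page and clicking,
        # someone else may have taken the seat, so check atomically.
        with self._lock:
            if seat not in self.free:
                return False    # no longer free: tell the user
            self.free.remove(seat)
            return True
```

In a real system the critical section would be a database transaction with a conditional UPDATE, but the logic is the same: decide and commit in one atomic step, never trust the page the user is looking at.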
Look for how to handle "transactions" in whatever language/database API you are using. If you design these correctly, they will handle this for you.
And to understand the theory, I'd recommend Distributed Systems by Coulouris et al., but there are lots of other good books.