GWT memory buffer compared to session storage

Working with GWT/GXT, I would like to speed up my app with 'local caching'.
I read about HTML5 session storage, but I was wondering why I shouldn't just use a memory buffer (a big HashMap with all the incoming data).
What's the pitfall of a memory buffer compared to session storage?

As Thomas Broyer detailed in his comment, the pitfall of using a Map or any similar data structure to hold your data is that it will all be lost on a page refresh.
If that is not a concern in your scenario, I don't see any issue with using a Map/List or anything like that.
In the Errai framework we use a lot of @ApplicationScoped beans to hold data across the whole application, for example the currently logged-in user, the most recently loaded data from the server, etc.
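To make the difference concrete, here is a minimal sketch in plain browser TypeScript (not GWT; GWT wraps the same browser API in com.google.gwt.storage.client.Storage): the Map is emptied by a refresh, while sessionStorage survives for the lifetime of the tab.

```typescript
// In-memory buffer: fastest, but cleared whenever the page reloads.
const memoryCache = new Map<string, string>();

function cachePut(key: string, value: string): void {
  memoryCache.set(key, value);            // lost on page refresh
  sessionStorage.setItem(key, value);     // survives a refresh of this tab
}

function cacheGet(key: string): string | null {
  // Prefer the in-memory copy; fall back to sessionStorage after a refresh.
  return memoryCache.get(key) ?? sessionStorage.getItem(key);
}
```

Note that sessionStorage only holds strings (and is typically limited to a few MB), so complex objects need to be serialized, e.g. with JSON.stringify().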

Related

How do modern web applications implement caching and data persistence with large amounts of rapidly changing data?

For example, consider something like Facebook or Twitter. All the user tweets / posts are retained indefinitely (so they must ultimately be stored within a static database). At the same time, they can rapidly change (e.g. with replies, likes, etc), so some sort of caching layer is necessary (e.g. you obviously can't be writing directly to the database every time a user "likes" a post).
In a case like this, how are the database / caching layers designed and implemented? How are they tied together?
For example, is it typical to begin by implementing the database in its entirety, and then add the caching layer afterward?
What about the other way around? In other words, begin by implementing the majority of the functionality in the cache layer, and then write another layer which periodically flushes the cache to the database (once its activity has died down)? In this scenario, for current / rapidly changing data, the entire application would essentially be stored in cache.
Or perhaps implement some sort of cache-ranking algorithm based on access / update frequency?
How should it then be handled when a user accesses less frequently used data (which isn't currently in the cache)? Simply bypass the cache and query the database directly, or should all data be cached before it's sent to users?
In cases like this, does it make sense to design the database schema with the caching layer in mind, or should it be designed independently?
I'm not necessarily asking for direct answers to all these questions, but they're just to give an idea of where I'm coming from.
I've found quite a bit of information / books on implementing the database, and implementing the caching layer independent of one another, but not a whole lot of information on using them in conjunction / tying them together.
Any information, suggestions, general patterns, articles, or books would be much appreciated. It's just difficult to find some direction here.
Thanks
Probably not the best solution, but I worked on a personal project using OpenResty, where I used its shared memory zones as a cache (to avoid the overhead of connecting to something like Redis on every request) and Redis as the backend DB.
When a user loads a resource, the server checks the shared dict; on a miss it loads the resource from Redis and writes it to the cache on the way back.
If a resource is created or updated, it's written to the cache and also pushed onto a shared-dict queue.
A background worker ticks away waiting for new items in the queue, writing them to Redis and then sending an event to other servers to either invalidate the resource in their cache if they have it, or even pre-cache it if needed.
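That answer describes a read-through cache with write-behind persistence. Below is a minimal sketch of the same pattern (my own, not the poster's OpenResty/Lua code), using a plain in-process Map in place of the shared memory zone and an abstract RedisLike interface in place of a concrete Redis client; the cross-server invalidation events are left out.

```typescript
// Stand-in for whatever Redis client is actually used; only get/set are needed here.
interface RedisLike {
  get(key: string): Promise<string | null>;
  set(key: string, value: string): Promise<unknown>;
}

const localCache = new Map<string, string>();    // plays the role of the shared dict
const writeQueue: Array<[string, string]> = [];  // pending write-behind entries

// Read path: local cache first, then Redis, populating the cache on the way back.
export async function readResource(redis: RedisLike, key: string): Promise<string | null> {
  const hit = localCache.get(key);
  if (hit !== undefined) return hit;
  const value = await redis.get(key);
  if (value !== null) localCache.set(key, value);
  return value;
}

// Write path: update the cache immediately, persist later via the queue.
export function writeResource(key: string, value: string): void {
  localCache.set(key, value);
  writeQueue.push([key, value]);
}

// Background worker: drain queued writes to Redis. In the answer above, this is
// also where an invalidation/pre-cache event would be sent to the other servers.
export async function drainQueue(redis: RedisLike): Promise<void> {
  while (writeQueue.length > 0) {
    const [key, value] = writeQueue.shift()!;
    await redis.set(key, value);
  }
}
```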

Node.js Store object in database or in array?

I am developing a node.js multiplayer card game application, played by 4 players at the same time.
I have an array of objects which contains all games in progress.
I was wondering: if 5000 games or more are in progress, could I run into memory problems with my server application?
Would it be better for me to store the objects in a database and read them each time? The database connection would be used a lot more, but memory a lot less. What is the best approach in this kind of situation?
If you can practically keep your data in memory, that will usually yield a solution that is faster and less complicated.
Here are reasons that you might have to use a database instead:
You need access from multiple processes.
You need persistence of data (if server should be restarted or crashes).
You are storing more data than will fit in memory.
You need certain concurrency or transactional features already built into typical databases.
You want to use certain searching/indexing features of existing databases.
If none of those reasons drive you to a database, and the data comfortably fits into memory (for node.js, probably less than 500MB-1GB, depending upon how much other memory your server uses and how much run-time memory your server has access to), then it's usually faster and simpler to store and access the data from memory.
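As a rough illustration, here is a minimal sketch (the Game shape is made up for the example) of keeping games in an in-memory Map keyed by game id; if each game is on the order of a few KB, 5000 games in progress is only tens of MB, well within a typical Node.js heap.

```typescript
// Hypothetical shape of one card game; adjust to your real game state.
interface Game {
  id: string;
  players: string[];                    // the 4 player ids
  hands: Record<string, string[]>;      // cards held by each player
  startedAt: number;
}

const games = new Map<string, Game>();

export function createGame(id: string, players: string[]): Game {
  const game: Game = { id, players, hands: {}, startedAt: Date.now() };
  games.set(id, game);
  return game;
}

export function getGame(id: string): Game | undefined {
  return games.get(id);
}

export function finishGame(id: string): void {
  // Delete finished games so memory use is bounded by games *in progress* only.
  games.delete(id);
}
```

If you later need persistence across restarts, you can keep this in-memory structure and write finished (or periodically snapshotted) games to a database, rather than reading every game from the database on each access.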

Live chat application using Node JS Socket IO and JSON file

I am developing a live chat application using Node.js, Socket.IO, and a JSON file. I am using the JSON file to read and write the chat data. Now I am stuck on one issue: when I do stress testing, i.e. pushing continuous messages into the JSON file, the JSON format becomes invalid and my application crashes. Although I am using forever.js, which should keep the application up, it still crashes.
Does anybody have an idea on this?
Thanks in advance for any help.
It is highly recommended that you reconsider your approach for persisting data to disk.
Among other things, one really big issue is that you will likely experience data loss. If we both read the file at the exact same time - {"foo":"bar"} - and we both make a change, and you save yours before mine, my change will overwrite yours, since I started from the same contents as you and didn't re-open the file after you saved.
What you are probably seeing now with an append-style approach is that we're both adding bits and pieces without regard to valid JSON structure (e.g. {"fo"bao":r":"ba"for"o"} from two interleaved writes of {"foo":"bar"}).
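For illustration, here is a minimal sketch (Node.js, assuming a hypothetical chat.json file) of the lost-update race: both writers read the same snapshot, so whichever write finishes last silently discards the other's message.

```typescript
import { promises as fs } from "fs";

const FILE = "chat.json"; // hypothetical path

async function addMessage(user: string, text: string): Promise<void> {
  // Unlocked read-modify-write: the hazard described above.
  const raw = await fs.readFile(FILE, "utf8").catch(() => "[]");
  const messages: Array<{ user: string; text: string }> = JSON.parse(raw);
  messages.push({ user, text });
  await fs.writeFile(FILE, JSON.stringify(messages));
}

async function main(): Promise<void> {
  // Both calls read chat.json before either write finishes,
  // so only one of the two messages survives.
  await Promise.all([addMessage("alice", "hi"), addMessage("bob", "hello")]);
}
main().catch(console.error);
```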
Disk I/O is actually pretty slow, even with an SSD. Memory is where it's at.
As recommended, you may want to consider MongoDB, MySQL, or otherwise. This may be a decent use case for Couchbase, which is an in-memory key/value store based on memcache that persists things to disk ASAP. It is extremely JSON friendly (it is actually mostly based on JSON), offers great map/reduce support to query data, is super easy to scale to multiple servers, and has a node.js module.
This would allow you to very easily migrate your existing data storage routine into a database. Also, it provides CAS support, which will protect you from data loss in the scenarios outlined earlier.
At minimum, though, you could just keep an in-memory object that you save to disk every so often to prevent permanent data loss. However, this only works well with one server, and then you're back to likely needing a database.
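A minimal sketch of that last fallback (my own, with a hypothetical chat.json path): all reads and writes hit memory, and a timer periodically flushes a snapshot by writing a temp file and renaming it, so a crash mid-write never leaves a half-written, invalid JSON file behind.

```typescript
import { promises as fs } from "fs";

const FILE = "chat.json";                               // hypothetical path
let messages: Array<{ user: string; text: string }> = [];

// All chat activity goes through memory; it is always valid JSON when serialized.
export function addMessage(user: string, text: string): void {
  messages.push({ user, text });
}

// Atomic-ish snapshot: write to a temp file, then rename over the real file.
async function flush(): Promise<void> {
  const tmp = FILE + ".tmp";
  await fs.writeFile(tmp, JSON.stringify(messages));
  await fs.rename(tmp, FILE);
}

// Load any previous snapshot on startup, then flush every 5 seconds.
fs.readFile(FILE, "utf8")
  .then((raw) => { messages = JSON.parse(raw); })
  .catch(() => { /* no snapshot yet */ });
setInterval(() => { flush().catch(console.error); }, 5000);
```

At most a few seconds of chat can still be lost on a crash, which is exactly why the database options above are the better long-term answer.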

HTML5 local storage memory architecture?

I have gone through many resources online but could not find the memory architecture used by HTML5 local storage. Is the data from local storage brought into memory while working on it (something like caching)?
Also, in case I want my app to work in offline mode (the basic purpose of storing into local storage), is it fine to store the data as global JSON objects rather than going for local storage?
In short, I receive a lot of JSON data when I log in to my app (a cross-platform HTML5 app). Should I store this data as a global object, or rather store it in local storage?
Well, it depends on how sensitive your information is and the approach you want to follow.
Local storage
You can use local storage for "temporary" data, passing parameters, and some config values. AFAIK local storage should be used with care in the sense that the stored information is not guaranteed to always be there, as it could be deleted to reclaim device memory or during a cleanup process. But you can use it without much fear.
To store JSON in local storage you will have to stringify your object into a local storage key; the JSON.stringify() function will do the trick for you.
So far I haven't found official information, but I think there is a limit in MB on how much you can store in local storage; however, I think that is not controlled directly via Cordova. Again, this is not official data, just keep it in mind if your data in JSON notation is extremely big.
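A minimal sketch of that round trip (the "loginData" key and its shape are made up for the example):

```typescript
// Hypothetical shape of the JSON payload received at login.
interface LoginData {
  userId: string;
  settings: Record<string, unknown>;
}

export function saveLoginData(data: LoginData): void {
  // localStorage only stores strings, so serialize the object first.
  localStorage.setItem("loginData", JSON.stringify(data));
}

export function loadLoginData(): LoginData | null {
  const raw = localStorage.getItem("loginData");
  return raw === null ? null : (JSON.parse(raw) as LoginData);
}
```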
Store data as global objects
Storing data as global objects could be useful if you have some variables or data that are shared across functions inside the app, to ease access. However, bear in mind that data stored in global variables will be lost if the app is restarted, stopped, crashes, or is quit.
If it is not sensitive information or you can recover it later, go ahead and use local storage or global variables.
Permanent storage
For sensitive data or more permanent information, I would suggest storing your JSON data in the app's file system. That is, write your JSON data to a file and, when required, recover the information from the file and store it in a variable to access it. That way, if your app is offline, or the app is restarted or quit, you can always recover the information from the file system. The only way to lose that data is if the app is deleted from the device.
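A minimal sketch of that approach (assuming the standard cordova-plugin-file plugin; "appdata.json" is a hypothetical file name, and the plugin's callback-style API is typed as `any` for brevity):

```typescript
// Globals provided by Cordova and cordova-plugin-file at runtime.
declare const cordova: any;
declare function resolveLocalFileSystemURL(
  url: string,
  onSuccess: (entry: any) => void,
  onError?: (err: any) => void
): void;

// Write the JSON payload into the app's private data directory.
export function saveToFile(data: object): void {
  resolveLocalFileSystemURL(cordova.file.dataDirectory, (dir: any) => {
    dir.getFile("appdata.json", { create: true }, (fileEntry: any) => {
      fileEntry.createWriter((writer: any) => {
        writer.write(new Blob([JSON.stringify(data)], { type: "application/json" }));
      });
    });
  });
}

// Read the file back (e.g. on startup while offline) and hand the parsed object to a callback.
export function loadFromFile(onLoaded: (data: object) => void): void {
  resolveLocalFileSystemURL(cordova.file.dataDirectory, (dir: any) => {
    dir.getFile("appdata.json", {}, (fileEntry: any) => {
      fileEntry.file((file: any) => {
        const reader = new FileReader();
        reader.onloadend = () => onLoaded(JSON.parse(reader.result as string));
        reader.readAsText(file);
      });
    });
  });
}
```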
In my case I am using all three methods in the app I am developing, so just decide which approach works best for you and your needs.

BLOBs, Streams, Byte Arrays and WCF

I'm working on an image processing service that has two layers. The top layer is a REST-based WCF service that takes the image upload, processes it, and then saves it to the file system. Since my top layer doesn't have any direct database access (by design), I need to pass the image to my application layer (WSHttpBinding WCF), which does have database access. As it stands right now, the images can be up to 2MB in size and I'm trying to figure out the best way to transport the data across the wire.
I am currently sending the image data as a byte array, and the object will have to be stored in memory at least temporarily in order to be written out to the database (in this case, a MySQL server), so I don't know whether using a Stream would help eliminate the potential memory issues, or if I am going to have to deal with potentially filling up my memory no matter what I do. Or am I just overthinking this?
Check out the Streaming Data section of this MSDN article: Large Data and Streaming
I've used the exact method described to successfully upload large documents and even stream video content from a WCF service. The keys are to pass a Stream object in the message contract and to set the transferMode to Streaming in the client and service configuration.
I saw this post regarding efficiently pushing that stream into MySQL, hopefully that gets you pointed in the right direction.