Question: What is the best approach for implementing client-side caching of huge data? I am using Angular 4 with ASP.NET Web API 2.
Problem: I am developing a web analytics tool (supporting mobile browsers as well) which generates ECharts metrics based on JSON data returned from the ASP.NET Web API 2 backend. The page has filters and chart events which recalculate chart measures from the same JSON data on the client side. To optimize speed, I store the JSON data (minified) in the browser's localStorage. This avoids the frequent API calls that would otherwise be made on every filter change and chart event. The JSON data is refreshed from the server every 20 minutes, as I set an expiry on each JSON entry saved in localStorage.
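A minimal sketch of that expiry scheme, for context (the key names and entry shape here are assumptions, not the actual code):

```ts
// Sketch of a localStorage cache with a 20-minute TTL.
const TTL_MS = 20 * 60 * 1000;

interface CacheEntry<T> {
  savedAt: number; // timestamp used to decide when to refresh from the API
  data: T;
}

function putCached<T>(key: string, data: T): void {
  const entry: CacheEntry<T> = { savedAt: Date.now(), data };
  localStorage.setItem(key, JSON.stringify(entry)); // minified by default
}

function getCached<T>(key: string): T | null {
  const raw = localStorage.getItem(key);
  if (!raw) return null;
  const entry: CacheEntry<T> = JSON.parse(raw);
  if (Date.now() - entry.savedAt > TTL_MS) {
    localStorage.removeItem(key); // expired: force a refresh from the API
    return null;
  }
  return entry.data;
}
```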
The problem is that localStorage has a size constraint of about 10 MB, and the above solution breaks down when the JSON data (spread across multiple localStorage keys) exceeds that limit.
Since my data size can vary and can exceed 10 MB, what is the best approach to cache such data, given that the same data is reused to recalculate measures for the metrics without making Web API calls?
I have thought about the following (not implemented yet):
a) Client-side in-memory caching (may cause performance issues for users on low-memory devices).
b) Storing the JSON data in a JavaScript variable and reading from it.
Please suggest a better solution for a large client-side cache.
Related
I have a scenario in which a frontend app makes a call to a backend DB (APP -> API Gateway -> SpringBoot -> DB) with a JSON request. The backend returns a very large dataset (>50,000 rows) in the response, ~10 MB in size.
My frontend app is highly responsive and mission critical, but we are seeing performance issues: the frontend is not responding or timing out. What is the best design to resolve this, considering:
The DB query can't be normalized any further.
The Spring Boot code has caching built in.
No data can be left out, due to the intrinsic nature of the dataset.
Multiple calls can't be made, as all the data is needed in the first call itself.
Can a cache be built in between the frontend and the backend?
Thanks.
Sounds like this is a generated report from a search. If this data needs to be associated with each other, I'd assign the search an id and store the results on the server. Then pull the data for this id as needed on the frontend. You should never have to send 50,000 rows to the client in one go; paginate the data and pull it as needed if you have to.

If you don't want to paginate, how much data can the user actually display on a single screen? You can pull more data from the server based on where they scroll on the page. You should only need to return the row count to the frontend, plus maybe 100 rows of data. This lets you render a scrollbar with the right height; when the user scrolls to a certain position within the data, you pull the corresponding offset from the server for that particular search id.

Even if you could return all 50,000+ rows in one go, it isn't very friendly to the end user's device to load that much into memory for a functional page.
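A rough sketch of that offset-based paging against a stored search id (the endpoint shape and page size are assumptions for illustration, not a known API):

```ts
// Pull only the visible window of rows for a server-side search id.
interface Page<T> {
  total: number; // full row count, used to size the scrollbar
  rows: T[];
}

const PAGE_SIZE = 100;

async function fetchPage<T>(searchId: string, offset: number): Promise<Page<T>> {
  const res = await fetch(
    `/api/searches/${searchId}/rows?offset=${offset}&limit=${PAGE_SIZE}`
  );
  if (!res.ok) throw new Error(`Fetch failed: ${res.status}`);
  return res.json();
}

// Translate the scrollbar position into the row offset to request next.
function offsetForScroll(scrollTop: number, rowHeight: number): number {
  return Math.floor(scrollTop / rowHeight);
}
```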
This is a sign of a flawed frontend that should be redone.
10 MB is huge and can be inconsiderate to your users, especially if there's a high probability of mobile use.
If possible, it would be best to collect this data on the backend, probably put it onto disk, and then provide only the necessary data to the frontend as it's needed. As the map needs more data, you would make further calls to the backend.
If this isn't possible, you could ship this data with the client-side bundle. If the data doesn't update too frequently, you can even cache it on the frontend. This would at least prevent the user from needing to fetch it repeatedly.
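A minimal sketch of that frontend caching idea using the browser Cache API (the cache name and data URL are assumptions):

```ts
// Serve the bundled data file from the Cache API on repeat visits.
const CACHE = "static-data-v1";

async function loadData(url = "/data/app-data.json"): Promise<unknown> {
  const cache = await caches.open(CACHE);
  const hit = await cache.match(url);
  if (hit) return hit.json(); // cached locally: no network round trip

  const res = await fetch(url);
  await cache.put(url, res.clone()); // store a copy for next time
  return res.json();
}
```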
I'm looking for efficient ways to handle very large JSON files (possibly several GB in size, amounting to a few million JSON objects) in requests to a Django 2.0 server (using Django REST Framework). Each row needs to go through some processing and then get saved to a DB.
The biggest pain point so far is the sheer memory consumption of the file itself, with memory use still climbing steadily while the data is being processed in Django, and no way to manually release the memory used.
Is there a recommended way of processing very large JSON files in requests to a Django app without exhausting memory? Is it possible to combine this with compression (gzip)? I'm thinking of uploading the JSON to the API as a regular file, streaming it to disk, and then streaming from the file on disk using ijson or similar. Is there a more straightforward way?
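For the sending side of that idea, a hedged sketch of streaming a gzip-compressed upload from a client, so the server can spool it to disk and parse it incrementally (the URL is made up; CompressionStream and the fetch duplex option assume a recent Chromium-based browser or Node 18+):

```ts
// Gzip the JSON file on the fly instead of buffering the whole payload.
async function uploadLargeJson(file: File, url = "/api/import/"): Promise<Response> {
  const gzipped = file.stream().pipeThrough(new CompressionStream("gzip"));
  return fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Content-Encoding": "gzip",
    },
    body: gzipped,
    // Streaming request bodies require half-duplex mode in fetch.
    duplex: "half",
  } as RequestInit);
}
```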
The main purpose is to store data locally so it can be accessed without an internet connection.
In my React application I will need to fetch JSON data (such as images, text and videos) from the internet and display it for a certain amount of time.
To add flexibility, this should work offline as well.
I've read about options such as localStorage and Firebase, but so far all of them either require access to the internet or are limited to 10 MB, which is too low for what I'll need.
What would be my best option to persist data in some sort of offline database or file through React?
I'd also be thankful if you could point me to a good tutorial on any suggested solution.
To store large amounts of data on the client side you can use IndexedDB.
IndexedDB is a low-level API for client-side storage of significant amounts of structured data, including files/blobs.
You can read more about the IndexedDB API here.
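For illustration, a minimal IndexedDB sketch for caching large JSON payloads offline (the database name, store name, and helper shape are assumptions):

```ts
// Open (or create) a database with a single object store for JSON blobs.
function openDb(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open("app-cache", 1);
    req.onupgradeneeded = () => req.result.createObjectStore("json");
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

async function putJson(key: string, value: unknown): Promise<void> {
  const db = await openDb();
  return new Promise((resolve, reject) => {
    const tx = db.transaction("json", "readwrite");
    tx.objectStore("json").put(value, key); // structured clone; no ~10 MB cap
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}

async function getJson<T>(key: string): Promise<T | undefined> {
  const db = await openDb();
  return new Promise((resolve, reject) => {
    const req = db.transaction("json").objectStore("json").get(key);
    req.onsuccess = () => resolve(req.result as T | undefined);
    req.onerror = () => reject(req.error);
  });
}
```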
I am building an app that receives a bunch of static, read-only data. The user does not change the data or send any data to the server; the app just gets the data and presents it to the user in various views.
For example, a parts list with part numbers and prices. This data is currently stored in MongoDB.
I have a few options for getting the data to the client. I could just use Meteor's publication system and have the client subscribe to the data it needs.
Or I could map all the data the client needs into one JSON file, save the JSON file to Amazon S3, and have the client make a simple GET request to grab the data.
If we wanted this app to scale to many, many users, would avoiding Meteor publications be best? Or would either method be similar in terms of performance? Using the Meteor publication system would be the easiest, but I am worried that going down this route would lead to performance issues if a lot of clients request the data. If the performance of publishing and GET requests is about the same, I would just stick with publications, as that's the easiest.
In this case Meteor will provide better performance. If your data flow is mostly server-to-client, then clients do not have to poll the server and the server does not have to handle repeated requests.
Also, Meteor requires very few resources to send data to the client because the connection is persistent. An app like CodeFights, which is built on Meteor, constantly has thousands of connections to and from it, and its performance is great.
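For reference, a minimal sketch of that publish/subscribe route (the collection and field names are assumptions for illustration):

```ts
import { Meteor } from "meteor/meteor";
import { Mongo } from "meteor/mongo";

export const Parts = new Mongo.Collection("parts");

if (Meteor.isServer) {
  // Publish only the fields the views need; the data is read-only.
  Meteor.publish("parts.list", function () {
    return Parts.find({}, { fields: { partNumber: 1, price: 1 } });
  });
}

if (Meteor.isClient) {
  // Once the subscription is ready, Parts.find() reads from the local cache
  // over the persistent DDP connection; no per-view HTTP requests needed.
  Meteor.subscribe("parts.list");
}
```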
As a side note, if you are prepared to serve your static data as a JSON file from a separate server (AWS S3), that implies you do not expect the data to be very big, since it has to fit in a single file and be loaded entirely into the client's memory.
In that case, you might even want to reconsider the need to perform any separate request at all (whether HTTP or Meteor pub/sub).
For instance, you could simply embed the data in your app, or serve it through SSR / the Fast Render package.
Then, if you are really concerned about scalability, you might even reconsider the need to use Meteor, since you do not seem to need any client-server interactivity (no real need for pub/sub, no reactivity…). Once your prototype is ready, you could rework it as a separate, static SPA, so that you do not even need to serve it through Node / Meteor.
I'm working on an image processing service that has two layers. The top layer is a REST-based WCF service that takes the image upload, processes it, and then saves it to the file system. Since my top layer doesn't have any direct database access (by design), I need to pass the image to my application layer (a wsHttpBinding WCF service), which does have database access. As it stands right now, the images can be up to 2 MB in size, and I'm trying to figure out the best way to transport the data across the wire.
I currently send the image data as a byte array. The object will have to be stored in memory at least temporarily in order to be written to the database (in this case, a MySQL server), so I don't know whether using a Stream would help eliminate the potential memory issues, or whether I'm going to have to deal with potentially filling up my memory no matter what I do. Or am I just overthinking this?
Check out the Streaming Data section of this MSDN article: Large Data and Streaming
I've used the exact method described to successfully upload large documents and even stream video content from a WCF service. The keys are to pass a Stream object in the message contract and to set the transferMode to Streamed in both the client and service configuration.
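As a hedged illustration of that configuration (the binding name and size limit are assumptions; note also that streamed transfer is supported by bindings like basicHttpBinding and netTcpBinding, not by wsHttpBinding, which buffers messages):

```xml
<!-- Sketch of the service-side binding for streamed uploads. -->
<system.serviceModel>
  <bindings>
    <basicHttpBinding>
      <binding name="streamedUpload"
               transferMode="Streamed"
               maxReceivedMessageSize="2097152" />
    </basicHttpBinding>
  </bindings>
</system.serviceModel>
```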
I saw this post regarding efficiently pushing that stream into MySQL; hopefully that gets you pointed in the right direction.