I am developing an iOS app that uses a single-context architecture. I make frequent calls to my API (PHP) and I want to "cache" the output for as long as the session is active. Right now I am saving the output to a variable defined in app.js:
var contacts = {
    contactsData: null
};
So I do this to save the output. Is it really a good idea? Will it slow things down?
contacts.contactsData = output;
Grateful for any input!
It depends on how big the JSON file is in MB. If the device has enough RAM, this is the best way. Also, be sure you save the decoded JSON, not just the raw request response, so you will not have to decode it every time.
If the JSON data is too big, you should think about some kind of local storage. If the JSON is always the same (no need to sync every time), save it locally.
If you need to update it often, you can fetch the most urgently needed part with one small request (API support needed) and the rest of the data with a second background request.
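A minimal sketch of that pattern, assuming a hypothetical apiGetContacts helper that performs the actual request: decode the response once, keep the decoded object in the session-scoped variable, and reuse it on later calls.
var contacts = {
    contactsData: null
};

// 'apiGetContacts' is a hypothetical name; substitute your real API call.
function getContacts(callback) {
    if (contacts.contactsData !== null) {
        // Cache hit: no network request, no re-decoding.
        callback(contacts.contactsData);
        return;
    }
    apiGetContacts(function (responseText) {
        // Store the decoded object, not the raw JSON string.
        contacts.contactsData = JSON.parse(responseText);
        callback(contacts.contactsData);
    });
}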
I'm working on a simple Ruby CLI script that will allow me to browse certain statistics inside the terminal.
I'm using the API from the following website: https://worldcup.sfg.io/matches
require 'httparty'
url = "https://worldcup.sfg.io/matches"
response = HTTParty.get(url)
I have two goals in mind. First is to somehow save the JSON response (I'm not using a database) so I can avoid unnecessary requests. Second is to check if new data is available, and if it is, to overwrite the previously saved response.
What's the best way to go about this?
... with cli ...
So caching in memory is likely not available to you: a CLI script starts a fresh process on every run, so nothing survives between invocations. In this case you can save the response to a file on disk.
Second is to check if new data is available, and if it is, to overwrite the previously saved response.
The thing is, how can you check whether new data is available without making a request for the data? It's not possible (given the information you provided). So you can simply keep fetching the data every 5 minutes or so and updating your local file.
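A minimal sketch of that approach, assuming an arbitrary cache file name (matches.json) and a 5-minute freshness window:
require 'httparty'
require 'json'

CACHE_FILE = 'matches.json'   # arbitrary cache location
MAX_AGE    = 5 * 60           # freshness window in seconds

def matches
  # Reuse the file if it exists and is fresh enough.
  if File.exist?(CACHE_FILE) && (Time.now - File.mtime(CACHE_FILE)) < MAX_AGE
    return JSON.parse(File.read(CACHE_FILE))
  end

  # Otherwise fetch again and overwrite the previously saved response.
  response = HTTParty.get('https://worldcup.sfg.io/matches')
  File.write(CACHE_FILE, response.body)
  JSON.parse(response.body)
end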
I have a RESTful server that is supposed to return a large JSON object, more specifically an array of objects, to browsers. For example, 30,000 points will have a size of about 6.5 MB.
But I get a content mismatch error in the browser when the connection is slow. I feel it is because sending large data through the REST API breaks something. Even in Postman it sometimes fails to render, even though I can see that 6.5 MB of data was received.
My server is in Node.js, and the returned Content-Type header is application/json.
My question is:
Would it make more sense if I returned a .json file? Would the browser be able to handle it? If yes, then I will download the file and make the front-end changes.
Old URL - http://my-rest-server/data
Proposed Url - http://my-rest-server/data.json
What would the Content-Type be for the proposed URL?
Your client can't expect to receive all of the data at once and still get it fast.
...but you might want to look into sending the data in chunks with streams:
https://medium.freecodecamp.org/node-js-streams-everything-you-need-to-know-c9141306be93
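For illustration, a minimal sketch of chunked JSON output, assuming Express and an in-memory points array (both are assumptions, not part of the question): the array is written to the response piece by piece instead of being serialized into one 6.5 MB string.
const express = require('express');
const app = express();

// Stand-in data; in reality this comes from your data source.
const points = Array.from({ length: 30000 }, (_, i) => ({ id: i, value: Math.random() }));

app.get('/data', (req, res) => {
    res.setHeader('Content-Type', 'application/json');
    res.write('[');
    points.forEach((point, i) => {
        // res.write flushes chunks to the socket instead of buffering the whole body.
        res.write((i > 0 ? ',' : '') + JSON.stringify(point));
    });
    res.end(']');
});

app.listen(3000);
A real implementation should also respect backpressure (the return value of res.write) or use a streaming JSON library, as the linked article describes.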
I am building a little HTML program (well, not so little), and I have options that I would like to permanently write to the client's machine without using cookies. The reason being that cookies expire and have a storage limit of around 4 KB. How would I do that without jQuery?
Client-side cookies can be deleted at any time. Server-set cookies cannot be deleted from the page, though their expiration date can be set. You can't write arbitrary files to a machine from an HTML page, though; no browser allows it. It would take a hack or a virus for such a violation to occur.
You should use HTML5 Local Storage.
To set an item in the storage:
localStorage.setItem("person", "jones");
To get the item:
var val = localStorage.getItem("person");
To remove the item:
localStorage.removeItem("person");
Depending on what browser you use, most have around 10 MB of storage.
The data can be stored for as long as the user or the application allows it.
localStorage can only store values as strings. If you want to store an object, such as JSON data, serialize it first:
var data= { person: "jones" };
localStorage.setItem("jsonPerson", JSON.stringify(data));
To read the JSON back:
var data = JSON.parse(localStorage.getItem("jsonPerson"));
I'm implementing a web service that needs to query a JSON file (size: ~100 MB; format: [{},{},...,{}]) about 70-80 times per second, and the JSON file is updated every hour. "Query a JSON file" means checking whether there's a JSON object in the file that has an attribute with a certain value.
Currently I think I will implement the service in Node.js and import (mongoimport) the JSON file into a collection in MongoDB. When a request comes in, it will query the MongoDB collection instead of reading and searching the file directly. In the Node.js server there should also be a timer service which checks every hour whether the JSON file has been updated, and if it has, "repopulates" the collection with the data from the new file.
The JSON file is retrieved by sending a request to an external API. The API has two methods: methodA lets me download the entire JSON file; methodB is actually just an HTTP HEAD call, which simply tells whether the file has been updated. I cannot get the incrementally updated data from the API.
My problem is with the hourly update. With the service running, requests are coming in constantly. When the timer detects an update to the JSON file, it will download it, and when the download finishes it will try to re-import the file into the collection, which I think will take at least a few minutes. Is there a way to do this without interrupting the queries to the collection?
The above is my first idea for approaching this. Is there anything wrong with the process? Looking the values up in the file directly just seems too expensive, especially with requests coming in about 100 times per second.
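Not from the question itself, but one common way to do the swap without interrupting readers is to import into a staging collection and then rename it over the live one; queries keep hitting the old data until the rename completes. A sketch, with the database, collection, and file names all assumed:
const { MongoClient } = require('mongodb');
const { execFile } = require('child_process');

async function refreshCollection() {
    // 1. Import the freshly downloaded file into a staging collection.
    //    'mydb', 'points_staging', 'points', and 'data.json' are assumptions.
    await new Promise((resolve, reject) => {
        execFile('mongoimport', [
            '--db', 'mydb',
            '--collection', 'points_staging',
            '--drop',
            '--jsonArray',
            '--file', 'data.json'
        ], err => (err ? reject(err) : resolve()));
    });

    // 2. Swap the staging collection in; queries against 'points'
    //    see the old data until the rename completes.
    const client = await MongoClient.connect('mongodb://localhost:27017');
    await client.db('mydb')
        .collection('points_staging')
        .rename('points', { dropTarget: true });
    await client.close();
}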
I am fairly new to JSON and I want to create a choropleth example like this one: http://gabrielflor.it/a-half-decade-of-rising-poverty. Whenever a year is clicked it just reads a different portion of the JSON (I'm assuming). Is this how functionality like this is usually done, to avoid redrawing the whole map and requesting another JSON file? If so, can't these .json files get quite large?
Using JSON is just a way to store the values you need for each year. When you switch to another year, the JS reads the data for the given year from the already-loaded JSON and updates the choropleth. For the example you have provided, here is the JSON used:
http://gabrielflor.it/static/data/saipe.json
This is a good approach since you only have one JSON file with every year you need, and you load it only once. However, since d3 needs the data in this shape, I think you should add another JSON file if you want to provide additional data, as in the gabrielflor example:
http://gabrielflor.it/static/js/d3.poverty-by-county.js?v=121107
He loads JSON like this with d3:
d3.json('../static/data/states.json', function (json) {
    states = json;
});
or
d3.json('../static/data/saipehighlights.json', function (json) {
    saipehighlights = json;
});
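To illustrate the year-switching idea (the names saipe, showYear, and colorScale are assumptions for this sketch, not the site's actual code): the full JSON is loaded once, and clicking a year only re-reads that year's slice and recolors the existing paths instead of redrawing the map.
var saipe; // the full multi-year dataset, loaded once

d3.json('../static/data/saipe.json', function (json) {
    saipe = json;
    showYear(2010);
});

function showYear(year) {
    // Pull the slice for the requested year out of the already-loaded JSON;
    // no new request, no map redraw.
    var yearData = saipe[year];
    d3.selectAll('path.county')
        .style('fill', function (d) {
            // colorScale is an assumed d3 scale mapping values to colors.
            return colorScale(yearData[d.id]);
        });
}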
If you look at the network traffic for the example page you gave (e.g. by using the Chrome Developer Tools), you'll see that the file with the poverty data is quite large, but the mapping data file is even larger. You'll notice that it takes longer for the website to load, but afterwards it runs very smoothly in the client without making any server calls.
The site is just about browsing information and nice design; for that purpose I think a longer load time is quite acceptable if the user experience afterwards is smoother (i.e. the user doesn't have to wait for year data to load).