I have an application that downloads data via NSURLConnection in the form of a JSON object and then displays it to the user. As new data may be created on the server at any point, what is the best way to detect this and download the new data?
At the moment I am planning to have the application download all the data every 30-40 seconds and check it against the current data: if it is the same, do nothing; if it is different, proceed with the alterations. However, this seems wasteful, especially as the data may not change for a while. Is there a more efficient way of updating the application data when new server data is created?
Use ETag if the server supports it.
Wikipedia ETag
"If the resource content at that URL ever changes, a new and different ETag is assigned."
You could send an HTTP HEAD request to the server with the "If-Modified-Since" header set to the time you received the last version. If the server handles this correctly, it should return 304 (Not Modified) while the file is unchanged; as soon as it doesn't return that, you GET the file and proceed as usual.
See HTTP/1.1: Header Field Definitions
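Both answers come down to standard HTTP conditional requests, and in practice you can attach the headers straight to the GET and skip the separate HEAD. A minimal sketch in Ruby (the URL and the cached values are placeholders; NSURLConnection can set the same headers on its own request):

require 'net/http'
require 'time'

uri = URI('https://example.com/data.json')
req = Net::HTTP::Get.new(uri)
req['If-Modified-Since'] = Time.now.httpdate  # placeholder: the time you last fetched
req['If-None-Match']     = '"abc123"'         # placeholder: the ETag you last received

res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }

if res.is_a?(Net::HTTPNotModified)
  # 304 Not Modified: keep using the cached copy
else
  # 200 OK: store res.body plus the new ETag / Last-Modified headers
end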
I'm working on a simple Ruby script with a CLI that will allow me to browse certain statistics inside the terminal.
I'm using the API from the following website: https://worldcup.sfg.io/matches
require 'httparty'
url = "https://worldcup.sfg.io/matches"
response = HTTParty.get(url)
I have two goals in mind. First is to somehow save the JSON response (I'm not using a database) so I can avoid unnecessary requests. Second is to check if new data is available, and if it is, to overwrite the previously saved response.
What's the best way to go about this?
... with a CLI ...
So caching in memory is likely not available to you, since the process exits between runs. In this case you can save the response to a file on disk.
Second is to check if new data is available, and if it is, to overwrite the previously saved response.
The thing is, how can you check whether new data is available without making a request for the data? It isn't possible (given the information you provided). So you can simply keep fetching the data every 5 minutes or so and updating your local file, as in the sketch below.
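A minimal sketch of that approach, reusing the HTTParty call from the question (the file name and the refresh interval are arbitrary choices):

require 'httparty'
require 'json'

CACHE_FILE = 'matches.json'  # arbitrary local file name
MAX_AGE    = 5 * 60          # refresh interval in seconds

def matches
  if File.exist?(CACHE_FILE) && (Time.now - File.mtime(CACHE_FILE)) < MAX_AGE
    # Fresh enough: reuse the saved response and skip the request
    JSON.parse(File.read(CACHE_FILE))
  else
    # Stale or missing: fetch again and overwrite the saved copy
    response = HTTParty.get('https://worldcup.sfg.io/matches')
    File.write(CACHE_FILE, response.body)
    JSON.parse(response.body)
  end
end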
I have a RESTful server that is supposed to return a large JSON object, more specifically an array of objects, to browsers. For example, 30,000 points come to about 6.5 MB.
But I get a content mismatch error in the browser when the connection is slow. I suspect it is because sending large data through the REST API breaks something. Even in Postman it sometimes fails to render, even though I can see that 6.5 MB of data was received.
My server is in Node.js, and the returned Content-Type header is application/json.
My question is:
Would it make more sense to return a .json file? Would the browser be able to handle it? If yes, I will download the file and make the front-end changes.
Old URL - http://my-rest-server/data
Proposed URL - http://my-rest-server/data.json
What would the Content-Type be for the proposed URL?
Your client can't realistically expect all of the data at once and still expect it to arrive fast.
But you might want to look into sending the data in chunks and streams:
https://medium.freecodecamp.org/node-js-streams-everything-you-need-to-know-c9141306be93
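The idea is to write the array out piece by piece instead of serializing all 30,000 points into one 6.5 MB string first. The linked article covers the Node side; here is the shape of it as a minimal Rack app in Ruby, with an invented point structure, just to illustrate the technique:

require 'json'

# Rack calls #each on the response body, so a lazy Enumerator lets us
# emit the JSON array one element at a time instead of all at once.
points_app = lambda do |env|
  body = Enumerator.new do |out|
    out << '['
    30_000.times do |i|
      out << ',' unless i.zero?
      out << { id: i, x: rand, y: rand }.to_json  # invented point shape
    end
    out << ']'
  end
  [200, { 'Content-Type' => 'application/json' }, body]
end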
What is the best practice for keeping data in sync (server <=> browser) after creating a new record?
e.g. After creating a new record (HTTP POST to the server), should I:
Also add the new item to $scope.someArray, or
Fetch the latest data from the server?
It really depends on your application's consistency requirements, and on whether the POST could have affected other things that you might want to refresh.
For the most part, if you have the data, and all you want is the same set of data, save yourself the server call.
BTW, there is a third option that is usually found in REST interfaces:
3 - Send the latest record back in the response to the POST, as sketched below.
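Server-side, that third option just means responding to the POST with the record as it was actually persisted (IDs, defaults, and timestamps included), so the client can push it straight into $scope.someArray without a second fetch. A hypothetical Rails-style sketch (the Item model and its fields are made up):

class ItemsController < ApplicationController
  def create
    # Respond with the persisted record so the client sees exactly
    # what the server stored, including generated fields.
    item = Item.create!(params.require(:item).permit(:name))
    render json: item, status: :created
  end
end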
I have been considering caching my JSON on Amazon CloudFront.
The issue with that is it can take 15 minutes to manually clear that cache when the JSON is updated.
Is there a way to store a simple JSON value in a CDN-like HTTP cache that -
does not touch an application server (Heroku) after initial generation
allows me to expire the cache instantly
Update
In response to AdamKG's point:
If it's being "updated", it's not static :D Write a new version and
tell your servers to use the new URL.
My actual idea is to cache a new CloudFront URL every time an HTML page changes. That was my original focus.
The reason I want the JSON is to store the version number for the latest CloudFront URL. That way I can make an AJAX call to discover which version to load, then a second AJAX call to actually load the content. This way I never need to expire CloudFront content; I just redirect the AJAX call that loads it.
But then I have the issue of the JSON itself needing to be cached. I don't want people hitting the Heroku dynos every time they want to see the single JSON version number. I know Memcached and Rack can help me speed that up, but it's a problem I just don't want to have.
Some ideas I've had:
Maybe there is a third-party service, similar to a Memcached db, that allows me to expose a value at a JSON URL? That way my dynos are never touched.
Maybe there is an alternative to CloudFront that allows quicker manual expiration? I know that kind of defeats the nature of caching, but maybe there are more intermediary services, like a Varnish layer or something.
One method is to use asset expiration similar to the way that Rails static assets are expired. Rails adds a hash signature to filenames, so something like application.js becomes application-abcdef1234567890.js. Then, each time a user requests your page, if application.js has been updated, the script tag has the new address.
Here is how I envision you doing this:
User → CloudFront (CDN) → Your App (Origin)
User requests http://www.example.com/. The page has meta tag
<meta content="1231231230" name="data-timestamp" />
based on the last time you updated the JSON resource. This could be generated from something like <%= Widget.order(updated_at: :desc).pluck(:updated_at).first.to_i %> if you are using Rails.
Then, in your application's JavaScript, grab the timestamp and use it for your JSON url.
var timestamp = $('meta[name=data-timestamp]').attr('content');
$.get('http://cdn.example.com/data-' + timestamp + '.json', function(data, textStatus, jqXHR) {
blah(data);
});
The first request to CloudFront will hit your origin server at /data-1231231230.json, which can be generated and cached forever. Each time your JSON is updated, the user gets a new URL to query on the CDN.
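The origin side of that can be a single action that renders the JSON with a far-future cache header, since a data change always produces a brand-new URL. A sketch, with DataController and Widget as stand-ins for your real names:

class DataController < ApplicationController
  def show
    # Safe to cache forever: an updated timestamp means a new URL,
    # so this exact URL's content never changes.
    expires_in 1.year, public: true
    render json: Widget.all
  end
end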
Update
Since you mention that the actual page is what you want to cache heavily, you are left with a couple options. If you really want CloudFront in front of your server, your only real option would be to send an invalidation request every time your homepage updates. You can invalidate 1,000 times per month for free, and $5 per 1,000 after that. In addition, CloudFront invalidations are not fast, and you will still have a delay before the page is updated.
The other option is to cache your content in Memcached and serve it from your dynos. I will assume that you are using Ruby on Rails or another Ruby framework based on your question history (but please clarify if you are not). This entails getting Rack::Cache installed. The instructions on Heroku are for caching assets, but this will work for dynamic content as well. Next, you would invalidate the cached page each time it is updated. Yes, your dynos will handle some of the load, but it will be a simple Memcached lookup and response.
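A simpler variant, if a short staleness window is acceptable, skips explicit invalidation entirely: Rack::Cache honors standard Cache-Control headers, so marking the page public with a small max-age lets repeat hits be served from Memcached. A hypothetical sketch:

class PagesController < ApplicationController
  def home
    # Sets Cache-Control: public, max-age=300, which Rack::Cache uses
    # to store and serve the rendered page for five minutes.
    expires_in 5.minutes, public: true
    render :home
  end
end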
Your server layout would look like:
User → CloudFront (CDN) → Rack::Cache → Your App (Origin) on cdn.example.com
User → Rack::Cache → Your App (Origin) on www.example.com
When you serve static assets like your images, CSS, and JavaScript, use the cdn.example.com domain. This will route requests through CloudFront and they will be cached for long periods of time. Requests to your app will go directly to your Heroku dyno, and the cacheable parts will be stored and retrieved by Rack::Cache.
There is a web service that allows me to go to a URL, with my API key, and request a page of data. The data is returned as JSON. The JSON is well-formed; I ran it through JSONLint and confirmed it's OK.
What I would like to do is retrieve the JSON data from within MS Access (2003 or 2007), if possible, and build a table from that data (first time through), then append/update the table on subsequent calls to that URL. I would settle for a "pre-step" where I retrieve this information via another means. Since I have an API key in the URL, I do not want to do this server-side. I would like to keep it all within Access and run it on my PC at home (it's for personal use anyway).
If I have to use another step before the database load, then JavaScript? But I don't know that very well. I don't even really know what JSON is, other than what I have read on Wikipedia. The URL looks similar to:
http://www.SomeWebService.com/MyAPIKey?p=1&s=50
where: p = page number
s = records per page
AccessDB is a JavaScript lib for MS Access; a quick page search says it plays nicely with JSON, and you can do input/output with it. Boo-ya.
http://www.accessdb.org/
EDIT:
dead URL; Wayback Machine FTW:
http://web.archive.org/web/20131007143335/http://www.accessdb.org/
also sourceforge
http://sourceforge.net/projects/accessdb/