Client gets new data from server - JSON

I am designing a web app where the server generates batches of data, and the client periodically checks whether new batches of data are available for download. The way I am doing this is that whenever the server generates a new batch of data, it is available at a particular URL. The client periodically checks the URL to see whether a new batch is available for it. (I am currently not using web sockets.) This batch of data is in the format of a JSON object.
Since I have very little web experience, I'm a bit confused about what to do when the client visits the URL. How should the client know whether the batches of data at the URL are new (in which case the client should download them) or old (in which case the client should ignore them, since it has already downloaded them in the past)?
Also, there may be multiple clients working with the same server, so the solution should work regardless of the number of clients.

Include a timestamp property (set by a server-side script) in the JSON returned by the server. Change the value of the timestamp property every time you update the data on the server. The client can then detect a change easily by checking that modification date against the one from its last download.
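A minimal client-side sketch of that approach, assuming a hypothetical batch URL and a timestamp field named as shown:

```typescript
// Hypothetical sketch: poll the batch URL and download only when the
// server-side timestamp has changed. The URL and field names are assumptions.
const BATCH_URL = "https://example.com/latest-batch.json";

let lastTimestamp: string | null = null;

async function checkForNewBatch(): Promise<void> {
  const response = await fetch(BATCH_URL);
  const batch = await response.json();

  // "timestamp" is the property the server rewrites for every new batch.
  if (batch.timestamp !== lastTimestamp) {
    lastTimestamp = batch.timestamp;
    handleNewBatch(batch.data); // new batch: hand it to the app
  }
  // Otherwise the batch was already downloaded earlier and is ignored.
}

function handleNewBatch(data: unknown): void {
  console.log("New batch received", data);
}

// Poll every 30 seconds.
setInterval(checkForNewBatch, 30_000);
```

Since each client remembers the last timestamp it saw, the same mechanism works regardless of how many clients poll the server.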


ASP.NET Web API HTTP response data size

I have an ASP.NET Web API that gets a list of data from the database with a very heavy SQL query (using a stored procedure) and then serializes it to JSON. The result can sometimes contain more than 100,000 rows, which exceeds the 4 MB maximum size of the HTTP JSON response. I've been trying to use pagination to limit the result size, but it hurts performance because every click on the next page triggers the heavy SQL command again. If I don't use pagination, the result is sometimes larger than 4 MB and my client-side grid won't render properly. Since I don't have a way to check the JSON data size before sending it back to the client from the Web API, my questions are:
1. Is there any way to check the data size in ASP.NET Web API before sending it back to the client? For example, if it's more than 4 MB, send a response saying "please narrow your date range to return less data". Would this be a good idea in application design?
2. Is there any way to save the entire result in a cache somewhere with ASP.NET Web API, so that every time the user paginates, the result comes from the cache instead of the database?
3. Is there any way to store the entire result in a cache or a temp file on the client side (using Angular 5), so that paginating does not trigger another HTTP call to the Web API? (A sketch of this option follows below.)
I would be more than happy to hear about any experience or suggestions! Thank you very much!
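Regarding the third question, here is a minimal sketch of client-side caching with local pagination (plain TypeScript rather than Angular-specific code; the endpoint name and row shape are assumptions):

```typescript
// Hypothetical sketch: fetch the full result once, cache it in memory on the
// client, and paginate locally so "next page" never hits the Web API again.
interface Row { [column: string]: unknown; }

class CachedResultService {
  private cache: Row[] | null = null;

  constructor(private readonly url: string) {}

  // First call downloads everything; later calls reuse the in-memory copy.
  private async loadAll(): Promise<Row[]> {
    if (this.cache === null) {
      const response = await fetch(this.url);
      this.cache = (await response.json()) as Row[];
    }
    return this.cache;
  }

  // Local pagination: slice the cached array instead of issuing a new request.
  async getPage(pageIndex: number, pageSize: number): Promise<Row[]> {
    const all = await this.loadAll();
    return all.slice(pageIndex * pageSize, (pageIndex + 1) * pageSize);
  }
}

// Usage: the endpoint name is an assumption.
const service = new CachedResultService("/api/report/all");
service.getPage(0, 50).then(rows => console.log(rows));
```

Note that this trades memory for round trips and still requires the full result to be downloaded once, so it only helps when that initial response fits within the size limit.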

How to send back data to a server in a PWA website

I've searched and I can only find tutorials on pulling info from a server to update the latest info of my PWA app using JSON. But I can't find any way or example of sending data back to a server, for instance to keep a database updated and display that to all users of the PWA.
For example, I have a PWA that lets me log in (client-server communication), and then it displays a list of contacts that were stored in a database. I can delete, modify or add new users to this list from my PWA app, and after doing that, they'll be updated in my server database, so if my friend Paul wants to check the updated list from his account, he'll see the new changes.
How can I do that? Which languages would I have to use: PHP and JavaScript (AJAX)? What is the most fluid and optimized way to do it for a Progressive Web App?
I guess you are trying to store user changes back on the server (a web service and then the database).
You have to make an AJAX call to your web service and pass the data that needs to be stored in the database.
Here is an example.
https://www.w3schools.com/xml/xml_http.asp
Depending on the framework you are using, you might have more options for calling a web service. For example, here is one for Angular: https://www.w3schools.com/angular/angular_http.asp
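As a concrete illustration, a minimal sketch using the Fetch API (the endpoint and payload shape are assumptions; the same call could be made with XMLHttpRequest as in the linked examples):

```typescript
// Hypothetical sketch: send a modified contact back to the server so the
// change is persisted in the database and visible to every user.
interface Contact {
  id: number;
  name: string;
  phone: string;
}

async function saveContact(contact: Contact): Promise<void> {
  // "/api/contacts" is an assumed server-side endpoint (e.g. a PHP script
  // or any other web service) that writes the contact to the database.
  const response = await fetch("/api/contacts", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(contact),
  });

  if (!response.ok) {
    throw new Error(`Save failed: HTTP ${response.status}`);
  }
}

// Usage
saveContact({ id: 7, name: "Paul", phone: "555-0100" })
  .then(() => console.log("Contact stored on the server"))
  .catch(err => console.error(err));
```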

Add analytics to a desktop application

I have developed a desktop application using HTML5 and node-webkit.
I would like to track parts of the app, such as how long it's used, clicks, etc.
I would like the analytics system to work both online and offline (storing data until it's back online).
Is there anything that I could use to do this?
The Google Measurement Protocol allows you to track anything that can send an HTTP request. You need to generate a unique client id to group pageviews into sessions (that part is usually done by the JavaScript tracker, which does not help you here) and can then choose between various interaction types, with their related data added as parameters in a request to the Google Analytics server.
As far as offline capabilities go, there is a "queue time" parameter that allows you to send delayed calls to GA. However, per the documentation that delay is 4 hours at most (intended for smartphones and tablets that temporarily lose connection rather than for working permanently offline).
In the end it depends what data you need - you might just as well send calls to your own server and log them in a CSV file and feed that to Klipfolio or some other dashboard solution (or even use Excel if you expect a low data volume).
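A minimal sketch of such a hit to the (Universal Analytics) Measurement Protocol, including the queue time parameter for delayed sends; the tracking id, client id and event names are placeholders:

```typescript
// Hypothetical sketch: send an event hit to Google Analytics via the
// Universal Analytics Measurement Protocol, with "qt" (queue time) set to
// how long the hit sat in the offline queue.
const GA_ENDPOINT = "https://www.google-analytics.com/collect";
const TRACKING_ID = "UA-XXXXXX-Y"; // placeholder property id
const CLIENT_ID = "35009a79-1a05-49d7-b876-2b884d0f825b"; // generated once per install

async function sendEvent(category: string, action: string, queuedAtMs: number): Promise<void> {
  const params = new URLSearchParams({
    v: "1",            // protocol version
    tid: TRACKING_ID,  // GA property
    cid: CLIENT_ID,    // anonymous client id, groups hits into sessions
    t: "event",        // hit type
    ec: category,      // event category
    ea: action,        // event action
    qt: String(Date.now() - queuedAtMs), // queue time: delay since the interaction happened
  });

  await fetch(GA_ENDPOINT, { method: "POST", body: params });
}

// Usage: an interaction recorded a minute ago while offline, sent now.
sendEvent("ui", "button_click", Date.now() - 60_000);
```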

Secure iOS to online database connection

I have an iPhone application that needs to collect data from an online MySQL database. I've written a PHP web service, so I collect the data as JSON. The problem is that right now anyone who goes to the URL can see the data. How do I secure the data transfer properly?
Thanks for your suggestions.
Typically, if you are showing data private to a particular user, each user will have an account (user id and password). The app will pass the user's credentials to the server before the server will provide the user's data.
You can also do something similar using SSO integration, or OAuth (ala Facebook).
In some cases, your app may only pass the username/password on the initial call and receive a session ID, which the app passes on remaining calls. This allows the server to store session data.
Even if the data isn't private to a particular user, you can use accounts to restrict access and privileges for a publicly reachable web API.
In all of the above cases encryption such as SSL (HTTPS) must be used to protect the authentication mechanisms and data transfer.
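A minimal sketch of that session pattern from the client's point of view (written in TypeScript for brevity; the endpoints and field names are assumptions, and a real iOS app would make the equivalent HTTPS calls from Swift):

```typescript
// Hypothetical sketch: log in once over HTTPS, receive a session id, and
// pass it on every subsequent request so the server can authorize the user.
const BASE_URL = "https://example.com/api"; // assumed HTTPS endpoint

let sessionId: string | null = null;

async function login(username: string, password: string): Promise<void> {
  const response = await fetch(`${BASE_URL}/login`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ username, password }),
  });
  if (!response.ok) throw new Error("Login failed");
  // The server returns a session id after verifying the credentials.
  sessionId = (await response.json()).sessionId;
}

async function getPrivateData(): Promise<unknown> {
  // All later calls carry the session id instead of the raw credentials.
  const response = await fetch(`${BASE_URL}/data`, {
    headers: { "X-Session-Id": sessionId ?? "" },
  });
  if (!response.ok) throw new Error("Not authorized");
  return response.json();
}
```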
I'm assuming your data is public for all users of your app, in other words, you don't want to implement a login mechanism for your users. If you just want to make sure you return the data only to users of your app and not to anyone who happens to enter the right URL in their browser, you will need to sign your requests, so that only requests from your app are accepted by your server.
I use a secret key that my app uses to create a hash/digest of the request, which the server verifies (it knows the secret key as well). I also make sure requests cannot be replayed if they are intercepted, by adding a timestamp and a nonce. The timestamp is checked to be within 10 minutes of the server's timestamp (relaxed sync) and the nonce must be unique (the server keeps the last 10 minutes of nonces). This way no one can replay the same request; the server will just return an error if they try.
This post explains how to sign your requests in a bit more detail:
http://www.naildrivin5.com/blog/2008/04/21/rest-security-signing-requests-with-secret-key-but-does-it-work.html
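A minimal sketch of that signing scheme (TypeScript/Node for brevity; the header names, canonical string and secret are assumptions, and the iOS client would compute the same HMAC in Swift):

```typescript
// Hypothetical sketch of the scheme described above: the client adds a
// timestamp and a nonce, then signs the request with a shared secret key.
// The server recomputes the same HMAC, checks the timestamp window, and
// rejects nonces it has already seen.
import { createHmac, randomUUID } from "node:crypto";

const SECRET_KEY = "replace-with-shared-secret"; // known only to app and server

function signRequest(method: string, path: string, body: string): Record<string, string> {
  const timestamp = Math.floor(Date.now() / 1000).toString(); // seconds since epoch
  const nonce = randomUUID();                                  // unique per request

  // Canonical string covering everything the server will verify.
  const payload = [method, path, body, timestamp, nonce].join("\n");
  const signature = createHmac("sha256", SECRET_KEY).update(payload).digest("hex");

  // Sent as headers; the server rebuilds the payload, compares signatures,
  // checks |now - timestamp| <= 10 minutes, and that the nonce is unseen.
  return {
    "X-Timestamp": timestamp,
    "X-Nonce": nonce,
    "X-Signature": signature,
  };
}

// Usage
const headers = signRequest("GET", "/api/data", "");
console.log(headers);
```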

Why is a Push Engine Required?

Our architecture uses a Push Engine to send data to the browser.
Could anybody please tell me what the use of a Push Engine is?
(Why is it required, when the same thing can be achieved using normal AJAX programming?)
Please guide me.
Let's say you're visiting a website, and the website is updated continuously. Your browser needs to keep updating the data that you're viewing, meaning that the browser needs to keep communicating with the server and getting the updates.
You can use AJAX to make requests every few seconds, each time fetching more data from the server. The problem is that you need to make a lot of AJAX calls, you open a connection (a socket) for each one, and eventually it becomes a very slow process. If the interval between the requests is large, you will have a delay between the updates on the server and the updates in your browser.
To solve that, we can manipulate the HTTP calls - keep the request (the connection) open and continuously send data. That way, when the server wants to send something to the client (browser), there's an open connection, and it doesn't need to wait for the browser's next AJAX call.
HTTP servers have a timeout on requests, so just before the request times out, the browser closes it and makes a new one.
Another (better) method is using the XMPP protocol, which is used in chats like Facebook's and MSN's.
AJAX is a pull method - it requires the client to connect to the server. If you have some information that you want to display live - for example a live score in a football game - the AJAX call has to be made at regular intervals - even when there is no data waiting on the server. A Push Engine is the reverse - the client and server maintain a connection and the server pushes data when there is data to be sent.
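As a rough illustration of the long-held-request idea described above, here is a minimal long-polling client sketch (the endpoint name, status handling and retry delay are assumptions):

```typescript
// Hypothetical sketch of long polling: the client keeps one request open;
// the server answers only when there is new data (or when it is about to
// time out), and the client immediately reconnects.
const PUSH_URL = "/api/updates"; // assumed endpoint that holds the request open

async function longPoll(onUpdate: (data: unknown) => void): Promise<void> {
  while (true) {
    try {
      // The server holds this request until data is available or a timeout.
      const response = await fetch(PUSH_URL);
      if (response.status === 200) {
        onUpdate(await response.json()); // push-style delivery of new data
      }
      // On any other status (e.g. a timeout), just loop and reconnect.
    } catch {
      // Network error: back off briefly before reconnecting.
      await new Promise(resolve => setTimeout(resolve, 5_000));
    }
  }
}

// Usage: e.g. updating a live football score as soon as the server has one.
longPoll(update => console.log("Live update:", update));
```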