Our architecture uses a Push Engine to send data to the browser.
Could anybody please tell me what the use of the Push Engine is?
(Why is it required, when the same thing can be achieved with normal AJAX programming?)
Please guide me.
Let's say you're visiting a website that is updated continuously. Your browser needs to keep refreshing the data you're viewing, which means it has to keep communicating with the server to get the updates.
You can use AJAX to make a request every few seconds, fetching more data from the server each time. The problem is that you end up making a lot of AJAX calls, opening a connection (a socket) for each one, which eventually becomes very slow. And if the interval between requests is large, there will be a delay between an update on the server and that update appearing in your browser.
To solve that, we can manipulate the HTTP calls: keep the request (the connection) open and continuously send data over it. That way, when the server wants to send something to the client (the browser), there is already an open connection, and it doesn't need to wait for the browser's next AJAX call. This technique is commonly called long polling.
HTTP servers have a timeout on requests, so just before a request times out, the browser closes it and makes a new one.
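As a rough illustration, here is a minimal long-polling loop in TypeScript; the /updates endpoint and the response shape are assumptions for this sketch:

```typescript
// Minimal long-polling loop: the server holds each request open until it
// has data (or is about to time out), and we reconnect immediately after.
// The /updates endpoint and the JSON shape are assumptions for this sketch.
async function pollForUpdates(url: string): Promise<void> {
  while (true) {
    try {
      // The server deliberately delays this response until data is available.
      const response = await fetch(url);
      if (response.ok) {
        const update = await response.json();
        console.log("got update:", update);
      }
    } catch {
      // Network error or server-side timeout: back off briefly, then retry.
      await new Promise<void>((resolve) => setTimeout(resolve, 1000));
    }
    // Loop around and open the next request right away, so there is
    // almost always an open connection the server can push data into.
  }
}

void pollForUpdates("/updates");
```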
Another (better) method is using the XMPP protocol, which is used in chats like Facebook's and MSN's.
AJAX is a pull method - it requires the client to connect to the server. If you have some information that you want to display live - for example a live score in a football game - the AJAX call has to be made at regular intervals - even when there is no data waiting on the server. A Push Engine is the reverse - the client and server maintain a connection and the server pushes data when there is data to be sent.
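For completeness, browsers also ship a built-in push mechanism along these lines: Server-Sent Events. A minimal sketch for the live-score example (the /scores endpoint and message format are assumptions):

```typescript
// Server-Sent Events: the browser keeps one connection open and the server
// pushes a message only when there is data, e.g. when a goal is scored.
// The /scores endpoint and the message format are assumptions.
const source = new EventSource("/scores");

source.onmessage = (event: MessageEvent) => {
  // Each pushed message arrives here with no polling on our side.
  console.log("live score update:", event.data);
};

source.onerror = () => {
  // The browser reconnects automatically; log for visibility.
  console.warn("connection lost, browser will retry");
};
```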
I am currently building a Chrome extension that lets users auto-register for courses on a particular website once registration opens. The registration process is just a simple fetch POST request with an authentication header.
Now, this already works using the chrome.alarms API while the browser is open, but for obvious reasons I would want it to also work once the user closes the browser. Do you have any ideas how to do this? I really want to avoid saving user data externally.
If this is impossible, my idea would be to send the registration fetch to an external server (maybe even one hosted on a Raspberry Pi? Other ideas?) and have it execute the request once registration opens.
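A minimal sketch of that external-server idea in TypeScript (Node 18+, which has a global fetch); the registration URL, auth token, payload, and opening time are all hypothetical placeholders you would supply:

```typescript
// Sketch: run the registration POST from a small always-on server
// (e.g. a Raspberry Pi) instead of the browser. REGISTRATION_URL,
// AUTH_TOKEN, the payload, and opensAt are placeholder assumptions.
const REGISTRATION_URL = "https://example.com/api/register"; // placeholder
const AUTH_TOKEN = process.env.AUTH_TOKEN ?? "";              // supplied by the user
const opensAt = new Date("2024-09-01T08:00:00Z");             // placeholder

async function register(): Promise<void> {
  const response = await fetch(REGISTRATION_URL, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${AUTH_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ courseId: "CS101" }), // placeholder payload
  });
  console.log("registration response:", response.status);
}

// Fire once when registration opens; the pending timer keeps the process alive.
setTimeout(register, Math.max(0, opensAt.getTime() - Date.now()));
```

Note that this does require the user's auth token to live on the server, so it trades away the "no external user data" goal for reliability.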
I have a Zabbix setup that monitors my servers. I also use Cloudflare. I have set up monitoring for all the individual servers, but I would also like to monitor the response time for data served via Cloudflare.
It would be a simple web page load that checks the response time, but of course I can't put a Zabbix agent on Cloudflare!
Is there a way to do this?
Zabbix "web scenario" has
average download speed per second for all steps of whole scenario
download speed per second
response time
response code
see https://www.zabbix.com/documentation/current/manual/web_monitoring
There is also another web check, called "HTTP agent", to retrieve instead
page headers and content
see https://www.zabbix.com/documentation/current/manual/config/items/itemtypes/http
both are executed by Zabbix Server / Proxy, no agent needed.
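If you would rather create the HTTP agent item programmatically than click through the UI, a hedged sketch via the Zabbix JSON-RPC API is below. The API URL, token, host ID, and page address are placeholders for your setup; type 19 is the HTTP agent item type in current Zabbix versions, and Bearer-token auth assumes Zabbix 5.4 or later:

```typescript
// Sketch: create an "HTTP agent" item via the Zabbix JSON-RPC API, so the
// Zabbix server fetches the Cloudflare-served page itself (no agent needed).
// ZABBIX_URL, the token, HOST_ID, and the target URL are placeholders.
const ZABBIX_URL = "https://zabbix.example.com/api_jsonrpc.php"; // placeholder
const API_TOKEN = process.env.ZABBIX_TOKEN ?? "";
const HOST_ID = "10084"; // placeholder host ID

async function createHttpAgentItem(): Promise<void> {
  const response = await fetch(ZABBIX_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_TOKEN}`, // API token auth (Zabbix >= 5.4)
    },
    body: JSON.stringify({
      jsonrpc: "2.0",
      method: "item.create",
      id: 1,
      params: {
        hostid: HOST_ID,
        name: "Cloudflare page check",
        key_: "cloudflare.page.check",
        type: 19,        // HTTP agent item type
        value_type: 4,   // text (raw page body)
        url: "https://www.example.com/", // the Cloudflare-served page
        delay: "1m",
      },
    }),
  });
  console.log(await response.json());
}

void createHttpAgentItem();
```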
I am designing a web app where the server generates batches of data, and the client periodically checks whether new batches of data are available for download. The way I am doing this is that whenever the server generates a new batch of data, it is available at a particular URL. The client periodically checks the URL to see whether a new batch is available for it. (I am currently not using web sockets.) This batch of data is in the format of a JSON object.
Since I have very little web experience, I'm a bit confused about what to do when the client visits the URL. How should the client know whether the batches of data at the URL are new (in which case the client should download them) or old (in which case the client should ignore them, since it has already downloaded them in the past)?
Also, there may be multiple clients working with the same server, so the solution should work regardless of the number of clients.
Include a timestamp property (set by a server-side script) in the JSON returned by the server, and change its value every time you update the data on the server. The client can then easily detect a change by comparing that timestamp with the one from the last batch it downloaded.
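A minimal client-side sketch of that check; the /batch endpoint and the { timestamp, data } shape are assumptions for the example:

```typescript
// Poll the batch URL and only process a batch whose timestamp is newer
// than the last one we handled. The /batch endpoint and the
// { timestamp, data } response shape are assumptions.
let lastSeen = 0;

async function checkForNewBatch(url: string): Promise<void> {
  const response = await fetch(url);
  const batch: { timestamp: number; data: unknown } = await response.json();

  if (batch.timestamp > lastSeen) {
    lastSeen = batch.timestamp;
    console.log("new batch:", batch.data); // download / process it
  }
  // Otherwise it is a batch we have already seen: ignore it.
}

// Poll every 30 seconds. Each client keeps its own lastSeen, so this
// works no matter how many clients talk to the same server.
setInterval(() => checkForNewBatch("/batch"), 30_000);
```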
I have developed a desktop application using HTML5 and node-webkit.
I would like to track parts of the app, such as how long it's used, clicks, etc.
I would like the analytics system to work both online and offline (storing data until it's online again).
Is there anything that I could use to do this?
The Google Measurement Protocol allows you to track anything that can send an HTTP request. You need to generate a unique client ID to group pageviews into sessions (that part is usually done by the JavaScript tracker, which does not help you here), and you can then choose between various interaction types, adding their related data as parameters in a request to the Google Analytics server.
As far as offline capabilities go, there is a "queue time" parameter that allows you to send delayed calls to GA. However, per the documentation, that delay is four hours at most (it is intended for smartphones and tablets that temporarily lose their connection, rather than for working permanently offline).
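As a rough sketch of what such a hit looks like (the tracking ID and client ID below are placeholders; qt is the queue-time delay in milliseconds):

```typescript
// Sketch of a Measurement Protocol (v1) pageview hit sent directly over HTTP.
// UA-XXXXX-Y is a placeholder property ID; cid must be a client ID you
// generate and persist yourself. qt is the "queue time" delay in ms.
const params = new URLSearchParams({
  v: "1",                      // protocol version
  tid: "UA-XXXXX-Y",           // placeholder tracking ID
  cid: "35009a79-1a05-49d7-b876-2b884d0f825b", // self-generated client ID
  t: "pageview",               // hit type
  dp: "/app/start",            // page path being tracked
  qt: String(5 * 60 * 1000),   // hit was queued offline for 5 minutes
});

fetch("https://www.google-analytics.com/collect", {
  method: "POST",
  body: params.toString(),
  headers: { "Content-Type": "application/x-www-form-urlencoded" },
}).then((res) => console.log("GA responded with", res.status));
```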
In the end it depends on what data you need - you might just as well send the calls to your own server and log them to a CSV file, then feed that to Klipfolio or some other dashboard solution (or even use Excel if you expect a low data volume).
I'm trying to develop a test framework for some ActionScript code we're developing (Flex 3.5). What's happening is this:
As part of a Web Analytics function we are calling a track method in a class, providing the relevant information as part of the call. This method is provided in a library (SWC), and we have no access to the code.
Ultimately the track method sends an outgoing http request to the tracking server. We can see this quite happily in HttpFox.
I was hoping to be able to capture this outgoing request and interrogate it in my test class, allowing us to a) run tests in a more standalone fashion, and b) programmatically determine that the correct information is being tracked.
No problem - just run this developer tool, which displays all the requests leaving your machine:
http://www.charlesproxy.com/
Unless you're going to use a sniffing tool, which would probably be hard to use for a programmatic evaluation, I would recommend using a proxy to channel your request. You could let the track method send the request to a PHP script on the proxy server, have it evaluate the request content, and then forward it to the actual tracking server. I suppose with a tracking system you won't need to worry about the response, so it shouldn't be too hard to implement.
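A minimal sketch of such a logging-and-forwarding proxy (here in TypeScript/Node rather than PHP; TRACKING_HOST and the port are placeholder assumptions):

```typescript
import http from "node:http";

// Tiny logging-and-forwarding proxy: point the tracker at this server
// during tests, capture the call for assertions, then relay it to the
// real tracking server. TRACKING_HOST is a placeholder.
const TRACKING_HOST = "tracking.example.com"; // placeholder

http
  .createServer((req, res) => {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      // The test framework can interrogate the captured request here.
      console.log("captured:", req.method, req.url, body);

      // Relay the original request to the real tracking server.
      const forward = http.request(
        {
          host: TRACKING_HOST,
          path: req.url,
          method: req.method,
          headers: { ...req.headers, host: TRACKING_HOST },
        },
        (upstream) => {
          res.writeHead(upstream.statusCode ?? 200, upstream.headers);
          upstream.pipe(res);
        }
      );
      forward.end(body);
    });
  })
  .listen(8080, () => console.log("test proxy listening on :8080"));
```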
You could run a web server on localhost (or anywhere, really) and just make sure the DNS entry the code is trying to access points to the server you're running.