Here is my understanding of a SPA (single-page application).
It's an application where the user feels they stay on the same page in response to any user-triggered event.
Basically, the whole page is never submitted; instead, an AJAX request is fired in the background for the event, and the response is rendered on the same page.
So it's fast (the whole page is not submitted) and more responsive (there is no blank screen, even momentarily, while the AJAX request is in progress, which means there is still a rendered screen the user can act on).
So, as per my understanding, any web application that is completely AJAX-based and nowhere submits the whole page (e.g. via document.form.submit, which is synchronous) is a single-page application, whether or not it uses controllers, routing, etc.
Is that correct?
I have gone through the SPA wiki and got the feeling that an application that is completely AJAX-based may not be a SPA if it is not using controllers, routing, etc.
@MSach, what you wrote about SPAs is quite correct. In addition:
First of all, a SPA is a web app that fits on a single web page, providing a great user experience and loading everything that is needed on that first page.
A SPA is also ideal for a rich user experience that keeps the user really engaged by keeping the pages moving fluently, thanks to client-side navigation. Just as important is reduced round-tripping: fewer trips between the client and server, and fewer postbacks of the entire page.
A SPA often persists important state on the client too, whether in a cache, in the browser's memory, or in local storage. You can load what you need, both data and views, on the initial load, but only the pieces the user is absolutely going to use right away, and then load the rest on demand asynchronously. As the user goes to other parts of the application, you can progressively download other features and data as needed.
Apps like Facebook and Gmail both have
SPA characteristics and there are some other apps too that are SPAs.
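The AJAX pattern described above can be sketched in a few lines of browser JavaScript. This is only an illustration: the /api/messages endpoint and the element IDs load-btn and content are hypothetical.

```javascript
// Pure helper: turn a JSON payload into the HTML for one "view".
// Kept separate from the DOM code so the rendering logic is easy to test.
function renderView(messages) {
  return messages.map(m => "<p>" + m + "</p>").join("");
}

if (typeof window !== "undefined") {
  // Instead of submitting the whole page, handle the click ourselves.
  document.getElementById("load-btn").addEventListener("click", async (e) => {
    e.preventDefault();                       // no full-page submit
    const res = await fetch("/api/messages"); // AJAX request in the background
    const messages = await res.json();
    // Render the response into the current page; the user never leaves it.
    document.getElementById("content").innerHTML = renderView(messages);
  });
}
```

The user keeps a fully rendered screen the whole time; only the content element changes when the response arrives.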
First off - our needs don't require any sort of interaction with the web view, we simply want to display content from the web.
Situation: we want to make a glorified slide show that pulls in web content. We were intending on having a list of templates shown to the user on the TV app, they can pick one, and then the appropriate URL is hit for that template (which would live on the web). The web portion would handle things from there, navigating to a new URL every X seconds (which just displays the next set of data in the same template)
Having learned that web views are restricted, and that you can't sneak an app submission past Apple while utilizing a web view, we've hit a dead end. Having hundreds of hard-coded templates doesn't seem maintainable for us, plus we can't deliver a new template to users (er.. clients) without going through the potentially lengthy app approval process every time.
Does anyone have any other bright ideas for storing templates on the web, maybe even in a data format, that we can download and interpret/parse in-app to know where to position image views, labels, etc?
Thanks for any suggestions!
I know I'm a little late to the party, but I wanted to answer your secondary question. It is absolutely possible to load TVML files from the web without updating the tvOS submitted app. We have a published tvOS app that is currently doing this.
As I recall, the tvOS app we submitted is very basic, with the only real change being the TVBaseURL. We placed all of our TVJS files on Google App Engine and the TVML files are in the GAE Storage bucket, so the TVBaseURL points to the public URL for those files. I have a cron job that reads an API and dynamically builds the TVML files several times a day. When the tvOS app runs, it loads the files from the GAE Storage bucket. The real benefit is that I can update the TVML layout and add or delete screens as needed without ever needing to go through the app submission process.
I have the following problem: there is a clock on my web page, and if a user goes to another page and then presses Back, the clock is desynchronized (it should show the server's time). I have mechanisms to synchronize the clock, but I'm wondering how to detect when to fire them (they should be used as rarely as possible, as they are expensive). So, is there a widely used way to detect whether the user is viewing a cached version of the page? One way I thought of is looking at the user's local time: if there is a sudden jump between the time registered previously and the time registered now, I can fire the mechanism. But is there a simpler, generic way (for example, maybe the browser sends some message to the web page to communicate this)?
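There is, in fact, a browser signal for exactly the back/forward case described here: the pageshow event fires with its persisted flag set to true when the page is restored from the back/forward cache rather than freshly loaded. A minimal sketch, where resyncClock() stands in for the question's (hypothetical) expensive synchronization routine:

```javascript
// Pure helper: decide whether a pageshow event means a cache restore.
// Kept separate from the event wiring so the logic is easy to test.
function restoredFromCache(persisted) {
  return persisted === true;
}

if (typeof window !== "undefined") {
  window.addEventListener("pageshow", (event) => {
    // event.persisted is true when the page came back from the
    // back/forward cache (e.g. the user pressed Back).
    if (restoredFromCache(event.persisted)) {
      resyncClock(); // hypothetical: the expensive server-time sync
    }
  });
}
```

This way the expensive sync only runs on cache restores, not on ordinary page loads.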
It sounds like you're allowing the client or server to do a full page cache, which you don't want due to parts of the page relying on the current server date time.
You certainly want to remove client caching via your response headers on the main response, especially if you have any sort of authentication that you need to check before rendering the page; you will want to make sure the client is still logged in, for example.
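For example, the usual response headers to disable client-side caching of the main document look like this (a sketch; the exact set your framework emits may differ):

```http
Cache-Control: no-store, no-cache, must-revalidate
Pragma: no-cache
Expires: 0
```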
You will also want to remove any full response caching from the server. You should investigate server partial or donut caching which allows you to cache certain parts of the response (so it doesn't need to work out static data that won't change; for example the navigation or footer) but will go off and get new results for the parts of your page that should change on each request.
The date time display and all other reliant parts of the response will be updated, but static parts will come from the server cache.
A good (and simpler) example of this problem is a page that displays a random image on reload. If the page/response is cached on the server, the image will remain the same for each request. The server code would need to be set up to cache everything about the page apart from the image, which would be updated by the server. The server code therefore only needs to work out the new image to apply to the response, since all other parts of the page come from the server cache.
It's also worth noting that server cache is global. If you cache a particular response and another user then requests the same page, they will get the same response as the other user. You may need to vary the cache by certain parameters depending on the needs of your system.
I am not able to understand the difference between active and dynamic web pages.
I know that Active web pages are first downloaded on the client machine and then executed.
Dynamic web pages are executed on the server and then sent to the client.
But I am not able to correlate this with a real-world example.
Kindly explain the difference to me with some simple examples.
Also explain what an applet is, and why it is an active web page and not a dynamic one.
As you said, dynamic is what's being executed on the server and then the result is being sent back to the client (browser). So for example when using PHP, your browser isn't able to execute PHP, so the server executes the PHP file and performs all logic in your code. The result will be an HTML file, which is then sent back to the client. The important thing to understand is that when the result is served to the client, the information in it won't change.
An active web page is a page where the browser performs the logic instead of the server. So for example when you've got a page where you're showing share prices, then you want it to update e.g. every 5 seconds. A solution would be to use AJAX with JavaScript. In contrast to PHP, your browser is able to execute JavaScript, so it is happening without reloading the page. So with an active page, everything is happening inside your browser without the need to reload the page every time you want new information.
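The share-price scenario above can be sketched in a few lines. The /prices endpoint, its JSON shape, and the ticker element are all hypothetical; the point is that the browser, not the server, drives the updates.

```javascript
// Pure helper: format one quote; kept separate so it is easy to test.
function formatPrice(symbol, price) {
  return symbol + ": $" + price.toFixed(2);
}

if (typeof window !== "undefined") {
  // "Active" page: the browser polls every 5 seconds and rewrites part
  // of the page itself, with no full reload.
  setInterval(async () => {
    const res = await fetch("/prices");  // AJAX request
    const prices = await res.json();     // e.g. {"ACME": 12.3}
    document.getElementById("ticker").textContent =
      Object.entries(prices)
        .map(([sym, p]) => formatPrice(sym, p))
        .join("  ");
  }, 5000);
}
```

A dynamic (server-rendered) page would instead bake the prices into the HTML once, and they would not change until the next full request.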
An applet is an embedded application, like Flash or Java (not to be confused with JavaScript). To execute an applet, you most likely need a browser plugin. Because the applet is executed by the plugin and your browser, it is active and not dynamic (you don't need to request a new applet for the information in it to change). The advantage of using an applet is that the programming language (like Java) has more possibilities than HTML. A lot of browser games were made with applets, but nowadays they are used less and less because we can achieve the same with techniques like JavaScript, HTML5, and WebGL.
I want my Django server to refresh content as it reaches the database. The idea is that the user first sees the current contents of the database, and as new content arrives, it is placed above the previous content without reloading the page. In another part of the site, the current content should be replaced with the new content as it reaches the database.
EvServer seems like the clearest choice to me, but I really don't know how to do this; what would be the simplest and most efficient approach?
I think you should avoid HTTP Polling. Here's why:
The frequency of the setInterval combined with the number of users on your web app can lead to a big resource drain. If you go through slides 9 to 19 in this presentation you'll see some quite dramatic figures in favour of using Push (note: that example uses a hosted service, but hosting your own realtime server and using Push has similar benefits).
Between setInterval calls, the data displayed in your app is potentially out of date. Using a Push technology means that the instant new data is available, it can be pushed and displayed in your app. You don't want users looking at an app and thinking they are seeing correct information when they are not.
You should take a look at the following StackOverflow questions:
Django / Comet (Push): Least of all evils?
Need help understanding Comet in Python (with Django)
For Python/Comet see:
Python Comet Server
The latest recommendation for Comet in Python?
I'd recommend you also start considering "WebSockets" as well as "Comet". Most Comet servers now prefer to use a WebSocket connection when possible.
If you'd prefer to avoid installing and managing your own Comet/WebSocket solution, you could use a realtime hosted service, which will let you push data through it using a REST API; your clients can receive events by embedding a JavaScript library and writing a small amount of code to subscribe and receive them.
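On the client side, receiving pushed events over a WebSocket takes only a few lines. Everything specific here is hypothetical: the ws:// URL, the message schema, and the feed element would come from whichever Comet/WebSocket server or hosted service you pick.

```javascript
// Pure helper: model "new content goes above the previous content",
// as the question asks; kept separate so it is easy to test.
function prependUpdate(existing, item) {
  return [item, ...existing];
}

if (typeof window !== "undefined") {
  // Hypothetical push endpoint.
  const socket = new WebSocket("ws://example.com/updates");
  socket.onmessage = (event) => {
    const update = JSON.parse(event.data); // e.g. {"text": "new row"}
    const feed = document.getElementById("feed");
    const li = document.createElement("li");
    li.textContent = update.text;
    feed.insertBefore(li, feed.firstChild); // newest first, no page reload
  };
}
```

Unlike polling, nothing happens between updates: the browser holds one open connection and the server pushes the moment new data reaches the database.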
The steps are quite straightforward:
Write a model to store data in DB
Write a view that will generate JSON-serialized data upon a POST request.
Write a template containing JavaScript that uses setInterval() to make AJAX requests to the view and render the received data. (I'd suggest using jQuery, as it's well documented and widespread.)
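The template's JavaScript for the last step might look like the sketch below (shown with plain fetch rather than jQuery; the /items/latest/ URL, the JSON shape, and the feed element are hypothetical and would match your own view and template).

```javascript
// Pure helper: turn the view's JSON payload into an HTML fragment.
// Kept separate from the DOM code so it is easy to test.
function renderItems(items) {
  return items.map(i => "<li>" + i.text + "</li>").join("");
}

if (typeof window !== "undefined") {
  // Step 3: poll the view every 5 seconds and render the received data.
  setInterval(() => {
    fetch("/items/latest/")
      .then(res => res.json())               // e.g. [{"id": 1, "text": "hello"}]
      .then(items => {
        document.getElementById("feed").innerHTML = renderItems(items);
      });
  }, 5000);
}
```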
I have access to a web interface for a large amount of data. This data is usually accessed by people who only want a handful of items. The company that I work for wants me to download the whole set. Unfortunately, the interface only allows you to see fifty elements (of tens of thousands) at a time, and segregates the data into different folders.
Unfortunately, all of the data has the same url, which dynamically updates itself through ajax calls to an aspx interface. Writing a simple curl script to grab the data is difficult due to this and due to the authentication required.
How can I write a script that navigates around a page, triggers ajax requests, waits for the page to update, and then scrapes the data? Has this problem been solved before? Can anyone point me towards a toolkit?
Any language is fine, I have a good working knowledge of most web and scripting languages.
Thanks!
I usually use a program like Fiddler or Live HTTP Headers and just watch what's happening behind the scenes. 99.9% of the time you'll see that there's a query string or REST call with a very simple pattern that you can emulate.
If you need to directly control a browser
Have you thought of using tools like WatiN, which are actually meant for UI testing? I suppose you could use them to programmatically make requests anywhere and act upon the responses.
If you just need to get the data
Since you can do whatever you please, you can just make ordinary web requests from a desktop application and parse the results. You could customize it to your own needs, and simulate AJAX requests at will by setting certain request headers.
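For instance, many AJAX endpoints (including ASP.NET ones) distinguish AJAX calls from ordinary requests via the X-Requested-With header that libraries like jQuery set, so a scraper can replay the calls it observed in Fiddler. A sketch, where the URL and session cookie are placeholders captured from a real browser session:

```javascript
// Build the headers that make a plain HTTP request look like the
// page's own AJAX call; the session cookie handles the authentication
// the question mentions.
function buildAjaxHeaders(sessionCookie) {
  return {
    "X-Requested-With": "XMLHttpRequest",
    "Cookie": sessionCookie,
  };
}

// Replay one observed AJAX call (Node 18+ has a global fetch).
async function fetchChunk(url, sessionCookie) {
  const res = await fetch(url, { headers: buildAjaxHeaders(sessionCookie) });
  return res.text();
}
```

Looping fetchChunk over the paging parameter you spotted in the traffic (e.g. fifty items at a time) then pulls the whole set without driving a browser at all.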
Maybe this ?
Website scraping using jquery and ajax
http://www.kelvinluck.com/2009/02/data-scraping-with-yql-and-jquery/