ExtJS and Sencha Touch Search Engine Optimization

I've started learning ExtJS 4 and Sencha Touch 2, and I really like them.
The main difference between Sencha products and jQuery (and others) is that instead of enhancing preexisting HTML, Sencha generates its own DOM based on objects created in JavaScript.
Apps developed like this are great as intranet apps, but can you create a consumer-oriented website using Sencha (like an online store)?
I see that you don't write any HTML code in ExtJS or Sencha Touch, so I am wondering how a fully JavaScript-generated page can be indexed by search engines like Google. As far as I know, the Google bot only sees the plain HTML code.
Is there any way to SEO a Sencha web app?
Kind Regards,
Dan Cearnau

Nothing is impossible. You just need to do some work.
1. Generate a standard static page using PHP or something similar. The page should look like the page of your ExtJS app, but all links must carry GET params in the URL. PHP should also read those incoming GET params and render the matching content.
2. Add your ExtJS app to the page. In the app you have to take the GET params into account and make the proper request.
2a. If a real user opens your page: PHP generates the output, then the ExtJS app starts, hides the static page and generates the dynamic output.
2b. If a crawler opens your page, JS is effectively disabled, so PHP handles the request according to the GET params and generates the output.
In ExtJS you can add params to the URL like #param1&param2&param3 when clicking on links, so real users will be able to share their links. Just teach the router on the PHP side to understand URLs like this.
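A rough sketch of the client side of that idea, assuming ExtJS 4 (the products.php URL and the param names are made up): read the hash part of the URL on startup and repeat the same request that PHP already answered for the crawler.

Ext.application({
    name: 'Shop',
    launch: function () {
        // e.g. "#category=books&page=2" -> "category=books&page=2"
        var hash = window.location.hash.replace(/^#/, '');
        Ext.Ajax.request({
            url: 'products.php',
            // the same params PHP interprets as GET params for the static version
            params: Ext.Object.fromQueryString(hash),
            success: function (response) {
                // build the dynamic ExtJS UI here and hide the static PHP output
            }
        });
    }
});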
There is no way to make SEO-friendly pages using JavaScript only.

Using a full-blown app, it would be close to impossible to SEO. They are far too dynamic. Search engines work by indexing pages. They can deal with some Ajax stuff by supporting pages with #s, but imagine how many pages a fully functional app would have. Every view you have has hundreds of options that would constitute a new page, each of which also has hundreds of options. All these virtual pages would most likely be just slight variations of other pages: a different sort order, a different filter, a moved panel, a search option.
If you use ExtJS to enhance a website the way jQuery is often used, then that's a different story. You will have HTML for the spiders to read, and then you enhance how the content works via JavaScript (see progressive enhancement).

Actually, in Touch 2 you can define routes and use history support. This will treat sections of your app as actual pages in the browser, with standard functionality like going back in the browser, etc. This will be your best bet when working with mobile SEO.
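A minimal sketch of what that looks like, assuming Sencha Touch 2's controller-based routing (the controller name and the showProduct action are made up):

Ext.define('MyApp.controller.Products', {
    extend: 'Ext.app.Controller',

    config: {
        routes: {
            // maps a URL like #product/42 to showProduct('42')
            'product/:id': 'showProduct'
        }
    },

    showProduct: function (id) {
        // push the matching view here; the hash stays in the address bar,
        // so the back button and shareable URLs keep working
    }
});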

Getting any kind of SEO out of a Sencha app is impossible, since it builds everything on the fly. Even if you use the history support in Sencha Touch, that's also done on the fly and has no effect on SEO.
For consumer-facing websites, Sencha is not the answer. For the back end (for example, managing the shopping cart), it's a different story.

Related

Creating a multipage website without loading

So I am trying to create a website with multiple different pages. I was originally going to just take the traditional route, but this website caught my eye: https://anyoneworldwide.com/
Everything aside from the "Choose your location" screen has no loading whatsoever. The URL changes, but there is no loading indicator on my tab or "X" on the refresh button (I am using Chrome, by the way).
So my question is: how am I able to use this kind of loading technique on a website of my own?
The particular website mentioned in the question is developed using React, a JavaScript library. The concept is known as a Single Page Application (SPA): routing is done by JavaScript running in the browser, and content is loaded using AJAX calls. Check out this article.
Easy answer: pick one of the currently popular front-end frameworks for building single-page apps (SPAs).
E.g. AngularJS, React or Vue.js.
These frameworks allow you to easily create client-side routers, which inject (in one way or another) new content into the existing page, so there is no refresh.
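A bare-bones sketch of what such a client-side router does under the hood, using only browser APIs (the data-spa attribute, the /pages/ path and the #app container are made up):

// intercept clicks on links we want handled client-side
document.addEventListener('click', function (e) {
    var link = e.target.closest('a[data-spa]');
    if (!link) return;
    e.preventDefault();
    history.pushState({}, '', link.getAttribute('href'));  // URL changes, no reload
    loadView(link.getAttribute('href'));
});

// keep the browser's back/forward buttons working
window.addEventListener('popstate', function () {
    loadView(location.pathname);
});

function loadView(path) {
    // fetch only the fragment for this route and swap it into the page
    fetch('/pages' + path + '.html')
        .then(function (res) { return res.text(); })
        .then(function (html) {
            document.getElementById('app').innerHTML = html;
        });
}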
React is a popular, free and open-source library developed by Facebook. It is used to develop single-page applications, which means that your website won't do a full page reload as users navigate. This increases the speed of your website and saves a lot of bandwidth. Using React you can make a website like the one you mentioned. Other frameworks such as Angular and Vue.js can be used as well.

Approach to building a GUI for a web application

First, a short disclaimer. I have next to no knowledge about web applications. I come from an iOS background where I exclusively wrote native code, so if you write your answers like I know nothing outside the shallow parts, that would be great.
I'm interested in learning a stack to develop web applications, but I'm not sure what the right way to build the GUI is. I know that a web front end consists of HTML and CSS to create the display and JavaScript as the bridge between the back end and the GUI, but I don't know the best way to put something together.
I know in iOS you can use Interface Builder (the part of Xcode that lets you graphically create the XML that describes the display) to create GUIs without any knowledge of how iOS translates the XML into rendering, or even what is written in your XML files. Is there any analog for the web front end?
I'm mainly just looking for a list of the accepted ways to develop the GUI for a web application. If I have to learn HTML and CSS, so be it, but I'd like to know what my options are and the tradeoffs between each of them.
I can answer shortly by stating that (technically) you can design web pages without coding in HTML or CSS, or even JavaScript, although you would be somewhat limited in your creative abilities and applications.
You can read about WYSIWYG HTML editors on this link, or try out CKEditor (someone said it's good)...
...but I think a bit of background will help you reach a correct decision...
so here goes:
The Long Answer
I would start by trying to put the world of web programming and design into concepts that correlate with iOS coding.
If we look at the whole of a web app from an MVC perspective, then the browser is the view, the server is the controller and the database is the model... although this is very simplified.
Just like in iOS, each of these can be (but doesn't have to be) broken down into sub-MVC systems.
Just like any model in MVC, the view (the browser) can talk to the model (the database), but really it shouldn't. That's just bad practice.
If we break down the main-view (the browser) to a sub-MVC system, I would consider the HTML as the model, the CSS as the view and the browser (through links and javascript) as the controller.
It's not all that clear cut, but thinking like this helps me practice better and cleaner coding.
The HTML is the view's model for the web-app - it contains the data to be displayed or used.
HTML is a variation on the XML format and it contains data organized in a similar way to an XML file.
The basic HTML file will contain:
<html>
<head>
</head>
<body>
</body>
</html>
This should look familiar to you if you've read any XML.
The CSS (cascading style sheets) is the view - it states HOW the html DATA should be displayed.
If your web app doesn't have any CSS, the browser's default styling will be applied to the data in the HTML.
This "language" makes me think of dictionaries in iOS (I think that's what they're called in Objective-C). They have properties and values (like key-value pairs) that determine how the HTML data is displayed (if it's displayed at all).
They could look something like this:
body {
    color: white;
    background-color: black;
}
The browser is the web app's view controller - it makes it all work together and serves it up to the screen.
JavaScript and links help us tell the controller what we want it to do, but it is the browser that acts (and willfully at times).
You can have a whole web app that works without JavaScript, using only the default actions offered by links - in which case the browser will usually ask the server (the main controller) what to do.
JavaScript helps us move some of the legwork from the server to the client, by allowing us to have a "smarter" controller for our view - just like in iOS.
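A tiny illustration of that "smarter controller" idea (the form and field ids are made up): the browser-side code checks the input before the server is ever asked.

document.getElementById('signup').addEventListener('submit', function (e) {
    var email = document.getElementById('email').value;
    if (!/^[^@\s]+@[^@\s]+$/.test(email)) {
        e.preventDefault();  // stop the full round-trip to the server
        document.getElementById('error').textContent = 'Please enter a valid email address.';
    }
    // otherwise the form submits normally, so it still works without JavaScript
});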
The issue of the errant main-view / browser
Not all views are created equal, and not all browsers are the same.
Because the browser is used as the controller for the web app's view, and because some browsers act differently than others, we web coders have the problem of working around someone else's idea of how our view's controller should behave.
You might see us complaining about it quite a lot (especially complaining about Internet Explorer).
These days, this issue is not as big of a problem as it used to be... it's just that some people don't update their computers...
WYSIWYG web editors
There are website builders and editors that try to work like Xcode does, by allowing us to build the website much like we would write Word documents.
But unlike Xcode, which codes only the graphical interface of the view, these website builders write the model as well and usually add JavaScript into the mix.
When we use these tools (which I avoid), the whole MVC model breaks apart.
We can use them as a starting point for dirty work, but then we take their code apart and adjust it to our needs - usually by taking the code we need for the view (CSS) and applying it where we need it (and discarding much of the nonsense they add to the code and the HTML).
To summarize
As you can tell, HTML and CSS (and Javascript) are only a small part of a web app - as they all relate to the main view of the web app.
To write the controllers and models for web apps, we use other tools (such as Ruby, PHP, Node.js, MySQL and the like).
Coding the HTML and CSS isn't as hard as you might think, although it might be harder than I present it to be.
You can avoid writing code for web apps and use applications that offer WYSIWYG (What You See Is What You Get), but that would limit your abilities and will take away from your control over what you want to create.

Separate HTML pages for each screen in jQuery Mobile

I am a newbie to jQuery Mobile. So far, every example I have found contains only one HTML page for the whole application, with multiple div tags where each page/screen is defined as a div with data-role="page", optionally with a header and footer. Based on user actions, we hide some divs (pages) and show only the expected page. This multi-page template also seems to be the standard design, as described in several blogs.
Are there any other ways of designing this? What I would like to have is multiple HTML pages, for example one for login, one for home, one for contact, etc. Otherwise it is difficult to understand/code/debug issues, especially for people from a Java background like me. So what I want is some kind of MVC design with jQuery Mobile, with each view/screen as a separate HTML file associated with one JS file (the controller). Can we have multiple HTML pages in a jQuery Mobile app? If so, how do we pass data / maintain a session between them? Any samples are most welcome. Thanks in advance.
Note: I also don't want server-side includes. My app contains 10 to 15 screens, and each page will make a web service call, fetch some data and map it to the UI.
As jerone mentioned above, the jQuery Mobile documentation clearly says
We strongly recommend building your site or app as a series of separate pages like this because it's cleaner, more lightweight and works better without JavaScript.
See http://jquerymobile.com/demos/1.2.0/docs/pages/page-template.html
Each page (which can be a static HTML file or anything produced by a script, e.g. PHP, Python or whatever) is thus standalone, and transitions from one page to another are done using AJAX calls.
There is no such thing as a controller except if you assume that your browser is one!
You can use multiple HTML files if you want. jQuery Mobile will automatically include these with AJAX: http://jquerymobile.com/demos/1.2.0/docs/pages/page-navmodel.html
Anyway, jQuery Mobile recommends one big HTML file with multiple pages.
From http://jquerymobile.com/demos/1.2.0/docs/pages/page-template.html
This template is standard HTML document with a single "page" container inside, unlike a multi-page template that has multiple pages within it. We strongly recommend building your site or app as a series of separate pages like this because it's cleaner, more lightweight and works better without JavaScript.
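If you do go with separate files, here is a small sketch of how navigation between them works, assuming jQuery Mobile 1.x (the page id, button id and contact.html are made up). An ordinary link such as <a href="contact.html">Contact</a> is fetched via AJAX automatically; the same navigation can also be triggered from script:

$(document).on('pagecreate', '#home', function () {
    $('#go-contact').on('click', function () {
        // loads contact.html with AJAX and transitions to it, no full reload
        $.mobile.changePage('contact.html', { transition: 'slide' });
    });
});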

HTML Snapshot for crawler - Understanding how it works

I'm reading this article today. To be honest, I'm really interested in point "2. Much of your content is created by a server-side technology such as PHP or ASP.NET".
I want to check whether I have understood it correctly :)
I create that PHP script (gethtmlsnapshot.php) where I include the server-side AJAX page (getdata.php) and I escape (for security) the parameters. Then I add it at the end of the static HTML page (index-movies.html). Right? Now...
1 - Where do I put that gethtmlsnapshot.php? In other words, I need to call that page (or rather, the crawler needs to). But if I don't have a link to it on the main page, the crawler can't call it :O How can the crawler call the page with the _escaped_fragment_ parameters? It can't know them if I don't specify them somewhere :)
2 - How can the crawler call that page with the parameters? As before, I need links to that script with the parameters, so the crawler can browse each page and save the content of the dynamic result.
Can you help me? And what do you think about this technique? Wouldn't it be better if the developers of crawlers made their bots work in some other way? :)
Let me know what you think. Cheers
I think you got something wrong, so I'll try to explain what's going on here, including the background and alternatives, as this is indeed a very important topic that most of us have stumbled upon (or at least something similar) from time to time.
Using AJAX, or rather asynchronous incremental page updating (because most pages actually don't use XML but JSON), has enriched the web and provided a great user experience.
It has, however, also come at a price.
The main problem was clients that didn't support the XMLHttpRequest object, or JavaScript at all.
In the beginning you had to provide backwards compatibility.
This was usually done by providing links, capturing the onclick event and firing an AJAX call instead of reloading the page (if the client supported it).
Today almost every client supports the necessary functions.
So the problem today is search engines, because they don't. Well, that's not entirely true, because they partly do (especially Google), but for other purposes.
Google evaluates certain JavaScript code to prevent blackhat SEO (for example, a link pointing somewhere while JavaScript opens some completely different webpage, or HTML keyword code that is invisible to the client because it is removed by JavaScript, or the other way round).
But keeping it simple, it's best to think of a search engine crawler as a very basic browser with no CSS or JS support (it's the same with CSS: it's partly parsed, for special reasons).
So if you have "AJAX links" on your website, and the web crawler doesn't support following them using JavaScript, they just don't get crawled. Or do they?
Well, the answer is that JavaScript links (like document.location = whatever) do get followed. Google is often intelligent enough to guess the target.
But AJAX calls are not made, simply because they return partial content, and no meaningful whole page can be constructed from it, as the context is unknown and the unique URI doesn't represent the location of the content.
So there are basically 3 strategies to work around that:
have an onclick event on the links with a normal href attribute as a fallback (imo the best option, as it solves the problem for clients as well as search engines)
submit the content pages via your sitemap so they get indexed, but completely apart from your site links (usually pages provide a permalink to these URLs so that external pages link to them for the PageRank)
the AJAX crawling scheme
The idea of the AJAX crawling scheme is to have your JavaScript XMLHttpRequest calls paired with corresponding href attributes that look like this:
www.example.com/ajax.php#!key=value
So the link looks like:
<a href="www.example.com/ajax.php#!page=imprint" onclick="return handleajax(this)">go to my imprint</a>
The function handleajax could evaluate the document.location variable to fire the incremental asynchronous page update. It's also possible to pass an id or URL or whatever.
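A possible shape for such a handler, as a sketch only (the ajax.php endpoint, the #app container and the exact signature are assumptions): it puts the #! fragment into the address bar and fetches the matching partial content asynchronously.

function handleajax(link) {
    var fragment = link.hash.replace(/^#!/, '');           // "#!page=imprint" -> "page=imprint"
    window.location.hash = '#!' + fragment;                // keep a shareable #! URL, no reload
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'ajax.php?' + fragment, true);         // fetch just the partial content
    xhr.onload = function () {
        document.getElementById('app').innerHTML = xhr.responseText;
    };
    xhr.send();
    return false;                                          // prevent the full load of ajax.php
}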
The crawler, however, recognises the AJAX crawling scheme format and automatically fetches http://www.example.com/ajax.php?_escaped_fragment_=page=imprint instead of http://www.example.com/ajax.php#!page=imprint.
So the query string then contains the hash fragment, from which you can tell which partial content has been requested.
So you just have to make sure that http://www.example.com/ajax.php?_escaped_fragment_=page=imprint returns a full website that looks exactly like the website should look to the user after the XMLHttpRequest update has been made.
A very elegant solution is also to pass the a object itself to the handler function, which then fetches the same URL as the crawler would have fetched, using AJAX but with additional parameters. Your server-side script then decides whether to deliver the whole page or just the partial content.
It's a very creative approach indeed, and here comes my personal pro/con analysis:
pro:
partially updated pages receive a unique identifier, at which point they are fully qualified resources in the semantic web
partially updated websites receive a unique identifier that can be presented by search engines
con:
it's just a fallback solution for search engines, not for clients without JavaScript
it provides opportunities for black hat SEO, so Google for sure won't adopt it fully or rank pages using this technique highly without proper verification of the content
conclusion:
Plain links with normal, working href attributes as a legacy fallback, plus an onclick handler, are the better approach because they provide functionality for old browsers as well.
The main advantage of the AJAX crawling scheme is that partially updated websites get a unique URI, and you don't have to create duplicate content that somehow serves as the indexable and linkable counterpart.
You could argue that an AJAX crawling scheme implementation is more consistent and easier to implement. I think this is a question of your application design.

Creating a web application, then adding Ajax to it?

I imagine there are many of you out there who have developed an application online which automates a lot of processes and saves people at your company time and money.
The question is, what are your experiences with developing that application, having it all set in place, then "spicing" it up with some Ajax, so it makes for a better user experience?
Also, what libraries would you suggest using when adding Ajax to an already-developed web application?
Lastly, what are some common processes you see in web applications that Ajax does well with? For example, auto-populating the search box as you type.
My preferred way of building Ajax-enabled applications is to build them the old-fashioned way, where every button, link, etc. posts to the server, and then hijack all those button, link, etc. clicks and point them to the Ajax functionality.
This ensures that my app remains compatible with down-level browsers, which is good.
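A small sketch of that hijacking step using jQuery (the ajaxable class, the action URL and the #result container are illustrative): the form still posts normally for clients without JavaScript, but capable clients submit it with Ajax instead.

$(function () {
    $('form.ajaxable').on('submit', function (e) {
        e.preventDefault();                                // hijack the normal post
        var $form = $(this);
        $.post($form.attr('action'), $form.serialize(), function (html) {
            $('#result').html(html);                       // swap in only the updated fragment
        });
    });
});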
It doesn't really matter which you use, unless you're trying to do something very specialized.
Here's a good list: http://code.google.com/apis/ajaxlibs/.
Yes, auto-completers are a pretty handy implementation of Ajax. It's also quite useful for data-intensive activities like populating drill-down data.
A lot of what you can do with these libraries isn't Ajax-specific, there is a lot of UI interaction that can benefit the user as well. You can do things like slideshows and lightboxes quite easily with many of these libraries.
Pick the one that you're comfortable with. The syntax they all use is a little different. Give a few a spin and try to build simple examples. Stick with the one you like.
Using ASP.NET Ajax to wrap a few chunks of code is an easy way to get going. But personally I prefer to use jQuery. You can easily add some simple Ajax calls with it to make the UI more responsive without the ASP.NET Ajax overhead.
If you are using ASP.NET to write your applications, adding AJAX using ASP.NET AJAX is very straightforward and in many places will not require you to change any code at all except add two controls to the pages you want to modify.
This works using partial page loads. The controls you have to add (off the top of my head) are called something like:
<asp:ScriptManager runat="server" />
<asp:UpdatePanel runat="server"> ... </asp:UpdatePanel>
The biggest thing I use AJAX for is lists and search forms. Why? Because of the overhead of loading an entire page when you are going through a list of, let's say, 200 records; it gets frustrating for a user to page through everything. However, it is important that if you click on a link in the page and then hit the back button, or use a link at the top to return, you end up on the same page you were on.
For search forms, as you fill out the form I use AJAX queries to return the first few results and a number indicating how many records were returned.
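One way to wire that up with jQuery (a sketch; the search.php endpoint and the JSON response shape are assumptions): query the server as the user types and show a record count plus the first few hits.

$('#search').on('keyup', function () {
    var term = $(this).val();
    if (term.length < 3) { return; }                       // don't hit the server on every keystroke
    $.getJSON('search.php', { q: term }, function (data) {
        $('#count').text(data.total + ' records match');
        $('#preview').html(
            $.map(data.results.slice(0, 5), function (r) {
                return '<li>' + r.title + '</li>';
            }).join('')
        );
    });
});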
For AJAX frameworks, I use mootools. http://www.mootools.net.
Please ignore if not using ASP.NET. Your platform wasn't clear from your question.
Depending on when you created your web application, your web config file may need some tweaks to use ASP.NET Ajax. The easiest way to see is to create a new web site with the ASP.NET Ajax template and compare the web config, copying over configuration items as needed to bring the old one up to date.
If "spicing it up" is all you're after then develop the fully functional app without AJAX first. From here you can unobtrusively add AJAX functionality and ensure that the app degrades well for non JavaScript-enabled browsers.
I've started using jQuery for JavaScript on my site. It takes away all the worry of cross-browser JavaScript differences - things like class versus className, and getElementById. It also includes some very handy and simple functionality for AJAX postbacks. It's very easy to learn and extremely lightweight when used well.
I've seen some good use of AJAX right here on Stack Overflow, things like the tag selector and the question lookup when you type a question title. I think these simple things work best; we're just adding to the user experience with small additions to functionality that are intuitive, we're not flooding the screen with drag/drop handles etc.
I would differ from the first poster. Adding Ajax isn't always as easy as 1,2,3. It really depends on what you are after.
Adding things such as a colour animation can be done fairly easily, but if you are after things such as auto-populating a text box, that requires extra code. It's not as easy as just adding something client-side; you would also need to add server-side support to fetch the partial query results.
Going beyond that, it can become even more complex keeping your client-side script in sync with server-side support.
But with the spirit of simplicity in mind there are libraries you can use to 'spice' up a website with animations and other eyecandy that can be implemented fairly easily which have been mentioned already.
I've often had to Ajax-enable old-fashioned ASP.NET 2.0 sites. The easiest way I've found to do that is to create a new Ajax-enabled site and copy and paste certain sections of its web.config into your old project's web.config.
Just compare the two and see what's missing in your old one. You'll obviously also need to add references to AjaxExtensions and AjaxControlToolkit.