What is the best practice for a web input control to gather multiple strings? - html

I am looking to add to a web form the ability for users to enter multiple strings. It should work like the tags input for new questions on Stack Overflow.
Is there a sample on the web of such a UI control for ASP.NET?
Or is there another solution for accepting multiple tags (of text/strings) in a nice, neat UI control?
(I didn't find anything useful in the HTML5 set of controls.)
NOTE: for use with an ASP.NET Web Forms (postbacks) application

I think the TextExt plugin is best for you. It's very handy and useful.
TextExt Plugin for jQuery
TextExt is a plugin for jQuery which is designed to provide
functionality such as tag input and autocomplete.
The core design principle behind TextExt is modularity and
extensibility. Each piece of functionality is separated from the main
core and can act individually or together with other plugins.
Features:
Tags
Autocomplete
AJAX loading
Placeholder text
Arrow
… and much more!
How To Use:
The steps to using TextExt are as follows:
Specify which plugins you need via the plugins option
Configure each plugin individually if necessary
Enjoy!
<script type="text/javascript">
    $('#textarea').textext({
        plugins : 'tags prompt focus autocomplete ajax arrow',
        tagsItems : [ 'Basic', 'JavaScript', 'PHP', 'Scala' ],
        prompt : 'Add one...',
        ajax : {
            url : '/manual/examples/data.json',
            dataType : 'json',
            cacheResults : true
        }
    });
</script>
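The #textarea selector above assumes a matching element on the page, e.g. (id and attributes here are illustrative):
<textarea id="textarea" rows="1"></textarea>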


TinyMCE <p> </p><div id="ConnectiveDocSignExtentionInstalled" data-extension-version="1.0.4"></div>

I have a MultiLine TextField in ASP.NET VB with TinyMCE. A user of our website has a plugin on his computer that inserts unwanted HTML code into the TinyMCE TextField:
<p> </p><div id="ConnectiveDocSignExtentionInstalled" data-extension-version="1.0.4"></div>
Is there a way to filter this text and remove it before saving the full textfield content to my SQL database?
One option is to use a TinyMCE node filter to remove the element when it serializes the content inside the editor. Here's an example:
setup: (editor) => {
  editor.on('PreInit', () => {
    // Create a custom node filter to remove unwanted content when getting content from the editor
    editor.serializer.addNodeFilter('div', (nodes) => {
      nodes.forEach((node) => {
        const id = node.attr('id');
        if (id === 'ConnectiveDocSignExtentionInstalled') {
          node.remove();
        }
      });
    });
  });
}
and a working version of it: https://fiddle.tiny.cloud/RAhaab/1
This works by registering a filter during the initialization sequence to remove the offending div added by the user's extension when TinyMCE fetches the content from the editor. More information about the Node API used by the filter can be found here: https://www.tiny.cloud/docs/api/tinymce.html/tinymce.html.node/
Another option, which would depend on your integration, is to use server-side filtering of the content sent. How you do that would differ based on the frameworks being used.
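As one illustration, here is a minimal Node.js sketch using the cheerio HTML parser (an assumption; the same idea applies in the asker's ASP.NET stack with any server-side HTML parser):
const cheerio = require('cheerio');

// Strip the injected div before saving the editor content to the database
function stripInjectedDiv(html) {
  const $ = cheerio.load(html, null, false); // fragment mode: no <html>/<body> wrapper
  $('#ConnectiveDocSignExtentionInstalled').remove();
  return $.html();
}

// stripInjectedDiv('<p>Hi</p><div id="ConnectiveDocSignExtentionInstalled" data-extension-version="1.0.4"></div>')
// => '<p>Hi</p>'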
Came here just to say that this extra code is entered into text areas that process HTML when you have the "Connective Signing Extension" installed.
This extension gets installed when you use an electronic European ID card to log into government or banking services in the EU.
Today, I found their website and reported this behavior via their contact form.
I hope they respond, but I recommend others experiencing the same behavior please contact them and bubble up the bug!

How to interact with HTML forms in Delphi using TChromium

I need to update the design of a browser in Delphi 7. I was using TWebBrowser, but it had many problems with JavaScript and navigation, so I decided to migrate to Chromium. The problem is that I cannot find code for these components. Does anyone know which command would be equivalent to this one in TChromium:
OleObject.Document.all.Item('ElementById', 0).value := 'edit1.text';
I need to transfer text from a memo to a textarea in an HTML form and, at the end, send a click to the button on the HTML form. If anyone knows the commands and can share, I'd appreciate it.
A more flexible alternative to DOM access could be to perform this in JavaScript, with the ExecuteJavaScript method of TChromium.
From your summary description, the JS could be something like
document.getElementById('yourtextarea').value = <JSON stringified content of your memo>;
document.getElementById('yourform').submit();
Alternatively, you could implement a JS function in your HTML and call it with ExecuteJavaScript; this way there would not be anything specific (besides the function name) on the Delphi side, and the HTML would be free to evolve.
function setTextAreaAndSubmit(value) {
  document.getElementById('yourtextarea').value = value;
  document.getElementById('yourform').submit();
}

How do I generate SEO-friendly markup for a single-page web app? [duplicate]

There are a lot of cool tools for making powerful "single-page" JavaScript websites nowadays. In my opinion, this is done right by letting the server act as an API (and nothing more) and letting the client handle all of the HTML generation stuff. The problem with this "pattern" is the lack of search engine support. I can think of two solutions:
When the user enters the website, let the server render the page exactly as the client would upon navigation. So if I go to http://example.com/my_path directly the server would render the same thing as the client would if I go to /my_path through pushState.
Let the server provide a special website only for the search engine bots. If a normal user visits http://example.com/my_path the server should give him a JavaScript heavy version of the website. But if the Google bot visits, the server should give it some minimal HTML with the content I want Google to index.
The first solution is discussed further here. I have been working on a website doing this and it's not a very nice experience. It's not DRY and in my case I had to use two different template engines for the client and the server.
I think I have seen the second solution for some good ol' Flash websites. I like this approach much more than the first one and with the right tool on the server it could be done quite painlessly.
So what I'm really wondering is the following:
Can you think of any better solution?
What are the disadvantages with the second solution? If Google in some way finds out that I'm not serving the exact same content for the Google bot as a regular user, would I then be punished in the search results?
While #2 might be "easier" for you as a developer, it only provides search engine crawling. And yes, if Google finds out you're serving different content, you might be penalized (I'm not an expert on that, but I have heard of it happening).
SEO and accessibility (not just for disabled persons, but accessibility via mobile devices, touch screen devices, and other non-standard computing / internet-enabled platforms) share a similar underlying philosophy: semantically rich markup that is "accessible" (i.e. can be accessed, viewed, read, processed, or otherwise used) by all of these different browsers. A screen reader, a search engine crawler, or a user with JavaScript enabled should all be able to use/index/understand your site's core functionality without issue.
pushState does not add to this burden, in my experience. It only brings what used to be an afterthought and "if we have time" to the forefront of web development.
What you describe in option #1 is usually the best way to go - but, like other accessibility and SEO issues, doing this with pushState in a JavaScript-heavy app requires up-front planning or it will become a significant burden. It should be baked into the page and application architecture from the start - retrofitting is painful and will cause more duplication than is necessary.
I've been working with pushState and SEO recently for a couple of different applications, and I found what I think is a good approach. It basically follows your item #1, but accounts for not duplicating html / templates.
Most of the info can be found in these two blog posts:
http://lostechies.com/derickbailey/2011/09/06/test-driving-backbone-views-with-jquery-templates-the-jasmine-gem-and-jasmine-jquery/
and
http://lostechies.com/derickbailey/2011/06/22/rendering-a-rails-partial-as-a-jquery-template/
The gist of it is that I use ERB or HAML templates (running Ruby on Rails, Sinatra, etc) for my server side render and to create the client side templates that Backbone can use, as well as for my Jasmine JavaScript specs. This cuts out the duplication of markup between the server side and the client side.
From there, you need to take a few additional steps to have your JavaScript work with the HTML that is rendered by the server - true progressive enhancement; taking the semantic markup that got delivered and enhancing it with JavaScript.
For example, I'm building an image gallery application with pushState. If you request /images/1 from the server, it will render the entire image gallery on the server and send all of the HTML, CSS and JavaScript down to your browser. If you have JavaScript disabled, it will work perfectly fine. Every action you take will request a different URL from the server and the server will render all of the markup for your browser. If you have JavaScript enabled, though, the JavaScript will pick up the already rendered HTML along with a few variables generated by the server and take over from there.
Here's an example:
<form id="foo">
Name: <input id="name"><button id="say">Say My Name!</button>
</form>
After the server renders this, the JavaScript would pick it up (using a Backbone.js view in this example)
FooView = Backbone.View.extend({
  events: {
    "change #name": "setName",
    "click #say": "sayName"
  },
  setName: function(e){
    var name = $(e.currentTarget).val();
    this.model.set({name: name});
  },
  sayName: function(e){
    e.preventDefault();
    var name = this.model.get("name");
    alert("Hello " + name);
  },
  render: function(){
    // do some rendering here, for when this is just running JavaScript
  }
});

$(function(){
  var model = new MyModel();
  var view = new FooView({
    model: model,
    el: $("#foo")
  });
});
This is a very simple example, but I think it gets the point across.
When I instantiate the view after the page loads, I'm providing the existing content of the form that was rendered by the server to the view instance, as the el for the view. I am not calling render or having the view generate an el for me when the first view is loaded. I have a render method available for after the view is up and running and the page is all JavaScript. This lets me re-render the view later if I need to.
Clicking the "Say My Name" button with JavaScript enabled will cause an alert box. Without JavaScript, it would post back to the server and the server could render the name to an html element somewhere.
Edit
Consider a more complex example, where you have a list that needs to be attached (from the comments below this answer).
Say you have a list of users in a <ul> tag. This list was rendered by the server when the browser made a request, and the result looks something like:
<ul id="user-list">
<li data-id="1">Bob
<li data-id="2">Mary
<li data-id="3">Frank
<li data-id="4">Jane
</ul>
Now you need to loop through this list and attach a Backbone view and model to each of the <li> items. With the use of the data-id attribute, you can find the model that each tag comes from easily. You'll then need a collection view and item view that is smart enough to attach itself to this html.
UserListView = Backbone.View.extend({
  attach: function(){
    var self = this; // `this` is rebound inside each(), so keep a reference to the view
    this.el = $("#user-list");
    this.$("li").each(function(index){
      var userEl = $(this);
      var id = userEl.attr("data-id");
      var user = self.collection.get(id); // look the model up by its data-id
      new UserView({
        model: user,
        el: userEl
      });
    });
  }
});
UserView = Backbone.View.extend({
  initialize: function(){
    this.model.bind("change:name", this.updateName, this);
  },
  updateName: function(model, val){
    this.el.text(val);
  }
});

var userData = {...};
var userList = new UserCollection(userData);
var userListView = new UserListView({collection: userList});
userListView.attach();
In this example, the UserListView will loop through all of the <li> tags and attach a view object with the correct model for each one. It sets up an event handler for the model's name change event and updates the displayed text of the element when a change occurs.
This kind of process, to take the html that the server rendered and have my JavaScript take over and run it, is a great way to get things rolling for SEO, Accessibility, and pushState support.
Hope that helps.
I think you need this: http://code.google.com/web/ajaxcrawling/
You can also install a special backend that "renders" your page by running javascript on the server, and then serves that to google.
Combine both things and you have a solution without programming things twice. (As long as your app is fully controllable via anchor fragments.)
So, it seems that the main concern is being DRY
If you're using pushState, have your server send the same exact code for all URLs (that don't contain a file extension for serving images, etc.). "/mydir/myfile", "/myotherdir/myotherfile", or root "/" -- all requests receive the same exact code. You need some kind of URL rewrite engine. You can also serve a tiny bit of HTML and let the rest come from your CDN (using require.js to manage dependencies -- see https://stackoverflow.com/a/13813102/1595913).
(Test a link's validity by converting the link to your URL scheme and testing for the existence of content by querying a static or a dynamic source. If it's not valid, send a 404 response.)
When the request is not from a Google bot, you just process it normally.
If the request is from a Google bot, you use phantom.js -- a headless WebKit browser ("a headless browser is simply a full-featured web browser with no visual interface") -- to render the HTML and JavaScript on the server, and send the Google bot the resulting HTML. As the bot parses the HTML it can hit your other "pushState" links, e.g. <a href="/somepage">mylink</a>; the server rewrites the URL to your application file, loads it in phantom.js, the resulting HTML is sent to the bot, and so on...
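A minimal phantom.js rendering script for this might look like the following (the fixed timeout is a crude assumption; a real setup would wait for the app to signal that it has rendered):
// render.js -- run with: phantomjs render.js http://localhost:3000/somepage
var system = require('system');
var page = require('webpage').create();
var url = system.args[1];

page.open(url, function (status) {
  if (status !== 'success') {
    phantom.exit(1);
  } else {
    // Give the client-side app a moment to render
    setTimeout(function () {
      console.log(page.content); // the fully rendered HTML, ready to send to the bot
      phantom.exit();
    }, 1000);
  }
});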
For your HTML I'm assuming you're using normal links with some kind of hijacking (e.g. with backbone.js, see https://stackoverflow.com/a/9331734/1595913).
To avoid confusion with any links, separate your API code that serves JSON onto a separate subdomain, e.g. api.mysite.com.
To improve performance, you can pre-process your site's pages for search engines ahead of time, during off hours, by creating static versions of the pages using the same phantom.js mechanism, and then serve the static pages to Google bots. Preprocessing can be done with a simple app that can parse <a> tags. In this case, handling 404s is easier since you can simply check for the existence of a static file with a name that contains the URL path.
If you use #! hash bang syntax for your site links, a similar scenario applies, except that the rewrite-URL server engine would look for _escaped_fragment_ in the URL and would format the URL to your URL scheme.
There are a couple of integrations of node.js with phantom.js on github and you can use node.js as the web server to produce html output.
Here are a couple of examples using phantom.js for seo:
http://backbonetutorials.com/seo-for-single-page-apps/
http://thedigitalself.com/blog/seo-and-javascript-with-phantomjs-server-side-rendering
If you're using Rails, try poirot. It's a gem that makes it dead simple to reuse mustache or handlebars templates client and server side.
Create a file in your views like _some_thingy.html.mustache.
Render server side:
<%= render :partial => 'some_thingy', object: my_model %>
Put the template in your head for client-side use:
<%= template_include_tag 'some_thingy' %>
Render client side:
html = poirot.someThingy(my_model)
To take a slightly different angle, your second solution would be the correct one in terms of accessibility...you would be providing alternative content to users who cannot use javascript (those with screen readers, etc.).
This would automatically add the benefits of SEO and, in my opinion, would not be seen as a 'naughty' technique by Google.
Interesting. I have been searching around for viable solutions but it seems to be quite problematic.
I was actually leaning more towards your 2nd approach:
Let the server provide a special website only for the search engine
bots. If a normal user visits http://example.com/my_path the server
should give him a JavaScript heavy version of the website. But if the
Google bot visits, the server should give it some minimal HTML with
the content I want Google to index.
Here's my take on solving the problem. Although it is not confirmed to work, it might provide some insight or ideas for other developers.
Assume you're using a JS framework that supports "push state" functionality, and your backend framework is Ruby on Rails. You have a simple blog site and you would like search engines to index all your article index and show pages.
Let's say you have your routes set up like this:
resources :articles
match "*path", to: "main#index"
Ensure that every server-side controller renders the same template that your client-side framework requires to run (html/css/javascript/etc). If no controller matches the request (in this example we only have a RESTful set of actions for the ArticlesController), then match anything else, render the template, and let the client-side framework handle the routing. The only difference between hitting a controller and hitting the wildcard matcher is the ability to render content based on the requested URL for JavaScript-disabled devices.
From what I understand, it is a bad idea to render content that isn't visible to browsers: if Google indexes it and people go through Google to visit a given page and there isn't any content, you're probably going to be penalised. What comes to mind is rendering the content in a div node that you hide with display: none in CSS.
However, I'm pretty sure it doesn't matter if you simply do this:
<div id="no-js">
<h1><%= #article.title %></h1>
<p><%= #article.description %></p>
<p><%= #article.content %></p>
</div>
And then using JavaScript, which doesn't get run when a JavaScript-disabled device opens the page:
$("#no-js").remove() # jQuery
This way, for Google, and for anyone with JavaScript-disabled devices, they would see the raw/static content. So the content is physically there and is visible to anyone with JavaScript-disabled devices.
But, when a user visits the same page and actually has JavaScript enabled, the #no-js node will be removed so it doesn't clutter up your application. Then your client-side framework will handle the request through its router and display what a user should see when JavaScript is enabled.
I think this might be a valid and fairly easy technique to use. Although that might depend on the complexity of your website/application.
Though, please correct me if it isn't. Just thought I'd share my thoughts.
Use NodeJS on the server side, browserify your client-side code, and route each HTTP request's URI (except for static HTTP resources) through a server-side client to provide the first 'bootsnap' (a snapshot of the page in its current state). Use something like jsdom to handle jQuery DOM operations on the server. After the bootsnap is returned, set up the websocket connection. It's probably best to differentiate between a websocket client and a server-side client by making some kind of wrapper connection on the client side (the server-side client can communicate directly with the server). I've been working on something like this: https://github.com/jvanveen/rnet/
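As a rough sketch of the jsdom part (using the modern jsdom and jquery npm APIs, which is an assumption; the linked project predates them):
// Run jQuery DOM ops on the server to build the "bootsnap" HTML
const { JSDOM } = require('jsdom');

const dom = new JSDOM('<!DOCTYPE html><html><body><ul id="items"></ul></body></html>');
const $ = require('jquery')(dom.window); // bind jQuery to the server-side window

$('#items').append('<li>rendered on the server</li>');
console.log(dom.serialize()); // the page-state snapshot sent as the first response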
Use Google Closure Templates to render pages. They compile to JavaScript or Java, so it is easy to render the page on either the client or the server side. On the first encounter with every client, render the HTML and add the JavaScript as a link in the header. The crawler will read only the HTML, but the browser will execute your script. All subsequent requests from the browser could be made against the API to minimize the traffic.
This might help you : https://github.com/sharjeel619/SPA-SEO
Logic
1. A browser requests your single-page application from the server, which is going to be loaded from a single index.html file.
2. You program some intermediary server code which intercepts the client request and differentiates whether the request came from a browser or some social crawler bot.
3. If the request came from some crawler bot, make an API call to your back-end server, gather the data you need, fill that data into HTML meta tags and return those tags in string format back to the client.
4. If the request didn't come from some crawler bot, then simply return the index.html file from the build or dist folder of your single-page application.
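A minimal Express sketch of that intermediary step might look like this (the bot regex, back-end URL, and response fields are assumptions, not part of the linked project; assumes Node 18+ for the global fetch):
const express = require('express');
const path = require('path');
const app = express();

// Very rough crawler detection via the User-Agent header (list is an assumption)
const BOTS = /googlebot|bingbot|yandex|facebookexternalhit|twitterbot|linkedinbot/i;

app.get('*', async (req, res) => {
  if (BOTS.test(req.headers['user-agent'] || '')) {
    // Crawler: ask the back end for this page's data (hypothetical endpoint)
    const r = await fetch('http://localhost:4000/api/meta?path=' + encodeURIComponent(req.path));
    const meta = await r.json();
    // Return filled-in meta tags as an HTML string
    res.send('<!DOCTYPE html><html><head>' +
      '<title>' + meta.title + '</title>' +
      '<meta name="description" content="' + meta.description + '">' +
      '</head><body></body></html>');
  } else {
    // Regular browser: serve the SPA shell as-is
    res.sendFile(path.join(__dirname, 'dist', 'index.html'));
  }
});

app.listen(3000);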

How to manually define place for blog post preview using Django and CKEditor?

I have a blog written in Python + Django.
Before I started using a WYSIWYG editor, to create a blog post preview I manually added a custom HTML tag <post_cut/> and used a Python slice to show only the preview. This avoided issues with a fixed preview length or broken HTML tags.
Now I have added Django-CKEditor, and it removes all HTML tags which "it doesn't understand".
I tried to do something with the configuration (allowedContentRules, format_tags, etc.) but with no success.
The question is how to manage the "post-cut" and how to do this using CKEditor.
P.S. It would also be awesome to have a button for that.
Found the answer myself.
You need to use extraAllowedContent if you want to add some extra tags.
I also found how to add a custom button by creating a custom plugin.
But I'm still looking for a good solution that utilizes django-ckeditor.
CKEDITOR_CONFIGS = {
    'default': {
        'extraAllowedContent': {
            'post_cut': True,
        },
        # ...
        # (other options)
    }
}
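For the button, a CKEditor 4 custom plugin along these lines should work (plugin, command, and button names here are illustrative, not from django-ckeditor):
// postcut plugin: adds a toolbar button that inserts the <post_cut> marker
CKEDITOR.plugins.add('postcut', {
  init: function (editor) {
    editor.addCommand('insertPostCut', {
      exec: function (editor) {
        editor.insertHtml('<post_cut></post_cut>');
      }
    });
    editor.ui.addButton('PostCut', {
      label: 'Insert post cut',
      command: 'insertPostCut',
      toolbar: 'insert'
    });
  }
});
The plugin would then be enabled with 'extraPlugins': 'postcut' in CKEDITOR_CONFIGS.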

How do search engines deal with AngularJS applications?

I see two issues with AngularJS applications regarding search engines and SEO:
1) What happens with custom tags? Do search engines ignore the whole content within those tags? i.e. suppose I have
<custom>
<h1>Hey, this title is important</h1>
</custom>
would <h1> be indexed despite being inside custom tags?
2) Is there a way to keep search engines from indexing {{}} bindings literally? i.e.
<h2>{{title}}</h2>
I know I could do something like
<h2 ng-bind="title"></h2>
but what if I want to actually let the crawler "see" the title? Is server-side rendering the only solution?
(2022) Use Server Side Rendering if possible, and generate URLs with Pushstate
Google can and will run JavaScript now so it is very possible to build a site using only JavaScript provided you create a sensible URL structure. However, pagespeed has become a progressively more important ranking factor and typically pages built clientside perform poorly on initial render.
Server-side rendering (SSR) can help by allowing your pages to be pre-generated on the server. Your HTML contains the div that will be used as the page root, but it is not an empty div; it contains the HTML that the JavaScript would have generated if it had been allowed to run.
The client downloads the HTML and renders it giving a very fast initial load, then it executes the JavaScript replacing the content of the root div with generated content in a process known as hydration.
Many newer frameworks come with SSR built in, notably NextJS.
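As a sketch of what that looks like in NextJS (the file path, API URL, and field names here are assumptions):
// pages/article/[id].js -- rendered to HTML on the server, then hydrated in the browser
export async function getServerSideProps({ params }) {
  const res = await fetch('https://api.example.com/articles/' + params.id); // assumed API
  const article = await res.json();
  return { props: { article } };
}

export default function Article({ article }) {
  return (
    <article>
      <h1>{article.title}</h1>
      <p>{article.body}</p>
    </article>
  );
}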
(2015) Use PushState and Precomposition
The current (2015) way to do this is using the JavaScript pushState method.
PushState changes the URL in the top browser bar without reloading the page. Say you have a page containing tabs. The tabs hide and show content, and the content is inserted dynamically, either using AJAX or by simply setting display:none and display:block to hide and show the correct tab content.
When the tabs are clicked, use pushState to update the URL in the address bar. When the page is rendered, use the value in the address bar to determine which tab to show. Angular routing will do this for you automatically.
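Outside Angular, the underlying mechanism is just the History API; a minimal sketch (the element IDs and the showTab helper are assumptions):
// Update the URL without a reload when a tab is clicked
document.getElementById('tab-2').addEventListener('click', function (e) {
  e.preventDefault();
  history.pushState({ tab: 2 }, '', '/tabs/2');
  showTab(2); // assumed helper that hides/shows tab content
});

// Re-render the correct tab when the user navigates back/forward
window.addEventListener('popstate', function (e) {
  showTab((e.state && e.state.tab) || 1);
});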
Precomposition
There are two ways to hit a PushState Single Page App (SPA)
Via PushState, where the user clicks a PushState link and the content is AJAXed in.
By hitting the URL directly.
The initial hit on the site will involve hitting the URL directly. Subsequent hits will simply AJAX in content as the PushState updates the URL.
Crawlers harvest links from a page then add them to a queue for later processing. This means that for a crawler, every hit on the server is a direct hit, they don't navigate via Pushstate.
Precomposition bundles the initial payload into the first response from the server, possibly as a JSON object. This allows the Search Engine to render the page without executing the AJAX call.
There is some evidence to suggest that Google might not execute AJAX requests. More on this here:
https://web.archive.org/web/20160318211223/http://www.analog-ni.co/precomposing-a-spa-may-become-the-holy-grail-to-seo
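Schematically, precomposition just means the server embeds the initial payload in the first response (the variable name below is an assumption):
<!-- Served on a direct hit: the first render needs no AJAX call -->
<script>
  window.__BOOTSTRAP__ = { "articles": [{ "id": 1, "title": "Hello" }] };
</script>
<script>
  // Client code uses the embedded payload when present, AJAX otherwise
  var initial = window.__BOOTSTRAP__;
  if (!initial) {
    // fall back to fetching the data with an AJAX call
  }
</script>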
Search Engines can read and execute JavaScript
Google has been able to parse JavaScript for some time now, it's why they originally developed Chrome, to act as a full featured headless browser for the Google spider. If a link has a valid href attribute, the new URL can be indexed. There's nothing more to do.
If clicking a link in addition triggers a pushState call, the site can be navigated by the user via PushState.
Search Engine Support for PushState URLs
PushState is currently supported by Google and Bing.
Google
Here's Matt Cutts responding to Paul Irish's question about PushState for SEO:
http://youtu.be/yiAF9VdvRPw
Here is Google announcing full JavaScript support for the spider:
http://googlewebmastercentral.blogspot.de/2014/05/understanding-web-pages-better.html
The upshot is that Google supports PushState and will index PushState URLs.
See also Google webmaster tools' fetch as Googlebot. You will see your JavaScript (including Angular) is executed.
Bing
Here is Bing's announcement of support for pretty PushState URLs dated March 2013:
http://blogs.bing.com/webmaster/2013/03/21/search-engine-optimization-best-practices-for-ajax-urls/
Don't use HashBangs #!
Hashbang URLs were an ugly stopgap requiring the developer to provide a pre-rendered version of the site at a special location. They still work, but you don't need to use them.
Hashbang URLs look like this:
domain.example/#!path/to/resource
This would be paired with a metatag like this:
<meta name="fragment" content="!">
Google will not index them in this form, but will instead pull a static version of the site from the escaped_fragments URL and index that.
Pushstate URLs look like any ordinary URL:
domain.example/path/to/resource
The difference is that Angular handles them for you by intercepting the change to document.location and dealing with it in JavaScript.
If you want to use PushState URLs (and you probably do) take out all the old hash style URLs and metatags and simply enable HTML5 mode in your config block.
Testing your site
Google Webmaster tools now contains a tool which will allow you to fetch a URL as Google, and render JavaScript as Google renders it.
https://www.google.com/webmasters/tools/googlebot-fetch
Generating PushState URLs in Angular
To generate real URLs in Angular, rather than # prefixed ones, set HTML5 mode on your $locationProvider object.
$locationProvider.html5Mode(true);
Server Side
Since you are using real URLs, you will need to ensure the same template (plus some precomposed content) gets shipped by your server for all valid URLs. How you do this will vary depending on your server architecture.
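For example, with a Node/Express server this could be a static handler plus a catch-all route (the paths are assumptions):
var express = require('express');
var app = express();

// Static assets (scripts, styles, images) are matched first
app.use(express.static(__dirname + '/public'));

// Every other valid URL ships the same Angular template
app.get('*', function (req, res) {
  res.sendFile(__dirname + '/public/index.html');
});

app.listen(3000);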
Sitemap
Your app may use unusual forms of navigation, for example hover or scroll. To ensure Google is able to drive your app, I would probably suggest creating a sitemap, a simple list of all the URLs your app responds to. You can place this at the default location (/sitemap or /sitemap.xml), or tell Google about it using webmaster tools.
It's a good idea to have a sitemap anyway.
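A sitemap is just an XML list of the URLs your app responds to, e.g. (URLs illustrative):
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://example.com/</loc></url>
  <url><loc>http://example.com/about</loc></url>
</urlset>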
Browser support
Pushstate works in IE10. In older browsers, Angular will automatically fall back to hash style URLs.
A demo page
The following content is rendered using a pushstate URL with precomposition:
http://html5.gingerhost.com/london
As can be verified at this link, the content is indexed and appears in Google.
Serving 404 and 301 Header status codes
Because the search engine will always hit your server for every request, you can serve header status codes from your server and expect Google to see them.
Update May 2014
Google's crawlers now execute JavaScript - you can use the Google Webmaster Tools to better understand how your sites are rendered by Google.
Original answer
If you want to optimize your app for search engines there is unfortunately no way around serving a pre-rendered version to the crawler. You can read more about Google's recommendations for ajax and javascript-heavy sites here.
If this is an option I'd recommend reading this article about how to do SEO for Angular with server-side rendering.
I’m not sure what the crawler does when it encounters custom tags.
Let's get definitive about AngularJS and SEO
Google, Yahoo, Bing, and other search engines crawl the web in traditional ways using traditional crawlers. They run robots that crawl the HTML of web pages, collecting information along the way. They keep interesting words and look for links to other pages (these links, and the number of them, come into play with SEO).
So why don't search engines deal with javascript sites?
The answer has to do with the fact that the search engine robots work through headless browsers, and those browsers most often do not have a JavaScript rendering engine to render the JavaScript of a page. This works for most pages, as most static pages don't need JavaScript to render their content; it is already available.
What can be done about it?
Luckily, crawlers of the larger sites have started to implement a mechanism that allows us to make our JavaScript sites crawlable, but it requires us to implement a change to our site.
If we change our hashPrefix to be #! instead of simply #, then modern search engines will change the request to use _escaped_fragment_ instead of #!. (With HTML5 mode, i.e. where we have links without the hash prefix, we can implement this same feature by looking at the User Agent header in our backend).
That is to say, instead of a request from a normal browser that looks like:
http://www.ng-newsletter.com/#!/signup/page
A search engine will search the page with:
http://www.ng-newsletter.com/?_escaped_fragment_=/signup/page
We can set the hash prefix of our Angular apps using a built-in method from ngRoute:
angular.module('myApp', [])
  .config(['$locationProvider', function($locationProvider) {
    $locationProvider.hashPrefix('!');
  }]);
And, if we're using html5Mode, we will need to implement this using the meta tag:
<meta name="fragment" content="!">
Reminder: we can set html5Mode() with the $locationProvider provider:
angular.module('myApp', [])
  .config(['$locationProvider', function($locationProvider) {
    $locationProvider.html5Mode(true);
  }]);
Handling the search engine
We have a lot of opportunities to determine how we'll deal with actually delivering content to search engines as static HTML. We can host a backend ourselves, we can use a service to host a back-end for us, we can use a proxy to deliver the content, etc. Let's look at a few options:
Self-hosted
We can write a service to handle crawling our own site using a headless browser, like phantomjs or zombiejs, taking a snapshot of the page with rendered data and storing it as HTML. Whenever we see the query string ?_escaped_fragment_ in a search request, we can deliver the static HTML snapshot we took of the page instead of the page rendered only through JS. This requires us to have a backend that delivers our pages with conditional logic in the middle. We can use something like prerender.io's backend as a starting point to run this ourselves. Of course, we still need to handle the proxying and the snippet handling, but it's a good start.
With a paid service
The easiest and fastest way to get content into search engines is to use a service. Brombone, seo.js, seo4ajax, and prerender.io are good examples of services that will host the above content rendering for you. This is a good option for times when we don't want to deal with running a server/proxy. Also, it's usually super quick.
For more information about Angular and SEO, we wrote an extensive tutorial on it at http://www.ng-newsletter.com/posts/serious-angular-seo.html and we detailed it even more in our book ng-book: The Complete Book on AngularJS. Check it out at ng-book.com.
You should really check out the tutorial on building an SEO-friendly AngularJS site on the year of moo blog. He walks you through all the steps outlined on Angular's documentation. http://www.yearofmoo.com/2012/11/angularjs-and-seo.html
Using this technique, the search engine sees the expanded HTML instead of the custom tags.
This has drastically changed.
http://searchengineland.com/bing-offers-recommendations-for-seo-friendly-ajax-suggests-html5-pushstate-152946
If you use:
$locationProvider.html5Mode(true);
you are set.
No more rendering pages.
Things have changed quite a bit since this question was asked. There are now options to let Google index your AngularJS site. The easiest option I found was to use the free http://prerender.io service, which will generate the crawlable pages for you and serve them to the search engines. It is supported on almost all server-side web platforms. I have recently started using them and the support is excellent too.
I do not have any affiliation with them, this is coming from a happy user.
Angular's own website serves simplified content to search engines: http://docs.angularjs.org/?_escaped_fragment_=/tutorial/step_09
Say your Angular app is consuming a Node.js/Express-driven JSON api, like /api/path/to/resource. Perhaps you could redirect any requests with ?_escaped_fragment_ to /api/path/to/resource.html, and use content negotiation to render an HTML template of the content, rather than return the JSON data.
The only thing is, your Angular routes would need to match 1:1 with your REST API.
EDIT: I'm realizing that this has the potential to really muddy up your REST api and I don't recommend doing it outside of very simple use-cases where it might be a natural fit.
Instead, you can use an entirely different set of routes and controllers for your robot-friendly content. But then you're duplicating all of your AngularJS routes and controllers in Node/Express.
I've settled on generating snapshots with a headless browser, even though I feel that's a little less-than-ideal.
A good practice can be found here:
http://scotch.io/tutorials/javascript/angularjs-seo-with-prerender-io?_escaped_fragment_=tag
As of now Google has changed their AJAX crawling proposal.
Times have changed. Today, as long as you're not blocking Googlebot from crawling your JavaScript or CSS files, we are generally able to render and understand your web pages like modern browsers.
tl;dr: [Google] are no longer recommending the AJAX crawling proposal [Google] made back in 2009.
Google's Crawlable Ajax Spec, as referenced in the other answers here, is basically the answer.
If you're interested in how other search engines and social bots deal with the same issues I wrote up the state of art here: http://blog.ajaxsnapshots.com/2013/11/googles-crawlable-ajax-specification.html
I work for https://ajaxsnapshots.com, a company that implements the Crawlable Ajax Spec as a service - the information in that report is based on observations from our logs.
I have found an elegant solution that would cover most of your bases. I wrote about it initially here and answered another similar Stack Overflow question here which references it.
FYI this solution also includes hard coded fallback tags in case JavaScript isn't picked up by the crawler. I haven't explicitly outlined it, but it is worth mentioning that you should be activating HTML5 mode for proper URL support.
Also note: these aren't the complete files, just the important parts of those that are relevant. I can't help with writing the boilerplate for directives, services, etc.
app.js
This is where you provide the custom metadata for each of your routes (title, description, etc.)
$routeProvider
  .when('/', {
    templateUrl: 'views/homepage.html',
    controller: 'HomepageCtrl',
    metadata: {
      title: 'The Base Page Title',
      description: 'The Base Page Description'
    }
  })
  .when('/about', {
    templateUrl: 'views/about.html',
    controller: 'AboutCtrl',
    metadata: {
      title: 'The About Page Title',
      description: 'The About Page Description'
    }
  })
metadata-service.js (service)
Sets the custom metadata options or uses defaults as fallbacks.
var self = this;

// Set custom options or use provided fallback (default) options
self.loadMetadata = function(metadata) {
  self.title = document.title = metadata.title || 'Fallback Title';
  self.description = metadata.description || 'Fallback Description';
  self.url = metadata.url || $location.absUrl();
  self.image = metadata.image || 'fallbackimage.jpg';
  self.ogpType = metadata.ogpType || 'website';
  self.twitterCard = metadata.twitterCard || 'summary_large_image';
  self.twitterSite = metadata.twitterSite || '@fallback_handle';
};

// Route change handler, sets the route's defined metadata
$rootScope.$on('$routeChangeSuccess', function (event, newRoute) {
  self.loadMetadata(newRoute.metadata);
});
metaproperty.js (directive)
Packages the metadata service results for the view.
return {
  restrict: 'A',
  scope: {
    metaproperty: '@'
  },
  link: function postLink(scope, element, attrs) {
    scope.default = element.attr('content');
    scope.metadata = metadataService;

    // Watch for metadata changes and set content
    scope.$watch('metadata', function (newVal, oldVal) {
      setContent(newVal);
    }, true);

    // Set the content attribute with new metadataService value or back to the default
    function setContent(metadata) {
      var content = metadata[scope.metaproperty] || scope.default;
      element.attr('content', content);
    }

    setContent(scope.metadata);
  }
};
index.html
Complete with the hardcoded fallback tags mentioned earlier, for crawlers that can't pick up any JavaScript.
<head>
  <title>Fallback Title</title>
  <meta name="description" metaproperty="description" content="Fallback Description">

  <!-- Open Graph Protocol Tags -->
  <meta property="og:url" content="fallbackurl.example" metaproperty="url">
  <meta property="og:title" content="Fallback Title" metaproperty="title">
  <meta property="og:description" content="Fallback Description" metaproperty="description">
  <meta property="og:type" content="website" metaproperty="ogpType">
  <meta property="og:image" content="fallbackimage.jpg" metaproperty="image">

  <!-- Twitter Card Tags -->
  <meta name="twitter:card" content="summary_large_image" metaproperty="twitterCard">
  <meta name="twitter:title" content="Fallback Title" metaproperty="title">
  <meta name="twitter:description" content="Fallback Description" metaproperty="description">
  <meta name="twitter:site" content="@fallback_handle" metaproperty="twitterSite">
  <meta name="twitter:image:src" content="fallbackimage.jpg" metaproperty="image">
</head>
This should help dramatically with most search engine use cases. If you want fully dynamic rendering for social network crawlers (which are iffy on JavaScript support), you'll still have to use one of the pre-rendering services mentioned in some of the other answers.
With Angular Universal, you can generate landing pages for the app that look like the complete app and then load your Angular app behind it.
Angular Universal generates pure HTML, meaning no-JavaScript pages, on the server side and serves them to users without delay, so you can deal with any crawler, bot, and user (even those with a slow CPU or network). Then you can redirect them by links/buttons to your actual Angular app that has already loaded behind it. This solution is recommended by the official site. - More info about SEO and Angular Universal -
Use something like PreRender, it makes static pages of your site so search engines can index it.
Here you can find out for what platforms it is available: https://prerender.io/documentation/install-middleware#asp-net
Crawlers (or bots) are designed to crawl the HTML content of web pages, but AJAX operations for asynchronous data fetching became a problem for them, since it takes some time to render a page and show dynamic content on it. Similarly, AngularJS uses an asynchronous model, which creates problems for Google's crawlers.
Some developers create basic HTML pages with real data and serve these pages from the server side at crawl time. We can render the same pages with PhantomJS on the server side for requests that have _escaped_fragment_ (because Google looks for #! in our site URLs, takes everything after the #!, and adds it to the _escaped_fragment_ query parameter). For more detail please read this blog.
The crawlers do not need a rich, prettily-styled GUI; they only want to see the content, so you do not need to give them a snapshot of a page that has been built for humans.
My solution: give the crawler what the crawler wants.
You must think about what the crawler wants, and give it only that.
TIP: don't mess with the backend. Just add a little server-side front view using the same API.