How to reload an MVC page? - razor

This is an MVC Razor page that works fine on first load.
If I reload from JavaScript, or even refresh via the browser menu (Chrome), I get
Server Error in '/' Application.
>The view 'actionChangeUserSentence' or its master was not found or no view
>engine supports the searched locations. The following locations were searched:
>~/Views/Conduct/actionChangeUserSentence.aspx
>~/Views/Conduct/actionChangeUserSentence.ascx
>~/Views/Shared/actionChangeUserSentence.aspx
>~/Views/Shared/actionChangeUserSentence.ascx
>~/Views/Conduct/actionChangeUserSentence.cshtml
>~/Views/Conduct/actionChangeUserSentence.vbhtml
>~/Views/Shared/actionChangeUserSentence.cshtml
>~/Views/Shared/actionChangeUserSentence.vbhtml
> and so on...
meaning the URL string is being treated as a non-MVC URL.
I tried several possibilities to reload, including:
-location.reload();
-window.location.reload(false); with both false and true
-location.href = location.href;
-history.go(0);
By the way, the reason I need to refresh is that JavaScript drawing on an HTML canvas only works after a full page (re)load, for reasons still unknown. But in any case, a browser menu refresh should of course always work, I guess...
Thanks for any help.

I suggest putting that canvas drawing code in an on-ready function; that may help you.
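For example, a minimal sketch of that idea (the canvas id and the drawing call are placeholders, not the asker's actual code): wait for the DOM to be ready, then draw:

document.addEventListener('DOMContentLoaded', function () {
  // 'myCanvas' is a hypothetical id; replace it with your own canvas element
  var canvas = document.getElementById('myCanvas');
  if (canvas && canvas.getContext) {
    var ctx = canvas.getContext('2d');
    ctx.fillRect(10, 10, 100, 50); // example drawing call
  }
});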

Just use Response.Redirect("/<your page>");

Related

How can you click an href link but block the linked page from displaying

It was hard to encapsulate my question in the title.
I've rewritten a personal web page that made extensive use of javascript. I'm simplifying matters (or so I hope).
I use a home automation server that has a REST interface. I can send commands such as 'http://192.168.0.111/rest/nodes/7%2032%20CD%201/cmd/DON' to turn on a light. I'm putting links on my simplified web page using href tags. The problem is that clicking one opens the REST interface page and displays the XML result. I'd like to avoid having that page show up, i.e. when you click on the link it 'goes' to the link but doesn't change the page.
This seems like it should/would not be possible but I've learned not to underestimate the community.
Any thoughts?
You can use the fetch function if your browser is modern enough:
document.querySelector(lightSelector).addEventListener('click', e => {
  e.preventDefault(); // prevent the REST page from opening
  fetch(lightURL).then(response => {
    // put promises / response handling here
  });
});
The variables lightSelector and lightURL are expected to be defined somewhere.
The code listens for clicks on the link, and when one comes, we prevent the REST page from opening (your intention) and then send a request to your light URL with fetch. You can add promises to deal with the response (e.g. printing "light on" or "light off") using then, but that's another matter.
If you make the "link" a button, you can get rid of e.preventDefault(); which is the part preventing the REST page from opening.
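For instance, a small sketch of that button variant (lightButtonSelector is a placeholder, like lightSelector above):

document.querySelector(lightButtonSelector).addEventListener('click', () => {
  // a plain button (type="button") has no default navigation, so no preventDefault() is needed
  fetch(lightURL).then(() => console.log('light toggled'));
});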

How do I generate SEO-friendly markup for a single-page web app? [duplicate]

There are a lot of cool tools for making powerful "single-page" JavaScript websites nowadays. In my opinion, this is done right by letting the server act as an API (and nothing more) and letting the client handle all of the HTML generation stuff. The problem with this "pattern" is the lack of search engine support. I can think of two solutions:
When the user enters the website, let the server render the page exactly as the client would upon navigation. So if I go to http://example.com/my_path directly the server would render the same thing as the client would if I go to /my_path through pushState.
Let the server provide a special website only for the search engine bots. If a normal user visits http://example.com/my_path the server should give him a JavaScript heavy version of the website. But if the Google bot visits, the server should give it some minimal HTML with the content I want Google to index.
The first solution is discussed further here. I have been working on a website doing this and it's not a very nice experience. It's not DRY and in my case I had to use two different template engines for the client and the server.
I think I have seen the second solution for some good ol' Flash websites. I like this approach much more than the first one and with the right tool on the server it could be done quite painlessly.
So what I'm really wondering is the following:
Can you think of any better solution?
What are the disadvantages with the second solution? If Google in some way finds out that I'm not serving the exact same content for the Google bot as a regular user, would I then be punished in the search results?
While #2 might be "easier" for you as a developer, it only provides search engine crawling. And yes, if Google finds out you're serving different content, you might be penalized (I'm not an expert on that, but I have heard of it happening).
Both SEO and accessibility (not just for people with disabilities, but accessibility via mobile devices, touch-screen devices, and other non-standard computing / internet-enabled platforms) have a similar underlying philosophy: semantically rich markup that is "accessible" (i.e. can be accessed, viewed, read, processed, or otherwise used) by all these different browsers. A screen reader, a search engine crawler, or a user with JavaScript enabled should all be able to use/index/understand your site's core functionality without issue.
pushState does not add to this burden, in my experience. It only brings what used to be an afterthought and "if we have time" to the forefront of web development.
What you describe in option #1 is usually the best way to go - but, like other accessibility and SEO issues, doing this with pushState in a JavaScript-heavy app requires up-front planning or it will become a significant burden. It should be baked into the page and application architecture from the start - retrofitting is painful and will cause more duplication than is necessary.
I've been working with pushState and SEO recently for a couple of different applications, and I found what I think is a good approach. It basically follows your item #1, but accounts for not duplicating html / templates.
Most of the info can be found in these two blog posts:
http://lostechies.com/derickbailey/2011/09/06/test-driving-backbone-views-with-jquery-templates-the-jasmine-gem-and-jasmine-jquery/
and
http://lostechies.com/derickbailey/2011/06/22/rendering-a-rails-partial-as-a-jquery-template/
The gist of it is that I use ERB or HAML templates (running Ruby on Rails, Sinatra, etc) for my server side render and to create the client side templates that Backbone can use, as well as for my Jasmine JavaScript specs. This cuts out the duplication of markup between the server side and the client side.
From there, you need to take a few additional steps to have your JavaScript work with the HTML that is rendered by the server - true progressive enhancement; taking the semantic markup that got delivered and enhancing it with JavaScript.
For example, i'm building an image gallery application with pushState. If you request /images/1 from the server, it will render the entire image gallery on the server and send all of the HTML, CSS and JavaScript down to your browser. If you have JavaScript disabled, it will work perfectly fine. Every action you take will request a different URL from the server and the server will render all of the markup for your browser. If you have JavaScript enabled, though, the JavaScript will pick up the already rendered HTML along with a few variables generated by the server and take over from there.
Here's an example:
<form id="foo">
Name: <input id="name"><button id="say">Say My Name!</button>
</form>
After the server renders this, the JavaScript would pick it up (using a Backbone.js view in this example)
FooView = Backbone.View.extend({
  events: {
    "change #name": "setName",
    "click #say": "sayName"
  },
  setName: function(e){
    var name = $(e.currentTarget).val();
    this.model.set({name: name});
  },
  sayName: function(e){
    e.preventDefault();
    var name = this.model.get("name");
    alert("Hello " + name);
  },
  render: function(){
    // do some rendering here, for when this is just running JavaScript
  }
});

$(function(){
  var model = new MyModel();
  var view = new FooView({
    model: model,
    el: $("#foo")
  });
});
This is a very simple example, but I think it gets the point across.
When I instantiate the view after the page loads, I'm providing the existing content of the form that was rendered by the server to the view instance as the el for the view. I am not calling render or having the view generate an el for me when the first view is loaded. I have a render method available for after the view is up and running and the page is all JavaScript. This lets me re-render the view later if I need to.
Clicking the "Say My Name" button with JavaScript enabled will cause an alert box. Without JavaScript, it would post back to the server and the server could render the name to an html element somewhere.
Edit
Consider a more complex example, where you have a list that needs to be attached (from the comments below this)
Say you have a list of users in a <ul> tag. This list was rendered by the server when the browser made a request, and the result looks something like:
<ul id="user-list">
<li data-id="1">Bob
<li data-id="2">Mary
<li data-id="3">Frank
<li data-id="4">Jane
</ul>
Now you need to loop through this list and attach a Backbone view and model to each of the <li> items. With the use of the data-id attribute, you can find the model that each tag comes from easily. You'll then need a collection view and item view that is smart enough to attach itself to this html.
UserListView = Backbone.View.extend({
  attach: function(){
    var self = this; // keep a reference; inside .each(), `this` is the DOM element
    this.el = $("#user-list");
    this.$("li").each(function(index){
      var userEl = $(this);
      var id = userEl.attr("data-id");
      var user = self.collection.get(id);
      new UserView({
        model: user,
        el: userEl
      });
    });
  }
});

UserView = Backbone.View.extend({
  initialize: function(){
    this.model.bind("change:name", this.updateName, this);
  },
  updateName: function(model, val){
    $(this.el).text(val); // works whether el is a raw DOM element or a jQuery object
  }
});

var userData = {...};
var userList = new UserCollection(userData);
var userListView = new UserListView({collection: userList});
userListView.attach();
In this example, the UserListView will loop through all of the <li> tags and attach a view object with the correct model for each one. Each of those views sets up an event handler for the model's name change event and updates the displayed text of the element when a change occurs.
This kind of process, to take the html that the server rendered and have my JavaScript take over and run it, is a great way to get things rolling for SEO, Accessibility, and pushState support.
Hope that helps.
I think you need this: http://code.google.com/web/ajaxcrawling/
You can also install a special backend that "renders" your page by running javascript on the server, and then serves that to google.
Combine both things and you have a solution without programming things twice. (As long as your app is fully controllable via anchor fragments.)
So, it seems that the main concern is being DRY.
If you're using pushState, have your server send the exact same code for all URLs (those without a file extension, which is used to serve images, etc.): "/mydir/myfile", "/myotherdir/myotherfile" or root "/" -- all requests receive the exact same code. You need to have some kind of URL rewrite engine. You can also serve a tiny bit of HTML and let the rest come from your CDN (using require.js to manage dependencies -- see https://stackoverflow.com/a/13813102/1595913).
(Test the link's validity by converting the link to your URL scheme and checking that content exists by querying a static or a dynamic source. If it's not valid, send a 404 response.)
When the request is not from a Google bot, you just process normally.
If the request is from a Google bot, you use phantom.js -- a headless WebKit browser ("A headless browser is simply a full-featured web browser with no visual interface.") -- to render the HTML and JavaScript on the server and send the Google bot the resulting HTML. As the bot parses the HTML it can hit your other "pushState" links such as /somepage; the server rewrites the URL to your application file, loads it in phantom.js, the resulting HTML is sent to the bot, and so on...
For your HTML I'm assuming you're using normal links with some kind of hijacking (e.g. with Backbone.js: https://stackoverflow.com/a/9331734/1595913).
To avoid confusion with any links, separate your API code that serves JSON into a separate subdomain, e.g. api.mysite.com.
To improve performance you can pre-process your site pages for search engines ahead of time, during off hours, by creating static versions of the pages using the same phantom.js mechanism, and subsequently serve those static pages to Google bots. Pre-processing can be done with a simple app that parses <a> tags. In this case handling 404s is easier, since you can simply check for the existence of a static file whose name contains the URL path.
If you use #! hash-bang syntax for your site links, a similar scenario applies, except that the server's URL rewrite engine would look for _escaped_fragment_ in the URL and would format the URL to your URL scheme.
There are a couple of integrations of node.js with phantom.js on GitHub, and you can use node.js as the web server to produce the HTML output.
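A rough sketch of that setup (not a drop-in implementation): a plain Node.js server that serves prerendered HTML to bots and the normal app shell to everyone else. renderWithPhantom() stands in for whichever phantom.js integration you pick, and the bot check is deliberately simplistic:

var http = require('http');

var appShellHtml = '<!doctype html><html><head>...</head><body></body></html>'; // your SPA shell

function isBot(userAgent) {
  return /googlebot|bingbot|baiduspider/i.test(userAgent || '');
}

http.createServer(function (req, res) {
  if (isBot(req.headers['user-agent'])) {
    // renderWithPhantom is hypothetical: load the app in phantom.js for req.url
    // and call back with the rendered HTML
    renderWithPhantom(req.url, function (html) {
      res.writeHead(200, { 'Content-Type': 'text/html' });
      res.end(html);
    });
  } else {
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end(appShellHtml); // the same shell for every pushState URL
  }
}).listen(8080);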
Here are a couple of examples using phantom.js for seo:
http://backbonetutorials.com/seo-for-single-page-apps/
http://thedigitalself.com/blog/seo-and-javascript-with-phantomjs-server-side-rendering
If you're using Rails, try poirot. It's a gem that makes it dead simple to reuse mustache or handlebars templates client and server side.
Create a file in your views like _some_thingy.html.mustache.
Render server side:
<%= render :partial => 'some_thingy', object: my_model %>
Put the template in your head for client-side use:
<%= template_include_tag 'some_thingy' %>
Render client side:
html = poirot.someThingy(my_model)
To take a slightly different angle, your second solution would be the correct one in terms of accessibility...you would be providing alternative content to users who cannot use javascript (those with screen readers, etc.).
This would automatically add the benefits of SEO and, in my opinion, would not be seen as a 'naughty' technique by Google.
Interesting. I have been searching around for viable solutions but it seems to be quite problematic.
I was actually leaning more towards your 2nd approach:
Let the server provide a special website only for the search engine bots. If a normal user visits http://example.com/my_path the server should give him a JavaScript heavy version of the website. But if the Google bot visits, the server should give it some minimal HTML with the content I want Google to index.
Here's my take on solving the problem. Although it is not confirmed to work, it might provide some insight or ideas for other developers.
Assume you're using a JS framework that supports "push state" functionality, and your backend framework is Ruby on Rails. You have a simple blog site and you would like search engines to index all your article index and show pages.
Let's say you have your routes set up like this:
resources :articles
match "*path", "main#index"
Ensure that every server-side controller renders the same template that your client-side framework requires to run (html/css/javascript/etc). If none of the controllers are matched in the request (in this example we only have a RESTful set of actions for the ArticlesController), then just match anything else and just render the template and let the client-side framework handle the routing. The only difference between hitting a controller and hitting the wildcard matcher would be the ability to render content based on the URL that was requested to JavaScript-disabled devices.
From what I understand, it is a bad idea to render content that isn't visible to browsers: if Google indexes it, people come through Google to a given page, and there isn't any content, then you're probably going to be penalised. What comes to mind is rendering the content in a div node that you hide with display: none in CSS.
However, I'm pretty sure it doesn't matter if you simply do this:
<div id="no-js">
<h1><%= #article.title %></h1>
<p><%= #article.description %></p>
<p><%= #article.content %></p>
</div>
And then using JavaScript, which doesn't get run when a JavaScript-disabled device opens the page:
$("#no-js").remove() # jQuery
This way, for Google, and for anyone with JavaScript-disabled devices, they would see the raw/static content. So the content is physically there and is visible to anyone with JavaScript-disabled devices.
But when a user visits the same page and actually has JavaScript enabled, the #no-js node will be removed so it doesn't clutter up your application. Then your client-side framework will handle the request through its router and display what a user should see when JavaScript is enabled.
I think this might be a valid and fairly easy technique to use. Although that might depend on the complexity of your website/application.
Though, please correct me if it isn't. Just thought I'd share my thoughts.
Use Node.js on the server side, browserify your client-side code, and route each HTTP request's URI (except for static HTTP resources) through a server-side client to provide the first "bootsnap" (a snapshot of the page's state). Use something like jsdom to handle jQuery DOM operations on the server. After the bootsnap is returned, set up the websocket connection. It's probably best to differentiate between a websocket client and a server-side client by making some kind of wrapper connection on the client side (the server-side client can communicate with the server directly). I've been working on something like this: https://github.com/jvanveen/rnet/
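A loose sketch of that flow, not the rnet code itself (the jsdom API shown is the older env() style, and renderApp is a hypothetical stand-in for running your browserified client code on the server):

var http = require('http');
var jsdom = require('jsdom');

http.createServer(function (req, res) {
  // build a server-side DOM with jQuery loaded, run the client code against it,
  // then send the resulting HTML as the first "bootsnap"
  jsdom.env(
    '<!doctype html><html><body><div id="app"></div></body></html>',
    ['http://code.jquery.com/jquery-1.8.3.min.js'],
    function (errors, window) {
      renderApp(window, window.$, req.url); // hypothetical: your app's server-side entry point
      res.writeHead(200, { 'Content-Type': 'text/html' });
      res.end(window.document.documentElement.outerHTML);
    }
  );
}).listen(8080);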
Use Google Closure Templates to render pages. They compile to JavaScript or Java, so it is easy to render the page on either the client or the server side. On the first encounter with every client, render the HTML and add the JavaScript as a link in the header. The crawler will read only the HTML, but the browser will execute your script. All subsequent requests from the browser can be made against the API to minimize traffic.
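For the client half, a hedged sketch: a compiled Soy template is just a JavaScript function you can call with a data object (myapp.templates.page and the #content element are made-up names, not part of Closure itself):

// myapp.templates.page is hypothetical: the function produced by compiling a Closure (Soy) template
var html = myapp.templates.page({ title: 'Hello', items: ['a', 'b'] });
document.getElementById('content').innerHTML = html;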
This might help you: https://github.com/sharjeel619/SPA-SEO
Logic
- A browser requests your single page application from the server, which is going to be loaded from a single index.html file.
- You program some intermediary server code which intercepts the client request and differentiates whether the request came from a browser or some social crawler bot (a minimal sketch of this check is shown below).
- If the request came from some crawler bot, make an API call to your back-end server, gather the data you need, fill in that data to HTML meta tags and return those tags in string format back to the client.
- If the request didn't come from some crawler bot, then simply return the index.html file from the build or dist folder of your single page application.
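A minimal sketch of that intermediary (the bot regex, back-end URL, and meta-tag fields are placeholders, not part of the linked project):

var express = require('express');
var request = require('request'); // any HTTP client will do
var app = express();

var BOT_UA = /googlebot|bingbot|twitterbot|facebookexternalhit/i;

app.get('*', function (req, res) {
  if (BOT_UA.test(req.headers['user-agent'] || '')) {
    // hypothetical back-end endpoint that returns the data for this path
    request('http://localhost:4000/api/page-meta?path=' + encodeURIComponent(req.path),
      function (err, response, body) {
        var meta = err ? {} : JSON.parse(body);
        res.send('<!doctype html><html><head>' +
          '<title>' + (meta.title || '') + '</title>' +
          '<meta name="description" content="' + (meta.description || '') + '">' +
          '</head><body></body></html>');
      });
  } else {
    // normal users get the single index.html from the SPA build
    res.sendFile(__dirname + '/dist/index.html');
  }
});

app.listen(3000);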

Using history.pushState in Angular results in "10 $digest() iterations reached. Aborting!" error

I'm trying to change the URL in my app from "http://www.test.com/foo" to "http://www.test.com/bar+someVariable" (someVariable is a string that I receive from an HTTP request inside bar's controller) using history.pushState(). In my routes I enabled html5mode and everything works fine. I'm also using $location.path() to switch between views and controllers as instructed in the docs. Now, once the app switches view and controller, I added history.pushState(null, null, "/bar" + someVariable) to "/bar"'s controller. Everything works and the URL is updated, but in the console I receive the "10 $digest() iterations reached. Aborting!" error. I suspect that activating the history.pushState function is somehow interfering with Angular's $location or $route service.
What is the correct way to use history.pushState() within angular without receiving the $digest error?
By the way I'm using angular 1.0.3
Thanks ahead,
Gidon
Change the path with
$location.path('/newValue')
See: http://docs.angularjs.org/guide/dev_guide.services.$location
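For example, a minimal sketch (the module and controller names, endpoint, and response field are made up, not the asker's code): let $location update the URL so Angular's routing stays in sync, instead of calling history.pushState directly:

var app = angular.module('myApp'); // assuming an existing module named myApp

app.controller('BarCtrl', ['$scope', '$http', '$location', function ($scope, $http, $location) {
  // hypothetical request standing in for the HTTP call mentioned in the question
  $http.get('/api/sentence').success(function (data) {
    $location.path('/bar' + data.someVariable); // instead of history.pushState(null, null, ...)
  });
}]);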
It is hard to know for sure without seeing the relevant source, but this is a common issue in older versions of IE (8 and 9 mostly I think). The solution that worked for me a few weeks ago when I encountered this (and may work for you if you're using IE) was changing my anchor tags in my navigation.
I had:
what fixed it:

CodeIgniter + jQuery(ajax) + HTML5 pushstate: How can I make a clean navigation with real URLs?

I'm currently trying to build a new website, nothing special, nice and small, but I'm stuck at the very beginning.
My problems are clean URLs and page navigation. I want to do it "the right way".
What I would like to have:
I use CodeIgniter to get clean URLs like
"www.example.com/hello/world"
jQuery helps me using ajax, so I can
.load() additional content
Now I want to use HTML5 features like pushstate to
get rid of the # in the URL
It should be possible to go back and forth without a page refresh but the page will still display the right content according to the current URL.
It should also be possible to reload a page without getting a 404 error. The site should exist thanks to CodeIgniter. (there is a controller and a view)
For example:
A very basic website. Two links, called "foo" and "bar", and an empty div box beneath them.
The basic URL is example.com
When you click on "foo" the URL changes to "example.com/foo" without reloading and the div box gets new content with jQuery .load(). The same goes for the other link, just of course different content and URL.
After clicking "foo" and then "bar" the back button will bring me back to "example.com/foo" with the according content. If I load this link directly or refresh the page, it will look the same. No 404 error or something.
Just think about this page and tell me how you would do this.
I would really love to have this kind of navigation and so I tried several things.
So far...
I know how to use CodeIgniter to get the URLs like this. I know how to use jQuery to load additional content and while I don't fully understand the html5 pushstate stuff, I at least got it to work somehow.
But I can't get it to work all together.
My code right now is a mess, that's the reason I don't really want to post it here. I looked at different tutorials and copy pasted some code together. Would be better to upload my CI folder I guess.
Some of the tutorials I looked at:
Dive into HTML5
HTML5 demos
Mozilla manipulating the browser history
Saner HTML5 history
Github: History.js
(max. number of links reached :/)
I think my main problem is that everybody tries to make it compatible with all browsers and different versions, adds scripts/jQuery plugins and whatnot, and I get confused by all the additional code. There is more code between my script tags than actual HTML content.
Could somebody post the most basic method how to use HTML5 for my example page?
My failed attempt:
On my test page, when I go back, the URL changes, but the div box will still show the same content, not the old one. I also don't know how to change the URL in the script according to the href attribute from the link. Is there something like $(this).attr('href'), that changes according to which link I click? Right now I would have to use a script for every link, which of course is bad.
When I refresh the site, CodeIgniter kicks in and loads the view, but really only the view by itself, the one I loaded with ajax, not the whole page. But I guess that should be easy to fix with a layout and the right controller settings. Haven't paid much attention to this yet.
Thanks in advance for any help.
If you have suggestions, ideas, or simple just want to mention something, please let me know.
regards
DiLer
I've put up a successful minimal example of HTML5 history here: http://cairo140.github.com/html5-history-example/one.html
The easiest way to get into HTML5 pushstate in my opinion is to ignore the framework for a while and use the most simplistic state transition possible: a wholesale replacement of the <body> and <title> elements. Outside of those elements, the rest of the markup is probably just boilerplate, although if it varies (e.g., if you change the class on HTML in the backend), you can adapt that.
What a dynamic backend like CI does is essentially fake the existence of data at particular locations (identified by the URL) by generating it dynamically on the fly. We can abstract away from the effect of the framework by literally creating the resources and putting them in locations through which your web server (Apache, probably) will simply identify them and feed them on through. We'll have a very simple file system structure relative to the domain root:
/one.html
/two.html
/assets/application.js
Those are the only three files we're working with.
Here's the code for the two HTML files. If you're at the level when you're dealing with HTML5 features, you should be able to understand the markup, but if I didn't make something clear, just leave a comment, and I'll walk you through it:
one.html
<!doctype html>
<html>
<head>
  <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.js"></script>
  <script src="assets/application.js"></script>
  <title>One</title>
</head>
<body>
  <div class="container">
    <h1>One</h1>
    <a href="two.html">Two</a>
  </div>
</body>
</html>
two.html
<!doctype html>
<html>
<head>
  <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.js"></script>
  <script src="assets/application.js"></script>
  <title>Two</title>
</head>
<body>
  <div class="container">
    <h1>Two</h1>
    <a href="one.html">One</a>
  </div>
</body>
</html>
You'll notice that if you load one.html through your browser, you can click on the link to two.html, which will load and display a new page. And from two.html, you can do the same back to one.html. Cool.
Now, for the history part:
assets/application.js
$(function(){
  var replacePage = function(url) {
    $.ajax({
      url: url,
      type: 'get',
      dataType: 'html',
      success: function(data){
        var dom = $(data);
        var title = dom.filter('title').text();
        var html = dom.filter('.container').html();
        $('title').text(title);
        $('.container').html(html);
      }
    });
  };

  $('a').live('click', function(e){
    history.pushState(null, null, this.href);
    replacePage(this.href);
    e.preventDefault();
  });

  $(window).bind('popstate', function(){
    replacePage(location.pathname);
  });
});
How it works
I define replacePage within the jQuery ready callback to do some straightforward loading of the URL in the argument and to replace the contents of the title and .container elements with those retrieved remotely.
The live call means that any link clicked on the page will trigger the callback, and the callback pushes the state to the href in the link and calls replacePage. It also uses e.preventDefault to prevent the link from being processed the normal way.
Finally, there's a popstate event that fires when a user uses browser-based page navigation (back, forward). We bind a simple callback to that event. Of note is that I couldn't get the version on the Dive Into HTML page to work for some reason in FF for Mac. No clue why.
How to extend it
This extremely basic example can more or less be transplanted onto any site because it does a very uncreative transition: HTML replacement. I suggest you use this as a foundation and transition into more creative transitions. One example of what you could do would be to emulate what Github does with the directory navigation in its repositories. It's an intermediate manoeuvre that requires floats and overflow management. You could start with a simpler transition like appending the .container in the loaded page to the DOM and then animating the old container to {height: 0}.
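As a concrete sketch of that simpler transition (meant to sit inside the success callback of replacePage above, where data is the fetched page; the 300ms duration is arbitrary):

var newContainer = $(data).filter('.container');
var oldContainer = $('.container').first();
newContainer.insertAfter(oldContainer);
oldContainer.animate({ height: 0 }, 300, function () {
  $(this).remove(); // drop the old container once the collapse finishes
});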
Addressing your specific "For example"
You're on the right track for using HTML5 history, but you need to clarify your idea of exactly what /foo and /bar will contain. Basically, you're going to have three pages: /, /foo, and /bar. / will have an empty container div. /foo will be identical to / except that its container div has some foo content in it. /bar will be identical to /foo except that its container div has some bar content in it. Now, the question comes down to how you would extract the contents of the container through JavaScript. Assuming that your /foo body tag looked something like this:
<body>
  <a href="/foo">foo</a>
  <a href="/bar">bar</a>
  <div class="container">foo</div>
</body>
Then you would extract it from the response data through var html = $(data).filter('.container').html() and then put it back into the parent page through $('.container').html(html). You use filter instead of the much more reasonable find because, for some wacky reason, jQuery's DOM parser produces a jQuery object containing every child of the head and every child of the body elements instead of just a jQuery object wrapping the html element. I don't know why.
The rest is just adapting this back into the "vanilla" version above. If you are stuck at any particular stage, let me know, and I can guide you better through it.
Code
https://github.com/cairo140/html5-history-example
Try this in your controller:
if (!$this->input->is_ajax_request())
    $this->load->view('header');

$this->load->view('your_view', $data);

if (!$this->input->is_ajax_request())
    $this->load->view('footer');

Winforms web browser control not firing document complete with AJAX web site

The VB.Net desktop app uses the IE browser control to navigate the web. When a normal page loads, the document_complete event fires and I can read the resulting page and go from there. The issue I am having is that the page I am driving is written with AJAX, so the document complete event never fires. Furthermore, when you view the source of the page after it has loaded a new portion via AJAX, it hasn't changed. How are people handling this? What are my options?
This solution might solve your problem.
Prerequisites:
an AxWebBrowser control,
a reference to mshtml.dll
Dim axmshtml As mshtml.HTMLDocument = YourAxWebBrowserControl.Document
Dim HTMLSource As String = axmshtml.body.innerHTML 'html source, including DOM changes
If you know what you are looking for you can put the above code in a timer/loop
and simply monitor the page source for changes.
If wb is your webbrowser control, then instead of getting the HTML by using:
wb.DocumentText
use:
wb.Document.Body.InnerHtml
This will give you the updated html, reflecting the AJAX update.
As to detecting when the AJAX completes, for me it seems to be triggering a DocumentCompleted event. Not sure why it's different for you.
You need to interact with the Javascript code in the website using the methods on HtmlDocument.
I have seen this kind of behavior with C# when some AJAX scripts created a race condition. Adding the defer attribute to the script tag helped in that case. YMMV.
Not sure if this will work.
When the Ajax call completes, add a random anchor hash to the URL like so: foo.html#23234
then add your code to the NavigateComplete2 event.
http://msdn.microsoft.com/en-us/library/aa768334%28VS.85%29.aspx
I'm guessing that the page you load in your Windows app does an AJAX call, which appears to refresh the page. In that case, the document_complete event isn't fired, because the webpage itself isn't refreshed, only a portion of the page.
I found a similar question about this problem, with an accepted answer in VB.Net.
You can use the ProgressChanged event, it seems to fire during ajax calls