There are a lot of cool tools for making powerful "single-page" JavaScript websites nowadays. In my opinion, this is done right by letting the server act as an API (and nothing more) and letting the client handle all of the HTML generation stuff. The problem with this "pattern" is the lack of search engine support. I can think of two solutions:
When the user enters the website, let the server render the page exactly as the client would upon navigation. So if I go to http://example.com/my_path directly the server would render the same thing as the client would if I go to /my_path through pushState.
Let the server provide a special website only for the search engine bots. If a normal user visits http://example.com/my_path the server should give him a JavaScript heavy version of the website. But if the Google bot visits, the server should give it some minimal HTML with the content I want Google to index.
The first solution is discussed further here. I have been working on a website doing this and it's not a very nice experience. It's not DRY and in my case I had to use two different template engines for the client and the server.
I think I have seen the second solution for some good ol' Flash websites. I like this approach much more than the first one and with the right tool on the server it could be done quite painlessly.
So what I'm really wondering is the following:
Can you think of any better solution?
What are the disadvantages of the second solution? If Google somehow finds out that I'm not serving the exact same content to the Google bot as to a regular user, would I then be punished in the search results?
While #2 might be "easier" for you as a developer, it only provides search engine crawling. And yes, if Google finds out you're serving different content, you might be penalized (I'm not an expert on that, but I have heard of it happening).
SEO and accessibility (not just for people with disabilities, but also accessibility via mobile devices, touch-screen devices, and other non-standard computing / internet-enabled platforms) have a similar underlying philosophy: semantically rich markup that is "accessible" (i.e. can be accessed, viewed, read, processed, or otherwise used) by all of these different browsers. A screen reader, a search engine crawler, or a user with JavaScript enabled should all be able to use/index/understand your site's core functionality without issue.
pushState does not add to this burden, in my experience. It only brings what used to be an afterthought and "if we have time" to the forefront of web development.
What you describe in option #1 is usually the best way to go - but, like other accessibility and SEO issues, doing this with pushState in a JavaScript-heavy app requires up-front planning or it will become a significant burden. It should be baked into the page and application architecture from the start - retrofitting is painful and will cause more duplication than is necessary.
I've been working with pushState and SEO recently for a couple of different applications, and I found what I think is a good approach. It basically follows your item #1, but accounts for not duplicating HTML / templates.
Most of the info can be found in these two blog posts:
http://lostechies.com/derickbailey/2011/09/06/test-driving-backbone-views-with-jquery-templates-the-jasmine-gem-and-jasmine-jquery/
and
http://lostechies.com/derickbailey/2011/06/22/rendering-a-rails-partial-as-a-jquery-template/
The gist of it is that I use ERB or HAML templates (running Ruby on Rails, Sinatra, etc) for my server side render and to create the client side templates that Backbone can use, as well as for my Jasmine JavaScript specs. This cuts out the duplication of markup between the server side and the client side.
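On the client, the idea is that the server embeds the very same partial as a template the JavaScript can render from, instead of duplicating the markup. A minimal sketch of that, assuming the jQuery Templates plugin and a server-filled <script type="text/x-jquery-tmpl" id="user-tmpl"> tag (the IDs and view name here are my own, not from the linked posts):

UserItemView = Backbone.View.extend({
    render: function(){
        // .tmpl() comes from the jQuery Templates plugin; it renders the
        // server-embedded template with this model's attributes, so the
        // markup lives in exactly one ERB/HAML partial
        var rendered = $("#user-tmpl").tmpl(this.model.toJSON());
        this.el = rendered;
        return this;
    }
});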
From there, you need to take a few additional steps to have your JavaScript work with the HTML that is rendered by the server - true progressive enhancement; taking the semantic markup that got delivered and enhancing it with JavaScript.
For example, I'm building an image gallery application with pushState. If you request /images/1 from the server, it will render the entire image gallery on the server and send all of the HTML, CSS and JavaScript down to your browser. If you have JavaScript disabled, it will work perfectly fine. Every action you take will request a different URL from the server and the server will render all of the markup for your browser. If you have JavaScript enabled, though, the JavaScript will pick up the already rendered HTML along with a few variables generated by the server and take over from there.
Here's an example:
<form id="foo">
Name: <input id="name"><button id="say">Say My Name!</button>
</form>
After the server renders this, the JavaScript would pick it up (using a Backbone.js view in this example)
FooView = Backbone.View.extend({
    events: {
        "change #name": "setName",
        "click #say": "sayName"
    },

    setName: function(e){
        var name = $(e.currentTarget).val();
        this.model.set({name: name});
    },

    sayName: function(e){
        e.preventDefault();
        var name = this.model.get("name");
        alert("Hello " + name);
    },

    render: function(){
        // do some rendering here, for when this is just running JavaScript
    }
});

$(function(){
    var model = new MyModel();
    var view = new FooView({
        model: model,
        el: $("#foo")
    });
});
This is a very simple example, but I think it gets the point across.
When I instantiate the view after the page loads, I'm providing the existing content of the form that was rendered by the server to the view instance as its el. I am not calling render or having the view generate an el for me when the first view is loaded. I have a render method available for after the view is up and running and the page is all JavaScript. This lets me re-render the view later if I need to.
Clicking the "Say My Name" button with JavaScript enabled will cause an alert box. Without JavaScript, it would post back to the server and the server could render the name to an html element somewhere.
Edit
Consider a more complex example, where you have a list that needs to be attached (from the comments below this)
Say you have a list of users in a <ul> tag. This list was rendered by the server when the browser made a request, and the result looks something like:
<ul id="user-list">
<li data-id="1">Bob
<li data-id="2">Mary
<li data-id="3">Frank
<li data-id="4">Jane
</ul>
Now you need to loop through this list and attach a Backbone view and model to each of the <li> items. With the use of the data-id attribute, you can find the model that each tag comes from easily. You'll then need a collection view and item view that is smart enough to attach itself to this html.
UserListView = Backbone.View.extend({
    attach: function(){
        // keep a reference to the view, because `this` points at the
        // DOM element inside jQuery's each callback below
        var self = this;
        this.el = $("#user-list");
        this.$("li").each(function(index){
            var userEl = $(this);
            var id = userEl.attr("data-id");
            var user = self.collection.get(id);
            new UserView({
                model: user,
                el: userEl
            });
        });
    }
});
UserView = Backbone.View.extend({
    initialize: function(){
        this.model.bind("change:name", this.updateName, this);
    },

    updateName: function(model, val){
        this.el.text(val);
    }
});
var userData = {...};
var userList = new UserCollection(userData);
var userListView = new UserListView({collection: userList});
userListView.attach();
In this example, the UserListView will loop through all of the <li> tags and attach a view object with the correct model for each one. It sets up an event handler for the model's name change event and updates the displayed text of the element when a change occurs.
This kind of process, to take the html that the server rendered and have my JavaScript take over and run it, is a great way to get things rolling for SEO, Accessibility, and pushState support.
Hope that helps.
I think you need this: http://code.google.com/web/ajaxcrawling/
You can also install a special backend that "renders" your page by running JavaScript on the server, and then serves that to Google.
Combine both things and you have a solution without programming things twice. (As long as your app is fully controllable via anchor fragments.)
So, it seems that the main concern is being DRY.
If you're using pushState, have your server send the exact same code for all URLs (that don't contain a file extension; those serve images, etc.): "/mydir/myfile", "/myotherdir/myotherfile", or root "/" -- all requests receive the same code. You need some kind of URL rewrite engine. You can also serve a tiny bit of HTML and let the rest come from your CDN (using require.js to manage dependencies -- see https://stackoverflow.com/a/13813102/1595913).
(Test a link's validity by converting it to your URL scheme and checking for the existence of content by querying a static or a dynamic source; if it's not valid, send a 404 response.)
When the request is not from a Google bot, you just process normally.
If the request is from a Google bot, you use phantom.js -- a headless WebKit browser ("A headless browser is simply a full-featured web browser with no visual interface.") -- to render the HTML and JavaScript on the server and send the Google bot the resulting HTML. As the bot parses the HTML it can hit your other "pushState" links, e.g. <a href='/somepage'>mylink</a>; the server rewrites the URL to your application file, loads it in phantom.js, the resulting HTML is sent to the bot, and so on...
For your HTML I'm assuming you're using normal links with some kind of hijacking (e.g. with backbone.js, see https://stackoverflow.com/a/9331734/1595913).
To avoid confusion with any links, separate your API code that serves JSON into a separate subdomain, e.g. api.mysite.com.
To improve performance, you can pre-process your site's pages for search engines ahead of time, during off hours, by creating static versions of the pages with the same phantom.js mechanism, and then serve the static pages to Google bots. Preprocessing can be done with some simple app that can parse <a> tags. In this case handling 404 is easier, since you can simply check for the existence of a static file with a name that contains the URL path.
If you use the #! hashbang syntax for your site links, a similar scenario applies, except that the URL rewrite engine on the server would look out for _escaped_fragment_ in the URL and map it to your URL scheme.
There are a couple of integrations of node.js with phantom.js on github and you can use node.js as the web server to produce html output.
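Roughly, the routing described above could look like this (a sketch assuming Express; renderWithPhantom is a placeholder for whichever phantom.js integration you choose, not a real function):

var express = require("express");
var app = express();

function isBot(req){
    var ua = req.headers["user-agent"] || "";
    return /googlebot|bingbot|yandex/i.test(ua); // illustrative bot list
}

app.get("*", function(req, res){
    if (isBot(req)) {
        // Placeholder: load the URL in phantom.js, let the JavaScript run,
        // then hand the crawler the resulting static HTML.
        renderWithPhantom("http://localhost:3000" + req.url, function(html){
            res.send(html);
        });
    } else {
        // Normal users get the same single-page app shell for every URL.
        res.sendFile(__dirname + "/index.html");
    }
});

app.listen(3000);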
Here are a couple of examples using phantom.js for seo:
http://backbonetutorials.com/seo-for-single-page-apps/
http://thedigitalself.com/blog/seo-and-javascript-with-phantomjs-server-side-rendering
If you're using Rails, try poirot. It's a gem that makes it dead simple to reuse mustache or handlebars templates client and server side.
Create a file in your views like _some_thingy.html.mustache.
Render server side:
<%= render :partial => 'some_thingy', object: my_model %>
Put the template in your head for client-side use:
<%= template_include_tag 'some_thingy' %>
Render client side:
html = poirot.someThingy(my_model)
To take a slightly different angle, your second solution would be the correct one in terms of accessibility... you would be providing alternative content to users who cannot use JavaScript (those with screen readers, etc.).
This would automatically add the benefits of SEO and, in my opinion, would not be seen as a 'naughty' technique by Google.
Interesting. I have been searching around for viable solutions but it seems to be quite problematic.
I was actually leaning more towards your 2nd approach:
Let the server provide a special website only for the search engine
bots. If a normal user visits http://example.com/my_path the server
should give him a JavaScript heavy version of the website. But if the
Google bot visits, the server should give it some minimal HTML with
the content I want Google to index.
Here's my take on solving the problem. Although it is not confirmed to work, it might provide some insight or ideas for other developers.
Assume you're using a JS framework that supports "push state" functionality, and your backend framework is Ruby on Rails. You have a simple blog site and you would like search engines to index all your article index and show pages.
Let's say you have your routes set up like this:
resources :articles
match "*path", :to => "main#index"
Ensure that every server-side controller renders the same template that your client-side framework requires to run (html/css/javascript/etc). If no controller matches the request (in this example we only have a RESTful set of actions for the ArticlesController), then match anything else, render the same template, and let the client-side framework handle the routing. The only difference between hitting a controller and hitting the wildcard matcher is the ability to render content based on the requested URL for JavaScript-disabled devices.
From what I understand, it is a bad idea to render content that isn't visible to browsers: if Google indexes it, but people who come through Google to a given page find no content there, you're probably going to be penalised. An example that comes to mind is rendering content in a div node that you hide with display: none in CSS.
However, I'm pretty sure it doesn't matter if you simply do this:
<div id="no-js">
<h1><%= #article.title %></h1>
<p><%= #article.description %></p>
<p><%= #article.content %></p>
</div>
And then using JavaScript, which doesn't get run when a JavaScript-disabled device opens the page:
$("#no-js").remove() # jQuery
This way, for Google, and for anyone with JavaScript-disabled devices, they would see the raw/static content. So the content is physically there and is visible to anyone with JavaScript-disabled devices.
But, when a user visits the same page and actually has JavaScript enabled, the #no-js node will be removed so it doesn't clutter up your application. Then your client-side framework will handle the request through its router and display what a user should see when JavaScript is enabled.
I think this might be a valid and fairly easy technique to use. Although that might depend on the complexity of your website/application.
Though, please correct me if it isn't. Just thought I'd share my thoughts.
Use NodeJS on the server side, browserify your client-side code, and route each HTTP request's URI (except for static HTTP resources) through a server-side client to provide the first 'bootsnap' (a snapshot of the page and its state). Use something like jsdom to handle jQuery DOM ops on the server. After the bootsnap is returned, set up the websocket connection. It's probably best to differentiate between a websocket client and a server-side client by making some kind of wrapper connection on the client side (the server-side client can communicate with the server directly). I've been working on something like this: https://github.com/jvanveen/rnet/
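A rough sketch of the bootsnap step with today's jsdom package (the API names here are from current jsdom, not necessarily what the linked project uses):

var { JSDOM } = require("jsdom");

// Load the app shell, execute its scripts server-side, and capture the
// rendered markup as the first-response "bootsnap".
function bootsnap(url, callback){
    JSDOM.fromURL(url, { runScripts: "dangerously", resources: "usable" })
        .then(function(dom){
            // give the client code a moment to render before serializing
            setTimeout(function(){
                callback(dom.serialize());
                dom.window.close();
            }, 500);
        });
}

bootsnap("http://localhost:3000/", function(html){
    console.log(html); // serve this as the initial response
});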
Use Google Closure Templates to render pages. They compile to JavaScript or Java, so it is easy to render the page on either the client or the server side. On the first encounter with every client, render the HTML and add the JavaScript as a link in the header. The crawler will read only the HTML, but the browser will execute your script. All subsequent requests from the browser can go against the API to minimize the traffic.
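For the client-side half, a minimal sketch (the myapp.templates.page function is made up; compiled functions like it come from running the Closure Templates compiler on your .soy files):

// The compiled soy JS exposes each template as a plain function
// returning markup, so client-side rendering is one call:
var html = myapp.templates.page({ title: "Hello", items: ["a", "b"] });
document.getElementById("content").innerHTML = html;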
This might help you : https://github.com/sharjeel619/SPA-SEO
Logic
1. A browser requests your single-page application from the server, which is going to be loaded from a single index.html file.
2. You program some intermediary server code which intercepts the client request and differentiates whether the request came from a browser or some social crawler bot.
3. If the request came from some crawler bot, make an API call to your back-end server, gather the data you need, fill in that data to the HTML meta tags and return those tags in string format back to the client (see the sketch after this list).
4. If the request didn't come from some crawler bot, then simply return the index.html file from the build or dist folder of your single-page application.
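A minimal sketch of steps 2-4, assuming Express; the bot pattern and the fetchMeta helper are illustrative stand-ins, not part of the linked project:

var express = require("express");
var app = express();

var BOT_UA = /googlebot|facebookexternalhit|twitterbot|linkedinbot|slackbot/i;

app.get("*", function(req, res){
    if (BOT_UA.test(req.headers["user-agent"] || "")) {
        // Stand-in: gather the page's data from your own API, then return
        // just enough meta tags for the crawler to index.
        fetchMeta(req.path, function(meta){
            res.send("<html><head><title>" + meta.title + "</title>" +
                '<meta name="description" content="' + meta.description + '">' +
                "</head><body></body></html>");
        });
    } else {
        // Regular browsers get the built single-page app.
        res.sendFile(__dirname + "/dist/index.html");
    }
});

app.listen(3000);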
I noticed that whenever you download a PDF in Chrome, it consistently makes two requests, and then cancels one of them. This is causing the request to be registered twice in my web app, which I don't want. Is there a way to get Chrome to only make one request for PDFs?
I've researched this topic quite a bit now, and I have not found a sufficient answer. Closely-related answers suggest that the problem is that Chrome is looking for a favicon, but the network tab shows that it is actually making the same request twice, and then canceling the second request.
Is there a way to prevent Chrome from making the second request?
Below is a link to a random PDF file that I found through Google which, when clicked, should demonstrate the behavior. I would've posted a picture of my network tab in devtools, but this is my first post on Stack Overflow and the site is prohibiting me from uploading a picture.
https://www.adobe.com/enterprise/accessibility/pdfs/acro6_pg_ue.pdf
It looks like a bug in Chrome: https://bugs.chromium.org/p/chromium/issues/detail?id=587709
The problem is that Chrome, when it loads an iframe that returns a PDF stream, writes an "embed" tag inside that iframe which again contains the same URL as the iframe. This triggers a request for that URL again, but Chrome immediately cancels it. (see the network tab)
But by that time, the damage is done.
We have the same issue here, and it does not occur in Firefox or IE.
We're still looking for a good solution to this problem.
I'm still trying to find a proper solution, but as a partial "fix" for now you have two options:
1) Set the Content-Disposition to "attachment" in the header.
Setting it to "inline" causes Chrome to make a second, cancelled call.
So, for example, you can do something like this (Node.js response in the example):
res.writeHead(200, {
    'Content-Type': 'application/pdf',
    'Access-Control-Allow-Origin': '*',
    'Content-Disposition': 'attachment; filename=print.pdf'
});
Unfortunately, this solution will force the browser to download the PDF straight away instead of rendering it inline, and that may not be desirable.
2) adding "expires" in the headers
this solution will always fire a second cancelled call but it's ignored by the server
so for example you can do something like that (nodejs resp in example)
res.writeHead(200, {
    'Content-Type': 'application/pdf',
    'Access-Control-Allow-Origin': '*',
    'Content-Disposition': 'inline; filename=print.pdf',
    // HTTP dates should be formatted strings, not Date objects
    'Expires': new Date(Date.now() + 60000).toUTCString()
});
I had the same problem in an iframe. I turned off the PDF Viewer extension and the problem disappeared. I'm thinking the extension downloads the file twice: the first time to get the size, the second time to download with a progress bar (using the size gathered in the first request).
I've tried the other solutions and none worked for me, I'm a little late, I know, but just for the record, I solved this in the following manner:
Adding the download attribute:
In my case I was using a form, so it goes like this:
<form action="/package.zip" method="POST" download>
This worked on Brave and Safari, which previously showed the same problem, I think it will work for Chrome.
In my case, the problem wasn't browser related. I noticed that our scrollbar plugin's (OverlayScrollbars) DOM manipulations reload the embedded PDF data and call the controller more than once, due to the plugin's construct and destroy events. After I initialized the scrollbar before the DOM was ready, the problem was solved.
I want to share a link via URL scheme for Telegram.
I have created this:
tg://msg?text = www.example.com?t=12
The link, opens telegram but nothing else happens.
I have used the same code for Viber, and it works:
viber://forward?text = www.example.com?t=12
And it opens a new message in Viber with this text:
www.example.com
In other words, it cuts my URL.
Any ideas?
You can also use telegram.me share link which falls back to webogram if a telegram app is not installed on the device.
https://telegram.me/share/url?url=<URL>&text=<TEXT>
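Note that a URL which carries its own query string (like www.example.com?t=12 in the question) must be encoded, or it will get cut off. For example, a quick sketch:

var url  = "http://www.example.com?t=12";
var text = "Some caption";
// encodeURIComponent keeps the ?t=12 part of the URL intact
window.location.href = "https://telegram.me/share/url?url=" +
    encodeURIComponent(url) + "&text=" + encodeURIComponent(text);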
This works for me:
tg://msg?text=Mi_mensaje&to=+1555999
You have the following options for a URL...
https://t.me/share/url?url={url}&text={text}
https://telegram.me/share/url?url={url}&text={text}
tg://msg_url?url={url}&text={text}
In case you want to confirm, here is the official API source: Core.Telegram.org: Widgets -> Sharing Button.
If you are interested in watching a project that keeps track of these URLs, then check us out!: https://github.com/bradvin/social-share-urls#telegramme
You can use the link telegram.me which will provide a preview page with an alert requesting to open the link in the application.
https://telegram.me/share/url?url=<URL>&text=<TEXT>
The second option is calling the application link directly:
tg://msg_url?url=<url>&text=<encoded-text>
I particularly prefer the second option, which also works on desktop applications.
For Telegram share:
Objective C:
if ([[UIApplication sharedApplication] canOpenURL:[NSURL URLWithString:@"tg://msg?text=test"]]) {
    [[UIApplication sharedApplication] openURL:[NSURL URLWithString:@"tg://msg?text=test"]];
} else {
    //App not installed.
}
Swift 3.0:
let urlString = "tg://msg?text=test"
let tgUrl = URL(string: urlString.addingPercentEncoding(withAllowedCharacters: .urlQueryAllowed)!)
if UIApplication.shared.canOpenURL(tgUrl!) {
    UIApplication.shared.openURL(tgUrl!)
} else {
    //App not installed.
}
If you have used canOpenURL, then you need to add this to your Info.plist:
<key>LSApplicationQueriesSchemes</key>
<array>
    <string>tg</string>
</array>
JavaScript
<script>TEXT="any text or url";</script>
<a onclick="window.location='tg://msg?text='+encodeURIComponent(TEXT);">Link</a>
With this we can open Telegram (via xdg-open on Linux), and if we select a contact, the given text will be prefilled in the message field by default.
Maybe you are using localhost, and that is why it does not show the share option. Try it on a live host.
To check if the Telegram is installed you can do the following (borrowed from the Whatsapp sharer module of ShareKit):
BOOL isTelegramInstalled = [[UIApplication sharedApplication] canOpenURL:[NSURL URLWithString:@"tg://msg?text=test"]];
iOS checks if there's any app installed which can handle the tg:// scheme, which is Telegram.
Try using tg://share:
tg://share?url=<URL>&text=<TEXT>
Just tested; this way it works either way, opening the Telegram app, or the browser in case it's not installed:
let webURL = NSURL(string: "https://t.me/<YOUR ID>")!
UIApplication.shared.open(webURL as URL)
You have two problems:
On one hand you are using the wrong scheme for sharing URLs, the correct one is msg_url;
On the other hand you are trying to share a URL with parameters. You need to encode your ? in order to make it work. The percent code is %3F
Extra tip: also, if you want it to be a link once shared, you should include the https:// prefix, encoded of course.
Try this and you'll see it works fine: tg://msg_url?url=https%3A%2F%2Fwww.example.com%3Ft=12
If you want to open a chat with a bot or a person, just write this simple code:
<a href="https://t.me/targetedusername">
You should stop using the tg:// protocol for the desktop applications, because it does not work on the evolving web: it fails in web apps, on ChromeOS, and on some mobile devices.
Always use the new way, https://t.me, because it will open the Telegram desktop app on Windows/Linux/Mac if the user wants and has it, and otherwise it will open the web page / web app, which is the way it should be.
https://web.telegram.org/k
https://web.telegram.org/z
Telegram has two web apps that it has been building out for the past year!
Both are great, adding tons of parity, and they take different approaches.
I have a web app which uses localStorage. Now we want to embed this web app on other (third-party) sites via iframe. We want to provide an iframe embed similar to youtube so that other websites can embed our web app in an iframe. Functionally it is the same as if it wouldn't be embedded. But it does not work. Chrome prints the error message:
Uncaught SecurityError: Failed to read the 'localStorage' property from 'Window': Access is denied for this document.
I just do the following check (in the iframe):
if (typeof window.localStorage !== 'undefined') {
    // SETUP SESSION, AUTH, LOCALE, SETTINGS ETC
} else {
    // PROVIDE FEEDBACK TO THE USER
}
I checked my security settings in Chrome as described in another Stack Overflow thread, but it doesn't work. Is there any chance to make embedding possible without the need to adjust the (default) security settings of most modern browsers?
To give more information, we use Ember-CLI for our web app and turned on CSP (more info about the Ember-CLI CSP). Could CSP cause our web app to throw security errors?
Under Chrome's Settings > Privacy > Content Settings, you have the cookie setting set to "Block sites from setting any data".
This checkbox is what is causing the exception.
According to this
This exception is thrown when the "Block third-party cookies and site data" checkbox is set in Content Settings.
To find the setting, open Chrome settings, type "third" in the search box, click the Content Settings button, and view the fourth item under Cookies.
On the following URL: chrome://settings/content/cookies uncheck "Block third-party cookies".
If you're using incognito mode, make sure you turn off "Block third-party cookies".
Open a new tab in any incognito window, and turn off that option.
localStorage is per domain, per protocol. If you are trying to access localStorage from a standalone file, i.e. with file:/// protocol, there is no domain per se. Hence browsers currently would complain that your document does not have access to localStorage. If you put your file in a web server (e.g. deploy in Tomcat) and access it from localhost, you will be able to access localStorage.
I ran into this problem on my phone; I couldn't open a certain site with Chrome.
It took me some time to find the cookie settings on my phone. When I found them, I saw that my cookies were blocked.
Go to Settings --> Site settings --> Cookies
and allow the site to save and read cookie data; make sure that you don't block third-party cookies!
I hope this helps you.
I checked all the answers but ended up not finding anything. Then I realized what browser I'm using. If you're using Brave (Chromium Based), you will get this error if your shield is up. Try lowering your shield.
A more secure way of doing this in Chrome would be to allow only the site(s) that you trust:
Chrome
-> "Settings"
-> "Show advanced settings..."
-> "Privacy"
-> "Content settings..."
-> "Manage exceptions..."
-> (add a pattern such as [*.]microsoft.com)
-> be sure to hit enter
-> "Done"
-> "Done"
If disabling "Block third-party cookies" is not an option, you can use try...catch:
try {
    // SETUP SESSION, AUTH, LOCALE, SETTINGS ETC
} catch(err) {
    // PROVIDE FEEDBACK TO THE USER
}
As has been pointed out in the comments, localStorage is single-origin only -- the origin of the page. Attempting to access the page's localStorage from an iframe loaded from a different origin will result in an error.
The best you can do is hack it with XDM via the postMessage API. This library purports to do the heavy lifting for you, but I haven't tried it. However, I would make sure you're aware of IE's terrible support for XDM before going down this route.
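The shape of that postMessage hack, as a minimal sketch (the origins and message format here are my own; a real version needs strict origin checks on both sides):

// In the embedding (parent) page: proxy localStorage for the frame.
window.addEventListener("message", function(event){
    if (event.origin !== "https://app.example.com") return; // assumed app origin
    var msg = event.data;
    if (msg.type === "get") {
        event.source.postMessage(
            { key: msg.key, value: localStorage.getItem(msg.key) },
            event.origin
        );
    } else if (msg.type === "set") {
        localStorage.setItem(msg.key, msg.value);
    }
});

// Inside the iframe: ask the parent instead of touching localStorage.
parent.postMessage({ type: "set", key: "locale", value: "en" },
    "https://embedder.example.com");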
IMHO it has nothing to do with the CSP settings of your Ember CLI app; it's down to browser settings.
Some browsers (Chrome) block localStorage content loaded into an iframe.
We too are facing a similar situation with our Ember app, where we have an app and a plugin which loads on third-party websites; the user token loaded into the iframe gets blocked in Chrome. We are experimenting with some solutions and will keep this thread posted on how it goes.
To get rid of this warning - under Chrome's Settings -> Privacy -> Content settings, you have to clear the "Block third-party cookies and site data" option
A more secure way of doing this in Chrome: at the top right, click on the eye icon and allow the site you are on to use third-party cookies.
Clear the cookie block: Chrome -> Settings -> Privacy and security -> "Sites that can never use cookies"; remove the entry there so the site can use cookies and local storage.
For all others like me who are searching for a JavaScript solution/fix:

var storageSupported = false;
try {
    storageSupported = (window.localStorage && true);
} catch (e) {}

if (storageSupported) {
    // your code
}
Credits: https://github.com/zoomsphere/ngx-store/issues/91
I have a difficult problem: I always get a NullPointerException on my webpage when I rapidly click on the same link, or when I rapidly reload the page.
This is the error I get:
java.lang.NullPointerException
com.ibm.xsp.webapp.FacesServlet.acquireSyncToken(FacesServlet.java:285)
com.ibm.xsp.webapp.FacesServletEx.serviceView(FacesServletEx.java:161)
com.ibm.xsp.webapp.FacesServlet.service(FacesServlet.java:160)
com.ibm.xsp.webapp.FacesServletEx.service(FacesServletEx.java:138)
com.ibm.xsp.webapp.DesignerFacesServlet.service(DesignerFacesServlet.java:103)
com.ibm.designer.runtime.domino.adapter.ComponentModule.invokeServlet(ComponentModule.java:576)
com.ibm.domino.xsp.module.nsf.NSFComponentModule.invokeServlet(NSFComponentModule.java:1281)
com.ibm.designer.runtime.domino.adapter.ComponentModule$AdapterInvoker.invokeServlet(ComponentModule.java:847)
com.ibm.designer.runtime.domino.adapter.ComponentModule$ServletInvoker.doService(ComponentModule.java:796)
com.ibm.designer.runtime.domino.adapter.ComponentModule.doService(ComponentModule.java:565)
com.ibm.domino.xsp.module.nsf.NSFComponentModule.doService(NSFComponentModule.java:1265)
com.ibm.domino.xsp.module.nsf.NSFService.doServiceInternal(NSFService.java:653)
com.ibm.domino.xsp.module.nsf.NSFService.doService(NSFService.java:476)
com.ibm.designer.runtime.domino.adapter.LCDEnvironment.doService(LCDEnvironment.java:341)
com.ibm.designer.runtime.domino.adapter.LCDEnvironment.service(LCDEnvironment.java:297)
com.ibm.domino.xsp.bridge.http.engine.XspCmdManager.service(XspCmdManager.java:272)
Question: Can someone explain in detail what this acquireSyncToken does? Maybe then I can find the bug...
In my XPages I use
sessionScope.get(key) // same with applicationScope
sessionScope.put(key, value) // same with applicationScope
a LOT ;)
I have tried a lot of things, e.g. wrapping my lookups within
synchronize(applicationScope){
// lookups and so on...
}
and stuff like that, but that only made it worse, so I removed the synchronize-stuff...
Environment:
Domino Server 8.5.3 FP1
XPages
testing on modern Browsers like FF, Chrome
MacOS / Win7
Architecture:
I have one BIG XPage, where I basically add some custom controls and, depending on the current URL, embed another XPage.
Inside the custom controls and XPages I have more custom controls, and I added some views as data sources and did the wildest things with repeat controls and SSJS inside computed fields.
The heavy-weight DB-Lookups are cached in the applicationScope.
For more Info, please ask!
Thanks in advance!
This is a known issue. IBM advises to downgrade from FP1 or FP2 to 8.5.3 or UP1.
See Dojo xhrGet with sync:false issue with xe:viewJsonLegacyService and Domino 8.5.3 SP1 or http://www-01.ibm.com/support/docview.wss?uid=swg1LO71603