In my TYPO3 extension I use an <f:for> to iterate through a couple of items. Exactly 184 items.
I generate a slider out of this.
The problem is that this iteration is extremely slow. Is there a way to speed it up? The backend is fast, less than a second; only the frontend rendering takes a long time.
My full frontend code looks like this:
<f:if condition="{videos -> f:count()} > 4">
<f:then>
<f:for each="{videos}" as="video" iteration="i">
<f:if condition="{i.isFirst}">
<f:then>
<div class="item active">
</f:then>
<f:else>
<div class="item">
</f:else>
</f:if>
<div class="col-lg-3 thumbnailParent">
<f:link.action controller="FrontendVideo" action="show" arguments="{video : video}">
<f:render partial="Video/ShowThumbnail" arguments="{video : video, userAuthorization : userAuthorization}"/>
</f:link.action>
</div>
<!-- adding slider-class to one of all slides. condition: slide must have more than 4 videos for slide-effect -->
<f:if condition="{i.isLast}">
<f:then>
<script type="text/javascript">
addClassForSliding('{myCarouselID}');
function addClassForSliding(myCarouselID) {
$("#myCarousel"+myCarouselID).addClass("isCarousel");
if(!$("div.videoSlide").find("div").hasClass("thisIsTheOnlySliderWhichSlides")){
$("#myCarousel"+myCarouselID).addClass("thisIsTheOnlySliderWhichSlides");
}
}
</script>
</f:then>
<f:else></f:else>
</f:if>
</div>
</f:for>
</f:then>
<f:else>
<f:for each="{videos}" as="video" iteration="i">
<div class="item active">
<div class="col-lg-3">
<f:link.action controller="FrontendVideo" action="show" arguments="{video : video}">
<f:render partial="Video/ShowThumbnail" arguments="{video : video, userAuthorization : userAuthorization}"/>
</f:link.action>
</div>
</div>
</f:for>
</f:else>
</f:if>
Make sure your caches are enabled - if they are not, don't judge the performance based on uncached renderings.
Try to avoid the many conditions you use. And definitely don't leave empty nodes like <f:else></f:else> in place.
Move the stuff you do in the last iteration outside of the loop (thus saving another condition and a lot of node construction).
Avoid the iteration variable whenever possible. It adds additional processing and variable assignment to each iteration.
I assume you use JS to activate the component, so use JS to set the active CSS class as well. That avoids 1) opening and closing tags incorrectly, as you currently do, and 2) yet another condition that is only true a single time, like the other one.
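Putting that advice together, a trimmed-down loop might look roughly like this (a sketch only; the data-carousel-id attribute and the external script that reads it are assumptions, not existing code):

```xml
<f:if condition="{videos -> f:count()} > 4">
  <div class="videoSlide" data-carousel-id="{myCarouselID}">
    <f:for each="{videos}" as="video">
      <div class="item">
        <div class="col-lg-3 thumbnailParent">
          <f:link.action controller="FrontendVideo" action="show" arguments="{video : video}">
            <f:render partial="Video/ShowThumbnail" arguments="{video : video, userAuthorization : userAuthorization}"/>
          </f:link.action>
        </div>
      </div>
    </f:for>
  </div>
</f:if>
```

An externally loaded script can then add the active class to the first .item and attach the slider behavior by reading data-carousel-id, so the template needs no iteration variable and no conditions inside the loop.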
Check the partial that you render. It may not be compilable, and every time you render it, the partial must be resolved. Note: in this type of use case, a section almost always performs better than a partial. I wrote a tool you can use which also pre-compiles your templates and can fail if any template is not compatible: https://github.com/NamelessCoder/typo3-cms-fluid-precompiler
Generally speaking: don't output <script> from Fluid unless you have an extremely good reason. Whenever possible, load scripts externally and store whatever values the script needs in, for example, data- attributes. Faster parsing, faster loop.
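As a sketch of what "external script plus data- attributes" could look like here (the attribute name and helper functions are assumptions, not part of the original code):

```javascript
// Hypothetical external carousel-init.js: derive the carousel selector
// from a data- attribute instead of printing an inline <script> per slide.
function carouselSelector(dataset) {
  // dataset corresponds to element.dataset, e.g. { carouselId: "42" }
  return "#myCarousel" + dataset.carouselId;
}

// Browser-side usage (not run here): find every marked element once the
// DOM is ready and apply the slider classes from the answer.
function initCarousels(root) {
  root.querySelectorAll("[data-carousel-id]").forEach(function (el) {
    $(carouselSelector(el.dataset)).addClass("isCarousel");
  });
}
```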
Use actual profiling tools to precisely locate the bottleneck. Your code uses ViewHelpers and is also sensitive to configuration; the more setup you have for template paths etc., the more processing each f:render call needs. Do not profile in Development context!
Do not profile on a Docker setup - unless you're running Linux. And even then, take results with some reservations: file system performance will never be equal.
Avoiding the iteration variable and your conditions, and moving the last block outside the loop, should remove a good 80% of the cost (not counting whatever happens in the partial you render; it could be absolutely horrible in performance and we'd never know, since you didn't paste it).
Finally, when selecting whether to render a partial or a section there are a couple of things to consider. Most of these completely depend on your use case (as in: how do you need your templates to be structured - does it make more sense with a partial you can overlay than a section you cannot?) but it is possible to say something general about performance:
When you render a section, which exists in the same template, the rendering takes place with a single function call to switch to that section with a new set of template variables.
But when you render a partial, the template file for this partial first has to be resolved before rendering can take place.
The resolved template cannot be compiled, since the same compiled template must be renderable in multiple different contexts.
Thus, the resolution of a partial template can only be cached once per context, which means that if the same partial is rendered in multiple contexts many times on a page, performance may suffer a lot compared to using a section (which gets compiled down to a plain function call).
The more template paths you have, the tougher this is on file resolving.
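In Fluid terms, the two options look like this (a sketch; the section and partial names are made up):

```xml
<!-- Section: lives in the same template and compiles to a plain function call -->
<f:render section="Thumbnail" arguments="{video: video}"/>
<f:section name="Thumbnail">
  <!-- thumbnail markup -->
</f:section>

<!-- Partial: the file Partials/Video/ShowThumbnail.html must be resolved first -->
<f:render partial="Video/ShowThumbnail" arguments="{video: video}"/>
```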
You always need to choose the right tool for the task - that's part of our job as developers - so these points are pretty generic. Some use cases simply show no performance difference between sections and partials; some don't suffer noticeably from using iteration; it all depends on your setup requirements and the data you are rendering. Profiling your templates certainly helps in finding the right solution, so I highly recommend doing that.
I'm trying to build my CSP without unsafe-inline.
Since I have to manually check every file from every app, I may as well move the scripts to external files instead of creating a million-word CSP entry in the web.config by adding hashes or nonces.
This seems easy enough for client-side content, but many templates have Razor code in them, such as:
<script>
alert(@myVar);
</script>
How can I move this to external?
So in general, if your JS needs some input parameters, you must of course put them somewhere, and only the Razor will know what they are.
The simplest way is still to just have the initial call use the variables, like in your example above. If you have security concerns, doing type-checking in Razor should eliminate them for you.
For example, if you do @((int)thing.property), then it simply cannot inject any unexpected payload.
If for some reason you really, really don't want this, you can use an attribute-JSON convention, like
<div class="myGallery" init='{"files": 17}'> gallery contents </div>
and pick it up from the JS you created. But this is quite a bit of work, so I would recommend the simpler way.
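A minimal sketch of the attribute-JSON pick-up on the JS side (readInitConfig is a made-up helper name; the init attribute matches the example above):

```javascript
// Parse the JSON stored in the element's init attribute so the external
// script gets its parameters without any inline Razor output.
function readInitConfig(attrValue) {
  return JSON.parse(attrValue);
}

// Browser-side usage (not run here):
// var gallery = document.querySelector(".myGallery");
// var config = readInitConfig(gallery.getAttribute("init"));
// config.files would then hold the server-supplied value.
```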
In Objective-C, to build a Mac OS X (Cocoa) application, I'm using the native WebKit widget to display local files with the file:// URL, pulling from this folder:
MyApp.app/Contents/Resources/lang/en/html
This is all well and good until I start to need a German version. That means I have to copy en/html as de/html, then have someone replace the wording in the HTML (and some in the JavaScript, like with modal dialogs) with German phrasing. That's quite a lot of work!
Okay, that might seem doable until this creates a headache where I have to constantly maintain multiple versions of the html folder for each of the languages I need to support.
Then the thought came to me...
Why not just replace the phrasing with template tags like %CONTINUE%
and then, before the page is rendered, intercept it and swap it out
with strings pulled from a language plist file?
Through some API with this widget, is it possible to intercept HTML before it is rendered and replace text?
If it is possible, would it be noticeably slow such that it wouldn't be worth it?
Or, do you recommend I do a strategy where I build a generator that I keep on my workstation which builds each of the HTML folders for me from a main template, and then I deploy those already completed with my setup application once I determine the user's language from the setup application?
Through a lot of experimentation, I found an ugly way to do templating. Like I said, it's not desirable and has some side effects:
You'll see a flash on the first window load. On first load of the application window that has the WebKit widget, you'll want to hide the window until the second time the page content is displayed. I guess you'll have to use a property for that.
When you navigate, each page loads twice. It's almost unnoticeable, but not good enough for solid development.
I found an odd quirk with Bootstrap CSS where it made my table grid rows very large and didn't apply CSS properly for some strange reason. I might be able to tweak the CSS to fix that.
Unfortunately, I found no other event I could intercept on this except didFinishLoadForFrame. However, by then, the page has already downloaded and rendered at least once for a microsecond. It would be great to intercept some event before then, where I have the full HTML, and do the swap there before display. I didn't find such an event. However, if someone finds such an event -- that would probably make this a great templating solution.
- (void)webView:(WebView *)sender didFinishLoadForFrame:(WebFrame *)frame
{
    DOMHTMLElement *htmlNode =
        (DOMHTMLElement *)[[[frame DOMDocument] getElementsByTagName:@"html"] item:0];
    NSString *s = [htmlNode outerHTML];
    if ([s containsString:@"<!-- processed -->"]) {
        return;
    }
    NSURL *oBaseURL = [[[frame dataSource] request] URL];
    s = [s stringByReplacingOccurrencesOfString:@"%EXAMPLE%" withString:@"ZZZ"];
    s = [s stringByReplacingOccurrencesOfString:@"</head>" withString:@"<!-- processed -->\n</head>"];
    [frame loadHTMLString:s baseURL:oBaseURL];
}
The above will look at HTML that contains %EXAMPLE% and replace it with ZZZ.
In the end, I realized that this is inefficient because of the page flash and, on long bits of text that need a lot of replacing, may have quite a noticeable delay. The better way is to create a compile-time generator.
This would mean making one HTML folder with %PARAMETERIZED_TAGS% inside instead of English text. Then, create a "Run Script" in your "Build Phase" that runs some program/script you create, in whatever language you want, that generates each HTML folder from all the available lang-XX.plist files you have in a directory, where XX is a language code like 'en', 'de', etc. It reads the HTML file, finds the parameterized tag match in the lang-XX.plist file, and replaces that text with the text for that language. That way, after compilation, you have a separate HTML folder for each language, already using your translated strings.
This is efficient because it allows you to have one single HTML folder where you maintain your code, without the extremely tedious process of creating and maintaining an HTML folder for each language by hand; the compile-time generator does that for you. However, you'll have to build that compile-time generator.
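The core of such a generator is just placeholder substitution. A sketch in JavaScript (the tag syntax matches the %PARAMETERIZED_TAGS% idea above; loading the lang-XX.plist files is left out, and a plain object stands in for the parsed strings):

```javascript
// Replace every %TAG% placeholder with its translated string; unknown
// tags are left untouched so they are easy to spot in the output.
function localizeHtml(html, strings) {
  return html.replace(/%([A-Z_]+)%/g, function (match, tag) {
    return Object.prototype.hasOwnProperty.call(strings, tag) ? strings[tag] : match;
  });
}
```

The build script would run this once per language over every HTML file, writing the results into lang/XX/html.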
By transclusion I mean a page like
{{template
| blahblah=
| asd =
| df=
}}
So if there are too many "|"s, will they make the page load slowly?
Let's say page "Template:*" is
*
so that {{*}} will render a bullet.
Please compare
(Template:A and page "A page")
and
(Template:B and page "B page")
Both A page and B page will display the same thing, but which one will be faster to load if there are thousands more transclusions in this way?
Template:A
* {{{a}}}
* {{{b}}}
* {{{c}}}
A page
{{A
|a=q
|b=w
|c=e
}}
Template:B
{{{a}}}
B page
{{B
|a={{*}} q <br> {{*}} w <br> {{*}} e
}}
===== Question added =====
@Ilmari_Karonen Thank you very much.
What if the number is nearly 1000, so that the A page is
{{A
|a1=q
|a2=w
|a3=e
....
|a999=w
|a1000=h
}}
Still, thanks to caches, "for most page views, template transclusion has no effect on performance"?
And what do you mean by "for most page views"? You mean low enough page views?
You said "the recommended way to deploy MediaWiki is either behind reverse caching proxies or using the file cache. Either of these will add an extra caching layer in front of the parser cache."
Should this be done "before" posting any content on MediaWiki? Or does it not matter if I do it after I post all the pages to MediaWiki?
===What if the transclusion relationship is very complex===
@Ilmari_Karonen I've got one more question. What if the transclusion relationship is very complex?
For example
Page A is
{{temp
| ~~~
| ~~~
... (quite many)
| ~~~
}}
And Template:Temp has {{Temp2}},
and Template:Temp2 is again
{{temp3
|~~~
|~~~
... (very many)
|~~~
}}
Even in such a case, for the reasons you mentioned, won't numerous transclusions affect the loading speed of Page A?
Yes and no. Mostly no.
Yes, having lots of template transclusions on a page does slow down parsing somewhat, both because the templates need to be loaded from the DB and because they need to be reparsed every time they're used. However, there's a lot of caching going on:
Once a template is transcluded once on a given page, its source code is cached so that further transclusions of the same template on that page won't cause any further DB queries.
For templates used without parameters, MediaWiki also caches the parsed form of the template. Thus, in your example, {{*}} only needs to be parsed once.
In any case, once the page has been parsed once (typically after somebody edits it), MediaWiki caches the entire parsed HTML output and reuses it for subsequent page views. Thus, for most page views, template transclusion has no effect on performance, since the page will not need to be reparsed. (However, note that the default parser cache lifetime is fairly low. The default is OK for high-traffic wikis like Wikipedia, but for small wikis I'd strongly recommend increasing it to, say, one month, and setting the parser cache type to CACHE_DB.)
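For reference, those suggested settings would look something like this in LocalSettings.php (the one-month value is just the example from above; adjust for your wiki):

```php
# Cache parsed pages in the database and keep them for a month.
$wgParserCacheType = CACHE_DB;
$wgParserCacheExpireTime = 30 * 24 * 3600; // 30 days, in seconds
```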
Finally, the recommended way to deploy MediaWiki is either behind reverse caching proxies or using the file cache. Either of these will add an extra caching layer in front of the parser cache.
Edit: To answer your additional questions:
Regardless of the number of parameters, each page still contains only one template transclusion (well, except for the {{*}} transclusions on page B, but those should be efficiently cached). Thus, they should be more or less equally efficient (as in, there should not be a noticeable difference in practice).
I mean that, most of the time when somebody views the page, it will (or at least should) be served from the cache, and so does not need to be reparsed. Situations where that does not happen include when:
the time since the page was last parsed exceeds the limit specified by $wgParserCacheExpireTime (24 hours by default, but this can and IMO should be increased for most wikis),
the page has been edited since it was added to the cache, and so needs to be reparsed (this typically happens immediately after clicking the "Save page" button),
a template used on the page has been edited, requiring the page to be reparsed,
another page linked from this page has been created or deleted, requiring a reparse to turn the link from red to blue or vice versa,
the page uses a MediaWiki extension that deliberately excludes it from caching, usually because the extension inserts dynamically changing content into the page,
someone has deliberately purged the page from the cache, causing an immediate reparse, or
the user viewing the page is using an unusual language or has changed some other options in their preferences that affect page rendering, causing a separate cached version of the page to be generated for them (this version may be reused by any other user with the same set of preferences, or by the same user revisiting the page).
You can add a proxy in front of your wiki, and/or enable the file cache, at any time. Indeed, since setting up effective caching is a somewhat advanced task, you may want to wait until you get your wiki up and running without a front end cache first before attempting it. This also allows you to directly compare the performance before and after setting up the cache.
Let's say I have two pages which share much code, many libraries etc., but have some differences. For a concrete example, I include jQuery and in each page have different function in "document ready" (aka $(function() { ... })).
With JS that would be easy: I would include jQuery in each page and have a different piece of <script> on each page, or include script-behind-page-A.js in pageA.html and script-behind-page-B.js in pageB.html.
How shall I achieve the same result with ClojureScript?
I suspect the compilation output is so big that it's best to have one big ball of JavaScript emitted by compiler. In that case, it clearly cannot have two different "document ready" functions.
Is the suggested flow to make the code consist mostly of functions that enable you to do things, few state variables initialized, and initialize each page individually with plain JS as needed?
I think the recommended approach would be what is explained in this ClojureScript tutorial by Mimmo Cosenza:
Produce a single, large JS output file (so you can optimize/gzip it when this goes live)
Use different namespaces for the functions, make sure you export at least a single "entry-point function" for each page
On each HTML file, call the desired "entry point function", like this:
<!-- on the bottom of welcome.html -->
<script src="js/output.js"></script>
<script>myapp.welcome.init();</script>
<!-- on the bottom of login.html -->
<script src="js/output.js"></script>
<script>myapp.login.init();</script>
This is explained in detail in part 6 of the tutorial.
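The page-specific namespaces might look like this (a sketch; the namespace and function names are just the ones assumed by the HTML snippets above, and ^:export keeps the names intact under advanced compilation):

```clojure
(ns myapp.welcome)

;; Entry point called from the bottom of welcome.html.
(defn ^:export init []
  (.log js/console "welcome page ready"))
```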
I have an old-style CGI program which generates a large HTML table. Because the table contents are slow to calculate (even though it's not that big) I would like to print it one row at a time, and have the browser display rows as they are fetched.
If you search online you'll see that style="table-layout: fixed" is supposed to help some browsers do this, but on the other hand Firefox can render incrementally even without it. My test browser is Firefox 4.0b10 on Windows but I cannot get it to display incrementally using the simple example below:
<html>
<head>
<title>x</title>
</head>
<body>
<table width="100%" style="table-layout: fixed" rows="4" cols="4">
<col width="10%" />
<col width="20%" />
<col width="30%" />
<col width="40%" />
<tr><td width="10%">a</td><td width="20%">b</td><td width="30%">c</td><td width="40%">d</td></tr>
<!-- flush output, one second pause... -->
<tr><td width="10%">a</td><td width="20%">b</td><td width="30%">c</td><td width="40%">d</td></tr>
<!-- flush output, one second pause... -->
<tr><td width="10%">a</td><td width="20%">b</td><td width="30%">c</td><td width="40%">d</td></tr>
<!-- flush output, one second pause... -->
<tr><td width="10%">a</td><td width="20%">b</td><td width="30%">c</td><td width="40%">d</td></tr>
<!-- flush output, one second pause... -->
</table>
</body>
</html>
Instead the page is blank until the end of the download, when the whole table appears. I've tried various ways to tell the browser the table dimensions in advance so it should have no problem displaying it as it arrives, but they don't help. (Removing the hints doesn't help either.)
If I modify my CGI script to close and restart the table between each row, with an extra blank paragraph in between, then the page does render incrementally in the browser. This shows that the data is getting to the browser incrementally - just Firefox is choosing not to render it.
Ironically, much more complex scripts producing larger tables seem to do what I want, showing one row at a time as it downloads, but whenever I try to reduce the output to a minimal test case it doesn't work. This leads me to suspect there is some complexity heuristic used by Firefox's rendering engine.
What's the magic dust I need to tell the browser to always display the table as downloaded so far?
For what it is worth: the Firefox I use (3.6.16) does not display anything until the page is downloaded, regardless of what is being downloaded.
You could look for settings in about:config, but I have not seen any solution there.
There are add-ons to help with displaying placeholders, but they don't always work either.
Just found this. Read it, or try:
Enter about:config in the address bar.
Select New, then Integer.
Name it nglayout.initialpaint.delay.
Set the value to 0.
Cheers
First of all, I'd say it's seldom good user interface design to load huge amounts of data at once. In most cases, it's better to offer some sort of search or at least paging functionality. There may be exceptions, but people simply cannot process very large quantities of information, and apps serving far more data than people have any use for aren't just wasting cycles, they are badly designed. Imagine if Google displayed the first 10,000 hits by default, or even 1,000. Most users wouldn't even look beyond the first 5 or so, and the amount of wasted bandwidth...
That said, it may of course not be your fault if the page is badly designed. Or it may be, but you'll need a quick fix before coming back to redesign the solution later. :)
One way to make this happen would be to render the table client-side instead of on the server. If there's nothing else heavy on the page, the user would be served quickly with the other content, and the area where the table will appear could contain an animated GIF or similar to indicate the software is "working". You could use an AJAX-call to fetch the data for the table, and modify the DOM in Javascript. You could then create for instance 100 rows at a time and use window.setTimeout to allow the browser to render the updated DOM before continuing.
A good article explaining the event dispatch model may be of help if you choose to go down this path:
http://dev.opera.com/articles/view/timing-and-synchronization-in-javascript/
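A sketch of that batched client-side approach (the data shape, function names, and batch size are assumptions): build each batch of rows as an HTML string, append it, and yield with window.setTimeout so the browser can paint between batches.

```javascript
// Pure part: turn an array of row arrays into <tr>/<td> markup.
function buildRowsHtml(rows) {
  return rows.map(function (row) {
    return "<tr>" + row.map(function (cell) {
      return "<td>" + cell + "</td>";
    }).join("") + "</tr>";
  }).join("");
}

// Browser part (not run here): append batchSize rows at a time and let
// the browser render between batches.
function appendInBatches(tbody, rows, batchSize) {
  tbody.insertAdjacentHTML("beforeend", buildRowsHtml(rows.slice(0, batchSize)));
  if (rows.length > batchSize) {
    window.setTimeout(function () {
      appendInBatches(tbody, rows.slice(batchSize), batchSize);
    }, 0);
  }
}
```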
OK, so how about dropping client-side rendering but fetching and replacing server-side rendered HTML within a placeholder (and thus the table) multiple times? The server side would have to use a background thread and supply a handle (e.g. a GUID) in its initial response that an AJAX call could then use to ask for the table. The server could reply with a few rows plus an indication that it's not done, prompting the client to repeat until finished.
I suppose it would be a little messy, but at least it would allow you to use the same page for your report. A query string parameter could tell the page whether to render completely or emit script to call back to get the data.