I have a single static .cfm page that runs one SELECT query and displays over 3,000 records without pagination. When I view that page in Firefox it takes about 15 seconds for the content to show. Is there any way (without adding pagination) to reduce the browser loading time?
Create a page that uses AngularJS to show the table, then populate the table via an AJAX call that returns JSON.
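A minimal sketch of that approach, assuming AngularJS 1.x; the module and controller names and the /records.json endpoint are hypothetical placeholders:
<div ng-app="recordsApp" ng-controller="RecordsCtrl">
  <table>
    <tr ng-repeat="r in records">
      <td>{{r.id}}</td><td>{{r.name}}</td>
    </tr>
  </table>
</div>
<script>
  // Fetch the rows once as JSON and let AngularJS repeat them into the table client-side.
  angular.module('recordsApp', [])
    .controller('RecordsCtrl', function ($scope, $http) {
      $http.get('/records.json').then(function (response) {
        $scope.records = response.data;
      });
    });
</script>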
Use fixed table layout so that the browser does not have to re-flow the content as it loads.
Don't load the data into a table at all; do the layout with divs and spans instead.
Optimize the SELECT query:
Only select the columns you need.
Avoid wildcards (*) in the SELECT clause.
Don't join unnecessary tables. (A minimal cfquery sketch follows below.)
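A minimal sketch of such a query in CFML; the datasource, table, and column names are hypothetical placeholders:
<!--- Select only the columns the page actually displays, instead of SELECT *. --->
<cfquery name="getRecords" datasource="myDSN">
    SELECT record_id, record_name, record_date
    FROM records
    ORDER BY record_date DESC
</cfquery>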
You can also consider loading content dynamically via ajax.
Without seeing your code (or example code), we can't provide anything specifically tailored to your implementation of the query.
You could potentially <cfflush> the content, so the server starts sending the response to the browser straight away rather than building the entire page and then pushing the whole response back at once.
Some other solutions are better options, especially for long term scalability and maintenance. However, if you're looking for a quick solution for now you could try breaking it up into a series of HTML tables. Every 500 records or so add this:
</table>
<cfflush>
<table...
This will ensure that the HTML rendered so far is sent to the browser (via the cfflush) while ColdFusion continues to work on the rest. Meanwhile, by closing out the table before flushing, you allow the browser to properly render that block of content in full without risking it waiting for the remainder.
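A minimal sketch of that loop in CFML, assuming a query named getRecords with hypothetical column names:
<table>
<cfoutput query="getRecords">
    <tr><td>#record_id#</td><td>#record_name#</td></tr>
    <!--- Every 500 rows: close the table, flush what we have, and start a new table. --->
    <cfif getRecords.currentRow MOD 500 EQ 0>
        </table>
        <cfflush>
        <table>
    </cfif>
</cfoutput>
</table>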
This is a patch, and something you should only do until you can put a more involved solution (such as JQGrid) in place.
When extracting data you can use CSS selectors or XPaths. But is there a similar, reliable method for doing this directly on the page source?
www.amazon.com/Best-Sellers-Electronics-Televisions/zgbs/electronics/172659
You could get the page source and then parse it with regex, but that would probably not be reliable if, for instance, a TV did not load on the page. I have looked at various solutions, but I have yet to find one that reliably gets every TV at the start of each line (1, 4, 7, etc. in the source), or that uses a dependable method such as CSS selectors/XPaths on the page source.
What is the gold-standard, reliable method of doing what I am after?
To get the page source you can use curl if the page is rendered entirely server-side (most pages won't be), or headless Chrome to get the actual DOM that will render in the browser (https://developers.google.com/web/updates/2017/04/headless-chrome).
For scraping the content, I've used cheerio (https://github.com/cheeriojs/cheerio), which lets you read HTML into an object and then scrape your data off it using jQuery expressions. (Headless Chrome allows you to execute JS on the pages you visit, so you don't necessarily need cheerio.)
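For illustration, a minimal cheerio sketch in Node.js; the class names below are made up and would have to be taken from the actual page source:
const cheerio = require('cheerio');

// The HTML would normally come from curl or headless Chrome; hard-coded here for the sketch.
const html = '<div class="item"><a class="title">Some TV</a></div>';
const $ = cheerio.load(html);

$('.item').each((i, el) => {
  // jQuery-style traversal on the parsed document
  const title = $(el).find('.title').text().trim();
  console.log(i, title);
});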
In your specific example you could get the TV on each line by combining the right class selectors to get the divs containing TVs, and using an attribute selector with 'margin-left=0px', which would get the first item on each line. That is obviously tightly bound to the structure of the page and will likely be broken by the smallest change in the page source. (It's not really any different from using XPaths, but it's still better than regex.)
With certain elements loading or not loading on the page (if that was what you meant by a TV not being there), there are no golden solutions that I know of, other than allowing sufficient time for the page to load and having your scraper fail gracefully.
I am writing a program for managing an inventory. It serves up HTML based on records from a PostgreSQL database, or writes to the database using HTML forms.
Different functions (adding records, searching, etc.) are accessible via <a></a> tags or form submits, which in turn call handlers registered with http.HandleFunc(); those handlers generate queries, parse the results, and render them to HTML templates.
The search function renders query results to an HTML table. To keep the search results page usable and uncluttered, I intend to show only the most relevant information there. However, since many more details are stored in the database, I need a way to access that information too. To do that I wanted each table row to be clickable, displaying the details of the selected record in a status area at the bottom or side of the page, for instance.
I could try to follow the pattern that works for running the other functions, that is use <a></a> tags and http.HandleFunc() to render new content but this isn't exactly what I want for a couple of reasons.
First: There should be no need to navigate away from the search result page to view the additional details; there are not so many details that a single record's full data should not be able to be rendered on the same page as the search results.
Second: I want the whole row clickable, not merely the text within a table cell, which is what the <a></a> tags get me.
Using the id returned from the database in an attribute, as in <div id="search-result-row-id-{{.ID}}"></div>, I am able to work with individual records, but I have yet to find a way to then capture a click in Go.
Before I run off and write this in JavaScript, does anyone know of a way to do this strictly in Go? I am not particularly averse to using the tried-and-true JS methods, but I am curious to see whether it can be done without them.
does anyone know of a way to do this strictly in Go?
As others have indicated in the comments, no, Go cannot capture the event in the browser.
For that you will need some JavaScript in the browser to send the request for more information to the server (where Go runs).
You could also push all the required information to the browser when you first serve the page and hide/show it based on CSS/JavaScript events, but again, that's just regular web development and has nothing to do with Go.
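To make that concrete, here is a minimal sketch; the /record route, the recordDetails struct, and its fields are hypothetical. A Go handler returns one record's details as JSON, which a small JavaScript click handler on each row can request and display in the status area.
package main

import (
	"encoding/json"
	"net/http"
)

// recordDetails holds the extra fields shown in the status area.
type recordDetails struct {
	ID    string `json:"id"`
	Name  string `json:"name"`
	Notes string `json:"notes"`
}

func detailsHandler(w http.ResponseWriter, r *http.Request) {
	id := r.URL.Query().Get("id")
	// Normally you would look the record up in PostgreSQL here; hard-coded for the sketch.
	d := recordDetails{ID: id, Name: "example", Notes: "full details for the selected row"}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(d)
}

func main() {
	http.HandleFunc("/record", detailsHandler)
	http.ListenAndServe(":8080", nil)
}
On the browser side, each row rendered as <tr data-id="{{.ID}}"> would get a click listener that fetches /record?id=... and writes the returned JSON into the status area; that part is plain JavaScript, not Go.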
I'm creating a website that has a huge amount of HTML (many thousands of div elements). There's no real way to get away from having all these div elements, and it's making the site load very slowly (7-12 seconds). I've tried putting caching on the site, but it doesn't help, since the page still has to render all these div elements.
More specifically it's 140 dropdowns, that each contain 100-800 div elements and they take a long time to show.
My thought was to render the div elements inside the dropdowns after the page loads, but I don't know how to go about that.
What is the easiest way to render some of your partials AFTER the page has loaded? I'm using Rails 4 btw.
Any other suggestions on how to deal with HUGE amounts of HTML?
I have a similar issue on one of my pages. Here are some things to try related to the select boxes.
(The top two may not be relevant since you said you tried caching, but I'm including for completeness. What type of caching did you try? How did you verify it was the browser rendering that was slow?)
Double check the cause of the problem
Comment out the code that generates the select boxes and check whether the time in your rails logs (as opposed to your browser measurements) drops. This establishes your "lower bound" for performance improvements on that measure.
Avoid using rails helpers.
If you're using select_tag, options_for_select, or any of that family of methods you may be doing a lot of repeated work since each time they are called they need to rebuild the list of options. If the options are the same for each select box, you can build them up once then just dump them in the page:
<% options = options_from_collection_for_select(@array, "id", "name") %>
<%= select_tag "myfield", options %>
If you need different selected values for each, you can try:
Processing options after creation to add them. This is pretty gross and possibly won't give you much speed up over the default generators.
Dump the defaults into a JavaScript variable, then set them with JS after the page loads (a minimal sketch follows below).
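A rough sketch of that second option, where @defaults and the select element ids are hypothetical:
<%# Dump the per-field selected values into one JS object, then apply them once the DOM is ready. %>
<script>
  var selectedValues = <%= raw @defaults.to_json %>; // e.g. {"field_1": 42, "field_2": 7}
  document.addEventListener('DOMContentLoaded', function () {
    Object.keys(selectedValues).forEach(function (id) {
      var select = document.getElementById(id);
      if (select) { select.value = selectedValues[id]; }
    });
  });
</script>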
AJAX in partials
This will give the illusion of loading faster, even though server time is the same (though it may be parallelized somewhat, you add extra network delay). The easiest way to do this is with jQuery's .load method.
$("#element-1").load("/path/to/partial/1")
Generate select boxes in JS
If you can get all the data you need to the client relatively fast (maybe serve it up in a JSON endpoint?) you can try building up the select boxes directly with jQuery, which is covered in Programmatically create select list
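As a rough sketch, assuming a hypothetical /options.json endpoint returning [{id: 1, name: "..."}, ...] and dropdowns marked with a .big-dropdown class:
$.getJSON('/options.json', function (items) {
  // Build the option markup once (escape the names if they can contain HTML).
  var html = items.map(function (item) {
    return '<option value="' + item.id + '">' + item.name + '</option>';
  }).join('');
  // Reuse the same markup for every dropdown instead of rebuilding it 140 times.
  $('select.big-dropdown').html(html);
});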
Remove redundant HTML
Why do your dropdowns have divs inside them? Is there a way to simplify?
Although the question is a few years old, maybe someone will benefit from this one.
1) in controller:
@companies = Company.all.pluck(:name, :id) # order is important here
2) in view:
<%= f.select(:company_id, options_for_select(
      @companies, selected: @user.company_id
    ), {}, class: "form-control m-b") %>
pluck grabs only the :name and :id values, so an array of arrays, e.g.
=> [["Sumptus xiphias canto.", 5], ["My First Co.", 1]]
is created, and then options_for_select populates the options.
My 5000 records are populated in ~300ms in AJAX modal window. Not super fast, however I don't think regular user would complain.
By transclusion I mean a page like
{{template
| blahblah=
| asd =
| df=
}}
So if there are too many "|"s, will they make the page load slowly?
Let's say page "Template:*" is
*
so that {{*}} will render a bullet.
Please compare
(Template:A and page "A page")
and
(Template:B and page "B page")
Both A page and B page will display the same thing, but which one will be faster to load if there are thousands more transclusions done in this way?
Template:A
* {{{a}}}
* {{{b}}}
* {{{c}}}
A page
{{A
|a=q
|b=w
|c=e
}}
Template:B
{{{a}}}
B page
{{B
|a={{*}} q <br> {{*}} w <br> {{*}} e
}}
===== Question added =====
@Ilmari_Karonen Thank you very much.
What if the number is nearly 1000, so that the A page is
{{A
|a1=q
|a2=w
|a3=e
....
|a999=w
|a1000=h
}}
Still, thanks to caches, "for most page views, template transclusion has no effect on performance"?
And what do you mean by "for most page views"? You mean low enough page views?
You said "the recommended way to deploy MediaWiki is either behind reverse caching proxies or using the file cache. Either of these will add an extra caching layer in front of the parser cache."
Should this be done "before" posting any content on mediawiki? Or it doesn't matter if I do it after I post all the pages to mediawiki?
===What if the transclusion relationship is very complex===
@Ilmari_Karonen I got one more question. What if the transclusion relationship is very complex?
For example
Page A is
{{temp
| ~~~
| ~~~
... (quite many)
| ~~~
}}
And Template:Temp has {{Temp2}},
and Template:Temp2 is again
{{temp3
|~~~
|~~~
... (very many)
|~~~
}}
Even in such a case, for the reasons you mentioned, numerous transclusions won't affect the loading speed of Page A?
Yes and no. Mostly no.
Yes, having lots of template transclusions on a page does slow down parsing somewhat, both because the templates need to be loaded from the DB and because they need to be reparsed every time they're used. However, there's a lot of caching going on:
Once a template is transcluded once on a given page, its source code is cached so that further transclusions of the same template on that page won't cause any further DB queries.
For templates used without parameters, MediaWiki also caches the parsed form of the template. Thus, in your example, {{*}} only needs to be parsed once.
In any case, once the page has been parsed once (typically after somebody edits it), MediaWiki caches the entire parsed HTML output and reuses it for subsequent page views. Thus, for most page views, template transclusion has no effect on performance, since the page will not need to be reparsed. (However, note that the default parser cache lifetime is fairly low. The default is OK for high-traffic wikis like Wikipedia, but for small wikis I'd strongly recommend increasing it to, say, one month, and setting the parser cache type to CACHE_DB; a minimal LocalSettings.php sketch follows below.)
Finally, the recommended way to deploy MediaWiki is either behind reverse caching proxies or using the file cache. Either of these will add an extra caching layer in front of the parser cache.
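A minimal LocalSettings.php sketch of those two suggestions (the 30-day figure is just an example):
# Keep parsed pages in the database cache and extend their lifetime to roughly one month.
$wgParserCacheType = CACHE_DB;
$wgParserCacheExpireTime = 30 * 24 * 3600; // 30 days, in seconds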
Edit: To answer your additional questions:
Regardless of the number of parameters, each page still contains only one template transclusion (well, except for the {{*}} transclusions on page B, but those should be efficiently cached). Thus, they should be more or less equally efficient (as in, there should not be a noticeable difference in practice).
I mean that, most of the time when somebody views the page, it will (or at least should) be served from the cache, and so does not need to be reparsed. Situations where that does not happen include when:
the time since the page was last parsed exceeds the limit specified by $wgParserCacheExpireTime (24 hours by default, but this can and IMO should be increased for most wikis),
the page has been edited since it was added to the cache, and so needs to be reparsed (this typically happens immediately after clicking the "Save page" button),
a template used on the page has been edited, requiring the page to be reparsed,
another page linked from this page has been created or deleted, requiring a reparse to turn the link from red to blue or vice versa,
the page uses a MediaWiki extension that deliberately excludes it from caching, usually because the extension inserts dynamically changing content into the page,
someone has deliberately purged the page from the cache, causing an immediate reparse, or
the user viewing the page is using an unusual language or has changed some other options in their preferences that affect page rendering, causing a separate cached version of the page to be generated for them (this version may be reused by any other user using the same set of preferences, or by the same user revisiting the page).
You can add a proxy in front of your wiki, and/or enable the file cache, at any time. Indeed, since setting up effective caching is a somewhat advanced task, you may want to wait until you get your wiki up and running without a front end cache first before attempting it. This also allows you to directly compare the performance before and after setting up the cache.
I have an old-style CGI program which generates a large HTML table. Because the table contents are slow to calculate (even though it's not that big) I would like to print it one row at a time, and have the browser display rows as they are fetched.
If you search online you'll see that style="table-layout: fixed" is supposed to help some browsers do this, but on the other hand Firefox can render incrementally even without it. My test browser is Firefox 4.0b10 on Windows but I cannot get it to display incrementally using the simple example below:
<html>
<head>
<title>x</title>
</head>
<body>
<table width="100%" style="table-layout: fixed" rows="4" cols="4">
<col width="10%" />
<col width="20%" />
<col width="30%" />
<col width="40%" />
<tr><td width="10%">a</td><td width="20%">b</td><td width="30%">c</td><td width="40%">d</td></tr>
<!-- flush output, one second pause... -->
<tr><td width="10%">a</td><td width="20%">b</td><td width="30%">c</td><td width="40%">d</td></tr>
<!-- flush output, one second pause... -->
<tr><td width="10%">a</td><td width="20%">b</td><td width="30%">c</td><td width="40%">d</td></tr>
<!-- flush output, one second pause... -->
<tr><td width="10%">a</td><td width="20%">b</td><td width="30%">c</td><td width="40%">d</td></tr>
<!-- flush output, one second pause... -->
</table>
</body>
</html>
Instead the page is blank until the end of the download, when the whole table appears. I've tried various ways to tell the browser the table dimensions in advance so it should have no problem displaying it as it arrives, but they don't help. (Removing the hints doesn't help either.)
If I modify my CGI script to close and restart the table between each row, with an extra blank paragraph in between, then the page does render incrementally in the browser. This shows that the data is getting to the browser incrementally - just Firefox is choosing not to render it.
Ironically, much more complex scripts producing larger tables seem to do what I want, showing one row at a time as it downloads, but whenever I try to reduce the output to a minimal test case it doesn't work. This leads me to suspect there is some complexity heuristic used by Firefox's rendering engine.
What's the magic dust I need to tell the browser to always display the table as downloaded so far?
For what it is worth: the Firefox I use (3.6.16) does not display the page until it has finished downloading, regardless of what is being downloaded.
You could look for settings in about:config, but I have not seen any solution to this.
There are add-ons that help with displaying placeholders, but they don't always work either.
Just found this. Read it, or try:
Enter about:config in the address bar.
Right-click, select New > Integer.
Name it nglayout.initialpaint.delay.
Set the value to 0.
cheers
First of all, I'd say it's seldom good user interface design to load huge amounts of data at once. In most cases, it's better to offer some sort of search or at least paging functionality. There may be exceptions, but people simply cannot process very large quantities of information, and apps serving far more data than people have any use for aren't just wasting cycles, they are badly designed. Imagine if Google displayed the first 10,000 hits by default, or even 1,000. Most users wouldn't even look beyond the first 5 or so, and the amount of wasted bandwidth...
That said, it may of course not be your fault if the page is badly designed. Or it may be, but you'll need a quick fix before coming back to redesign the solution later. :)
One way to make this happen would be to render the table client-side instead of on the server. If there's nothing else heavy on the page, the user would be served quickly with the other content, and the area where the table will appear could contain an animated GIF or similar to indicate the software is "working". You could use an AJAX-call to fetch the data for the table, and modify the DOM in Javascript. You could then create for instance 100 rows at a time and use window.setTimeout to allow the browser to render the updated DOM before continuing.
A good article explaining the event dispatch model may be of help if you choose to go down this path:
http://dev.opera.com/articles/view/timing-and-synchronization-in-javascript/
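A minimal sketch of the approach above; the /report-data URL, the field names, and the report-body element id are hypothetical:
function loadReport() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/report-data');
  xhr.onload = function () {
    var rows = JSON.parse(xhr.responseText); // e.g. [{name: "...", qty: 3}, ...]
    var tbody = document.getElementById('report-body');
    var i = 0;

    function appendChunk() {
      var end = Math.min(i + 100, rows.length);
      for (; i < end; i++) {
        var tr = document.createElement('tr');
        tr.innerHTML = '<td>' + rows[i].name + '</td><td>' + rows[i].qty + '</td>';
        tbody.appendChild(tr);
      }
      if (i < rows.length) {
        // Yield to the browser so it can paint the rows added so far.
        window.setTimeout(appendChunk, 0);
      }
    }
    appendChunk();
  };
  xhr.send();
}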
OK, so how about dropping client-side rendering and instead fetching and replacing server-side rendered HTML within a placeholder (and thus the table) multiple times? The server side would have to use a background thread and supply a handle (e.g. a GUID) in its initial response, which an AJAX call could then use to ask for the table. The server could reply with a few rows plus an indication that it's not done, prompting the client to repeat the request until finished.
I suppose it would be a little messy, but at least it would allow you to use the same page for your report. A query string parameter could tell the page whether to render completely or emit script to call back to get the data.
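A rough sketch of the polling side of that idea; the /report-rows URL and the done/rowsHtml fields in the response are hypothetical:
function pollTable(handle) {
  $.getJSON('/report-rows', { handle: handle }, function (response) {
    // The server renders a few <tr> rows at a time and says when it has finished.
    $('#report-body tbody').append(response.rowsHtml);
    if (!response.done) {
      setTimeout(function () { pollTable(handle); }, 500);
    }
  });
}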