How to speed up browser rendering for large amounts of data using Angular?

Okay, so I have around 20k records and 29 columns of text-only data that I fetch from a back end into an Angular app.
Once I get the data from the server (which takes around 1 s), I bind it to a field in a component and then use an HTML table combined with *ngFor to display all the data.
Chrome takes about 58 seconds to render everything in one go, and scripting time is around 10 s.
It is a hard requirement at my company that results appear on the page in under 2.5 s from the moment the link to the component is clicked, without using pagination or infinite scrolling.
What are my options to achieve that kind of performance?

It's hard to tell with limited knowledge of the app and the model, but I see two options (you can go for one or both):
Lazy-load paginated table content. I don't know which (if any) UI framework you're using, but you can check how it's handled in PrimeNG, for instance, with its lazy-loaded table.
EDIT: It can be realized in a few ways:
First: if the backend is the problem (not in this case), you can request only a slice of the data from the backend and handle it with a library like PrimeNG.
Second: if you're not using any UI framework, you can use something like virtual scrolling.
Slim down the model that is sent from the backend. If a large collection of objects has nested objects as properties, it can significantly slow down both the download and the rendering. Instead of sending nested objects, decide whether you really need all the data being transferred and send a lighter, flattened object instead (see the sketch below).
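A sketch of the "light object" idea in TypeScript; the entity shapes here are invented for illustration and are not the asker's actual model:

    // What the backend stores (illustrative):
    interface CustomerEntity {
      id: number;
      name: string;
      address: { street: string; city: string; country: string };
      account: { number: string; manager: { name: string } };
    }

    // What the 29-column grid actually needs: a flat, text-only row.
    interface CustomerRow {
      id: number;
      name: string;
      city: string;
      accountNumber: string;
    }

    function toRow(e: CustomerEntity): CustomerRow {
      return { id: e.id, name: e.name, city: e.address.city, accountNumber: e.account.number };
    }

Mapping on the server keeps both the payload and the change-detection work proportional to what is actually displayed.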
Also check out Chrome DevTools and analyse loading performance to get more info and find the bottlenecks.
EDIT:
With the extra details provided in the comments, I would use virtual scroll or a similar mechanism (it loads and renders HTML dynamically as you scroll).
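For illustration, a minimal sketch of virtual scrolling with Angular CDK (@angular/cdk/scrolling); the component name, the Row shape, and the 48 px row height are assumptions, and ScrollingModule must be imported in the app's NgModule:

    import { Component } from '@angular/core';

    interface Row { id: number; [column: string]: string | number; }

    @Component({
      selector: 'app-big-table',
      template: `
        <cdk-virtual-scroll-viewport itemSize="48" style="height: 600px">
          <!-- Only the rows inside the viewport exist in the DOM,
               so 20k records stay cheap to render. -->
          <div *cdkVirtualFor="let row of rows">{{ row.id }}</div>
        </cdk-virtual-scroll-viewport>
      `,
    })
    export class BigTableComponent {
      rows: Row[] = []; // assign the ~20k records here once the backend responds
    }

The fixed itemSize lets the viewport compute scroll positions without measuring each row.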

How would I structure a single page application that takes an input, fetches data from the back end, then renders it to the front end with routers?

Example: https://redditmetis.com/
Issue
I've been having trouble structuring a recent SPA I started. Like the example above, I need to accept an input, make a few API calls on the back end, manipulate the data, then render it on the front end. I'm currently going with a Django + React stack, since I'm pretty familiar with both. I can't really picture what this would look like from a surface view; I've worked with APIs before, but I can't wrap my head around how the client and the server would interact to make it all connect.
What I have so far
After looking into it, I think I need React Router, similar to the example website provided. On my Django server, I plan on making separate API calls and running an algorithm to organize and sift through the received responses, then pushing the result to the client. I'm still figuring out how to set that up, since most API calls are made in componentDidMount, which only runs once, after a component first mounts. This isn't much, but it's a start.
If anyone has pointers on how to start, I'd appreciate it, thanks.
Each class component you create can have its own componentDidMount method.
You can also call fetch from within components that update dynamically during state changes.
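For illustration, a minimal sketch (TypeScript/TSX) of the pattern the answer describes; the /api/summary/ endpoint and the response shape are hypothetical stand-ins for your Django API:

    import React from 'react';

    interface State { data: unknown; error: string | null; }

    class Summary extends React.Component<{ user: string }, State> {
      state: State = { data: null, error: null };

      componentDidMount() {
        // Runs once, after this component first mounts.
        fetch(`/api/summary/?user=${encodeURIComponent(this.props.user)}`)
          .then((res) => res.json())
          .then((data) => this.setState({ data }))
          .catch((err) => this.setState({ error: String(err) }));
      }

      render() {
        if (this.state.error) return <p>Failed: {this.state.error}</p>;
        if (!this.state.data) return <p>Loading...</p>;
        return <pre>{JSON.stringify(this.state.data, null, 2)}</pre>;
      }
    }

    export default Summary;

Each route rendered by React Router can mount a component like this, so the fetch happens whenever the user navigates to that view, not just at app start.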

What techniques are available for programatically transforming HTML/DOM in an iOS Application?

I'm processing a variety of RSS feeds, which contain summaries, as well as the target page URL content, and trying to use a uniform transformation method.
XSLT was the first thing that occurred to me to try, as it would accomplish what I want, in a standard way, without a lot of fuss aside from adding new XSLT stylesheets to accommodate uniquely formatted sites and feed content.
Problem: XSLT libraries are considered "private" in iOS, and even linking statically against your own copy will get you rejected by the App Store analysis tools.
I've looked into the possibility of injecting the stylesheet and data into a UIWebView that isn't displayed, but this seems like a really roundabout and hackish way to get at the system's underlying XSLT processor in an "approved" fashion.
What alternative techniques/libraries exist that would let me do this in a standard fashion, i.e., without rolling my own?
I'm not sure I fully understand your requirements, but one possibility would be to use libxml (which is allowed on iOS) to parse the XML and, if necessary, manipulate the DOM. If you really need to do XML transformations, this is going to be more effort than XSLT, but if you just need to extract data from the XML, that can be done fairly easily with XPath queries.
That said, I have read several people claiming they got XSLT working on iOS and had their apps approved in the App Store. In particular, I've seen this Stack Overflow answer claimed as a working solution by multiple people. And if that fails, another answer suggested building the libxslt library yourself with renamed symbols to bypass the App Store checks. I would only suggest that as a last resort, though.
You'll probably want to look into Hpple for something powerful but lightweight and native. See the getting-started tutorial here: http://www.raywenderlich.com/14172/how-to-parse-html-on-ios. Good luck!
I'm also going to recommend TFHpple, but I'll elaborate on the solution. I've built an app that navigates a third-party website/data source (well, I'm the third party and they're the source, but that's semantics), and there are some pitfalls. The biggest pitfall is obvious: if the data source's DOM changes, you need to change your app and re-release it. A creative way around this is to publish a copy of the expected DOM queries on a public server, so the end user doesn't have to update their app every time the data source changes (as long as the change isn't radical).
For instance, if your expected DOM search in TFHpple is @"//figure[@class='figure']/a" and then a week from now the resource you're looking for is altered to @"//figure1[@class='figure1']/a", you've just opened yourself up to an App Store release... UNLESS you publish the expected DOM searches in a data dictionary on a web server you control, which your app can consume and serve out to the various DOM search sites within the app. The only problem I foresee is that if the data source adds or removes a data element you consume, you either have to release a build or handle the change ahead of time (respectively).
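The published-dictionary idea, sketched in TypeScript for brevity since the pattern is platform-independent (in the iOS app the same JSON would be fetched with NSURLSession and the values fed into TFHpple's searchWithXPathQuery:); the URL and keys are hypothetical:

    interface SelectorMap { [elementName: string]: string; }

    async function loadSelectors(): Promise<SelectorMap> {
      // The app consumes this dictionary instead of hard-coding XPath strings,
      // so a change like //figure -> //figure1 only needs a server-side edit.
      const res = await fetch('https://example.com/app-config/selectors.json');
      return (await res.json()) as SelectorMap;
    }

The app then looks up something like selectors['figure'] at runtime instead of compiling the XPath into the binary.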
Lastly, if the data source's DOM isn't well formed or consistent, you may be beating your head against a wall more often than not.

Knockout viewModels, a single big one or multiple?

The project I'm working on is a single-page web application developed using MVVM as the design pattern.
Aside from the first request for the entire page, every other transaction is JSON-based, and every JSON payload is bound at the presentation level using Knockout.
At the moment we're developing the whole application with a single Knockout view model; every JSON payload is parsed inside that view model and bound to the presentation level.
Now, considering how big the view model has become, I'm wondering whether it's good practice to split the whole thing into several smaller view models, each bound to a single element on the page (as described in this article), making heavy use of Knockout's mapping plugin to generate the empty structure (and refresh the data).
If this isn't the best practice, how do you suggest managing the JSON binding? At the moment we're using $.parseJSON() to obtain an object and then pushing the various pieces of data into observable arrays, but I don't think this is the best way to approach the problem.
Thank you.
I'm a big fan of fanning out complexity across lots of smaller modules, rather than a single monolithic module with all the complexity.
I tend to have multiple view models and communicate between them using the Knockout.Postbox library.
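For illustration, a minimal sketch of that split in TypeScript, assuming the knockout-postbox library and its typings; the topic name and model shapes are invented:

    import * as ko from 'knockout';
    import 'knockout-postbox';

    class CustomerListViewModel {
      customers = ko.observableArray<{ id: number; name: string }>([]);
      select(customer: { id: number; name: string }) {
        // Broadcast instead of holding a reference to the detail view model.
        ko.postbox.publish('customer:selected', customer.id);
      }
    }

    class CustomerDetailViewModel {
      customerId = ko.observable<number | null>(null);
      constructor() {
        // React whenever the list view model announces a selection.
        ko.postbox.subscribe('customer:selected', (id: number) => this.customerId(id));
      }
    }

    // Each view model is bound to its own DOM region, not the whole page:
    ko.applyBindings(new CustomerListViewModel(), document.getElementById('list')!);
    ko.applyBindings(new CustomerDetailViewModel(), document.getElementById('detail')!);

Because the view models only share topic names, each one stays small and can be tested or replaced independently.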

HTML5: accessing large structured local data

Summary:
Are there good HTML5/JavaScript options for selectively reading chunks of data (say, to be eventually converted to JSON) from a large local file?
Problem I am trying to solve:
An existing program runs locally and outputs a ton of data. I want to provide a browser-based interactive viewer that will let folks browse through these results. I have control over how the data is written out. I can write it all into one big file, but since it's quite large, I can't just read the whole thing into memory. Hence, I am looking for some kind of indexed or db-like access to it from my web app.
Thoughts on solutions:
1. Brute force: the HTML5 File API has a nice slice() method (used together with FileReader) for random access. So I could write some kind of index at the beginning of the file, use it to look up the positions of the other stored objects, and read them whenever they're needed (see the sketch after this list). I figured I'd ask whether there are already JavaScript libraries that do something like this (or better) before trying to implement this ugly thing.
2. HTML5 local database. Essentially, I am looking for an analog of the HTML5 openDatabase() call that would open a (read-only) connection to a database backed by a user-specified local file. From what I understand, there's no way to point it at a pre-loaded database file. Furthermore, even if there were such a hack, it's not clear whether the local file format would be the same across browsers. I've seen the PhoneGap solution that populates the browser's local database from SQL statements. I could do that too, but the data I am talking about is quite large (5-10 GB): it would take a while to load, and such duplication seems rather pointless.
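A minimal sketch of option 1 in TypeScript, using FileReader plus File.slice(). The on-disk layout is an assumption for illustration: a 16-byte ASCII header holding the index length, followed by a JSON index mapping record ids to byte ranges:

    interface IndexEntry { offset: number; length: number; }

    function readSlice(file: File, start: number, end: number): Promise<string> {
      return new Promise((resolve, reject) => {
        const reader = new FileReader();
        reader.onload = () => resolve(reader.result as string);
        reader.onerror = () => reject(reader.error);
        reader.readAsText(file.slice(start, end)); // only this byte range is read
      });
    }

    async function readIndex(file: File): Promise<{ [id: string]: IndexEntry }> {
      const indexLength = parseInt(await readSlice(file, 0, 16), 10);
      return JSON.parse(await readSlice(file, 16, 16 + indexLength));
    }

    async function readRecord(file: File, entry: IndexEntry): Promise<unknown> {
      return JSON.parse(await readSlice(file, entry.offset, entry.offset + entry.length));
    }

Only the header, the index, and the requested record ever enter memory, which is the point for a multi-gigabyte file.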
HTML5 does not sound like the appropriate answer to your needs. HTML5's focus is on the client side, and based on your description you're asking a lot of the browser, most likely more than it can handle.
I would instead recommend looking at a server-based solution that delivers the desired results to the client view; something like Splunk would be a good product to consider.

Testing and mocking with Flex

I am developing a "dumb" front end; it's an AIR application that interacts with a "smart" LiveCycle server. There are currently about 20 request/response pairs in the application. For many reasons (testing, developing outside the corporate network, etc.), we have several XML files of fake data: if a certain configuration flag is set, the files are loaded, and a specific file is parsed and used to create a mock response. Each XML file is a set of responses for a different situation, all internally consistent. We currently have about 10 XML files, and this will probably grow to 30-50.
The current system was developed by me during one of those 90-hour-week release cycles, when we were under duress because LiveCycle was down again and we had a deadline to meet. Most of the minor crap has been cleaned up.
The fake data is in an object called FakeData, with properties like customerType1:XML, customerType2:XML, overdueCustomer1:XML, etc. Then in the FakeData constructor, all of the properties are set like this:
    customerType1 = FileUtil.loadXML(File.applicationDirectory.resolvePath("fakeData/customerType1.xml"));
And whenever you need some fake data (this happens in special FakeDelegates that extend the real LiveCycle Delegates), you get it from an instance of FakeData.
This is awful, for many reasons, but it works. One embarrassing part is that every time you create an instance of FakeData, it reloads all the XML files.
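(For concreteness, the reloading part could be avoided with a lazily populated static cache inside FakeData itself; since the cache lives in the fake-data code that's already stripped from production builds, nothing new becomes global. A minimal sketch in TypeScript-style syntax, which is close to AS3, with the loader stubbed:)

    // Stub standing in for the question's own loader,
    // FileUtil.loadXML(File.applicationDirectory.resolvePath(path)).
    declare class XML {}
    declare function loadXmlFromAppDir(path: string): XML;

    class FakeData {
      // Shared across instances, so each file is parsed at most once.
      private static cache: { [path: string]: XML } = {};

      get customerType1(): XML { return FakeData.load('fakeData/customerType1.xml'); }
      get overdueCustomer1(): XML { return FakeData.load('fakeData/overdueCustomer1.xml'); }

      private static load(path: string): XML {
        if (!(path in FakeData.cache)) {
          FakeData.cache[path] = loadXmlFromAppDir(path);
        }
        return FakeData.cache[path];
      }
    }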
I'm trying to figure out if there's a design pattern that is not Singleton that can handle this more elegantly. The constraints are:
No global instances can be required (currently, all the code dealing with the fake data, including the fake delegates, is pulled out of production builds without any side-effects, and it needs to stay that way). This puts the Factory pattern out of the running.
It can handle multiple objects using the XML data without performance issues.
The XML files are read centrally so that the other code doesn't have to know where the XML files are, and so some preprocessing can be done (like creating a map of certain tag values and the associated XML file).
Design patterns, or other architecture suggestions, would be greatly appreciated.
Take a look at ASMock, which was developed by a good friend of mine (and a member here, Richard Szalay) and is based on .NET's Rhino Mocks. We've used it in several production environments now, so I can vouch for its stability.
You should be able to get rid of the fake tests (more like integration tests) by using mock objects instead.
Wouldn't it make more sense to do traditional mocking with a mocking framework? Depending on your implementation, it might be possible to set up the Expects by reading the fake-data XML files.
Here is a Google Code project that offers mocking for ActionScript.