ASP.NET controls become unresponsive after working perfectly for a while - html

I have been struggling with this issue for a while now, so I thought I'd ask you folks if anyone could offer any help or inspiration.
I have an ASP.NET application that runs inside an iOS app on an iPod sitting on top of a barcode scanner.
Below are the steps:
1. Scan a barcode
2. Catch the barcode as a query string parameter on a page in the ASP.NET application, passed through by the iOS app (a sketch of this step follows the list)
3. Search the product and display details on the ASP Page
4. The user enters a quantity, adds the product to stock,
and moves on to the next product. They can have 300-400 products in a session.
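For illustration only (this is not the asker's actual code): a minimal ASP.NET Web Forms sketch of step 2, reading the scanned barcode from the query string. The parameter name "barcode" and the LoadProduct helper are hypothetical.

    // Hypothetical code-behind for the scan page; "barcode" is an assumed parameter name.
    using System;
    using System.Web.UI;

    public partial class ScanPage : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            if (IsPostBack) return;

            string barcode = Request.QueryString["barcode"];
            if (!string.IsNullOrEmpty(barcode))
            {
                LoadProduct(barcode); // look up the product and bind it to the page controls
            }
        }

        private void LoadProduct(string barcode)
        {
            // Placeholder: query the database and populate the product detail controls.
        }
    }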
Now, when the number of products scanned and added to stock reaches 100-105, the quantity textbox and other controls such as buttons start to become unresponsive, and sometimes freeze completely.
If I scan a new product, the page still searches for and displays it fine, but I cannot get into the textbox to enter the quantity.
I have contacted the iOS app developer and he has done some work that improved this from the initial 70 scans to 105. This is all on a 5th-generation iPod (500 MB RAM).
It performs even better on a 6th-generation iPod (1 GB RAM), where I can get up to 170-180 scans before the issue comes back to haunt me.
I have to kill and restart the iOS app to be able to work again.
I was using LINQ in ASP.NET, but I have replaced it with a plain SqlDataReader and stored procedures to keep things light. That did not help a great deal; maybe another 5 scans were added before it freezes.
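For reference, and again not the asker's actual code: a minimal sketch of the kind of plain SqlDataReader/stored-procedure lookup described above, written so that every ADO.NET object is disposed immediately. The connection string name, procedure name, parameter, and column name are hypothetical.

    // Hypothetical lightweight lookup; "ProductsDb", "usp_GetProductByBarcode",
    // "@Barcode" and "ProductName" are assumed names, not the asker's schema.
    using System.Configuration;
    using System.Data;
    using System.Data.SqlClient;

    public static class ProductLookup
    {
        public static string GetProductName(string barcode)
        {
            string cs = ConfigurationManager.ConnectionStrings["ProductsDb"].ConnectionString;
            using (var conn = new SqlConnection(cs))
            using (var cmd = new SqlCommand("usp_GetProductByBarcode", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@Barcode", barcode);
                conn.Open();
                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    // Everything is released as soon as the using blocks end,
                    // so nothing heavy is kept alive between scans.
                    return reader.Read() ? reader["ProductName"].ToString() : null;
                }
            }
        }
    }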
I have used the profiler in VS 2012 and reworked the heaviest methods/objects to reduce the bytes used.
Is there a way that I can further optimize the page to help the situation?
I am not even sure if I am barking up the right tree here.

It sounds like you have done all you can to reduce the footprint of the ASP.NET page on the client side. You might squeeze a little more performance out of it by running the pages through a minifier. Since, as you mentioned, the iOS dev was able to gain a pretty big increase in scan numbers, it seems the biggest resource overhead might be on the iOS side.
I am not familiar with iOS development, but it seems your issue is largely down to the hardware capacity of the device. There may be more tricks the iOS dev can use to make the scanner app take priority, or even close unneeded background processes when it starts.


How to optimize website for Chrome: waiting and download order of magnitude slower than other browsers

NB: since the answer to this could involve JavaScript or PHP programming, or general networking, or IT systems, I put it here, but if some mod thinks it's better suited for SuperUser or ServerFault, I won't object to it being moved.
I have a landing page to which I'm driving traffic through PPC. I've set up AWS CloudWatch to get RUM data, and the page is performing terribly: an average load time of 9.9 s and a maximum of 21.5 s!
I've done all of the "standard" optimization I can think of or research. The site is built with WordPress, running on Apache on an EC2 server. I've:
Upgraded the EC2 instance to ensure I have enough memory
Written a custom plugin to filter out any other plugins that aren't explicitly used on the landing page in question
Customised my theme code so that it sets proper srcset and creates the correct image sizes on upload
Minified all the JS and CSS that I include through plugins or themes I've written
Put the site itself behind a CloudFront distribution
Installed the WP Super Cache plugin, and created a separate CDN distribution on CloudFront for it
Set appropriate cache control headers on CloudFront and told it to gzip everything
Put a facade in place of any videos
The site is blazing fast for me — "load" is less than 1 second. But my RUM says that's not the case for my users. So, I dug a little deeper. 70.2% of my visitors use Chrome, and 27.7% use other, of which almost 1/3rd are Android Browser — which as I understand it is just some sort of "Chrome Lite" — so nearly 80% of my visitors are using some Chrome variant.
Sure enough — if I load the page on Safari (to ensure nothing has expired on CloudFront), clear my browser cache, and reload the page, the first request shows a waiting time of 21.2ms, TTFB of 22.6ms, and download time of 4.8ms. The whole page shows that it's finished loading in 973ms.
Firefox is slightly slower, with the first request taking 100ms total, and the whole load about 1.75s — not blazing fast, but still within the understood "2 second" limit for good user experience.
On the other hand, in Chrome that same first request takes almost 570ms waiting and 208ms download. So, just the first request (which is 36k in size) takes almost as long to load in Chrome as the whole site takes in Safari. And that repeats for every single request, where both the waiting time and the download time are an order of magnitude slower in Chrome than in Safari (on the same device, on the same network).
I would think "waiting" and "download" times would be primarily network driven, but I can repeat this all day long and the results are the same.
I might just assume that Chrome is not optimized for the Mac on which I'm running it, but, as I said, this all started with RUM data, so it's clearly not that. As much as I might like to, I obviously can't force all of my visitors to swap their Android devices for iPhones. Equally obviously, I can't have an average load time of 10 seconds.
So, why is my site so slow on Chrome? What else can I do to optimize this?
The landing page in question is here: https://www.chrisrichardson.info/lp/prague-b/
Note, a lot of the optimization I've done is for that page in particular, so other pages on the site might perform even worse, but I don't care about that, at least at the moment.
Hahahaha.
OK, just leaving this here for posterity's sake. The 10x latency was because Chrome DevTools had preserved 3G throttling from a previous session. So, if you stumble upon this problem, check your throttling.
That still doesn't address my RUM issue, but I'll open that up as a separate question.

HTML 5 Local Storage proper usage

I'm using the new local storage that HTML5 offers.
When my mobile app (using phonegap) runs, it first goes to the server to get a list of members. The list doesn't change very often, so I was thinking maybe to keep it in the local storage and just refresh it every week or so.
My question is whether it's right to do so, because it's a list of 900 people. Not too big, but not small either.
Thanks.
900 people with (I guess) 5-6 fields each comes to around 45 KB of overhead (I did a test in my DB with a text file).
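As a rough sanity check (assuming, as my own guess, about 50 bytes per record): 900 × 50 bytes = 45,000 bytes, i.e. roughly 44 KB, which is far below the localStorage quota of typically around 5 MB per origin that most browsers allow.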
Most phones and tablets can live with that. (In other words, a single large background image would weigh your app down more than this list will.)
So go for it.

WP8 Uploading/Downloading large files

I am fairly new to Windows Phone development. We have a scenario where we allow the user to upload or download files, but with authentication (OAuth, NTLM, forms - all the standard mechanisms, not limited to OAuth).
So far, our R&D suggests that we have the following options:
1- Resource Intensive Agent
The constraints associated with resource-intensive agents (such as the minimum battery requirement) have led us to drop this option.
2- Periodic Agent
A relatively better option; however, they only run every 30 minutes, and the 10-minute duration constraint makes us doubt that an upload of, say, a 1-2 GB video would be guaranteed to complete, and you can anticipate other problems with this approach.
3- Background File Transfer
This is the best option in our scenario; however, my colleague told me that it does not support basic Windows authentication and that we cannot change the user agent, etc.
4- On Application
Another option is to perform the network operation in the application itself, but we can't keep the user in the application for that long, and after some time the lock screen would appear. So...
Can anyone who has experienced a similar scenario, or anyone from the product team, offer guidance here? It's a common scenario; are we missing something, or is it really an API limitation?
Resource Intensive Agents will indeed not work for your use case because they require external power to work. Not to mention that if the user receives a phone call the agent terminates.
Periodic Agents have a 25-second duration limit, not 10 minutes (the 10-minute limit applies to resource-intensive agents), so they are really not an option if you need to upload a gigabyte of data.
Background File Transfers have a hard limit of 100 megabytes. (It's even less on cellular internet).
On Application is a very viable option; you can prevent the phone from going to the lock screen if that's a problem. The bigger issue is that the user is pretty much stuck in the app for the duration of the upload. More importantly, this seems to be your only option out of the four you mentioned.
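To illustrate the lock-screen point (this is not code from the original answer): a minimal WP8 C# sketch that disables idle detection while an in-app transfer runs. UserIdleDetectionMode is the real WP8 API; the upload delegate is a placeholder for your own upload code.

    // Keep the screen awake during an in-app transfer so the lock screen cannot interrupt it.
    using System;
    using System.Threading.Tasks;
    using Microsoft.Phone.Shell;

    public static class InAppTransfer
    {
        // Call this from your page instead of relying on a background agent.
        public static async Task RunWithScreenKeptOnAsync(Func<Task> upload)
        {
            PhoneApplicationService.Current.UserIdleDetectionMode = IdleDetectionMode.Disabled;
            try
            {
                await upload(); // your own HttpWebRequest/HttpClient upload with OAuth/NTLM/forms auth
            }
            finally
            {
                // Restore normal idle behaviour once the transfer completes or fails.
                PhoneApplicationService.Current.UserIdleDetectionMode = IdleDetectionMode.Enabled;
            }
        }
    }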

Increasing Google Chrome's max-connections-per-server limit to more than 6

As far as I know, at the moment (late 2011) the max-connections-per-server limit remains 6. Please correct me if I am wrong. It is bad that we cannot change this as easily as in Firefox. As far as I know this value is hardcoded.
One of the solutions is to download the Chromium sources and rebuild them. Is there an easier solution?
Is there any tricky way to hack this without creating a dozen mirror domains?
Why I'm asking: my task is to create an HTML/JavaScript slideshow that will run inside a full-screened browser on a huge monitor hanging on the wall. The JavaScript is really complicated; it preloads photos and makes a lot of AJAX calls to my web services. If the Wi-Fi connection is slow and 6 photos are already loading, the AJAX calls fail and the application runs badly. I want a fast solution based on an HTTP, browser, or Ubuntu tweak, or something else, because rebuilding the JavaScript app would take days.
Off-topic: do you know of any other things that can be tweaked in my concrete situation?
IE is even worse, with a 2-connections-per-domain limit. But I wouldn't rely on fixing client browsers. Even if you have control over them, browsers like Chrome auto-update, and a future release might behave differently than you expect. I'd focus on solving the problem within your system design.
Your choices are to:
Load the images in sequence so that only 1 or 2 XHR calls are active at a time (use the success event from the previous image to check if there are more images to download and start the next request).
Use sub-domains like serverA.myphotoserver.com and serverB.myphotoserver.com. Each sub-domain will have its own connection-limit pool. This means you could have 2 requests going to 5 different sub-domains if you wanted to. The downside is that the photos will be cached per sub-domain. BTW, these don't need to be "mirror" domains; you can just add additional DNS records pointing at the exact same website/server. This means you don't have the headache of administering many servers, just one server with many DNS records.
I don't know that you can do it in Chrome outside of Windows -- some Googling shows that Chrome (and therefore possibly Chromium) might respond well to a certain registry hack.
However, if you're just looking for a simple solution without modifying your code base, have you considered Firefox? In the about:config you can search for "network.http.max" and there are a few values in there that are definitely worth looking at.
Also, for a device that will not be moving (i.e. it is mounted in a fixed location) you should consider not using Wi-Fi (even a Home-Plug would be a step up as far as latency / stability / dropped connections go).
BTW, the HTTP/1.1 specification (RFC 2616) suggests no more than 2 connections per server:
Clients that use persistent connections SHOULD limit the number of simultaneous connections that they maintain to a given server. A single-user client SHOULD NOT maintain more than 2 connections with any server or proxy. A proxy SHOULD use up to 2*N connections to another server or proxy, where N is the number of simultaneously active users. These guidelines are intended to improve HTTP response times and avoid congestion.
There doesn't appear to be an external way to hack the behaviour of the executables.
You could modify the Chrome(ium) executables as this information is obviously compiled in. That approach brings a lot of problems with support and automatic upgrades so you probably want to avoid doing that. You also need to understand how to make the changes to the binaries which is not something most people can pick up in a few days.
If you compile your own browser you are creating a support issue for yourself as you are stuck with a specific revision. If you want to get new features and bug fixes you will have to recompile. All of this involves tracking Chrome development for bugs and build breakages - not something that a web developer should have to do.
I'd follow @BenSwayne's advice for now, but it might be worth thinking about doing some of the work outside of the client (the web browser) and putting it in a background process running on the same or different machines. This process can handle many more connections, and you are just responsible for getting the data back from it. Since it is local(ish), you'll get results back quickly even with minimal connections.

Win32: Is it possible to build an app that houses other apps?

I was wondering, how would you go about writing an application that basically houses other applications inside of it?
The reason I ask is that I'd love to build an app that 'conquers' my current explosion of open windows. I've used virtual window managers before and they're nice and all, but I could do so many more things with an app like the one I describe.
Alternatively, does anyone know of an easy-to-use, intuitive application for confining windows to 'regions' of your screen? Something like GridMove, but more intuitive and less flaky?
You could create a window, then enumerate all windows that have the WS_OVERLAPPEDWINDOW style, select the ones belonging to the application you want to house, and call SetParent on each, setting the parent to the window you created. You could also use FindWindow to locate a window by its title.
All the windows inside the house can never leave the house window's boundaries, but they still follow all the same rules. You can still click-and-drag windows etc.
The problem here is that if the application inside the house creates another window, its parent will most likely be the desktop window, not the house window.
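Not part of the answer above, just an illustration: a minimal C# (WinForms + P/Invoke) sketch of the FindWindow/SetParent approach it describes. The "Untitled - Notepad" caption is only an example target window.

    // Re-parent an existing top-level window into our own "house" window.
    using System;
    using System.Runtime.InteropServices;
    using System.Windows.Forms;

    class HouseForm : Form
    {
        [DllImport("user32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
        static extern IntPtr FindWindow(string lpClassName, string lpWindowName);

        [DllImport("user32.dll", SetLastError = true)]
        static extern IntPtr SetParent(IntPtr hWndChild, IntPtr hWndNewParent);

        protected override void OnShown(EventArgs e)
        {
            base.OnShown(e);
            // Locate an existing top-level window by its title...
            IntPtr child = FindWindow(null, "Untitled - Notepad");
            if (child != IntPtr.Zero)
            {
                // ...and make it a child of this form; it can no longer leave our client area.
                SetParent(child, this.Handle);
            }
        }

        [STAThread]
        static void Main()
        {
            Application.Run(new HouseForm());
        }
    }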
I think what you are describing is generally called a Window Manager. The Windows shell is itself a (poor) example of a window manager. You might want to investigate some alternatives. I know there has been some success in getting KDE ported to Windows, so you might want to look at the current state of that project.
Microsoft also provides a PowerToy (IIRC) that gives you virtual desktop support, but it's really bad. Have you considered just getting a second monitor (and perhaps a utility such as MultiMon Taskbar to get a second task bar on the other monitor)?
Here is code that uses FindWindow / SetParent to create a tabbed view combining different applications: Jedi Window Dock.
I also wrote an application (not free, not open source) that takes this idea a bit further called WindowTabs.
The only caution I would give you is that not all applications like being parented. If you're writing .NET, there are some gotchas there (which is why WindowTabs doesn't use parenting).
Also, in general, once you do a SetParent, you are joining the threads at a Win32 level meaning that if one hangs, all of them are toast.
Multiple Document Interfaces could help you out.
Despite the multiple down votes, I stand by this answer because the OP never stated the source of the "explosion of windows." I've seen business apps that open several windows at a time (or users that would open several instances "to save time") where MDI would've been a nice feature for them.
If the OP is a power user who has a need for another window manager because he runs many apps at once, then this really doesn't apply. It also isn't the problem I'd be addressing -- it would be finding a way to have fewer windows.
In general, there's always a VM.
It may be overkill or it may not work depending on the specifics of what you're trying to do. But VMWare will let you copy/paste files and text between your VM and local machine, so it's not that far off of being a true window manager. The system requirements aren't even that outrageous, considering how much memory iTunes + a typical browser eat up.