Slow text input in HTML field from barcode scanner

I have a webpage on my LAN used to input barcodes in real time into a database through an input field (Django + PostgreSQL + nginx). It works fine, but lately we have a customer that uses 72-character barcodes (Data Matrix), and these slow input down: before the next scan, the user must wait for the last barcode to finish redrawing in the field, one character after the other (about 1-2 seconds).
Is there a way to reduce the latency of drawing the scanned text in the HTML field?
Ideally the whole scanned barcode would appear at once, not one character after the other. The scanner is set to append an "Enter" after the scanned text.

In the end, as Brad stated, the problem is related mostly to the scanner's settings (USB in HID mode), although PC speed is also a factor. After several tests on a dual-core Linux machine, I estimate that about 85% of the delay is due to the scanner and 15% to the PC/browser combination.
To solve the problem I first found and downloaded the complete manual of our 2D barcode scanner (306 pages), then focused on USB Keystroke Delay as a possible cause, but the default setting was already 'No Delay'.
The setting that actually affected reading speed was USB Polling Interval, an option that applies only to the USB HID Keyboard Emulation Device.
The polling interval determines the rate at which data can be sent between the scanner and the host computer. A lower number means a faster data rate: the default was 8 ms, which I lowered to 3 ms without problems. Lower values weren't any faster, probably because I had reached the threshold where the PC becomes the bottleneck.
CAUTION: Ensure your host machine can handle the selected data rate; selecting a rate that is too fast for your host may result in lost data. In my case, lowering the polling interval to 1 ms caused no data loss on the working PC, but when testing inside a virtual machine data was lost as soon as I reached 6 ms.
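To check both effects outside the browser, a small test harness along these lines can help (a sketch of mine, not from the scanner manual; it assumes Linux, the scanner in USB HID keyboard mode, 72-character codes ending with Enter, and that the terminal running it has focus). It prints the average inter-character delay of each scan and flags scans that arrive shorter than expected:

    #!/usr/bin/env python3
    # Measure per-character latency of a HID barcode scanner and flag
    # scans with dropped characters. Scan into the terminal running this.
    import sys, termios, tty, time

    EXPECTED_LEN = 72  # length of the customer's barcodes (assumption)

    def main():
        fd = sys.stdin.fileno()
        old = termios.tcgetattr(fd)
        tty.setraw(fd)  # raw mode: one read() per keystroke, no line buffering
        try:
            while True:
                chars, stamps = [], []
                while True:
                    c = sys.stdin.read(1)
                    stamps.append(time.perf_counter())
                    if c in ('\r', '\n'):   # scanner appends Enter after the code
                        break
                    if c == '\x03':         # Ctrl-C quits
                        return
                    chars.append(c)
                if not chars:
                    continue
                deltas = [b - a for a, b in zip(stamps, stamps[1:])]
                avg_ms = 1000 * sum(deltas) / len(deltas)
                flag = '' if len(chars) == EXPECTED_LEN else '  <-- LOST DATA?'
                print(f'{len(chars)} chars, {avg_ms:.1f} ms/char{flag}\r')
        finally:
            termios.tcsetattr(fd, termios.TCSADRAIN, old)

    if __name__ == '__main__':
        main()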
Another interesting observation: browsers tend to respond noticeably more slowly after a long period of use with many tabs open (a couple of hours in my case), probably due to caching.
Tests were done with Firefox and Chromium on an old dual-core PC running Lubuntu (Linux).

This probably has nothing to do with your page, but with the speed of the scanner interface. Most of these scanners intentionally rate-limit their input so as not to overrun the computer's keyboard buffer and drop characters. Think about it: when you copy/paste text, it doesn't take a long time to redraw the characters; everything appears instantly.
Most of these scanners are configurable. Check whether your scanner has an option to increase its character rate.

On Honeywell and many other brands of scanner, the USB keystroke interval is labelled an INTERCHARACTER DELAY.
Also, if there is a baud rate setting, that would be something to increase.

Related

Chrome memory measurement now almost flat for longer test runs

In order to check our web application for memory leaks, I run a machine which does the following:
it runs automated End-to-End tests over (almost) the entire application in Chrome
after each block of tests, it goes to a state of the web application where almost nothing happens
it calls gc() to trigger garbage collection
it saves totalJSHeapSize and usedJSHeapSize to a file
it plots the results for each test run on a graph (the gc-and-save step is sketched below)
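A minimal sketch of that gc-and-save step, for concreteness (my own reconstruction, not the asker's code; it assumes Selenium driving Chrome, Chrome launched with --js-flags=--expose-gc so the page can call gc(), and the Chrome-only performance.memory API; the URL is a placeholder):

    import json, time
    from selenium import webdriver

    options = webdriver.ChromeOptions()
    options.add_argument('--js-flags=--expose-gc')  # makes window.gc() callable
    driver = webdriver.Chrome(options=options)
    driver.get('https://example.com/app')  # placeholder URL

    def sample_heap(label):
        driver.execute_script('window.gc();')  # force garbage collection
        time.sleep(1)                          # give the GC a moment to settle
        mem = driver.execute_script(
            'return {total: performance.memory.totalJSHeapSize,'
            '        used:  performance.memory.usedJSHeapSize};')
        with open('heap.log', 'a') as f:
            f.write(json.dumps({'label': label, 'ts': time.time(), **mem}) + '\n')

    sample_heap('after test block 1')  # called after each block of E2E tests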
That way, we can see how much the memory increases and which parts of our application are problematic: at some points the memory increases, at others it decreases.
Until yesterday, it looked like this:
Bright red (upper line): totalJSHeapSize, light red (lower line): usedJSHeapSize
Yesterday, I updated Chrome to version 69. And now the chart looks quite different:
The start and end amounts of memory used (usedJSHeapSize) are almost the same. But as you can clearly see, the way it changes over the course of the test (about 1.5 h) is quite different.
My questions are now:
Is this a change in reality or in measurement? I.e., did Chrome change its memory handling, or just the way it reports memory values via totalJSHeapSize and usedJSHeapSize?
Concerning memory leaks, is this good news or bad news for me? Before, I had dozens of spots where memory increased; now I have just three. Is that real, or are the leaks in the now-flat areas still there, just hidden?
I'm also thankful for any background information on how Chrome changed its memory measurement.
Some additional info:
The VM runs Kubuntu 18.04
It's a single-page web application built with AngularJS 1.6
The outcome of the memory measurement is quite stable - both before and after the update of Chrome
EDIT:
It seems this was a bug in Chrome version 69. At least, with the update to Chrome 70, this strange behavior is gone and everything looks almost as before.
I don't think you should worry about it. This can happen because of the memory manager used inside Chrome. You didn't mention which version produced your first memory graph; possibly the two versions use different memory managers. Chrome has used TCMalloc, which takes a large chunk of memory from the OS and manages it itself; when TCMalloc runs short, it asks the OS for another big chunk and starts managing that too. That is why the later graph has fewer ups and downs (but bigger ones) than the previous one. Hope that answers your query.
As you mentioned that
The outcome of the memory measurement is quite stable - both before and after the update of Chrome
You don't really need to worry about it; the way Chrome allocated memory previously and the way it does in the new version are simply different (possibly a different memory manager), that's all.

Browser Memory consumption / leak issues

I need help in understanding this testing process.
Our Quality Assurance (QA) team is using Performance Monitor (from Microsoft) to test browser memory consumption & leaks.
Steps QA does:
Open a web browser and log in to our webapp.
Note down the initial virtual bytes from the tool (shown in screenshot).
Perform some operation (let's say a search) a couple of times.
Note down the virtual bytes from the tool again.
Calculate the difference between the last and first virtual bytes allocated (after converting virtual bytes to MB).
Divide this difference by the total number of clicks performed by the user.
Note down the result (QA calls this the "remainder", though it is really a quotient: MB per click).
Now, this number should be less than 1 (a threshold they decided on).
If it is greater than 1, they say our webapp has memory leaks. (The calculation is restated in the sketch after this list.)
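For clarity, the QA formula restated as code (the numbers are invented for illustration):

    # Hypothetical restatement of the QA metric: average growth in
    # virtual bytes per user action; all values below are made up.
    first_vb = 850_000_000   # virtual bytes at start (from Performance Monitor)
    last_vb  = 912_000_000   # virtual bytes at the end of the test run
    clicks   = 50            # user actions performed in between
    mb_per_click = (last_vb - first_vb) / (1024 * 1024) / clicks
    print(f'{mb_per_click:.2f} MB per click')  # QA threshold: < 1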
For Firefox & Chrome, this number is less than 1 for us, but for IE 10 & 11 (both 32- and 64-bit) it is more than 1.
Questions:
Is this some standard practice they are following?
How correct is their analysis process?
How can I convince them, if their analysis is not right?
How should I go about fixing this problem?
P.S. I'm not able to get more information from our QA.
P.S. We use AngularJS for the client.
Note down the initial virtual bytes from tool (shown in screenshot)
Virtual bytes are nearly meaningless on 64-bit because large chunks of address space can be reserved ahead of time without actually being backed by RAM or swap. Of course the amount is somewhat correlated with actual memory use, but only "somewhat".
Calculate the difference between last & first virtual bytes allocated. (After converting virtual bytes to MB)
This calculation can be meaningless for a different reason: browsers use complex memory management systems (custom allocators and garbage collectors) which may not immediately release memory back to the operating system after using it. This means that for some time their memory usage may only appear to grow, never shrink, even when you close tabs.
How should I go about fixing this problem?
Use the browsers' built-in memory-tracking tools, e.g. about:memory in Firefox.

Chrome dev tools first memory heap snapshot is mysteriously large

I'm using the Profiles tab in the Chrome developer tools to record memory heap snapshots. My app has a memory leak, so I'm expecting the snapshots to gradually increase in size, which they do. But for reasons I don't understand, the first snapshot is always artificially large, creating a seemingly deceptive drop in memory between the first and second. All subsequent snapshots gradually increase as expected.
I know extra memory is often used at the beginning of a page load, due to caching and other setup, but the same thing happens no matter when I take the first snapshot: it could be 30 seconds after the page loads or 30 minutes, same pattern. My only guess is that the profiling tool itself is interacting with the memory somehow, but that seems like a stretch.
Any ideas what's going on here?
Right before a memory snapshot is taken, Chrome tries to collect garbage. It doesn't collect it thoroughly, though; it only does a predefined number of passes (this magic number seems to be 7). Therefore, when the first snapshot is taken there may still be some uncollected garbage left.
Before taking the first snapshot, try going to the "Timeline" tab and forcing garbage collection manually.
From what I've tested, this always reduces the size of the first snapshot.
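If you want to script that manual step rather than click the button, one option (my own suggestion, not from the answer above) is to force a collection over the Chrome DevTools Protocol before snapshotting; this assumes Selenium 4+ driving Chrome, and the URL is a placeholder:

    from selenium import webdriver

    driver = webdriver.Chrome()
    driver.get('https://example.com/app')  # placeholder URL
    # HeapProfiler.collectGarbage is the CDP command behind DevTools'
    # "collect garbage" button; run it before taking the first snapshot.
    driver.execute_cdp_cmd('HeapProfiler.collectGarbage', {})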

My Periodic Task just does not work

While attached to the debugger it runs just fine: the periodic task is invoked and runs over and over. But when I deploy it to my device, it seems to run once or twice and then stops.
What it does is set the live tile background image from isolated storage. The images are created in the application and then saved to isolated storage. As mentioned, it works well while attached to the debugger.
The only constraint I can think of that could break it is the memory cap. But the application creates and saves 40 images of ~25 kB each, and that isn't even 1 MB! The application itself is maybe < 4 MB, so that's 5 MB in total, a lot less than the 11 MB minimum requirement.
So it can't be the memory cap kicking in. Two consecutive unhandled crashes would also break the task, but I've wrapped all the code in the task's OnInvoke() in a try/catch.
Now I'm out of ideas about what is stopping my periodic task when it runs without being attached to Visual Studio's debugger. Any clues?
Firstly, are you using a Windows Phone 8.1 device by any chance? There is a known issue where periodic tasks do not run on Windows Phone 8.1 devices, as you can see on this forum.
A background agent can't use more than 6 MB of memory. You can get the currently available memory using the following snippet:
// available headroom = memory limit - current usage
var memory = DeviceStatus.ApplicationMemoryUsageLimit
             - DeviceStatus.ApplicationCurrentMemoryUsage;
Other constraints on scheduled agents:
they are automatically executed by the OS every 30 minutes
each run can't exceed 25 seconds
if the phone switches to battery saver mode, the background agent may not be executed
on some devices, only 6 background agents may be scheduled simultaneously
agents can't use more than 6 MB of memory
agents have to be rescheduled every 2 weeks
an agent that crashes two times in a row is automatically disabled by the system
Periodic tasks are unscheduled after two consecutive crashes. You need to make sure this doesn't happen (check internet connectivity if required, set a timeout on web requests, etc.).
You should place your code in a try/catch block and log exceptions to isolated storage so you can see afterwards what happened.
Here is the list of constraints that apply on scheduled agents (MSDN): Constraints for all Scheduled Task Types
Here is also a series of blog posts that could help you: Windows Phone: Background Agents Pitfalls
Have you actually measured and logged the memory that's being used? What you're saying isn't quite correct:
When the background agent starts, it has already consumed 5-6 MB just loading what it needs from the .NET Framework.
If you mean that the compressed files are 25 kB each, you should know that images in memory are not compressed (at least not that much); for example, a 173x173 tile bitmap at 4 bytes per pixel takes roughly 117 kB once decoded.
There are two things you can try:
Use this property to check the peak memory usage: DeviceStatus.ApplicationPeakMemoryUsage. Write it to a file (maybe every 5 images or so) and check whether it stays okay. Paste the results, please.
Note: when testing memory usage, it's best to build the app in "Release" and run it on a device without debugging; that's most accurate. There are some minor variations, so run the agent several times to be sure it stays within the limits. You can force-start it from the app using ScheduledActionService.LaunchForTest.
Also, I'd suggest you subscribe to the Application.Current.UnhandledException event and mark all exceptions as handled (and log them, so that you can fix them). That's for extra safety.
P.S. When the background agent stops executing, is it "blocked" in the list of background tasks on the device?

Reliably monitor a serial port (Nortel CS1000)

I have a custom python script that monitors the call logs from a Nortel phone system. This phone system is under extremely high volume throughout the day and it's starting to appear that some records may be getting lost.
Some of you may dislike this, but I'm not interested in sharing the source code or the current method in any way; I would rather approach this as a new project.
I'm looking for insight into the easiest and safest way to reliably monitor heavy data output through a serial port on Linux. I'm not limiting this to any particular set of tools or languages; I want to find out what works best for this one critical job. I'm comfortable enough parsing the data and inserting it into MySQL that we can just assume the data gets dropped to a text file.
Thank you
Well, the way that I would approach this is to have 2 threads (or processes) working.
Thread 1: The read thread
This thread does nothing but read data from the raw serial port and put it into a local buffer/queue (in memory, for speed). It should do nothing else. Given the clock speed of a serial connection, this should be pretty easy to keep up with.
Thread 2: The processing thread
This thread just sleeps until there is data in the local buffer to process, then reads and processes it. That's it.
The reason for splitting the work in two is that if one thread is busy (say, the processing thread blocked on MySQL), it won't affect the other. After all, while the serial port is buffered by the OS, that buffer's size is limited. (A minimal sketch of this design follows at the end of this answer.)
But then again, any local program is likely to be far faster than the serial port can send data. Serial transfer is actually quite slow relative to processor clock speeds (115.2 kbit/s is about the limit on standard hardware). So unless you're CPU-bound (such as on an Arduino), I can't see normal conditions affecting it too much. Your choice of language really shouldn't be of much concern (assuming modern hardware). Stick to what you know.
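Since the question is already Python-based, here is a minimal sketch of the two-thread design using pyserial; the device path, baud rate, and newline-terminated records are assumptions to adapt:

    import queue
    import threading
    import serial  # pip install pyserial

    buf = queue.Queue()  # in-memory buffer between the two threads

    def reader(port):
        # Thread 1: drain the serial port as fast as possible, nothing else.
        while True:
            data = port.read(port.in_waiting or 1)  # block for at least 1 byte
            if data:
                buf.put(data)

    def handle_record(record):
        print(record.decode(errors='replace'))  # parsing/MySQL INSERT goes here

    def processor():
        # Thread 2: sleep until data arrives, then parse complete records.
        pending = b''
        while True:
            pending += buf.get()        # blocks until the reader enqueues data
            while b'\n' in pending:     # assume newline-terminated call records
                record, pending = pending.split(b'\n', 1)
                handle_record(record)

    port = serial.Serial('/dev/ttyS0', baudrate=9600, timeout=1)  # adapt to your setup
    threading.Thread(target=reader, args=(port,), daemon=True).start()
    processor()  # run the processing loop in the main thread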