How does Chromium implement failIfMajorPerformanceCaveat?

I'm looking for the general algorithm/checks that Chromium does when you specify failIfMajorPerformanceCaveat to be true when creating a WebGL context on a canvas.
I searched the Chromium source code, but quickly got lost in the sea of results and interfaces. I can get it to fail to create a WebGL context if I specify this flag as true and I have hardware acceleration disabled, so I know hardware acceleration enabled/disabled is part of the calculation - but is there more to it than that?
I only care about Chrome/Chromium on any reasonably modern version.

The only two references I could dig up are in /gpu/command_buffer/service/gles2_cmd_decoder.cc and in /gpu/command_buffer/service/gles2_cmd_decoder_passthrough.cc, and both are just basically
if (fail_if_major_perf_caveat && is_swiftshader_for_webgl) {
// fail
}
where SwiftShader is the software rendering engine used for 3D graphics.
This flag was added back in 2013, with the commit message saying
Currently only fails if using swiftshader.
and from a quick look at other issues related to this flag I couldn't find anything that adds more functionality to it, so I doubt there is anything else hidden.
So this will fail only if the software renderer is active, which should happen only when hardware acceleration is disabled (or unavailable).
Note that at the time of implementation they discussed the possibility of making it smarter and failing only if SwiftShader would actually be slower than the native GL implementation, which isn't always the case. But once again it seems they never got back to it.
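For completeness, this is what exercising the flag looks like from the page's side; a minimal sketch, with the fallback strategy being just one reasonable choice:

// Ask for a hardware-accelerated context first; fall back to a plain request
// (which may give you SwiftShader) if the browser reports a major caveat.
const canvas = document.createElement("canvas");

let gl = canvas.getContext("webgl", { failIfMajorPerformanceCaveat: true });
if (!gl) {
  console.warn("No caveat-free WebGL context; falling back to software rendering.");
  gl = canvas.getContext("webgl"); // may be SwiftShader, or null if WebGL is unavailable
}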

Related

Chrome versus Edge Javascript differences

From time to time I get JavaScript files (created with Adobe Animate) that crash in either Chrome or Edge. In some cases these files crash only in Chrome, in some cases they crash only in Edge. It's always RangeError: "Maximum call stack size exceeded".
This behaviour is easily reproducible and does not occur at random.
I checked the V8 versions via chrome://version/ and both browsers report the same version (V8 9.7.106.18).
How can this be?
(V8 developer here.)
Without knowing more about what those apps are doing, it's hard to be sure. There are a few factors at play:
The maximum size of the stack in bytes. Operating systems set an upper bound on this, beyond which they'd kill the process. To avoid that, V8 sets its own limit a little under what it estimates the OS limit would be. I wouldn't expect any differences in this regard when the V8 version is the same; however I don't know whether Edge overrides the default value with a different limit.
The size of each stack frame. This, too, should be the same as long as the V8 version is the same. It could be affected by optimization decisions (optimized code for a given function can use more or less stack space than unoptimized code for the same function), but I'd be surprised if Edge mucked with the optimization strategy.
The functions that get called, and the depth of any recursive calls that happen. In the simplest case, the generated JS could detect which browser it's running in and behave differently. It's also conceivable that the size of the window plays a role, e.g. if code iterates over every pixel of a dynamically-sized canvas; or things that have been stored in the profile (LocalStorage etc). If you have any browser extensions installed that change what the page is doing, that could also affect things. It's impossible to rule out anything without knowing more about what the app(s) is/are doing.
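If it helps narrow things down, a crude way to compare the two browsers' effective stack budget is to measure how deep a trivial recursion gets before it throws. This is only a rough probe, since the number depends on frame size and optimization state:

// Count how many trivial frames fit on the stack before the RangeError is thrown.
// Run the same snippet in Chrome and Edge and compare the numbers.
function probeStackDepth(): number {
  let depth = 0;
  function recurse(): void {
    depth++;
    recurse();
  }
  try {
    recurse();
  } catch (e) {
    // RangeError: Maximum call stack size exceeded
  }
  return depth;
}

console.log("Approximate maximum call depth:", probeStackDepth());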

HTML localStorage setItem and getItem performance near 5MB limit?

I was building out a little project that made use of HTML localStorage. While I was nowhere close to the 5MB limit for localStorage, I decided to do a stress test anyway.
Essentially, I loaded up data objects into a single localStorage object until it was just slightly under that limit and then made requests to set and get various items.
I then timed the execution of setItem and getItem informally using the JavaScript Date object and event handlers (bound get and set to buttons in HTML and just clicked =P).
The performance was horrendous, with requests taking between 600 ms and 5,000 ms, and memory usage coming close to 200 MB in the worst cases. This was in Google Chrome with a single extension (Google Speed Tracer), on Mac OS X.
In Safari, it's basically >4,000ms all the time.
Firefox was a surprise, having pretty much nothing over 150ms.
These were all done on an essentially idle system: no YouTube (Flash) getting in the way, few tabs (nothing but Gmail), and no applications open other than background processes and the browser. Once a memory-intensive task popped up, localStorage slowed down proportionately as well. FWIW, I'm running a late 2008 Mac: a 2.0 GHz dual-core machine with 2 GB DDR3 RAM.
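(For anyone reproducing this, a minimal reconstruction of that kind of informal timing, using performance.now() instead of Date and an assumed payload size, looks roughly like the following; setItem can also throw a quota error near the limit, which is omitted here for brevity.)

// Informal localStorage timing sketch; the key name and payload size are assumptions.
const KEY = "stress-test";
const payload = "x".repeat(2 * 1024 * 1024); // ~2M chars; quota accounting varies by browser

const t0 = performance.now();
localStorage.setItem(KEY, payload);
const t1 = performance.now();
const readBack = localStorage.getItem(KEY);
const t2 = performance.now();

console.log(`setItem: ${(t1 - t0).toFixed(1)} ms, getItem: ${(t2 - t1).toFixed(1)} ms, length: ${readBack ? readBack.length : 0}`);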
===
So the questions:
Has anyone done any benchmarking of localStorage getItem and setItem for various key and value sizes and on different browsers?
I'm assuming the large variance in latency and memory usage between Firefox and the rest is a Gecko vs. WebKit issue. I know the answer could be found by diving into those code bases, but can anyone explain the relevant details of the localStorage implementations in these two engines that account for the massive difference in efficiency and latency across browsers?
Unfortunately, I doubt we'll get as far as solving it, but the closest we can get is at least understanding the limitations of the browsers in their current state.
Thanks!
Browser and version become a major issue here. The thing is, while there are so-called "WebKit-based" browsers, they add their own patches as well. Sometimes these make it into the main WebKit repository, sometimes they do not. With regards to versions, browsers are always moving targets, so this benchmark could be completely different if you use a beta or nightly build.
Then there is the overall use case. If your use case is not the norm, the issues will not be as apparent, and they are less likely to get noticed and addressed. Even if there are patches, browser vendors have a lot of issues to address, so there's a chance a fix is slated for a later build (again, nightly builds might produce different results).
Honestly, the best course of action would be to discuss these results on the appropriate browser mailing list / forum if they haven't been addressed already. People will be more likely to do testing and see if the results match.

Developing using pre-release dev tools

We're developing a web site. One of the development tools we're using has an alpha release available of its next version, which includes a number of features we really want to use (i.e. they'd save us from having to implement thousands of lines to do pretty much exactly the same thing anyway).
I've done some initial evaluations on it and I like what I see. The question is: should we go beyond just evaluating it and actually use it for our development, relying on it?
As alpha software, it obviously isn't ready for release yet... but then nor is our own code. It is open source, and we have the skills needed to debug it, so we could in theory actually contribute bug fixes back.
But on the other hand, we don't know what the release schedule for it is (they haven't published one yet), and while I feel okay developing with it, I wouldn't be so sure about using it in production so if it isn't ready before we are then it may delay our own launch.
What do you think? Is it worth taking the risk? Do you have any experiences (good or bad) of similar situations?
[EDIT]
I've deliberately not specified the language we're using or the dev-tool in question in order to keep the scope of the question broad, as I feel it's a question that can apply to pretty much any dev environment.
[EDIT2]
Thank you to Marjan for the very helpful reply. I was hoping for more responses though, so I'm putting a bounty on this.
I've had experience contributing to an open source project once, like you say you hope to. They ignored the patch for a year (they have customers to attend to, of course, although they sell support rather than the software itself). After a year, they rejected the patch with no alternative solution to the problem and without a sound rationale. It was just out of their scope at that time, I guess.
In your situation I would try to fix one or two of their not-so-high-priority, already-reported bugs and see how responsive they are, and then decide, because your ability to hit your deadlines will be tied to theirs. And if you have to maintain your own copy of their artifacts, that's guaranteed pain.
In short: not only evaluate the product, evaluate the producers.
Regards.
My personal take on this: don't. If they don't come through for you in your time scale, you're stuck and will still have to put in the thousands of lines yourself and probably under a heavy time restriction.
Having said that, there is one way I see you could try and have your cake and eat it too.
If you see a way to abstract it out, that is to insulate your own code from the library's, for example using adapter or facade patterns, then go ahead and use the alpha for development. But determine beforehand what the latest date is according to your release schedule that you should start developing your own thousands of lines version behind the adapter/facade. If the alpha hasn't turned into an RC by then: grin and bear it and develop your own.
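To make the insulation idea concrete, here is a minimal sketch of such a facade; the interface and class names are made up for illustration:

// Your code depends only on this interface, never on the alpha library directly,
// so the implementation can be swapped without touching callers.
interface ReportRenderer {
  render(data: ReportData): string;
}

interface ReportData {
  title: string;
  rows: string[][];
}

// Adapter over the hypothetical alpha library.
class AlphaLibRenderer implements ReportRenderer {
  render(data: ReportData): string {
    // return alphaLib.renderReport(data); // delegate to the alpha API here
    return `[alpha] ${data.title}: ${data.rows.length} rows`;
  }
}

// The fallback you write yourself if the alpha never reaches an RC.
class HomeGrownRenderer implements ReportRenderer {
  render(data: ReportData): string {
    return `${data.title}\n${data.rows.map((r) => r.join("\t")).join("\n")}`;
  }
}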
It depends.
For opensource environments it depends more on the quality of the release than the label (alpha/beta/stable) it has. I've worked with alpha code that is rock solid compared to alleged production code from another producer.
If you've got the source then you can fix any bugs yourself, whereas with closed source (usually commercially supported) you could never release production code built with a beta product, because it's unsupported by the vendor who has the code, and so you can't fix it.
So in your position I'd be assessing the quality of the alpha version and then deciding if that could go into production.
Of course all of the above doesn't apply to anything even remotely safety critical.
It is just a question of managing risks. In open source, an alpha release can mean a lot of different things. You need to be prepared to:
handle API changes;
provide bug fixes and workarounds;
test stability, performance and scalability yourself;
track changes much more closely, and decide whether to adopt them yet;
track the progress they are making and their responsiveness to patches/issues.
You do use continuous integration, don't you?

Is there any way to monitor the number of CAS stackwalks that are occurring?

I'm working with a time sensitive desktop application that uses p/invoke extensively, and I want to make sure that the code is not wasting a lot of time on CAS stackwalks.
I have used the SuppressUnmanagedCodeSecurity attribute where I think it is necessary, but I might have missed a few places. Does anyone know if there is a way to monitor the number of CAS stackwalks that are occurring, and better yet pinpoint the source of the security demands?
You can use the Process Explorer tool (from Sysinternals) to monitor your process.
Bring up Process Explorer, select your process and right click to show "Properties". Then, on the .NET tab, select the .NET CLR Security object to monitor. Process Explorer will show counters for
Total Runtime Checks
Link Time Checks
% Time in RT Checks
Stack Walk Depth
These are standard security performance counters, described here:
http://msdn.microsoft.com/en-us/library/adcbwb64.aspx
You could also use Perfmon or write your own code to monitor these counters.
As far as I can tell, the only one that is really useful is the first (Total Runtime Checks). You could keep an eye on it while you are debugging to see if it is increasing substantially. If so, you need to examine what is causing the security demands.
I don't know of any other tools that will tell you when a stackwalk is being triggered.

How can I support the support department better?

With the best will in the world, whatever software you (and I) write will have some kind of defect in it.
What can I do, as a developer, to make things easier for the support department (first line, through to third line, and development) to diagnose, workaround and fix problems that the user encounters.
Notes
I'm expecting answers which are predominantly technical in nature, but I expect other answers to exist.
"Don't release bugs in your software" is a good answer, but I know that already.
Log as much detail about the environment in which you're executing as possible (probably on startup); a sketch follows this list.
Give exceptions meaningful names and messages. They may only appear in a stack trace, but that's still incredibly helpful.
Allocate some time to writing tools for the support team. They will almost certainly have needs beyond either your users or the developers.
Sit with the support team for half a day to see what kind of thing they're having to do. Watch any repetitive tasks - they may not even consciously notice the repetition any more.
Meet up with the support team regularly - make sure they never resent you.
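A sketch of the startup environment logging mentioned above (browser flavour; the fields are just illustrative, log whatever your support team actually needs):

// Capture basic environment details once at startup.
function logEnvironment(): void {
  const env = {
    appVersion: "1.4.2", // hypothetical build identifier
    userAgent: navigator.userAgent,
    language: navigator.language,
    screen: `${screen.width}x${screen.height}`,
    timezoneOffsetMinutes: new Date().getTimezoneOffset(),
    startedAt: new Date().toISOString(),
  };
  console.info("startup environment", env);
}

logEnvironment();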
If you have at least a part of your application running on your server, make sure you monitor logs for errors.
When we first implemented a daily script that greps for ERROR/Exception/FATAL and emails the results, I was surprised how many issues (mostly tiny) we hadn't noticed before.
This helps because you notice some problems yourself before they are reported to the support team.
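A minimal Node-flavoured equivalent of such a script (the log path and patterns are assumptions; wire the output to email or a chat webhook from cron):

// Scan a log file for ERROR/Exception/FATAL lines and print a capped report.
import { readFileSync } from "fs";

const LOG_PATH = "/var/log/myapp/app.log"; // hypothetical path
const PATTERN = /ERROR|Exception|FATAL/;

const hits = readFileSync(LOG_PATH, "utf8")
  .split("\n")
  .filter((line) => PATTERN.test(line));

if (hits.length > 0) {
  console.log(`${hits.length} suspicious log lines found:`);
  console.log(hits.slice(0, 100).join("\n")); // cap the report size
}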
Technical features:
In the error dialogue for a desktop app, include a clickable button that opens up an email and attaches the stack trace and log, including system properties.
On an error screen in a webapp, report a timestamp (including nanoseconds), an error code, the PID, etc., so server logs can be searched.
Allow log levels to be dynamically changed at runtime; having to restart your server to do this is a pain (see the sketch after this list).
Log as much detail about the environment in which you're executing as possible (probably on startup).
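A sketch of the dynamic-log-level idea; the level store and setter are made up, the point is simply that no restart is needed:

// Keep the current log level in mutable state and expose a way to change it at
// runtime (admin endpoint, signal handler, config watcher, whatever fits).
const LEVELS = ["debug", "info", "warn", "error"] as const;
type Level = (typeof LEVELS)[number];

let currentLevel: Level = "info";

export function setLogLevel(level: Level): void {
  currentLevel = level; // takes effect immediately, no restart
}

export function log(level: Level, message: string): void {
  if (LEVELS.indexOf(level) >= LEVELS.indexOf(currentLevel)) {
    console.log(`[${level}] ${message}`);
  }
}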
Non-technical:
Provide a known issues section in your documentation. If this is a web page, then it could correspond to a triaged bug list from your bug tracker.
Depending on your audience, expose some kind of interface to your issue tracking.
Again, depending on audience, provide some forum for the users to help each other.
Usability solves problems before they are a problem. Sensible, non-scary error messages often allow a user to find the solution to their own problem.
Process:
watch your logs. For a server side product, regular reviews of logs will be a good early warning sign for impending trouble. Make sure support knows when you think there is trouble ahead.
allow time to write tools for the support department. These may start off as debugging tools for devs, become a window onto the internal state of the app for support, and even become power tools for future releases.
allow some time for devs to spend with the support team: listening to customers on a support call, going out on site, etc. Make sure that the devs are not allowed to promise anything. Debrief the dev afterwards - there may be feature ideas there.
where appropriate, provide user training. An impedance mismatch between the software and the user's mental model of it can cause the user to perceive problems with the software.
Make sure your application can be deployed with automatic updates. One of the headaches of a support group is upgrading customers to the latest and greatest so that they can take advantage of bug fixes, new features, etc. If the upgrade process is seamless, stress can be relieved from the support group.
Similar to a combination of jamesh's answers, we do this for web apps:
Supply a "report a bug" link so that users can report bugs even when they don't generate error screens.
That link opens up a small dialog which in turn submits via Ajax to a processor on the server.
The processor associates the submission to the script being reported on and its PID, so that we can find the right log files (we organize ours by script/pid), and then sends e-mail to our bug tracking system.
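Client-side, that kind of submission can be as small as the following sketch; the endpoint and field names are made up:

// Post the user's description plus some page context to a hypothetical
// /report-bug endpoint; the server side then ties it to the right logs.
async function reportBug(description: string): Promise<void> {
  await fetch("/report-bug", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      description,
      page: location.href,
      userAgent: navigator.userAgent,
      timestamp: new Date().toISOString(),
    }),
  });
}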
Provide a known issues document.
Give training on the application so they know how it should work.
Provide simple, concise log lines that they will understand, or create error codes with a corresponding document that describes each error.
Some thoughts:
Do your best to validate user input immediately.
Check for errors or exceptions as early and as often as possible. It's easier to trace and fix a problem just after it occurs, before it generates "ricochet" effects.
Whenever possible, describe how to correct the problem in your error message. The user isn't interested in what went wrong, only how to continue working:
BAD: Floating-point exception in vogon.c, line 42
BETTER: Please enter a dollar amount greater than 0.
If you can't suggest a correction for the problem, tell the user what to do (or not to do) before calling tech support, such as: "Click Help->About to find the version/license number," or "Please leave this error message on the screen."
Talk to your support staff. Ask about common problems and pet peeves. Have them answer this question!
If you have a web site with a support section, provide a hyperlink or URL in the error message.
Indicate whether the error is due to a temporary or permanent condition, so the user will know whether to try again.
Put your cell phone number in every error message, and identify yourself as the developer.
Ok, the last item probably isn't practical, but wouldn't it encourage better coding practices?
Provide a mechanism for capturing what the user was doing when the problem happened, a logging or tracing capability that can help provide you and your colleagues with data (what exception was thrown, stack traces, program state, what the user had been doing, etc.) so that you can recreate the issue.
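In a browser app, such a capture mechanism can start as simply as a global error hook with a small breadcrumb buffer (everything here is illustrative; swap console.error for a POST to your own reporting endpoint):

// Record recent user actions as breadcrumbs and report them with any uncaught error.
const breadcrumbs: string[] = [];

document.addEventListener("click", (e) => {
  const target = e.target as HTMLElement;
  breadcrumbs.push(`click ${target.tagName}#${target.id}`);
  if (breadcrumbs.length > 50) breadcrumbs.shift(); // keep only the last 50 actions
});

window.addEventListener("error", (event) => {
  console.error("uncaught error", {
    message: event.message,
    stack: event.error?.stack,
    breadcrumbs,
  });
});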
If you don't already incorporate developer automated testing in your product development, consider doing so.
Have a mindset for improving things. Whenever you fix something, ask:
How can I avoid a similar problem in the future?
Then try to find a way of solving that problem.