Chrome versus Edge JavaScript differences

From time to time I get JavaScript files (created with Adobe Animate) that crash in either Chrome or Edge. In some cases these files crash only in Chrome, in other cases they crash only in Edge. It's always RangeError: "Maximum call stack size exceeded".
This behaviour is reliably reproducible and does not occur at random.
I checked the V8 versions via "chrome://version/" and both browsers report the same version (V8 9.7.106.18).
How can this be?

(V8 developer here.)
Without knowing more about what those apps are doing, it's hard to be sure. There are a few factors at play:
The maximum size of the stack in bytes. Operating systems set an upper bound on this, beyond which they'd kill the process. To avoid that, V8 sets its own limit a little under what it estimates the OS limit would be. I wouldn't expect any differences in this regard when the V8 version is the same; however I don't know whether Edge overrides the default value with a different limit.
The size of each stack frame. This, too, should be the same as long as the V8 version is the same. It could be affected by optimization decisions (optimized code for a given function can use more or less stack space than unoptimized code for the same function), but I'd be surprised if Edge mucked with the optimization strategy.
The functions that get called, and the depth of any recursive calls that happen. In the simplest case, the generated JS could detect which browser it's running in and behave differently. It's also conceivable that the size of the window plays a role, e.g. if code iterates over every pixel of a dynamically-sized canvas; or things that have been stored in the profile (LocalStorage etc). If you have any browser extensions installed that change what the page is doing, that could also affect things. It's impossible to rule out anything without knowing more about what the app(s) is/are doing.
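If you want to compare the two browsers empirically, one rough diagnostic is to count how deep a simple recursion can go before the RangeError is thrown. This is only a measurement sketch; the absolute number depends on the frame size of the particular function, so it is only meaningful when the same snippet is run in both browsers.
function measureMaxDepth() {
  // Count recursive calls until the engine throws
  // "RangeError: Maximum call stack size exceeded".
  let depth = 0;
  function recurse() {
    depth++;
    recurse();
  }
  try {
    recurse();
  } catch (e) {
    // Expected once the stack limit is hit.
  }
  return depth;
}
console.log('Approximate maximum call depth:', measureMaxDepth());
If Chrome and Edge report very different numbers for the same snippet, that points at a different stack limit or frame size; if the numbers are similar, the crash more likely comes from the page taking a deeper code path in one browser than in the other.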


How does Chromium implement failIfMajorPerformanceCaveat?

I'm looking for the general algorithm/checks that Chromium does when you specify failIfMajorPerformanceCaveat to be true when creating a WebGL context on a canvas.
I searched the Chromium source code, but quickly got lost in the sea of results and interfaces. I can get it to fail to create a WebGL context if I specify this flag as true and I have hardware acceleration disabled, so I know hardware acceleration enabled/disabled is part of the calculation - but is there more to it than that?
I only care about Chrome/Chromium on any reasonably modern version.
The only two references I could dig up are in /gpu/command_buffer/service/gles2_cmd_decoder.cc and in /gpu/command_buffer/service/gles2_cmd_decoder_passthrough.cc, and both basically amount to
if (fail_if_major_perf_caveat && is_swiftshader_for_webgl) {
  // fail
}
where SwiftShader is the software rendering engine used for 3D graphics.
This flag was added back in 2013, with the commit message saying
Currently only fails if using swiftshader.
and from a quick look at other issues related to this flag I couldn't find anything that adds more functionality to it, so I doubt there is anything else hidden.
So this will fail only if the software renderer is active, which should happen only when hardware acceleration is disabled (or unavailable).
Note that at the time of implementation they discussed the possibility of making it smarter, so that it would fail only if SwiftShader would actually be slower than the native GL implementation, which isn't always the case. But it seems they never got back to it.
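For completeness, this is roughly what the flag looks like from the page's side; if the context comes back as null with the flag set, the browser has decided it would have to fall back to the software rasterizer (a minimal sketch, not an exhaustive capability probe):
// Request a WebGL context, but refuse one backed by the software renderer.
const canvas = document.createElement('canvas');
const gl = canvas.getContext('webgl', { failIfMajorPerformanceCaveat: true });

if (gl) {
  console.log('Hardware-accelerated WebGL is available.');
} else {
  // With the flag set, context creation fails rather than silently
  // handing back a SwiftShader-backed context.
  console.log('Only a software fallback (or no WebGL at all) is available.');
}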

crackme crashes when hitting breakpoint

So I'm trying to improve my skills in reverse engineering; I'm still somewhat of a newbie.
Anyway, I found a crackme that's packed and has several anti-debugging checks (on Windows).
If I attach my debugger to the process (after it has unpacked) and put a breakpoint in interesting places, the exe seems to crash when that breakpoint is hit. I'm almost certain that the program doesn't actually check for breakpoints, because even when I overwrote the return instruction with NOPs so that execution would run into the INT 3 instructions that are already (conveniently) there, it still crashed. Maybe it does check, but even so, that doesn't seem to be the real problem.
It's worth noting that the program doesn't crash at every breakpoint, just at the interesting places that I actually need to debug.
I would appreciate some guidance on how to go about dealing with this.
Thanks!

Implementation of synchronization primitives over HTML5 local storage

Consider a scenario where a browser has two or more tabs pointing to the same origin. Different event loops of the different tabs can lead to race conditions while accessing local storage and the different tabs can potentially overwrite each other's changes in local storage.
I'm writing a web application that would face such race conditions, and so I wanted to know about the different synchronization primitives that could be employed in such a scenario.
My reading of the relevant W3C spec, and the comment from Ian Hickson at the end of this blog post on the topic, suggests that what's supposed to happen is that a browser-global mutex controls access to each domain's localStorage. Each separate "thread" (see below for what I'm fairly confident that means) of JavaScript execution must attempt to acquire the storage mutex (if it doesn't have it already) whenever it examines local storage. Once it gets the mutex, it doesn't give it up until it's completely done.
Now, what's a thread, and what does it mean for a thread to be done? The only thing that makes sense (and the only thing that's really consistent with Hixie's claim that the mutex makes things "completely safe") is that a thread is JavaScript code in some browser context that's been initiated by some event. (Note that one possible event is that a <script> block has just been loaded.) The nature of JavaScript in the browser in general is that code in a <script> block, or code in a handler for any sort of event, runs until it stops; that is, runs to the end of the <script> body, or else runs until the event handler returns.
So, given that, what the storage mutex is supposed to do is to force all shared-domain scripts to block upon attempting to claim the mutex when one of their number already has it. They'll block until the owning thread is done — until the <script> tag code is exhausted, or until the event handler returns. That behavior would achieve this guarantee from the spec:
Thus, the length attribute of a Storage object, and the value of the various properties of that object, cannot change while a script is executing, other than in a way that is predictable by the script itself.
However, it appears that WebKit-based browsers (Chrome and Safari, probably the Android browser too, and now maybe Opera?) don't bother with the mutex implementation, which leaves you in the situation that drove you to ask the question. If you're concerned about such race conditions (a perfectly reasonable attitude), then you can either use the locking mechanism suggested in the blog post (by someone who does, or did, work for Stack Overflow :) or implement a version-counting scheme to detect dirty writes. (Edit: now that I think about it, an RDBMS-style version mechanism would be problematic, because there'd still be a race condition when checking the version!)
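As a rough illustration of the version-counting idea, and of why the check-then-write step is itself still racy without a real mutex, a sketch along these lines could be used (the key names here are made up for the example):
// Hypothetical keys for this sketch: 'myapp.data' holds the payload,
// 'myapp.version' holds a monotonically increasing counter.
function readWithVersion() {
  return {
    version: Number(localStorage.getItem('myapp.version') || 0),
    data: localStorage.getItem('myapp.data'),
  };
}

function writeIfUnchanged(expectedVersion, newData) {
  const current = Number(localStorage.getItem('myapp.version') || 0);
  if (current !== expectedVersion) {
    return false; // another tab wrote in the meantime; a dirty write was avoided
  }
  // NOTE: without a storage mutex, another tab could still write between the
  // check above and the setItem calls below. That is exactly the residual
  // race condition mentioned in the edit above.
  localStorage.setItem('myapp.data', newData);
  localStorage.setItem('myapp.version', String(expectedVersion + 1));
  return true;
}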

HTML localStorage setItem and getItem performance near 5MB limit?

I was building out a little project that made use of HTML localStorage. While I was nowhere close to the 5MB limit for localStorage, I decided to do a stress test anyway.
Essentially, I loaded data objects into a single localStorage object until it was just slightly under that limit and made requests to set and get various items.
I then timed the execution of setItem and getItem informally, using the JavaScript Date object and event handlers (I bound get and set to buttons in HTML and just clicked =P).
The performance was horrendous, with requests taking between 600 ms and 5,000 ms, and memory usage coming close to 200 MB in the worst cases. This was in Google Chrome with a single extension (Google Speed Tracer), on Mac OS X.
In Safari, it's basically >4,000 ms all the time.
Firefox was a surprise, with pretty much nothing over 150 ms.
These were all done in a basically idle state: no YouTube (Flash) getting in the way, not many tabs (nothing but Gmail), and no applications open other than background processes and the browser. Once a memory-intensive task popped up, localStorage slowed down proportionately as well. FWIW, I'm running a late-2008 Mac: 2.0 GHz Duo Core with 2 GB DDR3 RAM.
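For what it's worth, the informal timing described above boils down to something like the following sketch (the sizes, key names, and iteration counts here are arbitrary placeholders, not the original test code):
// Fill localStorage with a large payload, then time individual operations.
const chunk = new Array(1024).join('x'); // roughly 1 KB of 'x' characters
try {
  for (let i = 0; i < 4000; i++) { // roughly 4 MB in total
    localStorage.setItem('stress-' + i, chunk);
  }
} catch (e) {
  // A QuotaExceededError is thrown once the origin's storage limit is reached.
}

let t0 = Date.now();
localStorage.setItem('probe', chunk);
console.log('setItem took', Date.now() - t0, 'ms');

t0 = Date.now();
localStorage.getItem('stress-2000');
console.log('getItem took', Date.now() - t0, 'ms');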
===
So the questions:
Has anyone done any benchmarking of localStorage get and set for various key and value sizes, and on different browsers?
I'm assuming the large variance in latency and memory usage between Firefox and the rest is a Gecko vs. WebKit issue. I know the answer could be found by diving into those code bases, but I'd definitely like to know if anyone can explain the relevant details of the localStorage implementations in these two engines that account for the massive difference in efficiency and latency across browsers.
Unfortunately, I doubt we'll be able to solve it, but the closest one can get is at least understanding the limitations of the browsers in their current state.
Thanks!
Browser and version become a major issue here. The thing is, while there are so-called "WebKit-based" browsers, each adds its own patches as well. Sometimes those make it into the main WebKit repository, sometimes they do not. As for versions, browsers are always moving targets, so this benchmark could look completely different with a beta or nightly build.
Then there is the overall use case. If your use case is not the norm, the issues will not be as apparent, and they are less likely to get noticed and addressed. Even if there are patches, browser vendors have a lot of issues to deal with, so there's a chance a fix is slated for a later build (again, nightly builds might produce different results).
Honestly, the best course of action would be to discuss these results on the appropriate browser mailing list or forum, if this hasn't been addressed already. People will be more likely to do testing and see whether the results match.

Is there any way to monitor the number of CAS stackwalks that are occurring?

I'm working with a time-sensitive desktop application that uses P/Invoke extensively, and I want to make sure that the code is not wasting a lot of time on CAS stackwalks.
I have used the SuppressUnmanagedCodeSecurity attribute where I think it is necessary, but I might have missed a few places. Does anyone know of a way to monitor the number of CAS stackwalks that are occurring and, better yet, pinpoint the source of the security demands?
You can use the Process Explorer tool (from Sysinternals) to monitor your process.
Bring up Process Explorer, select your process and right-click to show "Properties". Then, on the .NET tab, select the .NET CLR Security object to monitor. Process Explorer will show counters for:
Total Runtime Checks
Link Time Checks
% Time in RT Checks
Stack Walk Depth
These are the standard security performance counters described here: http://msdn.microsoft.com/en-us/library/adcbwb64.aspx
You could also use Perfmon or write your own code to monitor these counters.
As far as I can tell, the only one that is really useful is the first, Total Runtime Checks. You could keep an eye on that while you are debugging to see if it increases substantially. If so, you need to examine what is causing the security demands.
I don't know of any other tools that will tell you when a stackwalk is being triggered.