Chrome Developer tools missing timeline views - google-chrome

So under Timeline I would normally see options such as Events, Frames, and Memory in the upper left, but I no longer do. I'm not sure what I did, but what I have there now is two checkboxes: capture stacks and capture memory. How can I get back the original setup?

They updated the interface; I'm pretty sure all the functionality is still available.
(In timeline view)
* Event mode is the default.
* Frames mode can be toggled with the icon next to the trash can.
* Memory can be turned on by checking the capture memory checkbox.

It is in fact disappointing to see such a feature missing from the Chrome dev tools. The FPS meter can be used to get similar information, but it's not as good as seeing the info in real time in the Timeline.
In addition to this, for memory details the following script from paulirish is useful:
https://github.com/paulirish/memory-stats.js
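As far as I can tell, that script just polls Chrome's non-standard performance.memory object; a minimal hand-rolled version of the same idea (Chrome-only, so feature-detect it) looks roughly like this:

    // Chrome-only, non-standard API: poll the JS heap numbers once a second.
    const mem = (performance as any).memory;

    if (mem) {
      setInterval(() => {
        const usedMb = (mem.usedJSHeapSize / 1048576).toFixed(1);
        const totalMb = (mem.totalJSHeapSize / 1048576).toFixed(1);
        console.log(`JS heap: ${usedMb} MB used of ${totalMb} MB allocated`);
      }, 1000);
    } else {
      console.log("performance.memory is not available in this browser");
    }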

It moved into the Performance Analysis Reference: https://developers.google.com/web/tools/chrome-devtools/evaluate-performance/reference

Related

Clojure agents: rate limiting?

Okay, so I have this small procedural SVG editor in Clojure.
It has a code pane where the user writes code that generates an SVG document, and a preview pane. The preview pane is updated whenever the code changes.
Right now, on a text change event, the code gets recompiled on the UI thread (Ewwwww!) and the preview pane updated. The compilation step should instead happen asynchronously, and agents seem a good answer to that problem: ask an agent to recompile the code on an update, and pass the result to the image pane.
I have not yet used agents, and I do not know whether they work with an implicit queue, but I suspect so. In my case, I have zero interest in computing "intermediate" steps (think about fast keystrokes: if a keystroke happens before a recompilation has started, I simply want to discard that recompilation) -- i.e. I want a send to overwrite any pending agent computation.
How do I make that happen? Any hints? Or even a code sample? Is my rambling even making sense?
Thanks!
You describe a problem that has more to do with execution flow control than with shared state management. Hence, you might want to leave STM aside for a moment and look into futures: they're still executed in a thread pool, like agents, but unlike agents they can be stopped by calling future-cancel, and you can inspect their status with future-cancelled?.
There are no strong guarantees that the thread the future is executing on can actually be stopped. Still, your code will be able to try to cancel the future and move on to schedule the next recompilation.
Agents do indeed work on a queue: each function gets the state of the agent and produces the next state of the agent. Agents track an identity over time. This sounds like a little more than you need; atoms are a slightly better fit for your task and are used in a very similar manner.
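For what it's worth, the flow-control idea itself is language-agnostic. Below is a rough sketch of it in TypeScript rather than Clojure (compileSvg and updatePreview are hypothetical stand-ins for the editor's own functions); in Clojure terms it corresponds to cancelling the in-flight future, or keeping the newest request id in an atom and ignoring stale results:

    // Sketch of "keep only the latest recompilation": every keystroke claims a
    // new request id, and stale results are discarded when they come back.
    let latest = 0;

    // Hypothetical stand-ins for the editor's own compile and preview steps.
    async function compileSvg(source: string): Promise<string> {
      return `<svg><!-- compiled from ${source.length} chars of code --></svg>`;
    }
    function updatePreview(svg: string): void {
      console.log("preview updated:", svg);
    }

    async function onCodeChanged(source: string): Promise<void> {
      const request = ++latest;         // newer edits bump the counter
      const svg = await compileSvg(source);
      if (request !== latest) return;   // a newer edit arrived meanwhile: drop this result
      updatePreview(svg);               // still the newest request, so show it
    }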

HTML localStorage setItem and getItem performance near 5MB limit?

I was building out a little project that made use of HTML localStorage. While I was nowhere close to the 5MB limit for localStorage, I decided to do a stress test anyway.
Essentially, I loaded up data objects into a single localStorage object until it was just slightly under that limit and made requests to set and get various items.
I then timed the execution of setItem and getItem informally, using the JavaScript Date object and event handlers (bound get and set to buttons in HTML and just clicked =P).
The performance was horrendous, with requests taking between 600 ms and 5,000 ms, and memory usage coming close to 200 MB in the worse cases. This was in Google Chrome with a single extension (Google Speed Tracer), on Mac OS X.
In Safari, it's basically >4,000ms all the time.
Firefox was a surprise, having pretty much nothing over 150ms.
These were all done in basically an idle state -- no YouTube (Flash) getting in the way, not many tabs (nothing but Gmail), and no applications open other than background processes + the browser. Once a memory-intensive task popped up, localStorage slowed down proportionately as well. FWIW, I'm running a late 2008 Mac -> 2.0 GHz Duo Core with 2 GB DDR3 RAM.
===
So the questions:
Has anyone done benchmarking of sorts against localStorage get and set for various key and value sizes, and on different browsers?
I'm assuming the large variance in latency and memory usage between Firefox and the rest is a Gecko vs. WebKit issue. I know the answer could be found by diving into those code bases, but I'd definitely like to know if anyone can explain relevant details about the implementation of localStorage in these two engines that would account for the massive difference in efficiency and latency across browsers.
Unfortunately, I doubt we'll be able to solve it, but the closest one can get is at least understanding the limitations of the browser in its current state.
Thanks!
Browser and version becomes a major issue here. The thing is, while there are so-called "WebKit-based" browsers, they each add their own patches as well. Sometimes those make it into the main WebKit repository, sometimes they do not. With regard to versions, browsers are always moving targets, so this benchmark could come out completely different if you use a beta or nightly build.
Then there is the overall use case. If your use case is not the norm, the issues will not be as apparent, and they're less likely to get noticed and addressed. Even if there are patches, browser vendors have a lot of issues to address, so there's a chance a fix is slated for a later build (again, nightly builds might produce different results).
Honestly, the best course of action would be to discuss these results on the appropriate browser mailing list / forum if they haven't been addressed already. People will be more likely to do testing and see if the results match.
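If you do take it to a mailing list, a small reproducible harness makes it much easier for others to compare numbers. Something along these lines (key names, value sizes, and counts are arbitrary; keep the total under the ~5 MB quota):

    // Informal localStorage benchmark sketch; timings via Date, as in the question.
    function makePayload(bytes: number): string {
      return "x".repeat(bytes); // JS strings are UTF-16, so the stored size may differ
    }

    function benchLocalStorage(valueBytes: number, count: number): void {
      const value = makePayload(valueBytes);

      const t0 = Date.now();
      for (let i = 0; i < count; i++) {
        localStorage.setItem(`bench-${i}`, value);
      }
      const t1 = Date.now();
      for (let i = 0; i < count; i++) {
        localStorage.getItem(`bench-${i}`);
      }
      const t2 = Date.now();

      console.log(`setItem total: ${t1 - t0} ms, getItem total: ${t2 - t1} ms`);

      for (let i = 0; i < count; i++) {
        localStorage.removeItem(`bench-${i}`); // clean up so repeated runs are comparable
      }
    }

    // Example: 100 values of ~40 KB each, roughly 4 MB in total.
    benchLocalStorage(40 * 1024, 100);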

trace the data flow when the executable is running

I am practicing my reversing skills using OllyDbg under Windows.
There is an interactive window asking for your input, let's say a "serial number". My question is that when the user operates on the window, it is hard to locate the related data flow within the debugger window. For example, if I press F9, I can view the instruction flow; but when typing into the window, I can't tell which instructions have been executed.
My goal is to find some jump instruction and change it, so that I can bypass the correct-input requirement. I think the instruction should be quite close to the instructions related to arg#, and related to a TEST instruction.
Looking for hints or tricks. Thanks.
One thing you could do is type something in the text field and then use an application such as Cheat Engine to find out where in memory these characters are stored. Then you can put a memory-on-access breakpoint on the address of the first character in OllyDbg. Then press the button that verifies the serial. When an instruction accesses this part of memory, the debugger will break, and you're inside the part of the code that verifies your string. From here you have to try to understand what the code is doing to find the instruction you want to alter.
Depending on how secure the application is, this will work; with a more secure application it most likely won't. When you're just starting reverse engineering, I suggest you find some easy applications made for cracking and work your way up to the more secure ones. A site where you can find many of these "crackmes" is crackmes.de. I can also suggest lene151's tutorials here -- some of the best tutorials I've seen on reverse engineering.

How do software events work internally?

I am a student of Computer Science and have learned many of the basic concepts of what is going on "under the hood" while a computer program is running. But recently I realized that I do not understand how software events work efficiently.
In hardware, this is easy: instead of the processor "busy waiting" to see if something happened, the component sends an interrupt request.
But how does this work in, for example, a mouse-over event? My guess is as follows: if the mouse sends a signal ("moved"), the operating system calculates its new position p, then checks what program is being drawn on the screen, tells that program position p, then the program itself checks what object is at p, checks if any event handlers are associated with said object and finally fires them.
That sounds terribly inefficient to me, since a tiny mouse movement equates to a lot of CPU context switches (which I learned are relatively expensive). And then there are dozens of background applications that may want to do stuff of their own as well.
Where is my intuition failing me? I realize that even "slow" 500MHz processors do 500 million operations per second, but still it seems too much work for such a simple event.
Thanks in advance!
Think of events like network packets, since they're usually handled by similar mechanisms. Now think, your mouse sends a couple of hundred packets a second, maximum, and they're around 6 bytes each. That's nothing compared to the bandwidth capabilities of modern machines.
In fact, you could make a responsive GUI where every mouse motion literally sent a network packet (86 bytes including headers) on hardware built around 20 years ago: X11, the fundamental GUI mechanism for Linux and most other Unixes, can do exactly that, and frequently was used that way in the late 80s and early 90s. When I first used a GUI, that's what it was, and while it wasn't great by current standards, given that it was running on 20 MHz machines, it really was usable.
My understanding is as follows:
Every application/window has an event loop whose queue is filled by the OS in response to interrupts.
The mouse move will come in there.
All windows have a separate queue/process, to my knowledge (in Windows since 3.1).
Every window has controls.
The window will bubble these events up to its controls.
Each control will determine whether the event is for it.
So it's not necessary to "compute" which item is drawn under the mouse cursor.
The window, and then the control, will determine if the event is for them (see the sketch below).
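A toy sketch of that queue-and-dispatch shape, with names invented purely for illustration (real windowing systems differ in detail but follow the same pattern):

    // One window's event queue (filled by the OS) and its child controls.
    interface MouseMove { x: number; y: number; }

    interface Control {
      name: string;
      rect: { x: number; y: number; w: number; h: number };
      onMouseOver(e: MouseMove): void;
    }

    const queue: MouseMove[] = [];
    const controls: Control[] = [];

    // The window's event loop: pull one message at a time and let each control
    // decide whether the event is "for it" (a simple rectangle hit test here).
    function pumpEvents(): void {
      while (queue.length > 0) {
        const e = queue.shift()!;
        for (const c of controls) {
          const { x, y, w, h } = c.rect;
          if (e.x >= x && e.x < x + w && e.y >= y && e.y < y + h) {
            c.onMouseOver(e);
            break; // the first (topmost) matching control handles it
          }
        }
      }
    }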
By what criteria do you determine that it's too much? It's as much work as it needs to be. Mouse events happen in the millisecond range. The work required to get it to the handler code is probably measured in microseconds. It's just not an issue.
You're pretty much right -- though mouse events occur at a fixed rate (e.g. a USB mouse on Linux gives you events 125 times a second by default, which really is not a lot), and the OS or application might further merge mouse events that are close in time or position before sending them off to be handled.
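The merging can be as simple as remembering only the most recent position until the application gets around to handling it -- a rough sketch:

    // Coalesce mouse moves: overwrite the pending position instead of queueing
    // every sample, and handle only the latest one per UI tick.
    let pendingMove: { x: number; y: number } | null = null;

    function onRawMouseMove(x: number, y: number): void {
      pendingMove = { x, y };
    }

    function handlePendingMove(): void {
      if (pendingMove) {
        console.log(`mouse at ${pendingMove.x}, ${pendingMove.y}`);
        pendingMove = null;
      }
    }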

Is there any way to monitor the number of CAS stackwalks that are occurring?

I'm working with a time-sensitive desktop application that uses P/Invoke extensively, and I want to make sure that the code is not wasting a lot of time on CAS stackwalks.
I have used the SuppressUnmanagedCodeSecurity attribute where I think it is necessary, but I might have missed a few places. Does anyone know if there is a way to monitor the number of CAS stackwalks that are occurring, and better yet pinpoint the source of the security demands?
You can use the Process Explorer tool (from Sysinternals) to monitor your process.
Bring up Process Explorer, select your process, and right-click to show "Properties". Then, on the .NET tab, select the .NET CLR Security object to monitor. Process Explorer will show counters for:
1. Total Runtime Checks
2. Link Time Checks
3. % Time in RT Checks
4. Stack Walk Depth
These are the standard security performance counters, described here:
http://msdn.microsoft.com/en-us/library/adcbwb64.aspx
You could also use Perfmon or write your own code to monitor these counters.
As far as I can tell, the only one that is really useful is item 1. You could keep an eye on that while you are debugging to see if it is increasing substantially. If so, you need to examine what is causing the security demands.
I don't know of any other tools that will tell you when a stackwalk is being triggered.