When using PM2 to debug Node.js, is there a CLI option to avoid a command prompt opening for every thread using the cluster module? - pm2

I'm using Node.js's cluster module to fork 16 workers to fully leverage my i9 9900K with a Node.js program. I need to debug memory leaks, and PM2 seems to offer the best analysis capabilities for a Node.js program that I've been able to find, but when I run the program with PM2 it opens an additional command prompt for every forked worker, and 16 windows popping up in my face every time I run it isn't optimal. Is there a CLI option for PM2 to avoid this?
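For reference, a minimal sketch of the kind of setup I mean (the worker body here is just a placeholder for the real workload):

    // cluster-app.js - minimal sketch of the setup described above.
    // The worker body is a placeholder; only the fork pattern matters here.
    const cluster = require('cluster');
    const os = require('os');

    if (cluster.isMaster) {
      // Fork one worker process per logical core (16 on an i9 9900K).
      for (let i = 0; i < os.cpus().length; i++) {
        cluster.fork();
      }
      cluster.on('exit', (worker) => {
        console.log(`worker ${worker.process.pid} died`);
      });
    } else {
      // Placeholder workload: replace with the real work.
      setInterval(() => { /* do work */ }, 1000);
    }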
A quick Google search didn't turn anything up, but sometimes you gurus know a thing or two Google doesn't.

Quick answer: no. You cannot set the behaviour you want through PM2 options (at least I couldn't find one either).
Though, if you ask me, hunting for memory leaks by introducing more entropy into your debug setup is not a great approach. Make a minimal reproducible example with a single Node.js script that exhibits the leak, then use the available tooling to gather heap snapshots; the relative changes in memory consumption between snapshots will lead you to the culprit.
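For example, here is a minimal single-script sketch, assuming Node.js 11.13+ (where v8.writeHeapSnapshot is available); the "leaky" function and file names are illustrative. Take snapshots at intervals while exercising the suspect code path, then diff them in the Memory tab of Chrome DevTools:

    // leak-check.js - gather heap snapshots from a single leaky script.
    // Assumes Node.js 11.13+ for v8.writeHeapSnapshot; names are illustrative.
    const v8 = require('v8');

    const retained = [];                          // stand-in for the suspected leak
    function doLeakyWork() {
      retained.push(Buffer.alloc(1024 * 1024));   // placeholder 1 MB "leak" per tick
    }

    let tick = 0;
    const timer = setInterval(() => {
      doLeakyWork();
      if (++tick % 10 === 0) {
        // Writes e.g. snapshot-10.heapsnapshot; open it in Chrome DevTools to compare.
        const file = v8.writeHeapSnapshot('snapshot-' + tick + '.heapsnapshot');
        console.log('wrote', file, 'heapUsed:', process.memoryUsage().heapUsed);
      }
      if (tick >= 30) clearInterval(timer);
    }, 500);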
There are a lot of articles out there, like:
Finding And Fixing Node.js Memory Leaks: A Practical Guide
Memory Leaks Demystified
Well, you got the idea. Hope it helps you find a leak.

Related

Reverse Engineering / Log or intercept program instructions

I'm trying to find a way of replicating the action/instruction that a physical button push on a control panel sends to the software of one of our CNC machines.
Ultimately I would like to integrate this instruction into an executable file I could make using AutoIT, but that is further down the line!
After some googling, which turned up all kinds of weird and wonderful results, I'm at a loss as to how to begin this task. I believe I need to either use debugging software to find the instruction as it takes place, or possibly Process Monitor?
The machine runs off a Windows XP PC.
Unfortunately obtaining this information from the manufacturer is not an option.
If anyone could help point me in the right direction, that would be appreciated.
Thanks
Edit: I have since come across Windows Hooks, Detours and Interception, but still haven't made much progress!
Your topic is too broad... you might as well be asking "How do I reverse engineer?" The first thing I would do is load the program in a debugger, put a breakpoint in the callback function, and find out what the button is doing. What you will most likely find is that it's pushing some information onto the stack and making a call into an external .DLL, such as an API library or a device driver (you could probably find out which DLL using Process Monitor, too). Then just load that .DLL into your new program and make the same call.
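If the call does turn out to be a plain DLL export, replaying it from your own program can be sketched like this (shown here with Node's third-party ffi-napi module rather than AutoIT; 'cncpanel' and 'PressButton' are hypothetical placeholders for whatever the debugger actually reveals, and the declared signature must match the real export exactly):

    // replay-call.js - hypothetical sketch of replaying a DLL call found in a debugger.
    // Uses the third-party ffi-napi module (npm install ffi-napi).
    // 'cncpanel' (cncpanel.dll) and 'PressButton' are placeholder names.
    const ffi = require('ffi-napi');

    const cnc = ffi.Library('cncpanel', {
      // name: [return type, [argument types]] - must match the real signature
      PressButton: ['int', ['int']],
    });

    // e.g. the button id that the original code pushed onto the stack
    const result = cnc.PressButton(3);
    console.log('PressButton returned', result);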

Google Chrome Crash Report Analysis

I am new to the field of crash analysis. I recently, by accident, happened to crash Google Chrome. I do not know why the crash really happened, though I'd like to understand it in depth.
When the crash happened, a crash report was generated. I saved that report in a text file on my system, as I did not know what to do with it at the outset.
Now, I have heard people in the infosec world talk about things like analyzing and reversing a crash dump, fuzzing, etc., and trying to reproduce the crash.
I am interested in understanding how these things are done and, in the first place, what they actually are. I need help with resources that can help me understand how to analyze and reproduce a crash. I happened to come across "Chrome: Found a crash, is it a security vulnerability?" and "Best way to triage crashes found via fuzzing, on Linux?", but these resources seemed a bit advanced rather than basic. Googling also gave me some resources on how to analyze a BSOD in Windows, but I could not find anything relevant to Google Chrome crash analysis.
Please help provide some good resources where I can understand these concepts.
My Platform is Mac OSX 10.9.2 and my Google Chrome is Version 35.0.1916.153.
I'm afraid this is a broad topic. For a head start, read about the use-after-free and out-of-bounds read/write classes of bugs. These are the most common in browsers.
By reproducing, they mean you perform the same sequence of steps that made the browser crash and see if it crashes again: say, opening a malformed HTML/PDF/font file (or any other browser input; there are many, many more file types). If you can reproduce the crash, you can attach Chrome to a debugger and check the registers at the time of the crash.
To know whether the crash is of any use, see this particular question on SE. For OS X, there is an amazing tool called CrashWrangler, by Apple itself. If CrashWrangler reports the crash as exploitable, it is a definite security bug; otherwise you would need to do manual analysis to reach a conclusion. For this you need some knowledge of assembly language and software exploitation. OpenSecurityTraining has some amazing content on this; I highly recommend it. Start with x86 Assembly in the beginner section and finally MOV on to reverse engineering. It is important to know how the stack is laid out in memory, and what the registers hold, to understand a crash dump. I wish you all the best in the journey. Hope this helps.

Java SE binary crash

I have a Java Swing application that subscribes to a lot of data and displays it in various ways. Under heavy load I have found that the JRE simply stops working with the message "Java(TM) Platform SE binary has stopped working". This obviously shuts down my application and I need to restart it. I have tried to google for ways to troubleshoot this issue, as I do not get a stack trace in my code or anything else I can work with, but I have found very little useful information beyond upgrading/re-installing the JRE and running virus scans. I have done both and rebooted the server, but the problem persists. I have tried to monitor the process with Java VisualVM (see dump below), but I am no expert with this tool and may not know what to look for. The observation I have made is that the "crashes" appear to coincide with garbage collections.
The issue is quite easy to reproduce and occurs after about 10 minutes of running the application. I do not run the application with any specific jvm parameters. The Java version is 1.6.0_31 (was _25 before upgrade) and I run on Windows 7 64-bit.
In the pic below from VisualVM, the Java binary has just stopped working, which appears to coincide with the GC run.
Any help or ideas so that I can troubleshoot or remedy the problem is greatly appreciated. Thanks.
Three things to check:
If you've implemented the finalize() method anywhere, make sure it doesn't directly or indirectly lock any objects; this can cause a catastrophic deadlock correlated with GC.
If you've got native code, any number of weird things can happen if it does not use JNI global references correctly, including deadlocks and weird memory corruption, which would again correlate with GC activity.
Finally, GC might just be "stirring the pot" and exposing vanilla deadlocks that otherwise exist in the application; check your synchronization protocols.
Garbage collection pauses the VM's application threads while it happens, which might be exposing a race condition somewhere.

Is it possible to run "native" code on top of a managed OS?

I was reading up on Midori and kinda started wondering if this is possible.
On a managed OS, "managed code" is going to be native, and "native code" is going to be...alien? Is it possible, at least theoretically, to run the native code of today on a managed OS?
First, you should start by defining "managed" and "native". On a "managed" OS like Midori, the kernel is still ngen-ed (precompiled to machine code), instead of being jit-compiled from IL. So, I would rule that out as a distinction between "managed" and "native".
There are two other distinctions between "managed" and "native" code that come to my mind: code verifiability and resource management.
Most "native" code is unverifiable, so a "managed" OS loader might refuse to even load "native" images. Of course, it is possible to produce verifiable "native" code, but that imposes a lot of limitations and in essence is no different from "managed" code.
Resources in a "managed" OS would be managed by the OS, not by the app, whereas "native" code usually allocates and cleans up its own resources. What would happen to a resource that was allocated by an OS API and handed to the "native" code, or vice versa? There would have to be quite clear rules on who does the resource management and cleanup, and when. For security reasons, I can't imagine the OS giving the "native" code direct control over any resource besides the process's virtual memory. Therefore, the only reason to go "native" would be to implement your own memory management.
Today's "native" code won't play by any of the rules above, so a "managed" OS should refuse to execute it directly. However, the "managed" OS might provide a virtualization layer like Hyper-V and host the "native" code in a virtual machine.
By "managed" I assume you mean that the code runs in an environment which performs checks on it for type safety, safe memory access, etc., and by "native" the opposite. It is this execution environment that determines whether it can allow native code to run without being verified. Look at it this way: the OS and the application on top of it both need an execution environment to run in. Their only relationship is that the application calls the underlying OS for lower-level tasks, but in calling the OS it is actually being executed by its execution environment (which may or may not support code verification, depending on, say, options passed when compiling the code). When control is transferred to the OS, the execution environment is again responsible for executing the OS code (this may be another environment altogether), in which case it verifies the OS code (because it's a managed OS).
So, theoretically, native code may or may not run on a managed OS; it all depends on the behaviour of the execution environment in which it runs, and whether the OS itself is managed does not decide this. If the application and the OS share the same (managed) execution environment, then the native code will not run on that OS.
Technically, a native code emulator can be written in managed code, but it's not running on bare hardware.
I doubt any managed OS that relies on software verification to isolate access to shared resources (such as Singularity) allows running unmanaged code directly, since such code could bypass all the protections provided by the software (unlike normal OSes, some managed OSes don't rely on protection mechanisms provided by hardware).
From the MS Research paper Singularity: Rethinking the Software Stack (p9):
A protection domain could, in principle, host a single process containing unverifiable code written in an unsafe language such as C++. Although very useful for running legacy code, we have not yet explored this possibility. Currently, all code within a protection domain is also contained within a SIP, which continues to provide an isolation and failure containment boundary.
So it seems that, though unexplored at the moment, it is a distinct possibility. Unmanaged code could run in a hardware-protected domain; it would take a performance hit from having to deal with virtual memory, the TLB, etc., but the system as a whole could maintain its invariants safely while running unmanaged code.

Does anyone use Virtualization to create a quicker disaster recovery of a development environment?

I'm getting pretty tired of my development box dying and then ending up having to reinstall the laundry list of tools that I use in development.
This time I think I'm going to set the development environment up on a VirtualBox VM and save it to an external HDD, so that I can bring the development environment back up quickly after I fix the real computer.
It seems to me like a good way to make a "hardware agnostic backup" and be able to get back up to speed quickly after a disaster.
Has anybody tried this? How well did it work? Did it save you time?
I used to virtualize all my development environments using VirtualBox.
Basically, I have a Debian VirtualBox image file stamped on a DVD. When I have a new project, I copy it to one of my external HDDs and customize it for the project.
Once the project is delivered, I copy the image from my external HDD to a blank DVD and file it.
I've done this with good success; we even had this in our QA environment, and we'd also make use of undo disks, so that if we wanted to test, for example, Microsoft patches, we could roll the box back to its previous state.
The only case where we had issues was with SQL Server, particularly if you do a lot of disk activity. We had two VMs replicating gigs of data between each other, hosted on the same physical box, and the disks just couldn't keep up; however, for all the other tiers it worked like a charm.
One cool idea I just saw a presentation on is using VirtualBox with your host running OpenSolaris with ZFS. That makes it easy to take a snapshot of your image(s) and roll back to the snapshot when things go wrong, or when you want to restore to a known state for QA purposes.
I keep all development on virtual machines. In a multi-developer shop this allows for rapid deployment of a new development environment if someone fries their VM (via service pack or whatever) and allows a new developer to join the project almost immediately.
I'm reading the question much differently than the rest of you. I read it as the OP asking about keeping an image of a fresh install as a VM so that, when a server needs to be redeployed, you can restore from a backup of the VM.
In this case, the VM is nothing more than a different way of maintaining an image of an OS install, and if it works, it's not a half bad idea, IMO.
In the companies I work with, I encourage the use of network installable operating systems. With the right up-front work you can configure a boot server on your office network which will install your base operating system, all the drivers you need for your hardware, and all the software you'll use. Not only will this bail you out in a disaster scenario where you lose a machine, but it makes deploying hardware for new employees trivial.
This is easier with Linux than it is with Windows or Mac, but the latter two can work in this manner too.
I use the same network install methods for deploying servers in a live environment too.
The Virtualisation approach isn't a bad answer to the same problem, but to me it doesn't seem quite as clean.
That's not the way to go.
When you are developing you want to have many tools, some of which require a lot of computing power. Keep in mind that VirtualBox (IIRC; I couldn't find it on the VBox website) only emulates a Pentium IV.
At the moment only one VM product simulates a dual-core CPU, and that's very new. This is important because there are race conditions that can only be seen on multi-CPU machines, so you want to test your code on multiple CPUs/cores.
I think a simpler and better approach is to make a disk image of your system and configuration partitions, restore it once a month to keep a clean system, and restore it whenever your system gets mussed.
Now a quick word about Windows, since the other systems where I have done this posed no problem: the partitions that you image should not be changed in between. That's not a problem for other OSes, but some brilliant person decided to put profiles on Windows smack dab in the system files. I simply make it a point not to put anything in my profile (or on my desktop, which is in my profile) that I'm not willing to lose.