xdebug_break() and start_with_request in php.ini not working as expected - PhpStorm

Per the docs for start_with_request = no at https://xdebug.org/docs/all_settings#start_with_request :
You can still start a Function Trace with xdebug_start_trace(), Step Debugging with xdebug_break(), or Garbage Collection Statistics with xdebug_start_gcstats().
My php.ini:
zend_extension="/usr/lib/php/20190902/xdebug.so"
xdebug.mode=debug
xdebug.client_host=192.168.1.121
xdebug.client_port=9000
xdebug.start_with_request=no
xdebug.output_dir=/opt/cris/profiling/grind_files
xdebug.profiler_output_name=callgrind.out.%p
xdebug.profiler_append=0
xdebug.log = /var/log/xdebug/xdebug.log
xdebug.log_level = 0
I have an xdebug_break() statement that does not turn on the debugger in PhpStorm as expected when execution passes over it.
Per the docs, I'd expect Xdebug to kick into play when it hits the xdebug_break() call with php.ini set up like this, and I'd start step debugging in PhpStorm.
I want to stop Xdebug from sending debugging output on every request, which is why start_with_request is set to no. Only once the code hits xdebug_break() should it start emitting debugging output, so that PhpStorm goes into step-debugging mode. If there is a different way to achieve this, I'm all ears.
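A minimal sketch of the kind of script I'm testing with (the file contents are illustrative):

<?php
// With xdebug.start_with_request=no, Xdebug should stay quiet up to here.
echo "before breakpoint\n";

// Expectation per the docs: this call starts a step-debugging session,
// Xdebug connects to client_host:client_port, and PhpStorm breaks here.
xdebug_break();

echo "after breakpoint\n";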
What did I miss?

Related

How to get the job and telescope commands on Chrome's V8 x64.release version? (No symbol "_v8_internal_Print_Object" in current context)

I'm trying to get Chrome's V8 (d8) x64.release version to use the V8 support tools in GDB, specifically the job and telescope commands (predominantly the former).
My x64.debug version has this implemented and works, but even after building the x64.release version in a similar manner I still cannot get these commands to work in it. The output is always:
gef➤ job 0xd98082f7b51
No symbol "_v8_internal_Print_Object" in current context.
I have set args.gn (both before and after building via ninja -C) to include v8_enable_object_print = true; my args.gn:
is_debug = false
target_cpu = "x64"
use_goma = false
v8_enable_object_print = true
v8_enable_disassembler = true
I also have my ~/.gdbinit containing:
source ~/Desktop/tools/v8/tools/gdbinit
source ~/Desktop/tools/v8/tools/gdb-v8-support.py
See: https://chromium.googlesource.com/v8/v8/+/refs/heads/main/tools/gdbinit (for the support tool I'm trying to build V8 with).
How can I get my /v8/out.gn/x64.release/d8 to work with the job command?
Am I missing something here? If so, any help would be much appreciated.
EDIT: Alternatively, how can I disable all x64.debug V8 DCHECKs?
Thanks all, appreciate your time here.
How can I get my /v8/out.gn/x64.release/d8 to work with the job command?
I'm not sure. Try adding symbol_level = 1 (or even symbol_level = 2) to your args.gn. That definitely helps with stack traces, and might also be the thing that lets GDB find the _v8_internal_Print_Object function by name.
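For example, the args.gn from the question with that setting added (a sketch; adjust as needed):

is_debug = false
target_cpu = "x64"
use_goma = false
v8_enable_object_print = true
v8_enable_disassembler = true
symbol_level = 1  # or 2 for full debug symbols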
Alternatively, how can I disable all x64.debug V8 DCHECKs?
There is no flag to disable them, but you can edit the source to make them do nothing. See src/base/logging.h.
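For illustration only (the macro names and layout in src/base/logging.h vary between V8 versions), the idea is to make the debug-build check macros expand to nothing, which mirrors what the release build already does:

// src/base/logging.h, inside the #ifdef DEBUG branch -- a sketch, not a literal patch:
#define DCHECK(condition)               ((void) 0)
#define DCHECK_WITH_MSG(condition, msg) ((void) 0)
// ...and similarly for DCHECK_EQ, DCHECK_NE, DCHECK_GT, etc. if you need those too.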

How to trace while developing a MediaWiki extension?

How to add tracing (for bug hunting) code to my MediaWiki extension?
When I add echo "XXX"; or var_dump(...);, I don't see it in the output (even though the line where I put this tracing definitely runs, as I checked by replacing the tracing with exit(0); and watching it exit as expected).
I assume you mean debug logging ("trace" is usually used for recording what method calls happen, as in XDebug function traces). The MediaWiki debugging help page has some information on it, although it's not in great shape. Basically you set $wgDebugLogGroups['mydebuglog'] to point to a logfile, and then use wfDebugLog( 'mydebuglog', 'XXX' ). (PSR-3-style structured logging is possible but requires some setting up.)
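For example (a sketch; the log file path is just an illustration):

// In LocalSettings.php: send the 'mydebuglog' group to its own file
$wgDebugLogGroups['mydebuglog'] = '/var/log/mediawiki/mydebuglog.log';

// In your extension code, wherever you want a trace line:
wfDebugLog( 'mydebuglog', 'XXX: reached the suspicious code path' );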
Usually var_dump works too, but a lot of stuff happens outside of requests that produce a web response (jobs, or heavy processing that's deferred until after the response has been sent).
If you did mean tracing, the profiling help page has some information.

Fiware CEP server stops responding

While developing in Fi-Cloud's CEP I've been running into an issue repeatedly: as I try to develop a definition to perform a task, the CEP server and Authoring Tool stop responding, although ssh is still responsive.
This issue happens as I develop. I'm using the AuthoringTool to alter the definition bit by bit and then I re-upload it to the server through the authoring tool's export feature.
To restart the Proton instance with the new definition each time I alter it, I use Postman with this single operation:
PUT http://{ip}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
header: 'Content-Type: application/json'
body: {"action": "ChangeDefinitions", "definitions-url": "/ProtonOnWebServerAdmin/resources/definitions/Definition_Name"}
At the same time, I'm logged in with three ssh instances: one to monitor the files being created in /opt/tomcat10/sample/ and other things, and the other two to tail -f the log files the definition writes to as events are processed: one log for events received and another for events detected by the EPAgent.
I'm iterating through these procedures over and over as I develop, and eventually the CEP server and the Authoring Tool stop responding.
By tailing Tomcat's log file (# tail -f /opt/tomcat10/logs/catalina.out) I can see that, under these circumstances, if I attempt a:
GET http://{ip}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
I get no response back and Tomcat logs the following:
11452100 [http-bio-8080-exec-167] ERROR org.apache.wink.server.internal.RequestProcessor - An unhandled exception occurred which will be propagated to the container.
java.lang.OutOfMemoryError: PermGen space
Exception in thread "http-bio-8080-exec-167" java.lang.OutOfMemoryError: PermGen space
SSH is still responsive and I can look at Tomcat's log this way.
To get over this and continue, I exit ssh connections and restart CEP's instance in the Fi-Cloud.
Is the procedure I'm using to re-upload and re-run the definition inappropriate? Should I take a different approach to developing?
When you update a definition that the CEP is already working with, and you want the CEP engine to work with the updated definition, you need to:
Export the definition using the authoring tool export (as you did)
Stop the engine, using a REST PUT:
PUT http://host:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
{"action":"ChangeState","state":"stop"}
Start the engine, using a REST PUT:
PUT http://host:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
{"action":"ChangeState","state":"start"}
You don't need to activate the "ChangeDefinitions" action, since it is the same definition name that the engine is already working with.
Activating the "ChangeDefinitions" action only influences the next run of the CEP and has no influence on the current run.
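For reference, the two calls above can be issued with curl like this (a sketch; replace host with your server's IP):

curl -X PUT http://host:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer \
     -H 'Content-Type: application/json' \
     -d '{"action":"ChangeState","state":"stop"}'

curl -X PUT http://host:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer \
     -H 'Content-Type: application/json' \
     -d '{"action":"ChangeState","state":"start"}'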
This answers your question about how to update a CEP definition.
Hope it solves your issue.

My Node.js script is not exiting on its own after successful execution

I have written a script to update my db table after reading data from db tables and Solr. I am using the async.waterfall module. The problem is that the script does not exit after successful completion of all operations. I have also used a db connection pool, thinking that might be causing the script to wait infinitely.
I want to put this script in crontab, and if it does not exit properly it will create a lot of unnecessary instances.
I just went through this issue.
The problem with just using process.exit() is that the program I was working on was creating handles but never destroying them.
It was processing a directory and putting data into OrientDB.
So one of the things I have come to learn is that database connections need to be closed before getting rid of the reference, and that process.exit() does not solve all cases.
When my project processed 2,000 files, it would get down to about 500 left and the extra handles would have filled up the available working memory, which meant it could not continue and therefore never reached the process.exit() at the end.
On the other hand, if you close the items that are requesting the app to stay open, you can solve the problem at its source.
The two undocumented functions that I was able to use were:
process._getActiveHandles();
process._getActiveRequests();
I am not sure what other functions will help with debugging these types of issues, but these ones were amazing.
They return an array, and you can determine a lot about what is going on in your process by using these methods.
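A quick sketch of how you might use them to see what is still open (they are undocumented, so behaviour may change between Node versions):

// Drop this in right before the point where you expect the script to finish:
const handles = process._getActiveHandles();
const requests = process._getActiveRequests();
console.log('active handles:', handles.map(h => h.constructor.name));
console.log('active requests:', requests.length);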
You have to tell it when you're done, by calling
process.exit();
More specifically, you'll want to call this in the callback from async.waterfall() (the second argument to that function). At that point, all your asynchronous code has executed, and your script should be ready to exit.
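A minimal sketch of that pattern (the two step functions are hypothetical stand-ins for your db/Solr work):

const async = require('async');

// Hypothetical stand-ins for the real read/update steps:
function readRowsFromDb(callback) { setTimeout(() => callback(null, ['row1', 'row2']), 100); }
function updateTable(rows, callback) { setTimeout(() => callback(null, rows.length), 100); }

async.waterfall([
  readRowsFromDb,
  updateTable
], (err, updatedCount) => {
  // All waterfall steps have finished (or one failed): it is now safe to exit.
  if (err) console.error(err);
  // Close any db connection pools here first, then:
  process.exit(err ? 1 : 0);
});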
EDIT: As pointed out by @Aaron below, this likely has to do with something like a database connection being active and not allowing the Node process to end.
You can use the node module why-is-node-running:
Run npm install -D why-is-node-running
Add import * as log from 'why-is-node-running'; in your code
When you expect your program to exit, add a log statement:
afterAll(async () => {
  await app.close();
  log();
})
This will print a list of open handles with a stacktrace to find out where they originated:
There are 5 handle(s) keeping the process running
# Timeout
/home/maf/dev/node_modules/why-is-node-running/example.js:6 - setInterval(function () {}, 1000)
/home/maf/dev/node_modules/why-is-node-running/example.js:10 - createServer()
# TCPSERVERWRAP
/home/maf/dev/node_modules/why-is-node-running/example.js:7 - server.listen(0)
/home/maf/dev/node_modules/why-is-node-running/example.js:10 - createServer()
We can quit the execution by using:
connection.destroy();
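For example, with the mysql package and a single connection (a sketch, not your actual code):

const mysql = require('mysql');
const connection = mysql.createConnection({ host: 'localhost', user: 'me', database: 'mydb' });

// ... run your queries ...

// When the work is done, release the handle so the process can exit on its own:
connection.end();        // graceful: waits for queued queries to finish
// connection.destroy(); // immediate: closes the socket without waiting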
If you use Visual Studio Code, you can attach to an already running Node script directly from it.
First, run the Debug: Attach to Node Process command.
When you invoke the command, VS Code will prompt you to pick which Node.js process to attach to.
Your terminal should display this message:
Debugger listening on ws://127.0.0.1:9229/<...>
For help, see: https://nodejs.org/en/docs/inspector
Debugger attached.
Then, inside your debug console, you can use the code from The Lazy Coder’s answer:
process._getActiveHandles();
process._getActiveRequests();

Getting AIR stack traces on iPad for a release build [duplicate]

I'm trying to debug an issue on a client's machine. The problem is a runtime error with very little clue as to where it occurs, and it is intermittent. I know ADL allows me to run the application in debug mode, but telling the user to download ADL and manage its invocation is going to be very difficult. It would be a lot easier if I could just give the end user one installer/executable to install and run, and have them send me the trace of the issue. So what I'm looking for is easy steps for the client to run the AIR app in debug mode; downloading ADL and finding the install location of the app is going to be difficult to manage remotely with the end user.
Update:
You have to make sure you are working with AIR 3.5 and Flash 11.5 and also include the flag "-swf-version=18" in the additional compiler settings. You then have to catch the global error as mentioned in the answer and it will show you the location of the error. No line numbers of course, just routine names. Thanks a million to Lee for the awesome answer.
Not a direct answer, but if you publish for AIR 3.5 (or 3.6 beta), you can get some debug info:
add a listener for uncaught RTEs at the top level of your app:
this.loaderInfo.uncaughtErrorEvents.addEventListener(UncaughtErrorEvent.UNCAUGHT_ERROR, globalErrorHandler);
and grab debug info from error in listener:
function globalErrorHandler(event:UncaughtErrorEvent):void
{
    var message:String;
    //check for runtime error
    if (event.error is Error)
        message = (event.error as Error).getStackTrace();
    //handle other errors
    else if (event.error is ErrorEvent)
        message = (event.error as ErrorEvent).text;
    else
        message = event.error.toString();
    //do something with message (eg display it in textfield)
    myTextfield.text = message;
}
getStackTrace() will return a stack trace even for release AIR apps (as long as you use AIR 3.5 or above).
Without the SDK tools, I don't think it is possible to run an AIR app in debug mode. But here are a few alternatives to consider:
The client must have some idea what is going on to cause the error, right? Can you give them a special build with Alert boxes or logging or something to help isolate the error to a line of code?
Can you listen for the uncaughtException event? The event will give you the full stack trace (Error.getStackTrace()), which you could then log, possibly with other information. Then you just have to tell your client to "Go here" and "send me this file". Or even display the info in some Alert and have the user copy and paste it into an email to you. More info on uncaughtException here and here.
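If you go that route, one way to get the trace off the device is to append it to a file in application storage and have the client send you that file (a sketch using the standard AIR filesystem API; names are illustrative):

import flash.filesystem.File;
import flash.filesystem.FileMode;
import flash.filesystem.FileStream;

function logErrorToFile(message:String):void
{
    // Append each captured stack trace to error-log.txt in the app storage directory.
    var logFile:File = File.applicationStorageDirectory.resolvePath("error-log.txt");
    var stream:FileStream = new FileStream();
    stream.open(logFile, FileMode.APPEND);
    stream.writeUTFBytes(new Date().toString() + "\n" + message + "\n\n");
    stream.close();
}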
Check my post; maybe it helps you get a stack trace with line numbers in an AIR release build.
How can I get stacktrace for Adobe AIR global runtime errors in non-debug mode?
I use it in 2 big projects right now and it works very well.
Greetings