The MySQL debug trace (/tmp/mysqld.trace) does not include timestamps.
Is there a way to add them? (I need a detailed source-code profile.)
I can use lldb (not gdb).
gdb has a `set debug timestamp` option:
https://sourceware.org/gdb/onlinedocs/gdb/Debugging-Output.html
Does lldb have the same option?
It doesn't look like the MySQL debug trace uses the debugger at all. From the descriptions online, it just programmatically dumps log info at various points. Since it doesn't stop in the debugger, gdb's `set debug timestamp` option wouldn't help either: the debugger isn't doing anything that would trigger emitting a timestamp.
If you know what the trace printing function is, you could set a breakpoint there and get lldb to dump the current time using a Python breakpoint command. But I doubt you will want to have the debugger stop every time you want to dump a trace message, that would most likely slow you down too much.
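If you go the breakpoint route, a minimal sketch of such a Python breakpoint callback might look like this. The trace-function name `_db_doprnt_` (the DBUG package's print routine) is an assumption about your build; adjust it to whatever symbol your trace messages actually pass through:

```python
# Sketch of an lldb breakpoint callback that prints a timestamp each time
# MySQL's trace-print function is hit, then auto-continues so the process
# does not stay stopped. The symbol `_db_doprnt_` is an assumption.
import datetime

def print_timestamp(frame, bp_loc, internal_dict):
    """Print the current time; return False so lldb does not stop."""
    print(datetime.datetime.now().isoformat())
    return False

# Inside lldb you would register it roughly like this:
#   (lldb) breakpoint set --name _db_doprnt_
#   (lldb) breakpoint command add --python-function mymodule.print_timestamp 1
```

As the answer notes, stopping in the debugger on every trace line will be slow; this is only workable if the trace function is hit relatively rarely.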
I want to profile SQL execution.
I could add debug output to the source code and use lldb, but this is time-consuming.
If mysqld.trace could print timestamps, that would be the best way.
Related
I can't use the SAVEPOINT, ROLLBACK, and COMMIT commands with phpMyAdmin, which runs on WAMP.
Your screenshot actually shows a warning* from phpMyAdmin, but if you click on "Go" it will still submit the query, which will execute correctly if your database supports it. In this case, the parser used by phpMyAdmin doesn't know how to correctly show the SAVEPOINT syntax, so it's warning you that something is beyond its ability to check.
However, my understanding of this functionality is that it's only valid for the current session. Since phpMyAdmin starts a new session each time you run a new query, this probably won't accomplish what you expect.
* I'd consider a warning different from an error in that a warning lets you continue and is often advice that something may not be as expected; an error is something that is definitely wrong and it won't let you proceed.
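To illustrate why session scope matters, here is a minimal sketch using Python's sqlite3 module (which also supports SAVEPOINT) in place of MySQL; the savepoint only exists inside the one connection that created it, which is exactly what phpMyAdmin's query-per-session model breaks:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # autocommit mode; we manage transactions manually
conn.execute("CREATE TABLE t (x INTEGER)")

conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (1)")
conn.execute("SAVEPOINT sp1")      # the savepoint lives only in this session
conn.execute("INSERT INTO t VALUES (2)")
conn.execute("ROLLBACK TO sp1")    # undoes the second insert
conn.execute("COMMIT")

rows = [r[0] for r in conn.execute("SELECT x FROM t")]
print(rows)  # → [1]; only the first insert survives
```

Run the SAVEPOINT and the ROLLBACK TO in two separate connections and the rollback fails, because the second session never saw the savepoint.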
I have encountered some strange behavior in MySQL, using DBI in Perl.
At the end of a Perl program, I issue a MySQL UPDATE command against a table. The command is executed using $sth->execute(); and AutoCommit is turned on.
After the execute, the program issues $dbh->disconnect(); and exits.
The Perl program runs as part of a script. Immediately after the Perl program has stopped, another script executes. This script looks at the table that was updated, and this is where things become confusing to me.
Sometimes script 2 reads the old data in the table; sometimes it sees what was just updated. I cannot understand how the initial Perl program can do the execute() and yet the MySQL table apparently gets updated several seconds later.
Any insight would be helpful! Cheers in advance.
It turns out the problem was never with either MySQL or Perl.
The problem was that the two scripts were being run from a crontab job, and unless told otherwise, cron does not run jobs under the bash shell.
See
https://askubuntu.com/questions/117978/script-doesnt-run-via-crontab-but-works-fine-standalone
for more information.
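For reference, the usual fix from that answer is to set the shell at the top of the crontab; a sketch, where the paths and schedule are placeholders:

```shell
# Crontab excerpt: by default cron runs jobs under /bin/sh with a minimal
# environment; setting SHELL and PATH makes it behave like a bash session.
SHELL=/bin/bash
PATH=/usr/local/bin:/usr/bin:/bin
0 2 * * * /path/to/script1.pl && /path/to/script2.sh
```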
Recently, one of my requirements has been to modify multiple DBs in one go, and I have been using the SOURCE command to execute the file (a .sql file).
However, I wanted to know if there is an online way to do it, because then I could use nohup to let it keep running even if I log out or a network issue ends my session. By online mode, I mean not having to go to the MySQL command line (mysql>).
I wanted to know if this is possible at all. Please note that the SQL file is meant to modify multiple DBs in one go.
I don't know about an "online mode", but if you want your mysql run to continue even if you log out or a network issue comes along, I suggest you use GNU screen. If your session ends, anything run inside screen will continue in the background, and you can reattach the session with screen -r when you sign in again.
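There is also a non-interactive way to run the file: feed it to the mysql client on the command line and wrap that in nohup (or run it inside screen). A command sketch, where the user, password, and file names are placeholders:

```shell
# Run the .sql file without opening the mysql> prompt, keep it running
# after logout, and capture the output for later inspection.
nohup mysql -u someuser -p'secret' < changes.sql > changes.log 2>&1 &

# Alternative: start a named screen session, run the mysql command inside
# it, detach with Ctrl-a d, and reattach later with:
#   screen -r mysql-run
screen -S mysql-run
```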
Is there a way to make a Hudson job fail if a certain string occurs in the console output?
The reason I ask is because we have some jobs that deploy EAR files (via mvn commands) and even though the job runs successfully, I see a string like this:
<26-Nov-2010 14:05:32 o'clock CET> <Info> <J2EE Deployment SPI> <BEA-260121>
<Initiating undeploy operation for application, legacyservice [archive: null],
to cde-server-c01 .>
[Deployer:149163]The domain edit lock is owned by another session in non-exclusive
mode - this deployment operation requires exclusive access to the edit lock
and hence cannot proceed.
ExitException: status 1
[INFO] Ignore exit
[INFO] Weblogic un-deployment successful
I have tried fiddling with the maven command, but it does not really fail. So I wonder if there is another way to detect this flaw and fail the job.
I imagine failing the job if a string like this occurs:
requires exclusive access to the edit lock and hence cannot proceed.
I am interested either in a Hudson plugin that can do this, or in a native way of configuring my job to do it.
This is what you are looking for:
http://wiki.hudson-ci.org/display/HUDSON/Log+Parser+Plugin
You can edit the parsing rules file to include any text you want. This should let you use the text requires exclusive access to the edit lock and hence cannot proceed. as a regex in the parsing file. The instructions on the wiki page above are quite clear.
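A sketch of what a parsing-rules file might contain for this case (the rule format is `level /regex/`; treat the exact patterns as placeholders to adapt):

```
# Log Parser Plugin rules: any console line matching an `error` rule
# marks the build as failed; `ok` rules whitelist lines to ignore.
error /requires exclusive access to the edit lock/
warning /ExitException/
ok /.*/
```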
I am using Kohana 3. I want to log the MySQL queries being executed by an application. The reason is to determine which queries of type INSERT, UPDATE, and DELETE are executed during a process, and to store them in another MySQL table with a date-time for further reference.
Can anybody tell how can I achieve this?
An alternative is to enable profiling for the database module, which will log the queries made to a file.
This will log ALL queries, not just the last one ;)
It shouldn't be too hard to parse the file, or to extend the profiling/logging/caching classes to save it to a database.
Sorry, because of the Kohana tag I approached the problem from the wrong angle. You want the MYSQL server to log the commands directly, so you get ALL of the commands, not just the last one.
See the mysql server docs on logging:
http://dev.mysql.com/doc/refman/5.0/en/server-logs.html
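For example, MySQL 5.0 enables the general query log via the `log` server option; newer servers (5.1+) can also toggle it at runtime. A configuration sketch, with the log path as a placeholder:

```
# my.cnf excerpt (MySQL 5.0): log every statement the server receives.
[mysqld]
log = /var/log/mysql/mysql-general.log

# MySQL 5.1+ can enable it at runtime instead:
#   SET GLOBAL general_log = 'ON';
#   SET GLOBAL general_log_file = '/var/log/mysql/mysql-general.log';
```

Note that the general log records every statement from every client, so expect it to grow quickly on a busy server.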
I did this using the after() method of the controller. The after() method runs after each controller action executes; there I wrote logic to capture the last query executed and store it in my DB for further reference.