I have a Yii application with many concurrent console jobs writing to one database. Because of the high concurrency I sometimes get MySQL deadlock errors, and at times there are a great many of them. The console.log file grows too large, which translates into higher costs.
I want to prevent specific CDbException instances from being logged, or at least suppress those log entries entirely (I am handling the exceptions myself and can generate more compact log lines from there).
YII_DEBUG is already commented out.
Can anyone please help me figure out how to do this?
Thanks a lot!!
Regards.
I decided to modify the log statement in yii/framework/db/CDbCommand.php that was logging the failed SQL. I converted it into a trace statement (note that Yii::trace() takes only a message and a category, so the log-level argument goes away):
Yii::trace(Yii::t('yii','CDbCommand::{method}() failed: {error}. The SQL statement executed was: {sql}.', array('{method}'=>$method, '{error}'=>$message, '{sql}'=>$this->getText().$par)), 'system.db.CDbCommand');
I am catching the exception anyway and logging a more compact version of the message, so this change is fine for me.
This was the easiest way I could find. We don't upgrade Yii very often, so if and when we go to the next version I'll probably repeat the change.
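For the compact logging mentioned above, here is a minimal sketch of the catching side in Yii 1.x. MySQL reports deadlocks as error code 1213 (ER_LOCK_DEADLOCK); $command, $jobId and the 'app.jobs' category are hypothetical names standing in for whatever the job code actually uses:

try {
    // $command and $jobId come from the surrounding job code (hypothetical).
    $command->execute();
} catch (CDbException $e) {
    // CDbException mirrors PDO's errorInfo array in recent 1.1 releases;
    // index 1 holds the MySQL error code, 1213 = ER_LOCK_DEADLOCK.
    if (isset($e->errorInfo[1]) && (int)$e->errorInfo[1] === 1213) {
        // One short line instead of the framework's full SQL dump.
        Yii::log("Deadlock on job #$jobId, will retry.",
            CLogger::LEVEL_WARNING, 'app.jobs');
    } else {
        throw $e;
    }
}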
I was executing many queries against a MySQL database within the same transaction using Promise.all(), so all of the queries run in parallel, and if anything goes wrong I roll back the transaction. But a friend said that running queries in parallel is bad practice, because if one query fails and the transaction is rolled back, other queries using the same transaction will still be running in MySQL, and when they can no longer find the transaction they will raise errors in MySQL itself.
He suggested executing the queries in series, so that if something goes wrong the transaction rolls back and the next query never executes.
I tried to find some proof of this issue, but I couldn't find any (or I missed it, if it exists).
Hopefully someone can provide a clear answer or a reference. Thanks in advance.
As described in the documentation, Promise.all() waits for all of the promises to resolve, and if even one of them is rejected it rejects too. So the questions are:
1. Do all of the methods you pass to Promise.all() return promises, or do some of them use callback functions?
2. Does it matter which of those methods runs first? Promise.all() makes no guarantee about the order in which they resolve.
3. Does it matter how many of the methods reject? Promise.all() rejects as soon as the first rejection occurs.
Moreover, if you are using this approach with MySQL and the like, your ORM may sometimes handle such a failure for you, but by rejecting.
So I personally agree with your friend, as this method is hard to control, but maybe you can still find a use for it :)
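To make the serial approach concrete, here is a minimal sketch of the pattern using PHP's PDO (the DSN, credentials and table names are made up; in Node.js the equivalent is awaiting each query in turn inside a try/catch):

<?php
// Serial queries inside one transaction: each statement is sent only
// after the previous one succeeded, so nothing is still in flight on
// the connection when we roll back.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->beginTransaction();
try {
    $pdo->exec("INSERT INTO orders (customer_id) VALUES (1)");
    $pdo->exec("UPDATE stock SET qty = qty - 1 WHERE item_id = 42");
    $pdo->commit();
} catch (PDOException $e) {
    // The first failure stops the sequence; later queries are never
    // sent, so the rollback races with nothing.
    $pdo->rollBack();
    throw $e;
}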
PS: Hopefully other contributors will help me with other points I missed.
I'm developing a website in PHP and CodeIgniter with three colleagues; we're using a MySQL database.
I know that an INSERT can throw an exception due to a constraint violation, and connecting to the server can throw an exception too if the server is busy.
Now, what other exceptions might occur? I tried looking on the web and I'm surprised I didn't find what I want. My web app is a link-sharing website with tags, votes, flags, comments, and search (by title and tags, no advanced search yet).
PS
Obviously we're not going to handle hardware errors (like bad sectors), so exceptions are what we want here.
Other common errors are:
The various PHP-generated catchable fatal errors; see http://php.net/manual/en/errorfunc.constants.php
PHP's out-of-memory error, which you cannot catch.
PHP's maximum-execution-time error, which you also cannot catch.
All sorts of MySQL errors.
Many web application software developers create a last-chance error handler. It logs the error message and any available stack trace to a log file and presents a "sorry, that didn't work" page to the user.
As you might guess, it's best not to use MySQL to log errors, because if it's MySQL that's failing, the logging won't work either.
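As a rough illustration, here is a minimal last-chance handler sketch in plain PHP; the log path is an assumption, and a framework logger (CodeIgniter's log_message(), for instance) could be used in its place:

<?php
// A minimal last-chance handler sketch. The log path is made up;
// point it somewhere your web server user can write.
function last_chance_log($text)
{
    // Log to a file rather than MySQL: if MySQL is what failed,
    // a database-backed logger would fail with it.
    error_log(date('c') . ' ' . $text . "\n", 3, '/var/log/myapp/errors.log');
}

set_exception_handler(function ($e) {
    last_chance_log($e->getMessage() . "\n" . $e->getTraceAsString());
    header('HTTP/1.1 500 Internal Server Error');
    echo 'Sorry, that did not work.';
});

// A shutdown function can still *log* the fatal errors you cannot
// catch (out of memory, maximum execution time) via error_get_last().
register_shutdown_function(function () {
    $err = error_get_last();
    if ($err !== null && ($err['type'] & (E_ERROR | E_PARSE))) {
        last_chance_log("{$err['message']} in {$err['file']}:{$err['line']}");
    }
});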
Before converting a project to use MySQL, I have questions about the best way to avoid losing a simple record update to either a server crash or a program shutdown caused by exceeding a CGI run-time limit.
My project is public and therefore must be applicable to any of the many hosts where high-level server-side management isn't an option.
I wish to open a list file (or table) and acquire a list of records to parse one at a time.
While parsing each acquired record, the program/script performs a task with it and updates a counter (a simple table) upon successful completion of each task (or, alternatively, updates the record itself with a success flag).
Do MySQL tables get automatically flushed to the hard drive when rows are updated or added, thus avoiding the loss of all table changes made up to the point of a crash, if/when the program/script is violently terminated as described?
To have any chance of doing the same with simple text files, the counter file has to be opened and closed for each update (since the contents of open files on most operating systems get clobbered in a crash).
An outline of any MySQL commands/processes to follow, if they are needed to avoid the losses described, would also be very much appreciated.
Also, do any suggestions apply to both InnoDB and MyISAM?
A simple answer comes to mind: SQL TRANSACTIONS. They are like a batch of SQL commands that (1) have to be "committed" and (2) take effect only if every command in the batch executes successfully.
I think this would help:
http://www.sqlteam.com/article/introduction-to-transactions
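To connect this to the counter scenario in the question, here is a sketch using PHP's PDO with made-up table names and DSN. Note that this requires InnoDB: MyISAM does not support transactions at all, while with InnoDB's default settings a committed change is flushed to the redo log and survives a crash:

<?php
// Counter update and success flag committed as one atomic unit.
// Table names, the record id and the DSN are hypothetical.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->beginTransaction();
try {
    $pdo->exec("UPDATE tasks SET done = 1 WHERE id = 123");
    $pdo->exec("UPDATE progress SET counter = counter + 1");
    // Once commit() returns, InnoDB has made the change durable.
    $pdo->commit();
} catch (PDOException $e) {
    // If the script dies before commit (crash, CGI time limit), MySQL
    // discards the uncommitted changes when the connection drops.
    $pdo->rollBack();
    throw $e;
}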
If my answer wasn't what you needed, please let me know whether I misunderstood your intentions.
I have a Perl script running on a FreeBSD/Apache system, which makes some simple queries to a MySQL database via DBI. The server is fairly active (150k pages a day) and every once in a while (as often as once a minute) something causes a process to hang. I've suspected a file lock might be holding up a read, or maybe it's a SQL call, but I have not been able to figure out how to get information on the hanging process.
Per Practical mod_perl, it sounds like the way to identify the operation giving me the headache is either a system trace, a Perl trace, or the interactive debugger. I gather the system trace on FreeBSD is ktrace, but when I attach to one of the hanging processes in top, the only output after the process is killed is:
50904 perl5.8.9 PSIG SIGTERM SIG_DFL
That isn't very helpful to me. Can anyone suggest a more meaningful approach on this? I am not terribly advanced in Unix admin, so your patience if I sound stupid is greatly appreciated.... :o)
If I understood correctly, your Perl process is hanging while querying MySQL, which is itself still operational. The MySQL server has a built-in troubleshooting feature for this, the log_slow_queries option. Putting the following lines in your my.cnf enables the trick:
[mysqld]
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time = 10
After that, restart or reload the MySQL daemon. Let the server run for a while to collect the stats and analyse what's going on:
mysqldumpslow -s at /var/log/mysql/mysql-slow.log | less
On one server of mine, the top record (-s at orders by average query time, BTW) is:
Count: 286 Time=101.26s (28960s) Lock=14.74s (4214s) Rows=0.0 (0), iwatcher[iwatcher]#localhost
INSERT INTO `wp_posts` (`post_author`,`post_date`,`post_date_gmt`,`post_content`,`post_content_filtered`,`post_title`,`post_excerpt`,`post_status`,`post_type`,`comment_status`,`ping_status`,`post_password`,`post_name`,`to_ping`,`pinged`,`post_modified`,`post_modified_gmt`,`post_parent`,`menu_order`,`guid`) VALUES ('S','S','S','S','S','S','S','S','S','S','S','S','S','S','S','S','S','S','S','S')
FWIW, it is a WordPress site with over 30K posts.
Ktracing only gives you system calls, signals, I/O and namei processing, and it generates a lot of data very quickly, so it might not be ideal for fishing out trouble spots.
If you can see the standard output of your script, put some strategically placed print statements in your code around the suspected trouble spots. Then running the program should show you where the hang occurs:
print "Before query X"
$dbh->do($statement)
print "After query X".
If you cannot see the standard output, either use e.g. the Sys::Syslog Perl module, or call FreeBSD's logger(1) program to write the debugging info to a log file. It is probably easiest to encapsulate that in a debug() function and use it instead of print statements.
Edit: If you don't want a lot of logging on disk, write the logging info to a socket (Sys::Syslog supports that with setlogsock()), and write another script to read from that socket and dump the debug text to a terminal, prefixed with the time the data was received. Once the program hangs, you can see what it was doing.
Under high traffic, my MySQL 5.0.45 / Apache 2 / CentOS 5 server is producing "Error establishing mySQL database connection". I need to find the root cause.
I would very much appreciate any pointers to information about the procedure I should follow to find the cause (memory limits, thread limits, CPU load, slow queries, a large dataset, wrong keys, ...). I assume it involves looking at the relevant log files, etc.
Thank you.
That particular error message sounds like it's being generated by your application, not by a system library. MySQL has functionality to report the specific errors that occur, so your best bet would be to utilize that in some way.
For instance, if you were using PHP, there is a function called mysql_error() that returns specifics about the last error encountered (too many connections, etc.). You would put some error handling near your connection call, and log the mysql_error() result if the connection failed.
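A minimal sketch of that, using the old mysql_* API that fits a PHP 5 / MySQL 5.0 stack (host and credentials are placeholders; mysqli or PDO expose the same information):

<?php
// Placeholders for host/credentials; replace with your own.
$link = mysql_connect('localhost', 'db_user', 'db_password');
if ($link === false) {
    // mysql_error() spells out the real cause, e.g.
    // "Too many connections" or "Can't connect to MySQL server".
    error_log('MySQL connect failed: ' . mysql_error());
    exit('Error establishing mySQL database connection');
}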
You didn't mention which language you are using, but the MySQL client libraries provide the same functionality for whichever it is. I'd suggest modifying your application code to take advantage of it.
I'm willing to bet this is because you're hitting the maximum user connections limit allowed by the MySQL server, but in general, do print the MySQL errors, if not to the screen then at least to a log or an email.