How to track down a Drupal max_allowed_packet error? - mysql

One of my staging sites has recently started spewing huge errors on every admin page along the lines of:
User warning: Got a packet bigger than 'max_allowed_packet' bytes query: UPDATE cache_update SET data = ' ... ', created = 1298434692, expire = 1298438292, serialized = 1 WHERE cid = 'update_project_data' in _db_query() (line 141 of /var/www/vhosts/mysite/mypath/includes/database.mysqli.inc). (where "..." is about 1.5 million characters worth of serialized data)
How should I go about tracking down where the error originates? Would adding debugging code to _db_query do any good, since it gets called so much?

No real need to track this down, because I don't think you can fix it at the source.
This is the cache from update.module, containing information about which modules have updated versions and so on. So this is coming from one of the "_update_cache_set()" calls in that module.
Based on a wild guess, I'd say it is the one in this function: http://api.drupal.org/api/drupal/modules--update--update.fetch.inc/function/_update_refresh/6
It is basically building up a huge array with information about all projects on your site and trying to store it as a single, serialized value.
How many modules do you have installed on this site?
I can think of three ways to "fix" this error:
Increase the max_allowed_packet size (the max_allowed_packet setting in my.cnf); see the sketch after this list.
Disable update.module (it's not that useful on a staging/production site anyway, since you should be testing updates on a dev site first).
Disable some modules ;)
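A rough sketch of the first option, assuming you have SUPER privileges on the MySQL server; the 64 MB value is just an example, and a SET GLOBAL change only lasts until the next restart, so the my.cnf setting is still needed for a permanent fix:
-- Check the current limit (in bytes)
SHOW VARIABLES LIKE 'max_allowed_packet';
-- Raise it for new connections until the server restarts (64 MB here)
SET GLOBAL max_allowed_packet = 67108864;
-- For a permanent change, set max_allowed_packet = 64M under [mysqld] in my.cnf and restart MySQL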

I had a similar error and went round and round for about an hour.
I increased the memory limit to 512M and still had the issue; I figured that was enough, so I went looking elsewhere.
I cleared the caches with drush, still the error, and then looked at the database tables.
I noticed that all the cache tables were cleared except cache_update. I truncated this table and bam, everything was working normally.
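For reference, truncating that one table by hand is a single statement; this assumes the default, unprefixed Drupal table name:
TRUNCATE TABLE cache_update;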
Before I got the memory limit error, I got a max_input_vars error since I am on PHP 5.4, but this question and answer led me to this fix. Not quite sure how or why it worked, but it did.

Related

InnoDB: SHOW ENGINE STATUS truncating query in deadlock section

Recently I've been having problems troubleshooting deadlocks because the engine log is not displaying the full queries involved. Hibernate generates some pretty long queries, and it seems in the two cases I've been concerned about, InnoDB is limiting itself to 1024 characters when printing the thread information (the line starting with "MySQL thread id...") and the queries themselves (select * from...).
While looking into this, I found a bug report complaining about the same issue, with an associated patch that should let us fix it. However, I'm told that there is no DB variable named max_query_len in either 5.0 or 5.1. This is strange to me: the old limit was 300 characters (and I am clearly seeing more than that), yet the limit does not seem to actually be configurable. My question is what actually happened here - is there a later patch I'm not seeing that changed the fix? 5.0.12's changelog does indeed suggest that the issue was fixed:
"SHOW ENGINE INNODB STATUS now can display longer query strings. (Bug #7819)"
but it doesn't seem to me that it actually was. The oldest version we could possibly have been running when the deadlock happened is 5.0.56, so we should at least have the 5.0.12 fix - we do have newer versions running, though, so a later release could have overwritten the fix for the truncated queries. Is anyone aware of what happened here to cause the limit on what gets printed to be lowered again?
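For anyone comparing results, the output in question comes from the InnoDB monitor dump; the truncated queries show up under its LATEST DETECTED DEADLOCK heading, and checking the server version alongside it confirms which build (and therefore which limit) actually produced the output:
-- Dump the InnoDB monitor output, including the latest detected deadlock
SHOW ENGINE INNODB STATUS;
-- Confirm which server build produced it
SELECT VERSION();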

MySql stops writing, but keeps reading

I have a Magento store running on Debian with LAMP; the server is a VPS with 1 GB RAM and a 1-core processor.
MySQL randomly but often stops writing new data to tables; Magento doesn't show any error and says it saved the data successfully. MySQL can still read the tables without problems and the website keeps running fine, it just doesn't save any new data.
If I restart MySQL it starts saving new data again, then some time later (randomly, I think; I couldn't relate it to any specific action) it stops writing again. It can be days or hours.
I've turned on the query log, but I'm not sure what to look for in it.
I found this common error after mysql stops writing: INSERT INTO index_process_event (process_id,event_id,status) VALUES ('7', '10453', 'error') ON DUPLICATE KEY UPDATE status = VALUES(status);
I've tried to reindex the whole process table as suggested by Henry, but no success.
After reindexing the event_id changed.
I don't believe the problem is low RAM; the website only gets around 200 sessions/day, hardly more than 2 users online at the same time.
Thanks, I appreciate any help.
Try to see if there is any space left on your storage.
Also check the system log in addition to the MySQL log.
Grep/search for any line that contains the string "error".

Best way to process large database with Laravel

The database
I'm working with a database that has pretty big tables, and it's causing me problems. One table in particular has more than 120k rows.
What I'm doing with it
I'm looping over this table in a MakeAverage.php file to merge its rows down to about 1k rows in a new table in my database.
What doesn't work
Laravel doesn't let me process it all at once, even if I try DB::disableQueryLog() or a take(1000) limit, for example. It returns a blank page every time, even with error reporting enabled (kind of like this). Also, there was no Laravel log file for it; I had to look in my php_error.log (I'm using MAMP) to realize that it was actually a memory_limit problem.
What I did
I increased the amount of memory before executing my code by using ini_set('memory_limit', '512M'). (It's bad practice, I should do it in php.ini.)
What happened?
It worked! However, Laravel then threw an error because the page didn't finish loading within 30 seconds due to the large amount of data.
What I will do
After spending some time on this issue and looking at other people having similar problems (see: Laravel forum, 19453595, 18775510 and 12443321), I thought that maybe PHP isn't the solution.
Since I'm only creating Table B from the average values of Table A, I believe SQL is going to fit my needs best, as it's clearly faster than PHP for that type of operation (see: 6449072) and I can use functions such as SUM, AVG and COUNT together with GROUP BY (Reference).
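A minimal sketch of that approach; the table and column names (table_a, table_b, category, value) are made up for illustration, so adjust them to the real schema:
-- Build the ~1k summary rows directly in the database instead of looping in PHP
INSERT INTO table_b (category, avg_value, row_count)
SELECT category, AVG(value), COUNT(*)
FROM table_a
GROUP BY category;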

Hibernate spring hangs

I'm working on a Hibernate/Spring/MySQL app. Sometimes when I make a getHibernateTemplate().get(class, id) call I can see a bunch of HQL in the logs and the application hangs; I have to kill Tomcat. The method reads through a 3,000-line file, and there are 18 of these files, so I've been thinking I've probably been looking at this wrong. I need help checking this at the database level, but I don't know how to approach it. Maybe my database can't take so many hits so fast.
I've looked in phpMyAdmin, in the information about execution times, and I see red values for:
Innodb_buffer_pool_reads 165
Handler_read_rnd 40
Handler_read_rnd_next 713 k
Created_tmp_disk_tables 8
Opened_tables 30
Can I somehow configure the application to treat the database more gently?
How can I check if this is the issue?
Update
I put a
Thread.sleep(2000);
at the end of each cycle and it made the same number of calls (18), so I guess this won't be the reason? Can I discard this approach?
This is a different view of this question
Hibernate hangs or throws lazy initialization no session or session was closed
trying something different
Update 2
Could it be the buffered reader reading the file? The file is 44 KB. I tried the class from this page:
http://code.hammerpig.com/how-to-read-really-large-files-in-java.html
but it did not work.
Re Update 1 -- never use a sleep or anything slow within a transaction. A transaction has to be closed as fast as possible, because it can block other database operations (what exactly gets blocked depends on the isolation level).
I do not really understand how the database is related to the files in your use case. But if it works for the first file and becomes slow later on, then the problem may be the Hibernate Session (too many objects); in that case, start a new Transaction/Hibernate Session for each file.
I rewrote the program so that I load the information directly into the database using the MySQL statement LOAD DATA INFILE. It works very fast. I then updated the rows, changing the fields I needed, also with SQL queries. I think there is simply too much information to manage at once through memory and ORM abstractions.
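A rough sketch of what that looks like; the file path, table and column names are placeholders, the follow-up UPDATE is just an arbitrary example of the second step, and depending on server settings you may need the LOCAL keyword (or the FILE privilege without it):
-- Bulk-load the file straight into the table
LOAD DATA LOCAL INFILE '/path/to/data.csv'
INTO TABLE my_table
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(col_a, col_b, col_c);
-- Then fix up fields with ordinary SQL instead of looping in application code
UPDATE my_table SET col_c = UPPER(col_c) WHERE col_c IS NOT NULL;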

NonUniqueObjectException error inserting multiple rows, LAST_INSERT_ID() returns 0

I am using NHibernate/Fluent NHibernate in an ASP.NET MVC app with a MySQL database. I am working on an operation that reads quite a bit of data (relative to how much is inserted), processes it, and ends up inserting (currently) about 50 records. I have one ISession per request, which is created/destroyed in the begin/end request event handlers (exactly like http://ayende.com/Blog/archive/2009/08/06/challenge-find-the-bug-fixes.aspx), and I am reading in data and adding new objects (as in section 16.3 at https://www.hibernate.org/hib_docs/nhibernate/html/example-parentchild.html), and finally calling Flush() on the session to actually run all the inserts.
Getting data out and lazy loading work fine, and when I call Flush exactly 2 new records are being inserted (I am manually checking the table to find this out), and then I get the following error:
NonUniqueObjectException: a different object with the same
identifier value was already
associated with the session: 0, of
entity: ...
I am new to NHibernate and while searching for a solution have tried explicitly setting the Id property's generator to both Native and Identity (it is a MySQL database and the Id column is an int with auto_increment on), and explicitly setting the unsaved value for the Id property to 0. I still get the error, however.
I have also tried calling Flush at different times (effectively once per INSERT) and I then get the same error, but for an identity value other than 0 and at seemingly random points in the process (sometimes I do not get it at all in this scenario, but sometimes I do at different points).
I am not sure where to go from here. Any help or insight would be greatly appreciated.
EDIT: See the answer below.
EDIT: I originally posted a different "answer" that did not actually solve the problem, but I want to document my findings here for anyone else who may come across it.
After several days of trying to figure out and resolve this issue, and being extremely frustrated because it seemed to go away for a while and then come back intermittently (causing me to think multiple times that a change I made had fixed it, when in fact it had not), I believe I have tracked down the real issue.
A few times after I turned the log4net level for NHibernate up to DEBUG, the problem went away, but I was finally able to get the error with that log level. Included in the log were these lines:
Building an IDbCommand object for the SqlString: SELECT LAST_INSERT_ID()
...
NHibernate.Type.Int32Type: 15:10:36 [8] DEBUG NHibernate.Type.Int32Type: returning '0' as column: LAST_INSERT_ID()
NHibernate.Id.IdentifierGeneratorFactory: 15:10:36 [8] DEBUG NHibernate.Id.IdentifierGeneratorFactory:
Natively generated identity: 0
And looking up just a few lines I saw:
NHibernate.AdoNet.ConnectionManager: 15:10:36 [8] DEBUG NHibernate.AdoNet.ConnectionManager: aggressively releasing database connection
NHibernate.Connection.ConnectionProvider: 15:10:36 [8] DEBUG NHibernate.Connection.ConnectionProvider: Closing connection
It seems that while flushing the session and performing INSERTs, NHibernate was closing the connection between the INSERT statement and the "SELECT LAST_INSERT_ID()" to get the id that was generated by MySQL for the INSERT statement. Or rather, I should say it was sometimes closing the connection which is one reason I believe the problem was intermittent. I can't find the link now, but I believe I also read in all my searching that MySQL will sometimes return the correct value from LAST_INSERT_ID() even if the connection is closed and reopened, which is another reason I believe it was intermittent. Most of the time, though, LAST_INSERT_ID() will return 0 if the connection is closed and reopened after the INSERT.
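The key point is that LAST_INSERT_ID() is scoped to the connection that ran the INSERT, which is easy to verify by hand (my_table and its columns are just stand-ins):
-- On the connection that performed the INSERT:
INSERT INTO my_table (name) VALUES ('example');
SELECT LAST_INSERT_ID(); -- returns the new auto_increment id
-- On a different (or re-opened) connection that has not inserted anything:
SELECT LAST_INSERT_ID(); -- returns 0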
It appears there are two ways to fix this issue. The first is a patch available here, which looks like it will make it into NHibernate 2.1.1 (or which you can use to make your own build of NHibernate) and which forces the INSERT and the SELECT LAST_INSERT_ID() to run together. The second is to set connection.release_mode to on_close, as described in this blog post, which prevents NHibernate from closing the connection until the ISession is explicitly closed.
I took the latter approach, which is done in FluentNHibernate like this:
Fluently.Configure()
...
.ExposeConfiguration(c => c.Properties.Add("connection.release_mode", "on_close"))
...
This also had the side effect of drastically speeding up my code. What was taking 20-30 seconds to run (when it just so happened to work before I made this change) is now running in 7-10 seconds, so it is doing the same work in ~1/3 the time.