Not Getting "Packet Too Large" Error With Huge Packets - mysql

My situation is pretty straightforward.
I have a server configured with max_allowed_packet set to 16M. It's running Percona Server 5.1.52.
I wrote a simple script in Perl to do huge bulk inserts. I know roughly how big the packets are going to be, since I know the size of the data string I'm sending through DBI.
No matter how large the bulk insert is, MySQL seems to accept the packet and do the insert, but I'm expecting it to give me a Packet Too Large error for anything over 16M.
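For reference, a minimal sketch of the kind of bulk-insert script described above (the table, columns, connection details, and row count are made up for illustration):
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Hypothetical connection details and table.
my $dbh = DBI->connect('DBI:mysql:database=test;host=localhost',
                       'user', 'password', { RaiseError => 1 });

# Build one huge multi-row INSERT and report its size before sending it.
my @values;
for my $i (1 .. 500_000) {
    push @values, sprintf('(%d, %s)', $i, $dbh->quote("row $i " . ('x' x 40)));
}
my $sql = 'INSERT INTO bulk_test (id, payload) VALUES ' . join(',', @values);

printf "Statement size: %d bytes (%.1f MiB)\n", length($sql), length($sql) / 2**20;
$dbh->do($sql);    # expected to fail with "Packet too large" once the statement exceeds max_allowed_packet
$dbh->disconnect;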
Here's where it gets really weird...
If I set max_allowed_packet to 16777215 (one byte less than 16M) or anything lower, I get the error for packets over that size and, obviously, don't get the error for packets under that size.
So it appears that for any value below 16M the packet limit is obeyed, but at 16M or greater it's completely ignored.
Any thoughts as to what could be causing this? It's really bizarre and the opposite problem most people have with max_allowed_packet.
Is it possible that the mysql client could be doing some auto-chunking? The server only appears to be running one big query, so auto-chunking seems really unlikely since it'd probably show up as more than one insert.
Any variables I could check to get more information about what's going on?

What are you inserting? If it's anything but very large BLOBs, max_allowed_packet has no effect.
Messages for regular fields are split up and reassembled as usual for network protocols. The parameter is intended for communicating BLOB fields. See the documentation.

I reported this as a MySQL bug and it was verified. So, I suppose that's that for now.
It's an issue in 5.1.52 and above.
Bug report here: http://bugs.mysql.com/58887

Related

MySQL "Packet for query too large" with query length exactly matching `max_allowed_packet`

I'm writing a tool that imports tables from a source to a destination, and it generates the queries to do the importing. Currently, my tool is aware of what the destination's max_allowed_packet is set to and only writes INSERT queries with few enough rows to stay under this limit.
Problem is, I happened to stumble upon a query that is exactly 1048576 bytes (1 MB), which is my configured max packet size. I would of course assume that a packet contains more than just the query, so this makes sense to me, but how do I find out what the actual maximum length of a single query with no parameters should be, given a max packet size?
I already saw this post, which essentially says to divide the packet size by 11 to get the query length (actually saying that the biggest parameter length * 11 should be the max packet size), but that sounds silly.
This is hard to answer because there is no fixed length; these are just some of the factors that play into packet sizes.
To understand it, start with the TCP layer that sits below the MySQL server: https://en.wikipedia.org/wiki/Transmission_Control_Protocol#TCP_segment_structure
From that, we can see that the TCP header can vary in size because of the options allowed, but it starts at a minimum of 160 bits (20 bytes), and our MySQL data only begins after the TCP options block, so all of this comes before there is any information for the MySQL server to process.
The MySQL server then needs information about the encoding, its options (such as whether the query is parameterized), and then the query itself. A character in your string could be in one of the common encodings: 7-bit ASCII, 8- to 32-bit UTF-8, 16- or 32-bit UTF-16, and possibly (I'm not sure whether the MySQL server supports it) 32-bit UTF-32. (Note: this is the format of the string as sent to the DB server, not the format saved by the DB server.)
This is why it's variable-length, and as such there is no defined query size. For example, the length of the query you write will more than likely be stored in the packet header so the MySQL server knows where to stop reading, and that pushes the offsets forward for where the data starts (e.g. the bigger the query, the bigger the header MySQL needs).
Another factor to note is that the bigger the packet, the longer the checksum takes to calculate, and should the checksum validation fail, the whole packet has to be retransmitted. Getting this right is a pain for most server software: it's a balance between speed when everything goes correctly and speed when a packet fails and has to be retransmitted.
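In practice, since the protocol overhead per statement is small but variable, a common approach is to batch rows by byte length and flush well before the statement gets anywhere near max_allowed_packet, rather than trying to compute the exact overhead. A rough sketch in Perl/DBI (the table, columns, sample data, and headroom figure are all made up for illustration):
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('DBI:mysql:database=test', 'user', 'password',
                       { RaiseError => 1 });

# Read the server-side limit and keep an arbitrary safety margin for overhead.
my ($max_packet) = $dbh->selectrow_array('SELECT @@global.max_allowed_packet');
my $limit  = $max_packet - 16_384;
my $prefix = 'INSERT INTO import_test (id, name) VALUES ';

my @rows = map { [ $_, "name $_" ] } 1 .. 100_000;   # stand-in for rows read from the source
my $sql  = '';
for my $row (@rows) {
    my $tuple = '(' . join(',', map { $dbh->quote($_) } @$row) . ')';
    if ($sql && length($sql) + length($tuple) + 1 > $limit) {
        $dbh->do($sql);                              # flush the current batch
        $sql = '';
    }
    $sql .= $sql ? ",$tuple" : $prefix . $tuple;
}
$dbh->do($sql) if $sql;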

AWS RDS: Lost connection to MySQL server during query when importing a large file

I'm trying to import a 1.4G MySQL file into AWS RDS. I tried the 2 CPU / 4G memory option and still got the error: Lost connection to MySQL server during query. My question is: how do I import a large MySQL file into RDS?
MySQL Server and the MySQL client both have a parameter max_allowed_packet.
This is designed as a safety check to prevent the useless and disruptive allocation of massive amounts of memory that could occur if data corruption caused the receiving end of the connection to believe a packet¹ to be extremely large.
When transmitting queries and result sets, neither client nor server is allowed to send any single "thing" (usually a query or the value of a column) that is larger than max_allowed_packet -- the sending side will throw an error and refuse to send it if you try, and the receiving side will throw an error and then close the connection on you (so the client may or may not actually report the error thrown -- it may simply report that the connection was lost).
Unfortunately, the client setting and server setting for this same parameter are two independent settings, and they are uncoordinated. There is technically no requirement that they be the same, but discrepant values only work as long as neither of them ever exceeds the limit imposed by the other.
Worse, their defaults are actually different. In recent releases, the server defaults to 4 MiB, while the client defaults to 16 MiB.
Finding the server's value (SELECT @@MAX_ALLOWED_PACKET) and then setting the client to match the server (mysql --max-allowed-packet=max_size_in_bytes) will "fix" the mysterious Lost connection to MySQL server during query error message by causing the client to Do The Right Thing™ and not attempt to send a packet that the server won't accept. But you still get an error -- just a more informative one.
So, we need to reconfigure both sides to something more appropriate... but how do we know the right value?
You have to know your data. What's the largest possible value in any column? If that's a stretch (and in many cases, it is), you can simply start with a reasonably large value based on the longest line in a dump file.
Use this one-liner to find that:
$ perl -ne '$max = length($_) > $max ? length($_) : $max; END { print "$max\n" }' dumpfile.sql
The output will be the length, in bytes, of the longest line in your file.
You might want to round it up to the next power of two, or at least the next increment of 1024 (1024 is the granularity accepted by the server -- values are rounded) or whatever you're comfortable with, but this result should give you a value that should allow you to load your dump file without issue.
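As a worked example with a made-up measurement, rounding the reported longest line up to the next multiple of 1024 looks like this in Perl:
use strict;
use warnings;

# Hypothetical: suppose the one-liner reported a longest line of 29,123,456 bytes.
my $longest   = 29_123_456;
my $suggested = int(($longest + 1023) / 1024) * 1024;   # round up to the next multiple of 1024
print "$suggested\n";                                    # prints 29124608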
Now that we've established a new value that should work, change max_allowed_packet on the server to the new value you've just discovered. In RDS, this is done in the parameter group. Be sure the value has been applied to your server (SELECT @@GLOBAL.MAX_ALLOWED_PACKET;).
Then, you'll need to pass the same value to your client program, e.g. mysql --max-allowed-packet=33554432 if this value is smaller than the default client value. You can find the default client value with this:
$ mysql --help --verbose | grep '^max.allowed.packet'
max-allowed-packet 16777216
The client also allows you to specify the value with a size suffix, like --max-allowed-packet=32M for 32 MiB (33554432 bytes).
This parameter -- and the fact that there are two of them, one for the client and one for the server -- causes a lot of confusion and has led to the spread of some bad information: You'll find people on the Internet telling you to set it to ridiculous values like 1G (1073741824, which is the maximum value possible) but this is not a really good strategy since, as mentioned above, this is a protective mechanism. If a packet should happen to get corrupted on the network in just the wrong way, the server could conclude that it actually needs to allocate a substantial amount of memory just so that this packet can successfully be loaded into a buffer -- and this could lead to system impairment or a denial of service by starving the system for available memory.
The actual amount of memory the server normally allocates for reading packets from the wire is net_buffer_length. The size indicated in the packet isn't actually allocated unless it's larger than net_buffer_length.
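If you'd rather script the check of both server-side values mentioned here, a small Perl/DBI sketch will do it (the RDS endpoint and credentials below are placeholders):
use strict;
use warnings;
use DBI;

# Placeholder endpoint and credentials -- substitute your own.
my $dbh = DBI->connect(
    'DBI:mysql:database=mydb;host=my-instance.example.us-east-1.rds.amazonaws.com',
    'user', 'password', { RaiseError => 1 });

my ($pkt, $buf) = $dbh->selectrow_array(
    'SELECT @@global.max_allowed_packet, @@global.net_buffer_length');
printf "max_allowed_packet = %d bytes, net_buffer_length = %d bytes\n", $pkt, $buf;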
¹ a packet refers to a layer 7 packet in the MySQL Client/Server Protocol sense. Not to be confused with an IP packet or datagram.
Your connection may time out if you are importing from your local computer or laptop, or from a machine that is not in the same region as the RDS instance.
Try to import from an EC2 instance that has access to this RDS instance. You will need to upload the file to S3, SSH into the EC2 instance, and run the import into RDS.

MySQL multiple inserts: how many extended values are supported?

We all know that it's better to use multiple inserts in ONE query than to run MULTIPLE queries. But I don't know how many of these multiple extended values MySQL supports. I searched the net but didn't find a definitive answer. I'm just curious to know.
Example,
INSERT INTO tbl_name VALUES(1, 'John Doe'), (2, 'Peter England'), ....
I remember that when I was using an MVC framework that tried to fire hundreds or thousands of inserts in one query, I used to get an error message like MySQL server has gone away.
The limit for multiple inserts like the one you are talking about is bound by the packet limit.
See: http://dev.mysql.com/doc/refman/5.1/en/packet-too-large.html
This will affect all query types, and not just insert.
To add a little more context, the error you mentioned, MySQL server has gone away, would be the result of exceeding the packet limit. A quote from the page:
You can also get these errors if you send a query to the server that
is incorrect or too large. If mysqld receives a packet that is too
large or out of order, it assumes that something has gone wrong with
the client and closes the connection. If you need big queries (for
example, if you are working with big BLOB columns), you can increase
the query limit by setting the server's max_allowed_packet variable,
which has a default value of 1MB. You may also need to increase the
maximum packet size on the client end. More information on setting the
packet size is given in Section C.5.2.10, “Packet too large”.
Your query's ultimate length limit is set by the max_allowed_packet setting - if you exceed that, the query gets truncated and almost certainly becomes invalid.
While doing a multi-value insert is more efficient, don't go overboard and try to do thousands of value sets. Try splitting it up so you're only doing a few hundred at most, and definitely make sure that the query string's length doesn't go near max_allowed_packet.
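A rough sketch of that approach in Perl/DBI, chunking a made-up data set into batches of a few hundred rows (tbl_name is taken from the example above; the columns, sample data, and batch size are arbitrary):
use strict;
use warnings;
use DBI;
use List::Util qw(min);

my $dbh = DBI->connect('DBI:mysql:database=test', 'user', 'password',
                       { RaiseError => 1 });

my @rows       = map { [ $_, "name $_" ] } 1 .. 5000;   # stand-in data
my $batch_size = 500;                                    # "a few hundred at most"

for (my $i = 0; $i < @rows; $i += $batch_size) {
    my $last   = min($i + $batch_size - 1, $#rows);
    my $values = join ',',
        map { '(' . join(',', map { $dbh->quote($_) } @$_) . ')' } @rows[$i .. $last];
    $dbh->do("INSERT INTO tbl_name VALUES $values");
}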

How to track down a Drupal max_allowed_packet error?

One of my staging sites has recently started spewing huge errors on every admin page along the lines of:
User warning: Got a packet bigger than 'max_allowed_packet' bytes query: UPDATE cache_update SET data = ' ... ', created = 1298434692, expire = 1298438292, serialized = 1 WHERE cid = 'update_project_data' in _db_query() (line 141 of /var/www/vhosts/mysite/mypath/includes/database.mysqli.inc). (where "..." is about 1.5 million characters worth of serialized data)
How should I go about tracking down where the error originates? Would adding debugging code to _db_query do any good, since it gets called so much?
No need to track this down, because I don't think you can fix it.
This is the cache from update.module, containing information about which modules have updated versions and so on. So this is coming from one of the "_update_cache_set()" calls in that module.
Based on a wild guess, I'd say it is the one in this function: http://api.drupal.org/api/drupal/modules--update--update.fetch.inc/function/_update_refresh/6
It is basically building up a huge array with information about all the projects on your site and trying to store it as a single serialized value.
How many modules do you have installed on this site?
I can think of three ways to "fix" this error:
Increase the max_allowed_packet size (the max_allowed_packet setting in my.cnf -- see the snippet after this list).
Disable update.module. (It's not that useful on a staging/production site anyway, when you need to update on a dev site first.)
Disable some modules ;)
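If you go with the first option, the setting lives in the [mysqld] section of the server's my.cnf; the value below is just an example, chosen to be comfortably larger than the ~1.5 million characters of serialized data in the error above:
[mysqld]
max_allowed_packet = 16M
After changing the file, restart mysqld (or run SET GLOBAL max_allowed_packet = 16777216 on the running server, which takes effect for new connections).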
I had a similar error and went round and round for about an hour.
I increased the memory limit to 512M and still had the issue, and figured that was enough, so I went looking elsewhere.
I cleared the caches with drush, still got the error, and then looked at the database tables.
I noticed that all the cache tables were cleared except cache_update. I truncated this table and bam, everything was working normally.
Before I got the memory limit error, I got a max_input_vars error since I am on PHP 5.4. But this question and answer led me to this fix. Not quite sure how or why it worked, but it did.

Problem with Sphinx resultset larger than 16 MB in MySQL

I am accessing a large indexed text dataset using SphinxSE via MySQL. The size of the result set is on the order of gigabytes. However, I have noticed that MySQL stops the query with the following error whenever the result set is larger than 16MB:
1430 (HY000): There was a problem processing the query on the foreign data source. Data source error: bad searchd response length (length=16777523)
length shows the size of the result set that offended MySQL. I have tried the same query with Sphinx's standalone search program, and it works fine. I have tried all possible variables in both MySQL and Sphinx, but nothing helps.
I am using Sphinx 0.9.9 rc-2 and MySQL 5.1.46.
Thanks
I finally solved the problem. It turns out that the Sphinx plugin for MySQL (SphinxSE) hard-codes the 16 MB limit on the result set in its source code (bad bad bad source code). I changed SPHINXSE_MAX_ALLOC to 1*1024*1024*1024 in ha_sphinx.cc, and everything works fine now.
You probably need to increase max_allowed_packet from its default value of 16M.
From MySQL's documentation:
Both the client and the server have their own max_allowed_packet variable, so if you want to handle big packets, you must increase this variable both in the client and in the server.
If you are using the mysql client program, its default max_allowed_packet variable is 16MB. To set a larger value, start mysql like this:
shell> mysql --max_allowed_packet=32M
That sets the packet size to 32MB.