I am accessing a large indexed text dataset through SphinxSE via MySQL. The result set is on the order of gigabytes. However, I have noticed that MySQL aborts the query with the following error whenever the result set is larger than 16 MB:
1430 (HY000): There was a problem processing the query on the foreign data source. Data source error: bad searchd response length (length=16777523)
length shows the size of the result set that MySQL rejected. I have tried the same query with Sphinx's standalone search program and it works fine. I have tried every relevant variable in both MySQL and Sphinx, but nothing helps.
I am using Sphinx 0.9.9-rc2 and MySQL 5.1.46.
Thanks
I finally solved the problem. It turns out that the Sphinx storage engine for MySQL (SphinxSE) hard-codes a 16 MB limit on the response size in its source code (bad practice on SphinxSE's part). I changed SPHINXSE_MAX_ALLOC to 1*1024*1024*1024 in ha_sphinx.cc, and everything works fine now.
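For reference, the change amounts to one constant in ha_sphinx.cc; a hedged sketch of the edit, assuming the constant is a preprocessor define as in the 0.9.9 sources (the exact original definition may differ between releases):

// ha_sphinx.cc -- raise the hard-coded cap on a searchd response
// (the stock build limits responses to 16 MB; this bumps it to 1 GB)
#define SPHINXSE_MAX_ALLOC      (1*1024*1024*1024)

You then have to recompile and reinstall the SphinxSE plugin for the change to take effect.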
You probably need to increase max_allowed_packet from its default value of 16M:
From MySQL's documentation:
Both the client and the server have their own max_allowed_packet variable, so if you want to handle big packets, you must increase this variable both in the client and in the server.
If you are using the mysql client program, its default max_allowed_packet variable is 16MB. To set a larger value, start mysql like this:
shell> mysql --max_allowed_packet=32M
That sets the packet size to 32MB.
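The quote above also mentions the server side. A minimal sketch of checking and raising the server's limit on a server you administer yourself (on shared hosting or RDS you would use the provider's configuration mechanism instead; the 32 MiB figure is only an example):

-- Check the server's current limit, in bytes:
SELECT @@GLOBAL.max_allowed_packet;

-- Raise it for the running server; takes effect for new connections:
SET GLOBAL max_allowed_packet = 32 * 1024 * 1024;

To make the change survive a restart, set max_allowed_packet=32M under the [mysqld] section of the server's configuration file.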
I am trying to copy over a database, which I have been able to do for months without issue. Today, however, I ran into an error that says 'A BULK size specified must be increased'. I am using SQLyog.
I didn't find much on Google about this, but it seems as though it should be fixed by increasing bulk_insert_buffer_size with something like SET SESSION bulk_insert_buffer_size = 1024 * 1024 * 256 (I tried GLOBAL instead of SESSION too).
Unfortunately this has not worked and I am still getting the error. The only other bit of information I found was the source code where the message is generated: https://github.com/Fale/sqlyog/blob/master/src/CopyDatabase.cpp. I really don't know what to do with that information. I tried looking through the code to find out which MySQL variables (like bulk_insert_buffer_size) were tied to the variables used in the source, but wasn't able to follow it effectively.
Any help would be appreciated.
http://faq.webyog.com/content/24/101/en/about-chunks-and-bulks.html says you can specify the BULK size:
The settings for the 'export' tool are available from 'preferences' and for the 'backup' 'powertool' the option is provided by the backup wizard.
You should make sure the BULK size is no larger than your MySQL server's max_allowed_packet config option, which has a default value of 1MB, 4MB, or 64MB depending on your MySQL version.
I'm not a user of SQLyog, but I know mysqldump has a similar concept: mysqldump auto-calculates its maximum bulk size by reading max_allowed_packet.
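For comparison, that mysqldump behavior can also be steered explicitly from the command line; a hedged example where the sizes are arbitrary and dbname is a placeholder:

shell> mysqldump --max-allowed-packet=64M --net-buffer-length=1M dbname > dump.sql

Here --net-buffer-length caps the length of each generated multi-row INSERT, and --max-allowed-packet caps the client/server communication buffer, so the generated statements stay importable later.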
For what it's worth, bulk_insert_buffer_size is not relevant for you unless you're copying into MyISAM tables. But in general, you shouldn't use MyISAM tables.
I'm trying to add a column to a MySQL table that has over 25 million rows. I am running the SQL command
ALTER TABLE `table_name` ADD COLUMN `column_name` varchar(128) NULL DEFAULT NULL;
This is being run using the mysql command line application.
Every time I try to run this it takes hours and then I get the error
ERROR 2013 (HY000): Lost connection to MySQL server during query
The database is running on an RDS instance on AWS, and the monitoring statistics show that neither memory nor disk space is being exhausted.
Is there anything else I can try to add this column to the table?
Check your memory usage or, more probably, your disk usage (is there enough free space during the process?). Altering a table may require either a large amount of memory or an on-disk copy of your table. Changing the ALTER algorithm from INPLACE to COPY can be even faster in your particular case.
You may also be hitting the innodb_online_alter_log_max_size limit, although in that case only the query should fail, not the entire server. It is possible that the failure is happening during the ROLLBACK, and not the operation itself, though.
Finally, some application configurations or hosting providers cancel a query/HTTP request that takes too long; I recommend executing the same query from the command-line client for testing purposes.
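As a sketch of how to narrow this down (assuming MySQL 5.6 or later with InnoDB), you can request the ALTER algorithm explicitly so the statement fails immediately if it cannot be honored instead of silently falling back, and check the online-ALTER log ceiling mentioned above:

ALTER TABLE `table_name`
  ADD COLUMN `column_name` varchar(128) NULL DEFAULT NULL,
  ALGORITHM=INPLACE, LOCK=NONE;

SELECT @@GLOBAL.innodb_online_alter_log_max_size;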
I'm trying to import a 1.4 GB MySQL dump file into AWS RDS. I tried the 2 CPU / 4 GB memory instance option and I still get the error Lost connection to MySQL server during query. My question is: how do I import a large MySQL dump file into RDS?
MySQL Server and the MySQL client both have a parameter max_allowed_packet.
This is designed as a safety check to prevent the useless and disruptive allocation of massive amounts of memory that could occur if data corruption caused the receiving end of the connection to believe a packet¹ to be extremely large.
When transmitting queries and result sets, neither client nor server is allowed to send any single "thing" (usually a query or the value of a column) that is larger than max_allowed_packet -- the sending side will throw an error and refuse to send it if you try, and the receiving side will throw an error and then close the connection on you (so the client may or may not actually report the error thrown -- it may simply report that the connection was lost).
Unfortunately, the client setting and the server setting for this same parameter are two independent, uncoordinated settings. There is technically no requirement that they be the same, but discrepant values only work as long as neither side ever exceeds the limit imposed by the other.
Worse, their defaults are actually different. In recent releases, the server defaults to 4 MiB, while the client defaults to 16 MiB.
Finding the server's value (SELECT @@MAX_ALLOWED_PACKET) and then setting the client to match it (mysql --max-allowed-packet=max_size_in_bytes) will "fix" the mysterious Lost connection to MySQL server during query error message by causing the client to Do The Right Thing™ and not attempt to send a packet the server won't accept. But you still get an error -- just a more informative one.
So, we need to reconfigure both sides to something more appropriate... but how do we know the right value?
You have to know your data. What's the largest possible value in any column? If that's a stretch (and in many cases, it is), you can simply start with a reasonably large value based on the longest line in a dump file.
Use this one-liner to find that:
$ perl -ne '$max = length($_) > $max ? length($_) : $max; END { print "$max\n" }' dumpfile.sql
The output will be the length, in bytes, of the longest line in your file.
You might want to round it up to the next power of two, or at least the next increment of 1024 (1024 is the granularity accepted by the server -- values are rounded), or whatever you're comfortable with, but this gives you a value that should allow you to load your dump file without issue.
Now that we've established a new value that should work, change max_allowed_packet on the server to the value you've just discovered. In RDS, this is done in the parameter group. Be sure the value has been applied to your server (SELECT @@GLOBAL.MAX_ALLOWED_PACKET;).
Then, you'll need to pass the same value to your client program, e.g. mysql --max-allowed-packet=33554432 if this value is smaller than the default client value. You can find the default client value with this:
$ mysql --help --verbose | grep '^max.allowed.packet'
max-allowed-packet 16777216
The client also allows you to specify the value in SI units, like --max-allowed-packet=32M for 32 MiB (33554432 bytes).
This parameter -- and the fact that there are two of them, one for the client and one for the server -- causes a lot of confusion and has led to the spread of some bad information: you'll find people on the Internet telling you to set it to ridiculous values like 1G (1073741824, which is the maximum value possible). That is not a good strategy because, as mentioned above, this is a protective mechanism. If a packet should happen to get corrupted on the network in just the wrong way, the server could conclude that it needs to allocate a substantial amount of memory just so that the packet can be loaded into a buffer -- and that could lead to system impairment or a denial of service by starving the system of available memory.
The actual amount of memory the server normally allocates for reading packets from the wire is net_buffer_length. The size indicated in the packet isn't actually allocated unless it's larger than net_buffer_length.
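A quick, purely informational check of both values on the server (this changes nothing):

SHOW GLOBAL VARIABLES WHERE Variable_name IN ('max_allowed_packet', 'net_buffer_length');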
¹ a packet refers to a layer 7 packet in the MySQL Client/Server Protocol sense. Not to be confused with an IP packet or datagram.
Your connection may time out if you are importing from your local computer or laptop, or from a machine that is not in the same region as the RDS instance.
Try to import from an EC2 instance that has access to this RDS instance. You will need to upload the file to S3, SSH into the EC2 instance, and run the import into RDS from there.
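A hedged sketch of that workflow from the EC2 instance (the bucket, endpoint, user, and database names below are placeholders):

$ aws s3 cp s3://your-bucket/dump.sql .
$ mysql --host=your-instance.xxxxxx.us-east-1.rds.amazonaws.com \
        --user=admin -p --max-allowed-packet=64M your_database < dump.sql

Running the import inside the same region keeps latency low and avoids the client-side timeouts you tend to hit over a WAN link.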
I'm doing a SELECT from 3 joined tables on MySQL Server 5.6 running on an Azure instance with InnoDB set to 2 GB. The server had 14 GB of RAM and 2 cores; I just doubled the RAM and cores hoping this would help my SELECT, but it didn't.
The 3 tables I'm selecting from are 90 MB, 15 MB, and 3 MB.
I don't believe I'm doing anything crazy in my query, where I filter on a few booleans, but this SELECT is hanging the server pretty badly and I can't get my data. I do see traffic increasing to around 500 MB/s in MySQL Workbench, but I can't figure out what to do with that.
Is there anything I can do to get my SQL queries working? I don't mind waiting 5 minutes for the data, but I need to figure out how to get it.
==================== UPDATE ===============================
I was able to get it done by cloning the 90 MB table and populating the clone with a filtered copy of the original. It ended up being ~15 MB, and then I just selected from all 3 tables, joining them via IDs. Now the request completes in 1/10 of a second.
What did I do wrong in the first place? I feel like there is a way to increase the size of some packets to get such queries to work. Any suggestions on what I should Google?
Just FYI, my SELECT query looked like this:
SELECT
text_field1,
text_field2,
text_field3 ,..
text_field12
FROM
db.major_links,db.businesses, db.emails
where bool1=1
and bool2=1
and text_field is not null or text_field!=''
and db.businesses.major_id=major_links.id
and db.businesses.id=emails.biz_id;
So bool1, bool2, and the text_field I'm filtering on are fields from that 90 MB table.
I know this might be a bit late, but I have some suggestions.
First, take a look at max_allowed_packet in your my.ini file. On Windows this is usually found in:
C:\ProgramData\MySQL\MySQL Server 5.6
This controls the packet size and usually causes errors on large queries if it isn't set correctly. I have mine set to 100M.
Here is some documentation for you: the official MySQL documentation on max_allowed_packet.
In addition, I've seen slow queries when there are a lot of conditions in the WHERE clause, and here you have several. Make sure you have indexes and compound indexes on the columns in your WHERE clause, especially the ones involved in the joins -- something along the lines of the sketch below.
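For illustration only, using the column names from the posted query (the actual table structures are assumed, so adjust table names, column order, and selectivity to match your schema):

ALTER TABLE businesses ADD INDEX idx_filter_join (bool1, bool2, major_id);
ALTER TABLE emails     ADD INDEX idx_biz (biz_id);

With the filter columns leading the compound index on the large table, the optimizer can narrow the 90 MB table before performing the joins, which is essentially what the manual clone-and-filter workaround in the update did by hand.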
My situation is pretty straightforward.
I have a server configured with max_allowed_packet set to 16M. It's running Percona Server 5.1.52.
I wrote a simple Perl script to do huge bulk inserts. I know roughly how big the packets are going to be, since I know the size of the data string I'm sending through DBI.
No matter what size the bulk insert is, MySQL seems to accept the packet and perform the insert, but I'm expecting a Packet Too Large error for anything over 16M.
Here's where it gets really weird...
If I set max_allowed_packet to 16777215 (one byte less than 16M) or anything lower, I get the error for packets over that size and obviously don't get the error for packets under that size.
So it appears that below 16M the packet limit is obeyed, but at 16M or greater it's completely ignored.
Any thoughts as to what could be causing this? It's really bizarre and the opposite problem most people have with max_allowed_packet.
Is it possible that the mysql client could be doing some auto-chunking? The server only appears to be running one big query, so auto-chunking seems really unlikely since it'd probably show up as more than one insert.
Any variables I could check to get more information about what's going on?
What are you inserting? If it's anything but very large BLOBs, max_allowed_packet has no effect.
Messages for regular fields are split up and reassembled as usual for network protocols. The parameter is intended for communicating BLOB fields. See the documentation.
I reported this as a MySQL bug and it was verified. So, I suppose that's that for now.
It's an issue in 5.1.52 and above.
Bug report here: http://bugs.mysql.com/58887