Error #1159 with MySQL FEDERATED table and one kind of query - mysql

I have a problem with a FEDERATED table in MySQL. I have one server (MySQL version 5.0.51a) that serves only to store client data and nothing more. The logic databases are stored on another server (version 5.1.56), which sometimes needs to handle data from the first server. So the second server has one FEDERATED table that connects to the first server.
This setup has worked without any problems, but recently I started getting strange errors: some kinds of queries on the second server can no longer be performed.
For example, SELECT * FROM table doesn't work. It hangs for exactly 3 minutes and then gives:
Error Code: 1159 Got timeout reading communication packets
OK, I checked the table on the first server and it's fine. Then I tried some other queries against the FEDERATED table and they work...
For example, a query like SELECT * FROM table WHERE id=x returns a result. I thought the problem might be the size of the result set, so I tried a query with a dummy WHERE clause, SELECT * FROM table WHERE id > 0 - and it also works...
Finally I found a "solution", which helped for only two days: on the first server I made a copy of the table, and on the second server I declared a new FEDERATED table with a new connection string pointing to this copy. It worked, but after two days the same problem appeared with the new copied table.
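For illustration, the re-declared table on the second server looks roughly like this (the column, user, host, and table names here are placeholders, not the real ones):
CREATE TABLE client_data (
    id INT NOT NULL,
    payload VARCHAR(255),
    PRIMARY KEY (id)
) ENGINE=FEDERATED
CONNECTION='mysql://fed_user:fed_pass@first-server:3306/clientdb/client_data_copy';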
I've already talked with both hosting providers; they see no problems, everything seems to work on their side, and each says the other provider is the cause of the problems.
I've checked all the variables in MySQL and there is no timeout parameter set to 3 minutes or anything similar. So how can I deal with this kind of problem? It seems to be something automatic on the network or database side, but I don't know how to detect the cause.
Do you have any ideas?

You may try checking the MTU size settings for the network interfaces on both servers.

This warning is logged when idle threads are killed by wait_timeout.
Normally, the way to avoid threads getting killed by wait_timeout is to call mysql_close() in scripts when the connection is no longer needed. Unfortunately that doesn't work for queries made through federated tables because the query and the connection are not on the same server.
For example, when a query against a federated table (pointing to data on server B) is executed on server A, it creates a connection on server B. When you then run mysql_close() on server A, it obviously cannot close the connection that was created on server B.
Eventually the connection gets killed by MySQL after the number of seconds specified in wait_timeout has passed (the default is 8 hours). This generates the warning "Got timeout reading communication packets" in your MySQL error log.
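If you want to check whether this is what's biting you, you can inspect the timeout variables on the remote (data) server; the SET statement below is only illustrative, and note that a GLOBAL change applies to new connections only:
SHOW VARIABLES LIKE 'wait_timeout';      -- default is 28800 seconds (8 hours)
SHOW VARIABLES LIKE 'net_read_timeout';  -- also involved in "reading communication packets" timeouts
SET GLOBAL wait_timeout = 28800;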


What could cause MySQL to intermittently fail to return a row?

I have a MySQL database that I run very simple queries against as part of a webapp. Starting today, I have received reports from users that they got an error saying their account doesn't exist, and when they log in again, it does (this happened to only a few people, and only once each, so it is clearly rare). Based on my backend code, this error can only occur if the same query returns 0 rows the first time and 1 row the second. My query is basically SELECT * FROM users WHERE username="...". How is this possible? My suspicion is that the hard disk is having I/O failures, but I am unsure, because I would not expect MySQL to fail silently in that case. That said, I don't know what else it could be.
This could be a bug in your MySQL client (though I'm unsure how your code is structured, so it could just be a bad query). However, let's assume that your query has been working fine up until now with no prior issues, so we'll rule out bad code.
With that in mind, I'm assuming it's either a bug in your MySQL client or that your max connection count is being reached (I had this issue with my previous host, Hostinger).
If your issue is a bug in the MySQL client, you can disable the index_merge_intersection optimization on a per-session basis by running this:
SET SESSION optimizer_switch="index_merge_intersection=off";
or you can set it globally in your my.cnf:
[mysqld]
optimizer_switch=index_merge_intersection=off
As for max connections, you can either increase your max_connections value (depending on whether your host allows it), or you'll have to add logic to close the MySQL connection after each query execution:
$mysqli->close();
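To see whether you are actually hitting the limit, a couple of quick checks (the SET value is only illustrative):
SHOW VARIABLES LIKE 'max_connections';   -- the configured limit
SHOW STATUS LIKE 'Threads_connected';    -- connections open right now
SHOW STATUS LIKE 'Max_used_connections'; -- high-water mark since the last restart
SET GLOBAL max_connections = 200;        -- raise it, if your host allows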

SQL Server Linked Server To MySQL "Too many connections"

I have attempted to find the answer here and via Google on how to control connections for a linked server ODBC connection.
Overview
I have a linked server from SQL Server 2014 to MySQL for the purposes of extracting data for our data warehouse. I've queried the database quite a few times without issue. Then yesterday, the query to read from the table suddenly became slow, and I got reports that the applications using this MySQL database were getting a "too many connections" error.
Details
The following query selects the data from MySQL and inserts to the SQL Server table.
INSERT INTO tmpCustomers
(fieldlist)
SELECT
myc.contact_id,
myl.franchise_id,
myl.lead_source,
LEFT(RTRIM(myc.first_name) + ' ' + RTRIM(myc.last_name),100) AS Name,
myc.first_name,
myc.last_name,
myc.company,
myc.Email,
myc.primary_phone,
myc.home_phone,
myc.mobile_phone,
myc.work_phone,
myc.fax,
myc.address1,
myc.Address2,
myc.City,
myc.[state],
myc.zip_code,
myc.created_date,
myc.updated_date
FROM [MYSQLDB]...[franchise] myf
INNER JOIN [MYSQLDB]...[leads] myl
ON myl.franchise_id = myf.franchise_id
INNER JOIN [MYSQLDBE]...[contact] myc
ON myc.contact_id = myl.contact_id
This query returns about 200K rows of data, and will grow. The MySQL database is used by our customer base, and this is a back-end process to pull data into our data warehouse.
The query had been working without issue over the past week of testing, until yesterday, when it caused our MySQL support to restart the MySQL server twice.
The ODBC setup was done using the "mysql-connector-odbc-5.3.6-win64.msi" version. I don't find any settings there to limit the number of connections. ODBC does show an "Allow multiple statements" option, which this is not. It also has "Enable automatic reconnect", though I can't imagine why that would be needed for a single query.
Summary
I can't afford to stop customers from connecting, and I need to keep the import process from using too many connections.
Any input on this would be greatly appreciated.
Thanks
KDS
Update: 2016-Oct-05
AWS server - M3.xlarge
4 CPU
15 GiB
2 × 40 GiB SSD drives
It's better to optimize the MySQL server if you can't afford to stop customers from connecting.
With only this much information, it's hard to suggest specific MySQL optimizations.
https://dev.mysql.com/doc/refman/5.5/en/too-many-connections.html
It's best to update your configuration file: raise the max_connections limit, and tune the InnoDB variables and RAM allocation if you are using InnoDB.
Can you add the above information to the question?
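As a rough starting point, something like this in the configuration file (the values are only illustrative and should be sized against your workload and the 15 GiB of RAM listed above):
[mysqld]
max_connections = 500
innodb_buffer_pool_size = 8G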
I'm going to mark this as answered, as it's been about a year with no real solution. The issue was locks on the MySQL server while the SQL Server linked server was reading the data. SQL Server hints like NOLOCK had no impact on resolving this.
So what we did was take a nightly backup of the MySQL database and restore it to a separate database that the SQL Server linked server points to, and process the data from there. The reads are usually done in a minute or two. SQL Server was still putting a lock on the MySQL table, and users then stacked up connections until all the connections to MySQL were used up.
Since I only needed the data daily for reporting purposes, this separate database copy worked, but I don't know of any other fix for this.
Thanks
KD

Setting up MySQL 5.6 with Memcache fails without error

I am trying to set up MySQL 5.6 with the memcached plugin enabled. I followed the procedure on the MySQL website and a couple of other tutorials that I found online. Going by those, this should be really simple to set up and test.
I am trying to verify that the setup works as expected using telnet. When I set the value of a key from telnet, I get a return status of STORED. I can even fetch the value immediately from memcache. However, when I log in to the DB, I do not see the new row. I don't see any errors in the logs either. "show plugins" shows that the daemon_memcached plugin is enabled.
[Edited]
Actually, things don't even work the other way around. I added a new row to the demo_test table and tried fetching it through the memcache interface. That didn't work either.
Any pointers about how to go about identifying what's wrong?
The memcache integration in MySQL communicates directly with the InnoDB storage engine, not the higher MySQL "server layer." As such, changes made to table data through this interface do not invalidate results for queries against that table that are already stored in the query cache. This is in contrast to normal operations through the SQL interface, where any change to a table's data immediately evicts all cached results for queries against that table, regardless of whether the change actually invalidated each specific query.
Repeat your query, but instead of SELECT, use SELECT SQL_NO_CACHE. If you get the result you expect, this is the explanation.
Once you have established that this is the cause, you will find that any SQL query that inserts, deletes, or updates rows in the table will also make memcache-changed data visible to SELECT queries, without the need for the SQL_NO_CACHE directive. This holds true even when the insert, delete, or update does not directly touch the rows in question, as long as it modifies something in the table.
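Using the demo_test table from the question, the check would look something like this (the key value and the c1 key column are assumed from the plugin's default demo schema):
SELECT * FROM demo_test WHERE c1 = 'key1';              -- may return a stale cached result
SELECT SQL_NO_CACHE * FROM demo_test WHERE c1 = 'key1'; -- bypasses the query cache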
Duh!! There was already a memcached instance running on port 11211. Unfortunately, MySQL doesn't error out in this situation. When I was using telnet to connect to port 11211, I was reaching the pre-existing memcached instance: it was storing and retrieving the values it had seen, but wasn't communicating with MySQL at all.
I stopped the existing memcached instance and restarted MySQL. I am now able to connect to port 11211. Using telnet, when I do a "get", I get back values from the DB. Also, when I set new values from telnet, they get reflected in the DB (and can be retrieved using SQL).

Is query correct if connection was lost during it?

After establishing a remote connection to a MySQL server (using the MySQL command-line client) I started executing a very long stored procedure (I estimate that it may take longer than 7 hours), but in the middle of it I received the error:
ERROR 2013 (HY000): Lost connection to MySQL server during query
So I guess my query timed out. This procedure just stores some values in a previously empty table.
After receiving this error and re-establishing the connection to the server, I could verify that the procedure somehow continued executing. Some time later I also verified that the previously empty table now has some rows.
My question is: can I trust that the procedure's execution was correct even though the connection was lost?
First, 7 hours seems too long. If the incomplete result bothers you, you could add a save point after every N queries, combining those N queries into a single transaction. Then, every time your procedure stops unexpectedly, you can resume from the point you last saved.
I also suggest you try selecting the source data onto your local PC, producing the result with a script there, and then uploading it to the target table. This will reduce the overhead on the DB server.
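A minimal sketch of the save-point idea, assuming the procedure fills target_table from source_table by ascending id (all of these names are hypothetical):
SET @last_done = IFNULL((SELECT MAX(id) FROM target_table), 0);
-- one batch of N rows per transaction; a lost connection
-- then costs you only the current batch
START TRANSACTION;
INSERT INTO target_table (id, val)
    SELECT s.id, s.val FROM source_table s
    WHERE s.id > @last_done
    ORDER BY s.id
    LIMIT 10000;
SET @last_done = (SELECT MAX(id) FROM target_table);
COMMIT;
Repeating this loop until no rows are inserted gives you restartable progress instead of one 7-hour, all-or-nothing run.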

How do I fix the error that occurs during data transfer between SQL Server and MySQL?

I have an SSIS package that connects to a MySQL server and attempts to pull data from different tables and insert the rows into a SQL Server 2005 database.
One issue I notice is that whenever it runs, regardless of what step it is on, it almost always fails to bring the total records from MySQL into SQL Server.
There are no errors thrown.
One morning it will have all 11M records, and on the next run anywhere between 3K and 17K records.
Has anyone noticed anything like this?
I import data from two separate MySQL databases -- one over the Internet and one in-house -- and I've never had this type of issue. Can you describe how you set up your connection to the MySQL database? I used the ODBC driver available on the MySQL website and connect using an ADO.NET data source in my data flow that references the ODBC connection.
One possible way to at least prevent yourself from loading incomplete data is to load only new records. If the source table has an ID and the records never change once they are inserted, then you could feed in the maximum ID already loaded by checking your destination database first.
Another possible way to prevent loading incomplete data is to load the MySQL data into a staging table on your destination server and then only load records you haven't already loaded.
Yet another way is to load the data into a staging table, verify that the record count is greater than some minimum threshold (such as the row count of the target table, or the expected minimum number of transactions per day), and only commit the changes after this validation, as in the sketch below. If the rows are insufficient, raise an error on the package and send a notification email. The advantage of raising an error is that you can set your SQL Server Agent job to retry the step a defined number of times to see whether that resolves the issue.
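A sketch of that threshold check in T-SQL, run after filling the staging table (dbo.stgCustomers is a hypothetical staging table; the threshold rule is up to you):
-- compare the staged row count against the current target row count
DECLARE @staged INT = (SELECT COUNT(*) FROM dbo.stgCustomers);
DECLARE @minimum INT = (SELECT COUNT(*) FROM dbo.tmpCustomers);
IF @staged < @minimum
BEGIN
    -- failing the step lets the SQL Server Agent job retry it
    RAISERROR('Staged row count %d is below threshold %d; aborting load.', 16, 1, @staged, @minimum);
END
ELSE
BEGIN
    TRUNCATE TABLE dbo.tmpCustomers;
    INSERT INTO dbo.tmpCustomers SELECT * FROM dbo.stgCustomers;
END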
Hope these tips help even if they don't directly address the root cause of your problem.
I've only tried MySQL -> SQL Server via SSIS once, but the error I found was related to MySQL datetimes not converting to SQL Server datetimes. I would have thought this would break the whole data flow, but depending on your configuration, could you have set it to simply ignore bad rows?