Google Cloud functions + SQL Broken Pipe error - mysql

I have various Google Cloud Functions which write to and read from a Cloud SQL database (MySQL). The processes work; however, when the functions happen to run at the same time, I get a broken pipe error. I am using SQLAlchemy with Python, the processes are Cloud Functions, and the database is a Google Cloud SQL instance. I have seen suggested solutions that involve raising the timeout values. I was wondering if this would be a good approach or if there is a better one? Thanks for your help in advance.
Here's the SQL broken pipe error:
(pymysql.err.OperationalError) (2006, "MySQL server has gone away (BrokenPipeError(32, 'Broken pipe'))")
(Background on this error at: http://sqlalche.me/e/13/e3q8)
Here are the MySQL timeout values:
show variables like '%timeout%';
+-------------------------------------------+----------+
| Variable_name                             | Value    |
+-------------------------------------------+----------+
| connect_timeout                           | 10       |
| delayed_insert_timeout                    | 300      |
| have_statement_timeout                    | YES      |
| innodb_flush_log_at_timeout               | 1        |
| innodb_lock_wait_timeout                  | 50       |
| innodb_rollback_on_timeout                | OFF      |
| interactive_timeout                       | 28800    |
| lock_wait_timeout                         | 31536000 |
| net_read_timeout                          | 30       |
| net_write_timeout                         | 60       |
| rpl_semi_sync_master_async_notify_timeout | 5000000  |
| rpl_semi_sync_master_timeout              | 3000     |
| rpl_stop_slave_timeout                    | 31536000 |
| slave_net_timeout                         | 30       |
| wait_timeout                              | 28800    |
+-------------------------------------------+----------+
15 rows in set (0.01 sec)

If you cache your connection for performance, it's normal to lose it after a while. To prevent this, you have to handle disconnects (see the sketch below).
In addition, because you are working with Cloud Functions, only one request can be handled at a time on one instance (if you have 2 concurrent requests, you will get 2 instances). Thus, set your pool size to 1 to save resources on your database side (in case of heavy parallelization).
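A minimal sketch of that setup, assuming SQLAlchemy with the pymysql driver; the connection URL and pool numbers below are placeholders to adapt, not the poster's actual configuration:

import sqlalchemy

# pool_pre_ping tests the connection with a lightweight ping before each checkout
# and reconnects transparently if the server dropped it, which avoids the
# "MySQL server has gone away" / broken pipe error on reused connections.
engine = sqlalchemy.create_engine(
    "mysql+pymysql://user:password@/dbname?unix_socket=/cloudsql/PROJECT:REGION:INSTANCE",  # placeholder URL
    pool_size=1,          # one connection per Cloud Function instance
    max_overflow=0,       # never open extra connections beyond the pool
    pool_pre_ping=True,   # recover from stale/broken connections
    pool_recycle=1800,    # proactively recycle connections older than 30 minutes
)

with engine.connect() as conn:
    conn.execute(sqlalchemy.text("SELECT 1"))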

Related

Connection issue on csv data download from RDS via MySQL CLI

I need to download locally (macOS BigSur) a large dataset (millions of rows) from an AWS RDS database (MySQL 5.7).
Thanks to this great post, I am able to connect and download some data into a CSV file on my machine:
mysql --host=$HOST --user $USER --password=$PASSWORD --database=$DATABASE --port=$PORT --batch \
--quick -e "$QUERY" \
| sed $'s/\\t/","/g;s/^/"/;s/$/"/;s/\\n//g' > $FILE_PATH
However, if I extend my query to thousands of records, after a few seconds the process stops and the CSV ends up truncated (literally, the last written row is cut off halfway), so I assume there is some kind of stream, timeout, or buffer issue.
mysql> SHOW VARIABLES LIKE '%timeout';
+-----------------------------+----------+
| Variable_name               | Value    |
+-----------------------------+----------+
| connect_timeout             | 10       |
| delayed_insert_timeout      | 300      |
| have_statement_timeout      | YES      |
| innodb_flush_log_at_timeout | 1        |
| innodb_lock_wait_timeout    | 50       |
| innodb_rollback_on_timeout  | OFF      |
| interactive_timeout         | 28800    |
| lock_wait_timeout           | 31536000 |
| net_read_timeout            | 30       |
| net_write_timeout           | 60       |
| rpl_stop_slave_timeout      | 31536000 |
| slave_net_timeout           | 60       |
| wait_timeout                | 28800    |
+-----------------------------+----------+
Since the command stops after ~10 seconds, I assume it depends on the connect_timeout value. However, I tried setting it with SET @@GLOBAL.connect_timeout=7200 but I get a permission error. I also tried adding the --connect-timeout=7200 parameter to the command, but it does not work (which puzzles me the most).
Running the query (limited to ~250k rows) in my client (SequelAce), it runs fine, so I can exclude issues with the data or the SQL script itself.
Any ideas or suggestions?
Are there better tools for the job maybe?
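Not part of the original post, but one possible alternative, sketched in Python: stream the rows with an unbuffered (server-side) cursor and write the CSV incrementally, so neither the client nor the connection has to hold the whole result in memory. All connection details and the query below are placeholders:

import csv
import pymysql
import pymysql.cursors

# SSCursor streams rows as they arrive instead of buffering the full result set.
conn = pymysql.connect(
    host="HOST", user="USER", password="PASSWORD", database="DATABASE", port=3306,
    cursorclass=pymysql.cursors.SSCursor,
    read_timeout=600,  # client-side read timeout in seconds; adjust as needed
)

QUERY = "SELECT * FROM my_table"  # placeholder query
cur = conn.cursor()
cur.execute(QUERY)
with open("export.csv", "w", newline="") as f:
    writer = csv.writer(f, quoting=csv.QUOTE_ALL)
    writer.writerow(col[0] for col in cur.description)  # header row from column metadata
    for row in cur:
        writer.writerow(row)
cur.close()
conn.close()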

Are "cleaned up" MySQL connections from a connection pool safe to delete?

Consider following list of connections:
+----------+---------+------+------------------------+
| ID       | COMMAND | TIME | STATE                  |
+----------+---------+------+------------------------+
| 87997796 | Sleep   | 15   | cleaned up             |
| 90850182 | Sleep   | 105  | cleaned up             |
| 88009697 | Sleep   | 38   | delayed commit ok done |
| 88000267 | Sleep   | 6    | delayed commit ok done |
| 88009819 | Sleep   | 38   | delayed commit ok done |
| 90634882 | Sleep   | 21   | cleaned up             |
| 90634878 | Sleep   | 21   | cleaned up             |
| 90634884 | Sleep   | 21   | cleaned up             |
| 90634875 | Sleep   | 21   | cleaned up             |
+----------+---------+------+------------------------+
After a short time (under a minute):
+----------+---------+------+------------------------+
| ID       | COMMAND | TIME | STATE                  |
+----------+---------+------+------------------------+
| 87997796 | Sleep   | 9    | cleaned up             |
| 88009697 | Sleep   | 32   | delayed commit ok done |
| 88000267 | Sleep   | 9    | delayed commit ok done |
| 88009819 | Sleep   | 31   | delayed commit ok done |
| 90634882 | Sleep   | 14   | cleaned up             |
| 90634878 | Sleep   | 14   | cleaned up             |
| 90634884 | Sleep   | 14   | cleaned up             |
| 90634875 | Sleep   | 14   | cleaned up             |
+----------+---------+------+------------------------+
8 rows in set (0.02 sec)
After I finished writing this Stack Overflow post:
+----------+---------+------+------------------------+
| ID       | COMMAND | TIME | STATE                  |
+----------+---------+------+------------------------+
| 87997796 | Sleep   | 0    | cleaned up             |
| 88009697 | Sleep   | 53   | delayed commit ok done |
| 88000267 | Sleep   | 0    | delayed commit ok done |
| 88009819 | Sleep   | 52   | delayed commit ok done |
| 90634882 | Sleep   | 5    | cleaned up             |
| 90634878 | Sleep   | 5    | cleaned up             |
| 90634884 | Sleep   | 5    | cleaned up             |
| 90634875 | Sleep   | 5    | cleaned up             |
+----------+---------+------+------------------------+
Context:
This is some 3rd-party vendor app opening connections (the source code isn't available to us, so we don't know the details). We know that their connection management is awful; they know it as well. It is awful because connections leak, which you can see in the first table (ID 90850182): while the others have their timers reset, this one ages indefinitely. In older versions of the app it would stay forever. In newer versions it is eventually captured by a "patch" the vendor introduced, which effectively cleans connections after the x seconds you specify. So it's "a leak-healing patch".
The problem:
We are hosting hundreds of such vendor apps and most of them have far more than 8 connections, as they have more traffic. That results in a disgusting number (thousands) of connections we have to maintain. About 80% of the connections sit in the "cleaned up" state for under 120 seconds (cleaned eventually by the aforementioned configurable app parameter).
This is all handled by Aurora RDS, and AWS engineers told us that if the app doesn't close connections properly, the standard "wait_timeout" isn't going to work. Well, "wait_timeout" becomes a useless decoration in AWS Aurora, but let us take that up with Jeff in another thread/topic.
So regardless, we have this magic configurable parameter from the third-party vendor set on this obscure app, which controls eviction of stale connections, and it works.
The questions:
Is it safe to evict connections which are in "cleaned up" state immediately?
At the moment this happens after 120 seconds, which results in a huge number of such connections. Yet in the tables above you can see that the timers are reset, meaning that something is happening to these connections and they are not entirely stale. I.e. does the app's connection pooling "touch" them for further re-use?
I don't possess knowledge of connection pools' inner guts, or of how they appear from within the database. Are all reserved connections of a connection pool by default "sleeping" in the "cleaned up" state?
So, say, if you start cleaning too aggressively, will you end up fighting the connection pool as it creates more connections to replenish the ones you kill?
Or do reserved connections have some different state?
Even if you don't fully understand the context, I'd expect a veteran DBA or a connection pool library maintainer to be able to help with such questions. Otherwise I'll get my hands dirty and answer this myself eventually: I would try an Apache connection pool and HikariCP, observe them, try killing their idle connections (simulating the magic parameter), and try this 3rd-party app's connections with a 0-second magic parameter to see if it still works.
Appreciate your time :bow:.
The Answer
Yes, from AWS forum (https://forums.aws.amazon.com/thread.jspa?messageID=708499)
In Aurora the 'cleaned up' state is the final state of a connection
whose work is complete but which has not been closed from the client
side. In MySQL this field is left blank (no State) in the same
circumstance.
Also from the same post:
Ultimately, explicitly closing the connection in code is the best
solution here
From my personal experience as a MySQL DBA, and knowing that "cleaned up" represents a blank state, I'd definitely kill those connections.
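As a hedged illustration of the quoted advice ("explicitly closing the connection in code is the best solution"), in Python with pymysql; credentials and the query are placeholders, not taken from the vendor app:

import pymysql

conn = pymysql.connect(host="HOST", user="USER", password="PASSWORD", database="DATABASE")
try:
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
        print(cur.fetchone())
finally:
    conn.close()  # without an explicit close, the session can linger in "cleaned up" until evicted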

What is the meaning of different mysql timeouts

I searched a lot on the internet but have not found any brief explanation of MySQL timeouts with examples. I want to know the meaning of the different MySQL timeouts listed below, and also why and when we use them.
+--------------------------+-------+
| Variable_name            | Value |
+--------------------------+-------+
| connect_timeout          | 10    |
| delayed_insert_timeout   | 300   |
| innodb_lock_wait_timeout | 50    |
| interactive_timeout      | 28800 |
| net_read_timeout         | 3     |
| net_write_timeout        | 60    |
| slave_net_timeout        | 3600  |
| wait_timeout             | 28800 |
+--------------------------+-------+
Also, in a Ruby on Rails application I can set read_timeout in my database.yml file. If a query is not able to read the data within the specified read_timeout value, MySQL will close the connection. So I also want to know: what is the difference between net_read_timeout and read_timeout?
Thanks,
From The Ultimate Guide to Ruby Timeouts
connect (or open) - time to open the connection
read (or receive) - time to receive data after connected
write (or send) - time to send data after connected
checkout - time to checkout a connection from the pool
statement - time to execute a database statement
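The Rails read_timeout is a client-side driver setting, while net_read_timeout lives on the server. A rough Python analogue (pymysql), with placeholder values and credentials, just to show where each side is configured:

import pymysql

conn = pymysql.connect(
    host="HOST", user="USER", password="PASSWORD", database="DATABASE",
    connect_timeout=10,  # client-side: time allowed to open the connection
    read_timeout=30,     # client-side: time allowed to receive a query result
    write_timeout=30,    # client-side: time allowed to send data
)
with conn.cursor() as cur:
    # server-side counterpart, scoped to this session
    cur.execute("SET SESSION net_read_timeout = 30")
conn.close()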

MySQL Queries taking too long to load

I have a database table with 300,000 rows, 113.7 MB in size. My database runs on Ubuntu 13.10 with 8 cores and 8 GB of RAM. As things are now, the MySQL server uses an average of 750% CPU and 6.5% MEM (results obtained by running top in the CLI). Also of note, it runs on the same server as the Apache2 web server.
Here's what I get on the Mem line:
Mem: 8141292k total, 6938244k used, 1203048k free, 211396k buffers
When I run: show processlist; I get something like this in return:
2098812 | admin | localhost | phpb | Query | 12 | Sending data | SELECT * FROM items WHERE thumb = 'Halloween 2013 Horns/thumbs/Halloween 2013 Horns (Original).png'
2098813 | admin | localhost | phpb | Query | 12 | Sending data | SELECT * FROM items WHERE thumb = 'Halloween 2013 Witch Hat/thumbs/Halloween 2013 Witch Hat (Origina
2098814 | admin | localhost | phpb | Query | 12 | Sending data | SELECT * FROM items WHERE thumb = 'Halloween 2013 Blouse/thumbs/Halloween 2013 Blouse (Original).png
2098818 | admin | localhost | phpb | Query | 11 | Sending data | SELECT * FROM items WHERE parent = 210162 OR auto = 210162
Some queries are taking in excess of 10 seconds to execute. These are not at the top of the list but somewhere in the middle, just to give some perspective on how many queries are stacking up. I feel it may have something to do with my query cache configuration. Here is what SHOW STATUS LIKE 'Qc%'; returns:
+-------------------------+----------+
| Variable_name           | Value    |
+-------------------------+----------+
| Qcache_free_blocks      | 434      |
| Qcache_free_memory      | 2037880  |
| Qcache_hits             | 62580686 |
| Qcache_inserts          | 10865474 |
| Qcache_lowmem_prunes    | 4157011  |
| Qcache_not_cached       | 3140518  |
| Qcache_queries_in_cache | 1260     |
| Qcache_total_blocks     | 4440     |
+-------------------------+----------+
I noticed that the Qcache_lowmem_prunes seem a bit high, is this normal?
I've been searching around StackOverflow, but I couldn't find anything that would solve my problem. Any help with this would be greatly appreciated, thank you!
This is probably one for http://dba.stackexchange.com. That said...
Why are your queries running slow? Do they return a large result set, or are they just really complex?
Have you tried running one of these queries using EXPLAIN SELECT column FROM ...?
Are you using indexes correctly?
How have you configured MySQL in your my.cnf file?
What table types are you using?
Are you getting any errors?
Edit: Okay, looking at your query examples: what data type is items.thumb? VARCHAR, TEXT? Is it not at all possible to query this table by some method other than literal text matching (e.g. an ID number)? Does this column have an index?
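To make that suggestion concrete, here is a hedged sketch (the schema is only inferred from the queries above, and the password is a placeholder) of checking the query plan and adding a prefix index on items.thumb:

import pymysql

conn = pymysql.connect(host="localhost", user="admin", password="PASSWORD", database="phpb")
with conn.cursor() as cur:
    # EXPLAIN shows whether the lookup scans the whole table (type = ALL) or uses an index.
    cur.execute(
        "EXPLAIN SELECT * FROM items WHERE thumb = %s",
        ("Halloween 2013 Horns/thumbs/Halloween 2013 Horns (Original).png",),
    )
    print(cur.fetchall())

    # A prefix index keeps the key small if thumb is a long VARCHAR/TEXT column.
    cur.execute("CREATE INDEX idx_items_thumb ON items (thumb(100))")
conn.close()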

How to delete sleep process in Mysql

I found that my MySQL server has many connections that are sleeping. I want to delete them all.
So how can I configure my MySQL server so that it deletes or disposes of connections that are sleeping and not currently processing anything?
Is this possible to do in MySQL? Tell me how I can do the following:
a connection allows only one data reader to be open at a time, and the connection [process] is destroyed after the response to the query is returned.
If you want to do it manually, you can do it like this:
Log in to MySQL as admin:
mysql -uroot -ppassword
And then run the command:
mysql> show processlist;
You will get something like below:
+----+-------------+--------------------+----------+---------+------+-------+------------------+
| Id | User        | Host               | db       | Command | Time | State | Info             |
+----+-------------+--------------------+----------+---------+------+-------+------------------+
| 49 | application | 192.168.44.1:51718 | XXXXXXXX | Sleep   | 183  |       | NULL             |
| 55 | application | 192.168.44.1:51769 | XXXXXXXX | Sleep   | 148  |       | NULL             |
| 56 | application | 192.168.44.1:51770 | XXXXXXXX | Sleep   | 148  |       | NULL             |
| 57 | application | 192.168.44.1:51771 | XXXXXXXX | Sleep   | 148  |       | NULL             |
| 58 | application | 192.168.44.1:51968 | XXXXXXXX | Sleep   | 11   |       | NULL             |
| 59 | root        | localhost          | NULL     | Query   | 0    | NULL  | show processlist |
+----+-------------+--------------------+----------+---------+------+-------+------------------+
You will see complete details of different connections. Now you can kill the sleeping connection as below:
mysql> kill 52;
Query OK, 0 rows affected (0.00 sec)
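Since the question asks about clearing all sleeping connections at once, here is a hedged sketch of doing that in bulk from Python rather than killing IDs one by one; it requires the PROCESS privilege, and the credentials and threshold are placeholders:

import pymysql

MAX_SLEEP_SECONDS = 120  # kill anything sleeping longer than this

conn = pymysql.connect(host="localhost", user="root", password="password")
with conn.cursor() as cur:
    cur.execute(
        "SELECT id FROM information_schema.processlist "
        "WHERE command = 'Sleep' AND time > %s",
        (MAX_SLEEP_SECONDS,),
    )
    for (thread_id,) in cur.fetchall():
        cur.execute(f"KILL {int(thread_id)}")  # terminate the sleeping session
conn.close()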
Why would you want to delete a sleeping thread? MySQL creates threads for connection requests, and when the client disconnects the thread is put back into the cache and waits for another connection.
This reduces a lot of overhead of creating threads 'on-demand', and it's nothing to worry about. A sleeping thread uses about 256k of memory.
You can find all working processes by executing this SQL:
show processlist;
You will find the sleeping processes there. If you want to terminate one, note its process id and execute this SQL:
kill processid;
but actually you can set a timeout variable in my.cnf:
wait_timeout=15
connect_timeout=10
interactive_timeout=100
For me, with MySQL server on Windows,
I updated this file (because I cannot set the variables with an SQL request due to privileges):
D:\MySQL\mysql-5.6.48-winx64\my.ini
add the lines:
wait_timeout=61
interactive_timeout=61
Restart the service, and confirm the new values with:
SHOW VARIABLES LIKE '%_timeout';
==> I did a connection test, and after 1 minute all 10+ sleeping connections had disappeared!