How can I skip all errors on an RDS replica instance?

I use my.cnf settings like the following on my replicas. Does an RDS instance allow such options?
slave-skip-errors = 1062,1054
replicate-ignore-db=verv_raw
replicate-ignore-table=verv.ox_session
replicate-wild-ignore-table=verv_raw.ox%
I am aware of the procedure that skips one error at a time.
CALL mysql.rds_skip_repl_error;
But what I am looking for is an option to skip all errors on slave. Is it possible in RDS environment?

I solved it by creating a MySQL event like this:
CREATE EVENT repl_error_skipper
ON SCHEDULE
EVERY 15 MINUTE
COMMENT 'Calling rds_skip_repl_error to skip replication error'
DO
CALL mysql.rds_skip_repl_error;
/* you can also add other logic here */
To set other global variables, find and change them (if they are modifiable) in the RDS parameter group; you will have to create a new parameter group and set the variable values there. Note that for the event above to fire, the MySQL event scheduler must be enabled, which on RDS is likewise done through the parameter group (event_scheduler = ON).

As mentioned, this command only skips one replication error. I wrote a PHP script to loop through it and ran it once a minute via a cron job (my replica was logjammed by a series of thousands of bad queries that went through on the main DB):
for ($i = 1; $i <= 30; $i++) {
    // A fresh connection for every call; see the note below.
    $db = new mysqli('someserver.rds.amazonaws.com', 'root', 'password');
    $res = $db->query('CALL mysql.rds_skip_repl_error;');
    if (!$res) {
        //echo 'Query failed: ' . $db->error . "\n";
        return;
    }
    //var_dump($res->fetch_assoc());
    $db->close();
    sleep(1);
}
You'll need to tinker with this a bit (not every server will tolerate only one second between calls and 30 calls per minute), but it does work, albeit in a brute-force manner. You must create a new DB connection each time you run this: the loop opens the connection, runs the call, and then closes it.

Use same mysqli prepared statement for different queries?

During some testing, a little question popped up. When I code database updates, I usually do so via callbacks written in PHP, to which I simply pass a given mysqli connection object as a function argument. Executing all queries of, for example, a three-query sequence across the same single connection proved to be much faster than closing and reopening a DB connection for each query. This also works easily with SQL transactions; the connection can be passed along to callbacks without any issues.
My question is: can you also do this with prepared statement objects? What I mean is, assuming we successfully established a $conn object representing the mysqli connection, is stuff like this legit?
function select_users( $users_id, $stmt ) {
    $sql = "SELECT username FROM users WHERE ID = ?";
    mysqli_stmt_prepare( $stmt, $sql );
    mysqli_stmt_bind_param( $stmt, "i", $users_id );
    mysqli_stmt_execute( $stmt );
    return mysqli_stmt_get_result( $stmt );
}
function select_labels( $artist, $stmt ) {
    $sql = "SELECT label FROM labels WHERE artist = ?";
    mysqli_stmt_prepare( $stmt, $sql );
    mysqli_stmt_bind_param( $stmt, "s", $artist );
    mysqli_stmt_execute( $stmt );
    return mysqli_stmt_get_result( $stmt );
}
$stmt = mysqli_stmt_init( $conn );
$users = select_users( 1, $stmt );
$rappers = select_labels( "rapperxyz", $stmt );
Or is it bad practice, and you should rather use:
$stmt_users = mysqli_stmt_init( $conn );
$stmt_rappers = mysqli_stmt_init( $conn );
$users = select_users( 1, $stmt_users );
$rappers = select_labels( "rapperxyz", $stmt_rappers );
During testing, I noticed that the method of passing a single statement object along to callbacks works for server calls where I run, say, 4 not-too-complicated DB queries via the 4 corresponding callbacks in a row.
When I do a server call with around 10 different queries, however, I sometimes (yes, only sometimes, with pretty much the same data used across the different executions, which seems like weird behavior to me) get the error "Commands out of sync; you can't run this command now", as well as some other weird errors I've never experienced, like the number of bind variables not matching the number of parameters, although they perfectly do after checking them all. The only way to fix this that I found after some research was indeed to use a different statement object for each callback. So I just wondered: should you actually ALWAYS use ONE prepared statement object for ONE query, which you then may execute N times in a row?
Yes.
The "commands out of sync" error is because MySQL protocol is not like http. You can't send requests any time you want. There is state on the server-side (i.e. mysqld) that is expecting a certain sequence of requests. This is what's known as a stateful protocol.
Compare with a protocol like FTP. You can run ls in an FTP client, but the list of files you get back depends on the current working directory. If you were sharing that FTP client connection among multiple functions in your app, you wouldn't know whether another function had changed the working directory. So you can't be sure the file list you get from ls represents the directory you thought you were in.
In MySQL too, there's state on the server side. You can have only one transaction open at a time. You can have only one query executing at a time. The MySQL client does not allow you to execute a new query while there are still rows to be fetched from an in-progress query. See Commands out of sync in the MySQL documentation on common errors.
So if you pass your statement handle around to some callback functions, how can that function know it's safe to execute the statement?
IMO, the only safe way to use a statement is to use it immediately.
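For illustration, a minimal sketch of that pattern applied to the question's select_users() example: each function prepares, executes, fully consumes, and closes its own statement before returning, so no half-finished statement ever leaves the function (table and column names are the question's own):
function select_users( $users_id, $conn ) {
    // Prepare, execute, and fully consume the statement inside one function.
    $stmt = mysqli_prepare( $conn, "SELECT username FROM users WHERE ID = ?" );
    mysqli_stmt_bind_param( $stmt, "i", $users_id );
    mysqli_stmt_execute( $stmt );
    $result = mysqli_stmt_get_result( $stmt );
    $rows = mysqli_fetch_all( $result, MYSQLI_ASSOC ); // fetch every row now
    mysqli_stmt_close( $stmt );                        // nothing left in flight
    return $rows;                                      // plain array, no server state
}

$users = select_users( 1, $conn );
Returning a plain array instead of a live result set means no unread rows are left on the connection when the next callback runs.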

Phalcon + PHPUnit + DI: Too many db connections

I'm using PHPUnit with Phalcon. In my UnitTestCase (base test class), I've set up the connection thus:
protected function setUp(\Phalcon\DiInterface $di = null, \Phalcon\Config $config = null)
{
    $dbparams = ...
    if (is_null($di)) {
        $di = new \Phalcon\DI\FactoryDefault();
    }
    $di->setShared('db', function() use ($dbparams) {
        return new \Phalcon\Db\Adapter\Pdo\Mysql($dbparams);
    });
    \Phalcon\DI::setDefault($di);
    parent::setUp($di, $this->_config);
    $this->_loaded = true;
}
I'm running into a problem, where, after a number of suites are run, I'm starting to get the following error (on every one of the test cases after a certain point):
PDOException: SQLSTATE[HY000] [1040] Too many connections
Am I doing something wrong?
So you just keep adding new connections with each test case. Since PHPUnit runs a single PHP process, none of the database connections are garbage-collected. The PHP process just keeps accumulating open connections until you exceed the database instance's max_connections value.
You can probably observe the number of connections growing if you open a session to MySQL and run SHOW PROCESSLIST from time to time.
You need to disconnect from the database in your PHPUnit tearDown() method.
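A minimal sketch of that, reusing the 'db' service registered in the setUp() above (the base-class signature here is an assumption):
protected function tearDown()
{
    // Release this test case's connection so connections don't pile up
    // in the single PHPUnit process.
    $di = \Phalcon\DI::getDefault();
    if ($di && $di->has('db')) {
        $di->get('db')->close(); // Phalcon's PDO adapter exposes close()
    }
    parent::tearDown();
}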

mysql_query() keeps running after php exits

I've noticed that if I execute a long-running MySQL query from PHP using mysql_query() (I know I'm not supposed to use that), and the PHP process then gets killed, the query continues to run on the MySQL server. This is not a persistent connection. The connection is made with:
$db = mysql_connect($host, $login, $pass, false);
$sql = 'SELECT COUNT(*) FROM `huge_table`';
$result = mysql_query($sql, $db);
For example, let's say I have a 1-billion-row table and a PHP process does this for some reason:
SELECT COUNT(*) FROM `huge_table`
And then it times out (say, because I'm running php-fpm with request_terminate_timeout = 5), so php-fpm kills the process after 5 seconds to make sure it doesn't hog things.
Even though the process is killed, the query still runs on MySQL, even far beyond wait_timeout.
Is there any way to make sure that if the PHP process exits for whatever reason, it also kills any running queries that it made?
I'm using TokuDB 5.5.38-tokudb-7.1.7-e, which is MySQL 5.5.38.
crickeys, when a PHP script starts to execute and it gets to the part where it executes a MySQL query, that query is handed over to MySQL. Control of the query is no longer in PHP's hands; PHP at that point is only waiting for a response from MySQL so it can proceed. Killing the PHP script doesn't affect the MySQL query because, well, the query is MySQL's business.
Put another way, PHP comes to the door, knocks, hands over the goods and waits for you to bring back a response so he can be on his way. Shooting him won't affect what's going on behind the door.
You could run something like this to retrieve the longest running processes and kill them:
<?php
// Kill any process that has been running longer than the allowed time.
$max_execution_time = 300; // seconds; pick a threshold that suits your workload

$con = mysqli_connect("example.com", "peter", "pass", "my_db");
// Check connection
if (mysqli_connect_errno()) {
    echo "Failed to connect to MySQL: " . mysqli_connect_error();
}

$result = mysqli_query($con, "SHOW FULL PROCESSLIST");
while ($row = mysqli_fetch_array($result)) {
    if ($row["Time"] > $max_execution_time) {
        // KILL ends the whole connection; use KILL QUERY to keep it alive.
        mysqli_query($con, "KILL " . $row["Id"]);
    }
}
mysqli_close($con);
?>
Well, you can use a destructor to call the mysql_close() function.
I hope I understood your question...
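For what it's worth, a minimal sketch of that idea in the question's deprecated mysql_* style (the DbGuard class name is made up here). Bear in mind that closing the client link does not necessarily abort the statement on the server right away; the KILL QUERY approach below is more direct.
class DbGuard {
    private $db;
    public function __construct($db) {
        $this->db = $db;
    }
    public function __destruct() {
        // Runs when the object goes out of scope or the script shuts down.
        if ($this->db) {
            mysql_close($this->db);
        }
    }
}

$guard = new DbGuard($db); // $db from mysql_connect() as in the question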
You can use KILL.
KILL CONNECTION is the same as KILL with no modifier: It terminates the connection associated with the given thread_id.
KILL QUERY terminates the statement that the connection is currently executing, but leaves the connection itself intact.
You should KILL QUERY in the shutdown event, and then do a mysqli_close().
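A rough sketch of that approach, assuming mysqli and placeholder credentials. Note that a shutdown function runs on a normal or fatal-error shutdown, but not when the process is hard-killed:
// Remember the worker connection's server-side thread id.
$db = mysqli_connect('localhost', 'user', 'pass', 'my_db');
$threadId = $db->thread_id;

register_shutdown_function(function () use ($threadId) {
    // The original connection is busy with the query, so open a second one.
    $killer = mysqli_connect('localhost', 'user', 'pass', 'my_db');
    if ($killer) {
        mysqli_query($killer, 'KILL QUERY ' . (int) $threadId);
        mysqli_close($killer);
    }
});

$result = mysqli_query($db, 'SELECT COUNT(*) FROM `huge_table`'); // the long query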
You might get some valuable information from this question about timeouts: Client times out, while MySQL query remains running?

Should I avoid establishing a MySQL Connection (when using Memcache) or is establishing the connection anyway the way to go?

So I am just getting started with Memcache. I am ready to write some code, but I have an optimization question:
I wonder whether I should delay establishing a MySQL connection as long as possible (and maybe not establish one at all when everything can be read from Memcache), OR establish it anyway to save coding time, on the thought that it is not the connection but the actual queries that make my server's CPU go crazy.
So, I have to choose between these two code examples:
1 - Connect to MySQL anyway
$memcache = new Memcache;
$memcache->connect('localhost', 11211) or die ("MEMCACHE: Could not connect!");

$db = mysql_connect('localhost', 'user', 'password') or die ("MySQL: Could not connect!");
mysql_select_db('database');

$sql = "SELECT id FROM table LIMIT 1";
$key = md5('query'.$sql);

//lookup value in memcache
$result = $memcache->get($key);

//check if we got something back
if($result == null) {
    //fetch from database
    $qry = mysql_query($sql) or die(mysql_error()." : $sql");
    if(mysql_num_rows($qry) > 0) {
        $result = mysql_fetch_object($qry);
        //store in memcache for 60 seconds
        $memcache->set($key, $result, 0, 60);
    }
}
2 - Connect to MySQL as soon as it is needed
$memcache = new Memcache;
$memcache->connect('localhost', 11211) or die ("MEMCACHE: Could not connect!");

$sql = "SELECT id FROM table LIMIT 1";
$key = md5('query'.$sql);

//lookup value in memcache
$result = $memcache->get($key);

//check if we got something back
if($result == null) {
    if(!isset($db)) {
        $db = mysql_connect('localhost', 'user', 'password') or die ("MySQL: Could not connect!");
        mysql_select_db('database');
    }
    //fetch from database
    $qry = mysql_query($sql) or die(mysql_error()." : $sql");
    if(mysql_num_rows($qry) > 0) {
        $result = mysql_fetch_object($qry);
        //store in memcache for 60 seconds
        $memcache->set($key, $result, 0, 60);
    }
}
The way to go is to connect to MySQL (and other services) only when you need them. That way you reduce the resources your app needs, in this case network connections, and you don't put load on the DB server.
General rule of thumb: use a resource only when you need it.
If speed is important, get the connection up front. It's not a big resource to keep open, but it can take a while to establish.
Also, by connecting early you know at app startup if there's a problem (e.g. the database server is down), and you get confirmation that everything is good to go, rather than discovering it some time later, when you could have known earlier and fixed the problem before it became one.
You might want to go further and run a heartbeat query to assert that the database is still there, for similar reasons.
Note that this approach makes the database required to be up for your app to be up. You can do something in between: get the connection at startup, but if it's not available, fall back to a just-in-time approach, which gives you more flexibility. This is what I'd do.
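As an illustration, a minimal sketch of the just-in-time piece, in the question's own mysql_* style with placeholder credentials:
function get_db() {
    static $db = null;
    if ($db === null) {
        // Connect lazily, only the first time a query actually needs the DB.
        $db = mysql_connect('localhost', 'user', 'password')
            or die("MySQL: Could not connect!");
        mysql_select_db('database');
    }
    return $db;
}
Every code path that needs the database calls get_db() instead of touching a global connection, so pages served entirely from Memcache never connect at all.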
I think it depends on concurrency, but caching the connections in a thread-safe pool is better.
In many web applications, database connections are established and kept in a thread-safe pool, because establishing a connection is expensive.
Getting data from memcached rather than directly from the database makes sense because it is faster and can serve many concurrent threads.
Looking at the code you provided (usual last-century-style spaghetti), I'd vote for the first one.
Whatever logic you add to the program flow will make your code more complex by a factor of ten. So, better leave it as plain as possible.
I would even advise not to use memcache at all until you learn how to separate and encapsulate different concerns.
Especially because there is no point in caching data you can get from the db by primary key.

Analyze execution time of queries on MySQL database?

I am using a MySQL database with phpMyAdmin as the frontend (I am not sure I have remote/client access yet). I have a script that queries the database, and I would like to see how long each query takes. What is the easiest way to do this? Could I install another PHP app on the server?
If you have access to the MySQL config files, you can enable the general query log and the slow query log.
See the MySQL documentation on the slow query log for details.
Since 5.1.6, I think, you can also do it at runtime:
SET GLOBAL long_query_time = 10; /* which queries are considered long */
SET GLOBAL slow_query_log = 1;
Try it anyway; I don't remember the exact version in which it appeared.
For most applications that I work on, I include query profiling output that can easily be turned on in the development environment. This outputs the SQL, execution time, stack trace and a link to display explain output. It also highlights queries running longer than 1 second.
Although you probably don't need something as sophisticated, you can get a fairly good sense of the run time of queries by writing a PHP function that wraps the query execution and stores debug information in the session (or simply outputs it). For example:
function run_query($sql, $debug = false, $output = false) {
    $start = microtime(true);
    $q = mysql_query($sql);
    $time = microtime(true) - $start;
    if ($debug) {
        $msg = "$sql<br/>$time<br/><br/>";
        if ($output) {
            print $msg;
        } else {
            // Accumulate the debug lines in the session for later display.
            if (!isset($_SESSION['sql_debug'])) {
                $_SESSION['sql_debug'] = '';
            }
            $_SESSION['sql_debug'] .= $msg;
        }
    }
    return $q;
}
That's just kind of a rough idea. You can tweak it however you want.
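For example, a hypothetical call site (the table name is made up) that prints the timing inline:
// Time one query and print "$sql<br/>$time" immediately (debug and output on).
$qry = run_query("SELECT id FROM some_table LIMIT 1", true, true);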
Hope that helps -
You should set long_query_time to 1, since setting it to 10 will exclude most if not all of your queries.
At the mysql prompt, type:
SET GLOBAL long_query_time = 1;