My CodeIgniter model code is below:
$last_qt_pk_id = "";
$i = 0;
foreach ($query->result() as $row) {
    $data[] = $row;
    $rq_pk_id = $row->id;
    $question_type_pk_id = $row->question_type_pk_id;
    if ($question_type_pk_id != $last_qt_pk_id) {
        $i = 1;
    } else {
        ++$i;
    }
    $up_data = array('receive_serial_ato_qt' => $i);
    $this->db->where('id', $rq_pk_id);
    $this->db->update('qms_received_question_info', $up_data);
    $last_qt_pk_id = $question_type_pk_id;
}
The code above first performs the updates, then the question list is displayed in the view.
The code works fine most of the time, but sometimes an error appears.
My error is shown in the image below.
I cannot understand why the error shows up only sometimes. If I restart XAMPP it works for a while, but after some time the error appears again. I have tried both WAMP and XAMPP, and the problem is the same.
Please, any help?
This error is MySQL not behaving as it should. The best solution would be to forget about MySQL and migrate to another SQL database such as PostgreSQL, but if that is not an option, maybe the following will help you:
Performance blog post
SQL Server - fix
Also consider increasing the innodb_lock_wait_timeout value, and stop using Active Record when working with transactions.
Personally, I don't use Active Record, as it takes a lot of memory and slows down performance.
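For example, the timeout can be raised at runtime (a sketch; the value 120 is just an illustrative choice, and the GLOBAL form requires the SUPER privilege and only affects connections opened after the change):

```sql
-- Raise the lock wait timeout server-wide (default is 50 seconds)
SET GLOBAL innodb_lock_wait_timeout = 120;

-- Or raise it for the current session only
SET innodb_lock_wait_timeout = 120;
```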
I use certain my.cnf settings like this. Does an RDS instance allow such options?
slave-skip-errors = 1062,1054
replicate-ignore-db=verv_raw
replicate-ignore-table=verv.ox_session
replicate-wild-ignore-table=verv_raw.ox%
I am aware of the procedure that skips one error at a time.
CALL mysql.rds_skip_repl_error;
But what I am looking for is an option to skip all errors on slave. Is it possible in RDS environment?
I solved it by creating a MySQL event scheduler like this:
CREATE EVENT repl_error_skipper
ON SCHEDULE EVERY 15 MINUTE
COMMENT 'Calling rds_skip_repl_error to skip replication error'
DO
CALL mysql.rds_skip_repl_error;
/* you can also add other logic here */
To set other global variables, find and set them (if they are changeable) in the RDS parameter group (you will have to create a new parameter group and set the variable values there).
As mentioned, this command only skips one replication error at a time. I wrote a PHP script that loops over it and ran it once a minute via a cron job (my replica was log-jammed with a series of thousands of bad queries that went through on the main DB):
for ($i = 1; $i <= 30; $i++) {
    // A fresh connection is required for each call
    $db = new mysqli('someserver.rds.amazonaws.com', 'root', 'password');
    $res = $db->query('CALL mysql.rds_skip_repl_error;');
    if (!$res) {
        //echo 'Query failed: ' . $db->error . "\n";
        return;
    }
    //var_dump($res->fetch_assoc());
    $db->close();
    sleep(1);
}
You'll need to tinker with this a bit (not every server will tolerate only one second between calls and 30 calls per minute), but it does work, albeit in a brute-force manner. You must create a new DB connection every time you run the call: the loop opens the connection, runs the call, and then closes the connection.
I've noticed that if I execute a long-running MySQL query from PHP using mysql_query() (I know I'm not supposed to use that) and the PHP process then gets killed, the query continues to run on the MySQL server. This is not a persistent connection. The connection is made with:
$db = mysql_connect($host, $login, $pass, false);
$sql = 'SELECT COUNT(*) FROM `huge_table`';
$result = mysql_query($sql, $db);
For example, let's say I have a 1 billion row table and a php process does this for some reason:
SELECT COUNT(*) FROM `huge_table`
And then it times out (say because I'm running php-fpm with request_terminate_timeout=5), so it kills the process after 5 seconds to make sure it doesn't hog things.
Even though the process is killed, the query still runs on MySQL, long after wait_timeout.
Is there any way to make sure that if the PHP process exits for whatever reason, it also kills any running queries that it made?
I'm using TokuDB 5.5.38-tokudb-7.1.7-e, which is MySQL 5.5.38.
crickeys, when a PHP script starts to execute and it gets to the part where it executes a MySQL query, that query is handed over to MySQL. Control of the query is no longer in PHP's hands; at that point PHP is only waiting for a response from MySQL so it can proceed. Killing the PHP script doesn't affect the MySQL query because, well, the query is MySQL's business.
Put another way, PHP comes to the door, knocks, hands over the goods and waits for you to bring back a response so he can be on his way. Shooting him won't affect what's going on behind the door.
You could run something like this to retrieve the longest running processes and kill them:
<?php
// Maximum run time in seconds before a query is killed
$max_execution_time = 60;

$con = mysqli_connect("example.com", "peter", "pass", "my_db");
// Check connection
if (mysqli_connect_errno()) {
    echo "Failed to connect to MySQL: " . mysqli_connect_error();
}

$result = mysqli_query($con, "SHOW FULL PROCESSLIST");
while ($row = mysqli_fetch_array($result)) {
    if ($row["Time"] > $max_execution_time) {
        mysqli_query($con, "KILL " . $row["Id"]);
    }
}
mysqli_close($con);
?>
Well, you can use a destructor to call the mysql_close() function.
I hope I understood your question...
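For instance, a minimal sketch using the old mysql_* API from the question (the class name DbHandle is just an illustration):

```php
<?php
// Wrap the connection in an object whose destructor closes it.
// __destruct() runs when the object goes out of scope or when the
// script shuts down normally.
class DbHandle
{
    private $link;

    public function __construct($host, $user, $pass)
    {
        $this->link = mysql_connect($host, $user, $pass);
    }

    public function link()
    {
        return $this->link;
    }

    public function __destruct()
    {
        if ($this->link) {
            mysql_close($this->link);
        }
    }
}
```

Keep in mind that if the PHP process is killed outright (e.g. SIGKILL), destructors never run at all, and closing the client connection does not necessarily terminate a statement the server is still executing.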
You can use KILL.
KILL CONNECTION is the same as KILL with no modifier: It terminates the connection associated with the given thread_id.
KILL QUERY terminates the statement that the connection is currently executing, but leaves the connection itself intact.
You should KILL QUERY in the shutdown event, and then do a mysqli_close().
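A rough sketch of that idea (hypothetical host and credentials; a second connection is needed to issue the KILL, since the first one is blocked waiting on the query):

```php
<?php
$db = new mysqli('localhost', 'user', 'password', 'my_db');
$threadId = $db->thread_id; // server-side id of this connection

register_shutdown_function(function () use ($threadId) {
    // Open a separate connection: the original one is still busy
    // running the long query, so it cannot issue the KILL itself.
    $killer = @new mysqli('localhost', 'user', 'password', 'my_db');
    if (!$killer->connect_errno) {
        // Terminate the running statement but leave the original
        // connection itself intact
        $killer->query('KILL QUERY ' . (int) $threadId);
        $killer->close();
    }
});

$db->query('SELECT COUNT(*) FROM `huge_table`');
$db->close();
```

Note that shutdown functions run on fatal errors and normal termination, but not if the process is killed with SIGKILL.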
You might get some valuable information from this question about timeouts: Client times out, while MySQL query remains running?
I'm about to pull my hair out... why won't this run?
Mage::getSingleton('core/resource')->getConnection('core_write')->query('DELETE FROM catalog_product_super_attribute WHERE product_id = 46');
When I run that query via the command line, or even phpMyAdmin, it executes just fine, deleting all the rows. HOWEVER, when I try to run it from code it doesn't work. I even tried bypassing Magento, hoping the problem was there, BUT when I tried to delete using straight mysql or mysqli in PHP it wouldn't work either.
Any thoughts or suggestions would be AWESOME.
$transaction = Mage::getSingleton('core/resource')->getConnection('core_write');
try {
    $transaction->beginTransaction();
    $transaction->query('DELETE FROM catalog_product_super_attribute WHERE product_id = 46');
    $transaction->commit();
} catch (Exception $e) {
    $transaction->rollBack(); // if anything goes wrong, this will undo all changes you made to your database
}
So I am just getting started with Memcache. I am ready to write some code, but I have an optimization question:
I wonder whether I should delay establishing a MySQL connection as long as possible (and maybe not establish one at all when everything can be read from Memcache), OR establish it anyway to save myself coding time, on the theory that it is not the connection but the actual queries that make my server's CPU go crazy.
So, I have to choose between these two code examples:
1 - Connect to MySQL anyway
$memcache = new Memcache;
$memcache->connect('localhost', 11211) or die("MEMCACHE: Could not connect!");

$db = mysql_connect('localhost', 'user', 'password') or die("MySQL: Could not connect!");
mysql_select_db('database');

$sql = "SELECT id FROM table LIMIT 1";
$key = md5('query' . $sql);

// lookup value in memcache
$result = $memcache->get($key);

// check if we got something back
if ($result == null) {
    // fetch from database
    $qry = mysql_query($sql) or die(mysql_error() . " : $sql");
    if (mysql_num_rows($qry) > 0) {
        $result = mysql_fetch_object($qry);
        // store in memcache for 60 seconds
        $memcache->set($key, $result, 0, 60);
    }
}
2 - Connect to MySQL as soon as it is needed
$memcache = new Memcache;
$memcache->connect('localhost', 11211) or die("MEMCACHE: Could not connect!");

$sql = "SELECT id FROM table LIMIT 1";
$key = md5('query' . $sql);

// lookup value in memcache
$result = $memcache->get($key);

// check if we got something back
if ($result == null) {
    if (!isset($db)) {
        $db = mysql_connect('localhost', 'user', 'password') or die("MySQL: Could not connect!");
        mysql_select_db('database');
    }
    // fetch from database
    $qry = mysql_query($sql) or die(mysql_error() . " : $sql");
    if (mysql_num_rows($qry) > 0) {
        $result = mysql_fetch_object($qry);
        // store in memcache for 60 seconds
        $memcache->set($key, $result, 0, 60);
    }
}
The way to go is to connect to MySQL (and other services) only when you need it. That way you reduce the resources your app needs, in this case network connections, and you don't put load on the DB server.
General rule of thumb: use a resource only when you need it.
If speed is important, get the connection up front. It's not a big resource to have open, but it can take a while to establish.
Also, by connecting early you know at app startup whether there's a problem (e.g. the database server is down), and you get confirmation that everything is good to go, rather than failing some time later when you could have known earlier and fixed the problem before it became a problem.
You might want to go further and run a heartbeat query to assert that the database is still there, for similar reasons.
Note that this approach requires the database to be up for your app to be up. You can do something in between: get the connection at startup, but if it's not available, fall back to a just-in-time approach, which gives you more flexibility. This is what I'd do.
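A minimal sketch of that in-between approach, using the same mysql_* API as the question (the function name getDb() is just an illustration):

```php
<?php
// Lazily created, cached connection: the first caller triggers the
// connect; later callers reuse the same link.
function getDb()
{
    static $db = null;
    if ($db === null) {
        $db = mysql_connect('localhost', 'user', 'password')
            or die("MySQL: Could not connect!");
        mysql_select_db('database', $db);
    }
    return $db;
}

// Optional eager check at startup: surfaces a down database
// immediately instead of on the first query some time later.
getDb();
```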
I think it depends on concurrency, but caching connections in a thread-safe pool is better.
In many web applications, database connections are established and put in a thread-safe pool, because establishing a connection is expensive.
It also makes sense to get data from Memcached rather than directly from the database, because it is faster and can serve many concurrent requests.
Looking at the code you provided (usual last-century-style spaghetti), I'd vote for the first one.
Whatever logic is added to the program flow will make your code more complex by a factor of ten. So, better leave it as plain as possible.
Or I would even advise not using memcache at all until you learn how to separate and encapsulate different concerns.
Especially because there is no point in caching data you can get from the DB by primary key.
I am using a MySQL database with phpMyAdmin as the frontend (I am not sure I have remote/client access yet). I have a script that queries the database, and I would like to see how long each query takes. What is the easiest way to do this? Could I install another PHP app on the server?
If you have access to MySQL config files, you can enable general query log and slow query log.
See here for details.
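For example, the relevant my.cnf settings might look like this (the file paths are just illustrative; pick locations the mysqld user can write to):

```
[mysqld]
general_log = 1
general_log_file = /var/log/mysql/general.log
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 1
```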
Since, I think, 5.1.6, you can also do it at runtime:
SET GLOBAL long_query_time = 10; /* which queries are considered long */
SET GLOBAL slow_query_log = 1;
But try it anyway; I don't remember the exact version in which it appeared.
For most applications that I work on, I include query profiling output that can easily be turned on in the development environment. This outputs the SQL, execution time, stack trace and a link to display explain output. It also highlights queries running longer than 1 second.
Although you probably don't need something as sophisticated, you can get a fairly good sense of the run time of queries by writing a function in PHP that wraps the query execution and stores debug information in the session (or simply outputs it). For example:
function run_query($sql, $debug = false, $output = false) {
    $start = microtime(true);
    $q = mysql_query($sql);
    $time = microtime(true) - $start;
    if ($debug) {
        $debug = "$sql<br/>$time<br/><br/>";
        if ($output) {
            print $debug;
        } else {
            $_SESSION['sql_debug'] .= $debug;
        }
    }
    return $q;
}
That's just kind of a rough idea. You can tweak it however you want.
Hope that helps -
You should set long_query_time to 1, since setting it to 10 will exclude most, if not all, of your queries.
At the mysql prompt, type:
SET GLOBAL long_query_time = 1;