Stop MySQL query when the Stop button is pressed - mysql

I am stuck on this problem. I found many related posts, but none of them helped, so I am posting here in the hope that someone can help me.
Say I have two buttons, a Start button and a Stop button. Pressing Start calls an AJAX function that runs a very long query. When I press Stop, I need the query to stop immediately and not keep executing.
This is the function used to run the query and fetch the rows (a customised Mysqli.php):
public function fetchMultiRowset($params = array()) {
    $data = array();
    $mysqli = $this->_adapter->getConnection();
    $mysqli->multi_query($this->bindParams($this->_sql, $params));
    $thread_id = mysqli_thread_id($mysqli);
    ignore_user_abort(true);
    ob_start();
    $index = 0;
    do {
        if ($result = $mysqli->store_result()) {
            while ($row = $result->fetch_array(MYSQLI_ASSOC)) {
                $data[$index] = $row;
                $index++;
                echo " ";
                ob_flush();
                flush();
            }
            $result->free();
        }
    } while ($mysqli->more_results() && $mysqli->next_result());
    ob_end_flush();
    return $data;
}
Function in Model:
public function select_entries() {
    $data = null;
    try {
        $db = Zend_Db_Adapter_Mysqlicustom::singleton();
        $sql = "SELECT * FROM report LIMIT 2000000";
        $data = $db->fetchMultiRowset($sql);
        $db->closeConnection();
    } catch (Exception $exc) {
    }
    return $data;
}
Controller:
public function testAction(){
    $op = $this->report_test->select_entries();
}
In the AJAX code I used xhr.abort() to cancel the request, but the query keeps running even after the AJAX call has been aborted.
How do I stop the query? I am using Zend Framework.

EDIT: I did not look at your program in detail at first; now I see that it is not the query itself that takes so long, but reading all of the data. So just check every 1000 rows or so whether the AJAX call is still active (see Ajax Abort).
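For example, a minimal sketch of that per-1000-rows check, assuming $result is a mysqli result like the one in the fetch loop above (an illustration only, not the asker's actual code):

// Sketch: read rows, but bail out as soon as the client has aborted the AJAX request.
$data  = array();
$index = 0;
while ($row = $result->fetch_array(MYSQLI_ASSOC)) {
    $data[$index++] = $row;
    if ($index % 1000 === 0) {
        echo " ";      // PHP only notices a dropped connection when it tries to send output
        ob_flush();
        flush();
        if (connection_aborted()) {   // xhr.abort() on the client side shows up here
            break;                    // stop reading the remaining rows
        }
    }
}
$result->free();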
Solution in case of a long-running SQL query:
You would have to allow the application to kill database queries, and you would need to implement a more complex interaction between client and server, which could lead to security holes if done wrong.
The start request should contain a session id and a page id (secure ids, so not 3 and 4 and 5 but non-guessable, unique hashes of some kind). The backend then associates these ids with the query. This could be done in an extra table of the database, but also via a comment in the SQL query, like "Session fid98a08u4j, Page 940jfmkvlz" => s:<session>p:<page>.
/* s:fid98a08u4jp:940jfmkvlz */ select * from ...
If the user presses "stop", you send the session and page ids to the server. The PHP code then fetches the list of running SQL queries, searches for the session and page tags, and extracts the query id.
Then PHP sends a
kill query <id>
to the MySQL server.
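A rough sketch of that lookup and kill, assuming the query was tagged with a /* s:<session>p:<page> */ comment as described above; $mysqli is a second, separate connection and the helper is hypothetical glue code:

// Sketch: find the tagged statement in the server's process list and kill it.
function killTaggedQuery(mysqli $mysqli, $tag)
{
    $ids  = array();
    $stmt = $mysqli->prepare(
        "SELECT ID FROM information_schema.PROCESSLIST
         WHERE INFO LIKE CONCAT('%', ?, '%') AND ID <> CONNECTION_ID()"
    );
    $stmt->bind_param('s', $tag);
    $stmt->execute();
    $stmt->bind_result($queryId);
    while ($stmt->fetch()) {
        $ids[] = $queryId;            // collect first; KILL cannot run while this statement is open
    }
    $stmt->close();
    foreach ($ids as $id) {
        $mysqli->query('KILL QUERY ' . (int) $id);   // kills the statement but keeps its connection alive
    }
}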
This might lead to trouble when not using transactions, and it might damage replication. Even a kill query can take some time in the 'killing' state.
So first be sure that you cannot (or do not want to) split the long-running query into smaller queries that check every few seconds whether the request is still valid, and that you are not killing the query for purely cosmetic reasons.

Related

Laravel keeps getting the same cache result even if I input different search keyword

I have an API request that has a parameter called projectname. The problem is that when I search for, say, A the results are for A, but when I then search for B the result is still A, and even when I search for C the result is still the same. I think the cache saved the results from the first search string. My question is: how can I cache the results of every search query separately, instead of getting the same result regardless of the search query?
Here is my code
public function getRecordDetails(Request $request){
    if(!empty($request->limit)){
        $limit = " LIMIT ".$_REQUEST['limit'];
    }
    else{
        $limit = '';
    }
    if(empty($request->projectname)){
        dd('Field is empty');
    }
    else{
        $data = Cache::rememberForever('results', function () use($request) {
            $result = DB::connection('mysql2')
                ->table('xp_pn_ura_transactions')
                ->whereRaw(DB::raw("CONCAT(block, ' ', street,' ',project_name,' ', postal_code,'')LIKE '%$request->projectname%' order by STR_TO_DATE(sale_date, '%d-%M-%Y') desc"))
                ->limit($request->limit)
                ->distinct()
                ->get();
            $count = DB::connection('mysql2')
                ->table('xp_pn_ura_transactions')
                ->whereRaw(DB::raw("CONCAT(block, ' ', street,' ',project_name,' ', postal_code,'')LIKE '%$request->projectname%'"))
                ->count();
            return json_encode(array('count'=>$count,'result'=>$result));
        });
        return $data;
    }
}
PS: This question is based on How could I cache every api response results in my query in Laravel? I answered there, but this is a different problem that arose from my answer. Thanks for helping.
Laravel finds the cache entry by its key, and you are using results as your key.
So no matter how different the request is, Laravel still finds a cache entry under results,
and it will keep returning the first value you stored under results.
$key = "results:".$request->projectname.':'.$request->limit;
Cache::rememberForever($key, function () use ($request) {
    ...
});
This will store a separate entry for every different projectname you request.
However:
Problem 1:
There are many different combinations a user can request.
I don't think it is a good idea to cache all of them forever. If there are not that many, it is fine.
Solution:
Use remember() instead of rememberForever():
$ttl = ????; // Find the appropriate time to expire the cache
$value = Cache::remember($key, $ttl, function () {});
Problem 2:
There is a $request->limit baked into your cache key.
That means that if someone inserts or deletes a record in that table, the next time you request with another limit you may get stale or duplicated records.
Solution:
Clear the relevant cache entries after you create, update, or delete records.
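One way to do that, sketched under the assumption that the cache store supports tags (redis or memcached; the file and database stores do not):

// Sketch: tag the cached entries on the read side...
$data = Cache::tags('record-details')->rememberForever($key, function () use ($request) {
    // ... same query as above ...
});

// ...and drop all of them in one call after any insert/update/delete on the table.
Cache::tags('record-details')->flush();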
You are using the same cache key over and over; you should change the cache key according to the input. Passing $request into the closure via use will not magically change the cache key.
In your case, this should work:
Cache::rememberForever("results_{$request->projectname}", function () use ($request) {
You should add a text value before and after the key id, like below:
Cache::rememberForever('product_'.$product->id.'_key', function () {
    // ...
});

Import of 50K+ Records in MySQL Gives General error: 1390 Prepared statement contains too many placeholders

Has anyone ever come across this error: General error: 1390 Prepared statement contains too many placeholders
I just did an import via SequelPro of over 50,000 records and now when I go to view these records in my view (Laravel 4) I get General error: 1390 Prepared statement contains too many placeholders.
The below index() method in my AdminNotesController.php file is what is generating the query and rendering the view.
public function index()
{
    $created_at_value = Input::get('created_at_value');
    $note_types_value = Input::get('note_types_value');
    $contact_names_value = Input::get('contact_names_value');
    $user_names_value = Input::get('user_names_value');
    $account_managers_value = Input::get('account_managers_value');
    if (is_null($created_at_value)) $created_at_value = DB::table('notes')->lists('created_at');
    if (is_null($note_types_value)) $note_types_value = DB::table('note_types')->lists('type');
    if (is_null($contact_names_value)) $contact_names_value = DB::table('contacts')->select(DB::raw('CONCAT(first_name," ",last_name) as cname'))->lists('cname');
    if (is_null($user_names_value)) $user_names_value = DB::table('users')->select(DB::raw('CONCAT(first_name," ",last_name) as uname'))->lists('uname');
    // In the view, there is a dropdown box that allows the user to select the number of records to show per page. Retrieve that value or set a default.
    $perPage = Input::get('perPage', 10);
    // This code retrieves the order from the session that has been selected by the user by clicking on a table column title. The value is placed in the session via the getOrder() method and is used later in the Eloquent query and joins.
    $order = Session::get('account.order', 'company_name.asc');
    $order = explode('.', $order);
    $notes_query = Note::leftJoin('note_types', 'note_types.id', '=', 'notes.note_type_id')
        ->leftJoin('users', 'users.id', '=', 'notes.user_id')
        ->leftJoin('contacts', 'contacts.id', '=', 'notes.contact_id')
        ->orderBy($order[0], $order[1])
        ->select(array('notes.*', DB::raw('notes.id as nid')));
    if (!empty($created_at_value)) $notes_query = $notes_query->whereIn('notes.created_at', $created_at_value);
    $notes = $notes_query->whereIn('note_types.type', $note_types_value)
        ->whereIn(DB::raw('CONCAT(contacts.first_name," ",contacts.last_name)'), $contact_names_value)
        ->whereIn(DB::raw('CONCAT(users.first_name," ",users.last_name)'), $user_names_value)
        ->paginate($perPage)->appends(array('created_at_value' => Input::get('created_at_value'), 'note_types_value' => Input::get('note_types_value'), 'contact_names_value' => Input::get('contact_names_value'), 'user_names_value' => Input::get('user_names_value')));
    $notes_trash = Note::onlyTrashed()
        ->leftJoin('note_types', 'note_types.id', '=', 'notes.note_type_id')
        ->leftJoin('users', 'users.id', '=', 'notes.user_id')
        ->leftJoin('contacts', 'contacts.id', '=', 'notes.contact_id')
        ->orderBy($order[0], $order[1])
        ->select(array('notes.*', DB::raw('notes.id as nid')))
        ->get();
    $this->layout->content = View::make('admin.notes.index', array(
        'notes' => $notes,
        'created_at' => DB::table('notes')->lists('created_at', 'created_at'),
        'note_types' => DB::table('note_types')->lists('type', 'type'),
        'contacts' => DB::table('contacts')->select(DB::raw('CONCAT(first_name," ",last_name) as cname'))->lists('cname', 'cname'),
        'accounts' => Account::lists('company_name', 'company_name'),
        'users' => DB::table('users')->select(DB::raw('CONCAT(first_name," ",last_name) as uname'))->lists('uname', 'uname'),
        'notes_trash' => $notes_trash,
        'perPage' => $perPage
    ));
}
Any advice would be appreciated. Thanks.
Solved this issue by using the array_chunk function.
Here is the solution:
foreach (array_chunk($data, 1000) as $t)
{
    DB::table('table_name')->insert($t);
}
There is a limit of 65,535 (2^16-1) placeholders in MariaDB 5.5, which is supposed to behave identically to MySQL 5.5.
Not sure if it is relevant; I tested it on PHP 5.5.12 using MySQLi / MySQLND.
This error only happens when both of the following conditions are met:
You are using the MySQL Native Driver (mysqlnd) and not the MySQL client library (libmysqlclient)
You are not emulating prepares.
If you change either one of these factors, this error will not occur. However, keep in mind that both of these are recommended, for performance and security reasons respectively, so I would not recommend this workaround for anything other than a one-off or temporary problem. To prevent the error from occurring, the fix is as simple as:
$dbh->setAttribute(PDO::ATTR_EMULATE_PREPARES, true);
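For reference, the same attribute can also be passed when the connection is created; a minimal sketch with a placeholder DSN and credentials:

$dbh = new PDO(
    'mysql:host=localhost;dbname=mydb;charset=utf8',   // placeholder DSN
    'user',                                            // placeholder credentials
    'secret',
    array(
        PDO::ATTR_EMULATE_PREPARES => true,   // placeholders are interpolated client-side, so the server-side limit is not hit
        PDO::ATTR_ERRMODE          => PDO::ERRMODE_EXCEPTION,
    )
);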
While I think @The Disintegrator is correct about the placeholders being limited, I would not run one query per record.
I have a query that worked fine until I added one more column; now I have 72k placeholders and I get this error. However, that 72k is made up of 9000 rows with 8 columns. Running this query one record at a time would take days. (I'm trying to import AdWords data into a DB, and it would literally take more than 24 hours to import a day's worth of data if I did it one record at a time. I tried that first.)
What I would recommend is something of a hack. First, determine the maximum number of placeholders you want to allow, e.g. 60k to be safe. Use this number to determine, based on the number of columns, how many complete records you can import/return at once. Create the full array of data for your query, then use array_chunk and a foreach loop to grab everything you want in the minimum number of queries. Like this:
$maxRecords = 1000;
$sql = 'SELECT * FROM ...';
$qMarks = array_fill(0, $maxRecords, '(?, ...)');
$tmp = $sql . implode(', ', $qMarks);
foreach (array_chunk($data, $maxRecords) as $junk => $dataArray) {
    if (count($dataArray) < $maxRecords) { break; }
    // Do your PDO stuff here using $tmp as your SQL statement with all those placeholders - the ?s
}
// Now insert all the leftovers with basically the same code as above except accounting for
// the fact that you have fewer than $maxRecords now.
Using a Laravel model, this copies all 11,000 records from an SQLite database to a MySQL database in a few seconds. The data array is chunked into 500 records:
public function handle(): void
{
    $smodel = new Src_model();
    $smodel->setTable($this->argument('fromtable'));
    $smodel->setConnection('default'); // sqlite database
    $src = $smodel::all()->toArray();
    $dmodel = new Dst_model();
    $dmodel->setTable($this->argument('totable'));
    $dmodel->timestamps = false;
    $stack = $dmodel->getFields();
    $fields = array_shift($stack);
    $condb = DB::connection('mysql');
    $condb->beginTransaction();
    $dmodel::query()->truncate();
    $dmodel->fillable($stack);
    $srcarr = array_chunk($src, 500);
    $isOK = true;
    foreach ($srcarr as $item) {
        if (!$dmodel->query()->insert($item)) $isOK = false;
    }
    if ($isOK) {
        // Polish: "We moved the table from {fromtable} to {totable}" / "It will be fresher than ever!"
        $this->notify("Przenieśliśmy tabelę z tabeli : {$this->argument('fromtable')} do tabeli: {$this->argument('totable')}", 'Będzie świeża jak nigdy!');
        $condb->commit();
    }
    else $condb->rollBack();
}
You can do it with the array_chunk function, like this:
foreach (array_chunk($data, 1000) as $key => $smallerArray) {
    foreach ($smallerArray as $index => $value) {
        $temp[$index] = $value;
    }
    DB::table('table_name')->insert($temp);
}
My fix for the above issue:
On my side, when I got this error I fixed it by reducing the bulk-insertion chunk size from 1000 to 800, and it worked for me.
There were too many fields in my table, and most of them contain long detail descriptions (roughly a full page of text each). When I attempted the bulk insertion, the service crashed and threw the above error.
I think the number of placeholders is limited to 65536 per query (at least in older mysql versions).
I really can't discern what this piece of code is generating, but if it's a gigantic query, there's your problem.
You should generate one query per record to import and put those into a transaction.
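A minimal sketch of that approach with PDO; $dbh is an existing connection and the table/columns are placeholders:

// Sketch: one INSERT per record, all wrapped in a single transaction
// so the import is atomic and still reasonably fast.
$stmt = $dbh->prepare('INSERT INTO notes (contact_id, body) VALUES (?, ?)');
$dbh->beginTransaction();
try {
    foreach ($records as $record) {
        $stmt->execute(array($record['contact_id'], $record['body']));
    }
    $dbh->commit();
} catch (PDOException $e) {
    $dbh->rollBack();   // undo the partial import on any failure
    throw $e;
}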

PHP PDO - Testing connection before doing query?

public function smart_query($query, $options = null, $bindoptions = null)
{
    // Code to reconnect in case of a timeout
    try {
        $this->db->query('SELECT * FROM templates');
    }
    catch (PDOException $e)
    {
        echo $e;
        $pdooptions = array(
            PDO::ATTR_PERSISTENT => true,
            PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION
        );
        $this->db = new PDO("mysql:host=localhost;dbname=$this->database", "$this->username", "$this->password", $pdooptions);
    }
    $this->statement = $this->db->prepare($query);
    if($bindoptions != null)
    {
        $this->bind($bindoptions);
    }
    $this->execute();
    if($options != null)
    {
        // Return a single row
        if($options['rows'] == 1)
        {
            return $this->statement->fetch(PDO::FETCH_ASSOC);
        }
        // Return multiple rows
        elseif($options['rows'] != 1)
        {
            return $this->statement->fetchAll(PDO::FETCH_ASSOC);
        }
    }
}
I saw this code today and got really confused.
It looks like the author is trying to run a simple query before doing the actual query.
Why is he checking whether the connection is still open?
I thought PDO only destroys its connection automatically when the script finishes executing?
Is it correct to check whether it's open or closed?
This implements a form of lazy loading.
The first time a query is executed through this class/function, the database connection may not be established yet. That is the purpose of this check, so that the consumer (you) does not have to worry about it.
The connection is then stored in the $this->db class member for future reuse when you call this method again in the course of your script (and yes, this connection will stay open until the script ends, unless it is closed explicitly beforehand, of course).
For information, this check is slightly inefficient: a simple $this->db->query('SELECT 1') would suffice, without the need to read a table at all.
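A minimal sketch of that lighter check, reusing the reconnect logic and the class members ($this->database, $this->username, $this->password) from the method above:

// Sketch: ping the server with a constant query instead of reading a real table.
private function ensureConnected()
{
    try {
        $this->db->query('SELECT 1');   // cheap round trip, touches no tables
    } catch (PDOException $e) {
        // Connection was lost (e.g. wait_timeout); open a fresh one.
        $this->db = new PDO(
            "mysql:host=localhost;dbname=$this->database",
            $this->username,
            $this->password,
            array(
                PDO::ATTR_PERSISTENT => true,
                PDO::ATTR_ERRMODE    => PDO::ERRMODE_EXCEPTION,
            )
        );
    }
}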

codeigniter mysql transactions

I'm new to database transactions and what I've found so far is extremely confusing. I have a queue table that contains an email address, a send datetime, and a sent datetime. My cron job fires constantly and selects rows whose send datetime has already passed. It sends an email to the address and updates the sent datetime column.
If two runs of this cron job fire at exactly the same time, there is potential for them to grab the same rows, thus sending the email twice.
From what I understand, transactions all depend on the success or failure of queries. How do I check that in this scenario?
$this->db->trans_begin();
$query = $this->db->query('SELECT * FROM Queue_table WHERE send <= NOW() LIMIT 100');
foreach ($query->result() as $row)
{
    // code to send email to $row->email;
    $this->db->query('UPDATE Queue_table SET sent = NOW() WHERE id = ' . $row->id);
}
// the following doesn't make much sense to me. What would cause
// this to be false in this scenario?
if ($this->db->trans_status() === FALSE)
{
    $this->db->trans_rollback();
}
else
{
    $this->db->trans_commit();
}
Am I going about this completely wrong?
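One common way to make concurrent runs safe, sketched here as a suggestion rather than a confirmed answer, is to lock the candidate rows with FOR UPDATE inside the transaction; this assumes an InnoDB table and that unsent rows have a NULL sent column (CodeIgniter has no query-builder helper for this, so it is written as raw SQL):

$this->db->trans_begin();
// A second cron run that starts at the same time will block on these rows
// until this transaction commits, so it cannot send the same emails again.
$query = $this->db->query(
    'SELECT * FROM Queue_table WHERE send <= NOW() AND sent IS NULL LIMIT 100 FOR UPDATE'
);
foreach ($query->result() as $row)
{
    // code to send email to $row->email;
    $this->db->query('UPDATE Queue_table SET sent = NOW() WHERE id = ' . (int) $row->id);
}
if ($this->db->trans_status() === FALSE)
{
    $this->db->trans_rollback();
}
else
{
    $this->db->trans_commit();
}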

session_regenerate_id and database handler

I am using a database handler for my sessions, which works fine, but now I have run into a problem with authentication.
When a user logs in with username/password I call session_regenerate_id(), and after that I try to select the current session_id.
Here is my code
session_regenerate_id();
echo $checkQ=" SELECT * FROM my_sessions WHERE id='".session_id()."' ";
......
but I don't get any results, even though the session_id is the correct one.
After the page finishes loading, if I copy and paste the SQL command into phpMyAdmin I do get the results.
I know it sounds stupid, but the only reason I can think of is that session_regenerate_id() "is too slow", so when I try to read the session_id on the next line the session row has not been created in the database yet.
Can anyone help me?
I know it has been a while, and I hope you have found an answer since this was posted, but I'll add my solution for posterity's sake.
The call to session_regenerate_id() will cause the value of session_id() to change:
<?php
$before = session_id();
session_regenerate_id();
$after = session_id();
var_dump($before == $after); // outputs false
This problem manifested for me because in the session write handler I was doing this (without such bogus method names, of course):
<?php
class MySQLHandler
{
    function read($id)
    {
        $row = $this->doSelectSql($id);
        if ($row) {
            $this->foundSessionDuringRead = true;
        }
        // snip
    }

    function write($id, $data)
    {
        if ($this->foundSessionDuringRead) {
            $this->doUpdateSql($id, $data);
        }
        else {
            $this->doInsertSql($id, $data);
        }
    }
}
The write() method worked fine if session_regenerate_id() was never called. If it was called, however, the $id argument to write() is different from the $id passed to read(), so the update won't find any records with the new $id because they've never been inserted.
Some people suggest using MySQL's "REPLACE INTO" syntax, but that deletes and replaces the row, which plays merry havoc if you want to have a creation date column. What I did to fix the problem was to hold on to the session ID that was passed to read(), then update the session ID in the database during write(), using the id passed to read() as the key:
<?php
class MySQLHandler
{
    function read($id)
    {
        $row = $this->doSelectSql($id);
        if ($row) {
            $this->rowSessionId = $id;
        }
        // snip
    }

    function write($id, $data)
    {
        if ($this->rowSessionId) {
            $stmt = $this->pdo->prepare("UPDATE session SET session_id=:id, data=:data WHERE session_id=:rowSessionId AND session_name=:sessionName");
            $stmt->bindValue(':id', $id);
            $stmt->bindValue(':rowSessionId', $this->rowSessionId);
            $stmt->bindValue(':data', $data);
            $stmt->bindValue(':sessionName', $this->sessionName);
            $stmt->execute();
        }
        else {
            $this->doInsertSql($id, $data);
        }
    }
}
I think I'm having the same problem you are having. It's unclear to me whether this is a PHP (cache) feature or a bug.
The issue is that, when using a custom SessionHandler and calling session_regenerate_id(true), the new session is not created until the script terminates. I confirmed this by doing the same thing you did: SELECTing the new session id from the database. The new session is not there. However, after the script finishes, it is.
This is how I fixed it:
$old_id = session_id();
// If you SELECT your DB and search for $old_id, it will be there.
session_regenerate_id(TRUE);
$new_id = session_id();
// If you SELECT your DB for either $old_id or $new_id, none will be there.
session_write_close();
session_start();
// If you SELECT your DB for $new_id, it will be there.
Therefore the solution (workaround) I came up with was to force PHP to write the session. I hope this helps.