MySQL: select or insert based on condition

How to do this in a single MySQL query:
if (select count(*)..)=10
select a record from the same table
else
insert a record into the same table

If you want to use a single SQL command (though I don't know why you would), you can use a stored procedure:
CREATE PROCEDURE `select_or_insert`()
MODIFIES SQL DATA
COMMENT 'blah blah'
BEGIN
  IF ((SELECT COUNT(*) FROM `your_table`) = 10) THEN
    SELECT ... FROM ... ;
  ELSE
    INSERT INTO ... ;
  END IF;
END;
To invoke the Procedure you will issue the following command:
CALL `select_or_insert`();
If the SELECT is executed, the statement will return a resultset.
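For illustration only, here is a filled-in sketch of the same idea, assuming a hypothetical `your_table` with `id` and `name` columns (the DELIMITER lines are needed when creating the procedure from the mysql command line client):
DELIMITER //
CREATE PROCEDURE `select_or_insert`()
MODIFIES SQL DATA
COMMENT 'select a row once the table holds 10 rows, otherwise insert one'
BEGIN
  IF ((SELECT COUNT(*) FROM `your_table`) = 10) THEN
    SELECT `id`, `name` FROM `your_table` ORDER BY `id` LIMIT 1;
  ELSE
    INSERT INTO `your_table` (`name`) VALUES ('new row');
  END IF;
END//
DELIMITER ;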

As long as only one webserver is involved, I would try APC http://php.net/manual/de/book.apc.php with a simple timeout scheme.
Say, for example: if an IP has already sent a request in the last 2 seconds, its current request should be refused and the timeout increases to 4 seconds, after that 10 seconds, and so on.
if ($timeoutLevel = apc_fetch("locked_" . $ip)) {
    $timeoutLevel++;
    $timeout = getNextTimeout($timeoutLevel);
    apc_store("locked_" . $ip, $timeoutLevel, $timeout);
    show_my_error_page(get_friendly_text("please do not try again for $timeout seconds! You are Blocked!"));
    exit();
} else {
    $timeoutLevel = 1;
    $timeout = INITIAL_PAGE_TIMEOUT;
    apc_store("locked_" . $ip, $timeoutLevel, $timeout);
}
That should cost at most around 50 bytes per IP for the last x seconds, so unless it is a DDoS the webserver should have that much RAM.
But be careful: some HTML pages trigger follow-up requests for CSS, JavaScript, images, sounds, later AJAX calls, JSON requests and so on.
After $timeout seconds APC automatically drops the value, so there is no need for a separate cleanup job.

MySQL 5.7 Unable to run query longer than 900 seconds [duplicate]

I would like to set a maximum execution time for SQL queries, similar to set_time_limit() in PHP. How can I do that?
I thought it had been around a little longer, but according to this,
MySQL 5.7.4 introduced the ability to set server-side execution time limits, specified in milliseconds, for top-level read-only SELECT statements.
SELECT /*+ MAX_EXECUTION_TIME(1000) */ *  -- in milliseconds
FROM table;
Note that this only works for read-only SELECT statements.
Update: This variable was added in MySQL 5.7.4 and renamed to max_execution_time in MySQL 5.7.8. (source)
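If you prefer not to add the hint to every query, here is a minimal sketch using the server variable instead (assuming MySQL 5.7.8+, where it is named max_execution_time; the value is in milliseconds and, like the hint, it only affects read-only SELECT statements):
-- limit SELECTs in the current session to 1 second
SET SESSION max_execution_time = 1000;
-- or server-wide (requires the appropriate privilege)
SET GLOBAL max_execution_time = 1000;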
If you're using the MySQL native driver (common since PHP 5.3) and the mysqli extension, you can accomplish this with an asynchronous query:
<?php
// Here's an example query that will take a long time to execute.
$sql = "
select *
from information_schema.tables t1
join information_schema.tables t2
join information_schema.tables t3
join information_schema.tables t4
join information_schema.tables t5
join information_schema.tables t6
join information_schema.tables t7
join information_schema.tables t8
";

$mysqli = mysqli_connect('localhost', 'root', '');
$mysqli->query($sql, MYSQLI_ASYNC | MYSQLI_USE_RESULT);
$links = $errors = $reject = [];
$links[] = $mysqli;

// wait up to 1.5 seconds
$seconds = 1;
$microseconds = 500000;
$timeStart = microtime(true);

if (mysqli_poll($links, $errors, $reject, $seconds, $microseconds) > 0) {
    echo "query finished executing. now we start fetching the data rows over the network...\n";
    $result = $mysqli->reap_async_query();
    if ($result) {
        while ($row = $result->fetch_row()) {
            // print_r($row);
            if (microtime(true) - $timeStart > 1.5) {
                // we exceeded our time limit in the middle of fetching our result set.
                echo "timed out while fetching results\n";
                var_dump($mysqli->close());
                break;
            }
        }
    }
} else {
    echo "timed out while waiting for query to execute\n";
    // kill the thread to stop the query from continuing to execute on
    // the server, because we are abandoning it.
    var_dump($mysqli->kill($mysqli->thread_id));
    var_dump($mysqli->close());
}
The flags I'm giving to mysqli_query accomplish two important things. MYSQLI_ASYNC tells the client driver to enable asynchronous mode, which forces us to use more verbose code but lets us use a timeout (and also issue concurrent queries if you want!). MYSQLI_USE_RESULT tells the client not to buffer the entire result set into memory.
By default, PHP configures its MySQL client libraries to fetch the entire result set of your query into memory before it lets your PHP code start accessing rows in the result. That can take a long time for a large result, and we disable it because otherwise we risk timing out while waiting for the buffering to complete.
Note that there are two places where we need to check for exceeding a time limit:
the actual query execution
while fetching the results (data)
You can accomplish something similar with PDO and the regular mysql extension. They don't support asynchronous queries, so you can't set a timeout on the query execution time. However, they do support unbuffered result sets, so you can at least implement a timeout on the fetching of the data.
For many queries, mysql is able to start streaming the results to you almost immediately, and so unbuffered queries alone will allow you to somewhat effectively implement timeouts on certain queries. For example, a
select * from tbl_with_1billion_rows
can start streaming rows right away, but,
select sum(foo) from tbl_with_1billion_rows
needs to process the entire table before it can start returning the first row to you. This latter case is where the timeout on an asynchronous query will save you. It will also save you from plain old deadlocks and other stuff.
P.S. I didn't include any timeout logic on the connection itself.
You can rewrite your query like:
select /*+ MAX_EXECUTION_TIME(1000) */ * from table
This will abort the query after the specified time (in milliseconds).
You can find the answer on this other S.O. question:
MySQL - can I limit the maximum time allowed for a query to run?
It suggests a cron job that runs every second on your database server, connecting and doing something like this:
SHOW PROCESSLIST
Find all connections with a query time larger than your maximum desired time
Run KILL [process id] for each of those processes
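A minimal SQL sketch of that approach (the 900-second threshold is just an example, and the process id in the KILL is a placeholder):
-- find queries that have been running longer than 900 seconds
SELECT id, time, info
FROM information_schema.processlist
WHERE command = 'Query' AND time > 900;

-- then, for each id returned above:
KILL 12345;  -- or KILL QUERY 12345 to abort only the statement, keeping the connection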
pt-kill has an option for this, but it is on-demand, not continually monitoring. It does what @Rafa suggested. However, see --sentinel for a hint of how to come close with cron.

How to enforce 2 MySQL queries to run in series

Every day I need to get a list of all customers who are past due and insert them into a new table; then I need to move the dates forward by one month so they will be processed again next month.
Currently I am running two SQL queries in series:
insert into history select * from customers where nextdate<CURDATE()
update customers set nextdate=calculation() where nextdate<CURDATE()
But sometimes customers are updated without being inserted into history.
I think the update begins to run before MySQL has finished the select.
I am using Node.js. The current method I am using for serialization is to run the update in the callback of the insert, but the insert's callback probably fires before the code actually runs.
It doesn't happen every time, but I think that making a stored procedure containing both might help. Has anyone had experience with this?
There are two issues here. The first is that your SELECT and UPDATE don't wait for each other, because SELECT does not lock the table. You can force a locking read by using:
INSERT INTO history SELECT * FROM customers WHERE nextdate<CURDATE() FOR UPDATE;
See https://dev.mysql.com/doc/refman/5.7/en/innodb-locking-reads.html for more information on Locking Reads.
The second is that you probably also want to learn about transactions, which let you issue a batch of queries such that either all of them get committed, or none of them do.
In the case of your code, you want to add the commands for starting and committing a transaction:
START TRANSACTION;
INSERT INTO history SELECT * FROM customers WHERE nextdate<CURDATE() FOR UPDATE;
UPDATE customers SET nextdate=calculation() WHERE nextdate<CURDATE();
COMMIT;
See https://dev.mysql.com/doc/refman/5.7/en/commit.html for all the details on this mechanism, which anything from MySQL to Postgres to SQLite supports.
You can use Promises, provided by Node.js, to run them in series, or you can use async.series.
Async series:
async.series([
  function(callback) {
    // run your first SQL query; based on the response, send a success or failure callback
    callback(null, 'one');
  },
  function(callback) {
    // run your second SQL query; based on the response, send a success or failure callback
    callback(null, 'two');
  }
], function(err, results) {
  // your final callback should close the connection if needed
});
Or you can go with the first option.
Using Promises
(I would recommend this one):
var conn = db.config(mysql);
run_query(conn, ['insert into history select * from customers where nextdate<CURDATE()'])
  .then(function(result) {
    console.log(result); // result of 1st query
    return run_query(conn, ['update customers set nextdate=calculation() where nextdate<CURDATE()']);
  })
  .then(function(result) {
    console.log(result.something); // result of 2nd query
    conn.end();
  })
  .catch(function(err) {
    // will run if any error occurs
    console.log('there was an error', err);
  });

How to run 'SELECT FOR UPDATE' in Laravel 3 / MySQL

I am trying to execute a SELECT ... FOR UPDATE query using Laravel 3:
SELECT * from projects where id = 1 FOR UPDATE;
UPDATE projects SET money = money + 10 where id = 1;
I have tried several things for several hours now:
DB::connection()->pdo->exec($query);
and
DB::query($query)
I have also tried adding START TRANSACTION; ... COMMIT; to the query
and I tried to separate the SELECT from the UPDATE in two different parts like this:
DB::query($select);
DB::query($update);
Sometimes I get 0 rows affected, sometimes I get an error like this one:
SQLSTATE[HY000]: General error: 2014 Cannot execute queries while other unbuffered queries are active. Consider using PDOStatement::fetchAll(). Alternatively, if your code is only ever going to run against mysql, you may enable query buffering by setting the PDO::MYSQL_ATTR_USE_BUFFERED_QUERY attribute.
SQL: UPDATE `sessions` SET `last_activity` = ?, `data` = ? WHERE `id` = ?
I want to lock the row in order to update sensitive data, using Laravel's database connection.
Thanks.
If all you need to do is increase money by 10, you don't need to lock the row before the update. Simply executing the update query will do the job; the SELECT query only slows down your script and doesn't help in this case.
UPDATE projects SET money = money + 10 where id = 1;
I would use different queries for sure, so you can control what you are doing.
I would use a transaction.
If we read this simple explanation, PDO transactions are quite straightforward. It gives us a simple but complete example that illustrates how everything works as we would expect (consider $db to be your DB::connection()->pdo).
try {
    $db->beginTransaction();
    $db->exec("SOME QUERY");
    $stmt = $db->prepare("SOME OTHER QUERY?");
    $stmt->execute(array($value));
    $stmt = $db->prepare("YET ANOTHER QUERY??");
    $stmt->execute(array($value2, $value3));
    $db->commit();
} catch (PDOException $ex) {
    // Something went wrong, rollback!
    $db->rollBack();
    echo $ex->getMessage();
}
Let's go to your real statements. For the first of them, the SELECT, I wouldn't use exec but query, since, as stated here,
PDO::exec() does not return results from a SELECT statement. For a
SELECT statement that you only need to issue once during your program,
consider issuing PDO::query(). For a statement that you need to issue
multiple times, prepare a PDOStatement object with PDO::prepare() and
issue the statement with PDOStatement::execute().
And assign its result to a temporary variable, like:
$result = $db->query($select);
After this execution, I would call $result->fetchAll() or $result->closeCursor(), since, as we can read here,
If you do not fetch all of the data in a result set before issuing
your next call to PDO::query(), your call may fail. Call
PDOStatement::closeCursor() to release the database resources
associated with the PDOStatement object before issuing your next call
to PDO::query().
Then you can exec the update:
$affected = $db->exec($update);
Note that PDO::exec() returns the number of affected rows rather than a statement object, so there is nothing to fetch or close after it.
If the aim is
to lock the row in order to update sensitive data, using Laravel's database connection.
Maybe you can use PDO transactions:
DB::connection()->pdo->beginTransaction();
DB::connection()->pdo->commit();
DB::connection()->pdo->rollBack();
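Putting those pieces together with the queries from the question, the statements sent over that PDO connection would be along these lines (on InnoDB, the row lock taken by FOR UPDATE is only held until the end of the transaction, so it has to run inside one to be useful):
START TRANSACTION;
SELECT * FROM projects WHERE id = 1 FOR UPDATE;  -- row stays locked until COMMIT
UPDATE projects SET money = money + 10 WHERE id = 1;
COMMIT;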

Avoid MySQL multi-results from SP with Execute

I have a stored procedure like:
BEGIN
  DECLARE ...
  CREATE TEMPORARY TABLE tmptbl_found (...);
  PREPARE find FROM "INSERT INTO tmptbl_found
    (SELECT userid FROM
      (
        SELECT userid FROM Soul
        WHERE
          .?.?.
        ORDER BY
          .?.?.
      ) AS left_tbl
    LEFT JOIN
      Contact
    ON userid = Contact.userid
    WHERE Contact.userid IS NULL LIMIT ?)
  ";
  DECLARE iter CURSOR FOR SELECT userid, ... FROM Soul ...;
  ...
  l: LOOP
    FETCH iter INTO u_id, ...;
    ...
    EXECUTE find USING ..., ..., u_id, ...;
    ...
  END LOOP;
  ...
END//
and it gives multiple result sets. Besides being inconvenient, if I get all these result sets (which I really don't need at all), about 5 (the LIMIT parameter) for each of the hundreds of thousands of records in Soul, I'm afraid it will consume all my memory, and all in vain.
Also, I noticed that if I prepare from an empty string, it still produces multiple result sets...
At the very least, how do I get rid of them in the EXECUTE statement?
I would also like a recipe to avoid ANY output from a stored procedure, for any possible statement
(I also have a lot of "update ..." and "select ... into ..." statements inside, if they can produce multiple result sets).
Thanks for any help.
Well, it turned out that there wasn't really a problem. I didn't investigate hard, but it looks like the server didn't actually try to execute the statement ("call Proc();") to see whether there would be any results to return; it just looked at the code and assumed there would be multiple result sets, requiring a connection capable of handling them. In phpMyAdmin, which I was using at the time, the connection wasn't. However, issuing the same command from the MySQL command line client did the trick: no complaints about the connection context, and no multiple result sets either, because they don't have to be there; it's just MySQL's estimation. So I didn't have to conclude from the error that a stored procedure like this one will certainly return multiple result sets, flushing all the intermediately fetched data, which I would then need to suppress somehow.
It may not be exactly as I supposed, but the problem is gone now.

Updating the db 6000 times will take a few minutes?

I am writing a test program with Ruby and ActiveRecord, and it reads a document
which is about 6000 words long. Then I just tally up the words with:
recordWord = Word.find_by_s(word)
if recordWord.nil?
  recordWord = Word.new
  recordWord.s = word
end
if recordWord.count.nil?
  recordWord.count = 1
else
  recordWord.count += 1
end
recordWord.save
and this part loops 6000 times... and it takes at least a few minutes to
run using sqlite3. Is that normal? I was expecting it to run
within a couple of seconds... can MySQL speed it up a lot?
With 6000 calls to write to the database, you're going to see speed issues. I would save the various tallies in memory and save to the database once at the end, not 6000 times along the way.
Take a look at AR:Extensions as well to handle the bulk insertions.
http://rubypond.com/articles/2008/06/18/bulk-insertion-of-data-with-activerecord/
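As an illustration of the idea (not the AR:Extensions API itself), the in-memory tallies can be written out in one round trip with a multi-row statement; the `words(s, count)` table and the unique index on `s` are assumptions here:
INSERT INTO words (s, `count`)
VALUES ('the', 300), ('in', 250), ('copacetic', 1)
ON DUPLICATE KEY UPDATE `count` = `count` + VALUES(`count`);  -- requires a unique index on s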
I wrote up some quick code in perl that simply does:
Create the database
Insert a record that only contains a single integer
Retrieve the most recent record and verify that it returns what it inserted
And it does steps #2 and #3 6000 times. This is obviously a considerably lighter workload than having an entire object/relational bridge. For this trivial case with SQLite it still took 17 seconds to execute, so your desire to have it take "a couple of seconds" is not realistic on "traditional hardware."
Using the monitor I verified that it was primarily disk activity that was slowing it down. Based on that, if for some reason you really do need the database to behave that quickly, I suggest one of two options:
Do what people have suggested and find a way around the requirement
Try buying some solid state disks.
I think #1 is a good way to start :)
Code:
#!/usr/bin/perl
use warnings;
use strict;
use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=/tmp/dbfile', '', '');
create_database($dbh);
insert_data($dbh);

sub insert_data {
    my ($dbh) = @_;
    my $insert_sql   = "INSERT INTO test_table (test_data) values (?)";
    my $retrieve_sql = "SELECT test_data FROM test_table WHERE test_data = ?";
    my $insert_sth   = $dbh->prepare($insert_sql);
    my $retrieve_sth = $dbh->prepare($retrieve_sql);
    my $i = 0;
    while (++$i < 6000) {
        $insert_sth->execute(($i));
        $retrieve_sth->execute(($i));
        my $hash_ref = $retrieve_sth->fetchrow_hashref;
        die "bad data!" unless $hash_ref->{'test_data'} == $i;
    }
}

sub create_database {
    my ($dbh) = @_;
    my $status = $dbh->do("DROP TABLE test_table");
    # warn if the DROP resulted in an error (e.g. the table did not exist yet)
    if (!defined $status) {
        print "DROP TABLE failed";
    }
    my $create_statement = "CREATE TABLE test_table (id INTEGER PRIMARY KEY AUTOINCREMENT, \n";
    $create_statement .= "test_data varchar(255)\n";
    $create_statement .= ");";
    $status = $dbh->do($create_statement);
    # die if the CREATE resulted in an error
    if (!defined $status) {
        die "CREATE failed";
    }
}
What kind of database connection are you using? Some databases allow you to connect 'directly' rather than using a TCP network connection that goes through the network stack. In other words, if you're making an internet connection and sending data that way, it can slow things down.
Another way to boost performance of a database connection is to group SQL statements together in a single command.
For example, if you make a single 6,000-line SQL statement that looks like this
"update words set count = count + 1 where word = 'the'
update words set count = count + 1 where word = 'in'
...
update words set count = count + 1 where word = 'copacetic'"
and run it as a single command, performance will be a lot better. By default, MySQL has a 'packet size' limit of 1 megabyte, but you can change that in the my.ini file to be larger if you want.
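For instance, a sketch of raising that limit on a running server (16 MB is just an example value; the equivalent my.ini setting would be max_allowed_packet=16M under [mysqld]):
-- raise the maximum allowed packet size for new connections
SET GLOBAL max_allowed_packet = 16 * 1024 * 1024;
Note that sending several statements in one command also requires the client library to allow multiple statements (for example mysqli_multi_query in PHP).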
Since you're abstracting your database calls away through ActiveRecord, you don't have much control over how the commands are issued, so it can be difficult to optimize your code.
Another thing you could do would be to keep a count of words in memory and then only insert the final total into the database, rather than doing an update every time you come across a word. That will probably cut down a lot on the number of writes, because doing an update every time you come across the word 'the' is a huge, huge waste. Words have a 'long tail' distribution and the most common words are hugely more common than the more obscure words. Then the underlying SQL would look more like this:
"update words set count = 300 where word = 'the'
update words set count = 250 where word = 'in'
...
update words set count = 1 where word = 'copacetic'"
If you're worried about taking up too much memory, you could count words and periodically 'flush' them. So read a couple of megabytes of text, then spend a few seconds updating the totals, rather than updating each word every time you encounter it. If you want to improve performance even more, you should consider issuing SQL commands in batches directly.
Without knowing much about Ruby and SQLite, some general hints (see the SQL sketch below):
create a unique index on Word.s (you did not state whether you have one)
define a default for Word.count in the database (DEFAULT 1)
optimize the assignment of count:
recordWord = Word.find_by_s(word)
if recordWord.nil?
  recordWord = Word.new
  recordWord.s = word
  recordWord.count = 1
else
  recordWord.count += 1
end
recordWord.save
Use BEGIN TRANSACTION before your updates then COMMIT at the end.
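Putting those hints together, a rough SQL sketch (assuming MySQL and a `words` table with columns `s` and `count`; adjust the names to your schema):
-- each word should appear only once
CREATE UNIQUE INDEX idx_words_s ON words (s);
-- let the database supply the initial count
ALTER TABLE words MODIFY `count` INT NOT NULL DEFAULT 1;
-- wrap the per-word updates in one transaction (BEGIN TRANSACTION in SQLite)
START TRANSACTION;
UPDATE words SET `count` = `count` + 1 WHERE s = 'the';
-- ... more updates ...
COMMIT;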
OK, I found some general rules:
1) use a hash to keep the count first, not the DB
2) at the end, wrap all inserts or updates in one transaction, so that it won't hit the DB 6000 times.