My application consists of a couple of Apache servers talking to a common MySQL box. Part of the application lets users create one-hour appointments in the future. I need a mechanism that prevents different users, coming from different Apache instances at the same time, from booking the same one-hour appointment slot. I've seen a similar "inter-system mutex" solution implemented on Oracle databases (basically SELECT ... FOR UPDATE) but haven't dealt with the details of doing the same with MySQL. I would appreciate any advice, code or documentation references, best practices, etc. I did try to Google around, but mostly discussions about MySQL's internal mutexes come up.
These are the MySQL settings I thought relevant (my code will have try/catch and the like, and should never bail without unlocking what it locked, but I have to account for what happens in those cases as well):
mysql> show variables like 'innodb_rollback_on_timeout';
+----------------------------+-------+
| Variable_name | Value |
+----------------------------+-------+
| innodb_rollback_on_timeout | OFF |
+----------------------------+-------+
1 row in set (0.00 sec)
mysql> show variables like 'innodb_lock_wait_timeout';
+--------------------------+-------+
| Variable_name | Value |
+--------------------------+-------+
| innodb_lock_wait_timeout | 100 |
+--------------------------+-------+
1 row in set (0.00 sec)
mysql> select @@autocommit;
+--------------+
| @@autocommit |
+--------------+
| 1 |
+--------------+
1 row in set (0.00 sec)
Any alternative solutions (outside of MySQL) you could recommend? I do have a memcached instance running as well, but it gets flushed rather often (and I'm not sure I want to add memcachedb etc. just to make it fault tolerant).
Appreciate your help...
One can also use MySQL's and MariaDB's GET_LOCK (and RELEASE_LOCK) functions:
https://dev.mysql.com/doc/refman/5.5/en/miscellaneous-functions.html#function_get-lock (outdated link)
https://dev.mysql.com/doc/refman/8.0/en/locking-functions.html#function_get-lock
https://mariadb.com/kb/en/library/get_lock/
The functions can be used to realize the behavior described in the question.
Acquiring a lock named 'my_app_lock_1':
SELECT GET_LOCK('my_app_lock_1', 1000); -- lock name 'my_app_lock_1', timeout in seconds (not milliseconds)
+---------------------------------+
| GET_LOCK('my_app_lock_1', 1000) |
+---------------------------------+
| 1 |
+---------------------------------+
Releasing the lock:
DO RELEASE_LOCK('my_app_lock_1'); -- DO executes the expression and discards the result set
Please note (the quotes from MariaDB's documentation):
Names are locked on a server-wide basis. If a name has been locked by one client, GET_LOCK() blocks any request by another client for a lock with the same name. This allows clients that agree on a given lock name to use the name to perform cooperative advisory locking. But be aware that it also allows a client that is not among the set of cooperating clients to lock a name, either inadvertently or deliberately, and thus prevent any of the cooperating clients from locking that name. One way to reduce the likelihood of this is to use lock names that are database-specific or application-specific. For example, use lock names of the form db_name.str or app_name.str.
Locks obtained with GET_LOCK() do not interact with transactions.
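Putting those pieces together, the acquire/work/release discipline can be sketched as a Python context manager. This is only a sketch: the `named_lock` helper and the lock-name convention are mine, and the stub cursor stands in for a real DB-API cursor connected to MySQL so the example runs without a server. Per the MariaDB advice above, the lock name is namespaced with a database prefix.

```python
from contextlib import contextmanager

@contextmanager
def named_lock(cursor, name, timeout_s=10):
    """Acquire a MySQL advisory lock via GET_LOCK, releasing it on exit.

    `cursor` is any DB-API cursor connected to MySQL. GET_LOCK's timeout is
    in seconds, and the lock is connection-scoped: it does not interact with
    transactions and is released automatically if the connection dies.
    """
    cursor.execute("SELECT GET_LOCK(%s, %s)", (name, timeout_s))
    (acquired,) = cursor.fetchone()
    if acquired != 1:
        raise TimeoutError("could not acquire lock %r" % name)
    try:
        yield
    finally:
        cursor.execute("DO RELEASE_LOCK(%s)", (name,))

# Stub cursor so the sketch runs without a real server; it records the
# statements issued and always grants the lock.
class FakeCursor:
    def __init__(self):
        self.queries = []
    def execute(self, sql, params=()):
        self.queries.append(sql)
    def fetchone(self):
        return (1,)

cur = FakeCursor()
# Namespaced lock name, one per bookable slot (naming scheme is hypothetical).
with named_lock(cur, "my_db.slot_2019_07_01_09", timeout_s=5):
    pass  # check availability and book the appointment slot here
```

The `try/finally` is the point: the lock is released even if the booking code throws, which addresses the "should never bail without unlocking" concern from the question.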
Answering my own question here. A variation of the following is what we eventually ended up doing (in PHP):
<?php
// Note: the legacy mysql_* API used here is deprecated (removed in PHP 7);
// the same pattern works with mysqli or PDO.
$conn = mysql_connect('localhost', 'name', 'pass');
if (!$conn) {
    echo "Unable to connect to DB: " . mysql_error();
    exit;
}
if (!mysql_select_db("my_db")) {
    echo "Unable to select my_db: " . mysql_error();
    exit;
}

mysql_query('SET AUTOCOMMIT=0'); // very important! this makes FOR UPDATE work
mysql_query('START TRANSACTION');

$sql = "SELECT * FROM my_mutex_table WHERE entity_id = 'my_mutex_key' FOR UPDATE";
$result = mysql_query($sql);
if (!$result) {
    echo "Could not successfully run query ($sql): " . mysql_error();
    exit;
}
if (mysql_num_rows($result) == 0) {
    echo "No rows found, nothing to lock on, so am exiting";
    exit;
}

echo 'Locked. Hit Enter to unlock...';
$response = trim(fgets(STDIN));

mysql_query('COMMIT'); // ends the transaction and releases the row lock
mysql_free_result($result);
echo "Unlocked\n";
?>
To verify it works, run it from two different consoles. Time performance is a bit worse than with standard file-lock-based mutexes, but still very acceptable.
I have been on a wild goose chase trying to find the culprit that keeps downing our website. I tracked it down using top -i, then examined the process showing 90+% CPU usage with pidstat -t -p {PROCESS_ID} 1. Finally I stopped pidstat, grabbed the TID of the mysqld task, and looked it up in MySQL on the CLI:
mysql> select * from performance_schema.threads where THREAD_OS_ID = {PROCESS_ID} \G
*************************** 1. row ***************************
THREAD_ID: 61
NAME: thread/sql/one_connection
TYPE: FOREGROUND
PROCESSLIST_ID: 36
PROCESSLIST_USER: {USER}
PROCESSLIST_HOST: localhost
PROCESSLIST_DB: {DB_NAME}
PROCESSLIST_COMMAND: Query
PROCESSLIST_TIME: 0
PROCESSLIST_STATE: Sending data
PROCESSLIST_INFO: SELECT ID
FROM wp_posts
WHERE post_title = 'https://med05.example.co.uk/in4glestates/{SERIAL_NUMBER}/{SERIAL_NUMBER}/main/LOGO-MA-Roof-Seating.jpg'
AND post_type = 'attachment'
PARENT_THREAD_ID: NULL
ROLE: NULL
INSTRUMENTED: YES
HISTORY: YES
CONNECTION_TYPE: Socket
THREAD_OS_ID: 8189
1 row in set (0.00 sec)
Now I am aware of the query that is causing so much trouble, and the PROCESSLIST_INFO from the mysql query suggests it is this block of code that is causing so much demand on the CPU:
private static function reset_attachments($new_property)
{
    global $wpdb;
    // Use $wpdb->prepare() so the parent ID is safely cast to an integer
    // rather than interpolated directly into the SQL string.
    $sql = $wpdb->prepare(
        "SELECT ID FROM {$wpdb->prefix}posts WHERE post_parent = %d AND post_type = 'attachment'",
        $new_property
    );
    $res = $wpdb->get_results($sql);
    foreach ($res as $row) {
        wp_delete_attachment($row->ID, true); // true = force delete, bypassing trash
    }
    return null;
}
I am not a beginner, but this is at the limit of what I can do. Is there anyone who can help me speed my function reset_attachments() up to a tolerable level for my CPU?
Why is this function requiring so much CPU?
What is a more efficient way to write my function reset_attachments()?
As always, if you would like any further information, please let it be known and I am more than happy to help you help me!
EDIT:
Thought to append this answer here referencing the PROCESSLIST_STATE: Sending data:
https://stackoverflow.com/a/24626122/10134447
How can I amend the function reset_attachments() to break it down so it can handle the query in a staggered approach?
I am happy to sacrifice some execution time if it can bring the CPU load down.
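One way to trade execution time for CPU load, as asked, is to process the attachments in small batches with a pause between batches. The sketch below is Python rather than PHP, and `delete_fn` is a hypothetical stand-in for a wp_delete_attachment-style call; it only illustrates the staggering pattern, not the WordPress API.

```python
import time

def chunked(items, size):
    """Yield successive lists of at most `size` items."""
    batch = []
    for item in items:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

def reset_attachments_staggered(ids, delete_fn, batch_size=50, pause_s=0.5):
    """Delete attachments in small batches, sleeping between batches so a
    single request cannot monopolise the CPU. `delete_fn` stands in for
    wp_delete_attachment; `pause_s` trades wall-clock time for lower load."""
    deleted = 0
    for batch in chunked(ids, batch_size):
        for attachment_id in batch:
            delete_fn(attachment_id)
            deleted += 1
        time.sleep(pause_s)
    return deleted
```

In the PHP version you would fetch IDs with `LIMIT` in the same batch sizes and `sleep()` between rounds; the structure is identical.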
Okay, I understand what errors and warnings are in the context of MySQL. But what is the need for a note-level warning? I have already searched the MySQL documentation but didn't find anything relevant. It would be great if someone could shed some light on what they are and why they are useful.
mysql> create database if not exists city;
Query OK, 1 row affected, 1 warning (0.00 sec)
mysql> show warnings
-> ;
+-------+------+------------------------------------------------+
| Level | Code | Message |
+-------+------+------------------------------------------------+
| Note | 1007 | Can't create database 'city'; database exists |
+-------+------+------------------------------------------------+
1 row in set (0.00 sec)
I've always considered Note to be like an "FYI": something happened, or didn't, that may be of interest. The closest definition I can find in the docs is:
... events that do not affect the integrity of the reload operation
which is from the sql_notes server variable, one perhaps not often used outside of mysqldump.
Trawling through the MySQL source code, it looks like Sql_condition::SL_NOTE annotates warnings of this level. There are a few, but they are mostly, as you'd expect, non-impactful information:
Event already exists
Table already exists
Query '%s' rewritten to '%s' by a query rewrite plugin
Password set
Sadly, I would have expected the code docblock to give a little more information about them, but it doesn't:
class Sql_condition {
public:
/**
Enumeration value describing the severity of the condition.
*/
enum enum_severity_level { SL_NOTE, SL_WARNING, SL_ERROR, SEVERITY_END };
This might warrant a documentation bug report to MySQL team.
Interestingly, MariaDB has this to say:
A note is different to a warning in that it only appears if the sql_notes variable is set to 1 (the default), and is not converted to an error if strict mode is enabled.
My takeaway from that, in Maria and possibly by extension MySQL: notes are warnings, but ones that can be ignored because no data-loss or side-effect is described.
I am executing this code:
$model1 = Mage::getModel('enterprise_targetrule/index')->load(5511);
var_dump($model1);
$model2 = Mage::getModel('enterprise_targetrule/index')->
load(5511)->
setFlag('0')->
save();
var_dump($model2);
$model3 = Mage::getModel('enterprise_targetrule/index')->load(5511);
var_dump($model3);
die();
The outputs from the var_dump calls are exactly what I would expect: $_data[flag] is 1 for $model1, 0 for $model2 and $model3, and $_origData[flag] is 1 for $model1 and $model2, and 0 for $model3.
So far, it is all looking exactly right. However, when I then (immediately after running this code), execute select * from enterprise_targetrule_index on my database, I get this result:
mysql> select * from enterprise_targetrule_index;
+-----------+----------+-------------------+---------+------+
| entity_id | store_id | customer_group_id | type_id | flag |
+-----------+----------+-------------------+---------+------+
| 5511 | 7 | 0 | 1 | 1 |
+-----------+----------+-------------------+---------+------+
WHY?
Why is the flag not getting updated? The models are correct, all the fields are correct, the save and load calls all succeed and return perfect results, but the database is not updated! It's like the change I save() doesn't get written, and yet can somehow still be loaded, at least within that script. What is going on here? What is special about this model, that makes it unable to save?
$model2 = Mage::getModel('enterprise_targetrule/index')->load(5511);
$model2->setFlag('0');
$model2->save();
echo $model2->getFlag();
If you use var_dump, it displays the whole object.
Turns out the reason for this is the die(). Magento wraps these saves in a transaction, so the queries are only performed in memory; they don't get written to the database until the process terminates normally and the transaction commits. Because I was using die(), the commit never happened and the queries were never persisted.
I was thrown by this initially because it all occurs in memory rather than in the database. Once the transaction commits, everything is written at once, which is why I didn't see a rollback command in the MySQL general log: nothing was technically rolled back; the first query simply never got written. Very strange, and it does make testing harder, but good to keep in mind.
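This commit-gates-visibility behavior is common to transactional stores generally, not just Magento/MySQL. A minimal illustration using Python's stdlib sqlite3 (chosen only because it runs anywhere; the principle is the same): the writing connection sees its own uncommitted row, a second connection sees nothing until the commit, and exiting before the commit would lose the write entirely.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path)
writer.execute("CREATE TABLE t (flag INTEGER)")
writer.commit()

# Uncommitted work is visible to the writer's own connection...
writer.execute("INSERT INTO t VALUES (0)")
assert writer.execute("SELECT COUNT(*) FROM t").fetchone()[0] == 1

# ...but a second connection (like a fresh mysql> session) sees nothing yet,
# just as the script saw flag=0 in memory while the table still held flag=1.
reader = sqlite3.connect(path)
assert reader.execute("SELECT COUNT(*) FROM t").fetchone()[0] == 0

# Only an explicit commit makes the change durable; die()-ing before the
# commit means the insert is simply never written.
writer.commit()
assert reader.execute("SELECT COUNT(*) FROM t").fetchone()[0] == 1
```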
I have a script residing on my webserver's cron that should run every night. It has stopped running recently because it exceeds the time limit the webserver sets on cron jobs. It used to run fine, and any time I ran it manually it was very quick (well under 5 minutes). All of a sudden, it takes over half an hour.
The script basically updates a MySQL database. The DB is around 60mb according to them. I can't seem to find this information, but it seems reasonable (though the file that I transfer to the server every night is only around 2mb).
I have undertaken their steps suggested to optimize my DB, but nothing really came of that. It still takes ages for the script to run. All the script does is delete everything out of the DB and fill it in again with our updated inventory.
So now I am running "show full processlist" on one Putty window, while running the script in another. "show full processlist" shows only a couple of items, both of which show 0 for the time.
mysql> show full processlist;
+-----------+--------------+--------------------+-------------------------+---------+------+-------+-----------------------+
| Id | User | Host | db | Command | Time | State | Info |
+-----------+--------------+--------------------+-------------------------+---------+------+-------+-----------------------+
| 142841868 | purposely omitted | purposely omitted | purposely omitted_net_-_main | Sleep | 0 | | NULL |
| 142857238 | purposely omitted | purposely omitted | NULL | Query | 0 | NULL | show full processlist |
+-----------+--------------+--------------------+-------------------------+---------+------+-------+-----------------------+
2 rows in set (0.05 sec)
If I keep using the show full processlist command really quickly, occasionally I can catch other things being listed in this table but then they disappear the next time I run it. This indicates to me that they are being processed very quickly!
So does anyone have any ideas what is going wrong? I am fairly new to this :(
Thanks!!
PS here is my code
#!/usr/bin/perl
use strict;
use DBI;
my $host = 'PURPOSELY OMITTED';
my $db = 'PURPOSELY OMITTED';
my $db_user = 'PURPOSELY OMITTED';
my $db_password = "PURPOSELY OMITTED";
my $dbh = DBI->connect("dbi:mysql:$db:$host", "$db_user", "$db_password");
$dbh->do("DELETE FROM main");
$dbh->do("DELETE FROM keywords");
open FH, "PURPOSELY OMITTED" or die;
while (my $line = <FH>) {
my @rec = split(/\|/, $line);
print $rec[1].' : '.$rec[2].' : '.$rec[3].' : '.$rec[4].' : '.$rec[5].' : '.$rec[6].' : '.$rec[7];
$rec[16] =~ s/"//g;
$rec[17] =~ s/"//g;
chomp($rec[13]);
my $myquery = "INSERT INTO main (medium, title, artist, label, genre, price, qty, catno,barcode,created,received,tickler,blurb,stockid) values (\"$rec[0]\",\"$rec[1]\",\"$rec[2]\",\"$rec[3]\",\"$rec[4]\",\"$rec[5]\",\"$rec[6]\",\"$rec[7]\",\"$rec[8]\",\"$rec[9]\",\"$rec[10]\",\"$rec[11]\",\"$rec[12]\",\"$rec[13]\")";
$dbh->do($myquery);
$dbh->do("INSERT IGNORE INTO keywords VALUES (0, '$rec[2]','$rec[13]')");
$dbh->do("INSERT LOW_PRIORITY IGNORE INTO keywords VALUES (0, \"$rec[1]\", \"$rec[13]\")");
print "\n";
}
close FH;
$dbh->disconnect();
I have two suggestions:
(less impact) Use TRUNCATE instead of DELETE; it is significantly faster, and it is particularly easy to use when you don't need to worry about preserving an auto-increment value (TRUNCATE resets it).
Restructure slightly to work in batches for the inserts. Usually I do this by keeping a buffer of a given size (start with something like 20 rows); for the first 19 rows it just fills the buffer, but on the 20th row it actually performs the insert and resets the buffer. It might boggle your mind how much this can improve performance :-)
Pseudo-code:
const buffer_size = 20
while (row) {
    stack.addvalues(row.values)
    if (stack.size >= buffer_size) {
        // INSERT INTO mytable (fields) VALUES stack.all_values()
        stack.empty()
    }
}
// after the loop, flush any rows still left in the stack
Then play with the buffer size. I have seen scripts where tweaking the buffer to upwards of 100-200 rows at a time sped up massive imports by almost as many times (i.e. a drastically disproportionate amount of the work was in the overhead of executing the individual INSERTs: network round-trips, parsing, etc.).
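The buffering idea can be made concrete with a small generator that groups rows into multi-row INSERT statements. This is a sketch in Python rather than the asker's Perl, and the table/column names in the usage example are illustrative, not taken from the original script.

```python
def batched_insert_statements(table, columns, rows, batch_size=20):
    """Group `rows` into multi-row INSERT statements of at most `batch_size`
    rows each, yielding (sql, params) pairs for a DB-API cursor.

    One server round-trip per batch replaces one round-trip per row, which
    is where the speedup comes from.
    """
    row_ph = "(" + ", ".join(["%s"] * len(columns)) + ")"

    def flush(buf):
        sql = "INSERT INTO %s (%s) VALUES %s" % (
            table, ", ".join(columns), ", ".join([row_ph] * len(buf)))
        return sql, [value for row in buf for value in row]

    buf = []
    for row in rows:
        buf.append(row)
        if len(buf) == batch_size:
            yield flush(buf)
            buf = []
    if buf:  # don't forget the final partial batch
        yield flush(buf)

# 45 rows with batch_size=20 should produce 3 statements (20 + 20 + 5).
rows = [("cd", "Title %d" % i) for i in range(45)]
stmts = list(batched_insert_statements("main", ["medium", "title"], rows))
```

Using placeholders (`%s`) also sidesteps the quoting/escaping the original script does by hand, which is worth doing regardless of speed.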
I've built a large MySQL database with a lot of views, triggers, functions and procedures.
It's very hard to test without forgetting anything, so I've written Cucumber scenarios for all of the features of my DB (INSERT, SELECT, etc., requests on functions and procedures, and views).
This helps us a lot when we test the behavior of all this, and even before writing views and other code it's very helpful for determining what we really want to do.
My problem is: after writing the Cucumber features, we still test everything by hand in a MySQL shell.
I'm new in BDD/TDD and Agile methods, but I've done some search to know how to make some automation, but found nothing very interesting for my case.
Is there somebody who can provide some interesting way to create automation for this?
I don't know Ruby, but, for example, is it possible to use RSpec directly with MySQL (with some examples)?
Or in another language, or any solution you can think of!
Thanks in advance!
[EDIT]
I found some interesting things with RSpec and MySQL:
Mysql Support For Cucumber Nagios
mysql_steps.rb
My problem is: I don't have any knowledge of Ruby, RSpec, etc.
I'm working on it with the excellent "Pickaxe" book and the RSpec book from PragProg,
but I would be very grateful for a little example of RSpec steps given the code below:
The MySQL Procedure
DELIMITER $$
CREATE PROCEDURE `prc_liste_motif` (
IN texte TEXT,
IN motif VARCHAR(255),
OUT nb_motif INT(9),
OUT positions TEXT)
BEGIN
DECLARE ER_SYNTAXE CONDITION FOR SQLSTATE '45000';
DECLARE sousChaine TEXT;
DECLARE positionActuelle INT(9) DEFAULT 1;
DECLARE i INT(9) DEFAULT 1;
IF
LENGTH(motif) > LENGTH(texte)
THEN
SIGNAL ER_SYNTAXE
SET MESSAGE_TEXT =
'Bad Request: Le motif est plus long que le texte.',
MYSQL_ERRNO = 400;
END IF;
SET positions = '';
SET nb_motif = 0;
REPEAT
SET sousChaine = SUBSTRING_INDEX(texte, motif, i);
SET positionActuelle = LENGTH(sousChaine) + 1;
IF
positionActuelle < LENGTH(texte) + 1
THEN
IF
LENGTH(positions) > 0
THEN
SET positions = CONCAT(positions, ',');
END IF;
SET positions = CONCAT(positions, positionActuelle);
SET nb_motif = nb_motif + 1;
END IF;
SET i = i + 1;
UNTIL LENGTH(sousChaine) >= LENGTH(texte)
END REPEAT;
END$$
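To make the procedure's expected behavior easy to assert against in any test harness, its logic can be restated outside SQL. The sketch below is a Python rendition (mine, not part of the question): it counts non-overlapping occurrences of `motif` in `texte` and reports their 1-based positions as a CSV string, mirroring the SUBSTRING_INDEX loop, which also scans matches non-overlappingly.

```python
def liste_motif(texte, motif):
    """Python sketch of prc_liste_motif's logic.

    Returns (count, csv_of_positions) with 1-based positions, as SQL string
    functions use. Raises on a motif longer than the text, mirroring the
    procedure's SIGNAL.
    """
    if len(motif) > len(texte):
        raise ValueError("Bad Request: Le motif est plus long que le texte.")
    positions = []
    start = 0
    while True:
        idx = texte.find(motif, start)
        if idx == -1:
            break
        positions.append(idx + 1)   # SQL string positions start at 1
        start = idx + len(motif)    # skip past the match (non-overlapping)
    return len(positions), ",".join(str(p) for p in positions)
```

Running it against the Scenario Outline's examples table gives exactly the expected `out_nb_motif` / `out_positions` pairs, which is a handy cross-check when writing the step definitions.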
The Cucumber feature:
Feature: Procedure prc_liste_motif
In order to precess a string according to a given unit
I want to know the number of units present in the chain and their positions
Knowing that the index starts at 1
Background: the database mydatabase in our SGBDR server
Given I have a MySQL server on 192.168.0.200
And I use the username root
And I use the password xfe356
And I use the database mydatabase
Scenario Outline: Using the procedure with good values in parameters
Given I have a procedure prc_liste_motif
And I have entered <texte> for the first parameter
And I have entered <motif> for the second parameter
And I have entered <nb_motif> for the third parameter
And I have entered <positions> for the fourth parameter
When I call prc_liste_motif
Then I should have <out_nb_motif> instead of <nb_motif>
Then I should have <out_positions> instead of <positions>
Examples:
| texte | motif | nb_motif | positions | out_nb_motif | out_positions |
| Le beau chien | e | | | 3 | 2,5,12 |
| Allo | ll | | | 1 | 2 |
| Allo | w | | | 0 | |
An example of a test passed by hand in MySQL (note: no space after -p, otherwise mysql treats the next word as the database name):
$ mysql -h 192.168.0.200 -u root -pxfe356
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.5.9 MySQL Community Server (GPL)
Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> USE mydatabase
Database changed
mysql> SET @texte = 'Le beau chien';
Query OK, 0 rows affected (0.00 sec)
mysql> SET @motif = 'e';
Query OK, 0 rows affected (0.00 sec)
mysql> SET @nb_motif = NULL;
Query OK, 0 rows affected (0.00 sec)
mysql> SET @positions = NULL;
Query OK, 0 rows affected (0.00 sec)
mysql> SET @out_nb_motif = 3;
Query OK, 0 rows affected (0.00 sec)
mysql> SET @out_positions = '2,5,12';
Query OK, 0 rows affected (0.00 sec)
mysql> CALL prc_liste_motif(@texte, @motif, @nb_motif, @positions);
Query OK, 0 rows affected (0.00 sec)
mysql> SELECT @nb_motif = @out_nb_motif AND @positions = @out_positions;
+-----------------------------------------------------------+
| @nb_motif = @out_nb_motif AND @positions = @out_positions |
+-----------------------------------------------------------+
| 1 |
+-----------------------------------------------------------+
1 row in set (0.00 sec)
Thanks in advance for your help!
Here is some pseudocode for one way you could test your database with RSpec:
describe "prc_liste_motif" do
before(:all) do
# Set up database connection here
end
describe "good values" do
context "Le beau chien" do
let(:texte) { "Le beau chien" }
# Set up other variables here
let(:results) { call_prc_liste_motif } # helper you define to CALL the procedure
it "has the correct out_nb_motif" do
out_nb_motif = # however you derive this from the results of the procedure
out_nb_motif.should == 3
end
it "has the correct out_positions" do
# test out_positions here
end
end
end
end
One thing I noticed in your sample manual test was how you are checking the results:
SELECT #nb_motif = #out_nb_motif AND #positions = #out_positions;
This will tell you whether or not both values are correct, but if the query returns 0, you do not immediately know which of the two values is wrong, nor what value you got instead; finding that out requires more investigation.
By splitting up the checking for these two values into 2 RSpec tests, when the tests have finished running you can know if both are correct, if one is incorrect, or if both are incorrect. If one or both are incorrect, RSpec will also return a message for the failed test that says "Expected 3, got 4" which can help you debug faster.
As you add more tests for different inputs, I recommend refactoring the pseudocode I've given here to use shared_examples_for. The PragProg RSpec book that you're already reading is a great reference.
Cucumber's a natural-language BDD tool, which is designed to get non-technical stakeholders on board so that you can have conversations with them about what the system should do. It also lets you reuse steps quite easily - similar contexts, events and outcomes.
If you're writing a database, I think it's likely that your users, and the audience for that database, will be technical. There may also be limited opportunities for reusing steps, so Cucumber may not be the best tool. You're probably right about moving to something like RSpec instead. The English-language tools introduce a layer of abstraction and another aspect to maintenance which can be a pain in the neck, so I'd pick a tool which suits what you're doing, rather than starting with the tool and trying to fit your needs around it.
Once you've done that, you can either use ActiveRecord to create domain-object results from your queries, or you can just call the SQL directly. RSpec is just Ruby with some matchers. This forum might help you.
Something else you could do is knock up a small application which actually uses your database. Not only will this ensure that your database is genuinely valuable; it will provide users with examples of how to use it. That won't be very difficult to do with Rails. If you go down this route, then you can use Cucumber with something like Webrat or Watir if you want to, because you'll be documenting the kinds of things that other applications could use your database for at a higher level. Just make sure that any live examples you provide go to test data instead of production, and that if your little example app suddenly turns into the real app (which sometimes happens), you're in a position to spot that happening and take appropriate political and financial steps.
Java also has quite a lot of support for MySQL and you could use Hibernate instead of ActiveRecord, but I think the maintenance costs will be much less in Ruby.