I have a C process that is rapidly writing to a MySQL database, roughly 10 times per second. This process uses the MySQL C Connector.
After about 2 minutes of running, the process hangs, the system monitor shows "futex_wait_queue_me", and "Can't initialize threads: error 11" is printed to the console, I assume by the C connector library (since I do not print this). Following that write, connections to MySQL fail with "MySQL server has gone away".
What could be causing this? I am only writing from one thread.
FYI, I am using the library as shown below. The mutex lock and unlock are there for the future, as I will be multithreading the logging. The logging events in the actual app will be much less frequent, but I am trying to stress it as much as possible in this particular test.
//pseudocode:
while(1)
{
    mutex lock
    connect();
    mysql_query();
    disconnect();
    sleep(100ms);
    mutex unlock
}
A better solution, maybe not the best:
connect();
while(1)
{
    mutex lock
    if(error on mysql_query())
    {
        disconnect();
        connect();
    }
    sleep(100ms);
    mutex unlock
}
//connect/disconnect functions
int DBConnector::connect()
{
    if(DBConnector::m_isConnected) return 0; // already connected...
    if(!mutexInitialized)
    {
        pthread_mutex_init(&DBLock, 0);
        mutexInitialized = true; // remember this, so the mutex is only initialized once
    }
    if(mysql_library_init(0, NULL, NULL))
    {
        LoggingUtil::logError("DBConnector.DB_connect [DB library init error] " + string(mysql_error(&DBConnector::m_SQLHandle)));
        DBConnector::m_isConnected = false;
        return -1;
    }
    if((mysql_init(&m_SQLHandle)) == NULL)
    {
        LoggingUtil::logError("DBConnector.DB_connect [DB mysql init error] " + string(mysql_error(&DBConnector::m_SQLHandle)));
        DBConnector::m_isConnected = false;
        return -1;
    }
    if((mysql_real_connect(&DBConnector::m_SQLHandle, host.c_str(), user.c_str(), pw.c_str(), db.c_str(), port, socket.c_str(), client_flags)) == NULL)
    {
        LoggingUtil::logError("DBConnector.DB_connect [DB Connect error] " + string(mysql_error(&DBConnector::m_SQLHandle)));
        DBConnector::m_isConnected = false;
        return -1;
    }
    DBConnector::m_isConnected = true;
    return 0;
}

int DBConnector::disconnect()
{
    DBConnector::m_isConnected = false;
    mysql_close(&DBConnector::m_SQLHandle);
    mysql_library_end();
    return 0;
}
Try not to call
mysql_library_init(0, NULL, NULL);
and
mysql_library_end();
at each connection attempt.
Also, your second idea of not reconnecting at every MySQL access is much better, as establishing a connection always takes some time and resources, for nothing in your case.
After a query has failed, you don't need to re-connect to the database.
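For illustration, here is a minimal sketch of that pattern (this is not the poster's code; the handle name, credentials, table, and column are made up): the client library is initialized once per process, a single connection is kept open, and the session is only revived when a query actually fails.

// Sketch: initialize the client library once, keep one connection open, and
// only recover the session when a query fails.
#include <mysql/mysql.h>
#include <pthread.h>
#include <unistd.h>

static MYSQL           g_mysql;                         // illustrative handle
static pthread_mutex_t DBLock = PTHREAD_MUTEX_INITIALIZER;

static bool db_open()
{
    mysql_init(&g_mysql);                               // per-handle init
    return mysql_real_connect(&g_mysql, "localhost", "user", "pw",
                              "test", 3306, NULL, 0) != NULL;
}

int main()
{
    if (mysql_library_init(0, NULL, NULL) != 0)         // once per process
        return 1;
    if (!db_open())
        return 1;

    for (int i = 0; i < 1000; ++i)                      // stress loop
    {
        pthread_mutex_lock(&DBLock);
        if (mysql_query(&g_mysql, "INSERT INTO log_table (created_at) VALUES (NOW())") != 0)
        {
            // mysql_ping() checks the session; with MYSQL_OPT_RECONNECT
            // enabled it re-establishes a dropped connection, so there is
            // no need to tear the whole library down and back up.
            mysql_ping(&g_mysql);
        }
        pthread_mutex_unlock(&DBLock);
        usleep(100 * 1000);                             // 100 ms
    }

    mysql_close(&g_mysql);
    mysql_library_end();                                // once, at shutdown
    return 0;
}

If the logging later becomes multithreaded, keep in mind that a MYSQL handle must only be used by one thread at a time (so either keep the mutex or give each thread its own handle), and application threads that use the C API should call mysql_thread_init()/mysql_thread_end().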
Related
In my context, I'm running 2 stored procedures asynchronously using EF Core. This is causing me a deadlock and a timeout issue.
Below is the code of the method that calls the other two methods, which invoke the stored procedures:
public PortfolioPublishJobStep...
private async Task DoExecuteAsync(ProcessingContext context)
{
    var (startDate, endDate) = GetInterval(context);
    var portfolioApiId = context.Message.ManagedPortfolioApiId;

    using var transactionScope = TransactionScopeFactory.CreateTransactionScope(timeout: Timeout, transactionScopeAsyncFlowOption: TransactionScopeAsyncFlowOption.Enabled);

    var asyncTasks = new List<Task>();
    foreach (var publishableService in _publishableServices)
    {
        var asyncTask = publishableService.PublishAsync(portfolioApiId, startDate, endDate);
        asyncTasks.Add(asyncTask);
    }

    await Task.WhenAll(asyncTasks.ToArray()).ConfigureAwait(continueOnCapturedContext: false);
    transactionScope.Complete();
}
And here below are the classes/methods that invoke their respective stored procedures.
PortfolioFinancialBenchmarkDataService...
public async Task PublishAsync(string portfolioApiId, DateTime startDate, DateTime endDate)
{
    if (string.IsNullOrWhiteSpace(portfolioApiId))
    {
        throw new ArgumentException(nameof(portfolioApiId));
    }

    var repository = UnitOfWork.Repository<PortfolioFinancialBenchmarkData>();
    await repository.RemoveAsync(x => x.PortfolioApiId == portfolioApiId && x.ReferenceDate >= startDate && x.ReferenceDate <= endDate).ConfigureAwait(continueOnCapturedContext: false);

    var parameters = new[]
    {
        DbParameterFactory.CreateDbParameter<MySqlParameter>("@PortfolioApiId", portfolioApiId),
        DbParameterFactory.CreateDbParameter<MySqlParameter>("@StartDate", startDate),
        DbParameterFactory.CreateDbParameter<MySqlParameter>("@EndDate", endDate)
    };

    await repository.ExecuteSqlCommandAsync("CALL PublishPortfolioFinancialBenchmarkData(@PortfolioApiId, @StartDate, @EndDate);", parameters).ConfigureAwait(continueOnCapturedContext: false);
}
And this:
PortfolioFinancialDataService...
public async Task PublishAsync(string portfolioApiId, DateTime startDate, DateTime endDate)
{
    if (string.IsNullOrWhiteSpace(portfolioApiId))
    {
        throw new ArgumentException(nameof(portfolioApiId));
    }

    var repository = UnitOfWork.Repository<PortfolioFinancialData>();
    await repository.RemoveAsync(x => x.PortfolioApiId == portfolioApiId && x.ReferenceDate >= startDate && x.ReferenceDate <= endDate).ConfigureAwait(continueOnCapturedContext: false);

    var parameters = new[]
    {
        DbParameterFactory.CreateDbParameter<MySqlParameter>("@PortfolioApiId", portfolioApiId),
        DbParameterFactory.CreateDbParameter<MySqlParameter>("@StartDate", startDate),
        DbParameterFactory.CreateDbParameter<MySqlParameter>("@EndDate", endDate)
    };

    await repository.ExecuteSqlCommandAsync("CALL PublishPortfolioFinancialData(@PortfolioApiId, @StartDate, @EndDate);", parameters).ConfigureAwait(continueOnCapturedContext: false);
}
I believe the problem is the simultaneous connections to the database.
I thought I had mitigated this using TransactionScopeAsyncFlowOption, as I've seen suggested elsewhere, but the problem persists.
During the execution of the procedures, a deadlock occurs on one of the tables that one of the procedures feeds, and a timeout error follows.
And the exception message:
MySqlConnector.MySqlException (0x80004005): Lock wait timeout exceeded; try restarting transaction
At some point I also received the following message:
MySqlConnector.MySqlException (0x80004005): XA_RBDEADLOCK: Transaction branch was rolled back: deadlock was detected
Tests I performed:
Set the database timeout from 50 to 100 s: fail.
Set the PortfolioPublishJobStep timeout from 2 to 3 min and the PortfolioFinancialBenchmarkDataService and PortfolioFinancialDataService timeouts from 1 to 2 min: fail.
Run only 1 of the 2 stored procedures: success.
Run the procedures synchronously: success.
Thus, I conclude that the problem may be in the opening of two transactions, and I believe that one may be waiting for the other to finish...
There are many situations in which MySQL takes locks. For example:
Performing a DML operation without committing, and then performing a delete operation, will lock the table.
Inserting and updating the same row within the same transaction.
Improper table index design, which leads to deadlocks in the database.
A long transaction blocking a DDL statement, which then blocks all subsequent operations on the same table.
Solution
Emergency measure:
Run SHOW FULL PROCESSLIST; to find the problematic connection, then kill it:
SHOW FULL PROCESSLIST;
KILL <id>;
Sometimes the processlist does not show where the lock wait is: when the two transactions are both in the commit phase, it is not reflected in the processlist.
The longer-term approach:
SELECT * FROM information_schema.innodb_trx;
Check which transactions are occupying table resources.
Using this view effectively requires some understanding of InnoDB.
innodb_lock_wait_timeout: the time an InnoDB DML operation waits for a row-level lock.
innodb_lock_wait_timeout is the longest time a transaction waits to obtain the resource.
If the resource has not been granted after this time, the statement returns a failure to the application; when the lock wait exceeds the configured time, the following error is reported: ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction.
The unit of the parameter is seconds; the minimum is 1 s (generally it is not set that low), the maximum is 1073741824 seconds, and the default is 50 s.
How do you modify the value of innodb_lock_wait_timeout?
SET innodb_lock_wait_timeout = 100; (session scope) or SET GLOBAL innodb_lock_wait_timeout = 100;
Or modify the parameter file /etc/my.cnf: innodb_lock_wait_timeout = 50
I've been testing Go in hopes of using it for a new site, and I wanted to make sure it was as fast as or faster than PHP. So I ran a basic test doing bulk inserts in Go and PHP, because I'll need bulk inserts.
My tests used transactions, prepared statements, the same machine, the exact same table definition, no index but the PK, and the same logic in the function.
Results:
100k Inserts in PHP (mysqli) was 4.42 seconds
100k Inserts in Go (Go-MySQL-Driver) was 9.2 seconds
The Go MySQL driver I'm using is the most popular one, Go-MySQL-Driver, found here: https://github.com/go-sql-driver/mysql
I'm wondering if anyone can tell me whether my Go code is not set up right or whether this is just how Go is.
The functions add a bit of variability to a few of the row variables just so every row isn't the same.
Go Function:
func fill_table(w http.ResponseWriter, r *http.Request, result_string *string, num_entries_to_add int) {
    defer recover_show_error(result_string)

    db := getDBConn()
    defer db.Close()

    var int_a int = 9
    var int_b int = 4
    var int_01 int = 1
    var int_02 int = 1451628000 // Date Entered (2016-1-1, 1am)
    var int_03 int = 11
    var int_04 int = 0
    var int_05 int = 0
    var float_01 float32 = 90.0 // Value
    var float_02 float32 = 0
    var float_03 float32 = 0
    var text_01 string = ""
    var text_02 string = ""
    var text_03 string = ""

    start_time := time.Now()

    tx, err := db.Begin()
    if err != nil {
        panic(err)
    }

    stmt, err := tx.Prepare("INSERT INTO " + TABLE_NAME +
        "(`int_a`,`int_b`,`int_01`,`int_02`,`int_03`,`int_04`,`int_05`,`float_01`,`float_02`,`float_03`,`text_01`,`text_02`,`text_03`) " +
        "VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?)")
    if err != nil {
        panic(err)
    }
    defer stmt.Close()

    var flip int = 0
    for i := 0; i < num_entries_to_add; i++ {
        flip = ((int)(i / 500)) % 2
        if flip == 0 {
            float_01 += .1 // add to Value
        } else {
            float_01 -= .1 // sub from Value
        }
        int_02 += 1 // add a second to date.

        _, err = stmt.Exec(int_a, int_b, int_01, int_02, int_03, int_04, int_05, float_01, float_02, float_03, text_01, text_02, text_03)
        if err != nil {
            panic(err)
        }
    }

    err = tx.Commit()
    if err != nil {
        panic(err)
    }

    elapsed := time.Since(start_time)
    *result_string += fmt.Sprintf("Fill Table Time = %s</br>\n", elapsed)
}
PHP Function:
function FillTable($num_entries_to_add){
    $mysqli = new mysqli("localhost", $GLOBALS['db_username'], $GLOBALS['db_userpass'], $GLOBALS['database_name']);
    if ($mysqli->connect_errno == 0) {
        $int_a = 9;
        $int_b = 4;
        $int_01 = 1;
        $int_02 = 1451628000; // Date Entered (2016-1-1, 1am)
        $int_03 = 11;
        $int_04 = 0;
        $int_05 = 0;
        $float_01 = 90.0; // Value
        $float_02 = 0;
        $float_03 = 0;
        $text_01 = "";
        $text_02 = "";
        $text_03 = "";

        $mysqli->autocommit(FALSE); // This starts transaction mode. It will end when you use mysqli->commit();

        $sql = "INSERT INTO " . $GLOBALS['table_name'] .
            "(`int_a`,`int_b`,`int_01`,`int_02`,`int_03`,`int_04`,`int_05`,`float_01`,`float_02`,`float_03`,`text_01`,`text_02`,`text_03`) " .
            "VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?)";

        $start_time = microtime(true);

        if ($stmt = $mysqli->prepare($sql)) {
            $stmt->bind_param('iiiiiiidddsss', $int_a, $int_b, $int_01, $int_02, $int_03, $int_04, $int_05, $float_01, $float_02, $float_03, $text_01, $text_02, $text_03);

            $flip = 0;
            for ($i = 1; $i <= $num_entries_to_add; $i++) {
                $flip = ((int)($i / 500)) % 2;
                if ($flip == 0) {
                    $float_01 += .1; // add Value
                } else {
                    $float_01 -= .1; // sub Value
                }
                $int_02 += 1; // add a second to date.

                $stmt->execute(); // Executes the prepared insert
            }
            $mysqli->commit(); // Transaction mode ends now
            $stmt->close();    // Close statement
        }

        $execute_time = microtime(true) - $start_time;
        echo $GLOBALS['html_newline'] . $GLOBALS['html_newline'] .
            'FillDataEntryTable Speed: ' . $execute_time . ' sec' . $GLOBALS['html_newline'] . $GLOBALS['html_newline'];

        $thread_id = $mysqli->thread_id; // Get MySQL thread ID
        $mysqli->kill($thread_id);       // Kill MySQL Server connection
        $mysqli->close();                // Close MySQL Server connection
    }
}
In my testing to find which language I want to use for my new website, I experimented with PHP, Go, and Java. I don't have much experience with any of these languages, so anything I say here could be corrected by someone in the future.
My main test was batch inserts into the MySQL database, because I'll be needing them for an app.
I wanted to move away from PHP because it's an old, non-compiled scripting language that is slower at many things than Go and Java, and its syntax is awkward in places. However, PHP's mysqli is actually 2x faster than Go for large "transactions", unless you awkwardly spawn many goroutines to divide the work up.
During my testing and research I found out a few things.
The PHP mysqli "transactions" API is probably doing some kind of batching under the hood, because mysqli has no separate batch functions and yet its transactions are quicker than single inserts. In most other languages, transactions don't auto-batch everything and don't improve execution time by themselves; they are just a mechanism to roll back everything in the transaction if something goes wrong. What improves execution time in other languages is using batches.
But one of the big problems with the Go MySQL interface right now appears to be the lack of real support for batch operations. The closest I got was to jerry-rig my own batch operation, as pointed out by this post (golang - mysql Insert multiple data at once?). Doing this I was able to get the execution time in Go from 9.2 s down to 3.9 s without spawning other goroutines. But since there's no real support for it, the batch operation only returns a single result set for the first operation of the batch. This is worthless to me because I need the auto-increment IDs of my inserted rows. There were other problems with this setup too that I won't go into.
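For reference, here is a minimal sketch of that kind of jerry-rigged multi-row INSERT (this is not the code from the post above; the table and column names are made up, and MySQL's max_allowed_packet still limits how many rows fit into one statement):

// Sketch: pack a batch of rows into a single multi-row INSERT statement.
package batch

import (
    "database/sql"
    "strings"

    _ "github.com/go-sql-driver/mysql"
)

// bulkInsert writes all rows with one Exec call; every row must contain the
// same number of values, in the same order as the column list below.
func bulkInsert(db *sql.DB, rows [][]interface{}) error {
    if len(rows) == 0 {
        return nil
    }
    // build one "(?,?,?)" placeholder group per row
    group := "(" + strings.TrimSuffix(strings.Repeat("?,", len(rows[0])), ",") + ")"
    placeholders := strings.TrimSuffix(strings.Repeat(group+",", len(rows)), ",")

    // flatten the row values into a single argument list
    args := make([]interface{}, 0, len(rows)*len(rows[0]))
    for _, row := range rows {
        args = append(args, row...)
    }

    _, err := db.Exec("INSERT INTO test_table (`int_a`,`int_b`,`float_01`) VALUES "+placeholders, args...)
    return err
}

Note that with this shape the returned sql.Result only carries the first auto-increment ID of the batch, which is exactly the "single result" limitation mentioned above.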
So lastly I tried Java on a Tomcat server. The Tomcat/Java installation is a bit more involved than Go's, but programming in Java was much easier and more natural. JDBC is an excellent driver with full support for easy batch operations with prepared statements. It did the 100k inserts in only 1 second. It's the clear winner in my book. Plus, Java's syntax is much more natural than Go's, IMO.
I'm implementing a fluid simulator using PhysX. Unfortunately, something is wrong with the CUDA context manager and I have a problem recognizing what it is. I have an init method which looks like this:
void InitializePhysX() {
    bool recordMemoryAllocations = true;
    const bool useCustomTrackingAllocator = true;

    PxAllocatorCallback* allocator = &gDefaultAllocatorCallback;
    PxErrorCallback* error = &gDefaultErrorCallback;

    PxFoundation* mFoundation = PxCreateFoundation(PX_PHYSICS_VERSION, *allocator, *error);
    if(!mFoundation)
        printf("PxCreateFoundation failed!\n");

    PxProfileZoneManager* mProfileZoneManager = &PxProfileZoneManager::createProfileZoneManager(mFoundation);
    if(!mProfileZoneManager)
        printf("PxProfileZoneManager::createProfileZoneManager failed!\n");

#ifdef PX_WINDOWS
    pxtask::CudaContextManagerDesc cudaContextManagerDesc;
    pxtask::CudaContextManager* mCudaContextManager = pxtask::createCudaContextManager(*mFoundation, cudaContextManagerDesc, mProfileZoneManager);
    if( mCudaContextManager ){
        if( !mCudaContextManager->contextIsValid() ){
            mCudaContextManager->release();
            mCudaContextManager = NULL;
            printf("invalid context\n");
        }
    } else {
        printf("create cuda context manager failed\n");
    }
#endif

    mPhysX = PxCreatePhysics(PX_PHYSICS_VERSION, *mFoundation, PxTolerancesScale(), recordMemoryAllocations, mProfileZoneManager);
    if(!mPhysX)
        printf("PxCreatePhysics failed!\n");
    ...
}
When I try to run my application, it turns out that mCudaContextManager is never created properly. "create cuda context manager failed" is written to the console, along with:
"....\LowLevel\software\src\PxsContext.cpp (1122) : warning : GPU operation faied. No px::CudaContextManager available.
....\SimulationController\src\particles\ScParticleSystemSim.cpp (73) : warning : GPU particle system creation failed. Falling back to CPU implementation."
I have a GeForce 560 Ti with the newest driver (the error also shows up on a GeForce 460 on my friend's laptop). PhysX is set to use the GPU in the NVIDIA Control Panel.
Does anybody know what we did wrong and how to make the GPU work? Thanks in advance!
The file PhysX3Gpu_x86.dll was missing. I added it and now everything is fine.
I need to try to understand how MySQL processes/connections work. I have googled and don't see anything in layman's terms, so I'm asking here. Here is the situation.
Our host is giving us grief over "too many MySQL processes". We are on a shared server. We are allowed 0.2 of the server's MySQL processes, which they claim is 50 connections, and they say we are using 0.56.
From the technical support representative:
"Number of MySQL procs (average) - 0.59 meant that you were using
0.59% of the total MySQL connections available on the shared server. The acceptable value is 0.20 which is 50 connections. "
Here is what we are running:
Zen Cart 1.5.1, 35K products, auto-updating of 1-20 products every 10 hours via cron.
PHP version 5.3.16
MySQL version 5.1.62-cll
Architecture i686
Operating system linux
We generally get about 5000 hits per day on the site, and Googlebot loves to visit even though I have the crawl rate set to the minimum in Google Webmaster Tools.
I'm hoping someone can explain MySQL processes to me in terms of what this host is talking about. Every time I ask them I get an obfuscated answer that is vague and unclear. Is a new MySQL process created every time a visitor visits the site? That does not seem right.
According to the tech, we were using 150 connections at that particular time.
EDIT:
Here is the connection function in Zen Cart:
function connect($zf_host, $zf_user, $zf_password, $zf_database, $zf_pconnect = 'false', $zp_real = false) {
    $this->database = $zf_database;
    $this->user = $zf_user;
    $this->host = $zf_host;
    $this->password = $zf_password;
    $this->pConnect = $zf_pconnect;
    $this->real = $zp_real;
    if (!function_exists('mysql_connect')) die ('Call to undefined function: mysql_connect(). Please install the MySQL Connector for PHP');
    $connectionRetry = 10;
    while (!isset($this->link) || ($this->link == FALSE && $connectionRetry != 0))
    {
        $this->link = @mysql_connect($zf_host, $zf_user, $zf_password, true);
        $connectionRetry--;
    }
    if ($this->link) {
        if (@mysql_select_db($zf_database, $this->link)) {
            if (defined('DB_CHARSET') && version_compare(@mysql_get_server_info(), '4.1.0', '>=')) {
                @mysql_query("SET NAMES '" . DB_CHARSET . "'", $this->link);
                if (function_exists('mysql_set_charset')) {
                    @mysql_set_charset(DB_CHARSET, $this->link);
                } else {
                    @mysql_query("SET CHARACTER SET '" . DB_CHARSET . "'", $this->link);
                }
            }
            $this->db_connected = true;
            if (getenv('TZ') && !defined('DISABLE_MYSQL_TZ_SET')) @mysql_query("SET time_zone = '" . substr_replace(date("O"), ":", -2, 0) . "'", $this->link);
            return true;
        } else {
            $this->set_error(mysql_errno(), mysql_error(), $zp_real);
            return false;
        }
    } else {
        $this->set_error(mysql_errno(), mysql_error(), $zp_real);
        return false;
    }
}
I wonder if it is a problem with connection pooling. Try changing this line:
$this->link = @mysql_connect($zf_host, $zf_user, $zf_password, true);
to this:
$this->link = @mysql_connect($zf_host, $zf_user, $zf_password);
The manual is useful here: the fourth parameter ($new_link) is false by default, but your code forces it to true, which opens a new connection even if an identical one is already open. Leaving it false reuses the existing link (similar to connection pooling), which avoids creating new connections unnecessarily, i.e. it saves both time and memory.
I would offer a caveat, though: modifying core code in a third-party system always needs to be done carefully. There may be a reason for the behaviour they've chosen, though there's not much in the way of comments to tell. It may be worth asking a question via their support channels to see why it works this way, and whether they might consider changing it.
You know how vBulletin has an SQL profiler when it's in debug mode? How would I go about building one for my own web application? It's built in procedural PHP.
Thanks.
http://dev.mysql.com/tech-resources/articles/using-new-query-profiler.html
The above link describes how you can get all the SQL profile information after any query.
The best way to implement it is to create a database class with a "profile" flag that turns on logging of queries and the appropriate timing information, as shown in the link above.
Example:
class dbthing {
    var $profile = false;

    function __construct($profile = false){
        if($profile){
            $this->query('set profiling=1');
            $this->profile = true;
        }
        ...
    }

    function query($sql, $profile_this = false){
        ...
        if($this->profile && !$profile_this)
            $this->query("select sum(duration) as qtime from information_schema.profiling where query_id=1", true);
        ... // store the timing here
    }
}
I use a database connection wrapper that I can place a profiling wrapper around. This way I can discard the wrapper, or change it, without changing my base connector class.
class dbcon {
    function query( $q ) {}
}

class profiled_dbcon
{
    private $dbcon;
    private $thresh;

    function __construct( dbcon $d, $thresh = false )
    {
        $this->dbcon = $d;
        $this->thresh = $thresh;
    }

    function query( $q )
    {
        $begin = microtime( true );
        $result = $this->dbcon->query( $q );
        $end = microtime( true );
        if( $this->thresh && ($end - $begin) >= $this->thresh ) error_log( ... );
        return $result;
    }
}
For profiling with a 10 second threshold:
$dbc = new profiled_dbcon( new dbcon(), 10 );
I have it use error_log() to record what the times were. I would not log query performance back to the database server, since that affects the database server's performance; you'd rather have your web heads absorb that impact.
Though late to the party: Open PHP MyProfiler would help you achieve this, and you can extract functional sections from its code for your own use.