Keep MySQL connection functional for multiple queries - mysql

I am having a very hard time finding information on this topic. I have a running application on a Raspberry Pi with an infinite loop containing the code below; outside of it I have the MYSQL *con; to be reused. My code works well the first time, but the second time I get the following error. I thought adding mysql_close() would do the trick, but it didn't.
Output:
valor: d9274cb5 -651735883
valor: d9274cb5 -651735883
1
This handle is already connected. Use a separate handle for each connection.
Code:
uint32_t intVal;
sscanf(&sn_str[1], "%x", &intVal);
fprintf(stderr, "valor: %x %d\n", intVal, intVal);

if (mysql_real_connect(con, "localhost", "rfid", "******",
                       "Order2Dock", 0, NULL, 0) == NULL)
{
    fprintf(stderr, "1 \n");
    finish_with_error(con);
}

if (mysql_query(con, "INSERT INTO `Order2Dock`.`Actividad`(`TiempoInicio`,`Proceso_Id`, `Orden_Id`) VALUES (now(),1,1)")) {
    fprintf(stderr, "2 \n");
    finish_with_error(con);
}

mysql_close(con);

Keep your connect command outside of the loop (the example below is PHP with mysqli, but the same pattern applies to the C API in the question):
$mysqli = new mysqli("localhost", "my_user", "my_password", "world");
if ($mysqli->connect_error) {
    die("Connection failed: " . $mysqli->connect_error);
}
Then start the loop; you don't need to connect every time you loop:
while (1 + 1 == 2) {
    if (!$mysqli->ping()) {
        $mysqli = new mysqli("localhost", "my_user", "my_password", "world");
    }
    // ... per-iteration work (the sscanf/fprintf part of the C code) ...
    if ($mysqli->query("INSERT INTO `Order2Dock`.`Actividad`(`TiempoInicio`,`Proceso_Id`, `Orden_Id`) VALUES (now(),1,1)") === TRUE) {
        echo '<p>Yeah, another query!!</p>';
    }
}
Edit: I just added a condition to test whether the link is still up and re-connect to the database otherwise.
I was just thinking that if this infinite loop runs under a web server like Apache or IIS, then something must be configured to let the script run forever and prevent the web server from timing it out.
Cheers.
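For the MySQL C API the question actually uses, the same pattern looks roughly like this (a sketch reusing the question's con handle, credentials, and finish_with_error helper; note that a handle can be passed to mysql_real_connect only once, and after mysql_close it has to go through mysql_init again before reconnecting):

MYSQL *con = mysql_init(NULL);            /* allocate the handle once */
if (con == NULL) {
    fprintf(stderr, "mysql_init() failed\n");
    exit(1);
}
/* connect once, before the loop */
if (mysql_real_connect(con, "localhost", "rfid", "******",
                       "Order2Dock", 0, NULL, 0) == NULL) {
    finish_with_error(con);
}
while (1) {
    /* mysql_ping() tests whether the link is still up */
    if (mysql_ping(con) != 0) {
        mysql_close(con);
        con = mysql_init(NULL);           /* a closed handle must be re-initialised */
        if (mysql_real_connect(con, "localhost", "rfid", "******",
                               "Order2Dock", 0, NULL, 0) == NULL) {
            finish_with_error(con);
        }
    }
    if (mysql_query(con, "INSERT INTO `Order2Dock`.`Actividad`"
                         "(`TiempoInicio`,`Proceso_Id`,`Orden_Id`) VALUES (now(),1,1)")) {
        finish_with_error(con);
    }
}
mysql_close(con);                         /* close once, after the loop ends */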

w3schools
PHP reference
I heard the old "mysql" extension will be removed in a future version of PHP.
Use "mysqli" instead, and you can use mysqli_multi_query.

Related

Parallel MySQL I/O in Ruby

Good day to you. I'm writing a cron job that will hopefully split a huge MySQL table across several threads and do some work on them. This is a minimal sample of what I have at the moment:
require 'mysql'
require 'parallel'

@db = Mysql.real_connect("localhost", "root", "", "database")
@threads = 10

Parallel.map(1..@threads, :in_processes => 8) do |i|
  begin
    @db.query("SELECT url FROM pages LIMIT 1 OFFSET #{i}")
  rescue Mysql::Error => e
    @db.reconnect()
    puts "Error code: #{e.errno}"
    puts "Error message: #{e.error}"
    puts "Error SQLSTATE: #{e.sqlstate}" if e.respond_to?("sqlstate")
  end
end
@db.close
The threads don't need to return anything; they get their share of the job and they do it. Only they don't. Either the connection to MySQL is lost during the query, or the connection doesn't exist ("MySQL server has gone away"?!), or "no _dump_data is defined for class Mysql::Result" and then Parallel::DeadWorker.
How to do that right?
The map method expects a result; I don't need one, so I switched to each:
Parallel.each(1..@threads, :in_processes => 8) do |i|
This also solves the problem with MySQL: I just needed to open the connection inside the parallel process. When using an each loop, that's possible. Of course, the connection should be closed inside the process as well.
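The same rule holds outside Ruby. As a rough illustration for readers coming from the C question above, here is a hedged sketch of one-connection-per-forked-worker with the MySQL C API (the host, credentials, and pages table mirror this question's placeholders):

/* Each forked worker opens, uses, and closes its own MySQL handle. */
#include <mysql.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    for (int i = 0; i < 10; i++) {
        pid_t pid = fork();
        if (pid == 0) {                       /* child: connect AFTER forking */
            MYSQL *con = mysql_init(NULL);
            if (mysql_real_connect(con, "localhost", "root", "",
                                   "database", 0, NULL, 0) == NULL) {
                fprintf(stderr, "%s\n", mysql_error(con));
                _exit(1);
            }
            char q[64];
            snprintf(q, sizeof q, "SELECT url FROM pages LIMIT 1 OFFSET %d", i);
            if (mysql_query(con, q))
                fprintf(stderr, "%s\n", mysql_error(con));
            mysql_close(con);                 /* each child closes its own handle */
            _exit(0);
        }
    }
    while (wait(NULL) > 0) {}                 /* parent reaps all workers */
    return 0;
}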

How do I connect to my web domain's MySQL database using Qt?

I have a web domain that already has a MySQL database on it. I wish to connect and retrieve data from the database in my Qt application. Here is my attempt and my result. (The host name, database name, username, and password have been edited.)
QSqlDatabase db = QSqlDatabase::addDatabase("QSQLITE");
db.setHostName("www.mydomain.com");
db.setDatabaseName("myDatabase");
db.setUserName("myName");
db.setPassword("myPass");
if (!db.open()) {
    QSqlError err = db.lastError();
    qDebug() << err.text();
}
else {
    QSqlQuery qry;
    qDebug() << "Connected to SQL Database!";
    if (qry.exec("select * from dataTable;")) {
        while (qry.next()) {
            qDebug() << qry.value(1).toString();
        }
    }
    else {
        qDebug() << "ERROR QUERY";
    }
    qDebug() << "Closing...";
    db.close();
}
return a.exec();
}
It shows that it got connected, but upon executing a query it returns an error. Furthermore, I tried changing to an invalid hostname and/or username and it still got connected.
1) Try without the semicolon.
"For SQLite, the query string can contain only one statement at a time." (http://qt-project.org/doc/qt-4.8/qsqlquery.html#exec)
Although this is one statement, the interpreter may get confused by the semicolon.
2) "Note that the last error for this query is reset when exec() is called." (http://qt-project.org/doc/qt-4.8/qsqlquery.html#exec)
Because this is not a prepared statement, try avoiding exec() so that information about the last error stays available:
QSqlQuery qry("select * from dataTable");
if (qry.lastError().isValid()) {
    // ...
}
while (qry.next()) {
    qDebug() << qry.value(1).toString();
}

Perl Module Instantiation + DBI + Forks "Mysql server has gone away"

I have written a Perl program that parses records from CSV into a DB.
The program worked fine but took a long time, so I decided to fork the main parsing process.
After a bit of wrangling with fork it now works well and runs about 4 times faster. The main parsing method is quite database-intensive. For interest's sake, for each record that is parsed there are the following DB calls:
1 - There is a check that the uniquely generated base62 is unique against a baseid map table
2 - There is an archive check to see if the record has changed
3 - The record is inserted into the DB
The problem is that I began to get "MySQL server has gone away" errors while the parser was running in forked mode, so after much fiddling I came up with the following MySQL config:
#
# * Fine Tuning
#
key_buffer = 10000M
max_allowed_packet = 10000M
thread_stack = 192K
thread_cache_size = 8
myisam-recover = BACKUP
max_connections = 10000
table_cache = 64
thread_concurrency = 32
wait_timeout = 15
tmp_table_size = 1024M
query_cache_limit = 2M
#query_cache_size = 100M
query_cache_size = 0
query_cache_type = 0
That seems to have fixed the problems while the parser is running. However, I am now getting a "MySQL server has gone away" error when the next module is run after the main parser.
The strange thing is that the module causing problems involves a very simple SELECT query on a table with currently only 3 records. Run directly as a test (not after the parser) it works fine.
I tried adding a pause of 4 minutes after the parser module runs, but I get the same error.
I have a main DBConnection.pm module with this:
package DBConnection;
use DBI;
use PXConfig;

sub new {
    my $class = shift;

    ## MySQL connection
    my $config   = new PXConfig();
    my $host     = $config->val('database', 'host');
    my $database = $config->val('database', 'db');
    my $user     = $config->val('database', 'user');
    my $pw       = $config->val('database', 'password');

    my $dsn = "DBI:mysql:database=$database;host=$host;";
    my $connect2 = DBI->connect( $dsn, $user, $pw );

    $connect2->{mysql_auto_reconnect} = 1;
    $connect2->{RaiseError}           = 1;
    $connect2->{PrintError}           = 1;
    $connect2->{ShowErrorStatement}   = 1;
    $connect2->{InactiveDestroy}      = 1;

    my $self = { connect => $connect2 };
    bless $self, $class;
    return $self;
}

# Accessor used by the other modules below.
sub connect {
    my $self = shift;
    return $self->{connect};
}

1;
Then all modules, including the forked parser modules, open a connection to the DB using:
package Example;
use DBConnection;

sub new {
    my $class = shift;
    my $db = new DBConnection;
    my $connect2 = $db->connect();
    my $self = { connect2 => $connect2 };
    bless $self, $class;
    return $self;
}
The question is: if I have Module1.pm that calls Module2.pm, which calls Module3.pm, and each of them instantiates a connection to the DB as shown above (i.e. in the constructor), are they using different connections to the database or the same connection?
What I wondered is whether, if the script takes say 6 hours to finish, the top-level DB connection is timing out the lower-level DB connection, even though the lower-level module is making its 'own' connection.
It is very frustrating trying to find the problem, as I can only reproduce the error after running a very long parse process.
Sorry for the long question; thanks in advance to anyone who can give me any ideas.
UPDATE 1:
Here is the actual forking part:
my $fh = Tie::Handle::CSV->new( "$file", header => 1 );

while ( my $part = <$fh> ) {
    if ( $children == $max_threads ) {
        $pid = wait();
        $children--;
    }
    if ( defined( $pid = fork ) ) {
        if ($pid) {
            $children++;
        } else {
            $cfptu = new ThreadedUnit();
            $cfptu->parseThreadedUnit($part, $group_id, $feed_id);
        }
    }
}
And then the ThreadedUnit:
package ThreadedUnit;
use CollisionChecker;
use ArchiveController;
use Filters;
use Try::Tiny;
use MysqlLogger;

sub new {
    my $class = shift;
    my $db = new DBConnection;
    my $connect2 = $db->connect();
    my $self = { connect2 => $connect2 };
    bless $self, $class;
    return $self;
}

sub parseThreadedUnit {
    my ( $self, $part, $group_id, $feed_id ) = @_;
    my $connect2 = $self->{connect2};

    ## Parsing stuff

    ## DB update in try -> catch

    exit();
}
So, as I understand it, the DB connection is being opened after the forking.
But, as I mentioned above, the forked code outlined just above works fine. It is the next module that does not work; it is run from a controller module which just runs through each worker module one at a time (the parser being one of them). The controller module does not create a DB connection in its constructor or anywhere else.
Update 2
I forgot to mention that I don't get any errors in the 'problem' module following the parser if I only parse a small number of files rather than the full queue.
So it is almost as if the intensive forked parsing and DB access makes the database unavailable to normal non-forked processes for some undetermined time after it ends.
The only thing I have noticed in MySQL status when the parser run finishes is that Threads_connected sits at around, say, 500 and does not decrease for some time.
It depends on how your program is structured, which isn't clear from the question.
If you create the DB connection before you fork, Perl will make a copy of the DB connection object for each process. This would likely cause problems if two processes try to access the database concurrently with the same DB connection.
On the other hand, if you create the DB connections after forking, each module will have its own connection. This should work, but you could have a timeout problem if Module x creates a connection, then waits a long time for a process in Module y to finish, then tries to use the connection.
In summary, here is what you want:
Don't have any open connections at the point you fork. Child processes should create their own connections.
Only open a connection right before you want to use it. If there is a point in your program when you have to wait, open the connection after the waiting is done.
Read dan1111's answer, but I suspect you are connecting and then forking. When the child completes, the DBI connection handle goes out of scope and is closed. As dan1111 says, you are better off connecting in the child for all the reasons he gave. Read about InactiveDestroy and AutoInactiveDestroy in DBI, which will help you understand what is going on.
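To see why connecting before forking bites, here is a hedged C-API sketch of the same hazard (host and credentials are hypothetical placeholders): a child that calls mysql_close on an inherited handle sends COM_QUIT over the socket it shares with the parent, ending the parent's session too, which is the disconnect-on-destroy behaviour InactiveDestroy suppresses in DBI.

#include <mysql.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    MYSQL *con = mysql_init(NULL);            /* parent's handle */
    if (mysql_real_connect(con, "localhost", "user", "pass",
                           "db", 0, NULL, 0) == NULL)
        return 1;
    pid_t pid = fork();
    if (pid == 0) {
        /* Do NOT mysql_close(con) here: that sends COM_QUIT on the
           shared socket and ends the parent's session as well. */
        MYSQL *own = mysql_init(NULL);        /* child makes its own */
        if (mysql_real_connect(own, "localhost", "user", "pass",
                               "db", 0, NULL, 0) != NULL) {
            mysql_query(own, "SELECT 1");
            mysql_close(own);                 /* closing our own handle is fine */
        }
        _exit(0);                             /* leave the inherited handle alone */
    }
    waitpid(pid, NULL, 0);
    mysql_query(con, "SELECT 1");             /* parent's session still works */
    mysql_close(con);
    return 0;
}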

Bind9 and MySQL DLZ Buffer Error

I compiled Bind 9 from source (see below) and set up Bind9 with MySQL DLZ.
I keep getting an error about a buffer when I attempt to fetch anything from the server. I've googled many times but cannot find anything on how to fix this error.
Configure options:
root@anacrusis:/opt/bind9/bind-9.9.1-P3# named -V
BIND 9.9.1-P3 built with '--prefix=/opt/bind9' '--mandir=/opt/bind9/man'
'--infodir=/opt/bind9/info' '--sysconfdir=/opt/bind9/config'
'--localstatedir=/opt/bind9/var' '--enable-threads' '--enable-largefile'
'--with-libtool' '--enable-shared' '--enable-static' '--with-openssl=/usr'
'--with-gssapi=/usr' '--with-gnu-ld' '--with-dlz-postgres=no'
'--with-dlz-mysql=yes' '--with-dlz-bdb=no' '--with-dlz-filesystem=yes'
'--with-dlz-stub=yes' '--with-dlz-ldap=yes' '--enable-ipv6'
'CFLAGS=-fno-strict-aliasing -DDIG_SIGCHASE -O2'
'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS='
using OpenSSL version: OpenSSL 1.0.1 14 Mar 2012
using libxml2 version: 2.7.8
This is the error I get when I dig example.com (with debug):
Query String: select ttl, type, mx_priority, case when lower(type)='txt' then concat('"', data, '"') else data end from dns_records where zone = 'example.com' and host = '@'
17-Sep-2012 01:09:33.610 dns_rdata_fromtext: buffer-0x7f5bfca73360:1: unexpected end of input
17-Sep-2012 01:09:33.610 dns_sdlz_putrr returned error. Error code was: unexpected end of input
17-Sep-2012 01:09:33.610 Query String: select ttl, type, mx_priority, case when lower(type)='txt' then concat('"', data, '"') else data end from dns_records where zone = 'example.com' and host = '*'
17-Sep-2012 01:09:33.610 query.c:2579: fatal error:
17-Sep-2012 01:09:33.610 RUNTIME_CHECK(result == 0) failed
17-Sep-2012 01:09:33.610 exiting (due to fatal error in library)
Named.conf
options {
    directory "/opt/bind9/";
    allow-query-cache { none; };
    allow-query { any; };
    recursion no;
};

dlz "Mysql zone" {
    database "mysql
    {host=localhost dbname=system ssl=false user=root pass=*password*}
    {select zone from dns_records where zone = '$zone$'}
    {select ttl, type, mx_priority, case when lower(type)='txt' then concat('\"', data, '\"')
     else data end from dns_records where zone = '$zone$' and host = '$record$'}
    {}
    {}
    {}
    {}";
};
Do you run named single-threaded (with the "-n 1" parameter)? If not, named will crash in various places when working on more than one query in parallel, since the MySQL DLZ module is not thread-safe.
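For example (the config path is an assumption based on the --sysconfdir in the configure output above):
named -n 1 -c /opt/bind9/config/named.conf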
Manually log into the DB and run the query, and see what it comes up with. The error says it got an unexpected end of input, meaning it was expecting to get something and never got it. So the first thing is to see if you can get it manually. Maybe mysqld isn't running. Maybe the user isn't defined, the password is set wrong, or permissions are not granted on that table. These could all account for the error.
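For reference, a manual check with the credentials from the named.conf above would look something like this, pasting in the failing query from the debug log:
mysql -u root -p system
select ttl, type, mx_priority,
       case when lower(type)='txt' then concat('"', data, '"') else data end
from dns_records
where zone = 'example.com' and host = '@';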
Assuming all this works, then you have two options:
1) Enable more logging in your named.conf so you have more data to work with on what's happening.
2) Remove and reinstall BIND, ensuring that all hashes match on all libraries and that all dependencies are in place.
I have gotten Bind with DLZ working on CentOS 7. I do not get the error that is affecting you.
I realize this is an older post, but I thought I would share my conf files and configure options.
I am using Bind 9.11.0
configure
./configure --prefix=/usr --sysconfdir=/etc/bind --localstatedir=/var --mandir=/usr/share/man --infodir=/usr/share/info --enable-threads --enable-largefile --with-libtool --enable-shared --enable-static --with-openssl=/usr --with-gssapi=/usr --with-gnu-ld --with-dlz-postgres=no --with-dlz-mysql=yes --with-dlz-bdb=no --with-dlz-filesystem=yes --with-dlz-stub=yes --enable-ipv6
named.conf
// This is the primary configuration file for the BIND DNS server named.
//
// Please read /usr/share/doc/bind9/README.Debian.gz for information on the
// structure of BIND configuration files in Debian, *BEFORE* you customize
// this configuration file.
//
// If you are just adding zones, please do that in /etc/bind/named.conf.local
# commented out !!!
#include "/etc/bind/named.conf.options";
#include "/etc/bind/named.conf.local";
#include "/etc/bind/named.conf.default-zones";
key "rndc-key" {
// how was key encoded
algorithm hmac-md5;
// what is the pass-phrase for the key
secret "noway";
};
#options {
#default-key "rndc-key";
#default-server 127.0.0.1;
#default-port 953;
#};
controls {
inet * port 953 allow { any; } keys { "rndc-key"; };
#inet * port 53 allow { any; } keys { "rndc-key"; };
};
logging {
channel query.log {
file "/var/log/query.log";
// Set the severity to dynamic to see all the debug messages.
severity dynamic;
};
category queries { query.log; };
};
dlz "Mysql zone" {
database "mysql
{host=172.16.254.100 port=3306 dbname=dyn_server_db user=db_user pass=db_password}
{SELECT zone FROM dyn_dns_records WHERE zone = '$zone$'}
{SELECT ttl, type, mx_priority, IF(type = 'TXT', CONCAT('\"',data,'\"'), data) AS data
FROM dyn_dns_records
WHERE zone = '$zone$' AND host = '$record$' AND type <> 'SOA' AND type <> 'NS'}
{SELECT ttl, type, data, primary_ns, resp_person, serial, refresh, retry, expire, minimum
FROM dyn_dns_records
WHERE zone = '$zone$' AND (type = 'SOA' OR type='NS')}
{SELECT ttl, type, host, mx_priority, IF(type = 'TXT', CONCAT('\"',data,'\"'), data) AS data, resp_person, serial, refresh, retry, expire, minimum
FROM dyn_dns_records
WHERE zone = '$zone$' AND type <> 'SOA' AND type <> 'NS'}
{SELECT zone FROM xfr_table where zone='$zone$' AND client = '$client$'}";
};

Can MySQL support concurrent queries per connection?

If I have 10 queries, and each query updates a particular table (i.e., 10 different tables), can I open one MySQL connection, spawn 10 threads, and have each thread handle one query so that they run concurrently instead of executing one by one?
Thanks!
No, you can't:
The MySQL client library (at least the native C one) is not thread-safe when the same connection is used from different threads. You need to use a connection per thread.
If you just need update/insert queries running in parallel (asynchronously in terms of the MySQL API), you can use INSERT DELAYED and UPDATE LOW_PRIORITY queries.
Because of the way the MySQL protocol works, these will get sent to the server one-by-one if you only have one connection open.
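To make "a connection per thread" concrete, here is a minimal hedged sketch with libmysqlclient and pthreads (the table names and credentials are placeholders); mysql_library_init, mysql_thread_init, and mysql_thread_end are the calls the client library requires around threaded use:

#include <mysql.h>
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    const char *query = arg;
    mysql_thread_init();                      /* per-thread client state */
    MYSQL *con = mysql_init(NULL);            /* one handle per thread */
    if (mysql_real_connect(con, "localhost", "user", "pass",
                           "db", 0, NULL, 0) != NULL) {
        if (mysql_query(con, query))
            fprintf(stderr, "%s\n", mysql_error(con));
        mysql_close(con);
    }
    mysql_thread_end();
    return NULL;
}

int main(void) {
    const char *queries[] = {                 /* one update per table */
        "UPDATE table1 SET counter = counter + 1",
        "UPDATE table2 SET counter = counter + 1",
    };
    mysql_library_init(0, NULL, NULL);        /* before any threads start */
    pthread_t tid[2];
    for (int i = 0; i < 2; i++)
        pthread_create(&tid[i], NULL, worker, (void *)queries[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(tid[i], NULL);
    mysql_library_end();
    return 0;
}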
There is no file "log.conn.txt" created, so there is no conflict between concurrent queries on the single MySQL client connection:
<?php
declare(ticks=1);
// Protect against zombie children.
pcntl_signal(SIGUSR1, create_function('$signo',
    'sleep(1); while (($pid = pcntl_wait(@$status, WNOHANG)) > 0) {}'));

$pdo = new PDO('mysql:host=192.168.0.2;port=3306;dbname=baseinfo', 'dev', 'dev',
    array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
          PDO::MYSQL_ATTR_INIT_COMMAND => 'set names utf8'));

for ($i = 0; $i < 20; ++$i) {
    if (($pid = pcntl_fork()) === -1) {
        // ...
        continue;
    } else if ($pid) {
        $pids[] = $pid;
        pcntl_wait($status, WNOHANG); // protect against zombie children, one wait per child
    } else if ($pid === 0) {
        ob_start(); // prevent output to the main process
        // Kill self before exit(), or else the resource shared with the parent will be closed.
        register_shutdown_function(create_function('$pars',
            'ob_end_clean(); posix_kill(posix_getppid(), SIGUSR1); posix_kill(getmypid(), SIGKILL);'),
            array());
        for ($j = 0; $j < 200; ++$j) {
            try {
                file_put_contents('log.' . $i . '.txt',
                    $pdo->query('select partner_login from base_account where id=100')
                        ->fetch(PDO::FETCH_COLUMN, 0)
                    . "\t" . time() . substr(microtime(), 2, 6) . "\n",
                    FILE_APPEND);
            } catch (Exception $e) {
                if ($pdo->getAttribute(PDO::ATTR_SERVER_INFO) === 'MySQL server has gone away') {
                    file_put_contents('log.conn.txt',
                        time() . substr(microtime(), 2, 6) . ":{$i}:{$j} lost\n",
                        FILE_APPEND);
                    $pdo = new PDO('mysql:host=192.168.0.2;port=3306;dbname=baseinfo', 'dev', 'dev',
                        array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
                              PDO::MYSQL_ATTR_INIT_COMMAND => 'set names utf8'));
                }
            }
            usleep(50000);
        }
        exit(); // keep the child from running the loop below
    }
}
// Wait for all children to end; avoid closing the DB connection before all children have exited.
foreach ($pids as $p) {
    pcntl_waitpid($p, $status);
}
?>