How to run db.fsyncLock() with Motor (tornado-motor)

Is it possible to run the db.fsyncLock() command through Motor? I want my application to make a backup of the database and need to flush and lock the files before I make the copy.

The mongo shell executes fsyncLock by calling a MongoDB command:
https://docs.mongodb.com/manual/reference/method/db.fsyncLock/
As that page shows, the shell provides a simple wrapper around the fsync database command with the following syntax:
{ fsync: 1, lock: true }
So you can run it with Motor as any MongoDB command:
from bson import SON
await client.admin.command(SON([('fsync', 1), ('lock', True)]))
Here, "client" is a MotorClient. Use "await" if you're in a native coroutine defined with "async def", or use "yield" if you're in a generator-based coroutine decorated with "@gen.coroutine".
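Putting it together for the backup use case, here is a minimal sketch of the lock/copy/unlock cycle (backup_database is just an illustrative name; the file-copy step is a placeholder, and fsyncUnlock is the documented command for releasing the lock on MongoDB 3.2+):
from bson import SON
from motor.motor_tornado import MotorClient

async def backup_database(client: MotorClient):
    # Flush pending writes to disk and block further writes.
    await client.admin.command(SON([('fsync', 1), ('lock', True)]))
    try:
        pass  # copy the database files here (placeholder)
    finally:
        # Always release the lock, even if the copy fails.
        await client.admin.command('fsyncUnlock')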

Related

MariaDB C connector bind/execute messes up memory allocation

I'm using the MariaDB C connector with prepare, bind, and execute. It usually works, but one case ends up with "corrupted unsorted chunks" and a core dump when freeing the bind buffer. I suspect the whole malloc organisation is messed up after calling mysql_stmt_execute(). My test program MysqlDynamic.c shows:
- The problem is connected only to the x509cert variable bound via bnd[9].
- Freeing memory fails only if bnd[9].is_null = 0; if is_null is set, execute ends normally.
- Freeing memory (using FreeStmt()) after bind and before execute ends normally.
- A print of bnd[9].buffer before execute shows the (void*) points to the correct string buffer.
- The behavior is the same whether bnd[9].buffer_length is set to STMT_INDICATOR_NTS or strlen().
- Other similar bindings (picture, bnd[10]) do not lead to corrupted memory and a core dump.
I defined a C structure, test, for the test data in my test program MysqlDynamic.c, which is bound into a MYSQL_BIND structure.
The bindings for x509cert (a string buffer) in bindInsTest() are:
bnd[9].buffer_type = MYSQL_TYPE_STRING;
bnd[9].buffer_length = STMT_INDICATOR_NTS;
bnd[9].is_null = &para->x509certI;
bnd[9].buffer = (void*) para->x509cert;
Please get the details from the source file MysqlDynamic.c. Adapt the defines in the source to your environment, verify the content, and run it; you will find compile info in the source code. MysqlDynamic -c creates the table, MysqlDynamic -i inserts 3 records per run, and MysqlDynamic -d drops the table again.
MysqlDynamic -vc shows:
session set autocommit to <0>
connection id: 175
mariadb server ver:<100408>, client ver:<100408>
connected on localhost to db test by testA
>> if program get stuck - table is locked
table t_test created
mysql connection closed
pgm ended normaly
MysqlDynamic -i shows:
ins2: BufPara <92> name<master> stamp<> epoch<1651313806000>
cert is cert<(nil)> buf<(nil)> null<1>
picure is pic<0x5596a0f0c220> buf<0x5596a0f0c220> null<0> length<172>
ins1: BufPara <91> name<> stamp<2020-04-30> epoch<1650707701123>
cert is cert<0x5596a0f181d0> buf<0x5596a0f181d0> null<0>
picure is pic<(nil)> buf<(nil)> null<1> length<0>
ins0: BufPara <90> name<gugus> stamp<1988-10-12T18:43:36> epoch<922337203685477580>
cert is cert<(nil)> buf<(nil)> null<1>
picure is pic<(nil)> buf<(nil)> null<1> length<0>
free(): corrupted unsorted chunks
Aborted (core dumped)
Checking the t_test table content shows all records are inserted as expected.
You can disable loading of x509cert and/or picture by commenting out the defines on lines 57/58; the program then ends normally. You can also comment out line 208; the buffers are then indicated as NULL.
Questions:
Is there a generic coding mistake in the program causing this behavior?
Can you run the program in your environment without a core dump? I'm currently using version 10.04.08.
Any improvement to the code is welcome.

How to execute Python code and an Airflow macro in a BashOperator?

I am trying to execute the following task in Airflow:
time_zone = "America/New_York"
t1 = BashOperator(
    task_id='example_task',
    bash_command=('script.sh --arg1 hello '
                  f'--arg2 {{ execution_date + timedelta(days=1).astimezone(pytz.timezone({time_zone})) }}'),
    dag=dag)
The problem I have is with the bash_command. I am trying to execute Python code and use a macro inside the bash_command, but the script above yields airflow.exceptions.AirflowException: Bash command failed.
My question is: is what I am trying to do possible, and if so, how? I guess I am just scripting the Jinja in the wrong way... but I am not sure.
The reason the above does not work is that it mixes Jinja2 templating and Python f-strings: both use curly braces, so the f-string consumes the doubled braces (turning {{ into a literal {) before Jinja ever sees a valid template.
I have found no way to combine the two directly in bash_command=. A workable solution is a wrapper Python function that executes the bash command, run via a PythonOperator; it gives full access to the Airflow macro values (the reason I used Jinja2 in bash_command= in the first place). See the sketch below.
It's not the cleanest solution, but it works.
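A minimal sketch of that wrapper approach (assuming Airflow 1.x-style context passing and the dag object, script path, and arguments from the question; run_script is just an illustrative name):
import subprocess
from datetime import timedelta

import pytz
from airflow.operators.python_operator import PythonOperator

time_zone = 'America/New_York'

def run_script(**context):
    # Do the date arithmetic and timezone conversion in Python
    # instead of fighting Jinja and the f-string for the braces.
    arg2 = (context['execution_date'] + timedelta(days=1)) \
        .astimezone(pytz.timezone(time_zone))
    subprocess.run(
        ['script.sh', '--arg1', 'hello', '--arg2', arg2.isoformat()],
        check=True,  # surface a non-zero exit as a task failure
    )

t1 = PythonOperator(
    task_id='example_task',
    python_callable=run_script,
    provide_context=True,  # not needed on Airflow 2.x
    dag=dag,
)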

Loading a fixture ends with "Killed"

In my local environment I can load all my fixtures fine.
But on my staging server, one of the fixtures just ends with "Killed".
I grepped through my whole project for the word "Killed" but did not find it anywhere.
php bin/console doctrine:fixtures:load --fixtures src/ReviewBundle/DataFixtures/ORM/LoadRatingTypes.php -vvv
Careful, database will be purged. Do you want to continue y/N ?y
> purging database
> loading ReviewBundle\DataFixtures\ORM\LoadRatingTypes
Killed
I tailed my dev.log file and all I got was:
[2016-03-17 15:26:54] doctrine.DEBUG: "START TRANSACTION" [] []
[2016-03-17 15:26:54] doctrine.DEBUG: DELETE FROM cms_block [] []
[2016-03-17 15:26:54] doctrine.DEBUG: DELETE FROM cms_page [] []
[2016-03-17 15:26:54] doctrine.DEBUG: DELETE FROM rating_type [] []
My PHP log is empty.
My mysql.log and mysql.err logs are empty.
I have no idea where I should look.
"Killed" usually means the kernel's out-of-memory killer terminated the process because it ran out of memory; this mainly happens when your PHP script hydrates too many entities at once. Are you trying to load a massive number of fixtures?
There are workarounds to avoid memory overflows in fixture scripts. I answered a pretty similar question some weeks ago; you should have a look at it.
If you still have problems after implementing these modifications, you could also increase the PHP memory limit.
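For a one-off load, you can lift the limit just for that command (memory_limit=-1 removes the limit entirely, so use it only to confirm that memory is the culprit):
php -d memory_limit=-1 bin/console doctrine:fixtures:load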

Kill a MySQL query in Perl

I have a Perl script in which I create a table from existing MySQL databases. I have hundreds of databases, and the tables in each database contain millions of records. Because of this, a query sometimes takes hours due to indexing problems, and sometimes I run out of disk space due to improper joins. Is there a way I can kill the query from within the same script by watching memory consumption and execution time?
P.S. I am using the DBI module in Perl as the MySQL interface.
As far as execution time goes, you can use Perl's alarm functionality to time out.
Your ALRM handler can either die (see the example below) or issue a DBI cancel call (sub { $sth->cancel }).
The DBI documentation actually has a very good discussion of this, as well as examples:
eval {
    local $SIG{ALRM} = sub { die "TIMEOUT\n" }; # N.B. \n required
    eval {
        alarm($seconds);
        ... code to execute with timeout here (which may die) ...
    };
    # outer eval catches alarm that might fire JUST before this alarm(0)
    alarm(0); # cancel alarm (if code ran fast)
    die "$@" if $@;
};
if ( $@ eq "TIMEOUT\n" ) { ... }
elsif ($@) { ... } # some other error
As far as watching memory goes, you just need an ALRM handler that, instead of simply dying/cancelling, first checks the memory consumption of your script.
I won't go into the details of how to measure memory consumption, since that's an unrelated question that has likely already been answered comprehensively on SO, but you can use the size() method from Proc::ProcessTable, as described in the PerlMonks snippet "Find memory usage of perl program".
I used the KILL QUERY command as described in http://www.perlmonks.org/?node_id=885620
This is the code from my script:
use Sys::SigAction;  # provides set_sig_handler

eval {
    eval { # Time out and interrupt work
        my $TimeOut = Sys::SigAction::set_sig_handler('ALRM', sub {
            # Kill the running query on a cloned connection; the original
            # handle is still blocked inside execute().
            $dbh->clone()->do("KILL QUERY " . $dbh->{"mysql_thread_id"});
            die "TIMEOUT\n";
        });
        # Set alarm
        alarm($seconds);
        $sth->execute();
        # Clear alarm
        #alarm(0);
    };
    # Prevent race condition
    alarm(0);
    die "$@" if $@;
};
This code kills the query and also removes all the temporary tables.
Watch out: you can't kill a query using the same connection handle if the query is stuck because of a table lock. You must open another connection with the same user and kill that thread id from there.
Of course, you have to store the list of currently open thread ids in a hash.
Mind that once you've killed a thread id, the rest of the Perl code will be executing against a dead, unblessed handle.

Restart a multi-threaded Perl script and close the MySQL connection

I have a Perl script that reads a command file and restarts itself if necessary by doing:
myscript.pl:
exec '/home/foo/bin/myscript.pl';
exit(0);
Now, this works fine except for one issue. The thread that reads the command file does not have access to the DBI handle I use, and over a number of restarts the number of open MySQL connections builds up until I get the dreaded "Too many connections" error. The DBI spec says:
"Because of this (possibly temporary) restriction, newly created threads must make their own connections to the database. Handles can't be shared across threads."
Any way to close connections or perhaps a different way to restart the script?
Use a flag variable that is shared between threads. Have the command-file-reading thread set the flag, and have the thread holding the DB handle release it and actually do the re-exec:
#!/usr/bin/perl
use threads;
use threads::shared;
use strict; use warnings;

my $EXIT_FLAG :shared;

my $db_thread = threads->create('do_the_db_thing');
$db_thread->detach;

while ( 1 ) {
    sleep rand 10;
    $EXIT_FLAG = 1 if 0.05 > rand or time - $^T > 20;
}

sub do_the_db_thing {
    until ( $EXIT_FLAG ) {
        warn sprintf "%d: Working with the db\n", time - $^T;
        sleep rand 5;
    }
    # $dbh->disconnect; # here
    warn "Exit flag is set ... restarting\n";
    exec 'j.pl';
}
You could try registering an atexit-style handler (e.g. a Perl END block) to close the DBI handle at the point where it is opened, and then use fork & exec to restart the script rather than just exec. The parent would then call exit, invoking the cleanup callback to close the DBI handle. The child could re-exec itself normally.
Edit: After thinking for a couple more minutes, I believe you could skip the atexit entirely because the handle would be closed automatically upon the parent's exit. Unless, of course, you need to do a more complex operation upon closing the DB handle than a simple filehandle close.
my $pid = fork();
if (not defined $pid) {
    # Could not fork, so handle the error somehow
} elsif ($pid == 0) {
    # Child re-execs itself
    exec '/home/foo/bin/myscript.pl';
} else {
    # Parent exits
    exit(0);
}
If you expect a lot of connections, you probably want DBI::Gofer to act as a DBI proxy for you. You create as many connections in as many scripts as you like, and DBI::Gofer shares them when it can.