In my local environment I can load all my fixtures without problems.
But on my staging server, one of the fixtures just ends with "Killed".
I grepped through my whole project for the word "Killed" but did not find it anywhere.
php bin/console doctrine:fixtures:load --fixtures src/ReviewBundle/DataFixtures/ORM/LoadRatingTypes.php -vvv
Careful, database will be purged. Do you want to continue y/N ?y
> purging database
> loading ReviewBundle\DataFixtures\ORM\LoadRatingTypes
Killed
I tailed my dev.log file and all I got was:
[2016-03-17 15:26:54] doctrine.DEBUG: "START TRANSACTION" [] []
[2016-03-17 15:26:54] doctrine.DEBUG: DELETE FROM cms_block [] []
[2016-03-17 15:26:54] doctrine.DEBUG: DELETE FROM cms_page [] []
[2016-03-17 15:26:54] doctrine.DEBUG: DELETE FROM rating_type [] []
My PHP log is empty.
My mysql.log and mysql.err logs are empty as well.
I have no idea where I should look.
"Killed" often happens when PHP runs out of memory, and shuts down your fixture process. This happens mainly when your PHP script tries to load too many entities. Are you trying to load a massive number of fixtures?
There are workarounds to avoid memory overflows in your fixture scripts. I answered a pretty similar question a few weeks ago; you should have a look at it.
If you still have problems after implementing these modifications, you could also increase the PHP memory limit.
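A quick way to test that, without touching php.ini, is to override the limit for this one run only (-1 disables the limit; the command mirrors the one from the question):

php -d memory_limit=-1 bin/console doctrine:fixtures:load --fixtures src/ReviewBundle/DataFixtures/ORM/LoadRatingTypes.php -vvv

Note that if the process is being killed by the operating system rather than by PHP itself, raising the PHP limit alone won't help; the machine needs more free memory (or swap).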
I'm using the MariaDB C connector with prepare, bind and execute. It usually works, but one case ends up in "corrupted unsorted chunks" and a core dump when freeing the bind buffer. I suspect the whole malloc organisation is messed up after calling mysql_stmt_execute(). My test program MysqlDynamic.c shows:
the problem is connected only to the x509cert variable bound by bnd[9]
freeing memory only fails if bnd[9].is_null = 0; if is_null is set, execute ends normally
freeing memory (using FreeStmt()) after bind and before execute ends normally
a print of bnd[9].buffer before execute shows the (void*) is connected to the correct string buffer
the behavior is the same whether bnd[9].buffer_length is set to STMT_INDICATOR_NTS or to strlen()
other similar bindings (picture, bnd[10]) do not lead to corrupted memory and a core dump
I defined a C structure test for the test data in my test program MysqlDynamic.c, which is bound in a MYSQL_BIND structure.
The bindings for x509cert (string buffer), see bindInsTest():
bnd[9].buffer_type = MYSQL_TYPE_STRING;
bnd[9].buffer_length = STMT_INDICATOR_NTS;
bnd[9].is_null = &para->x509certI;
bnd[9].buffer = (void*) para->x509cert;
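For comparison, this is how a null-terminated input string is conventionally bound, with an explicit byte length and a my_bool null flag (a sketch reusing the names from the snippet above; as far as I know, STMT_INDICATOR_NTS is meant for the indicator member of MariaDB's bulk interface, not for buffer_length):

/* Sketch: conventional string bind with an explicit length.
   "para" mirrors the structure from the question; needs <string.h>.
   Note the null flag must be a my_bool, not an int. */
my_bool cert_is_null = (para->x509cert == NULL);

memset(&bnd[9], 0, sizeof(bnd[9]));
bnd[9].buffer_type   = MYSQL_TYPE_STRING;
bnd[9].buffer        = (void *) para->x509cert;
bnd[9].buffer_length = cert_is_null ? 0 : strlen(para->x509cert);
bnd[9].is_null       = &cert_is_null;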
Please get the details from the source file MysqlDynamic.c. Please adapt the defines in the source to your environment, verify the content, and run it. You will find compile info in the source code. MysqlDynamic -c will create the table, MysqlDynamic -i will insert 3 records on each run, and MysqlDynamic -d will drop the table again.
MysqlDynamic -vc shows:
session set autocommit to <0>
connection id: 175
mariadb server ver:<100408>, client ver:<100408>
connected on localhost to db test by testA
>> if program get stuck - table is locked
table t_test created
mysql connection closed
pgm ended normaly
MysqlDynamic -i shows:
ins2: BufPara <92> name<master> stamp<> epoch<1651313806000>
cert is cert<(nil)> buf<(nil)> null<1>
picure is pic<0x5596a0f0c220> buf<0x5596a0f0c220> null<0> length<172>
ins1: BufPara <91> name<> stamp<2020-04-30> epoch<1650707701123>
cert is cert<0x5596a0f181d0> buf<0x5596a0f181d0> null<0>
picure is pic<(nil)> buf<(nil)> null<1> length<0>
ins0: BufPara <90> name<gugus> stamp<1988-10-12T18:43:36> epoch<922337203685477580>
cert is cert<(nil)> buf<(nil)> null<1>
picure is pic<(nil)> buf<(nil)> null<1> length<0>
free(): corrupted unsorted chunks
Aborted (core dumped)
Checking the t_test table content shows that all records are inserted as expected.
You can disable the loading of x509cert and/or picture by commenting out the defines in lines 57/58; the program then ends normally. You can also comment out line 208; the buffers are then indicated as NULL.
Questions:
Is there a generic coding mistake in the program causing this behavior?
Can you run the program in your environment without a core dump? I'm currently using version 10.04.08.
Any improvement to the code is welcome.
I am having a little trouble when creating a directory with Tcl 8.5.9 on a Windows 7 computer.
set CurrentDir [file dirname $GUI_DB_path]
set ImageFolderPath [file join $CurrentDir "DeflectionPlots"]
# Always try to delete the folder, no matter if it exists or not
file delete -force $ImageFolderPath
# Sometimes the following throws an error; I do not understand why.
# Create a clean and empty image folder
file mkdir $ImageFolderPath
Sometimes, but not always, I get the error:
can't create directory.....$ImageFolderPath..... No such file or Directory
Well, that is why I want to create it. Running the code a second time without any changes results in the creation of the directory as desired. What causes this, and how can I resolve the issue? I could catch the error, but then I still would not have my folder created.
Windows file operations (or their internal locking) are often slow. I run into problems like yours where deletions/new files/renames take a while, and then I get errors because the file(s) are in some sort of operating-system limbo.
You can add a short sleep between the delete and the create, and that should resolve the issue on Windows.
# Non-blocking 200 ms pause: arm a timer, then wait for the flag it sets
set ::img_create_sleep 0
after 200 [list set ::img_create_sleep 1]
vwait ::img_create_sleep
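If the directory still cannot be created after the pause, a small bounded retry around file mkdir makes the behavior deterministic (a sketch; the 10 attempts and the 200 ms spacing are assumptions):

# Retry mkdir a few times before giving up
for {set attempt 0} {$attempt < 10} {incr attempt} {
    if {![catch {file mkdir $ImageFolderPath} err]} {
        break
    }
    after 200   ;# plain "after" blocks, which is fine inside a retry loop
}
if {![file isdirectory $ImageFolderPath]} {
    error "could not create $ImageFolderPath: $err"
}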
It seems that it is not possible for me to trigger an event in OpenNMS using a threshold.
First the facts (in as much detail as I can give):
I want to monitor an HTML file, or more precisely its content. If a value is not what I expect, OpenNMS should alert me.
My HTML file contains:
Document Count: 5
In /var/lib/opennms/rrd/snmp/NODE there are two files named "documentCount" (.jrb & .meta)
--> because of the http-datacollection-config.xml
In my log files is written:
INFO [LegacyScheduler-Thread-2-of-50] RrdUtils: updateRRD: updating RRD file /var/lib/opennms/rrd/snmp/21/documentCount.jrb with values '1385031023:5'
so the "5" is collected correctly.
now i created a threshold for this case:
<threshold type="high" ds-type="node"
           value="4.0" rearm="2.0" trigger="1"
           triggeredUEI="uei.opennms.org/threshold/highThresholdExceeded"
           filterOperator="or" ds-name="documentCount"/>
The threshold is also enabled in my collectd-configuration.xml.
In my opinion the threshold of 4 is exceeded, because the value is 5, so the high-threshold event should be fired. BUT IT DOESN'T.
So I'm here to ask if someone has an idea.
Regards, dawn
Check collectd.log with the following:
tail -f collectd.log | grep -i thresholding
Threshold checking was moved a while back so that it evaluates while the data is being retrieved, as opposed to post-processing the RRD files.
Even with the log level at INFO you should find some clues as to why the threshold rule is not matching any data.
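For reference, thresholding is switched on per collection service in collectd-configuration.xml; the service and values below are examples, not your actual configuration:

<service name="SNMP" interval="300000" user-defined="false" status="on">
    <parameter key="collection" value="default"/>
    <parameter key="thresholding-enabled" value="true"/>
</service>

It is worth double-checking that this parameter sits on the same service that actually collects documentCount.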
I have a shell script that needs to run in a loop, performing a series of commands and then repeating, hence the loop. Between each command there is a sleep of a few minutes. The "job" should never terminate. I can have the script start at boot time, but when the system is rebooted it needs to continue where it left off in the sequence of commands.
How can I best accomplish this? Should I create a MySQL table holding a queue of commands, and have the script delete each row after it successfully executes the command? Then, when it completes the loop, it would re-populate the queue table and start from the top.
It seems like I'm missing something that would make this simpler. Thanks in advance for your helpful insight!
You may want to rewrite your code so that it looks like this:
while : ; do
    case $step in
        0) command_1 && ((step++)) ;;
        1) command_2 && ((step++)) ;;
        ...
        9) command_9 && step=0 ;;       # last step: start the cycle over
        *) echo "ERROR" >&2 ; exit 1 ;;
    esac
done
So you would be aware of what has been done by testing the value of step.
Then, you may want to set a trap before the while loop is executed, so that, on exit, the value of step is written to a log file:
trap "echo step=$step > log_file" EXIT
Then, all you need to do is to source the log file at the beginning of the script, and the last one will continue its job where it has been stopped.
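Putting the pieces together, a minimal sketch (log_file and the command names are the placeholders used above):

#!/bin/bash
step=0
[ -f log_file ] && . ./log_file            # resume from the last recorded step
trap 'echo step=$step > log_file' EXIT     # record progress on any exit

while : ; do
    case $step in
        0) command_1 && ((step++)) ;;
        # ... remaining steps as above ...
        9) command_9 && step=0 ;;
        *) echo "ERROR" >&2 ; exit 1 ;;
    esac
    sleep 180   # the few-minutes pause between commands from the question
done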
MySQL sounds like a pretty complex solution for this case. In general I would think about some sort of filesystem-based markers. You could keep the current state of execution in one or more files, e.g. in /var/run, and make your script check for these files when it starts up.
When you complete one step, you rename the file to reflect the next step that needs to be done and so on.
At the end, rename it or remove it so that the next time the script runs, it will start a new cycle.
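A minimal sketch of the renaming idea (the /var/run/myjob path and the command names are hypothetical):

#!/bin/bash
dir=/var/run/myjob
mkdir -p "$dir"
ls "$dir"/step.* >/dev/null 2>&1 || touch "$dir/step.1"   # fresh start

while : ; do
    # A failed command leaves its marker in place, so the step is retried
    if   [ -f "$dir/step.1" ]; then command_1 && mv "$dir/step.1" "$dir/step.2"
    elif [ -f "$dir/step.2" ]; then command_2 && mv "$dir/step.2" "$dir/step.3"
    elif [ -f "$dir/step.3" ]; then command_3 && mv "$dir/step.3" "$dir/step.1"
    fi
    sleep 120
done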
I think you can use a cron job for this. A cron job can run each minute, and with a "lock file" strategy you can make sure the script only runs when the lock file is absent, i.e. when the previous run has finished.
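A sketch of that idea using flock(1) to manage the lock file (the paths and the crontab entry are assumptions):

#!/bin/bash
# Intended to be run from cron, e.g.:  * * * * * /usr/local/bin/myjob.sh
(
    flock -n 9 || exit 0    # previous run still active: do nothing this minute
    # ... one pass of the job goes here ...
) 9>/var/lock/myjob.lock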
I have a Perl script in which I create a table from existing MySQL databases. I have hundreds of databases, and the tables in each database contain millions of records; because of this a query sometimes takes hours due to indexing problems, and sometimes I run out of disk space due to improper joins. Is there a way I can kill the query from within the same script, watching memory consumption and execution time?
P.S. I am using the DBI module in Perl for the MySQL interface.
As far as execution time goes, you can use Perl's alarm functionality to time out.
Your ALRM handler can either die (see the example below) or issue a DBI cancel call (sub { $sth->cancel };).
The DBI documentation actually has a very good discussion of this, as well as examples:
eval {
    local $SIG{ALRM} = sub { die "TIMEOUT\n" }; # N.B. \n required
    eval {
        alarm($seconds);
        ... code to execute with timeout here (which may die) ...
    };
    # outer eval catches alarm that might fire JUST before this alarm(0)
    alarm(0); # cancel alarm (if code ran fast)
    die "$@" if $@;
};
if ( $@ eq "TIMEOUT\n" ) { ... }
elsif ($@) { ... } # some other error
As far as watching memory goes, your ALRM handler - instead of simply dying/cancelling - should first check the memory consumption of your script.
I won't go into the details of how to measure memory consumption, since that's an unrelated question which was likely already answered comprehensively on SO, but you can use the size() method from Proc::ProcessTable, as described in the Perlmonks snippet "Find memory usage of perl program".
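A sketch of such a handler (the 500 MB cut-off and the 5-second re-arm interval are assumptions; the die propagates to the surrounding eval exactly as in the DBI example above):

use strict;
use warnings;
use Proc::ProcessTable;

# Return this process's size in bytes, per Proc::ProcessTable
sub my_memory_usage {
    my $t = Proc::ProcessTable->new;
    for my $p ( @{ $t->table } ) {
        return $p->size if $p->pid == $$;
    }
    return 0;
}

$SIG{ALRM} = sub {
    die "TIMEOUT\n" if my_memory_usage() > 500 * 1024 * 1024;
    alarm(5);   # still under the limit: re-arm and keep checking
};
alarm(5);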
I used the KILL QUERY command as described in http://www.perlmonks.org/?node_id=885620.
This is the code from my script:
# Requires: use DBI; use Sys::SigAction;
eval {
    eval { # Time out and interrupt work
        my $TimeOut = Sys::SigAction::set_sig_handler('ALRM', sub {
            $dbh->clone()->do("KILL QUERY ".$dbh->{"mysql_thread_id"});
            die "TIMEOUT\n";
        });
        # Set alarm
        alarm($seconds);
        $sth->execute();
        # Clear alarm
        #alarm(0);
    };
    # Prevent race condition
    alarm(0);
    die "$@" if $@;
};
This code kills the query and also removes all the temporary tables.
Watch out: you can't kill a query using the same connection handle if the query is stuck because of a table lock. You must open another connection with the same user and kill that thread id.
Of course, you have to store the list of currently open thread ids in a hash.
Mind that once you've killed a thread id, the rest of the Perl code will execute... on an unblessed handle.
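A minimal sketch of that second-connection approach (the DSN and credentials are placeholders):

use strict;
use warnings;
use DBI;

my $dsn    = "DBI:mysql:database=test;host=localhost";
my $worker = DBI->connect($dsn, "testuser", "secret") or die $DBI::errstr;

# Remember the worker's thread id while its handle is still usable
my %open_threads;
$open_threads{ $worker->{mysql_thread_id} } = 1;

# Later, from a separate connection with the same user, kill the stuck query
my $killer = DBI->connect($dsn, "testuser", "secret") or die $DBI::errstr;
$killer->do("KILL QUERY $_") for keys %open_threads;
$killer->disconnect;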