Can I do multiple pieces of MySQL work at once in Laravel?

I wrote code like this (I use MySQL, PDO, InnoDB, Laravel 4, localhost & a Mac):
$all_queue = Queue1::all()->toArray(); // count about 10000
ob_end_clean();
foreach ($all_queue as $key => $value) {
    $priceCreate = array(...);
    Price::create($priceCreate);
    Queue1::where('id', $value['id'])->delete();
}
This worked for me (65 MB RAM usage), but while it was running, other parts of my program (such as queries against other tables) didn't work. I can't even open my database in MySQL. My program and MySQL just wait, and only when the process is complete do they work again.
I don't know what I am supposed to do.
I think this is not a Laravel problem but rather something in my PHP or MySQL configuration.
This is my php.ini and MySQL config.

I assume
$all_foreach($all_queue as $key=>$value) {
is
foreach($all_queue as $key=>$value) {
and that you have no errors (you have set debug to true in your app config).
Try setting no time limit for your script.
In your php.ini:
max_execution_time = 3600 ; this is one hour; set to 0 for no limit
Or in code:
set_time_limit(0);
And if it's a memory problem, try to free memory and unset unused vars. It's good practice in long scripts to free space.
...
}//end foreach loop
unset($all_queue); //no longer needed, so unset it to free memory
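Putting the two suggestions together with the loop from the question, a minimal sketch could look like this (same Queue1 and Price models as above; nothing here beyond the time limit and the unset):

set_time_limit(0); // no time limit for this long-running job

$all_queue = Queue1::all()->toArray(); // count about 10000

foreach ($all_queue as $key => $value) {
    $priceCreate = array(/* ... */);
    Price::create($priceCreate);
    Queue1::where('id', $value['id'])->delete();
}

unset($all_queue); // no longer needed, so unset it to free memory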

Related

When loading a fixture ends with "Killed"

In my local environment I can load all my fixtures fine.
But on my staging server, one of the fixtures just ends with "Killed".
I grep'ed through my whole project for the word "Killed" but did not find it anywhere.
php bin/console doctrine:fixtures:load --fixtures src/ReviewBundle/DataFixtures/ORM/LoadRatingTypes.php -vvv
Careful, database will be purged. Do you want to continue y/N ?y
> purging database
> loading ReviewBundle\DataFixtures\ORM\LoadRatingTypes
Killed
I tailed my dev.log file and all I got was:
[2016-03-17 15:26:54] doctrine.DEBUG: "START TRANSACTION" [] []
[2016-03-17 15:26:54] doctrine.DEBUG: DELETE FROM cms_block [] []
[2016-03-17 15:26:54] doctrine.DEBUG: DELETE FROM cms_page [] []
[2016-03-17 15:26:54] doctrine.DEBUG: DELETE FROM rating_type [] []
My PHP log is empty.
My mysql.log and mysql.err logs are empty.
I have no idea where I should look.
"Killed" often happens when PHP runs out of memory, and shuts down your fixture process. This happens mainly when your PHP script tries to load too many entities. Are you trying to load a massive number of fixtures?
There are workarounds to avoid memory overflows in your fixtures scripts. I answered a pretty similar question some weeks ago; you should have a look at it.
If you still have problems after implementing these modifications, you could also increase PHP's memory limit.
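For example, you can raise the limit just for the fixtures run on the command line (the 2G value is only an illustration; use whatever your staging server can spare):
php -d memory_limit=2G bin/console doctrine:fixtures:load --fixtures src/ReviewBundle/DataFixtures/ORM/LoadRatingTypes.php -vvv
Or set it permanently in php.ini:
memory_limit = 2G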

Get value of java options in jruby

I'd like to know, in a running jruby script, which java options are set. Is there a way to do this?
My problem is this: There are some scripts that I know require much more memory than others, and I would like to add a constraint in the code so that execution will stop early with proper warnings, rather than running out of memory at some unspecified time in the future.
Perhaps something I could stick in a BEGIN{}, like:
if is_set_joption?('-J-Xmx') then
  if get_joption('-J-Xmx').match(/\d+/)[0].to_i < 1000 then
    puts "You're gonna run out of memory...";
    abort();
  end
else
  puts "I recommend you start with -J-Xmx1000m.";
  abort();
end
(... where is_set_joption? and get_joption are made up methods.)
Running jruby 1.7.8.
It'll be in your ENV if you've set JAVA_OPTS in your environment, so you could get it from that. You've already started the JVM at that point, though, so you'll want to set it elsewhere, such as on your command line when you exec jruby with -D.
One can do:
require 'java';
java_import 'java.lang.Runtime';
mxm = Runtime.getRuntime.maxMemory.to_i;
if mxm < (512 * 1024 * 1024) then
raise "You're gonna need a bigger boat.";
end

kill mysql query in perl

I have a Perl script in which I create a table from existing MySQL databases. I have hundreds of databases, and the tables in each database contain millions of records. Because of this a query sometimes takes hours due to indexing problems, and sometimes I run out of disk space due to improper joins. Is there a way I can kill the query from within the same script by watching memory consumption and execution time?
P.S. I am using the DBI module in Perl as the MySQL interface.
As far as execution time goes, you can use Perl's alarm functionality to time out.
Your ALRM handler can either die (see the example below) or issue a DBI cancel call (sub { $sth->cancel };).
The DBI documentation actually has a very good discussion of this as well as examples:
eval {
    local $SIG{ALRM} = sub { die "TIMEOUT\n" }; # N.B. \n required
    eval {
        alarm($seconds);
        ... code to execute with timeout here (which may die) ...
    };
    # outer eval catches alarm that might fire JUST before this alarm(0)
    alarm(0); # cancel alarm (if code ran fast)
    die "$@" if $@;
};
if ( $@ eq "TIMEOUT\n" ) { ... }
elsif ($@) { ... } # some other error
As far as watching memory goes, you just need the ALRM handler - instead of simply dying/cancelling - to first check the memory consumption of your script.
I won't go into the details of how to measure memory consumption, since that is an unrelated question that has likely already been answered comprehensively on SO, but you can use the size() method from Proc::ProcessTable as described in the Perlmonks snippet "Find memory usage of perl program".
I used the KILL QUERY command as described in http://www.perlmonks.org/?node_id=885620
This is the code from my script:
eval {
    eval { # Time out and interrupt work
        my $TimeOut = Sys::SigAction::set_sig_handler('ALRM', sub {
            $dbh->clone()->do("KILL QUERY " . $dbh->{"mysql_thread_id"});
            die "TIMEOUT\n";
        });
        # Set alarm
        alarm($seconds);
        $sth->execute();
        # Clear alarm
        #alarm(0);
    };
    # Prevent race condition
    alarm(0);
    die "$@" if $@;
};
This code kills the query and also removes all the temporary tables.
Watch out: you can't kill a query using the same connection handle if the query is stuck because of a table lock. You must open another connection with the same user and kill that thread id from there.
Of course, you have to store the list of currently open thread ids in a hash.
Mind that once you've killed a thread id, the rest of the Perl code will execute ... on an unblessed handle.

mysql / file hash question

I'd like to write a script that traverses a file tree, calculates a hash for each file, and inserts the hash into an SQL table together with the file path, such that I can then query and search for files that are identical.
What would be the recommended hash function or command-line tool for creating hashes that are extremely unlikely to be identical for different files?
Thanks
B
I've been working on this problem for much too long. I'm on my third (and hopefully final) rewrite.
Generally speaking, I recommend SHA1 because it has no known collisions (whereas MD5 collisions can be found in minutes), and SHA1 doesn't tend to be a bottleneck when working with hard disks. If you're obsessed with getting your program to run fast in the presence of a solid-state drive, either go with MD5, or waste days and days of your time figuring out how to parallelize the operation. In any case, do not parallelize hashing until your program does everything you need it to do.
Also, I recommend using sqlite3. When I made my program store file hashes in a PostgreSQL database, the database insertions were a real bottleneck. Granted, I could have tried using COPY (I forget if I did or not), and I'm guessing that would have been reasonably fast.
If you use sqlite3 and perform the insertions in a BEGIN/COMMIT block, you're probably looking at about 10000 insertions per second in the presence of indexes. However, what you can do with the resulting database makes it all worthwhile. I did this with about 750000 files (85 GB). The whole insert and SHA1 hash operation took less than an hour, and it created a 140MB sqlite3 file. However, my query to find duplicate files and sort them by ID takes less than 20 seconds to run.
In summary, using a database is good, but note the insertion overhead. SHA1 is safer than MD5, but takes about 2.5x as much CPU power. However, I/O tends to be the bottleneck (CPU is a close second), so using MD5 instead of SHA1 really won't save you much time.
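For what it's worth, here is a minimal sketch of that sqlite3 approach (the hashes.db file, the files table, and /some/dir are placeholders made up for the example, not from the answer above):

$pdo = new PDO('sqlite:hashes.db');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE IF NOT EXISTS files (path TEXT PRIMARY KEY, hash TEXT)');
$pdo->exec('CREATE INDEX IF NOT EXISTS idx_files_hash ON files (hash)');

$insert = $pdo->prepare('INSERT OR REPLACE INTO files (path, hash) VALUES (?, ?)');
$iter = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator('/some/dir', FilesystemIterator::SKIP_DOTS)
);

$pdo->beginTransaction(); // one big transaction keeps the insert rate up
foreach ($iter as $file) {
    if ($file->isFile()) {
        $insert->execute(array($file->getPathname(), sha1_file($file->getPathname())));
    }
}
$pdo->commit();

// duplicate files are simply rows that share a hash
$dupes = $pdo->query(
    'SELECT hash, COUNT(*) AS copies FROM files GROUP BY hash HAVING COUNT(*) > 1'
)->fetchAll(PDO::FETCH_ASSOC);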
You can use an MD5 hash or SHA1:
function process_dir($path) {
    if ($handle = opendir($path)) {
        while (false !== ($file = readdir($handle))) {
            if ($file != "." && $file != "..") {
                if (is_dir($path . "/" . $file)) {
                    process_dir($path . "/" . $file);
                } else {
                    // you can change md5 to sha1
                    // you can put that hash into the database here
                    $hash = md5(file_get_contents($path . "/" . $file));
                }
            }
        }
        closedir($handle);
    }
}
If you're working on Windows, change the slashes to backslashes.
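If you do want to push the hash into a table from inside that loop, as the comment suggests, a rough fragment for the else branch could look like this (the $pdo connection and the file_hashes table are assumptions for the example, not part of the answer above):

// assumes something like: $pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
// and a table such as:    CREATE TABLE file_hashes (path VARCHAR(255), hash CHAR(40), KEY (hash));
$fullPath = $path . "/" . $file;
$hash = sha1_file($fullPath); // or md5_file($fullPath) for MD5
$stmt = $pdo->prepare('INSERT INTO file_hashes (path, hash) VALUES (?, ?)');
$stmt->execute(array($fullPath, $hash));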
Here's a solution I figured out. I didn't do all of it in PHP, though it'd be easy enough to do if you wanted:
$fh = popen('find /home/admin -type f | xargs sha1sum', 'r');
$files = array();
while ($line = fgets($fh)) {
    // sha1sum separates the hash and the file name with whitespace
    list($hash, $file) = preg_split('/\s+/', trim($line), 2);
    $files[$hash][] = $file;
}
$dupes = array_filter($files, function($a) { return count($a) > 1; });
I realise I've not used databases here. How many files are you going to be indexing? Do you need to put that data into a database and then search for the dupes there?

restart multi-threaded perl script and close mysql connection

I have a Perl script that reads a command file and restarts itself if necessary by doing:
myscript.pl:
exec '/home/foo/bin/myscript.pl';
exit(0);
Now, this works fine except for one issue. The thread that reads the command file does not have access to the DBI handle I use. And over a number of restarts I seem to build up the number of open mysql connections till I get the dreaded "Too Many Connections" error. The DBI spec says:
"Because of this (possibly temporary) restriction, newly created threads must make their own connections to the database. Handles can't be shared across threads."
Any way to close connections or perhaps a different way to restart the script?
Use a flag variable that is shared between threads. Have the thread that reads the command file set the flag when it is time to exit, and have the thread holding the DB handle release it and actually do the re-exec:
#!/usr/bin/perl
use threads;
use threads::shared;
use strict; use warnings;

my $EXIT_FLAG :shared;
my $db_thread = threads->create('do_the_db_thing');
$db_thread->detach;

while ( 1 ) {
    sleep rand 10;
    $EXIT_FLAG = 1 if 0.05 > rand or time - $^T > 20;
}

sub do_the_db_thing {
    until ( $EXIT_FLAG ) {
        warn sprintf "%d: Working with the db\n", time - $^T;
        sleep rand 5;
    }
    # $dbh->disconnect; # here
    warn "Exit flag is set ... restarting\n";
    exec 'j.pl';
}
You could try registering an atexit function to close the DBI handle at the point where it is opened, and then use fork & exec to restart the script rather than just exec. The parent would then call exit, invoking the atexit callback to close the DBI handle. The child could re-exec itself normally.
Edit: After thinking for a couple more minutes, I believe you could skip the atexit entirely because the handle would be closed automatically upon the parent's exit. Unless, of course, you need to do a more complex operation upon closing the DB handle than a simple filehandle close.
my $pid = fork();
if (not defined $pid) {
    # Could not fork, so handle the error somehow
} elsif ($pid == 0) {
    # Child re-execs itself
    exec '/home/foo/bin/myscript.pl';
} else {
    # Parent exits
    exit(0);
}
If you expect a lot of connections, you probably want DBI::Gofer to act as a DBI proxy for you. You create as many connections in as many scripts as you like, and DBI::Gofer shares them when it can.