I use the following code to save a file:
$file = UploadedFile::getInstance($model, 'uploadedFile'); // get the uploaded file
$content = file_get_contents($file->tempName);             // read the whole file into a string
$model->content = $content;
$model->save();
With the code above I can save files up to approximately 1 MB, but larger files throw an error on $model->save():
PDOStatement::execute(): MySQL server has gone away
The column type is mediumblob. What could the problem be?
The problem was max_allowed_packet = 1M in my.ini.
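For anyone else hitting this: the fix is to raise that limit in my.ini (or my.cnf on Linux) and restart MySQL; the value just has to exceed your largest upload. A sketch, with 64M as an example value:

[mysqld]
# must be larger than the biggest blob you intend to insert
max_allowed_packet = 64M

You can check the active value with SHOW VARIABLES LIKE 'max_allowed_packet';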
I'm trying to import CSV files of about 3 GB into phpMyAdmin. Some lines contain extra terminating characters, and the import then stops because of malformed fields.
I have two columns that I want to fill. I'm using : as the terminating character, but when a line contains more than one of them, the import just stops. The CSV files are too big for me to edit by hand. I want to skip the bad lines, or find another solution. How can I do this?
The CSV files look like this:
ahoj123:dublin
cat:::dog
pes::lolko
As a solution to your problem, I have written a simple PHP script that will "fix" your file for you.
It will open "test.csv" with contents of:
ahoj123:dublin
cat:::dog
pes::lolko
and convert it to the following, saving the result to "fixed_test.csv":
ahoj123:dublin
cat:dog
pes:lolko
Bear in mind that I am basing this on your example, so I am letting $last keep its EOL character, since there is no reason to remove or edit it.
PHP file:
<?php
$filename = "test.csv";
$handle = fopen($filename, "r+") or die("Could not open $filename" . PHP_EOL);
$keep = '';
while (!feof($handle)) {
    $line = fgets($handle);
    $elements = explode(':', $line);       // split on every colon
    $first = $elements[0];                 // first field
    $key = (count($elements) - 1);
    $last = $elements[$key];               // last field (keeps its EOL)
    $keep .= "$first:$last";               // drop everything in between
}
fclose($handle);
$new_filename = "fixed_test.csv";
$new_handle = fopen($new_filename, "w") or die("Could not open $new_filename" . PHP_EOL);
fwrite($new_handle, $keep);
fclose($new_handle);
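One caveat: since your files are around 3 GB, accumulating the whole result in $keep will blow through PHP's memory limit. A streaming variant of the same first-field/last-field logic (file names as in the example) writes each fixed line out immediately:

<?php
$in  = fopen("test.csv", "r") or die("Could not open test.csv" . PHP_EOL);
$out = fopen("fixed_test.csv", "w") or die("Could not open fixed_test.csv" . PHP_EOL);
while (($line = fgets($in)) !== false) {
    $elements = explode(':', $line);
    // keep only the first and last fields; memory use stays constant
    fwrite($out, $elements[0] . ':' . $elements[count($elements) - 1]);
}
fclose($in);
fclose($out);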
I've already completed a script that inserts data into a MySQL table and moves each file into a directory until no files are left. There are around 51 files and it takes around 9 seconds to finish executing. So my question is: is there a better way to speed up the execution?
The code is:
use DBI;
use Data::Dumper;

our $DIR = "/home/aimanhalim/LOG";
our $FILENAME_REGEX = "server_performance_";

# mariaDB config hash
our %db_config = ( "username" => "root", "password" => "", "db" => "Top_Data", "ip" => "127.0.0.1", "port" => "3306" );
main();
exit;

sub main()
{
    my $start = time();
    print "Searching file $FILENAME_REGEX in $DIR...\n";
    opendir (my $dr, $DIR) or die "<ERROR> Cannot open dir: $DIR \n";
    while ( my $file = readdir $dr )
    {
        print "file in $DIR: [$file]\n";
        next if (($file eq ".") || ($file eq "..") || ($file eq "DONE"));

        # opening the file in the directory
        open(my $file_hndlr, '<', "$DIR/$file") or die "<ERROR> Cannot open file: $DIR/$file \n";

        # working variables
        my $line_count = 0;
        my %data = ();
        my $dataRef = \%data;
        my $move = "$DIR/$file";
        print "$file\n";
        while (<$file_hndlr>)
        {
            my $line = $_;
            chomp($line);
            print "line[$line_count] - [$line]\n";
            if ($line_count == 0)
            {
                # get load average from line 0
                ($dataRef) = get_load_average($line, $dataRef);
                print Dumper($dataRef);
            }
            elsif ($line_count == 2)
            {
                ($dataRef) = get_Cpu($line, $dataRef);
                print Dumper($dataRef);
            }
            $line_count++;
        }

        # insert db
        my ($result) = insert_record($dataRef, \%db_config, $file);
        my $Done_File = "/home/aimanhalim/LOG/DONE";

sub insert_record()
{
    my ($data, $db_config, $file) = @_;
    my $result = -1;    # -1 - fail; 0 - success

    # connect to the MySQL database
    my $dsn = "DBI:mysql:database=".$db_config->{'db'}.";host=".$db_config->{'ip'}.";port=".$db_config->{'port'};
    my $username = $db_config->{'username'};
    my $password = $db_config->{'password'};
    my %attr = (PrintError => 0, RaiseError => 1);
    my $dbh = DBI->connect($dsn, $username, $password, \%attr) or die $DBI::errstr;
    print "We Have Successfully Connected To The Database \n";

    # *** the INSERT data statement was here (snipped) ***
    $stmt->execute(@param_bind);
    $stmt->finish();
    print "The Data Has Been Inserted Successfully\n";
    $result = 0;
    return($result);

    # note: nothing below this return is ever reached
    # commit
    $dbh->commit();
    # return succ / if fail rollback and return fail
    $dbh->disconnect();
}
exit;
Edited:
So this is pretty much my code, with some snipping here and there.
I tried to put insert_record below the #insert db comment, but I don't think that did anything :U
You are connecting to the database for every file that you want to insert (if I read your code correctly; there seems to be a closing curly brace missing, so it won't actually compile). Opening new database connections is (comparatively) slow.
Open the connection once, before inserting the first file, and re-use it for subsequent inserts into the database. Close the connection after your last file was inserted. This should give you a noticeable speed-up.
(Depending on the amount of data, 9 seconds might actually not be too bad; but since there is no information on that, it's hard to say.)
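In code, that restructuring looks roughly like this (a minimal sketch; your_table, parse_file, and @files are placeholders for your actual table, parsing subs, and readdir loop):

use DBI;

# connect once, before the file loop
my $dsn = "DBI:mysql:database=$db_config{db};host=$db_config{ip};port=$db_config{port}";
my $dbh = DBI->connect($dsn, $db_config{username}, $db_config{password},
                       { PrintError => 0, RaiseError => 1 })
    or die $DBI::errstr;

# prepare once as well; only the bind values change per file
my $sth = $dbh->prepare("INSERT INTO your_table (load_avg, cpu) VALUES (?, ?)");

for my $file (@files) {                        # @files: whatever your readdir loop collects
    my ($load_avg, $cpu) = parse_file($file);  # hypothetical parsing helper
    $sth->execute($load_avg, $cpu);            # re-uses the open connection
}

$sth->finish();
$dbh->disconnect();                            # close once, after the last file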
What we have: a single CSV file with field names as a header row.
What we need:
On the basis of the file's size, we need to split it into multiple smaller CSV files with extension _00*.
Condition: if file_size < 5 GB, then no action.
If file_size > 5 GB, then split it into multiple files whose sizes range between 1 GB and < 5 GB.
While splitting the file by size, we must take care not to split a single record.
We need to preserve the header record of the source file and replicate it into each new file.
Along with each small file, a blank file with the same name but extension .ok needs to be created, purely as notification that the file was created.
In the end, delete the source file, keep only the new files, and create one final file with the same name as the source file but with extension .ok.
Example: source file: file_name_20160316.csv, size: 8.8 GB
Output:
file_name_20160316_001.csv ( size : 4 GB)
file_name_20160316_001.ok
file_name_20160316_002.csv ( size : 4.8 GB)
file_name_20160316_002.ok
file_name_20160316.ok
Please help us write Unix code for this.
#!/usr/bin/perl -p
BEGIN
{
    $dim = 5e9;
    $header = <>;                  # we need to preserve the header record
    exit if -s ARGV < $dim;        # if file_size < 5 GB then no action
    $headsize = $told = tell;
    # choose a chunk size in the range 1 GB to < 5 GB
    $dim = ($dim + (-s _) / int(1 + (-s _) / $dim)) / 2 if (-s _) % $dim <= 1e9;
    ($base = $ARGV) =~ s/\.csv$/_/;
    $extent = "000";
}
if (tell > $lim)                   # need a new file?
{
    $lim = $told + $dim - $headsize;
    open OK, ">$base$extent.ok" and close OK if $output;
    $output = $base . ++$extent . '.csv';
    open STDOUT, ">$output" or die "$output: $!\n";
    print $header;                 # replicate the header into each new file
}
$told = tell;
END
{
    open OK, ">$base$extent.ok" and close OK if $output;
    chop $base;
    unlink $ARGV and open OK, ">$base.ok" and close OK;
}
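To run it on the example from the question (assuming you saved the script as split_csv.pl):

perl split_csv.pl file_name_20160316.csv

The script reads the source file through ARGV, writes the _001.csv, _002.csv, ... pieces and their .ok files next to it, and the END block deletes the source and drops the final file_name_20160316.ok.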
I am working with a 6.0 MB JSON file that is used by about 100 other scripts on a server that will soon be set up. I wish to compress the file by deleting all of the extra spaces, tabs, returns, etc., but all of the tools I've found for compressing it can't handle the file's size (around 108,000 lines). I need to break the file up in a way that makes it easy to reassemble once each chunk has been compressed. Does anyone know an efficient way to break it up? Help would be much appreciated!
Because Python could already handle the large file, I ended up using IPython and writing a .py script that dumps the JSON without spaces. To use this script, one would type:
$ ipython -i compression_script.py
This is the code within compression_script.py:
import json

filename = raw_input('Enter the file you wish to compress: ')  # Python 2; use input() on Python 3
newname = 'compressed_' + filename  # by default the new filename is 'compressed_' + filename

fp = open(filename)
jload = json.load(fp)               # parse the original JSON
fp.close()

# indent=None plus tight separators strips all inter-token whitespace
newfile = json.dumps(jload, indent=None, separators=(',', ':'))

f = open(newname, 'wb')
f.write(newfile)
f.close()

print('Compression complete! Type quit to exit IPython')
This can be done in PHP as well, like so:
$myfile = fopen("newfile.txt", "w") or die("Unable to open file!");
$handle = fopen("somehugefile.json", "r");
if ($handle) {
    $i = 0;
    while (!feof($handle)) {
        $buffer = fgets($handle, 5096);
        // strip any style of line ending, plus tabs
        $buffer = str_replace(array("\r\n", "\r", "\n"), "", $buffer);
        $buffer = str_replace("\t", "", $buffer);
        fwrite($myfile, $buffer);
        $i++;
        //var_dump($buffer);
        /*
        if ($i == 1000) {
            die('stop');
        }
        */
    }
    fclose($handle);
    fclose($myfile);
}
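If the whole file fits in memory, a whitespace-safe alternative in PHP is to round-trip it through the JSON parser; unlike str_replace, this cannot touch tabs or newlines inside string values (a sketch, assuming somehugefile.json parses cleanly):

$data = json_decode(file_get_contents("somehugefile.json"));
// json_encode emits no whitespace between tokens by default
file_put_contents("newfile.json", json_encode($data));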
I'm trying to restore a database from a backup file using SMO. If the database does not already exist then it works fine. However, if the database already exists then I get no errors, but the database is not overwritten.
The "restore" process still takes just as long, so it looks like it's working and doing a restore, but in the end the database has not changed.
I'm doing this in PowerShell using SMO. The code is a bit long, but I've included it below. You'll notice that I do set $restore.ReplaceDatabase = $true. Also, I use a try-catch block and report on any errors (I hope), but none are returned.
Any obvious mistakes? Is it possible that I'm not reporting some error and it's being hidden from me?
Thanks for any help or advice that you can give!
function Invoke-SqlRestore {
    param(
        [string]$backup_file_name,
        [string]$server_name,
        [string]$database_name,
        [switch]$norecovery = $false
    )
    # Get a new connection to the server
    [Microsoft.SqlServer.Management.Smo.Server]$server = New-SMOconnection -server_name $server_name
    Write-Host "Starting restore to $database_name on $server_name."
    Try {
        $backup_device = New-Object("Microsoft.SqlServer.Management.Smo.BackupDeviceItem") ($backup_file_name, "File")
        # Get local paths to the Database and Log file locations
        If ($server.Settings.DefaultFile.Length -eq 0) { $database_path = $server.Information.MasterDBPath }
        Else { $database_path = $server.Settings.DefaultFile }
        If ($server.Settings.DefaultLog.Length -eq 0) { $database_log_path = $server.Information.MasterDBLogPath }
        Else { $database_log_path = $server.Settings.DefaultLog }
        # Load up the Restore object settings
        $restore = New-Object Microsoft.SqlServer.Management.Smo.Restore
        $restore.Action = 'Database'
        $restore.Database = $database_name
        $restore.ReplaceDatabase = $true
        If ($norecovery.IsPresent) { $restore.NoRecovery = $true }
        Else { $restore.NoRecovery = $false }
        $restore.Devices.Add($backup_device)
        # Get information from the backup file
        $restore_details = $restore.ReadBackupHeader($server)
        $data_files = $restore.ReadFileList($server)
        # Restore all backup files
        ForEach ($data_row in $data_files) {
            $logical_name = $data_row.LogicalName
            $physical_name = Get-FileName -path $data_row.PhysicalName
            $restore_data = New-Object("Microsoft.SqlServer.Management.Smo.RelocateFile")
            $restore_data.LogicalFileName = $logical_name
            If ($data_row.Type -eq "D") {
                # Restore Data file
                $restore_data.PhysicalFileName = $database_path + "\" + $physical_name
            }
            Else {
                # Restore Log file
                $restore_data.PhysicalFileName = $database_log_path + "\" + $physical_name
            }
            [Void]$restore.RelocateFiles.Add($restore_data)
        }
        $restore.SqlRestore($server)
        # If there are two files, assume the second is a Log
        If ($restore_details.Rows.Count -gt 1) {
            $restore.Action = [Microsoft.SqlServer.Management.Smo.RestoreActionType]::Log
            $restore.FileNumber = 2
            $restore.SqlRestore($server)
        }
    }
    Catch {
        $ex = $_.Exception
        Write-Output $ex.message
        $ex = $ex.InnerException
        while ($ex.InnerException) {
            Write-Output $ex.InnerException.message
            $ex = $ex.InnerException
        }
        Throw $ex
    }
    Finally {
        $server.ConnectionContext.Disconnect()
    }
    Write-Host "Restore ended without any errors."
}
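For reference, a hypothetical call to the function above (server, database, and path are example values):

Invoke-SqlRestore -backup_file_name "D:\Backups\MyDb.bak" -server_name "localhost" -database_name "MyDb"

Add -norecovery if you intend to apply further log backups afterwards.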
I'm having the same problem: I'm trying to restore the database from a backup taken from the same server, but with a different name.
I have profiled the restore process, and it doesn't add the WITH MOVE clause with the different file names. This is why it will restore the database when the database doesn't exist, but fail when it does.
There is an issue with the .PhysicalFileName property.
I was doing the SMO restore and was running into errors. The only way I found to diagnose the problem was to run SQL Profiler during the execution of my PowerShell script.
This showed me the actual T-SQL that was being executed. I then copied it into a query window and tried to execute it, which revealed the actual errors: in my case, my database had multiple data files that needed to be relocated.
The attached script works for databases that have only one data file.
Param
(
    [Parameter(Mandatory=$True)][string]$sqlServerName,
    [Parameter(Mandatory=$True)][string]$backupFile,
    [Parameter(Mandatory=$True)][string]$newDBName
)

# Load assemblies
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoExtended") | Out-Null
[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ConnectionInfo") | Out-Null
[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoEnum") | Out-Null
Write-Host "Loaded assemblies"

# Create sql server object
$server = New-Object ("Microsoft.SqlServer.Management.Smo.Server") $sqlServerName

# Backup file is expected in the server's backup directory
$backupDirectory = $server.Settings.BackupDirectory
Write-Host "Backup Directory:" $backupDirectory
$fullBackupFile = $backupDirectory + "\" + $backupFile
Write-Host "Restore DB from: " $fullBackupFile

# Create restore object and specify its settings
$smoRestore = New-Object("Microsoft.SqlServer.Management.Smo.Restore")
$smoRestore.Database = $newDBName
$smoRestore.NoRecovery = $false
$smoRestore.ReplaceDatabase = $true
$smoRestore.Action = "Database"
Write-Host "New Database name:" $newDBName

# Create location to restore from
$backupDevice = New-Object("Microsoft.SqlServer.Management.Smo.BackupDeviceItem") ($fullBackupFile, "File")
$smoRestore.Devices.Add($backupDevice)

# Give empty string a nice name
$empty = ""

# Specify new data file (mdf)
$smoRestoreDataFile = New-Object("Microsoft.SqlServer.Management.Smo.RelocateFile")
$defaultData = $server.DefaultFile
if (($defaultData -eq $null) -or ($defaultData -eq $empty))
{
    $defaultData = $server.MasterDBPath
}
Write-Host "defaultData:" $defaultData
$smoRestoreDataFile.PhysicalFileName = Join-Path -Path $defaultData -ChildPath ($newDBName + "_Data.mdf")
Write-Host "smoRestoreDataFile.PhysicalFileName:" $smoRestoreDataFile.PhysicalFileName

# Specify new log file (ldf)
$smoRestoreLogFile = New-Object("Microsoft.SqlServer.Management.Smo.RelocateFile")
$defaultLog = $server.DefaultLog
if (($defaultLog -eq $null) -or ($defaultLog -eq $empty))
{
    $defaultLog = $server.MasterDBLogPath
}
$smoRestoreLogFile.PhysicalFileName = Join-Path -Path $defaultLog -ChildPath ($newDBName + "_Log.ldf")
Write-Host "smoRestoreLogFile:" $smoRestoreLogFile.PhysicalFileName

# Get the file list from the backup file
$dbFileList = $smoRestore.ReadFileList($server)

# The logical file names should be the logical filenames stored in the backup media
$smoRestoreDataFile.LogicalFileName = $dbFileList.Select("Type = 'D'")[0].LogicalName
$smoRestoreLogFile.LogicalFileName = $dbFileList.Select("Type = 'L'")[0].LogicalName

# Add the new data and log files to relocate to
$smoRestore.RelocateFiles.Add($smoRestoreDataFile)
$smoRestore.RelocateFiles.Add($smoRestoreLogFile)

# Restore the database
$smoRestore.SqlRestore($server)
"Database restore completed successfully"
Just as when you do this from T-SQL, if there is something using the database, that will block the restore. Whenever I'm tasked with restoring a database, I like to take it offline (with rollback immediate) first; that kills any connections to the db. You may have to set it back online first; I don't remember whether restore is smart enough to realise that the files you're overwriting belong to the database you're restoring. Hope this helps.
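A minimal sketch of that approach, assuming the $server and $database_name variables from the question's script (run it just before $restore.SqlRestore($server), and only when the target database already exists):

$db = $server.Databases[$database_name]
if ($db) {
    # kill any connections holding locks on the target database
    $server.KillAllProcesses($database_name)
}
# the rough equivalent in T-SQL (database name is an example):
# ALTER DATABASE [MyDb] SET OFFLINE WITH ROLLBACK IMMEDIATE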