I am trying to force a manual merge for certain files, per this question, but it isn't working: certain pom files are being auto-merged even though I believe I have configured my .hgrc correctly. Any ideas?
I tried fiddling with the merge-tool priorities. Originally we had merge = bc under [ui]; I removed this, but it didn't help.
My .hgrc:
[ui]
editor = notepad
username = Boo Hoo <boo.hoo#who.com>
ssh = plink
[extensions]
fetch =
hgext.extdiff =
mq =
hgext.graphlog =
[extdiff]
cmd.kdiff3 =
cmd.examdiff = C:\Program Files (x86)\ExamDiff Pro\ExamDiff.exe
cmd.bc = C:\Program Files (x86)\Beyond Compare 3\BCompare.exe
opts.bc = /leftreadonly
[merge-tools]
bc.executable = C:\Program Files (x86)\Beyond Compare 3\BComp
bc.args = /leftreadonly /centerreadonly $local $other $base $output
bc.priority = 1
bc.premerge = True
manual.executable = C:\Program Files (x86)\Beyond Compare 3\BComp
manual.args = /leftreadonly /centerreadonly $local $other $base $output
manual.priority = 100
manual.premerge = False
[merge-patterns]
.hgtags = manual
pom.xml = manual
**\pom.xml = manual
Considering the file name patterns, and the fact that merge-patterns are globs by default, rooted at the root directory (see the hgrc documentation on merge-patterns), you could try:
**/pom.xml
(to use the shell-style path separator '/' instead of '\')
or try a regex pattern:
re:.*[/\\]pom.xml$
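Putting it together, the [merge-patterns] section would then read (a sketch based on the config above):
[merge-patterns]
.hgtags = manual
pom.xml = manual
**/pom.xml = manual
or, using the regex form, replace the last line with re:.*[/\\]pom.xml$ = manual.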
I feel like this is pretty simple, but I'm missing something. I have 130 folders, all containing the same file, "Document.pdf". Of course, the contents vary from file to file, but they all have the same name and extension. What I'm trying to do is have a script take all those 130 files, and give them names from "1.pdf" to "130.pdf", in order. The folders are in order as well (1-130). I have these folders on both local storage and Google Drive, so any solution involving either bash or GScripts will be good with me. Thanks.
This should do the trick:
Code:
function renameFiles() {
  var iter = 1;
  while (iter <= 130) {
    Logger.log(iter);
    // Find the folder(s) whose name is the current number
    var folders = DriveApp.getFoldersByName(iter);
    while (folders.hasNext()) {
      var folder = folders.next();
      Logger.log("Folder: " + folder.getName());
      var files = folder.getFilesByName("Document.pdf");
      while (files.hasNext()) {
        var file = files.next();
        // getMimeType() returns "application/pdf" for a PDF, so substr(-3) yields "pdf"
        file.setName(iter + "." + file.getMimeType().substr(-3));
        Logger.log("File: " + file.getName());
      }
    }
    iter++;
  }
}
Assumptions:
Folders are named 1, 2, ..., 130
Assuming the directories are actually named 1, 2, and so on up to and including 130, here is a bash solution:
# Edit this to desired path
parent_dir='.'
find "$parent_dir" -maxdepth 1 -type d | while read -r directory; do
dir_name="$(basename directory)"
if [ "$dir_name" -ge 1 ] && [ "$dir_name" -le 130 ]; then
mv "$directory/Document.pdf" "$directory/$dir_name.pdf"
fi
done
This uses basename to get the name of each directory (i.e., the value between 1 and 130), and uses that to rename the Document.pdf files.
Note that -maxdepth is not POSIX. To replace -maxdepth with POSIX options, see this answer.
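If you want a strictly POSIX find, one common idiom is to append /. to the search root and prune everything one level below it; a sketch of the same loop under that assumption:
parent_dir='.'
# "$parent_dir"/. makes the starting entry literally ".", so every entry one level
# below it is pruned (never descended into) after being tested by -type d
find "$parent_dir"/. ! -name . -prune -type d | while read -r directory; do
  dir_name="$(basename "$directory")"
  if [ "$dir_name" -ge 1 ] 2>/dev/null && [ "$dir_name" -le 130 ]; then
    mv "$directory/Document.pdf" "$directory/$dir_name.pdf"
  fi
done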
I'm trying to set the "HomeDirectory" parameter for a list of users given a .csv file. I'm using a modified script created by Trevor Sullivan, listed in the following post:
Url
This is my script:
$UserList = Import-Csv -Path c:\scripts\Users2.csv;
foreach ($User in $UserList) {
    $Account = Get-ADUser -Identity $User.SamAccountName
    $Account.HomeDirectory = '\\adminclusterfs\homedir\{0}' -f $Account.SamAccountName;
    $Account.homeDrive = "O:"
    Set-ADUser -Instance $Account -PassThru
}
The script works almost fine: the HomeDirectory and HomeDrive parameters are set correctly for each user in the .csv file, but the folders aren't created on the file server.
When I set these parameters manually, the folders are created.
Has anyone solved this issue?
I really appreciate your help!
AD Users and Computers creates the folder for you, if your account has access. That's a feature of the Users and Computers program running on your computer, not of the AD server itself.
You will have to add code in your script to create the folder.
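For example, something like this inside the loop would create it (a minimal sketch; the share path is a placeholder):
# Hypothetical example: create the user's home folder on the share before assigning it
New-Item -Path ('\\fileserver\homedir\{0}' -f $User.SamAccountName) -ItemType Directory -Force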
Here is the complete script I made to set the HomeDirectory and HomeDrive parameters, create the folder, and assign it the access rights. Thanks to Trevor Sullivan and Sean Kearney for their posts.
# Import the user data from CSV
$UserList = Import-Csv -Path c:\scripts\Users2.csv;
# For each user ...
foreach ($User in $UserList) {
    # Get the user's AD account
    $Account = Get-ADUser -Identity $User.SamAccountName
    # Dynamically declare their home directory path in a string
    $Account.HomeDirectory = '\\fileserver\homedir\{0}' -f $Account.SamAccountName;
    $Account.homeDrive = "O:"
    # Set their home directory and home drive letter in Active Directory
    Set-ADUser -Instance $Account
    # Create the folder on the root of the home directory share
    New-Item -Path $Account.HomeDirectory -ItemType Directory -Force
    # Set parameters for the access rule ('Domain' here stands in for the domain's NetBIOS name)
    $IdentityReference = 'Domain\' + $Account.SamAccountName
    $FileSystemAccessRights = [System.Security.AccessControl.FileSystemRights]"FullControl"
    $InheritanceFlags = [System.Security.AccessControl.InheritanceFlags]::"ContainerInherit","ObjectInherit"
    $PropagationFlags = [System.Security.AccessControl.PropagationFlags]"None"
    $AccessControl = [System.Security.AccessControl.AccessControlType]"Allow"
    # Build the access rule from the parameters
    $AccessRule = New-Object System.Security.AccessControl.FileSystemAccessRule -ArgumentList ($IdentityReference, $FileSystemAccessRights, $InheritanceFlags, $PropagationFlags, $AccessControl)
    # Get the current ACL of the user's home folder
    $HomeFolderACL = Get-Acl $Account.HomeDirectory
    $HomeFolderACL.AddAccessRule($AccessRule)
    # Apply the updated ACL to the folder
    Set-Acl -Path $Account.HomeDirectory -AclObject $HomeFolderACL
}
I am working with a 6.0 MB JSON file that is being used with about 100 other scripts on a server that will soon be set up. I wish to compress the file by deleting all of the extra spaces, tabs, returns, etc., but all of the sources I've found for compressing the file can't handle the file's size (it's around 108,000 lines of code). I need to break the file up in a way that it will be easy to reassemble once each chunk has been compressed. Does anyone know how to break it up in an efficient way? Help would be much appreciated!
Because Python could already handle the large file, I ended up using IPython and writing a .py script that dumps the JSON without spaces. To use this script, one would type:
$ ipython -i compression_script.py
This is the code within compression_script.py:
import json

filename = raw_input('Enter the file you wish to compress: ')  # file we want to compress
newname = 'compressed_' + filename  # by default the new file is named 'compressed_' + filename

fp = open(filename)
jload = json.load(fp)
fp.close()

# indent=None and the tight separators strip all optional whitespace
newfile = json.dumps(jload, indent=None, separators=(',', ':'))

f = open(newname, 'wb')
f.write(newfile)
f.close()

print('Compression complete! Type quit to exit IPython')
You can do it in PHP as well, like this:
<?php
$myfile = fopen("newfile.txt", "w") or die("Unable to open file!");
$handle = fopen("somehugefile.json", "r");
if ($handle) {
    $i = 0;
    while (!feof($handle)) {
        // Read line by line and strip line breaks and tabs
        $buffer = fgets($handle, 5096);
        $buffer = str_replace(array("\r", "\n"), "", $buffer); // handles both Windows and Unix line endings
        $buffer = str_replace("\t", "", $buffer);
        fwrite($myfile, $buffer);
        $i++;
        //var_dump($buffer);
        /*
        if ($i == 1000) {
            die('stop');
        }
        */
    }
    fclose($handle);
    fclose($myfile);
}
$uploadDir = 'images/';
$fileName = $_FILES['Photo']['name'];
$tmpName = $_FILES['Photo']['tmp_name'];
$fileSize = $_FILES['Photo']['size'];
$fileType = $_FILES['Photo']['type'];
$filePath = $uploadDir . $fileName;
$result = move_uploaded_file($tmpName, $filePath);
As commented by Darrel on the move_uploaded_file() manual page:
move_uploaded_file apparently uses the root of the Apache installation (e.g. "Apache Group\Apache2" under Windows) as the upload location if relative pathnames are used.
For example,
$ftmp = $_FILES['userfile']['tmp_name'];
$fname = $_FILES['userfile']['name'];
move_uploaded_file($ftmp, $fname);
moves the file to
"Apache Group\Apache2\$fname";
In contrast, other file/directory related functions use the current directory of the php script as the offset for relative pathnames. So, for example, if the command
mkdir('tmp');
is called from 'Apache Group\Apache2\htdocs\testpages\upload.php', the result is to create
'Apache Group\Apache2\htdocs\testpages\tmp'
On the other hand, if 'mkdir' is called just before 'move_uploaded_file', the behavior changes. The commands,
mkdir('tmp');
move_uploaded_file($ftmp, $fname);
used together result in
"Apache Group\Apache2\htdocs\testpages\tmp\$fname"
being created. Wonder if this is a bug or a feature.
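Applied to the snippet at the top of this question, the practical fix is to give move_uploaded_file() an absolute destination path; a minimal sketch, assuming PHP 5.3+ (for __DIR__) and that images/ sits next to the script:
<?php
// Build an absolute destination so the result no longer depends on whatever
// the server treats as the current directory
$uploadDir = __DIR__ . '/images/';
$fileName  = basename($_FILES['Photo']['name']); // basename() also strips any path the client supplied
$tmpName   = $_FILES['Photo']['tmp_name'];
$filePath  = $uploadDir . $fileName;
$result    = move_uploaded_file($tmpName, $filePath);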
I'm trying to restore a database from a backup file using SMO. If the database does not already exist then it works fine. However, if the database already exists then I get no errors, but the database is not overwritten.
The "restore" process still takes just as long, so it looks like it's working and doing a restore, but in the end the database has not changed.
I'm doing this in Powershell using SMO. The code is a bit long, but I've included it below. You'll notice that I do set $restore.ReplaceDatabase = $true. Also, I use a try-catch block and report on any errors (I hope), but none are returned.
Any obvious mistakes? Is it possible that I'm not reporting some error and it's being hidden from me?
Thanks for any help or advice that you can give!
function Invoke-SqlRestore {
    param(
        [string]$backup_file_name,
        [string]$server_name,
        [string]$database_name,
        [switch]$norecovery=$false
    )
    # Get a new connection to the server
    [Microsoft.SqlServer.Management.Smo.Server]$server = New-SMOconnection -server_name $server_name
    Write-Host "Starting restore to $database_name on $server_name."
    Try {
        $backup_device = New-Object("Microsoft.SqlServer.Management.Smo.BackupDeviceItem") ($backup_file_name, "File")
        # Get local paths to the Database and Log file locations
        If ($server.Settings.DefaultFile.Length -eq 0) { $database_path = $server.Information.MasterDBPath }
        Else { $database_path = $server.Settings.DefaultFile }
        If ($server.Settings.DefaultLog.Length -eq 0) { $database_log_path = $server.Information.MasterDBLogPath }
        Else { $database_log_path = $server.Settings.DefaultLog }
        # Load up the Restore object settings
        $restore = New-Object Microsoft.SqlServer.Management.Smo.Restore
        $restore.Action = 'Database'
        $restore.Database = $database_name
        $restore.ReplaceDatabase = $true
        if ($norecovery.IsPresent) { $restore.NoRecovery = $true }
        Else { $restore.NoRecovery = $false }
        $restore.Devices.Add($backup_device)
        # Get information from the backup file
        $restore_details = $restore.ReadBackupHeader($server)
        $data_files = $restore.ReadFileList($server)
        # Restore all backup files
        ForEach ($data_row in $data_files) {
            $logical_name = $data_row.LogicalName
            $physical_name = Get-FileName -path $data_row.PhysicalName
            $restore_data = New-Object("Microsoft.SqlServer.Management.Smo.RelocateFile")
            $restore_data.LogicalFileName = $logical_name
            if ($data_row.Type -eq "D") {
                # Restore Data file
                $restore_data.PhysicalFileName = $database_path + "\" + $physical_name
            }
            Else {
                # Restore Log file
                $restore_data.PhysicalFileName = $database_log_path + "\" + $physical_name
            }
            [Void]$restore.RelocateFiles.Add($restore_data)
        }
        $restore.SqlRestore($server)
        # If there are two files, assume the next is a Log
        if ($restore_details.Rows.Count -gt 1) {
            $restore.Action = [Microsoft.SqlServer.Management.Smo.RestoreActionType]::Log
            $restore.FileNumber = 2
            $restore.SqlRestore($server)
        }
    }
    Catch {
        $ex = $_.Exception
        Write-Output $ex.message
        $ex = $ex.InnerException
        while ($ex.InnerException) {
            Write-Output $ex.InnerException.message
            $ex = $ex.InnerException
        }
        Throw $ex
    }
    Finally {
        $server.ConnectionContext.Disconnect()
    }
    Write-Host "Restore ended without any errors."
}
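For reference, a call to this function might look like the following (a hypothetical example; the server name, database name and path are placeholders, not values from the post):
# Restore MyDb on SQLSERVER01 from a backup file and leave it recovered
Invoke-SqlRestore -backup_file_name 'D:\Backups\MyDb.bak' -server_name 'SQLSERVER01' -database_name 'MyDb'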
I'm having the same problem: I'm trying to restore the database from a backup taken from the same server but with a different name.
I have profiled the restore process, and it doesn't add the WITH MOVE clause with the different file names. This is why it will restore the database when the database doesn't exist, but fail when it does.
There is an issue with the .PhysicalFileName property.
I was doing the SMO restore and was running into errors. The only way I found to diagnose the problem was to run SQL Profiler during the execution of my PowerShell script.
This showed me the actual T-SQL that was being executed. I then copied this into a query and tried to execute it. That showed me the actual errors: in my case, my database had multiple data files that needed to be relocated.
The script below works for databases that have only one data file.
Param
(
    [Parameter(Mandatory=$True)][string]$sqlServerName,
    [Parameter(Mandatory=$True)][string]$backupFile,
    [Parameter(Mandatory=$True)][string]$newDBName
)
# Load assemblies
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoExtended") | Out-Null
[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ConnectionInfo") | Out-Null
[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoEnum") | Out-Null
# Create sql server object
$server = New-Object ("Microsoft.SqlServer.Management.Smo.Server") $sqlServerName
# Copy database locally if backup file is on a network share
Write-Host "Loaded assemblies"
$backupDirectory = $server.Settings.BackupDirectory
Write-Host "Backup Directory:" $backupDirectory
$fullBackupFile = $backupDirectory + "\" + $backupFile
Write-Host "Copy DB from: " $fullBackupFile
# Create restore object and specify its settings
$smoRestore = new-object("Microsoft.SqlServer.Management.Smo.Restore")
$smoRestore.Database = $newDBName
$smoRestore.NoRecovery = $false;
$smoRestore.ReplaceDatabase = $true;
$smoRestore.Action = "Database"
Write-Host "New Database name:" $newDBName
# Create location to restore from
$backupDevice = New-Object("Microsoft.SqlServer.Management.Smo.BackupDeviceItem") ($fullBackupFile, "File")
$smoRestore.Devices.Add($backupDevice)
# Give empty string a nice name
$empty = ""
# Specify new data file (mdf)
$smoRestoreDataFile = New-Object("Microsoft.SqlServer.Management.Smo.RelocateFile")
$defaultData = $server.DefaultFile
if (($defaultData -eq $null) -or ($defaultData -eq $empty))
{
    $defaultData = $server.MasterDBPath
}
Write-Host "defaultData:" $defaultData
$smoRestoreDataFile.PhysicalFileName = Join-Path -Path $defaultData -ChildPath ($newDBName + "_Data.mdf")
Write-Host "smoRestoreDataFile.PhysicalFileName:" $smoRestoreDataFile.PhysicalFileName
# Specify new log file (ldf)
$smoRestoreLogFile = New-Object("Microsoft.SqlServer.Management.Smo.RelocateFile")
$defaultLog = $server.DefaultLog
if (($defaultLog -eq $null) -or ($defaultLog -eq $empty))
{
    $defaultLog = $server.MasterDBLogPath
}
$smoRestoreLogFile.PhysicalFileName = Join-Path -Path $defaultLog -ChildPath ($newDBName + "_Log.ldf")
Write-Host "smoRestoreLogFile:" $smoRestoreLogFile.PhysicalFileName
# Get the file list from backup file
$dbFileList = $smoRestore.ReadFileList($server)
# The logical file names should be the logical filename stored in the backup media
$smoRestoreDataFile.LogicalFileName = $dbFileList.Select("Type = 'D'")[0].LogicalName
$smoRestoreLogFile.LogicalFileName = $dbFileList.Select("Type = 'L'")[0].LogicalName
# Add the new data and log files to relocate to
$smoRestore.RelocateFiles.Add($smoRestoreDataFile)
$smoRestore.RelocateFiles.Add($smoRestoreLogFile)
# Restore the database
$smoRestore.SqlRestore($server)
"Database restore completed successfully"
Just as when you do this from T-SQL, if something is using the database, that will block the restore. Whenever I'm tasked with restoring a database, I like to take it offline (with rollback immediate) first; that kills any connections to the db. You may have to set it back online before restoring; I don't remember whether restore is smart enough to realise that the files you're overwriting belong to the database you're restoring. Hope this helps.
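A minimal sketch of that approach in the SMO/PowerShell context of the scripts above, assuming the target database already exists ($server, $restore and $database_name are the variables from the first script; the T-SQL itself is not part of the original answer):
# Drop existing connections before calling SqlRestore -- the scripted equivalent of
# taking the database offline/single-user with rollback immediate
$server.ConnectionContext.ExecuteNonQuery("ALTER DATABASE [$database_name] SET SINGLE_USER WITH ROLLBACK IMMEDIATE")
$restore.SqlRestore($server)
# Put the database back into normal multi-user mode once the restore has finished
$server.ConnectionContext.ExecuteNonQuery("ALTER DATABASE [$database_name] SET MULTI_USER")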