Packer: update an existing Azure Shared Image Gallery image

I'm trying to update an existing shared image in an Azure Shared Image Gallery and I keep getting an error during the build that states "the managed image named oracle.8.3.4.base already exists in the resource group rg-sig-qa-eastus, use the -force option to automatically delete it."
I'm using the following snippet. Any ideas as to what I may be doing wrong?
source "azure-arm" "linux_kube_docker" {
client_id = var.client_id
client_secret = var.client_secret
subscription_id = var.subscription_id
tenant_id = var.tenant_id
location = var.location
os_type = var.os_type
shared_image_gallery_timeout = "2h5m2s"
azure_tags = {
environment = "qa"
source = "packer"
}
shared_image_gallery {
subscription = var.subscription_id
resource_group = var.gallery_resource_group
gallery_name = var.gallery_name
image_name = var.managed_image_name
image_version = "0.0.1" # current version
}
managed_image_name = var.managed_image_name
managed_image_resource_group_name = var.managed_image_resource_group
shared_image_gallery_destination {
gallery_name = var.gallery_name
image_name = var.managed_image_name
image_version = "0.0.2" # new version
replication_regions = var.replication_regions
resource_group = var.gallery_resource_group
}
vm_size = var.vm_size
}
build {
sources = ["source.azure-arm.linux_kube_docker"]
provisioner "shell" {
execute_command = "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'"
inline = ["echo upgrade done!"]
inline_shebang = "/bin/sh -x"
}
provisioner "shell" {
execute_command = "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'"
inline = [
"/usr/sbin/waagent -force -deprovision+user && export HISTSIZE=0 && sync"]
inline_shebang = "/bin/sh -x"
}
}

You're using a managed image name for the build that already exists. Packer copies the captured image into the gallery, but it doesn't delete the intermediate managed image afterward, so the next build finds it still there. With -force you can have Packer overwrite the existing managed image.
I've also had a bad experience reusing the same image version for the gallery; it didn't look like it had been updated after the copy was done, so keep bumping the destination image_version as you're doing.
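For example, a build invoked along these lines (the template file name is assumed here; -force itself is a standard packer build flag) deletes the leftover managed image before building:
packer build -force linux_kube_docker.pkr.hcl
Alternatively, delete the old oracle.8.3.4.base managed image from rg-sig-qa-eastus before running the build.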

Related

How to approve BackstopJS reference images in GitHub Actions?

I'm using BackstopJS for regression tests and trying to implement a GitHub workflow.
First, a short introduction to how BackstopJS works:
We have reference images (pictures) of browser pages
We run the BackstopJS test and compare the actual browser view with the reference image
We check the backstop report HTML page in a browser and decide which is correct: the actual view or the reference image
If the browser view is the updated, correct version, we run the backstop approve command to overwrite the reference image with the new actual image
What can be implemented inside GitHub Actions:
Download reference images from the S3 bucket
Run the BackstopJS test
Save the HTML report and actual browser images as artifacts
Download the HTML report stored as an artifact and check whether the new version of the images is correct
??? Here is the problem
Problem:
The workflow has already ended, so we are not able to approve the new images. Is there any way to add a dialog inside the pull request when the test action fails, so that we can upload the new images (stored as artifacts) to S3 as the new reference images? Or some way to retry the failed test with new parameters (say, an env var AUTO_APPROVE=true) so the test can be re-run and the new images approved?
Finally, I implemented an interactive workflow:
---
name: 'BackstopJS test'
on:
  pull_request:
    types:
      - edited
      - opened
      - synchronize
    branches:
      - 'develop'
env:
  AWS_ACCOUNT_ID: '12345678'
  AWS_REGION: 'us-east-1'
  AWS_BUCKET_NAME: 'bucket_name'
  AWS_BUCKET_PATH: 'bucket_folder'
  AWS_BUCKET_KEY: 'bitmaps_archive.zip'
defaults:
  run:
    shell: bash
    working-directory: backstop_test
jobs:
  test:
    # yamllint disable rule:line-length
    if: ${{ (github.event.action != 'edited' ) || contains(github.event.pull_request.body, 'approve ') }}
    name: 'BackstopJS test'
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
      pull-requests: read
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Get last commit message (only 100 commits in PR are acceptable)
        if: ${{ github.event.action != 'edited' }}
        env:
          COMMITS_URL: ${{ github.event.pull_request.commits_url }}
        run: |
          if [ "${COMMITS_URL}x" != "x" ]; then
            echo "COMMIT_MSG=$(curl -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" "${COMMITS_URL}?per_page=100" | jq -r .[-1].commit.message)" >> "${GITHUB_ENV}"
          else
            echo '::warning ::Cannot get commits list URL'
            echo 'COMMIT_MSG=' >> "${GITHUB_ENV}"
          fi
      - name: Search for approve directives in PR body or commit message
        shell: python
        env:
          PR_MSG: ${{ github.event.pull_request.body }}
        run: |
          from os import environ as env
          from sys import exit

          file_path = env.get('GITHUB_ENV', None)
          if file_path is None:
              raise OSError('Environ file not found')

          autoapprove = False
          approve_only = False
          commit_message = env.get('COMMIT_MSG', '')
          pr_message = env.get('PR_MSG', '')

          with open(file_path, 'a') as gh_envs:
              if '[cancel test]' in commit_message.lower() or '[skip test]' in commit_message.lower():
                  gh_envs.write('SKIP_TEST=1\n')
                  print("::warning ::Test is skipped by commit tag")
                  exit(0)
              elif 'cancel test' in pr_message.lower() or 'skip test' in pr_message.lower():
                  gh_envs.write('SKIP_TEST=1\n')
                  print("::warning ::Test is skipped by tag in PR message")
                  exit(0)
              else:
                  gh_envs.write('SKIP_TEST=0\n')

          if '[approve me]' in commit_message.lower():
              autoapprove = True
              approve_only = True
              print("Reference bitmaps will be approved by commit message")
          else:
              print("Last commit message:", commit_message)

          if '${{ github.event.action }}' == 'edited':
              approve_only = True
              pr_message = pr_message.split('\n')
              last_commit_id = '${{ github.event.pull_request.head.sha }}'
              commit_id = None
              for line in pr_message:
                  if line.startswith('approve '):
                      commit_id = line.split(' ')[-1].rstrip('\n\r')
                      break
              if commit_id:
                  if last_commit_id.startswith(commit_id):
                      autoapprove = True
                  else:
                      print(
                          "::warning ::approved commit sha and last commit sha are mismatched:",
                          commit_id,
                          "/",
                          last_commit_id
                      )
              else:
                  print("Auto approval disabled")

          with open(file_path, 'a') as gh_envs:
              if autoapprove:
                  gh_envs.write('AUTOAPPROVE=1\n')
              else:
                  gh_envs.write('AUTOAPPROVE=0\n')
              if approve_only and autoapprove:
                  gh_envs.write('APPROVE_ONLY=1\n')
              else:
                  gh_envs.write('APPROVE_ONLY=0\n')
      - name: Configure AWS credentials
        if: ${{ env.SKIP_TEST != 1 }}
        uses: aws-actions/configure-aws-credentials@v1
        with:
          role-to-assume: arn:aws:iam::${{ env.AWS_ACCOUNT_ID }}:role/github-iam-role
          aws-region: ${{ env.AWS_REGION }}
          role-session-name: backstopjs_test_runner
      - name: Download and extract reference bitmaps
        if: ${{ env.SKIP_TEST != 1 }}
        run: |
          aws s3 cp "s3://${AWS_BUCKET_NAME}/${AWS_BUCKET_PATH}/${AWS_BUCKET_KEY}" "./backup_${AWS_BUCKET_KEY}" && unzip -od ./backstop_data "backup_${AWS_BUCKET_KEY}" || echo "::warning file=${AWS_BUCKET_KEY}::No reference bitmaps archive found"
      - name: Run tests
        if: ${{ env.SKIP_TEST != 1 }}
        run: |
          if [ "${AUTOAPPROVE}" == "1" ] || [ "${AUTOAPPROVE^^}" == "TRUE" ] || [ "${AUTOAPPROVE^^}" == "YES" ]; then
            echo "**Autoapprove is activated. Reference images will be renewed** " >> "${GITHUB_STEP_SUMMARY}"
          else
            {
              echo "**Autoapprove is not active. Current reference images will be used** ";
              echo "";
              echo "Add an \`approve ${{ github.event.pull_request.head.sha }}\` line to the PR description";
              echo "and the test job will be automatically re-run to approve the new reference bitmaps";
              echo "";
              echo "* if you push a new commit to the PR, the \`sha\` of the current approval must be updated too ";
              echo "";
              echo "An \`[approve me]\` tag may also be added inside the commit message to renew bitmaps automatically ";
            } >> "${GITHUB_STEP_SUMMARY}"
          fi
          {
            echo "";
            echo "The test may be cancelled by using the \`[skip test]\` (or \`[cancel test]\`) tag inside the commit message ";
            echo "or by using the \`skip test\` (or \`cancel test\`) code word inside the PR message ";
            echo "";
            echo "---";
          } >> "${GITHUB_STEP_SUMMARY}"
          # Run tests here.
          # If AUTOAPPROVE=1 then backstop test → backstop approve → backstop test will be run (both reports will be saved)
          # If AUTOAPPROVE=1 and APPROVE_ONLY=1 then backstop reference → backstop test will be run
          # ...
          # Set the job to fail or succeed depending on the tests' exit status code.
          # In this example tests are not included, and the status will always be failed if autoapprove is 0.
          if [ "${AUTOAPPROVE}x" == "1x" ]; then
            echo "IS_FAILED=0" >> "${GITHUB_ENV}"
          else
            echo "IS_FAILED=1" >> "${GITHUB_ENV}"
            echo "Error: test \`BLAHBLAHBLAH\` failed with status code: \`1\`" >> "${GITHUB_STEP_SUMMARY}"
          fi
      - name: Upload new reference bitmaps to S3 bucket
        if: ${{ env.AUTOAPPROVE == 1 && env.IS_FAILED == 0 && env.SKIP_TEST != 1 }}
        run: |
          cd backstop_data && zip -ur "../${AWS_BUCKET_KEY}" bitmaps_reference && cd .. && \
          aws s3 cp "${AWS_BUCKET_KEY}" "s3://${AWS_BUCKET_NAME}/${AWS_BUCKET_PATH}/${AWS_BUCKET_KEY}"
          if [ -f "backup_${AWS_BUCKET_KEY}" ]; then
            aws s3 cp "backup_${AWS_BUCKET_KEY}" "s3://${AWS_BUCKET_NAME}/${AWS_BUCKET_PATH}/backup_${AWS_BUCKET_KEY}"
          fi
      - name: Save HTML reports
        if: ${{ env.SKIP_TEST != 1 }}
        uses: actions/upload-artifact@v3
        with:
          name: html_reports
          path: backstop_test/report
      - name: Save logs (only if failed)
        if: ${{ env.IS_FAILED == 1 && env.SKIP_TEST != 1 }}
        uses: actions/upload-artifact@v3
        with:
          name: test_logs
          path: backstop_test/logs
      - name: Set to fail
        if: ${{ env.IS_FAILED == 1 && env.SKIP_TEST != 1 }}
        uses: actions/github-script@v3
        with:
          script: |
            core.setFailed('Some of regression tests failed. Check Summary for detailed info')
The flow runs on:
PR description updates (edited)
New commits inside the PR (synchronize)
A new PR being created (opened)
If the PR body has a skip test (or cancel test) line, or the commit message has the [skip test] (or [cancel test]) tag, the test is skipped
If the PR body has an approve commit-SHA line (where commit-SHA is the SHA of the last commit in the PR), or the commit message has the [approve me] tag, new reference bitmaps are created
If the approve line is present in the PR, only one test with the new reference images is run
If the approve tag is present in the commit message, two tests are run (before and after approval) and two reports are saved
Reference images are uploaded/downloaded/stored from/to/in the S3 bucket
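For example, if the PR's head commit SHA were 1a2b3c4 (a made-up value), adding this single line to the PR description would trigger the edited event and re-run the job with approval enabled:
approve 1a2b3c4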

PowerShell: make a function for directories and append a number at the end

Make a function that creates 3 directories with the name John_S as the prefix, appended with the numbers 1, 2, 3.
Example
1. John_S1
2. John_S2
3. John_S3
Use a loop (ForEach)
Use a variable for the number of iterations
What I have so far...
$DirName = "John_S"
function mulcheck {New-item "$DirName"}
$i = 1
foreach($DirName in $DirNames)
{$newname = $DirName Rename-Item $($DirName) $newname $i++}
The easiest way to generate the numbers 1 through 3 is with the .. range operator:
foreach ($suffix in 1..3) {
    mkdir "John_S${suffix}"
}
To make the function re-usable with something other than John_S, declare a [string] parameter for the prefix:
function New-Directories([string]$Prefix) {
    foreach ($suffix in 1..3) {
        mkdir "${Prefix}${suffix}"
    }
}
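A quick usage example with the prefix from the question (the function and parameter names are the ones declared above):
New-Directories -Prefix 'John_S'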
If I understand your latest comment correctly, you want a function that takes the name of a new folder and checks whether a folder with that name already exists in the root path. When that is the case, it should create a new folder with the given name, but with an index number appended to it, so it has a unique name.
For that you can use something like this:
function New-Folder {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $false)]
        [string]$RootPath = $pwd,  # use the current working directory as default
        [Parameter(Mandatory = $true)]
        [string]$FolderName
    )
    # get an array of all directory names (name only) of the folders with a similar name already present
    $folders = @((Get-ChildItem -Path $RootPath -Filter "$FolderName*" -Directory).Name)
    $NewName = $FolderName
    if ($folders.Count) {
        $count = 1
        while ($folders -contains $NewName) {
            # append a number to the FolderName
            $NewName = "{0}{1}" -f $FolderName, $count++
        }
    }
    # we now have a unique foldername, so create the new folder
    $null = New-Item -Path (Join-Path -Path $RootPath -ChildPath $NewName) -ItemType Directory
}
New-Folder -FolderName "John_S"
If you run this several times, you will have created several folders like John_S, John_S1, John_S2, and so on.

How to check if a runner is currently running a job using the jobs API

I would like to check which runners are currently running jobs, but I can't find anything that would give me this information through the API.
I know which ones are active and can take jobs, but not which ones are actually running jobs at the current time.
So my question is: how can I determine which runners are currently processing a job?
You can list all the runners, get their IDs, and then for each runner check whether there are jobs with status running:
List all runners using /runners/all
List runner jobs using /runners/$runner_id/jobs?status=running
The following bash script uses curl and jq:
#!/bin/bash
token=YOUR_TOKEN
domain=your.domain.com
ids=$(curl -s -H "PRIVATE-TOKEN: $token" "https://$domain/api/v4/runners/all" | \
jq '.[].id')
set -- $ids
for i
do
    result=$(curl -s \
        -H "PRIVATE-TOKEN: $token" \
        "https://$domain/api/v4/runners/$i/jobs?status=running" | jq '. | length')
    if [ $result -eq 0 ]; then
        echo "runner $i is not running jobs"
    else
        echo "runner $i is running $result jobs"
    fi
done
Output:
runner 6 is not running jobs
runner 7 is running 1 jobs
runner 8 is not running jobs
Using Python:
import requests
import json

token = "YOUR_TOKEN"
domain = "your.domain.com"

r = requests.get(
    f'https://{domain}/api/v4/runners/all',
    headers = { "PRIVATE-TOKEN": token }
)
ids = [ i["id"] for i in json.loads(r.text) ]

for i in ids:
    r = requests.get(
        f'https://{domain}/api/v4/runners/{i}/jobs?status=running',
        headers = { "PRIVATE-TOKEN": token }
    )
    num_jobs = len(json.loads(r.text))
    if num_jobs > 0:
        print(f'runner {i} is running {num_jobs} jobs')
    else:
        print(f'runner {i} is not running jobs')

Changing NTFS security on users with FullControl to Modify

I have thousands of folders where I need to change users with FullControl access to Modify access. The following is a list of what I have:
A script that changes NTFS perms:
$acl = Get-Acl "G:\Folder"
$acl | Format-List
$acl.GetAccessRules($true, $true, [System.Security.Principal.NTAccount])
#second $true on following line turns on inheritance, $False turns off
$acl.SetAccessRuleProtection($True, $True)
$rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Administrators","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
$acl.AddAccessRule($rule)
$rule = New-Object System.Security.AccessControl.FileSystemAccessRule("My-ServerTeam","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
$acl.AddAccessRule($rule)
$rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Users","Read", "ContainerInherit, ObjectInherit", "None", "Allow")
$acl.AddAccessRule($rule)
Set-Acl "G:\Folder" $acl
Get-Acl "G:\Folder" | Format-List
A text file with the directories and users that need to be changed from fullcontrol to modify.
I can always create a variable for the path and/or username and create a ForEach loop, but I'm not sure how to change the users that exist in the ACL for each folder to Modify, but keep the Admin accounts as full control. Any help would be appreciated.
Went another route and got what I needed. I'm not surprised no one tried to help me on this one.... it was tough. I'll post the scripts for the next person who has this issue.
There are two scripts. The first I obtained from the internet and altered a bit. The second script launches the first with the parameters required to automate.
First Script Named SetFolderPermission.ps1:
param ([string]$Path, [string]$Access, [string]$Permission = ("Modify"), [switch]$help)

function GetHelp() {
    $HelpText = @"
DESCRIPTION:
NAME: SetFolderPermission.ps1
Sets FolderPermissions for User on a Folder.
Creates folder if not exist.
PARAMETERS:
-Path Folder to Create or Modify (Required)
-Access User who should have access (Required)
-Permission Specify Permission for User, Default set to Modify (Optional)
-help Prints the HelpFile (Optional)
SYNTAX:
./SetFolderPermission.ps1 -Path C:\Folder\NewFolder -Access Domain\UserName -Permission FullControl
Creates the folder C:\Folder\NewFolder if it doesn't exist.
Sets Full Control for Domain\UserName
./SetFolderPermission.ps1 -Path C:\Folder\NewFolder -Access Domain\UserName
Creates the folder C:\Folder\NewFolder if it doesn't exist.
Sets Modify (Default Value) for Domain\UserName
./SetFolderPermission.ps1 -help
Displays the help topic for the script
Below Are Available Values for -Permission
"@
    $HelpText
    [system.enum]::getnames([System.Security.AccessControl.FileSystemRights])
}

<#
function CreateFolder ([string]$Path) {
    # Check if the folder Exists
    if (Test-Path $Path) {
        Write-Host "Folder: $Path Already Exists" -ForeGroundColor Yellow
    } else {
        Write-Host "Creating $Path" -Foregroundcolor Green
        New-Item -Path $Path -type directory | Out-Null
    }
}
#>

function SetAcl ([string]$Path, [string]$Access, [string]$Permission) {
    # Get ACL on Folder
    $GetACL = Get-Acl $Path
    # Set up AccessRule
    $Allinherit = [system.security.accesscontrol.InheritanceFlags]"ContainerInherit, ObjectInherit"
    $Allpropagation = [system.security.accesscontrol.PropagationFlags]"None"
    $AccessRule = New-Object system.security.AccessControl.FileSystemAccessRule($Access, $Permission, $AllInherit, $Allpropagation, "Allow")
    # Check if Access Already Exists
    if ($GetACL.Access | Where {$_.IdentityReference -eq $Access}) {
        Write-Host "Modifying Permissions For: $Access on directory: $Path" -ForeGroundColor Yellow
        $AccessModification = New-Object system.security.AccessControl.AccessControlModification
        $AccessModification.value__ = 2
        $Modification = $False
        $GetACL.ModifyAccessRule($AccessModification, $AccessRule, [ref]$Modification) | Out-Null
    } else {
        Write-Host "Adding Permission: $Permission For: $Access"
        $GetACL.AddAccessRule($AccessRule)
    }
    Set-Acl -aclobject $GetACL -Path $Path
    Write-Host "Permission: $Permission Set For: $Access on directory: $Path" -ForeGroundColor Green
}

if ($help) { GetHelp }
if ($Access -AND $Permission) {
    SetAcl $Path $Access $Permission
}
The next script calls the first script and adds the needed parameters, driven by a CSV containing 2 columns with the folders and the usernames that have full control.
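The CSV is expected to look something like the following (the Path and IdentityReference headers match the properties the loop reads; the rows are made-up examples):
Path,IdentityReference
G:\Folder1,DOMAIN\User1
G:\Folder2,DOMAIN\User2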
$path = "C:\Scripts\scandata\TwoColumnCSVwithPathandUserwithFullControl.csv"
$csv = Import-csv -path $path
foreach($line in $csv){
$userN = $line.IdentityReference
$PathN = $line.Path
$dir = "$PathN"
$DomUser = "$userN"
$Perm = "Modify"
$scriptPath = "C:\Scripts\SetFolderPermission.ps1"
$argumentList1 = '-Path'
$argumentList2 = "$dir"
$argumentList3 = '-Access'
$argumentList4 = "$DomUser"
$argumentList5 = '-Permission'
$argumentList6 = "$Perm"
Invoke-Expression "$scriptPath $argumentList1 $argumentList2 $argumentList3 $argumentList4 $argumentList5 $argumentList6"

SMO restore of SQL database doesn't overwrite

I'm trying to restore a database from a backup file using SMO. If the database does not already exist then it works fine. However, if the database already exists then I get no errors, but the database is not overwritten.
The "restore" process still takes just as long, so it looks like it's working and doing a restore, but in the end the database has not changed.
I'm doing this in Powershell using SMO. The code is a bit long, but I've included it below. You'll notice that I do set $restore.ReplaceDatabase = $true. Also, I use a try-catch block and report on any errors (I hope), but none are returned.
Any obvious mistakes? Is it possible that I'm not reporting some error and it's being hidden from me?
Thanks for any help or advice that you can give!
function Invoke-SqlRestore {
    param(
        [string]$backup_file_name,
        [string]$server_name,
        [string]$database_name,
        [switch]$norecovery = $false
    )
    # Get a new connection to the server
    [Microsoft.SqlServer.Management.Smo.Server]$server = New-SMOconnection -server_name $server_name
    Write-Host "Starting restore to $database_name on $server_name."
    Try {
        $backup_device = New-Object("Microsoft.SqlServer.Management.Smo.BackupDeviceItem") ($backup_file_name, "File")
        # Get local paths to the Database and Log file locations
        If ($server.Settings.DefaultFile.Length -eq 0) { $database_path = $server.Information.MasterDBPath }
        Else { $database_path = $server.Settings.DefaultFile }
        If ($server.Settings.DefaultLog.Length -eq 0) { $database_log_path = $server.Information.MasterDBLogPath }
        Else { $database_log_path = $server.Settings.DefaultLog }
        # Load up the Restore object settings
        $restore = New-Object Microsoft.SqlServer.Management.Smo.Restore
        $restore.Action = 'Database'
        $restore.Database = $database_name
        $restore.ReplaceDatabase = $true
        if ($norecovery.IsPresent) { $restore.NoRecovery = $true }
        Else { $restore.Norecovery = $false }
        $restore.Devices.Add($backup_device)
        # Get information from the backup file
        $restore_details = $restore.ReadBackupHeader($server)
        $data_files = $restore.ReadFileList($server)
        # Restore all backup files
        ForEach ($data_row in $data_files) {
            $logical_name = $data_row.LogicalName
            $physical_name = Get-FileName -path $data_row.PhysicalName
            $restore_data = New-Object("Microsoft.SqlServer.Management.Smo.RelocateFile")
            $restore_data.LogicalFileName = $logical_name
            if ($data_row.Type -eq "D") {
                # Restore Data file
                $restore_data.PhysicalFileName = $database_path + "\" + $physical_name
            }
            Else {
                # Restore Log file
                $restore_data.PhysicalFileName = $database_log_path + "\" + $physical_name
            }
            [Void]$restore.RelocateFiles.Add($restore_data)
        }
        $restore.SqlRestore($server)
        # If there are two files, assume the next is a Log
        if ($restore_details.Rows.Count -gt 1) {
            $restore.Action = [Microsoft.SqlServer.Management.Smo.RestoreActionType]::Log
            $restore.FileNumber = 2
            $restore.SqlRestore($server)
        }
    }
    Catch {
        $ex = $_.Exception
        Write-Output $ex.message
        $ex = $ex.InnerException
        while ($ex.InnerException) {
            Write-Output $ex.InnerException.message
            $ex = $ex.InnerException
        }
        Throw $ex
    }
    Finally {
        $server.ConnectionContext.Disconnect()
    }
    Write-Host "Restore ended without any errors."
}
I'm having the same problem. I'm trying to restore the database from a backup taken from the same server but with a different name.
I have profiled the restore process, and it doesn't add the WITH MOVE clause with the different file names. This is why it will restore the database when the database doesn't exist, but fail when it does.
There is an issue with the .PhysicalFileName property.
I was doing the SMO restore and was running into errors. The only way I found to diagnose the problem was to run SQL Profiler during the execution of my PowerShell script.
This showed me the actual T-SQL that was being executed. I then copied this into a query and tried to execute it. This showed me the actual errors: in my case, my database had multiple data files that needed to be relocated.
The attached script works for databases that have only one data file.
Param
(
    [Parameter(Mandatory=$True)][string]$sqlServerName,
    [Parameter(Mandatory=$True)][string]$backupFile,
    [Parameter(Mandatory=$True)][string]$newDBName
)
# Load assemblies
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoExtended") | Out-Null
[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ConnectionInfo") | Out-Null
[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoEnum") | Out-Null
# Create sql server object
$server = New-Object ("Microsoft.SqlServer.Management.Smo.Server") $sqlServerName
# Copy database locally if backup file is on a network share
Write-Host "Loaded assemblies"
$backupDirectory = $server.Settings.BackupDirectory
Write-Host "Backup Directory:" $backupDirectory
$fullBackupFile = $backupDirectory + "\" + $backupFile
Write-Host "Copy DB from: " $fullBackupFile
# Create restore object and specify its settings
$smoRestore = new-object("Microsoft.SqlServer.Management.Smo.Restore")
$smoRestore.Database = $newDBName
$smoRestore.NoRecovery = $false;
$smoRestore.ReplaceDatabase = $true;
$smoRestore.Action = "Database"
Write-Host "New Database name:" $newDBName
# Create location to restore from
$backupDevice = New-Object("Microsoft.SqlServer.Management.Smo.BackupDeviceItem") ($fullBackupFile, "File")
$smoRestore.Devices.Add($backupDevice)
# Give empty string a nice name
$empty = ""
# Specify new data file (mdf)
$smoRestoreDataFile = New-Object("Microsoft.SqlServer.Management.Smo.RelocateFile")
$defaultData = $server.DefaultFile
if (($defaultData -eq $null) -or ($defaultData -eq $empty))
{
    $defaultData = $server.MasterDBPath
}
Write-Host "defaultData:" $defaultData
$smoRestoreDataFile.PhysicalFileName = Join-Path -Path $defaultData -ChildPath ($newDBName + "_Data.mdf")
Write-Host "smoRestoreDataFile.PhysicalFileName:" $smoRestoreDataFile.PhysicalFileName
# Specify new log file (ldf)
$smoRestoreLogFile = New-Object("Microsoft.SqlServer.Management.Smo.RelocateFile")
$defaultLog = $server.DefaultLog
if (($defaultLog -eq $null) -or ($defaultLog -eq $empty))
{
    $defaultLog = $server.MasterDBLogPath
}
$smoRestoreLogFile.PhysicalFileName = Join-Path -Path $defaultLog -ChildPath ($newDBName + "_Log.ldf")
Write-Host "smoRestoreLogFile:" $smoRestoreLogFile.PhysicalFileName
# Get the file list from backup file
$dbFileList = $smoRestore.ReadFileList($server)
# The logical file names should be the logical filename stored in the backup media
$smoRestoreDataFile.LogicalFileName = $dbFileList.Select("Type = 'D'")[0].LogicalName
$smoRestoreLogFile.LogicalFileName = $dbFileList.Select("Type = 'L'")[0].LogicalName
# Add the new data and log files to relocate to
$smoRestore.RelocateFiles.Add($smoRestoreDataFile)
$smoRestore.RelocateFiles.Add($smoRestoreLogFile)
# Restore the database
$smoRestore.SqlRestore($server)
"Database restore completed successfully"
Just like if you do this from T-SQL, if there is something using the database, then that'll block the restore. Whenever I'm tasked with restoring a database, I like to take it offline (with rollback immediate) first. That kills any connections to the db. You may have to set it back online first; I don't remember if restore is smart enough to realise that the files that you're overwriting belong to the database you're restoring or not. Hope this helps.
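A minimal PowerShell/SMO sketch of that approach, reusing the $server connection and $database_name variable from the function above (the exact statements are illustrative, not the original poster's code; ExecuteNonQuery on the ServerConnection simply runs the T-SQL shown):
# Kick all connections out of the target database before restoring over it
$server.ConnectionContext.ExecuteNonQuery("ALTER DATABASE [$database_name] SET OFFLINE WITH ROLLBACK IMMEDIATE")
# Optionally bring it back online if the restore complains about the offline state
$server.ConnectionContext.ExecuteNonQuery("ALTER DATABASE [$database_name] SET ONLINE")
# ...then call $restore.SqlRestore($server) as in the script above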