How to run different pre and post SSDT publish scripts depending on the deploy profile

I am using SSDT in Visual Studio 2013.
I have created some pre- and post-publish scripts for the development server. The pre-deployment scripts empty data from tables and reset all the auto-identity fields. The post-deployment scripts populate the tables with static and sample data.
Now I need to publish the database to our staging and live database servers. I have created new "publish.xml" profiles for these servers, but obviously I don't want the same pre and post scripts to run.
How can I either specify different scripts depending on the publish profile, or make the scripts aware of the target so they perform different actions?
My biggest concern is publishing to the live server and accidentally destroying data.
Thanks in advance.
Doug

You have a few options:
1 - Wrap your data changes in checks of @@SERVERNAME or something else unique to the environment, so you would have something like:
IF @@SERVERNAME = 'dev_server'
BEGIN
    DELETE blah
    INSERT blah
END
2 - You can also achieve something similar using SQLCMD variables: pass in a variable called "/v:DestroyData=true" or something, and then reference $(DestroyData) in your script; a sketch follows this list.
3 - Don't use pre/post deploy scripts, but have your own mechanism for running them, i.e. use a batch file to deploy your dacpacs and add a call to sqlcmd before and after. The downside to this is that, when deploying, changes to a table result in any foreign keys being disabled before the pre-deploy and re-enabled after the post-deploy.
4 - Edit the dacpacs. The pre/post deploy scripts are just text files inside the dacpac, which is essentially a zip file that follows the Microsoft packaging format, and there is a .NET packaging API to let you modify it.
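For option 2, a minimal sketch of that guard (assuming a SQLCMD variable named DestroyData passed as /v:DestroyData=true; the table name is illustrative):

IF '$(DestroyData)' = 'true'
BEGIN
    -- Only runs when the publish explicitly opts in to destroying data.
    DELETE FROM dbo.SampleData;
END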
I think that is about it, please ask if anything is unclear :)
ed

I would suggest using SQLCMD variables for your conditional script execution.
If you right-click on a DB project and choose Properties, there is a "SQLCMD Variables" tab.
Enter "$(ServerName)" as the variable and something as the default value.
Then you need to open EVERY one of your .publish.xml files in the XML editor and insert the following code after the PropertyGroup part:
<ItemGroup>
  <SqlCmdVariable Include="ServerName">
    <Value>[YourVersionOfServer]</Value>
  </SqlCmdVariable>
</ItemGroup>
[YourVersionOfServer] should be equal to the result of @@SERVERNAME on each of your servers. The final .publish.xml will then contain this ItemGroup alongside the existing PropertyGroup elements.
Then you should wrap your conditional code in the pre- and post-deployment files with:
IF @@SERVERNAME = '$(ServerName)'
BEGIN
    ... code for specific server
END
Thus you can guarantee that the right code hits the right server.

First, set up SQLCMD variables by right-clicking on the project, going to Properties, and opening the SQLCMD Variables tab.
Then set up a folder structure to organize the scripts you want to run for a specific server, or for any other thing you want to switch on, like customer. Each server gets its own folder. I like to gather all the scripts I want to run for a folder into an index file in that folder. The index lists a :r command for each script in the folder that should run, with a numerical prefix on each filename so the order can be controlled; a sketch of such an index file follows.
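For example, a per-server index file might look like this (the script names are hypothetical):

-- Dev\Index.sql: run the Dev-only scripts, in order
:r .\01_ClearTables.sql
:r .\02_ReseedIdentities.sql
:r .\03_LoadSampleData.sql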
The index file in the folder that groups all the server folders does something different: instead of listing a call to each server's index file, it switches which index file to run based on the SQLCMD variable passed in from the publish profile. It does so with the following simple code:
:r .\$(Customer)\Index.sql
The reason you want to set up folders and index files like this is that it not only keeps things organized, it also allows you to use GO statements in all of your files. You can then use one script with :r statements for all the other scripts you want to run, nesting to your heart's content. A possible layout is sketched below.
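For instance, the layout might look like this (folder and file names are illustrative; the top-level Index.sql holds the switching line shown above):

PostDeploy\
    Index.sql           -- contains only: :r .\$(Customer)\Index.sql
    Dev\Index.sql
    Staging\Index.sql
    Live\Index.sql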
You could instead set up your sqlcmd file the following way, which doesn't require you to create folders or specific file names, but it does require you to remove all GO statements. Doing it the way described above means you don't have to remove any GO statements.
IF @@SERVERNAME = '$(ServerName)'
BEGIN
    ... code for specific server
END
Then, when I right-click the project and hit Publish, it builds the project and pops up the publish dialog. I can change which scripts get run by changing the SQLCMD variable.
I like to save off the most commonly used settings as a separate publish profile; then I can get back to it by just clicking on its .xml file in the Solution Explorer. This process makes everything so easy, and there is no need to modify the XML by hand: just click Save Profile As, Load Profile, or Create Profile in the publish database dialog.
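If you publish from the command line instead of the dialog, the same SQLCMD variable can be passed to SqlPackage (the file names and value here are illustrative):

SqlPackage.exe /Action:Publish /SourceFile:MyDb.dacpac /Profile:Staging.publish.xml /v:Customer=Staging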
Also, you can generate your index files with the following PowerShell script:
# Regenerate the Index.sql in every folder, recursively.
foreach ($directory in (Get-ChildItem -Directory -Recurse) | Get-Item)
{
    # Skip writing to any index file with ---Ignore in it.
    if (-not ((Test-Path $directory\Index.sql) -and
              (Get-Content $directory\Index.sql | %{ $match = $false }{ $match = $match -or $_ -match "---Ignore" }{ $match })))
    {
        $output = "";
        foreach ($childitem in (Get-ChildItem $directory -Exclude Index.sql, *.ps1 | Sort-Object | Get-Item))
        {
            # Emit one :r include per item; subfolders point at their own Index.sql.
            $output = $output + ":r .\" + $childitem.name
            if ($childitem -is [System.IO.DirectoryInfo])
            {
                $output = $output + "\Index.sql";
            }
            $output = $output + "`r`n"
        }
        Out-File $directory\Index.sql -Encoding utf8 -InputObject $output
    }
}

For anyone wondering, you could also do something like this:
Publish profile:
<ItemGroup>
  <SqlCmdVariable Include="Environment">
    <Value>Dev</Value>
  </SqlCmdVariable>
</ItemGroup>
Post-deployment script:
IF '$(Environment)' = 'Dev'
BEGIN
    ... code for specific server
END
It seemed a bit more natural to me this way compared to the "ServerName" semantics. I also had issues trying to use @@SERVERNAME, but I'm not sure why.
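If you have more than one non-production environment, the same guard extends naturally (the environment names and the PRINT are illustrative):

-- Hypothetical: seed sample data everywhere except the live server
IF '$(Environment)' <> 'Live'
BEGIN
    PRINT 'Seeding sample data for $(Environment)';
    -- INSERT INTO dbo.SampleData ...
END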

Related

Dynamic User in .bat file

This is my first time trying to use .bat files. I am trying to make one that keeps the front end of my Access database on the most current version for my users (only ~7).
I am using:
md C:\Users\tmyers\Desktop
del C:\Users\tmyers\Desktop\Quotations.accdr
copy "\Users\counter1152\desktop\PM DESKTOP FILES\Lighting Project Management Application\Quotations.accdr" C:\Users\tmyers\Desktop
C:\Users\tmyers\Desktop\Quotations.accdr
Part of the copy string I removed due to it being the path to our server (just in case). I need the user to somehow be dynamic based on the person executing the file. I could make a separate .bat file for each person since I don't have many users, but that seems sloppy.
As a secondary question, how can I get the cmd prompt window to not show when this is executed? That would just be a personal preference of mine. I did find https://superuser.com/questions/140047/how-to-run-a-batch-file-without-launching-a-command-window for reference, but am not 100% sure how to do the wscript part.
Try with:
md "%userprofile%\Desktop"
del "%userprofile%\Desktop\Quotations.accdr"
copy "\Users\counter1152\desktop\PM DESKTOP FILES\Lighting Project Management Application\Quotations.accdr" "%userprofile%\Desktop"
"%userprofile%\Desktop\Quotations.accdr"
To auto-hide the cmd.exe window you can use windowmode.bat and getCMDPid.bat:
call getCMDPid.bat
call windowmode.bat -pid %errorlevel% -mode hidden
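Alternatively, since you mentioned wscript: a minimal launcher sketch that starts the batch file with its window hidden (the .bat path is illustrative; save this as a .vbs file and run it instead of the .bat):

' run_hidden.vbs: run the batch file with a hidden window (0) and without waiting for it (False)
CreateObject("WScript.Shell").Run """C:\path\to\update_frontend.bat""", 0, False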

Keyboard shortcut for uploading two files in PhpStorm

The problem
In PhpStorm I have a style.css and an app.js file that I have to upload to a server over and over again. I'm trying to automate it.
They're compiled by Webpack, so they are generated/compiled, which means that I can't simply use 'Tools' >> 'Deployment' >> 'Upload to...' (since those files aren't and won't ever be open).
What I currently do
At the moment, every time I want to see the changes I've made, I do this (for each file):
Navigate to the file in the file tree (using the mouse)
Select it
Trigger the shortcut I've set up for Main menu >> Tools >> Deployment >> Upload to..., where-after I select the server I want to upload to.
I do this approximately 100+ times per day.
The ideal solution
The ideal solution would be that pressing a shortcut like CMD + Option + Shift + G
would upload a selection of files (a scope?) to a predefined remote server.
Solution attempts
Open and upload.
Switching to those files (using CMD + P) and then uploading them (once they're open). But the files are generated, which means it takes PhpStorm a couple of seconds to render the content (which is necessary before I can do anything with the file), so that's not faster.
Macro.
Recording a macro that uploads the two files.
If I go to the menu and trigger the Macro, then it works. So far so good.
But if I assign a shortcut key and trigger that shortcut while in a file, it shows me a chooser instead. And if I press '1' (for it to upload to number 1 on the list), it uploads the file that I'm currently in(!?), and not the two files from my macro.
I've tried several different shortcuts (to rule out some kind of keyboard-shortcut-clash):
CMD + Option + CTRL + 0
CMD + Shift + 0
CMD + ;
... Same result.
And PhpStorm's macros don't seem to give me that many options anyway.
Keyboard Maestro.
I've tried doing it using Keyboard Maestro.
But I can't get it set up right, because if it can't find the folders (if they're off-screen, or if I'm in a different project and forgot to adjust the shortcuts), then it blasts through the rest of the recorded actions, resulting in chaos. Ideally it should stop if it can't find the file on the screen.
Update1 - External program
Even if it's not possible in PhpStorm, is there another program that I could achieve this with?
Update2 - Automatic Deployment in PhpStorm
I've previously used this, but a few times I started syncing waaaay too many files, overwriting critical core files. It seems smart, but it can tear down walls if I've forgotten to define an ignore properly.
I wish there was an 'Automatic Deployment for these files' function.
Update3 - File Watchers
I looked into file watchers (a recommendation from @LazyOne). Based on this forum thread, file watchers cannot be used to upload files.
It is possible to accomplish it using external program scp (Secure Copy Protocol):
Steps:
1. Create a Scope (for compiled files app.js and style.css)
2. Create a Custom File Watcher with scp over that Scope
Start with Scope:
Create a Local Scope named scp files for your compiled-files directory (I will assume that your Webpack build compiles into a dist directory):
Then, to add the dist directory to the Scope, select that folder and click Include Recursively. Apply, and move on to File Watchers.
Create a custom template for File Watcher:
Choose a Name
Choose File type as Any
Choose Scope as scp files (created earlier)
Choose Program as scp
Choose Arguments as $FileName$ REMOTE_USER@REMOTE_HOST:/REMOTE_DIR_PATH/$FileName$
Choose Working directory as $FileDir$
That's it. Basically, what we have done is: every time a file in that scope changes, that file is copied with scp to the corresponding path on the remote server.
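With the macros expanded, the watcher effectively runs something like this for each changed file (user, host, and path are illustrative):

scp app.js deploy@example.com:/var/www/site/dist/app.js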
Voila. Apply everything and recompile your project, and you will see that everything is uploaded to the server.
(I assumed that you have already set up your ssh client; generated public/private keys; added the public key on your remote server; and know the ssh credentials to connect to your remote server.)
I figured this out myself. I posted the answer here.
The two questions are kind of similar, but not identical.
The way I found is also not the best, since it stores the server password in clear text. So I'll leave the question open, in case someone can come up with a better way to achieve this.

How to get a response from a script back to Hudson and to fail/success emails?

I'm starting a Python script from a Hudson job. The script is started through 'Execute Windows batch command' in the build section, as 'python my_script.py'.
Now I need to get some data created by the script back to Hudson and add it to the fail/success emails. My current approach is that the Python script writes data to stderr, which the batch reads into a temp file and then takes into an environment variable. I can see the environment variable correctly right after the script execution (using the set command), but in the post-build actions it's not visible any more. The email sending is probably done in a different process, so the variables are not visible there. I'm accessing the env vars in the email as ${ENV, varname} (or actually, in debug mode, as $ENV to print them all).
Is there a way to make the environment variable global inside Hudson?
Or can someone provide a better solution for getting data back from Python script to Hudson.
All the related parts (Hudson, batch and Python script) are under my control and can be modified as needed.
Thanks.
Every build step gets its own shell, which implies that your environment variables are only valid within that build step.
You can just write the data in a nice format to the std output (use a header that is easy to identify) and if the job fails, the data output gets attached in the email.
If you insist on only putting in the data, you can use the following token with the Editable Email Notification post-build action (Email-ext plugin):
${BUILD_LOG_REGEX, regex, linesBefore, linesAfter, maxMatches, showTruncatedLines, substText}
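For example, if the Python script prints marker lines such as MY_DATA: rows=42 to the build log, the email could pull just those lines with something like this (the regex and marker are illustrative):

${BUILD_LOG_REGEX, regex="^MY_DATA:.*", maxMatches=20, showTruncatedLines=false}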

Run native binary CGI on lighttpd

I'm trying to set up lighttpd to run a binary CGI app (not a PHP script or something, but a binary file compiled from C++ source). I actually have
server.modules = (
...
"mod_cgi"
...
)
uncommented, have myApp.exe in htdocs/app, and also
cgi.assign = ( "myApp.exe" => "myApp.exe" )
Then, to make it all work by accessing, say, http://localhost:8080/app/myApp.exe?p=a&..., I had to put an empty myApp.exe in the lighttpd root folder (where the server's exe is). That is strange and sucks, and also not all CGIs can work that way. Applying the same actions to another CGI app (one that works perfectly on a properly tuned Apache) gave no success.
What am I doing wrong?
The docs: http://redmine.lighttpd.net/wiki/1/Docs:ModCGI
I've made a test with a Tcl script as CGI, and this was my working config:
cgi.assign = ( "" => "/usr/bin/tclsh" )
index-file.names = ("lighttd_test.tcl")
The cgi.assign allows you to specify file extensions to be handled by specific applications. This example means: any file type will be run through /usr/bin/tclsh. Since my index file is a Tcl script, I get the content it writes to STDOUT.
In case you want to run a binary executable, this is the place to specify it; see the sketch below.
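For a self-contained binary, the handler can be left empty, which tells lighttpd to execute the file itself (assuming your binaries end in .exe):

cgi.assign = ( ".exe" => "" )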
Maybe this link provides some more info about binary cgi for you: http://redmine.lighttpd.net/issues/1256

How can I get a Windows batch or Perl script to run when a file is added to a directory?

I am trying to write a script that will parse a local file and upload its contents to a MySQL database. Right now, I am thinking that a batch script that runs a Perl script would work, but am not sure if this is the best method of accomplishing this.
In addition, I would like this script to run immediately when the data file is added to a certain directory. Is this possible in Windows?
Thoughts? Feedback? I'm fairly new to Perl and Windows batch scripts, so any guidance would be appreciated.
You can use Win32::ChangeNotify. Your script will be notified when a file is added to the target directory.
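A minimal sketch of that approach (the directory and the launched script are illustrative):

use strict;
use warnings;
use Win32::ChangeNotify;

# Watch C:\incoming (not its subtree) for file-name changes (add/remove/rename).
my $notify = Win32::ChangeNotify->new('C:\\incoming', 0, 'FILE_NAME');
while (1) {
    $notify->wait;                        # blocks until something changes
    system('perl', 'upload_to_mysql.pl'); # parse and upload the new file
    $notify->reset;                       # re-arm for the next change
}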
Checking a folder for newly created files can be implemented using the WMI functionality. Namely, you can create a Perl script that subscribes to the __InstanceCreationEvent WMI event that traces the creation of the CIM_DirectoryContainsFile class instances. Once that kind of event is fired, you know a new file has been added to the folder and can process it as you need.
These articles provide more information on the subject and contain VBScript code samples (hope it won't be hard for you to convert them to Perl):
How Can I Automatically Run a Script Any Time a File is Added to a Folder?
WMI and File System Monitoring
The function you want is ReadDirectoryChangesW. A quick search for a perl wrapper yields this Win32::ReadDirectoryChanges module.
Your script would look something like this:
use Win32::ReadDirectoryChanges;

my $rdc = Win32::ReadDirectoryChanges->new(
    path    => $path,
    subtree => 1,
    filter  => $filter,
);

while (1) {
    my @results = $rdc->read_changes;
    # read_changes returns (action, filename) pairs
    while (scalar @results) {
        my ($action, $filename) = splice(@results, 0, 2);
        # ... run script ...
    }
}
You can easily achieve this in Perl using File::ChangeNotify. This module is to be found on CPAN: http://search.cpan.org/dist/File-ChangeNotify/lib/File/ChangeNotify.pm
You can run the code as a daemon or as a service, make it watch one or more directories, and then automatically execute some code (or start up a script) if some condition matches.
Best of all, it's cross-platform, so should you want to switch to a Linux machine or a Mac, it would still work.
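A minimal sketch of that approach (the directory, filter, and script name are illustrative):

use strict;
use warnings;
use File::ChangeNotify;

my $watcher = File::ChangeNotify->instantiate_watcher(
    directories => ['C:/incoming'],
    filter      => qr/\.csv$/,
);

# Blocks until something changes, then reports what happened.
while (my @events = $watcher->wait_for_events) {
    for my $event (@events) {
        next unless $event->type eq 'create';
        system('perl', 'upload_to_mysql.pl', $event->path);
    }
}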
It wouldn't be too hard to put together a small C# application that uses the FileSystemWatcher class to detect files being added to a folder and then spawn the required script. It would certainly use less CPU / system resources / hard disk bandwidth than polling the folder at regular intervals.
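A rough sketch of that idea (the watched path and the spawned command are illustrative):

using System;
using System.Diagnostics;
using System.IO;

class WatchAndRun
{
    static void Main()
    {
        using var watcher = new FileSystemWatcher(@"C:\incoming");
        // Spawn the parser/uploader whenever a file appears in the folder.
        watcher.Created += (sender, e) =>
            Process.Start("perl", $"upload_to_mysql.pl \"{e.FullPath}\"");
        watcher.EnableRaisingEvents = true;
        Console.WriteLine("Watching C:\\incoming; press Enter to quit.");
        Console.ReadLine();
    }
}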
You need to consider what is a sufficient heuristic for determining "modified".
In increasing order of cost and accuracy:
file size (file content can still be changed as long as size is maintained)
file timestamp (if you aren't running ntpd, time is not monotonic)
file sha1sum (bulletproof but expensive)
I would run ntpd, and then loop over the timestamps, comparing the checksum if a timestamp changes. This can cover a lot of ground in little time; a sketch follows below.
These methods are not appropriate for a computer-security application; they are for file management on a sane system.
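A sketch of that timestamp-then-checksum loop (the directory and polling interval are illustrative):

use strict;
use warnings;
use Digest::SHA;

my %seen;    # path => [mtime, sha1]
while (1) {
    for my $file (glob 'C:/watched/*') {
        my $mtime = (stat $file)[9];
        # Cheap check first: skip the hash if the timestamp is unchanged.
        next if $seen{$file} && $seen{$file}[0] == $mtime;
        my $sha = Digest::SHA->new(1)->addfile($file)->hexdigest;
        if (!$seen{$file} || $seen{$file}[1] ne $sha) {
            print "$file changed\n";    # launch the upload script here
        }
        $seen{$file} = [$mtime, $sha];
    }
    sleep 5;
}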