I hope someone here can help me out of the trouble I'm facing.
My sites were hacked, and 7 lines of code were appended to the end of every HTML file. I downloaded all of the contaminated files and would like to remove the injected code, but the task is overwhelming: there are 400+ HTML files.
So I'd like to ask the gurus: is there any method to bulk-delete the last 7 lines of each file? I tried Notepad++ and other apps but failed to find a good way.
PS: the HTML files are spread across different directories.
Lots of people seem to be using PowerShell for tasks like this these days.
Test this before you run it, but I think this is what you're looking for:
# -Recurse picks up files in subdirectories; -Filter limits it to HTML files
gci c:/USE_REAL_FOLDER_NAME_HERE/ -Recurse -Filter *.html | % {
    $path = $_.FullName
    $file = gc $path
    # keep everything except the last 7 lines, then write the file back
    $file[0..($file.length - 8)] | out-file $path
}
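Since out-file rewrites each file in place, it would be wise to make a backup copy of the whole tree first, for example (adjust the paths):
# copy everything somewhere safe before modifying anything
Copy-Item c:/USE_REAL_FOLDER_NAME_HERE/ c:/BACKUP_BEFORE_CLEANUP/ -Recurse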
I am using SSDT in Visual Studio 2013.
I have created some pre and post publish scripts for the development server. The pre-deployment scripts empty data from tables and re-set all the auto identity fields. The post-deployment scripts populate the tables with static and sample data.
Now I need to publish the database to our staging and live database servers. I have created new "publish.xml" profiles for these servers. But obviously I don't want the same pre and post scripts to run.
How can I either specify different scripts depending on the publish profile, or make the scripts aware of the target and perform different actions?
My biggest concern is publishing to the live server and accidentally destroying data.
Thanks in advance.
Doug
You have a few options:
1 - Wrap your data changes in checks against @@servername or something unique to the environment, so you would have something like:
if @@servername = 'dev_server'
begin
    delete blah
    insert blah
end
2 - You can also achieve something similar using SQLCMD variables: pass in a variable like "/v:DestroyData=true" and then reference it in your script.
3 - Don't use pre/post-deploy scripts, but have your own mechanism for running them, i.e. use a batch file to deploy your dacpacs and add a call to sqlcmd before and after. The downside is that when deploying, changes to a table result in any foreign keys being disabled before the pre-deploy script and re-enabled after the post-deploy script.
4 - Edit the dacpacs. The pre/post-deploy scripts are just text files inside the dacpac, which is essentially a zip file that follows the Microsoft packaging format, and there is a .NET packaging API to let you modify it. A rough sketch of this approach follows.
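Because the dacpac is just a zip archive, even Python's standard zipfile module can do the swap. This is only a sketch under the assumption that the post-deploy entry is named postdeploy.sql; all the file names here are made up:
import zipfile

SRC = "MyDb.dacpac"                 # hypothetical input dacpac
DST = "MyDb.staging.dacpac"         # patched copy to deploy

# zip entries can't be replaced in place, so rebuild the archive
with zipfile.ZipFile(SRC) as zin, zipfile.ZipFile(DST, "w") as zout:
    for item in zin.infolist():
        data = zin.read(item.filename)
        if item.filename.lower().endswith("postdeploy.sql"):
            # swap in the script meant for this environment
            with open("PostDeploy.staging.sql", "rb") as f:
                data = f.read()
        zout.writestr(item, data)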
I think that is about it, please ask if anything is unclear :)
ed
I would suggest using SQLCMD variables for your conditional script execution.
If you right-click on a DB project and choose Properties, there is a "SQLCMD Variables" tab.
Enter "$(ServerName)" as a variable and something as the default value.
Then you need to open EVERY one of your .publish.xml files in the XML editor and insert the following code after the PropertyGroup part:
<ItemGroup>
  <SqlCmdVariable Include="ServerName">
    <Value>[YourVersionOfServer]</Value>
  </SqlCmdVariable>
</ItemGroup>
[YourVersionOfServer] should be equal to the result of @@servername on each of your servers.
The final .publish.xml will then contain this ItemGroup right after the existing PropertyGroup elements.
Then you should wrap your conditional code in the pre- and post-deployment files with:
if @@servername = '$(ServerName)'
begin
... code for specific server
end
Thus you can guarantee that the right code hits the right server.
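If you ever deploy from the command line rather than from Visual Studio, the same variable can, as far as I know, be passed to SqlPackage (the file names and server value here are illustrative):
SqlPackage.exe /Action:Publish /SourceFile:MyDb.dacpac /Profile:Staging.publish.xml /v:ServerName=STAGING-SQL01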
First set up SQLCMD variables by right-clicking on the project and going to Properties and the SQLCMD Variables tab.
Then set up a folder structure to organize the scripts you want to run for a specific server, or for anything else you want to switch on, like customer. Each server gets a folder. I like to gather all the scripts I want to run for a folder into an index file in that folder. The index lists a :r command followed by each script in the folder that should run, organized by filename with a numerical prefix so the order can be controlled.
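For example, a per-server index file might look like this (the file names are purely illustrative):
:r .\010-ClearTables.sql
:r .\020-StaticData.sql
:r .\030-SampleData.sql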
The index file in the folder that groups all the server folders does something different: instead of listing a call to each server's index file, it switches which index file to run based on the SQLCMD variable passed in from the publish profile. It does so with the following simple code:
:r .\$(Customer)\Index.sql
The reason you want to do it like this, setting up folders and index files, is that it not only keeps things organized but also allows you to use GO statements in all of your files. You can then use one script with :r statements for all the other scripts you want to run, nesting to your heart's content.
You could instead set up your SQLCMD file the following way, which doesn't require you to set up folders or specific file names, but it requires you to remove all GO statements. With the folder approach above, you don't have to remove any GO statements.
if @@servername = '$(ServerName)'
begin
... code for specific server
end
Then, when I right-click the project and hit Publish, it builds the project and pops up the publish dialog. I can change which scripts get run by changing the SQLCMD variable.
I like to save off the most commonly used settings as a separate publish profile; then I can get back to them by just clicking on its .xml file in Solution Explorer. This process makes everything easy, and there is no need to modify the XML by hand; just click Save Profile As, Load Profile, or Create Profile in the publish database dialog.
Also, you can generate your index files with the following PowerShell script:
foreach ($directory in (Get-ChildItem -Directory -Recurse | Get-Item))
{
    # skip writing to any index file with ---Ignore in it.
    if (-Not ((Test-Path $directory\Index.sql) -and (Get-Content $directory\Index.sql | %{$match = $false}{ $match = $match -or $_ -match "---Ignore" }{$match})))
    {
        $output = "";
        foreach ($childitem in (Get-ChildItem $directory -Exclude Index.sql, *.ps1 | Sort-Object | Get-Item))
        {
            # each entry becomes a :r include; directories point at their own Index.sql
            $output = $output + ":r .\" + $childitem.name
            if ($childitem -is [system.io.directoryinfo])
            {
                $output = $output + "\Index.sql";
            }
            $output = $output + "`r`n"
        }
        Out-File $directory\Index.sql -Encoding utf8 -InputObject $output
    }
}
For anyone wondering you could also do something like this:
Publish profile:
<ItemGroup>
  <SqlCmdVariable Include="Environment">
    <Value>Dev</Value>
  </SqlCmdVariable>
</ItemGroup>
Post deployment script:
if '$(Environment)' = 'Dev'
begin
... code for specific server
end
It seemed a bit more natural to me this way compared to the "ServerName" semantics. I also had issues trying to use @@servername, but I'm not sure why.
I am measuring coverage using jscoverage. The problem is that after storing the report about 15 times it stops working, so I end up with a report with only some lines covered. If I then start coverage fresh and try to merge the new jscoverage.json with the old one, the result gets corrupted. Can someone suggest how to merge two jscoverage.json files?
Note: the coverage is for the same JS file, so the directory and everything else remain the same.
You can merge reports using JSCover (the successor of jscoverage). Use the following command:
java -cp JSCover-all.jar jscover.report.Main --merge REPORT-DIR1 REPORT-DIR2 REPORT-DIR3...DEST-DIR
See http://tntim96.github.io/JSCover/manual/manual.xml#reportMerging
I'm working on some code with a partner. Our makefiles differ slightly, courtesy of different build setups, and because of this we have not been tracking the file so far. However, it would be nice to have at least one of them tracked. The problem is, when that is done and the other person runs hg update, their copy gets updated and the code won't compile.
Is there a way to track the file, but have it such that you can update the working directory selectively? Or is there some other way I should deal with this problem?
This is a slight variant of the standard "how do I deal with a config file" question. The standard answer in SVN, Mercurial, and Git is: don't track the file; instead, track <file>.example. Then each user copies it over to <file> and tweaks it as needed.
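In Mercurial terms that would look roughly like this (the .hgignore line uses the default regexp syntax):
hg add Makefile.example          # track the template
echo '^Makefile$' >> .hgignore   # ignore each user's personal copy
cp Makefile.example Makefile     # each user does this once, then edits freely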
But makefiles are a bit smarter than config files: they execute code and can include other files. In that case, it starts making sense to track the Makefile normally and have it include another local file, if present, that overrides the default rules. For instance, the following will work with GNU Make:
# pull in any local user tweaks
-include Makefile.local
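An untracked Makefile.local can then override whatever differs between the two setups, for example (the values here are made up):
# Makefile.local -- per-developer overrides, deliberately untracked
CC = clang
CFLAGS += -I/opt/local/include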
The MQ extension is the best and The Right Way (tm) to do it (not the easiest, but...).
Store the common part of the file in the repo, and keep your individual personalisation in your own MQ patches.
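A rough sketch of that workflow (the patch name is illustrative):
hg qnew -f local-makefile.patch   # capture your uncommitted Makefile tweaks as a patch
hg qpop                           # drop the personalisation before committing shared work
hg qpush                          # reapply it afterwards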
Is it possible to combine your makefiles? Then there is no chance of losing your different configurations by not storing them in version control.
For example, you could add a conditional statement based on the username. My username is ryan, and this code echoes my name; if it is run on your computer, it will probably echo "not ryan".
# note: the recipe line must be indented with a tab
all:
	if [ `whoami` = "ryan" ]; then echo "ryan"; else echo "not ryan"; fi
I often have the following scenario: in order to reproduce a bug for a report, I create a small sample project, sometimes a Maven multi-module project. So there may be a hierarchy of directories, and it will usually contain a few small text files. Standard procedure would of course be to create a zip file and send that, but on some mailing lists attachments are not allowed, so I am looking for a way to automatically create an installation script that I can post to such lists.
Basically I would be happy with a Unix-only flavor that creates mkdir statements for the directories and >> redirections to write the file contents. (Actually, apart from the relative path delimiters, the Windows and Unix versions can probably be identical.)
Does such a tool exist somewhere? If not, I'll probably write one in java, but I'm happy to accept solutions in all kinds of languages.
(The tool could run under windows or unix, but the target platform for the generated scripts should be either unix or configurable)
I think you're looking for shar, which creates a shell archive (shell script that when run produces a given directory hierarchy). It is available on most systems; you can use GNU sharutils if you don't already have it.
Normal usage for packing up a directory tree would be something like:
shar `find somedirectory -print` > archive.sh
If you're using GNU sharutils, and want to create "vanilla" archives which use only the most portable of shell builtins, mkdir, and sed, then you should invoke it as shar -V. You can remove some more extra baggage from the scripts by using -xQ; -x to remove checks for existing files, and -Q to remove verbose output from the archive.
shar -VxQ `find somedir -print` > archive.sh
If you really want something even simpler, here's a dirt-simple version of shar as a shell script. It takes filenames on standard input instead of arguments for simplicity and to be a little more robust.
#!/bin/sh
while read -r filename
do
    if test -d "$filename"
    then
        echo "mkdir -p '$filename'"
    else
        echo "sed 's/^X//' <<EOF > '$filename'"
        sed 's/^/X/' < "$filename"
        echo 'EOF'
    fi
done
Invoke as:
find somedir -print | simpleshar > archive.sh
You still need to invoke sed, as you need some way of ensuring that no lines in the here document begin with the delimiter, which would close the document and cause later lines to be interpreted as part of the script. I can't think of any really good way to solve the quoting problem using only shell builtins, so you will have to rely on sed (which is standard on any Unix-like system, and has been practically forever).
If your problem is filters that hate non-text files:
in times long forgotten, we used uuencode to get past 8-bit-eating relays.
Is that a way to get past attachment-eating mailboxes these days?
So why not zip and uuencode?
(Or base64, which is its younger cousin.)
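Something like this, for instance (assuming the sharutils uuencode is installed; the names are illustrative):
zip -r repro.zip myproject
uuencode repro.zip repro.zip > repro.txt   # paste repro.txt into the mail body
# the recipient runs: uudecode repro.txt && unzip repro.zip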
I have a personal Mercurial repository tracking some changes I am working on. I'd like to share these changes with a collaborator, however they don't have/can't get Mercurial, so I need to send the entire file set and the collaborator will merge on their end. I am looking for a way to extract the "tip" version of the subset of files that were modified between two revision numbers. Is there a way to easily do this in Mercurial?
Adding a bounty - This is still a pain for us. We often work with internal "customers" who take our source code releases as a .zip, and testing a small fix is easier to distribute as a .zip overlay than as a patch (since we often don't know the state of their files).
The best case scenario is to put the proper pressure on these folks to get Mercurial, but barring that, a patch is probably better than a zipped set of files, since the patch will track deletes and renames. If you still want a zip file, I've written a short script that makes a zip file:
import os, subprocess, sys
from zipfile import ZipFile, ZIP_DEFLATED

def main(revfrom, revto, destination, *args):
    root, err = getoutput("hg root")
    if "no Mercurial repository" in err:
        print "This script must be run from within an Hg repository"
        return
    root = root.strip()
    filelist, _ = getoutput("hg status --rev %s:%s" % (revfrom, revto))
    paths = []
    for line in filelist.split('\n'):
        try:
            (status, path) = line.split(' ', 1)
        except ValueError:
            continue
        if status != 'D':
            paths.append(path)
    if len(paths) < 1:
        print "No changed files could be found."
        return
    z = ZipFile(destination, "w", ZIP_DEFLATED)
    os.chdir(root)
    for path in paths:
        z.write(path)
    z.close()
    print "Done."

def getoutput(cmd):
    p = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    return p.communicate()

if __name__ == '__main__':
    main(*sys.argv[1:])
The usage would be nameofscript.py fromrevision torevision destination. E.g., nameofscript.py 45 51 c:\updates.zip
Sorry about the poor command line interface, but hey the script only took 25 minutes to write.
Note: this should be run from a working directory within a repository.
Well, hg export $base:tip > patch.diff will produce a standard patch file, readable by most tools around.
In particular, the GNU patch command can apply the whole patch against the previous files. Isn't that enough? I don't see why you would need the set of files: to me, applying a patch seems easier than extracting files from a zip and copying them to the right place. Plus, if your collaborator has local changes, a zip will overwrite them. You're not using a version control tool to bluntly force the other person to merge the changes manually, right? Let patch deal with that, honestly :)
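For the record, the round trip would be something like this (the revision numbers are illustrative):
hg export 45:51 > patch.diff    # on your side
patch -p1 < patch.diff          # on the collaborator's side, from the project root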
In UNIX this can be done with:
hg status --rev 1 --rev 2 -m -a -n | xargs zip changes.zip
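If any of the filenames contain spaces, the null-separated variant should be more robust (assuming your hg and xargs support -0):
hg status --rev 1 --rev 2 -m -a -n -0 | xargs -0 zip changes.zip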
I also contributed an extension, see the hgexportfiles extension on bitbucket for more info. The export files extension works on a given revision or revision range and creates the set of changed files in a specified directory. It's easy to zip the directory as part of a script.
To my knowledge, there's no handy tool for this (though a Mercurial plugin might be doable). You can export a patch for the fileset using hg export from:to (where from and to identify revisions). If you really need the entire files as seen at tip, you could probably hack something together based on the output of hg diff --stat -r from:to, which outputs a list of files with annotations about how many lines were changed, like:
...
src/test/scala/RegressionTest.scala | 25 +++++++++++++----------
src/test/scala/SLDTest.scala | 2 +-
15 files changed, 111 insertions(+), 143 deletions(-)
If none of your files have spaces or special characters in their names, you could use something like:
hg diff -r156:159 --stat | head --lines=-1 | sed 's!|.*$!!' | xargs zip ../diffed.zip
I'll leave dealing with special characters as an exercise for the reader ;)
Here is a small and ugly bash script that will do the job, at least if you work in a Linux environment. It has absolutely no checks whatsoever and will most likely break when you have moved a file, but it is a start.
Command:
zipChanges.sh REVISION REPOSITORY DESTINATION
zipChanges.sh 3 /home/hg/repo /home/hg/files.tgz
Code:
#!/bin/sh
REV=$1
SRC_REPO=$2
DST_ZIP=$3
cd "$SRC_REPO"
FILES=$(hg status --rev $REV "$SRC_REPO" | cut -c3-)
IFS=$'\n'
FILENAMES=""
for line in ${FILES}
do
FILENAMES=$FILENAMES" \""$SRC_REPO"/"$line"\""
done
CMD="tar czf \"$DST_ZIP\" $FILENAMES"
eval $CMD
I know you already have a few answers to this one, but a friend of mine had a similar issue, and I created a simple program in VB.Net to do this for him. Perhaps it could help you too; the program and a copy of the source are at the bottom of the article linked below.
http://www.simianenterprises.co.uk/blog/mercurial-export-changed-files-80.html
Although this does not let you pick an end revision at the moment, it would be very easy to add that using the source; however, you would have to update to the target revision manually before extracting the files.
If needed you could even modify it to create the zip instead of a folder of files (which is also nice and easy to zip manually).
Hope this helps either you or anyone else who wants this functionality.
I just contributed an extension, available here: https://sites.google.com/site/alessandronegrin/pack-mercurial-extension
I ran into this problem recently. My solution:
hg update null
hg debugsetparents (starting revision)
hg update (ending revision)
This will have the effect of deleting all tracked files that were not changed between those two revisions. You will have to remove any untracked files yourself, though. After doing this, the local branch will be in an inconsistent state; you can fix this by running hg debugrebuildstate (or simply deleting the local branch, if you no longer need it).