I have a pretxncommit hook that uses a Python script. This Python script is itself under version control in the same repo. Everything worked fine until I made changes to the script itself, which led to a bunch of errors when trying to commit or merge with such changes.
How can I fix this? The best solution I can come up with is to use a subrepo, but I don't really like it.
Error example:
Traceback (most recent call last):
File "hg", line 43, in <module>
File "mercurial\dispatch.pyc", line 30, in run
TypeError: unsupported operand type(s) for &: 'str' and 'int'
Running a hook whose script lives in the repository itself can be tricky: a commit can change the hook script while it is executing, or commit a broken version, and the script can be in a somewhat undefined state while it is itself being modified.
One solution is to run the hook from a location outside the repository, such as ~/bin, and additionally use a post-commit or maybe txnclose hook which updates the copy in ~/bin from the repository, possibly preceded by a sanity test to make sure you don't update to a broken version (see the sketch below).
That's the way I update all scripts which run my compile farm: they all live in a CF-related repository, and hooks on that repository first trigger test runs with the newly committed versions to ensure the CF will work with them, and only update the scripts used by the CF permanently when those tests pass.
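For illustration, a minimal .hg/hgrc sketch of that setup (the paths and hook names are assumptions; the copy under ~/bin is what actually runs, and it only gets refreshed once the transaction has closed):

[hooks]
# run the check from a copy that lives outside the repository
pretxncommit.check = python ~/bin/check_commit.py
# after the transaction closes, refresh the copy in ~/bin from the repository,
# ideally via a script that sanity-tests the new version before overwriting it
txnclose.refresh = ~/bin/update_hook_script.sh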
Today I tried to push changes into our shared repository hosted on an Apache (2.2.x) server running WebDAV (over HTTPS).
The repository in the DAV directory is a clone of my working directory, created with the NoUpdate option. Both repositories are initialized.
To move on, I mapped the DAV directory/repository as a network drive and set the repository to push to "y:/".
When I try to push from Workbench, the exception "aborted, ret 255" is thrown.
% hg --repository C:\wamp\www\ommon push y:
pushing to y:
searching for changes
abort: Y:\.hg/store/journal: The system cannot find the file specified
[command returned code 255 Thu Jun 20 12:08:28 2013]
Pushing from the command line throws:
pushing to y:\
searching for changes
abort: y:\.hg/store/journal: The system cannot find the file specified
Exception AttributeError: "'transaction' object has no attribute 'file'" in
<bound method transaction.__del__ of <mercurial.transaction.transaction object>>
I tried to alter the path to the directory, since the mixed path separators looked strange to me, but that did not help.
Further information: I'm not using hgweb or any CGI-script-based setup.
EDIT: Several Google results on this issue left me with the impression that pushing changes to a repository served via WebDAV is not really possible, and that I would have to use hgweb to get around it.
But why? My understanding is that WebDAV is capable of writing. Since I mapped the directory as a network drive, Mercurial should be able to push changes onto the web server just as it does to a local directory.
Can someone confirm this?
Windows WebDAV support can be shaky. It's very possible that, because of Mercurial's fairly advanced file-system operations, the OS does something incorrect, or something that Apache's mod_dav cannot cope with.
It's also possible that something simpler is wrong, like Apache blocking access to paths starting with a dot (such as .hg).
You may be able to find something in your Apache log, but I would recommend not doing this at all and using a true Mercurial server instead (a quick sketch follows below).
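As a minimal sketch of the "true Mercurial server" route using hg serve (the repository path, host name and port are assumptions, and the open, non-SSL push settings are only suitable for a trusted network or quick testing):

# on the machine that hosts the repository
cd /srv/hg/myrepo
hg serve --port 8000 --config web.push_ssl=false --config "web.allow_push=*"

# on the client
hg push http://server:8000/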
Mercurial's HTTP repository access never speaks WebDAV.
You have to use a Mercurial-capable web frontend (such as hgweb) to communicate with the repository, or mount the WebDAV drive as a local drive and access the repository on it as a repository on the local filesystem.
In our current Java project we want to compare the local with the remote revision number of an already cloned Mercurial repository; in particular, we want to get the latest revision number from the server. We are using javahg to access Mercurial functions, but we can't find any command in the javahg library to achieve that.
Normally you would use the identify command, but it is not supported in this library. Another way could be to use the incoming command, which is supported, but it does not seem to work for us. We tried to execute the following line of code:
IncomingCommand.on(localRepo).execute(serverURL)
and the resulting bundle returns "-1". After a quick look into the source code of the execution function we found out that this method operates only on local repositories.
Has anybody an idea how the incoming command could be used to get the latest revision from the remote repository? Or is there another way to do this?
Any help is appreciated. Thanks!
The incoming command downloads a 'bundle file' containing the remote changesets not present locally. From the Bundle instance you can use getOverlayRepository() to get a Repository instance that any other command can be invoked on.
Here's an example of using Incoming with a remote repository:
Repository repoB = ..;
Bundle bundle = IncomingCommand.on(repoB).execute("http://localhost:" + port);
List<Changeset> changesets = bundle.getChangesets();
List<Changeset> heads = bundle.getOverlayRepository().heads();
I'm not sure of the precise semantics of 'identify', but a similar effect could probably be achieved by listing the heads of the bundle overlay repository, as sketched below.
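Continuing the example above, a small sketch of that idea (getNode() is javahg's accessor for the changeset hash; picking the first head is an assumption that only holds when the remote has a single relevant head):

// the newest remote changesets not yet present locally
List<Changeset> remoteHeads = bundle.getOverlayRepository().heads();
if (!remoteHeads.isEmpty()) {
    // 40-character changeset hash of a remote head
    String latestRemoteNode = remoteHeads.get(0).getNode();
    System.out.println("latest remote revision: " + latestRemoteNode);
}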
Identify seems much more efficient if you're just interested in the node id and not the changes themselves. Feel free to post a feature request here: https://bitbucket.org/aragost/javahg
Is there a hack or workaround to run a hook on the server whenever a client pushes or pulls and gets "no changes found"? I want to be able to run a script on a client's pull regardless of whether the repository has changed or not.
Currently it is easy to use the standard hooks (pretxnchangegroup, changegroup, etc.) for all other cases, but they do not get triggered when there is no change.
A hook won't be able to do what you're looking for. The hooks that run on the server side are all triggered by the arrival of things (changesets or pushkeys). There is no server-side hook for nothing-arrived, nor for commands that never send things (incoming, outgoing, etc.).
One long shot I tried that didn't work was using a post-serve hook. When a client connects to a remote repository over ssh, it really runs hg serve on that remote host, so I was hoping a post-serve hook would be executed at the end of an ssh session, but no such luck (indeed, that hook likely doesn't exist at all).
Fortunately there are some other, hackier options than hooks. For repositories you access over ssh you can alter the .ssh/authorized_keys file to force the execution of some command:
command="/home/me/hg_and_something_else.sh" ssh-rsa AAAA....
then in /home/me/hg_and_something_else.sh you'd have:
#!/bin/bash
# the forced command receives no arguments; the hg command the client actually
# requested (e.g. "hg -R repo serve --stdio") is in this environment variable
$SSH_ORIGINAL_COMMAND
echo "ANY COMMAND FROM HERE ONWARD IS RUN FOR EVERY PUSH, PULL, CLONE, etc."
Similarly, for http-served repos you'd just tack whatever you want onto the end of the WSGI file you're using; see the sketch below.
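A minimal sketch of what that could look like for a standard hgweb WSGI script (the repository path and log file are placeholders):

from mercurial.hgweb import hgweb

application = hgweb('/path/to/repo')

def wrapped_application(environ, start_response):
    # this runs for every request, including pulls that find no changes
    with open('/var/log/hg-requests.log', 'a') as log:
        log.write(environ.get('PATH_INFO', '') + '\n')
    return application(environ, start_response)

# point your WSGI server at wrapped_application instead of application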
As a caveat there are a lot of tools (IDEs for example) that check for changes often, so your script is going to be run more frequently than you're imagining.
I am using Windows Explorer to clone a repository from a folder on a mapped drive into my local working folder. I have never encountered this problem before with any of my previous repositories, but I keep getting an error while the repository is being cloned. The error points to a file that is of "unknown format". This is the exact error:
abort: index data/Scripts/tiny_mce/themes/advanced/skins/default/dialog.css.i unknown format 5661!
About half of the repository gets cloned, but it never finishes because of that error halfway through. I have used a remote command that supposedly skips over unknown formats (I found it somewhere online), but it doesn't work. This would be the complete command:
hg clone --remotecmd skip-unknown-format --verbose -- Q:\mercurial-repository\east C:\working
Any pointers on how I can skip that one file and/or how I can fix this error?
This is my hg version info:
version 2.4
with Mercurial-2.2.1, Python-2.6.6, PyQt-4.8.6, Qt-4.7.4
I have a .dtsx file (an SSIS package) that downloads files from an FTP server and imports data. It runs fine whenever I run it manually. However, when I schedule the package as a step in a SQL Server Agent job, it fails. The step it fails at is the one where I call a .bat file. The error in the job history viewer says this:
Error: 2009-05-26 12:52:25.64
Code: 0xC0029151
Source: Execute batch file Execute Process Task
Description: In Executing "D:\xxx\import.bat" "" at "", The process exit code was "1" while the expected was "0".
End Error
DTExec: The package execution returned DTSER_FAILURE (1).
I think it's a permissions issue, but I'm not sure how to resolve this. The job owner is an admin user, so I've verified they have permissions to the directory where the .bat file is located. I've tried going into Services and changing the "Log On As" option for SQL Server Agent, and neither option works (Local System Account and This Account). Does anyone have ideas as to what other permissions need to be adjusted in order to get this to work?
I tried executing just the batch file as a SQL job step, and it gave more specifics. It showed that the failure happened when the batch file tried to call an executable that was in the same directory as the .bat file, but not in the windows\system32 directory, which is where the job was executing from.
I moved the executable to the system32 directory, but then I had no clue where my files were being downloaded to. Then I found that the Execute Process Task (the one that executes the .bat) has a property called WorkingDirectory. I set this to the directory where the .bat is located, moved the executable back into the same directory as the .bat file, and it's now working as expected.
For me it was a permissions issue. Go to Environment --> Directories, then change Local directory to something the SQLAgentUser can access. I used C:\temp. Click the dropdown for Save, and choose "Set defaults".
Are you executing the SSIS job in the batch file, or is the batch file a step in the SSIS control flow?
I'm assuming the latter for this answer. What task are you using to execute the batch file (e.g. a simple execute-process task or a script task)? If it's the latter, it looks like your batch file is actually failing on some step, not the SSIS script. I'd check the permissions on whatever your batch file is trying to access.
In fact, it might be a better idea to rewrite the batch file as a script task in SSIS, because you'll get much better error reporting (it'll tell you which step in the script fails).
You could try executing the batch file with the runas command in a command window, as sketched below. If you execute it under the Local System or Network Service account, it should give you a better error. If it does error, you can check the error level by running "echo %ERRORLEVEL%".
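A minimal sketch of that (the account name is a placeholder; runas prompts for that account's password and opens a new console window):

rem reproduce the failure under the account the job runs as
runas /user:MYDOMAIN\sql_agent_account "cmd /k D:\xxx\import.bat"
rem then, in the window that opens, check the batch file's exit code with:
rem   echo %ERRORLEVEL%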
If it wasn't the latter, and you're executing the SSIS package via a batch file, why?
Are you possibly accessing a mapped drive in your .bat file? If so, you can't rely on the mapped drive from within the service, so you'd have to use a UNC path instead (e.g. \\server\share\... rather than Y:\...).
I had the same error and resolved it by logging on as the user account that runs the job, opening the CoreFTP site in question there, testing the site access, and making the change there (in my case, I had to re-enter the new password); now it works.
So yes, it is a file access issue: in this case, access to the CoreFTP site in question.