TortoiseHg seems to be very slow on Win7 (Mercurial)

I have recently set up a Mercurial clone with TortoiseHg on our network. It seems to take forever to add files, do commits, etc. It usually hangs for 3-5 minutes at a time, and for some reason it really doesn't like any kind of right-clicking in TortoiseHg.
I am fairly new to Mercurial, so there could be some settings to speed this all up, but I am not sure how best to approach this. My PC specs are below:
Intel Core 2 Quad CPU Q8300 @ 2.50GHz
4GB RAM (3GB Usable)
The actual clone is pretty big, around 200MB in total. I'm not sure if this large size is causing the slowdown, or the fact that the clone itself isn't on my machine but on our local network.
Any ideas on how best to optimise everything?

First, I would try to do the same Mercurial operations from the command line, to rule out a slow GUI.
We have the same setup here at my work. The "main" repo is on a mapped network drive. Accessing it is slow, so I've made a local clone for fast access and only synchronize when necessary.
Now that I think about it, why don't you have a clone on your local machine? Isn't that the entire point of a DVCS?
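For example, a rough sketch of that setup, assuming the shared repository lives on a mapped drive at Z:\repos\project (a hypothetical path):

    REM clone the network repository onto the local disk once
    hg clone Z:\repos\project C:\work\project
    cd C:\work\project
    REM day-to-day adds and commits now hit the local disk, not the network
    hg add newfile.txt
    hg commit -m "work in progress"
    REM synchronize with the shared repository only when needed
    hg pull Z:\repos\project
    hg update
    hg push Z:\repos\project

Point TortoiseHg at C:\work\project instead of the network clone and the right-click operations should become much more responsive.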

Related

PHPStorm cache on downloaded files?

So I've used PHPStorm before, and have been asked (along with some other coworkers) to evaluate how effective it would be for my current company, as I already had my own private license. I'm hitting a bit of a snag that I really don't think should be a show stopper.
Anyway, the way my company has its development environments set up now is a bit odd. We check everything into Subversion, into different directories from where it will end up on the client's system, because we build it into Debian packages. This makes working with the files directly from Subversion difficult, as PHPStorm has no idea where related files are located.
However, because of this, the files on our development virtual machines are not directly under Subversion. Instead, we patch up our virtual machines by installing the updated packages when needed.
This makes life difficult for an IDE, which wants to keep a local copy of the files on your system. The best way I can figure out to do this is to run a synchronize between the remote server and my local copy (going by timestamp and size should be fine, and it completes in less than a minute). It would be fine to tell developers "after you patch, make sure you sync with PHPStorm".
However, the problem I'm having is that if I modify a file on the remote system and sync (and it says it downloaded), it takes several minutes after opening the file for the remote changes to be seen in PHPStorm.
I have no idea why this would be, and could potentially lead to really bad results if someone makes a few quick changes, saves, and overwrites the needed files.
I'm currently running PHPStorm on Ubuntu 14.04 64-bit.
Any help would be appreciated.

Tuning Mercurial server and identifying bottleneck

I have a very large Mercurial repo that takes hours for an initial clone. The clone is being done over HTTPS, via scmmanager. I would like to try and get this down to minutes, if possible.
My Mercurial repo is running on a server with 24 cores; the load is around 2 while doing a clone from my workstation. I'm wondering how I can tune Mercurial on the server, perhaps to use more cores. iowait is at 0. Network traffic is low on the server: iftop shows 5 Mb/s TX, and I have gigabit ethernet. 4 GB of RAM are being used out of 64 GB total, with 24 GB used for disk cache and 1.5 GB for buffers.
On my workstation, I have tried renicing hg to -5. Load is around 0.5. My workstation has 8 cores. It also has gigabit ethernet and is also seeing minimal traffic, around 2 Mb/s RX. iowait is also zero on my workstation.
OS for mercurial server is CentOS 6. OS for workstation is Debian jessie. Any ideas would be greatly appreciated.
Edit: I tried doing a clone over ssh as well as a local non-hardlinked uncompressed clone. Both take a very long time (hours). Repo size is 8 GB. Unsure how to proceed.
A clone itself is a very low-complexity operation for Mercurial, so I suspect scmmanager. Try cloning directly from the filesystem using an ssh:// URL and the load should be just about zilch.
Also try disabling compression on the clone with --uncompressed. If it helps, hgweb makes that settable on the server side.
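For instance, something along these lines (the host name and repository path are placeholders):

    # clone straight over ssh, bypassing scmmanager and https
    hg clone ssh://user@hgserver//srv/hg/bigrepo bigrepo
    # same thing, but skip compression so the server spends no CPU on zlib
    hg clone --uncompressed ssh://user@hgserver//srv/hg/bigrepo bigrepo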
What is the total size of your repositories?
The suggestions Ry4an makes should indeed resolve your issue. If they don't, you could also create a bundle and transfer that for the initial clone. See hg help bundle for details.
Basically, this creates a single file containing revisions (in your case you'll want to use --all, to make sure it contains the entire repository). You can then use wget or any other tool to download the file (which should be much faster than 5 Mb/s).
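A rough sketch of that approach (host names and paths are made up):

    # on the server: pack the whole repository into a single file
    cd /srv/hg/bigrepo
    hg bundle --all /tmp/bigrepo.hg

    # on the workstation: fetch the file with any fast transfer tool...
    wget http://hgserver/downloads/bigrepo.hg
    # ...and turn it back into a repository
    hg init bigrepo
    cd bigrepo
    hg unbundle ../bigrepo.hg
    hg update
    # point the new clone at the real repository for future pulls
    printf '[paths]\ndefault = ssh://user@hgserver//srv/hg/bigrepo\n' >> .hg/hgrc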

Many binary files synchronization

I have about 100,000 files on an office server (images, PDFs, etc.).
Each day the file count grows by about 100-500 items, and about 20-50 old files change.
What is the best way to synchronize the web server with these files?
Can a system like Mercurial or Git help?
(On the office server I'll commit changes, and the web server will periodically do updates.)
The second problem is that on the web server I have user-generated content (binary files, separate from the files above).
Each day users upload about 1000-2000 new files. Old files don't change.
I need to back up these files to a local machine.
Can a system like Mercurial or Git help in this situation?
(On the web server I'll commit these files via cron, and on the local machine I'll do updates.)
Thanks
UPD.
Office server is Windows Server 2008 R2
Web-server is Debian 5 lenny
The simplest and most reliable mechanism (in my experience) is rsync.
On Windows, however, rsync over ssh is badly broken due to issues with how Cygwin interacts with named pipes. Rsync over its own protocol works (as long as you don't care about encryption), but I've had lots of problems getting rsync to stay up as a Windows service for more than a few days at a time. DeltaCopy is a Windows app that uses the rsync tools behind the scenes; it seems to work very well, though I haven't tried the ssh option.
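For reference, the kinds of rsync invocations involved might look like this (host names, module names, and paths are made up):

    # mirror new and changed files from the office server's rsync daemon to the web server
    # (add --delete if removals on the office server should propagate too)
    rsync -av rsync://office-server/files/ /var/www/files/
    # back up user uploads from the web server to a local machine over ssh
    rsync -avz user@web-server:/var/www/uploads/ /backup/uploads/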
A DVCS is not a good solution in this case: it will keep all the history, which you don't always need, and it will make any clone a massive operation.
An artifact repository like Nexus is much better suited if you need some kind of versioning with integrity checks for your binaries.
Otherwise (no versioning), a simple rsync like Marcelo proposes is enough.

Hudson slaves, how to access workspace

How do I configure the system to have one master and multiple slaves when building normal C code with gmake? How can slaves access the workspace from the master? I guess an NFS share is the way to go, but if that's not possible, are there any other options?
http://wiki.hudson-ci.org/display/HUDSON/Distributed+builds is there, but I cannot understand how workspace sharing is handled.
Rsync? From the master: SCM job -> done -> rsync to all slaves -> build job, and if it was done on a slave -> rsync the workspace back to the master?
Any proof-of-concept or real-life solutions?
When Hudson runs a build on a slave node, it does a checkout from source control on that node. If you want to copy other files over from the master node, or copy other items back to the master node after a build, you can use the Copy to Slave plugin.
It's surely a late answer, but it may help others.
I'm currently using the "Copy Artifact plug-in" with great results.
http://wiki.hudson-ci.org/display/HUDSON/Copy+Artifact+Plugin
(https://stackoverflow.com/a/4135171/2040743)
Just one way of doing things, others exist.
Workspaces are actually not shared when distributed to multiple machines; they exist as separate directories on each of those machines. To coordinate items, anything that needs to be distributed from one workspace to another is copied into a central repository via SCP.
This means that sometimes I have a task which needs to wait on the items landing in the central repository. To fix this, I have the task run a shell script which polls the repository via SCP for the presence of the needed items, and it errors out if the items aren't available after five minutes.
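A simplified sketch of that polling script, with the host, path, and artifact name made up:

    #!/bin/sh
    # wait up to five minutes for the build artifacts to land in the central repository
    HOST=artifact-host
    REMOTE_DIR=/srv/artifacts/$BUILD_NUMBER   # build number is passed in by the job
    for i in $(seq 1 30); do
        if ssh "$HOST" test -e "$REMOTE_DIR/myapp.tar.gz"; then
            scp "$HOST:$REMOTE_DIR/myapp.tar.gz" .
            exit 0
        fi
        sleep 10
    done
    echo "artifacts not available after five minutes" >&2
    exit 1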
The only downside to this is that you need to pass around a parameter (the build number) to keep the builds on the same page, preventing one build from picking up a previously built artifact. That, and you have to set up a lot of SSH keys to avoid having to pass a password in when running the SSH scripts.
Like I said, not the ideal solution, but I find it is more stable than the ssh artifact grabbing code for my particular release of Hudson (and my set of SSH servers).
One downside: the SSH servers on most Linux machines seem to really lack performance. A solution like mine tends to swamp your SSH server with a lot of connections coming in at about the same time. If you find the same happens for you, you can add timer delays (an easy, imperfect solution) or you can rebuild the SSH server with high-performance patches. One day I hope those high-performance patches make their way into the SSH server base code, provided they don't negatively impact SSH server security.

Mercurial local repository backup

I'm a big fan of backing things up. I keep my important school essays and such in a folder of my Dropbox. I make sure that all of my photos are duplicated to an external drive. I have a home server where I keep important files mirrored across two drives inside the server (like a software RAID 1).
So for my code, I have always used Subversion to back it up. I keep the trunk folder with a stable copy of my application, but then I create a branch named with my username, and inside there is my working copy. I make very few changes between commits to that branch, with the understanding that the code in there is my backup.
Now I'm looking into Mercurial, and I must admit I haven't truly used it yet so I may have this all wrong. But it seems to me that you have a server-side repository, and then you clone it to a working directory in the form of a local repository. Then as you work on something, you make commits to that local repository, and when things are in a state to be shared with others, you hg push to the parent repository on the server.
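In command terms, that workflow looks roughly like this (the URL is just a placeholder):

    hg clone https://hg.example.com/project   # creates a full local repository plus working directory
    cd project
    hg commit -m "local work"                 # recorded only in the local repository
    hg push                                   # shared with the server repository when ready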
Between pushes of stable, tested, bug-free code, where is the backup?
After doing some thinking, I've come to the conclusion that it is not meant for backup purposes and it assumes you've handled that on your own. I guess I need to keep my local Mercurial repositories in my Dropbox or some other backed-up location, since my in-progress code is not pushed to the server.
Is this pretty much it, or have I missed something? If you use Mercurial, how do you backup your local repositories? If you had turned on your computer this morning and your hard drive went up in flames (or, more likely, the read head went bad, or the OS corrupted itself, ...), what would be lost? If you spent the past week developing a module, writing test cases for it, documenting and commenting it, and then a virus wipes your local repository away, isn't that the only copy?
So then on the flip side, do you create a remote repository for every local repository and push to it all the time?
How do you find a balance? How do you ensure your code is backed up? Where is the line between using Mercurial as backup, and using a local filesystem backup utility to keep your local repositories safe?
It's OK to think of Subversion as a 'backup', but it's only really doing that by virtue of being on a separate machine, which isn't really intrinsic to Subversion. If your Subversion server were the same machine as your development machine - not uncommon in the Linux world - you wouldn't really be backed up in the sense of having protection from hardware failure, theft, fire, etc. And in fact, even with a separate server there is some data that is not backed up at all: your current code may exist in two places, but everything else in the repository (e.g. the revision history) exists in only one place, on the remote server.
It's exactly the same for Mercurial except that you've taken away the need for a separate server and thus made it so that you have to explicitly think about backing up rather than it being a side-effect of needing to have a server somewhere. You can definitely set up another Mercurial repository somewhere and push your changes to that periodically and consider that your backup. Alternatively, simply backup your local repository in the same way that you'd back up any other important directory. With you having a full copy of the repository locally, including all revision history and other meta data, this is arguably even more convenient and safe than the way you currently do it with Subversion.
The "hidden" .hg directory stores all of the local commits. You can back up this directory using a standard backup program.
The changes get to the remote repository only when you push. Commits stay local, but you get them if you clone your repository. So yes, if you want your work to get to the server repository, you have to push to it "all the time".
On the other hand, nothing stops you from having several machines and pushing content from one to another. Every Mercurial repository can turn itself into a server in a matter of seconds by typing "hg serve".
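A sketch of what that can look like, with made-up host names:

    # on the working machine, expose the local repository over HTTP in the background
    cd ~/project
    hg serve -p 8000 -d
    # on the backup machine, clone it once, then refresh it whenever you like
    # (hg serve is read-only by default, which is all a backup pull needs)
    hg clone http://workstation:8000/ project-backup
    cd project-backup && hg pull
    # alternatively, push from the working machine to a plain clone over ssh
    hg push ssh://me@backup-host//home/me/project-backup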
I'm not sure this really answers your question, but I too am a big fan of backups and manage things this way, with many clones of my repository (I also make heavy use of mq to work in patch mode, but that's another story).
PS: as a side note, I'm considering using Mercurial as a tool for filesystem backup. The only thing that bothers me is that for this purpose I would prefer to disable the diff feature and treat all files as binary, but that should be easy.