Has repository update behaviour changed? - mercurial

For several years, we have been using Mercurial internally (version 3.1.2) on a designated server, which is now being retired.
On a new server, the entire set of repositories has been copied over and Mercurial 4.0.0 installed. Everything seems to work, but
http://192.168.0.3:8000/?sort=lastchange
we make heavy use of the above URL to inform 'pullers' which repositories have been updated by other users.
It no longer orders correctly: when a repository receives new commits, its last-modified date doesn't update (it isn't reported as having changed).
I'm stuck!

It all works fine now; the situation arose from ignorance of how Mercurial works. As far as I can tell, hgweb derives each repository's 'last change' time from file modification times inside the .hg directory rather than from the changesets themselves, so how the files are copied matters.
Originally, the parent folder for all our repositories was copied directly from the old server to the new one. After exhausting all other routes, the copy was redone by zipping the folder, copying the archive across, and unzipping it, which fixed the ordering.
(Lasse, one of Mercurial's roles is to report on and manage the situations you describe. We reduce this possibility by knowing which repos to pull every morning, before starting work. Try it!)
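For anyone hitting the same symptom, a quick sanity check, assuming a standard repository layout (the path below is hypothetical), is to compare the timestamp hgweb appears to read against what the web interface reports:

# the 'last change' shown by hgweb tracks this file's modification time
ls -l /srv/hg/myrepo/.hg/store/00changelog.i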

Related

Using mercurial, I added a new file and wrote code in it, then deleted that file. Can I retrieve it?

Pretty much the title. I've looked at a lot of similar questions asked here, and I can't seem to find something that applies.
Started by syncing with HEAD. Created a few new files and filled them in; they were being tracked at this point. I then not only deleted the files, but also removed them from being tracked (because of a stupid UI). According to my understanding, those files are gone for good, but I thought I'd check with people who are smarter than me: is it possible to retrieve them?
Mercurial does not store uncommitted changes, so if you did not commit the files then they are lost.
If you did commit them, then hg update -C will restore them (along with all other files, so make sure there are no other changes you haven't committed and want to keep) from the latest commit of your working directory's branch.
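A minimal sketch of that recovery, assuming the files were committed at some point before being deleted and untracked:

hg status        # first confirm there are no uncommitted changes you still want
hg update -C     # discard local changes and restore the files from the commit

If you'd rather not move the working directory at all, hg revert --all restores the deleted files from the working directory's parent changeset instead.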

Is there a way to get TortoiseHg to pull automatically as changes occur?

I'm using TortoiseHg for working with a Mercurial repository. I understand that it can be used for offline work, which is why this doesn't happen automatically, but...
Q: Is there a way to configure TortoiseHg to periodically pull changes, or at least to pull automatically while it is in use?
My machine is generally at work, and the last thing I want is to start merging without the latest changes, only to find that I need to resolve duplicate heads and redo my work.
Is there an existing way to tell TortoiseHg to keep in sync so that it doesn't fall too far behind? I'm not talking about updating my local repository in any way - I'm just talking about it doing a pull so that it knows the current state of the server's repository as closely as possible.
Please advise - and thanks!
Jeremy
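I'm not aware of a built-in TortoiseHg setting for this, but one low-tech approximation is to schedule a plain hg pull (no update, no merge), so the local clone's picture of the server stays fresh and TortoiseHg simply shows the incoming changesets. A sketch, assuming a Unix-like machine and a hypothetical repo path:

# crontab entry: pull (but never update or merge) every 15 minutes
*/15 * * * * cd /home/jeremy/work && hg pull >/dev/null 2>&1

Since hg pull without -u never touches the working directory, this matches the "just know the server's state" requirement.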

A practical way to provide code updates via Mercurial without sharing main BitBucket account

I suspect this might be really obvious but I can't find a straightforward solution in the documentation or forums:
I have written some code that is held in a Mercurial repository on BitBucket.
I use this code to build Linux virtual servers. When I build a server, I clone the repo onto the server, run my build script, and then delete the clone. The result is a configured server with several files from my repo located in various folders on the server.
Now, I'm looking for a mechanism where I can roll out bug fixes and improvements to my users' servers after I have handed them over. At that time, I won't have SSH access to the servers and I cannot expect my end users to do anything more complicated than kick off a cron job or launch a script.
To achieve this, I'm thinking of setting up a BitBucket account for my users with read-only access to my repo.
I have no problem writing a script to clone my repo, via this read-only account, and apply the updates, but I don't want to include all my files. In particular, I want to exclude my build script as it is commercially sensitive. I know I could remove it from my repo, but then my build wouldn't work.
Reading around, it seems I may need to create a branch or a fork of my repo (which?). Or maybe a sub-repo? Then, I could remove the sensitive files from that branch/fork/sub-repo and allow my users to clone it via a script.
That's OK, but I need a way to update the branched/forked/sub repo as I make changes to the main one. Can this be automatic? In other words, can it be set up to always reflect the updates made in the main repo? Excluding the sensitive files of course.
I'm not sure I'd want updates to be automatic though, so I'd also like to know how to transfer updates from the main to the branch/fork/sub manually. A merge? If I do a merge, how do I make sure my sensitive files don't get copied across?
To sum up, I have a main repo which contains some sensitive files and I need a way to roll out updates of all but those sensitive files to my read-only users.
Sorry if this is hugely obvious. I'm sure it's a case of not seeing the wood for the trees and being overwhelmed by the possibilities. Thanks.
I don't think that you need to solve this in Mercurial at all.
What you actually need is Continuous Integration / a build server.
The simplest solution goes like this:
Set up a build server with something like TeamCity or Jenkins, that's always online and monitors changes in your Bitbucket repository.
You can set it up so that when there's a change in your repository, the build server runs your build script and copies the output to some FTP server, or download site, or whatever.
Now you have a single location that always contains the most recent code changes, but without the sensitive files like the build script.
Then, you can set up a script or cron job that your end users can run to get the newest version of the code from that central location.
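The end-user side then stays trivial; a sketch of that fetch script, assuming a hypothetical download URL and install directory:

#!/bin/sh
# fetch the latest published build and unpack it over the install directory
curl -fsSL https://builds.example.com/latest.tar.gz | tar -xz -C /opt/myapp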
You would be OK with two branches: one for the users to clone (main) and another for your main development (dev). The tricky part is merging new changes from dev into main.
You can solve this by excluding files from the merge process (see: Excluding a file while merging in Mercurial).
By setting the [merge-patterns] section in your .hgrc you can specify which files are not affected by the merge.
[merge-patterns]
build.sh = internal:local
For more info, read hg help merge-tools, which documents the tool used above:
"internal:local"
Uses the local version of files as the merged version.
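Putting it together, the release-side workflow might look like this (branch names as above; a sketch, not a full recipe):

hg update main    # switch to the branch your users clone
hg merge dev      # [merge-patterns] keeps main's build.sh when both sides touched it
hg commit -m "Merge dev into main, build.sh kept via merge-patterns"

One caveat worth knowing: merge tools only run for files that actually need merging, so the pattern protects build.sh only when it has changed on both branches.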
Entire Mercurial histories always move around together, so you can't clone or pull just part of a repository (along the file-tree axis). You could keep one branch that has only part of the files and another branch that has everything, making it easy to merge the partial (in terms of files) branch into the full one (but merging the other way wouldn't be particularly easy).
I'm thinking maybe subrepositories work for your particular use case.

Pushing/Pulling specific files/folders in Mercurial

I am (still) trying to completely migrate our company's SVN to HG.
For the most part I've succeeded, but we ran across a problem.
Our codebase has over 30 different projects, each one in its own folder.
I've been asked multiple times how to commit and then push specific files to our central repository, instead of being forced to commit everything everywhere before pushing; it's certainly annoying. Not being able to pull only specific projects is also a nuisance.
Is there any way to handle this like we used to in SVN, where we could commit just what we wanted and update only what was necessary?
Thank you.
A major difference between SVN and Mercurial is that you should have one repository per project in Mercurial.
You can change your repository to be multiple repositories using the convert extension.
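A sketch of splitting one project out with convert (project and file names hypothetical; the extension must first be enabled in your .hgrc):

# filemap.txt - keep only project-a and make it the new root
include project-a
rename project-a .

hg convert --filemap filemap.txt bigrepo project-a-repo

Repeating this per project gives you the one-repo-per-project layout while preserving each project's history.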
As Steve Kaye said, you should create one repo per project, but you may also want to create one master repo that includes all your projects as subrepos. This will allow SVN-like behaviour of getting a copy of everything.
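The master repo would carry a .hgsub file mapping each project folder to its repository; a sketch with hypothetical paths:

# .hgsub, committed at the root of the master repo
project-a = https://hg.example.com/project-a
project-b = https://hg.example.com/project-b

Cloning the master repo then checks out every subrepo, which is the SVN-like "get everything at once" behaviour.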

Keeping a database structure up to date in a project where code is on subversion?

I have been working with Subversion for a while now, and it's been incredible for managing my projects, and even for handling deployment to several different servers. But one thing still annoys me: whenever I make changes to the database structure, I have to update every server manually and keep track of every change I made. And because some of my servers run branches of the project (modifications that are still being worked on, or that were made for different purposes), it's a bit awkward.
Until now, I've been using a "database.sql" file, which is a dump of the database structure for a specific revision. But it just seems like such a bad way to manage this.
And I was wondering, how does everyone else manage their MySQL databases when they're working on a project and using Subversion?
In my team, here is what we currently do:
We only have one branch: the trunk, which is where every developer checks in his changes.
When we want to release a new version of our solution, we create a new branch from the trunk (after stabilizing it a bit).
For each release, we also have a file to migrate the schema of our databases from version n-1 to version n, plus a script to roll back from n to n-1. So when we start a new release, we create new migration and rollback files, which are committed to the trunk.
Thus we are able to rebuild the database corresponding to any version of our solution, starting from any version of a given schema.
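To make the convention concrete (file and database names hypothetical): each release ships a forward script and a rollback script, and moving a server between versions is just applying them in order:

# upgrade the schema from version 12 to 13
mysql -u deploy -p myapp_db < db/migrate_12_to_13.sql
# downgrade it again if the release has to be pulled
mysql -u deploy -p myapp_db < db/rollback_13_to_12.sql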
Actually, we also had a lot of debates on this question and this is finally what we chose to do. But if you guys have some ideas to help us to improve, let us know :)
Liquibase might be something useful for you.
I've played around with this quite a bit, although not to the point of using it in anger.
Basically, you define your database and change scripts in Liquibase's syntax, and it generates upgrade and from-scratch scripts for various databases for you.
Takes a bit of getting used to, but works quite well.
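For a sense of the workflow (a sketch; the exact flags vary between Liquibase versions, and the file and connection details below are placeholders): you keep a changelog file under version control next to your code and run the CLI against each database:

# apply all pending changesets from the changelog to the target database
liquibase --changeLogFile=db.changelog.xml --url=jdbc:mysql://localhost/myapp_db --username=deploy --password=secret update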