Is it possible to configure MediaWiki to not keep a version history of each page?
When a page is edited, I want the old version to be gone forever.
How can I do this?
If you are on MW >= 1.21, you could hook into the PageContentSaveComplete hook, which fires after everything has been written to the database when an article is saved. At that point, you'd query the database for all revisions of that page id and remove every revision except the newest. See here for an example of how you could delete a revision, and here for how to purge the text table afterwards.
I have no idea whether this approach would be entirely watertight.
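For a one-off cleanup (rather than a hook), MediaWiki also ships maintenance scripts that do roughly the same thing; a minimal sketch, assuming shell access to the wiki root:

cd /path/to/wiki                                   # hypothetical install path
php maintenance/deleteOldRevisions.php --delete    # drop every non-current revision
php maintenance/purgeOldText.php --purge           # then purge orphaned rows from the text table

A hook-based version would run similar deletion logic from PageContentSaveComplete on every save.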
Every time I pull from a repository I get not only the changesets but also all the bookmarks from that repository. In some situations this is quite annoying. Is there a way to pull without getting bookmarks?
(I'm actually using TortoiseHG, but information about plain Mercurial command line is useful and appreciated as well.)
(Background: In TortoiseHG having many bookmarks gets cluttered quickly. That doesn't matter in the remote repository where the bookmarks should remain for future reference. But locally I don't need or want them. So after each pull from the remote repository I have to delete each bookmark individually. This gets old fast...)
I don't think that push copies bookmarks, so why not go to the remote repository and from there push into yours?
Alternatively, .hg/bookmarks is just a text file, what happens if you copy it, pull, and then restore the original?
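A minimal sketch of that second idea (assuming nothing else touches the file mid-pull):

cp .hg/bookmarks /tmp/bookmarks.bak    # save your local bookmarks
hg pull                                # pulls changesets and remote bookmarks
mv /tmp/bookmarks.bak .hg/bookmarks    # restore, discarding the pulled bookmarks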
Is there a way to pull without getting bookmarks?
No. You get all of the remote repository's bookmarks every time you pull.
In TortoiseHG having many bookmarks gets cluttered quickly
How?! Inactive "external" bookmarks just sit on old changesets and quickly become invisible in past history.
But you can rename your important local bookmarks (give them some unique prefix); after that, deleting the remote bookmarks after each pull is a lot easier: filter the content of the .hg/bookmarks file so that only lines matching your prefix survive (see the sketch below).
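A sketch of that cleanup, assuming the hypothetical prefix "local-" and the usual one-bookmark-per-line "<hash> <name>" format of .hg/bookmarks:

grep ' local-' .hg/bookmarks > /tmp/bookmarks.kept    # keep only your prefixed bookmarks
mv /tmp/bookmarks.kept .hg/bookmarks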
I'm using TortoiseHg for working with a Mercurial repository. I understand that it can be used for offline work, which is why this doesn't happen automatically, but...
Q: Is there a way to configure TortoiseHg to periodically pull changes, or at least pull automatically when it is in use?
My machine is generally at work, and the last thing I want is to start merging without the latest changes, only to find that I need to resolve extra heads and redo my work.
Is there an existing way to tell TortoiseHg to keep in sync so that it doesn't fall too far behind? I'm not talking about updating my local repository in any way - I'm just talking about it doing a pull so that it knows the current state of the server's repository as closely as possible.
Please advise - and thanks!
Jeremy
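For reference, the pull described here is just hg pull with no -u (fetch changesets without touching the working copy). I'm not aware of a built-in TortoiseHg scheduler, so a hedged workaround (an assumption, not a TortoiseHg feature) is an external timer such as cron:

*/15 * * * * cd /path/to/repo && hg pull -q    # hypothetical path; quiet pull every 15 minutes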
Recently I submitted a file to Perforce as “add” (a new file).
Then I submitted several more changes to it.
Now I realize that the original “add” should have been an “integrate” because the file is really a copy and modification of another, existing file.
Is there a way to add the integration link after the fact?
If not, what is the easiest way of doing this? If we obliterate all the affected changelists, and then re-submit them but with the correct integration history, will that work?
Just talked to Perforce Support on the phone. The answer is no, you cannot “change history”. However, the recommended course of action (sketched below) is to:
1. Take a copy of each change made to the new file(s)
2. Obliterate all the added files that should have been an integrate
3. Re-submit each change that was made
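A hedged sketch of that sequence with plain p4 commands (depot paths and file names are hypothetical; obliterate needs admin rights):

p4 print -o new_file.r1 //depot/new_file#1    # 1. save a copy of each submitted revision
p4 print -o new_file.r2 //depot/new_file#2
p4 obliterate -y //depot/new_file             # 2. wipe the wrongly-added file (-y performs the deletion)
p4 integrate //depot/existing_file //depot/new_file
p4 submit -d "Re-add new_file as an integrate of existing_file"
p4 edit //depot/new_file                      # 3. re-apply and re-submit each saved change in order
cp new_file.r1 new_file
p4 submit -d "Re-apply original change 1"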
It may be possible to generate Perforce journal (database) records that put the missing data in place. These are plain text entries that are replayed into the live database by a system admin. The database schema is documented: www.perforce.com/perforce/doc.current/schema
You'd need to be very careful and work with Perforce Support while doing this, and try it on a test system first. It's normally not worth the effort.
One of my biggest problems today is that every time I make a commit to git, I have to make the database changes by hand, so the database schema isn't always up to date.
I would like a pre-commit hook that captures the database schema and includes it as part of the commit, and also for the database to get updated every time I make a pull.
Does anyone have something like this already?
(I have a LAMP server, but I'm willing to install anything that helps me with this)
Like this?
http://www.edmondscommerce.co.uk/git/using-git-to-track-db-schema-changes-with-git-hook/
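One common shape for this, as a minimal sketch for a LAMP setup (whether it matches the linked post exactly, I can't say; database name and file path are assumptions):

#!/bin/sh
# .git/hooks/pre-commit - capture the schema with every commit
mysqldump --no-data --skip-comments mydb > schema.sql    # schema only, no row data
git add schema.sql                                       # stage the dump into this commit

A matching post-merge hook could replay it after a pull with something like mysql mydb < schema.sql.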
I have some files I'd like to add to have them as a "backup". The thing is, I'd like to commit them only once, and then have Mercurial stop tracking them (not notify me if they're changed, and not commit them along with other commits).
Basically, something like this:
hg add my_folder
hg commit -m "added first version of my_folder"
Then, after a while, the contents of that folder might change. And if I commit other files, the new version of that folder will get committed as well. This is something I'd like to avoid. Is it possible, without directly specifying which files I want to commit?
I've never seen any option in Mercurial that might allow that... but why not simply copy them elsewhere?
I mean, what's the point of using a version tracking system if you don't need versioning on these items anyway?
We ran into a similar case with binary documents ('.doc', images, etc...) and finally decided to commit them on a separate repository, dedicated to those.
I think the traditional way of doing this is to commit files named something like "file.ext.default", and just inform users that they should copy the defaults and modify the copies.
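A sketch of that convention (file names hypothetical):

hg add settings.conf.default              # the tracked template
hg commit -m "Add default settings template"
cp settings.conf.default settings.conf    # users work on an untracked copy
echo settings.conf >> .hgignore           # keep the copy out of hg status

Since the working copy itself is never added, Mercurial won't nag about its changes.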
VCSs aren't backup systems. Consider using a proper backup mechanism.
Having said that, you should be able to do this using hooks. There are many ways you could go about it, but an ACL would be an obvious one, assuming a remote server (see the sketch below).
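A sketch of the ACL idea with Mercurial's bundled acl extension, placed in the remote server's .hg/hgrc (the folder pattern is hypothetical); it rejects any later push that touches those files, so the first commit has to land before the deny rule is enabled:

[extensions]
acl =

[hooks]
pretxnchangegroup.acl = python:hgext.acl.hook

[acl]
sources = serve push

[acl.deny]
my_folder/** = *    # after the initial commit, no one may push changes under my_folder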