Restricting users from pushing change sets to default (Mercurial) - mercurial

I want to restrict certain users from pushing changesets to the default branch of a repository. If it is possible, how would you do it?

The ACL extension should work for you. However, you need to take into account the following considerations:
The extension must be enabled in the server repositories. That is, the hgrc file of each served repository should have ACL settings defined:
[extensions]
acl =
[hooks]
pretxnchangegroup.acl = python:hgext.acl.hook
[acl]
sources = serve
[acl.deny.branches]
default = user1, user2, user3
The users that are denied push access are system users. That is, the username is taken from the credentials provided by the web server in your case; it has nothing to do with the Author: field in the commit metadata.
You can only allow or deny complete changegroups. If a denied user pushes a changegroup in which even a single commit is on the default branch, the whole push will be denied (even if the other commits would be allowed). This is quite likely to happen if your users tend to merge with the default branch very often.
You could also write your own pretxnchangegroup hook, but it would not be much more capable than the ACL extension.
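For illustration, a custom hook of that kind could look roughly like the sketch below (untested; the user names are placeholders, and using getpass to identify the pushing user is only a stand-in that works for local or ssh access - for hgweb you would need to recover the authenticated web user the way the acl extension does):
# Sketch of a custom pretxnchangegroup hook that rejects pushes touching the
# default branch for a hard-coded list of users. Untested; the user names are
# placeholders and getpass.getuser() is only a stand-in for however your
# setup actually identifies the pushing user.
import getpass

DENIED_USERS = set(['user1', 'user2', 'user3'])

def denydefaultpush(ui, repo, node, **kwargs):
    user = getpass.getuser()
    if user not in DENIED_USERS:
        return False  # user is not restricted; accept the changegroup
    # 'node' is the first incoming changeset; walk every new revision
    start = repo[node].rev()
    for rev in xrange(start, len(repo)):
        if repo[rev].branch() == 'default':
            ui.warn('push rejected: %s may not push to the default branch\n' % user)
            return True  # a true (non-zero) return value aborts the transaction
    return False
It would be registered in the served repository's hgrc with something like pretxnchangegroup.denydefault = python:/path/to/denydefault.py:denydefaultpush, but as noted, the stock ACL extension already covers this case.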

The current answer checks at the moment you push (and the check is server side, as far as I can make out from the acl code). What you want is a check on commit to your local repository. For that you should write a 'pretxncommit' hook (note that there are multiple kinds of hooks that act on different events).
Do the following:
According to A successful Git branching model, there should be no direct commits on master, only merges. To enforce this, we can add a hook to Mercurial that checks commits and disallows them if they are made directly on master. To do that, add the following lines to your project's .hg/hgrc:
[hooks]
pretxncommit.nocommittomasterhook = python:%USERPROFILE%\hgnocommittomaster.py:nocommittomaster
Then, in your Windows home directory, create the file 'hgnocommittomaster.py' with the following contents (original example):
from mercurial.node import bin, nullid
from mercurial import util
# Documentation is here: https://www.mercurial-scm.org/wiki/MercurialApi
# Script based on: https://www.mercurial-scm.org/wiki/HookExamples
def nocommittomaster(ui, repo, node, **kwargs):
    n = bin(node)
    start = repo.changelog.rev(n)
    end = len(repo.changelog)
    failed = False
    for rev in xrange(start, end):
        n = repo.changelog.node(rev)
        ctx = repo[n]
        p = ctx.parents()
        if ctx.branch() == 'master' and len(p) == 1:
            if p[0].branch() != 'master':
                # commit that creates the branch, allowed
                continue
            if len(ctx.files()) == 0 and len(ctx.tags()) == 1:  # will not hit?, '.hgtags' always changed?
                continue  # only a tag is added; allowed
            elif len(ctx.files()) == 1 and len(ctx.tags()) == 1:
                if ctx.files()[0] == '.hgtags':
                    continue  # only a tag is added; allowed
            ui.warn(' - changeset rev=%d (%s) is on the master branch and is not a merge!\n' % (rev, ctx))
            failed = True
    if failed:
        ui.warn('* Please strip the offending changeset(s)\n'
                '* and re-do them, if needed, on another branch!\n')
        return True
This post was inspired by: Mercurial pre commit hook and Mercurial Pre-Commit Hook: How to hook to python program in current directory?

Related

Using a secret, private action

I am giving a coding lesson where students can upload answers to our quizzes using personal, private repositories. So here's what the repository structure of my organization looks like:
my_organization/student_1_project
my_organization/student_2_project
my_organization/...
my_organization/student_n_project
I would like to run a private GitHub Action on any push to a student repository. This Action would run partial reviews of the student's work and notify me of the results. Its code would need to be unreachable by students, of course, since it would otherwise provide hints & solutions.
I have three questions:
Can a workflow in e.g. my_organization/student_2_project use a private action my_organization/my_private_action? It seems like yes, thanks to actions/checkout@v2 (see here), but I'm pretty sure that involves playing with keys, tokens or secrets - I'm not so at ease with that, and I currently get an error although the repository does exist:
Error: fatal: repository 'https://github.com/my_organization/my_private_action' not found
Can it prevent the student (owner/admin of my_organization/student_2_project) from seeing the code in my_organization/my_private_action?
With the same constraints, could the private action be hosted in another organization?
Thanks a lot for your help!
This is how I understand the restrictions:
Using an action from a private/internal repository currently isn't supported directly, see this issue on the roadmap. A possible workaround is adding a personal access token with access to the private repo that contains the action and then checking it out like this:
- name: Get private repo with action
  uses: actions/checkout@v2
  with:
    repository: yourorg/privateactionrepo
    ref: master
    token: ${{ secrets.PAT_TOKEN }}
    path: .github/actions
You can then use the action in another step like
uses: ./.github/actions/actionname
The PAT can be a secret on the org level so you don't have to add it to every single student repo.
Since the student's repo has access to the PAT, they can use it to create a workflow that checks out the private repo and does whatever they want with it – upload its contents, print every file etc.
As long as the PAT has the permissions to check out the repo containing the action, the action can live anywhere, including in another organization.
Alternatively, if you want to prevent your students from seeing your action, you could add a workflow to your students' repositories that sends a request to the GitHub API and then have a trigger in your action on the repository_dispatch event.
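As a rough sketch of that alternative (all names here - the grading repository my_organization/grading_repo, the event type student_push and the GH_PAT secret - are placeholders, not something GitHub prescribes), the student-side workflow could send the dispatch with a small script:
# Sketch: fire a repository_dispatch event at a private "grading" repository.
# Assumes a PAT with access to that repository is exposed to the step as GH_PAT;
# the repository name and event type are illustrative.
import json
import os
import urllib.request

token = os.environ["GH_PAT"]
url = "https://api.github.com/repos/my_organization/grading_repo/dispatches"
payload = {
    "event_type": "student_push",
    "client_payload": {"student_repo": os.environ.get("GITHUB_REPOSITORY", "")},
}

request = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "token " + token,
        "Accept": "application/vnd.github.v3+json",
    },
    method="POST",
)
urllib.request.urlopen(request)  # GitHub returns 204 No Content on success
The private repository's workflow would then trigger on repository_dispatch with types: [student_push], so the reviewing code itself never has to be checked out into the student repository.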

boto3 cache session token not working

Either there's something borked in my environment or this functionality is broken. It appears it worked at one point according to the blog I followed:
What I'd like to do is run my script and enter the MFA code, then be able to run it again without entering the MFA code, making use of the cached session token.
The samples I've seen are:
session = boto3.Session(profile_name='w2-cf3')
ec2_client = session.client('ec2',region_name='us-west-2')
I'm then prompted for my mfa:
Enter MFA code:
I enter it and my code runs. At this point my session token should be cached; that's how it works in awscli. However, on the second run, instead of reading in my cached session for this profile, boto3 disregards it and prompts me again for my MFA:
Enter MFA code:
Here's what my ~/.aws/config file looks like:
[profile default]
region = us-west-2
output = json
[profile w2-cf3]
region = us-west-2
source_profile = default
role_arn = arn:aws:iam::<accountid>:role/<role>
mfa_serial = arn:aws:iam::<accountid>:mfa/<user>
Here's what my ~/.aws/credentials file looks like:
[default]
aws_access_key_id=<access key>
aws_secret_access_key=<secret key>
Expected: I expected that the second time I run my script, it would make use of the cached session token, like awscli does. The session token provided by AWS lasts 1 hour.
This is discussed in the GitHub repo for botocore here, and a pull request has also been submitted and is being discussed.
You're correct: it seems this was working back in 2014 but has since been removed. From the discussion on the thread mentioned above, it should be re-implemented soon; follow the pull request thread and make sure to upgrade when it is released.
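Until then, a workaround that is often suggested is to attach botocore's JSON file cache to the assume-role credential provider so boto3 reuses the same cache directory as awscli. A minimal sketch, assuming the profile from the question and that your botocore version still exposes JSONFileCache under botocore.credentials:
# Workaround sketch: point boto3 at the awscli credential cache so the cached
# STS token is reused instead of prompting for MFA on every run.
# Profile name and cache path are taken from the question / awscli defaults.
import os

import boto3
import botocore.credentials
import botocore.session

cli_cache = os.path.join(os.path.expanduser('~'), '.aws', 'cli', 'cache')

botocore_session = botocore.session.Session(profile='w2-cf3')
provider = botocore_session.get_component('credential_provider').get_provider('assume-role')
provider.cache = botocore.credentials.JSONFileCache(cli_cache)

session = boto3.Session(botocore_session=botocore_session)
ec2_client = session.client('ec2', region_name='us-west-2')
With this in place the first run still prompts for the MFA code, but subsequent runs within the token's lifetime should pick up the cached credentials.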

Need to be able to Insert/Delete New Groups in openfire via HTTP or MySQL

I know how to insert a new group via MySQL, and it works, to a degree. The problem is that the database changes are not loaded into memory if you insert the group manually. Sending a HUP signal to the process does work, but it is kludgy and a hack. I desire elegance :)
What I am looking to do, if possible, is to make changes (additions/deletions/changes) to a group via MySQL, and then send an HTTP request to the Openfire server so that it reads the new changes. Or, alternatively, add/delete/modify groups in a way similar to how the User Service works.
If anyone can help I would appreciate it.
It seems to me that if sending a HUP signal works for you, then that's actually quite a simple, elegant and efficient way to get Openfire to read your new group, particularly if you do it with the following command on the Openfire server (and assuming it's running a Linux/Unix OS):
pkill -f -HUP openfire
If you still want to send an HTTP request to prompt Openfire to re-read the groups, the following Python script should do the job. It is targeted at Openfire 3.8.2, and depends on Python's mechanize library, which in Ubuntu is installed with the python-mechanize package. The script logs into the Openfire server, pulls up the Cache Summary page, selects the Group and Group Metadata Cache options, enables the submit button and then submits the form to clear those two caches.
#!/usr/bin/python
import mechanize
import cookielib
# Customize to suit your setup
of_host = 'http://openfire.server:9090'
of_user = 'admin_username'
of_pass = 'admin_password'
# Initialize browser and cookie jar
br = mechanize.Browser()
br.set_cookiejar(cookielib.LWPCookieJar())
# Log into Openfire server
br.open(of_host + '/login.jsp')
br.select_form('loginForm')
br.form['username'] = of_user
br.form['password'] = of_pass
br.submit()
# Select which cache items to clear in the Cache Summary page
# On my server, 13 is Group and 14 is Group Metadata Cache
br.open(of_host + '/system-cache.jsp')
br.select_form('cacheForm')
br.form['cacheID'] = ['13','14']
# Activate the submit button and submit the form
c = br.form.find_control('clear')
c.readonly = False
c.disabled = False
r = br.submit()
# Uncomment the following line if you want to view results
#print r.read()

How to access Hudson job1 artifacts from another job2?

We have a production job and a nightly job for a project in Hudson. The production job needs to pull some artifacts from a specific nightly build number (which is provided as a parameter). Can anyone give us a hint on how to achieve this?
The Copy Artifact plugin seems to be capable of doing this.
Another approach could be to fetch the artifact via
http://server/jobs/job1/[build #]/artifacts/
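If you go that way, a fetch step could look roughly like the sketch below (the URL layout, server name and artifact name are assumptions; check what your Hudson instance actually serves on the job's artifact page):
# Sketch only: download one artifact from another job's build over HTTP.
# Server name, URL layout and artifact name are illustrative.
import urllib2

build_number = '123'  # e.g. the nightly build number passed in as a parameter
url = 'http://server/jobs/job1/%s/artifacts/myapp.war' % build_number

response = urllib2.urlopen(url)
with open('myapp.war', 'wb') as artifact:
    artifact.write(response.read())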
You can use "Build Environment" configuration tools in the job's configuration page. Tick the Configure M2 Extra Build Steps box and add an Execute Shell which grep things from the desired artifact.
We have similar need and use the following system groovy:
import hudson.model.*
def currentBuild = Thread.currentThread().executable;
currentBuild.addAction(new ParametersAction(new StringParameterValue('LAST_BUILD_STATUS', 'FAILURE')));
def buildJob = Hudson.instance.getJob("ArtifactJobName");
def artifacts = buildJob.getLastBuild().getArtifacts();
if (buildJob.getLastBuild().getResult() == Result.SUCCESS && artifacts != null && artifacts.size() > 0) {
  currentBuild.addAction(new ParametersAction(new StringParameterValue('VARIABLE_NAME', artifacts[0].getFileName())));
  currentBuild.addAction(new ParametersAction(new StringParameterValue('LAST_BUILD_STATUS', 'SUCCESS')));
}
This creates a VARIABLE_NAME with the artifact name in it from ArtifactJobName, which we use since they are all stored in a specific folder. I am not sure what will happen if you have multiple artifacts, but it seems you could get them from the artifacts array.
You could use getLastSuccessfulBuild to avoid problems when another ArtifactJobName build is running at that moment and you would otherwise get an array containing null.

Useful Mercurial Hooks [closed]

What are some useful Mercurial hooks that you have come across?
A few example hooks are located in the Mercurial book:
acl
bugzilla
notify
check for whitespace
I personally don't find these very useful. I would like to see:
Reject Multiple Heads
Reject Changegroups with merges (useful if you want users to always rebase)
Reject Changegroups with merges, unless commit message has special string
Automatic links to Fogbugz or TFS (similar to bugzilla hook)
Blacklist, would deny pushes that had certain changeset ids. (Useful if you use MQ to pull changes in from other clones)
Please stick to hooks that come in both bat and bash versions, or are written in Python. That way they can be used by both *nix and Windows users.
My favorite hook for formal repositories is the one that refuses multiple heads. It's great when you've got a continuous integration system that needs a post-merge tip to build automatically.
A few examples are here: MercurialWiki: TipsAndTricks - prevent a push that would create multiple heads
I use this version from Netbeans:
# This software may be used and distributed according to the terms
# of the GNU General Public License, incorporated herein by reference.
#
# To forbid pushes which create two or more heads
#
# [hooks]
# pretxnchangegroup.forbid_2heads = python:forbid2_head.forbid_2heads
from mercurial import ui
from mercurial.i18n import gettext as _
def forbid_2heads(ui, repo, hooktype, node, **kwargs):
    if len(repo.heads()) > 1:
        ui.warn(_('Trying to push more than one head, try run "hg merge" before it.\n'))
        return True
I've just created a small pretxncommit hook that checks for tabs and trailing whitespace and reports it rather nicely to the user. It also provides a command for cleaning up those files (or all files).
See the CheckFiles extension.
Another good hook is this one. It allows multiple heads, but only if they are in different branches.
Single head per branch
def hook(ui, repo, **kwargs):
    for b in repo.branchtags():
        if len(repo.branchheads(b)) > 1:
            print "Two heads detected on branch '%s'" % b
            print "Only one head per branch is allowed!"
            return 1
    return 0
I like the Single Head Per Branch hook mentioned above; however, branchtags() should be replaced with branchmap() since branchtags() is no longer available. (I couldn't comment on that one so I stuck it down here).
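An updated version along these lines should work with branchmap(), which maps branch names to their head nodes (a sketch only; I have not tested it against every Mercurial version):
# Single-head-per-branch hook updated for branchmap(); untested sketch.
# branchmap() yields branch name -> list of head nodes.
def hook(ui, repo, **kwargs):
    for branch, heads in repo.branchmap().items():
        if len(heads) > 1:
            ui.warn("Two heads detected on branch '%s'\n" % branch)
            ui.warn("Only one head per branch is allowed!\n")
            return 1  # non-zero return rejects the changegroup
    return 0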
I also like the hook from https://bobhood.wordpress.com/2012/12/14/branch-freezing-with-mercurial/ for Frozen Branches. You add a section in your hgrc like this:
[frozen_branches]
freeze_list = BranchFoo, BranchBar
and add the hook:
import sys

def frozenbranches(ui, repo, **kwargs):
    hooktype = kwargs['hooktype']
    if hooktype != 'pretxnchangegroup':
        ui.warn('frozenbranches: Only "pretxnchangegroup" hooks are supported by this hook\n')
        return True
    frozen_list = ui.configlist('frozen_branches', 'freeze_list')
    if not frozen_list:
        # no frozen branches listed; allow all changes
        return False
    try:
        ctx = repo[kwargs['node']]
        start = ctx.rev()
        end = len(repo)
        for rev in xrange(start, end):
            node = repo[rev]
            branch = node.branch()
            if branch in frozen_list:
                ui.warn("abort: %d:%s includes modifications to frozen branch: '%s'!\n" % (rev, node.hex()[:12], branch))
                # reject the entire changegroup
                return True
    except:
        e = sys.exc_info()[0]
        ui.warn("\nERROR !!!\n%s" % e)
        return True
    # allow the changegroup
    return False
If anyone attempts to update the frozen branches (e.g., BranchFoo, BranchBar) the transaction will be aborted.
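The hook then just needs to be registered as a pretxnchangegroup hook in the repository's hgrc; the file path below is only an example:
[hooks]
pretxnchangegroup.frozenbranches = python:/path/to/frozenbranches.py:frozenbranches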