Trigger a build in TeamCity on all branches, but on master only for a specific user - configuration

So we have this TeamCity setup that automatically triggers a build every single time something is committed to GitHub. The problem is that I need to build on all branches for all users, plus on master for ONE SPECIFIC user only.
I took a look at TeamCity's Triggers; it has a Trigger Rule option.
Unfortunately, it doesn't have a "branch" filter, so I can't accept builds on master only from my particular user.
So how do I configure this? Thanks.
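(For reference, TeamCity's trigger rules take roughly this shape; the rule below is illustrative, not from my setup, and the exact syntax varies by version:

    +|-:[user=VCS_username;][root=VCS_root_ID;][comment=regexp]:Ant_like_path
    -:user=ci_bot:**

Newer TeamCity releases also add a separate branch filter on the VCS trigger, with rules like +:* and -:<default>, which covers this kind of case.)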

Related

A practical way to provide code updates via Mercurial without sharing main BitBucket account

I suspect this might be really obvious but I can't find a straightforward solution in the documentation or forums:
I have written some code that is held in a Mercurial repository on BitBucket.
I use this code to build Linux virtual servers. When I build a server, I clone the repo onto the server, run my build script, and then delete the clone. The result is a configured server with several files from my repo located in various folders on the server.
Now, I'm looking for a mechanism where I can roll out bug fixes and improvements to my users' servers after I have handed them over. At that time, I won't have SSH access to the servers and I cannot expect my end users to do anything more complicated than kick off a cron job or launch a script.
To achieve this, I'm thinking of setting up a BitBucket account for my users with read-only access to my repo.
I have no problem writing a script to clone my repo, via this read-only account, and apply the updates, but I don't want to include all my files. In particular, I want to exclude my build script as it is commercially sensitive. I know I could remove it from my repo, but then my build wouldn't work.
Reading around, it seems I may need to create a branch or a fork of my repo (which?). Or maybe a sub-repo? Then, I could remove the sensitive files from that branch/fork/sub-repo and allow my users to clone it via a script.
That's OK, but I need a way to update the branched/forked/sub repo as I make changes to the main one. Can this be automatic? In other words, can it be set up to always reflect the updates made in the main repo? Excluding the sensitive files of course.
I'm not sure I'd want updates to be automatic though, so I'd also like to know how to transfer updates from the main to the branch/fork/sub manually. A merge? If I do a merge, how do I make sure my sensitive files don't get copied across?
To sum up, I have a main repo which contains some sensitive files and I need a way to roll out updates of all but those sensitive files to my read-only users.
Sorry if this is hugely obvious. I'm sure it's a case of not seeing the wood for the trees and being overwhelmed by the possibilities. Thanks.
I don't think that you need to solve this in Mercurial at all.
What you actually need is Continuous Integration / a build server.
The simplest solution goes like this:
Set up a build server with something like TeamCity or Jenkins that is always online and monitors your Bitbucket repository for changes.
You can set it up so that when there's a change in your repository, the build server runs your build script and copies the output to some FTP server, or download site, or whatever.
Now you have a single location that always contains the most recent code changes, but without the sensitive files like the build script.
Then, you can set up a script or cron job that your end users can run to get the newest version of the code from that central location.
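If the build server has Mercurial available, one simple publish step is to export the repository contents minus the sensitive files. A minimal sketch in Python, assuming the sensitive script is named build.sh and that the archive is uploaded separately afterwards:

    # Hypothetical CI publish step: export the repo minus the commercially
    # sensitive build script; hg infers the tar.gz format from the extension.
    import subprocess

    subprocess.run(
        ["hg", "archive", "--exclude", "build.sh", "release.tar.gz"],
        check=True,  # fail the build step if the export fails
    )
    # ...then upload release.tar.gz to the FTP server / download site.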
You could get by with two branches: one that the users clone (main) and another for your main development (dev). The tricky part is merging the new changes from dev to main.
You can solve this by excluding files in the merge process; see "Excluding a file while merging in Mercurial".
By setting the [merge-patterns] section in your .hgrc, you can specify which files are not affected by the merge:
[merge-patterns]
build.sh = internal:local
For more info, read hg help merge-tools, which describes internal:local as follows:
"Uses the local version of files as the merged version."
Entire Mercurial trees always get moved around together, so you can't clone or pull just part of a repository (along the file-tree axis). You could keep a branch that has only some of the files, and another branch that has everything, making it easy to merge the partial (in terms of files) branch into the full branch (but merging the other way wouldn't be particularly easy).
I'm thinking maybe subrepositories work for your particular use case.

Simultaneous instances of the same Hudson/Jenkins job

I would like a way for individual users to send a repo path to a Hudson server and have the server start a build of that repo. I don't want to leave behind a trail of dynamically created job configurations. I'd like to start multiple simultaneous instances of the same job. Obviously this requires that the workspaces be different for the different instances. I believe this isn't possible using any of the current extensions. I'm open to different approaches to what I'm trying to accomplish.
I just want the hudson server to be able to receive requests for builds from outside sources, and start them as long as there are free executors. I want the build configuration to be the same for all the builds except the location of the repo. I don't want to have dozens of identical jobs sitting around with automatically generated names.
Is there anyone out there using Hudson or Jenkins for something like this? How do you set it up? I guess with enough scripting I could dynamically create the necessary job configuration through the CLI API from a script, and then destroy it when it's done. But I want to keep the artifacts around, so destroying the job when it's done running is an issue. I really don't want to write and maintain my own extension.
This should be pretty straightforward to do with Jenkins without requiring any plugins, though it depends on the type of SCM that you use.
It's worth upgrading from Hudson in any case; there have certainly been improvements to the features required to support your use case in the many releases since Hudson became Jenkins.
You want to pass the repo path as a parameter to your build, so you should select the "This build is parameterized" option in the build config. There you can add a string parameter called REPO_PATH or similar.
Next, where you specify where code is checked-out from, replace the path with ${REPO_PATH}.
If you are checking out the code — or otherwise need access to the repo path — from a script, the variable will automatically be added to your environment, so you can refer to ${REPO_PATH} from your shell script or Ant file.
At this point, when pressing Build Now, you will be prompted to enter a repo path before the build starts. As described in the Jenkins documentation on parameterized builds, you can also call the buildWithParameters URL to start a build directly with the desired parameter, e.g. http://server/job/myjob/buildWithParameters?REPO_PATH=foo
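For scripted use, any HTTP client will do; here is a minimal sketch in Python, with a placeholder server, job name, and repo path, and no authentication (add whatever auth your instance requires):

    # Hypothetical remote trigger for a parameterized build.
    import urllib.parse
    import urllib.request

    params = urllib.parse.urlencode({"REPO_PATH": "ssh://hg@example.com/team/repo"})
    url = "http://server/job/myjob/buildWithParameters?" + params

    # An empty POST body is enough to queue the build.
    urllib.request.urlopen(url, data=b"")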
Finally, if you want builds to execute concurrently, Jenkins can manage this for you by creating temporary workspaces for concurrent builds. Just enable the "Execute concurrent builds if necessary" option in your job config.
The artifacts will be available, the same as for any other Jenkins build. You probably want to manage how many recent artifacts are kept, though; this can be done by checking "Discard Old Builds" and then, under Advanced…, entering a value for "Max # of builds to keep with artifacts".

Best Git/MySQL versioning system?

I've started using git with a small dev team of people who come and go on different projects; it was working well enough until we started working with WordPress. Because WordPress stores a lot of configuration in MySQL, we decided we needed to include that in our commits.
This worked well enough (using mysqldump on pre-commit, and loading the dumped file into MySQL on post-checkout) until two people made modifications to plugins and committed; then everything broke again.
I've looked at every solution I could find, and thought Liquibase was the closest option, but it wouldn't work for us: it requires you to specify the schema in XML, which isn't really possible because we are using plugins that insert data/tables/modifications into the DB automatically.
I plan on putting a bounty on it in a few days to see if anyone has the "goldilocks solution".
The question:
Is there a way to version control a MySQL database semantically (not using diffs; EDIT: meaning that it doesn't just take two versions and diff them, but instead records the actual queries run in sequence to get from the old version to the current one), without requiring a developer-written schema file, and in a form that can be merged using git?
I know I can't be the only one with such a problem, but hopefully there is somebody with a solution?
The proper way to handle db versioning is through a version script which is additive-only. Due to this nature, it will conflict all the time as each branch will be appending to the same file. You want that. It makes the developers realize how each others' changes affect the persistence of their data. Rerere will ensure you only resolve a conflict once though. (see my blog post that touches on rerere sharing: http://dymitruk.com/blog/2012/02/05/branch-per-feature/)
Wrap each change in an if-then clause that checks the version number, changes the schema (or modifies lookup data, or whatever else), then increments the version number. You just keep doing this for each change.
In pseudocode, here is an example:
if version table doesn't exist
    create version table with 1 column called "version"
    insert a row with the value 0 for version
end if

-- now someone adds a feature that adds a members table
if version in version table is 0
    create table members with columns id, userid, passwordhash, salt
        with non-clustered index on userid and pk on id
    update version to 1
end if

-- now someone adds a customers table
if version in version table is 1
    create table customers with columns id, fullname, address, phone
        with non-clustered index on fullname and phone and pk on id
    update version to 2
end if

-- and so on
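To make the pattern concrete, here is a minimal runnable sketch in Python; sqlite3 stands in for MySQL purely to keep the example self-contained, and all table and column names are illustrative:

    # Additive version script: each migration runs exactly once, guarded by
    # the version number stored in the database itself.
    import sqlite3

    MIGRATIONS = [
        # version 0 -> 1: someone adds the members table
        "CREATE TABLE members (id INTEGER PRIMARY KEY, userid TEXT, passwordhash TEXT, salt TEXT)",
        # version 1 -> 2: someone adds the customers table
        "CREATE TABLE customers (id INTEGER PRIMARY KEY, fullname TEXT, address TEXT, phone TEXT)",
    ]

    def migrate(conn):
        conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER NOT NULL)")
        if conn.execute("SELECT COUNT(*) FROM schema_version").fetchone()[0] == 0:
            conn.execute("INSERT INTO schema_version VALUES (0)")
        version = conn.execute("SELECT version FROM schema_version").fetchone()[0]
        for v, ddl in enumerate(MIGRATIONS):
            if version == v:  # apply only the next pending change, in order
                conn.execute(ddl)
                conn.execute("UPDATE schema_version SET version = ?", (v + 1,))
                version += 1
        conn.commit()

    migrate(sqlite3.connect("app.db"))  # always rolls the db up to the latest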
The benefit of this is that you can automatically run this script after a successful build of your test project if you're using a static language; it will always roll you up to the latest version. All acceptance tests should pass if you just updated to the latest version.
The question is, how do you work on 2 different branches at the same time? What I have done in the past is just spin up a new instance whose db name includes the branch name. Your config file is cleaned (see git smudge/clean) to set the connection string to point to the new or existing instance for that branch.
If you're using an ORM, you can automate this script generation, since, for example, NHibernate can export the mapped changes that are not yet reflected in the db schema as a SQL script. So if you added a mapping for the customer class, NHibernate can generate the table-creation script. You just script the addition of the if-then wrapper and you're automated on the feature branch.
The integration branch and the release-candidate branch have a special requirement: the db must be wiped and recreated if you reset those branches. That's easy to detect in a hook by checking whether the new revision contains the old revision (git branch --contains). If not, wipe and regenerate.
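A sketch of that check, using git merge-base --is-ancestor (which answers the same question as git branch --contains); the revision arguments here are illustrative:

    # Hypothetical hook helper: detect a reset (non-fast-forward) update.
    import subprocess
    import sys

    def is_fast_forward(old_rev, new_rev):
        # Exit code 0 means old_rev is an ancestor of new_rev, i.e. history
        # was extended rather than rewritten.
        result = subprocess.run(["git", "merge-base", "--is-ancestor", old_rev, new_rev])
        return result.returncode == 0

    old_rev, new_rev = sys.argv[1], sys.argv[2]
    if not is_fast_forward(old_rev, new_rev):
        print("branch was reset: wipe and regenerate the database")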
I hope that's clear. This has worked well in the past, and it requires that each developer can create and destroy their own db instances on their machine, although it could work on a central server with an additional instance-naming convention.

Possible to use a different set of hooks for a user or group in Mercurial?

I am not sure whether this is currently possible, but can I specify a separate set of hooks for a user or group (groups in the sense of the ACL extension)?
I know you can specify the hooks on each user's machine individually, but I would like to also place the hooks on the central repo (so that those hooks are run when they push).
For example, say I have this hook for all of group A:
[hooks]
pretxngroupchange.A=python:Group-A-hook.py:hook
and this one for group B:
[hooks]
pretxngroupchange.B=python:Group-B-hook.py:hook
If someone from group A pushes, I don't want the hooks for group B to be triggered.
Is this possible? Even if I can't do it by groups (though I'd think Mercurial could pick up OS-level groups), is it possible for Hg to run a hook per user?
You could use a single hook script that looks up users by name and performs a different action based on the user. To avoid having to update the script for new users, you could keep the user list revisioned in an Hg repo and read its latest version from inside the hook.
I'm not sure that this qualifies as a "good idea", but it might work if you can't find another solution.
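A minimal sketch of such a dispatch hook, assuming an in-process Python hook on the central repo; the user-to-group map is hard-coded here for brevity, and the way the pushing user is exposed depends on how the repo is served:

    # Wire this up in the central repo's .hg/hgrc (path is illustrative):
    #   [hooks]
    #   pretxnchangegroup.dispatch = python:/path/to/dispatch.py:hook
    import os

    GROUPS = {"alice": "A", "bob": "B"}  # illustrative; load from a revisioned file in practice

    def hook(ui, repo, **kwargs):
        # Pushes over HTTP usually expose the authenticated user as REMOTE_USER;
        # over SSH, fall back to the login name.
        user = os.environ.get("REMOTE_USER") or os.environ.get("USER", "")
        group = GROUPS.get(user)
        if group == "A":
            ui.status(b"running group A checks\n")
            return 0  # a falsy return value allows the push
        if group == "B":
            ui.status(b"running group B checks\n")
            return 0
        return 0  # unknown users: allow (return 1 here to reject instead)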

Maintaining multiple workspaces for each build in Hudson

Is it possible to maintain multiple workspaces for each build in Hudson? Suppose I want to keep the last 5 builds; is it possible to have the five corresponding workspace folders as well? Currently, whenever a new build is scheduled, it overwrites the workspace.
Right now, the idea is to reuse the workspace.
The workspace is based on the SCM used (an SVN workspace, a Git workspace, a ClearCase snapshot or dynamic view, ...), and in none of those SCM plugins do I see an option to create a new workspace, or to save (copy) an old one, for each run of the job.
One (poor) solution would be to:
copy the job four times, resulting in 5 jobs, each modified to specify a different workspace (based on the same SCM configuration, so all 5 workspaces check out the same versions),
and have them scheduled to run one after the other.
As far as I know, there's no built in way to do it.
You do have a couple of options:
As one of your build steps, you could tar (or zip) up the workspace and record it as a build artifact.
Generate a tag with each successful build (e.g. with the Subversion Tagging Plugin)
Although not ideal, you could use the Backup Plugin.
The backup plugin allows you to back up the workspace. So, you could run the plugin after every build and it would archive the workspace.
Again, not ideal, but if this is a must-have requirement, and if it works with the way you're using Hudson, then it could work.
Depending on what you want to do, you have a few options.
If you need the last five workspaces for another job, you can use the Clone Workspace SCM plugin. Since I have never used it, I don't know whether you can access the archived workspace manually (through the UI) later.
Another option worth trying is to use the archive option and archive the whole workspace (I think the filter setting for the archive option would be **/*). You can then download a zipped version of the workspace from every job run. The beauty of this solution is that the artifacts will be cleaned up when you delete the particular job run (manually or through the job setting to delete old builds).
Of course you can also do it manually and run a copy as the last step of your build. You will need five directories (you can name them 1 to 5). First delete the oldest one and rename the others (4->5, 3->4, ...). The last step is to copy the workspace into the directory holding the newest copy (in our example, 1); a sketch of this rotation follows below. This will require you to maintain your own archive job, which is why I prefer one of the above-mentioned options.
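For completeness, a minimal sketch of that rotation as a post-build step; the archive location is illustrative, and WORKSPACE is the environment variable Hudson/Jenkins sets for each build:

    # Keep the five most recent workspace copies in numbered directories.
    import os
    import shutil

    ARCHIVE = "/var/hudson/workspace-archive"  # illustrative location
    KEEP = 5

    def rotate(workspace):
        oldest = os.path.join(ARCHIVE, str(KEEP))
        if os.path.isdir(oldest):
            shutil.rmtree(oldest)  # drop the oldest copy
        for n in range(KEEP - 1, 0, -1):  # shift 4 -> 5, 3 -> 4, ...
            src = os.path.join(ARCHIVE, str(n))
            if os.path.isdir(src):
                os.rename(src, os.path.join(ARCHIVE, str(n + 1)))
        shutil.copytree(workspace, os.path.join(ARCHIVE, "1"))  # the newest copy

    rotate(os.environ["WORKSPACE"])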