Stop TeamCity from Auto Checkout when adding a repo - mercurial

I'm trying to configure TeamCity for use in our continuous integration.
Our project has approximately 35 Mercurial repos spread across 4 cities. All in all, the code in the repos is approximately 30 GB in size.
Our problem is that if we add/remove a repo from the VCS roots of a build configuration, the configuration automatically does a complete clean re-checkout of all repos. This adds an extra 3 hours to our build cycle.
Is there any way to turn this off?
We have TeamCity versions 7.0 and 7.1.
UPDATE:
Additional details for one of the build configurations:
Name: BE - Full Build
Description: none
Build number format: %AssemblyBuildNumber%, next build number: #%AssemblyBuildNumber%
Artifact paths:
none specified
Build options:
hanging builds detection: ON
status widget: OFF
maximum number of simultaneously running builds: unlimited
Version Control Settings:
VCS checkout mode: Automatically on server
Checkout directory: default
Clean all files before build: OFF
VCS labeling: disabled
Attached VCS roots:
< All the repos with no rules and no labels >
Show changes from snapshot dependencies: OFF

Perhaps an agent-side checkout + local mirror could help you. Take a look at the internal properties section here: http://confluence.jetbrains.net/display/TCD7/Mercurial
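If you go the agent-side route, the change is usually twofold: switch the VCS checkout mode to "Automatically on agent" and enable Mercurial mirrors through a TeamCity internal property. A minimal sketch is below; the exact property name is an assumption on my part (recalled from that Mercurial page), so verify it against the documentation for your TeamCity version before relying on it.

# <TeamCity Data Directory>/config/internal.properties on the server
# Assumed property name - check the Mercurial page linked above for 7.0/7.1
teamcity.hg.use.local.mirrors=true

With a mirror in place the agent keeps a local clone of each repo and only pulls incremental changes, which should avoid re-downloading the full 30 GB whenever the checkout directory has to be rebuilt.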

Related

How to install NPM dependencies of a GitHub Action?

I am setting up GitHub Actions and struggling to see how to handle the dependencies of a particular action.
Below is a simplified view of my repository setup, with the action and workflow YAML files as I currently understand them.
my-org/my-tool@master: a specific tool in my organisation that I want to enable as a GitHub Action
    package.json
    tool.js: the tool, depending on npm packages described in package.json
    action.yml:
        name: my-tool
        inputs:
          file:
            description: file to be processed by my-tool
            required: true
        runs:
          using: node16
          main: tool.js
my-org/my-code@branch: a code repository where I want to run the tool as an action
    my-code.js: some code that needs to be processed by the tool
    .github/workflows/use-tool.yml:
        # ...
        jobs:
          tool:
            steps:
              - uses: my-org/my-tool@master
                with:
                  file: my-code.js
        # ...
If my-tool has no dependencies, this setup works well enough. But my-tool needs to have its dependencies installed to function properly, i.e., a single npm install in its root directory. How do I specify this dependency in a GitHub Actions setup?
I have multiple options, none of which are truly satisfying to me:
1. I can define a workflow that checks out my-tool, installs the dependencies, and then runs it. However, I don't want that logic to live in the workflow YAML of the my-code repository: I have multiple similar repositories, all of which require my-tool, so it would be duplicated. I don't see how to do this in the action.yml file itself. It seems I have to choose between a JavaScript action to run node tools, or a composite action to run shell tools (a sketch of that composite approach follows this list).
2. I can bundle the node_modules directory in the my-tool repository; this bloats my repos unnecessarily.
3. I can bundle an ncc distribution of my-tool in its repo; same problem.
4. I can define and build a container image of my-tool with its dependencies installed, and use a container action to run the tool on my-code. This seems like overhead to me, and I have no idea how this container would access the files from the my-org/my-code repo (which would be its core purpose).
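For reference, here is a rough sketch of what I imagine folding option 1 into the action itself might look like, as a composite action instead of a node16 JavaScript action. This is an assumption on my part rather than a documented pattern: it presumes tool.js can read its file input from the INPUT_FILE environment variable (the variable @actions/core's getInput reads) and that a plain npm install is enough to set up its dependencies.

# action.yml in my-org/my-tool - hypothetical composite variant that installs
# its own npm dependencies before running tool.js
name: my-tool
inputs:
  file:
    description: file to be processed by my-tool
    required: true
runs:
  using: composite
  steps:
    - name: Install my-tool dependencies
      run: npm install
      shell: bash
      working-directory: ${{ github.action_path }}
    - name: Run my-tool
      run: node "${{ github.action_path }}/tool.js"
      shell: bash
      env:
        # composite actions do not expose inputs as INPUT_* automatically,
        # so pass the input through explicitly
        INPUT_FILE: ${{ inputs.file }}

The calling workflow in my-org/my-code would stay exactly as written above; the dependency installation would move out of every consumer repository and into the action.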
Options 2 and 3 are the ones described in the example published by GitHub. I would be happy with using an artifact (either the node_modules dir or an ncc distribution) as the basis for an action, but there seems to be no way to do this.
My specific use case concerns npm dependencies, but there seems to be a generic use case for an "action with dependencies", and it is unclear to me from the documentation how this is supported. I can't be the only dev in the world working with a toolset that has dependencies of its own. Maybe I am looking at this setup in the wrong way. Is there a best practice for actions with dependencies? Or is the best practice to only have actions with zero dependencies?
I look forward to learning from any partial answer or suggestion.

How to handle GitHub Actions deleting its cache after 7 days

I have a GitHub Action that runs tests for my Python/Django project. It caches the virtual environment that Pipenv creates. Here's the workflow with nearly everything but the relevant steps commented out/removed:
jobs:
  build:
    runs-on: ubuntu-latest
    services:
      # postgres:
    steps:
      # - uses: actions/checkout@v2
      # - name: Set up Python
      # - name: Install pipenv and coveralls
      - name: Cache pipenv virtualenv
        uses: actions/cache@v2
        id: pipenv-cache
        with:
          path: ~/.pipenv
          key: ${{ runner.os }}-pipenv-v4-${{ hashFiles('**/Pipfile.lock') }}
          restore-keys: |
            ${{ runner.os }}-pipenv-v4-
      - name: Install dependencies
        env:
          WORKON_HOME: ~/.pipenv/virtualenvs
          PIPENV_CACHE_DIR: ~/.pipenv/pipcache
        if: steps.pipenv-cache.outputs.cache-hit != 'true'
        run: pipenv install --dev
      # Run tests etc.
This usually works fine, but because caches are removed after 7 days, if the workflow runs less frequently than that, it can't find the cache and the Install dependencies step fails with:
FileNotFoundError: [Errno 2] No such file or directory: '/home/runner/.pipenv/virtualenvs/my-project-CfczyyRI/bin/pip'
I then bump the cache key's version number (v4 above) and the action runs OK.
I thought the if: steps.pipenv-cache.outputs.cache-hit != 'true' would fix this but it doesn't. What am I missing?
First alternative: use a separate workflow with a schedule trigger event, which lets you run workflows on a recurring schedule.
That way, you force a refresh of those dependencies in the workflow cache.
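A minimal sketch of that first alternative, reusing the cache path and key from the workflow above (the workflow name and cron expression here are illustrative, not prescribed):

# .github/workflows/keep-cache-warm.yml
name: Keep pipenv cache warm
on:
  schedule:
    - cron: '0 3 * * 1,4'   # twice a week, well inside the 7-day eviction window
jobs:
  warm-cache:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: '3.x'
      - name: Install pipenv
        run: pip install pipenv
      - name: Restore pipenv cache (counts as a recent access)
        uses: actions/cache@v2
        id: pipenv-cache
        with:
          path: ~/.pipenv
          key: ${{ runner.os }}-pipenv-v4-${{ hashFiles('**/Pipfile.lock') }}
          restore-keys: |
            ${{ runner.os }}-pipenv-v4-
      - name: Rebuild the virtualenv if the cache was already evicted
        if: steps.pipenv-cache.outputs.cache-hit != 'true'
        env:
          WORKON_HOME: ~/.pipenv/virtualenvs
          PIPENV_CACHE_DIR: ~/.pipenv/pipcache
        run: pipenv install --dev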
Second alternative: use github.rest.actions.getActionsCacheList from actions/github-script (as seen here), again in a separate workflow, just to read said cache and check whether it still disappears after 7 days.
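A sketch of that second alternative, again as a separate scheduled workflow; it only lists the repository's cache entries with their last-accessed timestamps, which is enough to see whether an entry really survives past the 7-day window (workflow name and schedule are illustrative):

# .github/workflows/inspect-actions-cache.yml
name: Inspect Actions cache
on:
  schedule:
    - cron: '0 4 * * *'   # daily
jobs:
  list-caches:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/github-script@v6
        with:
          script: |
            // list cache entries for this repository via the Actions Cache API
            const res = await github.rest.actions.getActionsCacheList({
              owner: context.repo.owner,
              repo: context.repo.repo,
            });
            for (const cache of res.data.actions_caches) {
              core.info(`${cache.key}  last_accessed_at=${cache.last_accessed_at}`);
            }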
Third alternative: check if reading the cache through the new Web UI is enough to force a refresh.
On that third point (Oct. 2022):
Manage caches in your Actions workflows from Web Interface
Caching dependencies and other commonly reused files enables developers to speed up their GitHub Actions workflows and make them more efficient.
We have now enabled Cache Management from the web interface to enable developers to get more transparency and control over their cache usage within their GitHub repositories.
Actions users who use actions/cache can now:
View a list of all cache entries for a repository.
Filter and sort the list of caches using specific metadata such as cache size, creation time, or last accessed time.
Delete a corrupt or a stale cache entry.
Monitor aggregate cache usage for repositories and organizations.
In addition to the Cache Management UX that we have now enabled, you could also use our Cache APIs or install the GitHub CLI extension for Actions cache to manage your caches from your terminal.
Learn more about dependency caching to speed up your Actions workflows.

On TeamCity, Mercurial repo is checked out without execute bit set

In my Mercurial repository, I have some build scripts. However, when TeamCity checks out the repo, it doesn't set the execute bit on them, even though it's set in the repository. The build then fails, as it can't run the scripts, as you would expect. How do I make TeamCity respect the execute bit?
I am running TeamCity 9.0 on Ubuntu Server 14.04.
Changing the VCS checkout mode to Agent rather than Server fixed this problem for me.
To change the VCS checkout mode:
In the build configuration settings, go to Version Control Settings
Click on Show advanced options
Change VCS checkout mode from Automatically on server to Automatically on agent

How do I choose an artifact from Nexus in a Hudson / Jenkins job?

I have a job in Hudson server A which builds an artifact and deploys it to Nexus. I have another job in a completely separate Hudson server B which needs to download the artifact and deploy it. This job is normally run manually, and the person running it needs to indicate which version of the artifact to deploy - they may not always want to deploy the latest version (e.g. to roll back to a previous known good version).
Currently, I achieve this by using a parameterized build and requiring the user to pass in the artifact version number; the job then uses the Execute shell build step to run wget on a URL constructed from the parameter. This is error-prone.
Ideally I'd like a plugin that lets the user browse the artifact versions in the Nexus repository and pick and choose the one to deploy, but I'm open to other suggestions. A plugin that also handles the download would be nice, but I can live without it as long as I can still get a string that I can use in shell commands.
I've looked through the available Hudson & Jenkins plugins for Maven-style artifact repositories, but they all seem more concerned with pushing artifacts into repos than with getting them back down.
I'm using Hudson's "Copy Artifact" in other jobs, to get artifacts from other Hudson jobs on the same server, but this doesn't work across different Hudson servers, which is why I've turned to Nexus (which we're already using anyway).
Does anyone have any suggestions?
I recommend using Rundeck to execute your deployments.
There is a Rundeck plugin for Nexus that enables Rundeck to display a pull-down menu of the available versions in Nexus.
There is a Rundeck plugin for Jenkins that can be used to invoke deployments via Rundeck and kick off post-deployment jobs (like integration testing) in Jenkins.

Checkout previous successful version (revision) in TeamCity

Our TeamCity server (6.5) is configured to check out sources from SVN. For some build process cases I need to check out the previous successfully built version (revision). Can TeamCity do this? And if it can, how do I configure the checkout?
It sounds like you're looking for TeamCity Snapshot Dependencies, with the "Only use successful builds from suitable ones" option.
You'd end up with two build configurations:
1. Performs initial builds on commit to SVN.
2. Has a snapshot dependency on #1, so when this build runs (either automatically via a trigger, or manually), it grabs the same sources as the last successful build of #1.
Both of the build configurations would use the same VCS root.