How to configure gitlab-runner with a Conan remote? - gitlab-ci-runner

I'm trying to set up Conan for a gitlab-runner.
When I configure Conan for myself, I can use "conan remote add...", "conan user..." and the results are saved in my home directory.
However, I can't log in as gitlab-runner, so I don't know how to make those settings persist. I can define CONAN_USER_HOME in .gitlab-ci.yml to point to a directory, but it's not clear that gitlab-runner will have the permissions to read anything I add to that directory.
Is this typically done by adding those conan commands to .gitlab-ci.yml so they are invoked on every run? That feels like a waste of resources.

Conan provides some environment variables which help you log in according to your remote name: CONAN_LOGIN_USERNAME_ and CONAN_LOGIN_PASSWORD_.
However, that does not solve your problem completely: you still need to add your remote address. For that, you can use GitLab env vars to keep it dynamic:
image: conanio/gcc8:latest

run:
  script:
    - conan remote add upload_repo ${CONAN_REMOTE}
    - conan create . demo/stable
    - conan upload foo/0.1.0@demo/stable --all -r upload_repo
Here I just add my Conan repository via the env var ${CONAN_REMOTE}, which is configured through my GitLab CI variables. I should also have CONAN_LOGIN_USERNAME_upload_repo and CONAN_LOGIN_PASSWORD_upload_repo defined; otherwise I would need an extra step: conan user -r upload_repo -p <password> <username>
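If the env-var login is not picked up, that extra step is easy to script; a minimal sketch (CONAN_USER and CONAN_PASSWORD are variable names I'm making up here, defined as masked CI/CD variables in the GitLab project settings):
# fallback: log in explicitly instead of relying on CONAN_LOGIN_USERNAME_* / CONAN_LOGIN_PASSWORD_*
conan remote add upload_repo ${CONAN_REMOTE}
conan user -r upload_repo -p "${CONAN_PASSWORD}" "${CONAN_USER}"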
It works for a simple build, but I would say it is limited and doesn't scale well when you need to build different configurations.
IMO you should try Conan Package Tools, which is an extension intended for use on CI. You can generate a template for GitLab by running:
conan new foo/0.1 -cis -ciglg
It will generate the files build.py and .gitlab-ci.yml.
Also, you can take a look at this example using GitLab.

As an alternative approach, Conan now seems better integrated with GitLab through its package registry.
And with GitLab 14.6 (December 2021), you even can:
Publish Conan packages with only name and version
You use the GitLab Conan repository to publish and share your C/C++ packages.
When creating a Conan package, there are four fields to consider: name, version, user, and channel.
These fields uniquely identify a package. The user and channel fields are optional in Conan 2.0, but GitLab required you to continue using them.
Customizing your naming conventions to match the requirements in GitLab instead of the standards set by Conan is inefficient and impractical.
We’ve updated the GitLab Conan repository to align with Conan.
Now you can publish and download your Conan packages with or without the user and channel fields.
This improvement helps you to be more efficient and makes it easier to find and validate packages in the user interface. This change is the first step in a broader set of planned improvements to the Conan repository to help move the feature from Beta to GA.
See Documentation and Issue.
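For reference, a minimal sketch of wiring Conan 1.x up to the GitLab package registry at the project level (the host gitlab.example.com, the project ID placeholder and the token type are assumptions on my part, not something from the release note above):
# add the project-level Conan remote of your GitLab project
conan remote add gitlab https://gitlab.example.com/api/v4/projects/<project_id>/packages/conan
# authenticate with a personal access token (or deploy token) instead of a password
conan user <gitlab_username> -r gitlab -p <personal_access_token>
# publish to the registry as usual
conan upload foo/0.1.0@mycompany/stable --all -r gitlab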

Related

Corporate ca-certificate in git repo

We want to use GitHub Actions for CI. The Dockerfile we are using behind our corporate FW involves COPYing our certificate and updating ca-certificates.
That means I need to add the corporate certificate in the git repo for CI purposes.
That bothers me not so much in terms of security (it's a public key) but rather because I figure that if every organization did that too, the repo would get cluttered with stuff that's useless to everyone else.
I'm thinking of getting rid of everything related to certificates in the public repo and telling people to edit the Dockerfile should they need to build images behind the FW.
How do people go about that?
I would keep:
the CA in an external source
the Dockerfile generic (in that it would not need to be edited)
The idea would be, for instance, to set the certificate in an environment variable, used then in the Dockerfile during docker build.
A wrapper script 'build' (versioned in the same repository) would:
check if the environment variable is set (and exit while complaining if not set)
call docker build.
Any user cloning the repository and calling 'build' would discover the local requirement, even if they never read the README.
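A rough sketch of that 'build' wrapper, assuming the Dockerfile declares a build argument (say ARG CORP_CA_CERT), writes it under /usr/local/share/ca-certificates/ and runs update-ca-certificates; the variable name and image tag are illustrative:
#!/bin/sh
# build - wrapper around docker build that insists on the corporate CA being provided
set -eu
if [ -z "${CORP_CA_CERT:-}" ]; then
    echo "Please export CORP_CA_CERT with the PEM contents of your corporate CA certificate" >&2
    exit 1
fi
# hand the certificate to the Dockerfile as a build argument
docker build --build-arg CORP_CA_CERT="$CORP_CA_CERT" -t myorg/myimage:latest .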

Managing composer and deployment

So, I'm enjoying using composer, but I'm struggling to understand how others use it in relation to a deployment service. Currently I'm using deployhq, and yes, I can set it to deploy and run composer when there is an update to the repo, but this doesn't make sense to me now.
My main composer repo, containing just the json file of all of the packages I want to include in my build, only gets updated when I add a new package to the list.
When I update my theme, or custom extension (which is referenced in the json file), there is no "hook" to update my deployment service. So I have to log in to my server and manually run composer (which takes the site down until it's finished).
So how do others manage this? Should I only run composer locally and include the vendor folder in my repo?
Any answers would be greatly appreciated.
James
There will always be arguments as to the best way to do things such as this and there are different answers and different options - the trick is to find the one that works best for you.
Firstly
I would first take a step back and look at how you are managing your composer.json
I would recommend that all of your packages in composer.json be locked down to the exact version number of the item in Packagist. If you are using GitHub repos for any of the packages (or they are set to dev-master), then I would ensure that these packages are locked to a specific commit hash! It sounds like you are basically there with this, as you say nothing updates out of the packages when you run it.
Why?
This is to ensure that when you run composer update on the server, these packages are taken from the cache if they exist, and to ensure that you don't accidentally deploy untested code if one of the modules happens to get updated between your testing and your deployment.
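For example (the package names, versions and hash below are placeholders), exact pins can be added from the CLI rather than by editing composer.json by hand:
composer require monolog/monolog:1.25.3    # exact Packagist version, no ^ or ~ range
# for a package tracked from a Git repo, the composer.json constraint can pin a commit,
# e.g. "acme/custom-theme": "dev-master#9a1b2c3"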
Actual deployments
Possible Method 1
My opinion is slightly controversial in that when it comes to Composer, for many of my projects that don't go through a CI system, I will commit the entire vendor directory to version control. This is quite simply to ensure that I have a completely deployable branch at any stage; it also makes deployments incredibly quick and easy (git pull).
There will already be people saying that this is unnecessary and that locking down the version numbers will be enough to ensure any remote system failures will be handled, it clogs up the VCS tree etc etc - I won't go into these now, there are arguments for and against (a lot of it opinion based), but as you mentioned it in your question I thought I would let you know that it has served me well on a lot of projects in the past and it is a viable option.
Possible Method 2
By using symlinks on your server to your document root you can ensure that the build completes before you switch over the symlink to the new directory once you have confirmed the build completed.
This is the least resistance path towards a safe deployment for a basic code set using composer update on the server. I actually use this method in conjunction with most of my deployments (including the ones above and below).
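A bare-bones sketch of that symlink switch, assuming releases are built under /var/www/releases and the web server's document root points at the /var/www/current symlink (paths and repository URL are placeholders):
#!/bin/sh
set -eu
RELEASE="/var/www/releases/$(date +%Y%m%d%H%M%S)"
git clone --depth 1 git@example.com:me/mysite.git "$RELEASE"
(cd "$RELEASE" && composer install --no-dev --optimize-autoloader)
# only once the build has finished, repoint the document root
ln -sfn "$RELEASE" /var/www/current.new
mv -T /var/www/current.new /var/www/current    # atomically replaces the old symlink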
Possible Method 3
Composer can use "artifacts" rather than a remote server. This means you will basically be creating a "repository folder" of your vendor files; it is an alternative to adding the entire vendor folder into your VCS, but it also protects you against GitHub / Packagist outages, files being removed and various other potential issues. The files are retrieved from the artifacts folder and installed directly from the zip file rather than being retrieved from a server - this folder can be stored remotely - think of it as a poor man's private Packagist (another option, btw).
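A minimal sketch of pointing Composer at such an artifacts folder (the folder name artifacts/ is my choice; it just needs to hold the zipped package files):
# register a local "artifact" repository of zipped packages, then install from it
composer config repositories.local-artifacts '{"type": "artifact", "url": "artifacts/"}'
composer install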
IMO - The best method overall
Set up a CI system (like Jenkins), create some tests for your application and have them respond to push webhooks on your VCS so it builds each time something is pushed. In this build you will set up the system to:
run tests on your application (If they exist)
run composer update
generate an artifact of these files (if the above items succeed)
Jenkins can also do an actual deployment for you if you wish (and the build process doesn't fail); it can:
push the artifact to the server via SSH
deploy the artifact using a script
But if you already have a deployment system in place, having a tested artifact to be deployed will probably be one of its deployment scenarios.
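As a rough illustration of what the build steps listed above might boil down to in a Jenkins shell step (the test runner and archive name are assumptions about your project, not requirements):
#!/bin/sh
set -eu
composer update --no-interaction --prefer-dist   # versions are pinned, so this stays reproducible
vendor/bin/phpunit                               # assumes PHPUnit is a dev dependency; fails the build on test failure
# package the tested code plus its vendor dir as the deployable artifact
tar -czf ../build-artifact.tar.gz --exclude='.git' .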
Hope this helps :)

Create an RPM that can also manipulate files and add users

I'm trying to create an RPM in Fedora 15 that will install my software, but in order for my software to work correctly once installed, I also need to edit other (configuration) files on the system, add users/groups, etc. Performing some of these tasks is only allowed by the root user. I know to never create an RPM as the root user, and I understand why that is such a bad idea. However, if I add shell script statements to my spec file (%post, %prep... any section) to edit the necessary files, add users/groups, etc., my rpmbuild command fails with message "Permission denied" (not surprisingly).
What's the best way to handle this? Do I have to tell my users to install my package first, and then perhaps run a shell script as root to configure it all? That doesn't seem very elegant. I was hoping to allow a user to do everything with one simple command such as 'yum install mysoftware'.
Much of my research suggests that perhaps this shouldn't even be done via RPM. I've read many parts of Maximum RPM, and lots of other good resources, but haven't found what I'm looking for. I'm new to creating RPMs, but have already been able to successfully create a simple spec file for my software... I just can't get everything configured properly after the package is unzipped and installed to the correct location. Any input is greatly appreciated!
useradd should be run in %pre and shouldn't run during rpmbuild. That's the standard way of doing it. I would recommend the packaging guidelines and specifically the section on users and groups.
The %pre section of your RPM .spec file should check for all the conditions necessary for your software to install.
The %post section of your RPM .spec file should make all the modifications needed for your software to run.
To avoid file permission errors in the %post section of your RPM .spec file, you can set the file permissions and ownership in the %files section. That way, the user who installs the RPM has the appropriate permissions to modify the configuration files.
%install
# Copy files into the build root (not onto the machine you are building on)
install -D -m 0644 myfile %{buildroot}/path/to/my/file

%files
# Declare file permissions and ownership; they are applied when the package is installed
%attr(775, myuser, mygroup) /path/to/my/file

%pre
# Create the custom group 'mygroup' and user 'myuser' if they do not already exist
getent group mygroup >/dev/null || groupadd -r mygroup
getent passwd myuser >/dev/null || useradd -r -g mygroup -s /sbin/nologin myuser
# All other checks here
exit 0

%post
# Perform post-installation steps here, like editing other (configuration) files.
echo "Installation complete."

Can Jenkins store artifacts outside the job directory?

I currently have Jenkins set up with a number of jobs, but it's proving difficult to back up because the artifacts are stored within the job directory. I'd like to back up the job configurations and artifacts separately. I'm sure I remember reading somewhere that Jenkins now has an option to store them outside the job, but I can't find this.
Is there any configuration option that does this while still making the artifacts visible from within the job on the Jenkins interface? (ie rather than merely an add-in that copies the artifacts elsewhere)
Go to your Jenkins configuration page, e.g.
http://mybuildserver.acme.com/configure
At the top of the configuration page there is a "home directory" setting. Click the "advanced..." button below it.
Now set the "Workspace Root Directory" to e:\jenkins-workspaces\${ITEM_FULL_NAME}, and "Build Record Root Directory" to e:\jenkins-builds\${ITEM_FULL_NAME} or something similar.
Warning: I run Jenkins 2.7.2 and noticed that certain features don't work properly after configuring Jenkins like that. I saw problems with folders and problems with the multi-branch project plugin. Check the status of those issues if you rely on these features.
As you can see here, there are many plugins to deploy artifacts anywhere you want/need - on FTP, CIFS, Confluence, Artifactory... - especially the ArtifactDeployer plugin, which will allow you to make a copy of the artifacts in the Jenkins home.
Thank you Sam for your post, which pointed me in the right direction to solve my problem.
I had been searching for a way to create a symlink to the job archive of a build for multibranch projects. Up to now, we used to manually search for the correct folder basename in the filesystem and add it to the Jenkinsfile.
Now, I can simply use
jobOutputFolder = currentBuild.rawBuild.artifactsDir.path
and use that in my script.
If security is a concern, I could implement that as a shared library additionally.
Try the Use Custom Workspace build option. From the Jenkins popup help:
For each job on Jenkins, Jenkins allocates a unique "workspace directory." This is the directory where the code is checked out and builds happen. Normally you should let Jenkins allocate and clean up workspace directories, but in several situations this is problematic, and in such case, this option lets you specify the workspace location manually.
This option is also available under advanced project properties of multi-configuration project builds.
A groovy script under "Prepare an environment for the run" will always run on the master, and this groovy script can create a symlink from the build's artifact directory to wherever you really want the artifacts archived (archive_to), which SHOULD include the job name and build number:
import java.nio.file.*
// createSymbolicLink throws on failure instead of returning a boolean
try {
    Files.createSymbolicLink(Paths.get(currentBuild.artifactsDir.path),
                             Paths.get(archive_to.getCanonicalPath()))
} catch (IOException e) {
    throw new RuntimeException("Can't create symlink to archive dir", e)
}
Of course (sadly), when old builds are purged by Jenkins, the old artifacts are left behind, because Jenkins will not follow a symlink when purging, even if Jenkins owns the symlink and the target (shame).
A workaround for that may be to point a symlink back from the new archive dir; then, when Jenkins purges its archive dir, the new symlink will dangle and a cron job can later delete the new job archive dir.
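That cron cleanup can be a one-liner; a sketch assuming the real archives live under /data/jenkins-archives and each one contains a back-pointing symlink named jenkins-build (both names are made up):
# remove archive dirs whose back-pointing symlink into Jenkins has gone dangling
find /data/jenkins-archives -maxdepth 2 -name jenkins-build -xtype l -printf '%h\n' | xargs -r rm -rf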
The Copy Artifact Plugin (https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin) adds a build step for retrieving files from another project's workspace into the current one so you can work from there.

How to configure hosted Mercurial in TeamCity 5

This is probably a simple problem and I'm feeling exceptionally dumb because I can't find a any kind of documentation.
I've just installed TeamCity 5 and I want to get files from my Mercurial hosting, and there are two fields I just can't figure out.
HG Command path. What should I put here? The path to a file containing what? Can I get an example of that file somewhere?
The host is using Mercurial over SSH. Where do I define my private key?
Pull changes from? Should I put the address I'm cloning from, i.e. ssh://username@myhost.something/project
I figured this out for my TeamCity 5 server last week.
HG Command path: HG
Pull changes from: https://bitbucket.org/.../.../
Don't put the username@ in the URL. This is specified in the Username/Password fields. If you include the username in the URL it'll fail, as there is a bug in the configuration tool. You'll also see a screenshot of the configuration attached to the thread:
http://www.jetbrains.net/devnet/message/5254640#5254640
I'd suggest getting things working with HTTPS and then moving to SSH if possible. This breaks things down into two easier-to-solve configuration problems. I used the following tutorial to get SSH going on my Windows client machine.
http://www.codza.com/mercurial-with-ssh-setup-on-windows
I've not set this up on my TeamCity server yet. However I did get TeamCity to pick up my Mercurial.ini settings by putting the ini file in \Documents and Settings\TeamCity, which is the account the service runs under.
I've not used TeamCity, but I think the HG command path is probably the full path to your local Mercurial executable. For me (on Linux) that's:
$ type hg
hg is /usr/bin/hg
On Windows it's wherever the 'hg' executable in your system path was placed by whichever (of the many) Windows installers for Mercurial you used.
Pull changes from sounds like the URL to the repo, so:
ssh://username@myhost.something/project
or
ssh://username@myhost.something//project # note the _two_ slashes before the path
if you're using absolute paths on the server side.
Your private key location/specification depends on what you're using for SSH and whether or not you're running ssh-agent, but here's a link that explicitly points to the key from within mercurial.ini, which seems sound:
http://dev.openttdcoop.org/projects/home/wiki/Configuring_TortoiseHg_(Windows)#Pointing-to-you-Private-key