Corporate CA certificate in a git repo for GitHub Actions

We want to use GitHub Actions for CI. The Dockerfile we are using behind our corporate FW involves COPYing our certificate and updating ca-certificates.
That means I need to add the corporate certificate in the git repo for CI purposes.
That bothers me not so much in terms of security (it's a public key) but rather because, if every organization did the same, repos would end up cluttered with stuff that is useless to everyone else.
I'm thinking of removing everything related to certificates from the public repo and telling people to edit the Dockerfile should they need to build images behind the FW.
How do people go about that?

I would keep:
the CA in an external source
the Dockerfile generic (in that it would not need to be edited)
The idea would be, for instance, to set the certificate in an environment variable, used then in the Dockerfile during docker build.
A wrapper script 'build' (versioned in the same repository) would:
check if the environment variable is set (and exit with an error message if it is not)
call docker build.
Any user cloning the repository, and calling 'build' would discover the local requirement, even if they never read the README.
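A minimal sketch of such a wrapper, assuming the environment variable holds the path to the PEM file (the variable name CORP_CA_CERT and the image tag are placeholders, not anything prescribed above):
#!/bin/sh
# Sketch only: CORP_CA_CERT and the image tag 'myimage' are placeholder names.
set -e
if [ -z "${CORP_CA_CERT:-}" ]; then
    echo "error: set CORP_CA_CERT to the path of your corporate CA certificate (PEM)" >&2
    exit 1
fi
# Hand the certificate content to docker build; the Dockerfile can pick it up via ARG
# and append it to the system trust store during the build.
docker build --build-arg CORP_CA_CERT="$(cat "$CORP_CA_CERT")" -t myimage .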

Managing composer and deployment

So, I'm enjoying using composer, but I'm struggling to understand how others use it in relation to a deployment service. Currently I'm using deployhq, and yes, I can set it to deploy and run composer when there is an update to the repo, but this doesn't make sense to me now.
My main composer repo, containing just the json file of all of the packages I want to include in my build, only gets updated when I add a new package to the list.
When I update my theme, or custom extension (which is referenced in the json file), there is no "hook" to update my deployment service. So I have to log in to my server and manually run composer (which takes the site down until it's finished).
So how do others manage this? Should I only run composer locally and include the vendor folder in my repo?
Any answers would be greatly appreciated.
James
There will always be arguments as to the best way to do things such as this and there are different answers and different options - the trick is to find the one that works best for you.
Firstly
I would first take a step back and look at how you are managing your composer.json.
I would recommend that all of the packages in composer.json be locked down to the exact version number of the item on Packagist. If you are using GitHub repos for any of the packages (or they are set to dev-master), then I would ensure that these packages are locked to a specific commit hash! It sounds like you are basically there with this, as you say none of the packages change when you run it.
Why?
This is to ensure that when you run composer update on the server, these packages are taken from the cache if they exist there, and to ensure that you don't accidentally deploy untested code if one of the modules happens to get updated between your testing and your deployment.
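For example (the second package name and the commit hash below are purely illustrative), versions can be pinned straight from the command line:
composer require monolog/monolog:1.13.1                   # pin to an exact Packagist release
composer require "acme/internal-lib:dev-master#8f2c1e7"   # hypothetical package pinned to a commit hash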
Actual deployments
Possible Method 1
My opinion is slightly controversial: for many of my Composer projects that don't go through a CI system, I commit the entire vendor directory to version control. This is quite simply to ensure that I have a completely deployable branch at any stage, and it also makes deployments incredibly quick and easy (git pull).
There will already be people saying that this is unnecessary, that locking down the version numbers is enough to handle any remote system failures, that it clogs up the VCS tree, etc. I won't go into these now; there are arguments for and against (much of it opinion based), but as you mentioned it in your question I thought I would let you know that it has served me well on a lot of projects in the past and it is a viable option.
Possible Method 2
By making your document root a symlink on the server, you can run the build in a new directory and only switch the symlink over to it once you have confirmed the build completed.
This is the path of least resistance towards a safe deployment for a basic code base using composer update on the server. I actually use this method in conjunction with most of my deployments (including the ones above and below).
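A rough sketch of that switch, with placeholder paths (the exact build step depends on your project):
# Build into a fresh release directory, then flip the document-root symlink.
RELEASE=/var/www/releases/$(date +%Y%m%d%H%M%S)
git clone -q /var/www/site.git "$RELEASE"            # or copy your tested code here
(cd "$RELEASE" && composer update --no-interaction)
ln -sfn "$RELEASE" /var/www/current                  # the web server's document root points at 'current'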
Possible Method 3
Composer can use "artifacts" rather than a remote server. This basically means maintaining a "repository folder" of zipped packages; it is an alternative to adding the entire vendor folder to your VCS, and it also protects you against GitHub/Packagist outages, files being removed, and various other potential issues. The files are installed directly from the zip files in the artifact folder rather than being retrieved from a server, and this folder can be stored remotely - think of it as a poor man's private Packagist (which is another option, by the way).
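As a rough illustration (the directory name is a placeholder), the artifact repository is just a folder of package zip files that composer.json points at, e.g.:
composer config repositories.local artifact ../artifacts   # folder holding the package zips
composer update --no-interaction                           # packages are installed straight from the zips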
IMO - The best method overall
Set up a CI system (like Jenkins), create some tests for your application, and have it respond to push webhooks on your VCS so it builds each time something is pushed. In this build you will set up the system to (a sketch of these steps follows the list):
run tests on your application (if they exist)
run composer update
generate an artifact of these files (if the above items succeed)
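A hedged sketch of those build steps as a shell script a Jenkins job could run (the test command and artifact name are placeholders, not part of the original answer):
#!/bin/sh
# Sketch only: the test runner and archive name below are placeholder choices.
set -e                               # abort the build on the first failure
composer update --no-interaction     # locked versions are resolved (from cache where possible)
vendor/bin/phpunit                   # run the application tests, if any
tar --exclude='.git' --exclude='build.tar.gz' -czf build.tar.gz .   # package the artifact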
Jenkins can also do an actual deployment for you if you wish (and the build process doesn't fail), it can:
push the artifact to the server via SSH
deploy the artifact using a script
But if you already have a deployment system in place, having a tested artifact to be deployed will probably be one of its deployment scenarios.
Hope this helps :)

Jenkins projects pointing to same Mercurial repo do not share source

I am using Jenkins for our build server. I have multiple projects using the same Mercurial (Hg) repository and want to avoid each project cloning its own local repo to build from (since the repo is rather large). This is supposed to be possible via Jenkins and the Mercurial plugin.
In my Mercurial plugin configuration I have checked both "Use Repository Caches" and "Use Repository Sharing". In each project, the same repository location (a network location specified via IP address) is listed.
However, each project still seems to want to create a clone of the repository. Any ideas?
In our setup (using Jenkins 1.506), I've defined a custom workspace under the Advanced Project Options for each of my builds, typically at [project]\repo and then build from there into a \build\ folder.
If you define the custom workspace for each Jenkins project to point to the same shared custom workspace using the same source for the repo it will reuse what is already there.
I've not tested this, but I would assume that under this setup, it is important to prevent concurrent builds from occurring in the same working directory. Bad things would follow.
As a followup question: What is your rationale for not wanting each build to have its own source code?

Can Jenkins store artifacts outside the job directory?

I currently have Jenkins set up with a number of jobs, but it's proving difficult to back up because the artifacts are stored within the job directory. I'd like to back up the job configurations and artifacts separately. I'm sure I remember reading somewhere that Jenkins now has an option to store them outside the job, but I can't find this.
Is there any configuration option that does this while still making the artifacts visible from within the job on the Jenkins interface? (i.e. rather than merely an add-in that copies the artifacts elsewhere)
Go to your Jenkins configuration page, e.g.
http://mybuildserver.acme.com/configure
At the top of the configuration page there is a "home directory" setting. Click the "advanced..." button below it.
Now set the "Workspace Root Directory" to e:\jenkins-workspaces\${ITEM_FULL_NAME}, and "Build Record Root Directory" to e:\jenkins-builds\${ITEM_FULL_NAME} or something similar.
Warning: I run Jenkins 2.7.2 and noticed that certain features don't work properly after configuring Jenkins like that. I saw problems with folders and problems with the multi-branch project plugin. Check the status of those issues if you rely on these features.
As you can see here, there are many plugins to deploy artifacts anywhere you want or need them: FTP, CIFS, Confluence, Artifactory and so on. In particular, the ArtifactDeployer plugin will allow you to make a copy of the artifacts in the Jenkins home.
Thank you Sam for your post, which pointed me in the right direction to solve my problem.
I had been searching for a way to make a symlink to the job archive of a build for multibranch projects. Up to now, we used to manually search the filesystem for the correct folder basename and add that one to the Jenkinsfile.
Now I can simply use
jobOutputFolder = currentBuild.rawBuild.artifactsDir.path
and use that in my script.
If security is a concern, I could implement that as a shared library additionally.
Try the Use Custom Workspace build option. From the Jenkins popup help:
For each job on Jenkins, Jenkins allocates a unique "workspace directory." This is the directory where the code is checked out and builds happen. Normally you should let Jenkins allocate and clean up workspace directories, but in several situations this is problematic, and in such case, this option lets you specify the workspace location manually.
This option is also available under advanced project properties of multi-configuration project builds.
A Groovy script under "Prepare an environment for the run" will always run on the master, and this Groovy script can create a symlink from the build's artifact directory to wherever you really want the artifacts archived (archive_to in the snippet below), which SHOULD include the job name and build number:
import java.nio.file.Files
import java.nio.file.Paths
try {
    Files.createSymbolicLink(Paths.get(currentBuild.artifactsDir.path),
            Paths.get(archive_to.getCanonicalPath()))
} catch (IOException e) {
    throw new RuntimeException("Can't create symlink to archive dir", e)
}
Of course (sadly) when old builds are purged by Jenkins, the old artifacts are left behind, because Jenkins will not follow a symlink when purging, even if Jenkins owns both the symlink and the target (shame).
A workaround for that may be to point a symlink back from the new archive dir; then, when Jenkins purges its archive dir, the new symlink will dangle and a cron job can later delete the new job archive dir.
The Copy Artifact Plugin (https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin) adds a build step for retrieving files from another project's workspace into the current one so you can work from there.

Web development scheme for staging and production servers using Git Push

I am using git to manage a dynamic website (PHP + MySQL) and I want to send my files from my localhost to my staging and production servers in the most efficient and hassle-free way.
I am currently convinced that the best way for me to approach this problem is to use this git branching model to organize my local git repo. From there, I will use the release branches to push to my staging server for testing. Once I am happy that the release code works on the staging server, I can then merge with my master branch and push that to my production server.
Pushing to Staging Server:
As noted in many introductory git posts, I could run into problems pushing into a non-bare repo, so, as suggested in this response, I plan to push the release branch to a bare repo on the server and have a post-receive hook that clones the bare repo to a non-bare repo that also acts as the web-hosted directory.
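A minimal sketch of such a post-receive hook, assuming the web root is an ordinary clone of the bare repo (the paths and branch name are examples, not from the cited answer):
#!/bin/sh
# Runs inside the bare repo after each push; refresh the non-bare web clone from it.
unset GIT_DIR                      # hooks run with GIT_DIR pointing at the bare repo
cd /var/www/staging || exit 1
git pull --ff-only origin release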
Pushing to Production Server:
Here's my newest source of confusion...
In the response that I cited above, it made me curious as to why @Paul states that it's a completely different story when pushing to a live production server. I guess I don't see the problem. Would it be safe and hassle-free to follow the same steps as above, but for the master branch? Where are the potential pitfalls?
Config Files:
With respect to configuration files that are unique to each environment (.htaccess, config.php, etc), it seems simplest to .gitignore each of those files in their respective repos on their respective servers. Can you see anything immediately wrong with this? Better solutions?
Accessing Data:
Finally, as I initially stated, the site uses MySQL databases to store data. How would you suggest I access that data (for testing purposes) from the staging server and localhost?
I realize that I may have asked way too many questions for a single post, but since they're all related to the best way to set up this development scheme, I thought it was necessary.
Pushing to the production server
I assume that in the response you quote, the answer refers to pushing to the production server as "a different story", just because one can push any old commit to the staging server for testing, but you would be very careful only to push a thoroughly tested version to the production server.
I think the approach you refer to (of deploying by pushing to a bare repository with a post-receive that does git checkout -f with an appropriately set GIT_WORK_TREE) is a good one for deploying from git.
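For reference, that checkout-based hook looks roughly like this (the work tree path and branch are placeholders):
#!/bin/sh
# post-receive hook in the bare repo: check the pushed branch out into the web root.
GIT_WORK_TREE=/var/www/production git checkout -f master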
Config Files
That is a reasonable plan, but you have to be somewhat careful about using .gitignore to ignore configuration files - you might want to look at this answer for more about this:
How can I have different versions of a file in the local working directory, remote working directory and git ftp target?
Accessing data
I think the question about data for your staging server really is a separate issue, since none of that data will be in your version control system - it might be worth adding another question here about that issue. You could have a script that dumps data on your live server and imports it to the staging server, but I can think of many situations in which that would be undesirable, particularly where customer details and data protections laws have to be considered.
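If you do go the script route, a hedged sketch (the host, database and user names are placeholders) could be as simple as:
# Dump the live database over SSH and load it into the staging database (placeholder names throughout).
ssh deploy@live.example.com 'mysqldump --single-transaction live_db' | mysql -u staging_user -p staging_db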
The Git FAQ recommends this post-receive hook script to reset the head of a non-bare repository after it is pushed to. It will stash any uncommitted changes on the remote. Personally, I'd rather it reject the push in that case, but that can be done.
(Please note: lots of answers contain out-of-date links to the FAQ and the script - hopefully these will remain valid for some time at least.)
I use git flow too. For config files in ExpressionEngine, we use EE Master Config, which basically determines which environment it's in and applies a specific config. I imagine it could easily be modified for whatever you're doing.
For deployments, we use Beanstalk which allows you to add "[deploy:Environment]" to a commit message, which will make it upload (ftp) your specified branch (the one you commit to) to the specified environment, which you configure in their web interface, when you git push.
I've been trying to find an effective solution for .htaccess files that will allow me to htpasswd one of my environments, but not all of them. It looks like it's possible in Apache 2.4 with something like this:
<if "%{HTTP_HOST} == 'dev.example.com'">
# auth directives
</if>
but sadly, most of the production servers we use are running an earlier version, which doesn't support the directive :(

Get changes from mercurial to FTP site

I work with a partner on a PHP site for a client. We have a common Mercurial repository (on Bitbucket), both local copies and the live site. We have only FTP access to the live site (which can't be changed, since it is a hosting package with FTP only).
I want to be able to push changes from the repository to the live site.
Until now I have simply kept track of changed files in the repo and copied them manually with FileZilla - an error-prone and annoying task. My idea is to mount the remote location locally (e.g. using CurlFtpFS) and tell Mercurial to automagically copy changed files to the site. Ideally I want to be able to specify which changes, but that would be a bonus; it would be sufficient if the local state of the files within the repo is synced.
Is there any good way to do this using linux commandline tools?
My first recommendation is, if at all possible, get a package that allows more access. FTP only is just brutal.
But since you are looking for a real answer to your question, I have two ideas for you:
I would suggest looking into the Mercurial FTP Extension. I personally have never used it since I have never gotten myself stuck in an FTP-only situation (not for a long time at least), but it looks promising. It looks like if you make sure to tag your production releases it will work really well for you (make sure to use the -uploaded param).
Also, if you only ever want the tip to be installed on your production environment, then you could look at the suggestion Martin Geisler made on the Bitbucket user group a few days ago. Basically, his suggestion is to utilize Bitbucket's "ping url" functionality. You would have to write a server-side script/URL handler that accepts that ping, fetches the tip from Bitbucket (as a zip) and then unzips/unpacks it. This is a bit complicated, but if you are looking for complete automation and the tip will always be what you want deployed, this could work for you.
One option is to use the hg archive command:
hg archive /path/to/curlftpfs
which will put a snapshot of your repo in that location - it will, however, overwrite any file already there.
Another option is to create a Mercurial clone in that same /path/to/curlftpfs and then just do hg pull; hg update in it on your local system with the remote one mounted. Setting that up initially will mean transferring the whole thing, but subsequently you'll only be sending deltas.
Some folks don't like this last option because it exposes your entire .hg repository too, but you can block access to that at the web server.
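A rough sketch of the mount-and-update flow described above (the mount point and FTP URL are placeholders):
# Mount the FTP site locally, then let Mercurial write changed files straight to it.
curlftpfs ftp://user:password@ftp.example.com /mnt/site
(cd /mnt/site && hg pull -u)   # clone-based variant: only deltas go over the wire
# or, for a one-off snapshot of your local repo:
# hg archive /mnt/site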
I came across this problem a while ago after switching from AWS to a local web host that provides only ssh/ftp.
My previous approach of updating a production site on AWS using "hg pull; hg update -C" could no longer be used on the new web host, since they don't have Mercurial installed for shared hosts.
So what I did was to mount the remote location over FTP on a local machine (i.e. your laptop), then run the hg pull and update commands locally at the path where the remote FTP site is mounted.
Windows solution:
BeyondCompare (http://www.scootersoftware.com/) is an awesome piece of software. Apart from being awesome, it can mirror your local folder to the FTP site. It compares files and only transfers what's new.