GitHub Actions: how to deal with a standalone config file

We are using GitHub Actions to deploy our code. On push, the source is built and deployed successfully, as long as the config file is tracked by the repository. However, we are running into a problem with a config file that is listed in .gitignore.
Our app has different versions, controlled by this config file, and the file also differs between testing and production. It is therefore kept standalone and is not tracked by the git repository. However, for GitHub Actions to build the project correctly, this file is required and has to sit at a specific path in the project, e.g. /configs/env_configs.json.
This seems like a very common use case, but I have found very little about it in the GitHub Actions documentation.
Is there a good way to work this out?
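
One common way to handle this (my own suggestion, not something from this thread) is to store the file's contents in an encrypted Actions secret and write it to the expected path in a step that runs before the build. A minimal sketch, assuming a hypothetical repository secret named ENV_CONFIGS_JSON that holds the JSON:

steps:
  - uses: actions/checkout@v3
  - name: Write env config
    env:
      ENV_CONFIGS_JSON: ${{ secrets.ENV_CONFIGS_JSON }}
    run: |
      # Recreate the untracked config file at the path the build expects
      mkdir -p configs
      printf '%s' "$ENV_CONFIGS_JSON" > configs/env_configs.json
  - name: Build
    run: ./build.sh   # placeholder for the real build step

Separate secrets (or GitHub environments) can then carry the testing and production variants of the file.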

Related

GitHub Actions checkout@v3: where is repo downloaded?

I am working with GitHub Actions to build code on Windows, Linux, and macOS. I use actions/checkout@v3 to download my repo to each server. However, I do not know where the repo gets downloaded. I have to curl other files and add them to the repo folder for the build to work.
Does anyone know where repos are downloaded on each server (Windows, Linux, and macOS) with actions/checkout@v3? I'm having trouble finding anything in the documentation. If the path is set in an environment variable, I would prefer to use that instead of hard-coding the path for each server.
The environment variable you're looking for is GITHUB_WORKSPACE.
The default working directory on the runner for steps, and the default location of your repository when using the checkout action. For example, /home/runner/work/my-repo-name/my-repo-name.
Source: https://docs.github.com/en/actions/learn-github-actions/environment-variables#default-environment-variables
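For example, a step can use it to locate the checkout portably across all three runners (a trivial sketch):

steps:
  - uses: actions/checkout@v3
  - name: Show where the repo was checked out
    shell: bash   # bash is available on the Windows, Linux, and macOS runners
    run: |
      echo "Repo is in: $GITHUB_WORKSPACE"
      ls "$GITHUB_WORKSPACE"

In workflow expressions, the same path is also available as ${{ github.workspace }}.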

Can I use opam to make a package out of a local file and install it?

I'm new to opam and trying to figure out how to use it properly. For a class, I want to set up students with an environment that has some custom packages installed. (The package will consist of some raw .ml files that I got from a colleague at another school; the files are on their GitHub, but there's no .opam file that I can see, and as far as I know they're not in any official package release.)
Can I somehow call these local .ml files a package and ask opam to install it? Do the files have to be on github first, and if so can I use my colleague's existing repository as the source? I don't want to make any of this public, since it is not my own work; I just want to configure my local environment so that the code in the files can be included easily as a package. Basically I don't know the best way to proceed so I'm happy for any advice.
You can add a custom opam file in the base directory of the project. See the documentation for how to create that file.
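For concreteness, a minimal sketch of what such an opam file might look like (every field value here is a placeholder, and it assumes the sources build with dune):

opam-version: "2.0"
synopsis: "Course material packaged locally"
maintainer: "you@example.edu"
authors: ["Your Colleague"]
homepage: "https://github.com/colleague/repo"
depends: [
  "ocaml" {>= "4.08"}
  "dune" {>= "2.0"}
]
build: [
  ["dune" "build" "-p" name "-j" jobs]
]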
Then you can enter opam pin add . in the base directory, and your project will be installed as if it were an opam package. Check opam pin --help for more info (you can also pin to a remote git project, for instance).
Note that although the default package repository is hosted on GitHub, this is in no way a requirement for opam. Opam depends on git, but you can absolutely use it with a private git repository. If you want to use your colleague's repository as the source, that is entirely doable, though it is usually preferable to have the opam file at the root of the project (you can open a PR on their repository, or make your own fork of it on GitHub; the site makes it clear you copied the code).
If pinning is not to your taste, you can also create your own package repository, though this is probably a bit too heavyweight for your needs.
Good luck!

SSIS Deployment Woes

I'm quite confused as to how to create a deployment in SSIS 2008 that I can use across the various sites we are going to deploy to. I'm using the deployment utility to deploy my ETL packages, which are file-based and executed via a SQL job.
When I rebuild my solution, the deployment files are created along with their configuration files, which I bind my connection strings to. I've discovered that each of the packages is still referencing the configuration files in my project folder, rather than those in the deployment folder. I had assumed that when I created a deployment, the configuration file references would be relative paths.
Ideally, I would like to copy the contents of the deployment folder to a flash drive, plug it in at the site I'm deploying to, edit the configuration file for that customer site, execute the deployment manifest in the folder, and have everything work. But this doesn't seem to be the case.
I also notice that the SQL job has an option to specify the configuration files for the packages, but this doesn't seem to have any effect either. I must clearly be doing something wrong here; could someone please assist?
It seems you are running into two issues with SSIS deployment and execution:
Configuration file references are stored as absolute paths, i.e. the concrete path used in the development environment when the reference was created, and that same path is what the package will look for in production.
In SSIS 2008, a configuration file specified at run time cannot override values that were set at design time (see Understanding How SSIS Package Configurations Are Applied at Run Time).
To deploy your packages with a simple file copy the way you describe, you must change your packages to use a relative reference to your configuration files:
Right-click the package file and select View Source to open the XML view of the package source. Search for your configuration file entry, which will include the path, and remove the path, keeping only the filename. Alternatively, change the absolute path to a relative path to the configuration file. Save and close the XML view of the package.
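For illustration, the fragment you're editing looks roughly like this in the .dtsx XML (a hedged sketch of the 2008 format; names and paths here are examples, so verify against your own package source):

<DTS:Configuration>
  <DTS:Property DTS:Name="ConfigurationType">1</DTS:Property>
  <!-- Before: absolute path captured at design time
  <DTS:Property DTS:Name="ConfigurationString">C:\Projects\MyETL\MyPackage.dtsConfig</DTS:Property>
  -->
  <!-- After: filename only, resolved relative to the current directory -->
  <DTS:Property DTS:Name="ConfigurationString">MyPackage.dtsConfig</DTS:Property>
</DTS:Configuration>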
Now when you deploy the package and the configuration file together, ensuring they have the same relative location to each other, the package will find the config file by the relative path, and work the way you expect.
Note: from this point forward you will need to open the BIDS IDE by double-clicking on the project or solution file. If you launch Visual Studio, and then open the project or solution from within the IDE, the IDE will not be able to find the configuration file when you execute the package (the current directory will be Windows\System32, not your package folder).

How can I stop "jekyll build" from overwriting existing files in the output directory?

The source for my Jekyll-powered website lives in a git repo, but the website also needs to have a couple large static files that are too large to go under version control. Thus, they are not part of the Jekyll build pipeline.
I would like these to simply live in an assets directory, not present in the git repo, inside the Jekyll destination (which is a server directory; I don't have any control over the server here, as all I can do is dump static files into a designated directory). But running jekyll build deletes everything in the output directory.
Is there a way to change Jekyll's behavior in this case? Or is there some other good way to handle this issue?
Not sure this addresses the specific case in the OP, but since I kept getting to this page before I finally found an answer here, I thought I'd add an answer to this question in case it helps others.
I have a git post-receive hook that builds my Jekyll site on my webhost when I push to the host, but it was also deleting anything else that I had FTP'ed over. So now I put anything I need to keep around in a directory (external/ in my case) and have added the following to my _config.yml:
exclude: [external]      # don't treat external/ as site source to process or copy
keep_files: [external]   # don't delete external/ from the destination on build
and now files in external/ survive.
If you upload Jekyll's output directory to your server via FTP, you can use an FTP tool that lets you ignore folders.
For example, my own site is built with Jekyll, but hosted on my own webspace, so I'm uploading it via FTP.
I explained in this answer how I scripted the building and uploading process, so I can update my site with a single click.
In my case (Windows), I used WinSCP, a free command-line FTP client, for this.
If you're not on Windows, you need to use something else, but there are probably other FTP tools out there that are able to ignore folders.
To ignore your assets folder in WinSCP, you just need to put this line into the script file:
(the file which contains the actual WinSCP commands; read my other answer for more information)
option exclude "assets/"
Now you can upload your large assets folder on the server once, and it won't be overwritten/deleted when you later update your site via FTP.
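For context, a complete script file built around that line might look something like this (host, credentials, and paths are placeholders; treat it as a sketch and check the WinSCP scripting documentation):

# Abort on any error, don't prompt interactively
option batch abort
option confirm off
open ftp://user:password@example.com/
# Never touch the pre-uploaded assets folder
option exclude "assets/"
# Mirror the local Jekyll output to the web root
synchronize remote _site /public_html
close
exit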

Jenkins projects pointing to same Mercurial repo do not share source

I am using Jenkins for our build server. I have multiple projects using the same Mercurial (Hg) repository and want to avoid each project cloning its own local repo to build from (since the repo is rather large). This is supposed to be possible via Jenkins and the Mercurial plugin.
In my Mercurial plugin configuration I have checked both "Use Repository Caches" and "Use Repository Sharing". In each project, the same repository location (a network location specified via IP address) is listed.
However, each project still seems to want to create a clone of the repository. Any ideas?
In our setup (using Jenkins 1.506), I've defined a custom workspace under the Advanced Project Options for each of my builds, typically at [project]\repo, and I then build from there into a \build\ folder.
If you define the custom workspace for each Jenkins project to point to the same shared custom workspace using the same source for the repo it will reuse what is already there.
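If you manage jobs via their config.xml rather than the UI, the equivalent setting for a freestyle project looks roughly like this (the path is a placeholder; a sketch, not taken from the original answer):

<project>
  <!-- Point every job at the same checkout location -->
  <customWorkspace>C:\jenkins\shared\my-hg-repo</customWorkspace>
  ...
</project>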
I've not tested this, but I would assume that under this setup, it is important to prevent concurrent builds from occurring in the same working directory. Bad things would follow.
As a follow-up question: what is your rationale for not wanting each build to have its own source code?