gitlab runners artifacts: subfolders and files without parent folder in zip - gitlab-ci-runner

I have a folder named "public" containing subfolders (with subfolders and files, and so on) and files like:
└ public
  └ folder1
    └ file1.txt
  └ folder2
  p_file1.txt
  p_file2.txt
These folders and files should be returned as an artifact from a GitLab runner job. So far that works, but the folder "public" also ends up in the zip file as the parent folder of everything else. I only want the subfolders and files in the zip file, without their parent folder "public" (replace "public" in the example above with "artifact.zip" to see the intended structure).
So far I tried:
- "public"
- "public/*"
- "public/**"
- "public/**/*"
Edit (maybe I was not clear enough):
I would like to specify it within .gitlab-ci.yml:
artifacts:
  name: app
  paths:
    - ???

When GitLab collects artifacts, the relative path of each file within the zip archive will always match its relative path in the workspace, irrespective of which paths: rule was used to match the file.
The paths: and exclude: patterns simply determine which files are included.
From the docs:
The paths keyword determines which files to add to the job artifacts. All paths to files and directories are relative to the repository where the job was created.
To get the artifacts to appear in the structure you want, you'll have to arrange them that way relative to the working directory, for example by running mv ./public/subdir ./subdir in your job and then using a matching rule to artifact ./subdir.
e.g.
my_job:
  # ...
  after_script:
    - mv public/subdir ./subdir
  artifacts:
    paths:
      - subdir
If getting the rules to match is difficult (for example, if you want to use glob patterns that would otherwise match files in your repo), you could consider either using artifacts:untracked or a standalone job that rearranges the artifacts produced by the previous job.
For example, suppose you want to match all HTML files under public, but you want the subdirectories of public to end up at the top level of your artifact archive. However, (1) you don't know ahead of time which subdirectories will exist, (2) glob patterns at the workspace root may match files in your repo, and (3) you can't reasonably add exclude: rules to prevent unwanted repository files from being artifacted. In that case you can use one of the following strategies:
Move the files to the workspace root (or however you want them arranged) and use artifacts:untracked
my_job:
  after_script:
    - mv ./public/* ./  # move subdirs and files to root
    # or however you want them arranged in your artifact archive,
    # relative to the workspace root
    - rm -rf ./public
  artifacts:
    untracked: true  # artifact everything not tracked by git
Use a secondary job to re-arrange your artifacts
stages:
  - one
  - two

my_job:
  stage: one
  # ...
  artifacts:
    paths:
      - public/**/*.html
    expire_in: "1h"  # optionally, expire these quickly

rearrange_artifacts:
  stage: two
  needs: [my_job]  # only restore artifacts from this job, start right away
  variables:
    GIT_STRATEGY: none  # prevent repository checkout, optional
  script:
    - mv public/* ./  # move subdirectories and files to root
    - rm -rf ./public
  artifacts:
    untracked: true  # artifact everything, excluding git-tracked files
    ## or: artifact the workspace dir, including git-tracked files
    # paths:
    #   - $CI_PROJECT_DIR

Related

Use artifacts as Jekyll assets in Github pages

I use GitLab Pages and Jekyll to generate a website, and a Python script to generate the images and data files (JSON) used by Jekyll. As I need to update these files daily, I commit and push dozens of images to update the website, which is not really convenient.
I also use GitHub Actions to generate and store these files as artifacts on GitHub:
- name: Main script
  run: |
    python generate_images.py --reload  # saves in folder saved_images
    # I manually commit and push these images in jekyll/assets/img to update the site
- name: Upload images artifacts
  uses: actions/upload-artifact@v1
  with:
    name: saved_images
    path: saved_images
I would find it better to tell Jekyll to use the artifacts instead of the committed files, so that I can update the site by just re-launching the GitHub Action (hopefully without an extra commit or branch change). Actually, that's what I've seen on GitLab in another project:
pages:
  stage: Web_page
  before_script:
    - python generate_images.py --reload
    - cp -r saved_images/*.png jekyll/assets/img
    - cd jekyll
    - bundle install --path vendor
  script:
    - bundle exec jekyll build -d ../public
  ...
So I wonder if it is possible to use artifacts as Jekyll assets and data files in Github pages?
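For reference, here is a hedged sketch (not from the original thread) of how the GitLab approach above could be mirrored in a single GitHub Actions workflow: regenerate the images inside the job and copy them into jekyll/assets/img before building, so no extra commit is needed. The trigger, job and step names are illustrative, and the deployment step is left to whatever you already use for Pages.
name: build-site
on:
  workflow_dispatch:       # re-run manually whenever the data should refresh
  schedule:
    - cron: "0 6 * * *"    # or rebuild daily; adjust as needed
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Generate images
        run: python generate_images.py --reload        # writes to saved_images/
      - name: Copy images into Jekyll assets
        run: cp -r saved_images/*.png jekyll/assets/img
      - name: Build site
        run: |
          cd jekyll
          bundle install --path vendor
          bundle exec jekyll build -d ../public
      # publish ../public with your preferred Pages deployment step
      # (for example the official Pages actions or a push to the gh-pages branch)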

Why is Mercurial ignoring some of my files?

Having run 'hg init' and 'hg add' to create a new Mercurial repository and add the files, I find that quite a few of the files are not being tracked (they show up with 'hg status -i'), yet do not seem to match any pattern in my .hgignore file, so I don't see what the issue is. Here's the .hgignore file:
# Eclipse project files
.classpath
.project
.settings/
# IntelliJ project files
\.iml
\.ipr
\.iws
.idea/
out
# Grails files and dirs that should not be versioned
target
web-app/WEB-INF/classes
web-app/WEB-INF/tld/c.tld
web-app/WEB-INF/tld/fmt.tld
stacktrace.log
plugin.xml
devDb.*
prodDb.*
# Mac OS/X finder files
.DS_Store
oldhg/
All files in e.g., '/grails-app/views/layouts' are ignored, and yet I can see nothing in the .hgignore file which would cause this. What am I missing? How can I force these files not to be ignored?
The entry out is an unanchored regular expression (regexp is the default .hgignore syntax), so it matches any path containing that string, including layouts/. If you want it to match only at the beginning or end of a name, you need to anchor it with ^ or $.
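For example, anchoring that entry (a minimal sketch, keeping the regexp syntax the file already uses) stops it from matching layouts/ while still ignoring a top-level out directory:
# matches only files under a top-level "out" directory,
# not every path that merely contains the string "out"
^out/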

Jekyll Website won't load

I've been trying for a while to get a Jekyll website running on Github Pages, but it doesn't seem to work. I've been getting the error
Your site is having problems building: The symbolic link
/vendor/bundle/ruby/2.3.0/gems/ffi-1.9.18/ext/ffi_c/libffi-x86_64-linux-gnu/include/ffitarget.h
targets a file which does not exist within your site's repository. For
more information, see
https://help.github.com/articles/page-build-failed-symlink-does-not-exist-within-your-site-s-repository/.
I have already tried it with 9 different Jekyll themes, but none of them seem to work, so I'm clearly doing something wrong. Here are the steps that I am taking
1) Create a new repo and put the files from a Jekyll Theme there, OR fork it from another repo (e.g. https://github.com/iwiedenm/jekyll-theme-massively-src)
2) Git pull it into my computer and make sure I'm on the gh-pages branch
3) Run bundle install --path vendor/bundle
4) Make sure it was built with bundle exec jekyll serve
5) Once it looks good, upload it into Github
git add *
git commit -m 'Test'
git push
Then I go to the repo in the browser and I see the error above, and I can't see the website because of that missing "ffitarget.h" file. When I go look for it in that directory, I am able to find it, but Github doesn't seem to be able to find it.
PS: Feel free to mark this as a duplicate. I have seen other pages, such as this and I tried it, but it didn't work.
GitHub Pages will use the local gems in vendor. If you commit them, you will get errors each time GitHub Pages tries to resolve their symbolic links.
From a fresh repository
Add vendor/** to your .gitignore file before you do a git add . *.
The dot in git add . * forces git to stage dotfiles (.gitignore, ...).
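For the fresh-repository case, the whole sequence might look like this (a sketch; the commit message and remote are just examples):
echo "vendor/**" >> .gitignore        # ignore bundled gems before staging anything
git add . *                           # the dot also stages dotfiles such as .gitignore
git commit -m 'initial commit without vendor'
git push origin master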
From an already existing repository containing gems in a vendor folder
Add vendor/** to your .gitignore file,
Remove vendor/ files from versioning: git rm --cached -r vendor/
You can now stage, commit and push:
git add . *
git commit -m 'remove vendor from versioning'
git push origin master
Notes:
you can publish the master branch content, as a gh-pages branch is no longer mandatory. See the documentation.
unless you have special needs like debugging, it's not necessary to download gems into each of your projects. You can just do a bundle install.
Ensure the vendor/bundle/ directory has been excluded.
By default, Jekyll excludes that directory and therefore does not care about the contents of your vendor directory.
When you fork/clone a repo, there's a possibility that the exclude: list has been customized (therefore overriding the default setting). You can ensure vendor/bundle/ is ignored by Jekyll by adding it to your custom exclude list:
# Exclude list
exclude:
  - README.md
  - Gemfile
  - Gemfile.lock
  - node_modules
  - gulpfile.js
  - package.json
  - _site
  - src
  - vendor
  - CNAME
  - LICENSE
  - Rakefile
  - old
  - vendor/bundle/
To locally emulate how the site is built on GitHub Pages, you can build using the --safe switch:
bundle exec jekyll serve --safe

Clone a Mercurial repository into a non-empty directory

tl;dr:
hg clone ssh://hg@bitbucket.org/team/repo ~/prod/ fails with "destination is not empty" if ~/prod/ is not empty. Can I force cloning?
I am trying to write my first Ansible playbook that should deploy my code from a Bitbucket Mercurial repository to my server. There is a deployment path, ~/prod, which contains all code files as well as the data in ~/prod/media and ~/prod/db.db. To make sure the playbook works even if the ~/prod directory is empty or doesn't exist, this is what I have so far:
- name: create directory
  file: path=/home/user/prod state=directory

- name: clone repo
  hg:
    repo: ssh://hg@bitbucket.org/team/repo
    dest: /home/user/prod
    force: yes
In my understanding, it ensures that the deployment directory exists and then clones the repo there. It works beautifully if the directory doesn't exist or is empty. However, as soon as I've cloned the repo once, this playbook fails with destination is not empty.
I can move media and db.db out first, then delete all other files, then clone, then move the data back. But it looks cumbersome.
I simply want to force cloning, but I cannot find a way to do it. Presumably this is so wrong that Mercurial won't allow me to do it. Why, and what's a better way to go?
Though I haven't yet read it anywhere, it looks like force-cloning is impossible. The two alternatives then are, as explained in another thread on the same topic:
indeed, clone into another directory and then move the .hg folder to the target directory (a shell sketch of this follows the playbook below)
or, hg init /home/user/prod and then hg pull -R /home/user/prod ssh://hg@bitbucket.org/team/repo; hg update -C -R /home/user/prod.
With the second one, it is possible to optimise the Ansible task, to perform this action only if the target directory doesn't contain .hg:
- name: recreate repo
  # only runs when .hg does not exist yet; the next task then pulls from the remote
  command: hg init /home/user/prod
  args:
    creates: /home/user/prod/.hg

- name: update files
  hg:
    repo: ssh://hg@bitbucket.org/team/repo
    dest: /home/user/prod
    clone: no
    update: yes  # optional, for readability
    force: yes
  notify: "restart web services"
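For completeness, the first alternative (clone elsewhere, then move the metadata into the non-empty target) could look roughly like this as plain shell; a hedged sketch, with /tmp/prod-clone as an arbitrary scratch path:
hg clone -U ssh://hg@bitbucket.org/team/repo /tmp/prod-clone   # clone without a working copy
mv /tmp/prod-clone/.hg /home/user/prod/.hg                     # drop the repository metadata into the target
hg update -C -R /home/user/prod                                # check out tracked files next to the existing data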

how can I split a mercurial repository?

The format of the hg mv command is hg rename [OPTION]... SOURCE... DEST. Path names are relative to the current directory. Thus, when you are at a command prompt in the root directory and specify hg mv -n -I * A\B Z, Mercurial will create the directory Z under the root directory and move A\B\readme.txt to Z\readme.txt.
How can you specify, under Windows, that Z is the repository root directory? I tried using '.' as destination, i.e. hg mv -n -I * A\B . but got a message that A\B\readme.txt will be copied to B\readme.txt, not to readme.txt at the root. I tried using '~' as the destination, but hg mv -n -I * A\B ~ got me a new directory named "~" below the root, obviously not what I wanted.
So my question is: How do I specify the repository root directory as the destination to the mercurial move command?
edit: I'll try to clarify the issue.
I have an OldDev repository containing two products: Product-A and Product-B. Using the '~' symbol to denote OldDev's root folder, OldDev contains two folders: ~/Product-A and ~/Product-B (in addition, of course, to ~/.hg where its metadata is stored).
Each product is composed of a few projects, and each such project is assigned a folder under the product's folder. Thus Product-A has the projects Project-A, Project-B and Project-C, stored in ~/Product-A/Project-A, ~/Product-A/Project-B and ~/Product-A/Project-C, respectively. ~/Product-A/Project-A/xxx.cs is one of (Product-A's) Project-A's files.
Now I want to extract Project-A into its own NewDev repository. As it's the only project in NewDev, it makes no sense to retain the product/project hierarchy, so I want it to be at the root of NewDev: its xxx.cs file, for example, will be #/xxx.cs, where # is the root folder of NewDev (the one containing NewDev's .hg directory where NewDev's metadata is stored).
To extract Project-A to NewDev I used the convert extension, as documented in "split a repository in two". I used a mapfile containing the single mapping include Product-A/Project-A.
So far, NewDev is an exact subtree of OldDev. It does not contain ~/Product-B, it does not contain ~/Product-A/Project-B nor ~/Product-A/Project-C. It only contains ~/Product-A/Project-A. The files that remained are located at exactly the same paths as before, but only those files that belong to Product-A's Project-A were retained.
So, I've achieved half of my goals: I split OldDev, with its many products and projects, and created NewDev with only one project (Project-A). However, the files of Project-A are not at # but at their old (OldDev) location #/Product-A/Project-A. I need to move them up two levels so that xxx.cs will be at #/xxx.cs and not at #/Product-A/Project-A/xxx.cs.
To move the files I tried to use the hg mv command, but I can't figure how to specify the root (#) as the destination.
Solution: What worked for me, based on Marc Anton Dahmen's answer, is as follows:
convert1.txt: hg convert -s hg -d hg --filemap mapfile1.txt olddev temprepo
mapfile1.txt: include Product-A/Project-A
convert2.txt: hg convert -s hg -d hg --filemap mapfile2.txt temprepo newrepo
mapfile2.txt: rename Product-A/Project-A .
Where the contents of convert1.txt and convert2.txt are, of course, shell commands.
You must use the rename directive in your filemap instead of include like so:
rename Project-A .
Moving every file in a repository and the repository data is not an hg mv operation because that cannot change where the repository meta-data is stored.
The wording of your question is still really ambiguous, but I have a decent guess as to what you want to do.
Suppose you have a repo called /some/dir/avi-repo and you really want it to be in /avi-repo. Use clone:
cd /
hg clone /some /avi-repo
Now you have two identical copies of the repo, one in /some/dir/avi-repo and one in /avi-repo. You can delete all of /some/dir/avi-repo now.
Your desire seems a little more complicated than that with a tree like:
/some
---- /.hg # the repository meta-data
---- /dir # no files in here just the sub-dir
-------- /avi-repo
------------/file.c
------------/file.dat
------------/important-file.txt
And you want to move avi-repo to /some/avi-repo. You should be able to do that with the right sequence of mercurial commands, but it is far easier to:
mkdir /temp
cd /temp
hg clone /some /temp/avi-clone
rm -r /some
mkdir /some
hg clone /temp/avi-clone /some
Or some variant of that. The point is that repatriating an entire repository is not a job for hg mv.