Hello, can someone tell me why GitLab cannot find my artifacts?
Logfile:
$ ls -la /build/Project*.zip
-rw-r--r-- 1 root root 1641 Nov 25 21:18 /build/Project-1.0.zip
Uploading artifacts...
WARNING: /build/Project*.zip: no matching files
CI File:
package:
  stage: package
  script:
    - ... ... ...
    - ls -la /build/Project*.zip
  only:
    - master
  artifacts:
    paths:
      - "/build/$CI_PROJECT_NAME*.mkp"
    expire_in: 1 week
The artifacts path has to be relative to, and a child of, $CI_PROJECT_DIR.
Make it relative to $CI_PROJECT_DIR.
Put it like this:
- cp -r $(pwd)/target/*.html $CI_PROJECT_DIR/report
- ls -la $CI_PROJECT_DIR/report
artifacts:
  paths:
    - report/*.html
To learn more, see: https://gitlab.com/gitlab-org/gitlab-foss/-/issues/15530
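Applied to the job above, a minimal sketch of the fix (an assumption, not the original poster's final config: the build output is copied from /build into $CI_PROJECT_DIR, and the .zip glob follows the log rather than the .mkp pattern from the original job):

package:
  stage: package
  script:
    - ... ... ...
    # copy the build output into the project dir so the runner can collect it
    - cp /build/Project-*.zip $CI_PROJECT_DIR/
    - ls -la $CI_PROJECT_DIR/*.zip
  only:
    - master
  artifacts:
    paths:
      - "*.zip"
    expire_in: 1 week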
I was using a CI/CD pipeline to deploy my project to the server.
However, it suddenly stopped working and I got two errors: the first one is related to git, and the second one is a Docker error.
Can somebody help me figure out what the problem could be?
out: Total reclaimed space: 0B
err: error: cannot pull with rebase: You have unstaged changes.
err: error: please commit or stash them.
out: docker build -f Dockerfile . -t tourmix-next
err: time="20***-10-08T11:06:33Z" level=error msg="Can't add file /mnt/tourmix-main/database/mysql.sock to tar: archive/tar: sockets not supported"
out: Sending build context to Docker daemon 255MB
out: Step 1/21 : FROM node:lts as dependencies
out: lts: Pulling from library/node
out: Digest: sha256:b35e76ba744a975b9a5428b6c3cde1a1cf0be53b246e1e9a4874f87034***b5a
out: Status: Downloaded newer image for node:lts
out: ---> 946ee375d0e0
out: Step 2/21 : WORKDIR /tourmix
out: ---> Using cache
out: ---> 05e933ce4fa7
This is my Dockerfile:
FROM node:lts as dependencies
WORKDIR /tourmix
COPY package*.json ./
RUN npm install --force

FROM node:lts as builder
WORKDIR /tourmix
COPY . .
COPY --from=dependencies /tourmix/node_modules ./node_modules
RUN npx prisma generate
RUN npm run build

FROM node:lts as runner
WORKDIR /tourmix
ENV NODE_ENV production
# If you are using a custom next.config.js file, uncomment this line.
COPY --from=builder /tourmix/next.config.js ./
COPY --from=builder /tourmix/public ./public
COPY --from=builder /tourmix/.next ./.next
COPY --from=builder /tourmix/node_modules ./node_modules
COPY --from=builder /tourmix/package.json ./package.json
COPY --from=builder /tourmix/.env ./.env
# copy the prisma folder
EXPOSE 3000
CMD ["yarn", "start"]
This is my GitHub workflow file:
# This is a basic workflow that is manually triggered
name: Deploy application

# Controls when the action will run. Workflow runs when manually triggered using the UI
# or API.
on:
  push:
    branches: [master]

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "greet"
  deploy:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      - name: multiple command
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.SSH_HOST }}
          username: ${{ secrets.SSH_USER }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          port: ${{ secrets.SSH_PORT }}
          passphrase: ${{ secrets.SSH_PASSPHRASE }}
          script: |
            docker system prune -a -f
            cd /mnt/tourmix-main
            git pull origin master --rebase
            make release
            docker system prune -a -f
      - uses: actions/checkout@v3
        with:
          clean: 'true'
Start with the first error:
Add a git clean pre-step in your pipeline to clean any private files from your workspace.
If you are using GitLab as a CI/CD platform, use Git clean flags (GitLab Runner 11.10+, Q2 2019).
For a GitHub Action, if the error is on the git pull command, add a git clean -ffdx just before the git pull:
script: |
  docker system prune -a -f
  cd /mnt/tourmix-main
  git clean -ffdx                    # <====
  git stash                          # <====
  git pull origin master --rebase
  make release
  docker system prune -a -f
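For the second (Docker) error, the message itself points at the cause: tar cannot archive sockets, and the build context at /mnt/tourmix-main contains database/mysql.sock. A common fix, sketched here as an assumption rather than part of the original answer, is to exclude the socket from the build context with a .dockerignore file next to the Dockerfile:

# .dockerignore — keep the MySQL socket out of the build context
database/mysql.sock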
I'm using GitHub Actions to deploy to a Google Cloud Function. The steps in my workflow include:
steps:
  - name: "Checkout repository"
    uses: actions/checkout@v3

  # Setup Python so we can install Pipenv and generate requirements.txt.
  - name: "Setup Python"
    uses: actions/setup-python@v4
    with:
      python-version: '3.10'

  - name: "Install Pipenv"
    run: |
      pipenv requirements > requirements.txt
      ls -la
      cat requirements.txt

  - name: "Generate requirements.txt"
    run: pipenv requirements > requirements.txt

  - id: "auth"
    name: "Authenticate to Google Cloud"
    uses: "google-github-actions/auth@v0"
    with:
      workload_identity_provider: "..."
      service_account: "..."

  - id: "deploy"
    uses: "google-github-actions/deploy-cloud-functions@v0"
    with:
      name: "my-function"
      runtime: "python310"
Once I've generated the requirements.txt file, I want it to be deployed along with my application code (checked out in the step above). The requirements.txt file gets generated during the build, but it never gets deployed (confirmed by looking at the source in Cloud Functions).
How can I ensure this file is deployed along with my application code?
Update 1:
Here is the output of listing the directory contents after generating requirements.txt:
total 56
drwxr-xr-x 6 runner docker 4096 Sep 6 20:38 .
drwxr-xr-x 3 runner docker 4096 Sep 6 20:38 ..
-rw-r--r-- 1 runner docker 977 Sep 6 20:38 .env.example
-rw-r--r-- 1 runner docker 749 Sep 6 20:38 .gcloudignore
drwxr-xr-x 8 runner docker 4096 Sep 6 20:38 .git
drwxr-xr-x 3 runner docker 4096 Sep 6 20:38 .github
-rw-r--r-- 1 runner docker 120 Sep 6 20:38 .gitignore
-rw-r--r-- 1 runner docker 139 Sep 6 20:38 Pipfile
-rw-r--r-- 1 runner docker 454 Sep 6 20:38 Pipfile.lock
-rw-r--r-- 1 runner docker 1276 Sep 6 20:38 README.md
drwxr-xr-x 5 runner docker 4096 Sep 6 20:38 app
drwxr-xr-x 2 runner docker 4096 Sep 6 20:38 data
-rw-r--r-- 1 runner docker 2169 Sep 6 20:38 main.py
-rw-r--r-- 1 runner docker 27 Sep 6 20:38 requirements.txt
Update 2: Showing the contents of requirements.txt reveals that it only contains:
-i https://pypi.org/simple
No dependencies are included. This could well be the problem, but I'm not yet sure why.
Update 3: The error shown in the deploy stage of the workflow is:
ModuleNotFoundError: No module named 'aiohttp'
This is because there is no requirements.txt file to install prior to running the function. aiohttp just happens to be the first dependency listed in my source code.
As explained by @ianyoung, the problem was with the Pipfile. The requirements.txt was effectively empty (it contained only the index URL); the requirements file should list all of a project's dependencies, including the dependencies needed by the dependencies, each with its specific version pinned with a double equals sign (==).
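A minimal sketch of the fix (the package is aiohttp from Update 3; the pinned version shown is illustrative): the dependency has to be recorded in the Pipfile and Pipfile.lock before pipenv requirements can emit it:

$ pipenv install aiohttp            # records it in Pipfile and Pipfile.lock
$ pipenv requirements > requirements.txt
$ cat requirements.txt
-i https://pypi.org/simple
aiohttp==3.8.3
...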
I'm trying to mount a PVC into a MongoDB deployment without privileged access.
I've tried to set anyuid for pods via:
oc adm policy add-scc-to-user anyuid -z default --as system:admin
In the deployment I'm using a securityContext config. I've tried several combinations of fsGroup, etc.:
spec:
  securityContext:
    runAsUser: 99
    runAsGroup: 99
    supplementalGroups:
      - 99
    fsGroup: 99
When I go into the pod, the uid and gid are set correctly:
bash-4.2$ id
uid=99(nobody) gid=99(nobody) groups=99(nobody)
bash-4.2$ whoami
nobody
bash-4.2$ cd /var/lib/mongodb/data
bash-4.2$ touch test.txt
touch: cannot touch 'test.txt': Permission denied
But the pod can't write to the PVC directory:
ERROR: Couldn't write into /var/lib/mongodb/data
CAUSE: current user doesn't have permissions for writing to /var/lib/mongodb/data directory
DETAILS: current user id = 99, user groups: 99 0
DETAILS: directory permissions: drwxr-xr-x owned by 0:0, SELinux: system_u:object_r:container_file_t:s0:c234,c491
I've also tried to instantiate the MySQL template with a PVC from the OpenShift catalog, without any configuration change, and it has the same issue.
Thanks for the help.
A temporary solution is to use an init container with root privileges to change the owner of the mounted path:
initContainers:
  - name: mongodb-init
    image: alpine
    command: ["sh", "-c", "chown -R 99 /var/lib/mongodb/data"]
    volumeMounts:
      - mountPath: /var/lib/mongodb/data
        name: mongodb-pvc
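Note that the chown only succeeds if the init container itself runs as root, which on OpenShift again requires an SCC that permits it. Making that explicit (the securityContext below is an assumption added on top of the original snippet, not part of it):

initContainers:
  - name: mongodb-init
    image: alpine
    command: ["sh", "-c", "chown -R 99 /var/lib/mongodb/data"]
    securityContext:
      runAsUser: 0   # assumption: needs an SCC (e.g. anyuid/privileged) that allows UID 0
    volumeMounts:
      - mountPath: /var/lib/mongodb/data
        name: mongodb-pvc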
I'm also looking at a tool named Udica, which can generate SELinux security policies for containers: https://github.com/containers/udica
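Per its README, the basic flow is roughly the following (a sketch; verify the exact commands and template paths against the Udica docs, and note that it works from a running container's inspect output):

# generate a policy from a running container, then load it
podman inspect my_container | udica my_container
semodule -i my_container.cil /usr/share/udica/templates/base_container.cil
# run the container with the generated SELinux type
podman run --security-opt label=type:my_container.process ...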
How to set up a new Symfony project with MySQL database using Docker?
I've been trying to set up a new project using Docker for over a week now. I've read through the Docker documentation and found a few tutorials, but nothing really worked for me, and I just can't crack how the Docker setup works. The last time I tried, I just got a RuntimeException and an ErrorException.
Project Structure:
-myProject
-bin
-...
-config
-...
-docker
-build
-php
-Dockerfile
-php
-public
-index.php
-src
-...
-var
-...
-vendor
-...
-docker-compose.yaml
-...
My docker-compose.yaml:
version: '3.7'
services:
  php:
    build:
      context: .
      dockerfile: docker/build/php/Dockerfile
    ports:
      - "8100:80"
  # Configure the database
  mysql:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD:-root}
My Dockerfile:
FROM php:7.3-apache
COPY . /var/www/html/
I expected to see the "Welcome to Symfony" page, but I got an error page.
Errors:
ErrorException
Warning: file_put_contents(/var/www/html/var/cache/dev/srcApp_KernelDevDebugContainerDeprecations.log): failed to open stream: Permission denied
AND
RuntimeException
Unable to write in the cache directory (/var/www/html/var/cache/dev)
What I need is some help to set up my Symfony 4 project with MySQL using Docker.
OK, so to make it work I just needed to give permission to the var folder using chmod in the Dockerfile:
FROM php:7.3.2-apache
COPY . /var/www/html/
RUN chmod -R 777 /var/www/html/
Found this answer in the comments, but the person who left it removed the comment:
You actually have no need to chmod your project root folder to something unnecessarily open, like 0777.
In php:* containers, the PHP workers run as the www-data user. So all you need to do is chown your project root dir to www-data and verify that the www-data user can actually create folders in it (ls -lah will help you).
Here is the php stage from my Symfony 4.3 project:
FROM php:7.3-fpm as runtime
# install php ext/libraries and do other stuff.
WORKDIR /var/www/app
RUN chown -R www-data:www-data /var/www/app
COPY --chown=www-data:www-data --from=composer /app/vendor vendor
COPY --chown=www-data:www-data bin bin
COPY --chown=www-data:www-data config config
COPY --chown=www-data:www-data public public
COPY --chown=www-data:www-data src src
COPY --chown=www-data:www-data .env .env
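As for the MySQL side of the original question: with the docker-compose file above, the PHP container can reach the database through the service name mysql on the compose network. A sketch of the Symfony side (the credentials and database name here are illustrative, using the compose file's default root password):

# .env — "mysql" resolves to the mysql service defined in docker-compose.yaml
DATABASE_URL=mysql://root:root@mysql:3306/myproject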
I am getting the following error when trying to run hg update:
abort: Operation not permitted:
/var/www/simira/web/public/images/nominations/13/big/4f196667cf5a2.jpg
Here is some info:
$ cd /var/www/simira/web/public/images/nominations/13/big/
$ ll ./4f196667cf5a2.jpg
-rw-rw-r-- 1 martin portadesign 15356 Feb 2 22:10 4f196667cf5a2.jpg
$ ll -d ./
drwxrwxr-x 2 martin portadesign 4096 Feb 2 22:10 ./
$ id
uid=5004(clime) gid=5007(portadesign) groups=5007(portadesign),10(wheel),48(apache)
Tell me what is wrong, please...
The problem was caused by hg attempting to change the permissions of the file:
$ sudo hg update
$ ll ./4f196667cf5a2.jpg
-rwxrwxr-x 1 martin portadesign 15356 Feb 2 22:10 4f196667cf5a2.jpg
As can be seen, it added the executable bit to the image. That is the only permission bit that hg actually tracks, and there does not seem to be a way to switch it off. The problem is that only the owner of a file (or root) can change its permissions.
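One way to confirm that the repository really records that bit is to check the manifest, which with -v lists tracked permissions (a sketch; the repository-relative path shown is illustrative):

$ hg manifest -v | grep 4f196667cf5a2.jpg
755 * public/images/nominations/13/big/4f196667cf5a2.jpg

The 755 and * mark the file as executable in the repo, so hg update has to chmod it on disk, and only the file's owner (or root, as with sudo above) is allowed to do that.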