How to add additional permissions to custom steps in GitHub Actions

I have two steps in GitHub Actions:
The first uploads a zipped artifact:
- name: Upload artifact
  uses: actions/upload-artifact@master
  with:
    name: artifacts
    path: target/*.jar
The second uses a custom java command to read the uploaded artifact:
- name: Read artifact
  run: java -jar pipeline-scan.jar -- "artifacts.zip"
I've redacted the java command, but it's supposed to scan my zip file using Veracode. GitHub Actions returns the following error:
java -jar pipeline-scan.jar: error: argument -f/--file: Insufficient
permissions to read file: 'artifacts.zip'
I've tried changing the permissions of the GITHUB_TOKEN, but apparently you can only pass the $GITHUB_TOKEN secret to a step with a "uses" key, not one with a "run" key. I've also made sure that my default workflow permissions are set to "read and write permissions."
Does anyone know how to resolve this permissions issue?
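For reference, one possible workaround (a minimal sketch, not a tested answer): actions/upload-artifact only stores files for later jobs and does not leave an artifacts.zip on the current runner, so creating the zip explicitly in the same job lets the scan step own a readable file. The zip step and the --file flag below are assumptions, the latter inferred from the -f/--file mention in the error output:

- name: Package jars for scanning
  # Hypothetical step: zip -j flattens target/*.jar into artifacts.zip on the runner
  run: zip -j artifacts.zip target/*.jar
- name: Read artifact
  run: java -jar pipeline-scan.jar --file "artifacts.zip"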

Related

elastic beanstalk document root resolves to /var/www/html/var/www/html/

I want to deploy a Laravel site using Elastic Beanstalk.
I'm using a pipeline pulling from a Bitbucket repository.
After I created my EB application and environment, I changed the document root to /web/public, because the Laravel root is under the '[repo-root]/web' directory.
The deployment is failing:
2023/02/12 01:40:11 [error] 3857#3857: *109 "/var/www/html/var/www/html/web/public/index.php" is not found (2: No such file or directory), client: ..., server: , request: "GET / HTTP/1.1", host: "..."
A similar project where the Laravel root === the repo root and the document root is public works, but this is not ideal.
How can I configure the pipeline or EB to use the '[repo-root]/web' as the document-root?
I've unsuccessfully tried various values for the document-root, but nothing seems to work.
In another forum, someone suggested changing the pipeline to return the Laravel root as an artifact, but I'm not sure how to do this. It seems to be stored as a zip in S3, and if I change to Full Clone I get an invalid-structure error related to CodeBuild. I don't know what that means, since I'm not using CodeBuild.
TIA
While I'm sure there are a number of ways to solve this, what worked for me was using CodeBuild to pull the code from the repo and a buildspec.yml file to create a zip of just the directory required for deployment.
buildspec.yml
version: 0.2
phases:
  pre_build:
    commands:
      - cd web
      - zip -r ../web.zip ./*
artifacts:
  files:
    - web.zip
Still under CodeBuild, I configured the Artifacts to output to an S3 bucket. Then I created a CodePipeline with a Source stage that pulls the zip from the build bucket and a Deploy stage that sends the source artifact to Elastic Beanstalk (the provider). When setting up the pipeline, it seems to want you to have a 'Build' stage between Source and Deploy, but I deleted this.
It looks like you can also leverage artifact handling and let CodeBuild do the packaging (zipping). I haven't tested this. https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec.artifacts.base-directory
...
artifacts:
  files:
    - '**/*'
  base-directory: 'my-build*'
As for the weird pathing issue in the original post, I think there was some sort of EB config cache issue or corruption. When I rebuilt the environment, that error was gone.

Context error: Where does an Azure CLI command run?

I'm setting up a DevOps environment using GitHub Actions and Microsoft Azure services. One of the steps in my pipeline builds a Docker image and pushes it to Azure Container Registry (ACR). To do that, I'm using the official action.
The problem is that when my Dockerfile is built, the server cannot find the path to the files I use in it.
To make it work, I tried changing the folder I pass to the action, but with no result. Even though my Dockerfile is at the root of my project (the default value in the action), I get an error even when I explicitly give the path.
I understand that the context of the server it runs on is quite different from mine. Note that in my workflow I build the project (to generate the JAR file) before trying to build the Docker image, so the JAR file exists on the server which runs the workflow (the GitHub runner). I tried to debug the build action, and the line which fails is line 26: az acr build .... I'm actually 99% sure that all the arguments are correct, but I still get the context error.
I tried to work it out myself and searched the Azure CLI documentation, but couldn't find the information. So the question I'm asking is: does az acr build run locally in the shell which called it (see the scenario 1 image)? Or on an Azure server, which would explain why the server cannot find the JAR file (scenario 2)?
And if it is scenario 2, is there a way to pass the JAR file to az acr build and influence the server context? Or should I ignore the official action and write my own action which builds the image locally, not using the az acr build command?
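For what it's worth, az acr build uploads the directory you pass as the build context to ACR and runs the build on Azure's side, so anything outside that directory (or excluded by .dockerignore) is invisible to the build, which matches scenario 2. A minimal sketch of a workflow step, assuming the JAR was already built into target/ in an earlier step and a registry named myregistry (both placeholders):

- name: Build and push image with ACR Tasks
  # The local directory (.) is packaged and uploaded to ACR as the build context,
  # so target/devOps-0.0.1-SNAPSHOT.jar must exist here before this step runs
  run: az acr build --registry myregistry --image devops:0.0.1 .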
My Dockerfile (Spring Boot project):
FROM openjdk:11
COPY target/devOps-0.0.1-SNAPSHOT.jar devOps-0.0.1-SNAPSHOT.jar
ENTRYPOINT ["java", "-jar", "/devOps-0.0.1-SNAPSHOT.jar"]
The error I get:
Step 2/3 : COPY target/devOps-0.0.1-SNAPSHOT.jar devOps-0.0.1-SNAPSHOT.jar
COPY failed: file not found in build context or excluded by .dockerignore: stat target/devOps-0.0.1-SNAPSHOT.jar: file does not exist
2022/11/02 08:16:14 Container failed during run: build. No retries remaining.
failed to run step ID: build: exit status 1
Scenario 1: [diagram image]
Scenario 2: [diagram image]

Why is the checkout of a private repository on GitHub Actions returning "Error : fatal: could not read Username for 'https://github.com'"?

The project's local development environment makes it mandatory to have a .npmrc file with the following content:
registry=https://registry.npmjs.org/
@my-organization:registry=https://npm.pkg.github.com/
//npm.pkg.github.com/:_authToken=your-GitHub-token-should-be-here-and-I-will-not-share-my-for-security-reasons
Hence, any client properly authenticated to the GitHub Packages registry can install our private npm packages hosted for free on the GitHub registry by running:
npm ci @my-organization/our-package
Ok, it works on my local development environment.
Now, I am building a Continuous Integration process with GitHub Actions, which is a different but similar challenge. I have this in my .yaml file:
- name: Create .npmrc for token authentication
  uses: healthplace/npmrc-registry-login-action@v1.0
  with:
    scope: '@my-organization'
    registry: 'https://npm.pkg.github.com'
    # Every user has a GitHub Personal Access Token (PAT) to
    # access NPM private repos. The build of GitHub Actions is
    # symmetrical to what every developer on the project has to
    # face to build the application on their local development
    # environment. Hence, GitHub Actions also needs a token! But
    # it is NOT SAFE to insert the text of a real token in this
    # yml file. Thus, the institutional workaround is to insert
    # the `{{secret}}` below, which is set in the project
    # settings on GitHub!
    auth-token: ${{secrets.my_repo_secret_key_which_is_not_being_shared}}
On GitHub, under Settings -> Secrets -> Actions -> "add secret", I added a secret whose value is the same content I have in .npmrc.
I was expecting it to work. Unfortunately, an error message is retrieved:
Error: fatal: could not read Username for 'https://github.com': terminal prompts disabled
Why is that so?
I made the mistake of adding the full content of my .npmrc.
That is wrong, because GitHub already knows some things, such as the scope @my-organization.
Hence, the solution is adding only the following snippet (using the example provided in the question):
your-GitHub-token-should-be-here-and-I-will-not-share-my-for-security-reasons
And it works as expected :)
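In other words, the secret must contain only the raw token. The action then assembles an .npmrc on the runner that should end up looking roughly like this (a sketch based on the question's example, not the action's documented output):

@my-organization:registry=https://npm.pkg.github.com/
//npm.pkg.github.com/:_authToken=your-GitHub-token-should-be-here-and-I-will-not-share-my-for-security-reasons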

Google Cloud build always deploys my cloud-function with previous commit

I have a problem with my automated Cloud Function deployment.
I have a Cloud Function stored in a Google Cloud Source Repository.
The repo includes a cloudbuild.yaml file with this content:
steps:
  - name: "gcr.io/cloud-builders/gcloud"
    args: ["functions", "deploy", "myfunction", "--region=europe-west1"]
timeout: "1600s"
I only have one branch, master.
When I push a commit, Cloud Build triggers and deploys the Cloud Function.
The problem is that it always deploys the previous commit, not the latest.
For example:
2:23: I push my source code to the Google Source Repository.
Here is the result:
At 2:23:33, Cloud Build triggers and successfully deploys the Cloud Function.
Here is the Cloud Build log:
starting build "e3a0e735-50fc-4315-bafd-03128156d69f"
FETCHSOURCE
Initialized empty Git repository in /workspace/.git/
From https://source.developers.google.com/p/myproject/r/myrepo
* branch 1b67729b8498c35fc19a45b14b8d674635300594 -> FETCH_HEAD
HEAD is now at 1b67729 PrayingforCommit
BUILD
Already have image (with digest): gcr.io/cloud-builders/gcloud
Deploying function (may take a while - up to 2 minutes)...
...............................................done.
availableMemoryMb: 256
entryPoint: process_gcs
eventTrigger:
  eventType: google.storage.object.finalize
  failurePolicy: {}
  resource: projects/_/buckets/mybucket
  service: storage.googleapis.com
ingressSettings: ALLOW_ALL
labels:
  deployment-tool: cli-gcloud
name: projects/myproject/locations/europe-west1/functions/myfunction
runtime: python37
serviceAccountEmail: myproject@appspot.gserviceaccount.com
sourceRepository:
  deployedUrl: https://source.developers.google.com/projects/myproject/repos/myrepo/revisions/2ed14c3225e7fcc089f2bc6a0ae29c7564ec12b9/paths/
  url: https://source.developers.google.com/projects/myproject/repos/myrepo/moveable-aliases/master/paths/
status: ACTIVE
timeout: 60s
updateTime: '2020-04-15T00:24:55.184Z'
versionId: '2'
PUSH
DONE
As you can see, the commit that triggered the build is 1b67729, but the deployedUrl line says 2ed14c3, which is the previous commit.
The operation ended at 2:24:55, and I see the same time in my Cloud Function's source tab.
If I just click the edit button and then the deploy button, to manually force the Cloud Function to rebuild, it deploys the correct commit (1b67729).
Here are the parameters of the cloud function: [screenshot]
Where is my mistake with Cloud Build, and how can I always deploy the latest commit?
Thanks for your help
I have run into this same issue (though I was using GitHub mirrors rather than native Cloud Source Repositories).
Cloud Functions does not check for updates to source repos when the --source flag is omitted
Your function was previously deployed directly from a source repository and you are not passing a --source flag to gcloud. Under these circumstances gcloud ignores the code in the local directory.
The easiest fix is to explicitly specify the source in your cloudbuild.yaml:
steps:
  - name: "gcr.io/cloud-builders/gcloud"
    args: ["functions", "deploy", "myfunction", "--region=europe-west1", "--source=."]
timeout: "1600s"
You would not hit this if the function had never been configured to fetch its source directly from the repository.
Your Cloud Function has previously been deployed from a source repository
When configuring a Cloud Function, there is an option to fetch source code from a Cloud Source Repository. You are prompted with this as an option when creating a function through the console, but could also have used gcloud:
gcloud functions deploy NAME \
  --source https://source.developers.google.com/projects/PROJECT_ID/repos/REPOSITORY_ID/moveable-aliases/master/paths/SOURCE \
  [... other gcloud options ...]
This is the 'sourceRepository' setting that you can see in the build output.
However, as you can see in the console (resolved to 1b67729b), Cloud Functions is not updating its code from there.
Omitting the --source flag to gcloud leads to potentially confusing behaviour
When you leave out the --source flag to gcloud, if the function was previously deployed directly from a repository, specific behaviour applies. From the documentation for gcloud functions deploy:
If you do not specify the --source flag:
- The current directory will be used for new function deployments.
- If the function was previously deployed using a local filesystem path, then the function's source code will be updated using the current directory.
- If the function was previously deployed using a Google Cloud Storage location or a source repository, then the function's source code will not be updated.
You are hitting the third option.
When deploying through Cloud Build, it is better not to link Cloud Functions directly to the source repository
Cloud Build is configured to run gcloud in a directory containing a copy of your source code. You can therefore deploy directly from the local filesystem - this packages up the function in a zip file, uploads it to Cloud Storage, and tells Cloud Functions to fetch it.
If you tell Cloud Functions to fetch its code from a repository, then a second checkout of the repo is made and a zip file created from there, which will be slightly slower (and potentially prone to a race condition if the branch has updated in the background).
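As a quick sanity check after redeploying with --source=. (a hypothetical verification step, reusing the function name from the question), gcloud can show whether the function is now tied to uploaded source instead of the repository:

gcloud functions describe myfunction --region=europe-west1 \
  --format="yaml(sourceRepository,sourceUploadUrl)"

If sourceRepository is gone and an uploaded-source field appears instead, subsequent builds will deploy whatever Cloud Build checked out.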

Why does GitLab CI not find my JUnit report artifact?

I am trying to upload JUnit reports on GitLab CI (these are test results from my Cypress automation framework). I am using junit-merge. Due to the architecture of Cypress (each test in isolation), an extra 'merge' step is required to get the reports into one file. Locally everything works fine:
JUnit generates a single report for each test, named with a hashcode.
After all reports have been generated, I run a script (shown below) that merges all the reports into one single .xml file and outputs it into the 'results' folder.
I tried to debug it locally, but locally everything just works fine. Possibilities I could think of: either the merge script is not handled properly, or GitLab does not accept the relative path to the .xml file.
{
  "baseUrl": "https://www-acc.anwb.nl/",
  "reporter": "mocha-junit-reporter",
  "reporterOptions": {
    "mochaFile": "results/resultsreport.[hash].xml",
    "testsuiteTitle": "true"
  }
}
This is the cypress.json file, where I configured the JUnit reporter and let it output the single test files into the results folder.
cypress-e2e:
  image: cypress/base:10
  stage: test
  script:
    - npm run cy:run:staging
    - npx junit-merge -d results -o results/results.xml
  artifacts:
    paths:
      - results/results.xml
    reports:
      junit: results/results.xml
    expire_in: 1 week
This is part of the .yml file. The npx junit-merge command makes sure all .xml files in the results folder are merged into results.xml.
Again, locally everything works as expected. The error I get from GitLab CI is:
Uploading artifacts...
WARNING: results/results.xml: no matching files
ERROR: No files to upload
ERROR: Job failed: exit code 1
Artifacts can only exist in directories relative to the build directory; specifying paths which don't comply with this rule triggers an unintuitive and illogical error message (an enhancement is discussed in gitlab-ce#15530). Artifacts need to be uploaded to the GitLab instance (not only the GitLab runner) before the next stage's job(s) can start, so you need to evaluate carefully whether your bandwidth allows you to profit from parallelization with stages and shared artifacts before investing time in changes to the setup.
https://gitlab.com/gitlab-org/gitlab-ee/tree/master/doc/ci/caching
This means the following configuration should fix the problem:
artifacts:
  reports:
    junit: <testing-repo>/results/results.xml
  expire_in: 1 week
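If the path is still unclear, a throwaway debug line in the job's script section (a hypothetical addition, not part of the fix) can reveal where the merged file actually lands relative to the build directory:

script:
  - npm run cy:run:staging
  - npx junit-merge -d results -o results/results.xml
  # Debug only: print the working directory and the merged report's location
  - pwd && ls -l results/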