Manifest not found deploying from Travis CI to IBM Bluemix (CloudFoundry)

Simply put, Travis CI reports that CF cannot find the manifest file necessary to deploy my application upon build completion.
Manifest file is not found in the current directory, please provide either an app name or manifest
.travis.yml:
deploy:
  edge: true
  provider: cloudfoundry
  api: $CF_API
  username: $CF_USER
  password: $CF_PASS
  organization: $CF_ORG
  space: $CF_ENV
  on:
    branch: development
manifest.yml:
---
applications:
- name: Website
  memory: 512M
  domain: mybluemix.net
  host: Website
  buildpack: https://github.com/cloudfoundry/php-buildpack.git
Both .travis.yml and manifest.yml are in the root directory as expected.

It turns out I had a cd into a subdirectory inside the before_script of my .travis.yml. Putting the manifest into that directory fixed it.
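A minimal sketch of the situation, assuming a hypothetical build/ directory (the directory name is an assumption, not from the original post):

```yaml
# .travis.yml (sketch)
before_script:
  - cd build   # the working directory changes here...
deploy:
  edge: true
  provider: cloudfoundry
  # ...so the deploy step now looks for manifest.yml inside build/,
  # not in the repository root where it actually lives
```

Either removing the cd, or placing a copy of manifest.yml in the directory the deploy step runs from, resolves the error.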

Related

elastic beanstalk document root resolves to /var/www/html/var/www/html/

I want to deploy a laravel site using elastic beanstalk.
I'm using pipelines pulling from a BitBucket repository.
After I created my EB application and environment, I changed the document-root to /web/public because the laravel-root is under the '[repo-root]/web' directory.
The deployment is failing:
2023/02/12 01:40:11 [error] 3857#3857: *109 "/var/www/html/var/www/html/web/public/index.php" is not found (2: No such file or directory), client: ..., server: , request: "GET / HTTP/1.1", host: "..."
A similar project where the laravel-root === 'repo-root' and document-root: public works, but this is not ideal.
How can I configure the pipeline or EB to use the '[repo-root]/web' as the document-root?
I've unsuccessfully tried various values for the document-root, but nothing seems to work.
In another forum, someone suggested changing the pipeline to return the laravel-root as an artifact, but I'm not sure how to do this. It seems the source is stored as a zip in S3, and if I change to Full Clone I get an invalid-structure error related to CodeBuild. I don't know what that means, since I'm not using CodeBuild.
TIA
While I'm sure there are a number of ways to solve this, what worked for me was using CodeBuild to pull the code from the repo and a buildspec.yml file to create a zip of just the directory required for deployment.
buildspec.yml
version: 0.2
phases:
  pre_build:
    commands:
      - cd web
      - zip -r ../web.zip ./*
artifacts:
  files:
    - web.zip
Still under CodeBuild, I configured the Artifacts to output to an S3 bucket. Then I created a Code Pipeline with a Source stage that pulls the zip from the build bucket and a Deploy stage that sends the source artifact to Elastic Beanstalk (provider). When setting up the pipeline, it seems to want you to have a 'Build' stage between Source and Deploy, but I deleted this.
It looks like you can also leverage artifact handling and let CodeBuild do the packaging (zipping). I haven't tested this. https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec.artifacts.base-directory
...
artifacts:
  files:
    - '**/*'
  base-directory: 'my-build*'
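Also untested, but adapted to this question's layout it might look like the following (assuming the Laravel app lives under web/ and no build commands are needed):

```yaml
version: 0.2
phases:
  build:
    commands:
      - echo "no build step required"
artifacts:
  files:
    - '**/*'
  base-directory: web   # package only the contents of web/ into the artifact zip
```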
As for the weird pathing issue in the original post, I think there was some sort of EB config cache issue or corruption. When I rebuilt the environment, that error was gone.

How to log into the GitHub Container Registry using GitHub Actions

I am trying to write a GitHub actions script to automatically build a docker image, and then push it into the GitHub Container Registry when new code is checked in to the main branch. This is the code that I'm using to try to log into the container registry:
name: Build and publish Docker image.
on: [push]
jobs:
  publish-docker-image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Login to GitHub Container Registry
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.CONTAINER_REG_ACCESS }}
For more context, the CONTAINER_REG_ACCESS secret is a personal access token, though it was created by a different member of my organization.
This error shows up in GitHub after it runs its automated tests on my pull request.
Run docker/login-action@v1
Logging into ghcr.io...
Error: Error response from daemon: Get "https://ghcr.io/v2/": denied: denied
What is the best practice for logging into the GitHub Container Registry from a GitHub Actions script? Is there a way to log in using an organization's credentials instead of my personal GitHub ID?
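For reference, the pattern shown in GitHub's own documentation is to use the workflow's built-in GITHUB_TOKEN rather than a personal access token, together with the packages: write permission on the job (a sketch, not a verified fix for this exact setup):

```yaml
permissions:
  contents: read
  packages: write
steps:
  - uses: actions/checkout@v2
  - name: Login to GitHub Container Registry
    uses: docker/login-action@v1
    with:
      registry: ghcr.io
      username: ${{ github.actor }}
      password: ${{ secrets.GITHUB_TOKEN }}
```

A PAT created by another member only works if it has the read:packages/write:packages scopes and its owner has access to the target packages.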

Why is the checkout of a private repository on GitHub Actions returning "Error : fatal: could not read Username for 'https://github.com'"?

The project's local development environment makes it mandatory to have a .npmrc file with the following content:
registry=https://registry.npmjs.org/
@my-organization:registry=https://npm.pkg.github.com/
//npm.pkg.github.com/:_authToken=your-GitHub-token-should-be-here-and-I-will-not-share-my-for-security-reasons
Hence, any client properly authenticated into the GitHub Packages
Registry can install our private NPM packages hosted for free on GitHub Registry by running:
npm ci @my-organization/our-package
Ok, it works on my local development environment.
Now, I am building a Continuous Integration process with GitHub Actions which is a different but similar challenge. I have this on my .yaml file:
- name: Create .npmrc for token authentication
  uses: healthplace/npmrc-registry-login-action@v1.0
  with:
    scope: '@my-organization'
    registry: 'https://npm.pkg.github.com'
    # Every user has a GitHub Personal Access Token (PAT) to
    # access NPM private repos. The build on GitHub Actions is
    # symmetrical to what every developer on the project has to
    # face to build the application on their local development
    # environment. Hence, GitHub Actions also needs a token! But
    # it is NOT SAFE to insert the text of a real token in this
    # yml file. Thus, the institutional workaround is to use the
    # secrets reference below, which is set in the project
    # settings on GitHub!
  auth-token: ${{secrets.my_repo_secret_key_which_is_not_being_shared}}
On GitHub, under Settings -> Secrets -> Actions -> "New repository secret", I added as the secret value the same content I have in my .npmrc.
I was expecting it to work. Unfortunately, an error message is retrieved:
Error: fatal: could not read Username for 'https://github.com': terminal prompts disabled
Why is that so?
I made the mistake of adding the entire content of my .npmrc as the secret value. That is wrong: GitHub already knows some of it, such as the scope (@my-organization).
Hence, the solution is to add only the following snippet (using the example provided in the question):
your-GitHub-token-should-be-here-and-I-will-not-share-my-for-security-reasons
And it works as expected :)
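In other words (assuming the action simply substitutes the secret into the auth-token line), the secret should contain only the raw token, and the action reconstructs an .npmrc equivalent to the local one:

```
@my-organization:registry=https://npm.pkg.github.com/
//npm.pkg.github.com/:_authToken=<the-secret-value-goes-here>
```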

How to place files outside app deployment directory in AWS Elastic Beanstalk?

In AWS EB, how can I place my environment.properties file (which contains app runtime config like port, logs dir, DB info, and security keys) under /var/env_config/myapp, so the app can read it at runtime?
My further plan is to put this environment.properties in a secure non-app directory of the local or a remote file system, as it contains sensitive information.
global.env = propsReader(path.join(process.env.ENV_PATH, 'env-main.properties'));
On the EB, I have added an Environment property 'ENV_PATH = /var/env_config/myapp'
EB logs:
web: > myapp@1.0.0 start /var/app/current
web: > node src/app-main.js
web: 8266 [
web: '/opt/elasticbeanstalk/node-install/node-v12.18.1-linux-x64/bin/node',
web: '/var/app/current/src/app-main.js'
web: ]
web: /var/env_config/myapp
web: internal/fs/utils.js:230
web: throw err;
web: ^
web: Error: ENOENT: no such file or directory, open '/var/env_config/myapp/env-main.properties'
I just want to deploy my application the same way in AWS EB, Docker, a VM, or a local machine, with just an environment property saying where the required runtime input files are.
How to access Elastic Beanstalk file system to configure my .properties file?
Not sure what you mean by "accessing the file system", but usually you would create a .ebextensions folder in your project directory. The extensions are commonly used for running commands or scripts when you are deploying your app. There are special sections for that:
commands: You can use the commands key to execute commands on the EC2 instance. The commands run before the application and web server are set up and the application version file is extracted.
container_commands: You can use the container_commands key to execute commands that affect your application source code. Container commands run after the application and web server have been set up and the application version archive has been extracted, but before the application version is deployed.
Therefore, you could use the above sections to modify your .properties file during deployment of your application into EB.
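For example (a sketch; the config file name, the bundled source path, and the use of cp are assumptions), a file in .ebextensions could create the target directory and copy the properties file into place during deployment:

```yaml
# .ebextensions/01-env-config.config (hypothetical name)
container_commands:
  01_make_env_dir:
    command: "mkdir -p /var/env_config/myapp"
  02_copy_props:
    # assumes env-main.properties is shipped somewhere inside the app bundle
    command: "cp config/env-main.properties /var/env_config/myapp/"
```

Container commands run from the staging directory of the extracted application bundle, so relative paths refer to files inside the deployed source.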

Configure settings.xml for Openshift 3.10 S2I Maven Builds

I would like to customize settings.xml for s2i maven builds in Openshift 3.10. While this is easily done in version 3.11 using config maps:
https://docs.openshift.com/container-platform/3.11/dev_guide/builds/build_inputs.html#using-secrets-during-build
I did not find any solution for 3.10. Is there a workaround / solution for this?
thank you!
In 3.11, you can create a ConfigMap for your settings.xml file
$ oc create configmap settings-mvn --from-file=settings.xml=<path/to/settings.xml>
And use that to override it in your build. (Source)
source:
  git:
    uri: https://github.com/wildfly/quickstart.git
    contextDir: helloworld
  configMaps:
    - configMap:
        name: settings-mvn
As you point out, in 3.10 there is no support for ConfigMaps in BuildConfigs. However, you can create a secret with the same content
$ oc create secret generic settings-mvn --from-file=settings.xml=<path/to/settings.xml>
And use that to override it in your build. (Source)
source:
  git:
    uri: https://github.com/wildfly/quickstart.git
    contextDir: helloworld
  secrets:
    - secret:
        name: settings-mvn
Alternatively, you can also include the settings.xml file in your git repo in order to override the default settings.xml. Simply placing your file at source_dir/configuration/settings.xml should be sufficient. (Source)