Bluemix DevOps deployment error with launch configuration - manifest

I am trying to deploy the People In The News sample application and this is the error: Could not find launch configuration manifest nor the top-level project manifest. Please restore it or provide a project manifest.
I am following this guide: http://peopleinnews.mybluemix.net/deployinfo.html
When I get to step 8, the path shows a '.' (period) in my UI and I cannot remove it in order to type peopleInNews/
peopleinthenews/manifest.yml (below):
applications:
- services:
  - ttn-cloudantNoSQLDB
  - re-service
  disk_quota: 1024M
  host: peopleinnews
  name: People In News
  command: node app.js
  path: .
  domain: mybluemix.net
  instances: 1
  memory: 512M
Then I tried changing the path manually (below):
applications:
- services:
  - ttn-cloudantNoSQLDB
  - re-service
  disk_quota: 1024M
  host: peopleinnews
  name: People In News
  command: node app.js
  path: peopleinnews/
  domain: mybluemix.net
  instances: 1
  memory: 512M
Can anyone tell me more about what this project manifest error means?

The manifest.yml file is formatted incorrectly. For the values you've provided, this is how it should be formatted:
applications:
- name: People In News
  disk_quota: 1024M
  host: peopleinnews
  command: node app.js
  path: peopleinnews/
  domain: mybluemix.net
  instances: 1
  memory: 512M
  services:
  - ttn-cloudantNoSQLDB
  - re-service
Specifically, services is a property at the same level as the other properties, and the service names should be nested as list entries under the services property.
YAML can be a confusing syntax to author; when in doubt, use the awesome CF Manifest Generator at http://cfmanigen.mybluemix.net/ to build your manifest online.
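To make the nesting rule concrete, here is a minimal sketch with placeholder names showing the shape the CLI expects:
applications:
- name: my-app          # first property of the application entry
  memory: 256M
  services:             # sibling of the other application properties
  - my-db-service       # each bound service is a list entry under services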

The deployment instructions for the 'People In The News' sample application have been updated at http://peopleinnews.mybluemix.net/deployinfo.html#DeployJazzHub. The important change is in the 'Command' field within the launch configuration panel. Once the correct location of app.js is specified, the deployment should go smoothly.

Related

mkdir /.gitlab-runner: permission denied running GitLab Runner in Kubernetes deployed via Helm

I'm trying to deploy the GitLab Runner (15.7.1) onto an on-premise Kubernetes cluster and getting the following error:
PANIC: loading system ID file: saving system ID state file: creating directory: mkdir /.gitlab-runner: permission denied
This is occurring with both the 15.7.1 image (Ubuntu?) and the alpine3.13-v15.7.1 image. Looking at the deployment, it looks like it should be trying to use /home/gitlab-runner, but for some reason it is trying to use root (/), which is a protected directory.
Anyone else experience this issue or have a suggestion as to what to look at?
I am using the Helm chart (0.48.0) with a copy of the images from Docker Hub (simply moved into a local repository, as internet access is not available from the cluster). Connectivity to GitLab appears to be working, but the error causes the overall startup to fail. Full logs are:
Registration attempt 4 of 30
Runtime platform arch=amd64 os=linux pid=33 revision=6d480948 version=15.7.1
WARNING: Running in user-mode.
WARNING: The user-mode requires you to manually start builds processing:
WARNING: $ gitlab-runner run
WARNING: Use sudo for system-mode:
WARNING: $ sudo gitlab-runner...
Created missing unique system ID system_id=r_Of5q3G0yFEVe
PANIC: loading system ID file: saving system ID state file: creating directory: mkdir /.gitlab-runner: permission denied
I have tried the 15.7.1 image, the alpine3.13-v15.7.1 image, and the gitlab-runner-ocp:amd64-v15.7.1 image and searched the values.yaml for anything relevant to the path. Looking at the deployment template, it appears that it ought to be using /home/gitlab-runner as the directory (instead of /) [though the docs suggested it was /home].
As for "what was I expecting", of course I was expecting that it would "just work" :)
So, I resolved this (and other) issues with:
1. Updated the Helm deployment template to mount an empty volume at /.gitlab-runner (see the values sketch below).
2. [separate issue] Explicitly added builds_dir and environment [per gitlab-org/gitlab-runner#3511 (comment 114281106)].
These two steps appeared to be sufficient to get the Helm chart deployment working.
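A minimal sketch of the values.yaml additions, assuming a gitlab-runner chart version that supports the top-level volumes and volumeMounts values (check the key names against your chart version; the volume name below is a placeholder):
volumeMounts:
  - name: runner-home          # placeholder name
    mountPath: /.gitlab-runner
volumes:
  - name: runner-home
    emptyDir: {}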
You can easily create and mount the emptyDir yourself (in case you are creating the gitlab-runner with a Kubernetes manifest *.yml file):
volumes:
  - emptyDir: {}
    name: gitlab-runner
volumeMounts:
  - name: gitlab-runner
    mountPath: /.gitlab-runner
-------------------- OR --------------------
volumeMounts:
  - name: root-gitlab-runner
    mountPath: /.gitlab-runner
volumes:
  - name: root-gitlab-runner
    emptyDir:
      medium: "Memory"

Google Cloud Build windows builder error "Failed to get external IP address: Could not get external NAT IP from list"

I am trying to implement automatic deployments for my Windows Kubernetes container app. I'm following the instructions from Google's windows-builder, but the trigger quickly fails with this error at about 1.5 minutes in:
2021/12/16 19:30:06 Set ingress firewall rule successfully
2021/12/16 19:30:06 Failed to get external IP address: Could not get external NAT IP from list
ERROR
ERROR: build step 0 "gcr.io/[my-project-id]/windows-builder" failed: step exited with non-zero status: 1
The container, gcr.io/[my-project-id]/windows-builder, definitely exists, and it is located in the same GCP project as the Cloud Build trigger, just as the windows-builder documentation instructs.
I structured my code based on Google's docker-windows example. Here is my repository file structure:
repository
  cloudbuild.yaml
  builder.ps1
  worker
    Dockerfile
Here is my cloudbuild.yaml:
steps:
# WORKER
- name: 'gcr.io/[my-project-id]/windows-builder'
  args: [ '--command', 'powershell.exe -file build.ps1' ]
# OPTIONS
options:
  logging: CLOUD_LOGGING_ONLY
Here is my builder.ps1:
docker build -t gcr.io/[my-project-id]/test-worker ./worker;
if ($?) {
    docker push gcr.io/[my-project-id]/test-worker;
}
Here is my Dockerfile:
FROM gcr.io/[my-project-id]/test-windows-node-base:onbuild
Does anybody know what I'm doing wrong here? Any help would be appreciated.
I replicated the steps from GitHub and got the same error. It throws the Failed to get external IP address... error because the external IP address of the VM is disabled by default in the source code. I was able to build successfully by adding '--create-external-ip', 'true' in cloudbuild.yaml.
Here is my cloudbuild.yaml:
steps:
- name: 'gcr.io/$PROJECT_ID/windows-builder'
  args: [ '--create-external-ip', 'true',
          '--command', 'powershell.exe -file build.ps1' ]

Unable to load the pictures from MySQL in laravel on google cloud app engine

I deployed a Laravel project on Google Cloud App Engine and everything is working fine except the pictures that I am fetching from the products table in the MySQL database.
The images are broken, although everything works fine on localhost.
Here's my app.yaml:
runtime: php73
handlers:
- url: /assets
  static_dir: public/assets
- url: /(.+\.(gif|png|jpg))$
  static_files: public/uploads
  upload: .+\.(gif|png|jpg)$
runtime_config:
  document_root: public
env_variables:
  ## Put production environment variables here.
  APP_KEY: Already Get this from .env
  APP_STORAGE: /tmp
  VIEW_COMPILED_PATH: /tmp
  SESSION_DRIVER: cookie
  CACHE_DRIVER: database
  ## Set these environment variables according to your CloudSQL configuration.
  DB_DATABASE: ufurnitures
  DB_USERNAME: ufurnitures
  DB_PASSWORD: Already set the pass
  DB_SOCKET: /cloudsql/ufurniture:us-central1:ufurnitures
  ## To use Stackdriver logging in your Laravel application, copy
  ## "app/Logging/CreateStackdriverLogger.php" and "config/logging.php"
  ## into your Laravel application. Then uncomment the following line:
  # LOG_CHANNEL: stackdriver
beta_settings:
  # for Cloud SQL, set this value to the Cloud SQL connection name,
  # e.g. "project:region:cloudsql-instance"
  cloud_sql_instances: ufurniture:us-central1:ufurnitures
I have an 'uploads' folder in the public directory where the pictures are saved.
Just change this:
- url: /(.+\.(gif|png|jpg))$
  static_files: public/uploads
  upload: .+\.(gif|png|jpg)$
To this!
- url: /uploads
  static_dir: public/uploads
And hurray, it worked! ^_^
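Assuming the rest of your app.yaml stays the same, the handlers section would then look roughly like this:
handlers:
- url: /assets
  static_dir: public/assets
- url: /uploads
  static_dir: public/uploads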

YAML unmarshal error cannot unmarshal !!str ` ` to str

I am trying to push my Spring Boot application to Pivotal Cloud Foundry (PCF) via a manifest.yml file.
While pushing the app I am getting the following error:
{
Pushing from manifest to org mastercard_connect / space developer-sandbox as e069875...
Using manifest file C:\Sayli\Workspace\PCF\healthwatch-api\healthwatch-api\manifest.yml
yaml: unmarshal errors:
line 6: cannot unmarshal !!str `healthw...` into []string
FAILED }
Here is the manifest.yml file:
{applications:
- name: health-watch-api
  memory: 2GB
  instances: 1
  paths: healthwatch-api-jar\target\healthwatch-api-jar-0.0.1-SNAPSHOT.jar
  services: healthwatch-api-database
}
Your manifest is not valid. The link that @K.AJ posted is a good reference:
https://docs.cloudfoundry.org/devguide/deploy-apps/manifest.html
Here's an example, which uses the values from your file.
---
applications:
- name: health-watch-api
  memory: 2G
  path: healthwatch-api-jar/target/healthwatch-api-jar-0.0.1-SNAPSHOT.jar
  services:
  - healthwatch-api-database
You don't need the leading/trailing { }'s, it's path not paths, and services is an array. I think the last one is what the CLI is complaining about the most.
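To illustrate why the CLI complains about that line, compare the two forms (service name taken from your manifest):
# not valid for the cf CLI: a single string where a list is expected
services: healthwatch-api-database

# valid: services is a list of strings
services:
- healthwatch-api-database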
Hope that helps!
I got this error while using Pulumi with GitHub Actions. The cause was not having a variable available in the GitHub Actions YAML config. This resulted in the value I was trying to add to pulumi.dev.yaml being set to 'null'; after correcting this, I could see the correct value.

Setup environment variable for doctrine and elasticsearch

How do I set up environment variables for Symfony?
When I run my project, it should detect the environment and act accordingly, for example:
http://production.com -> prod environment
http://localhost:9200 -> dev environment (for Elasticsearch)
http://localhost:8000 -> dev environment (for Doctrine/MySQL)
So if I run a MySQL request on localhost it should make the request at http://localhost:8000, and if I make a request to Elasticsearch it should make the request at http://localhost:9200. If it runs in the production environment, it should make the requests at http://production.com:9200 (Elasticsearch) and http://production.com:8000 (Doctrine/MySQL).
I think it can be done in parameters.yml, but I did not really understand how. Can someone help me solve this problem?
Thanks a lot in advance.
I'm not exactly sure what the problem is here, so I'll give you a more general answer.
Symfony has a really great way to configure your project for different situations (or environments). You should have a look at the official documentation which explains things in depth.
By default, Symfony comes with 3 configurations for different environments:
app/config/config_dev.yml for development
app/config/config_prod.yml for production
app/config/config_test.yml for (unit) testing
Each of these config files can override settings from the base configuration file which is app/config/config.yml. You would store your general/common settings there. Whenever you need to override something for a specific environment, you just go to the environment config and change it.
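For context, each environment file imports the base configuration before overriding it; in a Symfony Standard Edition project, app/config/config_dev.yml typically starts with something like:
imports:
    - { resource: config.yml }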
Let's say you have the following base configuration in app/config/config.yml:
# Doctrine Configuration
doctrine:
    dbal:
        driver: pdo_mysql
        host: "%prod_database_host%"
        port: "%prod_database_port%"
        dbname: "%prod_database_name%"
        user: "%prod_database_user%"
        password: "%prod_database_password%"
        charset: UTF8
Now let's say you have 3 different databases, one for each environment: prod, dev, and test. The way to do this is to override the configuration in the environment configuration file (let's say app/config/config_dev.yml):
# Doctrine Configuration
doctrine:
    dbal:
        driver: pdo_mysql
        host: "%dev_database_host%"
        port: "%dev_database_port%"
        dbname: "%dev_database_name%"
        user: "%dev_database_user%"
        password: "%dev_database_password%"
        charset: UTF8
Add the necessary %dev_*% parameters to your app/config/parameters.yml.dist and app/config/parameters.yml. Now, whenever you open your application using the dev environment, it will connect to the database specified in your parameters (%dev_database...%).
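For reference, the matching entries in app/config/parameters.yml might look like this (all values below are placeholders):
parameters:
    dev_database_host: 127.0.0.1
    dev_database_port: 3306
    dev_database_name: my_dev_db
    dev_database_user: dev_user
    dev_database_password: dev_secret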
This is pretty much it. You can do the same for any configuration that needs to change in a specific environment. You should definitely have a look at the documentation; it is explained in a straightforward way with examples.