YAML unmarshal error cannot unmarshal !!str ` ` to str - manifest

I am trying to push my Spring Boot application to Pivotal Cloud Foundry (PCF) via a manifest.yml file.
While pushing the app I am getting the following error:
{
Pushing from manifest to org mastercard_connect / space developer-sandbox as e069875...
Using manifest file C:\Sayli\Workspace\PCF\healthwatch-api\healthwatch-api\manifest.yml
yaml: unmarshal errors:
line 6: cannot unmarshal !!str `healthw...` into []string
FAILED }
Here is the manifest.yml file:
{applications:
- name: health-watch-api
  memory: 2GB
  instances: 1
  paths: healthwatch-api-jar\target\healthwatch-api-jar-0.0.1-SNAPSHOT.jar
  services: healthwatch-api-database
}

Your manifest is not valid. The link @K.AJ posted is a good reference.
https://docs.cloudfoundry.org/devguide/deploy-apps/manifest.html
Here's an example, which uses the values from your file.
---
applications:
- name: health-watch-api
  memory: 2G
  path: healthwatch-api-jar/target/healthwatch-api-jar-0.0.1-SNAPSHOT.jar
  services:
  - healthwatch-api-database
You don't need the leading/trailing { }'s, the attribute is path not paths, and services is an array. I think the last one is what the CLI is complaining about.
Hope that helps!

I got this error while using Pulumi with GitHub Actions. The cause was a variable that was not available in the GitHub Actions YAML config. As a result, the value I was trying to add to pulumi.dev.yaml was written as 'null'; after correcting this, I could see the correct value.
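For illustration, here is a minimal sketch of the kind of GitHub Actions step involved; the variable DB_HOST, the config key dbHost, and the stack name my-stack are placeholders, not taken from the original setup. If the repository variable is not defined, the expression expands to nothing and that empty value is what ends up written to the stack config file.
# hypothetical step: DB_HOST, dbHost, and my-stack are placeholder names
- name: Set Pulumi config
  run: pulumi config set dbHost "${{ vars.DB_HOST }}" --stack my-stack
  env:
    PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}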

Using JSON Patch on Kubernetes yaml file

I'm trying to use JSON Patch on one of my Kubernetes YAML files.
apiVersion: accesscontextmanager.cnrm.cloud.google.com/v1beta1
kind: AccessContextManagerServicePerimeter
metadata:
  name: serviceperimetersample
spec:
  status:
    resources:
    - projectRef:
        external: "projects/12345"
    - projectRef:
        external: "projects/123456"
    restrictedServices:
    - "storage.googleapis.com"
    vpcAccessibleServices:
      allowedServices:
      - "storage.googleapis.com"
      - "pubsub.googleapis.com"
      enableRestriction: true
  title: Service Perimeter created by Config Connector
  accessPolicyRef:
    external: accessPolicies/0123
  description: A Service Perimeter Created by Config Connector
  perimeterType: PERIMETER_TYPE_REGULAR
I need to add another project to the perimeter (spec/status/resources).
I tried using the following command:
kubectl patch AccessContextManagerServicePerimeter serviceperimetersample --type='json' -p='[{"op": "add", "path": "/spec/status/resources/-/projectRef", "value": {"external": {"projects/01234567"}}}]'
But it resulted in error:
The request is invalid: the server rejected our request due to an error in our request
I'm pretty sure that my path is not correct because it's a nested structure. I'd appreciate any help on this.
Thank you.
I don't have the CustomResource you're using so I can't test this, but I think this should work:
kubectl patch AccessContextManagerServicePerimeter serviceperimetersample --type='json' -p='[{"op":"add","path":"/spec/status/resources/2","value":{"projectRef":{"external":"projects/12345"}}}]'
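JSON Patch (RFC 6902) also accepts - as the final path element to append to an array, which avoids hard-coding the index. A variant of the same command, untested against this CustomResource and reusing the project ID from the question:
kubectl patch AccessContextManagerServicePerimeter serviceperimetersample --type='json' -p='[{"op":"add","path":"/spec/status/resources/-","value":{"projectRef":{"external":"projects/01234567"}}}]'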

Annotation Validation Error when trying to install Vault on OpenShift

Following this tutorial on installing Vault with Helm on OpenShift, I encountered the following error after executing the command:
helm install vault hashicorp/vault -n $VAULT_NAMESPACE -f my_values.yaml
For the config:
values.yaml
echo '# Custom values for the Vault chart
global:
  # If deploying to OpenShift
  openshift: true
server:
  dev:
    enabled: true
  serviceAccount:
    create: true
    name: vault-sa
injector:
  enabled: true
authDelegator:
  enabled: true' > my_values.yaml
The error:
$ helm install vault hashicorp/vault -n $VAULT_NAMESPACE -f values.yaml
Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "vault-agent-injector-clusterrole" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "my-vault-1": current value is "my-vault-0"
What exactly is happening, or how can I reset this specific namespace to point to the right release namespace?
Have you by chance tried the exact same thing before? That is what the error is hinting at.
If we dissect the error, we get to the root of the problem:
Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists.
So something on the cluster already exists that you were trying to deploy via the helm chart.
Unable to continue with install:
Helm is aborting due to this failure
ClusterRole "vault-agent-injector-clusterrole" in namespace "" exists
So the ClusterRole vault-agent-injector-clusterrole that the Helm chart is supposed to put onto the cluster already exists. ClusterRoles aren't namespace-specific, hence the "namespace" is blank.
and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "my-vault-1": current value is "my-vault-0"
The default behavior is to try to adopt existing resources that this chart requires, but that is not possible here, because the ownership annotations on that ClusterRole point to a different release (my-vault-0) than the one being installed (my-vault-1).
To fix this, remove the existing deployment of the chart and then give it another try; it should work as expected.
Make sure all resources are gone. For this particular one you can check with kubectl get clusterroles
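A minimal cleanup sketch, assuming the earlier attempt was a Helm release also named vault installed into namespace my-vault-0 (adjust both names to whatever you actually used):
# remove the old release so Helm deletes the resources it owns
helm uninstall vault -n my-vault-0
# if the ClusterRole named in the error was left behind, delete it explicitly
kubectl delete clusterrole vault-agent-injector-clusterrole
After that, re-running the install into my-vault-1 should no longer hit the ownership conflict.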

Packer HCL2 config file support

In https://packer.io/guides/hcl/from-json-v1/, it says
Note: Starting from version 1.5.0 Packer can read HCL2 files.
And my Packer is packer_1.5.5_linux_amd64.zip, which is supposed to be able to read HCL2 files. However, when I tried it, I got
$ packer build -only=docker hcl-example
Failed to parse template: Error parsing JSON: invalid character '#' looking for beginning of value
At line 1, column 1 (offset 1):
1: #
^
==> Builds finished but no artifacts were created.
$ packer build -h
Usage: packer build [options] TEMPLATE
Will execute multiple builds in parallel as defined in the template.
The various artifacts created by the template will be outputted.
Options:
-color=false Disable color output. (Default: color)
-debug Debug mode enabled for builds.
-except=foo,bar,baz Run all builds and post-procesors other than these.
-only=foo,bar,baz Build only the specified builds.
-force Force a build to continue if artifacts exist, deletes existing artifacts.
-machine-readable Produce machine-readable output.
-on-error=[cleanup|abort|ask] If the build fails do: clean up (default), abort, or ask.
-parallel=false Disable parallelization. (Default: true)
-parallel-builds=1 Number of builds to run in parallel. 0 means no limit (Default: 0)
-timestamp-ui Enable prefixing of each ui output with an RFC3339 timestamp.
-var 'key=value' Variable for templates, can be used multiple times.
-var-file=path JSON file containing user variables. [ Note that even in HCL mode this expects file to contain JSON, a fix is comming soon ]
and I don't see any switches from above to switch to HCL2 mode.
What am I missing here?
$ packer version
Packer v1.5.5
$ cat hcl-example
# the source block is what was defined in the builders section and represents a
# reusable way to start a machine. You build your images from that source.
source "amazon-ebs" "example" {
  ami_name      = "packer-test"
  region        = "us-east-1"
  instance_type = "t2.micro"
}
[UPDATE:]
To address Matt's comment/concern, I've changed the content of hcl-example to the whole list in https://packer.io/guides/hcl/from-json-v1/, and
mv hcl-example hcl-example.hcl
$ packer validate hcl-example.hcl
Failed to parse template: Error parsing JSON: invalid character '#' looking for beginning of value
At line 1, column 1 (offset 1):
1: #
^
Naming it with the .pkr.hcl extension solved the problem.
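Packer decides how to parse a template from its file name: anything ending in .pkr.hcl is read as HCL2, while everything else is still treated as legacy JSON, which is why the original file produced a JSON parse error. A quick way to confirm, using the file from the question:
mv hcl-example hcl-example.pkr.hcl
packer validate hcl-example.pkr.hcl
packer build hcl-example.pkr.hcl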

Bluemix DevOps deployment error with launch configuration

I am trying to deploy the 'People in the News' sample application and this is the error: "Could not find launch configuration manifest nor the top-level project manifest. Please restore it or provide a project manifest."
I am following this one http://peopleinnews.mybluemix.net/deployinfo.html
When I get to step 8, the path has a '.' (period) in my UI and I cannot remove it to type peopleInNews/
peopleinthenews/manifest.yml (below)
applications:
- services:
  - ttn-cloudantNoSQLDB
  - re-service
  disk_quota: 1024M
  host: peopleinnews
  name: People In News
  command: node app.js
  path: .
  domain: mybluemix.net
  instances: 1
  memory: 512M
Then I tried changing the path manually (below):
applications:
- services:
  - ttn-cloudantNoSQLDB
  - re-service
  disk_quota: 1024M
  host: peopleinnews
  name: People In News
  command: node app.js
  path: peopleinnews/
  domain: mybluemix.net
  instances: 1
  memory: 512M
Can anyone tell me more of what this error with the project manifest is?
The manifest.yml file is formatted incorrectly. For the values you've provided, this is how it should be formatted:
applications:
- name: People In News
  disk_quota: 1024M
  host: peopleinnews
  command: node app.js
  path: peopleinnews/
  domain: mybluemix.net
  instances: 1
  memory: 512M
  services:
  - ttn-cloudantNoSQLDB
  - re-service
Specifically, services is a property at the same level as the other properties, and the service names should be list entries nested under the services property.
YAML can be a confusing syntax to author; when in doubt, use the awesome CF Manifest Generator at http://cfmanigen.mybluemix.net/ to build your manifest online.
The deployment instructions for the 'People In The News' sample application have been updated in http://peopleinnews.mybluemix.net/deployinfo.html#DeployJazzHub. The important change is in the 'Command' field within the launch configuration panel. Once the correct location of 'app.js' is specified, the deployment should go smoothly.
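If you want to sanity-check the corrected manifest outside of the DevOps pipeline, you can push it directly with the cf CLI; this is only a verification step and assumes you are already logged in and targeting your Bluemix org and space:
cf push -f manifest.yml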

YAML and JSON error aws static.config file

I'm trying to deploy a node.js application on elasticbeanstalk (I'm following directions here http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_nodejs_express.html), where the following needs to be done:
Step 6 [To update your application with a database] Point 5. On your local computer, update node-express/.ebextensions/static.config to add a production flag to the environment variables.
option_settings:
  - namespace: aws:elasticbeanstalk:container:nodejs:staticfiles
    option_name: /public
    value: /public
  - option_name: NODE_ENV
    value: production
But when I deploy, I'm getting the error:
2014-08-29 10:15:11 ERROR The configuration file .ebextensions/static.config in application version git-5376bdbd807e9f181e6a907f996068b4075dffe0-1409278503377 contains invalid YAML or JSON. YAML exception: while parsing a block mapping
in "<reader>", line 1, column 1:
option_settings:
^
expected <block end>, but found BlockEntry
in "<reader>", line 5, column 1:
- option_name: NODE_ENV
^
, JSON exception: Unexpected character (o) at position 0.. Update the configuration file.
I'm new to this and unable to figure out how to correct it. Please help.
Can you specify which text editor you are using for creating this YAML file? By any chance are you on a Windows machine? My first guess is that there could be some invalid character in your config file that is not visible in the text editor. If you have not already done so, can you double-check that there are no control characters etc. in your file? I generally check for such invalid characters in vim.
Second thing to check: YAML is sensitive about whitespace and indentation, so double-check that your indentation is correct. I found this online website for validating your YAML format; you can give it a try.
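For example, one quick way to make stray control characters or Windows line endings visible from the shell is GNU cat's -A flag (in vim, :set list does a similar job):
cat -A .ebextensions/static.config
Tabs show up as ^I and carriage returns as ^M, either of which can break YAML parsing.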
You can alternatively try the following JSON in your config file. Just replace the contents of the YAML file with this JSON and it should work.
{
  "option_settings": [
    {
      "option_name": "/public",
      "namespace": "aws:elasticbeanstalk:container:nodejs:staticfiles",
      "value": "/public"
    },
    {
      "option_name": "NODE_ENV",
      "value": "production"
    }
  ]
}
YAML as specified in the documentation should work just fine if there are no invalid characters and the indentation is correct. For now you can try the above JSON.
Try setting the namespace of the application environment --> aws:elasticbeanstalk:application:environment
YAML is more human-friendly:
option_settings:
  - namespace: aws:elasticbeanstalk:container:nodejs:staticfiles
    option_name: /public
    value: /public
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: NODE_ENV
    value: production