YAML and JSON error in AWS static.config file

I'm trying to deploy a node.js application on elasticbeanstalk (I'm following directions here http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_nodejs_express.html), where the following needs to be done:
Step 6 [To update your application with a database] Point 5. On your local computer, update node-express/.ebextensions/static.config to add a production flag to the environment variables.
option_settings:
  - namespace: aws:elasticbeanstalk:container:nodejs:staticfiles
    option_name: /public
    value: /public
  - option_name: NODE_ENV
    value: production
But when I deploy, I'm getting the error:
2014-08-29 10:15:11 ERROR The configuration file .ebextensions/static.config in application version git-5376bdbd807e9f181e6a907f996068b4075dffe0-1409278503377 contains invalid YAML or JSON. YAML exception: while parsing a block mapping
in "<reader>", line 1, column 1:
option_settings:
^
expected <block end>, but found BlockEntry
in "<reader>", line 5, column 1:
- option_name: NODE_ENV
^
, JSON exception: Unexpected character (o) at position 0.. Update the configuration file.
I'm new to this and unable to figure out how to correct it. Please help.

Can you specify which text editor you are using to create this YAML file? By any chance are you on a Windows machine? My first guess is that there could be some invalid character in your config file that is not visible in the text editor. If you have not already done so, double-check that there are no control characters or similar in your file; I generally check for such invalid characters in vim.
The second thing to check: YAML is sensitive to whitespace and indentation, so double-check that your indentation is correct. There are online sites for validating YAML format; you can give one a try.
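If it helps, here is a rough sketch of how I check for hidden characters and validate the file locally (standard command-line tools; the Python check assumes PyYAML is installed):
# Reveal control characters and Windows line endings (shown as ^M):
cat -A .ebextensions/static.config
# Or open the file in vim with invisible characters made visible:
vim -c "set list" .ebextensions/static.config
# Strip carriage returns if the file was saved with CRLF line endings (GNU sed):
sed -i 's/\r$//' .ebextensions/static.config
# Sanity-check that the file parses as YAML at all:
python -c "import yaml; yaml.safe_load(open('.ebextensions/static.config')); print('valid YAML')"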
You can alternatively try the following JSON in your config file. Just replace the contents of the YAML file with this JSON and it should work.
{
  "option_settings": [
    {
      "option_name": "/public",
      "namespace": "aws:elasticbeanstalk:container:nodejs:staticfiles",
      "value": "/public"
    },
    {
      "option_name": "NODE_ENV",
      "value": "production"
    }
  ]
}
The YAML as specified in the documentation should work just fine as long as there are no invalid characters and the indentation is correct. For now you can try the above JSON.

Try setting the namespace for the application environment variable to aws:elasticbeanstalk:application:environment.
YAML is more human-friendly:
option_settings:
  - namespace: aws:elasticbeanstalk:container:nodejs:staticfiles
    option_name: /public
    value: /public
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: NODE_ENV
    value: production
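Once the namespace and the "production" typo are fixed, redeploy so Elastic Beanstalk re-reads the config. A rough sketch, assuming the git-based workflow from the tutorial linked in the question:
git add .ebextensions/static.config
git commit -m "Fix option_settings in static.config"
# The 2014 tutorial uses the legacy git-based deploy; with the newer EB CLI this is `eb deploy`:
git aws.push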

Related

Annotation Validation Error when trying to install Vault on OpenShift

Following this tutorial on installing Vault with Helm on OpenShift, I encountered the following error after executing the command:
helm install vault hashicorp/vault -n $VAULT_NAMESPACE -f my_values.yaml
For the config:
values.yaml
echo '# Custom values for the Vault chart
global:
  # If deploying to OpenShift
  openshift: true
server:
  dev:
    enabled: true
  serviceAccount:
    create: true
    name: vault-sa
injector:
  enabled: true
  authDelegator:
    enabled: true' > my_values.yaml
The error:
$ helm install vault hashicorp/vault -n $VAULT_NAMESPACE -f values.yaml
Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "vault-agent-injector-clusterrole" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "my-vault-1": current value is "my-vault-0"
What exactly is happening, and how can I reset this specific namespace to point to the right release namespace?
Have you by chance tried the exact same install before? That is what the error is hinting at.
If we dissect the error, we get to the root of the problem:
Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists.
So something on the cluster already exists that you were trying to deploy via the helm chart.
Unable to continue with install:
Helm is aborting due to this failure
ClusterRole "vault-agent-injector-clusterrole" in namespace "" exists
So the cluster role vault-agent-injector-clusterrole that the Helm chart is supposed to put onto the cluster already exists. ClusterRoles aren't namespace-specific, hence the "namespace" is blank.
and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "my-vault-1": current value is "my-vault-0"
The default behavior is to try to import existing resources that this chart requires, but that is not possible here, because the owner of that ClusterRole is a different release from the one you are deploying.
To fix this, you can remove the existing deployment of your chart and then give it another try; it should then work as expected.
Make sure all resources are gone. For this particular one you can check with kubectl get clusterroles.
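A rough sketch of that cleanup, assuming the earlier release was also called vault and was installed into the my-vault-0 namespace mentioned in the error:
# Remove the old release that still owns the ClusterRole:
helm uninstall vault -n my-vault-0
# If the ClusterRole was left behind anyway, delete it manually:
kubectl delete clusterrole vault-agent-injector-clusterrole
# Verify nothing vault-related remains, then retry the install:
kubectl get clusterroles | grep vault
helm install vault hashicorp/vault -n $VAULT_NAMESPACE -f my_values.yaml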

Packer HCL2 config file support

In https://packer.io/guides/hcl/from-json-v1/, it says
Note: Starting from version 1.5.0 Packer can read HCL2 files.
And my Packer is packer_1.5.5_linux_amd64.zip, which is supposed to be able to read HCL2 files. However, when I tried it, I got:
$ packer build -only=docker hcl-example
Failed to parse template: Error parsing JSON: invalid character '#' looking for beginning of value
At line 1, column 1 (offset 1):
1: #
^
==> Builds finished but no artifacts were created.
$ packer build -h
Usage: packer build [options] TEMPLATE
Will execute multiple builds in parallel as defined in the template.
The various artifacts created by the template will be outputted.
Options:
-color=false Disable color output. (Default: color)
-debug Debug mode enabled for builds.
-except=foo,bar,baz Run all builds and post-processors other than these.
-only=foo,bar,baz Build only the specified builds.
-force Force a build to continue if artifacts exist, deletes existing artifacts.
-machine-readable Produce machine-readable output.
-on-error=[cleanup|abort|ask] If the build fails do: clean up (default), abort, or ask.
-parallel=false Disable parallelization. (Default: true)
-parallel-builds=1 Number of builds to run in parallel. 0 means no limit (Default: 0)
-timestamp-ui Enable prefixing of each ui output with an RFC3339 timestamp.
-var 'key=value' Variable for templates, can be used multiple times.
-var-file=path JSON file containing user variables. [ Note that even in HCL mode this expects file to contain JSON, a fix is coming soon ]
and I don't see any switch above to enable HCL2 mode.
What am I missing here?
$ packer version
Packer v1.5.5
$ cat hcl-example
# the source block is what was defined in the builders section and represents a
# reusable way to start a machine. You build your images from that source.
source "amazon-ebs" "example" {
  ami_name      = "packer-test"
  region        = "us-east-1"
  instance_type = "t2.micro"
}
[UPDATE:]
To address Matt's comment/concern, I've changed the content of hcl-example to the whole list in https://packer.io/guides/hcl/from-json-v1/, and
mv hcl-example hcl-example.hcl
$ packer validate hcl-example.hcl
Failed to parse template: Error parsing JSON: invalid character '#' looking for beginning of value
At line 1, column 1 (offset 1):
1: #
^
Naming it with a .pkr.hcl extension solved the problem.
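In other words, Packer decides how to parse a template from its file name: anything that does not end in .pkr.hcl is treated as a legacy JSON template, which is why the leading # comment trips the JSON parser. A minimal sketch of the fix:
mv hcl-example.hcl hcl-example.pkr.hcl
packer validate hcl-example.pkr.hcl
packer build hcl-example.pkr.hcl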

How to use s3cmd for CSS static website hosting?

I am having an issue linking main.css with my index.html file on AWS S3. I've attempted to use s3cmd, but I keep running into this error...
chrisscott#ChristophersMBP ~ % s3cmd put -m Documents/Code/html+css/api-search/css/main.css s3://iniquityscure.org/main.css
/usr/local/bin/s3cmd:308: SyntaxWarning: "is" with a literal. Did you mean "=="?
if response["status"] is 200:
/usr/local/bin/s3cmd:310: SyntaxWarning: "is" with a literal. Did you mean "=="?
elif response["status"] is 204:
Usage: s3cmd [options] COMMAND [parameters]
s3cmd: error: option -m: invalid MIME-Type format: 'Documents/Code/html+css/api-search/css/main.css'
I think there are two options:
You can try without "-m", and s3cmd will set the MIME type automatically.
Or you can set the correct MIME type explicitly: -m "text/css".
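A quick sketch of both variants, reusing the path and bucket from the question (the -m flag expects a MIME type, not a file path, which is what the error is pointing out):
# Let s3cmd guess the MIME type from the .css extension:
s3cmd put Documents/Code/html+css/api-search/css/main.css s3://iniquityscure.org/main.css
# Or force it explicitly:
s3cmd put -m "text/css" Documents/Code/html+css/api-search/css/main.css s3://iniquityscure.org/main.css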

YAML unmarshal error cannot unmarshal !!str ` ` to str

I am trying to push my Spring Boot application to Pivotal Cloud Foundry (PCF) via a manifest.yml file.
While pushing the app I am getting the following error:
{
Pushing from manifest to org mastercard_connect / space developer-sandbox as e069875...
Using manifest file C:\Sayli\Workspace\PCF\healthwatch-api\healthwatch-api\manifest.yml
yaml: unmarshal errors:
line 6: cannot unmarshal !!str `healthw...` into []string
FAILED }
Here is the manifest.yml file:
{applications:
- name: health-watch-api
  memory: 2GB
  instances: 1
  paths: healthwatch-api-jar\target\healthwatch-api-jar-0.0.1-SNAPSHOT.jar
  services: healthwatch-api-database
}
Your manifest is not valid. The link #K.AJ posted is a good reference.
https://docs.cloudfoundry.org/devguide/deploy-apps/manifest.html
Here's an example, which uses the values from your file.
---
applications:
- name: health-watch-api
  memory: 2G
  path: healthwatch-api-jar/target/healthwatch-api-jar-0.0.1-SNAPSHOT.jar
  services:
  - healthwatch-api-database
You don't need the leading/trailing { }'s, it's path not paths, and services is an array. I think the last one is what the CLI is complaining about the most.
Hope that helps!
I got this error while using Pulumi with GitHub Actions. The cause was a variable that was not available in the GitHub Actions YAML config. This resulted in the value I was trying to add to pulumi.dev.yaml being written as 'null'; after correcting this, I could see the correct value.

AWS Beanstalk Application Health Check

We use Beanstalk to deploy node applications, and it works very well. I've created a couple of config files in an .ebextensions directory to apply configuration info to our apps when we load them up. Again, this mostly works well. The one thing that does not is defining the application health check URL; I can't get it to go. One odd thing about it: it seems to be the only parameter I have come across so far that has spaces in it, and I'm wondering about that. I have tried enclosing the values in quotes, just to see if that is the problem, but it still doesn't work. Has anyone done this before, and can you tell me whether it works, and whether there is something syntactically incorrect here? As I said, the rest of the params get set correctly in Beanstalk; just the last one doesn't. Note: #environment# gets replaced by a grunt script before this gets deployed.
Here's the config file:
option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: NODE_ENV
    value: #environment#
  - namespace: aws:elasticbeanstalk:container:nodejs
    option_name: NodeVersion
    value: 0.10.10
  - namespace: aws:autoscaling:trigger
    option_name: LowerThreshold
    value: 40
  - namespace: aws:autoscaling:trigger
    option_name: MeasureName
    value: CPUUtilization
  - namespace: aws:autoscaling:trigger
    option_name: UpperThreshold
    value: 60
  - namespace: aws:autoscaling:trigger
    option_name: Unit
    value: Percent
  - namespace: aws:elasticbeanstalk:application
    option_name: Application Healthcheck URL
    value: /load_balance_test
Adding this worked for me:
# .ebextensions/healthcheckurl.config
option_settings:
  - namespace: aws:elasticbeanstalk:application
    option_name: Application Healthcheck URL
    value: /health
  - namespace: aws:elasticbeanstalk:environment:process:default
    option_name: HealthCheckPath
    value: /health
I discovered the second setting by doing eb config, which gives a nice overview of environment settings that can be overridden with option_settings in .ebextensions/yet-another.config files.
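For reference, a quick sketch of that lookup (assumes the EB CLI is installed and the project has been initialised with eb init):
# Opens the environment's current configuration, grouped by namespace, so you
# can see which option names exist and can be overridden:
eb config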
The spaces in this property name are weird, but it works when used with the alternative shorthand syntax for options:
option_settings:
  aws:elasticbeanstalk:application:
    Application Healthcheck URL: /
I use CloudFormation for EB, and in CF the syntax for that parameter is very strange. If that config file works the same as CF, the following string should work for you:
HTTP:80/load_balance_test
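For comparison, this is roughly where that string goes in a CloudFormation AWS::ElasticBeanstalk::Environment definition (a sketch, not taken from the question's template):
OptionSettings:
  - Namespace: aws:elasticbeanstalk:application
    OptionName: Application Healthcheck URL
    Value: HTTP:80/load_balance_test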
If you're using Terraform, then just make sure you have spaces in the name, and it will work fine:
setting {
  namespace = "aws:elasticbeanstalk:application"
  name      = "Application Healthcheck URL"
  value     = "/api/health"
}
I just tried this, and only the format specified in the original question worked for me, i.e.:
option_settings:
  - namespace: aws:elasticbeanstalk:application
    option_name: Application Healthcheck URL
    value: /api/v1/health/
You also might want to set the health_check_type to ELB instead of the default EC2.
This is how I configured mine:
$ cat .ebextensions/0090_healthcheckurl.config
Resources:
  AWSEBAutoScalingGroup:
    Type: "AWS::AutoScaling::AutoScalingGroup"
    Properties:
      HealthCheckType: "ELB"
      HealthCheckGracePeriod: "600"
option_settings:
  - namespace: aws:elasticbeanstalk:application
    option_name: Application Healthcheck URL
    value: /_status