I'm attempting to read an SSM param for the params in my serverless.yml. Without it, sls deploy works as expected; it's adding that param that breaks the deploy. The credentials are set up on a GitLab runner using the export command for the access key ID and the secret access key.
.gitlab-ci.yml
before_script:
  - pip install virtualenv
  - python -m virtualenv venv_api
  - source venv_api/bin/activate
  - pip install -r requirements.txt
  - curl -sL https://deb.nodesource.com/setup_lts.x | bash -
  - apt-get install -y nodejs
  - npm config set prefix /usr/local
  - npm install -g serverless
  - serverless plugin install -n serverless-dynamodb-autoscaling
  - serverless plugin install -n serverless-python-requirements
script:
  # deploy to staging env
  - export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
  - export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
  - sls deploy --stage staging --verbose
serverless.yml
params:
  default:
    SOME_VARIABLE: ${ssm:SOME_VARIABLE}
[...]
provider:
  name: aws
  runtime: python3.9
  region: us-west-2
[...]
The error I'm getting is
$ sls deploy --stage staging --verbose
Running "serverless" from node_modules
Environment: linux, node 18.12.1, framework 3.25.0 (local) 3.25.0v (global), plugin
6.2.2, SDK 4.3.2
Docs: docs.serverless.com
Support: forum.serverless.com
Bugs: github.com/serverless/serverless/issues
Error:
Cannot resolve serverless.yml: Variables resolution errored with:
- Cannot resolve variable at "params.default.SOME_VARIABLE": AWS
provider credentials not found. Learn how to set up AWS provider credentials in
our docs here: <http://slss.io/aws-creds-setup>.
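For what it's worth, a quick way to confirm the credentials actually reach the deploy step (a diagnostic sketch of mine, not part of the original pipeline, and assuming the AWS CLI is available in the image) is to call STS right before deploying:
script:
  # deploy to staging env
  - aws sts get-caller-identity   # fails fast if AWS credentials are not picked up
  - sls deploy --stage staging --verbose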
I want to set up a GitHub Actions container with both Dart and Python. I have used the Dart actions template and installed Python. However, I keep getting an error saying
WARNING: The directory '/github/home/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Requirement already satisfied: pip in /__t/Python/3.8.7/x64/lib/python3.8/site-packages (21.0.1)
/__w/_temp/95e6ebc6-5365-42a8-8197-9f5d14c042d3.sh: 2: /__w/_temp/95e6ebc6-5365-42a8-8197-9f5d14c042d3.sh: pip: not found
Here is my YAML file:
name: Dart
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
jobs:
  build:
    runs-on: ubuntu-latest
    # Note that this workflow uses the latest stable version of the Dart SDK.
    # Docker images for other release channels - like dev and beta - are also
    # available. See https://hub.docker.com/r/google/dart/ for the available
    # images.
    container:
      image: google/dart:latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python 3.8
        uses: actions/setup-python@v2
        with:
          python-version: 3.8
      - name: Print Dart SDK version
        run: dart --version
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install flake8 pytest
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
          cd integ_tests
          dart pub get
      # Run uvicorn
      - name: Run uvicorn
        run: |
          cd fastapi/
          uvicorn app.main:app --reload --port 8000
      # run my test
      - name: Run dart test
        run: |
          cd integ_tests
          dart lib/main.dart --dry true
Additionally, I'm concerned that running uvicorn inside the container will make the job hang (since the server never exits). If this is the case, how do I go about starting a local server with uvicorn without letting the container run forever?
EDIT: full log
If I run it with sudo I get an error saying
/__w/_temp/2ffb7222-f1dd-4273-870c-c85ac57b9da3.sh: 1: /__w/_temp/2ffb7222-f1dd-4273-870c-c85ac57b9da3.sh: sudo: not found
I suspect the problem here is still that you are attempting to run pip inside the container. Here's why: the Dart version is printed after setup-python but before the pip installation, so the Python and Dart steps are interleaved. I would change the order so that the dart --version step sits right before the "Run dart test" step. This ensures that all of the Python build and configuration happens immediately after setup-python.
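A sketch of that reordering (step names taken from the question; the rest of the workflow is assumed unchanged):
steps:
  - uses: actions/checkout@v2
  - name: Set up Python 3.8
    uses: actions/setup-python@v2
    with:
      python-version: 3.8
  # all Python setup directly after setup-python
  - name: Install dependencies
    run: |
      python -m pip install --upgrade pip
      pip install flake8 pytest
      if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
  # Dart steps grouped together at the end
  - name: Print Dart SDK version
    run: dart --version
  - name: Run dart test
    run: |
      cd integ_tests
      dart pub get
      dart lib/main.dart --dry true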
I looked at a Python build of mine and on the step of upgrading pip, I get this:
> Run python -m pip install --upgrade pip
Requirement already satisfied: pip in /opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages (21.0.1)
Collecting pytest
I believe this will have uvicorn running outside the container (i.e., in the runner VM).
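On the concern about uvicorn hanging the job: if the server is started in the foreground, the step indeed never returns. A common workaround (a sketch, assuming the app layout from the question) is to background the server and give it a moment to boot before the tests run:
- name: Run uvicorn
  run: |
    cd fastapi/
    nohup uvicorn app.main:app --port 8000 &   # background it so the step can exit
    sleep 5                                    # crude wait for the server to come up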
I'm trying to run CodeceptJS on CircleCI, but I keep running into the same issue where it says "Failed to launch chrome".
I believe it is a problem with Puppeteer, but I cannot find the issue online.
I've tried adding the following to my codecept.conf.js file:
helpers: {
  Puppeteer: {
    url: process.env.CODECEPT_URL || 'http://localhost:3030'
  },
  chrome: {
    args: ["--headless", "--no-sandbox"]
  }
},
I've also tried to install Chrome onto the container I'm running the tests in with:
docker-compose exec aubisque npx codeceptjs run --steps
as I thought the problem might be that Chrome didn't exist there, but I couldn't figure out how to do it. I have also read that Puppeteer ships its own build of Chromium.
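For reference, in CodeceptJS the chrome options are normally nested inside the Puppeteer helper rather than sitting next to it, so a config sketch (same URL and args as in the attempt above) would look like:
helpers: {
  Puppeteer: {
    url: process.env.CODECEPT_URL || 'http://localhost:3030',
    chrome: {
      // --no-sandbox is the usual requirement when Chrome runs as root in CI containers
      args: ['--headless', '--no-sandbox']
    }
  }
},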
acceptance:
  working_directory: ~/aubisque-api
  docker:
    - image: circleci/node:latest-browsers
      environment:
        NODE_ENV: development
  steps:
    - checkout
    - setup_remote_docker
    - restore_cache:
        name: Restore NPM Cache
        keys:
          - package-lock-cache-{{ checksum "package-lock.json" }}
    - run:
        name: Install git-crypt
        command: |
          curl -L https://github.com/AGWA/git-crypt/archive/debian/0.6.0.tar.gz | tar zxv &&
          (cd git-crypt-debian && sudo make && sudo make install)
    - run:
        name: decrypt files
        command: |
          echo $DECRYPT_KEY | base64 -d >> keyfile
          git-crypt unlock keyfile
          rm keyfile
    - run:
        name: Build and run acceptance tests
        command: |
          docker-compose -f docker-compose-ci.yml build --no-cache
          docker-compose -f docker-compose-ci.yml up -d
          docker-compose exec aubisque npx codeceptjs run --steps
This is my .circleci/config.yml file where I run my acceptance tests. I am running the jobs in workflows, and before this job I run a job that installs the npm modules.
Unable to install ingress-nginx for Kubernetes on Docker Desktop
I was using the following commands on the command line to install ingress-nginx so far:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml
as shown in this web page: https://che.eclipse.org/running-eclipse-che-on-kubernetes-using-docker-desktop-for-mac-5d972ed511e1
It seems like the installation procedure has changed. Can anyone give me step-by-step instructions to install ingress-nginx? I couldn't install it by following the procedure described here: https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md
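(For later readers: the separate mandatory.yaml and cloud-generic.yaml manifests were consolidated upstream. Assuming the repository layout hasn't changed again, the current single-manifest install looks like this:)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml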
Installation via helm works perfectly for me. Assuming you have the kubectl binary installed and configured for your k8s cluster, you can follow the steps below one by one to install the nginx-ingress controller.
1. Install the helm binary (if it doesn't exist)
curl -s https://raw.githubusercontent.com/nurlanf/deployments-kubernetes/master/helm/get_helm.sh | bash
2. Install helm on your cluster (if not installed yet)
curl -s https://raw.githubusercontent.com/nurlanf/deployments-kubernetes/master/helm/install.sh | bash
You should see output like
...
Waiting for tiller install...
Helm install complete
3. Then install nginx-ingress via helm
helm install stable/nginx-ingress --name nginx-ingress
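Note that the --name flag and the Tiller wait above are Helm v2 conventions, and the stable/nginx-ingress chart has since been deprecated in favour of ingress-nginx. On Helm v3 the rough equivalent (release name and namespace here are just examples) would be:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace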
How can we use Bitbucket Pipelines to update an ASP.NET Core website on AWS Elastic Beanstalk?
I know this is a late answer, but I did the same thing a few days ago, so here is an example of how I did it.
First you have to enable Pipelines in Bitbucket and choose .NET Core.
In bitbucket-pipelines.yml you need to write something like this:
image: microsoft/dotnet:sdk
pipelines:
  branches:
    staging:
      - step:
          name: build publish prepare and zip
          caches:
            - dotnetcore
          script:
            - apt-get update && apt-get install --yes zip
            - export PROJECT_NAME=<your-project-name>
            - dotnet restore
            - dotnet build $PROJECT_NAME
            - dotnet publish --self-contained --runtime win-x64 --configuration Release
            - zip -j site.zip /opt/atlassian/pipelines/agent/build/<your-project-name>/bin/Release/netcoreapp2.0/win-x64/publish/* -x aws-windows-deployment-manifest.json
            - zip -r -j application.zip site.zip /opt/atlassian/pipelines/agent/build/<your-project-name>/bin/Release/netcoreapp2.0/win-x64/publish/aws-windows-deployment-manifest.json
          artifacts:
            - application.zip
      - step:
          name: upload to elasticbeanstalk
          script:
            - pipe: atlassian/aws-elasticbeanstalk-deploy:0.5.0
              variables:
                APPLICATION_NAME: '<application-name>'
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
                # COMMAND: 'upload-only'
                ZIP_FILE: 'application.zip'
                ENVIRONMENT_NAME: '<environment-name>'
                WAIT: 'true'
In Settings -> Pipelines -> Repository variables you have to set the AWS keys (access key ID, secret access key, and region) that will be referenced with $ (e.g. $AWS_SECRET_ACCESS_KEY).
Additionally you will have to create the S3 bucket "-elsticbeanstalk-deployments" (if you don't create it, the environment will show an error with the name of the bucket "not found" when it tries to upload your zip, so just copy the name from the error and create the bucket in S3).
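If you prefer to create that bucket up front from the command line (the exact bucket name is whatever the pipe reports; shown as a placeholder here):
aws s3 mb s3://<bucket-name-from-the-error> --region $AWS_DEFAULT_REGION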
EDIT: My goal is to be able to emit metrics from my spring-boot application and have them sent to a Graphite server. For that I am trying to set up statsd. If you can suggest a cleaner approach, that would be better.
I have a Beanstalk application which requires statsd to run as a background process. I was able to specify commands and packages through ebextensions config file as follows:
packages:
  yum:
    git: []
commands:
  01_nodejs_install:
    command: sudo yum -y install nodejs npm --enablerepo=epel
    ignoreErrors: true
  02_mkdir_statsd:
    command: mkdir /home/ec2-user/statsd
  03_fetch_statsd:
    command: git clone https://github.com/etsy/statsd.git /home/ec2-user/statsd
    ignoreErrors: true
  04_run_statsd:
    command: node stats.js exampleConfig.js
    cwd: /home/ec2-user/statsd
When I try to deploy the application to a new environment, the EC2 node never comes up fully. I logged in to check what might be going on and noticed in /var/log/cfn-init.log that 01_nodejs_install, 02_mkdir_statsd and 03_fetch_statsd were executed successfully. So I guess the system was stuck on the fourth command (04_run_statsd).
2016-05-24 01:25:09,769 [INFO] Yum installed [u'git']
2016-05-24 01:25:37,751 [INFO] Command 01_nodejs_install succeeded
2016-05-24 01:25:37,755 [INFO] Command 02_mkdir_statsd succeeded
2016-05-24 01:25:38,700 [INFO] Command 03_fetch_statsd succeeded
cfn-init.log (END)
I need help with the following:
1. If there is a better way to install and run statsd while instantiating an environment, I would appreciate it if you could provide details on that approach. The current scheme seems hacky.
2. If this is the approach I need to stick with, how can I run the fourth command so that statsd runs as a background process?
Tried a few things and found that the following ebextensions configs work:
packages:
  yum:
    git: []
commands:
  01_nodejs_install:
    command: sudo yum -y install nodejs npm --enablerepo=epel
    ignoreErrors: true
  02_mkdir_statsd:
    command: mkdir /home/ec2-user/statsd
  03_fetch_statsd:
    command: git clone https://github.com/etsy/statsd.git /home/ec2-user/statsd
    ignoreErrors: true
  04_change_config:
    command: cat exampleConfig.js | sed 's/2003/<graphite server port>/g' | sed 's/graphite.example.com/my.graphite.server.hostname/g' > config.js
    cwd: /home/ec2-user/statsd
  05_run_statsd:
    command: setsid node stats.js config.js >/dev/null 2>&1 < /dev/null &
    cwd: /home/ec2-user/statsd
Note that I added another command (04_change_config) so that I may configure my own Graphite server and port in statsd configs. This change is not needed to address the original question, though.
The actual run command (05_run_statsd) uses setsid to detach the process so it runs as a daemon.
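One refinement worth considering (my addition, not part of the original answer): ebextensions commands also support a test key, which lets the mkdir and git clone steps skip themselves on redeploys instead of relying on ignoreErrors, e.g.:
commands:
  02_mkdir_statsd:
    command: mkdir /home/ec2-user/statsd
    test: '[ ! -d /home/ec2-user/statsd ]'   # only run if the directory is missing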