How can we use Bitbucket Pipelines to update an ASP.NET Core website on AWS Elastic Beanstalk?
I know this is a late answer, but I did the same thing a few days ago, so here is an example of how I did it.
First, you have to enable Pipelines in Bitbucket and choose .NET Core.
In bitbucket-pipelines.yml you need to write something like this:
image: microsoft/dotnet:sdk

pipelines:
  branches:
    staging:
      - step:
          name: build publish prepare and zip
          caches:
            - dotnetcore
          script:
            - apt-get update && apt-get install --yes zip
            - export PROJECT_NAME=<your-project-name>
            - dotnet restore
            - dotnet build $PROJECT_NAME
            - dotnet publish --self-contained --runtime win-x64 --configuration Release
            - zip -j site.zip /opt/atlassian/pipelines/agent/build/<your-project-name>/bin/Release/netcoreapp2.0/win-x64/publish/* -x aws-windows-deployment-manifest.json
            - zip -r -j application.zip site.zip /opt/atlassian/pipelines/agent/build/<your-project-name>/bin/Release/netcoreapp2.0/win-x64/publish/aws-windows-deployment-manifest.json
          artifacts:
            - application.zip
      - step:
          name: upload to elasticbeanstalk
          script:
            - pipe: atlassian/aws-elasticbeanstalk-deploy:0.5.0
              variables:
                APPLICATION_NAME: '<application-name>'
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
                #COMMAND: 'upload-only'
                ZIP_FILE: 'application.zip'
                ENVIRONMENT_NAME: '<environment-name>'
                WAIT: 'true'
In Settings -> Pipelines -> Variables you have to set the AWS keys (access key ID, secret access key, and region) that are referenced with $ above (e.g. $AWS_SECRET_ACCESS_KEY).
Additionally, you will have to create the S3 bucket "-elsticbeanstalk-deployments" (if you don't create it, the environment will fail to upload your zip with a "not found" error naming the bucket, so just copy the name from the error and create that bucket in S3).
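If you prefer to create that bucket from the AWS CLI instead of the console, something like this should work (the bucket name below is a placeholder; use the exact name reported in the error):

# bucket name is a placeholder; copy the real one from the error message
aws s3 mb s3://<bucket-name-from-error> --region <your-region>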
When I run skaffold in a GitHub workflow like this
skaffold build
it calls the Gradle jib task correctly, creates an image, and pushes it to GHCR successfully. Gradle finishes successfully, as can be seen in the log. Nevertheless, something happens afterwards that fails: it seems something tries to access the just-built image but is not authorized. This does not happen if I execute it locally, and it does not fail in the GitHub workflow if I call gradlew jib directly, without skaffold being involved.
Built and pushed image as ghcr.io/tobias-neubert/motd-service:453f4c4-dirty
BUILD SUCCESSFUL in 11s
4 actionable tasks: 4 executed
time="2023-02-15T12:07:09Z" level=error msg="No matching credentials were found for \"ghcr.io\""
time="2023-02-15T12:07:09Z" level=error msg="No matching credentials were found for \"ghcr.io\""
getting image: GET https://ghcr.io/token?scope=repository%3Atobias-neubert%2Fmotd-service%3Apull&service=ghcr.io: UNAUTHORIZED: authentication required
Error: Process completed with exit code 1.
The GitHub workflow:
name: Build and push motd-service

on:
  push:

permissions:
  packages: write

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v3
      - name: Set up Java
        uses: actions/setup-java@v2
        with:
          java-version: 17
          distribution: temurin
      - name: Setup Gradle
        uses: gradle/gradle-build-action@v2
      - name: Make gradlew executable
        run: chmod +x ./gradlew
      - name: Install skaffold
        run: |
          curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64 && \
          sudo install skaffold /usr/local/bin/
      - name: Deactivate collecting skaffold metrics
        run: skaffold config set --global collect-metrics false
      - name: Build the motd image
        env:
          GH_PASSWORD: '${{ secrets.GITHUB_TOKEN }}'
        run: skaffold build
Does anybody know what happens here?
It tries to fetch the digest of the new image, which it needs to render the k8s resources. Pushing the image was done by Gradle. The jib plugin is configured to use environment variables for authenticating against ghcr.io, but skaffold does not know about those, so it fails to authenticate. A docker login does the trick, although it is not safe in a CI. So now I have to search for a better way to tell skaffold to authenticate against the registry.
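For reference, the docker login workaround could look like the step below, placed before skaffold build; it reuses the workflow's GITHUB_TOKEN, so no extra secret is needed (the step name is illustrative):

- name: Log in to ghcr.io so skaffold can read the image
  run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin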
I'm attempting to get an SSM param for the params in my serverless.yml.
Without it, sls deploy works as expected; it's adding that param that breaks the deploy. The credentials are set up on a GitLab runner using the export commands for the access key ID and the secret access key.
.gitlab-ci.yml
before_script:
  - pip install virtualenv
  - python -m virtualenv venv_api
  - source venv_api/bin/activate
  - pip install -r requirements.txt
  - curl -sL https://deb.nodesource.com/setup_lts.x | bash -
  - apt-get install -y nodejs
  - npm config set prefix /usr/local
  - npm install -g serverless
  - serverless plugin install -n serverless-dynamodb-autoscaling
  - serverless plugin install -n serverless-python-requirements
script:
  # deploy to staging env
  - export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
  - export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
  - sls deploy --stage staging --verbose
serverless.yml
params:
  default:
    SOME_VARIABLE: ${ssm:SOME_VARIABLE}

[...]

provider:
  name: aws
  runtime: python3.9
  region: us-west-2

[...]
The error I'm getting is
$ sls deploy --stage staging --verbose
Running "serverless" from node_modules
Environment: linux, node 18.12.1, framework 3.25.0 (local) 3.25.0v (global), plugin
6.2.2, SDK 4.3.2
Docs: docs.serverless.com
Support: forum.serverless.com
Bugs: github.com/serverless/serverless/issues
Error:
Cannot resolve serverless.yml: Variables resolution errored with:
- Cannot resolve variable at "params.default.SOME_VARIABLE": AWS
provider credentials not found. Learn how to set up AWS provider credentials in
our docs here: <http://slss.io/aws-creds-setup>.,
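For context, resolving ${ssm:SOME_VARIABLE} makes the framework call SSM while it packages the service, so AWS credentials must already be valid at that point in the pipeline; it is roughly equivalent to this CLI call (region taken from the provider block above):

# roughly what the ${ssm:...} resolver does during packaging
aws ssm get-parameter --name SOME_VARIABLE --with-decryption --region us-west-2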
I am trying to create and run a buildconfig yml file.
C:\OpenShift>oc version
Client Version: 4.5.31
Kubernetes Version: v1.18.3+65bd32d
Background:
I have multiple Spring Boot web UI applications which I need to deploy on OpenShift.
Having a separate set of config YAML files (image stream, buildconfig, deployconfig, service, routes) for each and every application seems very inefficient.
Instead I would like to have a single set of parameterized YAML files to which I can pass custom parameters to set up each individual application.
Solution so far:
Version One
Dockerfile
FROM org/rhelImage
USER root

# Install yum packages
RUN yum -y install\
    net-tools\
    && yum -y install nmap-ncat

RUN curl -s --create-dirs --insecure -L ${ARTIFACTURL} -o ${APPPATH}/${ARTIFACT}

# Add docker-entrypoint.sh to the image
ADD docker-entrypoint.sh /docker-entrypoint.sh

RUN chmod -Rf 775 /app && chmod 775 /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
RUN chmod -R g+rx /app

# Expose port
EXPOSE $MY_PORT

# Set working directory when container starts
WORKDIR $APPPATH

# Start the application using ENTRYPOINT
#ENTRYPOINT ["sh","/docker-entrypoint.sh"]
$ oc create configmap myapp-configmap --from-env-file=MyApp.properties
configmap/myapp-configmap created
$ oc describe cm myapp-configmap
Name: myapp-configmap
Namespace: 1234
Labels: <none>
Annotations: <none>
Data
====
APPPATH:
----
/app
ARTIFACT:
----
myapp.jar
ARTIFACTURL:
----
"https://myorg/1.2.3.4/myApp-1.2.3.4.jar"
MY_PORT:
----
12305
Events: <none>
buildconfig.yaml snippet
strategy:
  dockerStrategy:
    env:
      - name: GIT_SSL_NO_VERIFY
        value: "true"
      - name: ARTIFACTURL
        valueFrom:
          configMapKeyRef:
            name: "myapp-configmap"
            key: ARTIFACTURL
      - name: ARTIFACT
        valueFrom:
          configMapKeyRef:
            name: "myapp-configmap"
            key: ARTIFACT
This works fine. However, I somehow need to have those env: variables in a file.
I am doing this for greater flexibility: let's say a new variable is introduced in the Dockerfile; I should NOT need to change buildconfig.yml.
I just add the new key:value pair to the property file, rebuild, and we are good to go.
This is what I did next:
Version Two
Dockerfile
FROM org/rhelImage
USER root

# Install yum packages
RUN yum -y install\
    net-tools\
    && yum -y install nmap-ncat

# Initialize the variables file
RUN ["sh", "-c", "source ./MyApp.properties"]

RUN curl -s --create-dirs --insecure -L ${ARTIFACTURL} -o ${APPPATH}/${ARTIFACT}

# Add docker-entrypoint.sh to the image
ADD docker-entrypoint.sh /docker-entrypoint.sh

RUN chmod -Rf 775 /app && chmod 775 /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
RUN chmod -R g+rx /app

# Expose port
EXPOSE $MY_PORT

# Set working directory when container starts
WORKDIR $APPPATH

# Start the application using ENTRYPOINT
#ENTRYPOINT ["sh","/docker-entrypoint.sh"]
$ oc create configmap myapp-configmap --from-file=MyApp.properties=C:\MyRepo\MyTemplates\MyApp.properties
configmap/myapp-configmap created
C:\OpenShift>oc describe configmaps myapp-configmap
Name: myapp-configmap
Namespace: 1234
Labels: <none>
Annotations: <none>
Data
====
MyApp.properties:
----
APPPATH=/app
ARTIFACTURL="https://myorg/1.2.3.4/myApp-1.2.3.4.jar"
ARTIFACT=myapp.jar
MY_PORT=12035
Events: <none>
buildconfig.yaml snippet
source:
  contextDir: "${param_source_contextdir}"
  configMaps:
    - configMap:
        name: "${param_app_name}-configmap"
However, the build fails:
STEP 9: RUN ls ./MyApp.properties
ls: cannot access ./MyApp.properties: No such file or directory
error: build error: error building at STEP "RUN ls ./MyApp.properties": error while running runtime: exit status 2
This means that the ConfigMap file didn't get copied into the build folder.
Can you please suggest what to do next?
I think you are misunderstanding Openshift a bit.
The first thing you say is
Having a separate set of config YAML files (image stream, buildconfig, deployconfig, service, routes) for each and every application seems very inefficient.
But that's how Kubernetes/OpenShift works. If your resource files look the same and only use, for example, a different git source or image, then you are probably looking for OpenShift Templates.
Instead I would like to have a single set of parameterized YAML files to which I can pass custom parameters to set up each individual application.
Yep, I think OpenShift Templates are what you are looking for. If you upload your template to the service catalog, then whenever you have a new application to deploy, you can fill in some variables in a UI and click deploy.
An OpenShift Template is just a parameterised file for all of your OpenShift resources (configmap, service, buildconfig, etc.).
If your application needs to be built from some git repo, using some credentials, you can parameterise those variables.
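A minimal sketch of what such a template could look like; the parameter and resource names here are illustrative, not taken from your project:

apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: myapp-template            # illustrative name
parameters:
  - name: PARAM_APP_NAME          # hypothetical parameter
    required: true
  - name: PARAM_ARTIFACTURL       # hypothetical parameter
    required: true
objects:
  - apiVersion: build.openshift.io/v1
    kind: BuildConfig
    metadata:
      name: ${PARAM_APP_NAME}
    spec:
      strategy:
        dockerStrategy:
          env:
            - name: ARTIFACTURL
              value: ${PARAM_ARTIFACTURL}

You would then instantiate it per application with oc process -f template.yaml -p PARAM_APP_NAME=myapp -p PARAM_ARTIFACTURL=https://myorg/myapp.jar | oc apply -f -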
But also take a look at OpenShift's Source-to-Image (S2I) solution (I'm not sure which version you are using, so you'll have to google some resources). It can build and deploy your application without you having to write your own resource files.
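For illustration, S2I can build and deploy straight from source with a single command; the builder image and repository URL below are placeholders:

# builder image and repo are placeholders; pick ones matching your stack
oc new-app registry.access.redhat.com/ubi8/openjdk-11~https://github.com/your-org/your-app.git --name=myapp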
I want to set up a GitHub Actions container with both Dart and Python. I have used the Dart actions template and installed Python. However, I keep getting an error saying
WARNING: The directory '/github/home/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Requirement already satisfied: pip in /__t/Python/3.8.7/x64/lib/python3.8/site-packages (21.0.1)
/__w/_temp/95e6ebc6-5365-42a8-8197-9f5d14c042d3.sh: 2: /__w/_temp/95e6ebc6-5365-42a8-8197-9f5d14c042d3.sh: pip: not found
Here is my yaml file:
name: Dart

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest

    # Note that this workflow uses the latest stable version of the Dart SDK.
    # Docker images for other release channels - like dev and beta - are also
    # available. See https://hub.docker.com/r/google/dart/ for the available
    # images.
    container:
      image: google/dart:latest

    steps:
      - uses: actions/checkout@v2
      - name: Set up Python 3.8
        uses: actions/setup-python@v2
        with:
          python-version: 3.8
      - name: Print Dart SDK version
        run: dart --version
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install flake8 pytest
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
          cd integ_tests
          dart pub get
      # Run uvicorn
      - name: Run uvicorn
        run: |
          cd fastapi/
          uvicorn app.main:app --reload --port 8000
      # run my test
      - name: Run dart test
        run: |
          cd integ_tests
          dart lib/main.dart --dry true
Additionally, I'm concerned that running uvicorn inside the container will make the container hang (since it would never exit). If this is the case, how do I go about starting a localhost server with uvicorn without letting the container run forever?
EDIT: full log
If I run it with sudo I get an error saying
/__w/_temp/2ffb7222-f1dd-4273-870c-c85ac57b9da3.sh: 1: /__w/_temp/2ffb7222-f1dd-4273-870c-c85ac57b9da3.sh: sudo: not found
I suspect the problem here is still that you are attempting to run pip inside the container. Here's why: the Dart version is printed after setup-python but before the pip installation. I would change the order so that the dart --version step comes right before the "Run dart test" step; this ensures that all the Python build and configuration happens immediately after setup-python.
I looked at a Python build of mine and on the step of upgrading pip, I get this:
> Run python -m pip install --upgrade pip
Requirement already satisfied: pip in /opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages (21.0.1)
Collecting pytest
I believe this will have uvicorn running outside the container (i.e., in the runner VM).
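As for the hanging concern: one option is to start uvicorn in the background so the step returns, along the lines of this sketch (the nohup/sleep combination is an illustrative way to wait for startup, not something from the original workflow):

- name: Run uvicorn in the background
  run: |
    cd fastapi/
    nohup uvicorn app.main:app --port 8000 &
    sleep 5   # crude wait for the server to come up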
So I am pretty stuck, yet so close, to getting a Google Apps Script project to push and deploy with clasp through Google's Cloud Build. The push and deploy commands come from Google's clasp CLI, which requires you to log in with your Google credentials via clasp login. The login creates a file in your home dir called ~/.clasprc.json with your credentials, which is needed to push and deploy. In the cloudbuild.yaml I created a substitution called _CLASPRC to hold the contents of this file, and used my own custom image to write it to the container while running the build.
Now for the issue: I get the error below when the push command runs, which is basically a not very useful way of saying I'm not logged in, or of reporting any other problem with the .clasprc.json. Since this is the only error I ever get no matter what the problem is, the issue is a bit hard to debug.
Could not read API credentials. Are you logged in globally?
I have tried putting the .clasprc.json in the home dir and in the project dir, but get the same issue both ways. I'm pretty sure the file is getting written to the project dir, because when I try to run locally without the .clasp.json it complains that it's missing before complaining that I'm not logged in. When the .clasp.json is there, it only complains that I'm not logged in.
The project is just a personal project of mine and it is all open source on GitHub, so here is the link to the actual project if you want some reference to the actual code: My Lil Admin, and the builder I used: My Builders. However, you really don't need the project; to reproduce, follow the steps below on your local machine.
1. make sure to have a GCP project created and the gcloud CLI installed, with the Apps Script API enabled
2. install the clasp CLI with npm install -g @google/clasp
3. clasp login to get a .clasprc.json and auth with GCP
4. clasp create --title "My Script" --type webapp and take note of the script ID
5. associate the Apps Script project with your GCP project
The following steps are the files which lead to the problem; simply add them to the clasp project you created.
6. Here is the entrypoint for my clasp builder image:
builder/clasp_ci.sh
#!/bin/bash

# if there is a _CLASPRC var and no .clasprc.json file
if [ ! -z "${_CLASPRC}" -a ! -f "${HOME}/.clasprc.json" ]; then
  echo $_CLASPRC > "$HOME/.clasprc.json"
fi

# if there is a _SCRIPT_ID and PROJECT_ID and no .clasp.json file
if [ ! -z "${_SCRIPT_ID}" -a ! -z "$PROJECT_ID" -a ! -f ".clasp.json" ]; then
  cat > '.clasp.json' << EOF
{"scriptId":"$_SCRIPT_ID","projectId": "$PROJECT_ID"}
EOF
fi

# pass args to clasp
clasp "$@"
The builder's Dockerfile:
builder/Dockerfile
# use Node 8 LTS (Carbon)
FROM node:8.16.1

COPY clasp_ci.sh /usr/local/bin/clasp_ci

# install the clasp CLI
RUN npm install -g @google/clasp && \
    chmod +x /usr/local/bin/clasp_ci

ENTRYPOINT ["/usr/local/bin/clasp_ci"]
Now the cloudbuild config to build and push the clasp builder:
builder/cloudbuild.yaml
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: [ 'build', '-t', 'gcr.io/$PROJECT_ID/clasp', '.' ]
images:
  - 'gcr.io/$PROJECT_ID/clasp'
And my Cloud Build CI config for an Apps Script project. If you're creating a new project to follow along, you don't need the build steps nor the dir key in the push and deploy steps; those are specific to the project linked above.
cloudbuild.yaml
steps:
  - id: install
    name: 'gcr.io/cloud-builders/npm'
    args: ['install']
  - id: build-server
    name: 'gcr.io/cloud-builders/npm'
    args: ['run','gas']
    env:
      - 'NODE_ENV=production'
  - id: build-client
    name: 'gcr.io/cloud-builders/npm'
    args: ['run','prod']
    env:
      - 'NODE_ENV=production'
  - id: push
    name: 'gcr.io/$PROJECT_ID/clasp'
    dir: './dist/gas'
    args: ['push','-f']
  - id: deploy
    name: 'gcr.io/$PROJECT_ID/clasp'
    dir: './dist/gas'
    args: ['deploy','$TAG_NAME']
substitutions:
  _CLASPRC: 'your clasp rc file in your home dir after logging in locally'
  _SCRIPT_ID: 'your script id of the apps script project to deploy to'
Here is the command to build and push the builder. Make sure to replace yourproject with your actual project ID.
cd builder && gcloud builds submit --project yourproject --config=cloudbuild.yaml .
And the command that finally produces the error. Make sure to replace yourproject with your actual project ID and your_script_id with the script ID you took note of in step 4.
gcloud builds submit --project yourproject --config=cloudbuild.yaml . \
--substitutions=_CLASPRC="$(cat $HOME/.clasprc.json)" \
--substitutions=_SCRIPT_ID="your_script_id"
I have also tried using the credentials created from logging in with OAuth, but I got the same exact error. However, this may be useful in solving the issue: Docs for Clasp Run with OAuth.
Hopefully someone can help me get this working. If so, this would be the first documentation online for a Cloud Build CI with Apps Script and clasp, since I can't find anyone doing this anywhere. I have found some links using Travis and Jenkins, but what they are doing for some reason does not work. Does anyone see something that I'm not? What am I missing here?!
Some other somewhat related or never-solved issues:
https://github.com/google/clasp/issues/524
https://github.com/google/clasp/blob/master/tests/README.md
https://github.com/google/clasp/issues/225
https://github.com/gazf/google-apps-script-ci-starter
OK, so after a bunch of debugging I found out that Cloud Build substitution variables do not translate to environment variables in the container. You have to manually set the environment variables from the substitution variables, and then the container will get the variables it needs.
Here is the updated CI entry point:
builder/clasp_ci.sh
#!/bin/bash

if [ ! -z "${CLASPRC}" -a ! -f "${HOME}/.clasprc.json" ]; then
  echo $CLASPRC > "${HOME}/.clasprc.json"
fi

if [ ! -z "${SCRIPT_ID}" -a ! -z "$PROJECT_ID" -a ! -f ".clasp.json" ]; then
  cat > '.clasp.json' << EOF
{"scriptId":"$SCRIPT_ID","projectId": "$PROJECT_ID"}
EOF
fi

clasp "$@"
And then the updated cloudbuild config:
cloudbuild.yaml
steps:
  - id: install
    name: 'gcr.io/cloud-builders/npm'
    args: ['install']
  - id: build-server
    name: 'gcr.io/cloud-builders/npm'
    args: ['run','gas']
    env:
      - 'NODE_ENV=production'
  - id: build-client
    name: 'gcr.io/cloud-builders/npm'
    args: ['run','prod']
    env:
      - 'NODE_ENV=production'
  - id: push
    name: 'gcr.io/$PROJECT_ID/clasp'
    dir: './dist/gas'
    args: ['push','-f']
    env:
      - 'CLASPRC=$_CLASPRC'
      - 'SCRIPT_ID=$_SCRIPT_ID'
      - 'PROJECT_ID=$PROJECT_ID'
  - id: deploy
    name: 'gcr.io/$PROJECT_ID/clasp'
    dir: './dist/gas'
    args: ['deploy','$TAG_NAME']
    env:
      - 'CLASPRC=$_CLASPRC'
      - 'SCRIPT_ID=$_SCRIPT_ID'
      - 'PROJECT_ID=$PROJECT_ID'
substitutions:
  _CLASPRC: 'your clasp rc file in your home dir after logging in locally'
  _SCRIPT_ID: 'your script id of the apps script project to deploy to'