How do I specify whether to run zappa deploy or zappa update for my application in GitHub Actions, with some sort of if statement?
My workflow is as below:
name: Dev Deploy
on:
  push:
    branches:
      - mybranch
jobs:
  dev-deploy:
    name: Deploy to Dev
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Set up Python 3.9.10
        uses: actions/setup-python@v1
        with:
          python-version: 3.9.10
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install pytest
          pip install python-Levenshtein
          pip install virtualenv
      - name: Install zappa
        run: pip install zappa
      - name: Install Serverless
        run: npm install -g serverless
      - name: Configure Serverless for zappa Services
        run: serverless config credentials --provider aws --key myAWSKey --secret myAWSSecret
      - name: Deploy to Dev
        run: |
          python -m virtualenv envsp
          source envsp/bin/activate
          zappa deploy dev
If the application has already been deployed once, I get the error:
Error: This application is already deployed - did you mean to call update?
In which case I would want to run zappa update dev instead.
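One way to branch on the current state is a plain shell conditional in the deploy step. This is a sketch, assuming zappa status exits non-zero when the stage has not been deployed yet (worth verifying against your zappa version):

- name: Deploy to Dev
  run: |
    python -m virtualenv envsp
    source envsp/bin/activate
    # First-time stages need "deploy"; existing stages need "update".
    if zappa status dev; then
      zappa update dev
    else
      zappa deploy dev
    fi

A cruder fallback chain like zappa update dev || zappa deploy dev would also work, since update fails on a stage that has never been deployed.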
Related
I'm trying to set up a deployment using GitHub Actions: build the React app and serve it with Express.
name: EB Deploy
on:
  push:
    branches: [ main ]
jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_DEFAULT_REGION: ${{ secrets.AWS_DEFAULT_REGION }}
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python 3.9
        uses: actions/setup-python@v2
        with:
          python-version: 3.9
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install awsebcli
      - uses: actions/setup-node@v2
        with:
          node-version: '16'
      - name: Deploy to Elastic Beanstalk
        run: |
          cd client
          npm i
          npm run build
          mv dist ..
          cd ..
          eb deploy
The Express server serves the build with:
app.use(express.static(path.join(__dirname, './dist')))
Everything works fine, except eb doesn't deploy the dist folder.
Even when I run eb labs download to download the deployed version, I don't see a dist folder in the zip file.
I ended up zipping the whole directory and running eb deploy --staged for it to work.
I guess eb was ignoring the dist folder because it wasn't being tracked by git. I added these commands:
git config --global user.email "GH-DEPLOY@aws.null"
git config --global user.name "GH-DEPLOY"
git add dist
git commit -m 'add react app'
eb deploy --staged
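Folded into the workflow above, the deploy step would look roughly like this (a sketch assuming the same client/dist layout as the question):

- name: Deploy to Elastic Beanstalk
  run: |
    cd client
    npm i
    npm run build
    mv dist ..
    cd ..
    # eb deploy ships what git knows about, so stage the fresh build first
    git config --global user.email "GH-DEPLOY@aws.null"
    git config --global user.name "GH-DEPLOY"
    git add dist
    git commit -m 'add react app'
    eb deploy --staged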
I am trying to get a coverage report in GitHub Actions, but when I run the pipeline it gives me this error:
Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock'
Then I searched around and added a step to start the MySQL service with sudo, and now I get this error, but I don't know where or how to pass the root user, the password and the host:
Access denied for user 'root'@'localhost' (using password: YES)
How do I do that?
name: Django CI
on:
  push:
    branches: [ unittests ]
    paths-ignore: '**/SkoleProtocol/attendanceCode/tests/test_selenium.py'
  pull_request:
    branches: [ unittests ]
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      max-parallel: 4
      matrix:
        python-version: [3.7, 3.8, 3.9]
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python 3.7
        uses: actions/setup-python@v2
        with:
          python-version: 3.7
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Create test database
        run: |
          sudo service mysql start
      - name: Coverage report
        run: |
          pip install coverage
          coverage run manage.py test
          coverage report
      - name: Lint with flake8
        run: |
          pip install flake8
          flake8 ./attendanceCode --exit-zero # Exit with status code "0" even if there are errors.
      - name: Django Tests
        run: |
          python3 manage.py test
Two things stand out.
Do you happen to have the credentials in an env file elsewhere?
- name: Create test database
  run: |
    sudo service mysql start
You only start the service rather than creating the database.
Try:
sudo /etc/init.d/mysql start   # start the service
mysql -uroot -proot -e "CREATE DATABASE __dbname__;"
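For reference, GitHub-hosted Ubuntu runners come with MySQL preinstalled and the root password set to root, which is why -uroot -proot works there. A sketch of the step (the database name is illustrative; your Django settings must point at the same database, user and password):

- name: Create test database
  run: |
    sudo service mysql start
    # Create the database the Django test settings expect
    mysql -uroot -proot -e "CREATE DATABASE skoleprotocol_test;"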
I want to set up a GitHub Actions container with both Dart and Python. I have used the Dart actions template and installed Python. However, I keep getting an error saying:
WARNING: The directory '/github/home/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Requirement already satisfied: pip in /__t/Python/3.8.7/x64/lib/python3.8/site-packages (21.0.1)
/__w/_temp/95e6ebc6-5365-42a8-8197-9f5d14c042d3.sh: 2: /__w/_temp/95e6ebc6-5365-42a8-8197-9f5d14c042d3.sh: pip: not found
Here is my YAML file:
name: Dart
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
jobs:
  build:
    runs-on: ubuntu-latest
    # Note that this workflow uses the latest stable version of the Dart SDK.
    # Docker images for other release channels - like dev and beta - are also
    # available. See https://hub.docker.com/r/google/dart/ for the available
    # images.
    container:
      image: google/dart:latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python 3.8
        uses: actions/setup-python@v2
        with:
          python-version: 3.8
      - name: Print Dart SDK version
        run: dart --version
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install flake8 pytest
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
          cd integ_tests
          dart pub get
      # Run uvicorn
      - name: Run uvicorn
        run: |
          cd fastapi/
          uvicorn app.main:app --reload --port 8000
      # run my test
      - name: Run dart test
        run: |
          cd integ_tests
          dart lib/main.dart --dry true
Additionally, I'm concerned that running uvicorn inside the container will make the job hang (since the server never exits). If that's the case, how do I go about starting a localhost server with uvicorn without letting the container run forever?
EDIT: full log
If I run it with sudo I get an error saying
/__w/_temp/2ffb7222-f1dd-4273-870c-c85ac57b9da3.sh: 1: /__w/_temp/2ffb7222-f1dd-4273-870c-c85ac57b9da3.sh: sudo: not found
I suspect the problem here is still that you are attempting to run pip inside the container. Here's why: the dart --version step sits after setup-python but before the pip installation. I would reorder the steps so that dart --version comes right before the "Run dart test" step; that way all of the Python build and configuration happens immediately after setup-python.
I looked at a Python build of mine, and on the pip upgrade step I get this:
> Run python -m pip install --upgrade pip
Requirement already satisfied: pip in /opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages (21.0.1)
Collecting pytest
I believe this will have uvicorn running outside the container (i.e., in the runner VM).
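As for the server blocking the job: one sketch, assuming the app.main:app FastAPI app from the question, is to background uvicorn on the runner so the step returns while the process keeps serving for later steps in the same job:

- name: Run uvicorn
  run: |
    cd fastapi/
    # Launch in the background so this step doesn't block forever
    nohup uvicorn app.main:app --port 8000 &
    # Crude wait for startup; polling the port would be sturdier
    sleep 5

Dropping --reload is deliberate here, since hot reloading has no use in CI.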
I am currently trying to implement a workflow which requires protobuf to be installed. However, on Ubuntu I have to compile it myself. The problem is that this takes quite some time, so I figured caching this step is the thing to do. However, I am not sure how I can use actions/cache for this, if it is possible at all.
The following is how I am installing protobuf and my Python dependencies:
name: Service
on:
  push:
    branches: [develop, master]
jobs:
  test:
    runs-on: ubuntu-18.04
    steps:
      - name: Install protobuf-3.6.1
        run: |
          wget https://github.com/protocolbuffers/protobuf/releases/download/v3.6.1/protobuf-all-3.6.1.tar.gz
          tar -xvf protobuf-all-3.6.1.tar.gz
          cd protobuf-3.6.1
          ./configure
          make
          make check
          sudo make install
          sudo ldconfig
      - uses: actions/checkout@v2
      - name: Install Python dependencies
        run: |
          python -m pip install --upgrade pip setuptools
          pip install -r requirements.txt
How can I cache these run steps so that they don't have to run each time?
I tested the following:
name: Service
on:
  push:
    branches: [develop, master]
jobs:
  test:
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v2
      - name: Load from cache
        id: protobuf
        uses: actions/cache@v1
        with:
          path: protobuf-3.6.1
          key: protobuf3
      - name: Compile protobuf-3.6.1
        if: steps.protobuf.outputs.cache-hit != 'true'
        run: |
          wget https://github.com/protocolbuffers/protobuf/releases/download/v3.6.1/protobuf-all-3.6.1.tar.gz
          tar -xvf protobuf-all-3.6.1.tar.gz
          cd protobuf-3.6.1
          ./configure
          make
          make check
      - name: Install protobuf
        run: |
          cd protobuf-3.6.1
          sudo make install
          sudo ldconfig
      - name: Install Python dependencies
        run: |
          python -m pip install --upgrade pip setuptools
          pip install -r requirements.txt
I would also delete all the source files once it's built, so they don't take up space in the workspace and the cache.
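For instance, the downloaded archive isn't needed after extraction, so it can be dropped inside the compile step (a sketch; how much more of the tree can go depends on what make install still reads):

      - name: Compile protobuf-3.6.1
        if: steps.protobuf.outputs.cache-hit != 'true'
        run: |
          wget https://github.com/protocolbuffers/protobuf/releases/download/v3.6.1/protobuf-all-3.6.1.tar.gz
          tar -xvf protobuf-all-3.6.1.tar.gz
          rm protobuf-all-3.6.1.tar.gz   # the extracted tree is all we need
          cd protobuf-3.6.1
          ./configure
          make
          make check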
I have an action on the master branch which, on push/merge, builds a package, uploads it to PyPI, then checks out the develop branch, bumps the version in develop and pushes to its origin. The develop branch has an action that listens for push/merge and does a snapshot release.
When I push to develop directly, the develop action works perfectly and does a snapshot release; but when the master branch action pushes, the push is successful yet the develop action does not get triggered. What am I missing?
Both actions are added below.
name: Build and Upload Package to PyPI | Master Branch
on:
  push:
    branches:
      - master
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Set up Python
        uses: actions/setup-python@v1
        with:
          python-version: '3.5'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install setuptools wheel twine
          pip install GitPython
          pip install bumpversion
      - name: Strip 'snapshot' from version
        run: sed -i 's/-snapshot//g' setup.py
      - name: Build and publish
        env:
          TWINE_USERNAME: ${{ secrets.PYPI_USERNAME }}
          TWINE_PASSWORD: ${{ secrets.PYPI_PASSWORD }}
          TWINE_REPOSITORY_URL: https://pypi.domain.com
        run: |
          python setup.py sdist bdist_wheel
          twine upload dist/*
      - name: Bump Version and Push to develop
        run: |
          git stash
          git config --local user.email "name@email.com"
          git config --local user.name "username"
          git checkout develop
          python bump_version.py
          cat .bumpversion.cfg
          git remote set-url --push origin https://username:$GITHUB_TOKEN@github.com/repo/path
          git push origin develop
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
name: Build and Upload Package to PyPI | Develop Branch
on:
  push:
    branches:
      - develop
jobs:
  bumpTag_build_and_publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Set up Python
        uses: actions/setup-python@v1
        with:
          python-version: '3.5'
      - name: Install dependencies for setup
        run: |
          python -m pip install --upgrade pip
          pip install setuptools wheel twine
      - name: Build and publish
        env:
          TWINE_USERNAME: ${{ secrets.PYPI_USERNAME }}
          TWINE_PASSWORD: ${{ secrets.PYPI_PASSWORD }}
          TWINE_REPOSITORY_URL: https://pypi.domain.com
        run: |
          python setup.py sdist bdist_wheel
          twine upload dist/*
The provided secrets.GITHUB_TOKEN is intentionally not allowed to trigger workflows. As stated in the documentation:
(...) if an action pushes code using the repository's GITHUB_TOKEN, a new workflow will not run even when the repository contains a workflow configured to run when push events occur.
If you need your automagic push to be "visible" to workflows, you need to create a Personal Access Token, add it to the repo secrets, and use that instead of GITHUB_TOKEN.
Note that GitHub assumes you know what you're doing if you use a non-stock token, which means preventing a possible infinite loop is on you. While that's not the case in your scenario for now (the develop branch does not push anything), it's worth remembering in case one of the workflows changes some day.
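A sketch of the change in the master workflow's final step, assuming the PAT is stored under the (illustrative) secret name PUSH_TOKEN:

      - name: Bump Version and Push to develop
        run: |
          git checkout develop
          python bump_version.py
          # Pushing with a PAT instead of GITHUB_TOKEN lets this push
          # trigger the develop branch's workflow.
          git remote set-url --push origin https://username:$PUSH_TOKEN@github.com/repo/path
          git push origin develop
        env:
          PUSH_TOKEN: ${{ secrets.PUSH_TOKEN }}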