How to set up GitHub Actions for Dart and Python - github-actions

I want to set up a GitHub Actions container with both Dart and Python. I have used the Dart actions template and installed Python. However, I keep getting an error saying:
WARNING: The directory '/github/home/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Requirement already satisfied: pip in /__t/Python/3.8.7/x64/lib/python3.8/site-packages (21.0.1)
/__w/_temp/95e6ebc6-5365-42a8-8197-9f5d14c042d3.sh: 2: /__w/_temp/95e6ebc6-5365-42a8-8197-9f5d14c042d3.sh: pip: not found
Here is my YAML file:
name: Dart

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest

    # Note that this workflow uses the latest stable version of the Dart SDK.
    # Docker images for other release channels - like dev and beta - are also
    # available. See https://hub.docker.com/r/google/dart/ for the available
    # images.
    container:
      image: google/dart:latest

    steps:
      - uses: actions/checkout@v2
      - name: Set up Python 3.8
        uses: actions/setup-python@v2
        with:
          python-version: 3.8
      - name: Print Dart SDK version
        run: dart --version
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install flake8 pytest
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
          cd integ_tests
          dart pub get
      # Run uvicorn
      - name: Run uvicorn
        run: |
          cd fastapi/
          uvicorn app.main:app --reload --port 8000
      # run my test
      - name: Run dart test
        run: |
          cd integ_tests
          dart lib/main.dart --dry true
Additionally, I'm concerned that running uvicorn inside the container will make the container hang (since it would never exit). If that is the case, how do I start a local server with uvicorn without letting the container run forever?
EDIT: full log
If I run it with sudo I get an error saying
/__w/_temp/2ffb7222-f1dd-4273-870c-c85ac57b9da3.sh: 1: /__w/_temp/2ffb7222-f1dd-4273-870c-c85ac57b9da3.sh: sudo: not found

I suspect the problem here is still that you are attempting to run pip inside the container. Here's why: the dart --version step runs after setup-python but before the pip installation. I would change the order so that the dart --version step comes right before the "Run dart test" step; this ensures that all of the Python build and configuration happens immediately after setup-python.
I looked at a Python build of mine, and on the pip upgrade step I get this:
> Run python -m pip install --upgrade pip
Requirement already satisfied: pip in /opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages (21.0.1)
Collecting pytest
I believe this will have uvicorn running outside the container (i.e., in the runner VM).
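For illustration, a rough sketch of that reordering (same container image, paths, and commands as the original workflow; backgrounding uvicorn with nohup ... & is my assumption, added so the step returns instead of blocking the job):

    steps:
      - uses: actions/checkout@v2
      - name: Set up Python 3.8
        uses: actions/setup-python@v2
        with:
          python-version: 3.8
      - name: Install Python dependencies
        run: |
          python -m pip install --upgrade pip
          pip install flake8 pytest
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
      - name: Run uvicorn
        run: |
          cd fastapi/
          # assumption: run in the background so this step exits and the job continues
          nohup uvicorn app.main:app --port 8000 &
      - name: Print Dart SDK version
        run: dart --version
      - name: Run dart test
        run: |
          cd integ_tests
          dart pub get
          dart lib/main.dart --dry true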

Related

AWS provider credentials not found when getting SSM param on serverless

I'm attempting to get an SSM param for the params in my serverless.yml.
Without it, sls deploy works as expected; it's adding that param that breaks the deploy. The credentials are set up on a GitLab runner using the export command for the access key ID and the secret access key.
.gitlab-ci.yml
before_script:
  - pip install virtualenv
  - python -m virtualenv venv_api
  - source venv_api/bin/activate
  - pip install -r requirements.txt
  - curl -sL https://deb.nodesource.com/setup_lts.x | bash -
  - apt-get install -y nodejs
  - npm config set prefix /usr/local
  - npm install -g serverless
  - serverless plugin install -n serverless-dynamodb-autoscaling
  - serverless plugin install -n serverless-python-requirements
script:
  # deploy to staging env
  - export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
  - export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
  - sls deploy --stage staging --verbose
serverless.yml
params:
  default:
    SOME_VARIABLE: ${ssm:SOME_VARIABLE}
[...]
provider:
  name: aws
  runtime: python3.9
  region: us-west-2
[...]
The error I'm getting is
$ sls deploy --stage staging --verbose
Running "serverless" from node_modules
Environment: linux, node 18.12.1, framework 3.25.0 (local) 3.25.0v (global), plugin
6.2.2, SDK 4.3.2
Docs: docs.serverless.com
Support: forum.serverless.com
Bugs: github.com/serverless/serverless/issues
Error:
Cannot resolve serverless.yml: Variables resolution errored with:
- Cannot resolve variable at "params.default.SOME_VARIABLE": AWS
provider credentials not found. Learn how to set up AWS provider credentials in
our docs here: <http://slss.io/aws-creds-setup>.,

running gui application on github hosted runner

For testing purposes, is it possible to run GUI applications on GitHub-hosted runners?
I tried to run Windows Calculator (Microsoft.WindowsCalculator_8wekyb3d8bbwe!App) on "windows-2022" via WinAppDriver and it fails with "WebDriverException: Package was not found".
Any suggestion(s)?
TIA,
Adrian.
P.S. here is my GitHub Actions workflow for the above:
# ISSUE fails with WebDriverException: Package was not found
# see https://github.com/QA-Automation-Starter/qa-automation/actions/runs/3234841483/jobs/5298454871
build-and-test-on-windows:
  name: windows build&test
  # see https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners
  runs-on: windows-2022
  environment: development
  steps:
    - uses: actions/checkout@v2
    - uses: actions/setup-java@v3
      with:
        java-version: '8'
        distribution: 'temurin'
        cache: maven
        settings-path: ${{ github.workspace }}
    # ISSUE somehow should run WinAppDriver
    # see https://github.com/microsoft/WinAppDriver/issues/1722
    # and https://github.com/actions/runner-images/blob/main/images/win/Windows2022-Readme.md
    # TODO maybe, should re-publish the site from here (?)
    - run: |
        choco install -y autologon
        autologon %USERNAME% $USERDOMAIN%
        start cmd /c "C:\Program Files (x86)\Windows Application Driver\WinAppDriver.exe"
        cd qa-testing-example
        mvn install ^
          -s %GITHUB_WORKSPACE%\settings.xml ^
          -Pmode-build-fast,mode-build-quiet,environment-default,testing-windows,device-windows
      shell: cmd

If statement in GitHub Actions if Zappa already deployed application

How do I specify whether to zappa deploy or zappa update my application in GitHub Actions, with some sort of if statement?
My workflow is as per below:
name: Dev Deploy
on:
  push:
    branches:
      - mybranch
jobs:
  dev-deploy:
    name: Deploy to Dev
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Set up Python 3.9.10
        uses: actions/setup-python@v1
        with:
          python-version: 3.9.10
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install pytest
          pip install python-Levenshtein
          pip install virtualenv
      - name: Install zappa
        run: pip install zappa
      - name: Install Serverless
        run: npm install -g serverless
      - name: Configure Serverless for zappa Services
        run: serverless config credentials --provider aws --key myAWSKey --secret myAWSSecret
      - name: Deploy to Dev
        run: |
          python -m virtualenv envsp
          source envsp/bin/activate
          zappa deploy dev
If the application has already been deployed once, I get the error:
Error: This application is already deployed - did you mean to call update?
In which case I would want to run zappa update dev
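One possible sketch (not from the original post) is to skip the explicit if statement and instead try an update first, falling back to deploy only when the stage has never been deployed, since zappa update fails on an undeployed stage:

      - name: Deploy to Dev
        run: |
          python -m virtualenv envsp
          source envsp/bin/activate
          # assumption: update succeeds when the stage already exists;
          # otherwise fall back to the initial deploy
          zappa update dev || zappa deploy dev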

GitHub Actions install jinja j2

Using GitHub Actions, I'm trying to install j2:
jobs:
  install-packages:
    runs-on: ubuntu-latest
    steps:
      - run: |
          sudo apt-get install -y jq
          pip3 install --user --upgrade j2cli
          j2 --version
This successfully installs j2cli, but the last j2 --version produces Error: Process completed with exit code 127. (logs).
Why is this happening?
When you execute your script using a run step, it is executed in a bash shell by default. Exit code 127 is emitted by the shell when the given command is not found in your PATH environment variable and is not a built-in shell command. In other words, the system doesn't understand your command because it doesn't know where to find the j2 executable you're trying to call. Knowing what the error means, we can fix it by adding the pip3 package installation directory to the PATH. We can locate that directory manually by calling pip3 show j2cli, or we can set up a Python environment that handles it automatically by using the dedicated setup-python action before calling the pip3 installer. With that in mind, the script should be adjusted:
jobs:
  install-packages:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/setup-python@v2
        with:
          python-version: 3.x
      - run: |
          pip3 install --user --upgrade j2cli
          j2 --version
It should fix the error.
Please note that we don't need to install the jq binary, as it comes pre-installed on the GitHub-hosted runner. That's why you don't need the:
sudo apt-get install -y jq
If we look at the included log, we can see it clearly:
jq is already the newest version (1.5+dfsg-2).
You can find the software included with the GitHub-hosted runner here.
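The manual alternative mentioned above would look roughly like this (assuming pip --user places console scripts in ~/.local/bin on the Linux runner, which pip3 show j2cli can confirm). Since GITHUB_PATH only affects subsequent steps, the version check runs in its own step:

jobs:
  install-packages:
    runs-on: ubuntu-latest
    steps:
      - run: |
          pip3 install --user --upgrade j2cli
          # assumption: pip --user installs the j2 script into ~/.local/bin
          echo "$HOME/.local/bin" >> "$GITHUB_PATH"
      - run: j2 --version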

Run statsd as a daemon on EC2 instances programmatically

EDIT: My goal is to be able to emit metrics from my Spring Boot application and have them sent to a Graphite server. For that, I am trying to set up statsd. If you can suggest a cleaner approach, that would be better.
I have a Beanstalk application which requires statsd to run as a background process. I was able to specify commands and packages through ebextensions config file as follows:
packages:
  yum:
    git: []
commands:
  01_nodejs_install:
    command: sudo yum -y install nodejs npm --enablerepo=epel
    ignoreErrors: true
  02_mkdir_statsd:
    command: mkdir /home/ec2-user/statsd
  03_fetch_statsd:
    command: git clone https://github.com/etsy/statsd.git /home/ec2-user/statsd
    ignoreErrors: true
  04_run_statsd:
    command: node stats.js exampleConfig.js
    cwd: /home/ec2-user/statsd
When I try to deploy the application to a new environment, the EC2 node never comes up fully. I logged in to check what might be going on and noticed in /var/log/cfn-init.log that 01_nodejs_install, 02_mkdir_statsd and 03_fetch_statsd were executed successfully. So I guess the system was stuck on the fourth command (04_run_statsd).
2016-05-24 01:25:09,769 [INFO] Yum installed [u'git']
2016-05-24 01:25:37,751 [INFO] Command 01_nodejs_install succeeded
2016-05-24 01:25:37,755 [INFO] Command 02_mkdir_statsd succeeded
2016-05-24 01:25:38,700 [INFO] Command 03_fetch_statsd succeeded
cfn-init.log (END)
I need help with the following:
If there is a better way to install and run statsd while instantiating an environment, I would appreciate if you could provide details on that approach. This current scheme seems hacky.
If this is the approach I need to stick with, how can I run the fourth command so that statsd can be run as a background process?
Tried a few things and found that the following ebextensions configs work:
packages:
  yum:
    git: []
commands:
  01_nodejs_install:
    command: sudo yum -y install nodejs npm --enablerepo=epel
    ignoreErrors: true
  02_mkdir_statsd:
    command: mkdir /home/ec2-user/statsd
  03_fetch_statsd:
    command: git clone https://github.com/etsy/statsd.git /home/ec2-user/statsd
    ignoreErrors: true
  04_change_config:
    command: cat exampleConfig.js | sed 's/2003/<graphite server port>/g' | sed 's/graphite.example.com/my.graphite.server.hostname/g' > config.js
    cwd: /home/ec2-user/statsd
  05_run_statsd:
    command: setsid node stats.js config.js >/dev/null 2>&1 < /dev/null &
    cwd: /home/ec2-user/statsd
Note that I added another command (04_change_config) so that I may configure my own Graphite server and port in statsd configs. This change is not needed to address the original question, though.
The actual run command uses setsid to run the command as a daemon.