When I tried to run the GitHub Action to build a Docker image, I got the following error. Any insight into what went wrong? Thanks!
Here is the workflow YAML:
name: Python application
on:
  push:
    paths:
      - 'python/*'
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Set up Python 3.7
        uses: actions/setup-python@v1
        with:
          python-version: 3.7
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r ./python/requirements.txt
      - name: Build & Push Image
        run: |
          cd ./python
          echo "${{ secrets.DOCKERPW }}" | docker login -u "[your dockerhub login here]" --password-stdin
          docker image build -t [your dockerhub username here]/gitops:hellov1.0 .
          docker push [your docker hub username here]/gitops:hellov1.0
The error message in the log:
Run cd ./python
cd ./python
echo *** | docker login -u xsqian --password-stdin
docker image build -t xsqian/gitops:hellov1.0 .
docker push xsqian/gitops:hellov1.0
shell: /usr/bin/bash -e {0}
env:
pythonLocation: /opt/hostedtoolcache/Python/3.7.11/x64
/home/runner/work/_temp/30d82465-db58-4fc4-8139-ed8d8f5762d6.sh: line 2: unexpected EOF while looking for matching `"'
Error: Process completed with exit code 2.
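For what it's worth, bash reports this particular message when a double quote in the script it is given is never closed, so the secret value or the docker login line most likely contains an unbalanced ". The same failure can be reproduced locally (illustration only, not from the original post):

bash -c 'echo "oops'
# unexpected EOF while looking for matching `"'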
In my GitHub Actions workflow main.yml, I did the following to add to PYTHONPATH:
PWD=$(pwd)
export PYTHONPATH=$PWD/src:$PWD/tests:$PYTHONPATH
I verified the PYTHONPATH using the following command:
echo "PYTHONPATH=$PYTHONPATH"
and the output is PYTHONPATH=/home/runner/work/my_api/my_api/src:/home/runner/work/my_api/my_api/tests
I have a module called my_api that lives under /home/runner/work/my_api/my_api/src.
But now I'm getting ModuleNotFoundError: No module named 'my_api'. It seems export PYTHONPATH has no impact on the system. Below is the complete workflow YAML file.
name: Integration Test Run
env:
  HISTORIC_DATA_FOLDER: /usr/my_api_historic_data
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install Python 3
        uses: actions/setup-python@v1
        with:
          python-version: 3.6
      - name: Filesystem Setup
        run: |
          pwd
          mkdir my_api_historic_data_test
      - name: Docker Compose
        run: |
          sudo docker-compose -f docker-compose-github.yml build
          sudo docker-compose -f docker-compose-github.yml --verbose --env-file .env up &
      - name: Integration Test Setup
        run: |
          echo "-----pwd-----"
          pwd
          echo "-----ls-----"
          ls
          echo "-----ls src/-----"
          ls src/
          echo "----PYTHONPATH------"
          PWD=$(pwd)
          export PYTHONPATH=$PWD/src:$PWD/tests:$PYTHONPATH
          echo "PYTHONPATH=$PYTHONPATH"
          echo "-----HISTORIC PATH----"
          export HISTORIC_DATA_FOLDER=/home/runner/work/my_api/my_api/my_api_historic_data_test
          echo "HISTORIC_DATA_FOLDER=$HISTORIC_DATA_FOLDER"
      - name: Integration Test Run
        run: |
          sleep 30
          pip install requests
          sudo python -m unittest discover
As already said in the comments, each step runs in its own shell. You need to make sure your variable is exported properly, so that it's available in all subsequent steps:
echo "PYTHONPATH=$PYTHONPATH" >> $GITHUB_ENV
See the docs for more details.
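Applied to the workflow above, that could look roughly like this (a minimal sketch, assuming the same step layout as in the question; note that sudo python would not see the variable by default anyway, since sudo resets the environment unless you pass -E):

      - name: Integration Test Setup
        run: |
          echo "PYTHONPATH=$PWD/src:$PWD/tests:$PYTHONPATH" >> $GITHUB_ENV
      - name: Integration Test Run
        run: |
          sleep 30
          pip install requests
          python -m unittest discover   # PYTHONPATH written to GITHUB_ENV above is available in this later step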
I am trying to get a coverage report in GitHub Actions,
but when I run the pipeline it gives me this error:
Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock
Then I searched around and added the sudo mysql start service, and now I get this error, but I don't know where or how I have to write the -root, the -password and the -host:
Access denied for user 'root'@'localhost' (using password: YES)")
How do I do that?
name: Django CI
on:
  push:
    branches: [ unittests ]
    paths-ignore: '**/SkoleProtocol/attendanceCode/tests/test_selenium.py'
  pull_request:
    branches: [ unittests ]
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      max-parallel: 4
      matrix:
        python-version: [3.7, 3.8, 3.9]
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python 3.7
        uses: actions/setup-python@v2
        with:
          python-version: 3.7
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Create test database
        run: |
          sudo service mysql start
      - name: Coverage report
        run: |
          pip install coverage
          coverage run manage.py test
          coverage report
      - name: Lint with flake8
        run: |
          pip install flake8
          flake8 ./attendanceCode --exit-zero # Exit with status code "0" even if there are errors.
      - name: Django Tests
        run: |
          python3 manage.py test
Two things stand out.
Do you happen to have the credentials in an env file elsewhere?
      - name: Create test database
        run: |
          sudo service mysql start
You only start the service rather than creating the database.
Try:
sudo /etc/init.d/mysql start   # to start the service
mysql -uroot -proot -e "CREATE DATABASE __dbname__;"
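Put together as a single workflow step, that could look roughly like this (a sketch; root/root are the default credentials of the MySQL server preinstalled on GitHub-hosted Ubuntu runners, and test_db is a placeholder for your own database name):

      - name: Create test database
        run: |
          sudo service mysql start
          mysql -uroot -proot -e "CREATE DATABASE test_db;"

The same user, password and host then need to match whatever your Django DATABASES setting (or env file) reads.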
I am trying to install GSAP 3 with the Shockingly Green package.
The following are the steps recommended by the plugin:
//npm.greensock.com/:_authToken=XXXXXXXXXXXXXXXXXXXXXX
@gsap:registry=https://npm.greensock.com
And
yarn add gsap@npm:@gsap/shockingly
This is the workflow file I am using in GitHub Actions.
name: Deployment
on:
  push:
    branches: [ development ]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v1.4.4
        with:
          node-version: 14.16.0
      - name: Setup PHP with intl
        uses: shivammathur/setup-php@v2
        with:
          php-version: '7.4'
          extensions: intl-67.1
      - name: Install Composer
        run: sudo composer
      - name: Install dependencies
        run: |
          composer install -o
          yarn
        env:
          NPM_AUTH_TOKEN: ${{ secrets.NPM_AUTH_TOKEN }}
      - name: Build
        run: yarn build
      - name: Sync
        env:
          dest: 'root@XXXXXXXXXXX:/var/www/html/wp-content/themes/XXXXXXX'
        run: |
          echo "${{secrets.DEPLOY_KEY}}" > deploy_key
          chmod 600 ./deploy_key
          rsync -chav --delete \
            -e 'ssh -i ./deploy_key -o StrictHostKeyChecking=no' \
            --exclude /deploy_key \
            --exclude /.git/ \
            --exclude /.github/ \
            --exclude /node_modules/ \
            ./ ${{env.dest}}
I have added the proper NPM_AUTH_TOKEN secret with the token provided to me by GSAP.
But I keep getting this error:
yarn install v1.22.17
warning package-lock.json found. Your project contains lock files generated by tools other than Yarn. It is advised not to mix package managers in order to avoid resolution inconsistencies caused by unsynchronized lock files. To clear this warning, remove package-lock.json.
[1/5] Validating package.json...
[2/5] Resolving packages...
[3/5] Fetching packages...
error An unexpected error occurred: "https://npm.greensock.com/#gsap%2fshockingly/-/shockingly-3.8.0.tgz: Request failed \"403 Forbidden\"".
info If you think this is a bug, please open a bug report with the information provided in "/home/runner/work/lacadives-theme/lacadives-theme/yarn-error.log".
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
Error: Process completed with exit code 1.
I think the action is running against the wrong registry.
Is there a way I can change the registry to @gsap:registry=https://npm.greensock.com?
There were two issues that needed to be addressed:
setting up .npmrc correctly,
and removing the yarn.lock file.
My final yml file:
name: Deployment
on:
  push:
    branches: [ development ]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v1.4.4
        with:
          node-version: 14.16.0
      - name: Create NPMRC
        run: |
          echo "//npm.greensock.com/:_authToken=XXXXXXXXXXXXXXXXXXXXXXXXXXXX" >> ~/.npmrc
          echo "@gsap:registry=https://npm.greensock.com" >> ~/.npmrc
      - name: Setup PHP with intl
        uses: shivammathur/setup-php@v2
        with:
          php-version: '7.4'
          extensions: intl-67.1
      - name: Install Composer
        run: sudo composer
      - name: Install dependencies
        run: |
          composer install -o
          rm yarn.lock
          yarn
      - name: Build
        run: yarn build
      - name: Sync
        env:
          dest: 'root@XXXXXXX:/var/www/html/wp-content/themes/XXXXXXXXX'
        run: |
          echo "${{secrets.DEPLOY_KEY}}" > deploy_key
          chmod 600 ./deploy_key
          rsync -chav --delete \
            -e 'ssh -i ./deploy_key -o StrictHostKeyChecking=no' \
            --exclude /deploy_key \
            --exclude /.git/ \
            --exclude /.github/ \
            --exclude /node_modules/ \
            ./ ${{env.dest}}
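If you'd rather not hard-code the token in the workflow file, the Create NPMRC step could read it from the existing secret instead (a sketch, reusing the NPM_AUTH_TOKEN secret already mentioned in the question):

      - name: Create NPMRC
        run: |
          echo "//npm.greensock.com/:_authToken=${NPM_AUTH_TOKEN}" >> ~/.npmrc
          echo "@gsap:registry=https://npm.greensock.com" >> ~/.npmrc
        env:
          NPM_AUTH_TOKEN: ${{ secrets.NPM_AUTH_TOKEN }}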
I have the following yaml file. It was working just fine until yesterday. Unfortunately, starting from today I get the warning and the error shown below.
I hope someone will be able to point me to a solution to fix this issue. Below is the yaml code:
name: CI_dev
on:
  pull_request:
    branches: [ dev ]
jobs:
  test_pipeline:
    runs-on: ubuntu-latest
    steps:
      # Install Salesforce CLI
      - name: Install Salesforce CLI
        run: |
          wget https://developer.salesforce.com/media/salesforce-cli/sfdx-linux-amd64.tar.xz
          mkdir sfdx-cli
          tar xJf sfdx-linux-amd64.tar.xz -C sfdx-cli --strip-components 1
          ./sfdx-cli/install
      # Checkout master
      - name: 'checkout master'
        uses: actions/checkout@master
      # Read secret, authenticate and deploy
      - name: 'Populate auth file with SFDX_URL secret'
        shell: bash
        run: 'echo ${{ secrets.secret}} > ./secret.txt'
      - name: 'Authenticate'
        run: 'sfdx force:auth:sfdxurl:store --sfdxurlfile=./secret.txt -a secretAlias'
      - name: 'Deploy'
        run: "sfdx force:source:deploy --sourcepath ./force-app/main/default -l RunLocalTests -u secretAlias"
Below is the warning that appears on the Authenticate step:
Warning: force:auth:sfdxurl:store is not a sfdx command.
Did you mean auth:sfdxurl:store? [y/n]:
And below is the error that appears on the Deploy step:
ERROR running force:source:deploy: No org configuration found for name secretAlias
Error: Process completed with exit code 1.
sfdx (at least the Linux distributions) was recently updated from 7.82.1 to 7.83.1 (January 2021);
since 7.83.1 it follows a different syntax format.
You need to remove force: from your 'Authenticate' command line, as advised in the warning message.
You can check your current version with:
sfdx --version
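In other words, only the Authenticate command changes; the Deploy command in the question can stay as it is:

# before (worked on 7.82.1; triggers the "Did you mean auth:sfdxurl:store?" prompt on 7.83.1+)
sfdx force:auth:sfdxurl:store --sfdxurlfile=./secret.txt -a secretAlias
# after
sfdx auth:sfdxurl:store --sfdxurlfile=./secret.txt -a secretAlias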
Busy Box was right. I just needed to remove force from force:auth and it's already working again. Below is the updated yaml file for reference.
name: CI_dev
on:
  pull_request:
    branches: [ dev ]
jobs:
  test_pipeline:
    runs-on: ubuntu-latest
    steps:
      # Install Salesforce CLI
      - name: Install Salesforce CLI
        run: |
          wget https://developer.salesforce.com/media/salesforce-cli/sfdx-linux-amd64.tar.xz
          mkdir sfdx-cli
          tar xJf sfdx-linux-amd64.tar.xz -C sfdx-cli --strip-components 1
          ./sfdx-cli/install
      # Checkout master
      - name: 'checkout master'
        uses: actions/checkout@master
      # Read secret, authenticate and deploy
      - name: 'Populate auth file with SFDX_URL secret'
        shell: bash
        run: 'echo ${{ secrets.secret}} > ./secret.txt'
      - name: 'Authenticate'
        run: 'sfdx auth:sfdxurl:store --sfdxurlfile=./secret.txt -a secretAlias'
      - name: 'Deploy'
        run: "sfdx force:source:deploy --sourcepath ./force-app/main/default -l RunLocalTests -u secretAlias"
I am currently trying to implement a workflow which requires protobuf to be installed. However, on Ubuntu I have to compile it myself. The problem is that this takes quite some time, so I figured caching this step is the thing to do.
However, I am not sure how I can use actions/cache for this, if it is possible at all.
The following is how I am installing protobuf and my Python dependencies:
name: Service
on:
  push:
    branches: [develop, master]
jobs:
  test:
    runs-on: ubuntu-18.04
    steps:
      - name: Install protobuf-3.6.1
        run: |
          wget https://github.com/protocolbuffers/protobuf/releases/download/v3.6.1/protobuf-all-3.6.1.tar.gz
          tar -xvf protobuf-all-3.6.1.tar.gz
          cd protobuf-3.6.1
          ./configure
          make
          make check
          sudo make install
          sudo ldconfig
      - uses: actions/checkout@v2
      - name: Install Python dependencies
        run: |
          python -m pip install --upgrade pip setuptools
          pip install -r requirements.txt
How can I cache these run steps so that they don't have to run each time?
I tested the following:
name: Service
on:
  push:
    branches: [develop, master]
jobs:
  test:
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v2
      - name: Load from cache
        id: protobuf
        uses: actions/cache@v1
        with:
          path: protobuf-3.6.1
          key: protobuf3
      - name: Compile protobuf-3.6.1
        if: steps.protobuf.outputs.cache-hit != 'true'
        run: |
          wget https://github.com/protocolbuffers/protobuf/releases/download/v3.6.1/protobuf-all-3.6.1.tar.gz
          tar -xvf protobuf-all-3.6.1.tar.gz
          cd protobuf-3.6.1
          ./configure
          make
          make check
      - name: Install protobuf
        run: |
          cd protobuf-3.6.1
          sudo make install
          sudo ldconfig
      - name: Install Python dependencies
        run: |
          python -m pip install --upgrade pip setuptools
          pip install -r requirements.txt
I would also delete all the source files once it's built.
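One way to do that cleanup without losing the cache (a sketch, not tested against this exact workflow): remove the downloaded archive once it has been extracted, but keep the protobuf-3.6.1 directory itself, since that is both the cache path and what the later make install step needs.

      - name: Compile protobuf-3.6.1
        if: steps.protobuf.outputs.cache-hit != 'true'
        run: |
          wget https://github.com/protocolbuffers/protobuf/releases/download/v3.6.1/protobuf-all-3.6.1.tar.gz
          tar -xvf protobuf-all-3.6.1.tar.gz
          rm protobuf-all-3.6.1.tar.gz   # the archive is not needed once extracted
          cd protobuf-3.6.1
          ./configure
          make
          make check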