I want to test a CLI that should connect to PostgreSQL and MySQL servers using GitHub Actions, on all platforms if possible: Linux, Windows and macOS.
I found instructions on how to run a Postgres service and how to run a MySQL service and combined these into a workflow:
```yaml
name: Test
on: [push]
jobs:
  init_flow:
    name: 'Run MySQL and Postgres on ${{ matrix.os }}'
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest, windows-latest, macOS-latest]
    # via https://github.com/actions/example-services/blob/master/.github/workflows/postgres-service.yml
    services:
      postgres:
        image: postgres:10.8
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: postgres
        ports:
          # will assign a random free host port
          - 5432/tcp
        # needed because the postgres container does not provide a healthcheck
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
      mysql:
        image: mysql:5.7
        env:
          MYSQL_ROOT_PASSWORD: root
        ports:
          - 3306
        options: --health-cmd="mysqladmin ping" --health-interval=10s --health-timeout=5s --health-retries=3
    steps:
      - uses: actions/checkout@v1
      - run: node -v
        env:
          # use localhost for the host here because we are running the job on the VM.
          # If we were running the job in a container, this would be postgres
          POSTGRES_HOST: localhost
          POSTGRES_PORT: ${{ job.services.postgres.ports[5432] }} # get randomly assigned published port
          MYSQL_PORT: ${{ job.services.mysql.ports[3306] }}
```
But this only seems to work on Linux, not on Windows or macOS; see the results of the action on GitHub:
Linux ✔
Windows ❌
macOS ❌
Windows fails during Initialize Containers with ##[error]Container operation is only supported on Linux, and macOS fails even earlier, during Set up job, with ##[error]File not found: 'docker'.
The GitHub Actions services docs do not mention that this only works on Linux, but I also do not know much about containers or Docker, so I might be missing something obvious.
(It is not important that MySQL and PostgreSQL run on the same operating system by the way - they only have to be accessible by the main job.)
Is it possible to run MySQL and PostgreSQL for GitHub Actions using Windows and macOS?
If not, what is the best workaround here?
Well, normally it's supported only on Linux. I was wondering whether it would be supported on the other VMs, so I asked GitHub. Here is the answer:
Currently, Docker container actions can only execute in the GitHub-hosted Linux environment, and is not supported on other environments (such as Windows and MacOS).
More details please reference here: https://help.github.com/en/actions/automating-your-workflow-with-github-actions/about-actions#types-...
We notice that some other users have also reported the same question; we have reported this as a feature request to the appropriate engineering team.
Ref: https://github.community/
I'm not sure that it's possible yet. I know that container actions only work on Linux virtual machines at the moment, as you can see from the documentation here:
https://help.github.com/en/articles/about-actions#types-of-actions
Services use containers, so it makes sense that they don't work on Windows and macOS yet.
An alternative workaround is, of course, to use an external database.
A simple way to do this might be the free tiers of Heroku's offering:
For Postgres they have Heroku Postgres
For MySQL you can use the ClearDB or JawsDB add-on
They all give you tiny space and limits, but in my case this will probably be enough for now.
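For example, a minimal sketch of pointing a test step at such an external database via repository secrets (the secret names here are hypothetical; store the connection strings the add-ons give you as secrets):

```yaml
- run: npm test
  env:
    # Hypothetical secret names holding the connection strings
    # provided by Heroku Postgres and JawsDB.
    DATABASE_URL: ${{ secrets.HEROKU_POSTGRES_URL }}
    MYSQL_URL: ${{ secrets.JAWSDB_MYSQL_URL }}
```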
For MySQL, I found a (temporary[1]) workaround:
Per Software in virtual environments for GitHub Actions, I learned that all operating systems currently have a local installation of MySQL 5.7 running on port 3306 with the credentials root:root. You can use this MySQL instance in your jobs.
Unfortunately for me PostgreSQL is not installed.
[1] I recall a product manager of GitHub Actions telling people that the installed software might change and that especially the databases might go away soon, unfortunately (can't recall or find the link though; it was somewhere in GitHub Community, GitHub Actions).
Turns out the MySQL credentials root:root also only work on Linux; I could not find working ones for Windows and macOS.
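On Linux, a minimal sketch of using that preinstalled instance (assuming an ubuntu-latest runner where the MySQL service is installed but may not be started):

```yaml
steps:
  - name: Start the preinstalled MySQL server
    run: sudo systemctl start mysql
  - name: Create a test database using the root:root credentials
    run: mysql -uroot -proot -e 'CREATE DATABASE IF NOT EXISTS testdb;'
```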
Related
I am trying to run my GitHub runner as root on self-hosted Linux servers. Can anyone point me to an easy solution that I can implement quickly in the following code:
```yaml
name: Test
on: push
jobs:
  Test1:
    runs-on: selfhosted-linux # This should run on this self-hosted runner only
    steps:
      - uses: actions/checkout@v2
```
At this point I cannot SSH into the self-hosted Linux box but can access it only via code in the workflow folder, and I would like to run the checkout as root rather than as a non-root user.
You need to set the environment variable RUNNER_ALLOW_RUNASROOT before you run config.sh to set up the runner. e.g.
RUNNER_ALLOW_RUNASROOT=1 ./config.sh --token asdlkjfasdlkj
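The runner performs the same root check when it starts, so the variable should be in place for run.sh as well; a minimal sketch (the URL and token are placeholders):

```sh
export RUNNER_ALLOW_RUNASROOT=1
./config.sh --url https://github.com/<owner>/<repo> --token <TOKEN>
./run.sh
```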
I have a GitHub Action that runs tests for my Python/Django project. It caches the virtual environment that Pipenv creates. Here's the workflow with nearly everything but the relevant steps commented out/removed:
```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    services:
      # postgres:
    steps:
      #- uses: actions/checkout@v2
      #- name: Set up Python
      #- name: Install pipenv and coveralls
      - name: Cache pipenv virtualenv
        uses: actions/cache@v2
        id: pipenv-cache
        with:
          path: ~/.pipenv
          key: ${{ runner.os }}-pipenv-v4-${{ hashFiles('**/Pipfile.lock') }}
          restore-keys: |
            ${{ runner.os }}-pipenv-v4-
      - name: Install dependencies
        env:
          WORKON_HOME: ~/.pipenv/virtualenvs
          PIPENV_CACHE_DIR: ~/.pipenv/pipcache
        if: steps.pipenv-cache.outputs.cache-hit != 'true'
        run: pipenv install --dev
      # Run tests etc.
```
This usually works fine, but because caches are removed after 7 days, if the workflow is run less frequently than that, the cache can't be found and the Install dependencies step fails with:
FileNotFoundError: [Errno 2] No such file or directory: '/home/runner/.pipenv/virtualenvs/my-project-CfczyyRI/bin/pip'
I then bump the cache key's version number (v4 above) and the action runs OK.
I thought the if: steps.pipenv-cache.outputs.cache-hit != 'true' would fix this but it doesn't. What am I missing?
First alternative: use a separate workflow with a schedule trigger event, which lets you run workflows on a recurring schedule.
That way, you force a refresh of those dependencies in the workflow cache (see the sketch after these alternatives).
Second alternative: use github.rest.actions.getActionsCacheList from actions/github-script (as seen here), again in a separate workflow, just to read said cache, and check whether it still disappears after 7 days.
Third alternative: check if reading the cache through the new Web UI is enough to force a refresh.
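A minimal sketch combining the first two alternatives: a scheduled workflow that reads the cache list every few days (the workflow name and cron expression are arbitrary; whether merely reading the caches resets the eviction clock is exactly what the second alternative sets out to test):

```yaml
name: keep-cache-warm
on:
  schedule:
    - cron: '0 6 */5 * *' # every 5 days, inside the 7-day eviction window
jobs:
  read-cache:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/github-script@v6
        with:
          script: |
            // List this repository's Actions caches
            const caches = await github.rest.actions.getActionsCacheList({
              owner: context.repo.owner,
              repo: context.repo.repo,
            });
            core.info(`Found ${caches.data.total_count} cache entries`);
```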
On that third point (Oct. 2022):
Manage caches in your Actions workflows from Web Interface
Caching dependencies and other commonly reused files enables developers to speed up their GitHub Actions workflows and make them more efficient.
We have now enabled Cache Management from the web interface to enable developers to get more transparency and control over their cache usage within their GitHub repositories.
Actions users who use actions/cache can now:
View a list of all cache entries for a repository.
Filter and sort the list of caches using specific metadata such as cache size, creation time, or last accessed time.
Delete a corrupt or a stale cache entry.
Monitor aggregate cache usage for repositories and organizations.
In addition to the Cache Management UX that we have now enabled, you could also use our Cache APIs or install the GitHub CLI extension for Actions cache to manage your caches from your terminal.
Learn more about dependency caching to speed up your Actions workflows.
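For instance, via the GitHub CLI extension route (the repo slug is a placeholder; check gh actions-cache --help for the exact flags):

```sh
# Install the Actions cache extension for the GitHub CLI
gh extension install actions/gh-actions-cache
# List the cache entries of a repository
gh actions-cache list --repo <owner>/<repo>
# Delete a stale entry by key
gh actions-cache delete <cache-key> --repo <owner>/<repo> --confirm
```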
I'm trying to set up CLI Xdebug in the Lando environment.
It works flawlessly from the web, but I can't manage to make it work from the CLI to debug tests and scripts.
Here is my .lando.yml file:
```yaml
name: develop
recipe: wordpress
proxy:
  appserver_nginx:
    - develop.loc
config:
  php: '7.3'
  via: nginx
  database: mariadb
  xdebug: true
```
I use PhpStorm as my IDE. I have already set up the server, the path mappings, and ports 9000 and 9003 for listening, but it still doesn't stop at breakpoints.
Has anyone set up CLI Xdebug with Lando? Any ideas? Thanks for helping.
I managed to make it work.
So, the idea is to set up two environment variables:
PHP_IDE_CONFIG="serverName=localhost", where localhost is the name of the server in your PhpStorm settings.
XDEBUG_TRIGGER="1" - this variable triggers Xdebug.
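For a one-off run you can set both variables inline inside the app container; a sketch (assuming a test.php in the app root):

```sh
lando ssh -c 'XDEBUG_TRIGGER=1 PHP_IDE_CONFIG="serverName=localhost" php test.php'
```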
But how can we provide the variables dynamically, so that XDEBUG_TRIGGER is set only when we want it?
For such things Lando has the tooling option!
So, we can create a custom command that makes the magic happen, like this:
```yaml
tooling:
  phpdebug:
    service: appserver
    cmd:
      - php
    env:
      XDEBUG_TRIGGER: 1
      PHP_IDE_CONFIG: "serverName=localhost"
```
Then restart (or rebuild) your appserver, and you'll have a brand-new custom command for debugging PHP from the CLI, like this:
lando phpdebug test.php
So, it works the same as lando php test.php, but with all the environment variables needed to run Xdebug provided.
Update:
If someone is interested in how to debug WP CLI from the Lando environment:
lando phpdebug /app/vendor/wp-cli/wp-cli/bin/../php/boot-fs.php --version
So, it's the same as lando wp --version, but with the Xdebug environment variables provided.
P.S. Please be aware: these are the instructions for Xdebug 3.
I'm starting to use Cloud Build for a project, and I'm having the following issue:
Using this cloudbuild.yaml file:
```yaml
steps:
  - name: 'gcr.io/cloud-builders/npm'
    args: ['install']
  - name: 'gcr.io/cloud-builders/npm'
    args: ['test']
```
The build runs okay on the first step, since it is able to install the dependencies and everything, but the second step fails because it needs to connect to a MySQL database.
As seen in another SO post, you can use an extra build step to run the Cloud SQL Proxy and connect that way. Check it out and let us know if that solution works for you.
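A minimal sketch of what that extra step could look like, using the legacy cloud_sql_proxy binary (the instance connection name is a placeholder, and this assumes the npm builder image ships bash and wget):

```yaml
steps:
  - name: 'gcr.io/cloud-builders/npm'
    args: ['install']
  - name: 'gcr.io/cloud-builders/npm'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        # Download and start the Cloud SQL Proxy in the background,
        # then run the tests against 127.0.0.1:3306.
        wget -q -O /workspace/cloud_sql_proxy https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64
        chmod +x /workspace/cloud_sql_proxy
        /workspace/cloud_sql_proxy -instances=<PROJECT>:<REGION>:<INSTANCE>=tcp:3306 &
        sleep 5
        npm test
```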
OpenShift details: paid Professional version.
I've been trying to create a build from a Dockerfile in OpenShift.
It's tough going.
So I tried to use the existing templates in the Cluster Console, one of which is the Docker one. When I press "Try it", it generates a sample BuildConfig; when I then try to Create it, it gives me the error:
(I have now raised the above in the Origin upstream issue tracker.)
Anyhoo... does anyone know how to specify a BuildConfig that builds an image from a Dockerfile in a Git repo? I would be grateful to know.
You can see the build strategies allowed for OpenShift Online on the product website: https://www.openshift.com/products/online. Dockerfile build isn't deprecated, it's just explicitly disallowed in OpenShift Online. You can build your Dockerfile locally and push it directly to the OpenShift internal registry (commands for docker login and docker push are on your cluster's About page).
However, in other environments (not OpenShift Online), you can specify a Dockerfile build as follows, providing a Git repo with a Dockerfile contained within (located at BuildConfig.spec.source.contextDir):
```yaml
strategy:
  type: Docker
```
There are additional options that can be configured for a Dockerfile build as well, outlined in https://docs.okd.io/latest/dev_guide/builds/build_strategies.html#docker-strategy-options.
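For context, a minimal sketch of a complete BuildConfig using the Docker strategy (the repository URL and resource names are hypothetical):

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp-docker-build # hypothetical name
spec:
  source:
    type: Git
    git:
      uri: https://github.com/example/myapp.git # hypothetical repo
    contextDir: . # directory containing the Dockerfile
  strategy:
    type: Docker
    dockerStrategy:
      dockerfilePath: Dockerfile
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest
```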