Why is cython expressing different Python requirements in the same build? - conda-build

In trying to build Chaco using the following conda command:
conda build --python=3.9 --numpy=1.23 --use-local -c dbanas -c defaults -c conda-forge conda.recipe/chaco/
and the following meta.yaml file:
{% set name = "chaco" %}
{% set version = "5.1.1" %}
{% set enable_version = "5.3.1" %}

package:
  name: "{{ name|lower }}"
  version: "{{ version }}"

source:
  git_url: https://github.com/enthought/chaco.git
  git_rev: {{ version }}

build:
  number: 1
  script: "{{ PYTHON }} -m pip install . --no-deps --ignore-installed -vv "

requirements:
  build:
    - cmake
    - Cython
    - enable ={{ enable_version }}
    - git
    - importlib_resources
    - numpy x.x
    - pyface >=7.4.2
    - python
    - setuptools
    - vs2019_win-64  # Uncomment for Windows build only.
    # - {{compiler('c')}}    # Would like to get one of these working, instead of above hack.
    # - {{compiler('cxx')}}
  run:
    - Cython
    - enable ={{ enable_version }}
    - importlib_resources
    - numpy x.x
    - pyface >=7.4.2
    - python

test:
  # Python imports
  imports:
    - chaco
    - chaco.api
    # - chaco.contour
    # - chaco.downsample
    # - chaco.downsample.tests
    # - chaco.layers
    # - chaco.overlays
    # - chaco.plugin
    # - chaco.scales
    # - chaco.scales.tests
    # - chaco.shell
    # - chaco.tests
    # - chaco.tools
    # - chaco.tools.tests
    # - chaco.tools.toolbars
    - chaco.ui
  # commands:
    # You can put test commands to be run here. Use this to test that the
    # entry points work.
  # You can also put a file called run_test.py in the recipe that will be run
  # at test time.
  # requires:
    # Put any additional test requirements here. For example
    # - nose

about:
  home: http://docs.enthought.com/chaco
  license: BSD License
  summary: 'interactive 2-dimensional plotting'
  license_family: BSD

# See
# http://docs.continuum.io/conda/build.html for
# more information about meta.yaml

extra:
  recipe-maintainers:
    - capn-freako
I'm getting a dependency version conflict report, which includes the following two very different expectations from cython regarding the Python version:
{snip}
Package vc conflicts for:
{snip}
cython -> python[version='>=2.7,<2.8.0a0'] -> vc[version='10.*|>=9,<10.0a0']
{snip}
Package python conflicts for:
{snip}
cython -> python[version='2.7.*|3.5.*|3.6.*|>=2.7,<2.8.0a0|>=3.10,<3.11.0a0|>=3.7,<3.8.0a0|>=3.8,<3.9.0a0|>=3.9,<3.10.0a0|>=3.6,<3.7.0a0|>=3.5,<3.6.0a0|>=3.11,<3.12.0a0|3.4.*']
{snip}
Why is cython expressing such a narrow range of acceptable Python versions in the first case (none of which are 3.*!) and such a broad range in the second case?
I also see this strange message, at the very end of my output:
The following specifications were found to be incompatible with your system:
- feature:/win-64::__win==0=0
- feature:|@/win-64::__win==0=0
- setuptools -> wincertstore[version='>=0.2'] -> __win
Your installed version is: 0

Related

Unable to parse Commit Message in the Github Action

I've been working on this for months now, but
I still can't parse the 'Commit Message' properly with Python (see script below).
You see, every commit message in my repository begins with the release's version number.
As of this writing, for example, parsing the commit message would result in the tag:
v8.11.0
I get this error message instead:
I'm not certain if it's creating the variable, tag, or not.
Python is not working for me. Would anyone have another approach?
# This workflow tests and releases the latest build
name: CI

# Controls when the action will run.
on:
  # Triggers the workflow on push or pull request events but only for the master branch
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build-and-test"
  build-and-test:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
      # Use the standard Java Action to setup Java
      # we want the latest Java 12
      - uses: actions/setup-java@v1
        with:
          java-version: '12.x'
      # Use the community Action to install Flutter
      # we want the stable channel
      - uses: subosito/flutter-action@v1
        with:
          channel: 'stable'
      # Get flutter packages
      - run: flutter pub get
      # Check for any formatting issues in the code.
      - run: flutter format .
      # Analyze our Dart code, but don't fail when there are issues.
      - run: flutter analyze . --preamble --no-fatal-infos --no-fatal-warnings
      # Run our tests
      - run: flutter test --coverage
      # Upload to codecov
      - uses: codecov/codecov-action@v2
        with:
          token: ${{secrets.CODECOV_TOKEN}}
          file: ./coverage/lcov.info
      # Parse a tag from the commit message
      - id: get-tag
        shell: python3 {0}
        run: |
          import json
          import os
          with open(os.environ['GITHUB_EVENT_PATH']) as fh:
              event = json.load(fh)
          tag = event['head_commit']['message'].split()[0]    # <----- tag NOT CREATED?!
      # Create a Release
      - uses: softprops/action-gh-release@v1
        env:
          # This token is provided by Actions, you do not need to create your own token
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          tag_name: v${{ steps.get-tag.outputs.tag }}          # <----- ERROR HERE!
          release_name: ${{ steps.get-tag.outputs.tag }}       # <----- ERROR HERE!
          body: |
            See CHANGELOG.md
          draft: false
          prerelease: false
Using an alternative approach, I'm able to produce a tag using the current date.
This proves that it all works except when trying to assign a 'tag' value using Python.
# Get current datetime in ISO format
- id: date
  run: echo "::set-output name=date::$(date -u +'%Y-%m-%d')"
# Create a Release
- uses: softprops/action-gh-release@v1
  env:
    # This token is provided by Actions, you do not need to create your own token
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  with:
    tag_name: ${{ steps.date.outputs.date }}v${{ github.run_number }}
    name: ${{ steps.date.outputs.date }}v${{ github.run_number }}
    body: |
      See CHANGELOG.md
    draft: false
    prerelease: false
Any ideas?
steps.get-tag.outputs.tag is not correctly output in your workflow.
You should output it as described in the docs:
- id: get-tag
  shell: python3 {0}
  run: |
    import json
    import os
    with open(os.environ['GITHUB_EVENT_PATH']) as fh:
        event = json.load(fh)
    tag = event['head_commit']['message'].split()[0]
    print("::set-output name=tag::" + tag)    # <--- This line

yq - inserting JSON as a raw string

I am writing a GitHub Action that does some CD and it uses yq to insert environment variables into a yaml file for deployment.
I'm trying to read a JSON string from a GH secret that will eventually be read from env and loaded into Python, where said string will be evaluated as a dictionary.
Running this in a terminal, for example:
yq -i '.value="{\"web\": \"test\"}"' test.yaml
Gives me:
value: '{"web": "test"}'
But in a Github Action, where I am doing this:
env:
  JSON="{\"web\": \"test\"}"
...
- name: test
  run : |
    yq -i '
      .value=strenv(JSON)
    ' deployment.yaml
Gives me:
Error: Bad expression, please check expression syntax
Doing other variations of that string, e.g. '{\"web\": \"test\"}', '\"{\"web\": \"test\"}\"', etc., also gives me the same error.
I've tried searching on the yq repository and consulted the documentation but can't seem to find what I am looking for.
To summarise, my problem is that I want to read a JSON string as a string when it is evaluated by yq.
One of yq's users recently contributed to yq's GitHub Action docs on using env variables in GitHub Actions - it may help here:
- name: Get an entry with a variable that might contain dots or spaces
  id: get_username
  uses: mikefarah/yq@master
  with:
    cmd: yq '.all.children.["${{ matrix.ip_address }}"].username' ops/inventories/production.yml
- name: Reuse a variable obtained in another step
  run: echo ${{ steps.get_username.outputs.result }}
See https://mikefarah.gitbook.io/yq/usage/github-action for more info.
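Applied to the snippet above, a minimal sketch might look like the step below. It assumes the JSON is exposed through a normal YAML env mapping (or a secret; DEPLOY_JSON is a hypothetical name) rather than the KEY="..." form, so that strenv(JSON) can pick it up as a raw string:
- name: test
  env:
    JSON: '{"web": "test"}'          # or: JSON: ${{ secrets.DEPLOY_JSON }}
  run: |
    yq -i '.value = strenv(JSON)' deployment.yaml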
Disclaimer: I wrote yq

Deploying chart using helmfile returns exit code 1

I'm trying to deploy a chart using helmfile. It works just fine locally using the same version and the same cluster.
The helmfile
environments:
  dev:
    values:
      - kubeContext: nuc
      - host: urbantz-api.dev.fitfit.dk
  prod:
    values:
      - kubeContext: nuc
      - host: urbantz-api.fitfit.dk

releases:
  - name: urbantz-api
    namespace: urbantz-api-{{ .Environment.Name }}
    chart: helm/
    kubeContext: "{{ .Values.kubeContext }}"
    # verify: true
    values:
      - image:
          tag: '{{ requiredEnv "IMAGE_TAG" }}'
      - ingress:
          enabled: true
          hosts:
            - host: {{ .Values.host }}
              paths:
                - path: /
The complete pipeline can be found here but the relevant command can be seen below
[ "$IMAGE_TAG" == "latest" ] && ./helmfile --debug -e dev sync
The complete output from the pipeline can be found here but the relevant part can be seen below
...
NOTES:
1. Get the application URL by running these commands:
http://urbantz-api.dev.fitfit.dk/
helm:whTHc> WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/runner/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/runner/.kube/config
helm:whTHc> NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
urbantz-api urbantz-api-dev 4 2021-03-13 12:07:01.111013559 +0000 UTC deployed urbantz-api-0.1.0 1.16.0
getting deployed release version failed:Failed to get the version for:helm
Removed /tmp/helmfile212040489/urbantz-api-dev-urbantz-api-values-569bd76cf
Removed /tmp/helmfile850374772/urbantz-api-dev-urbantz-api-values-57897fc66b
UPDATED RELEASES:
NAME CHART VERSION
urbantz-api helm/
urbantz-api urbantz-api-dev 4 2021-03-13 12:07:01.111013559 +0000 UTC deployed urbantz-api-0.1.0 1.16.0
Error: Process completed with exit code 1.
Please be aware that I'm also getting the message "getting deployed release version failed:Failed to get the version for:helm" when running locally. But the exit code is still 0.
UPDATE: I made it work by adding an ls at the end of my pipeline. The expression [ "$IMAGE_TAG" == "latest" ] && ./helmfile --debug -e dev sync exits with 1 if the test evaluates to false. Does anyone have a better solution than doing an ls on the following line?
Change file permissions on the configuration file.
chmod 600 ~/.kube/config
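Regarding the exit code from the update above: a bare [ "$IMAGE_TAG" == "latest" ] && ./helmfile ... returns 1 whenever the test is false, which fails the job even though nothing went wrong. A minimal sketch that avoids the trailing ls is to wrap the command in an explicit if (the step name is illustrative):
- name: Deploy to dev
  run: |
    # An if block exits 0 when its condition is false, unlike `[ ... ] && cmd`.
    if [ "$IMAGE_TAG" == "latest" ]; then
      ./helmfile --debug -e dev sync
    fi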

Reference the runner context in job's env clause

Here's a GitHub Actions workflow file for a Python project named spam:
name: PyInstaller
on:
  [...]
jobs:
  create_release:
    [...]
  make_artifact:
    needs: create_release
    strategy:
      matrix:
        os: [ ubuntu-latest, windows-latest ]
    runs-on: ${{ matrix.os }}
    env:
      ARTIFACT_PATH: dist/spam.zip
      ARTIFACT_NAME: spam-${{ runner.os }}.zip
    steps:
      [...]
When this runs, the workflow fails at startup with this:
The workflow is not valid. [...]:
Unrecognized named-value: 'runner'. Located at position 1 within expression: runner.os
I'm attempting to use the os attribute of the runner context. This SO Q&A mentions that the env context can only be used in specific places, so I suspect something similar is happening here. However, I can't find any official documentation addressing this.
Is there any way to reference the runner context to set an environment variable within the env clause of a job, as shown above?
I'm looking for a way to set the environment variable for all steps in the job, so an env inside a step item won't do.
The workaround I've got for now is to add a step specifically to set environment variables:
steps:
  - name: Setup environment
    run: |
      echo "ARTIFACT_NAME=spam-${{ runner.os }}.zip" >> $GITHUB_ENV
however this only works on the Linux runner.
If you scroll down a bit further in the GitHub Actions docs you linked, there's an example workflow printing different contexts to the log.
- name: Dump runner context
  env:
    RUNNER_CONTEXT: ${{ toJson(runner) }}
I set up a test repo with a workflow demonstration:
on: push
jobs:
  one:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os:
          - 'ubuntu-latest'
          - 'windows-latest'
          - 'macos-latest'
    steps:
      - name: Dump runner context
        env:
          RUNNER_CONTEXT: ${{ toJson(runner) }}
        run: echo "$RUNNER_CONTEXT"
      - name: Get runner OS
        env:
          RUNNER_OS: ${{ runner.os }}
        run: echo "$RUNNER_OS"
      - name: Create file with runner OS in name
        env:
          OS_FILENAME: 'spam-${{ runner.os }}.zip'
        run: |
          echo "OS_FILENAME=spam-${{ runner.os }}.zip" >> $GITHUB_ENV
          touch "./${{ env.OS_FILENAME }}"
          touch blah.txt
      - name: List created file
        run: ls -l "./${{ env.OS_FILENAME }}"
It looks like you can also set and access env in steps, and those persist across workflow steps. For example, I set the environment variable $OS_FILENAME in step 3 using the echo syntax, and reference it in step 4. This works across all the OS options offered on GitHub Actions.
Note that the GitHub Actions docs state that "Environment variables must be explicitly referenced using the env context in expression syntax or through use of the $GITHUB_ENV file directly; environment variables are not implicitly available in shell commands.". Basically, it means you can't implicitly refer to env variables like $FOO and instead must refer to them as ${{ env.FOO }} in shell commands.
So for your scenario, does it satisfy your requirements if you set $ARTIFACT_NAME in the first step of the job? I wonder if the reason might be that the runner context isn't created until the first step - I'm not sure I have a way of testing this.
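One way to make that first-step approach work on the Windows runner as well might be to force bash as the step shell, since bash is available on windows-latest too (a sketch; the step and variable values mirror the question):
steps:
  - name: Setup environment
    shell: bash    # the same $GITHUB_ENV syntax then works on every runner OS
    run: |
      echo "ARTIFACT_PATH=dist/spam.zip" >> "$GITHUB_ENV"
      echo "ARTIFACT_NAME=spam-${{ runner.os }}.zip" >> "$GITHUB_ENV"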

Set build number for conda metadata inside Azure pipeline

I am using a bash script to build the conda package in an Azure pipeline: conda build . --output-folder $(Build.ArtifactStagingDirectory). And here is the issue: conda build uses the build number in the meta.yaml file (see here).
A solution I could think of is to first copy all files to Build.ArtifactStagingDirectory, add the Azure pipeline's Build.BuildNumber into the meta.yaml, and then build the package into Build.ArtifactStagingDirectory (within a sub-folder).
I am trying to avoid doing it by writing a shell script to manipulate the yaml file in the Azure pipeline, because it might be error prone. Does anyone know a better way? It would be nice to read a more elegant solution in the answers or comments.
I don't know much about Azure pipelines. But in general, if you want to control the build number without changing the contents of meta.yaml, you can use a jinja template variable within meta.yaml.
Choose a variable name, e.g. CUSTOM_BUILD_NUMBER and use it in meta.yaml:
package:
  name: foo
  version: 0.1

build:
  number: {{ CUSTOM_BUILD_NUMBER }}
To define that variable, you have two options:
Use an environment variable:
export CUSTOM_BUILD_NUMBER=123
conda build foo-recipe
OR
Define the variable in conda_build_config.yaml (docs), as follows
echo "CUSTOM_BUILD_NUMBER:" >> foo-recipe/conda_build_config.yaml
echo " - 123" >> foo-recipe/conda_build_config.yaml
conda build foo-recipe
If you want, you can add an if statement so that the recipe still works even if CUSTOM_BUILD_NUMBER is not defined (using a default build number instead).
package:
  name: foo
  version: 0.1

build:
  {% if CUSTOM_BUILD_NUMBER is defined %}
  number: {{ CUSTOM_BUILD_NUMBER }}
  {% else %}
  number: 0
  {% endif %}
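In an Azure pipeline, the environment-variable route could then look roughly like the step below. This is a sketch: it assumes the value passed in resolves to a plain integer, which conda requires for the build number, so a counter such as Build.BuildId may be a safer choice than the default-formatted Build.BuildNumber:
- bash: conda build . --output-folder $(Build.ArtifactStagingDirectory)
  displayName: Build conda package
  env:
    # Made available to the {{ CUSTOM_BUILD_NUMBER }} jinja variable in meta.yaml
    CUSTOM_BUILD_NUMBER: $(Build.BuildId)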