I'm trying to automatically import a project from GitHub into Read the Docs. The documentation build is failing due to a missing dependency. I've tried adding the setup.py installation in the config, but am running into the following error:
Problem in your project's configuration. Invalid "python.install.0": .readthedocs.yml: "path" or "requirements" key is required
Current configuration YAML:
# Required
version: 2
# Build documentation in the docs/ directory with Sphinx
sphinx:
  configuration: docs/conf.py
# Optionally build your docs in additional formats such as PDF and ePub
formats: all
# Optionally set the version of Python and requirements required to build your docs
python:
  version: 3.6
  install:
    - method: setuptools
    - path: .
I wasn't able to find an answer that leverages the pre-existing setup.py file, but I was able to get it working with a requirements.txt file. The relevant portion is the install part of the python section in the .readthedocs.yml file (seen below).
Inside the requirements.txt file I simply copied the install requirements section from the setup.py.
# Required
version: 2
# Build documentation in the docs/ directory with Sphinx
sphinx:
  configuration: docs/conf.py
# Optionally build your docs in additional formats such as PDF and ePub
formats: all
# Optionally set the version of Python and requirements required to build your docs
python:
  version: 3.6
  install:
    - requirements: docs/requirements.txt
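For reference, docs/requirements.txt is a plain pip requirements file; the entries below are placeholders standing in for whatever your setup.py's install_requires actually lists:

```
# docs/requirements.txt -- one requirement per line, same syntax pip uses
sphinx
some-dependency>=1.2  # placeholder: copy your install_requires entries here
```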
I also had this issue just now, and eventually figured out that the YAML declaration is wrong. It should not be:
python:
  version: 3.6
  install:
    - method: setuptools
    - path: .
This defines two entries in python.install: the first contains only the method key (set to setuptools), and the second contains only the path key (set to .). They have nothing to do with each other semantically, so Read the Docs complains about the first entry missing its path key. Instead, use:
python:
  version: 3.6
  install:
    - method: setuptools
      path: .
This now defines python.install as a list with exactly one entry, python.install.0, that has both required keys. And so Read the Docs started accepting my config after this one-character deletion.
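The difference is easy to see if you compare the parsed structures. A quick sketch using Python's stdlib json module (the literals mirror what a YAML parser produces for each spelling):

```python
import json

# Broken config: each "-" starts a new list entry, so "method" and "path"
# land in two unrelated single-key mappings.
wrong = json.loads('{"install": [{"method": "setuptools"}, {"path": "."}]}')

# Fixed config: "path" is indented under the same "-", so both keys share
# one mapping and python.install.0 has everything it needs.
right = json.loads('{"install": [{"method": "setuptools", "path": "."}]}')

print(len(wrong["install"]))   # 2 -- two half-formed entries
print(right["install"][0])     # {'method': 'setuptools', 'path': '.'}
```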
Yet another example of YAML being less intuitive than one would like.
I have a conflict while attempting a conda build:
Package python_abi conflicts for:
pyibis-ami[version='>=4.0.5'] -> click -> python_abi[version='3.10.*|3.8.*|3.11.*',build='*_cp311|*_cp310|*_cp38']
{snip}
pyibis-ami[version='>=4.0.5'] -> python_abi=3.9[build=*_cp39]
There's clearly a conflict: click doesn't support 3.9, but pyibis-ami demands it.
But, click is a dependency of pyibis-ami.
And I just successfully built pyibis-ami, before attempting this build!
(It's a direct dependency of the package I'm trying to build now.)
So, how did I succeed at building pyibis-ami?!
Why didn't the same conflict block that build?
Some additional sleuthing:
The pyibis-ami package does not call for any specific version of click.
Looking at what's available for the most recent version of click (8.1.3), I find:
one noarch build w/ dependency: __unix,
one noarch build w/ dependency: __win, and
several osx-arm64 (the platform I'm working on) builds, all dependent upon a specific (different) Python minor version, for instance:
dependencies:
  - python >=3.9,<3.10.0a0
  - python >=3.9,<3.10.0a0 *_cpython
  - python_abi 3.9.* *_cp39
(There are similar builds for: 3.8, 3.10, and 3.11.)
Now, I'm giving the --python=3.9 option in my conda build ... command, but I have noticed cases in which Python v3.8 gets selected for the temporary build virtual environment, despite that --python=3.9 command line option.
And I'm wondering if that's what's happening here.
Two questions:
Where can I find the log file for my last build attempt, in order to see which version of Python was actually selected for the conda build ... virtual environment?
What things are allowed to override the --python=3.9 command line option?
I am importing a project into Read the Docs. I have a series of git tags representing older versions of the library, and I would like docs to be generated for those versions as well. The problem is that no .readthedocs.yaml file existed in the repo when those tags were created.
Normally in this case I would put the relevant settings into the web interface, but it clearly states that those settings are ignored when a .readthedocs.yaml file is present; and my config file does have a pre_build job:
version: 2
build:
  os: "ubuntu-22.04"
  tools:
    python: "3.10"
  jobs:
    pre_build:
      - doxygen ./doc/doxygen/doxygen.conf
sphinx:
  builder: html
  configuration: doc/conf.py
python:
  install:
    - requirements: doc/requirements.txt
Is there a way I can build documentation for past git tags using the "current" config file?
I'm building DBI and DBD::mysql in a continuous integration build server. The build of DBI is successful, as seen in the excerpt of the build log below. It clearly installs DBI/DBD.pm in the correct location.
pushd DBI-1.643
perl Makefile.PL INSTALL_BASE=/data/pods/mysql-tools/mysql-tools/current
...
Installing /data/pods/mysql-tools/mysql-tools/current/lib/perl5/x86_64-linux-thread-multi/DBI/DBD.pm
...
Appending installation info to /data/pods/mysql-tools/mysql-tools/current/lib/perl5/x86_64-linux-thread-multi/perllocal.pod
But the next part of the build for DBD::mysql fails because it can't find the files installed by DBI.
pushd DBD-mysql-4.050
perl Makefile.PL INSTALL_BASE=/data/pods/mysql-tools/mysql-tools/current --ssl
Can't locate DBI/DBD.pm in @INC (@INC contains:
/usr/local/lib64/perl5
/usr/local/share/perl5
/usr/lib64/perl5/vendor_perl
/usr/share/perl5/vendor_perl
/usr/lib64/perl5
/usr/share/perl5 .)
at Makefile.PL line 15.
You can see that MakeMaker for DBD::mysql isn't adding the install location to its @INC at all; it just has the default directories.
Is there a way to pass an argument to MakeMaker to add the install directory to @INC? I suppose I could hard-code it, but that seems improper and hard to maintain. Is there a better way to automatically add INSTALL_BASE/lib/perl5/<arch> to @INC?
Environment:
CentOS 7 Linux
Perl 5.16.3
I would have preferred to use cpanm of course. But the CI build server is isolated from the internet because of my employer's security policy. No http proxying is allowed from CI.
According to the documentation, INSTALL_BASE is used for telling make install where to put the installed module:
INSTALL_BASE
INSTALL_BASE can be passed into Makefile.PL to change where your
module will be installed. INSTALL_BASE is more like what everyone else
calls "prefix" than PREFIX is.
but it does not tell perl where to look for installed modules. For that you can use the PERL5LIB environment variable, according to the documentation:
PERL5LIB
A list of directories in which to look for Perl library files before
looking in the standard library. Any architecture-specific and
version-specific directories, such as version/archname/, version/, or
archname/ under the specified locations are automatically included if
they exist, with this lookup done at interpreter startup time. In
addition, any directories matching the entries in
$Config{inc_version_list} are added.
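Putting the two together for the build above: export PERL5LIB before running the DBD::mysql Makefile.PL. This is a sketch, not tested against that exact CI setup; the path is the INSTALL_BASE from the question:

```shell
# Point PERL5LIB at the tree that INSTALL_BASE created. Perl searches these
# directories before the standard @INC entries, and arch-specific
# subdirectories (e.g. x86_64-linux-thread-multi) are added automatically.
BASE=/data/pods/mysql-tools/mysql-tools/current
export PERL5LIB="$BASE/lib/perl5"
echo "$PERL5LIB"
```

With that in the environment, re-running perl Makefile.PL INSTALL_BASE=$BASE --ssl inside DBD-mysql-4.050 should let it locate DBI/DBD.pm, since the documentation quoted above says archname/ subdirectories under each PERL5LIB entry are included automatically.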
This seems like a very basic question but I can't find the answer :-S
I exclude like this:
exclude:
  - "*.json"
  - "Gemfile*"
  - "*.txt"
  - vendor
  - README.md
  - somefile.html
Pretty straightforward. To create the production build I run: $ JEKYLL_ENV=production bundle exec jekyll build
How can I exclude the somefile.html file only when I run the production ENV?
Perhaps you could try using a specific config file for production and another for development. In the production config you could exclude the files using the exclude setting, as described in the Jekyll configuration documentation.
Then run:
jekyll build --trace --config _config.yml,_config_dev.yml
or
jekyll build --trace --config _config.yml,_config_prod.yml
In _config.yml you'd set the generic settings, and in the config with the environment suffix you'd set the environment-specific configuration.
The --trace flag is optional; it helps while setting this up because it shows any errors that occur:
-t, --trace Show the full backtrace when an error occurs
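As a sketch (file names follow the commands above, and the exclude list mirrors the question's), keep the shared settings in _config.yml and put the production-only exclusion in _config_prod.yml. Note that Jekyll replaces, rather than merges, a key defined in a later config file, so the exclude list must be repeated in full:

```yaml
# _config_prod.yml -- loaded after _config.yml, so its keys win.
# "exclude" is replaced wholesale (not merged), so repeat the shared
# entries and add the production-only file:
exclude:
  - "*.json"
  - "Gemfile*"
  - "*.txt"
  - vendor
  - README.md
  - somefile.html
```

Then build production with: JEKYLL_ENV=production bundle exec jekyll build --config _config.yml,_config_prod.yml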
I am trying to package the ffvideo module for conda. It is a Cython module that links into ffmpeg. I am able to build the recipe (so the linking works at compile time), however I cannot install the resulting package in a new environment. The reason is that at execution time the package cannot find the dlls it was linked to at compile time (their path is now different, because they are in a different environment).
I tried using the binary_has_prefix_files flag in the conda recipe, which I point to Lib\site-packages\ffvideo.pyd. However, it does not seem to help.
Is there a way to link Cython packages to relative paths or something like that?
The recipe is at the moment:
package:
  name: ffvideo
  version: 0.0.13

source:
  fn: b45143f755ac.zip
  url: https://bitbucket.org/groakat/ffvideo/get/b45143f755ac.zip
  # md5: cf42c695fab68116af2c8ef816fca0d9

build: [win]
  number: 3 [win]
  binary_has_prefix_files:
    - Lib\site-packages\ffvideo.pyd

requirements:
  build:
    - python
    - cython [win]
    - mingw [win]
    - ffmpeg-dev [win]
    - mingw
    - pywin32
    - setuptools
    - libpython
  run:
    - python
    - ffmpeg-dev [win]
    - cython
    - mingw
    - pywin32
    - setuptools
    - libpython

about:
  home: https://bitbucket.org/groakat/ffvideo/
  license: Standard PIL license
The package is on binstar https://binstar.org/groakat/ffvideo/files . The dependencies are all in my channel https://binstar.org/groakat/
One more thought. As ffvideo depends on ffmpeg-dev which I also packaged, might it be that I need to use the binary_has_prefix_files option there as well?
To quote Travis Oliphant's answer from the conda mailing list:
On Windows, our current recommended approach is to:
1) put the DLLS next to the executable that needs them
2) put the DLLS in a directory that is on your PATH environment variable.
By default, Anaconda and Miniconda add two directories to the path
(the root directory and %ROOT% / Scripts). You could either put the
dlls in that directory or add the directory the dlls are located to
your PATH.
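Option 2 could look like the snippet below in a recipe's build script. This is only a sketch, in POSIX shell for readability (a Windows recipe would do the equivalent in bld.bat); conda-build sets PREFIX, and the Library\bin source path is an assumption about where ffmpeg-dev installs its DLLs:

```shell
# Copy the DLLs the extension links against into a directory that
# Anaconda/Miniconda already puts on PATH (the env's Scripts dir).
PREFIX="${PREFIX:-/tmp/demo-prefix}"   # conda-build provides this in a real build
mkdir -p "$PREFIX/Scripts"
cp "$PREFIX"/Library/bin/*.dll "$PREFIX/Scripts/" 2>/dev/null || true
```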