Build documentation for older versions before .readthedocs.yaml was added to the repo

I am importing a project into Read the Docs. I have a series of git tags representing older versions of the library, and I would like docs to be generated for those versions as well. The problem is that the .readthedocs.yaml file did not yet exist in the repo when those tags were created.
Normally in this case I would put the relevant settings in the web interface, which clearly states that those settings are ignored in the presence of a .readthedocs.yaml file; but my config file also has a pre_build job:
version: 2

build:
  os: "ubuntu-22.04"
  tools:
    python: "3.10"
  jobs:
    pre_build:
      - doxygen ./doc/doxygen/doxygen.conf

sphinx:
  builder: html
  configuration: doc/conf.py

python:
  install:
    - requirements: doc/requirements.txt
Is there a way I can build documentation for past git tags using the "current" config file?


How to install npm dependencies of a GitHub Action?

I am setting up GitHub Actions and struggling to see how to handle the dependencies of a particular action.
Below is a simplified view of my repository setup, with the action and workflow YAML files as I currently understand them.
my-org/my-tool@master: a specific tool in my organisation that I want to enable as a GitHub Action

- package.json
- tool.js: the tool, depending on npm packages described in package.json
- action.yml:

name: my-tool
inputs:
  file:
    description: file to be processed by my-tool
    required: true
runs:
  using: node16
  main: tool.js

my-org/my-code@branch: a code repository where I want to run the tool as an action

- my-code.js: some code that needs to be processed by the tool
- .github/workflows/use-tool.yml:

# ...
jobs:
  tool:
    steps:
      - uses: my-org/my-tool@master
        with:
          file: my-code.js
# ...
If my-tool has no dependencies, this setup works well enough. But my-tool needs to have dependencies installed to function properly, i.e., a single npm install in its root directory. How do I specify this dependency in a github action setup?
I have multiple options, none of which are truly satisfying to me:
I can define a workflow that checks out my-tool, installs the dependencies, and then runs it. However, I don't want that logic in the workflow YAML of the my-code repository: I have multiple similar repositories, all of which require my-tool, so the logic would be duplicated. And I don't see how to do this in the action.yml file itself; it seems I have to choose between a JavaScript action to run node tools, or a composite action to run shell tools (though see the sketch at the end of this question, where a composite action's shell steps run node).
I can bundle the node_modules directory in the my-tool repository; this bloats my repos unnecessarily.
I can bundle an ncc distribution of my-tool in its repo; same problem.
I can define and build a container image of my-tool with its dependencies installed, and use a container action to run the tool on my-code. This seems like overhead to me, and I have no idea how this container would access the files from the my-org/my-code repo (which would be its core purpose).
Options 2 and 3 are the ones described in the example published by GitHub. I would be happy using an artifact (either the node_modules dir or an ncc distribution) as the basis for an action, but there seems to be no way to do this.
My specific use case concerns npm dependencies, but there seems to be a generic use case for an "action with dependencies", support for which is unclear to me from the documentation. I can't be the only dev in the world working with a toolset that has dependencies of its own. Maybe I am looking at this setup in the wrong way. Is there a best practice for actions with dependencies? Or a best practice of only having actions with zero dependencies?
I look forward to learning from any partial answers or suggestions.
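One pattern worth sketching, which blurs the "JavaScript action vs. composite action" split: a composite action in the my-tool repo whose shell steps first install the tool's own npm dependencies and then invoke node. This is only a minimal sketch under assumptions (that github.action_path and per-step working-directory behave as on current runners; the step names are invented), not an official recommendation:

name: my-tool
description: composite wrapper that installs my-tool's own npm dependencies, then runs it
inputs:
  file:
    description: file to be processed by my-tool
    required: true
runs:
  using: composite
  steps:
    # github.action_path is the directory this action was checked out to,
    # i.e. the root of my-org/my-tool, where package.json lives.
    - name: Install tool dependencies
      shell: bash
      working-directory: ${{ github.action_path }}
      run: npm install
    # A step's default working directory is the caller's workspace,
    # so inputs.file resolves against the my-code checkout.
    - name: Run my-tool
      shell: bash
      run: node "${{ github.action_path }}/tool.js" "${{ inputs.file }}"

With this, the calling workflow in my-code keeps its one-line uses: my-org/my-tool@master reference; the price is an npm install on every run, which a cache step could mitigate.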

ReadtheDocs config for setup.py installation

I'm trying to automatically import a project from GitHub into Read the Docs. The documentation build is failing due to a missing dependency. I've tried adding the setup.py installation in the config, but am running into the following:
Problem in your project's configuration. Invalid "python.install.0": .readthedocs.yml: "path" or "requirements" key is required
Current configuration YAML:
# Required
version: 2

# Build documentation in the docs/ directory with Sphinx
sphinx:
  configuration: docs/conf.py

# Optionally build your docs in additional formats such as PDF and ePub
formats: all

# Optionally set the version of Python and requirements required to build your docs
python:
  version: 3.6
  install:
    - method: setuptools
    - path: .
I wasn't able to find an answer that leverages the pre-existing setup.py file, but I was able to get it working with a requirements.txt file. The relevant portion is the install section under python in the readthedocs.yml file (shown below).
Inside the requirements.txt file I simply copied the install-requirements section from the setup.py.
# Required
version: 2

# Build documentation in the docs/ directory with Sphinx
sphinx:
  configuration: docs/conf.py

# Optionally build your docs in additional formats such as PDF and ePub
formats: all

# Optionally set the version of Python and requirements required to build your docs
python:
  version: 3.6
  install:
    - requirements: docs/requirements.txt
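As an aside, the v2 config schema also accepts a pip-based install of the project itself, which does leverage the pre-existing setup.py; a minimal sketch, where the docs extra is a hypothetical extras_require entry holding the doc dependencies:

python:
  version: 3.6
  install:
    - method: pip
      path: .
      # hypothetical extras_require group in setup.py listing doc dependencies
      extra_requirements:
        - docs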
I also had this issue just now, and eventually figured out that the YAML declaration is wrong. It should not be:
python:
  version: 3.6
  install:
    - method: setuptools
    - path: .
This defines two entries in python.install: the first contains only the method key, set to setuptools, and the second contains only the path key, set to ".". Semantically they have nothing to do with each other, so readthedocs complains about the first entry missing the required path key. Instead, use:
python:
  version: 3.6
  install:
    - method: setuptools
      path: .
This now defines python.install to be a list with exactly one entry, python.install.0, that has both required keys. And so readthedocs started accepting my config after I did this one-character deletion.
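To make the pitfall concrete, here is how a YAML parser reads the two spellings (flow-style equivalents in the comments):

# two unrelated single-key entries:
install:            # [{method: setuptools}, {path: .}]
  - method: setuptools
  - path: .

# one entry carrying both keys:
install:            # [{method: setuptools, path: .}]
  - method: setuptools
    path: .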
Yet another example of YAML being less intuitive than one would like.

How do I build a Polymer 2.x project with Polymer CLI?

Can anyone point me to a tutorial that uses Polymer 2 and polymer-build from Polymer CLI? When I use any example in the polymer-starter-kit and use polymer serve, it works fine; but when I use polymer build and serve the bundled or unbundled directory, I get 404 errors. I have even updated to the newest alpha version of polymer-cli.
Projects generated with https://github.com/tony19/generator-polymer-init-2-x-app have the same problem.
I also spent quite a bit of time figuring this one out. Use polymer-cli@next instead of polymer-cli.
Plain polymer-cli doesn't seem to have the latest build steps and optimizations needed to support Polymer 2.0 Preview functionality.
You can install polymer-cli@next; on Ubuntu, simply run npm install -g polymer-cli@next
From then on, the bundled and unbundled versions of the application generated by polymer build work fine.
Edit:
You can find my sample Polymer 2.0 Preview code at https://github.com/phani1kumar/phani1kumar.github.io (branch "devmaster").
sw-precache-config.js drives the initial, render-blocking load: it pulls in all the resources the main page needs to make the app available for offline use. src/lazy-resources.html loads the resources for the next routes.
You would need a proper configuration, based on your layout and main page, in the following three files: sw-precache-config.js, polymer.json, and src/lazy-resources.html. This is the practice followed in the shop app from the Polymer team; you may opt for a different mechanism for lazy loading. The bottom line for lazy loading is to load the resources after Polymer.RenderStatus.afterNextRender.
You may also find the following article interesting: https://medium.com/@marcushellberg/how-i-sped-up-the-initial-render-of-my-polymer-app-by-86-eeff648a3dc0#.pi2iucwzi
I noticed a bug in the generator in that the starter-kit subgenerator was missing a dependency on webcomponentsjs, which would cause an error with polymer-build. And as you discovered, polymer.json was also missing dependencies for the polyfill support of webcomponentsjs, which caused 404s on polyfilled browsers (such as Linux Chrome). That's all fixed now in v0.0.6.
You'll also need a version of polymer-build that does not try to uglify the JavaScript, which would fail due to its inability to recognize ES6. The new-build-flags branch of the polymer-cli repo replaces uglify with babili for ES6 minification (added in PR#525). You could check out that branch and build it yourself, or you could install it from here:
npm i -g tony19-contrib/polymer-cli#dist-new-build-flags
For convenience, this branch is added as a devDependency when generating the 2.0 starter kit with generator-polymer-init-2-x-app.
To build and serve a Polymer 2.0 Starter Kit project:
Generate a 2.0 Starter Kit (using generator-polymer-init-2-x-app, v0.0.6 or newer) by selecting 2-x-app - starter application template:
$ polymer init
? Which starter template would you like to use?
...
2-x-app - (2.0 preview) blank application template
2-x-app - (2.0 preview) blank element template
❯ 2-x-app - (2.0 preview) starter application template
After the project generator finishes, build the project with yarn build:
$ yarn build
info: Deleting build/ directory...
info: Generating build/ directory...
info: Build complete!
Note that the output is only build/, and no longer build/bundled/ and build/unbundled/.
Serve up the contents of the build directory, and automatically open a browser to it:
$ polymer serve build -o
You could also serve it with a different tool to verify that the build output would work outside of the context of any Polymer tools. Start a Python server in build/, and manually open a browser to it:
$ cd build
$ python -m SimpleHTTPServer

conda: linking a Cython package to a DLL

I am trying to package the ffvideo module for conda. It is a Cython module that links against ffmpeg. I am able to build the recipe (so the linking works at compile time); however, I cannot install the resulting package in a new environment, because at run time the package cannot find the DLLs it was linked against at compile time (their path is now different, since they live in a different environment).
I tried using the binary_has_prefix_files flag in the conda recipe, pointing it at Lib\site-packages\ffvideo.pyd, but it does not seem to help.
Is there a way to link Cython packages to relative paths or something like that?
The recipe at the moment is:
package:
  name: ffvideo
  version: 0.0.13

source:
  fn: b45143f755ac.zip
  url: https://bitbucket.org/groakat/ffvideo/get/b45143f755ac.zip
  # md5: cf42c695fab68116af2c8ef816fca0d9

build:  # [win]
  number: 3  # [win]
  binary_has_prefix_files:
    - Lib\site-packages\ffvideo.pyd

requirements:
  build:
    - python
    - cython  # [win]
    - mingw  # [win]
    - ffmpeg-dev  # [win]
    - mingw
    - pywin32
    - setuptools
    - libpython
  run:
    - python
    - ffmpeg-dev  # [win]
    - cython
    - mingw
    - pywin32
    - setuptools
    - libpython

about:
  home: https://bitbucket.org/groakat/ffvideo/
  license: Standard PIL license
The package is on binstar: https://binstar.org/groakat/ffvideo/files. The dependencies are all in my channel: https://binstar.org/groakat/
One more thought: since ffvideo depends on ffmpeg-dev, which I also packaged, might I need to use the binary_has_prefix_files option there as well?
To quote Travis Oliphant's answer from the conda mailing list:
On Windows, our current recommended approach is to:
1) put the DLLs next to the executable that needs them, or
2) put the DLLs in a directory that is on your PATH environment variable.
By default, Anaconda and Miniconda add two directories to the path (the root directory and %ROOT%\Scripts). You could either put the DLLs in one of those directories or add the directory the DLLs are located in to your PATH.
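Applied to this recipe, recommendation 2 could be sketched in the build section below: copy the DLLs the module links against into %SCRIPTS% (the Scripts directory that is already on PATH) while building. Everything here is an assumption, not a tested recipe: the av*.dll glob and the %LIBRARY_BIN% location depend on how the ffmpeg-dev package is laid out, and the inline script key needs a conda-build version that supports it (otherwise put the same commands in bld.bat):

build:  # [win]
  number: 3  # [win]
  binary_has_prefix_files:
    - Lib\site-packages\ffvideo.pyd
  # hypothetical: install the module, then copy the ffmpeg DLLs into
  # %PREFIX%\Scripts so the packaged .pyd can find them at run time
  script: '%PYTHON% setup.py install && copy "%LIBRARY_BIN%\av*.dll" "%SCRIPTS%"'  # [win]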

Stop TeamCity from Auto Checkout when adding a repo

I'm trying to configure TeamCity for use in our continuous integration.
Our project has approximately 35 Mercurial repos spread across 4 cities. All in all, the code in the repos is approximately 30 GB in size.
Our problem is that if we add/remove a repo from the VCS roots of a build configuration, the configuration automatically does a complete clean re-checkout of all repos. This adds an extra 3 hours to our build cycle.
Is there any way to turn this off?
We have TeamCity versions 7.0 and 7.1
UPDATE:
Additional details for one of the build configurations:
Name: BE - Full Build
Description: none
Build number format: %AssemblyBuildNumber%, next build number: #%AssemblyBuildNumber%
Artifact paths:
none specified
Build options:
hanging builds detection: ON
status widget: OFF
maximum number of simultaneously running builds: unlimited
Version Control Settings:
VCS checkout mode: Automatically on server
Checkout directory: default
Clean all files before build: OFF
VCS labeling: disabled
Attached VCS roots:
< All the repos with no rules and no labels >
Show changes from snapshot dependencies: OFF
Perhaps an agent-side checkout plus a local mirror could help you. Take a look at the internal properties section here: http://confluence.jetbrains.net/display/TCD7/Mercurial
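For reference, the mirror behaviour described on that page is switched on with a configuration parameter roughly like the line below; the exact property name should be checked against your TeamCity version's documentation:

teamcity.hg.use.local.mirrors=true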