Meson dependencies catch-22 and lazy access to target output (generator)

We'd like to introduce Meson to build our existing C++ application. Our structure is as follows:
- We have 8 main modules (mod_X).
- Every module has 20-40 subdirs, each with 5-100 .cpp files, split into libs and executables.
- mod_INFRA/apps/myparser has a target that creates a code-generator executable, which depends only on mod_INFRA/libs/A.
- The code generator must be applied to certain files (*.rules) in numerous subdirs across all modules, including mod_INFRA itself.
- The generated source code must be compiled into the target in the respective subdir_X.
What I’d like to achieve:
In root/meson.build, define a common, reusable custom_target or generator that I can invoke in every module and subdir as needed.
Problem:
In root/meson.build, we define common variables such as compiler flags, and we call subdir('mod_INFRA') and so on for each module. In mod_INFRA/meson.build I call subdir('apps/xyz'), subdir('libs/abc'), etc. for each subdir. That all works fine.
However, I struggle to define the custom_target or generator in root/meson.build. The required executable is not yet available before subdir('mod_INFRA'), and after that subdir(...) call it is too late, because the generator is already needed to build files in other subdirs of mod_INFRA.
A possible solution would be a "proxy" that lazily resolves the executable by target name, e.g. (pseudo-code): generator(getTargetOutput('myparser'), ...). But I could not find out whether anything like that is available.
Any other thoughts on how to resolve this without completely restructuring the directory tree?
- meson.build
- mod_INFRA
  - meson.build
  - apps
    - meson.build
    - myparser
      - meson.build
  - libs
    - subdir_INFRA_A (required to build myparser)
      - meson.build
    - subdir_INFRA_B
      - meson.build
    - subdir_INFRA_C (requires parser to generate code)
      - meson.build
- mod_A
  - meson.build
  - subdir_A_A (requires parser to generate code)
    - meson.build
  - subdir_A_B (requires parser to generate code)
    - meson.build
- mod_B
  ...

Somebody suggested: from the top level, can you do a subdir() straight into the "mod_INFRA/libs/subdir_INFRA_A" and "mod_INFRA/apps/myparser" directories to build them first, before returning to the top level and then subdir-ing down a single level repeatedly thereafter? That did the trick.
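
Below is a minimal sketch of what root/meson.build could look like with that ordering. The variable names (myparser_exe, rules_gen) and the generator's command-line flags are placeholders, not the real project's:

    project('myapp', 'cpp')

    # Enter the nested directories directly, so the generator's
    # dependency and the generator itself are defined first:
    subdir('mod_INFRA/libs/subdir_INFRA_A')   # defines the library myparser needs
    subdir('mod_INFRA/apps/myparser')         # defines myparser_exe = executable(...)

    # The executable target now exists, so it can back a reusable generator:
    rules_gen = generator(myparser_exe,
      output: '@BASENAME@.cpp',
      arguments: ['@INPUT@', '--out', '@OUTPUT@'])

    # Descend into everything else; every subdir can now use rules_gen:
    subdir('mod_INFRA')
    subdir('mod_A')
    subdir('mod_B')

A subdir then consumes the generator with something like generated = rules_gen.process('foo.rules') and lists generated among its target's sources. One caveat: mod_INFRA/meson.build must no longer re-enter the two directories that were processed up front, because Meson refuses to enter the same directory twice.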

Related

GitHub Actions Ignore Certain Files Inside a Directory

I have a project where I use GitHub Actions. I now need to ignore certain file changes inside certain folders. Here is my project structure:
masterDir
- contentDir
  - dir1
    - file1.ignore.md
    - file2.md
  - dir2
    - file3.md
    - file4.ignore.md
I would like my GitHub Actions not to be triggered by changes to any file that has .ignore.md in its name. Here is what I came up with, but it does not seem to work:
on:
  push:
    paths-ignore:
      - 'README.md'
      - 'backup/**'
      - 'masterDir/contentDir/**/*.ignore.md'
Any ideas on what is wrong with my wildcard match?
It was indeed quite simple to do. All I had to do was the following:
on:
  push:
    paths-ignore:
      - 'README.md'
      - 'backup/**'
      - '**/*.ignore.md'
As a reference, here is the documentation in detail: https://docs.github.com/en/actions/learn-github-actions/workflow-syntax-for-github-actions#patterns-to-match-file-paths
As can be seen from the documentation, this wildcard matches any file, in any folder, whose name ends in .ignore.md.

Docstrings are not generated on Read the Docs with Sphinx autodoc and napoleon extensions

I am using the Sphinx autodoc and napoleon extensions to generate the documentation for my project (Qtools). This works well on my local machines. I am using Sphinx 3.1.2 (or higher). However, when I build the documentation on Read the Docs (RTD), only text added directly to the reStructuredText files that form the source of the documentation is processed. The docstrings that are supposed to be pulled in by autodoc do not appear in the HTML documentation generated by RTD. So for example in docs\source\section2_rsdoc.rst I have:
Response spectra
================

The response spectrum class
---------------------------

.. autoclass:: qtools.ResponseSpectrum
   :members:

Response spectrum creation
--------------------------

.. autofunction:: qtools.calcrs
.. autofunction:: qtools.calcrs_cmp
.. autofunction:: qtools.loadrs

See also :func:`qtools.convert2rs` (converts a power spectrum into a response spectrum).
This results in:
Response spectra
The response spectrum class
Response spectrum creation
See also qtools.convert2rs (converts a power spectrum into a response spectrum).
In other words, all directives are apparently ignored, and hyperlinks to other functions are not added. I have examined several basic guidance documents such as this one, but I cannot figure out what I am doing wrong. RTD builds the documentation without any errors or warnings. In RTD advanced settings I have:
Documentation type: Sphinx HTML
Requirements file: requirements.txt
Python interpreter: CPython 3.x
Install Project: no
Use system packages: no
Python configuration file: blank
Enable PDF build: no
Enable EPUB build: no
I haven't touched any other settings.
In conf.py I have tried the following variations of line 15: sys.path.insert(0, os.path.abspath('.')), sys.path.insert(0, os.path.abspath('../..')) and the current sys.path.insert(0, os.path.abspath('../../..')). None of those made any difference.
I would be grateful for any help!
RTD builds the documentation without any errors or warnings
This is slightly incorrect. As you can see in the build logs, autodoc is emitting numerous warnings like this one:
WARNING: autodoc: failed to import class 'ResponseSpectrum' from module 'qtools'; the following exception was raised:
No module named 'qtools'
This has happened for all your variations of sys.path.insert, as you can see in some past builds.
Trying to make it work this way is tricky, since Read the Docs does some magic to guess the directory where your documentation is located, and also the working directory changes between commands.
Instead, there are two options:
1. Locate the directory where conf.py lives (see "How do you properly determine the current script directory?") and work out a relative package path from there, as in the sketch below.
2. Invest some time into making your code installable using up-to-date Python packaging standards, for example by putting all your sources inside a qtools directory and creating an appropriate pyproject.toml file using flit.
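
A minimal sketch of option 1 for conf.py; that the qtools package sits two levels above the documentation source is an assumption based on the paths tried in the question:

    # conf.py -- resolve the package location relative to this file,
    # not relative to the working directory (which changes between
    # Read the Docs build commands). Adjust the number of '..' hops
    # to match the actual layout.
    import os
    import sys

    here = os.path.dirname(os.path.abspath(__file__))
    sys.path.insert(0, os.path.normpath(os.path.join(here, '..', '..')))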

Build Jekyll pages recursively from directory tree

I love Markdown + MathJax for note-taking, and I've found the simplest way to do this is to run a small, local Jekyll server that I back up using Git. It's simple, clean, private, and redundant; each document is a single human-readable file before processing, so I hope the notes won't be worthless in 20 years.
My only problem is that I wish I could have a directory with subdirectories of Markdown files and have Jekyll build everything recursively. For example, imagine I had something like this:
...
- research
  - foo
    - derivations
      - derivation1.md
      - derivation2.md
      ...
    - meetings
      - 20190912_meeting1.md
      - 20190912_meeting2.md
      ...
  - bar
    - derivations
    - meetings
- personal
- courses
  - qux
  - baz
...
I would love to automatically render and locally host a site in which each directory is an index page and each .md file is a document.
Is it possible to do this relatively easily in Jekyll? I've seen some stuff using nested collections, but it's pretty messy and manual.

Compile file with two separate libraries in Cython

I wrote a library in Cython that has two different "modes":
- If rendering, I compile using GLFW.
- If not rendering, I compile using EGL, which is faster, but I have not figured out how to render with it.
What is the recommended way to handle this situation?
Right now, I have the following directory structure:
mujoco
├── __init__.py
├── simEgl.pyx
├── simGlfw.pyx
├── sim.pxd
└── sim.pyx
simEgl.pyx contains EGL code and simGlfw.pyx contains GLFW code. setup.py uses an environment variable to choose one or the other for the build.
This works ok, except that I need to recompile the code every time I want to switch between modes. There must be a better way.
Update
I agree that the best approach is to compile two different libraries simultaneously and use a toggle to choose which one to import. I already have a base class in sim.pyx with shared functionality. However, this base class must itself be compiled against the separate libraries. Specifically, sim.pyx depends on libmujoco.so, which depends on either GLFW or EGL.
Here is my exhaustive search of possible approaches:
1. If I do not compile an extension for sim.pyx, I get: ImportError: No module named 'mujoco.sim'
2. If I compile an extension for sim.pyx without including the graphics libraries in the extension, I get: ImportError: /home/ethanbro/.mujoco/mjpro150/bin/libmujoco150.so: undefined symbol: __glewBlitFramebuffer
3. If I compile an extension for sim.pyx linked against one set of graphics libraries (GLFW), then trying to use the other set (EGL) unsurprisingly fails as well: ERROR: GLEW initalization error: Missing GL version
4. If I compile two different versions of the sim.pyx library, one linked against each set of libraries, I get: TypeError: unorderable types: dict() < dict(), which is not a very helpful error message but appears to result from trying to share a source file between two different extensions.
Something like option 4 should be possible. In fact, if I were working in raw C, I would simply build two shared objects side by side using the different libraries. Any advice on how to get around this Cython limitation would be very welcome.
(This answer is just a summary of the comments with a bit more explanation.)
My initial suggestion was to create two extension modules exposing a common interface. That way you pick which one to import in Python, but can use them in the same way once imported:
if rendering:
    import simGlfw as s
else:
    import simEgl as s

s.do_something()  # doesn't matter which one you imported
It appears from the comments that the two modules also share a large chunk of their code, and it's really just the library they're linked against that defines how they behave. Trying to re-use the same sources with

    Extension(name='sim1', sources=["sim.pyx", ...])
    Extension(name='sim2', sources=["sim.pyx", ...])

fails. This is because Cython assumes that the module name will be the same as the filename, and so creates a function PyInit_sim (on Python 3; Python 2 names it slightly differently, but the idea is the same). However, when you import sim1.so it looks for the function PyInit_sim1, fails to find it, and gives an error.
An easy way around this is to put the common code in "sim.pxi" and use Cython's largely obsolete include mechanism to textually include that code in sim1.pyx and sim2.pyx:
include "sim.pxi"
Although include is generally no longer recommended and cimport is preferred since it provides more "Python-like" behaviour, include is a simple solution to this particular problem.
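
For concreteness, a minimal setup.py sketch of that arrangement, assuming sim1.pyx and sim2.pyx are thin wrappers that each start with include "sim.pxi"; the library names below are illustrative placeholders, not the asker's exact configuration:

    # setup.py -- compile the shared Cython code twice, linking a
    # different graphics stack into each extension module.
    from setuptools import setup, Extension
    from Cython.Build import cythonize

    extensions = [
        Extension('sim1', sources=['sim1.pyx'],
                  libraries=['mujoco150', 'glfw']),  # GLFW-backed build
        Extension('sim2', sources=['sim2.pyx'],
                  libraries=['mujoco150', 'EGL']),   # EGL-backed build
    ]

    setup(name='sim', ext_modules=cythonize(extensions))

At runtime, the toggle shown above then selects which of the two modules to import.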

What do the square brackets mean in a bundle source pattern?

The aurelia.json file has a bundles.source property. It appears to use the glob syntax that minimatch supports. The out-of-the-box au new template, though, includes square brackets around some patterns. E.g.
"[**/*.js]"
In my experience, square brackets have meant ranges, such as [a-z] mapping to abcdefg...wxyz. That is also what minimatch respects.
> match = require("minimatch");
> match("q", "[a-z]");
true
What do square brackets mean to the Aurelia CLI when processing the bundles.source property?
The brackets actually define whether or not we trace the dependencies of what the glob pattern finds. The double-star pattern (**/*) is what provides the "search sub-folders too" part of the pattern.
While this is documented in the section on configuring JSPM, it also applies when configuring with the CLI (see the documentation).
Our goal is to create a bundle of our application code only. We have to somehow instruct the bundler not to recursively trace the dependencies. Guess what? [*.js] is how we do it.
[*.js] will exclude the dependencies of each module that the glob pattern *.js yields. In the above case it will exclude aurelia-framework, aurelia-fetch-client and so on.
For example, with a pattern like [src/**/*.js], you are asking for every JavaScript file in the src folder and its sub-folders, without tracing any dependencies. This means that if module A in src requires module B in test, then module B won't be included, because the brackets indicate that we're not tracing dependencies.
Conversely, with a pattern like src/**/*.js, you are asking for every JavaScript file in the src folder and its sub-folders, including any dependencies of those files. This means that if module A in src requires module B in test, then module B will be included, because we are including dependencies.
It is important to note that this is how Aurelia defines its dependencies. While we use glob patterns and minimatch, the bracket syntax is (as far as I know) not part of those libraries, but rather Aurelia's own way to quickly and easily state whether or not we're tracing dependencies.
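
For illustration, here is a hedged aurelia.json fragment in the spirit of the au new template (the bundle names and dependency lists are placeholders):

    "build": {
      "bundles": [
        {
          "name": "app-bundle.js",
          "source": ["[**/*.js]", "**/*.html", "**/*.css"]
        },
        {
          "name": "vendor-bundle.js",
          "dependencies": ["aurelia-framework", "aurelia-fetch-client"]
        }
      ]
    }

The bracketed [**/*.js] pattern bundles the application's own JavaScript without tracing its dependencies, while vendor-bundle.js names its dependencies explicitly.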