Due to very old code I have been tasked with maintaining, I find that I must run
conda build meta.yaml --prefix-length=80 --no-long-test-prefix
every time I want to build/test my conda repo. Omitting these command-line options causes the build to fail. How do I automatically include these required command-line options? For example, this can be done in setup.py with the script_args option. Is there something similar in conda build for meta.yaml?
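(A wrapper script would at least work as a stopgap. A minimal sketch, with a made-up script name, that forwards all arguments while always appending the required flags:
#!/usr/bin/env bash
# conda-build-wrapper.sh (hypothetical name): forward all arguments to
# conda build while always appending the flags this repo requires.
exec conda build "$@" --prefix-length=80 --no-long-test-prefix
It would then be invoked as ./conda-build-wrapper.sh meta.yaml, but a native conda-build mechanism would be preferable.)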
Related
I have a conflict while attempting a conda build:
Package python_abi conflicts for:
pyibis-ami[version='>=4.0.5'] -> click -> python_abi[version='3.10.*|3.8.*|3.11.*',build='*_cp311|*_cp310|*_cp38']
{snip}
pyibis-ami[version='>=4.0.5'] -> python_abi=3.9[build=*_cp39]
There's clearly a conflict: click doesn't support 3.9, but pyibis-ami demands it.
But, click is a dependency of pyibis-ami.
And I just successfully built pyibis-ami, before attempting this build!
(It's a direct dependency of the package I'm trying to build now.)
So, how did I succeed at building pyibis-ami?!
Why didn't the same conflict block that build?
Some additional sleuthing:
The pyibis-ami package does not call for any specific version of click.
Looking at what's available for the most recent version of click (8.1.3), I find:
one noarch build with the dependency __unix,
one noarch build with the dependency __win, and
several osx-arm64 (the platform I'm working on) builds, all dependent upon a specific (different) Python minor version, for instance:
dependencies:
- python >=3.9,<3.10.0a0
- python >=3.9,<3.10.0a0 *_cpython
- python_abi 3.9.* *_cp39
(There are similar builds for: 3.8, 3.10, and 3.11.)
Now, I'm giving the --python=3.9 option in my conda build ... command, but I have noticed cases in which Python v3.8 gets selected for the temporary build virtual environment, despite that --python=3.9 command line option.
And I'm wondering if that's what's happening here.
Two questions:
Where can I find the log file for my last build attempt, in order to see which version of Python was actually selected for the conda build ... virtual environment?
What things are allowed to override the --python=3.9 command line option?
I'd like to install programs with conda in one particular conda environment and to be able to use the associated commands from all conda environments.
My goal is to allow students to install Mercurial (plus few Mercurial extensions and related utilities like Meld and TortoiseHg) on any platforms (especially Windows) with one simple command (or few simple commands), and of course without compilation.
Of course, the hg command should be available in the terminal from any conda environment (Anaconda Prompt on Windows). The Mercurial packages cannot be installed in the base environment because Mercurial still works better with Python 2.7 (and anyway, it wouldn't be clean).
Now Mercurial and the extensions we need can be installed on all platforms with something like:
conda create -n py27_mercurial -c conda-forge python=2.7 mercurial dulwich ipaddress
conda activate py27_mercurial
pip install hg-evolve hg-git
With a bit of work on conda-forge and a conda meta-package, it shouldn't be difficult to do that with one very simple command. Moreover, it should not be difficult to create conda packages for Meld and TortoiseHg.
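For illustration, such a meta-package recipe (meta.yaml) might look roughly like this (a sketch only; the package name and version are made up, and hg-evolve/hg-git would still come from pip):
# meta.yaml (sketch) for a meta-package pulling in the whole toolset
package:
  name: mercurial-bundle
  version: "0.1"
requirements:
  run:
    - python 2.7.*
    - mercurial
    - dulwich
    - ipaddress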
From this stage, the hg command is available when the environment is activated (and it is very simple to install other Mercurial extensions). To make it available from other environments (and in the base environment), one needs to append the path of the directory containing hg to the PATH environment variable or, on Unix, to create a symbolic link (I don't know Windows well enough to know whether something similar would work). Neither solution is straightforward, and the commands are not platform-independent.
I didn't find a command to do something like this in conda but sometimes conda experts are able to do impressive things! What would be an elegant solution to this issue?
It would also be nice to create icons somewhere (in the Anaconda launcher?) for the graphical applications (Meld and TortoiseHg). Is it possible?
Edit: Conda applications
I discovered that there is a way to specify in the meta.yaml file that a package is an application: https://docs.conda.io/projects/conda-build/en/latest/resources/define-metadata.html#app-section
It may help to solve the issue.
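Based on those docs, the relevant part of a meta.yaml might look something like this (a sketch; the entry and summary values for a Mercurial package are my guesses):
app:
  entry: hg                # command launched for the application
  summary: Mercurial distributed version control
  own_environment: True    # install the app into its own environment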
Edit after a first answer based on a bash function:
Of course, I'm looking for a solution requiring very little work (and understanding) from the users, with cross-platform commands.
Note that for Linux and Bash, one can just do:
CONDA_APP_DIR=$HOME/.local/bin/bin-conda-app/
mkdir -p $CONDA_APP_DIR
echo -e "\nexport PATH=\$PATH:$CONDA_APP_DIR\n" >> ~/.bashrc
ln -s $(which hg) $CONDA_APP_DIR/hg
No need to activate/deactivate the environment each time hg is used...
Of course, such system- and shell-dependent solutions are not satisfactory. It should be possible to do such things with cross-platform conda-like commands (see https://github.com/conda/conda/issues/8556), something like
conda config --add channels conda-forge
conda install conda-app
conda-app install mercurial
Now, I just have to implement conda-app 🙂
One can always create a shell function/alias and shove it in their shell's runtime configuration file. For example, for your use case, I'd add the following in my ~/.bashrc:
hg() {
  (
    conda activate py27_mercurial
    command hg "$@"
    _hg_exit_code=$?
    conda deactivate
    exit $_hg_exit_code
  )
}
Then, regardless of which environment you are in, you always run hg from the environment it was installed in. To make sure that this function is loaded for your shell in a new session, one can always take a look at the output of: type -a hg
I do this one-time setup for all the tools (some are custom-compiled) and have an alias/shell function for each. This way I can happily switch between environments without having to worry much.
The solution in https://stackoverflow.com/a/55900964/1779806 is buggy for scripts that use command hg ... and too inefficient for this case (installing a command-line application). See https://github.com/conda/conda/issues/8556#issuecomment-488703716
I created a tiny Python package conda-app (https://pypi.org/project/conda-app/) to improve this situation.
This should now work on Unix systems (with Bash and Fish):
conda activate base
conda config --add channels conda-forge
pip install conda-app
conda-app install mercurial
It should not be difficult to improve conda-app to also support Windows.
For the time being, Windows users can install Mercurial and important extensions by installing TortoiseHG.
Editor's note: The question's original title was "Use npm install to install node modules stored on a local directory", which made the desire to transparently redefine the installation source less obvious. Therefore, some existing answers suggest solutions based on modifying the installation process.
I know this is a simple thing, but I'm quite new to anything in this area so after searching around and constantly finding answers that weren't really what I want I figured I'd just ask directly.
I currently have a process that is run in directory FOO that calls npm install. Directory FOO contains a package.json and an npm-shrinkwrap.json file to specify the modules (bluebird, extend, and mysql in this case, but it doesn't really matter) and versions. This all works perfectly fine.
But now, instead of reaching out to the internet to get the modules, I want to have them stored in local directory BAR and have the process in FOO use npm to install them from there. I cannot store them permanently in FOO, but I can in BAR, for reasons outside my control. I know this is relatively simple, but I can't seem to get the right set of commands down. Thanks for the help.
Note: This answer originally suggested only redefining the cache location. While that works in principle, npm still tries to contact the network for each package, causing excessive delays.
I assume your intent is to transparently change the installation source: in other words: you don't want to change your package, you simply want to call npm install as before, but have the packages be installed from your custom filesystem location, offline (without the need for an Internet connection).
There are two pieces to the puzzle:
Redefine npm's cache filesystem location (where previously downloaded packages are cached) to point to your custom location:
Note that cached packages are stored in a specific way: the package.json file is stored in subfolder package, and the package as a whole, zipped, as package.tgz. It's easiest to copy packages from your existing cache to your custom location, or to simply install additionally needed ones while you have an Internet connection, which caches them automatically (a copy sketch follows the list of options below).
For transparent use (npm install can be called as usual):
By setting the configuration item globally:
npm config set cache '/path/to/BAR'
Note that this will take effect for all npm operations, persistently.
Via an environment variable (which can be scoped to a script or even a single command):
export npm_config_cache='/path/to/BAR'
npm_config_cache='/path/to/BAR' npm install
Ad-hoc use, via a command-line option:
npm install --cache /path/to/BAR
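To illustrate the cache layout described above, copying an already-cached package into the custom location might look like this (a sketch; the package name, version, and paths are made up, and this assumes the older npm cache layout this answer targets):
# Copy a cached package (here: hypothetical bluebird@2.9.34) into BAR.
mkdir -p /path/to/BAR/bluebird/2.9.34
cp -R ~/.npm/bluebird/2.9.34/. /path/to/BAR/bluebird/2.9.34/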
Force npm to use cached packages:
Currently, that requires a workaround via the cache-min configuration item.
A more direct feature, such as an --offline switch, has been a feature request for years; see https://github.com/npm/npm/issues/2568
The trick is to set cache-min to a very high value, so that all packages in the cache are considered fresh and served from there:
For transparent use (npm install can be called as usual):
By setting the configuration item globally:
npm config set cache-min 9999999999
Note that this will take effect for all npm operations, persistently.
Via an environment variable (which can be scoped to a script or even a single command):
export npm_config_cache_min=9999999999
npm_config_cache_min=9999999999 npm install
Ad-hoc use, via a command-line option:
npm install --cache-min 9999999999
Assuming you've set cache-min globally or through an environment variable,
running npm install should now serve the packages straight from your custom cache location.
Caveats:
This assumes that all packages your npm install needs are available in your custom location; trying to install a package that isn't in the cache will obviously fail without an Internet connection.
Conversely, if you do have Internet access but want to prevent npm from using it to fetch packages - which it still will attempt if a package is not found in the cache - you must change the registry configuration item to something invalid so as to force the online installation attempt to fail; e.g.:
export npm_config_registry=http://example.org
Note that the URL must exist to avoid delays while npm tries to connect to it; while you could set the value to something syntactically invalid (e.g., none), npm will then issue a warning on every use.
Sample bash script:
#!/usr/bin/env bash
# Set environment variables that set npm configuration items to:
# - redefine the location of the cache folder
# - make npm look in the cache only (assuming the packages are there)
# Note that by doing this inside a script the changes will only take effect
# in the script and NOT persist.
export npm_config_cache='/path/to/BAR' npm_config_cache_min=9999999999
# Now cd to your package and invoke `npm install` as usual.
cd '/path/to/project'
npm install
You might want to try npm link. You could:
download the dependency
run npm link from the dependency's directory
run npm link mycrazydependency from your project
Detail here: https://docs.npmjs.com/cli/link
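For example (the paths are hypothetical):
cd /path/to/BAR/mycrazydependency  # the downloaded dependency
npm link                           # register a global symlink for it
cd /path/to/FOO                    # your project
npm link mycrazydependency         # symlink it into FOO/node_modules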
If a shrinkwrap file is present, then package.json is ignored. What you need to do is change the URLs the packages are retrieved from, using a find-and-replace operation, e.g., with sed. However, I'm not sure changing the URL to the file:/// syntax is valid, but give it a go.
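For example, something along these lines (a sketch; the registry URL and local path are assumptions, so check the resolved URLs actually present in your npm-shrinkwrap.json first):
# Back up the file, then rewrite registry URLs to local file:// paths.
sed -i.bak 's|https://registry.npmjs.org/|file:///path/to/BAR/|g' npm-shrinkwrap.json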
I need to install netcdf-openmpi-devel on Red Hat 6. The problem is that this package is not provided by the repositories I have: redhat and epel. I already tried downloading several Fedora rpms, but for almost all of them it's not possible to verify their keys (using rpm -K package). I was able to get the key for one of the rpms, but then it shows that I don't have the required dependencies, like netcdf-openmpi, which is close to what I am trying to install in the first place.
Is there another way to install this?
Thanks for your help!
The cleanest approach is to build an RPM package against the RHEL 6 package set. This will make sure that all dependencies are satisfied. To do this while taking advantage of already existing packages, you can clone the Fedora netcdf package files from [1] and then build the package using rpmbuild (see [2]) or, better, mock (see [3]).
You might encounter the situation where a build dependency is not available in the rhel or epel repos. You can then clone the respective package files from the Fedora git repository and build that package first.
So, to wrap up, your steps might look something like this:
$ git clone git://pkgs.fedoraproject.org/netcdf.git
$ cd netcdf
## Look at netcdf.spec, make changes if necessary
## To build using rpmbuild (probably easier than mock)
# yum install rpmdevtools
$ rpmdev-setuptree
$ mv netcdf.spec $HOME/rpmbuild/SPECS
$ mv * $HOME/rpmbuild/SOURCES
$ cd $HOME/rpmbuild/SPECS
$ rpmbuild -ba netcdf.spec
## rpmbuild might complain about unsatisfied build dependencies.
## Install these as necessary; some build dependencies might not be
## available in the repos, and you will need to build them first
## following the same procedure.
One particular thing you'll need to be aware of is that there is a netcdf package available in epel, which, however, is built without openmpi support (see [4]). If you install your home-built rpm, you probably want to make sure that a possible update from the epel repositories won't override your home-built version (this happens if the epoch:version-release of the package in the repositories is newer than the one you've built). You could:
Monitor what will get updated; if a netcdf version from epel would update your home-built rpm, grab the newest package sources from the Fedora git repo and build a new package.
Exclude netcdf* from epel by adding exclude=netcdf* to the epel repo file in /etc/yum.repos.d.
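For example, the relevant excerpt of /etc/yum.repos.d/epel.repo would then look something like this (other fields unchanged):
[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
enabled=1
exclude=netcdf*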
[1] http://pkgs.fedoraproject.org/cgit/netcdf.git/
[2] http://fedoraproject.org/wiki/How_to_create_an_RPM_package#Preparing_your_system
[3] http://fedoraproject.org/wiki/Projects/Mock
[4] http://pkgs.fedoraproject.org/cgit/netcdf.git/tree/netcdf.spec?h=el6
I am trying to build a .rpm package. I have just followed the steps to do that. Until now, everything went fine, but I am stuck at this step. I ran the following command and got this error:
rpmbuild -ba asterisk.spec
error: Failed build dependencies:
gtk2-devel is needed by asterisk-1.8.12.2-1.fc15.x86_64
libsrtp-devel is needed by asterisk-1.8.12.2-1.fc15.x86_64
[... more ...]
freetds-devel is needed by asterisk-1.8.12.2-1.fc15.x86_64
uw-imap-devel is needed by asterisk-1.8.12.2-1.fc15.x86_64
I am using Fedora 15. How do I resolve this error?
How do I install all dependencies during installation of a src.rpm package? Is it possible?
You can use the yum-builddep command from the yum-utils package to install all the build dependencies for a package.
The arguments can either be paths to spec files, paths to source RPMs or the names of packages which exist as source RPMs in a configured repository, for example:
yum-builddep my-package.spec
or
yum-builddep my-package.src.rpm
The same thing can be achieved on newer versions of Fedora that use dnf as their package manager by making sure that dnf-plugins-core is installed and then doing:
dnf builddep my-package.spec
or
dnf builddep my-package.src.rpm
yum-builddep doesn't seem to work if the mirror you use doesn't serve source RPMs. This may not handle all cases, but it usually works for me:
sudo yum install -y $(<rpmbuild> | fgrep 'is needed by' | awk '{print $1}')
where <rpmbuild> is your rpmbuild command (e.g., rpmbuild -ba foo.spec).
While building PHP (specifically for phpbrew), I used dnf builddep php, and it worked.