'The following arguments were not expected: environment.yml --file create' when making new Mamba environment - deep-learning

I need to test the model described in the IceNet paper but I am having issues making the Mamba environment.
After installing Mamba as described here, if I run the command mamba env create --name esports --file environment.yml I get the error:
The following arguments were not expected: environment.yml --file create
Run with --help for more information.
Is there a way I can fix that? Also, I am working with an A100 GPU. Does it still make sense to use Mamba (the code was originally developed to run on a laptop) or am I already fine using Conda as usual?

Mamba should have the same API as Conda, so the command you tried should be correct. The error you get is likely due to a typo.
Note that I was able to trigger this exact error using Micromamba, which has a different API from Mamba: Micromamba only has the micromamba create command, which handles both YAML and list-style environment files. In that case, the correct command is:
micromamba create --name esports --file environment.yml
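A quick way to tell which tool you actually have on PATH (a hedged check; exact output varies by version):

which mamba        # check whether "mamba" is actually a micromamba binary
mamba --version    # micromamba typically prints just a bare version number
mamba env create --name esports --file environment.yml     # Mamba/Conda syntax
micromamba create --name esports --file environment.yml    # Micromamba syntax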

Related

How to make a shell script setup file see my conda environment

I'm trying to apply the CyCADA paper: https://github.com/jhoffman/cycada_release/tree/8629c03fe78a72d4aaa0be1a434018f8600dfae4
I'm trying to run train_cycada.sh, but I get the error "No module named 'torch'", even though I have torch in my conda environment. I guess the problem is that Bash on Windows doesn't see the conda environment... does anyone know how to fix that?
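One common workaround (a sketch, not from the original thread; it assumes an Anaconda-style install at the default path, and the environment name cycada is hypothetical) is to activate the environment inside the script itself, before any python call:

# at the top of train_cycada.sh -- adjust the install path to your system
source ~/anaconda3/etc/profile.d/conda.sh   # makes `conda activate` usable in scripts
conda activate cycada                       # hypothetical environment name

Git Bash and WSL each resolve ~ differently, so the source line is the part most likely to need adjusting.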

mysqldump not found (Wordmove). How to correctly set up a symlink in Zsh?

I have searched through related questions but have still not found an answer to this one.
I am using Wordmove to try and push/pull databases between local and live environments for WordPress (running on AMPPS on OSX). I have come back to trying the Wordmove method since the fork of WP-Sync-DB stopped working for me and appears to be abandoned now. This was the best free method for migrating databases between WordPress environments.
The error I am getting when running wordmove pull -e runcloud --db is sh: mysqldump: command not found
I am using Zsh and have already added a symlink to the only mysqldump I could locate on my system: alias mysqldump='/Applications/AMPPS/mysql/bin/mysqldump --host=localhost -uroot -proot' in .zprofile. It is also included in my .bash_profile. Without that line I simply get mysqldump not found (verified by commenting out the line and restarting iTerm after each change).
So now if I type which mysqldump I get mysqldump: aliased to /Applications/AMPPS/mysql/bin/mysqldump --host=localhost -uroot -proot
But the error from Wordmove persists. I have enquired on the Wordmove GitHub and the author says this will be an error with how mysqldump is configured.
Disclaimer: I am not at all an expert with the CLI, only knowing enough to configure an environment for Gulp, use tools like Wordmove, and do basic stuff over SSH. I chose Zsh as it made a lot of stuff easier to use and to see, but any kind of configuration like this usually has me scratching my head!
Have I missed something obvious here? Perhaps the symlink is not set up correctly?
I see two conceptual problems here:
(1) You cannot export an alias. An alias defined in the current Zsh won't automatically be visible in a child Zsh.
(2) Your error message says
sh: mysqldump: command not found
which means that Zsh is not even involved when looking for mysqldump; it is a POSIX shell script that is running.
Hence, every mechanism you want to use must work with a POSIX shell, which means that you need a program (a suitable shell script) named mysqldump in your PATH, which then calls the original mysqldump with the parameters you have in mind.
Make sure that the PATH is set up so that your private version of mysqldump is found before the one in /Applications/AMPPS/mysql/bin.
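A minimal sketch of such a wrapper (the ~/bin location is an assumption; any directory that precedes the AMPPS one in PATH works):

mkdir -p ~/bin
cat > ~/bin/mysqldump <<'EOF'
#!/bin/sh
# wrapper so that plain POSIX sh finds mysqldump with the desired defaults;
# "$@" forwards whatever arguments Wordmove passes along
exec /Applications/AMPPS/mysql/bin/mysqldump --host=localhost -uroot -proot "$@"
EOF
chmod +x ~/bin/mysqldump
export PATH="$HOME/bin:$PATH"   # e.g. in ~/.zprofile, so it survives new shells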

ModuleNotFoundError when running functional python tests despite that textX command works

I followed the set of instructions for this open-source project.
At step 3, I am supposed to run
py.test tests/functional/
When I do so, I get
ModuleNotFoundError: No module named 'textx'
However, when I type textx, it's definitely working as a command.
Where did I go wrong?
PYTHONPATH is not set by py.test; see https://docs.pytest.org/en/latest/pythonpath.html#pythonpath.
As described in https://github.com/igordejanovic/textX/blob/master/CONTRIBUTING.md, you should install textX in your virtual environment. If you omit pip install -e ., you get the described behavior.
As mentioned above, you can set PYTHONPATH manually. Alternatively, you can run python -m pytest tests/functional as proposed on the py.test website.
It is unclear to me why the textx command works in your example. Maybe you installed textX outside your virtual environment after creating it?
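In shell terms, the fix sequence looks like this (a sketch, run from the textX checkout with the virtual environment already activated):

pip install -e .                     # editable install of textX into the venv
python -m pytest tests/functional/   # -m prepends the current directory to sys.path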
Run export PYTHONPATH=. before running py.test tests/functional/ and it should work.
This error may have occurred because I installed textX outside my virtual environment after creating the virtual environment.

'ALGOLIA_API_KEY' not recognized as an internal or external command

I am trying to run algolia for the first time but it seems that there is something wrong with my environment. I followed the detailed explanation here https://community.algolia.com/jekyll-algolia/getting-started.html.
I installed and configured everything that is needed from the previous steps but when I run the command
ALGOLIA_API_KEY=xxxxxxxxxxxxxx bundle exec jekyll algolia
I get an error:
'ALGOLIA_API_KEY' is not recognized as an internal or external command,
operable program or batch file.
I have been rereading the documentation for both Jekyll and Algolia but couldn't find anything that could be helpful.
Since you're running on Windows, you cannot set an environment variable inline for a single command the way you can on UNIX.
As advised in this question, Setting and using variable within same command line in Windows cmd.exe, I believe you could use:
set ALGOLIA_API_KEY=xxxxxxxxxxxxxx && bundle exec jekyll algolia
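One caveat worth knowing: cmd.exe keeps the space before && as part of the variable's value, so if the API key is rejected, quote the assignment to keep it exact:

set "ALGOLIA_API_KEY=xxxxxxxxxxxxxx" && bundle exec jekyll algolia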

Upgrading to Ansible 2.0 with DigitalOcean API v2 Issues

I had a working Vagrant + Ansible setup to provision my DigitalOcean droplets. It was running on API v1, but when DO deprecated it I got an error message telling me there was no support for v1 anymore. After some research I found out I needed to upgrade to Ansible 2.0 and update my digital_ocean.py, since the older one was still using client_id and api_key; the new one now uses api_token.
Basically, I've updated:
1. digital_ocean.py, which I got from the Ansible repo's inventory module
2. digital_ocean.ini to contain the api_token
3. my API token from DO, to make sure it's using the new one
but when I executed my Ansible playbook I initially got this error:
ERROR! The file provisioning/inventory/staging/digital_ocean.py looks like it should be an executable inventory script, but is not marked executable. Perhaps you want to correct this with `chmod +x provisioning/inventory/staging/digital_ocean.py`?
So naturally I had to chmod +x it, but when I did I got a new error:
ERROR! The file provisioning/inventory/staging/digital_ocean.py is marked as executable, but failed to execute correctly. If this is not supposed to be an executable script, correct this with `chmod -x provisioning/inventory/staging/digital_ocean.py`.
ERROR! Inventory script (provisioning/inventory/staging/digital_ocean.py) had an execution error:
ERROR! provisioning/inventory/staging/digital_ocean.py:3: Error parsing host definition ''''': No closing quotation
The next one seems to be JSON-parsing related. My only problem is that it points at line 3, which, if you check the code itself, is still in the comments:
https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/digital_ocean.py
I received both errors mentioned and solved them with the following:
The dopy-related error was due to the fact that my Python and pip were installed differently: dopy was installed via pip, which was installed via Homebrew, while I was using the system Python. When I installed Python via Homebrew, the script found dopy just fine.
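A quick way to confirm that kind of mismatch (a hedged diagnostic; exact paths vary per machine):

which python pip           # both should point into the same (Homebrew) prefix
python -c "import dopy"    # raises ImportError if pip installed dopy against a different interpreter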
Regarding the second error, that is the result I got when not setting the DO_API_TOKEN. I set mine in the command itself with:
DO_API_TOKEN=<api_token> ansible -i digital_ocean.py all -m ping
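If you'd rather not prefix every command, exporting the token once for the session works too (a sketch assuming a POSIX shell; <api_token> stays your placeholder):

export DO_API_TOKEN=<api_token>
ansible -i digital_ocean.py all -m ping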