Error while running the Model Optimizer on a pretrained Caffe* model in Intel OpenVINO

While running the Model Optimizer on the pretrained Caffe* model in Intel OpenVINO, I get the following error:
The "/opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader/public/mobilenet-ssd/mobilenet-ssd.caffemodel" is not existing file

The message clearly states that the model is not present; you need to download it.
Clone the OpenVINO Open Model Zoo repository (https://github.com/openvinotoolkit/open_model_zoo).
Go to tools/downloader.
Install the dependencies:
python3 -m pip install --user -r ./requirements.in
Download the mobilenet-ssd model to the desired path:
./downloader.py --name mobilenet-ssd --output_dir /opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader/public/mobilenet-ssd/
Note: the downloader fetches the model according to its configuration files in the repository; see the Open Model Zoo documentation for details.
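Once the download completes, you can point the Model Optimizer at the model. A minimal sketch, assuming the default install layout and the standard mo.py flags (--input_model, --input_proto, --output_dir); adjust the <path-to> placeholder to wherever the downloader actually placed the files:
cd /opt/intel/openvino/deployment_tools/model_optimizer
python3 mo.py \
  --input_model <path-to>/mobilenet-ssd.caffemodel \
  --input_proto <path-to>/mobilenet-ssd.prototxt \
  --output_dir ./ir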

nvcc not found but CUDA runs fine?

I was trying to run nvcc -V to check the CUDA version, but I got the following error message:
Command 'nvcc' not found, but can be installed with:
sudo apt install nvidia-cuda-toolkit
But GPU acceleration works fine for training models on CUDA. Is there another way to find out the CUDA compiler tools version? I know nvidia-smi doesn't give the right version.
Is there a way to install or configure nvcc so I don't have to install a whole new toolkit?
Most of the time, nvcc and the other CUDA SDK binaries are not in the PATH environment variable. Check the installation path of CUDA; if it is installed under /usr/local/cuda, add its bin folder to the PATH variable in your ~/.bashrc:
export CUDA_HOME=/usr/local/cuda
export PATH=${CUDA_HOME}/bin:${PATH}
export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:$LD_LIBRARY_PATH
You can apply the changes with source ~/.bashrc; otherwise, everything will be set automatically the next time you log in.
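To verify that the change took effect, a quick check:
which nvcc       # should print ${CUDA_HOME}/bin/nvcc
nvcc --version   # should print the CUDA compilation tools version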
As @pQB and @talonmies mentioned above, you only need to install the GPU drivers (versioned 430-470 these days) to use PyTorch. If your display is driven through the GPU, you should be fine.
For the CUDA compilation tools you need to install the whole toolkit, which includes the driver as well. If you install the downloaded runfile manually from the CLI, the installer lets you choose which components to install or skip.
Generally, it is recommended to install the compilation tools (which are system-wide) and the GPU drivers together, because this avoids compatibility issues.
Append:
export PATH="/usr/local/cuda/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda/lib64:$LD_LIBRARY_PATH"
to
~/.bashrc
Note: your path to CUDA may include a version, so navigate to /usr/local/ and check for a versioned directory (e.g. cuda-10.2), then modify the lines in ~/.bashrc to point to that.
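A one-liner to list whichever versioned installs are present:
ls -d /usr/local/cuda*   # e.g. /usr/local/cuda  /usr/local/cuda-10.2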

PyTorch already installed using Conda but fails when called

I am trying to install PyTorch in order to use BERT, but when following the installation instructions found here: https://pytorch.org/get-started/locally/ I get an error.
When I try to initialise the BERT model I get the following error:
ImportError:
BertForSequenceClassification requires the PyTorch library but it was not found in your environment.
Checkout the instructions on the installation page: https://pytorch.org/get-started/locally/
and follow the ones that match your environment.
I have followed the instructions and run the following command in my Conda prompt terminal AND in my current working directory:
conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
It looks like it completes, but when I call the line below I get the same error as at the start, as if it hadn't installed at all.
Can anyone help me out, please?
EDIT:
The code I am using to execute bert is:
model = BertForSequenceClassification.from_pretrained(r'C:\Users\441\bert\pytorch_model.bin', config = r'C:\Users\441\bert\config.json')
I had the same issue (same error message), and after running conda list | grep torch I also found that it is there. What worked for me was restarting the Jupyter notebook kernel, after which the error was gone.
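A quick sanity check to run inside the notebook after restarting the kernel; a sketch (if the import fails, the kernel is probably bound to a different Python than the conda environment you installed into):
import sys
print(sys.executable)  # should point into the conda environment

import torch
print(torch.__version__)
print(torch.cuda.is_available())  # True if the CUDA build can see the GPU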

How to use a package written in another language in AWS Lambda?

I want to import and use the dataset package of Python in AWS Lambda. The dataset package handles MySQL connections and executes queries. But when I try to import it, there is an error:
"libmysqlclient.so.18: cannot open shared object file: No such file or directory"
I think the problem is that a MySQL client package is necessary, but there is no MySQL package on the AWS Lambda machine.
How do I add the third-party program, and how do I link it?
You should install your packages in your lambda folder:
$ pip install YOUR_MODULE -t YOUR_LAMBDA_FOLDER
And then compress your whole directory into a zip to upload to your lambda.
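A minimal sketch of the full sequence (the folder and zip names are hypothetical):
pip install dataset -t ./my-lambda   # install the package into the function folder
cd my-lambda
zip -r ../my-lambda.zip .            # then upload my-lambda.zip to Lambda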
What you have to do is include the needed binaries with your lambda package.
You need to utilize pip and create an isolated environment. Your zip uploaded to Lambda needs to include python2.7/site-packages (the ones installed with pip).
Now, there are extreme cases of OS-related dependencies.
These have a tricky solution.
In those cases you have to spawn an Amazon Linux EC2 instance in order to build/get those dependencies and package them with your lambda.
Once your lambda is packaged, you can shut the EC2 instance down.
Check this guide if virtualenv is not enough.
This is an OS-dependent system file. I'm guessing that you successfully installed the Python MySQL client, but you still need the system MySQL client library, which seems to be a different version on your system than on Lambda. While building your virtual environment on the official Lambda image will definitely fix this problem, you might have some luck copying your own copy of this system file into your Lambda zip file.
I found mine with
locate libmysqlclient.so.18
Note: depending on your system, the version number at the end might be different. Use the version in the error you receive.
Adding that file at the top level of my zip file with
cd /path/from/locate/to/libmysqlclient
followed by
zip -u /path/to/lambda/zip/file.zip libmysqlclient.so.18
worked for me.

Install CUDA without root

I know that I can install CUDA with the following:
wget http://developer.download.nvidia.com/compute/cuda/7_0/Prod/local_installers/cuda_7.0.28_linux.run
chmod +x cuda_7.0.28_linux.run
./cuda_7.0.28_linux.run -extract=`pwd`/nvidia_installers
cd nvidia_installers
sudo ./NVIDIA-Linux-x86_64-346.46.run
sudo modprobe nvidia
sudo ./cuda-linux64-rel-7.0.28-19326674.run
Just wondering: can I install CUDA without root?
Thanks!
Update: the installation UI for 10.1 changed. The following works:
Deselect driver installation (pressing ENTER on it).
Change options -> root install path to a non-sudo directory.
Press A on the line marked with a + to access advanced options. Deselect create symbolic link, and change the toolkit install path.
Now the installation should work without root permissions.
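The same can also be done non-interactively. A sketch for a 10.1 runfile (the exact filename varies by release; ./<INSTALLER> --help lists the supported flags):
./cuda_10.1.243_418.87.00_linux.run --silent --toolkit \
    --toolkitpath=$HOME/cuda-10.1
Afterwards, add $HOME/cuda-10.1/bin to PATH and $HOME/cuda-10.1/lib64 to LD_LIBRARY_PATH as shown in the other answers.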
Thank you very much for the hints in the question! I just want to complete it with an approach that worked for me, also inspired by this gist, which hopefully helps in situations where a valid driver is already installed and a more recent CUDA still needs to be installed on Linux without root permissions.
TL;DR: Here are the steps to install CUDA 9 + cuDNN 7 on Debian, and to install a pre-compiled version of TensorFlow 1.4 on Python 2.7 to test that everything works. Everything is done without root privileges and via the terminal. It should also work for other CUDA, cuDNN, TensorFlow and Python versions on other Linux systems.
INSTALLATION
Go to NVIDIA's official release web page for CUDA (as of Nov. 2017, CUDA 9 is out): https://developer.nvidia.com/cuda-downloads.
Under your Linux distro, select the runfile (local) option. Note that the sudo indication present in the installation instructions is misleading, since it is possible to run this installer without root permissions. On a server, one easy way is to copy the <LINK> of the Download button and, from any location in your home directory, run wget <LINK>. It will download the <INSTALLER> file.
Run chmod +x <INSTALLER> to make it executable, and execute it with ./<INSTALLER>.
Accept the EULA, say no to the driver installation, and enter a <CUDA> location under your home directory to install the toolkit and a <CUDASAMPLES> location for the samples.
Not asked here but recommended: download a compatible cuDNN file from the official web page (you need to sign in). In my case, I downloaded cudnn-9.0-linux-x64-v7.tgz, compatible with CUDA 9, into the <CUDNN> folder. Uncompress it: tar -xzvf cudnn-9.0-linux-x64-v7.tgz.
Optional: compile the samples with cd <CUDASAMPLES> && make. There are some very nice examples there, and they are a very good starting point for writing CUDA scripts of your own.
(If you did the cuDNN step:) Copy the required files from <CUDNN> into <CUDA>, and grant the user read permission (not sure if needed):
cp -P <CUDNN>/cuda/include/cudnn.h <CUDA>/include/
cp -P <CUDNN>/cuda/lib64/libcudnn* <CUDA>/lib64
chmod a+r <CUDA>/include/cudnn.h <CUDA>/lib64/libcudnn*
Add the library to your environment. This is typically done by adding the following two lines to your ~/.bashrc file (in this example, the <CUDA> directory was ~/cuda9/):
export PATH=<CUDA>/bin:$PATH
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<CUDA>/lib64/
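A quick way to confirm the toolkit is picked up (the deviceQuery path assumes you compiled the samples in the optional step above):
source ~/.bashrc
nvcc --version   # should report the toolkit version you just installed
<CUDASAMPLES>/bin/x86_64/linux/release/deviceQuery   # should list your GPU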
FOR QUICK TESTING OR TENSORFLOW USERS
The quickest way to get a TensorFlow build compatible with CUDA 9 and cuDNN 7 (and a very quick way to test all this) is to download a precompiled wheel file and install it with pip install <WHEEL>. Most of the versions you need can be found in mind's repo (thanks a lot, guys). A minimal test that confirms that cuDNN is also working involves the use of tf.nn.conv2d:
import tensorflow as tf
x = tf.nn.conv2d(tf.ones([1,1,10,1]), tf.ones([1,5,1,1]), strides=[1, 1, 1, 1], padding='SAME')
with tf.Session() as sess:
    sess.run(x)  # this should output a tensor of shape (1,1,10,1) with [3,4,5,5,5,5,5,5,4,3]
In my case, the wheel I installed required Intel's MKL library, as explained here. Again, from the terminal and without root, these are the steps I followed to install the library and make TensorFlow find it (reference):
git clone https://github.com/01org/mkl-dnn.git
cd mkl-dnn/scripts && ./prepare_mkl.sh && cd ..
mkdir -p build && cd build
cmake -D CMAKE_INSTALL_PREFIX:PATH=<TARGET_DIR_IN_HOME> ..
make # this takes a while
make doc # do this optionally if you have doxygen
make test # also takes a while
make install # installs into <TARGET_DIR_IN_HOME>
Add the following to your ~/.bashrc: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<TARGET_DIR_IN_HOME>/lib
Hope this helps!
Andres
You can install it using conda with the following command:
conda install -c anaconda cudatoolkit
But you need to have prior access to the device (GPU).
EDIT: If you hit an error with the anaconda repository, switch to the conda-forge channel, which is updated frequently:
conda install -c conda-forge cudatoolkit
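To confirm what landed in the environment, a quick check:
conda list cudatoolkit   # shows the installed cudatoolkit version and channel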
You can install CUDA and compile programs, but you won't be able to run them for lack of device access.

Python/Openshift application using NLTK resources

I hosted a Python web service application on OpenShift which uses the RSLP Stemmer module of NLTK, but the service log reported:
[...] Resource 'stemmers/rslp/step0.pt' not found. Please use the NLTK Downloader to obtain the resource: >>> nltk.download()
Searched in:
- '/var/lib/openshift/539a61ab5973caa2410000bf/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data' [...]
I concluded that the resource is not installed properly. Does anyone know how to install NLTK resources in an OpenShift/Python application?
PS: the Portuguese stopwords module also produces an error like this.
You can use the NLTK package on OpenShift. The reason it is not working for you is that the NLTK package by default expects its corpora in the user's home directory. On OpenShift, you cannot write to the user home; you have to use $OPENSHIFT_DATA_DIR for storing data. To solve this problem, do the following:
Create an environment variable called NLTK_DATA with the value $OPENSHIFT_DATA_DIR. After creating the environment variable, restart the app using the rhc app-restart command.
SSH into your application gear using the rhc ssh command.
Activate the virtual environment and download the corpora using the commands shown below.
. $VIRTUAL_ENV/bin/activate
curl https://raw.github.com/sloria/TextBlob/master/download_corpora.py | python
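Alternatively, with NLTK_DATA pointing at $OPENSHIFT_DATA_DIR, you can fetch only the resources the app actually needs; a minimal sketch:
import os
import nltk

# NLTK_DATA was set to $OPENSHIFT_DATA_DIR in the first step above
target = os.environ['NLTK_DATA']
nltk.download('rslp', download_dir=target)       # RSLP stemmer data
nltk.download('stopwords', download_dir=target)  # includes the Portuguese stopwords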
I have written a blog post on the TextBlob package, which uses the NLTK package underneath: https://www.openshift.com/blogs/day-9-textblob-finding-sentiments-in-text