How to define the root of Caffe for DIGITS? - caffe

I have installed Caffe on Ubuntu 16.04 with Python 2.7 and CUDA 8, and have tested MNIST on it; it worked fine.
Now I want to install DIGITS. I have installed it, but when I run ./digits-devserver it gives me the following error:
user@pc2user:~/digits$ ./digits-devserver
  ___ ___ ___ ___ _____ ___
 |   \_ _/ __|_ _|_   _/ __|
 | |) | | (_ || |  | | \__ \
 |___/___\___|___|  |_| |___/ 5.1-dev
Did you forget to "make pycaffe"?
"/home/user/caffe-master" from CAFFE_ROOT does not point to a valid installation of Caffe.
Use the envvar CAFFE_ROOT to indicate a valid installation.
Traceback (most recent call last):
File "/home/user/anaconda2/lib/python2.7/runpy.py", line 174, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/home/user/anaconda2/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home/user/digits/digits/__main__.py", line 70, in <module>
main()
File "/home/user/digits/digits/__main__.py", line 53, in main
import digits.config
File "digits/config/__init__.py", line 7, in <module>
from . import ( # noqa
File "digits/config/caffe.py", line 226, in <module>
executable, version, flavor = load_from_envvar('CAFFE_ROOT')
File "digits/config/caffe.py", line 37, in load_from_envvar
import_pycaffe(python_dir)
File "digits/config/caffe.py", line 126, in import_pycaffe
import caffe
File "/home/user/caffe-master/python/caffe/__init__.py", line 1, in <module>
from .pycaffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, RMSPropSolver, AdaDeltaSolver, AdamSolver
File "/home/user/caffe-master/python/caffe/pycaffe.py", line 13, in <module>
from ._caffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, \
ImportError: /home/user/anaconda2/lib/python2.7/site-packages/scipy/special/../../../../libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by /home/user/caffe-master/python/caffe/_caffe.so)
user@pc2user:~/digits$
Moreover, this is my ~/.profile:
Please help me figure out how to point DIGITS at the root of my Caffe installation.
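For what it's worth, the DIGITS code in the traceback does roughly this with CAFFE_ROOT: it appends $CAFFE_ROOT/python to the import path and imports caffe. Below is a minimal sketch of that same check, assuming CAFFE_ROOT is exported in ~/.profile (e.g. to the /home/user/caffe-master path from the traceback) and that make pycaffe has been run there; it helps separate a wrong CAFFE_ROOT from the GLIBCXX mismatch caused by Anaconda's bundled libstdc++:
# Minimal sketch of what DIGITS' caffe config does with CAFFE_ROOT.
import os
import sys

caffe_root = os.environ.get('CAFFE_ROOT', '/home/user/caffe-master')
sys.path.insert(0, os.path.join(caffe_root, 'python'))

try:
    import caffe
    print('pycaffe found at', caffe.__file__)
except ImportError as e:
    # A GLIBCXX_3.4.21 message here points at Anaconda's old libstdc++,
    # not at CAFFE_ROOT itself being wrong.
    print('pycaffe import failed:', e)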


Issue with KeyError: 'babel'

I am very new to Flask and everything related to web development. I am building an app in Flask with Dash integrated, and it is failing with the following error:
C:\Users\satpute\PycharmProjects\RMAPartsDepotPlanning\venv\Scripts\python.exe
C:/PycharmProjects/RMAPrototype/dashapp.py
Traceback (most recent call last):
File "C:\PycharmProjects\RMAPrototype\dashapp.py", line 4, in <module>
app = create_app()
File "C:\PycharmProjects\RMAPrototype\PDP\__init__.py", line 12, in create_app
from PDP import PDPApp
File "C:\PycharmProjects\RMAPrototype\PDP\PartsDepotPlanningApp.py", line 14, in <module>
from flask_table import Table, Col, LinkCol
File "C:\PycharmProjects\RMAPrototype\venv\lib\site-packages\flask_table\__init__.py", line 1, in
<module>
from .table import Table, create_table
File "C:\PycharmProjects\RMAPrototype\venv\lib\site-packages\flask_table\table.py", line 8, in
<module>
from .columns import Col
File "C:\PycharmProjects\RMAPrototype\venv\lib\site-packages\flask_table\columns.py", line 161, in
<module>
class BoolCol(OptCol):
File "C:\PycharmProjects\RMAPrototype\venv\lib\site-packages\flask_table\columns.py", line 166, in
BoolCol
yes_display = _('Yes')
File "C:\PycharmProjects\RMAPrototype\venv\lib\site-packages\flask_babel\__init__.py", line 548, in
gettext
t = get_translations()
File "C:\PycharmProjects\RMAPrototype\venv\lib\site-packages\flask_babel\__init__.py", line 217,
in get_translations
babel = current_app.extensions['babel']
KeyError: 'babel'
Process finished with exit code 1
How can I go about troubleshooting this? I have tried different approaches but haven't been able to resolve it so far.
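The traceback shows flask_table calling flask_babel's gettext while the BoolCol class is being defined, and flask_babel then looking up current_app.extensions['babel']. Here is a minimal sketch, assuming a standard Flask-Babel setup, of initializing Babel on the app before flask_table is imported so that key exists (the create_app name mirrors the one in the traceback):
# Sketch only: Babel(app) is what registers 'babel' in app.extensions,
# which the get_translations() call in the traceback looks up.
from flask import Flask
from flask_babel import Babel

def create_app():
    app = Flask(__name__)
    Babel(app)  # registers app.extensions['babel']

    with app.app_context():
        # Import flask_table only after Babel is set up, because its columns
        # module calls gettext('Yes') at class-definition time.
        from flask_table import Table, Col  # noqa: F401

    return app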

CUDNN_STATUS_MAPPING_ERROR when training with pose2body

I'm trying to train https://github.com/NVIDIA/vid2vid. I'm...
...executing with pretty much the vanilla parametrization shown in the readme; I did have to change the number of GPUs, though, and increased the number of threads for reading the dataset. Command:
python train.py \
--name pose2body_256p \
--dataroot datasets/pose \
--dataset_mode pose \
--input_nc 6 \
--num_D 2 \
--resize_or_crop ScaleHeight_and_scaledCrop \
--loadSize 384 \
--fineSize 256 \
--gpu_ids 0,1 \
--batchSize 1 \
--max_frames_per_gpu 3 \
--no_first_img \
--n_frames_total 12 \
--max_t_step 4 \
--nThreads 6
...training on the supplied example datasets.
...running a docker container built with the scripts in vid2vid/docker, e. g. with CUDA 9.0 and CUDNN 7.
...using two NVIDIA V100 GPUs.
Whenever I start training the script crashes after a couple of minutes with the message RuntimeError: CUDNN_STATUS_MAPPING_ERROR. Full error message:
Traceback (most recent call last):
File "train.py", line 329, in <module>
train()
File "train.py", line 104, in train
fake_B, fake_B_raw, flow, weight, real_A, real_Bp, fake_B_last = modelG(input_A, input_B, inst_A, fake_B_last)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/parallel/data_parallel.py", line 114, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/parallel/data_parallel.py", line 124, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/usr/local/lib/python3.5/dist-packages/torch/nn/parallel/parallel_apply.py", line 65, in parallel_apply
raise output
File "/usr/local/lib/python3.5/dist-packages/torch/nn/parallel/parallel_apply.py", line 41, in _worker
output = module(*input, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/vid2vid/models/vid2vid_model_G.py", line 130, in forward
fake_B, fake_B_raw, flow, weight = self.generate_frame_train(netG, real_A_all, fake_B_prev, start_gpu, is_first_frame)
File "/vid2vid/models/vid2vid_model_G.py", line 175, in generate_frame_train
fake_B_feat, flow_feat, fake_B_fg_feat, use_raw_only)
File "/vid2vid/models/networks.py", line 171, in forward
downsample = self.model_down_seg(input) + self.model_down_img(img_prev)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/container.py", line 91, in forward
input = module(input)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/conv.py", line 301, in forward
self.padding, self.dilation, self.groups)
RuntimeError: CUDNN_STATUS_MAPPING_ERROR
From reading the issues in the vid2vid repository, using two V100s should work with this setup. The error also occurs if CUDA 8/cuDNN 6 are used. I checked the flags but haven't found any indication of further necessary changes to the arguments supplied to train.py.
Any ideas on how to solve (or work around) this?
In case anybody runs into the same issue: training on P100 cards worked. It seems like the V100 architecture clashes at some point with the version of PyTorch used in the supplied Dockerfile. Not quite a solution, but a workaround.
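Another thing that is sometimes worth trying before switching hardware is taking cuDNN out of the loop entirely via PyTorch's torch.backends.cudnn switches; if training then runs, the failure is specific to the cuDNN path on that GPU/driver/library combination. A sketch (these two lines are not part of vid2vid's train.py; they would have to be added near the top, before the model is built):
# Sketch only: fall back to PyTorch's non-cuDNN kernels to see whether
# CUDNN_STATUS_MAPPING_ERROR is specific to the cuDNN code path.
import torch

torch.backends.cudnn.benchmark = False
torch.backends.cudnn.enabled = False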

CUDA library dead after Linux updates

The system ran beautifully until I received update notifications from Ubuntu, so I accepted them. After they ran, I get a big CUDA issue:
('fp: ', <open file '/usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow.so', mode 'rb' at 0x7f8af1a63300>)
('pathname: ', '/usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow.so')
('description: ', ('.so', 'rb', 3))
Traceback (most recent call last):
File "translate.py", line 41, in <module>
import tensorflow.python.platform
File "/usr/local/lib/python2.7/dist-packages/tensorflow/__init__.py", line 23, in <module>
from tensorflow.python import *
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/__init__.py", line 45, in <module>
from tensorflow.python import pywrap_tensorflow
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 31, in <module>
_pywrap_tensorflow = swig_import_helper()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 27, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow', fp, pathname, description)
ImportError: libcudart.so.7.5: cannot open shared object file: No such file or directory
Any idea?
Thanks.
It seems like your system cannot find "libcudart.so.7.5".
libcudart.so.7.5: cannot open shared object file: No such file or directory
Could you check that this file exists and that you have set PATH/LD_LIBRARY_PATH correctly?
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
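If the paths look right but the import still fails, you can also ask the dynamic loader directly from Python whether it can resolve the library; a quick check (the soname is the one from the error message, and it must be run with the same Python that imports TensorFlow):
# Quick check of what the dynamic loader sees for the missing CUDA runtime.
import ctypes
import os

print(os.environ.get('LD_LIBRARY_PATH'))
try:
    ctypes.CDLL('libcudart.so.7.5')
    print('libcudart.so.7.5 can be loaded')
except OSError as e:
    print('still not found:', e)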

Cython: Trying to wrap SFML Window; getting "ImportError: No module named 'ExprNodes'"

sfml.pxd:
cdef extern from "SFML/Window.hpp" namespace "sf":
    cdef cppclass VideoMode:
        VideoMode(unsigned int, unsigned int) except +
    cdef cppclass Window:
        Window(VideoMode, String) except +
        void display()
display.pyx:
cimport sfml

cdef class Window:
    cdef sfml.Window* _this

    def __cinit__(self, unsigned int width, unsigned int height):
        self._this = new sfml.Window(sfml.VideoMode(width, height), "title")

    def __dealloc__(self):
        del self._this

    def display(self):
        self._this.display()
setup.py:
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext

setup(
    cmdclass = {'build_ext': build_ext},
    ext_modules = [
        Extension("display", ["display.pyx"],
                  language='c++',
                  libraries=["sfml-system", "sfml-window"])
    ]
)
The error when running python setup.py build:
running build
running build_ext
cythoning display.pyx to display.cpp
Traceback (most recent call last):
File "setup.py", line 10, in <module>
libraries=["sfml-system", "sfml-window"])
File "/usr/lib/python3.3/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/lib/python3.3/distutils/dist.py", line 917, in run_commands
self.run_command(cmd)
File "/usr/lib/python3.3/distutils/dist.py", line 936, in run_command
cmd_obj.run()
File "/usr/lib/python3.3/distutils/command/build.py", line 126, in run
self.run_command(cmd_name)
File "/usr/lib/python3.3/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/lib/python3.3/distutils/dist.py", line 936, in run_command
cmd_obj.run()
File "/usr/lib/python3.3/site-packages/Cython/Distutils/build_ext.py", line 163, in run
_build_ext.build_ext.run(self)
File "/usr/lib/python3.3/distutils/command/build_ext.py", line 354, in run
self.build_extensions()
File "/usr/lib/python3.3/site-packages/Cython/Distutils/build_ext.py", line 170, in build_extensions
ext.sources = self.cython_sources(ext.sources, ext)
File "/usr/lib/python3.3/site-packages/Cython/Distutils/build_ext.py", line 317, in cython_sources
full_module_name=module_name)
File "/usr/lib/python3.3/site-packages/Cython/Compiler/Main.py", line 608, in compile
return compile_single(source, options, full_module_name)
File "/usr/lib/python3.3/site-packages/Cython/Compiler/Main.py", line 549, in compile_single
return run_pipeline(source, options, full_module_name)
File "/usr/lib/python3.3/site-packages/Cython/Compiler/Main.py", line 386, in run_pipeline
from . import Pipeline
File "/usr/lib/python3.3/site-packages/Cython/Compiler/Pipeline.py", line 7, in <module>
from .Visitor import CythonTransform
File "Visitor.py", line 10, in init Cython.Compiler.Visitor (/build/src/Cython-0.19/Cython/Compiler/Visitor.c:15987)
ImportError: No module named 'ExprNodes'
Apparently, it can't find something called 'ExprNodes', but I don't think that my Cython installation is broken, because I managed to successfully wrap a different C++ library some time ago, and I didn't run into this problem.
I'm using Cython 0.19.
I would appreciate any help/insight that you could offer.
Thanks.
Looking more closely at the traceback, I see that Cython fails inside its own compiled code. It may indeed be a bug; sorry for missing it the first time.
What can you do:
Create a clean virtualenv, install Cython there and check if it works. (Version 0.19.1 is the latest).
Create another virtualenv, but this time install Cython using python setup.py install --no-cython-compile.
If either of these fails, please post your detailed configuration (linux distro and version, python version, gcc version, etc.) to the cython-devel mailing list.
BTW does your old successful project still compile?
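One quick way to confirm whether the installation itself is broken is to import the module the traceback says is missing directly, outside of any build; ExprNodes is an ordinary submodule of Cython.Compiler, so this should work on a healthy install:
# Quick diagnostic: if this import fails on its own, the Cython installation
# (not the SFML wrapper) is what is broken.
import Cython
print(Cython.__version__, Cython.__file__)

from Cython.Compiler import ExprNodes  # the module the traceback cannot find
print(ExprNodes.__file__)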

hg command unknown error

Recently I ran into a problem with hg where I can't even check the status of modified files. I get the error below:
sokmesa@sokmesa-laptop:/var/www/my_project$ hg st
Traceback (most recent call last):
File "/usr/lib/python2.7/site.py", line 562, in <module>
main()
File "/usr/lib/python2.7/site.py", line 544, in main
known_paths = addusersitepackages(known_paths)
File "/usr/lib/python2.7/site.py", line 271, in addusersitepackages
user_site = getusersitepackages()
File "/usr/lib/python2.7/site.py", line 246, in getusersitepackages
user_base = getuserbase() # this will also set USER_BASE
File "/usr/lib/python2.7/site.py", line 236, in getuserbase
USER_BASE = get_config_var('userbase')
File "/usr/lib/python2.7/sysconfig.py", line 558, in get_config_var
return get_config_vars().get(name)
File "/usr/lib/python2.7/sysconfig.py", line 438, in get_config_vars
import re
File "/usr/lib/python2.7/re.py", line 105, in <module>
import sre_compile
File "/usr/lib/python2.7/sre_compile.py", line 14, in <module>
import sre_parse
ValueError: bad marshal data (unknown type code)
Running dpkg-reconfigure didn't work for me, so I removed the .pyc files:
find /usr/lib/python2.7 -name \*.pyc -exec rm {} \;
Just found the solution: sudo dpkg-reconfigure update-manager-core
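If you want to rebuild the bytecode right away instead of letting Python regenerate it lazily, the standard library's compileall module can do it in one pass; a sketch, assuming the stale .pyc files have already been removed as above (run it as root, since /usr/lib/python2.7 is system-owned):
# Sketch only: equivalent to running "python -m compileall /usr/lib/python2.7".
import compileall

compileall.compile_dir('/usr/lib/python2.7', quiet=1)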