"ImportError: cannot import name layers" in caffe / pycaffe - caffe

I'm following the IPython notebook example on training Caffe via its Python interface. However, the caffe module does not seem to contain the modules described in the tutorial.
When I enter
import caffe
from caffe import layers as L
I get the error:
ImportError Traceback (most recent call last)
<ipython-input-5-7cbb5661b7a7> in <module>()
----> 1 from caffe import layers as L
ImportError: cannot import name layers
When I run dir(caffe) I get:
['Classifier',
'Detector',
'Layer',
'Net',
'SGDSolver',
'TEST',
'TRAIN',
'__builtins__',
'__doc__',
'__file__',
'__name__',
'__package__',
'__path__',
'_caffe',
'classifier',
'detector',
'get_solver',
'io',
'proto',
'pycaffe',
'set_device',
'set_mode_cpu',
'set_mode_gpu']
so it doesn't contain a module called "layers".
Is the tutorial somehow outdated? Is there a newer version which works for my version of caffe?

Related

Transforming shapefiles to dataframes with shapefile_to_dataframe() helper function - fiona related error

I am trying to use the Palantir Foundry helper function shapefile_to_dataframe() in order to ingest shapefiles for later usage in geolocation features.
I have manually imported the shapefiles (.shp, .shx & .dbf) into a single dataset (no access issues through the filesystem API).
As per the documentation, I have imported the geospatial-tools and GEOSPARK profiles and included the dependencies in the transforms-python build.gradle.
Here is my transform code, which is mostly extracted from the documentation:
from transforms.api import transform, Input, Output, configure
from geospatial_tools import geospatial
from geospatial_tools.parsers import shapefile_to_dataframe
@geospatial()
@transform(
    raw=Input("ri.foundry.main.dataset.0d984138-23da-4bcf-ad86-39686a14ef21"),
    output=Output("/Indhu/InDhu/Vincent/geo_energy/datasets/extract_coord/raw_df")
)
def compute(raw, output):
    return output.write_dataframe(shapefile_to_dataframe(raw))
Code Assist then becomes extremely slow to load, and I finally get the following error:
AttributeError: partially initialized module 'fiona' has no attribute '_loading' (most likely due to a circular import)
Traceback (most recent call last):
File "/myproject/datasets/shp_to_df.py", line 3, in <module>
from geospatial_tools.parsers import shapefile_to_dataframe
File "/scratch/standalone/3a553998-623b-48f5-9c3f-03de7e64f328/code-assist/contents/transforms-python/build/conda/env/lib/python3.8/site-packages/geospatial_tools/parsers.py", line 11, in <module>
from fiona.drvsupport import supported_drivers
File "/scratch/standalone/3a553998-623b-48f5-9c3f-03de7e64f328/code-assist/contents/transforms-python/build/conda/env/lib/python3.8/site-packages/fiona/__init__.py", line 85, in <module>
with fiona._loading.add_gdal_dll_directories():
AttributeError: partially initialized module 'fiona' has no attribute '_loading' (most likely due to a circular import)
Thanks a lot for your help,
Vincent
I was able to reproduce this error and it seems to happen only in Preview - running the full build works fine. The simplest way to get around it is to move the import inside the function:
from transforms.api import transform, Input, Output, configure
from geospatial_tools import geospatial
@geospatial()
@transform(
    raw=Input("ri.foundry.main.dataset.0d984138-23da-4bcf-ad86-39686a14ef21"),
    output=Output("/Indhu/InDhu/Vincent/geo_energy/datasets/extract_coord/raw_df")
)
def compute(raw, output):
    from geospatial_tools.parsers import shapefile_to_dataframe
    return output.write_dataframe(shapefile_to_dataframe(raw))
However, at the moment, the function shapefile_to_dataframe isn't going to work in Preview anyway, because the full transforms.api.FileSystem API isn't implemented there - specifically, the ls function doesn't support the glob parameter that the full transforms API does.
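If you want to experiment in Preview before that is available, one workaround is to list everything and filter the paths yourself. A minimal sketch, assuming raw.filesystem().ls() yields file statuses with a path attribute (as the full transforms API does), with fnmatch used only to emulate the glob pattern:
import fnmatch

def list_shapefile_parts(raw, pattern="*.shp"):
    # Preview's ls() doesn't support glob, so filter the listing manually instead.
    fs = raw.filesystem()
    return [f.path for f in fs.ls() if fnmatch.fnmatch(f.path, pattern)]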

Loading XGBoost Model: ModuleNotFoundError: No module named 'sklearn.preprocessing._label'

I'm having issues loading a pretrained xgboost model using the following code:
xgb_model = pickle.load(open('churnfinalunscaled.pickle.dat', 'rb'))
And when I do that, I get the following error:
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-29-31e7f426e19e> in <module>()
----> 1 xgb_model = pickle.load(open('churnfinalunscaled.pickle.dat', 'rb'))
ModuleNotFoundError: No module named 'sklearn.preprocessing._label'
I haven't seen anything online so any help would be much appreciated.
I was able to solve my issue. Simply updating scikit-learn from 0.21.3 to 0.22.0 seems to solve it. Along the way I had to update my pandas version to 0.25.2 as well.
The clue is provided in this link: https://www.gitmemory.com/vruusmann, where it states:
During Scikit-Learn version upgrade from 0.21.X to 0.22.X many modules were renamed (typically, by prepending an underscore character to the module name). For example, sklearn.preprocessing.label.LabelEncoder became sklearn.preprocessing._label.LabelEncoder.
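If upgrading isn't possible in your environment, a common workaround is to alias the old module path to the renamed one before unpickling. A minimal sketch, assuming sklearn.preprocessing._label is the only renamed module the pickle references (the filename is the one from the question):
import pickle
import sys

import sklearn.preprocessing.label as _old_label  # module name in scikit-learn 0.21.x

# Let the pickle's reference to the 0.22+ module name resolve against the old module.
sys.modules['sklearn.preprocessing._label'] = _old_label

with open('churnfinalunscaled.pickle.dat', 'rb') as f:
    xgb_model = pickle.load(f)
Upgrading scikit-learn, as above, is still the cleaner fix.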

Unable to add LSTM layer on top of embedded layer on GPU - Keras with tensorflow backend

Below is the code snippet:
from keras.models import Sequential
from keras.layers import LSTM, Embedding
lang_model = Sequential()
lang_model.add(Embedding(1000, 100, input_length=25))
lang_model.add(LSTM(100,return_sequences=True)) #stuck here
lang_model.summary()
When I run the above code on my local CPU, it runs fine, but when running on a GPU on Google Cloud it just doesn't work. It doesn't even show any error; it just gets stuck on the third line.
Please suggest.
Just figured out that it works fine with the Theano backend, but fails when I use the TensorFlow backend.
Thanks.
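Not from the thread, but when the TensorFlow backend hangs like this it can help to have TensorFlow log where each op is placed and allow CPU fallback. A minimal diagnostic sketch, assuming the TF 1.x / standalone Keras APIs used in the question:
import tensorflow as tf
from keras import backend as K

# Log device placement so a GPU hang shows which op it stalls on,
# and allow soft placement so ops without a GPU kernel fall back to CPU.
config = tf.ConfigProto(log_device_placement=True, allow_soft_placement=True)
K.set_session(tf.Session(config=config))
Run this before building the model so the Sequential layers pick up the configured session.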

Still downloading even though Keras has the VGG16 pretrained model in ~/.keras/models

I tried running the VGG16 keras script.
I get this error:
Downloading data from https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_tf_dim_ordering_tf_kernels.h5
Traceback (most recent call last):
File "test_imagenet.py", line 40, in
model = VGG16(weights="imagenet")
File "/home/nvidia/deep-learning-models/imagenet-example/vgg16.py", line 143, in VGG16
cache_subdir='models')
File "build/bdist.linux-aarch64/egg/keras/utils/data_utils.py", line 222, in get_file
Exception: URL fetch failure on https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_tf_dim_ordering_tf_kernels.h5:
I tried to download it manually from here and paste it to ~/.keras/models.
But still, I am getting the same error. Why? I don't understand the error because the correct model is already in ~/.keras/models.
The default value of the include_top parameter in the VGG16 function is True. This means that if you want to use the full pre-trained VGG network (with the fully connected part), you need to download the vgg16_weights_tf_dim_ordering_tf_kernels.h5 file, not vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5.
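For illustration, a sketch of the two variants and the weight file each one looks for in ~/.keras/models (assuming the standalone keras.applications API of that era):
from keras.applications.vgg16 import VGG16

# Full network with the fully connected classifier:
# expects vgg16_weights_tf_dim_ordering_tf_kernels.h5
full_model = VGG16(weights="imagenet", include_top=True)

# Convolutional base only:
# expects vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5
conv_base = VGG16(weights="imagenet", include_top=False)
If the manually downloaded file is saved under a different name, Keras won't find it in the cache and will try to download it again.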

nltk pos_tag usage

I am trying to use part-of-speech tagging in NLTK and have used this command:
>>> text = nltk.word_tokenize("And now for something completely different")
>>> nltk.pos_tag(text)
Traceback (most recent call last):
File "<pyshell#4>", line 1, in <module>
nltk.pos_tag(text)
File "C:\Python27\lib\site-packages\nltk\tag\__init__.py", line 99, in pos_tag
tagger = load(_POS_TAGGER)
File "C:\Python27\lib\site-packages\nltk\data.py", line 605, in load
resource_val = pickle.load(_open(resource_url))
File "C:\Python27\lib\site-packages\nltk\data.py", line 686, in _open
return find(path).open()
File "C:\Python27\lib\site-packages\nltk\data.py", line 467, in find
raise LookupError(resource_not_found)
LookupError:
**********************************************************************
Resource 'taggers/maxent_treebank_pos_tagger/english.pickle' not
found. Please use the NLTK Downloader to obtain the resource:
However, I get an error message which shows:
english.pickle not found.
I have downloaded the whole corpora, and the english.pickle file is there in maxent_treebank_pos_tagger.
What can I do to get this to work?
Your Python installation is not able to reach maxent or treebank.
First, check if the tagger is indeed there:
Start Python from the command line.
>>> import nltk
Then you can check using
>>> dir(nltk)
Look through the list to see if maxent and treebank are both there.
Easier would be to type
>>> "maxent" in dir(nltk)
>>> True
>>> "treebank" in dir(nltk)
>>> True
Use nltk.download() --> Models tab and check to see if the treebank tagger shows as installed.
You should also try downloading the tagger again.
If you don't want to use the downloader gui, you can just use the following commands in a python or ipython shell:
import nltk
nltk.download('punkt')
nltk.download('maxent_treebank_pos_tagger')
Over 50 corpora and lexical resources such as WordNet are available for free at http://www.nltk.org/nltk_data/.
If the download fails with "Google Code 401: Authorization Required", use http://nltk.github.com/nltk_data/ as the server index instead of the googlecode one.
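Putting it together, a minimal sketch of the flow from the question once the tagger data is installed (newer NLTK releases use the averaged_perceptron_tagger resource instead of maxent_treebank_pos_tagger):
import nltk

nltk.download('punkt')                        # tokenizer models
nltk.download('maxent_treebank_pos_tagger')   # tagger resource on older NLTK releases

text = nltk.word_tokenize("And now for something completely different")
print(nltk.pos_tag(text))
# e.g. [('And', 'CC'), ('now', 'RB'), ('for', 'IN'), ('something', 'NN'), ...]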