I am currently working my way through caffe/examples/ to learn more about caffe/pycaffe.
In the 02-fine-tuning.ipynb notebook there is a code cell that shows how to create a caffenet that takes unlabeled "dummy data" as input, allowing us to set its input images externally. The notebook can be found here:
https://github.com/BVLC/caffe/blob/master/examples/02-fine-tuning.ipynb
The following code cell throws an error:
dummy_data = L.DummyData(shape=dict(dim=[1, 3, 227, 227]))
imagenet_net_filename = caffenet(data=dummy_data, train=False)
imagenet_net = caffe.Net(imagenet_net_filename, weights, caffe.TEST)
error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-9f0ecb4d95e6> in <module>()
1 dummy_data = L.DummyData(shape=dict(dim=[1, 3, 227, 227]))
----> 2 imagenet_net_filename = caffenet(data=dummy_data, train=False)
3 imagenet_net = caffe.Net(imagenet_net_filename, weights, caffe.TEST)
<ipython-input-5-53badbea969e> in caffenet(data, label, train, num_classes, classifier_name, learn_all)
68 # write the net to a temporary file and return its filename
69 with tempfile.NamedTemporaryFile(delete=False) as f:
---> 70 f.write(str(n.to_proto()))
71 return f.name
~/anaconda3/envs/testcaffegpu/lib/python3.6/tempfile.py in func_wrapper(*args, **kwargs)
481 @_functools.wraps(func)
482 def func_wrapper(*args, **kwargs):
--> 483 return func(*args, **kwargs)
484 # Avoid closing the file as long as the wrapper is alive,
485 # see issue #18879.
TypeError: a bytes-like object is required, not 'str'
Does anyone know how to do this correctly?
tempfile.NamedTemporaryFile() opens the file in binary mode ('w+b') by default. In Python 3.x, str and bytes are distinct types (unlike in Python 2.x), so passing a str to f.write() raises an error because the file expects bytes. Opening the file in text mode instead avoids this error.
Replace
with tempfile.NamedTemporaryFile(delete=False) as f:
with
with tempfile.NamedTemporaryFile(delete=False, mode='w') as f:
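Alternatively, you can keep the default binary mode and encode the string to bytes yourself. A minimal sketch, assuming n is the NetSpec built inside the notebook's caffenet function:
import tempfile
# keep the default binary mode and encode the prototxt text to bytes
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(str(n.to_proto()).encode('utf-8'))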
This has been explained in a previous post:
TypeError: 'str' does not support the buffer interface
I'm writing code that classifies numbers using PyTorch:
epochess = []
train_losses = []
test_losses = []
acc_training = []
acc_testing = []
for epoch in range(epochs):
    train_acc, train_epoch_loss = train_CNN(model, loss_function, optimizer, train_load, device)
    print('epoch', epoch, 'training loss', train_epoch_loss)
    train_losses.append(train_epoch_loss)
    print('epoch', epoch, 'training accuracy', train_acc)
    acc_training.append(train_acc)
    test_acc, test_epoch_loss = validate_CNN(model, loss_function, test_load, device)
    print('epoch', epoch, 'testing loss', test_epoch_loss)
    test_losses.append(test_epoch_loss)
    print('epoch', epoch, 'testing accuracy', test_acc)
    acc_testing.append(test_acc)
    epochess.append(epoch)
and I get an error, even though I was following the tutorial exactly as shown on YouTube.
Here is the error:
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
<ipython-input-17-0bcb51ebbc3d> in <module>
5 acc_testing = []
6 for epoch in range (epochs):
----> 7 train_acc, train_epoch_loss = train_CNN(model,loss_function, optimizer, train_load, device)
8 print('epoch',epoch ,'training loss',train_epoch_loss)
9 train_losses.append(train_epoch_loss)
2 frames
/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py in _forward_unimplemented(self, *input)
242 registered hooks while the latter silently ignores them.
243 """
--> 244 raise NotImplementedError(f"Module [{type(self).__name__}] is missing the required \"forward\" function")
245
246
NotImplementedError: Module [CNN] is missing the required "forward" function
How did you implement your model?
Did you use a built-in model from PyTorch, or did you create a custom model?
If you created a custom model, make sure it subclasses torch.nn.Module and defines a forward method, and that you use the right components from PyTorch (e.g. torch.nn.Linear and torch.nn.Conv2d, see https://pytorch.org/tutorials/beginner/introyt/modelsyt_tutorial.html); otherwise PyTorch complains about missing functions, as is happening in your case.
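For illustration, here is a minimal custom model that does define the required forward method. The architecture and layer sizes here are made up, not taken from the question:
import torch
import torch.nn as nn

class CNN(nn.Module):
    def __init__(self):
        super().__init__()
        # hypothetical layers; replace with your own architecture
        self.conv = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)
        self.fc = nn.Linear(16 * 14 * 14, 10)

    # this is the method PyTorch reports as missing in your error
    def forward(self, x):
        x = self.pool(torch.relu(self.conv(x)))
        x = x.flatten(1)  # flatten everything except the batch dimension
        return self.fc(x)

model = CNN()
out = model(torch.randn(8, 1, 28, 28))  # works because forward is defined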
I'm relatively new to Python and machine learning. I'm trying to classify chest X-ray scans, and I created a model that does that. However, when I try to save the model, I get this error:
TypeError: Unable to serialize [2.0896919 2.1128857 2.1081853] to JSON. Unrecognized type <class 'tensorflow.python.framework.ops.EagerTensor'>.
The full Stack Trace is:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_22092\3924096485.py in <module>
1 working_dir=os.getcwd()
2 subject='chest scans'
----> 3 save_model(subject, classes, img_size, f1score, working_dir)
~\AppData\Local\Temp\ipykernel_22092\541087647.py in save_model(subject, classes, img_size, f1score, working_dir)
3 save_id=f'{name}-{f1score:5.2f}.h5'
4 model_save_loc=os.path.join(working_dir, save_id)
----> 5 model.save(model_save_loc)
6 msg= f'model was saved as {model_save_loc}'
7 print_in_color(msg, (0,255,255), (100,100,100)) # cyan foreground
~\anaconda3\lib\site-packages\keras\utils\traceback_utils.py in error_handler(*args, **kwargs)
68 # To get the full stack trace, call:
69 # `tf.debugging.disable_traceback_filtering()`
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb
~\anaconda3\lib\json\__init__.py in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw)
232 if cls is None:
233 cls = JSONEncoder
--> 234 return cls(
235 skipkeys=skipkeys, ensure_ascii=ensure_ascii,
236 check_circular=check_circular, allow_nan=allow_nan, indent=indent,
~\anaconda3\lib\json\encoder.py in encode(self, o)
197 # exceptions aren't as detailed. The list call should be roughly
198 # equivalent to the PySequence_Fast that ''.join() would do.
--> 199 chunks = self.iterencode(o, _one_shot=True)
200 if not isinstance(chunks, (list, tuple)):
201 chunks = list(chunks)
~\anaconda3\lib\json\encoder.py in iterencode(self, o, _one_shot)
255 self.key_separator, self.item_separator, self.sort_keys,
256 self.skipkeys, _one_shot)
--> 257 return _iterencode(o, 0)
258
259 def _make_iterencode(markers, _default, _encoder, _indent, _floatstr,
TypeError: Unable to serialize [2.0896919 2.1128857 2.1081853] to JSON. Unrecognized type <class 'tensorflow.python.framework.ops.EagerTensor'>.
I have created this function to save the model:
def save_model(subject, classes, img_size, f1score, working_dir):
    name = subject + '-' + str(len(classes)) + '-(' + str(img_size[0]) + ' X ' + str(img_size[1]) + ')'
    save_id = f'{name}-{f1score:5.2f}.h5'
    model_save_loc = os.path.join(working_dir, save_id)
    model.save(model_save_loc)
    msg = f'model was saved as {model_save_loc}'
    print_in_color(msg, (0, 255, 255), (100, 100, 100))  # cyan foreground
I have the following TensorFlow-related packages installed:
tensorboard 2.8.0
tensorboard-data-server 0.6.1
tensorboard-plugin-wit 1.8.1
tensorflow 2.8.1
tensorflow-estimator 2.8.0
tensorflow-io-gcs-filesystem 0.28.0
termcolor 1.1.0
keras 2.8.0
keras-nightly 2.5.0.dev2021032900
Keras-Preprocessing 1.1.2
Any help would be great! I have not been able to convert the TensorFlow EagerTensor to JSON for my ML model. Thanks!
I tried to save the ML model I created, but it fails with this error while the model is being serialized to JSON.
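No confirmed fix was posted for this question, but since the error shows a raw EagerTensor reaching the JSON encoder, one common workaround is to convert any tensor to plain Python floats before it ends up somewhere Keras serializes on model.save() (for example a custom layer's config). A minimal sketch, with a made-up tensor mirroring the one in the error message:
import tensorflow as tf

# hypothetical tensor like the one quoted in the error message
class_weights = tf.constant([2.0896919, 2.1128857, 2.1081853])

# .numpy().tolist() yields plain Python floats, which JSON can handle
serializable = class_weights.numpy().tolist()
print(serializable)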
I'm trying to load a transformer model with SentenceTransformer. Below is the code:
# Now we create a SentenceTransformer model from scratch
word_emb = models.Transformer('paraphrase-mpnet-base-v2')
pooling = models.Pooling(word_emb.get_word_embedding_dimension())
model = SentenceTransformer(modules=[word_emb, pooling])
Below is the error:
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_2948\3254427654.py in <module>
1 # Now we create a SentenceTransformer model from scratch
----> 2 word_emb = models.Transformer('paraphrase-mpnet-base-v2')
3 pooling = models.Pooling(word_emb.get_word_embedding_dimension())
4 model = SentenceTransformer(modules=[word_emb, pooling])
~\miniconda3\envs\atoti\lib\site-packages\sentence_transformers\models\Transformer.py in __init__(self, model_name_or_path, max_seq_length, model_args, cache_dir, tokenizer_args, do_lower_case, tokenizer_name_or_path)
27
28 config = AutoConfig.from_pretrained(model_name_or_path, **model_args, cache_dir=cache_dir)
---> 29 self._load_model(model_name_or_path, config, cache_dir)
30
31 self.tokenizer = AutoTokenizer.from_pretrained(tokenizer_name_or_path if tokenizer_name_or_path is not None else model_name_or_path, cache_dir=cache_dir, **tokenizer_args)
~\miniconda3\envs\atoti\lib\site-packages\sentence_transformers\models\Transformer.py in _load_model(self, model_name_or_path, config, cache_dir)
47 self._load_t5_model(model_name_or_path, config, cache_dir)
48 else:
---> 49 self.auto_model = AutoModel.from_pretrained(model_name_or_path, config=config, cache_dir=cache_dir)
50
51 def _load_t5_model(self, model_name_or_path, config, cache_dir):
~\miniconda3\envs\atoti\lib\site-packages\transformers\models\auto\auto_factory.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
445 elif type(config) in cls._model_mapping.keys():
446 model_class = _get_model_class(config, cls._model_mapping)
--> 447 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
448 raise ValueError(
449 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
~\miniconda3\envs\atoti\lib\site-packages\transformers\modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1310 elif os.path.join(pretrained_model_name_or_path, FLAX_WEIGHTS_NAME):
1311 raise EnvironmentError(
-> 1312 f"Error no file named {WEIGHTS_NAME} found in directory {pretrained_model_name_or_path} but "
1313 "there is a file for Flax weights. Use `from_flax=True` to load this model from those "
1314 "weights."
OSError: Error no file named pytorch_model.bin found in directory paraphrase-mpnet-base-v2 but there is a file for Flax weights. Use `from_flax=True` to load this model from those weights.
I'm using the versions below:
transformers==4.16.2
torch==1.11.0+cu113
torchaudio==0.11.0+cu113
torchvision==0.12.0+cu113
sentence-transformers==2.2.0
faiss-cpu==1.7.2
sentencepiece==0.1.96
It's been 2 months since I ran this; all of a sudden, it's returning an error. I'm using FAISS-CPU as well.
The error is telling you: "I can't find the weights of the model you are trying to load."
Based on the error trace, I assume you are using the models module from the Sentence-Transformers library (correct me if I am wrong). One thing to note is that Sentence-Transformers only offers the following paraphrase models as pretrained models:
paraphrase-multilingual-mpnet-base-v2
paraphrase-albert-small-v2
paraphrase-multilingual-MiniLM-L12-v2
paraphrase-MiniLM-L3-v2
hence the one you want to load is not among Sentence-Transformers' pretrained models.
That leads me to think you are trying to load a model from your local machine.
I would suggest creating a Sentence-Transformers model like this:
from sentence_transformers import SentenceTransformer
model_path_or_name = "path/to/model" # A folder that contains model config files, including pytorch_model.bin
model = SentenceTransformer(model_path_or_name)
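Alternatively, since the error message itself points at Flax weights, you could convert those weights to PyTorch once and then load the folder normally. A sketch using the plain transformers API, assuming paraphrase-mpnet-base-v2 is a local folder as the traceback suggests (this also requires the flax package to be installed):
from transformers import AutoModel

# load the Flax weights and convert them to PyTorch in memory
model = AutoModel.from_pretrained('paraphrase-mpnet-base-v2', from_flax=True)

# write pytorch_model.bin next to the existing config files so that
# sentence_transformers can find it afterwards
model.save_pretrained('paraphrase-mpnet-base-v2')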
There is also a possibility that the pytorch_model.bin file was downloaded under another filename, as mentioned in the SO thread here.
Let me know if this solves your problem. Cheers.
I am using Test Time Augmentation (TTA) during inference, like so:
file_path = '/path/to/file.jpg'
dl = learn_.dls.test_dl([file_path])
pred, _ = learn_.tta(dl=dl, n=N_IMAGES)
When I try to add additional transformations of my choice, I am unable to do so.
If I add transforms using either the item_tfms or batch_tfms parameters, following the docs, like this:
pred, _ = learn_.tta(dl=dl,
n=N_IMAGES,
item_tfms=Resize(256),
batch_tfms=Zoom(p=1, draw=2.0))
I get this error:
TypeError: default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found <class 'fastai.vision.core.PILImage'>
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-8-86c798126984> in <module>()
1 # tta
2 dl = learn_.dls.test_dl([file_path])
----> 3 pred, _ = learn_.tta(dl=dl, n=N_IMAGES, item_tfms=Resize(256), batch_tfms=Zoom(p=1, draw=2.0))
4 cat = learn_.dls.vocab[torch.argmax(pred).item()]
5 cat.lstrip()
9 frames
/usr/local/lib/python3.7/dist-packages/torch/_utils.py in reraise(self)
423 # have message field
424 raise self.exc_type(message=msg)
--> 425 raise self.exc_type(msg)
426
427
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 34, in fetch
data = next(self.dataset_iter)
File "/usr/local/lib/python3.7/dist-packages/fastai/data/load.py", line 118, in create_batches
yield from map(self.do_batch, self.chunkify(res))
File "/usr/local/lib/python3.7/dist-packages/fastai/data/load.py", line 144, in do_batch
def do_batch(self, b): return self.retain(self.create_batch(self.before_batch(b)), b)
File "/usr/local/lib/python3.7/dist-packages/fastai/data/load.py", line 143, in create_batch
def create_batch(self, b): return (fa_collate,fa_convert)[self.prebatched](b)
File "/usr/local/lib/python3.7/dist-packages/fastai/data/load.py", line 50, in fa_collate
else type(t[0])([fa_collate(s) for s in zip(*t)]) if isinstance(b, Sequence)
File "/usr/local/lib/python3.7/dist-packages/fastai/data/load.py", line 50, in <listcomp>
else type(t[0])([fa_collate(s) for s in zip(*t)]) if isinstance(b, Sequence)
File "/usr/local/lib/python3.7/dist-packages/fastai/data/load.py", line 51, in fa_collate
else default_collate(t))
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/collate.py", line 86, in default_collate
raise TypeError(default_collate_err_msg_format.format(elem_type))
TypeError: default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found <class 'fastai.vision.core.PILImage'>
Is there any way I can use additional transformations during inference time with tta?
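No answer was posted here, but one hedged guess based on the trace: a raw PILImage is reaching default_collate, which suggests that passing item_tfms/batch_tfms to tta replaces the default pipeline steps that convert images to tensors. A sketch of that workaround, re-adding the conversion transforms explicitly (this placement of ToTensor and IntToFloatTensor is an assumption, not confirmed against the fastai docs):
from fastai.vision.all import *

# re-add the tensor conversion steps that the default pipelines normally supply
pred, _ = learn_.tta(dl=dl,
                     n=N_IMAGES,
                     item_tfms=[Resize(256), ToTensor()],
                     batch_tfms=[IntToFloatTensor(), Zoom(p=1, draw=2.0)])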
My dask dataframe has about 120 million rows and 4 columns:
df_final.dtypes
cust_id int64
score float64
total_qty float64
update_score float64
dtype: object
and I'm running this operation in a Jupyter notebook connected to a Linux machine:
%time df_final.to_csv('/path/claritin-files-*.csv')
and it throws this error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-24-46468ae45023> in <module>()
----> 1 get_ipython().magic(u"time df_final.to_csv('path/claritin-files-*.csv')")
/home/mspra/anaconda2/lib/python2.7/site-packages/IPython/core/interactiveshell.pyc in magic(self, arg_s)
2334 magic_name, _, magic_arg_s = arg_s.partition(' ')
2335 magic_name = magic_name.lstrip(prefilter.ESC_MAGIC)
-> 2336 return self.run_line_magic(magic_name, magic_arg_s)
2337
2338 #-------------------------------------------------------------------------
/home/mspra/anaconda2/lib/python2.7/site-packages/IPython/core/interactiveshell.pyc in run_line_magic(self, magic_name, line)
2255 kwargs['local_ns'] = sys._getframe(stack_depth).f_locals
2256 with self.builtin_trap:
-> 2257 result = fn(*args,**kwargs)
2258 return result
2259
/home/mspra/anaconda2/lib/python2.7/site-packages/IPython/core/magics/execution.pyc in time(self, line, cell, local_ns)
/home/mspra/anaconda2/lib/python2.7/site-packages/IPython/core/magic.pyc in <lambda>(f, *a, **k)
191 # but it's overkill for just that one bit of state.
192 def magic_deco(arg):
--> 193 call = lambda f, *a, **k: f(*a, **k)
194
195 if callable(arg):
/home/mspra/anaconda2/lib/python2.7/site-packages/IPython/core/magics/execution.pyc in time(self, line, cell, local_ns)
1161 if mode=='eval':
1162 st = clock2()
-> 1163 out = eval(code, glob, local_ns)
1164 end = clock2()
1165 else:
<timed eval> in <module>()
/home/mspra/anaconda2/lib/python2.7/site-packages/dask/dataframe/core.pyc in to_csv(self, filename, **kwargs)
936 """ See dd.to_csv docstring for more information """
937 from .io import to_csv
--> 938 return to_csv(self, filename, **kwargs)
939
940 def to_delayed(self):
/home/mspra/anaconda2/lib/python2.7/site-packages/dask/dataframe/io/csv.pyc in to_csv(df, filename, name_function, compression, compute, get, **kwargs)
411 if compute:
412 from dask import compute
--> 413 compute(*values, get=get)
414 else:
415 return values
/home/mspra/anaconda2/lib/python2.7/site-packages/dask/base.pyc in compute(*args, **kwargs)
177 dsk = merge(var.dask for var in variables)
178 keys = [var._keys() for var in variables]
--> 179 results = get(dsk, keys, **kwargs)
180
181 results_iter = iter(results)
/home/mspra/anaconda2/lib/python2.7/site-packages/dask/threaded.pyc in get(dsk, result, cache, num_workers, **kwargs)
74 results = get_async(pool.apply_async, len(pool._pool), dsk, result,
75 cache=cache, get_id=_thread_get_id,
---> 76 **kwargs)
77
78 # Cleanup pools associated to dead threads
/home/mspra/anaconda2/lib/python2.7/site-packages/dask/async.pyc in get_async(apply_async, num_workers, dsk, result, cache, get_id, raise_on_exception, rerun_exceptions_locally, callbacks, dumps, loads, **kwargs)
491 _execute_task(task, data) # Re-execute locally
492 else:
--> 493 raise(remote_exception(res, tb))
494 state['cache'][key] = res
495 finish_task(dsk, key, state, results, keyorder.get)
ValueError: invalid literal for long() with base 10: 'total_qty'
Traceback
---------
File "/home/mspra/anaconda2/lib/python2.7/site-packages/dask/async.py", line 268, in execute_task
result = _execute_task(task, data)
File "/home/mspra/anaconda2/lib/python2.7/site-packages/dask/async.py", line 249, in _execute_task
return func(*args2)
File "/home/mspra/anaconda2/lib/python2.7/site-packages/dask/dataframe/io/csv.py", line 55, in pandas_read_text
coerce_dtypes(df, dtypes)
File "/home/mspra/anaconda2/lib/python2.7/site-packages/dask/dataframe/io/csv.py", line 83, in coerce_dtypes
df[c] = df[c].astype(dtypes[c])
File "/home/mspra/anaconda2/lib/python2.7/site-packages/pandas/core/generic.py", line 3054, in astype
raise_on_error=raise_on_error, **kwargs)
File "/home/mspra/anaconda2/lib/python2.7/site-packages/pandas/core/internals.py", line 3189, in astype
return self.apply('astype', dtype=dtype, **kwargs)
File "/home/mspra/anaconda2/lib/python2.7/site-packages/pandas/core/internals.py", line 3056, in apply
applied = getattr(b, f)(**kwargs)
File "/home/mspra/anaconda2/lib/python2.7/site-packages/pandas/core/internals.py", line 461, in astype
values=values, **kwargs)
File "/home/mspra/anaconda2/lib/python2.7/site-packages/pandas/core/internals.py", line 504, in _astype
values = _astype_nansafe(values.ravel(), dtype, copy=True)
File "/home/mspra/anaconda2/lib/python2.7/site-packages/pandas/types/cast.py", line 534, in _astype_nansafe
return lib.astype_intsafe(arr.ravel(), dtype).reshape(arr.shape)
File "pandas/lib.pyx", line 980, in pandas.lib.astype_intsafe (pandas/lib.c:17409)
File "pandas/src/util.pxd", line 93, in util.set_value_at_unsafe (pandas/lib.c:72777)
I have a couple of questions:
1) First of all, this export was working fine on Friday: it spit out 100 CSV files (since the dataframe has 100 partitions), which I later aggregated. So what is wrong today? Is there anything telling in the error log?
2) Maybe this question is for the creators of this package: what is the most time-efficient way to get a CSV extract out of a dask dataframe of this size? It was taking about 1.5 to 2 hours the last time it worked.
I'm not using dask distributed, and this is on a single core of a Linux cluster.
This error likely has little to do with to_csv and more to do with something else in your computation. The call to df.to_csv was just the first time you forced the computation to roll through all of the data.
Given the error I actually suspect that this is failing in read_csv. Dask.dataframe read the first few hundred kilobytes of your first file to guess at the datatypes, but it seems to have guessed incorrectly. You might want to try specifying dtypes explicitly in the read_csv call.
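For example, a minimal sketch of pinning the dtypes up front; the input path is made up, and the column names are taken from the dtypes listed in the question:
import dask.dataframe as dd

# declare the dtypes instead of letting dask guess them from a sample
df = dd.read_csv('/path/to/input-*.csv',
                 dtype={'cust_id': 'int64',
                        'score': 'float64',
                        'total_qty': 'float64',
                        'update_score': 'float64'})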
Regarding the second question about writing to CSV quickly, my first answer would be "use Parquet or HDF5 instead". They're much faster and more accurate in almost every respect.
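A minimal sketch of the Parquet route, with a made-up output path (this needs a parquet engine such as fastparquet or pyarrow installed):
# one call writes the whole dataframe, one file per partition
df_final.to_parquet('/path/claritin-parquet/')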