I want to train EfficientDet with PyTorch Lightning, but I have a problem.
Most of the issues I've run into so far were version errors, so I matched the versions the model requires:
from effdet.config.model_config import efficientdet_model_param_dict
from effdet import get_efficientdet_config, EfficientDet, DetBenchTrain
from effdet.efficientdet import HeadNet
When I import effdet, I get a runtime error like this:
RuntimeError:
object has no attribute nms:
File "/home/ubuntu/anaconda3/envs/pytorch1.7.1_p37/lib/python3.7/site-packages/torchvision/ops/boxes.py", line 35
"""
_assert_has_ops()
return torch.ops.torchvision.nms(boxes, scores, iou_threshold)
~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
'nms' is being compiled since it was called from '_batched_nms_vanilla'
# Ideally for GPU we'd use a higher threshold
# keep only top max_det_per_image scoring predictions
batch_detections.append(detections)
return torch.stack(batch_detections, dim=0)
How can I solve this problem? I want to get to modeling quickly...
Also, this is my version status:
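For reference, a minimal sketch for printing the relevant versions (the expected pairing noted in the comments is an assumption based on the conda env name in the traceback):
import torch
import torchvision

# The torch.ops.torchvision.nms error above usually means torch and
# torchvision come from incompatible releases.
print(torch.__version__)        # e.g. 1.7.1, per the env name in the traceback
print(torchvision.__version__)  # the matching release for torch 1.7.1 is 0.8.2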
I'm using Tensorboard to see the progress of the PettingZoo environment that my agents are playing. I can see the reward go up with time, which is good, but I'd like to add other metrics that are specific to my environment. i.e. I'd like TensorBoard to show me more charts with my metrics and how they improve over time.
The only way I could figure out how to do that was by inserting a few lines into the learn method of OnPolicyAlgorithm that's part of SB3. This works and I got the charts I wanted:
(The two bottom charts are the ones I added.)
But obviously editing library code isn't a good practice. I should make the modifications in my own code, not in the libraries. Is there currently a more elegant way to add a metric from my PettingZoo environment into TensorBoard?
You can add a callback to add your own logs. See the example below. In this case the callback is called every step; there are other callbacks you can use depending on your use case.
import numpy as np
from stable_baselines3 import SAC
from stable_baselines3.common.callbacks import BaseCallback

class TensorboardCallback(BaseCallback):
    """
    Custom callback for plotting additional values in tensorboard.
    """

    def __init__(self, verbose=0):
        super().__init__(verbose)

    def _on_step(self) -> bool:
        # Log a scalar value (here a random variable stands in for your metric)
        value = np.random.random()
        self.logger.record('random_value', value)
        return True

model = SAC("MlpPolicy", "Pendulum-v1", tensorboard_log="/tmp/sac/", verbose=1)
model.learn(50000, callback=TensorboardCallback())
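Once training is running, point TensorBoard at the log directory configured above (tensorboard --logdir /tmp/sac/) and the random_value chart will appear alongside the built-in reward curves.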
I'm trying to pickle functions with dill. I want to include the whole function and not just a reference to it. Here are my two files:
fun.py:
import dill
from foo import ppp
def qqq(me):
    return me + 1
print(dill.dumps(ppp, protocol=4, recurse=True, byref=True))
print(dill.dumps(qqq, protocol=4, recurse=True, byref=True))
And foo.py
def ppp(me):
    return me + 1
When I run fun.py I get the following output:
b'\x80\x04\x95\x0f\x00\x00\x00\x00\x00\x00\x00\x8c\x03foo\x94\x8c\x03ppp\x94\x93\x94.'
b'\x80\x04\x95\x90\x00\x00\x00\x00\x00\x00\x00\x8c\ndill._dill\x94\x8c\x10_create_function\x94\x93\x94(h\x00\x8c\n_load_type\x94\x93\x94\x8c\x08CodeType\x94\x85\x94R\x94(K\x01K\x00K\x01K\x02KCC\x08|\x00d\x01\x17\x00S\x00\x94NK\x01\x86\x94)\x8c\x02me\x94\x85\x94\x8c\x06fun.py\x94\x8c\x03qqq\x94K\x04C\x02\x00\x01\x94))t\x94R\x94}\x94h\rNN}\x94Nt\x94R\x94.'
I want to be able to make the first line of output be more similar to the second line, and actually encapsulate the function without the need for a context when reloaded later. Is there a way to do this?
Thanks so much!
James
If the module (foo) is installed on both computers, then there should be no need to do anything but import the function. So, I'll assume the question is in regard to a module that is only installed on the first machine.
It also depends on whether the module foo is "installed" on sys.path or is just available in the current directory. See:
https://github.com/uqfoundation/dill/issues/123
If it's only available in the current directory, either use dill to pickle the file itself or use something like dill.source.getsource to extract the source of the module as a string, and then transfer the string as a "pickle" (this is what ppft does).
Generally, however, dill pickles imported functions by reference, and thus assumes they are available on both sides of the load/dump.
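For the current-directory case, here is a minimal sketch of the getsource approach (the names foo and ppp follow the files above; how you transfer the string between machines is left out):
import dill.source
import foo

# Grab the module's source as a plain string (roughly what ppft does).
src = dill.source.getsource(foo)

# ...transfer src to the other machine as an ordinary string...

# On the receiving side, execute the source into a fresh namespace
# and pull the function back out.
namespace = {}
exec(src, namespace)
ppp = namespace["ppp"]
print(ppp(1))  # 2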
Recently I tried a model implemented with CNTK, but I can't find a way to run prediction on a new picture with the trained model.
The trained model was saved as a checkpoint:
trainer.save_checkpoint(os.path.join(output_model_folder, "model_{}".format(best_epoch)))
Then I got some files like:
So, I tried to load this model checkpoint like:
model = ct.load_model('../data/models/VGG13_majority/model_94')
The code above runs successfully. Then I tried
model.eval(image_data)
but I got an error:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ update ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This time I tried the method below:
model = ct.load_model('../data/models/VGG13_majority/model_94')
model.eval({model.arguments[0]: [final_image]})
Then a new error was raised:
For any C.Function.eval() you need to pass a dictionary as the argument.
So it will go something like this, assuming that you only have one input_variable into the model:
model = C.load_model()
model.eval({model.arguments[0]: image_data})
Anyhow, I noticed that you saved the model from the checkpoint. By doing so, you actually saved the "ground_truth" input_variable to the loss function too.
I would recommend saving the model directly next time. Usually the files from save_checkpoint() are meant to be used with restore_from_checkpoint():
import cntk as C
from cntk.layers import Dense
model = Dense(10)(C.input_variable(1))
loss = C.binary_cross_entropy(model, C.input_variable(10))
trainer = C.Trainer(model, (loss,), [C.adam(model.parameters, 0.9, 0.9)])
trainer.save_checkpoint("hello")
model.save("hello.model")  # use this to save the model directly
# to recover model from checkpoint use below
trainer.restore_from_checkpoint("hello")
original_model = trainer.model
print(trainer)
for i in trainer.model.arguments:
    print(i)
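Then, to predict on new data with the directly-saved model, the eval pattern from above applies. A minimal sketch (the file name and input shape follow the toy Dense example, not your VGG13 model):
import numpy as np
import cntk as C

# Load the directly-saved model; no loss or ground-truth inputs are attached.
model = C.load_model("hello.model")

# Dummy input matching input_variable(1) from the toy example above.
final_image = np.array([[0.5]], dtype=np.float32)

# eval takes a dict mapping the model's input variable to the data.
output = model.eval({model.arguments[0]: final_image})
print(output)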
I am having trouble accessing the coefficients of a support vector regression model (SVR) in scikit-learn when the model is embedded in a pipeline and a grid search.
Consider the following example:
from sklearn.datasets import load_iris
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR
from sklearn.feature_selection import SelectKBest
from sklearn.pipeline import Pipeline
iris = load_iris()
X_train = iris.data
y_train = iris.target
clf = SVR(kernel='linear')
select = SelectKBest(k=2)
steps = [('feature_selection', select), ('svr', clf)]
pipeline = Pipeline(steps)
grid = GridSearchCV(pipeline, param_grid={"svr__C": [1, 10, 100], "svr__gamma": np.logspace(-2, 2)})
grid.fit(X_train, y_train)
This seems to work fine, but when I try to access the coefficients of the best-fitting model
grid.best_estimator_.coef_
I get an error message: AttributeError: 'Pipeline' object has no attribute 'coef_'.
I also tried to access the individual steps of the pipeline:
pipeline.named_steps['svr']
but could not find the coefficients there.
Just happened to come across the same problem and this post
had the answer:
grid.best_estimator_ contains an instance of the pipeline, which consists of steps. The last step should always be the estimator, so you should always find the coefficients at:
grid.best_estimator_.steps[-1][1].coef_
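Equivalently, if you prefer addressing the step by name rather than by position, a short sketch using the step name from the pipeline definition above:
# 'svr' is the name given to the estimator step in the pipeline definition.
best_svr = grid.best_estimator_.named_steps['svr']
print(best_svr.coef_)       # coefficients of the fitted linear SVR
print(best_svr.intercept_)  # its intercept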
I need to use EqualOps (===) from scalaz, but importing scalaz.Scalaz._ gives me a naming conflict with anorm's get method.
Here is the compilation error:
reference to get is ambiguous;
[error] it is imported twice in the same scope by
[error] import scalaz.Scalaz._
[error] and import anorm.SqlParser._
How can I bring === into scope without causing the conflict with anorm?
Remove import scalaz.Scalaz._
Assuming you are comparing primitives,
import scalaz._
import std.anyVal._
import syntax.equal._
If it is something else, say strings, replace std.anyVal._ with std.string._.
Essentially, the first line gives you the various scalaz types (if you don't want this, replace std with scalaz.std, and syntax with scalaz.syntax).
Line 2 gives you the implicit conversions for the primitives. That lets you treat primitives as Equal, or indeed as any of the other scalaz typeclasses (Monoid, etc.).
Line 3 gives you EqualOps, which enables you to use the === syntax with things that can be Equal.
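Putting it together, a minimal sketch assuming you are comparing Ints:
import scalaz._
import std.anyVal._
import syntax.equal._

object EqualExample extends App {
  // === comes from EqualOps; both sides must share a type with an Equal instance.
  println(1 === 1)   // true
  println(1 =/= 2)   // true
  // println(1 === "1")  // does not compile: the types differ
}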
Hope that helps