How to disable model/weight serialization fully with AllenNLP settings?

I wish to disable serializing all model/state weights in standard AllenNLP model training, configured via jsonnet config files.
The reason for this is that I am running automatic hyperparameter optimization using Optuna.
Testing dozens of models fills up a drive pretty quickly.
I have already disabled the checkpointer by setting num_serialized_models_to_keep to 0:
trainer +: {
  checkpointer +: {
    num_serialized_models_to_keep: 0,
  },
},
I do not wish to set serialization_dir to None, as I still want the default behaviour regarding logging of intermediate metrics, etc. I only want to disable the writing of model state, training state, and best-model weights.
Besides the option I set above, are there any default trainer or checkpointer options to disable all serialization of model weights? I checked the API docs and webpage but could not find any.
If I need to implement such an option myself, which base function(s) from AllenNLP should I override in my Model subclass?
Alternatively, is there any utility for cleaning up intermediate model state once training has concluded?
EDIT: @petew's answer shows the solution for a custom checkpointer, but I am not clear on how to make this code findable to allennlp train for my use case.
I wish to make the custom_checkpointer callable from a config file as below:
trainer +: {
  checkpointer +: {
    type: 'empty',
  },
},
What would be best practice to load the checkpointer when calling allennlp train --include-package <$my_package>?
I have my_package with submodules in subdirectories such as my_package/models and my_package/training.
I would like to place the custom checkpointer code in my_package/training/custom_checkpointer.py
My main model is located in my_package/models/main_model.py.
Do I have to edit or import any code/functions in my main_model class to use the custom checkpointer?

You could create and register a custom Checkpointer that basically just does nothing:
from typing import Any, Dict, Optional, Tuple, Union

from allennlp.common.registrable import Registrable
from allennlp.training.checkpointer import Checkpointer

@Checkpointer.register("empty")
class EmptyCheckpointer(Registrable):
    def maybe_save_checkpoint(
        self, trainer: "allennlp.training.trainer.Trainer", epoch: int, batches_this_epoch: int
    ) -> None:
        pass

    def save_checkpoint(
        self,
        epoch: Union[int, str],
        trainer: "allennlp.training.trainer.Trainer",
        is_best_so_far: bool = False,
        save_model_only=False,
    ) -> None:
        pass

    def find_latest_checkpoint(self) -> Optional[Tuple[str, str]]:
        pass

    def restore_checkpoint(self) -> Tuple[Dict[str, Any], Dict[str, Any]]:
        return {}, {}

    def best_model_state(self) -> Dict[str, Any]:
        return {}
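To make type: 'empty' resolvable from a config file, AllenNLP just needs to import the module that executes the @Checkpointer.register("empty") decorator. --include-package my_package imports your package (and, in recent AllenNLP versions, its submodules recursively); re-exporting the checkpointer module from the package's __init__.py guarantees the registration fires regardless of version. A minimal sketch, assuming the layout from the question, where my_package/training/custom_checkpointer.py holds the class above:

# my_package/__init__.py
# Importing the submodule runs @Checkpointer.register("empty") as a side effect.
from my_package.training import custom_checkpointer

Then train as usual:

allennlp train config.jsonnet -s output_dir --include-package my_package

No changes to my_package/models/main_model.py are needed; registration happens at import time, not through the model class.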


get weights for each layer of the model using OpenVINO API

I'd like to get the tensor of weights and biases for each layer of the network using the C++/Python API on the OpenVINO framework. I found this solution but unfortunately it uses an older API (release 2019 R3.1) that is no longer supported. For example, the class CNNNetReader does not exist in the latest OpenVINO. Does anyone know how to achieve this in new releases? (2020+)
Thanks!
Can you try something like this? First you need to read in a model, then iterate over the nodes in the graph and filter out the Constants (I'm making a copy here because the nodes are held as shared pointers). Each Constant node can then be transformed into a vector of values using the templated method cast_vector<T>; you can figure out which type to use by checking what the Constant holds (using get_element_type).
ov::Core core;
const auto model = core.read_model("path");

std::vector<std::shared_ptr<ov::op::v0::Constant>> weights;
const auto ops = model->get_ops();
std::copy_if(std::begin(ops), std::end(ops), std::back_inserter(weights),
             [](const std::shared_ptr<ov::Node>& op) {
                 return std::dynamic_pointer_cast<ov::op::v0::Constant>(op) != nullptr;
             });

for (const auto& w : weights) {
    if (w->get_element_type() == ov::element::Type_t::f32) {
        const std::vector<float> floats = w->cast_vector<float>();
    } else if (w->get_element_type() == ov::element::Type_t::i32) {
        const std::vector<int32_t> ints = w->cast_vector<int32_t>();
    } else if (...) {
        ...
    }
}
Use the class Core and its read_model method to read the model. Next, use the methods of the Model class to get the details for each layer.
import openvino.runtime as ov
ie = ov.Core()
network = ie.read_model('face-detection-adas-0001.xml')
# print(network.get_ops())
# print(network.get_ordered_ops())
# print(network.get_output_op(0))
# print(network.get_result())
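To pull the actual weight values out in Python, you can filter the ops for Constant nodes and read their contents as numpy arrays. A minimal sketch, assuming a recent openvino.runtime release where Constant exposes get_data() (the method surface has shifted between releases, so check your version):

import openvino.runtime as ov

core = ov.Core()
model = core.read_model('face-detection-adas-0001.xml')

# Weights and biases are stored in the graph as Constant nodes.
for op in model.get_ops():
    if op.get_type_name() == 'Constant':
        data = op.get_data()  # numpy array with the constant's values
        print(op.get_friendly_name(), data.shape, data.dtype)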
Refer to OpenVINO Python API and OpenVINO Runtime C++ API for more information.

How to use AllenNLP Interpret on custom models

I wish to use AllenNLP Interpret for integrated visualization and saliency mapping on a custom transformer model. Can you please tell me how to do that?
It can be done by having AllenNLP wrappers around your custom model. The interpret modules require a Predictor object, so you can write your own, or use an existing one.
Here's an example for a classification model:
from allennlp.data.vocabulary import Vocabulary
from allennlp.models import Model
from allennlp.predictors.text_classifier import TextClassifierPredictor
from allennlp.data.dataset_readers import TextClassificationJsonReader
import torch

class ModelWrapper(Model):
    def __init__(self, vocab, your_model):
        super().__init__(vocab)
        self.your_model = your_model
        self.logits_to_probs = torch.nn.Softmax(dim=-1)
        self.loss = torch.nn.CrossEntropyLoss()

    def forward(self, tokens, label=None):
        if label is not None:
            outputs = self.your_model(tokens, label=label)
        else:
            outputs = self.your_model(tokens)
        probs = self.logits_to_probs(outputs["logits"])
        if label is not None:
            loss = self.loss(outputs["logits"], label)
            outputs["loss"] = loss
        outputs["probs"] = probs
        return outputs
Your custom transformer model may not have an identifiable TextFieldEmbedder. This is the initial embedding layer of your model, against which gradients are calculated for the saliency interpreters. These can be specified by overriding the following methods in the Predictor.
class PredictorWrapper(TextClassifierPredictor):
    def get_interpretable_layer(self):
        # The initial embedding layer for huggingface's `bert-base-uncased`;
        # change the attribute path according to your custom model.
        return self._model.your_model.bert.embeddings.word_embeddings

    def get_interpretable_text_field_embedder(self):
        return self._model.your_model.bert.embeddings.word_embeddings

predictor = PredictorWrapper(model=ModelWrapper(vocab, your_model),
                             dataset_reader=TextClassificationJsonReader())
Now you have an AllenNLP predictor, which can be used with the interpret module as follows:
from allennlp.interpret.saliency_interpreters import SimpleGradient
interpreter = SimpleGradient(predictor)
interpreter.saliency_interpret_from_json({"sentence": "This is a good movie."})
This should give you the gradients with respect to each input token.
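As a quick check that the wrapper behaves before running an interpreter, you can push a plain prediction through the same predictor (the sentence is just an example):

output = predictor.predict(sentence="This is a good movie.")
print(output["probs"])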

Python objects in dealloc in cython

In the docs it is written, that "Any C data that you explicitly allocated (e.g. via malloc) in your __cinit__() method should be freed in your __dealloc__() method."
This is not my case. I have following extension class:
cdef class SomeClass:
    cdef dict data
    cdef void * u_data

    def __init__(self, data_len):
        self.data = {'columns': []}
        if data_len > 0:
            self.data.update({'data': deque(maxlen=data_len)})
        else:
            self.data.update({'data': []})
        self.u_data = <void *>self.data

    @property
    def data(self):
        return self.data

    @data.setter
    def data(self, new_val: dict):
        self.data = new_val
Some C function has access to this class and appends some data to the SomeClass().data dict. What should I write in __dealloc__ when I want to delete the instance of SomeClass()?
Maybe something like:
def __dealloc__(self):
    self.data = None
    free(self.u_data)
Or there is no need to dealloc anything at all?
No, you don't need to, and no, you shouldn't. From the documentation:
You need to be careful what you do in a __dealloc__() method. By the time your __dealloc__() method is called, the object may already have been partially destroyed and may not be in a valid state as far as Python is concerned, so you should avoid invoking any Python operations which might touch the object. In particular, don’t call any other methods of the object or do anything which might cause the object to be resurrected. It’s best if you stick to just deallocating C data.
You don’t need to worry about deallocating Python attributes of your object, because that will be done for you by Cython after your __dealloc__() method returns.
You can confirm this by inspecting the C code (you need to look at the full code, not just the annotated HTML). There's an autogenerated function __pyx_tp_dealloc_9someclass_SomeClass (the name may vary slightly depending on what you called your module) that does a range of things, including:
__pyx_pw_9someclass_9SomeClass_3__dealloc__(o);
/* some other code */
Py_CLEAR(p->data);
where the function __pyx_pw_9someclass_9SomeClass_3__dealloc__ is (a wrapper for) your user-defined __dealloc__. Py_CLEAR will ensure that data is appropriately reference-counted then set to NULL.
It's a little hard to follow because it all goes through several layers of wrappers, but you can confirm that it does what the documentation says.
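For contrast, here is the case the documentation is actually about: a minimal sketch where __dealloc__ is needed, assuming a raw buffer allocated with malloc in __cinit__ (the Buffer class and buf field are illustrative):

from libc.stdlib cimport malloc, free

cdef class Buffer:
    cdef double * buf

    def __cinit__(self, size_t size):
        # C-level allocation: Cython will not free this automatically.
        self.buf = <double *>malloc(size * sizeof(double))
        if self.buf == NULL:
            raise MemoryError()

    def __dealloc__(self):
        # Only touch C data here; Python attributes are cleaned up by Cython.
        free(self.buf)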

How can I register a custom environment in OpenAI's gym?

I have created a custom environment, as per the OpenAI Gym framework, containing step, reset, action, and reward functions. I aim to run the OpenAI baselines on this custom environment, but prior to this the environment has to be registered with OpenAI Gym. How can the custom environment be registered on OpenAI Gym? Also, should I be modifying the OpenAI baselines code to incorporate this?
You do not need to modify baselines repo.
Here is a minimal example. Say you have myenv.py, with all the needed functions (step, reset, ...). The name of the environment class is MyEnv, and you want to add it to the classic_control folder. You have to:
Place myenv.py file in gym/gym/envs/classic_control
Add to __init__.py (located in the same folder)
from gym.envs.classic_control.myenv import MyEnv
Register the environment in gym/gym/envs/__init__.py by adding
gym.envs.register(
    id='MyEnv-v0',
    entry_point='gym.envs.classic_control:MyEnv',
    max_episode_steps=1000,
)
At registration, you can also add reward_threshold and kwargs (if your class takes some arguments).
You can also directly register the environment in the script you will run (TRPO, PPO, or whatever) instead of doing it in gym/gym/envs/__init__.py.
EDIT
This is a minimal example to create the LQR environment.
Save the code below in lqr_env.py and place it in the classic_control folder of gym.
import gym
from gym import spaces
from gym.utils import seeding
import numpy as np

class LqrEnv(gym.Env):
    def __init__(self, size, init_state, state_bound):
        self.init_state = init_state
        self.size = size
        self.action_space = spaces.Box(low=-state_bound, high=state_bound, shape=(size,))
        self.observation_space = spaces.Box(low=-state_bound, high=state_bound, shape=(size,))
        self._seed()

    def _seed(self, seed=None):
        self.np_random, seed = seeding.np_random(seed)
        return [seed]

    def _step(self, u):
        costs = np.sum(u**2) + np.sum(self.state**2)
        self.state = np.clip(self.state + u, self.observation_space.low, self.observation_space.high)
        return self._get_obs(), -costs, False, {}

    def _reset(self):
        high = self.init_state * np.ones((self.size,))
        self.state = self.np_random.uniform(low=-high, high=high)
        self.last_u = None
        return self._get_obs()

    def _get_obs(self):
        return self.state
Add from gym.envs.classic_control.lqr_env import LqrEnv to __init__.py (also in classic_control).
In your script, when you create the environment, do
gym.envs.register(
    id='Lqr-v0',
    entry_point='gym.envs.classic_control:LqrEnv',
    max_episode_steps=150,
    kwargs={'size': 1, 'init_state': 10., 'state_bound': np.inf},
)
env = gym.make('Lqr-v0')
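As a quick sanity check after registration, roll the environment out for a few steps. A minimal sketch, assuming the classic gym API used in this answer, where step returns a 4-tuple:

obs = env.reset()
for _ in range(5):
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)
    print(obs, reward, done)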
The environment registration process can be found here.
Please go through this example custom environment if you have any more issues.
Refer to this stackoverflow issue for further information.
That problem is related to your version of gym; try upgrading your gym installation.

Check if object is an sqlalchemy model instance

I want to know how to tell, given an object, whether it is an instance of an SQLAlchemy mapped model.
Normally, I would use isinstance(obj, DeclarativeBase). However, in this scenario, I do not have the DeclarativeBase class available (it is defined in the depending project).
I would like to know what is the best practice in this case.
class Person(DeclarativeBase):
    __tablename__ = "Persons"

p = Person()
print isinstance(p, DeclarativeBase)
# prints True
# However in my scenario, I do not have the DeclarativeBase available,
# since the DeclarativeBase will be constructed in the depending web app,
# while my code will act as a library that will be imported into the web app.
# What are my alternatives?
You can use class_mapper() and catch the exception.
Or you could use _is_mapped_class, but ideally you should not as it is not a public method.
from sqlalchemy.orm.util import class_mapper

def _is_sa_mapped(cls):
    try:
        class_mapper(cls)
        return True
    except:
        return False

print _is_sa_mapped(MyClass)

# note: use this at your own risk, as it might be removed/renamed in the future
from sqlalchemy.orm.util import _is_mapped_class
print bool(_is_mapped_class(MyClass))
For instances there is object_mapper(), so:
from sqlalchemy.orm.base import object_mapper
from sqlalchemy.orm.exc import UnmappedInstanceError

def is_mapped(obj):
    try:
        object_mapper(obj)
    except UnmappedInstanceError:
        return False
    return True
The complete mapper utilities are documented here: http://docs.sqlalchemy.org/en/rel_1_0/orm/mapping_api.html
Just a consideration: since specific errors are raised by SQLAlchemy (UnmappedClassError for classes and UnmappedInstanceError for instances), why not catch them rather than a generic exception? ;)
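Putting that suggestion together, a minimal sketch that catches the specific exceptions for both classes and instances (MyClass and obj are placeholders for your own objects):

from sqlalchemy.orm import class_mapper, object_mapper
from sqlalchemy.orm.exc import UnmappedClassError, UnmappedInstanceError

def is_mapped_class(cls):
    try:
        class_mapper(cls)
        return True
    except UnmappedClassError:
        return False

def is_mapped_instance(obj):
    try:
        object_mapper(obj)
        return True
    except UnmappedInstanceError:
        return False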