Can't load 2 models in pycaffe - caffe

I am trying to load several models in pycaffe so I can perform ensemble learning. I can load one model fine and run tests on it, but when I try to load my second model it just stops.
I use the following code:
model_def_net0 = caffe_root + 'Dataset/net0/deploy.prototxt'
model_weights_net0 = caffe_root + 'Dataset/net0/net0_train_vgg_iter_250000.caffemodel'
net0 = caffe.Net(model_def_net0,      # defines the structure of the model
                 model_weights_net0,  # contains the trained weights
                 caffe.TEST)          # use test mode (e.g., don't perform dropout)

model_def_caffe = caffe_root + 'Dataset/caffenet/deploy_3.prototxt'
model_weights_caffe = caffe_root + 'Dataset/caffenet/caffenet_train_iter_150000.caffemodel'
net1 = caffe.Net(model_def_caffe,     # defines the structure of the model
                 model_weights_caffe, # contains the trained weights
                 caffe.TEST)          # use test mode (e.g., don't perform dropout)
When I execute the code, it halts while loading net1; the last lines in the log are:
I0516 18:29:39.718916 7339 net.cpp:411] data -> data
I0516 18:29:39.718942 7339 net.cpp:411] data -> label
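For context, once both nets load, the ensemble step itself is just an average over the two forward passes. A minimal sketch, assuming both deploy files accept the same preprocessed input blob named 'data' and expose a softmax output named 'prob' (these blob names are assumptions, not taken from the prototxts above):

import numpy as np

def ensemble_predict(image, nets):
    # Average the class probabilities of all nets for one preprocessed image
    probs = []
    for net in nets:
        net.blobs['data'].data[...] = image
        out = net.forward()
        probs.append(out['prob'][0].copy())
    return np.mean(probs, axis=0)

# avg_prob = ensemble_predict(preprocessed_image, [net0, net1])
# predicted_class = avg_prob.argmax()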

Related

NMT: 'KerasTensor' object is not callable

Here I share a code snippet for training an encoder-decoder model for machine translation. While reusing the Embedding layer (trained previously) in inference mode (on test data), it threw the following error:
# Encoder
encoder_inputs = Input(shape=(None,))
enc_emb = Embedding(eng_vocab_size, latent_dim, mask_zero=True)(encoder_inputs)
encoder_lstm = LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder_lstm(enc_emb)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]

# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None,))
dec_emb = Embedding(deu_vocab_size, latent_dim, mask_zero=True)(decoder_inputs)
# decoder returns full output sequences, and internal states as well.
# We don't use the return states in the training model,
# but we will use them in inference.
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(dec_emb,
                                     initial_state=encoder_states)
decoder_dense = Dense(deu_vocab_size, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

# Compile the model
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['acc'])

# Encode the input sequence to get the "thought vectors"
encoder_model = Model(encoder_inputs, encoder_states)

# Decoder setup
# Below tensors will hold the states of the previous time step
decoder_state_input_h = Input(shape=(latent_dim,))
decoder_state_input_c = Input(shape=(latent_dim,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]

dec_emb2 = dec_emb(decoder_inputs)  # reusing embedding layer
decoder_outputs2, state_h2, state_c2 = decoder_lstm(dec_emb2, initial_state=decoder_states_inputs)  # reusing lstm layer
decoder_outputs2 = decoder_dense(decoder_outputs2)  # softmax layer to generate prob. dist. over target vocab

# Final decoder model
decoder_model = Model(
    [decoder_inputs] + decoder_states_inputs,
    [decoder_outputs2])
ERROR
8 decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
---> 10 dec_emb2= dec_emb(decoder_inputs) # reusing embedding layer
11
12 decoder_outputs2, state_h2, state_c2 = decoder_lstm(dec_emb2, initial_state=decoder_states_inputs) # reusing lstm layer
TypeError: 'KerasTensor' object is not callable
I read through various solutions for this issue, but couldn't understand which two modes of the model they were talking about or what their solution was effectively doing.
Please explain in detail. Thanks in advance.
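The call that fails is dec_emb2 = dec_emb(decoder_inputs): dec_emb is the tensor returned by the Embedding layer, not the layer itself, so it cannot be called. A minimal sketch of the usual fix is to keep a handle on the layer object and call that again at inference time (dec_emb_layer is my own name, not from the original code):

# Build the decoder embedding layer once and keep the layer object
dec_emb_layer = Embedding(deu_vocab_size, latent_dim, mask_zero=True)

# Training graph
dec_emb = dec_emb_layer(decoder_inputs)
decoder_outputs, _, _ = decoder_lstm(dec_emb, initial_state=encoder_states)
decoder_outputs = decoder_dense(decoder_outputs)

# Inference graph: call the same layer object again on decoder_inputs
dec_emb2 = dec_emb_layer(decoder_inputs)
decoder_outputs2, state_h2, state_c2 = decoder_lstm(dec_emb2,
                                                    initial_state=decoder_states_inputs)
decoder_outputs2 = decoder_dense(decoder_outputs2)

The "two modes" other answers mention are simply the training model (which sees whole target sequences with teacher forcing) and the separate encoder/decoder inference models, which are built by reusing the same layer objects one time step at a time.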

I trained and saved a PyTorch model, but when I load it again in another session and test on the same dataset, the accuracy changes each time

I made a FER model in PyTorch and trained/tested it on the FED-RO dataset (accuracy = 40 out of 100 test images), then saved it with torch.save as a model.pth file in another folder on Drive. But after loading it and running test(), it gives a different accuracy.
def test(epoch=None, is_validation=False, pretrained=True):
    model.eval()
    # loader = dataloaders['val'] if is_validation else dataloaders['train']
    test_loss = 0
    test_correct = 0
    with torch.no_grad():
        for batch_idx, (inputs, targets) in enumerate(dataloaders['val']):
            # print("batch_idx", batch_idx)
            inputs, targets = inputs.to(DEVICE), targets.to(DEVICE)
            outputs = model(inputs)
            # print("outputs", outputs)
            test_loss += F.cross_entropy(outputs, targets, size_average=False).item()
            test_correct += outputs.max(1)[1].eq(targets).sum().item()
            # print("test_correct", test_correct)
            # test_correct = test_correct + test_correct
    if is_validation:
        writer.add_scalar('logs/val_loss', test_loss / len(datasets['val']), epoch)
        writer.add_scalar('logs/val_acc', test_correct / len(datasets['val']), epoch)
    else:
        print("Test Accuracy: {}/{}".format(test_correct, len(datasets['val'])))
I tried different save functions, such as torch.save(model) and torch.save(model.state_dict(), PATH), but after loading, the accuracy changes (e.g. 86 out of 100 images), and it is different on every run.
I just want the model's accuracy to stay the same after reloading in a different session or file, for the same dataset.
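Not a complete fix, but the usual sources of run-to-run variation here are unseeded RNGs, random transforms or shuffling in the validation DataLoader, and layers left in training mode. A hedged sketch of the reproducibility boilerplate (loader and path names are placeholders; model and DEVICE are the ones from the code above):

import random
import numpy as np
import torch

def seed_everything(seed=42):
    # Seed every RNG that the data pipeline and the model may touch
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

seed_everything(42)

# The validation loader should use deterministic transforms (no RandomCrop /
# RandomHorizontalFlip) and shuffle=False, otherwise every run evaluates on
# differently augmented data.
# val_loader = DataLoader(val_dataset, batch_size=32, shuffle=False)

# Restore the weights and switch to eval mode before calling test()
model.load_state_dict(torch.load('model.pth', map_location=DEVICE))
model.to(DEVICE)
model.eval()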

How can I solve "empty() received an invalid combination of arguments - got (tuple, dtype=NoneType, device=NoneType)" in MushroomRL?

I'm using MushroomRL for a deep reinforcement learning project, with a graph representation as the RL environment, where the number of nodes is the number of actions. In my neural network the input is a single value, e.g. tensor([[5.]]), and the output Q has one entry per node (ten nodes), e.g. tensor([[5972.4927, 8562.3330, 7443.6479, 7326.1587, 6615.2090, 6617.3145, 6911.8672, 8233.7930, 6821.0093, 7000.1182]]). I'm using a new framework called MushroomRL, and this is the code:
if __name__ == '__main__':
    from mushroom_rl.core import Core
    from mushroom_rl.algorithms.value import TrueOnlineSARSALambda
    from mushroom_rl.policy import EpsGreedy
    from mushroom_rl.features import Features
    from mushroom_rl.features.tiles import Tiles
    from mushroom_rl.utils.dataset import compute_J
    from mushroom_rl.utils.parameters import LinearParameter, Parameter
    from mushroom_rl.approximators.parametric import TorchApproximator
    from mushroom_rl.algorithms.value import DQN

    # Set the seed
    np.random.seed(1)

    # Create the toy environment with default parameters
    # mdp = Environment.make('graph_env')
    mdp = graph_env()

    # Using an epsilon-greedy policy
    epsilon = Parameter(value=0.1)
    pi = EpsGreedy(epsilon=epsilon)

    # Policy
    epsilon = LinearParameter(value=1.,
                              threshold_value=.1,
                              n=1000000)
    epsilon_test = Parameter(value=.05)
    epsilon_random = Parameter(value=1)
    pi = EpsGreedy(epsilon=epsilon_random)

    approximator_params = dict(
        network=Network,
        input_shape=(1,),
        output_shape=(1,),
        n_actions=mdp.info.action_space.n,
        optimizer=optimizer,
        loss=F.mse_loss
    )
    approximator = TorchApproximator

    algorithm_params = dict(
        batch_size=32,
        target_update_frequency=target_update_frequency // train_frequency,
        replay_memory=True,
        initial_replay_size=initial_replay_size,
        max_replay_size=max_replay_size
    )
    agent = DQN(mdp.info, pi, approximator,
                approximator_params=approximator_params,
                **algorithm_params)

    # Algorithm
    core = Core(agent, mdp)

    # RUN
    # Fill replay memory with random dataset
    print_epoch(0)
    core.learn(n_steps=initial_replay_size, n_steps_per_fit=initial_replay_size)

    # Evaluate initial policy
    pi.set_epsilon(epsilon_test)
    # mdp.set_episode_end(False)
    dataset = core.evaluate(n_steps=test_samples)
    scores.append(get_stats(dataset))

    for n_epoch in range(1, max_steps // evaluation_frequency + 1):
        print_epoch(n_epoch)
        print('- Learning:')
        # learning step
        pi.set_epsilon(epsilon)
        mdp.set_episode_end(True)
        core.learn(n_steps=evaluation_frequency,
                   n_steps_per_fit=train_frequency)

        print('- Evaluation:')
        # evaluation step
        pi.set_epsilon(epsilon_test)
        mdp.set_episode_end(False)
        dataset = core.evaluate(n_steps=test_samples)
        scores.append(get_stats(dataset))
It gives me this error when I run the code:
TypeError: empty() received an invalid combination of arguments - got (tuple, dtype=NoneType, device=NoneType), but expected one of:
* (tuple of ints size, *, tuple of names names, torch.memory_format memory_format, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
* (tuple of ints size, *, torch.memory_format memory_format, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
I believe the problem is in this part of the code. Can anyone help me fix it?
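This TypeError usually means a tuple ended up where torch expects an integer layer size, i.e. the Network class passed input_shape/output_shape straight into nn.Linear. A minimal sketch of a Network compatible with TorchApproximator, with the shapes unpacked first (the layer sizes and n_features are illustrative, not from the original code); note that output_shape in approximator_params would then typically be (mdp.info.action_space.n,) rather than (1,):

import torch
import torch.nn as nn
import torch.nn.functional as F

class Network(nn.Module):
    def __init__(self, input_shape, output_shape, n_features=80, **kwargs):
        super().__init__()
        # MushroomRL passes the shapes as tuples; unpack them before building layers
        n_input = input_shape[-1]      # e.g. 1
        n_output = output_shape[0]     # e.g. mdp.info.action_space.n
        self._h1 = nn.Linear(n_input, n_features)
        self._h2 = nn.Linear(n_features, n_output)

    def forward(self, state, action=None):
        q = self._h2(F.relu(self._h1(state.float())))
        if action is None:
            return q
        # Return only the Q-value of the chosen action
        return torch.squeeze(q.gather(1, action.long()))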

How to save and load a model trained on a custom dataset in Detectron2?

I have tried to save and load the model using the code below. All keys are mapped, but there is no prediction in the output.
#1
from detectron2.modeling import build_model
model = build_model(cfg)
torch.save(model.state_dict(), 'checkpoint.pth')
model.load_state_dict(torch.load(checkpoint_path,map_location='cpu'))
I also tried doing it using the official docs, but I can't understand the input-format part:
from detectron2.checkpoint import DetectionCheckpointer
DetectionCheckpointer(model).load(file_path_or_url) # load a file, usually from cfg.MODEL.WEIGHTS
checkpointer = DetectionCheckpointer(model, save_dir="output")
checkpointer.save("model_999") # save to output/model_999.pth
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file('COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml'))
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 # Set threshold for this model
cfg.MODEL.WEIGHTS = '/content/model_final.pth' # Set path model .pth
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1
predictor = DefaultPredictor(cfg)
My code to load the custom model runs (all keys are mapped), but I still get no predictions.
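For reference, a minimal inference sketch with DefaultPredictor; the image path is a placeholder, and the key points are that NUM_CLASSES must match what was used for training and that DefaultPredictor expects a BGR numpy array (e.g. from cv2.imread), not a tensor or an RGB PIL image:

import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file('COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml'))
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1               # must match the training setup
cfg.MODEL.WEIGHTS = '/content/model_final.pth'    # checkpoint saved during/after training
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5

predictor = DefaultPredictor(cfg)
im = cv2.imread('test_image.jpg')                 # BGR numpy array of shape (H, W, 3)
outputs = predictor(im)

instances = outputs['instances'].to('cpu')
print(len(instances), 'detections')
print(instances.pred_boxes, instances.scores, instances.pred_classes)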

How to load a trained MXnet model?

I have trained a network using MXnet, but am not sure how I can save and load the parameters for later use. First I define and train the network:
dataIn = mx.sym.var('data')
fc1 = mx.symbol.FullyConnected(data=dataIn, num_hidden=100)
act1 = mx.sym.Activation(data=fc1, act_type="relu")
fc2 = mx.symbol.FullyConnected(data=act1, num_hidden=50)
act2 = mx.sym.Activation(data=fc2, act_type="relu")
fc3 = mx.symbol.FullyConnected(data=act2, num_hidden=25)
act3 = mx.sym.Activation(data=fc3, act_type="relu")
fc4 = mx.symbol.FullyConnected(data=act3, num_hidden=10)
act4 = mx.sym.Activation(data=fc4, act_type="relu")
fc5 = mx.symbol.FullyConnected(data=act4, num_hidden=2)
lenet = mx.sym.SoftmaxOutput(data=fc5, name='softmax',normalization = 'batch')
# create iterator around training and validation data
train_iter = mx.io.NDArrayIter(data=data[:ntrain], label = phen[:ntrain],batch_size=batch_size, shuffle=True)
val_iter = mx.io.NDArrayIter(data=data[ntrain:], label=phen[ntrain:], batch_size=batch_size)
# create a trainable module on GPU 0
lenet_model = mx.mod.Module(symbol=lenet, context=mx.gpu())
# train with the same
lenet_model.fit(train_iter,
                eval_data=val_iter,
                optimizer='adam',
                optimizer_params={'learning_rate': 0.00001},
                eval_metric='f1',
                batch_end_callback=mx.callback.Speedometer(batch_size, 10),
                num_epoch=1000)
This model performs well on the test set, so I want to keep it. Next, I save the network layout and the parameterization:
lenet.save('./testNet_symbol.mxnet')
lenet_model.save_params('./testNet_module.mxnet')
All the documentation I can find on loading a network seems to implement the save function within the training routine, saving the network parameters at the end of each epoch, but I haven't set these checkpoints during training. Other methods use the mx.model.FeedForward class, which doesn't seem appropriate. Still other methods load the network from a .json file, which my save functions don't produce. How can I save and load a network after it has already finished training?
You just have to do this instead to save:
lenet_model.save_checkpoint('lenet', num_epoch, save_optimizer_states=True)
This creates three files if the states flag is set to True, otherwise two:
lenet-<epoch>.params (the weights),
lenet-symbol.json (the network symbol),
lenet-<epoch>.states (the optimizer states)
And this to load:
lenet_model = mx.mod.Module.load(prefix,epoch)
lenet_model.bind(for_training=False, data_shapes=[('data', (1,3,224,224))])
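Continuing from that, a minimal inference sketch; the random batch is only a placeholder for real preprocessed data, and the (1, 3, 224, 224) shape simply mirrors the bind() call above:

import numpy as np
import mxnet as mx

# Feed one batch through the restored module and read back the softmax output
batch = mx.io.DataBatch(data=[mx.nd.array(np.random.rand(1, 3, 224, 224))])
lenet_model.forward(batch, is_train=False)
probs = lenet_model.get_outputs()[0].asnumpy()   # shape (1, 2) for this two-class network
print(probs)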