update
Since one effect of these functions is to provide a way to use method chaining on methods that would not normally support it, I'm considering calling them chain and copychain, respectively. This seems less than ideal, though, since the would-be copychain is arguably the more fundamental concept, at least in terms of functional programming.
original
I'm calling it a boxer for now. The code is in Python, though the question is general:
def boxer(f):
    """Return a function g(o, *args, **keyargs) -> o

    `g` calls `f` on `o` with the remaining arguments
    and returns `o`.

    >>> l = [2]
    >>> def increment_list_element(l, i):
    ...     l[0] += i
    >>> adder = boxer(increment_list_element)
    >>> adder(l, 2)
    [4]
    >>> def multiply_list_element(l, i):
    ...     l[0] *= i
    >>> multiplier = boxer(multiply_list_element)
    >>> sum(multiplier(l, 6))
    24
    """
    def box(o, *args, **keyargs):
        f(o, *args, **keyargs)
        return o
    return box
A similar concept copies the would-be assignee, and assigns to and returns the copy. This one is a "shadow_boxer":
from copy import deepcopy

def shadow_boxer(f):
    """Return a function g(o, *args, **keyargs) -> p

    `g` deepcopies `o` as `p`,
    executes `f` on `p` with the remaining arguments,
    and returns `p`.

    >>> l = [2]
    >>> def increment_list_element(l, i):
    ...     l[0] += i
    >>> adder = shadow_boxer(increment_list_element)
    >>> adder(l, 2)
    [4]
    >>> def multiply_list_element(l, i):
    ...     l[0] *= i
    >>> multiplier = shadow_boxer(multiply_list_element)
    >>> sum(multiplier(l, 6))
    12
    """
    def shadow_box(o, *args, **keyargs):
        p = deepcopy(o)
        f(p, *args, **keyargs)
        return p
    return shadow_box
In addition, I'd like to find out about resources for learning more about these sorts of things, though I'm similarly unsure of a name for "these sorts of things". It does seem related to functional programming, although as I understand it, these techniques would be unnecessary in a true functional language.
This is pretty much the same thing as Ruby's Object#tap. Don't know how you feel about the name, but that's what they call it.
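For illustration (my addition, not part of the original answer), the boxer above gives Python a similar chaining style for methods that return None:
append = boxer(list.append)   # list.append returns None, so it normally can't be chained
sort = boxer(list.sort)       # list.sort also returns None

sort(append(append([3, 1], 2), 0))   # -> [0, 1, 2, 3]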
What the boxer function returns is what some programming languages would call a closure. If there is not already a function with this name, I would call the function closure.
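A quick way to see the closure in action with the boxer above (illustration only, reusing the names from the question):
adder = boxer(increment_list_element)
adder.__closure__                                             # one cell: `box` closes over `f`
adder.__closure__[0].cell_contents is increment_list_element  # True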
Related
While modifying a geometric network to address the PPI problem, I found that super() cannot be called in some circumstances.
The following version leads to this error:
TypeError: super(type, obj): obj must be an instance or subtype of type
def forward(self, batch, level='residue', **kwargs):
    out = torch.cat([super().forward(graph, scatter_mean=False, dense=True) for graph in batch], dim=-1)
    if level == 'atom': out = out[batch.ca_idx + batch.ptr[:-1]]
    return torch.sigmoid(out)
Notably, the batch has two items, i.e., two torch_geometric graphs.
However, the following way works fine for me.
def forward(self, batch, level='residue', **kwargs):
    out1 = super().forward(batch[0], scatter_mean=False, dense=True)
    out2 = super().forward(batch[1], scatter_mean=False, dense=True)
    out = torch.cat([out1, out2], dim=-1)
    if level == 'atom': out = out[batch.ca_idx + batch.ptr[:-1]]
    return torch.sigmoid(out)
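My reading of the error (an assumption, not stated in the original post): a list comprehension runs in its own implicit function, so the zero-argument super() looks for the instance in that inner frame's first argument, which is the loop's iterator rather than self, hence the TypeError. Binding the parent method once, outside the comprehension, sidesteps this; a sketch based on the snippet above:
def forward(self, batch, level='residue', **kwargs):
    # Resolve super() once in the method scope, where `self` is available,
    # then reuse the bound method inside the comprehension.
    parent_forward = super().forward
    out = torch.cat([parent_forward(graph, scatter_mean=False, dense=True) for graph in batch], dim=-1)
    if level == 'atom':
        out = out[batch.ca_idx + batch.ptr[:-1]]
    return torch.sigmoid(out)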
I'm working on a math method, and to reduce execution time I use a numba decorator:
@numba.jit(nopython=True, nogil=True, cache=True)
def analize_tick(data: np.array, index: int, result_signal: np.array) -> None:
    # I perform an action here and then return the result
    result_signal[0] = 1
It works OK, but when I changed the decorator from @numba.jit(nopython=True, nogil=True, cache=True) to @cuda.jit(device=True), I got the error: 'DeviceFunctionTemplate' object is not callable.
Could you advise me how to fix this issue?
BTW, the method receives three arguments:
a 2-dimensional numpy float array
an int index
a 1-dimensional numpy int array where I return the result
UPDATED to add code sample:
import unittest
import pandas as pd
import numpy as np
import numba
from numba import cuda

@numba.jit(nopython=True, nogil=True, cache=True)
# @cuda.jit(device=True)
def calculate(data: np.array, index: int, options: np.array, result_signal: np.array) -> None:
    i = data[0]
    b = data[1]
    result_signal[0] = i + b

@numba.jit(nopython=True, nogil=True, cache=True)
# @cuda.jit(device=True)
def for_each(data: np.array, options: np.array, result: np.array) -> None:
    for index, r in enumerate(data):
        calculate(r, index, options, result)
        # print(result[0])

class cuda_test(unittest.TestCase):
    def test_numba_call(self):
        df = pd.DataFrame([[1, 1], [2, 2]], columns=['c0', 'c1'])
        data = df.to_numpy()
        result = np.array([0], dtype=float)
        options = np.array([0], dtype=float)
        for sigma in range(0, 10, 1):
            options[0] = sigma
            for_each(data, options, result)
Could you advise me how to fix this issue?
There is no way to fix this. What you are trying to do is impossible.
When you decorate a function like this:
@cuda.jit(device=True)
def for_each(data: np.array, options: np.array, result: np.array) -> None:
    for index, r in enumerate(data):
        calculate(r, index, options, result)
you are denoting that the function is only available to be called by CUDA kernels or other device functions. You are not calling it within a CUDA kernel or device function. There is no way to change this behaviour; it is a limitation of the language.
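For completeness (my addition, not part of the original answer): the usual restructuring is to keep calculate as a device function and call it from a @cuda.jit kernel launched from the host. A rough sketch under the question's shapes, with a per-row output added so that threads don't all write the same element:
import numpy as np
from numba import cuda

@cuda.jit(device=True)
def calculate(data, i, options, out):
    # Device function: only callable from kernels or other device functions.
    out[i] = data[i, 0] + data[i, 1]

@cuda.jit
def for_each_kernel(data, options, out):
    i = cuda.grid(1)          # global thread index
    if i < data.shape[0]:     # guard threads beyond the data
        calculate(data, i, options, out)

data = np.array([[1.0, 1.0], [2.0, 2.0]])
options = np.array([0.0])
out = np.zeros(data.shape[0])

threads = 32
blocks = (data.shape[0] + threads - 1) // threads
for_each_kernel[blocks, threads](data, options, out)  # NumPy arrays are copied to/from the device automatically
print(out)  # [2. 4.]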
I have read some papers that use something called "Bootstrapped Cross Entropy Loss" to train their segmentation networks. The idea is to take only the hardest k% (say 15%) of the pixels into account, which improves learning performance, especially when easy pixels dominate.
Currently, I am using the standard cross entropy:
loss = F.binary_cross_entropy(mask, gt)
How do I convert this to the bootstrapped version efficiently in PyTorch?
Often we would also add a "warm-up" period to the loss so that the network can first learn to adapt to the easy regions and then transition to the harder regions.
This implementation starts with k=100 and keeps it there for 20000 iterations, then linearly decays it to k=15 over another 50000 iterations.
class BootstrappedCE(nn.Module):
    def __init__(self, start_warm=20000, end_warm=70000, top_p=0.15):
        super().__init__()
        self.start_warm = start_warm
        self.end_warm = end_warm
        self.top_p = top_p

    def forward(self, input, target, it):
        if it < self.start_warm:
            return F.cross_entropy(input, target), 1.0

        raw_loss = F.cross_entropy(input, target, reduction='none').view(-1)
        num_pixels = raw_loss.numel()
        if it > self.end_warm:
            this_p = self.top_p
        else:
            this_p = self.top_p + (1 - self.top_p) * ((self.end_warm - it) / (self.end_warm - self.start_warm))
        loss, _ = torch.topk(raw_loss, int(num_pixels * this_p), sorted=False)
        return loss.mean(), this_p
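A possible usage sketch (my addition; shapes and numbers are illustrative, assuming the usual torch/nn/F imports): logits of shape (N, C, H, W), integer class targets of shape (N, H, W), and the current iteration passed in explicitly:
criterion = BootstrappedCE(start_warm=20000, end_warm=70000, top_p=0.15)

logits = torch.randn(2, 5, 8, 8, requires_grad=True)  # dummy predictions
targets = torch.randint(0, 5, (2, 8, 8))              # dummy labels
loss, effective_p = criterion(logits, targets, it=30000)
loss.backward()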
An addition to the self-answer by @hkchengrex (for my future self and for API parity with PyTorch):
One could implement a functional version first (with some of the additional arguments provided by the original torch.nn.functional.cross_entropy), like this (I also prefer reduction to be a callable instead of predefined strings):
import typing

import torch


def bootstrapped_cross_entropy(
    inputs,
    targets,
    iteration,
    p: float,
    warmup: typing.Union[typing.Callable[[float, int], float], int] = -1,
    weight=None,
    ignore_index=-100,
    reduction: typing.Callable[[torch.Tensor], torch.Tensor] = torch.mean,
):
    if not 0 < p < 1:
        raise ValueError("p should be in (0, 1) range, got: {}".format(p))

    if isinstance(warmup, int):
        this_p = 1.0 if iteration < warmup else p
    elif callable(warmup):
        this_p = warmup(p, iteration)
    else:
        raise ValueError(
            "warmup should be int or callable, got {}".format(type(warmup))
        )

    # Shortcut: during warm-up use all pixels. `reduction` is a callable here,
    # so it is applied to the unreduced loss rather than passed to cross_entropy.
    if this_p == 1.0:
        return reduction(
            torch.nn.functional.cross_entropy(
                inputs, targets, weight=weight, ignore_index=ignore_index, reduction="none"
            )
        )

    raw_loss = torch.nn.functional.cross_entropy(
        inputs, targets, weight=weight, ignore_index=ignore_index, reduction="none"
    ).view(-1)
    num_pixels = raw_loss.numel()

    loss, _ = torch.topk(raw_loss, int(num_pixels * this_p), sorted=False)
    return reduction(loss)
Also, warmup can be specified either as a callable (taking p and the current iteration) or as an int threshold, which allows for flexible or simple scheduling.
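For example (my own sketch, not from the answer), a warmup callable reproducing the linear decay from the class in the earlier answer could look like this:
def linear_decay_warmup(p: float, iteration: int,
                        start_warm: int = 20000, end_warm: int = 70000) -> float:
    # Use all pixels during warm-up, then decay linearly down to p.
    if iteration < start_warm:
        return 1.0
    if iteration > end_warm:
        return p
    return p + (1 - p) * (end_warm - iteration) / (end_warm - start_warm)

# loss = bootstrapped_cross_entropy(logits, targets, iteration, p=0.15, warmup=linear_decay_warmup)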
And making a class based on _WeightedLoss, with the iteration incremented automatically during each call (so only inputs and targets have to be passed):
class BoostrappedCrossEntropy(torch.nn.modules.loss._WeightedLoss):
    def __init__(
        self,
        p: float,
        warmup: typing.Union[typing.Callable[[float, int], float], int] = -1,
        weight=None,
        ignore_index=-100,
        reduction: typing.Callable[[torch.Tensor], torch.Tensor] = torch.mean,
    ):
        self.p = p
        self.warmup = warmup
        self.ignore_index = ignore_index
        self._current_iteration = -1

        super().__init__(weight, size_average=None, reduce=None, reduction=reduction)

    def forward(self, inputs, targets):
        self._current_iteration += 1
        return bootstrapped_cross_entropy(
            inputs,
            targets,
            self._current_iteration,
            self.p,
            self.warmup,
            self.weight,
            self.ignore_index,
            self.reduction,
        )
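And a possible usage sketch (again with illustrative shapes, warmup given as a plain int threshold):
criterion = BoostrappedCrossEntropy(p=0.15, warmup=1000)

logits = torch.randn(2, 5, 8, 8, requires_grad=True)
targets = torch.randint(0, 5, (2, 8, 8))
loss = criterion(logits, targets)  # the iteration counter advances automatically on each call
loss.backward()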
I am trying to use a custom Keras loss function that, apart from the usual signature (y_true, y_pred), takes another parameter, sigma (which is also produced by the last layer of the network).
The training works fine, but then I am not sure how to perform forward propagation and return sigma (while mu is the output of the model.predict method).
This is the code I am using, which features a custom layer GaussianLayer that returns the list [mu, sigma].
import tensorflow as tf
from keras import backend as K
from keras.layers import Input, Dense, Layer, Dropout
from keras.models import Model
from keras.initializers import glorot_normal
import numpy as np

def custom_loss(sigma):
    def gaussian_loss(y_true, y_pred):
        return tf.reduce_mean(0.5 * tf.log(sigma) + 0.5 * tf.div(tf.square(y_true - y_pred), sigma)) + 10
    return gaussian_loss

class GaussianLayer(Layer):
    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim
        super(GaussianLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        self.kernel_1 = self.add_weight(name='kernel_1',
                                        shape=(30, self.output_dim),
                                        initializer=glorot_normal(),
                                        trainable=True)
        self.kernel_2 = self.add_weight(name='kernel_2',
                                        shape=(30, self.output_dim),
                                        initializer=glorot_normal(),
                                        trainable=True)
        self.bias_1 = self.add_weight(name='bias_1',
                                      shape=(self.output_dim, ),
                                      initializer=glorot_normal(),
                                      trainable=True)
        self.bias_2 = self.add_weight(name='bias_2',
                                      shape=(self.output_dim, ),
                                      initializer=glorot_normal(),
                                      trainable=True)
        super(GaussianLayer, self).build(input_shape)

    def call(self, x):
        output_mu = K.dot(x, self.kernel_1) + self.bias_1
        output_sig = K.dot(x, self.kernel_2) + self.bias_2
        output_sig_pos = K.log(1 + K.exp(output_sig)) + 1e-06
        return [output_mu, output_sig_pos]

    def compute_output_shape(self, input_shape):
        return [(input_shape[0], self.output_dim), (input_shape[0], self.output_dim)]

# This returns a tensor
inputs = Input(shape=(1,))
x = Dense(30, activation='relu')(inputs)
x = Dropout(0.3)(x)
x = Dense(30, activation='relu')(x)
x = Dense(40, activation='relu')(x)
x = Dropout(0.3)(x)
x = Dense(30, activation='relu')(x)
mu, sigma = GaussianLayer(1)(x)

model = Model(inputs, mu)
model.compile(loss=custom_loss(sigma), optimizer='adam')
model.fit(train_x, train_y, epochs=150)
Since your model returns two tensors as output, you also need to pass a list of two arrays as the targets when calling the fit() method. That's essentially what the error is trying to convey. It starts with:
Error when checking model target:
So the error is in the targets (i.e. labels). What is wrong? It continues:
the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays:
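A sketch of the change being described (my illustration, not from the original answer), assuming the model is built with both outputs and train_y is simply reused as a placeholder target for the sigma head:
model = Model(inputs, [mu, sigma])
model.compile(loss=custom_loss(sigma), optimizer='adam')
# With two outputs, Keras expects one target array per output:
model.fit(train_x, [train_y, train_y], epochs=150)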
I may have found the answer among Keras FAQs.
I found out that it is possible to retrieve an intermediate layer's output using the code snippet below:
layer_name = 'main_output'
intermediate_layer_model = Model(inputs=model.input,
                                 outputs=model.get_layer(layer_name).output)
intermediate_output = intermediate_layer_model.predict(train_x[0])
intermediate_output
In this case intermediate_output is a list of two values, [mu, sigma] (I just needed to name the output layer main_output and retrieve it later).
I would like to write a bit of code that calls a function specified by a given argument, e.g.:
def caller(func):
    return func()
However what I would also like to do is specify optional arguments to the 'caller' function so that 'caller' calls 'func' with the arguments specified (if any).
def caller(func, args):
    # calls func with the arguments specified in args
Is there a simple, pythonic way to do this?
You can do this by using arbitrary argument lists and unpacking argument lists.
>>> def caller(func, *args, **kwargs):
...     return func(*args, **kwargs)
...
>>> def hello(a, b, c):
...     print(a, b, c)
...
>>> caller(hello, 1, b=5, c=7)
1 5 7
Not sure why you feel the need to do it, though.
This already exists as the apply function, though it is considered obsolete due to the newer *args and **kwargs syntax (and it was removed entirely in Python 3).
>>> def foo(a,b,c): print a,b,c
>>> apply(foo, (1,2,3))
1 2 3
>>> apply(foo, (1,2), {'c':3}) # also accepts keyword args
However, the * and ** syntax is generally a better solution. The above is equivalent to:
>>> foo(*(1,2), **{'c':3})