Implementation of multitask "nested" neural network - deep-learning

I am trying to implement a multitask neural network used by a paper but am quite unsure how I should code the multitask network because the authors did not provide code for that part.
The network architecture looks like (paper):
To make it simpler, the network architecture can be generalized as follows (for the demo I replaced their more complicated operation on the pair of individual embeddings with a simple concatenation):
The authors sum the loss from the individual tasks and the pairwise tasks and use the total loss to optimize the parameters of the three networks (encoder, MLP-1, MLP-2) in each batch, but I am at a loss as to how the different types of data are combined in a single batch and fed into two different networks that share an initial encoder. I tried to search for other networks with a similar structure but did not find any sources. I would appreciate any thoughts!

This is actually a common pattern. It can be handled with code along the following lines.
class Network(nn.Module):
    def __init__(self, ...):
        super().__init__()
        self.encoder = DrugTargetInteractionNetwork()
        self.mlp1 = ClassificationMLP()
        self.mlp2 = PairwiseMLP()

    def forward(self, data_a, data_b):
        a_encoded = self.encoder(data_a)
        b_encoded = self.encoder(data_b)
        a_classified = self.mlp1(a_encoded)
        b_classified = self.mlp1(b_encoded)
        # let me assume data_a and data_b are of shape
        # [batch_size, n_molecules, n_features],
        # and that those n_molecules are not necessarily equal.
        # This can be generalized to more dimensions.
        a_broadcast, b_broadcast = torch.broadcast_tensors(
            a_encoded[:, None, :, :],
            b_encoded[:, :, None, :],
        )
        # this will work if your mlp2 accepts an arbitrary number of
        # leading dimensions and just broadcasts over them. That's true,
        # for example, if it uses just Linear and pointwise
        # operations, but it may fail if it makes some specific assumptions
        # about the number of dimensions of the inputs.
        pairwise_classified = self.mlp2(a_broadcast, b_broadcast)
        # if that is a problem, you have to reshape so that it works.
        # Most torch models accept at least a leading batch dimension
        # for vectorization, so we can "fold" the pairwise dimensions
        # into the batch dimension, presenting the input as
        # [batch*n_mol_1*n_mol_2, n_features]
        # to mlp2 and then recover the shape afterwards.
        # After broadcast_tensors both tensors are 4-D: [B, N1, N2, N_feat]
        B, N1, N2, N_feat = a_broadcast.shape
        a_batched = a_broadcast.reshape(B * N1 * N2, N_feat)
        b_batched = b_broadcast.reshape(B * N1 * N2, N_feat)
        # above, -1 would suffice instead of B*N1*N2, just being explicit
        batch_output = self.mlp2(a_batched, b_batched)
        # this should be exactly the same as `pairwise_classified`
        alternative_classified = batch_output.reshape(B, N1, N2, -1)
        return a_classified, b_classified, pairwise_classified
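To address the loss part of the question: since the encoder is shared, you can simply sum the task losses and call backward once; the gradients then flow into all three sub-networks. A minimal training-loop sketch (the loss functions, label names and DataLoader are assumptions for illustration, not from the paper):
net = Network()                                  # assuming the constructor args are filled in
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
task_loss = nn.BCEWithLogitsLoss()               # assumed individual-task loss
pair_loss = nn.BCEWithLogitsLoss()               # assumed pairwise-task loss

for data_a, data_b, y_a, y_b, y_pair in loader:  # hypothetical DataLoader
    a_out, b_out, pair_out = net(data_a, data_b)
    # total loss = individual losses + pairwise loss, as described in the paper
    loss = task_loss(a_out, y_a) + task_loss(b_out, y_b) + pair_loss(pair_out, y_pair)
    optimizer.zero_grad()
    loss.backward()                              # updates encoder, mlp1 and mlp2 together
    optimizer.step()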

Related

Difference between WGAN and WGAN-GP (Gradient Penalty)

I just found that in the code here:
https://github.com/NUS-Tim/Pytorch-WGAN/tree/master/models
The "generator" loss, G, between WGAN and WGAN-GP is different, for WGAN:
g_loss = self.D(fake_images)
g_loss = g_loss.mean().mean(0).view(1)
g_loss.backward(one) # !!!
g_cost = -g_loss
But for WGAN-GP:
g_loss = self.D(fake_images)
g_loss = g_loss.mean()
g_loss.backward(mone) # !!!
g_cost = -g_loss
Why is it one=1 in one case and mone=-1 in the other?
You might have misread the source code: the first sample you gave does not average the result of D to compute its loss, but instead uses the binary cross-entropy.
To be more precise:
The first method ("GAN") uses the BCE loss to compute the loss terms for D and G. The standard GAN optimization objective for D is to minimize E_x[log(D(x))] + E_z[log(1-D(G(z)))]. Source code:
outputs = self.D(images)
d_loss_real = self.loss(outputs.flatten(), real_labels) # <- bce loss
real_score = outputs
# Compute BCELoss using fake images
fake_images = self.G(z)
outputs = self.D(fake_images)
d_loss_fake = self.loss(outputs.flatten(), fake_labels) # <- bce loss
fake_score = outputs
# Optimize discriminator
d_loss = d_loss_real + d_loss_fake
self.D.zero_grad()
d_loss.backward()
self.d_optimizer.step()
For d_loss_real you optimize towards 1s (output is considered real), while d_loss_fake optimizes towards 0s (output is considered fake).
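As a small illustrative sketch (not the repository's exact code), the labels here are just vectors of ones for real images and zeros for fake ones, fed to BCELoss together with the discriminator outputs:
import torch
import torch.nn as nn

bce = nn.BCELoss()
out_real = torch.rand(8)        # stand-in for D(images), probabilities in (0, 1)
out_fake = torch.rand(8)        # stand-in for D(G(z))
real_labels = torch.ones(8)     # real images should be classified as 1
fake_labels = torch.zeros(8)    # generated images should be classified as 0
d_loss = bce(out_real, real_labels) + bce(out_fake, fake_labels)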
While the second ("WCGAN") uses the Wasserstein loss (ref) whereby we maximise for D the loss: E_x[D(x)] - E_z[D(G(z))]. Source code:
# Train discriminator
# WGAN - Training discriminator more iterations than generator
# Train with real images
d_loss_real = self.D(images)
d_loss_real = d_loss_real.mean()
d_loss_real.backward(mone)
# Train with fake images
z = self.get_torch_variable(torch.randn(self.batch_size, 100, 1, 1))
fake_images = self.G(z)
d_loss_fake = self.D(fake_images)
d_loss_fake = d_loss_fake.mean()
d_loss_fake.backward(one)
# [...]
Wasserstein_D = d_loss_real - d_loss_fake
By doing d_loss_real.backward(mone) you backpropagate with a gradient of the opposite sign, i.e. it is gradient ascent, and you end up maximizing d_loss_real.
In order to update the D network:
lossD = E[D(fake data)] - E[D(real data)] + gradient penalty
Minimizing lossD pushes D(real data) up, so the real-data term has to be backpropagated with a gradient of minus one (mone).
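A small illustrative sketch of what backward(one)/backward(mone) does (not from the repository): passing a gradient of -1 to backward on a scalar loss is equivalent to negating the loss and calling plain backward(), i.e. it turns gradient descent into gradient ascent on that term.
import torch

d_out = torch.randn(8, requires_grad=True)    # stand-in for D outputs

# variant 1: backward with an explicit gradient of -1, as in the WGAN code
d_out.mean().backward(torch.tensor(-1.0))
grad_mone = d_out.grad.clone()

# variant 2: negate the loss and call plain backward()
d_out.grad = None
(-d_out.mean()).backward()
grad_neg = d_out.grad.clone()

assert torch.allclose(grad_mone, grad_neg)    # identical gradients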

How to estimate the parameters of a mixture model in OpenTURNS?

I would like to estimate the parameters of a mixture of normal distributions in OpenTURNS (that is, a distribution whose density is a weighted sum of Gaussian densities). OpenTURNS can create such a mixture, but it cannot estimate its parameters. Moreover, I need to create the mixture as an OpenTURNS distribution in order to propagate uncertainty through a function.
For example, I know how to create a mixture of two normal distributions:
import openturns as ot
mu1 = 1.0
sigma1 = 0.5
mu2 = 3.0
sigma2 = 2.0
weights = [0.3, 0.7]
n1 = ot.Normal(mu1, sigma1)
n2 = ot.Normal(mu2, sigma2)
m = ot.Mixture([n1, n2], weights)
In this example, I would like to estimate mu1, sigma1, mu2, sigma2 on a given sample. In order to create a working example, it is easy to generate a sample by simulation.
s = m.getSample(100)
You can rely on scikit-learn's GaussianMixture to estimate the parameters and then use them to define a Mixture model in OpenTURNS.
The script below contains a Python class MixtureFactory that estimates the parameters of a scikit-learn GaussianMixture and outputs an OpenTURNS Mixture distribution:
from sklearn.mixture import GaussianMixture
from sklearn.utils.validation import check_is_fitted
import openturns as ot
import numpy as np
class MixtureFactory(GaussianMixture):
"""
Representation of a Gaussian mixture model probability distribution.
This class allows to estimate the parameters of a Gaussian mixture
distribution using scikit algorithms & provides openturns Mixture object.
Read more in scikit learn user guide & openturns theory.
Parameters:
-----------
n_components : int, defaults to 1.
The number of mixture components.
covariance_type : {'full' (default), 'tied', 'diag', 'spherical'}
String describing the type of covariance parameters to use.
Must be one of:
'full'
each component has its own general covariance matrix
'tied'
all components share the same general covariance matrix
'diag'
each component has its own diagonal covariance matrix
'spherical'
each component has its own single variance
tol : float, defaults to 1e-3.
The convergence threshold. EM iterations will stop when the
lower bound average gain is below this threshold.
reg_covar : float, defaults to 1e-6.
Non-negative regularization added to the diagonal of covariance.
Allows to assure that the covariance matrices are all positive.
max_iter : int, defaults to 100.
The number of EM iterations to perform.
n_init : int, defaults to 1.
The number of initializations to perform. The best results are kept.
init_params : {'kmeans', 'random'}, defaults to 'kmeans'.
The method used to initialize the weights, the means and the
precisions.
Must be one of::
'kmeans' : responsibilities are initialized using kmeans.
'random' : responsibilities are initialized randomly.
weights_init : array-like, shape (n_components, ), optional
The user-provided initial weights, defaults to None.
If it None, weights are initialized using the `init_params` method.
means_init : array-like, shape (n_components, n_features), optional
The user-provided initial means, defaults to None,
If it None, means are initialized using the `init_params` method.
precisions_init : array-like, optional.
The user-provided initial precisions (inverse of the covariance
matrices), defaults to None.
If it None, precisions are initialized using the 'init_params' method.
The shape depends on 'covariance_type'::
(n_components,) if 'spherical',
(n_features, n_features) if 'tied',
(n_components, n_features) if 'diag',
(n_components, n_features, n_features) if 'full'
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
warm_start : bool, default to False.
If 'warm_start' is True, the solution of the last fitting is used as
initialization for the next call of fit(). This can speed up
convergence when fit is called several times on similar problems.
In that case, 'n_init' is ignored and only a single initialization
occurs upon the first call.
See :term:`the Glossary <warm_start>`.
verbose : int, default to 0.
Enable verbose output. If 1 then it prints the current
initialization and each iteration step. If greater than 1 then
it prints also the log probability and the time needed
for each step.
verbose_interval : int, default to 10.
Number of iteration done before the next print.
"""
def __init__(self, n_components=2, covariance_type='full', tol=1e-6,
reg_covar=1e-6, max_iter=1000, n_init=1, init_params='kmeans',
weights_init=None, means_init=None, precisions_init=None,
random_state=41, warm_start=False,
verbose=0, verbose_interval=10):
        # pass the arguments by keyword so this also works with recent scikit-learn versions
        super().__init__(
            n_components=n_components, covariance_type=covariance_type,
            tol=tol, reg_covar=reg_covar, max_iter=max_iter, n_init=n_init,
            init_params=init_params, weights_init=weights_init,
            means_init=means_init, precisions_init=precisions_init,
            random_state=random_state, warm_start=warm_start,
            verbose=verbose, verbose_interval=verbose_interval)
def fit(self, X):
"""
Fit the mixture model parameters.
EM algorithm is applied here to estimate the model parameters and build a
Mixture distribution (see openturns mixture).
The method fits the model ``n_init`` times and sets the parameters with
which the model has the largest likelihood or lower bound. Within each
trial, the method iterates between E-step and M-step for ``max_iter``
times until the change of likelihood or lower bound is less than
``tol``, otherwise, a ``ConvergenceWarning`` is raised.
If ``warm_start`` is ``True``, then ``n_init`` is ignored and a single
initialization is performed upon the first call. Upon consecutive
calls, training starts where it left off.
Parameters
----------
X : array-like, shape (n_samples, n_features)
List of n_features-dimensional data points. Each row
corresponds to a single data point.
Returns
-------
"""
data = np.array(X)
# Evaluate the model parameters.
super().fit(data)
# openturns mixture
# n_components ==> weight of size n_components
weights = self.weights_
n_components = len(weights)
# Create ot distribution
collection = n_components * [0]
# Covariance matrices
cov = self.covariances_
mu = self.means_
# means : n_components x n_features
n_components, n_features = mu.shape
# Following the type of covariance, we define the collection of gaussians
# Spherical : C_k = Identity * sigma_k
        if self.covariance_type == 'spherical':
c = ot.CorrelationMatrix(n_features)
for l in range(n_components):
sigma = np.sqrt(cov[l])
collection[l] = ot.Normal(list(mu[l]), [ sigma ] * n_features , c)
        elif self.covariance_type == 'diag':
for l in range(n_components):
c = ot.CovarianceMatrix(n_features)
for i in range(n_features):
c[i,i] = cov[l, i]
collection[l] = ot.Normal(list(mu[l]), c)
elif self.covariance_type == 'tied':
# Same covariance for all clusters
c = ot.CovarianceMatrix(n_features)
for i in range(n_features):
for j in range(0, i+1):
c[i,j] = cov[i,j]
# Define the collection with the same covariance
for l in range(n_components):
collection[l] = ot.Normal(list(mu[l]), c)
else:
n_features = cov.shape[1]
for l in range(n_components):
c = ot.CovarianceMatrix(n_features)
for i in range(n_features):
for j in range(0, i+1):
c[i,j] = cov[l][i,j]
collection[l] = ot.Normal(list(mu[l]), c)
self._mixture = ot.Mixture(collection, weights)
return self
def get_mixture(self):
"""
Returns the Mixture object
"""
check_is_fitted(self)
return self._mixture
if __name__ == "__main__":
mu1 = 1.0
sigma1 = 0.5
mu2 = 3.0
sigma2 = 2.0
weights = [0.3, 0.7]
n1 = ot.Normal(mu1, sigma1)
n2 = ot.Normal(mu2, sigma2)
m = ot.Mixture([n1, n2], weights)
x = m.getSample(1000)
est_dist = MixtureFactory(random_state=1)
est_dist.fit(x)
print(est_dist.get_mixture())
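As a possible follow-up (my addition, not part of the original answer): once the OpenTURNS Mixture is available, it behaves like any other distribution and can be used to propagate uncertainty through a model, for instance:
mixture = est_dist.get_mixture()
g = ot.SymbolicFunction(["x"], ["sin(x) + x^2"])            # arbitrary toy model
output_rv = ot.CompositeRandomVector(g, ot.RandomVector(mixture))
out_sample = output_rv.getSample(10000)
print("mean of g(X):", out_sample.computeMean()[0])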
I have actually tried this method and it works perfectly. On top of that, fitting the model with the scikit-learn GMM and the subsequent conversion to an OpenTURNS Mixture are both very fast. I recommend that future users test several numbers of components and covariance matrix structures, as it does not take much time and can substantially improve the goodness of fit to the data.
Thanks for the answer.
Here is a pure OpenTURNS solution. It is probably slower than the scikit-learn-based method, but it is more generic: you could use it to estimate the parameters of any mixture model, not necessarily a mixture of normal distributions.
The idea is to retrieve the log-likelihood function from the Mixture object and minimize it.
In the following, let us assume that s is the sample we want to fit the mixture on.
First, we need to build the mixture we want to estimate the parameters of. We can specify any valid set of parameters, it does not matter. In your example, you want a mixture of 2 normal distributions.
mixture = ot.Mixture([ot.Normal()]*2, [0.5]*2)
There is a small hurdle. All weights sum to 1, thus one of them is determined by the others: the solver must not be allowed to freely set it. The order of the parameters of an OpenTURNS Mixture is as follows:
weight of the first distribution;
parameters of the first distribution;
weight of the second distribution;
parameters of the second distribution;
...
You can view all parameters with mixture.getParameter() and their names with mixture.getParameterDescription(). The following is a helper function that:
takes as input a list containing all mixture parameters except the weight of the first distribution;
outputs a Point containing all parameters including the weight of the first distribution.
def full(params):
"""
Point of all mixture parameters from a list that omits the first weight.
"""
params = ot.Point(params)
aux_mixture = ot.Mixture(mixture)
dist_number = aux_mixture.getDistributionCollection().getSize()
index = aux_mixture.getDistributionCollection()[0].getParameter().getSize()
list_weights = []
for num in range(1, dist_number):
list_weights.append(params[index])
index += 1 + aux_mixture.getDistributionCollection()[num].getParameter().getSize()
complementary_weight = ot.Point([abs(1.0 - sum(list_weights))])
complementary_weight.add(params)
return complementary_weight
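As a quick illustrative check (my addition, assuming mixture is defined as above), applying full to the parameter vector without its first weight should reconstruct the complete parameter Point:
reduced = mixture.getParameter()[1:]        # all parameters except the first weight
print(full(reduced))                        # should match mixture.getParameter()
print(mixture.getParameterDescription())    # names of all parameters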
The next function computes the opposite of the log-likelihood of a given list of parameters (except the first weight).
For the sake of numerical stability, it divides this value by the number of observations.
We will minimize this function in order to find the Maximum Likelihood Estimate.
def minus_log_pdf(params):
"""
- log-likelihood of a list of parameters excepting the first weight
divided by the number of observations
"""
aux_mixture = ot.Mixture(mixture)
full_params = full(params)
try:
aux_mixture.setParameter(full_params)
except TypeError:
# case where the proposed parameters are invalid:
# return a huge value
return [ot.SpecFunc.LogMaxScalar]
res = - aux_mixture.computeLogPDF(s).computeMean()
return res
To use OpenTURNS optimization facilities, we need to turn this function into a PythonFunction object.
OT_minus_log_pdf = ot.PythonFunction(mixture.getParameter().getSize()-1, 1, minus_log_pdf)
Cobyla is usually good at likelihood optimization.
problem = ot.OptimizationProblem(OT_minus_log_pdf)
algo = ot.Cobyla(problem)
In order to decrease the chances of Cobyla getting stuck in a local minimum, we are going to use MultiStart. We pick a starting set of parameters and randomly vary the weights. The following helper function makes this easy:
def random_weights(params, nb):
"""
List of nb Points representing mixture parameters with randomly varying weights.
"""
aux_mixture = ot.Mixture(mixture)
full_params = full(params)
aux_mixture.setParameter(full_params)
list_params = []
for num in range(nb):
dirichlet = ot.Dirichlet([1.0] * aux_mixture.getDistributionCollection().getSize()).getRealization()
dirichlet.add(1.0 - sum(dirichlet))
aux_mixture.setWeights(dirichlet)
list_params.append(aux_mixture.getParameter()[1:])
return list_params
We pick 10 starting points and increase the maximum number of evaluations of the log-likelihood from the default of 100 to 10000.
init = mixture.getParameter()[1:]
starting_points = random_weights(init, 10)
algo_multistart = ot.MultiStart(algo, starting_points)
algo_multistart.setMaximumEvaluationNumber(10000)
Let's run the solver and retrieve the result.
algo_multistart.run()
result = algo_multistart.getResult()
All that remains is to set the mixture's parameters to the optimal value.
We must not forget to add the first weight back!
optimal_parameters = result.getOptimalPoint()
mixture.setParameter(full(optimal_parameters))
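To inspect the result (an illustrative addition), you can print the fitted parameters next to their names and check the mean log-likelihood that was reached:
print(mixture.getParameterDescription())
print(mixture.getParameter())
print("mean log-likelihood:", mixture.computeLogPDF(s).computeMean()[0])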
Below is an alternative.
The first step creates a new GaussianMixture class, derived from PythonDistribution. The key point is to implement the computeLogPDF method and the getParameter/setParameter methods. Notice that this parametrization of the mixture has a single weight w.
class GaussianMixture(ot.PythonDistribution):
def __init__(self, mu1 = -5.0, sigma1 = 1.0, \
mu2 = 5.0, sigma2 = 1.0, \
w = 0.5):
super(GaussianMixture, self).__init__(1)
if w < 0.0 or w > 1.0:
raise ValueError('The weight is not in [0, 1]. w=%s.' % (w))
        self.mu1 = mu1
self.sigma1 = sigma1
self.mu2 = mu2
self.sigma2 = sigma2
self.w = w
collDist = [ot.Normal(mu1, sigma1), ot.Normal(mu2, sigma2)]
weight = [w, 1.0 - w]
self.distribution = ot.Mixture(collDist, weight)
def computeCDF(self, x):
p = self.distribution.computeCDF(x)
return p
def computePDF(self, x):
p = self.distribution.computePDF(x)
return p
def computeQuantile(self, prob, tail = False):
quantile = self.distribution.computeQuantile(prob, tail)
return quantile
def getSample(self, size):
X = self.distribution.getSample(size)
return X
def getParameter(self):
parameter = ot.Point([self.mu1, self.sigma1, \
self.mu2, self.sigma2, \
self.w])
return parameter
def setParameter(self, parameter):
[mu1, sigma1, mu2, sigma2, w] = parameter
self.__init__(mu1, sigma1, mu2, sigma2, w)
return parameter
def computeLogPDF(self, sample):
logpdf = self.distribution.computeLogPDF(sample)
return logpdf
In order to create the distribution, we use the Distribution class:
gm = ot.Distribution(GaussianMixture())
Estimating the parameters of this distribution is straightforward with MaximumLikelihoodFactory. However, we must set the bounds, because the sigmas cannot be negative and w must lie in (0, 1).
factory = ot.MaximumLikelihoodFactory(gm)
lowerBound = [0.0, 1.e-6, 0.0, 1.e-6, 0.01]
upperBound = [0.0, 0.0, 0.0, 0.0, 0.99]
finiteLowerBound = [False, True, False, True, True]
finiteUpperBound = [False, False, False, False, True]
bounds = ot.Interval(lowerBound, upperBound, finiteLowerBound, finiteUpperBound)
factory.setOptimizationBounds(bounds)
Then we configure the optimization solver.
solver = factory.getOptimizationAlgorithm()
startingPoint = [-4.0, 1.0, 7.0, 1.5, 0.3]
solver.setStartingPoint(startingPoint)
factory.setOptimizationAlgorithm(solver)
The parameters are then estimated with the build method (here sample is the data to fit, e.g. the sample s used above).
distribution = factory.build(sample)
There are two limitations with this implementation.
First, it is not as fast as it should be, because of a limitation of the PythonDistribution (see https://github.com/openturns/openturns/issues/1391).
Second, estimating the parameters may be difficult, because the problem may have local optima that the default algorithm in MaximumLikelihoodFactory cannot escape. This kind of task is generally done with the EM algorithm.

Using mvtnorm package to build my own multinomial probit model

I am trying to build a multinomial probit model using the optimx and mvtnorm packages. I am not using the existing MNP libraries since I will be extending the model to multivariate cases later.
For simplicity, I am trying to build a model that has three alternatives, say car, bus and train. Each individual chooses one of these three alternatives.
Following is how I defined the model. I started initially with a constants only model.
# defining the utilities of each alternative (assuming only constants), pars[1] and pars[2] are the coefficients to be estimated #
ucar <<- 0
ubus <<- parm[1]
utrain <<- parm[2]
# defining the variance-covariance matrix, initially assuming correlations are zero and variances are of unit magnitude #
cormat <<- matrix(c(1,0,0,0,1,0,0,0,1), nrow = 3, ncol = 3)
# determining the probabilities of choosing car, bus and train #
pcar <<- pmvnorm(lower = c(-Inf, -Inf, -Inf), upper = c(ucar,-ubus,-utrain), mean = c(0,0,0), sigma = cormat)
pbus <<- pmvnorm(lower = c(-Inf, -Inf, -Inf), upper = c(-ucar,ubus,-utrain), mean = c(0,0,0), sigma = cormat)
ptrain <<- pmvnorm(lower = c(-Inf, -Inf, -Inf), upper = c(-ucar,-ubus,utrain), mean = c(0,0,0), sigma = cormat)
# extracting only the probabilities from mvtnorm and finding out the log of probabilities #
pcar_1 <<- pcar[[1]]
pbus_1 <<- pbus[[1]]
ptrain_1 <<- ptrain[[1]]
lpcar <<- log(pcar_1)
lpbus <<- log(pbus_1)
lptrain <<- log(ptrain_1)
# defining the log-likelihood function, where car, bus and train are dummy indicators for chosen alternatives #
ll = -1*(car*lpcar + bus*lpbus + train*lptrain)
I minimize the above ll (the negative log-likelihood) using optimx to get the parameters.
The above model works fine. The problem comes when I add an independent variable to the model: I get the error "Cannot evaluate function at initial parameters". What am I doing wrong in the code?

Why/how can model.forward() succeed both on input being mini-batch vs single item?

Why and how does this work?
When I run the forward pass on an input that is either a mini-batch tensor or a single input item, model.__call__() (which AFAIK calls forward()) accepts it and produces the corresponding output (i.e. a tensor with a mini-batch of estimates, or a single estimate).
The test code below, adapted from the PyTorch NN example, shows what I mean, but I don't understand why it works.
I would expect it to cause problems, forcing me to transform the single-item input into a mini-batch of size 1 (reshape(1, xxx)) or similar, as I did in the code below.
(I ran variations of the test to make sure it does not, for example, depend on execution order.)
# -*- coding: utf-8 -*-
import torch
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
#N, D_in, H, D_out = 64, 1000, 100, 10
N, D_in, H, D_out = 64, 10, 4, 3
# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)
# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. Each Linear Module computes output from input using a
# linear function, and holds internal Tensors for its weight and bias.
model = torch.nn.Sequential(
torch.nn.Linear(D_in, H),
torch.nn.ReLU(),
torch.nn.Linear(H, D_out),
)
# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(reduction='sum')
learning_rate = 1e-4
for t in range(1):
# Forward pass: compute predicted y by passing x to the model. Module objects
# override the __call__ operator so you can call them like functions. When
# doing so you pass a Tensor of input data to the Module and it produces
# a Tensor of output data.
model.eval()
print ("###########")
print ("x[0]",x[0])
print ("x[0].size()", x[0].size())
y_1pred = model(x[0])
print ("y_1pred.size()", y_1pred.size())
print (y_1pred)
model.eval()
print ("###########")
print ("x.size()", x.size())
y_pred = model(x)
print ("y_pred.size()", y_pred.size())
print ("y_pred[0]", y_pred[0])
print ("###########")
model.eval()
input_item = x[0]
batch_len1_shape = torch.Size([1,*(input_item.size())])
batch_len1 = input_item.reshape(batch_len1_shape)
y_pred_batch_len1 = model(batch_len1)
print ("input_item",input_item)
print ("input_item.size()", input_item.size())
print ("y_pred_batch_len1.size()", y_pred_batch_len1.size())
print (y_1pred)
raise Exception
This is the output it generates:
###########
x[0] tensor([-1.3901, -0.2659, 0.4352, -0.6890, 0.1098, -0.3124, 0.6419, 1.1004,
-0.7910, -0.5389])
x[0].size() torch.Size([10])
y_1pred.size() torch.Size([3])
tensor([-0.5366, -0.4826, 0.0538], grad_fn=<AddBackward0>)
###########
x.size() torch.Size([64, 10])
y_pred.size() torch.Size([64, 3])
y_pred[0] tensor([-0.5366, -0.4826, 0.0538], grad_fn=<SelectBackward>)
###########
input_item tensor([-1.3901, -0.2659, 0.4352, -0.6890, 0.1098, -0.3124, 0.6419, 1.1004,
-0.7910, -0.5389])
input_item.size() torch.Size([10])
y_pred_batch_len1.size() torch.Size([1, 3])
tensor([-0.5366, -0.4826, 0.0538], grad_fn=<AddBackward0>)
The docs on nn.Linear state that
Input: (N,∗,in_features) where ∗ means any number of additional dimensions
so one would naturally expect that at least two dimensions are necessary. However, if we look under the hood we see that Linear is implemented in terms of nn.functional.linear, which dispatches to torch.addmm or torch.matmul (depending on whether a bias is used), and those broadcast their arguments.
So this behavior is likely a bug (or an error in documentation) and I would not depend on it working in the future, if I were you.
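If you prefer not to rely on that behavior, the explicit pattern is to add a batch dimension of size 1 before the call and remove it afterwards, e.g.:
single = x[0]                       # shape [D_in]
pred = model(single.unsqueeze(0))   # shape [1, D_out]
pred = pred.squeeze(0)              # shape [D_out]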

Simple LSTM in PyTorch with Sequential module

In PyTorch, we can define architectures in multiple ways. Here, I'd like to create a simple LSTM network using the Sequential module.
In Lua's torch I would usually go with:
model = nn.Sequential()
model:add(nn.SplitTable(1,2))
model:add(nn.Sequencer(nn.LSTM(inputSize, hiddenSize)))
model:add(nn.SelectTable(-1)) -- last step of output sequence
model:add(nn.Linear(hiddenSize, classes_n))
However, in PyTorch I can't find the equivalent of SelectTable to get the last output.
nn.Sequential(
nn.LSTM(inputSize, hiddenSize, 1, batch_first=True),
# what to put here to retrieve last output of LSTM ?,
nn.Linear(hiddenSize, classe_n))
Define a class to extract the last cell output:
# LSTM() returns tuple of (tensor, (recurrent state))
class extract_tensor(nn.Module):
    def forward(self, x):
        # x is the (output, (h_n, c_n)) tuple returned by nn.LSTM;
        # with batch_first=True, output has shape (batch, seq_len, hidden)
        tensor, _ = x
        # keep only the last time step -> shape (batch, hidden)
        return tensor[:, -1, :]
nn.Sequential(
nn.LSTM(inputSize, hiddenSize, 1, batch_first=True),
extract_tensor(),
nn.Linear(hiddenSize, classe_n)
)
According to the LSTM documentation, the output has a shape of (seq_len, batch, hidden_size * num_directions), so you can easily take the last element of the sequence this way:
rnn = nn.LSTM(10, 20, 2)
input = Variable(torch.randn(5, 3, 10))
h0 = Variable(torch.randn(2, 3, 20))
c0 = Variable(torch.randn(2, 3, 20))
output, (hn, cn) = rnn(input, (h0, c0))
print(output[-1]) # last element
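As a side note (my addition): Variable has been merged into Tensor since PyTorch 0.4, so the same snippet can be written with plain tensors:
rnn = nn.LSTM(10, 20, 2)
inp = torch.randn(5, 3, 10)             # (seq_len, batch, input_size)
h0 = torch.randn(2, 3, 20)
c0 = torch.randn(2, 3, 20)
output, (hn, cn) = rnn(inp, (h0, c0))
print(output[-1])                       # last time step, shape (batch, hidden_size)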
Tensor manipulation and neural network design in PyTorch are much easier than in Torch, so you rarely have to use containers. In fact, as stated in the tutorial PyTorch for former Torch users, PyTorch is built around autograd, so you no longer need to worry about containers. However, if you want to reuse your old Lua Torch code you can have a look at the Legacy package.
As far as I know, there is nothing like a SplitTable or a SelectTable in PyTorch. That said, you are allowed to compose an arbitrary number of modules or blocks within a single architecture, and you can use this property to retrieve the output of a certain layer. Let's make it clearer with a simple example.
Suppose I want to build a simple two-layer MLP and retrieve the output of each layer. I can build a custom class inheriting from nn.Module:
class MyMLP(nn.Module):
def __init__(self, in_channels, out_channels_1, out_channels_2):
# first of all, calling base class constructor
super().__init__()
# now I can build my modular network
self.block1 = nn.Linear(in_channels, out_channels_1)
self.block2 = nn.Linear(out_channels_1, out_channels_2)
# you MUST implement a forward(input) method whenever inheriting from nn.Module
    def forward(self, x):
# first_out will now be your output of the first block
first_out = self.block1(x)
x = self.block2(first_out)
# by returning both x and first_out, you can now access the first layer's output
return x, first_out
In your main file you can now declare the custom architecture and use it:
from myFile import MyMLP
import torch

in_ch = out_ch_1 = out_ch_2 = 64
# some fake input instance (must be a torch tensor, not a numpy array)
x = torch.rand(in_ch)
my_mlp = MyMLP(in_ch, out_ch_1, out_ch_2)
# get your outputs
final_out, first_layer_out = my_mlp(x)
Moreover, you could chain two MyMLP modules in a more complex model definition and retrieve the output of each one in a similar way, as sketched below.
I hope this is enough to clarify, but in case you have more questions, please feel free to ask, since I may have omitted something.
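For example, a hypothetical sketch of such a composition (the StackedMLP name and sizes are mine, not from the original answer):
class StackedMLP(nn.Module):
    def __init__(self, in_ch, mid_ch, out_ch):
        super().__init__()
        self.mlp_a = MyMLP(in_ch, mid_ch, mid_ch)
        self.mlp_b = MyMLP(mid_ch, mid_ch, out_ch)

    def forward(self, x):
        out_a, first_a = self.mlp_a(x)
        out_b, first_b = self.mlp_b(out_a)
        # return the final output together with both intermediate outputs
        return out_b, (first_a, first_b)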