Discrepancy in coefficient output from quantile regression

If I fit an ANOVA-style quantile regression model with only a categorical predictor and specify a non-default fitting algorithm, I get different coefficient estimates from summary.rq() than from rq() or coef(). Below is an example using the engel dataset:
# Libraries
library(data.table)
library(quantreg)
# Data
data(engel)
# Add Group
setDT(engel)
engel[,Group:=1L]
engel[1:117,Group:=0L]
# Explore
plot(foodexp~as.factor(Group),data=engel)
# Fit
fit=rq(foodexp~1+as.factor(Group),data=engel,tau=0.5,method='fn')
fit
# (Intercept) as.factor(Group)1
# 572.08066 18.40829
coef(fit)
# (Intercept) as.factor(Group)1
# 572.08066 18.40829
summary(fit,se='nid')
# Value Std. Error t value Pr(>|t|)
# (Intercept) 572.08066 21.27472 26.89016 0.00000
# as.factor(Group)1 18.40829 39.63886 0.46440 0.64279
#### These coefs are different!
summary(fit)
# coefficients lower bd upper bd
# (Intercept) 572.08066 525.92835 605.69257
# as.factor(Group)1 16.43880 -34.51631 82.25984
In the above example, my estimated Group effect is 18.4 from rq() and coef(), but is 16.4 for summary.rq().
It appears that summary.rq() (which defaults to se='rank' for n<1000) does not always recognize the specified fitting algorithm. Is this a bug?

Related

Implementation of multitask "nested" neural network

I am trying to implement a multitask neural network from a paper, but I am unsure how to code the multitask part because the authors did not provide code for it.
The network architecture is shown in a figure in the paper.
To make it simpler, the architecture can be generalized as follows (for the demo I replaced their more complicated operation on the pair of individual embeddings with concatenation): a shared encoder feeds one MLP that classifies individual items and a second MLP that classifies pairs.
The authors sum the loss from the individual tasks and the pairwise tasks, and use the total loss to optimize the parameters of the three networks (encoder, MLP-1, MLP-2) in each batch, but I am rather at sea as to how the different types of data are combined in a single batch and fed into two different networks that share an initial encoder. I tried to search for other networks with a similar structure but did not find any sources. Would appreciate any thoughts!
This is actually a common pattern; it can be handled with code like the following.
class Network(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = DrugTargetInteractionNetwork()
        self.mlp1 = ClassificationMLP()
        self.mlp2 = PairwiseMLP()

    def forward(self, data_a, data_b):
        # shared encoder for both inputs
        a_encoded = self.encoder(data_a)
        b_encoded = self.encoder(data_b)
        # individual (per-item) task head
        a_classified = self.mlp1(a_encoded)
        b_classified = self.mlp1(b_encoded)
        # Assume a_encoded and b_encoded are of shape
        # [batch_size, n_molecules, n_features], and that the two
        # n_molecules are not necessarily equal.
        # This can be generalized to more dimensions.
        a_broadcast, b_broadcast = torch.broadcast_tensors(
            a_encoded[:, None, :, :],  # [B, 1, N1, F]
            b_encoded[:, :, None, :],  # [B, N2, 1, F]
        )  # both broadcast to [B, N2, N1, F]
        # This works if mlp2 accepts an arbitrary number of leading
        # dimensions and just broadcasts over them. That's true, for
        # example, if it uses only Linear and pointwise operations,
        # but it may fail if it makes specific assumptions about the
        # number of dimensions of its inputs.
        pairwise_classified = self.mlp2(a_broadcast, b_broadcast)
        # If that is a problem, you have to reshape so that it works.
        # Most torch models accept at least a leading batch dimension
        # for vectorization, so we can "fold" the pairwise dimensions
        # into the batch dimension, presenting the input as
        # [batch * n_mol_2 * n_mol_1, n_features] to mlp2,
        # and then recover the shape afterwards.
        B, N2, N1, N_feat = a_broadcast.shape
        a_batched = a_broadcast.reshape(B * N2 * N1, N_feat)
        b_batched = b_broadcast.reshape(B * N2 * N1, N_feat)
        # above, -1 would suffice instead of B*N2*N1; just being explicit
        batch_output = self.mlp2(a_batched, b_batched)
        # this should be exactly the same as `pairwise_classified`
        alternative_classified = batch_output.reshape(B, N2, N1, -1)
        return a_classified, b_classified, pairwise_classified
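As an aside, here is a minimal sketch of what a broadcasting-friendly mlp2 could look like (the class name and sizes are my own assumptions, not from the paper): because it only uses nn.Linear and pointwise operations, which act on the last dimension, it accepts any number of leading dimensions.

import torch
import torch.nn as nn

class PairwiseMLP(nn.Module):
    # Hypothetical pairwise head: concatenates the two embeddings on the
    # last dimension and scores the pair. Only Linear and pointwise ops,
    # so any leading shapes ([B, F], [B, N2, N1, F], ...) are fine.
    def __init__(self, n_features=128, n_hidden=64, n_out=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_features, n_hidden),
            nn.ReLU(),
            nn.Linear(n_hidden, n_out),
        )

    def forward(self, a, b):
        # a, b: [..., n_features] with identical leading shapes
        return self.net(torch.cat([a, b], dim=-1))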

Building RNN from scratch in pytorch

I am trying to build an RNN from scratch using PyTorch, and I am following this tutorial to build it.
import torch
import torch.nn as nn
import torch.nn.functional as F
class BasicRNN(nn.Module):
    def __init__(self, n_inputs, n_neurons):
        super(BasicRNN, self).__init__()
        self.Wx = torch.randn(n_inputs, n_neurons)  # n_inputs X n_neurons
        self.Wy = torch.randn(n_neurons, n_neurons) # n_neurons X n_neurons
        self.b = torch.zeros(1, n_neurons)          # 1 X n_neurons

    def forward(self, X0, X1):
        self.Y0 = torch.tanh(torch.mm(X0, self.Wx) + self.b)  # batch_size X n_neurons
        self.Y1 = torch.tanh(torch.mm(self.Y0, self.Wy) +
                             torch.mm(X1, self.Wx) + self.b)  # batch_size X n_neurons
        return self.Y0, self.Y1

class CleanBasicRNN(nn.Module):
    def __init__(self, batch_size, n_inputs, n_neurons):
        super(CleanBasicRNN, self).__init__()
        self.rnn = BasicRNN(n_inputs, n_neurons)
        self.hx = torch.randn(batch_size, n_neurons)  # initialize hidden state

    def forward(self, X):
        output = []
        # for each time step
        for i in range(2):
            self.hx = self.rnn(X[i], self.hx)
            output.append(self.hx)
        return output, self.hx

FIXED_BATCH_SIZE = 4  # our batch size is fixed for now
N_INPUT = 3
N_NEURONS = 5

X_batch = torch.tensor([[[0,1,2], [3,4,5],
                         [6,7,8], [9,0,1]],
                        [[9,8,7], [0,0,0],
                         [6,5,4], [3,2,1]]
                       ], dtype=torch.float)  # X0 and X1

model = CleanBasicRNN(FIXED_BATCH_SIZE, N_INPUT, N_NEURONS)
a1, a2 = model(X_batch)
Running this code returns this error
RuntimeError: size mismatch, m1: [4 x 5], m2: [3 x 5] at /pytorch/..
After some digging, I found that this error happens when passing the hidden state to the BasicRNN model:
N_INPUT = 3    # number of features in input
N_NEURONS = 5  # number of units in layer

X0_batch = torch.tensor([[0,1,2], [3,4,5],
                         [6,7,8], [9,0,1]],
                        dtype=torch.float)  # t=0 => 4 X 3
X1_batch = torch.tensor([[9,8,7], [0,0,0],
                         [6,5,4], [3,2,1]],
                        dtype=torch.float)  # t=1 => 4 X 3

test_model = BasicRNN(N_INPUT, N_NEURONS)
a1, a2 = test_model(X0_batch, X1_batch)
a1, a2 = test_model(X0_batch, torch.randn(1, N_NEURONS))  # THIS LINE GIVES ERROR
What is happening with the hidden states, and how can I solve this problem?
Maybe the tutorial is wrong: torch.mm(X1, self.Wx) multiplies a 4 x 5 tensor (the hidden state passed in as X1) by a 3 x 5 tensor (Wx), which doesn't work. Even if you made the shapes line up by rewriting it as torch.mm(X1, self.Wx.t()), you would expect it to output a 4 x 5 tensor, but the result is a 4 x 3 tensor.
The BasicRNN is not an implementation of an RNN cell, but rather the full RNN, fixed for two time steps, as depicted in the image in the tutorial.
There, Y0, the first time step, does not include a previous hidden state (technically zero), and Y0 is also h0, which is then used for the second time step, Y1 or h1.
An RNN cell is one of the time steps in isolation, particularly the second one, as it should include the hidden state of the previous time step.
The next hidden state is calculated as described in the nn.RNNCell documentation: h' = tanh(W_ih x + b_ih + W_hh h + b_hh).
In your BasicRNN there is only one bias term, but you still have a weight Wx for the input and a weight Wy for the hidden state, which should probably be called Wh instead. As for the forward method, its arguments become the input and the previous hidden state, instead of two inputs at different time steps. This also means that you only need one calculation, corresponding to the formula of nn.RNNCell, namely the one that was used for Y1, except that it uses the hidden state that was passed to forward.
class BasicRNN(nn.Module):
    def __init__(self, n_inputs, n_neurons):
        super(BasicRNN, self).__init__()
        self.Wx = torch.randn(n_inputs, n_neurons)  # n_inputs X n_neurons
        self.Wh = torch.randn(n_neurons, n_neurons) # n_neurons X n_neurons
        self.b = torch.zeros(1, n_neurons)          # 1 X n_neurons

    def forward(self, x, hidden):
        # one time step: combine current input with the previous hidden state
        return torch.tanh(torch.mm(x, self.Wx) + torch.mm(hidden, self.Wh) + self.b)
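To drive this cell over a sequence, the hidden state is threaded through a loop over time steps. A minimal sketch (my own, not from the tutorial); note that plain tensors created with torch.randn are not registered as trainable parameters, so for actual training the weights would need to be wrapped in nn.Parameter:

import torch

cell = BasicRNN(n_inputs=3, n_neurons=5)
X_batch = torch.randn(2, 4, 3)  # [time, batch, features]

hidden = torch.zeros(4, 5)      # h0: [batch, n_neurons]
outputs = []
for t in range(X_batch.size(0)):
    hidden = cell(X_batch[t], hidden)  # one step per iteration
    outputs.append(hidden)
# outputs[-1] is the final hidden state, shape [4, 5]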
In the tutorial, they opted to use nn.RNNCell directly instead of implementing the cell.
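For reference, a minimal sketch of the equivalent nn.RNNCell usage (my own snippet, matching the dimensions from the question):

import torch
import torch.nn as nn

cell = nn.RNNCell(input_size=3, hidden_size=5)  # N_INPUT=3, N_NEURONS=5
x0 = torch.randn(4, 3)  # [batch, input_size]
h = torch.zeros(4, 5)   # initial hidden state, [batch, hidden_size]
h = cell(x0, h)         # next hidden state, [batch, hidden_size]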
Note: The terms of the matrix multiplications are in a different order, because the weights are usually transposed compared to yours, and the formula assumes the input and hidden state to be vectors (not batches). Technically, the batched inputs and hidden states would have to be transposed, and the output transposed back, for it to work with batches. It's easier to just use the transposed weight, as the result is the same due to the transpose property of matrix multiplication: (Wx)^T = x^T W^T, so applying W to a column vector corresponds to multiplying a batch of row vectors by W^T.

Using mvtnorm package to build my own multinomial probit model

I am trying to build a multinomial probit model using the optimx and mvtnorm packages. I am not using the existing MNP libraries since I will be extending this to multivariate models later.
For simplicity, I am building a model with three alternatives: say car, bus, and train. Each individual chooses one of the three.
Following is how I defined the model. I started initially with a constants only model.
# defining the utilities of each alternative (assuming only constants); parm[1] and parm[2] are the coefficients to be estimated #
ucar <<- 0
ubus <<- parm[1]
utrain <<- parm[2]
# defining the variance-covariance matrix, initially assuming correlations are zero and variances are of unit magnitude #
cormat <<- matrix(c(1,0,0,0,1,0,0,0,1), nrow = 3, ncol = 3)
# determining the probabilities of choosing car, bus and train #
pcar <<- pmvnorm(lower = c(-Inf, -Inf, -Inf), upper = c(ucar,-ubus,-utrain), mean = c(0,0,0), sigma = cormat)
pbus <<- pmvnorm(lower = c(-Inf, -Inf, -Inf), upper = c(-ucar,ubus,-utrain), mean = c(0,0,0), sigma = cormat)
ptrain <<- pmvnorm(lower = c(-Inf, -Inf, -Inf), upper = c(-ucar,-ubus,utrain), mean = c(0,0,0), sigma = cormat)
# extracting only the probabilities from mvtnorm and finding out the log of probabilities #
pcar_1 <<- pcar[[1]]
pbus_1 <<- pbus[[1]]
ptrain_1 <<- ptrain[[1]]
lpcar <<- log(pcar_1)
lpbus <<- log(pbus_1)
lptrain <<- log(ptrain_1)
# defining the log-likelihood function, where car, bus and train are dummy indicators for chosen alternatives #
ll = -1*(car*lpcar + bus*lpbus + train*lptrain)
I obtain the parameters by minimizing the above ll (the negative log-likelihood) with optimx.
The above model works fine. The problem comes when I add an independent variable to the model: I get the error "Cannot evaluate function at initial parameters". What am I doing wrong in the code?

Why/how can model.forward() succeed both on mini-batch input and on a single item?

Why and how does this work?
When I run the forward pass on an input that is a mini-batch tensor, or alternatively a single input item, model.__call__() (which AFAIK calls forward()) swallows it and spits out adequate output (i.e. a tensor of mini-batch estimates, or a single estimate item).
The test code below, adapted from the PyTorch NN example, shows what I mean, but I don't get it.
I would expect it to create problems, forcing me to transform the single-item input into a mini-batch of size 1 (reshape(1, xxx)) or the like, as I did in the code below.
(I did variations of the test to be sure it is e.g. not depending on execution order.)
# -*- coding: utf-8 -*-
import torch

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
#N, D_in, H, D_out = 64, 1000, 100, 10
N, D_in, H, D_out = 64, 10, 4, 3

# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. Each Linear Module computes output from input using a
# linear function, and holds internal Tensors for its weight and bias.
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)

# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(reduction='sum')

learning_rate = 1e-4
for t in range(1):
    # Forward pass: compute predicted y by passing x to the model. Module objects
    # override the __call__ operator so you can call them like functions. When
    # doing so you pass a Tensor of input data to the Module and it produces
    # a Tensor of output data.
    model.eval()
    print("###########")
    print("x[0]", x[0])
    print("x[0].size()", x[0].size())
    y_1pred = model(x[0])
    print("y_1pred.size()", y_1pred.size())
    print(y_1pred)

    model.eval()
    print("###########")
    print("x.size()", x.size())
    y_pred = model(x)
    print("y_pred.size()", y_pred.size())
    print("y_pred[0]", y_pred[0])

    print("###########")
    model.eval()
    input_item = x[0]
    batch_len1_shape = torch.Size([1, *(input_item.size())])
    batch_len1 = input_item.reshape(batch_len1_shape)
    y_pred_batch_len1 = model(batch_len1)
    print("input_item", input_item)
    print("input_item.size()", input_item.size())
    print("y_pred_batch_len1.size()", y_pred_batch_len1.size())
    print(y_1pred)

raise Exception
This is the output it generates:
###########
x[0] tensor([-1.3901, -0.2659, 0.4352, -0.6890, 0.1098, -0.3124, 0.6419, 1.1004,
-0.7910, -0.5389])
x[0].size() torch.Size([10])
y_1pred.size() torch.Size([3])
tensor([-0.5366, -0.4826, 0.0538], grad_fn=<AddBackward0>)
###########
x.size() torch.Size([64, 10])
y_pred.size() torch.Size([64, 3])
y_pred[0] tensor([-0.5366, -0.4826, 0.0538], grad_fn=<SelectBackward>)
###########
input_item tensor([-1.3901, -0.2659, 0.4352, -0.6890, 0.1098, -0.3124, 0.6419, 1.1004,
-0.7910, -0.5389])
input_item.size() torch.Size([10])
y_pred_batch_len1.size() torch.Size([1, 3])
tensor([-0.5366, -0.4826, 0.0538], grad_fn=<AddBackward0>)
The docs on nn.Linear state that
Input: (N,∗,in_features) where ∗ means any number of additional dimensions
so one would naturally expect that at least two dimensions are necessary. However, if we look under the hood, we will see that Linear is implemented in terms of nn.functional.linear, which dispatches to torch.addmm or torch.matmul (depending on whether bias == True), both of which broadcast their arguments.
So this behavior is likely a bug (or an error in documentation) and I would not depend on it working in the future, if I were you.
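A quick way to see the behavior in question (my own snippet, not from the question's code): a 1-D input goes through nn.Linear even though the documented shape suggests a batch dimension is required.

import torch

lin = torch.nn.Linear(10, 3)
x_single = torch.randn(10)     # a single item, no batch dimension
x_batch = torch.randn(64, 10)  # a mini-batch of 64 items

print(lin(x_single).size())    # torch.Size([3])
print(lin(x_batch).size())     # torch.Size([64, 3])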

How to change the padding for semantic segmentation?

I am trying to run UNet on my data, which consists of grayscale images at 256x256 resolution. UNet downsamples the image to 1-by-5-by-84-by-84 (5 is the number of classes), and I am getting the following error:
I0501 02:16:17.345309 2433 net.cpp:400] loss -> loss
I0501 02:16:17.345317 2433 layer_factory.hpp:77] Creating layer loss
F0501 02:16:17.345377 2433 softmax_loss_layer.cpp:47] Check failed: outer_num_ * inner_num_ == bottom[1]->count() (7056 vs. 65536) Number of labels must match number of predictions; e.g., if softmax axis == 1 and prediction shape is (N, C, H, W), label count (number of labels) must be N*H*W, with integer values in {0, 1, ..., C-1}.
*** Check failure stack trace: ***
# 0x7f7d2c9575cd google::LogMessage::Fail()
# 0x7f7d2c959433 google::LogMessage::SendToLog()
# 0x7f7d2c95715b google::LogMessage::Flush()
# 0x7f7d2c959e1e google::LogMessageFatal::~LogMessageFatal()
# 0x7f7d2d02d4be caffe::SoftmaxWithLossLayer<>::Reshape()
# 0x7f7d2d0c61df caffe::Net<>::Init()
# 0x7f7d2d0c7a91 caffe::Net<>::Net()
# 0x7f7d2d0e1a4a caffe::Solver<>::InitTrainNet()
# 0x7f7d2d0e2db7 caffe::Solver<>::Init()
# 0x7f7d2d0e315a caffe::Solver<>::Solver()
# 0x7f7d2cf7b9f3 caffe::Creator_SGDSolver<>()
# 0x40a6d8 train()
# 0x4075a8 main
# 0x7f7d2b40b830 __libc_start_main
# 0x407d19 _start
# (nil) (unknown)
Could someone please let me know how I should set the padding values to get exactly the input size in the output prediction? (The check fails because the label count is 65536 = 256x256, while the prediction provides only 7056 = 84x84 locations.) I do not know which layers I should change, or how.
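For what it's worth, the spatial size of a convolution output follows out = floor((in + 2*pad - kernel) / stride) + 1, which is why unpadded 3x3 convolutions (as in the original UNet) shrink the feature maps at every layer. A quick illustration of the arithmetic in Python (my own sketch, not specific to any Caffe layer definition):

def conv_out(size, kernel, pad=0, stride=1):
    # standard convolution output-size formula
    return (size + 2 * pad - kernel) // stride + 1

print(conv_out(256, kernel=3, pad=0))  # 254: an unpadded 3x3 conv shrinks the map
print(conv_out(256, kernel=3, pad=1))  # 256: pad 1 keeps a 3x3 conv the same size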