I have a vector which consists of a concatenation of the features of a video sequence across 7 frames.
I would like to apply a 1D convolution to this vector such that only part of a frame is processed.
Say the feature vector for one frame has length 10;
my input feature vector would then have a total length of 7 x 10 = 70.
Now I want two convolutions to work on different parts of that vector:
conv1 should process features 1:5,
conv2 should process features 6:10,
and the stride for both would be 10,
so each convolution filter applies to the same features, only in different frames (conv1 would cover positions 1:5, 11:15, ..., 61:65, and conv2 would cover 6:10, 16:20, ..., 66:70).
Basically I would need to specify an offset for the second conv filter. Is that possible?
On the Caffe website they only mention zero padding, but for an offset I would need negative padding.
Is something like this possible?
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "data"
  top: "conv2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 90
    kernel_h: 1
    kernel_w: 5
    pad_h: 0
    pad_w: -5
    stride: 10
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
I think you can achieve this by using a Slice layer instead of negative padding; see the sketch below.
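A minimal sketch of what I mean, assuming the 70-long vector is first reshaped so that the 7 frames lie along the height and the 10 per-frame features along the width (layer/blob names and num_output are placeholders, adjust to your net):

layer {
  name: "reshape_frames"
  type: "Reshape"
  bottom: "data"
  top: "data_frames"
  reshape_param { shape { dim: 0 dim: 1 dim: 7 dim: 10 } }
}
# Split the 10 features of every frame into 1:5 and 6:10.
layer {
  name: "slice_features"
  type: "Slice"
  bottom: "data_frames"
  top: "feat_1to5"
  top: "feat_6to10"
  slice_param { axis: 3 slice_point: 5 }
}
# Each convolution now only ever sees "its" 5 features of every frame,
# so no offset or negative padding is needed.
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "feat_1to5"
  top: "conv1"
  convolution_param { num_output: 90 kernel_h: 1 kernel_w: 5 }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "feat_6to10"
  top: "conv2"
  convolution_param { num_output: 90 kernel_h: 1 kernel_w: 5 }
}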
Does anyone know how, in linear regression, when calculating the sum of squared residuals (for Ridge and Lasso), I can plot it in the coordinate system?
X: [1 2 3 4 5]
y: [3 5 2 4 8]
SSR: 13.1
slope: 0.9
Ridge (L2): 13.91 - how do I represent this regularization as a straight line?
Lasso (L1): 14 - how do I represent this regularization as a straight line?
I've calculated the Ridge and Lasso values, but I can't plot them in the coordinate system!
Can someone help me?
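For reference, the quoted numbers can be reproduced with an ordinary least-squares fit plus an L2/L1 penalty on the slope; the sketch below assumes a penalty weight (lambda) of 1, which is not stated in the question:

import numpy as np
import matplotlib.pyplot as plt

X = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([3, 5, 2, 4, 8], dtype=float)

# Ordinary least-squares slope and intercept
slope = np.sum((X - X.mean()) * (y - y.mean())) / np.sum((X - X.mean()) ** 2)  # 0.9
intercept = y.mean() - slope * X.mean()                                        # 1.7

ssr = np.sum((y - (intercept + slope * X)) ** 2)  # 13.1
ridge = ssr + 1.0 * slope ** 2                    # 13.91 (lambda = 1 assumed)
lasso = ssr + 1.0 * abs(slope)                    # 14.0  (lambda = 1 assumed)

# Plot the data and the fitted line in the coordinate system
plt.scatter(X, y)
plt.plot(X, intercept + slope * X)
plt.show()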
I'm working on a deconv layer which upscales 64 channels: 64x48x48 => 64x96x96.
layer {
  bottom: "layer41_conv"
  top: "layertest_upsample"
  name: "layertest_upsample"
  type: "Deconvolution"
  convolution_param {
    num_output: 64
    group: 64
    kernel_size: 2
    pad: 0
    stride: 2
  }
}
When I print the shape of the parameters, I get:
(64, 1, 2, 2).
I expected something like
(64, 64, 2, 2), because there are 64 input channels and 64 output channels.
Can anyone explain what's going on?
You defined group: 64.
What group does (according to the manual) is:
group (g) [default 1]: If g > 1, we restrict the connectivity of each filter to a subset of the input. Specifically, the input and output channels are separated into g groups, and the k-th output group channels will be only connected to the k-th input group channels.
In your case you grouped all 64 channels into 64 groups - this means that the k-th input channel is mapped (in isolation) by a 2x2 kernel to the k-th output channel. Overall you have 64 such 2x2 mappings, and this is why your weight blob is 64x1x2x2 and not 64x64x2x2.
If you remove group: 64 you'll get the full weight matrix you expect.
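As a quick sanity check on the shape arithmetic (a rough Python sketch; Caffe stores convolution/deconvolution weights as num_output x channels_in/group x kernel_h x kernel_w):

# Weight blob shape for a (de)convolution layer, per Caffe's layout.
num_output, channels_in, kernel = 64, 64, 2

for group in (64, 1):
    weight_shape = (num_output, channels_in // group, kernel, kernel)
    print(group, weight_shape)  # 64 -> (64, 1, 2, 2), 1 -> (64, 64, 2, 2)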
Below is the last layer in my training net:
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "final"
  bottom: "label"
  top: "loss"
  loss_param {
    ignore_label: 255
    normalization: VALID
  }
}
Note that I use a softmax_loss layer. Since it computes -log(probability), it is weird that the loss can be negative, as shown below (iteration 80).
I0404 23:32:49.400624 6903 solver.cpp:228] Iteration 79, loss = 0.167006
I0404 23:32:49.400806 6903 solver.cpp:244] Train net output #0: loss = 0.167008 (* 1 = 0.167008 loss)
I0404 23:32:49.400825 6903 sgd_solver.cpp:106] Iteration 79, lr = 0.0001
I0404 23:33:25.660655 6903 solver.cpp:228] Iteration 80, loss = -1.54972e-06
I0404 23:33:25.660845 6903 solver.cpp:244] Train net output #0: loss = 0 (* 1 = 0 loss)
I0404 23:33:25.660862 6903 sgd_solver.cpp:106] Iteration 80, lr = 0.0001
I0404 23:34:00.451464 6903 solver.cpp:228] Iteration 81, loss = 1.89034
I0404 23:34:00.451661 6903 solver.cpp:244] Train net output #0: loss = 1.89034 (* 1 = 1.89034 loss)
Can anyone explain this for me? How can this happen?
Thank you very much!
PS:
The task here is semantic segmentation.
There are 20 object classes plus background in total (so 21 classes). The labels range from 0-21. The extra label 255 is ignored, as can be seen in the SoftmaxWithLoss definition at the beginning of this post.
Is Caffe running on the GPU or the CPU?
Print out the prob_data that you get after the softmax operation:
// Find the following line in your CPU or GPU Forward function:
softmax_layer_->Forward(softmax_bottom_vec_, softmax_top_vec_);
// Make sure the data is on the CPU:
const Dtype* prob_data = prob_.cpu_data();
for (int i = 0; i < prob_.count(); i++) {
  printf("%f ", prob_data[i]);
}
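If you'd rather not edit the C++ layer, a rough alternative sketch with pycaffe is to run a forward pass, read the raw scores of the "final" blob from your prototxt, and apply the softmax in numpy (the file name below is a placeholder):

import numpy as np
import caffe

# Load the training net (path is a placeholder - use your own prototxt).
net = caffe.Net('train_val.prototxt', caffe.TRAIN)
net.forward()

# Raw per-class scores from the bottom of SoftmaxWithLoss (N x classes x H x W).
scores = net.blobs['final'].data.astype(np.float64)
scores -= scores.max(axis=1, keepdims=True)   # numerical stability
prob = np.exp(scores)
prob /= prob.sum(axis=1, keepdims=True)       # softmax over the class axis

print(prob.min(), prob.max())                 # probabilities must lie in (0, 1]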
As I'm trying to fit a function to some experimental data, I've written a function with three inputs, three parameters and one output:
qrfunc = @(x, p) exp(-1*p(1)*x(:,1) - p(2)*x(:,2)) + p(3)*x(:,3) + 20;
When I generate some input and output values:
pS = [0.5; 0.3; 0.3];
x1 = [1 1 1; 1 1.1 1; 1 1.1 1.1; 2 1.2 2];
y1 = qrfunc(x1, pS);
And call the leasqr function:
pin =[1; 1; 1];
[f1, p1, kvg1, iter1, corp1, covp1, covr1, stdresid1, Z1, r21] = leasqr(x1, y1, pin, qrfunc, 0.0001);
This works correctly: the function performs 7 iterations and provides the right outputs.
But when I load x1 from my experimental data (a text file with three columns, about 1500 lines) as well as my y1 (a text file with the same number of lines) and run the same function, it only performs one iteration and does not change the parameters.
It even shows that the error margins are very high:
sqrt(diag(covp1))
ans =
3.0281e+004
3.7614e+005
1.9477e-002
What am I doing wrong? There are no error messages, no 'Convergence not achieved' or anything like that...
Edit:
The data is loaded with the commands:
load "input.txt"
load "output.txt"
Proof of loading:
size(input)
ans =
1540 3
The first few lines from my input file:
10 0.4 5
20 0.4 5
30 0.4 5
40 0.4 5
50 0.4 5
The second and third columns take different values further down the file.
In this .obj file:
o sometriangle
v 1 0 0
v 0 1 0
v 0 0 1
f 1 2 3
o somesquare
v 5 0 0
v 5 5 0
v 0 5 0
v 0 0 0
f 1 2 3 # HERE
f 1 3 4 # AND HERE
Do the marked lines refer to the vertices within their containing object, or are vertex numbers global?
The OBJ specification states:
For all elements, reference numbers are used to identify geometric vertices, texture vertices, vertex normals, and parameter space vertices.
Each of these types of vertices is numbered separately, starting with 1. This means that the first geometric vertex in the file is 1, the second is 2, and so on. The first texture vertex in the file is 1, the second is 2, and so on. The numbering continues sequentially throughout the entire file. Frequently, files have multiple lists of vertex data. This numbering sequence continues even when vertex data is separated by other data.
So this means that the vertices are indeed numbered globally, at least within the same file.
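Applied to the file in the question, the square's faces would therefore have to keep counting from the triangle's three vertices, i.e. something like:

o somesquare
v 5 0 0
v 5 5 0
v 0 5 0
v 0 0 0
f 4 5 6
f 4 6 7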