Should I scale the ground truth images in semantic segmentation? - deep-learning

I am applying CNNs for semantic segmentation and I am hoping someone here can advise me. What I am doing now is scaling the ground-truth images. I have 5 classes, and I scaled the labels to the range (0-1) in the Data layer with:
transform_param {
  scale: 0.00390625
}
I am wondering whether I am right or wrong here. Is this scaling value correct?
Whenever I do not add this scale parameter, it shows the following error:
[...]
I0510 23:21:28.086776 9072 solver.cpp:397] Test net output #0: accuracy = 0
I0510 23:21:28.086812 9072 solver.cpp:397] Test net output #1: loss = 1.9416 (* 1 = 1.9416 loss)
F0510 23:21:28.150539 9072 math_functions.cu:141] Check failed: status == CUBLAS_STATUS_SUCCESS (11 vs. 0) CUBLAS_STATUS_MAPPING_ERROR
*** Check failure stack trace: ***
# 0x7fb9d4e7f5cd google::LogMessage::Fail()
# 0x7fb9d4e81433 google::LogMessage::SendToLog()
# 0x7fb9d4e7f15b google::LogMessage::Flush()
# 0x7fb9d4e81e1e google::LogMessageFatal::~LogMessageFatal()
# 0x7fb9d56665ea caffe::caffe_gpu_asum<>()
# 0x7fb9d5633a38 caffe::SoftmaxWithLossLayer<>::Forward_gpu()
# 0x7fb9d54bde41 caffe::Net<>::ForwardFromTo()
# 0x7fb9d54bdf47 caffe::Net<>::Forward()
# 0x7fb9d54e8d28 caffe::Solver<>::Step()
# 0x7fb9d54e98ca caffe::Solver<>::Solve()
# 0x40acd4 train()
# 0x407418 main
# 0x7fb9d360f830 __libc_start_main
# 0x407ce9 _start
# (nil) (unknown)
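For context, Caffe's SoftmaxWithLoss layer expects the label blob to contain integer class indices in {0, ..., C-1} (as the error message in the U-Net question below also states), and the CUBLAS_STATUS_MAPPING_ERROR above is commonly caused by label values outside that range. The scale of 0.00390625 is 1/256, which maps 8-bit pixel values into [0, 1) and therefore does not produce valid class indices for 5 classes. A minimal sketch for inspecting what the ground-truth images actually contain (the file path and the use of Pillow/NumPy are assumptions, not part of the original setup):

import numpy as np
from PIL import Image

# Hypothetical path to one ground-truth image; adjust to your own dataset.
label = np.array(Image.open('ground_truth/0001.png'))

print(label.dtype, label.shape)
print(np.unique(label))   # for 5 classes this should be a subset of {0, 1, 2, 3, 4}

If the unique values are not already 0-4, the usual fix is to remap the pixel values to class indices when preparing the data, rather than rescaling them with transform_param.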

Related

In the deep learning course (Andrew Ng): why does linear_activation_forward with ReLU activation return A and not AL?

def L_model_forward(X, parameters):
    caches = []
    A = X
    L = len(parameters) // 2  # number of layers in the neural network

    # Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
    # The for loop starts at 1 because layer 0 is the input
    for l in range(1, L):
        A_prev = A
        # (≈ 2 lines of code)
        A, cache = linear_activation_forward(A_prev, parameters['W' + str(l)], parameters['b' + str(l)], "relu")
        caches.append(cache)
        # YOUR CODE STARTS HERE
        # YOUR CODE ENDS HERE

    # Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
    # (≈ 2 lines of code)
    AL, cache = linear_activation_forward(A, parameters['W' + str(L)], parameters['b' + str(L)], "sigmoid")
    caches.append(cache)
    # YOUR CODE STARTS HERE
    assert AL.shape == (1, X.shape[1])
    # YOUR CODE ENDS HERE

    return AL, caches
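For reference, here is a minimal, self-contained way to run the function above; the linear_activation_forward below is only a toy stand-in for the course helper (same signature, simplified cache), and the shapes are made up:

import numpy as np

def linear_activation_forward(A_prev, W, b, activation):
    # Toy stand-in: linear step followed by the chosen activation.
    Z = W @ A_prev + b
    A = np.maximum(0, Z) if activation == "relu" else 1 / (1 + np.exp(-Z))
    return A, (A_prev, W, b, Z)

np.random.seed(0)
X = np.random.randn(4, 3)                                  # 4 features, 3 examples
parameters = {
    'W1': np.random.randn(5, 4), 'b1': np.zeros((5, 1)),   # hidden layer -> ReLU
    'W2': np.random.randn(1, 5), 'b2': np.zeros((1, 1)),   # output layer -> sigmoid
}
AL, caches = L_model_forward(X, parameters)
print(AL.shape)      # (1, 3): AL is produced only by the final sigmoid layer
print(len(caches))   # 2: one cache per layer

Inside the loop only the hidden activations A exist; AL is defined only after the last (sigmoid) layer has run, which is why the loop works with A and the function returns AL at the end.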

Discrepancy in coefficient output from quantile regression

If I run an ANOVA-style quantile regression model using only a categorical predictor, and if I specify a non-default fitting algorithm, I receive different coefficient estimates from summary.rq() compared to rq() or coef(). Below is an example using the engel dataset:
# Libraries
library(data.table)
library(quantreg)
# Data
data(engel)
# Add Group
setDT(engel)
engel[,Group:=1L]
engel[1:117,Group:=0]
# Explore
plot(foodexp~as.factor(Group),data=engel)
# Fit
fit=rq(foodexp~1+as.factor(Group),data=engel,tau=0.5,method='fn')
fit
# (Intercept) as.factor(Group)1
# 572.08066 18.40829
coef(fit)
# (Intercept) as.factor(Group)1
# 572.08066 18.40829
summary(fit,se='nid')
# Value Std. Error t value Pr(>|t|)
# (Intercept) 572.08066 21.27472 26.89016 0.00000
# as.factor(Group)1 18.40829 39.63886 0.46440 0.64279
#### These coefs are different!
summary(fit)
# coefficients lower bd upper bd
# (Intercept) 572.08066 525.92835 605.69257
# as.factor(Group)1 16.43880 -34.51631 82.25984
In the above example, my estimated Group effect is 18.4 from rq() and coef(), but is 16.4 for summary.rq().
It appears that summary.rq() (which defaults to se='rank' for n<1000) does not always recognize the specified fitting algorithm. Is this a bug?
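As a side note on what the coefficients should be: with tau = 0.5 and only a binary factor, the model is saturated, so the intercept is a median of the Group 0 responses and the group coefficient is a difference of group medians; when a group has an even number of observations the median is not unique, and different fitting algorithms can legitimately return different values from that interval. A quick sketch of that arithmetic with synthetic data (NumPy instead of quantreg, purely illustrative):

import numpy as np

# Synthetic stand-in for the 117 / 118 split used above.
rng = np.random.default_rng(42)
y0 = rng.normal(570, 100, 117)   # "Group 0" responses
y1 = rng.normal(590, 100, 118)   # "Group 1" responses

# For a saturated two-group median regression, the fit reduces to group medians.
intercept = np.median(y0)
group_effect = np.median(y1) - np.median(y0)
print(round(intercept, 2), round(group_effect, 2))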

How to change the padding for semantic segmentation?

I am trying to run UNet on my data, which consists of grayscale images at 256x256 resolution. UNet downsamples the image to 1x5x84x84 (5 is the number of classes), and I am getting the following error:
I0501 02:16:17.345309 2433 net.cpp:400] loss -> loss
I0501 02:16:17.345317 2433 layer_factory.hpp:77] Creating layer loss
F0501 02:16:17.345377 2433 softmax_loss_layer.cpp:47] Check failed: outer_num_ * inner_num_ == bottom[1]->count() (7056 vs. 65536) Number of labels must match number of predictions; e.g., if softmax axis == 1 and prediction shape is (N, C, H, W), label count (number of labels) must be N*H*W, with integer values in {0, 1, ..., C-1}.
*** Check failure stack trace: ***
# 0x7f7d2c9575cd google::LogMessage::Fail()
# 0x7f7d2c959433 google::LogMessage::SendToLog()
# 0x7f7d2c95715b google::LogMessage::Flush()
# 0x7f7d2c959e1e google::LogMessageFatal::~LogMessageFatal()
# 0x7f7d2d02d4be caffe::SoftmaxWithLossLayer<>::Reshape()
# 0x7f7d2d0c61df caffe::Net<>::Init()
# 0x7f7d2d0c7a91 caffe::Net<>::Net()
# 0x7f7d2d0e1a4a caffe::Solver<>::InitTrainNet()
# 0x7f7d2d0e2db7 caffe::Solver<>::Init()
# 0x7f7d2d0e315a caffe::Solver<>::Solver()
# 0x7f7d2cf7b9f3 caffe::Creator_SGDSolver<>()
# 0x40a6d8 train()
# 0x4075a8 main
# 0x7f7d2b40b830 __libc_start_main
# 0x407d19 _start
# (nil) (unknown)
Could someone please let me know how I should set the padding values so that the output prediction has exactly the input size? I do not know which layers I should change, or how.
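For reference, a Caffe convolution layer produces a spatial output of size floor((input + 2*pad - kernel) / stride) + 1, so unpadded 3x3 convolutions (pad: 0) shave 2 pixels off at every layer, which is why the 256x256 input ends up much smaller at the prediction; with stride 1, setting pad to (kernel - 1) / 2 (e.g. pad: 1 for a 3x3 kernel) keeps the spatial size unchanged. A small sketch of that arithmetic (the layer count is illustrative, not the actual U-Net definition):

def conv_output_size(inp, kernel, pad, stride=1):
    # Caffe's formula for the output spatial size of a convolution layer.
    return (inp + 2 * pad - kernel) // stride + 1

print(conv_output_size(256, kernel=3, pad=0))   # 254: unpadded 3x3 conv shrinks the map
print(conv_output_size(256, kernel=3, pad=1))   # 256: "same" padding keeps the size

# Stacking unpadded 3x3 convolutions keeps shrinking the feature map.
size = 256
for _ in range(10):
    size = conv_output_size(size, kernel=3, pad=0)
print(size)   # 236

If every convolution in the network is padded this way (and each upsampling stage restores the factor lost by its pooling stage), the prediction comes out at the input's 256x256 resolution, and the label-count check in the error above (N*H*W) will then match.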

Getting error on [base_conv_layer.cpp:122] Check failed: channels_ % group_ == 0 (1 vs. 0), how to solve it?

When I try to train FCN32 for semantic segmentation on my own data, I get this error:
I0106 12:57:53.273977 19825 net.cpp:100] Creating Layer upscore_sign
I0106 12:57:53.273982 19825 net.cpp:434] upscore_sign <- score_fr_sign
I0106 12:57:53.274001 19825 net.cpp:408] upscore_sign -> upscore_sign
F0106 12:57:53.274119 19825 base_conv_layer.cpp:122] Check failed: channels_ % group_ == 0 (1 vs. 0)
*** Check failure stack trace: ***
# 0x7f2602e525cd google::LogMessage::Fail()
# 0x7f2602e54433 google::LogMessage::SendToLog()
# 0x7f2602e5215b google::LogMessage::Flush()
# 0x7f2602e54e1e google::LogMessageFatal::~LogMessageFatal()
# 0x7f260350701b caffe::BaseConvolutionLayer<>::LayerSetUp()
# 0x7f26033ee557 caffe::Net<>::Init()
# 0x7f26033efde1 caffe::Net<>::Net()
# 0x7f26033c5d4a caffe::Solver<>::InitTrainNet()
# 0x7f26033c7157 caffe::Solver<>::Init()
# 0x7f26033c74fa caffe::Solver<>::Solver()
# 0x7f2603400353 caffe::Creator_SGDSolver<>()
# 0x40c07a train()
# 0x408748 main
# 0x7f26014f3830 __libc_start_main
# 0x409019 _start
# (nil) (unknown)
I have not included the creation of the previous layers, but the net seems to create them successfully; once it reaches Creating Layer upscore_sign, the error occurs. I changed the solver as follows:
net: "train_val.prototxt"
#test_net: "val.prototxt"
test_iter: 200 #3000 #5105
# make test net, but don't invoke it from the solver itself
test_interval: 1000
display: 20
average_loss: 20
lr_policy: "step" #"fixed"
# lr for unnormalized softmax
base_lr: 1e-10
# high momentum
momentum: 0.99
# no gradient accumulation
iter_size: 1
max_iter: 300000
weight_decay: 0.0005
snapshot: 2000 #10000
snapshot_prefix: "snapshot/FCN32s_train"
test_initialization: false
solver_mode: GPU #+
and I changed the number of outputs from 60 to 5 (based on the number of classes in my data): convolution_param { num_output: 5 }
Can someone suggest a solution or an idea about this? What have I set or changed wrongly? Where is my mistake?
Your help is appreciated.
Check failed: channels_ % group_ == 0 (1 vs. 0)
This line really matters! You should check your num_output and group settings and make sure that channels_ % group_ == 0 holds.
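In many FCN prototxts the upscore Deconvolution layer sets both num_output and group to the number of classes (so that upsampling is done per channel), and the bottom blob feeding it must have a channel count divisible by that group. So after changing num_output to 5, check that the Deconvolution layer's num_output and group were updated to match. A trivial sanity check of the failing constraint, with hypothetical values:

# Hypothetical values; compare them with your own prototxt.
num_classes = 5
score_fr_channels = 5   # num_output of the layer that feeds upscore_sign
upscore_group = 5       # group of the upscore_sign Deconvolution layer
upscore_num_output = 5  # num_output of upscore_sign

# This mirrors the Caffe check that fails above: channels_ % group_ == 0.
assert score_fr_channels % upscore_group == 0
assert upscore_num_output == num_classes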

Failed to find HDF5 dataset data

I am quite new to Caffe and to deep learning. I want to train my model using the dataset downloaded from here.
My training data is in HDF5 format and was generated with the following parameters:
{
  "debug": false,
  "git_revision": "60c477dae59f3d1378568e2ebea054a135683e2f",
  "height": 128,
  "no_train_mirrors": false,
  "output_dir": "/tmp/parse27k_crops_64x128",
  "output_mode": "hdf5",
  "padding": 32,
  "padding_mode": "edge",
  "parse_path": "/fast_work/sudowe/parse27k",
  "single_threaded": false,
  "verbose": false,
  "width": 64
}
I have the following data layer in my training model:
layer {
  name: "data"
  type: "HDF5Data"
  top: "data"
  top: "label"
  hdf5_data_param {
    source: "path_to_caffe/caffe/examples/hdf5_classification/data/train.txt"
    batch_size: 10
  }
  include {
    phase: TRAIN
  }
}
I get the following error message when I try to train on my train.hdf5 data, which is listed in the train.txt file:
I1031 11:52:10.185920 8670 layer_factory.hpp:77] Creating layer data
I1031 11:52:10.185933 8670 net.cpp:100] Creating Layer data
I1031 11:52:10.185940 8670 net.cpp:408] data -> data
I1031 11:52:10.185957 8670 net.cpp:408] data -> label
I1031 11:52:10.185971 8670 hdf5_data_layer.cpp:79] Loading list of HDF5 filenames from: path_to_caffe/caffe/examples/hdf5_classification/data/train.txt
I1031 11:52:10.186003 8670 hdf5_data_layer.cpp:93] Number of HDF5 files: 2
F1031 11:52:10.186825 8670 hdf5.cpp:14] Check failed: H5LTfind_dataset(file_id, dataset_name_) Failed to find HDF5 dataset data
*** Check failure stack trace: ***
# 0x7f231a6a1daa (unknown)
# 0x7f231a6a1ce4 (unknown)
# 0x7f231a6a16e6 (unknown)
# 0x7f231a6a4687 (unknown)
# 0x7f231acca607 caffe::hdf5_load_nd_dataset_helper<>()
# 0x7f231acc93d5 caffe::hdf5_load_nd_dataset<>()
# 0x7f231ad5172e caffe::HDF5DataLayer<>::LoadHDF5FileData()
# 0x7f231ad50548 caffe::HDF5DataLayer<>::LayerSetUp()
# 0x7f231acaf3ac caffe::Net<>::Init()
# 0x7f231acb0235 caffe::Net<>::Net()
# 0x7f231ae0332a caffe::Solver<>::InitTrainNet()
# 0x7f231ae0442c caffe::Solver<>::Init()
# 0x7f231ae0475a caffe::Solver<>::Solver()
# 0x7f231adf8453 caffe::Creator_SGDSolver<>()
# 0x40f0fe caffe::SolverRegistry<>::CreateSolver()
# 0x408134 train()
# 0x405b3c main
# 0x7f23196adf45 (unknown)
# 0x4063ab (unknown)
# (nil) (unknown)
Any kind of help or suggestion will be really appreciated.
In Caffe, the top blobs of an HDF5 input data layer can only be named after the datasets inside the .hdf5 file.
My dataset has the following structure:
crops Dataset {27482, 3, 128, 192}
labels Dataset {27482, 12}
mean Dataset {3, 128, 192}
pids Dataset {27482}
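You can confirm which dataset names (and shapes) a file exposes with h5py, for example (assuming the file is called train.hdf5 and sits in the current directory):

import h5py

# Print the top-level datasets of the HDF5 file; these are the only names
# the HDF5Data layer's top blobs are allowed to use.
with h5py.File('train.hdf5', 'r') as f:
    for name, dset in f.items():
        print(name, dset.shape)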
With the help of @Shai I solved it like this:
layer {
  name: "data"
  type: "HDF5Data"
  top: "crops"
  top: "labels"
  include {
    phase: TRAIN
  }
  hdf5_data_param {
    source: "path_to_caffe/examples/hdf5_classification/data/train.txt"
    batch_size: 64
  }
}