I'm currently working on my bachelor project and I'm using the PointNet deep neural network.
My project group and I have created a dataset of point clouds (each an unordered list of x 3D coordinates) and segmentation files, but we can't train PointNet to predict segmentation with the dataset.
Each segmentation file is a list with the same number of rows as there are points in the corresponding point cloud, and each row is either a 1 or a 2, depending on whether the corresponding point belongs to segment 1 or segment 2.
When PointNet predicts, it outputs a list of x elements, where each element is the segment that PointNet predicts the corresponding point belongs to.
When we run the benchmark dataset from the original PointNet implementation, the system runs and can predict segmentation, so we know the error is somewhere in our dataset, even though we have tried our best to make it look like the original benchmark dataset.
The implemented PointNet uses PyTorch's Conv2d, MaxPool2d, and Linear layers. For calculating the loss, both the nn.functional.nll_loss and the nn.NLLLoss functions have been used. When using nn.NLLLoss, the weight parameter was set to a tensor of [1, 100] to combat potential class imbalance in the data.
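As a point of reference, here is a minimal sketch of how nn.functional.nll_loss expects its inputs (the shapes are hypothetical). One thing worth double-checking is that nll_loss requires class indices in the range [0, C-1], so labels stored as 1 and 2 have to be shifted down to 0 and 1:

import torch
import torch.nn.functional as F

num_classes = 2
# (batch, points, classes) log-probabilities, e.g. from a log_softmax head
log_probs = torch.randn(4, 1024, num_classes).log_softmax(dim=-1)
labels = torch.randint(1, 3, (4, 1024))  # raw labels in {1, 2}, as in the files

weight = torch.tensor([1.0, 100.0])      # per-class weight, as in the question
loss = F.nll_loss(
    log_probs.reshape(-1, num_classes),  # flatten to (batch*points, classes)
    labels.reshape(-1) - 1,              # shift {1, 2} -> {0, 1}
    weight=weight,
)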
These are the things we have tried:
We have tried downsampling the point clouds, i.e., removing points using voxel downsampling.
We have tried downscaling and normalizing all values so they are between 0 and 1, using the formula (data - np.min(data)) / (np.max(data) - np.min(data)) (a sketch of this is shown below).
We have tried running a Euclidean clustering function on the data, so that each scanned object is separated out on its own.
We have tried replicating another dataset, which was created from the same raw data and which we know has worked before.
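For reference, a minimal sketch of the normalization step described above (the array shape is hypothetical):

import numpy as np

def minmax_normalize(points):
    # Scale all coordinates into [0, 1] using the cloud's global min/max
    return (points - np.min(points)) / (np.max(points) - np.min(points))

cloud = np.random.rand(2048, 3) * 50.0  # hypothetical raw point cloud
normalized = minmax_normalize(cloud)

Note that this uses a single global min/max over all three axes; as far as I know, the original PointNet pipeline instead centers each cloud at its centroid and scales it to fit inside the unit sphere, which may be worth trying as well.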
In the attached link, images of the datafiles with a description can be found.
Cheers everyone
I am applying a Bayesian model to a CNN that has many layers (more than 3), using stochastic variational inference in the Pyro package.
However, after defining the NN, model, and guide functions and running the training loop, I found that the loss stops decreasing at ~8000 (which is extremely high). I tried different learning rates and different optimization functions, but none of them reaches a loss lower than 8000.
Finally, I changed the autoguide function (I tried AutoNormal, AutoGaussian, AutoBeta); they all stopped decreasing the loss at the same point.
The last thing I tried was AutoMultivariateNormal, and with this the loss reached negative values (it reached -1), but when I looked at the weight matrices I found that they had all turned into scalars!
The following graph represents the loss pattern for AutoMultivariateNormal:
[plot of the loss curve]
Does anyone know how to solve such a problem, and why this is happening?
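For context, this is roughly the kind of SVI setup described above, as a minimal sketch with a stand-in Bayesian linear model instead of the CNN (all names and values here are hypothetical). Note that the value returned by svi.step() is the negative ELBO summed over the data, so its absolute magnitude grows with dataset size, and ~8000 is not necessarily extreme on its own:

import torch
import pyro
import pyro.distributions as dist
from pyro.infer import SVI, Trace_ELBO
from pyro.infer.autoguide import AutoNormal
from pyro.optim import Adam

def model(x, y=None):
    # Priors over the weights of a stand-in linear model (the CNN would go here)
    w = pyro.sample("w", dist.Normal(torch.zeros(x.shape[1]), 1.0).to_event(1))
    b = pyro.sample("b", dist.Normal(0.0, 1.0))
    mean = x @ w + b
    with pyro.plate("data", x.shape[0]):
        pyro.sample("obs", dist.Normal(mean, 1.0), obs=y)

guide = AutoNormal(model)  # swap in other autoguides here
svi = SVI(model, guide, Adam({"lr": 1e-3}), loss=Trace_ELBO())

x, y = torch.randn(100, 3), torch.randn(100)  # hypothetical data
for step in range(1000):
    loss = svi.step(x, y)  # negative ELBO, summed over data points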
If the inputs to my model are x, y, z and my outputs are continuous variables a, b, c, d, I can obviously use the model to predict the vector [a, b, c, d] from [x, y, z].
However, what happens if I find myself in a situation where I have, say, the value b as a prior before inference? Can I run the network in a manner such that I am predicting [a, c, d] based on [x, y, z, b]?
Real-world example: I have an image with pixel locations (x, y) and pixel values (R, G, B). I then build a neural implicit model to predict pixel values based on pixel locations. Say I now have a situation where I have the green values for some pixels as well as their locations; can I use these green values as a prior with my original network to get an improved result? Note that I am not interested in training a new network on this data.
In mathematical terms: I have a network f(x, y, z) -> (a, b, c, d); how can I perform f(x, y, z | b) -> (a, c, d)?
I have not tried much here; I was thinking of maybe passing the prior value back through the network, but I am kind of lost.
I'm working on deep learning (supervised learning) to estimate depth images from monocular images.
The dataset currently uses KITTI data: the RGB images (the input) come from the KITTI raw data, and the data from the following link is used as ground truth.
While training a model built as a simple encoder-decoder network, the results were not very good, so I am making various attempts to improve them.
While searching for methods, I found that, because the ground truth contains many invalid areas (i.e., values that cannot be used, as shown in the image below), people train only on the valid areas by masking.
So I trained with masking, but I am curious why this result keeps coming out.
This is the training part of my code. How can I fix this problem?
for epoch in range(num_epoch):
    model.train()  ### train ###
    for batch_idx, samples in enumerate(tqdm(train_loader)):
        x_train = samples['RGB'].to(device)
        y_train = samples['groundtruth'].to(device)

        pred_depth = model(x_train)

        valid_mask = y_train != 0  #### Here is masking: keep only pixels with valid ground truth
        valid_gt_depth = y_train[valid_mask]
        valid_pred_depth = pred_depth[valid_mask]

        loss = loss_RMSE(valid_pred_depth, valid_gt_depth)

        # standard update step (omitted in the original excerpt)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
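Side note: loss_RMSE isn't shown in the excerpt; a minimal version consistent with how it's called here might look like this (hypothetical, just to make the snippet self-contained):

import torch
import torch.nn.functional as F

def loss_RMSE(pred, target):
    # Root-mean-square error over the valid (masked) pixels only
    return torch.sqrt(F.mse_loss(pred, target))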
As far as I can understand, you are trying to estimate depth from an RGB image as input. This is an ill-posed problem, since the same input image can project to multiple plausible depth values. You would need to integrate certain techniques to estimate accurate depth from RGB images instead of simply taking an L1 or L2 loss between an RGB image and its corresponding depth image.
I would suggest going through some papers on estimating depth from single images, such as Depth Map Prediction from a Single Image using a Multi-Scale Deep Network, where they use one network to first estimate the global structure of the given image and then a second network that refines the local scene information. Instead of taking a simple RMSE loss, as you did, they use a scale-invariant error function in which the relationship between points is measured.
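For illustration, a minimal sketch of that scale-invariant error on log depths (lambda = 0.5 as in that paper's training loss; it assumes strictly positive depths and should be applied to the already-masked valid pixels):

import torch

def scale_invariant_loss(pred, target, lam=0.5, eps=1e-8):
    # d_i = log(pred_i) - log(target_i); loss = mean(d^2) - lam * mean(d)^2
    d = torch.log(pred + eps) - torch.log(target + eps)
    return (d ** 2).mean() - lam * d.mean() ** 2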
I am getting started with deep learning and have a basic question on CNNs.
I understand how gradients are adjusted using backpropagation according to a loss function.
But I thought the values of the convolving filter matrices (in CNNs) need to be determined by us.
I'm using Keras and this is how (from a tutorial) the convolution layer was defined:
classifier = Sequential()
classifier.add(Conv2D(32, (3, 3), input_shape = (64, 64, 3), activation = 'relu'))
There are 32 filter matrices with dimensions 3x3 in use.
But how are the values for these 32 3x3 matrices determined?
It's not the gradients that are adjusted: the gradient calculated with the backpropagation algorithm is just the group of partial derivatives with respect to each weight in the network, and these components are in turn used to adjust the network weights in order to minimize the loss.
Take a look at this introductory guide.
The weights in the convolution layer in your example will be initialized to random values (according to a specific method), and then tweaked during training, using the gradient at each iteration to adjust each individual weight. The same goes for weights in a fully connected layer, or any other layer with weights.
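For example, in Keras the initialization scheme can be written out (and changed) explicitly; Conv2D defaults to Glorot/Xavier uniform initialization, and the freshly initialized kernels can be inspected directly (a minimal sketch based on the layer from the question):

from keras.models import Sequential
from keras.layers import Conv2D

# Same layer as in the question, with the default initializer made explicit
classifier = Sequential()
classifier.add(Conv2D(32, (3, 3), input_shape=(64, 64, 3), activation='relu',
                      kernel_initializer='glorot_uniform'))

weights, biases = classifier.layers[0].get_weights()
print(weights.shape)  # (3, 3, 3, 32): height, width, input channels, filters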
EDIT: I'm adding some more details about the answer above.
Let's say you have a neural network with a single layer, which has some weights W. Now, during the forward pass, you calculate your output yHat for your network, compare it with your expected output y for your training samples, and compute some cost C (for example, using the quadratic cost function).
Now, you're interested in making the network more accurate, i.e. you'd like to minimize C as much as possible. Imagine you want to find the minimum value of a simple function like f(x) = x^2. You can start at some random point (as you did with your network), then compute the slope of the function at that point (i.e., the derivative) and move down in that direction, until you reach a minimum value (a local minimum at least).
With a neural network it's the same idea, with the difference that your inputs are fixed (the training samples), and you can see your cost function C as having n variables, where n is the number of weights in your network. To minimize C, you need the slope of the cost function C in each direction (i.e., with respect to each variable, each weight w), and that vector of partial derivatives is the gradient.
Once you have the gradient, the part where you "move a bit following the slope" is the weights update part, where you update each network weight according to its partial derivative (in general, you subtract some learning rate multiplied by the partial derivative with respect to that weight).
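As a toy illustration of that update rule on the f(x) = x^2 example from above:

# Minimal gradient descent on f(x) = x^2: start somewhere, compute the
# derivative, and repeatedly step against it.
learning_rate = 0.1
x = 5.0                      # arbitrary starting point
for _ in range(50):
    grad = 2 * x             # derivative of x^2
    x = x - learning_rate * grad
print(x)                     # close to 0, the minimum of x^2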
A trained network is just a network whose weights have been adjusted over many iterations in such a way that the value of the cost function C over the training dataset is as small as possible.
This is the same for a convolutional layer too: you first initialize the weights at random (i.e. you place yourself at a random position on the plot of the cost function C), then compute the gradients, then "move downhill", i.e. you adjust each weight following the gradient in order to minimize C.
The only difference between a fully connected layer and a convolutional layer is how they calculate their outputs, and how the gradient is in turn computed, but the part where you update each weight with the gradient is the same for every weight in the network.
So, to answer your question, those filters in the convolutional kernels are initially random and are later adjusted with the backpropagation algorithm, as described above.
Hope this helps!
Sergio0694 states, "The weights in the convolution layer in your example will be initialized to random values". So if they are random, and say I want 10 filters, every execution of the algorithm could find different filters. Also, say I have the MNIST dataset. Numbers are formed of edges and curves. Is it guaranteed that there will be an edge filter or a curve filter among the 10?
I mean, are the first 10 filters the most meaningful, most distinctive filters we can find?
Best
The picture below shows a Cauer network, which is a continued fraction network.
I have built the 3rd-order transfer function in Octave like this:
function uebertragung = G(R1,Tau1,R2,Tau2,R3,Tau3)
  s = tf("s");
  C1 = Tau1/R1;
  C2 = Tau2/R2;
  C3 = Tau3/R3;
  # --- 3rd-order transfer function --- #
  uebertragung = 1/((s*R1*C1)^3 + 5*(s*R2*C2)^2 + 6*s*R3*C3 + 1);
endfunction
R1,R2,R3,C1,C2,C3 are the 6 parameters my characteristic curve depends on.
I need to put these parameters into the transfer function, get a result, and plot the characteristic curve from the data.
The characteristic curve shows thermal impedance vs. time, like these two curves from an IGBT data sheet.
My problem is that I don't know how to handle transfer functions properly. I need data to plot the characteristic curve, but I don't know how to generate it from the transfer function.
Any tips are welcome. Do I have to perform a Laplace transformation?
If you need further information, ask me and I will try to provide it.
From the data sheet, the equation they are using for their transient thermal impedance graph is the Foster chain step function response:
Z(t) = sum (R_i * (1-exp(-t/tau_i))) = sum (R_i * (1-exp(-t/(R_i*C_i))))
I verified that the per-stage R's and C's in the table next to the graph will produce the plot you shared with that function.
The method for producing a step function response of an s-domain (Laplace domain) impedance function (Z) is to take the inverse Laplace transform of the product of the transfer function and 1/s (the Laplace domain form of a constant value step function). With the Foster model impedance function:
Z(s) = sum (R_i/(1+R_i*C_i*s))
that will produce the equation above.
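Concretely, for a single Foster stage the partial-fraction step works out to:

(1/s) * R_i/(1 + R_i*C_i*s) = R_i * (1/s - 1/(s + 1/(R_i*C_i)))

and the inverse Laplace transform of the right-hand side is R_i * (1 - exp(-t/(R_i*C_i))); summing over the stages gives the Z(t) expression above.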
Using the transfer function in Octave, you can use the Control package function step to calculate the transient response for you rather than performing the inverse Laplace transform yourself. So once you have Z(s), step(Z) will produce or plot the transient response. See help step for details. You can then adjust the plot (switch to log scale, set axes limits, etc) to look like one of the spec sheet plots.
Now, you want to do the same thing with a Cauer network model. It is important to realize that the R's and C's will not be the same for the two models. The Foster network is a decoupled model that has each primary complex pole isolated by layout, but its R's and C's are actually convolutions of the physical thermal resistances and capacitances in the real package. In contrast, the Cauer model has R's and C's that match the physical package layers, and the poles in the s-domain transfer function will be complex products of the multiple layers.
So, however you are obtaining your R's and C's for the Cauer model, you can't just use the same values they have in their Foster model parameter table. They can be calculated from physical layer and material properties, however, assuming you have that information. Once you do have useful values, the procedure for going from Z(s) to the transient impedance function is the same for either network, and they should produce the same result.
As an example, the following procedure should work in both Octave and Matlab to plot the Thermal impedance curve from the spec sheet data using the Foster Z(s) model as a starting point. For the Cauer model, just use a different Z(s) function.
(Note that Octave has some issues in the step function that insert t = 0 entries into the time-series output, even when they aren't specified, which can cause errors when trying to plot on a log scale. So this example puts in a t = 0 node and then ignores it; I wanted to explain that so the line doesn't seem confusing.)
s = tf('s')   % requires the Control package in Octave (pkg load control)
R1 = 8.5e-3; R2 = 2e-3;          % stage resistances from the spec sheet table
tau1 = 151e-3; tau2 = 5.84e-3;   % stage time constants
C1 = tau1/R1; C2 = tau2/R2;
input_imped = R1/(1+R1*C1*s)+R2/(1+R2*C2*s)   % Foster model Z(s)
times = linspace(0, 10, 100000);
[Zvals, output_times] = step(input_imped, times);   % step response = transient impedance
loglog(output_times(2:end), Zvals(2:end));          % skip the t = 0 node mentioned above
xlim([.001 10]); ylim([0.0001, .1]);
grid;
xlabel('t [s]');
ylabel('Z_t_h_(_j_-_c_) [K/W] IGBT');
text(1,0.013 ,'Z_t_h_(_j_-_c_) IGBT');