Handling missing key points in key point detection

I want to train a regression-based key point detector and I am wondering how to handle key points that are not visible.
For example, my annotation for an image is a list of key points represented as (x_coord, y_coord, is_visible), and the x and y coordinates both default to 0 when a point is not visible. What is the correct way to process such samples, given that each image always has about 40% non-visible key points? Would it still work if I leave it as is and train the CNN with, say, an MSE loss?
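One idea I had is to mask the loss per keypoint so that non-visible points contribute nothing; a minimal sketch of what I mean (the tensor shapes and names are just my assumptions):

```python
import torch

def masked_mse_loss(pred, target, visible):
    """MSE over visible keypoints only.

    pred, target: (batch, num_keypoints, 2) predicted / annotated (x, y)
    visible:      (batch, num_keypoints) 1 if the keypoint is annotated, else 0
    """
    mask = visible.unsqueeze(-1).float()        # (batch, K, 1), broadcasts over x and y
    sq_err = (pred - target) ** 2 * mask        # zero out non-visible keypoints
    # Normalise by the number of visible coordinates, not the full tensor size,
    # so images with many missing keypoints are not implicitly down-weighted.
    return sq_err.sum() / mask.sum().clamp(min=1)
```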

Related

If I have a multi-modal regression model (output: a,b,c,d) based on some input (x,y,z), can I provide a prior (b) to predict one or more of the outputs?

If the inputs to my model are x,y,z and my outputs are continuous variables a,b,c,d, I can obviously use the model to predict the vector [a,b,c,d] from [x,y,z].
However, what happens if I find myself in a situation where I have, say, the value of b as a prior before inference? Can I run the network in a manner such that I am predicting [a,c,d] based on [x,y,z,b]?
Real-world example: I have an image with pixel locations (x,y) and pixel values (R,G,B), and I build a neural implicit model to predict pixel values from pixel locations. Say I now have the green values for some pixels as well as their locations; can I use these green values as a prior with my original network to get an improved result? Note that I am not interested in training a new network on this data.
In mathematical terms: I have network f(x,y,z) -> (a,b,c,d) how can I perform f(x,y,z|b) -> (a,c,d)?
I have not tried much here; I was thinking of maybe passing the prior value back through the network, but I am a bit lost.
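The only concrete thing I can think of is to freeze the trained network and optimise a small input correction at test time so that the predicted b matches the known value, then read off a, c and d. A rough sketch of that idea (all names here, e.g. f and b_index, are hypothetical):

```python
import torch

def condition_on_b(f, xyz, b_known, b_index=1, steps=200, lr=1e-2):
    """Test-time conditioning sketch: freeze f, optimise an input correction.

    f:       frozen trained network mapping (N, 3) -> (N, 4) = (a, b, c, d)
    xyz:     (N, 3) known inputs
    b_known: (N,) known values of the output channel at b_index
    """
    delta = torch.zeros_like(xyz, requires_grad=True)    # learnable input correction
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        out = f(xyz + delta)                              # (N, 4)
        loss = ((out[:, b_index] - b_known) ** 2).mean()  # match the prior on b
        loss = loss + 1e-3 * delta.pow(2).mean()          # keep the correction small
        loss.backward()
        opt.step()
    with torch.no_grad():
        out = f(xyz + delta)
    return out                                            # a, c, d now informed by the prior on b
```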

PointNet can't predict segmentation on custom point cloud

I'm currently working on my bachelor project and I'm using the PointNet deep neural network.
My project group and I have created a dataset of point clouds (each an unordered list of 3D coordinates) and segmentation files, but we can't train PointNet to predict segmentation with the dataset.
Each segmentation file is a list containing the same number of rows as there are points in the corresponding point cloud, and each row is either a 1 or a 2, depending on whether the corresponding point belongs to segment 1 or 2.
When PointNet predicts, it outputs a list with one element per point, where each element is the segment that PointNet predicts the corresponding point belongs to.
When we run the benchmark dataset from the original PointNet implementation, the system runs and can predict segmentation, so we know the error is somewhere in our dataset, even though we have tried our best to make our dataset look like the original benchmark dataset.
The implemented PointNet uses PyTorch Conv2d, MaxPool2d and Linear layers. For calculating the loss, both nn.functional.nll_loss and nn.NLLLoss have been used. When using nn.NLLLoss, the weight parameter was set to a tensor of [1, 100] to combat potential imbalance in the data.
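For reference, a simplified sketch of how the per-point loss could be applied to our data (names are illustrative; the 1/2 labels would need remapping to 0/1, since NLLLoss expects zero-based class indices):

```python
import torch
import torch.nn.functional as F

# log_probs: (batch, num_classes=2, num_points) log-softmax output of PointNet
# labels:    (batch, num_points) segment ids from the dataset, either 1 or 2

def segmentation_loss(log_probs, labels):
    targets = labels.long() - 1                      # remap {1, 2} -> {0, 1}
    weight = torch.tensor([1.0, 100.0],              # class weighting for the imbalance
                          device=log_probs.device)
    return F.nll_loss(log_probs, targets, weight=weight)
```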
These are the things we have tried:
We have tried downsampling the point clouds, i.e. removing points using voxel downsampling.
We have tried downscaling and normalizing all values so they are between 0 and 1, using the formula (data - np.min(data)) / (np.max(data) - np.min(data)).
We have tried running a Euclidean clustering function on the data, so that each scanned object is on its own.
We have tried replicating another dataset, created from the same raw data, which we know has worked before.
Images of the data files, along with a description, can be found at the attached link.
Cheers everyone

SciChart - 3D series data point peaks change when rotating

When rotating various renderable series such as MountainRenderableSeries3D and WaterfallRenderableSeries3D, the peaks of the data points change significantly, as can be seen in the GIFs below. What causes this, and can anything be done to fix it? The Y values have a large range, from 0.000024 to 20.0, and the same set of data is inserted into the data series multiple times along the Z axis, so the X and Y values repeat at every Z. Changing the camera mode from Orthogonal to Perspective helps some, but not completely.
This is with SciChart 5.1.0.11405 and SharpDX 4.0.1.
(GIFs: Orthogonal camera mode / Perspective camera mode)

How to train the RPN in Faster R-CNN?

Link to paper
I'm trying to understand the region proposal network (RPN) in Faster R-CNN. I understand what it is doing, but I still don't understand how training exactly works, especially the details.
Let's assume we're using VGG16's last layer with shape 14x14x512 (before maxpool and with 228x228 images) and k=9 different anchors. At inference time I want to predict 9*2 class labels and 9*4 bounding box coordinates. My intermediate layer is a 512-dimensional vector.
(image shows 256 from ZF network)
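To make sure I'm picturing the architecture correctly, the RPN head I have in mind looks roughly like this (a PyTorch sketch; the layer names are my own):

```python
import torch.nn as nn

class RPNHead(nn.Module):
    """Sketch of the RPN head as I understand it (k = 9 anchors per location)."""
    def __init__(self, in_channels=512, k=9):
        super().__init__()
        self.intermediate = nn.Conv2d(in_channels, 512, kernel_size=3, padding=1)  # 3x3 "sliding window"
        self.cls_logits = nn.Conv2d(512, 2 * k, kernel_size=1)    # object / not-object per anchor
        self.bbox_deltas = nn.Conv2d(512, 4 * k, kernel_size=1)   # box regression per anchor

    def forward(self, feature_map):               # e.g. (N, 512, 14, 14) from VGG16's last conv layer
        x = self.intermediate(feature_map).relu()
        return self.cls_logits(x), self.bbox_deltas(x)
```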
In the paper they write
"we randomly sample 256 anchors in an image to compute the loss
function of a mini-batch, where the sampled positive and negative
anchors have a ratio of up to 1:1"
That's the part I'm not sure about. Does this mean that for each of the 9 (= k) anchor types, the corresponding classifier and regressor are trained with mini-batches that only contain positive and negative anchors of that type?
So that I am basically training k different networks with shared weights in the intermediate layer? Each mini-batch would then consist of the training data x = the 3x3x512 sliding window of the conv feature map and y = the ground truth for that specific anchor type.
And at inference time I put them all together.
I appreciate your help.
Not exactly. From what I understand, the RPN predicts W*H*k bounding boxes per feature map, then 256 anchors are randomly sampled according to the 1:1 criterion, and these are used to compute the loss function of that particular mini-batch. You're still only training one network, not k, since the 256 random samples are not restricted to any particular anchor type.
Disclaimer: I only started learning about CNNs a month ago, so I may not understand what I think I understand.
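A rough sketch of how I picture the sampling, just to illustrate the at-most-1:1 cap (not the reference implementation; the helper name is made up):

```python
import numpy as np

def sample_rpn_minibatch(labels, batch_size=256, positive_fraction=0.5):
    """labels: (num_anchors,) with 1 = positive, 0 = negative, -1 = ignored.

    Returns indices of the anchors that contribute to this mini-batch's loss.
    Anchors of every scale/aspect ratio are pooled together before sampling.
    """
    pos = np.where(labels == 1)[0]
    neg = np.where(labels == 0)[0]
    num_pos = min(len(pos), int(batch_size * positive_fraction))   # at most 128 positives
    num_neg = batch_size - num_pos                                  # fill the rest with negatives
    pos = np.random.choice(pos, num_pos, replace=False)
    neg = np.random.choice(neg, min(num_neg, len(neg)), replace=False)
    return np.concatenate([pos, neg])
```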

Kalman Filter corrected by known path

I am trying to get filtered velocity/spatial data from noisy position data from a tracked vehicle. I have a set of noisy position/time data (x_i, y_i, t_i) and a known curve along which the vehicle is traveling, (x(s), y(s)), where s is the total distance along the curve. I can run a Kalman filter on the data, but I don't know how to constrain it to the 'road' without throwing out data that is too far from the road, which I don't want to do.
Alternatively, I'm trying to estimate the value of s along the constrained path from position data that is noisy in x and y.
Does anyone have an idea of how to merge the two types of data?
Thanks!
Do you understand what a Kalman filter does? Fundamentally, it assigns a probability to each possible state given just the observables. In simple cases this doesn't use a priori knowledge, but in your case you can simply set the probability of the off-road estimates to zero and renormalize the remaining probabilities.
Note: this isn't throwing out observations that are too far off the road, or even discarding outcomes that are too far off. It means that an apparently off-road position strongly increases the probability of an outcome on the road, but near its edge.
If you want the model to allow small excursions away from the road, you can use a fast-decaying function to model the low but non-zero probability of the car being off the road.
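A toy illustration of what I mean on a discretised state space (the numbers are made up):

```python
import numpy as np

# 100 candidate positions along a grid; only cells 40..54 lie on the road.
prior = np.full(100, 1.0 / 100)                                 # uniform prior
likelihood = np.exp(-0.5 * (np.arange(100) - 60) ** 2 / 25.0)   # observation says "around cell 60"
on_road = np.zeros(100, dtype=bool)
on_road[40:55] = True

posterior = likelihood * prior        # unnormalised Bayes update from the observation
posterior[~on_road] = 0.0             # impossible (off-road) states get zero probability
posterior /= posterior.sum()          # renormalise over the remaining states
# The result concentrates near cell 54: on the road, but at the edge nearest the observation.
```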
You could have as states the distance s along the path and its rate of change. The position observations x and y will then be non-linear functions of the state (assuming your track is not a straight line), so you'll need to use an extended or unscented Kalman filter.
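A minimal sketch of that setup with an extended Kalman filter, assuming your known curve gives you x(s), y(s) and their derivatives (all function names here are placeholders):

```python
import numpy as np

def ekf_step(state, P, z, dt, path_xy, path_dxy, q=1.0, r=4.0):
    """One EKF predict/update with state = [s, s_dot] and measurement z = [x, y].

    path_xy(s)  -> (x(s), y(s))       position on the known curve
    path_dxy(s) -> (dx/ds, dy/ds)     tangent of the curve (used for the Jacobian)
    q, r: process / measurement noise scales (tune to your data).
    """
    # Predict: constant speed along the curve.
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])
    state = F @ state
    Q = q * np.array([[dt ** 3 / 3, dt ** 2 / 2],
                      [dt ** 2 / 2, dt]])
    P = F @ P @ F.T + Q

    # Update: the (x, y) measurement is a non-linear function of s only.
    s = state[0]
    hx = np.array(path_xy(s))                 # predicted (x, y) on the curve
    dxds, dyds = path_dxy(s)
    H = np.array([[dxds, 0.0],
                  [dyds, 0.0]])               # Jacobian of the measurement w.r.t. [s, s_dot]
    R = r * np.eye(2)
    y = np.asarray(z) - hx                    # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    state = state + K @ y
    P = (np.eye(2) - K @ H) @ P
    return state, P
```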