What does the training data look like in a YOLO model - deep-learning

I'm trying to figure out how to build a YOLOv1 model from scratch, but I can't figure out what the training data should look like. I suspect the training labels (ground truth) look like a tensor of shape (7, 7, 5*2 + 10), where
7x7 stands for the prediction grid
5 is the object location and confidence (always equal to 1); x, y - known box center; h, w - box height and width
*2 is because there should be a horizontal and a vertical box for each cell
10 is a one-hot encoding for the class present at this position
What I don't understand is:
whether to put confidence == 1 on both the horizontal and vertical bounding boxes?
whether x and y should be coordinates in the original (resized for the input) image?
...or maybe I'm completely off with my whole understanding. Does somebody have experience with YOLO?
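For concreteness, here is a minimal sketch of what I imagine the encoding to be (the layout, names, and the choice to copy the box into both predictor slots are my own assumptions, not the paper's reference code):

import numpy as np

S, B, C = 7, 2, 10   # grid size, boxes per cell, number of classes

def build_target(boxes, labels):
    """boxes: list of (cx, cy, w, h), all normalized to [0, 1] w.r.t. the resized input image.
    labels: list of class indices. Returns a (S, S, B*5 + C) ground-truth tensor."""
    target = np.zeros((S, S, B * 5 + C), dtype=np.float32)
    for (cx, cy, w, h), cls in zip(boxes, labels):
        col, row = int(cx * S), int(cy * S)            # grid cell containing the box center
        x_cell, y_cell = cx * S - col, cy * S - row    # center offset within that cell
        for b in range(B):
            # copy the same box and confidence 1 into both predictor slots here;
            # the paper only makes the predictor with the highest IoU "responsible" at loss time
            target[row, col, b * 5: b * 5 + 5] = [x_cell, y_cell, w, h, 1.0]
        target[row, col, B * 5 + cls] = 1.0            # one-hot class for this cell
    return target

print(build_target([(0.5, 0.5, 0.2, 0.3)], [3]).shape)  # (7, 7, 20)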

Related

problem with pre-annotations (yolo v5) in label studio

I am trying to use annotations from my Yolo v5 model in label-studio. I was able to bring the annotations into label-studio and show them. The problem is the inaccuracy of the labels displayed by label-studio (I have images generated from Yolo showing how the predictions should look). The only transformation I did to the x, y, width, height data obtained from yolo was multiplying by 100.
Predictions from yolo: (screenshot)
label-studio: (screenshot)
Please use label-studio-converter for this: https://github.com/heartexlabs/label-studio-converter/pull/46
Most likely you incorrectly calculated the coordinates; the correct code is here: https://github.com/heartexlabs/label-studio-converter/blob/master/label_studio_converter/imports/yolo.py#L85
"the only transformation I did to the x, y, width, height data obtained from yolo is multiplying by 100"
Building on @Max's answer, you can use the following formulas (from here) to convert yolo coordinates to Label Studio-compatible coordinates:
x_lbstdio = (x - w/2) * 100
y_lbstdio = (y - h/2) * 100
w_lbstdio = w * 100
h_lbstdio = h * 100
Note that the only change is subtracting half of the width and half of the height from x and y respectively, before multiplying by 100.
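For example, a small helper along those lines (the function name is mine, and it assumes the YOLO values are already normalized to [0, 1]):

def yolo_to_labelstudio(x, y, w, h):
    """Convert YOLO (center x, center y, width, height), all in [0, 1],
    to Label Studio percentages (top-left x, top-left y, width, height)."""
    return {
        "x": (x - w / 2) * 100,   # left edge as a percentage of image width
        "y": (y - h / 2) * 100,   # top edge as a percentage of image height
        "width": w * 100,
        "height": h * 100,
    }

# e.g. a box centered at (0.5, 0.5) covering 20% x 10% of the image
print(yolo_to_labelstudio(0.5, 0.5, 0.2, 0.1))
# -> x ≈ 40.0, y ≈ 45.0, width 20.0, height 10.0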

How Can I Convert Dataset Annotations To the Fixed (YOLOv5) Format Without Hand Encoding

So I am working on this awesome project on object detection, where the first task is to identify brand logos. After doing some research, I found this dataset available for brand logos.
For more about the dataset: here
Dataset:
This dataset has 2 versions:
FlickrLogos32
FlickrLogos47 (recommended for brand detection)
As the names suggest, 32 and 47 are the numbers of classes offered by this dataset. The documentation itself mentions that the 47 version is correctly annotated and recommended for object detection & recognition, and in my project I have used the 47 version.
Model:
I am using YOLOv5 for object detection. The reason for using YOLOv5 and not previous versions is that it is well documented, with a couple of tutorials and Jupyter notebooks available.
Problem:
For the YOLOv5 object detection model, the object label should be annotated as
<x_center> <y_center> <width> <height>, corresponding to the bounding box (below image),
whereas the dataset annotations are given in the form
<x1> <y1> <x2> <y2>, where <x1>, <y1> is the upper-left corner of the bounding box
and <x2>, <y2> is the lower-right corner of the bounding box.
How can I transform <x1>, <y1>, <x2>, <y2> (the corner points of the bounding box) to the native YOLO
annotation format, i.e. <x_center>, <y_center>, <width>, <height>,
without manually going over the images one by one and drawing rectangle boxes with Roboflow?
Also, the labels are annotated in pixels, so we have to normalize them to (0, 1).
Dataset Insights:
Every dataset example has an image (.png) and, as its label, a ground truth file (.txt) (see below image).
The '.mask' file is just a binary mask of the object present in the image.
So a data example looks like:
Image:
gt_data.txt:
Mask:
In general, to calculate the center it should be xmin + (width/2) and ymin + (height/2). So I think you have your /2 in the wrong part of the equation.
Also note that a YOLO annotation will look like this:
0.642859 0.079219 0.148063 0.148062
The coordinates are relative to the size of the photo, from 0-1. To normalize the coordinates, divide the x dimensions by the photo width and the y dimensions by the photo height.
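Putting both points together, a conversion sketch could look like this (the names are illustrative; it assumes x1, y1, x2, y2 are pixel coordinates of the top-left and bottom-right corners, and that the YOLOv5 label line also starts with the class id):

def corners_to_yolo(x1, y1, x2, y2, img_w, img_h, class_id):
    """FlickrLogos-style (x1, y1, x2, y2) pixel corners -> YOLO
    'class cx cy w h' with all coordinates normalized to [0, 1]."""
    w = (x2 - x1) / img_w
    h = (y2 - y1) / img_h
    cx = (x1 + (x2 - x1) / 2) / img_w   # xmin + width/2, then normalized by image width
    cy = (y1 + (y2 - y1) / 2) / img_h   # ymin + height/2, then normalized by image height
    return f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

print(corners_to_yolo(100, 50, 300, 200, 640, 480, 0))
# 0 0.312500 0.260417 0.312500 0.312500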

Why does YOLO divide an image into grid cells?

I'm trying to understand how YOLO works for a project I'm doing. I've gone through the papers, many articles, and blog posts, but I'm still not sure why YOLO divides the entire image into grid cells and considers each cell for computations. What would happen if we considered the whole image as just one cell (without dividing)? What purpose do the grid cells serve? Is there a maximum number of objects a particular cell can detect?
Grid cells put the network predictions in a more structured form. Each grid cell corresponds to a specific region of the image, and each cell predicts the objects whose centers lie in that region. So, it is about having a structured output representation that takes advantage of the spatial regularity of images.
Each grid cell predicts a vector of the form [objectness_value, bbox_h, bbox_w, bbox_cx, bbox_cy, p1, p2, ..., pn].
objectness_value: how confident the prediction is
bbox_h, bbox_w, bbox_cx, bbox_cy: offsets for the bounding box height, width, center coordinate on the x-axis, and center coordinate on the y-axis, respectively.
p1, p2, ..., pn: predicted class probabilities for each object category (n categories in total)
More grid cells means more predictions. If you had one grid cell (the image itself), you would have one bounding box prediction, which is not practical because there are likely many objects in an image.
Note that a grid cell can make multiple bounding box predictions by adding more bbox offsets to the output vector.
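As a rough illustration of the shapes involved (the numbers here are only an example, not tied to any particular YOLO version):

import numpy as np

S = 7    # grid: S x S cells
n = 20   # number of object categories

# one prediction per cell: [objectness, bbox_h, bbox_w, bbox_cx, bbox_cy, p1..pn]
single_box = np.zeros((S, S, 5 + n))
print(single_box.shape)          # (7, 7, 25)

# with B boxes per cell, the per-cell vector simply grows by 5 values per extra box
B = 2
multi_box = np.zeros((S, S, B * 5 + n))
print(multi_box.shape)           # (7, 7, 30)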

Anchor Boxes in YOLO : How are they decided

I have gone through a couple of YOLO tutorials, but I am finding it somewhat hard to figure out whether the anchor boxes for each cell the image is divided into are predetermined. In one of the guides I went through, the image was divided into 13x13 cells and it stated that each cell predicts 5 anchor boxes (bigger than it; ok, here's my first problem, because it also says it would first detect what object is present in the small cell before predicting the boxes).
How can the small cell predict anchor boxes for an object bigger than itself? Also, it's said that each cell classifies before predicting its anchor boxes. How can the small cell classify the right object in it without querying neighbouring cells, if only a small part of the object falls within the cell?
E.g. say one of the 13x13 cells contains only the white pocket part of a man wearing a T-shirt; how can that cell correctly classify that a man is present without being linked to its neighbouring cells? With a normal CNN, when trying to localize a single object, I know the bounding box prediction relates to the whole image, so at least I can say the network has an idea of what's going on everywhere in the image before deciding where the box should be.
PS: What I currently think of how YOLO works is basically that each cell is assigned predetermined anchor boxes with a classifier at each end, before the boxes with the highest scores for each class are selected, but I am sure it doesn't add up somewhere.
UPDATE: I made a mistake with this question; it should have been about how regular bounding boxes are decided rather than anchor/prior boxes. So I am marking @craq's answer as correct, because that's how anchor boxes are decided according to the YOLO v2 paper.
I think there are two questions here. Firstly, the one in the title, asking where the anchors come from. Secondly, how anchors are assigned to objects. I'll try to answer both.
Anchors are determined by a k-means procedure, looking at all the bounding boxes in your dataset. If you're looking at vehicles, the ones you see from the side will have an aspect ratio of about 2:1 (width = 2*height). The ones viewed from in front will be roughly square, 1:1. If your dataset includes people, the aspect ratio might be 1:3. Foreground objects will be large, background objects will be small. The k-means routine will figure out a selection of anchors that represents your dataset. k=5 for YOLOv2, but there are different numbers of anchors for each YOLO version.
It's useful to have anchors that represent your dataset, because YOLO learns how to make small adjustments to the anchor boxes in order to create an accurate bounding box for your object. YOLO can learn small adjustments better/easier than large ones.
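A rough sketch of that k-means step, assuming you have collected the normalized (width, height) of every ground-truth box; note that this uses plain Euclidean k-means from scikit-learn for brevity, whereas the YOLOv2 paper clusters with a 1 - IoU distance:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# stand-in for the (width, height) of every ground-truth box, normalized to [0, 1];
# in practice you would collect these from your annotation files
box_sizes = rng.uniform(0.05, 0.8, size=(1000, 2))

k = 5  # number of anchors
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(box_sizes)
anchors = kmeans.cluster_centers_   # k representative (width, height) pairs used as priors
print(np.round(anchors, 3))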
The assignment problem is trickier. As I understand it, part of the training process is for YOLO to learn which anchors to use for which object. So the "assignment" isn't deterministic like it might be for the Hungarian algorithm. Because of this, in general, multiple anchors will detect each object, and you need to do non-max-suppression afterwards in order to pick the "best" one (i.e. highest confidence).
There are a couple of points that I needed to understand before I came to grips with anchors:
Anchors can be any size, so they can extend beyond the boundaries of the 13x13 grid cells. They have to be, in order to detect large objects.
Anchors only enter in the final layers of YOLO. YOLO's neural network makes 13x13x5=845 predictions (assuming a 13x13 grid and 5 anchors). The predictions are interpreted as offsets to anchors from which to calculate a bounding box. (The predictions also include a confidence/objectness score and a class label.)
YOLO's loss function compares each object in the ground truth with one anchor. It picks the anchor (before any offsets) with highest IoU compared to the ground truth. Then the predictions are added as offsets to the anchor. All other anchors are designated as background.
If anchors which have been assigned to objects have high IoU, their loss is small. Anchors which have not been assigned to objects should predict background by setting confidence close to zero. The final loss function is a combination from all anchors. Since YOLO tries to minimise its overall loss function, the anchor closest to ground truth gets trained to recognise the object, and the other anchors get trained to ignore it.
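For illustration, the "pick the anchor with the highest IoU" step could be sketched like this (the names are mine; it compares widths and heights only, as if the boxes shared a center, which is a common simplification):

def wh_iou(box_wh, anchor_wh):
    """IoU between a ground-truth box and an anchor, comparing widths/heights only
    (both boxes are assumed to be centered on the same point)."""
    inter = min(box_wh[0], anchor_wh[0]) * min(box_wh[1], anchor_wh[1])
    union = box_wh[0] * box_wh[1] + anchor_wh[0] * anchor_wh[1] - inter
    return inter / union

anchors = [(0.10, 0.30), (0.40, 0.20), (0.25, 0.25)]   # illustrative (w, h) priors
gt = (0.12, 0.28)                                      # a ground-truth box size

ious = [wh_iou(gt, a) for a in anchors]
best = max(range(len(anchors)), key=lambda i: ious[i])  # anchor made responsible for this object
print(best, ious)   # best == 0 here: the tall, narrow anchor matches best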
The following pages helped my understanding of YOLO's anchors:
https://medium.com/@vivek.yadav/part-1-generating-anchor-boxes-for-yolo-like-network-for-vehicle-detection-using-kitti-dataset-b2fe033e5807
https://github.com/pjreddie/darknet/issues/568
I think that your statement about the number of predictions of the network could be misleading. Assuming a 13 x 13 grid and 5 anchor boxes the output of the network has, as I understand it, the following shape: 13 x 13 x 5 x (2+2+nbOfClasses)
13 x 13: the grid
x 5: the anchors
x (2+2+nbOfClasses): (x, y)-coordinates of the center of the bounding box (in the coordinate system of each cell), (h, w)-deviation of the bounding box (deviation from the prior anchor boxes), and a softmax-activated class vector indicating a probability for each class.
If you want to have more information about the determination of the anchor priors you can take a look at the original paper in the arxiv: https://arxiv.org/pdf/1612.08242.pdf.

Large ratio values ssrs chart

I have a bar chart that shows the count of models for each agency.
The problem is that there is a large difference between the values, which makes the report not look so good.
Does anyone have any ideas of a good way to resolve this problem?
Have you considered using a logarithmic scale?
With your chart, right-click the y-axis and click Vertical Axis Properties.
In Axis Options, select Use logarithmic scale.
Leave the Log base text box as 10 (this is the scale most commonly used by logarithmic charts)
This will display a chart with a scale that goes up by a factor of 10 for each ‘unit’ up, so the distance between 1 and 10 is the same as that between 100 and 1000.
For example, the dataset shown is displayed as this chart when the logarithmic scale is used.
This method is a simple and recognised way to clearly show values of widely different scales.
Update
If you want an indicative bar for the values that are 1, then you could use the expression
=iif(Fields!val.Value = 1, Fields!val.Value * 1.1, Fields!val.Value)
to make all values that are 1 equal to 1.1, showing a tiny bar at the bottom of the chart, as seen here.
Unfortunately I don't know of a way to change that first 1 to a zero (formatting-wise). This is partly because you are now using a logarithmic scale, and zero and negative values don't really exist on it. This is due to a fundamental property of logarithms in mathematics, that
LOG10(10) = 1
LOG10(1) = 0
LOG10(0.1) = -1
Therefore, log10 of zero simply doesn't exist (it is undefined).