BoxClassifierLoss/localization_loss and Loss/regularization_loss always zero using the TensorFlow Object Detection API

I have been trying to train an object-detection model for some time now, namely the "faster_rcnn_resnet152_v1_640x640_coco17_tpu-8" model. However, throughout the whole training process, neither BoxClassifierLoss/localization_loss nor Loss/regularization_loss has ever been higher than zero.
Has anyone else had similar issues, or does anyone know a solution?
(I think this is at least part of the reason why my model performs very poorly.)
INFO:tensorflow:{'Loss/BoxClassifierLoss/classification_loss': 0.011540242,
'Loss/BoxClassifierLoss/localization_loss': 0.0,
'Loss/RPNLoss/localization_loss': 0.05603733,
'Loss/RPNLoss/objectness_loss': 0.021345321,
'Loss/regularization_loss': 0.0,
'Loss/total_loss': 0.08892289,
'learning_rate': 0.090500005}

localization_loss always staying at 0.0 can be due to an error in your tfrecords file or, most likely, an error in your label_map. Check that your label_map matches the classes in your tfrecords file and is correctly formatted.

When your localization and regularization losses are zero, it usually means there is a problem with how the tfrecords files were generated. When creating annotations for image files, your labels must be consistent with your label map file.
Extract from an example TFRecord:
feature {
  key: "image/object/class/text"
  value {
    bytes_list {
      value: "paragraph"
      value: "paragraph"
      value: "table"
      value: "paragraph"
    }
  }
}
Now, when you create labelmap.pbtxt, the names should exactly match the values above.
Extract from a sample labelmap file:
item {
  name: "paragraph"
  id: 1
}
item {
  name: "table"
  id: 2
}
After making this change, your localization_loss should no longer be zero.


DC.JS How to handle objects with different amount of properties

Let's say I have two objects, each with the same properties, but one has an extra property middleName and the other does not.
How should I handle this in DC.js?
var objects = [{
  name: "De Smet",
  firstName: "Jasper",
  adress: "Borsbeke",
}, {
  name: "De Backer",
  firstName: "Dieter",
  middleName: "middleName",
  adress: "Borsbeke"
}, {
  name: "De Bondtr",
  firstName: "Simon",
  middleName: "OtherMiddleName",
  adress: "Denderleeuw"
}];
The desired behaviour is that the object without the property gets filtered out.
Here is a fiddle:
https://jsfiddle.net/mj92shru/41/
It seems to add the property middleName to the first object and assign it the next middleName it finds.
Adding the property to the first object with a placeholder value like "none" works, but it doesn't really produce the wanted behaviour.
I realize I could filter out the objects where the middleName is set to "none", but this would be difficult in the actual application I am writing.
I've also found that adding the object without the property last causes it to crash.
Indeed, using undefined fields for your dimension or group keys can crash crossfilter because it does not validate its data. NaN, null, and undefined do not have well-defined sorting operations.
It's strange to see the value folded into another bin, but I suspect it's the same undefined behavior, rather than something you can depend on.
If you have fields which may be undefined, you should always default them, even if you don't want the value:
middleNameDimension = j.dimension(d => d.middleName || 'foo'),
I think you do want to filter your data, but not in the crossfilter sense where those rows are removed and do not influence other charts. Instead, it should just be removed from the group without affecting anything else.
You can use a "fake group" for this, and there is one in the FAQ which is suited perfectly for your problem:
function remove_bins(source_group) { // (source_group, bins...)
  var bins = Array.prototype.slice.call(arguments, 1);
  return {
    all: function() {
      return source_group.all().filter(function(d) {
        return bins.indexOf(d.key) === -1;
      });
    }
  };
}
Apply it like this:
.group(remove_bins(middleNameGroup, 'foo'))
Fork of your fiddle.
Be careful with this, because a pie chart implicitly adds up to 100%, and in this case it only adds up to 66%. This may be confusing for users, depending how it is used.

JSON Schema giving me validation errors for multipleOf 0.01 for any number ending in .49 or .99

I am using JSON Schema to validate a server request and I have some values that I want validated to 2DP. I have used the following schema to validate these fields:
'properties': {
  'amount': {'type': ['number', 'null'], 'multipleOf': 0.01}
}
This works fine for all cases other than numbers ending in .49 or .99, where I get the error "amount is not a multiple of (divisible by) 0.01".
This is presumably some kind of floating point error. If this is not right, how should I validate numbers to a certain precision?
To avoid looping through and casting decimals as suggested above, I ended up writing a custom validator:
Validator.prototype.customFormats.currency = function(input) {
  if (input === undefined) { return true; }
  return (input * 100) % 1 === 0;
};
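If you'd rather avoid modulo arithmetic on floats altogether, a round-trip through toFixed (which rounds in decimal rather than binary) is a possible alternative. This is only a sketch; isTwoDp is an illustrative name, not part of any library:

```javascript
// Check that a number has at most two decimal places by round-tripping
// through toFixed: a value that survives the round-trip unchanged is
// already representable at two decimal places.
function isTwoDp(input) {
  // Mirror the schema, which also allows null (and skips undefined).
  if (input === undefined || input === null) { return true; }
  return Number(input.toFixed(2)) === input;
}
```

Because the rounding happens in decimal, values like 10.49 and 0.99 pass, while anything with a third significant decimal digit fails.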

ImmutableJs - compare objects but for one property

I am converting a shopping basket to an immutable structure.
Is there an easy way with Immutable.js to see if an immutable object already exists within an immutable list, EXCEPT for one object property, 'quantity', which could differ? List example:
[{
  id: 1,
  name: 'fish and chips',
  modifiers: [
    {
      id: 'mod1',
      name: 'Extra chips'
    }
  ],
  quantity: 2
}, {
  id: 2,
  name: 'burger and chips',
  modifiers: [
    {
      id: 'mod1',
      name: 'No salad'
    }
  ],
  quantity: 1
}]
Now, say I had another object to put in the list, but I want to check whether this exact item with modifiers already exists in the list. I could just do list.findIndex(item => item === newItem), but because of the possibly different quantity property it won't work. Is there a way to === check apart from one property? Or any way to do this without having to loop through every property (aside from quantity) to see if they are the same?
Currently, I have an awful nested loop to go through every item and check every property to see if it is the same.
Well, this should work:
list.findIndex(item => item.delete("quantity").equals(newItem.delete("quantity")))
The equals method does deep value comparison. So once you delete quantity, you are comparing all the values that matter.
PS: please ignore code formatting, I am on SO app.
PPS: the above code is not optimal; you should compute a pre-trimmed newItem once and compare against it inside the arrow function, instead of trimming it on every iteration.
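For comparison, the same idea can be sketched without Immutable.js at all, using plain objects. equalsIgnoring is a hypothetical helper, and the JSON.stringify comparison assumes both objects were built with the same key order, so this is an illustration rather than a general deep-equal:

```javascript
// Deep-compare two plain objects while ignoring one top-level key.
// Caveat: JSON.stringify comparison assumes matching key order, which
// holds for items produced by the same code path.
function equalsIgnoring(a, b, ignoredKey) {
  const strip = ({ [ignoredKey]: _ignored, ...rest }) => rest;
  return JSON.stringify(strip(a)) === JSON.stringify(strip(b));
}

const list = [
  { id: 1, name: 'fish and chips',
    modifiers: [{ id: 'mod1', name: 'Extra chips' }], quantity: 2 },
];
// Same item as list[0] except for quantity, so it should be found.
const newItem = { id: 1, name: 'fish and chips',
  modifiers: [{ id: 'mod1', name: 'Extra chips' }], quantity: 5 };

const index = list.findIndex(item => equalsIgnoring(item, newItem, 'quantity'));
```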

Right structure for a series of date: value pairs

I'm having a hard time trying to figure out what is the right JSON structure for the following set of data. I've got a sensor that logs humidity of a given room on a daily basis. Logs look like:
...
2015-01-19 8%
2015-01-20 13%
...
I'd like to convert it to JSON. My first bet was:
{
  '2015-01-19': 8,
  '2015-01-20': 13
}
But, is it correct? Shouldn't it be:
[
  { '2015-01-19', 8 },
  { '2015-01-20', 13 }
]
Or:
[
  {
    'date': '2015-01-19',
    'value': 8
  },
  {
    'date': '2015-01-20',
    'value': 13
  }
]
And, at the end of the day, is there a series of best practices I could refer to in order to help me determine what's the best structure on my own?
Your first example is simple and easy, though perhaps not extensible if you decide to add more attributes later. If that's unlikely, you should use that method.
Your second example is not valid JSON.
Your third example makes some sense, though it is not a very compact encoding (wastes space).
A fourth method you should consider is to use separate arrays. This is not necessarily intuitive at first, but it does work well, is compact yet extensible, and is directly compatible with some tools such as HighCharts. That is:
{
  'dates': ['2015-01-19', '2015-01-20'],
  'humidity': [8, 13]
}
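Producing that fourth form from a list of records is a one-liner per field; names like records and series are just placeholders here:

```javascript
// Convert an array of {date, value} records into the parallel-array
// form, which HighCharts-style tools can consume directly.
const records = [
  { date: '2015-01-19', value: 8 },
  { date: '2015-01-20', value: 13 },
];

const series = {
  dates: records.map(r => r.date),
  humidity: records.map(r => r.value),
};
```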

reactivemongo - merging two BSONDocuments

I am looking for the most efficient and easy way to merge two BSON documents. I already have handlers for collisions: for example, if both documents contain an integer I sum them, likewise for strings, and if the value is an array I append the elements of the other one, etc.
However, due to BSONDocument's immutable nature, this is awkward to do. What would be the easiest and fastest way to do the merging?
I need to merge the following for example:
{
  "2013": {
    "09": {
      value: 23
    }
  }
}
{
  "2013": {
    "09": {
      value: 13
    },
    "08": {
      value: 1
    }
  }
}
And the final document would be:
{
  "2013": {
    "09": {
      value: 36
    },
    "08": {
      value: 1
    }
  }
}
There is the method BSONDocument.add; however, it doesn't check key uniqueness, so the resulting document would contain "2013" as a root key twice, etc.
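To be explicit, the merge semantics I'm after look like this when sketched in plain JavaScript (mergeDocs is a hypothetical helper; my real code works on BSONDocument, so this is only an illustration, with the string/array collision handlers omitted):

```javascript
// Recursively merge two plain objects: numbers on colliding keys are
// summed, nested objects are merged key by key, and keys present in
// only one input are copied over.
function mergeDocs(a, b) {
  const result = { ...a };
  for (const [key, value] of Object.entries(b)) {
    if (!(key in result)) {
      result[key] = value;
    } else if (typeof value === 'number' && typeof result[key] === 'number') {
      result[key] += value;
    } else {
      result[key] = mergeDocs(result[key], value);
    }
  }
  return result;
}

const merged = mergeDocs(
  { "2013": { "09": { value: 23 } } },
  { "2013": { "09": { value: 13 }, "08": { value: 1 } } }
);
// merged["2013"] now holds { "09": { value: 36 }, "08": { value: 1 } }
```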
Thank you!
If I understand your inquiry, you are looking to aggregate field data via a composite id. MongoDB has a fairly slick aggregation framework. Part of that framework is the $group pipeline stage. It allows you to specify an _id to group by, which can be defined as a field or a document as in your example, as well as to perform aggregation using accumulators such as $sum.
Here is a link to the manual for the operators you will probably need to use.
http://docs.mongodb.org/manual/reference/operator/aggregation/group/
Also, please remove the "merge" tag from your original inquiry to reduce confusion. Many MongoDB drivers include a Merge function as part of the BsonDocument representation as a way to consolidate two BsonDocuments into a single BsonDocument, linearly or via element overwrites, and it has no relation to aggregation.
Hope this helps.
ndh