Sorry for this primitive question, but I've just started using Caffe.
I want to multiply the output of a layer by a constant:
top = bottom * k
Any ideas? This is what I have tried so far (for example, constant = 0.5):
expParam = {'power': 1,
            'scale': 0.5,
            'shift': 0}
L.Exp(bottom, exp_param=expParam, in_place=False)
Using "Scale" layer:
L.Scale(bottom, scale_param={'filler': {'type': 'constant', 'value': 0.5}},
        param={'lr_mult': 0, 'decay_mult': 0})
Note that by default the "Scale" layer learns the scale factor. If you want it to remain fixed, you need to set lr_mult to zero.
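For context, here is a rough sketch of how this could sit in a full NetSpec (this assumes the standard Caffe Python interface; the Input layer, blob names and shape below are only placeholders):

import caffe
from caffe import layers as L

n = caffe.NetSpec()
n.data = L.Input(shape=dict(dim=[1, 3, 224, 224]))         # placeholder input blob
n.scaled = L.Scale(n.data,                                 # top = 0.5 * bottom
                   scale_param=dict(filler=dict(type='constant', value=0.5)),
                   param=[dict(lr_mult=0, decay_mult=0)])  # keep the factor fixed, not learned
print(n.to_proto())  # inspect the generated prototxt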
In the image shown, the number of possible routes between two grid points should be calculated (for example, between (0,0) and (4,3)). Condition: no diagonal moves.
I have tried this using an adjacency matrix and digraph in MATLAB, but I need the result in Octave, where digraph is not supported. Please suggest an approach.
I actually have 8 possible ways to move between any two points:
Down, left
Down, right
Up, left
Up, right
Right, up
Right, down
Left, up
Left, down
% Make grid points.
x = -1 : 6;
y = -1 : 5;
[xx, yy] = ndgrid(x, y);   % 'ij'-style indexing (meshgrid has no 'indexing' option in MATLAB/Octave)
G = plot(xx(:), yy(:), 'b.');
grid on;
drawnow;
% Set up figure properties: enlarge figure to full screen.
set(gcf, 'Units', 'Normalized', 'OuterPosition', [0, 0, 1, 1]);
% Get rid of the tool bar and pulldown menus that are along the top of the figure.
set(gcf, 'Toolbar', 'none', 'Menu', 'none');
% Give a name to the title bar.
set(gcf, 'Name', 'Mega Constellation: Geographical separation', 'NumberTitle', 'Off');
% Print (x,y) values at each point.
for k = 1 : numel(xx)
    str = sprintf('(%i, %i)', xx(k), yy(k));
    text(xx(k), yy(k), str);
end
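Since digraph is not available in Octave, one option that needs no graph toolbox at all is to count the routes directly with a depth-first search over the grid. Below is only a rough sketch (written in Python for brevity; the same recursion translates almost line-for-line into an Octave function), under the assumption that a route moves only up/down/left/right and never revisits a cell:

def count_routes(start, goal, x_max, y_max, visited=None):
    """Count non-revisiting 4-direction routes from start to goal on a (0..x_max) x (0..y_max) grid."""
    if visited is None:
        visited = set()
    if start == goal:
        return 1
    visited.add(start)
    x, y = start
    total = 0
    for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nxt[0] <= x_max and 0 <= nxt[1] <= y_max and nxt not in visited:
            total += count_routes(nxt, goal, x_max, y_max, visited)
    visited.remove(start)   # backtrack so sibling branches can reuse this cell
    return total

print(count_routes((0, 0), (4, 3), 4, 3))   # number of non-revisiting routes from (0,0) to (4,3)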
I want to create a pygame.Rect object from a center point (xc, yc) and a size (w, h).
pygame.Rect only provides a constructor taking the top-left point and the size.
Of course I can calculate the top left point:
rect = pygame.Rect(xc - w // 2, yc - h // 2, w, h)
Or I can set the location via the virtual attribute center:
rect = pygame.Rect(0, 0, w, h)
rect.center = xc, yc
If I want to completely confuse someone, I use inflate:
rect = pygame.Rect(xc, yc, 0, 0).inflate(w, h)
Or even clamp:
rect = pygame.Rect(0, 0, w, h).clamp((xc, yc, 0, 0))
None of these methods satisfies me. Either I have to calculate something, write several lines of code, or use a function that completely hides what is happening.
I also don't want to write a function (or lambda) as I think this is completely over the top for creating a simple rectangle.
So my question is:
How do you usually create such a rectangle with a self-explanatory line of code so everyone can see what is happening at a glance?
Is there a much easier method? Am I missing something?
Interesting question, Rabbid76.
I personally try to write code such that a person with only a general understanding of programming concepts can read it. This includes absolute beginners (95% of people asking PyGame questions) and converts from other languages.
This is why I mostly shy away from Python's if x < y < z: and blah = [x for x in some-complex-iff-loop] syntax, et al. (And that's also why I always put my if conditions in brackets.) Sure, if you know Python well it doesn't matter, but for an example of why it's important, go try to read a Perl script from the mid-2010s, where you'll see stuff like:
print @$_, "\n" foreach ( @tgs );
It didn't have to be written like that; they could have used a loop block with some instructive variable names, not $_, etc.
So bearing the above in mind, the question comes down to: which is the easiest to read and understand?
So for my two cents' worth, it has to be option #2:
rect = pygame.Rect(0, 0, w, h)
rect.center = xc, yc
It's absolutely clear to a syntax-ignorant code reader that a rectangle is being created, and some kind of centre-point is being set.
But to make the code more "self-documenting", it could be wrapped in a function:
def getRectAround( centre_point, width, height ):
    """ Return a pygame.Rect of size width by height,
        centred around the given centre_point """
    rectangle = pygame.Rect( 0, 0, width, height )   # make new rectangle
    rectangle.center = centre_point                  # centre rectangle
    return rectangle
# ...
rect = getRectAround( ( x, y ), w, h )
Sometimes more code is better.
It is very common to use the softmax function to convert an array of values into an array of probabilities. In general, the function amplifies the probability of the greater values of the array.
However, this function is not scale invariant. Let us consider an example:
If we take an input of [1, 2, 3, 4, 1, 2, 3], the softmax of that is [0.024, 0.064, 0.175, 0.475, 0.024, 0.064, 0.175]. The output has most of its weight where the '4' was in the original input. That is, softmax highlights the largest values and suppresses values which are significantly below the maximum value. However, if the input were [0.1, 0.2, 0.3, 0.4, 0.1, 0.2, 0.3] (which sums to 1.6) the softmax would be [0.125, 0.138, 0.153, 0.169, 0.125, 0.138, 0.153]. This shows that for values between 0 and 1 softmax, in fact, de-emphasizes the maximum value (note that 0.169 is not only less than 0.475, it is also less than the initial proportion of 0.4/1.6=0.25).
I need a function that amplifies the differences between values in an array, emphasizing the greatest values, and that is not so affected by the scale of the numbers in the array.
Can you suggest some function with these properties?
As Robert suggested in the comment, you can use temperature. Here is a toy realization in Python using numpy:
import numpy as np

def softmax(preds):
    exp_preds = np.exp(preds)
    sum_preds = np.sum(exp_preds)
    return exp_preds / sum_preds

def softmax_with_temperature(preds, temperature=0.5):
    # log followed by exp means this computes preds**(1/temperature), normalized,
    # which is what makes the result invariant to rescaling the input.
    preds = np.log(preds) / temperature
    preds = np.exp(preds)
    sum_preds = np.sum(preds)
    return preds / sum_preds

def check_softmax_scalability():
    base_preds = [1, 2, 3, 4, 1, 2, 3]
    base_preds = np.asarray(base_preds).astype("float64")
    for i in range(1, 3):
        print('logits: ', base_preds * i,
              '\nsoftmax: ', softmax(base_preds * i),
              '\nwith temperature: ', softmax_with_temperature(base_preds * i))
Calling check_softmax_scalability() prints:
logits: [1. 2. 3. 4. 1. 2. 3.]
softmax: [0.02364054 0.06426166 0.1746813 0.474833 0.02364054 0.06426166
0.1746813 ]
with temperature: [0.02272727 0.09090909 0.20454545 0.36363636 0.02272727 0.09090909
0.20454545]
logits: [2. 4. 6. 8. 2. 4. 6.]
softmax: [0.00188892 0.01395733 0.10313151 0.76204449 0.00188892 0.01395733
0.10313151]
with temperature: [0.02272727 0.09090909 0.20454545 0.36363636 0.02272727 0.09090909
0.20454545]
But the scale invariance comes with a cost: as you increase temperature, the output values will come closer to each other. Increase it too much, and you will have an output that looks like a uniform distribution. In your case, you should pick a low value for temperature to emphasize the maximum value.
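A quick illustration of that trade-off, reusing the softmax_with_temperature function defined above (the temperature values below are arbitrary):

preds = np.asarray([1, 2, 3, 4, 1, 2, 3], dtype="float64")
for t in (0.1, 0.5, 2.0):
    print(t, softmax_with_temperature(preds, temperature=t))
# A small temperature sharpens the distribution around the maximum;
# a large temperature pushes it towards a uniform distribution.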
You can read more about how temperature works here.
I am using Keras data augmentation for image classification. I would like to specify more than one value for width_shift_range and height_shift_range. For example, I would like to augment the images with multiple shift ranges such as 0.2, 0.4, 0.6 in one training session. Is there any way of doing this?
Thanks in advance for any help.
You don't need to specify multiple values for width_shift_range (resp. height_shift_range). What it basically does is draw a random number x from a uniform distribution on the interval [-width_shift_range, width_shift_range] (resp. [-height_shift_range, height_shift_range]) and apply a translation of x times the corresponding image width (resp. height).
Here's the random_shift function from Keras:
def random_shift(x, wrg, hrg, row_axis=1, col_axis=2, channel_axis=0,
                 fill_mode='nearest', cval=0.):
    # wrg: Width shift range, as a float fraction of the width.
    # hrg: Height shift range, as a float fraction of the height.
    h, w = x.shape[row_axis], x.shape[col_axis]
    tx = np.random.uniform(-hrg, hrg) * h
    ty = np.random.uniform(-wrg, wrg) * w
    translation_matrix = np.array([[1, 0, tx],
                                   [0, 1, ty],
                                   [0, 0, 1]])
    transform_matrix = translation_matrix  # no need to do offset
    x = apply_transform(x, transform_matrix, channel_axis, fill_mode, cval)
    return x
Conclusion: just use the maximum value, since the shift is drawn from the interval [-x, x]; i.e., if you want shifts covering the 0.2, 0.4 and 0.6 ranges, just use 0.6.
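For completeness, here is a minimal sketch of what that looks like in practice (assuming the standard keras.preprocessing.image.ImageDataGenerator API; x_train and y_train are placeholders for your training arrays):

from keras.preprocessing.image import ImageDataGenerator

# Shifts are drawn uniformly from [-0.6, 0.6] of the image width/height,
# which already covers the 0.2, 0.4 and 0.6 cases in a single training session.
datagen = ImageDataGenerator(width_shift_range=0.6,
                             height_shift_range=0.6)
# train_generator = datagen.flow(x_train, y_train, batch_size=32)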
According to the CUDA programming guide, the value returned by a texture fetch is
tex(x) = (1 - a)*T[i] + a*T[i+1] for a one-dimensional texture,
where i = floor(xB), a = frac(xB), and xB = x - 0.5.
Suppose we have a one-dimensional texture that has only two texels, for example:
T[0] = 0.2, T[1] = 1.5
Say we want to fetch the texel at 1, which I think should return T[1], i.e. 1.5.
However, if you follow the rule given in the programming guide, the return value will be:
Xb = 1 - 0.5 = 0.5
a = 0.5
i = 0
return value = 0.5*T[0] + 0.5*T[1] = 0.85
which does not make any sense to me. Can someone explain why the linear filtering is done this way in CUDA? Thanks
The linear filtering algorithm in CUDA assumes texel values are located at the centroid of the interpolation volume (so voxel centred, if you like). In your 1D filtering example, the input data is implicitly taken as the (coordinate, value) pairs
T[0] = (0.5, 0.2), T[1] = (1.5, 1.5)
So your example is asking for Tex(1), which is the midpoint between the two texel values, i.e.
0.5*0.2 + 0.5*1.5 = 0.85
To get T[1] returned you would require Tex(1.5) and that is the general rule - add 0.5 to coordinates if you want to treat the texture data as being at the voxel vertices, rather than the voxel center.
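If it helps, here is a quick numerical check of that formula (plain Python rather than CUDA, just reproducing the arithmetic from the programming guide; the clamping at the border is my own assumption, roughly mimicking the clamp addressing mode):

import math

def tex1d_linear(T, x):
    """1D linear filtering: texel T[i] is treated as sitting at coordinate i + 0.5."""
    xb = x - 0.5
    i = math.floor(xb)
    a = xb - i                              # fractional part of xb
    i0 = min(max(i, 0), len(T) - 1)         # clamp indices to the valid range
    i1 = min(max(i + 1, 0), len(T) - 1)
    return (1 - a) * T[i0] + a * T[i1]

T = [0.2, 1.5]
print(tex1d_linear(T, 1.0))   # 0.85 -> midpoint between the two texel centres
print(tex1d_linear(T, 1.5))   # 1.5  -> exactly at T[1]'s centre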