In my project, I need to create a unique neighborhood governed by a rule. The default NetLogo neighborhoods do not suit my purpose.
My rule is as follows. Form a neighborhood of three patches as follows:
a. Begin at the origin in the bottom left corner.
b. Assign all patches a label of the form "Patch n", where "n" is a random number.
c. Move in a clockwise direction.
d. Moving clockwise, select three patches such that they all border each other, i.e. Patch 1 has a border with Patches 2 and 3, Patch 2 has a border with Patches 1 and 3, and Patch 3 has borders with Patches 1 and 2.
e. If there is more than one possible choice to form the group, select the patch whose "n" in the patch label is smallest.
f. If, for example, I have a world of 100 patches with n = 3, then this results in 33 neighborhoods of three patches each and a 34th neighborhood of one patch.
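Not NetLogo, but here is a Python sketch of one possible reading of rules a–f (the greedy tie-break in step e, and Moore adjacency, are my assumptions):

```python
def neighborhoods(width, height):
    """Greedily group a width x height grid into triples of mutually
    (Moore-)adjacent patches, lowest labels first; patches that cannot
    be grouped are left as singletons."""
    # Label patches 1..N row-major from the bottom-left origin.
    label = {(x, y): y * width + x + 1
             for y in range(height) for x in range(width)}

    def moore(p):
        x, y = p
        return [(x + dx, y + dy)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)
                and 0 <= x + dx < width and 0 <= y + dy < height]

    free = set(label)
    groups = []
    for p in sorted(label, key=label.get):
        if p not in free:
            continue
        # Candidate pairs: two free neighbors of p that also border each
        # other; rank them by their labels so the smallest "n" wins.
        cands = [(label[q], label[r], q, r)
                 for q in moore(p) if q in free
                 for r in moore(p)
                 if r in free and r in moore(q) and label[q] < label[r]]
        if cands:
            _, _, q, r = min(cands)
            groups.append([p, q, r])
            free -= {p, q, r}
        else:
            groups.append([p])
            free.discard(p)
    return groups
```

On a 10x10 world this yields groups of three plus leftovers, in the spirit of rule f, though a greedy pass does not guarantee exactly one leftover patch.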
I have two csv files containing countries and values that correspond to each country.
The data from CSV 1 denotes the number of times a country has been attacked on their own soil.
The data from CSV 2 denotes the number of times a country has attacked another country abroad.
There is overlap between the two sets of data and I intend to demonstrate values from both data sets in one grey scale range to be shown on a choropleth map.
I have some (obviously) phony data below to demonstrate what I'm working with.
TARGET.csv
country, code, value
Iran, IRN, 5
Russia, RUS, 4
United States, USA, 0
Egypt, EGY, 2
Spain, ESP, 1
ATTACKER.csv
country, code, value
Iran, IRN, 3
Russia, RUS, 9
United States, USA, 4
Egypt, EGY, 0
Spain, ESP, 0
There are more targets than attackers.
I want to ensure that I represent the data accurately, but do not know how I would create a normalized range of values between -1 and 1.
It is my understanding that displaying the data in this way would accurately represent the reality best, but I feel like I may be wrong.
In summation:
1) Am I thinking about this problem properly? Is this even the right way to think about displaying the data?
2) What is the proper language used to describe my question?
I am usually able to figure these things out but I'm stumped with dead-end search queries.
3) How do I make sure that my range is normalized? Notice that the USA above appears as the only attacker that has never been a target. Would that make the USA the value nearest +1, despite Russia's larger number of attacks?
I would appreciate whatever input you all can offer.
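For what it's worth, here is a minimal Python sketch of one way I could imagine doing it (my assumption, not an established method): score each country as attacks made minus attacks received, then divide by the largest magnitude so everything lands in [-1, 1].

```python
# Phony data from the two CSVs above.
target = {"IRN": 5, "RUS": 4, "USA": 0, "EGY": 2, "ESP": 1}    # attacks received
attacker = {"IRN": 3, "RUS": 9, "USA": 4, "EGY": 0, "ESP": 0}  # attacks made

# Net score: positive = net attacker, negative = net target.
net = {c: attacker[c] - target[c] for c in target}
scale = max(abs(v) for v in net.values())            # largest magnitude (here 5)
normalized = {c: v / scale for c, v in net.items()}  # now in [-1, 1]
```

Under this scheme Russia (net +5) lands at +1 and the USA (net +4) at +0.8, so the country nearest +1 is decided by the net difference, not by never having been a target.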
I am currently reading the paper "FlowNet: Learning Optical Flow with Convolutional Networks" and having trouble understanding the correlation layer.
I can't seem to find any explanation on Google, so I thought I should ask here:
When the paper talks about comparing each patch from f_1 to each patch from f_2, where f_1 and f_2 are feature maps of dimension w x h x c, what do they mean by patch? Are we talking about a patch of features from a feature map or a patch of pixels from one of the original images?
What are x_1 and x_2? Are they feature pixels (1x1xc) in the feature maps? Are they coordinate values?
What does f_1(x_1 + o) mean exactly?
Many thanks!
From feature map 2, the 21x21x256 patch is extracted only once, and then each 1x1x256 kernel from feature map 1 is convolved with this 21x21x256 patch.
More explanation:
Each 1x1x256 kernel from feature map 1 is convolved with only pixel 1 of the 21x21x256 patch to get one feature map; then all 1x1x256 kernels of feature map 1 are again convolved with pixel 2 of the 21x21x256 patch to get a second feature map.
This process is continued for all pixels of the 21x21x256 patch until we get 441 feature maps, which is equal to the number of pixels in the extracted patch. Please look at this figure.
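The same computation can be sketched in NumPy (a hypothetical small-scale illustration, not the authors' code; with window radius d = 10 you would get the 21x21 = 441 offset channels described above):

```python
import numpy as np

def corr_volume(f1, f2, d=2):
    """Correlate each position of f1 with a (2d+1)x(2d+1) window of f2
    (1x1 'patches', i.e. k = 0); the output has (2d+1)^2 channels, one
    per offset, zero-padded at the borders."""
    h, w, c = f1.shape
    D = 2 * d + 1
    f2p = np.pad(f2, ((d, d), (d, d), (0, 0)))   # zero-pad spatial dims
    out = np.zeros((h, w, D * D))
    for i in range(D):
        for j in range(D):
            # Dot product of every f1 pixel with the f2 pixel at
            # offset (i - d, j - d).
            out[:, :, i * D + j] = np.sum(f1 * f2p[i:i + h, j:j + w, :],
                                          axis=2)
    return out
```

For feature maps with c = 256 and d = 10 this produces the 441-channel correlation volume the answer describes.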
The way I understand it, suppose you have two feature maps (ignoring batches for the moment):
f_1 of shape (w, h, c),
f_2 of shape (w, h, c)
Then there are two stride values s_1 and s_2. The first stride s_1 is applied to f_1 in the sense that we only consider feature map patches x_i of f_1 at strided patch centers. For instance if the stride was 5 (in both the height and width direction), we would consider patches at locations:
(0,0), (0,5), ..., (0, w)
(5,0), (5,5), ..., (5, w)
...
(h,0), (h, 5), ..., (h, w)
(supposing the width/height are divisible by 5 for simplicity; otherwise you have to do some padding arithmetic)
For a given patch center x_i, the patch centers of f_2 considered in the correlation operation around x_i, call them {y_i}, are only those within a neighborhood of size D := 2d+1, and those are strided as well with stride value s_2. There will be D^2 of these, according to the authors. (This part is not well described in my opinion, as there are many ways of interpreting what the stride value s_2 means. If s_2 = 1, then there will be D^2 patches {y_i} of f_2 to consider, but if it is larger, there should be fewer, and hence the final tensor shape will not necessarily be D^2 in the last axis.)
The correlation operation itself is a simple sum of dot products, where the dot products are taken between vectors of shape (1, c), and K^2 of them are summed, where K = 2k+1 (an odd-sized filter for some positive integer k).
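As a rough NumPy sketch of that single correlation value (hypothetical code, ignoring strides and border handling):

```python
import numpy as np

def correlation(f1, f2, x1, x2, k=1):
    """Sum of dot products between the (2k+1)x(2k+1) patches of f1 and f2
    centered at x1 and x2. This equals the elementwise product of the two
    patches summed over all (2k+1)^2 * c entries."""
    (i1, j1), (i2, j2) = x1, x2
    p1 = f1[i1 - k:i1 + k + 1, j1 - k:j1 + k + 1, :]
    p2 = f2[i2 - k:i2 + k + 1, j2 - k:j2 + k + 1, :]
    return float(np.sum(p1 * p2))
```

For two all-ones maps with c = 3 and k = 1, each of the K^2 = 9 dot products contributes 3, giving 27.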
Patch: a patch of features from the feature map.
x_1 and x_2: feature pixels (1x1xc) in the feature maps.
f_1(x_1 + o): the feature pixel located at offset o from x_1.
The correlation layer in FlowNet compares patches from the two feature maps (the first feature map and the second feature map).
To calculate the correlation between feature pixel x1 and feature pixel x2, the correlation layer computes the dot product between windows of size (2k+1, 2k+1) centered at x1 and x2: it just multiplies the elements of the two windows pairwise and adds them up.
The plot I am referring to can be found here. It is reproduced by calling the calc_feature_statistics function.
It is clear to me what the blue and orange curve (mean target and mean prediction) represent.
What is the red line (predictions for different feature values)?
from the link:
To calculate it, the value of the feature is successively changed to fall into every bucket for every input object. The value for a bucket on the graph is calculated as the average for all objects when their feature values are changed to fall into this bucket.
As far as I understand these words, the explanation is as follows. Suppose you have a categorical feature with three possible values: 'Moscow', 'London', 'New York'. Then:
1. Set all values of this feature in the train data to 'Moscow' and calculate the average prediction over all of the data with the model we trained earlier. This is the dot of the red line for bucket 'Moscow'.
2. Repeat the previous step with value 'London'; this is the dot of the red line for bucket 'London'.
3. Same for 'New York'.
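In code, that procedure might look like this (a sketch with made-up names: model stands for the trained model, X for the train data as a dict of column arrays, a stand-in for a DataFrame):

```python
import numpy as np

def prediction_curve(model, X, feature, buckets):
    """For each bucket value, force every row's `feature` to that value,
    then average the model's predictions: one point of the red line per
    bucket."""
    curve = {}
    for value in buckets:
        X_forced = {name: col.copy() for name, col in X.items()}
        X_forced[feature] = np.full_like(X_forced[feature], value)
        curve[value] = float(np.mean(model.predict(X_forced)))
    return curve
```

Each entry of curve is exactly one red-line dot: the mean prediction after pushing every object into that bucket.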
On the page https://courses.cit.cornell.edu/bionb441/CA/forest.m
I found code named "Forest Fire".
I am trying to figure out how this code works for educational purposes.
Here are the rules:
Cells can be in 3 different states. State=0 is empty, state=1 is burning and state=2 is forest.
If one or more of the 4 neighbors of a cell is burning and it is forest (state=2) then the new state is burning (state=1).
A cell which is burning (state=1) becomes empty (state=0).
There is a low probability (0.000005) of a forest cell (state=2) starting to burn on its own (from lightning).
There is a low probability (say, 0.01) of an empty cell becoming forest to simulate growth.
What is not very clear to me is how this part works:
sum = (veg(1:n,[n 1:n-1])==1) + (veg(1:n,[2:n 1])==1) + ...
(veg([n 1:n-1], 1:n)==1) + (veg([2:n 1],1:n)==1) ;
veg = 2*(veg==2) - ((veg==2) & (sum> 0 | (rand(n,n)< Plightning))) + ...
2*((veg==0) & rand(n,n)< Pgrowth) ;
There is no problem running the code; I am just confused about what these matrices (sum and veg) are, and especially what (veg(1:n,[n 1:n-1])==1) does.
What I see is that both are matrices, and veg is the data of the plot (a matrix of 0's, 1's and 2's).
I really appreciate any help you can provide.
A binary comparison operator applied to a matrix and a scalar is evaluated elementwise: it returns a matrix containing the result of comparing each element of the original matrix with the scalar.
sum is a matrix in which each cell contains the number of adjacent cells in the corresponding matrix veg that are on fire (==1).
(veg(1:n,[n 1:n-1])==1) is a matrix of logical 1s and 0s in which each cell equals 1 when the cell to the left of the corresponding one in veg (with wraparound) is on fire (==1).
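To see that indexing trick concretely, here is a NumPy analogue (my translation, not part of the original MATLAB):

```python
import numpy as np

# MATLAB's veg(1:n, [n 1:n-1]) reorders the columns as (n, 1, ..., n-1),
# i.e. a circular shift of the grid one step to the right, so position
# (i, j) now holds the old left neighbor (with wraparound).
veg = np.array([[0, 1, 2],
                [2, 2, 0],
                [1, 0, 2]])
shifted = veg[:, [2, 0, 1]]      # 0-based version of [n 1:n-1] for n = 3
left_burning = (shifted == 1)    # True where the left neighbor is burning
```

The other three index vectors in the code shift right, up, and down in the same way, and summing the four logical matrices counts burning neighbors.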
https://courses.cit.cornell.edu/bionb441/CA/
Look at the URL, go back up the tree to see the source.
The rule:
Cells can be in 3 different states. State=0 is empty, state=1 is burning and state=2 is forest.
If one or more of the 4 neighbors of a cell is burning and it is forest (state=2) then the new state is burning (state=1).
There is a low probability (say, 0.000005) of a forest cell (state=2) starting to burn on its own (from lightning).
A cell which is burning (state=1) becomes empty (state=0).
There is a low probability (say, 0.01) of an empty cell becoming forest to simulate growth.
The array is considered to be toroidally connected, so that fire which burns to the left side will start fires on the right. The top and bottom are similarly connected.
The update code:
sum = (veg(1:n,[n 1:n-1])==1) + (veg(1:n,[2:n 1])==1) + ...
(veg([n 1:n-1], 1:n)==1) + (veg([2:n 1],1:n)==1) ;
veg = ...
2*(veg==2) - ((veg==2) & (sum> 0 | (rand(n,n)< Plightning))) + ...
2*((veg==0) & rand(n,n)< Pgrowth) ;
Note that the toroidal connection is implemented by the ordering of subscripts.
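Here is a hypothetical NumPy translation of that update, where np.roll plays the role of the wrapped subscript vectors [n 1:n-1] and [2:n 1]:

```python
import numpy as np

def step(veg, p_lightning=0.000005, p_growth=0.01, rng=None):
    """One update of the forest-fire CA: 0 = empty, 1 = burning, 2 = forest.
    np.roll implements the toroidal (wraparound) neighbor shifts."""
    rng = np.random.default_rng() if rng is None else rng
    burning = (veg == 1).astype(int)
    # Count burning cells among the 4 von Neumann neighbors, with wraparound.
    n_burning = (np.roll(burning, 1, axis=0) + np.roll(burning, -1, axis=0) +
                 np.roll(burning, 1, axis=1) + np.roll(burning, -1, axis=1))
    forest, empty = (veg == 2), (veg == 0)
    catches = forest & ((n_burning > 0) |
                        (rng.random(veg.shape) < p_lightning))
    grows = empty & (rng.random(veg.shape) < p_growth)
    new = np.zeros_like(veg)          # burning cells default to empty
    new[forest & ~catches] = 2        # untouched forest stays forest
    new[catches] = 1                  # forest catches fire
    new[grows] = 2                    # empty cell grows new forest
    return new
```

With both probabilities set to 0 the update is deterministic, which makes the neighbor logic easy to check by hand.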
I have a 3D cylinder chart that I am having some problems with. I want to effectively sort the cylinders with the highest value at the back and the lowest value at the front. Otherwise the tallest values cover the smallest values.
I have tried sorting both a-z and z-a, but I really need it to be dynamic based on the values. I have also tried sorting by the actual value field, both a-z and z-a, but this seems to return completely random results.
The data in the database looks like the example below. I use a parameter to separate by supplier.
Date        category_type  cost  supplier
01/01/2013  apple          $5    abc
01/01/2013  pear           $10   def
01/01/2013  banana         $15   cgi
01/02/2013  apple          $7    etc
01/02/2013  pear           $12   etc
01/02/2013  banana         $18   etc
I believe I need some form of expression that sorts the values based on cost, as both a-z and z-a in this instance would produce cylinders that block other cylinders.
I have tried sorting the series group by =Sum(Fields!cost.Value, "DataSet1") and by =Fields!cost.Value, but this seems to return random results.
I would be happy even if I could achieve a custom sort such as "banana, pear, apple", although for some suppliers this would still cause me an issue.
Edit 1: strangely enough, this works with a line chart but not a 3D cylinder chart.
Edit 2: example
Attached is an example. I want the tallest cylinders at the back, but the methods mentioned above do not work.
In Chart Area Properties -> 3D Options, enable series clustering:
Series clustering
Choose this option to cluster series groups. When multiple series for bar or column charts are clustered, they are displayed along two distinct rows in the chart area. If series are not clustered, their corresponding data points are displayed adjacent to each other in one row. This option is applicable only to bar and column charts.
Also try changing the rotation and inclination degrees to get a better look, and decrease the wall thickness.