I have two layers: one of the Mississippi River Basin and one of the counties within the 48 states.
USA counties
Mississippi River Basin
I'm having no trouble selecting the counties that fall entirely within the basin, but I also need to handle the counties that are only partially within it. I know how to select the entirety of the counties that are partially within, but I want to select only the portions that lie within the basin shapefile, so I can calculate the percentage of each of those counties that is within the MRB.
Thanks, let me know if you need anything else
If I am reading your question correctly, the end result you want is the proportion of each county that is in the basin?
I would use the Union function to combine the two layers. The output is essentially sub-county polygons.
You then calculate the area of the shapes that are both county and basin, by selecting by attribute.
Then use Dissolve on the counties using a sum rule on your area field (so that all the pieces are added up if there are unexpected splits).
Then calculate the total area of the counties in a new field and use the two new fields to calculate the proportion.
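If you are scripting this rather than clicking through a GIS, here is a minimal sketch of the same Union -> area -> Dissolve -> proportion workflow in Python with geopandas; the library choice, the file paths and the GEOID county identifier are my assumptions, not part of the question.

import geopandas as gpd

# Hypothetical input paths; substitute your own layers.
counties = gpd.read_file("us_counties.shp")
basin = gpd.read_file("mississippi_river_basin.shp")

# Reproject to an equal-area CRS before measuring areas (EPSG:5070 covers the lower 48).
counties = counties.to_crs(epsg=5070)
basin = basin.to_crs(epsg=5070)

# The "Union" step: intersecting the layers yields the sub-county polygons.
pieces = gpd.overlay(counties, basin, how="intersection")
pieces["in_basin_area"] = pieces.geometry.area

# The "Dissolve with a sum rule" step, so all pieces of a county are added up.
in_basin = pieces.dissolve(by="GEOID", aggfunc={"in_basin_area": "sum"})

# Total county area, then the proportion of each county inside the basin.
counties["total_area"] = counties.geometry.area
result = counties.set_index("GEOID").join(in_basin["in_basin_area"])
result["pct_in_basin"] = result["in_basin_area"].fillna(0) / result["total_area"]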
I hope this helps
I'm trying to create a heat map in Tableau that will show me the hottest and coldest values per field. At the moment it is just assessing all values in the data set and assigning heat based on the overall lowest and highest values, irrespective of where they appear in the data set. Example below:
Place   chickens   bears   potatoes
a       10         100     5000
b       70         50      7000
c       30         5       3000
d       30         150     100
In the example table above, I'd like each column to have its own individual heat ranking (e.g. for chickens, place b should be hot and place a cold; for bears, place d should be hot and place c cold, etc.). Can this be done in Tableau on one screen? I know it works if I filter on one column, but I'd rather have this all in one visualisation if possible.
Thanks!
See the example below.
When you have multiple measures in a viz, using the Measure Names pseudo-field on the Color shelf lets you have a separate color legend for each measure name. You can choose to have separate or combined color legends by right-clicking the Measure Names pill on the Color shelf of the Marks card.
In the example, I chose to use a different color palette for each measure, but that's not required. You can use the same palette for each legend if you wish, and still get different scales.
I have gone through a couple of YOLO tutorials, but I am finding it somewhat hard to figure out whether the anchor boxes for each cell the image is divided into are predetermined. In one of the guides I went through, the image was divided into 13x13 cells, and it stated that each cell predicts 5 anchor boxes (bigger than the cell itself; here's my first problem, because it also says a cell would first detect what object is present in it before predicting the boxes).
How can a small cell predict anchor boxes for an object bigger than itself? Also, if each cell classifies before predicting its anchor boxes, how can a small cell classify the right object without querying neighbouring cells, when only a small part of the object falls within that cell?
E.g. say one of the 13x13 cells contains only the white pocket of a T-shirt a man is wearing: how can that cell correctly classify that a man is present without being linked to its neighbouring cells? With a normal CNN, when trying to localize a single object, I know the bounding box prediction relates to the whole image, so at least I can say the network has an idea of what's going on everywhere in the image before deciding where the box should be.
PS: What I currently think is that YOLO basically assigns each cell predetermined anchor boxes, with a classifier at the end of each, and the boxes with the highest scores for each class are then selected; but I am sure it doesn't add up somewhere.
UPDATE: I made a mistake with this question; it should have been about how regular bounding boxes are decided rather than anchor/prior boxes. So I am marking @craq's answer as correct, because that is how anchor boxes are decided according to the YOLOv2 paper.
I think there are two questions here. Firstly, the one in the title, asking where the anchors come from. Secondly, how anchors are assigned to objects. I'll try to answer both.
Anchors are determined by a k-means procedure, looking at all the bounding boxes in your dataset. If you're looking at vehicles, the ones you see from the side will have an aspect ratio of about 2:1 (width = 2*height). The ones viewed from the front will be roughly square, 1:1. If your dataset includes people, the aspect ratio might be 1:3. Foreground objects will be large, background objects will be small. The k-means routine will figure out a selection of anchors that represents your dataset: k = 5 for YOLOv2, and other YOLO versions use different numbers of anchors (YOLOv3 uses 9, split across three scales).
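To make the clustering step concrete, here is a minimal sketch in Python; this is my own illustration rather than code from any YOLO repository, using the paper's d = 1 - IoU distance between (width, height) pairs.

import numpy as np

def iou_wh(box, anchors):
    # IoU between one (w, h) box and k (w, h) anchors, all centred at the origin.
    inter = np.minimum(box[0], anchors[:, 0]) * np.minimum(box[1], anchors[:, 1])
    union = box[0] * box[1] + anchors[:, 0] * anchors[:, 1] - inter
    return inter / union

def kmeans_anchors(boxes, k=5, iters=100):
    # boxes: (n, 2) float array of ground-truth (width, height) pairs.
    anchors = boxes[np.random.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        # Assign each box to its nearest anchor, where distance = 1 - IoU.
        assignment = np.array([np.argmax(iou_wh(b, anchors)) for b in boxes])
        # Standard k-means update: move each anchor to the mean of its boxes.
        for i in range(k):
            if np.any(assignment == i):
                anchors[i] = boxes[assignment == i].mean(axis=0)
    return anchors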
It's useful to have anchors that represent your dataset, because YOLO learns how to make small adjustments to the anchor boxes in order to create an accurate bounding box for your object. YOLO can learn small adjustments better/easier than large ones.
The assignment problem is trickier. As I understand it, part of the training process is for YOLO to learn which anchors to use for which object. So the "assignment" isn't deterministic like it might be for the Hungarian algorithm. Because of this, in general, multiple anchors will detect each object, and you need to do non-max-suppression afterwards in order to pick the "best" one (i.e. highest confidence).
There are a couple of points that I needed to understand before I came to grips with anchors:
Anchors can be any size, so they can extend beyond the boundaries of the 13x13 grid cells. They have to, in order to detect large objects.
Anchors only enter in the final layers of YOLO. YOLO's neural network makes 13x13x5 = 845 predictions (assuming a 13x13 grid and 5 anchors). The predictions are interpreted as offsets to anchors from which to calculate a bounding box; a decode sketch follows below. (The predictions also include a confidence/objectness score and a class label.)
YOLO's loss function compares each object in the ground truth with one anchor. It picks the anchor (before any offsets) with highest IoU compared to the ground truth. Then the predictions are added as offsets to the anchor. All other anchors are designated as background.
If anchors which have been assigned to objects have high IoU, their loss is small. Anchors which have not been assigned to objects should predict background by setting confidence close to zero. The final loss function is a combination from all anchors. Since YOLO tries to minimise its overall loss function, the anchor closest to ground truth gets trained to recognise the object, and the other anchors get trained to ignore it.
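To illustrate the "offsets to anchors" point, here is a minimal decode sketch following the YOLOv2 parameterisation; the function and variable names are mine.

import math

def decode_box(tx, ty, tw, th, to, cx, cy, pw, ph):
    # (cx, cy): the grid cell's top-left corner, in cell units.
    # (pw, ph): the anchor's width and height priors.
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    bx = cx + sigmoid(tx)      # the sigmoid keeps the centre inside the predicting cell
    by = cy + sigmoid(ty)
    bw = pw * math.exp(tw)     # small multiplicative adjustments to the prior,
    bh = ph * math.exp(th)     # which is why representative priors are easier to learn
    objectness = sigmoid(to)   # confidence that this anchor contains an object
    return bx, by, bw, bh, objectness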
The following pages helped my understanding of YOLO's anchors:
https://medium.com/@vivek.yadav/part-1-generating-anchor-boxes-for-yolo-like-network-for-vehicle-detection-using-kitti-dataset-b2fe033e5807
https://github.com/pjreddie/darknet/issues/568
I think that your statement about the number of predictions of the network could be misleading. Assuming a 13 x 13 grid and 5 anchor boxes, the output of the network has, as I understand it, the following shape: 13 x 13 x 5 x (2+2+1+nbOfClasses)
13 x 13: the grid
x 5: the anchors
x (2+2+1+nbOfClasses): the (x, y) coordinates of the center of the bounding box (in the coordinate system of each cell), the (w, h) deviations of the bounding box (deviations from the prior anchor boxes), an objectness/confidence score, and a softmax-activated class vector indicating a probability for each class.
If you want to have more information about the determination of the anchor priors you can take a look at the original paper in the arxiv: https://arxiv.org/pdf/1612.08242.pdf.
We have two positions, A and B, with the specified characteristics on the map.
We want to find a point between these two positions, at a distance of 50 meters.
You can use the geometry library for this:
https://developers.google.com/maps/documentation/javascript/geometry
https://developers.google.com/maps/documentation/javascript/reference/3/#spherical
From the docs:
Navigation Functions
When navigating on a sphere, a heading is the angle of a direction from a fixed reference point, usually true north. Within the Google Maps API, a heading is defined in degrees from true north, where headings are measured clockwise from true north (0 degrees). You may compute this heading between two locations with the computeHeading() method, passing it two from and to LatLng objects.
Given a particular heading, an origin location, and the distance to travel (in meters), you can calculate the destination coordinates using computeOffset().
In your case you might want to get the heading first:
var heading = google.maps.geometry.spherical.computeHeading(latLngFrom, latLngTo);
then you can get the offset location (here 50 meters out from latLngFrom, toward latLngTo):
var position = google.maps.geometry.spherical.computeOffset(latLngFrom, 50, heading);
Could anyone help me with a Spotfire issue?
Three data types are available: locations (string), areas within these locations (string), and the sum of hours associated with the areas and therefore with the locations.
Objective: I'm trying to show the sum of hours on the value axis and the top 10 areas (string data) on the category axis. I want to do this for all the locations available (trellis by location) and sort the bars descending by sum of hours, all in one bar chart visualisation. When I did this with trellis, Spotfire essentially trellised the original graph, i.e. it showed how the overall top 10 sums of hours are distributed throughout the locations. This is not what I want: I want a dynamic category axis that shows me the areas with the top 10 largest sums of hours for each location.
How do I do this? My idea was that maybe I could add a custom expression to the category axis before trellising, or add a calculated column to help, but I'm not sure how exactly this could solve the problem. Could anybody help?
Did you try the 'Sort bars by value' option?
I have a single point and a set of shapes. I need to know if the point is contained within the compound shape of those shapes. That is, where all of the shapes intersect.
But that is the easy part.
If the point is outside the compound shape I need to find the position within that compound shape that is closest to the point.
These shapes can be of the type:
square
circle
ring (circle with another circle cut out of the center)
inverse circle (basically just the circular hole and a never-ending fill outside that hole, or out to the end of the canvas if there must be a limit to its size)
part of circle (as in a pie chart)
part of ring (as above, but for a ring)
line
The example below has an inverted circle (the biggest circle, with grey surrounding it), a ring (top left), a square and a line.
If we don't consider the line, then the orange part is the shape to constrain to. If the line is taken into account then the saturated orange part of the line is the shape to constrain to.
The small black dots represent the points that need to be constrained. The blue dots represent the desired results (a maps to 1, b to 2, etc.).
Point "f" has no corresponding constrained result, since it is already in the orange area.
For the purpose of this example, only point "e" is constrained to the line; all others are constrained to the orange area.
If none of the shapes intersected, then the point could not be constrained. If the constraint consisted of two lines that cross each other, then every point would be constrained to the same position (the exact point where the lines cross).
I have found methods that come close to this, but none that I can combine to produce the above functionality.
Some similar questions that I found:
Points within a semi circle
What algorithm can I use to determine points within a semi-circle?
Point closest to MovieClip
Flash: Closest point to MovieClip
Closest point through Minkowski Sum (this will work if I can convert the compound shape to polygons)
http://www.codezealot.org/archives/153
Select edge of polygon closest to point (similar to above)
For a point in an irregular polygon, what is the most efficient way to select the edge closest to the point?
PS: I noticed that the orange area may actually come across as yellow on some screens. It's the colored area in any case.
This isn't much of an answer, but it's a bit too long to fit into a comment ...
It's tempting to think, and therefore to advise you, that you could find the nearest point in each of the shapes to the point of interest, and then take the nearest of those nearest points.
BUT
The area you are interested in is constructed by union, intersection and difference of other areas and there will, therefore, be no general relationship between the closest points of the original shapes and the closest point of the combined shape. If you understand what I mean. For example, while the closest point of A union B is the closest of the set {closest point of A, closest point of B}, the closest point of A intersection B is not a simple function of that same set; at least not for the general case.
I suggest, therefore, that you are going to have to compute the (complex) shape which represents the area of interest and use one of the algorithms you've already discovered to find the closest point to your point of interest.
I look forward to someone much better versed in computational geometry proving me wrong.
Let's call I the intersection of all the shapes, C the contour of I, p the point you want to constrain and r the result point. We have:
If p is in I, then r = p
If p is not in I, then r is in C. So r is the nearest point in C to p.
So I think what you should do is the following:
1. If p is inside of all the shapes, return p.
2. Compute the contour C of the intersection of all the shapes; it is defined by a list of parts (segments, arcs, ...).
3. Find the nearest point to p in every part of C (computed in 2.) and return the nearest point among them to p.
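If you have a geometry library available, those three steps translate almost directly. A minimal sketch in Python, assuming shapely (the question itself is language-agnostic):

from functools import reduce
from shapely.ops import nearest_points

def constrain(p, shapes):
    # I: the intersection of all the shapes (shapely geometries).
    region = reduce(lambda a, b: a.intersection(b), shapes)
    if region.is_empty:
        return None  # the shapes do not intersect, so p cannot be constrained
    if region.contains(p):
        return p     # p is in I, so r = p
    # Otherwise r is the nearest point to p on the contour C of I.
    return nearest_points(region.boundary, p)[0]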
I've discussed this question at length with my brother, and together we came to conclude that any resulting point will always lie either on a point where two shapes intersect, or on a point where a shape intersects the line perpendicular to that shape through the original point.
In the case of a circular shape constraint, the perpendicular line equals the line to its center. In the case of a line shape constraint, the perpendicular line is (of course) the line perpendicular to itself. In the case of a rectangle, the perpendicular line is the line perpendicular to the closest edge.
(And the same, theoretically, for complex polygon constraints.)
So a new approach (that I'll still have to test) will be to do the following (see the sketch after this list):
calculate all candidate points: the intersections between shape constraints, and the intersections of each shape constraint with the perpendicular line from the original point to that constraint
keep only those that are valid: the ones that lie within (comply with) all constraints
select the one closest to the original point
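A minimal sketch of that candidate-point approach, again assuming shapely geometries; the helper name and tolerance are mine.

from shapely.ops import nearest_points

def constrain_by_candidates(p, shapes, eps=1e-9):
    candidates = []
    # Feet of the perpendiculars: the closest boundary point of each shape to p.
    for s in shapes:
        candidates.append(nearest_points(s.boundary, p)[0])
    # Points where the boundaries of two shapes intersect.
    for i in range(len(shapes)):
        for j in range(i + 1, len(shapes)):
            inter = shapes[i].boundary.intersection(shapes[j].boundary)
            for g in getattr(inter, "geoms", [inter]):
                if g.geom_type == "Point":
                    candidates.append(g)
    # Keep only the valid candidates: distance 0 means the point lies inside
    # the shape or on its boundary, i.e. it complies with that constraint.
    valid = [c for c in candidates if all(s.distance(c) <= eps for s in shapes)]
    # Select the one closest to the original point, if any.
    return min(valid, key=p.distance, default=None)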
If this works, then one more optimization could be to first determine which intersection points are nearest, check whether they are valid, and work outward away from the original point until a valid one is found.
If this does not work, I will have another look at the polygon clipping method. For that approach I've come across this useful post:
Compute union of two arbitrary shapes
where clipping complex polygons is made much easier through http://code.google.com/p/gpcas/
The method holds true for all the cases (all points and their results) above, and also for a number of other scenarios that we tested (on paper).
I will try a live version tomorrow at work.