I have a molecular dynamics trajectory of a micelle in a water box. For each frame, I want to get the center of mass (COM) of the micelle and calculate the density of water molecules from the COM to the sides of the water box. Does anybody know the best way to do that? I was wondering whether somebody has dealt with such an issue before.
Thanks
Sadegh
First - a lot of useful commands and how to use them are written here.
You should select your lipids (I suppose the micelle is built from phospholipids), either selecting by type - lipids:
set sel1 [atomselect top "lipids"]
or by resname:
set sel1 [atomselect top "resname DPC"]
And then:
center_of_mass $sel1
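If you'd rather script the whole analysis, here is a minimal sketch in Python with MDAnalysis (the file names and selection strings are placeholders for your system, and periodic boundary conditions are ignored for brevity):

import numpy as np
import MDAnalysis as mda

u = mda.Universe("system.psf", "traj.dcd")            # placeholder topology/trajectory
micelle = u.select_atoms("resname DPC")               # placeholder micelle selection
water = u.select_atoms("resname TIP3 and name OH2")   # placeholder water-oxygen selection

nbins = 50
rmax = u.dimensions[:3].min() / 2                     # out to the nearest box side
counts = np.zeros(nbins)
for ts in u.trajectory:
    com = micelle.center_of_mass()
    dist = np.linalg.norm(water.positions - com, axis=1)
    hist, edges = np.histogram(dist, bins=nbins, range=(0, rmax))
    counts += hist

# convert counts to a number density: divide by shell volumes and frame count
shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
density = counts / (len(u.trajectory) * shell_vol)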
"calculate the density of water molecules from COM to the sides of the water box"
I don't know exactly what you are talking about - are there some water molecules inside the micelle? Or is the water box just the water molecules within a certain distance from the micelle? Please give some more info (better, some pics of the system with descriptions).
I have gone through a couple of YOLO tutorials, but I am finding it somewhat hard to figure out whether the anchor boxes for each cell the image is divided into are predetermined. In one of the guides I went through, the image was divided into 13x13 cells, and it stated that each cell predicts 5 anchor boxes (bigger than the cell; here's my first problem, because it also says the cell would first detect what object is present in the small cell before predicting the boxes).
How can the small cell predict anchor boxes for an object bigger than itself? Also, it's said that each cell classifies before predicting its anchor boxes. How can the small cell classify the right object in it without querying neighbouring cells, if only a small part of the object falls within the cell?
E.g. say one of the 13x13 cells contains only the white pocket part of a man wearing a T-shirt. How can that cell correctly classify that a man is present without being linked to its neighbouring cells? With a normal CNN, when trying to localize a single object, I know the bounding box prediction relates to the whole image, so at least I can say the network has an idea of what's going on everywhere in the image before deciding where the box should be.
PS: What I currently think of how YOLO works is basically that each cell is assigned predetermined anchor boxes, with a classifier at each, before the boxes with the highest scores for each class are selected, but I am sure it doesn't add up somewhere.
UPDATE: I made a mistake with this question; it should have been about how regular bounding boxes are decided rather than anchor/prior boxes. So I am marking @craq's answer as correct, because that's how anchor boxes are decided according to the YOLOv2 paper.
I think there are two questions here. Firstly, the one in the title, asking where the anchors come from. Secondly, how anchors are assigned to objects. I'll try to answer both.
Anchors are determined by a k-means procedure, looking at all the bounding boxes in your dataset. If you're looking at vehicles, the ones you see from the side will have an aspect ratio of about 2:1 (width = 2*height). The ones viewed from in front will be roughly square, 1:1. If your dataset includes people, the aspect ratio might be 1:3. Foreground objects will be large, background objects will be small. The k-means routine will figure out a selection of anchors that represents your dataset. k = 5 for YOLOv2; other YOLO versions use different numbers of anchors (YOLOv3, for example, uses 9, split across three scales).
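A rough sketch of that procedure in Python (not the darknet code, just the idea: k-means on box widths/heights using the YOLOv2 paper's d = 1 - IoU distance, with boxes treated as co-centered):

import numpy as np

def iou_wh(box, anchors):
    # IoU between one (w, h) box and each (w, h) anchor, all centered at the origin
    inter = np.minimum(box[0], anchors[:, 0]) * np.minimum(box[1], anchors[:, 1])
    union = box[0] * box[1] + anchors[:, 0] * anchors[:, 1] - inter
    return inter / union

def kmeans_anchors(boxes, k=5, iters=100):
    # boxes: (N, 2) array of ground-truth widths and heights
    anchors = boxes[np.random.choice(len(boxes), k, replace=False)].astype(float)
    for _ in range(iters):
        nearest = np.array([np.argmax(iou_wh(b, anchors)) for b in boxes])
        for i in range(k):
            if np.any(nearest == i):
                anchors[i] = boxes[nearest == i].mean(axis=0)
    return anchors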
It's useful to have anchors that represent your dataset, because YOLO learns how to make small adjustments to the anchor boxes in order to create an accurate bounding box for your object. YOLO can learn small adjustments better/easier than large ones.
The assignment problem is trickier. As I understand it, part of the training process is for YOLO to learn which anchors to use for which object. So the "assignment" isn't deterministic like it might be for the Hungarian algorithm. Because of this, in general, multiple anchors will detect each object, and you need to do non-max-suppression afterwards in order to pick the "best" one (i.e. highest confidence).
There are a couple of points that I needed to understand before I came to grips with anchors:
Anchors can be any size, so they can extend beyond the boundaries of the 13x13 grid cells. They have to be, in order to detect large objects.
Anchors only enter in the final layers of YOLO. YOLO's neural network makes 13x13x5=845 predictions (assuming a 13x13 grid and 5 anchors). The predictions are interpreted as offsets to anchors from which to calculate a bounding box. (The predictions also include a confidence/objectness score and a class label.)
YOLO's loss function compares each object in the ground truth with one anchor. It picks the anchor (before any offsets) with the highest IoU compared to the ground truth. Then the predictions are added as offsets to the anchor. All other anchors are designated as background.
If anchors which have been assigned to objects have high IoU, their loss is small. Anchors which have not been assigned to objects should predict background by setting confidence close to zero. The final loss function is a combination from all anchors. Since YOLO tries to minimise its overall loss function, the anchor closest to ground truth gets trained to recognise the object, and the other anchors get trained to ignore it.
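As a sketch of that assignment step (again comparing only widths and heights, with boxes imagined on top of each other; this reuses iou_wh from the k-means sketch above):

import numpy as np

def assign_anchor(gt_wh, anchors):
    # index of the anchor with the highest IoU against this ground-truth box;
    # all other anchors are treated as background for this object
    return int(np.argmax(iou_wh(np.asarray(gt_wh), anchors)))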
The following pages helped my understanding of YOLO's anchors:
https://medium.com/@vivek.yadav/part-1-generating-anchor-boxes-for-yolo-like-network-for-vehicle-detection-using-kitti-dataset-b2fe033e5807
https://github.com/pjreddie/darknet/issues/568
I think that your statement about the number of predictions of the network could be misleading. Assuming a 13 x 13 grid and 5 anchor boxes, the output of the network has, as I understand it, the following shape: 13 x 13 x 5 x (2+2+1+nbOfClasses)
13 x 13: the grid
x 5: the anchors
x (2+2+1+nbOfClasses): the (x, y)-coordinates of the center of the bounding box (in the coordinate system of each cell), the (h, w)-deviation of the bounding box (deviation from the prior anchor boxes), an objectness/confidence score, and a softmax-activated class vector indicating a probability for each class.
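To make the interpretation of those numbers concrete, here is a sketch of how one cell's raw outputs are typically decoded in YOLOv2 (following the formulas in the paper; the variable names are mine):

import numpy as np

def decode_box(raw, anchor_wh, cell_xy, grid=13):
    # raw: [tx, ty, tw, th, to] for one anchor in one cell (class scores omitted)
    tx, ty, tw, th, to = raw
    sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
    bx = (cell_xy[0] + sigmoid(tx)) / grid     # box center x, normalized to [0, 1]
    by = (cell_xy[1] + sigmoid(ty)) / grid     # box center y
    bw = anchor_wh[0] * np.exp(tw) / grid      # width  = anchor width  * exp(offset)
    bh = anchor_wh[1] * np.exp(th) / grid      # height = anchor height * exp(offset)
    confidence = sigmoid(to)                   # objectness score
    return bx, by, bw, bh, confidence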
If you want to have more information about the determination of the anchor priors you can take a look at the original paper in the arxiv: https://arxiv.org/pdf/1612.08242.pdf.
I am a bit confused with how Yolo works.
In the paper, they say that:
"The confidence prediction represents the IOU between the
predicted box and any ground truth box."
But how do we have the ground truth box? Let's say I use my Yolo network (already trained) on an image that is not labelled. What is my confidence then?
Sorry if the question is simple, but I really don't get this part...
Thank you!
But how do we have the ground truth box?
You seem to be confused about what exactly the training data is and what the output or prediction by YOLO is.
Training data is a bounding box along with the class label(s). This is referred to as the 'ground truth box': b = [bx, by, bh, bw, class_name (or number)], where (bx, by) is the midpoint of the annotated bounding box and bh, bw are the height and width of the box.
Output or prediction is a bounding box b along with class c for an image i.
Formally: y = [ pc, bx, by, bh, bw, cn ], where (bx, by) is the midpoint of the predicted bounding box, bh, bw are the height and width of the box, and pc is the probability of having class(es) c in box b.
Let's say I use my Yolo network (already trained) on an image that is not labelled. What is my confidence then?
When you say you have a pre-trained model (which you refer to as already trained), your network already 'knows' bounding boxes for certain object classes, and it tries to approximate where the object might be in a new image. While doing so, your network might predict the bounding box somewhere other than where it's supposed to be. So how do you calculate by how much the box is 'somewhere else'? IOU to the rescue!
What IOU (Intersection Over Union) does is, it gets you a score of area of overlap over area of union.
IOU = Area of Overlap / Area of Union
It's rarely a perfect 1, but the closer the better: the lower the value of IOU, the worse YOLO is predicting the bounding box with reference to the ground truth.
An IOU score of 1 means the bounding box is accurately, or very confidently, predicted with reference to the ground truth.
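In code, IOU is just a few lines (a sketch, with boxes given as corner coordinates rather than YOLO's midpoint format):

def iou(a, b):
    # a, b: boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)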
YOLO uses IoU to weight its confidence targets during training. When training, the confidence target for a predicted box is
(probability of object) * (IoU score with the ground truth box)
Hope it helps you.
I think all you need is a good image that clarifies what the ground truth is.
As you can see on the left, the rectangle that perfectly envelops the object is the ground truth (the blue one).
The orange rectangle is the predicted one. The IoU is what you can visually understand from the right-hand side of the image.
Hope this helps.
I think I know the answer.
I guess YOLO uses IoU in 2 cases, for different goals:
1- to assess predictions while training
2- when you use an already-trained model, you sometimes get many boxes for the same object. I have read that this is the way YOLO tackles this issue (not sure if this is part of Non-Maximum Suppression).
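For case 2, a minimal sketch of greedy non-maximum suppression (reusing the iou helper from the earlier answer's sketch):

def nms(boxes, scores, iou_threshold=0.5):
    # keep the highest-scoring box, drop every box that overlaps it too much, repeat
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[i], boxes[best]) < iou_threshold]
    return keep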
I'm trying to simulate a tank-like/skid-steered vehicle, i.e. both of the wheels (one on each side) have separate velocities, and steering is done by increasing or decreasing the velocity of one of the sides.
For example, if I set the velocity of the left wheel to 5 and the right wheel to 3, it will turn right. What I'd like to know is: given the velocities of the wheels, Vl and Vr, and the distance between the wheels, D, by how many degrees will the direction the vehicle is pointing change in one tick?
I've tried looking at Formula for controlling the movement of a tank-like vehicle?, and the links on that question, but haven't come up with anything. All my best guesses have failed.
First: the really easy edge cases. If V_l and V_r are zero, don't move. If they're the same, don't turn.
Second, if only one of V_l and V_r is zero, the tank pivots around the stationary tread, and the moving tread traces out an arc of length V_big with a radius of curvature D. theta = V_big/D, plus or minus some sign conventions based on your coordinates. (The tank base also translates some distance, but the calculations for that depend on where the center of rotation of the tank is defined to be and on your coordinate system, so that detail is left as an exercise for the reader.)
Third, symmetry concerns! Obviously tank treads turning is left/right symmetric. If the left tread is twice as fast as the right the tank should turn the same amount as if the right tread is twice as fast as the left, just in a different direction. Ditto for going backwards.
Fourth: meat and potatoes! I'm assuming neither tank tread can slip whatsoever. The faster tread traces an arc of length V_fast on a circle of radius r+D, marked out by an angle theta. If you recall your trig, V_fast = (r+D)*theta. The slower tread traces out an arc of length V_slow on a circle of radius r, marked out by the same angle (V_slow = theta*r). Divide one equation by the other to receive V_fast/V_slow = (r+D)/r. Apply algebra to get r = D/((V_fast/V_slow)-1). Note that this explodes appropriately when V_slow is zero or when V_fast = V_slow, and you appropriately receive r = D when V_fast = 2*V_slow. Recall that theta = V_slow/r, so theta = (V_fast-V_slow)/D.
IN RADIANS, mind you. That's a crucial detail.
NOTE: If you define 'turning right' as positive theta and turning left as negative theta, it all works out and theta = (V_l-V_r)/D, even for negative speeds. The tank won't turn around to face the direction of travel; it'll keep facing the correct way.
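Put together as a sketch (plain Python, with the variable names from the question):

import math

def heading_change_deg(v_l, v_r, d):
    # change in heading per tick; positive = turning right under the sign
    # convention above (no-slip assumption, velocities in distance per tick)
    return math.degrees((v_l - v_r) / d)

# example from the question: left tread at 5, right tread at 3;
# with D = 2 this is 1 radian, about 57.3 degrees per tick
print(heading_change_deg(5, 3, 2))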
I need to create turtles that have a certain dimension and check for overlap.
Since turtles by definition have no extension, I thought maybe the gis extension could be useful.
There is a way of associating an envelope with a turtle like
let gis:envelope-of self (list (xcor - 2 ) (xcor + 2) (ycor - 2) (ycor + 2))
But I don't know how to use this to draw the envelope and to check for overlaps.
Another way could be to give up the idea of one turtle having dimensions and to create a gis dataset from turtles by using
gis:turtle-dataset turtle-set
But I don't know how to create a polygon with this :-(
Any ideas?
Updated for Seth's comment to make explicit the different approaches for circles and others.
If the turtles are circles, then there is an overlap if the distance between them is less than the sum of the sizes of the two turtles divided by 2 (size is a diameter), using the distance primitive as in Seth's comment.
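Written out (a Python sketch of the same test; remember that NetLogo's size is a diameter, so the radii are size / 2):

def circles_overlap(x1, y1, size1, x2, y2, size2):
    dist = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    return dist < (size1 + size2) / 2  # compare distance to the sum of the radii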
However, if you have squares or other shapes, then you will have to do some fancy stuff with heading and the various trigonometry functions, and will need the differences of positions in the x and y directions (differences in xcor and ycor respectively). Something like this will get you started:
to-report xdiff [ turt1 turt2 ]
report [xcor] of turt1 - [xcor] of turt2
end
In the end I took an easy way out:
Since my objects don't have to move, I use adjacent patches to form a block of the needed size. Before I occupy a new patch, I check if it is already used and if so I delete all newly occupied patches.
Not very versatile, but it does the job for me so far.
I'm drawing rectangles at random positions on the stage, and I don't want them to overlap.
So for each rectangle, I need to find a blank area to place it.
I've thought about trying a random position and verifying whether it is free with
private function isFree(r:Rectangle):Boolean {
    // free only if r intersects none of the children already on the stage
    for (var i:int = 0; i < numChildren; i++)
        if (getChildAt(i).getBounds(this).intersects(r))
            return false;
    return true;
}
and, in case it returns false, trying another random position.
The problem is that if there is no free space, I'll be stuck trying random positions forever.
Is there an elegant solution to this?
Let S be the area of the stage. Let A be the area of the smallest rectangle we want to draw. Let N = S/A
One possible deterministic approach:
When you draw a rectangle on an empty stage, this divides the stage into at most 4 regions that can fit your next rectangle. When you draw your next rectangle, one or two regions are split into at most 4 sub-regions (each) that can fit a rectangle, etc. You will never create more than N regions, where N = S/A as defined above. Keep a list of regions (unsorted is fine), each represented by its four corner points and each labeled with its area, and use weighted-by-area reservoir sampling with a reservoir size of 1 to select a region with probability proportional to its area in at most one pass through the list (sketched below). Then place a rectangle at a random location in that region. (Select a random point from the bottom-left portion of the region that allows you to draw a rectangle with that point as its bottom-left corner without hitting the top or right wall.)
If you are not starting from a blank stage then just build your list of available regions in O(N) (by re-drawing all the existing rectangles on a blank stage in any order, for example) before searching for your first point to draw a new rectangle.
Note: You can change your reservoir size to k to select the next k rectangles all in one step.
Note 2: You could alternatively store available regions in a tree with each edge weight equaling the sum of areas of the regions in the sub-tree over the area of the stage. Then to select a region in O(logN) we recursively select the root with probability area of root region / S, or each subtree with probability edge weight / S. Updating weights when re-balancing the tree will be annoying, though.
Runtime: O(N)
Space: O(N)
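A sketch of the weighted-by-area selection step (Python; a single pass, reservoir size 1):

import random

def pick_region(regions):
    # regions: iterable of (region, area) pairs; returns one region with
    # probability proportional to its area, in one pass over the list
    chosen, total = None, 0.0
    for region, area in regions:
        total += area
        if random.random() < area / total:
            chosen = region
    return chosen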
One possible randomized approach:
Select a point at random on the stage. If you can draw one or more rectangles that contain the point (not just one that has the point as its bottom left corner), then return a randomly positioned rectangle that contains the point. It is possible to position the rectangle without bias with some subtleties, but I will leave this to you.
At worst there is one space exactly big enough for our rectangle and the rest of the stage is filled. So this approach succeeds with probability > 1/N, or fails with probability < 1-1/N. Repeat N times. We now fail with probability < (1-1/N)^N < 1/e. By fail we mean that there is a space for our rectangle, but we did not find it. By succeed we mean we found a space if one existed. To achieve a reasonable probability of success we repeat either Nlog(N) times for 1/N probability of failure, or N² times for 1/e^N probability of failure.
Summary: Try random points until we find a space, stopping after NlogN (or N²) tries, in which case we can be confident that no space exists.
Runtime: O(NlogN) for high probability of success, O(N²) for very high probability of success
Space: O(1)
You can simplify things with a transformation. If you're looking for a valid place to put your LxH rectangle, you can instead grow all of the previous rectangles L units to the right, and H units down, and then search for a single point that doesn't intersect any of those. This point will be the lower-right corner of a valid place to put your new rectangle.
Next apply a scan-line sweep algorithm to find areas not covered by any rectangle. If you want a uniform distribution, you should choose a random y-coordinate (assuming you sweep down) weighted by free area distribution. Then choose a random x-coordinate uniformly from the open segments in the scan line you've selected.
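A sketch of the point test implied by the transformation (Python, y growing downward; placed rectangles given as (x, y, w, h) tuples):

def valid_lower_right(p, placed, L, H):
    # p is a candidate lower-right corner for a new L x H rectangle; it is
    # valid iff it lies inside no existing rectangle grown L right and H down
    px, py = p
    for (x, y, w, h) in placed:
        if x < px < x + w + L and y < py < y + h + H:
            return False
    return True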
I'm not sure how elegant this would be, but you could set up a maximum number of attempts. Maybe 100?
Sure you might still have some space available, but you could trigger the "finish" event anyway. It would be like when tween libraries snap an object to the destination point just because it's "close enough".
HTH
One possible check you could make to determine whether there is enough space would be to check how much area the current set of rectangles is taking up. If the amount of area left over is less than the area of the new rectangle, you can immediately give up and bail out. I don't know what information you have available, or whether the rectangles are being laid down in a regular pattern, but if so you may be able to vary the check to see whether there is obviously not enough space available.
This may not be the most appropriate method for you, but it was the first thing that popped into my head!
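For instance (a sketch, assuming the already-placed rectangles do not overlap each other):

def obviously_no_room(stage_w, stage_h, placed, new_w, new_h):
    # necessary (not sufficient) condition: if the leftover area is smaller
    # than the new rectangle, no placement can possibly succeed
    used = sum(w * h for (_, _, w, h) in placed)
    return stage_w * stage_h - used < new_w * new_h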
Assuming you define the dimensions of the rectangle before trying to draw it, I think something like this might work:
Establish a grid of possible centre points across the stage for the candidate rectangle. So for a 6x4 rectangle your first point would be at (3, 2), then (3 + 6 * x, 2 + 4 * y). If you can draw a rectangle between the four adjacent points then a possible space exists.
for (x = 0; x < stage.width / rect.width - 1; x++)
    for (y = 0; y < stage.height / rect.height - 1; y++)
        if (can_draw_rectangle_at([x * rect.width, y * rect.height],
                                  [(x + 1) * rect.width, (y + 1) * rect.height]))
            return true;
This doesn't tell you where you can draw it (although it should be possible to build a list of the possible drawing areas), just that you can.
I think that the only efficient way to do this with what you have is to maintain a 2D boolean array of open locations. Have the array of sufficient size such that the drawing positions still appear random.
When you draw a new rectangle, zero out the corresponding rectangular piece of the array. Then checking for a free area is constant^H^H^H^H^H^H^H time. Oops, that means a lookup is O(nm) time, where n is the length, m is the width. There must be a range based solution, argh.
Edit 2: Apparently the answer is here, but in my opinion it might be a bit much to implement in ActionScript, especially if you are not keen on the geometry.
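For what it's worth, one range-based option over that boolean array is a table of 2D prefix sums, which makes each free-area lookup O(1) at the cost of rebuilding the table (O(nm)) after each placement. A sketch:

import numpy as np

def prefix_sums(occupied):
    # occupied: 2D array of 0/1 flags; cumulative sums along both axes
    return occupied.cumsum(axis=0).cumsum(axis=1)

def region_is_free(prefix, x1, y1, x2, y2):
    # count occupied cells in [x1, x2) x [y1, y2) by inclusion-exclusion
    total = prefix[x2 - 1, y2 - 1]
    if x1 > 0:
        total -= prefix[x1 - 1, y2 - 1]
    if y1 > 0:
        total -= prefix[x2 - 1, y1 - 1]
    if x1 > 0 and y1 > 0:
        total += prefix[x1 - 1, y1 - 1]
    return total == 0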
Here's the algorithm I'd use
Put down N random points, where N is the number of rectangles you want.
Iteratively increase the dimensions of the rectangles created at each point until they touch another rectangle.
You can constrain the way that the initial points are put down if you want to have a minimum allowable rectangle size.
If you want all the space covered with rectangles, you can then incrementally add random points to the remaining "free" space until there is no area left uncovered.
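A sketch of the growth loop (Python; axis-aligned rectangles as (x, y, w, h), expanded one step per side per pass until nothing can grow):

def intersects(a, b):
    return (a[0] < b[0] + b[2] and b[0] < a[0] + a[2] and
            a[1] < b[1] + b[3] and b[1] < a[1] + a[3])

def grow_rectangles(points, stage_w, stage_h, step=1):
    rects = [[x, y, 0, 0] for (x, y) in points]   # start as degenerate rectangles
    grew = True
    while grew:
        grew = False
        for i in range(len(rects)):
            # try expanding left, up, right, down; keep only legal expansions
            for dx, dy, dw, dh in ((-step, 0, step, 0), (0, -step, 0, step),
                                   (0, 0, step, 0), (0, 0, 0, step)):
                x, y, w, h = rects[i]
                t = [x + dx, y + dy, w + dw, h + dh]
                if (t[0] >= 0 and t[1] >= 0
                        and t[0] + t[2] <= stage_w and t[1] + t[3] <= stage_h
                        and not any(intersects(t, rects[j])
                                    for j in range(len(rects)) if j != i)):
                    rects[i] = t
                    grew = True
    return rects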