I'm looking for an algorithm that allows me to know if a quadrilateral intersects another one. I'm not interested in the intersection itself, I just want to know if it exists.
I have found solutions like the one proposed here: https://math.stackexchange.com/questions/141798/two-quadrilaterals-intersection-area-special-case
But my problem is simpler than that poster's, so there may be a simpler solution too.
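If both quadrilaterals are convex, a separating axis test answers exactly this yes/no question. A minimal Python sketch of that idea, assuming convex quadrilaterals with vertices given in order:

```python
# Minimal separating-axis test for two convex quadrilaterals.
# Each quad is a list of four (x, y) vertices given in order (CW or CCW).
# Returns True if the quads overlap (including touching or containment).

def _axes(quad):
    # The perpendicular of each edge is a candidate separating axis.
    n = len(quad)
    for i in range(n):
        x1, y1 = quad[i]
        x2, y2 = quad[(i + 1) % n]
        yield (-(y2 - y1), x2 - x1)

def _project(quad, axis):
    ax, ay = axis
    dots = [x * ax + y * ay for x, y in quad]
    return min(dots), max(dots)

def quads_intersect(q1, q2):
    for axis in list(_axes(q1)) + list(_axes(q2)):
        min1, max1 = _project(q1, axis)
        min2, max2 = _project(q2, axis)
        if max1 < min2 or max2 < min1:
            return False  # found a separating axis -> no intersection
    return True  # no separating axis exists -> the quads intersect

# Example: a unit square and a square shifted half a unit to the right.
print(quads_intersect([(0, 0), (1, 0), (1, 1), (0, 1)],
                      [(0.5, 0), (1.5, 0), (1.5, 1), (0.5, 1)]))  # True
```

For non-convex quadrilaterals this alone is not enough; you would also need pairwise edge-intersection tests plus a point-in-polygon check to catch containment.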
Although there exist several posts about (multi)lateration, I would like to summarize some approaches and present some issues/questions to better clarify the approaches.
It seems that there are two ways to find the target location: a geometric/analytic approach (solving the equations directly with some trick) and a fitting approach that converts the non-linear system into a linear one.
With respect to the first one, I would like to ask a few questions.
Suppose we have perfect range measurements and consider the 2D case, so the exact solution is the unique point where the three circles intersect. Can anyone point to a geometric solution for this first case? I found this approach: https://math.stackexchange.com/questions/884807/find-x-location-using-3-known-x-y-location-using-trilateration
but it seems to fail to handle two points with the same y coordinate, as we get a division by 0. Moreover, can this be extended to 3D?
The same solution can be obtained with the second approach: set up a linear system Ax = b and then recover x = A^-1 b, or use least squares, x = (A^T A)^-1 A^T b.
Please see http://www3.nd.edu/~cpoellab/teaching/cse40815/Chapter10.pdf
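For reference, a minimal Python/NumPy sketch of that linearized least-squares step in 2D (anchor positions known, subtracting the last anchor's circle equation from the others to build Ax = b):

```python
import numpy as np

def trilaterate_ls(anchors, ranges):
    """Linearized least-squares position estimate in 2D.

    anchors: (n, 2) array of known anchor positions
    ranges:  (n,)  array of measured distances to the target
    Subtracting the last circle equation from the others turns the
    non-linear system into Ax = b, solved by x = (A^T A)^-1 A^T b.
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    xn, yn = anchors[-1]
    rn = ranges[-1]
    A = 2.0 * (anchors[-1] - anchors[:-1])   # rows: [2(xn - xi), 2(yn - yi)]
    b = (ranges[:-1] ** 2 - rn ** 2
         - anchors[:-1, 0] ** 2 + xn ** 2
         - anchors[:-1, 1] ** 2 + yn ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Example with three anchors and exact ranges to the point (3, 4).
anchors = [(0, 0), (10, 0), (0, 10)]
target = np.array([3.0, 4.0])
ranges = [np.linalg.norm(target - np.array(a)) for a in anchors]
print(trilaterate_ls(anchors, ranges))   # ~ [3. 4.]
```

Because it minimizes a residual rather than intersecting the circles exactly, this formulation still returns an estimate when the circles have no common point or the ranges are noisy, which touches on the questions below.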
What about the case when the three circles have no common intersection? It seems that the second approach still finds a solution. Is this normal? How can it be explained?
What about the first approach when the range measurements are noisy? Does it find an approximate solution, or does it fail?
Considering 3D, it seems that at least 4 anchors are needed to provide a unique solution; with only 3 anchors there can be 2 solutions. Can anyone provide the equations to find those two solutions? Even with two solutions this can be useful, since we may discard one by checking whether the values agree with our scenario, e.g., the GPS case, where we pick the solution located on the Earth. The second approach (least squares) would instead always provide a single solution, possibly the wrong one.
Do you know of any existing C/C++ library that implements some of these techniques, and maybe some more complex fitting functions such as non-linear ones?
Thank you
Regards
I know about Breadth First Search and Depth First Search. I read this page, and also on SO, I found this question and this question too.
What I want to know is some practical scenario where I would use depth-first over breadth-first search. Though the third question link I provided is somewhat similar, my question is geared more toward T-SQL and SQL Server 2008/2012 performance.
Also, if I use one over the other, can anyone show me an example of how much (worst-case scenario) performance impact I could have? Say I adopt DFS, the first node has 50 children, and I am searching for the second node; the DFS would be about 50 times slower, from what I can tell, because it would first have to traverse those 50 children before it comes to the second node. Is this so or not? I mean, is the performance relationship really this direct, or is it otherwise?
Lastly, to repeat my question: although it may be (and most likely will be) application and requirement specific, I would like to know some practical scenarios where I would use one over the other, and what the performance cost of choosing one over the other might be. Also, if I am maintaining a category catalog, what should I choose? Say I am maintaining a book category catalog, something like science => physics => astronomy and so on; which one would be best, DFS or BFS?
You pretty much answered your own question. Depending on the data, use the search method most likely to visit the desired node first.
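For intuition, here is a small sketch (Python rather than T-SQL, with a made-up book catalog) of how many nodes each traversal order visits before reaching a node outside the first branch:

```python
from collections import deque

# Toy category tree: the target 'fiction' is the second child of the root.
# DFS finishes the whole 'science' subtree before reaching it; BFS reaches
# it right after the first level.
tree = {
    "root": ["science", "fiction"],
    "science": ["physics", "chemistry", "biology"],
    "physics": ["astronomy", "optics"],
    "fiction": [],
    "chemistry": [], "biology": [], "astronomy": [], "optics": [],
}

def bfs_order(tree, start):
    order, queue = [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        queue.extend(tree[node])
    return order

def dfs_order(tree, start):
    order, stack = [], [start]
    while stack:
        node = stack.pop()
        order.append(node)
        stack.extend(reversed(tree[node]))  # keep children in listed order
    return order

# Nodes visited before reaching 'fiction':
print(bfs_order(tree, "root").index("fiction"))  # 2
print(dfs_order(tree, "root").index("fiction"))  # 7
```

The cost of picking the "wrong" order is the size of the subtree the search has to finish before it backs out, which is where the roughly 50-node overhead in the question's example comes from. If the nodes you look up are usually deep leaves (like astronomy), depth-first tends to reach them in fewer visits; if they are usually near the top, breadth-first does.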
I want to create a model that has an attribute holding a string-based unique identifier.
I only want the unique string to be 3 characters long and consist of letters of the alphabet (lower case only) and numbers.
How do I implement something like the above? How do I avoid collisions? I have looked into MD5, and that seems along the lines of what I want to accomplish, but shorter. I am also willing to seed it with a time if that makes the approach deterministic.
I would love any feedback or pointers on this topic. Thanks!
EDIT:
One solution that has been on my mind is creating a table full of every single permutation, then randomly selecting as needed from the table, and deleting once used. Is this a bad approach?
Check out this SO thread; it's got plenty of good suggestions. Especially the last answer by Simone Carletti which points to this post.
There are quite a few options in the above post. The one I liked, and which might be useful for you, is the rufus-mnemo gem.
So the solution I decided to roll with after reading some of the questions & answers is quite different than what anyone had suggested.
I created a table to store codes. I wrote a Ruby script to populate this table with every 3-letter combination based on the characters I wanted to use. Then, on my model, a before_save callback assigns a code to the instance if one has not yet been assigned.
This approach ensures that I will never have a collision when assigning a code in the before_save. The slowest part is the generation of the table, but since I only have to do this once I can deal with this.
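The actual script was Ruby, but the enumeration itself is tiny; a rough Python sketch of the same idea (26 letters + 10 digits over 3 positions, so 36^3 = 46,656 codes):

```python
import itertools
import random
import string

# All 3-character codes over lowercase letters and digits.
ALPHABET = string.ascii_lowercase + string.digits

def all_codes():
    return ["".join(chars) for chars in itertools.product(ALPHABET, repeat=3)]

codes = all_codes()
print(len(codes))    # 46656
print(codes[:3])     # ['aaa', 'aab', 'aac']

# Shuffling once before inserting into the codes table means later rows can
# be handed out sequentially while still looking random.
random.shuffle(codes)
```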
This gem called alphadecimal might be able to help you.
I'm currently working on an interesting graph problem; I can't find any algorithms or other Stack Overflow questions that mention anything like it.
If I have a graph (undirected, cyclic) and a list of commonly used paths, what is the best way to reduce the average path length by adding in N more edges?
EDIT: An important point which might help: all paths start at the same node.
Answering my own question, to cover what I've already considered.
The obvious solution is simply to sort the common paths (say, by how heavily they are used), slot in a direct connection between the two ends of each, and keep doing this until you run out of edges to insert. However, I suspect there is a more intelligent solution.
You could just try inserting all possible edges and see how much the shortest path decreases for each of your given start/end points. Pick the best edge and repeat.
The usefulness of edges depends on what other edges have been added, so if you really want optimality, you'd have to try all sets of N edges. That sounds a tad expensive. Wouldn't surprise me if it was NP-hard.
Interesting question!
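A rough Python sketch of that greedy idea, using networkx, an unweighted graph, and the fact that all the common paths share a start node (the example graph and numbers are made up):

```python
import itertools
import networkx as nx

def total_cost(G, source, targets):
    # Sum of unweighted shortest-path lengths from the common start node
    # to every end point of the frequently used paths.
    dist = nx.shortest_path_length(G, source)
    return sum(dist[t] for t in targets)

def greedy_add_edges(G, source, targets, n_edges):
    """Greedy sketch of the 'try every edge, keep the best, repeat' idea.
    Not guaranteed optimal: the value of an edge depends on which other
    edges were already added, so the exact optimum needs all N-edge sets."""
    G = G.copy()
    for _ in range(n_edges):
        base = total_cost(G, source, targets)
        best_gain, best_edge = 0, None
        for u, v in itertools.combinations(G.nodes, 2):
            if G.has_edge(u, v):
                continue
            G.add_edge(u, v)
            gain = base - total_cost(G, source, targets)
            G.remove_edge(u, v)
            if gain > best_gain:
                best_gain, best_edge = gain, (u, v)
        if best_edge is None:
            break  # no remaining edge improves anything
        G.add_edge(*best_edge)
    return G

# Tiny example: a path graph 0-1-2-3-4-5, common paths all start at 0.
G = nx.path_graph(6)
G2 = greedy_add_edges(G, source=0, targets=[3, 4, 5], n_edges=1)
print(sorted(set(G2.edges()) - set(G.edges())))  # [(0, 4)]
```

Each round runs one shortest-path computation per candidate edge, so it gets expensive quickly on large graphs, which is where a smarter heuristic would pay off.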
Another possible solution, which sounds like it might be the best heuristic, is to take the weighted average of all the end nodes (weighted by path importance), then find the node which is closest to the computed average point. Connect to that node.
Obviously that only works if the nodes are laid out in space somehow, but it's a good analogy.
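A sketch of that centroid heuristic, assuming each node has an (x, y) position and each common path a usage weight (all names here are hypothetical):

```python
import math

def centroid_connect(positions, end_nodes, weights, source):
    """Average the end points of the common paths, weighted by how often
    each path is used, then connect the start node to the existing node
    closest to that average point."""
    wsum = sum(weights)
    cx = sum(w * positions[n][0] for n, w in zip(end_nodes, weights)) / wsum
    cy = sum(w * positions[n][1] for n, w in zip(end_nodes, weights)) / wsum
    best = min((n for n in positions if n != source),
               key=lambda n: math.dist(positions[n], (cx, cy)))
    return (source, best)  # the edge to add

positions = {"s": (0, 0), "a": (4, 0), "b": (4, 3), "c": (4.5, 1)}
print(centroid_connect(positions, ["a", "b"], [2, 1], "s"))  # ('s', 'c')
```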
For a school project, I need to create a way to build personalized queries based on end-user choices.
Since the user can choose basically any fields from any combination of tables, I need to find a way to map the tables in order to build a join and avoid pulling in extraneous data (this may lead to incoherent reports, but we're willing to live with that).
For up to two tables, I have already managed to design an algorithm that works fine. However, when I add another table, I can't find a way to plot a path through my database. All tables available for the personalized reports can be linked together, so it really comes down to finding which path to use.
You might be able to try some form of the A* algorithm. Basically, it looks at each of the possible next options and applies a heuristic to it: a function that estimates roughly how far it is from that node to your goal. It then chooses the option that is closer and repeats. The hardest part of implementing A* is designing a good heuristic.
Without more information on how the tables fit together, or what you mean by a 'path' through the tables, it's hard to recommend something though.
Looks like it didn't like my link, probably the * in it, try:
http://en.wikipedia.org/wiki/A*_search_algorithm
Edit:
If that is the whole database, I'd go with a depth-first exhaustive search.
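Schema graphs are usually small enough that this is cheap. A sketch in Python, treating the schema as an adjacency list of joinable tables (the table names are made up):

```python
def find_join_path(adjacency, start, goal, path=None):
    """Depth-first search for a chain of joinable tables from `start` to
    `goal`. `adjacency` maps a table name to the tables it can join with."""
    if path is None:
        path = [start]
    if start == goal:
        return path
    for neighbor in adjacency[start]:
        if neighbor not in path:  # avoid revisiting tables (cycles)
            result = find_join_path(adjacency, neighbor, goal, path + [neighbor])
            if result:
                return result
    return None

schema = {
    "orders":      ["customers", "order_items"],
    "customers":   ["orders"],
    "order_items": ["orders", "products"],
    "products":    ["order_items"],
}
print(find_join_path(schema, "customers", "products"))
# ['customers', 'orders', 'order_items', 'products']
```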
I thought about using A* or a similar algorithm, but as you said, the hardest part is designing the heuristic.
My tables are centered around a backbone of sorts, with quite a few branches each leading to at most a single leaf node. Here is the actual map (table names removed because I'm paranoid). Assuming I want to view data from the A, B and C tables, I need an algorithm to find the blue path.
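If the schema really is tree-shaped, the blue path is just the union of the pairwise shortest paths between the selected tables; a rough Python sketch of that idea (table names are placeholders):

```python
from collections import deque
from itertools import combinations

def shortest_path(adjacency, start, goal):
    # Plain BFS; fine here because every join hop costs the same.
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in adjacency[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def connecting_tables(adjacency, selected):
    """Union of pairwise shortest paths between the selected tables.
    On a tree-shaped schema this is exactly the minimal connecting set."""
    needed = set(selected)
    for a, b in combinations(selected, 2):
        needed.update(shortest_path(adjacency, a, b))
    return needed

schema = {"A": ["X"], "X": ["A", "Y"], "Y": ["X", "B", "C"], "B": ["Y"], "C": ["Y"]}
print(connecting_tables(schema, ["A", "B", "C"]))
# {'A', 'B', 'C', 'X', 'Y'} (set order may vary)
```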