I have hierarchical data in MySQL. It is not exactly a tree structure but more of a graph: any child can have any number of parents. In such a case, how do I get access to all of a node's children without missing any? Can I use the Nested Set Model in any way?
A graph is a set of nodes and a set of edges. Nodes can be represented in a normal table and edges can be represented in the following way.
| id | node_1 | node_2 |
|  1 |   1234 |   1235 |
|  2 |   1234 |   1236 |
|  3 |   1236 |   1237 |
|  4 |   1237 |   1238 |
Querying for all the parents of node 1234 is then simply:
SELECT node_2 FROM edges WHERE node_1 = 1234
Traversing graphs in a relational database, however, can be cumbersome and inefficient; if your dataset is moderate to large, it can make sense to look at a graph database as an alternative.
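That said, since MySQL 8.0 you can walk the whole edge table transitively in a single query with a recursive CTE, instead of issuing one query per level. A minimal sketch, using Python's built-in sqlite3 as a stand-in for MySQL (the CTE syntax is the same; the table and ids mirror the example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE edges (id INTEGER PRIMARY KEY, node_1 INTEGER, node_2 INTEGER)")
conn.executemany(
    "INSERT INTO edges (id, node_1, node_2) VALUES (?, ?, ?)",
    [(1, 1234, 1235), (2, 1234, 1236), (3, 1236, 1237), (4, 1237, 1238)],
)

# Recursive CTE: start at node 1234 and follow node_1 -> node_2 edges
# transitively, collecting every reachable node.
rows = conn.execute("""
    WITH RECURSIVE reachable(node) AS (
        SELECT node_2 FROM edges WHERE node_1 = 1234
        UNION
        SELECT e.node_2 FROM edges e JOIN reachable r ON e.node_1 = r.node
    )
    SELECT node FROM reachable
""").fetchall()

print(sorted(n for (n,) in rows))  # -> [1235, 1236, 1237, 1238]
```

Note the use of UNION rather than UNION ALL: it deduplicates nodes reachable through multiple parents, which matters precisely because this is a graph and not a tree.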
This question seems pretty stupid but I actually fail to find a simple solution to this. I have a csv file that is structured like this:
0 21 34.00 34.00
1 23 35.00 25.00
2 25 45.00 65.00
The first column is the node's id; the second is an unimportant attribute. The third and fourth columns are supposed to be the x and y positions of the nodes.
I can import the file into the Data Laboratory without problems, but I can't get Gephi to use the x and y attributes as the corresponding node properties. All I want to achieve is that Gephi sets the x property to the value of the x attribute (and y respectively). Also see picture.
Thanks for your help!
In the Layout window, you can select "Geo Layout" and define which columns are used as Latitude and Longitude.
The projection may come out weird if you do not actually have geodata, but for me this is fine.
In Gephi 0.8 there was a plugin called Recast Column. It has unfortunately not been ported to Gephi 0.9 yet, but it allowed you to set the standard (hidden) columns of the node table from visible values in that table. So if you have two columns of type Float or Decimal representing your coordinates, you could use it to set the coordinate values of your nodes.
I'm currently planning a WebGL game and am starting to make the models for it. I need to know: if my model is at 1X scale and the camera zooms/pans out until the model appears at 0.1X scale, what kind of simplification does the WebGL engine apply to the models in view?
E.g., using a triangle as an example, here it is at 1X scale
And here is the triangle at 10% of the original size while keeping all complexity (sorry it's so faint)
While the triangle looks the same, the complexity isn't entirely necessary and could perhaps be simplified into 4 triangles for performance.
I understand that WebGL is a state machine and perhaps nothing happens: the complexity of the model remains the same, regardless of scale or state. But how do I resolve this for the best performance possible?
At 1X scale there could be only one or very few models in view, but when zoomed out to 0.1X scale there could be many hundreds. If the complexity of each model is too high, performance takes a huge hit and the game becomes unresponsive/useless.
All advice is hugely appreciated.
WebGL doesn't simplify for you. You have to do it yourself.
Generally you compute each model's distance from the camera and, depending on that distance, display a different hand-made model: far away you display a low-detail model, close up a high-detail one. There are lots of ways to do this; which way you choose is up to you. For example
Use different high poly models close, low poly far away
This is the easiest and most common method. The problem with it is that you often see popping when the engine switches from the low-poly model to the high-poly one. The three.js sample linked in another answer uses this technique: it creates a LOD object whose job is to decide which of N models to draw. It's up to you to supply the models.
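The switching logic itself is tiny. A hedged sketch of distance-based model selection (the thresholds and model names here are invented for illustration, not from any engine):

```python
# Pick a model based on camera distance; thresholds are illustrative.
LODS = [
    (10.0, "high_poly"),    # closer than 10 units
    (50.0, "medium_poly"),  # closer than 50 units
    (float("inf"), "low_poly"),
]

def select_lod(distance):
    """Return the name of the model to draw at this camera distance."""
    for max_dist, model in LODS:
        if distance < max_dist:
            return model

print(select_lod(3.0))    # high_poly
print(select_lod(25.0))   # medium_poly
print(select_lod(500.0))  # low_poly
```

The popping mentioned above happens exactly at these threshold distances, which is what the fading and morphing techniques below try to hide.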
Use the low-poly model far away and fade the high-poly one in over it. Once the high-poly model completely obscures the low-poly one, stop drawing the low-poly.
Grand Theft Auto uses this technique.
Create a low-poly model from the high-poly one and morph between them using any number of techniques. For example:
 1----2----3----4        1--------------4
 |    |    |    |        |              |
 |    |    |    |        |              |
 5----6----7----8        |              |
 |    |    |    |  <-->  |              |
 |    |    |    |        |              |
 9---10---11---12        |              |
 |    |    |    |        |              |
 |    |    |    |        |              |
13---14---15---16       13-------------16
Jak and Daxter and Crash Team Racing (old games) use the structure above.
Far away only the corner points 1, 4, 13, 16 are used. Close up all 16 points are used.
Points 2, 3, 5, 6, 7, 8, 9, 10, 11, 12, 14, 15 can be placed anywhere.
Between the far and near distances all the points are morphed so the 16-point
mesh becomes the 4-point mesh. If you play Jak and Daxter #1 or Ratchet and Clank #1
you can see this morphing going on as you play. By the second version of those
games the artists had gotten good at hiding the morphing.
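The morph itself is just a linear interpolation of vertex positions, once each high-detail vertex is paired with the position it collapses to in the low-detail mesh. A small sketch (the vertex data is invented; the blend factor t would be derived from camera distance):

```python
def morph(high_verts, collapsed_verts, t):
    """Linearly interpolate each vertex from its full-detail position
    (t = 0) toward its collapsed low-poly position (t = 1)."""
    return [
        tuple(h + t * (c - h) for h, c in zip(hv, cv))
        for hv, cv in zip(high_verts, collapsed_verts)
    ]

# Interior point (0.5, 0.5) collapses onto the corner (0.0, 0.0).
high = [(0.0, 0.0), (0.5, 0.5)]
collapsed = [(0.0, 0.0), (0.0, 0.0)]

print(morph(high, collapsed, 0.0))  # [(0.0, 0.0), (0.5, 0.5)]  full detail
print(morph(high, collapsed, 1.0))  # [(0.0, 0.0), (0.0, 0.0)]  low poly
```

Because the topology never changes (all 16 points are always drawn), there is no popping; the mesh just slides smoothly between the two shapes.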
Draw the high-poly model up close; in the distance, render the high-poly model into
a texture and draw a billboard instead. Update the billboard slowly (every N frames
instead of every frame). This technique is used for animated objects; it was used in
Crash Team Racing for the other racers when they are far away.
I'm sure there are many others. There are algorithms for tessellating in real time to auto-generate low-poly models from high-poly ones, or for describing your models in some other form (b-splines, metaballs, subdivision surfaces) and then generating some number of polygons. Whether they are fast enough and produce good enough results is up to you. Most AAA games, as far as I know, don't use them.
Search for 'tessellation'.
With it you can add or subtract triangles from your mesh.
Tessellation is closely related to LOD (level-of-detail) objects.
The scale factor is just a coefficient that all vertices of the mesh are multiplied by; scaling simply stretches your mesh along its axes.
Take a look at this Three.js example:
http://threejs.org/examples/webgl_lod.html (WASD/mouse to move around)
Of these two designs for a crossroad database
#Street
street_id | street_nm
#Crossing
crossing_id | x | y | street_id_1 | street_id_2
VS
#Street
street_id | street_nm
#Crossing
crossing_id | x | y
#street crossing relationship
street_id | crossing_id
Assuming every crossing has exactly two roads, is there a reason why one would use the first solution over the second?
EDIT: For the second setup, how could I create a view whose results look like this?
crossing_id | x | y | street_nm_1 | street_nm_2
Also, I'm not sure how creating a junction with three roads would affect the view.
I'd prefer the second.
First of all, "assuming every crossing has exactly two roads" is quite risky. In general, when designing I prefer not to rely on assumptions that clash with reality, because sooner or later your design will have to accommodate the "extra cases".
But the second design is better for another reason. Assuming you want a query that returns all roads crossing road "X" (which I suppose is a pretty common requirement), the first design forces you to test for road "X"'s id in both street_id_1 and street_id_2. In general the queries become more convoluted, because whenever you look for a given road you don't know whether it is listed in id_1 or id_2.
The relationship "x crosses y" should be symmetrical (unless you want to distinguish between "main roads" and "tributaries", which does not seem to be the case here) so the second design is closer to the intent.
Regarding your question about the view... what about:
SELECT a.crossing_id, a.x, a.y, b.street_nm, c.street_nm
FROM crossing a, crossing_rel e, crossing_rel f, street b, street c
WHERE e.crossing_id = a.crossing_id
  AND f.crossing_id = a.crossing_id
  AND b.street_id = e.street_id
  AND c.street_id = f.street_id
  AND b.street_id <> c.street_id
Note that this does not fix which street appears as street_nm_1 and which as street_nm_2... maybe you would prefer something like:
SELECT a.crossing_id, a.x, a.y, b.street_nm, c.street_nm
FROM crossing a, crossing_rel e, crossing_rel f, street b, street c
WHERE e.crossing_id = a.crossing_id
  AND f.crossing_id = a.crossing_id
  AND b.street_id = e.street_id
  AND c.street_id = f.street_id
  AND b.street_nm < c.street_nm
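A quick way to check a view like this is to run it against a throwaway schema. Here is a sketch using Python's built-in sqlite3, joining crossing_rel twice (once per street) so each street of the pair comes from its own relationship row; the sample street names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE street (street_id INTEGER PRIMARY KEY, street_nm TEXT);
    CREATE TABLE crossing (crossing_id INTEGER PRIMARY KEY, x REAL, y REAL);
    CREATE TABLE crossing_rel (street_id INTEGER, crossing_id INTEGER);

    INSERT INTO street VALUES (1, 'Main St'), (2, 'Oak Ave');
    INSERT INTO crossing VALUES (10, 3.0, 4.0);
    INSERT INTO crossing_rel VALUES (1, 10), (2, 10);
""")

# Two instances of crossing_rel (e and f) pull the two streets that meet
# at each crossing; street_nm < street_nm keeps each pair only once.
rows = conn.execute("""
    SELECT a.crossing_id, a.x, a.y, b.street_nm, c.street_nm
    FROM crossing a, crossing_rel e, crossing_rel f, street b, street c
    WHERE e.crossing_id = a.crossing_id
      AND f.crossing_id = a.crossing_id
      AND b.street_id = e.street_id
      AND c.street_id = f.street_id
      AND b.street_nm < c.street_nm
    """).fetchall()

print(rows)  # [(10, 3.0, 4.0, 'Main St', 'Oak Ave')]
```

A junction with three roads would simply produce one row per distinct street pair (three rows), with no schema change needed, which is exactly the flexibility the second design buys you.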
The second solution is a little more flexible for additions to either crossings or streets while keeping the relationship between them in its proper context. It's a subtle distinction, but worth mentioning.
When you draw an inheritance diagram you usually go
Base
^
|
Derived
Derived extends Base. So why does the arrow go up?
I thought it means that "Derived communicates with Base" by calling functions in it, but Base cannot call functions in Derived.
AFAIK one of the reasons is notational consistency: all other directed arrows (dependency, aggregation, composition) point from the dependent to the dependee.
In inheritance, B depends on A but not vice versa. Thus the arrow points from B to A.
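You can see the one-way dependency directly in code: the derived class names the base class, while the base class compiles with no knowledge of its subclasses. A minimal Python sketch (the class and method names are purely illustrative):

```python
class Base:
    def greet(self):          # Base knows nothing about Derived
        return "hello from " + self.name()

    def name(self):
        return "Base"

class Derived(Base):          # the "arrow" in code: Derived names Base, never vice versa
    def name(self):           # Derived depends on Base and can override it
        return "Derived"

print(Base().greet())     # hello from Base
print(Derived().greet())  # hello from Derived
```

Deleting Derived leaves Base untouched; deleting Base breaks Derived. That asymmetry is what the arrow's direction records.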
In UML the arrow is called a "Generalization" relationship and it only signals that each object of class Derived is also an object of class Base.
From the superstructure 2.1.2:
A Generalization is shown as a line with a hollow triangle as an
arrowhead between the symbols representing the involved classifiers.
The arrowhead points to the symbol representing the general
classifier. This notation is referred to as the “separate target style.”
Not really an answer to the question, though :-)
Read the arrow as "inherits from" and it makes sense. Or, if you like, think of it as the direction calls can be made.
I always think of it as B having more stuff in it than A (subclasses often have more methods than superclasses), hence B gets the wide end of the arrow and A gets the pointy end!
B is the subject, A is the object, action is "inherit". So B acts on A, hence the direction of the arrow.
I think the point is to express generalization: A is a generalization of B.
This way the arrow expresses the same concept as "extends" but points the "right" way.
A note about ASCII notation - from the wonderful c2 wiki page.
You might consider the following ASCII arrow notation for an
IS_A relation (inheritance):

+-------+          +-----------+
| Base  |          | Interface |
+---^---+          +-----^-----+
   /_\                  /_\    _
    |                    :    (_) OtherInterface
    |                    :     |
    |                    :     |
+---------+      +----------------+
| Derived |      | Implementation |
+---------+      +----------------+

vs

HAS_A relation (containment):

+-------+
| User  |
+-------+
    |
    |
    |
   \ /
+---V---+
| Roles |
+-------+
I'm trying to implement an undo/redo feature into my application, using the Command Pattern. I'm facing a problem.
To illustrate it, let's imagine you can create with my application 2D profiles (as many as you want).
From these 2D profiles, you can then create a 3D part with different attributes (name, colour, scale, etc.).
+--------------+ +--------------+ +--------------+
| 2D profile A | | 2D profile B | | 2D profile C |
+--------------+ +--------------+ +--------------+
| | |
| +---------------+ +---------------+
| | 3D Part B | | 3D Part C |
| | Colour : blue | | Colour : grey |
| | Name : bibi | | Name : foo |
| | Scale : 33% | | Scale : 100% |
| +---------------+ +---------------+
+--------------+
| 3D Part A |
| Colour : red |
| Name : aaa |
| Scale : 50% |
+--------------+
When a profile is deleted, all 3D parts which are built on this profile are automatically deleted too (when a profile is about to be deleted, a 3D-part manager is notified and deletes the obsolete 3D parts; views are also notified to update the GUI).
This is where I'm facing a problem: I'm writing the undo/redo command for deleting a 2D profile, which looks something like this (pseudo-code):
virtual void redo()
{
    m_pProfileList.remove(m_pProfile); // This will automatically delete all 3D parts relying on the deleted 2D profile
}

virtual void undo()
{
    m_pProfileList.add(m_pProfile); // This will re-add the 2D profile, but the 3D parts are lost
}
As you can see in the code above, removing the 2D profile will automatically delete all 3D parts relying on the removed profile.
But when undoing, re-adding the 2D profile to the list is not enough: the 3D parts are lost.
What should I do? Should the undo/redo command be responsible for deleting the 3D parts (something currently done by the 3D-part manager)? That would mean the undo/redo command would also be responsible for notifying the views to update the GUI.
Or should the undo/redo command create an internal copy of all 3D parts that will be deleted, and let the 3D-part manager delete the originals?
Or is there another better solution ?
Thanks for your help !
You want a slight variation on this: the Memento pattern. You store snapshots of either your complete object tree or just the differences at each change. Armed with this history of changes, you can then go backward and forward through commands to your heart's content, without losing dependent objects.
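One hedged sketch of that idea in Python: the delete command captures a memento of the dependent 3D parts before removing the profile, and replays it on undo. All class and method names here are invented for illustration; they only mirror the structure described in the question:

```python
class PartManager:
    """Owns the 3D parts; each part is tagged with its source profile."""
    def __init__(self):
        self.parts = []  # list of (profile, part_name) tuples

    def parts_for(self, profile):
        return [p for p in self.parts if p[0] == profile]

    def delete_for(self, profile):
        self.parts = [p for p in self.parts if p[0] != profile]

    def restore(self, part):
        self.parts.append(part)


class ProfileList:
    """Deleting a profile cascades to its 3D parts, as in the question."""
    def __init__(self):
        self.profiles = []
        self.manager = PartManager()

    def add(self, profile):
        self.profiles.append(profile)

    def remove(self, profile):
        self.profiles.remove(profile)
        self.manager.delete_for(profile)  # cascade delete


class DeleteProfileCommand:
    """Command + memento: snapshot the dependent 3D parts before deleting,
    so undo can restore the profile *and* its parts."""
    def __init__(self, profile, profile_list):
        self.profile = profile
        self.profile_list = profile_list
        self.saved_parts = None  # the memento, filled in by redo()

    def redo(self):
        self.saved_parts = self.profile_list.manager.parts_for(self.profile)
        self.profile_list.remove(self.profile)

    def undo(self):
        self.profile_list.add(self.profile)
        for part in self.saved_parts:
            self.profile_list.manager.restore(part)


plist = ProfileList()
plist.add("profile A")
plist.manager.parts.append(("profile A", "3D Part A"))

cmd = DeleteProfileCommand("profile A", plist)
cmd.redo()
print(plist.profiles, plist.manager.parts)  # [] []
cmd.undo()
print(plist.profiles, plist.manager.parts)  # ['profile A'] [('profile A', '3D Part A')]
```

Note that the part manager still owns deletion and restoration; the command only holds the snapshot, so view notifications can stay where they are today.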