I'm trying to implement an undo/redo feature in my application using the Command pattern, and I'm facing a problem.
To illustrate it, let's imagine that with my application you can create 2D profiles (as many as you want).
From these 2D profiles, you can then create a 3D part with different attributes (name, colour, scale, etc.).
+--------------+     +--------------+     +--------------+
| 2D profile A |     | 2D profile B |     | 2D profile C |
+--------------+     +--------------+     +--------------+
       |                    |                    |
       |            +---------------+    +---------------+
       |            | 3D Part B     |    | 3D Part C     |
       |            | Colour : blue |    | Colour : grey |
       |            | Name : bibi   |    | Name : foo    |
       |            | Scale : 33%   |    | Scale : 100%  |
       |            +---------------+    +---------------+
+--------------+
| 3D Part A    |
| Colour : red |
| Name : aaa   |
| Scale : 50%  |
+--------------+
When a profile is deleted, all 3D parts that are built on this profile are automatically deleted too (when a profile is about to be deleted, a 3D part manager is notified and deletes the obsolete 3D parts; views are also notified so they can update the GUI).
This is where I run into a problem: I'm writing the undo/redo command for deleting a 2D profile, which looks something like this (pseudo-code):
virtual void redo()
{
    m_pProfileList.remove(m_pProfile); // This will automatically delete all 3D parts relying on the deleted 2D profile
}

virtual void undo()
{
    m_pProfileList.add(m_pProfile); // This will add the 2D profile, but the 3D parts are lost
}
As you can see in the code above, removing the 2D profile will automatically delete all 3D parts relying on the removed profile.
But when undoing, re-adding the 2D profile to the list is not enough: the 3D parts are lost.
What should I do? Should the undo/redo command be responsible for deleting the 3D parts (something that is actually done by the 3D part manager)? That would mean the undo/redo command would also be responsible for notifying the views to update the GUI.
Or should the undo/redo command keep an internal copy of all 3D parts that will be deleted, and still let the 3D part manager perform the deletion?
Or is there another, better solution?
Thanks for your help!
You want a slight variation on this: the Memento pattern. You store snapshots of either your complete object tree or just the differences at each change. Armed with this successive history of changes, you can then go backwards and forwards through commands to your heart's content, without losing dependent objects.
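To make that concrete in the context of the delete command above, here is a minimal sketch (written in TypeScript since the original is pseudo-code; the ProfileList/PartManager interfaces and the partsBuiltOn/addPart methods are assumptions, not your actual API) of a command that keeps a memento of the dependent 3D parts so undo can restore them:

// Minimal interfaces assumed for the sketch; adapt to your real classes.
interface Profile2D {}
interface Part3D {}
interface ProfileList {
  add(p: Profile2D): void;
  remove(p: Profile2D): void;  // triggers the 3D part manager and view notifications
}
interface PartManager {
  partsBuiltOn(p: Profile2D): Part3D[];  // hypothetical query for dependent parts
  addPart(part: Part3D): void;           // hypothetical restore; notifies views
}

class DeleteProfileCommand {
  private savedParts: Part3D[] = [];     // the memento: parts that depend on the profile

  constructor(private profile: Profile2D,
              private profiles: ProfileList,
              private partManager: PartManager) {}

  redo(): void {
    // Snapshot the dependent parts before the manager deletes them.
    this.savedParts = this.partManager.partsBuiltOn(this.profile);
    this.profiles.remove(this.profile);
  }

  undo(): void {
    this.profiles.add(this.profile);
    // Restore the saved parts through the manager so views are notified as usual.
    for (const part of this.savedParts) {
      this.partManager.addPart(part);
    }
  }
}

The command stays dumb: it only remembers what was there, while the deletion and the view notifications still go through the 3D part manager.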
I have hierarchical data in MySQL. It is not exactly a tree structure but more of a graph: any child can have any number of parents. In such a case, how do I get access to all of a node's children without missing any? Can I use the Nested Set Model in any way?
A graph is a set of nodes and a set of edges. Nodes can be represented in a normal table and edges can be represented in the following way.
|id|node_1|node_2|
| 1| 1234| 1235|
| 2| 1234| 1236|
| 3| 1236| 1237|
| 4| 1237| 1238|
Querying for all the parents of node 1234 would simply be:
SELECT node_2 FROM edges WHERE node_1 = 1234
Traversing graphs in a relational database, however, can be cumbersome and inefficient, and if your dataset is moderate to large it can make sense to look at a graph database as an alternative.
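To illustrate what that traversal involves, here is a rough breadth-first walk over the edge rows, sketched in TypeScript with the edges loaded into memory (in a real application you would issue one query per level, or a recursive query if your MySQL version supports CTEs):

// Each row of the edges table; node_2 is the parent of node_1, matching the table above.
type Edge = { node_1: number; node_2: number };

// Collect every descendant (child, grandchild, ...) of a start node,
// guarding against revisits since this is a graph, not a tree.
function allChildren(edges: Edge[], start: number): Set<number> {
  const children = new Set<number>();
  const visited = new Set<number>([start]);
  const queue: number[] = [start];
  while (queue.length > 0) {
    const current = queue.shift()!;
    for (const e of edges) {
      if (e.node_2 === current && !visited.has(e.node_1)) {
        visited.add(e.node_1);
        children.add(e.node_1);
        queue.push(e.node_1);
      }
    }
  }
  return children;
}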
I am reading the book "Computer Organization and Design". In chapter 4, it describes a single-cycle MIPS machine; however, I have several doubts about it.
If the data memory and the instruction memory in the design are SRAMs, how can any instruction finish in a single clock cycle? Take a load instruction as an example: I think the single-cycle MIPS design still has to go through the following stages, with only the ID and EXE stages merged.
|  1 |  2 |   3    |  4  |
| WB |    |        |     |
|    | IF |        |     |
|    |    | ID\EXE |     |
|    |    |        | MEM |
If the data memory is updated on the negative clock edge, the ID, EXE and MEM stages can be merged, but there are still three stages left.
Can anyone explain how the single-cycle design works? Thanks!
The single-cycle processor that you read about in chapter 4 is a slight oversimplification of what is implementable in reality; the book doesn't show some of the tricky details. For instance, one timing assumption you could make is that memory reads are combinational and memory writes take one positive edge to complete, i.e. similar to the register file. In that case, when the clock edge arrives you have your IF stage populated with an instruction. Then, for the duration of that cycle, you decode and execute the instruction, and writeback happens on the next clock edge. In the case of a data store the same thing is true: the memory will be written on the next clock edge. In the case of loads, you are assuming a combinational memory read, so your data will arrive before the clock edge, and on the edge it will be written to the register file.
Now, this is not the best way to implement it, and you have to make several assumptions. In a slightly more realistic unpipelined processor you could have a stall signal that rolls execution over to the next cycle while you are waiting for a memory request. So you can imagine having a Stall_on_IF signal and a Stall_on_LD signal that tell you to stall this cycle until your instruction/data arrives. When it does arrive, you latch it and continue execution in the next cycle.
I was wondering whether, in Haskell, we can examine a value of a data type and decide whether a particular piece of data exists inside it.
For example:
data Ruler = Ruler Length Price deriving (Eq, Show)
data Wallet = Wallet Colour Ruler [Pencil] deriving (Eq, Show)
data Pencil = Pencil Penciltype Colour Price deriving (Eq, Show)
data Colour = Black | Blue | Green | Red deriving (Eq, Show)
data Penciltype = Leadpencil | Pen | Fountainpen | Feltpen deriving (Eq, Show)
type Price = Double
type Length = Int
I want to define a function like this:
isRulerAvailable :: Wallet -> Bool
which returns True if a Ruler is available in the Wallet and False otherwise.
Any ideas?
I think you're misunderstanding how data types work in Haskell.
What your wallet data type says is
I will store exactly one Ruler, Colour, and some Pencils under the tag Wallet.
This means that a Wallet always has exactly one Ruler in it and can never have none.
If you want to allow the possibility of not storing a Ruler, then you'd use Maybe Ruler in your data declaration, not just Ruler:
data Wallet = Wallet Colour (Maybe Ruler) [Pencil] deriving (Eq, Show)
Then your function becomes:
isRulerAvailable (Wallet _ ruler _) = isJust ruler
which requires you to import Data.Maybe (for isJust).
For an explanation of Maybe, you can look here
I'm currently planning a WebGL game and am starting to make the models for it. I need to know: if my model is at 1X scale and my camera zooms/pans out from the object so that the model becomes 0.1X scale, what kind of simplification does the WebGL engine apply to the models in view?
For example, if I use a triangle, here it is at 1X scale
And here is the triangle at 10% of the original size while keeping all complexity (sorry it's so faint)
While the triangle looks the same, the complexity isn't entirely necessary and could perhaps be simplified into 4 triangles for performance.
I understand that WebGL is a state machine and perhaps nothing happens; the complexity of the model remains the same regardless of scale or state. But how do I resolve this for the best performance possible?
At 1X scale there could be only one or very few models in view, but when zoomed out to 0.1X scale there could be many hundreds. Meaning, if the complexity of each model is too high, performance takes a huge hit and the game becomes unresponsive/useless.
All advice is hugely appreciated.
WebGL doesn't simplify for you. You have to do it yourself.
Generally you compute the distance from the camera and, depending on that distance, display a different hand-made model. Far away you display a low-detail model; close up you display a high-detail model. There are lots of ways to do this; which way you choose is up to you. For example:
Use different high poly models close, low poly far away
This is the easiest and most common method. The problem with this method is that you often see popping when the engine switches from the low-poly model to the high-poly model. The three.js sample linked in another answer uses this technique. It creates a LOD object whose job it is to decide which of N models to switch between. It's up to you to supply the models.
Use low-poly far away and fade in the high-poly one over it. Once the high-poly one completely obscures the low-poly one, stop drawing the low-poly one.
Grand Theft Auto uses this technique
Create low poly from high poly and morph between them using any number of techniques.
For example.
1----2----3----4          1--------------4
|    |    |    |          |              |
|    |    |    |          |              |
5----6----7----8          |              |
|    |    |    | <------> |              |
|    |    |    |          |              |
9----10---11---12         |              |
|    |    |    |          |              |
|    |    |    |          |              |
13---14---15---16         13-------------16
Jak and Daxter and Crash Team Racing (old games) use the structure above.
Far away only points 1, 4, 13 and 16 are used. Close up all 16 points are used.
Points 2, 3, 5, 6, 7, 8, 9, 10, 11, 12, 14 and 15 can be placed anywhere.
Between the far and near distances all the points are morphed so the 16 point
mesh becomes the 4 point mesh. If you play Jak and Daxter #1 or Ratchet and Clank #1
you can see this morphing going on as you play. By the second version of those
games the artists got good at hiding the morphing.
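If it helps, the morphing itself is just a per-vertex blend between the detailed position and the position the vertex collapses to in the simplified mesh. A small TypeScript sketch (the function names and the distance-based blend are illustrative, not taken from those games):

type Vec3 = [number, number, number];

// t = 0 gives the full-detail position, t = 1 gives the fully collapsed position.
function morphVertex(detailed: Vec3, collapsed: Vec3, t: number): Vec3 {
  return [
    detailed[0] * (1 - t) + collapsed[0] * t,
    detailed[1] * (1 - t) + collapsed[1] * t,
    detailed[2] * (1 - t) + collapsed[2] * t,
  ];
}

// t is typically derived from the camera distance between a near and a far limit.
function blendFactor(distance: number, near: number, far: number): number {
  return Math.min(1, Math.max(0, (distance - near) / (far - near)));
}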
Draw high poly up close, render high poly into a texture and draw a billboard in
the distance. Update the billboard slowly (every N frames instead of every frame).
This is a technique used for animated objects. It was used in Crash Team Racing
for the other racers when they are far away.
I'm sure there are many others. There are algorithms for tessellating in real time to auto-generate low-poly models from high-poly ones, or for describing your models in some other form (b-splines, meta-balls, subdivision surfaces) and then generating some number of polygons. Whether they are fast enough and produce good enough results is up to you. Most AAA games, as far as I know, don't use them.
Search for 'tessellation'.
With it you can add or subtract triangles from your mesh.
Tessellation is closely related to LOD objects (level-of-detail).
The scale factor is just a coefficient that all vertices of the mesh are multiplied by; with scale you simply stretch your mesh along its axes.
Take a look at this Three.js example:
http://threejs.org/examples/webgl_lod.html (WASD/mouse to move around)
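For reference, the heart of that example boils down to something like this (a sketch only; highPolyMesh, mediumPolyMesh and lowPolyMesh are placeholders for your own models, and scene, camera and renderer are assumed to exist already):

// Assumes three.js is loaded as the global THREE, as in the linked example.
const lod = new THREE.LOD();

// Supply your own hand-made models at different detail levels.
lod.addLevel(highPolyMesh, 0);     // used when the camera is closer than 50 units
lod.addLevel(mediumPolyMesh, 50);  // used between 50 and 200 units
lod.addLevel(lowPolyMesh, 200);    // used beyond 200 units
scene.add(lod);

function animate() {
  requestAnimationFrame(animate);
  lod.update(camera);              // picks the level for the current camera distance
  renderer.render(scene, camera);
}
animate();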
When you draw an inheritance diagram you usually go
Base
^
|
Derived
Derived extends Base. So why does the arrow go up?
I thought it meant that "Derived communicates with Base" by calling functions in it, but Base cannot call functions in Derived.
AFAIK one of the reasons is notational consistency: all other directed arrows (dependency, aggregation, composition) point from the dependent to the dependee.
In inheritance, B depends on A but not vice versa. Thus the arrow points from B to A.
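You can see the same direction in code: the derived class has to mention the base class, while the base class compiles without knowing any of its subclasses. A tiny illustration (TypeScript, names made up):

// Base knows nothing about Derived.
class Base {
  greet(): string { return "hello"; }
}

// Derived must name Base, so the dependency (and the UML arrow) runs from Derived to Base.
class Derived extends Base {
  greetLoudly(): string { return this.greet().toUpperCase(); }
}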
In UML the arrow is called a "Generalization" relationship and it only signals that each object of class Derived is also an object of class Base.
From the superstructure 2.1.2:
A Generalization is shown as a line with a hollow triangle as an
arrowhead between the symbols representing the involved classifiers.
The arrowhead points to the symbol representing the general
classifier. This notation is referred to as the “separate target style.”
Not really an answer to the question, though :-)
Read the arrow as "inherits from" and it makes sense. Or, if you like, think of it as the direction calls can be made.
I always think of it as B having more stuff in it than A (subclasses often have more methods than superclasses), hence B gets the wide end of the arrow and A gets the pointy end!
B is the subject, A is the object, action is "inherit". So B acts on A, hence the direction of the arrow.
I think the point is to express "generalization": A is a generalization of B.
This way the arrow expresses the same concept as extension but goes the "right" way.
A note about ASCII notation, from the wonderful c2 wiki page:
You might consider the following ASCII diagram arrows for an
IS_A relation (inheritance)
+-------+      +-----------+
| Base  |      | Interface |
+---^---+      +-----^-----+
   /_\              /_\        _
    |                :        (_) OtherInterface
    |                :         |
    |                :         |
+---------+    +----------------+
| Derived |    | Implementation |
+---------+    +----------------+
vs
HAS_A relation (containment)
+-------+
| User  |
+-------+
     |
     |
     |
    \ /
+----V----+
|  Roles  |
+---------+