ClojureScript, Om and Core.async: How to handle events properly

I have been looking at using Om for rich client website design. This is also my first time using core.async. Reading the tutorial https://github.com/swannodette/om/wiki/Basic-Tutorial I have seen the usage of a core.async channel to handle the delete operation (as opposed to doing all the work in the handler). I was under the impression that the channel was used for deletion merely because the delete callback was declared in a scope that only has a cursor at the item level, while the actual manipulation has to happen on the list containing that item.
To get more insight into channels I watched Rich Hickey's talk http://www.infoq.com/presentations/clojure-core-async where he explains why it's a good idea to use channels to get application logic out of event callbacks. This made me wonder whether the actual purpose of the delete channel in the tutorial was to show that way of structuring an application. If so,
what are best practices associated with that pattern?
Should one create individual channels for all kinds of events? For example, if I add a controller that creates a new item, would I also create a new channel for object creation that is then used to pass objects to be added to the global state at another place in the application?
Let's say I have a list of items, and each item has a detailed/concise state flag. If detailed? is true it will display more information; if detailed? is false it will display less information. I have attached an on-click handler that uses om/transact! on the cursor (being a view onto the list item within the global state object).
(let [toggle-detail-handler
      (fn [e]
        (om/transact! (get-in myitem [:state])
                      #(conj % {:detailed? (not (:detailed? %))})))]
  (html [:li {:on-click toggle-detail-handler}
         "..."]))
I realize that this is a very small snippet, and that the benefit of using channels to decouple the event callback from the actual logic may at first not seem worth the effort here, while the benefits in more complex examples outweigh the cost. But on the other hand, introducing an extra channel for such detailed/not-detailed toggling seems to add a fair amount of load to the source code as well.
It would be great if you could give some hints/tips or other thoughts on the whole design issue and put them into perspective. I feel a little lost there.

I use channels to communicate between components that cannot communicate through cursors.
For example, I use channels when:
the communicating components do not share app state (e.g., their cursors are pointing down different branches of a hierarchical data structure)
the changes being communicated live outside of the app state (for example, component A wants to change component B's local state, and component B is not a child of A; otherwise this could be done by passing :state to om/build)
I want to communicate with something outside of the Om component tree
Note that I like to keep the "domain state" in the app state atom and the GUI state in the component local state. That is, app state is what is being rendered and local state is how (where "how" also covers which part). For example, if you are writing a text editor, the app state is the document being edited, and the local state is which page is being edited, whether bold is selected, and so forth.
In general, I use a single communication channel onto which I place [topic value] pairs. I then use pub and sub to route the messages. E.g., (def p (async/pub ch first)) to use the topic to dispatch events, and (async/sub p :foo my-ch) to route messages with topic :foo to my-ch. I generally store this single communication channel in Om's shared state.
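A minimal sketch of that wiring, assuming one shared channel of [topic value] pairs (the event-ch name and the :delete topic are illustrative, not from the tutorial):

(ns example.events
  (:require [cljs.core.async :as async :refer [chan put! <!]])
  (:require-macros [cljs.core.async.macros :refer [go-loop]]))

;; One shared channel carrying [topic value] pairs, typically handed
;; to components via Om's :shared options.
(def event-ch (chan))

;; A publication that dispatches on the first element, i.e. the topic.
(def event-pub (async/pub event-ch first))

;; A consumer interested in :delete events subscribes its own channel.
(def delete-ch (chan))
(async/sub event-pub :delete delete-ch)

(go-loop []
  (let [[_ item] (<! delete-ch)]
    ;; remove item from the app state here
    (recur)))

;; Somewhere in an event handler:
(put! event-ch [:delete {:id 42}])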
Sometimes I will use multiple channels, but I would do this to set up specific pipelines or workflows, rather than for general purpose messaging. For example, if I have a pipeline of processing components doing stuff to a stream of data, then I might set this up as a chain of channels with the endpoints connected into my Om application. For general UI development, this is rare.
I'm also playing around with a Qt-esque signals/slots system for my Om components, and I'm still experimenting with using a shared signals channel versus having each signal be its own channel. I'm as yet undecided which approach is better.

Related

Passing App Atom vs Ref-Cursors in Om

Using Om, it seems like passing relevant parts of the app state to child components is effectively the same thing as not passing any app state but using ref-cursors. What is the use case for ref-cursors over passing pieces of the app state down the chain?
I've read through all three of the tutorials and the conceptual overview on the Om GitHub repository, but I can't really find an answer to this question. It seems like one could use either one or the other and accomplish the same thing: one either defines a component with (defn blah [_ owner] ...) and uses ref-cursors, or defines a component with (defn blah [relevant-state owner] ...).
Can someone clarify when I would want to use a ref cursor inside a component as opposed to simply passing part of the app state into that component?
This question is pretty old, but I'll give it a shot.
I believe the main use-case for ref-cursors is to promote modularity and decoupling of the global application state from components. It limits the scope of components to just the data that they depend on, and nothing else.
Normally, you'd pass application state and any change callbacks down the component tree via props, as you say. A consequence is that the component hierarchy becomes tightly coupled to the "shape" of the application state. The component hierarchy has to match the state 1:1, or else many components will receive big blobs of data and callbacks that only a few subcomponents depend on and that they themselves may never actually use; i.e., you might find yourself passing parts of the global state down the component chain just so that components further down can have access to them. These components are being used as a channel for passing down state, which is not ideal because it exposes them to application state that they have no business knowing about. You run the risk of coupling and lose modularity.
With ref-cursors, component dependencies are explicitly specified by each component upon mounting. The cursors are a black box into the application state: the component itself never has to know where inside the application state it is situated. You have the full flexibility of stating a component's dependencies on anything in the application state without having to worry about all the transient data being passed around. You get one-way data flow without having to pass update callbacks down arbitrarily deep hierarchies. The end result is excellent component compartmentalization and modularity. As a bonus, you now have a single point into the application state that you can observe for changes!
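A minimal sketch of the ref-cursor pattern, following the shape of Om's ref-cursor examples (the app-state, items and item-list names are illustrative):

(ns example.cursors
  (:require [om.core :as om]
            [om.dom :as dom]))

(def app-state (atom {:items [{:text "foo"} {:text "bar"}]}))

;; A ref-cursor rooted at :items; any component may depend on it
;; without receiving it through props.
(defn items []
  (om/ref-cursor (:items (om/root-cursor app-state))))

;; The component declares its own dependency via om/observe and
;; re-renders whenever the observed data changes.
(defn item-list [_ owner]
  (reify
    om/IRender
    (render [_]
      (let [xs (om/observe owner (items))]
        (apply dom/ul nil
               (map #(dom/li nil (:text %)) xs))))))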
I used ref-cursors because when you update one, all of its observers get called.

What's the reason for interfaces to exist in ActionScript 3 and other languages?

What is the point of interfaces? Even if we implement an interface on a class, we have to declare its functionality again and again each time we implement it on a different class. So what is the reason interfaces exist in AS3, or any other language that has them?
Thank you
I basically agree with the answers posted so far, just had a bit to add.
First, to answer the easy part: yes, other languages have interfaces. Java comes to mind immediately, but I'm pretty sure all OOP languages (C++, C#, etc.) include some mechanism for creating interfaces.
As stated by Jake, you can write interfaces as "contracts" for what will be fulfilled in order to separate work. To take a hypothetical: say I'm working on A, you're working on C, and Bob is working on B. If we define B' as an interface for B, we can quickly and relatively easily define B' (relative to defining B, the implementation) and all go on our way. I can assume that from A I can code to B', you can assume that from C you can code to B', and when Bob gets done with B we can just plug it in.
This leads to Jugg1es's point. The ability to swap out a whole functional piece is made easier by "dependency injection" (if you don't know this phrase, please google it). This is the exact thing described: you create an interface that defines generally what something will do, say a database connector. For all database connectors, you want to be able to connect to the database and run queries, so you might define an interface that says the classes must have a connect() method and a doQuery(stringQuery) method. Now let's say Bob writes the implementation for MySQL databases, and later your client says, "We just paid 200,000 for new servers and they'll run Microsoft SQL." To take advantage of that with your software, all you'd need to do is swap out the database connector.
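A minimal AS3 sketch of that database-connector contract (IDatabaseConnector and MySQLConnector are hypothetical names, not a real library):

// IDatabaseConnector.as
package {
    public interface IDatabaseConnector {
        function connect():void;
        function doQuery(query:String):Array;
    }
}

// MySQLConnector.as
package {
    public class MySQLConnector implements IDatabaseConnector {
        public function connect():void {
            // open a MySQL connection here
        }
        public function doQuery(query:String):Array {
            // run the query against MySQL; return the matching rows
            return [];
        }
    }
}

// Calling code depends only on the interface, so switching to an
// MSSQLConnector later is a one-line change:
var db:IDatabaseConnector = new MySQLConnector();
db.connect();
var rows:Array = db.doQuery("SELECT * FROM users");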
In real life, I have a friend who runs a meat packing/distribution company in Chicago. The company that makes their software/hardware setup for scanning packages and weighing things as they come in and out (inventory) is telling them they have to upgrade to a newer OS/server and newer hardware to keep up with the software. The software is not written in a modular way that allows them to maintain backwards compatibility. I've been in this boat plenty of times, telling someone that xyz needs to be upgraded to get abc functionality that will make doing my job 90% easier. Anyhow, I guess the point is that in the real world people don't always make use of these things, and it can bite you in the ass.
Interfaces are vital to OOP, particularly when developing large applications. One example is if you needed a data layer that returns data on, say, Users. What if you eventually change how the data is obtained, say you started with XML web services data, but then switched to a flat file or something. If you created an interface for your data layer, you could create another class that implements it and make all the changes to the data layer without ever having to change the code in your application layer. I don't know if you're using Flex or Flash, but when using Flex, interfaces are very useful.
Interfaces are a way of defining the functionality of a class. It might not make a whole lot of sense when you are working alone (especially starting out), but when you start working in a team it helps people understand how your code works and how to use the classes you wrote (while keeping your code encapsulated). That's the best way to think of them at an intermediate level, in my opinion.
While the existing answers are pretty good, I think they miss the chief advantage of using Interfaces in ActionScript, which is that you can avoid compiling the implementation of that Interface into the Main Document Class.
For example, if you have an ISpaceShip interface, you now have a choice of several ways to populate a variable typed to that interface. You could load an external swf whose main document class implements ISpaceShip. Once the Loader's contentLoaderInfo's COMPLETE event fires, you cast the content to ISpaceShip, and the implementation (whatever it is) is never compiled into your loading swf. This allows you to put real content in front of your users while the load process happens.
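A sketch of that loading pattern, assuming a hypothetical ISpaceShip interface and swf name:

import flash.display.Loader;
import flash.display.DisplayObject;
import flash.events.Event;
import flash.net.URLRequest;

var loader:Loader = new Loader();
loader.contentLoaderInfo.addEventListener(Event.COMPLETE, onShipLoaded);
loader.load(new URLRequest("ship.swf")); // its document class implements ISpaceShip

function onShipLoaded(e:Event):void {
    // Cast the loaded content to the interface; the concrete
    // implementation was never compiled into this swf.
    var ship:ISpaceShip = ISpaceShip(loader.content);
    addChild(DisplayObject(ship));
}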
By the same token, you could have a timeline instance declared in the parent AS class of type ISpaceShip with "Export for ActionScript in Frame N" unchecked. This will compile on the frame where it is first used, so you no longer need to account for it in your preloading time. Do this with enough things and suddenly you don't even need a preloader.
Another advantage of coding to interfaces comes when you're doing unit tests on your code, which you should be unless your code is completely trivial. It enables you to make sure that the code is succeeding or failing on its own merits, not based on the merits of a collaborator, or in cases where the collaborator isn't appropriate for a test. For example, if you have a controller that is designed to control a specific type of view, you're not going to want to instantiate the full view for the test, but only the functionality that makes a difference for the test.
If you don't have support in your work situation for writing tests, coding to interfaces helps make sure that your code will be testable once you get to the point where you can write tests.
The above answers are all very good, the only thing I'd add - and it might not be immediately clear in a language like AS3, where there are several untyped collection classes (Array, Object and Dictionary) and Object/dynamic classes - is that it's a means of grouping otherwise disparate objects by type.
A quick example:
Imagine you have a space shooter where the player has missiles which lock on to various targets. Suppose, for this purpose, you wanted any type of object that could be locked onto to have internal functions for registering this (aka an interface):
function lockOn():void;//Tells the object something's locked onto it
function getLockData():Object;//Returns information, position, heat, whatever etc
These targets could be anything, a series of totally unrelated classes - enemy, friend, powerup, health.
One solution would be to have them all inherit from a base class which contained these methods, but Enemies and Health Pickups wouldn't logically share a common ancestor (and if you find yourself making bizarre inheritance chains to accommodate your needs, you should rethink your design!), and your missile will also need a reference to the object it's locked onto:
var myTarget:Enemy;//This isn't going to work for the Powerup class!
or
var myTarget:Powerup;//This isn't going to work for the Enemy class!
...but if all lockable classes implement the ILockable interface, you can set this as the type reference:
var myTarget:ILockable;//This can be set as Enemy, Powerup, any class which implements ILockable!
...and have the functions above as the body of the interface itself.
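Written out in full, the interface built from those two function signatures would look like this:

// ILockable.as
package {
    public interface ILockable {
        function lockOn():void;
        function getLockData():Object;
    }
}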
They're also handy when using the Vector class (the name may mislead you, it's just a typed array) - they run much faster than arrays, but only allow a single type of element - and again, an interface can be specified as type:
var lockTargets:Vector.<Enemy> = new Vector.<Enemy>();//New vector that only accepts Enemy instances
lockTargets[0] = new HealthPickup();//Compiler won't like this!
but this...
var lockTargets:Vector.<ILockable> = new Vector.<ILockable>();
lockTargets[0] = new HealthPickup();
lockTargets[1] = new Enemy();
Will, provided Enemy and HealthPickup implement ILockable, work just fine!

Game programming without a main loop

My professor gave my class an assignment today based on object-oriented programming in Pygame. Basically, he has said that the game we are to create will have no main game loop. While I believe that it is possible to do this (and this question has stated that it is possible), I don't believe that it is required for adherence to the object-oriented paradigm.
In a diagram the professor gave, he showed the game initializing, and then, as the objects were instantiated, the control flow of the program being distributed among the objects.
Basically I believe it would be possible to implement a game this way, but it would not be an ideal way nor is it required for Object Oriented adherence. Any thoughts?
EDIT: We are creating an asteroids clone, which I believe further complicates things due to the fact that it is a real time action game.
Turn-based games or anything event-driven would be the route to go. In other words, take desktop GUI apps: they just tick over (wait) until an event is fired. The same could be done for a simple game. Take checkers, for example. Looping each game cycle would be overkill; 90% of the time the game will be static. Using some form of events (the observer design pattern would be nice here) would provide a much better solution. You're using Pygame, so there may be support for this built in, though due to my limited use I cannot comment fully. Either way, the general principles are the same.
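For illustration, a minimal observer sketch in Python (the Event and Board classes are hypothetical):

class Event:
    # A minimal observer implementation: handlers subscribe, fire notifies.
    def __init__(self):
        self._handlers = []

    def subscribe(self, handler):
        self._handlers.append(handler)

    def fire(self, *args):
        for handler in self._handlers:
            handler(*args)

# A checkers board could expose events instead of being polled in a loop:
class Board:
    def __init__(self):
        self.piece_moved = Event()

    def move(self, src, dst):
        # ...validate and apply the move, then notify observers...
        self.piece_moved.fire(src, dst)

board = Board()
board.piece_moved.subscribe(lambda src, dst: print("moved", src, "->", dst))
board.move((1, 1), (2, 2))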
All in all, it's a pretty rubbish assignment, if you ask me. If it's meant to teach you event-driven programming, a simple GUI application would be better. Even the simplest of games use a basic game loop, which can adhere to OO principles.
Hmm. In the general case, I think this idea is probably hokum. SDL (upon which Pygame is implemented) provides information to the program via an event queue, and consuming that queue requires some sort of loop that repeatedly checks the queue for events, processes them, and waits until the next event arrives.
There are some particular exceptions to this, though. You can poll the mouse and keyboard for their state without accessing the event queue. The problem is that polling still requires something like a loop, so that it happens over and over again until the game exits.
You could use pygame.time to wait on a timer instead of waiting on the event queue, and then pass control to the game objects, which poll the mouse and keyboard as above; but you are still 'looping', just bound by a timer instead of the event queue.
Instead of focusing on eliminating the main loop, think instead about how to use it in an object-oriented way.
For instance, you could have a 'root' object which has its own event loop, but instead of performing any action based on the incoming events, it calls a handler on several child objects. For instance, when the root object receives a pygame.event.MOUSEBUTTONDOWN event, it could search through its children for a 'rect' attribute and determine if the event.pos attribute is inside that rect. If it is, it can call a hypothetical onClick method on that child object.
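A minimal Pygame sketch of that root-object idea (the Root and Child classes and the onClick method are hypothetical, as in the answer):

import pygame

class Child:
    def __init__(self, rect):
        self.rect = rect  # a pygame.Rect used for hit-testing

    def onClick(self, pos):
        print("clicked at", pos)

class Root:
    def __init__(self, children):
        self.children = children

    def run(self):
        # The root object owns the only event loop; children merely
        # expose a rect attribute and an onClick handler.
        pygame.init()
        screen = pygame.display.set_mode((320, 240))
        running = True
        while running:
            for event in pygame.event.get():
                if event.type == pygame.QUIT:
                    running = False
                elif event.type == pygame.MOUSEBUTTONDOWN:
                    for child in self.children:
                        if child.rect.collidepoint(event.pos):
                            child.onClick(event.pos)
            pygame.display.flip()
        pygame.quit()

Root([Child(pygame.Rect(10, 10, 100, 50))]).run()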
I think it might qualify as event driven programming? Which can still be object oriented. You see this in Flash a lot.
There's a difference between a main loop and a main class. You can still have a game class initialize all of your objects and then rely on inputs to move the game onward.
Kinda hard to say exactly without knowing the exact parameters of your assignment; the devil is in the details.
You might look at how Python utilizes signals. A decent example I found is at: http://docs.python.org/library/signal.html

What are some good use cases for the CALLBACK pattern/idiom?

I don't use this pattern; maybe there are places where it would have been appropriate and I used something else. Have you used it in your daily coding? Feel free to give samples, in your language of choice, along with your explanation.
Callbacks aren't really a "pattern"; they're more like a building block. A number of the Gang of Four design patterns use virtual methods in a callback-like way. Justin Niessner has already mentioned Observer.
Callbacks are much older than OOP (and probably older than 3GLs and even assembler). Another old idea is the parameter block - the C interpretation being a struct full of related members to be passed to a function so that function doesn't need a huge parameter list.
OOP classes build upon the parameter block (and add a philosophy to it). The class instance itself is a parameter block passed by reference to its methods. The virtual table is a dispatch-handling parameter block. Every virtual method has a callback pointer in the dispatch-handling parameter block. A pure virtual method reserves space for the callback pointer in the parameter block, and promises to provide the actual pointer later.
Since the class is the building block for object oriented design patterns, and parameter blocks and callbacks are the building blocks of classes - well, you could claim that all OOP design patterns are built from these ideas.
I'd like to be able to say "parameter blocks and callbacks, plus style rules guiding their use, inspired object orientation" but as appealing as it sounds, I don't know whether it's true.
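For illustration, a tiny C sketch of a parameter block that carries a callback pointer (all names here are made up):

#include <stdio.h>

/* A parameter block: related arguments bundled into one struct. */
typedef struct {
    const char *name;
    int count;
    /* A callback pointer stored in the parameter block: the same idea
       a virtual table builds upon. */
    void (*on_item)(const char *name, int index);
} JobParams;

static void run_job(const JobParams *p) {
    for (int i = 0; i < p->count; i++)
        p->on_item(p->name, i); /* call back into the caller's code */
}

static void print_item(const char *name, int index) {
    printf("%s: item %d\n", name, index);
}

int main(void) {
    JobParams params = { "demo", 3, print_item };
    run_job(&params);
    return 0;
}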
I use callbacks pretty much every day in the following scenarios:
Events: When the user clicks their mouse on a control, presses a key or otherwise interacts with the UI in a way I need to handle, I subscribe to the delegate that the control publishes for the event. I can then handle it by updating the UI, cancelling the event in certain circumstances or otherwise taking some special action.
Multithreaded programming: When programming a GUI, it's important to keep the UI responsive and to indicate the progress of a long-running background task to the user. To do this, I kick off the task in a separate thread and then publish delegates (events in the .NET world) that provide my UI with the opportunity to notify the user about the progress being made.
Lambda functions: In .NET, lambda functions are a form of delegate, one that lets me interact with another piece of code's operation at a later point in time. LINQ is a great example of this. I can create a small matching function and then supply it to a LINQ query. Later, when I execute my query against a collection, the matching function is called to determine if there is a match for the query. This allows me to not have to build or worry about the query mechanism; I just have to tell the query mechanism where to go to find out whether a comparison is a match. (A sketch follows below.)
These examples just scratch the surface, I'm sure. But they are useful examples of how I use callbacks every day.
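A small C# sketch of the lambda/LINQ case from the list above (the predicate passed to Where is the callback; the names are illustrative):

using System;
using System.Linq;

class CallbackDemo
{
    static void Main()
    {
        int[] numbers = { 1, 2, 3, 4, 5, 6 };

        // The lambda is the callback: LINQ calls it back once per element
        // when the query executes, to decide whether the element matches.
        Func<int, bool> isEven = n => n % 2 == 0;

        var evens = numbers.Where(isEven);

        foreach (var n in evens)
            Console.WriteLine(n); // prints 2, 4 and 6
    }
}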
The .NET platform uses callbacks heavily to implement the Observer pattern.
They also get used for handling asynchronous processes.
Objective-C and the Cocoa framework make a lot of use of callbacks. An example would be NSURLConnection, which will inform an object given to it (called its delegate) when something happens on the connection:
NSURLConnection *foo = [[NSURLConnection alloc] initWithRequest:request delegate:self];
Note the passing of delegate there. The request proceeds in the background, and the instance will then send messages to the delegate (in this case, self), like:
connectionDidFinishLoading:
connection:didFailWithError:
You get the idea. I believe this is called the "observer pattern". It's all tied into Cocoa's event loop (as far as I know; I'm still learning) and is cheap 'n' easy asynchronous programming. A lot of frameworks in a variety of languages follow this approach.
.NET has delegates as well, which are similar. Think events.
I use callbacks a great deal in JavaScript to let me know when an asynchronous call has finished, so the result can be processed.
But in JavaScript, and now in C# 3, I pass functions in as parameters, so that the processing can go on without explicitly setting up a delegate to be called.
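A small sketch of that style (fetchUser is hypothetical; the shape of the pattern is what matters):

// The second argument is the callback: a function passed as a parameter,
// invoked when the asynchronous work completes.
function fetchUser(id, onDone) {
    setTimeout(function () { // stand-in for a real async call (e.g. XHR)
        onDone({ id: id, name: "Ada" });
    }, 100);
}

fetchUser(42, function (user) {
    console.log("loaded", user.name); // runs once the result arrives
});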

AS3: Model and View Communication in a Game

So I'm attempting to use the MVC pattern for a game I'm working on. Everything has been going pretty smoothly so far, but I'm having trouble figuring out how to get my model and my view to talk to each other effectively.
My general model structure involves lots of nested information.
a Level has Rooms
a Room has Layers
a Layer has Objects
Each layer has an index and a corresponding layer in the view that renders it. I need the objects to post an update message as they animate so their corresponding layer in the view can update. I'm trying to use the built-in event system to handle these updates.
My issue is that I'm not sure how to avoid putting listeners on every object in the game, which strikes me as bad (perhaps I'm wrong here). If I change rooms, the layer doesn't have a way of removing listeners from the objects in the last room, because it only accesses layers through the current room. Objects are only updated when they are in the current room, so the other objects won't need to fire events.
The view is set up to cascade events to all of the children, so the root node can receive all updates (I think I did that part correctly), and the layer can match the target because it knows which layer it's rendering. The problem is getting the message out from the objects to the view.
Of course this makes sense to me, because I've been working with the code for a while now.
If I can provide more clarification please ask. This is my first time working with the MVC pattern, so I'm sure I could do things better.
If you have any suggestions as to how I might solve this conundrum, please share!
Edit: I have something working: I keep track of the current layer set from outside the view and the model, which manages adding/removing the appropriate event listeners and delegating the update event to the layer, as suggested. But please, if there's anything I can do to improve this, let me know.
If you are new to MVC you may want to check out the PureMVC framework for AS3. When I first started learning MVC I started by trying to build my own implementation of the pattern. After trying out PureMVC I got a much better understanding of the structure of MVC.
Your rooms/layers/objects sound like they have a parent/child relationship and may be a good candidate for the composite design pattern. Basically this is a tree-like structure where you could trigger an event which would then cascade through all branches. If you do a search for 'composite pattern' you may get a better explanation of how this may work for you.
There are a few solutions you could take. Adding event listeners is reasonable, but as mentioned, you are going to need to make sure you clean them up appropriately; this will be a requirement with a lot of other solutions as well.
Another would be to pass the layer in on object construction, perhaps in the form of a "parent" property. In this case the object would notify its parent whenever it has changed; then on layer update, the layer would go through and handle all objects that have registered as having changed. This has performance benefits in that the object could change several times between renders, but the parent would only act on those changes once (when it's been told to update itself). In this case you would still need to make sure you clean up your references properly to avoid garbage-collection problems.
Yet another solution would be for objects to register within themselves that they have changed, typically in the form of a simple Boolean flag. In this case the parent (your layer) would loop through all children, presumably stored in some form of collection, and handle updates for all those that say they've been changed. This solution removes the dependency from object to layer, but in extreme cases it could lead to performance issues (the extreme case being so many objects that checking a single Boolean value on each of them is too much to handle; that'll be A LOT of objects).
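A minimal AS3 sketch of that Boolean dirty-flag approach (the GameObject and Layer names are illustrative):

// GameObject.as
package {
    public class GameObject {
        public var changed:Boolean = false;

        public function moveTo(x:Number, y:Number):void {
            // ...update the position here...
            changed = true; // mark dirty; the layer picks this up on update
        }
    }
}

// Layer.as
package {
    public class Layer {
        private var objects:Vector.<GameObject> = new Vector.<GameObject>();

        public function add(obj:GameObject):void {
            objects.push(obj);
        }

        public function update():void {
            // Called once per render; acts on each changed object only
            // once, however many times it changed between renders.
            for each (var obj:GameObject in objects) {
                if (obj.changed) {
                    redraw(obj);
                    obj.changed = false;
                }
            }
        }

        private function redraw(obj:GameObject):void {
            // ...refresh this object's visual representation...
        }
    }
}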
Hope that helps.
Using the Mediator pattern in PureMVC, rather than putting listeners on every object, you could have a Mediator listen to the application instance for the events. Then, inside the actual objects, send a bubbling event that bubbles up the display hierarchy to the application, where the Mediator hears it. The Mediator then takes the appropriate action, such as sending off a notification to trigger a command with some logic, perhaps. The target of the event would, of course, be the item in your world that sent the event, so if the Command needs to manipulate or inspect that item, just pass the event.target in the notification body. QED.
-=Cliff>