AS3 - Does the amount of code in an object matter for multiple instances? - actionscript-3

What's better?
1000 objects with 500 lines of code each
vs
1000 objects with 30 lines of code each that delegate to one manager with 500 lines of code.
Context:
I have signals in my game. Dozens and dozens of them. I like that they have better performance than the native Flash Event.
I figured that as I was going to have so many, they should be lightweight, so each signal only stores a few variables and three methods.
Variables: head, tail, isStepping, hasColdNodes, signalMgr
Methods: addListener, removeListener, dispatch
But all these signals delegate the heavy lifting to the signalMgr, as in:
signalMgr.addListener(this, listener, removeOnFirstCallback);
That manager handles all the doubly-linked list bookkeeping, and has much more code than a signal.
So, is it correct to think this way?
I figure that if I had all the management code in the signal, it would be repeated in memory every time I instantiate one.
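The pattern being described can be sketched like this (in Java rather than AS3, purely for illustration; Signal, SignalManager and their methods are stand-ins for your classes, not the real as3-signals API). Each signal keeps only a reference to the shared manager, and the list-management code exists once, in the manager:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical sketch of the delegation pattern described above. Each
// Signal instance stays tiny and forwards to one shared manager, so the
// bookkeeping code exists in a single place.
class SignalManager {
    // listener lists keyed by the owning signal
    private final Map<Object, List<Consumer<Object>>> listeners = new HashMap<>();

    void addListener(Object signal, Consumer<Object> listener) {
        listeners.computeIfAbsent(signal, k -> new ArrayList<>()).add(listener);
    }

    void removeListener(Object signal, Consumer<Object> listener) {
        List<Consumer<Object>> list = listeners.get(signal);
        if (list != null) list.remove(listener);
    }

    void dispatch(Object signal, Object payload) {
        List<Consumer<Object>> list = listeners.get(signal);
        if (list == null) return;
        // iterate over a copy so a listener removing itself mid-dispatch is safe
        for (Consumer<Object> l : new ArrayList<>(list)) l.accept(payload);
    }
}

class Signal {
    private final SignalManager mgr; // the only per-instance state

    Signal(SignalManager mgr) { this.mgr = mgr; }

    void addListener(Consumer<Object> listener) { mgr.addListener(this, listener); }
    void removeListener(Consumer<Object> listener) { mgr.removeListener(this, listener); }
    void dispatch(Object payload) { mgr.dispatch(this, payload); }
}
```

Note, though, that on the AVM2 (as on the JVM) method code is stored once per class, not once per instance, so having 500 lines of methods in Signal would not by itself multiply memory use by instance count; only per-instance fields scale with the number of instances.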

In the context of your question this is pretty much irrelevant; neither approach should produce much of a difference.
From what you say, it seems that you assume many things but haven't actually verified any of them. For instance, when you say "I like that they have better performance than the native Flash Event", I can only assume that you read that somewhere but never tried to verify it yourself. There are only a few cases where using a signal system makes a tiny bit of difference; in most cases signals don't bring much, and in some cases they make things worse. In the context of Flash development, signals don't bring anything more than simple convenience. Flash is an event-driven system that cannot be turned off, so using signals with it means using events plus signals, not signals alone. In the case of custom events, using delegation is much more efficient, easier to use, and doesn't require any object creation.
But the real answer to the question is even simpler: there's no point optimizing something that you don't know needs optimization. Even worse, different operating systems produce different optimization needs, so anyone trying to answer a general optimization question can only fail or pretend to know.

We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. (Donald Knuth)
The ultimate goal of any programmer is to write clean code. So you should write clean code with only a few thoughts about optimization.

Related

Do we need to avoid duplicate at any cost?

In our application framework, whenever code duplication starts to appear (even to the slightest degree), people immediately want it removed and extracted into a separate function. Though that is good, when it is done at an early stage of development the new function soon needs to change in unexpected ways. Then they add an if condition and other conditions to keep that function alive.
Is duplication really evil? I feel we can wait for triplication (duplication in three places) to occur before extracting a function, but I'm seen as an alien when I suggest this. Do we need to avoid duplication at any cost?
There is a quote from Donald Knuth that pinpoints your question:
The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming.
I think that some duplication is needed so we are able to understand which critical points actually need to be DRYed.
Duplicated code is something that should be avoided but there are some (less frequent) contexts where abstractions may lead you to poor code, for example when writing test specs some degree of repetition may keep your code easier to maintain.
I guess you should always ask what the gain of another abstraction is; if there is no quick win, you may leave some duplicated code and add a TODO comment. This will make you code faster, deliver earlier, and DRY only what is most valuable.
A similar approach is described by Three Strikes And You Refactor:
The first time you do something, you just do it.
The second time you do something similar, you wince at the duplication, but you do the duplicate thing anyway.
(Or if you're like me, you don't wince just yet -- because your DuplicationRefactoringThreshold is 3 or 4.)
The third time you do something similar, you refactor.
This is the holy war between management, developers and QC.
So to end this war you need to build a common set of concepts for all three of them:
Abstraction vs Encapsulation, Interface vs Class, Singleton vs Static.
You all need to give yourselves time to understand each other's points of view to make this work.
For example, if you are using Scrum, you can run a few sprints of development and then dedicate a sprint to abstraction and encapsulation.

Running into memory issues with AS3 using procedural logic, any tips on how to prevent this?

So, in a nutshell, my question is pretty much in the title. I have my first big project, and now, after 11k lines of code in one big file, I am running into memory issues. After a while my Flash slows down until I can't do anything, even though I always clear containers when I can and remove listeners when they're not needed.
Procedural, for those who don't know, means I have everything in one big main function, and within that hundreds of other functions.
OOP logic feels like too big a thing to try to understand at this point; procedural has so far been much more natural to me...
Any tips on how to prevent memory from running out?
You don't really need any object oriented programming to break it down.
Just apply a bit of logic to where you can group and separate things. Also, the chance of repetitive lines of code is very high too.
So first off, start grouping lines: put them inside different functions and call those from main.
After you have broken it all down into chunks, you can start thinking about grouping the functions into classes as well. But at the very least, the first step should have reduced your problem.
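As a toy illustration of "grouping lines into functions" (in Java, and with all names invented for the example): one long run of statements becomes a few small named steps that a top-level function composes.

```java
// Toy illustration only, all names invented. Before, summing, the bonus
// rule and the final result were one long run of inline statements;
// after, each step is a small named function composed by finalScore().
class ScoreRefactor {
    static int sum(int[] points) {
        int total = 0;
        for (int p : points) total += p;
        return total;
    }

    static int applyBonus(int total) {
        // invented game rule: totals above 100 earn a flat +10 bonus
        return total > 100 ? total + 10 : total;
    }

    static int finalScore(int[] points) {
        return applyBonus(sum(points));
    }
}
```

The top-level function now reads as a summary of what happens, and each small step can be tested on its own.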
Your problem is hard to solve without using object-oriented programming. By the way, the C style of coding is usually called "imperative programming"... just so you know.
The bad news is, 11k lines of code in one file means all the code is inside one translation unit, so everything you coded is always in memory.
If you break it up into classes, then individual class instances (objects) will be created and destroyed on a requirement basis, thereby taking up as much memory as needed (growing and shrinking, not static).
Finally, using AS3 as if it were C will hurt you in many other ways long-term. So please learn OOP and break up your monolithic code into objects.
Aside from this being bad practice overall, you may not be giving the VM (virtual machine) a chance to breathe. I mean, if your procedures are busy all the time in something like a polling loop checking states, then with great probability the VM never finds an opportunity to garbage collect. Which is a disaster.
So do not poll for events, if that is what you are doing. Try getting rid of the grand procedure (the seer of all :) ) and try another architecture in which the central monitor procedure is invoked by event handlers only when needed.
Then, when things settle, sell your product and get rich. Learn OOP a.s.a.p. :)

How short should a function be?

I did some searching on here and haven't found anything quite like this, so I'm going to go ahead and ask. This is really more about semantics than an actual programming question. I'm currently writing something in C++ but the language doesn't really matter.
I'm well aware that it's good programming practice to keep your functions/methods as short as possible. Yet how do you really know if a function is too long? Alternately, is it ever possible to break functions down too much?
The first programming language I learned (other than Applesoft BASIC, which doesn't count...) was 6502 assembly language, where speed and optimization is everything. In cases where a few cycle counts screws up the timing of your entire program, it's often better to set a memory location or register directly rather than jump to another subroutine. The former operation might take 3 or 4 cycles, while, altogether, the latter might take two or three times that.
While I realize that nowadays if I were to even mention cycle counts to some programmers they'd just give me a blank look, it's a hard habit to break.
Specifically, let's say (again using C++) we have a private class method that's something like the following:
int Foo::do_stuff(int x) {
    this->x = x;
    // various other operations on x
    this->y = this->x;
    return this->y;
}
I've seen some arguments that, at the very least, each set of operations should be its own function. For instance, do_stuff() should in theory be renamed set_x(int x), a separate function should be written for the set of operations performed on class member x, and a third function should be written to assign the final value of class member x to class member y. But I've seen other arguments that EVERY operation should have its own function.
To me, this just seems wrong. Again, I'm looking at things from an internal perspective; every method call is pushing an address on the stack, performing its operations, then returning from the subroutine. This just seems like a lot of overhead for something relatively simple.
Is there a best practice for this sort of thing or is it more up to individual judgment?
Since the days of 6502 assembly, two things have happened: Computers have got much faster, and compilers (where appropriate) have become smarter.
Now the advice is to stop spending all your time fretting about the individual cycles, until you are sure that it is a problem. You can spend that time more wisely. If you mention cycle counts to me, I won't look at you blankly because I don't know what they are. I will look at you wondering if you are wasting your effort.
Instead, start thinking about making your functions small enough to be:
understandable,
testable,
re-usable, maybe, where that is appropriate.
If, later, you find some of your code isn't running fast enough, consider how to optimise it.
Note: The optimisation might be to hint to the compiler to move the function inline, so you still get the advantages above, without the performance hit.
The most important thing about deciding where to break up a function is not necessarily how much the function does. It is rather about determining the API of your class.
Suppose we break Foo::do_stuff into Foo::set_x, Foo::twiddle_x, and Foo::set_y. Does it ever make sense to do these operations separately? Will something bad happen if I twiddle x without first setting it? Can I call set_y without calling set_x? By breaking these up into separate methods, even private methods within the same class, you are implying that they are at least potentially separate operations.
If that's not the case, then by all means keep them in one function.
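For instance (in Java rather than C++, with the logic invented to mirror the snippet above): if twiddling x is only ever meaningful between setting x and copying it into y, keeping all three steps in one method means callers can never observe the object in a half-updated state.

```java
// Java sketch mirroring the do_stuff example above. The three steps are
// deliberately kept in one method because they are not meaningful as
// separate operations: callers should never see x set but y stale.
class Foo {
    private int x;
    private int y;

    int doStuff(int x) {
        this.x = x;
        this.x = this.x * 2 + 1; // stand-in for "various other operations on x"
        this.y = this.x;
        return this.y;
    }

    int getY() { return y; }
}
```

Splitting this into setX/twiddleX/setY would add three entries to the class's internal API that only make sense when called in exactly that order, which is a hint they belong together.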
I'm well aware that it's good programming practice to keep your functions/methods as short as possible
I wouldn't use the above criterion to refactor my functions into smaller ones. Below is what I use:
1. Keep all the functions at the same level of abstraction.
2. Make sure there are no side effects (for exceptional cases, make sure to document them explicitly).
3. Make sure a function is not doing more than one thing (the Single Responsibility Principle). But you can break this rule to honor 1.
Other good practices for method design:
Don't make the client do anything the module could do
Don't violate the Principle of Least Astonishment
Fail fast: report errors as soon as possible after they occur
Overload with care
Use appropriate parameter and return types
Use consistent parameter ordering across methods
Avoid long parameter lists
Avoid return values that demand exceptional processing
After reading Clean Code: A Handbook of Agile Software Craftsmanship which touched on almost every piece of advice on this page I started writing shorter. Not for the sake of having short functions but to improve readability and testability, keep them at the same level of abstraction, have them do one thing only, etc.
What I've found most rewarding is that I find myself writing a lot less documentation because it's just not necessary for 80% of my functions. When the function does only what its name says, there's no point in restating the obvious. Writing tests also becomes easier because each test method can set up less and perform fewer assertions. When I revisit code I've written over the past six months with these goals in mind, I can more quickly make the change I want and move on.
I think in any code base, readability is a much greater concern than a few extra clock cycles, for all the well-known reasons, foremost maintainability, which is where most code spends its time and, as a result, where most money is spent on the code. So I'm kind of dodging your question by saying that people don't concern themselves with it because it's negligible in comparison to other [more corporate] concerns.
Although if we put other concerns aside, there are explicit inline statements and, more importantly, compiler optimisations that can often remove most of the overhead involved with trivial function calls. The compiler is an incredibly smart optimising machine, and will often organise function calls much more intelligently than we could.

How to design and write a game efficiently?

I'm writing a very simple Java Game. Let me describe it briefly:
There are 4 players in a Map.
The map is a two-dimensional matrix with a value called "height"
The height difference between 2 nodes is the cost of that edge.
Use Dijkstra algorithm to help player navigate from a source to a destination.
Four players take turns making a move. There are 8 possible moves (top-left, top, top-right, ...).
If they meet, they fight for gold; otherwise they move toward their targets.
As they move, their strength decreases by the height difference between the two nodes.
... etc
....
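The pathfinding part of the rules above can be sketched compactly. This is a hedged illustration, not your code: it assumes the cost of a step is the absolute height difference between the two cells and that all 8 neighbour directions are legal moves, per the rules listed.

```java
import java.util.Arrays;
import java.util.PriorityQueue;

// Dijkstra over a height grid: step cost is assumed to be the absolute
// height difference between adjacent cells, with 8-direction movement.
class GridDijkstra {
    static int[][] shortestCosts(int[][] height, int sr, int sc) {
        int rows = height.length, cols = height[0].length;
        int[][] dist = new int[rows][cols];
        for (int[] row : dist) Arrays.fill(row, Integer.MAX_VALUE);
        dist[sr][sc] = 0;
        // queue entries are {cost, row, col}, ordered by cost
        PriorityQueue<int[]> pq = new PriorityQueue<>((a, b) -> a[0] - b[0]);
        pq.add(new int[]{0, sr, sc});
        while (!pq.isEmpty()) {
            int[] cur = pq.poll();
            int d = cur[0], r = cur[1], c = cur[2];
            if (d > dist[r][c]) continue; // stale entry, already improved
            for (int dr = -1; dr <= 1; dr++) {
                for (int dc = -1; dc <= 1; dc++) {
                    if (dr == 0 && dc == 0) continue;
                    int nr = r + dr, nc = c + dc;
                    if (nr < 0 || nr >= rows || nc < 0 || nc >= cols) continue;
                    int step = Math.abs(height[nr][nc] - height[r][c]);
                    if (d + step < dist[nr][nc]) {
                        dist[nr][nc] = d + step;
                        pq.add(new int[]{d + step, nr, nc});
                    }
                }
            }
        }
        return dist;
    }
}
```

Keeping the algorithm in one small class like this, with the map passed in rather than referenced globally, also makes it one of the few parts of a game that is genuinely easy to unit test.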
The problem I'm encountering is that the source code is getting longer and more complex day by day. I think I'm using the wrong approach somehow; I feel so tired of constantly changing the implementation. Here is my approach:
Write out all requirements.
Create all the objects that I need, with all their getters and setters.
Create a static class to test the logic.
Create unit tests while putting the logic together.
Add some more code, then change the code to fit the tests.
Write one big method that runs, then break it down into smaller methods, then write unit tests again.
If everything works fine, add more requirements and more code.
Then things get complicated, because the more code I add, the more the complexity increases. There is no longer time to write unit tests, because creating a test case now requires too much work.
Re-design, then change the implementation, and go to step 1 again.
I come from a C++ background, and I'm only comfortable writing 'static' libraries such as stacks, queues, linked lists, trees... A game is a really big challenge for me, especially since I have to use Java. I understand that the core of programming is the same, so picking up Java was not really that bad. However, the time spent looking up Java's API is not negligible. Further, the game logic is really hard to write: when this object moves, other objects get affected..., so creating a test for a method depends on many other methods... etc.
I really need advice. Could anyone share some experience of how to write a game with me? I only have two weeks left for this assignment. I currently have 45 classes, and I feel so lost because the more I write, the more complex it gets :( !
Best regards,
Chan Nguyen
First, start thinking like a Java programmer. Think of everything in your game as an object, like the board: think about the properties and methods it has, its interfaces, and how it interacts with the other objects.
If you need help getting started, here is a great tutorial that guides you step by step through a simple Java game; this might put you in the right frame of mind to start programming your own. I strongly recommend you follow the tutorial: http://www.cokeandcode.com/asteroidstutorial and use the libraries they used for developing the interface there.
I view game code architects with respect: games are complex systems with emergent properties at runtime and unusually intense interaction requirements (UI, controls), which makes a lot of OOP theory of questionable value. It can be difficult to reuse game code, and a lot of upfront planning work is wasted time.
Most game coders I know, beginning or veteran, succeed with a "just do it" iterative process. e.g.
1) write a minimal prototype. get a very basic system working, using the simplest, most obvious architecture you can think of. (my guy can run around the screen). 5 or 10 objects max.
2) add functionality (points, rules, traps, NPC behaviours, etc.) and playtest, over and over. This hack-upon-hack makes for poorly structured code, but most coders can make it work.
3) rewrite. Programmers grit their teeth at some of the hacking they had to do in (2), and will want to throw it all out and rewrite. Resist this urge until the game is testable (as in, players can sometimes enjoy it, somewhat), or until a new feature would require rewriting. Then rewrite pretty much EVERYTHING from scratch. This goes WAY faster than you'd expect, and results in solid, well-structured code.
Game coders do test, but comprehensive testing of ALL code is rare. Two reasons: emergence and culture. Games have emergent properties at runtime ("yeah, but the points COULD go negative when the NPC is killed when ...."). And since games are usually for entertainment purposes, there is a culture of fast-and-loose testing. Games aren't as important as, say, missile control code.
I expect others with more coding experience will answer this. (I have written a fair bit of code, but I tend toward a quick-and-dirty, script-type coding style; I know lots of coders who are way better than me.)

Any good threads-related job-interview questions?

When interviewing graduates I usually ask them questions about data structures, algorithms and complexity theory. I would really like to ask a question that lets them show their familiarity with multi-threading concepts, without delving into language-specific issues.
Any good questions? The only question I could think of is how to write a Singleton that supports multi-threaded access.
I find the classic "write me a consumer-producer queue" question to be quite good. You can talk about synchronization in a handwavy way beforehand for five minutes or so (e.g. start with "What does Object.wait() do? What other methods on Object is it related to? Can you give me an example of when you might use these? What other concurrency techniques might you use in practice [because really, it's quite rare that actually using the wait/notify primitives is the best approach]?"). Make sure the candidate addresses (or at least makes clear he is aware of) both atomicity ("missed updates") and volatility (visibility of the new value on other threads)
Then after you've had a chat about the theory of these, get them to spend a few minutes actually writing the code for a primitive producer-consumer queue. This should be straightforward to anyone who actually understands what they were talking about above, yet it will weed out those who can "talk the talk" but don't actually understand it in practice (arguably the most dangerous group).
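A minimal version of the queue being asked for, using only the Object.wait()/notifyAll() primitives discussed above (a whiteboard-style sketch; in production Java you would normally reach for java.util.concurrent's ArrayBlockingQueue instead):

```java
// A bounded producer-consumer queue built on wait/notifyAll. Both the
// "while (condition) wait()" loop and notifying after every state change
// are the details candidates are expected to get right.
class BoundedQueue<T> {
    private final Object[] items;
    private int head, size;

    BoundedQueue(int capacity) { items = new Object[capacity]; }

    synchronized void put(T item) throws InterruptedException {
        while (size == items.length) wait();      // block while full
        items[(head + size) % items.length] = item;
        size++;
        notifyAll();                              // wake any waiting consumers
    }

    @SuppressWarnings("unchecked")
    synchronized T take() throws InterruptedException {
        while (size == 0) wait();                 // block while empty
        T item = (T) items[head];
        items[head] = null;                       // drop the reference for the GC
        head = (head + 1) % items.length;
        size--;
        notifyAll();                              // wake any waiting producers
        return item;
    }
}
```

Good follow-up probes: why the wait() must be in a while loop rather than an if (spurious wakeups, and the condition may be false again by the time the thread reacquires the lock), and what changes if you switch notifyAll() to notify().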
What I like about these mini-coding exercises, is that they're often easy to extend. For instance, if the candidate completes the task easily, you can ask how they would extend it for situation XXX - invent requirements that you know will push the limits of the noddy solution you asked for. This not only lets you tailor the depth of questions you're asking but gives some insight into how well the candidate handles clarification of requirements, and modifications of existing design (which is pretty important in this industry).
Here you can find some topics to discuss:
threads implementation (kernel vs user space)
thread-local storage
synchronization primitives
deadlocks, livelocks
differences between mutex and semaphore
use of condition variables
when not to use threads (e.g. IO multiplexing)
Talk with them about a popular but not overly well-known topic where thread handling is essential.
I recommend you build a web server with them, of course only on paper or just in words. The result should look something like this: there is a main thread listening on a socket. When something arrives, it passes the socket to a pool, and then the thread returns to listening on the socket. The pool has a fixed number of slots. The request-processing threads are dedicated to getting jobs from the pool. Find out which is better: the threads checking the pool concurrently, or the listener main thread selecting a free slot/thread for each new incoming request. Try to write a little pseudocode, or draw a diagram, for both sides of the pool handling.
Let's introduce a small application: a page counter, which tells how many page requests have been made since server startup. Don't tell them that the counter must be protected against concurrent modification; let them find out how to do this with mutexes or synchronization or whatever. Maybe you could skip the web server part; the page counter app is easier to specify.
Another example is a chat with 2+ clients and a server: find out how to solve the problem that all the messages should arrive in the same order for all clients. Or a reflex game: the server waits a random 1..5 seconds, then says "peek-a-boo", and the player who presses the space key first wins. Specify it with 2 players, then try to extend it to N players.
Also, be aware of NPPs. NPP stands for "non-programming programmer". There are dudes who can talk about programming issues; they know all the 3/4-letter abbreviations (there are lots in the Java world: EJB, JSP, XSLT, and my favourite, POJO, which means Plain Old Java Object, lol); they understand and modify code, or write similar programs from an existing base, but they fail at even small problems when they have to solve them themselves, e.g. finding the element of an array nearest to a given value. Sometimes it takes months until this turns out. They perform well at interviews, because they prepare for them. Maybe they don't even know that they're NPPs; this is a known effect: http://en.wikipedia.org/wiki/Dunning-Kruger_effect
It's hard to recognize the opposite kind of dude, who has not heard about trendy libraries or patterns but can learn them even during the job interview. (Personal remark: my last interview was in 1999, and it seems I will not interview anymore. I had never heard of dynamic web pages before, but I figured out the term "session" during the interview; the question was how to build a simple hangman web app. I was hired.)