Anyone who has programmed with ActionScript 3.0 has most certainly noticed its lack of support for private constructors and abstract classes. There are ways to work around these flaws, like throwing errors from methods which should be abstract, but these workarounds are annoying and not very elegant. (Throwing an error from a method that should be abstract is a run-time check, not a compile-time one, which can lead to a lot of frustration.)
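For reference, the workaround I'm describing looks roughly like this (a minimal sketch with made-up names):

package {
    public class AbstractShape {
        public function AbstractShape() {
            // A run-time guard is sometimes added here as well, to stop
            // direct instantiation of the "abstract" class:
            if (Object(this).constructor == AbstractShape) {
                throw new Error("AbstractShape is abstract; instantiate a subclass instead");
            }
        }

        // Meant to be abstract: subclasses are supposed to override this,
        // but the compiler cannot enforce it, so failures only show up at run time.
        public function draw():void {
            throw new Error("draw() must be overridden in a subclass");
        }
    }
}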
I know ActionScript 3.0 follows the current ECMAScript standard and this is the reason for its lack of private constructors, but what about abstract classes - are they not in the ECMAScript standard either?
I guess the more specific question is: why does the ECMAScript standard not support private constructors? Is it something we can look forward to in the future?
I have been wondering this for quite some time; any insight will be much appreciated.
Private constructors and abstract classes are not "good OOP elements"; they're hacks that originated in C++. In more dynamic languages they're usually not needed.
Abstract classes in particular are totally unneeded, since you don't have to declare the interface in the ancestor to comply with an interface. In fact, you don't even have to inherit from a common ancestor to use some polymorphism.
I'm not saying that AS is better off without something like that; rather, you should think in the language you're using instead of trying to translate from whatever you're used to.
Neither private constructors nor abstract classes were defined in the old ECMAScript 4 standard on which ActionScript 3 was based. If I remember correctly, the ECMAScript Working Group chose not to implement these more complicated OOP features because there was a certain focus on simplicity and backwards compatibility with older versions of ECMAScript. I interpreted what I heard from them as "we may add these features later, but let's take it slowly". The abstract keyword, for example, is a reserved word, so they have this stuff in mind.
It's worth noting that the working group has chosen to restart the next version of the language with a new focus. This effort is called "Harmony" because two rival sub-groups had very different opinions about where ECMAScript should go in the future. It's a bit of a compromise. Harmony is going to progress much more slowly than the old ES4, and even the class syntax that was already implemented in AS3 will be dropped from the standard initially. In other words, they're going to keep it looking a lot more like today's JavaScript for a while to focus on some other features that are important to the group that was going to branch. This will become ES3.1. Later, classes and some of the more Java-like OOP features will be reconsidered for a new ES4.
What about AS3, though? Basically, Adobe jumped the gun by using a standard that wasn't yet complete, and they got bit by politics. However, Adobe intends to stay involved with the ECMAScript Working Group, and they will likely consider adding features that the working group recommends. That said, AS3 may never be a complete (or fully compatible) implementation of a future ECMAScript. What does that mean? Well, since they're non-standard again, Adobe has the option to add features to ActionScript, even if those features aren't part of a standard. If you feel abstract classes or private constructors are important to you as a developer, request these features in the public Flash Player bug database, or vote for existing feature requests, if they're already present.
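In the meantime, the usual AS3 workaround for a private constructor is a file-internal "key" class, since a class declared after the closing brace of the package block is visible only inside that .as file. A rough sketch (the names are invented):

package {
    public class Config {
        private static var _instance:Config;

        // The constructor must be public, so we demand a token that
        // only code in this file can create.
        public function Config(key:PrivateKey) {
            if (key == null) {
                throw new Error("Use Config.getInstance() instead of new Config()");
            }
        }

        public static function getInstance():Config {
            if (_instance == null) {
                _instance = new Config(new PrivateKey());
            }
            return _instance;
        }
    }
}

// Declared outside the package block, so it is visible only within this .as file.
class PrivateKey {}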
I know this question is really old, but I just stumbled across it and had to add this:
Constructors are not OO. Objects should be instantiated by factories.
Abstract classes are not OO. Instead one should use interfaces (which AS3 has) and mixins (which AS3 doesn't have).
AS3 is a poor language, but for many more fundamental reasons than these two. AS3 is basically a structured representation of AVM2 bytecode; a 1:1 mapping is easily feasible. And I suppose it's likely to stay like that, because Adobe apparently has more interest in driving forward platform features, their commercial authoring tools and the Flex framework (including MXML).
greetz
back2dos
Related
In this article, it says that ActionScript 3.0 conforms to ECMA 4th edition. But instead of looking like JavaScript and having no class or extends, ActionScript 3.0 code looks like Java and has the class statement and even extends. Why is that?
ActionScript 3 was designed while the ECMA 4 spec was still under development. It's divergent; it conforms to ECMA 4 but goes beyond it.
Because the ECMAScript 4th Edition draft that ActionScript 3 is based on had class and extends and so on.
http://en.wikipedia.org/wiki/ECMAScript#ECMAScript.2C_4th_Edition
http://www.ecma-international.org/activities/Languages/Language%20overview.pdf
Later, the 4th Edition draft was replaced by ECMAScript Harmony:
http://en.wikipedia.org/wiki/ECMAScript#History_2
Some would say that the reason for this was business politics, but you would have to form your own informed opinion on that.
ActionScript was compliant with ECMA from the very start.
You could think of JavaScript and ActionScript as forks of a single standard, i.e. ECMA, with JavaScript inclined to add power to browsers while ActionScript is geared towards Flash development.
That seems fair, since at one time every company was in a bid to create its own version - consider, for example, Microsoft's version of ECMAScript.
You might also consider, from the very link you shared, that:
"In response to user demand for a language better equipped for larger and more complex applications, ActionScript 2.0 featured compile-time type checking and class-based syntax, such as the keywords class and extends."
So you can see that most of the changes were really user-driven, rather than a coincidental similarity.
In general, being standard-compliant does not mean that only the features defined by the standard can be available. You can implement a standard to be compliant, but you can also implement additional functionality.
You can treat ECMAScript as a kind of subset that defines the basic language structure, syntax and semantics. ECMAScript is then only a subset of ActionScript; the language adds a wide range of features on top of this subset.
Another example is MySQL: it implements the SQL standard but provides much more functionality than the standard defines.
I often seem to run into the discussion of whether or not to apply some sort of prefix/suffix convention to interface type names, typically adding "I" to the beginning of the name.
Personally I'm in the camp that advocates no prefix, but that's not what this question is about. Rather, it's about one of the arguments I often hear in that discussion:
You can no longer see at a glance whether something is an interface or a class.
The question that immediately pops up in my head is: apart from object creation, why should you ever have to care whether an object reference is a class or an interface?
I've tagged this question as language agnostic, but as has been pointed out it may not be. I contend that it is because while specific language implementation details may be interesting, I'd like to keep this on a conceptual level. In other words, I think that, conceptually, you'd never have to care whether an object reference is typed as a class or an interface but I'm not sure, hence the question.
This is not a discussion about IDEs and what they do or don't do when visualizing the different types; caring about the type of an object is certainly a necessity when browsing through code (packages/sources/whatever form). Nor is it a discussion about the pros and cons of either naming convention. I just can't seem to figure out in what scenario, other than object creation, you actually care about whether you're referencing a concrete type or an interface.
Most of the time, you probably don't care. But here are some instances that I can think of where you would. There are several, and it does vary a little bit by language. Some languages don't mind as much as others.
In the case of inversion of control (where someone PASSES you a parameter) you probably don't care if it's an interface or an object as far as calling its methods etc. But when dealing with types, it definitely can make a difference.
In managed languages such as the .NET languages, an interface can only extend other interfaces, whereas a class can inherit one class but implement many interfaces; the base class also has to come before any interfaces in a class declaration. So you need to know which is which when defining a new class or interface.
In Delphi / VCL, interfaces are reference counted and automatically collected, whereas classes must be explicitly freed, so lifecycle management on the whole is affected, not just the creation.
Interfaces may not be viable sources for class references.
Interfaces can be cast to compatible interfaces, but in many languages, they cannot be cast to compatible classes. Classes can be cast to either.
Interfaces may be passed to parameters of type IID, or IUnknown, whereas classes cannot (without a cast and a supporting interface).
An interface's implementation is unknown. Its input and output are defined, but the implementation which produces the output is abstracted away. In general, one's attitude may be that when working with a class, one may know how the class works, but when working with an interface, no such assumption should be made. In a perfect world it might make no difference, but in reality this most certainly can affect your design.
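To make the declaration and casting points above concrete, here is a rough ActionScript 3 sketch (the type names are invented, and each public definition would normally live in its own .as file):

public interface Readable {
    function read():String;
}

public class Document {
}

// "extends" must name the class and "implements" the interface(s),
// so you have to know which identifier is which when writing this line:
public class Letter extends Document implements Readable {
    public function read():String {
        return "Dear reader...";
    }
}

// Elsewhere, a reference typed as the interface accepts any implementer:
var r:Readable = new Letter();
// Casting back to a class needs an explicit cast, and may fail at run time:
var d:Document = r as Document;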
I agree with you (and thereby do not use an "I" prefix for interfaces). We shouldn't have to care whether it is an abstract class or an interface.
It's worth noting that Java needs a notion of interface solely because it does not support multiple inheritance. Otherwise, the "abstract class" concept would suffice (which may be entirely abstract, or partially abstract, or almost concrete and just one tiny bit abstract, whatever).
Things that a concrete class can have and interfaces can't:
Constructors
Instance fields
Static methods and static fields
So if you use the convention of starting all interface names with 'I', then it indicates to the user of your library that the particular type will not have any of the above-mentioned things.
But personally I feel that this is not reason enough to start all interface names with 'I'. Modern IDEs are powerful enough to indicate whether some type is an interface. Also, it hides the true meaning of an interface name: imagine if the Runnable and List interfaces were named IRunnable and IList respectively.
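As a concrete illustration of that list, an ActionScript 3 interface body (names made up here) may contain only function signatures:

public interface Runnable {
    function run():void;
    function get finished():Boolean;   // getter/setter signatures are allowed, since they are methods

    // None of the following would compile inside an interface:
    //   var count:int;                  // instance field
    //   static function create():void;  // static member
    //   function Runnable();            // constructor
}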
When a class is used, I can make the assumption that I will get objects from a relatively small and almost well-defined range of subclasses. That's because subclassing is - or at least it should be - a decision that isn't made too easily, especially in languages that don't support multiple inheritance. In contrast, interfaces can be implemented by any class, and the implementation can be added later to any class.
So the information is useful, especially when browsing through code and trying to get a feeling for what the code author intended to do - but I think it should be enough if the IDE shows interfaces and classes as distinct icons.
You want to see at a glance which are the "interfaces" and which are the "concrete classes" so that you can focus your attention to the abstractions in the design instead of the details.
Good designs are based on abstractions - if you know and understand them you understand the system without knowing any of the details. So you know you can skip the classes without the I prefix, and focus on the ones that do have it while you are understanding the code, and you also know to avoid building new code around non-interface classes without having to refer to some other design document.
I agree that the I* naming convention is just not appropriate for modern OO languages, but the truth is this question isn't really language-agnostic. There are legitimate cases where you have an interface not for any architectural reason, but because you simply don't have an implementation or don't have access to one. For these cases you can read I* as *Stub or similar, and in these cases it might make sense to have both an IBlah and a Blah class.
These days, though, you rarely come up against this, and in modern OO languages when you say "interface" you actually mean "interface", not just "I don't have the code for this". So there is no need for the I*, and in fact it encourages really bad OO design, as you won't get the natural naming conflicts that would tell you something's gone wrong in your architecture. Say you had a List and an IList... what's the difference? When would you use one over the other? If you wanted to implement IList, would you be constrained (conceptually at least) by what List does? I'll tell you what... if I found both an IBlah and a Blah class in any of my codebases, I would purge one at random and take away that person's commit privileges.
Interfaces don't have fields, hence when you use IDisposable (or whatever), you know you're only declaring what you can do. That seems to me the main point of it.
Distinguishing between interfaces and classes may be useful, anywhere the type is referenced, in the IDE or out, to determine:
Can I make a new implementation of this type?
Can I implement this interface in a language that does not support multiple inheritance of implementation classes (e.g., Java)?
Can there be multiple implementations of this type?
Can I easily mock this interface in an arbitrary mocking framework?
It is worth noting that UML distinguishes between interfaces and implementation classes. In addition, the "I" prefix is used in the examples in "The Unified Modeling Language User Guide" by the three amigos Booch, Jacobson and Rumbaugh. (Incidentally, this also provides an example why IDE syntax coloring alone is not sufficient to distinguish in all contexts.)
You should care, because:
An interface (with a capital "I") enables you or your co-workers to use any implementation which implements the interface. Without one, if in the future you figure out a better way to do something, say a better list-sorting algorithm, you will be stuck having to change ALL of the invoking methods as well.
It helps in understanding code - e.g. you don't need to memorize all 10 implementations of, say, ISortableList; you just care that it sorts a list (or something like that). Your code becomes practically self-documenting here.
To complete the discussion, here is a pseudocode example illustrating the above:
//Pseudocode - define implementations of ISortableList
class SortList1 : ISortableList, SortList2 : ISortableList, SortList3 : ISortableList

//Pseudocode - the interface way
void Populate(ISortableList list, int[] nums)
{
    list.set(nums)
}

//Pseudocode - the "I don't care" way
void Populate2(SortList1 list, int[] nums)
{
    list.set(nums)
}

...

//Pseudocode - create instances
SortList1 list1 = new SortList1();
SortList2 list2 = new SortList2();
SortList3 list3 = new SortList3();

//Invoke Populate() - the interface way
Populate(list1, nums);   //OK, list1 is an ISortableList implementation
Populate(list2, nums);   //OK, list2 is an ISortableList implementation
Populate(list3, nums);   //OK, list3 is an ISortableList implementation

//Invoke Populate2() - the "I don't care" way
Populate2(list1, nums);  //OK, list1 is an instance of SortList1
Populate2(list2, nums);  //Not OK, list2 is not of the required argument type, won't compile
Populate2(list3, nums);  //Same as above
Hope this helps,
Jas.
I am learning ActionScript 3.0. Coming from Java world I can easily relate to strict compilation mode. I think having type safety checks at compilation time makes perfect sense.
This makes me wonder: why does the compiler allow a standard mode where all the type safety checks are deferred to run time? Is compatibility with older ActionScript specifications the sole reason for having standard mode?
Not all code has to strictly adhere to a type at compile time, especially when it works with dynamically created variables and objects. Have a look at the LiveDocs page for some good examples. It is mainly a stylistic thing as far as I have found; it depends on the background you come from in your coding.
I'm not sure whether this'd qualify as an answer, because who really knows exactly other than the Flash team, but my guess is that because AS3 is an implementation of ECMAScript, and thus loosely typed by definition, that's probably the main reason there's an option for standard/loose mode.
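To make the difference concrete, here is a rough sketch (the names are invented; if I remember correctly, standard mode is selected by passing -strict=false to mxmlc):

import flash.display.Sprite;

var count:int = 0;
var label:String = count;    // strict mode: compile-time error (implicit coercion of int to String)
                             // standard mode: compiles; coerced to "0" at run time

var s:Sprite = new Sprite();
s.spin();                    // strict mode: "possibly undefined method" compile error
                             // standard mode: compiles; fails only when this line actually runs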
Can anyone tell me why ActionScript 3, a statically typed language, doesn't have generics? Is it too much work? A historical thing? Is there some way to "fake" it that I haven't picked up yet?
Edit: thanks a lot for the answers! The Vector class is basically what I was looking for, and the other information was helpful too.
The new Vector class is a form of generics that ActionScript 3 now supports when compiled for Flash Player 10. It doesn't support the specification of your own generic classes yet, though.
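A quick sketch of what that looks like (assuming you're compiling for Flash Player 10):

var ints:Vector.<int> = new Vector.<int>();
ints.push(1, 2, 3);
var first:int = ints[0];     // no cast needed, unlike a plain Array

// There is also a literal syntax:
var names:Vector.<String> = new <String>["Alice", "Bob"];

// You cannot declare your own parameterized classes, though.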
I think Adobe will implement the ES4 standard eventually. It would be nice if they had a competitor who could push them quicker in the right direction. I was expecting a little more from the updates to AS3 when they moved to CS4, but I suppose the revolutionary Vector class will have to suffice.
It looks like they spent a lot of time beefing up the libraries for Flex and AIR, so maybe they'll go back to improving the language support later, but it probably isn't a real priority. Remember, Adobe is in it for the money, not for the feel good of making the sweetest possible language.
I believe it's a historical thing. ActionScript is based on ECMAScript (JavaScript is also based on ECMAScript). ECMAScript is a dynamically typed language, meaning that variables don't have their type declared. Generics are more useful in statically typed languages, wherein the type of variable is declared upfront. In a statically typed language, without generics you're stuck casting all the time from the root object (for example, Object in Java). This is not a problem in ECMAScript, because you can put anything you want into any data structure.
So why didn't ActionScript add generics when they added static typing to ECMAScript? I can't be sure of that, but I think the premise of your question is a bit off - there are generic-esque containers, such as Vector. I might think they'd keep the dynamically-typed containers of ECMAScript (objects and arrays) for backwards-compatibility, but they already broke that between AS2 and AS3, so I'm not sure.
Parametric types (the word 'generics' is usually used in ECMAScript for generic methods, rather than the combination of parametric types and runtime polymorphism used in Java) were proposed as part of ES4, but ES4 fractured and much of the type system proposed for ES (including the parts implemented in ActionScript) is not going into the next version. I can't say whether or not Adobe would want to go that way by themselves.
Let's first get proper containers and algorithms in ActionScript, and then worry about generics...
AS3 is not very different from JavaScript, by the way, so your question would kind of apply to JS as well.
Since debate without meaningful terms is meaningless, I figured I would point at the elephant in the room and ask: What exactly makes a language "object-oriented"? I'm not looking for a textbook answer here, but one based on your experiences with OO languages that work well in your domain, whatever it may be.
A related question that might help to answer first is: What is the archetype of object-oriented languages and why?
Definitions for Object-Orientation are of course a huge can of worms, but here are my 2 cents:
To me, Object-Orientation is all about objects that collaborate by sending messages. That is, to me, the single most important trait of an object-oriented language.
If I had to put up an ordered list of all the features that an object-oriented language must have, it would look like this:
Objects sending messages to other objects
Everything is an Object
Late Binding
Subtype Polymorphism
Inheritance or something similarly expressive, like Delegation
Encapsulation
Information Hiding
Abstraction
Obviously, this list is very controversial, since it excludes a great variety of languages that are widely regarded as object-oriented, such as Java, C# and C++, all of which violate points 1, 2 and 3. However, there is no doubt that those languages allow for object-oriented programming (but so does C) and even facilitate it (which C doesn't). So, I have come to call languages that satisfy those requirements "purely object-oriented".
As archetypical object-oriented languages I would name Self and Newspeak.
Both satisfy the above-mentioned requirements. Both are inspired by and successors to Smalltalk, and both actually manage to be "more OO" in some sense. The things that I like about Self and Newspeak are that both take the message sending paradigm to the extreme (Newspeak even more so than Self).
In Newspeak, everything is a message send. There are no instance variables, no fields, no attributes, no constants, no class names. They are all emulated by using getters and setters.
In Self, there are no classes, only objects. This emphasizes what OO is really about: objects, not classes.
According to Booch, the following elements:
Major:
Abstraction
Encapsulation
Modularity
Hierarchy (Inheritance)
Minor:
Typing
Concurrency
Persistence
Basically, Object Orientation really boils down to "message passing".
In a procedural language, I call a function like this:
f(x)
And the name f is probably bound to a particular block of code at compile time. (Unless this is a procedural language with higher-order functions or pointers to functions, but let's ignore that possibility for a second.) So this line of code can only mean one unambiguous thing.
In an object oriented language I pass a message to an object, perhaps like this:
o.m(x)
In this case, m is not the name of a block of code but a "method selector", and which block of code gets called actually depends on the object o in some way. This line of code is more ambiguous or general because it can mean different things in different situations, depending on o.
In the majority of OO languages, the object o has a "class", and the class determines which block of code is called. In a couple of OO languages (most famously, Javascript) o doesn't have a class, but has methods directly attached to it at runtime, or has inherited them from a prototype.
My demarcation is that neither classes nor inheritance are necessary for a language to be OO. But this polymorphic handling of messages is essential.
Although you can fake this with function pointers in, say, C, that's not sufficient for C to be called an OO language, because you're going to have to implement your own infrastructure. You can do that, and an OO style is possible, but the language hasn't given it to you.
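A small sketch of that "same call site, different code" idea, in ActionScript 3 since much of this thread concerns it (the classes are invented for illustration):

class Animal {
    public function speak():String { return "..."; }
}

class Dog extends Animal {
    override public function speak():String { return "Woof"; }
}

class Cat extends Animal {
    override public function speak():String { return "Meow"; }
}

// The same line of code does different things depending on o:
var o:Animal = (Math.random() < 0.5) ? new Dog() : new Cat();
trace(o.speak());    // which block of code runs is only decided at run time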
It's not really the languages that are OO, it's the code.
It is possible to write object-oriented C code (with structs and even function pointer members, if you wish) and I have seen some pretty good examples of it. (Quake 2/3 SDK comes to mind.) It is also definitely possible to write procedural (i.e. non-OO) code in C++.
Given that, I'd say it's the language's support for writing good OO code that makes it an "Object Oriented Language." I would never bother with using function pointer members in structs in C, for example, for what would be ordinary member functions; therefore I will say that C is not an OO language.
(Expanding on this, one could say that Python is not object oriented, either, with the mandatory "self" reference on every step and constructors called __init__, whatnot; but that's a Religious Discussion.)
Smalltalk is usually considered the archetypal OO language, although Simula is often cited as the first OO language.
Current OO languages can be loosely categorized by which language they borrow the most concepts from:
Smalltalk-like: Ruby, Objective-C
Simula-like: C++, Object Pascal, Java, C#
I am happy to share this with you guys, it was quite interesting and helpful to me. This is an extract from a 1994 Rolling Stone interview where Steve (not a programmer) explains OOP in simple terms.
Jeff Goodell: Would you explain, in simple terms, exactly what object-oriented software is?
Steve Jobs: Objects are like people. They’re living, breathing things that have knowledge inside them about how to do things and have memory inside them so they can remember things. And rather than interacting with them at a very low level, you interact with them at a very high level of abstraction, like we’re doing right here.
Here’s an example: If I’m your laundry object, you can give me your dirty clothes and send me a message that says, “Can you get my clothes laundered, please.” I happen to know where the best laundry place in San Francisco is. And I speak English, and I have dollars in my pockets. So I go out and hail a taxicab and tell the driver to take me to this place in San Francisco. I go get your clothes laundered, I jump back in the cab, I get back here. I give you your clean clothes and say, “Here are your clean clothes.”
You have no idea how I did that. You have no knowledge of the laundry place. Maybe you speak French, and you can’t even hail a taxi. You can’t pay for one, you don’t have dollars in your pocket. Yet, I knew how to do all of that. And you didn’t have to know any of it. All that complexity was hidden inside of me, and we were able to interact at a very high level of abstraction. That’s what objects are. They encapsulate complexity, and the interfaces to that complexity are high level.
As far as I can tell, the main view of what makes a language "Object Oriented" is supporting the idea of grouping data, and methods that work on that data, which is generally achieved through classes, modules, inheritance, polymorphism, etc.
See this discussion for an overview of what people think (thought?) Object-Orientation means.
As for the "archetypal" OO language - that is indeed Smalltalk, as Kristopher pointed out.
Supports classes, methods, attributes, encapsulation, data hiding, inheritance, polymorphism, abstraction...?
Disregarding the theoretical implications, it seems to be
"Any language that has a keyword called 'class'" :-P
To further what aib said, I would say that a language isn't really object oriented unless the standard libraries that are available are object oriented. The biggest example of this is PHP. Although it supports all the standard object oriented concepts, the fact that such a large percentage of the standard libraries aren't object oriented means that it's almost impossible to write your code in an object oriented way.
It doesn't matter that they are introducing namespaces if all the standard libraries still require you to prefix all your function calls with stuff like mysql_ and pgsql_, when in a language that supported namespaces in the actual API, you could get rid of functions with mysql_ and have just a simple "include system.db.mysql.*" at the top of your file so that it would know where those things came from.
When you can make classes, it is object-oriented.
For example: Java is object-oriented, JavaScript is not, and C++ looks like some kind of "object-curious" language.
In my experience, languages are not object-oriented, code is.
A few years ago I was writing a suite of programs in AppleScript, which doesn't really enforce any object-oriented features, when I started to grok OO. It's clumsy to write Objects in AppleScript, although it is possible to create classes, constructors, and so forth if you take the time to figure out how.
The language was the correct language for the domain: getting different programs on the Macintosh to work together to accomplish some automatic tasks based on input files. Taking the trouble to self-enforce an object-oriented style was the correct programming choice because it resulted in code that was easier to trouble-shoot, test, and understand.
The feature that I noticed the most in changing that code over from procedural to OO was encapsulation: both of properties and method calls.
Simples: (compare insurance character)
1-Polymorphism
2-Inheritance
3-Encapsulation
4-Re-use.
:)
Object: An object is a repository of data. For example, if MyList is a ShoppingList object, MyList might record your shopping list.
Class: A class is a type of object. Many objects of the same class might exist; for instance, MyList and YourList may both be ShoppingList objects.
Method: A procedure or function that operates on an object or a class. A method is associated with a particular class. For instance, addItem might be a method that adds an item to any ShoppingList object. Sometimes a method is associated with a family of classes. For instance, addItem might operate on any List, of which a ShoppingList is just one type.
Inheritance: A class may inherit properties from a more general class. For example, the ShoppingList class inherits from the List class the property of storing a sequence of items.
Polymorphism: The ability to have one method call work on several different classes of objects, even if those classes need different implementations of the method call. For example, one line of code might be able to call the "addItem" method on every kind of List, even though adding an item to a ShoppingList is completely different from adding an item to a ShoppingCart.
Object-Oriented: Each object knows its own class and which methods manipulate objects in that class. Each ShoppingList and each ShoppingCart knows which implementation of addItem applies to it.
In this list, the one thing that truly distinguishes object-oriented languages from procedural languages (C, Fortran, Basic, Pascal) is polymorphism.
Source: https://www.youtube.com/watch?v=mFPmKGIrQs4&list=PL-XXv-cvA_iAlnI-BQr9hjqADPBtujFJd
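To tie those definitions together, here is a small ActionScript 3 sketch using the same names (illustrative only, not taken from the linked video):

class List {
    protected var items:Array = [];

    public function addItem(item:*):void {
        items.push(item);
    }
}

class ShoppingList extends List {
    override public function addItem(item:*):void {
        trace("Noting", item, "on the shopping list");
        super.addItem(item);
    }
}

class ShoppingCart extends List {
    override public function addItem(item:*):void {
        trace("Reserving stock for", item);
        super.addItem(item);
    }
}

// Polymorphism: one line of code calls addItem on every kind of List,
// and each object knows which implementation applies to it.
var lists:Array = [new ShoppingList(), new ShoppingCart()];
for each (var l:List in lists) {
    l.addItem("milk");
}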
If a language is designed with facilities specifically to support object-oriented programming (the four features), then it is an object-oriented programming language.
You can program in an object-oriented style in more or less any language. It's the code that is object-oriented, not the language.
Examples of real object-oriented languages are Java, C#, Python, Ruby and C++.
Also, it's possible for languages such as PHP, Perl, etc. to provide object-oriented features through extensions.
You can write object-oriented code in C, but C is not an object-oriented programming language. It is not designed for that (that was the whole point of C++).
Archetype
The ability to express real-world scenarios in code.
foreach(House house in location.Houses)
{
    foreach(Deliverable mail in new Mailbag(new Deliverable[]
    {
        GetLetters(),
        GetPackages(),
        GetAdvertisingJunk()
    }))
    {
        if(mail.AddressedTo(house))
        {
            house.Deliver(mail);
        }
    }
}
-
foreach(Deliverable myMail in GetMail())
{
    IReadable readable = myMail as IReadable;
    if (readable != null)
    {
        Console.WriteLine(readable.Text);
    }
}
Why?
To help us understand the problem more easily. It makes better sense in our heads, and if implemented correctly it makes the code more efficient and reusable, and reduces repetition.
To achieve this you need:
Pointers/References to ensure that this == this and this != that.
Classes to point to (e.g. Arm) that store data (int hairyness) and operations (Throw(IThrowable))
Polymorphism (Inheritance and/or Interfaces) to treat specific objects in a generic fashion so you can read books as well as graffiti on the wall (both implement IReadable)
Encapsulation because an apple doesn't expose an Atoms[] property