I am quite new to programming, and what haunts me about it is not really the coding itself (well, at least not so far!) but certain words/concepts that are really important to understand. My trouble is with the word "ABSTRACTION". I have already searched dictionaries and watched videos of people giving very clear explanations of the word. So I know that abstraction is when you take into consideration only the things that are important and leave out everything else (to put it in very simple and direct language). For instance, if you are going to change a light bulb, you do not need to know the manufacturer of the light bulb or the light socket, and you do not need to know the materials used to manufacture them.
The problem is when you read texts or listen to people using the word and it does not seem to fit that meaning. Then you start to wonder whether they misused the word (which I think is very unlikely), whether there is another, more obscure meaning that I have not found yet, or whether I am just too dumb to understand it. Below I put excerpts from the articles I was reading, with the part where the word appears bolded and capitalized, so you have the context and can see where my problem is. Thank you.
"A paradigm programming provides and determines the view that the programmer has on the structuring and execution of the programme. For example, in object-oriented programming, programmers MAY ABSTRACT A PROGRAMME AS A COLLECTION OF OBJECTS that interact with each other, while in functional programming, programmers ABSTRACT THE PROGRAMME as a sequence of functions executed in a stacked fashion."
"A tuple space has the function of creating a SHARED MEMORY ABSTRACTION over a distributed system, where everyone can read and write to it."
It's easy to understand if you replace abstract/abstraction with one of its synonyms: conceptualize/conceptualization. In your first two examples, "abstract a programme" means "think of a programme as..." or "conceptualize a programme as...". When we make an abstraction, we forget about some details and think about the thing in other terms.
Side advice from a fellow beginner:
As someone who started learning computer science independently less than a year ago, I can tell you right now that there will be lots of tricky terms like this. Try not to get too caught up in them. Often, if you just keep learning, you'll experience firsthand what these terms mean without even realizing it. Bits and pieces will add up. The takeaway from this being: don't let what you don't know slow you down. Sometimes it's OK to keep going and just not know for a while.
These seem to fit the definition you put up earlier. For object oriented programming, the mindset is to consider "objects" as the essential (important) aspect of a program and abstract all other considerations away. Same thing for functional programming where "functions" are the defining aspect abstracting other considerations as secondary.
The tuple space may be a little trickier but if you consider that variations in memory storage models are abstracted away in favour of a higher level concept focusing on a collection of values, then you see what the abstraction relates to.
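To make that concrete, here is a minimal sketch of the kind of interface a tuple space presents (the names and the naive in-memory implementation are invented for illustration; a real tuple space would distribute the storage across machines and block until a match arrives):

    import java.util.ArrayList;
    import java.util.List;

    // The abstraction: callers just read and write tuples through one interface,
    // as if there were a single shared memory, no matter where the data lives.
    interface TupleSpace {
        void write(List<Object> tuple);           // publish a tuple
        List<Object> take(List<Object> pattern);  // remove and return a match (null matches anything)
    }

    class InMemoryTupleSpace implements TupleSpace {
        private final List<List<Object>> tuples = new ArrayList<>();

        public synchronized void write(List<Object> tuple) {
            tuples.add(new ArrayList<>(tuple));
        }

        public synchronized List<Object> take(List<Object> pattern) {
            for (List<Object> t : tuples) {
                if (matches(t, pattern)) {
                    tuples.remove(t);  // safe: we return immediately after removing
                    return t;
                }
            }
            return null;  // a real tuple space would block here until a match appears
        }

        private boolean matches(List<Object> t, List<Object> p) {
            if (t.size() != p.size()) return false;
            for (int i = 0; i < p.size(); i++) {
                if (p.get(i) != null && !p.get(i).equals(t.get(i))) return false;
            }
            return true;
        }
    }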
Abstract
adjective
existing in thought or as an idea but not having a physical or concrete existence.
relating to or denoting art that does not attempt to represent external reality, but rather seeks to achieve its effect using shapes, colours, and textures.
verb
consider something theoretically or separately from (something else).
extract or remove (something).
noun
a summary of the contents of a book, article, or speech.
an abstract work of art.
There you have your answer. Ask 100 people what an abstract painting is and you will get at least 100 answers. Why should programmers behave differently?
Let's see what Oracle has to say about abstract classes:
Abstract classes are similar to interfaces. You cannot instantiate them, and they may contain a mix of methods declared with or without an implementation. However, with abstract classes, you can declare fields that are not static and final, and define public, protected, and private concrete methods.
Consider using abstract classes if any of these statements apply to your situation:
You want to share code among several closely related classes.
You expect that classes that extend your abstract class have many common methods or fields, or require access modifiers other than public (such as protected and private).
You want to declare non-static or non-final fields. This enables you to define methods that can access and modify the state of the object to which they belong.
Compare that with the definition of abstract in the above section. I think you get a pretty good idea of abstractness in computer programming.
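To tie the two together, here is a small made-up Java example in the spirit of the Oracle text: the abstract class carries the shared state and concrete helpers, leaves the rest unspecified, and cannot itself be instantiated.

    // Cannot write "new Shape()" - the class exists only as a concept to build on.
    abstract class Shape {
        protected double x, y;                   // non-static, non-final fields are allowed

        void moveTo(double newX, double newY) {  // concrete method shared by all subclasses
            x = newX;
            y = newY;
        }

        abstract double area();                  // declared without an implementation
    }

    class Circle extends Shape {
        private final double radius;

        Circle(double radius) { this.radius = radius; }

        double area() { return Math.PI * radius * radius; }  // "new Circle(2.0)" works fine
    }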
What are the pros and cons of using a singleton class for sound management?
I am concerned that it's very anti-OOP in structure and don't want to potentially get caught up going down the wrong path, but what is a better alternative?
It's an awkward topic, but I'd say that a sound manager style class is a good candidate for a class that should not be instantiated.
Similarly, I would find it OK if a keyboard input manager style class was a completely static class.
Some notes for my reasoning:
You would not expect more than one instance that deals with all sounds.
It's easily accessible, which is a good thing in this case because sound seems more like an application-level utility than something that should only be accessed by certain objects. Making the Player class of a game static, for example, would be a very poor design choice, because almost no other classes in a game need to reference the Player directly.
In the context of a game, for example, imagine the number of classes that would need a reference to an instance of a sound manager: enemies, effects, items, the UI, the environment. What a nightmare - having a static sound manager class eliminates this requirement.
There aren't many cases that I can think of where it makes no sense at all to have access to sounds. A sound can be triggered relevantly by almost anything - the move of the mouse, an explosion effect, the loading of a dialogue, etc. Static classes are bad when they have almost no relevance or use to the majority of the other classes in your application - sound does.
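For concreteness, a minimal sketch of what I mean (the class and method names are invented for the example):

    // A purely static sound manager: no instances, accessible from anywhere.
    final class SoundManager {
        private SoundManager() {}   // prevent instantiation

        private static float volume = 1.0f;

        static void setVolume(float v) { volume = v; }

        static void play(String soundName) {
            // real playback (e.g. via javax.sound.sampled) would go here
            System.out.println("playing " + soundName + " at volume " + volume);
        }
    }

    // Any enemy, effect, menu or item can now just call:
    //     SoundManager.play("explosion");
    // without being handed a reference first.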
Anyways, that's my point of view, offered to offset the likely opposing answers that will appear here.
They are bad for the same reasons globals are bad; some useful reading:
http://blogs.msdn.com/b/scottdensmore/archive/2004/05/25/140827.aspx
A better alternative is to make it a member of your application class and pass references to it only to the modules that need to deal with sound.
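Roughly like this, as a sketch (the names are made up; the point is that the application owns the single instance and hands it out explicitly):

    class SoundSystem {
        void play(String soundName) { /* playback would go here */ }
    }

    class Enemy {
        private final SoundSystem sounds;   // received from outside, not reached for globally

        Enemy(SoundSystem sounds) { this.sounds = sounds; }

        void die() { sounds.play("enemy_death"); }
    }

    class Application {
        public static void main(String[] args) {
            SoundSystem sounds = new SoundSystem();  // one instance, owned by the application
            Enemy enemy = new Enemy(sounds);         // passed only to modules that need sound
            enemy.die();
        }
    }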
"Managers" are usually classes that are very complex in nature, and thus likely violate the Single Responsibility Principle. To paraphrase Uncle Bob Martin: Any time you feel yourself tempted to call a class "Manager" of something - that's a code smell.
In your case, you are dealing with at least three different responsibilities:
Loading and storing the sounds.
Playing the sounds when needed.
Controlling output parameters, such as volume and panning.
Of these, two might be implemented as singletons, but you should always be very careful with this pattern: in itself it violates the SRP, and used the wrong way it causes your code to become tightly coupled (instead, you should use the Dependency Injection pattern, possibly by means of a framework such as SwiftSuspenders, but not necessarily):
Loading and storing sounds is essentially a strictly data related task, and should thus be handled by the SoundModel, of which you only need one instance per application.
Controlling output parameters is something that you probably want to handle in a central place, to be able to change global volume settings, etc. This may be implemented as a singleton, but more likely is a tree-like structure, where a master SoundController handles global settings, and several child SoundControllers are responsible for more specific contexts, such as UI sound effects, game sounds, music, etc.
Playing the sound is something that will occur in many places and in many different ways: there may be loops (to which you need to keep references in order to stop them later), effect sounds (which are usually short and play only once), or music (where each song is usually played once, but subsequent songs need to be started automatically when the end is reached). For each of those variations (and whichever ones you come up with), you should create a different class implementing a common SoundPlayer interface, e.g. LoopSoundPlayerImpl, SequentialSoundPlayerImpl, EFXSoundPlayerImpl, etc. The interface should be as simple as play(), pause(), rewind() - you can easily exchange the implementations later and will not have any problems with tightly coupled libraries.
Each SoundPlayer can hold a reference to both the master SoundController and its content-specific one, as well as - possibly - the SoundModel. These can then be static references: since they are all parts of your own sound plugin, they will usually be deployed as a package, and therefore tight coupling won't do much damage here. It is important, though, not to cross the boundary of the plugin: instantiate everything within your Main partition, and pass on the instances to all classes that need them; only have the SoundPlayer interface show up within your game logic, etc.
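As a rough sketch of that decomposition (the interface and class names follow the ones above; the bodies are placeholders):

    // The only type the game logic ever sees.
    interface SoundPlayer {
        void play();
        void pause();
        void rewind();
    }

    // One variation: a looping sound that must be stoppable later.
    class LoopSoundPlayerImpl implements SoundPlayer {
        private boolean playing;

        public void play()   { playing = true;  /* start the loop, keep a handle to it */ }
        public void pause()  { playing = false; /* suspend playback */ }
        public void rewind() { /* jump back to the start of the loop */ }
    }

    // Another variation: plays songs one after another.
    class SequentialSoundPlayerImpl implements SoundPlayer {
        public void play()   { /* start the current song; on end, advance to the next */ }
        public void pause()  { /* suspend playback */ }
        public void rewind() { /* restart the current song */ }
    }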
I'm writing a very simple Java game. Let me describe it briefly:
There are 4 players on a map.
The map is a two-dimensional matrix, where each cell has a value called "height".
The height difference between 2 nodes is the cost of that edge.
Dijkstra's algorithm is used to help a player navigate from a source to a destination.
The four players take turns making a move. There are 8 possible moves (top-left, top, top-right, ...).
If they meet, they fight for gold; otherwise they move toward their targets.
As they move, their strength decreases by the height difference between the two nodes.
... etc
....
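To give a concrete picture, here is a stripped-down sketch of the kind of pathfinding I mean (illustrative only - the tiny hard-coded map is made up for this post, it is not my actual code):

    import java.util.Arrays;
    import java.util.PriorityQueue;

    // Dijkstra over a height grid: the cost of a step is the height difference.
    class PathFinder {
        static final int[][] HEIGHT = { { 1, 3, 5 }, { 2, 8, 4 }, { 6, 2, 1 } };

        public static void main(String[] args) {
            int rows = HEIGHT.length, cols = HEIGHT[0].length;
            int[] dist = new int[rows * cols];    // best known cost to each cell
            Arrays.fill(dist, Integer.MAX_VALUE);
            dist[0] = 0;                          // source: top-left corner

            // entries are {cost, cellIndex}, cheapest first
            PriorityQueue<int[]> queue = new PriorityQueue<>((a, b) -> a[0] - b[0]);
            queue.add(new int[] { 0, 0 });

            while (!queue.isEmpty()) {
                int[] current = queue.poll();
                int cost = current[0], cell = current[1];
                if (cost > dist[cell]) continue;  // stale queue entry, skip
                int r = cell / cols, c = cell % cols;
                // 4 of the 8 moves, for brevity; the real game would list all 8
                for (int[] d : new int[][] { { 0, 1 }, { 0, -1 }, { 1, 0 }, { -1, 0 } }) {
                    int nr = r + d[0], nc = c + d[1];
                    if (nr < 0 || nr >= rows || nc < 0 || nc >= cols) continue;
                    int step = Math.abs(HEIGHT[nr][nc] - HEIGHT[r][c]);
                    int next = nr * cols + nc;
                    if (cost + step < dist[next]) {
                        dist[next] = cost + step;
                        queue.add(new int[] { cost + step, next });
                    }
                }
            }
            System.out.println("cheapest cost to bottom-right: " + dist[rows * cols - 1]);
        }
    }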
The problem I'm encountering is that the source code is getting longer and more complex day by day. I think I'm using the wrong approach somehow; I feel so tired from constantly changing the implementation. Here is my approach:
Write out all the requirements.
Create all the objects that I need, with all their getters and setters.
Create a static class to test the logic.
Create unit tests while putting the logic together.
Add some more code, then change the code to fit the tests.
Write a big method that runs, then break it down into smaller methods, then write unit tests again.
If everything works fine, add more requirements and more code.
Then things get complicated, because the more code I add, the more the complexity increases. I no longer have time to write unit tests, because creating a test case now requires too much work.
Re-design, change the implementation, and go back to step 1.
I come from a C++ background, and I'm only comfortable writing 'static' libraries such as stacks, queues, linked lists, trees... A game is a real challenge for me, especially since I have to use Java. I understand that the core of programming is the same, so picking up Java was not really that bad; however, I spend no small amount of time looking things up in Java's API. Furthermore, the game logic is really hard to write: when this object moves, other objects are affected..., so creating a test for one method depends on many other methods... etc.
I really need advice. Could anyone share with me some experience of how to write a game? I only have two weeks left for this assignment. I currently have 45 classes, and I feel so lost because the more I write, the more complex it gets :( !
Best regards,
Chan Nguyen
First, start thinking like a Java programmer. Think of everything in your game as an object - the board, for example - and think about the properties and methods it has, its interfaces, and how it interacts with the other objects.
If you need help getting started, here is a great tutorial that guides you step by step through a simple Java game; it might put you in the right frame of mind to start programming your own. I strongly recommend you follow the tutorial: http://www.cokeandcode.com/asteroidstutorial - and use the libraries they used for developing the interface there.
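To make that mindset concrete, a tiny made-up example in the spirit of your game (names invented to match your description):

    // The board knows the terrain; a player knows where it stands.
    class Board {
        private final int[][] height;

        Board(int[][] height) { this.height = height; }

        int heightAt(int row, int col) { return height[row][col]; }
    }

    class Player {
        private int row, col;
        private int strength = 100;

        // Moving is an interaction between two objects: the player asks the
        // board for the terrain, and pays the height difference in strength.
        void moveTo(int newRow, int newCol, Board board) {
            strength -= Math.abs(board.heightAt(newRow, newCol) - board.heightAt(row, col));
            row = newRow;
            col = newCol;
        }
    }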
I view game code architects with respect: games are complex systems with emergent properties at runtime and unusually intense interaction requirements (UI, controls), which makes a lot of OOP theory of questionable value. It can be difficult to reuse game code, and a lot of up-front planning work is wasted time.
Most game coders I know, beginning or veteran, succeed with a "just do it" iterative process. e.g.
1) Write a minimal prototype. Get a very basic system working, using the simplest, most obvious architecture you can think of ("my guy can run around the screen"). 5 or 10 objects max.
2) Add functionality (points, rules, traps, NPC behaviours, etc.) and playtest, over and over. This hack-upon-hack makes for poorly structured code, but most coders can make it work.
3) Rewrite. Programmers grit their teeth at some of the hacking they had to do in (2) and will want to throw it all out and rewrite. Resist this urge until the game is testable (as in, players can sometimes enjoy it, somewhat) or a new feature would require rewriting. Then rewrite pretty much EVERYTHING from scratch. This goes WAY faster than you'd expect, and results in solid, well-structured code.
Game coders do test, but comprehensive testing of ALL code is rare, for two reasons: emergence and culture. Games have emergent properties at runtime ("yeah, but the points COULD go negative when the NPC is killed when...."). And since games are usually for entertainment purposes, there is a culture of fast-and-loose testing. Games aren't as important as, say, missile control code.
I expect others with more coding experience will answer this. (I have written a fair bit of code, but I tend toward a quick-and-dirty, script-type coding style - I know lots of coders who are way better than me.)
As we program, we all develop practices and patterns that we use and rely on. However, over time, as our understanding, maturity, and even technology usage changes, we come to realize that some practices that we once thought were great are not (or no longer apply).
An example of a practice I once used quite often, but have in recent years changed, is the use of the Singleton object pattern.
Through my own experience and long debates with colleagues, I've come to realize that singletons are not always desirable - they can make testing more difficult (by inhibiting techniques like mocking) and can create undesirable coupling between parts of a system. Instead, I now use object factories (typically with an IoC container) that hide the nature and existence of singletons from the parts of the system that don't care or need to know; those parts rely on a factory (or service locator) to acquire access to such objects.
My questions to the community, in the spirit of self-improvement, are:
What programming patterns or practices have you reconsidered recently, and now try to avoid?
What did you decide to replace them with?
//Coming out of university, we were taught to ensure we always had an abundance
//of comments around our code. But applying that in the real world made it
//clear that over-commenting not only has the potential to confuse/complicate
//things, but can also make the code hard to follow. Now I spend more time on
//improving the simplicity and readability of the code and inserting fewer yet
//more relevant comments, instead of spending that time writing overly
//descriptive commentaries all throughout the code.
Single return points.
I once preferred a single return point for each method, because with that I could ensure that any cleanup needed by the routine was not overlooked.
Since then, I've moved to much smaller routines - so the likelihood of overlooking cleanup is reduced, and in fact the need for cleanup is reduced - and I find that early returns reduce the apparent complexity (the nesting level) of the code. The artifacts of the single return point - keeping "result" variables around, keeping flag variables, conditional clauses for not-already-done situations - make the code appear much more complex than it actually is, and harder to read and maintain. Early exits and smaller methods are the way to go.
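A trivial, made-up illustration of the difference:

    class Describe {
        // Single return point: a "result" variable and nesting just to funnel
        // everything to one exit.
        static String describeSingleReturn(int n) {
            String result;
            if (n < 0) {
                result = "negative";
            } else {
                if (n == 0) {
                    result = "zero";
                } else {
                    result = "positive";
                }
            }
            return result;
        }

        // Early returns: each case exits as soon as the answer is known.
        static String describeEarlyReturn(int n) {
            if (n < 0) return "negative";
            if (n == 0) return "zero";
            return "positive";
        }
    }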
Trying to code things perfectly on the first try.
Trying to create perfect OO model before coding.
Designing everything for flexibility and future improvements.
In one word overengineering.
Hungarian notation (both Forms and Systems).
I used to prefix everything. strSomeString or txtFoo.
Now I use someString and textBoxFoo. It's far more readable and easier for someone new to come along and pick up. As an added bonus, it's trivial to keep it consistent - camelCase the control name and append a useful/descriptive suffix. Forms Hungarian has the drawback of not always being consistent, and Systems Hungarian doesn't really gain you much. Chunking all your variables together isn't really that useful - especially with modern IDEs.
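In code, the shift looks like this (the names are the ones from above, wrapped in a throwaway class):

    class NamingExample {
        void before() {
            String strSomeString = "ada";   // Systems Hungarian: the prefix repeats the type
            // txtFoo would be the Forms Hungarian name for a text box control
        }

        void after() {
            String someString = "ada";      // plain camelCase: the IDE shows the type anyway
            // the control becomes textBoxFoo - descriptive, and consistently camelCased
        }
    }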
The "perfect" architecture
I came up with THE architecture a couple of years ago. Pushed myself technically as far as I could so there were 100% loosely coupled layers, extensive use of delegates, and lightweight objects. It was technical heaven.
And it was crap. The technical purity of the architecture just slowed my dev team down aiming for perfection over results and I almost achieved complete failure.
We now have much simpler less technically perfect architecture and our delivery rate has skyrocketed.
The use of caffeine. It once kept me awake and in a glorious programming mood, where the code flew from my fingers with feverish fluidity. Now it does nothing, and if I don't have it I get a headache.
Commenting out code. I used to think that code was precious and that you couldn't just delete those beautiful gems you had crafted. I now delete any commented-out code I come across unless there's a TODO or NOTE attached, because it's too perilous to leave it in. For example, I've come across old classes with huge commented-out portions, and it really confused me why they were there: were they recently commented out? Is this a dev-environment change? Why does it do this unrelated block?
Seriously consider not commenting out code and just deleting it instead. If you need it, it's still in source control. YAGNI though.
The overuse / abuse of #region directives. It's just a little thing, but in C#, I previously would use #region directives all over the place, to organize my classes. For example, I'd group all class properties together in a region.
Now I look back at old code and mostly just get annoyed by them. I don't think it really makes things clearer most of the time, and sometimes they just plain slow you down.
So I have now changed my mind and feel that well laid out classes are mostly cleaner without region directives.
Waterfall development in general, and in specific, the practice of writing complete and comprehensive functional and design specifications that are somehow expected to be canonical and then expecting an implementation of those to be correct and acceptable. I've seen it replaced with Scrum, and good riddance to it, I say. The simple fact is that the changing nature of customer needs and desires makes any fixed specification effectively useless; the only way to really properly approach the problem is with an iterative approach. Not that Scrum is a silver bullet, of course; I've seen it misused and abused many, many times. But it beats waterfall.
Never crashing.
It seems like such a good idea, doesn't it? Users don't like programs that crash, so let's write programs that don't crash, and users should like the program, right? That's how I started out.
Nowadays, I'm more inclined to think that if it doesn't work, it shouldn't pretend it's working. Fail as soon as you can, with a good error message. If you don't, your program is going to crash even harder just a few instructions later, but with some nondescript null-pointer error that'll take you an hour to debug.
My favorite "don't crash" pattern is this:
public User readUserFromDb(int id) {
    User u = null;
    try {
        // string-concatenated SQL and a catch-all: everything about this
        // method quietly absorbs failure
        Statement stmt = connection.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT * FROM user WHERE id = " + id);
        if (rs.next()) {
            u = new User();
            u.setFirstName(rs.getString("fname"));
            u.setSurname(rs.getString("sname"));
            // etc
        }
    } catch (Exception e) {
        log.info(e);  // swallow the problem and carry on
    }
    if (u == null) {
        // no row or an error: fake up a User rather than fail
        u = new User();
        u.setFirstName("error communicating with database");
        u.setSurname("error communicating with database");
        // etc
    }
    u.setId(id);
    return u;
}
Now, instead of asking your users to copy/paste the error message and sending it to you, you'll have to dive into the logs trying to find the log entry. (And since they entered an invalid user ID, there'll be no log entry.)
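For contrast, a fail-fast sketch of the same method (reusing the connection field and User class from above). The checked exception propagates, and a missing user fails immediately with a message users can actually copy/paste:

    public User readUserFromDb(int id) throws SQLException {
        PreparedStatement stmt =
                connection.prepareStatement("SELECT * FROM user WHERE id = ?");
        stmt.setInt(1, id);
        ResultSet rs = stmt.executeQuery();
        if (!rs.next()) {
            // fail as soon as we can, with a good error message
            throw new IllegalArgumentException("no user with id " + id);
        }
        User u = new User();
        u.setId(id);
        u.setFirstName(rs.getString("fname"));
        u.setSurname(rs.getString("sname"));
        // etc
        return u;
    }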
I thought it made sense to apply design patterns whenever I recognised them.
Little did I know that I was actually copying styles from foreign programming languages, while the language I was working with allowed for far more elegant or easier solutions.
Using multiple (very) different languages opened my eyes and made me realise that I don't have to mis-apply other people's solutions to problems that aren't mine. Now I shudder when I see the factory pattern applied in a language like Ruby.
Obsessive testing. I used to be a rabid proponent of test-first development. For some projects it makes a lot of sense, but I've come to realize that it is not only unfeasible, but rather detrimental to many projects to slavishly adhere to a doctrine of writing unit tests for every single piece of functionality.
Really, slavishly adhering to anything can be detrimental.
This is a small thing, but: Caring about where the braces go (on the same line or next line?), suggested maximum line lengths of code, naming conventions for variables, and other elements of style. I've found that everyone seems to care more about this than I do, so I just go with the flow of whoever I'm working with nowadays.
Edit: The exception to this being, of course, when I'm the one who cares the most (or is the one in a position to set the style for a group). In that case, I do what I want!
(Note that this is not the same as having no consistent style. I think a consistent style in a codebase is very important for readability.)
Perhaps the most important "programming practice" I have since changed my mind about, is the idea that my code is better than everyone else's. This is common for programmers (especially newbies).
Utility libraries. I used to carry around an assembly with a variety of helper methods and classes with the theory that I could use them somewhere else someday.
In reality, I just created a huge namespace with a lot of poorly organized bits of functionality.
Now, I just leave them in the project I created them in. In all probability I'm not going to need it, and if I do, I can always refactor them into something reusable later. Sometimes I will flag them with a //TODO for possible extraction into a common assembly.
Designing more than I coded.
After a while, it turns into analysis paralysis.
The use of a DataSet to perform business logic. This binds the code too tightly to the database, and the DataSet is usually created from SQL, which makes things even more fragile. If the SQL or the database changes, the change tends to trickle down to everything the DataSet touches.
Performing any business logic inside an object constructor. Combined with inheritance and overloaded constructors, this tends to make maintenance difficult.
Abbreviating variable/method/table/... Names
I used to do this all the time, even when working in languages with no enforced limits on the length of names (well, the limit was probably 255 or something). One of the side effects was a lot of comments littered throughout the code explaining the (non-standard) abbreviations. And of course, if the names were changed for any reason...
Now I much prefer to call things what they really are, with good, descriptive names, using only standard abbreviations. There's no need for useless comments, and the code is far more readable and understandable.
Wrapping existing Data Access components, like the Enterprise Library, with a custom layer of helper methods.
It doesn't make anybody's life easier.
It's more code that can have bugs in it.
A lot of people know how to use the EntLib data access components; no one but the local team knows how to use the in-house data access solution.
I first heard about object-oriented programming while reading about Smalltalk in 1984, but I didn't have access to an o-o language until I used the cfront C++ compiler in 1992. I finally got to use Smalltalk in 1995. I had eagerly anticipated o-o technology, and bought into the idea that it would save software development.
Now, I just see o-o as one technique that has some advantages, but it's just one tool in the toolbox. I do most of my work in Python, and I often write standalone functions that are not class members, and I often collect groups of data in tuples or lists where in the past I would have created a class. I still create classes when the data structure is complicated, or I need behavior associated with the data, but I tend to resist it.
I'm actually interested in doing some work in Clojure when I get the time, which doesn't provide o-o facilities, although it can use Java objects if I understand correctly. I'm not ready to say anything like o-o is dead, but personally I'm not the fan I used to be.
In C#, using _notation for private members. I now think it's ugly.
I then changed to this.notation for private members, but found I was inconsistent in using it, so I dropped that too.
I stopped going by the university-recommended method of design before implementation. Working in a chaotic and complex system has forced me to change my attitude.
Of course I still do code research, especially when I'm about to touch code I've never touched before, but normally I try to focus on as small an implementation as possible to get something going first. That is the primary goal. Then I gradually refine the logic and let the design appear by itself. Programming is an iterative process and works very well with an agile approach and lots of refactoring.
The code will not look at all what you first thought it would look like. Happens every time :)
I used to be big into design-by-contract. This meant putting a lot of error checking at the beginning of all my functions. Contracts are still important, from the perspective of separation of concerns, but rather than try to enforce what my code shouldn't do, I try to use unit tests to verify what it does do.
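A small, made-up illustration of the shift, in JUnit 4 style:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class PriceTest {
        // The old habit: enforce at the top of the function what the code
        // must not be given.
        static int discountedPrice(int price, int percent) {
            if (price < 0) throw new IllegalArgumentException("price < 0");
            if (percent < 0 || percent > 100) throw new IllegalArgumentException("bad percent");
            return price - price * percent / 100;
        }

        // The newer habit: pin down what the code actually does.
        @Test
        public void tenPercentOffOneHundredIsNinety() {
            assertEquals(90, discountedPrice(100, 10));
        }
    }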
I would use statics in a lot of methods/classes, as it was more concise. When I started writing tests, that practice changed very quickly.
Checked Exceptions
An amazing idea on paper - defines the contract clearly, no room for mistake or forgetting to check for some exception condition. I was sold when I first heard about it.
Of course, it turned out to be such a mess in practice - to the point that there are libraries today, like Spring JDBC, which list hiding legacy checked exceptions among their main features.
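The workaround such libraries standardized looks roughly like this (a sketch):

    import java.sql.Connection;
    import java.sql.SQLException;

    class Repository {
        private final Connection connection;

        Repository(Connection connection) { this.connection = connection; }

        void save() {
            try {
                connection.commit();            // commit() declares the checked SQLException
            } catch (SQLException e) {
                throw new RuntimeException(e);  // rethrow unchecked: callers aren't forced to catch
            }
        }
    }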
That anything worthwhile was only coded in one particular language. In my case I believed that C was the best language ever and I never had any reason to code anything in any other language... ever.
I have since come to appreciate many different languages and the benefits/functionality they offer. If I want to code something small - quickly - I would use Python. If I want to work on a large project I would code in C++ or C#. If I want to develop a brain tumour I would code in Perl.
When I needed to do some refactoring, I thought it was faster and cleaner to start straightaway and implement the new design, fixing up the connections until they work. Then I realized it's better to do a series of small refactorings to slowly but reliably progress towards the new design.
Perhaps the biggest thing that has changed in my coding practices, as well as in others', is the acceptance of outside classes and libraries downloaded from the internet as the basis for behaviors and functionality in applications. When I attended college, we were encouraged to figure out how to make things better via our own code and to rely on the language to solve our problems. With the advances in all aspects of user interfaces and service/data consumption, this is no longer a realistic notion.
There are certain things which will never change in a language, and having a library that wraps this code in a simpler transaction and in fewer lines of code that I have to write is a blessing. Connecting to a database will always be the same. Selecting an element within the DOM will not change. Sending an email via a server-side script will never change. Having to write this time and again wastes time that I could be using to improve my core logic in the application.
Initializing all class members.
I used to explicitly initialize every class member with something, usually NULL. I have come to realize that this:
normally means that every variable is initialized twice before ever being read
is silly, because most languages automatically initialize variables to NULL (or a default value)
actually incurs a slight performance hit in most languages
can bloat the code on larger projects
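In Java, for instance, the redundancy looks like this (illustrative):

    class Customer {
        // Explicit "initialization"... but the JVM has already zeroed these
        // before the constructor body ever runs.
        private String name = null;
        private int orderCount = 0;

        Customer() {
            name = null;   // and the old habit did it a third time here
        }
    }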
Like you, I also have embraced IoC patterns in reducing coupling between various components of my apps. It makes maintenance and parts-swapping much simpler, as long as I can keep each component as independent as possible. I'm also utilizing more object-relational frameworks such as NHibernate to simplify database management chores.
In a nutshell, I'm using "mini" frameworks to aid in building software more quickly and efficiently. These mini-frameworks save lots of time, and if done right can make an application super simple to maintain down the road. Plug 'n Play for the win!
What is the best way to sort class members?
I'm in conflict with a team member about this. He suggests that we should sort the members alphabetically. I think it's better to organize in a semantic manner: important attributes first, related methods together, etc.
What do you think?
I like semantic. Alphabetical doesn't seem to make a lot of sense to me, because when you're looking for a member, you rarely know exactly what it's called. Also, if you're using any sort of naming convention (e.g. Hungarian), alphabetical sorting is going to lead to grouping by type, which may not be what you want.
Group related class members together. I think this will help other programmers understand your interface more easily when they see it for the first time.
Some also find it helpful to organize accessors and modifiers together in separate sections.
I've studied this exact issue as part of my master's thesis.
An alphabetical organization or organization based on public/private is better for being able to find specific things. However, in some IDEs you can set the outline tool to sort alphabetically and to use special indicators for public/private.
My approach was to group methods based on what members they use: there is often a conceptual connection between methods that use the same fields.
I actually created a visualization from that, which helped to quickly navigate and understand the structure of huge classes.
I never look for a member by going through the code. When I want to jump to a member definition, I either select it from the navigation bar / document outline / class view, or I right-click and select "Jump to definition". You don't need to sort the members if you have a decent IDE. This works very well in Visual Studio, and the other IDE I use when needed, KDevelop, supports at least the basics of this.
Anyway, I tend to group members by functionality, i.e. all fields / properties / methods that are part of some specific functionality are together. And since classes shouldn't be too long, this is enough.
This is just my opinion, which I am sure will be unpopular, but the problem with semantic sorting is that it's subjective. Each person will have a different opinion about which methods should be close together.
Alphabetical has the advantage that it is entirely objective. It also reduces large diffs for small changes, which are common when one coder chooses a different semantic ordering.
Most IDEs have outlines, or hyperlinks to make navigation easier.
EDIT: A clarification - I still sort public before private, but alphabetically within the same access level. In fact, I don't do any sorting by hand - I let my IDE re-sort the file for me when saving.
Are you writing a phone book?
With a semantic approach you can easily show which methods are the most important.
I generally go with the constructor and destructor first, then the important methods, followed by the getters and setters, and finally the miscellaneous methods. I take a similar approach for the internal parts (private methods, attributes...).
Alphabetical order does not convey any useful information about your class. If you really want to see methods sorted alphabetically, you should rely on a function of your IDE.
You can go from semantic to alphabetic by sorting the "methods display" in your IDE.
You can't go (automatically) from alphabetic to semantic.
Hence: semantic.
Assuming you are using a modern IDE, finding the method you want is rarely more than two mouse clicks away, so I am not sure what having a particular way of organizing your methods would get you. I do use StyleCop (http://code.msdn.microsoft.com/sourceanalysis), which has me ordering by public/private/methods/properties - I have found that to be anal enough.
The only time I ever truly thought this was important was when I wrote a very large JScript program and the editor at the time didn't offer any help in finding functions. Alphabetical organization was very helpful: when everything is alphabetized, it isn't hard to figure out which way in the file you need to go to find a method. Semantic organization would have been completely unhelpful.
At a higher level, I would organize my class this way:
Constructor
Destructor
Private Fields
Properties
Methods/Functions
Then for the methods/functions I would break it down again by functionality, e.g. I would put methods that implement an interface into one region, event-handler methods into another region, etc.
RWendi
I guess I'm one of the oddball cases who favors alphabetical listings.
First and foremost, it's been my experience that grouping methods together "semantically" tends to be a time sink. Now, if we're talking about grouping them by scope/visibility, that's another thing. But then, if a member changes its scope, you have to sink time into moving the member to keep the code current. I don't want to waste time shuffling code around to observe a guideline like that.
I am also not a big fan of regions. When properties and methods are grouped by scope, they tend to cry out for enclosure in a region. But enclosing code in collapsible regions tends to hide badly written code: as long as you don't have to look at it, you won't be bothered to think about refactoring it to make it maintainable.
So I favor alphabetical organization. It's simple, direct, and to the point. I'm not tempted to enclose groups in regions. And since the IDE makes it easy to jump to a function or property definition anyway, the physical layout of the code is moot. It used to be that you wanted folks to focus on your public members first; modern IDEs make that a largely pointless argument in favor of scope-based layouts.
But the biggest advantage of alphabetical layouts is this: printed code samples during code reviews - and I use them a lot. They make finding a function or a property a snap. If you've ever had to wade through a lot of code to find a function or a property when things weren't alphabetically listed, you'll know what I'm talking about.
But, as they say, those are my subjective views on the subject. Your mileage may vary.