I've been wondering where reverse engineering is used. I'm interested in learning it, but I don't know if I can/should put it on my CV.
I don't want my new boss to think I'm an evil hacker or something. :)
So is it worth it?
Should I learn it or put my effort somewhere else?
Is there a good book or tutorial out there? :)
Reverse engineering is commonly used for deciphering file formats for improving interoperability. For example, many popular commercial Windows applications don't run on Linux, which necessitates reverse engineering of files produced by those applications, so that they can be used in Linux. A good example of this would be the various formats supported by Gimp, OpenOffice, Inkscape, etc.
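If you've never tried this, the first step usually looks something like the following Python sketch: hex-dump the header of the unknown file and test candidate interpretations with the struct module. The filename and field layout here are invented for illustration.

    # Hypothetical first pass at an undocumented binary format.
    import struct

    with open("mystery.dat", "rb") as f:  # made-up filename
        header = f.read(16)

    print(header.hex(" "))  # eyeball the bytes for magic numbers

    # Guess: 4-byte magic, 2-byte version, 4-byte record count,
    # all little-endian. Adjust until the values look plausible.
    magic, version, count = struct.unpack("<4sHI", header[:10])
    print(magic, version, count)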
Another common use of reverse engineering is deciphering protocols. Good examples include Samba, DAAP support in many non-iTunes applications, cross platform IM clients like Pidgin, etc. For protocol reverse engineering, common tools of the trade include Wireshark and libpcap.
No doubt reverse engineering is often associated with software cracking, which is primarily about understanding program disassembly. I can't say that I've ever needed to disassemble a program other than out of pure curiosity or to make it do something it wasn't meant to. One plus side of reverse engineering programs is that to make any sense of them, you will need to learn assembly programming. There are, however, legal ways to hone your disassembly skills, specifically using crackmes. An important point is that when you're developing security measures in your applications, or if you're in that business, you need to know how reverse engineers operate in order to stay one step ahead.
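To give a flavor of what the tooling side looks like, here is a minimal sketch using the Capstone disassembly library for Python (pip install capstone). The byte string is a hand-made x86-64 function, not taken from any real program.

    from capstone import Cs, CS_ARCH_X86, CS_MODE_64

    # push rbp; mov rbp,rsp; mov [rbp-4],edi; mov eax,[rbp-4]; pop rbp; ret
    CODE = b"\x55\x48\x89\xe5\x89\x7d\xfc\x8b\x45\xfc\x5d\xc3"

    md = Cs(CS_ARCH_X86, CS_MODE_64)
    for insn in md.disasm(CODE, 0x1000):  # 0x1000 is an arbitrary base address
        print(f"0x{insn.address:x}:\t{insn.mnemonic}\t{insn.op_str}")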
IMHO, reverse engineering is a very powerful and useful skill to have. Not to mention, it's usually fun and addictive. Like hmemcpy mentioned, I'm not sure I would use the term "reverse engineering" on my CV, only the skills/knowledge associated with it.
Reverse engineering is usually something you do because you have to, not because you want to. For example, there are legal issues with simply reverse engineering a product! But there are necessary cases - where (for example) the supplier has gone out of business, no longer exists, or is not contactable. A good example would be the WMD editor that you typed your question into: the SO team/community had to reverse engineer it from obfuscated source to apply some bug fixes.
One field, in my opinion, where reverse engineering skills might be useful is the anti-virus industry. However, I wouldn't place "reverse engineering" on my CV; rather, I'd write down experience with assembly language, with miscellaneous disassemblers/debuggers (such as IDA, SoftICE, or OllyDbg), and other relevant skills.
I have worked on reverse engineering projects, and they certainly had nothing to do with hacking. We had the source code for all such projects (legitimately), but for one of the projects nobody actually knew what the code did behind the scenes, and how it interacted with other systems. That information had long been lost. In another project, we had the source code and some documentation, but the documentation wasn't up to date, so we had to reverse-engineer the source to update the documentation.
I don't mind having such projects on my CV. In fact, I believe I've learned a lot during the process.
Reverse engineering is needed whenever the documentation is lost or never existed. Having the source helps, but you still have to reverse engineer the original logic, control flow, and bugs out of it.
Working with strange hardware often forces you to reverse engineer. For instance, I was once working with an old signal acquisition card that behaved strangely; putting in a beautiful sine wave produced awfully crippled data. It turned out that every other byte was two's complement and every other byte one's complement - or at least, when interpreted that way, the data became quite beautiful. Of course, this wasn't documented anywhere, and the card worked perfectly when used with its own proprietary software.
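For the curious, a decoder for that scheme might look like this sketch (assuming 8-bit samples; the function name and test bytes are hypothetical):

    def decode_card_bytes(raw: bytes) -> list:
        samples = []
        for i, b in enumerate(raw):
            if i % 2 == 0:
                # two's complement: 0x80..0xFF wrap to negative
                samples.append(b - 256 if b >= 128 else b)
            else:
                # one's complement: negative values are bitwise-inverted
                samples.append(-(~b & 0xFF) if b >= 128 else b)
        return samples

    print(decode_card_bytes(bytes([0x05, 0xFA, 0xFB, 0x04])))  # [5, -5, -5, 4]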
It is very common (in my experience) to encounter older code which has defects, has become outdated due to changing requirements, or both. It's often the case that there's inadequate documentation, and the original developer(s) are no longer available. Reverse engineering that code to understand how it works (and sometimes to make a repair-or-replace decision) is an important skill.
If you have the source, it's often reasonable to do a small, carefully-planned, strictly-scoped amount of cleanup. (I'm hinting out loud that this can't be allowed to become a sinkhole for valuable developer time!)
It's also very helpful to be able to exercise the code in a testbed, either to verify that it does what was expected or to identify, document, isolate, and repair defects.
Doing so safely requires careful work. I highly recommend Michael Feathers' book Working with Legacy Code for its practical guidance in getting such code under test.
RCE is a great skill for security people (research, exploitation, IDS, IPS, AV, etc.), and it also demonstrates that you've got a deep, low-level understanding of the subject.
It also makes finding your way around third-party libraries easier.
If you're not working in the security industry and you're not good at ASM, don't bother learning it; it's generally hard to learn.
Books
Hacking: The Art of Exploitation covers the subject from a security point of view.
You might also want to read books about OllyDbg and IDA Pro.
What genetic-programming library, regardless of language, has the most active community and is the most well developed?
It's hard to tell, frankly. ParadisEO seems to be very active, and is a pretty large library encompassing various metaheuristics besides GP. Note that it is a superset of the EO library. OpenBEAGLE is nice, but it hasn't been updated since 2007. Watchmaker is very good and active right now, but it only has a proof of concept implementation of GP for now. There's a plethora of libraries out there and rather hard to tell which is the best one. And it's not very hard to roll your own GP, so keep that possibility in mind.
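To back up the roll-your-own suggestion, here is a minimal sketch of tree-based GP for symbolic regression in Python (target function x^2 + x). All names and parameters are illustrative; a real library adds crossover, proper selection, bloat control, and much more.

    import operator
    import random

    OPS = [(operator.add, "+"), (operator.sub, "-"), (operator.mul, "*")]

    def random_tree(depth=3):
        # Leaves are the variable x or a small integer constant.
        if depth <= 0 or random.random() < 0.3:
            return "x" if random.random() < 0.7 else random.randint(-2, 2)
        return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

    def evaluate(tree, x):
        if tree == "x":
            return x
        if isinstance(tree, int):
            return tree
        (fn, _), left, right = tree
        return fn(evaluate(left, x), evaluate(right, x))

    def fitness(tree):
        # Sum of squared errors against the target x^2 + x (lower is better).
        return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

    def mutate(tree, depth=3):
        if random.random() < 0.2:
            return random_tree(depth)  # replace this subtree entirely
        if isinstance(tree, tuple):
            op, left, right = tree
            return (op, mutate(left, depth - 1), mutate(right, depth - 1))
        return tree

    pop = [random_tree() for _ in range(200)]
    for gen in range(30):
        pop.sort(key=fitness)
        survivors = pop[:50]  # simple truncation selection
        pop = survivors + [mutate(random.choice(survivors)) for _ in range(150)]

    best = min(pop, key=fitness)
    print("best fitness:", fitness(best))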
HeuristicLab has a very sophisticated implementation that is both fast and flexible. For example, in an independent benchmark the speed of HeuristicLab's interpreter was equal to that of a newly coded, minimalistic C++ interpreter that included optimizations. It is also very flexible in that you can configure the grammar that creates your tree in the GUI environment, so you can create functions that should, e.g., only have certain variables as inputs, but not all. The implementation is based on a long heritage of code that is very actively developed and is reviewed before each release to ensure continued quality.

HeuristicLab supports regression and classification, as well as custom problems like the Santa Fe trail or Lawn Mower (for which a tutorial exists that helps you implement your own custom problem). There is cross-validation, and there is a separation of training, validation, and test sets that you can use to detect overfitting. As results, you get how often each variable and each symbol is present in the whole population, so you can estimate which variables are important; this is displayed as a graph over time. There's also a Pareto analyzer that you can enable to show all solutions by quality and complexity. HeuristicLab also contains the recently emerging (GECCO 2012) GP benchmark library, which lets people test and compare results. Apart from GP, there are further regression and classification algorithms implemented, like SVM, Random Forests, k-NN, etc.

It's implemented in C# and runs on .NET 4 (currently only on Windows; Mono support is close to finished).
You might want to check out Gene Expression Programming (GEP). It is an alternative form of genetic programming.
There is a technology site at http://www.gene-expression-programming.com/. The company behind it is GEPSoft http://www.gepsoft.com.
I'm a fan of ECJ, "A Java-based Evolutionary Computation Research System":
http://cs.gmu.edu/~eclab/projects/ecj/
The mailing list is usually moderately active, indicating to me the general good health of the project. I have been using ECJ for almost all of my GA and GP research and it has a lot of interesting built-in features plus several third party contributions.
ECJ's creator, Sean Luke, also wrote an awesome and free downloadable book: cs.gmu.edu/~sean/book/metaheuristics/
JGAP for Java seems fairly active. Looking at the checkin history there was a burst of activity a couple of months ago.
http://jgap.sourceforge.net/
You can try this C# .NET 4.0 port of Sean Luke's ECJ (Evolutionary Computation in Java):
http://branecloud.codeplex.com
It is very flexible and powerful software! But it is also relatively easy to get started because it includes many working console samples out-of-the-box (and many helpful unit tests that were developed during the conversion).
As noted above, if you program in Java, you should visit Sean Luke's site directly:
http://cs.gmu.edu/~eclab/projects/ecj/
It has been under active development for 13 years!
Ben
CILib from the CIRG team. It's updated regularly, and the developers are quick to answer your questions.
Forum: http://www.cilib.net/
My son is enthusiastically programming simple games in Scratch. However, Scratch is a very simple programming environment (no subroutines, even), and I can see that soon he is going to need to move on to something else.
Does anyone know of a good learning language that makes graphics easy but provides "real" programming features like data structures, functions, arrays and lists?
Bonus points if it runs under Linux (Ubuntu). An answer of the form "language Foo with library Bar" is also an option.
How about Lua?
There is a nice graphics "engine" called LOVE which is fully programmable in Lua. It has nice documentation and it's not very hard.
There are also several other similar engines using Lua:
Novashell
Verge
Luxinia
Agen
There was another 2D engine similar to LOVE but with a slightly different approach to things; I can't find it at the moment.
I would recommend LOVE for starters, as it's very easy, has nice tutorials, and most importantly you can do nice stuff right away.
Also, Lua is commonly used as a game scripting language. For example, all addons for World of Warcraft are written in Lua; in fact, all of the interface is written in Lua. This means it's very easy to find answers to game-related questions about Lua. Also, if you happen to own a game which uses Lua as its scripting language, you can easily add your own stuff to it.
I wrote this from a game perspective, but there are quite a lot of projects which use Lua as a scripting language.
You could also try Python, but its out-of-the-box tools aren't as ready to use or as easy to learn and understand.
Also, here's a link to the Lua manual.
If Scratch is starting to get a bit limiting, but they're not ready for the hardships of text-editor coding, take a look at Scratch-derivative "BYOB" (Build Your Own Blocks). Seriously, it turns Scratch into a grown-up programming environment with functions (and hence recursion), data-structures, multithreading and everything!
There's also Panther but I was less impressed by it (creating new blocks in Panther seems to require coding their function up directly in Squeak, while in BYOB you can just build them in the usual drag-n-drop Scratch style).
Take a look at Processing.
Its tour de force is graphics, animation, and visual manipulation. It runs under Linux, too.
Processing is an open source programming language and environment for people who want to program images, animation, and interactions. It is used by students, artists, designers, researchers, and hobbyists for learning, prototyping, and production. It is created to teach fundamentals of computer programming within a visual context and to serve as a software sketchbook and professional production tool.
A nice review here suggests Alice and Shoes after Scratch -- I have no personal experience in the matter, but from the review they seem worth checking out.
It might be just a little bit larger of a jump, but Python with PyGame will allow your pupil to make many of the same sorts of programs as he or she is already used to with Scratch, but with very tight control over how the whole thing works.
Pros: It's python, which is a very easy language to read and write, and provides a very rich programming environment, without really any boilerplate required.
Cons: it's SDL, which uses an event loop that you get to write yourself (a minimal example is sketched below); this might be a pretty large hurdle for a young programmer.
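For reference, a minimal PyGame event loop looks something like this sketch (pip install pygame; window size and colors are arbitrary):

    import pygame

    pygame.init()
    screen = pygame.display.set_mode((640, 480))
    clock = pygame.time.Clock()
    x = 0

    running = True
    while running:
        for event in pygame.event.get():   # drain the event queue
            if event.type == pygame.QUIT:  # window close button
                running = False
        x = (x + 2) % 640                  # trivial animation state
        screen.fill((0, 0, 0))
        pygame.draw.circle(screen, (255, 200, 0), (x, 240), 20)
        pygame.display.flip()              # show the new frame
        clock.tick(60)                     # cap at 60 frames per second

    pygame.quit()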
Scratch is written in Squeak (which runs on Linux, Windows and Mac) so I'll say step up to Smalltalk! The only problem is the lack of a very good beginners book on the language, which is strange when you consider its origins. However, the basic concepts are easy to learn (almost no syntax) and the environment encourages experimentation.
Here is an interesting Microsoft project called Small Basic, which is a good, simple, free programming environment for learning, based on BASIC.
No bonus points because of the lack of Ubuntu support, but it's a cool learning tool.
Is QuickBasic still around? That's what I started with when I was 7 or 8, and I was able to make full-fledged games, etc., without any external libraries.
EDIT: check out this link about FreeBasic:
http://linux.about.com/b/2006/11/10/freebasic-open-source-alternative-to-quickbasic.htm
Well, there is venerable old Logo -- not sure about structures, but you do get lists, functions with parameters, and graphics are very straightforward. There are plenty of good implementations, too. Logo has even been likened to 'Lisp without all the parentheses'.
I would suggest using CodingBat. Although CodingBat doesn't provide graphics, it does provide the "programming features" and straightforward practice involving strings, arrays, and logic.
I think this website helps with developing the basic foundation behind programming.
Link: http://codingbat.com/
Let's say your company has given you the time & money to acquire training on as many advanced programming topics that you can eat in a year, carte blanche. What would those topics be and how would you prefer to acquire them?
Assumptions:
You still have deliverables to produce, but you're allowed one week per month over the year for this training.
The training can come from anywhere, e.g. classroom, on-site instructor, books, subscriptions, podcasts, etc.
Subject matter can cover any platform, technology, language, DBMS, toolset, etc.
Concurrent/parallel programming and multi-threading, especially with respect to memory models and memory coherency. I think every programmer should be aware of the considerations in this arena as we move into a world of multi-core/multi-CPU hardware.
For this I would probably use Internet research most heavily, but an on-campus primer at a good university could be a good way to start off.
Security!
Far too many programmers just build something and think they can add security as an afterthought after finishing the "main" part of the program. You could always benefit from knowing more about how to secure your app, how to design software to be secure from the get-go, how to do intrusion detection, etc.
Advanced Database Development
Things like data warehousing (MDX, OLAP queries, star schemas, fact tables, etc), advanced performance tuning, advanced schema and query patterns, and the like are always useful.
Here are the three that I'm always finding myself explaining to junior developers who didn't get enough CS training. All that other stuff is generally more hype than substance, or can be fairly easily picked up. But if you don't know these three, you can do a great deal of damage:
Algorithm analysis, including Big O notation.
The various levels of cohesion and coupling.
Amdahl's Law, and how it pertains to optimizations (see the sketch after this list).
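A quick numeric illustration of Amdahl's Law, as a Python sketch: the overall speedup is capped by the serial fraction of the work, no matter how many cores you add.

    def amdahl_speedup(parallel_fraction: float, n_workers: int) -> float:
        serial = 1.0 - parallel_fraction
        return 1.0 / (serial + parallel_fraction / n_workers)

    # Even with 95% of the work parallelized, 1024 cores give under 20x:
    for n in (2, 8, 64, 1024):
        print(n, round(amdahl_speedup(0.95, n), 2))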
Internationalization issues, especially since it sounds like it would not be an advanced topic. But it is.
Accessibility
It's ignored by so many organizations but the simple fact of the matter is that there are a huge number of people with low or no vision, color blindness, or other differences that can make navigating the web a very frustrating experience. If everybody had at least a little bit of training in it, we might get some web based UIs that are a little more inclusive.
Object oriented design patterns.
I guess "advanced" is different for everyone, but I'd suggest the following as being things that most decent developers (i.e. ones that don't need to be told about NP-completeness or design patterns) could gain from:
Multithreading techniques that go beyond "lock" and when to apply them.
In-depth training to learn and habitualize themselves with clever features in their toolchain (IDE/text editor, debugger, profiler, shell).
Some cryptography theory and hands-on experience with different common flaws in security schemes that people create.
If they program against a database, learn the internals of their database and advanced query composition and tuning techniques.
Developers should know the basics of SQL development and how their decisions impact database performance. It is one thing to write a query; it is another to write a query, understand the explain plan, and make design decisions based on that output. I think a good course on PL/SQL development and database performance would be very beneficial.
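As a concrete (if toy) illustration of looking at a plan, here is a sketch using Python's built-in sqlite3 module; the table and column names are invented. The same habit applies to Oracle's EXPLAIN PLAN, PostgreSQL's EXPLAIN, and so on.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)")
    con.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

    # The plan shows whether SQLite will use the index or scan the table.
    for row in con.execute("EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"):
        print(row)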
Unfortunately communication skills seem to fall under the "advanced topics" section for most developers (present company excluded, of course).
Best way to acquire this skill: practice.
Take off the headphones, and talk to someone instead of IM'ing or emailing the guy at the next desk.
Pick up the phone and talk to a client instead of lobbing an email over the fence.
Ask questions at a conference instead of sitting behind your laptop screen twittering.
Actively participate in a non-technical meeting at work.
Present something in public.
Most projects do not fail because of technical reasons. They fail because they could not create a team. Communication is vital to team dynamics.
It will not harm your career either.
One of the best courses I took was a technical writing course. It has served me well in my career.
Additionally: it probably does not matter WHAT the topic is - the fact that the organization is interested in it and is paying for it and the developers want to go and do go, is a better indicator of success/improvement than any one particular topic.
I also don't think it matters that much what the topic is. Dev organizations deal with so many things during a project that training and then on the job implementation/trial and error will always get you some better perspective - even if the attempts to try out/use the new stuff fail. That experience will probably help more on the subsequent projects.
I'm a book person, so I wouldn't really bother with instruction.
Not necessarily in this order, and depending on what you know already
OO Programming
Functional Programming
Data structures and algorithms
Parallel processing
Set based logic (essentially the theory behind sql and how to apply it)
Building parsers (I only put this, because it actually came up where I work)
Software development methodologies
NP Completeness. Specifically, how to detect if a problem is NP-Complete, and how to build an approximate solution to the problem.
I see this as important because you don't want a developer to try to solve an NP-complete problem for the optimal solution, unless the problem's search space is very small, in which case brute force is acceptable. However, as the search space increases, the time required to solve the problem increases exponentially, and an approximation is usually the practical choice (one classic example is sketched below).
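As one classic example, here is a sketch of the greedy approximation for Set Cover (an NP-hard problem): repeatedly pick the set that covers the most still-uncovered elements. It is guaranteed to be within a ln(n) factor of the optimum, which is often good enough in practice. The sample data is made up.

    def greedy_set_cover(universe: set, subsets: list) -> list:
        uncovered = set(universe)
        cover = []
        while uncovered:
            # Pick the subset covering the most uncovered elements.
            best = max(subsets, key=lambda s: len(s & uncovered))
            if not best & uncovered:
                raise ValueError("universe cannot be covered")
            cover.append(best)
            uncovered -= best
        return cover

    universe = set(range(1, 11))
    subsets = [{1, 2, 3, 8}, {4, 5, 6}, {7, 8, 9, 10}, {1, 4, 7}, {2, 5, 9}]
    print(greedy_set_cover(universe, subsets))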
I'd cover new technologies and trends. Some of the new technologies I'm researching/enhancing my skills with include:
Microsoft .NET Framework v3.0/v3.5/v4.0
Cloud Computing Frameworks (Amazon EC2, Windows Azure Services, GoGrid, etc.)
Design Patterns
I am from the MS-based developer world, so here is my take on this:
More about new concepts in cloud computing (various APIs, etc.), as the industry has been betting on it for some time.
More about LINQ for the .NET Framework.
Distributed databases
Refactoring techniques (which implies also learning to write a good set of unit/functional tests).
Knowing how to refactor is the best way to keep code clean -- it is rare when you get it right the first time (especially in new designs).
A number of refactorings, however, require a decent set of tests to check that the refactoring did not add unexpected behavior.
Parallel computing, and the easiest and best ways to learn it.
Debugging
Debugging by David J. Agans is a good book on the topic. Debugging can be very complex when you deal with multi-threaded programs, crashes, algorithms that don't work, etc. Everybody would be better off being good at debugging.
I'd vote for real-world battle stories. Have developers from other organizations present their successes and failures. Don't limit the presentations to technologies you're using. With a significantly complex project, this is bound to cut into 'advanced' topics you haven't even considered. Real-world successes (and failures) have a lot to teach.
Go to the Stack Overflow DevDays
and the ACCU conferences
Read
Agile Software Development, Principles, Patterns, and Practices (Robert C. Martin)
Clean Code (Robert C. Martin)
The Pragmatic Programmer (Andrew Hunt&David Thomas)
Well if you're here I would hope by now you have the basics down:
OOP Best practices
Design patterns
Application Security
Database Security/Queries/Schemas
Most notably, developers should strive to learn multiple programming languages and disciplines, so that their skill set expands in more than one direction. They don't need to become experts in these other skills, but they should at least have a very acute understanding of integration with their central discipline. This will make them much better developers in the long run, and it also lets them use all the tools at their disposal to create applications that can transcend the limitations of a single language.
Outside of programming specific topics, you should also learn how to work under Agile, XP, or other team based methodologies in order to be more successful while working in a team environment.
I think an advanced programmer should know how to get your employer to give you the time & money to acquire training on as many advanced programming topics that you can eat in a year. I'm not advanced yet. :)
I'd suggest an Artificial Intelligence class at a college/university. Most of the stuff is fun, easy to grasp (the basics at least), and the solutions to problems are usually creative.
Hitchhikers Guide to the Galaxy.
How would I prefer to acquire the training? I'd love to have a substantial amount of company time dedicated to self-training.
I totally agree on accessibility. I was asked to look into it for the website at work, and there is a real lack of good knowledge on the subject, including a lack of CSS standards to aid the likes of screen readers.
However, my answer goes to GUI design - it's quite a difficult thing to get right. There are too many awful applications out there that could be prevented just by taking the time to follow HCI (Human-Computer Interaction) advice/designs. Take Google or Apple for inspiration when making a GUI - not your typical hundreds-of-buttons/labels combo that too often gets pushed out.
Automated testing: Unit testing, functional integration testing, non-functional testing
Compiler details (more relevant on some platforms than others): How does the compiler implement certain common constructs in language X? On a byte-code interpreted platform, how does JIT compilation work? What can be JIT-compiled (for example, can virtual calls be JIT compiled?)?
Basic web security
Common design idioms from other problem domains than the one you're working in at the moment.
I'd recommend learning about Refactoring, Test Driven Development, and various unit testing frameworks (NUnit, Visual Test, CppUnit, etc.) I'd also learn how to incorporate automated unit testing into your continuous integration builds.
Ultimately if you can prove your code does what it claims it can do, you don't have to be there to answer questions as to why or how. If a maintainer comes along and tries to "fix" your code, they'll know instantly if they broke it. Tests written around the requirements (use cases) explain to the maintainer what your users wanted it to do, and provide a little working example of how to call it. Think of unit tests as functional documentation.
Test Driven Development (TDD) is a more novel design approach that begins with the requirements, where you start by writing a test before you write the code. You then write exactly enough code required to pass the test. You have to stop before you write extra code (that you may never need), because you will refactor it later if you find that you really needed it.
What makes TDD cool is that a bad interface (such as one with lots of dependencies) is also very hard to write tests for. It's so hard that a coder would rather refactor the interface to make it easier to test. And that refactoring simplifies the code, removing inappropriate dependencies, or grouping related tests together to make it easier to test, thus improving cohesion. By making it immediately apparent to the developer when he's writing a badly interfaced module, the developer sticks to the architecture and gravitates to the principles of tight cohesion and loose coupling. Good interfaces are the natural result. And as a bonus, once you pass all your tests, you know you're done.
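To make the rhythm concrete, here is a tiny sketch using Python's built-in unittest: the tests are written first, and the implementation contains just enough code to pass them. The function and its behavior are invented for illustration.

    import unittest

    def slugify(title: str) -> str:
        # Just enough code to pass the tests below - nothing more.
        return "-".join(title.lower().split())

    class TestSlugify(unittest.TestCase):
        def test_lowercases_and_hyphenates(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

        def test_collapses_runs_of_whitespace(self):
            self.assertEqual(slugify("a  b\tc"), "a-b-c")

    if __name__ == "__main__":
        unittest.main()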
On the surface this seems like an easy question to answer, just enter your favorite pet peeve about what other developers can't do correctly. But when I read through the answers and gave it some thought, I realized that every "advanced topic" brought up was covered in my undergraduate computer science curriculum--20 years ago. And I doubt that OO, security, functional programming, etc. concepts have changed in that time. Sure the tools have, but I argue that tools are different than topics.
So what is an "advanced topic" in computer science? Who is the Turing, Knuth, Yourdon of the 21st century?
I don't have a clear answer to this question, though I'd like to see more work on theories for parallel programming that will enable tools to abstract that messy stuff for developers.
Quite funny that no one has mentioned:
debugging
the tools & IDE you work with
the platform you are developing for
Everyday development is much more fun if you know your tools really well, and you accomplish more and make your life easier if you know how to debug someone else's code with ease.
Source Control
I have an interesting idea for a new programming language. It's based on a new programming paradigm that I've been working out in my head for some time. I finally got around to start working on a basic parser and interpreter for it a few weeks ago.
I want my new language to be successful, and I want to eventually create a community around it when it's ready to release. The idea behind it is fairly innovative, so I don't expect it to gain a lot of ground in the business world, but it would thrill me more than anything else to see a handful of start-ups or open source projects use it.
So taking those aims into account, what can I do to help make my language successful? What do language projects do to become successful? What should I avoid at all costs? I'd love to hear opinions or stories about other languages -- successful or not -- so I can think about them as I continue to develop.
So far, the biggest concerns on my mind are finding a market, access to existing libraries, and having amazing tool support. What else might I add to this list?
The true answer is by having a beard.
http://blogs.microsoft.co.il/blogs/tamir/archive/2008/04/28/computer-languages-and-facial-hair-take-two.aspx
Although not specific to new programming languages, the book Producing Open Source Software by Karl Fogel (available to read online) may contain some hints on building a community around your new programming language.
In terms of adoption of programming languages in general, it seems like the trend lately has been to have a rich library to make development times shorter.
As there isn't much detail on what your language is like, it's hard to determine whether adoption of the language is going to depend on the availability of a rich library. Perhaps your language will be able to fill a niche that has been overlooked by other languages and be able to gain users. Or perhaps it has a slick name that will draw people in -- there are many factors which can affect the adoption of a language.
Here are some factors that come to mind when thinking about recent successful languages:
* Ability to leverage existing libraries in the new language.
  * Having an adapter to external libraries written in other languages (a small sketch of this follows the list). Python, for example, allows access to code written in C through the Python/C API.
  * Targeting a platform which already has plenty of libraries available for use. Groovy and Scala target the Java platform, thereby allowing the use of, and interoperation with, existing Java code.
* Language design and syntax that allow increased productivity.
  * Many dynamically-typed languages have gained popularity, such as Ruby and Python, to name a couple.
  * More concise and clear code can be written in languages such as Groovy, as opposed to verbose languages such as Java.
  * Offering features such as functions as first-class objects and closures, which aren't offered in more "traditional" languages such as C and Java.
* A community of dedicated users who are also willing to teach newcomers the benefits of the language. The human factor is going to be big in widespread support for a language -- if people never start using your language, it won't gain more users.
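To make the adapter idea concrete, here is a sketch using Python's ctypes to call straight into the C math library; the library path assumes a typical Linux system, so adjust it for your platform.

    import ctypes

    libm = ctypes.CDLL("libm.so.6")        # platform-specific path
    libm.cos.restype = ctypes.c_double     # declare the C signature
    libm.cos.argtypes = [ctypes.c_double]

    print(libm.cos(0.0))  # 1.0, computed by C code rather than Python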
Also, another suggestion that I could add is to make the development of your language open -- keep your users posted on developments in your language, and allow people to give you feedback. Better yet, let your users take part in the decision-making process, if you feel that is appropriate.
I believe that by offering ways to participate in the bringing up of a language, the more people will feel that they have a stake in the success of the new language, so the more likely it will gain more support.
Good luck!
Most languages that end up taking off rapidly do so by means of a killer app. For C it was Unix. Ruby had Rails. JavaScript is the only available programming system common to most browsers without third-party add-ons.
Another means of success is by fiat. This only works if you have significant clout. For example, C#, as nice a language as it might be, wouldn't be anywhere near as popular as it is now if Microsoft had not pushed it as hard as it does. Objective-C is the language of Mac OS X simply because Apple says so.
The vast majority of languages, though, which lack a single killer app or a major corporate backer have gained success through long term investment of their respective creators. Perl and Python are prime examples. C++ has no single entity behind it, but it has evolved as the needs of developers have changed.
Don't worry about trying to make the language be successful; worry about using it to solve real problems and make real money.
You'll either make lots of money from using this language, or not. Once you have lots of money, others may care how you did it. Or not; either way, you have lots of money.
If you don't make lots of money, nobody will want to know how you did it.
Edit based on comment: I define successful as people using it, and people use languages to solve problems, most for profit, thus successful == profitable.
In addition to making the language easy to use (which has several meanings), you should develop a comprehensive library that covers, and provides a good level of abstraction over, the following most important areas:
* Data structures and manipulation
* File I/O support
* XML processing
* Networking (plus web based technologies like HTTP/HTTPS)
* Database support
* Synchronous and asynchronous I/O
* Processes and threads
* Math
A well thought out framework that makes rapid development faster (and easier to maintain) would be a great addition. For this, you should know the currently popular frameworks well.
Keep in mind that it takes a lot of time. I think it took Python about 10 years (someone please correct me if I'm wrong).
So even if your community still seems small after say, 5 years, that's not the end of the story.
"It's based on a new programming paradigm that I've been working out in my head for some time."
While laudable, odds are really good that someone has already done something with your "new" paradigm.
To make a language usable, it must build on prior art. Totally new is not a good path to success. My favorite example is Algol 68.
Algol 60 was wildly popular (back in the day, which is a while ago, admittedly).
The experts wanted to build on this success. They proposed some new paradigms, and the effort split into factions. The purists put the new paradigms into Algol 68; it disappeared into obscurity. Some folks created a different version of Algol, called PL/I. It did not have any really new paradigms. It actually went somewhere and was used heavily. Another group created Pascal -- it didn't have much that was new -- it discarded things from Algol 60. It actually went somewhere and was used heavily.
Your new paradigm must have a clear and concise summary so people can fit it into a context of where the language is usable, how it can be used, what the costs and benefits of using it are.
A "new programming paradigm" causes some people to say "why learn a completely new paradigm when the ones I have work so nicely?" You have to be very clear on how it helps to have a new paradigm.
The language and libraries must work, and work very, very well. A language that isn't rock-solid is worthless. In order to be rock-solid it must be very simple.
It has to have a tutorial that will help anyone get started with your language.
Good Framework for Common Tasks
Easy Installation/Deployment
Good Documentation
Debugger/IDE and other Tools
A popular flagship product that uses your language!
Good documentation, including a detailed reference manual as well as simple examples to get people started quickly.
Good library support so that people can actually write useful programs.
Most popular languages seem to be very strong in either or both of those.
Use the Trojan Horse approach:
C++ - The Forgotten Trojan Horse
An interesting article on why C++ managed to grab the hearts of programmers so successfully.
I am a student studying software development, and I feel programming, in general, is too broad a subject to try to know everything. To be proficient, you have to decide which areas to focus your learning and understanding on. Certain skill sets synergize with each other, like data-driven web development and SQL experience. However, all the Win32 API experience in the world may not directly apply to Linux development. This leads me to believe that, as a beginning programmer, I should start deciding where I want to specialize after I have a general understanding of the basic principles of software development.
This is a multi-part question really:
What are the common specializations within computer programming and software development?
Which of these specializations have more long-term value, both as a foundation for other specializations and/or as marketable skills?
Which skill sets complement each other?
Are there any areas of specialization that hinder your ability to develop other areas of specialization?
Ben, almost all seasoned programmers are still students of programming. You never stop learning as a developer. But if you are really starting off in your career, then specialization should be the least of your worries. No particular set of APIs, frameworks, or skills will give you long-term security in this field. Technology changes a lot, and you should be versatile and flexible enough to learn anything. The knowledge you acquire on one platform/API/framework doesn't die off; you can apply the skills to the next great platform/API/framework.
That being said, you should stop worrying about the future and concentrate on the basics. Data structures, algorithm analysis and design, compiler design, and operating system design are the bare minimum. Further, you should be willing to go back and read the books in those fields at any time in your career. That's all that's required. Good luck.
Sorry if I sounded like a big-ass advisor, but that's what I think. :-)
Not to directly reject your premise, but I actually think being a generalist is a good position in programming. You will certainly develop expertise in specific areas, but it is likely to be a product of either personal interest or work necessity. Over time, the stuff you are able to transfer across languages and problem domains is at the heart of what makes good programmers.
I think the more important question is: What areas of specialization are you most interested in?
Once you know, begin learning in that area!
I would think the greatest skill of all would be to adapt with the times, because if your employer can see this potential in you, they would be wise to hold on tightly.
That said, I would advise you dive into the area YOU would enjoy. Learning is driven by enthusiasm.
Since my current employ is with an internet provider, I've found networking knowledge particularly helpful. But someday I'd like to play with 3D graphics (not necessarily games).
Go as deep as you can, starting off in one environment: Win32, .NET, Java, Objective-C... whatever.
It is important to build a deep understanding of how X works so that you can translate the same concepts into other languages or platforms/environments, if you so desire.
"Are there any areas of specialization that hinder your ability of developing other areas of specialization." Sort of, but nothing permanent i think.
Since I am relatively green myself (less than 4 years) I come from a really OOP mindset. I've rarely jumped out of .NET, so I had a hard time on one job when coming into contact with embedded code. With embedded programmers fearing object creation and the performance loss of inheritance. I had to learn the environment, seriously low memory and slow clock times, they were coming from. Those are times to grow, I had a better time at it because i understood my area pretty well.
I will say if you pick something to specialize in for marketability and money, you will probably burn out fast. If you do start to specialize pick something you enjoy. I love GUI programing and hate server side stuff, my buddy is the opposite, but we both love our jobs. If he had to do my job, and I his, we would both go insane out of boredom.
As a student I'd recommend forgetting about what you're programming and focusing on the software process itself. Understand how to analyse a problem and ask the right questions; learn every design pattern you can and actually apply them all to gain a real understanding and appreciation of object-oriented design; write tests and then code only as much as you need to in order to make the tests pass. I think the best way to really learn is to just code as much as you can - the language and the domain aren't important, browse sourceforge and freshmeat for any interesting-sounding projects and get involved. What's important is understanding the fundamentals of software engineering.
And yes, this includes C. Or Assembler. This is the easiest way to get a good understanding of how your computer works and what your high-level code is actually doing.
Finally, never stop learning - service-oriented architecture, inversion of control, domain-specific languages, and business process management are all showing huge benefits, so they're important to be aware of. But by the time you finish studying and join the workforce, who knows what the next big thing will be?