Develop a game for an old system - Amiga

I have a school programming project; the subject is free as long as we demonstrate our programming skills. I have an Amiga 500 lying around and I wondered if I could make a game for it. Maybe nothing complicated - I know how limited the system is - but is it possible to make it and test it on a Windows 10 PC with an emulator, and later write it onto a magnetic disk? Also, if I put a Commodore diskette into a USB disk reader, is it possible to read the code? Or is it "too compiled" to learn anything from it? Thank you!

The Amiga is (was) a nice computer to program on.
Depending on your time budget, learning and coding a game from scratch can be quite time-consuming (unless you are already familiar with 68000 assembly language).
There are plenty of alternatives, though, for programming this computer, using various languages:
C Programming
Using C with either GCC or VBCC and a dedicated (but quite simple and compact) game engine, such as ACE: https://github.com/AmigaPorts/ACE
The Amiga comes with a built-in library of graphics & audio functions (even the good old A500) that can help you develop a game in a fully OS-friendly way. The result will be rather slow, but you will be able to use every part of the hardware (blitter, copper, BOBs, sprites...): http://amigadev.elowar.com/read/ADCD_2.1/Includes_and_Autodocs_2._guide/node040D.html
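To give an idea of what OS-friendly code looks like, here is a minimal, hedged sketch in C that opens a window on the Workbench screen through intuition.library and waits for its close gadget. The dimensions, library version, and title are illustrative only:

```c
/* Minimal OS-friendly Amiga C sketch: open a window via intuition.library,
   wait for the close gadget, clean up. Compile with e.g. VBCC or GCC
   against the NDK headers. */
#include <exec/types.h>
#include <intuition/intuition.h>
#include <proto/exec.h>
#include <proto/intuition.h>

struct IntuitionBase *IntuitionBase;

int main(void)
{
    struct Window *win;
    struct NewWindow nw = {
        20, 20, 300, 100,                /* left, top, width, height      */
        0, 1,                            /* detail pen, block pen         */
        IDCMP_CLOSEWINDOW,               /* report close-gadget events    */
        WFLG_CLOSEGADGET | WFLG_DRAGBAR | WFLG_ACTIVATE,
        NULL, NULL,                      /* no gadgets, default checkmark */
        (UBYTE *)"Hello, Amiga",         /* window title                  */
        NULL, NULL,                      /* Workbench screen, no bitmap   */
        0, 0, 0, 0,                      /* no resize limits              */
        WBENCHSCREEN
    };

    IntuitionBase = (struct IntuitionBase *)OpenLibrary("intuition.library", 33);
    if (!IntuitionBase)
        return 20;

    win = OpenWindow(&nw);
    if (win) {
        /* A real program would GetMsg()/ReplyMsg() in a loop; this just
           blocks until the user clicks the close gadget. */
        Wait(1L << win->UserPort->mp_SigBit);
        CloseWindow(win);
    }
    CloseLibrary((struct Library *)IntuitionBase);
    return 0;
}
```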
Basic
The Amiga has several versions of the Basic language, most of them implementing a rather comprehensive set of commands to get the most out of its hardware (sprites, BOBs...).
You may want to try AMOS Basic, the most famous one: https://www.ultimateamiga.co.uk/index.php/page,16.html
As an alternative, Blitz Basic was probably the second most popular Basic on this machine: https://www.amigafuture.de/downloads.php?view=detail&df_id=3663&sid=661dbda78c2a180a20715f7467a95708
68K Assembly
If you really need to get the most out of the Amiga while staying super close to the metal, and are starting from virtually nothing, then you might want to look at Photon's video tutorials: https://www.youtube.com/watch?v=p83QUZ1-P10&list=PLc3ltHgmiidpK-s0eP5hTKJnjdTHz0_bW
These tutorials are rather demoscene-oriented, but the visual side of a game-dev project is mostly similar to what you will find in a demoscene project (using BOBs, sprites, the blitter, copper lists, playing ProTracker modules...). For a taste of that style, see the sketch below.
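If full assembly feels like too big a first step, the same hardware-banging style can be tried from C by poking the custom chip registers directly. A hedged toy sketch (the register addresses are the documented classic-Amiga ones; running this under the OS will fight with the system, so real demos take over the machine first):

```c
/* Toy "copper bars without the Copper": read the video beam position and
   write the background colour register in a busy loop. COLOR00 lives at
   0xDFF180 and VHPOSR at 0xDFF006 on all classic Amigas. */
#include <exec/types.h>

#define VHPOSR  (*(volatile UWORD *)0xDFF006)  /* beam: vertical low bits + horizontal */
#define COLOR00 (*(volatile UWORD *)0xDFF180)  /* background colour, 12-bit RGB        */

int main(void)
{
    ULONG i;
    for (i = 0; i < 2000000UL; i++) {          /* crude busy loop; a real effect
                                                  would sync to the vertical blank */
        UWORD line = (VHPOSR >> 8) & 0xFF;     /* current raster line (low 8 bits) */
        COLOR00 = (UWORD)(line << 4);          /* colour gradient tied to the beam */
    }
    return 0;
}
```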
My only recommendation, whether you have the proper hardware at home or not, is to do most of your work inside an emulator. WinUAE is a near-perfect, highly accurate emulator. You will also benefit from all the modern tools to edit/version your code (Visual Studio Code, Git...).
Besides the Amiga, I would recommend two oldschool-flavored machines (as of 2018):
The SEGA Megadrive/Genesis, which can be addressed in a super friendly way using the excellent SGDK library: https://github.com/Stephane-D/SGDK
The Pico-8, a virtual oldschool console that can (and must) be programmed in Lua. Pico-8 can be understood as an emulator for an 8-bit-ish game console that never existed. Its community is really active, though. It can be found here: https://www.lexaloffle.com/pico-8.php
Again, regarding the Megadrive, an emulator might be your best friend.
Once your creation is worth testing on the real machine, and if you need to "burn" (copy) it onto a floppy disk, my recommendation would be to get an Amiga 1200 with a PCMCIA/CompactFlash adapter. Using a Gotek (a USB-stick floppy-drive emulator) can be a cheaper alternative that will work on your A500 too (but please don't butcher your Amiga to fit the Gotek in it).
The Megadrive has a nice option nowadays, thanks to the Everdrive SD-card adapter, which allows you to run any ROM file of your own on the real console.

To run/test your Amiga stuff on Windows, use WinUAE: http://www.winuae.net/
You can use pretty much any programming language you prefer, but C is probably best supported (many compilers, official docs), and assembler (for the Motorola MC68000 CPU) gives you total control of the hardware for fancy effects and super-efficient code (fast graphics, ...). There are also (more or less) Amiga-specific languages, e.g. AMOS or BlitzBasic/AmiBlitz, which (more or less) are designed to handle Amiga-specific stuff or simplify things (screen handling, sprites, music, hardware access, ...).
If you want to use C, then it's probably best to use the Amiga's "official" compiler, if possible: SAS/C (formerly Lattice C). If you can't find that one, try vbcc, gcc, DICE C, ... (this CD has a full gcc/POSIX development environment, and much more: https://archive.org/details/cdrom-geek-gadgets-2 )
There are a lot of C- and AmigaOS-coding docs here: http://amigadev.elowar.com/
A PC floppy disk drive won't read or write your Amiga floppy disks out of the box; you need some additional hardware. Try googling some of these: ADTWin, ADF-Copy / ADF-Drive, Disk2FDI, "Arduino Amiga Floppy Disk Reader/Writer".
A null-modem cable (e.g. a null-modem serial-to-USB adapter cable to a PC or Linux box) is a simple way to transfer files from/to the Amiga - try googling "amiga SER: nullmodem", or "Amiga Explorer" (a Windows application that also lets you write .adf disk images directly from the PC to the Amiga's floppy drive).
Well, of course you can learn from old stuff on floppy disks; it just depends on the contents of the disk and your know-how. ;-) If the disk contains some source code, you're good. More likely - with any commercial game, etc. - it will contain an executable made of machine code (written in assembler, or compiled from C code, or whatever), or even a custom loader (a non-AmigaDOS disk, with no DOS access), for which you need some serious understanding of M68000 instructions, the Amiga hardware, and maybe the AmigaOS API. In other words: it's probably "over-compiled", unless you're an assembler guru, or want to become one.
Note: as you said you want to write a game for your Amiga 500, we're talking "classic" Amiga: m68k CPU; A500, A1200, A2000, etc.; Workbench/Kickstart up to version 3.9/3.1; floppy disks; and so on, including modern re-implementations like Minimig, Vampire, M5, etc.
If you're interested in non-classic, so-called "next-generation", Amiga computing, try googling some of these: AmigaOS4, AROS, MorphOS, AmigaONE.

Maybe you are looking for a ready-made game engine:
REDPILL
Scorpion Engine
Amiga C Engine

Related

Term for a smallish program built on top of a larger library/framework?

What exactly do you call a relatively smallish program/application built on top of a larger library/framework?
The small program (foo) in question does a whole lot but much of the heavy lifting is done by the library (bar) on top of which this program is built. When I say that I designed/developed 'foo' with such and such capabilities I do not want to convey the wrong idea that I coded everything, including the low level stuff, all by myself.
Edit: Just to clarify, this is a numerical code built on top of a numerical library.
TL;DR: almost nobody is going to think you invented everything. If they do, it's a good opportunity to teach them some high-level facts about computers and software architecture.
In general, this is just an application, utility, or tool. Your runtime context may throw in additional adjectives (e.g. command-line tool, web application, etc).
I think your worries regarding attribution are probably unfounded. If your project is open source, your documentation will certainly need to list the build and runtime dependencies. If there are different licenses, you'll likely also have to ship those with your tool. So it's unlikely that anyone other than people entirely unfamiliar with software engineering would get the "wrong idea".
Furthermore, nearly every software package is built on top of some sort of toolkit. For example, even basic utilities like ls, cp, etc. are built on top of the standard C library and make use of system calls provided by the operating system. Indeed, without the OS, such utilities have no runtime environment in which to execute. The OS has nothing to do if there is no hardware for it to manage (and even some of that hardware is likely to have firmware -- which is just software-on-chip -- to control some of its behavior regardless of an operating system).
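To make that concrete, here is a hedged sketch of about the smallest useful "utility" imaginable: a cat-like filter whose heavy lifting is all done by the C library and the kernel's read/write system calls. Nobody would say its author "invented" I/O:

```c
/* Mini cat: copy stdin to stdout. Everything below main() is the C
   library and the OS doing the actual work. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];
    ssize_t n;

    while ((n = read(STDIN_FILENO, buf, sizeof buf)) > 0)   /* kernel fills buf */
        if (write(STDOUT_FILENO, buf, (size_t)n) != n)      /* kernel drains it */
            return 1;
    return n < 0 ? 1 : 0;
}
```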
The higher up the stack you move, the harder it becomes for someone to mistake the work you did versus the work you built upon. A web application needs an HTTP server, possibly a module interface or CGI environment, a language to express the intent of the software, etc. And then all of this is built on top of the OS, which goes down to hardware, some firmware, etc.
Finally, even if the library does the heavy lifting, that doesn't detract from the value of your software. If your software does a number of very useful things, it doesn't matter whether the library enabled your software to do those things. Some of the most important inventions in history are super simple in retrospect. It just took someone to see how to combine the parts in a different way. This is effectively what we do with software.
If someone does seem to get the wrong idea, this is perhaps a good time to educate them about the complexities of computing environments, the interrelationships between software components, the software stack, etc. It also might be fine to just let it slide and say, "Thank you!"

Scripting Languages vs. Compiled Languages for web development

Though I come from a purely PHP background on the web development side of programming, I have also spent much time with C# and C++ on the desktop.
I don't really want to spark any flame wars, but:
When should you use scripting languages over compiled languages for website development?
(and vice versa)
Just to clarify, for the sake of this question, I define a "scripting language" to mean an interpreted language like PHP, Python, or Ruby, and a "compiled language" to mean a strongly typed, compiled language like C#, C++, Java, or VB.
It depends :-)
On...
...where and how you want to deploy the application
...the skillsets of the engineers in your organization
...what third-party components you want to integrate with or incorporate
Deployment
If you need to be able to deploy the solution on any of dozens of different possible platforms, you may find that you're better off with PHP than Java (for example). There are hundreds of thousands of Java hosting providers out there, but there are probably millions of PHP hosting providers. (And I say this as a Java-head who finds PHP "so so" at best.)
This goes for the OS as well. Mono aside, .Net stuff is going to limit you to Windows-based deployment (or lagging behind the cutting edge and having to very, very rigorously test each and every 3rd party component you bring in, to ensure that it doesn't have Mono...issues).
Skillsets
Coming up to speed in an environment or language is non-trivial. For most of us, picking up the basics is pretty quick, but you may not be making the best architectural/design decisions because you're (comparatively) weak on the environment/language. Skillsets count.
Related to this: Skillset hiring counts. Is it easier (and/or cheaper) to hire PHP devs with 3-4 years of experience, or Java devs with 3-4 years of experience, or C# devs, or...?
Buying/finding/integrating vs. building
In your target area of development, which server-side components or packages will you want to integrate with? PHP has a vast array of things available for it, as does Java, as does C# or ASP.Net. But they're different things (by and large), so you'll want to look at what you actually want to use.
Conclusion
So I think it's less a matter of compiled vs. scripted (in today's world), and more a matter of what's the best fit by other criteria for what you're trying to do.
Addendum: Both/And
And of course, there's always "both/and". For instance, I do work in two main, unrelated environments right now, both using a combination of scripted and compiled resources. (One of them is Java + JavaScript via Rhino on Tomcat, the other is compiled COM objects + JScript [again, server-side] on IIS.)
A programmer can write good/bad, fast/slow, scalable/unscalable code in any language, although some languages and technologies make it harder to do. In my experience, with scripting languages you can produce a small- to medium-scale application faster than you can with compiled languages like Java. However, as applications grow in size, compiled languages become more suited to the task. I think this comes from strong typing of objects, deeper layers of architecture to manage tasks, and more QA frameworks to verify things are running as they should as changes occur.
I find it to be mostly a matter of opinion. At first I hated the pre-compiled web applications ASP.NET provides, but I've gotten used to it, so I don't hate it anymore. It has advantages and disadvantages:
Pro
pre-compiled web applications are easy to deploy, often you'll only have to update the bin-directory
pre-compiled web applications perform well
you don't have to upload source code, which is nice imho.
Con
updating a pre-compiled web app generally means the web application is reset, so unless you've moved session state out of process, it'll end all sessions and log everyone out
rebuilding a large web application can take some time, which is added to the time it took you to write the changes in the first place. I am sometimes impatient.
I've always liked how easy it is to just update one file in a PHP project without having to rebuild a project or anything like that; on the other hand, .NET has a nice IDE that allows you to debug everything, from the back end (C#, VB.NET) to the front end (JavaScript), in one package.
But again; both have advantages and disadvantages.
I wouldn't draw such a sharp distinction between compiled and interpreted languages - this is really just an implementation detail, and tends to change with time (faster than the languages themselves change.) Case in point - thanks to Facebook, PHP is now a "compiled language" too. Another case in point - I enjoy web development with Scheme - and my preferred Scheme implementation now runs a VM and in that sense is at least as compiled as Java is.
So I think the issues to focus on are the expressiveness of the language, its performance, and its ease of deployment - compiled vs. interpreted is only important insofar as it relates to these things.
I'm a big fan of compiled languages everywhere, if for nothing more than the static typing. On the other hand, scripting languages are very convenient -- no binaries to deal with, only text files, which is a big win for web servers.
In the end, it doesn't really matter -- use whatever language you know and feel most comfortable with for the job.
I think that speed is a key concern in a web application, in particular
how fast is it to write my code
how fast is it to fix my code
how fast is it to refactor my code
how fast is it to test my code
That is, I am concerned about the speed of the slowest link: myself. Anything else is fast enough for Twitter-like loads.
Today, the number one on my evaluation list for a new project would be Tornado and Python.
If I had a choice of platforms, of course.
Ah, Python is among the fastest of the scripting languages.
With scripting languages, anyone who has a copy of your software could potentially modify your source code, because the software ships as plain source.
With compiled languages, anyone who has a copy of the software cannot simply modify your source code, because it is distributed in compiled form.
So I guess, it depends upon your preferences.

What language will protect my source code?

I wish to create shareware software that contains a registration algorithm. I am looking for a programming language, that cannot be easily decompiled into readable code. For example, C# can be decompiled into readable code.
What are my options?
Edit: I'm looking for something that can be only decompiled into assembly. Delphi, for example, cannot be decompiled like C# or Java, but from what I've heard, Delphi is dying.
Delphi is not dying, it is alive and well.
As is the community, at delphifeeds.
You can also see more Delphi projects - freeware, shareware, and commercial - at the Delphi Wikia.
Thus I'd say Delphi is a very good choice for software development: freeware, shareware, or commercial.
Update:
On September 1st, 2011, Embarcadero released RAD Studio XE2. This release adds 64-bit compilation, LiveBindings, native Mac OS X compilation, iOS (via Xcode) and a whole lot more to the already powerful Delphi dev environment.
If your CPU is able to see the code and run it, by definition, a sufficiently talented person can do it too.
You can, however, make it harder by running your code through an obfuscator.
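As a toy illustration of what obfuscators do, consider just hiding a string so it never appears as plain text in the binary. The XOR key 0x5A and the message are made up for this example - real obfuscators also mangle control flow and symbols:

```c
/* The word "registered" never appears in the compiled binary; only the
   XOR-encoded bytes do, so `strings` on the executable finds nothing. */
#include <stdio.h>

int main(void)
{
    /* "registered" XORed with 0x5A (produced by a hypothetical build step) */
    static const unsigned char enc[] =
        {0x28, 0x3F, 0x3D, 0x33, 0x29, 0x2E, 0x3F, 0x28, 0x3F, 0x3E};
    char buf[sizeof enc + 1];
    unsigned i;

    for (i = 0; i < sizeof enc; i++)
        buf[i] = (char)(enc[i] ^ 0x5A);   /* decode at run time */
    buf[sizeof enc] = '\0';

    puts(buf);
    return 0;
}
```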
I'd suggest the language of business and economics can protect your program.
If you are targeting consumers, and price it at say $10, almost all people would find it easier to pay you the $10 vs going into your program and reverse engineering it.
If you are targeting corporations, and say pricing it at $10,000, it just has to be easier to get the purchasing department to approve the payment than to reverse engineer your code. For real companies who would purchase your product, it's not worth the audit risk to have unlicensed code running.
Lastly, what are the costs/benefits of protecting your code? If you write your program in assembly instead of C#, you might have far higher production costs while reducing the chance of reverse engineering. But does this cost outweigh the potential lost sales? Could the time be better spent adding value for people who will buy the product? Generally, trying to sell your product to people who never pay for software is not an economic strategy.
You could write it in Perl.
(I kid, I kid! Put down the pitchforks!)
As the others said, Delphi is NOT dying.
As the others said, there is no bulletproof method.
As the others said, there are tools to make the life of a cracker much harder.
But what the others didn't say is:
Java and .NET (etc.) obfuscation is rather a toy compared with the obfuscation tools for e.g. Delphi and other native solutions. (Of course, YMMV.)
A very good technology to relatively protect your executable can be found here.
Repeat after me: "Obscurity is not security."
You would be better off using a hard encryption algorithm (where "hard" doesn't mean "difficult", but "not bi-directional; not easily reversible").
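A sketch of that idea in C: the binary stores only a one-way hash of the valid key, so someone reading the executable finds nothing reversible. FNV-1a is used here purely for illustration (it is not cryptographically secure), and the expected value is a placeholder:

```c
/* Registration check that ships no secret: only a hash of the valid key
   is baked into the binary. Use a real cryptographic hash in practice. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint64_t fnv1a(const char *s)
{
    uint64_t h = 1469598103934665603ULL;   /* FNV-1a offset basis */
    while (*s) {
        h ^= (unsigned char)*s++;
        h *= 1099511628211ULL;             /* FNV-1a prime */
    }
    return h;
}

int main(void)
{
    /* Placeholder: a real build would bake in fnv1a() of the actual key. */
    const uint64_t expected = 0xDEADBEEFCAFEF00DULL;
    char key[64];

    if (fgets(key, sizeof key, stdin)) {
        key[strcspn(key, "\n")] = '\0';    /* strip the trailing newline */
        puts(fnv1a(key) == expected ? "registered" : "invalid key");
    }
    return 0;
}
```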
Isn't this logically impossible?
If you can run the code, you can get the instructions being executed by your CPU. At that point, your algorithm is readable, for some definitions of readable.
No language is capable of that, AFAIK, since it's impossible - it can always be reverse engineered. Though a good number of developers would cry if you coded it in brainfvck.
"I'm looking for something that can be only decompiled into assembly."
Try writing your program in assembly. That is the best possible solution.
If you're really concerned about people disassembling your software, make it software as a service (SaaS): http://en.wikipedia.org/wiki/Software_as_a_service
Try finding an obfuscator. As the name suggests it obfuscates the code enough that reverse-engineering it will not be trivial.
Or use C/C++. Those can be disassembled, but that's it.
Of course, this is just enough to keep the not-sufficiently-competent from understanding and reverse-engineering the code.
As Dominic said, if you can run it, it can be decompiled.
That said, I believe there are tools that obfuscate the compiled code and make it more difficult for someone to disassemble and reverse or take apart a registration process.
For example, I believe that major companies like Adobe and Microsoft use products like this, in order to make it much more difficult for folks to disassemble and crack their programs.
It's like security or cryptography or even the locks on your car/house - someone with enough time and resources can probably break through anything.
You just need to tilt the curve enough to make it sufficiently unattractive for anyone to really try, so that they'd be more likely to move onto easier targets.
I am going to make the assumption that because you're writing shareware and you mention a registration algorithm you are wanting to protect your software from a keygen or patch that bypasses the restrictions on your trial versions.
Really the most you can do is deter. Like others have mentioned there are obfuscation techniques available, but they are not preventative. There are commercial software packers available which compress the file and make it initially unreadable. But the program has to be decompressed at some point so the machine can run it, so it's still reversible.
And that is pretty much the crux of any anti-reversing technique you'll see: the code has to be interpreted by the machine at some point. More modern packers use anti-debugging techniques to deter the more novice reversers, but these techniques end up being documented rather quickly on popular reversing sites, and many of them are bypassed with nothing more than a simple debugger plugin.
The only way I can think of to protect your executable from being arbitrarily reversed is to run the whole thing on a server you control and just pipe the output to users. But that's not always feasible.
As far as your language options go, take a look at this. I can't really speak to how complete it is, but I'm sure some others can add languages they think of.
If you're looking "for something that can be only decompiled into assembly", that essentially means you want a language that gets compiled (or assembled) directly into a native-code executable.
The usual prime suspects then are C, C++, Delphi, VB6. Of course, also assembly meets your criteria, although I doubt you'd want to write any project of decent size in it.
Very simple: no programming language, no program, can protect your software.
A software cracker will reverse-engineer your app until it is just assembler, and will crack it.
All code can be read back in assembly. Someone can reverse engineer your application and see what the machine is doing.
This is not so much a matter of choosing the right language as it is finding a tool that will do code obfuscation for you. Nothing is bulletproof, but there are efforts to accomplish this sort of thing.
Eg. see this research project about Java code obfuscation.
You can't be 100% sure nobody will be able to read your code, but you can make it very hard. You can encrypt your code and modify it at run time.
For example I have not heard of any successful attempts to reverse engineering Skype.
You could always write it in APL. You could deliver the source and still no one would be able to understand it.
Any code running client-side can be reverse engineered with enough trial and error. In my opinion, make the client-side code contain only the GUI, while running the authentication and anything else security-critical on the server side.
On the other hand, if your app is a service that runs on the client side, such as a game, CAD, POS, or anything that needs high-quality code on the client, I'd recommend storing process outputs on your server side with an encrypted upload tool, then authenticating the client's key/account data every time before sending their data back. It is overkill for most projects, though.

Benefits of cross-platform development?

Are there benefits to developing an application on two or more different platforms? Does using a different compiler on even the same platform have benefits?
Yes, especially if you plan to distribute your code for multiple platforms.
But even if you don't, cross-platform development is a form of future-proofing: if it runs on multiple (diverse) platforms today, it's more likely to run on future platforms than something that was tuned, tweaked, and specialized to work on a version 7.8.3 clean install of vendor X's Q-series boxes (patch level 1452) and nothing else.
There seems to be a benefit in finding, and simply preventing, bugs by using a different compiler and a different OS. Different CPUs can pin down endian issues early (see the sketch below). There is pain at the GUI level if you want to stay native at that level.
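A hedged sketch of the kind of endian bug a second CPU flushes out: the bytes of an integer come out in opposite orders on big-endian machines (m68k, older MIPS) and little-endian ones (x86), so any code that writes raw structs to disk or the network silently disagrees between the two:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t v = 0x01020304;
    const unsigned char *p = (const unsigned char *)&v;

    /* big-endian prints "01 02 03 04", little-endian prints "04 03 02 01" */
    printf("%02x %02x %02x %02x\n", p[0], p[1], p[2], p[3]);
    return 0;
}
```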
Short answer: Yes.
Short of cloning a disk, it is almost impossible to make two systems exactly alike, so you are going to end up running on "different platforms" whether you meant to or not. By specifically confronting and solving the "what if system A doesn't do things like B?" problem head on you are much more likely to find those key assumptions your code makes.
That said, I would say you should get a good chunk of your base code working on system A, and then take a day (or a week or ...) and get it running on system B. It can be very educational.
My education came back in the 80's when I ported a source level C debugger to over 100 flavors of U*NX. Gack!
Are there benefits to developing an application on two or more different platforms?
If this is production software, the obvious reason is the lure of a larger client base. Your product's appeal is magnified the moment the client hears that you support multiple platforms. Remember, most enterprises do not use a single OS, or even a single version of the OS. It is fairly typical to find one section using Windows, another Macs, and a smaller one some flavor of Linux.
It is also often the case that tying a product to a single platform is more tedious than making it run on multiple platforms. The law of diminishing returns kicks in before you know it.
Of course, all of this makes little sense, if you are doing customization work for an existing product for the client's proprietary hardware. But even then, keep an eye out for the entire range of hardware your client has in his repertoire -- you never know when he might ask for it.
Does using a different compiler on even the same platform have benefits?
Yes, again. Different compilers implement different extensions. See to it that you are not dependent on a particular version of a particular compiler.
Further, there may be a bug or two in the compiler itself. Using multiple compilers helps sort these out.
I have also seen bits of a (cross-platform) product using two different compilers - one was used in those modules where floating-point manipulation required a very high level of accuracy. (It's been a while since I've heard of anyone else doing that, but ...)
I've ported a large C++ program, originally Win32, to Linux. It wasn't very difficult. Mostly dealing with compiler incompatibilities, because the MS C++ compiler at the time was non-compliant in various ways. I expect that problem has mostly gone now (until C++0x features start gradually appearing). Also writing a simple platform abstraction library to centralize the platform-specific code in one place. It depends to what extent you are dependent on services from the OS that would be hard to mimic on a new platform.
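As a hedged sketch of what one corner of such a platform abstraction library can look like, here is a single portable call whose platform-specific body is hidden behind the preprocessor (the function name is invented for the example):

```c
/* port_sleep_ms: one API for callers, two implementations underneath. */
#ifdef _WIN32
#include <windows.h>

void port_sleep_ms(unsigned ms)
{
    Sleep(ms);                          /* Win32 sleeps in milliseconds */
}
#else
#include <time.h>

void port_sleep_ms(unsigned ms)
{
    struct timespec ts;
    ts.tv_sec  = ms / 1000;
    ts.tv_nsec = (long)(ms % 1000) * 1000000L;
    nanosleep(&ts, NULL);               /* POSIX sleeps via a timespec */
}
#endif
```

Callers include one header and call port_sleep_ms(); only this one file knows which OS it is running on.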
You don't have to build portability in from the ground up. That's why "porting" is often described as an activity you can perform in one shot after an initial release on your most important platform. You don't have to do it continuously from the very start. Purely for economic reasons, if you can avoid doing work that may never pay off, obviously you should. The cost of porting later on, when really necessary, turns out to be not that bad.
Mostly, there is an existing platform that an application (custom software) is written for. But you reach more users (on both platforms) if you make it platform-independent.
Also, products (standard software) for SMEs can be sold better if they run on different platforms! You gain access to both markets, Windows and Linux (and Mac OS X, and so on...).
Big companies mostly buy only hardware that is supported/certified by the product vendor, just to deploy the specified product.
If you develop on multiple platforms at the same time, you get the advantage of being able to use different tools. For example, I once had a memory overwrite (I still swear I didn't need the +1 for the null byte!) that caused "free" to crash. I brought the code up on Windows and found the overwrite in about 1 minute with Rational Purify... it had taken me a week of chasing it under Linux (valgrind might have found it... but I didn't know about it at the time).
Different compilers on the same or different platforms is, to me, a must as each compiler will report different things, and sometimes the report from one compiler about an error will be gibberish but the other compiler makes it very clear.
Using multiple databases while developing means you are much less likely to tie yourself to a particular database, which means you can swap the database out if there is a reason to do so. If you want to integrate something that uses Oracle into an existing infrastructure that uses SQL Server, for example, it can really suck - much better if the Oracle or SQL Server pieces can be moved to the other system. (I know of some places that have three different databases for their financial systems... ick.)
In general, always developing for two or three things means that the odds of you finding mistakes is better, and the odds of the system being more flexible is better.
On the other hand all of that can take time and effort that, at the immediate time, is seen as an unneeded expense.
Some platforms have really dreadful development tools. I once worked in an IB where, rather than use Sun's ghastly toolset, people developed code in VC++ and then ported it to Solaris.

Flow Based Programming

I have been doing a little reading on Flow Based Programming over the last few days. There is a wiki which provides further detail, and Wikipedia has a good overview of it too. My first thought was, "Great, another proponent of lego-land pretend programming" - a concept harking back to the late 80's. But as I read more, I must admit I have become intrigued.
Have you used FBP for a real project?
What is your opinion of FBP?
Does FBP have a future?
In some senses, it seems like the holy grail of reuse that our industry has pursued since the advent of procedural languages.
1. Have you used FBP for a real project?
We've designed and implemented a DF server for our automation project (dispatcher, component interface, a bunch of components, a DF language, a DF compiler, UI). It is written in bare C++ and runs on several Unix-like systems (Linux x86, MIPS, AVR32, etc., and Mac OS X). It lacks several features, e.g. sophisticated flow control and complex thread control (there is only a not-too-advanced component for the latter), so it is just a prototype, even though it works. We're now working on a full-featured server; we learned a lot implementing and using the prototype.
Also, we'll make a visual editor some day.
2. What is your opinion of FBP?
2.1. First of all, dataflow programming is ultimate fun
When I met dataflow programming, I felt like I did 20 years ago, when I first met programming. Although DF programming differs from procedural/OOP programming, it's still just a kind of programming. There are lots of things to discover, even sooo simple ones! It's very funny when, as an experienced programmer, you meet a DF problem that is a very, very basic thing, yet was completely unknown to you before. So if you jump into DF programming, you will feel like a rookie programmer who has just met the "loop" or the "condition".
2.2. It can be used only for specific architectures
It's just a hammer, and hammers are for hammering nails. DF is not suitable for UIs, web servers, and so on.
2.3. Dataflow architecture is optimal for some problems
A dataflow framework can do magic things. It can parallelize procedures that were not originally designed for parallelization. Components are single-threaded, but when they're organized into a DF graph, they become multi-threaded.
Example: did you know that make is a DF system? Try make -j (see the man page for what -j does). If you have a multi-core machine, compile your project with and without -j, and compare the times.
2.4. Optimal split of the problem
If you're writing a program, you often split up the problem for smaller sub-problems. There are usual split points for well-known sub-problems, which you don't need to implement, just use the existing solutions, like SQL for DB, or OpenGL for graphics/animation, etc.
DF architecture splits your problem in a very interesting way:
the dataflow framework, which provides the architecture (just use an existing one),
the components: the programmer creates components; the components are simple, well-separated units - it's easy to make components;
the configuration, a.k.a. dataflow programming: the configurator puts the dataflow graph (the program) together using the components provided by the programmer.
If your component set is well designed, the configurator can build systems the programmer has never even dreamed of. The configurator can implement new features without disturbing the programmer. Customers are happy, because they get a personalized solution. The software manufacturer is also happy, because he/she doesn't need to maintain several customer-specific branches of the software, just customer-specific configurations.
2.5. Speed
If the system is built from native components, the DF program is fast. The only time loss is the message dispatching between components, and compared to a simple OOP program it's minimal.
3. Does FBP have a future?
Yes, sure.
The main reason is that it can solve massive multiprocessing issues without introducing brand-new strange software architectures or weird languages. Dataflow programming is easy, and I mean both parts: component programming and dataflow configuration building. (Even dataflow framework writing is not rocket science.)
Also, it's very economical. If you have a good set of components, you only need to put the lego bricks together. A DF program is easy to maintain. Building the DF config requires no experienced programmer, just a system integrator.
I would be happy if native systems spread, with the door open for custom component creation. There should also be a standard DF language, so it could be used with platform-independent visual editors and several DF servers.
Interesting discussion! It occurred to me yesterday that part of the confusion may be due to the fact that many different notations use directed arcs, but use them to mean different things. In FBP, the lines represent bounded buffers, across which travel streams of data packets. Since the components are typically long-running processes, streams may comprise huge numbers of packets, and FBP applications can run for very long periods - perhaps even "perpetually" (see a 2007 paper on a project called Eon, mostly by folks at UMass Amherst). Since a send to a bounded buffer suspends when the buffer is (temporarily) full, and a receive suspends when it is (temporarily) empty, indefinite amounts of data can be processed using finite resources.
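To make the bounded-buffer idea concrete, here is a minimal hedged sketch in C with pthreads (type, function names, and capacity are invented for the example): the sender suspends when the buffer is full and the receiver suspends when it is empty, which is exactly the property that lets unbounded streams run in finite memory:

```c
#include <pthread.h>

#define CAP 8   /* illustrative capacity */

typedef struct {
    void           *slot[CAP];          /* in-flight information packets */
    int             head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t  not_full, not_empty;
} Conn;

void conn_init(Conn *c)
{
    c->head = c->tail = c->count = 0;
    pthread_mutex_init(&c->lock, NULL);
    pthread_cond_init(&c->not_full, NULL);
    pthread_cond_init(&c->not_empty, NULL);
}

void conn_send(Conn *c, void *packet)
{
    pthread_mutex_lock(&c->lock);
    while (c->count == CAP)                       /* full: upstream process suspends */
        pthread_cond_wait(&c->not_full, &c->lock);
    c->slot[c->tail] = packet;
    c->tail = (c->tail + 1) % CAP;
    c->count++;
    pthread_cond_signal(&c->not_empty);
    pthread_mutex_unlock(&c->lock);
}

void *conn_receive(Conn *c)
{
    void *packet;
    pthread_mutex_lock(&c->lock);
    while (c->count == 0)                         /* empty: downstream process suspends */
        pthread_cond_wait(&c->not_empty, &c->lock);
    packet = c->slot[c->head];
    c->head = (c->head + 1) % CAP;
    c->count--;
    pthread_cond_signal(&c->not_full);
    pthread_mutex_unlock(&c->lock);
    return packet;
}
```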
By comparison, the E in Grafcet comes from Etapes, meaning "steps", which is a rather different concept. In this kind of model (and there are a number of these out there), the data flowing between steps is either limited to what can be held in high-speed memory at one time, or has to be held on disk. FBP also supports loops in the network, which is hard to do in step-based systems - see for example http://www.jpaulmorrison.com/cgi-bin/wiki.pl?BrokerageApplication - notice that this application used both MQSeries and CORBA in a natural way. Furthermore, FBP is natively parallel, so it lends itself to programming of grid networks, multicore machines, and a number of the directions of modern computing. One last comment: in the literature I have found many related projects, but few of them have all the characteristics of FBP. A list that I have amassed over the years (a number of them closer than Grafcet) can be found in http://www.jpaulmorrison.com/cgi-bin/wiki.pl?FlowLikeProjects .
I do have to disagree with the comment about FBP being just a means of implementing FSMs: I think FSMs are neat, and I believe they have a definite role in building applications, but the core concept of FBP is of multiple component processes running asynchronously, communicating by means of streams of data chunks which run across what are now called bounded buffers. Yes, definitely FSMs are one way of building component processes, and in fact there is a whole chapter in my book on FBP devoted to this idea, and the related one of PDAs (1) - http://www.jpaulmorrison.com/fbp/compil.htm - but in my opinion an FSM implementing a non-trivial FBP network would be impossibly complex. As an example, the diagram shown in the image linked from the original answer is about 1/3 of a single batch job running on a mainframe. Every one of those blocks runs asynchronously with all the others. By the way, I would be very interested to hear more answers to the questions in the first post!
1: http://en.wikipedia.org/wiki/Pushdown_automaton Push-down automata
Whenever I hear the term flow-based programming I think of LabVIEW, conceptually - i.e. component processes whose scheduling is driven primarily by changes to their input data. This really IS lego programming, in the sense that the LabVIEW platform was used for the latest crop of Mindstorms products. However, I disagree that this makes it a less useful programming model.
For industrial systems, which typically involve data collection, control, and automation, it fits very well. What is any control system if not data-in transformed to data-out? And what component in your control scheme would you not prefer to represent as a black box in a bigger picture, if you could do so? To achieve that level of architectural clarity using other methodologies, you might have to draw a data-domain class diagram, then a problem-domain run-time class relationship, then on top of that a use-case diagram, and flip back and forth between them. With flow-driven systems you have the luxury of being able to collapse a lot of this information together, accurately enough that you can realistically design a system visually once the components are built and defined.
One question I never had to ask when looking at an application written in LabVIEW is "What piece of code set this value?", as it was inherently easy to trace backwards from the data, and mistakes like multiple unintended writers were impossible to create by accident.
If only that was true of code written in a more typically procedural fashion!
1) I built a small FBP framework for an anomaly detection project, and it turned out to be a great idea.
You can also have a look at some of the KNIME videos, which give a good idea of what a flow-based framework feels like when it is put together by a great team. Admittedly, it is batch-based and not created for continuous operation.
By far the best example of flow-based programming, however, is UNIX pipes - one of the oldest and most overlooked FBP frameworks. I don't think I have to elaborate on the power of *nix pipes...
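To underline the point, the pipe model is easy to reproduce in a few lines of C (a hedged sketch): two processes become two "components", and the pipe between them is the connection, just like producer | consumer in the shell:

```c
/* Two "components" wired by a pipe: the child produces a stream of bytes,
   the parent is an uppercasing filter consuming it. */
#include <ctype.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) < 0)
        return 1;

    if (fork() == 0) {                        /* upstream component */
        close(fd[0]);
        const char *msg = "flow based programming\n";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);                         /* closing = end of stream */
        _exit(0);
    }

    close(fd[1]);                             /* downstream component */
    char ch;
    while (read(fd[0], &ch, 1) == 1)
        putchar(toupper((unsigned char)ch));
    close(fd[0]);
    wait(NULL);
    return 0;
}
```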
2) FBP is a very powerful tool for a large set of problems. The intrinsic parallelism is a great advantage, and any FBP framework can be made completely network transparent by using adapter modules. Smart frameworks are also absurdly fault tolerant, and able to dynamically reload crashed modules when necessary. The conceptual simplicity also allows cleaner communication with everybody involved in a project, and much cleaner code.
3) Absolutely! Pipes are here to stay, and are one of the most powerful features of Unix. The powers inherent in an FBP framework, compared to a static program, are many, and they trivialise change, to the point where some frameworks can be reconfigured while running, with no special measures.
FBP FTW! ;-)
In automotive development there is a language-agnostic messaging protocol that is part of the MOST specification (Media Oriented Systems Transport), designed for communication between components over a network or within the same device. Systems usually have both a real and a virtualized message bus - so you effectively have a form of flow-based programming.
That was what made the light bulb go on for me several years ago and brought me here. It really is a fantastic way to work, and so much more fun than conventional programming. The message catalog forms the central specification and point of reference. It works well for both developers and management; i.e. management can browse the message catalog instead of looking at source.
With integrated logging that also references the catalog to produce intelligible analysis, things can get really productive. I have real-world experience of developing commercial products in this way, and I am interested in taking things further, particularly with regard to tools and IDEs. Unfortunately, I think many people within the automotive sector have missed the point of how great this is and have failed to build on it. They are now distracted by other fads, having failed to realize that there was far more to MOST development than the physical bus.
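A hypothetical sketch (not the MOST API - every name here is invented) of the catalog-plus-bus idea: components subscribe to typed messages from a catalog, and a tiny bus fans each published message out to every subscriber:

```c
/* Toy message bus: the enum is the "catalog", publish() is the bus. */
#include <stdio.h>

enum MsgId { MSG_VOLUME_SET, MSG_TRACK_CHANGED, MSG_COUNT };

typedef struct { enum MsgId id; int value; } Msg;
typedef void (*Handler)(const Msg *);

#define MAX_SUBS 4
static Handler subs[MSG_COUNT][MAX_SUBS];
static int     nsubs[MSG_COUNT];

static void subscribe(enum MsgId id, Handler h) { subs[id][nsubs[id]++] = h; }

static void publish(const Msg *m)
{
    for (int i = 0; i < nsubs[m->id]; i++)
        subs[m->id][i](m);                /* fan out to all subscribers */
}

/* two toy components reacting to the same catalog entry */
static void amp(const Msg *m)     { printf("amp: volume -> %d\n", m->value); }
static void display(const Msg *m) { printf("display: show volume %d\n", m->value); }

int main(void)
{
    subscribe(MSG_VOLUME_SET, amp);
    subscribe(MSG_VOLUME_SET, display);

    Msg m = { MSG_VOLUME_SET, 11 };
    publish(&m);                          /* both components receive it */
    return 0;
}
```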
I've used Spring Web Flow extensively in Java web applications to model (typically) application processes, which tend to be complex wizard-like affairs with lots of conditional logic as to which pages to display. It's incredibly powerful. When a new product was added, I managed to recut the existing pieces into a completely new application process in an hour or two (adding a couple of new views/states).
I also looked into using OS Workflow to model business processes but that project got canned for various reasons.
In the Microsoft world you have Windows Workflow Foundation ("WF"), which is becoming more popular, particularly in conjunction with SharePoint.
FBP is just a means of implementing a finite state machine. It's nothing new.
I realize that it is not exactly the same thing, but this model has been used for years in PLC programming. The IEC standard calls it a Sequential Function Chart, but many people call it Grafcet after a popular implementation. It offers parallel processing and defines transitions between states.
It's being used in the Business Intelligence world these days to mash up and process data. Data-processing steps like ETL, querying, joining, and producing reports can be done by the end user. I'm a developer on an open system, ComposableAnalytics.com. In CA, the flow-based apps can be shared and executed via the browser.
This is what MQ Series, MSMQ and JMS are for.
This is cornerstone of Web Services and Enterprise Service Bus implementations.
Products like TIBCO and Sun's JCAPS are basically flow-based without using this particular buzzword.
Most of the work of the application is done with small modules that pass messages through a processing network.