Term for a smallish program built on top of a larger library/framework? - terminology

What exactly do you call a relatively smallish program/application built on top of a larger library/framework?
The small program (foo) in question does a whole lot, but much of the heavy lifting is done by the library (bar) on top of which it is built. When I say that I designed/developed 'foo' with such-and-such capabilities, I do not want to convey the wrong idea that I coded everything, including the low-level stuff, all by myself.
Edit: Just to clarify, this is a numerical code built on top of a numerical library.

TL;DR: almost nobody is going to think you invented everything. If they do, it's a good opportunity to give them some high-level education about computers and software architecture.
In general, this is just an application, utility, or tool. Your runtime context may throw in additional adjectives (e.g. command-line tool, web application, etc.).
I think your worries regarding attribution are probably unfounded. If your project is open source, your documentation will certainly need to list the build and runtime dependencies. If there are different licenses, you'll likely also have to ship those with your tool. So it's unlikely that anyone other than people entirely unfamiliar with software engineering would get the "wrong idea".
Furthermore, nearly every software package is built on top of some sort of toolkit. For example, even basic utilities like ls, cp, etc. are built on top of the standard C library and make use of system calls provided by the operating system. Indeed, without the OS, such utilities have no runtime environment in which to execute. The OS has nothing to do if there is no hardware for it to manage (and even some of that hardware is likely to have firmware -- which is just software-on-chip -- to control some of its behavior regardless of an operating system).
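To make that concrete, here's a minimal sketch (assuming a POSIX system) of the core of an ls-style utility; essentially every line of it delegates to the C library, which in turn delegates to kernel system calls:

    /* A minimal ls-like listing tool: all the heavy lifting is done by
     * the C library and, beneath it, the kernel's system calls. */
    #include <dirent.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        const char *path = (argc > 1) ? argv[1] : ".";
        DIR *dir = opendir(path);   /* libc call wrapping kernel syscalls */
        if (dir == NULL) {
            perror(path);
            return 1;
        }
        struct dirent *entry;
        while ((entry = readdir(dir)) != NULL)
            puts(entry->d_name);
        closedir(dir);
        return 0;
    }

Nobody would accuse the author of such a tool of having written the directory-reading machinery themselves; the same logic applies one level up the stack.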
The higher up the stack you move, the harder it becomes for someone to mistake the work you did versus the work you built upon. A web application needs an HTTP server, possibly a module interface or CGI environment, a language to express the intent of the software, etc. And then all of this is built on top of the OS, which goes down to hardware, some firmware, etc.
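As a sketch of how thin the top layer can be, assuming an HTTP server with plain CGI support, a "web application" can be as small as this; the server, the OS, and the hardware underneath do all the rest:

    /* A minimal CGI program: the HTTP server handles sockets, request
     * parsing, and process management; the "application" is just this. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const char *query = getenv("QUERY_STRING"); /* supplied by the server */
        printf("Content-Type: text/plain\r\n\r\n");
        printf("Hello from the top of the stack!\n");
        printf("Query: %s\n", query ? query : "(none)");
        return 0;
    }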
Finally, even if the library does the heavy lifting, that doesn't detract from the value of your software. If your software does a number of very useful things, it doesn't matter whether the library enabled your software to do those things. Some of the most important inventions in history are super simple in retrospect. It just took someone to see how to combine the parts in a different way. This is effectively what we do with software.
If someone does seem to get the wrong idea, this is perhaps a good time to educate them about the complexities of computing environments, the interrelationships between software components, the software stack, etc. It also might be fine to just let it slide and say, "Thank you!"

I want to write a tool without usage entry barriers. Do I have to write it in C? [closed]

I want to write an open-source tool for use by developers. I want to eliminate entry barriers, so if they like the idea, they just get the tool and start playing with it.
In particular, I don't want an "Oh, should I also install 200Mb of ThatLanguage runtime libraries? Oh, so they don't build on my latest version of Linux?" entry barrier.
Should I write this tool in C, then? Or is Python, or Java, or whatever, already sufficiently widespread that I needn't worry about this sort of thing at all (everyone already has them installed)?
Well, of course I know that they are freaking hugely widespread, but still - are there any major benefits to writing a super-lightweight zero-dependency tool, or am I being too much of a perfectionist?
Just write it first. If it is worth it people will use it.
Beyond that, (almost) everyone has Java, Python, and Ruby installed (especially devs). Some languages are still esoteric enough that it might not be worth it for 'that one app' (Erlang, Haskell, etc.).
Just write it though, that's the important part. From there it can be ported, rewritten, adopted, but none of that can happen if the tool isn't written first.
It won't help if people don't know C.
If you write your own DSL, you can have people use that API and not worry about which language you choose.
Write it in whatever common language you like. Everybody has the .NET Framework or a JVM installed. The only difference between your C approach and Java or C# is that you would link additional libraries directly into your program (as opposed to relying on standard libraries).
On the other hand, I would hesitate to write it in some exotic language, for example Smalltalk, because a normal user does not know what Squeak or Smalltalk is and could be worried about installing the weird thing :-).
I also think you should be more concerned about developers, because you write that you want it to be open source. I don't know anyone who wants to write his own Swing, Spring, or any other framework just to be independent of something. Also, it's (usually) much faster and easier to write it in a JIT-compiled language than to code it in assembler...
I'm going to suggest what Reese suggested but take a slightly different approach: write it first, preferably in a language that allows you to quickly prototype and develop your program. Then, and this is the most important part, document the protocol you've developed.
I'm giving this advice because you mentioned that your "application" may later have bindings in lots of different languages and it is a client/server architecture. Well, two of the biggest applications in the world started out like this.
BitTorrent started out as Python code. This allowed very quick prototyping of the concept to get it working. The main thing it had going for it was that the original code was well written and well documented. That later allowed other people to port the protocol to other languages.
HTTP and HTML are an even bigger success story, and they started out with an even less popular language at the time: Objective-C. Even better than BitTorrent, the protocol itself is very simple and very well documented. People didn't care that the original implementation was in a language they'd never seen before that uses square brackets in strange ways on a NeXT cube. The concept and execution were good, and people quickly ported it to their favourite programming languages. Again, Objective-C was chosen to aid in quick prototyping. Legend has it that the original implementation was written in just a couple of days.
I would say yes, you have to write it in C. If it were written in any language other than C (except perhaps C++ or Perl), I would definitely stop to consider whether the necessary build tools, runtime tools, and/or interpreter for that language would be available everywhere I might need the tool before getting myself dependent upon it. If the tool were meant for use in build scripts, I would consider it a complete show-stopper, since I can't expect anyone who wants to build my software to have random arbitrary language environments installed.
The reason I mentioned C++ and Perl as exceptions is that they're both largely portable in a formal sense. They have implementations that work without significant ties to the host implementation, and can be built not just on any current popular system but on any system that remotely adheres to standards. Python is quite the opposite, with strong dependencies on the underlying system's dynamic loader; I've been completely unable to get Python to work on various systems that only support static linking.
OCaml is another possible choice with a very portable implementation, but it's not widely installed, and people who aren't familiar with it tend to frown on it for no good reason.
If you write your program in C, then you will have a dependency on the platform (Windows != Linux != AIX, etc.). If you are talking only about writing this tool for one OS, or rather THE OS (Linux ;-), then I think you can have a reasonable amount of confidence that your app will work on almost any system, especially if you use an open-source language. If you want to run the app on Windows, I wouldn't count on any of those languages being installed on the host system. Your highest confidence across platforms will be with Java.
If possible, you could use the lightest-weight framework available and put it online, where it can be viewed in a browser. What does your app do? Would it work as a web app?
I would suggest going for Delphi. If you want to make it portable, you can, since most Delphi code is Kylix-compatible.

Are there any tools that make keeping the UML models in-sync with the code completely seamless?

UML Round-Trip Engineering tools with seamless synchronization?
The Rational suite purports to do it. But it's so pricey and so clunky at drawing (worse than the Rose days) that it's not within the reach of most departments.
What's amazing is that the free Bouml seems to do a fantastic job; it just feels too clunky to use. It has a great deal of functionality, is free (!), very fast, and reverse-engineers complex C++ very well. It also has some nice diagram support, including a very nice sequence diagram. Although the interface is unpolished (and constantly opens dialogs on the rightmost monitor), it has the beginnings of a very capable product. It's a shame that the interface is so bare-bones and requires a lot of effort. Maybe that's because the author puts most of his time into the actual functionality. Does anyone have experience using Bouml throughout the product lifecycle?
That leaves the pricey MagicDraw, the very-capable yet reasonably-priced Enterprise Architect, and the slick-looking Visual Paradigm. Of these, only Visual Paradigm had an issue reverse-engineering my project's C++ headers.
MagicDraw has a strange, old feel. It does a good job at reverse-engineering on its own, although it remains to be seen whether round-trip engineering of complex C++ projects is seamless. They want over $1800 for the multi-language version, so it's priced similarly to Rational tools.
Enterprise Architect, although far less expensive than most, seems like it may be the most feature-complete. It parses and generates C++ flawlessly; even the comments and formatting are left intact. There are great training materials. But it doesn't handle Objective-C, so it's less useful for iOS and Mac OS X mixed-code projects. The automatic sequence diagram generation sounds awesome, but it sounds like it only works on Windows .NET projects.
Visual Architect (>$800 for multi-language 2-way) is by far the best-looking software modeling tool I've come across. Although it may have some round-trip issues remaining, it is a pleasure to use for building models by hand; it's even nicer than Rose was in some ways. It has an intuitive way of bringing up the tools you need right at the cursor. Yet as I mentioned, it currently falls short of the goal of keeping the model in sync with the source. And it often doesn't even give notification that an import didn't fully work, or that duplicate classes (with the same names) have been created. It also makes entry of message parameters difficult, using dialogs, whereas others allow the parameters to be changed right on the diagram. (The free Bouml excels at this, as do MagicDraw and others.)
Has anyone found a multi-language (Java, C++, C#, ObjC++, Python, Ruby, SQL) round-trip engineering tool that will hold up to real world projects, where customizations are handled (like custom parameters on messages), yet are not wiped out by the next source code import?
And where all the formatting and comments are completely preserved on generation. Close is not really good enough. If the tools mess up the source code formatting, no developer is going to want the tool run on his source.
Peter Coad's Together-J used to have diagrams and an editor together in one IDE (hence the name). Change a diagram and the code changes, and the same in the other direction.
The UML tool and editor were both a bit slow. I think machines of the day were underpowered and didn't show it off to best advantage.
I believe Peter Coad sold it to Borland. Looks like Borland is out of the IDE business. You can still get it here.
I think IntelliJ is the best Java IDE there is. You can generate some nice UML diagrams using it.
The real question is: Why is UML so important? I'd rather have code. I usually do enough UML to get the idea across, write the code with unit tests, and then reverse engineer it for documentation. You can't debug or unit test UML diagrams. Better to have working code.
Bouml ... constantly opens dialogs on the rightmost monitor
In a multiple-monitor configuration, the best approach is to tell Bouml which monitor to use by default; otherwise, Bouml treats all your monitors as one very large monitor. Of course, setting a default monitor doesn't mean you can't use the other one(s), and it is still possible to move the dialogs/main window wherever you want. The default monitor is set through the environment dialog.
Enterprise Architect seems to do a good job at this. As you point out, it's reasonably-priced. And it will also generate diagrams and documentation, as well as import/export source code.

Scripting Languages vs. Compiled Languages for web development

Though I come from a purely PHP background on the web development side of programming, I have also spent much time with C# and C++ on the desktop.
I don't really want to spark any flame wars, but:
When should you use scripting languages over compiled languages for website development?
(and vice versa)
Just to clarify, for the sake of this question, I define a "scripting language" to mean an interpreted language like PHP, Python, or Ruby, and a "compiled language" to mean a statically typed, compiled language like C#, C++, Java, or VB.
It depends :-)
On...
...where and how you want to deploy the application
...the skillsets of the engineers in your organization
...what third-party components you want to integrate with or incorporate
Deployment
If you need to be able to deploy the solution on any of dozens of different possible platforms, you may find that you're better off with PHP than Java (for example). There are hundreds of thousands of Java hosting providers out there, but there are probably millions of PHP hosting providers. (And I say this as a Java-head who finds PHP "so so" at best.)
This goes to OS as well. Mono aside, .Net stuff is going to limit you to Windows-based deployment (or lagging behind the cutting edge and having to very, very rigorously test each and every 3rd party component you bring in, to ensure that it doesn't have Mono...issues).
Skillsets
Coming up to speed in an environment or language is non-trivial. For most of us, picking up the basics is pretty quick, but you may not be making the best architectural/design decisions because you're (comparatively) weak on the environment/language. Skillsets count.
Related to this: Skillset hiring counts. Is it easier (and/or cheaper) to hire PHP devs with 3-4 years of experience, or Java devs with 3-4 years of experience, or C# devs, or...?
Buying/finding/integrating vs. building
In your target area of development, which server-side components or packages will you want to integrate with? PHP has a vast array of things available for it, as does Java, as does C# or ASP.Net. But they're different things (by and large), so you'll want to look at what you actually want to use.
Conclusion
So I think it's less a matter of compiled vs. scripted (in today's world), and more a matter of what's the best fit by other criteria for what you're trying to do.
Addendum: Both/And
And of course, there's always "both/and". For instance, I do work in two main, unrelated environments right now, both using a combination of scripted and compiled resources. (One of them is Java + JavaScript via Rhino on Tomcat, the other is compiled COM objects + JScript [again, server-side] on IIS.)
A programmer can write good/bad, fast/slow, scalable/unscalable code in any language, although some languages and technologies make it harder to do. In my experience, with scripting languages you can produce a small to medium-scale application faster than you can with compiled languages like Java. However, as applications grow in size, compiled languages become more suited to the task. I think this comes from strong typing, deeper layers of architecture to manage tasks, and more QA frameworks to verify that things are running as they should as changes occur.
I find it to be mostly a matter of opinion. At first I hated the pre-compiled web applications ASP.NET provides, but I've gotten used to it, so I don't hate it anymore. It has advantages and disadvantages:
Pro
pre-compiled web applications are easy to deploy; often you'll only have to update the bin directory
pre-compiled web applications perform well
you don't have to upload source code, which is nice imho.
Con
updating a pre-compiled web app generally means the web application restarts, so unless you've moved session state out of process, it'll end all sessions and log everyone out
rebuilding a large web application can take some time, which is added to the time it took you to write the changes in the first place. I am sometimes impatient.
I've always liked how easy it is to just update one file in a PHP project without having to rebuild the project or anything like that. On the other hand, .NET has a nice IDE that allows you to debug everything, from back end (C#, VB.NET) to front end (JavaScript), in one package.
But again; both have advantages and disadvantages.
I wouldn't draw such a sharp distinction between compiled and interpreted languages - this is really just an implementation detail, and tends to change with time (faster than the languages themselves change.) Case in point - thanks to Facebook, PHP is now a "compiled language" too. Another case in point - I enjoy web development with Scheme - and my preferred Scheme implementation now runs a VM and in that sense is at least as compiled as Java is.
So I think the issues to focus on are the expressiveness of the language, its performance, and its ease of deployment - compiled vs. interpreted is only important insofar as it relates to these things.
I'm a big fan of compiled languages everywhere, if for nothing more than the static typing. On the other hand, scripting languages are very convenient -- no binaries to deal with, only text files, which is a big win for web servers.
In the end, it doesn't really matter -- use whatever language you know and feel most comfortable with for the job.
I think that speed is a key concern in a web application, in particular
how fast is it to write my code
how fast is it to fix my code
how fast is it to refactor my code
how fast is it to test my code
That is, I am concerned about the speed of the slowest link: myself. Anything else is fast enough for Twitter-like loads.
Today, the number one on my evaluation list for a new project would be Tornado and Python.
If I had a choice of platforms, of course.
Ah, Python is among the fastest scripting languages.
With scripting languages, anyone who has a copy of your software could potentially modify your source code, because you ship the source itself.
With compiled languages, anyone who has a copy of the software cannot simply modify your source code, because all they have is the compiled binary.
So I guess it depends upon your preferences.

Benefits of cross-platform development?

Are there benefits to developing an application on two or more different platforms? Does using a different compiler on even the same platform have benefits?
Yes, especially if you plan to distribute your code for multiple platforms.
But even if you don't, cross-platform development is a form of future-proofing: if it runs on multiple (diverse) platforms today, it's more likely to run on future platforms than something that was tuned, tweaked, and specialized to work on a version 7.8.3 clean install of vendor X's Q-series boxes (patch level 1452) and nothing else.
There seems to be a benefit in finding (and simply preventing) bugs by using a different compiler and a different OS. Different CPUs can pin down endian issues early. There is the pain at the GUI level if you want to stay native at that level.
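To illustrate, here's a minimal C sketch of a classic endian bug that a second CPU architecture pins down immediately: dumping an integer's in-memory bytes works between like machines, while writing an explicit byte order stays portable.

    /* A classic endianness bug: serializing an integer by memcpy'ing its
     * in-memory bytes works between like machines but breaks across CPUs.
     * Writing a fixed byte order makes the code portable. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    void put_u32_portable(uint32_t v, unsigned char out[4])
    {
        out[0] = (unsigned char)(v >> 24);   /* big-endian on the wire, */
        out[1] = (unsigned char)(v >> 16);   /* regardless of host CPU  */
        out[2] = (unsigned char)(v >> 8);
        out[3] = (unsigned char)(v);
    }

    int main(void)
    {
        uint32_t v = 0x11223344;
        unsigned char raw[4], portable[4];
        memcpy(raw, &v, 4);              /* host byte order: varies by CPU */
        put_u32_portable(v, portable);   /* fixed byte order: always the same */
        printf("raw:      %02x %02x %02x %02x\n", raw[0], raw[1], raw[2], raw[3]);
        printf("portable: %02x %02x %02x %02x\n",
               portable[0], portable[1], portable[2], portable[3]);
        return 0;
    }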
Short answer: Yes.
Short of cloning a disk, it is almost impossible to make two systems exactly alike, so you are going to end up running on "different platforms" whether you meant to or not. By specifically confronting and solving the "what if system A doesn't do things like B?" problem head on you are much more likely to find those key assumptions your code makes.
That said, I would say you should get a good chunk of your base code working on system A, and then take a day (or a week or ...) and get it running on system B. It can be very educational.
My education came back in the 80's when I ported a source level C debugger to over 100 flavors of U*NX. Gack!
Are there benefits to developing an application on two or more different platforms?
If this is production software, the obvious reason is the lure of a larger client base. Your product's appeal is magnified the moment the client hears that you support multiple platforms. Remember, most enterprises do not use a single OS, or even a single version of the same OS. It is fairly typical to find one section using Windows, another using Mac, and a smaller group using some flavor of Linux.
It is also often the case that customizing a product for a single platform is more tedious than making it run on multiple platforms. The law of diminishing returns kicks in before you know it.
Of course, all of this makes little sense, if you are doing customization work for an existing product for the client's proprietary hardware. But even then, keep an eye out for the entire range of hardware your client has in his repertoire -- you never know when he might ask for it.
Does using a different compiler on even the same platform have benefits?
Yes, again. Different compilers implement different extensions. See to it that you are not dependent on a particular version of a particular compiler.
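For example (a hedged sketch; MAX_GNU and max_int are made-up names), a macro that leans on the GCC/Clang statement-expression extension compiles fine with those compilers but is rejected by MSVC, and building with a second compiler catches that immediately:

    #include <stdio.h>

    /* GCC/Clang-only: statement expressions and __typeof__ are extensions;
     * MSVC will refuse to compile this macro. */
    #define MAX_GNU(a, b) \
        ({ __typeof__(a) _a = (a); __typeof__(b) _b = (b); _a > _b ? _a : _b; })

    /* Portable alternative accepted by any conforming C compiler. */
    static int max_int(int a, int b) { return a > b ? a : b; }

    int main(void)
    {
        printf("%d\n", max_int(3, 7));   /* builds everywhere */
    #ifdef __GNUC__
        printf("%d\n", MAX_GNU(3, 7));   /* builds only where the extension exists */
    #endif
        return 0;
    }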
Further, there may be a bug or two in the compiler itself. Using multiple compilers helps sort these out.
I have further seen bits of a (cross-platform) product built with two different compilers: one was used in those modules where floating-point manipulation required a very high level of accuracy. (It's been a while since I've heard of anyone else doing that, but...)
I've ported a large C++ program, originally Win32, to Linux. It wasn't very difficult. Mostly dealing with compiler incompatibilities, because the MS C++ compiler at the time was non-compliant in various ways. I expect that problem has mostly gone now (until C++0x features start gradually appearing). Also writing a simple platform abstraction library to centralize the platform-specific code in one place. It depends to what extent you are dependent on services from the OS that would be hard to mimic on a new platform.
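A minimal sketch of what such a platform abstraction layer can look like (sleep_ms is a hypothetical example, not from the project described): the #ifdef lives in exactly one place, and everything else calls the portable wrapper.

    /* Tiny platform abstraction layer: only this spot knows about the OS. */
    #include <stdio.h>

    #if defined(_WIN32)
    #include <windows.h>
    static void sleep_ms(unsigned ms) { Sleep(ms); }          /* Win32 API */
    #else
    #include <unistd.h>
    static void sleep_ms(unsigned ms) { usleep(ms * 1000); }  /* POSIX API */
    #endif

    int main(void)
    {
        puts("waiting...");
        sleep_ms(100);   /* portable call; the #ifdef is hidden in one place */
        puts("done");
        return 0;
    }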
You don't have to build portability in from the ground up. That's why "porting" is often described as an activity you can perform in one shot after an initial release on your most important platform. You don't have to do it continuously from the very start. Purely for economic reasons, if you can avoid doing work that may never pay off, obviously you should. The cost of porting later on, when really necessary, turns out to be not that bad.
Mostly, there is an existing platform that the application is written for (custom software). But you address more developers (on both platforms) if you decide to use a platform-independent language.
Also, products (off-the-shelf software) for SMEs can be sold more easily if they run on different platforms! You gain access to multiple markets: Windows and Linux (and Mac OS X, and so on...).
Big companies mostly buy hardware that is supported/certified by the product vendor, solely to deploy the specified product.
If you develop on multiple platforms at the same time, you get the advantage of being able to use different tools. For example, I once had a memory overwrite (I still swear I didn't need the +1 for the null byte!) that caused "free" to crash. I brought the code up on Windows and found the overwrite in about 1 minute with Rational Purify... it had taken me a week of chasing it under Linux (Valgrind might have found it... but I didn't know about it at the time).
Different compilers on the same or different platforms is, to me, a must as each compiler will report different things, and sometimes the report from one compiler about an error will be gibberish but the other compiler makes it very clear.
Using multiple databases while developing means you are much less likely to tie yourself to a particular database, which means you can swap out the database if there is a reason to do so. If you want to integrate something that uses Oracle into an existing infrastructure that uses SQL Server, for example, it can really suck - much better if the Oracle or SQL Server pieces can be moved to the other system (I know of some places that have 3 different databases for their financial systems... ick).
In general, always developing for two or three targets means the odds of finding mistakes are better, and the odds of the system being more flexible are better.
On the other hand all of that can take time and effort that, at the immediate time, is seen as an unneeded expense.
Some platforms have really dreadful development tools. I once worked at an IB where, rather than use Sun's ghastly toolset, people developed code in VC++ and then ported it to Solaris.

Have you ever used Code Virtualizer or VMProtect to protect against reverse engineering?

I know that there is no way to fully protect our code.
I also know that if a user wants to crack our app, then he or she is not a user that would buy our app.
I also know that it is better to improve our app.. instead of being afraid of anticracking techniques.
I also know that there is no commercial tool that can protect our app....
I also know that....
Ok. Enough. I've heard everything.
I really think that adding a little protection won't hurt.
So.... have you ever used Code Virtualizer from Oreans or VMProtect?
I've heard that they are sometimes detected as viruses by some antivirus software.
Any experiences I should be aware of before buying?
I know they create virtual machines and obfuscate the code a little to make it harder to find the weaknesses of our registration routines.
Is there anything I should be warned about?
Thanks.
Any advice would be appreciated.
Jag
In my humble opinion, you should feel lucky, or even eager, to be pirated, because that means your product is successful and popular.
That's plain incorrect. My software, which I worked on for many months, was cracked the moment it was released. There are organised cracking groups that feed off download.com's RSS channel etc. and crack each app that appears. It's a piece of cake to extract the keygen code of any app, so my response was to:
a) resort to digital certificate key files, which are impossible to forge as they are signed with a private key and validated by a public one embedded in the app (see: aquaticmac.com - I use the STL C++ implementation, which is cross-platform; a sketch of the verification step appears at the end of this answer), along with
b) the excellent Code Virtualizer™. I will say that the moment I started using Code Virtualizer™ I got some complaints from one or two users about app crashes. When I removed it from their build, the crashes ceased. Still, I'm not sure whether it was a problem with CV per se, as it could have been an obscure bug in my code, but I have since reshuffled my code and heard no complaints.
After the above, no more cracks. Some people look at being cracked as a positive thing, as it's a free publicity channel, but those people usually haven't spent months/years on an idea only to find you're being ripped off. Quite hard to take.
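For what it's worth, here's a hedged sketch of the verification half of scheme (a). It uses OpenSSL's EVP API rather than the aquaticmac implementation, and license_is_valid and its parameters are illustrative names, not that library's API. The point is that the app embeds only the public key, so a cracker can read the check but cannot forge new license files:

    /* Hypothetical sketch of signature-based license validation: the
     * license file carries data plus a signature made with the vendor's
     * private key; the app verifies with the embedded public key.
     * (Loading the key, e.g. via PEM_read_PUBKEY, is omitted here.) */
    #include <openssl/evp.h>

    /* Returns 1 if `sig` is a valid signature of `data` under `pubkey`. */
    int license_is_valid(EVP_PKEY *pubkey,
                         const unsigned char *data, size_t data_len,
                         const unsigned char *sig, size_t sig_len)
    {
        EVP_MD_CTX *ctx = EVP_MD_CTX_new();
        int ok = 0;
        if (ctx != NULL
            && EVP_DigestVerifyInit(ctx, NULL, EVP_sha256(), NULL, pubkey) == 1
            && EVP_DigestVerify(ctx, sig, sig_len, data, data_len) == 1)
            ok = 1;   /* signature checks out under the embedded public key */
        EVP_MD_CTX_free(ctx);
        return ok;
    }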
Unfortunately, VM-protected software is more likely to trigger false positives than conventionally packed software. The reason is that since VM protection is so complicated, AV software is often unable to analyze the protected code, and may rely on pattern libraries or issue generic warnings for any file protected by a system it can't analyze. If your priority is to eliminate false positives, I suggest picking a widely-used protection solution, e.g. ASProtect (although Oreans' products are becoming quite popular as well).
Software VM protection is quite popular today, especially as it's now available at an accessible price for small companies and independent software developers. It also takes a considerable amount of effort to crack in comparison to non-VM techniques - the wrappers usually have the standard anti-debugging tricks that other protections have, as well as the VM protection. Since the virtual machine is generated randomly on each build, the crackers will need to analyze the VM instruction set and reverse engineer the protected code back to machine code.
The main disadvantage of VM protection is that if it's overused (used to protect excessive parts of the code), it can slow down your application considerably - so you'll need to protect just the critical parts (registration checks, etc). It also doesn't apply to certain application types - it likely won't work on DLLs that are used for injection, as well as device drivers.
I've also heard that StrongBit EXECryptor is a decent protection package at a decent price. (I'm not affiliated with said company, nor do I guarantee any quality whatsoever; it's just word of mouth and worth checking out, IMO.)