Virtual machines of the future [closed] - vm-implementation

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 11 years ago.
I'm looking for some resources regarding the virtual machines of the future (like the JVM or CLR).
What are they going to look like? Will they provide a concurrent runtime, more powerful metaprogramming models?
I'm looking for articles, research projects, or pure speculation, anything that is going to be an interesting read.
So if you have any links or opinions please do share.

Parrot is an upcoming virtual machine that will be used for Perl 6 along with other dynamic languages such as Ruby, PHP, and Python, to name a few.
Parrot is a little different from the Java Virtual Machine and the Common Language Runtime in that it is a register-based VM rather than stack-based like the JVM and CLR. Here's a bit from the Wikipedia entry on the Parrot virtual machine:
Virtual machines such as the Java virtual machine and the current Perl 5 virtual machine are also stack based. Parrot developers see it as an advantage of the Parrot machine that it has registers, and therefore more closely resembles an actual hardware design, allowing the vast literature on compiler optimization to be used generating code for the Parrot virtual machine so that it will run bytecode at speeds closer to machine code.
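To make the stack-versus-register distinction concrete, here's a toy sketch in Python. The opcodes and the three-address encoding are invented for illustration only; they are not real JVM or Parrot bytecode.

    # Toy interpreters: the same expression, (2 + 3) * 4, on a stack machine
    # and on a register machine. Opcodes are made up for this example.

    def run_stack(program):
        """Evaluate bytecode for a tiny stack machine."""
        stack = []
        for op, *args in program:
            if op == "PUSH":
                stack.append(args[0])
            elif op == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "MUL":
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
        return stack.pop()

    def run_register(program, nregs=4):
        """Evaluate bytecode for a tiny register machine."""
        regs = [0] * nregs
        for op, *args in program:
            if op == "LOAD":          # LOAD dst, constant
                regs[args[0]] = args[1]
            elif op == "ADD":         # ADD dst, src1, src2
                regs[args[0]] = regs[args[1]] + regs[args[2]]
            elif op == "MUL":         # MUL dst, src1, src2
                regs[args[0]] = regs[args[1]] * regs[args[2]]
        return regs[0]

    stack_prog = [("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",)]
    reg_prog = [("LOAD", 1, 2), ("LOAD", 2, 3), ("ADD", 1, 1, 2),
                ("LOAD", 2, 4), ("MUL", 0, 1, 2)]
    print(run_stack(stack_prog), run_register(reg_prog))   # 20 20

The register form names every operand explicitly, which is what lets much of the classical register-allocation and optimization literature apply more directly.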
Although it may not be exactly what you're looking for, there was news of an interesting use of the Low Level Virtual Machine (LLVM). Adobe has a project called Alchemy, a C/C++ to Flash bytecode compiler, which utilizes the LLVM's optimization facilities to produce well-optimized Flash bytecode, according to this Slashdot article.
I think we're going to see more interesting uses for virtual machines, and increased adoption with better optimization and on-the-fly compilation techniques, along with the increased amount of computing power which is becoming available with newer, faster processors.

The Da Vinci Machine Project (OpenJDK's multi-language VM effort): http://openjdk.java.net/projects/mlvm/
HTH

There's some academic work on new security ideas for VMs.

Like Parrot, the Lua VM is register-based.

Not knowing what would attract you the most (compilation, garbage collection, security, etc.), my advice would be to do some "depth-first search" through web pages, papers, conferences, and blog posts related to people working on the different virtual machines for Java, the CLR, Python, JavaScript, etc.
First starters that come to mind are Michael Hind (behind IBM's research VM for Java, Jikes RVM), Ben Zorn (Microsoft), the PyPy blog... Just from those pages you should find lots of links, I think.

One thing we're almost certain to see in VMs of the future is that they will be built from the ground up to handle multiple programming languages.

Related

Which framework is better for RAD - Corona, ActionScript, HTML5, Unity or Marmalade? [closed]

Closed 9 years ago.
Before you condemn this as subjective, consider that there are differences between frameworks. Writing something in PHP, I assume, is probably a lot less verbose, and thus less time-consuming and expensive, than writing it in binary. While the differences may not be as pronounced between the options in the title, I think there probably are significant differences which can result in, for example, a DoodleJump-type app taking more or less time to code in each.
Although there are other factors involved in choosing a framework, I'm just asking which one requires the least amount of coding and thus time and expense for equally skilled developers to accomplish the same thing (conjuring DoodleJump physics, a basic TicTacToe game, creating a UI, whatever). I'd appreciate links to sources if you have them, as well as direct experience comparing the verbosity of one or more in accomplishing the same task.
I'd most like to get an idea of how Flash and HTML5 compare to Corona (in terms of development time), but I'm also curious about the others.
Well, you should seriously rephrase the question; it looks subjective.
I'll just give my experience with all these tools.
Disclaimer: the following review is my own personal opinion and is based on my personal experience. You might have a different opinion.
Marmalade
While I've used Marmalade for most of my deployed projects, I'd never used their RAD tool, Quick, for any serious development. I was asked to try it out by remaking one of our deployed games, and I was really impressed with its speed and lower verbosity. It is 2D-only, though; I recommended using it over plain Marmalade for all our 2D games, but unfortunately we never made any 2D games after that. The benefit is that it comes preloaded with Box2D and Cocos2d-x and still supports C++ libraries. I haven't tried EDK with it yet, but it should support that too. The con (for me) was that I had to learn Lua for it.
Flash
Well, I'm not a Flash expert, but I tried it on two of our deployed games and it did the job. It felt too limited in what it offered, though. We had to re-code one of those games in Marmalade just to support some 3D elements, which were not possible to do in Flash (at least for me). Flash was too verbose and too confusing for me, since you don't know where the actual script is attached. I guess that happens to every programmer who tries Flash after another tool like Marmalade; it just confuses you.
Unity
Well, it is much, much better than Flash, and is actually a well-written game engine. It might cost a fortune for indie developers, but it's still worth it. I've been using it for almost 4-5 months and I've already started liking it over any other engine. It's easy to learn and far less verbose: you just drag and drop and attach scripts to the game objects (not really that simple, actually). No need to worry about a physics engine, and no worries about plugins, since most of the plugins are already available. And you can do 3D in it too.
I've never tried Corona or HTML5 on any development project, so I can't have a say on those.

Will F# ever be open-sourced? [closed]

Closed 12 years ago.
There was discussion in early 2009 about whether Microsoft would release the source for the F# compiler under the MS-PL/another license. A StackOverflow thread mentioned the state as of then.
Since then a lot has happened. We've seen an official release of F# with the .NET Framework 4.0 (and Visual Studio 2010), and for all I know, it's still completely closed-source. Have Microsoft just been quiet on the subject, or have they explicitly stated that they no longer intend to open-source the compiler? Perhaps things are in the process already. Basically, any news/considerations?
(As others have pointed out, the source has always been available, but is not yet under MS-PL, the 'approved open source' license - it currently has a more restrictive license.)
If I were being completely speculative, I might hypothesize that there are a number of things which might have "delayed" an open-source release of the F# compiler, including these:
The compiler code requires a bit of tidying up. The source has always been public, but without an open source license, not too many have looked at it. If you open it, people will look, in which case it would be nice if the code followed at least some basic style guidelines, like using RecommendedDotNETNamingConventions rather than old_legacy_ones. In a sense, an open-source F# compiler would be one 'canonical F# app', so it would be important for the code to be of high quality with regard to basic things like F# coding conventions (which evolved over the past 5+ years since the compiler code was originally written).
The current code is hard to build on any platform. An open-source release would require at least reasonable docs on how to build the compiler (still non-trivial today!) and ideally build scripts for the major platforms (e.g. Windows, Linux, etc.).
Even if IronPython/IronRuby ('open' teams) are "just down the hall" from the F# team at MS, making things 'open' still requires getting a great deal of buy-in/sign-off from management, and re-sign-off from new management if the management changes before you get the first open-source release out the door.
(all the usual 'overhead' of managing an open-source project)
All of the above take manpower, and manpower spent on those things is manpower not spent on other things, like working on the next version of F#. So in practice it may be more feasible for the handful of people doing F# work to nibble away at the work above in free time, rather than devote, say, an entire month to focus on an open-source release. So that might slow things down. (As others have tangentially suggested by pointing at links to job postings, some of the manpower could hypothetically be filled by interns at MSR.)
I emphasize that all of this is completely hypothetical speculation, as there's been no official word from anybody in a long time.
As Robert's comment on your question indicates, the source code is already available as part of each installation, though it does not come with an open source license. Additionally, reading between the lines, I think that things like this blog post by Don Syme still point to an open source release as a priority for the team.
Is this a question? I'm not sure; it's more of a request for any news relating to an existing situation. Adding "considerations" to the request is confusing: what considerations are there? The MS C# compiler is closed source, but the C# spec is with ECMA. The F# spec has not been opened to the wider community, which is the telling part, I feel.
The decision is left to Microsoft; I don't think anyone here can answer that. However, even if it stays closed source, we will still probably have all the benefits of the framework, as Microsoft is quite committed to improving it and providing more functionality. Whether something is open source matters less to me than whether the creator keeps supporting it; that is my biggest concern. We have tons of open-source projects that became code junk once they were no longer maintained or improved.

What is the most active genetic programming library? [closed]

Closed 10 years ago.
What genetic-programming library, regardless of language, has the most active community and is the most well developed?
It's hard to tell, frankly. ParadisEO seems to be very active, and is a pretty large library encompassing various metaheuristics besides GP. Note that it is a superset of the EO library. OpenBEAGLE is nice, but it hasn't been updated since 2007. Watchmaker is very good and active right now, but it only has a proof-of-concept implementation of GP for now. There's a plethora of libraries out there, and it's rather hard to tell which is the best one. And it's not very hard to roll your own GP, so keep that possibility in mind.
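To back up the "roll your own" point, here is a minimal tree-based GP sketch in plain Python. It evolves arithmetic expression trees toward the arbitrary target f(x) = x*x + x; the operator set, population size, and rates are made-up choices for illustration and aren't taken from any of the libraries above.

    import operator
    import random

    OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}
    TERMINALS = ["x", -1.0, 1.0, 2.0]

    def random_tree(depth=3):
        # A tree is either a terminal or a list [op, left, right].
        if depth == 0 or random.random() < 0.3:
            return random.choice(TERMINALS)
        return [random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1)]

    def evaluate(tree, x):
        if tree == "x":
            return x
        if not isinstance(tree, list):
            return tree
        op, left, right = tree
        return OPS[op](evaluate(left, x), evaluate(right, x))

    def fitness(tree):
        # Sum of squared errors against the target f(x) = x*x + x (lower is better).
        total = 0.0
        for x in range(-5, 6):
            err = evaluate(tree, x) - (x * x + x)
            total += err * err
        return total

    def nodes(tree, path=()):
        # Yield (path, subtree) for every node; paths index into [op, left, right] lists.
        yield path, tree
        if isinstance(tree, list):
            for i in (1, 2):
                yield from nodes(tree[i], path + (i,))

    def replace(tree, path, subtree):
        if not path:
            return subtree
        new = list(tree)
        new[path[0]] = replace(tree[path[0]], path[1:], subtree)
        return new

    def crossover(a, b):
        # Swap a random subtree of a for a random subtree of b.
        path, _ = random.choice(list(nodes(a)))
        _, donor = random.choice(list(nodes(b)))
        return replace(a, path, donor)

    def mutate(tree):
        path, _ = random.choice(list(nodes(tree)))
        return replace(tree, path, random_tree(2))

    def tournament(pop, k=3):
        return min(random.sample(pop, k), key=fitness)

    pop = [random_tree() for _ in range(200)]
    for gen in range(30):
        pop = [mutate(crossover(tournament(pop), tournament(pop)))
               if random.random() < 0.9 else tournament(pop)
               for _ in range(200)]
        best = min(pop, key=fitness)
        if fitness(best) == 0:
            break
    print(fitness(best), best)

With some luck it finds a tree equivalent to x*x + x; either way, it shows how little machinery a basic GP needs. What the libraries above add is the stuff that matters at scale: bloat control, parallel evaluation, statistics, and benchmarks.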
HeuristicLab has a very sophisticated implementation that is both fast and flexible. For example, in an independent benchmark you can see that the speed of HeuristicLab's interpreter was equal to a newly coded minimalistic C++ interpreter that included optimizations. It is also very flexible in that you can configure the grammar that creates your tree in the GUI environment, so you can create functions that should, e.g., only have certain variables as inputs, but not all. The implementation is based on a long heritage of code that is very actively developed and which is reviewed before each release to ensure continued quality. HeuristicLab supports regression and classification, as well as custom problems like the Santa Fe trail or Lawn Mower (for which a tutorial exists that helps you implement your own custom problem). There is cross-validation, and there is a separation of training, validation and test sets that you can use to detect overfitting. As results you get how often each variable and each symbol occurs in the population, so you can estimate which variables are important; this is displayed as a graph over time. There's also a Pareto analyzer that you can enable to show all solutions by quality and complexity. HeuristicLab also contains the GP benchmark problem library that emerged recently (GECCO 2012), to let people test and compare results. Apart from GP, there are further regression and classification algorithms implemented, like SVM, random forests, k-NN, etc.
It's implemented in C# and runs on .NET 4 (currently only on Windows; Mono support is close to finished).
You might want to check out Gene Expression Programming (GEP). It is an alternative form of genetic programming.
There is a technology site at http://www.gene-expression-programming.com/. The company behind it is GEPSoft http://www.gepsoft.com.
I'm a fan of ECJ, "A Java-based Evolutionary Computation Research System":
http://cs.gmu.edu/~eclab/projects/ecj/
The mailing list is usually moderately active, indicating to me the general good health of the project. I have been using ECJ for almost all of my GA and GP research and it has a lot of interesting built-in features plus several third party contributions.
ECJ's creator, Sean Luke, also wrote an awesome and free downloadable book: cs.gmu.edu/~sean/book/metaheuristics/
JGAP for Java seems fairly active. Looking at the checkin history there was a burst of activity a couple of months ago.
http://jgap.sourceforge.net/
You can try this C# .NET 4.0 port of Sean Luke's ECJ (Evolutionary Computation in Java):
http://branecloud.codeplex.com
It is very flexible and powerful software! But it is also relatively easy to get started because it includes many working console samples out-of-the-box (and many helpful unit tests that were developed during the conversion).
As noted above, if you program in Java, you should visit Sean Luke's site directly:
http://cs.gmu.edu/~eclab/projects/ecj/
It has been under active development for 13 years!
Ben
CILib, from the CIRG team. It's been updated regularly, and the developers are quick to answer your questions.
Forum: http://www.cilib.net/

What to teach after Scratch? [closed]

Closed 9 years ago.
My son is enthusiastically programming simple games in Scratch. However Scratch is a very simple programming environment (no subroutines even), and I can see that soon he is going to need to move on to something else.
Does anyone know of a good learning language that makes graphics easy but provides "real" programming features like data structures, functions, arrays and lists?
Bonus points if it runs under Linux (Ubuntu). An answer of the form "language Foo with library Bar" is also an option.
How about Lua?
There is a nice graphics "engine" called LOVE which is fully programmable in Lua. It has nice documentation and it's not very hard.
There are also several other similar engines using Lua:
Novashell
Verge
Luxinia
Agen
There was another 2D engine, similar to LOVE but with a slightly different approach to things, but I can't find it at the moment.
I would recommend LOVE for starters, as it's very easy, has nice tutorials and, most importantly, you can do nice stuff right away.
Also, Lua is commonly used as a game scripting language. For example, all addons for World of Warcraft are written in Lua; in fact, all of the interface is written in Lua. This means it's very easy to find answers to game-related questions about Lua. Also, if you happen to own a game which uses Lua as its scripting language, you can easily add your own stuff to it.
I wrote this from a game perspective, but there are quite a lot of projects which use Lua as a scripting language.
You could also try Python, but it doesn't have equally good out-of-the-box, ready-to-use, easy-to-learn tools.
Also, here's a link to the Lua manual.
If Scratch is starting to get a bit limiting, but they're not ready for the hardships of text-editor coding, take a look at Scratch-derivative "BYOB" (Build Your Own Blocks). Seriously, it turns Scratch into a grown-up programming environment with functions (and hence recursion), data-structures, multithreading and everything!
There's also Panther but I was less impressed by it (creating new blocks in Panther seems to require coding their function up directly in Squeak, while in BYOB you can just build them in the usual drag-n-drop Scratch style).
Take a look at Processing.
Its tour de force is graphics, animation, and visual manipulation. It runs under Linux, too.
Processing is an open source programming language and environment for people who want to program images, animation, and interactions. It is used by students, artists, designers, researchers, and hobbyists for learning, prototyping, and production. It is created to teach fundamentals of computer programming within a visual context and to serve as a software sketchbook and professional production tool.
A nice review here suggests Alice and Shoes after Scratch -- I have no personal experience in the matter, but from the review they seem worth checking out.
It might be just a little bit larger of a jump, but Python with PyGame will allow your pupil to make many of the same sorts of programs as he or she is already used to with Scratch, but with very tight control over how the whole thing works.
Pros: It's python, which is a very easy language to read and write, and provides a very rich programming environment, without really any boilerplate required.
Cons: it's SDL, which means an event loop that you get to write yourself. This might be a pretty large hurdle for a young programmer.
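For a flavour of that event loop, here is a minimal PyGame sketch; it needs the third-party pygame package (pip install pygame), and the window size, colours, and speed are arbitrary choices.

    import pygame

    pygame.init()
    screen = pygame.display.set_mode((640, 480))
    clock = pygame.time.Clock()
    x, y, dx = 100, 240, 3

    running = True
    while running:
        for event in pygame.event.get():      # the event loop you write yourself
            if event.type == pygame.QUIT:
                running = False
        x = (x + dx) % 640                    # slide a square across the screen
        screen.fill((0, 0, 0))
        pygame.draw.rect(screen, (0, 200, 255), (x, y, 30, 30))
        pygame.display.flip()
        clock.tick(60)                        # cap the frame rate at 60 fps

    pygame.quit()

It's only a dozen lines, but the learner owns every step of the frame: handle events, update state, draw, repeat.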
Scratch is written in Squeak (which runs on Linux, Windows and Mac) so I'll say step up to Smalltalk! The only problem is the lack of a very good beginners book on the language, which is strange when you consider its origins. However, the basic concepts are easy to learn (almost no syntax) and the environment encourages experimentation.
Here is an interesting Microsoft project called Small Basic, which is a good, simple, free programming environment for learning, based on BASIC.
No bonus points because of the lack of Ubuntu support, but it's a cool learning tool.
Is QuickBasic still around? That's what I started with when I was about 7-8, and I was able to make full-fledged games, etc. without any external libraries.
EDIT: check out this link about FreeBasic:
http://linux.about.com/b/2006/11/10/freebasic-open-source-alternative-to-quickbasic.htm
Well, there is venerable old Logo -- not sure about structures but you do get lists, functions with parameters, and graphics are very straightforward. There are plenty of good implementations, too. Logo has even been likened to 'lisp without all the parentheses'.
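Not Logo itself, but Python's standard turtle module implements the same turtle-graphics model, so it gives a feel for how direct that style of graphics is; this little sketch is just an illustration.

    import turtle

    t = turtle.Turtle()
    t.speed(0)
    for _ in range(36):        # a ring of 36 squares
        for _ in range(4):
            t.forward(80)
            t.right(90)
        t.right(10)
    turtle.done()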
I would suggest using CodingBat. Although CodingBat doesn't provide graphics, it does provide the "programming features" and straightforward practice involving strings, arrays, and logic.
I think this website helps with developing the basic foundation behind programming.
Link: http://codingbat.com/

Where is Reverse Engineering used? [closed]

Closed 10 years ago.
I've been wondering where reverse engineering is used. I'm interested in learning it, but I don't know if I can/should put it on my CV.
I don't want my new boss to think I am an evil hacker or something. :)
So is it worth it?
Should I learn it or put my effort somewhere else?
Is there a good book or tutorial out there? :)
Reverse engineering is commonly used for deciphering file formats for improving interoperability. For example, many popular commercial Windows applications don't run on Linux, which necessitates reverse engineering of files produced by those applications, so that they can be used in Linux. A good example of this would be the various formats supported by Gimp, OpenOffice, Inkscape, etc.
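As a flavour of what that work looks like once you have a hypothesis about a layout, here's a small Python sketch. The "MYFT" header format is entirely hypothetical, not any real application's format; real reverse engineering is mostly about forming and testing guesses like this against lots of sample files.

    import struct

    def parse_header(data: bytes):
        # Guess: 4-byte magic, uint16 version, uint16 record count,
        # uint32 payload length, all little-endian.
        magic, version, count, length = struct.unpack_from("<4sHHI", data, 0)
        if magic != b"MYFT":
            raise ValueError("magic bytes don't match the hypothesis")
        return {"version": version, "records": count, "payload_len": length}

    sample = b"MYFT" + struct.pack("<HHI", 2, 10, 4096)
    print(parse_header(sample))   # {'version': 2, 'records': 10, 'payload_len': 4096}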
Another common use of reverse engineering is deciphering protocols. Good examples include Samba, DAAP support in many non-iTunes applications, cross platform IM clients like Pidgin, etc. For protocol reverse engineering, common tools of the trade include Wireshark and libpcap.
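Protocol work usually starts from a capture made with Wireshark or tcpdump/libpcap; once you have the raw bytes, the decoding itself looks much like the file-format case. Here's a small Python example pulling a few fields out of a standard IPv4 header (the sample packet bytes are made up):

    import socket
    import struct

    def parse_ipv4_header(data: bytes):
        (version_ihl, tos, total_len, ident,
         flags_frag, ttl, proto, checksum) = struct.unpack_from("!BBHHHBBH", data, 0)
        return {
            "version": version_ihl >> 4,
            "header_len": (version_ihl & 0x0F) * 4,
            "ttl": ttl,
            "protocol": proto,                       # 6 = TCP, 17 = UDP
            "src": socket.inet_ntoa(data[12:16]),
            "dst": socket.inet_ntoa(data[16:20]),
        }

    # A hand-built 20-byte header: UDP, TTL 64, 192.168.1.10 -> 192.168.1.1
    sample = bytes.fromhex("45000054abcd400040110000c0a8010ac0a80101")
    print(parse_ipv4_header(sample))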
No doubt reverse engineering is often associated with software cracking, which is primarily about understanding program disassembly. I can't say that I've ever needed to disassemble a program other than out of pure curiosity or to make it do something it wasn't meant to do. One plus side to reverse engineering programs is that to make any sense of a disassembly, you will need to learn assembly programming. There are, however, legal ways to hone your disassembly skills, specifically using crackmes. An important point is that when you're developing security measures in your applications, or if you're in that business, you need to know how reverse engineers operate to try to stay one step ahead.
IMHO, reverse engineering is a very powerful and useful skill to have. Not to mention, it's usually fun and addictive. Like hmemcpy mentioned, I'm not sure I would use the term "reverse engineering" on my CV, only the skills/knowledge associated with it.
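On the disassembly point above: native code calls for tools like IDA or OllyDbg, but if you just want a feel for reading low-level code, Python's standard dis module is a gentle (and very loose) analogue, since it shows the bytecode the CPython interpreter actually executes. The check_serial function is a made-up, crackme-flavoured example.

    import dis

    def check_serial(serial: str) -> bool:
        return len(serial) == 8 and serial.endswith("42")

    dis.dis(check_serial)   # prints the function's bytecode, one instruction per line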
Reverse engineering is usually something you do because you have to, not because you want to. For example, there are legal issues with simply reverse engineering a product! But there are necessary cases - where (for example) the supplier has gone and no longer exists or is not contactable. A good example would be the WMD editor that you typed your question into. The SO team/community had to reverse engineer this from obfuscated source to apply some bug fixes.
One of the fields, in my opinion, where reverse engineering skills might be useful is anti-virus industry, for instance. However, I wouldn't place "reverse engineering" on my CV, but rather I'd write down experience in the Assembly language, using miscellaneous disassemblers/debuggers (such as IDA, SoftIce or OllyDbg) and other relevant skills.
I have worked on reverse engineering projects, and they certainly had nothing to do with hacking. We had the source code for all such projects (legitimately), but for one of the projects nobody actually knew what the code did behind the scenes, and how it interacted with other systems. That information had long been lost. In another project, we had the source code and some documentation, but the documentation wasn't up to date, so we had to reverse-engineer the source to update the documentation.
I don't mind having such projects on my CV. In fact, I believe I've learned a lot during the process.
Reverse engineering is needed whenever the documentation is lost or it never existed. Having the source helps, but you still have to reverse engineer the original logic, flow control and bugs out of it.
Working with strange hardware often forces you to reverse engineer. For instance, I was once working with an old signal acquisition card that behaved strangely; putting in a beautiful sine wave produced awfully crippled data. It turned out that every other byte was two's complement and the rest were one's complement, or at least, when interpreted that way, the data became quite beautiful. Of course, this wasn't documented anywhere, and the card worked perfectly when used with its own proprietary software.
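A rough Python reconstruction of that fix (the card's actual sample width and byte order are unknown to me, so the layout here is purely illustrative):

    def decode(raw: bytes):
        samples = []
        for i, b in enumerate(raw):
            if i % 2 == 0:                   # even-indexed bytes: two's complement
                value = b - 256 if b >= 128 else b
            else:                            # odd-indexed bytes: one's complement
                value = b - 255 if b >= 128 else b
            samples.append(value)
        return samples

    print(decode(bytes([0x05, 0xFA, 0xFB, 0x04])))   # [5, -5, -5, 4]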
It is very common (in my experience) to encounter older code which has defects, has become outdated due to changing requirements, or both. It's often the case that there's inadequate documentation, and the original developer(s) are no longer available. Reverse engineering that code to understand how it works (and sometimes to make a repair-or-replace decision) is an important skill.
If you have the source, it's often reasonable to do a small, carefully-planned, strictly-scoped amount of cleanup. (I'm hinting out loud that this can't be allowed to become a sinkhole for valuable developer time!)
It's also very helpful to be able to exercise the code in a testbed, either to verify that it does what was expected or to identify, document, isolate, and repair defects.
Doing so safely requires careful work. I highly recommend Michael Feathers' book Working with Legacy Code for its practical guidance in getting such code under test.
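A small Python sketch of that testbed idea, in the spirit of Feathers' characterization tests: pin down what the code currently does before you change it. compute_discount here is a stand-in for whatever undocumented legacy routine you inherited.

    import unittest

    def compute_discount(order_total, loyalty_years):
        # Imagine this is the undocumented legacy logic under study.
        if loyalty_years >= 5:
            return order_total // 10
        return 0

    class CharacterizeComputeDiscount(unittest.TestCase):
        def test_current_behaviour(self):
            # Expected values were captured by running the existing code,
            # not derived from a spec; that is the point of the exercise.
            self.assertEqual(compute_discount(100, 0), 0)
            self.assertEqual(compute_discount(100, 5), 10)

    if __name__ == "__main__":
        unittest.main()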
RCE is a great skill for security people (research, exploitation, IDS, IPS, AV, etc.), and it also proves that you've got a deep, low-level understanding of the subject.
It also makes it easier to find your way around when working with 3rd-party libraries.
If you are not working in the security industry and you are not good at ASM, it may not be worth the effort; it's generally hard to learn.
Books
Hacking: The Art of Exploitation covers the subject from a security point of view.
You might also want to read books about OllyDbg and IDA Pro.