Implementation of Huffman Coding for Unequal Costs? - language-agnostic

After much Google searching for an algorithm that finds the optimal prefix-free code for unequal letter costs, I have still found only PDFs and PS files that don't even include pseudocode for the algorithms they describe. Is there an implementation of any unequal-cost Huffman coding algorithm available anywhere?

https://www.cs.umd.edu/~lijian/paper/Uneq_Let_Approx_conf_5.pdf
^ Contains pseudocode. However, you really should understand the setting in which you apply this stuff; there are some subtleties.
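For orientation, here is a minimal sketch (my own, in C++14 or later) of the classic equal-cost Huffman construction. It does not solve the unequal-letter-cost problem asked about here; the algorithms in the paper above replace this simple greedy merge with more involved constructions, but this is the baseline they generalize.

#include <iostream>
#include <map>
#include <queue>
#include <string>
#include <vector>

struct Node {
    char symbol;              // meaningful only for leaves
    long weight;
    Node* left = nullptr;
    Node* right = nullptr;
};

struct ByWeight {
    bool operator()(const Node* a, const Node* b) const { return a->weight > b->weight; }
};

void collectCodes(const Node* n, const std::string& prefix, std::map<char, std::string>& codes) {
    if (!n->left && !n->right) { codes[n->symbol] = prefix; return; }
    collectCodes(n->left, prefix + "0", codes);
    collectCodes(n->right, prefix + "1", codes);
}

std::map<char, std::string> huffman(const std::map<char, long>& freqs) {
    std::priority_queue<Node*, std::vector<Node*>, ByWeight> pq;
    for (const auto& kv : freqs) pq.push(new Node{kv.first, kv.second});
    while (pq.size() > 1) {                            // repeatedly merge the two lightest trees
        Node* a = pq.top(); pq.pop();
        Node* b = pq.top(); pq.pop();
        pq.push(new Node{'\0', a->weight + b->weight, a, b});
    }
    std::map<char, std::string> codes;
    collectCodes(pq.top(), "", codes);
    return codes;                                      // tree nodes are leaked; fine for a sketch
}

int main() {
    for (const auto& kv : huffman({{'a', 45}, {'b', 13}, {'c', 12}, {'d', 16}}))
        std::cout << kv.first << " -> " << kv.second << "\n";
}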

Related

What is differentiable programming?

Native support for differentiable programming has been added to Swift for the Swift for TensorFlow project. Julia has similar support via Zygote.
What exactly is differentiable programming?
what does it enable? Wikipedia says
the programs can be differentiated throughout
but what does that mean?
how would one use it (e.g. a simple example)?
and how does it relate to automatic differentiation (the two seem conflated a lot of the time)?
I like to think about this question in terms of user-facing features (differentiable programming) vs implementation details (automatic differentiation).
From a user's perspective:
"Differentiable programming" is APIs for differentiation. An example is a def gradient(f) higher-order function for computing the gradient of f. These APIs may be first-class language features, or implemented in and provided by libraries.
"Automatic differentiation" is an implementation detail for automatically computing derivative functions. There are many techniques (e.g. source code transformation, operator overloading) and multiple modes (e.g. forward-mode, reverse-mode).
Explained in code:
def f(x):
    return x * x * x

df = gradient(f)  # df is the derivative function of f
print(df(4))      # 48.0

# Using the `gradient` API:
#   ▶ differentiable programming.
# How `gradient` works to compute the gradient of `f`:
#   ▶ automatic differentiation.
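To make the distinction concrete, here is a tiny self-contained sketch of forward-mode automatic differentiation via operator overloading (one of the AD techniques mentioned above). It is written in C++ purely for illustration; the Dual type and names are mine, not any particular framework's API.

#include <iostream>

struct Dual {
    double value;       // f(x)
    double derivative;  // f'(x), propagated alongside the value
};

Dual operator*(Dual a, Dual b) {
    // product rule: (fg)' = f'g + fg'
    return {a.value * b.value, a.derivative * b.value + a.value * b.derivative};
}

Dual f(Dual x) { return x * x * x; }   // same f as above

int main() {
    Dual result = f(Dual{4.0, 1.0});   // seed the input with dx/dx = 1
    std::cout << result.value << " " << result.derivative << "\n";  // prints: 64 48
}

A `gradient`-style API is the differentiable-programming surface; whether it is backed by forward mode as above, or by reverse mode (which records the operations and replays them backwards, as deep-learning frameworks typically do for functions of many parameters), is the automatic-differentiation implementation detail.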
I had never heard the term "differentiable programming" before reading your question. But having used the concepts noted in your references, both from the side of writing code that computes a derivative (with symbolic differentiation and with automatic differentiation) and from the side of writing interpreters and compilers, to me this just means that they have made it easier to calculate the numeric value of the derivative of a function. I don't know if they made it a first-class citizen, but the new way doesn't require the use of a function/method call; it is done with syntax, and the compiler/interpreter hides the translation into calls.
If you look at the Zygote example it clearly shows the use of prime notation
julia> f(10), f'(10)
Most seasoned programmers would guess what I just described even without a research paper explaining it. In other words, it is just that obvious.
Another way to think about it: if you have ever tried to calculate a derivative in a programming language, you know how hard it can be at times, and you ask yourself why they (the language designers and programmers) don't just add it to the language. In these cases they did.
What surprises me is how long it took before derivatives became available via syntax instead of calls, but if you have ever worked with scientific code or coded neural networks at that level, then you will understand why this is a concept that is being touted as something of value.
Also I would not view this as another programming paradigm, but I am sure it will be added to the list.
How does it relate to automatic differentiation (the two seem conflated a lot of the time)?
In both cases that you referenced, they use automatic differentiation to calculate the derivative instead of using symbolic differentiation. I do not view differentiable programming and automatic differentiation as two distinct sets; rather, differentiable programming needs a means of being implemented, and the means they chose was automatic differentiation. They could have chosen symbolic differentiation or some other means.
It seems you are trying to read more into what differentiable programming is than it really is. It is not a new way of programming, but just a nice feature added for doing derivatives.
Perhaps if they named it differentiable syntax it might have been more clear. The use of the word programming gives it more panache than I think it deserves.
EDIT
After skimming Swift Differentiable Programming Mega-Proposal and trying to compare that with the Julia example using Zygote, I would have to modify the answer into parts that talk about Zygote and then switch gears to talk about Swift. They each took a different path, but the commonality and bottom line is that the languages know something about differentiation which makes the job of coding them easier and hopefully produces less errors.
About the Wikipedia quote that
the programs can be differentiated throughout
At first reading it seems like nonsense, or at least it lacks enough detail to understand it in context, which is why I am sure you asked.
After many years of digging into what others are trying to communicate, one learns to take a source with a grain of salt unless it has been peer reviewed, and, unless it is absolutely necessary to understand, to just ignore it. In this case, if you ignore that sentence, most of your reference makes sense. However, I take it that you want an answer, so let's try to figure out what it means.
The key word that has me perplexed is throughout. Since you note that the statement came from Wikipedia, and Wikipedia gives three references for it, a search shows that the word throughout appears in only one of them:
∂P: A Differentiable Programming System to Bridge Machine Learning and Scientific Computing
Thus, since our ∂P system does not require primitives to handle new types, this means that almost all functions and types defined throughout the language are automatically supported by Zygote, and users can easily accelerate specific functions as they deem necessary.
So my take on this is that by going back to the source, e.g. the paper, you can better understand how that percolated up into Wikipedia, but it seems that the meaning was lost along the way.
In this case, if you really want to know the meaning of that statement, you should go to the Wikipedia talk page and ask the author of the statement directly.
Also note that the paper referenced is not peer reviewed, so the statements in there may not have any meaning amongst peers at present. As I said, I would just ignore it and get on with writing wonderful code.
You can guess its definition from the application of differentiability.
It's used for optimization, i.e. to calculate a minimum or maximum value.
Many of these problems can be solved by finding the appropriate function and then using techniques to find the required maximum or minimum value; a tiny sketch of that use follows.
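As a concrete, made-up illustration of that optimization use: gradient descent in C++ on f(x) = (x - 3)^2, whose derivative f'(x) = 2(x - 3) pushes x toward the minimum at x = 3.

#include <iostream>

int main() {
    double x = 0.0;                          // starting guess
    const double learning_rate = 0.1;
    for (int i = 0; i < 100; ++i) {
        double gradient = 2.0 * (x - 3.0);   // f'(x)
        x -= learning_rate * gradient;       // step downhill
    }
    std::cout << "minimum near x = " << x << "\n";  // prints a value very close to 3
}

A differentiable-programming system supplies the gradient line automatically instead of requiring you to derive and hand-code it.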

What part of STL knowledge is must for a C++ developer?

I have good knowledge of C++ but have never delved into the STL. What parts of the STL must I learn to improve productivity and reduce defects in my work?
Thanks.
I have good knowledge of C++
With all due respect: no, you don't. The standard library, or at least large parts of it (especially the subset known as the "STL"), is a fundamental part of C++. Without knowledge of it you don't know very much about C++ at all.
In fact, much of the modern design of C++ (essentially everything since the 98 version) was guided by design considerations stemming from the standard library, and many of the changes to the language since then have been changes to the standard library. If you take a look at the official C++ language description, a good part of the document is concerned with the library.
Usually the first reaction (at least in my opinion) of people who have not worked with the STL before is to get upset with all the template code. So I would start by studying templates a little.
If you already know template fundamentals, I would recommend taking a brief look at an STL design document. This is actually the second stage of hassle for people not yet familiar with it: the STL is not designed under a typical object-oriented paradigm, but under the generic programming paradigm.
With this in mind, a good start could be this introductory article. The terms used throughout the STL components are explained there. Please notice that it is a relatively old text focused on the SGI implementation (which predates the C++ standard and incorrectly mentions, for example, the hash-based containers as part of it). However, the theory is still valid.
Well, if you already know most of what I've said up to this point, just jump directly to the topics the others provided.
You mention improving your productivity and reducing defects. There are general guidelines that you can use for this. I assume C++11, and I mention a bit more than the STL (smart pointers); a short sketch putting several of these points together follows after the list:
Use containers; they will manage memory for you. You get rid of new for C arrays and the matching delete later, for example.
For dynamic arrays, use std::vector. You also have hashtables in std::unordered_map and balanced trees with std::map. There are more containers, take a look here.
Use std::array instead of plain C arrays whenever you can: they never decay to pointers when passed as arguments to functions, which can avoid very disgusting bugs.
Use smart pointers and forget forever about a naked new and its matching delete in code.
This can reduce errors far more than you would expect, especially in the presence of exceptions.
Use std::make_shared when possible. You can use it to allocate a shared_ptr directly as an argument to a function that takes a std::shared_ptr; doing the same with a naked new is more error-prone and costs an extra allocation.
Use algorithms instead of hand-coded loops. The code will be far more readable and usually more performant.
With this advice your code should look closer (but not necessarily equal or semantically equivalent) to C# or Java, in which manual memory management disappears. This is even better than garbage collection in many scenarios, since you have deterministic guarantees for when a resource will be freed.
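A small sketch (mine, assuming C++11 or later) that puts several of the points above together: a container instead of a raw array, <algorithm> instead of a hand-written loop, and std::make_shared instead of a naked new.

#include <algorithm>
#include <iostream>
#include <memory>
#include <vector>

struct Widget {
    explicit Widget(int v) : value(v) {}
    int value;
};

int main() {
    // std::vector owns the memory; no new[]/delete[] anywhere.
    std::vector<int> numbers = {5, 3, 9, 1, 7};

    // <algorithm> instead of hand-coded loops.
    std::sort(numbers.begin(), numbers.end());
    auto it = std::find(numbers.begin(), numbers.end(), 9);
    if (it != numbers.end())
        std::cout << "found 9 at index " << (it - numbers.begin()) << "\n";

    // std::make_shared: a single allocation, and the Widget is freed
    // automatically when the last shared_ptr owning it goes away.
    auto w = std::make_shared<Widget>(42);
    std::cout << w->value << "\n";
}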
I'd say the algorithms from <algorithm> will really clean up your code and at the same time make your code more concise.
Obviously, knowledge of all the containers will help you to optimize the bottlenecks of your code caused by a certain choice of container which is not optimal (but be sure to profile first).
These are pretty much the basics and they will help you a lot to make more robust code.
After that you can delve into smart pointers like std::shared_ptr which are almost always better than regular pointers (in my case, at least).
I think you can start with containers (vector, list) and algorithms (binary search, sort).
And as Jesse Emond wrote, the more you know, the better you live)))

OCR lib for math formulas

I need an open OCR library which is able to scan complex printed math formulas (for example some formulas which were generated via LaTeX). I want to get some LaTeX-like output (or just some AST-like data).
Is there something like this already? Or are current OCR techniques only able to parse line-oriented text?
(Note that I also posted this question on Metaoptimize because some people there might have additional knowledge.)
The problem was also described by OpenAI as im2latex.
SESHAT is an open-source system written in C++ for recognizing handwritten mathematical expressions. SESHAT was developed as part of a PhD thesis at the PRHLT research center at Universitat Politècnica de València.
An online demo: http://cat.prhlt.upv.es/mer/
The source: https://github.com/falvaro/seshat
Seshat is an open-source system for recognizing handwritten mathematical expressions. Given a sample represented as a sequence of strokes, the parser is able to convert it to LaTeX or other formats like InkML or MathML.
According to the answers on Metaoptimize and the discussion on the Tesseract mailinglist, there doesn't seem to be an open/free solution yet which can do that.
The only solution which seems to be able to do it (but I cannot verify as it is Windows-only and non-free) is, like a few other people have mentioned, the InftyProject.
InftyReader is the only one I'm aware of. It is NOT free software (it seems the money goes to a non-profit org, IIRC).
http://www.sciaccess.net/en/InftyReader/
I don't know why PDFs can't have metadata in LaTeX. As in: put the LaTeX equation in it! Is this so hard? (I dunno anything about PDF syntax, but I imagine it can be done.)
LaTeX syntax is THE ONE TRIED AND TRUE STANDARD for mathematics notation. It seems amazingly stupid that the folks who produced MathML and other stuff didn't take this into consideration. InftyReader generates MathML or LaTeX syntax.
If I want (pure) HTML, I then use TTH to read the LaTeX syntax. It just works.
ABBYY FineReader (a great OCR program) claims you can train the software for math, but this is immensely braindead (who has the time?).
And Unicode has lots of math symbols. That today's OCR readers can't grok them shows the sorry state of software and the brain deficit in this activity.
As to "one symbol at a time": TeX obviously has rules about where it will place symbols. Can't they write software that knows those rules?! TeX is even public domain! They could just use it in their commercial products.
Check out "Web Equation." It can convert handwritten equations to LaTeX, MathML, or SymbolTree. I'm not sure if the engine is open source.
Considering that current technologies read one symbol at a time (see http://detexify.kirelabs.org/classify.html), I doubt there is an OCR for full mathematical equations.
Infty works fairly well. My former company integrated it into an application that reads equations out loud for blind people and is getting good feedback from users.
http://www.inftyproject.org/en/download.html
Since the output from math OCR for complex formulas will likely have bugs -- even humans have trouble with it -- you will have to proofread the results, at least if they matter. The (human) proofreader will then have to correct the results, meaning you need a math formula editor. Given the effort needed from humans and the probably limited corpus of complex formulas, you might find it easier to assign the task to humans in the first place.
As a research problem, reading math via OCR is fun -- you need a formalism for 2-D grammars plus a symbol recognizer.
In addition to references already mentioned here, why not google for this? There is work that was done at Caltech, Rochester, U. Waterloo, and UC Berkeley. How much of it is ready to use out of the box? Dunno.
As of August 2019, there are a few options, depending on what you need:
For converting printed math equations/formulas to LaTeX, Mathpix is absolutely the best choice. It's free.
For converting handwritten math to LaTeX or printed math, MyScript is the best option, although its app costs a few dollars.
You know, there's an application in Win7 just for that: Math Input Panel. It even handles handwritten input (it's actually made for this). Give it a shot if you have Win7, it's free!
There is this great short video: http://www.youtube.com/watch?v=LAJm3J36tLQ
explaining how you can train your FineReader to recognize math formulas. If you use FineReader already, it's better to stick with one tool. Of course it is not freeware :(

So was that Data Structures & Algorithms course really useful after all?

I remember when I was in DSA I was like "wtf, O(n)?", wondering where I would use it other than in grad school or if you're not a PhD like Bloch. Somehow uses for it do pop up in business analysis, so I was wondering when you guys have had to call up your Big O skills to see how to write an algorithm, which data structure you used to fit, or whether you had to actually create a new DS (like your own implementation of a splay tree or trie).
Understanding Data Structures has been fundamental to many of the projects I've worked on, and that goes beyond the ten minute song 'n dance one does when asked such a question in an interview situation.
Granted that modern environments with all sorts of collection classes can make light work of storing and accessing large amounts of data, but having an understanding that a particular problem is best solved with a particular data structure can be a great timesaver. And by "timesaver" I mean "the difference between something working and not working".
Honestly, being able to answer that stuff is my biggest criterion for taking interviewees seriously in an interview. Knowing how basic data structures work, basic O(n) analysis, and some light theory is really crucial to being able to write large applications successfully.
It's important in the interview because it's important in the job. I've worked with techs in the past that were self taught, without taking the data structures course or reading a data structures book, and their code is occasionally bad in ways they should have seen coming.
If you don't know that n² is going to run slowly compared to n log n, you've got more to learn.
As for the latter half of the data structures courses, it isn't generally applicable to most tech jobs, but if you ever do wind up needing it, you'll wish you had paid more attention.
Big-O notation is one of the basic notations used when describing algorithms implemented by a particular library. For example, all documentation on STL that I've seen describes various operations in terms of big-O, so naturally you have to e.g. understand the difference between O(1), O(log n) and O(n) to understand the implications of your choice of STL containers and algorithms. MSDN also does that for .NET classes, and IIRC Java documentation does that for standard Java classes. So, I'd say that knowing the notation is pretty much a requirement for understanding documentation of most popular frameworks out there.
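As a small, made-up illustration of why those documented complexities matter when choosing a container (the numbers and names here are mine):

#include <algorithm>
#include <chrono>
#include <iostream>
#include <set>
#include <vector>

int main() {
    const int n = 20000;
    std::vector<int> vec;
    std::set<int> tree;
    for (int i = 0; i < n; ++i) { vec.push_back(i); tree.insert(i); }

    using clock = std::chrono::steady_clock;
    auto ms = [](clock::duration d) {
        return std::chrono::duration_cast<std::chrono::milliseconds>(d).count();
    };

    auto t0 = clock::now();
    long hits = 0;
    for (int i = 0; i < n; ++i)                 // O(n) per lookup -> O(n^2) in total
        hits += (std::find(vec.begin(), vec.end(), i) != vec.end());
    auto t1 = clock::now();

    for (int i = 0; i < n; ++i)                 // O(log n) per lookup -> O(n log n) in total
        hits += tree.count(i);
    auto t2 = clock::now();

    std::cout << hits << " hits; linear scans: " << ms(t1 - t0)
              << " ms, tree lookups: " << ms(t2 - t1) << " ms\n";
}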
Sure (even though I'm a humble MS in EE -- no PhD, no CS, unlike my colleague Joshua Bloch), I write a lot of stuff that needs to be highly scalable (or components that may need to be reused in highly scalable apps), so big-O considerations are almost always at work in my design (and it's not hard to take them into account). The data structures I use are almost always from Python's simple but rich supply (which I did lend a hand developing;-); rarely is a totally custom one needed (rather than building on top of list, dict, etc), but when it does happen (e.g. the bitvectors in my open source project gmpy), no big deal.
I was able to use B-Trees right when I learned about them in my algorithms class (that was about 15 years ago, when there were far fewer open source implementations available). And even later, the knowledge about the differences of e.g. container classes came in handy...
Absolutely: even though stacks, queues, etc. are pretty straightforward, it helps to have been introduced to them in a disciplined fashion.
B-Trees and more advanced sorting are a bit more difficult, so learning them early was a big benefit, and I have indeed had to implement each of them at various points.
Finally, I created an algorithm for single-connected components a few years back that was significantly better than the one our signal-processing team was using, but I couldn't convince them that it was better until I could show that it was O(n) complexity rather than O(n log n).
...just to name a few examples.
Of course, if you are content to remain a CRUD-system hacker with no real desire to do more than collect a paycheck, then it may not be necessary...
I found my knowledge of data structures very useful when I needed to implement a customizable event-driven system about ten years ago. That's the biggie, but I use that sort of knowledge fairly frequently in lesser ways.
For me, knowing the exact algorithms has been... nice as background knowledge. However, the thing that's been the most useful is the more general background of having to pay attention to how different pieces of an algorithm interact. For instance, there can be places in code where moving one piece of code (ie, outside a loop) can make a huge difference in both time and space.
It's less about the specific knowledge the course taught and more that it acted like several years of experience. The course took something that might take years of pure "real world experience" to encounter (and have drilled into you) in all its variations, and condensed it.
The title of your question asks about data structures and algorithms, but the body of your question focuses on complexity analysis, so I'll focus on that too:
There are lots of programming jobs where being able to do complexity analysis is at least occasionally useful. See What career can I hope for if I like algorithms? for some examples of these.
I can think of several instances in my career where either I or a co-worker discovered a piece of code where the (usually time, sometimes space) complexity was higher than it should have been, e.g. something that was quadratic or cubic when it could have been linear or n log(n). Such code would work fine when given small inputs, but on larger inputs it would quickly become really slow or consume all available memory. Knowing alternative algorithms and data structures, their complexities, and also how to analyze the complexity when building new algorithms is vital to being able to correct these problems (or avoid them in the first place).
Networking is all I've used it for: in an implementation of traveling salesman.
Unfortunately I do a lot of "line of business" and "forms over data" apps, so most problems I work on can be solved by hammering together arrays, linked lists, and hash tables. However, I've had the chance to work my data structures magic here and there:
Due to weird complex business rules, I worked on an application which used a custom thread pool implemented as a leftist-heap.
My dev team struggled to write a complex multithreaded app. It was plagued with race conditions, deadlocks, and lousy performance due to very fine-grained locking. We reworked how the code shared state between threads, opting to write a very lightweight wrapper to facilitate message passing. Once we converted our linked lists and hash tables to immutable stacks and immutable red-black trees, we had no more problems with thread safety or performance. The resulting code was immaculate and surprisingly readable.
Frequently, a business rules engine requires you to roll your own state machine, which is very naturally modelled as a graph where vertices are states and edges are transitions between states (a minimal sketch of this appears at the end of this answer).
If for no other reason, I'm glad I took the time to read about data structures and algorithms simply to be able to picture novel problems a little differently, especially combinatorial problems and graph problems. Graph theory is no longer a synonym for "scary".
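A minimal sketch of the state-machine-as-graph point from the list above; the states, events, and names are invented purely for illustration. States are vertices, and the adjacency map stores the labelled transition edges.

#include <iostream>
#include <map>
#include <string>

int main() {
    // transitions[state][event] -> next state
    std::map<std::string, std::map<std::string, std::string>> transitions = {
        {"Draft",     {{"submit", "Review"}}},
        {"Review",    {{"approve", "Published"}, {"reject", "Draft"}}},
        {"Published", {{"retract", "Draft"}}},
    };

    std::string state = "Draft";
    for (const std::string& event : {"submit", "approve", "retract"}) {
        state = transitions[state][event];   // follow the edge labelled with this event
        std::cout << event << " -> " << state << "\n";
    }
}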

Improving the way we write code?

While thinking about software-engineering in general I came across the question why we don't see any improvements in the way we write/document code.
Think about it: There has not been a revolutionary improvement since we've moved from punch cards to text editing. The last improvement I've seen is syntax highlighting and context sensitive help (e.g. Intellisense or ctags). Not something I would call revolutionary.
That makes me wonder: Why is it so?
I'll start with something I miss badly:
Lots of my code deals with geometry.
For documentation describing geometric relationships always ends up in a big heap of hard to read mathematical stuff (due to the lack of proper equation type-setting in ASCII). However, if I could embed a little drawing or scribble into the code everything would be much easier, neater and better to be understood.
What can you think up that would make your coding/text editing/documention tasks easier?
I'm surprised that nobody has yet mentioned No Silver Bullet. In 1986 (!), Frederick Brooks predicted that:
There is no single development, in either technology or management technique, which by itself promises even one order-of-magnitude [tenfold] improvement within a decade in productivity, in reliability, in simplicity. [...] We cannot expect ever to see two-fold gains every two years.
And in 23 years, he's been proven right. We've come up with a number of things such as syntax highlighting and Intellisense which have improved productivity significantly, but certainly not by an order of magnitude. As time marches on, we'll continue to make several incremental improvements, but the fact is there is no silver bullet: there's not going to be some magical revelation in the way we write code that will improve productivity by an order of magnitude.
I'm surprised that no one seems to have mentioned Donald Knuth's seminal Literate Programming - write your code as if it were a book or a scientific paper.
There has not been a revolutionary improvement since we've moved from punch cards to text editing
Never used a line editor, have you?
But seriously, text (especially in the representations chosen for modern languages) is
easily processed
fairly easy to specify
information dense
precise
Anything that comes along to replace it has to be a net win across all four of those properties. Not easy.
I disagree. We do have changes, small, but changes.
How common is the "for each" construct? Compare it to 20 years ago. How about the Domain Specific Languages movement? What about the idea that we should code in layers? How about Behavior Driven Development? Coding by complying with a specification... which writes a nice document as output when everything runs fine. How about the standardization of regular expressions? PCRE. What about Alan Kay's group's DSL-related work on "Moore's Law for Software", which explored a more advanced implementation of Cairo and generated TCP/IP code using diagrams from RFCs?
Documentation is a two-way dialog: both code becoming more understandable and people learning this special language. You wouldn't say that German needs documentation if you know German. I know natural languages are very far from computer languages, but there's a movement to make code more expressive. It's not about the new tools, it's about how we are coding.
One thing I've done recently in some of the more math-heavy sections of my application is to include the LaTeX markup for the particular equation as a comment/docstring. Right now, I just copy-paste into an online equation editor, but it would be very helpful to see the formula itself (with things like Greek letters and sub/superscripts) rather than a bunch of ASCII code.
Source Code In Database. In a nutshell, source code is parsed and put into a database. You'd then need an integrated IDE to view and edit the code, but at this point, syntax is decoupled from format. YOUR IDE could show you a program in a way that's completely different from someone else's, tuned to the task you're working on. I'd list some specific examples, but that article covers pretty much everything.
I'm surprised nobody mentioned it - javadoc is basically HTML, so there's nothing preventing you from embedding images (or anything else) in code. Simple, effective and ubiquitous, it's one of the things Java did right.
DrScheme lets you do these things. Here are the things you can insert, from the PLT website:
http://docs.plt-scheme.org/drscheme/Menus.html#(part._.Insert)
3.1.6 Insert
Insert Comment Box : Inserts a box that is ignored by DrScheme; use it to write comments for people who read your program.
Insert Image... : Opens a find-file dialog for selecting an image file in GIF, BMP, XBM, XPM, PNG, or JPG format. The image is treated as a value.
Insert Fraction... : Opens a dialog for a mixed-notation fraction, and inserts the given fraction into the current editor.
Insert Large Letters... : Opens a dialog for a line of text, and inserts a large version of the text (using semicolons and spaces).
Insert λ : Inserts the symbol λ (as a Unicode character) into the program. The λ symbol is normally bound the same as lambda.
Insert Java Comment Box : Inserts a box that is ignored by DrScheme. Unlike the Insert Comment Box menu item, this is designed for the ProfessorJ language levels. See ProfessorJ.
Insert Java Interactions Box : Inserts a box that will allows Java expressions and statements within Scheme programs. The result of the box is a Scheme value corresponding to the result(s) of the Java expressions. At this time, Scheme values cannot enter the box. The box will accept one Java statement or expression per line.
Insert XML Box : Inserts an XML; see XML Boxes and Scheme Boxes for more information.
Insert Scheme Box : Inserts a box to contain Scheme code, typically used inside an XML box; see XML Boxes and Scheme Boxes.
Insert Scheme Splice Box : Inserts a box to contain Scheme code, typically used inside an XML box; see also XML Boxes and Scheme Boxes.
Insert Pict Box : Creates a box for generating a Slideshow picture. Inside the pict box, insert and arrange Scheme boxes that produce picture values.
You can also insert your unit tests with the code that you're testing. Pretty neat stuff.
I think integrated IDEs with semantic highlighting and semantically-constrained suggestions (a la IDEA or Eclipse) are a huge advancement.
But that happened 8-10 years ago.
Template-based programming feels useful but never seems to catch on. Recently I was impressed with a demo of the Meta-Programming System, which leverages the interactive nature of the IDE to simplify the task of writing templates and what are (essentially) type-aware macros.
Meta-programming might help you define geometry-based macros that would substitute for a number of lines of code. I could imagine something that let you embed a more-readable 'math language' inside Java, and then parses its contents into something machine-readable.
I'd say version control was a pretty huge leap in how we work. The ability to keep a full record of every change anyone has made to the codebase, and to revert changes where necessary, has made a big difference.
I certainly respect Fred Brooks' argument No Silver Bullet, but I think the way we write code is nowhere near optimal, so there is lots more room for improvement. I tried to explain this in my book.
We're all familiar with "code golf", where you compete relentlessly to minimize something. That is a good way to approach the minimum possible value of that something.
What's great about this is that you are allowed, even encouraged, to break from traditions, prior conceptions, accepted wisdom, in the quest for winning. In short, you learn new things.
If the measure to be minimized is wall-clock execution time, you can do aggressive optimization.
If the measure is source code size (lines or characters) you get "code golf".
The measure I like best is "edit count". That is, given a code base, suppose a new requirement comes along. That requirement is implemented, completely, by editing the code base. Then a "diff" is done from old to new code base. The number of differences found is the edit count. Averaged over the set of likely new functional requirements, that is the measure to minimize.
If this is done aggressively, being free to contradict all conventional wisdom, the code base approaches a state I would call a domain-specific language (DSL). In this language, concepts expressed in code are in nearly 1-1 correspondence with problem-oriented concepts. In this state, it is not easy for the source code to be self-inconsistent (i.e. have bugs) because the fewer edits that have to be made to the source code, the fewer chances there are to make a mistake. It's also the case that such code tends to be short. But unlike "code golf" it tends to be very clear, because it maps the problem concepts so clearly.
So, tools and techniques that help in minimizing edit count can, in my opinion, be considered "silver bullets". DSL is one such. Code generation is another. My favorite optimization technique is another. For coding dynamically changing UIs there is differential execution. There are bound to be more, waiting to be discovered. Of course, everything depends on the training and experience of the "marksman" (the coder).
I think there are lots of new ideas to be discovered. The trick is to tell the difference between the ones that move us forward, versus the ones that hold us back.
I think this is where Doxygen and other documentation systems help. If we can embed small, discrete comments that link to other information such as:
/* help: fooimg.png */
And then have an external documentation system do that, then great.
Even better would be allowing our text-editor to treat those things as hyperlinks to external documentation.
I would include a drawing as a reference in the code documentation. I see no reason why you can't have footnotes in code.
The ability to make a section of code read-only is something I've wanted.
It sounds like you might be interested in Jonathan Edwards's research. See, for example:
"The Summer of Code"
"What's next?"
"The future of programming"
Diffing and searching pictures is hard. Diff and search are very important to programmers. Using pictures instead of text is only a marginal improvement in many situations, it has some drawbacks, and it requires general acceptance before it's really worth doing (since you don't make things more understandable if your reader doesn't grok what you've done).
Plus, programmers have a million little tricks that make their lives easier, based on text representations of code, that they'd lose if you gave them code to read that was expressed in anything other than text. Sure, they might replace or re-implement those tricks over time, but in the short term they're gone.
You don't see lawyers switching from English to little back-of-a-napkin diagrams in contracts, either (the Creative Commons licenses try, but cannot make the picture be the formal representation of the contract). Probably for similar reasons.
If someone comes up with a programming language and IDE that, on balance, beats text-based ones; and successfully markets it; then you'll see the start of a revolutionary shift from text to a new format. If nobody comes up with any such thing, then we're not missing out. If someone comes up with something that is more productive but it doesn't gain traction because of independent advantages of other technologies, then that loss is the price we pay for free-market capitalism. Perhaps the ideas will be recycled eventually...
That said, integration between code and documentation could clearly be improved, and there are many efforts underway to do so, using various techniques with varying success. Again, the problem is that any particular cunning plan can in practice only really be implemented in one or a few languages and development environments at a time, and so has difficulty proving that it really is better. Embedding documentation in code is possibly the only universal advance since the invention of the API...
I think there's still a lot that can be done with text, though. For example, debugger technology makes a big difference to programmer productivity in certain common circumstances (namely: when a test fails or something else unexpected happens, but it's not obvious what the faulty assumption is in the code you're looking at). There may be lower-hanging fruit in terms of making programming better, than the actual business of expressing the program.
The last improvement I've seen is syntax highlighting and context sensitive help
Then you haven't looked much. Modern IDEs can do far, FAR more than that, namely show you the semantic structure of code (e.g. inheritance hierarchies) and even manipulate it (automatic refactoring) or enrich it with external data (such as who last changed a particular line of code).
I've used emacs, I like text macros. But, what I really want is parse macros. I'd like my editor to expose the machinery behind refactoring in such a way that I can write my transformations on the parse tree of the language itself.
For example, Python added += at one point when my code was littered with x = x + 1 lines. If I could have written a search and replace command that worked on the parse tree, I could have quickly cleaned up large amounts of my source code.
So, I want standard search and replace, but I want it at the level where the structure of my code has meaning, at the abstract syntax tree.
If you've ever used ReSharper, each of its refactorings and recommendations works in the manner I describe: it finds a pattern in the parse tree and suggests a replacement, or, for a refactoring, applies a known replacement. I want access to that machinery for my own tasks!
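To make that concrete, here is a toy sketch (C++, with an invented and drastically simplified "AST") of search and replace on a parse tree: a pattern that matches x = x + <expr> statements and a rewrite that turns them into x += <expr>. A real tool would of course operate on the language's full syntax tree rather than this toy structure.

#include <iostream>
#include <string>
#include <vector>

struct Assign {
    std::string target;            // variable being assigned
    std::string op;                // "=" or "+="
    std::string lhs, binop, rhs;   // right-hand side "lhs binop rhs"; binop empty for a single term
};

// The "search" pattern: target = target + <expr>
bool matchesIncrement(const Assign& a) {
    return a.op == "=" && a.binop == "+" && a.lhs == a.target;
}

// The "replace": rewrite to target += <expr>
void rewriteToAugmented(Assign& a) {
    a.op = "+=";
    a.lhs = a.rhs;
    a.binop.clear();
    a.rhs.clear();
}

int main() {
    std::vector<Assign> program = {
        {"x", "=", "x", "+", "1"},   // will be rewritten
        {"y", "=", "x", "*", "2"},   // left alone
    };
    for (Assign& stmt : program) {
        if (matchesIncrement(stmt)) rewriteToAugmented(stmt);
        std::cout << stmt.target << " " << stmt.op << " " << stmt.lhs
                  << (stmt.binop.empty() ? std::string() : " " + stmt.binop + " " + stmt.rhs)
                  << "\n";
    }
}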
Have you used Doxygen or similar for documenting your code? You can add links to images, and other file types (often stored in the same directory as the source code), that will get sucked into the generated documentation. I realize that this is one step removed from seeing the detail directly in your favorite editor, but it definitely improves how we document our code.
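For example, Doxygen's formula and image commands let a comment render LaTeX math and pull a picture into the generated pages; the function and triangle.png below are made-up placeholders:

/**
 * Computes the area of a triangle from its three side lengths (Heron's formula).
 *
 * \f[ A = \sqrt{s(s-a)(s-b)(s-c)}, \qquad s = \frac{a+b+c}{2} \f]
 * \image html triangle.png "How the sides are labelled"
 */
double triangleArea(double a, double b, double c);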
Programming languages are a specialized form of mathematical notation, since you can express a programming language mathematically. Notation changes slowly, and so we don't get fast progress in our languages. Mostly, we advance when we come up with a new thing to fit into the notation, like using i to refer to the square root of negative one.
There are documentation schemes that allow you to embed things other than text. There was at least one programming scheme, Donald Knuth's Web, that allowed you to have a presentation and an execution version of a program (unfortunately, the base source code, the stuff you'd actually hack, was rather messy).
You could easily have a text editor that could treat comments as HTML, provided of course it could recognize comments as it saw them.
I've been thinking a lot about how to make coding faster and more efficient for the past few years, always trying to keep it realistic and doing minimalistic implementations. These are not revolutionary ideas, but since the original poster talked about the transition from punch cards to typing code, I thought I'd talk about other ways of communicating to the computer what we want to program.
My ideas are visual or vocal programming. The motivation behind them is that there are only so many ways a loop can be efficiently programmed, and an aware IDE could make some smart code-substitution decisions depending on inputs other than typed lines of code.
Visual programming vs Coding: encapsulate (literally) code into "boxes" which have inputs and outputs, and connect them together across a horizontal timeline. This is a high-level concept that would be intrinsically interesting for multithreading development since you can have multiple lines or threads happening at the same time. Every process can be divided into a "box", no matter how you see it. Sending an e-mail in its most basic form is a box which takes an email as input and outputs a success/fail signal. Since the boxes and the lines are distributed across a timeline, the notion of time and event chronology isn't lost and feedback lines are possible.
Vocal programming vs Coding: The effectiveness of this technique would revolve around the effectiveness of the vocal syntax chosen to create code and move the cursor. For example, you could say to the microphone "for variable zero to 10" and the system would automatically generate the following code, placing the cursor inside:
for (int x = 0; x < 10; x++) {
    // cursor would be placed here after the call
}
In terms of usability, you would need to be in a relatively silent room to minimize other sounds that might harm the voice recognition, so this technology would mostly be used in specialized environments.
This is the result of my extensive programming experience using a wide range of hardware and programming languages. Let me know what you guys think; I'd love to have a constructive discussion about it.
A few weeks back, Intentional Software created quite a buzz about their new language. I've yet to watch the presentation, but here is a quote from a review by Martin Fowler:
They started worryingly, with the usual unrevealing Powerpoints, but then they switched to showing the workbench and the curtain finally opened. To gauge the reaction, take a look at Twitter.
#pandemonial Quite impressed! This is sweet! Multiple domains, multiple langs, no question is going unanswered
#csells OK, watching a live electrical circuit rendered and working in a C# file is pretty damn cool.
#jolson Two words to say about the Electronics demo for Intentional Software: HOLY CRAPOLA. That's it, my brain has finally exploded.
#gblock This is not about snazzy demos, this is about completely changing the world we know it.
#twleung ok, the intellisense for the actuarial formulas is just awesome
#lobrien This is like seeing a 100-mpg carburetor : OMG someone is going to buy this and put it in a vault!
Two quotes come instantly to mind:
"If it ain't broke, don't fix it."
"Use the best tool for the job."
Of course, although the core code is still written as text, all the tools and libraries have changed massively since the days of punched cards.
This has been touched on by others, and it wouldn't revolutionise programming, but anyway...
I think it would be nice if code editors moved slightly beyond plain text editors. Even with syntax highlighting and code completion (which I think are incredibly good things), the editors of today (at least, the ones I use) still display exactly the same ASCII text (or whatever encoding is used) that is in the source files. I would be interested to see how well it would work if editors displayed, for example (some examples are more adventurous than others):
Comments in a text box with a light-blue background and no // or /* ... */ visible
Javadoc comments could have semi-rich text editing support (for those who write HTML Javadoc comments) (seriously, I would appreciate it if code editors rendered Javadoc comments as HTML, because they're not the easiest to skim over when their HTML is shown as plain text)
Functions in text boxes that could be collapsed to show only the signature (the collapsing can be done by current editors) and can be dragged around as boxes
Lines between function boxes to indicate how functions are connected
Zooming out so that rather than seeing a single source file (class in many languages) you can see multiple files and the way they connect to each other (this would essentially be building UML-like diagramming directly into the code editor)
I think this (in my mind at least) would work without requiring additional markup in the source files so users of plain text editors wouldn't be disadvantaged by having all this extra markup cluttering the files.
Part of the problem might stem from the fact that when you don't write code we don't call it programming: assembling modular components using a GUI, for instance.
You might be interested in these alternative programming "languages".
Ladder, designed to mimic the way relay-logic schemes work. Horrible IMO, but easy to understand for the old guys who did logic with sticks and stones. http://www.amci.com/tutorials/images/ladder-diagram.gif
SFC (Sequential Function Chart), designed to simplify parallel programming. Code is written into boxes, and these boxes can be placed parallel to each other and will thus execute simultaneously. By connecting the ends of several boxes you can synchronize events. Very common for automation applications.
Mathematica!!! It might not be the best programming language, but the syntax highlighting (if you can call it that) is awesome! For example, you can input a matrix by seeing the matrix nicely aligned instead of a huge double[][]. Graphs can be inserted in the code, and the formatting of mathematical expressions looks like it does when you write on paper. No more parenthesis madness or long Math.PI expressions that really only need one character. And best of all, the files are just plain text even if they are rendered nicely in the editor!
Debuggers are also an area where lots of improvement has been made. Debuggers with replay are starting to appear, and also visual debuggers where data can be modified in real time. Edit and continue is also a feature I wouldn't want to live without.
WTF "new users can only post a maximum of one hyperlink", you will have to google the stuff i originally added to this post >:(
A brain-to-computer translator. Typing is the real bottleneck. It really just needs to derive the algorithms I think up and convert that to machine code.
I would say a lot of the newer languages are pretty great at quickly creating algorithms. The improvements aren't so much revolutionary now as they are evolutionary.
Dare I say it might actually be a new development language (perhaps even a new paradigm) that takes us through such a revolution.
I think you might want to take a look at Leo. This is one guy's attempt at answering what you're asking about. I still can't wrap my VIM head around it personally, but others take to it quickly. It's not just a programming IDE, but more of an information organizer. It's written in Python, but I don't see why you can't code in other languages with it. The power of Leo is not so much the language, but the ability to express your thoughts and organize them, whether they be in code, diagrams, or images. Look over the tutorial and examples to get a feel for it. You might like it.
Automated semantic source code transformations, where a program can be reliably examined and manipulated by using an abstract interface/frontend to it that is aware of the underlying semantics.
So that source code can be queried and dealt with pretty much like a SQL database.
Allowing you to do static analysis of source code and refactor even complex source code by doing something along the lines of:
FIND CALLERS OF FUNCTION "foo" WHERE SIGNATURE("int","int","char*") AND RETURN_TYPE("bool");
...
RENAME MACRO "max" TO "maximum" IN FILE "macros.hxx";
RENAME NAMESPACE "prj" TO "project";
RENAME SYMBOL "OLDFOO" IN NAMESPACE "project";
RENAME FUNCTION "log" TO "show_log";
RENAME CLASS "FOO" TO "OLDFOO";
RENAME METHOD "FOO::inc" TO "FOO::increment";
...
CHANGE SIGNATURE IN FUNCTION "foo" WHERE SIGNATURE("int","int") TO SIGNATURE("double","double");
CHANGE SIGNATURE IN METHOD "myClass::handle" WHERE SIGNATURE("char") TO SIGNATURE("unsigned char")
MOVE FUNCTION "foo" in FILE "stuff.cc" TO "foo_funcs.cc";