What does "implementation-agnostic" mean? - terminology

I'm just wondering what "implementation-agnostic" means, since I didn't find any explanation. I mean it in this context: "an implementation-agnostic engineering approach".

The opposite of "implementation-agnostic" is "implementation-specific".
Some examples should make the difference clear:
Implementation-agnostic
Synonym: Implementation-independent
Examples:
The sorting algorithm Quicksort
Algorithms written in pseudocode
The examples above can be implemented in any programming language
(Assembler, BASIC, C#, C++, Java, JavaScript, ...).
Implementation-specific
Synonym: Implementation-dependent
Examples:
Device drivers
Machine-language code for AMD or Intel processors
The examples above run only on the hardware they were written for.
Software that depends on or uses other software (dependency injection, interfaces, operating systems, services, or frameworks) is also implementation-specific. For example, although .NET's intermediate language MSIL can run on different hardware and operating systems, it still depends on the .NET Framework and is therefore implementation-specific.
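To make the Quicksort example above concrete (a minimal sketch, using Python purely for illustration): the algorithm's description is implementation-agnostic, while any particular program realizing it is implementation-specific.

```python
# Implementation-agnostic description (pseudocode, valid for any language):
#   quicksort(items):
#     if items has at most one element, return items
#     pick a pivot
#     return quicksort(smaller items) + items equal to pivot + quicksort(larger items)

# Implementation-specific realization (tied to Python's syntax and list type):
def quicksort(items):
    if len(items) <= 1:
        return items
    pivot = items[0]
    less = [x for x in items[1:] if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items[1:] if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```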

The term is often used to discuss a problem without committing to a particular implementation. Doing so lets you choose the implementation or tool best suited to the problem, rather than having to worry about the limitations of an already chosen solution while you are still defining the problem.

Agnostic, in this context, means "doesn't care about". So implementation agnostic is something that does not care about the implementation.

To say that a solution is implementation-agnostic is to say that it's not reliant on specific technologies, programming languages etc. Pseudocode would be a good example of an implementation-agnostic tool, as would UML for modelling.

Related

What's the difference between Chisel and Lava and CLaSH?

I've been studying the sources of Chisel and also various Lavas (the Kansas, Chalmers and Xilinx flavors) and CLaSH. I'm trying to understand what the main selling points of Chisel are versus the others. The main one I've identified is fast simulation.
I was wondering if people who have studied more in-depth can point out other advantages, disadvantages and trade-offs.
(Sorry if it's too much of a discussion question. I tried posting to chisel-users but apparently you need to be accepted as a member to do that.)
First, a disclaimer that I'm a heavy Chisel user but have only passing familiarity with the Haskell-based DSELs that you mention.
I think the ability of Chisel to target multiple backends (C++, Verilog, etc.) is a significant advantage. The generated C++ allows cycle-accurate simulation at many times the speed of Verilog/VHDL simulators, because it avoids the event-driven model inherent to those languages.
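As a rough illustration of the compiled-simulation idea (a hand-written Python sketch, not actual Chisel output): the whole design becomes ordinary code that is called once per clock cycle, with no event queue or sensitivity lists to manage.

```python
# Toy cycle-accurate model of an 8-bit counter: state lives in plain fields,
# and one method call advances the design by exactly one clock cycle.
class CounterModel:
    def __init__(self):
        self.count = 0  # register state

    def clock(self):
        # All logic for the cycle is evaluated in a single pass.
        self.count = (self.count + 1) & 0xFF

sim = CounterModel()
for _ in range(300):
    sim.clock()
print(sim.count)  # 300 mod 256 == 44
```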
This is not an intrinsic limitation, but Lava and CLaSH seem to be mostly targeted at FPGA implementations, while Chisel has been used for work on both FPGAs and ASICs. Chisel may also be a bit better supported; code, instructions, and examples are all available on GitHub, and the language remains under active development.
There are also differences between Haskell and Scala (the parent languages); if you're more comfortable in one or the other, it might make getting started a bit easier. (I'll leave the "language wars" to the experts.)
There is a section on the Clash tutorial page that describes some trade-offs between Clash and the Lava flavors (I'll leave it as a reference below). Basically, Clash uses a static-analysis approach while the Lava flavors go with the DSEL (Domain-Specific Embedded Language) approach. These differences are probably pretty similar to the differences between Chisel and Clash, because Chisel also follows the DSEL approach. So, with Clash you can write Haskell code and then compile it into VHDL, Verilog, or SystemVerilog using the Clash compiler. I am not familiar enough with Chisel or DSELs in general, but I do know it's not just a compilation step to an HDL.
FWIW, I looked at using Chisel for projects and found the ecosystem, docs, and community amazing, but I did not like the Scala style. Further research into FP led me to Haskell and Clash. I like the pure functional style of hardware design, and the tight coupling with Haskell has allowed me to "have my cake and eat it too": learn hardware and Haskell at the same time. As the previous answer says, Clash vs Chisel is more of a language decision, while Clash vs the Lavas is more a DSEL vs static-analysis (compiled) decision. See the reference for further reading on the latter.
Ref: http://hackage.haskell.org/package/clash-prelude-1.2.5/docs/Clash-Tutorial.html#g:20

Pros and cons of weak and strong typing

I'm making the transition from Java to PHP/Javascript and discovering all the practical aspects of using a weakly typed language.
As I'm in a position to fully compare the two I'd like to know the pros and cons of each approach. Also, are there any other forms of typing out there?
In a weakly, dynamically typed programming language (like PHP), a programmer's mistakes show up as incoherent behavior (for instance, the program displays nonsensical information).
With a strongly, dynamically typed language (like Python), programming mistakes cause error messages. That makes them easier to uncover and diagnose, but in general the program is no longer usable once the message has been shown.
Finally, with a strongly, statically typed language (like Java, Ada, OCaml, Haskell, ...), some mistakes can be uncovered at compile time, which reduces the risk of shipping a buggy program (but the release happens later).
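A concrete illustration of the first two cases (using Python for the strong/dynamic side; the weakly typed behavior is noted in comments):

```python
# Strong dynamic typing (Python): mixing types raises an error at run time
# instead of silently coercing.
try:
    result = "1" + 1  # TypeError: can only concatenate str (not "int") to str
except TypeError as exc:
    print("caught:", exc)

# A weakly typed language would coerce instead: in JavaScript, "1" + 1 gives
# the string "11"; in PHP, "1" + 1 gives the number 2. Neither reports a mistake.

# Python requires the conversion to be explicit:
print(int("1") + 1)  # 2
```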
Yes. Python uses Dynamic Typing.
Generally it's a matter of personal preference and of the role that a given language's architects intended it to serve.
PHP (a scripting language), for example, makes sense as a weakly typed language, since the tasks it generally performs are far less complex and require fewer constraints than, say, a compiled language.
Regarding your final question, Mathematica is said to be "typeless."
High-level, typeless, dynamic language with consistent symbolic syntax and semantics across all data, functions, and interfaces
PHP/JavaScript can be used to develop better-looking UIs than Java. PHP also has fewer constraints and is easier to learn and run than Java.

What would be the best language in which to write an ESB?

My first thoughts are Erlang, or Java, but I wanted to know from others experiences.
It's pretty rare that there's a best language for writing any kind of application in the absence of external constraints. The popularity of Java for ESBs seems to be based on the fact that they're coordinating a bunch of other software that's also written in Java. While any language would work, they're often producing and consuming content for and from Java libraries and therefore benefit from using the same libraries in adapters that their clients and servers use.
A language that is not Java but runs on the JVM and interoperates well with Java would have most of Java's advantages for such software. Scala and Clojure come to mind as good options. Erlang does seem like an appropriate choice as well, though it may be tougher to sell to customers.
JavaScript: https://github.com/salboaie/SwarmESB The main innovation is in how easy it is to program your functionality. It comes with the "swarm" idea, a variant of mobile code that works very well with JavaScript but could be implemented in Java, PHP, etc.
http://servicemix.apache.org/home.html uses Java.
https://open-esb.dev.java.net/ uses Java.
http://www.jboss.org/ uses Java.
http://www.mulesoft.org/display/MULE/Home seems to be Java.
http://wso2.com/products/enterprise-service-bus/ is Java.
So, if you write yours in Java, you'll be in good company with all the others written in Java.

What language features are required in a programming language to make a compiler?

Programming languages seem to go through several stages. Firstly, someone dreams up a new language, Foo Language. The compiler/interpreter is written in another language, usually C or some other low level language. At some point, FooL matures and grows, and eventually someone, somewhere will write a compiler and/or interpreter for FooL in FooL itself.
My question is this: What is the minimal subset of language features such that someone could implement that language in itself?
A compiler can be written even using a Turing machine - a Universal Turing Machine is basically a compiler/interpreter for any Turing machine, so any Turing-complete language should be enough :)
In theory, surprisingly little. A computability theorist would say that all you need is mu-recursion or a Turing machine or the like.
However, from a practical point of view, you're not going to be very happy trying to implement a programming language in a Turing machine. I would say that, at a minimum, you would want to have all the usual control-flow constructs, the primitive datatypes, subroutines, as well as arrays and structs. That should be enough to let you implement that subset of the language in the language itself -- and you can then bootstrap yourself up from there.
One option is a read-eval-print loop. This can be used to build many higher-level constructs. I believe this is the path taken by LISP.
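As a toy illustration of that read-eval-print idea (a sketch in Python, not how any real LISP is implemented):

```python
# A tiny REPL for a Lisp-like arithmetic language (binary operators only):
# read a line, parse it into nested lists, evaluate it, print the result, loop.
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def parse(tokens):
    """Turn tokens like ['(', '+', '1', '2', ')'] into a nested list."""
    tok = tokens.pop(0)
    if tok == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # drop ')'
        return expr
    return int(tok) if tok.lstrip("-").isdigit() else tok

def evaluate(expr):
    if isinstance(expr, int):
        return expr
    op, *args = expr
    return OPS[op](*[evaluate(a) for a in args])

def repl():
    while True:
        line = input("tiny> ")                  # read
        if line.strip() in ("", "quit"):
            break
        tokens = line.replace("(", " ( ").replace(")", " ) ").split()
        print(evaluate(parse(tokens)))          # eval, print ... then loop

# repl()  # typing "(+ 1 (* 2 3))" prints 7
```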
I am unsure about the beginnings of C, but I think it started with a few system calls to implement branching, loops, assignment and single-character I/O, and built from there.
I'd assume an assembler would make the cut.
My question is this: What is the minimal subset of language features such that someone could implement that language in itself?
There is no requirement for the language to be useful for anything other than compiling itself? Then I present to you Useless, the language in which every text is a valid program and means "a program that takes any input and produces itself" (this is also known as a Useless compiler).

runnable pseudocode?

I am attempting to determine prior art for the following idea:
1) user types in some code in a language called (insert_name_here);
2) user chooses a destination language from a list of well-known output candidates (javascript, ruby, perl, python);
3) the processor translates insert_name_here into runnable code in destination language;
4) the processor then runs the code using the relevant system call, based on the chosen language.
The reason this works is that there is a pre-established one-to-one mapping between all language constructs of insert_name_here and all supported destination languages.
(Disclaimer: This obviously does not produce "elegant" code that is well-tailored to the destination language. It simply does a rudimentary translation that is runnable. The purpose is to allow developers to get a quick-and-dirty implementation of algorithms in several different languages for those cases where they do not feel like re-inventing the wheel, but are required for whatever reason to work with a specific language on a specific project.)
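For concreteness, a minimal sketch (in Python, with invented names) of what such a construct-by-construct mapping might look like:

```python
# Hypothetical translator core: one tiny abstract construct, with one emitter
# clause per supported destination language.
def emit_assignment(name, value, target):
    """Map a single 'assign value to variable' construct onto a target language."""
    if target == "python":
        return f"{name} = {value!r}"
    if target == "javascript":
        return f"let {name} = {value!r};"
    if target == "ruby":
        return f"{name} = {value!r}"
    raise ValueError(f"unsupported target: {target}")

def translate(program, target):
    # 'program' is a list of (construct, args) tuples, i.e. the source AST.
    lines = []
    for construct, args in program:
        if construct == "assign":
            lines.append(emit_assignment(*args, target=target))
        # ... one emitter per construct of the source language ...
    return "\n".join(lines)

source = [("assign", ("greeting", "hello")), ("assign", ("count", 3))]
print(translate(source, "javascript"))
# let greeting = 'hello';
# let count = 3;
```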
Does this already exist?
The .NET CLR is designed such that C++.NET, C#.NET, and VB.NET all compile to the same intermediate language (CIL), and you can "decompile" that CIL back into any one of those languages.
So yes, I would say it already exists though not exactly as you describe.
There are converters available for different languages. The problem you are going to have is dealing with libraries. While mapping between language statements might be easy, finding mappings between library functions will be very difficult.
I'm not really sure how useful that type of code generator would be. Why would you want to write something in one language and then immediately convert it to something else? I can see the rationale for 4th Gen languages that convert diagrams or models into code but I don't really see the point of your effort.
Yes, a program that transforms a program from one representation to another does exist. It's called a "compiler".
And as to your question whether that is always possible: as long as your target language is at least as powerful as the source language, then it is possible. So, if your target language is Turing-complete, then it is always possible, because there can be no language that is more powerful than a Turing-complete language.
However, there does not need to be a dumb 1:1 mapping.
For example: the Microsoft Volta compiler which compiles CIL bytecode to JavaScript sourcecode has a problem: .NET has threads, JavaScript doesn't. But you can implement threads with continuations. Well, JavaScript doesn't have continuations either, but you can implement continuations with exceptions. So, Volta transforms the CIL to CPS and then implements CPS with exceptions. (Newer versions of JavaScript have semi-coroutines in the form of generators; those could also be used, but Volta is intended to work across a wide range of JavaScript versions, including obviously JScript in Internet Explorer.)
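For readers unfamiliar with CPS, a tiny illustration in Python (not Volta's actual output) of what the rewrite looks like:

```python
# Direct style: the result comes back via return.
def add(a, b):
    return a + b

# Continuation-passing style: every function takes an extra argument k,
# "the rest of the program", and hands its result to k instead of returning.
def add_cps(a, b, k):
    k(a + b)

print(add(1, 2) * 10)                             # direct style: 30
add_cps(1, 2, lambda result: print(result * 10))  # CPS equivalent: 30
```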
This seems a little bizarre. If you're using the term "prior art" in its most common form, you're discussing a potentially patentable idea. If that is the case, you have:
1/ Published the idea, starting the clock running on patent filing - I'm assuming, perhaps incorrectly, that you're based in the U.S. Other jurisdictions may have other rules.
2/ Told the entire planet your idea, which means it's pretty much useless to try and patent it, unless you act very fast.
If you're not thinking about patenting this and were just using the term "prior art" in a layperson's sense, I apologize. I work for a company that takes patents very seriously, and it has been drilled into us, in great detail, what we're allowed to do with information before filing.
Having said that, patentable ideas must be novel, useful and non-obvious. I would think that your idea would not pass on the third of these, since you're describing a language translator, which has the prior art of the many Pascal-to-C and Fortran-to-C converters out there.
The one glimmer of hope would be the ability of your idea to generate one of multiple output languages (which p2c and f2c don't do) but I think even that would be covered by the likes of cross compilers (such as gcc) which turn source into one of many different object languages.
IBM has a product called Visual Age Generator in which you code in one (proprietary) language and it's converted into COBOL/C/Java/others to run on different target platforms from PCs to the big honkin' System z mainframes, so there's your first problem (thinking about patenting an idea that IBM, the biggest patenter in the world, is already using).
Tons of them. p2c, f2c, and the original implementations of C++ and Objective-C strike me immediately. Beyond that, it's kind of hard to distinguish what you're describing from any compiler, especially for us old guys whose compilers generated ASM code as an intermediate representation anyway.