Core of Verifier in Isabelle/HOL - proof

Question
What is the core algorithm of the Isabelle/HOL verifier?
I'm looking for something on the level of a Scheme metacircular evaluator.
Clarification
I'm only interested in the Verifier, not the strategies for automated theorem proving.
Context
I want to implement a simple proof verifier from scratch (purely for educational reasons, not for production use).
I want to understand the core Verifier algorithm of Isabelle/HOL. I don't care about the strategies / code used for automated theorem proving.
I have a suspicion that the core Verifier algorithm is very simple (and elegant). However, I can't find it.
Thanks!

Isabelle is a member of the "LCF family" of proof checkers, which means there is a special module, the inference kernel, through which all inferences are run to produce values of the abstract datatype thm. This is a bit like an operating system kernel processing system calls. Everything you can produce this way is "correct by construction" relative to the correctness of the kernel implementation. Since the programming language environment of the prover (Standard ML) has very strong static type-correctness properties, the correctness-by-construction of the inference kernel carries over to the rest of the proof assistant implementation, which can be quite huge.
So in principle you have a relatively small "trusted kernel" part and a really big "application user-space". In particular, most of Isabelle/HOL is actually a big collection of library theories and add-on tools (mostly in SML) in Isabelle user-land.
In Isabelle, the kernel infrastructure is quite complex, but still very small compared to the rest of the system. The kernel is in fact layered into a "micro kernel" (the Thm module) and a "nano kernel" (the Context module). Thm produces thm values in the sense of Milner's LCF approach, and Context takes care of theory certificates for any results you produce, as well as proof contexts for local reasoning (notably in the Isar proof language).
If you want to learn more about LCF-style provers, I recommend also looking at HOL-Light, which is probably the smallest realistic system of the LCF family, realistic in the sense that people have done big applications with it. HOL-Light has the big advantage that its implementation can be easily understood, but this minimalism also has some disadvantages: it does not fully protect the user from doing nonsense in its ML environment, which is OCaml instead of SML. For various technical reasons, OCaml is not as "safe" by default as Standard ML.
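To give a feel for how small that trusted core can be, here is a toy sketch of the LCF idea in C++ (not Isabelle's actual Standard ML code; the names Thm, axiom_refl and modus_ponens, and the string-based formulas, are made up purely for illustration). The only point is that Thm's constructor is private, so the sole way to obtain a Thm value is through the kernel's rule functions, which makes every theorem "correct by construction" relative to those few lines:

    #include <stdexcept>
    #include <string>
    #include <utility>

    // Toy "formulas": a real kernel would use typed lambda terms.
    using Formula = std::string;

    // The "inference kernel": Thm has a private constructor, so the only way
    // to obtain a Thm value is through the rule functions declared as friends.
    class Thm {
    public:
        const Formula& concl() const { return concl_; }
    private:
        explicit Thm(Formula c) : concl_(std::move(c)) {}
        Formula concl_;
        friend Thm axiom_refl(const Formula& t);
        friend Thm modus_ponens(const Thm& imp, const Thm& ant);
    };

    // Rule:  |- t = t
    Thm axiom_refl(const Formula& t) {
        return Thm(t + " = " + t);
    }

    // Rule:  from |- A --> B  and  |- A,  conclude  |- B
    Thm modus_ponens(const Thm& imp, const Thm& ant) {
        const std::string arrow = " --> ";
        const auto pos = imp.concl().find(arrow);
        if (pos == std::string::npos || imp.concl().substr(0, pos) != ant.concl())
            throw std::runtime_error("modus_ponens: rule not applicable");
        return Thm(imp.concl().substr(pos + arrow.size()));
    }

User-space code (tactics, decision procedures, entire libraries) can then be arbitrarily large and buggy; as long as it can only assemble Thm values by calling the kernel's rules, every theorem it produces has been checked by the kernel. A real kernel works on typed lambda terms and has many more primitive rules, but the principle is the same.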

If you untar the Isabelle sources, e.g.
http://isabelle.in.tum.de/dist/Isabelle2013_linux.tar.gz
you will find the core definitions in
src/Pure/thm.ML
And there is already a project of the kind you want to tackle:
http://www.proof-technologies.com/holzero/
Added later: another, more serious project is
https://team.inria.fr/parsifal/proofcert/


How to specify chisel’s post-processor?

Quote from the librecores wiki
One post-processor generates a Verilog that is tuned for FPGA execution. A second generates Verilog that is tuned for ASIC.
Is this true? How to specify which post-processor to use?
I noticed that we can send an option ‘-X xxx’ to chisel, in which ‘xxx’ can be high, middle, low, verilog... Is this related? What’s the exact meaning of these ‘compilers’?
Thank you!
Very narrowly addressing your latter question, the -X/--compiler command line argument determines which FIRRTL compiler and emitter to use.
The Chisel3 compiler generates CHIRRTL (a high level form of the FIRRTL intermediate representation). The FIRRTL intermediate representation (IR), described in more detail in a UC Berkeley Technical Report, is a simple language for describing a circuit.
The FIRRTL compiler, broadly, moves a circuit, represented in the FIRRTL IR, from a high-level representation (what is described in the specification) to a mid-level representation, and finally to a low-level representation that maps easily to Verilog. The FIRRTL compiler can elect to stop early at High FIRRTL, Mid FIRRTL, or Low FIRRTL, or go all the way to Verilog. The -X/--compiler argument tells it whether you want to exit early and only target one of these representations.
Note: CHIRRTL will eventually be removed and High FIRRTL will be emitted directly by the Chisel compiler.
I'm not fully familiar with the librecores flow, but glancing over https://github.com/librecores/riscv-sodor I don't see any post-processing scripts. It might be worth filing an issue on the repo to ask for clarification on that point.
For Chisel designs in general, people use transforms on the IR to specialize the code for FPGA vs. ASIC. The most common case is the handling of memory structures. The behavioral memories emitted by default work well for FPGAs because they are correctly inferred as BRAMs. For ASICs, there is a standard transform that replaces memories with black-boxed interfaces so that the user can provide implementations that use SRAM macros from their given implementation technology.

Using functions in VHDL for synthesis

I do use functions in VHDL now and then, mostly in testbenches and seldom in synthesized projects, and I'm quite happy with that.
However, I was wondering whether, for projects that will be synthesized, it really is a smart move (mostly in terms of LE use). I've read quite a lot about this online, but I can't find anything satisfying.
For instance, I've read things like: "The function is synthesized each time it's called!" Is that really so? (I thought of it more like a component instantiated once, whose inputs and outputs are accessed from various places in the design, but I guess that may be incorrect.)
In the case of a function used only once, what would change between that and writing the VHDL directly in the process, for example (in terms of LE use)?
A circuit in hardware, for example in an FPGA, executes everywhere all the time, whereas a program on a CPU executes in only one place at a time. This allows a program on a CPU to reuse program code for different data, whereas a circuit in hardware must have sufficient resources to process all the data all the time.
So a circuit written in VHDL is generally translated by the synthesis tool into a massively parallel construction that allows concurrent operation of the entire design all the time. The VHDL language was created for concurrent execution, and this is a major difference from ordinary programming languages.
As a consequence, a design that implements an algorithm with functions and a design that implements the same algorithm with separate logic will have exactly the same size and speed, since the synthesis tool will expand the functions to dedicated logic in order to make the required hardware available.
That being said, it is possible to reuse the same hardware for different data, but the designer must generally explicitly create the design to support this, and thereby interleave different data sets when timing allows it.
And finally, as scary_jeff also points out, it is a smart move to use functions, since there is nothing to lose in terms of size or speed, but all the advantages of creating a manageable design. Be aware, though, that functions can't contain state, so it is only possible to create functions for combinatorial logic between flip-flops, which usually limits the possible complexity in order to meet timing.
Yes, you should use functions and procedures.
Many people and companies use functions and procedures in synthesizable code. Some coding styles disallow functions for no good reason. If you feel uncertain about a certain construct in VHDL (in this case: functions), just type up a small example and inspect the synthesis result.
Functions are really powerful and they can help you create better hardware with less effort. As with all powerful things, you can create really bad code (and bad synthesis results) with functions too.

Normal Cuda Vs CuBLAS?

Just out of curiosity: cuBLAS is a library for basic matrix computations. But these computations, in general, can also be written in normal CUDA code easily, without using cuBLAS. So what is the major difference between the cuBLAS library and your own CUDA program for the matrix computations?
We highly recommend developers use cuBLAS (or cuFFT, cuRAND, cuSPARSE, thrust, NPP) when suitable for many reasons:
We validate correctness across every supported hardware platform, including those which we know are coming up but which maybe haven't been released yet. For complex routines, it is entirely possible to have bugs which show up on one architecture (or even one chip) but not on others. This can even happen with changes to the compiler, the runtime, etc.
We test our libraries for performance regressions across the same wide range of platforms.
We can fix bugs in our code if you find them. Hard for us to do this with your code :)
We are always looking for which reusable and useful bits of functionality can be pulled into a library - this saves you a ton of development time, and makes your code easier to read by coding to a higher level API.
Honestly, at this point, I can probably count on one hand the number of developers out there who actually implement their own dense linear algebra routines rather than calling cuBLAS. It's a good exercise when you're learning CUDA, but for production code it's usually best to use a library.
(Disclosure: I run the CUDA Library team)
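As a hedged illustration of what "calling cuBLAS" instead of writing your own kernel looks like in practice, here is a minimal sketch of a single-precision matrix multiply using the cuBLAS v2 host API (column-major storage as cuBLAS expects; the sizes and fill values are arbitrary and error checking is omitted for brevity):

    #include <cublas_v2.h>
    #include <cuda_runtime.h>
    #include <vector>

    // Computes C = alpha * A * B + beta * C, with A (m x k), B (k x n),
    // C (m x n), all stored column-major as cuBLAS expects.
    int main() {
        const int m = 256, n = 256, k = 256;
        const float alpha = 1.0f, beta = 0.0f;

        std::vector<float> hA(m * k, 1.0f), hB(k * n, 1.0f), hC(m * n, 0.0f);

        float *dA, *dB, *dC;
        cudaMalloc(&dA, hA.size() * sizeof(float));
        cudaMalloc(&dB, hB.size() * sizeof(float));
        cudaMalloc(&dC, hC.size() * sizeof(float));
        cudaMemcpy(dA, hA.data(), hA.size() * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(dB, hB.data(), hB.size() * sizeof(float), cudaMemcpyHostToDevice);

        cublasHandle_t handle;
        cublasCreate(&handle);

        // A single library call replaces a hand-written (and hand-tuned) SGEMM kernel.
        cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                    m, n, k, &alpha, dA, m, dB, k, &beta, dC, m);

        cudaMemcpy(hC.data(), dC, hC.size() * sizeof(float), cudaMemcpyDeviceToHost);

        cublasDestroy(handle);
        cudaFree(dA); cudaFree(dB); cudaFree(dC);
        return 0;
    }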
There are several reasons you'd choose to use a library instead of writing your own implementation. Three, off the top of my head:
You don't have to write it. Why do work when somebody else has done it for you?
It will be optimised. NVIDIA-supported libraries such as cuBLAS are likely to be optimised for all current GPU generations, and later releases will be optimised for later generations. While most BLAS operations may seem fairly simple to implement, to get peak performance you have to optimise for the hardware (this is not unique to GPUs). A simple implementation of SGEMM, for example, may be many times slower than an optimised version (see the naive kernel sketch after this list).
They tend to work. There's probably less chance that you'll run up against a bug in a library than that you'll create a bug in your own implementation which bites you when you change some parameter or other in the future.
The above isn't just relevant to cuBLAS: if you have a method that's in a well-supported library, you'll probably save a lot of time and gain a lot of performance by using it rather than your own implementation.
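For contrast, the "simple implementation of SGEMM" mentioned in the list above might look like the sketch below: one thread per output element, column-major indexing, and no tiling or shared-memory blocking. It is functionally correct, but because it ignores the memory hierarchy it is typically many times slower than the tuned cublasSgemm call. The kernel and launch shape here are illustrative, not a recommended implementation:

    // A deliberately naive SGEMM kernel: computes C = alpha * A * B + beta * C
    // with one output element per thread, column-major storage, no tiling and
    // no shared memory.
    __global__ void naive_sgemm(int m, int n, int k, float alpha,
                                const float* A, const float* B,
                                float beta, float* C) {
        int row = blockIdx.y * blockDim.y + threadIdx.y;   // 0 .. m-1
        int col = blockIdx.x * blockDim.x + threadIdx.x;   // 0 .. n-1
        if (row < m && col < n) {
            float acc = 0.0f;
            for (int i = 0; i < k; ++i)
                acc += A[row + i * m] * B[i + col * k];    // column-major access
            C[row + col * m] = alpha * acc + beta * C[row + col * m];
        }
    }

    // Launch example, replacing the cublasSgemm call in the previous sketch:
    //   dim3 block(16, 16);
    //   dim3 grid((n + 15) / 16, (m + 15) / 16);
    //   naive_sgemm<<<grid, block>>>(m, n, k, alpha, dA, dB, beta, dC);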

CUDA - Use the CURAND Library for Dummies

I was reading the CURAND library API. I am a newbie in CUDA, and I wanted to see if someone could show me simple code that uses the CURAND library to generate random numbers. I am looking into generating a large amount of numbers to use with discrete event simulation. My task is just to develop algorithms that use GPGPUs to speed up random number generation. I have implemented the LCG, multiplicative, and Fibonacci methods in standard C. However, I want to "port" those codes to CUDA and take advantage of threads and blocks to speed up the process of generating random numbers.
Link 1: http://adnanboz.wordpress.com/tag/nvidia-curand/
That person has two of the methods I will need (LCG and Mersenne Twister) but the codes do not provide much detail. I was wondering if anyone could expand on those initial implementations to actually point me in the right direction on how to use them properly.
Thanks!
Your question is misleading - you say "Use the cuRAND Library for Dummies" but you don't actually want to use cuRAND. If I understand correctly, you actually want to implement your own RNG from scratch rather than use the optimised RNGs available in cuRAND.
First recommendation is to revisit your decision to use your own RNG, why not use cuRAND? If the statistical properties are suitable for your application then you would be much better off using cuRAND in the knowledge that it is tuned for all generations of the GPU. It includes Marsaglia's XORWOW, l'Ecuyer's MRG32k3a, and the MTGP32 Mersenne Twister (as well as Sobol' for Quasi-RNG).
You could also look at Thrust, which has some simple RNGs, for an example see the Monte Carlo sample.
If you really need to create your own generator, then there's some useful techniques in GPU Computing Gems (Emerald Edition, Chapter 16: Parallelization Techniques for Random Number Generators).
As a side note, remember that while a simple LCG is fast and easy to skip ahead, LCGs typically have fairly poor statistical properties, especially when using large quantities of draws. When you say you will need "Mersenne Twister" I assume you mean MT19937. The referenced Gems book talks about parallelising MT19937, but the original developers created the MTGP generators (also referenced above) since skip-ahead is fairly complex to implement for MT19937.
Also as another side note, just using a different seed to achieve parallelisation is usually a bad idea, statistically you are not assured of the independence. You either need to skip-ahead or leap-frog, or else use some other technique (e.g. DCMT) for ensuring there is no correlation between sequences.
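Since the question explicitly asks for a simple piece of code, here is a minimal sketch using the cuRAND host API to fill a device buffer with uniform random numbers (the generator choice, seed, offset and buffer size are arbitrary, and error checking is omitted). It uses the XORWOW generator mentioned above; swapping in CURAND_RNG_PSEUDO_MTGP32 gives the GPU Mersenne Twister, and curandSetGeneratorOffset is the host-API way to skip ahead rather than relying on different seeds:

    #include <curand.h>
    #include <cuda_runtime.h>
    #include <cstdio>
    #include <vector>

    int main() {
        const size_t n = 1 << 20;          // number of random floats to generate
        float* dData = nullptr;
        cudaMalloc(&dData, n * sizeof(float));

        curandGenerator_t gen;
        // XORWOW pseudo-random generator; use CURAND_RNG_PSEUDO_MTGP32 for
        // the GPU Mersenne Twister mentioned above.
        curandCreateGenerator(&gen, CURAND_RNG_PSEUDO_XORWOW);
        curandSetPseudoRandomGeneratorSeed(gen, 1234ULL);
        // Skip-ahead: start this stream 'offset' draws into the sequence,
        // instead of just picking a different seed.
        curandSetGeneratorOffset(gen, 0ULL);

        // Uniform floats in (0, 1], generated on the device.
        curandGenerateUniform(gen, dData, n);

        // Copy a few values back just to show the result.
        std::vector<float> h(8);
        cudaMemcpy(h.data(), dData, h.size() * sizeof(float), cudaMemcpyDeviceToHost);
        for (float x : h) std::printf("%f\n", x);

        curandDestroyGenerator(gen);
        cudaFree(dData);
        return 0;
    }

If you need the numbers inside your own kernels, cuRAND also has a device API (curand_kernel.h, with curand_init and curand_uniform), which is often what you want for simulation code that consumes the numbers directly on the GPU.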

ANN for decompiler?

Have there ever been any attempts at utilizing artificial neural networks in decompilation? It would be nice if it were possible to provide the trimmed semantics of the source along with the code to a neural network so it could learn the connection between the two. I assume this would likely lose its effectiveness when there are optimizations, and it may work better for high-level languages, but I'm interested in hearing about any attempts anyone has made at this.
I added this as a comment, but I think I will go ahead and post it as an answer as well. It looks like in the 11 years since this question was posted, there has been work done in this direction. Here is a link:
https://www.groundai.com/project/a-neural-based-program-decompiler/1
And here is the abstract
A Neural-based Program Decompiler
Reverse engineering of binary executables is a critical problem in the computer security domain. On the one hand, malicious parties may recover interpretable source codes from the software products to gain commercial advantages. On the other hand, binary decompilation can be leveraged for code vulnerability analysis and malware detection. However, efficient binary decompilation is challenging. Conventional decompilers have the following major limitations: (i) they are only applicable to specific source-target language pair, hence incurs undesired development cost for new language tasks; (ii) their output high-level code cannot effectively preserve the correct functionality of the input binary; (iii) their output program does not capture the semantics of the input and the reversed program is hard to interpret. To address the above problems, we propose Coda (the abbreviation for CodeAttack), the first end-to-end neural-based framework for code decompilation. Coda decomposes the decompilation task into two key phases: First, Coda employs an instruction type-aware encoder and a tree decoder for generating an abstract syntax tree (AST) with attention feeding during the code sketch generation stage. Second, Coda then updates the code sketch using an iterative error correction machine guided by an ensembled neural error predictor. By finding a good approximate candidate and then fixing it towards perfect, Coda achieves superior performance compared to baseline approaches. We assess Coda’s performance with extensive experiments on various benchmarks. Evaluation results show that Coda achieves an average of 82% program recovery accuracy on unseen binary samples, where the state-of-the-art decompilers yield 0% accuracy. Furthermore, Coda outperforms the sequence-to-sequence model with attention by a margin of 70% program accuracy. Our work reveals the vulnerability of binary executables and imposes a new threat to the protection of Intellectual Property (IP) for software development.
I'm assuming you mean decompilation to human-readable C/C++ as compared to assembly, then.
Given the input size (optimized/compiled code), the output size of succinct code, and the multi-line, stateful nature of the decompilation process, I would have thought this is a larger problem than an ANN could ever handle.