Tautology Checker for GNU Prolog [closed] - open-source

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 7 years ago.
I am looking for open-source implementations of tautology checkers written in GNU Prolog (implementation for SWI-Prolog is acceptable as well, but GNU Prolog is preferred).
I'd like to feed program input with queries like:
A and (B or C) iff (A or B) and (A or C).
or
3^2 * (X + 2) == (9 * X) + 18.
of course, notation can be different, like this:
(3 power 2) mul (X plus 2) equals (9 mul X) plus 18.
What I expect as a result is a boolean answer, like "Yes/No", "Equal/Different", "Proof found/Failed to find proof", or similar.
I've found a tautology checker for GNU Prolog at ftp://ftp.cs.yorku.ca/pub/peter/SVT/GNU/ , but no licence is attached, and I have no clue how to apply GNU Prolog's arithmetic constraints and arithmetic capabilities in order to extend the purely logical model with arithmetic.
Are there any other similar solvers?
Some examples of how the arithmetic capabilities might be used to extend the model would also help.
Thanks, Greg.
P.S. Regarding arithmetic, I am looking for partial support only - I want to handle just some basic cases, which I can code by hand with simple heuristics (GNU Prolog integer-function examples are welcome as well), provided the proposed solution handles classical logic properly and the open-source code is nice to extend :).
P.P.S. As @larsmans noted, by Gödel's incompleteness theorems there is no way to prove "all" formulas. That's why I am looking for something that proves what can be proven from a given set of axioms and rules; since I am looking for a GNU Prolog program, I am also looking for examples of such sets of axioms and rules ;). Of course the checker may fail - I only expect it to work in "some" cases :). On the other hand, there is also Gödel's completeness theorem ;).
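For reference, the kind of check being asked about can be brute-forced over truth tables in a few lines. Here is a minimal Python sketch (the function name is my own invention); it also shows that the iff example above is in fact refutable (take A true, B and C false):

```python
from itertools import product

def is_tautology(f, nvars):
    """Truth-table check: f takes nvars booleans and must hold for all assignments."""
    return all(f(*vals) for vals in product([False, True], repeat=nvars))

# The distributive law, a genuine tautology:
print(is_tautology(lambda a, b, c: (a and (b or c)) == ((a and b) or (a and c)), 3))  # True
# The question's iff example is refutable (A=True, B=C=False gives False == True):
print(is_tautology(lambda a, b, c: (a and (b or c)) == ((a or b) and (a or c)), 3))  # False
```

This is exponential in the number of variables, of course; a prover like leanCOP avoids enumerating all assignments.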

If you want an extensible theorem prover in Prolog, check out the family of lean theorem provers, of which leanCOP is the main representative; it handles classical first-order logic in 555 bytes of Prolog.
Version 1.0 is the following program:
prove(M,I) :- append(Q,[C|R],M), \+member(-_,C),
append(Q,R,S), prove([!],[[-!|C]|S],[],I).
prove([],_,_,_).
prove([L|C],M,P,I) :- (-N=L; -L=N) -> (member(N,P);
append(Q,[D|R],M), copy_term(D,E), append(A,[N|B],E),
append(A,B,F), (D==E -> append(R,Q,S); length(P,K), K<I,
append(R,[D|Q],S)), prove(F,S,[L|P],I)), prove(C,M,P,I).
The leanCOP website has more readable versions, with more features. You'll have to implement equality and arithmetic yourself.

You could use constraint logic programming for your problem. Equality
directly gives you a result (example in GNU Prolog):
?- X*X #= 4.
X = 2
For inequality you will typically need to ask the system to generate
instantiations after the constraints have been set up. This is typically
done via a labeling directive (example in GNU Prolog):
?- 27 * (X + 2) #\= (9 * X) + 18.
X = _#22(0..14913080)
?- 27 * (X + 2) #\= (9 * X) + 18, fd_labeling(X).
X = 0 ? ;
X = 1 ? ;
X = 2 ? ;
Etc.
A list showing which Prolog systems have which kind of CLP can be found here; just check the CLP column:
Overview of Prolog Systems, Ulrich Neumerkel
Bye
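The fd_labeling search above amounts to hunting for a counterexample over a finite domain. The same idea can be sketched in plain Python (a bounded search over a sample range, not a proof over all integers; the function name is illustrative):

```python
def counterexample(lhs, rhs, domain):
    """Return the first x in domain where the two sides differ, or None."""
    for x in domain:
        if lhs(x) != rhs(x):
            return x
    return None

# 3^2 * (X + 2) == 9*X + 18 survives the whole search range:
print(counterexample(lambda x: 3**2 * (x + 2), lambda x: 9*x + 18, range(-1000, 1000)))  # None
# 27 * (X + 2) == 9*X + 18 fails almost everywhere (it holds only at X = -2):
print(counterexample(lambda x: 27 * (x + 2), lambda x: 9*x + 18, range(-1000, 1000)))
```

Finding no counterexample in a finite range is evidence, not a proof; for polynomial identities over the integers, though, checking degree+1 points actually does suffice.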

I've found some tautology checkers in Mathematical Logic for Computer Science by Mordechai Ben-Ari; unfortunately they cover only boolean logic. The advantage is that his implementations are in Prolog, and they present a variety of approaches to automated proving, to solving such equations, and to automatically verifying proofs.

Related

How would '1+1' look when just using 1 and 0? [duplicate]

This question already has answers here:
Is it possible to program in binary?
(6 answers)
Closed 2 years ago.
Is that possible? Can this be done using just 1 and 0 (true/false, on/off ...)?
If so, how would this code look?
If this example is too complex I am open to all other kinds of examples, but I would like to have an operation included, because I have no idea how such operations get encoded (I guess they are also just an entry in a conversion chart).
The reason I ask this is that I want to give people a concrete example of why datatypes and functions/operations are a practical abstraction (easier to read). I'm writing a tutorial.
In a 1-bit-wide integer (i.e. a boolean value), the carry-out has nowhere to go, so addition simplifies to just XOR.
Fun fact: XOR is add-without-carry. It's part of implementing a single-bit adder out of logic gates, e.g. a "half adder" that has 2 inputs (no carry-in) and produces a sum and a carry-out (sum = a XOR b, carry = a AND b). A simple 32-bit adder could be built out of one half adder and 31 "full adders", or out of more adders in parallel with tricks to optimize for lower latency than a simple ripple-carry adder.
Carryless multiplication is a thing in some crypto, where summing the partial products is done with XOR instead of normal binary addition.
See also What is the best way to add two numbers without using the + operator? for a software use of the same idea.
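The half-adder/full-adder construction described above can be sketched directly in Python's bitwise operators (function names are illustrative):

```python
def half_adder(a, b):
    """One-bit half adder: sum = a XOR b, carry = a AND b."""
    return a ^ b, a & b

def full_adder(a, b, cin):
    """Chain two half adders to also accept a carry-in."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, cin)
    return s2, c1 | c2

def ripple_add(x, y, width=32):
    """Add two non-negative integers bit by bit with a chain of full adders."""
    carry, result = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(ripple_add(12345, 67890))  # 80235, same as 12345 + 67890
```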

Does any programming language support the expressions like "12.Pounds.ToKilograms()"? [closed]

Closed 5 years ago.
Converting units and measurements from one system to another can be achieved in most programming languages in one way or another. But can we express something like "12.Pounds.ToKilograms()" in any programming language?
Not exactly in that syntax but you may want to take a look at Frink: https://frinklang.org
Frink's syntax is similar to Google Calculator or Wolfram Alpha, but not exactly the same: whereas Google and Wolfram Alpha use the in keyword to trigger unit conversion, Frink uses the -> operator. So in Frink, the following is valid source code:
// Calculate length of Wifi antenna:
lightspeed / (2.4GHz) / 4 -> inches
As I mentioned, this syntax is similar to Google's. For reference, the same calculation in Google syntax is speed of light / 2.4GHz / 4 in inches. Frink predates both Google Calculator and Wolfram Alpha; I first became aware of it sometime in the early 2000s.
Frink is unit-aware. A number in Frink always has a unit, even if that unit is simply "scalar" (no units). So to declare a variable that is 12 pounds you'd do:
var x = 12 pounds
To convert you'd do:
x -> kg
Or you can simply write the expression:
12 pounds -> kg
In Smalltalk you could express this as
12 pounds inKilograms
Notice however that it is up to you to program both messages, pounds and inKilograms (there are libraries that do that kind of thing too). But the key point is that the expression above is perfectly valid Smalltalk (even if these messages do not exist).
I can't say I've ever seen this as a valid expression. 12 would have to be defined as a type. Let's assume 12 is an integer: an integer type only knows that it is an integer, and in most languages there are built-in functions/methods for integers. For this to be a valid expression you would need to define a weight type, and within that object you could define conversion methods or inherit child types with conversion methods.
You could do it in Ruby because of its syntactic sugar.
Actually there's a Ruby gem, alchemist, that you should check out.
For example you could write:
10.miles.to.meters
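The common idea behind all of these is a value that carries its unit. A minimal Python sketch (the Quantity class and its unit table are hypothetical, not a real library; Python can't hang methods off the literal 12, so the syntax differs):

```python
KG_PER_LB = 0.45359237  # exact definition of the avoirdupois pound in kilograms

class Quantity:
    """Toy unit-aware value: convert via a common base unit (kilograms)."""
    _to_kg = {"kg": 1.0, "lb": KG_PER_LB}

    def __init__(self, value, unit):
        self.value, self.unit = value, unit

    def to(self, unit):
        # Convert to the base unit, then out to the target unit.
        kg = self.value * self._to_kg[self.unit]
        return Quantity(kg / self._to_kg[unit], unit)

print(Quantity(12, "lb").to("kg").value)  # about 5.443
```

Real libraries in this space (e.g. Python's pint) work on the same principle with a much larger unit table and dimensional checking.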

Why do languages have operator precedence? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 9 years ago.
Why not simply evaluate left to right? Can someone explain how precedence makes code more readable? To me it seems to require more thought and to create more possibility for error. Einstein said, "Everything should be made as simple as possible, but no simpler." I guess he wasn't a programmer.
Because in mathematics, long before the invention of any computer language, there was operator precedence, e.g.
2 + 5 * 3 = 17 (and not 21)
A language not having a higher precedence for the * operator than for the + operator would generate confusion.
Another example from C++:
std::cout << 7 * 8 << std::endl;
If the * operator did not have precedence over the << operator, this would not compile:
std::cout << 7 would be evaluated first, yielding std::cout as a result (with the side effect that 7 is printed);
then one would want to evaluate std::cout * 8, which -- unless somebody defines a really weird operator* for multiplying an output stream by an integer -- is not defined.
Operator precedence allows not having to put unnecessary parentheses 'within common sense' (although I agree that in some cases people introduce bugs because they are not aware of the actual precedences)
Now you might ask: why does mathematics have operator precedence? This Wikipedia article mentions some historical facts and links to a discussion forum. For me, an important argument is that when writing polynomials, e.g.
ax^2 + bx + c
without operator precedence, one would have to write them as:
(a(x^2)) + (bx) + c
The purpose of precedence is not to make code more readable; it's to (a) standardize and (b) keep with conventions set in mathematics, such as PEMDAS.
APL does not have operator precedence (except parentheses); code reads right to left (like Hebrew or Arabic). That said, APL is a niche language and this is one reason why. Most other languages, as other answers already point out, mimic the long-established precedence rules of arithmetic, which in turn appear to me to arise from how typical use cases (i.e., real-world problems) can be modeled with a minimum number of parentheses.
The digital world simply follows the old-school "real" world where the BEDMAS rules: Brackets, Exponents, Division, Multiplication, Addition, Subtraction.
Better to follow the real world than to try to redefine reality and end up with programmers doing
1 + 2 * 3
as
(1 + 2) * 3 = 9
v.s. a realworlder doing
1 + (2 * 3) = 7
and sending your multimillion dollar probe into an invalid atmospheric descent profile because someone "carried the 1" wrong.
It is possible to do away with precedence rules - or more correctly, have just one such rule - as APL demonstrates.
"It has just one simple, consistent, and recursive precedence rule: the right argument of a function is the result of the entire expression to the right of that function."
For example, in "J" (an APL variant):
1+2*3 => 7 Computed as 1+(2*3)
2*3+1 => 8 Computed as 2*(3+1)
APL (and J) has so many operators, it would be difficult to remember the precedence and associativity rules of each and every one of them.
The reason there is operator precedence is that you should see a term like 2*4 as a single value.
So you could think of it in a way that 2*4 doesn't exist: it's simply 8.
What you're basically doing is "simplifying" the expression. 2+10*2 basically says: 2+20.
The same goes for an expression like 2143+8^2+6^9: you don't want to have to write it out as 2143+64+10077696. (And imagine what that would look like in a much larger formula.)
It's basically there to simplify the look of the expression, and addition and subtraction are the most basic operations. So that's why there is precedence: it increases the readability of the expression.
And like I said, since + and - are the most basic operators there are, it makes sense to evaluate the more "complicated" operators first.
For example, with operator precedence the following evaluates to 14:
2 + 3 * 4
Without it, it would evaluate to 20, which is unexpected, imho.
Take an example: if there were no operator precedence, how would you evaluate this expression?
1+2/3
Well, some would say that you want to divide the sum of 1+2 by 3; others, that you want to add 2/3 to 1!
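The two readings can be made concrete with a toy evaluator for + and * (a sketch handling only non-negative integers and these two operators; function names are my own):

```python
import re
from functools import reduce
from operator import mul

def eval_left_to_right(expr):
    """Evaluate + and * strictly left to right, ignoring precedence."""
    toks = re.findall(r"\d+|[+*]", expr)
    acc = int(toks[0])
    for op, num in zip(toks[1::2], toks[2::2]):
        acc = acc + int(num) if op == "+" else acc * int(num)
    return acc

def eval_with_precedence(expr):
    """Give * higher precedence: evaluate each +-separated term first, then sum."""
    terms = expr.replace(" ", "").split("+")
    return sum(reduce(mul, map(int, t.split("*"))) for t in terms)

print(eval_left_to_right("2 + 3 * 4"))    # 20
print(eval_with_precedence("2 + 3 * 4"))  # 14
```

Real parsers implement the same distinction with a precedence table (e.g. precedence climbing or the shunting-yard algorithm).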

What is the origin of magic number 42, indispensable in coding? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Want to improve this question? Update the question so it's on-topic for Stack Overflow.
Closed 9 years ago.
Update:
Surprised that it is being so heavily downvoted...
The question is coding-related and before asking this question I have googled for "42" in combination with:
site:msdn.micrsoft.com
"code example"
"c#"
"magic number"
And I am not an expert/fan of Western culture/literature.
I also found Why are variables "i" and "j" used for counters? [duplicate], which was not closed but even protected.
I feel that everybody knows it, except me...
What is the origin of the ubiquitous magic number 42 used all over code samples?
How did you come to use 42? I ask because I have never come across it or used it myself.
After some searching, I found an MSDN doc on it, Magic Numbers: Integers:
"Aside from a book/movie reference, developers often use this as an arbitrary value"
Well, this did not explain anything to me.
Which movies and books have I missed in all those years of being involved in development, coding, programming and IT-related activities like requirements analysis, system administration, etc.?
Some references to some texts using code snippets with 42 (just C#-related):
Jérôme Laban. C# Async Tips and Tricks, Part 3: Tasks and the Synchronization Context
var t = Task.Delay(TimeSpan.FromSeconds(1))
.ContinueWith
(
_ => Task.Delay(TimeSpan.FromSeconds(42))
);
MSDN Asynchronous Agents Library
send(_target, 42);
Quickstart: Calling asynchronous APIs in C# or Visual Basic
Office.context.document.setSelectedDataAsync(
"<html><body>hello world</body></html>",
{coercionType: "html", asyncContext: 42},
function(asyncResult) {
write(asyncResult.status + " " + asyncResult.asyncContext);
});
Asynchronous Programming in C++ Using PPL
task<int> myTask = someOtherTask.then([]() { return 42; });
Boxing and Unboxing (C# Programming Guide)
Console.WriteLine(String.Concat("Answer", 42, true));
How To: Override the ToString Method (C# Programming Guide)
int x = 42;
Trace Listeners
// Use this example when debugging.
System.Diagnostics.Debug.WriteLine("Error in Widget 42");
// Use this example when tracing.
System.Diagnostics.Trace.WriteLine("Error in Widget 42");
|| Operator (C# Reference)
// The following line displays True, because 42 is evenly
// divisible by 7.
Console.WriteLine("Divisible returns {0}.", Divisible(42, 7));
// The following line displays False, because 42 is not evenly
// divisible by 5.
Console.WriteLine("Divisible returns {0}.", Divisible(42, 5));
// The following line displays False when method Divisible
// uses ||, because you cannot divide by 0.
// If method Divisible uses | instead of ||, this line
// causes an exception.
Console.WriteLine("Divisible returns {0}.", Divisible(42, 0));
Wikipedia: C Sharp (programming language)
int foo = 42; // Value type.
It's from The Hitchhiker's Guide to the Galaxy.
In The Hitchhiker's Guide to the Galaxy (published in 1979), the
characters visit the legendary planet Magrathea, home to the
now-collapsed planet-building industry, and meet Slartibartfast, a
planetary coastline designer who was responsible for the fjords of
Norway. Through archival recordings, he relates the story of a race of
hyper-intelligent pan-dimensional beings who built a computer named
Deep Thought to calculate the Answer to the Ultimate Question of Life,
the Universe, and Everything. When the answer was revealed to be 42,
Deep Thought explained that the answer was incomprehensible because
the beings didn't know what they were asking. It went on to predict
that another computer, more powerful than itself would be made and
designed by it to calculate the question for the answer. (Later on,
referencing this, Adams would create the 42 Puzzle, a puzzle which
could be approached in multiple ways, all yielding the answer 42.)
The answer is, as people already have stated, The Hitchhiker's Guide to the Galaxy.
I made a little experiment and put a couple of numbers in the search field. It seems that 42 clearly beats its neighbors, but it can't touch round numbers like 40, 45 and 50, no matter how magical it is.
It would be interesting to do the same search in source code only.
Dude!
It's the Answer to the Ultimate Question of Life, the Universe, and Everything, as computed by the Deep Thought supercomputer, which took 7.5 million years!
http://en.wikipedia.org/wiki/The_answer_to_life_the_universe_and_everything#Answer_to_the_Ultimate_Question_of_Life.2C_the_Universe_and_Everything_.2842.29
Check this out. 42 is the ultimate answer to the ultimate question of life the universe and everything
This is from The Hitchhiker's Guide to the Galaxy and is:
The Answer to the Ultimate Question of Life, the Universe, and Everything
WikiLink
Refer to The Hitchhiker's Guide to the Galaxy.

What's the absolute minimum a programmer should know about binary numbers and arithmetic? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.
Closed 2 years ago.
Although I know the basic concepts of binary representation, I have never really written any code that uses binary arithmetic and operations.
I want to know:
1. What are the basic concepts any programmer should know about binary numbers and arithmetic?
2. In what "practical" ways can binary operations be used in programming? I have seen some "cool" uses of shift operators, XOR, etc., but are there some typical problems where using binary operations is an obvious choice?
Please give pointers to some good reference material.
If you are developing lower-level code, it is critical that you understand the binary representation of various types. You will find this particularly useful if you are developing embedded applications or if you are dealing with low-level transmission or storage of data.
That being said, I also believe that understanding how things work at a low level is useful even if you are working at much higher levels of abstraction. I have found, for example, that my ability to develop efficient code is improved by understanding how things are represented and manipulated at a low level. I have also found such understanding useful in working with debuggers.
Here is a short-list of binary representation topics for study:
numbering systems (binary, hex, octal, decimal, ...)
binary data organization (bits, nibbles, bytes, words, ...)
binary arithmetic
other binary operations (AND,OR,XOR,NOT,SHL,SHR,ROL,ROR,...)
type representation (boolean,integer,float,struct,...)
bit fields and packed data
Finally...here is a nice set of Bit Twiddling Hacks you might find useful.
Unless you're working with lower level stuff, or are trying to be smart, you never really get to play with binary stuff.
I've been through a computer science degree, and I've never used any of the binary arithmetic stuff we learned since my course ended.
Have a squizz here: http://www.swarthmore.edu/NatSci/echeeve1/Ref/BinaryMath/BinaryMath.html
You must understand bit masks.
Many languages and situations require the use of bit masks, for example flags in arguments or configs.
PHP has its error level which you control with bit masks:
error_reporting = E_ALL & ~E_NOTICE
Or simply checking if an int is odd or even:
isOdd = myInt & 1
I believe basic know-how of binary operations like AND, OR, XOR and NOT would be handy, as most programming languages support these operations in the form of bitwise operators.
These operations are also used in image processing and other areas in graphics.
One important use of the XOR operation I can think of is parity checking. Check this: http://www.cs.umd.edu/class/sum2003/cmsc311/Notes/BitOp/xor.html
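The XOR-based parity idea can be sketched in a few lines of Python (a hand-rolled loop; the function name is my own):

```python
def parity(x):
    """Return 1 if x has an odd number of set bits, 0 if even (XOR-folding)."""
    p = 0
    while x:
        p ^= 1
        x &= x - 1  # Kernighan's trick: clear the lowest set bit
    return p

print(parity(0b1011))  # 1 (three bits set)
print(parity(0b1001))  # 0 (two bits set)
```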
cheers
The following are things I regularly appreciate knowing in my quite conventional programming work:
Know the powers of 2 up to 2^16, and know that 2^32 is about 4.3 billion. Know them well enough so that if you see the number 2147204921 pop up somewhere your first thought is "hmm, that looks pretty close to 2^31" -- that's a very effective module for your bug radar.
Be able to do simple arithmetic; e.g. convert a hexadecimal digit to a nybble and back.
Have some vague idea of how floating-point numbers are represented in binary.
Understand standard conventions that you might encounter in other people's code related to bit twiddling (flags get ORed together to make composite values and AND checks if one's set, shift operators pack and unpack numbers into different bytes, XOR something twice and you get the same something back, that kind of thing.)
Further knowledge is mostly gravy unless you work with significant performance constraints or do other less common work.
At the absolute bare minimum you should be able to implement a bit mask solution. The tasks associated with bit mask operations should ensure that you at least understand binary at a superficial level.
Off the top of my head, here are some examples of where I've used bitwise operators to do useful stuff.
A piece of javascript that needed one of those "check all" boxes was something along these lines:
var check = true;
for(var i = 0; i < elements.length; i++)
check &= elements[i].checked;
checkAll.checked = check;
Calculate the corner points of a cube.
Vec3f m_Corners[8];
void corners(float a_Size){
for(size_t i = 0; i < 8; i++){
m_Corners[i] = a_Size * Vec3f(axis(i, Vec3f::X), axis(i, Vec3f::Y), axis(i, Vec3f::Z));
}
}
float axis(size_t a_Corner, int a_Axis) const{
return ((a_Corner >> a_Axis) & 1) == 1
? -.5f
: +.5f;
}
Draw a Sierpinski triangle
for(int y = 0; y < 512; y++)
for(int x = 0; x < 512; x++)
if(x & y) pixels[x + y * 512] = someColor;
else pixels[x + y * 512] = someOtherColor;
Finding the next power of two
int next = 1 << (int)ceil(log(number) / log(2));
Checking if a number is a power of two
bool powerOfTwo = number && !(number & (number - 1));
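The same two operations can be done without floating-point log at all, using the n & (n - 1) trick and the bit length; a Python sketch (function names are illustrative):

```python
def is_power_of_two(n):
    """A positive power of two has exactly one set bit, so n & (n - 1) is zero."""
    return n > 0 and (n & (n - 1)) == 0

def next_power_of_two(n):
    """Smallest power of two >= n, via bit length rather than floating-point log."""
    return 1 if n <= 1 else 1 << (n - 1).bit_length()

print(is_power_of_two(64), is_power_of_two(42))      # True False
print(next_power_of_two(17), next_power_of_two(16))  # 32 16
```

The integer version also avoids the rounding errors that log-based formulas can hit for large inputs.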
The list can go on and on, but for me these are (except for Sierpinski) everyday examples. Once you understand it and work with it, you'll encounter it in more and more places, such as the corners of a cube.
You don't specifically mention (nor rule out!-) floating point binary numbers and arithmetic, so I won't miss the opportunity to flog one of my favorite articles ever (seriously: I sometimes wish I could make passing a strict quiz on it a pre-req of working as a programmer...;-).
The most important thing every programmer should know about binary numbers and arithmetic is : Every number in a computer is represented in some kind of binary encoding, and all arithmetic on a computer is binary arithmetic.
The consequences of this are many:
Floating point "bugs" when doing math with IEEE floating-point binary numbers (which is all numbers in JavaScript, and quite a few in Java and C)
The upper and lower bounds of representable numbers for each type
The performance cost of multiplication/division/square root etc. operations (for embedded systems)
Precision loss, and accumulation errors
and more. This is stuff you need to know even if you never do a bitwise XOR or NOT in your life; you'll still run into these things.
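The floating-point "bug" from the first bullet is easy to demonstrate, since neither 0.1 nor 0.2 has an exact binary representation:

```python
import math

print(0.1 + 0.2 == 0.3)  # False
print(0.1 + 0.2)         # 0.30000000000000004

# The usual fix: compare with a tolerance instead of exact equality.
print(math.isclose(0.1 + 0.2, 0.3))  # True
```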
This really depends on the language you're using. Recent languages such as C# and Java abstract the binary representation from you -- this makes working with binary difficult and is not usually the best way to do things anyway in these languages.
Middle and low level languages like C and C++, however, require you to understand quite a bit about how the numbers are stored underneath -- especially regarding endianness.
Binary knowledge is also useful when implementing a cross-platform protocol of some sort... for example, on x86 machines, byte order is little-endian, but most network protocols want big-endian numbers. Therefore you have to realize you need to do the conversion for things to go smoothly. Many RFCs, such as this one -> https://www.rfc-editor.org/rfc/rfc4648 , require binary knowledge to understand.
In short, it's completely dependent on what you're trying to do.
Billy3
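The byte-order difference described above is easy to see with Python's struct module, which lets you pick the byte order explicitly:

```python
import struct

# The same 32-bit value packed in both byte orders:
# '>' is big-endian (network order), '<' is little-endian (x86 order).
big    = struct.pack(">I", 42)
little = struct.pack("<I", 42)
print(big.hex())     # 0000002a
print(little.hex())  # 2a000000

# Reading bytes with the wrong byte order yields a very different number:
print(struct.unpack("<I", big)[0])  # 704643072, i.e. 42 with its bytes reversed
```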
It's handy to know the numbers 256 and 65536. It's handy to know how two's complement negative numbers work.
Maybe you won't run into a lot of binary. I still use it pretty often, but maybe out of habit.
A good familiarity with bitwise operations should make you more facile with boolean algebra, and I think that's important for every programmer--you want to be able to quickly simplify complex logic expressions.
The absolute minimum is that "2" is not a binary digit, and that 10b is smaller than 3.
If you never do low-level programming (like C in embedded systems), never have to use a debugger, and never have to work with real numbers, then I suppose you could get by without knowing binary. But knowing binary will make you a stronger programmer, even if indirectly.
Once you venture into those areas you will need to know binary (and its "sister" base, hexadecimal). Without knowing it:
Embedded systems programming would be impossible.
Debugging would be hard because you wouldn't know what you were looking at in memory.
Numerical calculations with decimals would give you answers you don't understand.
I learned to twiddle bits back when C and asm were still used for "mainstream" programming. Although I no longer have much use for that knowledge, I recently used it to solve a real-world business problem.
We use a fax service that posts a message back to us when the fax has been sent or failed after x number of retries. The only way I had to identify the fax was a 15 character field. We wanted to consolidate this into one URL for all of our clients. Before we consolidated, all we had to fit in this field was the FaxID PK (32 bit int) column which we just sent as a string.
Now we had to identify the client (a 4 character code) and the database (32 bit int) underneath the client. I was able to do this using base 64 encoding. Without understanding the binary representation of numbers and characters, I probably would never have even thought of this solution.
Some useful information about the number system.
Binary | base 2
Hexadecimal | base 16
Decimal | base 10
Octal | base 8
These are the most common.
Converting between them is fairly easy.
112 base 8 = (1 x 8^2) + (1 x 8^1) + (2 x 8^0)
74 base 10 = (7 x 10^1) + (4 x 10^0)
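The expansions above are just positional notation, which is also how you turn digits back into a value; a short Python sketch (the function name is my own):

```python
def from_digits(digits, base):
    """Positional notation: each digit is weighted by successive powers of the base."""
    value = 0
    for d in digits:
        value = value * base + d  # shift previous digits up one place, add the next
    return value

print(from_digits([1, 1, 2], 8))  # 74, same as the built-in int("112", 8)
print(from_digits([7, 4], 10))    # 74
```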
AND, OR, XOR, etc. are used in logic gates. Search for boolean algebra; it's well worth the time to know.
Say, for instance, you have 11001111 base 2 and you want to extract only the last four bits.
Truth table for AND:
P | Q | R
T | T | T
T | F | F
F | F | F
F | T | F
You can use 11001111 base 2 AND 00001111 base 2 = 00001111 base 2
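The same extraction in Python (AND with a four-bit mask keeps the last four bits; shifting first reads the upper four):

```python
value = 0b11001111
low_nibble  = value & 0b00001111         # keep only the last four bits
high_nibble = (value >> 4) & 0b00001111  # shift down, then mask, for the upper four

print(bin(low_nibble))   # 0b1111
print(bin(high_nibble))  # 0b1100
```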
There are plenty of resources on the internet.