In Idris, you can define operators using infix, infixl or infixr, followed by the operator's precedence and then a list of operators, like
infixl 8 +, -
I imagine you can do this in other languages too.
I know what effect precedence has, but how do I choose what precedence to give my operators? What problems might I encounter if I initially choose a precedence that's too high or low?
For most languages, the operator precedence table is already defined by the language itself.
Following one of these tables helps ensure that your code (or your language) behaves the way most programmers expect.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Operator_Precedence
https://en.cppreference.com/w/c/language/operator_precedence
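To make the trade-off concrete, here is a rough Haskell sketch (Haskell's fixity declarations work like the Idris ones above; the |> operator is made up for illustration). The same source text parses differently, and can even stop compiling, depending on the precedence you pick:

-- A hypothetical "apply forward" operator. Its precedence decides how it
-- parses next to the arithmetic operators (+ is infixl 6, * is infixl 7).
infixl 1 |>          -- chosen low: arithmetic binds tighter, which reads well
(|>) :: a -> (a -> b) -> b
x |> f = f x

main :: IO ()
main = do
  -- With precedence 1 this parses as (2 + 3) |> show, which is what a
  -- reader expects:
  putStrLn (2 + 3 |> show)
  -- Had we declared   infixl 9 |>   (too high), the same text would parse as
  -- 2 + (3 |> show), a type error, and callers would be forced to add
  -- brackets everywhere, defeating the point of the operator.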
Related
I have a question about the operator precedence hierarchy: why "Because sequences have a higher precedence than disjunction, /the|any/ matches the or any but not thany or theny"?
I just can't understand why this result reflects sequences having "precedence over disjunction".
If the disjunction operator had a higher precedence than sequencing (grouping each run of characters together), then it would operate on the individual characters or subgroups next to it. /the|any/ could then be interpreted as something like /th(e|a)ny/.
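As a small sketch of the two readings (in Haskell, to keep all the examples in one language), written as whole-word predicates rather than with a regex engine:

-- Usual rules: sequencing binds tighter than |, so /the|any/ means (the)|(any)
-- and, read as a whole-word pattern, denotes just "the" and "any".
matchesUsual :: String -> Bool
matchesUsual s = s == "the" || s == "any"

-- If | bound tighter than sequencing, only the characters next to it would be
-- its operands, i.e. /th(e|a)ny/, denoting "theny" and "thany".
matchesFlipped :: String -> Bool
matchesFlipped s = s `elem` ["theny", "thany"]

main :: IO ()
main = do
  print (map matchesUsual   ["the", "any", "theny", "thany"])  -- [True,True,False,False]
  print (map matchesFlipped ["the", "any", "theny", "thany"])  -- [False,False,True,True]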
I'm a newbie on a sport (competitive) programming service and found that winning solutions often use bitwise operators.
Here is an example.
Write a function which finds the difference between two arrays (consider that they differ by one element).
The solutions are:
let s = x => ~eval(x.join`+`);
let findDiff = (a, b) => s(b) - s(a);
and
let findDiff = (a, b) => eval(a.concat(b).join`^`);
I would like to know:
The explanation of those two examples (bitwise part).
What's the advantage of using bitwise operators on decimal numbers?
Is it faster to operate with bitwise operations rather than normal ones?
Update:
I don't fully understand why my question was marked as a duplicate of ~~ vs parseInt. It's good to know why that operator can replace parseInt, and it's probably helpful for sport programming, but it doesn't answer my question.
Code golf isn't focused on bitwise operators; it's about code length.
Bitwise operators are not necessarily faster, but they are generally fast enough. They are concise (and usually harder to read, but that's a side effect).
~~ is a shorter (and usually more performant) alternative to parseInt, with a considerable number of caveats. In regular (non-golf) code it should be used only if it provides behaviour that is more desirable than parseInt's, or in a performance-sensitive context.
~a is roughly equal to parseInt(a) * -1 - 1. It can be used as a shorter alternative to ~~a in this particular example, s(b) - s(a), because the * -1 - 1 part is eliminated in the subtraction (though the sign should be taken into account).
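As for the first question, the idea behind the two one-liners can be restated without the eval and ~ tricks. A rough sketch (in Haskell, to keep all the examples in one language; the function names are made up):

import Data.Bits (xor, complement)

-- Idea behind solution 1: the arrays differ by one element, so the
-- difference of their sums is exactly that element.
findDiffBySum :: [Int] -> [Int] -> Int
findDiffBySum a b = sum b - sum a

-- Idea behind solution 2: x `xor` x == 0 and xor is associative and
-- commutative, so xor-ing all elements of both arrays cancels everything
-- that appears in both and leaves only the extra element.
findDiffByXor :: [Int] -> [Int] -> Int
findDiffByXor a b = foldr xor 0 (a ++ b)

main :: IO ()
main = do
  let a = [1, 2, 3]
      b = [3, 1, 7, 2]
  print (findDiffBySum a b)          -- 7
  print (findDiffByXor a b)          -- 7
  -- The ~ trick from the JavaScript version: complement n == -n - 1, so the
  -- "- 1" parts cancel when two complemented sums are subtracted (only the
  -- sign of the result flips).
  print (complement (sum a))         -- -7, i.e. -(1+2+3) - 1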
Quick question about reverse polish notation.
Why is 2*3/(2-1)+5*(4-1) (the original expression) converted to
23*21-/541-*+
rather than 23*21-/5+41-*?
I am just confusing myself. Personally I'd have added extra brackets to the original expression to make it clear where the 5 is added. If they're not there, what order do I assume it goes in?
Thanks
If we assume a conventional order of operations, then any multiplications get computed before any additions. So, when you have y+x*z, x*z gets computed first, according to usual order of operations. More explicitly, y+x*z means (y+(x*z)). Thus, 2*3/(2-1)+5*(4-1) means (((2*3)/(2-1))+(5*(4-1))).
If you were to explicitly state up front that you stipulated your order of operations as additions happening before multiplications, then if you wrote 4+5*6 you would mean ((4+5)*6). If you did that, then you could state the distributive law as x*y+z=(x*y)+(x*z).
What would expressions mean when you omit operations? Consider xy&z, where & is binary, and the binary operation for xy gets omitted. If the omitted binary operation is *, and & is +, then the expressed operation & would happen before the suppressed multiplication. Usually, omitted operations get assumed to happen first. So, if addition had binding priority over multiplication, then it probably would make sense for an expression like xy to mean x+y instead of the more usual x*y.
In principle, there seems nothing wrong with letting additions happen before multiplications, so long as you state that up front and stick to that convention and its implications in whatever you write. That all said, except for communicating with people who don't understand RPN or PN, I simply don't see why you would write in infix notation once you understand RPN and PN.
It's because multiplication has higher precedence than addition. When you don't have the brackets, only the 5 is multiplied by (4-1), and the product is then added to the rest of the expression. Without brackets, the expression is evaluated according to the order of precedence alone.
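To see concretely what the two RPN strings compute, here is a small stack evaluator, a Haskell sketch that assumes single-digit operands as in the question:

import Data.Char (isDigit, digitToInt)

-- Evaluate an RPN string of single-digit numbers and + - * / (integer
-- division), using an explicit stack.
evalRPN :: String -> Int
evalRPN = head . foldl step []
  where
    step stack c
      | isDigit c = digitToInt c : stack
      | otherwise = case stack of
          (y:x:rest) -> apply c x y : rest
          _          -> error "malformed RPN"
    apply '+' x y = x + y
    apply '-' x y = x - y
    apply '*' x y = x * y
    apply '/' x y = x `div` y
    apply  c  _ _ = error ("unknown operator: " ++ [c])

main :: IO ()
main = do
  -- 2*3/(2-1) + 5*(4-1) = 6 + 15
  print (evalRPN "23*21-/541-*+")    -- 21
  -- The alternative string adds the 5 too early: it computes
  -- (2*3/(2-1) + 5) * (4-1) = 11 * 3
  print (evalRPN "23*21-/5+41-*")    -- 33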
Pattern matching (as found in e.g. Prolog, the ML family languages and various expert system shells) normally operates by matching a query against data element by element in strict order.
In domains like automated theorem proving, however, there is a requirement to take into account that some operators are associative and commutative. Suppose we have data
A or B or C
and query
C or $X
Going by surface syntax this doesn't match, but logically it should match with $X bound to A or B because or is associative and commutative.
Is there any existing system, in any language, that does this sort of thing?
Associative-Commutative pattern matching has been around since 1981 and earlier, and is still a hot topic today.
There are lots of systems that implement this idea and make it useful; it means you can avoid writing complicated pattern matches when associativity or commutativity could be used to make the pattern match. Yes, it can be expensive; better that the pattern matcher do this automatically than that you do it badly by hand.
You can see an example in a rewrite system for algebra and simple calculus implemented using our program transformation system. In this example, the symbolic language to be processed is defined by grammar rules, and the rules that have A-C properties are marked. Rewrites on trees produced by parsing the symbolic language are then automatically extended to match modulo those A-C properties.
The Maude term rewriter implements associative and commutative pattern matching.
http://maude.cs.uiuc.edu/
I've never encountered such a thing, and I just had a more detailed look.
There is a sound computational reason for not implementing this by default: you essentially have to generate all combinations of the input before pattern matching, or generate a full cross-product's worth of match clauses.
I suspect that the usual way to implement this would be to simply write both patterns (in the binary case), i.e., have patterns for both C or $X and $X or C.
Depending on the underlying organisation of data (it's usually tuples), this pattern matching would involve rearranging the order of tuple elements, which would be weird (particularly in a strongly typed environment!). If it's lists instead, then you're on even shakier ground.
Incidentally, I suspect that the operation you fundamentally want is disjoint union patterns on sets, e.g.:
foo (Or ({C} disjointUnion {X})) = ...
The only programming environment I've seen that deals with sets in any detail would be Isabelle/HOL, and I'm still not sure that you can construct pattern matches over them.
EDIT: It looks like Isabelle's function functionality (rather than fun) will let you define complex non-constructor patterns, except then you have to prove that they are used consistently, and you can't use the code generator anymore.
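For illustration, here is a rough Haskell sketch of the two workarounds above (the term type and helper names are made up): spelling out both argument orders for a binary or, and treating the operands of an n-ary or as a set so that C or $X means "C is a member, $X is everything else":

import qualified Data.Set as Set

-- A toy term language in which the arguments of an n-ary Or are kept as a set.
data Term = Sym String | Or (Set.Set Term)
  deriving (Eq, Ord, Show)

-- Binary workaround: write both argument orders by hand.
matchCOr2 :: (Term, Term) -> Maybe Term      -- query: C or $X
matchCOr2 (Sym "C", x) = Just x
matchCOr2 (x, Sym "C") = Just x
matchCOr2 _            = Nothing

-- Set workaround: "C or $X" means C is in the operand set and $X is bound to
-- the rest (the disjoint-union pattern sketched above).
matchCOrRest :: Term -> Maybe (Set.Set Term)
matchCOrRest (Or args)
  | Sym "C" `Set.member` args = Just (Set.delete (Sym "C") args)
matchCOrRest _ = Nothing

main :: IO ()
main = do
  print (matchCOr2 (Sym "A", Sym "C"))
  -- Just (Sym "A")
  print (matchCOrRest (Or (Set.fromList [Sym "A", Sym "B", Sym "C"])))
  -- Just (fromList [Sym "A",Sym "B"]), i.e. $X is bound to {A, B}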
EDIT 2: The way I implemented similar functionality over n commutative, associative and transitive operators was this:
My terms were of the form A | B | C | D, while queries were of the form B | C | $X, where $X was permitted to match zero or more things. I pre-sorted these using lexicographic ordering, so that variables always occurred in the last position.
First, you construct all pairwise matches, ignoring variables for now, and recording those that match according to your rules.
{ (B,B), (C,C) }
If you treat this as a bipartite graph, then you are essentially solving the marriage problem, i.e. looking for a perfect matching. There exist fast algorithms for finding these.
Assuming you find one, then you gather up everything that does not appear on the left-hand side of your relation (in this example, A and D), and you stuff them into the variable $X, and your match is complete. Obviously you can fail at any stage here, but this will mostly happen if there is no variable free on the RHS, or if there exists a constructor on the LHS that is not matched by anything (preventing you from finding a perfect match).
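A much simplified sketch of that procedure (in Haskell), assuming plain equality on constants; with a richer notion of matching you would build the bipartite graph and run a real matching algorithm instead:

import Data.List (sort, delete)

-- Query constants are matched off against the data constants one by one;
-- whatever is left over is bound to the trailing variable $X.
matchAC :: [String]        -- data operands, e.g. A | B | C | D
        -> [String]        -- query constants, e.g. B | C (the $X is implicit)
        -> Maybe [String]  -- binding for $X, if every query constant matched
matchAC dat query = go (sort dat) (sort query)
  where
    go remaining []     = Just remaining    -- leftovers become $X
    go remaining (q:qs)
      | q `elem` remaining = go (delete q remaining) qs
      | otherwise          = Nothing        -- some query constant is unmatched

main :: IO ()
main = do
  print (matchAC ["A","B","C","D"] ["B","C"])   -- Just ["A","D"]
  print (matchAC ["A","B","C","D"] ["E"])       -- Nothing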
Sorry if this is a bit muddled. It's been a while since I wrote this code, but I hope this helps you, even a little bit!
For the record, this might not be a good approach in all cases. I had very complex notions of 'match' on subterms (i.e., not simple equality), and so building sets or anything would not have worked. Maybe that'll work in your case though and you can compute disjoint unions directly.
I've got to the section on operators in The Ruby Programming Language, and it's made me think about operator associativity. This isn't a Ruby question by the way - it applies to all languages.
I know that operators have to associate one way or the other, and I can see why in some cases one way would be preferable to the other, but I'm struggling to see the bigger picture. Are there some criteria that language designers use to decide what should be left-to-right and what should be right-to-left? Are there some cases where it "just makes sense" for it to be one way over the other, and other cases where it's just an arbitrary decision? Or is there some grand design behind all of this?
Typically it's so the syntax is "natural":
Consider x - y + z. You want that to be left-to-right, so that you get (x - y) + z rather than x - (y + z).
Consider a = b = c. You want that to be right-to-left, so that you get a = (b = c), rather than (a = b) = c.
I can't think of an example of where the choice appears to have been made "arbitrarily".
Disclaimer: I don't know Ruby, so my examples above are based on C syntax. But I'm sure the same principles apply in Ruby.
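The effect is easy to demonstrate with user-defined operators, as in this Haskell sketch, where -. and -: are made-up subtraction-like operators that differ only in their declared associativity:

-- Two copies of subtraction that differ only in their declared associativity.
infixl 6 -.
(-.) :: Int -> Int -> Int
(-.) = (-)

infixr 6 -:
(-:) :: Int -> Int -> Int
(-:) = (-)

main :: IO ()
main = do
  print (10 -. 4 -. 3)   -- left-to-right:  (10 - 4) - 3 = 3, the usual reading
  print (10 -: 4 -: 3)   -- right-to-left:  10 - (4 - 3) = 9, surprising for "-"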
Imagine writing everything with brackets for a century or two.
You will develop a sense of which operator most likely binds its values together first, and which binds last.
If you can define the associativity of those operators, then you want to define it in a way that minimizes the brackets while keeping the formulas easy to read, i.e. (*) binds before (+), and (-) should be left-associative.
By the way, left/right-associative means the same as left/right-recursive. The word associative is the mathematical perspective, recursive the algorithmic one. (See "end-recursive", and look at where you write the most brackets.)
Most operator associativities in computer science are nicked directly from maths, specifically symbolic logic and algebra.