There are common slang names for period, exclamation mark, and asterisk:
. "dot"
! "bang"
* "star"
but what are the teched-out, one-syllable names for characters like
%
&
<
{
[
;
etc.
Here's a few, poetry style:
<> !*''#
^"`$$-
!*=@$_
%*<> ~#4
&[]../
|{,,SYSTEM HALTED
The poem can be appreciated only by reading it aloud:
Waka waka bang splat tick tick hash,
Caret quote back-tick dollar dollar dash,
Bang splat equal at dollar under-score,
Percent splat waka waka tilde number four,
Ampersand bracket bracket dot dot slash,
Vertical-bar curly-bracket comma comma CRASH.
Source: http://babek.info/libertybasicfiles/lbnews/nl123/fun.htm
The Internet knows plenty of these (archive)
Or, check the jargon file.
The INTERCAL reference manual is the definitive guide:
. spot identify 16-bit variable
: two-spot identify 32-bit variable
, tail identify 16-bit array
; hybrid identify 32-bit array
# mesh identify constant
= half-mesh
' spark grouper
` backspark
! wow equivalent to spark-spot
? what unary exclusive OR (ASCII)
" rabbit-ears grouper
". rabbit equivalent to ears-spot
| spike
% double-oh-seven percentage qualifier
- worm used with angles
< angle used with worms
> right angle
( wax precedes line label
) wane follows line label
[ U turn
] U turn back
{ embrace
} bracelet
* splat flags invalid statements
& ampersand unary logical AND
V V unary logical OR
(or book)
V- bookworm unary exclusive OR
(or universal qualifier)
$ big money unary exclusive OR (ASCII)
¢ change binary mingle
~ sqiggle binary select
_ flat worm
¯ overline indicates "times 1000"
+ intersection separates list items
/ slat
\ backslat
@ whirlpool
¬ hookworm
^ shark
(or simply sharkfin)
Strive for clarity and don't use slang, but instead use the commonly accepted name of each symbol. Thus,
. dot or period
! exclamation point
* asterisk or star
% percent (sometimes mod)
& ampersand
< less than
{ left brace
[ left bracket
; semicolon
Be sure to find the more British-flavored ones. I like #! (read "crunch bang"), as in CrunchBang Linux. And for pure weirdness, I enjoy calling "#" the "octothorpe."
Some other fun ones that I've heard (I'd be hard pressed to source these):
= "gets" (when reading C code "int num = 5;" would be "num gets five")
` "quasiquote" (lisp)
#! together are often pronounced "shebang" (consider it a kind of tech-diphthong)
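For anyone who has only met the name, the shebang is just the first line of an executable Unix script; a minimal Python sketch (the env path is only the usual convention, nothing mandated here):
#!/usr/bin/env python3
# The "#!" (shebang) line tells the kernel which interpreter should run this file.
print("hello from a shebang'd script")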
Consider these extras:
! not
@ these (Perl)
$ this (Perl)
: otherwise (ternary operator)
; stop (read as a telegram?)
* anything (regex-esque)
& and
| or
In addition to Tyler McHenry's list, there is also Jeff's blog post, which links to the ASCII entry in The New Hacker's Dictionary.
# = "pound"
as in
#sand
or
#puppy
@ is pronounced "dog" (собака) in Russian (for the visual picture).
Hehe, when I was a kid, I called the "#" - character (don't laugh!) "eh". And "$" was called "xsssss".
Related
Instead of writing ffff, why is the syntax for writing hexadecimal numbers 0*ffff? What is the meaning of "0*"; does it specify something?
After all, the A, B, C, D, E, F notations exist only in the hexadecimal number system, so what is the need for the "0*" prefix?
Sorry, "*" was not the character I meant; it is "x".
Is it a nomenclature or notation for hexadecimal number systems?
I don't know what language you are talking about, but if you for example in C# write
var ffffff = "Some unrelated string";
...
var nowYouveDoneIt = ffffff;
what do you expect to happen? How does the compiler know if ffffff refers to the hexadecimal representation of the decimal number 16777215 or to the string variable defined earlier?
Since identifiers (in C#) can't begin with a number, prefixing with a 0 and some other character (in C# it's 0xffffff for hex and 0b111111111111111111111111 for binary, IIRC) is a handy way of communicating what base the number literal is in.
EDIT: Another issue, if you were to write var myCoolNumber = 10, how would you have ANY way of knowing if this means 2, 10 or 16? Or something else entirely.
It's typically 0xFFFF: the letter, not the multiplication symbol.
As for why, 0x is just the most common convention, like how some programming languages allow binary to be prefixed by 0b. Prefixing a number with just 0 is typically reserved for octal, or base 8; they wanted a way to tell the machine that the following number is in hexadecimal, or base 16 (10 != 0b10 [2] != 010 [8] != 0x10 [16]). They typically omitted a small 'o' from identifying octal for human readability purposes.
Interestingly enough, most Assembly-based implementations I've come across use (or at least allow the use of) 0h instead or as well.
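A quick illustration of the prefix convention, sketched in Python for concreteness (Python spells C's leading-zero octal as 0o, and its int(s, 0) honours the same prefixes when parsing strings):
# The same digits, four different values, depending on the prefix.
print(10)        # decimal     -> 10
print(0b10)      # binary      -> 2
print(0o10)      # octal       -> 8   (C would write this as 010)
print(0x10)      # hexadecimal -> 16
# int(s, 0) auto-detects the base from the prefix when parsing strings.
print(int("0x10", 0), int("0b10", 0), int("0o10", 0))   # 16 2 8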
It's there to indicate the number is hex. It's not '*', it's actually 'x'.
See:
http://www.tutorialspoint.com/cprogramming/c_constants.htm
I've always wondered why leading zeroes (0) are used to represent octal numbers, instead of — for example — 0o. The use of 0o would be just as helpful, but would not cause as many problems as a leading 0 does (e.g. parseInt('08'); in JavaScript). What are the reasons behind this design choice?
All modern languages import this convention from C, which imported it from B, which imported it from BCPL.
Except BCPL used #1234 for octal and #x1234 for hexadecimal. B departed from this convention because # was a unary operator in B (integer to floating point conversion), so #1234 could not be used, and # as a base indicator was replaced with 0.
The designers of B tried to make the syntax very compact. I guess this is the reason they did not use a two-character prefix.
Worth noting that in Python 3.0, they decided that octal literals must be prefixed with '0o', and the old '0' prefix became a SyntaxError, for the exact reasons you mention in your question.
https://www.python.org/dev/peps/pep-3127/#removal-of-old-octal-syntax
"0b" is often used for binary rather than for octal. The leading "0" is, I suspect for "O -ctal".
If you know you are going to be parsing octal then use parseInt('08', 10); to make it treat the number as base ten.
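The same pitfall, restated in Python terms rather than JavaScript (my own sketch, not part of the answers above): an explicit base avoids the surprise, and '8' simply isn't a valid octal digit.
print(int("08", 10))   # 8 -- explicit base ten, no surprise
print(int("010", 8))   # 8 -- explicit octal
try:
    int("08", 8)       # '8' is not an octal digit...
except ValueError as e:
    print(e)           # ...so this raises instead of silently misparsing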
Write the shortest program that calculates the Frobenius number for a given set of positive numbers. The Frobenius number is the largest number that cannot be written as a sum of positive multiples of the numbers in the set.
Example: For the set of the Chicken McNugget™ sizes [6,9,20] the Frobenius number is 43, as there is no solution for the equation a*6 + b*9 + c*20 = 43 (with a,b,c >= 0), and 43 is the largest value with this property.
It can be assumed that a Frobenius number exists for the given set. If this is not the case (e.g. for [2,4]) no particular behaviour is expected.
References:
http://en.wikipedia.org/wiki/Coin_problem
http://mathworld.wolfram.com/FrobeniusNumber.html
[Edit]
I decided to accept the GolfScript version. While the MATHEMATICA version might be considered "technically correct", it would clearly take the fun out of the competition. That said, I'm also impressed by the other solutions, especially Ruby (which was very short for a general purpose language).
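For reference (not a competition entry), here is an ungolfed Python sketch of the brute-force idea most of the entries below use: mark every total reachable as a non-negative combination of the set, up to the product of its elements, and report the largest unreachable one.
from math import prod   # Python 3.8+

def frobenius(nums):
    limit = prod(nums)                 # generous upper bound, as in the golfed answers
    reachable = [True] + [False] * limit
    for i in range(1, limit + 1):
        reachable[i] = any(i >= n and reachable[i - n] for n in nums)
    return max(i for i in range(limit + 1) if not reachable[i])

print(frobenius([6, 9, 20]))   # 43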
Mathematica 0 chars (or 19 chars counting the invoke command)
Invoke with
FrobeniusNumber[{a,b,c,...}]
Example
In[3]:= FrobeniusNumber[{6, 9, 20}]
Out[3]= 43
Is it a record? :)
Ruby 100 86 80 chars
(newline not needed)
Invoke with frob.rb 6 9 20
a=$*.map &:to_i;
p ((1..eval(a*"*")).map{|i|a<<i if(a&a.map{|v|i-v})[0];i}-a)[-1]
Works just like the Perl solution (except better:). $* is an array of command line strings; a is the same array as ints, which is then used to collect all the numbers which can be made; eval(a*"*") is the product, the max number to check.
In Ruby 1.9, you can save one additional character by replacing "*" with ?*.
Edit: Shortened to 86 using Symbol#to_proc in $*.map, inlining m and shortening its calculation by folding the array.
Edit 2: Replaced .times with .map, traded .to_a for ;i.
Mathematica PROGRAM - 28 chars
Well, this is a REAL (unnecessary) program. As the other Mathematica entry shows clearly, you can compute the answer without writing a program ... but here it is
f[x__]:=FrobeniusNumber[{x}]
Invoke with
f[6, 9, 20]
43
GolfScript 47/42 chars
Faster solution (47).
~:+{0+{.1<{$}{1=}if|}/.!1):1\{:X}*+0=-X<}do];X(
Slow solution (42). Checks all values up to the product of every number in the set...
~:+{*}*{0+{.1<{$}{1=}if|}/1):1;}*]-1%.0?>,
Sample I/O:
$ echo "[6 9 20]"|golfscript frobenius.gs
43
$ echo "[60 90 2011]"|golfscript frobenius.gs
58349
Haskell 155 chars
The function f does the work and expects the list to be sorted. For example f [6,9,20] = 43
b x n=sequence$replicate n[0..x]
f a=last$filter(not.(flip elem)(map(sum.zipWith(*)a)(b u(length a))))[1..u] where
  h=head a
  l=last a
  u=h*l-h-l
P.S. Since this is my first code golf submission, I'm not sure how to handle input; what are the rules?
C#, 360 characters
using System;using System.Linq;class a{static void Main(string[]b)
{var c=(b.Select(d=>int.Parse(d))).ToArray();int e=c[0]*c[1];a:--e;
var f=c.Length;var g=new int[f];g[f-1]=1;int h=1;for(;;){int i=0;for
(int j=0;j<f;j++)i+=c[j]*g[j];if(i==e){goto a;}if(i<e){g[f-1]++;h=1;}
else{if(h>=f){Console.Write(e);return;}for(int k=f-1;k>=f-h;k--)
g[k]=0;g[f-h-1]++;h++;}}}}
I'm sure there's a shorter C# solution than this, but this is what I came up with.
This is a complete program that takes the values as command-line parameters and outputs the result to the screen.
Perl 105 107 110 119 122 127 152 158 characters
Latest edit: Compound assignment is good for you!
$h{0}=$t=1;$t*=$_ for@ARGV;for$x(1..$t){$h{$x}=grep$h{$x-$_},@ARGV}@b=grep!$h{$_},1..$t;print pop@b,"\n"
Explanation:
$t = 1;
$t *= $_ foreach(@ARGV);
Set $t to the product of all of the input numbers. This is our upper limit.
foreach $x (1..$t)
{
$h{$x} = grep {$_ == $x || $h{$x-$_} } @ARGV;
}
For each number from 1 to $t: If it's one of the input numbers, mark it using the %h hash; otherwise, if there is a marked entry from further back (difference being anything in the input), mark this entry. All marked entries are non-candidates for Frobenius numbers.
@b=grep{!$h{$_}}(1..$t);
Extract all UNMARKED entries. These are Frobenius candidates...
print pop @b, "\n"
...and the last of these, the highest, is our Frobenius number.
Haskell 153 chars
A different take on a Haskell solution. I'm a rank novice at Haskell, so I'd be surprised if this couldn't be shortened.
m(x:a)(y:b)
 |x==y=x:m a b
 |x<y=x:m(y:b)a
 |True=y:m(x:a)b
f d=l!!s-1where
 l=0:foldl1 m[map(n+)l|n<-d]
 g=minimum d
 s=until(\n->l!!(n+g)-l!!n==g)(+1)0
Call it with, e.g., f [9,6,20].
FrobeniusScript 5 characters
solve
Sadly there does not yet exist any compiler/interpreter for this language.
No params, the interpreter will handle that:
$ echo solve > myProgram
$ frobeniusScript myProgram
6
9
20
^D
Your answer is: 43
$ exit
$ bc
BC> ibase=2
BC> 110&101 // wanna get 100
(standard_in) 8: syntax error
Wikipedia says the ops are "|, & and ^". It may be that they work only in certain bc variants, or I misread something.
Those operators are listed in the section 'Missing' operators relative to C, which ends with "... are not available in POSIX bc"
Although bc won't do it, you can use arithmetic expansion directly in the terminal if you use bash.
To XOR 44 and 61, you can do:
echo $((44^61))
If you want to use binary code, then:
echo $((2#110^2#101))
See Numerical Constants for changing the base.
See bitwise operators section to peep at available operators.
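As a further workaround (my own aside, not from the answer above): if bc's bitwise operators aren't available, a Python one-liner from the shell covers the same ground, with 0b for binary input and bin() for binary output.
print(0b110 & 0b101)        # 4 -- the AND from the bc transcript
print(bin(0b110 & 0b101))   # '0b100', the result the asker wanted to see
print(44 ^ 61)              # 17 -- the XOR example from above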
The wikipedia article is pretty clear that these ops aren't in either POSIX bc or gnu bc. The man page has no mention of them either.
Why do most computer programming languages not allow binary numbers to be used like decimal or hexadecimal?
In VB.NET you could write a hexadecimal number like &H4
In C you could write a hexadecimal number like 0x04
Why not allow binary numbers?
&B010101
0y1010
Bonus Points!... What languages do allow binary numbers?
Edit
Wow! So the majority think it's because of brevity, and poor old "waves" thinks it's due to the technical aspects of the binary representation.
Because hexadecimal (and rarely octal) literals are more compact and people using them usually can convert between hexadecimal and binary faster than deciphering a binary number.
Python 2.6+ allows binary literals, and so do Ruby and Java 7, where you can use the underscore to make byte boundaries obvious. For example, the hexadecimal value 0x1b2a can now be written as 0b00011011_00101010.
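Modern Python has caught up on the underscore as well (3.6+, which goes beyond the 2.6 mentioned above); a quick sketch:
flags = 0b0001_1011_0010_1010          # digit-group underscores (Python 3.6+ syntax)
assert flags == 0x1B2A == 6954         # same value as the hex form above
print(bin(flags), hex(flags))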
In C++0x, with user-defined literals, binary numbers will be supported. I'm not sure if it will be part of the standard, but at worst you'll be able to enable it yourself:
int operator "" _B(unsigned long long i);   // body (not shown) would reinterpret the decimal digits of i as bits
assert( 1010_B == 10);
In order for a bit representation to be meaningful, you need to know how to interpret it.
You would need to specify what type of binary number you're using (signed/unsigned, two's complement, one's complement, signed magnitude).
The only languages I've ever used that properly support binary numbers are hardware description languages (Verilog, VHDL, and the like). They all have strict (and often confusing) definitions of how numbers entered in binary are treated.
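A small illustration of why the interpretation matters (a Python sketch treating the same 8-bit pattern two ways; nothing here comes from the answer above):
bits = 0b11100000                                # one 8-bit pattern
as_unsigned = bits                               # 224
as_twos_complement = bits - 256 if bits & 0x80 else bits   # -32
print(as_unsigned, as_twos_complement)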
See perldoc perlnumber:
NAME
perlnumber - semantics of numbers and numeric operations in Perl
SYNOPSIS
$n = 1234; # decimal integer
$n = 0b1110011; # binary integer
$n = 01234; # octal integer
$n = 0x1234; # hexadecimal integer
$n = 12.34e-56; # exponential notation
$n = "-12.34e56"; # number specified as a string
$n = "1234"; # number specified as a string
Slightly off-topic, but newer versions of GCC added a C extension that allows binary literals. So if you only ever compile with GCC, you can use them. Documentation is here.
Common Lisp allows binary numbers, using #b... (bits going from highest-to-lowest power of 2). Most of the time, it's at least as convenient to use hexadecimal numbers, though (by using #x...), as it's fairly easy to convert between hexadecimal and binary numbers in your head.
Hex and octal are just shorter ways to write binary. Would you really want a 64-character long constant defined in your code?
Common wisdom holds that long strings of binary digits, e.g. 32 bits for an int, are too difficult for people to conveniently parse and manipulate. Hex is generally considered easier, though I've not used either enough to have developed a preference.
Ruby, as already mentioned, attempts to resolve this by allowing _ to be liberally inserted in the literal, for example:
irb(main):005:0> 1111_0111_1111_1111_0011_1100
=> 111101111111111100111100
D supports binary literals using the syntax 0[bB][01]+, e.g. 0b1001. It also allows embedded _ characters in numeric literals to allow them to be read more easily.
Java 7 now has support for binary literals. So you can simply write 0b110101. There is not much documentation on this feature. The only reference I could find is here.
While C only has native support for bases 8, 10, and 16, it is actually not that hard to write a preprocessor macro that makes writing 8-bit binary numbers quite simple and readable:
#define BIN(d7,d6,d5,d4, d3,d2,d1,d0) \
( \
((d7)<<7) + ((d6)<<6) + ((d5)<<5) + ((d4)<<4) + \
((d3)<<3) + ((d2)<<2) + ((d1)<<1) + ((d0)<<0) \
)
int my_mask = BIN(1,1,1,0, 0,0,0,0);
This can also be used for C++.
For the record, and to answer this:
Bonus Points!... What languages do allow binary numbers?
Specman (aka e) allows binary numbers. Though to be honest, it's not quite a general purpose language.
Every language should support binary literals. I go nuts not having them!
Bonus Points!... What languages do allow binary numbers?
Icon allows literals in any base from 2 to 16, and possibly up to 36 (my memory grows dim).
It seems that, from a readability and usability standpoint, the hex representation is a better way of defining binary numbers. The fact that they don't add it is probably more a matter of user need than a technology limitation.
I expect that the language designers just didn't see enough of a need to add binary numbers. The average coder can parse hex just as well as binary when handling flags or bit masks. It's great that some languages support binary as a representation, but I think on average it would be little used. Although binary -- if available in C, C++, Java, or C# -- would probably be used more than octal!
In Smalltalk it's like 2r1010. You can use any base up to 36 or so.
Hex is just less verbose, and can express anything a binary number can.
Ruby has nice support for binary numbers, if you really want it. 0b11011, etc.
In Pop-11 you can use a prefix made of number (2 to 32) + colon to indicate the base, e.g.
2:11111111 = 255
3:11111111 = 3280
16:11111111 = 286331153
31:11111111 = 28429701248
32:11111111 = 35468117025
Forth has always allowed numbers of any base to be used (up to the size limit of the CPU, of course). Want to use binary: 2 BASE ! Octal: 8 BASE ! etc. Want to work with time? 60 BASE ! These examples are all entered with the base set to 10 (decimal). To change base, you must represent the desired base in the current number base. If you're in binary and want to switch back to decimal, then 1010 BASE ! will work. Most Forth implementations have 'words' to shift to common bases, e.g. DECIMAL, HEX, OCTAL, and BINARY.
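The same "any base from 2 to 36" idea is available in most general-purpose languages for parsing, if not for literals; a Python sketch (the to_base helper is mine, not a built-in):
print(int("11111111", 3))    # 3280, matching the Pop-11 example above
print(int("ff9f", 16))       # 65439, bases up to 36 are accepted

def to_base(n, base, digits="0123456789abcdefghijklmnopqrstuvwxyz"):
    out = ""
    while n:
        n, r = divmod(n, base)
        out = digits[r] + out
    return out or "0"

print(to_base(255, 2))       # '11111111'
print(to_base(3280, 3))      # '11111111'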
Although it's not direct, most languages can also parse a string. Java can convert "10101000" into an int with a method.
Not that this is efficient or anything... Just saying it's there. If it were done in a static initialization block, it might even be done at compile time depending on the compiler.
If you're any good at binary, even with a short number it's pretty straightforward to see 0x3c as 4 ones followed by 2 zeros, whereas even that short a number in binary would be 0b111100, which might make your eyes hurt before you were certain of the number of ones.
0xff9f is exactly 4+4+1 ones, 2 zeros and 5 ones (on sight the bitmask is obvious). Trying to count out 0b1111111110011111 is much more irritating.
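A quick sanity check of that claim (my addition), just to ground the arithmetic:
assert 0b1111111110011111 == 0xff9f   # same 16-bit pattern
print(bin(0xff9f).count("1"))         # 14 set bits: 9 ones, 2 zeros, 5 ones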
I think the issue may be that language designers are always heavily invested in hex/octal/binary/whatever and just think this way. If you are less experienced, I can totally see how these conversions wouldn't be as obvious.
Hey, that reminds me of something I came up with while thinking about base conversions. A sequence--I didn't think anyone could figure out the "Next Number", but one guy actually did, so it is solvable. Give it a try:
10
11
12
13
14
15
16
21
23
31
111
?
Edit:
By the way, this sequence can be created by feeding sequential numbers into a single built-in function in most languages (Java for sure).