Prove using induction that the loop invariant holds - proof

//Precondition: n > 0
//Postcondition: returns the minimum number of decimal digits
// necessary to write out the number n
int countDigits(int n){
1.    int d = 0;
2.    int val = n;
3.    while(val != 0){
4.        val = val / 10; // In C++: 5 / 2 == 2
5.        d++;
6.    }
7.    return d;
}
Invariant: Just before evaluating the loop guard on line 3, n with its rightmost d digits removed is identical to val. (Assume that the number 0 takes 0 digits to write out and is the only number that takes 0 digits to write out).
Prove using induction that the loop invariant holds.
Now I've always thought that proof by induction means assuming that an equation holds when some variable in it is replaced by k, and then proving that it must also hold for k+1. But I'm not really given an equation in this question, just a block of code. Here's my base case:
Just before evaluating the loop guard on line 3 for the first time, d is equal to 0, and by line 2, val == n; so n with its rightmost 0 digits removed is val. Therefore, the base case holds.
I'm not really sure how to write the inductive step after this, since I'm not sure what proving the k+1 case looks like here.

The logic is really the same as with an equation, except that the role of k in your equation is played by the iteration number N of the loop:
1. the base case is that the loop invariant holds before starting the loop;
2. you have to prove that if the invariant holds before iteration N, it will still hold after execution of iteration N.
From 1. and 2. we conclude by induction that the invariant holds at the end of the loop (or at the end of any iteration, in fact).
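To spell out the inductive step (a sketch, restating the invariant as val == floor(n / 10^d), i.e. "n with its rightmost d digits removed"): assume the invariant holds just before some evaluation of the guard on line 3, and that the guard passes (val != 0), so the body runs once more. Line 4 replaces val by floor(val / 10) == floor(n / 10^(d+1)), and line 5 replaces d by d + 1; so just before the guard is evaluated again, n with its rightmost d digits removed is again identical to val. That is exactly the invariant for the next iteration, which completes the induction.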
EDIT: and this is interesting, because the loop ends with val == 0. Your invariant (still true at the end of the loop) is "n with its rightmost d digits removed is identical to val", so n with its rightmost d digits removed is 0 at this point, and therefore d is exactly the number of digits required to display n.

Is using base case variable in a recursive function important?

I'm currently learning about recursion; it's pretty hard to understand. I found a very common example of it:
function factorial(N)
    local Value
    if N == 0 then
        Value = 1
    else
        Value = N * factorial(N - 1)
    end
    return Value
end
print(factorial(3))
N == 0 is the base case. But when I changed it to N == 1, the result still remains the same (it will print 6).
Is using the base case important? (will it break or something?)
What's the difference between using N == 0 (base case) and N == 1?
That's just a coincidence, since 1 * 1 = 1, so it ends up working either way.
But consider the edge case where N = 0: if you check for N == 1, then you'd go into the else branch and calculate 0 * factorial(-1), which would lead to endless recursion.
The same would happen in both cases if you just called factorial(-1) directly, which is why you should either check for N > 0 instead (effectively treating every negative value as 0 and returning 1), or add another if condition and raise an error when N is negative.
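For illustration, here is the same guard idea as a C++ sketch (a hypothetical translation of the Lua code, not part of the original question): treating every n <= 0 as the base case means negative inputs can never recurse forever.

long factorial(long n) {
    if (n <= 0) return 1;          // base case; also swallows negative inputs
    return n * factorial(n - 1);   // recursive case
}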
EDIT: As pointed out in another answer, your implementation is not tail-recursive, meaning it accumulates stack memory for every recursive function call until it finishes or runs out of memory.
You can make the function tail-recursive, which allows Lua to treat it pretty much like a normal loop that could run as long as it takes to calculate its result:
local function factorial(n, acc)
    acc = acc or 1
    if n <= 0 then
        return acc
    else
        return factorial(n - 1, acc * n)
    end
end
print(factorial(3))
Note though, that in the case of factorial, it would take you way longer to run out of stack memory than to overflow Lua's number data type at around 21!, so making it tail-recursive is really just a matter of training yourself to write better code.
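To see why a tail call behaves like a loop, here is the accumulator version rewritten as an explicit loop, sketched in C++ (an illustration, not part of the original answer; C++, unlike Lua, does not guarantee tail-call elimination):

#include <cstdint>

uint64_t factorial(unsigned n) {
    uint64_t acc = 1;              // plays the role of the acc parameter
    for (; n > 0; --n)
        acc *= n;                  // each pass mirrors one tail call
    return acc;                    // note: a 64-bit integer overflows past 20!
}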
As the above answer and comments have pointed out, it is essential to have a base-case in a recursive function; otherwise, one ends up with an infinite loop.
Also, in the case of your factorial function, it is probably more efficient to use a helper function to perform the recursion, so as to take advantage of Lua's tail-call optimizations. Since Lua conveniently allows for local functions, you can define a helper within the scope of your factorial function.
Note that this example is not meant to handle the factorials of negative numbers.
-- Requires: n is an integer greater than or equal to 0.
-- Effects : returns the factorial of n.
function fact(n)
    -- Local function that will actually perform the recursion.
    local function fact_helper(n, i)
        -- This is the base case.
        if (i == 1) then
            return n
        end
        -- Take advantage of tail calls.
        return fact_helper(n * i, i - 1)
    end
    -- Check for edge cases, such as fact(0) and fact(1).
    if ((n == 0) or (n == 1)) then
        return 1
    end
    return fact_helper(n, n - 1)
end

How does this function calculate?

I've been working through CodeWars katas and I came across a pretty cool solution that someone came up with. The problem is that I don't understand how it works. I understand some of it, like what it is generally doing, but not the specifics. Is it returning itself? How is it doing the calculation? Can someone explain this to me, because I really want to learn how to do this. And if you know of any other resources I can read or watch, that would be helpful. I didn't see anything like this in the Swift documentation.
func findDigit(_ num: Int, _ nth: Int) -> Int {
    let positive = abs(num)
    guard nth > 0 else { return -1 }
    guard positive > 0 else { return 0 }
    guard nth > 1 else { return positive % 10 }
    return findDigit(positive / 10, nth - 1)
}
For context:
Description:
The function findDigit takes two numbers as input, num and nth. It outputs the nth digit of num (counting from right to left).
Note
If num is negative, ignore its sign and treat it as a positive value.
If nth is not positive, return -1.
Keep in mind that 42 = 00042. This means that findDigit(42, 5) would return 0.
Examples
findDigit(5673, 4) returns 5
findDigit(129, 2) returns 2
findDigit(-2825, 3) returns 8
findDigit(-456, 4) returns 0
findDigit(0, 20) returns 0
findDigit(65, 0) returns -1
findDigit(24, -8) returns -1
Greatly appreciate any help. Thanks.
This is a simple recursive function. Recursive means that it calls itself over and over until a condition is satisfied that ends the recursion. If the condition is never satisfied, you'll end up with an infinite recursion which is not a good thing :)
As you already understand the purpose of the function, here are the details of how it works internally:
// Saves the absolute value (removes the negative sign) of num
let positive = abs(num)
// Returns -1 if nth is 0 or negative
guard nth > 0 else { return -1 }
// Returns 0 if the absolute value of num is 0 (it can't be negative,
// so 0 is the only remaining case this guard can catch)
guard positive > 0 else { return 0 }
// nth is a counter that is decremented with every recursion.
// positive % 10 returns the remainder of positive / 10
// For example 23 % 10 = 3
// This line always returns a number from 0 - 9, and is reached only IF nth == 1
guard nth > 1 else { return positive % 10 }
// If none of the above conditions are true, calls itself using
// the current absolute value divided by 10, decreasing nth.
// nth serves to target a different digit in the original number
return findDigit(positive / 10, nth - 1)
Let's run through an example step by step:
findDigit(3454, 3)
num = 3454, positive = 3454, nth = 3
-> return findDigit(3454 / 10, 3 - 1)
num = 345, positive = 345, nth = 2 // 345, not 345.4: integer type
-> return findDigit(345 / 10, 2 - 1)
num = 34, positive = 34, nth = 1
-> return 34 % 10
-> return 4
It is a recursive solution. It does not return itself, per se; it calls itself on a simpler case, until it gets to a base case (here a 1-digit number). So, for example, let us trace through what it does in your first example:
findDigit(5673, 4) calls
findDigit (567, 3) calls
findDigit (56,2) calls
findDigit (5,1) which is the base case which returns 5 which bubbles all the way back up to the surface.
This is a recursive algorithm. It works by solving the original problem by reducing it to a smaller problem of the same type, then solving that, recursively, until a base case is hit.
I think you'll have a much easier time understanding it if you see the calls being made. Of course, it's best to step through this in the debugger to really see what's going on. I've numbered the sections of interest to refer to them below
func findDigit(_ num: Int, _ nth: Int) -> Int {
    print("findDigit(\(num), \(nth))")          // #1
    let positive = abs(num)                     // #2
    guard nth > 0 else { return -1 }            // #3
    guard positive > 0 else { return 0 }        // #4
    guard nth > 1 else { return positive % 10 } // #5
    return findDigit(positive / 10, nth - 1)    // #6
}

print(findDigit(5673, 4))
I print out the function and its parameters, so you can see what's going on. Here's what's printed:
findDigit(5673, 4)
findDigit(567, 3)
findDigit(56, 2)
findDigit(5, 1)
5
Take the positive value of num, so the - sign doesn't get in the way.
Assert that the nth variable is greater than 0. Since digit counting in this problem starts at 1, any value equal to or less than 0 is invalid, and in such a case -1 is returned. This is very bad practice in Swift; this is what Optionals exist for. It would be much better to make this function return Int? and return nil to signal an invalid nth.
Assert that the positive variable is greater than 0. The only other possible case is that positive is 0, in which case its digit (for any position) is 0, so that's why you have return 0.
Assert that nth is greater than 1. If this is not the case, then nth must be exactly 1 (the guard numbered #3 ensures it can't be negative or 0). In that case, the digit in the first position of a decimal number is that number modulo 10, hence why positive % 10 is returned.
If we reach this line, then we know we have a sane value of nth (> 0) which isn't 1, and a positive number greater than 0. Now we can proceed to solve the problem by recursing: we divide positive by 10 and make that the new num, and we decrement nth, because what is the nth digit of this call will be the (nth-1)th digit of the next call.
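If it helps, the same logic can be unrolled into a loop; here is a sketch in C++ rather than Swift (an illustration, not the original code), where each pass discards one low digit exactly like one recursive call:

#include <cstdlib>

int findDigit(int num, int nth) {
    if (nth <= 0) return -1;        // mirrors guard #3
    int positive = std::abs(num);   // mirrors #2
    while (nth > 1 && positive > 0) {
        positive /= 10;             // drop the rightmost digit, as in #6
        --nth;
    }
    return positive % 10;           // covers #4 (0 stays 0) and #5
}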
Someone by the name of JohanWiltink on CodeWars answered my question. But I chose to accept Nicolas's for the detail.
This was JohanWiltink explanation:
The function does not return itself as a function; it calls itself with different arguments and returns the result of that recursive call (this is possibly nested until, in this case, nth=1).
findDigit(10,2) thus returns the value of findDigit(1,1).
If you're not seeing how this works, try to work out by hand what e.g. findDigit(312,3) would return.
Thanks so much to everyone that answered! Really appreciate it!

Addition as binary operations

I'm adding a pair of unsigned 32-bit binary integers (including overflow). The addition is expressed symbolically rather than actually computed, so there's no need for an efficient algorithm, but since each component is manually specified in terms of individual bits, I need one with a compact representation. Any suggestions?
Edit: In terms of boolean operators. So I'm thinking carry = a & b; sum = a ^ b; for the first bit, but what about the other 31?
Oh, and subtraction!
You cannot perform addition with simple boolean operators; you need an adder. (Of course the adder can be built using some more complex boolean operators.)
The adder adds two bits plus carry, and passes carry out to next bit.
Pseudocode:
carry = 0
for i = 31 to 0
    sum = a[i] + b[i] + carry
    result[i] = sum & 1
    carry = sum >> 1
next i
Here is an implementation using the macro language of VEDIT text editor.
The two numbers to be added are given as ASCII strings, one on each line.
The results are inserted on the third line.
Reg_Empty(10)                // result as ASCII string
#0 = 0                       // carry bit
for (#9=31; #9>=0; #9--) {
    #1 = CC(#9)-'0'          // a bit from first number
    #2 = CC(#9+34)-'0'       // a bit from second number
    #3 = #0+#1+#2            // add with carry
    #4 = #3 & 1              // resulting bit
    #0 = #3 >> 1             // new carry
    Num_Str(#4, 11, LEFT)    // convert bit to ASCII
    Reg_Set(10, #11, INSERT) // insert bit at the start of the string
}
Line(2)
Reg_Ins(10) IN
Return
Example input and output:
00010011011111110101000111100001
00110110111010101100101101110111
01001010011010100001110101011000
Edit:
Here is pseudocode where the adder has been implemented with boolean operations:
carry = 0
for i = 31 to 0
    sum[i] = a[i] ^ b[i] ^ carry
    carry = (a[i] & b[i]) | (a[i] & carry) | (b[i] & carry)
next i
Perhaps you can begin by stating addition for two 1-bit numbers, with overflow (=carry):
A | B | SUM | CARRY
===================
0 | 0 |  0  |   0
0 | 1 |  1  |   0
1 | 0 |  1  |   0
1 | 1 |  0  |   1
To generalize this further, you need a "full adder" which also takes a carry as an input, from the preceding stage. Then you can express the 32-bit addition as a chain of 32 such full adders (with the first stage's carry input tied to 0).
Regarding the data-structure part, there are 4 ways to represent these numbers:
1) Bit Array
A bit array is an array data structure that compactly stores individual bits.
They are also known as bitmap, bitset or bitstring.
2) Bit Field
A bit field is a common idiom used in computer programming to compactly store multiple logical values as a short series of bits where each of the single bits can be addressed separately.
3) Bit Plane
A bit plane of a digital discrete signal (such as image or sound) is a set of bits corresponding to a given bit position in each of the binary numbers representing the signal.
4) Bit Board
A bitboard or bit field is a format that stuffs a whole group of related boolean variables into the same integer, typically representing positions on a board game.
Regarding implementation, you can check that at each step we have the following:
S = a xor b xor c
where S is the sum of the current bits a and b,
c is the input carry, and
Cout, the output carry, is (a & b) xor (c & (a xor b))
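As a sketch of how these formulas turn into running code, here is a word-level C++ version (an illustration, not from the original answers): XOR produces the partial sum, AND finds the carry bits, and a left shift propagates them one position; the loop repeats until no carries remain (at most 32 passes), and overflow simply wraps as the question requires.

#include <cstdint>

uint32_t add(uint32_t a, uint32_t b) {
    while (b != 0) {
        uint32_t carry = a & b; // positions where a carry is generated
        a ^= b;                 // sum without the carries (a XOR b)
        b = carry << 1;         // carries move one bit to the left
    }
    return a;
}

// Subtraction follows from two's complement: a - b == a + (~b) + 1.
uint32_t sub(uint32_t a, uint32_t b) { return add(a, add(~b, 1u)); }

This also answers the subtraction edit: negate with ~b + 1 and reuse the adder.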

Translation from Complex-FFT to Finite-Field-FFT

Good afternoon!
I am trying to develop an NTT algorithm based on the naive recursive FFT implementation I already have.
Consider the following code (coefficients' length, let it be m, is an exact power of two):
/// <summary>
/// Calculates the result of the recursive Number Theoretic Transform.
/// </summary>
/// <param name="coefficients"></param>
/// <returns></returns>
private static BigInteger[] Recursive_NTT_Skeleton(
    IList<BigInteger> coefficients,
    IList<BigInteger> rootsOfUnity,
    int step,
    int offset)
{
    // Calculate the length of vectors at the current step of recursion.
    int n = coefficients.Count / step - offset / step;

    if (n == 1)
    {
        return new BigInteger[] { coefficients[offset] };
    }

    BigInteger[] results = new BigInteger[n];

    IList<BigInteger> resultEvens =
        Recursive_NTT_Skeleton(coefficients, rootsOfUnity, step * 2, offset);
    IList<BigInteger> resultOdds =
        Recursive_NTT_Skeleton(coefficients, rootsOfUnity, step * 2, offset + step);

    for (int k = 0; k < n / 2; k++)
    {
        BigInteger bfly = (rootsOfUnity[k * step] * resultOdds[k]) % NTT_MODULUS;

        results[k] = (resultEvens[k] + bfly) % NTT_MODULUS;
        results[k + n / 2] = (resultEvens[k] - bfly) % NTT_MODULUS;
    }

    return results;
}
It worked for complex FFT (replace BigInteger with a complex numeric type (I had my own)). It doesn't work here even though I changed the procedure of finding the primitive roots of unity appropriately.
Supposedly, the problem is this: rootsOfUnity parameter passed originally contained only the first half of m-th complex roots of unity in this order:
omega^0 = 1, omega^1, omega^2, ..., omega^(n/2)
It was enough, because in these three lines of code:
BigInteger bfly = (rootsOfUnity[k * step] * resultOdds[k]) % NTT_MODULUS;
results[k] = (resultEvens[k] + bfly) % NTT_MODULUS;
results[k + n / 2] = (resultEvens[k] - bfly) % NTT_MODULUS;
I originally made use of the fact that, at any level of recursion (for any n and i), the complex root of unity satisfies -omega^i = omega^(i + n/2).
However, that property obviously doesn't hold in finite fields. But is there any analogue of it which would allow me to still compute only the first half of the roots?
Or should I extend the cycle from n/2 to n and pre-compute all the m-th roots of unity?
Maybe there are other problems with this code?..
Thank you very much in advance!
I recently wanted to implement NTT for fast multiplication instead of DFFT too. I read a lot of confusing things, different letters everywhere and no simple solution, and my finite-fields knowledge is rusty, but today I finally got it right (after 2 days of trying and analog-ing with DFT coefficients), so here are my insights for NTT:
Computation
X(i) = sum(j=0..n-1) of ( Wn^(i*j)*x(j) );
where X[] is the NTT-transformed x[] of size n, and Wn is the NTT basis. All computations are in integer modular arithmetic mod p; no complex numbers anywhere.
Important values
Wn = r ^ L mod p is the basis for NTT
Wn = r ^ (p-1-L) mod p is the basis for INTT
Rn = n ^ (p-2) mod p is the scaling multiplicative constant for INTT, ~(1/n)
p is a prime such that p mod n == 1 and p > max'
max is the maximum value of x[i] for NTT or X[i] for INTT
r = <1,p)
L = <1,p) and also divides p-1
r,L must be combined so that r^(L*i) mod p == 1 if i=0 or i=n
r,L must be combined so that r^(L*i) mod p != 1 if 0 < i < n
max' is the maximum sub-result value and depends on n and the type of computation. For a single (I)NTT it is max' = n*max, but for a convolution of two n-sized vectors it is max' = n*max*max, etc. See Implementing FFT over finite fields for more info about it.
The working combination of r,L,p is different for different n.
This is important: you have to recompute or select parameters from a table before each NTT layer (n is always half of that of the previous recursion level).
Here is my C++ code that finds the r,L,p parameters (it needs modular arithmetic functions which are not included; you can replace them with (a+b)%c, (a-b)%c, (a*b)%c, ... but in that case beware of overflows, especially for modpow and modmul). The code is not optimized yet; there are ways to speed it up considerably. Also the prime table is fairly limited, so either use the Sieve of Eratosthenes or any other algorithm to obtain primes up to max' in order to work safely.
DWORD _arithmetics_primes[]=
{
2,3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,79,83,89,97,101,103,107,109,113,127,131,137,139,149,151,157,163,167,173,
179,181,191,193,197,199,211,223,227,229,233,239,241,251,257,263,269,271,277,281,283,293,307,311,313,317,331,337,347,349,353,359,367,373,379,383,389,397,401,409,
419,421,431,433,439,443,449,457,461,463,467,479,487,491,499,503,509,521,523,541,547,557,563,569,571,577,587,593,599,601,607,613,617,619,631,641,643,647,653,659,
661,673,677,683,691,701,709,719,727,733,739,743,751,757,761,769,773,787,797,809,811,821,823,827,829,839,853,857,859,863,877,881,883,887,907,911,919,929,937,941,
947,953,967,971,977,983,991,997,1009,1013,1019,1021,1031,1033,1039,1049,1051,1061,1063,1069,1087,1091,1093,1097,1103,1109,1117,1123,1129,1151,
0}; // end of table is 0; the more primes there are, the bigger numbers and n can be used
// compute NTT consts W=r^L%p for n
int i,j,k,n=16;
long w,W,iW,p,r,L,l,e;
long max=81*n; // edit1: max num for NTT for my multiplication purposes
for (e=1,j=0;e;j++) // find prime p that p%n==1 AND p>max ... 9*9=81
{
    p=_arithmetics_primes[j];
    if (!p) break;
    if ((p>max)&&(p%n==1))
        for (r=2;r<p;r++) // check all r
        {
            for (l=1;l<p;l++) // all l that divide p-1
            {
                L=(p-1);
                if (L%l!=0) continue;
                L/=l;
                W=modpow(r,L,p);
                e=0;
                for (w=1,i=0;i<=n;i++,w=modmul(w,W,p))
                {
                    if ((i==0)&&(w!=1)) { e=1; break; }
                    if ((i==n)&&(w!=1)) { e=1; break; }
                    if ((i>0)&&(i<n)&&(w==1)) { e=1; break; }
                }
                if (!e) break;
            }
            if (!e) break;
        }
}
if (e) { error; } // error: no combination r,l,p for n found
W =modpow(r,    L,p); // Wn for NTT
iW=modpow(r,p-1-L,p); // Wn for INTT
And here are my slow NTT and INTT implementations (I haven't gotten to fast NTT/INTT yet); they have both been tested successfully with Schönhage–Strassen multiplication.
//---------------------------------------------------------------------------
void NTT(long *dst,long *src,long n,long m,long w)
{
    long i,j,wj,wi,a,n2=n>>1;
    for (wj=1,j=0;j<n;j++)
    {
        a=0;
        for (wi=1,i=0;i<n;i++)
        {
            a=modadd(a,modmul(wi,src[i],m),m);
            wi=modmul(wi,wj,m);
        }
        dst[j]=a;
        wj=modmul(wj,w,m);
    }
}
//---------------------------------------------------------------------------
void INTT(long *dst,long *src,long n,long m,long w)
{
    long i,j,wi=1,wj=1,rN,a,n2=n>>1;
    rN=modpow(n,m-2,m);
    for (wj=1,j=0;j<n;j++)
    {
        a=0;
        for (wi=1,i=0;i<n;i++)
        {
            a=modadd(a,modmul(wi,src[i],m),m);
            wi=modmul(wi,wj,m);
        }
        dst[j]=modmul(a,rN,m);
        wj=modmul(wj,w,m);
    }
}
//---------------------------------------------------------------------------
dst is destination array
src is source array
n is array size
m is modulus (p)
w is basis (Wn)
Hope this helps someone. If I forgot something, please write ...
[edit1: fast NTT/INTT]
Finally I managed to get fast NTT/INTT to work. It was a little bit more tricky than normal FFT:
//---------------------------------------------------------------------------
void _NFTT(long *dst,long *src,long n,long m,long w)
{
    if (n<=1) { if (n==1) dst[0]=src[0]; return; }
    long i,j,a0,a1,n2=n>>1,w2=modmul(w,w,m);
    // reorder even,odd
    for (i=0,j=0;i<n2;i++,j+=2) dst[i]=src[j];
    for (    j=1;i<n ;i++,j+=2) dst[i]=src[j];
    // recursion
    _NFTT(src   ,dst   ,n2,m,w2); // even
    _NFTT(src+n2,dst+n2,n2,m,w2); // odd
    // restore results
    for (w2=1,i=0,j=n2;i<n2;i++,j++,w2=modmul(w2,w,m))
    {
        a0=src[i];
        a1=modmul(src[j],w2,m);
        dst[i]=modadd(a0,a1,m);
        dst[j]=modsub(a0,a1,m);
    }
}
//---------------------------------------------------------------------------
void _INFTT(long *dst,long *src,long n,long m,long w)
{
    long i,rN;
    rN=modpow(n,m-2,m);
    _NFTT(dst,src,n,m,w);
    for (i=0;i<n;i++) dst[i]=modmul(dst[i],rN,m);
}
//---------------------------------------------------------------------------
[edit3]
I have optimized my code (3x faster than the code above), but I still was not satisfied with it, so I started a new question with it. There I have optimized my code even further (about 40x faster than the code above), so it is almost the same speed as a floating-point FFT of the same bit size. The link to it is here:
Modular arithmetics and NTT (finite field DFT) optimizations
To turn the Cooley-Tukey (complex) FFT into a modular arithmetic approach, i.e. NTT, you must replace the complex definition of omega. For the approach to be purely recursive, you also need to recalculate omega for each level based on the current signal size. This is possible because the minimum suitable modulus decreases as we move down the call tree, so the modulus used for the root is suitable for the lower layers. Additionally, since we are using the same modulus, the same generator may be used as we move down the call tree. Also, for the inverse transform, you should take an additional step: take the recalculated omega a and instead use b = a^(-1) as omega (via the modular inverse operation). Specifically, b = invMod(a, N) such that b * a == 1 (mod N), where N is the chosen prime modulus.
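For reference, here is a sketch of the two modular helpers this relies on, in C++ (an illustration: it assumes the modulus n is prime, so the inverse follows from Fermat's little theorem, a^(n-2) == a^(-1) (mod n); __uint128_t is a GCC/Clang extension used to keep the products from overflowing):

#include <cstdint>

// Square-and-multiply exponentiation: a^e mod n.
uint64_t modpow(uint64_t a, uint64_t e, uint64_t n) {
    uint64_t r = 1;
    a %= n;
    for (; e > 0; e >>= 1) {
        if (e & 1) r = (uint64_t)((__uint128_t)r * a % n);
        a = (uint64_t)((__uint128_t)a * a % n);
    }
    return r;
}

// Modular inverse, valid only for prime n.
uint64_t invMod(uint64_t a, uint64_t n) { return modpow(a, n - 2, n); }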
Rewriting an expression involving omega by exploiting periodicity still works in the modular arithmetic realm. You also need a way to determine the modulus (a prime) for the problem, and a valid generator.
We note that your code works, though it is not a MWE. We extended it using common sense and got the correct result for a polynomial multiplication application. You just have to provide correct values of omega raised to the appropriate powers.
While your code works, like code from many other sources it doubles the spacing for each level. This does not lead to recursion that is as clean; it turns out to be identical to recalculating omega based on the current signal size, because the power in the omega definition is inversely proportional to the signal size. To reiterate: halving the signal size is like squaring omega, which is like doubling the powers used for omega (which is what the doubled spacing does). The nice thing about the approach that recalculates omega is that each subproblem is more cleanly complete in its own right.
There is a paper by Baktir and Sunar from 2006 that shows some of the math for the modular approach; see the reference at the end of this post.
You do not need to extend the cycle from n / 2 to n.
So, yes, sources which say to just drop in a different omega definition for the modular arithmetic approach are sweeping many details under the rug.
Another issue is that the signal size must be large enough so that the resulting time-domain signal does not overflow when we are performing convolution. Additionally, it is useful to find implementations of exponentiation under a modulus that are fast even when the power is quite large.
References
Baktir and Sunar - Achieving efficient polynomial multiplication in Fermat fields using the fast Fourier transform (2006)
You must make sure that roots of unity actually exist. In R there are only 2 roots of unity: 1 and -1, since only for them x^n=1 can be true.
In C you have infinitely many roots of unity: w = exp(2*pi*i/N) is a primitive N-th root of unity, and all w^k for 0 <= k < N are N-th roots of unity.
Now to your problem: you have to make sure the ring you're working in offers the same property: enough roots of unity.
Schönhage and Strassen (http://en.wikipedia.org/wiki/Sch%C3%B6nhage%E2%80%93Strassen_algorithm) use integers modulo 2^N+1. This ring has enough roots of unity: 2^N == -1 is a 2nd root of unity, 2^(N/2) is a 4th root of unity, and so on. Furthermore, these roots of unity have the advantage that they are powers of two and can be implemented as binary shifts (with a modulo operation afterwards, which comes down to an add/subtract).
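To illustrate, here is a small C++ sketch of that shift trick (an illustration with N = 16, chosen so everything fits in 64 bits): since 2^N == -1 (mod 2^N + 1), splitting x * 2^k into hi * 2^N + lo gives x * 2^k == lo - hi, so multiplying by such a root of unity is a shift followed by one subtraction.

#include <cstdint>

const unsigned N = 16;
const uint64_t M = ((uint64_t)1 << N) + 1;      // modulus 2^N + 1

// x * 2^k mod M, for 0 <= x < M, using only shifts and an add/subtract.
uint64_t mulPow2(uint64_t x, unsigned k) {
    uint64_t p  = x << (k % (2 * N));           // 2^(2N) == 1 (mod M), so k wraps
    uint64_t lo = p & (((uint64_t)1 << N) - 1); // p mod 2^N
    uint64_t hi = p >> N;                       // p div 2^N
    return (lo + M - hi % M) % M;               // lo - hi (mod M)
}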
I think QuickMul (http://www.cs.nyu.edu/exact/doc/qmul.ps) works modulo 2^N-1.

Function to determine number of unordered combinations with non-unique choices

I'm trying to determine the function for the number of unordered combinations with non-unique choices.
Given:
n = number of unique symbols to select from
r = number of choices
Example... for n=3, r=3, the result would be: (edit: added missing values pointed out by Dav)
000
001
002
011
012
022
111
112
122
222
I know the formula for combinations (unordered, unique selections), but I can't figure out how allowing repetition increases the set.
In C++ given the following routine:
#include <algorithm> // std::iter_swap, std::rotate

template <typename Iterator>
bool next_combination(const Iterator first, Iterator k, const Iterator last)
{
    /* Credits: Mark Nelson http://marknelson.us */
    if ((first == last) || (first == k) || (last == k))
        return false;
    Iterator i1 = first;
    Iterator i2 = last;
    ++i1;
    if (last == i1)
        return false;
    i1 = last;
    --i1;
    i1 = k;
    --i2;
    while (first != i1)
    {
        if (*--i1 < *i2)
        {
            Iterator j = k;
            while (!(*i1 < *j)) ++j;
            std::iter_swap(i1, j);
            ++i1;
            ++j;
            i2 = k;
            std::rotate(i1, j, last);
            while (last != j)
            {
                ++j;
                ++i2;
            }
            std::rotate(k, i2, last);
            return true;
        }
    }
    std::rotate(first, k, last);
    return false;
}
You can then proceed to do the following:
std::string s = "12345";
std::size_t r = 3;
do
{
    std::cout << std::string(s.begin(), s.begin() + r) << std::endl;
}
while (next_combination(s.begin(), s.begin() + r, s.end()));
If you have N unique symbols, and want to select a combination of length R, then you are essentially putting N-1 dividers into R+1 "slots" between cumulative total numbers of symbols selected.
0 [C] 1 [C] 2 [C] 3
The C's are choices, the numbers are the cumulative count of choices made so far. You're essentially placing a divider for each possible thing you could choose, at the point where you "start" choosing that thing (it's assumed that you start by choosing a particular thing before any dividers are placed, hence the -1 in the N-1 dividers).
If you place all of the dividers at spot 0, then you chose the final thing for all of the choices. If you place all of the dividers at spot 3, then you chose the initial thing for all of the choices. In general, if you place the ith divider at spot k, then you chose thing i+1 for all of the choices that come between that spot and the spot of the next divider.
Since we're trying to place N-1 non-unique items (the dividers are non-unique; they're just dividers) around R slots, we really just want to permute N-1 1's and R 0's, which is effectively
(N+R-1) choose (N-1) = (N+R-1)! / ((N-1)! R!).
Thus the final formula is (N+R-1)!/((N-1)!R!) for the number of unordered combinations with non-unique selection of items.
Note that this evaluates to 10 for N=3, R=3, which matches your result... after you add the missing options that I pointed out in comments above.
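As a sanity check, here is a small C++ sketch (an illustration, not from the original answers) that evaluates (N+R-1)!/((N-1)!R!) without computing any factorials, by building the binomial coefficient C(N+R-1, R) as a running product; every intermediate value is itself a binomial coefficient, so each division is exact.

#include <cstdint>

uint64_t multichoose(uint64_t n, uint64_t r) {
    uint64_t result = 1;
    for (uint64_t i = 1; i <= r; ++i)
        result = result * (n - 1 + i) / i; // result == C(n-1+i, i) after each step
    return result;
}

For example, multichoose(3, 3) returns 10, matching the ten strings listed in the question.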