Uniswap V2: why is overflow desired? - ethereum

I'm analyzing Uniswap V2 core contracts, and have noticed a comment
// overflow is desired
Why is overflow desired?
Because, from my point of view, when overflow happens the next line
if (timeElapsed > 0 && _reserve0 != 0 && _reserve1 != 0) {
will never be true, due to a wrong timeElapsed.

In:
uint32 timeElapsed = blockTimestamp - blockTimestampLast; // overflow is desired
Since timeElapsed is unsigned, if an overflow occurs then its value will necessarily be positive, hence the expression timeElapsed > 0 will necessarily evaluate to true.
If you're planning to dive into Solidity code, you probably want to learn the basic concepts of unsigned integers and two's complement.

That contract is written with pragma solidity =0.5.16. In this version, preventing overflow/underflow errors requires SafeMath library checks, which are extra operations and therefore extra cost (Solidity checks for overflow/underflow automatically since v0.8.0).
uint32 timeElapsed = blockTimestamp - blockTimestampLast; // overflow is desired
blockTimestampLast is the timestamp at which those two variables were last updated:
uint public price0CumulativeLast;
uint public price1CumulativeLast;
timeElapsed is of type uint32, which represents unsigned integers with values ranging from 0 to 4,294,967,295. To simplify, let's say our counter wraps modulo 32 (values 0 to 31) and that we have these variables:
blockTimestamp=30
blockTimestampLast=20
Therefore timeElapsed=10.
Now assume 10 more seconds pass and we do not update price0CumulativeLast and price1CumulativeLast, so blockTimestampLast stays at 20. Since 10 seconds passed and blockTimestampLast did not change, we expect timeElapsed=20.
After those 10 seconds blockTimestamp would be 30+10=40, but since our counter wraps at 32, blockTimestamp becomes 40-32=8. Now calculate timeElapsed:
timeElapsed = blockTimestamp - blockTimestampLast
            = 8 - 20 = -12
Since we are working modulo 32, -12 wraps to -12+32=20 seconds, so timeElapsed=20. The computed time difference is still correct, which is why overflow is desired in this case.
But imagine a case where you are adding up total costs and our counter still wraps at 32. When your total cost is 25 dollars and you add a 10 dollar cost, it should be 35 dollars, but in our wrapped range it becomes 3 dollars. There, overflow is not desirable.
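As a rough illustration of why the wraparound is harmless here (a minimal C++ sketch, not the actual Solidity contract; uint32_t stands in for Solidity's uint32):
#include <cstdint>
#include <iostream>
int main()
{
    // Pretend the 32-bit timestamp has already wrapped around: the last stored
    // timestamp was near the top of the range, and the current one overflowed
    // back to a small value 10 real seconds later.
    uint32_t blockTimestampLast = 4294967290u; // 6 seconds before the wraparound
    uint32_t blockTimestamp     = 4u;          // 10 real seconds later, after the wrap
    // Unsigned subtraction is performed modulo 2^32, so the wrapped difference
    // still equals the real elapsed time.
    uint32_t timeElapsed = blockTimestamp - blockTimestampLast;
    std::cout << timeElapsed << "\n";          // prints 10
}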

Related

How to define periodic function in Fortran?

How can you define a periodic function in Fortran?
For example, f(x) = exp(-x**2), for –10 < x < 10 with period 20.
You should be fine with using MODULO
f = exp(-(MODULO(x-10,20.)-10.)**2)
x-10 is used to shift the repeating interval to [-10,10). Otherwise, MODULO(x, 20.) would be periodic on the interval [0,20). Because MODULO returns values in [0,20), the final -10. shifts the result to [-10,10).
I assumed that x is real. If it has a different kind or even a different type, the other argument to MODULO must be of the same type (20._wp, 20.d0, 20, ...) as needed. The final -10. may also have to be adjusted to a higher kind.

Is the stack limit of 5287 in AS3 variable or predefined?

I ran a test just now:
function overflow(stack:int = 0):void
{
if(stack < 5290)
{
trace(stack);
overflow(stack + 1);
}
}
overflow();
This always throws a StackOverflow error after 5287 calls.
Error #1023: Stack overflow occurred.
Is this limit variable (depending on machine specs, environment, etc) or is that a flat value defined somewhere? If I change the if statement to less than 5287, I don't get the error.
Obviously it's variable. Since all the calculations you do actually live on the stack (disassembly reports show pushbyte instructions and other stack-based, non-operand arithmetic), this value only tells you how many function contexts can be pushed onto the stack before it overflows.
I decided to run some tests of recursion thresholds based on the article referenced in baris's comment. The results were pretty embarrassing. Test environment: FlashDevelop 3.3.4 RTM, Flash Player debugger 10.1.53.64, Flash compile mode: release. "Debug" mode didn't change the numbers significantly; I checked that too.
Locals            | Iterations (static int) | Iterations (Math.random())
0                 | 5306                    | -
1                 | 4864                    | 4856
2                 | 4850                    | 4471
3                 | 4474                    | 4149
4                 | 4153                    | 3870
5                 | 3871                    | 3868
6                 | 3869                    | 3621
7                 | 3620                    | 3404
8                 | 3403                    | 3217
9                 | 3210                    | 3214
10                | 3214                    | 3042
11                | 3042                    | 3045
10 mixed          | 3042                    | (1 local was assigned Math.random() and 9 a static int)
10 advancedRandom | 2890                    | (1 local was assigned a custom random with 1 parameter)
Note: all of these values vary within a margin of ten between subsequent executions. The "static int" and "Math.random()" columns designate what is assigned to the locals within the recursively called function. This, however, leads me to assume the following:
1. Including function calls in the recursive function adds to the function context.
2. Memory for locals is assigned along with their type, in chunks of more than 8 bytes, because adding a local does not always decrease the recursion limit.
3. Adding more than one call to a certain function does not add more memory to the function context.
4. The "memory chunk" is most likely 16 bytes long, because this value is 2^N, an addition of one int or Number local does not always decrease recursion, and this is more than 8, as the raw value of a Number variable takes 8 bytes, being a double-precision floating-point.
Assuming #4 is correct, the best estimate for the function context size came out to 172 bytes, with a total stack size of 912632 bytes. This largely confirms my initial assumption that the stack size is actually 1 megabyte in Flash 10. Flash 11 showed slightly higher numbers when I tried opening the test SWF in its debugger, but I didn't run extensive tests with it.
Hm, this is interesting. I took a look at the link that Barış gave. It seems like it might be to do with 'method complexity' after all, but I am not sure how to test it further. I am using Flash CS5, publishing for Flash Player 10, ActionScript 3 (of course).
Original:
function overflow(stack:int = 0):void {
if(stack < 5290){
trace(stack);
overflow(stack + 1);
}
}
// gives 5287
Now adding a single Math.random() call to the overflow() method:
function overflow(stack:int = 0):void {
Math.random();
if(stack < 5290){
trace(stack);
overflow(stack + 1);
}
}
// gives 4837
Adding multiple Math.random() calls makes no difference, nor does storing the result in a local variable or adding another parameter to the overflow() method to 'carry' that randomly generated value:
function overflow(stack:int = 0):void {
Math.random();
Math.random();
if(stack < 5290){
trace(stack);
overflow(stack + 1);
}
}
// still gives 4837
At this point I tried different Math calls, such as:
// just the change to that 1 line:
Math.pow() // gives 4457
Math.random(), Math.sqrt(), Math.tan(), Math.log() // gives 4837
Interestingly, it doesn't seem to matter what you pass in to the Math call; the value remains constant:
Math.sqrt(5) vs Math.sqrt(Math.random()) // gives 4837
Math.tan(5) vs Math.tan(Math.random()) // gives 4837
Math.pow(5, 7) vs Math.pow(Math.random(), Math.random()) // 4457
Until I chained 3 of them:
Math.tan(Math.log(Math.random())); // gives 4457
It looks like two Math calls from that 'group' are "equal" to one Math.pow() call? =b Mixing Math.pow() and something else doesn't seem to decrease the value further, though:
Math.pow(Math.random(), Math.random()); // gives 4457
However, chaining two Math.pow()'s:
Math.pow(Math.pow(Math.random(), Math.random()), Math.random()); // 4133
I could go on and on, but I wonder if there is some pattern:
Results: 5287, 4837, 4457, 4133
Differences: 450, 380, 324
Must be variable! I just compiled your sample and I get to 5274 before the stack overflow.
#baris that's for the mxmlc compiler
+1 for stack overflow question ^^

Strange behavior in a simple for loop using uint

This works as expected:
for (var i:uint = 5; i >= 1; i-- )
{
trace(i); // output is from 5~1, as expected
}
This is the strange behavior:
for (var i:uint = 5; i >= 0; i-- )
{
trace(i)
}
// output:
5
4
3
2
1
0
4294967295
4294967294
4294967293
...
Below 0, something like a MAX_INT appears and it goes on decrementing forever. Why is this happening?
EDIT
I tested similar code in C++ with an unsigned int and got the same result. Probably the condition is being evaluated after the decrement.
The behavior you are describing has little to do with any particular programming language; it is true for C, C++, ActionScript, etc. What you see is quite normal behavior and has to do with the way a number is represented (see the wiki article and read about unsigned integers).
It happens because you are using a uint (unsigned integer), which can only hold non-negative numbers; the type you are using cannot represent negative numbers. So if you take a uint like this:
uint i = 0;
and you subtract 1 from it:
i = i - 1;
then, since i is unsigned and cannot represent negative numbers, i will wrap around to the maximum value of the uint data type.
Your edit that you posted above,
"...in C++, .. same result..."
should give you a clue as to why this is happening: it has nothing to do with what language you are using, or with when the comparison is done. It has to do with what data type you are using.
As an exercise, fire up that C++ program again and write a program that displays the maximum value of a uint. The program should not use any defined constants :) ... it should take you one line of code, too!
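For instance, a minimal C++ sketch of that exercise (relying on the same wraparound behavior instead of a named constant, and assuming a 32-bit unsigned int):
#include <iostream>
int main()
{
    // 0u - 1 is evaluated modulo 2^32, which yields the maximum value of unsigned int.
    std::cout << 0u - 1 << std::endl; // prints 4294967295
}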

Exponent Binary Numbers

Could someone tell me the logic behind exponentiating binary numbers? For example, I want to compute 110^10, but I don't know the logic behind it. If someone could supply me with that, it'd be a great help. (And I want it to be done in pure binary with no conversions and no looping multiplication. Just logic...)
peenut is correct in that exponentiation doesn't care what base you're representing your numbers in, and I don't know what you mean by "just logic," but here's a stab at it.
A quick search over at Wikipedia reveals this algorithm. The basic idea is to square your base, store the result, then square that result and repeat. This gives you the factors of your answer, which you can then multiply together. I think of it as a "binary search"-flavored exponentiation algorithm, since you can skip a lot of intermediate steps by squaring and storing.
Binary exponents are very easy. They are simply additions and shifts only.
the number 110 is where you start.
Working backwards from the number 10 - (i.e. 0) - it's a zero, so this means "do not add it in."
Now you shift left - so 110 becomes 1100
Now you work on the next bit of the 10 (i.e. 1) - it's a one, so this means "add this to the result" - it's 0 so far, because we didn't already add it, so the result is now 1100
there are no more bits to do - so the answer is 1100
If you were doing 110^110 - you would have one more to do - so - you again shift and get 11000 now.
The last bit is again a one, so now you add:
1100 +
11000 =
100100
110^10=1100 i.e. 6^2=12
110^110=100100 i.e. 6^6=36
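For what it's worth, the shift-and-add procedure walked through above is binary long multiplication (as a later answer in this thread also points out); a minimal C++ sketch of it, with names of my own choosing, looks like this:
// Shift-and-add binary multiplication: for each set bit of b,
// add the correspondingly shifted copy of a to the result.
unsigned multiply(unsigned a, unsigned b)
{
    unsigned result = 0;
    while (b != 0)
    {
        if (b & 1)       // current bit of b is 1: add the shifted copy of a
            result += a;
        a <<= 1;         // shift a left for the next bit of b
        b >>= 1;
    }
    return result;
}
// multiply(0b110, 0b10)  == 0b1100   (6 * 2 = 12)
// multiply(0b110, 0b110) == 0b100100 (6 * 6 = 36)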
Exponentiation is an operation that is independent of the textual representation of a number (e.g. base 2 - binary, base 10 - decimal).
Maybe you meant to ask about the binary XOR (eXclusive OR) operation instead?
Unfortunately the easiest way for your computer to handle simple exponents is your "looping multiplication" (the naïve approach), which is the most rudimentary (and literal) way of handling it. As user1561358 commented, it is NOT just binary adds and shifts - that is multiplication. To raise 6^6 (110^110 in binary) the naïve approach has you multiplying the base n times (as below):
110
x 110
--------------
100100 = 36
x 110
--------------
11011000 = 216
x 110
--------------
10100010000 = 1296
x 110
--------------
1111001100000 = 7776
x 110
--------------
01011011001000000 = 46656
The simple recursive code for this exponentiation is elegant for most applications:
long long binpow(long long a, long long b) {
if (b == 0)
return 1;
long long res = binpow(a, b / 2);
if (b % 2)
return res * res * a;
else
return res * res;
}
For larger or arbitrary exponents you can dramatically reduce the number of calculations by applying Horner's Method, explained in great detail in this video specifically calculating binary exponents.
In essence, you are just multiplying the factors for the bits with non-zero exponents. Let's look at 110^110 in binary (or 6^6):
110^110 breaks down into the following exponents:
There is no "1" bit set, so 6^1 won't be multiplied in, but we do have the two and four bits to calculate:
6^10 (binary) = 6^2 = 36
6^100 (binary) = 6^4 = 1296
So, 6^6 = 36 x 1296 = 46656
The above code can be modified only slightly to check for non-zero exponent bits with a while loop:
long long binpow(long long a, long long b) {
long long res = 1;
while (b > 0) {
if (b & 1)
res = res * a;
a = a * a;
b >>= 1;
}
return res;
}
To really see the advantage of this, let's try the binary exponentiation of
111^100000000 in binary, which is 7^256.
The naïve approach would require us to make 256 multiplication iterations!
Instead, all the bits of the exponent except the 2^8 = 256 bit are zero, so the res multiplication is skipped in every pass of the while loop but one; the loop just keeps squaring a until it reaches a^256:
111^100000000 (binary) = a 719-digit binary number beginning with 11001101011...
7^256 = 2213595400046048155450188615474945937162517050260073069916366390524704974007989996848003433837940380782794455262312607598867363425940560014856027866381946458951205837379116473663246733509680721264246243189632348313601
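As a quick sanity check of the iterative binpow above (a minimal usage sketch; the function is repeated verbatim so the snippet compiles on its own, and the 7^256 example itself would overflow long long and needs a big-integer or modular variant):
#include <iostream>

long long binpow(long long a, long long b)
{
    long long res = 1;
    while (b > 0)
    {
        if (b & 1)       // multiply the result only for set bits of the exponent
            res = res * a;
        a = a * a;       // square the base once per exponent bit
        b >>= 1;
    }
    return res;
}

int main()
{
    std::cout << binpow(6, 2) << "\n"; // 36    (110^10 from the question)
    std::cout << binpow(6, 6) << "\n"; // 46656 (the 6^6 example above)
}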

Translation from Complex-FFT to Finite-Field-FFT

Good afternoon!
I am trying to develop an NTT algorithm based on the naive recursive FFT implementation I already have.
Consider the following code (coefficients' length, let it be m, is an exact power of two):
/// <summary>
/// Calculates the result of the recursive Number Theoretic Transform.
/// </summary>
/// <param name="coefficients"></param>
/// <returns></returns>
private static BigInteger[] Recursive_NTT_Skeleton(
IList<BigInteger> coefficients,
IList<BigInteger> rootsOfUnity,
int step,
int offset)
{
// Calculate the length of vectors at the current step of recursion.
// -
int n = coefficients.Count / step - offset / step;
if (n == 1)
{
return new BigInteger[] { coefficients[offset] };
}
BigInteger[] results = new BigInteger[n];
IList<BigInteger> resultEvens =
Recursive_NTT_Skeleton(coefficients, rootsOfUnity, step * 2, offset);
IList<BigInteger> resultOdds =
Recursive_NTT_Skeleton(coefficients, rootsOfUnity, step * 2, offset + step);
for (int k = 0; k < n / 2; k++)
{
BigInteger bfly = (rootsOfUnity[k * step] * resultOdds[k]) % NTT_MODULUS;
results[k] = (resultEvens[k] + bfly) % NTT_MODULUS;
results[k + n / 2] = (resultEvens[k] - bfly) % NTT_MODULUS;
}
return results;
}
It worked for complex FFT (replace BigInteger with a complex numeric type (I had my own)). It doesn't work here even though I changed the procedure of finding the primitive roots of unity appropriately.
Supposedly, the problem is this: rootsOfUnity parameter passed originally contained only the first half of m-th complex roots of unity in this order:
omega^0 = 1, omega^1, omega^2, ..., omega^(n/2)
It was enough, because on these three lines of code:
BigInteger bfly = (rootsOfUnity[k * step] * resultOdds[k]) % NTT_MODULUS;
results[k] = (resultEvens[k] + bfly) % NTT_MODULUS;
results[k + n / 2] = (resultEvens[k] - bfly) % NTT_MODULUS;
I originally made use of the fact, that at any level of recursion (for any n and i), the complex root of unity -omega^(i) = omega^(i + n/2).
However, that property obviously doesn't hold in finite fields. But is there any analogue of it which would allow me to still compute only the first half of the roots?
Or should I extend the cycle from n/2 to n and pre-compute all the m-th roots of unity?
Maybe there are other problems with this code?..
Thank you very much in advance!
I recently wanted to implement NTT for fast multiplication instead of DFFT too. I read a lot of confusing things, different letters everywhere and no simple solution, and my finite-fields knowledge is rusty, but today I finally got it right (after 2 days of trying and drawing analogies with the DFT coefficients), so here are my insights for NTT:
Computation
X(i) = sum(j=0..n-1) of ( Wn^(i*j) * x(j) );
where X[] is the NTT-transformed x[] of size n and Wn is the NTT basis. All computations are in integer modular arithmetic mod p; there are no complex numbers anywhere.
Important values
Wn = r ^ L mod p is basis for NTT
Wn = r ^ (p-1-L) mod p is basis for INTT
Rn = n ^ (p-2) mod p is the scaling multiplicative constant for INTT (~1/n)
p is a prime such that p mod n == 1 and p > max'
max is max value of x[i] for NTT or X[i] for INTT
r = <1,p)
L = <1,p) and also divides p-1
r,L must be combined so r^(L*i) mod p == 1 if i=0 or i=n
r,L must be combined so r^(L*i) mod p != 1 if 0 < i < n
max' is the maximum sub-result value and depends on n and the type of computation. For a single (I)NTT it is max' = n*max, but for a convolution of two n-sized vectors it is max' = n*max*max, etc. See Implementing FFT over finite fields for more info about it.
A working combination of r, L, p is different for different n.
This is important: you have to recompute or select the parameters from a table before each NTT layer (n is always half of that of the previous recursion level).
Here is my C++ code that finds the r, L, p parameters. It needs modular arithmetic, which is not included; you can replace the helpers with (a+b)%c, (a-b)%c, (a*b)%c, ..., but in that case beware of overflows, especially for modpow and modmul (a minimal sketch of such helpers is given right after this code). The code is not optimized yet; there are ways to speed it up considerably. Also, the prime table is fairly limited, so either use a Sieve of Eratosthenes or any other algorithm to obtain primes up to max' in order to work safely.
DWORD _arithmetics_primes[]=
{
2,3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,79,83,89,97,101,103,107,109,113,127,131,137,139,149,151,157,163,167,173,
179,181,191,193,197,199,211,223,227,229,233,239,241,251,257,263,269,271,277,281,283,293,307,311,313,317,331,337,347,349,353,359,367,373,379,383,389,397,401,409,
419,421,431,433,439,443,449,457,461,463,467,479,487,491,499,503,509,521,523,541,547,557,563,569,571,577,587,593,599,601,607,613,617,619,631,641,643,647,653,659,
661,673,677,683,691,701,709,719,727,733,739,743,751,757,761,769,773,787,797,809,811,821,823,827,829,839,853,857,859,863,877,881,883,887,907,911,919,929,937,941,
947,953,967,971,977,983,991,997,1009,1013,1019,1021,1031,1033,1039,1049,1051,1061,1063,1069,1087,1091,1093,1097,1103,1109,1117,1123,1129,1151,
0}; // end of table is 0, the more primes are there the bigger numbers and n can be used
// compute NTT consts W=r^L%p for n
int i,j,k,n=16;
long w,W,iW,p,r,L,l,e;
long max=81*n; // edit1: max num for NTT for my multiplication purposes
for (e=1,j=0;e;j++) // find prime p that p%n=1 AND p>max ... 9*9=81
{
p=_arithmetics_primes[j];
if (!p) break;
if ((p>max)&&(p%n==1))
for (r=2;r<p;r++) // check all r
{
for (l=1;l<p;l++)// all l that divide p-1
{
L=(p-1);
if (L%l!=0) continue;
L/=l;
W=modpow(r,L,p);
e=0;
for (w=1,i=0;i<=n;i++,w=modmul(w,W,p))
{
if ((i==0) &&(w!=1)) { e=1; break; }
if ((i==n) &&(w!=1)) { e=1; break; }
if ((i>0)&&(i<n)&&(w==1)) { e=1; break; }
}
if (!e) break;
}
if (!e) break;
}
}
if (e) { error; } // error no combination r,l,p for n found
W=modpow(r, L,p); // Wn for NTT
iW=modpow(r,p-1-L,p); // Wn for INTT
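The modular helpers themselves (modadd, modsub, modmul, modpow) are not included above; a minimal sketch of what they might look like, assuming the moduli stay small enough that 64-bit unsigned intermediates avoid the overflow mentioned earlier:
typedef unsigned long long u64;
// (a + b) mod p, assuming 0 <= a,b < p
long modadd(long a, long b, long p) { long c = a + b; if (c >= p) c -= p; return c; }
// (a - b) mod p, assuming 0 <= a,b < p
long modsub(long a, long b, long p) { long c = a - b; if (c < 0) c += p; return c; }
// (a * b) mod p, widened to 64 bits to avoid overflow for small p
long modmul(long a, long b, long p) { return (long)(((u64)a * (u64)b) % (u64)p); }
// a^e mod p by square-and-multiply
long modpow(long a, long e, long p)
{
    long r = 1; a %= p;
    for (; e > 0; e >>= 1)
    {
        if (e & 1) r = modmul(r, a, p);
        a = modmul(a, a, p);
    }
    return r;
}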
And here are my slow NTT and INTT implementations (I haven't gotten to fast NTT/INTT yet); they are both tested successfully with Schönhage–Strassen multiplication.
//---------------------------------------------------------------------------
void NTT(long *dst,long *src,long n,long m,long w)
{
long i,j,wj,wi,a,n2=n>>1;
for (wj=1,j=0;j<n;j++)
{
a=0;
for (wi=1,i=0;i<n;i++)
{
a=modadd(a,modmul(wi,src[i],m),m);
wi=modmul(wi,wj,m);
}
dst[j]=a;
wj=modmul(wj,w,m);
}
}
//---------------------------------------------------------------------------
void INTT(long *dst,long *src,long n,long m,long w)
{
long i,j,wi=1,wj=1,rN,a,n2=n>>1;
rN=modpow(n,m-2,m);
for (wj=1,j=0;j<n;j++)
{
a=0;
for (wi=1,i=0;i<n;i++)
{
a=modadd(a,modmul(wi,src[i],m),m);
wi=modmul(wi,wj,m);
}
dst[j]=modmul(a,rN,m);
wj=modmul(wj,w,m);
}
}
//---------------------------------------------------------------------------
dst is destination array
src is source array
n is array size
m is modulus (p)
w is basis (Wn)
Hope this helps someone. If I forgot something, please write...
[edit1: fast NTT/INTT]
Finally I managed to get fast NTT/INTT to work. It was a little bit more tricky than normal FFT:
//---------------------------------------------------------------------------
void _NFTT(long *dst,long *src,long n,long m,long w)
{
if (n<=1) { if (n==1) dst[0]=src[0]; return; }
long i,j,a0,a1,n2=n>>1,w2=modmul(w,w,m);
// reorder even,odd
for (i=0,j=0;i<n2;i++,j+=2) dst[i]=src[j];
for ( j=1;i<n ;i++,j+=2) dst[i]=src[j];
// recursion
_NFTT(src ,dst ,n2,m,w2); // even
_NFTT(src+n2,dst+n2,n2,m,w2); // odd
// restore results
for (w2=1,i=0,j=n2;i<n2;i++,j++,w2=modmul(w2,w,m))
{
a0=src[i];
a1=modmul(src[j],w2,m);
dst[i]=modadd(a0,a1,m);
dst[j]=modsub(a0,a1,m);
}
}
//---------------------------------------------------------------------------
void _INFTT(long *dst,long *src,long n,long m,long w)
{
long i,rN;
rN=modpow(n,m-2,m);
_NFTT(dst,src,n,m,w);
for (i=0;i<n;i++) dst[i]=modmul(dst[i],rN,m);
}
//---------------------------------------------------------------------------
[edit3]
I have optimized my code (3x faster than the code above), but I am still not satisfied with it, so I started a new question about it. There I optimized the code even further (about 40x faster than the code above), so it is almost the same speed as a floating-point FFT of the same bit size. The link to it is here:
Modular arithmetics and NTT (finite field DFT) optimizations
To turn the Cooley-Tukey (complex) FFT into a modular arithmetic approach, i.e. NTT, you must replace the complex definition of omega. For the approach to be purely recursive, you also need to recalculate omega for each level based on the current signal size. This is possible because the minimum suitable modulus decreases as we move down the call tree, so the modulus used for the root is suitable for the lower layers. Additionally, as we are using the same modulus, the same generator may be used as we move down the call tree. Also, for the inverse transform, you should take the additional step of taking the recalculated omega a and using b = a ^ -1 as omega instead (via the modular inverse operation). Specifically, b = invMod(a, N) s.t. b * a == 1 (mod N), where N is the chosen prime modulus.
Rewriting an expression involving omega by exploiting periodicity still works in the modular arithmetic realm. You also need a way to determine the modulus (a prime) for the problem and a valid generator.
We note that your code works, though it is not a MWE. We extended it using common sense and got a correct result for a polynomial multiplication application. You just have to provide correct values of omega raised to certain powers.
Your code, though, like that from many other sources, doubles the spacing for each level. This does not lead to recursion that is as clean; it turns out to be identical to recalculating omega based on the current signal size, because the power in the omega definition is inversely proportional to the signal size. To reiterate: halving the signal size is like squaring omega, which is like doubling the powers of omega (which is what one would do when doubling the spacing). The nice thing about the approach that recalculates omega is that each subproblem is more cleanly complete in its own right.
There is a paper that shows some of the math for modular approach; it is a paper by Baktir and Sunar from 2006. See the paper at the end of this post.
You do not need to extend the cycle from n / 2 to n.
So, yes, some sources which say to just drop in a different omega definition for the modular arithmetic approach are sweeping many details under the rug.
Another issue is that the signal size must be large enough that the resulting time-domain signal does not overflow if we are performing convolution. Additionally, it is useful to have a fast implementation of modular exponentiation, even when the power is quite large.
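Since the chosen modulus N is prime, the invMod mentioned above can be computed with Fermat's little theorem instead of the extended Euclidean algorithm; a one-line C++ sketch (reusing a modpow helper like the one in the earlier answer):
// Modular inverse for a prime modulus N: a^(N-2) * a == 1 (mod N) by Fermat's little theorem.
long invMod(long a, long N) { return modpow(a, N - 2, N); }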
References
Baktir and Sunar - Achieving efficient polynomial multiplication in Fermat fields using the fast Fourier transform (2006)
You must make sure that roots of unity actually exist. In R (the real numbers) there are only 2 roots of unity: 1 and -1, since only for them can x^n = 1 be true.
In C you have infinitely many roots of unity: w = exp(2*pi*i/N) is a primitive N-th root of unity, and all w^k for 0 <= k < N are N-th roots of unity.
Now to your problem: you have to make sure the ring you're working in offers the same property: enough roots of unity.
Schönhage and Strassen (http://en.wikipedia.org/wiki/Sch%C3%B6nhage%E2%80%93Strassen_algorithm) use integers modulo 2^N+1. This ring has enough roots of unity: 2^N == -1 is a 2nd root of unity, 2^(N/2) is a 4th root of unity, and so on. Furthermore, these roots of unity have the advantage that they are powers of two and can be implemented as binary shifts (with a modulo operation afterwards, which comes down to an add/subtract).
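A tiny numeric check of that property (my own example, taking N = 8, so the ring is the integers modulo 2^8 + 1 = 257):
#include <cstdio>
int main()
{
    const long p  = 257;                 // 2^N + 1 with N = 8
    const long w2 = 256;                 // 2^N == -1 (mod 257): a 2nd root of unity
    const long w4 = 16;                  // 2^(N/2): a 4th root of unity
    std::printf("%ld\n", (w2 * w2) % p); // 1   -> (-1)^2 == 1
    std::printf("%ld\n", (w4 * w4) % p); // 256 -> 16^2 == -1 (mod 257)
    std::printf("%ld\n", ((w4 * w4) % p) * ((w4 * w4) % p) % p); // 1 -> 16^4 == 1
}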
I think QuickMul (http://www.cs.nyu.edu/exact/doc/qmul.ps) works modulo 2^N-1.