Why do I get “estimated error” -1.#IND when using the BiCGSTAB linear solver with an ILUT preconditioner in Eigen - numerical-methods

I want to ask a question: when I use Eigen (a C++ library for numerical linear computing) to solve a linear system, I use the bi-conjugate gradient stabilized method (BiCGSTAB) with an incomplete LU (ILUT) preconditioner. However, the result of bg.error() ("estimated error") becomes -1.#IND after 1 iteration. The size of the linear system is 20000. Could anyone help me with this problem? I also tried other, smaller sizes, e.g. 200, and those are OK.

Related

Gradient Ascent vs Gradient Descent

I'm a programmer who has just recently started looking into machine and deep learning.
What exactly is the difference between the uses of gradient ascent and gradient descent? Why would we want to maximize a loss instead of minimizing it? More specifically, I'm curious about its usage for convolutional networks.
The difference is a sign: gradient ascent means changing the parameters along the gradient of the function (so as to increase its value), and gradient descent means changing them against the gradient (so as to decrease it).
You almost never want to increase the loss (apart from say some form of gamified system, e.g. a GAN). But if you frame your problem as maximisation of probability of correct answer then you want to utilise gradient ascent. It is always a dual thing, for every problem expressed as gradient ascent of something you can think about it as gradient descent of minus this function, and vice versa.
theta_t + grad(f)[theta_t]  =  theta_t - grad(-f)[theta_t]
  (gradient ascent on f)         (gradient descent on -f)
In other words, there is absolutely no difference in the usage of these two methods; they are equivalent. The reason people use one or the other is just whichever helps explain the method in the most natural terms. It is more natural to say "I am going to decrease the cost" or "I am going to maximise the probability" than to say "I am going to decrease minus the cost" or "I am going to minimise 1 minus the probability".
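As a concrete check of this duality, here is a tiny sketch with a made-up quadratic objective (the function, learning rate, and starting point are all hypothetical): one gradient-ascent step on f gives exactly the same update as one gradient-descent step on -f.
# Minimal sketch with a hypothetical objective f(theta) = -(theta - 3)**2,
# which is maximized at theta = 3.
def f_grad(theta):
    return -2 * (theta - 3)    # gradient of f

lr = 0.5
theta = 0.0

ascent_step = theta + lr * f_grad(theta)        # gradient ascent on f
descent_step = theta - lr * (-f_grad(theta))    # gradient descent on -f
print(ascent_step, descent_step)                # 3.0 3.0 -- identical updates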

What does a.sub_(lr*a.grad) actually do?

I am doing the fast.ai course on SGD and I cannot understand the following.
This subtracts (learning rate * gradient) from the coefficients...
But why is it necessary to subtract?
here is the code:
def update():
    y_hat = x @ a                  # predictions from the current parameters a
    loss = mse(y_hat, y)
    if t % 10 == 0: print(loss)
    loss.backward()                # compute the gradient of the loss w.r.t. a
    with torch.no_grad():
        a.sub_(lr * a.grad)        # in-place update: a = a - lr * a.grad
Picture the loss function J plotted against the parameter W. This is a simplified representation with W as the only parameter; for a convex loss function, the curve is U-shaped with a single minimum.
Note that the learning rate is positive. To the left of the minimum, the gradient (the slope of the line tangent to the curve at that point) is negative, so the product of the learning rate and the gradient is negative. Thus, subtracting the product from W actually increases W (since two negatives make a positive). This is good, because the loss decreases.
To the right of the minimum, the gradient is positive, so the product of the learning rate and the gradient is positive. Thus, subtracting the product from W reduces W. This is also good, because again the loss decreases.
The same argument extends to more parameters (the graph becomes higher-dimensional and hard to visualize, which is why we took a single parameter W to begin with) and to other loss functions (even non-convex ones, though then it won't always converge to the global minimum, only to a nearby local minimum).
Note: this explanation can be found in Andrew Ng's deeplearning.ai courses, but I couldn't find a direct link, so I wrote this answer.
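To make the sign argument above concrete, here is a minimal numeric sketch, assuming a made-up one-parameter loss J(W) = W**2 with its minimum at W = 0:
# Hypothetical one-parameter convex loss J(W) = W**2 (minimum at W = 0).
def grad(W):
    return 2 * W           # dJ/dW

lr = 0.25

W = -3.0                   # left of the minimum: the gradient is negative
W = W - lr * grad(W)       # subtracting a negative product increases W
print(W)                   # -1.5, moved toward 0

W = 3.0                    # right of the minimum: the gradient is positive
W = W - lr * grad(W)       # subtracting a positive product decreases W
print(W)                   # 1.5, moved toward 0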
I'm assuming a represents your model parameters, based on y_hat = x @ a. This is necessary because the stochastic gradient descent algorithm aims to find a minimum of the loss function. Therefore, you take the gradient w.r.t. your model parameters and update them a little in the direction of the negative gradient.
Think of the analogy of sliding down a hill: if the landscape represents your loss, the gradient points uphill, so the negative gradient is the direction of steepest descent. To get to the bottom (i.e. minimize the loss), you take little steps in that direction of steepest descent from where you're standing.

CSS Box Shadow-Animated Pixel Art Flickering

Partly for funsies, and partly for a design idea I had, I'm trying to convert an animated gif into pure animated CSS.
It's very nearly working but I've hit a snag and am unsure what is causing my issue, or how I could fix it. I have an unfortunate suspicion that I've simply hit a limitation of the technology.
The gif I've been using for testing is this: https://us.v-cdn.net/5018289/uploads/editor/yj/lcdjneh1yoxv.gif
As for the actual CSS, I've been trying to implement the method here (animated box-shadow properties), as it seemed like the most feasible: https://codepen.io/andrewarchi/pen/OXEEgL
#ash::after {
animation: ash-frames 0.4s steps(1) infinite;
}
@keyframes ash-frames {
0% {box-shadow: 32px 8px #181818, 40px 8px #181818,...}
...
}
The animation seems fairly seamless in the given example, so I figured it was worth a try. Obvious differences: the gif I'm using has more frames and more pixels.
And just as a quick overview, my CSS (I am using vendor tags etc, this is just an example):
.pixel-art-3940::after {
animation: pixel-art-3940-frames 1s steps(5, end) infinite;
}
@keyframes pixel-art-3940-frames {
0% {box-shadow: 112px 68px rgba(77, 69, 64, 1),...}
16.666666666666668% {box-shadow:115px 65px rgba(77, 69, 64, 1),...}
...
}
The animation does actually seem to work; however, there is an intense 'flickering' effect on it.
I've tried the usual solutions to 'flickering transitions' in Chrome - such as setting -webkit-backface-visibility to hidden - but so far nothing has solved the issue.
As I said, I fear I've simply hit a limitation of the technology itself. Any ideas what the problem might be, and whether I can solve it?
EDIT: The full source code of this particular animation can be found in these two Gists. I opted for Gists because of the size of the CSS file.
HTML: https://gist.githubusercontent.com/ChrGriffin/2f1f221143e24d3e39cad8e7369bc167/raw/16ea77d21aa79cf9da52fc3477a6773af41130f2/image.html
CSS: https://gist.githubusercontent.com/ChrGriffin/7dcff0f119532ff37f68c01a8a22ecb5/raw/3e49d3dd0b7fa93aef6708750770d2616c53f682/image.css
Correct answer
In the end, it was all down to the animation-timing-function. The first parameter of the steps() function is not the number of keyframes (or the number of steps in the loop) but the number of steps rendered in between keyframes.
So changing it to steps(1, end) fixes it, as the browser no longer has to calculate intermediary frames (where it obviously fails due to the large number of box-shadow values - there's basically one value for each pixel - wicked technique, btw).
See it working: https://jsfiddle.net/websiter/wnrxmapu/2/
Previous answer (partially incorrect; it led to the correct one above. I left it here as it might prove helpful to others debugging similar animations):
Initially I thought your exporting tool was... simply wrong.
Why? Because increasing animation-duration from 1s to 100s produced this result.
The apparent conclusion is that your intermediary frames are bugged.
However, I tested each of them individually and, to my surprise, they rendered correctly.
Which leads to the conclusion that the number of box-shadow calculations per keyframe is limited and some sort of clustering algorithm is performed.
Which makes sense, since we're talking box-shadow here, which, in 99.999999999% of cases (basically all except yours) does not have to be accurate. It should be (and obviously is) approximated favoring rendering speed, for obvious reasons: we're talking user experience and "feel". Most users are non-technical and they simply expect smooth scrolling under any and all conditions.
I've come to the conclusion that there must be a limit on the number of allowed calculations per keyframe after trying my best at optimizing your code, reducing it to less than half its initial size: https://jsfiddle.net/websiter/wnrxmapu/1/
I wasn't able to find any material on pixel clustering techniques for box-shadow and I don't think much is available online - this should be classified information.
However, IMHO, other than bragging rights, I don't think your technique stands a chance in terms of rendering performance when compared to a gif or svg. Emphasis on "IMHO". If you insist on getting this done, you might want to slice the image up and check whether the limit on allowed calculations is per element or per page.
But I wouldn't keep my hopes too high. It is optimizations like the one your code has revealed that make CSS lightning fast. If it had to be accurate it wouldn't be so fast.

Why do negative frequencies of a discrete Fourier transform appear where high frequencies are supposed to be?

I am very new to this topic; please help me understand why this happens...
If I take the DFT of a cosine wave
cos(w*x) = 0.5*(exp(i*w*x) + exp(-i*w*x))
I expect to get one peak at the frequency w, and the negative one at -w should not be visible, but it appears at the far end of the spectrum, where higher frequencies are supposed to be...
Why? Do high frequencies produce the same effect as negative ones?
If you imagine a wave whose phase advances by 3*pi/2 per sample, i.e. (0, 3pi/2, 6pi/2, 9pi/2), it looks just like a wave with the negative frequency -pi/2, i.e. (0, -pi/2, -pi, -3pi/2).
Is that the reason for what is happening? Please help!
High frequencies between the Nyquist frequency (Fs/2) and the sample rate (Fs) alias to negative frequencies, because they lie above the Nyquist frequency. And vice versa: negative frequencies alias to the top half of an FFT. "Alias" means you can't tell those frequencies apart at that sample rate, because sampling sinewaves at either of those frequencies at that sample rate can produce absolutely identical sets of samples.
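Here is a small NumPy sketch of this (the signal length and frequency are made up for illustration): the DFT of a real cosine at bin k has peaks at bins k and N - k, and a cosine at bin N - k produces exactly the same samples as one at bin k.
import numpy as np

# Minimal sketch (made-up sizes): the DFT of a real cosine at bin k has peaks
# at bins k and N - k, because the -k component aliases to N - k.
N = 64                     # number of samples
k = 5                      # cosine frequency, in DFT bins
n = np.arange(N)
x = np.cos(2 * np.pi * k * n / N)

X = np.fft.fft(x)
print(np.flatnonzero(np.abs(X) > 1e-6))     # [ 5 59]  and 59 == N - 5

# Aliasing: a cosine at bin N - k (a "high" frequency above Fs/2) produces
# exactly the same samples as the cosine at bin k, so the two are
# indistinguishable at this sample rate.
x_high = np.cos(2 * np.pi * (N - k) * n / N)
print(np.allclose(x, x_high))               # True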

Smooth transition with TweenLite doesn't work

I have a vector circle and want to do the following:
I'd like to animate it with TweenLite so it looks like the shockwave of an explosion. At first it fades in (alpha from 0 to .5) while being scaled up, and when it reaches half of the total animation time it fades out while still being scaled up (hope you know what I mean).
Currently it looks horrible, because I don't know how to get a smooth transition from part 1 to part 2 with TweenLite. My animation stops when it reaches the end of part 1 and then suddenly jumps into part 2.
Can someone help me out with this problem please? Thank you very much.:)
Total time of animation: .75 sec
total amount of scaling: 5
part 1 of 2:
TweenLite.to(blastwave, .375, {alpha:.5, transformAroundCenter:{scale:2.23},
    onComplete:blastScaleFadeOut, onCompleteParams:[blastwave]});
part 2 of 2:
private function blastScaleFadeOut(object:DisplayObject, time:Number = .375, scaleVal:Number = 4.46) {
    TweenLite.to(object, time, {alpha:0, transformAroundCenter:{scale:scaleVal},
        onComplete:backgroundSprite.removeChild, onCompleteParams:[object]});
}
TweenLite applies an easing function to the tween to control its rate of change, and the default ease is Quad.easeOut (the rate of change decelerates towards the end of the tween, following a quadratic function). This is why you see such a sudden change (or "jump"): at the "joining point" of the two tweens (or parts, as you call them), the rate of change is quite different. The first tween is at its slowest because it is reaching its end, while the second is at its fastest because it is just beginning.
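To see the mismatch in numbers, here is a rough sketch (Python just for the arithmetic; the quadratic ease-out formula below is an assumption about what a default ease-out does, not TweenLite's actual code):
# Rough numeric sketch of the rate mismatch at the joining point.
# Assumption: the ease follows the standard quadratic ease-out,
# f(t) = 1 - (1 - t)**2, with t the normalized tween time in [0, 1].
def quad_ease_out(t):
    return 1 - (1 - t) ** 2

dt = 0.001
end_rate = (quad_ease_out(1.0) - quad_ease_out(1.0 - dt)) / dt   # end of part 1
start_rate = (quad_ease_out(dt) - quad_ease_out(0.0)) / dt       # start of part 2

print(end_rate)     # ~0.001 -> part 1 has almost stopped
print(start_rate)   # ~2     -> part 2 starts at nearly its fastest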
I'm afraid the only way to really ensure there is no "jump" (i.e. the rate is the same at the end of part 1 and the beginning of part 2) is to use a Linear.easeNone ease, which means no easing or acceleration (constant speed for the duration of the tween):
TweenLite.to(blastwave, .375,
    {ease:Linear.easeNone, alpha:.5, transformAroundCenter:{scale:2.23},
    onComplete:blastScaleFadeOut, onCompleteParams:[blastwave]});
//part 2 (put inside function)
TweenLite.to(object, time, {ease:Linear.easeNone, alpha:0, transformAroundCenter:{scale:scaleVal},
    onComplete:backgroundSprite.removeChild, onCompleteParams:[object]});
But I'd recommend playing around with the easing functions and their parameters to find a combination that suits your needs (Linear.easeNone is a little bit boring).
Hope this helps!
You should rethink how you manage those tweens: put the scaling of the sprite into one tween and then divide the alpha transitions into two separate tweens. By default this won't work in TweenLite, because of its tween-overwriting policies. However, you can change this either by using TimelineLite to chain your tweens, or by adding the overwrite:OverwriteManager.NONE property to your tweens. These two solutions work:
Using TimelineLite to chain your tweens:
var timeline:TimelineLite = new TimelineLite();
timeline.insert(TweenLite.to(blastwave, 0.7, {scaleX: 7, scaleY: 7}));
timeline.insert(TweenLite.to(blastwave, 0.35, {alpha: 0.5}));
timeline.insert(TweenLite.to(blastwave, 0.35, {alpha: 0, delay: 0.35}));
Or just change the tween overwriting policy:
TweenLite.to(s, 0.75, {scaleX: 7, scaleY: 7, overwrite:OverwriteManager.NONE});
TweenLite.to(s, 0.35, {alpha: 1, overwrite:OverwriteManager.NONE});
TweenLite.to(s, 0.35, {alpha: 0, delay: 0.35, overwrite:OverwriteManager.NONE});
About TweenLite's OverwriteManager and tween overwriting.
On a side note, if you scale your explosion for 0.7 seconds, make the whole alpha in, alpha out process shorter, like 0.55 seconds. Looks better that way. :]