Improper integral calculation using numerical integration - language-agnostic

I'm interested in calculating an improper integral of a function.
In particular, it's a Gaussian integral. Numerical integration makes sense for definite integrals, but how should I deal with improper integrals?
Is there any way to extrapolate the function "around" negative infinity, or should I just drop that part and start the integration from some particular value, since the cumulative contribution near "negative infinity" is almost negligible for the Gaussian integral? Perhaps there are some algorithms that I'm not aware of.

In C++, you can use the Boost.Math library, which includes routines for working with normal distributions (closely related to Gaussian integrals). In particular, the cumulative distribution accessor function can be used to calculate the improper integrals you're interested in.
Working with normal distributions in this way is such a common operation that I'm sure you could find a comparable library function in most other languages. All you need to do is express the integration limit in terms of a Z-score, then take the cumulative distribution using that limit (perhaps subtracting that value from 1.0, depending on whether you're
integrating from -infinity or to +infinity).
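For concreteness, here is a minimal sketch of that approach using Boost.Math's normal distribution (it assumes Boost is installed; the mean, standard deviation, and limit below are placeholder values):

// Sketch: improper Gaussian integrals via the normal CDF (Boost.Math).
#include <boost/math/distributions/normal.hpp>
#include <cstdio>

int main() {
    const double mu = 0.0, sigma = 1.0;
    boost::math::normal_distribution<> dist(mu, sigma);

    double z = 1.5;   // integration limit expressed as a Z-score

    // Integral of the normal density from -infinity to z:
    double lower_tail = boost::math::cdf(dist, z);

    // Integral from z to +infinity (the "1.0 - cdf" case, computed via the
    // complement for better accuracy in the tail):
    double upper_tail = boost::math::cdf(boost::math::complement(dist, z));

    std::printf("P(X <= %.1f) = %.6f\n", z, lower_tail);
    std::printf("P(X >  %.1f) = %.6f\n", z, upper_tail);
    return 0;
}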

Related

Q-learning, how about picking the action that actually gives most reward?

So in Q-learning, you update the Q function by Qnew(s,a) = Q(s,a) + alpha*(r + gamma*max_a' Q(s',a') - Q(s,a)).
Now, if I were to use the same principle but change Q to V function, instead of performing the action based on the current V function, you actually perform all actions (assuming you can reset the simulated environment), and select the best action out of those, and update the V function for that state. Would this yield a better result?
Of course, the training time would probably increase because you actually do all the actions once for each update, but since you are guaranteed to select the best action each time (except when exploring), it would give you a global optimum policy in the end?
This is a bit similar to value iteration, except I don't have, and am not building, a model of the problem.
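For concreteness, the update rule above as a minimal tabular sketch (the table layout and names are illustrative, not taken from the question):

// Tabular Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
#include <vector>
#include <algorithm>

struct QTable {
    int numActions;
    std::vector<double> q;   // flattened [state][action] table, initialised to 0

    QTable(int numStates, int numActions)
        : numActions(numActions), q(numStates * numActions, 0.0) {}

    double& at(int s, int a) { return q[s * numActions + a]; }

    void update(int s, int a, double reward, int sNext, double alpha, double gamma) {
        // max over all actions in the successor state s'
        double maxNext = *std::max_element(
            q.begin() + sNext * numActions,
            q.begin() + (sNext + 1) * numActions);
        at(s, a) += alpha * (reward + gamma * maxNext - at(s, a));
    }
};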
Now, if I were to use the same principle but change Q to V function, instead of performing the action based on the current V function, you actually perform all actions (assuming you can reset the simulated environment), and select the best action out of those, and update the V function for that state. Would this yield a better result?
It is typically assumed in Reinforcement Learning that we do not have the ability to reset the (simulated) environment. Sure, when we're working on simulations it often may technically be possible, but generally we hope that work in RL can also extend to "real-world" problems outside of simulations afterwards, where that would no longer be possible.
If you do have that possibility, it would generally be recommended to look into search algorithms like Monte-Carlo Tree Search, rather than Reinforcement Learning like Sarsa, Q-learning, etc. I suspect your suggestion might work slightly better than Q-learning indeed in this case, but things like MCTS would be even better.
Now, if I were to use the same principle but change Q to V function, instead of performing the action based on the current V function, you actually perform all actions (assuming you can reset the simulated environment), and select the best action out of those, and update the V function for that state. Would this yield a better result?
Given that you don't have access to the model, you have to resort to model-free methods. What you are suggesting is basically a dynamic programming backup. See slides 28-31 in David Silver's lecture notes for various backup strategies to iterate on the value function.
However, note that this is just for prediction (i.e. estimating the value function for a given policy) and not for control (figuring out the best policy). There won't be a max involved in prediction. To do control, you can use the above policy evaluation plus greedy policy improvement to arrive at a "policy iteration based on dynamic programming backup policy evaluation" method.
The other options for model-free control are SARSA [+ greedy policy improvement] (on policy) and Q-learning (off-policy). These are Q-function based methods, though.
If you are just trying to win the game, and not necessarily interested in RL techniques discussed above, then you also have the choice of using purely planning based methods (like Monte Carlo Tree Search). Finally, you can combine planning and learning with methods such as Dyna.

Conceptual confusion about numerical methods

I have a conceptual question about numerical methods. What is the difference among finite element, continuous finite element, discontinuous finite element, continuous Galerkin, and discontinuous Galerkin methods? Are some of them just the same thing?
Thanks in advance
Finite element methods are a subset of numerical methods (which also include finite volume, finite difference, Monte Carlo and a lot more).
Simply put, in finite element methods, one tries to approximate the solution to a problem with a linear combination of pre-defined basis functions. These basis functions can be chosen to be either continuous or discontinuous. The resulting numerical methods are called CG/DG (continuous/discontinuous Galerkin) methods. In a DG method, the basis functions are only piecewise continuous: each basis function is zero everywhere in the domain, except in one element. See also this excellent Wikipedia article, which features some very nice figures.
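As a rough illustration of "a linear combination of pre-defined basis functions" (a made-up 1D example, not from the answer): continuous "hat" basis functions on a uniform mesh give a CG approximation that is continuous but has kinks in its derivative at element boundaries.

// 1D CG sketch: u_h(x) = sum_i coeff_i * phi_i(x) with continuous hat functions.
#include <vector>
#include <cmath>
#include <cstdio>

// Hat function attached to node i of a uniform mesh with n nodes on [0,1].
double hat(int i, int n, double x) {
    double h = 1.0 / (n - 1);
    double d = std::fabs(x - i * h);
    return d < h ? 1.0 - d / h : 0.0;   // nonzero only on the two elements touching node i
}

int main() {
    const int n = 5;
    const double pi = 3.14159265358979;
    std::vector<double> coeff(n);
    for (int i = 0; i < n; ++i)
        coeff[i] = std::sin(pi * i / (n - 1.0));   // nodal values of some target function

    // The interpolant is continuous everywhere, but its slope jumps at the nodes.
    for (double x = 0.0; x <= 1.0001; x += 0.125) {
        double u = 0.0;
        for (int i = 0; i < n; ++i) u += coeff[i] * hat(i, n, x);
        std::printf("u_h(%.3f) = %.4f\n", x, u);
    }
    return 0;
}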
Discontinuous Galerkin methods were originally popularised in the field of particle transport long ago, but they have recently gained ground in other fields as well. (This is mostly because it wasn't clear at first how discontinuous basis functions would work well in equations that involve diffusion, but that problem has been solved now.)
Just a small pedantic correction to A. Hennink's answer: in DG methods, the state variable is not continuous between elements. There is a non-physical jump in the state variable across element boundaries, hence the 'discontinuous' part of the name. This can be visualized in the following figure displaying the (dis)continuous basis functions of CG and DG:
In CG methods, the basis functions are piecewise continuous, meaning the state variable itself is continuous, but its derivative may not be (i.e. there may be a discontinuity in the derivative of the state variable between basis functions). This means that the solution to whatever problem you're solving can have non-physical kinks. Note the 'kinks' in the following solution:
Some basis functions are not discontinuous in the derivative, though (see Hermite basis functions). See how the following Hermite basis function has a derivative of zero at both end points. If all basis functions are composed using Hermite polynomials, then the derivative is continuous between basis functions because it's zero at the boundaries. Below, Psi_1 is the derivative of Psi_0, and Psi_3 is the derivative of Psi_2:

Resolve system of equations with 10th degree polynomial, LSM

I have a numerical problem with solving a system of equations (a 10th-degree polynomial fit) using the ordinary least squares method (LSM). I obtain parameters with both huge and very small values, and therefore I can't invert the matrix constructed in this method - the precision is too low, even with extended-precision variables. I have tried this in C++, Matlab, and Delphi.
Does anybody know of tools with which I could do this with enough accuracy, or numerical tips to get good results? The standard matrix calculation is unfortunately not good enough.
I think that your problem comes from the fact that you are using 10th order polynomials, which quite often lead to numerical problems:
First of all, they can be unsuitable because of large oscillations. Even when interpolating a simple function, these oscillations can be present; see Runge's famous example.
Secondly, fitting high-order polynomials can lead to ill-conditioned linear systems, which is why you could not invert the matrix (which you should not do anyway). I made a simple experiment: I took 11 equidistant points (on the interval [0,1]) and assembled the matrix of the linear system to solve. Matlab gives me a condition number of about 1e8, so the least-squares (normal equations) matrix has a condition number of about 1e16. So your matrix is 'close to singular', and this means that all the numerical precision is lost.
So, the best way to get rid of your problem is to get rid of the 10th order polynomial. You should maybe consider lower order polynomials, splines or piecewise polynomial approximations.
If you really need 10th order polynomials (e.g. if you know that your data have been generated by such a polynomial), then do not invert the matrix. Use a good preconditioner and an iterative method to solve the system without inverting the matrix.
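A small sketch of the conditioning experiment, and of solving the least squares problem without explicitly inverting anything (this uses the Eigen library and a QR factorization as one possible alternative to matrix inversion; it is illustrative, not the only option):

// Degree-10 fit on 11 equidistant points in [0,1]: the Vandermonde matrix is
// badly conditioned, and squaring it (normal equations) makes it far worse.
#include <Eigen/Dense>
#include <cmath>
#include <cstdio>

int main() {
    const int n = 11;
    Eigen::MatrixXd V(n, n);
    for (int i = 0; i < n; ++i) {
        double x = double(i) / (n - 1);
        for (int j = 0; j < n; ++j)
            V(i, j) = std::pow(x, j);               // V(i,j) = x_i^j
    }

    // Condition number from the singular values.
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(V);
    const Eigen::VectorXd& s = svd.singularValues();
    double condV = s(0) / s(s.size() - 1);
    std::printf("cond(V)     ~ %.2e\n", condV);          // on the order of 1e8
    std::printf("cond(V^T V) ~ %.2e\n", condV * condV);  // on the order of 1e16

    // Solve the least squares system with an orthogonal (QR) factorization
    // instead of forming and inverting V^T V.
    Eigen::VectorXd y = Eigen::VectorXd::Random(n);      // placeholder data
    Eigen::VectorXd coeffs = V.colPivHouseholderQr().solve(y);
    (void)coeffs;
    return 0;
}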

Which hash functions to use in a Bloom filter

I've got the following question about choosing hash functions for Bloom filters:
Which functions to use?
In nearly every document/paper you can read that the hash functions used in a Bloom filter should be independent and uniformly distributed.
I know what is meant by this (independent and uniformly distributed), but I'm having trouble finding an argument or a discussion of which hash functions fulfill those requirements and are therefore suitable. In a lot of posts I've read suggestions to use the FNV or Murmur hash functions, but not why (or at least not with a proof that) they are suitable.
Thanks in advance!
I asked myself the same question when building a Java Bloom filter library. See the Github readme for a detailed treatment of my analysis of hash functions for Bloom filters.
I looked at the problem from two perspectives:
How fast is the computation?
How uniform is the output distribution?
Speed can easily be measured by benchmarks on random input. Uniformity is a bit harder and requires some statistics. Using Chi-Square goodness of fit tests I measured how similar the distribution of hash values is to a uniform distribution.
The result is:
Use Murmur3 for the best trade-off between speed and uniformity. Do not use Murmur2 as it is not uniform for inputs that change in small increments.
Use a cryptographic hash function like SHA-256 for the best uniformity.
Apply the Kirsch-Mitzenmacher optimization to compute only 2 instead of k hash functions (hash_i = hash1 + i * hash2).
If your implementation is using Java, I would recommend using our Bloom filter hash library. It is well documented and thoroughly tested. For the details, including the benchmark results for different hash functions and their uniformity according to the Chi-Square test, see the Github readme of the repo.
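To make the Kirsch-Mitzenmacher point concrete, here is a minimal C++ sketch (the class and its hash choice are illustrative; std::hash stands in for Murmur3 or SHA-256):

// Bloom filter deriving k probe positions from two base hashes:
// index_i = (h1 + i * h2) mod m  (Kirsch-Mitzenmacher optimization).
#include <vector>
#include <string>
#include <functional>
#include <utility>
#include <cstdint>

class BloomFilter {
public:
    BloomFilter(std::size_t bits, std::size_t k) : bits_(bits, false), k_(k) {}

    void add(const std::string& item) {
        auto [h1, h2] = baseHashes(item);
        for (std::size_t i = 0; i < k_; ++i)
            bits_[(h1 + i * h2) % bits_.size()] = true;
    }

    bool possiblyContains(const std::string& item) const {
        auto [h1, h2] = baseHashes(item);
        for (std::size_t i = 0; i < k_; ++i)
            if (!bits_[(h1 + i * h2) % bits_.size()]) return false;
        return true;   // "true" may still be a false positive
    }

private:
    static std::pair<std::uint64_t, std::uint64_t> baseHashes(const std::string& s) {
        std::uint64_t h1 = std::hash<std::string>{}(s);
        std::uint64_t h2 = std::hash<std::string>{}(s + "\x01");  // crude second hash
        return {h1, h2 | 1};   // make h2 odd so successive probes differ
    }

    std::vector<bool> bits_;
    std::size_t k_;
};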
The page "Hash Functions" should provide you with a graphical demonstration of why FNV would be a bad choice, and why Murmur2 or one of Bob Jenkins' hashes would be a good choice.
I think a reasonable option would be multiple CRC hashes. I'm assuming that, if you want multiple n-bit hash values, then there are multiple prime (irreducible) polynomials of degree n+1 with Boolean (GF(2)) field coefficients. But I don't know of a process for finding these polynomials.
Another possibility would be to use multiple modulo hashes. The size of the Bloom filter bit array would have to be the maximum modulus value. But I think that, for it to work well, the modulus values would have to be products of primes greater than 10, and relatively prime to each other. And the range from the minimum to the maximum modulus value would have to be as small as possible. I don't know of a way to find such values. I have written some open-source C++ code for quick calculation of remainders: https://github.com/wkaras/C-plus-plus-intrusive-container-templates/blob/master/modulus_hash.h

High-level/semantic optimization

I'm writing a compiler, and I'm looking for resources on optimization. I'm compiling to machine code, so anything at runtime is out of the question.
What I've been looking for lately is less code optimization and more semantic/high-level optimization. For example:
free(malloc(400)); // should be completely optimized away
Even if these functions were completely inlined, they could eventually call OS memory functions which can never be inlined. I'd love to be able to eliminate that statement completely without building special-case rules into the compiler (after all, malloc is just another function).
Another example:
string Parenthesize(string str) {
    StringBuilder b; // similar to C#'s class of the same name
    foreach(str : ["(", str, ")"])
        b.Append(str);
    return b.Render();
}
In this situation I'd love to be able to initialize b's capacity to str.Length + 2 (enough to exactly hold the result, without wasting memory).
To be completely honest, I have no idea where to begin in tackling this problem, so I was hoping for somewhere to get started. Has there been any work done in similar areas? Are there any compilers that have implemented anything like this in a general sense?
To do an optimization across 2 or more operations, you have to understand the
algebraic relationship of those two operations. If you view operations
in their problem domain, they often have such relationships.
Your free(malloc(400)) is possible because free and malloc are inverses
in the storage allocation domain.
Lots of operations have inverses and teaching the compiler that they are inverses,
and demonstrating that the results of one dataflow unconditionally into the other,
is what is needed. You have to make sure that your inverses really are inverses
and there isn't a surprise somewhere; a/x*x looks like just the value a,
but if x is zero you get a trap. If you don't care about the trap, it is an inverse;
if you do care about the trap then the optimization is more complex:
(if (x==0) then trap() else a)
which is still a good optimization if you think divide is expensive.
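To make the free(malloc(...)) case concrete, here is a toy sketch of such a rewrite rule (the AST types and names are invented for illustration, not taken from any particular compiler):

// A statement of the form free(malloc(N)) can be replaced by a no-op,
// because free is the inverse of malloc and the allocated pointer flows
// unconditionally (and only) into the free call.
#include <memory>
#include <string>
#include <vector>

struct Expr {
    std::string callee;                        // e.g. "malloc", "free"
    std::vector<std::unique_ptr<Expr>> args;   // call arguments
};

bool isRemovableFreeOfMalloc(const Expr& stmt) {
    return stmt.callee == "free"
        && stmt.args.size() == 1
        && stmt.args[0]->callee == "malloc"
        && stmt.args[0]->args.size() == 1;     // free(malloc(N)): delete the statement
}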
Other "algebraic" relationships are possible. For instance, there are
many idempotent operations: zeroing a variable (setting anything to the same
value repeatedly), etc. There are operations where one operand acts
value repeatedly), etc. There are operations where one operand acts
like an identity element; X+0 ==> X for any X. If X and 0 are matrices,
this is still true and a big time savings.
Other optimizations can occur when you can reason abstractly about what the code
is doing. "Abstract interpretation" is a set of techniques for reasoning about
values by classifying results into various interesting bins (e.g., this integer
is unknown, zero, negative, or positive). To do this you need to decide what
bins are helpful, and then compute the abstract value at each point. This is useful
when there are tests on categories (e.g., "if (x<0) { ... ") and you know
abstractly that x is less than zero; you can then optimize away the conditional.
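As a tiny sketch of those "bins" (illustrative only): track each integer as negative, zero, positive, or unknown, propagate the bins through operations, and fold branches whose condition the bins already decide.

// Sign analysis: an abstract value for integers and an abstract "+".
enum class Sign { Negative, Zero, Positive, Unknown };

Sign addSign(Sign a, Sign b) {
    if (a == Sign::Zero) return b;     // 0 + x has the sign of x
    if (b == Sign::Zero) return a;     // x + 0 has the sign of x
    if (a == b) return a;              // neg+neg stays neg, pos+pos stays pos
    return Sign::Unknown;              // mixed or unknown operands: give up
}

// "if (x < 0)" can be removed as dead whenever the analysis proves x >= 0.
bool lessThanZeroIsDead(Sign x) {
    return x == Sign::Zero || x == Sign::Positive;
}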
Another way is to define what a computation is doing symbolically, and simulate the computation to see the outcome. That is how you computed the effective size of the required buffer; you computed the buffer size symbolically before the loop started,
and simulated the effect of executing the loop for all iterations.
For this you need to be able to construct symbolic formulas
representing program properties, compose such formulas, and often simplify
such formulas when they get unusably complex (this kind of fades into the abstract
interpretation scheme). You also want such symbolic computation to take into
account the algebraic properties I described above. Tools that do this well are good at constructing formulas, and program transformation systems are often good foundations for this. One source-to-source program transformation system that can be used to do this
is the DMS Software Reengineering Toolkit.
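Applied to the question's Parenthesize example, the outcome of that symbolic simulation would be something like the following C++ analogue (illustrative only), where the capacity str.size() + 2 is known before any appending happens:

// Optimized result: reserve exactly enough space up front, no reallocation.
#include <string>

std::string Parenthesize(const std::string& str) {
    std::string b;
    b.reserve(str.size() + 2);   // capacity derived symbolically before the loop
    b += "(";
    b += str;
    b += ")";
    return b;
}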
What's hard is to decide which optimizations are worth doing, because you can end
up keeping track of vast amounts of stuff, which may not pay off. Computer cycles
are getting cheaper, and so it makes sense to track more properties of the code in the compiler.
The Broadway framework might be in the vein of what you're looking for. Papers on "source-to-source transformation" will probably also be enlightening.