GSL numerical integration: increase sampling around a list of points?

I want to integrate a function on a finite interval. I know the positions of "interesting" peaks of this function, and I know that the function is almost zero except in the vicinity of these peaks. The peaks are not singularities, in the sense that the function is smooth all around. However, the peaks are very sharp.
I know that numerical integration routines in GSL admit a list of "singularities". However, I am wondering if this is what I need to use, since as I mentioned, the function is smooth. So how do I specify a list of peaks such that the sampling is denser in the vicinity of the peaks?
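For reference, this is exactly what GSL's QAGP routine (gsl_integration_qagp) is for: the pts array holds the integration limits plus the interior points you know to be difficult, and the adaptive subdivision then concentrates samples around them whether or not they are true singularities. A minimal C sketch; the integrand, peak locations, and tolerances below are made up for illustration:

    #include <stdio.h>
    #include <gsl/gsl_integration.h>

    /* Hypothetical integrand: two sharp (but smooth) Lorentzian peaks. */
    static double f(double x, void *params)
    {
        (void) params;
        double w = 1e-4;  /* peak width */
        return w / (w*w + (x - 0.3)*(x - 0.3))
             + w / (w*w + (x - 0.7)*(x - 0.7));
    }

    int main(void)
    {
        gsl_integration_workspace *ws = gsl_integration_workspace_alloc(1000);
        gsl_function F = { &f, NULL };

        /* pts must be sorted and must include the endpoints of the
           integration range; the interior entries mark the known peaks. */
        double pts[] = { 0.0, 0.3, 0.7, 1.0 };
        double result, abserr;

        gsl_integration_qagp(&F, pts, 4, 0.0, 1e-10, 1000, ws,
                             &result, &abserr);
        printf("result = %.12g +/- %.3g\n", result, abserr);

        gsl_integration_workspace_free(ws);
        return 0;
    }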

Related

Web Audio Pitch Detection for Tuner

So I have been making a simple HTML5 tuner using the Web Audio API. I have it all set up to respond to the correct frequencies; the problem seems to be with getting the actual frequencies. Using the input, I create an array of the spectrum, where I look for the highest value and use that frequency as the one to feed into the tuner. The problem is that when creating an analyser in Web Audio, it cannot be made more fine-grained than an FFT size of 2048. With that, if I play a 440 Hz note, the closest bin in the array is something like 430 Hz, and the next one up seems to be higher than 440 Hz. Therefore the tuner thinks I am playing those notes, when in fact the loudest frequency should be 440 Hz and not 430 Hz. Since this frequency does not exist in the analyser array, I am trying to figure out a way around this, or whether I am missing something very obvious.
I am very new at this so any help would be very appreciated.
Thanks
There are a number of approaches to implementing pitch detection. This paper provides a review of them. Their conclusion is that using FFTs may not be the best way to go - however, it's unclear quite what their FFT-based algorithm actually did.
If you're simply tuning guitar strings to fixed frequencies, much simpler approaches exist. Building a fully chromatic tuner that does not know a priori which frequency to expect is hard.
The FFT approach you're using is entirely possible (I've built a robust musical instrument tuner on this approach that is used white-label by a number of third parties). However, you need a significant amount of post-processing of the FFT data.
To start, you solve the resolution problem using the Short-Time Fourier Transform (STFT) - or, more precisely, a succession of them. The process is described nicely in this article.
If you intend building a tuner for guitar and bass guitar (and let's face it, everyone who asks the question here is), you'll need at least a 4096-point DFT with overlapping windows to get usable frequency resolution down at the bottom E1 string at ~41 Hz.
You have a bunch of other algorithmic and usability hurdles to overcome. Not least, perceived pitch and the spectral peak aren't always the same. Taking the spectral peak from the STFT doesn't work reliably (this is why the basic auto-correlation approach is broken too).
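One concrete, cheap fix for the 430-vs-440 Hz problem in the question (independent of the full STFT machinery) is parabolic interpolation of the spectral peak: fit a parabola through the peak bin and its two neighbours and read off the vertex. A minimal C sketch, assuming you already have a magnitude spectrum (log magnitudes give a slightly better fit):

    #include <stddef.h>

    /* Refine a spectral peak by fitting a parabola through the peak bin
       and its two neighbours. mag[] is the magnitude spectrum, peak the
       index of the largest bin (caller ensures 0 < peak < n/2), n the
       FFT size, fs the sample rate. Returns the frequency in Hz. */
    double refine_peak_hz(const double *mag, size_t peak, size_t n, double fs)
    {
        double a = mag[peak - 1], b = mag[peak], c = mag[peak + 1];
        double denom = a - 2.0*b + c;
        /* Vertex offset from the bin centre, in bins (range -0.5..0.5). */
        double d = (denom != 0.0) ? 0.5 * (a - c) / denom : 0.0;
        return ((double) peak + d) * fs / (double) n;
    }

With a 2048-point analyser at 44.1 kHz the raw bins are ~21.5 Hz apart; the interpolated estimate is typically good to a fraction of a bin, which is enough to separate 430 Hz from 440 Hz.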

Using a 10-node tetrahedron, is strain continuous between neighbouring tetrahedra?

I'm trying to implement a Finite Element Analysis algorithm. I solve K u = f to get the displacement u, calculate the strain from u, and then calculate the stress. Finally, I use the stress to compute the von Mises stress and visualize that. From the result I find the strain is not continuous between tetrahedra.
I use the 10-node tetrahedron as the element, so the displacement is a second-order polynomial within every element. The displacement is enforced to be continuous between tetrahedra, and the strain, being the first-order derivatives of the displacement, should be continuous inside every tetrahedron. But I'm not sure: is this true across the interface between tetrahedra?
Only the components of strain tangent to the adjoining face are guaranteed continuous.
This follows from displacement continuity: taking derivatives in directions tangent to the interface gives the same result on both sides, so only the components involving the normal direction can jump.
Commercial FEM programs typically do some post-process averaging to make the other components look continuous. Note that the strain components normal to an element boundary are only expected to be continuous if the underlying constitutive model is continuous, so such averaging is not always appropriate.
You should not compute the stress and strain at the nodes but inside the elements. You can, for example, choose 4 Gauss points and compute the values there. You then have to think about a scheme for getting the values computed at the Gauss points onto the tet nodes.
There is a Mathematica application example which illustrates this. Unfortunately the web page is no longer available, but the notebooks are here. You'll find the example in the application example section under Finite Element Method, Structural Mechanics 3D (in the old HelpBrowser). If you have difficulties I could convert it to PDF and send it to you.
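To illustrate the averaging step both answers mention, here is a minimal C sketch with a made-up data layout: each element has already extrapolated a scalar (say, von Mises stress) from its Gauss points to its ten nodes, and the value reported at each node is the average over all elements sharing it. As noted above, this is for display only and is not appropriate across material interfaces:

    #include <stdlib.h>

    #define NODES_PER_TET 10

    /* elem_nodes[e][i]:  global index of local node i of element e
       elem_values[e][i]: stress already extrapolated to that node
       node_values[]:     output, averaged over adjacent elements   */
    void average_at_nodes(size_t n_elems, size_t n_nodes,
                          const size_t elem_nodes[][NODES_PER_TET],
                          const double elem_values[][NODES_PER_TET],
                          double *node_values)
    {
        unsigned *count = calloc(n_nodes, sizeof *count);
        for (size_t i = 0; i < n_nodes; i++)
            node_values[i] = 0.0;

        for (size_t e = 0; e < n_elems; e++) {
            for (int i = 0; i < NODES_PER_TET; i++) {
                size_t node = elem_nodes[e][i];
                node_values[node] += elem_values[e][i];
                count[node] += 1;
            }
        }
        for (size_t i = 0; i < n_nodes; i++)
            if (count[i] > 0)
                node_values[i] /= count[i];

        free(count);
    }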

Cosine in floating point

I am trying to implement the cosine and sine functions in floating point (but I have no floating point hardware).
Since my processor has no floating-point hardware, nor instructions, I have already implemented algorithms for floating point multiplication, division, addition, subtraction, and square root. So those are the tools I have available to me to implement cosine and sine.
I was considering using the CORDIC method, described at this site.
However, I implemented division and square root using Newton's method, so I was hoping to use the most efficient method here as well.
Please don't tell me to just go look in a book or that "papers exist"; no kidding they exist. I am looking for names of well-known algorithms that are known to be fast and efficient.
First off, depending on your accuracy requirements, this can be considerably fussier than your earlier questions.
Now that you've been warned: you'll first want to reduce the argument modulo pi/2 (or 2pi, or pi, or pi/4) to get the input into a manageable range. This is the subtle part. For a nice discussion of the issues involved, download a copy of K.C. Ng's "Argument Reduction for Huge Arguments: Good to the Last Bit" (a simple Google search on the title will get you a PDF). It's very readable, and does a great job of describing why this is tricky.
After doing that, you only need to approximate the functions on a small range around zero, which is easily done via a polynomial approximation. A Taylor series will work, though it is inefficient. A truncated Chebyshev series is easy to compute and reasonably efficient; computing the minimax approximation is better still. This is the easy part.
I have implemented sine and cosine exactly as described, entirely in integer, in the past (sorry, no public sources). Using hand-tuned assembly, results in the neighborhood of 100 cycles are entirely reasonable on "typical" processors. I don't know what hardware you're dealing with (the performance will mostly be gated on how quickly your hardware can produce the high part of an integer multiply).
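To make the two steps concrete, here is a minimal double-precision sketch. It uses truncated Taylor coefficients for readability (swap in minimax coefficients for more accuracy per term), and the naive reduction is only adequate for arguments of modest size, not the huge arguments the Ng paper addresses. It also uses the C double type directly; on your hardware each operation would map to your soft-float routines:

    #include <math.h>

    static const double PI = 3.14159265358979323846;

    /* Kernel polynomials, valid for |r| <= pi/4. Truncated Taylor
       series, good to roughly single precision on that interval. */
    static double sin_kernel(double r)
    {
        double r2 = r * r;
        return r * (1.0 + r2 * (-1.0/6 + r2 * (1.0/120 + r2 * (-1.0/5040))));
    }

    static double cos_kernel(double r)
    {
        double r2 = r * r;
        return 1.0 + r2 * (-0.5 + r2 * (1.0/24 + r2 * (-1.0/720)));
    }

    /* Write x = k*(pi/2) + r with |r| <= pi/4, then pick the quadrant.
       The subtraction loses accuracy as |x| grows; this naive version
       is fine for modest |x| only. */
    double my_sin(double x)
    {
        double k = floor(x * (2.0 / PI) + 0.5);   /* round to nearest */
        double r = x - k * (PI / 2.0);
        switch ((long) k & 3) {                   /* quadrant mod 4 */
        case 0:  return  sin_kernel(r);
        case 1:  return  cos_kernel(r);
        case 2:  return -sin_kernel(r);
        default: return -cos_kernel(r);
        }
    }

cos(x) is then just my_sin(x + pi/2), or the same switch shifted by one quadrant.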
For various levels of precision, you can find some good approximations here:
http://www.ganssle.com/approx.htm
These have the added advantage of being deterministic in runtime, unlike the various "converging series" options, whose cost can vary wildly depending on the input value. This matters if you are doing anything real-time (games, motion control, etc.).
Since you have the basic arithmetic operations implemented, you may as well implement sine and cosine using their Taylor series expansions.

Float and Math precision on different systems

I want to implement a gameplay recording feature in a project, which would only record player input and the seed of the RNG at the beginning of the level. Then I could take such a recording and play it back on my computer in order to test it for validity.
I'm only concerned with some numerical differences which might appear between different Flash Player versions, operating systems, or CPUs (or whatever else might be affected). The project would be written for Flash Player 10.0.0+. The things I am concerned with:
Operations on Numbers: Multiplying, dividing; bit operations (possibly bit shifting too); addition and subtraction; modulo
Math class: sin, cos and atan2; rounding
localToGlobal/globalToLocal with rotations and scaling
I won't be using stuff like hitTest, getObjectsUnderPoint, hitTestPoint, getBounds and so on, all collisions will be geometrical.
So, are there any chances that using any of the pointed things above will yield different results on different systems? If so, what can I do to avoid them?
That's an interesting question...
It's not a "will this game play the same on multiple platforms" question; it's a "will a recording of user inputs produce the exact same output when simulated" question.
My gut would say "don't worry about it, the Flash VM abstracts the differences away", but then as I think more, there are some areas that might be a problem.
First, I wouldn't record anything time-based. If a user hits a key at 1.21 seconds in, it might be tough to predict whether that happens before or after a given frame's worth of computation, especially if either the recording or playback computer was under load. Trying to time tweens against user input is probably a recipe for failure.
Accuracy of floating point should be OK. The algorithms that define when to round are well documented in IEEE 754, and all VMs use 64-bit Numbers regardless of the OS they're running on. I'm guessing the math operations are equally well specified.
I think it's good to avoid hitTest and whatnot. I imagine they theoretically could be influenced by whether or not hardware acceleration is being used. But I'm not an expert there, so maybe not.
Now localToGlobal/globalToLocal... I just don't know. They might have that theoretical hardware acceleration problem, but I tend to doubt it.
So I guess I didn't give any real answers.
Trig functions WILL NOT WORK! You must create custom implementations of the following: acos, asin, atan, atan2, cos, exp, log, pow, sin, and sqrt. And obviously, random().
I'm still in the process of testing the Number class. I can't say for sure whether addition/subtraction/etc. will be consistent on every machine.
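On the random() point: a PRNG you implement yourself is trivially reproducible because it is pure integer arithmetic. A minimal C sketch using the classic Numerical Recipes LCG constants (the same logic ports directly to ActionScript):

    #include <stdint.h>

    /* Seedable 32-bit linear congruential generator. All integer
       arithmetic, so playback is bit-identical on every machine. */
    static uint32_t rng_state;

    void rng_seed(uint32_t seed) { rng_state = seed; }

    uint32_t rng_next(void)
    {
        rng_state = rng_state * 1664525u + 1013904223u;
        return rng_state;
    }

    /* Uniform value in [0, 1), derived only from the integer state. */
    double rng_next_double(void)
    {
        return rng_next() / 4294967296.0;  /* divide by 2^32 */
    }

Seed it from the value stored in the recording and draw every gameplay random number from it, never from the built-in random().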
It is very unlikely (although possible) that things will behave in a noticeably different way on different computers. Even if they did, it would be a very rare event and not something I would recommend worrying about unless it is absolutely crucial to gameplay.

What kind of learning algorithm would you use to build a model of how long it takes a human to solve a given Sudoku situation?

I don't have much experience in machine learning, pattern recognition, data mining, etc. and in their underlying theory and systems.
I would like to develop an artificial model of the time it takes a human to make a move in a given Sudoku puzzle.
So what I'm looking for as an output from the machine learning process is a model that can predict how long it takes a target human to make a move in a given Sudoku situation.
The same input doesn't always map to the same outcome: the human takes different amounts of time to move in the same situation, but my hypothesis is that there's a tendency in the resulting probability distribution. (My educated guess is that it is roughly normal.)
I have ideas about the factors that influence the distribution (like the number of empty slots) but would preferably leave it to the system to figure these patterns out. Please note that I'm not interested in the patterns themselves, just the model.
I can easily generate sample and test data by running Sudoku puzzles and measuring the time it takes to make each move.
What kind of learning algorithm would you suggest to use for this?
I was thinking NNs, but I'm not sure if they can have the desired property of giving weighted random outcomes for the same input.
If I understand this correctly you have an input vector of length 81, which contains 1 if the square is filled in and 0 otherwise. You want to learn a function which returns a probability distribution which models the response time of a human to that board position.
My first response would be that this is a regression problem and you should try straightforward linear regression. This will not provide you with a distribution of response times, but a single 'best-guess' response time.
I'm not clear on why you want to model a distribution of response times. However, if you really do want to output a distribution, then it sounds like you should look at Bayesian methods. I'm not really an expert on Bayesian inference, so I can't help you much further here.
However, I don't really think your approach is going to work because I agree with your intuition about features such as the number of empty slots being important. There are also other obvious features, such as the number of empty slots per row/column that are likely to be important. Explicitly putting these features in your representation will probably be much more successful than expecting that the learning algorithm will infer something similar on its own.
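As a minimal illustration of the straightforward-regression suggestion, here is the one-feature case in C (hypothetical feature: number of empty cells; for a full feature vector you would solve the normal equations with a proper linear algebra routine):

    #include <stddef.h>

    /* Ordinary least squares for a single feature: fits t = a + b*x,
       where x might be the number of empty cells and t the observed
       response time for that board. */
    void fit_line(const double *x, const double *t, size_t n,
                  double *a, double *b)
    {
        double mx = 0.0, mt = 0.0;
        for (size_t i = 0; i < n; i++) { mx += x[i]; mt += t[i]; }
        mx /= (double) n;
        mt /= (double) n;

        double cov = 0.0, var = 0.0;
        for (size_t i = 0; i < n; i++) {
            cov += (x[i] - mx) * (t[i] - mt);
            var += (x[i] - mx) * (x[i] - mx);
        }
        *b = (var != 0.0) ? cov / var : 0.0;  /* slope     */
        *a = mt - *b * mx;                    /* intercept */
    }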
The Monte Carlo method seems like it would work well here, but it would require a stack of solutions the size of the moon to really do it. And it wouldn't give you the time per person, just the time on average.
My understanding of it, tenuous as it is, is that you have a database with a board position and the time it took a human to make the next move. At the very least you have a starting point for most moves. Even if a position is not in the database, you could start to estimate how long a move would take based on some algorithm. Though I know you specified you wanted machine learning to do this, it might be worth segmenting the problem into something a little smaller and then building on it.
If you have some guesstimate as to what influences the function (number of empty cells, etc.), try to train a classifier on a vector of features, and not on the 81-cell vector (0/1 or 0..9, it doesn't really matter for my argument).
I think that your claim:
we wouldn't have to necessarily know the underlying patterns; the "trained patterns" in a learning system automatically encode these sometimes quite delicate and subtle patterns -- that's one of their great powers
is wrong. You do have to give the network the right domain. For example, when trying to detect objects in an image, working in the pixel domain is pointless; you'll only get results if you first run some feature detection to find edges, corners, etc.
Theoretically, with enough non-linearity (in an NN, enough layers in the network) it can detect such things, but in practice I have never seen that work without giving the classifier the right features to work with.
I was thinking NNs, but I'm not sure if they can have the desired property of giving weighted random outcomes for the same input.
You're just trying to learn a function from 2^81 or 10^81 (or a much smaller feature space, as I suggest) to R (a response time between 0 and infinity), or to some discretization of that. So NNs and other classifiers can do that.
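To make the feature-vector suggestion concrete, here is a minimal C sketch that reduces the 81-cell board to the kinds of counts mentioned in this thread (total empties plus per-row and per-column empties); the exact feature set is of course a guess:

    /* board[81]: 0 for an empty cell, 1..9 for a filled one.
       features[19]: total empty count, then 9 per-row counts,
       then 9 per-column counts. */
    void extract_features(const int board[81], double features[19])
    {
        int total = 0;
        for (int i = 0; i < 19; i++)
            features[i] = 0.0;

        for (int r = 0; r < 9; r++) {
            for (int c = 0; c < 9; c++) {
                if (board[9*r + c] == 0) {
                    total++;
                    features[1 + r]  += 1.0;  /* empties in row r    */
                    features[10 + c] += 1.0;  /* empties in column c */
                }
            }
        }
        features[0] = (double) total;
    }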