Multiply two Point objects - actionscript-3

I have noticed that other languages such as Java have classes like Vector2d with a multiply method. How would I do the same in ActionScript 3? I know that the Point and Vector3D classes have add/subtract methods, but neither offers multiply/divide methods.
What is the best way to multiply two Point objects? Would it be something like the following?
var p1:Point = new Point(10, 20);
var p2:Point = new Point(30, 40);
var p3:Point = new Point((p1.x * p2.x), (p1.y * p2.y));
Also why would multiply/divide be left out of these classes?
EDIT: Here is a link to the Vector2d class I have seen in Java: Java Vector2d multiply

What would be the mathematical meaning of new Point((p1.x * p2.x), (p1.y * p2.y))? In a vector space there is usually no multiplication, because it is simply not clear what it should do.
However, there is the so-called scalar product (dot product) defined on the vectors of Euclidean space, which yields a number (a "scalar", hence the name):
double s = p1.x * p2.x + p1.y * p2.y;
This is useful, for example, if you need to test whether two lines are orthogonal. However, the result is a number, not a vector.
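For instance, a tiny Python sketch of that orthogonality test (Python just for brevity; the helper name is made up):
from math import sqrt

def dot(ax, ay, bx, by):
    # Scalar (dot) product of the 2D vectors (ax, ay) and (bx, by).
    return ax * bx + ay * by

print(dot(1, 0, 0, 1))   # 0  -> orthogonal
print(dot(1, 2, 2, -1))  # 0  -> orthogonal
print(dot(1, 2, 3, 4))   # 11 -> not orthogonal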
Another thing to do with vectors is to scale them:
double factor = 1.5;
Point p3 = new Point(factor * p1.x, factor * p1.y);
Here the result is indeed a point, but the input is a number and a vector, not two vectors. So, unless you can say what the interpretation/meaning of your vector multiplication is, you shouldn't define one. (Note that I don't know whether your proposal might be useful or not; it is simply not a "standard" multiplication I know of.)

I really doubt that Java has multiply methods for point or vector classes, because a canonical definition of a concept like “multiplication” is not possible for these objects. There are a few operations generally called “products” (dot product, cross product, complex multiplication by identifying the complex plane with the Euclidean plane), but they all have properties that differ strongly from those of ordinary real multiplication. The component-wise multiplication that you suggested doesn't make much sense geometrically. So there is no definitive answer; normally you just pick the “multiplication” that you need to solve the problem at hand.

(forgot to finish the post and by now some answers appeared, so sorry for the redundancies)
well, basically, that is because it does not really make sense to multiply two points in 2d ... the vector product does not exist for 2 dimensions ... the scalar product yields a scalar, as the name indicates, thus a Number ... the only thing that would make sense would be the complex product:
var p3:Point = new Point(p1.x * p2.x - p1.y * p2.y, p1.x * p2.y + p1.y * p2.x);
when looking at polar coordinates you will realize that the angle of p3 is the sum of the angles of p1 and p2, and the length is the product ...
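if you want to check that claim numerically, here is a tiny Python sketch ... Python's cmath implements exactly this product:
import cmath

p1 = complex(1, 1)  # angle 45 deg, length sqrt(2)
p2 = complex(0, 2)  # angle 90 deg, length 2
p3 = p1 * p2        # the complex product above: (-2+2j)

print(cmath.phase(p3))             # ~2.356 rad = 135 deg = 45 + 90
print(abs(p3), abs(p1) * abs(p2))  # both ~2.828 = sqrt(2) * 2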
even in 3d, you rarely need multiplication of vectors ... only in the rare case that you have a 3d physics engine that includes things such as angular momentum or the Lorentz force ...
so the question is: what do you intend to do in the end? what result should the multiplication provide?

Related

Evaluate function at constant speed relative to arc length

I'm implementing a realtime graphics engine (C++ / OpenGL) that moves a vehicle over time along a specified course that is described by a polynomial function. The function itself was programmatically generated outside the application and is of a high order (I believe >25), so I can't really post it here (I don't think it matters anyway). During runtime the function does not change, so it's easy to calculate the first and second derivatives once to have them available quickly later on.
My problem is that I have to move along the curve at a constant speed (say 10 units per second), so my function parameter does not correspond to time directly, since the arc length between two points x1 and x2 differs depending on the function values. For example, the difference f(a+1) - f(a) may be far larger or smaller than f(b+1) - f(b), depending on how the function behaves at points a and b.
I don't need a 100% accurate solution, since the movement is only visual and will not be processed any further, so any approximation is OK as well. Also please keep in mind that the whole thing has to be calculated at runtime each frame (60fps), so solving huge equations with complex math may be out of the question, depending on computation time.
I'm a little lost on where to start, so even any train of thought would be highly appreciated!
Since the criterion was not to have an exact solution, but a visually appealing approximation, there were multiple possible solutions to try out.
The first approach I implemented (suggested by Alnitak in the comments and later answered by coproc) approximates the actual arc-length integral in tiny iterations. This version worked really well most of the time, but was not reliable at really steep angles and used too many iterations at flat angles. As coproc pointed out in the answer, a possible solution would be to base dx on the second derivative.
All these adjustments could be made; however, I need a runtime-friendly algorithm. With this one it is hard to predict the number of iterations, which is why I was not happy with it.
The second approach (also inspired by Alnitak) utilizes the first derivative by "pushing" the vehicle along the calculated slope (which is equal to the derivative at the current x value). The function for calculating the next x value is really compact and fast. Visually there is no obvious inaccuracy and the result is always consistent. (That's why I chose it.)
float current_x = ...; // stores the current x value
float f(float x) { ... }      // the course polynomial
float f_derv(float x) { ... } // its first derivative

void calc_next_x(float units_per_second, float time_delta)
{
    float arc_length = units_per_second * time_delta; // distance to cover this frame
    float derv_squared = f_derv(current_x) * f_derv(current_x);
    // ds = sqrt(1 + f'(x)^2) * dx  =>  dx = ds / sqrt(1 + f'(x)^2)
    current_x += arc_length / sqrt(derv_squared + 1);
}
This approach, however, will possibly only be accurate enough at high frame rates (mine is >60 fps), since the object is always pushed along a straight line whose length depends on the frame time.
Given the constant speed and the time between frames the desired arc length between frames can be computed. So the following function should do the job:
#include <cmath>

typedef double (*Function)(double);

double moveOnArc(Function f, const double xStart, const double desiredArcLength, const double dx = 1e-2)
{
    double arcLength = 0.;
    double fPrev = f(xStart);
    double x = xStart;
    double dx2 = dx*dx;
    while (arcLength < desiredArcLength)
    {
        x += dx;
        double fx = f(x);
        double dfx = fx - fPrev;
        arcLength += sqrt(dx2 + dfx*dfx);
        fPrev = fx;
    }
    return x;
}
Since you say that accuracy is not a top criterion, with an appropriately chosen dx the above function might work right away. Of course, it could be improved by adjusting dx automatically (e.g. based on the second derivative) or by refining the endpoint with a binary search.
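To illustrate the binary-search refinement, here is the same stepping idea sketched in Python (the names and the parabola test function at the end are made up for the example):
from math import sqrt

def move_on_arc(f, x_start, desired_arc_length, dx=1e-2, refine_steps=20):
    # Step along y = f(x), accumulating chord lengths, until the next
    # step would overshoot the desired arc length.
    arc_length = 0.0
    x, f_prev = x_start, f(x_start)
    while True:
        fx = f(x + dx)
        step = sqrt(dx * dx + (fx - f_prev) ** 2)
        if arc_length + step >= desired_arc_length:
            break
        arc_length += step
        x += dx
        f_prev = fx
    # Binary search for the fraction of the final step that lands
    # closest to the desired arc length.
    lo, hi = 0.0, dx
    for _ in range(refine_steps):
        mid = (lo + hi) / 2
        chord = sqrt(mid * mid + (f(x + mid) - f_prev) ** 2)
        if arc_length + chord < desired_arc_length:
            lo = mid
        else:
            hi = mid
    return x + (lo + hi) / 2

print(move_on_arc(lambda x: x * x, 0.0, 10.0))  # ~3.0 for this parabola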

How to Solve non-specific non-linear equations?

I am attempting to fit a circle to some data. This requires numerically solving a set of three non-linear simultaneous equations (see the Full Least Squares Method of this document).
To me it seems that the NEWTON function provided by IDL is fit for solving this problem. NEWTON requires the name of a function that will compute the values of the equation system for particular values of the independent variables:
FUNCTION newtfunction, X
  RETURN, [Some function of X, Some other function of X]
END
While this works fine, it requires that all parameters of the equation system (in this case the set of data points) are hard-coded in the newtfunction. This is fine if there is only one data set to solve for; however, I have many thousands of data sets, and defining a new function for each by hand is not an option.
Is there a way around this? Is it possible to define functions programmatically in IDL, or even just pass in the data set in some other manner?
I am not an expert on this matter, but if I were to solve this problem I would use an optimization routine that converges to a solution from an initial guess, instead of solving a system of 3 non-linear equations for the three unknowns (i.e. xc, yc and r). For this, steepest descent, conjugate gradient, or any other multivariate optimization method can be used.
I just quickly derived the least-squares objective for your problem as (please check before use):
F = sum_{i=1}^{N} ((x_i - xc)^2 + (y_i - yc)^2 - r^2)^2
Calculating the gradient of this function is fairly easy, since it is just a summation, so writing a steepest descent code to find xc, yc and r would be trivial.
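For illustration, a rough Python sketch of that steepest-descent idea (the step size, iteration count and centroid-based initial guess are arbitrary choices, so please tune them):
from math import cos, sin, pi, hypot

def fit_circle(points, steps=5000, lr=1e-3):
    # Steepest descent on F = sum(((x - xc)^2 + (y - yc)^2 - r^2)^2).
    # Initial guess: centroid of the points and mean distance to it.
    xc = sum(x for x, _ in points) / len(points)
    yc = sum(y for _, y in points) / len(points)
    r = sum(hypot(x - xc, y - yc) for x, y in points) / len(points)
    for _ in range(steps):
        gx = gy = gr = 0.0
        for x, y in points:
            e = (x - xc) ** 2 + (y - yc) ** 2 - r ** 2
            gx += -4 * e * (x - xc)
            gy += -4 * e * (y - yc)
            gr += -4 * e * r
        xc -= lr * gx
        yc -= lr * gy
        r -= lr * gr
    return xc, yc, r

# Points on a circle of radius 2 centred at (1, -1):
pts = [(1 + 2 * cos(2 * pi * i / 12), -1 + 2 * sin(2 * pi * i / 12))
       for i in range(12)]
print(fit_circle(pts))  # ~ (1.0, -1.0, 2.0)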
I hope it helps.
It's usual to use a COMMON block in these types of functions to pass in other parameters, cached values, etc. that are not part of the calling signature of the numeric routine.

AS3 vector projection on vector

http://www.tonypa.pri.ee/vectors/tut03.html
Could you explain to me how they get the projection onto a vector? When I multiply dp by the unit vector, what do I get? I don't understand what they do or how they get it without angles or anything else: just a weird non-vector number times a 1-length vector, and I get huge dp values (30k+), nothing like a projection. I have really suffered enough going through all the formulas, trying to get a projection with atan2 and other geometry calculations.
Take the dot product of the two vectors and divide by the length of the vector you are projecting onto; that gives the length of the projection. You get the same result if you normalize both vectors before projecting and then multiply by the length of the projection base. By the way, in an orthogonal coordinate system you don't need angles to do a projection; if you ever do need the angle between two vectors, it is acos(dotProduct(v1,v2)/(length(v1)*length(v2))).
var v:Vector.<Number>; // the vector to be projected
var p:Vector.<Number>; // the projection base; both must have the same number of components

function dotProduct(v1:Vector.<Number>, v2:Vector.<Number>):Number {
    var d:Number = 0;
    for (var i:int = v1.length - 1; i >= 0; i--) d += v1[i] * v2[i];
    return d;
}

function lengthOf(v:Vector.<Number>):Number { return Math.sqrt(dotProduct(v, v)); }

// Scalar coefficient of the projection of v onto p: (v . p) / |p|^2
var pl:Number = dotProduct(v, p) / (lengthOf(p) * lengthOf(p));
// Overwrite v with its projection onto p.
for (var i:int = v.length - 1; i >= 0; i--) v[i] = p[i] * pl;

Good way to procedurally generate a "blob" graphic in 2D

I'm looking to create a "blob" in a computationally fast manner. A blob here is defined as a collection of pixels that could be any shape, but all connected. Examples:
.ooo....
..oooo..
....oo..
.oooooo.
..o..o..
...ooooooooooooooooooo...
..........oooo.......oo..
.....ooooooo..........o..
.....oo..................
......ooooooo....
...ooooooooooo...
..oooooooooooooo.
..ooooooooooooooo
..oooooooooooo...
...ooooooo.......
....oooooooo.....
.....ooooo.......
.......oo........
Where . is dead space and o is a marked pixel. I only care about "binary" generation: a pixel is either ON or OFF. So for instance these might look like some imaginary blob of ketchup or a fictional bacterium or whatever organic substance.
What kind of algorithm could achieve this? I'm really at a loss.
David Thonley's comment is right on, but I'm going to assume you want a blob with an 'organic' shape and smooth edges. For that you can use metaballs. A metaball is a power function contributing to a scalar field, and scalar fields can be rendered efficiently with the marching cubes algorithm. Different shapes can be made by changing the number of balls, their positions and their radii.
See here for an introduction to 2D metaballs: https://web.archive.org/web/20161018194403/https://www.niksula.hut.fi/~hkankaan/Homepages/metaballs.html
And here for an introduction to the marching cubes algorithm: https://web.archive.org/web/20120329000652/http://local.wasp.uwa.edu.au/~pbourke/geometry/polygonise/
Note that the 256 combinations for the intersections in 3D reduce to only 16 combinations in 2D. It's very easy to implement.
EDIT:
I hacked together a quick example with a GLSL shader. Here is the result using 50 blobs, with the energy function from hkankaan's homepage.
Here is the actual GLSL code, though I evaluate this per-fragment rather than using the marching cubes algorithm. You need to render a full-screen quad for it to work (two triangles). The vec3 uniform array is simply the 2D positions and radii of the individual blobs, passed with glUniform3fv.
/* Trivial bare-bones vertex shader */
#version 150

in vec2 vertex;

void main()
{
    gl_Position = vec4(vertex.x, vertex.y, 0.0, 1.0);
}

/* Fragment shader */
#version 150

#define NUM_BALLS 50

out vec4 color_out;
uniform vec3 balls[NUM_BALLS]; // .xy is position, .z is radius

// True when the summed energy at p exceeds the iso threshold,
// i.e. when p lies inside the blob.
bool energyField(in vec2 p, in float gooeyness, in float iso)
{
    float en = 0.0;
    bool result = false;
    for (int i = 0; i < NUM_BALLS; ++i)
    {
        float radius = balls[i].z;
        float denom = max(0.0001, pow(length(balls[i].xy - p), gooeyness));
        en += radius / denom;
    }
    if (en > iso)
        result = true;
    return result;
}

void main()
{
    /* gl_FragCoord.xy is in screen space / fragment coordinates */
    bool inside = energyField(gl_FragCoord.xy, 1.0, 40.0);
    if (inside)
        color_out = vec4(1.0, 0.0, 0.0, 1.0);
    else
        discard;
}
Here's an approach where we first generate a piecewise-affine potato, and then smooth it by interpolating. The interpolation idea is based on taking the DFT, then leaving the low frequencies as they are, padding with zeros at high frequencies, and taking an inverse DFT.
Here's code requiring only standard Python libraries:
import cmath
from math import atan2, pi
from random import random

def convexHull(pts):  # Graham's scan.
    xleftmost, yleftmost = min(pts)
    by_theta = [(atan2(x - xleftmost, y - yleftmost), x, y) for x, y in pts]
    by_theta.sort()
    as_complex = [complex(x, y) for _, x, y in by_theta]
    chull = as_complex[:2]
    for pt in as_complex[2:]:
        # Perp product.
        while ((pt - chull[-1]).conjugate() * (chull[-1] - chull[-2])).imag < 0:
            chull.pop()
        chull.append(pt)
    return [(pt.real, pt.imag) for pt in chull]

def dft(xs):
    return [sum(x * cmath.exp(2j * pi * i * k / len(xs))
                for i, x in enumerate(xs))
            for k in range(len(xs))]

def interpolateSmoothly(xs, N):
    """For each point, add N points."""
    fs = dft(xs)
    half = (len(xs) + 1) // 2
    fs2 = fs[:half] + [0] * (len(fs) * N) + fs[half:]
    return [x.real / len(xs) for x in dft(fs2)[::-1]]

pts = convexHull([(random(), random()) for _ in range(10)])
xs, ys = [interpolateSmoothly(zs, 100) for zs in zip(*pts)]  # Unzip.
This generates something like this (the initial points, and the interpolation):
Here's another attempt:
pts = [(random() + 0.8) * cmath.exp(2j*pi*i/7) for i in range(7)]
pts = convexHull([(pt.real, pt.imag ) for pt in pts])
xs, ys = [interpolateSmoothly(zs, 30) for zs in zip(*pts)]
These have kinks and concavities occasionally. Such is the nature of this family of blobs.
Note that SciPy has convex hull and FFT, so the above functions could be substituted by them.
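For example, the two helpers above map onto those libraries roughly like this (a sketch assuming scipy.spatial.ConvexHull and numpy.fft are available; the (N + 1) factor mirrors the normalization in interpolateSmoothly):
import numpy as np
from scipy.spatial import ConvexHull

def convex_hull_np(pts):
    # Hull vertices in counter-clockwise order, replacing convexHull().
    pts = np.asarray(pts)
    return pts[ConvexHull(pts).vertices]

def interpolate_smoothly_np(xs, N):
    # Zero-pad the high frequencies of the spectrum, replacing
    # interpolateSmoothly(); for each point, N points are added.
    fs = np.fft.fft(xs)
    half = (len(xs) + 1) // 2
    fs2 = np.concatenate([fs[:half], np.zeros(len(fs) * N), fs[half:]])
    return np.fft.ifft(fs2).real * (N + 1)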
You could probably design algorithms to do this that are minor variants of a range of random maze generating algorithms. I'll suggest one based on the union-find method.
The basic idea in union-find is, given a set of items that is partitioned into disjoint (non-overlapping) subsets, to identify quickly which partition a particular item belongs to. The "union" is combining two disjoint sets together to form a larger set, the "find" is determining which partition a particular member belongs to. The idea is that each partition of the set can be identified by a particular member of the set, so you can form tree structures where pointers point from member to member towards the root. You can union two partitions (given an arbitrary member for each) by first finding the root for each partition, then modifying the (previously null) pointer for one root to point to the other.
You can formulate your problem as a disjoint union problem. Initially, every individual cell is a partition of its own. What you want is to merge partitions until you get a small number of partitions (not necessarily two) of connected cells. Then, you simply choose one (possibly the largest) of the partitions and draw it.
For each cell, you will need a pointer (initially null) for the unioning. You will probably need a bit vector to act as a set of neighbouring cells. Initially, each cell will have a set of its four (or eight) adjacent cells.
For each iteration, choose a cell at random, then follow the pointer chain to find its root. In the data held at the root, look up its neighbour set, choose a random member from that, and find that member's root to identify a neighbouring region. Perform the union (point one root to the other, etc.) to merge the two regions. Repeat until you're happy with one of the regions.
When merging partitions, the new neighbour set for the new root will be the symmetric difference (exclusive or) of the neighbour sets of the two previous roots.
You'll probably want to maintain other data as you grow your partitions - e.g. the size - in each root element. You can use this to be a bit more selective about going ahead with a particular union, and to help decide when to stop. Some measure of the scattering of the cells in a partition may be relevant - e.g. a small variance or standard deviation (relative to a large cell count) probably indicates a dense, roughly circular blob.
When you finish, you just scan all cells to test whether each is a part of your chosen partition to build a separate bitmap.
In this approach, when you randomly choose a cell at the start of an iteration, there's a strong bias towards choosing the larger partitions. When you choose a neighbour, there's also a bias towards choosing a larger neighbouring partition. This means you tend to get one clearly dominant blob quite quickly.
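Here is a much-simplified Python sketch of the idea: it skips the per-root neighbour sets and just unions randomly chosen adjacent cells until one region is big enough (picking cells uniformly already biases growth towards the larger partitions); the grid and target sizes are arbitrary:
from random import randrange

W, H, TARGET = 25, 15, 120  # grid dimensions and desired blob size

parent = list(range(W * H))  # each cell starts as its own partition
size = [1] * (W * H)

def find(i):
    # Follow the pointer chain to the root, compressing the path.
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def union(a, b):
    # Merge the partitions containing a and b; return the merged root.
    ra, rb = find(a), find(b)
    if ra == rb:
        return ra
    if size[ra] < size[rb]:
        ra, rb = rb, ra
    parent[rb] = ra
    size[ra] += size[rb]
    return ra

root = 0
while size[find(root)] < TARGET:
    c = randrange(W * H)
    x, y = c % W, c // W
    # Pick a random 4-neighbour and merge if it is inside the grid.
    nx, ny = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)][randrange(4)]
    if 0 <= nx < W and 0 <= ny < H:
        root = union(c, ny * W + nx)

blob = find(root)
for y in range(H):
    print(''.join('o' if find(y * W + x) == blob else '.' for x in range(W)))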

How would you write this algorithm for large combinations in the most compact way?

The number of combinations of k items which can be retrieved from N items is described by the following formula.
c = N! / (k! * (N - k)!)
An example would be how many combinations of 6 Balls can be drawn from a drum of 48 Balls in a lottery draw.
Optimize this formula to run with the smallest O time complexity
This question was inspired by the new WolframAlpha math engine and the fact that it can calculate extremely large combinations very quickly, e.g.
http://www97.wolframalpha.com/input/?i=20000000+Choose+15000000
and by a subsequent discussion on the topic on another forum.
I'll post some info/links from that discussion after some people take a stab at the solution.
Any language is acceptable.
Python: O(min(k, n-k)^2)
def choose(n, k):
    k = min(k, n - k)
    p = q = 1
    for i in range(k):
        p *= n - i
        q *= 1 + i
    return p // q  # exact: q = k! always divides p here
Analysis:
The size of p and q will increase linearly inside the loop, if n-i and 1+i can be considered to have constant size.
The cost of each multiplication will then also increase linearly.
This sum of all iterations becomes an arithmetic series over k.
My conclusion: O(k^2)
If rewritten to use floating-point numbers, the multiplications will be atomic operations, but we will lose a lot of precision. It even overflows for choose(20000000, 15000000). (Not a big surprise, since the result would be around 0.2119620413×10^4884378.)
def choose(n, k):
    k = min(k, n - k)
    result = 1.0
    for i in range(k):
        result *= 1.0 * (n - i) / (1 + i)
    return result
Notice that WolframAlpha returns a "Decimal Approximation". If you don't need absolute precision, you could do the same thing by calculating the factorials with Stirling's Approximation.
Now, Stirling's approximation requires the evaluation of (n/e)^n, where e is the base of the natural logarithm, which will be by far the slowest operation. But this can be done using the techniques outlined in another stackoverflow post.
If you use double precision and repeated squaring to accomplish the exponentiation, the operations will be:
3 evaluations of a Stirling approximation, each requiring O(log n) multiplications and one square root evaluation.
2 multiplications
1 division
The number of operations could probably be reduced with a bit of cleverness, but the total time complexity is going to be O(log n) with this approach. Pretty manageable.
EDIT: There's also bound to be a lot of academic literature on this topic, given how common this calculation is. A good university library could help you track it down.
EDIT2: Also, as pointed out in another response, the values will easily overflow a double, so a floating point type with very extended precision will need to be used for even moderately large values of k and n.
I'd solve it in Mathematica:
Binomial[n, k]
Man, that was easy...
Python: approximation in O(1) ?
Using Python's decimal implementation to calculate an approximation. Since it does not use any explicit loop and the numbers are limited in size, I think it will execute in O(1).
from decimal import Decimal

ln   = lambda z: z.ln()
exp  = lambda z: z.exp()
sinh = lambda z: (exp(z) - exp(-z)) / 2
sqrt = lambda z: z.sqrt()

pi = Decimal('3.1415926535897932384626433832795')
e  = Decimal('2.7182818284590452353602874713527')

# Stirling's approximation of the gamma function,
# simplification by Robert H. Windschitl.
# Source: http://en.wikipedia.org/wiki/Stirling%27s_approximation
gamma = lambda z: sqrt(2*pi/z) * (z/e*sqrt(z*sinh(1/z) + 1/(810*z**6)))**z

def choose(n, k):
    n = Decimal(str(n))
    k = Decimal(str(k))
    return gamma(n+1) / gamma(k+1) / gamma(n-k+1)
Example:
>>> choose(20000000,15000000)
Decimal('2.087655025913799812289651991E+4884377')
>>> choose(130202807,65101404)
Decimal('1.867575060806365854276707374E+39194946')
Any higher, and it will overflow. The exponent seems to be limited to 40000000.
Given a reasonable range of values for n and k, calculate them in advance and use a lookup table.
It's dodging the issue in some fashion (you're offloading the calculation), but it's a useful technique if you're having to determine large numbers of values.
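A minimal Python sketch of that precompute-and-lookup idea, using Pascal's rule C(n, k) = C(n-1, k-1) + C(n-1, k) so the table is built with additions only:
def build_choose_table(n_max):
    # Rows 0..n_max of Pascal's triangle: table[n][k] == C(n, k).
    table = [[1]]
    for n in range(1, n_max + 1):
        prev = table[-1]
        table.append([1] + [prev[k - 1] + prev[k] for k in range(1, n)] + [1])
    return table

table = build_choose_table(48)
print(table[48][6])  # 12271512 ways to draw 6 balls from 48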
MATLAB:
The cheater's way (using the built-in function NCHOOSEK): 13 characters, O(?)
nchoosek(N,k)
My solution: 36 characters, O(min(k,N-k))
a=min(k,N-k);
prod(N-a+1:N)/prod(1:a)
I know this is a really old question, but I struggled with a solution to this problem for a long while until I found a really simple one written in VB 6; after porting it to C#, here is the result:
public long NChooseK(long n, long k)
{
    // long instead of int to delay overflow; the division is exact at every
    // step because the running value always equals the integer C(n - k + i, i).
    var result = 1L;
    for (var i = 1L; i <= k; i++)
    {
        result *= n - (k - i);
        result /= i;
    }
    return result;
}
The final code is so simple that you won't believe it will work until you run it.
Also, the original article gives a nice explanation of how the author reached the final algorithm.