Good way to procedurally generate a "blob" graphic in 2D - language-agnostic

I'm looking to create a "blob" in a computationally fast manner. A blob here is defined as a collection of pixels that could be any shape, but all connected. Examples:
.ooo....
..oooo..
....oo..
.oooooo.
..o..o..
...ooooooooooooooooooo...
..........oooo.......oo..
.....ooooooo..........o..
.....oo..................
......ooooooo....
...ooooooooooo...
..oooooooooooooo.
..ooooooooooooooo
..oooooooooooo...
...ooooooo.......
....oooooooo.....
.....ooooo.......
.......oo........
Where . is dead space and o is a marked pixel. I only care about "binary" generation - a pixel is either ON or OFF. So for instance these would look like some imaginary blob of ketchup or fictional bacterium or whatever organic substance.
What kind of algorithm could achieve this? I'm really at a loss.

David Thornley's comment is right on, but I'm going to assume you want a blob with an 'organic' shape and smooth edges. For that you can use metaballs. A metaball is a power function that contributes to a scalar field, and scalar fields can be rendered efficiently with the marching cubes algorithm. Different shapes can be made by changing the number of balls, their positions, and their radii.
See here for an introduction to 2D metaballs: https://web.archive.org/web/20161018194403/https://www.niksula.hut.fi/~hkankaan/Homepages/metaballs.html
And here for an introduction to the marching cubes algorithm: https://web.archive.org/web/20120329000652/http://local.wasp.uwa.edu.au/~pbourke/geometry/polygonise/
Note that the 256 combinations for the intersections in 3D is only 16 combinations in 2D. It's very easy to implement.
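If all you need is the binary ON/OFF pixel grid from the question, you can also evaluate the metaball energy field directly per pixel on the CPU and threshold it, with no marching squares at all. A minimal Python sketch of that idea (the grid size, ball parameter ranges, and the iso threshold are illustrative values, not from any of the linked pages):

import math, random

def make_blob(width=60, height=24, num_balls=6, iso=1.0, seed=None):
    rng = random.Random(seed)
    # Random ball centres and radii; tweak the ranges to taste.
    balls = [(rng.uniform(10, width - 10),
              rng.uniform(5, height - 5),
              rng.uniform(2, 6))
             for _ in range(num_balls)]
    rows = []
    for y in range(height):
        row = []
        for x in range(width):
            # hkankaan-style energy: sum of radius / distance to each ball.
            en = sum(r / max(1e-4, math.hypot(x - bx, y - by))
                     for bx, by, r in balls)
            row.append('o' if en > iso else '.')
        rows.append(''.join(row))
    return '\n'.join(rows)

print(make_blob(seed=42))

Depending on the random placement the marked pixels may form more than one blob; constrain the centres (or raise num_balls) if you need a single connected one.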
EDIT:
I hacked together a quick example with a GLSL shader. Here is the result using 50 balls, with the energy function from hkankaan's homepage.
Here is the actual GLSL code, though I evaluate this per-fragment. I'm not using the marching cubes algorithm. You need to render a full-screen quad for it to work (two triangles). The vec3 uniform array is simply the 2D positions and radii of the individual blobs passed with glUniform3fv.
/* Trivial bare-bones vertex shader */
#version 150
in vec2 vertex;
void main()
{
    gl_Position = vec4(vertex.x, vertex.y, 0.0, 1.0);
}

/* Fragment shader */
#version 150
#define NUM_BALLS 50
out vec4 color_out;
uniform vec3 balls[NUM_BALLS]; // .xy is position, .z is radius

bool energyField(in vec2 p, in float gooeyness, in float iso)
{
    float en = 0.0;
    for (int i = 0; i < NUM_BALLS; ++i)
    {
        float radius = balls[i].z;
        // Energy falls off with distance; gooeyness controls the falloff power.
        float denom = max(0.0001, pow(length(balls[i].xy - p), gooeyness));
        en += radius / denom;
    }
    return en > iso;
}

void main()
{
    /* gl_FragCoord.xy is in screen space / fragment coordinates */
    bool inside = energyField(gl_FragCoord.xy, 1.0, 40.0);
    if (inside)
        color_out = vec4(1.0, 0.0, 0.0, 1.0);
    else
        discard;
}

Here's an approach where we first generate a piecewise-affine potato, and then smooth it by interpolating. The interpolation idea is based on taking the DFT, then leaving the low frequencies as they are, padding with zeros at high frequencies, and taking an inverse DFT.
Here's code requiring only standard Python libraries:
import cmath
from math import atan2, pi
from random import random

def convexHull(pts):  #Graham's scan.
    xleftmost, yleftmost = min(pts)
    by_theta = [(atan2(x - xleftmost, y - yleftmost), x, y) for x, y in pts]
    by_theta.sort()
    as_complex = [complex(x, y) for _, x, y in by_theta]
    chull = as_complex[:2]
    for pt in as_complex[2:]:
        #Perp product.
        while ((pt - chull[-1]).conjugate() * (chull[-1] - chull[-2])).imag < 0:
            chull.pop()
        chull.append(pt)
    return [(pt.real, pt.imag) for pt in chull]

def dft(xs):
    return [sum(x * cmath.exp(2j * pi * i * k / len(xs))
                for i, x in enumerate(xs))
            for k in range(len(xs))]

def interpolateSmoothly(xs, N):
    """For each point, add N points."""
    fs = dft(xs)
    half = (len(xs) + 1) // 2
    fs2 = fs[:half] + [0] * (len(fs) * N) + fs[half:]
    return [x.real / len(xs) for x in dft(fs2)[::-1]]

pts = convexHull([(random(), random()) for _ in range(10)])
xs, ys = [interpolateSmoothly(zs, 100) for zs in zip(*pts)]  #Unzip.
This generates something like this (the initial points, and the interpolation):
Here's another attempt:
pts = [(random() + 0.8) * cmath.exp(2j * pi * i / 7) for i in range(7)]
pts = convexHull([(pt.real, pt.imag) for pt in pts])
xs, ys = [interpolateSmoothly(zs, 30) for zs in zip(*pts)]
These have kinks and concavities occasionally. Such is the nature of this family of blobs.
Note that SciPy has convex hull and FFT routines, so the functions above could be replaced with them.

You could probably design algorithms to do this that are minor variants of a range of random maze generating algorithms. I'll suggest one based on the union-find method.
The basic idea in union-find is, given a set of items that is partitioned into disjoint (non-overlapping) subsets, to identify quickly which partition a particular item belongs to. The "union" is combining two disjoint sets together to form a larger set, the "find" is determining which partition a particular member belongs to. The idea is that each partition of the set can be identified by a particular member of the set, so you can form tree structures where pointers point from member to member towards the root. You can union two partitions (given an arbitrary member for each) by first finding the root for each partition, then modifying the (previously null) pointer for one root to point to the other.
You can formulate your problem as a disjoint union problem. Initially, every individual cell is a partition of its own. What you want is to merge partitions until you get a small number of partitions (not necessarily two) of connected cells. Then, you simply choose one (possibly the largest) of the partitions and draw it.
For each cell, you will need a pointer (initially null) for the unioning. You will probably need a bit vector to act as a set of neighbouring cells. Initially, each cell will have a set of its four (or eight) adjacent cells.
For each iteration, you choose a cell at random, then follow the pointer chain to find its root. In the data stored at the root, you find its neighbour set. Choose a random member from that, then find the root for that, to identify a neighbouring region. Perform the union (point one root to the other, etc.) to merge the two regions. Repeat until you're happy with one of the regions.
When merging partitions, the new neighbour set for the new root will be the set symmetric difference (exclusive or) of the neighbour sets for the two previous roots.
You'll probably want to maintain other data as you grow your partitions - e.g. the size - in each root element. You can use this to be a bit more selective about going ahead with a particular union, and to help decide when to stop. A measure of the scattering of the cells in a partition may be relevant - e.g. a small variance or standard deviation (relative to a large cell count) probably indicates a dense, roughly circular blob.
When you finish, you just scan all cells to test whether each is a part of your chosen partition to build a separate bitmap.
In this approach, when you randomly choose a cell at the start of an iteration, there's a strong bias towards choosing the larger partitions. When you choose a neighbour, there's also a bias towards choosing a larger neighbouring partition. This means you tend to get one clearly dominant blob quite quickly.
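For concreteness, here is a rough Python sketch of the whole scheme. It simplifies the bookkeeping: rather than maintaining a neighbour set per root, it picks a random cell and a random adjacent cell, which preserves the size bias described above. All names and the stopping rule are my own:

import random

def generate_blob(w, h, target, seed=None):
    """Grow connected partitions by union-find until one partition
    holds `target` cells, then render that partition as the blob."""
    rng = random.Random(seed)
    parent = list(range(w * h))        # each cell starts as its own root
    size = [1] * (w * h)               # partition sizes, kept at the roots

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    def neighbours(i):
        x, y = i % w, i // w
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= x + dx < w and 0 <= y + dy < h:
                yield (y + dy) * w + (x + dx)

    while True:
        c = rng.randrange(w * h)               # biased towards big partitions
        n = rng.choice(list(neighbours(c)))    # random adjacent cell
        a, b = find(c), find(n)
        if a == b:
            continue
        parent[b] = a                          # union: point one root at the other
        size[a] += size[b]
        if size[a] >= target:
            root = a
            break

    return '\n'.join(''.join('o' if find(y * w + x) == root else '.'
                             for x in range(w))
                     for y in range(h))

print(generate_blob(30, 12, 60, seed=1))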

Evaluate function at constant speed relative to arc length

I'm implementing a realtime graphics engine (C++ / OpenGL) that moves a vehicle over time along a specified course that is described by a polynomial function. The function itself was programmatically generated outside the application and is of a high order (I believe >25), so I can't really post it here (I don't think it matters anyway). During runtime the function does not change, so it's easy to calculate the first and second derivatives once to have them available quickly later on.
My problem is that I have to move along the curve with a constant speed (say 10 units per second), so my function parameter is not equal to the time directly, since the arc length between two points x1 and x2 differs dependent on the function values. For example the difference f(a+1) - f(a) may be way larger or smaller than f(b+1) - f(b), depending on how the function looks at points a and b.
I don't need a 100% accurate solution, since the movement is only visual and will not be processed any further, so any approximation is OK as well. Also please keep in mind that the whole thing has to be calculated at runtime each frame (60fps), so solving huge equations with complex math may be out of the question, depending on computation time.
I'm a little lost on where to start, so even any train of thought would be highly appreciated!
Since the criterion was not to have an exact solution, but a visually appealing approximation, there were multiple possible solutions to try out.
The first approach (suggested by Alnitak in the comments and later answered by coproc) I implemented, which is approximating the actual arclength integral by tiny iterations. This version worked really well most of the time, but was not reliable at really steep angles and used too many iterations at flat angles. As coproc already pointed out in the answer, a possible solution would be to base dx on the second derivative.
All these adjustments could be made; however, I need a runtime-friendly algorithm. With this one it is hard to predict the number of iterations, which is why I was not happy with it.
The second approach (also inspired by Alnitak) is utilizing the first derivative by "pushing" the vehicle along the calculated slope (which is equal to the derivative at the current x value). The function for calculating the next x value is really compact and fast. Visually there is no obvious inaccuracy and the result is always consistent. (That's why I chose it)
float current_x = ...;          // stores the current x
float f(float x) { ... }        // the course polynomial
float f_derv(float x) { ... }   // its first derivative

void calc_next_x(float units_per_second, float time_delta) {
    float arc_length = units_per_second * time_delta;
    float derv_squared = f_derv(current_x) * f_derv(current_x);
    // ds = sqrt(1 + f'(x)^2) dx  =>  dx = ds / sqrt(1 + f'(x)^2)
    current_x += arc_length / sqrt(derv_squared + 1);
}
This approach, however, will possibly only be accurate enough at high frame rates (mine is >60fps), since the object is always pushed along a straight line whose length depends on the frame time.
Given the constant speed and the time between frames, the desired arc length per frame can be computed. So the following function should do the job:
#include <cmath>

typedef double (*Function)(double);

double moveOnArc(Function f, const double xStart, const double desiredArcLength, const double dx = 1e-2)
{
    double arcLength = 0.;
    double fPrev = f(xStart);
    double x = xStart;
    double dx2 = dx*dx;
    while (arcLength < desiredArcLength)
    {
        x += dx;
        double fx = f(x);
        double dfx = fx - fPrev;
        arcLength += std::sqrt(dx2 + dfx*dfx);
        fPrev = fx;
    }
    return x;
}
Since you say that accuracy is not a top criterion, the above function might work right away with an appropriate dx. Of course, it could be improved by adjusting dx automatically (e.g. based on the second derivative) or by refining the endpoint with a binary search.
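As a concrete illustration of that last refinement, here is a small Python sketch (Python for brevity; the idea carries straight over to C++). It accumulates chord lengths as above, then binary-searches inside the final, overshooting step. The function names are mine:

import math

def move_on_arc(f, x_start, desired_arc_length, dx=1e-2, refine_iters=20):
    arc_length = 0.0
    x, f_prev = x_start, f(x_start)
    # Walk forward in fixed dx steps, summing chord lengths, until the
    # next step would overshoot the desired arc length.
    while True:
        f_next = f(x + dx)
        step = math.hypot(dx, f_next - f_prev)
        if arc_length + step >= desired_arc_length:
            break
        arc_length += step
        x, f_prev = x + dx, f_next
    # Binary-search inside the overshooting step for the exact remainder.
    lo, hi = 0.0, dx
    for _ in range(refine_iters):
        mid = 0.5 * (lo + hi)
        if arc_length + math.hypot(mid, f(x + mid) - f_prev) < desired_arc_length:
            lo = mid
        else:
            hi = mid
    return x + 0.5 * (lo + hi)

# e.g. advance 10 arc-length units along f(x) = x^2 starting at x = 0:
print(move_on_arc(lambda x: x * x, 0.0, 10.0))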

How to convert a QuadTree Cell's Spatial Index (Binary Index) to Position and Dimension values?

Sorry in advance for misusing any terminology in this question, but basically I'm looking into creating a QuadTree that makes use of binary indexing, like this:
As you can see in the two illustrations above, if each cell is given a binary ID (ex: 1010, 1011), then every ODD binary index controls the X offset and every EVEN binary index controls the Y offset.
For example, in the case of the Level 2 grid (16 cells), 1010 (cell #10) could be said to have 1s at its 4th and 2nd indices, therefore those would perform two Y offsets. The first '1###' (on the leftmost side) would indicate an offset of one cell-height, then the second '##1#' would additionally offset it twice the cell-height.
As in:
// If Cell Height = 64pixels
1### = 64 pixels
+ ##1# = 128 pixels
__________________
1#1# = 192 pixels
The same can be applied to the X axis, only it uses the odd numbers instead (ex: #1#1).
Now, when I initialize my QuadTree, I begin by calculating the maximum number of nodes it may contain if all cells at all depths are used. I calculate this as the sum of 4 raised to the power of each depth:
_totalNodes = 0;
var t:int = 0, tLen:int = _maxLevels;
for (; t < tLen; t++) {
    _totalNodes += Math.pow(4, t); //Adds 1, 4, 16, 64, 256, etc...
}
Then, I create another loop (iterating from 0 to _totalNodes) which instantiates the nodes and stores them in a long array. It passes the current iteration integer to the Node constructor, which stores it as its index.
So far I've been able to determine which depth (aka level) the Node would be stored in by finding its index's most significant bit:
public static function MSB(pValue:uint):int {
    var bits:int = 0;
    while (pValue >>= 1) {
        bits++;
    }
    return bits;
}
But now I'm stuck trying to figure out how to convert the index from binary form to actual cell X and Y positions. Like I said above, the dimensions of each cell are known; it's just a matter of doing some logical operations on the whole index (or "bit-code", as I refer to it in my code).
If you know of a good example that uses logical-operations (binary level) to convert the binary index values to X and Y positions, could you please post a link or explanation here?
Thanks!
Here's a reference where I got this idea from (note: different programming language):
L. Spiro Engine - http://lspiroengine.com/?p=530
I'm not familiar with the language used in that article though, so I can't really follow it and convert it easily to ActionScript 3.0.
Your task is described by Hanan Samet.
This works by first building the quadtree and then assigning to each quad cell the corresponding Morton code (bit-interleaving code).
Once you have the code, you assign it to the objects in the quad; then you can delete the quadtree. You can then search by converting a coordinate to the corresponding Morton code and doing a binary search on the Morton index. Instead of Morton order (also called Z-order) you can also use Hilbert or Gray codes.
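The de-interleaving itself is just shifts and masks. Here is a small Python sketch (Python rather than ActionScript, but the bit operations translate directly); it assumes the question's convention that odd 1-based bit positions drive X and even ones drive Y:

def deinterleave(code):
    """Split a Morton code into (x, y): odd 1-based bits -> x, even -> y."""
    x = y = 0
    bit = 0
    while code:
        x |= (code & 1) << bit    # 1st, 3rd, 5th, ... bits (1-based) -> x
        code >>= 1
        y |= (code & 1) << bit    # 2nd, 4th, 6th, ... bits (1-based) -> y
        code >>= 1
        bit += 1
    return x, y

def cell_position(code, cell_size):
    """Pixel position of a cell, given its Morton code within a level."""
    x, y = deinterleave(code)
    return x * cell_size, y * cell_size

# Cell #10 = 1010 in a Level 2 grid of 64px cells:
print(cell_position(0b1010, 64))   # -> (0, 192), matching the example above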

Any rules of thumb on how to smooth an FFT spectrum to prevent artifacts when hand-tweaking?

I've got an FFT magnitude spectrum and I want to create a filter from it that selectively passes periodic noise sources (e.g. sinewave spurs) and zeros out the frequency bins associated with the random background noise. I understand sharp transitions in the freq domain will create ringing artifacts once this filter is IFFT'd back to the time domain... and so I'm wondering if there are any rules of thumb for how to smooth the transitions in such a filter to avoid such ringing.
For example, if the FFT has 1M frequency bins, and there are five spurs poking out of the background noise floor, I'd like to zero all bins except the peak bin associated with each of the five spurs. The question is how to handle the neighboring spur bins to prevent artifacts in the time domain. For example, should the bin on each side of a spur bin be set to 50% amplitude? Should two bins on either side of a spur bin be used (the closest one at 50%, and the next closest at 25%, etc.)? Any thoughts greatly appreciated. Thanks!
I like the following method:
1. Create the ideal magnitude spectrum (remembering to make it symmetrical about DC)
2. Inverse transform to the time domain
3. Rotate the block by half the blocksize
4. Apply a Hann window
I find it creates reasonably smooth frequency domain results, although I've never tried it on something as sharp as you're suggesting. You can probably make a sharper filter by using a Kaiser-Bessel window, but you have to pick the parameters appropriately. By sharper, I'm guessing maybe you can reduce the sidelobes by 6 dB or so.
Here's some sample Matlab/Octave code. To test the results, I used freqz(h, 1, length(h)*10);.
function [ht, htrot, htwin] = ArbBandPass(N, freqs)
%# N = desired filter length
%# freqs = array of frequencies, normalized by pi, to turn into passbands
%# returns raw, rotated, and rotated+windowed coeffs in time domain
if any(freqs >= 1) || any(freqs <= 0)
    error('0 < passband frequency < 1.0 required to fit within (DC,pi)')
end
hf = zeros(N,1); %# magnitude spectrum from DC to 2*pi is initialized to 0
%# In Matlab's FFT, idx 1 -> DC, idx 2 -> bin 1, idx N/2 -> Fs/2 - 1, idx N/2 + 1 -> Fs/2, idx N -> bin -1
idxs = round(freqs * N/2) + 1; %# indices of passband freqs between DC and pi
hf(idxs) = 1; %# set desired positive frequencies to 1
hf(N - (idxs-2)) = 1; %# make the 2-sided spectrum symmetric, guaranteeing real filter coeffs in the time domain
ht = ifft(hf); %# this will have a small imaginary part due to numerical error
if any(abs(imag(ht)) > 2*eps(max(abs(real(ht)))))
    warning('Imaginary part of time domain signal surprisingly large - is the spectrum symmetric?')
end
ht = real(ht); %# discard tiny imag part from numerical error
htrot = [ht((N/2 + 1):end) ; ht(1:(N/2))]; %# circularly rotate time domain block by N/2 points
win = hann(N, 'periodic'); %# might want to use a window with a flatter mainlobe
htwin = htrot .* win;
htwin = htwin .* (N/sum(win)); %# normalize peak amplitude by compensating for the width of the window lineshape
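If you'd rather prototype outside Matlab, here is a rough NumPy equivalent of the same four steps (the function name is mine, and np.hanning(N+1)[:-1] stands in for hann(N, 'periodic')):

import numpy as np

def arb_band_pass(n, freqs):
    """Ideal spectrum -> IFFT -> rotate by n/2 -> Hann window.
    freqs are passband centres normalized to (0, 1), where 1 = Nyquist."""
    hf = np.zeros(n)
    idxs = np.round(np.asarray(freqs) * n / 2).astype(int)
    hf[idxs] = 1.0                          # desired positive frequencies
    hf[(-idxs) % n] = 1.0                   # mirror them so the result is real
    ht = np.real(np.fft.ifft(hf))           # tiny imaginary part discarded
    htrot = np.roll(ht, n // 2)             # rotate the block by half its size
    win = np.hanning(n + 1)[:-1]            # "periodic" Hann window
    return htrot * win * (n / win.sum())    # compensate for the window's area

h = arb_band_pass(1024, [0.1, 0.25, 0.4])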

Multivariate Bisection Method

I need an algorithm to perform a 2D bisection method for solving a 2x2 non-linear problem. Example: two equations f(x,y)=0 and g(x,y)=0 which I want to solve simultaneously. I am very familiar with the 1D bisection (as well as other numerical methods). Assume I already know the solution lies between the bounds x1 < x < x2 and y1 < y < y2.
In a grid the starting bounds are:
^
| C D
y2 -+ o-------o
| | |
| | |
| | |
y1 -+ o-------o
| A B
o--+------+---->
x1 x2
and I know the values f(A), f(B), f(C) and f(D) as well as g(A), g(B), g(C) and g(D). To start the bisection I guess we need to divide the points out along the edges as well as the middle.
^
| C F D
y2 -+ o---o---o
| | |
|G o o M o H
| | |
y1 -+ o---o---o
| A E B
o--+------+---->
x1 x2
Now considering the possibilities of combinations such as checking if f(G)*f(M)<0 AND g(G)*g(M)<0 seems overwhelming. Maybe I am making this a little too complicated, but I think there should be a multidimensional version of the bisection, just as Newton-Raphson can easily be generalized to multiple dimensions using gradient operators.
Any clues, comments, or links are welcomed.
Sorry, while bisection works in 1-d, it fails in higher dimensions. You simply cannot break a 2-d region into subregions using only information about the function at the corners of the region and a point in the interior. In the words of Mick Jagger, "You can't always get what you want".
I just stumbled upon an answer to this at geometrictools.com, with C++ code.
Edit: the code is now on GitHub.
I would split the area along a single dimension only, alternating dimensions. The condition you have for the existence of a zero of a single function would be "you have two points of different sign on the boundary of the region", so I'd just check that for the two functions. However, I don't think it would work well, since zeros of both functions in a particular region don't guarantee a common zero (this might even exist in a different region that doesn't meet the criterion).
For example, look at this image:
There is no way you can distinguish the squares ABED and EFIH given only f() and g()'s behaviour on their boundary. However, ABED doesn't contain a common zero and EFIH does.
This would be similar to region queries using e.g. kD-trees, if you could positively identify that a region doesn't contain a zero of e.g. f. Still, this can be slow under some circumstances.
If you can assume (per your comment to woodchips) that f(x,y)=0 defines a continuous monotone function y=f2(x), i.e. for each x1<=x<=x2 there is a unique solution for y (you just can't express it analytically due to the messy form of f), and similarly y=g2(x) is a continuous monotone function, then there is a way to find the joint solution.
If you could calculate f2 and g2, then you could use a 1-d bisection method on [x1,x2] to solve f2(x)-g2(x)=0. And you can do that by using 1-d bisection on [y1,y2] again for solving f(x,y)=0 for y for any given fixed x that you need to consider (x1, x2, (x1+x2)/2, etc.) - that's where the continuous monotonicity is helpful - and similarly for g. You have to make sure to update x1-x2 and y1-y2 after each step.
This approach might not be efficient, but should work. Of course, lots of two-variable functions don't intersect the z-plane as continuous monotone functions.
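Here is a Python sketch of that nested bisection, under the stated monotonicity assumption (all names and the example system are illustrative):

def bisect(fn, lo, hi, iters=60):
    """Standard 1-d bisection; assumes fn(lo) and fn(hi) have opposite signs."""
    flo = fn(lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        fmid = fn(mid)
        if (fmid < 0) == (flo < 0):
            lo, flo = mid, fmid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def solve_system(f, g, x1, x2, y1, y2):
    """Solve f(x,y)=0 and g(x,y)=0 by bisecting on x over the
    implicit curves y = f2(x) and y = g2(x)."""
    f2 = lambda x: bisect(lambda y: f(x, y), y1, y2)   # y with f(x,y)=0
    g2 = lambda x: bisect(lambda y: g(x, y), y1, y2)   # y with g(x,y)=0
    x = bisect(lambda x: f2(x) - g2(x), x1, x2)
    return x, f2(x)

# e.g. f(x,y) = x + y - 1 and g(x,y) = x - y  ->  root at (0.5, 0.5)
print(solve_system(lambda x, y: x + y - 1, lambda x, y: x - y, 0, 1, 0, 1))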
I'm not very experienced with optimization, but I built a solution to this problem with a bisection algorithm like the question describes. I think my solution still has a bug, because it computes some roots twice in certain cases, but I think it's simple to fix and I will try it later.
EDIT: I saw jpalecek's comment, and I now understand that some of the premises I assumed are wrong, but the method still works in most cases. More specifically, a zero is guaranteed only if the two functions change sign in opposite directions, and the cases of zeros at the vertices need special handling. I think it's possible to build a justified and satisfactory heuristic for that, but it is a little complicated; for now I consider it more promising to take the function given by f_abs = abs(f, g) and build a heuristic to find its local minima, looking at the gradient direction at the edge midpoints.
Introduction
Consider the configuration in the question:
^
| C D
y2 -+ o-------o
| | |
| | |
| | |
y1 -+ o-------o
| A B
o--+------+---->
x1 x2
There are many ways to do this, but I chose to use only the corner points (A, B, C, D) and not the edge-middle or center points like the question suggests. Assume I have two functions f(x,y) and g(x,y) as you describe. In truth it's generally one function (x,y) -> (f(x,y), g(x,y)).
The steps are the following, and there is a summary (with Python code) at the end.
Step by step explanation
Calculate the products of each scalar function (f and g) with itself at adjacent points, and compute the minimum product for each function along each axis of variation (x and y):
Fx = min(f(C)*f(B), f(D)*f(A))
Fy = min(f(A)*f(B), f(D)*f(C))
Gx = min(g(C)*g(B), g(D)*g(A))
Gy = min(g(A)*g(B), g(D)*g(C))
This looks at the products across two opposite sides of the rectangle and takes their minimum; a negative value indicates a sign change. It's a bit redundant, but it works well. Alternatively, you could try other configurations, such as using the points E, F, G and H shown in the question, but I think it makes sense to use the corner points because they better cover the whole area of the rectangle (though that is only an impression).
Compute the minimum over the two axes for each function:
F = min(Fx, Fy)
G = min(Gx, Gy)
Each of these values indicates, for f and g respectively, whether a zero may exist within the rectangle.
Compute the maximum of them:
max(F, G)
If max(F, G) < 0, then there is a root inside the rectangle. Additionally, if f(C) = 0 and g(C) = 0, there is a root too and we proceed the same way; but if the root is at another corner we ignore it, because another rectangle will account for it (I want to avoid counting roots twice). The statement below summarizes this:
guaranteed_contain_zeros = max(F, G) < 0 or (fC == 0 and gC == 0)
In this case we proceed by breaking the region up recursively until the rectangles are as small as we want.
Otherwise, a root may still exist inside the rectangle. Because of that, we have to use some criterion to keep subdividing regions until we reach a minimum granularity. The criterion I used is to require that the largest dimension of the current rectangle be smaller than the smallest dimension of the original rectangle (delta in the code sample below).
Summary
This Python code summarizes the approach:
def balance_points(x_min, x_max, y_min, y_max, delta, eps=2e-32):
    # f and g are assumed to be defined globally as f(x, y) and g(x, y).
    width = x_max - x_min
    height = y_max - y_min
    x_middle = (x_min + x_max) / 2
    y_middle = (y_min + y_max) / 2

    # Corner values (A bottom-left, B bottom-right, C top-left, D top-right).
    fA, gA = f(x_min, y_min), g(x_min, y_min)
    fB, gB = f(x_max, y_min), g(x_max, y_min)
    fC, gC = f(x_min, y_max), g(x_min, y_max)
    fD, gD = f(x_max, y_max), g(x_max, y_max)

    Fx = min(fC*fB, fD*fA)
    Fy = min(fA*fB, fD*fC)
    Gx = min(gC*gB, gD*gA)
    Gy = min(gA*gB, gD*gC)

    F = min(Fx, Fy)
    G = min(Gx, Gy)

    largest_dim = max(width, height)
    guaranteed_contain_zeros = max(F, G) < 0 or (fC == 0 and gC == 0)

    if guaranteed_contain_zeros and largest_dim <= eps:
        return [(x_middle, y_middle)]
    elif guaranteed_contain_zeros or largest_dim > delta:
        # Split along the longer dimension and recurse on both halves.
        if width >= height:
            return (balance_points(x_min, x_middle, y_min, y_max, delta) +
                    balance_points(x_middle, x_max, y_min, y_max, delta))
        else:
            return (balance_points(x_min, x_max, y_min, y_middle, delta) +
                    balance_points(x_min, x_max, y_middle, y_max, delta))
    else:
        return []
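For reference, a tiny usage sketch (the example system is mine, not from the original project; it has a single common zero at the origin):

# Example system: f(x,y) = x + y and g(x,y) = x - y cross only at (0, 0).
def f(x, y):
    return x + y

def g(x, y):
    return x - y

roots = balance_points(-1.0, 1.0, -1.0, 1.0, delta=0.5)
print(roots)   # one midpoint vanishingly close to (0, 0)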
Results
I have used similar code in a personal project (GitHub here), and it draws the rectangles of the algorithm and the root (the system has a balance point at the origin):
Rectangles
It works well.
Improvements
In some cases the algorithm computes the same zero twice. I think it can have two reasons:
The case where the functions give exactly zero at neighbouring rectangles (because of numerical truncation). In this case the remedy is to increase eps (making the final rectangles larger). I chose eps=2e-32 because 32 bits is half the precision (on a 64-bit architecture), so it is likely that the function won't hit an exact zero... but it was more like a guess; I don't know if it is the best choice. However, if we decrease eps much further, we exceed the recursion limit of the Python interpreter.
The case in which f(A), f(B), etc. are near zero and the product is truncated to zero. I think this can be reduced by using the product of the signs of f and g in place of the product of the function values.
I think it is possible to improve the criterion for discarding a rectangle. It could take into account how much the functions vary over the region of the rectangle and how far the function values are from zero, perhaps using a simple relation between the average and variance of the function values at the corners. Alternatively (and more complicated), we could use a stack to store the values at each recursion instance and guarantee that these values are converging before stopping the recursion.
This is a similar problem to finding critical points in vector fields (see http://alglobus.net/NASAwork/topology/Papers/alsVugraphs93.ps).
If you have the values of f(x,y) and g(x,y) at the vertices of your quadrilateral and you are in a discrete problem (such that you don't have an analytical expression for f(x,y) and g(x,y), nor values at other locations inside the quadrilateral), then you can use bilinear interpolation to get two equations (for f and g). For the 2D case the solution reduces to a quadratic equation, and depending on its roots (1 root, 2 real roots, 2 imaginary roots) you may have 1 solution, 2 solutions, or no solutions, which may lie inside or outside your quadrilateral.
If instead you have analytic functions of f(x,y) and g(x,y) and want to use them, this is not useful. Instead you could divide your quadrilateral recursively; however, as was already pointed out by jpalecek (2nd post), you would need a way to stop your subdivisions by finding a test that assures there are no zeros inside a quadrilateral.
Let f_1(x,y), f_2(x,y) be two functions which are continuous and monotonic with respect to x and y. The problem is to solve the system f_1(x,y) = 0, f_2(x,y) = 0.
The alternating-direction algorithm is illustrated below. Here, the lines depict sets {f_1 = 0} and {f_2 = 0}. It is easy to see that the direction of movement of the algorithm (right-down or left-up) depends on the order of solving the equations f_i(x,y) = 0 (e.g., solve f_1(x,y) = 0 w.r.t. x then solve f_2(x,y) = 0 w.r.t. y OR first solve f_1(x,y) = 0 w.r.t. y and then solve f_2(x,y) = 0 w.r.t. x).
Given the initial guess, we don't know where the root is. So, in order to find all roots of the system, we have to move in both directions.
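Here is a Python sketch of the alternating-direction iteration (the helper and the example system are illustrative; note the iteration can oscillate rather than converge for some systems, which is related to the direction-of-movement issue mentioned above):

def bisect(fn, lo, hi, iters=60):
    """1-d bisection; assumes fn(lo) and fn(hi) have opposite signs."""
    flo = fn(lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        fmid = fn(mid)
        if (fmid < 0) == (flo < 0):
            lo, flo = mid, fmid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def alternating_directions(f1, f2, y, x_lo, x_hi, y_lo, y_hi, sweeps=50):
    """Alternately solve f1(x,y)=0 for x (y fixed), then f2(x,y)=0 for y."""
    for _ in range(sweeps):
        x = bisect(lambda t: f1(t, y), x_lo, x_hi)
        y = bisect(lambda t: f2(x, t), y_lo, y_hi)
    return x, y

# e.g. f1 = x + y/2 - 1 and f2 = x - y; the iteration settles on (2/3, 2/3):
print(alternating_directions(lambda x, y: x + y / 2 - 1,
                             lambda x, y: x - y,
                             0.0, -2.0, 2.0, -2.0, 2.0))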

Multiply two Point objects

I have noticed in other languages such as Java that there are Objects such as Vector2d that have a multiply method. How would I do the same with ActionScript 3? I know that the Point or Vector3D classes have add/subtract methods, but neither offers multiply/divide methods.
What is the best way to multiply two Point objects? Would it be something like the following?
var p1:Point = new Point(10, 20);
var p2:Point = new Point(30, 40);
var p3:Point = new Point((p1.x * p2.x), (p1.y * p2.y));
Also why would multiply/divide be left out of these classes?
EDIT* Here is a link to the Vector2d class I have seen in Java: Java Vector2d multiply
What would be the mathematical meaning of new Point((p1.x * p2.x), (p1.y * p2.y))? In a vector space, there is usually no multiplication, as it is simply not clear what it should do.
However, there is a so-called "scalar product" (dot product) defined on the vectors of Euclidean space, which yields a number ("scalar", hence the name):
double s = p1.x * p2.x + p1.y * p2.y;
This is useful, for example, if you need to test whether two lines are orthogonal. However, the result is a number, not a vector.
Another thing to do with vectors is to scale them:
double factor = 1.5;
Point p3 = new Point(factor * p1.x, factor * p1.y);
Here the result is indeed a point, but the input is a number and a vector, not two vectors. So, unless you can say what the interpretation/meaning of your vector multiplication is, you shouldn't define one. (Note that I don't know whether your proposal might be useful or not -- it is simply not a "standard" multiplication I know of.)
I really doubt that Java has multiply methods for point or vector classes, because a canonical definition of a concept like "multiplication" is not possible for these objects. There are a few operations generally called "products" (dot product, cross product, complex multiplication, by identification of the complex plane with the Euclidean plane), but they all have properties that strongly differ from the properties of the usual real multiplication. The component-wise multiplication that you suggested doesn't make much sense geometrically. So there is no definitive answer; normally you just pick the "product" that you need to solve the problem at hand.
(forgot to finish the post and by now some answers appeared, so sorry for the redundancies)
well, basically, that is because it does not really make sense to multiply two points in 2d ... the vector product does not exist for 2 dimensions ... the scalar product yields a scalar, as the name indicates, thus a Number ... the only thing that would make sense would be the complex product:
var p3:Point = new Point(p1.x * p2.x - p1.y * p2.y, p1.x * p2.y + p1.y * p2.x);
when looking at polar coordinates you will realize that the angle of p3 is the sum of the angles of p1 and p2, and the length is the product ...
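You can check that polar-coordinate property quickly, e.g. with Python's built-in complex type (a throwaway verification, not AS3):

import cmath

p1 = complex(1, 1)   # angle 45 degrees, length sqrt(2)
p2 = complex(0, 2)   # angle 90 degrees, length 2
p3 = p1 * p2         # the complex product: -2 + 2j

# angle of p3 = sum of the angles; length of p3 = product of the lengths
print(cmath.phase(p3), cmath.phase(p1) + cmath.phase(p2))  # both ~2.356 rad
print(abs(p3), abs(p1) * abs(p2))                          # both ~2.828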
even in 3d, you rarely need multiplication of vectors ... only in the rare case that you have a 3d physics engine that includes things such as angular momentum or the Lorentz force ...
so the question is: what do you intend to do in the end? what result should the multiplication provide?