Inverse of Fourier transform in 3D for a non-uniform grid - fft

My data are samples of the Fourier transform of a function, taken at points distributed in a ball with uniformly spaced radial distances and uniformly spaced spherical angles (not Gaussian quadrature angles).
So the grid in Fourier space is obviously non-uniform (uniform spherical angles imply a non-uniform distribution of points on the sphere).
I need to reconstruct the function from such data. I don't care yet about the efficiency of the algorithm, but I want to know whether reconstruction from such data is possible in principle. I know that reconstruction is very sensitive to the choice of grid in Fourier space.
P.S. I know that in 2D, for example, a uniform polar-coordinate grid is OK.
P.P.S. I tried to do the inversion by discretizing the Fourier integral in 3D, so the result is a sum over all points in the ball, each multiplied by the corresponding exponential and by the discretized Jacobian (in spherical coordinates).
The pictures I get are unsatisfactory.
[Image: test_reconstruction.png]
This picture should show a small square in the middle (a slice through a cube in 3D).

The answer is yes. Sorry for taking your time with questions. Naive discretization of the Fourier integral already gives meaningful results.
[Image: Reconstruction of a slice with square potential (with post-smoothing)]
[Image: Reconstruction of a slice with round potential (no post-smoothing)]
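For reference, here is a minimal NumPy sketch of such a naive discretization. The grid sizes, the radial cutoff, and the ball-indicator test function (whose transform is known in closed form) are illustrative assumptions, not the setup from the question:

```python
import numpy as np

def naive_inverse_ft(k_points, F_vals, weights, x_points):
    """Naive discretization of the 3D inverse Fourier integral:
    f(x) ~ (2*pi)**-3 * sum_j F(k_j) * exp(i k_j . x) * w_j,
    where w_j is the spherical volume element k^2 sin(theta) dk dtheta dphi."""
    phases = np.exp(1j * x_points @ k_points.T)   # shape (M, N)
    return (phases @ (F_vals * weights)) / (2 * np.pi) ** 3

# Spherical sampling grid: uniformly spaced radii and angles, as in the question.
nr, nth, nph = 20, 16, 32
r = np.linspace(0.1, 6.0, nr)
theta = (np.arange(nth) + 0.5) * np.pi / nth      # midpoints avoid the poles
phi = np.arange(nph) * 2 * np.pi / nph
R, T, P = np.meshgrid(r, theta, phi, indexing="ij")
k = np.stack([R * np.sin(T) * np.cos(P),
              R * np.sin(T) * np.sin(P),
              R * np.cos(T)], axis=-1).reshape(-1, 3)
w = (R**2 * np.sin(T)).ravel() * (r[1] - r[0]) * (np.pi / nth) * (2 * np.pi / nph)

# Test data: the transform of the indicator of a ball of radius a is
# F(k) = 4*pi*(sin(k*a) - k*a*cos(k*a)) / k**3.
a = 1.0
kmag = np.linalg.norm(k, axis=1)
F = 4 * np.pi * (np.sin(kmag * a) - kmag * a * np.cos(kmag * a)) / kmag**3

# Reconstruct along a line through the origin; expect roughly 1 for |x| < a.
x = np.zeros((81, 3))
x[:, 0] = np.linspace(-2.0, 2.0, 81)
f = naive_inverse_ft(k, F, w, x).real
```

The result is blurred and rings near the edge (the radial cutoff acts as a low-pass filter), which matches the observation that post-smoothing helps.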

Related

2D FFT: Identify connection between certain frequencies in an image

Imagine you have a real image and you put it through a 2D FFT.
Usually this yields a cross-like structure (an edge effect) and some real content, depending on the image.
Imagine the original image contains two spots with bad lighting, which, for example, yield two distinct frequencies in the Fourier domain.
Imagine another, different image, put through the FFT, with a single spot due to bad lighting; this spot yields the same frequencies in the Fourier domain as the spots in the first image combined.
How would one distinguish those two amplitude spectra? In my opinion, there is no way of knowing the location of a certain frequency in the Fourier domain. Only direction information is retained, i.e. a horizontal line in the image will yield a vertical line in the Fourier amplitude spectrum.
So the information I am after has to be hidden in the phase spectrum. How can I recover such a specific piece of information from a phase spectrum that looks like noise?
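One way to see where that information hides is the Fourier shift theorem: translating a feature leaves the amplitude spectrum untouched and only adds a linear phase ramp. A small NumPy experiment (the spot positions and sizes are made up) shows this, together with the phase-correlation trick that image-registration methods use to read the shift back out of the phase:

```python
import numpy as np

def spot_image(shape, centers, sigma=3.0):
    """Image containing Gaussian 'bad lighting' spots at the given centers."""
    y, x = np.mgrid[:shape[0], :shape[1]]
    img = np.zeros(shape)
    for cy, cx in centers:
        img += np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
    return img

a = spot_image((128, 128), [(40, 40)])
b = spot_image((128, 128), [(90, 100)])   # same spot, different location

A, B = np.fft.fft2(a), np.fft.fft2(b)
print(np.allclose(np.abs(A), np.abs(B)))  # True: amplitudes cannot tell them apart

# Phase correlation: the normalized cross-power spectrum is a pure phase ramp
# whose inverse FFT peaks at the relative displacement of the two spots.
R = A * np.conj(B)
R /= np.abs(R) + 1e-12
peak = np.unravel_index(np.argmax(np.abs(np.fft.ifft2(R))), a.shape)
print(peak)  # (78, 68) = (-50, -60) mod 128: the shift, up to sign convention
```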

How to get the radial distance to the boundary given a point in ITK?

I'm loading a 3D CT model and running thinning algorithms on it. Now I'd like to measure how much thinning the algorithms do. How can I find the distances between skeleton points and their nearest/farthest boundary points?
Compute the distance transform of the skeleton points and boundary points (stored as binary masks). Your answer lies therein.
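For example, with SimpleITK (a sketch only; the file names are hypothetical, and it assumes the segmentation and its skeleton are stored as binary masks):

```python
import SimpleITK as sitk

# Hypothetical inputs: binary masks of the segmentation and of its skeleton.
seg = sitk.ReadImage("segmentation.nii.gz") != 0
skeleton = sitk.ReadImage("skeleton.nii.gz") != 0

# Signed distance to the segmentation boundary; with useImageSpacing=True the
# values are in physical units (e.g. mm), and voxels inside are negative.
dist = sitk.SignedMaurerDistanceMap(seg, insideIsPositive=False,
                                    squaredDistance=False, useImageSpacing=True)

dist_arr = sitk.GetArrayFromImage(dist)
skel_arr = sitk.GetArrayFromImage(skeleton).astype(bool)

# Distance from each skeleton voxel to its nearest boundary point:
radii = -dist_arr[skel_arr]   # flip the sign: inside is negative
print(radii.min(), radii.mean(), radii.max())
```

Note the distance transform gives the nearest boundary point; the farthest one needs a separate search.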

Calculate 3D coordinates from 2D Image plane accounting for perspective without direct access to view/projection matrix

This is my first time asking a question on Stack Exchange; hopefully this is the right place.
I can't seem to develop a close enough approximation algorithm for my situation, as I'm not exactly the best at 3D math.
I have a 3D environment in which I can access the position and rotation of any object, including my camera, and I can run trace lines between any two points to get the distance from a point to a point of collision. I also have my camera's field of view. However, I do not have any form of access to the world/view/projection matrices.
I also have a collection of 2D images that are basically screenshots of the 3D environment from the camera. Each collection is from the same point and angle, and a typical set is taken at about a 60-degree angle down from the horizon.
I have been able to get to the point of using "registration point entities" that can be placed in the 3D world to represent the corners of the 2D image; when a point is picked on the 2D image, it is read as a coordinate in the range 0-1, which is then interpolated between the 3D positions of the registration points. This works well, but only if the image is a perfect top-down view. When the camera is tilted and another dimension of perspective is introduced, the results become grossly inaccurate, as there is no compensation for this perspective.
I don't need to calculate the height of a point, say a window on a skyscraper. At minimum, if I extend a line out from a specified image-space point, I need the point where that line would intersect the ground if nothing were in the way.
All of the material I found about this says to just deproject the point using the world/view/projection matrices, which I find straightforward in itself, except that I don't have access to these matrices, just data I can collect at screenshot time; the other algorithms I found use complex math I simply don't grasp yet.
One end goal would be to place markers in the 3D environment where a user clicks in the image, without being able to run a simple deprojection from the user's view.
Any help would be appreciated, thanks.
Edit: Herp derp; while my implementation is a bit odd due to the limitations of my situation, the solution essentially boiled down to ananthonline's answer about simply recalculating the view/projection matrices.
Between the position, rotation, and FOV of the camera, could you not calculate the view/projection matrices of the camera (songho.ca/opengl/gl_projectionmatrix.html), thus allowing you to unproject known 3D points?
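A sketch of that suggestion in NumPy terms, assuming a y-up, right-handed, OpenGL-style convention with the camera looking down -z (the function names and the yaw/pitch parametrization are illustrative, not part of the original answer):

```python
import numpy as np

def perspective(fov_y, aspect, near, far):
    """OpenGL-style projection matrix rebuilt from the vertical FOV (radians)."""
    f = 1.0 / np.tan(fov_y / 2.0)
    return np.array([[f / aspect, 0, 0, 0],
                     [0, f, 0, 0],
                     [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
                     [0, 0, -1, 0]])

def rotation(yaw, pitch):
    """World-from-camera rotation from yaw and pitch (roll omitted)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    return ry @ rx

def pick_ground_point(u, v, cam_pos, yaw, pitch, fov_y, aspect, ground_y=0.0):
    """Map a click (u, v) in [0, 1]^2 on the image to a point on the ground plane."""
    f = 1.0 / np.tan(fov_y / 2.0)
    x_ndc, y_ndc = 2 * u - 1, 1 - 2 * v                      # image v axis points down
    d_cam = np.array([x_ndc * aspect / f, y_ndc / f, -1.0])  # ray in camera space
    d = rotation(yaw, pitch) @ d_cam                         # ray in world space
    if abs(d[1]) < 1e-9:
        return None                                          # ray parallel to the ground
    t = (ground_y - cam_pos[1]) / d[1]
    return cam_pos + t * d if t > 0 else None                # None: plane behind camera
```

`perspective` rebuilds the matrix from the linked page; `pick_ground_point` is the same mapping run backwards, using only data available at screenshot time (camera position, rotation, and FOV).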

Calculate the area a camera can see on a plane

I have a camera at coordinates (x, y) at height h, looking onto the x-y plane at a specific angle, with a specific field of view. I want to calculate the four corners of the area the camera can see on the plane.
There is probably some kind of formula for that, but I can't seem to find it on google.
Edit: I should probably mention that I mean a camera in the 3D-Graphics sense. Specifically I'm using XNA.
I've had to do similar things for debugging graphics code in 3D games. I found the easiest way of thinking about it was to create the vectors representing the corners of the screen and then calculate their intersections with whatever objects are relevant (in this case, a plane).
Take a look at your view-projection matrix (or whatever your camera's matrix stack looks like multiplied together) and note that the screen space it outputs to has corners with homogenized coordinates (-1, -1), (-1, 1), (1, -1), (1, 1). Knowing this, you're left with one free variable and can solve for the vector representing each corner of the camera's view.
This is a pain, though. It's much easier to construct a vector for each corner as if the camera weren't rotated or translated, and then transform them by the view matrix to get world-space vectors. Then you can calculate the intersections between those vectors and the plane to get your four corners.
I have a day job, so I leave the math to you. Some links that may help with that, however:
Angle/Field of view calculations
Line plane intersection
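A compact sketch of that second approach, under assumed conventions (y-up, camera looking down -z, a 3x3 world-from-camera rotation; names are illustrative):

```python
import numpy as np

def visible_quad(cam_pos, cam_rot, fov_y, aspect, plane_y=0.0):
    """Intersections of the four screen-corner rays with the plane y = plane_y."""
    ty = np.tan(fov_y / 2.0)          # half-height of the image plane at z = -1
    tx = ty * aspect                  # half-width
    quad = []
    for sx, sy in [(-1, -1), (1, -1), (1, 1), (-1, 1)]:
        d = cam_rot @ np.array([sx * tx, sy * ty, -1.0])  # corner ray, world space
        t = (plane_y - cam_pos[1]) / d[1] if abs(d[1]) > 1e-9 else -1.0
        quad.append(cam_pos + t * d if t > 0 else None)   # None: at/above horizon
    return quad
```

If a corner ray points at or above the horizon, the visible area is unbounded on that side, hence the None.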
Ignoring lens distortions, and assuming the lens is almost at the focal point, you simply have one triangle formed by the sensor size and the lens, and another from the lens to the subject; similar triangles give you the size of the subject plane.
If you want a tilted object plane, that's just a projection onto the perpendicular object plane.
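In numbers (the sensor size, focal length, and distance are made-up values):

```python
# Similar triangles: subject size / distance = sensor size / focal length.
sensor_h = 0.024    # m, vertical sensor size (assumed full-frame)
focal = 0.050       # m, lens focal length (assumed)
distance = 10.0     # m, lens to subject plane
subject_h = sensor_h * distance / focal
print(subject_h)    # 4.8 m of the scene fits vertically at that distance
```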

Vectors for Game Programming

I'm not sure how to use vectors correctly in game programming. I have been reading Advanced Game Design with Flash, which shows how to create a vector with a start point and an end point and how to use that for games; for example, the start point would be a character's position, and the x and y lengths would be the velocity. But since I started looking online, I have found that vectors are usually just x and y with no start point or end point, and a character is moved using a position vector, a velocity vector, and an acceleration vector. I have started creating my own vector class, and I wonder what the reasons for and against each method are. Or is it completely unimportant?
Initially, a vector means a direction. The classical vector is used in physics to represent a velocity, so that the vector's direction stands for the heading and its length for the speed. But in graphics, vectors are also used to represent positions. So if you have some point in 2D space, denoted by x, y, it remains a point unless you want to know in what direction it lies relative to the origin, which is usually the center of the coordinate system. In 2D graphics we deal with a Cartesian coordinate system whose origin is in the top-left corner of the screen. But you can also take the direction of some vector relative to any other point in space. That is why you also have vector operations like addition, subtraction, dot product, and cross product; all of these help you measure distances and angles between vectors.

I would suggest you buy a book on graphics programming; most of them include an easy-to-grasp primer on vector math. And you don't need to write a vector class in AS 3.0: there is a generic one, Vector3D.
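The two conventions coincide once you treat a position as a vector from the origin. A minimal sketch of the position/velocity/acceleration style (in NumPy rather than AS 3.0; the values are arbitrary):

```python
import numpy as np

# Position, velocity and acceleration are all plain (x, y) vectors; a
# "start point + end point" vector is simply end - start.
position = np.array([100.0, 200.0])     # where the character is
velocity = np.array([3.0, -1.0])        # pixels per frame
acceleration = np.array([0.0, 0.5])     # e.g. gravity

for _ in range(3):                      # three Euler-integration steps
    velocity = velocity + acceleration  # acceleration changes the velocity...
    position = position + velocity      # ...which in turn moves the position
print(position, velocity)
```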