What would you see if left-right images of a 3d view are inverted? - language-agnostic

Sorry for the possibly trivial post, but I really cannot figure it out...
Let's suppose you have some 3D glasses or something else that gives you stereo 3D vision.
What happens if you swap the left and right images? Thinking about it, I cannot really picture the result. Would you see the reverse of the image, or just some axis shift?
Unfortunately I cannot try it out in any way, but even if I could, I'd love to reason the thing out in my head before trying it.
So please, any help, any idea, any hint that helps me understand or discuss this in depth is welcome.

For the human brain it's next to impossible to give a formal answer, because frankly, neurologists still don't fully understand how it works in detail. But this much we know:
Our brain performs no absolute "measurement" of the parallax in stereo images. The whole depth perception works on parallax differences. You could say the brain takes the derivative of the parallax to build its mental representation of depth, and that derivative is taken to be (nearly) proportional to depth. By swapping the pictures the derivative becomes negative, so at every point the brain accumulates depth in the wrong direction.
However, parallax is not the only source of depth perception. Of similar importance is learned knowledge about typical objects in the world. Faces, for example, are "known" to never be inside out, so even with negative parallax that knowledge overrules the cue and the face is perceived in its correct form (though it clashes hilariously with the surroundings).
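To make the sign flip concrete, here is a minimal numeric sketch in Python, assuming the standard pinhole stereo relation Z = f * B / d (focal length f, baseline B, disparity d = x_left - x_right); all the numbers are made up for illustration:

import numpy as np

f, B = 0.05, 0.065                      # made-up focal length and eye baseline (metres)
d = np.array([4.0, 2.0, 1.0]) * 1e-4    # disparities of a near, a middle, and a far point

print("normal: ", f * B / d)            # nearer points have larger disparity
print("swapped:", f * B / -d)           # swapping L/R negates d: depth comes out inverted

Every depth comes out with the opposite sign, which is the formal counterpart of the brain "accumulating depth in the wrong direction" described above.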

You would see it as "inside-out" (it's a little more complex than that, but that's the basic truth).

You can experience 3D without any special hardware, thanks to side-by-side stereoscopic images: cross your eyes until the two images fuse into a single one.
You can then swap right and left by editing the image (a small sketch of that edit follows below).
Here is an example with an image I've found on the web: https://imgur.com/a/ov7U7N5
Do you feel any difference in the depth? Do you see things inside out?
I believe the sense of depth in this case is preserved. But maybe it's just me.
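For anyone who wants to try the edit themselves, here is a minimal Python/Pillow sketch; the filename is hypothetical, and it assumes the two views sit side by side in a single image:

from PIL import Image

pair = Image.open("stereo_pair.jpg")        # hypothetical side-by-side stereo pair
w, h = pair.size
left = pair.crop((0, 0, w // 2, h))         # left half
right = pair.crop((w // 2, 0, w, h))        # right half

swapped = Image.new(pair.mode, pair.size)
swapped.paste(right, (0, 0))                # right view now on the left
swapped.paste(left, (w // 2, 0))            # left view now on the right
swapped.save("stereo_pair_swapped.jpg")

One caveat when testing: cross-eyed viewing already sends each eye the opposite image compared to parallel viewing, so a pair prepared for one technique is effectively "swapped" when viewed with the other. That may explain why the depth can feel preserved.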

Related

Resizing big images for object detection

I need to perform object detection using deep learning on "huge" images, say 10000x10000 pixels.
At some point in the workflow, I need to resize the images down to something more manageable, say 640x640. At the moment, I am doing this with OpenCV:
import cv2

# read the full-resolution image, resize it, and save the result
img = cv2.imread("some/path/to/my/img")
h, w = 640, 640
img_resized = cv2.resize(img, (w, h))
cv2.imwrite("some/path/to/my/img_resized", img_resized)
Now, when I try to look at some of these pictures (e.g. to check that my bounding boxes are well-defined) with my human eye, I "can't see anything", in the sense that the resize is so aggressive that the image is heavily pixelated.
Does this cause an issue for the training of the algorithm? In the end I can map the bounding boxes output by the model back to the original image (10000x10000px) using some transform, so that is not an issue. But I can't tell whether working on such pixelated images during training causes something to go wrong.
It really depends on what information is lost during the resizing. Going from 10000x10000 to 640x640, I would assume almost everything relevant is lost, making the problem a lot harder, if it is solvable at all.
If you can't solve the problem yourself (i.e. see the objects in the resized image), that is a very bad starting point for solving it with a neural network. I would still try it and see whether the network learns anything.
It probably won't work well. An easy approach is to split the initial image into patches, run detection on each patch, and combine the results (see the sketch below). This can work, but depending on the problem it might not be sufficient.
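A minimal sketch of that tiling idea, reusing the OpenCV setup from the question (overlap handling and box merging are left out; in practice you would use a stride smaller than the patch size and apply non-maximum suppression when merging):

import cv2

def split_into_patches(img, patch=640, stride=640):
    # yield (x, y, tile) for tiles covering the whole image;
    # edge tiles may be smaller, and a stride < patch gives overlapping tiles
    h, w = img.shape[:2]
    for y in range(0, h, stride):
        for x in range(0, w, stride):
            yield x, y, img[y:y + patch, x:x + patch]

img = cv2.imread("some/path/to/my/img")
for x, y, tile in split_into_patches(img):
    # run the detector on `tile`, then shift each resulting box by
    # (x, y) to express it in original-image coordinates
    pass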
If this is not sufficient for your problem, you might want to do some state-of-the-art research and try to find someone with a similar problem. I know that medical images can also be quite big, and people dealing with satellite images face the same problem of very large inputs and may have come up with ways to solve it.

cocos2d-x: why are there gaps in my 3d model?

See the right arm and right leg.
We worked out what we thought was a perfect model, but after we add the animation the gaps appear again. Does anybody know why? We use cocos2d-x for iOS and Android.
This happened to me when the bone data was a bit off. Make sure the parenting and positioning of your bones always stay consistent (i.e. their distances from each other: moving an arm shouldn't cause the hand to move further away from the elbow bone). I saw this same problem when individual bones moved a bit too far away from each other in certain frames.
There's no quick fix for it; I just made sure that for every frame there is no gap in the model. Correct parenting usually fixed this. I hope this helps.

Make sprite character face top-left

I already have these sprites on my computer (taken from the internet), but how do I make the character face top-left? These sprites do not include a character that faces top-left. My question is, how do I do that? I want to use this sprite character for my 2.5D game.
Thanks.
I want it to face like this (top-left):
I appreciate your answer. Thanks.
You want to make the sprite face diagonally, somewhat up and somewhat to the left? (To me, it looks like the chicken is facing to the right, but it's hard for me to really make out individual features on the chicken.)
In that case, you probably need to draw a new graphic based on the others. Hire someone, if you have to. What is easy to do in a typical programming language is moving graphics around (translation), flipping them, rotating them, and stretching them; those are the four basic geometric transformations, and simple things like changing the alpha of the whole sprite are easy as well (a small sketch follows below). But it's not easy at all to programmatically create something that looks like a brand new graphic, even if it is sort of similar to graphics you already have.
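To illustrate which operations are the cheap ones, here is a minimal Python/Pillow sketch (rather than a game-engine one); "chicken.png" is a hypothetical stand-in for the sprite in question:

from PIL import Image, ImageOps

sprite = Image.open("chicken.png")                            # hypothetical filename

mirrored = ImageOps.mirror(sprite)                            # easy: left/right flip
rotated = sprite.rotate(45, expand=True)                      # easy: rotation
stretched = sprite.resize((sprite.width * 2, sprite.height))  # easy: stretch

mirrored.save("chicken_mirrored.png")

None of these produces a genuinely new viewing angle, though: a flipped right-facing chicken faces left, not top-left, which is why a new drawing is needed.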

As3 3D - Ink drop spreading in water

I'm researching the following problem:
Let's say I have a glass of some fluid (water, for example). The fluid is completely transparent and I don't have to render it at all.
However, an ink drop is dropped into the glass and it spreads through the water.
The whole thing should be 3D, and the user should be able to rotate the camera and watch the spreading in real time.
I have researched a couple of ways to approach this problem, but it turned out that most of them are dead ends.
The only approach that has had some success was to use an enormous number of particles which form the skeleton of the "ink spread". The physics simulation of the spreading process is far from perfect, but let's say that's not a problem.
The problem is the rendering part.
As far as I know, I won't be able to speed up the z-sorting much by using Flash's GPU acceleration, because uploading those particles to GPU memory every frame is quite slow.
Can somebody confirm that, please?
The other thing I'm struggling with is the final render. I tried a whole bunch of filters in combination with "post-process" techniques to create smooth lines and gradients between the dots, but the result is terrible. If somebody knows an article that could help me with that, I'd be very grateful.
Overall, if there is another viable approach to the problem, please let me know.
Thanks in advance.
Cheers.
You should probably look at computational fluid dynamics in general to get a basic understanding. That should make it easy to play with ActionScript implementations like Eugene's Fluid Solver, either in 2D or 3D, tweaking the fluid properties to get the look and feel you're after (a minimal sketch of the underlying idea follows below).
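As a taste of what such grid-based solvers do, here is a minimal Python/NumPy sketch of just the dye-diffusion step (no velocity field, no AS3, and not Eugene's actual API). The point is that the ink lives as a density on a grid that can be rendered as one smooth texture, instead of as millions of z-sorted particles:

import numpy as np

N = 64
ink = np.zeros((N, N))
ink[N // 2, N // 2] = 1.0           # a single ink drop in the middle

diff = 0.2                          # diffusion rate per step (made up)
for _ in range(200):
    # explicit diffusion step: relax each cell toward its 4-neighbour average
    neighbours = (np.roll(ink, 1, 0) + np.roll(ink, -1, 0) +
                  np.roll(ink, 1, 1) + np.roll(ink, -1, 1))
    ink = ink + diff * (neighbours / 4.0 - ink)

# `ink` is now a smooth blob; upload it as a density/opacity texture each
# frame (one small texture upload instead of re-uploading every particle)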

Converting Pixels to Bezier Curves in Actionscript 3

OK, so I'll try to be as descriptive as possible.
I'm working on a project for a client that requires a JibJab-style masking feature for an uploaded image.
I would like to be able to generate a database-storable object that contains the anchor/control positions of a Bezier shape, so I can pull it out later and re-mask the object. This is all pretty easy to do, except for one catch: I need to create the Bezier object from a user-drawn outline.
So far, here's how I imagine the process going:
on mouse down, create a new sprite, beginFill, and moveTo mouse position.
on mouse move, lineTo an XY coordinate.
on mouse up, endFill.
This all works just great. I could just store the info here, but I would be looking at a GIGANTIC object full of tons of pretty useless x/y coordinates, with no way to make fine-tuning changes short of putting handles on every pixel. (I may as well give the end user a pencil tool...)
Here's what I'm thinking as far as Bezier curve calculation goes:
1: Figure out when I need to start a new curve, and track the x/y of the pixel at that interval. I imagine this being just a pixel count: maybe increment a count variable per mouse move and start a new curve every X pixels. The issue here is that some curves would be inaccurate and others unnecessary, but I really just need a general area, not an exact representation, so it could work. I'd be happier with something a little smarter, though.
2: Take each new x/y, store it as an anchor, and figure out where a control would go to make the line curve between this anchor and the last one. This is where I get really hung up. I'm sure someone has done this in Flash, but no amount of googling seems to help me work out how to get it done. I've done a lot of sketching and what little math I can wrap my brain around, but I can't seem to figure out a way of converting pixels to Beziers.
Is this possible? All I really need is something that gets close to the same shape. I'm thinking about maybe only placing anchors when the angle of the next pixel is beyond 180 degrees in relation to the current line or something, and just grabbing the edge of the arc between these changes, but no matter how hard I try, I can't seem to get this working!
Thanks for your help, I'll be sure to post my progress here as I go, I think this could be really useful in many applications, as long as it's actually feasible...
Jesse
It sounds like a lot of work to turn pixels into Bezier curves. You could try something like a linear least squares fit (sketched below): http://en.wikipedia.org/wiki/Linear_least_squares
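For concreteness, here is a minimal Python/NumPy sketch of that idea (Python rather than AS3, purely to show the math): fit one cubic Bezier to an ordered run of stroke points, pinning the endpoints and solving for the two inner control points by least squares. It assumes the stroke points are distinct:

import numpy as np

def fit_cubic_bezier(pts):
    # least-squares fit of one cubic Bezier to an ordered point list;
    # endpoints are pinned to the first and last points
    pts = np.asarray(pts, dtype=float)
    # chord-length parameterization: t grows with distance along the stroke
    d = np.r_[0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
    t = d / d[-1]
    p0, p3 = pts[0], pts[-1]
    b0, b1 = (1 - t) ** 3, 3 * (1 - t) ** 2 * t
    b2, b3 = 3 * (1 - t) * t ** 2, t ** 3
    # move the known endpoint terms to the right-hand side, solve for P1, P2
    rhs = pts - np.outer(b0, p0) - np.outer(b3, p3)
    (p1, p2), *_ = np.linalg.lstsq(np.c_[b1, b2], rhs, rcond=None)
    return p0, p1, p2, p3

Splitting the stroke wherever the fit error gets too large, and fitting each run separately, would give the "start a new curve when needed" behaviour from step 1 of the question.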
A different tack: could you have your users draw vector graphics instead? That way you can just store the shapes in the database.
Another cool method of converting raster to vector would be something like this iterative program: http://rogeralsing.com/2008/12/07/genetic-programming-evolution-of-mona-lisa/
Good luck
In my answer to this question I discuss using autotrace to convert bitmaps to beziers. I recommend passing your user drawing through this program on the server. Autotrace does a fantastic job of tracing and simplifying so there is no need to try and reinvent the wheel here.
Thanks for the answers, though I guess I should have been more specific about the application: I really only need an outline for a mask, so converting images to vectors or polygons, despite how cool that is, doesn't really fix my issue. The linear least squares algorithm is mega cool; I think this is closer to what I'm looking for.
I have a basic workaround going right now (sketched below): I count mouse moves, and every X moves (playing with X to get the most desirable curve) I grab the x/y position. Then I take every other stored x/y and turn it into an anchor; the remaining x/ys become controls. This produces somewhat desirable results, but has some minor issues: the speed at which the mask is drawn affects the number of handles, and it really just captures a general area, not a precise fit.
Interestingly, users seem to draw more slowly for more precise shapes, so this solution works a lot better than I had imagined, but it's not as nice as it could be. It will work for the client, so although there's no reason to pursue it further, I like learning new things and will spend some off-the-clock time looking into linear least squares and seeing if I can write a class that does these computations for me. If anyone runs across some AS3 code for this type of thing, or would like some of mine, let me know; this is an interesting puzzle.
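A sketch of that workaround, in Python for brevity (in AS3 you would moveTo the first sample and call graphics.curveTo once per segment); the stroke data and the SAMPLE_EVERY value are made up:

SAMPLE_EVERY = 8   # grab every 8th mouse-move position (tune to taste)

def stroke_to_quadratics(points):
    # points: ordered (x, y) mouse positions
    # returns (control, anchor) pairs for a chain of quadratic Beziers
    sampled = points[::SAMPLE_EVERY]
    segments = []
    for i in range(1, len(sampled) - 1, 2):
        segments.append((sampled[i], sampled[i + 1]))  # control, then anchor
    return segments

stroke = [(x, 100 + (x % 40)) for x in range(0, 400, 2)]   # fake mouse trail
for control, anchor in stroke_to_quadratics(stroke):
    print("curveTo(%s, %s, %s, %s)" % (control + anchor))

Because the sampling is per mouse move rather than per distance, drawing speed changes the handle count, which is exactly the issue noted above; sampling by arc length instead would decouple the two.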