I'm rendering a starfield composed of points (D3D11_PRIMITIVE_TOPOLOGY_POINTLIST). If a point gets closer to the camera, I make it double in size. That worked well in OpenGL 1.x using glPointSize(2.0f).
Is there a way to achieve this with DirectX 11 on Windows Phone 8?
What I need is a way to make a rendered point appear bigger based on some custom value.
Any thoughts are greatly appreciated.
There's no native point sprite type in D3D11. Your best bet is to use instancing with a single quad vertex buffer and a per-instance point vertex buffer. You'd scale by dividing the quad's vertex offsets by the view depth, or by applying a standard perspective projection matrix (though the latter would also cause the points themselves to converge at greater distances).
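To make that concrete, here's a rough C++ sketch of the two-stream instancing setup. It assumes you already have the two buffers and compiled vertex shader bytecode; quadVB, starVB, vsBytecode and starCount are placeholder names, not anything from an actual project, and the input layout would normally be created once at load time rather than per draw:

    #include <d3d11.h>

    // Slot 0 holds the four corners of a unit quad (shared by every star);
    // slot 1 holds one position per star and advances once per instance.
    void DrawStarfield(ID3D11Device* device, ID3D11DeviceContext* ctx,
                       ID3D11Buffer* quadVB, ID3D11Buffer* starVB,
                       const void* vsBytecode, size_t vsBytecodeSize,
                       UINT starCount)
    {
        const D3D11_INPUT_ELEMENT_DESC layout[] = {
            { "POSITION", 0, DXGI_FORMAT_R32G32_FLOAT,    0, 0,
              D3D11_INPUT_PER_VERTEX_DATA,   0 },
            { "TEXCOORD", 0, DXGI_FORMAT_R32G32B32_FLOAT, 1, 0,
              D3D11_INPUT_PER_INSTANCE_DATA, 1 },  // step once per instance
        };

        ID3D11InputLayout* inputLayout = nullptr;
        device->CreateInputLayout(layout, 2, vsBytecode, vsBytecodeSize, &inputLayout);
        ctx->IASetInputLayout(inputLayout);

        ID3D11Buffer* buffers[2] = { quadVB, starVB };
        UINT strides[2] = { sizeof(float) * 2, sizeof(float) * 3 };
        UINT offsets[2] = { 0, 0 };
        ctx->IASetVertexBuffers(0, 2, buffers, strides, offsets);

        // Four corners per star; in the vertex shader you'd offset the star's
        // position by (corner / viewDepth) to control the on-screen size.
        ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
        ctx->DrawInstanced(4, starCount, 0, 0);

        inputLayout->Release();
    }

The actual scaling happens in the vertex shader; the C++ above only wires the two vertex streams together.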
I have a grid-based coordinate system where the distance between each grid center is a set number of pixels. Doing this in OpenGL, I made a camera using view/projection matrices. Changing the view center and scale for the whole scene could be done by setting matrix elements.
This is from a prototype using OpenGL that I made before going full Tcl/Tk. The node positions are {[1..5], [1..3]} (text at fractional y-offset):
How would I accomplish this in Tcl/Tk? I'm reading about canvas zooming, but I was hoping for a smaller example: create a line between 1..2 and between 3..4, pan, zoom, pan again, zoom again.
The Tk canvas widget supports panning (it's following the Tk scrollable window protocol) but doesn't allow zooming of the viewport. (It has a scale method, but that just changes the coordinates of the objects, which is not the same thing; in particular, it doesn't change things like font sizes, embedded window/image sizes, line thicknesses, etc.) You can control the panning via the xview and yview methods, as with any scrollable Tk widget.
However, there's an alternative that might be more suitable for you.
Paul Obermeier's Tcl3D, which allows you to use OpenGL together with Tcl/Tk, would let you keep a lot of what you've done so far with writing OpenGL code, and yet it still provides the benefits of being able to use Tcl/Tk. Basically, the core of it is that it includes Togl, a package for presenting an OpenGL drawing surface as a Tk widget, together with bindings to the OpenGL API. I know of quite a few people who use Tcl3D to power their (both commercial and open-source) applications, and you'd end up with a hybrid that should provide the best of both worlds. (Also, it's considered entirely reasonable to include some custom C code in a Tcl application that needs a bit more performance in a critical spot; it's outright good style to do this. Complex GUIs sometimes need that sort of thing.)
I'm developing a networked demo game using cocos2d-x and Chipmunk. I have a problem with the physics: when I call ApplyImpulse() on a sprite on one device and send that Vec2 force to the other device, where the same impulse is applied to the same sprite, the simulation turns out quite different from what I got on the first device.
I have tested it many times with different devices.
Note: I don't use any custom update method; I just call ApplyImpulse() on the sprite when I touch the screen.
Can anyone explain this issue and propose a solution for it, please?
If I switched to Box2D, would this problem be solved?
Thanks.
No. The only way to get a deterministic simulation across many devices is to use (or rewrite) the physics library to use fixed point math, and then deal with all the limitations that brings.
It is nearly impossible to get deterministic results from floating point numbers across different CPUs, compilers, platforms, or even minor code changes. For example, a*b + a*c could be simplified by the compiler to a*(b + c), but since floating point numbers have limited precision, the result will likely not be exactly the same.
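You can see the effect with nothing more than a change of grouping. The snippet below reassociates an addition rather than factoring a product, but it is the same class of rewrite a compiler is free to make:

    #include <cstdio>

    int main()
    {
        // Algebraically both lines compute the same sum, but every
        // floating-point add rounds, so the grouping changes the last bits.
        double a = 0.1, b = 0.2, c = 0.3;

        double leftToRight = (a + b) + c;
        double rightToLeft = a + (b + c);

        std::printf("(a + b) + c = %.17g\n", leftToRight);  // 0.60000000000000009
        std::printf("a + (b + c) = %.17g\n", rightToLeft);  // 0.59999999999999998
        std::printf("equal? %s\n", leftToRight == rightToLeft ? "yes" : "no");  // no
        return 0;
    }

In a physics simulation that tiny difference compounds on every step, which is why the two devices diverge so quickly.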
Contraption Maker started out using vanilla Chipmunk, but eventually rewrote significant portions of it using fixed point: http://www.moddb.com/members/kevryan/blogs/the-butterfly-effect-deterministic-physics-in-the-incredible-machine-and-contraption-maker
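If you're curious what "using fixed point" means in practice, here is a minimal 16.16 sketch. It is purely illustrative (it is not Chipmunk's or Contraption Maker's code), but because it only uses integer arithmetic, every CPU and compiler produces bit-identical results:

    #include <cstdint>

    // 16.16 fixed point: the value is stored as an int32_t scaled by 2^16.
    struct Fixed
    {
        int32_t raw;  // value * 65536

        static Fixed fromInt(int32_t v) { return Fixed{ v * 65536 }; }
        double toDouble() const         { return raw / 65536.0; }  // display only
    };

    inline Fixed operator+(Fixed a, Fixed b) { return Fixed{ a.raw + b.raw }; }
    inline Fixed operator-(Fixed a, Fixed b) { return Fixed{ a.raw - b.raw }; }

    inline Fixed operator*(Fixed a, Fixed b)
    {
        // Widen to 64 bits so the intermediate product cannot overflow, then
        // shift back down to 16.16. The limited range and precision are the
        // "limitations that brings" mentioned above.
        return Fixed{ static_cast<int32_t>((static_cast<int64_t>(a.raw) * b.raw) >> 16) };
    }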
I know that Cesium offers several different interpolation methods, including linear (or bilinear in 2D), Hermite, and Lagrange. One can use these methods to resample sets of points and/or create curves that approximate sampled points, etc.
However, the question I have is what method does Cesium use internally when it is rendering a 3D scene and the user is zooming/panning all over the place? This is not a case where the programmer has access to the raster, etc, so one can't just get in the middle of it all and call the interpolation functions directly. Cesium is doing its own thing as quickly as it can in response to user control.
My hunch is that the default is bilinear, but I don't know that nor can I find any documentation that explicitly says what is used. Further, is there a way I can force Cesium to use a specific resampling method during these activities, such as Lagrange resampling? That, in fact, is what I need to do: force Cesium to employ Lagrange resampling during scene rendering. Any suggestions would be appreciated.
EDIT: Here's a more detailed description of the problem…
Suppose I use Cesium to set up a 3-D model of the Earth including a greyscale image chip at its proper location on the model Earth's surface, and then I display the results in a Cesium window. If the view point is far enough from the Earth's surface, then the number of pixels displayed in the image chip part of the window will be fewer than the actual number of pixels that are available in the image chip source. Some downsampling will occur. Likewise, if the user zooms in repeatedly, there will come a point at which there are more pixels displayed across the image chip than the actual number of pixels in the image chip source. Some upsampling will occur. In general, every time Cesium draws a frame that includes a pixel data source there is resampling happening. It could be nearest neighbor (doubt it), linear (probably), cubic, Lagrange, Hermite, or any one of a number of different resampling techniques. At my company, we are using Cesium as part of a large government program which requires the use of Lagrange resampling to ensure image quality. (The NGA has deemed that best for its programs and analyst tools, and they have made it a compliance requirement. So we have no choice.)
So here's the problem: while the user is interacting with the model, for instance zooming in, the drawing process is not in the programmer's control. The resampling is either happening in the Cesium layer itself (hopefully) or in still lower layers (for instance, the WebGL functions that Cesium may be relying on). So I have no clue which technique is used for this resampling. Worse, if that technique is not Lagrange, then I don't have any clue how to change it.
So the questions are these: is Cesium doing the resampling explicitly? If so, then what technique is it using? If not, then what drawing packages and functions is Cesium relying on to render an image file onto the map? (I can try to dig down and determine what techniques those layers may be using, and/or have available.)
UPDATE: Wow, my original answer was a total misunderstanding of your question, so I've rewritten it from scratch.
With the new edits, it's clear your question is about how images are resampled for the screen while rendering. These images are texture maps in WebGL, and the process of getting them to the screen quickly is implemented in hardware, on the graphics card itself. Software on the CPU is not fast enough to map individual pixels to the screen one at a time, which is why we have hardware-accelerated 3D cards.
Now for the bad news: this hardware supports nearest neighbor, linear, and mipmapping. That's it. 3D graphics cards do not use any fancier interpolation, as the work needs to be done in a fraction of a second to keep the frame rate as high as possible.
Mipmapping is described well by @gman in his article WebGL 3D Textures. It's a long article, but search for the word "mipmap" and skip ahead to his description of it. Basically, a single image is reduced into progressively smaller images prior to rendering, so an appropriately sized starting point can be chosen at render time. But there will always be a final mapping to the screen, and as you can see, the choices are NEAREST or LINEAR.
Quoting @gman's article here:
You can choose what WebGL does by setting the texture filtering for each texture. There are 6 modes
NEAREST = choose 1 pixel from the biggest mip
LINEAR = choose 4 pixels from the biggest mip and blend them
NEAREST_MIPMAP_NEAREST = choose the best mip, then pick one pixel from that mip
LINEAR_MIPMAP_NEAREST = choose the best mip, then blend 4 pixels from that mip
NEAREST_MIPMAP_LINEAR = choose the best 2 mips, choose 1 pixel from each, blend them
LINEAR_MIPMAP_LINEAR = choose the best 2 mips, choose 4 pixels from each, blend them
I guess the best news I can give you is that Cesium uses the best of those, LINEAR_MIPMAP_LINEAR, to do its own rendering. If you have a strict requirement for more time-consuming imagery interpolation, that means you have a requirement not to use a realtime, hardware-accelerated 3D graphics card, as there is no way to do Lagrange image interpolation during a realtime render.
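For reference, choosing between those modes is a per-texture setting. This sketch uses the desktop OpenGL spelling from C++ (WebGL's gl.texParameteri takes the same constant names); it is a generic illustration, not Cesium's actual source:

    #include <GL/gl.h>

    // Trilinear filtering: blend 4 texels from each of the 2 nearest mip levels
    // when minifying, and blend 4 texels when magnifying. The texture must
    // already have a full mip chain (gl.generateMipmap in WebGL).
    void setTrilinearFiltering(GLuint texture)
    {
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    }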
I tried the Hello Triangle example on the Adobe site.
http://www.adobe.com/devnet/flashplayer/articles/hello-triangle.html
It works, but the Context3D seems to render on the stage's background at the lowest level. If I draw anything, it covers the 3D content.
I want to bring it to the front or set it to a certain layer. How can I do that?
Also, I was told that if you use the 2D API and the 3D API together, it lowers the performance of the 3D; is that true? In my work I still need the 2D API, for example for drawing a TextField.
Everything goes like this (from bottom to top):
StageVideo (1 or more instances) > Stage3D (1 or more instances) > Your regular display list.
And yes, regular display objects may degrade the performance of Stage3D, so it may be better to use Stage3D alternatives to them. Some Stage3D-accelerated frameworks already have some of these built in (like TextField in Starling).
No, you can't bring it to the front.
The 2D and 3D sides don't directly affect each other. But of course, if you write 2D code that eats 100% of the CPU, you'll get overall slow performance.
The only way is to grab the rendered bitmap from the bottom-layer Stage3D instance and display it on top of your display list. But you would have to do it every frame, which will hurt performance a lot, and of course there is no mouse interaction. This approach only works for displaying the rendered Stage3D scene above the display list; it's just a simulation.
Just starting to experiment with filling the canvas, and I'm trying to apply a texture to an object (the blobs from the blob example - http://www.blobsallad.se/). This example uses the 2D context and doesn't appear to be using WebGL. All the information on texturing I could find uses WebGL, and I was wondering how easy it would be to accomplish this. Is there any way I could incorporate the texturing features of WebGL into this canvas without rewriting the code? Summed up, I guess this question is asking whether the methods available to the 2D context are also available to the WebGL context... If so, I suppose I could just change the context and apply my texture? If I'm thinking about this all wrong or am confused conceptually, please let me know.
Thanks,
Brandon
I've experimented with drawing an image to a 2D canvas before using it as a texture for a WebGL canvas. It works, but the performance is horrible (it really varies from browser to browser). I'm currently considering a few other options for refactoring it. I wouldn't recommend it for anything more than statically drawing an image to one or two 2D canvases.
You can see an example of the craziness in lanyard/src/render/SurfaceTileRenderer.js in the project at: http://github.com/fintler/lanyard
Are you looking to apply a texture to a 2D shape?
Try something like this:
http://jsfiddle.net/3U9pm/