I am trying to achieve shader animation in a Windows Store DirectX app. Specifically, I would like to reproduce the animation shown at the link below (a DirectX 10 tutorial).
http://www.rastertek.com/dx10tut33.html
I can more or less find my way around DirectX 11.1 (Windows Store App compatible DirectX shaders), but I cannot see how to pass a time parameter from the C++ program logic to the shader code, so that I can affect shader state and produce different effects based on the time.
Please share your thoughts if you have any.
To pass parameters to a shader you can use constant buffers (see MSDN). You create a constant buffer, fill it with your data (e.g., the current time), and set it in the desired shader stage with
ID3D11DeviceContext::GSSetConstantBuffers
ID3D11DeviceContext::PSSetConstantBuffers
or ID3D11DeviceContext::VSSetConstantBuffers.
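For example, here is a minimal sketch of the whole round trip. The buffer layout, names like TimeBufferData and elapsedSeconds, and the choice of slot b0 are illustrative assumptions, not from the original question:

```cpp
#include <d3d11.h>
#include <wrl/client.h>

// HLSL side (illustrative): a constant buffer holding the elapsed time.
//   cbuffer TimeBuffer : register(b0)
//   {
//       float time;
//       float3 padding; // constant buffers are sized in 16-byte multiples
//   };

// Matching C++ struct, padded to 16 bytes.
struct TimeBufferData
{
    float time;
    float padding[3];
};

// Create the buffer once at startup (device is your ID3D11Device).
D3D11_BUFFER_DESC desc = {};
desc.ByteWidth = sizeof(TimeBufferData);
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;

Microsoft::WRL::ComPtr<ID3D11Buffer> timeBuffer;
device->CreateBuffer(&desc, nullptr, &timeBuffer);

// Every frame: write the current time and bind the buffer to the pixel shader.
TimeBufferData data = {};
data.time = elapsedSeconds; // however your game loop tracks time
context->UpdateSubresource(timeBuffer.Get(), 0, nullptr, &data, 0, 0);
context->PSSetConstantBuffers(0, 1, timeBuffer.GetAddressOf());
```

The shader then reads time from the cbuffer each frame; bind the same buffer with VSSetConstantBuffers or GSSetConstantBuffers instead if the animation happens in those stages.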
Related
I have a D3D11 Texture2D with the format DXGI_FORMAT_R10G10B10A2_UNORM and want to convert it into a D3D11 Texture2D with a DXGI_FORMAT_R32G32B32A32_FLOAT or DXGI_FORMAT_R8G8B8A8_UINT format, as only textures with those formats can be imported into CUDA.
For performance reasons I want this to operate fully on the GPU. I have read some threads suggesting that I should set the second texture as a render target and render the first texture onto it, or convert the texture via a pixel shader.
But as I don't know a lot about D3D, I wasn't able to get that working.
In an ideal world I would be able to do this stuff without setting up a whole rendering pipeline including IA, VS, etc...
Does anyone have an example of this, or any hints?
Thanks in advance!
On the GPU, the way you do this conversion is a render-to-texture, which requires at least a minimal 'offscreen' rendering setup.
Create a render target view (DXGI_FORMAT_R32G32B32A32_FLOAT, DXGI_FORMAT_R8G8B8A8_UINT, etc.). The restriction here is that the format needs to be supported as a render target view at your Direct3D Hardware Feature Level. See Microsoft Docs.
Create an SRV for your source texture. Again, the format needs to be supported as a texture at your Direct3D Hardware Feature Level.
Render the source texture to the RTV as a 'full-screen quad'. With Direct3D Hardware Feature Level 10.0 or greater, you can have the quad self-generated in the vertex shader, so you don't really need a vertex buffer for this. See this code.
Given you are starting with DXGI_FORMAT_R10G10B10A2_UNORM, you pretty much require Direct3D Hardware Feature Level 10.0 or better. That actually makes it pretty easy. You still need to get a full rendering pipeline going, although you don't need a 'swapchain'.
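As a rough sketch of that minimal offscreen setup (device, context, srcTex, width, height, and the compiled fullScreenVS/copyPS shaders are assumed to already exist; none of these names come from the original answer):

```cpp
#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Destination texture in a CUDA-importable format.
D3D11_TEXTURE2D_DESC dstDesc = {};
dstDesc.Width = width;              // same dimensions as the source texture
dstDesc.Height = height;
dstDesc.MipLevels = 1;
dstDesc.ArraySize = 1;
dstDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
dstDesc.SampleDesc.Count = 1;
dstDesc.Usage = D3D11_USAGE_DEFAULT;
dstDesc.BindFlags = D3D11_BIND_RENDER_TARGET;

ComPtr<ID3D11Texture2D> dstTex;
device->CreateTexture2D(&dstDesc, nullptr, &dstTex);

ComPtr<ID3D11RenderTargetView> rtv;
device->CreateRenderTargetView(dstTex.Get(), nullptr, &rtv);

// SRV over the R10G10B10A2 source (created with D3D11_BIND_SHADER_RESOURCE).
ComPtr<ID3D11ShaderResourceView> srv;
device->CreateShaderResourceView(srcTex.Get(), nullptr, &srv);

// The 'self-generated quad': with SV_VertexID the vertex shader can emit a
// full-screen triangle, so no vertex buffer or input layout is needed.
// HLSL (illustrative):
//   float4 main(uint id : SV_VertexID) : SV_Position
//   {
//       float2 uv = float2((id << 1) & 2, id & 2);
//       return float4(uv * float2(2, -2) + float2(-1, 1), 0, 1);
//   }
D3D11_VIEWPORT vp = { 0.0f, 0.0f, float(width), float(height), 0.0f, 1.0f };
context->RSSetViewports(1, &vp);
context->OMSetRenderTargets(1, rtv.GetAddressOf(), nullptr);
context->PSSetShaderResources(0, 1, srv.GetAddressOf());
context->VSSetShader(fullScreenVS.Get(), nullptr, 0);
context->PSSetShader(copyPS.Get(), nullptr, 0); // reads the SRV, writes the color
context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
context->Draw(3, 0);
```

After the Draw, dstTex holds the converted pixels and can be handed to the CUDA interop.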
You may find this tutorial helpful.
I have been working on a project with Direct3D on Windows Phone. It is just a simple game with 2D graphics, and I make use of DirectXTK to help me out with sprites.
Recently, I came across an out-of-memory error while I was debugging on the 512 MB emulator. This error was not common and was the result of a sequence of open, suspend, open, suspend, ...
Tracking it down, I found out that the textures are loaded on every activation of the app, eventually filling up the allowed memory. To solve it, I will probably edit the code to load textures only on a fresh launch, not on activation from suspend. But after this problem I am curious about the correct lifecycle management of texture resources. While searching I came across Automatic (or "managed" by Microsoft) Texture Management (http://msdn.microsoft.com/en-us/library/windows/desktop/bb172341(v=vs.85).aspx), which can probably help with some management of the textures in video memory.
However, I would also like to know about other methods, since I couldn't figure out a good way to incorporate managed textures into my code.
My best approach so far is to call the Release method of the ID3D11ShaderResourceView pointers I store in destructors, to prevent filling up the memory. But how do I ensure textures are not sitting in memory while other apps want to use it (the memory)?
Windows Phone uses Direct3D 11, which 'virtualizes' the GPU memory; essentially every texture is 'managed'. If you want a detailed description of this, see "Why Your Windows Game Won't Run In 2,147,352,576 Bytes?". The link you provided is from the Direct3D 9 era for Windows XP XPDM, not any Direct3D 11 capable platform.
It sounds like the key problem is that your application is leaking resources or has too large a working set. You should enable the debug device and first make sure you have cleaned up everything as you expected. You may also want to check that you are following the recommendations for launching/resuming on MSDN. Also keep in mind that Direct3D 11 uses 'deferred destruction' of resources so just because you've called Release everywhere doesn't mean that all the resources are actually gone... To force a full destruction, you need to use Flush.
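For instance, enabling the debug layer is a one-flag change at device creation (a sketch; device and context are plain pointers here):

```cpp
#include <d3d11.h>

ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;

UINT flags = 0;
#if defined(_DEBUG)
flags |= D3D11_CREATE_DEVICE_DEBUG; // live-object and leak reports in the debugger
#endif

D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, flags,
                  nullptr, 0, D3D11_SDK_VERSION,
                  &device, nullptr, &context);
```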
With Windows Phone 8.1 and Direct3D 11.2, there is a specific Trim functionality you can use to reduce an app's memory footprint, but I don't think that's actually your issue.
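Put together, the suspend-time cleanup looks something like this sketch (ComPtr-held device/context assumed; on Windows Phone 8.0, which lacks DXGI 1.3, only the first two calls apply):

```cpp
#include <d3d11.h>
#include <dxgi1_3.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Called from the app's Suspending handler.
void OnSuspending(const ComPtr<ID3D11Device>& device,
                  const ComPtr<ID3D11DeviceContext>& context)
{
    context->ClearState(); // unbind everything the context still references
    context->Flush();      // force deferred destruction of Release()'d resources

    // The DXGI 1.3 / Direct3D 11.2 'Trim' mentioned above.
    ComPtr<IDXGIDevice3> dxgiDevice;
    if (SUCCEEDED(device.As(&dxgiDevice)))
    {
        dxgiDevice->Trim();
    }
}
```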
I used to program maps in AS3 like you, but then I took a Pixel Bender in the knee. Since then I have been using Pixel Bender to do parallel calculations on arrays. Can Stage3D be used for this?
Example of using Pixel Bender for calculation:
http://wonderfl.net/c/eFp0/
My goal is to take a vector [x1, y1, x2, y2, ..., xn, yn] and return a vector [f(x1, y1).x, f(x1, y1).y, ..., f(xn, yn).x, f(xn, yn).y]. I am sure you get the general idea.
That is what we normally call a map. Here is a thorough explanation:
http://en.wikipedia.org/wiki/Map_%28higher-order_function%29
I noticed that with Pixel Bender I can accomplish this with a speed boost of 10x. Is there any way to do the same thing with Stage3D?
Unlike in other languages such as C++, I'm not aware of a way to direct assembly instructions to the GPU beyond Program3D shaders written in Adobe Graphics Assembly Language (AGAL):
Program bytecode can be created using the Pixel Bender 3D offline tools. It can also be created dynamically. The AGALMiniAssembler class is a utility class that compiles AGAL assembly language programs to AGAL bytecode. The class is not part of the runtime. When you upload the shader programs, the bytecode is compiled into the native shader language for the current device (for example, OpenGL or Direct3D). The runtime validates the bytecode on upload.

The programs run whenever the Context3D drawTriangles() method is invoked. The vertex program is executed once for each vertex in the list of triangles to be drawn. The fragment program is executed once for each pixel on a triangle surface.
While it won't approach the performance you are obtaining with Pixel Bender, it should be noted that you could leverage ActionScript Workers to parallelize your functors:
A Worker object represents a worker, which is a virtual instance of the Flash runtime. Each Worker instance controls and provides access to the lifecycle and shared data of a single worker. This capability of simultaneously executing multiple sets of code instructions is known as concurrency.
With Adobe Premium Features, Alchemy / FlasCC, XC APIs, Pixel Bender, and ActionScript Next on the way, it's an exciting time for the rapidly progressing Flash platform.
As we know, many HTML5 renderers use the GPU to draw canvas elements. I'm wondering about exploiting this ability to use the GPU for GPGPU. There are probably no native GPGPU functions in the canvas API or HTML5, but what about a hack to do that?
I was thinking about using something like a texture (a 2D or 3D array) holding the values to be processed, and then asking a canvas element to perform some operation on this matrix. The operation would be a function that I can somehow send to the canvas element. Then we would have browser-based GPGPU.
Is such a thing possible? What do you think? Do you have any other ideas of how to implement this?
There is the WebCL standard, which was created exactly to give JavaScript running in the browser access to GPGPU computational power (provided the client has any). However, the list of existing implementations is pretty short.
Successful attempts to harness GPU power for general-purpose calculations came long before (and led to) the emergence of CUDA, OpenCL, and similar GPGPU frameworks. Here is what looks like a good tutorial, and I guess it is portable to WebGL (which has much broader support than WebCL). See @MikkoOhtamaa's answer for a good introductory article about WebGL itself.
You probably want to use WebGL shaders for your nefarious purposes.
http://www.html5rocks.com/en/tutorials/webgl/shaders/
Shaders provide limited opportunities for parallel computations.
As many of you most likely know, Flash CS4 integrates with the GPU. My question to you is: is there a way to make all of your rendering execute on the GPU, or can I not get that much access?
The reason I ask is that, with regard to Flash 3D, nearly all existing engines are software renderers. However, I would like to work on top of one of these existing engines and convert it to be as much of a hardware renderer as possible.
Thanks for your input
Regards
Mark
First off, it's not Flash CS4 that is hardware accelerated; it is Flash Player 10 that does it.
Apparently "The player offloads all raster content rendering (graphics effects, filters, 3D objects, video etc) to the video card". It does this automatically. I don't think you get much choice.
The new GPU-accelerated abilities of Flash Player 10 are not something that is accessible to you as a developer; it's simply accelerated blitting that's done "over your head".
The closest you can get to the hardware is Pixel Bender filters. They are basically Flash's equivalent to pixel shaders. However, due to (afaik) cross-platform consistency issues, these do not actually run on the GPU when run in the Flash Player (they're available in other Adobe products, and some of those do run them on the GPU).
So, as far as real hardware acceleration goes the pickings are pretty slim.
If you need all the performance you can get, Alchemy can be worth checking out. This is a project that allows cross-compiling C/C++ code to the AVM2 (the virtual machine that runs ActionScript 3). It does some nifty tricks to allow for better performance (due to the non-dynamic nature of these languages).
Wait for Flash Player 11 to be released as a beta in the first half of next year. It should be awesome.