I'm trying to redirect the output of a GDI application to a buffer, preferably a d3d texture but I'll settle for a system memory buffer that I can then copy to a d3d texture.
Specifically, I'm trying to get Google Chrome to render into a d3d buffer to be displayed in a d3d application.
Are there any foolproof ways to do this or am I opening the mother of all worm-cans?
Thanks,
Tim.
Ideally all applications would draw only within WM_PAINT, draw only to their own DC, and also implement WM_PRINTCLIENT so you could get a "snapshot" of the app's window DC. But most apps do not do this perfectly, so getting what the app shows into a buffer may not be easy, or even possible.
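A minimal Win32/C++ sketch of that WM_PRINTCLIENT "snapshot" approach, assuming hwndTarget is the window you want to capture (the names are placeholders); whether it yields anything useful depends entirely on how well the target app handles WM_PRINTCLIENT:

#include <windows.h>

// Returns a bitmap containing the window's client area, or nullptr on failure.
HBITMAP SnapshotClient(HWND hwndTarget)
{
    RECT rc;
    if (!GetClientRect(hwndTarget, &rc))
        return nullptr;

    HDC screenDC = GetDC(nullptr);
    HDC memDC = CreateCompatibleDC(screenDC);
    HBITMAP bmp = CreateCompatibleBitmap(screenDC, rc.right - rc.left, rc.bottom - rc.top);
    HGDIOBJ oldBmp = SelectObject(memDC, bmp);

    // Ask the window to paint its client area into our memory DC.
    SendMessage(hwndTarget, WM_PRINTCLIENT, (WPARAM)memDC, PRF_CLIENT | PRF_ERASEBKGND);

    // 'bmp' now (hopefully) holds the snapshot; GetDIBits can copy its pixels
    // into a system-memory buffer that you then upload into a D3D texture.
    SelectObject(memDC, oldBmp);
    DeleteDC(memDC);
    ReleaseDC(nullptr, screenDC);
    return bmp;   // caller owns the bitmap and must DeleteObject() it
}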
An option is for you to patch the chrome source code, but it's a tall order.
I have a D3D11 Texture2D with the format DXGI_FORMAT_R10G10B10A2_UNORM and want to convert it into a D3D11 Texture2D with a DXGI_FORMAT_R32G32B32A32_FLOAT or DXGI_FORMAT_R8G8B8A8_UINT format, as only those formats can be imported into CUDA.
For performance reasons I want this to operate fully on the GPU. I read some threads suggesting I should set the second texture as a render target and render the first texture onto it, or convert the texture via a pixel shader.
But as I don't know a lot about D3D I wasn't able to do it like that.
In an ideal world I would be able to do this stuff without setting up a whole rendering pipeline including IA, VS, etc...
Does anyone have an example of this, or any hints?
Thanks in advance!
On the GPU, the way you do this conversion is a render-to-texture which requires at least a minimal 'offscreen' rendering setup.
Create a render target view (DXGI_FORMAT_R32G32B32A32_FLOAT, DXGI_FORMAT_R8G8B8A8_UINT, etc.). The restriction here is that it needs to be a format supported as a render target view at your Direct3D Hardware Feature Level. See Microsoft Docs.
Create an SRV for your source texture. Again, the format needs to be supported as a texture at your Direct3D Hardware Feature Level.
Render the source texture to the RTV as a 'full-screen quad'. With Direct3D Hardware Feature Level 10.0 or greater, you can have the quad self-generated in the vertex shader, so you don't really need a vertex buffer for this. See this code.
Given you are starting with DXGI_FORMAT_R10G10B10A2_UNORM, you pretty much require Direct3D Hardware Feature Level 10.0 or better. That actually makes it pretty easy. You still need to get a full rendering pipeline going, although you don't need a 'swapchain'; see the sketch below.
You may find this tutorial helpful.
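To make the steps above concrete, here is a minimal C++/D3D11 sketch. It assumes device, context, and srcTex (the R10G10B10A2_UNORM texture) are existing ComPtrs, and that fullscreenVS/copyPS are already compiled from the SV_VertexID full-screen-triangle shaders referenced above; all of these names are placeholders.

#include <d3d11.h>
#include <wrl/client.h>

// Destination texture: same dimensions as the source, new format.
D3D11_TEXTURE2D_DESC desc = {};
srcTex->GetDesc(&desc);
desc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
desc.BindFlags = D3D11_BIND_RENDER_TARGET;
Microsoft::WRL::ComPtr<ID3D11Texture2D> dstTex;
device->CreateTexture2D(&desc, nullptr, &dstTex);

// Views: RTV for the destination, SRV for the source.
Microsoft::WRL::ComPtr<ID3D11RenderTargetView> rtv;
device->CreateRenderTargetView(dstTex.Get(), nullptr, &rtv);
Microsoft::WRL::ComPtr<ID3D11ShaderResourceView> srv;
device->CreateShaderResourceView(srcTex.Get(), nullptr, &srv);

// Offscreen 'full-screen quad' pass: no vertex/index buffer or input layout;
// the vertex shader generates the triangle from SV_VertexID and the pixel
// shader just samples the source texture.
D3D11_VIEWPORT vp = { 0.0f, 0.0f, float(desc.Width), float(desc.Height), 0.0f, 1.0f };
context->RSSetViewports(1, &vp);
context->OMSetRenderTargets(1, rtv.GetAddressOf(), nullptr);
context->IASetInputLayout(nullptr);
context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
context->VSSetShader(fullscreenVS.Get(), nullptr, 0);
context->PSSetShader(copyPS.Get(), nullptr, 0);
context->PSSetShaderResources(0, 1, srv.GetAddressOf());
context->PSSetSamplers(0, 1, pointSampler.GetAddressOf());
context->Draw(3, 0);   // one triangle covering the whole render target

After the Draw call, dstTex holds the converted image and can be handed off to CUDA interop.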
I want to load a ludicrous, binary glTF object with only a few polygons (~250), and a huge texture of size 10,000 x 5,500 pixels. The file is "only" 20MB in size.
When I load it using Three.js, Chrome hangs in its entirety for nearly 15 seconds. When looking in the profiler, pretty much nothing is going on during the freezing time.
If you want to load the file yourself, you can download it at https://phychi.com/uni/threejs/models/freezing-monster.glb, and the whole scene can be visited at https://phychi.com/uni/threejs/ (until I've found a solution or given up).
The behavior stays the same, whether I call GLTFLoader.load(), GLTFLoader.loadAsync(), or create my own Promise, and call .then(addToScene), without any awaits.
Does somebody have a magical solution? Or if not, how could I profile it more efficiently, seeing the internal calls? Or should I just open a bug report for Chrome/Three.js?
PS: Windows 10 Personal, Ryzen 5 2600, 32 GB RAM, RX 580 8GB.
The issue should be resolved by upgrading the library to r135 (the current release).
The releases r133 and r134 have a change that introduced a performance regression on Windows when using sRGB encoded textures.
I have been working on a project with Direct3D on Windows Phone. It is just a simple game with 2d graphics, and I make use of DirectXTK for helping me out with sprites.
Recently, I came across an out-of-memory error while debugging on the 512 MB emulator. The error was not common and was the result of a sequence of open, suspend, open, suspend, ...
Tracking it down, I found that the textures are loaded on every activation of the app, eventually filling up the allowed memory. To solve it, I will probably change the code to load textures only on opening, not on activation from suspend; but after this problem I am curious about the correct lifecycle management of texture resources. While searching I came across Automatic (or "managed" by Microsoft) Texture Management http://msdn.microsoft.com/en-us/library/windows/desktop/bb172341(v=vs.85).aspx, which can probably help out with some management of the textures in video memory.
However, I would also like to know about other methods, since I couldn't figure out a good way to incorporate managed textures into my code.
My best idea is to call the Release method of the ID3D11ShaderResourceView pointers I store in destructors to prevent filling up the memory, but how do I ensure the textures remain in memory while other apps also want to use it (the memory)?
Windows Phone uses Direct3D 11, which 'virtualizes' the GPU memory; essentially every texture is 'managed'. If you want a detailed description of this, see "Why Your Windows Game Won't Run In 2,147,352,576 Bytes?". The link you provided is from the Direct3D 9 era for Windows XP XPDM, not for any Direct3D 11 capable platform.
It sounds like the key problem is that your application is leaking resources or has too large a working set. You should enable the debug device and first make sure you have cleaned up everything as you expected. You may also want to check that you are following the recommendations for launching/resuming on MSDN. Also keep in mind that Direct3D 11 uses 'deferred destruction' of resources so just because you've called Release everywhere doesn't mean that all the resources are actually gone... To force a full destruction, you need to use Flush.
With Windows Phone 8.1 and Direct3D 11.2, there is a specific Trim functionality you can use to reduce an app's memory footprint, but I don't think that's actually your issue.
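As a rough sketch of the points above (C++/D3D11), assuming device and context are ComPtrs to your ID3D11Device and immediate context:

#include <wrl/client.h>
#include <d3d11_2.h>
#include <d3d11sdklayers.h>   // ID3D11Debug
#include <dxgi1_3.h>          // IDXGIDevice3 (Windows Phone 8.1 / DirectX 11.2)

// 1) Create the device with the debug layer in debug builds so leaked
//    resources get reported in the debugger output.
UINT creationFlags = 0;
#ifdef _DEBUG
creationFlags |= D3D11_CREATE_DEVICE_DEBUG;
#endif
// ... pass creationFlags to D3D11CreateDevice(...).

// 2) On suspend: release what you can, then force the deferred destruction
//    to actually happen.
context->ClearState();
context->Flush();

// 3) Windows Phone 8.1 / DirectX 11.2 only: trim to shrink the memory footprint.
Microsoft::WRL::ComPtr<IDXGIDevice3> dxgiDevice;
if (SUCCEEDED(device.As(&dxgiDevice)))
{
    dxgiDevice->Trim();
}

// 4) Optional: dump anything still alive to check for leaks.
Microsoft::WRL::ComPtr<ID3D11Debug> d3dDebug;
if (SUCCEEDED(device.As(&d3dDebug)))
{
    d3dDebug->ReportLiveDeviceObjects(D3D11_RLDO_SUMMARY);
}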
This is more of an "implementation of technology" kind of question.
In the old days, when I worked with the C language, you could choose whether bitmap structures were allocated in VGA memory or in RAM, and then you could work with them a lot faster.
Now it's 2013: I create a bitmap in AS3, and it is allocated in RAM (I've seen no option to use the GPU, and I'm sure it is using RAM in 100% of cases, because memory usage increases by exactly the expected bitmap size).
Is there any option to use GPU memory?
Thanks
Check out the API docs for flash.display3D.Texture - there are three upload methods:
uploadCompressedTextureFromByteArray(data:ByteArray, byteArrayOffset:uint, async:Boolean = false):void
Uploads a compressed texture in Adobe Texture Format (ATF) from a ByteArray object.
uploadFromBitmapData(source:BitmapData, miplevel:uint = 0):void
Uploads a texture from a BitmapData object.
uploadFromByteArray(data:ByteArray, byteArrayOffset:uint, miplevel:uint = 0):void
Uploads a texture from a ByteArray.
So you can't allocate the memory directly in the GPU. You must upload data from a ByteArray or BitmapData, which first exists in RAM. However, to minimize CPU RAM usage, you could potentially reuse a single ByteArray or BitmapData in RAM, change its contents, and upload it many times, or release it after loading. But you can't access the contents of GPU memory directly, as far as I know.
As far as "read access", the only way to get data back from the GPU memory (again, a slow workaround) is to draw the Context3D back into a BitmapData via Context3D.drawToBitmapData... basically like a screen grab. The Starling Framework has an example of this functionality via Stage.drawToBitmapData.
Basically, the Stage3D APIs weren't setup so you can easily access the GPU memory.
You cannot allocate GPU memory manually like in other languages, but you can indeed accelerate your graphics using the GPU with different Adobe technologies.
For example if you want GPU accelerated video decoding you should be using StageVideo, or if you want to accelerate 2d or 3d graphics you could use Stage3D.
Unless you want to work in a low level fashion with Stage3D, it is recommended you use an intermediary framework.
For 2d the best solution is by far Starling. It is a solid framework endorsed by Adobe which has been used in countless commercial projects and is constantly optimised.
As for 3d take a look at Flare3D or Away3D.
I'm writing a Windows CE application, and I want to play a sound (a short wav file) when something happens. Since this sound will be played often, my first instinct was to load the wav file into a memory stream and reuse that stream instead of reading the file every time.
But then it occurred to me that these Windows Mobile devices only have one kind of memory, which is used both for data storage (i.e., the file system) and for program memory; there's even a nice slider in the control panel which you can use to delegate memory to either storage or program execution. So, theoretically, reading a file from the file system (or some value from a SQL Server CE database) should take (almost) the same amount of time as reading that value from some in-memory object, right?
Is this assumption correct (i.e., in-memory caching on application level doesn't make sense here) or did I miss something? For simplicity, let's assume that only the internal memory of the device is used (no memory card).
The assumption may or may not be valid. Where in storage does it reside? If it's persistent storage (like a storage card folder or anything else that remains when you hard reset) then it's backed by Flash, which is way, way slower than RAM and there will be a difference in load perf, though how much it might impact your app I can't say - only testing will tell you that.
When I want to play a short WAV file on Windows Mobile (e.g. a notification sound), I usually add it as a resource to my executable. AFAIK resources are loaded into RAM since they are part of the executable image. You can then conveniently call PlaySound() with the SND_RESOURCE flag (and probably OR that with SND_ASYNC too, so the call isn't going to block while the file is being played).
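For example (C++/Win32; IDR_WAV_NOTIFY is a placeholder resource ID for a WAVE resource added to the executable, and on Windows CE/Mobile PlaySound lives in coredll rather than winmm):

#include <windows.h>
#include <mmsystem.h>   // PlaySound; link winmm.lib on desktop, coredll on Windows CE

#define IDR_WAV_NOTIFY 101   // must match the ID used in the .rc file

void PlayNotify(HINSTANCE hInstance)
{
    // SND_RESOURCE: the WAV is an embedded resource (part of the exe image, so already in RAM).
    // SND_ASYNC: return immediately instead of blocking until playback finishes.
    PlaySound(MAKEINTRESOURCE(IDR_WAV_NOTIFY), hInstance,
              SND_RESOURCE | SND_ASYNC);
}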