I'm researching how the browser (Chrome) uses the GPU.
For example, I read a document that said the CSS translate3d property causes an element to be drawn by the GPU (as its own graphics layer).
However, I'm confused about whether recent Chrome draws all pixels with the GPU, or whether the CPU still does some of the drawing.
I saw the references below, and they seem to say the GPU draws the pixels.
https://www.youtube.com/watch?v=m-J-tbAlFic
https://developer.chrome.com/blog/inside-browser-part3/
So I wonder whether the claim that translate3d makes an element drawn by the GPU is still true for recent Chrome.
I know that Cesium offers several different interpolation methods, including linear (or bilinear in 2D), Hermite, and Lagrange. One can use these methods to resample sets of points and/or create curves that approximate sampled points, etc.
However, the question I have is what method does Cesium use internally when it is rendering a 3D scene and the user is zooming/panning all over the place? This is not a case where the programmer has access to the raster, etc, so one can't just get in the middle of it all and call the interpolation functions directly. Cesium is doing its own thing as quickly as it can in response to user control.
My hunch is that the default is bilinear, but I don't know that nor can I find any documentation that explicitly says what is used. Further, is there a way I can force Cesium to use a specific resampling method during these activities, such as Lagrange resampling? That, in fact, is what I need to do: force Cesium to employ Lagrange resampling during scene rendering. Any suggestions would be appreciated.
EDIT: Here's a more detailed description of the problem…
Suppose I use Cesium to set up a 3-D model of the Earth including a greyscale image chip at its proper location on the model Earth's surface, and then I display the results in a Cesium window. If the view point is far enough from the Earth's surface, then the number of pixels displayed in the image chip part of the window will be fewer than the actual number of pixels that are available in the image chip source. Some downsampling will occur. Likewise, if the user zooms in repeatedly, there will come a point at which there are more pixels displayed across the image chip than the actual number of pixels in the image chip source. Some upsampling will occur. In general, every time Cesium draws a frame that includes a pixel data source there is resampling happening. It could be nearest neighbor (doubt it), linear (probably), cubic, Lagrange, Hermite, or any one of a number of different resampling techniques. At my company, we are using Cesium as part of a large government program which requires the use of Lagrange resampling to ensure image quality. (The NGA has deemed that best for its programs and analyst tools, and they have made it a compliance requirement. So we have no choice.)
So here's the problem: while the user is interacting with the model, for instance zooming in, the drawing process is not in the programmer's control. The resampling is happening either in the Cesium layer itself (hopefully) or in still lower layers (for instance, the WebGL functions that Cesium may be relying on). So I have no clue which technique is used for this resampling. Worse, if that technique is not Lagrange, then I don't have any clue how to change it.
So the question(s) would be this: is Cesium doing the resampling explicitly? If so, then what technique is it using? If not, then what drawing packages and functions are Cesium relying on to render an image file onto the map? (I can try to dig down and determine what techniques those layers may be using, and/or have available.)
UPDATE: Wow, my original answer was a total misunderstanding of your question, so I've rewritten it from scratch.
With the new edits, it's clear your question is about how images are resampled for the screen while rendering. These images are texture maps in WebGL, and the process of getting them to the screen quickly is implemented in hardware, on the graphics card itself. Software on the CPU is not performant enough to map individual pixels to the screen one at a time, which is why we have hardware-accelerated 3D cards.
Now for the bad news: this hardware supports nearest neighbor, linear, and mipmapping. That's it. 3D graphics cards do not use any fancier interpolation, because the work has to be done in a fraction of a second to keep the frame rate as high as possible.
Mipmapping is described well by @gman in his article WebGL 3D Textures. It's a long article, but search for the word "mipmap" and skip ahead to his description of it. Basically, a single image is reduced into progressively smaller images prior to rendering, so an appropriately-sized starting point can be chosen at render time. But there will always be a final mapping to the screen, and as you can see, the choices are NEAREST or LINEAR.
Quoting @gman's article here:
You can choose what WebGL does by setting the texture filtering for each texture. There are 6 modes
NEAREST = choose 1 pixel from the biggest mip
LINEAR = choose 4 pixels from the biggest mip and blend them
NEAREST_MIPMAP_NEAREST = choose the best mip, then pick one pixel from that mip
LINEAR_MIPMAP_NEAREST = choose the best mip, then blend 4 pixels from that mip
NEAREST_MIPMAP_LINEAR = choose the best 2 mips, choose 1 pixel from each, blend them
LINEAR_MIPMAP_LINEAR = choose the best 2 mips, choose 4 pixels from each, blend them
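For concreteness, here is a minimal WebGL sketch in plain JavaScript of where those filter modes get chosen (this assumes you already have a gl context and a loaded image; it illustrates the underlying WebGL calls, not Cesium's own code):

const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
// Build the chain of progressively smaller images used by the *_MIPMAP_* modes.
gl.generateMipmap(gl.TEXTURE_2D);
// Minification (viewer far away, downsampling): trilinear, the best the hardware offers.
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);
// Magnification (viewer zoomed in, upsampling): only NEAREST or LINEAR are allowed here.
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);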
I guess the best news I can give you is that Cesium uses the best of those, LINEAR_MIPMAP_LINEAR, to do its own rendering. If you have a strict requirement for more time-consuming imagery interpolation, that means you have a requirement not to use a realtime 3D hardware-accelerated graphics card, as there is no way to do Lagrange image interpolation during a realtime render.
So I would like to do something like what is possible with a hardware-accelerated HTML5 canvas for animated 2D vector graphics drawing, but on top of my OpenGL (4.x) rendered 3D scene (for complex HUD and GUI displays). I need this to work on Win7+, macOS, and Linux; mobile platform support is not needed.
BTW, I am working in C++.
I was wondering if anyone knows what, for example, Chrome uses for accelerated 2D vector graphics in its HTML5 canvas draw functions? I was under the impression it was accelerated using ANGLE (which wraps OpenGL or DX9). Or am I wrong, and it's only SVG rendering that is accelerated, not the JavaScript canvas draw functions?
Doing HTML5-canvas-style animated 2D vector graphics with OpenGL is highly non-trivial; is Google using an available library for that, or is it just in-house code?
I have been looking into OpenVG and have had a hard time finding the right implementation to use. So far the only thing I can actually get examples compiled for is ShivaVG (but there seem to be shimmering artifacts in the tiger demo, and other issues, and the latest release is 7 years old). Also, I think ShivaVG uses the fixed-function pipeline, and my team decided to lock our OpenGL usage down to the 4.x core profile, so that won't work. I would love to use NV_Path_Rendering, but it's not portable (to anything other than an NVIDIA-accelerated device).
I also thought using OpenVG would be useful since I might be able to hide NV_Path_Rendering underneath it, or a new OpenVG library that might come out in the future. But I am wondering if OpenVG's future might be in peril.
Chrome apparently uses the Skia library for all of its 2D rendering.
I am making a 3D space game in Stage3D and would like a field of stars drawn behind ALL other objects. I think the problem I'm encountering is that the distances involved are very high. If I have the stars genuinely much farther than other objects I have to scale them to such a degree that they do not render correctly - above a certain size the faces seem to flicker. This also happens on my planet meshes, when scaled to their necessary sizes (12000-100000 units across).
I am rendering the stars on flat plane textures, pointed to face the camera. So long as they are not scaled up too much, they render fine, although obviously in front of other objects that are further away.
I have tried all manner of depthTestModes (Context3DCompareMode.LESS, Context3DCompareMode.GREATER and all the others) combined with including and excluding the mesh in the z-buffer, to get the stars to render only if NO other pixels are present where the star would appear, without luck.
Is anyone aware of how I could achieve this - or, even better, does anyone know why meshes above a certain size do not render properly? Is there an arbitrary upper limit that I'm not aware of?
I don't know Stage3D, and I'm talking in OpenGL terms here, but the usual way to draw a background/skybox is to draw the background close up rather than far away, draw it first, and then either disable depth-buffer writes while the background is being drawn (if the background itself does not need depth buffering) or clear the depth buffer after the background is drawn and before the regular scene is.
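For illustration, here is a minimal WebGL-flavoured sketch of the first variant (the question is about Stage3D, so treat this as pseudocode for the equivalent Context3D calls; drawBackground and drawScene are hypothetical helpers):

gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
gl.depthMask(false);   // background cannot write depth, so it can never occlude the scene
drawBackground();      // small starfield/skybox kept centred on the camera
gl.depthMask(true);    // restore depth writes for the real scene
drawScene();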
Your flickering of planets may be due to lack of depth buffer resolution; if this is so, you must choose between
drawing the objects closer to the camera,
moving the camera frustum near plane farther out or far plane closer (this will increase depth buffer resolution across the entire scene), or
rendering the scene multiple times at mutually exclusive depth ranges (sometimes called depth partitioning).
You should use Starling. It can work together with Stage3D 3D engines:
http://www.adobe.com/devnet/flashplayer/articles/away3d-starling-interoperation.html
http://www.flare3d.com/blog/2012/07/24/flare3d-2-5-starling-integration/
You have to look at how projection and the vertex shader output work.
The vertex shader output has four components: x,y,z,w.
From that, pixel coordinates are computed:
x' = x/w
y' = y/w
z' = z/w
z' is what ends up in the z buffer.
So by simply putting z = w*value at the end of your vertex shader you can output any constant depth value. Just use value = 0.999 and there you are! Your regular LESS depth test will still work.
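The same idea, expressed as a GLSL vertex shader embedded in a WebGL-style JavaScript string (the question is about Stage3D/AGAL, so this is only an illustration of the math, not the actual AGAL code):

const backgroundVertexShader = `
  attribute vec4 aPosition;
  uniform mat4 uModelViewProjection;
  void main() {
    gl_Position = uModelViewProjection * aPosition;
    // Force z/w to a constant just inside the far plane, so the background
    // always ends up behind real scene geometry under a LESS depth test.
    gl_Position.z = gl_Position.w * 0.999;
  }
`;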
I'm relatively new to graphics programming, and I've just been reading some books and scanning through tutorials, so please pardon me if this seems a silly question.
I've got the basics of DirectX 11 up and running, and now I'm looking to have some fun. So naturally I've been reading heavily into the shader pipeline, and I'm already fascinated. The idea of writing a simple, minuscule piece of code that has to be efficient enough to run maybe tens of thousands of times every 60th of a second without wasting resources has me in a hurry to grasp the concept before continuing on and possibly making a mess of things. What I'm having trouble with is grasping what the pixel shader is actually doing.
Vertex shaders are simple to understand: you organize the vertices of an object in uniform data structures that relate information about it, like position and texture coordinates, and then pass each vertex into the shader to be converted from 3D to 2D by way of transformation matrices. As long as I understand it, I can work out how to code it.
But I don't get pixel shaders. What I do get is that the output of the vertex shader is the input of the pixel shader. So wouldn't that just be handing the pixel shader the 2D coordinates of the polygon's vertices? What I've come to understand is that the pixel shader receives individual pixels and performs calculations on them to determine things like color and lighting. But if that's true, then which pixels? The whole screen, or just the pixels that lie within the transformed 2D polygon?
Or have I misunderstood something entirely?
Vertex shaders are simple to understand: you organize the vertices of an object in uniform data structures that relate information about it, like position and texture coordinates, and then pass each vertex into the shader to be converted from 3D to 2D by way of transformation matrices.
After this, primitives (triangles or multiples of triangles) are generated and clipped (in Direct3D 11, it is actually a little more complicated thanks to transform feedback, geometry shaders, tessellation, you name it... but whatever it is, in the end you have triangles).
Now, fragments are "generated", i.e. a single triangle is divided into little cells with a regular grid, the output attributes of the vertex shader are interpolated according to each grid cell's relative position to the three vertices, and a "task" is set up for each little grid cell. Each of these cells is a "fragment" (if multisampling is used, several fragments may be present for one pixel [1]).
Finally, a little program is executed over all these "tasks", this is the pixel shader (or fragment shader).
It takes the interpolated vertex attributes, and optionally reads uniform values or textures, and produces one output (it can optionally produce several outputs, too). This output of the pixel shader refers to one fragment, and is then either discarded (for example due to depth test) or blended with the frame buffer.
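As a concrete illustration, here is roughly what such a "little program" looks like, written as a GLSL fragment shader in a WebGL-style JavaScript string (the question is about Direct3D 11/HLSL, but the structure is the same: interpolated inputs in, one color out; the variable names here are made up):

const fragmentShaderSource = `
  precision mediump float;
  varying vec2 vTexCoord;      // interpolated across the triangle from its three vertices
  varying vec3 vNormal;        // likewise interpolated per fragment
  uniform sampler2D uTexture;  // optional texture read
  uniform vec3 uLightDir;
  void main() {
    float light = max(dot(normalize(vNormal), -uLightDir), 0.0);
    vec4 base = texture2D(uTexture, vTexCoord);
    gl_FragColor = vec4(base.rgb * light, base.a);  // the single output for this fragment
  }
`;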
Usually, many instances of the same pixel shader run in parallel at the same time. This is because it is more silicon efficient and power efficient to have a GPU run like this. One pixel shader does not know about any of the others running at the same time.
Pixel shaders commonly run in a group (also called a "warp" or "wavefront"), and all pixel shaders within one group execute the exact same instruction at the same time (on different data). Again, this makes it possible to build more powerful, cheaper chips that use less energy.
[1] Note that in this case, the fragment shader still only runs once for every "cell". Multisampling only decides whether or not it stores the calculated value in one of the higher resolution extra "slots" (subsamples) according to the (higher resolution) depth test. For most pixels on the screen, all subsamples are the same. However, on edges, only some subsamples will be filled by close-up geometry whereas some will keep their value from further away "background" geometry. When the multisampled image is resolved (that is, converted to a "normal" image), the graphics card generates a "mix" (in the easiest case, simply the arithmetic mean) of these subsamples, which results in everything except edges coming out the same as usual, and edges being "smoothed".
Your understanding of pixel shaders is correct in that it "receives individual pixels and performs calculations on them to determine things like color and lighting."
The pixels the shader receives are the individual ones calculated during the rasterization of the transformed 2d polygon (the triangle to be specific). So whereas the vertex shader processes the 3 points of the triangle, the pixel shader processes the pixels, one at a time, that "fill in" the triangle.
I have some images drawn on an HTML5 canvas and I want to check if they are hit on mouse click. This seems easy; I have the bounds of the images. However, the images are transformed (translated and scaled). Unfortunately, the context does not have a method to get the current transform matrix, and there is also no API for matrix multiplication.
Seems the only solution is to keep track of the transforms myself and implement matrix multiplication.
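For what it's worth, this is the kind of minimal sketch I have in mind (untested; the names are made up, and the tracker would have to mirror every translate/scale call I make on the context):

// A 2D affine matrix in canvas order [a, b, c, d, e, f], kept alongside the context.
class TransformTracker {
  constructor() { this.m = [1, 0, 0, 1, 0, 0]; }           // identity
  translate(x, y) { this.multiply([1, 0, 0, 1, x, y]); }
  scale(sx, sy)   { this.multiply([sx, 0, 0, sy, 0, 0]); }
  multiply(n) {                                            // post-multiply, like ctx.transform()
    const m = this.m;
    this.m = [
      m[0] * n[0] + m[2] * n[1],        m[1] * n[0] + m[3] * n[1],
      m[0] * n[2] + m[2] * n[3],        m[1] * n[2] + m[3] * n[3],
      m[0] * n[4] + m[2] * n[5] + m[4], m[1] * n[4] + m[3] * n[5] + m[5],
    ];
  }
  apply(x, y) {                                            // map an image-space point to canvas space
    const m = this.m;
    return { x: m[0] * x + m[2] * y + m[4], y: m[1] * x + m[3] * y + m[5] };
  }
}

Then each image's bounds could be mapped through apply() and compared against the mouse coordinates.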
Suggestions are welcomed.
This is a common problem in the 3D (OpenGL) graphics world as well.
The solution is to create an auxiliary canvas object (which is not displayed), and to redraw your image into it. The draw is exactly the same as with your main canvas draw, except that each element gets drawn with a unique color. You then look up the pixel corresponding to your mouse pick, and read off its color, which will give you the corresponding element (if any).
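A rough sketch of that idea for a 2D canvas (the drawables list, applyTransform helper, and color-encoding scheme here are made-up names for illustration, not an existing API):

const pickCanvas = document.createElement('canvas');      // auxiliary canvas, never displayed
pickCanvas.width = mainCanvas.width;
pickCanvas.height = mainCanvas.height;
const pickCtx = pickCanvas.getContext('2d');

function pick(mouseX, mouseY, drawables) {
  pickCtx.clearRect(0, 0, pickCanvas.width, pickCanvas.height);
  drawables.forEach((d, i) => {
    pickCtx.save();
    d.applyTransform(pickCtx);                             // same translate/scale as the visible draw
    const id = i + 1;                                      // 0 means "nothing hit"
    pickCtx.fillStyle = 'rgb(' + (id & 255) + ',' + ((id >> 8) & 255) + ',0)';
    pickCtx.fillRect(d.x, d.y, d.width, d.height);         // the image's footprint in its unique color
    pickCtx.restore();
  });
  const p = pickCtx.getImageData(mouseX, mouseY, 1, 1).data;
  const id = p[0] + (p[1] << 8);
  return id === 0 ? null : drawables[id - 1];
}

Reading the single pixel under the mouse then tells you which element (if any) was hit.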
This is a commonly used method in the OpenGL world; you can find descriptions of it by searching for terms like "opengl object picking".
Update: The HTML5 canvas spec now includes hit regions. I'm not sure to what degree these are supported by browsers yet.