WebGL Unavailable, GPU Process unable to boot - google-chrome

I'm running Chrome 54.0.2840.87 on Windows 10. I have two GPUs: an Intel(R) HD Graphics 520 and an AMD Radeon R5 M335.
Up until a couple of weeks ago, WebGL was running just fine in Chrome. Now, without my having changed any settings anywhere, WebGL is no longer available.
When I try to run a Chrome Experiment, for example, I get a message saying that my graphics card does not seem to support WebGL. I know my graphics cards work fine (they have the latest drivers), and WebGL runs perfectly in Firefox. I know my GPUs have not been blacklisted (in either browser).
On chrome://gpu I am told that WebGL is unavailable and that the GPU process was unable to boot. In chrome://flags, enabling or disabling WebGL no longer seems to be an option.
Enabling/disabling anything else WebGL-related has made no difference. Is there something else I can do to get it working again? At what level is the issue? (The issue persists in Chrome Canary.) I am not the most technologically savvy person, but I've had no luck finding answers anywhere else.
The following is what I see on my chrome://gpu page:
Graphics Feature Status
Canvas: Software only, hardware acceleration unavailable
Flash: Software only, hardware acceleration unavailable
Flash Stage3D: Software only, hardware acceleration unavailable
Flash Stage3D Baseline profile: Software only, hardware acceleration unavailable
Compositing: Software only, hardware acceleration unavailable
Multiple Raster Threads: Unavailable
Native GpuMemoryBuffers: Software only. Hardware acceleration disabled
Rasterization: Software only, hardware acceleration unavailable
Video Decode: Software only, hardware acceleration unavailable
Video Encode: Software only, hardware acceleration unavailable
VPx Video Decode: Software only, hardware acceleration unavailable
WebGL: Unavailable
Driver Bug Workarounds
clear_uniforms_before_first_program_use
disable_d3d11
disable_discard_framebuffer
disable_dxgi_zero_copy_video
disable_nv12_dxgi_video
disable_framebuffer_cmaa
exit_on_context_lost
scalarize_vec_and_mat_constructor_args
Problems Detected
GPU process was unable to boot: GPU process launch failed.
Disabled Features: all
Some drivers are unable to reset the D3D device in the GPU process sandbox
Applied Workarounds: exit_on_context_lost
Clear uniforms before first program use on all platforms: 124764, 349137
Applied Workarounds: clear_uniforms_before_first_program_use
Always rewrite vec/mat constructors to be consistent: 398694
Applied Workarounds: scalarize_vec_and_mat_constructor_args
Disable Direct3D11 on systems with AMD switchable graphics: 451420
Applied Workarounds: disable_d3d11
Framebuffer discarding can hurt performance on non-tilers: 570897
Applied Workarounds: disable_discard_framebuffer
NV12 DXGI video hangs or displays incorrect colors on AMD drivers: 623029, 644293
Applied Workarounds: disable_dxgi_zero_copy_video, disable_nv12_dxgi_video
Limited enabling of Chromium GL_INTEL_framebuffer_CMAA: 535198
Applied Workarounds: disable_framebuffer_cmaa
Native GpuMemoryBuffers have been disabled, either via about:flags or command line.
Disabled Features: native_gpu_memory_buffers
Version Information
Data exported 11/7/2016, 2:09:57 PM
Chrome version Chrome/54.0.2840.87
Operating system Windows NT 10.0.14393
Software rendering list version 11.12
Driver bug list version 9.00
ANGLE commit id 905fbdea9ef0
2D graphics backend Skia/54 a21f10dd8b19c6cb47d07d94d0a0525c16461969
Command Line Args Files (x86)\Google\Chrome\Application\chrome.exe" --flag-switches-begin --enable-gpu-rasterization --enable-unsafe-es3-apis --enable-webgl-draft-extensions --flag-switches-end
Driver Information
Initialization time 0
In-process GPU true
Sandboxed false
GPU0 VENDOR = 0x1002, DEVICE= 0x6660
GPU1 VENDOR = 0x8086, DEVICE= 0x1916
Optimus false
AMD switchable true
Desktop compositing Aero Glass
Diagonal Monitor Size of \\.\DISPLAY1 15.5"
Driver vendor Advanced Micro Devices, Inc.
Driver version 16.200.2001.0
Driver date 6-16-2016
Pixel shader version
Vertex shader version
Max. MSAA samples
Machine model name
Machine model version
GL_VENDOR
GL_RENDERER
GL_VERSION
GL_EXTENSIONS
Disabled Extensions
Window system binding vendor
Window system binding version
Window system binding extensions
Direct rendering Yes
Reset notification strategy 0x0000
GPU process crash count 0
Compositor Information
Tile Update Mode One-copy
Partial Raster Enabled
GpuMemoryBuffers Status
ATC Software only
ATCIA Software only
DXT1 Software only
DXT5 Software only
ETC1 Software only
R_8 Software only
BGR_565 Software only
RGBA_4444 Software only
RGBX_8888 Software only
RGBA_8888 Software only
BGRX_8888 Software only
BGRA_8888 Software only
YVU_420 Software only
YUV_420_BIPLANAR Software only
UYVY_422 Software only
Diagnostics
... loading ...
Log Messages
[1268:3756:1107/133435:ERROR:gl_surface_egl.cc(252)] : No suitable EGL configs found.
[1268:3756:1107/133435:ERROR:gl_surface_egl.cc(1012)] : eglCreatePbufferSurface failed with error EGL_BAD_CONFIG
[1268:3756:1107/133435:ERROR:gpu_info_collector.cc(35)] : gl::GLContext::CreateOffscreenGLSurface failed
[1268:3756:1107/133435:ERROR:gpu_info_collector.cc(108)] : Could not create surface for info collection.
[1268:3756:1107/133435:ERROR:gpu_main.cc(506)] : gpu::CollectGraphicsInfo failed (fatal).
GpuProcessHostUIShim: The GPU process exited normally. Everything is okay.

Got the same issue, and found a post about ignoring the hardware compatibility list. So, go to chrome://flags and enable the first option:
Override software rendering list (shown as "Ignorer la liste de rendu logiciel" on a French locale)
https://superuser.com/questions/836832/how-can-i-enable-webgl-in-my-browser
Tell me if it helps!

For anyone still seeing WebGL as unavailable under chrome://gpu/ after enabling Override software rendering list at chrome://flags/:
Check further down the chrome://gpu/ page under the topic Problems Detected. If there is a mention of GPU access being disabled:
GPU process was unable to boot: GPU access is disabled in chrome://settings. Disabled Features: all
Navigate to:
chrome://settings/ > Advanced > System
and enable "Use hardware acceleration when available".

Had the same problem when using Sketchfab! I set "Override software rendering list" to "Disabled", and everything looks OK now!!

Related

WebGL: max vertex uniform vectors in ANGLE-based implementations

I have some rendering code that queries GL_MAX_VERTEX_UNIFORM_VECTORS, reports the value to the console, and computes uniform array sizes based on it. The goal is to use (almost) all available GPRs for some batched rendering.
The code runs on desktop (Linux and Windows) and in browsers when built with emscripten. I have tried it on Nvidia, AMD, and Intel HD GPUs, with all combinations of GPU and OS.
On Linux, both the native and web versions work fine and report GL_MAX_VERTEX_UNIFORM_VECTORS = 1024 on my GPUs.
Then I reboot into Windows, where the native version works fine too, while the web version in both Chrome and Firefox reports GL_MAX_VERTEX_UNIFORM_VECTORS = 4096 and works fine with Nvidia and AMD. It looks like it uses phantom uniforms that were unavailable to the native build. So my first question: how? Does it swap the extra values in from memory?
Then I ran the code on an Intel HD 4000 GPU on Windows. The native version works as expected (with the correct value of GL_MAX_VERTEX_UNIFORM_VECTORS), but the web version reports 4096 and corrupts some of the uniforms: geometry gets glitched when a uniform array is used in the vertex shader.
Why does ANGLE report the wrong GL_MAX_VERTEX_UNIFORM_VECTORS? Is it a bug? How can I get the correct value or otherwise use all available uniforms? No UBOs please; I'm bound to WebGL1.
Quick reproduction for webgl: https://sergeyext.github.io/sergeyext/max_vertex_uniform_vectors/start_webgl.html
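For reference, a minimal sketch of the query in question, assuming a current GLES2/WebGL1 context (emscripten-style C++; the function name is illustrative, not from the original code):

    // Query how many vec4 uniform slots the vertex stage exposes.
    // GLES2/WebGL1 report this limit in vec4 units; desktop GL instead
    // exposes GL_MAX_VERTEX_UNIFORM_COMPONENTS (scalars, i.e. 4x this).
    #include <GLES2/gl2.h>
    #include <cstdio>

    int queryMaxVertexUniformVectors() {
        GLint maxVectors = 0;
        glGetIntegerv(GL_MAX_VERTEX_UNIFORM_VECTORS, &maxVectors);
        std::printf("GL_MAX_VERTEX_UNIFORM_VECTORS = %d\n", maxVectors);
        return maxVectors;
    }

Note that a shader can rarely use the whole reported limit for a single array: other uniforms in the program count against the same budget, so sizing the array at exactly the maximum is the first thing to back off from when geometry glitches appear.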

Can I debug CUDA on the device that drives the display output?

I develop in VS2012. I have 3 monitors connected to my PC, with one GTX 960 graphics card.
I thought it was impossible to debug CUDA on the same device that drives the display output. Maybe I'm reading it wrong, but when I go to NSight -> Windows -> System Info -> Display Devices, I can see that the monitor uses my graphics card. Since I have only one graphics card and I can debug (as the image shows in CUDA WarpWatch1), I deduce that either I can debug on the same device that drives the display output, or it uses my built-in Intel HD Graphics but doesn't show it in Display Devices.
Despite what you have apparently read somewhere, CUDA (and NSight) has supported debugging on active display GPUs with the WDDM driver for a number of years. You can see the exact matrix of supported hardware, drivers, and debugging modes in the documentation here.
When CUDA was first introduced, debugging was limited to non-display cards. However, this limitation was removed on Windows and Linux, on more recent hardware, some time ago.

Can't run CUDA or OpenCL on GeForce 540M

I have a problem running the samples provided by Nvidia in their GPU Computing SDK (there's a library of compiled sample codes).
For CUDA I get the message "No CUDA-capable device is detected"; for OpenCL there's an error from the function that should find OpenCL-capable units.
I have installed all three parts from Nvidia needed to develop with OpenCL: the devdriver for Win7 64-bit v301.27, CUDA Toolkit 4.2.9, and GPU Computing SDK 4.2.9.
I think this might have to do with Optimus technology, which reroutes output from the Nvidia GPU through the Intel one to render things (this notebook also has an Intel HD 3000 accelerator). In the Nvidia control panel I set it to use the high-performance Nvidia GPU, set the power profile to prefer maximum performance, and for PhysX changed from automatic selection to the Nvidia processor. Nothing has changed, though; those samples won't run (not even those targeted at GF8000 cards).
I would like to play with OpenCL a bit and see what it is capable of, but without the ability to test things it's useless. I found some info about this on forums, but it was mostly about Linux users, where you need Bumblebee to access the Nvidia GPU. There's no such problem on Windows, however; the drivers are better, so you can access it without dark magic (or so I thought until I hit this problem).
My laptop has a GeForce 540M as well, in an Optimus configuration, since my Sandy Bridge CPU also has Intel's integrated graphics. To run CUDA code, I have to:
Install NVIDIA Driver
Go to NVIDIA Control Panel
Click 3D Settings -> Manage 3D Settings -> Global Settings
In the "Preferred graphics processor" drop-down, select "High-performance NVIDIA processor"
Apply the settings
Note that the instructions above apply the settings to all applications, so you don't have to worry about CUDA errors any more, but it will drain more battery.
Here is a video recap as well. Good luck!
OK, this has proven to be a totally crazy solution. I wondered whether something was hooking in between the hardware and the application, and the only thing that came to mind was AV software. I'm using Comodo with Sandbox and Defense+ on, and after turning them off I could run all those samples. What's more, only Defense+ needs to be turned off.
Now I just wonder how many apps may have been blocked from accessing that GPU..
That's most likely because of the architecture of Optimus, so I'd suggest you read the NVIDIA CUDA Developer Guide for NVIDIA Optimus Platforms, especially the section "Querying for a CUDA Device", which addresses this issue, I believe.
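As a rough illustration of that query-before-use pattern, here is a minimal sketch using the standard CUDA runtime API (my sketch, not code from the guide):

    // Enumerate CUDA devices and fail gracefully when none is visible --
    // the typical symptom on Optimus systems where the NVIDIA GPU is
    // hidden from the process.
    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        int count = 0;
        cudaError_t err = cudaGetDeviceCount(&count);
        if (err != cudaSuccess || count == 0) {
            // "No CUDA-capable device is detected" lands here.
            std::printf("No CUDA device: %s\n", cudaGetErrorString(err));
            return 1;
        }
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            std::printf("Device %d: %s (SM %d.%d)\n", i, prop.name,
                        prop.major, prop.minor);
        }
        return 0;
    }

Build it with nvcc and run it with the control-panel setting above both on and off; the difference in output tells you whether the GPU routing, rather than your code, is the problem.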

CUDA-enabled graphics processor as VMware?

I'm taking a course that teaches CUDA. I would like to use it on my personal laptop, but I don't have an Nvidia graphics processor; mine is ATI. So I was wondering: is there any virtual hardware simulator I can use, or is there no way other than using a PC with a CUDA-capable graphics processor?
Thank you very much.
The CUDA toolkit used to ship with a host CPU emulation mode, but that was deprecated early in the 3.0 release cycle and has been fully removed from toolkits for the best part of two years.
Your only real option today is to use Ocelot. It has a PTX assembly translator and a pretty reliable reimplementation of the CUDA runtime for x86 CPUs, and there is also a rather experimental PTX-to-AMD-IL translator (I have no experience with the latter). On a modern Linux system with an up-to-date GNU toolchain, Ocelot is reasonably easy to get running. I am not sure whether there is a functioning Windows port.
CUDA (in older toolkits) has its own emulation mode, which runs everything on the CPU. The problem is that in that case you don't have real concurrency, so programs that run successfully in emulation mode can fail (and usually do) in normal mode. You can develop your code in emulation mode, but then you have to debug it on a computer with a CUDA card.

Use NVIDIA card for CUDA, motherboard for video

I want to use the motherboard as the primary display adapter and my NVIDIA graphics card as a dedicated CUDA processor. My first thought was to simply plug the monitor's VGA cable into the motherboard's VGA port and hope the BIOS was smart enough to use the on-board video as the display adapter when it booted. That didn't work; the BIOS must have detected the NVIDIA card and continued to use it as the display adapter. The next thing I looked for was a setting in the BIOS to tell it "don't use the NVIDIA 560 as the display adapter, use the on-board video as the display adapter". I searched through the BIOS and the web, but either this cannot be done or I cannot figure out how to do it. The mobo is a BIOSTAR TH67+ LGA 1155. Windows 7 OS.
RESULTS SUMMARY (from answers provided below)
Enabling the Integrated Graphics Device (IGD) in the BIOS will allow the system to be driven from the on-board graphics even with the graphics card connected to the system bus. However, the graphics card then cannot be used for CUDA processing: Windows will not enable graphics devices unless a monitor is attached to them, and the normal driver stack cannot see them. Solution: use Linux, or attach a display to the graphics card but do not use it. The Tesla cards (GPGPU-only) are not recognized by Windows as graphics devices, so they don't suffer from this.
Also, a newer BIOSTAR motherboard, the TZ68A+, supports the Virtu drivers, which permit sophisticated simultaneous use of the graphics card and on-board video.
Looking at the BIOS manual (.zip), the setting you probably want is Chipset -> North Bridge -> Initiate Graphics Adapter. Try setting it to IGD (Integrated Graphics Device).
I believe this will happen automatically, as the native video won't support CUDA. After installing the SDK, if you run DeviceQuery, do you see more than one result?
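If DeviceQuery does list the card, one common trick is to select the GPU that is not driving a display. A minimal sketch with the standard CUDA runtime API (the watchdog heuristic is my suggestion, not from this thread):

    // Prefer a device whose kernel-execution watchdog is off: on Windows
    // that is typically a GPU with no display attached, which also avoids
    // the watchdog's time limit on long-running kernels.
    #include <cuda_runtime.h>

    int pickComputeDevice() {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0)
            return -1;                      // no CUDA device visible
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            if (!prop.kernelExecTimeoutEnabled) {
                cudaSetDevice(i);           // dedicate this GPU to compute
                return i;
            }
        }
        cudaSetDevice(0);                   // fall back to the first device
        return 0;
    }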
I believe the H67 allows coexistence of both integrated and dedicated GPUs. Check out Lucid Virtu here: http://www.lucidlogix.com/driverdownloads-virtu.html - it allows switching GPUs on the fly. But I don't know whether it affects the CUDA device query.
I never tried it on my rig, because it's X58; I just heard about it from Tom's Hardware. Try it out and let us know. Lucid Virtu is definitely worth a try: it's free, and it can cut your electric bill.