It stopped reporting after midday on June 28th.
Does anyone know why? The only changes made to the system were automatic system updates.
This is probably due to a bug fix we rolled out in late June:
https://cloud.google.com/compute/docs/release-notes#june2016
Lower and less noisy reported usage is expected, especially for mostly idle VMs, but if the graph remains flat even when the VM is busy, it's possible we have broken something.
Apologies if this question comes off a bit ranty, but is this error supposed to be commonplace? I'm trying to use a compute instance with GPUs attached in the asia-east1 region. However I frequently (at this point every day) encounter this issue for hours at a time. Today I stopped the instance I was using for all of about 2 minutes before starting it again only to get this error. I've used AWS in the past and never had this issue. How are people supposed to use gcloud in any serious capacity when it seemingly never has enough resources available?
This is a common error message that appears from time to time: many resources (CPUs, IPs, etc.) in the Asia regions/zones are at full capacity.
Also note that if you are a Free Trial user, you will not be able to use GPUs; if you want to use GPUs, you might need to upgrade your account.
I have loaded up a database with about 7.5M nodes and 33M+ relationships - about 25 GB in total - so it's reasonably large, is my point. What I am finding since loading it is that my Neo4j client periodically just falls over, leaving nothing more than Chrome's irritating "Aw snap - something went wrong" page behind. I have checked the logs and found nothing significant there. How can I begin to track down what is happening with these failed queries?
The issue might be with your Chrome environment. This Google support page on the "Aw Snap!" error may be of help.
Also, if your queries return a lot of data or take too long to respond, the browser tab might be running out of memory or hitting internal timeouts. So make sure the queries you run through the browser are tailored accordingly, e.g. by paging results with LIMIT/SKIP instead of pulling millions of rows at once.
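If you need to pull large result sets programmatically rather than through the browser, the same paging idea applies. A minimal sketch using the Neo4j Java driver (1.x API); the bolt URI, credentials, and query are placeholders, not taken from the question:

    import org.neo4j.driver.v1.AuthTokens;
    import org.neo4j.driver.v1.Driver;
    import org.neo4j.driver.v1.GraphDatabase;
    import org.neo4j.driver.v1.Record;
    import org.neo4j.driver.v1.Session;
    import org.neo4j.driver.v1.StatementResult;

    public class PagedQuery {
        public static void main(String[] args) {
            // Placeholders: point these at your own instance and credentials.
            Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                    AuthTokens.basic("neo4j", "secret"));
            Session session = driver.session();
            try {
                int pageSize = 10000;
                for (int skip = 0; ; skip += pageSize) {
                    // Page through the data instead of pulling millions of rows at once.
                    StatementResult result = session.run(
                            "MATCH (n) RETURN n.name AS name SKIP " + skip + " LIMIT " + pageSize);
                    int rows = 0;
                    while (result.hasNext()) {
                        Record record = result.next();
                        rows++;
                        // ... process record.get("name") ...
                    }
                    if (rows < pageSize) {
                        break; // last page reached
                    }
                }
            } finally {
                session.close();
                driver.close();
            }
        }
    }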
I have found several SO Q&As related to OpenShift's concept of application idling when there is no inbound traffic for 24 hours. Setting aside the hacks that could work around it, I was wondering what the actual effect is, since OpenShift claims that the application is brought back to a fully live state when an incoming request arrives. In that case, apart from the fact that the HTTP request that brings the application back from idle to live/running would run a trifle slower, is there any other inconvenience that I am missing here?
In my experience, the first call after idling consistently fails; subsequent calls then work. This is probably due to a timeout, since it takes a while to spin up the gears. It may also depend on how long your application takes to activate, so this could be specific to my kind of application.
However, I switched from the FREE plan to BRONZE a while ago and have not experienced any problems since then.
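If you stay on the free plan, one way to paper over that first failed call is a client-side retry with a generous timeout, so the request that wakes the idled gear does not surface as an error. A rough sketch; the endpoint URL, timeouts, and retry count are illustrative, not anything OpenShift prescribes:

    import java.io.IOException;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class WakeUpClient {

        // Retries a GET with generous timeouts so the request that wakes an idled
        // application does not bubble up to the caller as a failure.
        static int getWithRetry(String endpoint, int attempts) throws IOException, InterruptedException {
            IOException last = null;
            for (int i = 0; i < attempts; i++) {
                HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
                conn.setConnectTimeout(10000); // allow time for the gear to spin up
                conn.setReadTimeout(30000);
                try {
                    int code = conn.getResponseCode();
                    if (code < 500) {
                        return code; // the application answered; treat 2xx/3xx/4xx as "alive"
                    }
                    last = new IOException("HTTP " + code);
                } catch (IOException e) {
                    last = e; // typically a timeout while the gear starts
                } finally {
                    conn.disconnect();
                }
                Thread.sleep(5000); // back off before the next attempt
            }
            throw last != null ? last : new IOException("no attempts made");
        }

        public static void main(String[] args) throws Exception {
            System.out.println(getWithRetry("http://myapp-mydomain.rhcloud.com/", 3));
        }
    }

Whether this is enough depends on how long your application takes to activate; if startup regularly exceeds the read timeout, raising it (or keeping the app warm) is the alternative.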
I'm not sure why (probably an update), but Chrome has lost significant performance while running some things I've made with three.js. I haven't worked on anything in a month, and now that I've returned to my project I've found things are suddenly running much slower than they used to. I used to get a smooth 60+ fps, and now things are chugging along at 20 fps in one of my programs.
Just to be clear, I've changed absolutely nothing. I simply opened my projects a month later and the performance has dropped by 40+ fps, which is frightening. This is true for anything using three.js.
I'm wondering if anyone knows what the issue is.
EDIT:
http://gamejolt.com/games/arcade/tiny-tank/27522/
This is an application I've made whose performance has degraded significantly, at least on my machine. There is also strange shading behavior that has appeared on certain objects, possibly due to hidden lights(?).
I'm using the WebGL renderer by the way.
I'm using three.js r66, since there are no migration instructions on GitHub for moving to any higher version.
Go to chrome://flags and make sure "Override software rendering list" is set to Enabled. This ensures GPU acceleration is used even on unsupported system configurations.
I have a Java Swing application that subscribes to a lot of data and displays it in various ways. Under heavy load I have found that the JRE simply stops working with the message "Java(TM) Platform SE binary has stopped working". This obviously shuts down my application and I need to restart it. I have tried to google for ways to troubleshoot this issue, as I do not get a stack trace in my code or anything else I can work with, but I have found very little useful information beyond upgrading/re-installing the JRE and running virus scans. I have done both of these measures and rebooted the server, but the problem still persists. I have tried to monitor the process with Java VisualVM (see dump below), but I am no expert on this tool and may not know what to look for. The observation I have made is that the 'crashes' appear to coincide with garbage collections.
The issue is quite easy to reproduce and occurs after about 10 minutes of running the application. I do not run the application with any specific JVM parameters. The Java version is 1.6.0_31 (it was _25 before the upgrade) and I run on Windows 7 64-bit.
In the picture below from VisualVM, the Java binary has just stopped working, which appears to coincide with the GC run.
Any help or ideas that would let me troubleshoot or remedy the problem are greatly appreciated. Thanks.
Three things to check:
If you've implemented the finalize() method anywhere, make sure it doesn't directly or indirectly lock any objects; this can cause a catastrophic deadlock correlated with GC (see the sketch below for what this looks like).
If you've got native code, any number of weird things can happen if the code is not using global references correctly, including deadlocks and weird memory corruption, which would again correlate with GC activity.
Finally, GC might just be "stirring the pot" and exposing vanilla deadlocks which exist otherwise in the application; check your synchronization protocols.
Garbage collection pauses the VM's application threads while it happens, which might be exposing a race condition somewhere.
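To illustrate the first point, here is a hypothetical (not taken from the poster's code) shape of a finalize() that competes with application threads for a lock. The JVM runs all finalizers on a single finalizer thread; if that thread blocks here while a worker holds the lock for a long time, pending finalizers pile up and the hang shows up right after a GC cycle:

    public class Subscription {

        // Lock shared between ordinary worker threads and finalize().
        private static final Object LOCK = new Object();

        void process() {
            synchronized (LOCK) {
                // Long-running work (or a call that itself blocks) while holding LOCK.
            }
        }

        @Override
        protected void finalize() throws Throwable {
            // The single JVM finalizer thread can stall here indefinitely,
            // so objects queued for finalization are never reclaimed and the
            // application appears to die shortly after a GC run.
            synchronized (LOCK) {
                // cleanup touching shared state
            }
            super.finalize();
        }
    }

Moving that cleanup out of finalize() into an explicit close()/dispose() called by the owning code removes the GC-correlated contention.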