After migrating from JavaMelody 1.68.1 to 1.88 we get this message, usually right after starting our app server. Can anybody help us?
java.io.IOException: Could not release [<any_path>/springe4b951a1e22fd7add5aed6f9f81d9023cca34a88.rrd], the file was never requested
at net.bull.javamelody.internal.model.JRobin.createIOException(JRobin.java:569) ~[javamelody-core-1.88.0.jar:1.88.0]
at net.bull.javamelody.internal.model.JRobin.createInstance(JRobin.java:166) ~[javamelody-core-1.88.0.jar:1.88.0]
at net.bull.javamelody.internal.model.Collector.getRequestJRobin(Collector.java:907) ~[javamelody-core-1.88.0.jar:1.88.0]
[...]
Caused by: org.jrobin.core.RrdException: Could not release [<any_path>/springe4b951a1e22fd7add5aed6f9f81d9023cca34a88.rrd], the file was never requested
at org.jrobin.core.RrdDbPool.release(RrdDbPool.java:189) ~[jrobin-1.5.9.jar:1.5.9]
at net.bull.javamelody.internal.model.JRobin.init(JRobin.java:227) ~[javamelody-core-1.88.0.jar:1.88.0]
at net.bull.javamelody.internal.model.JRobin.<init>(JRobin.java:131) ~[javamelody-core-1.88.0.jar:1.88.0]
at net.bull.javamelody.internal.model.JRobin.createInstance(JRobin.java:164) ~[javamelody-core-1.88.0.jar:1.88.0]
This happens in a code path that is taken when empty rrd files exist in the javamelody storage directory, most likely because there is, or was, no space left on the device.
I suggest checking the free space left on the device, and optionally deleting the empty rrd files. Even if they are not deleted, javamelody is supposed to recreate those empty rrd files correctly and automatically when new data is collected for them, provided of course that there is free space on the device.
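If you want to clear them out proactively, here is a minimal sketch of a cleanup script (Node.js purely for illustration, any scripting language works; the storage directory path is an assumption, adjust it to your javamelody configuration):

// cleanup-empty-rrd.js - hypothetical helper, not part of javamelody.
// Removes zero-byte .rrd files left behind by a disk-full incident;
// javamelody should recreate them when new data is collected.
const fs = require('fs');
const path = require('path');

const storageDir = '/tmp/javamelody'; // assumption: your javamelody storage directory
for (const name of fs.readdirSync(storageDir)) {
  if (!name.endsWith('.rrd')) continue;
  const file = path.join(storageDir, name);
  if (fs.statSync(file).size === 0) {
    fs.unlinkSync(file);
    console.log('removed empty rrd file:', file);
  }
}

Free up disk space before running it, or the same empty files will simply reappear.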
I have set up SSIS logging to a text file. In the connection manager I selected Create file and gave the path as c:\logs\log.txt.
Notice that the log file is not generated if the log folder is absent. How do I ensure the folder is created if it does not exist? I tried choosing Create folder on the connection manager, but that also does not create the log file in the absence of the c:\logs folder.
How do I ensure the folder is auto-created and the log is always generated?
You have a chicken-and-egg scenario here. Consider the following replication of your problem.
I have the connection manager driven by a variable LogFileName which generates the date and time. That file lives in whatever folder is specified by LogPath and the first thing my package does is create the folder if it does not already exist. "This thing can run anywhere and all is good." I've said that plenty and have the scars to show for it.
The logging configuration dialog lists the events you can choose to log (based on what is in my package).
I am only logging OnPostExecute events. So I'm good, right? Because the OnPostExecute event won't fire until after that File System Task has completed.
If that were the case, you wouldn't have posted a question.
The first event that a package generates is a PackageStart event. Look at that list of events - no ability to filter that out. It doesn't matter whether you want that event logged or not, the logging handlers hear the PackageStart event and record it. Always.
The specified text file logger is supposed to record the data, and it stands ready to record PackageStart to the file... "oh, that path doesn't exist."
It will exist once the very first task (File System Task, Create Folder) has completed, but alas, it is too late. You either get the complete sequence of events or none.
In your Output window, you would see something like the following
SSIS package "C:\Users\bfellows\source\repos\PackageDeploymentModel\PackageDeploymentModel\ChickenAndEgg_Logging.dtsx" starting.
Error: 0xC001404B at ChickenAndEgg_Logging, Log provider "SSIS log provider for Text files": The SSIS logging provider has failed to open the log. Error code: 0x80070003.
The system cannot find the path specified.
Error: 0xC001404B at ChickenAndEgg_Logging, Log provider "SSIS log provider for Text files": The SSIS logging provider has failed to open the log. Error code: 0x80070003.
The system cannot find the path specified.
SSIS package "C:\Users\bfellows\source\repos\PackageDeploymentModel\PackageDeploymentModel\ChickenAndEgg_Logging.dtsx" finished: Success.
The package will show your Control Flow objects as all having gone green/OK, and the status message will say "Package execution completed with success", but on the results tab you'll have a red X showing the log provider couldn't open the log.
What do I do?
Preconfigure your environments as part of the package deployment process. When we used the native logger you're inquiring about, we had a document that laid out everything new developers/new servers needed to have done to ensure all of this was laid out and configured as it needed to be.
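That pre-creation can be scripted as a deployment step so the folder exists before the package ever runs; a minimal sketch (Node.js purely for illustration, PowerShell or any other tool works just as well; the path comes from the question):

// predeploy-logdir.js - hypothetical deployment step, not part of SSIS.
// Ensures the log folder exists before the package starts, so the
// PackageStart event always has a valid path to be written to.
const fs = require('fs');
fs.mkdirSync('c:\\logs', { recursive: true }); // no-op if the folder already exists
console.log('log folder ready');

Because this runs before the package, PackageStart no longer races against the folder creation.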
Unless a client has a strong business case for using the classic logging methodology, I would encourage them not to use it and instead rely on the SSISDB's native logging. It's cleaner, easier to manage, and requires no special setup. To quote the fine folks in Cupertino: it just works.
I'm using NVIDIA Nsight Systems version 2019.5.2.16-b54ef97 with CUDA 10.2. I'm running:
nsys profile my_app --some --args=here
so, a plain-vanilla profiling with no funny business. And yet, I get, at the bottom of the output:
... etc. etc. ...
Saving report to file "/some/where/report1.qdrep"
Report file saved.
Please discard the qdstrm file and use the qdrep file instead.
Removed /some/where/report1.qdstrm as it was successfully imported.
Please use the qdrep file instead.
Why am I being told to discard files and use other files instead? Especially given how, eventually, only a single file is generated (a .qdrep file)?
I'm guessing some internal conversion utility is run, and the message is not really intended for me - or am I missing something?
It is just logging output, albeit a little confusing. nsys first writes a .qdstrm stream file and then imports it into the final .qdrep report; those messages come from the import step. Since the import succeeded, the .qdstrm file was removed for you automatically, leaving only the .qdrep file, which is the one you should use.
In the documentation for app-indexeddb-mirror at https://elements.polymer-project.org/elements/app-storage?active=app-indexeddb-mirror there is a section I've copied below. I think I'm running into an error because the indicated file isn't loading, but I'm not sure how to fix the issue. Do I add a reference in staticFileGlobs in sw-precache-config.js or somewhere else?
In order to ensure that operations on IndexedDB block the main browser thread as little as possible, app-indexeddb-mirror relies on a WebWorker to operate on its corresponding IndexedDB database. If you are vulcanizing or otherwise combining your source files before your app is deployed, make sure that you include the corresponding worker script (app-indexeddb-mirror-worker.js) among your deployable files. You can configure the path to the worker script with the worker-url attribute.
The error I'm getting:
GET https://example.com/src/common-worker-scope.js?https://example.com/bower_components/app-storage/app-indexeddb-mirror/app-indexeddb-mirror-worker.js net::ERR_INTERNET_DISCONNECTED
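For what it's worth, the fix the question is contemplating would look something like the sketch below; this is an assumption based on the paths in the error above, not a verified answer:

// sw-precache-config.js (sketch) - precache the worker scripts so they
// resolve even when the network is unavailable
module.exports = {
  staticFileGlobs: [
    // ...your existing globs...
    'src/common-worker-scope.js',
    'bower_components/app-storage/app-indexeddb-mirror/app-indexeddb-mirror-worker.js'
  ]
};

The net::ERR_INTERNET_DISCONNECTED in the error suggests the worker script was fetched from the network rather than the precache, which is consistent with it missing from staticFileGlobs.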
My page crashes (the "Aw, Snap!" page) with roughly 20% probability after 10 minutes (otherwise it runs fine, seemingly forever).
So I tried:
1) Checking CPU and memory with Task Manager; I see no growth (so no leak).
2) Enabling crash logging in chrome://settings/.
Result:
2.1) Still nothing on the chrome://crashes page, not even a crash ID (0 crashes).
2.2) Nothing in the folder C:/%User%/AppData/Local/Google/CrashReports (it is empty), and C:/%User%/AppData/Local/Google/Chrome/User Data/Crash Reports does not exist.
2.3) But I do see .dmp files in C:/%User%/AppData/Local/Google/Chrome/User Data/CrashPads/reports, though they do not seem readable, and that does not look like the correct location for crash logs.
3) I can get Chrome logs either by command-line arguments or by using Sawbuck, but found nothing except 2 errors: one for Sawbuck itself, and another saying the report could not be sent to Google.
So the questions are:
1) Are those .dmp files the crash logs (has the default dump directory changed in Chrome v50)?
2) How can I extract information from a .dmp file if the chrome://crashes page shows nothing (for Chrome on Windows)?
P.S. I found two usage pages at https://www.chromium.org/developers/decoding-crash-dumps and https://www.chromium.org/developers/crash-reports, but it seems they do not apply to Windows without recompiling Chrome components. Are there any third-party tools to interpret the .dmp file?
Environment info: Chrome version 50.0.2661.02 m; host OS: Windows 10.
The crash dumps (.dmp files) in C:\Users\<user>\AppData\Local\Google\Chrome\User Data\Crashpad\reports can be read by standard Windows debuggers. WinDbg is one tool (provided by Microsoft) for analysing these dumps; it's not going to win any beauty contests, but it's powerful and gets the job done. The recommended way to obtain it is, somewhat bizarrely, the Windows Driver Kit.
You'll need debugging symbols to make sense of the results, and these aren't included in standard builds of Chrome. To get symbols for both Chrome and the Windows runtime, set the following as your Symbols path:
SRV*c:\symbols*https://msdl.microsoft.com/download/symbols;SRV*c:\symbols*https://chromium-browser-symsrv.commondatastorage.googleapis.com
There are numerous resources on using WinDbg on the web; this cheat sheet contains some useful commands to get you started.
chrome.fileSystem.isRestorable is a new part of the chrome.fileSystem API; it is supposed to say whether a file can be restored from its entry id or not. I've run many tests but something is wrong. When I try the following:
chrome.storage.local.get(["recentFileId1"], function(recent) {
  // Check whether the saved entry id can still be restored
  chrome.fileSystem.isRestorable(recent["recentFileId1"], function(isRestorable) {
    console.log(isRestorable);
  });
});
It returns true even if the file has been deleted from my computer. recentFileId1 looks like a real id (a run of hex digits with the file name at the end, for example FD158F2A41037D17440C025C1CA5FE08:question.txt), and restoring the file works if it is still on my computer. When I try to restore a deleted file's id, it just returns nothing, with no error.
So I want to know: am I using this feature wrong? I could make it work by attempting the restore and checking what comes back (if nothing is returned, the file has been deleted), but I don't want to rely on a hack if the API covers this.
Thanks.
This function is currently only available in the dev channel of Chrome, and should be released to stable in version 31.
What you're describing sounds like a bug; please file it at http://crbug.com. We should always return true or false. What the correct behavior in this case should be is not clear.
The intent of this function is to let an app know if it should provide UI to give the user access to previously opened files. If a file is restorable, it simply means the app still has permission to access the file.
We are reserving the right to limit when files are restorable. E.g. we might have an arbitrary upper limit to how many files can be restored, or the access might timeout after a few months, or we may give the user the option of not letting apps restore any files. isRestorable lets you know if access to a previously opened file is still available.
isRestorable is not intended to give information about how accessible the file still is. Local changes can impact this - e.g. the file might be deleted, or the OS access permissions changed. It might still be there but be invisible to Chrome and the app because there is no read access to the containing folder.
Think about a recent-documents menu. It could show files which were opened and since deleted. When the app restores a deleted file, the restore would not work and the app would show an error to the user. At that point the user might go to their recycle bin, or do a git checkout, and replace the file.
Or the recent documents menu could just not show files which have been deleted.
Either way, your app should not rely on isRestorable as an indication of whether a file entry can be regained and successfully used. You should handle restoreEntry failing to restore a file and returning an error, and handle access to the file having permission problems.
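As a concrete illustration of that defensive approach, a minimal sketch (the storage key recentFileId1 is taken from the question; the error-handling branches are assumptions about app policy):

// Restore a previously opened file defensively: isRestorable only
// says the app still has permission, so also handle restoreEntry
// failing and the underlying file being deleted or unreadable.
chrome.storage.local.get(["recentFileId1"], function(recent) {
  var id = recent["recentFileId1"];
  chrome.fileSystem.isRestorable(id, function(isRestorable) {
    if (!isRestorable) {
      return; // e.g. drop the entry from the recent-documents menu
    }
    chrome.fileSystem.restoreEntry(id, function(entry) {
      if (chrome.runtime.lastError || !entry) {
        return; // restore failed even though isRestorable was true
      }
      // The entry may still point at a deleted or unreadable file:
      entry.file(function(file) {
        // ...read the file contents here...
      }, function(err) {
        // file deleted, moved, or permissions changed - show an error UI
      });
    });
  });
});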