Chrome Click&Clean - Not clean?

After running Chrome Click&Clean (set to clean everything), it turns out that the SQLite3 file .config/google-chrome/Default/Cookies is still about 250K in size. This is huge compared with the initial size of Cookies after deleting the file and restarting Chrome.
Am I missing something, or does Click&Clean actually not do much cleaning?

It doesn't do a hell of a lot; it clears up a lot, but not everything.
CCleaner should definitely clean that out, but be aware that cleaning up can sometimes leave error logs in the registry, from badly coded software trying to call things that don't exist anymore.

You would not expect the physical file size to change when data is removed, as that would require a full rewrite of the database to disk, which would be extremely inefficient. The size will not change until the browser decides to do some housekeeping (VACUUM). CCleaner can compact Chrome databases on demand.
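If you want to reclaim the space yourself, a minimal sketch using the sqlite3 command-line tool (assuming it is installed, and with Chrome fully closed so the database isn't locked):

# Compact Chrome's Cookies database in place; VACUUM rewrites it without the free pages
sqlite3 ~/.config/google-chrome/Default/Cookies "VACUUM;"

Afterwards the file should shrink to roughly the size of the live data.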

PhpStorm choking and freezing while editing code

I have experienced a sudden drop in PhpStorm performance. The IDE, which had been performing very well (without any noticeable delay in response), started to be sluggish and freeze for a fraction of a second every time autocomplete was triggered.
Despite 16GB of available memory (and increasing the program's max memory in the PhpStorm settings up to 2GB), the IDE does not use any of the additional memory and still freezes every few seconds. None of the suggestions available all over the Internet regarding disabling unnecessary plugins, clearing indexes, or changing memory settings were successful. Reducing the highlighting level has no effect on the described issues.
Question: is there anything else I could do to restore normal performance without disabling some vital IDE functionality?
I have managed to identify the cause of the sudden drop in performance described in the question. The reason was adding Gulp libraries to the project, which seemed to be more than PhpStorm could index in real time.
Solution: exclude the node_modules directory from indexing. To do this, go to Settings > Directories, select node_modules, and mark it Excluded. Your PhpStorm performance should return to normal.
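For reference, the exclusion ends up in the project's .iml module file; a sketch of what the relevant entry looks like (a hypothetical excerpt; the exact file name and contents depend on your project):

<!-- hypothetical excerpt from the project's .iml file -->
<content url="file://$MODULE_DIR$">
  <excludeFolder url="file://$MODULE_DIR$/node_modules" />
</content>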
I was also getting this problem, but node_modules was already excluded (I think PhpStorm does this by default).
I have just excluded the location of the compiled files as well. In my case this was the entire public folder. (Using Laravel and Elixir.)
This solved it for me.
I've experienced those kinds of performance problems in the past. Options I've used are:
Increase heap memory to 4GB (see the snippet after this list)
Exclude folders from indexing
Disable some plugins (last time the problem was with PHP Inspections plugin)
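For the heap increase, a sketch of the custom VM options file (Help > Edit Custom VM Options; the file name, e.g. phpstorm64.vmoptions, depends on your install). The relevant line raises the maximum heap to 4GB:

-Xmx4096m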
I'm using PhpStorm with projects on Laravel, so what worked for me was excluding bootstrap/cache as well as the entire storage folder.

Terminal command slower in verbose mode for huge files?

I have to restore a few MySQL schemas that are sometimes 400GB.
Since I want to know what's going on, I'm a fan of verbose mode in terminal commands.
But I'm wondering: are there disadvantages with such big files when the output has to be displayed in the terminal window?
Displaying progress is always going to be a strain on the terminal. Imagine if I were writing an article and you asked me to give you a progress update every time I finished a sentence, rather than letting me do my own thing and simply tell you once I'm done, or if I run into a problem and cannot proceed.
I don't know exactly what command you're running, but if it's 400GB of SQL-related files, you can imagine what it would print in the terminal.
If you want to test this for yourself, try running smaller files in both verbose and non-verbose mode with timers and see the difference.
I don't think the overall command would be massively slower with verbose mode on; however, there's a strain nonetheless.
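If the goal is just to see that something is happening, a lighter-weight sketch is to use pv as a progress meter instead of per-statement verbose output (assumes pv is installed; the dump file and database names are hypothetical):

# time a plain restore for comparison
time mysql -u root -p mydb < dump.sql

# pv reads the dump and shows throughput, percentage, and an ETA,
# while mysql itself stays completely quiet
pv dump.sql | mysql -u root -p mydb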

how to make wkhtmltopdf work in as many cases as possible?

This question on wkhtmltopdf has a specific component and a more general component to it.
Generally: I am trying to convert a wide range of webpages into PDF files, and I want wkhtmltopdf to work in as many cases as possible. It's a pretty good tool, but I often run into problems where it can't convert a webpage. Do you guys have a go-to set of flags that you use with wkhtmltopdf?
Specifically: an example of a webpage that isn't anything far-out, but that I am having problems with, is http://gizmodo.com/microsoft-surface-book-review-so-good-i-might-switch-1737680767. When I run wkhtmltopdf without any flags (on Windows), I get the following:
>>wkhtmltopdf http://gizmodo.com/microsoft-surface-book-
review-so-good-i-might-switch-1737680767 blah.pdf
Loading pages (1/6)
Error: Failed loading page http://gizmodo.com/microsoft-surface-book-review-so-g
ood-i-might-switch-1737680767 (sometimes it will work just to ignore this error
with --load-error-handling ignore)
Warning: A finished ResourceObject received a loading progress signal. This migh
t be an indication of an iframe taking too long to load.
Warning: Received createRequest signal on a disposed ResourceObject's NetworkAcc
essManager. This might be an indication of an iframe taking too long to load.
Exit with code 1, due to unknown error.
If I follow the instructions and use the --load-error-handling ignore flag, the PDF file is generated, but it's empty. How do I get wkhtmltopdf to work with this webpage?
I tried looking at other tools such as PhantomJS with rasterize.js, but they have their own set of problems...
Thanks guys!
This happens when JavaScript is enabled and it is too slow to complete. If you need JavaScript to run, work around it by adding:
--javascript-delay 100000
which adjusts the wait time for JavaScript to complete (it's in milliseconds), so in the example above it waits for 100 seconds. Note that if you run a multi-document conversion at once, this setting applies to the whole run, not to each individual document. Therefore if, say, you convert some 100 input HTML files into a single PDF output, you may need a longer delay.
I also add to my scripts:
--no-stop-slow-scripts
which means: do not stop slow-running JavaScript.
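Putting those together, a sketch of the full invocation (the delay value is just an example; tune it per site):

wkhtmltopdf --javascript-delay 100000 --no-stop-slow-scripts http://gizmodo.com/microsoft-surface-book-review-so-good-i-might-switch-1737680767 blah.pdf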
Turns out it's actually quite simple!
Simply use the "-n" flag (the short form of --disable-javascript)! Works like a charm!
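For the page in the question, that looks like:

wkhtmltopdf -n http://gizmodo.com/microsoft-surface-book-review-so-good-i-might-switch-1737680767 blah.pdf

Skipping JavaScript entirely avoids the slow-script and iframe timeouts above, at the cost of rendering the page without any script-generated content.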

expect script running out of sync?

I'm currently modifying a script used to back up Cisco ACE modules' contexts & crypto files. It works absolutely beautifully with one device. However, when I use it on another module, it seems to go completely out of sync and messes up the script.
From what I can see, the difference is the presence of a line that the ACE module throws up: Warning: Permanently added '[x.x.x.x]' (RSA) to the list of known hosts.\r\r\n This just seems to throw the rest of the script off, even though none of my expect statements are even looking for it!
I've had nothing but nightmares with expect and the way it interprets information from ACE modules; can anyone shed any light on this issue or provide any advice on how to make these devices behave when I script for them?
If you're handling one connection at a time, you should make sure you fully terminate one before opening the next. The simplest way of doing that is to put:
close
wait
at the end of the (foreach) loop over the things to connect to, as in the sketch below.
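A minimal sketch of that loop shape (the host list, credentials, prompts, and command are all hypothetical):

#!/usr/bin/expect -f
foreach host {10.0.0.1 10.0.0.2} {
    spawn ssh admin@$host
    expect -re {[Pp]assword:}
    send "secret\r"
    expect "#"
    send "show running-config\r"
    expect "#"
    close   ;# fully terminate this connection...
    wait    ;# ...and reap the child before the next iteration
}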
If you were doing multiple connections at once, you'd have to take care to use the -i option with the various commands (notably expect, send, and close) and make everything work right, in addition to fixing the things I mentioned earlier. It can be done, but it's considerably trickier and not worth it if you don't need the parallelism.

Catching the dreaded Blue Screen Of Death

It's a simple problem. Sometimes Windows will just halt everything and throw a BSOD. Game over, please reboot to play another game. Or whatever. Annoying, but not extremely serious...
What I want is simple. I want to catch the BSOD when it occurs. Why? Just for some additional crash logging. It's okay that the system goes blue, but when it happens, I just want to log some additional information or perform one additional action.
Is this even possible? If so, how? And what would be the limitations?
Btw, I don't want to do anything when the system recovers; I want to catch it while it happens, to allow me one final action. (For example, flushing a file before the system goes down.)
A BSOD happens due to an error in the Windows kernel or, more commonly, in a faulty device driver (which runs in kernel mode). There is very little you can do about it. If it is a driver problem, you can hope the vendor will fix it.
You can configure Windows to create a memory dump upon a BSOD, which will help you troubleshoot the problem. You can get a pretty good idea about the faulting driver by loading the dump into WinDbg and using the !analyze command.
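A sketch of that session (the dump path shown is the default; yours may differ):

Open the dump in WinDbg (File > Open Crash Dump, or from the command line):
windbg -z C:\Windows\MEMORY.DMP
Then, at the debugger prompt:
kd> !analyze -v

The output names the bugcheck code and, usually, the module that was probably at fault.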
Knowing which driver is causing the problem will let you look for a new driver, but if that doesn't fix the problem, there is little you can do about it (unless you're very good with a hex editor).
UPDATE: If you want to debug this while it is happening, you need to debug the kernel. A good place to pick up more info is the book Windows Internals by Mark Russinovich. Also, I believe there's a bit of info in the WinDbg help file, and there must be something in the device driver kit as well (but that is beyond my knowledge).
The data is stored in what are called "minidumps".
You can then use debugging tools to explore those dumps. The process is documented here: http://forums.majorgeeks.com/showthread.php?t=35246
You have two ways to figure out what happened:
The first is to upload the .dmp file located under C:\Minidump***.dmp to Microsoft's service as described here: http://answers.microsoft.com/en-us/windows/wiki/windows_10-update/blue-screen-of-death-bsod/1939df35-283f-4830-a4dd-e95ee5d8669d
or use their debugger, WinDbg, to read the .dmp file.
NB: You will find several files; you can tell them apart by the event date in the file name.
The second way is to note the error code from the blue screen and search for it on Google and the Microsoft website.
The first method is more accurate and efficient.
Windows can be configured to create a crash dump on blue screens.
Here's more information:
How to read the small memory dump files that Windows creates for debugging (support.microsoft.com)
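If you'd rather set that up from the command line than through the System Properties dialog, a sketch (run from an elevated prompt; the value 3 selects a small memory dump):

wmic recoveros set DebugInfoType = 3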