How can I read Chrome Cache files? - google-chrome

A forum I frequent was down today, and upon restoration, I discovered that the last two days of forum posting had been rolled back completely.
Needless to say, I'd like to get back what data I can from the forum loss, and I am hoping I have at least some of it stored in the cache files that Chrome created.
I face two problems: the cache files have no file extension and I'm unsure how to read them in an intelligent manner (trying to open them in Chrome itself seems to "redownload" them in a .gz format), and there are a ton of cache files.
Any suggestions on how to read and sort these files? (A simple string search should fit my needs)

EDIT: The below answer no longer works; see here.
In Chrome or Opera, open a new tab and navigate to chrome://view-http-cache/
Click on whichever file you want to view.
You should then see a page with a bunch of text and numbers.
Copy all the text on that page.
Paste it in the text box below.
Press "Go".
The cached data will appear in the Results section below.

Try ChromeCacheView from NirSoft (free).

EDIT: The below answer no longer works; see here.
Chrome displays the cache as a hex dump. OS X comes with xxd installed, which is a command-line tool for converting hex dumps. I managed to recover a JPG from my Chrome HTTP cache on OS X using these steps:
Go to: chrome://cache
Find the file you want to recover and click on its link.
Copy the 4th section to your clipboard. This is the content of the file.
Follow the steps in this gist to pipe your clipboard into the Python script, which in turn pipes to xxd to rebuild the file from the hex dump:
https://gist.github.com/andychase/6513075
Your final command should look like:
pbpaste | python chrome_xxd.py | xxd -r - image.jpg
If you're unsure which section of Chrome's cache output is the content hex dump, take a look at this page for a good guide:
http://www.sparxeng.com/blog/wp-content/uploads/2013/03/chrome_cache_html_report.png
Image source: http://www.sparxeng.com/blog/software/recovering-images-from-google-chrome-browser-cache
More info on XXD: http://linuxcommand.org/man_pages/xxd1.html
Thanks to Mathias Bynens above for sending me in the right direction.

EDIT: The below answer no longer works; see here.
If the file you try to recover has Content-Encoding: gzip in the header section, and you are using linux (or as in my case, you have Cygwin installed) you can do the following:
visit chrome://view-http-cache/ and click the page you want to recover
copy the last (fourth) section of the page verbatim to a text file (say: a.txt)
xxd -r a.txt | gzip -d
Note that other answers suggest passing the -p option to xxd - I had trouble with that, presumably because the fourth section of the cache is not in the "postscript plain hexdump style" but in the "default style".
It also does not seem necessary to replace double spaces with a single space, as chrome_xxd.py does (in case it is necessary, you can use sed 's/  / /g' for that).
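Putting those steps together, the whole recovery (using the example file name a.txt from step 2; the output name is arbitrary) is:
xxd -r a.txt | gzip -d > recovered_file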

Note: The show-saved-copy flag has been removed, so the below answer will not work.
You can read cached files using Chrome alone.
Chrome has a feature called Show Saved Copy Button:
Show Saved Copy Button Mac, Windows, Linux, Chrome OS, Android
When a page fails to load, if a stale copy of the page exists in the browser cache, a button will be presented to allow the user to load that stale copy. The primary enabling choice puts the button in the most salient position on the error page; the secondary enabling choice puts it secondary to the reload button. #show-saved-copy
First disconnect from the Internet to make sure the browser doesn't overwrite the cache entry. Then navigate to chrome://flags/#show-saved-copy and set the flag value to Enable: Primary. After you restart the browser, the Show Saved Copy button will be enabled. Now insert the cached file's URI into the browser's address bar and hit Enter. Chrome will display the There is no Internet connection page along with a Show saved copy button:
After you hit the button, the browser will display the cached file.

I've made a short, stupid script which extracts JPG and PNG files:
#!/usr/bin/php
<?php
$dir = "/home/user/.cache/chromium/Default/Cache/"; // Chrome or Chromium cache folder
$ppl = "/home/user/Desktop/temporary/";             // Place for extracted files
$list = scandir($dir);
foreach ($list as $filename)
{
    if (is_file($dir.$filename))
    {
        $cont = file_get_contents($dir.$filename);
        if (strstr($cont, 'JFIF'))
        {
            echo ($filename." JPEG \n");
            $start = strpos($cont, "JFIF", 0) - 6;      // back up to the JPEG SOI marker
            $end = strpos($cont, "HTTP/1.1 200 OK", 0); // trailing HTTP headers mark the end of the payload
            $cont = substr($cont, $start, $end - $start); // substr's third argument is a length
            $wholename = $ppl.$filename.".jpg";
            file_put_contents($wholename, $cont);
            echo ("Saving: ".$wholename." \n");
        }
        elseif (strstr($cont, "\211PNG"))
        {
            echo ($filename." PNG \n");
            $start = strpos($cont, "PNG", 0) - 1;       // back up to the PNG signature byte
            $end = strpos($cont, "HTTP/1.1 200 OK", 0);
            $cont = substr($cont, $start, $end - $start);
            $wholename = $ppl.$filename.".png";
            file_put_contents($wholename, $cont);
            echo ("Saving: ".$wholename." \n");
        }
        else
        {
            echo ($filename." UNKNOWN \n");
        }
    }
}
?>
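Assuming you saved the script as extract_images.php (the name is arbitrary) and adjusted the two paths at the top, run it with:
php extract_images.php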

I had some luck with this open-source Python project, seemingly inactive:
https://github.com/JRBANCEL/Chromagnon
I ran:
python2 Chromagnon/chromagnonCache.py path/to/Chrome/Cache -o browsable_cache/
And I got a locally browsable extract of all my open tabs' cache.

The Google Chrome cache directory $HOME/.cache/google-chrome/Default/Cache on Linux contains one file per cache entry named <16 char hex>_0 in "simple entry format":
20 Byte SimpleFileHeader
key (i.e. the URI)
payload (the raw file content i.e. the PDF in our case)
SimpleFileEOF record
HTTP headers
SHA256 of the key (optional)
SimpleFileEOF record
If you know the URI of the file you're looking for, it should be easy to find. If not, a substring like the domain name should help narrow it down. Search for a URI in your cache like this:
fgrep -Rl '<URI>' $HOME/.cache/google-chrome/Default/Cache
Note: If you're not using the default Chrome profile, replace Default with the profile name, e.g. Profile 1.
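As a rough sketch, a few lines of Python can pull the key (the URI) out of one of these files. The 8/4/4/4-byte split of the 20-byte SimpleFileHeader is my assumption about the layout, so treat this as illustrative:
import struct
import sys

# Print the key (the URI) of a simple-entry-format cache file.
# Assumed header layout: 8-byte magic, 4-byte version, 4-byte key length,
# 4-byte key hash -- 20 bytes total, with the key immediately after.
with open(sys.argv[1], 'rb') as f:
    magic, version, key_len, key_hash = struct.unpack('<QLLL', f.read(20))
    print(f.read(key_len).decode('utf-8', errors='replace'))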

It was removed on purpose and it won't be coming back.
Both chrome://cache and chrome://view-http-cache have been removed starting with Chrome 66. They still work in version 65.
Workaround
You can check chrome://chrome-urls/ for the complete list of internal Chrome URLs.
The only workaround that comes to mind is to use menu / More tools / Developer tools with the Network tab selected.
The reason why it was removed is this bug:
https://chromium.googlesource.com/chromium/src.git/+/6ebc11f6f6d112e4cca5251d4c0203e18cd79adc
https://bugs.chromium.org/p/chromium/issues/detail?id=811956
The discussion:
https://groups.google.com/a/chromium.org/forum/#!msg/net-dev/YNct7Nk6bd8/ODeGPq6KAAAJ

The JPEXS Free Flash Decompiler has Java code to do this in its source tree, for both Chrome and Firefox (no support for Firefox's more recent cache2 format, though).

EDIT: The below answer no longer works; see here.
Google Chrome cache file format description.
Cache files list, see URLs (copy and paste to your browser address bar):
chrome://cache/
chrome://view-http-cache/
Cache folder in Linux: ~/.cache/google-chrome/Default/Cache
Let's check whether the file is GZIP-encoded:
$ head f84358af102b1064_0 | hexdump -C | grep --before-context=100 --after-context=5 "1f 8b 08"
Extract Chrome cache file by one line on PHP (without header, CRC32 and ISIZE block):
$ php -r "echo gzinflate(substr(strchr(file_get_contents('f84358af102b1064_0'), \"\x1f\x8b\x08\"), 10, -8));"
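If you prefer Python, here is an equivalent sketch under the same assumptions (the entry is gzip-compressed and the file name is the example above):
import zlib

data = open('f84358af102b1064_0', 'rb').read()
start = data.find(b'\x1f\x8b\x08')           # locate the gzip magic bytes
d = zlib.decompressobj(16 + zlib.MAX_WBITS)  # 16 + MAX_WBITS makes zlib expect a gzip header
print(d.decompress(data[start:]))            # trailing cache records end up in d.unused_data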

Note: The below answer is out of date since the Chrome disk cache format has changed.
Joachim Metz provides some documentation of the Chrome cache file format with references to further information.
For my use case, I only needed a list of cached URLs and their respective timestamps. I wrote a Python script to get these by parsing the data_* files under C:\Users\me\AppData\Local\Google\Chrome\User Data\Default\Cache\:
import datetime

with open('data_1', 'rb') as datafile:
    data = datafile.read()

for ptr in range(len(data)):
    fourBytes = data[ptr : ptr + 4]
    if fourBytes == b'http':
        # Found the string 'http'. Hopefully this is a Cache Entry
        endUrl = data.index(b'\x00', ptr)
        urlBytes = data[ptr : endUrl]
        try:
            url = urlBytes.decode('utf-8')
        except:
            continue
        # Extract the corresponding timestamp
        try:
            timeBytes = data[ptr - 72 : ptr - 64]
            timeInt = int.from_bytes(timeBytes, byteorder='little')
            secondsSince1601 = timeInt / 1000000
            jan1601 = datetime.datetime(1601, 1, 1, 0, 0, 0)
            timeStamp = jan1601 + datetime.timedelta(seconds=secondsSince1601)
        except:
            continue
        print('{} {}'.format(str(timeStamp)[:19], url))

Related

chrome "aw, snap" crash, but can't see crash log in chrome://crashes but see DMP generated, so what is the quickest way to interpret this DMP file

I can see my page crash (the "Aw, Snap" page) with 20% probability after 10 minutes (otherwise it runs well, seemingly forever),
so I tried:
1) CPU and memory check with Task Manager, and see no increase (so no leak).
2) enable crash logging in chrome://settings/
result:
2.1) still see nothing in the chrome://crashes page, not even a crash ID (0 crashes).
2.2) see nothing in the folders under
C:/%User%/AppData/Local/Google/CrashReports (nothing in it) nor
C:/%User%/AppData/Local/Google/Chrome/User Data/Crash Reports (folder doesn't exist)
2.3) but do see DMP files in:
C:/%User%/AppData/Local/Google/Chrome/User Data/CrashPads/reports
but they don't seem readable, and this also doesn't seem to be the correct location for crash logs
3) can get a Chrome log either by command-line arguments or by using Sawbuck, but found nothing except 2 errors: one from Sawbuck itself, and another saying it can't send the report to Google.
So the questions are:
1) are those DMP files the crash logs (the default dir for dump files was changed in Chrome v50)?
2) how can I extract information out of the DMP files if the chrome://crashes page shows nothing (for Chrome on Windows)?
p.s. two usage pages can be found at https://www.chromium.org/developers/decoding-crash-dumps and
https://www.chromium.org/developers/crash-reports
but it seems they don't apply to Windows without a recompile of Chrome's components; is there any 3rd-party tool to interpret the DMP file?
env information:
Chrome version: 50.0.2661.02 m; Host OS: Windows 10
The crash dumps (.dmp files) in C:\Users\<user>\AppData\Local\Google\Chrome\User Data\Crashpad\reports can be read by standard Windows debuggers. WinDbg is one tool (provided by Microsoft) for analysing these dumps; it's not going to win any beauty contests, but it's powerful and gets the job done. The recommended way to obtain it is, somewhat bizarrely, the Windows Driver Kit.
You'll need debugging symbols to make sense of the results, and these aren't included in standard builds of Chrome. To get symbols for both Chrome and the Windows runtime, set the following as your Symbols path:
SRV*c:\symbols*https://msdl.microsoft.com/download/symbols;SRV*c:\symbols*https://chromium-browser-symsrv.commondatastorage.googleapis.com
There are numerous resources on using WinDbg on the web; this cheat sheet contains some useful commands to get you started.
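As a sketch of the workflow (the dump file name here is hypothetical), open the dump with:
windbg -z report.dmp
then, inside WinDbg, run:
.reload
!analyze -v
.reload pulls symbols from the symbol path configured above; !analyze -v runs the automated crash analysis and prints the faulting call stack.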

How to download HTTP directory with all files and sub-directories as they appear on the online files/folders list?

There is an online HTTP directory that I have access to. I have tried to download all sub-directories and files via wget. But the problem is that when wget downloads sub-directories, it downloads the index.html file which contains the list of files in that directory without downloading the files themselves.
Is there a way to download the sub-directories and files without a depth limit (as if the directory I want to download were just a folder which I want to copy to my computer)?
Solution:
wget -r -np -nH --cut-dirs=3 -R index.html http://hostname/aaa/bbb/ccc/ddd/
Explanation:
It will download all files and subfolders in the ddd directory:
-r : recursively
-np : not going to upper directories, like ccc/…
-nH : not saving files to a hostname folder
--cut-dirs=3 : but saving it to ddd by omitting the first 3 folders aaa, bbb, ccc
-R index.html : excluding index.html files
Reference: http://bmwieczorek.wordpress.com/2008/10/01/wget-recursively-download-all-files-from-certain-directory-listed-by-apache/
I was able to get this to work thanks to this post utilizing VisualWGet. It worked great for me. The important part seems to be to check the -recursive flag (see image).
Also found that the -no-parent flag is important, otherwise it will try to download everything.
You can use lftp, the Swiss Army knife of downloading; if you have bigger files you can add --use-pget-n=10 to the command:
lftp -c 'mirror --parallel=100 https://example.com/files/ ;exit'
wget -r -np -nH --cut-dirs=3 -R index.html http://hostname/aaa/bbb/ccc/ddd/
From man wget
‘-r’
‘--recursive’
Turn on recursive retrieving. See Recursive Download, for more details. The default maximum depth is 5.
‘-np’
‘--no-parent’
Do not ever ascend to the parent directory when retrieving recursively. This is a useful option, since it guarantees that only the files below a certain hierarchy will be downloaded. See Directory-Based Limits, for more details.
‘-nH’
‘--no-host-directories’
Disable generation of host-prefixed directories. By default, invoking Wget with ‘-r http://fly.srk.fer.hr/’ will create a structure of directories beginning with fly.srk.fer.hr/. This option disables such behavior.
‘--cut-dirs=number’
Ignore number directory components. This is useful for getting a fine-grained control over the directory where recursive retrieval will be saved.
Take, for example, the directory at ‘ftp://ftp.xemacs.org/pub/xemacs/’. If you retrieve it with ‘-r’, it will be saved locally under ftp.xemacs.org/pub/xemacs/. While the ‘-nH’ option can remove the ftp.xemacs.org/ part, you are still stuck with pub/xemacs. This is where ‘--cut-dirs’ comes in handy; it makes Wget not “see” number remote directory components. Here are several examples of how ‘--cut-dirs’ option works.
No options -> ftp.xemacs.org/pub/xemacs/
-nH -> pub/xemacs/
-nH --cut-dirs=1 -> xemacs/
-nH --cut-dirs=2 -> .
--cut-dirs=1 -> ftp.xemacs.org/xemacs/
...
If you just want to get rid of the directory structure, this option is similar to a combination of ‘-nd’ and ‘-P’. However, unlike ‘-nd’, ‘--cut-dirs’ does not lose with subdirectories—for instance, with ‘-nH --cut-dirs=1’, a beta/ subdirectory will be placed to xemacs/beta, as one would expect.
No Software or Plugin required!
(only usable if you don't need recursive depth)
Use bookmarklet. Drag this link in bookmarks, then edit and paste this code:
javascript:(function(){ var l=document.links; var ext=prompt("select extension for download (all links containing it will be downloaded)", ".mp3"); for(var i=0; i<l.length; i++){ if(l[i].href.indexOf(ext) !== -1){ l[i].setAttribute("download", l[i].text); l[i].click(); } } })();
and go on page (from where you want to download files), and click that bookmarklet.
wget is an invaluable resource and something I use myself. However, sometimes there are characters in the address that wget identifies as syntax errors. I'm sure there is a fix for that, but as this question did not ask specifically about wget, I thought I would offer an alternative for those people who will undoubtedly stumble upon this page looking for a quick fix with no learning curve required.
There are a few browser extensions that can do this, but most require installing download managers, which aren't always free, tend to be an eyesore, and use a lot of resources. Here's one that has none of these drawbacks:
"Download Master" is an extension for Google Chrome that works great for downloading from directories. You can choose to filter which file-types to download, or download the entire directory.
https://chrome.google.com/webstore/detail/download-master/dljdacfojgikogldjffnkdcielnklkce
For an up-to-date feature list and other information, visit the project page on the developer's blog:
http://monadownloadmaster.blogspot.com/
You can use this Firefox addon to download all files in HTTP Directory.
https://addons.mozilla.org/en-US/firefox/addon/http-directory-downloader/
wget generally works in this way, but some sites may have problems and it may create too many unnecessary HTML files. In order to make this work easier and to prevent unnecessary file creation, I am sharing my getwebfolder script, which is the first Linux script I wrote for myself. This script downloads all content of a web folder entered as a parameter.
When you try to download an open web folder with wget which contains more than one file, wget downloads a file named index.html. This file contains a file list of the web folder. My script converts file names written in the index.html file to web addresses and downloads them cleanly with wget.
Tested on Ubuntu 18.04 and Kali Linux; it may work on other distros as well.
Usage :
extract getwebfolder file from zip file provided below
chmod +x getwebfolder (only for first time)
./getwebfolder webfolder_URL
such as ./getwebfolder http://example.com/example_folder/
Download Link
Details on blog

Using --js-flags in Google Chrome to get --trace output

I've looked through various sources online and done a number of Google searches, but I can't seem to find any specific instructions as to how to work with the V8 --trace-* flags in Google Chrome. I've seen a few "You can do this as well in Chrome" mentions, but I haven't been able to find what I'm looking for, which is output like this: (snippets are near the bottom of the post) Optimizing for V8.
I found reference that the data is logged to a file: Profiling Chromium with V8 and I've found that the file is likely named v8.log: (Lost that link) but I haven't found any clues as to how to generate that file, or where it is located. It didn't appear to be in the chrome directory or the user directory.
Apparently I need to enable .map files for chrome.dll as well, but I wasn't able to find anything to help me with that.
The reason I would prefer to use Chrome's V8 for this as opposed to building V8 and using a shell is because the JavaScript I would like to test makes use of DOM, which I do not believe would be included in the V8 shell. However if it is, that would be great to know, then I can rewrite the code to work sans-html file and test. But my guess is that V8 by itself is sans-DOM access, like node.js
So, to sum things up:
Running Google Chrome Canary on Windows 7 ultimate x64
Shortcut target is "C:\Users\ArkahnX\AppData\Local\Google\Chrome SxS\Application\chrome.exe" --no-sandbox --js-flags="--trace-opt --trace-bailout --trace-deopt" --user-data-dir=C:\chromeDebugProfile
Looking for whether this type of output can be logged from chrome
If so, where would the log be?
If not, what sort of output should I expect, and again, where could I find it?
Thank you for any assistance!
Amending with how I got the answer to work for me:
Using the below answer, I installed Python to its default directory and modified the script so it had the full path to chrome.exe. From there I set the file-type association for .py files to Python and executed the script. Now every time I open Chrome Canary it will run that Python script (at least until I restart my PC; then I'll have to run the script again).
The result is exactly what I was looking for!
On Windows, stdout output is suppressed by the fact that chrome.exe is a GUI application. You need to flip the Subsystem field in the PE header from IMAGE_SUBSYSTEM_WINDOWS_GUI to IMAGE_SUBSYSTEM_WINDOWS_CUI to see what V8 outputs to stdout.
You can do it with the following (somewhat hackish) Python script:
import mmap
import ctypes

GUI = 2  # IMAGE_SUBSYSTEM_WINDOWS_GUI
CUI = 3  # IMAGE_SUBSYSTEM_WINDOWS_CUI

with open("chrome.exe", "r+b") as f:
    map = mmap.mmap(f.fileno(), 1024, None, mmap.ACCESS_WRITE)
    e_lfanew = ctypes.c_uint.from_buffer(map, 30 * 2).value  # offset 0x3C: pointer to the PE header
    subsystem = ctypes.c_ushort.from_buffer(map, e_lfanew + 4 + 20 + (17 * 4))  # Subsystem field in the optional header
    if subsystem.value == GUI:
        subsystem.value = CUI
        print("patched: gui -> cui")
    elif subsystem.value == CUI:
        subsystem.value = GUI
        print("patched: cui -> gui")
    else:
        print("unknown subsystem: %x" % subsystem.value)
Close all Chrome instances and execute this script. When you restart chrome.exe you should see a console window appear, and you should be able to redirect stdout via >.
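For example, reusing the flags from the question (the log file name is arbitrary), something like this should then capture the V8 trace:
chrome.exe --no-sandbox --js-flags="--trace-opt --trace-deopt" --user-data-dir=C:\chromeDebugProfile > v8_trace.log 2>&1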
If you're not keen on hacking the PE header of Chrome, then there is an alternative for Windows.
Because the Chrome app doesn't create a console stdout on Windows, all tracing in V8 (and d8) is sent to OutputDebugString instead.
OutputDebugString writes to a shared memory object that can be read by any other application.
Microsoft has a tool called DebugView which monitors OutputDebugString output and, if required, can also stream it to a log file.
DebugView is free and downloadable from Microsoft: http://technet.microsoft.com/en-us/sysinternals/bb896647.aspx

Save the console.log in Chrome to a file

Does anyone know of a way to save the console.log output in Chrome to a file? Or how to copy the text out of the console?
Say you are running a few hours of functional tests and you've got thousands of lines of console.log output in Chrome. How do you save it or export it?
Good news
Chrome DevTools now allows you to save the console output to a file natively:
Open the console
Right-click
Select "Save as..."
Chrome Developer instructions here.
I needed to do the same thing and this is the solution I found:
Enable logging from the command line using the flags:
--enable-logging --v=1
This logs everything Chrome does internally, but it also logs all the console.log() messages as well. The log file is called chrome_debug.log and is located in the User Data Directory which can be overridden by supplying --user-data-dir=PATH (more info here).
Filter the log file you get for lines with CONSOLE(\d+).
Note that console logs do not appear with --incognito.
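For example, something like this pulls just the console lines out of that file (the path to chrome_debug.log depends on your User Data directory; this one is the Linux default):
grep -E 'CONSOLE\([0-9]+\)' ~/.config/google-chrome/chrome_debug.log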
I have found a great and easy way for this.
In the console - right click on the console logged object
Click on 'Store as global variable'
See the name of the new variable - e.g. it is variableName1
Type in the console: JSON.stringify(variableName1)
Copy the variable string content: e.g. {"a":1,"b":2,"c":3}
Go to some JSON online editor:
e.g. https://jsoneditoronline.org/
There is an open-source JavaScript plugin that does just that, but for any browser - debugout.js.
Debugout.js records and saves console.logs so your application can access them. Full disclosure: I wrote it. It formats different types appropriately, can handle nested objects and arrays, and can optionally put a timestamp next to each log. You can also toggle live-logging in one place, without having to remove all your logging statements.
For a better log file (without the Chrome-debug nonsense) use:
--enable-logging --log-level=0
instead of --v=1, which is just too much info.
It will still provide the errors and warnings like you would typically see in the Chrome console.
Update May 18, 2020: Actually, I think this is no longer true. I couldn't find the console messages within whatever this logging level is.
This may or may not be helpful but on Windows you can read the console log using Event Tracing for Windows
http://msdn.microsoft.com/en-us/library/ms751538.aspx
Our integration tests are run in .NET so I use this method to add the console log to our test output. I've made a sample console project to demonstrate here: https://github.com/jkells/chrome-trace
--enable-logging --v=1 doesn't seem to work on the latest version of Chrome.
For Google Chrome Version 84.0.4147.105 and higher,
just right-click in the console, click 'Save as...', then 'Save';
a .txt file will then be saved
A lot of good answers, but why not just use JSON.stringify(your_variable)? Then take the contents via copy and paste (remove the outer quotes). I posted this same answer also at: How to save the output of a console.log(object) to a file?
There is another open-source tool which allows you to save all console.log output in a file on your server - JS LogFlush (plug!).
JS LogFlush is an integrated JavaScript logging solution which include:
cross-browser UI-less replacement of console.log - on client side.
log storage system - on server side.
Demo
If you're running an Apache server on your localhost (don't do this on a production server), you can also post the results to a script instead of writing them to the console.
So instead of console.log, you can write:
JSONP('http://localhost/save.php', {fn: 'filename.txt', data: json});
Then save.php can do this
<?php
$fn = $_REQUEST['fn'];
$data = $_REQUEST['data'];
file_put_contents("path/$fn", $data);
Right-click directly on the logged value you want to copy
In the right-click menu, select "Store as global variable"
You'll see the value saved as something like "temp1" on the next line in the console
In the console, type copy(temp1) and hit return (replace temp1 with the variable name from the previous step). Now the logged value is copied to your clipboard.
Paste the values to wherever you want
This is especially good as an approach if you don't want to mess with changing flags/settings in Chrome and don't want to deal with JSON stringifying and parsing etc.
Update: I just found this explanation of what I suggested with images that's easier to follow https://scottwhittaker.net/chrome-devtools/2016/02/29/chrome-devtools-copy-object.html
These days it's very easy: right-click any item displayed in the console log, select Save as..., and save the whole log output to a file on your computer.
On Linux (at least) you can set CHROME_LOG_FILE in the environment to have Chrome write a log of the console activity to the named file each time it runs. The log is overwritten every time Chrome starts. This way, if you have an automated session that runs Chrome, you don't have to change the way Chrome is started, and the log is there after the session ends.
export CHROME_LOG_FILE=chrome.log
The other solutions in this thread weren't working on my Mac. Here's a logger that saves a string representation intermittently using Ajax. Use it with console.save instead of console.log:
var logFileString = "";
var maxLogLength = 1024 * 128;
console.save = function () {
    var logArgs = {};
    for (var i = 0; i < arguments.length; i++) logArgs['arg' + i] = arguments[i];
    console.log(logArgs);
    // keep a string representation of every log
    logFileString += JSON.stringify(logArgs, null, 2) + '\n';
    // save the string representation when it gets big
    if (logFileString.length > maxLogLength) {
        // send a copy in case race conditions change it mid-save
        saveLog(logFileString);
        logFileString = "";
    }
};
Depending on what you need, you can save that string, or just console.log it and copy and paste. Here's an Ajax function in case you want to save it:
function saveLog(data) {
    // do some ajax stuff with data
    var xhttp = new XMLHttpRequest();
    xhttp.onreadystatechange = function () {
        if (this.readyState == 4 && this.status == 200) {}
    };
    xhttp.open("POST", 'saveLog.php', true);
    xhttp.send(data);
}
The saveLog.php should append the data to a log file somewhere. I didn't need that part, so I'm not including it here. :)
https://www.google.com/search?q=php+append+to+log
This answer might not seem directly related, but it is useful specifically for the network log; you can visit the following URL.
The reason I've posted this answer is that in my case console.log printed a long truncated text, so I couldn't get the value from the console. I solved it by getting the API response I was printing directly from the network log.
chrome://net-export/
There you will see a window similar to this; just press the Start Logging to Disk button and that's it:
Create a batch file using the below command and save it as ChromeDebug.bat on your desktop.
start chrome --enable-logging --v=1
Close all other Chrome tabs and windows.
Double-click the ChromeDebug.bat file, which will open Chrome and a command prompt with a Chrome icon in the taskbar.
All the web application logs will be stored in the path below.
Paste the path below into the Run dialog to open the Chrome log file:
%LocalAppData%\Google\Chrome\User Data\chrome_debug.log

How to configure firefox to run emacsclientw on certain links?

I've got a Perl script that groks a bunch of log files looking for "interesting" lines, for some definition of interesting. It generates an HTML file which consists of a table whose columns are a timestamp, a filename/linenum reference and the "interesting" bit. What I'd love to do is have the filename/linenum be an actual link that will bring up that file with the cursor positioned on that line number, in emacs.
emacsclientw will allow such a thing (e.g. emacsclientw +60 foo.log), but I don't know what kind of URL/URI to construct that will let Firefox call out to emacsclientw. The original HTML file will be local, so there's no problem there.
Should I define my own MIME type and hook in that way?
Firefox version is 3.5 and I'm running Windows, in case any of that matters. Thanks!
Go to the about:config page in Firefox. Add a new string:
network.protocol-handler.app.emacs
value: the path to a script that parses the URL without the protocol (what's after emacs://) and then calls emacsclient with the proper arguments.
You can't just put the path of emacsclient, because everything after the protocol is passed as one arg to the executable, so your +60 foo.log would be a new file named that way.
But you could easily imagine something like emacs:///path/to/your/file/LINENUM and have a little script that removes the final / and number and calls emacsclient with the number and the file :-)
EDIT: I could do that in bash if you want, but I don't know how to do that with the Windows "shell" or whatever it is called.
EDIT2: I was wrong about something: the protocol is passed in the arg string too!
Here is a little bash script that I just made for myself. BTW, thanks for the idea :-D
#!/bin/bash
ARG=${1##emacs://}
LINE=${ARG##*/}
FILE=${ARG%/*}
if wmctrl -l | grep emacs#romuald &>/dev/null; then # if there's already an emacs frame
    ARG=""   # then just open the file in the existing emacs frame
else
    ARG="-c" # else create a new frame
fi
emacsclient $ARG -n +$LINE "$FILE"
exit $?
and my network.protocol-handler.app.emacs in my Iceweasel (Firefox) is /home/p4bl0/bin/ffemacsclient. It works just fine!
And yes, my laptop's name is romuald ^^.
Thanks for the pointer, p4bl0. Unfortunately, that only works on a real OS; Windows uses a completely different method. See http://kb.mozillazine.org/Register_protocol for more info.
But, you certainly provided me the start I needed, so thank you very, very much!
Here's the solution for Windows:
First you need to set up the registry correctly to handle this new URL type. For that, save the following to a file, edit it to suit your environment, save it and double click on it:
Windows Registry Editor Version 5.00
[HKEY_CLASSES_ROOT\emacs]
@="URL:Emacs Protocol"
"URL Protocol"=""
[HKEY_CLASSES_ROOT\emacs\shell]
[HKEY_CLASSES_ROOT\emacs\shell\open]
[HKEY_CLASSES_ROOT\emacs\shell\open\command]
@="\"c:\\product\\emacs\\bin\\emacsclientw.exe\" --no-wait -e \"(emacs-uri-handler \\\"%1\\\")\""
This is not as robust as p4bl0's shell script, because it does not make sure that Emacs is running first. Then add the following to your .emacs file:
(defun emacs-uri-handler (uri)
  "Handles emacs URIs in the form: emacs:///path/to/file/LINENUM"
  (save-match-data
    (if (string-match "emacs://\\(.*\\)/\\([0-9]+\\)$" uri)
        (let ((filename (match-string 1 uri))
              (linenum (match-string 2 uri)))
          (with-current-buffer (find-file filename)
            (goto-line (string-to-number linenum))))
      (beep)
      (message "Unable to parse the URI <%s>" uri))))
The above code will not check to make sure the file exists, and the error handling is rudimentary at best. But it works!
Then create an HTML file that has lines like the following:
<a href="emacs:///c:/temp/my.log/60">file: c:/temp/my.log, line: 60</a>
and then click on the link.
Post Script:
I recently switched to Linux (Ubuntu 9.10) and here's what I did for that OS:
$ gconftool -s /desktop/gnome/url-handlers/emacs/command '/usr/bin/emacsclient --no-wait -e "(emacs-uri-handler \"%s\")"' --type String
$ gconftool -s /desktop/gnome/url-handlers/emacs/enabled --type Boolean true
Using the same emacs-uri-handler from above.
Might be a great reason to write your first FF plugin ;)