Filter only the requests with errors - Google Chrome Network panel

How can I filter only the requests with errors in the Google Chrome Network panel of DevTools?

Option 1: Filtering HTTP Status Codes
You can filter responses by their status code (here's a useful list of all HTTP status codes).
AFAIK this filtering feature has been available for years, through the status-code property (you can see all the properties you can use in the DevTools documentation on Google Developers).
As explained:
status-code. Only show resources whose HTTP status code matches the
specified code. DevTools populates the autocomplete dropdown menu with
all of the status codes it has encountered.
While it's not as flexible as a regular expression or a wildcard, it can narrow things down a lot. For instance, if you want to see all requests that returned a 403, the filter is status-code:403.
There's a useful plot twist: you can use negative filters, e.g. -status-code:200 (notice the prepended - sign). That filters out all requests with a 200 code, leaving mostly the troubled requests.
With all the 200s out of the way, you can sort the Status column for a better overview.
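For example, to hide both successful responses and not-modified cache hits in one go (space-separated filters in the box are combined), you could try something like:
-status-code:200 -status-code:304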
Option 2: Work with the HAR format
For a more in-depth analysis, almost as quick, you can easily export the whole network log, with all its details, to a HAR (HTTP Archive) file: right-click in the requests table and pick the HAR option (e.g. "Copy all as HAR" or "Save all as HAR with content").
Paste it into your favorite editor. You'll see it's just a JSON file (plain text). You can always search for "error" or use regular expressions. If you know a bit of JS, Python, etc., you can quickly parse it however you wish.
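As a minimal sketch of that idea (assuming you saved the log as log.har; the file name is hypothetical), a few lines of Node.js are enough to list every entry whose status signals trouble:

// list_har_errors.js - print every HAR entry with a failed (0) or 4xx/5xx status
const fs = require('fs');

const har = JSON.parse(fs.readFileSync('log.har', 'utf8')); // hypothetical file name
for (const entry of har.log.entries) {
  const { status } = entry.response;
  if (status === 0 || status >= 400) {
    console.log(status, entry.request.method, entry.request.url);
  }
}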
Or you can save it as a *.har file and use a HAR analyzer, like Google's free HAR Analyzer.
There are a lot of tools that will help you analyze HAR files. Apps like Paw, Charles, and others can import HAR and show it to you as a history of requests. AFAIK Postman doesn't understand HAR yet, but you can go to your Network tab and copy a request in cURL format instead (or use a HAR-to-cURL converter) and import it right into Postman.

There's no such functionality.
The Filter input doesn't apply to the Status column.
You can augment devtools itself by adding a checkbox in the filter bar:
open the network panel
undock devtools into a separate window
press the DevTools hotkey in that window - Ctrl+Shift+I or ⌘⌥I - to open a DevTools instance for DevTools itself
paste the following code in this new devtools window console and run it
{
  // see the link in the notes below for a full list of request properties
  const CONDITION = r =>
    r.failed ||
    r.statusCode >= 400;
  const label = document.createElement('label');
  const input = label.appendChild(document.createElement('input'));
  input.type = 'checkbox';
  input.onchange = () => {
    const view = UI.panels.network._networkLogView;
    view.removeAllNodeHighlights();
    view._filters = input.checked ? [CONDITION] : [];
    view._filterRequests();
  };
  label.append('failed');
  UI.panels.network._filterBar._filters[1]._filterElement.appendChild(label);
}
You can save this code as a snippet in devtools to run it later.
To quickly switch docking mode in the main DevTools press Ctrl+Shift+D or ⌘⇧D.
Theoretically, it's not that hard to put this code into the resources.pak file in the Chrome application directory. There are several tools to decompile/rebuild that file.
The full list of internal request properties is in the constructor of NetworkRequest.

Related

What's the JSON equivalent of the 0-100 scores in Lighthouse reports?

If I use the PageSpeed Insights tool Google offers on its developer website, or the Lighthouse CLI with HTML as output, I get a very nicely formatted report with 0-100 scores.
However, if I run the Lighthouse CLI tool with the --output json option, I get a Lighthouse Result Object (LHR), and Lighthouse's Understanding the Results page helpfully points out that the HTML version is just "a rendering of information contained in the result object."
My question: How do I translate the JSON into the scores from the HTML version? I want to be able to programmatically react to changes for some custom monitoring I'm setting up for my site.
I actually ended up finding the answer myself by browsing the Lighthouse source code. So I'll just leave the answer here in case others are trying to do this.
With performance as an example, simply multiply lhr.categories.performance.score by 100 in the Lighthouse result object, like so:
import chromeLauncher from 'chrome-launcher';
import lighthouse from 'lighthouse';

const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
const result = await lighthouse('https://example.com', {
  onlyCategories: ['performance'],
  port: chrome.port
});
const score = result.lhr.categories.performance.score * 100;
await chrome.kill(); // don't leave the headless Chrome instance running
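Continuing the snippet above, a minimal sketch for reacting to the value programmatically (the threshold of 90 is an arbitrary example):

const THRESHOLD = 90; // arbitrary example budget
if (score < THRESHOLD) {
  console.error(`Performance score ${score} is below the budget of ${THRESHOLD}`);
  process.exit(1); // non-zero exit lets CI or a cron wrapper flag the regression
}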

How do I find an element from a web page and import it into a Google Sheets cell?

I'm attempting to scrape from this web page here: https://explorer.helium.com/accounts/14Jydka1ufeZBXAHNmjK9SWedvWtufdaJRbEMgtF8Bifc6Dv7Gm
I'm trying to find out how to pull the number below "Rewards(24H)" and import it into a cell.
I've tried using the IMPORTXML function but it gave me the "Imported content is empty" error.
After doing some research, I think the element is not rendered server-side, because I can't find it in the page source. So I opened up the Developer Tools for the page, clicked the Network tab and refreshed the page.
I filtered the results to see only the XHR parts. Clicking the Headers tab will display a number of APIs in the Request URL section.
This is as far as I have gotten. I cannot find any reference to the Rewards(24H) number in any of the JSON code.
It'd be much appreciated if anyone can explain how I can find that number and import it into a Google Sheets cell, preferably self-updating every hour.
Thanks!
Looking at the network requests for the above URL, the data comes from api.helium.io, so you can use the code below to get your desired data. FYI, that website already provides an API, with docs here: API Documentation, so you can follow the same approach for other data too. My guess as to why you couldn't find the number: the page rounds it to 2 decimals, whereas the API returns 5-6 decimal places, so the value displayed on the page never appears verbatim in the XHR responses.
Code:
import datetime
import requests

dt = datetime.datetime.now(datetime.timezone.utc)
querystring = {"min_time": "-60 day", "max_time": dt.strftime("%Y-%m-%dT%H:%M:%SZ"), "bucket": "day"}
response = requests.get('https://api.helium.io/v1/accounts/14Jydka1ufeZBXAHNmjK9SWedvWtufdaJRbEMgtF8Bifc6Dv7Gm/hotspots').json()
for data in response["data"]:
    print(f"name:{data['name']}")
    internal_data = requests.get(f"https://api.helium.io/v1/hotspots/{data['address']}/rewards/sum", params=querystring).json()
    last24hour = internal_data["data"][0]["total"]
    last7days = 0
    for i in range(0, 7):
        last7days += internal_data["data"][i]["total"]
    last30days = 0
    for i in range(0, 30):
        last30days += internal_data["data"][i]["total"]
    print(f"24H : {round(last24hour, 2)}")
    print(f"7D : {round(last7days, 2)}")
    print(f"30D : {round(last30days, 2)}")
Output: for each hotspot, its name followed by the rounded 24H, 7D, and 30D reward totals.
Let me know if you have any questions :)
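Since the question asks for a self-updating Google Sheets cell, here is a hedged Apps Script sketch of the same idea (the function name is made up, and it assumes the rewards/sum endpoint returns a single data object with a total field when no bucket parameter is given). You could use it in a cell as =HELIUM_REWARDS_24H() and attach an hourly time-driven trigger in the Apps Script editor to keep it fresh:

function HELIUM_REWARDS_24H() {
  // Account address taken from the question's URL
  var account = '14Jydka1ufeZBXAHNmjK9SWedvWtufdaJRbEMgtF8Bifc6Dv7Gm';
  var base = 'https://api.helium.io/v1';
  var hotspots = JSON.parse(
    UrlFetchApp.fetch(base + '/accounts/' + account + '/hotspots').getContentText()
  ).data;
  var total = 0;
  for (var i = 0; i < hotspots.length; i++) {
    // Assumption: without a bucket parameter, data is an object with a total field
    var sum = JSON.parse(
      UrlFetchApp.fetch(base + '/hotspots/' + hotspots[i].address + '/rewards/sum?min_time=-1%20day').getContentText()
    ).data;
    total += sum.total || 0;
  }
  return Math.round(total * 100) / 100; // round to 2 decimals like the site does
}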

Obtain list of My Places from Google Maps

I am trying to obtain the list of places the user has saved on Google Maps. Now I know there isn't an API for this (for whatever reason), but I saw here:
"My Places" Google Maps API
that apparently there used to be a way to obtain the data via a URL, but it does not seem to work with my list of places.
E.g.
https://www.google.com/maps/#46.889424,0.1194148,6z/data=!4m3!11m2!2s1KbZtik1IdXyNhwfXEb3P9vaZvzU!3e3
Does not seem to work if I append &output=kml or &output=json
I created this list on Google Maps, then hit share and obtained that link.
I even tried parsing the resulting HTML, but it seems everything is handled by some JavaScript engine and I can't find any reference to Google IDs there - I don't even know how they handle clicks!
Any help? There must be a way to retrieve this information programmatically!
EDIT:
I managed to get something working by visiting the shared link, processing the HTML, and storing the window.APP_INITIALIZATION_STATE variable. I then convert it to a JavaScript array and loop over it. Deep inside the array/map structure I managed to get the Google name and Google place ID out. That works to a point, but for lists over 20 items long Google only returns the first 20 and waits for the user to 'scroll down' before fetching the next 20. That scroll triggers another call, which looks a bit like:
https://www.google.com/search?tbm=map&fp=1&authuser=0&hl=en&gl=nl&pb=!4m8!1m3!1d54065472.4384380........
I can see the original feature ID included at the end of the URL, but I have no idea how to construct this URL in full to get the next 20 items... Any ideas?
Your saved places list actually has what you call a feature ID attribute. This isn't a common practice and Google frowns upon this technique, but take a look at this URL:
https://www.google.com/maps/preview/entity?authuser=0&hl=en&gl=us&pb=!1m10!1s0x0%3A0x3743ae09a161976b!3m8!1m3!1d14318.72623152007!2d-98.2296425!3d26.2070353!3m2!1i1024!2i768!4f13.1!12m3!2m2!1i392!2i106!13m57!2m2!1i203!2i100!3m2!2i4!5b1!6m6!1m2!1i86!2i86!1m2!1i408!2i200!7m42!1m3!1e1!2b0!3e3!1m3!1e2!2b1!3e2!1m3!1e2!2b0!3e3!1m3!1e3!2b0!3e3!1m3!1e8!2b0!3e3!1m3!1e3!2b1!3e2!1m3!1e9!2b1!3e2!1m3!1e10!2b0!3e3!1m3!1e10!2b1!3e2!1m3!1e10!2b0!3e4!2b1!4b1!9b0!14m3!1snyc5W-WeHY3r5gLwkoRI!7e81!15i10112!15m19!2b1!5m4!2b1!3b1!5b1!6b1!10m1!8e3!14m1!3b1!17b1!24b1!25b1!26b1!30m1!2b1!36b1!52b1!53b1!21m28!1m6!1m2!1i0!2i0!2m2!1i458!2i768!1m6!1m2!1i974!2i0!2m2!1i1024!2i768!1m6!1m2!1i0!2i0!2m2!1i1024!2i20!1m6!1m2!1i0!2i748!2m2!1i1024!2i768!22m1!1e81!29m0!30m1!3b1
It contains the feature ID from the link you posted:
https://www.google.com/maps/#46.889424,0.1194148,6z/data=!4m3!11m2!2s1KbZtik1IdXyNhwfXEb3P9vaZvzU!3e3
Along with other Maps parameters. When you hit that link, you're manually triggering the same callback that Google's own Maps scripts use to parse the data and feed it back to the Maps UI. If you look at array item 2, or {c:..}, you'll find a stringified array with the contents of your list. Depending on the programming language you're using, all it takes is a little tweaking (find/replace, looping through, linting and trimming, etc.) of this array and you can pull out your results. The cool thing is that if you add or remove a place, the next time you hit that endpoint the result is updated in real time.
Some people may call it a "hack", but it gets the job done. :)
Hope I pointed you in a useful direction in case you haven't found a solution; give this a shot.
Note that the URL has to be pasted in its entirety; SO truncated the hyperlink, so copy and paste the whole thing in one shot and Google will return a text file containing the arrays. In my case I curl the URLs I need and parse the returned strings to pull data from Google where their API has limitations. Just a tip. :)
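A rough Node.js sketch of that curl-and-parse flow (assumptions: the endpoint returns text that may be guarded by Google's usual )]}' anti-JSON prefix, and the cleaned body parses as JSON; if it doesn't, the raw text is printed for manual inspection):

const https = require('https');

// Paste the full /maps/preview/entity URL here (it must be complete, as noted above)
const ENTITY_URL = 'https://www.google.com/maps/preview/entity?...';

https.get(ENTITY_URL, res => {
  let body = '';
  res.on('data', chunk => (body += chunk));
  res.on('end', () => {
    const cleaned = body.replace(/^\)\]\}'/, ''); // strip the assumed anti-JSON guard
    try {
      console.log(JSON.stringify(JSON.parse(cleaned), null, 2));
    } catch (e) {
      console.log(cleaned); // not valid JSON as a whole: inspect manually
    }
  });
});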
Also check Joel's answer, who did some research and refined some of the following information.
Pagination
You can use this tool to decode the pb parameter. PB stands for protocol buffer (protobuf) and Google uses its own flavor of it for Maps. You can find different decoders for it by googling.
In my case, the pagination was done via one parameter (8iX0). It always seems to come with another, similar parameter (7i20), but I don't know what that one does. I can't yet confirm that this is always the case, but from my experience you're basically looking for two integers that are 20/40/60 etc. apart.
Here's what this looks like for me:
page 2 (7i20, 8i20)
page 3 (7i20, 8i40)
page 4 (7i20, 8i60)
From this information, I tried 7i20 8i00 for page 1, and that seemed to work. For lists with >100 items, it just continues like that (8i120, 8i140, etc.)
Here's a code snippet in Python (quick & dirty). Make sure to add (long) delays if your list has many pages, as you will eventually get rate-limited by captchas if you don't. Notice the 8i%s0 in the URL; make sure to put the %s back when you paste your pb block.
import html
import re
import requests

# paste your full pb block back into the URL; keep the %s inside 8i%s0
url = "https://www.google.com:443/search?tbm=map&pb=!7i20!8i%s0!..."
headers = {"Referer": "https://www.google.com/"}

def fetch_stops_from_maps():
    new_results = -1
    page = 0
    results = []
    while new_results != 0:
        new_results = 0
        x = requests.get(url % page, headers=headers)
        txt = html.unescape(x.text)
        txt = txt.split("\n")[1]
        # coordinates appear as [null,null,<lat>,<lon>] in the response
        found = re.findall(r"\[null,null,[0-9]{1,2}\.[0-9]{4,15},[0-9]{1,2}\.[0-9]{4,15}]", txt)
        print(len(found))
        for cord in found:
            # curr = the description you can manually type in when saving
            curr = txt.split(cord)[1].split("\"]]")[0]
            curr = curr[curr.rindex(",\"") + 2:]
            cords = str(cord).split(",")
            lat = cords[2]
            lon = cords[3][:-1]
            results.append((curr, lat, lon))
            new_results += 1
        page += 2
    return results
Actually getting the correct URL
Getting the correct URL currently seems to be the hardest part of doing this, and I have not fully figured it out either. However, for my use case this is not really important, so I extracted the correct pb block once and called it a day.
As explained in the other answers, the ID of the list is visible in the basic URL (here, the 2sXX...) when you navigate to the list in your browser. It seems to usually be 24-32 (?) characters long.
.../maps/<coords>/data=!4m3!11m2!2sXXXX...XXXX!3e3
If you have this ID, you can put it into an existing protobuf block and it may work (I only tested this with 3 different lists, all created by the same account, so this theory is far from proven).
Now, how do you get the block? I would just share the one I have, but because I only understand parts of what it does, I fear it may contain some personal info. Instead, I will share my process of getting it. For this I use Burp Suite, a program mainly used for web-security testing that has a free community edition. For our use case it is the perfect tool, because it lets you easily tinker with requests, change small parts, send the request again, and immediately see whether your change affected the response. For extracting the pb block, though, any program that can intercept browser traffic should work.
Here's the basic rundown with Burp:
From GMaps, share a list that has >20 items (this is important) and copy the public link
In Burp, go to the tab "Proxy", make sure "Intercept" is off and click "Open browser" to open the integrated chromium browser
There, paste the link and wait until maps loaded completely
In Burp, turn "Intercept" on, then in google maps, scroll down in the list, until it starts loading new results (always blocks of 20)
Burp has now intercepted all requests the browser made since you turned intercepting on. Click "Forward" and go through the requests until you see one in the format
GET /search?tbm=map&authuser=0&hl=de&gl=de&pb=!7i20....
This is what you're looking for.
Optionally, you can now right-click into the request text and click "Send to Repeater", then switch to the Repeater tab. There you can edit the request and send it again, seeing the response immediately. For example, after removing the authuser, hl, gl, q, ech, and psi URL parameters, the request still works flawlessly. If you remove the tch=1 parameter, the response comes back in a more human-readable format.
In the request text you should now be able to search for the list ID you got from the link previously (the search bar is at the bottom in Burp) and replace it with the ID of another list. As I said, this worked for me, but it's possible that the pb block contains additional metadata that makes lists from different Google accounts, or different types of lists, incompatible with specific pb blocks. Just a theory though. Let me know how it goes!
Further automating
I have theorized that one could automate getting the pb block using requests-html, because it can fully render HTML pages, but that project isn't updated anymore. Another option (probably the better one) is Selenium Wire, as you should be able to load the page and intercept the requests, like we did in Burp. Seems like a whole lot of work though :D
The only API I was able to find was this:
https://www.google.com/bookmarks/?output=xml
Used in a browser, you would first have to log in through Google's OAuth. It would then return your saved places. I'm not sure at the moment how you would embed the authentication to do this programmatically, but this might send you in the right direction.
I was able to extract the data I needed from my google maps list. Below are some comments that expand on some of the other comments here, along with a script that extracts all of the relevant data points from the network response.
Obtaining the underlying URL
You can easily find this URL by just opening the devtools on your browser, going to the network tab, and refreshing the webpage or scrolling down on the list until it loads new results (the list must be larger than 20 results). You should be able to find the network request that starts with https://www.google.com/search?tbm=map&pb... and go from there.
Increase the results size
I was able to increase the number of results returned from the request by changing the value of the 7i20 parameter. From what I can tell, the 7iXX parameter is the size of the page, and the 8iXX parameter is the starting offset. I haven't tested how large you can make the page limit, but I tested 100 and it seemed to work fine. This should make dealing with larger lists much easier; see the example fragment below.
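For example, based on that pattern, asking for the first 100 items in a single page would mean a pb fragment along these lines (the surrounding parameters are elided):

...!7i100!8i0!...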
Parsing out the data
Instead of using regex to parse the relevant data out of the response, I found that the response is basically just a massive JSON object, and I was able to identify the indexes for specific types of data, such as the name of the place, location, notes, etc. See the script below.
If you look at the buildResults function in the script below, you can see the exact indexes used to extract specific pieces of information. This may of course change over time if the network response changes format, so use these as a starting point in case the specific values aren't at those indexes anymore; hopefully they would still be close to those locations.
Script to parse the data (javascript / node.js)
// Insert the raw text content from the network response from the
// https://www.google.com/search?tbm=map&pb... url below.
const rawInput = null

function prepare(input) {
  // There are 5 random characters before the JSON object we need to remove.
  // Also I found that the newlines were messing up the JSON parsing,
  // so I removed those and it worked.
  const preparedForParsing = input.substring(5).replace(/\n/g, '')
  const json = JSON.parse(preparedForParsing)
  const results = json[0][1].map(array => array[14])
  return results
}

function prepareLookup(data) {
  // this function takes a list of indexes as arguments,
  // constructs them into a line of code, and then
  // execs the retrieval in a try/catch to handle data not being present
  return function lookup(...indexes) {
    const indexesWithBrackets = indexes.reduce((acc, cur) => `${acc}[${cur}]`, '')
    const cmd = `data${indexesWithBrackets}`
    try {
      const result = eval(cmd)
      return result
    } catch (e) {
      return null
    }
  }
}

function buildResults(preparedData) {
  const results = []
  for (const place of preparedData) {
    const lookup = prepareLookup(place)
    // Use the indexes below to extract certain pieces of data
    // or as a starting point for exploring the data response.
    const result = {
      address: {
        street_address: lookup(183, 1, 2),
        city: lookup(183, 1, 3),
        zip: lookup(183, 1, 4),
        state: lookup(183, 1, 5),
        country_code: lookup(183, 1, 6),
      },
      name: lookup(11),
      tags: lookup(13),
      notes: lookup(25, 15, 0, 2),
      placeId: lookup(78),
      phone: lookup(178, 0, 0),
      coordinates: {
        long: lookup(208, 0, 2),
        lat: lookup(208, 0, 3)
      }
    }
    results.push(result)
  }
  return results
}

const preparedData = prepare(rawInput)
const listResults = buildResults(preparedData)
console.log(listResults)

Chrome Console: VM

When executing a script directly in the console in Chrome, I saw the output labeled with something like VM117:2.
Does anyone know the meaning of VM117:2?
What does VM stand for?
It is an abbreviation of the phrase Virtual Machine.
In the Chrome JavaScript engine (called V8), each script has its own script ID.
Sometimes V8 has no information about the file name of a script, for example in the case of an eval. So DevTools uses the text "VM" concatenated with the script ID as a title for these scripts.
Some sites may fetch many pieces of JavaScript code via XHR and eval them. If a developer wants to see an actual script name for such scripts, they can use sourceURL. DevTools parses it and uses it for titles, mapping, etc.
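A minimal sketch of that: appending a sourceURL comment to eval'd code makes DevTools label the script with that (made-up) name instead of VM<id>:

// Without the comment, this shows up as VM<id>:<line>;
// with it, DevTools titles the script "my-dynamic-code.js"
eval('console.log("hello"); //# sourceURL=my-dynamic-code.js');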
Thanks to @MRB, I revisited this problem and found the solution today, via https://stackoverflow.com/a/63221101/1818089:
queueMicrotask (console.log.bind (console, "Look! No source file info..."));
It will group similar elements, so make sure you add a unique identifier to each log line to be able to see all data.
Demonstrated in the following example.
Instead of

data = ["Apple", "Mango", "Grapes"];
for (i = 0; i < 10; i++) {
  queueMicrotask(console.log.bind(console, " info..." + i));
}

use

data = ["Apple", "Mango", "Grapes"];
for (i = 0; i < data.length; i++) {
  queueMicrotask(console.log.bind(console, " info..." + i));
}
A better way would be to make a console.print function that does so and call it instead of console.log as pointed out in https://stackoverflow.com/a/64444083/1818089
// console.print: console.log without filename/line number
console.print = function (...args) {
  queueMicrotask(console.log.bind(console, ...args));
};
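Usage is then the same as console.log, for example:

console.print('fetched', 3, 'items'); // logged without a VM/file annotation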
Beware of the grouping problem mentioned above.

Inserting JSON file in Meteor

I have a form and I need to put the data from that form into a collection, using CoffeeScript.
I am currently doing this in my client CoffeeScript file:
#Question = new Meteor.Collection('questions')
Template.question.events
  'submit #question-form': (event) ->
    QuestionData = $('#question-form').serializeJSON()
    Question.insert QuestionData
I am not sure whether this data is being inserted or not. Please give me some useful ideas.
Thank you in advance!
Tools you can use:
1) You can add a line to your JavaScript:
debugger
Your client browser will stop when it reaches that line. Sometimes you have to have an inspect-element window open already before it triggers. I do this often in Chrome and Firefox. Firefox has a Debugger tab; Chrome, a Sources tab.
2) You can use minimongo in the client to check for the new record. In the console (which you can get to as a tab, as described above), type
Question.find().fetch()
You can also write
id = Question.insert QuestionData
console.log 'Question.findOne("' + id + '")'
which should give you an easy line to copy and paste.
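Another option (a hedged sketch in plain JavaScript; questionData stands in for your serialized form data): Meteor's insert accepts a callback, so you can log success or failure directly in the client console:

// err is set if the insert was rejected (e.g. by allow/deny rules)
Question.insert(questionData, function (err, id) {
  if (err) {
    console.error('insert failed:', err);
  } else {
    console.log('inserted question with _id:', id);
  }
});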
In a separate terminal/DOS prompt, fire up the mongo console using:
meteor mongo
then to list all the questions in the mongo console type
db.questions.find().pretty()
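For example, to eyeball only the most recently inserted document (sorting by _id, which embeds the insertion timestamp):

db.questions.find().sort({_id: -1}).limit(1).pretty()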