I've been trying for quite some time to understand this:
The Chrome settings documentation describes an alternate_urls option in the JSON:
"alternate_urls": [
  "http://www.moo.__MSG_url_domain__/s?q={searchTerms}",
  "http://www.noo.__MSG_url_domain__/s?q={searchTerms}"
],
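For reference, this key appears under chrome_settings_overrides.search_provider in an extension manifest. A minimal sketch of the surrounding context (field values are illustrative, not taken from the documentation):

```json
{
  "name": "Example Search Extension",
  "version": "1.0",
  "manifest_version": 2,
  "chrome_settings_overrides": {
    "search_provider": {
      "name": "Example Search",
      "keyword": "example",
      "search_url": "http://www.example.com/s?q={searchTerms}",
      "favicon_url": "http://www.example.com/favicon.ico",
      "encoding": "UTF-8",
      "is_default": true,
      "alternate_urls": [
        "http://www.moo.example.com/s?q={searchTerms}",
        "http://www.noo.example.com/s?q={searchTerms}"
      ]
    }
  }
}
```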
However, I couldn't find anywhere in the documentation how I'm supposed to use it.
I've tried setting the search_url to a page that returns 500/404 errors, expecting the alternate URL to take over, but it didn't.
Does anyone know how to get Chrome to use the alternate URLs, or what they are actually used for?
Thanks in advance,
Eric
I'm creating a web app in Electron, a web crawler with a neural network, which needs all web security disabled.
I tried modifying the headers (X-Frame-Options, Access-Control-Allow-Origin,
etc.) and using flags like chrome --allow-file-access-from-files --disable-web-security --user-data-dir="" etc., but nothing seems to remove the error above.
The iframe shows the origin-restricted websites after I modify the X-Frame-Options header, but when I try to access its document, the error above pops up.
I tried running it in Chrome and Firefox, and the same behaviour is encountered.
I've been googling for four hours now and I can't seem to find an appropriate answer. If you think this is a duplicate, please include a link; it would help a lot.
I found the solution: the disable-site-isolation-trials switch should be turned on:
app.commandLine.appendSwitch('disable-site-isolation-trials')
The only solution I've found that is not deprecated (as of this writing) is the one below. Older approaches like webPreferences: { webSecurity: false } won't work anymore, as webSecurity no longer controls CORS.
mainWindow.webContents.session.webRequest.onHeadersReceived(
  { urls: ['*://*/*'] },
  (details, callback) => {
    // Strip X-Frame-Options so cross-origin pages can be framed.
    if (details.responseHeaders['X-Frame-Options']) {
      delete details.responseHeaders['X-Frame-Options'];
    } else if (details.responseHeaders['x-frame-options']) {
      delete details.responseHeaders['x-frame-options'];
    }
    callback({ cancel: false, responseHeaders: details.responseHeaders });
  }
);
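If you prefer the header removal in one place, it can be factored into a small reusable helper (the function name is my own, not part of Electron's API) that handles any casing of the header name:

```javascript
// Return a copy of the response headers without X-Frame-Options,
// matching the header name case-insensitively.
function stripFrameOptions(responseHeaders) {
  const cleaned = {};
  for (const name of Object.keys(responseHeaders)) {
    if (name.toLowerCase() !== 'x-frame-options') {
      cleaned[name] = responseHeaders[name];
    }
  }
  return cleaned;
}
```

Inside the onHeadersReceived callback you would then call callback({ cancel: false, responseHeaders: stripFrameOptions(details.responseHeaders) }). Note that a Content-Security-Policy frame-ancestors directive can also block framing and would need similar handling.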
Please help me: I'm trying to scrape the splits table, but I can't, and I don't understand why.
This is the url:
https://www.strava.com/activities/1983801964
These are the credentials to log in:
email=trytest@tiscali.it
password=12345678
This is my code:
library(rvest)

pgsession <- html_session("https://www.strava.com/login")
pgform <- html_form(pgsession)[[1]]
filled_form <- set_values(pgform, email = "trytest@tiscali.it", password = "12345678")
submit_form(pgsession, filled_form)
page <- jump_to(pgsession, "https://www.strava.com/activities/1983801964")
page %>% html_nodes(xpath = '//*[@id="contents"]')
And I get {xml_nodeset (0)}
I've tried everything, including
page %>% html_nodes("body") %>% html_text()
but I can't get this information. Please help me!
Thanks in advance
I cannot find the split data in the HTML. Therefore, it may not be possible to scrape the splits from the HTML like this.
Alternatively, you can download the raw activity data. Link: https://support.strava.com/hc/en-us/articles/216918437-Exporting-your-Data-and-Bulk-Export
Edit: you may also be able to use this method to download Strava data: https://scottpdawson.com/export-strava-workout-data/
Edit 2: The splits are contained in a div with the class "splits-container", but the source HTML is likely modified by JavaScript after the page loads. This means you will probably not be able to scrape the data without running that JavaScript first (for example, with a headless browser). Hope this helps.
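To illustrate the point about JavaScript-rendered content, here is a toy sketch in plain JavaScript (the sample markup and function name are invented for illustration). A text search for the container only succeeds on the HTML that exists after rendering, not on what the server originally sent:

```javascript
// Extract the inner text of the splits-container div from an HTML string,
// or return null if the div is not present in that HTML.
function findSplitsContainer(html) {
  const match = html.match(/<div class="splits-container">([\s\S]*?)<\/div>/);
  return match ? match[1].trim() : null;
}

// What the server sends: an empty shell that JavaScript fills in later.
const serverHtml = '<body><div id="contents"></div></body>';

// What the browser shows after the page scripts have run.
const renderedHtml =
  '<body><div id="contents">' +
  '<div class="splits-container">1 km - 4:30</div>' +
  '</div></body>';

console.log(findSplitsContainer(serverHtml));   // prints: null
console.log(findSplitsContainer(renderedHtml)); // prints: 1 km - 4:30
```

This is why html_session, which only fetches the server HTML, returns an empty node set, while a headless browser that executes the page's scripts would see the splits.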
In Cloudflare, I know how to write page rules for filtering URLs. Does anyone know how to block URLs using a page rule? It would help me stop some DoS attack requests. For example, I want to block URLs with the pattern "www.example.com/?". Thank you.
Untested, but you could try creating a new page rule:
URL pattern: www.example.com/?
Security Level: I'm under attack
Browser Integrity Check: On
Good luck!
You can also create a redirection page rule:
URL pattern: www.example.com/?
Forwarding URL: https://notfound.404
I have a list of web domains and would like to check whether they are built to be mobile-responsive. A fairly reliable manual check is to see whether there are "@media" queries in the site's style.css.
I've used XPath (IMPORTXML) previously to bulk-check for strings on webpages, but I don't see an obvious way of importing the CSS files in bulk and searching for a string within them. Is there a way to do this? Ideally, I'd like to accomplish it in Google Sheets or with Google Apps Script.
Thank you!
You can use Google's Mobile-Friendly Test if you want to use a GUI.
If you want to use a REST API, try this (replace the url parameter with the site you want to test):
https://www.googleapis.com/pagespeedonline/v3beta1/mobileReady?url=http://facebook.com
This will return a JSON object with lots of useful info, but if you are just looking for mobile-friendliness, check the true or false result here:
"ruleGroups": {
  "USABILITY": {
    "pass": true
  }
}
Hope that helps!
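A minimal sketch of pulling that flag out of the response in Apps Script-compatible JavaScript (the sample object below stands in for the real API response; in Apps Script you would obtain the text with UrlFetchApp.fetch(url).getContentText()):

```javascript
// Stand-in for the JSON text returned by the mobileReady endpoint.
const responseText = JSON.stringify({
  ruleGroups: { USABILITY: { pass: true } }
});

// Parse the response and read the mobile-friendliness verdict,
// defaulting to false if the expected fields are missing.
function isMobileFriendly(jsonText) {
  const data = JSON.parse(jsonText);
  return Boolean(data.ruleGroups &&
                 data.ruleGroups.USABILITY &&
                 data.ruleGroups.USABILITY.pass === true);
}

console.log(isMobileFriendly(responseText)); // prints: true
```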
I have a phpBB forum along with PJAX.
PJAX works well with links; however, whenever I try to use a .pjax function, I get an error in Firebug claiming that .pjax is not defined.
Here is what works:
pjax.connect('wrap');
Here is what doesn't work:
$(document).pjax('a', '#pjax-container')
$.pjax.submit(event, '#pjax-container')
and anything else that contains .pjax.
I have the pjax/pjax-standalone.min.js file included in my header.
Am I misunderstanding something here?
Thanks,
Peter