I'm using CDP4J, though I expect this question relates directly to Google Chrome DevTools Protocol.
I want to get a list of the HTTP requests made for a webpage, along with their response codes. That would include the initial request in the main frame and subsequent requests, whether made via 3xx redirects or JavaScript-originated navigation.
It's not clear how to do this reliably.
I have tried the following:
Store the frame ID from io.webfolder.cdp.session.Session.getFrameId.
Add a callback to the session with addEventListener, recording every event of type io.webfolder.cdp.event.Events.NetworkResponseReceived.
Of these, filter those whose frame ID matches.
Of those, filter on type io.webfolder.cdp.type.page.ResourceType.Document.
I have a URL that I know returns an HTTP 303. But looking at the events, I don't see the original URL; I see only the final destination of the redirects. Every single NetworkResponseReceived has a status of 200.
How can I capture the chain of redirects?
I found the answer. The io.webfolder.cdp.event.network.RequestWillBeSent event has getRedirectResponse, which contains a response if it's a redirect.
I've been using the ResponseReceived event for this purpose. This seems to work to get the document URL from the event:
if (session.getTargetId().equals(responseReceived.getFrameId())
        && ResourceType.Document.equals(responseReceived.getType())) {
    String url = responseReceived.getResponse().getUrl();
    ...
}
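Since the question applies to the DevTools Protocol generally, here is a minimal sketch of the same redirect-chain capture against the raw protocol, using the Node chrome-remote-interface client rather than cdp4j (the target URL is a placeholder):

const CDP = require('chrome-remote-interface');

CDP(async (client) => {
    const { Network, Page } = client;
    await Network.enable();
    await Page.enable();

    // Each hop of a redirect chain arrives as a fresh requestWillBeSent
    // event whose redirectResponse field holds the 3xx response for the
    // previous URL in the chain.
    Network.requestWillBeSent(({ request, redirectResponse, type }) => {
        if (type !== 'Document') return;
        if (redirectResponse) {
            console.log(redirectResponse.status, redirectResponse.url, '->', request.url);
        }
    });

    // The final (non-redirect) response still shows up here, e.g. with status 200.
    Network.responseReceived(({ response, type }) => {
        if (type === 'Document') {
            console.log(response.status, response.url);
        }
    });

    await Page.navigate({ url: 'http://example.com/returns-303' });
    await Page.loadEventFired();
    await client.close();
});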
I'm having trouble finding any documentation regarding Google One Tap UX and how to persist sign-in state after a sign-in redirect. I am using the HTML API; check the code here:
<script>
    setTimeout(function () {
        let target = document.getElementById('google-signin');
        target.innerHTML = '<div id="g_id_onload" data-client_id="x" data-context="signin" data-login_uri="https://x/account/google/callback" data-auto_select="true" data-itp_support="true"></div>';
        var s = document.createElement("script");
        s.src = 'https://accounts.google.com/gsi/client';
        document.head.appendChild(s);
        console.log('appended script', s);
    }, 30000);
</script>
Essentially I am delaying this sign-in popup for 30 seconds. That part works fine, but soon after, this is what happens:
Sign in occurs
Redirect happens
Server redirects back to the referer page
After 30 seconds the process starts again
I would have assumed the Google SDK would set a cookie or something somewhere, but I guess it does not; either that, or I'm supposed to handle persisting sign-in state through my own means. I just want to know the correct approach here.
My question is: how does Google know if a user has already signed in using Google One Tap UX?
Figured out a solution. Google allows you to put an attribute on your div tag called data-skip_prompt_cookie="yourcookie"; this will skip the One Tap prompt if that cookie is present with a truthy value.
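Applied to the markup from the question, the attribute is just added to the g_id_onload div (a sketch; "yourcookie" stands in for whatever cookie name you choose):

target.innerHTML = '<div id="g_id_onload" data-client_id="x" ' +
    'data-context="signin" data-login_uri="https://x/account/google/callback" ' +
    'data-auto_select="true" data-itp_support="true" ' +
    'data-skip_prompt_cookie="yourcookie"></div>';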
What I did was, in my server callback in ASP.NET, add a cookie to the response. This ensures the prompt is only disabled once someone actually signs in.
Response.Cookies.Append("yourcookie", "true");
This ensures that when my server redirects back to the originating page, the cookie exists and the One Tap prompt does not show up again.
I want to validate an uploaded file. If the file is invalid, I want to refresh my page and inform the user that they did not upload a proper file. So I have this in my views/campaign.py:
from zipfile import BadZipfile
from django.shortcuts import redirect
from openpyxl import load_workbook

try:
    wb = load_workbook(mp_file)
except BadZipfile:
    return redirect('campaign_add', client_id)
The only way I know how to do it is to add another attribute to the client class:
is_error = models.BooleanField(default=False)
And then change views/campaign.py to:
try:
    client.is_error = False
    wb = load_workbook(mp_file)
    client.save()
except BadZipfile:
    client.is_error = True
    client.save()
    return redirect('campaign_add', client_id)
With that extra attribute, I can add a check in my campaign.html template: if is_error is true, show some kind of window with information about the bad file after the page reloads. But is there any way to do this without adding another attribute?
OK, let's imagine that the answer is a little more complicated than you expected.
Modern UIs don't reload pages just to report errors with user input or uploads.
So what is the best user experience here?
The user uploads some file(s) from the page.
You send the file via JavaScript to a dedicated API endpoint for uploads, say /workbook/uploads/. You need to create a handler (view) for this endpoint.
The endpoint returns 200 OK with an empty body on success, or an error, say 400 Bad Request, with detailed JSON in the body describing what's wrong.
You parse the response in JavaScript and show the user what's wrong (see the sketch after this list).
No refreshes are needed. 🙌
But a more specific answer would need more code from your implementation (view, urls, template).
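As a starting point, here is a minimal client-side sketch of that flow. The /workbook/uploads/ path matches the example above, while the input id, JSON field, and showError helper are assumptions for illustration:

// Sketch: POST the chosen file to the upload endpoint and surface any error.
document.getElementById('workbook-input').addEventListener('change', async (event) => {
    const formData = new FormData();
    formData.append('workbook', event.target.files[0]);

    // Django views typically also expect a CSRF token header on POSTs.
    const response = await fetch('/workbook/uploads/', { method: 'POST', body: formData });

    if (!response.ok) {
        const error = await response.json(); // e.g. {"detail": "Not a valid workbook file"}
        showError(error.detail);             // hypothetical UI helper
    }
});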
I'm running into an issue with the redirection that happens after a user of my app authenticates with Keycloak.
My app uses react-router hashRouter. When the initial redirect happens, I get a redirect_fragment that looks something like this:
http://localhost:3000/lol.html?redirect_fragment=%2F&redirect_fragment=%2Fstate%3D1c5900ee-954f-4532-b01c-dcf5d88f07a2%26code%3DKZNXVqQCcIXTCFu2ZIkx4quXa6zJb59zGKpNIhZwfNo.d2786d1e-67cd-437f-a873-bad49126bad4&redirect_fragment=%2Fstate%3D51a9cb44-b80a-4c14-8f3d-f04dfdb84377%26code%3Dp5cKQ7xVCR_n1s4ucXZTSE3O1T5lwNri_PBKD07Mt1Y.63364a83-f04f-4e64-a33e-faf00f6cd4ff&redirect_fragment=%2Fstate%3D05155315-ab60-4990-8d4e-444c7cce9748%26code%3DBxxpf_uMB28rKAQ6MXFTTrL9RE4rC3UtwCMXLu_K1Zo.4ce56da0-8e52-47e3-a0f2-4f982599bb98#/state=f3e362e4-c030-40ac-80df-9f9882296977&code=8HHTgd3KdlfwcupXR_5nDV0CqZNPV1xdCu3udc6l5xM.97b3ea71-366a-4038-a7ce-30ac2f416807
The URL keeps growing from there. I've read a few posts already indicating that redirection from Keycloak might have a problem with client-side routing via location.hash ... Any thoughts would be appreciated!
I think I figured it out!
The redirection loop seems to stop if I use the 'noslash' hashRouter instead of the default, which contains a slash.
My URLs look like this: localhost:3000/lol.html#client/side/route
instead of this: localhost:3000/lol.html#/client/side/route
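For reference, this is the configuration that produces those URLs, sketched with react-router v4's <HashRouter>, whose hashType prop accepts 'noslash':

import React from 'react';
import { HashRouter } from 'react-router-dom';

const App = () => (
    // hashType="noslash" yields #client/side/route instead of #/client/side/route
    <HashRouter hashType="noslash">
        {/* routes go here */}
    </HashRouter>
);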
The redirection now seems to terminate appropriately after one redirect, but now I'm running into a different problem where the hash portion of my route is not being honored by react-router...
EDIT: I figured the second issue out.
react-router creates a wrapper around window.location that it uses to tell which client-side "page" it is currently on. I found that this wrapper was out of sync with window.location.
Check out this console output. It was taken immediately after the redirection resolved (and the page was blank):
history pathname is /state=aon03i-238hnsln-soih930-8hsdlkh9-982hnkui-89hkgyq-8ihbei78-893hiugsu
history hash is (empty)
window.location pathname is /lol.html
window.location hash is #users/1
The state=blah-blah-blah in the history pathname is part of the redirect URL that Keycloak sends back after auth. You'll notice that window.location is updated to the correct path and hash, but that history seems to be one URL behind. Maybe Keycloak directly modifies window.location to perform this redirection?
I tried using history.push(window.location.hash) to push the hash fragment and update react-router, but got the error "this entry already exists on the stack". Since it clearly was not on the top of the location stack, this led me to believe that react-router compares window.location with its internal location to figure out where it ultimately is. So how did I get around this?
I used history.replace() instead, which just replaces the entry on the top of the stack with a new value, instead of pushing a new entry to the stack. This also makes sense, since we don't want users who navigate "back" in their browsers to go back to that /state=blah-blah-blah url <-- replace eliminates this entry from the history stack.
One final piece: react-router's history.location, like window.location, has both pathname and hash components. HashRouter uses the history.location.pathname component to keep track of the client-side route after the hash in the browser. The equivalent in window.location is stored in window.location.hash, so that is the value to pass to history.replace() instead of window.location.pathname. This confused me for a bit, but makes sense when you think about it.
react-router's history also keeps track of its current route with a prepended / instead of a prepended #, since it treats the route like any normal URL. So before calling history.replace(), I needed to take window.location.hash, replace the leading # with a /, and then pass that value to history.replace():
// Convert "#users/1" into "/users/1" so it matches the shape react-router's
// history expects, then swap it for the top history entry instead of pushing.
const slashPath = window.location.hash.replace('#', '/');
history.replace(slashPath);
Whew!
I've been reading through the docs for Chrome's implementation of the Web Push API here, and I noticed the API says "you promise to show a notification whenever you receive a push" and under limitations it's stated "you have to show a notification when you receive a push message".
After implementing the example on my localhost, I used cURL to send a push notification successfully. I was curious, so I commented out the lines that actually call the showNotification function and put in a console.log instead, and found that I could, in fact, send, receive, and totally ignore a push notification. I even tried using an if statement to control whether or not to show them, based on a global boolean that I controlled from my main page, and that worked. So I was wondering what they meant by saying you need to show a notification and that silent push notifications aren't available.
This wasn't just for the heck of it, I legitimately may need to control whether or not to show these notifications in my web app, so it would be great if this were actually possible. Code below in case you're curious.
self.addEventListener('push', function(event) {
    var title = 'New Message';
    var body = 'You have received a new message!';
    var icon = '/img/favicon.png';
    var tag = 'well-notification';
    console.log("DID RECEIVE NOTIFICATION");
    if (settingsShowNotification) {
        event.waitUntil(
            self.registration.showNotification(title, {
                body: body,
                icon: icon,
                tag: tag
            })
        );
    }
});
EDIT: On Chrome 47, if it's relevant.
UPDATE: After further experimenting, I found the obvious issue that I can't update the original global variable once the user navigates away and then re-navigates to the same page. However, I was able to circumvent this using a variable on the serviceworker itself and sending a message to the service worker using the API described here to toggle the showNotifications boolean.
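For anyone trying the same thing, a sketch of that message-passing approach (the flag name matches the code above; note the browser can stop an idle service worker, which resets a variable like this):

// From the page: flip the flag on the controlling service worker.
navigator.serviceWorker.controller.postMessage({ showNotifications: false });

// In the service worker: keep the flag in a variable the push handler reads.
var settingsShowNotification = true; // assumed default
self.addEventListener('message', function(event) {
    settingsShowNotification = event.data.showNotifications;
});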
You do have to show a notification, and if you don't show a notification you get a forced notification from the browser saying "This site has been updated in the background". But the requirements that show the scary message have been relaxed slightly:
As of Jan. '16, it seems that the last 10 push messages are checked for whether each resulted in a notification being shown. If one push message out of the last ten did not show a notification, that's considered an accident, and the browser won't show the scary "This site has been updated in the background" message. You have to miss two notifications out of the last ten for the scary message to appear.
Note: If the URL in the address bar of the active browser tab matches the origin of your page, and the browser is not minimized, you are not required to show a notification. This is probably why your tests succeeded, if you were on the page itself while running your tests.
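That exception can be checked for explicitly. Here is a sketch (not the asker's code) that skips the notification only when a tab from this origin is currently focused:

self.addEventListener('push', function(event) {
    event.waitUntil(
        self.clients.matchAll({ type: 'window' }).then(function(windowClients) {
            // If any tab from this origin is focused, showing a notification
            // can be skipped without triggering the browser's fallback message.
            var pageIsFocused = windowClients.some(function(client) {
                return client.focused;
            });
            if (!pageIsFocused) {
                return self.registration.showNotification('New Message', {
                    body: 'You have received a new message!'
                });
            }
        })
    );
});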
Chromium bug that tracks the implementation: https://code.google.com/p/chromium/issues/detail?id=437277
Relevant lines of source code: https://code.google.com/p/chromium/codesearch#chromium/src/chrome/browser/push_messaging/push_messaging_notification_manager.cc&l=249
How can I configure Polymer's platinum-sw-cache or platinum-sw-fetch to cache all URL paths except for /_api, which is the URL for Hoodie's API? I've configured a platinum-sw-fetch element to handle the /_api path, then platinum-sw-cache to handle the rest of the paths, as follows:
<platinum-sw-register auto-register
                      clients-claim
                      skip-waiting
                      on-service-worker-installed="displayInstalledToast">
  <platinum-sw-import-script href="custom-fetch-handler.js"></platinum-sw-import-script>
  <platinum-sw-fetch handler="HoodieAPIFetchHandler"
                     path="/_api(.*)"></platinum-sw-fetch>
  <platinum-sw-cache default-cache-strategy="networkFirst"
                     precache-file="precache.json">
  </platinum-sw-cache>
</platinum-sw-register>
custom-fetch-handler.js contains the following. Its intent is simply to return the result of the request the way the browser would if the service worker were not handling the request.
// Pass the request straight through to the network, untouched.
var HoodieAPIFetchHandler = function(request, values, options) {
    return fetch(request);
};
What doesn't seem to be working correctly: after user 1 has signed in and then signed out, and user 2 signs in, in Chrome DevTools' Network tab I can see that Hoodie regularly continues to make requests to BOTH users' API endpoints, like the following:
http://localhost:3000/_api/?hoodieId=uw9rl3p
http://localhost:3000/_api/?hoodieId=noaothq
Instead, it should be making requests to only ONE of these API endpoints. In the Network tab, each of these URLs appears twice in a row, and in the "Size" column the first request says "(from ServiceWorker)," and the second request states the response size in bytes, in case that's relevant.
The other problem which seems related is that when I sign in as user 2 and submit a form, the app writes to user 1's database on the server side. This makes me think the problem is due to the app not being able to bypass the cache for the /_api route.
Should I not have used both platinum-sw-cache and platinum-sw-fetch within one platinum-sw-register element, since the docs state they are alternatives to each other?
In general, what you're doing should work, and it's a legitimate approach to take.
If there's an HTTP request made that matches a path defined in <platinum-sw-fetch>, then that custom handler will be used, and the default handler (in this case, the networkFirst implementation) won't run. The HTTP request can only be responded to once, so there's no chance of multiple handlers taking effect.
I ran some local samples and confirmed that my <platinum-sw-fetch> handler was properly intercepting requests. When debugging this locally, it's useful to either add in a console.log() within your custom handler and check for those logs via the chrome://serviceworker-internals Inspect interface, or to use the same interface to set some breakpoints within your handler.
What you're seeing in the Network tab of the controlled page is expected: the service worker's network interactions are logged there, whether they come from your custom HoodieAPIFetchHandler or the default networkFirst handler. The network interactions from the perspective of the controlled page are also logged; they don't always correspond one-to-one with the service worker's activity, so logging both does come in handy at times.
So I would recommend looking deeper into the reason why your application is making multiple requests. It's always tricky thinking about caching personalized resources, and there are several ways you can get into trouble if you end up caching resources that are personalized for a different user. Take a look at the line of code that's firing off the second /_api/ request and see whether it's coming from a cached resource that needs to be cleared when your users log out. <platinum-sw> uses the sw-toolbox library under the hood, and you can make use of its uncache() method directly within your custom handler scripts to perform cache maintenance.
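For example, here is a sketch of a custom handler that both bypasses the cache and evicts any stale copy of the requested /_api URL; it assumes sw-toolbox's toolbox.uncache(), which takes a URL and returns a Promise:

// custom-fetch-handler.js (sketch)
var HoodieAPIFetchHandler = function(request, values, options) {
    // Drop any previously cached copy of this URL, then go straight to
    // the network, so one user's /_api responses can't leak to the next.
    return toolbox.uncache(request.url).then(function() {
        return fetch(request);
    });
};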