I am new to computer networks and have a simple question. Suppose we visit a website, www.aaa.com, and one of its pages includes a picture hosted on bbb.com. When we access aaa.com, who initiates the resource request to bbb.com: the aaa.com server or the user's browser? I have two thoughts:
The user first downloads the HTML file from aaa.com and the browser executes the code in it, so the user's browser makes the resource request.
The aaa.com server makes the request, gathers all the resources, and then sends them back to the user's browser.
Which idea is right?
Unless a visitor is using a proxy that routes all traffic through aaa.com, what the bbb.com server sees is a request made directly from the user's browser.
Your HTML file essentially acts as a pointer to all the resources needed by the website; the browser then fetches each of those resources itself. When a resource lives on another domain, this is called a cross-origin request.
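To make the first idea concrete, here is a minimal sketch using the question's hypothetical bbb.com image URL: the page served by aaa.com merely references the picture, and the browser itself issues the request for it.

```javascript
// A plain <img src="https://bbb.com/picture.png"> tag in the HTML served
// by aaa.com behaves the same way; shown here from script for clarity.
const img = document.createElement('img');
img.src = 'https://bbb.com/picture.png'; // the browser sends this request straight to bbb.com
document.body.appendChild(img);
```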
You can open up your Developer Tools in your browser to see the calls under the Network tab.
If you want to delve deeper into the subject, take a look at CORS on MDN.
Issue: the Banno framework appears to be "remembering" the URLs. This happens in a mobile browser when the user does not close the tab or browser. When the user reopens the page, Banno remembers the URL from last time and tries to load that same URL again.
What needs to happen is that Banno fully reloads the page, so that we can retrieve a new URL and log the user in again.
Could it be how they treat plugins when a browser is left open? A URL that has been loaded is not valid forever.
Odds are good that the situation you're encountering is described in https://stackoverflow.com/a/71267143/6680761
Essential info from that link is:
Part of keeping the state of the page is keeping authentication data. The OAuth flow used to initially authenticate the user is not intended to be run on every page refresh. It's expected that the embedded web application will keep its own authentication state. How this is done is usually very specific to the language and platform used for the embedded web application. However, almost all strategies use a cookie which is destroyed when the application closes.
Once the authorization code is exchanged for an access token, your app should redirect away from the OAuth callback URL. From that point forward, your app should use its own authentication mechanism.
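As an illustration of that flow, here is a minimal sketch assuming a Node.js/Express backend; the callback path and both helper functions are hypothetical, not Banno's actual API.

```javascript
const express = require('express');
const app = express();

app.get('/oauth/callback', async (req, res) => {
  // Exchange the one-time authorization code for an access token
  // (exchangeCodeForToken is a hypothetical helper for your OAuth provider).
  const token = await exchangeCodeForToken(req.query.code);

  // Establish the app's own session: a cookie that is destroyed when the
  // browser session ends (createSessionId is likewise hypothetical).
  res.cookie('session', createSessionId(token), { httpOnly: true, secure: true });

  // Redirect away from the callback URL so the stale code is never reused.
  res.redirect('/');
});

app.listen(3000);
```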
This situation cannot be easily reproduced because the website requires login through Steam.
The webpage shows a list of items that can be purchased. Whenever a new item is listed, it appears at the top of the list. However, when checking Chrome DevTools and Fiddler, I cannot find the request that carries the data for the newly listed items. In fact, no requests are made at all.
I am not using any filters in Chrome DevTools.
How is this webpage retrieving data from the server, and why are Chrome and Fiddler not picking up on it?
This question contains the answer: POST request not showing up in Chrome DevTools
jvda:
This is a common source of confusion when debugging network requests made from the web. Developers usually read the request list from top to bottom and assume the lowest entry is the most recent request. For 'plain' HTTP this is correct. However, many apps that want to show data in real time use WebSockets to communicate with an API.
The same thing happens in the web version of WhatsApp. Only assets like the actual JavaScript app, icons, etc. are loaded over plain HTTP. Then a WebSocket is opened, through which messages are exchanged, for example.
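A minimal sketch of that pattern (the URL and message shapes are placeholders): after the initial handshake, every update arrives as a frame on one long-lived connection, so no new rows appear among the plain HTTP requests in the Network tab.

```javascript
const socket = new WebSocket('wss://example.com/live');

socket.addEventListener('open', () => {
  // Many sites send a subscription message once the socket is open.
  socket.send(JSON.stringify({ action: 'subscribe', channel: 'items' }));
});

socket.addEventListener('message', (event) => {
  // Each newly listed item arrives as a frame on the existing connection,
  // not as a separate HTTP request.
  const item = JSON.parse(event.data);
  console.log('new item:', item);
});
```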
I thought this question was irrelevant, but I guess it was not. The data is exchanged through WebSockets.
So when I go to a website like, say, https://coinmarketcap.com (which displays cryptocurrency prices) in my Chrome browser, it looks like I don't see all the activity going on in the inspector under the Network tab.
I see the prices are updated live on the website (without refreshing), but I don't see any activity in the Network inspector.
There is of course activity when I load the page for the first time, but nothing after that, even though the website dynamically updates the prices. My first thought was that it could be fake updates from a client-side JS script, but I know many websites where you see the same thing, so what's going on here? What kinds of protocols are used to achieve this? As far as I know, WebSockets and polling (XHR) always show up.
(Screenshot of the Network inspector: traffic during the first 50 ms of loading, then nothing afterwards.)
It's using WebSockets. Filter the requests by WS and you should see the latest WebSocket connection.
Click on it to see the messages for that socket.
I had to use a proxy like Burp Suite to capture the WebSocket traffic sent and received between client and server. Here is the result: about 72 messages received in a single second.
The website you suggested is using WebSockets for communication.
To see WebSocket requests in the WS tab of the Network inspector, you have to open DevTools first and then refresh the page.
DevTools needs to capture the initial handshake when the communication is established. So if you open the website first and then check DevTools, you may not find anything under WS.
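If refreshing is awkward, another option is to wrap the WebSocket constructor before the page connects, for example from a user script that runs at document start. This is a sketch of the idea, not an official DevTools feature:

```javascript
// Wrap window.WebSocket so every incoming frame is logged to the console.
// This must run before the page's own scripts open their connection.
const NativeWebSocket = window.WebSocket;
window.WebSocket = function (url, protocols) {
  const ws = new NativeWebSocket(url, protocols);
  ws.addEventListener('message', (e) => console.log('WS frame from', url, e.data));
  return ws;
};
window.WebSocket.prototype = NativeWebSocket.prototype;
```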
We have a simple website for our company, deployed under IIS. It contains five HTML pages with CSS, and some of the pages link to other pages (for example, "go to home page"). Now I want to check whether my website creates cookies on users' machines. How can I do that? And do HTML websites that don't have any login usually create cookies?
Edit:
Using Chrome developer tools (F12), I have found the following:
Load the development tools in your favourite web browser, then load your website.
In Chrome, the cookies appear in the 'Application' tab of the developer tools: under 'Storage' you will see 'Cookies.' Microsoft Edge has them under 'Debugger > Cookies'.
Expand that and it will show all the cookies that have been delivered by your website.
It's possible for an 'HTML only' site to be delivering cookies, especially if you have third-party content.
Most cookies are generated on your server side and sent to the client.
You will have to go through your code and see whether it generates cookies.
Usually, if it's a plain HTML page, your server won't create a session for it, and most likely no cookie will be sent to the client.
Otherwise, if you use .aspx pages or MVC (for example), your server will most likely generate a session cookie and send it with the response to the client.
Another thing you'll have to check is whether your pages contain references to third-party websites, i.e. includes of .css/.js files from CDNs like Cloudflare; these CDNs usually put their own cookies in your clients' browsers.
And lastly, your pages might contain scripts like Google Analytics, which put some cookies in your clients' browsers.
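If you'd rather check from script than from the DevTools UI, you can run a quick snippet in the console on your site. Note that it only lists cookies visible to JavaScript; HttpOnly cookies (such as ASP.NET session cookies) only show up in the Application tab.

```javascript
// List every cookie the page's scripts can see, one per line.
document.cookie
  .split('; ')
  .filter(Boolean)
  .forEach((cookie) => console.log(cookie));
```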
An HTML page by itself does not create any cookies. Maybe you are mixing up cache with cookies? In PHP, for example, you have to define what gets saved into a cookie; if you don't define any cookie variables, there won't be any cookies.
I use gifjs to generate a lot of GIFs from PNG/JPG files once the user logs in successfully. At the same time, I want to change location.href to direct the user to my website's main page. But the problem is that the web worker task stops once the URL is changed. So how can I keep the web worker task running even though the location URL is redirected?
If the browser performs a full load of a new page, the short answer is that workers will be terminated. Dedicated workers definitely will be, and according to "Do Shared Web Workers persist across a single page reload, link navigation", shared workers (that aren't being used by other windows/tabs) will be too.
But...
If you avoid full page loads entirely, making the whole site use JavaScript for navigation and form posting, and use the HTML History API to change the URL, then the worker will survive as the user logs in and navigates the site. The worker will only be terminated when they leave the site or force a reload in the browser.
Depending on the current setup of your site, this might mean a considerable change to both the browser and server architecture, the details of which I suspect are beyond the scope of this question.
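A minimal sketch of that approach (the /main URL, the #app container, and the gif-worker.js filename are all illustrative): because no full page load ever happens, the dedicated worker keeps running across "navigation".

```javascript
// A dedicated worker that survives script-driven navigation.
const worker = new Worker('gif-worker.js');

async function navigate(url) {
  const html = await (await fetch(url)).text(); // fetch the next page's markup
  document.querySelector('#app').innerHTML = html; // swap in the visible content
  history.pushState({}, '', url); // update the address bar without reloading
}

// After a successful login, go to the main page without killing the worker.
navigate('/main');

// Handle the browser's back/forward buttons the same way.
window.addEventListener('popstate', () => navigate(location.pathname));
```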