I'm using Om for the client side, and over the lifetime of the application many components get mounted and unmounted. When a component mounts, various channels are opened (in go blocks), and I'm planning to use IWillUnmount to close them. But first, my questions are: What happens to unclosed channels? Do the resources they use get released? Can not closing channels (when unmounting components) degrade browser performance in the long run? Thanks.
Based on a cursory reading of the implementation, unclosed channels should not use resources as long as they are eligible to be garbage collected. That means neither the sender nor the receiver may retain references to them (or those referrers must themselves be eligible for collection).
All closing a channel does is empty its buffer and mark it closed so that nothing more can be added to the buffer. If it doesn't have any messages in its buffer, an open channel will use the same resources as a closed one.
https://github.com/clojure/core.async/blob/master/src/main/clojure/cljs/core/async/impl/channels.cljs#L110
I have been working with ZeroMQ for a while. I've already read some whitepapers and a lot of the guide, but one question remains open to me:
Let's say we use PUB-SUB.
Where, or on which system, are the message queues? On the publisher's system, on the subscriber's system, or somewhere in between?
Sorry if this is a stupid question.
This picture is from the Broker vs. Brokerless whitepaper.
Every ZeroMQ socket has both send and receive queues (the limits are set via the high-water-mark socket options; see the sketch after the example below).
In the case of a brokerless PUB/SUB, messages can be queued on either the sender or the receiver.
Example:
If there is network congestion, the sender may queue messages on its send queue.
If the receiver is consuming messages slower than they arrive, they will be queued on the receiver's receive queue.
If a queue reaches its high-water mark, messages will be lost.
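To see where those limits live in code, here is a minimal sketch assuming the zeromq npm package's v6-style API (the high-water-mark option names are my best recollection of that binding's spelling, so treat them as an assumption):

import { Publisher, Subscriber } from "zeromq";

async function run() {
  // Sender side: outgoing messages queue up here (bounded by the send
  // high-water mark) when the network is congested or peers read slowly.
  const pub = new Publisher({ sendHighWaterMark: 1000 }); // assumed option name
  await pub.bind("tcp://127.0.0.1:5556");

  // Receiver side: arriving messages queue up here (bounded by the
  // receive high-water mark) when the app consumes them too slowly.
  const sub = new Subscriber({ receiveHighWaterMark: 1000 }); // assumed option name
  sub.connect("tcp://127.0.0.1:5556");
  sub.subscribe("demo");

  // Give the subscription time to propagate (the "slow joiner" issue),
  // otherwise a PUB socket silently drops the message.
  await new Promise((resolve) => setTimeout(resolve, 100));

  await pub.send(["demo", "payload"]);
  const [topic, payload] = await sub.receive();
  console.log(topic.toString(), payload.toString());
}

run().catch(console.error);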
Q : Where or on which system are the messaging queues?
ZeroMQ's built-in design philosophy (the "Zen of Zero") uses smart-enough approaches on a case-by-case basis.
There are cases where there are no queues, as you know them, at all:
For example, the inproc:// transport class has no queue in the usual sense; there are just memory regions across which the messages are put and read. The whole magic thus happens inside the special component, the Context() instance, where the message-memory mapping takes place. inproc:// is a special case in which no I/O thread(s) are needed at all, as the whole process is purely memory-mapping based, and the Context() instance manipulates its internal state so as to present an externally visible queue-like abstraction while also handling the internal "queue" management.
Other transports similarly operate internal queue endpoints located on the participating hosts:
For the obvious reason that ZeroMQ is a broker-less system, there is no "central" place; all the functionality is spread across the participating (coordinated and cooperating) nodes.
Each end of the queue resides inside a Context() instance: one inside the local host's own Context(), the other inside the "remote" host's Context() instance, as declared and coordinated during the .bind()/.connect() construction of the ZMTP/RFC-specified socket archetype. Once such a ZMTP socket has been positively acknowledged, the queue abstraction (implemented in a distributed-system manner) holds until the socket gets .close()-ed or the Context() instance gets .term()-ed, forcefully or by chance.
For these reasons, proper capacity sizing is needed, as messages may reside inside Context()-operated memory before being transported across the { network | ipc-channel }-mediated connection (on the .send() side), or before being .recv()-ed by the remote application code (from inside the remote Context() instance) for any further use of the message-payload data (zero-copy message-data management is possible, though not all use cases actually use this mode to avoid replicated memory allocations and data-transfer costs).
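To make the two cases concrete, here is a minimal sketch using the zeromq npm package (v6-style API; the endpoint names are made up), contrasting an inproc:// pair, whose "queue" is just memory inside this process's Context, with a tcp:// endpoint, whose queue ends live in the two hosts' respective Context instances:

import { Push, Pull } from "zeromq";

async function main() {
  // inproc://: both ends share one Context inside this process, and
  // messages move through in-process memory with no I/O threads involved.
  const inprocPush = new Push();
  const inprocPull = new Pull();
  await inprocPush.bind("inproc://queue-demo");
  inprocPull.connect("inproc://queue-demo");

  await inprocPush.send("hello via shared memory");
  const [message] = await inprocPull.receive();
  console.log(message.toString());

  // tcp://: one queue end lives in this process's Context; the other end
  // lives in the Context of whichever remote process connects to us.
  const tcpPush = new Push();
  await tcpPush.bind("tcp://127.0.0.1:5557");
  // A remote peer would run: new Pull().connect("tcp://<this-host>:5557")
}

main().catch(console.error);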
Hello all, please help with the analysis of my page.
Question 1
Why, since everything is loaded from cache, is the load time 690 ms?
Question 2
What would be the reason to use private, max-age=60000?
That is: public, max-age=60000 vs. private, max-age=60000
https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/http-caching?hl=en
First, load time isn't just defined by the time it takes to get assets from the network. Painting and parsing the document can take a lot of time, as can parsing and executing JavaScript. In your case, DOMContentLoaded fires only after 491 ms, so that's already part of the answer.
As to your second question, the answer really is in the link you provided:
If the response is marked as “public” then it can be cached, even if it has HTTP authentication associated with it, and even when the response status code isn’t normally cacheable. Most of the time, “public” isn’t necessary, because explicit caching information (like “max-age”) indicates that the response is cacheable anyway.
By contrast, “private” responses can be cached by the browser but are typically intended for a single user and hence are not allowed to be cached by any intermediate cache - e.g. an HTML page with private user information can be cached by that user’s browser, but not by a CDN.
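To make that concrete, here's a minimal Node sketch (the routes are hypothetical) showing where those directives would be set. Note also that max-age is measured in seconds, so max-age=60000 is roughly 16.7 hours:

import { createServer } from "http";

const server = createServer((req, res) => {
  if (req.url === "/account") {
    // User-specific response: the browser may cache it, but shared
    // caches (CDNs, proxies) must not.
    res.setHeader("Cache-Control", "private, max-age=60000");
  } else {
    // Generic asset: any cache along the way may store it.
    res.setHeader("Cache-Control", "public, max-age=60000");
  }
  res.end("ok");
});

server.listen(8080);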
I've been reading through Google's slides for the so-called pre-optimisation. (For the ones interested or for those who do not know what I'm talking about, this slide kinda summarises it.)
In HTML5 we can prefetch and prerender pages in the link element. Here's an overview. We can use the rel values dns-prefetch, subresource, prefetch and prerender.
The first confusing thing is that apparently only prefetch is in the spec for HTML5 (and 5.1) but none of the others are. (Yet!) The second, that browser support is OK for (dns-)prefetch but quite bad for the others. Especially Firefox's lack of support for prerender is annoying.
Thirdly, the question that I ask myself is this: does the prefetching (or any other method) happen as soon as the browser reads the line (and does it, then, block the current page load), or does it wait with loading the resources in the background until the current page is loaded completely?
If it's loaded synchronously in a blocking manner, is there a way to do this asynchronously or after page load? I suppose with a JS solution like this, but I'm not sure it will run asynchronously then.
// Create a <link rel="prerender prefetch" href="next-page.php"> element
// and append it to <head>, instead of declaring the hint in the markup.
var pre = document.createElement("link");
pre.setAttribute("rel", "prerender prefetch");
pre.setAttribute("href", "next-page.php");
document.head.appendChild(pre);
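For instance, a variant that defers the injection until the load event (just a sketch; as discussed below, the UA may schedule the actual fetch however it likes anyway):

window.addEventListener("load", function () {
  // The current page has finished loading, so the hint can no longer
  // compete with resources the page itself needs.
  var pre = document.createElement("link");
  pre.setAttribute("rel", "prerender prefetch");
  pre.setAttribute("href", "next-page.php");
  document.head.appendChild(pre);
});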
Please answer both questions if applicable!
EDIT 17 September
After reading through the Editor's draft of Resource Hints I found the following (emphasis mine):
Resource fetches that may be required for the next navigation can negatively impact the performance of the current navigation context due to additional contention for the CPU, GPU, memory, and network resources. To address this, the user agent should implement logic to reduce and eliminate such contention:
Resource fetches required for the next navigation should have lower relative priority and should not block or interfere with resource fetches required by the current navigation context.
The optimal time to initiate a resource fetch required for the next navigation is dependent on the negotiated transport protocol, the user's current connectivity profile, available device resources, and other context-specific variables. The user agent is left to determine the optimal time at which to initiate the fetch - e.g. the user agent may decide to wait until all other downloads are finished, or may choose to pipeline requests with low priority if the negotiated protocol supports the necessary primitives. Alternatively, the user agent may opt out from initiating the fetch due to resource constraints, user preferences, or other factors.
Notice how much is left to what the user agent may do. I really fear that this will lead to different implementations by different browsers, which will lead to a divergence again.
The question remains, though. It isn't clear to me whether loading an external resource using prefetch (or one of the others) happens synchronously (and thus, when placed in the head, before the content is loaded), or asynchronously (with a lower priority). I am guessing the latter, though I don't understand how that is possible, because there's nothing in the link spec that would allow asynchronous loading of link elements' content.
Disclaimer: I haven't spent a lot of dedicated time with these specs, so it's quite likely I've missed some important point.
That said, my reading agrees with yours: if a resource is fetched as a result of a pre-optimization, it's likely to be fetched asynchronously, and there's little guarantee about where in the pipeline you should expect it to be fetched.
The intent seems advisory rather than prescriptive, in the same way that the CSS will-change property advises rendering engines that an element should receive special consideration, but doesn't prescribe the behavior, or indeed that there should be any particular behavior.
there's nothing in the link spec that would allow asynchronous loading of link elements' content
Not all links would load content in any case (an author type wouldn't cause the UA to download the contents of a mailto: URL), and I can't find any mention of fetching resources in the spec apart from that in the discussion around crossorigin:
The exact behavior for links to external resources depends on the exact relationship, as defined for the relevant link type. Some of the attributes control whether or not the external resource is to be applied (as defined below)... User agents may opt to only try to obtain such resources when they are needed, instead of pro-actively fetching all the external resources that are not applied.
(emphasis mine)
That seems to open the door for resources specified by a link to be fetched asynchronously (or not at all).
Consider a scenario where a browser has two or more tabs pointing to the same origin. Different event loops of the different tabs can lead to race conditions while accessing local storage and the different tabs can potentially overwrite each other's changes in local storage.
I'm writing a web application that would face such race conditions, and so I wanted to know about the different synchronization primitives that could be employed in such a scenario.
My reading of the relevant W3C spec, and the comment from Ian Hickson at the end of this blog post on the topic, suggests that what's supposed to happen is that a browser-global mutex controls access to each domain's localStorage. Each separate "thread" (see below for what I'm fairly confident that means) of JavaScript execution must attempt to acquire the storage mutex (if it doesn't have it already) whenever it examines local storage. Once it gets the mutex, it doesn't give it up until it's completely done.
Now, what's a thread, and what does it mean for a thread to be done? The only thing that makes sense (and the only thing that's really consistent with Hixie's claim that the mutex makes things "completely safe") is that a thread is JavaScript code in some browser context that's been initiated by some event. (Note that one possible event is that a <script> block has just been loaded.) The nature of JavaScript in the browser in general is that code in a <script> block, or code in a handler for any sort of event, runs until it stops; that is, runs to the end of the <script> body, or else runs until the event handler returns.
So, given that, what the storage mutex is supposed to do is to force all shared-domain scripts to block upon attempting to claim the mutex when one of their number already has it. They'll block until the owning thread is done — until the <script> tag code is exhausted, or until the event handler returns. That behavior would achieve this guarantee from the spec:
Thus, the length attribute of a Storage object, and the value of the various properties of that object, cannot change while a script is executing, other than in a way that is predictable by the script itself.
However, it appears that WebKit-based browsers (Chrome and Safari, and probably the Android browser too, and now maybe Opera?) don't bother with the mutex implementation, which leaves you in the situation that drove you to ask the question. If you're concerned about such race conditions (a perfectly reasonable attitude), then you can either use the locking mechanism suggested in the blog post (by someone who does, or did, work for Stackoverflow :) or implement a version-counting system to detect dirty writes. (edit: now that I think about it, an RDBMS-style version mechanism would be problematic, because there'd still be a race condition when checking the version!)
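For illustration, here is a minimal sketch of that version-counting idea (the key names are made up); the comment marks exactly where the residual race lives, which is why this detects dirty writes rather than preventing them:

function readWithVersion(key) {
  return {
    value: localStorage.getItem(key),
    version: parseInt(localStorage.getItem(key + ":version") || "0", 10)
  };
}

function writeWithVersion(key, value, expectedVersion) {
  var versionKey = key + ":version";
  var current = parseInt(localStorage.getItem(versionKey) || "0", 10);

  // Someone else bumped the version since we read: report a dirty write.
  if (current !== expectedVersion) return false;

  // Residual race: another tab can still write between the check above
  // and the two setItem calls below, since there is no mutex to hold.
  localStorage.setItem(key, value);
  localStorage.setItem(versionKey, String(current + 1));
  return true;
}

A caller would re-read and retry (or surface a conflict) whenever writeWithVersion returns false.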
We require all requests for downloads to have a valid login (non-http) and we generate transaction tickets for each download. If you were to go to one of the download links and attempt to "replay" the transaction, we use HTTP codes to forward you to get a new transaction ticket. This works fine for a majority of users. There's a small subset, however, that are using Download Accelerators that simply try to replay the transaction ticket several times.
So, in order to determine whether we want to, or even can, support download accelerators, we are trying to understand how they work.
How does having a second, third or even fourth concurrent connection to the web server delivering a static file speed the download process?
What does the accelerator program do?
You'll get a more comprehensive overview of download accelerators on Wikipedia.
Acceleration is multi-faceted
First
A substantial benefit of managed/accelerated downloads is that the tool in question remembers the start/stop offsets already transferred and uses Range headers (and the resulting 206 Partial Content responses) to request parts of the file instead of all of it.
This means that if something dies mid-transfer (e.g., a TCP timeout), it just reconnects where it left off, and you don't have to start from scratch.
Thus, if you have an intermittent connection, the aggregate transfer time is greatly lessened.
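A minimal sketch of that resume logic (Node 18+ TypeScript using the global fetch; the URL and file name are placeholders), assuming the server honours Range requests:

import { createWriteStream, existsSync, statSync } from "fs";

async function resumeDownload(url: string, file: string) {
  // Resume from however many bytes we already have on disk.
  const offset = existsSync(file) ? statSync(file).size : 0;
  const res = await fetch(url, { headers: { Range: "bytes=" + offset + "-" } });

  // 206 Partial Content means the server honoured the Range header;
  // a plain 200 means it ignored it and is resending the whole file.
  if (res.status !== 206) throw new Error("server did not resume: " + res.status);

  const out = createWriteStream(file, { flags: "a" }); // append to what we have
  for await (const chunk of res.body as any) out.write(chunk);
  out.end();
}

resumeDownload("http://example.com/big-file.bin", "big-file.bin").catch(console.error);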
Second
Download accelerators like to break a single transfer into several smaller segments of equal size, using the same start-range-stop mechanics, and perform them in parallel, which greatly improves transfer time over slow networks.
There's a limitation called the bandwidth-delay product: the size of the TCP buffers at either end, combined with the ping (round-trip) time, caps the speed you actually experience. In practice, this means large ping times will limit your speed regardless of how many megabits/sec all the interim connections have.
However, this limitation appears to be "per connection", so multiple TCP connections to a single server can help mitigate the performance hit of the high latency ping time.
Hence, people who live nearby are not so likely to need a segmented transfer, but people who live in far-away locations are more likely to benefit from going crazy with their segmentation.
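To put rough numbers on that (illustrative figures only): with a 64 KB TCP window and a 200 ms round-trip time, a single connection tops out around 64 KB / 0.2 s = 320 KB/s no matter how fast the link is, while four parallel segments raise that ceiling to roughly 1.25 MB/s.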
Thirdly
In some cases it is possible to find multiple servers that provide the same resource, sometimes a single DNS address round-robins to several IP addresses, or a server is part of a mirror network of some kind. And download managers/accelerators can detect this and apply the segmented transfer technique across multiple servers, allowing the downloader to get more collective bandwidth delivered to them.
Support
Supporting the first kind of acceleration is what I personally suggest as a "minimum" for support, mostly because it makes a user's life easier, and it reduces the amount of aggregate data transfer you have to provide, since users don't have to fetch the same content repeatedly.
To facilitate this, it's recommended that you compute how much they have transferred and not expire the ticket until they look "finished" (while binding traffic to the first IP that used the ticket), or until a given "reasonable" time to download it has passed; i.e., give them a window of grace before requiring they get a new ticket.
Supporting the second and third earns you bonus points, and users generally desire at least the second, mostly because international customers don't like being treated as second-class customers simply because of their greater ping time, and it doesn't objectively consume more bandwidth in any sense that matters. The worst that happens is they might cause your total throughput to be undesirable for how your service operates.
It's reasonably straightforward to deliver the first kind of benefit without allowing the second, simply by restricting the number of concurrent transfers from a single ticket, as sketched below.
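A sketch of that restriction (hypothetical ticket handling over plain Node HTTP; real ticket validation and file streaming are omitted): counting active transfers per ticket still permits resuming, one connection at a time, while blocking segmented parallel fetches:

import { createServer } from "http";

const active = new Map<string, number>(); // ticket -> open connections
const MAX_CONNECTIONS_PER_TICKET = 1;

const server = createServer((req, res) => {
  const ticket = new URL(req.url ?? "/", "http://x").searchParams.get("ticket");
  if (!ticket) { res.statusCode = 403; return res.end(); }

  const open = active.get(ticket) ?? 0;
  if (open >= MAX_CONNECTIONS_PER_TICKET) {
    res.statusCode = 429; // too many concurrent transfers on this ticket
    return res.end();
  }

  active.set(ticket, open + 1);
  res.on("close", () => active.set(ticket, (active.get(ticket) ?? 1) - 1));

  // ... validate the ticket and stream the (possibly ranged) file here ...
  res.end("file bytes would go here");
});

server.listen(8080);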
I believe the idea is that many servers limit or evenly distribute bandwidth across connections. By having multiple connections, you're cheating that system and getting more than your "fair" share of bandwidth.
It's all about Little's Law. Specifically, each stream to the web server sees a certain amount of TCP latency and so will only carry so much data. Tricks like increasing the TCP window size and implementing selective ACKs help, but they are poorly implemented and generally cause more problems than they solve.
Having multiple streams means that the latency seen by each stream matters less, as the global throughput increases overall.
Another key advantage of a download accelerator, even when using a single thread, is that it's generally better than the web browser's built-in download tool. For example, if the web browser decides to die, the download tool will continue. And the download tool may support functionality like pausing/resuming that the built-in browser tool doesn't.
My understanding is that one method download accelerators use is opening many parallel TCP connections: each TCP connection can only go so fast, and is often limited on the server side.
TCP is implemented such that if a timeout occurs, the timeout period is increased. This is very effective at preventing network overloads, at the cost of speed on individual TCP connections.
Download accelerators can get around this by opening dozens of TCP connections and dropping the ones that slow to below a certain threshold, then opening new ones to replace the slow connections.
While effective for a single user, I believe it is bad etiquette in general.
You're seeing the download accelerator trying to re-authenticate using the same transaction ticket - I'd recommend ignoring these requests.
From: http://askville.amazon.com/download-accelerator-protocol-work-advantages-benefits-application-area-scope-plz-suggest-URLs/AnswerViewer.do?requestId=9337813
Quote:
The most common way of accelerating downloads is to open up parallel downloads. Many servers limit the bandwidth of one connection, so opening more in parallel increases the rate. This works by specifying the offset at which a download should start, which is supported by both HTTP and FTP.
Of course, this way of acceleration is quite "unsocial". The bandwidth limitation is implemented to be able to serve a higher number of clients, so using this technique lowers the maximum number of peers that are able to download. That's the reason why many servers limit the number of parallel connections (recognized by IP); e.g., many FTP servers do this, so you run into problems if you download a file and try to continue browsing using your browser. Technically, these are two parallel connections.
Another technique to increase the download rate is a peer-to-peer network, where different sources (e.g., limited by asymmetric DSL on the upload side) are used for downloading.
Most download "accelerators" really don't speed up anything at all. What they are good at doing is congesting network traffic, hammering your server, and breaking custom scripts like you've seen. Basically, instead of making one request and downloading the file from beginning to end, it makes, say, four requests: the first downloads from 0-25%, the second from 25-50%, and so on, all at the same time. The only particular case where this helps is if the user's ISP or firewall does some kind of traffic shaping such that an individual download's speed is limited to less than their total download speed.
Personally, if it's causing you any trouble, I'd say just put up a notice that download accelerators are not supported, and have the users download files normally, or using only a single thread.
They don't, generally.
To answer the substance of your question, the assumption is that the server is rate-limiting downloads on a per-connection basis, so simultaneously downloading multiple chunks will enable the user to make the most of the bandwidth available at their end.
Typically, download accelerators depend on partial content downloads, i.e. status code 206. Just like streaming media players, they ask the server for a small chunk of the full file, download it, and proceed. The catch is that if a server restricts partial content downloads, the download accelerator won't work. It's easy to configure a server like Nginx to restrict partial content downloads.
How to know if a file can be downloaded via ranges/partially?
Ans: Check for the Accept-Ranges response header. If it exists (and its value isn't none), you are good to go.
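A quick way to check from code (a sketch; the URL is a placeholder):

async function supportsRanges(url: string): Promise<boolean> {
  // HEAD fetches only the response headers, not the body.
  const res = await fetch(url, { method: "HEAD" });
  const ranges = res.headers.get("accept-ranges");
  return ranges !== null && ranges.toLowerCase() !== "none";
}

supportsRanges("http://example.com/big-file.bin").then(console.log);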
How to implement a feature like this in any programming language?
Ans: Well, it's pretty easy. Just spin up some threads or coroutines (choose threads/coroutines over processes in an I/O- or network-bound system) to download the N chunks in parallel, and save each partial chunk at the right position in the file; then you are technically done. Calculate the download speed by keeping a global variable downloaded_till_now = 0 and incrementing it as each thread completes a chunk. Don't forget a mutex, since multiple threads write to that global resource: acquire the lock before the update and release it afterwards. Also keep a Unix-time counter and do the math:
speed_in_bytes_per_sec = downloaded_till_now / (current_unix_time - start_unix_time)
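Here is a sketch of that approach in Node-style TypeScript (the URL, file name, and part count are made up). Note that in JavaScript's single-threaded model the shared progress counter needs no mutex, unlike the threaded description above:

import { closeSync, openSync, writeSync } from "fs";

async function segmentedDownload(url: string, file: string, parts: number) {
  // Ask the server for the total size first.
  const head = await fetch(url, { method: "HEAD" });
  const total = Number(head.headers.get("content-length"));
  const fd = openSync(file, "w");

  let downloadedTillNow = 0; // shared progress counter (no lock needed in JS)
  const start = Date.now();
  const chunk = Math.ceil(total / parts);

  await Promise.all(
    Array.from({ length: parts }, async (_, i) => {
      const from = i * chunk;
      const to = Math.min(from + chunk - 1, total - 1);
      const res = await fetch(url, { headers: { Range: `bytes=${from}-${to}` } });
      if (res.status !== 206) throw new Error("no partial content support");
      const buf = Buffer.from(await res.arrayBuffer());
      writeSync(fd, buf, 0, buf.length, from); // write at the right offset
      downloadedTillNow += buf.length;
      const secs = (Date.now() - start) / 1000;
      console.log(Math.round(downloadedTillNow / secs) + " bytes/sec so far");
    })
  );
  closeSync(fd);
}

segmentedDownload("http://example.com/big-file.bin", "big-file.bin", 4).catch(console.error);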