From looking online, it appears that the Chrome profiler samples 1000 times a second. This seems like a reasonable default that balances information collection against overhead. However, I'm finding the default isn't aggressive enough for my current task.
I was wondering if there is a way to configure this default so I could try a few other values. I'm absolutely willing to accept the increased overhead while sampling this task.
Thanks!
There's a high resolution profiling option in the DevTools settings. If enabled, it will sample at a 10 kHz rate.
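If the checkbox isn't flexible enough, the same knob is also exposed through the DevTools Protocol as Profiler.setSamplingInterval (the value is in microseconds and must be set before the profiler is started), so you can experiment with arbitrary rates from a script. A rough sketch using Puppeteer; the URL and the 100 µs interval are just placeholders:

```typescript
// Sketch: setting the CPU profiler sampling interval via the DevTools Protocol.
// Profiler.setSamplingInterval takes microseconds and must be called before Profiler.start.
import puppeteer from 'puppeteer';
import { writeFileSync } from 'fs';

async function profileWithCustomInterval(url: string, intervalUs: number) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  const client = await page.target().createCDPSession();

  await client.send('Profiler.enable');
  // e.g. 100 µs ≈ 10 kHz, which matches the "high resolution" DevTools setting
  await client.send('Profiler.setSamplingInterval', { interval: intervalUs });
  await client.send('Profiler.start');

  await page.goto(url);
  // ... exercise the code paths you care about here ...

  const { profile } = await client.send('Profiler.stop');
  // the resulting .cpuprofile can be loaded back into DevTools
  writeFileSync('profile.cpuprofile', JSON.stringify(profile));
  await browser.close();
}

profileWithCustomInterval('https://example.com', 100);
```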
Related
I'm using the Profiles tab in the Chrome developer tools to record memory heap snapshots. My app has a memory leak, so I'm expecting the snapshots to gradually increase in size, which they do. But for reasons I don't understand, the first snapshot is always artificially large... creating a seemingly deceptive drop in memory between the first and second. All subsequent snapshots gradually increase as expected.
I know there is often extra memory used at the beginning of a page load, due to caching and other setup. But the same thing happens no matter when I take the first snapshot; it could be 30 seconds after the page is loaded or 30 minutes, same pattern. My only guess is that the profiling tool itself is interacting with the memory somehow, but that seems like a stretch.
Any ideas what's going on here?
Right before a memory snapshot is taken, Chrome tries to collect garbage. It doesn't collect it thoroughly, though; it only does a predefined number of passes (this magic number seems to be 7). Therefore, when the first snapshot is taken there might still be some uncollected garbage left.
Before taking the first snapshot, try going to the "Timeline" tab and forcing garbage collection manually.
From what I've tested, this always reduces the size of the first snapshot.
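If you want the same behaviour outside the DevTools UI, the forced collection is also available through the DevTools Protocol (HeapProfiler.collectGarbage) right before taking the snapshot. A sketch using Puppeteer; the URL and output file name are placeholders:

```typescript
// Sketch: force a GC via the DevTools Protocol before taking a heap snapshot,
// mirroring the manual "collect garbage" step in the Timeline tab.
import puppeteer from 'puppeteer';
import { createWriteStream } from 'fs';

async function snapshotAfterGc(url: string, outFile: string) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url);

  const client = await page.target().createCDPSession();
  await client.send('HeapProfiler.enable');
  await client.send('HeapProfiler.collectGarbage'); // force GC before snapshotting

  // the snapshot arrives as a stream of JSON chunks
  const out = createWriteStream(outFile);
  client.on('HeapProfiler.addHeapSnapshotChunk', ({ chunk }) => out.write(chunk));
  await client.send('HeapProfiler.takeHeapSnapshot', { reportProgress: false });
  out.end();

  await browser.close();
}

snapshotAfterGc('https://example.com', 'first.heapsnapshot');
```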
We have a bucket of about 34 million items in a Couchbase cluster of 6 AWS nodes. The bucket has been allocated 32.1 GB of RAM (5482 MB per node) and is currently using 29.1 GB. If I use the formula provided in the Couchbase documentation (http://docs.couchbase.com/admin/admin/Concepts/bp-sizingGuidelines.html), it should use approx. 8.94 GB of RAM.
Am I calculating it incorrectly? Below is a link to a Google spreadsheet with all the details.
https://docs.google.com/spreadsheets/d/1b9XQn030TBCurUjv3bkhiHJ_aahepaBmFg_lJQj-EzQ/edit?usp=sharing
Assuming that you indeed have a working set of 0.5%, which, as Kirk pointed out in his comment, is odd but not impossible, then you are calculating the result of the memory sizing formula correctly. However, it's important to understand that the formula is not a hard and fast rule that fits all situations. Rather, it's a general guideline and serves as a good starting point for you to go and begin your performance tests. Also, keep in mind that RAM sizing isn't the only consideration for deciding on cluster size, because you also have to consider data safety, total disk write throughput, network bandwidth, CPU, how much a single node failure affects the rest of the cluster, and more.
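For reference, here is the sizing formula from the linked guidelines sketched out as a calculation. Every input below is a placeholder (only the 34 million item count comes from the question), so substitute the key/value sizes, replica count, working-set percentage and constants from your own spreadsheet:

```typescript
// Sketch of the RAM sizing formula from the Couchbase sizing guidelines.
// All inputs are placeholders - plug in the real numbers from the spreadsheet.
const documents = 34_000_000;   // items in the bucket
const idSize = 36;              // average key size in bytes (placeholder)
const valueSize = 1_000;        // average value size in bytes (placeholder)
const replicas = 1;             // number of replica copies
const workingSetPct = 0.005;    // fraction of data kept "hot" (0.5%)
const metadataPerDoc = 56;      // per-document metadata in bytes (version dependent, see the docs)
const headroom = 0.25;          // extra space for fragmentation etc.
const highWaterMark = 0.85;     // memory high water mark

const copies = 1 + replicas;
const totalMetadata = documents * (metadataPerDoc + idSize) * copies;
const totalDataset = documents * valueSize * copies;
const workingSet = totalDataset * workingSetPct;

const clusterRamQuotaBytes = (totalMetadata + workingSet) * (1 + headroom) / highWaterMark;
console.log(`Cluster RAM quota: ${(clusterRamQuotaBytes / 1024 ** 3).toFixed(2)} GiB`);
```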
Using the result of the RAM sizing formula as a starting point, you should now actually test whether your working assumptions were correct. That means putting real (or close to representative) load on the bucket and seeing whether the percentage of cache misses is low enough and the operation latency is within your acceptable limits. There is no general rule for this; what's acceptable to some applications might be too slow for others.
Just as an example, if you see that under load your cache miss ratio is 5% and, while the average read latency is 3 ms, the top 1% latency is 100 ms, then you have to consider whether having one out of every 100 reads take that much longer is acceptable in your application. If it is, great; if not, you need to start increasing the RAM size until it matches your actual working set. Similarly, you should keep an eye on the disk throughput, CPU usage, etc.
We are finding that the regular expression cache in our JRuby application is out of control: it just keeps growing and growing until the app grinds to a halt.
It does eventually get garbage collected, but transaction times become far too high (90 seconds instead of 1-2 seconds) long before that.
Is there a way to either stop this Regexp Cache from growing so much or limit the size of the cache?
First of all, since you already mentioned looking at the source in "Very large retained heap size for org.jruby.RubyRegexp$RegexpCache in JRuby Rails App", you probably realised there's no such support implemented.
I would say you have 2-3 options:
implement support for limiting or completely disabling the cache within JRuby's RubyRegexp
introduce a "hack" that checks available memory and clears out some of the RubyRegexp caches, e.g. from another thread (at least until a PR is accepted into JRuby)
look into tuning or using a different GC (including some JVM options) so that the app performs more predictably ... this is application dependent and cannot be answered (in general) without knowing the specifics
One hint related to how the JVM keeps soft references: -XX:SoftRefLRUPolicyMSPerMB=250. It's 1000 (1 second) by default, so decreasing it means they will live shorter ... but it might all just come down to when they're collected (depends on the GC and Java version, I guess), so in the end you might find you're fixing the symptom and not the real cause (as noted, things like these cannot be generalized, especially knowing very little about the app and/or the JVM options used).
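To make option 1 a bit more concrete, here is a sketch of what a size-bounded (LRU) regexp cache boils down to. It's written in TypeScript purely for illustration; the real change would have to live in RubyRegexp's Java code inside JRuby:

```typescript
// Conceptual sketch only: a size-bounded (LRU) regexp cache, illustrating option 1.
class BoundedRegexpCache {
  private cache = new Map<string, RegExp>(); // Map keeps insertion order -> cheap LRU
  constructor(private maxEntries = 1024) {}

  get(source: string): RegExp {
    let re = this.cache.get(source);
    if (re) {
      // refresh recency by re-inserting the entry at the end
      this.cache.delete(source);
    } else {
      re = new RegExp(source);
      if (this.cache.size >= this.maxEntries) {
        // evict the least recently used entry (first key in insertion order)
        const oldest = this.cache.keys().next().value;
        if (oldest !== undefined) this.cache.delete(oldest);
      }
    }
    this.cache.set(source, re);
    return re;
  }
}

// usage: the cache never grows past maxEntries, so it can't dominate the heap
const cache = new BoundedRegexpCache(512);
console.log(cache.get('\\d{4}-\\d{2}-\\d{2}').test('2024-01-01'));
```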
How many peer connections can I create on a single client? Is there any limit?
I assume you've arrived at 256 experimentally, since there is currently no documentation or spec to suggest it. I don't know exactly how things have changed since 2013, but currently my own experiments cap out at 500 simultaneous connections per page. As far as I can tell, Firefox has no such limit.
The real limit, according to the Chromium source code, is 500 (source). As far as I can tell, there was no limit before this was implemented (source), even going as far back as the WebKit days.
I think one reason it can be tricky to keep track of is that Chrome (and FF, for that matter) have always been bad at garbage collecting dead connections. If you check chrome://webrtc-internals (FF equivalent: about:webrtc), there will often be a build-up of zombie connections that count towards the 500 limit. These persist until you manually destroy them, or close/refresh the page. One way to work around this is through your own heartbeat implementation, or using the signalling server to notify peers of disconnections so that the remaining peers can destroy their connections (although this requires a persistent connection to a signalling server).
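A minimal sketch of that workaround; the signalling message shape ("peer-left") and endpoint are made up, so adapt them to whatever your signalling server actually sends:

```typescript
// Sketch: tear down RTCPeerConnections eagerly so zombie connections
// don't keep counting toward the per-page cap.
const peers = new Map<string, RTCPeerConnection>();

function addPeer(peerId: string): RTCPeerConnection {
  const pc = new RTCPeerConnection();
  // also close on ICE failure so dead links don't linger
  pc.oniceconnectionstatechange = () => {
    if (pc.iceConnectionState === 'failed' || pc.iceConnectionState === 'closed') {
      removePeer(peerId);
    }
  };
  peers.set(peerId, pc);
  return pc;
}

function removePeer(peerId: string): void {
  const pc = peers.get(peerId);
  if (!pc) return;
  pc.close(); // frees the slot counted against the limit
  peers.delete(peerId);
}

// The signalling server tells us a peer went away (e.g. after missed heartbeats).
const signalling = new WebSocket('wss://example.com/signalling');
signalling.onmessage = (ev) => {
  const msg = JSON.parse(ev.data);
  if (msg.type === 'peer-left') removePeer(msg.peerId);
};
```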
The maximum peer connection limit is 256 (on Chrome).
Not sure about other major browsers; depending on your bandwidth, they are limited in order to maintain a certain stability.
Not sure if there is any hard limit (other than runtime memory), but there is certainly a soft one.
If you are considering a full mesh topology (an app in which every client is connected to every other client), then you have to consider the main deficiency of this topology: for a large number of participants in a video conference session, the bandwidth required to sustain the overall session grows with each new participant.
Therefore, users with low bandwidth will not be able to handle a video conference session with a large number of participants.
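To put rough numbers on that: in a full mesh each client uploads and downloads one stream per other participant, so per-client bandwidth grows linearly and the total number of connections grows quadratically. A back-of-the-envelope sketch, where the 1.5 Mbps per stream is only an illustrative figure:

```typescript
// Rough mesh-bandwidth arithmetic; 1.5 Mbps per video stream is an illustrative assumption.
const mbpsPerStream = 1.5;

for (const n of [2, 4, 6, 10]) {
  const perClientUp = (n - 1) * mbpsPerStream;   // one outgoing stream per other peer
  const perClientDown = (n - 1) * mbpsPerStream; // one incoming stream per other peer
  const totalConnections = (n * (n - 1)) / 2;    // unique peer connections in the mesh
  console.log(
    `${n} participants: ${perClientUp} Mbps up / ${perClientDown} Mbps down per client, ` +
    `${totalConnections} connections total`
  );
}
```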
Hope it helps.
This is an interesting topic. I was just watching a YouTube video about multi-peer WebRTC. The presenters said it just depends on the number of peers, but the highest they demonstrated was fewer than 6 peers. It also depends on your bandwidth. The best thing you can do is to build a WebRTC app, try connecting with your friends, and judge for yourself, as this also depends on the country you are in. For example, I live in Botswana and the network is not fast, so I wouldn't expect to have 6 peers while I'm still struggling to get clear communication with just one person on this side.
According to this source:
In practice, even under optimal network conditions, a mesh video call doesn’t work well beyond five participants.
I'm writing a tool to collect information about power consumption of notebooks. I need to measure the current power consumption, and I use Perfmon to do so. But I found a strange bug.
Here is the typical graph of power consumption (this is "Power Meter" - "Power" - "_Total"):
Measurements are updated about once every 10-15 seconds.
But if I run Everest (or AIDA64) with its Power Management tab open, the readings update more often and the results are more accurate:
Measurements are updated about once every 1-2 seconds.
I do not understand what happens when Everest is running. I really need to get accurate data.
Do you have any ideas?
I would really appreciate any suggestions in this regard.
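In case it's useful for comparison, the same counter can also be polled from a script with the built-in typeperf tool (sketch below; the counter path is the one mentioned above, the 1-second interval is arbitrary). Note that polling more often doesn't by itself make the underlying "Power Meter" value refresh faster, which seems to be the core of the problem:

```typescript
// Sketch: polling the "\Power Meter(_Total)\Power" counter via typeperf (Windows built-in).
// This only reads the counter more frequently; how often the value itself refreshes
// depends on the ACPI/driver side, which is presumably what Everest/AIDA64 influences.
import { spawn } from 'child_process';

const counter = '\\Power Meter(_Total)\\Power';
// -si 1: sample every second, -sc 600: collect 600 samples
const proc = spawn('typeperf', [counter, '-si', '1', '-sc', '600']);

proc.stdout.on('data', (buf: Buffer) => {
  // typeperf prints CSV lines like: "01/01/2024 12:00:00.000","12345.000000"
  for (const line of buf.toString().split(/\r?\n/)) {
    const m = line.match(/"([^"]+)","([\d.]+)"/);
    if (m) console.log(`${m[1]}  ${m[2]}`);
  }
});

proc.on('exit', (code) => console.log(`typeperf exited with code ${code}`));
```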