How to increase buffer size of gl2ps renderer - octave

I make videos out of PNG files created by Octave through the print command. Sometimes the created image is large enough that Octave has to bump the gl2ps buffer size a couple of times, accompanied by the warnings:
GL2PS info: OpenGL feedback buffer overflow
warning: gl2ps_renderer::draw: retrying with buffer size: 8.4E+06 B
GL2PS info: OpenGL feedback buffer overflow
warning: gl2ps_renderer::draw: retrying with buffer size: 1.7E+07 B
Because I am creating thousands of images, a lot of time is wasted retrying buffer sizes that are already known to be too small. Is there any way to tell print what buffer size to request for the duration of a run?

Related

How to make Media Foundation H.264 decoder work?

For some reason I'm not able to decode H.264.
The input/output media type configuration went well, as did input/output buffer creation.
I'm manually feeding the decoder with the H.264 demuxed from a live stream; therefore, I use MFVideoFormat_H264_ES as the media subtype. The decoding is very slow and the decoded frames are complete garbage. Other decoders decode the same stream properly.
The weird thing is that once ProcessInput() returns MF_E_NOTACCEPTING, the following ProcessOutput() returns MF_E_TRANSFORM_NEED_MORE_INPUT. According to MSDN, this should never happen.
Can anybody provide some concrete info on how to do it? (assuming that MF H.264 is functional, which I seriously doubt).
I'm willing to provide extra information, but I don't know what somebody might need in order to help.
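In outline, the feed/drain pattern looks like this (a minimal sketch, not my exact code: stream IDs assumed to be 0, error handling trimmed, and the caller owns the output buffer):

#include <mftransform.h>
#include <mferror.h>

// Feed one sample; if the decoder is full, drain decoded frames until it
// reports MF_E_TRANSFORM_NEED_MORE_INPUT, then feed the sample again.
HRESULT FeedAndDrain(IMFTransform *pDecoder, IMFSample *pInSample,
                     MFT_OUTPUT_DATA_BUFFER &outBuffer)
{
    HRESULT hr = pDecoder->ProcessInput(0, pInSample, 0);
    if (hr != MF_E_NOTACCEPTING)
        return hr;

    for (;;) {
        DWORD status = 0;
        hr = pDecoder->ProcessOutput(0, 1, &outBuffer, &status);
        if (hr == MF_E_TRANSFORM_NEED_MORE_INPUT)
            break;                 // drained; ProcessInput should succeed now
        if (FAILED(hr))
            return hr;
        // ... consume outBuffer.pSample (one decoded frame) ...
    }
    return pDecoder->ProcessInput(0, pInSample, 0);
}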
Edit:
When exactly should I reset the number of bytes in the input buffer to zero?
By the way, I'm resetting the output buffer whenever ProcessOutput() delivers something (garbage).
Edit2:
Without resetting the current buffer size of the input buffer to 0, I managed to get some semi-valid output. By semi-valid I mean that on every successful ProcessOutput() I receive a YUV image in which the current image contains a few more decoded macroblocks than the previous frame; the rest of the frame is black. Because I do not reset the size, this stops after a while. So I guess there is a problem with how I reset the buffer size, and I guess I should get some notification when the whole frame is done (or not).
Edit3:
While creating the input buffer, GetInputStreamInfo() returns 4096 as the input buffer size, with alignment 0. However, 4 kB is not enough. Increasing it to 4 MB lets the decoder decompress the frame fragment by fragment. I still have to figure out whether there is a way to tell when the entire frame is decoded.
When creating the input buffer, GetInputStreamInfo() returns 4096 as the buffer size, which is too small.
Setting the input buffer to 4 MB solved the problem. The buffer can probably be smaller; I still have to test that.
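A minimal sketch of the enlarged allocation (illustrative names; the 4 MB figure is the one that worked, per the edits above):

#include <mfapi.h>
#include <string.h>

// Allocate a 4 MB buffer instead of the 4096 bytes GetInputStreamInfo()
// suggests, copy one demuxed H.264 chunk in, and wrap it in a sample.
HRESULT MakeInputSample(const BYTE *data, DWORD size, IMFSample **ppSample)
{
    IMFMediaBuffer *pBuffer = NULL;
    HRESULT hr = MFCreateMemoryBuffer(4 * 1024 * 1024, &pBuffer);
    if (FAILED(hr)) return hr;

    BYTE *dst = NULL;
    hr = pBuffer->Lock(&dst, NULL, NULL);
    if (SUCCEEDED(hr)) {
        memcpy(dst, data, size);
        pBuffer->Unlock();
        hr = pBuffer->SetCurrentLength(size); // must match the bytes written
    }
    if (SUCCEEDED(hr))
        hr = MFCreateSample(ppSample);
    if (SUCCEEDED(hr))
        hr = (*ppSample)->AddBuffer(pBuffer);

    pBuffer->Release();
    return hr;
}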

Chrome heap snapshots statistics, what does the grey color represent?

What does the grey color represent in the above chart?
Unallocated Memory
Total Memory: 20,526 KB
Total Allocated Memory: 1925 + 2939 + 2918 + 494 + 840 = 9,116 KB
Total Unallocated Memory: 20,526 - 9,116 = 11,410 KB
11410 / 20526 ≈ 0.56, i.e. about 56% of the chart, which is the area shaded on the graph.
Total Unallocated Memory / Total Memory = Shaded Area
Doughnut chart showing allocated memory vs unallocated memory
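To double-check the arithmetic, a throwaway snippet (segment values read off the chart above):

#include <cstdio>

int main() {
    const double allocatedKB   = 1925 + 2939 + 2918 + 494 + 840; // 9116
    const double totalKB       = 20526;
    const double unallocatedKB = totalKB - allocatedKB;          // 11410
    std::printf("unallocated fraction = %.2f\n",
                unallocatedKB / totalKB);                        // ~0.56
    return 0;
}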
Total value
In a Heap Snapshot Statistics pie chart, TOTAL is the total heap memory, distributed between JavaScript objects and associated DOM nodes. (The heap is memory set aside for dynamic allocation.)
According to a Chrome Memory Tab article:
Memory leaks occur when a website is using more memory than necessary. Severe memory leaks can even make sites unusable. Since the JavaScript memory leak detector is a part of the Google Chrome browser, you should not hesitate to select a profiling type and analyze your website's memory usage.
A Google Chrome heap snapshot will reveal memory distribution between JavaScript objects and associated DOM nodes. This feature is useful because you will be able to compare different snapshots and find memory leaks.
Total is the total of reachable JavaScript objects, i.e. the memory currently occupied by the created objects, filled with their data, for this heap at that time.
In your image you will find 71955 kB, which is the same value as Total.
The values before Total are the sizes of the data, in bytes, at the initiation point, before the objects are filled with data.
For more definitions, check the Chrome memory analysis documentation.
The grey color represents objects protected from garbage collection.
It is the difference between shallow and retained size (you can see these in the Summary view); in other words, the difference between the data itself and the memory allocated to keep that data available.
Longer description:
https://developers.google.com/web/tools/chrome-devtools/memory-problems/memory-101
Shorter description:
Retained size of an object is its shallow size plus the shallow sizes of the objects that are accessible, directly or indirectly, only from this object. In other words, the retained size represents the amount of memory that will be freed by the garbage collector when this object is collected.
https://www.yourkit.com/docs/java/help/sizes.jsp
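As a toy illustration of that definition (made-up sizes, not Chrome's actual accounting):

// Object graph: root1 -> A -> B, A -> C, root2 -> C.
// shallow(A) = 16, shallow(B) = 32, shallow(C) = 64 (invented numbers).
//
// retained(A) = shallow(A) + shallow(B) = 48:
//   B is reachable only through A, so collecting A also frees B.
//   C is not counted, because it stays reachable via root2.
struct B { char payload[32]; };
struct C { char payload[64]; };
struct A { B *b; C *c; };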

Maximum bitstream size for H.264

Although I am quite familiar with H.264 encoding, I have come to a point where I need advice from more experienced people. I'm performing hardware-accelerated H.264 encoding using Intel Quick Sync and NVIDIA NVENC in a unified pipeline. The issue that troubles me is the bitstream output buffer size. Intel Quick Sync provides a way to query the maximum bitstream size from the encoder, while NVIDIA NVENC does not have such a feature (or at least I haven't found it; pointers are welcome). In their tutorial they state:
NVIDIA recommends setting the VBV buffer size equal to single frame size. This is very helpful in low latency applications, where network bandwidth is a concern. Single frame VBV allows users to enable capped frame size encoding. In single frame VBV, VBV buffer size must be set to maximum frame size which is equal to channel bitrate divided by frame rate. With this setting, every frame can be sent to client immediately upon encoding and the decoder can also decode without any buffering.
For example, if you have a channel bitrate of B bits/sec and you are encoding at N fps, the following settings are recommended to enable single frame VBV buffer size.
uint32_t maxFrameSize = B / N;
NV_ENC_RC_PARAMS::vbvBufferSize = maxFrameSize;
NV_ENC_RC_PARAMS::vbvInitialDelay = maxFrameSize;
NV_ENC_RC_PARAMS::maxBitRate = NV_ENC_RC_PARAMS::vbvBufferSize * N; // where N is the encoding frame rate.
NV_ENC_RC_PARAMS::averageBitRate = NV_ENC_RC_PARAMS::vbvBufferSize * N; // where N is the encoding frame rate.
NV_ENC_RC_PARAMS::rateControlMode = NV_ENC_PARAMS_RC_TWOPASS_CBR;
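To make that concrete (illustrative numbers of my own, not from the tutorial):

#include <cstdint>

// 4 Mbit/s channel at 30 fps -> single-frame VBV of ~133 kbit per frame.
const uint32_t B = 4000000;          // channel bitrate, bits/sec (example)
const uint32_t N = 30;               // encoding frame rate (example)
const uint32_t maxFrameSize = B / N; // 133333 bits, roughly 16.7 kB per frame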
I am allocating a bitstream buffer pool for a large number of encoding sessions, so sizing each buffer from the network bandwidth (which in my case is not the bottleneck) leaves a lot of unused overhead per buffer and wastes memory.
So the general question is: is there any way to determine the bitstream buffer size needed for H.264, assuming there is no frame buffering and each frame should generate NAL units? Can I assume that it will never be larger than the input NV12 buffer? That seems unreliable, since the first frame may carry many NAL units (SPS/PPS/AUD/SEI), and I am not sure that their size plus the IDR frame is not greater than the NV12 buffer size. Does the standard give any pointers on this, or is it totally encoder dependent?
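For reference, the sizing heuristic I am currently considering (only a heuristic; the headroom constant is my own assumption, not something the standard prescribes):

#include <cstddef>
#include <cstdint>

// Conservative per-frame bitstream buffer: the uncompressed NV12 frame
// (width*height luma + width*height/2 chroma) plus headroom for non-VCL
// NAL units (SPS/PPS/AUD/SEI) and start codes. The real cap for a
// conformant stream would come from the level's CPB limits, not from here.
size_t BitstreamBufferSize(uint32_t width, uint32_t height)
{
    const size_t nv12Size = (size_t)width * height * 3 / 2;
    const size_t headroom = 64 * 1024; // assumed slack for stream headers
    return nv12Size + headroom;
}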

How to change the underlying buffer size of BufferedOutputStream used by logback's FileAppender?

We are using logback as our logging framework. We noticed that the FileAppender uses a ResilientFileOutputStream, which is backed by a BufferedOutputStream. We are wondering if there is a way to configure the buffer size of this BufferedOutputStream instance so that we can tune logback's performance.
Thanks
As I remember, BufferedOutputStream uses a buffer size of 8192 by default.
I remember a performance examination paper showing that 8192 was the most performant size; it does not make sense to raise it beyond 8192.
This is an interesting bit of information. Looking at the source code, the buffer size is, as you suggest, 8192 by default (http://docs.oracle.com/javase/6/docs/api/java/io/BufferedOutputStream.html). In Java 4, the Javadoc indicated that it was 512; that information has disappeared from the Javadoc in Java 6 and 7.

Can a too high MySQL innodb buffer pool size cause high load by kernel swap daemon (kswapd0)?

What are the best practices for analyzing and avoiding high load caused by the kernel swap daemon? Is it directly affected by MySQL configuration, such as the buffer pool size?
In a stable Linux system the swap file should barely be used at all; as soon as it is, your system will slow to a crawl. It exists for three reasons: overcommit accounting (which no longer applies these days), swapping out unused code segments to disk to make more room for disk buffers, and giving you more warning when you're running out of memory before the OOM killer starts terminating your applications.
MySQL bypasses the kernel's built-in disk buffering for various reasons. When it starts up it allocates the buffer pool and caches pages from the disk there. When the buffer pool is full it will evict some clean pages and write out some dirty pages to make room for more.
If you set the buffer pool to be larger than the amount of RAM you have available, then as RAM fills up the kernel will start swapping pages out to the swap file, and when the buffer pool fills up MySQL will start writing pages out to the database files. This causes thrashing and generally bad performance, as every I/O operation is multiplied by (at minimum) three: the database I/O itself, plus swapping buffer pool pages out and back in.
This is most likely what you're seeing; I'd suggest reducing the size of the buffer pool so that it fits in your free RAM.