Scaling delay values in Design Compiler topographical - vlsi

I want to scale the delay values in a TLU+ file to zero. How can this be achieved in Design Compiler topographical mode?

Check "Assessing Design and Constraint Feasibility in Mapped Designs" on User Guide.
You can use:
set_zero_interconnect_delay_mode true

Related

Using the scale transfer function for the Point Gaussian representation in ParaView?

I have run the cyclone case from the OpenFOAM tutorials and want to view it using the built-in paraFoam viewer, which is based on ParaView 5.4.0.
The simulation has a number of particles in the diameter range of [2e-5, 1e-4], and I would like to scale the size of the particles with the diameter array provided with the results.
To do this I select the Point Gaussian representation for the lagrangian fields (kinematiccloud), open the advanced properties, and select 'Scale by data array', after which the diameter array is chosen by default (although it's not possible to change it to another field, which I suspect is a bug). But then all the particles disappear from the view, as can be seen in the following screenshot:
My guess is that I need to choose proper values for the Gaussian radius and for the scale transfer function, but there is no documentation on what they should be set to. I have tried trial and error, but I cannot find any settings with which I can get the particles back and have them render at different sizes.
Can someone enlighten me on how to set the Gaussian radius and scale transfer function properly?
The Point Gaussian representation has just been improved, and its configuration is now automatic. You may want to try the latest release of ParaView.
More info here:
https://blog.kitware.com/major-improvements-on-the-point-gaussian-representation/
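
If you need to drive this from a script in the meantime, something like the following pvpython sketch may help. The property names are taken from Python traces of recent ParaView versions and may differ between releases, and the diameter array name 'd' is an assumption:

from paraview.simple import GetActiveSource, GetDisplayProperties, Render

src = GetActiveSource()                      # the loaded lagrangian cloud
disp = GetDisplayProperties(src)
disp.SetRepresentationType('Point Gaussian')
disp.ScaleByArray = 1                        # the 'Scale by data array' checkbox
disp.SetScaleArray = ['POINTS', 'd']         # 'd' = assumed diameter array name
disp.GaussianRadius = 1e-4                   # try the order of the particle diameters
Render()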

What is the format of the Qpdeltamap used for ROI in NVENC?

I am trying to get started with ROI encoding with the Nvidia Encoder NVENC.
As a first step I am trying to get the Nvidia demos to encode using ROI. I know that the switch -qpDeltaMapFile enables the flag enableExtQPDeltaMap. This allows me to send a file with a QP map that the encoder uses to tweak the values chosen by the rate control algorithm.
However, there is absolutely no documentation on the format of this file. I tried using one value per byte, assuming fixed-size 16x16 macroblocks, but it didn't seem to work as I expected.
Any guidance or references would help a lot.
There was a bug in my code. It actually works almost exactly as I described.
Assume your screen is divided equally into 16x16 blocks; each value in the file is then added to the QP that the rate control algorithm chose for the corresponding block. Each value is a signed integer, so a negative value improves quality while a positive value decreases it. A value of 0 keeps whatever the rate control algorithm decided.
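
For concreteness, here is a hedged Python sketch that writes such a map under that layout: one signed byte per 16x16 block. Raster order, and rounding the frame dimensions up to whole blocks, are my assumptions:

import numpy as np

width, height, mb = 1920, 1080, 16                 # frame size is an example
cols, rows = (width + mb - 1) // mb, (height + mb - 1) // mb

qp_delta = np.zeros((rows, cols), dtype=np.int8)   # one signed byte per block
qp_delta[20:40, 40:80] = -10    # lower QP = better quality in the ROI
qp_delta[:, :20] = 5            # higher QP = fewer bits on the left margin

qp_delta.tofile('qp_delta_map.bin')                # pass via -qpDeltaMapFile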

Motion Vectors and DCT residuals, are they related or independent?

I am working on a novel technique that reuses the motion vectors already encoded in a pre-encoded H264 video.
I need to know how the motion vectors and residuals are related. I need some very specific answers that I can't find anywhere else:
Are the motion vectors forward or backward? That is, does the vector indicate where the current 4x4, 8x8, 8x4, ... block will be in the next frame (forward)? Or is it the opposite: the vector indicates where that block comes from (backward)?
In the case where a block has multiple references (I don't know if that is even possible), how are those references added together? Mean? Weighted?
How is the residual error compensated: per block (4x8, 8x4, etc.), or by ignoring the sub-blocks and just partitioning the image into 8x8 chunks?
My ultimate goal is to determine, from the video feed, the "accuracy" of each motion vector. I can only do that if prediction is backward and the DCT residuals are per block; in that case I can measure the accuracy of the motion vector estimation by measuring the amount of residual error in that block.
Thanks in advance!!
PS: Reading through the 800 pages of H264 is no easy task...
The H264 standard is your friend. Also get the books by Ian Richardson; they're a bit more readable than the standard (but only a bit :)
"Are the motion vectors forward, or backward?" - they are backward. The MV for a block points to where that block came from.
"In the case a block has multiple references (I don't know if that is even possible). How are those references added together?" - it is possible, check out weightb and weightp options for x264. Can have up to two references, the explicit weights are encoded in the stream (I think as deltas from the neighbor weights, so usually zeros - but don't quote me on that; also I think whether weights are used is a flag somewhere, if not used the weights are equal by default)
"How is the residual error being compensated" - depends on the macroblock partitioning mode and transform size. The MVs are for each partition, the residuals are for the transform size tiled into the partition (so if a 16x16 is partitioned into two 16x8 and the transform is 8x8, each partition gets two transforms; if the transform is 4x4 each partition gets (16/4)x(8/4)=8 transforms).
For experiments, you can change encoder settings to turn off B-frames and weighted P-frames, and also restrict the partitioning mode to not partition (ie 16x16 only). This allows much easier way to try different motion vectors :)
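
To make the accuracy idea concrete, here is a hedged Python sketch under exactly those simplified settings: integer-pel backward MVs and one residual per partition. Real H.264 motion compensation uses quarter-pel interpolation and deblocking, so this is illustrative only (and it does no bounds checking on the MV):

import numpy as np

def mv_accuracy(cur, ref, x, y, w, h, mvx, mvy):
    # Backward MV: the (w x h) partition at (x, y) in the current frame
    # "came from" (x + mvx, y + mvy) in the reference frame.
    block = cur[y:y + h, x:x + w].astype(np.int32)
    pred = ref[y + mvy:y + mvy + h, x + mvx:x + mvx + w].astype(np.int32)
    residual = block - pred              # what the transform would have to code
    return float(np.sum(residual ** 2))  # SSE: lower = MV explains block better

# Example: score a 16x8 partition at (32, 16) with MV (-3, 1).
cur = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(mv_accuracy(cur, ref, 32, 16, 16, 8, -3, 1))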

How to detect local maxima and curve windows correctly in semi complex scenarios?

I have a series of data and need to detect peak values in the series within a certain number of readings (window size), excluding a certain level of background "noise." I also need to capture the starting and stopping points of the appreciable curves (i.e., when a curve starts ticking up and when it stops ticking down).
The data are high precision floats.
Here's a quick sketch that captures the most common scenarios that I'm up against visually:
One method I attempted was to pass a window of size X along the curve, going backwards, to detect the peaks. It started off working well, but it missed a lot of conditions I had not initially anticipated. Another method I started to work out was a growing window that would discover the longer-duration curves. Yet another approach was more calculus-based, watching for velocity/gradient aspects. None seemed to hit the sweet spot, probably due to my lack of experience in statistical analysis.
Perhaps I need to use some kind of statistical analysis package to cover my bases rather than writing my own algorithm? Or is there an efficient method for tackling this directly in SQL with some kind of local-max technique? I'm simply not sure how to approach this efficiently. With each method I try, it seems I keep missing various thresholds, detecting too many peak values, or not capturing entire events (reporting a peak data point too early in the reading process).
Ultimately this is implemented in Ruby, so if you could advise on the most efficient and correct way to approach this problem in Ruby, that would be appreciated; however, I'm open to a language-agnostic algorithmic approach as well. Or is there a certain library that would address the various issues I'm up against in this scenario of detecting the maximum peaks?
My idea is simple: after you get your window of interest, you need to find all the peaks in that window. You can do this by just comparing each value with its neighbors; after that you will know where the peaks occur and can decide which is the best peak.
I wrote a simple example in MATLAB to show my idea.
My example works on a wave audio file :-)
waveFile = 'Chick_eco.wav';
[y, fs, nbits] = wavread(waveFile);   % use audioread in newer MATLAB versions
subplot(2,2,1); plot(y); legend('Original signal');

startIndex = 15000;
WindowSize = 100;
endIndex = startIndex + WindowSize - 1;
frame = y(startIndex:endIndex);
nframe = length(frame);

% Find the peaks: a sample is a peak if it is positive, strictly greater
% than its left neighbour, and not smaller than its right neighbour.
peaks = zeros(nframe, 1);
k = 3;
while (k <= nframe - 1)
    y1 = frame(k - 1);
    y2 = frame(k);
    y3 = frame(k + 1);
    if (y2 > 0)
        if (y2 > y1 && y2 >= y3)
            peaks(k) = frame(k);
        end
    end
    k = k + 1;
end

peaks2 = peaks;
peaks2(peaks2 <= 0) = nan;   % hide non-peaks when plotting
subplot(2,2,2); plot(frame); legend('Window of length 100');
subplot(2,2,3); plot(peaks); legend('Where the peaks are');
subplot(2,2,4); plot(frame); legend('Peaks in the window');
hold on; plot(peaks2, '*');

% Print the location and value of each peak
for j = 1:nframe
    if (peaks(j) > 0)
        fprintf('Local=%i\n', j);
        fprintf('Value=%e\n', peaks(j));
    end
end

% Where the global maximum occurs
[maxivalue, maxi] = max(peaks)
You can see all the peaks and where they occur:
Local=37
Value=3.266296e-001
Local=51
Value=4.333496e-002
Local=65
Value=5.049438e-001
Local=80
Value=4.286804e-001
Local=84
Value=3.110046e-001
I'll propose a couple of different ideas. One is to use discrete wavelets; the other is to use the geographer's concept of prominence.
Wavelets: Apply some sort of wavelet decomposition to your data. There are multiple choices, with Daubechies wavelets being the most widely used. You want the low frequency peaks. Zero out the high frequency wavelet elements, reconstruct your data, and look for local extrema.
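For instance, here is a hedged Python sketch of that pipeline using PyWavelets and SciPy; the wavelet, the decomposition level, and the input file name are placeholders to tune:

import numpy as np
import pywt                                # PyWavelets
from scipy.signal import find_peaks

series = np.loadtxt('readings.txt')        # hypothetical input file

# Decompose with Daubechies-4; the level is a placeholder to tune.
coeffs = pywt.wavedec(series, 'db4', level=5)
for i in range(1, len(coeffs)):            # zero all detail (high-frequency) parts
    coeffs[i] = np.zeros_like(coeffs[i])
smooth = pywt.waverec(coeffs, 'db4')[:len(series)]

peak_idx, _ = find_peaks(smooth)           # local maxima of the smoothed curve
print(peak_idx, smooth[peak_idx])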
Prominence: Those noisy peaks and valleys are of key interest to geographers. They want to know exactly which of a mountain's multiple little peaks is tallest, and the exact location of the lowest point in the valley. Find the local minima and maxima in your data set; you should have a sequence of min/max/min/max/.../min. (You might want to add arbitrary end points that are lower than your global minimum.) Consider a min/max/min sequence. Classify each of these triples by the difference between the max and the larger of the two minima. Make a reduced sequence that replaces the smallest of these triples with the smaller of the two minima. Iterate until you get down to a single min/max/min triple. In your example, you want the next layer down: the min/max/min/max/min sequence.
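As a hedged sketch, SciPy's peak finder computes essentially this notion of prominence, so the min/max/min reduction does not have to be hand-rolled (the prominence threshold and the file name are placeholders):

import numpy as np
from scipy.signal import find_peaks

series = np.loadtxt('readings.txt')        # hypothetical input file

# Keep only peaks that stand out by at least the given prominence;
# the threshold is data-dependent and must be tuned.
peaks, props = find_peaks(series, prominence=0.5)
print(peaks)                    # indices of the prominent peaks
print(props['prominences'])     # how far each peak stands out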
Note: I'm going to describe the algorithmic steps as if each pass were distinct. Obviously, in a specific implementation you can combine steps where it makes sense for your application; for the purposes of my explanation, keeping them separate makes the text a little clearer.
I'm going to make some assumptions about your problem:
The windows of interest (the signals that you are looking for) cover a fraction of the entire data space (i.e., it's not one long signal).
The windows have significant scope (i.e., they aren't one pixel wide on your picture).
The windows have a minimum peak of interest (i.e., even if the signal exceeds the background noise, the peak must have an additional signal excess of the background).
The windows will never overlap (i.e., each can be examined as a distinct sub-problem out of context of the rest of the signal).
Given those, you can first look through your data stream for a set of windows of interest. You can do this by making a first pass through the data: moving from left to right, look for noise threshold crossing points. If the signal was below the noise floor and exceeds it on the next sample, that's a candidate starting point for a window (vice versa for the candidate end point).
Now make a pass through your candidate windows: compare the scope and contents of each window with the values defined above. To use your picture as an example, the small peaks on the left of the image barely exceed the noise floor and do so for too short a time. However, the window in the center of the screen clearly has a wide time extent and a significant max value. Keep the windows that meet your minimum criteria, discard those that are trivial.
Now examine your remaining windows in detail (remember, they can be treated individually). The peak is easy to find: pass through the window and keep the local max. With respect to the leading and trailing edges of the signal, you can see in the picture that the window is slightly larger than the span over which the signal actually exceeds the noise floor. In this case, you can use a finite difference approximation to calculate the first derivative of the signal. You know that the leading edge will be somewhat to the left of the window on the chart: look for the point at which the first derivative exceeds a positive noise floor of its own (i.e., where the slope turns upwards sharply). Do the same for the trailing edge (which will always be to the right of the window).
Result: a set of time windows, the leading and trailing edges of each signal, and the peak that occurred in that window.
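
Here is a hedged Python sketch of those passes, assuming the series is a 1-D NumPy array and the noise floor is known. The thresholds and the minimum width are placeholders to tune, and the first-derivative edge refinement from the previous paragraph is left out for brevity:

import numpy as np

def find_events(series, noise_floor, min_width=5):
    # Pass 1: candidate windows = runs of samples above the noise floor.
    above = np.concatenate(([False], series > noise_floor, [False]))
    edges = np.diff(above.astype(int))
    starts = np.where(edges == 1)[0]      # first sample above the floor
    stops = np.where(edges == -1)[0]      # first sample back below it
    events = []
    for s, e in zip(starts, stops):
        # Pass 2: discard windows too short to be interesting.
        if e - s < min_width:
            continue
        # Pass 3: the peak is just the local max inside the window.
        peak = s + int(np.argmax(series[s:e]))
        events.append((s, e, peak))
    return events

series = np.loadtxt('readings.txt')       # hypothetical input file
for start, stop, peak in find_events(series, noise_floor=0.1):
    print(start, stop, peak, series[peak])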
It looks like the definition of a window is the range of x over which y is above the threshold. So use that to determine the size of the window. Within that, locate the largest value, thus finding the peak.
If that fails, then what additional criteria do you have for defining a region of interest? You may need to nail down your implicit assumptions to more than 'that looks like a peak to me'.

Adjust MIDI Note Volume

[I am doing this work in Java, but I think the question is language-agnostic.]
I have a MIDI Note On volume (called "data2," it's 0-127) that I am adjusting with a fader (0 to 127). The "math" I am using is simple:
newData2 = oldData2 * faderVolume / 127;
Zero works perfectly, and 127 does too, but the volumes close to the bottom of the range are way too loud, especially for the louder notes. What might be a better relationship than a linear one (pseudo-code would be great)? I will have to plug the candidates into the code and try them, of course.
I realize that this question depends on the instrument that is playing the Note Ons (a BFD Kit in Ableton Live, which doesn't inform much), but maybe not and perhaps there's a standard way to adjust a Midi Note On volume with a fader.
Your equation is correct; you are scaling the note-on velocity relative to the fader in a linear fashion. A couple of notes...
The parameter you are adjusting is velocity. This does not necessarily mean volume! The two do correlate for most synths (including your drum kit in Ableton), but it might not be as volume-related as you might think.
0-velocity is equivalent to note-off and will never play a sound. I mention this because if the difference between 0 and 1 is significant, it might be that volume isn't affected as much by the velocity parameter as you might think.
Finally, traditional mixer faders follow a logarithmic law. You might experiment with this, but again I think you are barking up the wrong tree with volume.
There is a MIDI message for channel volume that you should use for volume instead: CC 7.
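
If you still want to experiment with a log-style fader on velocity, here is a minimal Python sketch of the idea (your code is Java, but the math carries over; the 40 dB range and treating velocity as a pure gain are assumptions to tune by ear):

def scale_velocity(velocity, fader, range_db=40.0):
    # Map the fader position to a gain in dB: 0 dB at the top of the
    # fader, -range_db at the bottom. 40 dB is a placeholder range.
    if fader == 0:
        return 0                             # bottom of fader = silence
    gain_db = (fader / 127.0 - 1.0) * range_db
    gain = 10.0 ** (gain_db / 20.0)          # dB -> linear amplitude
    return max(1, min(127, round(velocity * gain)))

for fader in (1, 32, 64, 96, 127):
    print(fader, scale_velocity(100, fader))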
As I said in my comment, when working with sound, audio, or any audible technology, rather use doubles or floats (depending on the hardware or API specifications).
You are returning an integer in newData2. Rather convert it to a double or float (for precision), e.g.:
float newData2 = (float)oldData2 * (float)faderVolume / (float)127;
Hope this helps.