Unit Conversions! GHz - ns - MHz - cycles - units-of-measurement

I am preparing for a units quiz and there are two kinds of conversions that have me stumped.
Type one:
What is length (in ns) of one cycle on a XXX computer?
- In this case, XXX can be some value in MHz or GHz, chosen at random. I am having trouble converting the cycle times. Example:
What is length (in ns) of one cycle on a 50 MegaHertz (MHz) computer?
The second type of conversion I have trouble with:
If the average instruction on a XXX computer requires ZZ cycles, how long (in ns) does the average instruction take to execute?
- Like the previous case, the XXX will either be in MHz or GHz. For example:
If the average instruction on a 2.0 GigaHertz (GHz) computer requires 2.0 cycles, how long (in ns) does the average instruction take to execute?
I don't understand what I am doing wrong in these conversions but I keep getting them wrong. Any help would be great!

I hope I have my math correct; I'll give it a try.
One Hertz is defined as one cycle per second, so a 1 Hz computer has a 10^9 ns cycle length (because nano is 10^-9).
50 Mega = 50 * 10^6, so 50MHz yields a (10^9 ns / (50 * 10^6)) = 20 ns cycle length.
2 Giga = 2 * 10^9, so 2GHz yields a (10^9 ns / (2 * 10^9)) = 0.5 ns cycle length. Two cycles here take 1 ns.

The unit for frequency is Hz which is the same as 1/s or s^-1. To convert from frequency to length (really time) you have to compute the reciprocal value: length = 1/frequency.
What is length (in ns) of one cycle on a 50 MegaHertz (MHz) computer?
1/(50*10^6 Hz) = 2*10^-8 s = 20*10^-9 s = 20 ns
If the average instruction on a 2.0 GigaHertz (GHz) computer requires 2.0 cycles, how long (in ns) does the average instruction take to execute?
One cycle: 1/(2*10^9 Hz) = 0.5*10^-9 s = 0.5 ns
Two cycles: 1 ns
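If you want to sanity-check these conversions quickly, here is a minimal Python sketch of the same reciprocal calculation (the helper name cycle_time_ns is just for illustration):

def cycle_time_ns(frequency_hz):
    # The period is the reciprocal of the frequency; multiply by 1e9 to go from seconds to ns.
    return 1e9 / frequency_hz

print(cycle_time_ns(50e6))       # 20.0 ns for a 50 MHz clock
print(cycle_time_ns(2e9))        # 0.5 ns for a 2 GHz clock
print(2 * cycle_time_ns(2e9))    # 1.0 ns for two cycles at 2 GHz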

Related

Understanding a WAV file exported from a DAW

I have generated a tone in Audacity at 440 Hz with amplitude 1 for 1 second.
I understand that this is going to create 440 peaks in 1 second with an amplitude of 1.
Here I see that it is a 32-bit file and that 44100 Hz is the sample rate, which means there are 44100 samples per second. The amplitude is 1, which is as expected because that is what I chose.
What I don't understand is: what is the unit of this amplitude? When right-clicked it shows linear (-1 to +1).
There is an option to select dB, which shows (0 to -60 to 0); I don't understand how this is converted!
Now I use this WAV file in Python with scipy to read the WAV and get values of time and amplitude.
How do I match, or get the relation between, what I generated vs. what I see when I read the WAV file?
The peak amplitude is 32767.987724003342 and the frequency is 439.99002267573695.
The code I have used in Python is:
from scipy.io import wavfile
import numpy as np

wavFileName = "440Hz.wav"
sample_rate, sample_data = wavfile.read(wavFileName)
print("Sample Rate or Sampling Frequency is", sample_rate, "Hz")
l_audio = len(sample_data.shape)
print("Channels", l_audio, "Audio data shape", sample_data.shape, "l_audio", l_audio)
if l_audio == 2:
    # Stereo: average the two channels into one.
    sample_data = sample_data.sum(axis=1) / 2
N = sample_data.shape[0]
length = N / sample_rate
print("Duration of audio wav file in secs", length, "Number of Samples chosen", sample_data.shape[0])
time = np.linspace(0, length, sample_data.shape[0])
sampling_interval = time[1] - time[0]
Notice that in Audacity, when you created the one second of audio with an amplitude choice of 1.0, right before saving the file it says signed 16-bit integer. So an amplitude from -1 to +1 means the WAV file in PCM format stores your raw audio as signed integers varying from the maximum negative to the maximum positive value. Since 2^16 is 65536, the signed 16-bit integer range is -32768 to 32767, in other words from -2^15 to (+2^15 - 1). To get a better plot I suggest you choose a time period much shorter than one second, say 0.1 seconds; once you're OK with that, boost it back up to a full one second, which is hard to visualize on a plot due to the 44100 samples.
import os
import scipy.io
import scipy.io.wavfile
import numpy as np
import matplotlib.pyplot as plt
myAudioFilename = '/home/olof/sine_wave_440_Hz.wav'
samplerate, audio_buffer = scipy.io.wavfile.read(myAudioFilename)
duration = len(audio_buffer)/samplerate
time = np.arange(0,duration,1/samplerate) #time vector
plt.plot(time,audio_buffer)
plt.xlabel('Time [s]')
plt.ylabel('Amplitude')
plt.title(myAudioFilename)
plt.show()
Here is 0.1 seconds of 440 Hz using signed 16-bit; notice that the Y-axis amplitude range matches the above-mentioned min-to-max signed integer value range.
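To tie the Audacity linear -1 to +1 scale back to the integers scipy returns, here is a minimal sketch, assuming 16-bit PCM and taking digital full scale (amplitude 1.0) as the 0 dB reference:

import numpy as np

int16_peak = 32767.987724003342       # peak value reported when reading the WAV
normalized = int16_peak / 32768.0     # back on Audacity's linear -1..+1 scale
print(normalized)                     # ~1.0, matching the chosen amplitude

# A dB scale like Audacity's is relative to full scale: 0 dB at amplitude 1.0,
# more negative for quieter samples (e.g. amplitude 0.001 is about -60 dB).
print(20 * np.log10(abs(normalized))) # ~0 dB for a full-scale peak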

Need a formula to get total LUN size using lunSizeLow and lunSizeHigh SNMP objects

I have 2 SNMP Objects/OIDs. Below are the details:
Object1:
Name: lunSizeLow
OID: 1.3.6.1.4.1.43906.1.4.3.2.3.1.9
Description: `LUN` size in bytes - low order bytes
Object2:
Name: lunSizeHigh
OID: 1.3.6.1.4.1.43906.1.4.3.2.3.1.10
Description: `LUN` size in bytes - high order bytes
My requirement:
I want to monitor the LUN size through some script, but I didn't find any SNMP object that gives the total LUN size directly. I found 2 separate objects (lunSizeLow and lunSizeHigh), so I need a formula to get the total LUN size from these two low-order and high-order SNMP objects (lunSizeLow and lunSizeHigh).
I have gone through many articles on the internet and found a couple of formulas on community.hpe.com.
But I'm not sure which one is correct.
Formula 1:
Max unsigned number that can be stored in 32bits counter is 4294967295.
Total size would be: LOW_ORDER_BYTES + HIGH_ORDER_BYTES * 4294967296
Formula 2:
Total size in GB is LOW_ORDER_BYTES / 1073741824 + HIGH_ORDER_BYTES * 4
Could anyone help me get the correct formula?
Most languages will have the bit-shift operator, allowing you to do something similar to the below (pseudo-Java):
long myBigInteger = lunSizeHigh;
myBigInteger = myBigInteger << 32;  // Shifts the high bits 32 positions to the left, into the high half of the long
myBigInteger = myBigInteger + lunSizeLow;
This has two advantages over multiplying:
Bit shifting is often faster than multiplication, even though most compilers would optimize that particular multiplication into a bit shift anyway.
It is easier to read the code and understand why this would provide the correct answer, given the description from the MIB. Magic numbers should be avoided where possible.
That aside, putting some numbers into the Windows Calculator (using Programmer Mode) and trying formula 1, we can see that it works.
Now, you don't specify what language or environment you're working in, and in some languages you won't have any number type that supports the size of numbers you want to manipulate. (Same reason that this number had to be split into two counters to begin with - it's larger than the largest number representation available on some (primitive) platforms.) If you want to do it using multiplication, you'll have to make sure your implementation language can do better.
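As a concrete sketch of the same combination in Python (which has arbitrary-precision integers, so overflow is not a concern; the two counter values are made-up examples):

lunSizeHigh = 2          # high-order 32 bits, e.g. as fetched from the ...1.10 OID
lunSizeLow = 1073741824  # low-order 32 bits, e.g. as fetched from the ...1.9 OID

# Equivalent to formula 1: the high word occupies bits 32..63.
total_bytes = (lunSizeHigh << 32) + lunSizeLow
total_gib = total_bytes / 2**30

print(total_bytes)  # 9663676416
print(total_gib)    # 9.0, matching formula 2: low/1073741824 + high*4 = 1 + 8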

MIPS lw latency in pipelining

I'm given stages of a clock cycle in a processor.
IF      ID      EX      MEM     WB
250ps   350ps   150ps   300ps   200ps
Now I'm being asked what the total latency of a LW instruction is in the pipelined version.
Here's what I know:
The clock cycle time in the pipelined version is 350ps because that's the longest stage.
The clock cycle time in the non-pipelined version is 1250ps because that's the duration of all the stages added together.
But how does the "latency of a LW instruction" relate to those times?
OK, I'm pretty sure I figured out the answer, which is that you take the longest stage duration, in this case 350ps, and you multiply it by the number of stages, in this case 5.
So
350 * 5 = 1750ps
Yes, you are correct with your result. Here is the formula:
(Number of stages) * (Longest stage time, in some unit) = Latency (in that unit)
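As a quick sketch of that calculation in Python, using the stage times from the question:

stage_times_ps = {"IF": 250, "ID": 350, "EX": 150, "MEM": 300, "WB": 200}

clock_cycle_ps = max(stage_times_ps.values())                 # 350 ps: the slowest stage sets the clock
pipelined_latency_ps = clock_cycle_ps * len(stage_times_ps)   # 350 * 5 = 1750 ps
non_pipelined_latency_ps = sum(stage_times_ps.values())       # 1250 ps: all stages back to back

print(clock_cycle_ps, pipelined_latency_ps, non_pipelined_latency_ps)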

Faster RCNN (caffe) Joint Learning: Out of Memory with dedicated memory 5376 MB (changed batch sizes and no.of RPN proposals) what else?

I have worked with the alternating optimization MATLAB code before; currently I am trying to get joint learning running. I am able to run the test demo with my Tesla 2070 GPU. For training, I have set all the batch sizes to 1:
__C.TRAIN.IMS_PER_BATCH = 1
__C.TRAIN.BATCH_SIZE = 1
__C.TRAIN.RPN_BATCHSIZE = 1
(updated yaml to 1 as well since it was overridden)
But I still get the error: error == cudaSuccess (2 vs. 0) out of memory.
I have tried to experiment with lowering the number of proposals (the original values are below):
train:
# Number of top scoring boxes to keep before applying NMS to RPN proposals
C.TRAIN.RPN_PRE_NMS_TOP_N = 12000
# Number of top scoring boxes to keep after applying NMS to RPN proposals
C.TRAIN.RPN_POST_NMS_TOP_N = 2000
test:
# Number of top scoring boxes to keep before applying NMS to RPN proposals
C.TEST.RPN_PRE_NMS_TOP_N = 6000
# Number of top scoring boxes to keep after applying NMS to RPN proposals
C.TEST.RPN_POST_NMS_TOP_N = 300
I tried as low as pre: 100, post: 50 as a sanity check.
And I still am not able to run without the out-of-memory problem. What am I missing here? My Tesla has 5376 MB of dedicated memory and I use it only for this (I have a separate GPU for my screen). From what I have read from an author himself, I am positive that 5376 MB should be enough.
Thanks.

cublas one function call produced three executions

I called the cublas_Sgemm_v2 function 10236 times with the first matrix non-transposed and the second transposed. However, in the nvprof results, I saw three items produced from that function call. The (m, n, k) values of the function call are (588, 588, 20).
These are the items listed in the nvprof results:
Time(%) Time Calls Avg Min Max Name
12.32% 494.86ms 10236 48.344us 47.649us 49.888us sgemm_sm35_ldg_nt_128x8x128x16x16
8.64% 346.91ms 10236 33.890us 32.352us 35.488us sgemm_sm35_ldg_nt_64x16x128x8x32
8.11% 325.63ms 10236 31.811us 31.360us 32.512us sgemm_sm35_ldg_nt_128x16x64x16x16
Is this expected, and why is that? Can someone explain what the values in the function names such as sgemm_sm35_ldg_nt_128x8x128x16x16 mean?
I also have other function calls to cublas_Sgemm_v2 with different transpose settings, and there I only see one item per function call.
UPDATE:
As #Marco13 asked, I put more results here:
Time(%) Time Calls Avg Min Max Name
--------------------------------------------------------------------------------
Resulted from 7984 calls with (Trans, NonTrans) with (m, n, k) = (588, 100, 588)
20.84% 548.30ms 7984 68.675us 58.977us 81.474us sgemm_sm35_ldg_tn_32x16x64x8x16
Resulted from 7984 calls with (NonTrans, NonTrans) with (m, n, k) = (588, 100, 588)
12.95% 340.71ms 7984 42.674us 21.856us 64.514us sgemm_sm35_ldg_nn_64x16x64x16x16
All the following resulted from 3992 calls with (NonTrans, Trans) with (m, n, k) = (588, 588, 100)
9.81% 258.15ms 3992 64.666us 61.601us 68.642us sgemm_sm35_ldg_nt_128x8x128x16x16
6.84% 179.90ms 3992 45.064us 40.097us 49.505us sgemm_sm35_ldg_nt_64x16x128x8x32
6.33% 166.51ms 3992 41.709us 38.304us 61.185us sgemm_sm35_ldg_nt_128x16x64x16x16
Another run with 588 changed to 288:
Time(%) Time Calls Avg Min Max Name
--------------------------------------------------------------------------------
Resulted from 7984 calls with (Trans, NonTrans) with (m, n, k) = (288, 100, 288)
22.01% 269.11ms 7984 33.706us 30.273us 39.232us sgemm_sm35_ldg_tn_32x16x64x8x16
Resulted from 7984 calls with (NonTrans, NonTrans) with (m, n, k) = (288, 100, 288)
14.79% 180.78ms 7984 22.642us 18.752us 26.752us sgemm_sm35_ldg_nn_64x16x64x16x16
Resulted from 3992 calls with (NonTrans, Trans) with (m, n, k) = (288, 288, 100)
7.43% 90.886ms 3992 22.766us 19.936us 25.024us sgemm_sm35_ldg_nt_64x16x64x16x16
From the last three lines it looks like certain transposition types can be more efficient than others, and certain matrix sizes are more economical in terms of computation time over matrix size. What is the guideline for ensuring economical computation?
UPDATE 2:
For the case of (m, n, k) = (588, 100, 588) above, I manually transposed the matrix before calling the sgemm function, and then there is only one item in the nvprof result. The time it takes is only a little less than the sum of the two items in the above table, so there is not much performance gain from doing so.
Time(%) Time Calls Avg Min Max Name
--------------------------------------------------------------------------------
31.65% 810.59ms 15968 50.763us 21.505us 72.098us sgemm_sm35_ldg_nn_64x16x64x16x16
Sorry, not an answer - but slightly too long for a comment:
Concerning the edit, about the influence of the "transpose" state: Transposing a matrix might cause an access pattern that is worse in terms of memory coalescing. A quick web search brings up some results about this ( https://devtalk.nvidia.com/default/topic/528450/cuda-programming-and-performance/cublas-related-question/post/3734986/#3734986 ), but with a slightly different setup than yours:
DGEMM performance on a K20c
args: ta=N tb=N m=4096 n=4096 k=4096 alpha=-1 beta=2 lda=4096 ldb=4096 ldc=4096
elapsed = 0.13280010 sec GFLOPS=1034.93
args: ta=T tb=N m=4096 n=4096 k=4096 alpha=-1 beta=2 lda=4096 ldb=4096 ldc=4096
elapsed = 0.13872910 sec GFLOPS=990.7
args: ta=N tb=T m=4096 n=4096 k=4096 alpha=-1 beta=2 lda=4096 ldb=4096 ldc=4096
elapsed = 0.12521601 sec GFLOPS=1097.61
args: ta=T tb=T m=4096 n=4096 k=4096 alpha=-1 beta=2 lda=4096 ldb=4096 ldc=4096
elapsed = 0.13652611 sec GFLOPS=1006.69
In this case, the differences do not seem worth the hassle of changing the matrix storage (e.g. from column-major to row-major, to avoid transposing the matrix), because all patterns seem to run with a similar speed. But your mileage may vary - particularly, the difference in your tests between (t,n) and (n,n) is very large (548ms vs 340ms), which I found quite surprising. If you have the choice to easily switch between various representations of the matrix, a benchmark covering all four cases may be worthwhile.
In any case, regarding your question about the functions that are called there: The CUBLAS code for the sgemm function in CUBLAS 1.1 was already full of unrolled loops and contained 80 (!) versions of the sgemm function for different cases, assembled using a #define-hell. It has to be assumed that this has become even more unreadable in the newer CUBLAS versions, where the newer compute capabilities have to be taken into account - and the function names that you found indicate that this is indeed the case:
sgemm_sm35_ldg_nt_64x16x128x8x32
sm35 : Runs on a device with compute capability 3.5
ldg : ? Non-texture-memory version ? (CUBLAS 1.1 contained functions called sgemm_main_tex_* which worked on texture memory, and functions sgemm_main_gld_* which worked on normal, global memory)
nt : First matrix is Not transposed, second one is Transposed
64x16x128x8x32 - Probably related to tile sizes, maybe shared memory etc...
Still, I think it's surprising that a single call to sgemm causes three of these internal functions to be called. But as mentioned in the comment: I assume that they try to handle the "main" part of the matrix with a specialized, efficient version, and the "border tiles" with one that is capable of doing range checks and/or coping with warps that are not full. (Not very precise, just to be suggestive: a matrix of size 288x288 could be handled by an efficient core for matrices of size 256x256, and two calls for the remaining 32x288 and 288x32 entries.)
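Just to make that rough tiling idea concrete, here is a purely illustrative Python sketch; the tile size of 256 and the split into one core region plus two border strips are assumptions for the example (with the corner glossed over, as above), not what CUBLAS actually does:

def split_into_core_and_borders(rows, cols, tile=256):
    # Hypothetical decomposition: one "core" region that is a multiple of the
    # tile size, plus border strips for the leftover rows and columns.
    core = ((rows // tile) * tile, (cols // tile) * tile)
    borders = [(rows - core[0], cols), (rows, cols - core[1])]
    return core, borders

print(split_into_core_and_borders(288, 288))
# ((256, 256), [(32, 288), (288, 32)])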
But all this is also the reason why I guess there can hardly be a general guideline concerning the matrix sizes: The "best" matrix size in terms of computation time over matrix size will at least depend on
the hardware version (compute capability) of the target system
the transposing-flags
the CUBLAS version
EDIT Concerning the comment: One could imagine that there should be a considerable difference between the transposed and the non-transposed processing. When multiplying two matrices
a00 a01 a02 b00 b01 b02
a10 a11 a12 * b10 b11 b12
a20 a21 a22 b20 b21 b22
Then the first element of the result will be
a00 * b00 + a01 * b10 + a02 * b20
(which simply is the dot product of the first row of a and the first column of b). For this computation one has to read consecutive values from a. But the values that are read from b are not consecutive. Instead, they are "the first value in each row". One could think that this would have a negative impact on memory coalescing. But for sure, the NVIDIA engineers have tried hard to avoid any negative impact here, and the implementation of sgemm in CUBLAS is far, far away from "a parallel version of the naive 3-nested-loops implementation" where this access pattern would have such an obvious drawback.
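As a small illustration of that access-pattern argument, one can look at the strides of a row and of a column in a row-major (C-order) NumPy array; this only shows the memory layout on the host side, not what CUBLAS does internally:

import numpy as np

b = np.arange(9, dtype=np.float32).reshape(3, 3)  # row-major (C-order) storage

print(b[0, :].strides)  # (4,): elements of the first row are contiguous, 4 bytes apart
print(b[:, 0].strides)  # (12,): elements of the first column are a full row (12 bytes) apart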