Difference between binary and decimal in an MB to bytes converter

On the MB to bytes converter site, when I try to do the conversion, I see the two answers below. What is the difference?
1 MB = 1000000 Bytes (in decimal)
1 MB = 1048576 Bytes (in binary)
https://www.gbmb.org/mb-to-bytes

1048576 = 1024 x 1024
The first one is the more human-friendly decimal calculation. It is usually seen on product packaging showing the capacity of SSD or USB storage in the consumer electronics field.
1 MB = 1000 kB = 1000 x 1000 B
The latter one is used in computer science and in real circuits, which work with binary calculation (2^n).
1 MB = 1024 kB = 1024 x 1024 B
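As a quick check, here is a minimal Python sketch of both conversions (the function names are mine, not from the site):
def mb_to_bytes_decimal(mb):
    # SI / decimal convention: 1 MB = 1000 * 1000 bytes
    return mb * 1000 * 1000

def mb_to_bytes_binary(mb):
    # binary convention (strictly speaking, 1 MiB): 1024 * 1024 bytes
    return mb * 1024 * 1024

print(mb_to_bytes_decimal(1))  # 1000000
print(mb_to_bytes_binary(1))   # 1048576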

Related

Understanding a WAV file exported from a DAW

I have generated a tone in Audacity at 440 Hz with amplitude 1 for 1 sec like this:
I understand that this is going to create 440 peaks in 1 sec with amplitude 1.
Here I see that it is a 32-bit file and that 44100 Hz is the sample rate, which means there are 44100 samples per second. The amplitude is 1, which is as expected because that is what I chose.
What I don't understand is: what is the unit of this amplitude? When right-clicked it shows linear (-1 to +1).
There is an option to select dB, which shows (0 to -60 to 0), and I don't understand how this is converted.
Now I use this WAV file in Python with scipy to read the WAV and get values of time and amplitude.
How do I match, or get the relation between, what I generated and what I see when I read the WAV file?
The peak amplitude is 32767.987724003342 and the frequency is 439.99002267573695.
The code I have used in Python is
from scipy.io import wavfile
import numpy as np

wavFileName = "440Hz.wav"
sample_rate, sample_data = wavfile.read(wavFileName)
print("Sample Rate or Sampling Frequency is", sample_rate, "Hz")

# number of dimensions of the data array: 1 = mono, 2 = stereo
l_audio = len(sample_data.shape)
print("Channels", l_audio, "Audio data shape", sample_data.shape, "l_audio", l_audio)
if l_audio == 2:
    # mix stereo down to mono by averaging the two channels
    sample_data = sample_data.sum(axis=1) / 2

N = sample_data.shape[0]
length = N / sample_rate
print("Duration of audio wav file in secs", length, "Number of Samples chosen", sample_data.shape[0])
time = np.linspace(0, length, sample_data.shape[0])
sampling_interval = time[1] - time[0]
Notice in Audacity, when you created the one second of audio with an amplitude choice of 1.0, that right before saving the file it says signed 16-bit integer. So an amplitude of -1 to +1 means the WAV file in PCM format stores your raw audio as signed integers varying from their maximum negative to their maximum positive value. Since 2^16 is 65536, the signed 16-bit integer range is -32768 to 32767, in other words from -2^15 to (+2^15 - 1). To plot it more clearly, I suggest you choose a time period much shorter than one second, say 0.1 seconds. Once you're OK with that, boost it back up to a full second, which is hard to visualize on a plot due to the 44100 samples.
import os
import scipy.io
import scipy.io.wavfile
import numpy as np
import matplotlib.pyplot as plt
myAudioFilename = '/home/olof/sine_wave_440_Hz.wav'
samplerate, audio_buffer = scipy.io.wavfile.read(myAudioFilename)
duration = len(audio_buffer)/samplerate
time = np.arange(0,duration,1/samplerate) #time vector
plt.plot(time,audio_buffer)
plt.xlabel('Time [s]')
plt.ylabel('Amplitude')
plt.title(myAudioFilename)
plt.show()
Here is 0.1 seconds of 440 Hz using signed 16-bit samples. Notice that the Y-axis amplitude range matches the above-mentioned minimum-to-maximum signed integer range.
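To relate the stored 16-bit integers back to Audacity's -1 to +1 scale, a minimal sketch (assuming the same 440Hz.wav file from the question) is to divide the samples by 2^15:
import numpy as np
from scipy.io import wavfile

# assuming the 16-bit PCM file from the question; dividing by 2**15 maps the
# signed-integer range -32768..32767 back onto roughly -1.0..+1.0
sample_rate, samples = wavfile.read("440Hz.wav")
normalized = samples.astype(np.float32) / 32768.0
print("int16 peak:", samples.max())          # close to +32767
print("normalized peak:", normalized.max())  # close to +1.0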

How many bits are required for binary representation of 64G?

So we are doing an assignment dealing with hexadecimal, binary, and decimal conversion, and I got this question. I have no idea where the G is coming from; I know hexadecimal does not include a G. Is this a trick question with no answer, or does the G stand for something that I don't know?
Here is the whole question.
5- How many bits are required for binary representation of 64G? Use the shortest way to find the answer. Explain.
If I understand the question correctly, the G stands for Giga; you can assume 64 GB or 64 Gb, that is 64 gigabytes or 64 gigabits respectively, meaning that the question is missing a B or b.
Let's just review one thing first before I give a complete answer:
1 byte = 8 bits
1024 bytes = 1 Kilobyte (2^10 = 1024)
1024 Kilobyte = 1 Megabyte
1024 Megabyte = 1 Gigabyte
If it is bits then the calculation should be something like:
64 Gigabits = 64 * 1024 * 1024 * 1024 = 68719476736 bits
If it is Bytes then:
64 Gigabytes = 64 * 1024 * 1024 * 1024 * 8 = 549755813888 bits
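A quick sketch to verify the arithmetic; note that 64 x 2^30 = 2^6 x 2^30 = 2^36, which is the kind of "shortest way" shortcut the question hints at:
GIGA = 1024 ** 3                   # 2**30

bits_if_gigabits = 64 * GIGA       # 64 Gb expressed in bits
bits_if_gigabytes = 64 * GIGA * 8  # 64 GB expressed in bits

print(bits_if_gigabits)   # 68719476736  (= 2**36)
print(bits_if_gigabytes)  # 549755813888 (= 2**39)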

Why doesn't Google calculate data quantity conversions using binary, instead of just moving the decimal point left/right?

I already understand the fundamentals behind why the two calculations are different. I just want to know how I can get Google to give me the same binary conversion result that Bing does, because I don't feel like using Bing just to convert data quantities.
Use MiB and KiB when you want the 1024 version, as explained in the Kilobyte Wikipedia entry: "In the International System of Quantities, the kilobyte (symbol kB) is 1000 bytes, while the kibibyte (symbol KiB) is 1024 bytes."
For example, search for: 1 MiB to KiB

Does 1 KB (kilobyte) really equal 1024 bytes?

Until now I believed that 1024 bytes equals 1 KB (kilobyte), but I was reading on the internet about the decimal and binary systems.
So is 1024 bytes = 1 KB actually the correct definition, or is there simply general confusion?
What you are seeing is a marketing stunt.
Since non-technical people don't know the difference between the metric meg, gig, etc. and the binary meg, gig, etc., marketers for storage use the metric calculation, thus 1000 bytes == 1 kilobyte.
This can cause issues for developers or highly technical people, so you get the idea of a binary meg, gig, etc., which is designated with a "bi" instead of the standard combination (e.g. mebibyte vs. megabyte, or gibibyte vs. gigabyte).
There are two ways to represent big numbers: You could either display them in multiples of 1000 (base 10) or 1024 (base 2). If you divide by 1000, you probably use the SI prefix names, if you divide by 1024, you probably use the IEC prefix names. The problem starts with dividing by 1024. Many applications use the SI prefix names for it and some use the IEC prefix names. But it is important how it is written:
Using IEC standard:
1 KiB = 1,024 bytes (Note: big K)
1 MiB = 1,024 KiB = 1,048,576 bytes
Using SI standard:
1 kB = 1,000 bytes (Note: small k)
1 MB = 1,000 kB = 1,000,000 bytes
Source: Ubuntu units policy: https://wiki.ubuntu.com/UnitsPolicy
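As an illustration, here is a small Python sketch that formats the same byte count with either prefix family (the helper is mine, not part of the Ubuntu policy):
SI_UNITS = ["B", "kB", "MB", "GB", "TB"]
IEC_UNITS = ["B", "KiB", "MiB", "GiB", "TiB"]

def human_size(num_bytes, base=1000, units=SI_UNITS):
    # repeatedly divide by the base until the value fits under one unit step
    value = float(num_bytes)
    for unit in units:
        if value < base or unit == units[-1]:
            return f"{value:.2f} {unit}"
        value /= base

print(human_size(1048576))                              # 1.05 MB  (SI)
print(human_size(1048576, base=1024, units=IEC_UNITS))  # 1.00 MiB (IEC)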
In the normal world, most things go by powers of 10. This includes electricity, for example.
But in the computer world, it is about half binary. For example, when they sell a hard drive, they sell it by powers of 10, so if it is a 1 KB drive, then it is 1000 B. But when the computer reads it, the OS usually reads by 1024. This is why, when you read the size of the space available on a drive, it reads much less than what was advertised. A 500 GB drive will read as only about 466 GB, because the computer is reading the drive by the binary 1024 version, not the power of 10 that it was sold and advertised by. The same goes for flash drives. RAM, however, is both sold and read by the computer by the binary 1024 version.
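For example, the 500 GB figure works out roughly like this:
advertised_bytes = 500 * 1000**3           # sold using decimal gigabytes
reported_gib = advertised_bytes / 1024**3  # the OS divides by 1024**3
print(round(reported_gib, 2))              # about 465.66, shown as ~466 GB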
One thing to note: it is "B", not "b". There are 8 bits "b" in a byte "B". The reason I bring this up is that when you get internet service, they usually advertise the speed in bits, not bytes, while the download box on the computer reports the speed in bytes. Say you have a 50 Mb internet connection; it is actually a 6.25 MB connection in the download speed box, because you have to divide the 50 by 8, since there are 8 bits in a byte. That is how the computer reads it. Another marketing strategy, too: after all, 50 Mb sounds much faster than 6.25 MB. Other than speeds through a network, most things are read in bytes "B". Some people do not realize that there is a difference between "B" and "b".
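And the bits-versus-bytes point in numbers:
advertised_megabits_per_sec = 50
megabytes_per_sec = advertised_megabits_per_sec / 8  # 8 bits per byte
print(megabytes_per_sec)  # 6.25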
Quite simple...
The word 'byte' is a computing term for which the letter 'B' is used as an abbreviation.
It must follow then that any reference to bytes, e.g. KB, MB, etc., must be based on the well-known and widely accepted 1024 base.
Therefore 1 KB must equal 1024 bytes, 1 MB must equal 1048576 bytes (1024 x 1024), etc.
Any non-computing reference to kilo/mega etc. is based on the decimal 1000 base, e.g. 1 kW or 1 kilowatt, which is 1000 watts.

MySQL char & varchar character sets & storage sizes

Wondering how much actual storage space will be taken up by these two datatypes, as the MySQL documentation is slightly unclear on the matter.
CHAR(M): M × w bytes, 0 <= M <= 255, where w is the number of bytes required for the maximum-length character in the character set.
VARCHAR(M), VARBINARY(M): L + 1 bytes if column values require 0 – 255 bytes, L + 2 bytes if values may require more than 255 bytes.
This seems to imply to me that, given a utf8-encoded database, a CHAR will always take up 32 bits per character, whilst a VARCHAR will take between 8 and 32 depending on the actual byte length of the characters stored. Is that correct? Or does a VARCHAR imply an 8-bit character width, and storing multi-octet UTF8 characters actually consumes multiple 'characters' from the VARCHAR? Or does the VARCHAR also always store 32 bits per character? So many possibilities.
Not something I've ever had to worry this much about before, but I'm starting to hit in-memory temp table size limits and I don't necessarily want to have to increase MySQL's available pool (for the second time).
CHAR and VARCHAR both count characters. Both of them count the maximum storage that they might require given the character encoding and length. For ASCII, that's 1 byte per character. For UTF-8, that's 3 bytes per character (not 4 as you'd expect, because MySQL's utf8 character set is limited to 3-byte sequences and doesn't support any Unicode characters which would require 4 bytes in UTF-8). So far, CHAR and VARCHAR are the same.
Now, CHAR just goes ahead and reserves this amount of storage.
VARCHAR instead allocates 1 or 2 bytes for the length, depending on whether this maximum storage is < 256 or ≥ 256 bytes. The actual amount of space occupied by the entry is these one or two bytes, plus the amount of space actually occupied by the string.
Interestingly, this makes 85 a magic number for UTF-8 VARCHAR:
VARCHAR(85) uses 1 byte for the length because the maximum possible length of 85 UTF-8 characters is 3 × 85 = 255.
VARCHAR(86) uses 2 bytes for the length because the maximum possible length of 86 UTF-8 characters is 3 × 86 = 258.
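A rough sketch of the maximum storage sizes described above, assuming MySQL's 3-byte utf8 character set (the helpers are mine, not a MySQL API):
def char_max_bytes(m, w=3):
    # CHAR(M) always reserves M * w bytes
    return m * w

def varchar_max_bytes(m, w=3):
    # VARCHAR(M): up to M * w bytes of payload, plus a 1- or 2-byte length prefix
    payload = m * w
    prefix = 1 if payload < 256 else 2
    return payload + prefix

print(char_max_bytes(85))     # 255
print(varchar_max_bytes(85))  # 256  (255-byte payload + 1-byte length)
print(varchar_max_bytes(86))  # 260  (258-byte payload + 2-byte length)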