Calculating total bytes of memory needed with a macro in MASM - masm32

I have the following problem:
A program requires 2048 bytes (data and executable code). A macro is added that requires 48 bytes of memory and will be used 30 times.
How many bytes will the entire program require?
So, I thought it would be 2048 + 48 + (30 * 48). My rationale is that you need an extra 48 bytes for the declaration of the macro. But the answer turns out to be 2048 + (30 * 48).
I'm wondering why we don't count the declaration of the macro in the total bytes?
Thanks
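For what it's worth, the usual explanation is that a macro definition is only an assembly-time template: the assembler emits the macro's 48 bytes each time the macro is invoked, and the definition itself assembles to zero bytes, giving 2048 + (30 * 48) = 3488 bytes. A rough analogy in C (a sketch, not MASM syntax; the macro name is made up):

#include <stdio.h>

/* Like a MASM macro, this #define is only a template. The definition
   line itself adds zero bytes to the compiled program. */
#define EMIT_BLOCK() do { puts("block expanded here"); } while (0)

int main(void)
{
    /* Each invocation pastes a fresh copy of the body into the
       program, so N uses cost N copies; the definition costs none. */
    EMIT_BLOCK();
    EMIT_BLOCK();
    EMIT_BLOCK();
    return 0;
}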

Related

Reverse Engineering CRC32 in firmware

I have a P-Flash (about 700 KB in size), and in this flash there is a CRC32. I know where it is, and I know the CRC calculation method (polynomial, initial value, final XOR value, input and output reflected). The problem is that only a part of these 700 KB is used to calculate the CRC, and I don't know which part. Is there a way to find out the input data for the calculation?
I have 5 of these 700 KB files. The files are all the same except for 4 bytes that differ, plus the 4 bytes of the CRC.
If you can get the files onto a PC, that would help. You can XOR any two of the files to get a file that is all zeroes except for the 4 differing bytes and the 4 bytes of the CRC. The XOR of two files also cancels out any initial value or final XOR value, as if initial value = 0 and final XOR value = 0. Then check the nearly-all-zero file to see if its CRC matches what you would expect. If it matches, you would know that the CRC covers the 4 non-zero bytes and all the zero bytes that follow them; you still wouldn't know how far before the 4 non-zero bytes the CRC's input starts, but it would at least narrow down the search for what is included in the calculation.
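A minimal sketch of that XOR step in C (the file names are placeholders; it assumes the two dumps are the same length):

#include <stdio.h>

int main(void)
{
    FILE *a = fopen("dump1.bin", "rb");
    FILE *b = fopen("dump2.bin", "rb");
    FILE *out = fopen("xored.bin", "wb");
    if (!a || !b || !out) return 1;

    int ca, cb;
    /* Equal bytes XOR to 0x00, so only the 4 differing bytes and the
       4 CRC bytes survive in the output file. */
    while ((ca = fgetc(a)) != EOF && (cb = fgetc(b)) != EOF)
        fputc(ca ^ cb, out);

    fclose(a); fclose(b); fclose(out);
    return 0;
}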
Assuming the part used for the CRC is contiguous, you could do a brute-force search using a fast CRC32. On x86 with SSE2 (xmm) registers, an assembly-based CRC32 can compute a CRC32 over 700,000 bytes in about 0.0002 seconds on an Intel 3770K at 3.5 GHz, a 3rd-generation processor (they're faster now), which works out to a bit more than 70 seconds to try every length from 8 to 700,000 bytes.
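As a sketch of the brute-force idea (not the SSE2-optimized version), here is a prefix scan built on zlib's crc32(). It assumes the CRC'd region starts at offset 0 and that the firmware's parameters happen to match zlib's standard CRC-32; for other parameters, substitute a table-driven CRC, and for other start offsets, repeat the scan per candidate start:

#include <stdint.h>
#include <zlib.h>

/* Return the first prefix length (>= 8) whose CRC-32 matches the CRC
   stored in the image, or -1 if none does. */
long find_crc_region(const unsigned char *data, long size, uint32_t stored_crc)
{
    uLong crc = crc32(0L, Z_NULL, 0);   /* zlib's initial CRC state */
    for (long len = 1; len <= size; len++) {
        /* Extend the running CRC by one byte, so all prefix lengths
           together cost a single pass over the buffer. */
        crc = crc32(crc, data + len - 1, 1);
        if (len >= 8 && (uint32_t)crc == stored_crc)
            return len;
    }
    return -1;
}

Call it with the 700 KB dump loaded into memory and the stored CRC read out of its known location.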
I converted the code from the GitHub example below to Visual Studio asm, for both reflected and non-reflected CRCs, using the CRC32 and CRC32C polynomials, and I could upload the code if anyone is interested.
https://github.com/intel/isa-l/blob/master/crc/crc16_t10dif_01.asm

Need a formula to get total LUN size using lunSizeLow and lunSizeHigh SNMP objects

I have 2 SNMP Objects/OIDs. Below are the details:
Object1:
Name: lunSizeLow
OID: 1.3.6.1.4.1.43906.1.4.3.2.3.1.9
Description: `LUN` size in bytes - low order bytes
Object2:
Name: lunSizeHigh
OID: 1.3.6.1.4.1.43906.1.4.3.2.3.1.10
Description: `LUN` size in bytes - high order bytes
My requirement:
I want to monitor LUN size through a script, but I didn't find any SNMP object that gives the total LUN size directly. I found 2 separate objects (lunSizeLow and lunSizeHigh), so I need a formula that combines these low-order and high-order objects into the total LUN size.
I went through many articles on the internet and found a couple of formulas on community.hpe.com, but I'm not sure which one is correct.
Formula 1:
The maximum unsigned number that can be stored in a 32-bit counter is 4294967295, so each unit of the high-order counter is worth 4294967296 (2^32).
Total size would be: LOW_ORDER_BYTES + HIGH_ORDER_BYTES * 4294967296
Formula 2:
Total size in GB is LOW_ORDER_BYTES / 1073741824 + HIGH_ORDER_BYTES * 4
Could anyone help me determine the correct formula?
Most languages have a bit-shift operator, allowing you to do something similar to the below (pseudo-Java):
long myBigInteger = lunSizeHigh;
myBigInteger = myBigInteger << 32; // shifts the high bits 32 positions to the left, into the high half of the long
myBigInteger = myBigInteger + (lunSizeLow & 0xFFFFFFFFL); // the mask prevents sign-extension if lunSizeLow arrived as a signed 32-bit int
This has two advantages over multiplying:
Bit shifting is often faster than multiplication, even though most compilers would optimize that particular multiplication into a bit shift anyway.
It is easier to read the code and understand why this would provide the correct answer, given the description from the MIB. Magic numbers should be avoided where possible.
That aside, putting some numbers into the Windows Calculator (using Programmer Mode) and trying formula 1, we can see that it works.
Now, you don't specify what language or environment you're working in, and in some languages you won't have a number type that supports the size of numbers you want to manipulate. (That's the same reason this number had to be split into two counters to begin with: it's larger than the largest number representation available on some primitive platforms.) Whether you shift or multiply, you'll have to make sure your implementation language can handle 64-bit values.
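If your environment does have a 64-bit type, either form works. A minimal C sketch (the two input values are made up) showing that the shift and the multiply-by-4294967296 agree:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t lunSizeLow  = 0x80000000u;  /* example readings, not real data */
    uint32_t lunSizeHigh = 0x00000002u;

    uint64_t viaShift    = ((uint64_t)lunSizeHigh << 32) | lunSizeLow;
    uint64_t viaMultiply = (uint64_t)lunSizeHigh * 4294967296ULL + lunSizeLow;

    /* Both print the same total size in bytes. */
    printf("%llu\n%llu\n",
           (unsigned long long)viaShift,
           (unsigned long long)viaMultiply);
    return 0;
}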

Why is MySQL's maximum time limit 838:59:59?

I've run into the limit myself, but despite lots of chatter online, I've never seen an explanation for why the upper and lower limits for the TIME data type are what they are. The official reference at http://dev.mysql.com/doc/refman/5.7/en/time.html says
TIME values may range from '-838:59:59' to '838:59:59'. The hours part may be so large because the TIME type can be used not only to represent a time of day (which must be less than 24 hours), but also elapsed time or a time interval between two events (which may be much greater than 24 hours, or even negative).
But I'm wondering not why the hours part is allowed to be "so large", but why it's cut off where it is. There doesn't seem to be any significance to that many hours in regards to days, or if I try to imagine possible cutoffs for how many seconds could be stored as an integer. So why the range?
TIME values have always been stored in 3 bytes in MySQL, but the format changed in version 5.6.4. I suspect this was not the first time it changed; the other change, if there was one, happened a long time ago and there is no public evidence of it. The MySQL source code history on GitHub starts with version 5.5 (the oldest commit is from May 2008), but the change I am looking for happened somewhere around 2001-2002 (MySQL 4 was launched in 2003).
The current format, as described in the documentation, uses 6 bits for seconds (possible values: 0 to 63), 6 bits for minutes, 10 bits for hours (possible values: 0 to 1023), 1 bit for sign (add the negative values of the already mentioned intervals) and 1 bit is unused and labelled "reserved for future extensions".
It is optimized for working with the time components (hours, minutes, seconds) and doesn't waste much space. Using this format it's possible to store values between -1023:59:59 and +1023:59:59. However, MySQL limits the number of hours to 838, probably for backward compatibility with applications written a while ago, when I think this was the limit.
Until version 5.6.4, TIME values were also stored in 3 bytes, with the components packed as days * 24 * 3600 + hours * 3600 + minutes * 60 + seconds. This format was optimized for working with timestamps (because it was, in fact, a timestamp: a signed count of seconds). Using it, values in a range of about -2330 to +2330 hours could be stored, yet MySQL still limited the values to -838 to +838 hours.
There was bug #11655 in MySQL 4: it was possible to return TIME values outside the -838..+838 range using nested SELECT statements. It was not a feature but a bug, and it was fixed.
The only reason to limit the values to this range and to actively change any piece of code that produces TIME values outside it was backward compatibility.
I suspect MySQL 3 used a different format that, due to the way the data was packed, limited the valid values to the range -838..+838 hours.
By looking into the current MySQL source code, I found this interesting formula:
#define TIME_MAX_VALUE (TIME_MAX_HOUR*10000 + TIME_MAX_MINUTE*100 + TIME_MAX_SECOND)
Let's ignore for the moment the MAX part of the names used above and let's remember only that TIME_MAX_MINUTE and TIME_MAX_SECOND are numbers between 00 and 59. The formula just concatenates the hours, minutes and seconds in a single integer number. For example, the value 170:29:45 becomes 1702945.
This formula raises the following question: given that the TIME values are stored on 3 bytes with sign, what is the maximum positive value that can be represented this way?
The value we are looking for is 0x7FFFFF, which in decimal notation is 8388607. Since the last four digits (8607) should be read as minutes (86) and seconds (07), and their maximum valid value is 59, the greatest value that can be stored in 3 bytes with sign using the formula above is 8385959. Which, as a TIME, is +838:59:59. Ta-da!
Guess what? The fragment of C code listed above was extracted from this:
/* Limits for the TIME data type */
#define TIME_MAX_HOUR 838
#define TIME_MAX_MINUTE 59
#define TIME_MAX_SECOND 59
#define TIME_MAX_VALUE (TIME_MAX_HOUR*10000 + TIME_MAX_MINUTE*100 + TIME_MAX_SECOND)
I am sure this is how MySQL 3 used to keep TIME values internally. That format imposed the limitation on the range, and the backward-compatibility requirement in subsequent versions has propagated the limitation to the present day.
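As a quick sanity check of that arithmetic, this standalone C program (a sketch reusing the defines quoted above) prints both the 3-byte signed maximum and TIME_MAX_VALUE:

#include <stdio.h>

#define TIME_MAX_HOUR 838
#define TIME_MAX_MINUTE 59
#define TIME_MAX_SECOND 59
#define TIME_MAX_VALUE (TIME_MAX_HOUR*10000 + TIME_MAX_MINUTE*100 + TIME_MAX_SECOND)

int main(void)
{
    printf("0x7FFFFF       = %d\n", 0x7FFFFF);        /* 8388607 */
    printf("TIME_MAX_VALUE = %d\n", TIME_MAX_VALUE);  /* 8385959 -> 838:59:59 */
    return 0;
}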
DATETIME is stored in base 10; see Date and Time Data Type Representation:
DATETIME: Eight bytes: A four-byte integer for date packed as YYYY×10000 + MM×100 + DD and a four-byte integer for time packed as HH×10000 + MM×100 + SS
For convenience and some other reasons, the (old) TIME format was encoded the same way, using 3 bytes:
Hours * 10000 + Minutes * 100 + Seconds
This means:
3 bytes = 2^24 = 16,777,216 values
with sign: 2^23 = 8,388,608
Under this encoding, 8,388,608 corresponds to the magical 838 hours. Its last four digits, 8608, exceed the largest valid minutes-and-seconds combination, 5959, so the largest valid time that fits is 838:59:59. One nice thing about this is that the integer representation of that time, 8385959, is easily readable by a human. But this encoding of course leaves gaps: invalid (unused) integer values such as 8309999 (830:99:99 is not a valid time).
As of MySQL 5.6.4, time format changed its encoding to
1 bit sign (1= non-negative, 0= negative)
1 bit unused (reserved for future extensions)
10 bits hour (0-838)
6 bits minute (0-59)
6 bits second (0-59)
---------------------
24 bits = 3 bytes
Even though it could now store more hours, for compatibility MySQL still allows just 838 hours.
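For illustration, here is a small C sketch that packs and unpacks this 24-bit layout. The bit order follows the listing above, sign bit first; it illustrates the documented layout and is not MySQL's actual code:

#include <stdio.h>
#include <stdint.h>

/* sign(1) | unused(1) | hour(10) | minute(6) | second(6) = 24 bits */
uint32_t pack_time(int negative, unsigned hour, unsigned min, unsigned sec)
{
    uint32_t v = 0;
    v |= (uint32_t)(negative ? 0 : 1) << 23;  /* sign: 1 = non-negative */
    /* bit 22 stays 0: "reserved for future extensions" */
    v |= (hour & 0x3FF) << 12;                /* 10 bits: 0..1023 */
    v |= (min  & 0x3F)  << 6;                 /*  6 bits: 0..59   */
    v |= (sec  & 0x3F);                       /*  6 bits: 0..59   */
    return v;                                 /* fits in 3 bytes  */
}

int main(void)
{
    uint32_t v = pack_time(0, 838, 59, 59);
    printf("packed: 0x%06X\n", v);
    printf("hour=%u min=%u sec=%u\n",
           (unsigned)((v >> 12) & 0x3FF),
           (unsigned)((v >> 6) & 0x3F),
           (unsigned)(v & 0x3F));             /* prints 838 59 59 */
    return 0;
}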
Obviously, it's hard to answer these types of questions without getting direct feedback from the designers of the database.
But there is some documentation regarding how the different data types are stored internally, and, to an extent, it can help us understand this a little bit.
For instance, regarding the TIME data type, notice how it's stored internally according to the documentation:
TIME encoding for nonfractional part:
1 bit sign (1= non-negative, 0= negative)
1 bit unused (reserved for future extensions)
10 bits hour (0-838)
6 bits minute (0-59)
6 bits second (0-59)
---------------------
24 bits = 3 bytes
So, as you can see, the goal is to fit the information within 3 bytes. And, of those 3 bytes, 10 bits are reserved for the hours, which pretty much determines the overall range.
That said, 10 bits does allow values up to 1023, so I guess, technically, without any change to the storage size, the range could have been -1023:59:59 to 1023:59:59. Why they didn't do that and chose 838 as the cutoff instead, I have no idea.

Does 1 KB (kilobyte) really equal 1024 bytes?

Until now I believed that 1024 bytes equals 1 KB (kilobyte), but I have been reading on the internet about the decimal and binary systems.
So, is 1024 bytes = 1 KB actually the correct definition, or is there simply a general confusion?
What you are seeing is a marketing stunt.
Since non-technical people don't know the difference between the metric meg, gig, etc. and the binary meg, gig, etc., marketers for storage use the metric calculation, thus 1000 bytes == 1 kilobyte.
This can cause issues for developers and other highly technical people, so you get the idea of a binary meg, gig, etc., designated with a 'bi' in place of the standard combination (e.g. mebibyte vs megabyte, or gibibyte vs gigabyte).
There are two ways to represent big numbers: you can display them in multiples of either 1000 (base 10) or 1024 (base 2). If you divide by 1000, you probably use the SI prefix names; if you divide by 1024, you probably use the IEC prefix names. The problem starts with dividing by 1024: many applications use the SI prefix names for it and some use the IEC prefix names. That is why how it is written matters:
Using IEC standard:
1 KiB = 1,024 bytes (Note: big K)
1 MiB = 1,024 KiB = 1,048,576 bytes
Using SI standard:
1 kB = 1,000 bytes (Note: small k)
1 MB = 1,000 kB = 1,000,000 bytes
Source: Ubuntu units policy: https://wiki.ubuntu.com/UnitsPolicy
In the normal world, most things go by powers of 10. This includes electricity, for example.
But the computer world is about half binary. For example, when they sell a hard drive, they sell it by powers of 10, so a 1 KB drive would be 1000 B. But when the computer reads it, the OS usually counts by 1024s. This is why, when you read the amount of space available on a drive, it shows much less than advertised: a 500 GB drive reads as only about 466 GB, because the computer divides by the binary 1024 (500 x 10^9 / 1024^3 is about 465.7), not the powers of 10 it was sold and advertised by. The same goes for flash drives. RAM, however, is both sold and read by the computer in binary 1024 units.
One thing to note: it is "B", not "b". There are 8 bits (b) in a byte (B). The reason I bring this up is that when you buy internet service, the speed is usually advertised in bits, not bytes, while the download box on the computer shows the speed in bytes. Say you have a 50 Mb internet connection; that is actually a 6.25 MB connection in the download-speed box, because you have to divide the 50 by 8, since there are 8 bits in a byte. That is how the computer reads it. Another marketing strategy, too; after all, 50 Mb sounds much faster than 6.25 MB. Other than speeds through a network, most things are read in bytes (B). Some people do not realize that there is a difference between "B" and "b".
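To make both conversions concrete, a tiny C program using the numbers from the text:

#include <stdio.h>

int main(void)
{
    double drive_bytes = 500e9;   /* "500 GB" as sold, base 10 */
    printf("500 GB drive = %.1f GiB\n",
           drive_bytes / (1024.0 * 1024.0 * 1024.0));    /* ~465.7 */

    double link_mbit = 50.0;      /* "50 Mb" connection, in bits */
    printf("50 Mb/s link = %.2f MB/s\n", link_mbit / 8); /* 6.25 */
    return 0;
}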
Quite simple...
The word 'byte' is a computing reference for which the letter 'B' is used as an abbreviation.
It must follow, then, that any reference to bytes, e.g. KB, MB, etc., is based on the well-known and widely accepted 1024 base.
Therefore 1 KB must equal 1024 bytes, 1 MB must equal 1048576 bytes (1024 x 1024), etc.
Any non-computing reference to kilo/mega etc. is based on the decimal 1000 base, e.g. 1 kW, or 1 kilowatt, which is 1000 watts.

CUDA: effective bandwidth in the SDK reduction example

The reduction.pdf introduces the reduction method in 7 steps, with 16777216 elements. In the 1st step the effective bandwidth is 2.083 GB/s. How does the 2.083 GB/s come out? And how does the 2nd step's bandwidth of 4.854 GB/s come out?
The bandwidth figures are calculated as the number of bytes in the reduction input data divided by the execution time (note: there are 2^22 integers, i.e. 2^22 x 4 = 16777216 bytes). The calculation is shown clearly on page 10 of the PDF that ships in the SDK in reduction/doc.
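As a worked version of that formula, a minimal C sketch; the kernel time below is back-computed from the quoted 2.083 GB/s figure, not measured:

#include <stdio.h>

int main(void)
{
    double bytes   = 16777216.0;  /* 2^22 ints * 4 bytes each */
    double seconds = 8.054e-3;    /* step-1 kernel time, derived */
    printf("effective bandwidth = %.3f GB/s\n",
           bytes / seconds / 1e9);  /* ~2.083 GB/s */
    return 0;
}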