Mid([table1]![field1], 2.9)
I found this expression in an old Access file that someone made and that is still in use. Can anyone explain the 2.9? My research says that parameter should be a natural number, since it's the starting index within the string [field1], isn't it?
Odds are that's a typo and it's supposed to be Mid([table1]![field1], 2, 9).
Indeed, it's supposed to be a whole number, but since it isn't, it's coerced to one: 2.9 is rounded to 3, and the call works fine. So it won't cause an error; it just might cause unexpected results if you expected the call to start at position 2 and return at most 9 characters.
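If it helps to see the difference concretely, here is a rough Python sketch of the two behaviors; mid() below is just a hypothetical stand-in for Access's Mid, with round() mimicking VBA's coercion of the fractional argument:

def mid(s, start, length=None):
    # Hypothetical stand-in for Access/VBA Mid: 1-based start, optional length
    start = round(start)  # VBA coerces the fractional argument; 2.9 becomes 3
    if length is None:
        return s[start - 1:]  # no length given: rest of the string
    return s[start - 1:start - 1 + length]

s = "ABCDEFGHIJKL"
print(mid(s, 2.9))   # CDEFGHIJKL -- starts at position 3, runs to the end
print(mid(s, 2, 9))  # BCDEFGHIJ  -- starts at position 2, 9 characters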
I have had to work on a project where we have an identifier in hex.
For example, B900001752F10001 is received by a parser developed in Java into a SIGNED LONG variable. We store that variable in a SIGNED BIGINT column in a MySQL DB.
Every time we need the hex string we use the HEX(code) function and we get what is expected.
But when we have to provision the master table, we need to input valid codes. To achieve that we used something like:
Update employee set code=0xB900001752F10001 where main_employee_id=1002;
It worked in the past, producing a code stored in the DB as
13330654997192441857
but now we are using the exact same instruction and the code is stored in the DB as
-5116089076517109759
Comparing those two numbers using the HEX function, both give the same hex value:
select HEX(-5116089076517109759), HEX(13330654997192441857)
B900001752F10001, B900001752F10001
Could someone please provide ideas as to why this is happening? How should we handle this from the provisioning perspective? We need to ensure the value is stored as 13330654997192441857 so that when an authentication event happens the codes match.
I have run out of ideas; I appreciate any help.
I think you have overflowed the datatype.
According to the MySQL manual, a signed BIGINT is in the range of
-9,223,372,036,854,775,808
to
9,223,372,036,854,775,807
Your number
13,330,654,997,192,441,857
has exceeded the above positive bound, so it wraps around and is
interpreted as a negative number.
Having said that, I think you may still be okay with your command -- it is only when you try to interpret the bit pattern as a signed number that the result looks confusing.
Update employee set code=0xB900001752F10001 where main_employee_id=1002;
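To see the wraparound concretely, here is a quick sketch in Python rather than SQL; it reinterprets your 64-bit pattern both ways:

import struct

code = 0xB900001752F10001
print(code)  # 13330654997192441857 -- the unsigned reading

# repack the same 8 bytes and read them back as a signed 64-bit integer
(signed,) = struct.unpack('<q', struct.pack('<Q', code))
print(signed)  # -5116089076517109759 -- the signed reading

# one bit pattern, two readings, so HEX() agrees on both
print(hex(signed & 0xFFFFFFFFFFFFFFFF))  # 0xb900001752f10001
print(hex(code))                         # 0xb900001752f10001

If the column only ever holds 64-bit identifiers rather than quantities you do arithmetic on, declaring it BIGINT UNSIGNED should let MySQL store 13330654997192441857 directly -- though check how the Java side copes, since a Java long is always signed.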
On a 64-bit machine the top bit is the sign bit, so the biggest signed number is 9,223,372,036,854,775,807; if your number is bigger than that, the top bit becomes 1, so the number is interpreted as negative: it has overflowed.
So your 13330654997192441857 becomes -5116089076517109759.
Why is this so?
When I try out:
Math.pow(2,58)=288230376151711740
while in fact, it is 288230376151711744
or
Math.pow(2,57)=144115188075855870
while it really equals 144115188075855872
It just returns that number without any warning.
I would understand if it stopped going above some number once a maximum value was reached. However, this seems to calculate the first n digits correctly and then go wrong only at the very end of the digits.
You've run out of the Number type's display precision, not its storage precision. With powers of two the value stored in the variable is exact: 2^58 needs only a single significant bit, so a double-precision float represents it perfectly. But when the number is converted to a string for display, the engine prints only as many decimal digits as are needed to uniquely identify the double (at most 17 significant digits) and renders everything past that as 0. That's why the leading digits look right and only the tail goes wrong. If you need exact integer arithmetic beyond 2^53 in general, move to arbitrary-precision integers (e.g. BigInt) rather than doubles.
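Illustrating with Python here rather than JavaScript, since Python's float is the same IEEE 754 double as a JS Number and its default display is also shortest-round-trip:

exact = 2 ** 58        # Python ints are arbitrary precision
approx = float(exact)  # an IEEE 754 double, like a JS Number

print(exact)                 # 288230376151711744
print(repr(approx))          # 2.8823037615171174e+17 -- 17 significant digits shown
print(int(approx) == exact)  # True -- the stored double really is exact for 2**58
print(f"{approx:.0f}")       # 288230376151711744 -- forcing full digits recovers it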
I have a few values in the database, in a FLOAT column that I want to search on.
Strangely, values of 2.5 come back when running this:
SELECT * from `prices` WHERE `price` = 2.5
But nothing is returned when I search on 4.8, UNLESS I do a trim on the column before comparing against it, like this:
SELECT * from `prices` WHERE trim(`price`) = 4.8
Anybody know what the cause of this might be? I thought that since it's a FLOAT field, there shouldn't be any leading or trailing space that needs to be trimmed. I'm assuming the number 4.8 isn't anything special, but it's still intriguing.
When browsing the database, I can see it as a plain 4.8, with no leading or trailing characters.
I'm a bit stumped as to why this would be happening.
It's because 4.8 cannot be represented exactly as a float. You can get values very close to it, but not the exact value.
I'm not entirely sure what trim() is doing to it (I can't find anything specifying the behavior; maybe it's not actually defined), but I know that it changes the value so you can get a match -- presumably it forces the value through a string conversion, and the string '4.8' then converts to the same double as the literal. Read http://dev.mysql.com/doc/refman/5.0/en/problems-with-float.html if you have further interest in the behavior of floats (although it won't tell you anything about what trim does to them).
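You can reproduce the mismatch outside MySQL; here is a Python sketch that round-trips values through IEEE 754 single precision (what a FLOAT column stores) and compares them with double precision (what the literal in the query becomes):

import struct

def as_float32(x):
    # round-trip a Python float (a double) through single precision
    return struct.unpack('f', struct.pack('f', x))[0]

print(as_float32(2.5) == 2.5)  # True  -- 2.5 is exact in binary (0b10.1)
print(as_float32(4.8) == 4.8)  # False -- 4.8 has no finite binary expansion
print(repr(as_float32(4.8)))   # 4.800000190734863

That's why the 2.5 rows match and the 4.8 rows don't: the stored single-precision value widens to 4.800000190734863, not 4.8. If you need exact matches, a DECIMAL column (or comparing within a small tolerance) is the usual fix.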
I've got a MySQL database with a large number of 2048-bit binary strings (e.g. '0111001...0101'). One calculation I'll need is the Hamming distance (the total count of 1's in the XOR'd result) of these strings compared to some externally generated bitstring. In order to get an idea of how to write this query, I tried writing it for smaller bitstrings. Here's an example:
select BIT_COUNT(bin((b'0011100000') ^ (b'1111111111')))
The inner portion that computes the XOR works correctly, but BIT_COUNT returns strange results. This example returns 14, which is longer than the string itself.
So I have a few questions:
First, why is BIT_COUNT returning such strange results? Is it operating on a string rather than the binary string I'd like it to operate on? If so, how do I deal with this?
Second, notice that I'm casting (is that the right word here?) the strings as binary by prepending with a b. How would I do this with column names and variables? Clearly I can't simply prepend a b to a variable name, and I can't insert a space between. Any ideas?
Thanks,
EDIT:
So here's a solution to the first problem:
select BIT_COUNT(b'0011100000' ^ b'1111111111')
There seems to be a problem when using this for larger strings (2048 bits). I tried:
select BIT_COUNT(b'001110...00011')
and it gives me results like 28, when the actual bitcount should be around 1024. If I remove the b, then it appears to max-out at 64. Any ideas on how to resolve this problem?
Just remove the bin() function. With it, BIT_COUNT treats its argument as a character string rather than a set of bits: bin() returns the string '1100011111', which gets cast to the decimal number 1100011111, and that number happens to have 14 bits set. So
select BIT_COUNT(b'0011100000' ^ b'1111111111')
will do the work.
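That still leaves the EDIT's 2048-bit problem: BIT_COUNT casts its argument to a 64-bit BIGINT (at least in the MySQL versions of that era), which is why the count maxes out around 64. One workaround is to do the XOR and popcount on the application side; here is a hedged sketch in Python, whose ints are arbitrary precision, so 2048 bits is no problem:

def hamming(bits_a, bits_b):
    # Hamming distance between two equal-length bitstrings like '0111...'
    if len(bits_a) != len(bits_b):
        raise ValueError("bitstrings must be the same length")
    return bin(int(bits_a, 2) ^ int(bits_b, 2)).count('1')

print(hamming('0011100000', '1111111111'))  # 7

If it has to stay in SQL, splitting the 2048-bit value into 64-bit chunks and summing BIT_COUNT over the chunks should work as well.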
It goes without saying that using hard-coded, hex literal pointers is a disaster:
int *i = 0xDEADBEEF;
// god knows if that location is available
However, what exactly is the danger in using hex literals as variable values?
int i = 0xDEADBEEF;
// what can go wrong?
If these values are indeed "dangerous" due to their use in various debugging scenarios, then this means that even if I do not use these literals, any program that during runtime happens to stumble upon one of these values might crash.
Anyone care to explain the real dangers of using hex literals?
Edit: just to clarify, I am not referring to the general use of constants in source code. I am specifically talking about debug-scenario issues that might come up due to the use of hex values, with the specific example of 0xDEADBEEF.
There's no more danger in using a hex literal than any other kind of literal.
If your debugging session ends up executing data as code without you intending it to, you're in a world of pain anyway.
Of course, there's the normal "magic value" vs "well-named constant" code smell/cleanliness issue, but that's not really the sort of danger I think you're talking about.
With few exceptions, nothing is "constant".
We prefer to call them "slow variables" -- their value changes so slowly that we don't mind recompiling to change them.
However, we don't want to have many instances of 0x07 all through an application or a test script, where each instance has a different meaning.
We want to put a label on each constant that makes it totally unambiguous what it means.
if( x == 7 )
What does "7" mean in the above statement? Is it the same thing as
d = y / 7;
Is that the same meaning of "7"?
Test Cases are a slightly different problem. We don't need extensive, careful management of each instance of a numeric literal. Instead, we need documentation.
We can -- to an extent -- explain where "7" comes from by including a tiny bit of a hint in the code.
assertEquals( 7, someFunction(3,4), "Expected 7, see paragraph 7 of use case 7" );
A "constant" should be stated -- and named -- exactly once.
A "result" in a unit test isn't the same thing as a constant, and requires a little care in explaining where it came from.
A hex literal is no different than a decimal literal like 1. Any special significance of a value is due to the context of a particular program.
I believe the concern raised in the IP address formatting question earlier today was not related to the use of hex literals in general, but the specific use of 0xDEADBEEF. At least, that's the way I read it.
There is a concern with using 0xDEADBEEF in particular, though in my opinion it is a small one. The problem is that many debuggers and runtime systems have already co-opted this particular value as a marker value to indicate unallocated heap, bad pointers on the stack, etc.
I don't recall off the top of my head just which debugging and runtime systems use this particular value, but I have seen it used this way several times over the years. If you are debugging in one of these environments, the existence of the 0xDEADBEEF constant in your code will be indistinguishable from the values in unallocated RAM or whatever, so at best you will not have as useful RAM dumps, and at worst you will get warnings from the debugger.
Anyhow, that's what I think the original commenter meant when he told you it was bad for "use in various debugging scenarios."
There's no reason why you shouldn't assign 0xdeadbeef to a variable.
But woe betide the programmer who tries to assign decimal 3735928559, or octal 33653337357, or worst of all: binary 11011110101011011011111011101111.
Big Endian or Little Endian?
One danger is when constants are assigned to an array or structure with different sized members; the endian-ness of the compiler or machine (including JVM vs CLR) will affect the ordering of the bytes.
This issue is true of non-constant values, too, of course.
Here's an admittedly contrived example. What is the value of buffer[0] after the last line?
const int TEST[] = { 0x01BADA55, 0xDEADBEEF };
char buffer[sizeof(TEST)];
memcpy( buffer, (void*)TEST, sizeof(TEST));
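To make the byte-order point concrete, here is a small Python sketch of the same idea (Python's struct module lets you pick the endianness explicitly):

import struct

little = struct.pack('<I', 0x01BADA55)  # b'\x55\xda\xba\x01'
big    = struct.pack('>I', 0x01BADA55)  # b'\x01\xba\xda\x55'

print(little[0])  # 85 (0x55) -- what buffer[0] holds on a little-endian machine
print(big[0])     # 1  (0x01) -- what it would hold on a big-endian machine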
I don't see any problem with using it as a value. It's just a number, after all.
There's no danger in using a hard-coded hex value for a pointer (like your first example) in the right context. In particular, when doing very low-level hardware development, this is the way you access memory-mapped registers. (Though it's best to give them names with a #define, for example.) But at the application level you shouldn't ever need to do an assignment like that.
I use CAFEBABE
I haven't seen it used by any debuggers before.
int *i = 0xDEADBEEF;
// god knows if that location is available
int i = 0xDEADBEEF;
// what can go wrong?
The danger that I see is the same in both cases: you've created a flag value that has no immediate context. There's nothing about i in either case that will let me know 100, 1000 or 10000 lines later that there is a potentially critical flag value associated with it. What you've planted is a landmine bug: if I don't remember to check for it in every possible use, I could be faced with a terrible debugging problem. Every use of i will now have to look like this:
if (i != 0xDEADBEEF) { // Curse the original designer to oblivion
// Actual useful work goes here
}
Repeat the above for all of the 7000 instances where you need to use i in your code.
Now, why is the above worse than this?
if (isIProperlyInitialized()) { // Which could just be a boolean
// Actual useful work goes here
}
At a minimum, I can spot several critical issues:
Spelling: I'm a terrible typist. How easily will you spot 0xDAEDBEEF in a code review? Or 0xDEADBEFF? On the other hand, I know that my compiler will barf immediately on isIProperlyInitialised() (insert the obligatory s vs. z debate here).
Exposure of meaning. Rather than trying to hide your flags in the code, you've intentionally created a method that the rest of the code can see.
Opportunities for coupling. It's entirely possible that a pointer or reference is connected to a loosely defined cache. An initialization check could be overloaded to check first if the value is in cache, then to try to bring it back into cache and, if all that fails, return false.
In short, it's just as easy to write the code you really need as it is to create a mysterious magic value. The code-maintainer of the future (who quite likely will be you) will thank you.