Why does Free Pascal print 'NO'? - freepascal

var
a: Integer;
begin
a:= 300;
if a in [100..500] then
WriteLn ('YES')
else
WriteLn ('NO')
end.

According to the FreePascal documentation, Pascal only supports ordinal values between 0 and 255 in sets. The relevant portion is:
Each of the elements of SetType must be of type TargetType. TargetType can be any ordinal type with a range between 0 and 255. A set can contain at most 255 elements.
Turning on range checking {$R+} will cause the compiler to warn you of these sorts of errors.
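If you only need the range test rather than genuine set semantics, a plain comparison avoids the 0..255 limitation entirely. A minimal sketch of that alternative (an illustration, not taken from the documentation):
var
  a: Integer;
begin
  a := 300;
  if (a >= 100) and (a <= 500) then  { works for the full Integer range }
    WriteLn('YES')
  else
    WriteLn('NO')
end.
This prints YES for a = 300.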

How to resolve mismatch in argument error [duplicate]

I'm having trouble with the precision of constant numerics in Fortran.
Do I need to write every 0.1 as 0.1d0 to have double precision? I know the compiler has a flag such as -fdefault-real-8 in gfortran that solves this kind of problem. Would that be a portable and reliable way to do it? And how could I check whether the flag actually works for my code?
I am using F2py to call Fortran code from my Python code, and it doesn't report an error even if I pass a flag it doesn't recognize, which is what worries me.
In a Fortran program 1.0 is always a default real literal constant and 1.0d0 always a double precision literal constant.
However, "double precision" means different things in different contexts.
In Fortran contexts "double precision" refers to a particular kind of real which has greater precision than the default real kind. In more general communication "double precision" is often taken to mean a particular real kind of 64 bits which matches the IEEE floating point specification.
gfortran's compiler flag -fdefault-real-8 means that the default real takes 8 bytes and is likely to be that which the compiler would use to represent IEEE double precision.
So, 1.0 is a default real literal constant, not a double precision literal constant, but a default real may happen to be the same as an IEEE double precision.
Questions like this one reflect the implications of precision in literal constants. To anyone asking my advice about flags like -fdefault-real-8, I would say to avoid them.
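One way to check what a given compiler and set of flags actually does (a small check of my own, not part of the original answer) is to print the kind numbers of the literals themselves:
program check_kinds
    implicit none
    print *, 'kind(1.0)   = ', kind(1.0)    ! kind of a default real literal
    print *, 'kind(1.0d0) = ', kind(1.0d0)  ! kind of a double precision literal
end program check_kinds
With gfortran, kind numbers correspond to byte sizes, so this typically prints 4 and 8; compiled with -fdefault-real-8 the first number changes to 8, which also shows how the flag silently alters the meaning of every default real in the program.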
Adding to @francescalus's response above: in my opinion, since the definition of double precision can change across platforms and compilers, it is good practice to explicitly declare the desired kind of the constant using the standard Fortran convention, as in the following example:
program test
    use, intrinsic :: iso_fortran_env, only: RK => real64
    implicit none
    write(*,"(*(g20.15))") "real64: ", 2._RK / 3._RK
    write(*,"(*(g20.15))") "double precision: ", 2.d0 / 3.d0
    write(*,"(*(g20.15))") "single precision: ", 2.e0 / 3.e0
end program test
Compiling this code with gfortran gives:
$gfortran -std=gnu *.f95 -o main
$main
real64: .666666666666667
double precision: .666666666666667
single precision: .666666686534882
Here, the results in the first two lines (explicit request for 64-bit real kind, and double precision kind) are the same. However, in general, this may not be the case and the double precision result could depend on the compiler flags or the hardware, whereas the real64 kind will always conform to 64-bit real kind computation, regardless of the default real kind.
Now consider another scenario, where a variable is declared with the 64-bit real kind but the numerical computation is done in 32-bit precision:
program test
    use, intrinsic :: iso_fortran_env, only: RK => real64
    implicit none
    real(RK) :: real_64
    real_64 = 2.e0 / 3.e0
    write(*,"(*(g30.15))") "32-bit accuracy is returned: ", real_64
    real_64 = 2._RK / 3._RK
    write(*,"(*(g30.15))") "64-bit accuracy is returned: ", real_64
end program test
which gives the following output,
$gfortran -std=gnu *.f95 -o main
$main
32-bit accuracy is returned: 0.666666686534882
64-bit accuracy is returned: 0.666666666666667
Even though the variable is declared as real64, the result on the first line is still wrong, in the sense that it does not have the 64-bit (double precision) accuracy you want. The reason is that the computation is first carried out in the precision of the literal constants (the default 32-bit kind) and only then stored in the 64-bit variable real_64, which is why it differs from the more accurate answer on the second line of the output.
So the bottom-line message is: It is always a good practice to explicitly declare the kind of the literal constants in Fortran using the "underscore" convention.
The answer to your question is: yes, you do need to indicate that the constant is double precision. Using 0.1 is a common example of this problem, as its 4-byte and 8-byte representations are different. Other constants (e.g. 0.5), where the extra precision bytes are all zero, don't have this problem.
This was introduced into Fortran with F90 and has caused problems for the conversion and reuse of many legacy FORTRAN codes. Prior to F90, the result of double precision a = 0.1 could have used either a real 0.1 or a double precision 0.1 constant, although all compilers I used provided a double precision value. This is a common source of inconsistent results when testing legacy codes against published results. Examples are frequently reported, e.g. PI=3.141592654 appeared in code on a forum this week.
However, using 0.1 as a subroutine argument has always caused problems, as this would be transferred as a real constant.
So given the history of how real constants have been handled, you do need to explicitly specify a double precision constant when one is required. It is not a user-friendly approach.
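To make the 0.1 example concrete, here is a small sketch (mine, not from the answers above) assigning the three spellings to a double precision variable:
program constant_kinds
    use, intrinsic :: iso_fortran_env, only: dp => real64
    implicit none
    real(dp) :: x, y, z
    x = 0.1        ! default real constant, widened only after it is already inexact
    y = 0.1d0      ! double precision constant
    z = 0.1_dp     ! explicit kind suffix, the modern equivalent
    print *, x     ! roughly 0.10000000149011612
    print *, y     ! roughly 0.10000000000000001
    print *, z     ! same as y
end program constant_kinds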

Register allocation for code generator in MIPS

program main();
    a, b: integer;
    function p(name x: integer; var y, z: integer): integer;
        a: integer;
        function f(y: integer): integer;
            b, c: integer;
            a := a + 1;
            if (g(y <= 0)) then return 2;
            return 2*x + p(y--/2, b, c) - b*c;
        end function;
        function g(name i: integer): integer;
            if (b < 3) then return 1;
            return i;
        end function;
        a := b++;
        y := f(x);
        z := y*f(x) - a;
        return a;
    end function;
    a := 1;
    b := 6;
    p(a+b--, a, b);
    print(a, b);
end program
This is the program for which I want to allocate registers in a MIPS code generator. I am having trouble with the nested functions.
My registers are $2-$25.
Registers $4-$7 are used for passing arguments, $3 passes the access link when calling a function, and $2 holds a function's result. Registers $16-$23 must retain their values across a function call, so if they are used, they must be saved to auxiliary slots in the stack on entry to the function and
reloaded before exiting the function. If the word size is 4 bytes, what is the activation record size for each of the code units, including the handling of by-reference parameters?
Also, how many registers are needed for the allocation, and which ones?
Since this programming language is hypothetical, we have to make some assumptions as to the immediacy of the visibility of modifications to var parameters.  Let's assume such changes (to var parameters) are immediately visible, and thus, they are effectively passed by address.
Therefore, for all practical purposes, actual arguments passed in var positions must be memory variables and their address is passed instead of their current value (as would otherwise be the case).
(Sure, there are other ways to do this: advanced optimization might inline some of these functions (p is recursive so that makes it rather difficult to fully inline, though tail recursion could potentially be applied along with further modifications to make it iterative instead of recursive), or else, custom generating the code per particular call sites is another possibility.)
Thus, under these assumptions, main.a and main.b are both "forced" to (stack) memory because of p(a+b--,a,b). Further, because of p(y--/2,b,c), p.b and p.c are also both "forced" to memory.
Other than these variables being assigned stack memory locations, you can do the register allocation normally.
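As a rough illustration of what that forcing means at the instruction level, the call p(a+b--, a, b) in main might look like the sketch below, using the conventions from the question ($4-$7 for arguments, result in $2) with made-up stack offsets for main.a and main.b and one particular choice of evaluation order; the access link in $3 is omitted:
lw    $8, 0($sp)        # load main.a   (assumed to live at 0($sp))
lw    $9, 4($sp)        # load main.b   (assumed to live at 4($sp))
addu  $4, $8, $9        # first argument, by value: a + b (old value of b)
addiu $10, $9, -1       # b--
sw    $10, 4($sp)       # the decremented b must go back to memory
addiu $5, $sp, 0        # second argument, var parameter: address of a
addiu $6, $sp, 4        # third argument, var parameter: address of b
jal   p                 # call p; its result comes back in $2
The last three lines are the point: because the addresses of a and b escape into the call, those variables cannot live purely in registers.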
Also complicating matters, due to the undocumented nature of the hypothetical programming language, is the order of evaluation of expressions. This code example is rich with ambiguity, using variables and modifications of them in the same expression (e.g. b-- alongside another use of b), which in C would lead to the territory of dragons: undefined behavior.
One reasonable assumption would be that order of evaluation is left-to-right, and that side effects (such as post-decrement) are observed immediately.
But we don't know the language, so suffice it to say that this code is filled with language-specific land-mines.

SQL code reading nvarchar variable length

I am having trouble with a sequence of code that is not reading the NVARCHAR length of my variables (they are barcode strings). We have two different barcodes and the inventory system I have set up measures only the format of the original one (has 7 characters). The new barcode has 9 characters. I need to run a loop value through each barcode input, hence how I have set up this line of script.
I originally thought that a DATALENGTH or LEN function would suffice, but it seems that it is only measuring the variable as an integer, not the 7 characters in the string. If anybody has any input on how to rework my code, or a function that will measure a variable's nvarchar length, it would be more than appreciated!
CASE WHEN @BarcodeID = LEN(7)
THEN UPPER(LEFT(@BarcodeID,2))+CONVERT(nvarchar,RIGHT(@BarcodeID,5)+@LoopValue-1)
ELSE UPPER(LEFT(@BarcodeID,3))+CONVERT(nvarchar,RIGHT(@BarcodeID,6)+@LoopValue-1) END
Once again, the LEN(7) function in the beginning seems to be my issue.
Perhaps what you're trying to do is actually
CASE WHEN LEN(@BarcodeID) = 7
By using @BarcodeID = LEN(7) you are basically testing whether @BarcodeID is equal to 1, because the LEN() function "Returns the number of characters of the specified string expression." It implicitly converts 7 to a one-character string.
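Putting that correction into the original expression gives something like the sketch below; the sample values and the nvarchar(20) length are assumptions (CONVERT to a bare nvarchar defaults to 30 characters), and note that running the right-hand digits through integer arithmetic will drop any leading zeros:
DECLARE @BarcodeID nvarchar(9) = N'AB00123',   -- sample 7-character barcode (assumed)
        @LoopValue int = 1;

SELECT CASE WHEN LEN(@BarcodeID) = 7
            THEN UPPER(LEFT(@BarcodeID, 2)) + CONVERT(nvarchar(20), RIGHT(@BarcodeID, 5) + @LoopValue - 1)
            ELSE UPPER(LEFT(@BarcodeID, 3)) + CONVERT(nvarchar(20), RIGHT(@BarcodeID, 6) + @LoopValue - 1)
       END AS NextBarcode;   -- yields AB123 for the sample value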

Why are hexadecimal numbers prefixed with "0*"?

Instead of writing ffff, why is the syntax for writing hexadecimal numbers like 0*ffff? What is the meaning of "0*"? Does it specify something?
Anyhow, the A, B, C, D, E, F notation exists only in the hexadecimal number system, so what is the need for "0*"?
Sorry, "*" was not the character I supposed; it is "x".
Is it a nomenclature or notation for hexadecimal number systems?
I don't know what language you are talking about, but if you for example in C# write
var ffffff = "Some unrelated string";
...
var nowYouveDoneIt = ffffff;
what do you expect to happen? How does the compiler know if ffffff refers to the hexadecimal representation of the decimal number 16777215 or to the string variable defined earlier?
Since identifiers (in C#) can't begin with a number, prefixing with a 0 and some other character (in C# it's 0xffffff for hex and 0b111111111111111111111111 for binary, IIRC) is a handy way of communicating what base the number literal is in.
EDIT: Another issue, if you were to write var myCoolNumber = 10, how would you have ANY way of knowing if this means 2, 10 or 16? Or something else entirely.
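As a quick illustration of the two prefixes in C# (a minimal sketch; the binary form needs C# 7.0 or later):
using System;

class Program
{
    static void Main()
    {
        int dec = 255;          // plain decimal literal
        int hex = 0xFF;         // hexadecimal literal: 0x prefix
        int bin = 0b11111111;   // binary literal: 0b prefix
        Console.WriteLine(dec == hex && hex == bin);   // prints True
    }
}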
It's typically 0xFFFF: the letter, not the multiplication symbol.
As for why, 0x is just the most common convention, like how some programming languages allow binary to be prefixed by 0b. Prefixing a number with just 0 is typically reserved for octal, or base 8; they wanted a way to tell the machine that the following number is in hexadecimal, or base 16 (10 != 0b10 [2] != 010 [8] != 0x10 [16]). The small 'o' that might have identified octal was typically omitted for human readability purposes.
Interestingly enough, most Assembly-based implementations I've come across use (or at least allow the use of) 0h instead or as well.
It's there to indicate that the number is hex. It's not '*', it's actually 'x'.
See:
http://www.tutorialspoint.com/cprogramming/c_constants.htm

MySQL "Incorrect string value" error in function / stored procedure

I need to assign the output of the following statement to a variable in a MySQL FUNCTION or STORED PROCEDURE:
SELECT CAST(0xAAAAAAAAAAAFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF AS CHAR(28));
All I get is this:
Error Code: 1366 Incorrect string value: '\x81\xEC\x92\x01I\x06...' for column 'some_output' at row 1
This is probably obvious, but somehow I couldn't solve it.
I read about all the other CHARSET/COLLATION solutions, but they didn't help me.
DELIMITER $$
CREATE DEFINER=`root`@`localhost` FUNCTION `function_name`(`some_input` VARCHAR(100)) RETURNS varchar(100) CHARSET utf8mb4
BEGIN
DECLARE some_output CHAR(50) CHARSET utf8mb4;
SET some_output = SELECT CAST(0xAAAAAAAAAAAFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF AS CHAR(28));
RETURN some_output;
END
Hexadecimal Literals
... In string contexts hexadecimal values act like binary strings ...
Are you looking for something like this?
CREATE FUNCTION function_name() RETURNS VARBINARY(28)
RETURN 0xAAAAAAAAAAAFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF;
Here is SQLFiddle demo
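If you then want the value in a variable, as in the question, something like this should work (HEX() is just a convenient way to inspect the binary string):
SET @some_output = function_name();   -- user variable now holds the raw binary value
SELECT HEX(@some_output);             -- shows the original hexadecimal digits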
You're thinking like a computer. Databases tend to conceptualize strings in a more human-relevant way than you may be accustomed to. This used to drive me nuts.
It's logically sensible that if I have a 'string' then I should be able to store anything from 0x00 to 0xFF in each byte position, yeah? Well, no, because the "CHAR" in VARCHAR is for "character" (not the same as "byte").
You might concur, based on the result of the following logical test, that databases seem to have a different idea of what makes up a "string" ...
mysql> SELECT 'A' = 'a';
+-----------+
| 'A' = 'a' |
+-----------+
| 1 |
+-----------+
1 row in set (0.00 sec)
True? Wait, in what kind of crazy universe is that true? It's true in a universe with collations, which determine sorting and matching among characters (as opposed to bytes)... and go hand in hand with character sets, which map the bits of a character to the visual representation of a character. Within a character set, not every possible combination of bytes is a valid character.
A moment's reflection suggests that a world where 0x41 ("A") is considered equal to 0x61 ("a") is no place for binary data.
Incorrect string value: '\x81\xEC\x92\x01I\x06...'
UTF-8 is backwards compatible with good ol' ASCII only for values of 0x7F and below, with larger values in a given byte position indicating that the single "character" starting at that byte position is actually also composed of one or more subsequent bytes, and valid values for those bytes are determined by the design of UTF-8, where multibyte characters are constrained like this:
The leading byte has two or more high-order 1s followed by a 0, while continuation bytes all have '10' in the high-order position.
— http://en.wikipedia.org/wiki/UTF-8#Description
Thus, the byte 0x81 (10000001) is not a valid character in a UTF-8 string unless it is preceded by a byte with a value >= 0xC0... so what you have there is an invalid UTF-8 string that you're trying to store in a structure that by definition requires valid UTF-8.
The structure you are looking for is the blissfully character set and collation unaware VARBINARY for storing arbitrary strings of bytes that are relevant to algorithms, as opposed to VARCHAR, for storing strings of characters that are (presumably) relevant to human communication.
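A minimal sketch of the difference, using a made-up table and the first few bytes from the error message:
CREATE TABLE demo (
    c VARCHAR(28) CHARACTER SET utf8mb4,   -- characters: bytes must form valid UTF-8
    b VARBINARY(28)                        -- bytes: anything goes
);

INSERT INTO demo (b) VALUES (0x81EC9201);  -- fine: raw bytes into VARBINARY
INSERT INTO demo (c) VALUES (0x81EC9201);  -- rejected with error 1366 in strict SQL mode,
                                           -- because 0x81 cannot start a UTF-8 character
SELECT HEX(b) FROM demo;                   -- 81EC9201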