I have to convert an input file of ASCII and non-ASCII characters to a binary vector using Fortran 90.

Is there any routine available? I didn't find one.
Thanks

Under the assumption that you want to read a binary file that contains bytes for printable characters like "A" = 65 and non-printable characters like "ESC" = 27, something like this might help:
integer(kind=selected_int_kind(1)), dimension(1000) :: vector   ! one signed byte per element
integer :: i, ios

open(unit=10, file='data.data', access='stream', form='unformatted')
i = 1
read(unit=10, iostat=ios) vector(i)
do while (ios == 0 .and. i < size(vector))   ! stop at end of file or when the vector is full
   i = i + 1
   read(unit=10, iostat=ios) vector(i)
enddo
close(unit=10)
For simplicity I assumed that the input file contains at most 1000 bytes. The vector will then contain the decimal value of each byte in the input file, i.e. the ASCII code for ASCII characters; note that with the usual signed one-byte integer kind, non-ASCII byte values above 127 will appear as negative numbers.

Related

What's the exact meaning of the statement "Since ASCII used 7 bits for the character, it could only represent 128 different characters"?

I came across the statement below while studying HTML character sets and character encoding:
Since ASCII used 7 bits for the character, it could only represent 128
different characters.
When we convert any decimal value from the ASCII character set to its binary equivalent, it comes down to a 7-bit-long binary number.
For example, the capital English letter 'E' has the decimal value 69 in the ASCII table. If we convert 69 to its binary equivalent, we get the 7-bit-long binary number 1000101.
Then why is it shown in the ASCII table as the 8-bit-long binary number 01000101 instead of the 7-bit-long binary number 1000101?
This seems to contradict the statement:
Since ASCII used 7 bits for the character, it could only represent 128
different characters.
The above statement says that ASCII used 7 bits per character.
Please clear up my confusion about the binary equivalent of a decimal value: should I consider the 7-bit-long or the 8-bit-long binary equivalent of a decimal value from the ASCII table? Please explain in easy-to-understand language.
Again, consider the statement below:
Since ASCII used 7 bits for the character, it could only represent 128
different characters.
According to the above statement, how does the number of characters (128) that ASCII supports relate to the fact that ASCII uses 7 bits to represent any character?
Please clear up the confusion.
Thank you.
In most processors, memory is byte-addressable and not bit-addressable. That is, a memory address gives the location of an 8-bit value. So, almost all data is manipulated in multiples of 8 bits at a time.
If we were to store a value that has by its nature only 7 bits, we would very often use one byte per value. If the data is a sequence of such values, as text might be, we would still use one byte per value to make counting, sizing, indexing and iterating easier.
When we describe the value of a byte, we often show all of its bits, either in binary or hexadecimal. If a value is some sort of integer (say of 1, 2, 4, or 8 bytes) and its decimal representation would be more understandable, we would write the decimal digits for the whole integer. But in those cases, we might lose the concept of how many bytes it is.
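To connect this to the 128 figure: 7 bits can form 2^7 = 128 distinct patterns, and storing a 7-bit value in an 8-bit byte just pads it with a leading 0. A small Python sketch of both points (the choice of 'E' follows the example in the question):
# 7 bits allow 2**7 = 128 distinct patterns, hence 128 ASCII characters.
print(2 ** 7)              # 128

code = ord('E')            # 69, the ASCII code for 'E'
print(format(code, 'b'))   # 1000101   (the 7 significant bits)
print(format(code, '08b')) # 01000101  (the same value stored in an 8-bit byte)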
BTW—HTML doesn't have anything to do with ASCII. And, Extended ASCII isn't one encoding. The fundamental rule of character encodings is to read (decode) with the encoding the text was written (encoded) with. So, a communication consists of the transferring of bytes and a shared understanding of the character encoding. (That makes saying "Extended ASCII" so inadequate as to be nearly useless.)
An HTML document represents a sequence of Unicode characters. So, one of the Unicode character encodings (UTF-8) is the most common encoding for an HTML document. Regardless, after it is read, the result is Unicode. An HTML document could be encoded in ASCII, but why do that? If you did know it was ASCII, you could just as easily know that it's UTF-8.
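As a quick illustration of that last point, here is a small Python sketch (my example, not from the question) showing that pure-ASCII text produces exactly the same bytes whether you treat it as ASCII or as UTF-8:
text = "Hello, <b>world</b>!"
print(text.encode('ascii') == text.encode('utf-8'))   # True: byte-for-byte identical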
Outside of HTML, ASCII is used billions—if not trillions—of times per second. But, unless you know exactly how it pertains to your work, forget about it, you probably aren't using ASCII.

Converting from lowercase to uppercase using decimal/binary representation of alphabets

I'm using RISC-V and I am limited to using just and, or, xori, and the addition, subtraction, multiplication, and division of integer values.
So, for instance, the letter "a" will be represented as 97 and "aa" will be represented as 24929, and so on. The UI converts the binary sequence into its decimal representation, and I cannot directly modify the n-th bit.
Is there any way I can find a simple, general equation for converting the decimal representation of a sequence of at most 8 letters from lowercase to uppercase?
Also, I forgot to add: I can't partition the string into individual letters either. Maybe it's possible, but I don't know how to do it.
Letters or characters are usually represented as byte values, which are easier to read in hexadecimal. This can be seen if you convert 97 and 24929 to hex.
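For instance (a Python sketch of that check):
print(hex(97), hex(24929))   # 0x61 0x6161: "aa" is just two 0x61 ('a') bytes side by side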
You did not mention the system which was used to encode the characters; mentioning the value for one character is not definitive. Assuming your letters are encoded as ASCII, find an ASCII table and figure out the DIFFERENCE between upper- and lowercase character codes.
Use this knowledge to design an algorithm to transform lowercase character codes to uppercase.
A good uppercase conversion algorithm will not modify characters that are not lowercase letters.
This can be extended to SIMD-style processing of whole words if you are careful to avoid carries between bytes when you add or subtract.
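A minimal sketch of that idea, in Python rather than RISC-V assembly (the function name is mine, the div/mod step stands in for whatever byte-isolation trick your environment allows, and it assumes ASCII letters packed 8 bits per character):
def to_upper_packed(n, num_letters):
    # n holds up to 8 ASCII letters packed into one integer, e.g. "aa" = 24929.
    result = 0
    place = 1                      # 256**i, built with multiplication only
    for _ in range(num_letters):
        code = (n // place) % 256  # isolate one character code with div/mod
        if 97 <= code <= 122:      # 'a'..'z': the uppercase code is exactly 32 lower
            code -= 32
        result += code * place
        place *= 256
    return result

print(to_upper_packed(24929, 2))   # "aa" (24929) -> "AA" (16705)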

How to convert alphabet to binary?

How do I convert the alphabet to binary? I searched on Google and it says to first convert each letter to its ASCII numeric value and then convert that numeric value to binary. Is there any other way to do the conversion?
And if that's the only way, then are the binary values of "A" and 65 the same?
Because the ASCII value of 'A' is 65, and 'A' converted to binary is 01000001,
and 65 = 01000001.
That is indeed the way in which text is converted to binary.
And to answer your second question, yes, it is true that the binary values of "A" and 65 are the same. If you are wondering how the CPU distinguishes between "A" and 65 in that case, you should know that it doesn't. It is up to your operating system and program to decide how to treat the data at hand. For instance, say your memory looked like the following, starting at 0 on the left and incrementing to the right:
00000001 00001111 00000001 01100110
This binary data could mean anything, and only has a meaning in the context of whatever program it is in. In a given program, you could have it be read as:
1. An integer, in which case you'll get one number.
2. Character data, in which case you'll output 4 ASCII characters.
In short, binary is read by CPUs, which do not understand the context of anything and simply execute whatever they are given. It is up to your program/OS to specify instructions in order for data to be handled properly.
Thus, converting the alphabet to binary is dependent on the program in which you are doing so, and outside the context of a program/OS converting the alphabet to binary is really the exact same thing as converting a sequence of numbers to binary, as far as a CPU is concerned.
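As a small Python sketch of that point, here are the same four bytes from the example above, read first as one integer and then as four character codes:
import struct

raw = bytes([0b00000001, 0b00001111, 0b00000001, 0b01100110])

(as_int,) = struct.unpack(">I", raw)   # one big-endian 32-bit integer
print(as_int)                          # 17760614

print(list(raw))                       # [1, 15, 1, 102]: SOH, SI, SOH, 'f' in ASCII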
The number 65 in decimal is 0100 0001 in binary, and it refers to the letter A in the binary alphabet table (ASCII): https://www.bin-dec-hex.com/binary-alphabet-the-alphabet-letters-in-binary. The easiest way to convert the alphabet to binary is to use an online converter, or you can do it manually with a binary alphabet table.

WTX CONVERT() function

How does the CONVERT() function work in WTX?
I have code like this. CONVERT(input element,"|||||||||||||||||||||||||||||||| |""#$%&'()*+,-.|0123456789:;|=|?#ABCDEFGHIJKLMNOPQRSTUVWXYZ|\||_|abcdefghijklmnopqrstuvwxyz{|}||" )
CONVERT takes 2 arguments: first, the bytes to be replaced; second, the replacement bytes. Syntax: CONVERT(bytes_to_replace, replacement_bytes).
It is usually used to convert data from ASCII to EBCDIC or from EBCDIC to ASCII.
The ordinal value of each byte in bytes_to_replace selects the nth byte in the replacement bytes.
E.g. if there is an 'A' (ordinal 65) in the input, it gets replaced by the 65th byte in the replacement bytes.
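The same lookup-by-ordinal idea can be sketched outside WTX; here is a rough Python illustration (the table below merely uppercases ASCII letters and is not a real ASCII-to-EBCDIC table):
# Position b of the table holds the byte that replaces byte value b.
table = bytes((b - 32) if 97 <= b <= 122 else b for b in range(256))

data = b"Hello, wtx!"
print(data.translate(table))   # b'HELLO, WTX!': each input byte indexes into the table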

Flash CS4/AS3: differing behavior between console and textarea for printing UTF-16 characters

trace(escape("д"));
will print "%D0%B4", the correct URL encoding for this character (Cyrillic equivalent of "A").
However, if I were to do..
myTextArea.htmlText += unescape("%D0%B4");
What gets printed is:
д
which is of course incorrect. Simply tracing the above unescape returns the correct Cyrillic character, though! For this textarea, escaping "д" returns its Unicode code point, "%u0434".
I'm not sure what exactly is happening to mess this up, but...
UTF-16 Ð´ in web encoding is: %FE%FF%00%D0%00%B4
Whereas
UTF-16 Ð´ in web encoding is: %00%D0%00%B4
So it's padding this value with something at the beginning. Why would a trace provide different text than a print to an (empty) textarea? What's goin' on?
The textarea in question has no weird encoding properties attached to it, if that sort of thing is even possible.
The problem is unescape (escape could also be a problem, but it's not the culprit in this case). These functions are not multibyte aware. What escape does is this: it takes each byte in the input string and returns its hex representation with a % prepended. unescape does the opposite. The key point here is that they work with bytes, not characters.
What you want is encodeURIComponent / decodeURIComponent. Both use utf-8 as the string encoding scheme (the encoding used by flash everywhere). Note that it's not utf-16 (which you shouldn't care about as far as flash is concerned).
encodeURIComponent("д"); //%D0%B4
decodeURIComponent("%D0%B4"); // д
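For comparison, the same behaviour outside Flash (a Python sketch; urllib's quote/unquote are rough analogues of encodeURIComponent/decodeURIComponent here):
from urllib.parse import quote, unquote

print(quote("д"))          # %D0%B4  (the percent-encoded UTF-8 bytes)
print(unquote("%D0%B4"))   # д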
Now, if you want to dig a bit deeper, here's what's going on (this assumes a basic knowledge of how utf-8 works).
escape("д")
This returns
%D0%B4
Why?
"д" is treated by flash as utf-8. The codepoint for this character is 0x0434.
In binary:
0000 0100 0011 0100
It fits in two utf-8 bytes, so it's encoded thus (where e means encoding bit, and p means payload bit):
1101 0000 1011 0100
eeep pppp eepp pppp
Converting it to hex, we get:
0xd0 0xb4
So, 0xd0,0xb4 is a utf-8 encoded "д".
This is fed to escape. escape sees two bytes, and gives you:
%d0%b4
Now, you pass this to unescape. But unescape is a little bit brain-dead, so it thinks one byte is one and the same thing as one char, always. As far as unescape is concerned, you have two bytes, hence, you have two chars. If you look up the code-points for 0xd0 and 0xb4, you'll see this:
0xd0 -> Ð
0xb4 -> ´
So, unescape returns a string consisting of two chars, Ð and ´ (instead of figuring out that the two bytes it got were actually just one char, utf-8 encoded). Then, when you assign the text property, you are not really passing д but rather Ð´, and this is what you see in the text area.
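The whole chain can be reproduced in a couple of lines (a Python sketch of the same byte-level confusion, not Flash code):
utf8_bytes = "д".encode("utf-8")
print(utf8_bytes.hex())               # d0b4: the two bytes that escape() sees
print(utf8_bytes.decode("latin-1"))   # Ð´: one char per byte, i.e. the textarea result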