Restricted low-ascii character in AS3 - actionscript-3

I'm having an error while trying to compile an .air file from Flash CS5.5. The error says:
Usage error (incorrect arguments)
Filename contains restricted low-ascii character 13:
I'm using a version of the file that compiles correctly on XP; now I'm trying to compile on OS X, but it doesn't. Any clue?
Thanks in advance

Different operating systems use different characters to mark the end of line:
Unix / Linux / OS X uses LF (line feed, '\n', 0x0A)
Macs prior to OS X use CR (carriage return, '\r', 0x0D)
Windows / DOS uses CR+LF (carriage return followed by line feed, '\r\n', 0x0D0A)
In your case, ASCII 13 = carriage return.
It is probable that one of your AIR project configuration files has an embedded carriage return that is causing this error.
You should try a utility such as linebreaks to normalize the line endings.
Less likely, but not impossible, is that one of your actual file names contains a carriage return.
In that case you will need to rename the files until you find the offending one.
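If you want to hunt for the culprit programmatically, here is a minimal Python sketch (the project path and the choice of .xml/.as extensions are placeholder assumptions) that flags a carriage return in either a file name or a project file's contents:

import os

ROOT = "/path/to/air/project"  # placeholder: your AIR project directory

for dirpath, dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        if "\r" in name:
            print("CR in file name:", repr(path))
        elif name.endswith((".xml", ".as")):
            # look for a raw carriage return inside descriptor/source files
            with open(path, "rb") as f:
                if b"\r" in f.read():
                    print("CR inside file:", path)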

Related

Do line endings matter when moving mysql database between windows and linux?

I am exporting a MySQL database on a Windows machine (running XAMPP) to then import it into a Linux server (via the command line or phpMyAdmin's import of "filename.sql").
The dbdump file has mixed LF/CRLF line endings, and I know Linux uses LF for line endings.
Will this cause a problem?
Thanks
I anticipate that the mysql program on each platform would expect "its" line-ending style. I honestly don't know whether, on each platform, it is "smart enough" to know what to do with each kind of file. (Come to think of it, maybe it is ...) Well, there's one sure way to find out ...
You say that the file has mixed(?!) line-endings? That's very atypical ...
However, there are ready-made [Unix/Linux] utilities, dos2unix and unix2dos, which can handle this problem – and I'm quite sure that Windows has them too. Simply run your dump-file through the appropriate one before using it.
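If you would rather not install anything, the same normalization is a few lines of Python (a sketch; file names are placeholders). Note that, like dos2unix, this rewrites every CRLF, including any that happen to sit inside quoted string data:

# Normalize CRLF (and any stray CR) to LF, leaving all other bytes alone.
with open("dump.sql", "rb") as f:        # placeholder input file
    data = f.read()
data = data.replace(b"\r\n", b"\n").replace(b"\r", b"\n")
with open("dump_unix.sql", "wb") as f:   # placeholder output file
    f.write(data)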
You can import a DB from Windows to Linux without a problem.
MySQL's SQL tokenizer skips all whitespace characters, according to ctype.h.
https://github.com/mysql/mysql-server/blob/8.0/mysys/sql_chars.cc#L94-L95
else if (my_isspace(cs, i))
  state_map[i] = MY_LEX_SKIP;
The my_isspace() function tests whether character i is whitespace in character set cs. In ASCII, for example, this includes:
space
tab
newline (\n)
carriage return (\r)
vertical tab (\v)
form feed (\f)
All of the whitespace characters are considered the same for this purpose. So there's no problem using CRLF or LF for line endings in SQL code.
But if your data (i.e. string values) contain different line endings, those line endings will not be converted.
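If you want to confirm whether a dump really does have mixed line endings before worrying about any of this, a quick byte count is enough; a minimal Python sketch (the file name is a placeholder):

with open("dump.sql", "rb") as f:
    data = f.read()
crlf = data.count(b"\r\n")
lf_only = data.count(b"\n") - crlf   # LFs that are not preceded by CR
print("CRLF endings:", crlf, "| bare LF endings:", lf_only)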

Unhandled exception: 'charmap' codec can't decode byte 0x81 in position 3852: character maps to <undefined>

So I downloaded this dataset from Kaggle, and when I try to import it, it shows the error above.
I opened it in Excel and even Notepad and saved it as UTF-8, but I still faced the error. Does this mean this dataset can only be opened with Python? I have not yet studied Python, but I wanted to do a few queries with SQL and some visualizations for my project.
https://www.kaggle.com/datasets/vardan95ghazaryan/top-250-football-transfers-from-2000-to-2018
The character set must be specified in multiple places:
The client
The table definition (or defaulted from the database)
and maybe other places.
For further discussion, please show the line that is in question, plus the hex of that line, plus what you expect the line to say.
I found this in the Kaggle download; there are doubtless other issues:
Diego Tristán
The á character in that name is encoded as hex E1, implying that it is one of these encodings: cp1250, dec8, latin1, latin2, latin5. (It is likely to be latin1.)
Your Workbench setup was (apparently) configured to assume that any data coming at it would be UTF-8. When it saw the E1, it croaked because that is not valid UTF-8.
Find out how you can configure "imports". It should allow you to change the "character set"; change that to "latin1". Then try the import again.
Meanwhile, complain to Kaggle that UTF-8 is becoming the de facto standard and they should change their data to that encoding.
You say you "saved as UTF-8"; if so, can you provide me with that file? I'll do a similar analysis.
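For reference, that analysis is easy to reproduce yourself. A small Python sketch (the file name is a placeholder; raw.hex(" ") needs Python 3.8+) prints the hex of the first line that is not valid UTF-8, along with its latin1 reading:

with open("transfers.csv", "rb") as f:       # placeholder file name
    for lineno, raw in enumerate(f, 1):
        try:
            raw.decode("utf-8")
        except UnicodeDecodeError:
            print(lineno, raw.hex(" "))          # hex of the offending line
            print(lineno, raw.decode("latin1"))  # latin1 decodes any byte
            break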

MySQL - Table Data Import Wizard error in MacOS "Unhandled exception: 'ascii' codec can't decode byte 0xef in position 0: ordinal not in range(128)"

I am unable to load any CSV file into MySQL. Using the Table Data Import Wizard, this error pops up every time I get to the 'Configure Import Settings' step:
"Unhandled exception: 'ascii' codec can't decode byte 0xef in position 0: ordinal not in range(128)"
... even though the CSV is encoded as UTF-8, and that seems to be the default encoding setting for MySQL Workbench. Granted, I am not very skilled with computers; I have only a few weeks' exposure to MySQL. This has not always happened to me: I had no issues with this a couple of months ago while I was taking a database management course.
But I think this is where my problem lies: at one point I tried to uninstall MySQL Workbench and Community Server and re-installed them, and ever since, this error happens every time I try to load data. I am even using a very basic test file that still won't load (all column types are set to 'Text' in Excel and saved as a UTF-8 CSV).
I am using MySQL 8.0.28 on MacOS 11.5.2 (Big Sur)
Case 1, you wanted ï ("LATIN SMALL LETTER I WITH DIAERESIS"):
Character set ASCII is not adequate for the accented letters you have. You probably need latin1.
Case 2, the first 3 bytes of the file are (hex) EF BB BF:
That is a "BOM", a marker at the beginning of the file that indicates that it is encoded in UTF-8. But, apparently, the program reading it does not handle it.
In some situations, you can remove the 3 bytes and proceed; in other situations, you need to read it using some UTF-8 setting.
Since you say "'Text' in Excel and saved as UTF-8 CSV", I suspect that it is case 2. But that only addresses the source (Excel), over which you may not have enough control to get rid of the BOM.
I don't know which app has the "Table Data Import Wizard", so I cannot address the destination side of the problem. Maybe the wizard has a setting of UTF-8 or utf8mb4 or utf8; any of those might work instead of "ascii".
Sorry, I don't have the full explanation, but maybe the clues "BOM" or "EFBBBF" will help you find a solution either in Excel or in the Wizard.
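For what it's worth, if you can preprocess the file, stripping the BOM yourself is only a few lines of Python (a sketch; file names are placeholders):

with open("data.csv", "rb") as f:         # placeholder input file
    data = f.read()
if data.startswith(b"\xef\xbb\xbf"):      # the UTF-8 BOM (codecs.BOM_UTF8)
    data = data[3:]
with open("data_nobom.csv", "wb") as f:   # placeholder output file
    f.write(data)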
I was able to solve it by saving my Excel file to CSV using the "MS-DOS CSV" and "Macintosh CSV" formats. After that, I was able to import my CSV through the Import Wizard without the bug.

How to port app from Borland Pascal to FreePascal and Unicode terminal

I am trying to port my first app I ever wrote from old Borland Pascal to FreePascal and run it in Linux unicode shell.
Unfortunately, the app uses the CRT unit and writes non-standard ASCII graphical characters, so I tried to rewrite statements like these:
gotoxy(2,3); write(#204);
writeln('3. Intro');
to these:
gotoxy(2,3); write('╠');
write('3. Intro', #10);
Two notes:
I use Unicode characters directly in the code because I could not find out how to write Unicode characters via their code points.
I used the write procedure instead of writeln to make sure that Unix line endings are produced.
But after replacing all the non-standard ASCII characters and getting rid of all the writeln statements, it became even worse.
(Screenshots of the menu before and after the changes are omitted here.)
Why does it end up like this? What can I do better?
After some time, here is an update on what I found out.
1) I cannot port it
As user dmsc rightly pointed out, CRT does not support UTF-8, and his suggested hack did not work for me.
2) When you can't port it, emulate the environment.
The graphical characters I needed were part of CP-437. There is a program called luit that is made for converting application output from the locale's encoding into UTF-8. Unfortunately, this did not work for me; it simply erased the characters:
# Via iconv, everything is OK:
$ printf "top right corner in CP437: \xbf \n" | iconv -f CP437 -t UTF-8
top right corner in CP437: ┐
# But not via luit, which simply omits the character:
$ luit -gr g2 -g2 'CP 437' printf "top right corner in CP437: \xbf \n"
top right corner in CP437:
So my solution is to run gnome-terminal, add and set Hebrew (IBM862) encoding (tutorial here) and enjoy your app!
The CRT unit does not currently work with UTF-8, as it assumes that each character on the screen is exactly one byte; see http://www.freepascal.org/docs-html-3.0.0/rtl/crt/index.html
But, simple applications can be made to work by "tricking" GotoXY into always doing a full cursor positioning, like this:
GotoXY(1,1);  { jump to a known position first ... }
GotoXY(x, y); { ... so this call always emits a full absolute positioning }
To replace all the strings in your source file, you can use recode; in a terminal, type:
recode cp437..u8 < original.pas > fixed.pas
Then you need to replace all the numeric character codes (like your #204 example) with the equivalent UTF-8; to find the replacement for one code, you can use:
echo -e '\xCC' | recode cp437/..u8
The 'CC' is hexadecimal for 204, and as a result the character '╠' will be printed.
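If recode is not at hand, Python's cp437 codec performs the same table lookup; a minimal sketch:

# Map CP437 code points (such as #204) to their Unicode characters.
for code in (176, 177, 178, 186, 200, 204):   # a few box-drawing codes
    print(code, bytes([code]).decode("cp437"))
# 204 prints '╠' (U+2560), matching the #204 example above.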

Migrating MS Access data to MySQL: character encoding issues

We have an MS Access .mdb file produced, I think, by an Access 2000 database. I am trying to export a table to SQL with mdbtools, using this command:
mdb-export -S -X \\ -I orig.mdb Reviewer > Reviewer.sql
That produces the file I expect, except for one thing: some of the characters are represented as question marks. For example, "He wasn't ready" shows up as "He wasn?t ready". It happens only in some cases (primarily single/double curly quotes), where maybe the content was pasted into the DB from MS Word. Otherwise, the data look great.
I have tried various values for "export MDB_ICONV=". I've tried using iconv on the resulting file, with ISO-8859-1 in the from/to, with UTF-8 in the from/to, with WINDOWS-1250 and WINDOWS-1252 and WINDOWS-1256 in the from, in various combinations. But I haven't succeeded in getting those curly quotes back.
Frankly, based on the way the resulting file looks, I suspect the issue is either in the original .mdb file, or in mdbtools. The malformed characters are all single question marks, but it is clear that they are not malformed versions of the same thing; so (my gut says) there's not enough data in the resulting file; so (my gut says) the issue can't be fixed in the resulting file.
Has anyone run into this one before? Any tips for moving forward? FWIW, I don't have and never have had MS Access -- the file is coming from a 3rd party -- so this could be as simple as changing something on the database, and I would be very glad to hear that.
Thanks.
Looks like "smart quotes" have claimed yet another victim.
MS Word takes plain ASCII quotes and translates them to the curly left-quote and right-quote characters, and translates a single quote into the curly apostrophe character. The characters in question belong to an MS code page (Windows-1252) which is roughly compatible with ISO-8859-1 and the first 256 Unicode code points, except for the range where these silly quote characters live.
There is a Perl script called 'demoroniser.pl' which undoes all this malarkey and converts the quotes back to plain ASCII.
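If tracking that script down is a hassle, the heart of it is just a translation table. Here is a Python sketch along the same lines (the mappings shown cover the usual Windows-1252 suspects, not an exhaustive list):

# Replace "smart" punctuation with plain ASCII equivalents.
SMART = {
    "\u2018": "'", "\u2019": "'",   # curly single quotes / apostrophe
    "\u201c": '"', "\u201d": '"',   # curly double quotes
    "\u2013": "-", "\u2014": "--",  # en dash / em dash
    "\u2026": "...",                # ellipsis
}

def demoronize(text):
    return text.translate(str.maketrans(SMART))

print(demoronize("He wasn\u2019t ready"))  # -> He wasn't ready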
It's most likely due to the fact that the text in the Access file is stored as Unicode, and MDB Tools is trying to convert it to ASCII, latin1, ISO-8859-1, or some other single-byte encoding. Since these encodings don't map all the Unicode characters properly, you end up with question marks. The information here may help you fix your encoding issues by getting MDB Tools to use the correct encoding.