I'm using VBScript via HP UFT (formerly QTP).
I'm facing an issue which looks pretty simple, but I couldn't fix it.
I have .CSV files exported from some system, and there is no final CRLF in the file.
I need a simple fix to append a new line to this file (I know a write-to-another-file workaround is possible).
I'm using FileSystemObject like this:
Set objFile = objFSO.OpenTextFile(outFile, 8) ' (outFile, 8, true/false/default)
objFile.Write "test string" & vbCrLf ' and other different combinations
I didn't use ADODB.Stream because it has no append function and I have no need for additional files.
When I open the file in Notepad after my attempts, I see empty squares instead of CRLF. I think this is because the file was created with UCS-2 Little Endian encoding. I have no such issue with UTF-8.
PS: maybe a quicker fix via a system variable is possible? I found online that it is possible to change the default encoding for all created files via some system variable, but I didn't find its name.
My setting in Region and Language -> Administrative -> Language for non-Unicode programs is English.
When in doubt, read the documentation:
Syntax
object.OpenTextFile(filename[, iomode[, create[, format]]])
Arguments
[...]
format
Optional. One of three Tristate values used to indicate the format of the opened file (TristateTrue = -1 to open the file as Unicode, TristateFalse = 0 to open the file as ASCII, TristateUseDefault = -2 to open the file as the system default). If omitted, the file is opened as ASCII.
You open the file for appending, but don't specify the encoding, so the interpreter assumes ASCII format. Change the line
Set objFile = objFSO.OpenTextFile(outFile,8)
to
Set objFile = objFSO.OpenTextFile(outFile, 8, False, -1)
and the problem will disappear.
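Put together, a minimal sketch of the whole append (assuming outFile already holds the path to the existing UCS-2 / UTF-16 LE CSV) might look like this:
Const ForAppending = 8
Const TristateTrue = -1 ' open the file as Unicode (UTF-16 LE)

Dim objFSO, objFile
Set objFSO = CreateObject("Scripting.FileSystemObject")

Set objFile = objFSO.OpenTextFile(outFile, ForAppending, False, TristateTrue)
objFile.Write "test string" & vbCrLf ' the CRLF is now written in the file's own encoding
objFile.Close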
I am attempting to import a CSV file which is in French into my US-based analysis. I have noticed several issues in the import related to the use of accents. I put the CSV file into a text reader and found that the data look like this.
I am unsure how to get rid of the [sub] pieces and format this properly.
I am on SAS 9.3 and am unable to edit the CSV, as it is shared with French researchers. I am also limited in what I can do in terms of additional languages within SAS because of admin rights.
I have tried the following fixes:
data want(encoding=asciiany);
set have;
comment= Compress(comment,'0D0A'x);
comment= TRANWRD(comment,'0D0A'x,'');
comment= TRANWRD(comment,'0D'x,'');
comment= TRANWRD(comment,"\u001a",'');
How can I resolve these issues?
While this would have been a major issue a few decades ago, nowadays it's very simple to determine the encoding and then run your SAS session in the right mode.
First, open the CSV in a text editor, not the basic Notepad but almost any other; Notepad++ is free, for example, or Ultraedit or Textpad, on Windows, or on the Mac, BBEdit, or several others will do. I'll assume Notepad++ for the rest of this answer, but all of them have some way of doing this. If you're in a restricted no-admin-rights environment, good news: Notepad++ can be installed in your user folder with no admin rights (or even on a USB!). (Also, an advanced text editor is a vital data science tool, so you should have one anyway.)
In Notepad++, once you open the file there will be an encoding in the bottom right: "UTF-8", "WLATIN1", "ASCII", etc., depending on the encoding of the file. Look and see what that is, and write it down.
Once you have that, you can try starting SAS in that encoding. For the rest of this, I assume it is UTF-8 as that is fairly standard, but replace UTF-8 with whatever encoding you determined earlier.
See this article for more details; the instructions are for 9.4, but they have been the same for years. If this doesn't work, you'll need to talk to your SAS administrator, and they may need to modify your SAS installation.
You can either:
Make a new shortcut (a copy of the one you run SAS with) and add -encoding UTF-8 to the command line
Create a new configuration file, point SAS to it, and include ENCODING=UTF-8 in the configuration file.
Note that this will have some other impacts - the datasets you create will be encoded in UTF-8, and while SAS is capable of handling that, it will add some extra notes to the log and some extra time if you later work with these datasets in non-UTF-8 SAS, or if you use non-UTF-8 SAS datasets in this mode.
This worked:
data want;
  array f[8] $4 _temporary_ ('ä' 'ö' 'ü' 'ß' 'Ä' 'Ö' 'Ü' 'É'); /* characters to replace */
  array t[8] $4 _temporary_ ('ae' 'oe' 'ue' 'ss' 'Ae' 'Oe' 'Ue' 'E'); /* their transliterations */
  set have;
  newvar = oldvar;
  newvar = compress(newvar, '0D0A'x);   /* drop CR and LF bytes */
  newvar = tranwrd(newvar, '0D0A'x, '');
  newvar = tranwrd(newvar, '0D'x, '');
  newvar = tranwrd(newvar, '1A'x, '');  /* drop the SUB (0x1A) control character */
  newvar = compress(newvar, , 'kw');    /* keep only printable characters */
  do _n_ = 1 to dim(f);
    newvar = tranwrd(newvar, trim(f[_n_]), trim(t[_n_]));
  end;
run;
I use DoCmd.TransferText to import some CSV files into my database, though I am running into some problems.
I get an error popup saying that Access is unable to find the file; the suggested causes are the usual "file does not exist // contains symbols or punctuation // name is too long".
Experimenting has shown that the issue is the file path being too long, in some cases over 230 characters (the files are saved on a network with a badly optimized hierarchy beyond my control).
I have done some experimenting, and it seems that 208 characters is the limit. The CSVs are automatically generated, and the names can be shortened slightly, though that won't always help much, as they still need to be easily identifiable.
Is there a solution which would allow importing files with a path longer than 208 characters? Insisting that the file names be kept short doesn't seem like the best long-term solution.
Thanks for any feedback!
Edit: I currently have the below code.
file = "\\Long\File\Path\FileName.txt"
path = Left(file, InStrRev(file, "\"))
newfile = Right(file, Len(file) - InStrRev(file, "\"))
Shell "subst Z: " & Chr(34) & path & Chr(34)
fullpath = "Z:\" & newfile
DoCmd.TransferText TransferType:=acImport, TableName:="tbl_name", FileName:=fullpath, HasFieldNames:=True
Shell "subst Z: /d"
You can call the good ol' DOS command Subst before or while running your application:
Subst x: f:\some\very\long\path
Now x: will have that long path folder as its root.
Then export to drive x:
When finished, call:
Subst x: /d
to remove drive x:.
Use Shell from inside Access:
Shell "Subst x: f:\some\very\long\path"
I have a CSV file that gets generated by a Mac program (unfortunately, with little encoding flexibility) which writes LFs at the ends of lines. Then a VBScript reads this file like so:
Set objTextFile = fso.OpenTextFile("the_file.csv", 1)
lineItemString = objTextFile.Readline
However, since it is looking for CRLF at the end of the lines, lineItemString contains the text of the entire file. Since this is a daily procedure, I'd prefer not to add an interim step of running some utility program that converts all the line endings to CRLF.
Is there a way to avoid this by doing the conversion from within my VBScript?
Thanks in advance!
This will replace each LF in a string with CRLF:
Replace(str, vbLf, vbCrLf)
Depending on how you want to process the file it might be easier to just read the entire file and split the content by vbLf, though.
Set objTextFile = fso.OpenTextFile("the_file.csv", 1)
For Each line In Split(objTextFile.ReadAll, vbLf)
' do stuff
Next
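And if you would rather rewrite the file once so it really does contain CRLF line endings, a minimal sketch (assuming the file fits comfortably in memory and only ever contains bare LFs) could be:
Const ForReading = 1, ForWriting = 2

Dim fso, ts, content
Set fso = CreateObject("Scripting.FileSystemObject")

' Read the whole file, normalise the line endings, then write it back
Set ts = fso.OpenTextFile("the_file.csv", ForReading)
content = ts.ReadAll
ts.Close

content = Replace(content, vbLf, vbCrLf)

Set ts = fso.OpenTextFile("the_file.csv", ForWriting)
ts.Write content
ts.Close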
I am creating CSV files with PHP. To write the data into my CSV file, I use the PHP function fputcsv.
This is the issue:
I can open the created file normally with Excel, but I can't import the file into a shop system (in this case Shopware). It says something like "the data could not be read".
And here is the strange part:
If I open the created file, choose "Save as", and select "CSV (comma delimited)" as the type, this file can be imported into Shopware. I read something about the PHP function mb_convert_encoding, which I used to encode the data, but it could not fix the problem.
I will be very glad if you can help me.
Thanks.
Thanks for your input.
I solved this problem by replacing fputcsv with fwrite. Then I just needed to add "\r\n" (thanks wmil) to the end of each line, and the generated file can be read by Shopware.
Apparently the fputcsv function uses \n and not \r\n as the EOL character.
I think you cannot set the encoding using fputcsv. However, fputcsv looks at the locale setting, which you can change with setlocale.
Maybe you could send your file directly to the user's browser and set the content type and charset with the header function.
This can't be answered without knowing more about your system. Most likely it has nothing to do with character encoding. It's probably a problem with the wrong number of columns or incorrect column headers.
If it is a character encoding issue, your best bet is:
$new_str = mb_convert_encoding($str, 'Windows-1252', 'auto');
Also, end lines with \r\n, not just \n.
If that doesn't work you'll need to check the software docs.
How do I set the display encoding in MacVim?
Here is the mess I get when I open a Lua file which was created on Windows XP:
gControlMode = 0; -- 1£º¿ªÆôÖØÁ¦¸ÐÓ¦£¬ 0:¿ª´¥ÆÁģʽ
gState = GS_GAME;
sTotalTime = 0; --µ±Ç°¹Ø¿¨»¨µÄ×Üʱ¼ä
The text you posted seems like a Latin-1 (or ISO-8859-1, CP819) decoding of the CP936 encoding (or EUC-CN, or GB18030 encodings [1]) of this text [2]:
gControlMode = 0; -- 1:开启重力感应, 0:开触屏模式
gState = GS_GAME;
sTotalTime = 0; --当前关卡花的总时间
When opening a file, Vim tries the list of encodings specified in the fileencodings option. Usually, latin1 is the last value in this list; reading as Latin-1 will always be successful since it is an 8-bit encoding that maps all 256 values. Thus, Vim is opening your CP936 encoded file as Latin-1.
You have several choices for getting Vim to use another encoding:
You can specify an encoding with the ++enc= option to Vim’s :edit command (this will cause Vim to ignore the fileencodings list for the buffer):
:e ++enc=cp936 /path/to/file
You can apply this to an already-loaded file by leaving off the path:
:e ++enc=cp936
You can add your preferred encoding to fileencodings just before latin1 (e.g. in your ~/.vimrc):
let &fileencodings = substitute(&fileencodings, 'latin1', 'cp936,\0', '')
You can set the encoding option to your desired encoding. This is usually discouraged because it has wide-ranging impacts (see :help encoding).
It might make sense, if possible, to switch your files to UTF-8 since many editors will properly auto-detect UTF-8. Once you have the file loaded properly (see above), Vim can do the conversion like this (set fileencoding, then :write):
:set fenc=utf-8 | w
Vim should pretty much automatically handle reading and writing UTF-8 files (encoding defaults to UTF-8, and utf-8 is in the default fileencodings), but if you are using other editors (i.e. whatever Windows editor edited/created the CP936 file(s)), you may need to configure them to use UTF-8 instead of (e.g.) CP936.
[1] I am not familiar with the encodings used for Chinese text; these encodings seem to be identical for the “expected” text.
[2] I do not read Chinese, but the presence and locations of the FULLWIDTH COLON and FULLWIDTH COMMA (and Google's translation of this text) make me think this is the text you expected.