CSV data got truncated by SAS

I'm using SAS University Edition 9.4.
This is my CSV data:
,MGAAAAAAAA,3,A0000B 2F1
11111,ハアン12222234222B56122,4,AA 0000
,テストデータ,5,AACHY 2410F1
,テストデタテストテ,5,AACHYF2
This is my infile statement.
data wk01;
infile '/folders/myfolders/data/test_csv.txt'
dsd delimiter=','
lrecl=1000 missover firstobs=1;
input firstcol :$ secondcol :$ thirdcol :$ therest :$;
run ;
I expected my result to contain the full values. But after executing SAS, what I got is as below (some rows/columns had their data truncated by SAS).
For example, the first row's second column is MGAAAAAAAA, but SAS's output is MGAAAAAA.
Could you please point out what I am missing here? Thanks a lot.

The values of your variables are longer than the 8 bytes you are allowing for them. The UTF-8 characters can use up to 4 bytes each. Looks like some of them are getting truncated in the middle, so you get an invalid UTF-8 code.
Just define longer lengths for your variables instead of letting SAS use the default length of 8. In general it is best to explicitly define your variables with a LENGTH or ATTRIB statement, instead of forcing SAS to guess how to define them based on how you first use them in other statements like INPUT, FORMAT, INFORMAT, or assignment.
data wk01;
infile '/folders/myfolders/data/test_csv.txt' dsd dlm=',' truncover ;
length firstcol $8 secondcol $30 thirdcol $30 therest $100;
input firstcol secondcol thirdcol therest;
run ;
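The byte arithmetic behind the truncation is easy to check outside SAS. As a neutral illustration (Python here, since the point is about UTF-8 itself rather than any SAS feature), each katakana character in the sample data encodes to 3 bytes, so a 6-character value needs 18 bytes, far more than the default length of 8:

```python
# Illustration of why an 8-byte default length truncates UTF-8 text.
# Each Japanese katakana character takes 3 bytes in UTF-8.
value = "テストデータ"          # 6 characters from the sample data
encoded = value.encode("utf-8")
print(len(value))              # 6 characters
print(len(encoded))            # 18 bytes - far more than the default 8

# Cutting at a fixed byte boundary (what the short length effectively does)
# can split a character in half, leaving an invalid UTF-8 sequence:
truncated = encoded[:8]
print(truncated.decode("utf-8", errors="replace"))
```

This is why the LENGTH statement in the answer sizes the character variables generously: lengths in SAS are in bytes, not characters.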

I think what you have is a mixed-encoding problem. What's essentially happening is that after the first 5 characters, which are in ASCII, the text changes to UTF-8. The commas are getting mixed up in this soup and your standard delimiter gets a bit confused here. I think you need some manual coding like this to deal with it:
data wk01;
infile "test.csv" lrecl=1000 truncover firstobs=1;
input text $utf8x70.;
firstcomma = findc(text,',', 1);
secondcomma = findc(text,',', firstcomma + 1);
thirdcomma = findc(text,',', secondcomma + 1);
fourthcomma = findc(text,',', thirdcomma + 1);
length firstcol $5;
length secondcol $30;
length thirdcol $1;
length fourthcol $30;
firstcol= substr(text,1, firstcomma - 1);
secondcol = substr(text, firstcomma + 1, (secondcomma -firstcomma-1 ));
thirdcol = substr(text, secondcomma + 1, (thirdcomma - secondcomma - 1));
fourthcol = substr(text, thirdcomma + 1);
run;
Probably there is a cleaner way to do it, but this is the quick and dirty method I could come up with at 2 AM :)
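The position-based splitting in that DATA step is language-agnostic: locate each delimiter, then slice between the positions. As a neutral sketch (Python rather than SAS, using one of the sample rows from the question):

```python
# Same idea as the FINDC/SUBSTR approach above: locate the delimiters by
# position and slice the record manually instead of relying on list input.
line = "11111,ハアン12222234222B56122,4,AA 0000"

first = line.find(",")
second = line.find(",", first + 1)
third = line.find(",", second + 1)

firstcol = line[:first]
secondcol = line[first + 1:second]
thirdcol = line[second + 1:third]
therest = line[third + 1:]
print(firstcol, thirdcol, therest)   # 11111 4 AA 0000
```

Note that Python slices by character while SAS SUBSTR on a byte-encoded string works in bytes, which is exactly why the multi-byte characters cause trouble in the first place.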

Related

Proc json produces extra blanks after applying a format

I would like to export a SAS dataset to JSON. I need to apply the commax10.1 format to make it suitable for some language versions. The problem is that the fmtnumeric option applies the format correctly but inserts extra blanks inside the quotes. I have tried trimblanks and other options but have not been able to get rid of them. How can I delete the empty blanks inside the quotes? Note: I would like the values to remain inside the quotes.
In addition, is it possible to replace the null values with ""?
Sample data:
data testdata_;
input var1 var2 var3;
format _all_ commax10.1;
datalines;
3.1582 0.3 1.8
21 . .
1.2 4.5 6.4
;
proc json out = 'G:\test.json' pretty fmtnumeric nosastags trimblanks keys;
export testdata_;
run;
Use a custom format function that strips the leading and trailing spaces.
Example:
proc fcmp outlib=work.custom.formatfunctions;
function stripcommax(number) $;
return (strip(put(number,commax10.1)));
endsub;
run;
options cmplib=(work.custom);
proc format;
value commaxstrip other=[stripcommax()];
run;
data testdata_;
input var1 var2 var3;
datalines;
3.1582 0.3 1.8
21 . .
1.2 4.5 6.4
;
proc json out = 'test.json'
pretty
fmtnumeric
nosastags
keys
/* trimblanks */
;
format var: commaxstrip.;
export testdata_;
run;
data _null_;
infile 'test.json';
input;
put _infile_ ;
run;
trimblanks only trims trailing blanks. The format itself is adding leading blanks, and proc json has no option that I am aware of to remove leading blanks.
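The leading blanks come from the format's fixed width: a 10-wide numeric format right-justifies the value, padding on the left, and that padding ends up inside the JSON quotes. The effect is easy to reproduce in any language with fixed-width formatting (Python here, purely as an illustration):

```python
# A fixed-width numeric format right-justifies its output, which is where
# the leading blanks inside the JSON string values come from.
formatted = format(3.2, ">10")    # analogous to a 10-wide SAS format
print(repr(formatted))            # '       3.2'
print(repr(formatted.strip()))    # '3.2' - stripping both sides fixes it
```

Trimming only trailing blanks (as trimblanks does) therefore cannot help; the strip has to happen on both sides, which is what the custom FCMP format function accomplishes.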
One option would be to convert all of your values to strings, then export.
data testdata_;
input var1 var2 var3;
format _all_ commax10.1;
array var[*] var1-var3;
array varc[3] $;
do i = 1 to dim(var);
varc[i] = strip(compress(put(var[i], commax10.1), '.') );
end;
keep varc:;
datalines;
3.1582 0.3 1.8
21 . .
1.2 4.5 6.4
;
This would be a great feature request. I recommend posting this to the SASWare Ballot Ideas or contact Tech Support and let them know of this issue.
This is not the only issue with proc json - other challenges include line length limits for a SAS 9 _webout destination, proc failures when ingesting invalid characters, and the inability to append (mod) to a destination.
For that reason in the SASjs framework and Data Controller we tend to revert to a data step approach.
Our macro is open source and available here: https://core.sasjs.io/mp__jsonout_8sas.html
To send formatted values, invoke as follows:
data testdata_;
input var1 var2 var3;
format _all_ commax10.1;
datalines;
3.1582 0.3 1.8
21 . .
1.2 4.5 6.4
;
filename _dest "/tmp/test.json";
/* open the JSON */
%mp_jsonout(OPEN,jref=_dest)
/* send the data */
%mp_jsonout(OBJ,testdata_,jref=_dest,fmt=Y)
/* close the JSON */
%mp_jsonout(CLOSE,jref=_dest)
/* display result */
data _null_;
infile _dest;
input;
putlog _infile_;
run;
Which gives:

%MACRO TO IMPORT CSV FILES INTO SAS

I have hundreds of CSV files that I need to import into SAS as .sas7bdat files. I don't want to do it manually as it is time consuming. I'm trying to write a %macro in SAS using a DATA step but don't know how to specify the correct format and length for each variable. I worry that if I incorrectly specify the length for one of the variables, some data won't be read correctly.
Here is an example:
Data_1:
A,B,C,D,E
2, Paul Twix, 5/9/2015, 2, 238
2, Paul Twix, 5/10/2015, 3, 238
2, Paul Twix, 5/11/2015, 4, 238
Data_2:
A,B,C,D,E
2345678, Carolina Ferrera, 5/9/2015, 22, 123
2345678, Carolina Ferrera, 5/10/2015, 23, 123
2345678, Carolina Ferrera, 5/11/2015, 24, 123
I thought of running this code first to determine the max length, but again I can only check a handful of files.
proc sql noprint ;
create table varlist as
select memname,varnum,name,type,length,format,format as informat,label
from dictionary.columns
where libname='WORK' and memname='Data_1'
;
quit;
When I have a handful of files, I can manually adjust the length of the character variables, but if I have many files and I specify the format of the variables based on the first file, some variables will be trimmed. Here is an example:
%macro import_main(inf,outdat);
DATA &outdat.;
INFILE &inf.
LRECL=32767 firstobs=2
TERMSTR=CRLF
DLM=','
MISSOVER
DSD ;
INPUT
A : ?? BEST1.
B : $CHAR9.
C : ?? MMDDYY9.
D : ?? BEST1.
E : ?? BEST3. ;
FORMAT C YYMMDD10.;
RUN;
%mend import_main;
filename inf1 'C:\SAS_data_1.csv';
filename inf2 'C:\SAS_data_2.csv';
%import_main(inf1, work.SAS_data_1);
%import_main(inf2, work.SAS_data_2);
This code correctly displays the values for SAS_data_1 but incorrectly displays the name strings in SAS_data_2.
Is there any way to avoid this error in the %macro?
Thank you.

How to import a CSV file if it includes both delimiters / and ,

I have a file with mixed delimiters , and /. When I import it into SAS with the following data step:
data SASDATA.Publications ;
infile 'R:/Lipeng_Wang/PATSTAT/Publications.csv'
DLM = ','
DSD missover lrecl = 32767
firstobs = 3 ;
input pat_publn_id :29.
publn_auth :$29.
publn_nr :$29.
publn_nr_original :$29.
publn_kind :$29.
appln_id :29.
publn_date :YYMMDD10.
publn_lg :$29.
publn_first_grant :29.
publn_claims :29. ;
format publn_date :YYMMDDd10. ;
run ;
the SAS log shows that:
NOTE: Invalid data for appln_id in line 68262946 33-34.
NOTE: Invalid data for publn_date in line 68262946 36-44.
RULE: ----+----1----+----2----+----3----+----4----+----5----+----6----+----7----+----8----+----9
68262946 390735978,HK,1053433,09/465,054,A1,275562685,2010-03-26, ,0,0 62
pat_publn_id=390735978 publn_auth=HK publn_nr=1053433 publn_nr_original=09/465 publn_kind=054
appln_id=. publn_date=. publn_lg=2010-03-26 publn_first_grant=. publn_claims=0 _ERROR_=1
_N_=68262944
NOTE: Invalid data for appln_id in line 68280355 33-34.
NOTE: Invalid data for publn_date in line 68280355 36-44.
68280355 390753387,HK,1092990,60/523,466,A1,275562719,2010-03-26, ,0,0 62
pat_publn_id=390753387 publn_auth=HK publn_nr=1092990 publn_nr_original=60/523 publn_kind=466
appln_id=. publn_date=. publn_lg=2010-03-26 publn_first_grant=. publn_claims=0 _ERROR_=1
_N_=68280353
It seems that I need to put '60/523,466' into the field publn_nr_original, but how should I do that?
Your program code has two obvious issues.
First, your syntax on the FORMAT statement is wrong. The : modifier is a feature of the INPUT or PUT statement syntax and should not be used in a FORMAT statement.
Second, you are trying to read 29 digits into a number. You cannot store 29 digits accurately into a number in SAS. If those values are really longer than 15 digits you will need to read them into character variables. And if they really are smaller numbers (that could be stored as numbers) then you don't need to include an informat specification in the INPUT statement. SAS already knows how to read numbers from text files. In list mode the INPUT statement will ignore the width on the informat anyway.
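The 15-digit limit is a property of the underlying 8-byte IEEE 754 floating point, not of SAS specifically. A quick check in Python, which uses the same double-precision representation, shows where integers stop being exact:

```python
# SAS numerics are 8-byte IEEE 754 doubles, the same as Python floats.
# Integers are exact only up to 2**53 (about 9.0e15); beyond roughly
# 15-16 digits, distinct integers collapse to the same stored value.
print(float(2**53) == float(2**53 + 1))    # True - 16-digit values collide
print(float(10**15) == float(10**15 + 1))  # False - 15 digits still exact
```

So a 29-digit identifier silently loses its low-order digits if stored as a number, which is why IDs that long belong in character variables.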
But your error message looks to be caused by an improperly formed file. I suspect that one of the first 6 columns has a comma in its value, but whoever created the data file forgot to add quotes around the value with the comma. If you can figure out which field the comma should be in then you might be able to parse the line in a way that it can be used.
Here is one method that might work assuming that the commas only appear in the publn_nr_original variable and that at most one comma will appear.
data want ;
infile cards dsd truncover firstobs=3;
length
pat_publn_id $30
publn_auth $30
publn_nr $30
publn_nr_original $30
publn_kind $30
appln_id $30
publn_date 8
publn_lg $30
publn_first_grant $30
publn_claims $30
;
informat publn_date YYMMDD10. ;
format publn_date YYMMDDd10. ;
input @;
if countw(_infile_,',','mq')<= 10 then input pat_publn_id -- publn_claims ;
else do ;
list ;
input pat_publn_id -- publn_nr_original xxx :$30. publn_kind -- publn_claims ;
publn_nr_original=catx(',',publn_nr_original,xxx);
drop xxx;
end;
cards4;
Header1
Header2
1,22,333,4444,55,6666,2010-03-26,77,8,9999
390735978,HK,1053433,09/465,054,A1,275562685,2010-03-26, ,0,0
390735978,HK,1053433,"09/465,054",A1,275562685,2010-03-26, ,0,0
390753387,HK,1092990,60/523,466,A1,275562719,2010-03-26, ,0,0
;;;;
But the real solution is to fix the process that created the file. So instead of having a line like this in the file:
390735978,HK,1053433,09/465,054,A1,275562685,2010-03-26, ,0,0
The line should have looked like this:
390735978,HK,1053433,"09/465,054",A1,275562685,2010-03-26, ,0,0
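The repair logic above (count the fields; if there is one too many, glue the two pieces around the known bad column back together) can be sketched generically. Python here, purely to show the algorithm the DATA step implements; the column index and field count are those from the question's file layout:

```python
import csv
import io

EXPECTED = 10            # fields per well-formed record
BAD_COL = 3              # 0-based index of publn_nr_original

def fix_line(line):
    # csv.reader honors quoting, like the 'q' modifier of COUNTW in SAS
    fields = next(csv.reader(io.StringIO(line)))
    if len(fields) == EXPECTED + 1:
        # an unquoted comma split publn_nr_original in two - rejoin it
        fields[BAD_COL] = fields[BAD_COL] + "," + fields[BAD_COL + 1]
        del fields[BAD_COL + 1]
    return fields

good = fix_line('390735978,HK,1053433,"09/465,054",A1,275562685,2010-03-26, ,0,0')
bad = fix_line('390735978,HK,1053433,09/465,054,A1,275562685,2010-03-26, ,0,0')
print(good == bad)   # True - both parse to the same 10 fields
```

As in the SAS version, this only works if at most one unquoted comma appears, and only in the one known column.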
Ok, I see what you mean - you have a field with a comma, in a comma-separated file, and that field is not quoted.
For this you will have to read the two parts in separately and add the comma back in, as per the example code below.
It's worth noting that all your values must have commas for this approach to work! This in fact looks like bad data; if your input field is indeed "60/523,466" then it should be "quoted" in your input file to be read in correctly.
%let some_csv=%sysfunc(pathname(work))/some.csv;
data _null_;
file "&some_csv";
put /;
put '390735978,HK,1053433,09/465,054,A1,275562685,2010-03-26, ,0,0';
put '390753387,HK,1092990,60/523,466,A1,275562719,2010-03-26, ,0,0';
run;
data work.Publications ;
infile "&some_csv" DLM = ',' DSD missover lrecl = 32767 firstobs = 3 ;
input pat_publn_id :best. publn_auth :$29. publn_nr :$29.
publn_nr_original1 :$29. publn_nr_original2:$29.
publn_kind :$29. appln_id :best.
publn_date :YYMMDD10. publn_lg :$29. publn_first_grant :best.
publn_claims :best. ;
format publn_date YYMMDDd10. ;
publn_nr_original=cats(publn_nr_original1,',',publn_nr_original2);
run ;

SAS - Reading Raw/Delimited file

I'm having an issue reading a CSV file into a SAS dataset without bringing in every field during my import. I don't want every field imported, but that's the only way I can seem to get this to work. The issue is I cannot get SAS to read my data correctly, even when it's reading the columns correctly... I think part of the issue is that I have data above my actual column headers that I don't want to read in.
My data is laid out like so
somevalue somevalue somevalue...
var1 var2 var3 var4
abc abc abc abc
I want to exclude somevalue and only read in select vars and their corresponding data.
Below is a sample file where I've scrambled all the values in my fields. I only want to keep columns H(8), AT(46) and BE(57).
Here's some code I've tried so far...
This was SAS-generated from a PROC IMPORT. My PROC IMPORT worked fine to read in every field value, so I just deleted the fields that I didn't want, but I don't get the output I expect. The values corresponding to the fields do not match.
A) PROC IMPORT
DATAFILE="C:\Users\dip1\Desktop\TU_&YYMM._FIN.csv"
OUT=TU_&YYMM._FIN
DBMS=csv REPLACE;
GETNAMES=NO;
DATAROW=3;
RUN;
generated this in the SAS log (I cut out the other fields I didn't want)
B) DATA TU_&YYMM._FIN_TEST;
infile 'C:\Users\fip1\Desktop\TU_1701_FIN.csv' delimiter = ',' DSD lrecl=32767
firstobs=3 ;
informat VAR8 16. ;
informat VAR46 $1. ;
informat VAR57 $22. ;
format VAR8 16. ;
format VAR46 $1. ;
format VAR57 $22. ;
input
VAR8
VAR46 $
VAR57 $;
run;
I've also tried this below... I believe I'm just missing something..
C) DATA TU_TEST;
INFILE "C:\Users\fip1\Desktop\TU_&yymm._fin.csv" DLM = "," TRUNCOVER FIRSTOBS = 3;
LABEL ACCOUNT_NUMBER = "ACCOUNT NUMBER";
LENGTH ACCOUNT_NUMBER $16.
E $1.
REJECTSUBCATEGORY $22.;
INPUT ACCOUNT_NUMBER
E
REJECTSUBCATEGORY;
RUN;
As well as trying to have SAS point to the columns I want to read in, modifying the above to:
D) DATA TU_TEST;
INFILE "C:\Users\fip1\Desktop\TU_&yymm._fin.csv" DLM = "," TRUNCOVER FIRSTOBS = 3;
LABEL ACCOUNT_NUMBER = "ACCOUNT NUMBER";
LENGTH ACCOUNT_NUMBER $16.
E $1.
REJECTSUBCATEGORY $22.;
INPUT #8 ACCOUNT_NUMBER
#46 E
#57 REJECTSUBCATEGORY;
RUN;
None of these work. Again, I can do this successfully if I bring in all of the fields with either A) or B), given that B) includes all the fields, but I can't get C) or D) to work, and I want to keep the code to a minimum if I can. I'm sure I'm missing something, but I've never had time to tinker with it, so I've just been doing it the "long" way.
Here's a snippet of what the data looks like
A(1) B(2) C(3) D(4) E(5) F(6) G(7)
ABCDEFGHIJ ABCDMCARD 202020 4578917 12345674 457894A (blank)
CRA INTERNALID SUBCODE RKEY SEGT FNM FILEDATE
CREDITBUR 2ABH123 AB2CHE123 A28O5176688 J2 Name 8974561
With a delimited file you need to read all of the fields (or at least all of the fields up to the last one you want to keep) even if you do not want to keep all of those fields. For the ones you want to skip you can just read them into a dummy variable that you drop. Or even one of the variables you want to keep that you will overwrite by reading from a later column.
Also don't model your DATA step after the code generated by PROC IMPORT. You can make much cleaner code yourself. For example there is no need for any FORMAT or INFORMAT statements for the three variables you listed. Although if VAR8 really needs 16 digits you might want to attach a format to it so that SAS doesn't use BEST12. format.
data tu_&yymm._fin_test;
infile 'C:\Users\fip1\Desktop\TU_1701_FIN.csv'
dlm=',' dsd lrecl=32767 truncover firstobs=3
;
length var8 8 var46 $1 var57 $22 ;
length dummy $1 ;
input 7*dummy var8 37*dummy var46 10*dummy var57 ;
drop dummy ;
format var8 16. ;
run;
You can replace the VARxx variable names with more meaningful ones if you want (or add a RENAME statement). Using the position numbers here just makes it clearer in this code that the INPUT statement is reading the 57 columns from the input data.
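The "read every field up to the last one you need" rule is specific to SAS list input; in languages with a CSV library you would simply index the columns you want. A Python sketch of picking columns 8, 46 and 57 (1-based), with made-up data standing in for the real file:

```python
import csv
import io

# hypothetical 60-column record; only columns 8, 46 and 57 (1-based) are kept
row = [f"v{i}" for i in range(1, 61)]
buffer = io.StringIO()
csv.writer(buffer).writerow(row)
buffer.seek(0)

for record in csv.reader(buffer):
    # subtract 1 to convert the 1-based column numbers to 0-based indexes
    var8, var46, var57 = record[7], record[45], record[56]
    print(var8, var46, var57)   # v8 v46 v57
```

The SAS dummy-variable trick (7*dummy, 37*dummy, 10*dummy) is doing the same positional arithmetic: 7 skipped + 1 kept lands on column 8, and so on.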

Import issue with SAS due to large column headers

I have many CSV files with many variable column headers, up to 2000 for some files.
I'm trying to do an import, but at some point the headers are truncated in a 'random' manner and the rest of the data are ignored, therefore not imported. I'm putting random in quotes because it may not be random, although I don't know the reason if it is not. But let me give you more insight.
The headers are truncated randomly: some after the 977th variable, some others after the 1401st variable.
The headers are like this BAL_RT,ET-CAP,EXT_EA16,IVOL-NSA,AT;BAL_RT,ET-CAP,EXT_EA16,IVOL-NSA,AT;BAL_RT,ET-CAP,EXT_EA16,IVOL-NSA,AT
This is part of the import log:
642130 VAR1439
642131 VAR1440
642132 VAR1441
642133 VAR1442 $
642134 VAR1443 $
642135 VAR1444 $
As you can see, some headers are read as numeric, although all the headers are alphanumeric, blending characters and numbers.
Please find my code for the import below
%macro lec ;
options macrogen symbolgen;
%let nfic=37 ;
%do i=1 %to &nfic ;
PROC IMPORT OUT= fic&i
DATAFILE= "C:\cygwin\home\appEuro\pot\fic&i..csv"
DBMS=DLM REPLACE;
DELIMITER='3B'x;
guessingrows=500 ;
GETNAMES=no;
DATAROW=1;
RUN;
data dico&i ; set fic&i (drop=var1) ;
if _n_ eq 1 ;
index=0 ;
array v var2-var1000 ;
do over v ;
if v ne "" then index=index+1 ;
end ;
run ;
data dico&i ; set dico&i ;
call symput("nvar&i",trim(left(index))) ;
run ;
%put &&nvar&i ;
%end ;
%mend ;
%lec ;
The code is doing an import and also creating a dictionary with the headers, as some of them are long (e.g. more than 34 characters).
I'm not sure if these elements are related; however, I would welcome any insights you are able to give me.
Best.
You need to not use PROC IMPORT, as I mentioned in a previous comment. You need to construct your dictionary from a DATA step read-in, because if you have 2000 columns times variable names of 34 or more characters, you will have a record length of more than 32767.
An approach like this is necessary.
data headers;
infile "whatever" lrecl=99999 truncover obs=1; *or perhaps longer even, if that is needed - look at log;
length name $50; *or longer if 50 is not your real absolute maximum;
input; *load the header record into _infile_;
do i = 1 to countw(_infile_, ';');
name = scan(_infile_, i, ';');
output;
end;
drop i;
stop; *only want to read the first line!;
run;
Now you have your variable names. You can then read the file in with GETNAMES=NO in PROC IMPORT (you'll have to discard the first line), and use that dictionary to generate RENAME statements (you will have lots of VARxxxx, but in a predictable order).
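The header-extraction step itself is the same in any language: read only the first line, split it on the delimiter, and emit one name per cell. A Python equivalent of that DATA step, run against the sample header string from the question:

```python
# Equivalent of the header-reading DATA step: take only the first line and
# split it on the semicolon delimiter to get one name per variable.
first_line = ("BAL_RT,ET-CAP,EXT_EA16,IVOL-NSA,AT;"
              "BAL_RT,ET-CAP,EXT_EA16,IVOL-NSA,AT;"
              "BAL_RT,ET-CAP,EXT_EA16,IVOL-NSA,AT")
names = first_line.split(";")
print(len(names))   # 3 header cells on this sample line
print(names[0])     # BAL_RT,ET-CAP,EXT_EA16,IVOL-NSA,AT
```

Note the header cells here contain commas internally, which is exactly why the file is semicolon-delimited and why DELIMITER='3B'x is used in the PROC IMPORT.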