Parsing non-delimited text with textscan in Octave

I have a large set of data in a file. Each line has the format:
1 character, integer, optional text, optional "#"
There are no whitespaces, commas, etc. Can I use textscan to delimit these fields?
An example:
w0319
a29cde
b54863fgh
c4ijk#
b076mno
a7356pqr
d78#
b678
h765677stuvwx
Thank you

No need for textscan. Something along the following lines will give you a good result and more control, and a nice struct array at the end of it.
% Read file and split into lines as a cell array
S = fileread('myfile');
S = strsplit(S, '\n');
if isempty(S{end}); S(end) = []; end   % If there was an empty line, remove it

% Create a struct array, one struct per line
for i = 1 : length(S)
    % process mandatory character and integer
    Out(i).char = S{i}(1);               % get the first character of that line
    IntIndices  = regexp( S{i}, '\d' );  % get the integer part as indices
    Out(i).int  = S{i}( IntIndices );    % note: integer returned as string
                                         % to preserve 0-padding
    % process optional string and hash
    if IntIndices(end) == length(S{i})   % no optional string exists after integer
        Out(i).str  = '';
        Out(i).hash = false;
    else
        Out(i).str = S{i}( IntIndices(end) + 1 : end );  % get remaining string
        if strcmp( Out(i).str(end), '#' )
            Out(i).str(end) = [];        % remove the final hash if it exists
            Out(i).hash = true;
        else
            Out(i).hash = false;
        end
    end
end
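For comparison, the same per-line logic can be expressed as a single regular expression with named tokens. Here is a rough sketch in Python, purely for illustration (Octave's regexp supports named tokens via the 'names' option, so the idea carries over; the group names mirror the struct fields above):
import re

# One pattern per line: leading character, digits (kept as a string to
# preserve zero-padding), optional text, optional trailing '#'.
LINE = re.compile(r'^(?P<char>.)(?P<int>\d+)(?P<str>.*?)(?P<hash>#?)$')

for line in ['w0319', 'c4ijk#', 'd78#', 'b678']:
    m = LINE.match(line)
    print(m.group('char'), m.group('int'), m.group('str'), bool(m.group('hash')))
This assumes, as the answer above does, that the optional text never contains digits.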

How to parse CSV file in the most performant way?

I would like to parse big CSV files in ABAP in the most performant way under the following conditions:
We do not know the structure of the CSV -> the parse result should be a table of string_table or something similar
The parsing should happen in accordance with https://www.rfc-editor.org/rfc/rfc4180
No solution-specific calls
I found a very nice blog https://blogs.sap.com/2014/09/09/understanding-csv-files-and-their-handling-in-abap/ but each of its approaches has shortcomings:
Write your own code - the code example is not sufficient
Read the file using KCD_CSV_FILE_TO_INTERN_CONVERT - solution-specific (not available everywhere) and will dump on sufficiently big fields
Use RTTI and dynamic programming along with FM RSDS_CONVERT_CSV - we do not know the structure in advance
Use class CL_RSDA_CSV_CONVERTER - we do not know the structure in advance
I also checked the first available solution on GitHub - https://github.com/thedoginthewok/ZwdCSV . Unfortunately, it has macros in the code (absolutely unacceptable) and also requires you to know the structure in advance.
I also tried using regexes to do the job, but on big files this is too slow.
Even though I am extremely annoyed by this fact, I had to create a solution myself (I cannot believe that I actually did it - it should be in the standard...).
My first solution was a direct copy-paste of Java code into ABAP (https://mkyong.com/java/how-to-read-and-parse-csv-file-in-java/). Unfortunately, as my other question How to iterate over string characters in ABAP in performant way? showed, it is not as easy to iterate over a string in ABAP as it is in Java.
I then tried a split/count approach, and so far it has the best performance. Does anyone know a better way to achieve this?
REPORT z_csv_test.

CLASS lcl_csv_parser DEFINITION CREATE PRIVATE.
  PUBLIC SECTION.
    TYPES:
      tt_string_matrix TYPE STANDARD TABLE OF string_table WITH EMPTY KEY.
    CLASS-METHODS:
      create
        IMPORTING
          !iv_delimiter      TYPE string DEFAULT '"'
          !iv_separator      TYPE string DEFAULT ','
          !iv_line_separator TYPE abap_cr_lf DEFAULT cl_abap_char_utilities=>cr_lf
        RETURNING
          VALUE(r_result) TYPE REF TO lcl_csv_parser.
    METHODS:
      parse
        IMPORTING
          iv_string TYPE string
        RETURNING
          VALUE(r_result) TYPE tt_string_matrix,
      constructor
        IMPORTING
          !iv_delimiter      TYPE string
          !iv_separator      TYPE string
          !iv_line_separator TYPE string.
  PROTECTED SECTION.
  PRIVATE SECTION.
    DATA:
      gv_delimiter         TYPE string,
      gv_separator         TYPE string,
      gv_line_separator    TYPE string,
      gv_escaped_delimiter TYPE string.
    METHODS parse_line_to_string_table
      IMPORTING
        iv_line TYPE string
      RETURNING
        VALUE(r_result) TYPE string_table.
ENDCLASS.

CLASS lcl_csv_parser IMPLEMENTATION.
  METHOD create.
    r_result = NEW #(
      iv_delimiter      = iv_delimiter
      iv_line_separator = CONV #( iv_line_separator )
      iv_separator      = iv_separator ).
  ENDMETHOD.

  METHOD constructor.
    me->gv_delimiter         = iv_delimiter.
    me->gv_separator         = iv_separator.
    me->gv_line_separator    = iv_line_separator.
    me->gv_escaped_delimiter = |{ iv_delimiter }{ iv_delimiter }|.
  ENDMETHOD.

  METHOD parse.
    "get the lines
    SPLIT iv_string AT me->gv_line_separator INTO TABLE DATA(lt_lines).
    DATA lx_open_line TYPE abap_bool VALUE abap_false.
    DATA lv_current_line TYPE string.
    LOOP AT lt_lines ASSIGNING FIELD-SYMBOL(<ls_line>).
      FIND ALL OCCURRENCES OF me->gv_delimiter IN <ls_line> IN CHARACTER MODE MATCH COUNT DATA(lv_count).
      IF ( lv_count MOD 2 ) = 1.
        IF lx_open_line = abap_true.
          lv_current_line = |{ lv_current_line }{ me->gv_line_separator }{ <ls_line> }|.
          lx_open_line = abap_false.
          APPEND parse_line_to_string_table( lv_current_line ) TO r_result.
        ELSE.
          lv_current_line = <ls_line>.
          lx_open_line = abap_true.
        ENDIF.
      ELSE.
        IF lx_open_line = abap_true.
          lv_current_line = |{ lv_current_line }{ me->gv_line_separator }{ <ls_line> }|.
        ELSE.
          APPEND parse_line_to_string_table( <ls_line> ) TO r_result.
        ENDIF.
      ENDIF.
    ENDLOOP.
  ENDMETHOD.

  METHOD parse_line_to_string_table.
    SPLIT iv_line AT me->gv_separator INTO TABLE DATA(lt_line).
    DATA lx_open_field TYPE abap_bool VALUE abap_false.
    DATA lv_current_field TYPE string.
    LOOP AT lt_line ASSIGNING FIELD-SYMBOL(<ls_field>).
      FIND ALL OCCURRENCES OF me->gv_delimiter IN <ls_field> IN CHARACTER MODE MATCH COUNT DATA(lv_count).
      IF ( lv_count MOD 2 ) = 1.
        IF lx_open_field = abap_true.
          lv_current_field = |{ lv_current_field }{ me->gv_separator }{ <ls_field> }|.
          lx_open_field = abap_false.
          APPEND lv_current_field TO r_result.
        ELSE.
          lv_current_field = <ls_field>.
          lx_open_field = abap_true.
        ENDIF.
      ELSE.
        IF lx_open_field = abap_true.
          lv_current_field = |{ lv_current_field }{ me->gv_separator }{ <ls_field> }|.
        ELSE.
          APPEND <ls_field> TO r_result.
        ENDIF.
      ENDIF.
    ENDLOOP.
    REPLACE ALL OCCURRENCES OF me->gv_escaped_delimiter IN TABLE r_result WITH me->gv_delimiter.
  ENDMETHOD.
ENDCLASS.

CLASS lcl_test_csv_parser DEFINITION FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    CLASS-METHODS run.
    CLASS-METHODS get_file
      RETURNING VALUE(r_result) TYPE string.
  PROTECTED SECTION.
  PRIVATE SECTION.
ENDCLASS.

CLASS lcl_test_csv_parser IMPLEMENTATION.
  METHOD get_file.
    DATA lv_file_line TYPE string.
    DO 10 TIMES.
      lv_file_line = |"1234,{ cl_abap_char_utilities=>cr_lf }567890",{ lv_file_line }|.
    ENDDO.
    lv_file_line = lv_file_line && cl_abap_char_utilities=>cr_lf.
    DATA(lt_file_as_table) = VALUE string_table(
      FOR i = 1 THEN i + 1 UNTIL i = 1000000
      ( lv_file_line ) ).
    CONCATENATE LINES OF lt_file_as_table INTO r_result.
  ENDMETHOD.

  METHOD run.
    DATA lv_prepare_start TYPE timestampl.
    GET TIME STAMP FIELD lv_prepare_start.
    DATA(lv_file) = get_file( ).
    DATA lv_prepare_end TYPE timestampl.
    GET TIME STAMP FIELD lv_prepare_end.
    WRITE |Preparation took { cl_abap_tstmp=>subtract( tstmp1 = lv_prepare_end tstmp2 = lv_prepare_start ) }|.
    DATA lv_parse_start TYPE timestampl.
    GET TIME STAMP FIELD lv_parse_start.
    DATA(lo_parser) = lcl_csv_parser=>create( ).
    DATA(lt_file) = lo_parser->parse( lv_file ).
    DATA lv_parse_end TYPE timestampl.
    GET TIME STAMP FIELD lv_parse_end.
    WRITE |Parse took { cl_abap_tstmp=>subtract( tstmp1 = lv_parse_end tstmp2 = lv_parse_start ) }|.
  ENDMETHOD.
ENDCLASS.

START-OF-SELECTION.
  lcl_test_csv_parser=>run( ).
I'd like to present a different approach that uses find heavily. Compared to your line-based approach it seems to have equivalent performance for unquoted fields, but performs slightly better if quoted fields are present:
In general, this uses the pattern position = find( off = position + 1 ) to iterate over the string in chunks, and then uses substring to copy ranges into strings. What can be observed here is that in a loop that iterates a million times, every nanosecond saved has an impact on performance, and by moving as much work as possible out of the inner loop one can increase performance significantly. For the "simple" case of 10-digit fields both algorithms perform equally well; for "longer" 30-digit fields your algorithm gets comparatively faster. For fields with quotes, the scan-and-concat approach I've used seems to be faster than the "reconstruct" approach.
I guess that although one can achieve small gains through more clever ABAP, further significant optimizations are only possible by utilizing the engine even more.
Anyway, here's the algorithm:
CLASS lcl_csv_parser_find IMPLEMENTATION.
  METHOD parse.
    DATA line TYPE string_table.
    DATA position TYPE i.
    DATA(string_length) = strlen( i_string ).
    " Dereferencing member fields is slightly slower than variable access; in a tight loop this matters.
    DATA(separators) = me->separators.
    DATA(delimiter) = me->delimiter.
    CHECK string_length <> 0.
    " Checking for delimiters in the DO loop is quite slow.
    " By scanning the whole file once and skipping that check if no delimiter is present,
    " this led to a slight performance increase of 1s for 1 million rows.
    DATA(next_delimiter) = find( val = i_string sub = delimiter ).
    DO.
      DATA(start_position) = position.
      DATA(field) = ``.
      " Check if the field is enclosed in double quotes, as we then need to unescape it.
      IF next_delimiter <> -1 AND i_string+position(1) = delimiter.
        start_position = start_position + 1. " literal starts after opening quote
        DO.
          position = find( val = i_string off = position + 1 sub = delimiter ).
          " literal must be closed
          " ASSERT position <> -1.
          DATA(subliteral_length) = position - start_position.
          field = field && substring( val = i_string off = start_position len = subliteral_length ).
          DATA(following_position) = position + 1.
          IF position = string_length OR i_string+following_position(1) <> delimiter.
            " End of literal is reached
            position = position + 1. " skip closing quote
            EXIT. " DO
          ELSE.
            " Found escape quote instead
            position = following_position + 1.
            field = field && me->delimiter.
            " continue searching
          ENDIF.
          " ASSERT sy-index < 1000.
        ENDDO.
      ELSE.
        " Unescaped field, simply find the ending comma or newline
        position = find_any_of( val = i_string off = position + 1 sub = separators ).
        IF position = -1.
          position = string_length.
        ENDIF.
        field = substring( val = i_string off = start_position len = position - start_position ).
      ENDIF.
      APPEND field TO line.
      " Check if the line ended and a new line is started
      DATA(current) = substring( val = i_string off = position len = 2 ).
      IF current = me->line_separator.
        APPEND line TO r_result.
        CLEAR line.
        position = position + 2. " skip newline
      ELSE.
        " ASSERT i_string+position(1) = me->separator.
        position = position + 1.
      ENDIF.
      " Check if the file ended
      IF position >= string_length.
        RETURN.
      ENDIF.
      " ASSERT sy-index < 100000001.
    ENDDO.
  ENDMETHOD.
ENDCLASS.
As a side note, instead of creating a huge table of string fields as stated in #1, I would experiment with some kind of "visitor pattern", e.g. pass an instance of such an interface to the parser:
INTERFACE if_csv_visitor.
  METHODS begin_line.
  METHODS end_line.
  METHODS visit_field
    IMPORTING
      i_field TYPE string.
ENDINTERFACE.
In a lot of cases you'll write the CSV fields into a structure anyway, so one can save allocating this quite large table.
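To illustrate the callback shape, here is a minimal Python sketch (the method names mirror the if_csv_visitor interface above; the field splitting is deliberately naive and ignores quoting, since the real splitting logic is the parsers shown earlier):
def parse_with_visitor(text, visitor):
    # Drive the visitor instead of materializing a string matrix.
    for line in text.splitlines():
        visitor.begin_line()
        for field in line.split(','):  # naive: no quote handling here
            visitor.visit_field(field)
        visitor.end_line()
A visitor that maps each field straight into a structure never pays for the intermediate table of tables.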
And for further reference, here's the whole report:
*&---------------------------------------------------------------------*
*& Report Z_CSV
*&---------------------------------------------------------------------*
*&
*&---------------------------------------------------------------------*
REPORT z_csv.

* --------------------- Generic CSV Parser ----------------------------*
CLASS lcl_csv_parser DEFINITION ABSTRACT.
  PUBLIC SECTION.
    TYPES:
      t_string_matrix TYPE STANDARD TABLE OF string_table WITH EMPTY KEY.
    METHODS:
      parse ABSTRACT
        IMPORTING
          i_string TYPE string
        RETURNING
          VALUE(r_result) TYPE t_string_matrix,
      constructor
        IMPORTING
          i_delimiter      TYPE string DEFAULT '"'
          i_separator      TYPE string DEFAULT ','
          i_line_separator TYPE abap_cr_lf DEFAULT cl_abap_char_utilities=>cr_lf.
  PROTECTED SECTION.
    DATA:
      delimiter         TYPE string,
      separator         TYPE string,
      line_separator    TYPE string,
      escaped_delimiter TYPE string,
      separators        TYPE string.
ENDCLASS.

CLASS lcl_csv_parser IMPLEMENTATION.
  METHOD constructor.
    me->delimiter         = i_delimiter.
    me->separator         = i_separator.
    me->line_separator    = i_line_separator.
    me->escaped_delimiter = |{ i_delimiter }{ i_delimiter }|.
    me->separators        = i_separator && i_line_separator.
  ENDMETHOD.
ENDCLASS.

* --------------------------- Line based CSV Parser ------------------------ *
CLASS lcl_csv_parser_line DEFINITION INHERITING FROM lcl_csv_parser.
  PUBLIC SECTION.
    METHODS parse REDEFINITION.
  PRIVATE SECTION.
    METHODS parse_line_to_string_table
      IMPORTING
        i_line TYPE string
      RETURNING
        VALUE(r_result) TYPE string_table.
ENDCLASS.

CLASS lcl_csv_parser_line IMPLEMENTATION.
  METHOD parse.
    "get the lines
    SPLIT i_string AT me->line_separator INTO TABLE DATA(lines).
    DATA open_line TYPE abap_bool VALUE abap_false.
    DATA current_line TYPE string.
    LOOP AT lines ASSIGNING FIELD-SYMBOL(<line>).
      FIND ALL OCCURRENCES OF me->delimiter IN <line> IN CHARACTER MODE MATCH COUNT DATA(count).
      IF ( count MOD 2 ) = 1.
        IF open_line = abap_true.
          current_line = |{ current_line }{ me->line_separator }{ <line> }|.
          open_line = abap_false.
          APPEND parse_line_to_string_table( current_line ) TO r_result.
        ELSE.
          current_line = <line>.
          open_line = abap_true.
        ENDIF.
      ELSE.
        IF open_line = abap_true.
          current_line = |{ current_line }{ me->line_separator }{ <line> }|.
        ELSE.
          APPEND parse_line_to_string_table( <line> ) TO r_result.
        ENDIF.
      ENDIF.
    ENDLOOP.
  ENDMETHOD.

  METHOD parse_line_to_string_table.
    SPLIT i_line AT me->separator INTO TABLE DATA(fields).
    DATA open_field TYPE abap_bool VALUE abap_false.
    DATA current_field TYPE string.
    LOOP AT fields ASSIGNING FIELD-SYMBOL(<field>).
      FIND ALL OCCURRENCES OF me->delimiter IN <field> IN CHARACTER MODE MATCH COUNT DATA(count).
      IF ( count MOD 2 ) = 1.
        IF open_field = abap_true.
          current_field = |{ current_field }{ me->separator }{ <field> }|.
          open_field = abap_false.
          APPEND current_field TO r_result.
        ELSE.
          current_field = <field>.
          open_field = abap_true.
        ENDIF.
      ELSE.
        IF open_field = abap_true.
          current_field = |{ current_field }{ me->separator }{ <field> }|.
        ELSE.
          APPEND <field> TO r_result.
        ENDIF.
      ENDIF.
    ENDLOOP.
    REPLACE ALL OCCURRENCES OF me->escaped_delimiter IN TABLE r_result WITH me->delimiter.
  ENDMETHOD.
ENDCLASS.

*--------------- Find based CSV Parser ------------------------------------*
CLASS lcl_csv_parser_find DEFINITION INHERITING FROM lcl_csv_parser.
  PUBLIC SECTION.
    METHODS parse REDEFINITION.
ENDCLASS.

CLASS lcl_csv_parser_find IMPLEMENTATION.
  METHOD parse.
    DATA line TYPE string_table.
    DATA position TYPE i.
    DATA(string_length) = strlen( i_string ).
    " Dereferencing member fields is slightly slower than variable access; in a tight loop this matters.
    DATA(separators) = me->separators.
    DATA(delimiter) = me->delimiter.
    CHECK string_length <> 0.
    " Checking for delimiters in the DO loop is quite slow.
    " By scanning the whole file once and skipping that check if no delimiter is present,
    " this led to a slight performance increase of 1s for 1 million rows.
    DATA(next_delimiter) = find( val = i_string sub = delimiter ).
    DO.
      DATA(start_position) = position.
      DATA(field) = ``.
      " Check if the field is enclosed in double quotes, as we then need to unescape it.
      IF next_delimiter <> -1 AND i_string+position(1) = delimiter.
        start_position = start_position + 1. " literal starts after opening quote
        DO.
          position = find( val = i_string off = position + 1 sub = delimiter ).
          " literal must be closed
          " ASSERT position <> -1.
          DATA(subliteral_length) = position - start_position.
          field = field && substring( val = i_string off = start_position len = subliteral_length ).
          DATA(following_position) = position + 1.
          IF position = string_length OR i_string+following_position(1) <> delimiter.
            " End of literal is reached
            position = position + 1. " skip closing quote
            EXIT. " DO
          ELSE.
            " Found escape quote instead
            position = following_position + 1.
            field = field && me->delimiter.
            " continue searching
          ENDIF.
          " ASSERT sy-index < 1000.
        ENDDO.
      ELSE.
        " Unescaped field, simply find the ending comma or newline
        position = find_any_of( val = i_string off = position + 1 sub = separators ).
        IF position = -1.
          position = string_length.
        ENDIF.
        field = substring( val = i_string off = start_position len = position - start_position ).
      ENDIF.
      APPEND field TO line.
      " Check if the line ended and a new line is started
      DATA(current) = substring( val = i_string off = position len = 2 ).
      IF current = me->line_separator.
        APPEND line TO r_result.
        CLEAR line.
        position = position + 2. " skip newline
      ELSE.
        " ASSERT i_string+position(1) = me->separator.
        position = position + 1.
      ENDIF.
      " Check if the file ended
      IF position >= string_length.
        RETURN.
      ENDIF.
      " ASSERT sy-index < 100000001.
    ENDDO.
  ENDMETHOD.
ENDCLASS.

* -------------------- Tests -------------------------------------------------------- *
CLASS lcl_test_csv_parser DEFINITION FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    CLASS-METHODS run.
    CLASS-METHODS get_file_complex
      RETURNING VALUE(r_result) TYPE string.
    CLASS-METHODS get_file_simple
      RETURNING VALUE(r_result) TYPE string.
    CLASS-METHODS get_file_long
      RETURNING VALUE(r_result) TYPE string.
    CLASS-METHODS get_file_longer
      RETURNING VALUE(r_result) TYPE string.
    CLASS-METHODS get_file_mixed
      RETURNING VALUE(r_result) TYPE string.
  PROTECTED SECTION.
  PRIVATE SECTION.
ENDCLASS.

CLASS lcl_test_csv_parser IMPLEMENTATION.
  METHOD get_file_complex.
    DATA(file_line) =
      repeat( val = |"1234,{ cl_abap_char_utilities=>cr_lf }7890",| occ = 9 ) &&
      |"1234,{ cl_abap_char_utilities=>cr_lf }7890"| &&
      cl_abap_char_utilities=>cr_lf.
    r_result = repeat( val = file_line occ = 1000000 ).
  ENDMETHOD.

  METHOD get_file_simple.
    DATA(file_line) =
      repeat( val = |1234567890,| occ = 9 ) &&
      |1234567890| &&
      cl_abap_char_utilities=>cr_lf.
    r_result = repeat( val = file_line occ = 1000000 ).
  ENDMETHOD.

  METHOD get_file_long.
    DATA(file_line) =
      repeat( val = |12345678901234567890,| occ = 4 ) &&
      |12345678901234567890| &&
      cl_abap_char_utilities=>cr_lf.
    r_result = repeat( val = file_line occ = 1000000 ).
  ENDMETHOD.

  METHOD get_file_longer.
    DATA(file_line) =
      repeat( val = |1234567890123456789012345678901234567890,| occ = 2 ) &&
      |1234567890123456789012345678901234567890| &&
      cl_abap_char_utilities=>cr_lf.
    r_result = repeat( val = file_line occ = 1000000 ).
  ENDMETHOD.

  METHOD get_file_mixed.
    DATA(file_line) =
      |1234567890,1234567890,"1234,{ cl_abap_char_utilities=>cr_lf }7890",1234567890,1234567890,1234567890,"1234,{ cl_abap_char_utilities=>cr_lf }7890",1234567890,1234567890,1234567890| &&
      cl_abap_char_utilities=>cr_lf.
    r_result = repeat( val = file_line occ = 1000000 ).
  ENDMETHOD.

  METHOD run.
    DATA prepare_start TYPE timestampl.
    GET TIME STAMP FIELD prepare_start.
    TYPES:
      BEGIN OF t_file,
        name    TYPE string,
        content TYPE string,
      END OF t_file,
      t_files TYPE STANDARD TABLE OF t_file WITH EMPTY KEY.
    DATA(files) = VALUE t_files(
      ( name = `simple`  content = get_file_simple( ) )
      ( name = `long`    content = get_file_long( ) )
      ( name = `longer`  content = get_file_longer( ) )
      ( name = `complex` content = get_file_complex( ) )
      ( name = `mixed`   content = get_file_mixed( ) )
    ).
    DATA prepare_end TYPE timestampl.
    GET TIME STAMP FIELD prepare_end.
    WRITE |Preparation took { cl_abap_tstmp=>subtract( tstmp1 = prepare_end tstmp2 = prepare_start ) }|. SKIP 2.
    WRITE: 'File', 15 'Line Parse', 30 'Find Parse', 45 'Match'. NEW-LINE.
    ULINE.
    LOOP AT files INTO DATA(file).
      WRITE file-name UNDER 'File'.
      DATA line_start TYPE timestampl.
      GET TIME STAMP FIELD line_start.
      DATA(line_parser) = NEW lcl_csv_parser_line( ).
      DATA(line_result) = line_parser->parse( file-content ).
      DATA line_end TYPE timestampl.
      GET TIME STAMP FIELD line_end.
      WRITE |{ cl_abap_tstmp=>subtract( tstmp1 = line_end tstmp2 = line_start ) }s| UNDER 'Line Parse'.
      DATA find_start TYPE timestampl.
      GET TIME STAMP FIELD find_start.
      DATA(find_parser) = NEW lcl_csv_parser_find( ).
      DATA(find_result) = find_parser->parse( file-content ).
      DATA find_end TYPE timestampl.
      GET TIME STAMP FIELD find_end.
      WRITE |{ cl_abap_tstmp=>subtract( tstmp1 = find_end tstmp2 = find_start ) }s| UNDER 'Find Parse'.
      " WRITE COND #( WHEN line_result = find_result THEN 'yes' ELSE 'no') UNDER 'Match'.
      NEW-LINE.
    ENDLOOP.
  ENDMETHOD.
ENDCLASS.

START-OF-SELECTION.
  lcl_test_csv_parser=>run( ).

Why does Octave error in the function huffmandeco about large index types?

I've got a little MATLAB script which I'm trying to understand. It doesn't do very much: it only reads text from a file and encodes and decodes it with the Huffman functions.
But it throws an error while decoding:
"error: out of memory or dimension too large for Octave's index type
error: called from huffmandeco>dict2tree at line 95 column 19"
I don't know why, because I debugged it and don't see a large index type.
I added the part which calculates p from the input text.
% text is a random input text file in ASCII
% calculate the relative frequency of every symbol
for i = 0:127
  nlet = length(find(text==i));
  p(i+1) = nlet/length(text);
end
symb = 0:127;
dict = huffmandict(symb,p); % Create dictionary
compdata = huffmanenco(fdata,dict); % Encode the data
dsig = huffmandeco(compdata,dict); % Decode the Huffman code
I can only use Octave instead of MATLAB. I don't know if this is an unexpected error. I use Octave version 6.2.0 on Win10. I tried the build for large data; it didn't change anything.
Does anyone know what causes the error in this context?
EDIT:
I debugged the code again. In the function huffmandeco I found the following function:
function tree = dict2tree (dict)
  L = length (dict);
  lengths = zeros (1, L);
  ## the depth of the tree is limited by the maximum word length.
  for i = 1:L
    lengths(i) = length (dict{i});
  endfor
  m = max (lengths);
  tree = zeros (1, 2^(m+1)-1)-1;
  for i = 1:L
    pointer = 1;
    word = dict{i};
    for bit = word
      pointer = 2 * pointer + bit;
    endfor
    tree(pointer) = i;
  endfor
endfunction
The maximum word length m in this case is 82, so the function computes:
tree = zeros (1, 2^(82+1)-1)-1.
So it's obvious why the error complains about a too-large index type.
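To put a number on it: with m = 82 the requested vector has 2^83 - 1 ≈ 9.7 × 10^24 elements. Even with 64-bit indexing (at most 2^63 - 1 ≈ 9.2 × 10^18 elements) that is about a million times too large, which is why the large-data build didn't help; stored as 8-byte doubles it would also need on the order of 10^13 TB of memory.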
But there must be a solution or another error somewhere, because the code has been tested before.
I haven't weeded through the code enough to know why yet, but huffmandict is not ignoring zero-probability symbols the way it claims to. Nor have I been able to find a bug report on Savannah, but again I haven't searched thoroughly.
A workaround is to limit the symbol list and their probabilities to only the symbols that actually occur. Using containers.Map would be ideal, but in Octave you can do that with a couple of the outputs from unique:
% Create a symbol table of the unique characters in the input string
% and the indices into the table for each character in the string.
[symbols, ~, inds] = unique(textstr);
inds = inds.'; % just make it easier to read
For the string
textstr = 'Random String Input.';
the result is:
>> symbols
symbols = .IRSadgimnoprtu
>> inds
inds =
Columns 1 through 19:
4 6 11 7 12 10 1 5 15 14 9 11 8 1 3 11 13 16 15
Column 20:
2
So the first symbol in the input string is symbols(4), the second is symbols(6), and so on.
From there, you just use symbols and inds to create the dictionary and encode/decode the signal. Here's a quick demo script:
textstr = 'Random String Input.';
fprintf("Starting string: %s\n", textstr);
% Create a symbol table of the unique characters in the input string
% and the indices into the table for each character in the string.
[symbols, ~, inds] = unique(textstr);
inds = inds.'; % just make it easier to read
% Calculate the frequency of each symbol in table
% max(inds) == numel(symbols)
p = histc(inds, 1:max(inds))/numel(inds);
dict = huffmandict(symbols, p);
compdata = huffmanenco(inds, dict);
dsig = huffmandeco(compdata, dict);
fprintf("Decoded string: %s\n", symbols(dsig));
And the output:
Starting string: Random String Input.
Decoded string: Random String Input.
To encode strings other than the original input string, you would have to map the characters to symbol indices (ensuring that all symbols in the string are actually present in the symbol table, obviously):
>> [~, s_idx] = ismember('trogdor', symbols)
s_idx =
15 14 12 8 7 12 14
>> compdata = huffmanenco(s_idx, dict);
>> dsig = huffmandeco(compdata, dict);
>> fprintf("Decoded string: %s\n", symbols(dsig));
Decoded string: trogdor

LZW Compression In Lua

Here is the pseudocode for Lempel-Ziv-Welch compression.
pattern = get input character
while ( not end-of-file ) {
    K = get input character
    if ( <<pattern, K>> is NOT in the string table ) {
        output the code for pattern
        add <<pattern, K>> to the string table
        pattern = K
    }
    else { pattern = <<pattern, K>> }
}
output the code for pattern
output EOF_CODE
I am trying to code this in Lua, but it is not really working. Here is the code I modeled after an LZW function in Python, but I am getting an "attempt to call a string value" error on line 8.
function compress(uncompressed)
    local dict_size = 256
    local dictionary = {}
    w = ""
    result = {}
    for c in uncompressed do
        -- while c is in the function compress
        local wc = w + c
        if dictionary[wc] == true then
            w = wc
        else
            dictionary[w] = ""
            -- Add wc to the dictionary.
            dictionary[wc] = dict_size
            dict_size = dict_size + 1
            w = c
        end
        -- Output the code for w.
        if w then
            dictionary[w] = ""
        end
    end
    return dictionary
end
compressed = compress('TOBEORNOTTOBEORTOBEORNOT')
print (compressed)
I would really like some help either getting my code to run, or helping me code the LZW compression in Lua. Thank you so much!
Assuming uncompressed is a string, you'll need to use something like this to iterate over it:
for i = 1, #uncompressed do
    local c = string.sub(uncompressed, i, i)
    -- etc
end
There's another issue on line 10; .. is used for string concatenation in Lua, so this line should be local wc = w .. c.
You may also want to read this with regard to the performance of string concatenation. Long story short, it's often more efficient to keep each element in a table and return it with table.concat().
You should also take a look here to download the source for a high-performance LZW compression algorithm in Lua...
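For reference, here is a minimal sketch of the question's pseudocode in Python (not Lua, but it shows the shape a corrected port should take: the dictionary maps pattern strings to output codes, and the list of codes, not the dictionary, is the result):
def compress(uncompressed):
    # Start the dictionary with all single characters.
    dict_size = 256
    dictionary = {chr(i): i for i in range(dict_size)}
    pattern = ""
    result = []
    for k in uncompressed:
        wk = pattern + k
        if wk in dictionary:
            pattern = wk
        else:
            # Output the code for pattern, then grow the string table.
            result.append(dictionary[pattern])
            dictionary[wk] = dict_size
            dict_size += 1
            pattern = k
    if pattern:
        result.append(dictionary[pattern])
    return result

print(compress('TOBEORNOTTOBEORTOBEORNOT'))
# -> [84, 79, 66, 69, 79, 82, 78, 79, 84, 256, 258, 260, 265, 259, 261, 263]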

How do I suppress blank lines in a Microsoft Word 2007 mail merge?

I'm setting up a mail merge file that reads from a CSV file. In the CSV file, there are ~20 Boolean fields that produce the body of the Word mail merge file. The problem is that if the first 19 fields are all "N", then the Word mail merge file will have 19 blank lines and then output the 20th field underneath all of them. Is there any way to suppress the output of these blank lines using the built-in mail merge rules?
Here's the header and a sample row of the CSV file:
"firstname","lastname","PRNT1040","LEGALGRD","ACADTRAN","STUD1040","VERWKSIN","VERWKSDP","UNEMPLOY","SSCARD","HSDPLOMA","DPBRTCFT","DEATHCRT","USCTZPRF","SLCTSERV","PROJINCM","YTDINCM","LAYOFFNT","MAJRCARS","W2FATHER","W2MOTHER","W2SPOUSE","W2STUDNT","2YRDEGRE","4YEARDEG","DPOVERRD"
"Joe","Smith","N","N","N","N","N","N","N","N","N","N","N","N","N","N","N","N","N","N","N","N","N","N","N","Y"
Here are a couple of lines of the mail merge file that I'm trying (image linked because copying and pasting the mail merge does not show the original document).
Does anyone know how to get around this? I tried placing the SKIPIF in front of the conditionals, but that doesn't work either.
The following conditional IF field will eliminate a blank space caused by an empty middle initial field:
{FNAME} {IF {MI} <> "" "{MI} "}{LNAME}
The following conditional MERGEFIELD field will remove blank spaces in any field. For example, given the following fields,
{Prefix} {FirstName} {LastName}
the following conditional statements will properly suppress the space normally included for any blank fields:
{IF {MERGEFIELD Prefix}<>"" "{MERGEFIELD Prefix} "}
{IF {MERGEFIELD FirstName}<>"" "{MERGEFIELD FirstName} "}
{IF {MERGEFIELD LastName}<>"" "{MERGEFIELD LastName}"}
To enter the field characters ({}), choose Field from the Insert menu (or press CTRL+F9).
NOTE: Are you creating your CSV by unloading the data from Informix and replacing the pipe delimiter with commas? Maybe it would be better for you to create an ACE report, which can better manipulate strings to create the CSV file. Here's an example of an ACE report I use to achieve a similar objective:
database pawnshop end

define
    variable act integer
    variable actven integer
    variable ret integer
    variable ven integer
    variable cmp integer
    variable plt integer
    variable vta integer
    variable tot integer
    variable totprof integer
end

output
    top margin 0
    bottom margin 0
    left margin 0
    right margin 384
    report to "clientes.unl"
    page length 200000
end

select
    pa_serial,
    pa_code,
    pa_store_id,
    pa_user_id,
    pa_cust_name,
    pa_id_type,
    pa_id_no,
    pa_dob,
    pa_address1,
    pa_city,
    pa_tel,
    pa_cmt,
    pa_entry_date,
    pa_last_date,
    pa_idioma,
    pa_apodo,
    pwd_id,
    pwd_trx_type,
    pwd_last_type,
    pwd_last_pymt,
    pwd_trx_date,
    pwd_pawn_amt,
    pwd_last_amt,
    pwd_cob1,
    pwd_cob2,
    pwd_cob3,
    pwd_cob4,
    pwd_update_flag,
    st_code,
    st_exp_days,
    st_com_exp,
    st_plat_exp
from CLIENTES, outer BOLETOS, storetab
where pa_serial = pwd_id
  and pa_code = st_code
order by pa_cust_name, pwd_last_pymt
end

format
on every row
    if pwd_last_type = "E" then
    begin
        let act = act + 1
        if today - pwd_last_pymt >= st_exp_days then
            let actven = actven + 1
    end
    if pwd_last_type = "I" then
    begin
        let act = act + 1
        if today - pwd_last_pymt >= st_exp_days then
            let actven = actven + 1
    end
    if pwd_trx_type = "C" then
    begin
        let cmp = cmp + 1
        if pwd_last_type = "C" and (today - pwd_last_pymt >= st_com_exp) then
            let actven = actven + 1
    end
    if pwd_last_type = "R" then
    begin
        let ret = ret + 1
    end
    if pwd_trx_type = "P" and pwd_last_type = "P" then
    begin
        let plt = plt + 1
        if today - pwd_last_pymt >= st_plat_exp then
            let actven = actven + 1
    end
    if pwd_trx_type = "E" and pwd_last_type = "F" then
    begin
        let ven = ven + 1
    end
    if pwd_trx_type = "P" and pwd_last_type = "F" then
    begin
        let ven = ven + 1
    end
    if pwd_trx_type = "E" and pwd_last_type = "T" then
    begin
        let ven = ven + 1
    end
    if pwd_trx_type = "P" and pwd_last_type = "T" then
    begin
        let ven = ven + 1
    end

before group of pa_cust_name
    let totprof = 0
    let tot = 0
    let act = 0
    let actven = 0
    let ret = 0
    let ven = 0
    let cmp = 0
    let plt = 0
    let vta = 0

after group of pa_cust_name
    print column 1, pa_serial using "<<<<<","|",
        pa_code clipped,"|",
        pa_store_id clipped,"|",
        pa_user_id clipped,"|",
        pa_cust_name clipped,"|",
        pa_id_type clipped,"|",
        pa_id_no clipped,"|",
        pa_dob using "mm-dd-yyyy","|",
        pa_address1 clipped,"|",
        pa_city clipped,"|",
        pa_tel clipped,"|",
        pa_cmt clipped,"|",
        pa_entry_date using "mm-dd-yyyy","|",
        pwd_last_pymt using "mm-dd-yyyy","|",
        act using "&&&","|",
        ret using "&&&","|",
        ven using "&&&","|",
        tot using "&&&","|",
        totprof using "-&&&&&","|",
        actven using "&&&","|",
        cmp using "&&&","|",
        pa_idioma,"|",
        pa_apodo,"|",
        plt using "&&&","|",
        vta using "&&&","|"
end
The simple thing would have been to skip the 'false' section in your IF condition and just write the true part:
IF MERGEFIELD = "Y" "true section"
Since you included a false section, Word will (I think) take it into account.
We resolved the issue by changing the lines:
{ IF { MERGEFIELD PRNT1040} = "Y" "Write some text, then use a return line here
" "" }
Then we put those rules back to back instead of putting the line breaks after each rule inside the dialog box - for some reason, Word parses out the line break after you finish going through the IF -> THEN -> ELSE rules included in Word's mail-merge options.

Split a string ignoring quoted sections

Given a string like this:
a,"string, with",various,"values, and some",quoted
What is a good algorithm to split this based on commas while ignoring the commas inside the quoted sections?
The output should be an array:
[ "a", "string, with", "various", "values, and some", "quoted" ]
Looks like you've got some good answers here.
For those of you looking to handle your own CSV file parsing, heed the advice from the experts and Don't roll your own CSV parser.
Your first thought is, "I need to handle commas inside of quotes."
Your next thought will be, "Oh, crap, I need to handle quotes inside of quotes. Escaped quotes. Double quotes. Single quotes..."
It's a road to madness. Don't write your own. Find a library with extensive unit test coverage that hits all the hard parts and has gone through hell for you. For .NET, use the free FileHelpers library.
Python:
import csv
reader = csv.reader(open("some.csv"))
for row in reader:
    print(row)
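Applied to the question's sample line (a small sketch; io.StringIO stands in for a file):
import csv
import io

sample = 'a,"string, with",various,"values, and some",quoted'
print(next(csv.reader(io.StringIO(sample))))
# -> ['a', 'string, with', 'various', 'values, and some', 'quoted']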
If my language of choice didn't offer a way to do this without thinking then I would initially consider two options as the easy way out:
Pre-parse and replace the commas within the string with another control character then split them, followed by a post-parse on the array to replace the control character used previously with the commas.
Alternatively split them on the commas then post-parse the resulting array into another array checking for leading quotes on each array entry and concatenating the entries until I reached a terminating quote.
These are hacks however, and if this is a pure 'mental' exercise then I suspect they will prove unhelpful. If this is a real world problem then it would help to know the language so that we could offer some specific advice.
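For what it's worth, here is a rough Python sketch of the first option (replace quoted separators with a sentinel, split, then restore), assuming balanced quotes and a sentinel character that never occurs in the data:
def split_outside_quotes(s, sep=','):
    # Pass 1: swap separators that fall inside quotes for a sentinel.
    SENTINEL = '\x00'  # assumed never to appear in the input
    chars = []
    inside_quote = False
    for ch in s:
        if ch == '"':
            inside_quote = not inside_quote
        chars.append(SENTINEL if (ch == sep and inside_quote) else ch)
    # Pass 2: split on the real separator, restore the sentinels,
    # and strip the surrounding quotes (crude: also strips legitimate
    # leading/trailing quote characters).
    return [part.replace(SENTINEL, sep).strip('"')
            for part in ''.join(chars).split(sep)]

print(split_outside_quotes('a,"string, with",various,"values, and some",quoted'))
# -> ['a', 'string, with', 'various', 'values, and some', 'quoted']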
Of course using a CSV parser is better, but just for the fun of it you could:

Loop on the string letter by letter.
    If current_letter == quote:
        toggle inside_quote variable.
    Else if (current_letter == comma and not inside_quote):
        push current_word into array and clear current_word.
    Else:
        append the current_letter to current_word.
When the loop is done, push the current_word into array.
The author here dropped in a blob of C# code that handles the scenario you're having a problem with:
CSV File Imports in .Net
Shouldn't be too difficult to translate.
What if an odd number of quotes appears in the original string?
This looks uncannily like CSV parsing, which has some peculiarities in handling quoted fields. The field is only escaped if the field is delimited with double quotations, so:
field1, "field2, field3", field4, "field5, field6" field7
becomes
field1
field2, field3
field4
"field5
field6" field7
Notice that if a field doesn't both start and end with a quotation mark, it's not a quoted field and the double quotes are simply treated as literal double quotes.
Incidentally, my code that someone linked to doesn't actually handle this correctly, if I recall correctly.
Here's a simple Python implementation based on Pat's pseudocode:
def splitIgnoringSingleQuote(string, split_char, remove_quotes=False):
    string_split = []
    current_word = ""
    inside_quote = False
    for letter in string:
        if letter == "'":
            if not remove_quotes:
                current_word += letter
            if inside_quote:
                inside_quote = False
            else:
                inside_quote = True
        elif letter == split_char and not inside_quote:
            string_split.append(current_word)
            current_word = ""
        else:
            current_word += letter
    string_split.append(current_word)
    return string_split
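For example, on a single-quoted variant of the question's input (hypothetical usage, since this version toggles on ' rather than "):
print(splitIgnoringSingleQuote("a,'string, with',various", ','))
# -> ['a', "'string, with'", 'various']
print(splitIgnoringSingleQuote("a,'string, with',various", ',', remove_quotes=True))
# -> ['a', 'string, with', 'various']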
I use this to parse strings, not sure if it helps here; but with some minor modifications perhaps?
function getstringbetween($string, $start, $end) {
    $string = " ".$string;
    $ini = strpos($string, $start);
    if ($ini == 0) return "";
    $ini += strlen($start);
    $len = strpos($string, $end, $ini) - $ini;
    return substr($string, $ini, $len);
}

$fullstring = "this is my [tag]dog[/tag]";
$parsed = getstringbetween($fullstring, "[tag]", "[/tag]");
echo $parsed; // (result = dog)
This is a standard CSV-style parse. A lot of people try to do this with regular expressions. You can get to about 90% with regexes, but you really need a real CSV parser to do it properly. I found a fast, excellent C# CSV parser on CodeProject a few months ago that I highly recommend!
Here's one in pseudocode (a.k.a. Python) in one pass :-P
def parsecsv(instr):
    i = 0
    j = 0
    outstrs = []
    # i is fixed until a match occurs, then it advances
    # up to j. j inches forward each time through:
    while i < len(instr):
        if j < len(instr) and instr[j] == '"':
            # skip the opening quote...
            j += 1
            # then iterate until we find a closing quote.
            while instr[j] != '"':
                j += 1
                if j == len(instr):
                    raise Exception("Unmatched double quote at end of input.")
        if j == len(instr) or instr[j] == ',':
            s = instr[i:j]  # get the substring we've found
            s = s.strip()   # remove extra whitespace
            # remove surrounding quotes if they're there
            if len(s) > 2 and s[0] == '"' and s[-1] == '"':
                s = s[1:-1]
            # add it to the result
            outstrs.append(s)
            # skip over the comma, move i up (to where
            # j will be at the end of the iteration)
            i = j + 1
        j = j + 1
    return outstrs

def testcase(instr, expected):
    outstr = parsecsv(instr)
    print(outstr)
    assert expected == outstr

# Doesn't handle things like '1, 2, "a, b, c" d, 2' or
# escaped quotes, but those can be added pretty easily.
testcase('a, b, "1, 2, 3", c', ['a', 'b', '1, 2, 3', 'c'])
testcase('a,b,"1, 2, 3" , c', ['a', 'b', '1, 2, 3', 'c'])
# odd number of quotes gives an "unmatched quote" exception
#testcase('a,b,"1, 2, 3" , "c', ['a', 'b', '1, 2, 3', 'c'])
Here's a simple algorithm:
Determine if the string begins with a '"' character
Split the string into an array delimited by the '"' character.
Mark the quoted commas with a placeholder #COMMA#
If the input starts with a '"', mark those items in the array where the index % 2 == 0
Otherwise mark those items in the array where the index % 2 == 1
Concatenate the items in the array to form a modified input string.
Split the string into an array delimited by the ',' character.
Replace all instances in the array of #COMMA# placeholders with the ',' character.
The array is your output.
Here's the Python implementation (fixed to handle '"a,b",c,"d,e,f,h","i,j,k"'):
def parse_input(input):
    quote_mod = int(not input.startswith('"'))
    input = input.split('"')
    input = [item for item in input if item != '']
    for i in range(len(input)):
        if i % 2 == quote_mod:
            input[i] = input[i].replace(",", "#COMMA#")
    input = "".join(input).split(",")
    input = [item for item in input if item != '']
    for i in range(len(input)):
        input[i] = input[i].replace("#COMMA#", ",")
    return input

# parse_input('a,"string, with",various,"values, and some",quoted')
# -> ['a', 'string, with', 'various', 'values, and some', 'quoted']
# parse_input('"a,b",c,"d,e,f,h","i,j,k"')
# -> ['a,b', 'c', 'd,e,f,h', 'i,j,k']
I just couldn't resist seeing if I could make it work in a Python one-liner:
arr = [i.replace("|", ",") for i in re.sub(r'"([^"]*),([^"]*)"', r"\g<1>|\g<2>", str_to_test).split(",")]
Returns ['a', 'string, with', 'various', 'values, and some', 'quoted']
It works by first replacing the ',' inside quotes with another separator (|), splitting the string on ',', and then replacing the | separator again.
Since you said language agnostic, I wrote my algorithm in the language that's as close to pseudocode as possible:
def find_character_indices(s, ch):
    return [i for i, ltr in enumerate(s) if ltr == ch]

def split_text_preserving_quotes(content, include_quotes=False):
    quote_indices = find_character_indices(content, '"')
    output = content[:quote_indices[0]].split()
    for i in range(1, len(quote_indices)):
        if i % 2 == 1:  # end of quoted sequence
            start = quote_indices[i - 1]
            end = quote_indices[i] + 1
            output.extend([content[start:end]])
        else:
            start = quote_indices[i - 1] + 1
            end = quote_indices[i]
            split_section = content[start:end].split()
            output.extend(split_section)
    output += content[quote_indices[-1] + 1:].split()
    return output