So I have this app that takes a ListView and exports it to the device as a .csv file.
This is the code
StringBuilder sb = new StringBuilder();
for (String s : array) {
    sb.append(s.trim()).append(",");
}
result = sb.deleteCharAt(sb.length() - 1).toString();
I'm happy with the output since it's organized the way I intended. The problem is that if the data contains a comma ",", the spreadsheet recognizes it as a separator (I think) and puts the rest in another cell. But when I change the appended separator to another character, it totally ruins the layout.
Here's a picture of the two outputs.
(https://i.stack.imgur.com/BazVv.png)
The first one is okay because I didn't use a comma. In the second one, on the other hand, I reversed the full name, so I used a comma.
What do you think I should do with this one?
You should separate your CSV fields with something else that you know will not appear in your columns, like -> ; <-
To do so, select the cells -> right click -> Format Cells -> Custom -> # -> OK.
After that, export your CSV file and you will see that the columns are now separated by -> ; <- and not -> , <-
Then, in your code, replace the , with the ;
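Here is a sketch of that change applied to the loop from the question (reusing the same array and result variables); it also wraps any field that still contains the delimiter or a quote in double quotes, the usual CSV convention, so the cell stays intact:

StringBuilder sb = new StringBuilder();
for (String s : array) {
    String field = s.trim();
    // Quote fields containing the delimiter or quotes so they stay in one cell
    if (field.contains(";") || field.contains("\"")) {
        field = "\"" + field.replace("\"", "\"\"") + "\"";
    }
    sb.append(field).append(";");
}
result = sb.deleteCharAt(sb.length() - 1).toString();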
I have been using the following Lua code for a while to do simple CSV-to-array conversions. Previously everything had a value in every column, but this time, on a CSV-formatted bank statement, there are empty values, which this code does not handle.
Here's an example CSV, with debits and credits.
Transaction Date,Transaction Type,Sort Code,Account Number,Transaction Description,Debit Amount,Credit Amount,Balance
05/04/2022,DD,'11-70-79,6033606,Refund,,10.00,159.57
05/04/2022,DEB,'11-70-79,6033606,Henry Ltd,30.00,,149.57
05/04/2022,SO,'11-70-79,6033606,NEIL PARKS,20.00,,179.57
01/04/2022,FPO,'11-70-79,6033606,MORTON GREEN,336.00,,199.57
01/04/2022,DD,'11-70-79,6033606,WORK SALARY,,100.00,435.57
01/04/2022,DD,'11-70-79,6033606,MERE BC,183.63,,535.57
01/04/2022,DD,'11-70-79,6033606,ABC LIFE,54.39,,719.20
I've tried different patterns (https://www.lua.org/pil/20.2.html), but none seem to work, and I'm beginning to think I can't fix this via the pattern alone, as it would break how it works for the rest. I'd appreciate it if anyone can share how they would approach this…
local csvfilename = "/mnt/nas/Fireflyiii.csv"
local MATCH_PATTERN = "[^,]+"

local function create_array_from_file(csvfilename)
    local file = assert(io.open(csvfilename, "r"))
    local arr = {}
    for line in file:lines() do
        local row = {}
        for match in string.gmatch(line, MATCH_PATTERN) do
            table.insert(row, match)
        end
        table.insert(arr, row)
    end
    return arr
end
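One common fix (a sketch; it still does not handle quoted fields that contain commas): the pattern "[^,]+" requires at least one non-comma character, so two adjacent commas produce no match and the empty field is silently dropped. Appending a trailing comma to each line and capturing zero-or-more characters per field keeps the empty values:

local MATCH_PATTERN = "([^,]*),"

local function create_array_from_file(csvfilename)
    local file = assert(io.open(csvfilename, "r"))
    local arr = {}
    for line in file:lines() do
        local row = {}
        -- Append a comma so the final (possibly empty) field also matches
        for match in string.gmatch(line .. ",", MATCH_PATTERN) do
            table.insert(row, match)
        end
        table.insert(arr, row)
    end
    file:close()
    return arr
end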
I'm trying to convert a query from Redshift to Snowflake SQL.
The Redshift query looks like this:
SELECT
    cr.creatives as creatives
    , JSON_ARRAY_LENGTH(cr.creatives) as creatives_length
    , JSON_EXTRACT_PATH_TEXT(JSON_EXTRACT_ARRAY_ELEMENT_TEXT(cr.creatives, 0), 'previewUrl') as preview_url
FROM campaign_revisions cr
The Snowflake query looks like this:
SELECT
    cr.creatives as creatives
    , ARRAY_SIZE(TO_ARRAY(ARRAY_CONSTRUCT(cr.creatives))) as creatives_length
    , PARSE_JSON(PARSE_JSON(cr.creatives)[0]):previewUrl as preview_url
FROM campaign_revisions cr
It seems like JSON_EXTRACT_PATH_TEXT isn't converted correctly, as the Snowflake query results in an error:
Error parsing JSON: more than one document in the input
cr.creatives is formatted like this:
"[{""previewUrl"":""https://someurl.com/preview1.png"",""device"":""desktop"",""splitId"":null,""splitType"":null},{""previewUrl"":""https://someurl.com/preview2.png"",""device"":""mobile"",""splitId"":null,""splitType"":null}]"
It seems to me that you are not working with valid JSON data inside Snowflake.
Please review the file format used for your COPY INTO command.
If you open the "JSON" text provided in a text editor, note that the information is not parsed or formatted as JSON because of the quoting you have. Once your issue with the doubled / escaped quotes is handled, you should be able to make good progress.
(Screenshot: proper JSON on the left, original data on the right.)
If you are not inclined to reload your data, see if you can create a JavaScript user-defined function to remove the quotes from your string; then you can use Snowflake to process the variant column.
The following is a working proof of concept in JavaScript that removes the doubled double quotes for you.
var textOriginal = '[{""previewUrl"":""https://someurl.com/preview1.png"",""device"":""desktop"",""splitId"":null,""splitType"":null},{""previewUrl"":""https://someurl.com/preview2.png"",""device"":""mobile"",""splitId"":null,""splitType"":null}]';

function parseText(input) {
    // Collapse the doubled quotes, then parse the result as JSON
    var a = input.replaceAll('""', '"');
    a = JSON.parse(a);
    return a;
}

x = parseText(textOriginal);
console.log(x);
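Wrapped as a Snowflake JavaScript UDF, that logic might look like the following (a sketch; the name fix_quotes is illustrative, and it assumes the stored text only needs its doubled quotes collapsed):

create or replace function fix_quotes(s string)
returns variant
language javascript
as
$$
    // Snowflake exposes JS UDF arguments in uppercase, hence S
    return JSON.parse(S.replace(/""/g, '"'));
$$;

select
    fix_quotes(cr.creatives)[0]:previewUrl::string as preview_url
from campaign_revisions cr;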
For anyone else seeing this double double quote issue in JSON fields coming from CSV files in a Snowflake external stage (slightly different issue than the original question posted):
The issue is likely that you need to use the FIELD_OPTIONALLY_ENCLOSED_BY setting, specifically FIELD_OPTIONALLY_ENCLOSED_BY = '"', when setting up your file format.
(docs)
Example of creating such a file format:
create or replace file format mydb.myschema.my_tsv_file_format
type = CSV
field_delimiter = '\t'
FIELD_OPTIONALLY_ENCLOSED_BY = '"';
And an example of querying from a stage using this file format:
select
    $1 field_one,
    $2 field_two
    -- ...and so on
from '@my_s3_stage/path/to/file/my_tab_separated_file.csv' (file_format => 'my_tsv_file_format')
I have this CSV file.
Almost all the records are getting processed fine; however, there are two cases in which I am experiencing an issue.
Case 1:
A record containing quotes within quotes:
"some data "some data" some data"
Case 2:
A record containing comma within quotes:
"some data, some data some data"
I have looked into this issue and got my way around it by using the quoting parameter of the extractor, but I have observed that setting quoting:false solves case 1 and fails for case 2, while setting quoting:true solves case 2 but fails for case 1.
Constraints: there is no room for changing the data file; the future data will be tailored accordingly, but for this existing data I have to resolve it.
Try this: import the records as one row and fix the row text using double quotes (do the same for the commas):
DECLARE @input string = @"/Samples/Data/Sample1.csv";
DECLARE @output string = @"/Output/Sample1.txt";

// Import records as one row
@data =
    EXTRACT rowastext string
    FROM @input
    USING Extractors.Text('\n', quoting: false);

// Fix the row text using double quotes
@query =
    SELECT Regex.Replace(rowastext, "([^,])\"([^,])", "$1\"\"$2") AS rowascsv
    FROM @data;

OUTPUT @query
TO @output
USING Outputters.Csv(quoting : false);
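From there, you would presumably re-read the cleaned output with quoting enabled; a sketch of that second pass (the column names are illustrative):

// Sketch: extract the fixed rows back out with a quoting CSV extractor
@clean =
    EXTRACT field1 string,
            field2 string,
            field3 string
    FROM "/Output/Sample1.txt"
    USING Extractors.Csv(quoting : true);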
I want to transpose rows into columns and then save the words in a CSV file. The problem is that only the last value of each column after the transpose is saved in the file, and if I append the string to a list, it saves to the file but as characters, not words.
Can anyone help me sort this out? Thanks in advance.
import re
import csv

app = []
with open('afterstem.csv') as f:
    words = [x.split() for x in f]

for x in zip(*words):
    for y in x:
        res = y
        newstr = re.sub('"', r'', res)
        app = app + list(res)
        #print("AFTER", newstr)

with open(r"removequotes.csv", "w") as output:
    writer = csv.writer(output, lineterminator='\n', delimiter='\t')
    for val in app:
        writer.writerow(val)
output.close()
The output saved in the file looks like this:
But I want "Bank" in one cell.
Simply use
for column in zip(*words):
    newrows = [[word.replace('"', '')] for word in column]
    app.extend(newrows)
to put all columns one after another into the first column.
newrows = [[word.replace('"', '')] for word in column] creates a new list for each column, with the double quotes stripped and each word wrapped in a list, and app.extend(newrows) appends all of these lists to your result variable app.
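Put together with the rest of your script, it looks like this (a sketch reusing the file names from your question):

import csv

app = []
with open('afterstem.csv') as f:
    words = [x.split() for x in f]

# One single-item row per word, with the columns stacked one after another
for column in zip(*words):
    newrows = [[word.replace('"', '')] for word in column]
    app.extend(newrows)

with open(r"removequotes.csv", "w") as output:
    writer = csv.writer(output, lineterminator='\n', delimiter='\t')
    for val in app:
        writer.writerow(val)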
You got your result because of your inner loop and in particular its last line:
for y in x:
    ...
    app = app + list(res)
The for-loop takes each word in each column, and list(res) converts the string containing the word into a list of its characters. So "Bank" becomes ['B', 'a', 'n', 'k'], etc. Then app = app + list(res) creates a new list that contains every item from app plus the characters from the word, and assigns it to app.
In the end you get an array containing every letter from the file instead of an array with all the words in the file in the right order. The call to writer.writerow(val) then writes each letter as its own row.
BTW: If your input also uses tabs to delimit columns, it might be easier to use list(csv.reader(f, lineterminator='\n', delimiter='\t')) instead of your simple read with split() and stripping of quotes.
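For example (a sketch, assuming the input really is tab-delimited):

import csv

with open('afterstem.csv', newline='') as f:
    words = list(csv.reader(f, lineterminator='\n', delimiter='\t'))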
Hi, good day. I'm very new to PowerBuilder and I'm using PB 11.5.
Does someone know how to import a comma-delimited text file into a DataWindow?
Example text file:
"1234","20141011","Juan, Delacruz","Usa","001992345456"...
"12345","20141011","Arc, Ino","Newyork","005765753256"...
How can I import the third column, which is the full name, and the last column, which is the account number? I want to transfer the name and account number into my external DataWindow. I've tried ImportString(), but all the rows end up in one column only. I have three fields in my external DataWindow, among them the name and the account number.
Here's the code
ls_File = dw_2.Object.file_name[1]
li_FileHandle = FileOpen(ls_File)
li_FileRead = FileRead(li_FileHandle, ls_Text)
DO WHILE li_FileRead > 0
    li_Count ++
    li_FileRead = FileRead(li_FileHandle, ls_Text)
    ll_row = dw_1.ImportString(ls_Text, 1)
LOOP
Please help me with the code! Thank you.
It seems that PB expects a tab-separated file by default (while the 'c' in 'csv' stands for 'comma'...).
Add the csv! enumerated value to the arguments of ImportString() and it should fix the issue (it does in my test box).
Also, the columns defined in your DataObject must match the columns in the CSV file (at least for the first columns you are interested in). If there are more columns in the CSV file, they will be ignored. But if you want to get the 1st (or 2nd) and 3rd columns, you need to define the first 3 columns. You can always hide #1 or #2 if you do not need them.
BTW, your code has some issues:
You should always test the return value of function calls like FileOpen() so you can stop processing in case of a non-existent / non-readable file.
You are reading the text file twice for the first row: once before the WHILE and again inside the loop. Or maybe that is intended, to skip a first line with column headers?
FWIW, here is a working version based on yours:
string ls_file = "c:\dev\powerbuilder\experiment\data.csv"
string ls_text
int li_FileHandle, li_fileread, li_count
long ll_row

li_FileHandle = FileOpen(ls_File)
if li_FileHandle < 1 then
    return
end if

li_FileRead = FileRead(li_FileHandle, ls_Text)
DO WHILE li_FileRead > 0
    li_Count ++
    ll_row = dw_1.ImportString(csv!, ls_Text, 1)
    li_FileRead = FileRead(li_FileHandle, ls_Text) // read next line
LOOP
fileclose(li_fileHandle)
Use the datawindow_name.ImportFile(CSV!, file_path) method.
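For example (a sketch; the path is illustrative):

// Import the whole file in one call instead of looping over FileRead()
long ll_rows
ll_rows = dw_1.ImportFile(CSV!, "c:\dev\powerbuilder\experiment\data.csv")
if ll_rows < 0 then
    // a negative return value indicates an import error
end if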