I have a program that maintains custom Z tables: it exports a table to an Excel spreadsheet, and it also refreshes and updates the table from an Excel spreadsheet, using .XLSX files.
However, I also want the program to accept .CSV files.
I use the CL_GUI_FRONTEND_SERVICES=>GUI_UPLOAD method to get the raw data, but when I try to convert the raw data to an XSTRING, an error is thrown.
My question: Is the CL_FDT_XL_SPREADSHEET class suitable for .CSV file data or is it only suitable for .XLSX files?
The upload to SAP from .XLSX is done with the CL_GUI_FRONTEND_SERVICES=>GUI_UPLOAD method to get the raw data. The raw data is then converted to an XSTRING and passed into the CL_FDT_XL_SPREADSHEET class, and the IF_FDT_DOC_SPREADSHEET~GET_ITAB_FROM_WORKSHEET method is called to pass the data to a variable, which another method uses to upload to SAP. This works fine.
Code:
METHOD import_excel_data.
  DATA: lt_xtab TYPE cpt_x255,
        lv_size TYPE i.

  IF i_filetype = abap_true. "****** .XLSX upload ******
    cl_gui_frontend_services=>gui_upload( EXPORTING  filename   = i_file
                                                     filetype   = 'BIN'
                                          IMPORTING  filelength = lv_size
                                          CHANGING   data_tab   = lt_xtab
                                          EXCEPTIONS file_open_error      = 1
                                                     file_read_error      = 2
                                                     error_no_gui         = 3
                                                     not_supported_by_gui = 4
                                                     OTHERS               = 5 ).
    IF sy-subrc <> 0.
      RAISE EXCEPTION TYPE zcx_excel_exception EXPORTING i_message = |Invalid File { i_file }| ##no_text.
    ENDIF.
  ELSE. "****** .CSV upload ******
    cl_gui_frontend_services=>gui_upload( EXPORTING  filename            = i_file
                                                     filetype            = 'ASC'
                                                     has_field_separator = abap_true
                                          IMPORTING  filelength          = lv_size
                                          CHANGING   data_tab            = lt_xtab
                                          EXCEPTIONS file_open_error      = 1
                                                     file_read_error      = 2
                                                     error_no_gui         = 3
                                                     not_supported_by_gui = 4
                                                     OTHERS               = 5 ).
    IF sy-subrc <> 0.
      RAISE EXCEPTION TYPE zcx_excel_exception EXPORTING i_message = |Invalid File { i_file }| ##no_text.
    ENDIF.
  ENDIF.

  " Convert the raw binary table into an XSTRING
  cl_scp_change_db=>xtab_to_xstr( EXPORTING im_xtab    = lt_xtab
                                            im_size    = lv_size
                                  IMPORTING ex_xstring = DATA(lv_xstring) ).

  " Hand the XSTRING to the spreadsheet parser and read the first worksheet
  DATA(lo_excel) = NEW cl_fdt_xl_spreadsheet( document_name = i_file
                                              xdocument     = lv_xstring ).
  lo_excel->if_fdt_doc_spreadsheet~get_worksheet_names(
    IMPORTING worksheet_names = DATA(lt_worksheets) ).
  rt_table = lo_excel->if_fdt_doc_spreadsheet~get_itab_from_worksheet( lt_worksheets[ 1 ] ).

  IF rt_table IS INITIAL.
    RAISE EXCEPTION TYPE zcx_excel_exception EXPORTING i_message = 'No Data found in Excel File' ##no_text.
  ENDIF.
ENDMETHOD.
Is the CL_FDT_XL_SPREADSHEET class suitable for .CSV file data or is it only suitable for .XLSX files?
No. CL_FDT_XL_SPREADSHEET is based on the ABAP iXML framework and works purely with XML formats compliant with the OOXML specification, which XLSX is also based on.
CSV is plain delimited text, not XML, so it does not meet that prerequisite and the class won't work with it.
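For the CSV branch you don't need CL_FDT_XL_SPREADSHEET at all: GUI_UPLOAD with filetype 'ASC' already returns the file as text lines, so you can split each line on the separator yourself. A minimal sketch under that assumption (lt_lines, lv_line and the separator are illustrative, not taken from your code):

" Sketch: read the CSV as text lines and split them manually.
DATA lt_lines TYPE TABLE OF string.
cl_gui_frontend_services=>gui_upload( EXPORTING  filename = i_file
                                                 filetype = 'ASC'
                                      CHANGING   data_tab = lt_lines
                                      EXCEPTIONS OTHERS   = 1 ).
IF sy-subrc = 0.
  LOOP AT lt_lines INTO DATA(lv_line).
    " Adjust the separator (',' or ';') to match your files
    SPLIT lv_line AT ',' INTO TABLE DATA(lt_fields).
    " ... map lt_fields into the row structure of rt_table ...
  ENDLOOP.
ENDIF.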
Related
I am loading CSV files for data import. The files come from various sources, so the header names and locations change often. I searched and found helpful libraries like CsvHelper and FileHelpers.
Question: using either FileHelpers or CsvHelper, how do we extract both the header names and the column data types, so that I can create a dropdown for each column to map between a .NET type and a SQL type?
Just read in the first line of the file with, say,
string headers = File.ReadLines("MyFile.txt").First();
And then use a class builder to build whatever CSV spec you need.
// DelimitedClassBuilder comes from FileHelpers (namespace FileHelpers.Dynamic in recent versions, FileHelpers.RunTime in older ones)
DelimitedClassBuilder cb = new DelimitedClassBuilder("MyProduct", delimiter: ",");
cb.AddField("Name", typeof(string));
cb.LastField.TrimMode = TrimMode.Both;
cb.AddField("Description", typeof(string));
cb.LastField.FieldQuoted = true;
cb.LastField.QuoteChar = '"';
cb.LastField.QuoteMode = QuoteMode.OptionalForBoth;
// etc... e.g., add a date field
cb.AddField("SomeDate", typeof(DateTime));
var engine = new FileHelperEngine(cb.CreateRecordClass());
DataTable dt = engine.ReadFileAsDT("test.txt");
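As for the .NET-to-SQL side of the mapping, neither library provides it directly; a small lookup table is usually enough. A hedged sketch (the type choices are mine, adjust them to your target schema):

using System;
using System.Collections.Generic;
using System.Data;

// Illustrative mapping from .NET types to SQL Server types.
static class TypeMap
{
    static readonly Dictionary<Type, SqlDbType> Map = new Dictionary<Type, SqlDbType>
    {
        { typeof(string),   SqlDbType.NVarChar },
        { typeof(int),      SqlDbType.Int },
        { typeof(decimal),  SqlDbType.Decimal },
        { typeof(DateTime), SqlDbType.DateTime },
        { typeof(bool),     SqlDbType.Bit },
    };

    public static SqlDbType ToSqlType(Type netType) => Map[netType];
}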
I know that I can use LIST_TO_ASCI to convert a report to ASCII, but I would like a higher-level data format like JSON, XML, or CSV.
Is there a way to get something that is easier to handle than ASCII?
Here is the report I'd like to convert:
The conversion needs to be executed in ABAP on a result which was executed like this:
SUBMIT <REPORT_NAME> ... EXPORTING LIST TO MEMORY AND RETURN.
You can get access to the SUBMIT list in memory like this:
CALL FUNCTION 'LIST_FROM_MEMORY'
  TABLES
    listobject = t_list
  EXCEPTIONS
    not_found  = 1
    OTHERS     = 2.
IF sy-subrc <> 0.
  MESSAGE 'Unable to get list from memory' TYPE 'E'.
ENDIF.

CALL FUNCTION 'WRITE_LIST'
  TABLES
    listobject = t_list
  EXCEPTIONS
    empty_list = 1
    OTHERS     = 2.
IF sy-subrc <> 0.
  MESSAGE 'Unable to write list' TYPE 'E'.
ENDIF.
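If you want the list as plain text rows first, the same list object can be flattened with LIST_TO_ASCI, which you already mentioned; a minimal sketch (t_ascii typed as CHAR255 lines here for illustration):

DATA t_ascii TYPE TABLE OF char255.

CALL FUNCTION 'LIST_TO_ASCI'
  TABLES
    listasci   = t_ascii
    listobject = t_list
  EXCEPTIONS
    empty_list = 1
    OTHERS     = 2.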
And the final step of the solution (converting the result table to JSON) was already answered to you in your question.
I found a solution here: http://zevolving.com/2015/07/salv-table-22-get-data-directly-after-submit/
This is the code:
DATA: lt_outtab TYPE STANDARD TABLE OF alv_t_t2.
FIELD-SYMBOLS: <lt_outtab> LIKE lt_outtab.
DATA lo_data TYPE REF TO data.

" Let the model know that the data should be collected
cl_salv_bs_runtime_info=>set( EXPORTING display  = abap_false
                                        metadata = abap_false
                                        data     = abap_true ).

SUBMIT salv_demo_table_simple AND RETURN.

TRY.
    " Get the data from the SALV model
    cl_salv_bs_runtime_info=>get_data_ref( IMPORTING r_data = lo_data ).
    ASSIGN lo_data->* TO <lt_outtab>.
    BREAK-POINT.
  CATCH cx_salv_bs_sc_runtime_info.
ENDTRY.
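From here the table behind <lt_outtab> can be converted to whatever format you need; for example, a minimal JSON sketch, assuming the /ui2/cl_json class is available in your release:

" Sketch: serialize the grabbed table to JSON (assumes /ui2/cl_json exists in your system)
DATA(lv_json) = /ui2/cl_json=>serialize( data        = <lt_outtab>
                                         pretty_name = /ui2/cl_json=>pretty_mode-low_case ).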
Big thanks to Sandra Rossi; she gave me the hint about cx_salv_bs_sc_runtime_info.
Related answer: https://stackoverflow.com/a/52834118/633961
I am trying to read a JSON file and convert it to a .csv file, and I got this error.
employee_parsed = json.loads('E:/Masters_Materials/Data_Science/Food/train.json')
emp_data = employee_parsed['train']

# open a file for writing
employ_data = open('E:/Masters_Materials/Data_Science/Food/train.csv', 'a')

# create the csv writer object
csvwriter = csv.writer(employ_data)

count = 0
for emp in emp_data:
    if count == 0:
        header = emp.keys()
        csvwriter.writerow(header)
        count += 1
    csvwriter.writerow(emp.values())

employ_data.close()
The JSON should be a homogeneous collection. You need to create the header from the JSON object's keys in comma-separated form, then create each row by getting the respective key values in comma-separated form. (If you are serving the result over HTTP, just write it into the response after setting the response type to CSV.)
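The error in the posted code, though, comes from json.loads: it parses a JSON string, not a file path. Use json.load on an open file instead. A minimal corrected sketch, assuming train.json really has a top-level 'train' list of flat objects:

import csv
import json

# json.load reads from a file object; json.loads('E:/...') tries to parse
# the path string itself as JSON, which fails.
with open('E:/Masters_Materials/Data_Science/Food/train.json') as f:
    employee_parsed = json.load(f)

emp_data = employee_parsed['train']  # assumes a top-level 'train' list

with open('E:/Masters_Materials/Data_Science/Food/train.csv', 'w', newline='') as out:
    csvwriter = csv.writer(out)
    csvwriter.writerow(emp_data[0].keys())  # header row from the first object's keys
    for emp in emp_data:
        csvwriter.writerow(emp.values())    # one row per object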
I have an IronPython script I use to export all my data tables from a Spotfire project.
Currently it works perfectly: it loops through all data tables and exports them as ".xlsx". Now I need to export the files as ".csv", which I thought would be as simple as changing ".xlsx" to ".csv".
The script still exports the files and names them all .csv, but what is inside each file is still .xlsx data, and I'm not sure how or why. The code only changes the file extension; it does not convert the file to CSV.
Here is the code I am currently using:
I have posted the full code at the bottom, and the code I believe is relevant to my question in a separate code block at the top.
if(dialogResult == DialogResult.Yes):
    for d in tableList: #cycles through the table list elements defined above
        writer = Document.Data.CreateDataWriter(DataWriterTypeIdentifiers.ExcelXlsDataWriter)
        table = Document.Data.Tables[d[0]] #d[0] is the Data Table name in the Spotfire project (defined above)
        filtered = Document.ActiveFilteringSelectionReference.GetSelection(table).AsIndexSet() #OR pass the filter
        stream = File.OpenWrite(savePath+'\\'+ d[1] +".csv") #d[1] is the Excel alias name. You could also use d.Name to export with the Data Table name
        names = []
        for col in table.Columns:
            names.append(col.Name)
        writer.Write(stream, table, filtered, names)
        stream.Close()
I think it may have to do with the ExcelXlsDataWriter.
I tried with ExcelXlsxDataWriter as well. Is there a csv writer I could use for this? I believe csv and txt files have a different writer.
Any help is appreciated.
Full script shown below:
import System
import clr
import sys
clr.AddReference("System.Windows.Forms")
from sys import exit
from System.Windows.Forms import FolderBrowserDialog, MessageBox, MessageBoxButtons, DialogResult
from Spotfire.Dxp.Data.Export import DataWriterTypeIdentifiers
from System.IO import File, FileStream, FileMode

#This is a list of Data Tables and their Excel file names, referenced below as d[0] and d[1] respectively.
tableList = [
    ["TestTable1", "TestTable1"],
    ["TestTable2", "TestTable2"],
]

#imports the location of the file so that there is a default place to put the exports.
from Spotfire.Dxp.Application import DocumentMetadata
dmd = Application.DocumentMetadata #Get MetaData
path = str(dmd.LoadedFromFileName) #Get Path
savePath = '\\'.join(path.split('\\')[0:-1]) + "\\DataExports\\"

dialogResult = MessageBox.Show("The files will be saved to "+savePath
    +". Do you want to change location?",
    "Select the save location", MessageBoxButtons.YesNo)
if(dialogResult == DialogResult.Yes):
    #gets the file path from the user through a folder dialog instead of using the file location
    SaveFile = FolderBrowserDialog()
    SaveFile.ShowDialog()
    savePath = SaveFile.SelectedPath

#message making sure that the user wants to export the files.
dialogResult = MessageBox.Show("Export Files.",
    "Are you sure?", MessageBoxButtons.YesNo)
if(dialogResult == DialogResult.Yes):
    for d in tableList: #cycles through the table list elements defined above
        writer = Document.Data.CreateDataWriter(DataWriterTypeIdentifiers.ExcelXlsDataWriter)
        table = Document.Data.Tables[d[0]] #d[0] is the Data Table name in the Spotfire project (defined above)
        filtered = Document.ActiveFilteringSelectionReference.GetSelection(table).AsIndexSet() #OR pass the filter
        stream = File.OpenWrite(savePath+'\\'+ d[1] +".csv") #d[1] is the Excel alias name. You could also use d.Name to export with the Data Table name
        names = []
        for col in table.Columns:
            names.append(col.Name)
        writer.Write(stream, table, filtered, names)
        stream.Close()
#if the user doesn't want to export then he just gets a message
else:
    dialogResult = MessageBox.Show("ok.")
For some reason the Spotfire IronPython implementation does not support the csv package that comes with standard Python.
The workaround I found for your implementation is using StdfDataWriter instead of ExcelXlsDataWriter. The STDF data format is the Spotfire Text Data Format. The DataWriter class in Spotfire supports only Excel and STDF (see here), and STDF comes closest to CSV.
from System.IO import File
from Spotfire.Dxp.Data.Export import DataWriterTypeIdentifiers

writer = Document.Data.CreateDataWriter(DataWriterTypeIdentifiers.StdfDataWriter)
table = Document.Data.Tables['DropDownSelectors']
filtered = Document.ActiveFilteringSelectionReference.GetSelection(table).AsIndexSet()
#raw string so the backslashes in the Windows path are not treated as escape sequences
stream = File.OpenWrite(r"C:\Users\KLM68651\Documents\dropdownexport.stdf")
names = []
for col in table.Columns:
    names.append(col.Name)
writer.Write(stream, table, filtered, names)
stream.Close()
Hope this helps
I need to write the result of an SQL query into a CSV file encoded in UTF-8 (I need this encoding because there are French letters). One of the columns is too large (more than 20000 characters), so I can't use DT_WSTR for it. The input type is DT_TEXT, so I use a Data Conversion component to change it to DT_NTEXT. But when I then try to write it to the file, I get this error message:
Error 2 Validation error. The data type for "input column" is DT_NTEXT, which is not supported with ANSI files. Use DT_TEXT instead and convert the data to DT_NTEXT using the data conversion component
Is there a way I can write the data to my file?
Thank you
I have had this kind of issue sometimes too. When working with data larger than 255 characters, SSIS sees it as blob data and will always handle it as such.
I then converted this blob stream data to readable text with a Script Component; after that, other transformations are possible.
This was the case in the SSIS that shipped with SQL Server 2008, and I believe it hasn't changed yet.
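The blob-to-text step in such a Script Component usually looks something like this minimal sketch (Input0_ProcessInputRow is the standard generated override; MyTextColumn and MyStringColumn are illustrative column names, not from the question):

using System.Text;

// Sketch of the Script Component's per-row hook.
public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    if (!Row.MyTextColumn_IsNull)
    {
        // Read the blob bytes from the DT_(N)TEXT column
        byte[] bytes = Row.MyTextColumn.GetBlobData(0, (int)Row.MyTextColumn.Length);
        // DT_NTEXT is stored as UTF-16; for DT_TEXT use the column's code page encoding instead
        Row.MyStringColumn = Encoding.Unicode.GetString(bytes);
    }
}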
I ended up doing just what Samyne says: I used a script.
First I modified my SQL stored procedure: instead of having several columns, I put all the info into one single column, as follows:
Select Column1 + '^' + Column2 + '^' + Column3 ...
Then I used this code in a script:
// Requires: using System.Data; using System.Data.OleDb; using System.IO; using System.Text;
string fileName = Dts.Variables["SLTemplateFilePath"].Value.ToString();
using (var stream = new FileStream(fileName, FileMode.Truncate))
{
    using (var sw = new StreamWriter(stream, Encoding.UTF8))
    {
        OleDbDataAdapter oleDA = new OleDbDataAdapter();
        DataTable dt = new DataTable();
        // Fill the DataTable from the SSIS object variable holding the recordset
        oleDA.Fill(dt, Dts.Variables["FileData"].Value);
        foreach (DataRow row in dt.Rows)
        {
            foreach (DataColumn column in dt.Columns)
            {
                sw.WriteLine(row[column]);
            }
        }
        sw.WriteLine();
    }
}
Putting all the info into one column is optional; I just wanted to avoid handling it in the script. This way, if my stored procedure changes, I don't need to modify the SSIS package.