Regarding JSON to CSV

I am trying to read a JSON file and convert it to a .csv file, but I get an error with the following code.
employee_parsed = json.loads('E:/Masters_Materials/Data_Science/Food/train.json')
emp_data = employee_parsed['train']
# open a file for writing
employ_data = open('E:/Masters_Materials/Data_Science/Food/train.csv', 'a')
# create the csv writer object
csvwriter = csv.writer(employ_data)
count = 0
for emp in emp_data:
    if count == 0:
        header = emp.keys()
        csvwriter.writerow(header)
        count += 1
    csvwriter.writerow(emp.values())
employ_data.close()

The JSON should be a homogeneous collection of objects. You need to create a header from the JSON object keys in comma-separated form, then create each row by taking the corresponding key values, also comma-separated. If you are serving the result, just write it into the HTTP response after setting the response type to CSV.
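A minimal sketch of that approach, assuming the file holds an object with a 'train' key containing a list of flat, identically-keyed records (as the question's code implies). Note that json.loads parses a JSON string, so reading from a path needs json.load on an open file:

import csv
import json

# json.load reads from a file object; json.loads would need the JSON text itself
with open('E:/Masters_Materials/Data_Science/Food/train.json') as src:
    emp_data = json.load(src)['train']

with open('E:/Masters_Materials/Data_Science/Food/train.csv', 'w', newline='') as out:
    csvwriter = csv.writer(out)
    csvwriter.writerow(emp_data[0].keys())   # header: keys of the first record
    for emp in emp_data:
        csvwriter.writerow(emp.values())     # one row of values per record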


Using CL_FDT_XL_SPREADSHEET class for a .CSV possible?

I have a program that maintains custom Z tables by exporting a table to an Excel spreadsheet; it also refreshes the table and updates it from an Excel spreadsheet using .XLSX files.
However, I also want the program to accept .CSV files.
I use the CL_GUI_FRONTEND_SERVICES=>GUI_UPLOAD method to get the raw data, but when I try to convert the raw data to an XSTRING, an error is thrown.
My question: Is the CL_FDT_XL_SPREADSHEET class suitable for .CSV file data, or is it only suitable for .XLSX files?
The upload to SAP from .XLSX is done with the CL_GUI_FRONTEND_SERVICES=>GUI_UPLOAD method to get the raw data. This is then converted to an XSTRING and passed into the CL_FDT_XL_SPREADSHEET class, and the IF_FDT_DOC_SPREADSHEET~GET_ITAB_FROM_WORKSHEET method is called to pass that data to a variable, which is used in another method to upload to SAP. This works fine.
Code:
METHOD import_excel_data.
  DATA: lt_xtab TYPE cpt_x255,
        lv_size TYPE i.

  IF i_filetype = abap_true. "******.XLSX UPLOAD*********
    cl_gui_frontend_services=>gui_upload( EXPORTING  filename   = i_file
                                                     filetype   = 'BIN'
                                          IMPORTING  filelength = lv_size
                                          CHANGING   data_tab   = lt_xtab
                                          EXCEPTIONS file_open_error      = 1
                                                     file_read_error      = 2
                                                     error_no_gui         = 3
                                                     not_supported_by_gui = 4
                                                     OTHERS               = 5 ).
    IF sy-subrc <> 0.
      RAISE EXCEPTION TYPE zcx_excel_exception EXPORTING i_message = |Invalid File { i_file }| ##no_text.
    ENDIF.
  ELSE. "******.CSV UPLOAD*********
    cl_gui_frontend_services=>gui_upload( EXPORTING  filename            = i_file
                                                     filetype            = 'ASC'
                                                     has_field_separator = abap_true
                                          IMPORTING  filelength = lv_size
                                          CHANGING   data_tab   = lt_xtab
                                          EXCEPTIONS file_open_error      = 1
                                                     file_read_error      = 2
                                                     error_no_gui         = 3
                                                     not_supported_by_gui = 4
                                                     OTHERS               = 5 ).
    IF sy-subrc <> 0.
      RAISE EXCEPTION TYPE zcx_excel_exception EXPORTING i_message = |Invalid File { i_file }| ##no_text.
    ENDIF.
  ENDIF.

  cl_scp_change_db=>xtab_to_xstr( EXPORTING im_xtab    = lt_xtab
                                            im_size    = lv_size
                                  IMPORTING ex_xstring = DATA(lv_xstring) ).

  DATA(lo_excel) = NEW cl_fdt_xl_spreadsheet( document_name = i_file
                                              xdocument     = lv_xstring ).

  lo_excel->if_fdt_doc_spreadsheet~get_worksheet_names(
    IMPORTING worksheet_names = DATA(lt_worksheets) ).

  rt_table = lo_excel->if_fdt_doc_spreadsheet~get_itab_from_worksheet( lt_worksheets[ 1 ] ).

  IF rt_table IS INITIAL.
    RAISE EXCEPTION TYPE zcx_excel_exception EXPORTING i_message = 'No Data found in Excel File' ##no_text.
  ENDIF.
ENDMETHOD.
Is the CL_FDT_XL_SPREADSHEET class suitable for .CSV file data, or is it only suitable for .XLSX files?
No. CL_FDT_XL_SPREADSHEET is based on the ABAP iXML framework and works purely with XML formats compliant with the OOXML specification, which XLSX is also based on.
CSV is nowhere near this prerequisite, so it won't work.
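One way to see the gap: an .xlsx file is a ZIP archive of OOXML XML parts, which is what an iXML-based parser expects, while a .csv file is plain delimited text. A quick illustration of the difference (Python here, file names hypothetical):

import zipfile

# A real .xlsx is a ZIP container of XML parts (OOXML); CSV is just text.
print(zipfile.is_zipfile('report.xlsx'))  # True for a genuine .xlsx workbook
print(zipfile.is_zipfile('report.csv'))   # False: nothing for an XML parser to open

So for the CSV branch, the raw text from GUI_UPLOAD has to be split on its delimiter yourself instead of being handed to CL_FDT_XL_SPREADSHEET.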

Update Users Laravel Eloquent Active Directory JSON

I am trying to update a Laravel users table with Eloquent from an Active Directory JSON dump. I have gotten as far as parsing the JSON and removing user records that are in MySQL but not in the JSON. I'm struggling to figure out how to insert new records from the JSON that are not in MySQL, and how to update existing records in MySQL that have changed in the JSON.
Here is my code so far
//Read AD Users JSON File
$strJsonFileContents = file_get_contents(public_path('ADusers.json'));
//Remove UTF8 Bom Characters and strip slashes
$bom = pack('H*','EFBBBF');
$strJsonFileContents = preg_replace("/^$bom/", '', $strJsonFileContents);
//run twice for double slashes
$strJsonFileContents = stripslashes($strJsonFileContents);
$strJsonFileContents = stripslashes($strJsonFileContents);
$strJsonFileContents = str_replace('/', '', $strJsonFileContents);
//JSON to Object
$decoded_json = json_decode($strJsonFileContents);
//Get list of EMPIDs from JSON into array to remove from database
$id_list = implode(",", array_map(function ($val) { return (int) $val->extensionAttribute2; },$decoded_json));
$id_list_arr = explode (",", $id_list);
//Delete records from database that are not in JSON Array
$numDelete = DB::table('users')->select()->whereNotIn('extensionAttribute2', $id_list_arr)->delete();
I also realize that DB::table('users') is probably not quite the right way to do this, so if somebody can help me figure out how to use the User model and how to update/insert from the JSON into MySQL, I would greatly appreciate it.
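For what it's worth, the missing step is an upsert keyed on extensionAttribute2. A language-agnostic sketch of the whole sync (Python here, field and file names taken from the question; in Eloquent the per-record step corresponds to User::updateOrCreate(['extensionAttribute2' => $id], $attributes)):

import json

def sync_users(db_users, ad_dump):
    """Sync an in-memory stand-in for the users table against the AD dump."""
    ad_by_id = {int(u['extensionAttribute2']): u for u in ad_dump}
    # 1) delete users missing from the dump (the step already working in MySQL)
    db_users = {uid: rec for uid, rec in db_users.items() if uid in ad_by_id}
    # 2) upsert: update records that changed and insert ones that are new
    db_users.update(ad_by_id)
    return db_users

with open('ADusers.json') as f:
    db_users = sync_users(db_users={}, ad_dump=json.load(f))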

CSV not importing JSON with correct format into database

Just like the title says, here is my code:
require 'csv'
require 'json'

def import_csv
  path = Rails.root.join('folder1', 'folder2', 'file.csv')
  counter = 0
  puts "inserts on table started..."
  CSV.foreach(path, headers: true) do |row|
    next if row.to_hash['deleted_at'] != nil
    counter += 1
    puts row.to_json # shows correct format
    some_model = SomeModel.new(row.to_hash) # imports incorrect format of json with backslash in db
    # some_model = SomeModel.new(row.to_json) # ArgumentError: When assigning attributes, you must pass a hash as an argument.
    some_model.skip_callbacks = true
    some_model.save!
  end
  puts "#{counter} inserts on table apps complete"
end

import_csv
I cannot import the CSV file in the correct format: the import works, but the structure is wrong.
EXPECTED
{"data":{"someData":72}}
GETTING
"{\"data\":{\"someData\":72}}"
How can I import it with the correct JSON format?
If all the headers match the column names of the model, maybe you can try:
JSON.parse(row.to_json)
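The backslashes are a sign of double encoding: the CSV cell already contains a JSON string, and serializing that string again escapes its quotes, so the column needs the parsed structure rather than the raw string. A quick illustration of the effect (Python here, but the mechanics are identical in Ruby):

import json

cell = '{"data":{"someData":72}}'    # what the CSV cell actually holds: a JSON string

# Storing the raw string and encoding it again double-encodes it:
print(json.dumps(cell))              # "{\"data\":{\"someData\":72}}"  <- the observed value

# Parsing the cell first gives a structure that round-trips cleanly:
print(json.dumps(json.loads(cell)))  # {"data": {"someData": 72}}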

Insert KNN CSV into table (Lua)

I'm trying to load a CSV containing KNN data (3 columns, no names), e.g.:
4 3 a
1 3 a
3 3 a
4 5 b
I have been able to load the file into a string.
When I try to move that into a table I get no errors; however, when I print the table to the screen I get nil values.
I tried changing the contents of the file, which gives the same result, and if I change it to (knn_data) I get the path of the CSV in all keys.
I'm trying to get the CSV data to appear in the indexed table, split into its 3 columns.
Here is the code:
--load kNN file.
local knn_data = system.pathForFile("knn.csv", system.ResourceDirectory)

local file, errorString = io.open(knn_data, "r")
if not file then
    print("File Error: File Unavailable")
else
    local contents = file:read("*a")
    print(contents)
    io.close(file)
end
file = nil

-- load data into table
dataset = {}
for line in io.lines(knn_data) do
    dataset[#dataset+1] = (contents)
end
...
else
    local contents = file:read("*a")
    print(contents)
    --io.close(file)
end
contents is a local variable in your else statement.
Outside of it, contents is nil.
dataset = {}
for line in io.lines(knn_data) do
    dataset[#dataset+1] = (contents)
end
So dataset[#dataset+1] = (contents) is equivalent to dataset[#dataset+1] = nil.
Within that generic for loop, line contains the line read from the file, so that is what you should actually work with.
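To illustrate, here is the per-line parsing the answer is pointing at, sketched in Python under the assumption of three whitespace-separated columns as in the sample data; the same idea in the Lua loop would split line and insert the pieces into dataset:

dataset = []
with open("knn.csv") as f:
    for line in f:                      # work with each line, not the whole-file string
        x, y, label = line.split()      # three whitespace-separated columns
        dataset.append((int(x), int(y), label))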

keyword search in string from mysql using python?

I am pulling from a MySQL database table using Python 3.4. I use the csv module to write the rows of data from the database into .CSV format. Now I am trying to figure out how I can vet the rows of data by keywords that may show up in the fourth column of data (row[3]). I was thinking of using the re module as below, but I keep getting errors. Is it not possible to search for keywords in a field that is string type and to filter those results if they have those keywords? Please help.
import re
import csv

userdate = input('What date do you want to look at?')
query = ("SELECT *FROM sometable WHERE timestamp LIKE %s", userdate)
keywords = 'apples', 'bananas', 'cocoa'

# Execute sql Query
cursor.execute(query)
result = cursor.fetchall()

# Reads a CSV file and return it as a list of rows
def read_csv_file(filename):
    """Reads a CSV file and return it as a list of rows."""
    for row in csv.reader(open(filename)):
        data.append(row)
    return data

f = open(path_in + data_file)
read_it = read_csv_file(path_in + data_file)

with open('file.csv', 'wb') as csvfile:
    spamwriter = csv.writer(csvfile, delimiter=' ',
                            quotechar='|', quoting=csv.QUOTE_MINIMAL)
    for row in data:
        match = re.search('keywords, read_it)
        if match:
            spamwriter.writerow(row)
I gave up on the regular expressions and used:
for row in data:
    found_it = row.find(keywords)
    if found_it != -1:
        spamwriter.writerow(row)
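For completeness, a sketch of the keyword filter as it was likely intended: re.search takes a pattern string, not the quoted name 'keywords', and with a tuple of words it is easiest to build an alternation and test the fourth column directly (the column index, keywords, and result rows are taken from the question; the file is opened in text mode with newline='', which is what the csv module needs on Python 3):

import csv
import re

keywords = ('apples', 'bananas', 'cocoa')
pattern = re.compile('|'.join(re.escape(k) for k in keywords))  # apples|bananas|cocoa

with open('file.csv', 'w', newline='') as csvfile:
    spamwriter = csv.writer(csvfile, delimiter=' ',
                            quotechar='|', quoting=csv.QUOTE_MINIMAL)
    for row in result:              # rows from cursor.fetchall()
        if pattern.search(row[3]):  # keyword appears in the fourth column
            spamwriter.writerow(row)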