I am trying to apply the code found on this page, in particular the 'Copy Data from String Iterator' part of the Table of Contents, but I run into an issue with my code.
Since not all lines coming from the generator (here log_lines) can be imported into the PostgreSQL database, I try to filter for the correct lines (here row) using itertools.filterfalse, as in the code block below:
from itertools import filterfalse

def copy_string_iterator(connection, log_lines) -> None:
    with connection.cursor() as cursor:
        create_staging_table(cursor)
        log_string_iterator = StringIteratorIO((
            '|'.join(map(clean_csv_value, (
                row['date'],
                row['time'],
                row['cs_uri_query'],
                row['s_contentpath'],
                row['sc_status'],
                row['s_computername'],
                ...
                row['sc_substates'],
                row['s_port'],
                row['cs_version'],
                row['c_protocol'],
                row.update({'cs_cookie': 'x'}),
                row['timetakenms'],
                row['cs_uri_stem'],
            ))) + '\n'
            for row in filterfalse(lambda line: "#" in line.get('date'), log_lines)
        ))
        cursor.copy_from(log_string_iterator, 'log_table', sep='|')
When I run this, cursor.copy_from() gives me the following error:
QueryCanceled: COPY from stdin failed: error in .read() call
CONTEXT: COPY log_table, line 112910
I understand why this error happens: in the test file I use, there are only 112909 lines that meet the filterfalse condition. But why does it try to copy line 112910 and throw the error instead of just stopping?
Since Python doesn't have a null-coalescing operator, add something like:
(map(clean_csv_value, (
    row['date'] if 'date' in row else None,
    :
    row['cs_uri_stem'] if 'cs_uri_stem' in row else None,
))) + '\n')
for each of your fields, so you can handle any missing fields in the JSON file. Of course, the fields should be nullable in the db if you use None; otherwise, replace None with some default value for that field.
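For what it's worth, dict.get gives you the same fallback in a single call. A minimal sketch (the helper name safe_values and the two-field row are made up for illustration):

def safe_values(row, fields, default=None):
    # dict.get returns the default instead of raising KeyError
    # when a key is missing from the row.
    return tuple(row.get(field, default) for field in fields)

# A row missing 'cs_uri_stem' no longer raises inside the iterator:
print(safe_values({'date': '2020-01-01'}, ('date', 'cs_uri_stem')))
# -> ('2020-01-01', None)

This keeps each field expression short and avoids repeating the key name in a condition.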
I am trying to read data of the following format with textscan:
date,location,new_cases,new_deaths,total_cases,total_deaths
2019-12-31,Afghanistan,0,0,0,0
2020-01-01,Afghanistan,0,0,0,0
2020-01-02,Afghanistan,0,0,0,0
2020-01-03,Afghanistan,0,0,0,0
2020-01-04,Afghanistan,0,0,0,0
...
(Full data file available here: https://covid.ourworldindata.org/data/ecdc/full_data.csv)
My code is:
# Whitespace replaced with _
file_name = "full_data.csv";
fid = fopen(file_name, "rt");
data = textscan(fid, "%s%s%d%d%d%d", "Delimiter", ",", "HeaderLines", 1, ...
                "ReturnOnError", 0);
fclose(fid);
textscan terminates with an error:
error: textscan: Read error in field 3 of row 421
Row 421 is the center row in the example below:
2020-01-12,Australia,0,0,0,0
2020-01-13,Australia,0,0,0,0
2020-01-14,Australia,0,0,0,0
2020-01-15,Australia,0,0,0,0
2020-01-16,Australia,0,0,0,0
2020-01-17,Australia,0,0,0,0
2020-01-18,Australia,0,0,0,0
I've checked the row it complains about, and there is nothing different from the example above. I've also replaced all spaces in the file with underscores. Am I doing something wrong with textscan?
Complete Julia newbie here.
I'd like to do some processing on a CSV. Something along the lines of:
using CSV
in_file = CSV.Source('/dir/in.csv')
out_file = CSV.Sink('/dir/out.csv')
for line in CSV.eachline(in_file)
    replace!(line, "None", "")
    CSV.writeline(out_file, line)
end
This is pseudocode; those aren't existing functions.
Idiomatically, should I iterate on 1:CSV.countlines(in_file)? Do a while and check something?
If all you want to do is replace a string in each line, you do not need any CSV parsing utilities. All you do is read the file line by line, replace, and write. So:
infile = "/path/to/input.csv"
outfile = "/path/to/output.csv"
out = open(outfile, "w+")
for line in readlines(infile)          # on Julia 0.5, each line keeps its trailing newline
    newline = replace(line, "a", "b")  # substitute "b" for every "a"
    write(out, newline)
end
close(out)
This will replicate the pseudocode you have in your question.
If you need to parse and read the CSV field by field, use the readcsv function in Base.
data = readcsv(infile)
typeof(data)  # Array{Any,2}
This will return the data in the file as a 2 dimensional array. You can process this data any way you want, and write it back using the writecsv function.
for i in 1:size(data, 1)                  # iterate by rows
    data[i, 1] = "This is " * data[i, 1]  # add text to first column
end
writecsv(outfile, data)
Documentation for these functions:
http://docs.julialang.org/en/release-0.5/stdlib/io-network/?highlight=readcsv#Base.readcsv
http://docs.julialang.org/en/release-0.5/stdlib/io-network/?highlight=readcsv#Base.writecsv
What I am trying to do is import a dataset with a tree data structure inside from CSV to neo4j. Nodes are stored along with their parent node and depth level (max 6) in the tree. So I try to check the depth level using CASE and then add the node to its parent, like this (creating a node just for the 1st level so far, for testing purposes):
export FILEPATH=file:///Example.csv
CREATE CONSTRAINT ON (n:Node) ASSERT n.id IS UNIQUE;
USING PERIODIC COMMIT 500
LOAD CSV WITH HEADERS
FROM {FILEPATH} AS line
WITH DISTINCT line,
line.`Level` AS level,
line.`ParentCodeID_Cal` AS parentCode,
line.`CodeSet` AS codeSet,
line.`Category` AS nodeCategory,
line.`Type` AS nodeType,
line.`L1code` AS l1Code, line.`L1Description` AS l1Description, line.`L1Name` AS l1Name, line.`L1NameAb` AS l1NameAb,
line.`L2code` AS l2Code, line.`L2Description` AS l2Description, line.`L2Name` AS l2Name, line.`L2NameAb` AS l2NameAb,
line.`L3code` AS l3Code, line.`L3Description` AS l3Description, line.`L3Name` AS l3Name, line.`L3NameAb` AS l3NameAb,
line.`L1code` AS l4Code, line.`L4Description` AS l4Description, line.`L4Name` AS l4Name, line.`L4NameAb` AS l4NameAb,
line.`L1code` AS l5Code, line.`L5Description` AS l5Description, line.`L5Name` AS l5Name, line.`L5NameAb` AS l5NameAb,
line.`L1code` AS l6Code, line.`L6Description` AS l6Description, line.`L6Name` AS l6Name, line.`L6NameAb` AS l6NameAb,
codeSet + parentCode AS nodeId
CASE line.`Level`
WHEN '1' THEN CREATE (n0:Node{id:nodeId, description:l1Description, name:l1Name, nameAb:l1NameAb, category:nodeCategory, type:nodeType})
ELSE
END;
But I get this result:
WARNING: Invalid input 'S': expected 'l/L' (line 17, column 3 (offset: 982))
"CASE level "
  ^
I'm aware there is a syntax mistake somewhere.
I'm using neo4j 3.0.4 & Windows 10 (using neo4j shell running it with D:\Program Files\Neo4j CE 3.0.4\bin>java -classpath neo4j-desktop-3.0.4.jar org.neo4j.shell.StartClient).
You have several syntax errors. For example, a CASE expression cannot contain a CREATE clause.
In any case, you should be able to greatly simplify your Cypher. For example, this might suit your needs:
USING PERIODIC COMMIT 500
LOAD CSV WITH HEADERS
FROM {FILEPATH} AS line
WITH DISTINCT line, ('l' + line.Level) AS prefix
CREATE (:Node{
  id: line.CodeSet + line.ParentCodeID_Cal,
  description: line[prefix + 'Description'],
  name: line[prefix + 'Name'],
  nameAb: line[prefix + 'NameAb'],
  category: line.Category,
  type: line.Type})
I am struggling to create a simple rake task which will generate a csv dump of the database table "baselines".
task :send_report => :environment do
  path = "tmp/"
  filename = 'data_' + Date.today.to_s + '.csv'
  Baseline.all.each do
    CSV.open(path + filename, "wb") do |csv|
      csv << Baseline.column_names
      Baseline.all.each do |p|
        csv << p.attributes.values_at(*column_names)
      end
    end
  end
end
I am getting the error
undefined local variable or method `column_names' for main:Object
I am completely unclear why this is... Baseline.column_names works in the console, in a view, etc.
Any thoughts would be appreciated.
You're specifying Baseline.column_names in the first case, but just column_names in your values_at call. That defaults to the main context, where no such method exists; it must be called on a model.
Make those two consistent: Baseline is required in both cases.
OK, I am trying to create a definition that will read a list of IDs from an external JSON file, which it is doing. It is even putting the data into the database on load of the program. My issue is this: I can't seem to match the list IDs in a comparison. Here is my current code:
def check(account):
    global ID_account
    import json, httplib
    if not hasattr(BigWorld, 'iddata'):
        UID_DB = account['databaseID']
        UID = ID_account
        try:
            conn = httplib.HTTPConnection('URL')
            conn.request('GET', '/ids.json')
            conn.sock.settimeout(2)
            resp = conn.getresponse()
            qresp = resp.read()
            BigWorld.iddata = json.loads(qresp)
            LOG_NOTE('[ABRO] Request of URL data successful.')
            conn.close()
        except:
            LOG_NOTE('[ABRO] Http request to URL problem. Loading local data.')
        if UID_DB is not None:
            list = BigWorld.iddata["ids"]
            #print (len(list) - 1)
            for n in range(0, (len(list) - 1)):
                #print UID_DB
                #print list[n]
                if UID_DB == list[n]:
                    #print '[ABRO] userid located:'
                    #print UID_DB
                    UID = UID_DB
        else:
            LOG_NOTE('[ABRO] userid not set.')
        if 'databaseID' in account and account['databaseID'] != UID:
            print '[ABRO] Account not active in database, game closing...... '
            BigWorld.quit()
now my json file looks like this:
{
    "ids": [
        "1001583757",
        "500687699",
        "000000000"
    ]
}
Now, when I run this with all the commented-out prints enabled, it seems to execute perfectly fine up until it tries to do the match inside the for loop. Even when the prints show UID_DB and list[n] having the same values, it does not set my variable; it doesn't post any errors; it's simply acting as if there was no match. Am I possibly missing a loop break? Here is the Python log, starting with the print of the length of the list:
INFO: 2
INFO: 1001583757
INFO: 1001583757
INFO: 1001583757
INFO: 500687699
INFO: [ABRO] Account not active, game closing......
As you can see from the log, it never prints the 'userid located' message, so it is not matching them. It just continues with the loop and uses the default ID I defined above the definition. Anyone with an idea would definitely help me out, as I've been poking and prodding at this thing for 3 days now.
The answer to this was found by @VikasNehaOjha: it was simply missing a type conversion before the match comparison. I fixed it by adding
list[n] = int(list[n])
before the comparison; that resolved my issue, and the comparisons finally matched.
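A minimal sketch of the underlying mismatch (the sample values are taken from the JSON above; the integer databaseID is an assumption):

# The ids in the JSON file are strings, while account['databaseID']
# is an integer, so equality never holds until one side is converted.
ids = ["1001583757", "500687699", "000000000"]
uid_db = 1001583757           # hypothetical integer databaseID

print(uid_db == ids[0])       # False: int compared to str
print(uid_db == int(ids[0]))  # True once the string is converted

The same effect could be achieved the other way around with str(uid_db) == ids[0]; what matters is that both sides end up the same type.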