Scientific data - CSV

I want to import data from a corrupted CSV file. It contains scientific numbers, and it's a big data set with about 300,000 rows and 27 columns. When I import it using
Import["data.csv", "HeaderLines" -> 1]
the data comes in as strings. So I convert it to a table format with
StringSplit[ToString[data[[#]]], ";"] & /@ Range[Dimensions[Import["data.csv"]][[1]]]
and I need to use the first column to analyse the data. But the problem is that this column holds scientific numbers as strings! I want to convert them to numbers. I used this command:
ToExpression[Internal`StringToDouble[fdata[[All, 1]][[#]]]] & /@ Range[291407];
But it takes hours to run! Do you have any idea how I can do this without wasting time?

You could try the following:
(* read the first 5 rows *)
d = ReadList["data.csv", Table[Number, {27}], 5]
(* read the rows 100 to 150 *)
s = OpenRead["data.csv"];
Skip[s, Record, 99]
d = ReadList[s, Table[Number, {27}], 51]
Close[s]
And d[[All,1]] will get you the first column.

Related

Gnuplot one-liner to generate a titled line for each row in a CSV file

I've been trying to figure out gnuplot but haven't been getting anywhere, for seemingly two reasons: my lack of understanding of gnuplot's set commands, and the layout of my data file. I've decided the best option is to ask for help.
The hope is to get this working as a gnuplot one-liner.
Example rows from my CSV data file (MyData.csv):
> _TitleRow1_,15.21,15.21,...could be more, could be less
> _TitleRow2_,16.27,16.27,101,55.12,...could be more, could be less
> _TitleRow3_,16.19,16.19,20.8,...could be more, could be less
...(over 100 rows)
The contents of the MyData.csv rows will always be a string in the first column for the title, followed by an undetermined number of decimal values. (Each row gets appended to periodically, so an open-ended number of columns has to be supported.)
What I'd like to happen is to generate a line graph showing a line for each row in the csv, using the first column as a row title, and the following numbers generating the actual line.
This is what I'm trying:
gnuplot -e 'set datafile separator ","; set key autotitle columnhead; plot "MyData.csv"'
Which results in:
set datafile separator ","; set key autotitle columnhead; plot "MyData.csv"
^
line 0: Bad data on line 2 of file MyData.csv
This looks like an amazing tool and I'm looking forward to learning more about it. Thanks in advance for any hints/assistance!
Your datafile format is very unfortunate for gnuplot, which prefers data in columns.
You can also plot rows (which is not straightforward in gnuplot, but see an example here); however, this requires a strict matrix, and the problem with your data is the variable column count.
Actually, your CSV is not a "correct" CSV, because a CSV should have the same number of columns in all rows, i.e. if one row has less data than the row with the most, the line should be filled with ,,, as many times as needed. That's basically what the script below does.
With this you can plot rows with the option matrix (check help matrix). However, you will get some warnings (warning: matrix contains missing or undefined values) which you can ignore.
Alternatively, you could transpose your data (with a variable column count, maybe not straightforward). Maybe there are external tools which can do it easily. With gnuplot alone it will be a bit cumbersome (and first you would have to fill your shorter rows as in the example below).
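If an external tool is an option, here is a minimal Python sketch of that route (file names are hypothetical): it pads the short rows and transposes the result, so each original row becomes an ordinary gnuplot column.

import csv

# pad the ragged rows to equal width, then transpose:
# original rows become columns, which gnuplot handles natively
with open('MyData.csv', newline='') as f:
    rows = list(csv.reader(f))
width = max(len(r) for r in rows)
padded = [r + ['NaN'] * (width - len(r)) for r in rows]
with open('MyData_T.csv', 'w', newline='') as f:
    csv.writer(f).writerows(zip(*padded))  # zip(*...) transposes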
Maybe there is a simpler and better gnuplot-only solution which I am currently not aware of.
Data: SO73099645.dat
_TitleRow1_, 1.2, 1.3
_TitleRow2_, 2.2, 2.3, 2.4, 2.5
_TitleRow3_, 3.2, 3.3, 3.4
Script:
### plotting rows with variable columns
reset session
FILE = "SO73099645.dat"
# count the columns of a line by counting its commas
getColumns(s) = (sum [i=1:strlen(s)] (s[i:i] eq ',') ? 1 : 0) + 1
set datafile separator "\t"   # read each whole line as a single string column
colCount = 0
myNaNs = myHeaders = ''
# determine the row count and the maximum column count
stats FILE u (rowCount=$0+1, c=getColumns(strcol(1)), c>colCount ? colCount=c : 0) nooutput
do for [i=1:colCount] {myNaNs=myNaNs.',NaN' }
# pad each short row with ",NaN" (4 characters each) up to colCount columns
set table $Data
plot FILE u (s=strcol(1),c=getColumns(s),s.myNaNs[1:(colCount-c)*4]) w table
unset table
set datafile separator ","
# collect the first-column titles for the key
stats FILE u (myHeaders=sprintf('%s "%s"',myHeaders,strcol(1))) nooutput
myHeader(n) = word(myHeaders,n)
set key noenhanced
# plot every row of the padded matrix as its own line
plot for [row=0:rowCount-1] $Data matrix u 1:3 every ::1:row::row w lp pt 7 ti myHeader(row+1)
### end of script
As "one-liner":
FILE = "SO/SO73099645.dat"; getColumns(s) = (sum [i=1:strlen(s)] (s[i:i] eq ',') ? 1 : 0) + 1; set datafile separator "\t"; colCount = 0; myNaNs = myHeaders = ''; stats FILE u (rowCount=$0+1, c=getColumns(strcol(1)), c>colCount ? colCount=c : 0) nooutput; do for [i=1:colCount] {myNaNs=myNaNs.',NaN' }; set table $Data; plot FILE u (s=strcol(1),c=getColumns(s),s.myNaNs[1:(colCount-c)*4]) w table; unset table; set datafile separator ","; stats FILE u (myHeaders=sprintf('%s "%s"',myHeaders,strcol(1))) nooutput; myHeader(n) = word(myHeaders,n); set key noenhanced; plot for [row=0:rowCount-1] $Data matrix u 1:3 every ::1:row::row w lp pt 7 ti myHeader(row+1)
Result: [plot: one line per CSV row, keyed by the row titles]

Efficiently unpacking many JSON files and combining into multiple dataframes

I have a massive number (I believe in the tens of thousands) of json files structured as follows:
{'data':
{
"1":{"A":123,"B":456, "C":789},
"2": {"A":3423,"B":356, "C":549},
...,
"4000":{"A":765,"B":355, "C":321}
},
"timestamp":timestamp1}
My end goal is three pandas dataframes structured such that each parameter denoted as a letter above gets its own dataframe like this:
dfA =
                             1                  2                  ...  4000
timestamp1                   123                3423               ...  765
[timestamp from next json]   [from next json]   [from next json]   ...  [from next json]

dfB =
                             1                  2                  ...  4000
timestamp1                   456                356                ...  355
[timestamp from next json]   [from next json]   [from next json]   ...  [from next json]

dfC =
                             1                  2                  ...  4000
timestamp1                   789                549                ...  321
[timestamp from next json]   [from next json]   [from next json]   ...  [from next json]
I have code which I believe works and achieves this; however, I'm not very good with pandas and I know it is extremely slow compared to what it could be.
For each json my code looks something like this:
with open(path) as json_file:
    data = json.load(json_file)

df = pd.DataFrame.from_dict(data['data']).T
A_series = df['A']
B_series = df['B']
C_series = df['C']
A_series.name = json_timestamp  # timestamp is known before reading each json, but is also present inside each json
B_series.name = json_timestamp  # naming the series makes json_timestamp the index label in the df
C_series.name = json_timestamp
Adf = Adf.append(A_series.to_dict(), ignore_index=True)
Bdf = Bdf.append(B_series.to_dict(), ignore_index=True)
Cdf = Cdf.append(C_series.to_dict(), ignore_index=True)
This code looks like it's going to take about a week to run. Are there changes I can make to make it more efficient or elegant?
The main bottleneck of your code lies in the append. Indeed, pandas returns a full copy of the input dataframe with the new line added. This means that the complexity is O(n * n), where n is the number of lines in the final dataframe, i.e. the execution time is quadratic in the number of JSON files added. Copying the dataframe for each newly added row is very expensive, since the dataframe takes at least several hundred MiB in memory.
You can solve that by appending the rows to a list and then calling pd.concat once at the end (or building each dataframe in one go). This performs the operation in linear time. I expect the computation to be about three orders of magnitude faster on your input.
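A minimal sketch of that approach (json_paths is a hypothetical name for your list of file paths; everything else follows the structure from the question):

import json
import pandas as pd

A_rows, B_rows, C_rows, timestamps = [], [], [], []
for path in json_paths:  # json_paths: hypothetical list of your json file paths
    with open(path) as json_file:
        data = json.load(json_file)
    timestamps.append(data['timestamp'])
    # one flat dict per file, e.g. {"1": 123, "2": 3423, ..., "4000": 765}
    A_rows.append({k: v['A'] for k, v in data['data'].items()})
    B_rows.append({k: v['B'] for k, v in data['data'].items()})
    C_rows.append({k: v['C'] for k, v in data['data'].items()})

# build each dataframe once at the end (linear) instead of appending
# to a dataframe inside the loop (quadratic)
dfA = pd.DataFrame(A_rows, index=timestamps)
dfB = pd.DataFrame(B_rows, index=timestamps)
dfC = pd.DataFrame(C_rows, index=timestamps)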

read_csv file in pandas reads whole csv file in one column

I want to read a CSV file in pandas. I have used this function:
ace = pd.read_csv('C:\\Users\\C313586\\Desktop\\Daniil\\Daniil\\ACE.csv',sep = '\t')
And as output I got this:
a) First row (should be the header)
_AdjustedNetWorthToTotalCapitalEmployed _Ebit _StTradeRec _StTradePay _OrdinaryCf _CfWorkingC _InvestingAc _OwnerAc _FinancingAc _ProdValueGrowth _NetFinancialDebtTotalAdjustedCapitalEmployed_BanksAndOtherInterestBearingLiabilitiesTotalEquityAndLiabilities _NFDEbitda _DepreciationAndAmortizationProductionValue _NumberOfDays _NumberOfDays360
#other rows separated by tab
0 5390\t0000000000000125\t0\t2013-12-31\t2013\tF...
1 5390\t0000000000000306\t0\t2015-12-31\t2015\tF...
2 5390\t00000000000003VG\t0\t2015-12-31\t2015\tF...
3 5390\t0000000000000405\t0\t2016-12-31\t2016\tF...
4 5390\t00000000000007VG\t0\t2013-12-31\t2013\tF...
5 5390\t0000000000000917\t0\t2015-12-31\t2015\tF...
6 5390\t00000000000009VG\t0\t2016-12-31\t2016\tF...
7 5390\t0000000000001052\t0\t2015-12-31\t2015\tF...
8 5390\t00000000000010SG\t0\t2015-12-31\t2015\tF...
Do you have any ideas why it happens? How can I fix it?
You should use the argument sep=r'\t' (note the extra r). This will make pandas search for the exact string \t (the r stands for raw).
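For example, reusing the call from the question:

import pandas as pd

# raw-string separator, as described above
ace = pd.read_csv('C:\\Users\\C313586\\Desktop\\Daniil\\Daniil\\ACE.csv', sep=r'\t')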

How to find intersection or subset of two CSV files

I have 2 CSV files containing two columns and a large number of rows. The first column is the id, and the second is the set of paired values.
e.g.:
CSV1:
1 {[1,2],[1,4],[5,6],[3,1]}
2 {[2,4] ,[6,3], [8,3]}
3 {[3,2], [5,2], [3,5]}
CSV2:
1 {[2,4] ,[6,3], [8,3]}
2 {[3,4] ,[3,3], [2,3]}
3 {[1,4],[5,6],[3,1],[5,5]}
Now I need to get a CSV file which contains the entries that either match exactly or form a subset belonging to both CSVs.
Here the result should be:
{[2,4] ,[6,3], [8,3]}
{[1,4],[5,6],[3,1]}
Can anyone suggest Python code to do this?
As suggested by this answer, you can use set.intersection to get the intersection of two sets; however, this does not work with lists as items. Instead you can use filter (comparable to this answer):
>>> l1 = [[1,2],[1,4],[5,6],[3,1]]
>>> l2 = [[1,4],[5,6],[3,1],[5,5]]
>>> filter(lambda q: q in l2, l1)
[[1, 4], [5, 6], [3, 1]]
In Python 3 you should convert the result to a list, since there filter returns an iterator:
>>> list(filter(lambda x: x in l2,l1))
You can load CSV files (if they are really comma [or some other character] separated files) with csv.reader or pandas.read_csv for example.
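As a minimal sketch of the loading step (the file name is hypothetical; note that the value-set column itself contains commas, so if your files look exactly like the samples above, splitting each line once on whitespace is safer than splitting on commas):

rows = []
with open('csv1.csv') as f:  # hypothetical file name
    for line in f:
        # split into the id and the raw value-set text; split only once,
        # because the set column itself contains commas and spaces
        id_, values = line.strip().split(None, 1)
        rows.append((id_, values))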

Loading column from CSV file as a list assigned to a variable

Given is a function f(a,b,x,y) in gnuplot, where we have a 3D space with x, y, z (using splot).
Also given is a csv file (without any header) of the following structure:
2 4
1 9
6 7
...
Is there a way to read out all the values of the first column and assign them to the variable a? Implicitly it should create something like:
a = [2,1,6]
b = [4,9,7]
The idea is to plot the function f(a,b,x,y), iterating over all (a,b) tuples.
I've read through other posts that I hoped would be related, e.g. Reading dataset value into a gnuplot variable (start of X series). However, I could not make any progress.
Is there a way to go through all rows of a csv file with two columns, using the each column value of a row as the parameter of a function?
Say you have the following data file called data:
1 4
2 5
3 6
You can load the 1st and 2nd column values into the variables a and b easily using an awk system call (you can also do this with plot preprocessing in gnuplot, but it's more complicated that way):
a=system("awk '{print $1}' data")
b=system("awk '{print $2}' data")
f(a,b,x,y)=a*x+b*y # Example function
set yrange [-1:1]
set xrange [-1:1]
splot for [i in a] for [j in b] f(i,j,x,y)
This is a gnuplot-only solution without the need for a system call:
a=""
b=""
splot "data" u (a=sprintf(" %s %f", a, $1), b=sprintf(" %s %f", b, \
$2)):(1/0):(1/0) not, for [i in a] for [j in b] f(i,j,x,y)