Print HTML from raw print tables in pandas/Jupyter

Using the code from:
Pandas: cannot import name adjoin
I get the printout below. Can I easily change the output into an HTML layout?
def side_by_side(*objs, **kwds):
    from pandas.io.formats.printing import adjoin
    space = kwds.get('space', 6)
    reprs = [repr(obj).split('\n') for obj in objs]
    print(adjoin(space, *reprs))
import numpy as np
import pandas as pd

df1 = pd.DataFrame(np.random.rand(10, 3))
df2 = pd.DataFrame(np.random.rand(10, 3))
side_by_side(df1, df2)
0 1 2 0 1 2
0 0.786732 0.688221 0.339926 0 0.624153 0.611812 0.933379
1 0.444541 0.366336 0.840466 1 0.734519 0.824821 0.335849
2 0.328322 0.322575 0.935291 2 0.907465 0.185209 0.407982
3 0.919987 0.968674 0.807549 3 0.737452 0.333456 0.886134
4 0.086916 0.090911 0.557082 4 0.860656 0.165118 0.230746
5 0.856184 0.884198 0.636849 5 0.052435 0.858721 0.339225
6 0.955805 0.151886 0.221581 6 0.393247 0.270365 0.123228
7 0.332495 0.256805 0.312205 7 0.456939 0.234717 0.563153
8 0.118446 0.375340 0.029774 8 0.202765 0.511387 0.948326
9 0.537782 0.945828 0.445125 9 0.371834 0.954219 0.057206

Pandas now provides a to_html() method.
You can use it like:
df.to_html()
See the official documentation.
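If the goal is the side-by-side layout from the question rendered as HTML in a Jupyter notebook, to_html() can be combined with IPython's display machinery. A minimal sketch, assuming a notebook environment (side_by_side_html and its inline-table styling are illustrative, not a pandas API):

from IPython.display import display_html

def side_by_side_html(*objs, space=30):
    # Render each DataFrame as an HTML table and style the tables to sit
    # inline next to each other, separated by a margin.
    html = "".join(
        obj.to_html().replace(
            "<table", f'<table style="display:inline; margin-right:{space}px"'
        )
        for obj in objs
    )
    display_html(html, raw=True)

side_by_side_html(df1, df2)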

How to view skipped records in pandas read_csv()?

I have a list of rows to skip (say [1,5,10], i.e. row numbers), and when I pass this to pandas read_csv, it ignores those rows. But I need to save these skipped rows in a different text file.
I went through the pandas read_csv documentation and a few other articles, but have no idea how to save them to a text file.
Example:
Input file:
a,b,c
# Some Junk to Skip 1
4,5,6
# Some junk to skip 2
9,20,9
2,3,4
5,6,7
Code:
skiprows = [1, 3]
df = pandas.read_csv(file, skiprows=skiprows)
Desired output.txt:
# Some Junk to Skip 1
# Some junk to skip 2
Thanks in advance!
def write_skiprows(infile, skiprows, outfile='skiprows.csv'):
    # Copy only the skipped line numbers to a separate file,
    # stopping once the last skipped row has been written.
    maxrow = max(skiprows)
    with open(infile, 'r') as f, open(outfile, 'w') as o:
        for i, line in enumerate(f):
            if i in skiprows:
                o.write(line)
            if i == maxrow:
                return
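For the example above, a call like the following writes the two junk lines to output.txt (input.csv is an assumed filename for the input shown earlier):
write_skiprows('input.csv', [1, 3], 'output.txt')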
Try this:
df = pd.read_csv('input.csv')
skiprows = [1, 3, 6]
df, df_skiprow = df.drop(skiprows), df.iloc[skiprows]
# df_skiprow.to_csv('skiprows.csv', index=False)
Input:
a b
0 1 c1
1 2 c2
2 3 c3
3 4 c4
4 5 c5
5 6 c6
6 7 c7
7 8 c8
8 9 c9
9 10 c10
Output:
df
a b
0 1 c1
2 3 c3
4 5 c5
5 6 c6
7 8 c8
8 9 c9
9 10 c10
df_skiprow
a b
1 2 c2
3 4 c4
6 7 c7
Explanation:
read the whole file,
split it into df and df_skiprow,
then convert each into a separate CSV file.

Grouping CSV file by ID and extracting JSON column

I currently have a CSV like this:
A B C
1 10 {"a":"one","b":"two","c":"three"}
1 10 {"a":"four","b":"five","c":"six"}
1 10 {"a":"seven","b":"eight","c":"nine"}
1 10 {"a":"ten","b":"eleven","c":"twelve"}
2 10 {"a":"thirteen","b":"fourteen","c":"fifteen"}
2 10 {"a":"sixteen","b":"seventeen","c":"eighteen"}
2 10 {"a":"nineteen","b":"twenty","c":"twenty-one"}
3 10 {"a":"twenty-two","b":"twenty-three","c":"twenty-four"}
3 10 {"a":"twenty-five","b":"twenty-six","c":"twenty-seven"}
3 10 {"a":"twenty-eight","b":"twenty-nine","c":"thirty"}
3 10 {"a":"thirty-one","b":"thirty-two","c":"thirty-three"}
I want to group by column A, ignore column B, and take only the "b" field in C, and get an output like:
A C
1 ['two','five','eight','eleven']
2 ['fourteen','seventeen','twenty']
3 ['twenty-three','twenty-six','twenty-nine','thirty-two']
Can I do this? I have pandas, if that will be useful! Also, I would like the output file to be tab-delimited.
Try this:
import pandas as pd
import json
# read file that looks exactly as given above
df = pd.read_csv("file.csv", delim_whitespace=True)
# drop the 'B' column
del df['B']
# 'C' starts life as a string. parse the json and keep only the 'b' value
df['C'] = df['C'].map(lambda x: json.loads(x)['b'])
# 'C' now holds just the 'b' values. group these together:
df = df.groupby('A').C.apply(list)
print(df)
This returns:
A
1 [two, five, eight, eleven]
2 [fourteen, seventeen, twenty]
3 [twenty-three, twenty-six, twenty-nine, thirty...
IIUC (note this assumes column C has already been parsed into dicts, e.g. with df['C'].map(json.loads)):
df.groupby('A').C.apply(lambda x: [y['b'] for y in x])
A
1 [two, five, eight, eleven]
2 [fourteen, seventeen, twenty]
3 [twenty-three, twenty-six, twenty-nine, thirty...
Name: C, dtype: object
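Neither snippet writes the tab-delimited file the question asks for; assuming the grouped result from either answer is in df, one way (output.tsv is an assumed filename) is:
df.reset_index().to_csv('output.tsv', sep='\t', index=False)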

How to create a scikit learn dataset?

I have an array where the first columns are classes (in integer form), and the rest of the columns are features.
Something like this:
1,0,34,23,2
0,0,21,11,0
3,11,2,11,1
How can I turn this into a scikit-compatible dataset, so I can call something like
mydataset = datasets.load_mydataset()?
You can simply use pandas. For example, if you have saved your dataset to a temp.csv file, just label the columns in the CSV file appropriately.
In [1]: import pandas as pd
In [2]: df = pd.read_csv('temp.csv')
In [3]: df
Out[3]:
Label f1 f2 f3 f4
0 1 0 34 23 2
1 0 0 21 11 0
2 3 11 2 11 1
In [4]: y_train= df['Label']
In [5]: x_train = df.drop('Label', axis=1)
In [6]: x_train
Out[6]:
f1 f2 f3 f4
0 0 34 23 2
1 0 21 11 0
2 11 2 11 1
In [7]: y_train
Out[7]:
0 1
1 0
2 3
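To get something closer to the datasets.load_mydataset() convention from the question, the same frame can be wrapped in a scikit-learn Bunch, which is how the built-in loaders return their data. A minimal sketch, assuming the temp.csv layout above (load_mydataset is a hypothetical name, not an existing scikit-learn API):

import pandas as pd
from sklearn.utils import Bunch

def load_mydataset(path='temp.csv'):
    # Hypothetical loader mimicking sklearn's datasets.load_*() helpers.
    df = pd.read_csv(path)
    return Bunch(
        data=df.drop('Label', axis=1).to_numpy(),
        target=df['Label'].to_numpy(),
        feature_names=[c for c in df.columns if c != 'Label'],
    )

mydataset = load_mydataset()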

Iterating through CSV reader to slice data frame

I have a data frame that contains 508383 rows. I am only showing the first 10 rows.
0 1 2
0 chr3R 4174822 4174922
1 chr3R 4175400 4175500
2 chr3R 4175466 4175566
3 chr3R 4175521 4175621
4 chr3R 4175603 4175703
5 chr3R 4175619 4175719
6 chr3R 4175692 4175792
7 chr3R 4175889 4175989
8 chr3R 4175966 4176066
9 chr3R 4176044 4176144
I want to iterate through each row and compare the value in column 2 of one row to the value in the next row. I want to check whether the difference between these values is less than 5000. If the difference is greater than 5000, then I want to slice the data frame from the first row up to the previous row and have this be a subset data frame.
I then want to repeat this process and create a second subset data frame. I've only managed to get this done by using the csv reader in combination with pandas.
Here is my code:
#!/usr/bin/env python
import csv
import pandas as pd

data = pd.read_csv('sort_cov_emb_sg.bed', sep='\t', header=None, index_col=None)

file = open('sort_cov_emb_sg.bed')
readCSV = csv.reader(file, delimiter="\t")

first_row = next(readCSV)
print(first_row)

count_1 = 0
while count_1 < 100000:
    next_row = next(readCSV)
    value_1 = int(next_row[1]) - int(first_row[1])
    count_1 = count_1 + 1
    if value_1 < 5000:
        continue
    else:
        break

print(next_row)
print(count_1)
print(value_1)

window_1 = data[0:63]
print(window_1)

first_row = next(readCSV)
print(first_row)

count_2 = 0
while count_2 < 100000:
    next_row = next(readCSV)
    value_2 = int(next_row[1]) - int(first_row[1])
    count_2 = count_2 + 1
    if value_2 < 5000:
        continue
    else:
        break

print(next_row)
print(count_2)
print(value_2)

window_2 = data[0:74]
print(window_2)
I wanted to know if there is a better way to do this process (without repeating the code every time) and get all the subset data frames I need.
Thanks.
Rodrigo
This is yet another example of the compare-cumsum-groupby pattern. Using only the rows you showed (and so lowering the threshold to 100 instead of 5000):
jumps = df[2] > df[2].shift() + 100
grouped = df.groupby(jumps.cumsum())
for k, group in grouped:
print(k)
print(group)
produces
0
0 1 2
0 chr3R 4174822 4174922
1
0 1 2
1 chr3R 4175400 4175500
2 chr3R 4175466 4175566
3 chr3R 4175521 4175621
4 chr3R 4175603 4175703
5 chr3R 4175619 4175719
6 chr3R 4175692 4175792
2
0 1 2
7 chr3R 4175889 4175989
8 chr3R 4175966 4176066
9 chr3R 4176044 4176144
This works because the comparison gives us a new True every time a new group starts, and when we take the cumulative sum of that, we get what is effectively a group id, which we can group on:
>>> jumps
0 False
1 True
2 False
3 False
4 False
5 False
6 False
7 True
8 False
9 False
Name: 2, dtype: bool
>>> jumps.cumsum()
0 0
1 1
2 1
3 1
4 1
5 1
6 1
7 2
8 2
9 2
Name: 2, dtype: int32
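To avoid repeating the block for every window, the same pattern can be wrapped in a small helper that returns all the subset data frames at once. A minimal sketch, applied to the full file with the original 5000 threshold (split_on_jumps is a hypothetical name):

import pandas as pd

def split_on_jumps(df, col=2, threshold=5000):
    # A new group starts whenever the value in `col` exceeds the previous
    # row's value by more than `threshold`; cumsum turns that into group ids.
    jumps = df[col] > df[col].shift() + threshold
    return [group for _, group in df.groupby(jumps.cumsum())]

windows = split_on_jumps(data)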

How to select/add a column to a pandas dataframe based on a non-trivial function of other columns

This is a follow-up question to this one: how to select/add a column to pandas dataframe based on a function of other columns?
I have a data frame and I want to select the rows that match some criteria. The criteria are a function of values of other columns and some additional values.
Here is a toy example:
>>> from random import randint
>>> df = pd.DataFrame({'A': [1,2,3,4,5,6,7,8,9],
...                    'B': [randint(1,9) for x in range(9)],
...                    'C': [4,10,3,5,4,5,3,7,1]})
>>> df
A B C
0 1 6 4
1 2 8 10
2 3 8 3
3 4 4 5
4 5 2 4
5 6 1 5
6 7 1 3
7 8 2 7
8 9 8 1
I want to select all rows for which some non-trivial function returns True, e.g. f(a,c,L), where L is a list of lists and f returns True iff a and c are not part of the same sublist.
That is, if L = [[1,2,3],[4,2,10],[8,7,5,6,9]], I want to get:
A B C
0 1 6 4
3 4 4 5
4 5 2 4
6 7 1 3
8 9 8 1
Thanks!
Here is a VERY VERY hacky and inelegant solution. As another disclaimer, since your question doesn't state what should happen if a number is in none of the sublists, this code doesn't handle that case beyond the default behavior of isin().
import pandas as pd
df = pd.DataFrame({'A': [1,2,3,4,5,6,7,8,9],
                   'B': [6,8,8,4,2,1,1,2,8],
                   'C': [4,10,3,5,4,5,3,7,1]})
L = [[1,2,3],[4,2,10],[8,7,5,6,9]]
# For each sublist, flag rows where exactly one of A and C is in it (XOR)
df['passed1'] = df['A'].isin(L[0])
df['passed2'] = df['C'].isin(L[0])
df['1&2'] = (df['passed1'] ^ df['passed2'])
df['passed4'] = df['A'].isin(L[1])
df['passed5'] = df['C'].isin(L[1])
df['4&5'] = (df['passed4'] ^ df['passed5'])
df['passed7'] = df['A'].isin(L[2])
df['passed8'] = df['C'].isin(L[2])
df['7&8'] = (df['passed7'] ^ df['passed8'])
df['PASSED'] = df['1&2'] & df['4&5'] ^ df['7&8']
del df['passed1'], df['passed2'], df['1&2'], df['passed4'], df['passed5'], df['4&5'], df['passed7'], df['passed8'], df['7&8']
df = df[df['PASSED'] == True]
del df['PASSED']
With an output that looks like:
A B C
0 1 6 4
3 4 4 5
4 5 2 4
6 7 1 3
8 9 8 1
I implemented this rather quickly, hence the utter and complete ugliness of the code, but I believe you can refactor it any way you would like (e.g. iterate over the original list of lists with for sub_list in L, improve the variable names, or come up with a better solution; a sketch of such a refactor follows below).
Hope this helps. Oh, and did I mention this was hacky and not very good code? Because it is.
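For what it's worth, here is a minimal sketch of the refactor hinted at above, checking each row against L directly (not_in_same_sublist is a hypothetical helper, not part of the original answer; re-run the df and L setup from the top of the answer first, since the hacky code mutates df):

def not_in_same_sublist(a, c, L):
    # True iff no single sublist of L contains both a and c.
    return not any(a in sub and c in sub for sub in L)

mask = df.apply(lambda row: not_in_same_sublist(row['A'], row['C'], L), axis=1)
print(df[mask])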