Automatically converting a 1-level list to a nested list - html

Here is a link to a page that contains a table:
http://pitzavod.ru/products/upakovka/
When I read it with pd.read_html I do get a list, but it (1) is not nested, so when converted to a DataFrame it is not readable, and (2) contains integers from 0 up to the number of rows in the table on the website.
The list I get looks like:
[ 0 1 \
0 Показатели Марка целлюлозы
1 ОСН NaN
2 Механическая прочность при размоле в мельнице ... 10 000 740 520
3 Степень делигнификации, п.е. 28 - 45
4 Сорность - число соринок в условной массе 500г... 6500
5 Влажность, % не более 20
2
0 Методы испытаний
1 NaN
2 ГОСТ13525.1 ГОСТ 13525.3 ГОСТ 13525.8
3 ГОСТ 10070
4 ГОСТ 14363.3
5 ГОСТ 16932 ]
Is there a way to easily clean this pandas output, or do I need to properly parse the website? Thank you.

That's because read_html always returns a list (even if the number of tables is 1).
pandas.read_html: Read HTML tables into a list of DataFrame objects.
You need to index it with [0]:
df = pd.read_html("http://pitzavod.ru/products/upakovka/")[0]
Output (showing the last two columns):
1 2
0 Марка целлюлозы Методы испытаний
1 ОСН Методы испытаний
2 10 000 740 520 ГОСТ13525.1 ГОСТ 13525.3 ГОСТ 13525.8
3 28 - 45 ГОСТ 10070
4 6500 ГОСТ 14363.3
5 20 ГОСТ 16932
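If you also want to address the two readability points from the question (the header text ending up as a data row, and the default 0..n integer index), here is a minimal follow-up sketch, assuming row 0 of the parsed table really holds the header labels, as the printed output suggests:
import pandas as pd

# Read the first (and only) table on the page.
df = pd.read_html("http://pitzavod.ru/products/upakovka/")[0]

# Promote the first row to column headers and drop it from the data.
# (Assumes row 0 contains the header labels, as in the output above.)
df.columns = df.iloc[0]
df = df.drop(index=0).reset_index(drop=True)

print(df.head())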

Related

tesseract fails to form shapetable

I am attempting to extract OCR data of a 3-digit counter within a video via tesseract 4.1.1 on Kubuntu 21.04 (full tesseract version string below). I am failing to add characters during the shapetable phase, and no other troubleshooting has worked for me -- I turn to you with humble heart. N.B.: the images are of a small pixel font, which takes up the entirety of my source image.
Image preparation and collation
From the source videos, I: crop to only the counter, invert, grayscale, dump at 1 fps, and then increase resolution by 1000% to 780x180. The results are individual frames such as this. I take a section of sequential numbers counting down from 500 (without any duplicates or blank images) and combine them into a .tif. (I can't upload the file here, but find the set of images mosaic'd together here.)
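For reference, here is a rough Python/Pillow sketch of the per-frame preparation described above (crop, invert, grayscale, upscale) and of combining the frames into a multi-page .tif. The directory, file pattern, and crop box are placeholders, and the DPI tag is set explicitly because of the resolution warning that shows up later during training:
from pathlib import Path
from PIL import Image, ImageOps

# Hypothetical paths and crop box -- adjust to your own dumped frames.
FRAME_DIR = Path("frames")          # frames already dumped at 1 fps
CROP_BOX = (0, 0, 78, 18)           # (left, top, right, bottom) around the counter

pages = []
for frame_path in sorted(FRAME_DIR.glob("*.png")):
    img = Image.open(frame_path).convert("RGB")
    img = img.crop(CROP_BOX)                      # keep only the counter
    img = ImageOps.invert(img)                    # invert
    img = ImageOps.grayscale(img)                 # grayscale
    img = img.resize((780, 180), Image.NEAREST)   # 1000% upscale, keep hard pixel edges
    pages.append(img)

# Combine into one multi-page .tif, with an explicit DPI tag so tesseract
# does not have to guess the resolution.
pages[0].save("type_3.font.exp0.tif", save_all=True,
              append_images=pages[1:], dpi=(300, 300))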
I import this file into jTessBoxEditor as, for example, type_3.font.exp0.tif. I run tesseract --psm 6 --oem 3 font_name.font.exp0.tif font_name.font.exp0 makebox to create a .box file, with understandably nonsensical results.
With the hand-chosen source frames and the consistent digit positions, I'm able to edit the .box file with known box sizes and positions, like so (see the sketch after this listing for one way to generate these lines programmatically):
5 0 0 240 180 0
0 270 0 510 180 0
0 540 0 780 180 0
4 0 0 240 180 1
9 270 0 510 180 1
9 540 0 780 180 1
4 0 0 240 180 2
9 270 0 510 180 2
8 540 0 780 180 2
4 0 0 240 180 3
9 270 0 510 180 3
7 540 0 780 180 3
...
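Because the three digit cells sit at fixed coordinates on every 780x180 page, the .box file can also be generated rather than edited by hand. Here is a sketch under that assumption; the list of counter values per page is a placeholder:
# Known, fixed cells for the three digits on every 780x180 page:
# (left, bottom, right, top) in tesseract box-file coordinates.
CELLS = [(0, 0, 240, 180), (270, 0, 510, 180), (540, 0, 780, 180)]

# Placeholder: the counter value shown on each page, in page order.
values = range(500, 500 - 131, -1)   # e.g. 500, 499, 498, ... for 131 pages

with open("type_3.font.exp0.box", "w", encoding="utf-8") as f:
    for page, value in enumerate(values):
        for digit, (left, bottom, right, top) in zip(f"{value:03d}", CELLS):
            # One line per character: <char> <left> <bottom> <right> <top> <page>
            f.write(f"{digit} {left} {bottom} {right} {top} {page}\n")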
I load the edited .box into jTessBoxEditor to check that it indeed matches my data. This is a 131-page .tif, meaning roughly 40 training samples per digit.
Training steps (where the problems begin)
I create font_properties and load it with font 0 0 0 0 0. Please note that I've also tried type_3 0 0 0 0 0 and type_3.font.exp0 0 0 0 0 0, with no difference in the results below.
I run tesseract type_3.font.exp0.tif type_3.font.exp0 nobatch box.train and a training file is created; however, each page is listed as blank (is this normal?), e.g.:
Page 108
Warning: Invalid resolution 1 dpi. Using 70 instead.
Estimating resolution as 2263
Empty page!!
I run unicharset_extractor font_name.font.exp0.box with success -- the resulting extraction contains the characters I've identified, plus some extra lines:
13
NULL 0 Common 0
Joined 7 0,255,0,255,0,0,0,0,0,0 Latin 1 0 1 Joined # Joined [4a 6f 69 6e 65 64 ]a
|Broken|0|1 15 0,255,0,255,0,0,0,0,0,0 Common 2 10 2 |Broken|0|1 # Broken
5 8 0,255,0,255,0,0,0,0,0,0 Common 3 2 3 5 # 5 [35 ]0
0 8 0,255,0,255,0,0,0,0,0,0 Common 4 2 4 0 # 0 [30 ]0
4 8 0,255,0,255,0,0,0,0,0,0 Common 5 2 5 4 # 4 [34 ]0
9 8 0,255,0,255,0,0,0,0,0,0 Common 6 2 6 9 # 9 [39 ]0
8 8 0,255,0,255,0,0,0,0,0,0 Common 7 2 7 8 # 8 [38 ]0
7 8 0,255,0,255,0,0,0,0,0,0 Common 8 2 8 7 # 7 [37 ]0
6 8 0,255,0,255,0,0,0,0,0,0 Common 9 2 9 6 # 6 [36 ]0
3 8 0,255,0,255,0,0,0,0,0,0 Common 10 2 10 3 # 3 [33 ]0
2 8 0,255,0,255,0,0,0,0,0,0 Common 11 2 11 2 # 2 [32 ]0
1 8 0,255,0,255,0,0,0,0,0,0 Common 12 2 12 1 # 1 [31 ]0
But I know that failure has come for me when shapeclustering -F font_properties -U unicharset -O type_3.unicharset type_3.font.exp0.tr
results in:
Reading type_3.font.exp0.tr ...
Building master shape table
Computing shape distances...
Stopped with 0 merged, min dist 999.000000
Computing shape distances...
Stopped with 0 merged, min dist 999.000000
...
Computing shape distances...
Stopped with 0 merged, min dist 999.000000
Computing shape distances...
Stopped with 0 merged, min dist 999.000000
Master shape_table:Number of shapes = 0 max unichars = 0 number with multiple unichars = 0
It has not recognized any shapes at all.
My plea:
What have I missed? What can I do to pass these 10 humble characters to tesseract?
Full version string (installed via apt):
tesseract 4.1.1
leptonica-1.79.0
libgif 5.1.4 : libjpeg 8d (libjpeg-turbo 2.0.3) : libpng 1.6.37 : libtiff 4.2.0 : zlib 1.2.11 : libwebp 0.6.1 : libopenjp2 2.3.1
Found AVX2
Found AVX
Found FMA
Found SSE
Found libarchive 3.4.3 zlib/1.2.11 liblzma/5.2.4 bz2lib/1.0.8 liblz4/1.9.2 libzstd/1.4.5

CSV, convert multiple rows into a comma-delimited list grouped by another column's value

I have a CSV file with two columns. Column 1 contains a group ID and Column 2 contains an item ID.
Here's some sample data (copied out of Excel):
- 5 154
- 5 220
- 5 332
- 5 93
- 5 142
- 5 471
- 5 164
- 5 362
- 5 447
- 5 1697
- 5 170
- 6 173
- 6 246
- 6 890
- 6 321
- 6 421
- 6 1106
- 6 5
- 6 253
- 6 230
- 6 551
- 8 2155
- 8 2212
- 8 2205
- 8 2211
- 8 2165
- 8 2202
- 8 1734
- 8 2166
- 8 2129
I need to reformat this so that I have just one row for each group ID, and Column 2 contains a comma-delimited list of item IDs.
So it should look something like this:
- 5 154,220,332,93,142,471,164,362,447,1697,170
- 6 173,246,890,321,421,1106,5,253,230,551
- 8 2155,2212,2205,2211,2165,2202,1734,2166,2129
I'm happy to import the CSV into Excel / Numbers in order to reformat it. Or even into a temp MySQL database, if a SELECT query can achieve this.
Thank you for your help!
I feel something like this is best solved with R and reshape.
But here you go in Excel:
assuming
group keys in column A
item keys in column B
unique group keys in column D (I guess you can do this manually)
enter into E2:
=INDEX($B:$B,SMALL(IF($A$2:$A$50=$D2,ROW($A$2:$A$50),""),COLUMN()-COLUMN($D2)))
and press CTRL+SHIFT+ENTER to enter it as an array formula. Now you can copy cell E2 to F2:P2 and E3:P4.
result:
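Separately, since the question mentions being happy to use other tools: if Python happens to be available, a short pandas sketch does the same grouping. The file name, the absence of a header row, and the column names are all assumptions here:
import pandas as pd

# Assumes the CSV has no header row -- adjust names to your actual columns.
df = pd.read_csv("items.csv", header=None, names=["group_id", "item_id"])

# One row per group, with the item IDs joined into a comma-delimited string.
out = (df.groupby("group_id")["item_id"]
         .apply(lambda ids: ",".join(ids.astype(str)))
         .reset_index())

out.to_csv("grouped.csv", index=False)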

Formatting JSON data in R

I'm really new to working with JSON data, so I had a question about formatting.
Here's the link to the data I was trying to work with.
I was using jsonlite and did this:
shot <- "http://stats.nba.com/stats/playerdashptshotlog?DateFrom=&DateTo=&GameSegment=&LastNGames=0&LeagueID=00&Location=&Month=0&OpponentTeamID=0&Outcome=&Period=0&PlayerID=202322&Season=2014-15&SeasonSegment=&SeasonType=Regular+Season&TeamID=0&VsConference=&VsDivision="
I then did fromJSON:
json_data <- fromJSON(paste(readLines(shot), collapse=""))
This gives me the data in a list. My issue (although for all I know I messed up working towards this) is trying to create a data frame out of this info. I was able to make a data frame with code I read under similar questions on the site, but it is all of the data in just one column. Any recommendations would be appreciated!
Thanks
Normally, the first thing to do when you get a JSON is to look at the structure.
str(json_data)
Doing so will reveal that your data has a very simple structure: it is a data frame with rows and a line of headers, wrapped in some metadata about what each column means. Using $ will allow you to address those specific components. In other words, your specific JSON is already a data frame structure; all you have to do is take it out of the JSON.
library(jsonlite)
json_data <- fromJSON(paste(readLines(shot), collapse=""))
str(json_data)
mydf <- data.frame(json_data$resultSets$rowSet)
colnames(mydf) <- unlist(json_data$resultSets$headers)
You ought to get something like this:
head(mydf)
GAME_ID MATCHUP LOCATION W FINAL_MARGIN SHOT_NUMBER PERIOD
1 0021401215 APR 14, 2015 - WAS # IND A L -4 1 1
2 0021401215 APR 14, 2015 - WAS # IND A L -4 2 1
3 0021401215 APR 14, 2015 - WAS # IND A L -4 3 1
4 0021401215 APR 14, 2015 - WAS # IND A L -4 4 1
5 0021401215 APR 14, 2015 - WAS # IND A L -4 5 1
6 0021401215 APR 14, 2015 - WAS # IND A L -4 6 1
GAME_CLOCK SHOT_CLOCK DRIBBLES TOUCH_TIME SHOT_DIST PTS_TYPE SHOT_RESULT
1 10:33 7.7 0 1 25 3 missed
2 8:41 14 10 9.6 10.7 2 made
3 6:42 14.9 11 9.7 18.2 2 missed
4 5:16 19 3 3.5 4.2 2 made
5 4:45 19.8 3 3.7 3.3 2 missed
6 3:08 13.5 10 9.7 18 2 missed
CLOSEST_DEFENDER CLOSEST_DEFENDER_PLAYER_ID CLOSE_DEF_DIST FGM PTS
1 Hill, George 201588 4.3 0 0
2 Hill, George 201588 5.7 1 2
3 Hill, George 201588 3 0 0
4 Miles, CJ 101139 4 1 2
5 Hill, Solomon 203524 3 0 0
6 Hill, George 201588 4.5 0 0

Grab HTML table using XML

I am trying to read an HTML table using the package XML, but even though it looks easy, I haven't managed to do it. I tried everything, but the names of the columns are always fixed by R as V1, V2, V3, …
This is the code:
require(XML)
tbl <- readHTMLTable("http://facedata.ornl.gov/ornl/npp_98-08.html",
                     header = c("year","ring","CO2","stem","root","leaf","fine root","NPP"),
                     skip.rows = c(1,2), colClasses = c(rep("factor",3), rep("numeric",5)))
Many thanks for your help
The first row of the table is causing trouble. It may be easiest to remove it:
library(XML)
appURL <- "http://facedata.ornl.gov/ornl/npp_98-08.html"
doc <- htmlParse(appURL)
removeNodes(doc["//table/tr[1]"]) # remove the first row with the troublesome header
myTable <- readHTMLTable(doc, which = 1)
> head(myTable)
Year Plot CO2 Stem Coarse Root Leaf Fine Root Total NPP
1 1998 1 elev 1540 127 362 168 2197
2 1998 2 elev 1487 139 418 175 2219
3 1998 3 amb 1085 112 333 231 1762
4 1998 4 amb 1204 113 368 185 1870
5 1998 5 amb 1136 109 382 56 1683
6 1999 1 elev 1218 98 475 295 2086

Querying a table to get values based on no of digits of a parameter?

I have a large table from which I can query to get the following table:
type no of times type occurs
101 450
102 562
103 245
111 25
112 28
113 21
Now suppose I want a table that shows me the sum of "no of times type occurs" for types starting with 1, then starting with 10, 11, 12, 13, ... 19, then starting with 2, 20, 21, 22, 23 ... 29, and so on.
Something like this:
1     1331
  10  1257
  11  74
  12  ..
  13  ..
  ..  ..
2     ...
  20  ..
  21  ..
Hope I am clear
Thanks
You really have two different queries:
SELECT [type]\100 AS TypePart, Count(t.type) AS CountOftype
FROM t
GROUP BY [type]\100;
And:
SELECT [type]\100 AS TypePart, [type] Mod 100 AS TypeEnd,
Count(t.type) AS CountOftype
FROM t
GROUP BY [type]\100, [type] Mod 100;
Where t is the name of the table.
On the first query I am getting something like this:
utypPart CountOftype
1 29
2 42
3 46
4 50
5 26
6 45
7 33
9 1
It is giving me how many utyp start with 1, 2, and so on, but what I want is the sum of the number of times those types occur for each utyp.
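To state that requirement in code: starting from the pre-aggregated table above (each type plus its occurrence count), the goal is one total per leading digit and one per two-digit prefix. This is not the Access SQL from the answer, just a small pandas sketch of the desired sums, using the sample numbers from the question:
import pandas as pd

# The pre-aggregated table from the question: one row per type with its count.
df = pd.DataFrame({"type": [101, 102, 103, 111, 112, 113],
                   "count": [450, 562, 245, 25, 28, 21]})

digits = df["type"].astype(str)

# Sum of counts per leading digit (types starting with 1, 2, ...).
by_first = df.groupby(digits.str[0])["count"].sum()

# Sum of counts per two-digit prefix (10, 11, ..., 20, 21, ...).
by_first_two = df.groupby(digits.str[:2])["count"].sum()

print(by_first)      # 1 -> 1331
print(by_first_two)  # 10 -> 1257, 11 -> 74
In SQL terms, the same idea means summing the pre-computed counts rather than counting rows.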