Read search strings from a text file, search for them in a second text file, and output to CSV

I have a text file named file1.txt that is formatted like this:
001 , ID , 20000
002 , Name , Brandon
003 , Phone_Number , 616-234-1999
004 , SSNumber , 234-23-234
005 , Model , Toyota
007 , Engine ,V8
008 , GPS , OFF
and I have file2.txt formatted like this:
#==============================================
# 005 : Model
#------------------------------------------------------------------------------
[Model] = Honda
option = 0
length = 232
time = 1000
hp = 75.0
k1 = 0.3
k2 = 0.0
k1 = 0.3
k2 = 0.0
#------------------------------------------------------------------------------
[Model] = Toyota
option = 1
length = 223
time = 5000
speed = 50
CCNA = 1
#--------------------------------------------------------------------------
[Model] = Miata
option = 2
CCNA = 1
#==============================================
# 007 : Engine
#------------------------------------------------------------------------------
[Engine_Type] = V8 #1200HP
option = 0
p = 12.0
pp = 12.0
map = 0.4914
k1mat = 100
k2mat = 600
value =12.00
mep = 79.0
cylinders = 8
#------------------------------------------------------------------------------
[Engine_Type] = v6 #800HP
option = 1
active = 1
cylinders = 6
lim = 500
lim = 340
rpm = 330
start = 350
ul = 190.0
ll = 180.0
ul = 185.0
#==============================================
# 008 : GPS
#------------------------------------------------------------------------------
[GPS] = ON
monitor = 0
#------------------------------------------------------------------------------
[GPS] = OFF
monitor = 1
Enable = 1
#------------------------------------------------------------------------------
[GPS] = Only
monitor = 2
Enable = 1
#==============================================
# 014 :Option
#------------------------------------------------------------------------------
[Option] = Disable
monitor = 0
#------------------------------------------------------------------------------
[Option] = Enable
monitor = 1
#==============================================
# 015 : Weight
#------------------------------------------------------------------------------
[lbs] = &1
weight = &1
#==============================================
The expected output is supposed to look like this. Since only entries 005, 007, and 008 from file1.txt have matching sections in file2.txt, the output would be:
#==============================================
# 005 : Model
#------------------------------------------------------------------------------
[Model] = Toyota
option = 1
length = 223
time = 5000
speed = 50
CCNA = 1
#==============================================
# 007 : Engine
#------------------------------------------------------------------------------
[Engine_Type] = V8 #1200HP
option = 0
p = 12.0
pp = 12.0
map = 0.4914
k1mat = 100
k2mat = 600
value =12.00
mep = 79.0
cylinders = 8
#==============================================
# 008 : GPS
#------------------------------------------------------------------------------
[GPS] = OFF
monitor = 1
Enable = 1
#-----------------------------------------------------------------
Now, using Awk and the values from the 2nd and 3rd columns in file1, I want to search for those strings in file2 and output everything in that section to a CSV file, i.e. from the line where the string is found down to the #------------- demarcation.
Could someone please help me with this and explain it as well? I am new to Awk.
Thank you!

I wouldn't really use awk for this job as specified, but here's a little snippet to get started:
awk -F'[ ,]+' 'FNR == NR { section["[" $2 "]"] = $3; next }
/^\[/ && section[$1] == $3, /^#/' file1.txt file2.txt
1) The -F'[ ,]+' sets the field separator to one or more of spaces and/or commas (since file1.txt looks like it's not a proper CSV file).
2) FNR == NR (the record number within the current file equals the overall record number) is only true while reading file1.txt. So for each line in file1.txt, we store the third field as the expected value under the key [second_field].
3) Then we look for lines that begin with a [ and where the value stored in section for the first field of that line matches the third field of that line (/^\[/ && section[$1] == $3), and print from that line until the next line that begins with a #.
The output for your example input is:
[Model] = Toyota
option = 1
length = 223
time = 5000
speed = 50
CCNA = 1
#--------------------------------------------------------------------------
[GPS] = OFF
monitor = 1
Enable = 1
#------------------------------------------------------------------------------
The matched lines in step 3 were [Model] = Toyota and [GPS] = OFF. The Engine line is missing because file2.txt had Engine_Type instead. Also, I didn't bother with the section headers; it would be easy to add another condition to print them all but it requires lookahead to print only the ones that are going to have matching content in them (because at the time you read the header you don't know if a match is found inside). For that, I would switch to another language (e.g., Ruby).
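Printing the section headers only when a matching block follows is indeed awkward in a single awk pass, so here is a rough sketch of that idea in Python instead of Ruby. It is only a starting point, not a tested solution: it assumes the layout shown above (a "#====" line, a "# NNN : name" line, "#----" dividers), writes each kept line as a one-column row to a hypothetical matches.csv, and, like the awk version, it still misses the Engine section because file2.txt uses Engine_Type rather than Engine.

import csv
import re

# Build {"Model": "Toyota", ...} from columns 2 and 3 of file1.txt.
wanted = {}
with open("file1.txt") as f1:
    for row in csv.reader(f1):
        if len(row) >= 3:
            wanted[row[1].strip()] = row[2].strip()

out_lines = []
section = []        # pending "#====" and "# NNN : name" header lines
collecting = False  # True while inside a matched [name] = value block

with open("file2.txt") as f2:
    for raw in f2:
        line = raw.rstrip("\n")
        if line.startswith("#="):          # "#====" starts a new section
            section = [line]
            collecting = False
        elif line.startswith("# "):        # "# 005 : Model" title line
            section.append(line)
        elif line.startswith("#-"):        # "#----" sub-block divider
            collecting = False
        else:
            m = re.match(r"\[(\w+)\]\s*=\s*(\S+)", line)
            if m and wanted.get(m.group(1)) == m.group(2):
                # Matched block: emit the section header once, then copy the block.
                out_lines.extend(section + ["#" + "-" * 78])
                section = []
                collecting = True
            if collecting:
                out_lines.append(line)

# One column per output row; adjust if a different CSV layout is wanted.
with open("matches.csv", "w", newline="") as out:
    csv.writer(out).writerows([line] for line in out_lines)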

Related

Automatically parse selectors with insheetjson in Stata

I am trying to build a program that gets data from Statistics Denmark's API using insheetjson in Stata. However, I have not been able to find a solution for the following problem: I want to get the metadata for an arbitrary table (in this case "FOLK1A", a table of demographics). This table has the variables region, age, marital status, and time. If we take regions as an example, there are 105 regions, so if I run insheetjson using "https://api.statbank.dk/v1/tableinfo/FOLK1A?lang=en", showresponse flatten, I see a very clear pattern:
variables:1:values:1:id = 000
variables:1:values:1:text = All Denmark
variables:1:values:2:id = 084
variables:1:values:2:text = Region Hovedstaden
variables:1:values:3:id = 101
variables:1:values:3:text = Copenhagen
variables:1:values:4:id = 147
variables:1:values:4:text = Frederiksberg
variables:1:values:5:id = 155
variables:1:values:5:text = Dragør
variables:1:values:6:id = 185
variables:1:values:6:text = Tårnby
variables:1:values:7:id = 165
variables:1:values:7:text = Albertslund
variables:1:values:8:id = 151
variables:1:values:8:text = Ballerup
variables:1:values:9:id = 153
variables:1:values:9:text = Brøndby
variables:1:values:10:id = 157
variables:1:values:10:text = Gentofte
...
variables:1:values:100:text = Mariagerfjord
variables:1:values:101:id = 773
variables:1:values:101:text = Morsø
variables:1:values:102:id = 840
variables:1:values:102:text = Rebild
variables:1:values:103:id = 787
variables:1:values:103:text = Thisted
variables:1:values:104:id = 820
variables:1:values:104:text = Vesthimmerlands
variables:1:values:105:id = 851
variables:1:values:105:text = Aalborg
However, I am not able to parse all of these regions in a single call. Is there a way to tell insheetjson to get all of these regions, i.e. "variables:1:values:[1-105]:id", in one call? I don't want to run the command several times and thus ping the server far too much.
Best regards,
Emil Blicher
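For reference, the flattened lines above correspond to a small piece of nested JSON, and the whole list can be pulled in one request. This is not an insheetjson answer, just a minimal Python sketch assuming the structure implied by the output (a "variables" array whose first entry holds a "values" array of id/text pairs):

import requests

# Assumed structure, inferred from the flattened output above:
# {"variables": [{"values": [{"id": "000", "text": "All Denmark"}, ...]}, ...]}
url = "https://api.statbank.dk/v1/tableinfo/FOLK1A?lang=en"
info = requests.get(url).json()

region = info["variables"][0]          # "variables:1:..." in the flat view
for value in region["values"]:         # "variables:1:values:N:id / :text"
    print(value["id"], value["text"])  # all 105 regions from one call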

Getting an empty result for newQuery

I have a problem getting the value of the query $scholars for $lt = $scholars->lat. The result is an empty array for dd($lt). Any help would be appreciated; this is for my school project.
database of Scholar
id lat lng scholar_birthday scholar_GPA
1 10.275667 123.8569163 1995-12-12 89
2 10.2572114 123.839243 2000-05-05 88
3 9.9545909 124.1368558 2002-05-05 89
4 10.1208564 124.8495005 2010-05-05 85
$scholars = (new Scholar)->newQuery()->select('*');
$scholars->whereBetween(DB::raw('TIMESTAMPDIFF(YEAR,scholars.scholar_birthday,CURDATE())'),array($ship_age_from,$ship_age_to));
$scholars->whereBetween(DB::raw('scholar_GPA'),array($ship_gpa_from,$ship_gpa_to));
$lt = $scholars->lat;
$lg = $scholars->lng;
$str = $lt.','.$lg;
$url = 'http://maps.googleapis.com/maps/api/geocode/json?latlng='.trim($lt).','.trim($lg).'&sensor=false';
$json = #file_get_contents($url);
$data=json_decode($json);
$status = $data->status;
$data->results[0]->formatted_address;
dd($lt);
$scholars = $scholars->get();
dd Result
Undefined property: Illuminate\Database\Eloquent\Builder::$lat
Two things. First, when you use newQuery() you still need to get() the result, like so:
$scholars = (new Scholar)->newQuery()->select('*')->get();
This, however, will retrieve a collection rather than a single result, so you will need to loop over it (note that inside the loop you use $scholar, not $scholars):
foreach ($scholars as $scholar) {
    $lt = $scholar->lat;
    dd($lt);
}

Function does not return the list correctly

I have written code for adding the numbers from two different text files. For very big data (2-3 GB) I get a MemoryError, so I am writing new code using some functions to avoid loading the whole data into memory.
This code opens an input file 'd.txt' and reads the numbers that appear after certain lines in the bigger data, which looks like this:
SCALAR
ND 3
ST 0
TS 1000
1.0
1.0
1.0
SCALAR
ND 3
ST 0
TS 2000
3.3
3.4
3.5
SCALAR
ND 3
ST 0
TS 3000
1.7
1.8
1.9
and adds them to the numbers read from a smaller text file 'e.txt', which looks like this:
SCALAR
ND 3
ST 0
TS 0
10.0
10.0
10.0
The result is written in a text file 'output.txt' like this:
SCALAR
ND 3
ST 0
TS 1000
11.0
11.0
11.0
SCALAR
ND 3
ST 0
TS 2000
13.3
13.4
13.5
SCALAR
ND 3
ST 0
TS 3000
11.7
11.8
11.9
The code which I prepared:
def add_list_same(list1, list2):
    """
    list2 has the same size as list1
    """
    c = [a + b for a, b in zip(list1, list2)]
    print(c)
    return c

def list_numbers_after_ts(n, f):
    result = []
    for line in f:
        if line.startswith('TS'):
            for node in range(n):
                result.append(float(next(f)))
    return result

def writing_TS(f1):
    TS = []
    ND = []
    for line1 in f1:
        if line1.startswith('ND'):
            ND = float(line1.split()[-1])
        if line1.startswith('TS'):
            x = float(line1.split()[-1])
            TS.append(x)
    return TS, ND

with open('d.txt') as depth_dat_file, \
     open('e.txt') as elev_file, \
     open('output.txt', 'w') as out:
    m = writing_TS(depth_dat_file)
    print('number of TS', m[1])
    for j in range(0, int(m[1]) - 1):
        i = m[1] * j
        out.write('SCALAR\nND {0:2f}\nST 0\nTS {0:2f}\n'.format(m[1], m[0][j]))
        list1 = list_numbers_after_ts(int(m[1]), depth_dat_file)
        list2 = list_numbers_after_ts(int(m[1]), elev_file)
        Eh = add_list_same(list1, list2)
        out.writelines(["%.2f\n" % item for item in Eh])
The resulting output.txt looks like this:
SCALAR
ND 3.000000
ST 0
TS 3.000000
SCALAR
ND 3.000000
ST 0
TS 3.000000
SCALAR
ND 3.000000
ST 0
TS 3.000000
The addition of the lists does not work, even though I checked the functions separately and they work. I cannot find the error. I have changed it a lot, but it still does not work. Any suggestions? I really appreciate any help you can provide!
You can use a grouper to read the files in fixed-size chunks of lines. The following code should work as long as the order of lines within each group is unchanged.
from itertools import zip_longest

# Split-by-group iterator
# See http://stackoverflow.com/questions/434287/what-is-the-most-pythonic-way-to-iterate-over-a-list-in-chunks
def grouper(iterable, n, padvalue=None):
    return zip_longest(*[iter(iterable)]*n, fillvalue=padvalue)

add_numbers = []
with open("e.txt") as f:
    # Read data by 7 lines
    for lines in grouper(f, 7):
        # Suppress first SCALAR line
        for line in lines[1:]:
            # add last number in every line to array (6 elements)
            add_numbers.append(float(line.split()[-1].strip()))

# template for every group
template = 'SCALAR\nND {:.2f}\nST {:.2f}\nTS {:.2f}\n{:.2f}\n{:.2f}\n{:.2f}\n'

with open("d.txt") as f, open('output.txt', 'w') as out:
    # As before
    for lines in grouper(f, 7):
        data_numbers = []
        for line in lines[1:]:
            data_numbers.append(float(line.split()[-1].strip()))
        # in result_numbers sum elements of two arrays by pair (6 elements)
        result_numbers = [x + y for x, y in zip(data_numbers, add_numbers)]
        # * unpack result_numbers as 6 arguments of function format
        out.write(template.format(*result_numbers))
I had to change some small things in the code and now it works, but only for small input files, because many variables are loaded into memory. Can you please tell me how I can work with yield?
from itertools import zip_longest

def grouper(iterable, n, padvalue=None):
    return zip_longest(*[iter(iterable)]*n, fillvalue=padvalue)

def writing_ND(f1):
    for line1 in f1:
        if line1.startswith('ND'):
            ND = float(line1.split()[-1])
            return ND

def writing_TS(f):
    for line2 in f:
        if line2.startswith('TS'):
            x = float(line2.split()[-1])
            TS.append(x)
    return TS

TS = []
ND = []
x = 0.0
n = 0
add_numbers = []

with open("e.txt") as f, open("d.txt") as f1, \
        open('output.txt', 'w') as out:
    ND = writing_ND(f)
    TS = writing_TS(f1)
    n = int(ND) + 4
    f.seek(0)
    for lines in grouper(f, int(n)):
        for item in lines[4:]:
            add_numbers.append(float(item))
    i = 0
    for l in grouper(f1, n):
        data_numbers = []
        for line in l[4:]:
            data_numbers.append(float(line.split()[-1].strip()))
        result_numbers = [x + y for x, y in zip(data_numbers, add_numbers)]
        del data_numbers
        out.write('SCALAR\nND %d\nST 0\nTS %0.2f\n' % (ND, TS[i]))
        i += 1
        for item in result_numbers:
            out.write('%s\n' % item)
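On the yield question: a generator lets you read one TS group at a time instead of collecting everything up front with grouper. This is only a minimal, untested sketch built around the d.txt/e.txt layout from the question (ND is assumed to be 3, as in the example), not a drop-in replacement for the script above:

def read_groups(path, n_values):
    """Yield (ts, values) one group at a time, so only one group is in memory."""
    with open(path) as f:
        for line in f:
            if line.startswith('TS'):
                ts = float(line.split()[-1])
                values = [float(next(f)) for _ in range(n_values)]
                yield ts, values

# e.txt holds a single reference group; d.txt is streamed group by group.
_, add_numbers = next(read_groups('e.txt', 3))

with open('output.txt', 'w') as out:
    for ts, values in read_groups('d.txt', 3):
        summed = [a + b for a, b in zip(values, add_numbers)]
        out.write('SCALAR\nND 3\nST 0\nTS %d\n' % ts)
        out.writelines('%.1f\n' % v for v in summed)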

convert JSON to data.frame in R

I have a problem with converting lists to a data.frame.
First, I downloaded a dataset in JSON format from the Data API:
request1 <- POST(url = "https://api.data-api.io/v1/subjekti", add_headers('x-dataapi-key' = "xxxxxxx", 'content-type'= "application/json"), body = list(oib = oibreq), encode = "json")
json1 <- content(request1, type = "application/json")
json2 <- fromJSON(toJSON(json1, null = "null"), flatten = TRUE)
The problem is that the data are elements of lists. For example:
> json2[['oib']]
[[1]]
[1] "00045103869"
[[2]]
[1] "18527887472"
[[3]]
[1] "92680516748"
all colnames:
> colnames(json2)
[1] "oib" "mb" "mbs" "mbo" "rno" "naziv"
[7] "adresa" "grad" "posta" "zupanija" "nkd2007" "puo"
[13] "godinaOsnivanja" "status" "temeljniKapital" "isActive" "datumBrisanja" "predmetPoslovanja"
How can I convert these lists to a data.frame?
Sorry, that was my first question on Stack Overflow. Here is my dataset:
> data <- dput(json3)
structure(list(oib = list("00045103869", "18527887472", "92680516748"),
mb = list("01699032", "03858731", "02591596"), mbs = list(
"080451345", "060060881", "040260786"), mbo = c(NA, NA,
NA), rno = c(NA, NA, NA), naziv = list("INTERIJER DIZAJN d.o.o.",
"M - Đ COMMERCE d.o.o.", "HIP REKLAME d.o.o. u stečaju"),
adresa = list("Savska cesta 179", "Put Piketa 0", "Sadska 2"),
grad = list("Zagreb", "Sinj", "Rijeka"), posta = list("10000",
"21230", "51000"), zupanija = list("Grad Zagreb", "Splitsko-dalmatinska",
"Primorsko-goranska"), nkd2007 = list("1623", "4719",
"4711"), puo = list(92L, 92L, 92L), godinaOsnivanja = list(
"2003", "1995", "2009"), status = list("bez postupka",
"bez postupka", "stečaj"), temeljniKapital = list("20.000,00 kn",
"509.100,00 kn", "20.000,00 kn"), isActive = list(TRUE,
TRUE, FALSE), datumBrisanja = list(NULL, NULL, "2015-12-24T00:00:00+01:00")), .Names = c("oib",
"mb", "mbs", "mbo", "rno", "naziv", "adresa", "grad", "posta",
"zupanija", "nkd2007", "puo", "godinaOsnivanja", "status", "temeljniKapital",
"isActive", "datumBrisanja"), class = "data.frame", row.names = c(NA,
3L))
A quick & dirty way would be to replace the NULL values with e.g. NAs, like this:
f <- function(lst) lapply(lst, function(x) if (is.list(x)) f(x) else if (is.null(x)) NA_character_ else x)
df <- as.data.frame(lapply(f(json2), unlist))
str(df)
# 'data.frame': 3 obs. of 17 variables:
# $ oib : Factor w/ 3 levels "00045103869",..: 1 2 3
# $ mb : Factor w/ 3 levels "01699032","02591596",..: 1 3 2
# $ mbs : Factor w/ 3 levels "040260786","060060881",..: 3 2 1
# $ mbo : logi NA NA NA
# $ rno : logi NA NA NA
# $ naziv : Factor w/ 3 levels "HIP REKLAME d.o.o. u stecaju",..: 2 3 1
# $ adresa : Factor w/ 3 levels "Put Piketa 0",..: 3 1 2
# $ grad : Factor w/ 3 levels "Rijeka","Sinj",..: 3 2 1
# $ posta : Factor w/ 3 levels "10000","21230",..: 1 2 3
# $ zupanija : Factor w/ 3 levels "Grad Zagreb",..: 1 3 2
# $ nkd2007 : Factor w/ 3 levels "1623","4711",..: 1 3 2
# $ puo : int 92 92 92
# $ godinaOsnivanja: Factor w/ 3 levels "1995","2003",..: 2 1 3
# $ status : Factor w/ 2 levels "bez postupka",..: 1 1 2
# $ temeljniKapital: Factor w/ 2 levels "20.000,00 kn",..: 1 2 1
# $ isActive : logi TRUE TRUE FALSE
# $ datumBrisanja : Factor w/ 1 level "2015-12-24T00:00:00+01:00": NA NA 1
But there may be better options.

R: Trying to format a data.frame created from a JSON object so that I can use write.table

I'm using the R programming language (and RStudio) and am having trouble organizing some data that I'm pulling via an API so that it's writeable to a table. I'm using the StubHub API to get a JSON response that contains all ticket listings for a particular event. I can successfully make the call to StubHub and I get a successful response. Here's the code I am using to grab the response:
# get the content part of the response
msgContent = content(response)
# format to JSON object
jsonContent = jsonlite::fromJSON(toJSON(msgContent),flatten=TRUE,simplifyVector=TRUE)
This JSON object has a node called “listing” and that’s what I’m most interested in, so I set a variable to that part of the object:
friListings = jsonContent$listing
Checking the class of “friListings” I see I have a data.frame:
> class(friListings)
[1] "data.frame"
When I click on this variable in RStudio (View(friListings)), it opens in a new tab and looks pretty and nicely formatted. There are 21 variables (columns) and 609 observations (rows). I see null values for certain cells, which is expected.
I would like to write this data.frame out as a table in a file on my computer. When I try to do that, I get this error.
> write.table(friListings,file="data",row.names=FALSE)
Error in if (inherits(X[[j]], "data.frame") && ncol(xj) > 1L) X[[j]] <- as.matrix(X[[j]]) :
missing value where TRUE/FALSE needed
Looking at other postings, it appears this is happening because my data.frame is actually not "flat"; it is a list of lists with different classes and nesting. I verified this by running str() on each of the columns in friListings:
> str(friListings[1])
'data.frame': 609 obs. of 1 variable:
$ listingId:List of 609
..$ : int 1138579989
..$ : int 1138969061
..$ : int 1138958138
(this is just the first couple of lines, there are hundreds)
Another example:
> str(friListings[6])
'data.frame': 609 obs. of 1 variable:
$ sellerSectionName:List of 609
..$ : chr "Upper 354 - No View"
..$ : chr "Club 303 - Obstructed/No View"
..$ : chr "Middle 254 - Obstructed/No View"
(this is just the first couple of lines, there are hundreds)
Here is the head of friListings that I am attempting to share using dput from the reproducible example post:
> dput(head(friListings,4))
structure(list(listingId = list(1138579989L, 1138969061L, 1138958138L,
1139003985L), sectionId = list(1552295L, 1552172L, 1552220L,
1552289L), row = list("16", "6", "22", "26"), quantity = list(
1L, 2L, 4L, 1L), sellerSectionName = list("Upper 354 - No View",
"Club 303 - Obstructed/No View", "Middle 254 - Obstructed/No View",
"353"), sectionName = list("Upper 354 - Obstructed/No View",
"Club 303 - Obstructed/No View", "Middle 254 - Obstructed/No View",
"Upper 353 - Obstructed/No View"), seatNumbers = list("21",
"7,8", "13,14,15,16", "General Admission"), zoneId = list(
232917L, 232909L, 232914L, 232917L), zoneName = list("Upper",
"Club", "Middle", "Upper"), listingAttributeList = list(structure(c(204L,
201L), .Dim = c(2L, 1L)), structure(c(4369L, 5370L), .Dim = c(2L,
1L)), structure(c(4369L, 5989L), .Dim = c(2L, 1L)), structure(c(204L,
4369L), .Dim = c(2L, 1L))), listingAttributeCategoryList = list(
structure(1L, .Dim = c(1L, 1L)), structure(1L, .Dim = c(1L,
1L)), structure(1L, .Dim = c(1L, 1L)), structure(1L, .Dim = c(1L,
1L))), deliveryTypeList = list(structure(5L, .Dim = c(1L,
1L)), structure(5L, .Dim = c(1L, 1L)), structure(5L, .Dim = c(1L,
1L)), structure(5L, .Dim = c(1L, 1L))), dirtyTicketInd = list(
FALSE, FALSE, FALSE, FALSE), splitOption = list("0", "0",
"1", "1"), ticketSplit = list("1", "2", "2", "1"), splitVector = list(
structure(1L, .Dim = c(1L, 1L)), structure(2L, .Dim = c(1L,
1L)), structure(c(2L, 4L), .Dim = c(2L, 1L)), structure(1L, .Dim = c(1L,
1L))), sellerOwnInd = list(0L, 0L, 0L, 0L), currentPrice.amount = list(
468.99, 475L, 475L, 550.45), currentPrice.currency = list(
"USD", "USD", "USD", "USD"), faceValue.amount = list(NULL,
NULL, NULL, NULL), faceValue.currency = list(NULL, NULL,
NULL, NULL)), .Names = c("listingId", "sectionId", "row",
"quantity", "sellerSectionName", "sectionName", "seatNumbers",
"zoneId", "zoneName", "listingAttributeList", "listingAttributeCategoryList",
"deliveryTypeList", "dirtyTicketInd", "splitOption", "ticketSplit",
"splitVector", "sellerOwnInd", "currentPrice.amount", "currentPrice.currency",
"faceValue.amount", "faceValue.currency"), row.names = c(NA,
4L), class = "data.frame")
I tried to get around this by going through each column in friListings, unlisting that node, saving to a vector and then doing a cbind to stitch them all together. But, when I do that, I get vectors of different lengths because of the nulls. I took this approach one step further and tried to class each column to force NAs to preserve the nulls, but that’s not working. And, regardless, there’s gotta be a better approach than this. Here's some output to illustrate what happens when I attempt this approach.
# Take the column zoneId and casting it as numeric to force NA
friListings$zoneId<-lapply(friListings$zoneId, as.numeric)
# check the length
> length(friListings$zoneId)
[1] 609
# unlist and check the length... and I lost 11 items
> zoneid <- unlist(friListings$zoneId, use.names=FALSE)
> length(zoneid)
[1] 598
# here's the tail of the column... (because I happen to know that's where the empty values that are being dropped are)
> tail(friListings$zoneId)
[[1]]
numeric(0)
[[2]]
numeric(0)
[[3]]
numeric(0)
[[4]]
numeric(0)
[[5]]
numeric(0)
[[6]]
numeric(0)
I know people work with JSON and R all the time (I'm obviously not one of those people!), so maybe I’m missing something obvious. But I’ve spent 5 hours trying different ways to clean this data and searching the internet for answers. I read the JSON package documentation, too.
I really just want to "flatten" this object so that it's pretty and structured the same way RStudio renders it when I do View(friListings). I'm already passing "flatten=TRUE" in my "fromJSON" call above and it doesn't seem to do what I expect. Same with "simplifyVector=TRUE" (which is TRUE by default according to the docs, but I added it for clarity).
Thanks for any insight or guidance you may be able to offer!!!
You might want to try and adapt this approach:
f <- function(x)
  if(is.list(x)) {
    unlist(lapply(x, f))
  } else {
    x[which(is.null(x))] <- NA
    paste(x, collapse = ",")
  }
df <- as.data.frame(do.call(cbind, lapply(friListings, f)))
write.table(df, tf <- tempfile(fileext = "csv"))
df <- read.table(tf)
str(df)
# 'data.frame': 4 obs. of 21 variables:
# $ listingId : int 1138579989 1138969061 1138958138 1139003985
# $ sectionId : int 1552295 1552172 1552220 1552289
# $ row : int 16 6 22 26
# $ quantity : int 1 2 4 1
# $ sellerSectionName : Factor w/ 4 levels "353","Club 303 - Obstructed/No View",..: 4 2 3 1
# $ sectionName : Factor w/ 4 levels "Club 303 - Obstructed/No View",..: 4 1 2 3
# $ seatNumbers : Factor w/ 4 levels "13,14,15,16",..: 2 3 1 4
# $ zoneId : int 232917 232909 232914 232917
# $ zoneName : Factor w/ 3 levels "Club","Middle",..: 3 1 2 3
# $ listingAttributeList : Factor w/ 4 levels "204,201","204,4369",..: 1 3 4 2
# $ listingAttributeCategoryList: int 1 1 1 1
# $ deliveryTypeList : int 5 5 5 5
# $ dirtyTicketInd : logi FALSE FALSE FALSE FALSE
# $ splitOption : int 0 0 1 1
# $ ticketSplit : int 1 2 2 1
# $ splitVector : Factor w/ 3 levels "1","2","2,4": 1 2 3 1
# $ sellerOwnInd : int 0 0 0 0
# $ currentPrice.amount : num 469 475 475 550
# $ currentPrice.currency : Factor w/ 1 level "USD": 1 1 1 1
# $ faceValue.amount : logi NA NA NA NA
# $ faceValue.currency : logi NA NA NA NA