Greater and less than operator in a URL - json

Hello everyone, I have a db.json file which I serve at http://localhost:3004
The format is like this:
[
{
"Symbol": "AAPL",
"prodType": "STK",
"Market": "Symbol Industries Inc",
"Country": "US",
"Quantity": 10,
"NominalValue": 123.45,
"AvgPrice": 131.16,
"PositionCost": 123.87,
"LastPrice": 123.567,
"PositionValue": 145.78,
"TotalValue": 123.347,
"PortfolioPercentage": 12,
"ProfitLoss": 1235.09,
"Results": "12 Dec, 2017",
"Dividend": "29 Apr, 2017"
},
{
"Symbol": "fsdfsd",
"prodType": "DER",
"Market": "Symbol Industries Inc",
"Country": "Greece",
"Quantity": 10,
"NominalValue": 123.45,
"AvgPrice": 131.16,
"PositionCost": 123.87,
"LastPrice": 123.567,
"PositionValue": 145.78,
"TotalValue": 123.347,
"PortfolioPercentage": 12,
"ProfitLoss": 1235.09,
"Results": "12 Dec, 2017",
"Dividend": "29 Apr, 2017"
}
]
I want to query the URL so that it returns only the records whose total value is greater than 100 and less than 130, for example, but I can't get it to work. I have tried different approaches, but none of them works properly. How can I implement such a query against this URL?
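If the file is served with json-server (a common way to run a db.json on a port like 3004; an assumption here, since the question doesn't name the server), range filters can go straight into the URL using the `_gte` and `_lte` field suffixes. Note those bounds are inclusive, unlike a strict greater/less than. A minimal sketch, using `positions` as a placeholder resource name for the array:

```python
# Server-side filter with json-server's _gte/_lte operators (inclusive bounds);
# "positions" is a placeholder resource name for the array in db.json:
#   GET http://localhost:3004/positions?TotalValue_gte=100&TotalValue_lte=130

# Equivalent client-side filter, in case the server cannot do it
# (strict comparisons here, matching "greater than 100 and less than 130"):
records = [
    {"Symbol": "AAPL", "TotalValue": 123.347},
    {"Symbol": "fsdfsd", "TotalValue": 250.0},
]
matches = [r for r in records if 100 < r["TotalValue"] < 130]
print([r["Symbol"] for r in matches])  # → ['AAPL']
```

If the server is something other than json-server, the client-side filter still applies after fetching the full array.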

Related

Accessing a nested JSON file in Flutter

I am trying to access 'title' from the following list, but it keeps throwing an error.
var moviesDB = {
"genres": [
"Comedy",
"Fantasy",
"Crime",
"Drama",
"Music",
"Adventure",
"History",
"Thriller",
"Animation",
"Family",
"Mystery",
"Biography",
"Action",
"Film-Noir",
"Romance",
"Sci-Fi",
"War",
"Western",
"Horror",
"Musical",
"Sport"
],
"movies": [
{
"id": 1,
"title": "Beetlejuice",
"year": "1988",
"runtime": "92",
"genres": ["Comedy", "Fantasy"],
"director": "Tim Burton",
"actors": "Alec Baldwin, Geena Davis, Annie McEnroe, Maurice Page",
"plot":
"A couple of recently deceased ghosts contract the services of a \"bio-exorcist\" in order to remove the obnoxious new owners of their house.",
"posterUrl":
"https://images-na.ssl-images-amazon.com/images/M/MV5BMTUwODE3MDE0MV5BMl5BanBnXkFtZTgwNTk1MjI4MzE#._V1_SX300.jpg"
},
{
"id": 2,
"title": "The Cotton Club",
"year": "1984",
"runtime": "127",
"genres": ["Crime", "Drama", "Music"],
"director": "Francis Ford Coppola",
"actors": "Richard Gere, Gregory Hines, Diane Lane, Lonette McKee",
"plot":
"The Cotton Club was a famous night club in Harlem. The story follows the people that visited the club, those that ran it, and is peppered with the Jazz music that made it so famous.",
"posterUrl":
"https://images-na.ssl-images-amazon.com/images/M/MV5BMTU5ODAyNzA4OV5BMl5BanBnXkFtZTcwNzYwNTIzNA##._V1_SX300.jpg"
},
]
}
I can go as far as moviesDB["movies"][0] but cannot get the title property.
The same lookup works in JavaScript with no errors, though:
console.log(moviesDB["movies"][0]["title"]);
Any solution for this?
You need to cast the element of your movie list: Dart infers the lists' elements as plain Object, so moviesDB['movies'][0] is not known to be a map until you cast it to Map&lt;String, dynamic&gt;.
print((moviesDB['movies'][0] as Map<String, dynamic>)['title']);

Classroom API returns wrong due date

I am requesting a list of assignments for each course. For this example I set the due date to the 11th, but the API returns the due date as the 12th.
"courseWork": [
{
"courseId": "116315138435",
"id": "116726071520",
"title": "Test Assignments",
"description": "ojoijoijoijoijoij",
"state": "PUBLISHED",
"alternateLink": "https://classroom.google.com/c/MTE2MzE1MTM4NDM1/a/MTE2NzI2MDcxNTIw/details",
"creationTime": "2020-07-09T17:53:00.220Z",
"updateTime": "2020-07-09T17:53:10.544Z",
"dueDate": {
"year": 2020,
"month": 7,
"day": 12
},
"dueTime": {
"hours": 4,
"minutes": 59
},
"maxPoints": 100,
"workType": "ASSIGNMENT",
"submissionModificationMode": "MODIFIABLE_UNTIL_TURNED_IN",
"creatorUserId": "111094682610866207024"
}
]
I am not sure I can use a time zone to fix this, because the API simply returns the day/month/year without a specified time zone. Can I just subtract one from the due date's day, or is there a more logical approach?
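Subtracting a day blindly would break for other time zones: the Classroom API reports dueDate and dueTime in UTC, so the right fix is to combine them and convert into the course's (or user's) time zone. A sketch assuming Python 3.9+ zoneinfo and a hypothetical America/Chicago time zone, where 11:59 PM on the 11th is exactly 04:59 UTC on the 12th, matching the response above:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+, needs a system tz database

# dueDate {2020, 7, 12} + dueTime {4, 59} from the API, interpreted as UTC
due_utc = datetime(2020, 7, 12, 4, 59, tzinfo=timezone.utc)

# Convert to the course's time zone (America/Chicago is an assumed example)
due_local = due_utc.astimezone(ZoneInfo("America/Chicago"))
print(due_local.isoformat())  # → 2020-07-11T23:59:00-05:00
```

The actual time zone to use would come from your course or user settings, not a hard-coded string.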

NYC Open data DOB Missing Information

I am facing an issue with the NYC Department of Buildings API.
If you know any other API providing the same information, please let me know.
I have used this API, but it didn't work for me:
https://data.cityofnewyork.us/resource/83x8-shf7.json
Missing fields:
permittee detailed address
https://data.cityofnewyork.us/resource/83x8-shf7.json?$where=filing_date BETWEEN '2018-05-01T06:00:00' AND '2018-05-30T10:00:00'
Also, I am not able to get the expected data using filters on "filing_date" from the same API.
The query should return all records with filing dates between 2018-05-01 and 2018-05-30, but I am getting only a few results.
This is the data I am getting:
[
{
"bin__": "3118313",
"bldg_type": "1",
"block": "05143",
"borough": "BROOKLYN",
"city": "BROOKLYN",
"community_board": "314",
"dobrundate": "2018-05-03T00:00:00.000",
"expiration_date": "2018-06-11T00:00:00.000",
"filing_date": "2018-05-02T00:00:00.000",
"filing_status": "INITIAL",
"gis_census_tract": "1522",
"gis_council_district": "40",
"gis_latitude": "40.641731",
"gis_longitude": "-73.966432",
"gis_nta_name": "Flatbush",
"house__": "328",
"issuance_date": "2018-05-02T00:00:00.000",
"job__": "321679046",
"job_doc___": "01",
"job_start_date": "2018-05-02T00:00:00.000",
"job_type": "A2",
"lot": "00068",
"non_profit": "N",
"owner_s_business_name": "N/A",
"owner_s_business_type": "INDIVIDUAL",
"owner_s_first_name": "MATTHEW",
"owner_s_house__": "328",
"owner_s_house_street_name": "ARGYLE ROAD",
"owner_s_last_name": "LIMA",
"owner_s_phone__": "3475968096",
"owner_s_zip_code": "11218",
"permit_sequence__": "01",
"permit_si_no": "3452932",
"permit_status": "ISSUED",
"permit_subtype": "OT",
"permit_type": "EW",
"permittee_s_business_name": "BMB BUILDER INC",
"permittee_s_first_name": "YUAN HANG",
"permittee_s_last_name": "XIAO",
"permittee_s_license__": "0612790",
"permittee_s_license_type": "GC",
"permittee_s_phone__": "9175776544",
"residential": "YES",
"self_cert": "N",
"site_fill": "NOT APPLICABLE",
"state": "NY",
"street_name": "ARGYLE ROAD",
"superintendent_business_name": "BMB BUILDER INC",
"superintendent_first___last_name": "YUAN HANG XIAO",
"work_type": "OT",
"zip_code": "11218"
}]
The expected data should also include records like this one:
[{
"bin__": "1090379",
"bldg_type": "2",
"block": "00760",
"borough": "MANHATTAN",
"city": "GREAT NECK",
"community_board": "104",
"dobrundate": "2018-05-02T00:00:00.000",
"expiration_date": "2018-10-28T00:00:00.000",
"filing_date": "2018-05-01T00:00:00.000",
"filing_status": "RENEWAL",
"gis_census_tract": "111",
"gis_council_district": "3",
"gis_latitude": "40.753978",
"gis_longitude": "-73.993673",
"gis_nta_name": "Hudson Yards-Chelsea-Flatiron-Union Square",
"house__": "337",
"issuance_date": "2018-05-01T00:00:00.000",
"job__": "121187606",
"job_doc___": "01",
"job_start_date": "2016-02-19T00:00:00.000",
"job_type": "NB",
"lot": "00020",
"non_profit": "N",
"owner_s_business_name": "HKONY WEST 36 LLC",
"owner_s_business_type": "PARTNERSHIP",
"owner_s_first_name": "SAM",
"owner_s_house__": "420",
"owner_s_house_street_name": "GREAT NECK ROAD",
"owner_s_last_name": "CHANG",
"owner_s_phone__": "9178380886",
"owner_s_zip_code": "11021",
"permit_sequence__": "07",
"permit_si_no": "3451790",
"permit_status": "ISSUED",
"permit_type": "NB",
"permittee_s_business_name": "OMNIBUILD CONSTRUCTION IN",
"permittee_s_first_name": "PETER",
"permittee_s_last_name": "SERPICO",
"permittee_s_license__": "0608390",
"permittee_s_license_type": "GC",
"permittee_s_phone__": "2124191930",
"self_cert": "N",
"site_fill": "ON-SITE",
"site_safety_mgr_s_first_name": "ROBERT",
"site_safety_mgr_s_last_name": "FILIPPONE",
"special_district_1": "GC",
"state": "NY",
"street_name": "W 36 ST",
"zip_code": "10018"
}]
Combing through the JSON, it appears that these columns are not matching: permit_subtype, superintendent_business_name, superintendent_first___last_name, site_safety_mgr_s_first_name, site_safety_mgr_s_last_name, and special_district_1.
Looking at the original data sources, the mismatched columns are cases where the field is blank for that record. That is, bin__ = 1090379 has no permit_subtype, so the key is omitted from the JSON (which is standard practice).
It will, however, be included in the CSV output since that format must include all columns: https://data.cityofnewyork.us/resource/83x8-shf7.csv?$where=filing_date%20BETWEEN%20%272018-05-01T06:00:00%27%20AND%20%272018-05-30T10:00:00%27.
This answer took a bit of digging because it wasn't immediately obvious which columns differed between the two examples. It's always helpful to over-explain to make it easier to track down the issue.
Likewise, per the filing_date question, please include the query you're attempting to use.
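On the "only a few results" symptom: Socrata's SODA API caps responses at 1,000 rows by default unless you raise `$limit` (and page with `$offset` for very large result sets), and the `$where` clause should be URL-encoded rather than pasted with raw spaces and quotes. A hedged sketch of building such a request URL (the 50,000 limit is an example value):

```python
from urllib.parse import urlencode

base = "https://data.cityofnewyork.us/resource/83x8-shf7.json"
params = {
    # SoQL range filter on the floating-timestamp column
    "$where": "filing_date between '2018-05-01T00:00:00' and '2018-05-30T23:59:59'",
    # SODA returns at most 1000 rows unless you raise $limit / page with $offset
    "$limit": 50000,
}
# urlencode percent-encodes the $, quotes, and spaces for us
url = base + "?" + urlencode(params)
print(url)
```

Also note the original query starts at 06:00 on May 1, which by itself excludes records filed earlier that day.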

Convert to dataframe from JSON in R

I am facing issues with converting JSON to a data frame. I tried the libraries jsonlite, RJSONIO, and rjson.
I keep getting 'invalid character in the string' or 'unclosed string' errors.
I am getting this data from a standard API, so I should be able to parse this JSON. JSON editors also parse this data just fine.
My question is:
Is there a standard way to make sure my data frame gets created while ignoring the errors above?
My best guess was to round-trip the data through the toJSON function from either of the libraries, but if I use
newdata <- fromJSON(toJSON(data))
it somehow never gets converted to a data frame. Why is that?
If I instead use
newdata <- fromJSON(data)
I get a valid data frame, but sometimes it fails because of the errors above, which is what I am trying to understand. How do you deal with this?
I have also tried this:
freshDeskTicketsToDF <- jsonlite::fromJSON(paste(readLines(textConnection(freshDeskTickets)), collapse=""))
It seemed to solve the problem, but at some point I got an 'unclosed string' error with this method that I otherwise did not.
Are there better ways to deal with this in R?
Also, why is it that using toJSON on data passed to fromJSON never gets converted to a dataframe?
If I decide to strip HTML tags from the values assigned to keys in the JSON data, how does that work? Can I do that?
Edit: It looks like I get this error when I have <html tags> in my "string data", but I have them all across my JSON data and I don't get the error every time.
How to deal with problems like this?
Note: this issue not specific to the data that I have. What I am looking for is ways to deal with problems like these and not one specific solution to a single problem.
I just realized that toJSON converts R objects to JSON; it does not turn malformed JSON into valid JSON. Is there a way to do that instead?
Sample data:
[
{
"cc_emails": [
],
"fwd_emails": [
],
"reply_cc_emails": [
],
"fr_escalated": false,
"spam": false,
"email_config_id": 1000062780,
"group_id": 1000179078,
"priority": 1,
"requester_id": 1022205968,
"responder_id": 1018353725,
"source": 1,
"company_id": null,
"status": 5,
"subject": "Order number-100403891",
"to_emails": [
"contact#stalkbuylove.com"
],
"product_id": null,
"id": 174093,
"type": "Order Status query",
"due_by": "2016-09-02T08:57:30Z",
"fr_due_by": "2016-09-02T02:57:30Z",
"is_escalated": true,
"description": "<div dir=\"ltr\">Hi Team,<div><br></div>\n<div>I have ordered an item from your website, order number-100403891. I had called on August 30 2016 to postpone the delivery date. The guy i spoke from your end had confirmed that he will hold and push the delivery date to September 5 or 6 or 7 2016. And he confirmed the same.</div>\n<div>However, the guy I spoke to<b> did not do it</b>. </div>\n<div>I got to know it from ABHINAV from your customer care team who I spoke to on August 1st at 13:10. Hence I have put a request again and he said he will talk to some guys and give me the desired dates for delivery which is 5,6,7 of September 2016. </div>\n<div>Please let me know the concern on this and hope for a quick turn around.</div>\n<div><br></div>\n<div>Thank you,</div>\n<div>Hari,</div>\n<div>+91-9538199699.</div>\n</div>\n",
"description_text": "Hi Team,\r\n\r\nI have ordered an item from your website, order number-100403891. I had\r\ncalled on August 30 2016 to postpone the delivery date. The guy i spoke\r\nfrom your end had confirmed that he will hold and push the delivery date to\r\nSeptember 5 or 6 or 7 2016. And he confirmed the same.\r\nHowever, the guy I spoke to* did not do it*.\r\nI got to know it from ABHINAV from your customer care team who I spoke to\r\non August 1st at 13:10. Hence I have put a request again and he said he\r\nwill talk to some guys and give me the desired dates for delivery which is\r\n5,6,7 of September 2016.\r\nPlease let me know the concern on this and hope for a quick turn around.\r\n\r\nThank you,\r\nHari,\r\n+91-9538199699.\n",
"custom_fields": {
},
"created_at": "2016-09-01T07:51:18Z",
"updated_at": "2016-09-11T11:00:33Z"
},
{
"cc_emails": [
],
"fwd_emails": [
],
"reply_cc_emails": [
],
"fr_escalated": false,
"spam": false,
"email_config_id": 1000062780,
"group_id": 1000179078,
"priority": 1,
"requester_id": 1022148025,
"responder_id": 1021145209,
"source": 1,
"company_id": null,
"status": 5,
"subject": "Defect in d piece",
"to_emails": [
"contact#stalkbuylove.com"
],
"product_id": null,
"id": 174092,
"type": "Return",
"due_by": "2016-09-01T15:51:00Z",
"fr_due_by": "2016-09-01T09:51:00Z",
"is_escalated": false,
"description": "<div><br></div>\n<div><br></div>\n<div><br></div>\n<div><div style=\"font-size:75%;color:#575757\">Sent from Samsung Mobile</div></div>",
"description_text": "\n\n\nSent from Samsung Mobile",
"custom_fields": {
},
"created_at": "2016-09-01T07:51:00Z",
"updated_at": "2016-09-06T09:00:14Z"
},
{
"cc_emails": [
],
"fwd_emails": [
],
"reply_cc_emails": [
],
"fr_escalated": false,
"spam": false,
"email_config_id": 1000062780,
"group_id": 1000179078,
"priority": 1,
"requester_id": 1022205895,
"responder_id": 1018353725,
"source": 1,
"company_id": null,
"status": 5,
"subject": "Re: StalkBuyLove Return Request for order: 100404435",
"to_emails": [
"StalkBuyLove <contact#stalkbuylove.com>"
],
"product_id": null,
"id": 174088,
"type": "Refund query",
"due_by": "2016-09-01T15:43:56Z",
"fr_due_by": "2016-09-01T09:43:56Z",
"is_escalated": true,
"description": "<div>Hi. Can u deposit the amount if i giv u my account number. Right away i cant choose any other product frim ur site. <br><br>Sent from my iPhone</div>\n<div>\n<br>On Sep 1, 2016, at 12:38 PM, StalkBuyLove <contact#stalkbuylove.com> wrote:<br><br>\n</div>\n<blockquote><div>\n<div><img title=\"StalkBuyLove\" alt=\"Stalkbuylove\" src=\"http://www.stalkbuylove.com/launcher_icons/Newlogo_Stalkbuylove_240x50.png\"></div>\n<div>Hello <b>Anamica Aggarwal</b>,</div>\n<div>We have initiated a return request for order: <b>100404435</b> with the following products:</div>\n<table style=\"width:80%\">\r\n <tbody>\n<tr style=\"background-color:#B0C4DE\">\r\n <th>Item Name</th>\r\n <th>Sku</th>\r\n </tr>\n<tr>\r\n <td style=\"text-align:center\">Articuno Top</td>\r\n <td style=\"text-align:center\">IN1627MTOTOPPCH-198-18</td>\r\n </tr>\n</tbody>\n</table>\n<div>Lots of love,</div>\n<div>Team SBL</div>\n<img src=\"http://mandrillapp.com/track/open.php?u=30069003&id=bff0a5daee4a47fe9c6b04d2680c3c39\" height=\"1\" width=\"1\">\r\n</div></blockquote>",
"description_text": "Hi. Can u deposit the amount if i giv u my account number. Right away i cant choose any other product frim ur site. \n\nSent from my iPhone\n\n> On Sep 1, 2016, at 12:38 PM, StalkBuyLove <contact#stalkbuylove.com> wrote:\n> \n> \n> Hello Anamica Aggarwal,\n> \n> We have initiated a return request for order: 100404435 with the following products:\n> \n> Item Name\tSku\n> Articuno Top\tIN1627MTOTOPPCH-198-18\n> Lots of love,\n> \n> Team SBL\n> \n",
"custom_fields": {
},
"created_at": "2016-09-01T07:43:56Z",
"updated_at": "2016-09-11T11:00:32Z"
},
{
"cc_emails": [
],
"fwd_emails": [
],
"reply_cc_emails": [
],
"fr_escalated": false,
"spam": false,
"email_config_id": 1000062780,
"group_id": 1000179078,
"priority": 1,
"requester_id": 1022205881,
"responder_id": 1021145209,
"source": 1,
"company_id": null,
"status": 5,
"subject": "Details for order",
"to_emails": [
"contact#stalkbuylove.com"
],
"product_id": null,
"id": 174086,
"type": "Order Status query",
"due_by": "2016-09-01T15:42:50Z",
"fr_due_by": "2016-09-01T09:42:50Z",
"is_escalated": false,
"description": "<div><span></span></div>\n<div>\n<span>Hey can i get details of my order </span><br><span>How much more time will it take to get delivered? </span><br><span>Order no-</span><h2 style=\"font-weight: normal; margin: 0px;\"><font><span style=\"background-color: rgba(255, 255, 255, 0);\">100403837</span></font></h2>\n<span></span><br><span>Sent from my iPhone</span><br>\n</div>",
"description_text": "Hey can i get details of my order \r\nHow much more time will it take to get delivered? \r\nOrder no-\r\n100403837\r\n\r\nSent from my iPhone\n",
"custom_fields": {
},
"created_at": "2016-09-01T07:42:50Z",
"updated_at": "2016-09-06T09:00:13Z"
},
{
"cc_emails": [
],
"fwd_emails": [
],
"reply_cc_emails": [
],
"fr_escalated": true,
"spam": false,
"email_config_id": 1000062780,
"group_id": 1000179078,
"priority": 1,
"requester_id": 1022204690,
"responder_id": 1021145209,
"source": 1,
"company_id": null,
"status": 5,
"subject": "Refund",
"to_emails": [
"contact#stalkbuylove.com"
],
"product_id": null,
"id": 174080,
"type": "Refund query",
"due_by": "2016-09-01T15:36:26Z",
"fr_due_by": "2016-09-01T09:36:26Z",
"is_escalated": true,
"description": "<div>\r<br>Bank statement as asked for refund! Please intiate the proccedings asap!<br>\n</div>",
"description_text": "\r\nBank statement as asked for refund! Please intiate the proccedings asap!\n",
"custom_fields": {
},
"created_at": "2016-09-01T07:36:26Z",
"updated_at": "2016-09-07T08:00:19Z"
}
]
library(jsonlite)
df <- stream_in(file("~/data/sample.json"))
The stream_in() function converts the JSON directly into a data frame.

JQ or any json parser to make a join over multiple large JSON files

At this page - https://openlibrary.org/developers/dumps - there are JSON data dumps for 'editions' and 'authors' totalling about 7 GB of data when compressed (about 28 GB when uncompressed).
The editions files are structured like this (the information in each row varies):
/type/edition /books/OL24712550M 2 2011-08-12T15:48:15.081632 {"subtitle": "finding solace and strength from friends and strangers", "series": ["Thorndike Press large print biography", "Thorndike large print biography series"], "covers": [6783622], "lc_classifications": ["E840.8.E29 E24 2007"], "latest_revision": 2, "ocaid": "savinggracesfind00edwa", "source_records": ["ia:savinggracesfind00edwa"], "title": "Saving graces", "languages": [{"key": "/languages/eng"}], "subjects": ["Cancer", "Family", "Legislators' spouses", "Philosophy", "Patients", "Large type books", "Lawyers' spouses", "Biography", "Protected DAISY"], "subject_people": ["Elizabeth Edwards (1949-)", "John Edwards (1953 June 10-)"], "publish_country": "meu", "by_statement": "Elizabeth Edwards", "oclc_numbers": ["71809986"], "type": {"key": "/type/edition"}, "revision": 2, "publishers": ["Thorndike Press"], "ia_box_id": ["IA133215"], "full_title": "Saving graces finding solace and strength from friends and strangers", "last_modified": {"type": "/type/datetime", "value": "2011-08-12T15:48:15.081632"}, "key": "/books/OL24712550M", "authors": [{"key": "/authors/OL6606949A"}], "publish_places": ["Waterville, Me"], "pagination": "613 p. (large print) ;", "created": {"type": "/type/datetime", "value": "2011-06-29T22:47:47.350358"}, "dewey_decimal_class": ["973.931092", "B"], "number_of_pages": 613, "isbn_13": ["9780786291670"], "lccn": ["2006031151"], "subject_places": ["United States", "North Carolina"], "isbn_10": ["0786291672"], "publish_date": "2007", "copyright_date": "2006", "works": [{"key": "/works/OL15801457W"}]}
/type/edition /books/OL11119269M 5 2010-04-24T18:14:28.389476 {"number_of_pages": 362, "subtitle": "Godparenthood and Adoption in the Early Middle Ages (The University of Delaware Press Series, the Family in Interdisciplinary Perspective)", "weight": "1.6 pounds", "covers": [2673249], "latest_revision": 5, "edition_name": "Rev Exp edition", "title": "Spiritual Kinship As Social Practice", "languages": [{"key": "/languages/eng"}], "subjects": ["Family & Relationships", "Genealogy, heraldry, names and honours", "c 500 CE to c 1000 CE", "Ancient Rome - History", "Social Institutions", "Sociology", "Ancient Rome", "Sociology - Marriage & Family", "Alternative Family", "Ancient - Rome", "Spirituality - General", "Adoption", "Europe", "History", "Medieval, 500-1500", "Social history", "Sponsors", "To 1500"], "type": {"key": "/type/edition"}, "physical_dimensions": "9.8 x 6.2 x 1 inches", "revision": 5, "publishers": ["University of Delaware Press"], "physical_format": "Hardcover", "last_modified": {"type": "/type/datetime", "value": "2010-04-24T18:14:28.389476"}, "key": "/books/OL11119269M", "authors": [{"key": "/authors/OL797447A"}], "identifiers": {"goodreads": ["2994735"]}, "isbn_13": ["9780874136326"], "isbn_10": ["0874136326"], "publish_date": "June 2000", "works": [{"key": "/works/OL4195029W"}]}
/type/edition /books/OL25407707M 1 2012-08-08T08:36:18.306844 {"series": ["Then & now"], "lc_classifications": ["F459.E43 C375 2012"], "latest_revision": 1, "source_records": ["marc:marc_loc_updates/v40.i32.records.utf8:13804252:745"], "title": "Elizabethtown", "languages": [{"key": "/languages/eng"}], "subjects": ["Buildings, structures", "Pictorial works", "Historic buildings"], "publish_country": "scu", "by_statement": "Meranda L. Caswell", "type": {"key": "/type/edition"}, "revision": 1, "publishers": ["Arcadia Pub."], "full_title": "Elizabethtown", "last_modified": {"type": "/type/datetime", "value": "2012-08-08T08:36:18.306844"}, "key": "/books/OL25407707M", "authors": [{"key": "/authors/OL1397347A"}], "publish_places": ["Charleston, S.C"], "pagination": "x, 95 p. :", "created": {"type": "/type/datetime", "value": "2012-08-08T08:36:18.306844"}, "lccn": ["2012933881"], "number_of_pages": 95, "isbn_13": ["9780738591667"], "subject_places": ["Elizabethtown (Ky.)", "Elizabethtown", "Kentucky"], "isbn_10": ["0738591661"], "publish_date": "2012", "works": [{"key": "/works/OL16772737W"}]}
The author files are structured like this:
/type/author /authors/OL100223A 2 2008-09-08T16:20:28.105165 {"name": "Umu Hilmy", "personal_name": "Umu Hilmy", "last_modified": {"type": "/type/datetime", "value": "2008-09-08T16:20:28.105165"}, "key": "/authors/OL100223A", "type": {"key": "/type/author"}, "revision": 2}
/type/author /authors/OL6606949A 1 2009-05-14T08:13:43.294872 {"name": "Elizabeth Edwards", "created": {"type": "/type/datetime", "value": "2009-05-14T08:13:43.294872"}, "personal_name": "Elizabeth Edwards", "last_modified": {"type": "/type/datetime", "value": "2009-05-14T08:13:43.294872"}, "latest_revision": 1, "key": "/authors/OL6606949A", "birth_date": "1949", "type": {"key": "/type/author"}, "revision": 1}
/type/author /authors/OL1003081A 5 2012-06-06T22:11:38.525232 {"name": "William Pinder Eversley", "created": {"type": "/type/datetime", "value": "2008-04-01T03:28:50.625462"}, "death_date": "1918", "photos": [6897255, 6897254], "last_modified": {"type": "/type/datetime", "value": "2012-06-06T22:11:38.525232"}, "latest_revision": 5, "key": "/authors/OL1003081A", "birth_date": "1850", "personal_name": "William Pinder Eversley", "type": {"key": "/type/author"}, "revision": 5}
What I want to end up with is a tab-delimited file with only the following information:
OL reference title name isbn_10 isbn_13 subjects subject_places subject_people
For example:
/books/OL24712550M Saving graces Elizabeth Edwards 0786291672 9780786291670 "Cancer", "Family", "Legislators' spouses", "Philosophy", "Patients", "Large type books", "Lawyers' spouses", "Biography", "Protected DAISY" "United States", "North Carolina" "Elizabeth Edwards (1949-)", "John Edwards (1953 June 10-)"
(In some cases of course some of these fields will be empty.)
So all of the information I want is in the editions dump except for the 'name' field, which comes from the authors dump, looked up via the reference in the editions dump, e.g. /authors/OL6606949A.
So I was trying to use jq with the following query (testing with only a few columns):
.personal_name as $names | .authors | {title , name, author: $names[.key]}
But it does not even execute, as I am also having trouble finding the notation for the author key.
Since subjects and so on can have multiple values, how do you want them separated in the output so as not to be ambiguous?
jq '.personal_name as $names | .authors as $authors| {title, name, author: $names[.key]}'
is the fixed version of the jq command from your question, though it does not yet use $authors.
Anyways, if you clarify what you're after we can definitely do this!
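If jq becomes unwieldy at this scale, the same join can be done in one streaming pass per file: build a key-to-name map from the authors dump, then walk the editions dump and emit the TSV columns. A minimal Python sketch, assuming both dumps are tab-separated with the JSON record in the fifth column (as shown above); the file paths are placeholders, and it assumes the author lookup fits in RAM, which may or may not hold for the full dump:

```python
import csv
import json
import sys

def records(path):
    """Yield the JSON record (5th tab-separated column) from a dump file."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            yield json.loads(line.rstrip("\n").split("\t")[4])

def join(authors_path, editions_path, out=sys.stdout):
    # Pass 1: author key -> name lookup from the authors dump
    names = {a["key"]: a.get("name", "") for a in records(authors_path)}
    # Pass 2: stream the editions dump and emit one TSV row per edition
    w = csv.writer(out, delimiter="\t")
    for e in records(editions_path):
        author_keys = [a["key"] for a in e.get("authors", [])]
        w.writerow([
            e.get("key", ""),
            e.get("title", ""),
            "; ".join(names.get(k, "") for k in author_keys),
            "; ".join(e.get("isbn_10", [])),
            "; ".join(e.get("isbn_13", [])),
            "; ".join(e.get("subjects", [])),
            "; ".join(e.get("subject_places", [])),
            "; ".join(e.get("subject_people", [])),
        ])

if __name__ == "__main__":
    # placeholder file names
    join("ol_dump_authors.txt", "ol_dump_editions.txt")
```

Multi-valued fields are joined with "; " here to keep each row unambiguous; pick whatever separator cannot occur in the values themselves.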