how to load json and extract into separate nodes in neo4j - json

I'm a newbie in Neo4j and need help with my case.
I'm trying to load a JSON file with the structure below (updated as suggested) and extract it into 3 nodes (big5_personality, personality_result & personality).
{
"big5_personality":[
{
"user_name":"Satu' Roy Nababan",
"preference":"MRT Jakarta",
"user_name_link":"https://www.facebook.com/satu.nababan",
"post_text_id":"[ \"was there,Berangkat kerja gelap pulang gelap selama 49 hari berturut-turut sebelum cuti 14 hari.,.,.,Tempat pertama belajar health and safety sbg seorang pefesional. Salah satu perusahaan dgn safety management system terbaik yg pernah saya temui.? See More\", \"Old normal: dominasi big four (Peter Gade, Lee Chong Wei, Taufik Hidayat, dan Lin Dan) akhirnya benar2 berakhir. Semuanya sudah pensiun.,New normal: saatnya para rising star memperebutkan dominasi atau membuat formasi big four versi new normal. Mereka yang sangat potensial saat ini; Kento Momota, Chou Thien Chen, Viktor Axelsen, Anders Antonsen, Anthony Ginting, Jonathan Christie, Lee Zii Jia.,#LinDan,#SuperDan,#Legend,#TwoT? See More\", \"#MenjemputRezeki Seri-4.,Ini adalah shipment ke-4 #Jengkiers dari Tanjung Balai ke Jakarta. Mumpung masih fresh dan stock ready lengkap, yuk mainkan!,#AyoMakanIkan,#I? See More\", \"The best version of Sabine Lisicki\", \"Naik #MRTTetapAman\", \"Terima kasih #MRTTetapAman\", \"#Jengkiers is back!,Kita kedatangan bbrp varian baru loh, seperti gabus asin, kerang, jengki putih belah, dll. Sok atuh langsung cek gambar yak, lengkap dengan PL dan kontaknya,#AyoMakanIkan,#IkanAsliTanjungBalai\", \"#CeritaGuruDiAtasGaris,#KakLiss,#GuruMantul\", \"Nih satu geng cuma bertiga doang sih, di WAG chatnya ngegas terus, ketemuan ngomongnya juga ga kalah ngegasss, tapi tetep aja pen ketemuan mulu meski banyakan dramanya utk cari jadwal yg pas,Thank you chit chat dan traktirannya woiii, enak bgt itu po*k nya, pen pesan lagi ah kapan2,#GroupHolanH,#Nama? 
See More\", \"Coming up next, Aldila Sutjiadi / Priska Madelyn Nugroho vs Jessy Rompies / Rifanty Dwi Kahfiani #ayotenis\", \"Beberapa cara lain menikmati produk #Jengkiers, bisa jadi nasi goreng teri medan atau mie gomak udang manis (slide 2-3),*monmaap klo teri dan udangnya ga terlalu terlihat, krn emang dikit aja ditaronya, klo kebanyakan rugi pedagang percaya ajalah disitu ada udang dan teri pokoknya,#AyoMakanIkan,#Ika? See More\", \"Thank you, Abang John Evendy Hutabarat utk waktunya, utk berbagi cerita dan pelajaran setelah sekian tahun ga ketemu secara fisik.\", \"Next match double antara duo punggawa Fed Cup ( Priska Madelyn Nugroho / Janice Tjen) vs petenis berpengalaman, Beatrice Gumulya duet dgn juniornya, Rifanty Dwi Kahfiani. Selamat menyaksikan,#ayotenis,#indonesiantennis\", \"Women single final antara 2 terbaik dalam babak round robin pekan lalu, Aldila Sutjiadi vs Jessy Rompies,#ayotenis,#indonesiantennis\", \"Here we go,Paket2 yg siap meluncur ke alamat para costumer setia #Jengkiers,Yuk j? See More\", \"Barang baru datang guys, unit ready pengiriman besok sore,Ada yg baru nih, terasi spesial, asli, didatangkan langsung dari Tg Balai, buat kalian penggemar sambal terasi sudah tentu tak mau kelewatan kan?\", \"INOVASI\", \"Federer's shots\", \"RIP Ibu Josephine Mekel (Ibunda Vokalis Once Mekel, Alumni UI, FH'89). Turut berdukacita utk Bung Once dan keluarga besar Mekel-Cambey.,Tadi malam utk pertama kalinya menghadiri acara kebaktian penghiburan sejak masa pandemik, tentu dgn protok kesehatan yg sangat ketat, jumlah yg hadir dibatasi termasuk kita yang nyanyi cuma berlima saja.\", \"Bagi yg sudah lama ga mampir kawasan MRT Dukuh Atas, nih ada yg baru loh,Totem terpasang dibanyak titik, serasa di luar negeri bukan?,#MRTJakarta\" ]",
"post_text_eng":"[ \"was there, leaving for dark work went home dark for 49 consecutive days before taking 14 days off ..., the first place to learn health and safety as a professional. One of the companies with the best safety management system I have ever met.? See More \", \"Old normal: the dominance of the big four (Peter Gade, Lee Chong Wei, Taufik Hidayat, and Lin Dan) has finally come to an end. Everything is retired., New normal: it's time for the rising stars to fight for domination or make a new version of the Big Four formation. with very potential right now: Kento Momota, Chou Thien Chen, Viktor Axelsen, Anders Antonsen, Anthony Ginting, Jonathan Christie, Lee Zii Jia., # LinDan, # SuperDan, # Legend, # TwoT? See More \", \"#Pick up the 4th Series Rezeki., This is the 4th shipment #Jengkiers from Tanjung Balai to Jakarta. While it's still fresh and ready stock, let's play!, # Come on Eat, # I? See More\", \"The best version of Sabine Lisicki\", \"Ride #MRTKeep Safe\", \"Thank you #MRTKeep Safe\", \"#Jengkiers is back !, We have a number of new variants, such as salted cork, clams, white jengki split, etc. Sok directly check yak pictures, complete with PL and contacts, # Come on Eat, # Ikan AsliTanjungBalai\", \"# Stories of Teachers On Top of Line, # KakLiss, # Teachers Bounce\", \"Here, one gang is only the three of them, on WAG chat, it keeps on firing, meeting and talking isn't too bad, but still just finding it, even though most of the drama is to find the right schedule, Thank you chit chat and treats wow, how nice is that po * Please, order again sometime, # GroupHolanH, # Name? 
See More \", \"Coming up next, Aldila Sutjiadi / Priska Madelyn Nugroho vs Jessy Rompies / Rifanty Dwi Kahfiani #ayotenis\", \"Some other ways to enjoy #Jengkiers products, can be Medan teri fried rice or sweet shrimp gomak noodles (slides 2-3), * monmaap if the anchovies and the shrimp are not too visible, because it is only a little in the menu, if most of the traders lose trust there there are shrimp and anchovies, \\\"Let's Eat, # Ika? See More\", \"Thank you, Brother John Evendy Hutabarat for his time, to share stories and lessons after years of not meeting him physically.\" \"Next match doubles between Fed Cup retainer duo (Priska Madelyn Nugroho / Janice Tjen) vs. experienced tennis player, Beatrice Gumulya duet with her junior, Rifanty Dwi Kahfiani. Happy watching, # ayotenis, # indonesiantennis\", \"Women singles final between the 2 best in the round robin round last week, Aldila Sutjiadi vs Jessy Rompies, # ayotenis, # indonesiantennis\", \"Here we go, Paket2 are ready to slide to the address of loyal customers # Jengkiers, let's see?\" More, \"New goods are coming, guys, the unit is ready for delivery tomorrow afternoon. There are new ones, special shrimp paste, original, imported directly from Tg Balai, for you fans of terasi sauce, of course you don't want to go too far right?\", \"INNOVATION\", \"Federer's shots\", \"RIP Mrs. Josephine Mekel (Mother of Vocalist Once Mekel, UI Alumni, FH'89). Also sorrowing for Bung Once and the Mekel-Cambey extended family, last night for the first time attending consolation conventions since the pandemic, of course with health protection very strict, the number of attendees is limited including those of us who sing only five of them. \", \"For those who haven't stopped in the Dukuh Atas MRT area, there are new ones, Totem is installed at many points, feels like overseas, right? # MRTJakarta\" ]",
"personality_result":[
{
"user_name_link":"https://www.facebook.com/satu.nababan",
"word_count":472,
"word_count_message":"There were 472 words in the input. We need a minimum of 600, preferably 1,200 or more, to compute statistically significant estimates",
"processed_language":"en",
"personality":[
{
"trait_id":"big5_openness",
"name":"Openness",
"category":"personality",
"percentile":0.029368278774753065,
"raw_score":0.6883050000463327,
"significant":true,
"children":[
{
"trait_id":"facet_adventurousness",
"name":"Adventurousness",
"category":"personality",
"percentile":0.3272995424004471,
"raw_score":0.4889059610578305,
"significant":true
},
{
"trait_id":"facet_artistic_interests",
"name":"Artistic interests",
"category":"personality",
"percentile":0.48276246519083293,
"raw_score":0.6631367244448523,
"significant":true
},
{
"trait_id":"facet_emotionality",
"name":"Emotionality",
"category":"personality",
"percentile":0.4573453643547154,
"raw_score":0.6438277579254967,
"significant":true
},
{
"trait_id":"facet_imagination",
"name":"Imagination",
"category":"personality",
"percentile":0.5606034995849714,
"raw_score":0.7424334257188285,
"significant":true
},
{
"trait_id":"facet_intellect",
"name":"Intellect",
"category":"personality",
"percentile":0.7374704343214584,
"raw_score":0.6366655478430054,
"significant":true
},
{
"trait_id":"facet_liberalism",
"name":"Authority-challenging",
"category":"personality",
"percentile":0.7808736715557572,
"raw_score":0.5552707231598478,
"significant":true
}
]
},
{
"trait_id":"big5_conscientiousness",
"name":"Conscientiousness",
"category":"personality",
"percentile":0.22939241684474615,
"raw_score":0.5934971632418898,
"significant":true,
"children":[
{
"trait_id":"facet_achievement_striving",
"name":"Achievement striving",
"category":"personality",
"percentile":0.2677419988694361,
"raw_score":0.655591077028367,
"significant":true
},
{
"trait_id":"facet_cautiousness",
"name":"Cautiousness",
"category":"personality",
"percentile":0.40904830424778305,
"raw_score":0.47795518572548,
"significant":true
},
{
"trait_id":"facet_dutifulness",
"name":"Dutifulness",
"category":"personality",
"percentile":0.164224436809277,
"raw_score":0.6349680810761815,
"significant":true
},
{
"trait_id":"facet_orderliness",
"name":"Orderliness",
"category":"personality",
"percentile":0.867165384494327,
"raw_score":0.530616236542301,
"significant":true
},
{
"trait_id":"facet_self_discipline",
"name":"Self-discipline",
"category":"personality",
"percentile":0.2026779873552365,
"raw_score":0.5310156326644194,
"significant":true
},
{
"trait_id":"facet_self_efficacy",
"name":"Self-efficacy",
"category":"personality",
"percentile":0.3023937616129415,
"raw_score":0.7348991796444799,
"significant":true
}
]
},
{
"trait_id":"big5_extraversion",
"name":"Extraversion",
"category":"personality",
"percentile":0.2667979477554203,
"raw_score":0.5232267972734429,
"significant":true,
"children":[
{
"trait_id":"facet_activity_level",
"name":"Activity level",
"category":"personality",
"percentile":0.3490192324295949,
"raw_score":0.5205273056390818,
"significant":true
},
{
"trait_id":"facet_assertiveness",
"name":"Assertiveness",
"category":"personality",
"percentile":0.3371249743821161,
"raw_score":0.6230467403390507,
"significant":true
},
{
"trait_id":"facet_cheerfulness",
"name":"Cheerfulness",
"category":"personality",
"percentile":0.24258354512261554,
"raw_score":0.594713504568435,
"significant":true
},
{
"trait_id":"facet_excitement_seeking",
"name":"Excitement-seeking",
"category":"personality",
"percentile":0.46972100101797953,
"raw_score":0.6003831372285343,
"significant":true
},
{
"trait_id":"facet_friendliness",
"name":"Outgoing",
"category":"personality",
"percentile":0.29192693589475666,
"raw_score":0.5330152232542364,
"significant":true
},
{
"trait_id":"facet_gregariousness",
"name":"Gregariousness",
"category":"personality",
"percentile":0.34577689008301526,
"raw_score":0.4329464839207155,
"significant":true
}
]
},
{
"trait_id":"big5_agreeableness",
"name":"Agreeableness",
"category":"personality",
"percentile":0.2778846312783998,
"raw_score":0.7187775451521589,
"significant":true,
"children":[
{
"trait_id":"facet_altruism",
"name":"Altruism",
"category":"personality",
"percentile":0.3340915482705341,
"raw_score":0.6900524000049065,
"significant":true
},
{
"trait_id":"facet_cooperation",
"name":"Cooperation",
"category":"personality",
"percentile":0.445551905959055,
"raw_score":0.5711407367161474,
"significant":true
},
{
"trait_id":"facet_modesty",
"name":"Modesty",
"category":"personality",
"percentile":0.5418929802964033,
"raw_score":0.4539269679292031,
"significant":true
},
{
"trait_id":"facet_morality",
"name":"Uncompromising",
"category":"personality",
"percentile":0.3327649613483089,
"raw_score":0.6054136547271408,
"significant":true
},
{
"trait_id":"facet_sympathy",
"name":"Sympathy",
"category":"personality",
"percentile":0.5776699806826077,
"raw_score":0.6709599083365048,
"significant":true
},
{
"trait_id":"facet_trust",
"name":"Trust",
"category":"personality",
"percentile":0.6506691562935983,
"raw_score":0.6017503767590401,
"significant":true
}
]
},
{
"trait_id":"big5_neuroticism",
"name":"Emotional range",
"category":"personality",
"percentile":0.012225596986201182,
"raw_score":0.3709704629886742,
"significant":true,
"children":[
{
"trait_id":"facet_anger",
"name":"Fiery",
"category":"personality",
"percentile":0.5581412468086754,
"raw_score":0.5437137741013285,
"significant":true
},
{
"trait_id":"facet_anxiety",
"name":"Prone to worry",
"category":"personality",
"percentile":0.7355932800664517,
"raw_score":0.6370636497177248,
"significant":true
},
{
"trait_id":"facet_depression",
"name":"Melancholy",
"category":"personality",
"percentile":0.8073480016353904,
"raw_score":0.5034267686780826,
"significant":true
},
{
"trait_id":"facet_immoderation",
"name":"Immoderation",
"category":"personality",
"percentile":0.24332416646800148,
"raw_score":0.47385528964341017,
"significant":true
},
{
"trait_id":"facet_self_consciousness",
"name":"Self-consciousness",
"category":"personality",
"percentile":0.7754603650051617,
"raw_score":0.5865448670864387,
"significant":true
},
{
"trait_id":"facet_vulnerability",
"name":"Susceptible to stress",
"category":"personality",
"percentile":0.7069366679699797,
"raw_score":0.5023098966625577,
"significant":true
}
]
}
],
"needs":[
{
"trait_id":"need_challenge",
"name":"Challenge",
"category":"needs",
"percentile":0.3111815824704851,
"raw_score":0.7096962589081414,
"significant":true
},
{
"trait_id":"need_closeness",
"name":"Closeness",
"category":"needs",
"percentile":0.35074889692132116,
"raw_score":0.776347669910458,
"significant":true
},
{
"trait_id":"need_curiosity",
"name":"Curiosity",
"category":"needs",
"percentile":0.31319024070209367,
"raw_score":0.8038374100057984,
"significant":true
},
{
"trait_id":"need_excitement",
"name":"Excitement",
"category":"needs",
"percentile":0.381914846033436,
"raw_score":0.6613380266802147,
"significant":true
},
{
"trait_id":"need_harmony",
"name":"Harmony",
"category":"needs",
"percentile":0.31267505503919857,
"raw_score":0.7923972251247591,
"significant":true
},
{
"trait_id":"need_ideal",
"name":"Ideal",
"category":"needs",
"percentile":0.3275871372890826,
"raw_score":0.6698318741171541,
"significant":true
},
{
"trait_id":"need_liberty",
"name":"Liberty",
"category":"needs",
"percentile":0.32239839981885254,
"raw_score":0.7192415822205642,
"significant":true
},
{
"trait_id":"need_love",
"name":"Love",
"category":"needs",
"percentile":0.3964120015403447,
"raw_score":0.7558832961971879,
"significant":true
},
{
"trait_id":"need_practicality",
"name":"Practicality",
"category":"needs",
"percentile":0.9649293870023881,
"raw_score":0.7669009397738932,
"significant":true
},
{
"trait_id":"need_self_expression",
"name":"Self-expression",
"category":"needs",
"percentile":0.6353593836964153,
"raw_score":0.6869779372404304,
"significant":true
},
{
"trait_id":"need_stability",
"name":"Stability",
"category":"needs",
"percentile":0.24020391881699688,
"raw_score":0.711976290266912,
"significant":true
},
{
"trait_id":"need_structure",
"name":"Structure",
"category":"needs",
"percentile":0.5035013183383961,
"raw_score":0.6963163749792464,
"significant":true
}
],
"warnings":[
{
"warning_id":"WORD_COUNT_MESSAGE",
"message":"There were 472 words in the input. We need a minimum of 600, preferably 1,200 or more, to compute statist"
}
]
}
],
"gender":"Male",
"marital_status":"Single",
"user_likes":"Jengkiers\r\nPriska Madelyn Nugroho\r\nMrs Laos World\r\nWimbledon\r\nKakliss MCI\r\nILUNI K3 FKM UI\r\nBadminton Vietnam\r\nBMT MEDIA\r\nTennis Indonesia\r\nRina Silvia Aritonang\r\nDina Maria Simamora\r\nIndustrial Hygiene\r\nChristopher Rungkat\r\nEffendi Hutahaean\r\nOya Yolanda Haam\r\nDigdyo Fakirul Gareng Crew\r\nShanty Sihombing\r\nPaulus Nalsali Herianto Tamba\r\nIndonesia Feminis\r\nIsna Muharyunis\r\nJoko Sumartono\r\nMRT Jakarta\r\nLagu Rohani Terbaru\r\nAldila Sutjiadi\r\nNurhalima Purba\r\nToro Prima Jaya\r\nHanif Optik Citra\r\nDarus Harjuniadi\r\nAtika Sumco\r\nMuhamad Yusuf Lamba\r\nMadinisafety Const\r\nRiri Gpp\r\nBethanie Mattek-Sands\r\nPenyet Everest\r\nSteve Harvey\r\nChoir \"Alumni Kristiani UI\"\r\nMartina Hingis\r\nGenie Bouchard\r\nBelinda Bencic\r\nLaLiga\r\nDr. Suryo Bawono, Sp.OG\r\nJoyful Choir Cibubur\r\nIMPORTIR.ORG\r\nIrwan Wahyudiono\r\nArdantya Syahreza\r\nBubuk Silky Puding\r\nEnglish For All Indonesian\r\nIIEF EduFair\r\nTAR Team BP Tangguh West Papua\r\nKang H Idea",
"location":"Depok",
"age_range":"31-35",
"education":"Bachelor"
}
]
}
I've tried using the command below; it successfully creates a node labeled big5_personality, but I'm stuck on the other 2.
WITH "///big5_personality.json" AS file
CALL apoc.load.json(file) YIELD value
UNWIND value.big5_personality AS item
MERGE (a:big5_personality {user_name_link: item.user_name_link})
ON CREATE SET
  a.user_name = item.user_name,
  a.preference = item.preferance,
  a.gender = item.gender,
  a.marital_status = item.marital_status,
  a.education = item.education,
  a.age_range = item.age_range,
  a.location = item.location,
  a.user_likes = item.user_likes,
  a.post_text_id = item.post_text_id,
  a.post_text_eng = item.post_text_eng,
  a.identity = item.identity
FOREACH (personality_result IN item.personality_result |
  MERGE (b:personality_result {user_name_link: item.user_name_link})
  ON CREATE SET
    b.word_count = personality_result.word_count,
    b.word_count_message = personality_result.word_count_message,
    b.processed_language = personality_result.processed_language
)
MERGE (a)-[r1:rel_personality_result]->(b)
FOREACH (personality IN item.personality_result.personality |
  MERGE (c:personality {user_name_link: b.user_name_link})
  ON CREATE SET
    c.trait_id = personality.trait_id
  MERGE (b)<-[:rel_personality]-(c)
)
Please help.

You have multiple issues with your data file. Among them are:
Your Cypher code expects personality_result to be a list of JSON objects. It is not.
(a) It is a single string, not a list.
(b) That string seems to consist of the truncated start of a stringified JSON object (that includes a lot of extra pretty-printing whitespace).
So, everything in your Cypher query starting at the FOREACH will not work.
In your next-to-last MERGE, personality_result.personality should probably be just personality.
You may have other issues, but it is hard to tell until you fix your data file and code.
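One quick way to confirm this diagnosis before touching Cypher is to load the file in plain Python and check the type of personality_result. This is an illustrative snippet with a made-up inline value, not part of APOC:

```python
import json

# Illustrative check: personality_result should be a list of objects, but a
# stringified (and possibly truncated) JSON blob loads as a plain string.
item = {"personality_result": "[ {\"word_count\": 472} ]"}  # string, not a list

value = item["personality_result"]
if isinstance(value, str):
    # Re-parse; this raises json.JSONDecodeError if the string is truncated.
    value = json.loads(value)

print(type(value).__name__)
```

If json.loads fails here, the file itself needs to be fixed before apoc.load.json can expose personality_result as a list.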

I found a solution for my problem. Maybe it's a dirty way and there's a better solution for my case; the updated code is below:
WITH "///big5_personality.json" AS file
CALL apoc.load.json(file) YIELD value
UNWIND value.big5_personality AS item
UNWIND item.personality_result AS itema
UNWIND itema.personality_detail AS itemb
UNWIND itemb.children AS itemc
MERGE (a:big5_personality {user_name_link: item.user_name_link})
ON CREATE SET
  a.user_name = item.user_name,
  a.preference = item.preference,
  a.gender = item.gender,
  a.marital_status = item.marital_status,
  a.education = item.education,
  a.age_range = item.age_range,
  a.location = item.location,
  a.user_likes = item.user_likes,
  a.post_text_id = item.post_text_id,
  a.post_text_eng = item.post_text_eng,
  a.identity = item.identity
FOREACH (personality_result IN itema |
  MERGE (b:personality_result {user_name_link: item.user_name_link})
  ON CREATE SET
    b.word_count = personality_result.word_count,
    b.word_count_message = personality_result.word_count_message,
    b.processed_language = personality_result.processed_language
)
MERGE (a)-[r1:rel_big5_personality_result {user_name_link: a.user_name_link, word_count: itema.word_count, word_count_message: itema.word_count_message, processed_language: itema.processed_language}]->(b)
FOREACH (trait IN itemb |
  MERGE (c:personality {user_name_link: item.user_name_link, trait_id: trait.trait_id, trait_name: trait.name, trait_category: trait.category, trait_percentile: trait.percentile, trait_significant: trait.significant})
  ON CREATE SET
    c.trait_raw_score = trait.raw_score
  MERGE (b)-[:rel_personality_result {user_name_link: itema.user_name_link, trait_id: itemb.trait_id, trait_name: itemb.name, trait_category: itemb.category, trait_percentile: itemb.percentile, trait_significant: itemb.significant}]->(c)
)
FOREACH (facet IN itemc |
  MERGE (d:personality_children {user_name_link: itema.user_name_link, personality_trait_id: itemb.trait_id})
  ON CREATE SET
    d.facet_trait_id = facet.trait_id,
    d.facet_name = facet.name,
    d.facet_category = facet.category,
    d.facet_percentile = facet.percentile,
    d.facet_significant = facet.significant
  MERGE (c)-[:rel_personality_children {user_name_link: itema.user_name_link, personality_trait_id: itemb.trait_id, facet_trait_id: itemc.trait_id, facet_name: itemc.name, facet_category: itemc.category, facet_percentile: itemc.percentile, facet_significant: itemc.significant}]->(d)
)
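To see what the chained UNWINDs are doing before running them against Neo4j, here is a rough Python equivalent. It is only a sketch over a trimmed-down copy of the JSON shown above (using the personality key as it appears in the question's data), flattening the nesting into one row per facet:

```python
# Sketch of what the chained UNWINDs do: flatten
# big5_personality -> personality_result -> personality -> children
# into one (user, trait, facet) row per facet.
doc = {
    "big5_personality": [{
        "user_name_link": "https://www.facebook.com/satu.nababan",
        "personality_result": [{
            "word_count": 472,
            "personality": [{
                "trait_id": "big5_openness",
                "children": [{"trait_id": "facet_adventurousness"},
                             {"trait_id": "facet_intellect"}],
            }],
        }],
    }]
}

rows = [
    (item["user_name_link"], trait["trait_id"], facet["trait_id"])
    for item in doc["big5_personality"]
    for result in item["personality_result"]
    for trait in result["personality"]
    for facet in trait["children"]
]
for row in rows:
    print(row)
```

Each row corresponds to one pass through the Cypher query body, which is why the MERGEs above need stable keys to avoid duplicating nodes.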

Related

Concatenate values from non-adjacent objects based on multiple matching criteria

I received help on a related question previously on this forum and am wondering if there is a similarly straightforward way to resolve a more complex issue.
Given the following snippet, is there a way to merge the partial sentence (the one which does not end with a "[punctuation mark][whitespace]" pattern) with its remainder, based on the matching TextSize? When I tried to adapt the answer from the related question I quickly ran into issues. I am basically looking to translate a rule such as: if .Text does not end with "[punctuation mark][whitespace]", then concatenate it with the next .Text whose TextSize matches.
{
"Text": "Was it political will that established social democratic policies in the 1930s and ",
"Path": "P",
"TextSize": 9
},
{
"Text": "31 Lawrence Mishel and Jessica Schieder, Economic Policy Institute website, May 24, 2016 at (https://www.epi.org/publication/as-union-membership-has-fallen-the-top-10-percent-have-been-getting-a-larger-share-of-income/). ",
"Path": "Footnote",
"TextSize": 8
},
{
"Text": "Fig. 9.2 Higher union membership has been associated with a higher share of income to lower income brackets (the lower 90%) and a lower share of income to the top 10% of earners. ",
"Path": "P",
"TextSize": 8
},
{
"Text": "1940s, or that undermined them after the 1970s? Or was it abundant and cheap energy resources that enabled social democratic policies to work until the 1970s, and energy constraints that forced a restructuring of policy after the 1970s? ",
"Path": "P",
"TextSize": 9
},
{
"Text": "Recall that my economic modeling discussed in Chap. 6 shows that, even with no change in the assumption related to labor \u201cbargaining power,\u201d you can explain a shift from increasing to declining income equality (higher equality expressed as a higher wage share) by a corresponding shift from a period of rapidly increasing per capita resource consumption to one of constant per capita resource consumption. ",
"Path": "P",
"TextSize": 9
}
The result I'm looking for would be as follows:
{
"Text": "Was it political will that established social democratic policies in the 1930s and 1940s, or that undermined them after the 1970s? Or was it abundant and cheap energy resources that enabled social democratic policies to work until the 1970s, and energy constraints that forced a restructuring of policy after the 1970s? ",
"Path": "P",
"TextSize": 9
},
{
"Text": "31 Lawrence Mishel and Jessica Schieder, Economic Policy Institute website, May 24, 2016 at (https://www.epi.org/publication/as-union-membership-has-fallen-the-top-10-percent-have-been-getting-a-larger-share-of-income/). ",
"Path": "Footnote",
"TextSize": 8
},
{
"Text": "Fig. 9.2 Higher union membership has been associated with a higher share of income to lower income brackets (the lower 90%) and a lower share of income to the top 10% of earners. ",
"Path": "P",
"TextSize": 8
},
{
"Text": "Recall that my economic modeling discussed in Chap. 6 shows that, even with no change in the assumption related to labor \u201cbargaining power,\u201d you can explain a shift from increasing to declining income equality (higher equality expressed as a higher wage share) by a corresponding shift from a period of rapidly increasing per capita resource consumption to one of constant per capita resource consumption. ",
"Path": "P",
"TextSize": 9
}
The following, which assumes the input is a valid JSON array, will merge every .Text with at most one successor, but it can easily be modified to merge multiple .Text values together, as shown in Part 2 below.
Part 1
# Input and output: an array of {Text, Path, TextSize} objects.
# Attempt to merge the .Text of the $i-th object with the .Text of a subsequent compatible object.
# If a merge is successful, the subsequent object is removed.
def attempt_to_merge_next($i):
  .[$i].TextSize as $class
  | first( (range($i+1; length) as $j | select(.[$j].TextSize == $class) | $j) // null) as $j
  | if $j then .[$i].Text += .[$j].Text | del(.[$j])
    else .
    end;

reduce range(0; length) as $i (.;
  if .[$i] == null then .
  elif .[$i].Text|test("[,.?:;]\\s*$")|not
  then attempt_to_merge_next($i)
  else .
  end)
Part 2
Using the above def:
def merge:
  def m($i):
    if $i >= length then .
    elif .[$i].Text|test("[,.?:;]\\s*$")|not
    then attempt_to_merge_next($i) as $x
         | if ($x|length) == length then m($i+1)
           else $x|m($i)
           end
    else m($i+1)
    end;
  m(0);
merge
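For comparison, the same merge rule can be sketched in Python. This is my own translation of the logic, not the answer's jq: join each .Text that does not end in a punctuation-plus-whitespace pattern with the next entry of the same TextSize.

```python
import re

def merge_fragments(items):
    """Merge each .Text not ending in punctuation(+whitespace) with the
    next entry of the same TextSize, removing the consumed entry."""
    items = [dict(it) for it in items]  # shallow copies; leave input intact
    i = 0
    while i < len(items):
        if not re.search(r"[,.?:;]\s*$", items[i]["Text"]):
            # Find the next entry with a matching TextSize and absorb it.
            for j in range(i + 1, len(items)):
                if items[j]["TextSize"] == items[i]["TextSize"]:
                    items[i]["Text"] += items.pop(j)["Text"]
                    break
            else:
                i += 1  # no partner found; move on
        else:
            i += 1
    return items

data = [
    {"Text": "Was it political will in the 1930s and ", "Path": "P", "TextSize": 9},
    {"Text": "A footnote. ", "Path": "Footnote", "TextSize": 8},
    {"Text": "1940s, or that undermined them after the 1970s? ", "Path": "P", "TextSize": 9},
]
for entry in merge_fragments(data):
    print(entry["Text"])
```

Because the index is not advanced after a successful merge, a fragment can absorb several successors, mirroring Part 2 rather than Part 1.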

SQL JSON_ARRAYAGG with JSON_OBJECT producing duplicate values

This SQL query gives me the same values repeatedly:
SELECT JSON_OBJECT(
  'id', basics.resume_id,
  'work', JSON_ARRAYAGG(JSON_OBJECT(
    'name', work.name, 'location', work.location,
    'description', work.description, 'position', work.position, 'url', work.url,
    'startDate', work.startDate, 'endDate', work.endDate, 'summary', work.summary,
    'highlights', work.highlights, 'keywords', work.keywords)),
  'awards', JSON_ARRAYAGG(JSON_OBJECT(
    'title', awards.title)))
FROM basics
LEFT JOIN work ON basics.resume_id = work.resume_id
LEFT JOIN awards ON basics.resume_id = awards.resume_id
WHERE basics.resume_id = 1
The value I get for it is:
{
"id":1,
"work":[
{
"url":"http://piedpiper.example.com",
"name":"Pied Piper3",
"endDate":"2014-12-01",
"summary":"Pied Piper is a multi-platform technology based on a proprietary universal compression algorithm that has consistently fielded high Weisman Scores™ that are not merely competitive, but approach the theoretical limit of lossless compression.",
"keywords":"Javascript, React",
"location":"Palo Alto, CA",
"position":"CEO/President",
"startDate":"2013-12-01",
"highlights":"Build an algorithm for artist to detect if their music was violating copy right infringement laws, Successfully won Techcrunch Disrupt, Optimized an algorithm that holds the current world record for Weisman Scores",
"description":"Awesome compression company"
},
{
"url":" bnvc ",
"name":"Pied Piper",
"endDate":"vb nsncd",
"summary":"bcbbv",
"keywords":"nbdcnbsvd",
"location":"nbdcnb",
"position":"m m",
"startDate":" vbn vb",
"highlights":"jhsfcf ",
"description":"mbvm"
}
],
"awards":[
{
"title":"Digital Compression Pioneer Award"
},
{
"title":"Digital Compression Pioneer Award"
}
]}
I have added two rows of data to the work table for resume_id = 1, but only one row of data to the awards table for resume_id = 1.
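A likely explanation is join fan-out: two matching work rows joined against one awards row produce two combined rows, so JSON_ARRAYAGG sees the single award twice. A minimal Python sketch of that behavior, with made-up data standing in for the tables:

```python
# Sketch: simulate the two LEFT JOINs as nested loops to show why
# JSON_ARRAYAGG sees the single award twice. Data is illustrative only.
work = [{"name": "Pied Piper3"}, {"name": "Pied Piper"}]
awards = [{"title": "Digital Compression Pioneer Award"}]

# Each work row pairs with each awards row: 2 x 1 = 2 joined rows.
joined = [(w, a) for w in work for a in awards]
aggregated_awards = [a["title"] for (_, a) in joined]

print(aggregated_awards)
```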

Altering JSON Structure

So I am using a webscraper to pull information on sneakers from a website. The JSON data that comes back is structured like so:
[
{
"web-scraper-order": "1554084909-97",
"web-scraper-start-url": "https://www.goat.com/sneakers",
"productlink": "$200AIR JORDAN 6 RETRO 'INFRARED' 2019",
"productlink-href": "https://www.goat.com/sneakers/air-jordan-6-retro-black-infrared-384664-060",
"name": "Air Jordan 6 Retro 'Infrared' 2019",
"price": "Buy New - $200",
"description": "The 2019 edition of the Air Jordan 6 Retro ‘Infrared’ is true to the original colorway, which Michael Jordan wore when he captured his first NBA title. Dressed primarily in black nubuck with a reflective 3M layer underneath, the mid-top features Infrared accents on the midsole, heel tab and lace lock. Nike Air branding adorns the heel and sockliner, an OG detail last seen on the 2000 retro.",
"releasedate": "2019-02-16",
"colorway": "Black/Infrared 23-Black",
"brand": "Air Jordan",
"designer": "Tinker Hatfield",
"technology": "Air",
"maincolor": "Black",
"silhouette": "Air Jordan 6",
"nickname": "Infrared",
"category": "lifestyle",
"image-src": "https://image.goat.com/crop/1250/attachments/product_template_additional_pictures/images/018/675/318/original/464372_01.jpg.jpeg"
},
{
"web-scraper-order": "1554084922-147",
"web-scraper-start-url": "https://www.goat.com/sneakers",
"productlink": "$190YEEZY BOOST 350 V2 'CREAM WHITE / TRIPLE WHITE'",
"productlink-href": "https://www.goat.com/sneakers/yeezy-boost-350-v2-cream-white-cp9366",
"name": "Yeezy Boost 350 V2 'Cream White / Triple White'",
"price": "Buy New - $220",
"description": "First released on April 29, 2017, the Yeezy Boost 350 V2 ‘Cream White’ combines a cream Primeknit upper with tonal cream SPLY 350 branding, and a translucent white midsole housing full-length Boost. Released again in October 2018, this retro helped fulfill Kanye West’s oft-repeated ‘YEEZYs for everyone’ Twitter mantra, as adidas organized the biggest drop in Yeezy history by promising pre-sale to anyone who signed up on the website. Similar to the first release, the ‘Triple White’ 2018 model features a Primeknit upper, a Boost midsole and custom adidas and Yeezy co-branding on the insole.",
"releasedate": "2017-04-29",
"colorway": "Cream White/Cream White/Core White",
"brand": "adidas",
"designer": "Kanye West",
"technology": "Boost",
"maincolor": "White",
"silhouette": "Yeezy Boost 350",
"nickname": "Cream White / Triple White",
"category": "lifestyle",
"image-src": "https://image.goat.com/crop/1250/attachments/product_template_additional_pictures/images/014/822/695/original/116662_03.jpg.jpeg"
},
However, I want to change it so that the top-level node is sneakers, the next level down is a specific sneaker brand (Jordan, Nike, Adidas), and then the list of sneakers that belong to that brand. So my JSON structure would look something like this:
{
  "Sneakers": {
    "Adidas": [ shoe1, shoe2, ... ],
    "Jordan": [ shoe1, shoe2, ... ]
  }
}
I am not sure what tool I could use to do that. Any help would be greatly appreciated. All I have at the moment is the JSON file and it is not in the structure that I want it to be in.
One way of doing this would be to populate a dict whose keys are brand names and their values are lists of sneaker records. Assuming that data is your original list, here's the code:
sneakers_by_brand = {}
for record in data:
    if sneakers_by_brand.get(record.get("brand")):
        sneakers_by_brand[record.get("brand")].append(record)
    else:
        sneakers_by_brand[record.get("brand")] = [record]
print(sneakers_by_brand)
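The same grouping can be written a little more compactly with collections.defaultdict. This is an alternative sketch with hypothetical inline records, since the original reads data from the scraper output:

```python
from collections import defaultdict

# Hypothetical records standing in for the scraped data.
data = [
    {"name": "Air Jordan 6 Retro 'Infrared' 2019", "brand": "Air Jordan"},
    {"name": "Yeezy Boost 350 V2 'Cream White / Triple White'", "brand": "adidas"},
]

# Group sneaker records by their "brand" key; unseen brands get a fresh list.
sneakers_by_brand = defaultdict(list)
for record in data:
    sneakers_by_brand[record.get("brand")].append(record)

print(dict(sneakers_by_brand))
```

defaultdict removes the explicit existence check, which also avoids the subtle bug where a falsy first value would retrigger the else branch.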

Why my angular.forEach loop isn't working

I am not sure what's going wrong, because it looks pretty straightforward to me. I have a few things in mind about appending the JSON response, but when I try to apply a forEach loop to my JSON, no data is shown in the alert.
Angular code
mainApp.controller('MultiCarouselController', function($scope, $http) {
    $scope.products = [];
    $http.get("/get_broad_category_products/?BroadCategory=BroadCategory3")
        .success(function (response) {
            $scope.products = response;
            angular.forEach(products, function(value, key) {
                alert(key + '----' + value); // can see this alerts
            });
        }).error(function() {
            console.log('Error happened ... ');
        });
});
My JSON response is some junk data of Products
[
{
"sku": "d1cd71a8-9dc5-4724-b269-473ede28a1d7",
"selectedQtyOptions": [],
"selectedSize": "",
"description": "foolish interpolates trumpet monographs ferried inboards Forster tike grammatically sunroof vaporizing Sweden demure retouching completely robbing readies unloose guiltless tatty unobservant cuffs fortieth wither rigorously paradoxically snowmobiling charts clenching planning dealing lesions bicameral pertly chaffinches grumpiness private purled insanely attainment proposal Fatima execrates pshaws chars actuators turboprop soughed kicking majors conquistadores Cynthia septuagenarians kneecaps titans attractions larvas invigorating trunking Shevat recluse Trina slenderness kinking falsified logistically hogged skyrocketing ordinal avoiding trademarked underfoot garter sacrificial pricey nosedive bachelors deiced heave dictatorial muffing prayed rewinding recopied limpidly Crichton conversion chitterlings signets Aiken Froissart turnoff snowshoe forded spiralled underwriters flourishes Sade splicer transfusions cesspools lifelike ruckus showering paean voguish Buck copings Russell watchdog magneto pored height zodiac motherland backings Venus obeys scooters nonintervention dinosaur unashamedly anathema hibernate consumerism portended worked mystically existentialist dissatisfies badgers unanimously triplicated Jenny sagacity Windex snoopier nonplusing shovelling Assam putty darn Sulawesi Italians gunnery codify develops rhinos upwards Louise welled experiences socks pinky mewed Camille claimants swirl squattest ware parenthetic bonitoes hydrangeas decolonizing omit skyjacks Gorky financiers province flywheel southeastward Bayeux updated yowl Tulsidas macintosh sprees pralines systolic uncommoner cilium tromping Asimov heinous cordoned combated camerawomen syndrome identified prizefights heavyweight vertically reflector integrity Hebrides sepulchral loner parrot smooths candidness",
"selectedQty": "1",
"title": "viragoes",
"brand": "Brand0",
"images": [
{
"image0": "/media/products/f791a316ced7b3b774bd61e138197224.jpg"
}
],
"sizeQtyPrice": [
{
"discountAttributes": "chinos theosophy misdemeanor irrigates school Pullman sombrely suspect vortex baddest",
"measureUnit": "ltr",
"discountPercent": 4,
"mrp": 3102,
"qty": 7,
"size": 66
},
{
"discountAttributes": "Molotov absurd traces pounces contracts clarions thighbone Hesse parricide constrains",
"measureUnit": "m",
"discountPercent": 16,
"mrp": 2773,
"qty": 7,
"size": 18
},
{
"discountAttributes": "detainment gunnysack vied expropriation unobtrusive collectables embracing poster hexing governess",
"measureUnit": "m",
"discountPercent": 6,
"mrp": 9920,
"qty": 6,
"size": 69
}
],
"id": 9
},
{
"sku": "838660bb-7ab9-4f2a-8be7-9602a5801756",
"selectedQtyOptions": [],
"selectedSize": "",
"description": "agreeing vizier bleariest trig appliquéing copulating commissariats Balzac lunchtimes glittery quacking Leoncavallo heehawing Tampax lizards pegged nanosecond centigrade subplots tumbrils give jawed skits nickel discontinues impinged evangelized Platonist waterlines dams symposiums intercessor cognition heavier softener dromedaries bravos immobilize consciously Clemons patch klutzier Kirkpatrick caddying designs Dulles twelfths undemocratic isolationists infected ma homering soliciting minibus pluralism fraternity catalyzed Scorpio pandemonium waxwing starter infuses rebuttals spirals rerunning interrogatories Manuel whomsoever tenderized conjoint baronesses callower parenthetic plusses extend cockier Fokker dewlap Cowper Swammerdam secs hock relaxations Judas Canadian presidency lo wildness Philippe picture beekeeper lull manuals transnational yaw chloroformed perennials distinctive Nottingham antiquaries underneath parted nervously basemen observatories scrubbed encoder egalitarians winnow caddish Hawaiians brownstones robbing exhaustible antagonist benefactresses Plasticine Peace platypi Guzman stippled shuts peacemakers butterfly Bolton grout McCain Lebanon bounce oleander Balkans endearments snowfall spoonerisms furnaces inequities billowy jutting guffaw beautifully penis newtons snuffboxes j Angelita tinkles literature depicts insouciant scribblers blinker disobediently devotees primordial sixties Kalamazoo shear contest classes cripple edging exactest cheat invocation thrived drunkenness Fuller architectures sprite Lillian constricts tucking chastisements walruses guzzlers rejoinder apprenticeships pillory spendthrift omens spoonful contortions precociously intensely motorway guts cahoot sculptor paralytics reminisce meltdown trusts lady pronghorn scurried Campbell micron flawing foals nigher",
"selectedQty": "1",
"title": "smokier",
"brand": "Brand2",
"images": [
{
"image0": "/media/products/f51a649e72694d23962ee77a97872f0e.jpg"
}
],
"sizeQtyPrice": [
{
"discountAttributes": "Beerbohm earldom Stanley seconding hypertension Sayers miserly epitome retires ditching",
"measureUnit": "m",
"discountPercent": 15,
"mrp": 5065,
"qty": 6,
"size": 83
},
{
"discountAttributes": "confine Newman bagel cornflower rears generator goaded midweeks drain cigarillo",
"measureUnit": "Kg",
"discountPercent": 12,
"mrp": 2284,
"qty": 9,
"size": 13
},
{
"discountAttributes": "eerier fizzes lessened rotisserie developer Gray industrial callused convergences ampoule",
"measureUnit": "gms",
"discountPercent": 4,
"mrp": 6816,
"qty": 8,
"size": 18
}
],
"id": 14
}
]
products isn't defined. It should be $scope.products.
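A minimal sketch of that fix (the controller name, endpoint, and the fake $scope/$http stand-ins below are hypothetical, added only so the snippet runs outside Angular): the template can only see properties attached to $scope, so the response data must be assigned to $scope.products rather than to a bare products variable.

```javascript
// Hypothetical controller: assign the response to $scope so ng-repeat can see it.
function ProductsCtrl($scope, $http) {
  $http.get('/api/products').then(function (response) {
    $scope.products = response.data; // correct: visible to the template
    // products = response.data;    // wrong: implicit global, invisible to the view
  });
}

// Stand-ins for Angular's $scope and $http so the sketch is runnable anywhere:
var scope = {};
var fakeHttp = {
  get: function () {
    return {
      then: function (cb) {
        cb({ data: [{ id: 9, title: 'viragoes' }] });
      }
    };
  }
};

ProductsCtrl(scope, fakeHttp);
console.log(scope.products[0].id); // prints 9
```

With the real $http the mechanics are identical; only the assignment target matters.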

Iterating through a JSON string in Angularjs

I am hosting a web server which exposes REST APIs. The following is the JSON response that I get from the server.
[
{
"score": 4,
"sense": "be the winner in a contest or competition; be victorious; \"He won the Gold Medal in skating\"; \"Our home team won\"; \"Win the game\""
},
{
"score": 2,
"sense": "win something through one's efforts; \"I acquired a passing knowledge of Chinese\"; \"Gain an understanding of international finance\""
},
{
"score": 0,
"sense": "obtain advantages, such as points, etc.; \"The home team was gaining ground\"; \"After defeating the Knicks, the Blazers pulled ahead of the Lakers in the battle for the number-one playoff berth in the Western Conference\""
},
{
"score": 4,
"sense": "attain success or reach a desired goal; \"The enterprise succeeded\"; \"We succeeded in getting tickets to the show\"; \"she struggled to overcome her handicap and won\""
}
]
I want to display this in a list. I am using Material Design, in the following manner:
<md-list data-ng-repeat="item in sensesscores track by $index">
<md-item-content>
<div class="md-tile-content">
{{item.sense}}
</div>
<div class="md-tile-left">
{{item.score}}
</div>
</md-item-content>
</md-list>
In my controller, I have the following:
$http.get('http://localhost:8080/nlp-wsd-demo/wsd/disambiguate').
success(function(data) {
$scope.sensesscores = data;
console.log(data);
});
I made sure that I am able to get the data in 'sensesscores' and also printed it on the screen. However, I am not able to parse it and display it in the list. Thanks in advance.
EDIT
I changed the code to correct the syntax and moved the ng-repeat up the hierarchy, but it still doesn't work. However, I tried it with a different JSON file, which does work:
[{
"sense": "sensaS,NF,ASNGD.,AD., BVAS.,GMDN,FG e1",
"score" : 5
},
{
"sense": "sen ASG SFG S H D GD FJDF JDF J GFJ FDFGse2",
"score" : 13
}
,
{
"sense": "sen ASG SFG S H D GD FJDF JDF J GFJ FDFGse2",
"score" : 1
},
{
"sense": "sen ASG SFG S H D GD FJDF JDF J GFJ FDFGse2",
"score" : 0
},
{
"sense": "sen ASG SFG S H D GD FJDF JDF J GFJ FDFGse2",
"score" : 3
},
{
"sense": "sen ASG SFG S H D GD FJDF JDF J GFJ FDFGse2",
"score" : 2
},
{
"sense": "sen ASG SFG S H D GD FJDF JDF J GFJ FDFGse2",
"score" : 1
}
]
I don't understand what's wrong with the JSON response.
Your HTML has several typos; the version below is formatted correctly:
<md-list>
<md-item data-ng-repeat="item in sensesscores track by $index">
<md-item-content>
<div class="md-tile-content">
{{item.sense}}
</div>
<div class="md-tile-left">
{{item.score}}
</div>
</md-item-content> // this tag was not closed; a new one was opened instead
</md-item>
</md-list> // this tag was missing its closing ">"
Change it, and your data will show as you expect.
I figured out the solution. The response content needed to be converted to a JSON object, which I did using jQuery's $.parseJSON(data).
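For reference, a plain-JavaScript sketch of that parsing step (the sample string below is hypothetical; JSON.parse is the standard equivalent of jQuery's $.parseJSON): when the server sends the body with a non-JSON Content-Type, $http can hand the success callback a raw string instead of an array, and parsing it explicitly restores the objects that ng-repeat expects.

```javascript
// Simulate a response body that arrived as a string rather than a parsed array.
var raw = '[{"sense": "be the winner in a contest", "score": 4}]';

// Parse only when needed; an already-parsed response is used as-is.
var sensesscores = (typeof raw === 'string') ? JSON.parse(raw) : raw;

console.log(sensesscores[0].score); // prints 4
```

Fixing the server to send Content-Type: application/json would let $http deserialize the body automatically, making the manual parse unnecessary.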