Iterating through a JSON string in AngularJS

I am hosting a web server which exposes REST APIs. Following is the JSON response that I get from the server:
[
{
"score": 4,
"sense": "be the winner in a contest or competition; be victorious; \"He won the Gold Medal in skating\"; \"Our home team won\"; \"Win the game\""
},
{
"score": 2,
"sense": "win something through one's efforts; \"I acquired a passing knowledge of Chinese\"; \"Gain an understanding of international finance\""
},
{
"score": 0,
"sense": "obtain advantages, such as points, etc.; \"The home team was gaining ground\"; \"After defeating the Knicks, the Blazers pulled ahead of the Lakers in the battle for the number-one playoff berth in the Western Conference\""
},
{
"score": 4,
"sense": "attain success or reach a desired goal; \"The enterprise succeeded\"; \"We succeeded in getting tickets to the show\"; \"she struggled to overcome her handicap and won\""
}
]
I want to display this in a list. I am using Angular Material, in the following manner:
<md-list data-ng-repeat="item in sensesscores track by $index">
  <md-item-content>
    <div class="md-tile-content">
      {{item.sense}}
    </div>
    <div class="md-tile-left">
      {{item.score}}
    </div>
  </md-item-content>
</md-list>
In my controller, I have the following:
$http.get('http://localhost:8080/nlp-wsd-demo/wsd/disambiguate')
  .success(function(data) {
    $scope.sensesscores = data;
    console.log(data);
  });
I made sure that I am able to get the data in 'sensesscores' and even printed it on the screen. However, I am not able to parse it and display it in the list. Thanks in advance.
EDIT
I changed the code to correct the syntax, moving the ng-repeat up the hierarchy, but it still doesn't work. However, I tried it on a different JSON file, which works:
[{
"sense": "sensaS,NF,ASNGD.,AD., BVAS.,GMDN,FG e1",
"score" : 5
},
{
"sense": "sen ASG SFG S H D GD FJDF JDF J GFJ FDFGse2",
"score" : 13
}
,
{
"sense": "sen ASG SFG S H D GD FJDF JDF J GFJ FDFGse2",
"score" : 1
},
{
"sense": "sen ASG SFG S H D GD FJDF JDF J GFJ FDFGse2",
"score" : 0
},
{
"sense": "sen ASG SFG S H D GD FJDF JDF J GFJ FDFGse2",
"score" : 3
},
{
"sense": "sen ASG SFG S H D GD FJDF JDF J GFJ FDFGse2",
"score" : 2
},
{
"sense": "sen ASG SFG S H D GD FJDF JDF J GFJ FDFGse2",
"score" : 1
}
]
I don't understand what's wrong with the JSON response.

Your HTML has several typos; the version below is formatted correctly:
<md-list>
  <md-item data-ng-repeat="item in sensesscores track by $index">
    <md-item-content>
      <div class="md-tile-content">
        {{item.sense}}
      </div>
      <div class="md-tile-left">
        {{item.score}}
      </div>
    </md-item-content> <!-- this tag was not closed, but rather opened again -->
  </md-item>
</md-list> <!-- this tag was not closed properly, missing ">" -->
Change it, and your data will show as you expect.

I figured out the solution. The content needed to be converted to a JSON object, which I did using jQuery's $.parseJSON(data).
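The fix works because the response body arrived as a plain string, while ng-repeat needs an actual array. The distinction is language-independent; here is a minimal Python sketch of the same idea, using only the standard json module (the sample string is a shortened, hypothetical version of the response above):

```python
import json

# Raw response body: still a string, not a list -- looping over it yields characters.
raw = '[{"sense": "win the contest", "score": 4}, {"sense": "gain points", "score": 2}]'

# Parsing turns it into a native list of dicts that a template loop can repeat over.
senses = json.loads(raw)

print(type(raw).__name__)     # str
print(type(senses).__name__)  # list
print(senses[0]["score"])     # 4
```

The same applies in the browser: if the server does not send a Content-Type of application/json, the framework may hand you the body as text, and it must be parsed before iterating.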

Related

How to load JSON and extract into separate nodes in Neo4j

I'm a newbie in Neo4j and need help with my case.
I'm trying to load a JSON file with the structure below (updated as suggested) and extract it into 3 nodes (big5_personality, personality_result & personality):
{
"big5_personality":[
{
"user_name":"Satu' Roy Nababan",
"preference":"MRT Jakarta",
"user_name_link":"https://www.facebook.com/satu.nababan",
"post_text_id":"[ \"was there,Berangkat kerja gelap pulang gelap selama 49 hari berturut-turut sebelum cuti 14 hari.,.,.,Tempat pertama belajar health and safety sbg seorang pefesional. Salah satu perusahaan dgn safety management system terbaik yg pernah saya temui.? See More\", \"Old normal: dominasi big four (Peter Gade, Lee Chong Wei, Taufik Hidayat, dan Lin Dan) akhirnya benar2 berakhir. Semuanya sudah pensiun.,New normal: saatnya para rising star memperebutkan dominasi atau membuat formasi big four versi new normal. Mereka yang sangat potensial saat ini; Kento Momota, Chou Thien Chen, Viktor Axelsen, Anders Antonsen, Anthony Ginting, Jonathan Christie, Lee Zii Jia.,#LinDan,#SuperDan,#Legend,#TwoT? See More\", \"#MenjemputRezeki Seri-4.,Ini adalah shipment ke-4 #Jengkiers dari Tanjung Balai ke Jakarta. Mumpung masih fresh dan stock ready lengkap, yuk mainkan!,#AyoMakanIkan,#I? See More\", \"The best version of Sabine Lisicki\", \"Naik #MRTTetapAman\", \"Terima kasih #MRTTetapAman\", \"#Jengkiers is back!,Kita kedatangan bbrp varian baru loh, seperti gabus asin, kerang, jengki putih belah, dll. Sok atuh langsung cek gambar yak, lengkap dengan PL dan kontaknya,#AyoMakanIkan,#IkanAsliTanjungBalai\", \"#CeritaGuruDiAtasGaris,#KakLiss,#GuruMantul\", \"Nih satu geng cuma bertiga doang sih, di WAG chatnya ngegas terus, ketemuan ngomongnya juga ga kalah ngegasss, tapi tetep aja pen ketemuan mulu meski banyakan dramanya utk cari jadwal yg pas,Thank you chit chat dan traktirannya woiii, enak bgt itu po*k nya, pen pesan lagi ah kapan2,#GroupHolanH,#Nama? 
See More\", \"Coming up next, Aldila Sutjiadi / Priska Madelyn Nugroho vs Jessy Rompies / Rifanty Dwi Kahfiani #ayotenis\", \"Beberapa cara lain menikmati produk #Jengkiers, bisa jadi nasi goreng teri medan atau mie gomak udang manis (slide 2-3),*monmaap klo teri dan udangnya ga terlalu terlihat, krn emang dikit aja ditaronya, klo kebanyakan rugi pedagang percaya ajalah disitu ada udang dan teri pokoknya,#AyoMakanIkan,#Ika? See More\", \"Thank you, Abang John Evendy Hutabarat utk waktunya, utk berbagi cerita dan pelajaran setelah sekian tahun ga ketemu secara fisik.\", \"Next match double antara duo punggawa Fed Cup ( Priska Madelyn Nugroho / Janice Tjen) vs petenis berpengalaman, Beatrice Gumulya duet dgn juniornya, Rifanty Dwi Kahfiani. Selamat menyaksikan,#ayotenis,#indonesiantennis\", \"Women single final antara 2 terbaik dalam babak round robin pekan lalu, Aldila Sutjiadi vs Jessy Rompies,#ayotenis,#indonesiantennis\", \"Here we go,Paket2 yg siap meluncur ke alamat para costumer setia #Jengkiers,Yuk j? See More\", \"Barang baru datang guys, unit ready pengiriman besok sore,Ada yg baru nih, terasi spesial, asli, didatangkan langsung dari Tg Balai, buat kalian penggemar sambal terasi sudah tentu tak mau kelewatan kan?\", \"INOVASI\", \"Federer's shots\", \"RIP Ibu Josephine Mekel (Ibunda Vokalis Once Mekel, Alumni UI, FH'89). Turut berdukacita utk Bung Once dan keluarga besar Mekel-Cambey.,Tadi malam utk pertama kalinya menghadiri acara kebaktian penghiburan sejak masa pandemik, tentu dgn protok kesehatan yg sangat ketat, jumlah yg hadir dibatasi termasuk kita yang nyanyi cuma berlima saja.\", \"Bagi yg sudah lama ga mampir kawasan MRT Dukuh Atas, nih ada yg baru loh,Totem terpasang dibanyak titik, serasa di luar negeri bukan?,#MRTJakarta\" ]",
"post_text_eng":"[ \"was there, leaving for dark work went home dark for 49 consecutive days before taking 14 days off ..., the first place to learn health and safety as a professional. One of the companies with the best safety management system I have ever met.? See More \", \"Old normal: the dominance of the big four (Peter Gade, Lee Chong Wei, Taufik Hidayat, and Lin Dan) has finally come to an end. Everything is retired., New normal: it's time for the rising stars to fight for domination or make a new version of the Big Four formation. with very potential right now: Kento Momota, Chou Thien Chen, Viktor Axelsen, Anders Antonsen, Anthony Ginting, Jonathan Christie, Lee Zii Jia., # LinDan, # SuperDan, # Legend, # TwoT? See More \", \"#Pick up the 4th Series Rezeki., This is the 4th shipment #Jengkiers from Tanjung Balai to Jakarta. While it's still fresh and ready stock, let's play!, # Come on Eat, # I? See More\", \"The best version of Sabine Lisicki\", \"Ride #MRTKeep Safe\", \"Thank you #MRTKeep Safe\", \"#Jengkiers is back !, We have a number of new variants, such as salted cork, clams, white jengki split, etc. Sok directly check yak pictures, complete with PL and contacts, # Come on Eat, # Ikan AsliTanjungBalai\", \"# Stories of Teachers On Top of Line, # KakLiss, # Teachers Bounce\", \"Here, one gang is only the three of them, on WAG chat, it keeps on firing, meeting and talking isn't too bad, but still just finding it, even though most of the drama is to find the right schedule, Thank you chit chat and treats wow, how nice is that po * Please, order again sometime, # GroupHolanH, # Name? 
See More \", \"Coming up next, Aldila Sutjiadi / Priska Madelyn Nugroho vs Jessy Rompies / Rifanty Dwi Kahfiani #ayotenis\", \"Some other ways to enjoy #Jengkiers products, can be Medan teri fried rice or sweet shrimp gomak noodles (slides 2-3), * monmaap if the anchovies and the shrimp are not too visible, because it is only a little in the menu, if most of the traders lose trust there there are shrimp and anchovies, \\\"Let's Eat, # Ika? See More\", \"Thank you, Brother John Evendy Hutabarat for his time, to share stories and lessons after years of not meeting him physically.\" \"Next match doubles between Fed Cup retainer duo (Priska Madelyn Nugroho / Janice Tjen) vs. experienced tennis player, Beatrice Gumulya duet with her junior, Rifanty Dwi Kahfiani. Happy watching, # ayotenis, # indonesiantennis\", \"Women singles final between the 2 best in the round robin round last week, Aldila Sutjiadi vs Jessy Rompies, # ayotenis, # indonesiantennis\", \"Here we go, Paket2 are ready to slide to the address of loyal customers # Jengkiers, let's see?\" More, \"New goods are coming, guys, the unit is ready for delivery tomorrow afternoon. There are new ones, special shrimp paste, original, imported directly from Tg Balai, for you fans of terasi sauce, of course you don't want to go too far right?\", \"INNOVATION\", \"Federer's shots\", \"RIP Mrs. Josephine Mekel (Mother of Vocalist Once Mekel, UI Alumni, FH'89). Also sorrowing for Bung Once and the Mekel-Cambey extended family, last night for the first time attending consolation conventions since the pandemic, of course with health protection very strict, the number of attendees is limited including those of us who sing only five of them. \", \"For those who haven't stopped in the Dukuh Atas MRT area, there are new ones, Totem is installed at many points, feels like overseas, right? # MRTJakarta\" ]",
"personality_result":[
{
"user_name_link":"https://www.facebook.com/satu.nababan",
"word_count":472,
"word_count_message":"There were 472 words in the input. We need a minimum of 600, preferably 1,200 or more, to compute statistically significant estimates",
"processed_language":"en",
"personality":[
{
"trait_id":"big5_openness",
"name":"Openness",
"category":"personality",
"percentile":0.029368278774753065,
"raw_score":0.6883050000463327,
"significant":true,
"children":[
{
"trait_id":"facet_adventurousness",
"name":"Adventurousness",
"category":"personality",
"percentile":0.3272995424004471,
"raw_score":0.4889059610578305,
"significant":true
},
{
"trait_id":"facet_artistic_interests",
"name":"Artistic interests",
"category":"personality",
"percentile":0.48276246519083293,
"raw_score":0.6631367244448523,
"significant":true
},
{
"trait_id":"facet_emotionality",
"name":"Emotionality",
"category":"personality",
"percentile":0.4573453643547154,
"raw_score":0.6438277579254967,
"significant":true
},
{
"trait_id":"facet_imagination",
"name":"Imagination",
"category":"personality",
"percentile":0.5606034995849714,
"raw_score":0.7424334257188285,
"significant":true
},
{
"trait_id":"facet_intellect",
"name":"Intellect",
"category":"personality",
"percentile":0.7374704343214584,
"raw_score":0.6366655478430054,
"significant":true
},
{
"trait_id":"facet_liberalism",
"name":"Authority-challenging",
"category":"personality",
"percentile":0.7808736715557572,
"raw_score":0.5552707231598478,
"significant":true
}
]
},
{
"trait_id":"big5_conscientiousness",
"name":"Conscientiousness",
"category":"personality",
"percentile":0.22939241684474615,
"raw_score":0.5934971632418898,
"significant":true,
"children":[
{
"trait_id":"facet_achievement_striving",
"name":"Achievement striving",
"category":"personality",
"percentile":0.2677419988694361,
"raw_score":0.655591077028367,
"significant":true
},
{
"trait_id":"facet_cautiousness",
"name":"Cautiousness",
"category":"personality",
"percentile":0.40904830424778305,
"raw_score":0.47795518572548,
"significant":true
},
{
"trait_id":"facet_dutifulness",
"name":"Dutifulness",
"category":"personality",
"percentile":0.164224436809277,
"raw_score":0.6349680810761815,
"significant":true
},
{
"trait_id":"facet_orderliness",
"name":"Orderliness",
"category":"personality",
"percentile":0.867165384494327,
"raw_score":0.530616236542301,
"significant":true
},
{
"trait_id":"facet_self_discipline",
"name":"Self-discipline",
"category":"personality",
"percentile":0.2026779873552365,
"raw_score":0.5310156326644194,
"significant":true
},
{
"trait_id":"facet_self_efficacy",
"name":"Self-efficacy",
"category":"personality",
"percentile":0.3023937616129415,
"raw_score":0.7348991796444799,
"significant":true
}
]
},
{
"trait_id":"big5_extraversion",
"name":"Extraversion",
"category":"personality",
"percentile":0.2667979477554203,
"raw_score":0.5232267972734429,
"significant":true,
"children":[
{
"trait_id":"facet_activity_level",
"name":"Activity level",
"category":"personality",
"percentile":0.3490192324295949,
"raw_score":0.5205273056390818,
"significant":true
},
{
"trait_id":"facet_assertiveness",
"name":"Assertiveness",
"category":"personality",
"percentile":0.3371249743821161,
"raw_score":0.6230467403390507,
"significant":true
},
{
"trait_id":"facet_cheerfulness",
"name":"Cheerfulness",
"category":"personality",
"percentile":0.24258354512261554,
"raw_score":0.594713504568435,
"significant":true
},
{
"trait_id":"facet_excitement_seeking",
"name":"Excitement-seeking",
"category":"personality",
"percentile":0.46972100101797953,
"raw_score":0.6003831372285343,
"significant":true
},
{
"trait_id":"facet_friendliness",
"name":"Outgoing",
"category":"personality",
"percentile":0.29192693589475666,
"raw_score":0.5330152232542364,
"significant":true
},
{
"trait_id":"facet_gregariousness",
"name":"Gregariousness",
"category":"personality",
"percentile":0.34577689008301526,
"raw_score":0.4329464839207155,
"significant":true
}
]
},
{
"trait_id":"big5_agreeableness",
"name":"Agreeableness",
"category":"personality",
"percentile":0.2778846312783998,
"raw_score":0.7187775451521589,
"significant":true,
"children":[
{
"trait_id":"facet_altruism",
"name":"Altruism",
"category":"personality",
"percentile":0.3340915482705341,
"raw_score":0.6900524000049065,
"significant":true
},
{
"trait_id":"facet_cooperation",
"name":"Cooperation",
"category":"personality",
"percentile":0.445551905959055,
"raw_score":0.5711407367161474,
"significant":true
},
{
"trait_id":"facet_modesty",
"name":"Modesty",
"category":"personality",
"percentile":0.5418929802964033,
"raw_score":0.4539269679292031,
"significant":true
},
{
"trait_id":"facet_morality",
"name":"Uncompromising",
"category":"personality",
"percentile":0.3327649613483089,
"raw_score":0.6054136547271408,
"significant":true
},
{
"trait_id":"facet_sympathy",
"name":"Sympathy",
"category":"personality",
"percentile":0.5776699806826077,
"raw_score":0.6709599083365048,
"significant":true
},
{
"trait_id":"facet_trust",
"name":"Trust",
"category":"personality",
"percentile":0.6506691562935983,
"raw_score":0.6017503767590401,
"significant":true
}
]
},
{
"trait_id":"big5_neuroticism",
"name":"Emotional range",
"category":"personality",
"percentile":0.012225596986201182,
"raw_score":0.3709704629886742,
"significant":true,
"children":[
{
"trait_id":"facet_anger",
"name":"Fiery",
"category":"personality",
"percentile":0.5581412468086754,
"raw_score":0.5437137741013285,
"significant":true
},
{
"trait_id":"facet_anxiety",
"name":"Prone to worry",
"category":"personality",
"percentile":0.7355932800664517,
"raw_score":0.6370636497177248,
"significant":true
},
{
"trait_id":"facet_depression",
"name":"Melancholy",
"category":"personality",
"percentile":0.8073480016353904,
"raw_score":0.5034267686780826,
"significant":true
},
{
"trait_id":"facet_immoderation",
"name":"Immoderation",
"category":"personality",
"percentile":0.24332416646800148,
"raw_score":0.47385528964341017,
"significant":true
},
{
"trait_id":"facet_self_consciousness",
"name":"Self-consciousness",
"category":"personality",
"percentile":0.7754603650051617,
"raw_score":0.5865448670864387,
"significant":true
},
{
"trait_id":"facet_vulnerability",
"name":"Susceptible to stress",
"category":"personality",
"percentile":0.7069366679699797,
"raw_score":0.5023098966625577,
"significant":true
}
]
}
],
"needs":[
{
"trait_id":"need_challenge",
"name":"Challenge",
"category":"needs",
"percentile":0.3111815824704851,
"raw_score":0.7096962589081414,
"significant":true
},
{
"trait_id":"need_closeness",
"name":"Closeness",
"category":"needs",
"percentile":0.35074889692132116,
"raw_score":0.776347669910458,
"significant":true
},
{
"trait_id":"need_curiosity",
"name":"Curiosity",
"category":"needs",
"percentile":0.31319024070209367,
"raw_score":0.8038374100057984,
"significant":true
},
{
"trait_id":"need_excitement",
"name":"Excitement",
"category":"needs",
"percentile":0.381914846033436,
"raw_score":0.6613380266802147,
"significant":true
},
{
"trait_id":"need_harmony",
"name":"Harmony",
"category":"needs",
"percentile":0.31267505503919857,
"raw_score":0.7923972251247591,
"significant":true
},
{
"trait_id":"need_ideal",
"name":"Ideal",
"category":"needs",
"percentile":0.3275871372890826,
"raw_score":0.6698318741171541,
"significant":true
},
{
"trait_id":"need_liberty",
"name":"Liberty",
"category":"needs",
"percentile":0.32239839981885254,
"raw_score":0.7192415822205642,
"significant":true
},
{
"trait_id":"need_love",
"name":"Love",
"category":"needs",
"percentile":0.3964120015403447,
"raw_score":0.7558832961971879,
"significant":true
},
{
"trait_id":"need_practicality",
"name":"Practicality",
"category":"needs",
"percentile":0.9649293870023881,
"raw_score":0.7669009397738932,
"significant":true
},
{
"trait_id":"need_self_expression",
"name":"Self-expression",
"category":"needs",
"percentile":0.6353593836964153,
"raw_score":0.6869779372404304,
"significant":true
},
{
"trait_id":"need_stability",
"name":"Stability",
"category":"needs",
"percentile":0.24020391881699688,
"raw_score":0.711976290266912,
"significant":true
},
{
"trait_id":"need_structure",
"name":"Structure",
"category":"needs",
"percentile":0.5035013183383961,
"raw_score":0.6963163749792464,
"significant":true
}
],
"warnings":[
{
"warning_id":"WORD_COUNT_MESSAGE",
"message":"There were 472 words in the input. We need a minimum of 600, preferably 1,200 or more, to compute statist"
}
]
}
],
"gender":"Male",
"marital_status":"Single",
"user_likes":"Jengkiers\r\nPriska Madelyn Nugroho\r\nMrs Laos World\r\nWimbledon\r\nKakliss MCI\r\nILUNI K3 FKM UI\r\nBadminton Vietnam\r\nBMT MEDIA\r\nTennis Indonesia\r\nRina Silvia Aritonang\r\nDina Maria Simamora\r\nIndustrial Hygiene\r\nChristopher Rungkat\r\nEffendi Hutahaean\r\nOya Yolanda Haam\r\nDigdyo Fakirul Gareng Crew\r\nShanty Sihombing\r\nPaulus Nalsali Herianto Tamba\r\nIndonesia Feminis\r\nIsna Muharyunis\r\nJoko Sumartono\r\nMRT Jakarta\r\nLagu Rohani Terbaru\r\nAldila Sutjiadi\r\nNurhalima Purba\r\nToro Prima Jaya\r\nHanif Optik Citra\r\nDarus Harjuniadi\r\nAtika Sumco\r\nMuhamad Yusuf Lamba\r\nMadinisafety Const\r\nRiri Gpp\r\nBethanie Mattek-Sands\r\nPenyet Everest\r\nSteve Harvey\r\nChoir \"Alumni Kristiani UI\"\r\nMartina Hingis\r\nGenie Bouchard\r\nBelinda Bencic\r\nLaLiga\r\nDr. Suryo Bawono, Sp.OG\r\nJoyful Choir Cibubur\r\nIMPORTIR.ORG\r\nIrwan Wahyudiono\r\nArdantya Syahreza\r\nBubuk Silky Puding\r\nEnglish For All Indonesian\r\nIIEF EduFair\r\nTAR Team BP Tangguh West Papua\r\nKang H Idea",
"location":"Depok",
"age_range":"31-35",
"education":"Bachelor"
}
]
}
I've tried the following query and successfully created a node labeled big5_personality, but I'm stuck on the other 2.
WITH "///big5_personality.json" AS file
call apoc.load.json(file) YIELD value
unwind value.big5_personality as item
merge (a:big5_personality{user_name_link: item.user_name_link})
on create set
a.user_name = item.user_name,
a.preference = item.preference,
a.gender = item.gender,
a.marital_status = item.marital_status,
a.education = item.education,
a.age_range = item.age_range,
a.location = item.location,
a.user_likes = item.user_likes,
a.post_text_id = item.post_text_id,
a.post_text_eng = item.post_text_eng,
a.identity = item.identity
foreach (personality_result in item.personality_result |
merge (b:personality_result {user_name_link: item.user_name_link})
on create set
b.word_count = personality_result.word_count,
b.word_count_message = personality_result.word_count_message,
b.processed_language = personality_result.processed_language
)
merge (a)-[r1:rel_personality_result]->(b)
foreach (personality in item.personality_result.personality |
merge (c:personality {user_name_link: b.user_name_link})
on create set
c.trait_id = personality.trait_id
MERGE (b)<-[:rel_personality]-(c)
)
Please help.
You have multiple issues with your data file. Among them are:
Your Cypher code expects personality_result to be a list of JSON objects. It is not.
(a) It is a single string, not a list.
(b) That string seems to consist of the truncated start of a stringified JSON object (that includes a lot of extra pretty-printing whitespace).
So, everything in your Cypher query starting at the FOREACH will not work.
In your next-to-last MERGE, personality_result.personality should probably be just personality.
You may have other issues, but it is hard to tell until you fix your data file and code.
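If you control the file, one option is to repair it before loading: detect fields that arrived as stringified JSON and parse them back into real structures. Below is a minimal Python sketch of that idea on a toy document; the personality_result field name comes from the question, and the preprocessing step itself is an assumption about your pipeline, not something apoc.load.json does for you:

```python
import json

# Toy document whose list field was accidentally stored as a JSON string.
doc = {"personality_result": '[{"word_count": 472}]'}

value = doc["personality_result"]
# If the value is a string rather than a list, parse it back into real JSON
# before writing the repaired file out for Neo4j to load.
if isinstance(value, str):
    doc["personality_result"] = json.loads(value)

print(doc["personality_result"][0]["word_count"])  # 472
```

After this repair, Cypher constructs like FOREACH and UNWIND see a genuine list of objects instead of one opaque string.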
I found the solution to my problem. Maybe it's a dirty way and there's a better solution for my case, but the updated code is below:
WITH "///big5_personality.json" AS file
call apoc.load.json(file) YIELD value
unwind value.big5_personality as item
unwind item.personality_result as itema
unwind itema.personality_detail as itemb
UNWIND itemb.children as itemc
merge (a:big5_personality{user_name_link: item.user_name_link})
on create set
a.user_name = item.user_name,
a.preference = item.preference,
a.gender = item.gender,
a.marital_status = item.marital_status,
a.education = item.education,
a.age_range = item.age_range,
a.location = item.location,
a.user_likes = item.user_likes,
a.post_text_id = item.post_text_id,
a.post_text_eng = item.post_text_eng,
a.identity = item.identity
foreach (personality_result in itema |
merge (b:personality_result {user_name_link: item.user_name_link})
on create set
b.word_count = personality_result.word_count,
b.word_count_message = personality_result.word_count_message,
b.processed_language = personality_result.processed_language
)
merge (a)-[r1:rel_big5_personality_result{user_name_link: a.user_name_link, word_count: itema.word_count, word_count_message: itema.word_count_message, processed_language: itema.processed_language}]->(b)
foreach (trait in itemb |
merge (c:personality{user_name_link : item.user_name_link, trait_id: trait.trait_id, trait_name: trait.name, trait_category: trait.category, trait_percentile: trait.percentile, trait_significant: trait.significant})
ON CREATE SET
c.trait_raw_score = trait.raw_score
MERGE (b)-[:rel_personality_result{user_name_link : itema.user_name_link, trait_id: itemb.trait_id, trait_name: itemb.name, trait_category: itemb.category, trait_percentile: itemb.percentile, trait_significant: itemb.significant}]->(c)
)
FOREACH (facet IN itemc |
MERGE (d:personality_children{user_name_link : itema.user_name_link, personality_trait_id: itemb.trait_id})
ON CREATE SET
d.facet_trait_id = facet.trait_id,
d.facet_name = facet.name,
d.facet_category = facet.category,
d.facet_percentile = facet.percentile,
d.facet_significant = facet.significant
MERGE (c)-[:rel_personality_children{user_name_link : itema.user_name_link, personality_trait_id: itemb.trait_id, facet_trait_id: itemc.trait_id, facet_name: itemc.name, facet_category: itemc.category, facet_percentile: itemc.percentile, facet_significant: itemc.significant}]->(d)
)

Loading JSON data to a list in a particular order using PyMongo

Let's say I have the following document in a MongoDB database:
{
"assist_leaders" : {
"Steve Nash" : {
"team" : "Phoenix Suns",
"position" : "PG",
"draft_data" : {
"class" : 1996,
"pick" : 15,
"selected_by" : "Phoenix Suns",
"college" : "Santa Clara"
}
},
"LeBron James" : {
"team" : "Cleveland Cavaliers",
"position" : "SF",
"draft_data" : {
"class" : 2003,
"pick" : 1,
"selected_by" : "Cleveland Cavaliers",
"college" : "None"
}
}
}
}
I'm trying to collect a few values under "draft_data" for each player in an ORDERED list. The list needs to look like the following for this particular document:
[ [1996, 15, "Phoenix Suns"], [2003, 1, "Cleveland Cavaliers"] ]
That is, each nested list must contain the values corresponding to the "class", "pick", and "selected_by" keys, in that order. I also need the "Steve Nash" data to come before the "LeBron James" data.
How can I achieve this using pymongo? Note that the structure of the data is not set in stone so I can change this if that makes the code simpler.
I'd extract the data and turn it into a list in Python, once you've retrieved the document from MongoDB:
for doc in db.collection.find():
    for name, info in doc['assist_leaders'].items():
        draft_data = info['draft_data']
        lst = [draft_data['class'], draft_data['pick'], draft_data['selected_by']]
        print(name, lst)
A list comprehension is the way to go here. (Note: use .iteritems() in Python 2 or .items() in Python 3, or you'll get a ValueError: too many values to unpack.)
import pymongo
import numpy as np

client = pymongo.MongoClient()
db = client[database_name]
dataList = [v for i in ["Steve Nash", "LeBron James"]
            for key in ["class", "pick", "selected_by"]
            for document in db.collection_name.find({"assist_leaders": {"$exists": 1}})
            for k, v in document["assist_leaders"][i]["draft_data"].items()
            if k == key]
print(dataList)
# [1996, 15, "Phoenix Suns", 2003, 1, "Cleveland Cavaliers"]

matrix = np.reshape(dataList, [2, 3])
print(matrix)
# [ [1996, 15, "Phoenix Suns"],
#   [2003, 1, "Cleveland Cavaliers"] ]
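Since the question says the data structure is not set in stone, a simpler option is to store the players as an ordered array instead of object keys, which makes the ordering explicit rather than dependent on dictionary key order. A sketch of that restructured document and the extraction, using a plain dict in place of a live MongoDB document (no database connection assumed):

```python
# A plain dict stands in for the MongoDB document; storing players in an
# ordered array (instead of as keys of an object) makes the order explicit.
doc = {
    "assist_leaders": [
        {"name": "Steve Nash",
         "draft_data": {"class": 1996, "pick": 15, "selected_by": "Phoenix Suns"}},
        {"name": "LeBron James",
         "draft_data": {"class": 2003, "pick": 1, "selected_by": "Cleveland Cavaliers"}},
    ]
}

# Pull the three draft fields, in order, for each player, in array order.
rows = [[p["draft_data"][k] for k in ("class", "pick", "selected_by")]
        for p in doc["assist_leaders"]]
print(rows)  # [[1996, 15, 'Phoenix Suns'], [2003, 1, 'Cleveland Cavaliers']]
```

With this layout the nested-list output falls out of a single comprehension, and no reshape step is needed.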

How to scrape the text by category and make a JSON file?

We are scraping the website www.theft-alerts.com. Right now we get all of the text:
import json
import urllib2
from bs4 import BeautifulSoup

connection = urllib2.urlopen('http://www.theft-alerts.com')
soup = BeautifulSoup(connection.read().replace("<br>", "\n"), "html.parser")

theftalerts = []
for sp in soup.select("table div.itemspacingmodified"):
    for wd in sp.select("div.itemindentmodified"):
        text = wd.text
        if not text.startswith("Images :"):
            print(text)

with open("theft-alerts.json", 'w') as outFile:
    json.dump(theftalerts, outFile, indent=2)
Output:
STOLEN : A LARGE TAYLORS OF LOUGHBOROUGH BELL
Stolen from Bromyard on 7 August 2014
Item : The bell has a diameter of 37 1/2" is approx 3' tall weighs just shy of half a ton and was made by Taylor's of Loughborough in 1902. It is stamped with the numbers 232 and 11.
The bell had come from Co-operative Wholesale Society's Crumpsall Biscuit Works in Manchester.
Any info to : PC 2361. Tel 0300 333 3000
Messages : Send a message
Crime Ref : 22EJ / 50213D-14
No of items stolen : 1
Location : UK > Hereford & Worcs
Category : Shop, Pub, Church, Telephone Boxes & Bygones
ID : 84377
User : 1 ; Antique/Reclamation/Salvage Trade ; (Administrator)
Date Created : 11 Aug 2014 15:27:57
Date Modified : 11 Aug 2014 15:37:21;
How can we categorize the text for the JSON file? The JSON file is now empty.
Output JSON:
[]
You can define a list and append each dictionary object you create to the list. Note that you need a fresh dictionary for each entry; appending the same dict object twice just stores two references to the same data. For example:
import json

theftalerts = []

atheftobject = {}
atheftobject['location'] = 'UK > Hereford & Worcs'
atheftobject['category'] = 'Shop, Pub, Church, Telephone Boxes & Bygones'
theftalerts.append(atheftobject)

atheftobject = {}  # create a new dict; don't mutate the one already in the list
atheftobject['location'] = 'UK'
atheftobject['category'] = 'Shop'
theftalerts.append(atheftobject)

with open("theft-alerts.json", 'w') as outFile:
    json.dump(theftalerts, outFile, indent=2)
After this runs, theft-alerts.json will contain this JSON array:
[
  {
    "location": "UK > Hereford & Worcs",
    "category": "Shop, Pub, Church, Telephone Boxes & Bygones"
  },
  {
    "location": "UK",
    "category": "Shop"
  }
]
You can play with this to generate your own JSON objects. Check out the json module.
Your JSON output remains empty because your loop doesn't append to the list.
Here's how I would extract the category name:
theftalerts = []
for sp in soup.select("table div.itemspacingmodified"):
    item_text = "\n".join(
        [wd.text for wd in sp.select("div.itemindentmodified")
         if not wd.text.startswith("Images :")])
    category = sp.find(
        'span', {'class': 'itemsmall'}).text.split('\n')[1][11:]
    theftalerts.append({'text': item_text, 'category': category})
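Another way to categorize the free text is to exploit the listing's own "Label : value" pattern: most lines can be split on the first " : " into a key and a value. A minimal sketch, assuming the lines look like the sample output above (only a few sample lines are used here):

```python
import json

# Sample lines taken from the scraped output shown above.
text = """Crime Ref : 22EJ / 50213D-14
No of items stolen : 1
Location : UK > Hereford & Worcs
Category : Shop, Pub, Church, Telephone Boxes & Bygones
ID : 84377"""

record = {}
for line in text.splitlines():
    if " : " in line:
        # Split only on the first separator so values may contain " : "-free colons.
        key, value = line.split(" : ", 1)
        record[key.strip()] = value.strip()

print(json.dumps(record, indent=2))
```

Lines that do not match the pattern (like the free-form item description) can be collected separately under a 'text' key, as in the answer above.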

Dataframe in R to be converted to sequence of JSON objects

I had asked this question before, after editing a previous question of mine twice. I am sorry for the bad usage of this website; I have flagged the old question for deletion and am posting a proper new question here. Please look into this.
I am basically working on recommender system code. The output has to be converted to a sequence of JSON objects. I have a matrix that serves as a lookup table for every item ID, with the list of the closest items it is related to and the similarity scores associated with those combinations.
Let me explain through an example.
Suppose I have a matrix.
In the example below, Item 1 is similar to Items 22 and 23, with similarity scores 0.8 and 0.5 respectively. The remaining rows follow the same structure.
X1 X2 X3 X4 X5
1 22 23 0.8 0.5
34 4 87 0.4 0.4
23 7 92 0.6 0.5
I want a JSON structure for every item (the X1 of every row), together with the recommended items and the similarity score for each combination, with each row emitted as a separate JSON entity, in sequence. I don't want one enclosing JSON object containing these individual ones.
Assume there is one more entity called "coid" that will be given as input to the code. I assume it is XYZ, and it is the same for all rows.
{ "_id" : { "coid" : "XYZ", "iid" : "1"}, "items" : [ { "item" : "22", "score" : 0.8},{ "item": "23", "score" : 0.5}] }
{ "_id" : { "coid" : "XYZ", "iid" : "34"},"items" : [ { "item" : "4", "score" : 0.4},{ "item": "87", "score" : 0.4}] }
{ "_id" : { "coid" : "XYZ", "iid" : "23"},"items" : [ { "item" : "7", "score" : 0.6},{ "item": "92", "score" : 0.5}] }
As above, each entity is a valid JSON structure/object, but they are not put together into a single enclosing JSON object.
I appreciate all the help on the previous question, but I feel this new alteration is not covered by those answers, because in the end, if you do a toJSON(some entity), it converts the entire thing to one JSON object. I don't want that.
I want individual objects like these written to a file, one after another.
I am very sorry for my ignorance and inconvenience. Please help.
Thanks.
library(rjson)
## Your matrix
mat <- matrix(c(1, 34, 23,
                22, 4, 7,
                23, 87, 92,
                0.8, 0.4, 0.6,
                0.5, 0.4, 0.5), byrow=FALSE, nrow=3)
I use a function (with the not-very-interesting name makejson) that takes a row of the matrix and returns a JSON object. It builds two list objects, _id and items, and combines them into a JSON object:
makejson <- function(x, coid="ABC") {
  `_id` <- list(coid = coid, iid = x[1])
  nitem <- (length(x) - 1) / 2  # Number of items
  items <- list()
  for (i in seq(1, nitem)) {
    items[[i]] <- list(item = x[i + 1], score = x[i + 1 + nitem])
  }
  toJSON(list(`_id` = `_id`, items = items))
}
Then, using apply (or a for loop), I apply the function to each row of the matrix:
res <- apply(mat, 1, makejson, coid="XYZ")
cat(res, sep = "\n")
## {"_id":{"coid":"XYZ","iid":1},"items":[{"item":22,"score":0.8},{"item":23,"score":0.5}]}
## {"_id":{"coid":"XYZ","iid":34},"items":[{"item":4,"score":0.4},{"item":87,"score":0.4}]}
## {"_id":{"coid":"XYZ","iid":23},"items":[{"item":7,"score":0.6},{"item":92,"score":0.5}]}
The result can be saved to a file with cat by specifying the file argument.
## cat(res, sep="\n", file="out.json")
There is a small difference between your output and mine: in yours the numbers are in quotes ("). If you want it like that, mat has to be a character matrix.
## mat <- matrix(as.character(c(1,34,23, ...
Hope it helps,
alex
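For comparison, this one-object-per-line output format (often called NDJSON or JSON Lines) can be sketched in any language with a JSON serializer. A minimal Python version, where the rows mirror the example matrix and the "XYZ" coid comes from the question:

```python
import json

# Rows mirror the matrix above: iid, two recommended item ids, two scores.
rows = [(1, 22, 23, 0.8, 0.5), (34, 4, 87, 0.4, 0.4), (23, 7, 92, 0.6, 0.5)]

lines = []
for iid, i1, i2, s1, s2 in rows:
    obj = {"_id": {"coid": "XYZ", "iid": str(iid)},
           "items": [{"item": str(i1), "score": s1},
                     {"item": str(i2), "score": s2}]}
    lines.append(json.dumps(obj))  # serialize each row independently

# Each line is one standalone JSON object, with no enclosing array.
print("\n".join(lines))
```

Writing "\n".join(lines) to a file produces exactly the sequence-of-objects layout requested, since no enclosing array or object is ever created.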

Parsing JSON from Google Distance Matrix API with Corona SDK

So I'm trying to pull data from a JSON string (as seen below). When I decode the JSON using the code below, and then attempt to index the duration text, I get a nil return. I have tried everything and nothing seems to work.
Here is the Google Distance Matrix API JSON:
{
"destination_addresses" : [ "San Francisco, CA, USA" ],
"origin_addresses" : [ "Seattle, WA, USA" ],
"rows" : [
{
"elements" : [
{
"distance" : {
"text" : "1,299 km",
"value" : 1299026
},
"duration" : {
"text" : "12 hours 18 mins",
"value" : 44303
},
"status" : "OK"
}]
}],
"status" : "OK"
}
And here is my code:
local json = require("json")
local http = require("socket.http")

local myNewData1 = {}
local SaveData1 = function (event)
  distanceReturn = ""
  distance = ""
  local URL1 = "http://maps.googleapis.com/maps/api/distancematrix/json?origins=Seattle&destinations=San+Francisco&mode=driving&&sensor=false"
  local response1 = http.request(URL1)
  local data2 = json.decode(response1)
  if response1 == nil then
    native.showAlert("Data is nil", { "OK" })
    print("Error1")
    distanceReturn = "Error1"
  elseif data2 == nil then
    distanceReturn = "Error2"
    native.showAlert("Data is nil", { "OK" })
    print("Error2")
  else
    for i = 1, #data2 do
      print("Working")
      print(data2[i].rows)
      for j = 1, #data2[i].rows, 1 do
        print("\t" .. data2[i].rows[j])
        for k = 1, #data2[i].rows[k].elements, 1 do
          print("\t" .. data2[i].rows[j].elements[k])
          for g = 1, #data2[i].rows[k].elements[k].duration, 1 do
            print("\t" .. data2[i].rows[k].elements[k].duration[g])
            for f = 1, #data2[i].rows[k].elements[k].duration[g].text, 1 do
              print("\t" .. data2[i].rows[k].elements[k].duration[g].text)
              distance = data2[i].rows[k].elements[k].duration[g].text
              distanceReturn = data2[i].rows[k].elements[k].duration[g].text
            end
          end
        end
      end
    end
  end
end

timer.performWithDelay(100, SaveData1, 999999)
Your loops are not correct; try this shorter solution.
Replace your entire "for i = 1, #data2 do" loop with the one below:
print("Working")
for i, row in ipairs(data2.rows) do
  for j, element in ipairs(row.elements) do
    print(element.duration.text)
  end
end
This question was solved on the Corona Forums by Rob Miracle (http://forums.coronalabs.com/topic/47319-parsing-json-from-google-distance-matrix-api/?hl=print_r#entry244400). The solution is simple:
"JSON and Lua tables are almost identical data structures. In this case your table data2 has top-level entries:
data2.destination_addresses
data2.origin_addresses
data2.rows
data2.status
Now data2.rows is another table that is indexed by numbers (the [] brackets); there is only one entry here, but it's still an array entry:
data2.rows[1]
Then inside of it is another numerically indexed table called elements.
So far, to get to the element (again, there is only one of them):
data2.rows[1].elements[1]
Then it's just a matter of accessing the remaining fields:
data2.rows[1].elements[1].distance.text
data2.rows[1].elements[1].distance.value
data2.rows[1].elements[1].duration.text
data2.rows[1].elements[1].duration.value
There is a great table-printing function called print_r, which can be found in the community code and is handy for dumping tables like this to see their structure."
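The quoted explanation maps directly onto any language that decodes JSON into native structures. A minimal Python sketch of the same access path, run against a trimmed copy of the response above (only the fields needed for the path are kept):

```python
import json

# Trimmed copy of the Distance Matrix response shown above.
response = '''{
  "rows": [{"elements": [{"duration": {"text": "12 hours 18 mins", "value": 44303},
                          "status": "OK"}]}],
  "status": "OK"
}'''

data = json.loads(response)
# One row, one element -- index both, then read the nested duration fields.
duration = data["rows"][0]["elements"][0]["duration"]
print(duration["text"])   # 12 hours 18 mins
print(duration["value"])  # 44303
```

No nested counting loops are needed; the path rows[1].elements[1].duration in Lua is rows[0].elements[0]["duration"] here only because Lua arrays are 1-based and Python lists are 0-based.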