JSON Parse Error: Expecting 'STRING'

I am using JSONLint to parse some JSON and I keep getting the error:
Error: Parse error on line 1:
[{“ product”: [{“
---^
Expecting 'STRING', '}', got 'undefined'
This is the code:
[
{
“product” : [ { “code” : “Abc123”, “description” : “Saw blade”, “price” : 34.95 } ],
“vendor” : [ { “name” : “Acme Hardware”, “state” : “New Jersey” } ]
},
{
“product” : [ { “code” : “Def456”, “description” : “Hammer”, “price” : 22.51 } ],
},
{
“product” : [ { “code” : “Ghi789”, “description” : “Wrench”, “price” : 12.15 } ],
“vendor” : [ { “name” : “Acme Hardware”, “state” : “New Jersey” } ]
},
{
“product” : [ { “code” : “Jkl012”, “description” : “Pliers”, “price” : 14.54 } ],
“vendor” : [ { “name” : “Norwegian Tool Suppliers”, “state” : “Kentucky” } ]
}
]

JSON string literals must use normal quote characters ("), not smart quotes (“”).

You're using some Unicode “smart” double-quote characters. Replace them with the normal " double quotes.
You also had an extra trailing comma at the end of the second element.
Now it's valid:
[
{
"product" : [ { "code" : "Abc123", "description" : "Saw blade", "price" : 34.95 } ],
"vendor" : [ { "name" : "Acme Hardware", "state" : "New Jersey" } ]
},
{
"product" : [ { "code" : "Def456", "description" : "Hammer", "price" : 22.51 } ]
},
{
"product" : [ { "code" : "Ghi789", "description" : "Wrench", "price" : 12.15 } ],
"vendor" : [ { "name" : "Acme Hardware", "state" : "New Jersey" } ]
},
{
"product" : [ { "code" : "Jkl012", "description" : "Pliers", "price" : 14.54 } ],
"vendor" : [ { "name" : "Norwegian Tool Suppliers", "state" : "Kentucky" } ]
}
]

JSON must use normal quote characters ("), not smart quotes (“”), for string literals.
To check which quote characters are actually in the JSON data, right-click on the browser window and select "View page source".
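If the smart quotes crept in from a word processor or a copied web page, you can also normalize them programmatically before parsing. A minimal JavaScript sketch (the variable names are illustrative):

var raw = '[{“product” : [{“code” : “Abc123”}]}]';
// Replace left/right smart quotes (U+201C, U+201D) with normal double quotes.
var cleaned = raw.replace(/[\u201C\u201D]/g, '"');
var data = JSON.parse(cleaned); // now parses without errors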

This is how I save the JSON as text in MySQL and decode it with json_decode:
[{"5":[29,30,5],"6":[1,2,3],"7":[4,5,6]}]
$row_days = $rows['days'];
var_dump(json_decode($row_days, true));
Result:
array (size=1)
  0 => array (size=3)
    5 => array (size=3)
      0 => int 29
      1 => int 30
      2 => int 5
    6 => array (size=3)
      0 => int 1
      1 => int 2
      2 => int 3
    7 => array (size=3)
      0 => int 4
      1 => int 5
      2 => int 6

Many times this error is caused by writing:
object.age = 31
instead of:
object["age"] = 31

Parse and Map 2 Arrays with jq

I am working with a JSON file similar to the one below:
{ "Response" : {
"TimeUnit" : [ 1576126800000 ],
"metaData" : {
"errors" : [ ],
"notices" : [ "query served by:1"]
},
"stats" : {
"data" : [ {
"identifier" : {
"names" : [ "apiproxy", "response_status_code", "target_response_code", "target_ip" ],
"values" : [ "IO", "502", "502", "7.1.143.6" ]
},
"metric" : [ {
"env" : "dev",
"name" : "sum(message_count)",
"values" : [ 0.0]
} ]
} ]
} } }
My goal is to display a mapping of the identifier names and values, like:
apiproxy=IO
response_status_code=502
target_response_code=502
target_ip=7.1.143.6
I have been able to parse both names and values with
.[].stats.data[] | (.identifier.names[]) and .[].stats.data[] | (.identifier.values[])
but I need help with the jq way to map the values.
The whole thing can be done in jq using the -r command-line option:
.[].stats.data[]
| [.identifier.names, .identifier.values]
| transpose[]
| "\(.[0])=\(.[1])"

Mongo forEach Query

I have the JSON you can see below, and I want to sum the values of the two objects, but when I run an aggregation it returns 0. Here you can see the queries I use; the first line is only there to be sure that the path works, and it does. However, when I use this path in the aggregation query it gives me the "ID" and the "COUNT" with the right values, but the "SUM" is always 0 when it should be 3600. Any idea?
db.getCollection('TEST').find({"prices.year.months.day.csv.price.valPrice":1800})
db.TEST.aggregate([
{ $match: {"location.cp":"20830"}},
{$group:{_id:"20830",total:{$sum:"$prices.year.months.day.csv.price.valPrice"}, count: { $sum: 1 }
}}])
And this is the JSON:
{
"_id" : "20830:cas:S:3639",
"lodgtype" : "Casa",
"lodg" : "Motrico: country holiday home - San sebastian",
"webid" : "6107939",
"location" : {
"thcod" : "20",
"cp" : "20830",
"th" : "Gipuzkoa",
"geometry" : {
"type" : "Point",
"coordinates" : [
43.31706238,
-2.40293598
]
}
},
"prices" : {
"year" : [
{
"valYear" : "2018",
"months" : [
{
"valMonth" : "02",
"day" : [
{
"valDay" : "13",
"csv" : [
{
"valCsv" : "20180205210908_223",
"price" : [
{
"valPrice" : 1800.0
}
]
}
]
}
]
}
]
}
]
},
"reg" : {
"created" : "20180213",
"updated" : "20180213",
"viewed" : "20180213"
}
},{
"_id" : "TEST20830:cas:S:3639",
"lodgtype" : "Casa",
"lodg" : "TESTMotrico: country holiday home - San sebastian",
"webid" : "6107930",
"location" : {
"thcod" : "20",
"cp" : "20830",
"th" : "Gipuzkoa",
"geometry" : {
"type" : "Point",
"coordinates" : [
43.31706238,
-2.40293598
]
}
},
"prices" : {
"year" : [
{
"valYear" : "2018",
"months" : [
{
"valMonth" : "02",
"day" : [
{
"valDay" : "13",
"csv" : [
{
"valCsv" : "20180205210908_223",
"price" : [
{
"valPrice" : 1800.0
}
]
}
]
}
]
}
]
}
]
},
"reg" : {
"created" : "20180213",
"updated" : "20180213",
"viewed" : "20180213"
}
}
Since you have deeply nested arrays, you have to $unwind them to flatten the structure back into documents. To count the number of matches, you have to add an extra $group after $match that uses $push with $$ROOT to keep the matching data.
db.TEST.aggregate([
{"$match":{"location.cp":"20830"}},
{"$group":{
"_id":"20830",
"data":{"$push":"$$ROOT"},
"count":{"$sum":1}
}},
{"$unwind":"$data.prices.year"},
{"$unwind":"$data.prices.year"},
{"$unwind":"$data.prices.year.months"},
{"$unwind":"$data.prices.year.months.day"},
{"$unwind":"$data.prices.year.months.day.csv"},
{"$unwind":"$data.prices.year.months.day.csv.price"},
{"$group":{
"_id":"20830",
"total":{"$sum":"$prices.year.months.day.csv.price.valPrice"},
"count":{"$first":"$count"}
}}
])
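With the two sample documents above (each holding a single valPrice of 1800.0), this should return something like:

{ "_id" : "20830", "total" : 3600.0, "count" : 2 }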

Mongolite group by/aggregate on JSON object

I have a json document like this on my mongodb collection:
Updated document:
{
"_id" : ObjectId("59da4aef8c5d757027a5a614"),
"input" : "hi",
"output" : "Hi. How can I help you?",
"intent" : "[{\"intent\":\"greeting\",\"confidence\":0.8154089450836182}]",
"entities" : "[]",
"context" : "{\"conversation_id\":\"48181e58-dd51-405a-bb00-c875c01afa0a\",\"system\":{\"dialog_stack\":[{\"dialog_node\":\"root\"}],\"dialog_turn_counter\":1,\"dialog_request_counter\":1,\"_node_output_map\":{\"node_5_1505291032665\":[0]},\"branch_exited\":true,\"branch_exited_reason\":\"completed\"}}",
"user_id" : "50001",
"time_in" : ISODate("2017-10-08T15:57:32.000Z"),
"time_out" : ISODate("2017-10-08T15:57:35.000Z"),
"reaction" : "1"
}
I need to perform a group by on the intent.intent field, and I'm using RStudio with the mongolite library.
What I have tried is :
pp = '[{"$unwind": "$intent"},{"$group":{"_id":"$intent.intent", "count": {"$sum":1} }}]'
stats <- chat$aggregate(
pipeline=pp,
options = '{"allowDiskUse":true}'
)
print(stats)
But it's not working; the output of the above code is:
_id count
1 NA 727
The intent attribute is of type string, so the object is kept as a string rather than parsed.
We can split it into an array on \" and use the third item of that array:
db.getCollection('test1').aggregate([
{ "$project": { intent_text : { $arrayElemAt : [ { $split: ["$intent", "\""] } ,3 ] } } },
{ "$group": {"_id": "$intent_text" , "count": {"$sum":1} }}
])
Result:
{
"_id" : "greeting",
"count" : 1.0
}
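The same split-based pipeline can be run from mongolite directly. A minimal sketch, assuming the same chat collection handle as in the question (note the double backslash needed to escape the quote character inside the R string):

pp <- '[
  {"$project": {"intent_text": {"$arrayElemAt": [{"$split": ["$intent", "\\""]}, 3]}}},
  {"$group": {"_id": "$intent_text", "count": {"$sum": 1}}}
]'
stats <- chat$aggregate(pipeline = pp, options = '{"allowDiskUse": true}')
print(stats)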

Manipulating JSON messages from Kafka topic using Logstash filter

I am using Logstash 2.4 to read JSON messages from a Kafka topic and send them to an Elasticsearch Index.
The JSON format is as below --
{
"schema":
{
"type": "struct",
"fields": [
{
"type":"string",
"optional":false,
"field":"reloadID"
},
{
"type":"string",
"optional":false,
"field":"externalAccountID"
},
{
"type":"int64",
"optional":false,
"name":"org.apache.kafka.connect.data.Timestamp",
"version":1,
"field":"reloadDate"
},
{
"type":"int32",
"optional":false,
"field":"reloadAmount"
},
{
"type":"string",
"optional":true,
"field":"reloadChannel"
}
],
"optional":false,
"name":"reload"
},
"payload":
{
"reloadID":"328424295",
"externalAccountID":"9831200013",
"reloadDate":1446242463000,
"reloadAmount":240,
"reloadChannel":"C1"
}
}
Without any filter in my config file, the target documents from the ES index look like below --
{
"_index" : "kafka_reloads",
"_type" : "logs",
"_id" : "AVfcyTU4SyCFNFP2z5-l",
"_score" : 1.0,
"_source" : {
"schema" : {
"type" : "struct",
"fields" : [ {
"type" : "string",
"optional" : false,
"field" : "reloadID"
}, {
"type" : "string",
"optional" : false,
"field" : "externalAccountID"
}, {
"type" : "int64",
"optional" : false,
"name" : "org.apache.kafka.connect.data.Timestamp",
"version" : 1,
"field" : "reloadDate"
}, {
"type" : "int32",
"optional" : false,
"field" : "reloadAmount"
}, {
"type" : "string",
"optional" : true,
"field" : "reloadChannel"
} ],
"optional" : false,
"name" : "reload"
},
"payload" : {
"reloadID" : "155559213",
"externalAccountID" : "9831200014",
"reloadDate" : 1449529746000,
"reloadAmount" : 140,
"reloadChannel" : "C1"
},
"#version" : "1",
"#timestamp" : "2016-10-19T11:56:09.973Z",
}
}
But I want only the value part of the "payload" field to go to my ES index as the target JSON body. So I tried to use the 'mutate' filter in the config file, as below --
input {
kafka {
zk_connect => "zksrv-1:2181,zksrv-2:2181,zksrv-4:2181"
group_id => "logstash"
topic_id => "reload"
consumer_threads => 3
}
}
filter {
mutate {
remove_field => [ "schema","#version","#timestamp" ]
}
}
output {
elasticsearch {
hosts => ["datanode-6:9200","datanode-2:9200"]
index => "kafka_reloads"
}
}
With this filter, the ES documents now look like below --
{
"_index" : "kafka_reloads",
"_type" : "logs",
"_id" : "AVfch0yhSyCFNFP2z59f",
"_score" : 1.0,
"_source" : {
"payload" : {
"reloadID" : "850846698",
"externalAccountID" : "9831200013",
"reloadDate" : 1449356706000,
"reloadAmount" : 30,
"reloadChannel" : "C1"
}
}
}
But actually it should be like below --
{
"_index" : "kafka_reloads",
"_type" : "logs",
"_id" : "AVfch0yhSyCFNFP2z59f",
"_score" : 1.0,
"_source" : {
"reloadID" : "850846698",
"externalAccountID" : "9831200013",
"reloadDate" : 1449356706000,
"reloadAmount" : 30,
"reloadChannel" : "C1"
}
}
Is there a way to do this? Can anyone help me on this?
I also tried the below filter --
filter {
json {
source => "payload"
}
}
But that is giving me errors like --
Error parsing json {:source=>"payload", :raw=>{"reloadID"=>"572584696", "externalAccountID"=>"9831200011", "reloadDate"=>1449093851000, "reloadAmount"=>180, "reloadChannel"=>"C1"}, :exception=>java.lang.ClassCastException: org.jruby.RubyHash cannot be cast to org.jruby.RubyIO, :level=>:warn}
Any help will be much appreciated.
Thanks
Gautam Ghosh
You can achieve what you want using the following ruby filter:
ruby {
code => "
event.to_hash.delete_if {|k, v| k != 'payload'}
event.to_hash.update(event['payload'].to_hash)
event.to_hash.delete_if {|k, v| k == 'payload'}
"
}
What it does is:
remove all fields but the payload one
copy all payload inner fields at the root level
delete the payload field itself
You'll end up with what you need.
It's been a while, but here is a valid workaround; hope it's useful.
json_encode {
# Re-serialize the payload hash into a JSON string.
# (Requires the logstash-filter-json_encode plugin; the field names here
# assume the "payload" field from this question.)
source => "payload"
target => "payload_string"
}
json {
# Parse the string back; the parsed fields land at the top level.
source => "payload_string"
}

Cypher query JSON formatted result

On the Actor/Movie demo graph, Cypher returns column names in a separate array.
MATCH (n:Person) RETURN n.name as Name, n.born as Born ORDER BY n.born LIMIT 5
results:
{ "columns" : [ "Name", "Born" ], "data" : [ [ "Max von Sydow", 1929 ], [ "Gene Hackman", 1930 ], [ "Richard Harris", 1930 ], [ "Clint Eastwood", 1930 ], [ "Mike Nichols", 1931 ] ]}
Is it possible to get each node's properties tagged instead?
{ "nodes" : [ ["Name": "Max von Sydow", "Born": 1929 ], ...] }
If I return the node instead of selected properties, I get way too many properties.
MATCH (n:Person) RETURN n LIMIT 5
results:
{ "columns" : [ "n" ], "data" : [ [ { "outgoing_relationships" : "http://localhost:7474/db/data/node/58/relationships/out", "labels" : "http://localhost:7474/db/data/node/58/labels", "data" : { "born" : 1929, "name" : "Max von Sydow" }, "all_typed_relationships" : "http://localhost:7474/db/data/node/58/relationships/all/{-list|&|types}", "traverse" : "http://localhost:7474/db/data/node/58/traverse/{returnType}", "self" : "http://localhost:7474/db/data/node/58", "property" : "http://localhost:7474/db/data/node/58/properties/{key}", "outgoing_typed_relationships" : "http://localhost:7474/db/data/node/58/relationships/out/{-list|&|types}", "properties" : "http://localhost:7474/db/data/node/58/properties", "incoming_relationships" : "http://localhost:7474/db/data/node/58/relationships/in", "extensions" : { }, "create_relationship" : "http://localhost:7474/db/data/node/58/relationships", "paged_traverse" : "http://localhost:7474/db/data/node/58/paged/traverse/{returnType}{?pageSize,leaseTime}", "all_relationships" : "http://localhost:7474/db/data/node/58/relationships/all", "incoming_typed_relationships" : "http://localhost:7474/db/data/node/58/relationships/in/{-list|&|types}" } ], ... ]}
You can use the new literal map syntax in Neo4j 2.0 and do something like:
MATCH (n:Person)
RETURN { Name: n.name , Born: n.born } as Person
ORDER BY n.born
LIMIT 5
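With the sample data from the question, each result row is then a single map, e.g.:

{ "Name" : "Max von Sydow", "Born" : 1929 }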