In my Node app I am using Elasticsearch as my backend. I am trying to insert data from a JSON file, but I get an error.
My JSON:
{"index":{"_index":"mfissample", "_type":"place_mfi", "_id": "1"}}
{"PAR" : 42.31,"Center":"xx","District":"yy","Country" : "vv","GLP" : 13073826.63,"State" : "zz","SSScore" :null, "location":"80.102134,12.897401"}
{"index":{"_index":"mfissample", "_type":"place_mfi", "_id": "2"}}
{"PAR" : 42.31,"Center" : "xx","District" : "yy","Country" : "zz","GLP" : 13073826.63,"State" : "vv","SSScore" :null,
"location":"80.102134,12.897401"}
My command:
curl -XPOST 'http://localhost:9200/_bulk' --data-binary @jsonbulk.json
The error:
{"error":"JsonParseException[Unexpected character (':' (code 58)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')\n at [Source: [B#792c4b55; line: 1, column: 12]]","status":500}
Remove the \n after "SSScore" :null, and before the "location":"80.102134,12.897401". The bulk API requires each action line and each document to sit on a single line of its own.
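For reference, here is the corrected jsonbulk.json with each document on one line (the bulk body must also end with a trailing newline):

{"index":{"_index":"mfissample", "_type":"place_mfi", "_id": "1"}}
{"PAR" : 42.31,"Center":"xx","District":"yy","Country" : "vv","GLP" : 13073826.63,"State" : "zz","SSScore" :null, "location":"80.102134,12.897401"}
{"index":{"_index":"mfissample", "_type":"place_mfi", "_id": "2"}}
{"PAR" : 42.31,"Center" : "xx","District" : "yy","Country" : "zz","GLP" : 13073826.63,"State" : "vv","SSScore" :null, "location":"80.102134,12.897401"}

On Elasticsearch 6.0 and later the bulk endpoint also requires a Content-Type header, e.g. adding -H 'Content-Type: application/x-ndjson' to the curl command above.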
I was trying to parse a JSON response. However, there is a trailing comma at the end of the response, which throws an error while decoding it. How do I automatically fix it in code so that the comma is removed and the JSON validates?
[{"id" : "9991","last_message" : "How about tomorrow then?","members" : ["John", "Daniel", "Rachel"],"topic" : "pizza night", "modified_at" : 1599814026153}, {"id" : "9992","last_message" : "I will send them to you asap","members" : ["Raphael"],"topic" : "slides", "modified_at" : 1599000026153}, {"id" : "9993","last_message" : "Can you please?","members" : ["Mum", "Dad", "Bro"],"topic" : "pictures", "modified_at" : 1512814026153},]
Error
D/EGL_emulation( 7121): app_time_stats: avg=32.94ms min=4.95ms max=83.26ms count=30
E/flutter ( 7121): [ERROR:flutter/shell/common/shell.cc(93)] Dart Unhandled Exception: FormatException: Unexpected character (at character 435)
E/flutter ( 7121): ..."Mum", "Dad", "Bro"],"topic" : "pictures", "modified_at" : 1512814026153},]
You should really include some of your source code, but I think you're using an http.Response object, so you can do something like this:
String bodyRes = response.body;
// Drop the trailing comma before the closing bracket, if present.
bodyRes = bodyRes.endsWith(',]')
    ? bodyRes.replaceFirst(',]', ']', bodyRes.length - 2)
    : bodyRes;
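For context, a minimal sketch of how this could slot into the decoding path (the fetchChats wrapper and the URL are assumptions, not from the question):

import 'dart:convert';

import 'package:http/http.dart' as http;

Future<List<dynamic>> fetchChats() async {
  final response = await http.get(Uri.parse('https://example.com/chats'));
  String bodyRes = response.body;
  // Strip the trailing comma before the closing bracket so jsonDecode accepts it.
  if (bodyRes.endsWith(',]')) {
    bodyRes = bodyRes.replaceFirst(',]', ']', bodyRes.length - 2);
  }
  return jsonDecode(bodyRes) as List<dynamic>;
}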
I have a multiline log that consists of a correct JSON part (one or more lines) followed by a stack trace.
Is it possible to parse the first part of the log as JSON, and to put all the lines after it into a new label ("stackTrace" for example)?
Unfortunately, the logs can contain a varying number of JSON fields, so parsing them with a regex is unlikely to work.
{ "timestamp" : "2022-03-28 14:33:00,000", "logger" : "appLog", "level" : "ERROR", "thread" : "ktor-8080", "url" : "/path","method" : "POST","httpStatusCode" : 400,"callId" : "f7a22bfb1466","errorMessage" : "Unexpected JSON token at offset 184: Encountered an unknown key 'a'. Use 'ignoreUnknownKeys = true' in 'Json {}' builder to ignore unknown keys. JSON input: { \"entityId\" : \"TGT-8c8d950036bf\", \"processCode\" : \"test\", \"tokenType\" : \"SSO_CCOM\", \"ttlMills\" : 600000, \"a\" : \"a\" }" }
com.example.info.core.WebApplicationException: Unexpected JSON token at offset 184: Encountered an unknown key 'a'.
Use 'ignoreUnknownKeys = true' in 'Json {}' builder to ignore unknown keys.
JSON input: {
"entityId" : "TGT-8c8d950036bf",
"processCode" : "test",
"tokenType" : "SSO_CCOM",
"ttlMills" : 600000,
"a" : "a"
}
at com.example.info.signtoken.SignTokenApi$signTokenModule$2$1$1.invokeSuspend(SignTokenApi.kt:94)
at com.example.info.signtoken.SignTokenApi$signTokenModule$2$1$1.invoke(SignTokenApi.kt)
at com.example.info.signtoken.SignTokenApi$signTokenModule$2$1$1.invoke(SignTokenApi.kt)
at io.ktor.util.pipeline.SuspendFunctionGun.loop(SuspendFunctionGun.kt:248)
at io.ktor.util.pipeline.SuspendFunctionGun.proceed(SuspendFunctionGun.kt:116)
at io.ktor.util.pipeline.SuspendFunctionGun.execute(SuspendFunctionGun.kt:136)
at io.ktor.util.pipeline.Pipeline.execute(Pipeline.kt:78)
at io.ktor.routing.Routing.executeResult(Routing.kt:155)
at io.ktor.routing.Routing.interceptor(Routing.kt:39)
at io.ktor.routing.Routing$Feature$install$1.invokeSuspend(Routing.kt:107)
at io.ktor.routing.Routing$Feature$install$1.invoke(Routing.kt)
at io.ktor.routing.Routing$Feature$install$1.invoke(Routing.kt)
UPDATE:
I've made a promtail pipeline like so:
scrape_configs:
- job_name: Test_AppLog
static_configs:
- targets:
- ${HOSTNAME}
labels:
job: INFO-Test_AppLog
host: ${HOSTNAME}
__path__: /home/adm_web/app.log
pipeline_stages:
- multiline:
firstline: ^\{\s?\"timestamp\"
max_lines: 128
max_wait_time: 1s
- match:
selector: '{job="INFO-Test_AppLog"}'
stages:
- regex:
expression: '(?P<log>^\{ ?\"timestamp\".*\}[\s])(?s)(?P<stacktrace>.*)'
- labels:
log:
stacktrace:
- json:
expressions:
logger: logger
url: url
method: method
statusCode: httpStatusCode
sla: sla
source: log
But in fact the json stage does not work; the result in Grafana is only the two fields, log and stacktrace.
Any help would be appreciated.
If the format is always like this, maybe the easiest way is to analyze the whole log string, find the index of the last "}" symbol, and then split the string at that index + 1; the JSON part will be the first element of the output array.
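A minimal sketch of that idea in Kotlin (note it assumes the closing brace of the JSON part is the last "}" in the entry, which may not hold when the stack trace itself embeds braces, as in the example above):

// Split a multiline log entry into its JSON part and the trailing stack trace.
fun splitLogEntry(entry: String): Pair<String, String> {
    val idx = entry.lastIndexOf('}')                 // index of the last '}'
    require(idx >= 0) { "no JSON object found in log entry" }
    val json = entry.substring(0, idx + 1)           // first part: the JSON
    val stackTrace = entry.substring(idx + 1).trim() // everything after it
    return json to stackTrace
}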
I am trying to retrieve an XML file from my MongoDB with mongofiles, but I get a JSON parsing error. Here is an excerpt from my terminal:
$ mongofiles -d anhalytics get_id 'ObjectId("5e7f56d30800611b17fc66b1")'
2020-09-15T16:55:33.205+0200 connected to: mongodb://localhost/
2020-09-15T16:55:33.205+0200 Failed: error parsing id as Extended JSON: invalid JSON number. Position: 18
I am using MongoDB server version 4.2.9.
Here is the record of the target file:
{
"_id" : ObjectId("5e7f56d30800611b17fc66b1"),
"filename" : "5e7f56d30800611b17fc66b0.tei.xml",
"aliases" : null,
"chunkSize" : NumberLong(261120),
"uploadDate" : ISODate("2020-03-28T13:53:23.708Z"),
"length" : NumberLong(35405),
"contentType" : null,
"md5" : "eeafae907c44b207071ccb6036148808"
}
Any idea why I am getting this error? Thanks!
The message error parsing id as Extended JSON indicates that the mongofiles tool had trouble parsing the id string that was provided on the command line.
That is done in parseOrCreateId function here: https://github.com/mongodb/mongo-tools/blob/master/mongofiles/mongofiles.go#L330
That function wraps the value from the command line in another string like {"_id":"%s"}, so the value actually passed to the bson.UnmarshalExtJSON function would have been
"{\"_id\":\"ObjectId(\"5e7f56d30800611b17fc66b1\")\"}"
Position 18 of that string, as called out in the error message, is the quotation mark immediately preceding the hex string.
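Assuming the goal is simply to fetch the file by its _id, passing the id in canonical Extended JSON form avoids the nested quotes entirely (this matches the get_id examples in the MongoDB documentation):

mongofiles -d anhalytics get_id '{"$oid": "5e7f56d30800611b17fc66b1"}'

Alternatively, mongofiles get with the filename sidesteps id parsing altogether.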
I have a JSON file in the following format:
{ "_id" : "foo.com", "categories" : [], "h1" : { "bar==" : { "first" : 1281916800, "last" : 1316995200 }, "foo==" : { "first" : 1281916800, "last" : 1316995200 } }, "name2" : [ "foobarl.com", "foobar2.com" ], "rep" : null }
So, how do I parse this JSON in Pig?
Also, categories and rep can contain some characters and might not always be empty.
I made the following attempt:
a = load 'sample_json.json' using JsonLoader('id:chararray,categories:[chararray], hostt:{ (variable_a: {(first:int,last:int)})}, ns:[chararray],rep:chararray ');
But I get this error:
org.codehaus.jackson.JsonParseException: Unexpected character ('D' (code 68)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')
at [Source: java.io.ByteArrayInputStream@4795b8e9; line: 1, column: 50]
at org.codehaus.jackson.JsonParser._constructError(JsonParser.java:1291)
at org.codehaus.jackson.impl.JsonParserMinimalBase._reportError(JsonParserMinimalBase.java:385)
at org.codehaus.jackson.impl.JsonParserMinimalBase._reportUnexpectedChar(JsonParserMinimalBase.java:306)
at org.codehaus.jackson.impl.Utf8StreamParser._handleUnexpectedValue(Utf8StreamParser.java:1582)
at org.codehaus.jackson.impl.Utf8StreamParser.nextToken(Utf8StreamParser.java:386)
at org.apache.pig.builtin.JsonLoader.readField(JsonLoader.java:173)
at org.apache.pig.builtin.JsonLoader.getNext(JsonLoader.java:157)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:211)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532)
at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
You can use the elephant-bird Pig jar for parsing JSON; it can parse all sorts of JSON data.
Here are some examples of parsing JSON with elephant-bird Pig using this jar:
https://github.com/twitter/elephant-bird/tree/master/examples/src/main/pig
It doesn't break even if an expected JSON tag isn't present.
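A minimal sketch of that approach (the jar names and version are assumptions; elephant-bird also needs its core and hadoop-compat jars on the classpath):

REGISTER elephant-bird-core-4.1.jar;
REGISTER elephant-bird-pig-4.1.jar;
REGISTER elephant-bird-hadoop-compat-4.1.jar;

-- '-nestedLoad' lets the loader descend into nested objects such as "h1"
a = LOAD 'sample_json.json'
    USING com.twitter.elephantbird.pig.load.JsonLoader('-nestedLoad') AS (json:map[]);

-- records are maps, and missing keys simply come back as null instead of failing
b = FOREACH a GENERATE json#'_id' AS id, json#'rep' AS rep;
DUMP b;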
Format:
{
"lastUpdate" : "20/9/2012-12:12",
"data":[{
"user" : "_name_",
"username" : "_fullname_",
"photoURL" : "_url_"
}, {
"user" : "_name_",
"username" : "_fullname_",
"photoURL" : "_url_"
}, {
"user" : "_name_",
"username" : "_fullname_",
"photoURL" : "_url_"
}]
}
Aptana gives errors at the ":" characters:
[Screenshot: Aptana JSON format]
Why is that? It seems I'm not having any problems receiving and processing the data.
[EDIT 1] Error given: Syntax Error: unexpected token ":"
In Aptana, a file is parsed as JSON only when you create/open a file with the .json extension.
When you have a JSON object inside a .js file, only the JavaScript parser runs, and that's why you see the error: it is not a valid token for JS.
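To illustrate why the JS parser objects (a minimal sketch, not from the original answer): at statement level JavaScript reads the opening { as a block, so the first ":" is unexpected. Renaming the file to .json, or assigning the object to a variable, makes it parse:

// In a .js file, a bare object literal is read as a block statement:
// { "lastUpdate" : "20/9/2012-12:12" }   -> Syntax Error: unexpected token ":"
// Assigned to a variable it is an expression, so it parses fine:
var payload = { "lastUpdate" : "20/9/2012-12:12", "data" : [] };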