NodeJS deserialization from a stream - JSON

I am having an issue deserializing from a stream in Node (specifically the pricing feed from the Mt.Gox Bitcoin exchange). Basically, a chunk arrives which is well-formed, complete, verified JSON. Here is the code:
var gox = require('goxstream');
var fs = require('fs');
var options = {
    currency: 'AUD',
    ticker: true,
    depth: false
};
var goxStream = gox.createStream(options);
goxStream.on('data', function(chunk) {
    console.log(JSON.parse(chunk));
});
When trying to parse it, I get the following:
undefined:0
^
SyntaxError: Unexpected end of input
Any ideas? I have included a sample chunk:
> {"channel": "eb6aaa11-99d0-4f64-9e8c-1140872a423d", "channel_name":
> "ticker.BTCAUD", "op": "private", "origin": "broadcast", "private":
> "ticker", "ticker": {
> "high": {
> "value": "121.51941",
> "value_int": "12151941",
> "display": "AU$121.51941",
> "display_short": "AU$121.52",
> "currency": "AUD"
> },
> "low": {
> "value": "118.00001",
> "value_int": "11800001",
> "display": "AU$118.00001",
> "display_short": "AU$118.00",
> "currency": "AUD"
> },
> "avg": {
> "value": "119.58084",
> "value_int": "11958084",
> "display": "AU$119.58084",
> "display_short": "AU$119.58",
> "currency": "AUD"
> },
> "vwap": {
> "value": "119.80280",
> "value_int": "11980280",
> "display": "AU$119.80280",
> "display_short": "AU$119.80",
> "currency": "AUD"
> },
> "vol": {
> "value": "249.73550646",
> "value_int": "24973550646",
> "display": "249.73550646\u00a0BTC",
> "display_short": "249.74\u00a0BTC",
> "currency": "BTC"
> },
> "last_local": {
> "value": "118.50000",
> "value_int": "11850000",
> "display": "AU$118.50000",
> "display_short": "AU$118.50",
> "currency": "AUD"
> },
> "last_orig": {
> "value": "108.99500",
> "value_int": "10899500",
> "display": "$108.99500",
> "display_short": "$109.00",
> "currency": "USD"
> },
> "last_all": {
> "value": "118.79965",
> "value_int": "11879965",
> "display": "AU$118.79965",
> "display_short": "AU$118.80",
> "currency": "AUD"
> },
> "last": {
> "value": "118.50000",
> "value_int": "11850000",
> "display": "AU$118.50000",
> "display_short": "AU$118.50",
> "currency": "AUD"
> },
> "buy": {
> "value": "118.50000",
> "value_int": "11850000",
> "display": "AU$118.50000",
> "display_short": "AU$118.50",
> "currency": "AUD"
> },
> "sell": {
> "value": "119.99939",
> "value_int": "11999939",
> "display": "AU$119.99939",
> "display_short": "AU$120.00",
> "currency": "AUD"
> },
> "item": "BTC",
> "now": "1376715241731341" }}
You can verify it here: http://jsonlint.com
It is probably worth mentioning that I have already tried removing the escaped characters before parsing. I have also tried a couple of different serializers, with the same results.

You are getting the data chunk by chunk, and chunks themselves may not be complete JSON objects. Either buffer all of the data yourself, use something that does it for you (say, the request module), or, if you need to parse a long stream, take a look at the JSONparse module.
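For illustration, a minimal buffering sketch, assuming the stream eventually ends and emits one JSON document in total (a continuous ticker feed would need framing instead, e.g. splitting on newlines):
var buffered = '';
goxStream.on('data', function(chunk) {
    // Accumulate raw chunks; no single chunk is guaranteed to be complete JSON.
    buffered += chunk;
});
goxStream.on('end', function() {
    // Parse once, after the whole document has arrived.
    console.log(JSON.parse(buffered));
});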

You are getting two separate chunks (or at least: that's what I am getting when re-creating your issue). One (the first) is a valid JSON object, while the other (the second) is "almost empty": it is a 1-byte string containing just an LF (ASCII 0x0a).
The second one fails parsing, of course.
Read my first answer: this is exactly such a case. If you concatenate the two chunks you get a complete JSON object with a trailing LF, which easily passes JSON.parse(). If you try to parse the chunks separately, though, the first one succeeds (a trailing LF is not mandatory) while the second one fails (an LF by itself is not a valid JSON object).
For your case, you would have to:
1) Either assume Mt.Gox always sends data "this way", ignore those "almost empty" chunks, and parse only the "non-empty" chunks (see the sketch below).
2) Or use JSONparse, which parses JSON streams.
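A minimal sketch of option 1, assuming (as observed above) that every non-empty chunk is a complete JSON document:
goxStream.on('data', function(chunk) {
    var text = chunk.toString().trim();
    if (text === '') {
        return; // ignore the "almost empty" LF-only chunks
    }
    console.log(JSON.parse(text));
});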

Related

Add a JSON object into another JSON object using bash/jq

I am trying to create a new file by adding the object defined in country1.json into world.json. Essentially:
world.json
{
  "class": "world",
  "version": "1.1.0"
}
+
country1.json
{
  "class": "country",
  "country_1": {
    "class": "city",
    "name": "seattle"
  }
}
=
world_country1.json
{
  "class": "world",
  "version": "1.1.0",
  "country1": {
    "class": "country",
    "country_1": {
      "class": "city",
      "name": "seattle"
    }
  }
}
The key for the inserted object should come from the file name of country1.json. I would like to use bash/jq if possible.
Use input to access the second file, and redirect using > into another file:
jq '.country1 = input' world.json country1.json > world_country1.json
{
  "class": "world",
  "version": "1.1.0",
  "country1": {
    "class": "country",
    "country_1": {
      "class": "city",
      "name": "seattle"
    }
  }
}
If you want to utilize the file's name for the new field's name, use input_filename and cut off the last 5 characters (removing .json):
jq '. + (input | {(input_filename[:-5]): .})' world.json country1.json > world_country1.json
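Equivalently, if you would rather not hard-code the suffix length, jq's rtrimstr can strip the extension (this still assumes the file name ends in .json):
jq '. + (input | {(input_filename | rtrimstr(".json")): .})' world.json country1.json > world_country1.json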

Print JSON as a table in bash

I have the JSON data below and am required to print it as a table. I have managed to come up with the query below, using jq in bash, to print the data in the total section, but I am unable to get the headers, i.e. the AN and BN parts (what's the technical term?) of the JSON. Each total is equal to the sum of the sub-values in it.
required format for output:
Name Total
-----------------------------------
AN xxxxxxx
BN xxxxxxx
my current command:
curl -s 'https://url.json' | jq '.[] | .total | "\(.confirmed)-\(.deceased)-\(.recovered)-\(.tested)"'
DATA:
> { "AN": {
> "delta7": {
> "confirmed": 238,
> "deceased": 2,
> "recovered": 199,
> "tested": 9953,
> "vaccinated": 24243
> },
> "districts": {
> "Unknown": {
> "delta7": {
> "confirmed": 238,
> "deceased": 2,
> "recovered": 199,
> "tested": 9953
> },
> "meta": {
> "tested": {
> "last_updated": "2021-04-21",
> "source": "https://dhs.andaman.gov.in/NewEvents/642.pdf"
> }
> },
> "total": {
> "confirmed": 5527,
> "deceased": 65,
> "recovered": 5309,
> "tested": 357442
> }
> }
> },
> "meta": {
> "last_updated": "2021-04-23T00:10:19+05:30",
> "population": 397000,
> "tested": {
> "last_updated": "2021-04-21",
> "source": "https://dhs.andaman.gov.in/NewEvents/642.pdf"
> }
> },
> "total": {
> "confirmed": 5527,
> "deceased": 65,
> "recovered": 5309,
> "tested": 357442,
> "vaccinated": 91977
> } }, {
"BN": {
"delta7": {
"confirmed": 238,
"deceased": 2,
"recovered": 199,
"tested": 9953,
"vaccinated": 24243
},
"districts": {
"Unknown": {
"delta7": {
"confirmed": 238,
"deceased": 2,
"recovered": 199,
"tested": 9953
},
"meta": {
"tested": {
"last_updated": "2021-04-21",
"source": "https://dhs.andaman.gov.in/NewEvents/642.pdf"
}
},
"total": {
"confirmed": 5527,
"deceased": 65,
"recovered": 5309,
"tested": 357442
}
}
},
"meta": {
"last_updated": "2021-04-23T00:10:19+05:30",
"population": 397000,
"tested": {
"last_updated": "2021-04-21",
"source": "https://dhs.andaman.gov.in/NewEvents/642.pdf"
}
},
"total": {
"confirmed": 5527,
"deceased": 65,
"recovered": 5309,
"tested": 357442,
"vaccinated": 91977
} } }
With your input (once it has been corrected):
jq -r 'to_entries[] |
  [.key,
   (.value | .total | "\(.confirmed)-\(.deceased)-\(.recovered)-\(.tested)")] | @tsv' input.json
AN 5527-65-5309-357442
BN 5527-65-5309-357442
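If you also want the Name/Total header row from your required format, one way is to emit it as an extra row before to_entries (a sketch against the same corrected input):
jq -r '["Name", "Total"],
  (to_entries[] |
   [.key,
    (.value | .total | "\(.confirmed)-\(.deceased)-\(.recovered)-\(.tested)")]) | @tsv' input.json
Name	Total
AN	5527-65-5309-357442
BN	5527-65-5309-357442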

URL template parameters not working in APIM

I have a scenario: I want to connect to my backend APIs by providing the API endpoints as the path. For example, the APIs would look like the following:
/Measure/Test/Calories?q=*
/Measure/Test/Weight
/Food/Test/IntakeAmount/
/v1/Food/Test/Summary
When I provide the absolute path to the API endpoints it does work, but providing the endpoints via URL template parameters throws a 404 Not Found error.
Also, when I check the trace, the inbound request is not able to find the operation:
> api-inspector (0.008 ms) {
> "configuration": {
> "api": {
> "from": "/testapi",
> "to": {
> "scheme": "http",
> "host": "dev-foodmeasures-summary.com",
> "port": 80,
> "path": "/",
> "queryString": "",
> "query": {},
> "isDefaultPort": true
> },
> "version": null,
> "revision": "1"
> },
> **"operation": "-"**,
> "user": {
> "id": "1",
> "groups": [
> "Administrators",
> "Developers"
> ]
> },
> "product": {
> "id": "unlimited"
> }
> } }
Below is the snapshot for the path parameter.
Thanks!

expected [END_OBJECT] but got [FIELD_NAME], possibly too many query clauses error in kibana

When I try to compose a compound bool query that has a fuzzy must requirement and several should requirements, one of them being a wildcard, I run into this error message. So far, no alterations to the syntax have helped me resolve the issue.
The query:
{
  "query": {
    "bool": {
      "must": {
        "fuzzy": {
          "message": "<fuzzy string>",
          "fuzziness": "auto"
        }
      },
      "should": [
        { "query": { "message": "<string>" } },
        { "query": { "message": "<string>" } },
        { "wildcard": {
            "query": { "message": "<partial string*>" }
          }
        }
      ],
      "minimum_should_match": "50%"
    }
  }
}
The text inside <> is replaced with my search string.
You need to replace query with match in your bool/should clause:
> { "query": {
> "bool": {
> "must": {
> "fuzzy": {
> "message": "<fuzzy string>",
> "fuzziness": "auto"
> }
> },
> "should": [
> {"match": {"message": "<string>"}}, <-- here
> {"match": {"message": "<string>"}}, <-- and here
> {"wildcard": {"query": {"message": "<partial string*>"}}}
> ],
> "minimum_should_match": "50%"
> } } }

Reading JSON file from R

I am trying to read a JSON file from R using rjson but keep getting errors. I validated the JSON file using various online validators. Here is the content of the JSON file:
{
  "scenarios": [
    {
      "files": {
        "type1": "/home/blah/Desktop/temp/scen_0.type1",
        "type2": "/home/blah/Desktop/temp/scen_0.type2"
      },
      "ID": "scen_0",
      "arr": [],
      "TypeToElementStatsFilename": {
        "type1": "/home/blah/Desktop/temp/scen_0.type1.elements",
        "type2": "/home/blah/Desktop/temp/scen_0.type2.elements"
      }
    }
  ],
  "randomSeed": "39327314969888",
  "zone": {
    "length": 1000000,
    "start": 1
  },
  "instanceFilename": "/home/blah/bloo/data/XY112.zip",
  "txtFilename": "/home/blah/bloo/data/XY112.txt",
  "nSimulations": 2,
  "TypeTodbFilename": {
    "type1": "/home/blah/bloo/data/map.type1.oneAmb.XY112.out"
  },
  "arr": {
    "seg11": {
      "length": 1000,
      "start": 147000
    },
    "seg12": {
      "length": 1000,
      "start": 153000
    },
    "seg5": {
      "length": 1000,
      "start": 145000
    },
    "seg6": {
      "length": 1000,
      "start": 146000
    },
    "seg1": {
      "length": 100,
      "start": 20000
    }
  },
  "outPath": "/home/blah/Desktop/temp",
  "instanceID": "XY112",
  "arrIds": [
    "seg5",
    "seg6",
    "seg1",
    "seg11",
    "seg12"
  ],
  "truth": {
    "files": {
      "type1": "/home/blah/Desktop/temp/truth.type1",
      "type2": "/home/blah/Desktop/temp/truth.type2"
    },
    "ID": "truth",
    "TypeToElementStatsFilename": {
      "type1": "/home/blah/Desktop/temp/truth.type1.elements",
      "type2": "/home/blah/Desktop/temp/truth.type2.elements"
    }
  }
}
And the error:
> json_file <- "~/json"
> json_data <- fromJSON(paste(readLines(json_file), collapse=""))
Error in fromJSON(paste(readLines(json_file), collapse = "")) :
unexpected character: :
rjson freaks out about empty arrays:
fromJSON( '{ "arr": [ ] }')
Error in fromJSON("{ \"arr\": [ ] }") : unexpected character: :
You can try the fromJSON function in the RJSONIO package hosted at http://www.omegahat.org. It seems to read the file fine.
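For example, a minimal sketch with RJSONIO (assuming the package is installed; the ~/json path is taken from the question):
library(RJSONIO)
json_data <- fromJSON("~/json")  # unlike rjson, this parses the empty "arr": [] without error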
There's a fix for this.
Create a new function to replace the existing getURL function used in RCurl, and you should have your solution:
myGetURL <- function(...) {
  rcurlEnv <- getNamespace("RCurl")
  mapUnicodeEscapes <- get("mapUnicodeEscapes", rcurlEnv)
  unlockBinding("mapUnicodeEscapes", rcurlEnv)
  assign("mapUnicodeEscapes", function(str) str, rcurlEnv)
  on.exit({
    assign("mapUnicodeEscapes", mapUnicodeEscapes, rcurlEnv)
    lockBinding("mapUnicodeEscapes", rcurlEnv)
  }, add = TRUE)
  return(getURL(...))
}
Test:
> json <- myGetURL("http://abicky.net/hatena/rcurl/a.json")
> cat(json, fill = TRUE)
{"a":"\\\"\u0030\\\""}
> fromJSON(json)
$a
[1] "\\\"0\\\""