JSON.parse(dt) doesn't work at all, gives every error it can imagine - json

I have this code on the server side (Node.js):
socket.on('data', function(dt){
    var rdata = dt;
    var msg = JSON.parse(rdata);
    broadcast(msg);
});
Also I tried this way: var msg = JSON.parse(dt);
dt gets either:
{"chat":"hey","nickname":"nick_name"} OR
'{"chat":"hey","nickname":"nick_name"}'
Also I have this on the client side (AS3); I tried both:
var msg = JSON.stringify({nickname: nname.text, chat: input_txt.text}); OR
var msg = "'" + JSON.stringify({nickname: nname.text, chat: input_txt.text}) + "'";
This is what the console gives:
undefined:1
{"chat":"hey","nickname":"nick_name"}
^
SyntaxError: Unexpected token
DEBUG: Program node app exited with code 8
Also, in some other situations, it gives all kinds of messages.
I just have no idea what is going on.
BTW, I also tried JSONStream; it still doesn't work.

What kind of socket exactly are you using? If you are using a WebSocket, you might have already received an object as a response (I think most frameworks do so). If you are using a plain net.Socket, you might be receiving a buffer, or the data in chunks and not all at once. This seems like an appropriate fix for that situation:
var buffer = ''; // initialize to '' so the first concatenation doesn't produce "undefined..."
socket.setEncoding('utf8');
socket.on('data', function(data) {
    buffer += data;
});
socket.on('end', function() {
    var object = JSON.parse(buffer);
});

The unexpected token at the end of the data string is some ghost symbol that is not whitespace. trim() doesn't remove it, so substringing off the last symbol is what works. This is an AS3 symbol, so we have to keep it. First you save this symbol in a new variable, then you erase it from the line. After that you can parse the string and work with it.
undefined:1
{"chat":"hey","nickname":"nick_name"}
^
SyntaxError: Unexpected token
DEBUG: Program node app exited with code 8
When you finish working with it, stringify the object, add the ghost symbol back to the end, and send it over the socket. Without this symbol AS3 will not parse the data.
I don't know why that is, but it works for me.
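A minimal sketch of that round trip on the Node side, assuming the ghost symbol is a single trailing character (likely the null byte that AS3's XMLSocket appends to every message):
socket.on('data', function(dt) {
    var raw = dt.toString();
    var ghost = raw.charAt(raw.length - 1);    // save the trailing symbol
    var msg = JSON.parse(raw.slice(0, -1));    // parse the string without it
    // ... work with msg ...
    socket.write(JSON.stringify(msg) + ghost); // add the symbol back before sending
});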

Related

How to parse newline-delimited JSON in Angular 2

I am writing an Angular 2 app (built with angular cli) and trying to use the AWS Polly text-to-speech API.
According to the API you can request audio output as well as "Speech Marks", which can describe word timing, visemes, etc. The audio is delivered in "mp3" format, and the speech marks as "application/x-json-stream", which I understand to be newline-delimited JSON. It cannot be parsed with JSON.parse() due to the newlines. I have so far been unable to read/parse this data. I have looked at several libs for "JSON streaming", but they are all built for Node.js and won't work with Angular 2. My code is as follows...
onClick() {
    AWS.config.region = 'us-west-2';
    AWS.config.accessKeyId = 'xxxxx';
    AWS.config.secretAccessKey = 'yyyyy';
    let polly = new AWS.Polly();
    var params = {
        OutputFormat: 'json',
        Text: 'Hello world',
        VoiceId: 'Joanna',
        SpeechMarkTypes: ['viseme']
    };
    polly.synthesizeSpeech(params, (err, data) => {
        if (err) {
            console.log(err, err.stack);
        } else {
            var uInt8Array = new Uint8Array(data.AudioStream);
            var arrayBuffer = uInt8Array.buffer;
            var blob = new Blob([arrayBuffer]);
            var url = URL.createObjectURL(blob);
            this.audio.src = url;
            this.audio.play(); // works fine
            // speech marks info displays "application/x-json-stream"
            console.log(data.ContentType);
        }
    });
}
Strangely enough, the Chrome browser knows how to read this data and displays it in the response.
Any help would be greatly appreciated.
I had the same problem. I saved the file so I could then read it line by line, accessing the JSON objects when I needed to highlight the words being read. Mind you, this is probably not the most effective way, but it was an easy way to move on and get working on the fun stuff.
I am trying out different ways to work with Polly and will update this answer if I find a better way.
You can do it with:
https://www.npmjs.com/package/ndjson-parse
That worked for me.
But I can't play the audio; I tried your code and it says:
DOMException: Failed to load because no supported source was found.
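For anyone who wants to parse the speech marks without a library, here is a minimal sketch of handling newline-delimited JSON by hand, assuming text holds the payload already decoded to a string:
var marks = text
    .split('\n')                                       // one JSON object per line
    .filter(function(line) { return line.trim(); })    // drop blank lines
    .map(function(line) { return JSON.parse(line); }); // parse each line on its own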

cocos2d-js: Error: Invalid Native Object using setPhysicsBody

I'm trying to implement the Chipmunk physics engine for a cocos2d-js game. I'm getting the following error when I run it:
jsb: ERROR: File Y:\Documents\cocos\PrebuiltRuntimeJs\frameworks\js-bindings\bindings\auto\jsb_cocos2dx_auto.cpp: Line: 2143, Function: js_cocos2dx_Node_setPhysicsBody
Invalid Native Object
JS: D:/PROJECTS/cocos/Sliderule/runtime/win32/../../src/app.js:32:Error: Invalid Native Object
Here is the code I'm working with:
init: function () {
    this._super();
    var size = cc.winSize;
    this.rect1 = new cc.Sprite(res.null_png, cc.rect(0, 0, 200, 25));
    this.rect1.setColor(cc.color(255, 50, 50, 1));
    this.rect1.setPosition(size.width/2, size.height - 12.5);
    this.rect1._setAnchorX(0.5);
    this.rect1._setAnchorY(0.5);
    this.rectbody1 = new cp.Body(1, cp.momentForBox(1, this.rect1.getContentSize().width, this.rect1.getContentSize().height));
    this.rectbody1.p = cc.p(size.width/2, size.height - 12.5);
    this.space.addBody(this.rectbody1);
    this.rectshape1 = new cp.BoxShape(this.rectbody1, this.rect1.getContentSize().width - 14, this.rect1.getContentSize().height);
    this.space.addShape(this.rectshape1);
    this.rect1.setPhysicsBody(this.rectbody1);
    this.addChild(this.rect1, 1);
}
I get the problem when setting the body to the sprite. Thanks in advance.
This error message usually appears because of a missing retain(). You have to explicitly tell the native system (Android, iOS) to keep the sprite, otherwise it is no longer valid after some time. And then, when you don't need it anymore, release it.
Try:
this.rect1.retain()
after you created the sprite. And then
this.rect1.release()
when you don't need it anymore.
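Applied to the code in the question, that might look like:
this.rect1 = new cc.Sprite(res.null_png, cc.rect(0, 0, 200, 25));
this.rect1.retain();  // keep the native object alive beyond the current frame
// ... set up the body/shape and use the sprite ...
// later, when the sprite is removed for good:
this.rect1.release();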

How to write and immediately read a file in Node.js

I have to obtain some JSON that is embedded inside a script tag on a certain page... so I can't use regular scraping techniques, like cheerio.
The easy way out: write the file (download the page) to the server and then read it, using string manipulation to extract the JSON snippets (there are several), work on them, and save them to my DB happily.
The thing is that I'm too new to Node.js and can't get the code to work. I think I'm trying to read the file before it is fully written, and if I read it too early I obtain [object Object]...
Here's what I have so far...
var http = require('http');
var fs = require('fs');
var request = require('request');
var localFile = 'tmp/scraped_site_.html';
var url = "siteToBeScraped.com/?searchTerm=foobar"
// writing
var file = fs.createWriteStream(localFile);
var request = http.get(url, function(response) {
    response.pipe(file);
});
// reading
var readedInfo = fs.readFileSync(localFile, function (err, content) {
    callback(url, localFile);
    console.log("READING: " + localFile);
    console.log(err);
});
So first of all I think you should understand what went wrong.
The http request operation is asynchronous. This means that the callback code in http.get() will run sometime in the future, but fs.readFileSync, due to its synchronous nature, will execute and complete even before the http request is actually sent to the background thread that will execute it, since they are both invoked in what is commonly known as the same tick. Also, fs.readFileSync returns a value and does not take a callback.
Even if you replace fs.readFileSync with fs.readFile, the code still might not work properly, since the readFile operation might execute before the http response is fully read from the socket and written to disk.
I strongly suggest reading this stackoverflow question and/or Understanding the node.js event loop.
The correct place to invoke the file read is when the response stream has finished writing to the file, which would look something like this:
var request = http.get(url, function(response) {
    response.pipe(file);
    file.once('finish', function () {
        fs.readFile(localFile, /* fill encoding here */, function(err, data) {
            // do something with the data if there is no error
        });
    });
});
Of course this is a very raw and not recommended way to write asynchronous code but that is another discussion altogether.
Having said that, if you download a file, write it to disk, and then read it all back into memory for manipulation, you might as well forgo the file part and just read the response into a string right away. Your code would then look something like this (this can be implemented in several ways):
var request = http.get(url, function(response) {
    var data = '';
    function read() {
        var chunk;
        while (chunk = response.read()) {
            data += chunk;
        }
    }
    response.on('readable', read);
    response.on('end', function () {
        console.log('[%s]', data);
    });
});
What you really should do, IMO, is create a transform stream that strips away the data you need from the response while not consuming too much memory, yielding this more elegant-looking code:
var request = http.get(url, function(response) {
    response.pipe(yourTransformStream).pipe(file)
});
Implementing this transform stream, however, might prove slightly more complex (a sketch follows). So if you're a node beginner and you don't plan on downloading big files or lots of small files, then maybe loading the whole thing into memory and doing string manipulations on it is simpler.
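Here is a minimal sketch of such a transform stream; the script-matching regex is deliberately naive and just stands in for whatever extraction logic you actually need:
var stream = require('stream');
var util = require('util');

function ScriptExtractor() {
    stream.Transform.call(this);
    this.tail = ''; // data that may still contain an unfinished <script> block
}
util.inherits(ScriptExtractor, stream.Transform);

ScriptExtractor.prototype._transform = function(chunk, enc, done) {
    this.tail += chunk.toString();
    var re = /<script[^>]*>([\s\S]*?)<\/script>/g;
    var match;
    // push the body of every complete <script> block downstream
    while ((match = re.exec(this.tail)) !== null) {
        this.push(match[1]);
    }
    // keep only what follows the last complete block
    var cut = this.tail.lastIndexOf('</script>');
    if (cut !== -1) {
        this.tail = this.tail.slice(cut + '</script>'.length);
    }
    done();
};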
For further information about transformation streams:
node.js stream api
this wonderful guide by substack
this post from strongloop
Lastly, see if you can use any of the million node.js crawlers already out there :-) take a look at these search results on npm
According to the http module docs, get does not return the response body.
This is modified from the request example on the same page.
What you need to do is process the response within the callback (function) passed into http.request, so it can be called when the response is ready (async).
var http = require('http')
var fs = require('fs')
var localFile = 'tmp/scraped_site_.html'
var file = fs.createWriteStream(localFile)
var req = http.request('http://www.google.com.au', function(res) {
    res.pipe(file)
    res.on('end', function() {
        file.end()
        fs.readFile(localFile, function(err, buf) {
            console.log(buf.toString())
        })
    })
})
req.on('error', function(e) {
    console.log('problem with request: ' + e.message)
})
req.end();
EDIT
I updated the example to read the file after it is created. This works by having a callback on the end event of the response, which closes the pipe, after which the file can be reopened for reading. Alternatively you can use
res.on('data', function(chunk){...})
to process the data as it arrives, without putting it into a temporary file.
My impression is that you are trying to extract a JSON-serialized JS object from a stream that is downloading an HTML file. This is doable, yet hard. It's difficult to know when your search expression has been found, because if you parse the chunks as they come in, you never know whether you received only part of the context: what you're looking for could have been split into two or more parts that were never analyzed as a whole.
You could try something like this:
var req = http.request('u/r/l', function(res) {
    res.on('data', function(data) {
        // parse data as it comes in
    });
});
req.end();
This allows you to read the data as it comes in. You can handle it to save to disk or a DB, or even parse it, if you accumulate the contents within the script tags into a single string and then parse the objects in that.
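A minimal sketch of that accumulate-then-parse approach; the extraction regex is hypothetical and depends entirely on how the page embeds its JSON:
var html = '';
res.on('data', function(chunk) {
    html += chunk; // accumulate the whole page first
});
res.on('end', function() {
    // hypothetical: grab a JSON object literal assigned inside a <script> tag
    var m = /<script[^>]*>[\s\S]*?=\s*(\{[\s\S]*?\});[\s\S]*?<\/script>/.exec(html);
    if (m) {
        var obj = JSON.parse(m[1]);
        // work with obj, save it to the db, etc.
    }
});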

Issues when reading a string from TCP socket in Node.js

I've implemented a client/server pair that communicates using a TCP socket. The data that I'm writing to the socket is stringified JSON. Initially everything works as expected; however, as I increase the rate of writes I eventually encounter JSON parse errors, where the client receives the beginning of the next write tacked onto the end of the old one.
Here is the server code:
var data = {};
data.type = 'req';
data.id = 1;
data.size = 2;
var string = JSON.stringify(data);
client.write(string, callback);
Here is how I am receiving this data on the client side:
client.on('data', function(req) {
    var data = req.toString();
    try {
        json = JSON.parse(data);
    } catch (err) {
        console.log("JSON parse error: " + err);
    }
});
The error that I'm receiving as the rate increases is:
SyntaxError: Unexpected token {
This appears to be the beginning of the next request being tacked onto the end of the current one.
I've tried using ; as a delimiter on the end of each JSON request and then using:
var data = req.toString().substring(0,req.toString().indexOf(';'));
However this approach, instead of resulting in JSON parse errors, seems to result in completely missing some requests on the client side as I increase the rate of writes above 300 per second.
Are there any best practices or more efficient ways to delimit incoming requests via TCP sockets?
Thanks!
Thanks everyone for the explanations; they helped me to better understand the way data is sent and received over TCP sockets. Below is a brief overview of the code that I used in the end:
var chunk = "";
client.on('data', function(data) {
chunk += data.toString(); // Add string on the end of the variable 'chunk'
d_index = chunk.indexOf(';'); // Find the delimiter
// While loop to keep going until no delimiter can be found
while (d_index > -1) {
try {
string = chunk.substring(0,d_index); // Create string up until the delimiter
json = JSON.parse(string); // Parse the current string
process(json); // Function that does something with the current chunk of valid json.
}
chunk = chunk.substring(d_index+1); // Cuts off the processed chunk
d_index = chunk.indexOf(';'); // Find the new delimiter
}
});
Comments welcome...
You're on the right track with using a delimiter. However, you can't just extract the stuff before the delimiter, process it, and then discard what came after it. You have to buffer up whatever you got after the delimiter and then concatenate what comes next to it. This means that you could end up with any number (including 0) of JSON "chunks" after a given data event.
Basically you keep a buffer, which you initialize to "". On each data event you concatenate whatever you receive to the end of the buffer and then split the buffer on the delimiter. The result will be one or more entries, but the last one might not be complete, so you need to test the buffer to make sure it ends with your delimiter. If not, you pop the last result and set your buffer to it. You then process whatever results remain (which might not be any). A sketch of this follows.
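A minimal sketch of that split-based buffering, reusing the hypothetical process() handler from the answer above:
var buffer = '';
client.on('data', function(data) {
    buffer += data.toString();
    var parts = buffer.split(';');
    // If the buffer ended with a delimiter, pop() yields '';
    // otherwise it yields the incomplete tail, which we keep for next time.
    buffer = parts.pop();
    parts.forEach(function(part) {
        if (part.length > 0) process(JSON.parse(part));
    });
});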
Be aware that TCP does not make any guarantees about where it divides the chunks of data you receive. All it guarantees is that all the bytes you send will be received in order, unless the connection fails entirely.
I believe Node data events come in whenever the socket says it has data for you. Technically you could get separate data events for each byte in your JSON data and it would still be within the limits of what the OS is allowed to do. Nobody does that, but your code needs to be written as if it could suddenly start happening at any time to be robust. It's up to you to combine data events and then re-split the data stream along boundaries that make sense to you.
To do that, you need to buffer any data that isn't "complete", including data appended to the end of a chunk of "complete" data. If you're using a delimiter, never throw away any data after the delimiter; always keep it around as a prefix until you eventually see either another delimiter or the end event.
Another common choice is to prefix all data with a length field. Say you use a fixed 64-bit binary value. Then you always wait for 8 bytes, plus however many more the value in those bytes indicates, to arrive. Say you had a chunk of ten bytes of data incoming. You might get 2 bytes in one event, then 5, then 4 -- at which point you can parse the length and know you need 7 more, since the last 3 bytes of the third chunk were payload. If the next event actually contains 25 bytes, you'd take the first 7 along with the 3 from before and parse that, and look for another length field in the following 8 bytes.
That's a contrived example, but be aware that at low traffic rates, the network layer will generally send your data out in whatever chunks you give it, so this sort of thing only really starts to show up as you increase the load. Once the OS starts building packets from multiple writes at once, it will start splitting on a granularity that is convenient for the network and not for you, and you have to deal with that.
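A minimal sketch of length-prefixed framing, using a 4-byte big-endian length for brevity (the 64-bit variant works the same way); handleMessage() is a hypothetical handler:
var pending = Buffer.alloc(0);
client.on('data', function(chunk) {
    pending = Buffer.concat([pending, chunk]);
    // extract frames while a complete header + payload is buffered
    while (pending.length >= 4) {
        var size = pending.readUInt32BE(0);
        if (pending.length < 4 + size) break; // wait for the rest to arrive
        var payload = pending.slice(4, 4 + size);
        pending = pending.slice(4 + size);
        handleMessage(JSON.parse(payload.toString('utf8')));
    }
});

// sender side: prefix every stringified message with its byte length
function send(socket, obj) {
    var body = Buffer.from(JSON.stringify(obj), 'utf8');
    var header = Buffer.alloc(4);
    header.writeUInt32BE(body.length, 0);
    socket.write(Buffer.concat([header, body]));
}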
Following the delimiter-based code from the accepted answer above,
I get a problem with the delimiter because ; was part of my sent data.
It is possible to use this update in order to implement a custom delimiter :
var chunk = "";
const DELIMITER = (';;;');
client.on('data', function(data) {
chunk += data.toString(); // Add string on the end of the variable 'chunk'
d_index = chunk.indexOf(DELIMITER); // Find the delimiter
// While loop to keep going until no delimiter can be found
while (d_index > -1) {
try {
string = chunk.substring(0,d_index); // Create string up until the delimiter
json = JSON.parse(string); // Parse the current string
process(json); // Function that does something with the current chunk of valid json.
}
chunk = chunk.substring(d_index+DELIMITER.length); // Cuts off the processed chunk
d_index = chunk.indexOf(DELIMITER); // Find the new delimiter
}
});
I know this question is old, but I have an answer for the people still looking at this.
As said in the answers above, the data event will be fired with a Node.js Buffer containing the data received.
res.on('data', function(chunk) {
    // chunk contains the data
})
This next part doesn't seem to be commonly known: the end event is fired when all data is consumed, and the close event is fired when the client disconnects.
res.on('end', function() {
    // the response body has been consumed
})
The full code to get the entire body is below
var body = Buffer.from('');
res.on('data', function(chunk) {
    if (chunk && chunk.byteLength > 0) {
        body = Buffer.concat([body, chunk]);
    }
})
res.on('end', function() {
    var data = JSON.parse(body.toString());
    // data contains the response json
})
Try parsing on the end event instead of in the data handler:
var data = '';
client.on('data', function (chunk) {
    data += chunk.toString();
});
client.on('end', function () {
    // use try/catch here: if someone sends you something else for fun, your server can crash
    data = JSON.parse(data);
});
Hope this helps.

JSONP and invalid label

Using MooTools and JsonP I get an "invalid label" error in the Firefox error console.
JsonP seems to work (I get the data correctly):
{"jsondata":[{"title":"title1","link":"http://xxxx.xxx.xxx","thumbsrc":"http://xxxx.xxx.xxx/17_t.jpg" ,"description":".......","pubDate":"2009-03-09 06:26:00",},{"title":"title2","link":"http://xxxx.xxx.xxx","thumbsrc":"http://xxxx.xxx.xxx/16_t.jpg" ,"description":".......","pubDate":"2009-03-09 06:08:09",}]}
but I get the invalid label error on "jsondata"
the same file works fine with Request.JSON
comma removed... nothing
this is the code I'm using
window.addEvent('domready', function() {
    var gallery = $('gallery');
    new JsonP('http://myjsoncodeurl', {
        onComplete: function(jsonObj) {
            addImages(jsonObj.jsondata);
        }
    }).request();
    var addImages = function(images) {
        images.each(function(image) {
            var el = new Element('div', {'class': 'item'});
            var name = new Element('h3').inject(el);
            var a1 = new Element('a', {'href': image.link, 'html': image.title}).inject(name);
            var desc = new Element('span', {'html': image.description}).inject(name, 'after');
            var a2 = new Element('a', {'href': image.link}).inject(desc, 'after');
            var img = new Element('img', {'src': image.thumbsrc}).inject(a2);
            el.inject(gallery);
        });
    };
});
it works with a normal Request.JSON, but JSONP doesn't like my code :(
the same file works fine with
request.json
With JSONP, your response should be returning a JavaScript function call (i.e. callback) with the JSON data passed in as the argument. If your response is plain old JSON text, it won't work in the context of JSONP. You have to tailor your backend to accept a callback argument and call that callback with the JSON data.
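For example, if the client requests http://myjsoncodeurl?callback=handleData, the server should reply with a function call rather than bare JSON (handleData being whatever callback name the client supplied):
handleData({"jsondata":[ ... ]});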
You need to put brackets (normal ones, not curly ones) around your object, because sometimes JavaScript gets horribly confused and thinks you're doing a label statement, a statement type that I didn't know existed until I Googled this problem.
https://developer.mozilla.org/en/Core_JavaScript_1.5_Guide/Statements#label_Statement
Try passing your object, {"jsondata":[ ... ]} , as ({"jsondata":[ ... ]}) instead. That seems to sort it.
Putting it in here:
http://json.parser.online.fr/
shows that it's valid, but has the extra comma (which will bork IE, although FF should handle it). If removing the comma doesn't fix it, you'll need to post more of your code to help us find the error.
This could be due to the extra commas after the dates.
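For reference, the response with both fixes applied (trailing commas removed and, for the label issue, the object wrapped in parentheses) would look like:
({"jsondata":[
    {"title":"title1","link":"http://xxxx.xxx.xxx","thumbsrc":"http://xxxx.xxx.xxx/17_t.jpg","description":".......","pubDate":"2009-03-09 06:26:00"},
    {"title":"title2","link":"http://xxxx.xxx.xxx","thumbsrc":"http://xxxx.xxx.xxx/16_t.jpg","description":".......","pubDate":"2009-03-09 06:08:09"}
]})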