Display images using Base64 and Node.js - mysql

I'm trying to get some blob files (images) and then display them on the screen using base64.
This is my node.js code:
var queryimage = "SELECT iproduct FROM images";
connection.query(queryimage, function (err, rows, fields) {
    socket.emit('image_prova', new Buffer(rows, 'binary').toString('base64'));
});
Then I'm receiving the supposed string:
websocket.on('image_prova', function (data) {
    $('#imagehere').append('<img src="data:image/jpeg;base64,' + data + '" />');
});
The image is not being displayed and the string given is: AA==
I don't understand why!

You are passing rows instead of rows[0].iproduct to Buffer. You only requested one image, but the result is still an array of row objects, so you need to access the row (and then the column) you actually want.
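For reference, a minimal sketch of the corrected emit (assuming the mysql driver, which returns BLOB columns as Buffers, and that you want the first row):
var queryimage = "SELECT iproduct FROM images";
connection.query(queryimage, function (err, rows, fields) {
    if (err || !rows.length) return; // handle errors/empty results as needed
    // BLOB columns arrive as Buffers, so the value can be base64-encoded directly
    socket.emit('image_prova', rows[0].iproduct.toString('base64'));
});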
If that doesn't work, let me know.

Related

How do I parse an HTML page using Node.js to find a QR code?

I want to parse a web page, searching for QR codes in the page. When I find them, I am going to read them using the QRcode npm module.
The hard part is that I don't know how to parse the HTML page in a way that detects only the image tags that contain a QR code.
I tried finding some kind of pattern in the images that contain a QR code; the src usually starts with "?qr", but I think the ending is different every time.
I'm using the request-promise module to get the raw HTML, and then I parse through it:
const rp = require('request-promise');
const url = 'https://en.wikipedia.org/wiki/List_of_Presidents_of_the_United_States';
rp(url)
    .then(function (html) {
        // success!
        console.log(html);
    })
    .catch(function (err) {
        // handle error
    });
I want to be able to download the image of the QRcode.
You need to pass the returned HTML into something like https://www.npmjs.com/package/node-html-parser:
const rp = require('request-promise');
const parser = require('node-html-parser');
const url = 'https://en.wikipedia.org/wiki/List_of_Presidents_of_the_United_States';
rp(url)
    .then(function (html) {
        const data = parser.parse(html);
        console.log(JSON.stringify(data));
    })
    .catch(function (err) {
        // handle error
    });
Then you can access things off the data object to find the QR code, as in the sketch below.
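For example, a rough sketch that collects the src of every <img> that looks like a QR code (the '?qr' substring check is taken from the question and is an assumption; adjust it to the real pattern):
const rp = require('request-promise');
const parser = require('node-html-parser');
const url = 'https://en.wikipedia.org/wiki/List_of_Presidents_of_the_United_States';
rp(url)
    .then(function (html) {
        const root = parser.parse(html);
        // keep only the <img> tags whose src looks like a QR code
        const qrSources = root.querySelectorAll('img')
            .map(function (img) { return img.getAttribute('src'); })
            .filter(function (src) { return src && src.indexOf('?qr') !== -1; });
        console.log(qrSources);
        // each URL can now be downloaded and fed to the QRcode module
    })
    .catch(function (err) {
        // handle error
    });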

Edit on Express outputting JSON to database field

Trying to create my first simple CRUD in Express JS, and I can't seem to find this annoying bug.
When I try to update a field, the JSON from that field gets output to the view instead of the new data.
Screenshot: http://i59.tinypic.com/wi5yj4.png
Controller gist: https://gist.github.com/tiansial/2ce28e3c9a25b251ff7c
The update method is used for finding and updating documents without returning the documents that are updated. Basically, what you're doing is finding documents without updating them, since the first parameter of the update function is the search criteria. You need to use the save function to update an existing document, after updating its properties.
Your code below, modified (not tested):
//PUT to update a blob by ID
.put(function (req, res) {
    // find the document by ID
    mongoose.model('Email').findById(req.id, function (err, email) {
        // add some logic to handle err
        if (email) {
            // Get our REST or form values. These rely on the "name" attributes
            email.email = req.body.email;
            email.password = req.body.password;
            email.servico = req.body.servico;
            // save the updated document
            email.save(function (err) {
                if (err) {
                    res.send("There was a problem updating the information to the database: " + err);
                }
                else {
                    // HTML responds by going back to the page, or you can be fancy
                    // and create a new view that shows a success page.
                    res.format({
                        html: function () {
                            res.redirect("/emails");
                        },
                        // JSON responds showing the updated values
                        json: function () {
                            res.json(email);
                        }
                    });
                }
            });
        }
    });
})
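Alternatively, Mongoose can find and update in one step with findByIdAndUpdate; a rough sketch (untested, like the original; note the { new: true } option, so the updated document is returned rather than the old one):
//PUT to update a blob by ID, in one round trip
.put(function (req, res) {
    mongoose.model('Email').findByIdAndUpdate(
        req.id,
        {
            email: req.body.email,
            password: req.body.password,
            servico: req.body.servico
        },
        { new: true }, // return the document as it looks after the update
        function (err, email) {
            if (err) {
                return res.send("There was a problem updating the information to the database: " + err);
            }
            res.json(email);
        }
    );
})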

fs.createReadStream to read an object and then pipe it to a writable file stream?

Currently I have a module pulling sql results like this:
[{ID: 'test', NAME: 'stack'},{ID: 'test2', NAME: 'stack'}]
I want to literally have that written to a file so I can read it back as an object later, but I want to write it via a stream, because some of the objects are really, really huge and keeping them in memory isn't working anymore.
I am using mssql https://www.npmjs.org/package/mssql
and I am stuck here:
request.on('recordset', function (result) {
    console.log(result);
});
How do I stream this out to a writable stream? I see options for object mode, but I can't seem to figure out how to set it.
request.on('recordset', function (result) {
    var readable = fs.createReadStream(result),
        writable = fs.createWriteStream("loadedreports/bot" + x[6]);
    readable.pipe(writable);
});
This just errors, because createReadStream expects a file path...
Am I on the right track here, or do I need to do something else?
You're almost on the right track: you just don't need a readable stream, since your data already arrives in chunks.
Also, create the writable stream OUTSIDE of the actual 'recordset' event handler, otherwise you would create a new stream every time you get a new chunk (and that is not what you want).
Try it like this:
var writable = fs.createWriteStream("loadedreports/bot" + x[6]);
request.on('recordset', function (result) {
    // a plain object can't be written to a file stream directly, so serialize it
    writable.write(JSON.stringify(result));
});
EDIT
If the recordset is already too big, use the row event instead:
request.on('row', function (row) {
    // same idea, but one row at a time
    writable.write(JSON.stringify(row) + '\n');
});
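Putting it together, a sketch of the streaming mode the mssql docs describe (request.stream = true makes the driver emit row/error/done events instead of buffering the whole result), writing one JSON object per line so the file can be read back row by row later:
var fs = require('fs');
var sql = require('mssql');

var writable = fs.createWriteStream('loadedreports/bot.ndjson');

var request = new sql.Request(connection); // 'connection' as in your existing code
request.stream = true; // emit rows as events instead of collecting them in memory
request.query('SELECT ID, NAME FROM yourtable'); // hypothetical query

request.on('row', function (row) {
    writable.write(JSON.stringify(row) + '\n'); // newline-delimited JSON
});

request.on('error', function (err) {
    console.error(err);
});

request.on('done', function () {
    writable.end(); // all rows streamed; close the file
});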

How to write and immediately read a file in Node.js

I have to obtain a JSON that is embedded inside a script tag on a certain page... so I can't use regular scraping techniques, like cheerio.
Easy way out: write the file (download the page) to the server and then read it, using string manipulation to extract the JSONs (there are several), work on them, and happily save to my db.
The thing is that I'm too new to Node.js and can't get the code to work. I think I'm trying to read the file before it is fully written, and if I read it too early I obtain [object Object]...
Here's what I have so far...
var http = require('http');
var fs = require('fs');
var request = require('request');

var localFile = 'tmp/scraped_site_.html';
var url = "siteToBeScraped.com/?searchTerm=foobar";

// writing
var file = fs.createWriteStream(localFile);
var request = http.get(url, function (response) {
    response.pipe(file);
});

// reading
var readedInfo = fs.readFileSync(localFile, function (err, content) {
    callback(url, localFile);
    console.log("READING: " + localFile);
    console.log(err);
});
So first of all I think you should understand what went wrong.
The HTTP request operation is asynchronous. This means that the callback code in http.get() will run sometime in the future, but fs.readFileSync, due to its synchronous nature, will execute and complete even before the HTTP request has actually been sent, since both are invoked in what is commonly known as the same tick. Also, fs.readFileSync returns a value and does not use a callback.
Even if you replace fs.readFileSync with fs.readFile, the code still might not work properly, since the readFile operation might execute before the HTTP response has been fully read from the socket and written to disk.
I strongly suggest reading: stackoverflow question and/or Understanding the node.js event loop
The correct place to invoke the file read is when the response stream has finished writing to the file, which would look something like this:
var request = http.get(url, function (response) {
    response.pipe(file);
    file.once('finish', function () {
        // 'utf8' is assumed here; fill in whatever encoding the page uses
        fs.readFile(localFile, 'utf8', function (err, data) {
            // do something with the data if there is no error
        });
    });
});
Of course this is a very raw and not recommended way to write asynchronous code but that is another discussion altogether.
Having said that, if you download a file, write it to disk and then read it all back into memory for manipulation, you might as well forgo the file part and just read the response into a string right away. Your code would then look something like this (it can be implemented in several ways):
var request = http.get(url, function (response) {
    var data = '';
    function read() {
        var chunk;
        while (chunk = response.read()) {
            data += chunk;
        }
    }
    response.on('readable', read);
    response.on('end', function () {
        console.log('[%s]', data);
    });
});
What you really should do, IMO, is create a transform stream that extracts the data you need from the response without consuming too much memory, yielding this more elegant-looking code:
var request = http.get(url, function (response) {
    response.pipe(yourTransformStream).pipe(file);
});
Implementing this transform stream, however, might prove slightly more complex. So if you're a node beginner and you don't plan on downloading big files or lots of small files, then maybe loading the whole thing into memory and doing string manipulations on it is simpler.
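For the curious, here is a minimal sketch of such a transform stream in the classic style (this one only upper-cases what passes through; a real extractor would also keep a tail of the previous chunk so a match spanning two chunks is not missed):
var stream = require('stream');
var util = require('util');

function Extractor(options) {
    stream.Transform.call(this, options);
}
util.inherits(Extractor, stream.Transform);

Extractor.prototype._transform = function (chunk, encoding, done) {
    // placeholder transformation; replace with your actual extraction logic
    this.push(chunk.toString().toUpperCase());
    done();
};

var yourTransformStream = new Extractor();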
For further information about transformation streams:
node.js stream api
this wonderful guide by substack
this post from strongloop
Lastly, see if you can use any of the million Node.js crawlers already out there :-) Take a look at these search results on npm.
According to the http module docs, 'get' does not return the response body.
This is modified from the request example on the same page.
What you need to do is process the response within the callback (function) passed into http.request, so it can be called when it is ready (async):
var http = require('http');
var fs = require('fs');

var localFile = 'tmp/scraped_site_.html';
var file = fs.createWriteStream(localFile);

var req = http.request('http://www.google.com.au', function (res) {
    res.pipe(file);
    res.on('end', function () {
        file.end();
        fs.readFile(localFile, function (err, buf) {
            console.log(buf.toString());
        });
    });
});

req.on('error', function (e) {
    console.log('problem with request: ' + e.message);
});

req.end();
EDIT
I updated the example to read the file after it is created. This works by attaching a callback to the end event of the response, which closes the file and then reopens it for reading. Alternatively you can use
req.on('data', function(chunk){...})
to process the data as it arrives, without putting it into a temporary file.
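That variant, sketched out (accumulating the chunks into a single string and using it once the response ends):
var http = require('http');

var req = http.request('http://www.google.com.au', function (res) {
    var body = '';
    res.on('data', function (chunk) {
        body += chunk; // accumulate as the chunks arrive
    });
    res.on('end', function () {
        // the full response body is now available as one string
        console.log(body.length);
    });
});
req.on('error', function (e) {
    console.log('problem with request: ' + e.message);
});
req.end();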
My impression is that you are trying to pull a JSON-serialized JS object out of a stream that's downloading a file containing HTML. This is doable, yet hard. It's difficult to know when your search expression has been found, because if you parse the chunks as they come in, you never know whether you received only part of the context: what you're looking for could be split across two or more chunks that are never analyzed as a whole.
You could try something like this:
http.request('u/r/l', function (res) {
    res.on('data', function (data) {
        // parse data as it comes in
    });
});
This allows you to read the data as it comes in. You can handle it to save to disk or a db, or even parse it, if you accumulate the contents within the script tags into a single string and then parse the objects out of that.
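As a sketch of that last idea (the regular expression is illustrative only; adapt it to the actual markup of the page):
var http = require('http');

http.request('http://siteToBeScraped.com/?searchTerm=foobar', function (res) {
    var body = '';
    res.on('data', function (chunk) {
        body += chunk;
    });
    res.on('end', function () {
        // naive extraction: grab the contents of every <script> tag
        var scripts = body.match(/<script[^>]*>([\s\S]*?)<\/script>/g) || [];
        scripts.forEach(function (script) {
            // locate the embedded object here and JSON.parse it
        });
    });
}).end();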

jQuery: How to replace content with JSON response

I am having difficulty replacing the content of an HTML element with a JSON object property. Here's my code:
url = '/blah/blah-blah';
data = $.getJSON(url);
$(this).parent('.status').replaceWith(data.content);
Now, I know that the correct JSON object is being returned and that it includes a properly formatted property called 'content'. (I am displaying it in the console). Secondly, I know that I am selecting the correct element to replace. (If I replace data.content with 'bingo!' I see the text displayed on screen.)
When I run the code above, however, I see the content of my element replaced with nothing. What am I doing wrong?
Note that I tried replacing data.content with data.responseJSON.content, but that didn't help.
Thanks!
You need to use a callback:
url = '/blah/blah-blah';
$.getJSON(url, function (data) {
    $("some selector").parent('.status').replaceWith(data.content);
});
In your example, $.getJSON doesn't return the data; it returns a jqXHR object, not the parsed response. Meanwhile, it makes your request. When getJSON succeeds, the result is passed to a handling function, which does things with it. If you don't provide a callback, nothing will happen when you get a response back from the server.
Or, if you don't want to use a new selector, you can save $(this):
url = '/blah/blah-blah';
item = $(this);
$.getJSON(url, function (data) {
    item.parent('.status').replaceWith(data.content);
});
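Since $.getJSON returns a jqXHR object, which implements the promise interface, the same thing can also be written without an inline callback, with .fail() giving you error handling for free:
url = '/blah/blah-blah';
var item = $(this); // capture the element before the async call
$.getJSON(url)
    .done(function (data) {
        item.parent('.status').replaceWith(data.content);
    })
    .fail(function (jqxhr, textStatus, error) {
        // handle the failed request
    });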
The AJAX call is asynchronous, so the content hasn't arrived yet when you try to use it. By the time you inspect the response in the console, it has long since arrived, which hides the fact that it wasn't there when your next line of code ran.
Use a callback in the getJSON call to handle the data when it arrives:
url = '/blah/blah-blah';
var item = $(this); // 'this' will not refer to the element inside the callback, so capture it first
$.getJSON(url, function (data) {
    item.parent('.status').replaceWith(data.content);
});
Your code is executing before the .getJSON(url) call has completed. Try specifying a success handler, like so:
var item = $(this); // capture the element; 'this' changes inside the callback
$.getJSON(url, function (data) {
    item.parent('.status').replaceWith(data.content);
});