I just have the two lines of code below in my index.html. I want the first line to finish executing before the second line runs. How can I ensure that? Currently, "undefined" appears in the console for dataDB, because getDataFunction() takes some time.
var dataDB = getDataFunction(afterDate, toDate, afterTime, toTime);
console.log("Content of dataDB: " + dataDB);
Probably an easy question for you :-) I appreciate your help!
UPDATE: getDataFunction()
This function just gets some data (collection+json) from a server with d3 (Data-Driven Documents). The parameters are used to identify the data of interest (time frame).
function getDataFunction(afterDate, toDate, afterTime, toTime) {
    d3.json("http://server...", function(error, data) {
        if (error) {
            console.log(error);
        } else {
            console.log(data);
            dataDB = data.collection.items;
            console.log(dataDB);
        }
    });
}
D3 API reference
The API reference confirms that the request is indeed performed asynchronously, so execution of the rest of the code (in this case the console.log) proceeds immediately. There are no decent ways to make JavaScript wait; the best thing to do is to redesign the code so that your callback function takes care of whatever needs to come next.
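For example, a minimal sketch of that redesign, assuming the D3 v4 callback API used in the question (onData is a hypothetical callback name, not part of the original code):

function getDataFunction(afterDate, toDate, afterTime, toTime, onData) {
    d3.json("http://server...", function(error, data) {
        if (error) return console.log(error);
        // hand the result to the callback instead of trying to return it
        onData(data.collection.items);
    });
}

getDataFunction(afterDate, toDate, afterTime, toTime, function(dataDB) {
    console.log("Content of dataDB: " + dataDB); // runs once the data has arrived
});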
Related
I am trying to load a GeoJSON file and to draw some graphics using it as a basis with D3 v5.
The problem is that the browser is skipping over everything included inside the d3.json() call. I tried inserting breakpoints to test but the browser skips over them and I cannot figure out why.
Code snippet below.
d3.json("/trip_animate/tripData.geojson", function(data) {
console.log("It just works"); // This never logs to console.
//...all the rest
}
The code continues on from the initial console.log(), but I omitted all of it since I suspect the issue is with the d3.json call itself.
The signature of d3.json has changed from D3 v4 to v5. It has been moved from the now-deprecated d3-request module to the new d3-fetch module. As of v5, D3 uses the Fetch API in place of the older XMLHttpRequest and has in turn adopted Promises to handle those asynchronous requests.
The second argument to d3.json() is no longer the callback handling the request but an optional RequestInit object. d3.json() now returns a Promise, which you can handle in its .then() method.
Your code thus becomes:
d3.json("/trip_animate/tripData.geojson")
.then(function(data){
// Code from your callback goes here...
});
Error handling of the call has also changed with the introduction of the Fetch API. Versions prior to v5 used the first parameter of the callback passed to d3.json() to handle errors:
d3.json(url, function(error, data) {
    if (error) throw error;
    // Normal handling beyond this point.
});
Since D3 v5 the promise returned by d3.json() will be rejected if an error is encountered. Hence, vanilla JS methods of handling those rejections can be applied:
1. Pass a rejection handler as the second argument: .then(onFulfilled, onRejected).
2. Use .catch(onRejected) to add a rejection handler to the promise.
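For reference, the first option would look like this (same handlers, just passed as the second argument to .then()):

d3.json("/trip_animate/tripData.geojson")
    .then(function(data) {
        // Code from your callback goes here...
    }, function(error) {
        // Do some error handling.
    });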
Applying the second solution, your code thus becomes:
d3.json("/trip_animate/tripData.geojson")
.then(function(data) {
// Code from your callback goes here...
})
.catch(function(error) {
// Do some error handling.
});
Since none of the answers helped, I had to find a working solution on my own. I am using v4 and have to stick with it. The problem (in my case) was that d3.json worked the first time, but did not work the second or third time (driven by an HTML dropdown).
The idea is to keep the initial callback-based function, and then to use a second function with
let data = await d3.json("URL");
instead of
d3.json("URL", function(data) {
Therefore, the general pattern becomes:
async function drawWordcloudGraph() {
    let data = await d3.json("URL");
    ...
}

function initialFunction() {
    d3.json("URL", function (data) {
        ...
    });
}

initialFunction();
I have tried several approaches, and only this one worked. I am not sure whether it can be simplified, so please test it on your own.
I have a script that asks for user input to download (or not) a file. It's fairly straightforward, but I have a problem with the following piece of code. If the user chooses "No", the else if condition works fine and the code finishes its expected execution. But if the user chooses "Yes", the file gets downloaded but I get the following error:
UnhandledPromiseRejectionWarning: TypeError: Promise resolver [object Array] is not a function
I probably need to learn more about Promises, but I share the section of the code that fails in case I am making an obvious mistake that I fail to see.
async function download_fallo(page) {
    if (download == "Y") {
        await new Promise([
            page.click('div > div.col-xs-12.col-sm-11 > div.row > div.col-sm-4.col-lg-3 > a'),
            //page.wait({ waitUntil: 'networkidle0' }) // does not work either
            //page.wait(2000) // UnhandledPromiseRejectionWarning page.wait is not a function...
        ]);
        return console.log("Perfect")
    } else if (download == "N") {
        console.log("Just the information then!")
    }
}
Thanks guys, I was making obvious mistakes and you clarified them. I was not using Promise.all and made a mistake with page.wait; both were pointed out. With that corrected, the code works. I post it below in case someone finds it useful: it's a simple if condition tied to a readline user input to download (or not) a PDF file from a website.
async function download_fallo(page) {
    if (download == "Y") {
        await Promise.all([
            page.click('div > div.col-xs-12.col-sm-11 > div.row > div.col-sm-4.col-lg-3 > a'),
            page.waitFor(2000)
        ]);
        return console.log("Perfect")
    } else if (download == "N") {
        console.log("Just the information then!")
    }
}
I don't really know what you need because I would need more context, but if you want to execute an array of promises you could try
Promise.all([ promise1, promise2 ])
This method takes an array of promises as input and returns a single Promise as output. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/all
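A minimal sketch of that behavior with plain promises (nothing puppeteer-specific):

Promise.all([
    Promise.resolve(1),
    new Promise(function(resolve) { setTimeout(function() { resolve(2); }, 100); })
]).then(function(values) {
    console.log(values); // [1, 2] - logged only once every promise has resolved
});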
I am not 100% sure your script does what you expect it to do, but page.wait is a non-existent puppeteer method. You need page.waitFor if you want to wait for / pause the script for a certain number of milliseconds.
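For example (page.waitFor existed in the puppeteer versions of that era; newer releases renamed it to page.waitForTimeout):

// inside an async function:
await page.waitFor(2000); // pause the script for 2000 ms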
I am creating an element (a router, but that is not important) that scans the DOM soon after it has been attached, looking for particular other custom elements. In certain cases it needs to throw an error, and I want to test for these.
The test I constructed is not working; as far as I can make out, the test has already finished before my element gets attached. I suspect it is the asynchronous nature of things.
Here is the snippet of the test in question. The test fixture contains elements that will cause one of them to fail after a 'dom-change' event fires (which the element listens for), when it then scans the dom for other things.
it('should fail if two route elements both designate themselves as home', function(done) {
    var t = document.getElementById('multiple_home');
    function multiple() {
        t.create();
    }
    expect(multiple).to.throw(Error);
    t.restore();
    done();
});
I think the problem is related to the fact that the fixture is created in multiple, but hasn't yet failed by the time multiple exits. I am wondering if I can pass a Promise to expect - except I am not sure how to turn multiple into a Promise to try it out.
I eventually found a way, but it requires instrumenting the element a bit to support this.
In the element's "created" callback I create a Promise and store the two functions that resolve and reject it in "this" variables, thus:
this.statusPromise = new Promise(function(resolve, reject) {
    this.statusResolver = resolve;
    this.statusRejector = reject;
}.bind(this));
In the DOM parsing section I use a try/catch block like this:
try {
    // parse the dom, throwing errors if anything bad happens
    this.statusResolver('Any useful value I like');
} catch (error) {
    this.statusRejector(error);
}
I then made a function that returns the promise
domOK: function() {
    return this.statusPromise;
}
Finally, in my test I was able to do something like this (I load the fixture in each test, rather than in a beforeEach, because I am using a different fixture for each test; I do clear it down again in an afterEach). Note the use of the .then and .catch functions of the Promise.
it('should fail if two route elements declare the same path name', function(done) {
    t = document.getElementById('multiple_path');
    t.create();
    r = document.getElementById('router');
    r.domOK().then(function(status) {
        // We should not get here, throw an error
        assert.fail('Did not error - status is: ' + status);
        done();
    }).catch(function(error) {
        expect(error.message).to.equal('There are two nodes with the same name: /user');
        done();
    });
});
I'm trying to understand ES6 Promises. As I understood it, they can be chained to execute sequentially. That does not work in my case.
console.log("Started");
function doStuff(num, timeout) {
return new Promise(function(resolve, reject) {
setTimeout(function() {
console.log("now " + num);
resolve();
}, timeout);
});
}
doStuff(1, 3000).then(doStuff(2, 2000)).then(doStuff(3, 1000));
However the output is:
$ node test
Started
now 3
now 2
now 1
I was expecting the reverse order. I do understand why it comes out like this: they are all started at once and finish in "reverse" order.
But the thing is, I thought the second was not executed until the first had finished, and so on. What am I missing?
If you write it like this, the three calls to doStuff all start immediately, as soon as that line runs. You have to write it like this:
doStuff(1, 3000).then(function() {
    return doStuff(2, 2000);
}).then(function() {
    return doStuff(3, 3000);
});
As loganfsmyth said, if you are using ES6, you can also use arrow functions:
doStuff(1, 3000).then(() => doStuff(2, 2000)).then(() => doStuff(3, 3000));
Isn't there a typo? You should chain the then part to the doStuff call, perhaps like this:
doStuff(1, 3000).then(function() {
    doStuff(2, 2000).then(function() {
        doStuff(3, 1000);
    });
});
Timeouts in JavaScript are asynchronous. The way you have it written now, all three promises are created immediately, and each setTimeout just queues its inner code to run after the given duration. Starting a timeout is not the same as it resolving: the setTimeout call itself is "done" as soon as the timer is scheduled. That's why the second and third promises don't have to wait for the line console.log("now " + num); to execute before being kicked off.
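In other words, .then() expects a function, and a non-function argument is simply ignored, so passing it the result of calling doStuff (a Promise) means the timer is already running. A minimal sketch of the difference:

doStuff(1, 3000).then(doStuff(2, 2000));  // doStuff(2, ...) runs now; .then() ignores the Promise it returns
doStuff(1, 3000).then(function() {        // the function itself runs only after 1 resolves
    return doStuff(2, 2000);
});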
See this answer https://stackoverflow.com/a/19626821/2782404 for some background on asynchronous tasks in js.
I have to obtain a JSON object that is embedded inside a script tag on a certain page, so I can't use regular scraping techniques like cheerio.
The easy way out: write the file (download the page) to the server, then read it and use string manipulation to extract the JSON objects (there are several), work on them, and save them to my db happily.
The thing is that I'm too new to Node.js and can't get the code to work. I think I'm trying to read the file before it is fully written, and if I read it too early I get [object Object]...
Here's what I have so far...
var http = require('http');
var fs = require('fs');
var request = require('request');

var localFile = 'tmp/scraped_site_.html';
var url = "siteToBeScraped.com/?searchTerm=foobar";

// writing
var file = fs.createWriteStream(localFile);

var request = http.get(url, function(response) {
    response.pipe(file);
});

// reading
var readedInfo = fs.readFileSync(localFile, function(err, content) {
    callback(url, localFile);
    console.log("READING: " + localFile);
    console.log(err);
});
So first of all I think you should understand what went wrong.
The http request operation is asynchronous. This means that the callback passed to http.get() will run some time in the future, while fs.readFileSync, due to its synchronous nature, will execute and complete before the http request has even been sent, since both are invoked in what is commonly known as the same tick. Also, fs.readFileSync returns a value and does not take a callback.
Even if you replace fs.readFileSync with fs.readFile, the code still might not work properly, since the readFile operation might execute before the http response has been fully read from the socket and written to disk.
I strongly suggest reading: stackoverflow question and/or Understanding the node.js event loop
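To see the ordering concretely, a minimal sketch (the URL is a placeholder):

var http = require('http');

http.get('http://example.com/', function(response) {
    console.log('second: the response callback runs on a later tick');
});
console.log('first: this line runs before the response ever arrives');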
The correct place to invoke the file read is when the response stream has finished writing to the file, which would look something like this:
var request = http.get(url, function(response) {
    response.pipe(file);
    file.once('finish', function() {
        fs.readFile(localFile, 'utf8' /* fill in the encoding you need */, function(err, data) {
            // do something with the data if there is no error
        });
    });
});
Of course this is a very raw and not recommended way to write asynchronous code but that is another discussion altogether.
Having said that, if you download a file, write it to disk and then read it all back into memory for manipulation, you might as well forgo the file part and just read the response into a string right away. Your code would then look something like this (this can be implemented in several ways):
var request = http.get(url, function(response) {
    var data = '';

    function read() {
        var chunk;
        while (chunk = response.read()) {
            data += chunk;
        }
    }

    response.on('readable', read);
    response.on('end', function() {
        console.log('[%s]', data);
    });
});
What you really should do, IMO, is create a transform stream that strips away the data you need from the response while not consuming too much memory, yielding this more elegant-looking code:
var request = http.get(url, function(response) {
    response.pipe(yourTransformStream).pipe(file);
});
Implementing this transform stream, however, might prove slightly more complex. So if you're a node beginner and you don't plan on downloading big files or lots of small files, then maybe loading the whole thing into memory and doing string manipulations on it is simpler.
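For illustration, here is a rough sketch of such a transform stream in the classic pre-ES6 style (ScriptJsonExtractor is a hypothetical name; for simplicity it buffers the whole page, which a truly memory-frugal implementation would avoid by scanning across chunk boundaries):

var stream = require('stream');
var util = require('util');

function ScriptJsonExtractor() {
    stream.Transform.call(this);
    this.buffer = '';
}
util.inherits(ScriptJsonExtractor, stream.Transform);

ScriptJsonExtractor.prototype._transform = function(chunk, encoding, done) {
    this.buffer += chunk.toString();
    done();
};

ScriptJsonExtractor.prototype._flush = function(done) {
    // naive extraction: push whatever sits inside the first script tag
    var match = this.buffer.match(/<script[^>]*>([\s\S]*?)<\/script>/);
    if (match) this.push(match[1]);
    done();
};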
For further information about transformation streams:
node.js stream api
this wonderful guide by substack
this post from strongloop
Lastly, see if you can use any of the million node.js crawlers already out there :-) Take a look at these search results on npm.
According to the http module documentation, 'get' does not return the response body.
This is modified from the request example on the same page
What you need to do is process the response within the callback (function) passed into http.request, so it can run when the response is ready (async).
var http = require('http');
var fs = require('fs');

var localFile = 'tmp/scraped_site_.html';
var file = fs.createWriteStream(localFile);

var req = http.request('http://www.google.com.au', function(res) {
    res.pipe(file);
    res.on('end', function() {
        file.end();
        fs.readFile(localFile, function(err, buf) {
            console.log(buf.toString());
        });
    });
});

req.on('error', function(e) {
    console.log('problem with request: ' + e.message);
});

req.end();
EDIT
I updated the example to read the file after it is created. This works by having a callback on the response's end event, which closes the file stream; the file can then be reopened for reading. Alternatively you can use
res.on('data', function(chunk){...})
to process the data as it arrives, without putting it into a temporary file.
My impression is that you're trying to extract a JSON object embedded in an HTML page that you're reading from a download stream. This is doable yet hard. It's difficult to know when your search expression has been found, because if you parse the chunks as they come in, the text you're looking for may have been split across two or more chunks that are never analyzed as a whole.
You could try something like this:
var req = http.request('u/r/l', function(res) {
    res.on('data', function(data) {
        // parse data as it comes in
    });
});
req.end();
This allows you to read the data as it comes in. You can save it to disk or a db, or even parse it, provided you accumulate the contents of the script tags into a single string and then parse the objects out of that.
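A minimal sketch of that accumulate-then-parse approach (the URL and the var data = pattern are hypothetical; adapt the regular expression to whatever the script tag on your page actually looks like):

var http = require('http');

var req = http.request('http://example.com/page.html', function(res) {
    var html = '';
    res.on('data', function(chunk) {
        html += chunk; // accumulate the whole page first
    });
    res.on('end', function() {
        // assumes the page embeds its data as: <script>var data = {...};</script>
        var match = html.match(/<script[^>]*>\s*var data = ([\s\S]*?);\s*<\/script>/);
        if (match) {
            var obj = JSON.parse(match[1]);
            console.log(obj);
        }
    });
});
req.end();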