node express: Issue with res.render() after writing file to disk

I'm writing a method that uses async/await and promises to write some JSON to a file and then render a Pug template. But for some reason the code that writes the JSON conflicts with the res.render() method, and the browser ends up unable to connect to the server.
The weird thing is that I don't get any errors in the console, and the JSON file is generated as expected — the page just won't render.
I'm using the fs-extra module to write to disk.
const fse = require('fs-extra');

exports.testJSON = async (req, res) => {
    await fse.writeJson('./data/foo.json', {Key: '123'})
        .then(function() {
            console.log('JSON updated.');
        })
        .catch(function(err) {
            console.error(err);
        });
    res.render('frontpage', {
        title: 'JSON Updated...',
    });
}
I'm starting to think there is something fundamental I'm not getting about promises, writing to disk and/or Express's res.render() method. It's worth noting that res.send() works fine.
I've also tried a different NPM module to write the file (write-json-file). It gave me the exact same issue.
UPDATE:
So I'm an idiot. The problem has nothing to do with Express or the JSON file. It has to do with the fact that I'm running nodemon to automatically restart the server when files are changed. So as soon as the JSON file was saved, the server would restart, interrupting the render of the page. Apologies to the awesome people trying to help me anyway. You still helped me get to the problem, so I really appreciate it!

Here's the actual problem:
The OP is running nodemon to restart the server whenever it sees file changes, and this is what stops the code from running: as soon as the JSON file is generated, the server restarts.
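One common way around this (a sketch only; the data path matches the example above, and the exact setup is an assumption) is to tell nodemon not to watch the generated files, so saving the JSON no longer triggers a restart. For example, with a nodemon.json next to package.json:
{
    "ignore": ["data/*.json"]
}
The same effect can be had on the command line with nodemon --ignore 'data/*.json'.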
Efforts to troubleshoot:
It's going to take some troubleshooting to figure this out, and since I need to show you code, I will put it in an answer even though I don't yet know what is causing the problem. I'd suggest you fully instrument things with this code:
const fse = require('fs-extra');

exports.testJSON = async (req, res) => {
    try {
        console.log(`1:cwd - ${process.cwd()}`);
        await fse.writeJson('./data/foo.json', {Key: '123'})
            .then(function() {
                console.log('JSON updated.');
            }).catch(function(err) {
                console.error(err);
            });
        console.log(`2:cwd - ${process.cwd()}`);
        console.log("about to call res.render()");
        res.render('frontpage', {title: 'JSON Updated...'}, (err, html) => {
            if (err) {
                console.log(`res.render() error: ${err}`);
                res.status(500).send("render error");
            } else {
                console.log("res.render() success 1");
                console.log(`render length: ${html.length}`);
                console.log(`render string (first part): ${html.slice(0, 20)}`);
                res.send(html);
                console.log("res.render() success 2");
            }
        });
        console.log("after calling res.render()");
    } catch(e) {
        console.log(`exception caught: ${e}`);
        res.status(500).send("unknown exception");
    }
}


Nextjs/Vercel error: only absolute URLs are supported. Where is the '.json' in my route params coming from?

The Problem:
I'm new to Next.js (1 month) and Vercel (1 day), and between them something appears to be inserting .json into my URLs on the search route, causing them to fail with this error:
[GET] /_next/data/9MJcw6afNEM1L-eax6OWi/search/hand.json?term=hand
10:21:52:87
Function Status:None
Edge Status:500
Duration:292.66 ms
Init Duration: 448.12 ms
Memory Used:88 MB
ID:fra1:fra1::ldzhz-1644484912454-0a30b71b6c90
User Agent:Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:97.0) Gecko/20100101 Firefox/97.0
TypeError: Only absolute URLs are supported
at getNodeRequestOptions (/var/task/node_modules/next/dist/compiled/node-fetch/index.js:1:61917)
at /var/task/node_modules/next/dist/compiled/node-fetch/index.js:1:63448
at new Promise (<anonymous>)
at Function.fetch [as default] (/var/task/node_modules/next/dist/compiled/node-fetch/index.js:1:63382)
at fetchWithAgent (/var/task/node_modules/next/dist/server/node-polyfill-fetch.js:38:39)
at getServerSideProps (/var/task/.next/server/chunks/730.js:238:28)
at Object.renderToHTML (/var/task/node_modules/next/dist/server/render.js:566:26)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at async doRender (/var/task/node_modules/next/dist/server/base-server.js:855:38)
at async /var/task/node_modules/next/dist/server/base-server.js:950:28
2022-02-10T09:21:53.788Z 994c9544-0bbe-4a68-af83-f0e4c322151e ERROR
Error: Your `getServerSideProps` function did not return an object. Did you forget to add a `return`?
at Object.renderToHTML (/var/task/node_modules/next/dist/server/render.js:592:19)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at async doRender (/var/task/node_modules/next/dist/server/base-server.js:855:38)
at async /var/task/node_modules/next/dist/server/base-server.js:950:28
at async /var/task/node_modules/next/dist/server/response-cache.js:63:36 { page: '/search/[term]'}
RequestId: 994c9544-0bbe-4a68-af83-f0e4c322151e Error: Runtime exited with error: exit status 1
Runtime.ExitError
Though the browser says https://.../search/hand as it should.
No such thing is happening on my local server build though, and it works perfectly well.
Background/Code Snippets:
The search route is the only route that uses SSR, and it is also the only route with this issue. It is a dynamic route, so it seems that either Next in production or Vercel expects some kind of JSON for it (presumably pre-rendered content) and is replacing the route URL with a .json one.
Also I have had to use the VERCEL_URL environment variable to prepare a URL for fetch requests, so this may also be messing up the URL, but the .json in the error message makes me think otherwise, since search should not be pre-rendered.
The Page Structure For the Search Route (Index imports the component in [term] and defines its own getServerSideProps to accommodate a search route without a param):
|- Search
    |- [term].js
    |- Index.js
The Code For [term].js:
...

export default function Search({ results, currentSearch }) {
    ...
}

export async function getServerSideProps(req) {
    const { criteria, page } = req.query;
    const { term } = req.params || { term: '' };
    try {
        const data = await fetch(`${process.env.VERCEL_URL}/api/search/${term}?criteria=${criteria || 'name'}&page=${page}`);
        const searchRes = await data.json();
        return {
            props: {
                results: searchRes.data,
                currentSearch: searchRes.query
            }
        }
    } catch (e) {
        console.log(e)
    }
}
Index.js is similar:
import Search from "./[term]";

export default Search;

export async function getServerSideProps(req) {
    const { criteria, page } = req.query;
    const { term } = req.params || { term: '' };
    if (!term) {
        return {
            props: {
                results: [],
                currentSearch: {}
            }
        }
    }
    try {
        const data = await fetch(`${process.env.VERCEL_URL}/api/search/${term}?criteria=${criteria || 'name'}&page=${page}`);
        const searchRes = await data.json();
        return {
            props: {
                results: searchRes.data,
                currentSearch: searchRes.query
            }
        }
    } catch (e) {
        console.log(e)
    }
}
The API I'm trying to fetch from is confirmed to be working, so this problem is strictly regarding pages, or .json being provided to the fetch method from router params.
It turns out that VERCEL_URL is not actually an absolute URL (it does not include a protocol). I had to deploy console.log statements to find this. A little embarrassed that I missed it in the docs.
The .json was not actually in the query or params, and therefore not in the fetch request. The fetch failed because the URL had no protocol.
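A minimal sketch of the fix, assuming VERCEL_URL holds only the deployment host (the localhost fallback is my own assumption for local builds):
// VERCEL_URL is just a host (e.g. my-app.vercel.app), so prepend a protocol before fetching
const base = process.env.VERCEL_URL
    ? `https://${process.env.VERCEL_URL}`
    : 'http://localhost:3000'; // assumed fallback for local development
const data = await fetch(`${base}/api/search/${term}?criteria=${criteria || 'name'}&page=${page}`);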
The .json in the page URL must come from Next's internal operations, and does not mean the page is being built ahead of time. Yes, it is rendered using some JSON, but my assumption that the JSON indicates a pre-rendered page (SSG/ISR) was wrong. Server-side rendering also makes use of such JSON, but only at runtime, when the request is made.
The .json after the param slug in the GET requests for a page has no bearing on the internal flow of your app, provided it has worked correctly. If you see it in error messages, know that it comes from Next and examine other parts of the code at the point of failure.
The page structure I attempted ([param].js + index.js in the same directory) is fine, which is why my local build worked properly.
I wanted to delete this question because a thorough look at the docs would have revealed the solution, but the mistake itself is easily made, and some of the conclusions listed above (particularly the one about JSON being used in all Next routes) could save debugging time for other new users of Next/Vercel.

How to get around previously declared json body-parser in Nodebb?

I am writing a private plugin for NodeBB (open forum software). In NodeBB's webserver.js file there is a line that seems to be hogging all incoming JSON data.
app.use(bodyParser.json(jsonOpts));
I am trying to keep the raw data of all incoming JSON for one of my endpoints. However, the challenge is that I cannot remove or modify the line above.
The following code works ONLY if I temporarily remove the line above.
var rawBodySaver = function (req, res, buf, encoding) {
    if (buf && buf.length) {
        req.rawBody = buf.toString(encoding || 'utf8');
    }
}

app.use(bodyParser.json({ verify: rawBodySaver }));
However, as soon as I put the app.use(bodyParser.json(jsonOpts)); middleware back into the webserver.js file, it stops working. So it seems like body-parser only runs the first parser that matches the incoming content type and then skips all the rest?
How can I get around that? I could not find any information in their official documentation.
Any help is greatly appreciated.
UPDATE:
The problem I am trying to solve is correctly handling an incoming Stripe webhook event. The official Stripe documentation suggests the following:
// Match the raw body to content type application/json
app.post('/webhook', bodyParser.raw({type: 'application/json'}),
    (request, response) => {
        const sig = request.headers['stripe-signature'];
        let event;
        try {
            event = stripe.webhooks.constructEvent(request.body, sig, endpointSecret);
        } catch (err) {
            return response.status(400).send(`Webhook Error: ${err.message}`);
        }
Both methods, the original at the top of this post and the official Stripe-recommended way, construct the Stripe event correctly, but only if I remove the middleware in webserver.js. So my understanding now is that you cannot have multiple middlewares handle the same incoming data. I don't have much wiggle room with the first middleware, except that I can modify the argument (jsonOpts) passed to it, which comes from a .json file. I tried adding a verify field, but I couldn't figure out how to add a function as its value. I hope this makes sense, and sorry for not stating the problem I am trying to solve initially.
The only solution I can find without modifying the NodeBB code is to insert your middleware in a convenient hook (that will be later than you want) and then hack into the layer list in the app router to move that middleware earlier in the app layer list to get it in front of the things you want to be in front of.
This is a hack so if Express changes their internal implementation at some future time, then this could break. But, if they ever changed this part of the implementation, it would likely only be in a major revision (as in Express 4 ==> Express 5) and you could just adapt the code to fit the new scheme or perhaps NodeBB will have given you an appropriate hook by then.
The basic concept is as follows:
Get the router you need to modify. It appears it's the app router you want for NodeBB.
Insert your middleware/route as you normally would to allow Express to do all the normal setup for your middleware/route and insert it in the internal Layer list in the app router.
Then, reach into the list, take it off the end of the list (where it was just added) and insert it earlier in the list.
Figure out where to put it earlier in the list. You probably don't want it at the very start of the list because that would put it after some helpful system middleware that makes things like query parameter parsing work. So, the code looks for the first middleware that has a name we don't recognize from the built-in names we know and insert it right after that.
Here's the code for a function to insert your middleware.
function getAppRouter(app) {
    // History:
    // Express 4.x throws when accessing app.router and the router is on app._router
    // But, the router is lazy initialized with app.lazyrouter()
    // Express 5.x again supports app.router
    // And, it handles the lazy construction of the router for you
    let router;
    try {
        router = app.router; // Works for Express 5.x, Express 4.x will throw when accessing
    } catch(e) {}
    if (!router) {
        // Express 4.x
        if (typeof app.lazyrouter === "function") {
            // make sure router has been created
            app.lazyrouter();
        }
        router = app._router;
    }
    if (!router) {
        throw new Error("Couldn't find app router");
    }
    return router;
}

// insert a method on the app router near the front of the list
function insertAppMethod(app, method, path, fn) {
    let router = getAppRouter(app);
    let stack = router.stack;

    // allow function to be called with no path
    // as insertAppMethod(app, method, fn);
    if (typeof path === "function") {
        fn = path;
        path = null;
    }

    // add the handler to the end of the list
    if (path) {
        app[method](path, fn);
    } else {
        app[method](fn);
    }

    // now remove it from the stack
    let layerObj = stack.pop();

    // now insert it near the front of the stack,
    // but after a couple pre-built middlewares installed by Express itself
    let skips = new Set(["query", "expressInit"]);
    for (let i = 0; i < stack.length; i++) {
        if (!skips.has(stack[i].name)) {
            // insert it here before this item
            stack.splice(i, 0, layerObj);
            break;
        }
    }
}
You would then use this from any NodeBB hook that provides you the app object sometime during startup to insert your method. It will create your /webhook route handler and then insert it earlier in the layer list (before the other body-parser middleware).
let rawMiddleware = bodyParser.raw({type: 'application/json'});

insertAppMethod(app, 'post', '/webhook', (request, response, next) => {
    rawMiddleware(request, response, (err) => {
        if (err) {
            next(err);
            return;
        }
        const sig = request.headers['stripe-signature'];
        let event;
        try {
            event = stripe.webhooks.constructEvent(request.body, sig, endpointSecret);
            // you need to either call next() or send a response here
        } catch (err) {
            return response.status(400).send(`Webhook Error: ${err.message}`);
        }
    });
});
The bodyParser.json() middleware does the following:
Check the content type of an incoming request to see if it is application/json.
If it is that type, then read the body from the incoming stream to get all the data from the stream.
When it has all the data from the stream, parse it as JSON and put the result into req.body so follow-on request handlers can access the already-read and already-parsed data there.
Because it reads the data from the stream, the stream is then empty. Unless the middleware saves the raw data somewhere (I haven't looked to see if it does), the original raw data is gone; it has already been read from the stream. This is why you can't have multiple different middlewares all trying to process the same request body: whichever one goes first reads the data from the incoming stream, and the original data is then no longer in the stream.
To help you find a solution, we need to know what end problem you're really trying to solve. You will not be able to have two middlewares that both look for the same content type and both read the request body. You could, however, replace bodyParser.json() with a single middleware that does both what it does now and whatever else you need, but not with separate middlewares.
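As a sketch of what such a combined middleware could look like (assuming you were able to change the options passed to bodyParser.json(); verify is a real body-parser option, the rest is illustrative):
// One bodyParser.json() that parses the body and also keeps the raw bytes for later use
const jsonOpts = {
    // verify runs before parsing and receives the raw buffer
    verify: function (req, res, buf, encoding) {
        if (buf && buf.length) {
            req.rawBody = buf.toString(encoding || 'utf8');
        }
    }
};
app.use(bodyParser.json(jsonOpts));

// later, in the webhook handler, the untouched body is available for signature checks:
// const event = stripe.webhooks.constructEvent(req.rawBody, sig, endpointSecret);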

Cloud Functions for Firebase: Could not handle the request after a successful request

TL;DR: After (successfully) writing a JSON document to my Firestore, the next request gives me Internal Server Error (500). I suspect the problem is that the insert is not yet complete.
So basically, I have this code:
const jsonToDb = express();

exports.jsondb = functions.region('europe-west1').https.onRequest(jsonToDb);

jsonToDb.post('', (req, res) => {
    let doc;
    try {
        doc = JSON.parse(req.body);
    } catch(error) {
        res.status(400).send(error.toString()).end();
        return;
    }
    myDbFuncs.saveMyDoc(doc);
    res.status(201).send("OK").end();
});
The database functions are in another JS file.
module.exports.saveMyDoc = function (myDoc) {
    let newDoc = db.collection('insertedDocs').doc(new Date().toISOString());
    newDoc.set(myDoc).then().catch();
    return;
};
So I have several theories, maybe one of them is not wrong, but please help me with this. (Also if I made some mistakes in this little snippet, just tell me.)
Reproduction:
I send the first request => everything is OK, Json in the database.
I send a second request after the first one gives me an OK status => it does not do anything for a few seconds, then 500: Internal Server Error.
Logs: Function execution took 4345 ms, finished with status: 'connection error'.
I just don't understand. Let's imagine I'm using this as an API with several simultaneous requests. Can't it handle that? (I suppose it can, and I'm just doing something stupid.) I'm deliberately sending the second request after the first has finished, and this still occurs.
Should I make the saveMyDoc async?
saveMyDoc isn't returning a promise that resolves when all the async work is complete. If you lose track of a promise, Cloud Functions will shut down the work and clean up before the work is complete, making it look like it simply doesn't work. You should only send a response from an HTTP type function after all the work is fully complete.
Minimally, it should look more like this:
module.exports.saveMyDoc = function (myDoc) {
    let newDoc = db.collection('insertedDocs').doc(new Date().toISOString());
    return newDoc.set(myDoc);
};
Then you would use the promise in your main function:
myDbFuncs.saveMyDoc(doc).then(() => {
    res.status(201).send("OK").end();
});
See how the response is only sent after the data is saved.
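The same idea written with async/await (a sketch only, reusing the handler from the question; the error responses are assumptions, not part of the original answer):
jsonToDb.post('', async (req, res) => {
    let doc;
    try {
        doc = JSON.parse(req.body);
    } catch (error) {
        return res.status(400).send(error.toString());
    }
    try {
        await myDbFuncs.saveMyDoc(doc); // wait for the Firestore write to finish
        res.status(201).send("OK");
    } catch (error) {
        res.status(500).send("write failed"); // assumed error response
    }
});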
Read more about async programming in Cloud Functions in the documentation. Also watch this video series that talks about working with promises in Cloud Functions.

How to parse or stringify JSON in an asynchronous way in JavaScript

I see that JSON.stringify and JSON.parse are both synchronous.
I would like to know if there is a simple npm library that does this in an asynchronous way.
Thank you
You can make anything "asynchronous" by using Promises:
function asyncStringify(str) {
    return new Promise((resolve, reject) => {
        resolve(JSON.stringify(str));
    });
}
Then you can use it like any other promise:
asyncStringify(str).then(ajaxSubmit);
Note that because the code is not actually asynchronous, the promise will be resolved right away (there is no blocking I/O involved in stringifying JSON; it doesn't require any system call).
You can also use the async/await API if your platform supports it:
async function asyncStringify(str) {
    return JSON.stringify(str);
}
Then you can use it the same way:
asyncStringify(str).then(ajaxSubmit);
// or use the "await" API
const strJson = await asyncStringify(str);
ajaxSubmit(strJson);
Edited: One way of adding true asynchronous parsing/stringifying (maybe because we're parsing something too complex) is to pass the job to another process (or service) and wait for the response.
You can do this in many ways (like creating a separate service that exposes a REST API); here I will demonstrate a way of doing it with message passing between processes:
First create a file that will take care of doing the parsing/stringifying. Call it async-json.js for the sake of the example:
// async-json.js
function stringify(value) {
    return JSON.stringify(value);
}

function parse(value) {
    return JSON.parse(value);
}

process.on('message', function(message) {
    let result;
    if (message.method === 'stringify') {
        result = stringify(message.value);
    } else if (message.method === 'parse') {
        result = parse(message.value);
    }
    process.send({ callerId: message.callerId, returnValue: result });
});
All this process does is wait for a message asking it to stringify or parse JSON and then respond with the result.
Now, in your code, you can fork this script and send messages back and forth. Whenever a request is sent, you create a new promise; whenever a response to that request comes back, you resolve the promise:
const fork = require('child_process').fork;
const asyncJson = fork(__dirname + '/async-json.js');
const callers = {};

asyncJson.on('message', function(response) {
    callers[response.callerId].resolve(response.returnValue);
});

function callAsyncJson(method, value) {
    const callerId = parseInt(Math.random() * 1000000);
    const callPromise = new Promise((resolve, reject) => {
        callers[callerId] = { resolve: resolve, reject: reject };
        asyncJson.send({ callerId: callerId, method: method, value: value });
    });
    return callPromise;
}

function JsonStringify(value) {
    return callAsyncJson('stringify', value);
}

function JsonParse(value) {
    return callAsyncJson('parse', value);
}

JsonStringify({ a: 1 }).then(console.log.bind(console));
JsonParse('{ "a": "1" }').then(console.log.bind(console));
Note: this is just one example, but knowing this you can figure out other improvements or other ways to do it. Hope this is helpful.
Check this out, another npm package:
async-json is a library that provides an asynchronous version of the standard JSON.stringify.
Install:
npm install async-json
Example:
var asyncJSON = require('async-json');

asyncJSON.stringify({ some: "data" }, function (err, jsonValue) {
    if (err) {
        throw err;
    }
    jsonValue === '{"some":"data"}';
});
Note: I didn't test it; you need to manually check its dependencies and required packages.
By asynchronous I assume you actually mean non-blocking asynchronous; i.e., if you have a large (megabytes large) JSON string and you stringify it, you don't want your web server to hard freeze and block newly incoming web requests for 500+ milliseconds while it processes the object.
Option 1
The generic answer is to iterate through your object piece by piece, and to then call setImmediate whenever a threshold is reached. This then allows other functions in the event queue to run for a bit.
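A rough sketch of that idea, assuming the data is an array whose elements can be serialized one slice at a time (the chunk size is arbitrary):
// Stringify a large array in slices, yielding to the event loop between slices
function stringifyArrayAsync(items, chunkSize = 1000) {
    return new Promise((resolve) => {
        const parts = [];
        let index = 0;
        function work() {
            const slice = items.slice(index, index + chunkSize);
            parts.push(slice.map((item) => JSON.stringify(item)).join(','));
            index += chunkSize;
            if (index < items.length) {
                setImmediate(work); // let other queued callbacks run before the next slice
            } else {
                resolve('[' + parts.join(',') + ']');
            }
        }
        work();
    });
}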
For JSON (de)serialization, the yieldable-json library does this very well. It does however drastically sacrifice JSON processing time (which is somewhat intentional).
Usage example from the yieldable-json readme:
const yj = require('yieldable-json')
yj.stringifyAsync({key: "value"}, (err, data) => {
    if (!err)
        console.log(data)
})
Option 2
If processing speed is extremely important (such as with real-time data), you may want to consider spawning multiple Node processes instead. I've used the PM2 Process Manager with great success, although initial setup was quite daunting. Once it works, however, the final result is magic, and does not require modifying your source code, just your package.json file. It acts as a proxy, load balancer, and monitoring tool for Node applications. It's somewhat analogous to Docker swarm, but bare metal, and does not require a special client on the server.
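For reference, a minimal sketch of a PM2 cluster setup (the file name, app name and entry point are assumptions; check the PM2 docs for the options your version supports):
// ecosystem.config.js (assumed example); start it with: pm2 start ecosystem.config.js
module.exports = {
    apps: [{
        name: 'my-api',          // hypothetical app name
        script: './server.js',   // hypothetical entry point
        instances: 'max',        // one worker per CPU core
        exec_mode: 'cluster'     // cluster mode load-balances requests across the workers
    }]
};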

socketstream async call to mysql within rpc actions

First, I need to tell you that I am very new to the wonders of Node.js, SocketStream, AngularJS and JavaScript in general. I come from a Java background, which might explain my ignorance of the correct way of doing things async.
To toy around with things I installed the ss-angular-demo from americanyak. My problem now is that the RPC seems to be a synchronous interface, while my call to the MySQL database has an asynchronous interface. How can I return the database results from a call to the RPC?
Here is what I have done so far with SocketStream 0.3:
In app.js I successfully tell ss to expose my MySQL database connection by putting ss.api.add('coolStore', mysqlConn); in there at the right place (as explained in the SocketStream docs). I use the mysql npm package, so I can call MySQL within the RPC.
server/rpc/coolRpc.js
exports.actions = function (req, res, ss) {
    // use session middleware
    req.use('session');
    return {
        get: function(threshold) {
            var sql = "SELECT cool.id, cool.score, cool.data FROM cool WHERE cool.score > " + threshold;
            if (!ss.arbStore) {
                console.log("connecting to mysql arb data store");
                ss.coolStore = ss.coolStore.connect();
            }
            ss.coolStore.query(sql, function(err, rows, fields) {
                if (err) {
                    console.log("error fetching stuff", err);
                } else {
                    console.log("first row = " + rows[0].id);
                }
            });
            var db_rows = ???
            return res(null, db_rows || []);
        }
    };
};
The console logs the id of my database entry, as expected. However, I am clueless as to how I can make the RPC's return statement return the rows of my query. What is the right way of addressing this sort of problem?
Thanks for your help. Please be friendly with me, because this is also my first question on stackoverflow.
It's not synchronous. When your results are ready, you can send them back:
exports.actions = function (req, res, ss) {
    // use session middleware
    req.use('session');
    return {
        get: function(threshold) {
            ...
            ss.coolStore.query(sql, function(err, rows, fields) {
                res(err, rows || []);
            });
        }
    }
};
You need to make sure that you always call res(...) from an RPC function, even when an error occurs, otherwise you might get dangling requests (where the client code keeps waiting for a response that's never generated). In the code above, the error is forwarded to the client so it can be handled there.