Using MongoDB for caching JSON responses

Disclaimer: I'm new to NoSQL databases; if something is unclear, I'd appreciate comments and questions to clear things up.
I'm calling some third-party web services to get JSON responses, but I need a way to cache each response to avoid repeated calls, since the response for a given remote entity never changes.
Question:
I've selected MongoDB. Is it the right tool for the job? The response entities have lengthy schemas, and Mongoose is forcing me to define one. Is there a way to avoid defining a schema and just save the response by some id and read it back later? I'd also appreciate help with the conditions for A. checking the cache and B. saving.
var mongoose = require('mongoose');
var http = require('http');

mongoose.connect('mongodb://localhost/test');

if() { // A. condition to check if a cached response is available
} else { // call the web service
    http.request({}, function (response) {
        var buffers = [];
        response.on('data', function (chunk) {
            buffers.push(chunk);
        });
        response.on('end', function () {
            var data = Buffer.concat(buffers).toString();
            // B. how to save data to a mongoose db
        });
    }).end();
}
If this approach is not right and I should be using something else, please also enlighten me.

If you don't have a schema available, then don't use Mongoose. Just store the JSON as MongoDB documents directly, using the native driver.
https://docs.mongodb.com/getting-started/node/client/
https://docs.mongodb.com/getting-started/node/update/
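For illustration, here is a minimal sketch of the cache-check (A) and save (B) steps using the native driver. The collection name responses and the helpers entityId and fetchFromWebService are assumptions for the example (fetchFromWebService stands in for the http.request code above); the database name test comes from the question's connection string.

const { MongoClient } = require('mongodb');

const client = new MongoClient('mongodb://localhost:27017');

async function getCachedResponse(entityId, fetchFromWebService) {
    await client.connect(); // resolves immediately if already connected
    const cache = client.db('test').collection('responses');

    // A. condition: is a cached response available?
    const cached = await cache.findOne({ _id: entityId });
    if (cached) {
        return cached.payload;
    }

    // cache miss: call the web service (the http.request code above),
    // then save the parsed JSON keyed by the entity's id
    const data = await fetchFromWebService(entityId);

    // B. save: upsert so concurrent misses don't cause duplicate-key errors
    await cache.updateOne(
        { _id: entityId },
        { $set: { payload: data, fetchedAt: new Date() } },
        { upsert: true }
    );
    return data;
}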

Related

How to get around previously declared json body-parser in Nodebb?

I am writing a private plugin for NodeBB (open forum software). In NodeBB's webserver.js file there is a line that seems to be hogging all incoming JSON data.
app.use(bodyParser.json(jsonOpts));
I am trying to convert all incoming JSON data for one of my endpoints into raw data. However, the challenge is that I cannot remove or modify the line above.
The following code works ONLY if I temporarily remove the line above.
var rawBodySaver = function (req, res, buf, encoding) {
    if (buf && buf.length) {
        req.rawBody = buf.toString(encoding || 'utf8');
    }
};

app.use(bodyParser.json({ verify: rawBodySaver }));
However, as soon as I put the app.use(bodyParser.json(jsonOpts)); middleware back into the webserver.js file, it stops working. So it seems like body-parser only runs the first parser that matches the incoming data type and then skips all the rest?
How can I get around that? I could not find any information in their official documentation.
Any help is greatly appreciated.
Update:
The problem I am trying to solve is to correctly handle an incoming Stripe webhook event. The official Stripe documentation suggests the following:
// Match the raw body to content type application/json
app.post('/webhook', bodyParser.raw({ type: 'application/json' }), (request, response) => {
    const sig = request.headers['stripe-signature'];
    let event;
    try {
        event = stripe.webhooks.constructEvent(request.body, sig, endpointSecret);
    } catch (err) {
        return response.status(400).send(`Webhook Error: ${err.message}`);
    }
    // ... handle the event, then respond
});
Both methods, the original at the top of this post and the official Stripe-recommended way, construct the Stripe event correctly, but only if I remove the middleware in webserver.js. So my understanding now is that you cannot have multiple middlewares handle the same incoming data. I don't have much wiggle room when it comes to the first middleware, except that I can modify the argument (jsonOpts) passed to it, which comes from a .json file. I tried adding a verify field, but I couldn't figure out how to add a function as its value. I hope this makes sense, and sorry for not stating up front what problem I am trying to solve.
The only solution I can find without modifying the NodeBB code is to insert your middleware in a convenient hook (which will be later in the stack than you want) and then hack into the layer list in the app router to move that middleware earlier in the list, in front of the handlers you need it to precede.
This is a hack so if Express changes their internal implementation at some future time, then this could break. But, if they ever changed this part of the implementation, it would likely only be in a major revision (as in Express 4 ==> Express 5) and you could just adapt the code to fit the new scheme or perhaps NodeBB will have given you an appropriate hook by then.
The basic concept is as follows:
1. Get the router you need to modify. For NodeBB, it appears to be the app router.
2. Insert your middleware/route as you normally would, letting Express do all the normal setup and append it to the internal Layer list in the app router.
3. Reach into that list, take your layer off the end (where it was just added), and insert it earlier in the list.
4. Figure out where earlier in the list to put it. You probably don't want it at the very start, because that would put it before some helpful system middleware that makes things like query parameter parsing work. So the code looks for the first middleware whose name isn't one of the built-in names we know and inserts it right after that.
Here's the code for the helper functions that insert your middleware.
function getAppRouter(app) {
    // History:
    //   Express 4.x throws when accessing app.router; the router is on app._router,
    //   but it is lazily initialized with app.lazyrouter().
    //   Express 5.x again supports app.router and handles the lazy
    //   construction of the router for you.
    let router;
    try {
        router = app.router; // works for Express 5.x; Express 4.x throws on access
    } catch (e) {}
    if (!router) {
        // Express 4.x
        if (typeof app.lazyrouter === "function") {
            // make sure the router has been created
            app.lazyrouter();
        }
        router = app._router;
    }
    if (!router) {
        throw new Error("Couldn't find app router");
    }
    return router;
}
// insert a method on the app router near the front of the list
function insertAppMethod(app, method, path, fn) {
    let router = getAppRouter(app);
    let stack = router.stack;
    // allow the function to be called with no path,
    // as insertAppMethod(app, method, fn);
    if (typeof path === "function") {
        fn = path;
        path = null;
    }
    // add the handler to the end of the list
    if (path) {
        app[method](path, fn);
    } else {
        app[method](fn);
    }
    // now remove it from the stack
    let layerObj = stack.pop();
    // and insert it near the front of the stack,
    // but after a couple of pre-built middlewares installed by Express itself
    let skips = new Set(["query", "expressInit"]);
    for (let i = 0; i < stack.length; i++) {
        if (!skips.has(stack[i].name)) {
            // insert it here, before this item
            stack.splice(i, 0, layerObj);
            break;
        }
    }
}
You would then use this to insert your method like this from any NodeBB hook that provides you the app object sometime during startup. It will create your /webhook route handler and then insert it earlier in the layer list (before the other body-parser middleware).
let rawMiddleware = bodyParser.raw({ type: 'application/json' });

insertAppMethod(app, 'post', '/webhook', (request, response, next) => {
    rawMiddleware(request, response, (err) => {
        if (err) {
            next(err);
            return;
        }
        const sig = request.headers['stripe-signature'];
        let event;
        try {
            event = stripe.webhooks.constructEvent(request.body, sig, endpointSecret);
            // you need to either call next() or send a response here
        } catch (err) {
            return response.status(400).send(`Webhook Error: ${err.message}`);
        }
    });
});
The bodyParser.json() middleware does the following:
Check the content type of an incoming request to see if it is application/json.
If it is that type, then read the body from the incoming stream to get all the data from the stream.
When it has all the data from the stream, parse it as JSON and put the result into req.body so follow-on request handlers can access the already-read and already-parsed data there.
Because it reads the data from the stream, there is no longer any data left in the stream. Unless the middleware saves the raw data somewhere (I haven't looked to see if it does), the original raw data is gone; it has already been read from the stream. This is why you can't have multiple different middlewares all trying to process the same request body: whichever one goes first reads the data from the incoming stream, and the original data is then no longer in the stream.
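To illustrate why the body can only be consumed once, here is a stripped-down sketch of that pattern. This is not body-parser's actual source, and the content-type check is deliberately simplified.

// minimal sketch: read the request stream once, parse it,
// and expose the result on req.body
function tinyJsonParser(req, res, next) {
    // simplified check; real middleware also handles charset suffixes
    if (req.headers['content-type'] !== 'application/json') {
        return next();
    }
    let raw = '';
    req.on('data', (chunk) => { raw += chunk; });
    req.on('end', () => {
        try {
            req.body = JSON.parse(raw);
            next();
        } catch (err) {
            next(err);
        }
    });
    // once 'end' fires the stream is drained, so a second middleware
    // listening for 'data' on the same request would receive nothing
}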
To help you find a solution, we need to know what end problem you're really trying to solve. You will not be able to have two middlewares both looking for the same content type and both reading the request body. You could replace bodyParser.json() with a single middleware that does both what it does now and what you need, but you can't do it with separate middlewares.

Cloud Functions for Firebase Could not handle the request after a successful request

TL;DR: After (successfully) writing a JSON document to my Firestore, the next request gives me Internal Server Error (500). I suspect the problem is that the insert is not yet complete.
So basically, I have this code:
const jsonToDb = express();

exports.jsondb = functions.region('europe-west1').https.onRequest(jsonToDb);

jsonToDb.post('', (req, res) => {
    let doc;
    try {
        doc = JSON.parse(req.body);
    } catch (error) {
        res.status(400).send(error.toString()).end();
        return;
    }
    myDbFuncs.saveMyDoc(doc);
    res.status(201).send("OK").end();
});
The database functions are in another JS file.
module.exports.saveMyDoc = function (myDoc) {
    let newDoc = db.collection('insertedDocs').doc(new Date().toISOString());
    newDoc.set(myDoc).then().catch();
    return;
};
So I have several theories; maybe one of them is right. Please help me with this. (Also, if I made some mistakes in this little snippet, just tell me.)
Reproduction:
I send the first request => everything is OK, the JSON is in the database.
I send a second request after the first returns the OK status => it does nothing for a few seconds, then 500: Internal Server Error.
Logs: Function execution took 4345 ms, finished with status: 'connection error'.
I just don't understand. Let's imagine I'm using this as an API with several simultaneous requests. Can't it handle that? (I suppose it can; I'm just doing something stupid.) Deliberately, I'm sending the second request after the first has finished, and this still occurs.
Should I make the saveMyDoc async?
saveMyDoc isn't returning a promise that resolves when all the async work is complete. If you lose track of a promise, Cloud Functions will shut down the work and clean up before the work is complete, making it look like it simply doesn't work. You should only send a response from an HTTP type function after all the work is fully complete.
Minimally, it should look more like this:
module.exports.saveMyDoc = function (myDoc) {
    let newDoc = db.collection('insertedDocs').doc(new Date().toISOString());
    return newDoc.set(myDoc);
};
Then you would use the promise in your main function:
myDbFuncs.saveMyDoc(doc).then(() => {
    res.status(201).send("OK").end();
});
See how the response is only sent after the data is saved.
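If the write can fail, you'll also want to handle a rejected promise so the client isn't left waiting for a timeout. A minimal sketch:

myDbFuncs.saveMyDoc(doc).then(() => {
    res.status(201).send("OK").end();
}).catch((error) => {
    // surface the write failure instead of letting the function time out
    res.status(500).send(error.toString()).end();
});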
Read more about async programming in Cloud Functions in the documentation. Also watch this video series that talks about working with promises in Cloud Functions.

how can I verify response against a predefined json schema in karate?

Currently, for checking the response, I use the method below:
And match response ==
"""
{
  "status": #number,
  "message": #string
}
"""
Is there any way to do like below?
And match response == someJsonSchemaDefinedInKarateConfigFile
Yes, refer to the documentation on reading files.
And match response == read('my-schema.json')
(edit): There was a comment asking how to initialize these in karate-config.js.
karate-config.js is intended for 'global' config. I really don't recommend dumping schemas there unless you are sure they will be used by almost all of your tests. But if you are reading from a file it might be OK, since it isn't a time-consuming operation; just remember that karate-config.js is re-loaded for every Scenario.
Within karate-config.js you can easily load a JSON or JS file by using karate.read(). This should answer your question:
function() {
    var config = {};
    config.mySchema = karate.read('classpath:my-schema.json');
    return config;
}
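Any feature can then reference the config variable directly, assuming my-schema.json is on the classpath:

And match response == mySchema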

PUT requests with Custom Ember-Data REST Adapter

I'm using Ember-Data 1.0.0.Beta-9 and Ember 1.7 to consume a REST API via DreamFactory's REST Platform. (http://www.dreamfactory.com).
I've had to extend the RESTAdapter in order to use DF and I've been able to implement GET and POST requests with no problems. I am now trying to implement model.save() (PUT) requests and am having a serious hiccup.
Calling model.save() sends the PUT request with the correct data to my API endpoint, and I get a 200 OK response with a JSON body of { "id": "1" }, which is what is supposed to happen. However, when I try to access the updated record, all of the properties are empty except for the ID, and the record on the server is not updated. I can take the same JSON string passed in the request, paste it into the DreamFactory Swagger API docs, and it works with no problem: the response is good and the record is updated in the DB.
I've created a JSBin to show all of the code at http://emberjs.jsbin.com/nagoga/1/edit
Unfortunately I can't have a live example as the servers in question are locked down to only accept requests from our company's public IP range.
DreamFactory provides a live demo of the API in question at
https://dsp-sandman1.cloud.dreamfactory.com/swagger/#!/db/replaceRecordsByIds
OK, in the end I discovered that you can customize the DreamFactory response by adding a ?fields=* param to the end of the PUT request. I monkey-patched that into my updateRecord method using the following:
updateRecord: function (store, type, record) {
    var data = {};
    var serializer = store.serializerFor(type.typeKey);
    serializer.serializeIntoHash(data, type, record);
    var adapter = this;
    return new Ember.RSVP.Promise(function (resolve, reject) {
        // hack to make the DSP send back the full object
        adapter.ajax(adapter.buildURL(type.typeKey) + '?fields=*', "PUT", { data: data }).then(function (json) {
            // if the request is a success we'll return the same data we passed in
            resolve(json);
        }, function (reason) {
            reject(reason.responseJSON);
        });
    });
}
And poof we haz updates!
DreamFactory supports tacking several params onto the end of requests to fully customize the response. At some point I will look into implementing this properly, but for the time being I can move forward with my project. Yay!
Ember Data is interpreting the response from the server as an empty object with an id of "1" and no other properties in it. You need to return the entire updated object back from the server, with the changes reflected.
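For context: with the classic RESTAdapter, the serializer expects the updated record back under a root key named after the type. For a hypothetical post model (field names are placeholders), the PUT response should look something like:

{
  "post": {
    "id": "1",
    "title": "Updated title",
    "body": "Updated body"
  }
}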

Creating a non-REST JSON API server with node

I am relatively new to NodeJS, but I'm porting an existing API server written in PHP to NodeJS. I started out looking at Express, but realised that with all the layout-rendering and templating stuff in Express, it wasn't suited to the task. Then I looked at Restify, but realised its REST-ness wouldn't work with the model of this API.
I don't want anything that is tied to a database, or any specific way of setting out the API endpoints. Is the best solution to fully roll my own server, without the help of any libraries?
EDIT: Sorry, it seems I was unclear. I am trying to recreate the PHP API as closely as possible, and the PHP version does not use REST. It has a few different PHP scripts which take some POST parameters.
If you just want a simple JSON API, Express is still an option. Layouts, templating and middleware are optional, and you can just use the simpler functions.
var express = require('express');
var app = express();

app.use(express.bodyParser());

app.post('/', function (req, res) {
    // req.body is an object with the POST parameters
    // respond with JSON
    res.json(200, { data: 'payload' });
    // or, to show an error instead:
    // res.json(500, { error: 'message' });
});

app.listen(80);
That is one of the simplest solutions available. If, on the other hand, you want to do the request body parsing, HTTP method checking, and everything else yourself, you can create your own server. That would look more like this:
var http = require('http');

http.createServer(function (request, response) {
    if (request.method === 'POST') {
        var data = '';
        request.on('data', function (chunk) {
            data += chunk;
        });
        request.on('end', function () {
            // parse the data
        });
    }
}).listen(80);
An approach like this also requires checking the request path, as well as handling other things that Express would take care of automatically.
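For example, routing by path on the raw server might look like the sketch below, where the /endpoint path is just a placeholder:

var http = require('http');
var url = require('url');

http.createServer(function (request, response) {
    var path = url.parse(request.url).pathname;
    if (request.method === 'POST' && path === '/endpoint') {
        // collect and parse the body as shown above, then respond
        response.writeHead(200, { 'Content-Type': 'application/json' });
        response.end(JSON.stringify({ data: 'payload' }));
    } else {
        response.writeHead(404, { 'Content-Type': 'application/json' });
        response.end(JSON.stringify({ error: 'not found' }));
    }
}).listen(80);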