I have created a webpage with Node.js, Express.js, Mongoose and D3.js.
The webpage contains 3 pull-down menus: Department, Employee, Week.
The usage of the webpage is as follows:
When a 'Department' is selected, the 'Employee' menu is filtered to show only employees from the selected 'Department'. The same applies to 'Week' once an 'Employee' is selected.
After all 3 menus are selected and the 'PLOT' button is clicked, a line chart (using d3.js) is plotted showing the employee's working hours for the month.
MongoDB JSON
{
  dep: '1',
  emp: 'Mr A',
  week: 1,
  hrs: [
    [1, 8],
    [2, 10],
    ...
  ]
}
Here are the snippets of my codes:
routes.js
// Connect the required database and collection
var dataAll = require('./models/dataModel');

module.exports = function(app) {
  app.get('/api/data', function(req, res) {
    dataAll.find({}, {}, function(err, dataRes) {
      res.json(dataRes);
    });
  });

  app.get('*', function(req, res) {
    res.sendfile('./index.html');
  });
};
index.html
... <!-- More code -->
<div id="menuSelect1"></div>
<div id="menuSelect2"></div>
<div id="menuSelect3"></div>
...
<script src="./display.js" type='text/javascript'></script>
... <!-- More code -->
display.js
// Menu (Department, Employee, Week) information is gathered here
queue()
  .defer(d3.json, "/api/data")
  .await(createPlot);

function createPlot(error, plotData) {
  var myData = plotData;
  var depData = d3.nest()
    .key(function(d) { return d.dep; })
    .rollup(function(v) { return v.length; })
    .entries(myData);

  var selectField1 = d3.select('#menuSelect1')
    .append("select")
    .on("change", menu1Change)
    .selectAll("option")
    .data(depData)
    .enter()
    .append("option")
    .attr("value", function(d) { return d.key; })
    .text(function(d) { return d.key; });

  function menu1Change() {
    // Filter the next menu with the option chosen in this menu
    ... // More code
    var selectedVal = this.options[this.selectedIndex].value;
    var empData = myData.filter(function(d) { return d.dep === selectedVal; });
    ... // More code
  }
  ... // More code
}
Problem:
Functionally, it works as expected. The problem is that as the database grows, the page becomes very slow to load (minutes). I believe this is due to the route that retrieves all data (.find({}, {})), but I thought I needed it because I use it in 'display.js' to filter my menu options.
Is there a better way to do this and resolve the performance issue?
It is rarely necessary to send all the data to the client. In fact, I haven't seen an API with a single endpoint that returns the entire database to everyone.
It's hard to give you a specific solution without knowing what your data looks like, how large it is, how fast it grows, etc. The performance issues may be related to querying the database, to a large data transfer, or to a large JSON payload for the browser to parse.
In any case, you shouldn't send your entire database to the client with no limits. Pagination is usually implemented with a number of records to skip and a maximum number of records to return.
Some frameworks like LoopBack do it for you, see:
https://docs.strongloop.com/display/public/LB/Skip+filter
https://docs.strongloop.com/display/public/LB/Limit+filter
If you're using Express then you'll have to implement the limits yourself.
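For example, here is a minimal sketch of a paginated version of the /api/data route from the question (the skip/limit query-parameter names and the 200-record cap are my assumptions):
app.get('/api/data', function(req, res) {
  var skip = parseInt(req.query.skip, 10) || 0;
  var limit = Math.min(parseInt(req.query.limit, 10) || 50, 200); // hypothetical page-size cap

  dataAll.find({})
    .skip(skip)
    .limit(limit)
    .exec(function(err, dataRes) {
      if (err) return res.status(500).json({ error: err.message });
      res.json(dataRes);
    });
});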
To test the bottleneck, you can run the Mongo shell and try the .find({},{}) query from there to see how long it takes. You can see the transfer size and time in the browser's developer tools. This may help you narrow down the place that needs the most attention, but returning the entire database, no matter how large it is, is already a good place to start.
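Also, since the menus only need distinct values rather than whole documents, you could populate them from dedicated endpoints instead of fetching everything. A hedged sketch using Mongoose's distinct() (the route paths and the dep query parameter are assumptions):
app.get('/api/departments', function(req, res) {
  dataAll.distinct('dep', function(err, deps) {
    if (err) return res.status(500).json({ error: err.message });
    res.json(deps);
  });
});

app.get('/api/employees', function(req, res) {
  // Only employees belonging to the selected department
  dataAll.distinct('emp', { dep: req.query.dep }, function(err, emps) {
    if (err) return res.status(500).json({ error: err.message });
    res.json(emps);
  });
});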
Related
This may sound silly... but is there any way to embed all the videos in a directory into a webpage? I'm hosting some videos on my website, but right now you have to manually browse the directory and click a link to a video.
I know I can just embed those videos in an HTML page, but is there any way to make it adapt automatically when I add new videos?
How you do this will depend on how you build your server code and web page code, but the example below, which is Node and Angular based, does exactly what you are asking:
var express = require('express');
var path = require('path');
var fs = require('fs');
var router = express.Router();

// GET: route to return the list of uploaded videos
router.get('/video_list', function(req, res) {
  // Log the request details
  console.log(req.body);
  // Get the path for the uploaded_videos directory
  var _p = path.resolve(__dirname, 'public', 'uploaded_videos');
  // Find all the files in the directory and add them to a JSON list to return
  var resp = [];
  fs.readdir(_p, function(err, list) {
    // Check if the list is undefined or empty first, and if so just return
    if (typeof list == 'undefined' || !list) {
      return;
    }
    for (var i = list.length - 1; i >= 0; i--) {
      // For each file in the directory add an id and filename to the response
      resp.push({ "index": i, "file_name": list[i] });
    }
    // Send the response
    res.json(resp);
  });
});
This code is old in web years (about 3 years old), so the way Node handles routes etc. is likely different now, but the concept remains the same regardless of language:
go to the video directory
get the list of video files in it
build them into a JSON response and send it to the browser
the browser extracts and displays the list
The browser code corresponding to the above server code in this case is:
$scope.videoList = [];
// Get the video list from the Colab Server
GetUploadedVideosFactory.getVideoList().then(function(data) {
// Note: should really do some type checking etc here on the returned value
console.dir(data.data);
$scope.videoList = data.data;
});
You may find some way to automatically generate a web page index from a directory, but the type of approach above will likely give you more control - you can exclude certain file types etc. quite easily, for example.
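To actually embed the returned list as video elements, here is a minimal plain-JavaScript sketch (the /uploaded_videos/ static mount point is an assumption about how the files are served):
// Fetch the list from the route above and append a <video> element per file
fetch('/video_list')
  .then(function(res) { return res.json(); })
  .then(function(list) {
    list.forEach(function(video) {
      var el = document.createElement('video');
      el.controls = true;
      el.src = '/uploaded_videos/' + video.file_name; // assumed static mount point
      document.body.appendChild(el);
    });
  });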
The full source is available here: https://github.com/mickod/ColabServer
I want to create a form on an index page that can store data via session storage. I also want to make sure that whatever data (let's say a name) ... is remembered and used throughout the site with Angular. I have researched pieces of this process, but I do not understand how to write it or really even what it's called.
Any help in the right direction would be useful, as I am in the infant stages of all of this Angular business. Let me know.
The service you want is angular-local-storage.
Just configure it in your app.js file:
angular.module('pstat')
  .config(function (localStorageServiceProvider) {
    localStorageServiceProvider
      .setStorageType('sessionStorage');
  });
And then use it in the controller that contains whatever data you want to remember. Here is an example of a controller that loads the session storage data on initialization, and saves it when a user fires $scope.doSearch through the UI. This should give you a good place to start.
(function () {
  angular.module("pstat")
    .controller("homeCtrl", homeCtrl);

  homeCtrl.$inject = ['$scope', '$log', 'dataService', 'localStorageService', '$http'];

  function homeCtrl ($scope, $log, dataService, localStorageService, $http) {
    var query = localStorageService.get("query"); // returns null if no 'query' was saved
    if (query) {
      //Or store the results directly if they aren't too large
      //Do something with your saved query on page load, probably get data
      //Example:
      dataService.getData(query)
        .success( function (data) {})
        .error( function (err) {});
    }

    $scope.doSearch = function (query) {
      localStorageService.set("query", query);
      //Then actually do your search
    };
  }
})();
Trying to create my first simple CRUD in Express JS and I can't seem to find this annoying bug.
When I try to update a field, the JSON from that field gets output to the view instead of the new data.
Screenshot: http://i59.tinypic.com/wi5yj4.png
Controller gist: https://gist.github.com/tiansial/2ce28e3c9a25b251ff7c
The update method is used for finding and updating documents without returning the updated documents. Basically what you're doing is finding documents without updating them, since the first parameter of the update function is the search criteria. You need to use the save function to update an existing document after updating its properties.
Your code below, modified (not tested):
//PUT to update a blob by ID
.put(function(req, res) {
  //find the document by ID
  mongoose.model('Email').findById(req.id, function (err, email) {
    //add some logic to handle err
    if (email) {
      // Get our REST or form values. These rely on the "name" attributes
      email.email = req.body.email;
      email.password = req.body.password;
      email.servico = req.body.servico;
      //save the updated document
      email.save(function (err) {
        if (err) {
          res.send("There was a problem updating the information to the database: " + err);
        }
        else {
          //HTML responds by going back to the page, or you can be fancy and create a new view that shows a success page.
          res.format({
            html: function(){
              res.redirect("/emails");
            },
            //JSON responds showing the updated values
            json: function(){
              res.json(email);
            }
          });
        }
      });
    }
  });
})
There aren't many examples demonstrating indexedDB in a ServiceWorker yet, but the ones I saw were all structured like this:
const request = indexedDB.open( 'myDB', 1 );
var db;
request.onupgradeneeded = ...
request.onsuccess = function() {
  db = this.result; // Average 8ms
};

self.onfetch = function(e)
{
  const requestURL = new URL( e.request.url ),
        path = requestURL.pathname;
  if( path === '/test' )
  {
    const response = new Promise( function( resolve )
    {
      console.log( performance.now(), typeof db ); // Average 15ms
      db.transaction( 'cache' ).objectStore( 'cache' ).get( 'test' ).onsuccess = function()
      {
        resolve( new Response( this.result, { headers: { 'content-type':'text/plain' } } ) );
      }
    });
    e.respondWith( response );
  }
}
Is this likely to fail when the ServiceWorker starts up, and if so what is a robust way of accessing indexedDB in a ServiceWorker?
Opening the IDB every time the ServiceWorker starts up is unlikely to be optimal; you'll end up opening it even when it isn't used. Instead, open the db when you need it. A singleton is really useful here (see https://github.com/jakearchibald/svgomg/blob/master/src/js/utils/storage.js#L5), so you don't need to open IDB twice if it's used twice in its lifetime.
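A minimal sketch of that lazy-singleton pattern, reusing the names from the question ('myDB', 'cache', '/test'); getDb is a hypothetical helper name:
var dbPromise;

function getDb() {
  // Open the database on first use and reuse the same promise afterwards
  if (!dbPromise) {
    dbPromise = new Promise(function(resolve, reject) {
      var request = indexedDB.open('myDB', 1);
      request.onupgradeneeded = function() { /* create object stores here */ };
      request.onsuccess = function() { resolve(request.result); };
      request.onerror = function() { reject(request.error); };
    });
  }
  return dbPromise;
}

self.onfetch = function(e) {
  var path = new URL(e.request.url).pathname;
  if (path === '/test') {
    e.respondWith(getDb().then(function(db) {
      return new Promise(function(resolve) {
        db.transaction('cache').objectStore('cache').get('test').onsuccess = function() {
          resolve(new Response(this.result, { headers: { 'content-type': 'text/plain' } }));
        };
      });
    }));
  }
};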
The "activate" event is a great place to open IDB and let any "onupdateneeded" events run, as the old version of ServiceWorker is out of the way.
You can wrap a transaction in a promise like so:
var tx = db.transaction(scope, mode);
var p = new Promise(function(resolve, reject) {
tx.onabort = function() { reject(tx.error); };
tx.oncomplete = function() { resolve(); };
});
Now p will resolve/reject when the transaction completes/aborts. So you can do arbitrary logic in the tx transaction, and p.then(...) and/or pass a dependent promise into e.respondWith() or e.waitUntil() etc.
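For instance, a sketch of a read helper built on that pattern (the 'cache' store name is taken from the question):
function readValue(db, key) {
  var tx = db.transaction('cache');
  var req = tx.objectStore('cache').get(key);
  return new Promise(function(resolve, reject) {
    // The request has completed by the time the transaction commits
    tx.oncomplete = function() { resolve(req.result); };
    tx.onabort = function() { reject(tx.error); };
  });
}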
As noted by other commenters, we really do need to promisify IndexedDB. But the composition of its post-task autocommit model and the microtask queues that Promises use makes it... nontrivial to do so without basically completely replacing the API. But (as an implementer and one of the spec editors) I'm actively prototyping some ideas.
I don't know of anything special about accessing IndexedDB from the context of a service worker versus accessing IndexedDB via a controlled page.
Promises obviously make your life much easier within a service worker, so I've found using something like, e.g., https://gist.github.com/inexorabletash/c8069c042b734519680c to be useful instead of the raw IndexedDB API. But it's not mandatory, as long as you create and manage your own promises to reflect the state of the asynchronous IndexedDB operations.
The main thing to keep in mind when writing a fetch event handler (and this isn't specific to using IndexedDB), is that if you call event.respondWith(), you need to pass in either a Response object or a promise that resolves with a Response object. As long as you're doing that, it shouldn't matter whether your Response is constructed from IndexedDB entries or the Cache API or elsewhere.
Are you running into any actual problems with the code you posted, or was this more of a theoretical question?
I have an array = [ 'something', 'other' ]
And I want to retrieve only the values of those 2 ids from Firebase, which contains more than 2 items (potentially millions), but if I do this:
var questionRef = new Firebase(fireBaseURL+"/morethanamillionitems/");
questionRef.once('value', function (dataSnapshot) {
  dataSnapshot.forEach(function(childSnapshot) { // Firebase method
    console.log(dataSnapshot.numChildren()); // potentially outputs 1.000.000+
    var uid = childSnapshot.name();
    var childData = childSnapshot.val();
    console.log(uid.indexOf('something'));
    result.push(uid);
  });
});
I first basically load the whole database, which is not that efficient
Now I could do:
array.forEach(function(key) {
  var questionRef = new Firebase(fireBaseURL+"/morethanamillionitems/"+key);
  refID = questionRef.val();
  result.push(refID);
});
Or maybe:
questionRef = new Firebase(fireBaseURL+"/morethanamillionitems/");
array.forEach(function(key) {
  if ( questionRef.child(key) !== null ){
    refID = questionRef.val();
    result.push(refID);
  }
});
The last one seems the nicest; the previous one seems a bit expensive on the old RAM.
However, I apparently have to call questionRef.once('value', function(){}) each time, hence already loading the whole document root...
Or am I misunderstanding how Firebase handles these requests? Is the .numChildren() an answer directly from the server?
Is the .forEach actually executed remotely?
I'm wondering if there is any other way to reduce traffic per request. Which brings me to another question: it seems that Firebase searches locally first but eventually searches remotely, though it's not clear when exactly this happens. Does it periodically check if something has changed? Or does that only happen when I use .on() and not .once()?
Or am I using the wrong backend for this purpose? Any other suggestions? I tried hood.ie, which is still very beta, and looked at Parse, but Firebase seemed to have the simplicity I need.
(sorry for the sloppy syntax, but you can see what I intended)
[update]
I now have this:
load: function(uids) {
  var FB = new Firebase(URL);
  var self = this; // 'this' is rebound inside the once() callback
  uids.map(function(uid) {
    var currentRef = FB.child(uid + "/_current");
    currentRef.once('value', function (each) {
      var eachVal = each.val();
      if (eachVal !== null) {
        var localSave = {};
        localSave[uid] = eachVal;
        self.saveLocal(localSave);
      } else {
        console.error("Not found: [%s]", uid);
      }
    }, function (err) { });
  });
}
But I'm still wondering when the request actually happens: on .child(), or in .once()? And if the latter, what is the use of .child() exactly? It seems it's only used for referencing.
Then the second thing: if I want to retrieve an array of a hundred items, would that still mean a hundred separate requests? Or does Firebase have a way of collecting requests and sending them in a batch?
In that last case .once would be a more 'conservative' option for initial retrieval, then later you could attach a .on listener if you need real-time updates.
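For illustration, a minimal sketch of that once-then-on pattern, reusing the ref names from the question:
var ref = new Firebase(fireBaseURL + "/morethanamillionitems/" + key);
ref.once('value', function (snap) {
  result.push(snap.val()); // initial retrieval
});
ref.on('child_changed', function (snap) {
  // apply incremental updates as they arrive
});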