Node.js Store object in database or in array? - mysql

I am developing a node.js multiplayer card game application, played by 4 players at the same time.
I have an array of objects which contains all games in progress.
I was wondering: if 5000 or more games are in progress, could my server application run into memory problems?
Would it be better to store the objects in a database and read them each time? The database connection would be used a lot more, but memory a lot less. What is the best approach in this kind of situation?

If you can practically keep your data in memory, that will usually yield a solution that is faster and less complicated.
Here are reasons that you might have to use a database instead:
You need access from multiple processes.
You need persistence of data (if server should be restarted or crashes).
You are storing more data than will fit in memory.
You need certain concurrency or transactional features already built into typical databases.
You want to use certain searching/indexing features of existing databases.
If none of those reasons drives you to a database, and the data comfortably fits into memory (for node.js, probably less than 500MB-1GB, depending upon how much other memory your server uses and how much run-time memory your server has access to), then it's usually faster and simpler to store and access the data from memory.
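To put the numbers in perspective, here is a minimal sketch of such an in-memory store; the game shape and the ~2 KB-per-game figure are illustrative assumptions, not measurements:

```js
// Minimal sketch of an in-memory store for games in progress.
// The game shape and the ~2 KB-per-game figure are assumptions.
const games = new Map(); // gameId -> game state

function createGame(gameId, playerIds) {
  const game = {
    id: gameId,
    players: playerIds, // the 4 seats
    hands: {},          // playerId -> array of card codes
    discardPile: [],
    turn: 0,
  };
  games.set(gameId, game);
  return game;
}

function endGame(gameId) {
  games.delete(gameId); // finished games become garbage-collectable
}

// Back-of-the-envelope: at ~2 KB of state per game, 5000 games is
// roughly 10 MB -- far below node's default heap limit.
```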

Related

In-memory database for Mahout recommendation

I have been working on Mahout lately. The current version supports inputs from files, MySQL, etc. via its DataModels. In my case, the raw data resides in a Postgres DB at a client location. The raw data requires a good amount of pre-processing before being fed into the Mahout DataModel. Currently I'm storing the refined data as a simple *.csv file and loading it into Mahout using the built-in FileDataModel.
Is it possible to use an in-memory DB to actually store the refined data and then load it into Mahout using its existing MySQLJDBCDataModel/JDBCDataModel? If so, what kind of in-memory DB would serve this purpose?
SQLite3 is quite often the go-to in-memory database, and for good reason: it's one of the most battle-hardened databases out there and can be found literally everywhere. The browser you're using is likely using it. It has an in-memory option that's fairly straightforward. Even disk-based, it's also fast.
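The in-memory option is just a special database name. For example, from node (the sqlite3 npm package here is my assumption about your stack, since Mahout itself would go through JDBC):

```js
// Minimal sketch: SQLite's in-memory option is just the name ':memory:'.
// Assumes `npm install sqlite3`; table and data are illustrative only.
const sqlite3 = require('sqlite3').verbose();
const db = new sqlite3.Database(':memory:');

db.serialize(() => {
  db.run('CREATE TABLE prefs (user_id INTEGER, item_id INTEGER, pref REAL)');
  db.run('INSERT INTO prefs VALUES (?, ?, ?)', [1, 42, 4.5]);
  db.each('SELECT * FROM prefs', (err, row) => console.log(row));
  db.close(); // the in-memory database vanishes with the connection
});
```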
Most databases, given enough RAM, will efficiently load most of your data into RAM anyway. I used PostgreSQL as the backend for a search engine for a long time, and most access was to RAM, with almost nothing going to disk when reading. If you already have the data in PostgreSQL, it might be simpler to keep it there.
Keep in mind that you can only access an SQLite in-memory database from a single process.
If you need the ultimate performance, even a fully cached persistent database won't be as fast as a true in-memory database system. To me, though, it doesn't sound like you need that level of extreme performance.

How can I store a 2 - 3 GB tree in memory and have it accessible to nodejs?

I have a large tree of data whose leaves I want to access efficiently, and from which I want to efficiently serialize large chunks (10-20 MB at a time) into JSON.
Right now I'm storing it as javascript objects, but I'm seeing garbage collection times of 4-5 seconds, which is not okay.
I tried using an embedded database (both SQLite and LMDB), but the performance overhead of going from rows to trees when I access data is pretty high: it takes me 6 seconds to serialize 5 MB into JSON.
Ideally I'd want to be able to tell v8 "please don't try to garbage collect that tree!" (I tried turning GC off for the whole process, but I'm running a lightweight TCP server in front of it and that quickly started to run out of memory).
Or maybe there's an embedded (or not embedded?) database that handles this natively that I don't know about. (I do know about MongoDB; it has a 16 MB limit on max object size, though.)
I'm thinking of maybe trying to pack the tree into a node Buffer object (i.e., basically simulate the v8 heap myself), but before I get that desperate I thought I'd ask stackoverflow :-)
Storing large objects in a GC'd language is bad practice. It is a problem in the Java world as well.
There are 2 solutions to this:
Use an in-memory DB like Redis. See if you can leverage the data-structure primitives Redis provides to your advantage (a sketch follows below).
Go native: NodeJS provides a (comparatively) simple FFI, as much of its own library is written in native code. See the addons document here on how to proceed.
If you are deploying on a server, then you have a 3rd option as well. Instead of linking native code directly with Node, you can write it as a service and tie the two together using a message broker like Beanstalk / ZeroMQ / RabbitMQ.
This allows for ease of deployment, as suitable server resources can be provisioned for each part. In your case, the frontend TCP server can sit on its own cheap instance, while the tree-wrangling program can have a large-memory instance to work with.
Also, MongoDB is horrible for relational data, which makes it a bad choice for storing trees. Graph databases might work for you, depending on your use case.
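As a rough illustration of the Redis option: each tree node can live in its own hash, with child ids in a set, so a leaf can be read or updated without deserializing the rest of the tree. The node:<id> / children:<id> key scheme and the node-redis (v4) usage are my assumptions:

```js
// Rough sketch: each tree node as a Redis hash, child ids in a set.
// Assumes `npm install redis` (node-redis v4) and a local Redis server;
// the key scheme is made up for illustration.
const { createClient } = require('redis');

async function main() {
  const client = createClient();
  await client.connect();

  // Write one leaf and register it under its parent.
  await client.hSet('node:42', { payload: JSON.stringify({ label: 'leaf' }) });
  await client.sAdd('children:7', '42');

  // Read just one parent's children -- no whole-tree deserialization.
  for (const id of await client.sMembers('children:7')) {
    const node = await client.hGetAll(`node:${id}`);
    console.log(id, JSON.parse(node.payload));
  }

  await client.quit();
}

main().catch(console.error);
```

The design point: the data lives in Redis's heap rather than v8's, so node's garbage collector never has to scan it.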
Perhaps you can look into graph databases? Neo4j seems to be a popular one these days and they have node.js client libraries.

Caching the data result of complex computation

I have a Spring Boot server application. Clients of this server ask for statistics about different things all the time. These statistics can be shared among clients, and need not be real-time.
It's good enough if these statistics are refreshed every 15-30 mins.
Also, computing these statistics requires reading the whole database.
So, I'd like to cache these computed statistics and update them now and then.
What is your suggestion, what tool or pattern should I use?
I have the following ideas so far:
using memcached
upgrading to MySQL 5.7, which has a JSON data type, and storing the data there
Please keep in mind that the hardware of my server is not too powerful: 512MB RAM and 1 CPU (cheapest option in DigitalOcean).
Thank you in advance!
Edit 1:
These statistics are composed of quite simple data structures: int-to-int maps, lists, etc., and they do NOT fit well into a relational database.
Edit 2:
The whole data set is only a few megabytes. The crucial point is that creating this data requires a lot of database reads, and a lot of clients are asking for it.
I also want to keep my server application stateless. I think it's important to mention.
A simple solution to the problem is saving the data in JSON format to a file, and that's it.
Additionally, this file can live on a RAM-disk partition, so it will be blazing fast.
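The question is about Spring Boot, but the pattern is language-agnostic. A node-flavored sketch of the refresh-and-write loop, where computeStatistics(), the file path, and the interval are all placeholders:

```js
// Sketch of the "write computed stats to a JSON file" pattern.
// computeStatistics(), CACHE_PATH, and REFRESH_MS are placeholders.
const fs = require('fs');

const CACHE_PATH = '/mnt/ramdisk/stats.json'; // e.g. a ram-disk mount
const REFRESH_MS = 15 * 60 * 1000;            // refresh every 15 minutes

async function computeStatistics() {
  // Stand-in for the expensive whole-database aggregation.
  return { requestsPerUser: {}, updatedAt: Date.now() };
}

async function refresh() {
  const stats = await computeStatistics();
  const tmp = CACHE_PATH + '.tmp';
  fs.writeFileSync(tmp, JSON.stringify(stats));
  fs.renameSync(tmp, CACHE_PATH); // atomic swap: readers never see a half-written file
}

refresh().catch(console.error);
setInterval(refresh, REFRESH_MS);
// Request handlers just read/stream the file; nothing is recomputed per client.
```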

SQLite faster than MySQL?

I want to set up a TeamSpeak 3 server. I can choose between SQLite and MySQL as the database. Well, I usually follow the rule "do not use SQLite in production". But on the other hand, it's a TeamSpeak server. Well okay, just let me google this... I found this:
Speed
SQLite3 is much faster than the MySQL database. That's because a file database is always faster than a unix socket. When I requested an edit of a channel, it took about 0.5-1 sec on the MySQL database (127.0.0.1) and was almost instant (0.1 sec) on SQLite3. [...]
http://forum.teamspeak.com/showthread.php/77126-SQLite-vs-MySQL-Answer-is-here
I don't want to start an SQLite vs MySQL debate. I just want to ask: is his argument even valid? I can't imagine that what he says is true, but unfortunately I'm not expert enough to answer this question myself.
Maybe the TeamSpeak devs have some major differences in their DB architecture between SQLite and MySQL which would explain a huge difference in speed (though I can't imagine that).
At first, access time will appear faster in SQLite
The access time for SQLite will appear faster at first, but this is with a small number of users online. SQLite uses a very simplistic access algorithm: it's fast, but it does not handle concurrency.
As the database starts to grow and the amount of simultaneous access increases, it will start to suffer. The way database servers handle multiple requests is completely different: way more complex, and optimized for high concurrency. For example, SQLite will lock the whole database if an update is going on, and queue the orders.
RDBMSs do a lot of extra work that makes them more scalable
MySQL, for example, even with a single user, will create an access queue, lock tables partially instead of allowing only one execution at a time, and perform other pretty complex tasks in order to make sure the database is still accessible to any other simultaneous access.
This will make a single connection slower, but it pays off in the future, when hundreds of users are online; in that case, the simple
"LOCK THE WHOLE DATABASE AND EXECUTE A SINGLE QUERY EACH TIME"
procedure of SQLite will hog the server.
SQLite is made for simplicity and self-contained database applications.
If you are expecting to have 10 simultaneous writers at a time, SQLite may perform well, but you won't want a 100-user application that constantly writes and reads data to the database using SQLite. It wasn't designed for such a scenario, and it will trash resources.
Considering your TeamSpeak scenario, you are likely to be OK with SQLite. It is OK even for some businesses; some websites need databases that are read-only except when adding new content.
For this kind of use, SQLite is a cheap, easy-to-implement, self-contained solution that is perfect for getting the job done.
The relevant difference is that SQLite uses a much simpler locking algorithm (a simple global database lock).
Using fine-grained locking (as MySQL and most other DB servers do) is much more complex, and slower if there is only a single database user, but required if you want to allow more concurrency.
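That global lock is easy to observe: while one connection holds a write transaction, a second connection's write fails with SQLITE_BUSY. A small sketch, assuming the sqlite3 npm package and an arbitrary demo.db file:

```js
// Sketch: observing SQLite's database-wide write lock from two connections.
// Assumes `npm install sqlite3`; 'demo.db' is an arbitrary file name.
const sqlite3 = require('sqlite3');

const a = new sqlite3.Database('demo.db');
const b = new sqlite3.Database('demo.db');

a.serialize(() => {
  a.run('CREATE TABLE IF NOT EXISTS t (x INTEGER)');
  a.run('BEGIN IMMEDIATE'); // take the write lock for the whole database
  a.run('INSERT INTO t VALUES (1)', () => {
    // Connection a now holds the lock, so b cannot write anywhere,
    // even to unrelated rows -- it fails with SQLITE_BUSY instead.
    b.run('INSERT INTO t VALUES (2)', (err) => {
      console.log(err && err.code); // "SQLITE_BUSY"
      a.run('COMMIT');              // release the lock
    });
  });
});
```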
I have not personally tested SQLite vs MySQL, but it is easy to find examples on the web that say the opposite (for instance). You do ask a question that is not quite so religious: is that argument valid?
First, the essence of the argument is somewhat specious. A Unix socket would be used to communicate with a database server. A "file database" seems to refer to the fact that communication goes through a compiled-in interface. In the terminology of SQLite, it is server-less. Most databases store data in files, so the term "file database" is a little misleading.
Performance of a database involves multiple factors, such as:
Communication of query to the database.
Speed of compilation (ability to store pre-compiled queries is a plus here).
Speed of processing.
Ability to handle complex processing.
Compiler optimizations and execution engine algorithms.
Communication of results back to the application.
Having the interface be compiled-in affects the first and last of these. There is nothing that prevents a server-less database from excelling at the rest. However, database servers are typically millions of lines of code -- much larger than SQLite. A lot of this supports extra functionality. Some of it supports improved optimizations and better algorithms.
As with most performance questions, the answer is to test the systems yourself on your data in your environment. Being server-less is not an automatic performance gain. Having a server doesn't make a database "better". They are different applications designed for different optimization points.
In short:
For local application databases, single-user applications, and little simple projects keeping small data, SQLite is the winner.
For networked database applications, multiple users and concurrency, load balancing, growing data management, security and role-based authentication, big projects, and widely used services, you should choose MySQL.
In your question, I do not know much about TeamSpeak servers and what kind of data they actually need to keep in their database, but if it just needs a local DBMS and does not need to process lots of concurrency, SQLite would be my choice.

Mysql with Node.js: Does it make sense to have node.js save/load stuff to/from the database all the time?

So I have a small game in node.js (only the server, of course) which has map data and player accounts stored in a MySQL database. Right now I've constructed it in a way that minimizes the number of queries made, by loading data from the database, keeping it in javascript objects/arrays or whatever seems appropriate, and only writing to the database when needed.
Now I was thinking: is this really worth it? In many cases it would be a lot better (the data would be safer and WAY more up-to-date) to hardly store any data in the server and just load it from the database when needed (and write it when it needs to be changed).
My question is: is it efficient/safe/recommendable to have the server read/write from the database often, rather than keeping data from the database in javascript variables on the server?
Additional info:
-The node.js server and my MySQL server are on the same machine, and a query usually takes less than 1 ms, or maybe 3 ms for big queries like loading room data.
-I am using a module simply called mysql.
-If needed I will include extra info, just ask in a comment.
Really depends on your use case. Generally speaking, I would not add another layer of caching in node.js, but would handle that in your db with a bigger cache and optimized queries.
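For completeness, here is a rough sketch of the hybrid the asker already has: keep hot rows in memory and write through to MySQL on every change, so the database stays current while reads stay cheap. The table, columns, and connection settings are made up for illustration:

```js
// Rough sketch of a write-through cache using the `mysql` module.
// Connection settings, table, and column names are made-up examples.
const mysql = require('mysql');

const db = mysql.createConnection({
  host: 'localhost', // same machine as node, per the question
  user: 'game',
  password: 'secret',
  database: 'game',
});
db.connect();

const players = new Map(); // accountId -> player row, the in-memory copy

function loadPlayer(accountId, cb) {
  if (players.has(accountId)) return cb(null, players.get(accountId)); // memory hit
  db.query('SELECT * FROM players WHERE id = ?', [accountId], (err, rows) => {
    if (err) return cb(err);
    players.set(accountId, rows[0]);
    cb(null, rows[0]);
  });
}

function setGold(accountId, gold) {
  const p = players.get(accountId);
  if (p) p.gold = gold; // reads stay instant...
  // ...while the write-through keeps MySQL current for crash recovery
  db.query('UPDATE players SET gold = ? WHERE id = ?', [gold, accountId]);
}
```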