Storing in a JSON file vs. JSON object vs. MySQL database

I've built a chat application using Node.js, Express and Socket.IO.
When I used a MySQL database the application slowed down, so I replaced it with plain JSON objects held on the Node.js server side.
Now everything is fine and the application works well, but whenever I release an update to app.js the server has to be restarted, and everything stored in those JSON objects is lost!
How can I fix this? Can I store the data in a JSON file instead, and would the application stay just as fast?

Storing data in RAM will always be faster than writing to a database, but the problem in your case is that you need to persist it somewhere.
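If you want to keep your current setup, the most direct fix is to flush the in-memory objects to a JSON file periodically (and on shutdown) and reload them on startup. A rough sketch, where the file name, the interval and the messages object are only placeholders:

    const fs = require('fs');

    const DATA_FILE = './chat-data.json';   // hypothetical path
    let messages = {};                      // your in-memory JSON objects

    // Reload whatever was saved before the last restart.
    if (fs.existsSync(DATA_FILE)) {
        messages = JSON.parse(fs.readFileSync(DATA_FILE, 'utf8'));
    }

    // Flush to disk every 30 seconds and when the process is stopped.
    function save() {
        fs.writeFileSync(DATA_FILE, JSON.stringify(messages));
    }
    setInterval(save, 30 * 1000);
    process.on('SIGINT', () => { save(); process.exit(0); });

Reads stay as fast as before because they still hit the in-memory object; only the periodic write touches the disk, and at worst you lose the last few seconds of data on a crash.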
There are many solutions to this problem, but since you are already working with JSON, I recommend looking at MongoDB.
MongoDB supports multiple storage engines, and the one you are interested in is the in-memory engine.
An interesting architecture for you could be the following replica set configuration:
Primary with the in-memory storage engine
Secondary with the in-memory storage engine
Another secondary with the WiredTiger storage engine
You get the speed benefit of keeping the working set in RAM, while the WiredTiger member still persists the data to disk.
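Roughly, after starting two mongod members with --storageEngine inMemory (note that this engine is only available in MongoDB Enterprise) and one with the default --storageEngine wiredTiger, you would initiate the replica set from the mongo shell; the host names and ports below are made up:

    // run in the mongo shell connected to the first member
    rs.initiate({
        _id: "rs0",
        members: [
            { _id: 0, host: "db1.example.com:27017" },   // in-memory, primary
            { _id: 1, host: "db2.example.com:27017" },   // in-memory
            { _id: 2, host: "db3.example.com:27017",     // wiredTiger, persists the data
              priority: 0, hidden: true }
        ]
    })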
A simpler possibility would be to use a key-value store such as Redis, which is easier to configure.
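For example, with the redis npm package (the key name and the object stored are placeholders):

    const { createClient } = require('redis');

    async function main() {
        const client = createClient();   // defaults to localhost:6379
        await client.connect();

        // Redis stores strings, so serialize the object as JSON...
        await client.set('chat:rooms', JSON.stringify({ general: [] }));

        // ...and parse it back, e.g. after a restart of your app.js.
        const rooms = JSON.parse(await client.get('chat:rooms'));
        console.log(rooms);

        await client.quit();
    }

    main();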

Related

Storage: database vs in-memory objects vs in-memory database

I'm doing a project where I have to store data for a Node.js Express server. It's not a lot of data, but I have to save it somewhere.
I always hear that a database is good for that kind of thing, but I thought about just keeping all the data in objects in Node.js and backing them up to disk as JSON every minute (or every 5 minutes). Would that be a good idea?
What I'm thinking is that the response time from objects like that is way faster than from a database, and saving them is easy. But then I heard that there are in-memory databases as well, so my question is:
Are in-memory databases faster than JavaScript objects? Are JSON backups to disk a good idea in this respect? Or should I simply go with a normal database because performance doesn't really matter in this case?
Thanks!
If this is nothing but a school assignment or toy project with very simple models and access patterns, then sure, rolling your own data persistence might make sense.
However, I'd advocate for using a database if:
you have a lot of objects or different types of objects
you need to query or filter objects by various criteria
you need more reliable data persistence
you need multiple services to access the same data
you need access controls
you need any other database feature
Since you ask about speed, for trivial stuff, in-memory objects will likely be faster to access. But, for more complicated stuff (lots of data, object relations, pagination, etc.), a database could start being faster.
You mention in-memory databases, but those would only be used if you want the database features without the persistence; they would be closer to your in-memory objects, minus the file writing. So it just depends on whether you care about keeping the data or not.
Also if you haven't ever worked with any kind of database, now's a perfect time to learn :).
What I'm thinking here is that the response time from objects like that is way faster than from a database, and saving them is easy.
That's not true. Databases are persistent storage, so there will always be I/O latency. I would recommend MySQL as a SQL database and MongoDB or Cassandra for NoSQL.
An in-memory database is definitely faster, but again you need persistent storage for that data. Redis is a very popular in-memory database.
MongoDB stores data in BSON, a binary, JSON-like format that supports a superset of JSON's data types, so it would be a good choice in your case.
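For illustration, storing and reading back one of your JSON objects with the official Node.js driver could look roughly like this (the database, collection and field names are made up):

    const { MongoClient } = require('mongodb');

    async function main() {
        const client = new MongoClient('mongodb://localhost:27017');
        await client.connect();
        const messages = client.db('chat').collection('messages');

        // The plain JSON object is stored as a BSON document as-is.
        await messages.insertOne({ user: 'alice', text: 'hello', sentAt: new Date() });

        // ...and can be queried back just as easily.
        const recent = await messages.find({ user: 'alice' }).toArray();
        console.log(recent);

        await client.close();
    }

    main();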

In-memory database for Mahout recommendation

I have been working with Mahout lately. The current version supports input from files, MySQL, etc. via its DataModels. In my case, the raw data resides in a Postgres DB at a client location, and it requires a good amount of pre-processing before being fed into the Mahout DataModel. Currently I'm storing the refined data as a simple *.csv file and loading it into Mahout using the built-in FileDataModel.
Is it possible to use an in-memory DB to store the refined data and load it into Mahout using the existing MySQLJDBCDataModel/JDBCDataModel? If so, what kind of in-memory DB would serve this purpose?
SQLite is quite often the go-to in-memory database, and for good reason: it's one of the most battle-hardened databases out there and can be found literally everywhere (the browser you're using is likely using it). It has an in-memory option that's fairly straightforward, and even disk-based it's fast.
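Creating an in-memory SQLite database is just a matter of opening the special ':memory:' path. The sketch below uses Node's better-sqlite3 package purely for illustration; the same ':memory:' path works through JDBC and most other drivers, and the table layout only roughly mirrors what Mahout's JDBC data models expect, so check the defaults of your version:

    const Database = require('better-sqlite3');

    // ':memory:' creates a database that lives only in RAM.
    const db = new Database(':memory:');
    db.exec('CREATE TABLE taste_preferences (user_id INTEGER, item_id INTEGER, preference REAL)');
    db.prepare('INSERT INTO taste_preferences VALUES (?, ?, ?)').run(1, 42, 4.5);

    const rows = db.prepare('SELECT * FROM taste_preferences').all();
    console.log(rows);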
Given enough RAM, most databases will keep most of your data in memory anyway. I used PostgreSQL as the backend for a search engine for a long time, and most reads were served from RAM with almost nothing going to disk. Since the data is already in PostgreSQL, it might be simpler to keep it there.
Keep in mind that you can only access an SQLite in-memory database from a single process.
If you need the ultimate performance, even a fully cached persistent database won't be as fast as a true in-memory database system. To me, though, it doesn't sound like you need that level of extreme performance.

Access a SQLite db file from inside MySQL?

Given the number of different storage engines MySQL supports, I'm just a bit surprised I can't find one that uses SQLite files.
Does such a thing exist?
The use case is that I have data in multiple MySQL databases that I want to process and export as a SQLite database that other tools can then process.
The currently proposed solution is to use a scratch MySQL server that accesses the other instances via the FEDERATED storage engine and populates a local SQLite db file via another storage engine.
The constraint is that the cost/benefit trade-off can barely justify the proposed workflow (and can't justify writing any code beyond the SQL that reads and processes the federated tables), so I'm strictly limited to "works robustly out of the box" solutions.

Where does Neo4j save its data?

I read somewhere that it is better to use Redis as a cache server, because Redis holds its data in memory, so if you are going to save lots of data Redis is not a good choice; it is good for keeping temporary data. Now my questions are:
1. Where do other databases (especially Neo4j and SQL Server) save their data?
Don't they keep data in memory?
If not, where do they save it?
If they do, why do we use them for saving lots of data?
2. "It is better to save indices/relationships in Neo4j and data in MySQL, retrieve the index from Neo4j and then fetch the data related to that index from MySQL" (I have read this somewhere). Is this because Neo4j has the same problem as Redis does?
Neo4j and SQL Server both store data on the file system. However, both also implement caching strategies. I am not an expert on the caching in these databases, but usually you can expect recently accessed data to be cached and data that has not been accessed for a while to fall out of the cache. If the DB needs something that is in the cache, it can avoid hitting the file system. Neo4j saves its data in a subfolder called "data" by default. This link may help you find the location of a SQL Server database: http://technet.microsoft.com/en-us/library/dd206993.aspx
This will depend a lot on your specific use-case and the required performance characteristics. My gut feeling is to put data in one or the other based on some initial performance tests. Split the data up if it solves some specific problem.

Options for transferring data between MySQL and SQLite via a web service

I've only recently started to deal with database systems.
I'm developing an iOS app that will have a local database (SQLite) and will have to periodically update that internal database with the contents of a database stored on a web server (MySQL). My question is: what's the best way to fetch the data from the web server and store it in the local database? A few options came to mind, though I don't know whether all of them are possible:
Web server -> XML/JSON -> send it -> convert locally and store in the local database
Web server -> backup file -> send it -> feed it to the SQLite db
Are there any other options? Which one is better in terms of the amount of data transferred?
Thank you
The XML/JSON route is by far the simplest while providing sufficient flexibility to handle updates to the database schema/older versions of the app accessing your web service.
In terms of the second option you mention, there are two approaches: either an SQL statement dump or a CSV dump. However:
The "default" (i.e.: mysqldump generated) backup files won't import into SQLite without substantial massaging.
Using a CSV extract/import will mean you have considerably less flexibility in terms of schema changes, etc. so it's probably not a sensible approach if the data format is ever likely to change.
As such, I'd recommend sticking with the tried and tested XML/JSON approach.
In terms of the amount of data transmitted, JSON may be smaller than the equivalent XML, but it really depends on the variable/element names used, etc. (See the existing How does JSON compare to XML in terms of file size and serialisation/deserialisation time? question for more information on this.)
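For what it's worth, the server side of the JSON approach can be very small. A sketch of a possible sync endpoint, assuming an Express app and the mysql2 package (the table, column and connection details are made up):

    const express = require('express');
    const mysql = require('mysql2/promise');

    const app = express();
    const pool = mysql.createPool({ host: 'localhost', user: 'app', database: 'appdata' });

    // Return only the rows changed since the client's last sync,
    // so the app doesn't re-download the whole table every time.
    app.get('/sync', async (req, res) => {
        const since = req.query.since || '1970-01-01';
        const [rows] = await pool.query(
            'SELECT id, title, updated_at FROM items WHERE updated_at > ?', [since]);
        res.json(rows);
    });

    app.listen(3000);

The app then parses the returned JSON and writes the rows into its local SQLite database.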