Node.js - JSON as DB & corrupted data on close

I have a large file that is almost constantly updated (around 10 MB now, but it will grow to 100+ MB).
It is JSON and I am using it as a database. I don't want to use Mongo in this case because the data needs to live self-contained on a client machine (I am packaging the app with Electron), and because it will be distributed to Windows I am also trying to avoid any compiled code.
The problem is that the file gets corrupted when Node closes. I have tried saving it to a .tmp file and then renaming it once done, which has reduced the number of corruption incidents, but is there a better way (or a native JS DB system)? I don't need querying, just load and save.
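One way to harden the temp-file-and-rename approach is to flush the data to disk before the rename. A minimal sketch, assuming synchronous writes are acceptable (the file names are placeholders):

    const fs = require('fs');

    function saveAtomic(dbPath, data) {
      const tmpPath = dbPath + '.tmp';
      const fd = fs.openSync(tmpPath, 'w');
      try {
        fs.writeSync(fd, JSON.stringify(data));
        fs.fsyncSync(fd);               // make sure the bytes actually reach disk
      } finally {
        fs.closeSync(fd);
      }
      fs.renameSync(tmpPath, dbPath);   // atomic replace on the same filesystem
    }

Because the rename is atomic on the same filesystem, a crash should leave either the old file or the new one intact, never a half-written mix.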

Related

Exporting a MySQL table with bulk data in phpMyAdmin

I have a MySQL table with a large amount of data. I need to export this table, with all its data, to another database. But when I try to export the table as an SQL file from phpMyAdmin, it shows an error:
The webpage at https://i.p.adress/domains/databases/phpMyAdmin/export.php might be temporarily down or it may have moved permanently to a new web address.
I tried exporting as CSV as well, but the same error happens.
Does this happen because my table contains a large amount of data? Is there any other way to export this table with all its data?
I have around 1,346,641 records.
Does export work with a smaller database? Does it process for some time before showing that error, or is it displayed as soon as you confirm your export options? I don't recall having seen that error message relating to large exports, but I may be remembering incorrectly. Any hints in your web server or PHP error logs? Which phpMyAdmin version are you using?
Regarding large exports:
Because of the nature of phpMyAdmin running as a PHP script on your server, as well as sending the exported file to you as a download, there are a number of limitations forced on it. Web servers are usually configured to keep PHP programs from running for very long, and a long download (or a long time spent processing the export) can be affected. Additionally, memory and other resources are often limited in a similar manner. Therefore, it's usually better to use some other means of exporting large databases.
The command-line utility mysqldump, provided with the MySQL server package, is the definitive standard. If you have command line/shell access, it's best to use mysqldump to export the .sql file(s) and then copy those through any normal file-transfer protocol (FTP, SCP, SSH, etc).
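For example, if you can get a shell on the database server, the dump itself is a one-liner (hostname, user, and database name here are placeholders):

    mysqldump -h localhost -u forum_user -p forum_db > forum_db.sql

The -p flag prompts for the password, and the resulting forum_db.sql can then be copied off the server with scp or any FTP client.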
That being said, phpMyAdmin has several enhancements and tweaks that can make this possible.
Use $cfg['SaveDir'] to enable writing the exported file to disk on the server, which you can then copy through any normal file transfer protocol.
If you encounter timeouts or resource restrictions, you can edit the PHP configuration directives (the linked documentation refers to imports but the same restrictions apply to exports).
Experiment with the export compression setting, in particular using an uncompressed export format (exporting to SQL directly rather than a zipped archive) can work around some memory restrictions.

Sending .csv files to a database: MariaDB

I will preface this by saying I am very new to databases. I am working on a project for my undergraduate research that requires various sensor data to be sent from a Raspberry Pi via the internet to a database. I am using MariaDB at the moment, but am open to other options.
The background: currently all sensor data is being saved in CSV files on the RPi. There will be automation to send data to the database at given intervals.
The question: am I able to send the CSV file itself to a database? For our application, a CSV file is the most logical data storage format, and we simply want the database to be a way for us to retrieve data remotely, since the system will be installed miles away from where we work.
I have read about "LOAD DATA INFILE" on this website, but am unsure how it applies to this database. Would JSON be at all applicable for this? I am willing to learn if it makes the process more streamlined.
Thank you!
If 'sending data to the database' means that, by one means or another, additional or replacement CSV files are saved on disk, in a location accessible to a MariaDB client program, then you can load these into the database using the "mysql" command-line client and an appropriate script of SQL commands. That script very likely will make use of the LOAD DATA LOCAL INFILE command.
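For illustration, the LOAD DATA LOCAL INFILE statement in that script could look roughly like this (the table name, column list, and file path are assumptions, since the actual schema isn't shown):

    LOAD DATA LOCAL INFILE '/home/pi/data/sensors.csv'
    INTO TABLE sensor_readings
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
    IGNORE 1 LINES
    (recorded_at, sensor_id, reading);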
The "mysql" program may be launched in a variety of ways: 1) spawned by the process that receives the uploaded file; 2) launched by a cron job (Task Scheduler on Windows) that runs periodically to check for new or changed CSV files; of 3) launched by a daemon that continually monitors the disk for new or changed CSV files.
A CSV is typically human readable. I would work with that first before worrying about using JSON. Unless the CSVs are huge, you could probably open them up in a simple text editor to read their contents to get an idea of what the data looks like.
I'm not sure of your environment (feel free to elaborate), but you could just use whatever web services you have to read in the CSV directly and inject the data into your database.
You say that data is being sent using automation. How is it communicating to your web service?
What is your web service? (Is it PHP?)
Where is the database being hosted? (Is it in the same web service?)
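If that web service happened to be Node-based (an assumption; the poster hasn't said), reading the CSV directly and inserting its rows could look roughly like the sketch below. The table, columns, and credentials are placeholders, and the mysql2 driver is just one option:

    const fs = require('fs');
    const mysql = require('mysql2/promise');

    async function importCsv(path) {
      const conn = await mysql.createConnection({
        host: 'db.example.com', user: 'pi', password: 'secret', database: 'sensors',
      });
      const lines = fs.readFileSync(path, 'utf8').trim().split('\n').slice(1); // skip header row
      for (const line of lines) {
        const [recordedAt, sensorId, reading] = line.split(',');
        await conn.execute(
          'INSERT INTO sensor_readings (recorded_at, sensor_id, reading) VALUES (?, ?, ?)',
          [recordedAt, sensorId, reading]
        );
      }
      await conn.end();
    }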

Heroku resets my JSON file

I have a Node.js application that uses a simple JSON file as its model. Since this is an MVP with very limited data storage needs, I don't want to spend time designing and configuring a MongoDB database. Instead, I simply read from and write to a JSON file stored in the /data directory of my Node.js application.
However, on Heroku, the JSON file appears to get reset (to the original file I'd deployed to Heroku) every so often. I don't know why this happens or how to turn off this behavior. Any help would be really appreciated; I need to fix this problem within the next four hours.
Heroku uses an ephemeral filesystem, so that's why your file keeps vanishing (every 24 hours or thereabouts, whenever the dyno restarts).
If you want to store something, you have to use an external backing store. Adding a free tier MongoDB database shouldn't take more than a few minutes. See here or here for examples.
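As a rough sketch of what that swap could look like with the official mongodb driver (the connection string, database, and collection names below are placeholders):

    const { MongoClient } = require('mongodb');

    // Replaces "write the JSON file" with "upsert one document in MongoDB".
    async function saveModel(model) {
      const client = new MongoClient(process.env.MONGODB_URI); // e.g. a Heroku config var
      try {
        await client.connect();
        await client.db('myapp').collection('models')
          .replaceOne({ _id: 'model' }, { _id: 'model', data: model }, { upsert: true });
      } finally {
        await client.close();
      }
    }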

Explore database contents from .sql file

I inherited the maintenance of a small web forum. Near as I can tell, it is powered by a MySQL database on the backend (the frontend is all PHP).
I need to extract some of the data (which also involves searching for the data I need to extract), but I don't want to touch the production database. I exported a database backup, which produced a several-hundred-megabyte .sql file.
What's the best way to mine these data? I can see several options:
grep through the .sql script in text mode, trying to extract the relevant data
Load it up in sqlite3 (I tried doing this, but it barfed on some of the statements in the script and didn't produce any tables. I have no database experience whatsoever though, so I haven't ruled it out as a dead end just yet).
Install MySQL on my home box, create a database, and execute the .sql script to recreate the data. Then just attach some database explorer tool.
Find some (Linux) app which can understand the .sql file natively (seems unlikely after a bit of Googling).
Any pointers to which of these options (or one I haven't thought of yet) would be the most productive?
I would say any option might work, but for data mining you definitely want to load it up in a new database so you can start querying the data and building reports on it. I would load it up on your home box. No need to have it remote.
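For the third option, once MySQL is installed locally, the import itself is only a couple of commands (database name and credentials are placeholders):

    mysql -u root -p -e "CREATE DATABASE forum"
    mysql -u root -p forum < backup.sql

After that, any MySQL client or GUI (MySQL Workbench, phpMyAdmin, etc.) can browse and query the recreated tables.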

How does one properly cache/update data-driven iPhone apps that use remote databases?

My app is highly data-driven, and needs to be frequently updated. Currently the MySQL database is dumped to an XML file via PHP, and when the app loads it downloads this file. Then it loads all the values into NSMutableArrays inside a data manager class which can be accessed anywhere in the app.
Here is the issue: the XML file produced is about 400 KB, and this apparently takes several minutes to download on the EDGE network, and even for some people on 3G. So basically I'm looking for options on how to correctly cache or optimize my app's download process.
My current thought is something along the lines of caching the entire XML file in the iPhone's local storage, and then just serving that data up as the user navigates the app, while loading the new XML file in the background. The problem with this is that the user is now always going to see the data from the previous run; it also seems wasteful to download the entire XML file every time if only one field has changed.
TL;DR: my iPhone app's data download is slow; how would one properly minimize this effect?
I've had to deal with something like this in an app I developed over the summer.
What I did to solve it was an initial download of all the data from the server, which I placed in a database on the client along with a revision number.
Then, each time the user connects again, the app sends its revision number to the server. If that revision number is smaller than the server's revision number, the server sends across the new data (and only the new data); if it's the same, it does nothing.
It's fairly simple and it seems to work pretty well for me.
This method does have the drawback that your server has to do a little more processing than normal, but it's practically nothing and is much better than wasted bandwidth.
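As a rough illustration of that handshake (sketched in Node.js for brevity, although the original answer's server was PHP; loadChangesSince and the revision constant are made-up names):

    const http = require('http');

    const CURRENT_REVISION = 42;            // bump whenever the server-side data changes

    function loadChangesSince(rev) {
      // Stub: a real app would query the database for rows newer than `rev`.
      return [{ id: 1, rev: CURRENT_REVISION, value: 'example' }];
    }

    http.createServer((req, res) => {
      const url = new URL(req.url, 'http://localhost');
      const clientRev = Number(url.searchParams.get('rev') || 0);
      if (clientRev >= CURRENT_REVISION) {
        res.writeHead(204);                 // nothing new; the client is up to date
        res.end();
      } else {
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify({
          revision: CURRENT_REVISION,
          changes: loadChangesSince(clientRev),
        }));
      }
    }).listen(8080);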
My suggestion would be to cache the data to a SQLite database on the iPhone. When the application starts, you sync the SQLite database with your remote database, while letting the user know that you are loading incremental data in the background.
By doing that, you get the following:
Users can use the app immediately with stale data.
You're letting the user know new data is coming.
You're storing the data in a more appropriate format.
And once the most recent data is loaded, the user gets to see it.