Heroku resets my JSON file

I have a Node.js application that uses a simple JSON file as the model. Since this is an MVP with very limited data storage needs, I don't want to spend time designing and configuring a MongoDB database. Instead, I simply read from and write to a JSON file stored in the /data directory of my Node.js application.
However, on Heroku, the JSON file appears to get reset (to the original file I'd deployed to Heroku) every so often. I don't know why this happens or how to turn off this behavior. Any help would be really appreciated; I need to fix this problem within the next four hours.

Heroku uses an ephemeral filesystem, which is why your changes vanish: the filesystem is reset to the deployed state whenever your dyno restarts, which happens at least once every 24 hours or thereabouts.
If you want to store something durably, you have to use an external backing store. Adding a free-tier MongoDB database shouldn't take more than a few minutes.
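For illustration, a minimal sketch of what swapping the JSON file for MongoDB might look like from Node (the MONGODB_URI config var, the "mvp" database, and the "records" collection are placeholder names; the exact env var depends on the add-on you pick):

    // Minimal sketch: persist the app's data in MongoDB instead of a local JSON file.
    // MONGODB_URI, "mvp", and "records" are placeholder names.
    const { MongoClient } = require('mongodb');

    async function main() {
      const client = new MongoClient(process.env.MONGODB_URI);
      await client.connect();
      const records = client.db('mvp').collection('records');

      // Roughly what reading from / writing to the JSON file used to do:
      await records.insertOne({ name: 'example', createdAt: new Date() });
      const all = await records.find({}).toArray();
      console.log(all);

      await client.close();
    }

    main().catch(console.error);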

Related

How to save new Django database entries to JSON?

The git repo for my Django app includes several .tsv files which contain the initial entries to populate my app's database. During app setup, these items are imported into the app's SQLite database. The SQLite database is not stored in the app's git repo.
During normal app usage, I plan to add more items to the database via the admin panel. However, I also want these entries saved as fixtures in the app repo. I was thinking a JSON file might be ideal for this purpose, since it is text-based and so will work with git version control. These files would then become additional fixtures for the app, which would be imported upon initial configuration.
How can I configure my app so that any time I add new entries to the Admin panel, a copy of that entry is saved in a JSON file as well?
I know that you can use the manage.py dumpdata command to dump the entire database to JSON, but I do not want the entire database, I just want JSON for new entries of specific database tables/models.
I was thinking that I could try to hack the save method on the model to try and write a JSON representation of the item to file, but I am not sure if this is ideal.
Is there a better way to do this?
Overriding the save method for something that can fail, or that can take longer than it should, is not recommended. You usually override save only for changes that are simple and essential.
You could use signals, but in your case that may be more work than it's worth. You could instead write a function that exports the data for you, though not necessarily immediately after each save; doing it synchronously on every save adds a lot of overhead unless it's critical that the file stay up to date.
I recommend using something like Celery to run the export in the background, separate from the rest of your Django code. You can trigger it on every data update, or on a schedule (every hour, for example), and have it update your backup file. You can even create a table to track the export process.
Which solution is best depends heavily on you and on how important the data is. Keep in mind that rewriting a file can be a heavy operation too, so creating a backup once a day, for instance, might be a better idea anyway.
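For what it's worth, dumpdata doesn't have to dump the whole database; it accepts app and model labels, so a background or scheduled export can be limited to just the tables you care about. A hypothetical example (the app and model names, and the fixtures path, are placeholders):

    python manage.py dumpdata myapp.Entry --indent 2 > myapp/fixtures/entries.json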

Heroku via GitHub: where are my JSON files updated?

This isn't exactly a question in urgent need of help; however, I am curious about which files get updated, and when, when I use Heroku via GitHub. Would it be the ones in my GitHub repository, or does Heroku save those files and update them somewhere else?
All I'm trying to accomplish is editing a JSON file so I can store an integer for each player (I'm using a worker, for a Discord bot). Also, yes, that seems like what I am trying to do. Anything that saves the information, doesn't require money, and isn't too complex.
EDIT:
This issue has been resolved: the answer is that Heroku simply cannot persist updates to JSON files. I have worked around it by moving my hosting onto a Raspberry Pi 3 Model B+. Thank you for all the answers.
When you use Heroku's GitHub Sync feature, a deployment will retrieve your code directly from GitHub.
Those files aren't saved anywhere else. A new deployment from master will take the code fresh from GitHub.
Heroku's filesystem is ephemeral. Any changes you save to the local filesystem will be lost when your dyno restarts, which happens frequently. If you scale your application to multiple dynos you'll also run into trouble since the ephemeral filesystems are dyno-local.
Your best bet is to use a proper client-server datastore, like PostgreSQL. Heroku provides its own Postgres service, which has a free tier. If Postgres isn't to your liking, feel free to choose something else.
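As a rough sketch of what that could look like from a Node worker (the players table, its columns, and the use of the pg package are assumptions; DATABASE_URL is the config var Heroku Postgres sets for you):

    // Minimal sketch: store one integer per player in Heroku Postgres instead of a JSON file.
    // Assumes a table like: players (id TEXT PRIMARY KEY, score INTEGER).
    const { Pool } = require('pg');

    const pool = new Pool({
      connectionString: process.env.DATABASE_URL,
      // Heroku Postgres usually requires SSL; adjust for your plan and driver version.
      ssl: { rejectUnauthorized: false },
    });

    async function setScore(playerId, score) {
      // Insert the player's integer, or update it if the player already exists.
      await pool.query(
        `INSERT INTO players (id, score) VALUES ($1, $2)
         ON CONFLICT (id) DO UPDATE SET score = EXCLUDED.score`,
        [playerId, score]
      );
    }

    async function getScore(playerId) {
      const { rows } = await pool.query('SELECT score FROM players WHERE id = $1', [playerId]);
      return rows.length ? rows[0].score : null;
    }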

indexedDB in a Chrome App

I'm building a Chrome app which requires a persistent and local database, which in this case can be either indexedDB or basic object storage. I have several questions before I begin developing the app:
Is it possible to persist indexedDB data after uninstallation of the Chrome app and the Chrome browser?
If the indexedDB file/data persists, can I locate and view it?
If I can locate but can't view it, is it possible to change the location of the indexedDB file?
Can I store the indexedDB in a file located on desktop or any other custom location?
If I had these requirements, I see a couple of options that you might pursue:
1. Write a simple database backed by the FileSystem API, and periodically lock the database and back up that file. This would be pretty cool, because I don't know of anyone who has implemented a simple FileSystem API-backed database, but I could see it being useful for other purposes.
2. Mirror any edits to the database to a copy of the database stored on your backup server, and write functions that can import snapshots from that backup.
3. Simply write functions to export from your indexedDB to some backup format, and to import from that backup (a rough sketch follows below).
All options seem quite time consuming. It would be cool if, when you create an indexedDB, you could specify an HTML FileSystem API entry file to back it; that way you wouldn't have to do 1 or 2.
I agree that it seems like an oversight that an indexedDB is so difficult to back up.
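For option 3, a very rough sketch of an export function (the database name, and the use of getAll, are assumptions; note that out-of-line keys are not captured here):

    // Minimal sketch: dump every object store in an IndexedDB database to a JSON string.
    // Only stored values are exported; stores that use out-of-line keys would need extra handling.
    function exportDatabase(dbName) {
      return new Promise(function (resolve, reject) {
        var openReq = indexedDB.open(dbName);
        openReq.onerror = function () { reject(openReq.error); };
        openReq.onsuccess = function () {
          var db = openReq.result;
          var names = Array.prototype.slice.call(db.objectStoreNames);
          if (names.length === 0) { resolve('{}'); return; }
          var dump = {};
          var tx = db.transaction(names, 'readonly');
          tx.oncomplete = function () { resolve(JSON.stringify(dump)); };
          tx.onerror = function () { reject(tx.error); };
          names.forEach(function (name) {
            var getAllReq = tx.objectStore(name).getAll();
            getAllReq.onsuccess = function () { dump[name] = getAllReq.result; };
          });
        };
      });
    }

    // Usage: exportDatabase('my-db').then(function (json) { /* write json to a backup file */ });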
I am writing a basic browser-only application (no back-end server code at this time), so I also have storage requirements, though I am not doing backups. I am looking at PouchDB as a solution: http://pouchdb.com/
Everything is looking good so far. They also mention that they would work well with Google Apps.
http://pouchdb.com/faq.html#native_support
The nice thing is you could sync your pouchdb data with a server couchdb instance.
http://pouchdb.com/api.html#replication
http://pouchdb.com/api.html#sync
If you want to keep the application local to the browser with no server support, you could back up the entire database by using a batch fetch.
http://pouchdb.com/api.html#batch_fetch
I would run the result through gzip before putting it on the filesystem.
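A minimal sketch of that batch-fetch backup, plus a matching restore (the database name is a placeholder; allDocs and bulkDocs are the calls documented in the links above):

    // Minimal sketch: export a PouchDB database to JSON and restore it later.
    // Assumes the PouchDB script has been loaded; "app-data" is a placeholder name.
    var db = new PouchDB('app-data');

    function backup() {
      // Batch-fetch every document, including its contents.
      return db.allDocs({ include_docs: true }).then(function (result) {
        var docs = result.rows.map(function (row) { return row.doc; });
        return JSON.stringify(docs);   // gzip this string before writing it out
      });
    }

    function restore(json) {
      // new_edits: false keeps the original _rev values from the backup.
      return db.bulkDocs(JSON.parse(json), { new_edits: false });
    }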
I am currently attempting this very same thing. I am using the Chrome Sync File System API (http://goo.gl/5q8Z9M), but running into some instances where my file (or its contents) is deleted. With this approach I am writing out a JSON object. Hope this helps.

What's the best way of backing up a Rails app's data?

I need to make a backup system for my Rails app, but this one has to be a little special: it shouldn't back up all the database info and files into a single file or folder, but rather back up the database info and attachment files per user. I mean, each of these backups should be able to regenerate all the information and files for a single user.
My questions are:
Is this possible? What's the best way to do it? And if it's impossible, or just a bad idea, why?
Note: The database is a MySQL one.
Note 2: I used Paperclip for the users' uploads.
I'm guessing you have an app that backs up data when a user clicks on something, right? I'm thinking: get all the info connected to the user (this depends on how you built your user model, so maybe you should have a get_all_info method), then write it out in SQL format to a file, which you save as .sql (either using File.new or Logger.new).
I would dump the entire user object and its related objects into a single XML file. As you generate the XML, grab all the attachment files and write the XML plus those files into one directory, then compress it.
I think there are definitely use cases for a feature like this, but be sure to run it in a background process, and only when needed, so as not to bog down the web server. Take a look at http://github.com/tobi/delayed_job or http://github.com/defunkt/resque.

How does one properly cache/update data-driven iPhone apps that use remote databases?

My app is highly data-driven and needs to be frequently updated. Currently the MySQL database is dumped to an XML file via PHP, and when the app loads it downloads this file. Then it loads all the values into NSMutableArrays inside a data manager class which can be accessed anywhere in the app.
Here is the issue: the XML file produced is about 400 KB, and this apparently takes several minutes to download on the EDGE network, and even for some people on 3G. So basically I'm looking for options on how to correctly cache or optimize my app's download process.
My current thought is something along the lines of caching the entire XML file on the iPhone's disk, then serving that data up as the user navigates the app while loading the new XML file in the background. The problem with this is that the user is now always going to see the data from the previous run; it also seems wasteful to download the entire XML file every time if only one field has changed.
TL;DR: My iPhone app's data download is slow; how would one properly minimize this effect?
I've had to deal with something like this in an app I developed over the summer.
What I did to solve it was an initial download of all the data from the server, which I placed in a database on the client along with a revision number.
Then each time the user connects again, the app sends its revision number to the server; if that number is lower than the server's revision number, the server sends across the new data (and only the new data); if it's the same, it does nothing.
It's fairly simple and it seems to work pretty well for me.
This method does have the drawback that your server has to do a little more processing than normal, but it's practically nothing and much better than wasting bandwidth.
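A sketch of that revision check, written in JavaScript purely for illustration (the asker's server side was PHP/MySQL; the per-record revision field is an assumption):

    // Minimal sketch of the incremental-sync decision described above.
    // Each record is assumed to carry the revision at which it last changed.
    function changesSince(clientRevision, serverRevision, records) {
      if (clientRevision >= serverRevision) {
        // Client is already up to date: send nothing.
        return { revision: serverRevision, records: [] };
      }
      // Otherwise send only the records that changed after the client's revision.
      return {
        revision: serverRevision,
        records: records.filter(function (r) { return r.revision > clientRevision; })
      };
    }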
My suggestion would be to cache the data to a SQLite database on the iPhone. When the application starts, you sync the SQLite database with your remote database...while letting the user know that you are loading incremental data in the background.
By doing that, you get the following:
Users can use the app immediately with stale data.
You're letting the user know new data is coming.
You're storing the data in a more appropriate format.
And once the most recent data is loaded...the user gets to see it.