How do I host and edit a JSON file separate from my GitHub repository?

I'm building a Discord bot and am trying to host it with Heroku and GitHub. I intend to store user data in a JSON file but cannot figure out how to edit the JSON file because I cannot edit it while it is in the repository. I am hoping there is a way to do it through Heroku, without using a separate website.
Note: I know how you would normally edit the JSON file, but because it is in a GitHub repository it doesn't work the normal way.

Don't use a file as a database. Use a database as a database.
This is generally good advice, but especially important on Heroku where the ephemeral filesystem prevents changes to files from persisting long-term.
Heroku Postgres is a relatively easy way to get started. Its base plan is free.
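For reference, connecting to it from a Node worker is only a few lines. A minimal sketch, assuming the `pg` package and the `DATABASE_URL` config var that Heroku sets when the add-on is attached:

```typescript
// Minimal sketch: connect to Heroku Postgres from a Node app.
// Assumes the `pg` package is installed and DATABASE_URL is set
// (Heroku sets it automatically when the add-on is attached).
import { Pool } from "pg";

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  // Heroku Postgres requires SSL; on the hobby tiers the certificate
  // is not verifiable, so verification is commonly relaxed like this.
  ssl: { rejectUnauthorized: false },
});

export async function checkConnection(): Promise<void> {
  const { rows } = await pool.query("SELECT now() AS server_time");
  console.log("Connected; server time is", rows[0].server_time);
}
```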

I believe GitLab allows you to edit files in place, and they have a free tier like GitHub. As mentioned by Chris, this is generally not recommended, but it may work for your needs.
https://about.gitlab.com/

Related

Heroku via GitHub, where are my JSON files updated?

This isn't exactly a question in need of help; however, I am curious as to which file is updated, and when, when I use Heroku via GitHub. Would it be the one within my GitHub repository, or does Heroku save those files and update them somewhere else?
All I'm trying to accomplish is to edit a JSON file so I can store an integer for each player (I'm using a worker dyno for a Discord bot). Also, yes, that seems like what I am trying to do: anything that saves the information, doesn't require money, and isn't too complex.
EDIT:
This issue has been solved by the answer that Heroku simply cannot persist updates to JSON files. I have resolved it myself by moving my hosting onto a Raspberry Pi 3 Model B+. Thank you for all the answers.
When you use Heroku's GitHub Sync feature, a deployment will retrieve your code directly from GitHub.
Those files aren't saved anywhere else. A new deployment from master will take the code fresh from GitHub.
Heroku's filesystem is ephemeral. Any changes you save to the local filesystem will be lost when your dyno restarts, which happens frequently. If you scale your application to multiple dynos you'll also run into trouble since the ephemeral filesystems are dyno-local.
Your best bet is to use a proper client-server datastore, like PostgreSQL. Heroku provides its own Postgres service, which has a free tier. If Postgres isn't to your liking, feel free to choose something else.
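Since the stated goal is just one integer per player, the schema can stay tiny. A rough sketch of what that might look like with node-postgres; the table and column names (player_scores, user_id, score) are made up for illustration:

```typescript
// Sketch: store one integer per Discord user in Postgres instead of a JSON file.
// Table and column names are illustrative placeholders.
import { Pool } from "pg";

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  ssl: { rejectUnauthorized: false },
});

export async function init(): Promise<void> {
  await pool.query(`
    CREATE TABLE IF NOT EXISTS player_scores (
      user_id TEXT PRIMARY KEY,
      score   INTEGER NOT NULL DEFAULT 0
    )`);
}

export async function setScore(userId: string, score: number): Promise<void> {
  // Upsert: insert the row, or update it if the user already exists.
  await pool.query(
    `INSERT INTO player_scores (user_id, score)
     VALUES ($1, $2)
     ON CONFLICT (user_id) DO UPDATE SET score = EXCLUDED.score`,
    [userId, score]
  );
}

export async function getScore(userId: string): Promise<number> {
  const { rows } = await pool.query(
    "SELECT score FROM player_scores WHERE user_id = $1",
    [userId]
  );
  return rows.length ? rows[0].score : 0;
}
```

Unlike a file on the dyno's disk, these rows survive restarts and are shared across dynos.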

MySQL & GIT - Is it possible to use GIT versioning to merge MySQL databases? [duplicate]

I am a WordPress Designer/Developer, who is getting more and more heavily involved with using version control, notably Git, though I do use SVN for some projects. I am currently using Beanstalk for my remote repo.
Adding all of the WordPress files to my repo is no problem, if I wanted to I know I could .gitignore the wp-config file, but since I'm the only developer, currently, and these projects are closed source, it really makes little sense.
WordPress relies heavily on the database, as any CMS does, to keep textual content, and many settings depending on the specific plugin/theme configuration I'm using. I'm wondering what the best way of using version control on the database would be, if it's even possible. I guess I could do a SQL dump, though my MySQL server is running on Windows (read as: I don't know how to do it), and then add the SQL dump to my repository. But when I push something live, that poses huge security threats.
Is there an accepted practice of doing this?
You can back up your database within a Git repository. Of course, if you place the data into Git in binary form, you will lose all of Git's ability to store the data efficiently using diffs (changes). So the number one best practice is this: store the data in a text-serialised format.
mysqldump is a suitable program to help you do this. It isn't perfect, though. If anything disturbs the serialisation order of items (e.g. as a result of creating new tables), artificial breaks will enter into the diff, which decreases the efficiency of storage. You could write a custom serialiser to serialise changes only, but then you are doing the hard work that Git is already good at. Just use the SQL dump.
That being said, what you are wanting to do isn't what devs normally mean when they talk about putting the database in Git. For instance, if you read the link posted by @eggyal (link to codinghorror) you will see that what is actually placed in Git are the scripts needed to generate the initial database. There may be additional scripts, like those to populate the database with clean-state data, or to populate it with testing data. All such SQL scripts are text files, in pretty much the same format as the SQL dump you would get from mysqldump. So there's no reason you can't do it that way with your day-to-day data as well.
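If you do go the day-to-day-dump route, the dump step is easy to script. A rough sketch in Node (assuming the mysqldump client is on PATH; the database name, output path, and environment variables are illustrative placeholders):

```typescript
// Rough sketch: produce a Git-friendly text dump of a MySQL database.
// Assumes the mysqldump client is on PATH; database name, output path,
// and environment variables are illustrative placeholders.
import { execFileSync } from "child_process";
import { mkdirSync, writeFileSync } from "fs";

const dump = execFileSync(
  "mysqldump",
  [
    "--user", process.env.DB_USER ?? "root",
    "--skip-extended-insert", // one INSERT per row, so the output diffs line by line
    "--skip-dump-date",       // omit the timestamp comment that would change on every dump
    "wordpress",              // database name (placeholder)
  ],
  // MYSQL_PWD keeps the password off the command line.
  { env: { ...process.env, MYSQL_PWD: process.env.DB_PASSWORD ?? "" } }
);

mkdirSync("db", { recursive: true });
writeFileSync("db/wordpress.sql", dump);
// Then commit the dump like any other file:
//   git add db/wordpress.sql && git commit -m "Database snapshot"
```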
There is not much software available for version-controlling databases like MySQL and MongoDB.
But one is under development and its beta version is about to be launched soon. Check out Klonio - Version Control for databases.
The article How to Sync A Local & Remote WordPress Blog Using Version Control gives advice on how to automate syncing between two instances (development, production) of a WordPress blog using Mercurial. It mentions that for this scenario, Git and Mercurial are very similar.
Step 4 (Synchronizing The Databases) is of interest here.
The database content will be exported to a file that is tracked by revision control. Each time we pull changes, the database content will be replaced by this file, making our database up to date.
Then, it elaborates on conflicts and the scripting part of the job.
There is a Mercurial version control tutorial out there if you're not familiar with it.
If you are only interested in keeping schema changes under version control, there is a nice tool, SqlRog. It extracts the schema into project files that can be put under Git.
Be aware that WordPress stores all news feed content in the database, so even if you don't make any changes, there will be a lot of changing content.

Can you write to a JSON file from a Heroku app deployed on GitHub?

I am deploying a node.js app on Heroku through GitHub and I want to write to/update a JSON file that is in the GitHub repository. Is that possible and if so how do I do it?
No, and it's not really a recommended pattern. What are you trying to achieve? Generally it's bad practice to mix your plumbing (the code in your repo) with what runs through it (the data your app uses and creates).
A database is generally used to store and retrieve data.
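If it helps to keep thinking in terms of "one JSON document", Postgres can store it as jsonb and the app can read and rewrite it much like it would a file. A rough sketch, again assuming node-postgres on Heroku; the json_documents table and its columns are illustrative names:

```typescript
// Sketch: keep the "one JSON document" model, but store it in a Postgres
// jsonb column instead of writing the file back into the Git repo.
// The json_documents table and its columns are illustrative names.
import { Pool } from "pg";

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  ssl: { rejectUnauthorized: false },
});

// One-time setup (could also live in a migration).
export async function init(): Promise<void> {
  await pool.query(`
    CREATE TABLE IF NOT EXISTS json_documents (
      name TEXT PRIMARY KEY,
      data JSONB NOT NULL
    )`);
}

export async function saveDocument(name: string, data: unknown): Promise<void> {
  await pool.query(
    `INSERT INTO json_documents (name, data) VALUES ($1, $2::jsonb)
     ON CONFLICT (name) DO UPDATE SET data = EXCLUDED.data`,
    [name, JSON.stringify(data)]
  );
}

export async function loadDocument<T>(name: string): Promise<T | null> {
  const { rows } = await pool.query(
    "SELECT data FROM json_documents WHERE name = $1",
    [name]
  );
  return rows.length ? (rows[0].data as T) : null;
}
```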

Dealing with many images, some small and some large, in a Spring/Java application using MySQL

I was wondering what the best pattern was for handling the management of images these days when using Spring/Java and MySQL.
I have several questions:
1. Some of the images are just small avatars for the users. Is it fine to put these directly into MySQL? Or use the file system?
2. For the larger images, is the file system pretty much the only option, and then use MySQL to store the location on the file system?
3. Where is a good spot to put them on a Linux server? /var/files/images?
4. Since the files are hidden from the war deployment directory, what is the best way to stream them? Use some kind of a file output stream as the response body for an HTTP request?
5. Also, do I have to develop all of the file management stuff myself, like cleaning up unused files and the like?
6. What about image security? Some images should not be accessed by everyone. I think I'd need to use a separate URL with Spring Security checking the current user for this.
I'd appreciate advice on all of these questions. Thanks.
You could use MySQL, and that would have the advantage of centralization and easy cleanup, but IMHO it's a waste of the database's resources if you plan to scale.
For data like images where everything is public, consider something like Amazon S3, which lets you serve images directly from S3's web servers. If you plan to host everything yourself, serve from a directory on disk; just remember to turn directory listings off :)
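The question is Spring/Java, but the usual answer to points 4 and 6 is stack-agnostic: keep the files outside the deployment directory, store only their paths and metadata in MySQL, and stream them through a handler that checks permissions first. A rough sketch of that shape, written in Node/Express only for brevity; the canView check, the header-based user lookup, and the file naming are hypothetical placeholders, and in Spring the equivalent would be a controller guarded by Spring Security that returns the file as the response body:

```typescript
// Rough sketch (Node/Express for brevity): files live outside the webroot,
// the handler checks access, then streams the file back.
// canView() and the x-user-id header are hypothetical placeholders.
import express from "express";
import path from "path";

const app = express();
const IMAGE_ROOT = "/var/files/images"; // directory suggested in the question

function canView(userId: string | undefined, imageId: string): boolean {
  // Placeholder: in practice, look up ownership/visibility in the database.
  return Boolean(userId);
}

app.get("/images/:id", (req, res) => {
  const userId = req.header("x-user-id"); // stand-in for a real session lookup
  if (!canView(userId, req.params.id)) {
    res.sendStatus(403);
    return;
  }
  // In practice the filename would come from a database row keyed by :id.
  const file = path.join(IMAGE_ROOT, path.basename(req.params.id) + ".jpg");
  res.sendFile(file); // streams the file and sets Content-Type from the extension
});

app.listen(3000);
```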