JHipster entity and user data gone after MySQL runs out of memory - mysql

I have an issue with JHipster crashing and losing all entity entries. I'm running JHipster in production with a MySQL database server on Docker. Every day or so the MySQL database crashes and I have to restart the JHipster application. When it restarts, all of the MySQL data is gone and doesn't come back on the reload. This includes both entities and user login information. I think I need to load the entity data in again somehow, but I'm not sure how.
How would I go about reloading the Jhipster user and entity data once the server crashes?
I have both MySQL and JHipster running with docker-compose.
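A common cause of this symptom (not confirmed from the question itself) is running the MySQL container without a named volume, so the data directory lives in the container's writable layer and is wiped whenever the container is recreated. A minimal docker-compose sketch, with illustrative service, volume, and database names:

```yaml
# Sketch only: service, volume, and database names are placeholders.
services:
  mysql:
    image: mysql:8.0
    environment:
      - MYSQL_DATABASE=jhipster
    volumes:
      # Without this mapping, /var/lib/mysql lives in the container's
      # writable layer and is lost when the container is recreated.
      - mysql-data:/var/lib/mysql
volumes:
  mysql-data:
```

With the volume in place, restarting or recreating the container reattaches the same data directory, so entities and user accounts survive a crash.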

Related

Heroku restart not the same as Nodejs restart

I have a Node.js application which uses a MySQL database. I have written a function that runs every time the server restarts. This function updates the MySQL schema based on schema versions I have created. If I connect an empty database, for example, my schemaCheck function will build everything from scratch.
Now, I have a database connected to Heroku which had some tables. I deleted all the tables from MySQL Workbench and ran:
heroku ps:restart myApp
This was supposed to rebuild all of my tables and display various console messages.
The console says that the restart was successful, but I know that it did not restart, because it does not display any of the console messages that it is supposed to show in heroku logs --tail.
However, if I do a git push to Heroku, all of my functions that are supposed to run on server restart run properly. The tail message starts with:
Build started by user dave32@mct.edu
Do I have to rebuild every time to restart my Heroku server? Building takes a lot of time.

How to avoid optaweb-employee-rostering rebuilding persisted data on server restart

I'm running optaweb-employee-rostering in a dockerized WildFly server, persisting data with a MySQL database that also runs in a container. The .war file is not built into the server's Docker image; it's deployed manually via WildFly's admin interface. Every time the container is stopped and restarted, the application rebuilds the sample data, deleting any data saved during usage, so the final behavior is the same as RAM-based storage: the data is lost if the server stops.
Is there a way to avoid this behavior and keep saved data on server restart?
This is caused by the hbm2ddl value here and by the Generator's post-construct. In the current OpenShift image there are environment variables to change that.
We're working on streamlining this "getting started" and "putting it into production" experience as part of the refactor to React / Spring Boot.
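For reference, the setting the answer alludes to is Hibernate's standard schema-generation property. In a plain JPA persistence.xml it looks like the fragment below (the property name is standard Hibernate; whether the optaweb image exposes it exactly this way is an assumption):

```xml
<!-- Standard Hibernate property. 'create' / 'create-drop' rebuild the schema
     on every deploy; 'update' alters it in place and keeps existing rows. -->
<property name="hibernate.hbm2ddl.auto" value="update"/>
```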

Sails.js: Production env + sails-mysql - database tables not created upon lift

In production mode, when lifting a Sails application, the database tables are not created on lift, while in dev mode they are. Right now, when deploying, I run in dev mode once first so that the tables are created, and then run in prod mode. Is there a way around this?
No; this is by design. In the production environment, Sails does not run any migrations, to ensure that data isn't corrupted or lost when lifting.
From the Sails deployment guide:
Sails sets all your models to migrate: safe when run in production, which means no auto-migrations are run on starting up the app. You can set your database up the following way: create the database on the server, then run your Sails app with migrate: alter locally, but configured to use the production server as your db. This will automatically set things up. In case you can't connect to the server remotely, you can simply dump your local schema and import it into the database server.

Automatic syncing database on local server to remote server

I have developed a Django application using MySQL as the database engine.
I want my application to perform all database actions against the local machine, and then at an interval (say every 5 minutes) have the local database sync to the database on the server automatically.
How can I do this kind of thing? Is it done with a MySQL script, or can Django do it for me using its tools?
The term "MySQL Replication" is what you want.
But note that it syncs data whenever the data on the local server changes, not on a fixed interval.
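For reference, classic binlog-based replication is configured roughly as below; all hosts, credentials, and log coordinates are placeholders, and the syntax shown is the pre-MySQL-8.0.22 form:

```sql
-- On the source (local) server, my.cnf must enable the binary log:
--   [mysqld]
--   server-id = 1
--   log_bin   = mysql-bin
-- On the replica (remote) server, point it at the source and start replicating:
CHANGE MASTER TO
  MASTER_HOST = '203.0.113.10',        -- placeholder source address
  MASTER_USER = 'repl',                -- placeholder replication account
  MASTER_PASSWORD = 'secret',
  MASTER_LOG_FILE = 'mysql-bin.000001',
  MASTER_LOG_POS = 4;
START SLAVE;
```

Replication pushes every committed change as it happens; if a strict 5-minute batch interval is a hard requirement, a periodic mysqldump-and-import job is the simpler fit.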

Grails DataSource for remote MySQL database access and migrations

Is it possible for a locally running Grails application to access and update a remote MySQL db?
Assume the remote server is Linux, on which Tomcat and MySQL are installed in the usual places.
Assume the remote host is accessed as tom@189.124.24.249. So does Grails need to access the db as the user 'tom', or does it need to be the root user or the mysql user? Does the password of user tom need to be specified in DataSource.groovy? In MySQL, the db test_db is configured to be accessed with user name 'guru' and password 'secret'.
If the same Grails application is also running on the remote server and accessing that remote db, will a locally running instance of the Grails application accessing the same remote db cause any problems?
Assume the remote db name is test_db.
I also need this in the context of the Liquibase Grails plugin and database migrations. I need to run the grails migrate command against the remote db to synchronize it with the local db.
A side question: how do I synchronize a local db whose tables are already populated to a freshly created remote db with no data? This seems to fall under db content migration, which is not covered by the Grails plugin as far as I know. I would like to know the correct approach to this in the context of a Grails application.
You just need to set the proper credentials in DataSource.groovy and it should all work fine. We are running our app in a production environment and the database server is on a different box.
I don't think that two applications accessing the database server should be a problem.
Can't help with the side question, sorry.
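A sketch of what that looks like in grails-app/conf/DataSource.groovy, using the db name and MySQL credentials from the question (test_db, guru/secret; the host is the question's address). Note it is the MySQL account, not the SSH user tom, that goes here:

```groovy
// grails-app/conf/DataSource.groovy (sketch; values taken from the question)
environments {
    development {
        dataSource {
            driverClassName = "com.mysql.jdbc.Driver"
            url      = "jdbc:mysql://189.124.24.249:3306/test_db"
            username = "guru"    // the MySQL account, not the SSH login 'tom'
            password = "secret"
        }
    }
}
```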
I believe it's mostly a duplicate of Liquibase Grails database migrations.
For the side question: after Grails migrates the structure, mysqldump or whatever backup/restore procedure should work.
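A minimal sketch of that backup/restore step, with placeholder host and database names; --no-create-info exports rows only, leaving the Grails/Liquibase-managed schema untouched:

```shell
# Export only the data from the populated local db (schema is managed by Grails/Liquibase)
mysqldump -u guru -p --no-create-info local_db > data.sql
# Import the rows into the freshly migrated remote db
mysql -h 189.124.24.249 -u guru -p test_db < data.sql
```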