I'm facing a problem running NestJS cron jobs in sync with a MySQL database. The problem right now is that once the server restarts, all the cron jobs are lost. What is the best way to restore the cron jobs that were set before the restart?
I would not recommend running cron jobs in your primary server. The cron jobs you schedule in NestJS run in the same Node process, so if you restart the server they are all lost.
It's better to run cron jobs in a separate Node process (i.e. start another NestJS app that does only the cron jobs).
The reason is that you want to monitor cron jobs and decouple them from the main application.
Cloud platforms like render.com have a cron job section, for instance.
The only way I found is to create a crons table in the DB, implement the OnModuleInit interface provided by NestJS, retrieve all the crons from the DB, and register them inside the onModuleInit method.
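A minimal sketch of that idea, assuming TypeORM and a hypothetical CronJobEntity with name and cronExpression columns (all names here are placeholders, not from the question):

```typescript
import { Injectable, OnModuleInit } from '@nestjs/common';
import { SchedulerRegistry } from '@nestjs/schedule';
import { InjectRepository } from '@nestjs/typeorm';
import { Repository } from 'typeorm';
import { CronJob } from 'cron';
import { CronJobEntity } from './cron-job.entity'; // hypothetical entity: { name, cronExpression }

@Injectable()
export class CronLoaderService implements OnModuleInit {
  constructor(
    @InjectRepository(CronJobEntity)
    private readonly cronRepo: Repository<CronJobEntity>,
    private readonly schedulerRegistry: SchedulerRegistry,
  ) {}

  async onModuleInit(): Promise<void> {
    // Re-register every cron job persisted in MySQL after a restart.
    const definitions = await this.cronRepo.find();
    for (const def of definitions) {
      const job = new CronJob(def.cronExpression, () => {
        // Replace with whatever work the job is supposed to do.
        console.log(`Running persisted cron job: ${def.name}`);
      });
      this.schedulerRegistry.addCronJob(def.name, job);
      job.start();
    }
  }
}
```

With this in place, creating a new job is just an INSERT into the crons table plus the same addCronJob call, so after a restart the onModuleInit pass rebuilds the whole schedule.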
I have a project based on massive ingestion of Kafka messages; those messages interact with a MySQL database.
I want to know the best way to update MySQL tables using scripts (.sql). I'm thinking about deploying them during application startup; Kafka will retain the messages until the application is up and deliver them once all the database modifications are finished.
Any idea/example? I suppose Kubernetes orchestration can make this harder to achieve!
One theoretical possibility here, sergio:

1. Attach the scripts to a PVC mounted into the MySQL pod, so the scripts to run are available there.
2. Use a postStart lifecycle hook to run the mounted script.
3. For the Kafka container, add an init container. It checks for the existence of a row, or does some other check that all is well with the MySQL pod.
4. Only then bring up the Kafka pod (a rough manifest sketch follows below).

(This was over the limit for a comment, so it's posted as an answer.)
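A hedged sketch of what the Kafka side could look like; the image names, secret, database, and the marker table queried by the init container are all placeholders:

```yaml
# Hypothetical Kafka consumer pod that waits for MySQL to finish its init
# scripts before starting.
apiVersion: v1
kind: Pod
metadata:
  name: kafka-consumer
spec:
  initContainers:
    - name: wait-for-mysql
      image: mysql:8.0
      command: ["sh", "-c"]
      args:
        - |
          # Loop until the marker row written by the init scripts exists.
          until mysql -h mysql -uroot -p"$MYSQL_ROOT_PASSWORD" \
                -e "SELECT 1 FROM appdb.schema_ready LIMIT 1" >/dev/null 2>&1; do
            echo "waiting for MySQL schema..."; sleep 5
          done
      env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: password
  containers:
    - name: consumer
      image: my-kafka-consumer:latest   # placeholder application image
```

The postStart hook on the MySQL container would run the script mounted from the PVC and, as its last step, write the marker row that the init container is polling for.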
Use Case: A MySQL instance is running in production with the required databases. I need to configure this running container as a master and spin up one more MySQL instance as a slave.
Challenges: I'm facing issues configuring the running MySQL instance as a master. The issue is that I'm not able to create the replication user and not able to append the master/slave configuration to the my.cnf file. The reason is:
To create a replication user, or to execute any custom SQL commands in the container, we have to place an initdb.sql with the required SQL commands inside docker-entrypoint-initdb.d. When the container starts, it executes the files present in docker-entrypoint-initdb.d only if the database has not been created yet; if the database has already been created, it skips the .sql files residing in docker-entrypoint-initdb.d. This is the root cause of failing to configure the master, because MySQL is already running with databases in production, so I cannot use this approach to configure MySQL.
After facing this issue, we planned to put the configuration SQL commands in a .sh file, keep it in docker-entrypoint-initdb.d, and execute it by patching the deployment. In this scenario we are facing permission issues when executing the .sh files.
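For context, the commands such a script would need to run against the already-running master are roughly the following (user name, password, and host pattern are placeholders):

```sql
-- Hypothetical replication-user setup; credentials are placeholders.
CREATE USER 'repl'@'%' IDENTIFIED BY 'repl_password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
FLUSH PRIVILEGES;

-- Note the binary log file and position the slave should start from.
SHOW MASTER STATUS;
```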
I need to configure replication (master-slave) for MySQL instance(s) in the Kubernetes world. I have gone through a lot of posts to understand how to implement this, but nothing worked out as I expected, for the reasons explained above. Along the way I found a custom image (bitnami/mysql) which supports setting up replication, but I don't want to use it because ultimately I would not be able to use it in the production environment.
So I would be very grateful if anyone could help by suggesting an approach to solve this problem.
Thank you very much in advance!
I want to create containers with a MySQL DB and a dump loaded, for integration tests. Each test should connect to a fresh container, with the DB in the same state. It should be able to read and write, but all changes should be lost when the test ends and the container is destroyed. I'm using the "mysql" image from the official Docker repo.
1) The image's docs suggest taking advantage of the "entrypoint" script that imports any .sql files you provide in a specific folder. As I understand it, this will import the dump again every time a new container is created, so it's not a good option. Is that correct?
2) This SO answer suggests extending that image with a RUN statement that starts the MySQL service and imports all dumps. This seems to be the way to go, but I keep getting
mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
followed by
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
when I run the build, even though I can connect to MySQL fine in containers of the original image. I tried a sleep 5 to wait for the mysqld service to start up, and adding -h with 'localhost' or the docker-machine IP.
How can I fix "2)"? Or, is there a better approach?
If re-seeding the data is an expensive operation, another option would be starting/stopping a Docker container (previously built with the DB and seed data). I blogged about this a few months ago in "Integration Testing using Spring Boot, Postgres and Docker", and although the blog focuses on Postgres, the idea is the same and could be translated to MySQL.
The standard MySQL image is pretty slow to start up, so it might be useful to use something that has been prepared more for this situation, like this:
https://github.com/awin/docker-mysql
You can include data, or use it with a Flyway setup too, but it should speed things up a bit.
How I've solved this before is by using a database migration tool, specifically Flyway: http://flywaydb.org/documentation/database/mysql.html
Flyway is more for migrating the database schema, as opposed to putting data into it, but you could use it either way. Whenever you start your container, just run the migrations against it and your database will be set up however you want. It's easy to use, and you can also just use the default MySQL Docker container without messing around with any settings. Flyway is also nice for many other reasons, like giving you version control for the database schema and the ability to perform migrations against production databases easily.
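A rough sketch of running the Flyway CLI against a freshly started container; the credentials, database name, and migration directory are placeholders:

```sh
# Start a throwaway MySQL container (placeholder credentials).
docker run -d --name test-mysql -p 3306:3306 \
  -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=testdb mysql:8.0

# (Wait for the container to accept connections before migrating.)
# Apply the migrations in ./sql (e.g. V1__schema.sql, V2__seed_data.sql).
flyway -url=jdbc:mysql://127.0.0.1:3306/testdb \
  -user=root -password=secret \
  -locations=filesystem:./sql migrate
```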
To run integration tests with a clean DB, I would just have an initial dataset that you insert before the test, and then afterwards truncate all the tables (a rough sketch is below). I'm not sure how large your dataset is, but I think this is generally faster than restarting a MySQL container every time.
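A minimal sketch of the truncate step, assuming the test schema is named testdb (a placeholder): generate a TRUNCATE statement per table, then run the generated statements with foreign-key checks disabled so the order doesn't matter.

```sql
-- List a TRUNCATE statement for every table in the test schema.
SELECT CONCAT('TRUNCATE TABLE `', table_name, '`;')
FROM information_schema.tables
WHERE table_schema = 'testdb';

-- Run the generated statements between these two switches.
SET FOREIGN_KEY_CHECKS = 0;
-- ... generated TRUNCATE statements go here ...
SET FOREIGN_KEY_CHECKS = 1;
```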
Yes, the data will be imported every time you start a container. This could take a long time.
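For reference, that first option is typically wired up like this with the official image; the credentials, paths, and schema name are placeholders:

```sh
# Start a fresh test container that imports ./dumps/*.sql on first start.
docker run -d --name it-mysql \
  -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=testdb \
  -v "$PWD/dumps":/docker-entrypoint-initdb.d \
  -p 3306:3306 mysql:8.0
```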
You can view an example image that I created
https://github.com/kliewkliew/mysql-adventureworks
https://hub.docker.com/r/kliew/mysql-adventureworks/
My Dockerfile builds an image by installing MySQL, importing a sample database (from a .sql file), and setting the entrypoint to auto-start the MySQL server. When you start a container from this image, it already has the data pre-loaded in the database.
I'm running two MySQL servers, one on production and one on staging; both are EC2 instances.
In the same way, I have two MySQL RDS instances, parallel to production and staging.
Here is what I want to do.
I would like to mirror the production database to the development server every few hours,
for 1) backup, and 2) running new features against the latest database changes.
I didn't find much information regarding this issue; can anyone help?
Thanks.
Additional information:
I'm running nginx on a Linux server, with a PHP backend.
If you are running on RDS, you have two options.
Snapshot and restore your instance. You can automate this, but it may take more time the larger the DB is. Your endpoint will probably change too.
Dump the database from production and reload it into development (a rough sketch is below).
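A sketch of the dump-and-reload option as a script you could run from cron every few hours; the hostnames, credentials, and database name are placeholders:

```sh
#!/bin/sh
# Dump the production RDS database and reload it into the development instance.
# Hostnames, credentials, and the database name below are placeholders.
mysqldump -h prod-db.example.rds.amazonaws.com -u admin -p'prod_password' \
  --single-transaction --routines --triggers mydb \
  | mysql -h dev-db.example.rds.amazonaws.com -u admin -p'dev_password' mydb
```

Dropped into a crontab entry (e.g. 0 */4 * * *), this refreshes development every few hours; --single-transaction keeps the dump consistent for InnoDB tables without locking production.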
I have a .sql file (initial SQL scripts). I have recently deployed the application on Cloud Foundry, so I want to run these scripts to make the application work; the scripts will update more than 5 DB tables.
Is there any way to run the MySQL scripts from the Grails application on startup, or is there any provision to run the scripts in Cloud Foundry?
You have several options here.
The first one (which I recommend) is to use something like http://liquibase.org/ (there is a Grails plugin for it: http://grails.org/plugin/liquibase). This tool will make sure that any script you give it runs before the app starts, without running the same script twice, etc. This is great for keeping track of your database changes.
This works independently of Cloud Foundry and would help anyone installing your app to have an up-to-date schema.
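A minimal sketch of wrapping your existing scripts in a Liquibase changelog, using the SQL-formatted changelog style; the author name, tables, and statements are placeholders:

```sql
--liquibase formatted sql

--changeset yourname:1
-- Initial schema change; Liquibase records this changeset so it only runs once.
ALTER TABLE account ADD COLUMN last_login DATETIME NULL;

--changeset yourname:2
-- Seed/reference data needed by the application.
INSERT INTO account_status (code, label) VALUES ('ACTIVE', 'Active');
```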
The second option would be to tunnel to the Cloud Foundry database and run the script against the DB. Have a look at http://docs.cloudfoundry.com/tools/vmc/caldecott.html, or even easier with STS: http://blog.cloudfoundry.com/2012/07/31/cloud-foundry-integration-for-eclipse-now-supports-tunneling-to-services/
Yup, what ebottard said! :-) Personally I would opt for using the tunnel feature in VMC, but that said, I am a Ruby guy!
Be wary of the fact that there are timeouts on queries in MySQL if you are bootstrapping your database with large datasets!