I'm trying to create a Python script to make backups of a MySQL DB.
(I have a Django REST app with a MySQL DB, wrapped in Docker Compose.)
The script should make backups and send them to the Disk API. It accepts these parameters:
db name
user name
user password
API parameters for working with Disk API
There should be a simple function in the script. I think I'll just add it to cron (the Linux task scheduler) so that it runs once a week, for example.
The function has to create a DB backup and send it via the API to Yandex Disk.
So, these are my thoughts, but I have never made something like this before. I would be very grateful for links to or examples of such a script.
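For illustration, here is a minimal sketch of what such a function could look like, assuming mysqldump is available where the script runs and using the Yandex Disk REST upload flow (request an upload href, then PUT the file). The endpoint, folder name, and credentials below are assumptions to verify against the Yandex Disk API docs; a weekly cron entry would then simply call this script.

import datetime
import subprocess

import requests

# Assumed Yandex Disk REST endpoint; verify against the official API docs.
YANDEX_UPLOAD_URL = "https://cloud-api.yandex.net/v1/disk/resources/upload"


def backup_and_upload(db_name, db_user, db_password, oauth_token):
    today = datetime.date.today().isoformat()
    dump_file = f"/tmp/{db_name}-{today}.sql"

    # 1. Dump the database (add -h <host> if MySQL runs in another container).
    with open(dump_file, "wb") as out:
        subprocess.run(
            ["mysqldump", "-u", db_user, f"-p{db_password}", db_name],
            stdout=out,
            check=True,
        )

    # 2. Ask Yandex Disk for an upload href, then PUT the dump file to it.
    #    The "backups" folder is assumed to already exist on the Disk.
    headers = {"Authorization": f"OAuth {oauth_token}"}
    resp = requests.get(
        YANDEX_UPLOAD_URL,
        params={"path": f"backups/{db_name}-{today}.sql", "overwrite": "true"},
        headers=headers,
    )
    resp.raise_for_status()
    with open(dump_file, "rb") as f:
        requests.put(resp.json()["href"], data=f).raise_for_status()


if __name__ == "__main__":
    backup_and_upload("mydb", "myuser", "mypassword", "YANDEX_OAUTH_TOKEN")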
I have a project based on a massive ingestion of Kafka messages; those messages interact with a MySQL database.
I want to know the best way to update the MySQL tables using scripts (.sql). I'm thinking about deploying them during application startup; Kafka would hold the messages until the application has started and then deliver them once all the database modifications are finished.
Any idea/example? I suppose Kubernetes orchestration can make this harder to achieve!
One theoretical possibility here, sergio:
Attach a PVC to the MySQL pod and add the scripts to it.
Use a postStart hook to run the mounted scripts.
For the Kafka container, add an init container that checks for the existence of a row, or runs some other check that all is well with the MySQL pod (a rough sketch of such a check follows below).
Bring up the Kafka pod.
(This was over the limit for a comment, so it is posted as an answer.)
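To make the init-container step a bit more concrete, here is a rough sketch of the kind of check it could run. The table name, credentials, and the pymysql dependency are all assumptions; Kubernetes would simply retry the init container until it exits successfully.

# Hypothetical init-container check: exits 0 only once MySQL is reachable
# and the startup scripts appear to have run.
import sys

import pymysql  # assumed to be available in the init-container image


def mysql_ready(host, user, password, database):
    try:
        conn = pymysql.connect(host=host, user=user, password=password, database=database)
        with conn.cursor() as cur:
            # Hypothetical marker table written by the schema scripts.
            cur.execute("SELECT COUNT(*) FROM schema_migrations")
            cur.fetchone()
        conn.close()
        return True
    except Exception:
        return False


if __name__ == "__main__":
    sys.exit(0 if mysql_ready("mysql", "app", "secret", "appdb") else 1)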
I'm a bit stuck. I switched recently to Google Cloud MySQL and I would like to clone one of my databases (not the instance) for an external development environment for freelancers.
The idea is to clone/duplicate the existing live database, then scrub sensitive data (emails, etc.).
I know I need to use the "gcloud" command-line tool, but I don't really know how to do it.
Can someone help me?
The easiest way to do this would be to restore a backup made on the first instance to a new instance. I recommend you review the Cloud SQL documentation around backups.
Example steps:
Create an on-demand backup:
gcloud sql backups create --async --instance [SOURCE_INSTANCE_NAME]
You can see a list of backup ids for the source instance with this:
gcloud sql backups list --instance [SOURCE_INSTANCE_NAME]
Restore to the new instance, after preparing it (creating it, ensuring it has no replicas, etc.):
gcloud sql backups restore [BACKUP_ID] --restore-instance=[TARGET_INSTANCE_NAME] \
--backup-instance=[SOURCE_INSTANCE_NAME]
You can also do all of the above through the console.
Once the restore is complete, you can remove the backup. The easiest way to do this is through the console, but it can be done via the REST API if necessary.
Of course, there isn't a gcloud command to do the data cleanup you describe; you would need to do that yourself, based on your own data and anonymization requirements. Doing good anonymization can be tricky unless you have a very limited amount of sensitive data.
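Just to illustrate the kind of cleanup involved (the table, column, and connection details below are hypothetical, and this should only ever be pointed at the cloned instance), a scrub of email addresses could be as simple as:

# Hypothetical scrub of email addresses; table, column, and connection details
# are placeholders, and this must only ever be run against the cloned instance.
import pymysql

conn = pymysql.connect(host="CLONE_HOST", user="root", password="REPLACE_ME", database="mydb")
with conn.cursor() as cur:
    # Replace real emails with deterministic placeholders keyed on the row id.
    cur.execute("UPDATE users SET email = CONCAT('user', id, '@example.com')")
conn.commit()
conn.close()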
If instead you just want to export a single database, then you can use the export and import functionality. This is subject to some limitations; for example, triggers, stored procedures, and possibly views will need to be manually recreated.
Full instructions are in the export documentation, but here's a quick summary.
You will need a cloud storage bucket to hold the output, and the service account for the database will need to be a writer on that bucket. Once that is in place:
gcloud sql export sql [INSTANCE_NAME] gs://[BUCKET_NAME]/[DUMP_FILE_NAME] \
--database=[DATABASE_NAME]
You can then either download the file and use it on a local database, or import it into a new instance, like so:
gcloud sql import sql [INSTANCE_NAME] gs://[BUCKET_NAME]/[DUMP_FILE_NAME] \
--database=[DATABASE_NAME]
Obviously, sanitizing the data is still up to you.
I have a SQL script which selects data from the DB and stores it to files. I am unable to create a directory to store these files in.
I have a shell script that loads the SQL file. The shell script and the SQL file are on a separate server from the MySQL DB. I would prefer to create this directory using SQL, as I want to avoid SSH.
Any suggestions? Surprisingly, I couldn't find anything on Google.
I will assume that you're using MySQL, according to your tags. You could do it with a Microsoft SQL Server or Oracle database, but unfortunately, at the moment, there is no way to create a directory from MySQL.
Some will guide you towards a workaround based on the creation of a data directory; I wouldn't recommend this, as it could lead to performance issues in the future, or worse.
The best solution would be to use a script (Java, VBScript, SSH, batch, ...). Again, you won't be able to start this script from within your SQL query easily. I know that's not good news, but it is important not to lead you in the wrong direction.
I would suggest reversing your thinking, and starting your SQL query from a script (again, in any language you're used to).
I couldn't find any way other than opening an SSH session to the target box:
Open an SSH session
Create the directory
Close the SSH session
Load the SQL file using the shell script
The SQL adds the generated files to the directory created in step 2.
ssh -t $USER@$HOST <<-SSH-END
mkdir -p "dir/path"
exit
SSH-END
Sharing just in case someone else needs to do the same.
What is the best way to import a full .sql file (with DDL and DML statements) into a MySQL database from a Java application deployed on CloudBees?
Should I try to get the Runtime process and see if something like this works:
Runtime rt = Runtime.getRuntime();
Process pr = rt.exec("mysql -p -h ServerName DbName < dump.sql");
(This was the recommended solution in a previous question for a self-hosted environment; I'm not sure whether, in a CloudBees-hosted application, I can execute a process that accesses MySQL.)
Is there a better solution? (Again, it has to be executed from within the application; the .sql file to import will be provided by the user as part of his interaction with the web application.)
I'd really like to avoid parsing the .sql file and sending the statements one by one through JDBC.
I haven't tried this myself, but Flyway seems like it would let you import your SQL file during initialization of your app on CloudBees.
Flyway is an attempt to bring the popular Ruby concept of database migrations to Java. Flyway will let you place your .sql files inside the classpath of your app, and you can then use some Java code to update your database as needed.
Based on their migration docs, you should be able to place your .sql file as a file named V1__Initial_version.sql into a /db/migration/ directory on your classpath. You would then use something like the following code to trigger the migration when the app starts:
import com.googlecode.flyway.core.Flyway;
...
Flyway flyway = new Flyway();
flyway.setDataSource(...);
flyway.migrate();
I noticed that the Flyway FAQ explains that the database is locked during migrations, so this approach should work even if you scale out your application on CloudBees to use more than one instance (very nice!!).
Give it a try, I guess. The mysql command-line tools may or may not be on the host. If they aren't, we can probably add them.
The other option would be to use a Jenkins job to do it. You could expose an API that Jenkins calls to load the database.
I have two DB servers, i.e. Server1 and Server2.
I want to transfer data from Server1 to Server2 every morning (at 9:00 AM, let's say).
How can we achieve this?
Can this transfer of data be done automatically?
My choice on a Windows machine is to create a batch file that runs mysqldump with the parameters that suit you best.
This batch file can be tied to the Windows scheduler for automated execution at any point in time.
You can consult this page for some guidelines from the MySQL community.
Now that you have a dump of your DB, your script should send it to the destination server and deploy it there (this can also be done automatically).
I use mysqldump parameters that allow me to incrementally add data to the new server.
I transfer the dump using DeltaCopy, which is a community Windows wrapper around the rsync program (if interested, you should check out Syncrify on that page as well).
Both of those last points allow for a much faster process than copying the entire DB every time.
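If a plain batch file ever feels limiting, the same dump-and-load idea can be expressed as a small script that the scheduler calls instead. A rough sketch follows; the host names, credentials, and the presence of the MySQL client tools are all assumptions.

# Rough sketch: dump from Server1 and load straight into Server2.
# Assumes the mysqldump/mysql client tools are installed and both servers
# are reachable; host names and credentials are placeholders.
import subprocess

dump = subprocess.run(
    ["mysqldump", "-h", "Server1", "-u", "backup_user", "-pSECRET", "mydb"],
    check=True,
    capture_output=True,
)
subprocess.run(
    ["mysql", "-h", "Server2", "-u", "load_user", "-pSECRET", "mydb"],
    input=dump.stdout,
    check=True,
)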