OpenShift 3: "mysql.user table is damaged. Please run mysql_upgrade"

Issue
When starting a MySQL deployment on OpenShift V3, I get the following error:
mysql.user table is damaged. Please run mysql_upgrade
I cannot run mysql_upgrade because the pod isn't ready.
Questions
I have the following questions:
How can I fix this, or
How can I back up the data?

If the pod won't start, you can mount the volume with your data in another pod and download, using oc rsync, whatever the database pod had mounted under /var/lib/mysql/data/. Then you can try recovering the data from that copy.
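For example, a minimal sketch of pulling the data directory to your workstation; the recovery pod name (mysql-recovery) and the local target directory are placeholders:
oc get pods                                               # find the pod that has the volume mounted
oc rsync mysql-recovery:/var/lib/mysql/data ./mysql-data-backup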
Generally, this can happen if you import an older database dump SQL script (created using mysqldump) into a newer MySQL version. In that case, chances are the root user was removed from the table too (if it was not present in the old database). If you created such a dump, still have the older dump, and it's "good enough", you should be able to proceed as follows to import the original data again and avoid this situation (a command-level sketch follows the steps):
Create a backup copy of the database dump that you have previously created using mysqldump from the older MySQL database version, so that you can always get back to it, if things go south.
Edit the database dump SQL file and remove all the content that manipulates the mysql.user table; that is, delete the lines under the -- Table structure for table 'user' and -- Dumping data for table 'user' sections and save the modified file. (I assume here that you have your user and password specified in environment variables in the MySQL deployment configuration.)
Scale down your database pod to 0 replicas.
Delete your MySQL persistent volume claim; this will delete the database data, which you have hopefully already downloaded by mounting the volume in another pod, as mentioned above.
Recreate the PVC, under the same name.
Scale up your MySQL pod to one replica. That will initialize the database and create a user as per the environment variables.
Copy the modified sql dump file created in step 2 (that is, the one not affecting the mysql.user table) to the database pod using oc rsync.
In the MySQL pod, restore the database using the uploaded file, as per this migration guide (step 6).
Grant all privileges to your user on the application database with GRANT ALL PRIVILEGES ON <database>.* TO '<mysql_user>'@'%'; (replace <database> and <mysql_user> appropriately).
Exit the MySQL CLI on the pod, and run mysql_upgrade -u root in the shell.
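Put together, a rough sketch of the commands for the steps above could look like this; the deployment config name (mysql), PVC name (mysql), pod name (mysql-1-abcde), the local dump directory, and the dump file name are all placeholders, and the exact root-password handling depends on your image and environment variables:
oc scale dc/mysql --replicas=0                      # scale the database pod down
oc delete pvc mysql                                 # delete the PVC (the old data is gone after this!)
# recreate the PVC under the same name, e.g. by re-applying its original definition
oc scale dc/mysql --replicas=1                      # re-initialize the database and user
oc rsync ./dump-dir/ mysql-1-abcde:/tmp/            # dump-dir contains the modified dump file
oc rsh mysql-1-abcde                                # open a shell in the pod, then inside it:
mysql -u root < /tmp/dump-no-user.sql               # restore the data
mysql -u root -e "GRANT ALL PRIVILEGES ON <database>.* TO '<mysql_user>'@'%';"
mysql_upgrade -u root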

Related

How do I automate a MySQL database restore every day with crontab?

I have a demo website where users can register, log in, share posts, and add products. It is for demo purposes only, so every day 10 to 15 people register and test how it works, but I don't need anyone's data. I have a fresh mysql.sql file that doesn't contain much data, and I want to automate a task with crontab:
every day it should delete the current database and load my mysql.sql file.
How can I do this?
OS: Ubuntu 19.04
First, if your database contains stored procedures as well, make sure your restore file contains a query to delete the stored procedures before restoring the database.
To delete stored procedures, add the following line to your mysql.sql file (note that the mysql.proc table exists only in MySQL 5.7 and earlier):
DELETE FROM mysql.proc WHERE db LIKE '%{{database_name}}%' AND type = 'PROCEDURE';
After this, you have to add a cron job that will restore your database every day. To do this, open a terminal and type sudo crontab -e.
Now enter 0 13 * * * mysql -u {{user_name}} -p{{password}} {{database_name}} < {{path_to_your_sql_file}} assuming you want to restore the database at 1 PM daily (note there is no space between -p and the password).
After adding the job, save the file.
Once the job is added, you can check it by typing sudo crontab -l in the terminal.
Since all you want to do is "reinstall" your DB on a daily basis (is that correct?), you can add the following as the first line of your install script:
DROP DATABASE <your database>;
# Here you re-create your DB again (your current sql script)
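For instance, the top of such a script could look roughly like this, assuming the database is called demo_db (a placeholder) and that your existing mysql.sql does not already create the database itself:
DROP DATABASE IF EXISTS demo_db;
CREATE DATABASE demo_db;
USE demo_db;
-- ... followed by the contents of your current mysql.sql (tables, seed data, etc.) ...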
Let's say you call this script "reinstall.sql"; you can then add to your cron table the following line (which runs the command every day at 1 AM):
0 1 * * * mysql -u username -ppassword database_name < /path/to/your/reinstall.sql
To open the cron table for editing you can do this:
sudo crontab -e -u [USER_WITH_RIGHTS_TO_EDIT]
Hope it helps!
MySQL keeps all DB data inside one directory. By default this directory resides within the MySQL installation and is called data. For example if your default installation is at c:/Users/prakash/mysql-8.0.17, a directory named data will be available inside it.
In principle, you will have to keep a fresh copy of this data directory (without any online user information, as it was when you first built the database by running the DDL scripts) somewhere, say at c:/Users/prakash/mysql-8.0.17/fresh. You can then write a cron job to achieve the following and schedule it at any convenient time (a shell sketch follows the steps):
Shutdown the database
Delete data directory (recursively)
Copy fresh directory (recursively) where data directory resides
Rename the copied directory to data
Restart the database
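On a Linux install, a minimal sketch of such a job could look like the one below; the paths, the service name, and the location of the fresh copy are assumptions to adapt to your setup (on Windows, as in the paths above, a scheduled batch script performing the same steps would apply instead):
#!/bin/bash
# refresh_db.sh - restore the pristine data directory (paths and service name are placeholders)
DATADIR=/var/lib/mysql
FRESH=/var/backups/mysql-fresh

sudo service mysql stop              # 1. shut down the database
sudo rm -rf "$DATADIR"               # 2. delete the data directory
sudo cp -a "$FRESH" "$DATADIR"       # 3+4. copy the fresh directory into place as the new data directory
sudo chown -R mysql:mysql "$DATADIR"
sudo service mysql start             # 5. restart the database
Scheduled from root's crontab this could run nightly, e.g. 0 3 * * * /path/to/refresh_db.sh.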

MySQL Data Directory on Network Drive?

I am new to databases and MySQL and am still in the process of learning. I have been tasked to see whether it is possible to store the MySQL data directory on a network drive... The purpose is to have a backup of the directory and to allow multiple users to point to that particular directory.
I have been able to successfully move the data directory to a different location on my PC, but have been unsuccessful when I tried moving the data directory onto a network drive.
Is it possible to move the data directory onto a shared network drive, and if so, what steps should I take?
Notes:
Windows 10
Attempted moving the directory and editing the my.ini file
Perhaps your approach is not optimal, or I'm misunderstanding the question (or whoever gave you the task isn't clear on the best ways to back up MySQL databases). If I were you, I'd put it to whoever asked you to do this that making plain-text SQL (*.sql) dumps of the databases and putting those into the backup directory would be easier and simpler than backing up the data directory itself, which contains binary representations of the databases.
From the mysqldump manual page:
To dump all databases:
$ mysqldump --all-databases > dump.sql
To dump only specific databases, name them on the command line and use the --databases option:
$ mysqldump --databases db1 db2 db3 > dump.sql
The --databases option causes all names on the command line to be treated as database names. Without this option, mysqldump treats the first name as a database name and those following as table names.
With --all-databases or --databases, mysqldump writes CREATE DATABASE and USE statements prior to the dump output for each database. This ensures that when the dump file is reloaded, it creates each database if it does not exist and makes it the default database so database contents are loaded into the same database from which they came. If you want to cause the dump file to force a drop of each database before recreating it, use the --add-drop-database option as well. In this case, mysqldump writes a DROP DATABASE statement preceding each CREATE DATABASE statement.
To dump a single database, name it on the command line:
$ mysqldump --databases test > dump.sql
Exercise for the reader: Write a script (crontab) or set up a scheduled task to dump the databases and move the output to the network drive.
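As a starting point, a hedged sketch of such a script on a Linux-style setup might look like this; the network drive mount point and the use of a ~/.my.cnf file for credentials are assumptions, and on Windows 10 the equivalent would be a batch or PowerShell script run by Task Scheduler:
#!/bin/bash
# dump_to_network_drive.sh - full dump copied to a mounted network share (paths are placeholders)
BACKUP_DIR=/mnt/network-drive/mysql-backups
STAMP=$(date +%F)

mysqldump --all-databases > /tmp/dump-"$STAMP".sql   # credentials read from ~/.my.cnf
mv /tmp/dump-"$STAMP".sql "$BACKUP_DIR"/
Run nightly from cron, e.g. 0 2 * * * /path/to/dump_to_network_drive.sh.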
If that's not what is required, but access to the database by multiple people is, create user accounts using the MySQL Server RDBMS instead. (You might need to configure the server to allow remote access. In that case, remove any test or anonymous/blank password accounts and change the root password to something more secure than root, admin or password1.)

Connecting a MySQL database to a copied-and-pasted data directory

I copied a database directory from the datadir (/var/lib/mysql/) of a MySQL instance running on a server to my local machine. Is it possible to put this database directory into my local MySQL datadir and access that database?
What I have done so far is copy the database directory as above. I can log in to MySQL and see the database; I switch to it and can list all the tables. But whenever I try to query a table I get something like:
select * from users limit 1;
ERROR 1146 (42S02): Table 'users' doesn't exist
Also from mysqldump:
mysqldump: Got error: 1146: Table 'very_first_table' doesn't exist when using LOCK TABLES
Is it possible to do what I am trying to do here?
So I got it to work; bear in mind that my end goal was to get a database dump from the database. The mysql folder was extracted from an older virtual machine snapshot which could not be run at the moment, so I couldn't just log in to it and do a normal dump. Here is what I did (a condensed command sketch follows the steps):
1) I installed mysql on a fresh vm on my local machine
2) I shut down mysql with service mysql stop
3) I removed the existing /var/lib/mysql folder from the fresh install
4) I replaced it with the /var/lib/mysql folder that was removed from the old snapshot
5) I ran chown -R mysql:mysql /var/lib/mysql
6) I restarted mysql with service mysql start
7) Then I checked whether I could log in and query the tables, and I could!
So I was able to run the dump after that.
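Condensed into shell commands, the recovery might look roughly like this; the path to the extracted snapshot is an assumption:
service mysql stop
rm -rf /var/lib/mysql                                      # remove the data directory of the fresh install
cp -a /path/to/old-snapshot/var/lib/mysql /var/lib/mysql   # drop in the folder taken from the old snapshot
chown -R mysql:mysql /var/lib/mysql
service mysql start
mysqldump --all-databases > recovered.sql                  # the end goal: a normal SQL dump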

Copying database backup files from xampp/mysql/data on Windows to Linux in /var/lib/mysql

I copied database backup files from xampp/mysql/data on Windows to /var/lib/mysql on Linux, but this only creates an empty database in phpMyAdmin on Linux.
Please, someone help me solve this issue; these backup files are all I have.
The best way is:
Step 1: Take a backup on Windows with mysqldump:
mysqldump -uroot -proot123 -A > backup.sql
Step 2: Move this backup to Linux; you can use the WinSCP tool for that.
Step 3: Now restore this backup on the Linux machine:
mysql -uroot -proot123 < backup.sql
Modification:
It seems your DB engine is MyISAM and you just copied the files/folders from Windows to Linux, so set the permissions as below:
chown -R mysql:mysql /var/lib/mysql
First create a .my.cnf file containing the MySQL root password in your user's home folder on Linux.
On Windows, a .my.ini file (or similar) serves the same purpose. That way you will not have to re-enter your passwords a lot during the next steps, which you are very likely to repeat several times until you get them right, I fear. :)
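A minimal ~/.my.cnf could look like this (the password is a placeholder; keep the file readable only by your user, e.g. chmod 600 ~/.my.cnf):
[client]
user=root
password=your_root_password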
Since Unix/Linux and Windows save files differently, you might very likely run into errors during a simple copy-and-restore process, depending on how you copy the files.
Your best bet is likely copying the original MySQL folders to another Windows machine and saving them such that MySQL can find them (with an installed MySQL instance, of course). I don't know what else you might need if the databases are not found instantly, since I have never had to do this before and have no test setup here to check this case.
When the databases are found by MySQL on the WINDOWS server, look up from the mysql CLI prompt there which encoding etc. the DB uses:
SELECT SCHEMA_NAME 'database', default_character_set_name 'charset', DEFAULT_COLLATION_NAME 'collation' FROM information_schema.SCHEMATA;
Then create a new database on the LINUX server with the same name and the same encoding from the MySQL CLI:
create database <db-name> character set <charset> collate <collation>;
Then on the WINDOWS server in a CMD window, do a mysqldump which should look familiar on windows like on linux:
mysqldump <db-name> > <db-name>.sql
Then copy the dump over to the LINUX server and replay it:
mysql <db-name> < <db-name>.sql
Afterwards you will have to recreate a user. If you know which user and password your web app used to access the database, create a new user with these credentials and grant them full access on your database.
If you don't happen to know the credentials anymore, create an arbitrary user and then change the database credentials in the config file of your web application.
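For example, recreating such a user could look roughly like this; the user name, host, password, and database name are all placeholders:
CREATE USER 'webapp_user'@'localhost' IDENTIFIED BY 'new_password';
GRANT ALL PRIVILEGES ON webapp_db.* TO 'webapp_user'@'localhost';
FLUSH PRIVILEGES;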
In case you have problems, check the Unix file permissions of the files you copied, so that MySQL can access them.
Good luck, mate.

Replicate MySQL data from stage to dev with a script

I have two versions of my application, one "stage" and one "dev."
Right now, "stage" is exposed to the real world for beta-testing.
From time to time, I want the "stage" data to be copied exactly into the "dev" database.
Both databases are on the same hosted Linux machine.
Sometimes I create "dummy" data in the development environment. At this stage, I'd be fine if it needs to get written over in stage.
Thanks.
Be sure to add security to your script so only the user you are authorizing is able to run it. Basically, you want to use the mysql and mysqldump commands.
mysqldump -u username --password=userpass --add-drop-database --add-locks --create-options --disable-keys --extended-insert --result-file=database.sql --databases databasename
mysql -u username --password=userpass -e "source database.sql;"
The first command makes the backup; the second loads that backup into a database server. Be careful: if you run both against the very same database, you are only backing up the database and then restoring it over itself, so you have to change the database name in the dump.
Hope this helps.
Just use mysqldump to create a backup of the staging database and then load the dump file over your dev database. This will give you an exact copy of the stage data.
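A minimal sketch of that, assuming the databases are named stage_db and dev_db (placeholders) and both live on the same server:
mysqldump -u username -p stage_db > stage_dump.sql   # dump the stage schema and data
mysql -u username -p dev_db < stage_dump.sql         # replay it over the dev database
Because the dump is created without --databases, it contains no CREATE DATABASE or USE statements, so it loads cleanly into whatever database you name on the mysql command line.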