xtrabackup cannot use tar - mysql

I use
innobackupex --user=root --password=root --stream=tar ./ | gzip - > backup.tar.gz
to back up a MySQL server, but backup.tar.gz contains only one file, ./backup-my.cnf. What's wrong?
--stream=xbstream works fine: backup.xbstream contains all of the files.
MySQL: 5.5, xtrabackup: 2.2.6

I made a mistake.
The xtrabackup manual says that when extracting a tar stream backup, tar must be run with the -i option. I missed the -i, which is why I got only one file.
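For anyone hitting the same thing, extraction would look something like this (the archive name and destination path are just placeholders):
tar -xizf backup.tar.gz -C /path/to/restore
Without -i, tar stops at the zero blocks written between the streamed files, which is why only backup-my.cnf showed up.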

Related

MySQL Docker file : sed: couldn't open temporary file

I am trying to create a new MySQL image and deploy it in Kubernetes.
FROM oraclelinux:7-slim
USER root
ARG MYSQL_SERVER_PACKAGE=mysql-community-server-minimal-8.0.19
ARG MYSQL_SHELL_PACKAGE=mysql-shell-8.0.19
# Install server
RUN yum install -y https://repo.mysql.com/mysql-community-minimal-release-el7.rpm \
    https://repo.mysql.com/mysql-community-release-el7.rpm \
    && yum-config-manager --enable mysql80-server-minimal \
    && yum install -y \
    $MYSQL_SERVER_PACKAGE \
    $MYSQL_SHELL_PACKAGE \
    libpwquality \
    && yum clean all \
    && mkdir /docker-entrypoint-initdb.d
VOLUME /var/lib/mysql
COPY docker-entrypoint.sh /entrypoint.sh
COPY healthcheck.sh /healthcheck.sh
ENTRYPOINT ["/entrypoint.sh"]
HEALTHCHECK CMD /healthcheck.sh
EXPOSE 3306 33060
RUN chmod +rwx /entrypoint.sh
RUN chmod +rwx /healthcheck.sh
RUN groupadd -r mysql && useradd -r -g mysql mysql
EXPOSE 3306
CMD ["mysqld"]
It works fine when I run the container directly, but when I deploy it in Kubernetes it throws the sed: couldn't open temporary file error from the title.
How can I understand this issue?
ADDED
docker-entrypoint.sh:
if [ -n "$MYSQL_LOG_CONSOLE" ] || [ -n "console" ]; then
    # Don't touch bind-mounted config files
    if ! cat /proc/1/mounts | grep "/etc/my.cnf"; then
        sed -i 's/^log-error=/#&/' /etc/my.cnf
    fi
fi
P.S.: I have added the content of the file above.
The problem is related to sed's in-place editing implementation. When you edit a file using the -i or --in-place option, the edit doesn't actually happen in place: sed saves the changes to a temporary file and then uses it to replace the original one.
It happens that you don't have write permission to the /etc directory, where sed is trying to create its temporary file.
As suggested in the comments, the command is most probably run by the mysql user. It is certainly not run as root, because root has enough privileges to write to /etc:
bash-4.2# ls -ld /etc
drwxr-xr-x 1 root root 4096 Mar 27 15:04 /etc
As you can see, others don't have write permission. Changing the permissions or owner of the /etc directory itself is a really bad idea, and I won't advise running this command as root either.
The simplest solution is to give up on the --in-place option and save the result in a directory such as /tmp, to which everyone has write access:
bash-4.2# ls -ld /tmp
drwxrwxrwt 1 root root 4096 Mar 27 16:39 /tmp
and after that replace the content of the original file with the content of the temporary one.
Your command may look like this:
sed 's/^log-error=/#&/' /etc/my.cnf > /tmp/my.cnf && cat /tmp/my.cnf > /etc/my.cnf
One important caveat:
You need to make sure you have write permission on the /etc/my.cnf file itself. As you can see below, by default you don't have that permission either, so the error would just occur later, when the command tries to write to the original config file.
bash-4.2# ls -l /etc/my.cnf
-rw-r--r-- 1 root root 1239 Mar 27 15:04 /etc/my.cnf
You need to modify it in your Dockerfile, either by making it editable by everyone:
RUN chmod 666 /etc/my.cnf
or, the better option:
RUN chown mysql /etc/my.cnf
to change its owner to mysql, if this is the user that executes the entrypoint.sh script.
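For reference, here is a rough sketch of how that part of docker-entrypoint.sh could look with the /tmp workaround (it assumes /etc/my.cnf is writable by the user running the script, e.g. after the chown above):
if [ -n "$MYSQL_LOG_CONSOLE" ] || [ -n "console" ]; then
    # Don't touch bind-mounted config files
    if ! cat /proc/1/mounts | grep "/etc/my.cnf"; then
        # No -i: write the edited copy to /tmp, then copy it back over the original
        sed 's/^log-error=/#&/' /etc/my.cnf > /tmp/my.cnf && cat /tmp/my.cnf > /etc/my.cnf
    fi
fi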
Please let me know if it helps.

How to backup MYSQL database with kubernetes

I successfully recreated the Single-Instance Stateful Application tutorial. Naturally, I'd like to create a periodic backup of all databases. I found this article that explains how to make a backup. Unfortunately, it does not work for me. The command that I am running looks like this:
$ kubectl exec -n <namespace> <pod> -- mysqldump -u root -p$MYSQL_ROOT_PASSWORD --all-databases > /var/lib/mysql/backup/alldbs.sql
I found the errors. The backup was not working for two reasons.
First, incorrect syntax. Instead of using kubectl exec -n <namespace> <pod> mysqldump -u root -p$MYSQL_ROOT_PASSWORD --all-databases > dump.sql as the article suggests, I had to use the syntax described in the MySQL Docker Hub documentation, which looks like this: kubectl exec -n <namespace> <pod> -- sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > dump.sql
Second, an incorrect path assumption. I assumed that dump.sql was created on the pod/container filesystem, so I expected to see the backup file inside the container. Instead, the backup file is created relative to the filesystem of the host machine running kubectl, not the pod/container.
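Since the goal was a periodic backup, one option (just a sketch, assuming cron and kubectl access on the machine that stores the backups, and that /var/backups exists there) is a cron entry like this on that machine:
# Nightly dump at 02:00; note that % must be escaped in crontab entries
0 2 * * * kubectl exec -n <namespace> <pod> -- sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > /var/backups/alldbs-$(date +\%F).sql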

Creating custom DVD for RHEL 7 with kickstart

I am trying to create a custom CD/DVD to deploy RHEL 7 with a kickstart file. Here is what I did:
Edited isolinux.cfg (in the isolinux folder) and the grub.cfg file (in the EFI/BOOT folder).
Created the ISO using mkisofs.
But it is not working. Am I using the correct files/method?
Edit the ISO image and add the ks.cfg file that you have created.
Preferably, put the ks.cfg file inside a ks directory. More information can be found here.
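For illustration, the boot entry in isolinux/isolinux.cfg could then point at the kickstart roughly like this (a sketch only; the kernel/initrd paths and the volume label must match what is actually on your media, and ks/ks.cfg is the location suggested above):
label kickstart
  menu label ^Install RHEL 7 with kickstart
  kernel vmlinuz
  append initrd=initrd.img inst.stage2=hd:LABEL=RHEL-7.1\x20Server.x86_64 inst.ks=cdrom:/ks/ks.cfg quiet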
You need to use the genisoimage command. Here is an example of what will work:
Add the kickstart file to your downloaded and extracted ISO.
Run this command from the directory containing the extracted ISO and the kickstart file, and point the output to another location to build the new ISO:
genisoimage -r -v -V "OEL6 with KS for OVM Manager" -cache-inodes -J -l -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -o OEL6U6_OVM_Manager.iso /var/www/html/Template/ISO/
I found the way to create a custom DVD on the RHEL 7 documentation page.
Mount the downloaded image
mount -t iso9660 -o loop path/to/image.iso /mnt/iso
Create a working directory - a directory where you want to place the contents of the ISO image.
mkdir /tmp/ISO
Copy all contents of the mounted image to your new working directory. Make sure to use the -p option to preserve file and directory permissions and ownership.
cp -pRf /mnt/iso /tmp/ISO
Unmount the image.
umount /mnt/iso
Make sure your current working directory is the top-level directory of the extracted ISO image - e.g. /tmp/ISO/iso. Create the new ISO image using genisoimage:
genisoimage -U -r -v -T -J -joliet-long -V "RHEL-7.1 Server.x86_64" -Volset "RHEL-7.1 Server.x86_64" -A "RHEL-7.1 Server.x86_64" -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -eltorito-alt-boot -e images/efiboot.img -no-emul-boot -o ../NEWISO.iso .
Hope the answer will be helpful.
I am editing my answer due to the comments posted. Here is a more comprehensive solution:
(A) You need to create the ISO properly. I found helpful information at this URL.
Here is the line that I actually ended up with, for my MBR/UEFI ISO creation:
mkisofs -U -A "<Volume Header>" -V "RHEL-7.1 x86_64" -volset "RHEL-7.1 x86_64" -J -joliet-long -r -v -T -x ./lost+found -o ${OUTPUT}/${HOST}.iso -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -eltorito-alt-boot -e images/efiboot.img -no-emul-boot -boot-load-size 18755 /dir/where/sources/for/ISO/are/located
Be careful with the -V parameter, as it has to match what the kernel has defined for inst.stage2. In the default grub.cfg included on the boot disk, it is configured as "hd:LABEL=RHEL-7.1\x20x86_64", which matches the settings above.
(B) You need the correct setup for EFI for RHEL 7. For some reason, this has changed from RHEL 6, where you could just use /EFI/BOOT/BOOTX64.conf. Now it uses /EFI/BOOT/grub.cfg. Common wisdom from the Red Hat manuals states to add the inst.ks= parameter to the kernel line. The grub.cfg that comes in the /EFI/BOOT directory of the RHEL 7 boot ISO actually uses the linuxefi parameter instead of a kernel line, but I would guess they work the same way. If you are including the KS file on the CD, this should get you there.
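For illustration, a menuentry in /EFI/BOOT/grub.cfg might end up looking roughly like this (a sketch only, assuming the kickstart sits at /ks/ks.cfg on the disc and the label matches the -V value used above):
menuentry 'Install Red Hat Enterprise Linux 7.1 with kickstart' {
    linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-7.1\x20x86_64 inst.ks=cdrom:/ks/ks.cfg quiet
    initrdefi /images/pxeboot/initrd.img
}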
Good Luck!

MySQL Start Error

I am trying to install MySQL 5.6.17 on Ubuntu Linux and I am having difficulties doing so. I opened the MySQL Reference Manual and followed "Installing MySQL on Unix/Linux Using Generic Binaries". These are the steps I followed:
shell> groupadd mysql
shell> useradd -r -g mysql mysql
shell> cd /usr/local
shell> tar zxvf /path/to/mysql-VERSION-OS.tar.gz
shell> ln -s full-path-to-mysql-VERSION-OS mysql
shell> cd mysql
shell> chown -R mysql .
shell> chgrp -R mysql .
shell> scripts/mysql_install_db --user=mysql
shell> chown -R root .
shell> chown -R mysql data
shell> bin/mysqld_safe --user=mysql &
# Next command is optional
shell> cp support-files/mysql.server /etc/init.d/mysql.server
After that, when I try to start MySQL using /etc/init.d/mysql.server start, I get the following error:
Couldn't find MySQL server (/usr/bin/mysqld_safe)
I looked in /usr/bin and found mysqld_safe. Any suggestions on how to fix this problem? Please reply with a detailed solution.
Thank you
It seems you have installed MySQL in /usr/local, but the init script is looking for binaries in /usr.
Change the "basedir" in /etc/init.d/mysql.server to:
basedir=/usr/local/mysql
The fact that you found /usr/bin/mysqld_safe suggests that MySQL in some shape or form was preinstalled on your OS. This can cause some confusion, in particular due to the location of config files.
So for instance, on some versions of Ubuntu, the package mysql-common is pre-installed, which means you might have an /etc/mysql/my.cnf file with some defaults in it. When you install from the tar file to /usr/local, follow the INSTALL-BINARIES (or equivalent) instructions, and then try to start with /etc/init.d/mysql.server start, you might get errors like the one you report ("Couldn't find MySQL server (/usr/bin/mysqld_safe)"), because the default configuration in /etc/init.d/mysql.server and any /etc/my.cnf that you created (an optional step during install) is being overridden by a setting in the OS-installed /etc/mysql/my.cnf. Note that this might happen even if you change the values in /etc/init.d/mysql.server and/or /etc/my.cnf.
One way out is to merge /etc/my.cnf and /etc/mysql/my.cnf into a single file at one of these locations, with the correct defaults that you wish to use.
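For example, after merging, a minimal /etc/my.cnf could look roughly like this (just a sketch, assuming the tarball was unpacked and symlinked to /usr/local/mysql as in the install steps above; adjust the paths to your layout):
[mysqld]
basedir=/usr/local/mysql
datadir=/usr/local/mysql/data
socket=/tmp/mysql.sock
user=mysql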

run innobackupex with gzip and pipe display output to file

How is it possible to run this and send the innobackupex output to a file (while still showing it on the display)?
innobackupex --user=root --password=pass --databases="db" --stream=tar ./ | gzip -c -1 > /var/backup/backup.tar.gz
I need to output the innobackupex log, which has ... completed OK! in the last line, to a file. How can I do that?
I've also noticed that it is a bit challenging to save the "OK" output from xtrabackup to the log file, as the Perl script plays with the tty. Here is what worked for me.
If you need to execute innobackupex from the command line, you can do:
nohup innobackupex --user=root --password=pass --databases="db" --stream=tar ./ | gzip -c -1 > /var/backup/backup.tar.gz 2>/path/mybkp.log
If you need to script it and get the OK message, you can do:
/bin/bash -c "innobackupex --user=root --password=pass --stream=tar ./ | gzip -c -1 > /var/backup/backup.tar.gz" 2>/path/mybkp.log
Please note that in the second command, the double quote closes before the 2>
Prepend
2> >(tee file)
to your command.
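Applied to this pipeline, the redirection has to be attached to innobackupex itself (before the pipe), and >( ) is bash-specific process substitution, so the whole thing might look roughly like this:
innobackupex --user=root --password=pass --databases="db" --stream=tar ./ 2> >(tee /path/mybkp.log >&2) | gzip -c -1 > /var/backup/backup.tar.gz
tee writes the log to the file and its own stdout is sent back to stderr, so the messages still appear on the display without polluting the tar stream on stdout.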