Chrooted grub2-mkconfig / os-prober identifies the host OS, not the target OS (Fedora)

The project is to build a new grub2 boot entry. The target OS is mounted and chrooted into.
Below are the commands used to mount the target:
mount /dev/sda2 /mnt/root/boot
mount /dev/ssd_2tb_1/home /mnt/root/home
mount /dev/ssd_2tb_1/var /mnt/root/var
mount --bind /proc /mnt/root/proc
mount --bind /sys /mnt/root/sys
mount --bind /dev /mnt/root/dev
mount -o bind /dev /mnt/root/dev/pts
mount -o bind /run /mnt/root/run
chroot /mnt/root
--- This works successfully; the target OS is accessed properly, except for the os-prober command.
df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/ssd_2tb_1-root 28637220 10952984 16199848 41% /
/dev/sda2 736509 290436 403732 42% /boot
/dev/mapper/ssd_2tb_1-home 308587328 279016328 13825976 96% /home
/dev/mapper/ssd_2tb_1-var 16448380 2972412 12662672 20% /var
devtmpfs 4096 0 4096 0% /dev
tmpfs 3213596 1904 3211692 1% /run
/etc/default/grub has been edited to reflect the target OS
/etc/fstab has been updated to reflect the target OS
Except for the above, none of the files in /boot have been changed. Is os-prober dependent on /boot?
However, grub2-mkconfig using os-prober identifies the host OS, not the target OS. How can the mount configuration be changed so that os-prober correctly identifies the target OS?
It appears as though /proc/cmdline has the host OS. Is this where os-prober is getting it? If so, how can this be modified to reflect the target OS rather than the host?
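One quick check (a suggestion, not from the original post) that supports this suspicion: /proc inside the chroot is a bind mount of the host's /proc, so anything reading it there sees the host kernel's view. For example, inside the chroot:

```shell
# /proc was bind-mounted from the host above, so this prints the HOST
# kernel's boot arguments (root=..., etc.), not the target OS's.
cat /proc/cmdline
```

If grub2-mkconfig or os-prober consults /proc/cmdline, it will therefore see the host's root device; confirming this first narrows down whether the mount configuration is actually the problem.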

Related

Lightsail Bitnami - Error establishing a database connection

I have a WordPress website hosted on AWS Lightsail.
When I access my website, I get this error: Error establishing a database connection
I didn't change anything in the last few days, and it was working okay.
I also have two snapshots and tried restoring the instance, but I got the same error message.
I tried to fix it by adding the define('WP_ALLOW_REPAIR', true); line to my wp-config.php file, but it's not working.
I tried to access mysql in the terminal with the command mysql -u root -p and typed the password I got from cat bitnami_application_password, but I got this error message: ERROR 2002 (HY000): Can't connect to local server through socket '/opt/bitnami/mariadb/tmp/mysql.sock' (111)
$ ps -ef | grep mysql
bitnami 3483 802 0 15:17 pts/0 00:00:00 grep mysql
If I run df -h, I get this:
Filesystem Size Used Avail Use% Mounted on
udev 989M 0 989M 0% /dev
tmpfs 200M 2.9M 197M 2% /run
/dev/xvda1 59G 8.6G 48G 16% /
tmpfs 998M 0 998M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 998M 0 998M 0% /sys/fs/cgroup
/dev/xvda15 124M 278K 124M 1% /boot/efi
tmpfs 200M 0 200M 0% /run/user/1000
If I run sudo service mysql stop, I get this error message:
Failed to stop mysql.service: Unit mysql.service not loaded.
If I run sudo /opt/bitnami/ctlscript.sh status, I get this message:
Cannot find any running daemon to contact. If it is running, make sure you are pointing to the right pid file (/var/run/gonit.pid)
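A sanity check consistent with the symptoms above (this is a suggested diagnostic, not from the thread; the socket path is copied from the ERROR 2002 message): if the socket file is absent, the database server simply is not running, which matches the empty ps output.

```shell
# Socket path copied from the ERROR 2002 message above.
SOCK=/opt/bitnami/mariadb/tmp/mysql.sock
if [ -S "$SOCK" ]; then
  echo "socket present: the server appears to be up"
else
  echo "socket missing: the server is not running"
fi
```

From there, sudo /opt/bitnami/ctlscript.sh start and the MariaDB log files under /opt/bitnami/mariadb/logs/ (a typical Bitnami location, not confirmed in the post) would be the next things to check.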

mysql select query "No Space Left On Device"

My system runs Ubuntu with a MySQL database.
I have a complex mysql select query to run:
mysql -u root -p myDB < query.sql
But when I try to run it, it always gives me:
ERROR 3 (HY000) at line 1: Error writing file
'/mnt/disk/tmp/MY0Wy7vA' (Errcode: 28 - No space left on device)
I have 11 GB free on disk, and while the query is running I keep track of it using df -h (and df -hi to keep track of inodes). I don't see any decrease in disk space while the query is running; there is always 11 GB free on the disk where the tmp folder is located.
This is the output of df -h:
ubuntu#ip-10-0-0-177:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 3.9G 0 3.9G 0% /dev
tmpfs 799M 57M 742M 8% /run
/dev/xvda1 30G 24G 5.4G 82% /
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/xvdf1 50G 39G 11G 80% /mnt/disk
tmpfs 799M 0 799M 0% /run/user/1000
This is the output of df -aTh:
Filesystem Type Size Used Avail Use% Mounted on
/dev/xvdf1 ext4 50G 39G 11G 80% /mnt/disk
Take a look at the part of the error that indicates where it's actually writing (see below):
Error writing file '/mnt/disk/tmp/MY0Wy7vA'
In some Linux distributions, /tmp is mounted as a tmpfs (ramdisk), so even if your disk has plenty of space you can get a "no space" error if you try to write too much there.
To investigate the mounts, try
$ cat /proc/mounts
OR
$ cat /proc/self/mounts
Or better yet
df -aTh
df -h /tmp
To see hidden files as well, try
du -sc * .[^.]* | sort -n
Among other things, you can try making MySQL use a different temp directory, or not using tmpfs for /tmp.
If TMPDIR is not set in my.cnf, MySQL uses the system default which is usually /tmp
You can change the MySQL tmp dir as the following suggests:
Changing the tmp folder of mysql
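For reference, the change behind that link boils down to pointing MySQL's tmpdir at a directory with real disk space. A minimal my.cnf sketch (the directory name is an example; it must exist and be writable by the mysql user):

```ini
[mysqld]
# Any directory on a filesystem with enough free space, writable by mysql:
tmpdir = /var/lib/mysql-tmp
```

Restart MySQL afterwards for the change to take effect.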
Credits:
https://github.com/LogBlock/LogBlock/issues/540

unable to import large database to docker mysql container

I'm trying to import a large database into a mysql container. I've mounted host directories as volumes for the mysql container, so the data is persistent on the host. The sql file being imported is 14 GB+. The mysql container becomes unresponsive halfway through the import. When I run docker stats I can see the CPU % drop below 1 once the mysql container has eaten all the available memory. I tried increasing Docker's memory up to 10 GB, and the import creates more tables when I allocate more memory to Docker, but I cannot allocate more than 10 GB from the host.
Following is my docker-compose.yml file
mysql:
  image: mysql:5.6
  environment:
    - MYSQL_ROOT_PASSWORD=12345678
  volumes:
    - ./mysql/lib:/var/lib/mysql
    - ./mysql/conf.d:/etc/mysql/conf.d
    - ./mysql/log:/var/log/mysql
    - /tmp:/tmp
  ports:
    - "3306:3306"
I'm using Docker for Mac, which has docker version 1.12.1.
I was using docker exec -it docker_mysql_1 /bin/bash to log in to the container and import the sql file from /tmp.
I also tried the way recommended by the mysql repo, mounting the sql file into /docker-entrypoint-initdb.d. But that also halts the mysql init.
UPDATE 1
$ docker info
Containers: 1
Running: 0
Paused: 0
Stopped: 1
Images: 2
Server Version: 1.12.1
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 18
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: host bridge null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 4.4.20-moby
Operating System: Alpine Linux v3.4
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 9.744 GiB
Name: moby
ID: 43S4:LA5E:6MTG:IFOG:HHJC:HYLX:LYIT:YU43:QGBQ:K5I5:Z6LP:AENZ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 16
Goroutines: 27
System Time: 2016-10-12T07:52:58.516469676Z
EventsListeners: 1
No Proxy: *.local, 169.254/16
Registry: https://index.docker.io/v1/
Insecure Registries:
127.0.0.0/8
$ df -h
Filesystem Size Used Avail Capacity iused ifree %iused Mounted on
/dev/disk1 233Gi 141Gi 92Gi 61% 2181510 4292785769 0% /
devfs 193Ki 193Ki 0Bi 100% 668 0 100% /dev
map -hosts 0Bi 0Bi 0Bi 100% 0 0 100% /net
map auto_home 0Bi 0Bi 0Bi 100% 0 0 100% /home
/dev/disk2s2 466Gi 64Gi 401Gi 14% 1857 4294965422 0% /Volumes/mac
/dev/disk2s3 465Gi 29Gi 436Gi 7% 236633 3575589 6% /Volumes/PORTABLE
/dev/disk3s1 100Mi 86Mi 14Mi 86% 12 4294967267 0% /Volumes/Vagrant
I was using /dev/disk1 directories to mount volumes.
I solved the phpMyAdmin import-of-large-database error by changing an environment variable (UPLOAD_LIMIT=1G) in docker-compose.yml:
myadmin:
  image: phpmyadmin/phpmyadmin
  container_name: phpmyadmin
  ports:
    - "8083:80"
  environment:
    - UPLOAD_LIMIT=1G
    - PMA_ARBITRARY=1
    - PMA_HOST=${MYSQL_HOST}
  restart: always
  depends_on:
    - mysqldb
I had a similar issue when trying to load a big sql file into my database. I just had to increase the maximum packet size in the container and the import worked as expected. For example, if you want to increase the maximum size of your SQL file to 512 MB and your container is named my_mysql, you can adjust the packet size in a running container with this command:
docker exec -it my_mysql bash -c "echo 'max_allowed_packet = 512M' >> /etc/mysql/mysql.conf.d/mysqld.cnf"
This appends the line to the config file. After this, you need to restart the container:
docker restart my_mysql
I ran into a similar problem.
The following process might help:
First, copy your sql file (filename.sql) into the db container:
docker cp filename.sql docker_db_container:/filename.sql
Then log in to your db container and populate the db with this file (filename.sql).
To load it, start mysql inside the container, select the database you want to import into (i.e. use database_name;), and run:
source /filename.sql
If you still face an issue with large packet size, increase the container's packet size:
docker exec -it docker_db_container bash -c "echo 'max_allowed_packet = 1024M' >> /etc/mysql/mysql.conf.d/mysqld.cnf"
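One more alternative, not mentioned in the answers above: skip docker cp entirely and stream the dump into the container over stdin, which avoids duplicating the 14 GB file inside the container's filesystem. The container name and password here are taken from the compose file above; the database name and dump path are placeholders:

```shell
# -i keeps stdin open so the dump can be piped through docker exec.
docker exec -i docker_mysql_1 mysql -uroot -p12345678 mydb < /path/to/dump.sql
```

This requires a running Docker daemon and the target database to already exist.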

How to Import a MySQL Database When Using Vagrant

I am having trouble importing a database using vagrant.
I have run vagrant up and everything works fine. I then run vagrant ssh, sign-in to mysql, setup a database and user -- and again everything is working fine.
I then exit mysql (but NOT vagrant ssh) and run the following command:
mysql -u root -p mydatabase < "C:\Users\moshe\Websites\Projects\backup.sql"
At this point I get the following message:
-bash: C:\Users\moshe\Websites\Projects\backup.sql: No such file or directory
Any idea on what I am doing wrong?
P.S. In case it is relevant - I'm working on Windows 10.
P.S. 2: I took a look at this question and tried moving the backup.sql into the .vagrant folder and then running the mysql command, but it still didn't work.
UPDATE
df command run in vagrant ssh
vagrant#scotchbox:~$ df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 41251136 6750852 32764372 18% /
none 4 0 4 0% /sys/fs/cgroup
udev 1019772 12 1019760 1% /dev
tmpfs 204992 380 204612 1% /run
none 5120 0 5120 0% /run/lock
none 1024956 0 1024956 0% /run/shm
none 102400 0 102400 0% /run/user
192.168.33.1:/C/Users/moshe/Websites/Projects 249478140 80929092 168549048 33% /var/www
I'm quite sure you don't have Windows 10 in your VM, do you? At least the error message mentions bash, which is normally a Linux shell.
So, if you have some Linux there, paths beginning with C:\ normally don't work. Instead, there must be some mount point in the Linux file system which maps to (is shared with) your Windows file system.
Have a look at the /vagrant folder in your VM, it is very likely that you find your backup file in a sub folder of this directory. Try find /vagrant -name backup.sql to find the correct path.
However, /vagrant is only the default for folders mounted by VirtualBox. So if it doesn't exist, try the command mount | grep vboxsf in your VM. The output should look similar to this:
vagrant on /vagrant type vboxsf (uid=1000,gid=1000,rw)
var_www on /var/www type vboxsf (uid=33,gid=33,rw)
One of the entries in your output (not the ones in this example) will be the one where your file is located.
Alternatively you can use the command df which gives you the direct relation between the host directory (e.g. C:\...) and the folder in the VM (e.g. /var/www).
In your case, it's the last line
192.168.33.1:/C/Users/moshe/Websites/Projects 249478140 80929092 168549048 33% /var/www
which shows that C:\Users\moshe\Websites\Projects (in Unix-like notation) maps to /var/www.
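So the import command from the question, rewritten with the VM-side path (assuming backup.sql sits directly in that Projects folder), would be:

```shell
# Use the mount point inside the VM, not the Windows path:
mysql -u root -p mydatabase < /var/www/backup.sql
```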

Rsync ACLs over NFSv4 to EXT4

I'm trying to rsync with options -AX over an NFSv4 mount to an ext4 drive that has acl and user_xattr enabled. The command
rsync -aAX /data/ /mnt/back/data
results in
rsync: rsync_xal_set: lsetxattr("/mnt/back/data/Users/user/Documents/Desktop","security.NTACL") failed: Operation not supported (95)
and the same error for all the other files.
Running the same command to a local folder works perfectly, so it must be something to do with NFSv4 or ext4 on the server side.
My fstab mount
UUID=732683f0-e6ac-42d6-a492-e07643d7719c /media/back ext4 defaults,acl,user_xattr,barrier=1 0 0
My nfs exports file
/media/back 10.111.106.3(fsid=0,rw,async,no_root_squash,no_subtree_check)
My mount command
mount -t nfs4 -o proto=tcp,port=2049 10.111.106.12:/ /mnt/back
Version info:
Server
Ubuntu 14.04.2
Client
Ubuntu 14.04.2
rsync 3.1.0
Thanks for any suggestions!
This isn't a direct answer to the error I was receiving, but I worked around it by letting rsync do the networking via ssh instead of going over NFS. It turns out NFS (before NFSv4.2) doesn't support arbitrary extended attributes.
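For reference, a sketch of that ssh-based workaround (the server address is reused from the mount command above; the remote user must be permitted to set ACLs and xattrs on the ext4 target):

```shell
# rsync runs a remote copy over ssh, so xattrs are written directly on the
# server's ext4 filesystem instead of through the NFS client.
rsync -aAX -e ssh /data/ root@10.111.106.12:/media/back/data
```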