I don't have a tech background, but I created a Google Compute Engine instance so I could build a 1.6GB database. I use PostgreSQL (via pgAdmin III) to access the database on GCE. It has been working fine so far, but now I am trying to import a new file and am getting the error message
"PANIC: could not write to file "pg_xlog/xlogtemp.1293": No space left on device"
I looked at Google's documentation for resizing the disk. Running df -h over SSH gave
/dev/sda1 9.8G 9.2G 16M 100% /
I also did
sudo lsblk --output NAME,size,state
NAME SIZE STATE
sda 20G running
└─sda1 10G
So then I did
sudo resize2fs /dev/disk/by-id/google-myddisk-part1
resize2fs 1.42.12 (29-Aug-2014)
The filesystem is already 2620928 (4k) blocks long. Nothing to do!
Anyone have any ideas?
As this is the root disk, you have to reboot the instance; the root partition and filesystem should then be resized automatically during boot. Please check the details here: https://cloud.google.com/compute/docs/disks/create-root-persistent-disks#repartitionrootpd
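If rebooting is not an option, the partition can also be grown manually. A minimal sketch, assuming the cloud-guest-utils package is available and the root filesystem is ext4 on /dev/sda1 (as in your lsblk output):
# Grow partition 1 of /dev/sda to fill the 20G disk, then grow the filesystem into it
sudo apt-get install -y cloud-guest-utils
sudo growpart /dev/sda 1
sudo resize2fs /dev/sda1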
My Ubuntu 22.04 server is suddenly telling me "The redo log file ./#innodb_redo/#ib_redo0 size 23289856 is not a multiple of innodb_page_size." My innodb_page_size is 16K, so the error is arithmetically correct, but I can't seem to find any advice on how to fix it. I tried moving #ib_redo0 out of the way, but that didn't help. Any ideas?
I also encountered this issue. It appeared to be specific to using ZFS on Ubuntu; in my case it happened during an upgrade to MySQL 8.0.30-0ubuntu0.20.04.2.
Following details in this Ubuntu issue report and this MySQL issue report, I was able to come up with a solution that worked in my environment.
There are 3 commands below, to be run as root or with sudo. You should replace 8192 in the first with the result of <default_page_size> - (<broken_file_size> % <default_page_size>), i.e. the number of bytes needed to pad the file up to the next multiple of the page size. The default page size is 16384 unless modified.
You may need to replace the #ib_redo0 part of the second command with the broken file reported in the error message.
These commands are intended to pad out the reportedly invalid file with zeros.
Perform a backup before running!
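A minimal sketch of such a backup (the destination path /root/innodb_redo.bak is just an example):
# Stop MySQL (it has likely already failed to start) and copy the redo log directory
systemctl stop mysql.service
cp -a /var/lib/mysql/#innodb_redo /root/innodb_redo.bak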
# Gather required zeros to append
# Will create a "zeros" file in the current directory
# Calculated as <default_page_size> - (<broken_file_size> % <default_page_size>): 16384 - (23289856 % 16384) = 16384 - 8192 = 8192
dd if=/dev/zero bs=1 count=8192 of=./zeros
# Append zeroes to invalid file
cat zeros >> /var/lib/mysql/#innodb_redo/#ib_redo0
# Restart MySQL
systemctl restart mysql.service
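As a sanity check, you can confirm the padded file is now a multiple of the page size; the following should print 0 (the path and 16384 assume the defaults above):
# Print the file size modulo the page size; 0 means the padding worked
echo $(( $(stat -c %s '/var/lib/mysql/#innodb_redo/#ib_redo0') % 16384 ))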
Even if the above fixes things, I'd be wary of staying on ZFS, since you may hit the same issue again.
I had the same problem in an LXD container running on ZFS. I had to move it to a different type of storage pool, e.g. directory or BTRFS.
After that, the solution from @DanBrown worked for me too.
Thank you.
Problem:
MySQL is taking over 350MB when idle, as shown in docker stats.
Tried:
Tweaking the configuration file as suggested in posts like this one.
Added these lines to the file /etc/mysql/my.cnf (under the [mysqld] section, which they need in order to take effect):
[mysqld]
innodb_buffer_pool_size=64M
innodb_log_buffer_size=256K
max_connections=5
key_buffer_size=8
thread_cache_size=0
host_cache_size=0
innodb_ft_cache_size=1600000
innodb_ft_total_cache_size=32000000
thread_stack=131072
sort_buffer_size=32K
read_buffer_size=8200
read_rnd_buffer_size=8200
max_heap_table_size=16K
tmp_table_size=1K
bulk_insert_buffer_size=0
join_buffer_size=128
net_buffer_length=1K
innodb_sort_buffer_size=64K
Dockerfile
FROM mysql
COPY my.cnf /etc/mysql/
Actual:
I made sure the my.cnf changes were present in the container, but it's still using over 350MB when idle. Is it possible to get it below that, or is what I'm attempting just not possible?
Prepared repository:
https://github.com/alexanderkoller/low-memory-mysql
Before using this repository's configuration, memory usage was about 458MiB; after, about 31.12MiB.
Tested on MySQL 5.6.
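To confirm that a given setting was actually picked up inside the container, a quick check (the image tag low-memory-mysql, the container name, and the password are placeholders):
# Build the repository's image, run it, and inspect one of the tuned variables
docker build -t low-memory-mysql .
docker run -d --name mysql-test -e MYSQL_ROOT_PASSWORD=secret low-memory-mysql
docker exec mysql-test mysql -uroot -psecret -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"
# Then check idle memory usage, as in the question
docker stats --no-stream mysql-test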
You can also try running MySQLTuner and reducing the memory footprint based on its suggestions.
https://github.com/major/MySQLTuner-perl
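A minimal sketch of running it (the credentials are placeholders):
# Fetch and run MySQLTuner against a local server
wget https://raw.githubusercontent.com/major/MySQLTuner-perl/master/mysqltuner.pl
perl mysqltuner.pl --host 127.0.0.1 --user root --pass secret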
I'm running a VPS, with specs:
Ubuntu 12.04.5 LTS (GNU/Linux 3.13.0-32-generic x86_64)
512MB RAM
1 CPU
20GB SSD
If you're wondering, it's a DigitalOcean droplet. It's running TS3, LAMP (with WordPress), OpenVPN, byobu, and ownCloud.
Now my problem is MySQL dying on me after 30 minutes to an hour. Usually after a reboot the memory usage is 54% and MySQL has no problems, but as the memory usage climbs towards 80-89% I start to get issues.
System load: 0.01 Users logged in: 0
Usage of /: 22.1% of 19.56GB IP address for eth0: *****
Memory usage: 90% IP address for as0t0: *****
Swap usage: 0% IP address for as0t1: *****
Processes: 93
As you can see, the memory usage is VERY high, and I've noticed that the MySQL process dies as the memory usage gets higher. However, the swap usage is 0%.
Is there a way to make MySQL and the other processes use the swap?
Would letting MySQL use the swap stop it from dying when memory usage gets this high?
After the high memory usage, the process dies and I get this error:
[2002] SQLSTATE[HY000] [2002] Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)
The processor load never goes above 25% in most cases. The server also runs on a fast SSD, so using swap wouldn't be a problem, and I don't have that much traffic.
Fixed it by creating a 256MB swap file. MySQL no longer dies after running out of available memory.
After following this tutorial by Etel Sverdlov:
https://www.digitalocean.com/community/tutorials/how-to-add-swap-on-ubuntu-12-04
I was able to make a swap file. I'll copy the tutorial here in case it gets deleted.
How To Add Swap on Ubuntu 12.04
About Linux Swapping
Linux RAM is divided into chunks of memory called pages. To free up RAM, the kernel can swap: a page of memory is copied from RAM to preconfigured space on the hard disk. Swapping lets a system use more memory than is physically available.
However, swapping has disadvantages. Because hard disks are much slower than RAM, virtual private server performance may slow down considerably. Additionally, swap thrashing can set in if the system is swamped with too many pages being swapped in and out.
Check for Swap Space
Before we proceed to set up a swap file, we need to check if any swap files have been enabled on the VPS by looking at the summary of swap usage.
sudo swapon -s
An empty list will confirm that you have no swap files enabled:
Filename Type Size Used Priority
Check the File System
After we know that we do not have a swap file enabled on the virtual server, we can check how much space we have with the df command. The swap file will take 256MB; since we are only using about 8% of /dev/sda, we can proceed.
df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda 20907056 1437188 18421292 8% /
udev 121588 4 121584 1% /dev
tmpfs 49752 208 49544 1% /run
none 5120 0 5120 0% /run/lock
none 124372 0 124372 0% /run/shm
Create and Enable the Swap File
Now it’s time to create the swap file itself using the dd command:
sudo dd if=/dev/zero of=/swapfile bs=1024 count=256k
“of=/swapfile” designates the file’s name. In this case the name is swapfile.
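“bs=1024 count=256k” writes 262,144 blocks of 1,024 bytes each, i.e. 256MiB of zeros.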
Subsequently we are going to prepare the swap file by creating a Linux swap area:
sudo mkswap /swapfile
The results display:
Setting up swapspace version 1, size = 262140 KiB
no label, UUID=103c4545-5fc5-47f3-a8b3-dfbdb64fd7eb
Finish up by activating the swap file:
sudo swapon /swapfile
You will then be able to see the new swap file when you view the swap summary.
swapon -s
Filename Type Size Used Priority
/swapfile file 262140 0 -1
This swap will remain active on the virtual private server only until the machine reboots. You can make the swap permanent by adding it to the fstab file.
Open up the file:
sudo nano /etc/fstab
Paste in the following line:
/swapfile none swap sw 0 0
Swappiness should be set to 10. Leaving it at the default can hurt performance, whereas setting it to 10 makes the swap act as an emergency buffer, helping prevent out-of-memory crashes.
You can do this with the following commands:
echo 10 | sudo tee /proc/sys/vm/swappiness
echo vm.swappiness = 10 | sudo tee -a /etc/sysctl.conf
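You can verify that the setting took effect; this should print 10:
cat /proc/sys/vm/swappiness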
To prevent the file from being world-readable, you should set up the correct permissions on the swap file:
sudo chown root:root /swapfile
sudo chmod 0600 /swapfile
All credit to Etel Sverdlov: https://www.digitalocean.com/community/tutorials/how-to-add-swap-on-ubuntu-12-04
An HDD failure occurred.
So a new primary HDD was installed, and the old HDD was attached as a secondary drive.
I'm trying to mount my secondary HDD but there are errors occurring.
I made /media/qwe/.
I then connected with PuTTY and ran these commands over SSH:
root@chicken [/]# mount /dev/sdb2 /media/qwe
mount: unknown filesystem type 'LVM2_member'
So I got an error. I also ran:
root@chicken [/]# vgscan
Reading all physical volumes. This may take a while...
Found volume group "VolGroup" using metadata type lvm2
Found volume group "VolGroup" using metadata type lvm2
root@chicken [/]# vgs
VG #PV #LV #SN Attr VSize VFree
VolGroup 1 3 0 wz--n- 1.82t 0
VolGroup 1 3 0 wz--n- 1.82t 0
I use cPanel and WHM.
I am trying to recover the MySQL databases that were lost. I managed to mount sdb1, but I think that's the boot partition; I don't need that. I need to access the other files!
Any help?
You don't need the filesystem to get your data back.
Start by taking an image of the failed disk.
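A hedged sketch of both steps, assuming GNU ddrescue is installed, the old disk is /dev/sdb, there is room for the image under /mnt/backup, and the logical volume is named lv_root (the device, paths, and LV name are all assumptions):
# 1. Image the failing disk first; the map file lets ddrescue resume and retry bad sectors
ddrescue -f -n /dev/sdb /mnt/backup/sdb.img /mnt/backup/sdb.map
# 2. Both disks use the VG name "VolGroup", so rename the old disk's VG by UUID before activating it
vgs -o vg_name,vg_uuid
vgrename <UUID-of-old-VG> VolGroupOld
vgchange -ay VolGroupOld
mount /dev/VolGroupOld/lv_root /media/qwe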
I am following this guide, https://developers.google.com/compute/docs/troubleshooting#ssherrors, specifically the section about recovering your persistent disk with another VM.
I am trying to follow this part:
mount /dev/disk/by-id/scsi-0Google_PersistentDisk_myinstance-debugging /mnt/myinstance
This is the error I get:
root@debugger:~# mount /dev/disk/by-id/scsi-0Google_PersistentDisk_marty-wll-debugging /mnt/marty-wll
mount: you must specify the filesystem type
I am unsure of the filesystem type because these are Google Compute Engine persistent disks, and the original system has already been deleted and its disk attached to another machine, following the Google developers guide referenced above.
Running parted with -l gave:
root@debugger:/dev/disk/by-id# parted scsi-0Google_PersistentDisk_marty-wll-debugging -l
Model: Google PersistentDisk (scsi)
Disk /dev/sda: 10.7GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Number Start End Size Type File system Flags
1 1049kB 10.7GB 10.7GB primary ext4
Model: Google PersistentDisk (scsi)
Disk /dev/sdb: 10.7GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Number Start End Size Type File system Flags
1 1049kB 10.7GB 10.7GB primary ext4
This told me the filesystem is ext4.
However, when I issue the following command, I still get an error:
root@debugger:~# mount -t ext4 /dev/disk/by-id/scsi-0Google_PersistentDisk_marty-wll-debugging /mnt/marty-wll
mount: wrong fs type, bad option, bad superblock on /dev/sdb,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
dmesg said:
[ 2452.205447] EXT4-fs (sdb): VFS: Can't find ext4 filesystem
Any ideas?
Thanks for pointing this out; I will update the docs. Try adding -part1 to the end of your device name. This will mount the partition instead of the whole disk. For your specific case:
mount /dev/disk/by-id/scsi-0Google_PersistentDisk_myinstance-debugging-part1 /mnt/myinstance
Also, there are cleaner aliases, so this should work as well:
mount /dev/disk/by-id/google-myinstance-debugging-part1 /mnt/myinstance
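If you're unsure which aliases exist, listing the by-id directory (or running lsblk) shows the available device names and their partitions:
ls -l /dev/disk/by-id/
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT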