Amazon EC2 MySQL crashing issue because of low memory, how to sort it out?

I am new to Amazon EC2. Recently I found that MySQL is not working properly and keeps crashing.
I think the issue is low memory or something similar.
Here are some of the outputs:
free -k
             total       used       free     shared    buffers     cached
Mem:       1020536     254744     765792          0      54028      83748
-/+ buffers/cache:      116968     903568
Swap:        82628          0      82628
swapon -s
Filename     Type     Size     Used    Priority
/swapfile    file     82628    0       -1
I am not able to start MySQL.
Can someone please suggest a solution?

Check the InnoDB buffer pool size (innodb_buffer_pool_size) in your my.cnf file. If it is larger than your system's memory, you have found your problem. Try reducing it and then start MySQL again.
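As a rough sketch (the config paths, the 256M figure, and the log location are assumptions for a ~1 GB instance, not values from the question), you could locate the current setting, lower it, and retry the start while watching the error log:

# Find any buffer pool setting in the usual config locations (paths vary by distro)
grep -R "innodb_buffer_pool_size" /etc/mysql/ /etc/my.cnf 2>/dev/null
# Under [mysqld] in my.cnf, pick a value that leaves room for the OS, e.g.:
#   innodb_buffer_pool_size = 256M
# Then try starting MySQL again and check the error log (log path varies)
sudo service mysql start
sudo tail -n 50 /var/log/mysql/error.log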

Related

innodb_redo0 is not a multiple of innodb_page_size

My Ubuntu 22.04 server is suddenly telling me "The redo log file ./#innodb_redo/#ib_redo0 size 23289856 is not a multiple of innodb_page_size." My innodb_page_size is 16K, so the error is correct, but I can't seem to find any advice on how to fix it. I tried moving ib_redo0 out of the way, but that didn't help. Any ideas?
I also encountered this issue. It appeared to be specific to using ZFS on Ubuntu; in my case it happened during an upgrade to MySQL 8.0.30-0ubuntu0.20.04.2.
Following the details in this Ubuntu issue report and this MySQL issue report, I was able to come up with a solution that worked in my environment.
There are 3 commands below to be run as root or with sudo. You should replace 8192 in the first command with the number of bytes needed to pad the file up to the next multiple of the page size, i.e. <default_page_size> - (<broken_file_size> % <default_page_size>); here 23289856 % 16384 leaves a remainder of 8192, so 8192 bytes of padding are needed. The default page size is usually 16384 unless modified.
You may need to replace the #ib_redo0 part of the second command with the broken file reported in the error message.
These commands are intended to pad out the reportedly invalid file with zeros.
Perform a backup before running!
# Gather required zeros to append
# Will create a "zeros" file in the current directory
# 23289856 % 16384 leaves a remainder of 8192, so 16384 - 8192 = 8192 bytes bring the file to a multiple of the page size
dd if=/dev/zero bs=1 count=8192 of=./zeros
# Append zeroes to invalid file
cat zeros >> /var/lib/mysql/#innodb_redo/#ib_redo0
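# Optional sanity check before restarting (assuming the default datadir used above):
# a result of 0 means the file is now an exact multiple of the 16384-byte page size
echo $(( $(stat -c%s '/var/lib/mysql/#innodb_redo/#ib_redo0') % 16384 ))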
# Restart MySQL
systemctl restart mysql.service
I'd be wary of remaining on ZFS even if the above fixes things, since you could hit the same issue again.
I had the same problem in an LXD container running on ZFS. I had to move it to a different type of storage pool, e.g. directory or BTRFS.
After that, the solution from @DanBrown worked for me too.
Thank you.

Catalina issues with FD_SETSIZE and MySQL, Configuration not read

I'm having a huge issue running MySQL 5.7 on my freshly installed 16" MacBook (with macOS 10.15.1 Catalina). During certain actions I get errors like
PDO::__construct(): MySQL server has gone away.
This is caused by the following error I found in the MySQL log.
2019-11-27T13:24:04.835245Z 0 [Warning] File Descriptor 3226 exceeded FD_SETSIZE=1024
After some research, I tried things like sudo launchctl limit maxfiles 65536 200000.
When I run launchctl limit I get the following data:
cpu unlimited unlimited
filesize unlimited unlimited
data unlimited unlimited
stack 8388608 67104768
core 0 unlimited
rss unlimited unlimited
memlock unlimited unlimited
maxproc 4096 4096
maxfiles 524288 524288
This looks fine to me. To get the max processes and max files correct, I also tried:
sudo sysctl -w kern.maxfilesperproc=524288
Here is my my.cnf:
[mysqld]
open_files_limit=999999
local_infile=ON
secure_file_priv=""
max_allowed_packet=1073741824
max_connections=100000
key_buffer_size=2G
innodb_buffer_pool_size=12G
query_cache_size=67108864
query_cache_type=1
query_cache_limit=4194304
table_open_cache=4096
innodb_buffer_pool_instances=24
innodb_sort_buffer_size=2G
sort_buffer_size=1G
innodb_flush_log_at_trx_commit=0
innodb_log_file_size=3G
interactive_timeout=3600
max_connect_errors=1000000
thread_cache_size=4096
log_error=/var/log/mysql/error.log
[mysqld_safe]
open_files_limit=999999
There is of course the option of lowering my table_open_cache, but that hurts performance, and I have always had this set to a higher value before.
Does anybody have a clue where this FD_SETSIZE limit is coming from and how to change it so it is used properly?
Rebooting has no effect, by the way.
Resource explaining issue: https://expressionengine.com/blog/mysql-5.7-server-os-x-has-gone-away
Try setting the following variables in the MySQL configuration file (my.cnf):
interactive_timeout = 300
wait_timeout = 300
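To confirm MySQL actually picks these up after a restart, you can query them (a minimal check, assuming a local root login; nothing here is specific to Catalina):

mysql -u root -p -e "SHOW VARIABLES LIKE '%timeout%';"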
I was having this issue on Big Sur v11.6. The solution for me was modifying the MySQL config with:
max_allowed_packet=256M
table_open_cache=250
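If you are not sure which my.cnf your macOS install actually reads, the client can print the option-file search order it uses (this only assumes a standard mysql client on the PATH):

mysql --help | grep -A 1 "Default options"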

Memory usage issues with VPS (ubuntu): MySQL process dies

I'm running a VPS, with specs:
Ubuntu 12.04.5 LTS (GNU/Linux 3.13.0-32-generic x86_64)
512mb RAM
1 CPU
20gb SSD
If you're wondering, it's a DigitalOcean droplet. It's running TS3, LAMP (with WordPress), OpenVPN, byobu, and ownCloud.
Now my problem is with MySQL dying on me after 30 minutes to an hour. Usually after a reboot the memory usage is 54% and MySQL doesn't have a problem, but as the memory usage climbs towards 80-89% I start to get issues.
System load:   0.01                 Users logged in:       0
Usage of /:    22.1% of 19.56GB     IP address for eth0:   *****
Memory usage:  90%                  IP address for as0t0:  *****
Swap usage:    0%                   IP address for as0t1:  *****
Processes:     93
As you can see, the memory usage is VERY high, and I've noticed that the MySQL process dies as the memory usage gets higher. However, the swap usage is 0%.
Is there a way to make MySQL and the other processes use the swap?
Would letting MySQL use the swap stop it from dying after my memory usage gets so high?
After the high memory usage, the process dies and I get this error:
[2002] SQLSTATE[HY000] [2002] Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)
The processor load never goes above 25% in most cases. The server also runs on a fast SSD, so using swap wouldn't be a problem, and I don't have that much traffic.
Fixed it by making a swap file of 256 MB. MySQL no longer dies after running out of available memory.
I was able to make a swap file by following this tutorial by Etel Sverdlov:
https://www.digitalocean.com/community/tutorials/how-to-add-swap-on-ubuntu-12-04
I'll copy the tutorial here in case it gets deleted.
How To Add Swap on Ubuntu 12.04
About Linux Swapping
Linux RAM is composed of chunks of memory called pages. To free up pages of RAM, a “linux swap” can occur and a page of memory is copied from the RAM to preconfigured space on the hard disk. Linux swaps allow a system to harness more memory than was originally physically available.
However, swapping does have disadvantages. Because hard disks are much slower than RAM, virtual private server performance may slow down considerably. Additionally, swap thrashing can begin to take place if the system gets swamped by too many pages being swapped in and out.
Check for Swap Space
Before we proceed to set up a swap file, we need to check if any swap files have been enabled on the VPS by looking at the summary of swap usage.
sudo swapon -s
An empty list will confirm that you have no swap files enabled:
Filename Type Size Used Priority
Check the File System
After we know that we do not have a swap file enabled on the virtual server, we can check how much space we have on the server with the df command. The swap file will take 256 MB; since we are only using about 8% of /dev/sda, we can proceed.
df
Filesystem     1K-blocks     Used  Available  Use%  Mounted on
/dev/sda        20907056  1437188   18421292    8%  /
udev              121588        4     121584    1%  /dev
tmpfs              49752      208      49544    1%  /run
none                5120        0       5120    0%  /run/lock
none              124372        0     124372    0%  /run/shm
Create and Enable the Swap File
Now it's time to create the swap file itself using the dd command:
sudo dd if=/dev/zero of=/swapfile bs=1024 count=256k
“of=/swapfile” designates the file’s name. In this case the name is swapfile.
Subsequently we are going to prepare the swap file by creating a linux swap area:
sudo mkswap /swapfile
The results display:
Setting up swapspace version 1, size = 262140 KiB
no label, UUID=103c4545-5fc5-47f3-a8b3-dfbdb64fd7eb
Finish up by activating the swap file:
sudo swapon /swapfile
You will then be able to see the new swap file when you view the swap summary.
swapon -s
Filename     Type     Size      Used    Priority
/swapfile    file     262140    0       -1
The swap will only stay active on the virtual private server until the machine reboots. You can ensure that the swap is permanent by adding it to the fstab file.
Open up the file:
sudo nano /etc/fstab
Paste in the following line:
/swapfile none swap sw 0 0
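You can test the fstab entry without waiting for a reboot (a quick sanity check; these commands only touch the swap file created above):

sudo swapoff /swapfile
sudo swapon -a        # re-activates everything listed in /etc/fstab
swapon -s             # /swapfile should appear in the list again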
Swappiness should be set to 10. Skipping this step may cause poor performance, whereas setting it to 10 will make the swap act as an emergency buffer, preventing out-of-memory crashes.
You can do this with the following commands:
echo 10 | sudo tee /proc/sys/vm/swappiness
echo vm.swappiness = 10 | sudo tee -a /etc/sysctl.conf
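To verify the new value is in effect, you can read it back:

cat /proc/sys/vm/swappiness   # should print 10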
To prevent the file from being world-readable, you should set up the correct permissions on the swap file:
sudo chown root:root /swapfile
sudo chmod 0600 /swapfile
All credit to: Etel Sverdlov at: https://www.digitalocean.com/community/tutorials/how-to-add-swap-on-ubuntu-12-04

Sphinx index fails, asking me to restart the server with query_cache_type=1 to enable it

The MySQL config my.ini defaults to query_cache_type=0.
I have already set sql_query_pre = SET SESSION query_cache_type=OFF in sphinx.conf. I think it is not good to have the query cache on while indexing. But Sphinx is still asking me to turn the cache on...
error detail:
win7 x64, sphinx 2.1.7
I:\sphinx\bin>I:\sphinx\bin\indexer.exe --all --config I:\sphinx\bin\sphinx.conf
Sphinx 2.1.7-id64-release (r4638)
Copyright (c) 2001-2014, Andrew Aksyonoff
Copyright (c) 2008-2014, Sphinx Technologies Inc (http://sphinxsearch.com)
using config file 'I:\sphinx\bin\sphinx.conf'...
indexing index 'test1'...
ERROR: index 'test1': sql_query_pre[1]: Query cache is disabled; restart the server with query_cache_type=1 to enable it
(DSN=mysql://root:***@localhost:3306/test).
total 0 docs, 0 bytes
total 0.018 sec, 0 bytes/sec, 0.00 docs/sec
skipping non-plain index 'rt'...
total 0 reads, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg
total 0 writes, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg
The 'message' you are receiving is coming from MySQL, not from Sphinx. indexer just runs the commands as provided and reports/uses the results.
Basically MySQL is telling you the query cache is already disabled; it's not enabled globally.
So trying to turn it off for just the (indexing) session fails, because it's not on. If it's not enabled in the first place, you can't disable it!
http://www.big.info/2013/04/error-code-1651-query-cache-is-disabled.html
It's telling you that you NEED to turn it on globally first before you are ABLE to turn it off.
Maybe MySQL could just silently fail to turn it off rather than giving an error, but that's a different story.
I had a case where I was seeing this error, and it was actually preventing the indexer --all command from generating indices. I went to the XAMPP Control Panel and clicked on the Config button for the MySQL module. This opened the file my.ini in Notepad. I added the following line to the [mysqld] section in the file:
query_cache_type = 1
Then I restarted the MySQL service. The value of query_cache_type was now displayed as ON, and the indexer --all command successfully generated indices.
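For reference, a quick way to confirm the setting after the restart, assuming a local root login (the query cache only exists up to MySQL 5.7; it was removed in 8.0):

mysql -u root -p -e "SHOW VARIABLES LIKE 'query_cache_type';"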

NTFS/GPT Mount exited with Exit Code 13

This is a duplicate post, since I didn't get any help on askubuntu.com.
I have a 1TB external hard drive that I recently formatted to NTFS. It was mounting fine on my Ubuntu 11.10 until just now. I haven't made any changes that would affect my OS or my external HDD.
The error that I get is:
Error mounting: mount exited with exit code 13: $MFTMirr does not match $MFT (record 0).
Failed to mount '/dev/sdb2': Input/output error
NTFS is either inconsistent, or there is a hardware fault, or it's a
SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows
then reboot into Windows twice. The usage of the /f parameter is very
important! If the device is a SoftRAID/FakeRAID then first activate
it and mount a different device under the /dev/mapper/ directory, (e.g.
/dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation
for more details.
I did read this and this, but neither helped.
I tried installing ntfsfix, but no such package exists anymore.
I have never used this HDD on a Windows machine. If I need to use another machine to fix this, I have access to a Mac.
Any advice?
This is my sudo fdisk -l output:
What in the world is GPT? I didn't do that. It used to be NTFS.
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000586fb
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2148   961320312  480659082+   83  Linux
/dev/sda2       961320313   976773167    7726427+    5  Extended
/dev/sda5       961320314   976773167    7726427    83  Linux
WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xcfd88605
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1  1953525167  976762583+   ee  GPT
This is the thing that worked:
I first needed to get ntfs-3g (sudo apt-get install ntfs-3g).
Run sudo fdisk -l to figure out which device the drive is. Mine was /dev/sdb1.
I ran ntfsfix -b /dev/sdb1 and that fixed the problem.
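Putting those three steps together (the device name /dev/sdb1 is just what fdisk reported on my machine; substitute your own):

sudo apt-get install ntfs-3g
sudo fdisk -l                # locate the NTFS partition, e.g. /dev/sdb1
sudo ntfsfix -b /dev/sdb1    # -b also clears the list of bad sectors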
Error mounting: mount exited with exit code 13: $MFTMirr does not match $MFT (record 0). Failed to mount '/dev/sda1': Input/output error
NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows then reboot into Windows twice. The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID then first activate it and mount a different device under the /dev/mapper/ directory, (e.g. /dev/mapper/nvidia_eahaabcc1).
Please see the 'dmraid' documentation for more details.
Solution:
sudo fdisk -l
sudo ntfsfix /dev/select_disk_name
To find the disk name:
Go to Dashboard -> Disk Utility -> click the disk -> it shows the device as /dev/***
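Once ntfsfix reports that it has fixed the inconsistency, the drive should mount again; for example (the device name and mount point here are assumptions, use your own):

sudo mkdir -p /media/exhdd
sudo mount -t ntfs-3g /dev/sdb1 /media/exhdd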