Amazon EC2 MySQL Failed to Start

I'm having issues starting MySQL after it randomly stopped working a few minutes ago. I'm getting this error while trying to connect:
Connect failed: Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)
So I tried to restart MySQL (this had worked for me before) and I got this:
Stopping mysqld: [ OK ]
MySQL Daemon failed to start.
Starting mysqld: [FAILED]
Here's my error log:
130414 20:03:45 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
130414 20:03:45 [Note] Plugin 'FEDERATED' is disabled.
130414 20:03:45 InnoDB: The InnoDB memory heap is disabled
130414 20:03:45 InnoDB: Mutexes and rw_locks use GCC atomic builtins
130414 20:03:45 InnoDB: Compressed tables use zlib 1.2.5
130414 20:03:45 InnoDB: Using Linux native AIO
130414 20:03:45 InnoDB: Initializing buffer pool, size = 128.0M
130414 20:03:45 InnoDB: Completed initialization of buffer pool
InnoDB: The first specified data file ./ibdata1 did not exist:
InnoDB: a new database to be created!
130414 20:03:45 InnoDB: Setting file ./ibdata1 size to 10 MB
InnoDB: Database physically writes the file full: wait...
130414 20:03:46 InnoDB: Log file ./ib_logfile0 did not exist: new to be created
InnoDB: Setting log file ./ib_logfile0 size to 5 MB
InnoDB: Database physically writes the file full: wait...
130414 20:03:46 InnoDB: Log file ./ib_logfile1 did not exist: new to be created
InnoDB: Setting log file ./ib_logfile1 size to 5 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Doublewrite buffer not found: creating new
InnoDB: Doublewrite buffer created
InnoDB: 127 rollback segment(s) active.
InnoDB: Creating foreign key constraint system tables
Can anyone offer some tips? I'm pretty noobish at this server stuff :P
Many thanks!

I fixed this by restarting my EC2 instance.

Restarting the instance helps, but it doesn't resolve it permanently.
This is what I used to resolve it:
http://www.prowebdev.us/2012/05/amazon-ec2-linux-micro-swap-space.html
Amazon EC2 Micro Instance Swap Space - Linux
I have an Amazon EC2 Linux Micro instance. Since Micro instances have only 613 MB of memory, MySQL crashed every now and then. After a long search about MySQL, Micro instances, and memory management, I found out there is no default swap space on a Micro instance. So if you want to avoid the crashes, you may need to set up swap space for your Micro instance. Performance-wise, it is actually better to enable swap.
The steps below show how to set up swap space for your Micro instance. I assume you have an AWS account with a Micro instance running.
1.) Run dd if=/dev/zero of=/swapfile bs=1M count=1024
2.) Run mkswap /swapfile
3.) Run swapon /swapfile
4.) Add this line to /etc/fstab: /swapfile swap swap defaults 0 0
Step 4 is needed if you would like the swap file to be enabled automatically after each reboot.
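Put together, and with sudo added since these commands need root privileges, the whole sequence looks roughly like this (the 1 GB size is just the example from step 1; the chmod is an extra hardening step that mkswap itself recommends):
sudo dd if=/dev/zero of=/swapfile bs=1M count=1024   # create a 1 GB file of zeroes
sudo chmod 600 /swapfile                             # restrict access to root
sudo mkswap /swapfile                                # format the file as swap
sudo swapon /swapfile                                # enable it immediately
echo '/swapfile swap swap defaults 0 0' | sudo tee -a /etc/fstab   # persist across reboots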
Some useful commands related to swap space:
$ swapon -s    # summarize swap usage
$ free -k      # show memory and swap usage in kilobytes
$ swapoff -a   # disable all swap devices listed in /etc/fstab
$ swapon -a    # enable all swap devices listed in /etc/fstab
References:
http://www.thegeekstuff.com/2010/08/how-to-add-swap-space/
http://cloudstory.in/2012/02/getting-the-best-out-of-amazon-ec2-micro-instances/
http://cloudstory.in/2012/02/adding-swap-space-to-amazon-ec2-linux-micro-instance-to-increase-the-performance/
http://aws.amazon.com/ec2/instance-types/

In my case I solved it by clearing some space on the server.
I was taking backups of the database on a daily basis, which consumed a lot of space.
I deleted all those backups and started MySQL.

I also faced the same challenge. This is what I explored; the reason for the failure can be one of the below:
Updates have been applied to your EC2 instance, causing the MySQL service to stop and fail to restart.
There may be other processes running alongside that cause a memory crunch, thus not allowing MySQL to restart.
To tackle this, you can use one of the options below:
If your MySQL service is going down because of a memory issue, upgrade your instance.
If your mysqld restart command fails, try restarting the httpd service first and then the MySQL service. Here are the commands:
sudo service httpd restart
sudo service mysqld restart
If none of the above work, restart your EC2 instance. This is not a permanent fix, but it helps if you want your services to be up and running while you do an RCA of the issue later.
If you want, you can create a script as shown below and run it via a cron job every 5-10 minutes, depending on your requirements:
#!/bin/bash
# Restart httpd and mysqld whenever mysqld is found not running,
# logging every attempt with a timestamp.
dateFormat=`date "+%Y-%m-%d %T"`
log_file_path="/home/ec2-user/mysql_restart_log.dat"
sudo service mysqld status | grep "is running"
if [ $? -ne 0 ]; then
    echo "HTTPD restart attempted ${dateFormat}" >> ${log_file_path}
    sudo service httpd restart
    if [ $? -ne 0 ]; then
        echo "HTTPD restart failed... ${dateFormat}" >> ${log_file_path}
    else
        echo "HTTPD restart success... ${dateFormat}" >> ${log_file_path}
    fi
    echo "MYSQL restart attempted ${dateFormat}" >> ${log_file_path}
    sudo service mysqld restart
    if [ $? -ne 0 ]; then
        echo "MYSQL restart failed... ${dateFormat}" >> ${log_file_path}
    else
        echo "MYSQL restart success... ${dateFormat}" >> ${log_file_path}
    fi
fi
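For reference, a crontab entry that runs such a script every 5 minutes could look like the line below (the script path is hypothetical; point it at wherever you saved the script, and make it executable with chmod +x first):
*/5 * * * * /home/ec2-user/mysql_watchdog.sh >> /home/ec2-user/mysql_watchdog_cron.log 2>&1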
Also found this link helpful: mysql on amazon linux - MySQL Daemon failed to start

I had this issue too on an EC2 micro instance I was running WordPress on. It turned out that Apache was being a memory hog, which was not allowing mysqld to start or restart. Once I edited httpd.conf to tame Apache's memory use, mysqld started with no issues.
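For anyone in the same spot: taming Apache usually means lowering the prefork MPM worker limits in httpd.conf. The sketch below is illustrative only; the directive names differ between Apache 2.2 (MaxClients, MaxRequestsPerChild) and 2.4 (MaxRequestWorkers, MaxConnectionsPerChild), and the right values depend on how much memory each of your Apache processes uses:
<IfModule mpm_prefork_module>
    StartServers           2
    MinSpareServers        2
    MaxSpareServers        4
    MaxRequestWorkers      10
    MaxConnectionsPerChild 1000
</IfModule>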

I have seen this issue multiple times on multiple AWS servers; the root cause in my cases was a lack of disk space. I deleted some unused files that were taking up space and the problem was resolved.
A lack of space will also stop Apache from running.
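If you suspect the same cause, a quick way to confirm it and find the biggest offenders is:
df -h                                  # overall filesystem usage
sudo du -sh /var/* | sort -rh | head   # largest directories under /var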

Related

MYSQL_ROOT_PASSWORD is set but getting "Access denied for user 'root'@'localhost' (using password: YES)" in docker container

I have a docker-compose file and a Dockerfile. MySQL is installed properly and I have set MYSQL_ROOT_PASSWORD, but when trying to access the MySQL db I get the error: Access denied. I have read the other threads on this site, but they didn't help much. :(
Here is my docker-compose file:
version: '3'
volumes:
  db_data: {}
services:
  db:
    build:
      context: .
      dockerfile: ./db/Dockerfile
      args:
        - database=iTel
        - password=123
    image: db_image
    volumes:
      - db_data:/var/lib/mysql
    ports:
      - "3306:3306"
and Dockerfile:
FROM mysql:5.7.15
ARG database
ARG password
RUN echo ${database}
RUN echo ${password}
MAINTAINER me
ENV MYSQL_DATABASE=${database} \
    MYSQL_ROOT_PASSWORD=${password}
ADD ./db/database100.sql /docker-entrypoint-initdb.d
EXPOSE 3306
Here are the build logs:
docker-compose up -d
Building db
Step 1/9 : FROM mysql:5.7.15
5.7.15: Pulling from library/mysql
6a5a5368e0c2: Pull complete
0689904e86f0: Pull complete
486087a8071d: Pull complete
3eff318f6785: Pull complete
3df41d8a4cfb: Pull complete
1b4a00485931: Pull complete
0bab0b2c2630: Pull complete
264fc9ce512d: Pull complete
e0181dcdbbe8: Pull complete
53b082fa47c7: Pull complete
e5cf4fe00c4c: Pull complete
Digest: sha256:966490bda4576655dc940923c4883db68cca0b3607920be5efff7514e0379aa7
Status: Downloaded newer image for mysql:5.7.15
---> 18f13d72f7f0
Step 2/9 : ARG database
---> Running in 62819f9fc38b
Removing intermediate container 62819f9fc38b
---> 863fd3212046
Step 3/9 : ARG password
---> Running in ea9d36c1a954
Removing intermediate container ea9d36c1a954
---> 056100b1d5eb
Step 4/9 : RUN echo ${database}
---> Running in 941bd2f4fc58
iTel
Removing intermediate container 941bd2f4fc58
---> 7b2b48e7bd8c
Step 5/9 : RUN echo ${password}
---> Running in 9cb80396bb62
123
Removing intermediate container 9cb80396bb62
---> 155d184c78ba
Step 6/9 : MAINTAINER me
---> Running in 8e3b3b53ce7b
Removing intermediate container 8e3b3b53ce7b
---> 9a7617a24800
Step 7/9 : ENV MYSQL_DATABASE=${database} MYSQL_ROOT_PASSWORD=${password}
---> Running in e483e65caf55
Removing intermediate container e483e65caf55
---> acf8ac829607
Step 8/9 : ADD ./db/database100.sql /docker-entrypoint-initdb.d
---> 42d992439f98
Step 9/9 : EXPOSE 3306
---> Running in 4e138502c6f9
Removing intermediate container 4e138502c6f9
---> a0818deda593
Successfully built a0818deda593
Successfully tagged db_image:latest
WARNING: Image for service db was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating reve_db_1 ... done
To see the containers:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
49419cb9980a db_image "docker-entrypoint.s…" 10 seconds ago Up 8 seconds 0.0.0.0:3306->3306/tcp reve_db_1
This is the log for that container:
docker logs 49419cb9980a
2020-01-21T07:53:13.050129Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2020-01-21T07:53:13.051767Z 0 [Note] mysqld (mysqld 5.7.15) starting as process 1 ...
2020-01-21T07:53:13.054945Z 0 [Note] InnoDB: PUNCH HOLE support available
2020-01-21T07:53:13.055053Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2020-01-21T07:53:13.055103Z 0 [Note] InnoDB: Uses event mutexes
2020-01-21T07:53:13.055179Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2020-01-21T07:53:13.055226Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.3
2020-01-21T07:53:13.055268Z 0 [Note] InnoDB: Using Linux native AIO
2020-01-21T07:53:13.055608Z 0 [Note] InnoDB: Number of pools: 1
2020-01-21T07:53:13.055791Z 0 [Note] InnoDB: Using CPU crc32 instructions
2020-01-21T07:53:13.061164Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2020-01-21T07:53:13.072998Z 0 [Note] InnoDB: Completed initialization of buffer pool
2020-01-21T07:53:13.075325Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
2020-01-21T07:53:13.101337Z 0 [Note] InnoDB: Highest supported file format is Barracuda.
2020-01-21T07:53:13.142134Z 0 [Note] InnoDB: Creating shared tablespace for temporary tables
2020-01-21T07:53:13.142356Z 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2020-01-21T07:53:13.184613Z 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
2020-01-21T07:53:13.185628Z 0 [Note] InnoDB: 96 redo rollback segment(s) found. 96 redo rollback segment(s) are active.
2020-01-21T07:53:13.185733Z 0 [Note] InnoDB: 32 non-redo rollback segment(s) are active.
2020-01-21T07:53:13.186108Z 0 [Note] InnoDB: Waiting for purge to start
2020-01-21T07:53:13.236391Z 0 [Note] InnoDB: 5.7.15 started; log sequence number 12146163
2020-01-21T07:53:13.236828Z 0 [Note] Plugin 'FEDERATED' is disabled.
2020-01-21T07:53:13.237186Z 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
2020-01-21T07:53:13.252074Z 0 [Warning] Failed to set up SSL because of the following SSL library error: SSL context is not usable without certificate and private key
2020-01-21T07:53:13.252900Z 0 [Note] Server hostname (bind-address): '*'; port: 3306
2020-01-21T07:53:13.253023Z 0 [Note] IPv6 is available.
2020-01-21T07:53:13.253076Z 0 [Note] - '::' resolves to '::';
2020-01-21T07:53:13.253184Z 0 [Note] Server socket created on IP: '::'.
2020-01-21T07:53:13.269950Z 0 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
2020-01-21T07:53:13.270581Z 0 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
2020-01-21T07:53:13.277379Z 0 [Note] InnoDB: Buffer pool(s) load completed at 200121 7:53:13
2020-01-21T07:53:13.295467Z 0 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
2020-01-21T07:53:13.367019Z 0 [Note] Event Scheduler: Loaded 0 events
2020-01-21T07:53:13.368851Z 0 [Note] mysqld: ready for connections.
Version: '5.7.15' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server (GPL)
Now I entered the container:
docker exec -it 49419cb9980a bash
root@49419cb9980a:/#
I have checked that MYSQL_ROOT_PASSWORD is set correctly (in the container):
root@49419cb9980a:/# echo $MYSQL_ROOT_PASSWORD
123
Then I tried to log into MySQL:
root@49419cb9980a:/# mysql -u root -p
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
My question is: how do I solve this problem? Why can't I access MySQL? I tried with the no-password option too.
That gave me this error:
mysql -u root
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)
This is my project structure:
tree
.
├── db
│   ├── Dockerfile
│   └── database100.sql
└── docker-compose.yml
1 directory, 3 files
The description below is specifically for MySQL, but many other official database Docker images (postgres, mongodb, ...) work in a similar way; hence the symptom (access denied with configured credentials) and the workaround (delete the data volume to start initialization from scratch) are the same.
Assuming you have shown your entire startup log, it appears you started your MySQL container against a pre-existing db_data volume that already contains a MySQL database filesystem.
In this case, absolutely nothing will be initialized on container start and environment variables are useless. Quoting the official image documentation in the "Environment Variables" section:
Do note that none of the variables below will have any effect if you start the container with a data directory that already contains a database: any pre-existing database will always be left untouched on container startup.
If you want your instance to be initialized, you have to start from scratch. It is quite easy to do with docker compose when using a named volume like in your case. Warning: this will permanently delete the contents in your db_data volume, wiping out any previous database you had there. Create a backup first if you need to keep the contents.
docker-compose down -v
docker-compose up -d
If you ever convert to a bind mount, you will have to delete all its content yourself (i.e. rm -rf /path/to/bind/mount/*).
I've tested all of the possible solutions posted in this thread. However, after trial and error, I found that, for whatever reason, complex passwords were not recognized.
I changed
mariadb:
  container_name: dev_db
  image: mariadb:10.5
  restart: always
  environment:
    MARIADB_ROOT_PASSWORD: a8Gh#c8wi#gL^
    MARIADB_DATABASE: wp_my_database
    MARIADB_USER: wp
    MARIADB_PASSWORD: a8Gh#c8wi#gL^
to
mariadb:
  container_name: dev_db
  image: mariadb:10.5
  restart: always
  environment:
    MARIADB_ROOT_PASSWORD: qwerty
    MARIADB_DATABASE: wp_my_database
    MARIADB_USER: wp
    MARIADB_PASSWORD: qwerty
This was with docker compose version '3.9' and the following service images:
nginx: nginx:1.20-alpine
php: php:7.2.34-fpm-alpine
mariadb: mariadb:10.5
phpmyadmin: phpmyadmin/phpmyadmin:latest
If you are on a development server, you can simply remove all unused local volumes. Unused local volumes are those not referenced by any container:
docker volume prune
Similar to another post, if you are running with docker run --env-file [your env.list file], the following works, even with underscores.
[env.list]
MYSQL_ROOT_PASSWORD=admin_password
MYSQL_USER=admin
MYSQL_PASSWORD=admin_password
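You would then start the container along these lines (the image tag and port mapping are just examples):
docker run -d -p 3306:3306 --env-file ./env.list mysql:5.7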
I found a better way; there is no need to delete the volume, etc.
1 - Change your docker-compose file to keep the MySQL container alive by adding this under the mysql service:
command: tail -F anything
(You can undo this afterwards; don't worry about the previous value.)
2 - Then stop the MySQL container and run:
docker-compose up --build --force-recreate YOUR_SERVICE_NAME
This doesn't delete your volume.
3 - Enter the MySQL container:
docker exec -it YOUR_SERVICE_NAME bash
Finally, follow any MySQL reset-password guide, such as this one:
https://www.a2hosting.com/kb/developer-corner/mysql/reset-mysql-root-password
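For completeness, the flow such guides describe is roughly the sketch below (not the exact a2hosting procedure; the ALTER USER syntax assumes MySQL 5.7 or later, and if mysqld_safe is not present in your image you can run mysqld with the same options directly). With the container kept alive by tail -F, mysqld is not running yet, so you can start it with the grant tables disabled and set a new password:
mysqld_safe --skip-grant-tables --skip-networking &
sleep 5   # give the server a moment to come up
mysql -u root -e "FLUSH PRIVILEGES; ALTER USER 'root'@'localhost' IDENTIFIED BY 'new-password';"
mysqladmin -u root -p'new-password' shutdown
Afterwards, remove the tail -F line from the compose file and recreate the service normally.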
I had the same problem, and after deleting the volume the problem was still there. Then I found this solution:
PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:
/usr/local/mysql/bin/mysqladmin -u root password 'new-password'
/usr/local/mysql/bin/mysqladmin -u root -h password 'new-password'
Alternatively you can run:
/usr/local/mysql/bin/mysql_secure_installation
Maybe a little too late, but I had this same issue yesterday and today.
To solve this I had to remove the volume (in this example, db_data):
$ docker volume ls
$ docker volume rm db_data
Another way to fix it was running the container pointing to a new volume (e.g. -v db_data2:/var/lib/mysql).
I thought that the passwords were being stored on the volume, hence the complaint. However, the volume was empty, as shown by $ ls -al db_data.
I don't understand why an empty volume cannot be mounted for a new run.
None of the suggested solutions were appropriate in my case, but I solved the same issue by replacing
PASSWORD=$pecialcharacter$
with
PASSWORD='$pecialcharacter$'
in my .env file. The presence of $ in the value, combined with the variable not being wrapped in single quotes, was causing my issue. Hope this helps someone.
I had the same problem with MariaDB.
My solution was to upgrade the mariadb image from 10 to 10.8-jammy, after first cleaning the Docker environment of images and running containers. I deleted everything and did a clean pull.

Service mysql start failed

I get this error when I try to start mysql with service mysql start on a Debian machine (actually a Docker container):
[FAIL] Starting MySQL database server: mysqld . . . . . failed!
I think the problem is in the Debian itself, but I'll explain the context just in case. I have the following context:
A Debian Docker container based on this image, running on an Ubuntu 18.04 host.
This container was running properly on Ubuntu 16.04 and is actually still running on another computer with Ubuntu 16.04. Of course, the container has been rebuilt on the new OS.
If I try sh -x /etc/init.d/mysql start to get some feedback, all I get is messages like:
+ echo -n Starting MySQL database server: mysqld
Starting MySQL database server: mysqld+ log_daemon_msg_post Starting MySQL database server mysqld
+ :
+ mysqld_status check_alive nowarn
+ /usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf ping
+ ping_output=/usr/bin/mysqladmin: connect to server at 'localhost' failed
error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)'
Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!
I'm quite stuck on this; I've been searching for several days and moving in circles to no avail.
EDIT: I'll add some tries and failures I went through in order to get some output or logging:
$ /usr/bin/mysqld_safe -v
181010 15:30:51 mysqld_safe Can't log to error log and syslog at the same time. Remove all --log-error configuration options for --syslog to take effect.
181010 15:30:51 mysqld_safe Logging to '/var/log/mysql/error.log'.
181010 15:30:51 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
181010 15:30:51 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
$ mysqld --print-defaults
(empty output)
$ mysqld --v
(empty output)
$ tail -f /var/log/mysql/error.log
(last log entry is 1 month old)
180921 10:59:40 InnoDB: highest supported file format is Barracuda.
180921 10:59:40 InnoDB: Waiting for the background threads to start
180921 10:59:41 InnoDB: 5.5.60 started; log sequence number 1595675
ERROR: 1050 Table 'plugin' already exists
180921 10:59:41 [ERROR] Aborting
180921 10:59:41 InnoDB: Starting shutdown...
180921 10:59:42 InnoDB: Shutdown completed; log sequence number 1595675
180921 10:59:42 [Note] /usr/sbin/mysqld: Shutdown complete

How to solve InnoDB: Unable to lock ./ibdata1 mysql error?

2016-03-14 02:30:29 58150 [ERROR] InnoDB: Unable to lock ./ibdata1, error: 35
2016-03-14 02:30:29 58150 [Note] InnoDB: Check that you do not already have another mysqld process using the same InnoDB data or log files.
First, list MySQL processes using the ps command:
ps aux | grep mysql
Then kill the process using the PID from the output, for example:
sudo kill 56311
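If you'd rather not copy the PID by hand, the lookup and kill can be combined into one command (this assumes the process is named mysqld):
sudo kill $(pgrep mysqld)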
Believe it or not, the solution lies elsewhere: the problem apparently stems from an AppArmor misconfiguration.
So just do:
$ apt install apparmor-profiles
and then restart MySQL (notice how fast it'll restart).
I noticed a file missing related to AppArmor when doing:
$ systemctl status mysql.service
Voila.
It should say "There is another solution: the problem might be AppArmor". If you do not see an entry in /var/log/apport.log, then AppArmor is probably not the problem.
An intriguing solution, however.

mysql on amazon linux - MySQL Daemon failed to start

I tried restarting the EC2 instance and sudo service httpd restart.
But I still get this message:
[ec2-user@ip-* ~]$ sudo service mysqld start
MySQL Daemon failed to start.
Starting mysqld: [FAILED]
Also I get this message:
[ec2-user@ip-* ~]$ mysqld
160127 7:01:48 [Note] mysqld (mysqld 5.5.46) starting as process 2745 ...
160127 7:01:48 [Warning] Can't create test file /var/lib/mysql/ip-*.lower-test
160127 7:01:48 [Warning] Can't create test file /var/lib/mysql/ip-*.lower-test
160127 7:01:48 [Note] Plugin 'FEDERATED' is disabled.
mysqld: Can't find file: './mysql/plugin.frm' (errno: 13)
160127 7:01:48 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
160127 7:01:48 InnoDB: The InnoDB memory heap is disabled
160127 7:01:48 InnoDB: Mutexes and rw_locks use GCC atomic builtins
160127 7:01:48 InnoDB: Compressed tables use zlib 1.2.8
160127 7:01:48 InnoDB: Using Linux native AIO
160127 7:01:48 InnoDB: Initializing buffer pool, size = 128.0M
160127 7:01:48 InnoDB: Completed initialization of buffer pool
160127 7:01:48 InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name ./ibdata1
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
I don't know what I should do.
Cheers!
EDIT
(I'm also using Amazon RDS.)
Now I have tried sudo chown -R mysql:mysql /var/lib/mysql too.
But:
[ec2-user@ip-* ~]$ mysql_upgrade
Looking for 'mysql' as: mysql
Looking for 'mysqlcheck' as: mysqlcheck
FATAL ERROR: Upgrade failed
[ec2-user@ip-* ~]$ ls -lh /var/lib/mysql/
total 29M
-rw-rw---- 1 mysql mysql 5.0M Jan 27 06:52 ib_logfile0
-rw-rw---- 1 mysql mysql 5.0M Jan 27 06:12 ib_logfile1
-rw-rw---- 1 mysql mysql  18M Jan 27 06:52 ibdata1
drwx------ 2 mysql mysql 4.0K Jan 20 07:03 mysql
drwx------ 2 mysql mysql 4.0K Jan 20 07:03 performance_schema
drwx------ 2 mysql mysql 4.0K Jan 20 07:03 test
As of today, the nano instances with 0.5 GiB of memory do not allow running mysqld; they do not have enough memory. Running a micro instance with 1 GiB of memory fixes the problem completely. I think the folks at AWS should have made a note about this in the LAMP installation instructions, as this research cost me several hours, or even a day, before I figured it out.
I'm answering my own question to help other people with the same problem.
I resolved this problem with these steps:
Restart your EC2 instance.
Type sudo service httpd restart. (I don't know why I should do these two steps.)
Type sudo chown -R mysql:mysql /var/lib/mysql. Before typing this command, check the user= option in your /etc/my.cnf. If there is no user= option, your user is "mysql". (Visit https://stackoverflow.com/a/21435052/1570534)
Change the innodb_buffer_pool_size option inside /etc/my.cnf from 128M (yours may differ) to 500M, as sketched below. If you don't have an innodb_buffer_pool_size option in your /etc/my.cnf, add innodb_buffer_pool_size=500M to the [mysqld] section.
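As a sketch, the relevant part of /etc/my.cnf would end up looking something like this (500M is just the value from the last step; tune it to your instance's memory):
[mysqld]
innodb_buffer_pool_size=500M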
I hope it helps you!
I also faced the same issue when setting up an instance on Lightsail with the lowest-cost plan (512 MB, 1 vCPU, 20 GB SSD).
I was able to solve the issue by adding a swap file on the instance.
Here are the steps to create the swap:
Use the dd command to create a swap file on the root file system, where "bs" is the block size and "count" is the number of blocks. In this example the swap file is 1 GB:
$ sudo dd if=/dev/zero of=/swapfile bs=1M count=1000
Update the read and write permissions for the swap file:
$ chmod 600 /swapfile
Set up a Linux swap area:
$ mkswap /swapfile
Make the swap file available for immediate use by adding the swap file to swap space:
$ swapon /swapfile
Verify that the procedure was successful:
$ swapon -s
Enable the swap file at boot time by editing the /etc/fstab file:
$ vi /etc/fstab
and adding this line:
/swapfile swap swap defaults 0 0
If the issue is on Amazon Linux, check whether you have free swap. I faced the same issue; when I added more swap, the MySQL service started working.
You can see more here: http://www.linuxblackmagic.com/2018/04/mysql-daemon-failed-to-start-in-amazon.html
This will surely work for you.
I had this problem, similar to @Michael Zelensky, when I tried to make use of an Amazon t3.nano machine. (FWIW, a t3.micro needed no extra adjustments on my end, but I was trying to see "how low could I go".)
I was able to find a workaround that seems to be working for now. Simply add:
table_definition_cache=400
... to the [mysqld] section in /etc/my.cnf. This apparently lowered the memory usage enough to allow me to then perform a service mysqld start, and the service came up (along with my WordPress websites :) ).
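For reference, the resulting addition to /etc/my.cnf is just this (the article linked below discusses several other low-memory settings as well):
[mysqld]
table_definition_cache=400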
Thanks to some AWS support specialists and also this website for helping me figure this out: http://www.tocker.ca/2014/03/10/configuring-mysql-to-use-minimal-memory.html

MySQL restarting every 30 minutes on Ubuntu 11.04

I'm having an issue where MySQL 5.1.54 is restarting every 30 minutes on Ubuntu 11.04. When this occurs, the following appears in the MySQL log:
111030 12:01:52 [Note] /usr/sbin/mysqld: Normal shutdown
111030 12:01:52 [Note] Event Scheduler: Purging the queue. 0 events
111030 12:01:52 InnoDB: Starting shutdown...
111030 12:01:54 InnoDB: Shutdown completed; log sequence number 0 875122
111030 12:01:54 [Note] /usr/sbin/mysqld: Shutdown complete
111030 12:01:55 [Note] Plugin 'FEDERATED' is disabled.
111030 12:01:55 InnoDB: Initializing buffer pool, size = 256.0M
111030 12:01:55 InnoDB: Completed initialization of buffer pool
111030 12:01:55 InnoDB: Started; log sequence number 0 875122
111030 12:01:55 [Note] Event Scheduler: Loaded 0 events
111030 12:01:55 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.1.54-1ubuntu4-log' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu)
This occurs like clockwork every 30 minutes, so it's obviously some service restarting it.
I have checked the crontab of every user on the system (including system users), and none of them have a crontab setup, as you can see in the output below:
# awk -F: '{print $1}' /etc/passwd | xargs -n 1 -i crontab -u {} -l
no crontab for root
no crontab for daemon
no crontab for bin
no crontab for sys
no crontab for sync
no crontab for games
no crontab for man
no crontab for lp
no crontab for mail
no crontab for news
no crontab for uucp
no crontab for proxy
no crontab for www-data
no crontab for backup
no crontab for list
no crontab for irc
no crontab for gnats
no crontab for nobody
no crontab for libuuid
no crontab for syslog
no crontab for sshd
no crontab for landscape
no crontab for ubuntu
no crontab for statd
no crontab for myproxy
no crontab for condor
no crontab for messagebus
no crontab for avahi
no crontab for joe
no crontab for smmta
no crontab for smmsp
no crontab for postfix
no crontab for deploy
no crontab for mysql
no crontab for redis
My dmesg contains the following each time it is restarted. I'm not an AppArmor expert, but I believe this is a normal message obtained each time the MySQL service starts:
[1165328.780405] type=1400 audit(1319976114.984:74): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=31985 comm="apparmor_parser"
Also, here are the contents of the MySQL upstart configuration in /etc/init/mysql.conf:
# MySQL Service
description "MySQL Server"
author "Mario Limonciello <superm1#ubuntu.com>"
start on (net-device-up
          and local-filesystems
          and runlevel [2345])
stop on runlevel [016]
respawn
env HOME=/etc/mysql
umask 007
# The default of 5 seconds is too low for mysql which needs to flush buffers
kill timeout 300
pre-start script
    # Sanity checks
    [ -r $HOME/my.cnf ]
    [ -d /var/run/mysqld ] || install -m 755 -o mysql -g root -d /var/run/mysqld
    /lib/init/apparmor-profile-load usr.sbin.mysqld
    LC_ALL=C BLOCKSIZE= df --portability /var/lib/mysql/. | tail -n 1 | awk '{ exit ($4<4096) }'
end script
exec /usr/sbin/mysqld
post-start script
    for i in `seq 1 30` ; do
        /usr/bin/mysqladmin --defaults-file="${HOME}"/debian.cnf ping && {
            exec "${HOME}"/debian-start
            # should not reach this line
            exit 2
        }
        sleep 1
    done
    exit 1
end script
Any idea what might be causing this? It doesn't cause any problems, other than Monit alerts stating "PID changed Service mysqld" (I have Monit monitoring mysqld, and it reports no errors with the mysqld process, other than the fact that every 30 minutes its PID changes because MySQL is restarted).
Thanks in advance.
Are you using Chef or Puppet, which might be doing something that triggers the restart?
Can you check (and probably post) the definition of your mysql job in upstart (/etc/init/mysql.conf)? OK, then try removing the "respawn" stanza. It does not work as documented in the upstart documentation: generally it's used to respawn the process if it's killed by another process, but it seems it does not function as expected. You can also see why AppArmor is always loading: because of the pre-start stanza in the script.
As upstart is very new and still evolving, it is better to use the SysV way.
You should try to run it without AppArmor: just run /usr/bin/mysqld_safe or /usr/bin/mysqld without using upstart and wait 30 minutes. If MySQL does not auto-restart, then disable AppArmor in the /etc/init/mysql.conf file, or configure it differently.
If the problem is still there, read MySQL's log. If logging is not enabled by default, you can use the options --log-error=/tmp/mysql.log --log-warnings when launching mysqld.
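That is, something along these lines (using the daemon path shown in the log above; adjust it if your mysqld lives elsewhere):
/usr/sbin/mysqld --log-error=/tmp/mysql.log --log-warnings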
Converting the /etc/init.d/mysql startup script to SysV style, rather than running it as an upstart job, seemed to correct the problem for me.
I had this same problem upgrading to Ubuntu 14.04. I found this question because it mentions the AppArmor log message, so thanks! I may not have realised MySQL was restarting otherwise.
Upon investigating /var/log/daemon.log, I found the output of /etc/mysql/debian-start appearing repeatedly. The relevant part was this:
May 18 06:48:18 tom /etc/mysql/debian-start[15525]: Upgrading MySQL tables if necessary.
May 18 06:48:18 tom /etc/mysql/debian-start[15528]: /usr/bin/mysql_upgrade: the '--basedir' option is always ignored
May 18 06:48:18 tom /etc/mysql/debian-start[15528]: Looking for 'mysql' as: /usr/bin/mysql
May 18 06:48:18 tom /etc/mysql/debian-start[15528]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck
May 18 06:48:18 tom /etc/mysql/debian-start[15528]: Error: Server version (5.5.35-1ubuntu1) does not match with the version of
May 18 06:48:18 tom /etc/mysql/debian-start[15528]: the server (5.5.37) with which this program was built/distributed. You can
May 18 06:48:18 tom /etc/mysql/debian-start[15528]: use --skip-version-check to skip this check.
May 18 06:48:18 tom /etc/mysql/debian-start[15528]: FATAL ERROR: Upgrade failed
I read through the /etc/mysql/debian-start script and tried running the upgrade command as in the script, hoping to debug it (the MySQL server needs to be running at the time):
/usr/bin/mysql_upgrade --defaults-extra-file=/etc/mysql/debian.cnf
I then found that this worked without complaint and everything simply worked from that point onward. I don't know why it was failing in the first place, but that seemed to fix it. MySQL hasn't restarted itself since.