MySQL fails on restart, InnoDB mem alloc error 12 - mysql

I am having a problem with MySQL on my Rackspace CentOS 6.4 server instance. The problem is similar to the one described in this StackOverflow question. MySQL is being restarted automatically at some point by mysqld_safe, and the restart fails because InnoDB tries to allocate 128MB of RAM, which fails. The output of mysqld.log is as follows:
140129 18:05:26 mysqld_safe Number of processes running now: 0
140129 18:07:30 InnoDB: Mutexes and rw_locks use GCC atomic builtins
140129 18:07:30 InnoDB: Compressed tables use zlib 1.2.3
140129 18:07:30 InnoDB: Using Linux native AIO
140129 18:07:35 InnoDB: Initializing buffer pool, size = 128.0M
InnoDB: mmap(137363456 bytes) failed; errno 12
140129 18:07:46 InnoDB: Completed initialization of buffer pool
140129 18:07:46 InnoDB: Fatal error: cannot allocate memory for the buffer pool
140129 18:07:47 [ERROR] Plugin 'InnoDB' init function returned error.
140129 18:07:47 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
140129 18:08:07 [ERROR] Unknown/unsupported storage engine: InnoDB
140129 18:08:10 [ERROR] Aborting
140129 18:08:53 [Note] /usr/libexec/mysqld: Shutdown complete
140129 18:18:18 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
The solution provided in that other question seemed to boil down to "create a swap file". I have checked my server and it seems there is already an active swap file:
# swapon -s
Filename Type Size Used Priority
/dev/xvdc1 partition 499992 34876 -1
and, looking at that output, it is the size I was thinking I needed (512MB). For completeness, here are the contents of my /etc/fstab file:
/dev/xvda1 / ext3 defaults,noatime,barrier=0 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
/dev/xvdc1 none swap sw 0 0
So am I missing something, or do I already have a working swap file of about 512MB, which is reasonably empty and therefore should be capable of handling a request for 128MB? Should I reduce the size of the InnoDB buffer pool to, say, 64MB? Will there be any issues related to shrinking this buffer?
(My Rackspace server is the smallest one available, which has 512MB RAM. Whenever I run top on the server, it seems to have between 50 and 80 MB free.)

From the output it does look like you have approximately 488MB of swap space.
I am not sure whether MySQL allocates the InnoDB buffer pool based on how much memory plus swap is free. Even if it did, you would want to avoid it going to swap, since swap is slower than keeping things in RAM. My guess is that it does not take swap into account.
The error "InnoDB: mmap(137363456 bytes) failed; errno 12" lets us know that you could not allocate the memory.
# perror 12
OS error code 12: Cannot allocate memory
I would reduce the size of the InnoDB buffer pool to 64MB and see if that works. If it does not, either increase the size of the cloud server or reduce the buffer pool further.
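For reference, a minimal sketch of that change, assuming the server reads its settings from /etc/my.cnf (the usual location on CentOS):
[mysqld]
innodb_buffer_pool_size = 64M
Restart MySQL afterwards so the new value takes effect.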
In general InnoDB likes memory. It tries to keep as much of the data in memory as possible and reduce disk IO. By reducing the buffer size, you decrease how much MySQL can keep in memory. MySQL will go to disk more often to retrieve data.
This number should reflect the size of your data set. No point in having a buffer much larger than your actual data set.
You may be able to use some of the queries here to determine the size. http://www.mysqlperformanceblog.com/2008/03/17/researching-your-mysql-table-sizes/
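One commonly used query of that kind (not necessarily the exact one from that article) sums data and index sizes per storage engine from information_schema:
-- total on-disk data + index size per storage engine, in MB
SELECT engine,
       ROUND(SUM(data_length + index_length) / 1024 / 1024) AS total_mb
FROM information_schema.tables
GROUP BY engine;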
If your data set is much larger than the amount of RAM you normally have on hand, you may need to increase the size of the server itself.

Related

MariaDB service fails to start with InnoDB Space id error

I've been running MariaDB on a Raspberry Pi 4 (Raspbian 10 / buster) as the database server for my Webtrees and Owncloud sites. Recently, I noticed that the server wasn't looking right, and after a reboot, I can see that the MariaDB service is failing to start.
Contents of /var/log/mysql/error.log:
2021-03-26 14:45:06 0 [Note] InnoDB: Using Linux native AIO
2021-03-26 14:45:06 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2021-03-26 14:45:06 0 [Note] InnoDB: Uses event mutexes
2021-03-26 14:45:06 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
2021-03-26 14:45:06 0 [Note] InnoDB: Number of pools: 1
2021-03-26 14:45:06 0 [Note] InnoDB: Using generic crc32 instructions
2021-03-26 14:45:06 0 [Note] InnoDB: Disabling background log and ibuf IO write threads.
2021-03-26 14:45:06 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2021-03-26 14:45:06 0 [Note] InnoDB: Completed initialization of buffer pool
2021-03-26 14:45:06 0 [Note] InnoDB: innodb_force_recovery=6 skips redo log apply
2021-03-26 14:45:06 0 [ERROR] InnoDB: Space id and page no stored in the page, read in are [page id: space=0, page number=399], should be [page id: space=0, page number=394]
210326 14:45:06 [ERROR] mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
To report this bug, see https://mariadb.com/kb/en/reporting-bugs
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Server version: 10.3.27-MariaDB-0+deb10u1
key_buffer_size=134217728
read_buffer_size=131072
max_used_connections=0
max_threads=153
thread_count=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 466214 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x0 thread_stack 0x49000
The manual page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mysqld/ contains
information that should help you find out what is causing the crash.
Writing a core file...
Working directory at /var/lib/mysql
Resource Limits:
Limit Soft Limit Hard Limit Units
Max cpu time unlimited unlimited seconds
Max file size unlimited unlimited bytes
Max data size unlimited unlimited bytes
Max stack size 8388608 unlimited bytes
Max core file size 0 unlimited bytes
Max resident set unlimited unlimited bytes
Max processes 12860 12860 processes
Max open files 16384 16384 files
Max locked memory 65536 65536 bytes
Max address space unlimited unlimited bytes
Max file locks unlimited unlimited locks
Max pending signals 12860 12860 signals
Max msgqueue size 819200 819200 bytes
Max nice priority 0 0
Max realtime priority 0 0
Max realtime timeout unlimited unlimited us
Core pattern: core
Here's the output from trying to restart the service from the terminal:
$ sudo systemctl restart mariadb.service
Job for mariadb.service failed because the control process exited with error code.
See "systemctl status mariadb.service" and "journalctl -xe" for details.
Googling hasn't turned up too much, though I've tried
innodb_force_recovery = 1
innodb_purge_threads=0
in /etc/my.cnf, which didn't seem to change anything.
Any thoughts before I attempt to restore from backup?

Django can't connect to database every once in a while (Centos)

I'm having a very weird issue, I have an Apache server running with mod_wsgi. The website runs fine but every once in a while I get the
IOError: failed to write data
error on all the pages of the website. It then gets solved with
sudo service mysqld restart
Since the website can't be down for long, I don't have time to debug this problem, so I just run the command every time it happens. I only see the error in the logs, which is why I can't really debug it; there are no clear reproduction steps, it just occurs randomly.
Any help would be appreciated and let me know if you need me to post any configuration files.
Edit: The exact error displayed by django is:
(2002, "Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)")
I saved the error message and it is hosted here. (passwords edited out)
Edit:
Here is an extract from the mysql server error log.
160610 10:51:53 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
160610 10:51:53 [Note] /usr/libexec/mysql55/mysqld (mysqld 5.5.46) starting as process 7658 ...
160617 14:35:47 [Note] /usr/libexec/mysql55/mysqld (mysqld 5.5.46) starting as process 32054 ...
160617 14:35:47 [Note] Plugin 'FEDERATED' is disabled.
160617 14:35:47 InnoDB: The InnoDB memory heap is disabled
160617 14:35:47 InnoDB: Mutexes and rw_locks use GCC atomic builtins
160617 14:35:47 InnoDB: Compressed tables use zlib 1.2.8
160617 14:35:47 InnoDB: Using Linux native AIO
160617 14:35:47 InnoDB: Initializing buffer pool, size = 128.0M
InnoDB: mmap(137363456 bytes) failed; errno 12
160617 14:35:47 InnoDB: Completed initialization of buffer pool
160617 14:35:47 InnoDB: Fatal error: cannot allocate memory for the buffer pool
160617 14:35:47 [ERROR] Plugin 'InnoDB' init function returned error.
160617 14:35:47 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
160617 14:35:47 [ERROR] Unknown/unsupported storage engine: InnoDB
160617 14:35:47 [ERROR] Aborting
I saved the full mysqld.log here.
Why it crashes
This seems to be the culprit
160617 14:35:47 InnoDB: Initializing buffer pool, size = 128.0M
InnoDB: mmap(137363456 bytes) failed; errno 12
This condition is caused by the server running out of available memory; errno 12 (ENOMEM, "Cannot allocate memory") is the kernel's way of reporting that.
This topic has been discussed on both Stack Overflow and dba.stackexchange. If you want to simulate low memory situations or try to manually reproduce the error, try some of these tools:
How to fill 90% of the free memory?
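For example, if the stress utility happens to be installed (it may need to be added from your distro's package repository), something along these lines will temporarily allocate a chunk of RAM; the 400M figure is purely illustrative, and be careful on a production box since this can trigger the OOM killer:
$ stress --vm 1 --vm-bytes 400M --vm-keep --timeout 60s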
Quick Solution
If you are able to upgrade the memory, try that. If not, you can try creating a large swap file. It's possible that you don't have swap at all; some AWS EC2 instances don't have any by default. You can find out by typing top in the shell: if the Swap line near the top of the output shows 0 total, you don't have one.
A swap file will make queries a lot slower, but at least that's better than the site going offline.
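As a rough sketch (the size and the /swapfile path here are only examples), creating and enabling a 1GB swap file on most Linux systems looks something like:
# dd if=/dev/zero of=/swapfile bs=1M count=1024
# chmod 600 /swapfile
# mkswap /swapfile
# swapon /swapfile
# echo '/swapfile none swap sw 0 0' >> /etc/fstab
The last line makes the swap file persistent across reboots.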
You might be tempted to modify the systemd files to make MySQL auto-start. Update: @PeterBrittain points out that MySQL is auto-restarting anyway, as shown by the logs. Sometimes, though, databases can take a while to restart, and if the data gets corrupted it will refuse to restart at all.
Why is memory being exhausted?
If you don't have any other server running on it, 2GB will be more than enough to host a site that serves 6000 pages per day. It could be that you have some rather heavy queries that put unnecessary load on the DB. There are some remedial actions that can be taken:
Use django-debug-toolbar to identify pages that execute many queries and see if select_related or prefetch_related can be used to reduce the number.
Use the MySQL slow query log to find the queries that take a long time to execute and optimize them (a sample my.cnf snippet follows after this list).
Use caching to save the results of complex queries.
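A hedged sketch of enabling the slow query log in my.cnf (the log path and the one-second threshold are just example values; make sure the mysql user can write to the chosen path):
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 1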

Why MySQL Restarted When Number of Processes Running Now 0

Some background: I host a WordPress site on a VPS, and sometimes MySQL goes down with the error "Error Establishing a Database Connection". I've spent some time researching this and believe the problem is that when MySQL gets restarted, it is not able to allocate enough RAM to proceed.
I believe I can improve the situation by increasing physical RAM or swap. But my question for this post is: why does MySQL need to restart itself? My site has pretty low traffic, and it doesn't seem like the DB is corrupted.
Below is the full log for this issue:
160103 18:39:54 mysqld_safe Number of processes running now: 0
160103 18:39:54 mysqld_safe mysqld restarted
160103 18:39:55 [Note] /usr/libexec/mysqld (mysqld 5.5.44-MariaDB) starting as process 22061 ...
160103 18:39:55 InnoDB: The InnoDB memory heap is disabled
160103 18:39:55 InnoDB: Mutexes and rw_locks use GCC atomic builtins
160103 18:39:55 InnoDB: Compressed tables use zlib 1.2.7
160103 18:39:55 InnoDB: Using Linux native AIO
160103 18:39:55 InnoDB: Initializing buffer pool, size = 128.0M
InnoDB: mmap(137756672 bytes) failed; errno 12
160103 18:39:55 InnoDB: Completed initialization of buffer pool
160103 18:39:55 InnoDB: Fatal error: cannot allocate memory for the buffer pool
160103 18:39:56 [ERROR] Plugin 'InnoDB' init function returned error.
160103 18:39:56 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
160103 18:39:56 [ERROR] mysqld: Out of memory (Needed 128917504 bytes)
160103 18:39:56 [ERROR] mysqld: Out of memory (Needed 96681984 bytes)
160103 18:39:56 [ERROR] mysqld: Out of memory (Needed 72499200 bytes)
160103 18:39:56 [Note] Plugin 'FEEDBACK' is disabled.
160103 18:39:56 [ERROR] Unknown/unsupported storage engine: InnoDB
160103 18:39:56 [ERROR] Aborting
"Number of processes running now: 0" means that MySQL isn't running, so the mysqld_safe daemon "does you a favor" and starts MySQL again.
You have assigned very little RAM, 128MB (the default), to innodb_buffer_pool_size. If you are using the InnoDB engine, you should assign roughly 80% of total RAM to this variable, since MySQL uses that memory to cache both indexes and data for InnoDB tables.
So set innodb_buffer_pool_size to at least 1GB (around 80% of total RAM when using InnoDB) in your config file and restart the MySQL service.
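If you make that change, a rough sketch of restarting and then confirming the running value (the exact service name depends on your distribution, e.g. mysqld, mysql, or mariadb):
# service mysqld restart
# mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';"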
Update:
You have 1 GB RAM: You can assign 800M RAM (or >=500M) to innodb_buffer_pool_size.
MySQL will auto-restart when the number of running processes is 0: as per the errors you shared, this error occurs when the MySQL service starts, and the server is not able to bring the service up with so little available RAM.
160103 18:39:55 InnoDB: Initializing buffer pool, size = 128.0M
InnoDB: mmap(137756672 bytes) failed; errno 12
160103 18:39:55 InnoDB: Completed initialization of buffer pool
160103 18:39:55 InnoDB: Fatal error: cannot allocate memory for the buffer pool
I had a similar problem with MySQL on CentOS 6.9. It was caused by memory starvation due to another PHP process, not by MySQL itself; MySQL was just the victim.
You can look at the kernel messages. The location of these messages may be specific to the OS; in the case of CentOS 6.x they are located in /var/log/messages.
Here are some related messages from /var/log/messages:
Jul 25 20:34:46 myserver kernel: Out of memory: Kill process 21467 (mysqld) score 30 or sacrifice child
Jul 25 20:34:46 myserver kernel: Killed process 21467, UID 497, (mysqld) total-vm:757004kB, anon-rss:17728kB, file-rss:320kB
You can run the following command to see whether the kernel ran out of memory:
cat /var/log/messages | grep out_of_memory

mysql Fatal error: cannot allocate memory for the buffer pool

I have this error log from MySQL, any idea?
Website works for some time and then I get MySQL shutdown completely after a couple of hours.
140919 10:48:27 [Warning] Using unique option prefix myisam-recover instead of myisam-recover-options is deprecated and will be removed in a future release. Please use the full name instead.
140919 10:48:27 [Note] Plugin 'FEDERATED' is disabled.
140919 10:48:27 InnoDB: The InnoDB memory heap is disabled
140919 10:48:27 InnoDB: Mutexes and rw_locks use GCC atomic builtins
140919 10:48:27 InnoDB: Compressed tables use zlib 1.2.3.4
140919 10:48:28 InnoDB: Initializing buffer pool, size = 128.0M
InnoDB: mmap(137363456 bytes) failed; errno 12
140919 10:48:28 InnoDB: Completed initialization of buffer pool
140919 10:48:28 InnoDB: Fatal error: cannot allocate memory for the buffer pool
140919 10:48:28 [ERROR] Plugin 'InnoDB' init function returned error.
140919 10:48:28 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
140919 10:48:28 [ERROR] Unknown/unsupported storage engine: InnoDB
140919 10:48:28 [ERROR] Aborting
140919 10:48:28 [Note] /usr/sbin/mysqld: Shutdown complete
140919 10:48:28 [Warning] Using unique option prefix myisam-recover instead of myisam-recover-options is deprecated and will be removed in a future release. Please use the full name instead.
140919 10:48:28 [Note] Plugin 'FEDERATED' is disabled.
140919 10:48:28 InnoDB: The InnoDB memory heap is disabled
140919 10:48:28 InnoDB: Mutexes and rw_locks use GCC atomic builtins
140919 10:48:28 InnoDB: Compressed tables use zlib 1.2.3.4
140919 10:48:28 InnoDB: Initializing buffer pool, size = 128.0M
InnoDB: mmap(137363456 bytes) failed; errno 12
140919 10:48:28 InnoDB: Completed initialization of buffer pool
140919 10:48:28 InnoDB: Fatal error: cannot allocate memory for the buffer pool
140919 10:48:28 [ERROR] Plugin 'InnoDB' init function returned error.
140919 10:48:28 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
140919 10:48:28 [ERROR] Unknown/unsupported storage engine: InnoDB
140919 10:48:28 [ERROR] Aborting
140919 10:48:28 [Note] /usr/sbin/mysqld: Shutdown complete
TL;DR
MySQL can't restart because it's out of memory; check that you have an appropriate swapfile configured.
Didn't help? If that's not your issue, more specific questions to continue your research are:
mysqld service stops once a day on ec2 server
https://askubuntu.com/questions/422037/optimising-mysql-settings-mysqld-running-out-of-memory
Background
I had exactly this problem on the very first system I set up on EC2, characterised by the WordPress site hosted there going down on occasion with "Error establishing database connection".
The logs showed the same error that the OP posted. My reading of the error (timestamps removed) is:
Out of memory error:
InnoDB: Fatal error: cannot allocate memory for the buffer pool
InnoDB can't start without enough memory
[ERROR] Plugin 'InnoDB' init function returned error.
[ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
[ERROR] Unknown/unsupported storage engine: InnoDB
[ERROR] Aborting
mysqld is shutting down, which, in this context, really means failing to restart!
[Note] /usr/sbin/mysqld: Shutdown complete
Checking /var/log/syslog and searching for mysql yields:
Out of memory: Kill process 15452 (mysqld) score 93 or sacrifice child
Killed process 15452 (mysqld) total-vm:888672kB, anon-rss:56252kB, file-rss:0kB
init: mysql main process (15452) killed by KILL signal
init: mysql main process ended, respawning
type=1400 audit(1443812767.391:30): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=21984 comm="apparmor_parser"
init: mysql main process (21996) terminated with status 1
init: mysql main process ended, respawning
init: mysql post-start process (21997) terminated with status 1
<repeated>
Note: you may have to gunzip and search through archived logs if the error occurred before the logs were rotated by cron.
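For example, zgrep can search the rotated, compressed logs without unpacking them first (the path shown is the Debian/Ubuntu default):
$ zgrep mysql /var/log/syslog.*.gz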
Solution
In my case the underlying issue was that I'd neglected to configure a swapfile.
You can check to see if you have one configured by running free -m.
total used free shared buffers cached
Mem: 604340 587364 16976 0 29260 72280
-/+ buffers/cache: 485824 118516
Swap: 0 0 0
In the example above, Swap: 0 indicates no swapfile.
Tutorials on setting one up:
https://www.digitalocean.com/community/tutorials/how-to-add-swap-on-ubuntu-14-04
https://help.ubuntu.com/community/SwapFaq
Note that bigger is not necessarily better! From the Ubuntu guide:
The "diminishing returns" means that if you need more swap space than twice your RAM size, you'd better add more RAM as Hard Disk Drive (HDD) access is about 10³ slower then RAM access, so something that would take 1 second, suddenly takes more then 15 minutes! And still more then a minute on a fast Solid State Drive (SSD)...
Regarding the other answers here...
The InnoDB memory heap is disabled
This isn’t really an error, just an indication that InnoDB is using the system’s internal memory allocator instead of its own. The default is yes/1, and is acceptable for production.
According to the docs, this option is deprecated and will be removed in MySQL versions above 5.6 (and I assume MariaDB):
http://dev.mysql.com/doc/refman/5.6/en/innodb-performance-use_sys_malloc.html
Thanks to Ruben Schade's comment.
[Note] Plugin 'FEDERATED' is disabled.
The message about FEDERATED being disabled is not an error. It just means that the FEDERATED engine is not enabled for your MySQL server; it's not used by default. If you don't need it, you can ignore this message.
See: https://stackoverflow.com/a/16470822/2586761
I found this answer adds to the discussion: https://www.digitalocean.com/community/questions/mysql-server-keeps-stopping-unexpectedly?answer=26021
In short, on top of setting innodb_buffer_pool_size to something reasonable like 64M, you may also need to modify /etc/apache2/mods-enabled/mpm_prefork.conf to reduce the number of connections started by Apache:
<IfModule mpm_prefork_module>
StartServers 3
MinSpareServers 3
MaxSpareServers 5
MaxRequestWorkers 25
MaxConnectionsPerChild 1024
</IfModule>
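After editing, Apache has to be restarted (or reloaded) for the new limits to take effect; on a Debian/Ubuntu layout like the one above that is typically:
$ sudo service apache2 restart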
The solution is NOT more swap space; the problem is the Apache web server, not MySQL. You actually need to decrease innodb_buffer_pool_size.
This buffer is claimed by the mysql process right from the start, so when Apache needs more resources the kernel will reclaim RAM from other services, which means killing mysql instead of crashing the whole server.
I would also add a cron job to check the DB status and restart it, if you don't want to switch to nginx or lighttpd.
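A hypothetical example of such a cron check, dropped into a file under /etc/cron.d (the five-minute interval and the service name are assumptions to adjust for your system):
*/5 * * * * root service mysql status > /dev/null 2>&1 || service mysql restart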

Cannot Start MySql

Our server has been running MySQL just fine for over a year. I ran a set of SQL scripts to build a rather large database, and in the middle of those scripts I started getting errors that I had lost the connection. Nobody did anything else, as far as we know. When I tried to log in to MySQL, I got:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
When I try to restart mysql, I get:
# sudo service mysql restart
stop: Unknown instance:
start: Job failed to start
The error.log shows:
130212 9:37:51 [Note] Plugin 'FEDERATED' is disabled.
130212 9:37:51 InnoDB: The InnoDB memory heap is disabled
130212 9:37:51 InnoDB: Mutexes and rw_locks use GCC atomic builtins
130212 9:37:51 InnoDB: Compressed tables use zlib 1.2.7
130212 9:37:51 InnoDB: Using Linux native AIO
130212 9:37:51 InnoDB: Initializing buffer pool, size = 10.0G
130212 9:37:51 InnoDB: Completed initialization of buffer pool
130212 9:37:51 InnoDB: highest supported file format is Barracuda.
InnoDB: Log scan progressed past the checkpoint lsn 2186809272046
130212 9:37:51 InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
InnoDB: Doing recovery: scanned up to log sequence number 2186814514688
InnoDB: Doing recovery: scanned up to log sequence number 2186816162838
InnoDB: 1 transaction(s) which must be rolled back or cleaned up
InnoDB: in total 27964 row operations to undo
InnoDB: Trx id counter is 18834200
130212 9:37:51 InnoDB: Starting an apply batch of log records to the database...
InnoDB: Progress in percents: 0 1 2 3 4 5 6 7 8 9 10 11 InnoDB: Database page corruption on disk or a failed
InnoDB: file read of page 4424818.
InnoDB: You may have to recover from a backup.
130212 9:37:51 InnoDB: Page dump in ascii and hex (16384 bytes):
len 16384; hex 058425e20043847200356a3f003e3720000001fd2807769645bf00000000000000000000000000ef31a083ca0000000031980005000003c8000$
InnoDB: End of page dump
18 130212 9:37:51 InnoDB: Page checksum 1501194131, prior-to-4.0.14-form checksum 441953139
InnoDB: stored checksum 92546530, prior-to-4.0.14-form stored checksum 1240647222
InnoDB: Page lsn 509 671577750, low 4 bytes of lsn at page end 441447404
InnoDB: Page number (if stored to page already) 4424818,
InnoDB: space id (if created with >= MySQL-4.1.1 and stored already) 0
InnoDB: Page may be an index page where index id is 1096815
InnoDB: Database page corruption on disk or a failed
InnoDB: file read of page 4424818.
InnoDB: You may have to recover from a backup.
InnoDB: It is also possible that your operating
InnoDB: system has corrupted its own file cache
InnoDB: and rebooting your computer removes the
InnoDB: error.
InnoDB: If the corrupt page is an index page
InnoDB: you can also try to fix the corruption
InnoDB: by dumping, dropping, and reimporting
InnoDB: the corrupt table. You can use CHECK
InnoDB: TABLE to scan your table for corruption.
InnoDB: See also http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
InnoDB: Ending processing because of a corrupt database page.
130212 9:37:51 InnoDB: Assertion failure in thread 140114781574912 in file buf0buf.c line 3603
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
16:37:51 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
key_buffer_size=16777216
read_buffer_size=131072
max_used_connections=0
max_threads=151
thread_count=0
connection_count=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 346681 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x30000
/usr/sbin/mysqld(my_print_stacktrace+0x29)[0x7f72aa2435b9]
/usr/sbin/mysqld(handle_fatal_signal+0x3d8)[0x7f72aa12c548]
/lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f72a8c8dcb0]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35)[0x7f72a82f6425]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x17b)[0x7f72a82f9b8b]
/usr/sbin/mysqld(+0x605429)[0x7f72aa32d429]
/usr/sbin/mysqld(+0x631b69)[0x7f72aa359b69]
/usr/sbin/mysqld(+0x5c20a8)[0x7f72aa2ea0a8]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f72a8c85e9a]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f72a83b3cbd]
I cannot find anything running for MySQL, including any sockets.
I am taking over for a previous sysadmin and am fairly new to Linux and MySQL. We've got to get this system back online soon. Please help.
Have you tried adding the following line to your /etc/mysql/my.cnf and then restarting the server?
[mysqld]
innodb_force_recovery = 4
To add to @elico3000's answer: you will now need to dump your corrupt table(s) and data to repair the InnoDB files. There are a number of ways to do this. You can read through the logs to determine the point of failure and the likely table names, then dump and recreate those specific tables. Or you can dump the entire MySQL instance and all schemas using a single command, but that will take some time depending on how big your DB is. Either way, once you have addressed the corrupt table(s), set the innodb_force_recovery option back to 0 and restart mysqld_safe.
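If you go the dump-everything route, the "single command" mentioned above is typically something like this (run while the server is up under innodb_force_recovery; the output path is just an example):
# mysqldump --all-databases > /root/all_databases.sql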
Here is a good tutorial on recovery options for both MyISAM and InnoDB MySQL instances; it covers a few options. It is far easier to point you there than to regurgitate the commands and concepts again in this answer.
Good luck, and come back to ask more pointed questions once you have tried one of the options. There are probably more tutorials out there, but I have used this one in development to rebuild my dev DB and it has plenty of information.
Look here