Out of memory while dropping a column - MySQL

I have a database hosted on Clever-Cloud (https://www.clever-cloud.com/pricing - MySQL addon size LM: 1 GB memory & 2 vCPUs). I have a table with 188,000 rows, about 311 MB, using the InnoDB engine.
When I try to drop a column from my table (there is no index on this column), I get the following error in phpMyAdmin:
2006 - MySQL server has gone away
MySQL log at the time of the error: https://gist.github.com/urcadox/038c180cefdcba20e1052e7418a43324
I've read that the InnoDB engine uses memory to create a new table, copies the data without the dropped column, and swaps the old and new tables to perform the drop operation.
Is there anything I can do to use less memory?
Is there any way to make InnoDB use disk instead of memory?
Thank you!

Why don't you try ALGORITHM=COPY in your ALTER TABLE statement? It is part of the ALTER TABLE syntax and forces the table to be copied rather than modified in place. Its memory usage is likely to be lower, but certain caveats apply:
Any ALTER TABLE operation run with the ALGORITHM=COPY clause prevents
concurrent DML operations. Concurrent queries are still allowed. That
is, a table-copying operation always includes at least the concurrency
restrictions of LOCK=SHARED (allow queries but not DML). You can
further restrict concurrency for such operations by specifying
LOCK=EXCLUSIVE, which prevents DML and queries.
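For example, assuming the table is called mytable and the column to drop is mycolumn (both names are placeholders here):
ALTER TABLE mytable
  DROP COLUMN mycolumn,
  ALGORITHM=COPY,
  LOCK=SHARED;  -- queries still allowed during the copy; DML is blocked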

Related

Slow MySQL table

I am currently trying to figure out why the site I am working on (Laravel 4.2 framework) is really slow at times, and I think it has to do with my database setup. I am not a pro at all, so I would assume that is where the problem is.
My sessions table has roughly 2.2 million records in it; when I run SHOW PROCESSLIST;, all the queries that take the longest relate to that table.
Here is a picture for example:
[Screenshot: SHOW PROCESSLIST output and the sessions table structure]
Surely I am doing something wrong, or it's not indexed properly? I'm not sure; I'm not fantastic with databases.
We don't see the complete SQL being executed, so we can't recommend appropriate indexes. But if the only predicate on the DELETE statements is on the last_activity column, i.e.
DELETE FROM `sessions` WHERE last_activity <= 'somevalue' ;
Then performance of the DELETE statement will likely be improved by adding an index with a leading column of last_activity, e.g.
CREATE INDEX sessions_IX1 ON sessions (last_activity);
Also, if this table uses the MyISAM storage engine, then DML statements cannot execute concurrently; each DML statement will block while waiting to obtain an exclusive lock on the table. The InnoDB storage engine uses row-level locking, so some DML operations can run concurrently. (InnoDB doesn't eliminate lock contention, but its locks are on rows and index blocks rather than on the entire table.)
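If the table is currently MyISAM, switching it is a single statement (note this rebuilds the table and locks it while it runs):
ALTER TABLE sessions ENGINE=InnoDB;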
Also consider using a different storage mechanism (other than a MySQL database) for storing and retrieving web server "session" info.
Also, is it necessary (is there some requirement) to persist 2.2 million "sessions" rows? Are we sure that all of those rows are actually needed? If some of that data is historical, and isn't specifically needed to support the current web server sessions, we might consider moving the historical data to another table.
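A minimal sketch of that kind of archiving, reusing the 'somevalue' placeholder from above (the sessions_archive table name is an assumption, not from the original post):
CREATE TABLE sessions_archive LIKE sessions;
-- Move rows older than a chosen cutoff, then remove them from the hot table.
INSERT INTO sessions_archive SELECT * FROM sessions WHERE last_activity <= 'somevalue';
DELETE FROM sessions WHERE last_activity <= 'somevalue';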

What tools are available to free allocated space in a MySQL database after deleting data?

I am using MySQL Server 5.1.58 (community log). The problem is that after deleting data, the allocated space of the MySQL database is not freed, and as a result the backup size of my database is increasing day by day.
Please let me know of any tool that can resolve this issue.
Remember that MySQL locks the table while OPTIMIZE TABLE is running.
For your MySQL version, from the official documentation:
OPTIMIZE TABLE should be used if you have deleted a large part of a
table or if you have made many changes to a table with variable-length
rows (tables that have VARCHAR, VARBINARY, BLOB, or TEXT columns).
Deleted rows are maintained in a linked list and subsequent INSERT
operations reuse old row positions. You can use OPTIMIZE TABLE to
reclaim the unused space and to defragment the data file.
Additional notes for InnoDB:
For InnoDB tables, OPTIMIZE TABLE is mapped to ALTER TABLE, which
rebuilds the table to update index statistics and free unused space in
the clustered index. Beginning with MySQL 5.1.27, this is displayed in
the output of OPTIMIZE TABLE when you run it on an InnoDB table, as
shown here:
mysql> OPTIMIZE TABLE foo;
Table does not support optimize, doing recreate + analyze instead
So:
OPTIMIZE [NO_WRITE_TO_BINLOG | LOCAL] TABLE
tbl_name [, tbl_name] ...
By default, OPTIMIZE TABLE statements are written to the binary log so
that they will be replicated to replication slaves. Logging can be
suppressed with the optional NO_WRITE_TO_BINLOG keyword or its alias
LOCAL.
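For example, to rebuild a table and reclaim space without replicating the statement to slaves (the table name is a placeholder):
OPTIMIZE NO_WRITE_TO_BINLOG TABLE mytable;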

MySQL inserts, updates very slow

Our server database is MySQL 5.1.
We have 754 tables in our DB. We create a table for each project, hence the large number of tables.
For the past week I have noticed a very long delay in inserts and updates to any table. If I create a new table and insert into it, it takes one minute to insert around 300 records.
Whereas our test database on the same server has 597 tables, and the same insertion is very fast in the test DB.
The default engine is MyISAM, but we have a few tables in InnoDB.
There were a few triggers running. After I deleted the triggers it became somewhat faster, but it is still not fast enough.
Use DESCRIBE to see your query execution plans.
See http://dev.mysql.com/doc/refman/5.1/en/explain.html for more on its usage.
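For example (a made-up query, since we haven't seen the real ones):
EXPLAIN SELECT * FROM some_project_table WHERE client_id = 42;
-- The output shows which indexes are considered and roughly how many rows are examined.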
As @swapnesh mentions, the DESCRIBE command is very useful for performance debugging.
You can also check your installation for issues using:
https://raw.github.com/rackerhacker/MySQLTuner-perl/master/mysqltuner.pl
You use it like this:
wget https://raw.github.com/rackerhacker/MySQLTuner-perl/master/mysqltuner.pl
chmod +x mysqltuner.pl
./mysqltuner.pl
Of course, here I am assuming that you run some kind of a Unix based system.
You can use OPTIMIZE. According to the manual, it does the following:
Reorganizes the physical storage of table data and associated index
data, to reduce storage space and improve I/O efficiency when
accessing the table. The exact changes made to each table depend on
the storage engine used by that table.
The syntax is:
OPTIMIZE TABLE tablename
Inserts are typically faster when made in bulk rather than one by one. Try inserting 10, 30, or 100 records per statement.
If you use JDBC, you may be able to achieve the same effect with batching, without changing the SQL.
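A multi-row INSERT (the first suggestion above) looks like this; table and column names are illustrative:
INSERT INTO mytable (name, created_at) VALUES
('alpha', NOW()),
('beta', NOW()),
('gamma', NOW());
-- One statement, three rows: much cheaper than three separate INSERTs.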

Slow DROP TEMPORARY TABLE

Ran into an interesting problem with a MySQL table I was building as a temporary table for reporting purposes.
I found that if I didn't specify a storage engine, the DROP TEMPORARY TABLE command would hang for up to half a second.
If I defined my table as ENGINE = MEMORY this short hang would disappear.
As I have a solution to this problem (using MEMORY tables), my question is: why would a temporary table take so long to drop? Do they not use the MEMORY engine by default? It's not even a very big table; a couple of hundred rows with my current test data.
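For reference, the workaround from the question looks like this (the column list is a placeholder):
CREATE TEMPORARY TABLE report_tmp (
  id INT,
  total DECIMAL(10,2)
) ENGINE=MEMORY;  -- explicitly using MEMORY avoided the slow DROP here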
Temporary tables, by default, will be created wherever the MySQL configuration tells them to be, typically /tmp or somewhere else on disk. You can set this location (and even multiple locations) to a RAM disk location such as /dev/shm.
Hope this helps!
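You can check where temporary tables currently go with:
SHOW VARIABLES LIKE 'tmpdir';
Changing it means setting tmpdir in the server configuration and restarting mysqld.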
If the temporary table is created with the InnoDB engine, which may be the case if your default engine is InnoDB, and the InnoDB buffer pool is large, DROP TEMPORARY TABLE may take some time, since it needs to scan all buffer pool pages to discard them.
This was mentioned in a comment to this Stack Overflow question.
Note also that DROP (TEMPORARY) TABLE takes a lock that may have a huge impact on your whole server. See for example this.
At my work, we recently had a server slow down because we had an InnoDB buffer pool of 80 GB and some SQL requests had been optimized using InnoDB temporary tables.
About 100 such DROP TEMPORARY TABLE requests every 5 minutes were enough to have a huge impact. The problem was hard to debug, because the slow query log would tell us that UPDATEs of a single row accessed by primary key in some other table were taking two seconds, and there was an enormous number of such updates. But even though most query time was spent on these updates, the problem was really caused by the DROP TEMPORARY TABLE requests.

MySQL: what to do when MEMORY tables reach max_heap_table_size?

I'm using a MySQL MEMORY table as a way to cache data rows which are read several times. I chose this alternative because I'm not able to use xcache or memcache in my solution.
After reading the MySQL manual and forum threads on this topic, I have concluded that an error will be raised when the table reaches its maximum memory size. I want to know if there is a way to catch this error in order to truncate the table and free the memory. I don't want to raise the memory limit; I need a way to free the memory automatically so the table can continue working.
Thanks.
If you run out of memory, the engine will raise error 1114 with the following error message:
The table 'table_name' is full
You should catch this error on the client side and delete some data from the table.
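Catching it on the client side depends on your driver; as a server-side sketch, a stored procedure can trap error 1114, truncate the table, and retry the insert once (the cache_table name and its columns are assumptions, not from the question):
DELIMITER //
CREATE PROCEDURE cache_put(IN p_id INT, IN p_val VARCHAR(255))
BEGIN
  DECLARE EXIT HANDLER FOR 1114
  BEGIN
    -- The MEMORY table is full: empty it, then retry the insert once.
    TRUNCATE TABLE cache_table;
    INSERT INTO cache_table (id, val) VALUES (p_id, p_val);
  END;
  INSERT INTO cache_table (id, val) VALUES (p_id, p_val);
END//
DELIMITER ;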
You should use normal, persistent tables instead and rely on the inherent caching. For tables where the contents can safely be thrown away, MyISAM is a safe engine (provided you are happy to do a TRUNCATE TABLE on each boot up), alternatively, you can use the same engine as your permanent tables (e.g. InnoDB).
MEMORY tables are extremely sucky anyway (in all released MySQL versions; better in Drizzle and some others) because they pad rows to the maximum length, which means you can't really start putting VARCHARs in them sensibly.
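A quick way to see the padding (the table name is illustrative):
CREATE TABLE padding_demo (v VARCHAR(255)) ENGINE=MEMORY;
INSERT INTO padding_demo VALUES ('x');
SHOW TABLE STATUS LIKE 'padding_demo';  -- note Row_format: Fixed and the large Avg_row_length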
Unfortunately, you cannot yet set the InnoDB durability parameter on a per-table (or per-transaction) basis, so you must decide on a per-server basis how much durability you need. In your case, none, so you can set innodb_flush_log_at_trx_commit to 2 (or even 0, but that gains you little).
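Setting it takes effect immediately (persist it in the server configuration as well):
SET GLOBAL innodb_flush_log_at_trx_commit = 2;
-- Flushes the InnoDB log to disk about once per second instead of at every commit.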