High CPU - what to do? - MySQL

I have a high CPU problem with MySQL: "top" (Linux) shows CPU peaks of 90%.
I tried to find the source of the problem and turned on the general log and the slow query log,
but the slow query log did not catch anything.
The DB contains a few small tables and one large table with almost 100k rows; the storage engine is MyISAM. The strange thing I have noticed is that on the large table, SELECT and INSERT are very fast, but UPDATE takes 0.2 - 0.5 secs.
I have already run OPTIMIZE and REPAIR with no improvement.
The table is being updated frequently; could this be the source of the high CPU%?
What can I do to improve this?

Does your MySQL server have Ganglia set up on it? Regular Ganglia metrics along with the mysql_stats plugin for Ganglia might reveal what's going on.

I found mytop extremely helpful.

First of all, can you identify which query is overloading the server? If so, please paste it here and maybe we can give you a hand with it.
Also, please look at the table structure. Tables with many indexes are likely to have slow update times.
I also recommend giving us more data about the problem.
Hope that helps,

The first thing that pops into mind is indexing, but that doesn't fit since your selects and inserts are fast. It's usually inserts and updates that slow down on an "overindexed" table. That leaves triggers: do you have an UPDATE trigger on that table that could be doing a lot of work and causing the spike?
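If you want to rule that out, a quick check might look like this (using the customers table posted further down in the thread; the trigger name is just a placeholder):
SHOW TRIGGERS LIKE 'customers';
-- if anything shows up, inspect its body:
SHOW CREATE TRIGGER some_update_trigger;   -- placeholder name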

A query that takes 0.5 secs won't show up as 100% CPU in top; it's too small on its own.
Also try SHOW FULL PROCESSLIST, verify your my.cnf, and even try reducing the slow query threshold; the slow query log can only catch whatever runs longer than that threshold.

Any UPDATE statement on that table that filters on the table's key is slow, for example:
UPDATE customers SET CustMoney = 1 WHERE CustUID = 'someid'
CREATE TABLE IF NOT EXISTS `customers` (
`CustFullName` varchar(45) NOT NULL,
`CustPassword` varchar(45) NOT NULL,
`CustEmail` varchar(128) NOT NULL,
`SocialNetworkId` tinyint(4) NOT NULL,
`CustUID` varchar(64) character set ascii NOT NULL,
`CustMoney` bigint(20) NOT NULL default '0',
`LastIpAddress` varchar(45) character set ascii NOT NULL,
`LastLoginTime` datetime NOT NULL default '1900-10-10 10:10:10',
`SmallPicURL` varchar(120) character set ascii default '',
`LargePicURL` varchar(120) character set ascii default '',
`LuckyChips` int(10) unsigned NOT NULL default '0',
`AccountCreationTime` datetime NOT NULL default '2009-11-11 11:11:11',
`AccountStatus` tinyint(4) NOT NULL default '1',
`CustLevel` int(11) NOT NULL default '0',
`City` varchar(32) NOT NULL default '',
`State` varchar(32) NOT NULL default '0',
`Country` varchar(32) NOT NULL default '',
`Zip` varchar(16) character set ascii NOT NULL,
`CustExp` bigint(20) NOT NULL default '0',
PRIMARY KEY (`CustUID`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
Again, I'm not sure that this is the cause of the high CPU usage, but it seems to me that it's not normal for an UPDATE statement to take that long (0.5 sec).
The table is being updated up to 5 times a second at the moment, and in the future it will be updated more frequently.

What kind of server is this? I've seen slooow writes and relatively fast reads on virtual machines. What does hdparm (http://en.wikipedia.org/wiki/Hdparm) have to say?
What CPU/RAM does it have? What's the load average?


.ibd files taking too much space on Ubuntu server

I'm having a problem with .ibd MySQL files.
Scenario:
I have an Ubuntu server with 200GB of storage, running a Django application that uses MySQL.
The nature of my application is to store huge amounts of data and do some processing on it. I have one table which has 5 to 6 million records. This table has taken up almost 60GB of space (the space occupied by the tablename.ibd file).
I tried running OPTIMIZE TABLE tablename but the .ibd file doesn't shrink.
The storage engine is InnoDB with innodb_file_per_table enabled (hence the per-table .ibd file).
PROBLEM
Firstly, storage is running out because the file is getting too large.
Secondly, when I try to run a migration that adds a column to this table, the server runs out of space: while the migration runs, the .ibd file keeps growing until the disk fills up.
I will be very thankful if someone helps me out of this.
Note: I cannot purge data from the table, as the data is very important to me.
(UPDATED)
SHOW CREATE TABLE tablename
CREATE TABLE `table_name` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`title` varchar(255) DEFAULT NULL,
`price` double DEFAULT NULL,
`item_identifier` varchar(20) NOT NULL,
`upc` varchar(20) DEFAULT NULL,
`mpn` varchar(100) DEFAULT NULL,
`weight` double DEFAULT NULL,
`weight_unit` varchar(10) DEFAULT NULL,
`main_category` varchar(50) DEFAULT NULL,
`sub_category` varchar(50) DEFAULT NULL,
`category_tree` varchar(500) DEFAULT NULL,
`description` varchar(3800) DEFAULT NULL,
`color` varchar(50) DEFAULT NULL,
`brand` varchar(150) DEFAULT NULL,
`main_image` varchar(2048) DEFAULT NULL,
`secondary_images` varchar(255) DEFAULT NULL,
`shipping` double,
`stock` int(11) NOT NULL,
`sale_rank` varchar(100) DEFAULT NULL,
`itemHeight` double DEFAULT NULL,
`itemLength` double DEFAULT NULL,
`itemWeight` double DEFAULT NULL,
`itemWidth` double DEFAULT NULL,
`manufacturer` varchar(100) DEFAULT NULL,
`product_model` varchar(150) DEFAULT NULL,
`variations` longtext,
`pack_count` int(11),
`size` varchar(100) DEFAULT NULL,
`flavor` varchar(100) DEFAULT NULL,
`successfully_stored` tinyint(1) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `item_identifier` (`item_identifier`),
KEY `table_name_upc_3ca3d702` (`upc`)
) ENGINE=InnoDB AUTO_INCREMENT=7279139 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
SHOW TABLE STATUS LIKE 'tablename'\G
*************************** 1. row ***************************
Name: table_name
Engine: InnoDB
Version: 10
Row_format: Dynamic
Rows: 7439966
Avg_row_length: 8807
Data_length: 65530740736
Max_data_length: 0
Index_length: 323633152
Data_free: 5242880
Auto_increment: 7279139
Create_time: 2021-06-11 21:26:17
Update_time: 2021-06-12 18:08:06
Check_time: NULL
Collation: latin1_swedish_ci
Checksum: NULL
Create_options:
Comment:
1 row in set (0.01 sec)
InnoDB disk space is 2-3 times as much as you might think. This is because of several different "overhead" things. They provide performance and features; live with it.
60GB / 5M rows = 12KB per row. Sounds like you have one or more big TEXT or BLOB columns? Please provide SHOW CREATE TABLE so we can further discuss the layout of the table.
(OPTIMIZE TABLE is rarely of any use; don't bother using it.)
Sizes
Bill covered most of the size-related things (DOUBLE->FLOAT, etc); alas they will shrink the disk footprint by only a few percent in your case.
It seems that variations must be the bulkiest column. What do you get from SELECT AVG(LENGTH(variations)) FROM table_name; ? I suspect it is a few thousand. Most "text" can easily be compressed 3:1 by standard compression libraries. If the average is 3000, then the potential savings is about 2KB per row, which is something like 20-30% of the table. (It may save more due to the "off-record" storage mechanism, but the computation is complex.)
Compressing a single column requires the cooperation of the client. That is, code in Django needs to compress and uncompress the column between client and server.
Using ROW_FORMAT=COMPRESSED gives about 2:1 compression for the whole table and is transparent to the client. So, overall, this is probably better.
As Bill points out, all of this is a temporary fix -- you will run out of disk space as the table grows. That is, OPTIMIZE, smaller datatypes, and compression are only temporary fixes. You really need more disk space.
Get a server with larger storage volumes.
Alternative: Get a second server running MySQL Server, and move some of the data in your current instance to that new instance.
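For concreteness, a sketch of the column-size check and the COMPRESSED conversion mentioned above (ROW_FORMAT=COMPRESSED needs innodb_file_per_table=ON and the Barracuda file format, and the ALTER rebuilds the table, so it temporarily needs extra free space):
SELECT AVG(LENGTH(variations)) FROM table_name;   -- how bulky is the suspect column?

ALTER TABLE table_name ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;  -- roughly 2:1, transparent to the client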
Re your update with the table definition and status:
The table status shows that the data length, that is, the rows, use about 61 GiB, and the secondary indexes use about 0.3 GiB. So it's unlikely that you can save space by dropping indexes.
The average row size is 8807 bytes (this is an estimate, it's just the data_length divided by the number of rows). You might be able to reduce the average row size a little bit by changing some data types.
For example, each double takes 8 bytes. Could you get enough precision using float or numeric(9,2) instead? These take 4 bytes each. Similarly, there are some int columns that might be able to be smallint and still store the range of values you need.
You should read about the storage requirements of each data type and make decisions about how best to define your columns. See https://dev.mysql.com/doc/refman/8.0/en/storage-requirements.html
The variable-length data types like varchar and longtext already store only the length of the content in the column on each row, not the max length allowed. So for example changing varchar(200) to varchar(100) doesn't make any difference if the strings in them are already shorter than 100 characters.
There are some cases of varchar that might be replaced by an integer reference to a lookup table. An integer may take less space than repeating the same string on every row.
You could use the InnoDB COMPRESSED row format. This has variable results depending on your data, but it might shrink strings by about half.
Changing data types and the row format do require you to run ALTER TABLE, so there needs to be enough storage space for the copy of the table temporarily, similar to running OPTIMIZE TABLE. If you don't have enough space to do that, then you can't alter the table.
Even with these techniques, your table will still be quite large, and databases tend to grow over time as we store more rows of data in them. Even if you shrink it a bit today, you will still need a plan for getting a larger storage volume eventually.
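For concreteness, the kind of data-type narrowing discussed above might look roughly like this; it is only a sketch, so verify that FLOAT precision and SMALLINT ranges actually fit your data first, and note that the ALTER rebuilds the table and needs temporary space:
ALTER TABLE table_name
  MODIFY price      FLOAT DEFAULT NULL,
  MODIFY weight     FLOAT DEFAULT NULL,
  MODIFY shipping   FLOAT,
  MODIFY itemWeight FLOAT DEFAULT NULL,
  MODIFY stock      SMALLINT NOT NULL,
  MODIFY pack_count SMALLINT;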

How to resolve update lock issue in MySQL

I have two MySQL UPDATE query problems on my website.
Problem 1
I run a content site that updates page views for posts when users read them.
Each time I send push notifications, my server times out; when I comment out the UPDATE query that increments the page views, everything returns to normal.
I think this may be a result of hundreds of UPDATE queries trying to update the views on the same row.
**The query that updates the table**
update table set views='$newview' where id=1
Query Explain
id: 1
select_type: SIMPLE
table: new_jobs
type: range
possible_keys: PRIMARY
key: PRIMARY
key_len: 4
ref: NULL
rows: 1
Extra: Using where
**tablename create table**
CREATE TABLE `tablename` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`company_id` int(11) DEFAULT NULL,
`job_title` varchar(255) DEFAULT NULL,
`slug` varchar(255) DEFAULT NULL,
`advert_date` date DEFAULT NULL,
`expiry_date` date DEFAULT NULL,
`no_deadline` int(1) DEFAULT 0,
`source` varchar(20) DEFAULT NULL,
`featured` int(1) DEFAULT 0,
`views` int(11) DEFAULT 1,
`email_status` int(1) DEFAULT 0,
`draft` int(1) DEFAULT 0,
`created_by` int(11) DEFAULT NULL,
`show_company_name` int(1) DEFAULT 1,
`display_application_method` int(1) DEFAULT 0,
`status` int(1) DEFAULT 1,
`upload_date` datetime DEFAULT NULL,
`country` int(1) DEFAULT 1,
`followers_email_status` int(1) DEFAULT 0,
`og_img` varchar(255) DEFAULT NULL,
`old_id` int(11) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `new_jobs_company_id_index` (`company_id`),
KEY `new_jobs_views_index` (`views`),
KEY `new_jobs_draft_index` (`draft`),
KEY `new_jobs_country_index` (`country`)
) ENGINE=InnoDB AUTO_INCREMENT=151359 DEFAULT CHARSET=utf8
What is the best way of handling this?
[Scenario 2 removed on request]
Scenario 1. I would expect the update of a 'view' count (or 'click' or 'like' or whatever) to be more like
UPDATE t SET views = views + 1 WHERE id = 123;
I assume you have an index (probably the PRIMARY KEY) on id?
Since there are other things going on with that table, it may be wise to split off the rapidly incrementing counter into a separate table. This would avoid interfering with other queries. You can get other data, plus the counter, by using JOIN .. USING(id).
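A minimal sketch of that split, assuming the main table is new_jobs (as in your EXPLAIN) and a hypothetical counter table job_views:
CREATE TABLE job_views (
  id    INT NOT NULL,               -- same id as new_jobs.id
  views INT NOT NULL DEFAULT 0,
  PRIMARY KEY (id)
) ENGINE=InnoDB;

-- the hot, tiny transaction:
UPDATE job_views SET views = views + 1 WHERE id = 123;

-- reading the count together with the rest of the row:
SELECT j.job_title, v.views
  FROM new_jobs AS j
  JOIN job_views AS v USING (id)
 WHERE j.id = 123;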
Scenario 2 does not make sense. It seems to keep the latest date for each email, but what does country mean? Since it seems like more than just a counter, you might want a separate table to log those 3 columns.
Please provide SHOW CREATE TABLE.
There are many things that novices perceive as a "crash". Please describe further -- out of connections, out of disk space, sluggishness, the client gave error message, other operations taking too long, etc. Each has a different remedy.
Query
Are you currently, logically, doing
BEGIN;
$ct = SELECT views ... FOR UPDATE;
...
UPDATE ... SET views = $ct+1 WHERE ...;
COMMIT;
If so, that is much less efficient than
(with autocommit = ON)
UPDATE ... SET views = views+1 ...;
Note that the first version hangs onto the row longer. If you fail to use FOR UPDATE, you will drop some counts.
Splitting into a separate table sort of forces you to run the UPDATE as its own transaction.
Other
innodb_flush_log_at_trx_commit:
Default is 1, which is secure, but leads to at least one I/O operation for each transaction.
2 leads to a flush once a second. During intense times, this is much more efficient. But a crash could lose up to one second's worth of updates. The inaccuracy of the "view count" due to a rare crash is, in my opinion, acceptable.
KEY(views) needs to be updated every time views is changed. But, thanks to the "change buffer", this is unlikely to involve any extra I/O, at least not at the time you run the UPDATE.
INT(1) takes 4 bytes; the (1) has no meaning. Suggest changing to TINYINT (1 byte), thereby saving about 27 bytes per row. (7 columns plus 2 indexes)
country INT(1) -- Is it a flag? What is the meaning? Is it normalized to another table? Using 4 bytes for an id and an extra table when standard abbreviations ('US', 'UK', 'RU', 'IN', etc) would take 2 bytes? Suggest country CHAR(2) CHARACTER SET ascii COLLATE ascii_general_ci.
Indexing flags rarely benefits. Let's see the queries where you think such indexes might be used. And the EXPLAIN SELECT ... for them.
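A sketch of those last few suggestions (SET GLOBAL needs the SUPER privilege and should also go into my.cnf to survive a restart; drop the flag indexes only after checking with EXPLAIN that nothing actually uses them):
SET GLOBAL innodb_flush_log_at_trx_commit = 2;   -- flush roughly once per second instead of per commit

ALTER TABLE new_jobs
  MODIFY featured TINYINT DEFAULT 0,             -- repeat for the other int(1) flag columns
  DROP INDEX new_jobs_draft_index,
  DROP INDEX new_jobs_country_index;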

Performance Impact to Adding MySQL Index to Large Table

We would like to add a normal index to fields 3 & 4 of the following MySQL table and would like to understand the impact on server performance before doing so. E.g., will the index take up additional RAM and slow down the database as a result?
We understand it will take time initially to create the index; we're not concerned about that. Rather, we want to know whether we need to upgrade our server to anticipate the potential increase in load/memory pressure on the database after adding the index. Our DBA insists that we must increase RAM from 16GB to 48GB, as he believes the new index will be kept in RAM, causing the server to run out of memory for other operations. It would be great to confirm whether that's necessary.
Thanks in advance for your expert advice.
MySQL version: 5.5.30
OS: CentOS
Hardware config: 8 Core, 32G RAM, 1TB Disk
Table size: 490GB
No. of rows: 67M
CREATE TABLE `mytable` (
`field_1` text NOT NULL,
`field_2` varchar(200) NOT NULL,
`field_3` varchar(100) NOT NULL,
`field_4` text NOT NULL,
`field_5` char(8) NOT NULL,
`field_6` varchar(100) NOT NULL DEFAULT '',
`field_7` varchar(100) DEFAULT '',
`field_8` varchar(20) NOT NULL,
`field_9` char(16) NOT NULL,
`field_0` varchar(25) NOT NULL,
`field_a` varchar(50) NOT NULL DEFAULT '',
`field_b` varchar(20) DEFAULT '',
`field_c` varchar(35) DEFAULT '',
`field_d` varchar(35) DEFAULT '',
`field_e` varchar(30) NOT NULL DEFAULT '',
`field_f` varchar(30) DEFAULT '',
`field_g` varchar(3) NOT NULL DEFAULT 'xx',
`field_h` varchar(50) DEFAULT '',
`field_i` varchar(100) DEFAULT '',
`field_j` char(8) NOT NULL,
`field_k` varchar(10) NOT NULL DEFAULT '',
`field_l` datetime NOT NULL,
PRIMARY KEY (`field_9`),
KEY `field_j_idx` (`field_j`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
First of all, indexes are stored on disk, not in memory. Both MyISAM and InnoDB may cache certain index blocks in memory to enable faster access to the most commonly used blocks. For InnoDB, the size of this buffer is controlled by the innodb_buffer_pool_size server system variable.
As you can see from the description, the setting of this variable is not affected by the addition or removal of indexes. So, unless you decide to increase the size of this variable, there is no direct impact of adding new index on MySQL memory usage.
Obviously, adding a new index to a large existing table will have a performance impact during the creation of the index. There will be an obvious impact after the index is added on any insert / update / delete operations, since MySQL will have to update the additional index data as well.
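If you want to check how the buffer pool is currently sized and how often it misses, these standard variables and status counters should tell you:
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
-- Innodb_buffer_pool_reads (reads that went to disk) versus
-- Innodb_buffer_pool_read_requests (logical reads) gives a rough miss rate.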
It depends. What version of MySQL do you have? With newer versions, ALGORITHM=INPLACE makes adding a secondary, non-unique, index relatively fast and painless.
You have another potential problem looming. If this table is really half the size of the disk and you need to do an ALTER that cannot be done INPLACE, it will probably fail for lack of disk space. Consider getting a bigger disk before this happens, and/or think about ways to shrink the table.
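For reference, on MySQL 5.6+ the online DDL clauses look like this; on 5.5.30 (as in the question) the ALGORITHM/LOCK clauses are not recognized, although InnoDB fast index creation still avoids copying the table for a plain secondary index:
ALTER TABLE mytable
  ADD INDEX idx_field_3 (field_3),          -- hypothetical index name
  ADD INDEX idx_field_4 (field_4(100)),     -- TEXT columns need a prefix length
  ALGORITHM=INPLACE, LOCK=NONE;             -- fails fast if a table copy would be required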
CHAR(8) -- what kind of data is in it? If it is always hex or plain letters, it should be declared CHARACTER SET ascii (or latin1), not utf8 -- which takes 24 bytes. Field_j already takes double that because of the index.
If some of the columns have repeated values, consider "normalizing" them. Then replace the bulky string with MEDIUMINT UNSIGNED (3 bytes, 16M max) or INT UNSIGNED.
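A sketch of the usual normalization pattern (all names here are hypothetical, since the real columns are obfuscated):
CREATE TABLE field_6_lookup (
  id  MEDIUMINT UNSIGNED NOT NULL AUTO_INCREMENT,
  val VARCHAR(100) NOT NULL,
  PRIMARY KEY (id),
  UNIQUE KEY (val)
) ENGINE=InnoDB;

INSERT IGNORE INTO field_6_lookup (val)
SELECT DISTINCT field_6 FROM mytable;

-- then add a MEDIUMINT UNSIGNED field_6_id to mytable, populate it with an UPDATE ... JOIN,
-- and drop the original string column once everything reads via the lookup table.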
(I understand your need for obfuscation of the column names, but it makes it hard to give you concrete suggestions.)
field_4 is TEXT, which cannot be indexed directly (only via a prefix index or a FULLTEXT index). Please describe further what type of text is in it; we may be able to suggest workarounds.
I assume innodb_file_per_table was ON when you built the table? And is it still ON? Otherwise, all hope is lost.

Recommendations for aggregating MySQL data (millions of rows)

Can anyone recommend a strategy for aggregating raw 'click' and 'impression' data stored in a MySQL table with over 100,000,000 rows?
Here is the table structure.
CREATE TABLE `clicks` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`companyid` int(11) DEFAULT '0',
`type` varchar(32) NOT NULL DEFAULT '',
`contextid` int(11) NOT NULL DEFAULT '0',
`period` varchar(16) NOT NULL DEFAULT '',
`timestamp` int(11) NOT NULL DEFAULT '0',
`location` varchar(32) NOT NULL DEFAULT '',
`ip` varchar(32) DEFAULT NULL,
`useragent` varchar(64) DEFAULT NULL,
`processed` tinyint(1) NOT NULL DEFAULT '0',
PRIMARY KEY (`id`),
KEY `type` (`type`),
KEY `companyid` (`companyid`),
KEY `period` (`period`),
KEY `contextid` (`contextid`)
) ENGINE=MyISAM AUTO_INCREMENT=21189 DEFAULT CHARSET=latin1;
What I want to do is make this data easier to work with. I want to extract weekly and monthly aggregates from it, grouped by type, companyid and contextid.
Ideally, I'd like to take this data off the production server, aggregate it and then merge it back.
I'm really in a bit of a pickle and wondered whether anyone had any good starting points or strategies for actually aggregating the data so that it can be queried quickly using MySQL. I do not require 'real time' reporting for this data.
I've tried batch PHP scripts in the past but this seemed quite slow.
You can implement a simple PHP script with the whole monthly/weekly data aggregation logic and have it execute via a cron job at a given time. Depending on the software context, it could be scheduled to run at night. Additionally, you could pass a GET parameter in the request to identify the request source.
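For the aggregation itself, here is a sketch of the SQL such a cron job could run; the summary table clicks_weekly is hypothetical, and this assumes `timestamp` holds a Unix timestamp (in practice you would bound both statements by the same id range so rows inserted mid-run are not skipped):
CREATE TABLE clicks_weekly (
  yearweek  INT NOT NULL,
  type      VARCHAR(32) NOT NULL,
  companyid INT NOT NULL DEFAULT 0,
  contextid INT NOT NULL DEFAULT 0,
  clicks    INT NOT NULL,
  PRIMARY KEY (yearweek, type, companyid, contextid)
) ENGINE=InnoDB;

INSERT INTO clicks_weekly (yearweek, type, companyid, contextid, clicks)
SELECT YEARWEEK(FROM_UNIXTIME(`timestamp`)) AS yw,
       type,
       IFNULL(companyid, 0) AS cid,
       contextid,
       COUNT(*)
  FROM clicks
 WHERE processed = 0
 GROUP BY yw, type, cid, contextid
ON DUPLICATE KEY UPDATE clicks = clicks + VALUES(clicks);

UPDATE clicks SET processed = 1 WHERE processed = 0;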
You might be interested in MySQL replication... set up a 2nd server whose sole job is to run the reports on the replicated copy of the data set, so you can tune it specifically for that job. If you set up your replication scheme as master-master, then when the report server updates its own tables based on the report findings, those database changes will automatically replicate back to the production server.
Also I would highly recommend you read High Performance MySQL, 3rd Ed., and take a look at http://www.mysqlperformanceblog.com/ for further info on working with massive datasets in MySQL

Handling a huge MySQL table

Hope you are all doing great. We have a huge MySQL table called 'posts'. It has about 70,000 records and has grown to about 10GB in size.
My boss says that something has to be done to make this huge table easier for us to handle, because if the table got corrupted it would take us a lot of time to recover it. Also, at times it's slow.
What are the possible solutions so that handling this table becomes easier for us in all respects?
The structure of the table is as follows:
CREATE TABLE IF NOT EXISTS `posts` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`thread_id` int(11) unsigned NOT NULL,
`content` longtext CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL,
`first_post` mediumtext CHARACTER SET utf8 COLLATE utf8_unicode_ci,
`publish` tinyint(1) NOT NULL,
`deleted` tinyint(1) NOT NULL,
`movedToWordPress` tinyint(1) NOT NULL,
`image_src` varchar(500) CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL DEFAULT '',
`video_src` varchar(500) CHARACTER SET utf8 COLLATE utf8_unicode_ci DEFAULT NULL,
`video_image_src` varchar(500) CHARACTER SET utf8 COLLATE utf8_unicode_ci DEFAULT NULL,
`thread_title` text CHARACTER SET utf8 COLLATE utf8_unicode_ci,
`section_title` text CHARACTER SET utf8 COLLATE utf8_unicode_ci,
`urlToPost` varchar(280) CHARACTER SET utf8 COLLATE utf8_unicode_ci DEFAULT NULL,
`posts` int(11) DEFAULT NULL,
`views` int(11) DEFAULT NULL,
`forum_name` varchar(50) CHARACTER SET utf8 COLLATE utf8_unicode_ci DEFAULT NULL,
`subject` varchar(150) CHARACTER SET utf8 COLLATE utf8_unicode_ci DEFAULT NULL,
`visited` int(11) DEFAULT '0',
`replicated` tinyint(4) DEFAULT '0',
`createdOn` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
UNIQUE KEY `urlToPost` (`urlToPost`,`forum_name`),
KEY `thread_id` (`thread_id`),
KEY `publish` (`publish`),
KEY `createdOn` (`createdOn`),
KEY `movedToWordPress` (`movedToWordPress`),
KEY `deleted` (`deleted`),
KEY `forum_name` (`forum_name`),
KEY `subject` (`subject`),
FULLTEXT KEY `first_post` (`first_post`,`thread_title`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=78773 ;
Thanking You.
UPDATED
Note: although I am grateful for the replies, almost all answers have been about optimizing the current database and not about how to handle large tables in general. Although I can optimize the database based on the replies I got, it really does not answer the question about handling huge databases. Right now I am talking about 70,000 records, but during the next few months, if not weeks, we are going to grow by an order of magnitude. Each record can be about 300KB in size.
My answer's also an addition to two previous comments.
You've indexed half of your table. But if you take a look at some indexes (publish, deleted, movedToWordPress) you'll notice they only hold 1 or 0, so their selectivity is low (the number of distinct values of that column divided by the number of rows). Those indexes are a waste of space.
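If nothing filters on those columns alone (worth confirming with EXPLAIN on your main queries first), dropping them is a one-liner:
ALTER TABLE posts DROP INDEX publish, DROP INDEX deleted, DROP INDEX movedToWordPress;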
Some things also make no sense.
tinyint(4) - that doesn't actually make it a 4-digit integer. The number there is only the display width; tinyint is 1 byte, so it has 256 possible values. I'm assuming something went wrong there.
Also, 10 gigs in size for just 75k records? How did you measure the size? Also, what's the hardware you got?
Edit in regards to your updated question:
There are many ways to scale databases. I'll link one SO question/answer so you can get an idea of what you can do: here it is.
The other thing you might do is get better hardware. Usually, the reason databases slow down as they grow in size is the HDD subsystem and the amount of memory left to work with the dataset. The more RAM you have, the faster it all gets.
Another thing you could do is split your table into two in such a way that one table holds the textual data and the other holds the data relevant to what your system requires to perform certain searching or matching (you'd put integer fields there).
Using InnoDB, you'd gain a huge performance boost if the two tables were connected via some sort of foreign key pointing to the primary key. Since primary key lookups are fast in InnoDB, you open up several new possibilities for what you can do with your dataset. If your data gets increasingly huge, you can get enough RAM and InnoDB will try to buffer the dataset in RAM. There's an interesting thing called HandlerSocket that does some neat magic with servers that have enough RAM and use InnoDB.
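A rough sketch of that kind of split, with hypothetical table names; which columns go where depends on what your hot queries actually filter and sort on:
CREATE TABLE posts_meta (                  -- small, frequently filtered columns
  id        INT UNSIGNED NOT NULL,
  thread_id INT UNSIGNED NOT NULL,
  publish   TINYINT NOT NULL,
  deleted   TINYINT NOT NULL,
  views     INT DEFAULT NULL,
  createdOn TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (id),
  KEY (thread_id)
) ENGINE=InnoDB;

CREATE TABLE posts_content (               -- bulky text, fetched by primary key only
  id      INT UNSIGNED NOT NULL,
  content LONGTEXT CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL,
  PRIMARY KEY (id),
  CONSTRAINT fk_posts_content_id FOREIGN KEY (id) REFERENCES posts_meta (id)
) ENGINE=InnoDB;

SELECT m.views, c.content
  FROM posts_meta AS m
  JOIN posts_content AS c USING (id)
 WHERE m.id = 42;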
In the end it really boils down to what you need to do and how you are doing it. Since you didn't mention that, it's hard to give an estimate here of what you should do.
My first step to optimizing would definitely be to tweak MySQL instance and to back that big table up.
I guess you have to change some columns.
You can start by reducing your VARCHAR columns.
image_src/video_src/video_image_src VARCHAR(500) is a little too much, I think (VARCHAR(100) is enough, I would say).
thread_title is TEXT but should probably be a VARCHAR(200?), if you ask me.
Same with section_title.
OK, here is your problem:
content longtext
Do you really need LONGTEXT here? LONGTEXT allows up to 4GB per value. I think if you change this column to TEXT it would be a lot smaller:
TINYTEXT 256 bytes
TEXT 65,535 bytes ~64kb
MEDIUMTEXT 16,777,215 bytes ~16MB
LONGTEXT 4,294,967,295 bytes ~4GB
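If you go that route, the changes could look something like this; it is only a sketch, so check the current maximum lengths first, since values longer than the new limits would be truncated (TEXT tops out at ~64KB, and the update above mentions records approaching 300KB, so MEDIUMTEXT may be the realistic floor for content):
SELECT MAX(LENGTH(image_src)), MAX(LENGTH(thread_title)), MAX(LENGTH(content)) FROM posts;

ALTER TABLE posts
  MODIFY image_src    VARCHAR(100) CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL DEFAULT '',
  MODIFY thread_title VARCHAR(200) CHARACTER SET utf8 COLLATE utf8_unicode_ci,
  MODIFY content      MEDIUMTEXT CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL;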
Edit: I see you use a FULLTEXT index. I am quite sure that is storing a lot, a lot of data. You should use another mechanism for full-text searching.
In addition to what Michael has commented, the slowness can also be an issue of how well the queries are optimized and whether there are proper indexes to match. I would try to find some of the culprit queries that are taking longer than you'd hope and post them here on S/O to see if someone can help with optimizing them.