Hi, and thanks for reading my post. I am having a little trouble learning how to design my database in MySQL.
I have it set up already, but recently another person told me my members table will be slow and useless if I intend to have a lot of members!
I have looked it over a lot of times and did some Google searches, but I don't see anything wrong with it; maybe that's because I am new at this? Can one of you SQL experts look it over and tell me what's wrong with it, please? :)
--
-- Table structure for table `members`
--
CREATE TABLE IF NOT EXISTS `members` (
`userid` int(9) unsigned NOT NULL AUTO_INCREMENT,
`username` varchar(20) NOT NULL DEFAULT '',
`password` longtext,
`email` varchar(80) NOT NULL DEFAULT '',
`gender` int(1) NOT NULL DEFAULT '0',
`ipaddress` varchar(80) NOT NULL DEFAULT '',
`joinedon` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`acctype` int(1) NOT NULL DEFAULT '0',
`acclevel` int(1) NOT NULL DEFAULT '0',
`birthdate` date DEFAULT NULL,
`warnings` int(1) NOT NULL DEFAULT '0',
`banned` int(1) NOT NULL DEFAULT '0',
`enabled` int(1) NOT NULL DEFAULT '0',
`online` int(1) NOT NULL DEFAULT '0',
PRIMARY KEY (`userid`),
UNIQUE KEY `username` (`username`),
UNIQUE KEY `emailadd` (`email`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=19 ;
--
-- Dumping data for table `members`
--
It's going to be a site with FAQs/tips for games. I do expect to get lots of members at some point later on, but I thought I would ask to make sure it's all OK. Thanks again, peace.
Did the other person explain why they think it is slow and useless?
Here are a few things that I think could be improved:
email should be longer - off the top of my head, 320 should be long enough for most email addresses, but you might want to look that up.
If the int(1) fields are simple on/off fields, then they could be tinyint(1) or bool instead.
As @cularis points out, the ipaddress field might not be the appropriate type. INT UNSIGNED is better than varchar for IPv4. You can use INET_ATON() and INET_NTOA() for conversion (see the sketch after this list). See:
Best Field Type for IP address?
How to store IPv6-compatible address in a relational database
As @Delan Azabani points out, your password field is too long for the value you are storing. MD5 produces a 32 character string, so varchar(32) will be sufficient. You could switch to the more secure SHA2, and use the MySQL 'SHA2()' function.
Look into using the InnoDB database engine instead of MyISAM. It offers foreign key constraints, row-level locking and transactions, amongst other things. See Should you move from MyISAM to Innodb ?.
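For the IP-address point, a minimal sketch of what that could look like, assuming you only need IPv4 (the sample values are purely illustrative):
-- Store IPv4 addresses as 4-byte unsigned integers instead of VARCHAR
ALTER TABLE `members` MODIFY `ipaddress` INT UNSIGNED NOT NULL DEFAULT 0;
-- Convert on the way in...
INSERT INTO `members` (`username`, `email`, `ipaddress`)
VALUES ('alice', 'alice@example.com', INET_ATON('192.0.2.15'));
-- ...and back out
SELECT INET_NTOA(`ipaddress`) AS ip FROM `members` WHERE `username` = 'alice';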
I don't think it's necessarily slow, but I did notice that among all the other text fields where you used varchar, you used longtext for the password field. It looks like you are going to store the password itself in the database -- don't do this!
Always take a fixed-length cryptographic hash (using, for example, SHA-1 or SHA-2) of the user's password, and put that into the database. That way, if your database server is compromised, the users' passwords are not exposed.
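A minimal sketch of that approach in MySQL, assuming the password column is resized for a SHA-256 hex digest (the sample values are made up):
-- A 64-character hex string fits a SHA-256 digest exactly
ALTER TABLE `members` MODIFY `password` CHAR(64) NOT NULL;
-- Store the hash, never the raw password
INSERT INTO `members` (`username`, `email`, `password`)
VALUES ('alice', 'alice@example.com', SHA2('correct horse battery staple', 256));
-- A login check then compares hashes
SELECT `userid` FROM `members`
WHERE `username` = 'alice' AND `password` = SHA2('correct horse battery staple', 256);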
Apart from what @Delan said, I noted that:
JoinedOn column is defined as ON UPDATE CURRENT_TIMESTAMP. If you need to keep only the date joined, you should not update the field when the record is updated (see the sketch after this list).
IPAddress column is VARCHAR(80). If you store IPv4 addresses, this is far longer than needed.
Empty string ('') as DEFAULT for NOT NULL columns. Not good if the intention is to have a real value (other than '') in the field.
Empty string ('') as DEFAULT for UNIQUE fields. This contradicts the constraint enforced if your intention is to have a unique value (other than '').
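A quick sketch of how the JoinedOn and default-value points might be addressed (the email length is only an example):
ALTER TABLE `members`
MODIFY `joinedon` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,  -- set once on insert, no ON UPDATE
MODIFY `email` VARCHAR(254) NOT NULL;                            -- drop the '' default on a UNIQUE column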
I'm about to deploy a web app which could end up with a quite big database, and I have some doubts which I would like to clear up before going live.
I will explain my setup and most common queries a bit:
1- I use sqlalchemy
2- I have many different tables that reference each other by their id (an Integer unique field)
3- Some tables use a column with a random 50-character unique string, which I use client-side to avoid exposing ids to clients. This column is indexed.
4- I also indexed some datetime columns which I use for queries that find rows in date ranges.
5- All relations are indexed because sometimes I query by that parameter.
6- I also indexed some Bool columns which I query together with another indexed column.
So with this in mind, I ask:
In point 3: Is it fine to query by this unique indexed 50-character string? Isn't it too long to work as an index? Will it work as fast with 50 million records as it does now?
Example query:
customer=users.query.filter_by(secretString="ZT14V-_hD9qZrtwLj-rLPTioWQ1MJ4rhfqCUx8SvF0BrrCLtgvV_ZVXXV8_xXW").first()
Then I use this user query to find his associated object:
associatedObject=objects.query.filter_by(id=customer.associatedObject).first()
So once I have these results I just get whatever I need from them:
return({"username":customer.Name,"AssociatedStuff":associatedObject.Name})
About point 4:
Will these indexes on datetime columns do any work when comparing with < and > operators?
About point 6:
Is it OK to query something like:
userFineshedTasks=tasks.query.filter(task.completed==True, task.userID==user.id).all()
where completed and userID are indexed columns and userID is a reference to the users table's id column.
"Note this query doesn't make sense, because I can get the user's completed tasks from user.tasks.all() (given they are referenced) and filter the completed ones from there, but it's just an example query..."
Basically, I'm asking for confirmation about whether this is a correct way to query rows in a huge database, given that most of my queries will be for unique objects, or if I'm doing something wrong.
Hope someone can let me know if this is good practice or if I will have performance issues in the future.
Thanks in advance!
@Rick James:
Here I'm posting the CREATE TABLE SQL code from the database export file:
Hope this is enough to get an idea; it's an example of one of the tables, and basically the same ideas apply to my questions.
CREATE TABLE `Bookings` (
`id` int(11) NOT NULL,
`CodigoAlojamiento` int(11) DEFAULT NULL,
`Entrada` datetime DEFAULT NULL,
`Salida` datetime DEFAULT NULL,
`Habitaciones` longtext COLLATE utf8_unicode_ci,
`Precio` float DEFAULT NULL,
`Agencia` varchar(500) COLLATE utf8_unicode_ci DEFAULT NULL,
`Extras` text COLLATE utf8_unicode_ci,
`Confirmada` tinyint(1) DEFAULT NULL,
`NumeroOcupantes` int(11) DEFAULT NULL,
`Completada` tinyint(1) DEFAULT NULL,
`Tarifa` int(11) DEFAULT NULL,
`SafeURL` varchar(120) COLLATE utf8_unicode_ci DEFAULT NULL,
`EmailContacto` varchar(50) COLLATE utf8_unicode_ci DEFAULT NULL,
`TelefonoContacto` varchar(30) COLLATE utf8_unicode_ci DEFAULT NULL,
`Titular` varchar(300) COLLATE utf8_unicode_ci DEFAULT NULL,
`Observaciones` text COLLATE utf8_unicode_ci,
`IdentificadorReserva` varchar(500) COLLATE utf8_unicode_ci DEFAULT NULL,
`Facturada` tinyint(1) DEFAULT NULL,
`FacturarAClienteOAgencia` varchar(1) COLLATE utf8_unicode_ci DEFAULT NULL,
`Pagada` tinyint(1) DEFAULT NULL,
`CheckOut` tinyint(1) DEFAULT NULL,
`PagaClienteOAgencia` char(1) COLLATE utf8_unicode_ci DEFAULT NULL,
`NumeroFactura` int(11) DEFAULT NULL,
`FechaFactura` datetime DEFAULT NULL,
`CheckIn` tinyint(1) DEFAULT NULL,
`EsPreCheckIn` tinyint(1) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
here the indexes:
ALTER TABLE `Bookings`
ADD PRIMARY KEY (`id`),
ADD UNIQUE KEY `ix_Bookings_SafeURL` (`SafeURL`),
ADD KEY `ix_Bookings_CodigoAlojamiento` (`CodigoAlojamiento`),
ADD KEY `ix_Bookings_Tarifa` (`Tarifa`),
ADD KEY `ix_BookingsE_CheckIn` (`CheckIn`),
ADD KEY `ix_Bookings_CheckOut` (`CheckOut`),
ADD KEY `ix_Bookings_Completada` (`Completada`),
ADD KEY `ix_Bookings_Confirmada` (`Confirmada`),
ADD KEY `ix_Bookings_Entrada` (`Entrada`),
ADD KEY `ix_Bookings_EsPreCheckIn` (`EsPreCheckIn`),
ADD KEY `ix_Bookings_Salida` (`Salida`);
And here the references:
ALTER TABLE `Bookings`
ADD CONSTRAINT `Bookings_ibfk_1` FOREIGN KEY (`CodigoAlojamiento`) REFERENCES `Alojamientos` (`id`),
ADD CONSTRAINT `Bookings_ibfk_2` FOREIGN KEY (`Tarifa`) REFERENCES `Tarifas` (`id`);
4- for queries that find rows in date ranges.
Usually there is something else in the WHERE, say
WHERE x = 123
AND Entrada BETWEEN ... AND ...
In that case, this is optimal: INDEX(x, Entrada)
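For example, if the equality test were on CodigoAlojamiento (just an assumption for the sake of illustration):
ALTER TABLE `Bookings` ADD INDEX `ix_Alojamiento_Entrada` (`CodigoAlojamiento`, `Entrada`);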
`CheckOut` tinyint(1) DEFAULT NULL
ADD KEY `ix_Bookings_CheckOut` (`CheckOut`),
It is rarely useful to index a "flag". However, a composite index (as above) may be useful.
Why are most columns NULLable? For "booleans", simply use 0 and 1 and DEFAULT to whichever one is appropriate. Use NULL for "don't know", "optional", "not yet supplied", etc.
6- I also indexed some Bool columns which I query together with another indexed column.
Then have a composite index. And be sure to say b=1 not b<>0, since <> does not optimize as well.
Is it fine to query by this unique indexed 50-character string? Isn't it too long to work as an index? Will it work as fast with 50 million records as it does now?
If the dataset becomes bigger than RAM, there is a performance problem with "random" indexes. Your example should be fine. (Personally, I think 50 chars is excessive.) And such a 'hash' should probably be CHARACTER SET ascii and perhaps with COLLATE ascii_bin instead of a case-folding version.
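Something along these lines, keeping your current length and just switching the character set (a sketch, not a requirement):
ALTER TABLE `Bookings`
MODIFY `SafeURL` VARCHAR(120) CHARACTER SET ascii COLLATE ascii_bin DEFAULT NULL;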
And "task.completed==True, task.userID==user.id" os probably best indexed with a "composite" INDEX(userID, completed) in either order.
Yes, indexes on datetime columns do some work when comparing with the <, <=, >, >= operators. Strings can also be compared, though I do not see any likely columns for string comparisons other than with =.
50M rows is large, but not "huge". Composite indexes are often important for large tables.
I have a MySQL database table with more than 34M rows (and growing).
CREATE TABLE `sensordata` (
`userID` varchar(45) DEFAULT NULL,
`instrumentID` varchar(10) DEFAULT NULL,
`utcDateTime` datetime DEFAULT NULL,
`dateTime` datetime DEFAULT NULL,
`data` varchar(200) DEFAULT NULL,
`dataState` varchar(45) NOT NULL DEFAULT 'Original',
`gps` varchar(45) DEFAULT NULL,
`location` varchar(45) DEFAULT NULL,
`speed` varchar(20) NOT NULL DEFAULT '0',
`unitID` varchar(5) NOT NULL DEFAULT '1',
`parameterID` varchar(5) NOT NULL DEFAULT '1',
`originalData` varchar(200) DEFAULT NULL,
`comments` varchar(45) DEFAULT NULL,
`channelHashcode` varchar(12) DEFAULT NULL,
`settingHashcode` varchar(12) DEFAULT NULL,
`status` varchar(7) DEFAULT 'Offline',
`id` int(11) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`id`),
UNIQUE KEY `id_UNIQUE` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=98772 DEFAULT CHARSET=utf8
I access this table from multiple threads (at least 400 threads) every minute to insert data into the table.
As the table was growing, it was getting slower to read and write the data. One SELECT query used to take about 25 seconds, then I added a unique index
UNIQUE INDEX idx_userInsDate ( userID,instrumentID,utcDateTime)
This reduced the read time from 25 seconds to some milliseconds but it has increased the insert time as it has to update the index for each record.
Also, if I run SELECT queries from multiple threads at the same time, the queries take too long to return the data.
This is an example query
Select dateTime from sensordata WHERE userID = 'someUserID' AND instrumentID = 'someInstrumentID' AND dateTime between 'startDate' AND 'endDate' order by dateTime asc;
Can someone help me improve the table schema or add an effective index to improve the performance, please?
Thank you in advance
A PRIMARY KEY is a UNIQUE key. Toss the redundant UNIQUE(id) !
Is id referenced by any other tables? If not, then get rid of it altogether. Instead have just
PRIMARY KEY ( userID, instrumentID, utcDateTime)
That is, if that triple is guaranteed to be unique. If daylight-saving time is a concern, use the datatype TIMESTAMP instead of DATETIME; doing that, you can convert to DATETIME if needed, thereby eliminating one of the columns.
That one index (the PK) takes virtually no space since it is "clustered" with the data in InnoDB.
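A sketch of that change, assuming the triple really is unique and nothing else references sensordata.id:
ALTER TABLE `sensordata` DROP INDEX `id_UNIQUE`;      -- redundant: the PRIMARY KEY is already unique
ALTER TABLE `sensordata` MODIFY `id` INT NOT NULL;    -- remove AUTO_INCREMENT so the PK can be replaced
ALTER TABLE `sensordata` DROP PRIMARY KEY, DROP COLUMN `id`,
ADD PRIMARY KEY (`userID`, `instrumentID`, `utcDateTime`);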
Your table is awfully fat with all those VARCHARs. For example, status can be reduced to a 1-byte ENUM. Others can be normalized. Things like speed can be either a 4-byte FLOAT or some smaller DECIMAL, depending on how much range and precision you need.
With 34M wide rows, you have probably recently exceeded the cacheability of the RAM you have. By making the row narrower, you will postpone that overflow.
Why attack the indexes? Every UNIQUE (including PRIMARY) index is checked before allowing the row to be inserted. By getting it down to 1 index, that minimizes the cost there. (InnoDB really needs a PRIMARY KEY.)
INT is 4 bytes. Do you have a billion instruments? Maybe instrumentID could be SMALLINT UNSIGNED, which is 2 bytes, with a max of about 64K. Think about all the other IDs.
You have 400 INSERTs/minute, correct? That is not bad. If you get to 400/second, we need to have a different talk.
("Fill factor" is not tunable in MySQL because it does not make much difference.)
How much RAM do you have? What is the setting for innodb_buffer_pool_size? Optimal is somewhere around 70% of available RAM.
Let's see your main queries; there may be other issues to address.
It's not the indexes at fault here; it's your data types. As the size of the data on disk grows, the speed of all operations decreases. Indexes can certainly help speed up selects, provided your data is properly structured, but it appears that it isn't:
CREATE TABLE `sensordata` (
`userID` int, /* shouldn't this have a foreign key constraint? */
`instrumentID` int,
`utcDateTime` datetime DEFAULT NULL,
`dateTime` datetime DEFAULT NULL,
/* what exactly are you putting here? Are you sure it's not causing any redundancy? */
`data` varchar(200) DEFAULT NULL,
/* your states will be a finite number of elements. They can be represented by constants in your code or a set of values in a related table */
`dataState` int,
/* what's this? Sounds like what you are saving in location */
`gps` varchar(45) DEFAULT NULL,
`location` point,
`speed` float,
`unitID` int DEFAULT '1',
/* as above */
`parameterID` int NOT NULL DEFAULT '1',
/* are you sure this is different from data? */
`originalData` varchar(200) DEFAULT NULL,
`comments` varchar(45) DEFAULT NULL,
`channelHashcode` varchar(12) DEFAULT NULL,
`settingHashcode` varchar(12) DEFAULT NULL,
/* as above and isn't this the same as */
`status` int,
`id` int(11) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`id`),
UNIQUE KEY `id_UNIQUE` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=98772 DEFAULT CHARSET=utf8
1st of all: Avoid varchars for indexes and especially for IDs. Each character position in the varchar generates its own index entry internally!
2nd: Your SELECT uses dateTime, but your index is on utcDateTime. The index will only be used for the userID and instrumentID parts and will ignore the utcDateTime part.
Advice: Change your data types for the IDs and change your index to match the query (dateTime, not utcDateTime).
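For instance, an index that matches the example SELECT's WHERE clause and ORDER BY (just a sketch; combine it with the data-type changes above):
ALTER TABLE `sensordata`
ADD INDEX `idx_user_ins_dateTime` (`userID`, `instrumentID`, `dateTime`);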
Using an index decreases your performance on inserts; unfortunately, there is no such thing as a fill factor for indexes in MySQL right now. So the best thing you can do is keep the indexes as small as possible.
Another approach on heavily loaded databases with random access would be: write to an unindexed table, read from an indexed one. At a given time, build the indexes and swap the tables (this may require a third table for the index creation while leaving the other ones untouched in between).
I'm creating a membership directory for an organization and I'm trying to figure out a nice way to keep everyone's details in an organized, updateable manner. I have 3 tables:
Person table handles the actual person
CREATE TABLE `person` (
`personid` BIGINT PRIMARY KEY AUTO_INCREMENT,
`personuuid` CHAR(32) NOT NULL,
`first_name` VARCHAR(50) DEFAULT '',
`middle_name` VARCHAR(50) DEFAULT '',
`last_name` VARCHAR(50) DEFAULT '',
`prefix` VARCHAR(32) DEFAULT '',
`suffix` VARCHAR(32) DEFAULT '',
`nickname` VARCHAR(50) DEFAULT '',
`username` VARCHAR(32) ,
`created_on` DATETIME NOT NULL DEFAULT '0000-00-00 00:00:00',
`created_by` CHAR(33) DEFAULT '000000000000000000000000000000000',
`last_updated` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`last_updated_by` CHAR(33) DEFAULT '000000000000000000000000000000000'
) ENGINE=InnoDB, COMMENT='people';
Information about a person, such as school, phone number, email, Twitter name, etc. All of these values would be stored in 'value' as JSON and my program will handle everything. On each update by the user, a new entry is created to show the transition of changes.
CREATE TABLE `person_info` (
`person_infoid` BIGINT PRIMARY KEY AUTO_INCREMENT,
`person_infouuid` CHAR(32) NOT NULL,
`person_info_type` INT(4) NOT NULL DEFAULT 9999,
`value` TEXT,
`created_on` DATETIME NOT NULL DEFAULT '0000-00-00 00:00:00',
`created_by` CHAR(33) DEFAULT '000000000000000000000000000000000',
`last_updated` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`last_updated_by` CHAR(33) DEFAULT '000000000000000000000000000000000'
) ENGINE=InnoDB, COMMENT="Personal Details";
A map between person and person_info tables
CREATE TABLE `person_info_map` (
`personuuid` CHAR(32),
`person_infouuid` CHAR(32) ,
`created_on` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`created_by` CHAR(33) DEFAULT '000000000000000000000000000000000',
`is_active` INTEGER(1)
) ENGINE=InnoDB, COMMENT="Map between person and person info";
So given that I am creating a new entry in person_info every time there is an update, I'm wondering if I should worry about I/O errors, tables getting too big, etc. And if so, what are possible solutions? I've never really worked with database schemas like this, so I figured I should ask for help rather than get screwed in the future.
I'm sure some might ask how often the updates will occur. Truthfully, I'm not expecting too many. We currently have 2k members in our directory and I don't expect us to ever have more than 10k active members at any time. I'm also thinking that we will have at most 50 different option types, but for safety and future purposes I allow up to 1000 different option types.
Considering this small piece, does anyone have any advice as to how I should proceed from here?
The person to person_info relationship seems like it should be modeled as a one-to-many relationship (i.e. one person record has many person_info records). If that is true, the person_info_map table can be dropped, as it serves no purpose.
Do not use the UUID fields to establish relationships between your tables. Use the primary keys and create foreign key constraints on them. This will enforce data integrity and improve the performance of joins when querying.
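A minimal sketch of what that could look like if person_info hangs directly off person (the foreign key column and constraint name are illustrative):
CREATE TABLE `person_info` (
`person_infoid` BIGINT PRIMARY KEY AUTO_INCREMENT,
`personid` BIGINT NOT NULL,
`person_info_type` INT NOT NULL DEFAULT 9999,
`value` TEXT,
CONSTRAINT `fk_person_info_person`
FOREIGN KEY (`personid`) REFERENCES `person` (`personid`)
) ENGINE=InnoDB;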
I would personally merge the 2 tables into one that represents the person's current state, and have another table where you write the changes (like an event store).
That will save you from joining the 2 tables every single time you need to fetch a person.
But that's just the application point of view. I can already hear the DBAs shouting in my ears :)
Since no one really answered the question, I'll post what I ended up doing. I don't know if it will work yet, as our systems aren't active, but I think it should.
I went ahead and kept the system we have above. I'm hoping that we won't ever hit so many rows that it becomes an issue, but if we do, then I plan on archiving a chunk of the 'old' data. I think that will help, and I think hardware will likely outpace our usage anyway.
Can anyone recommend a strategy for aggregating raw 'click' and 'impression' data stored in a MySQL table with over 100,000,000 rows?
Here is the table structure.
CREATE TABLE `clicks` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`companyid` int(11) DEFAULT '0',
`type` varchar(32) NOT NULL DEFAULT '',
`contextid` int(11) NOT NULL DEFAULT '0',
`period` varchar(16) NOT NULL DEFAULT '',
`timestamp` int(11) NOT NULL DEFAULT '0',
`location` varchar(32) NOT NULL DEFAULT '',
`ip` varchar(32) DEFAULT NULL,
`useragent` varchar(64) DEFAULT NULL,
`processed` tinyint(1) NOT NULL DEFAULT '0',
PRIMARY KEY (`id`),
KEY `type` (`type`),
KEY `companyid` (`companyid`),
KEY `period` (`period`),
KEY `contextid` (`contextid`)
) ENGINE=MyISAM AUTO_INCREMENT=21189 DEFAULT CHARSET=latin1;
What I want to do is make this data easier to work with. I want to extract weekly and monthly aggregates from it, grouped by type, companyid and contextid.
Ideally, I'd like to take this data off the production server, aggregate it and then merge it back.
I'm really in a bit of a pickle and wondered whether anyone had any good starting points or strategies for actually aggregating the data so that it can be queried quickly using MySQL. I do not require 'real time' reporting for this data.
I've tried batch PHP scripts in the past but this seemed quite slow.
You can implement a simple PHP script with the whole monthly/weekly data aggregation logic and make it execute via a cron job at a given time. Depending on the software context, it could possibly be scheduled to run at night. Additionally, you could pass a GET parameter in the request to identify the request source.
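A sketch of what that aggregation step could look like in SQL; the summary table and its name are assumptions, and you would adjust the grouping for weekly or monthly rollups:
CREATE TABLE IF NOT EXISTS `clicks_weekly` (
`yearweek` INT NOT NULL,
`type` VARCHAR(32) NOT NULL,
`companyid` INT NOT NULL,
`contextid` INT NOT NULL,
`events` INT NOT NULL,
PRIMARY KEY (`yearweek`, `type`, `companyid`, `contextid`)
);
INSERT INTO `clicks_weekly` (`yearweek`, `type`, `companyid`, `contextid`, `events`)
SELECT YEARWEEK(FROM_UNIXTIME(`timestamp`)), `type`, COALESCE(`companyid`, 0), `contextid`, COUNT(*)
FROM `clicks`
WHERE `processed` = 0
GROUP BY 1, 2, 3, 4
ON DUPLICATE KEY UPDATE `events` = `events` + VALUES(`events`);
-- In a real job you would bound both statements by the same id range,
-- so rows inserted in between are not marked processed without being counted.
UPDATE `clicks` SET `processed` = 1 WHERE `processed` = 0;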
You might be interested in MySQL replication... set up a 2nd server whose sole job is just to run the reports on the replicated copy of the data set, so you can tune it specifically for that job. If you set up your replication scheme as master-master, then when the report server updates its own tables based on the report findings, those database changes will automatically replicate back over to the production server.
Also I would highly recommend you read High Performance MySQL, 3rd Ed., and take a look at http://www.mysqlperformanceblog.com/ for further info on working with massive datasets in MySQL
I have a table in my MySQL database which I constantly need to alter and insert rows into, but it keeps running slowly when I make changes, which makes it difficult because there are over 200k entries. I tested another table which has very few rows, and it moves quickly, so it's not the server or the database itself but that particular table which has a tough time. I need all of the table's rows and cannot find a solution to get around the load issues.
DROP TABLE IF EXISTS `articles`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `articles` (
`id` int(11) NOT NULL auto_increment,
`content` text NOT NULL,
`author` varchar(255) NOT NULL,
`alias` varchar(255) NOT NULL,
`topic` varchar(255) NOT NULL,
`subtopics` varchar(255) NOT NULL,
`keywords` text NOT NULL,
`submitdate` timestamp NOT NULL default CURRENT_TIMESTAMP,
`date` varchar(255) NOT NULL,
`day` varchar(255) NOT NULL,
`month` varchar(255) NOT NULL,
`year` varchar(255) NOT NULL,
`time` varchar(255) NOT NULL,
`ampm` varchar(255) NOT NULL,
`ip` varchar(255) NOT NULL,
`score_up` int(11) NOT NULL default '0',
`score_down` int(11) NOT NULL default '0',
`total_score` int(11) NOT NULL default '0',
`approved` varchar(255) NOT NULL,
`visible` varchar(255) NOT NULL,
`searchable` varchar(255) NOT NULL,
`addedby` varchar(255) NOT NULL,
`keyword_added` varchar(255) NOT NULL,
`topic_added` varchar(255) NOT NULL,
PRIMARY KEY (`id`),
KEY `score_up` (`score_up`),
KEY `score_down` (`score_down`),
FULLTEXT KEY `SEARCH` (`content`),
FULLTEXT KEY `asearch` (`author`),
FULLTEXT KEY `topic` (`topic`),
FULLTEXT KEY `keywords` (`content`,`keywords`,`topic`,`author`),
FULLTEXT KEY `content` (`content`,`keywords`),
FULLTEXT KEY `new` (`keywords`),
FULLTEXT KEY `author` (`author`)
) ENGINE=MyISAM AUTO_INCREMENT=290823 DEFAULT CHARSET=latin1;
/*!40101 SET character_set_client = @saved_cs_client */;
With indexes it depends:
more indexes = faster selecting, slower inserting
less indexes = slower selecting, faster inserting
This is because the indexes have to be updated when inserting, and the more data there is in the table, the more work MySQL has to do to maintain them.
So maybe you could remove indexes you don't need; that should speed up your inserting.
Another option is to partition your table into several tables - this removes the bottleneck.
Just try to pass the changes in an update script. This is slow because it creates tables; try updating only the tables where changes have been made.
For example, create a variable that collects all the changes in the program, and use it in the table's insert query. That should be fast enough for most programs. But as we all know, speed depends on how much data is processed.
Let me know if you need anything else.
This may or may not help you directly, but I notice that you have a lot of VARCHAR(255) columns in your table. Some of them seem totally unnecessary (do you really need all those date / day / month / year / time / ampm columns?), and many could be replaced by more compact datatypes (see the sketch after this list):
Dates could be stored as a DATETIME (or TIMESTAMP).
IP addresses could be stored as INTEGERs, or as BINARY(16) for IPv6.
Instead of storing usernames in the article table, you should create a separate user table and reference it using INTEGER keys.
I don't know what the approved, visible and searchable fields are, but I bet they don't need to be VARCHAR(255)s.
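For instance, if those three flags are really just yes/no values and ip holds IPv4 addresses, a slimming pass might look like this (a sketch under those assumptions, not a drop-in migration):
ALTER TABLE `articles`
MODIFY `approved` TINYINT(1) NOT NULL DEFAULT 0,
MODIFY `visible` TINYINT(1) NOT NULL DEFAULT 0,
MODIFY `searchable` TINYINT(1) NOT NULL DEFAULT 0,
MODIFY `ip` INT UNSIGNED NOT NULL DEFAULT 0;  -- existing values would need converting via INET_ATON() first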
I'd also second Adrian Cornish's suggestion to split your table. In particular, you really want to keep frequently changing and frequently accessed metadata, such as up/down vote scores, separate from rarely changing and infrequently accessed bulk data like article content. See for example http://20bits.com/articles/10-tips-for-optimizing-mysql-queries-that-dont-suck/
"I have a table inside of my mysql database which I constantly need to alter and insert rows into but it continues"
Try InnoDB on this table if your application performs a lot of concurrent updates and inserts there; row-level locking pays off in that case.
I recommend you split that "big table" (not that big actually, but for MySQL it may be) into several tables to make the most of the query cache. Any time you update a record in a table, the query cache entries for that table are erased. Also, you can try to reduce the isolation level, but that is a little more complicated.