I am working with MySQL.
I have checked the CREATE TABLE statement, and I saw a KEY word there:
| pickupspc | CREATE TABLE `pickupspc` (
`McId` int(11) NOT NULL,
`Slot` int(11) NOT NULL,
`FromTime` datetime NOT NULL,
`ToTime` datetime NOT NULL,
`Head` int(11) NOT NULL,
`Nozzle` int(11) DEFAULT NULL,
`FeederID` int(11) DEFAULT NULL,
`CompName` varchar(64) DEFAULT NULL,
`CompID` varchar(32) DEFAULT NULL,
`PickUps` int(11) DEFAULT NULL,
`Errors` int(11) DEFAULT NULL,
`ErrorCode` varchar(32) DEFAULT NULL,
KEY `ndx_PickupSPC` (`McId`,`Slot`,`FromTime`,`ToTime`,`Head`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 |
But what is the meaning of it?
It's not like a PRIMARY KEY, right?
Thanks.
It is simply a synonym for INDEX. It creates an index, named ndx_PickupSPC, on the columns specified in parentheses.
See the CREATE TABLE syntax for more information.
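For illustration, the KEY line in the table above could equivalently be created after the fact; either of the two forms below produces the same index (a sketch, neither changes the data):
CREATE INDEX ndx_PickupSPC ON pickupspc (McId, Slot, FromTime, ToTime, Head);
-- or, equivalently:
ALTER TABLE pickupspc ADD INDEX ndx_PickupSPC (McId, Slot, FromTime, ToTime, Head);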
It's just a non-unique index. From the manual:
KEY is normally a synonym for INDEX. The key attribute PRIMARY KEY can
also be specified as just KEY when given in a column definition. This
was implemented for compatibility with other database systems.
Key and index are the same. The word KEY in the table creation is used to create an index, which enables faster lookups.
In the above code, KEY ndx_PickupSPC means that it is creating an index named ndx_PickupSPC on the columns mentioned in parentheses.
It's an INDEX on the table. Indexes enable fast lookups for queries that filter on the values of the columns the index is built on. The example uses a compound (multi-column) index.
They are a bit like the indexes you find at the end of books: you can quickly find an entry with the index without searching through the whole book. Databases typically use B-trees for indexes.
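As a quick sketch (the values are made up), EXPLAIN shows whether a lookup on the index's leading columns actually uses it:
EXPLAIN SELECT * FROM pickupspc WHERE McId = 1 AND Slot = 2;
-- The "key" column of the output should list ndx_PickupSPC.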
Related
I have an Aurora MySQL cluster and when running queries against the reader I see a degradation in performance over time. A reboot of the reader results in query performance that matches the writer. But after going a week without a reboot queries take 25x as long to run.
The replication lag for the reader instance is 20ms and none of the monitoring metrics are showing issues. The highest I have seen the CPU is 40%. I tried a suggestion to set block_nested_loop to off but that had no effect.
The reader does not get much activity, so load should not be an issue. We do need to run a complex query against it that returns a lot of data, which is used for analytics. I have found that queries that return a small number of records retrieved by an index do NOT have the performance problem, but a similar query that returns the same small number of records and requires a table scan does.
The rate of degradation seems consistent, so it seems like a resource issue related to replication, but I have not had any luck finding anything online documenting the issue.
Any help would be much appreciated.
Update: Additional details
Query execution plans
-- Fast query
explain select cpv.SHORT_TEXT_VALUE, c.UIDPK, c.GUID, c.SHARED_ID, cpv.*
from TCUSTOMERPROFILEVALUE cpv
inner join TCUSTOMER c on cpv.CUSTOMER_UID = c.UIDPK
where LOCALIZED_ATTRIBUTE_KEY = 'CP_EMAIL' and cpv.SHORT_TEXT_VALUE = 'some-email@gmail.com';
-- Slow query, using function to prevent use of index for email match
explain select cpv.SHORT_TEXT_VALUE, c.UIDPK, c.GUID, c.SHARED_ID, cpv.*
from TCUSTOMERPROFILEVALUE cpv
inner join TCUSTOMER c on cpv.CUSTOMER_UID = c.UIDPK
where LOCALIZED_ATTRIBUTE_KEY = 'CP_EMAIL' and LOWER(cpv.SHORT_TEXT_VALUE) = 'some-email@gmail.com';
Table definitions
CREATE TABLE `TCUSTOMERPROFILEVALUE` (
`UIDPK` bigint(20) NOT NULL,
`ATTRIBUTE_UID` bigint(20) NOT NULL,
`ATTRIBUTE_TYPE` int(11) NOT NULL,
`LOCALIZED_ATTRIBUTE_KEY` varchar(255) NOT NULL,
`SHORT_TEXT_VALUE` varchar(255) DEFAULT NULL,
`LONG_TEXT_VALUE` mediumtext,
`INTEGER_VALUE` int(11) DEFAULT NULL,
`DECIMAL_VALUE` decimal(19,2) DEFAULT NULL,
`BOOLEAN_VALUE` int(11) DEFAULT '0',
`DATE_VALUE` datetime DEFAULT NULL,
`CUSTOMER_UID` bigint(20) DEFAULT NULL,
`LAST_MODIFIED_DATE` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`CREATION_DATE` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`UIDPK`),
KEY `I_CPV_ATTR_UID` (`ATTRIBUTE_UID`),
KEY `I_CPV_CUID_ATTKEY` (`CUSTOMER_UID`,`LOCALIZED_ATTRIBUTE_KEY`),
KEY `I_CPV_STV_ATTVALUE` (`SHORT_TEXT_VALUE`),
KEY `I_CPV_ATTKEY_SHORTTEXT` (`LOCALIZED_ATTRIBUTE_KEY`,`SHORT_TEXT_VALUE`),
CONSTRAINT `FK_PROFILE_CUSTOMER` FOREIGN KEY (`CUSTOMER_UID`) REFERENCES `TCUSTOMER` (`UIDPK`) ON DELETE CASCADE,
CONSTRAINT `TCUSTOMERPROFILEVALUE_FK_1` FOREIGN KEY (`ATTRIBUTE_UID`) REFERENCES `TATTRIBUTE` (`UIDPK`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='values associated with customer profiles.'
CREATE TABLE `TCUSTOMER` (
`UIDPK` bigint(20) NOT NULL,
`PREF_BILL_ADDRESS_UID` bigint(20) DEFAULT NULL,
`PREF_SHIP_ADDRESS_UID` bigint(20) DEFAULT NULL,
`CREATION_DATE` datetime NOT NULL,
`LAST_EDIT_DATE` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`GUID` varchar(64) NOT NULL,
`STATUS` int(11) NOT NULL,
`AUTHENTICATION_UID` bigint(20) DEFAULT NULL,
`STORECODE` varchar(64) DEFAULT NULL,
`IS_FIRST_TIME_BUYER` tinyint(4) DEFAULT '1',
`CUSTOMER_TYPE` varchar(64) NOT NULL,
`SHARED_ID` varchar(255) NOT NULL,
`PARENT_CUSTOMER_GUID` varchar(64) DEFAULT NULL,
`DTYPE` varchar(40) DEFAULT 'ExtCustomerImpl',
`LAST_SESSION_DATE` timestamp NULL DEFAULT NULL,
PRIMARY KEY (`UIDPK`),
UNIQUE KEY `TCUSTOMER_UNIQUE` (`GUID`),
UNIQUE KEY `TCUSTOMER_SHARED_ID_TYPE_UNIQ` (`SHARED_ID`,`CUSTOMER_TYPE`),
UNIQUE KEY `I_CUST_AUTH_UID` (`AUTHENTICATION_UID`),
UNIQUE KEY `SHARED_ID` (`SHARED_ID`,`STORECODE`),
KEY `I_CUST_CR_DATE` (`CREATION_DATE`),
KEY `I_CUST_STORE_CODE` (`STORECODE`),
KEY `I_TYPE_LAST_EDIT` (`CUSTOMER_TYPE`,`LAST_EDIT_DATE`),
KEY `I_CUSTOMER_SHAREDID` (`SHARED_ID`),
KEY `I_CUSTOMER_PARENT` (`PARENT_CUSTOMER_GUID`),
CONSTRAINT `CUSTOMER_STORECODE_FK` FOREIGN KEY (`STORECODE`) REFERENCES `TSTORE` (`STORECODE`),
CONSTRAINT `TCUSTOMER_PARENT_GUID_FK` FOREIGN KEY (`PARENT_CUSTOMER_GUID`) REFERENCES `TCUSTOMER` (`GUID`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='customer account information.'
Well, I can't explain the shift in performance unless the Optimizer is randomly shifting between different query plans.
I do see that you are using the notorious Entity-Attribute-Value schema design. And doing it in a rather bulky and complex way -- with multiple columns for different datatypes.
I do see a few things that can probably help performance in general as the dataset grows. (I assume it will grow.)
The primary key, UIDPK of the attribute table TCUSTOMERPROFILEVALUE probably has no use. This will probably be better: PRIMARY KEY(CUSTOMER_UID, ATTRIBUTE_UID). Or maybe that should be LOCALIZED_ATTRIBUTE_KEY??? Why are there two columns for the attribute?
When changing the PK, this KEY I_CPV_ATTKEY_SHORTTEXT (LOCALIZED_ATTRIBUTE_KEY,SHORT_TEXT_VALUE) would implicitly have CUSTOMER_UID added on the end, thereby benefiting your JOIN.
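A sketch of that change, assuming (CUSTOMER_UID, ATTRIBUTE_UID) really is unique and CUSTOMER_UID can be made NOT NULL (it is currently nullable):
ALTER TABLE TCUSTOMERPROFILEVALUE
    MODIFY CUSTOMER_UID bigint(20) NOT NULL,
    DROP PRIMARY KEY,
    ADD PRIMARY KEY (CUSTOMER_UID, ATTRIBUTE_UID);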
BIGINT is usually overkill; consider using a smaller datatype.
Do you have another attribute table - TATTRIBUTE?
Having 5 UNIQUE keys for a table slows down inserts. Perhaps you can have fewer?
INDEX(SHARED_ID) is redundant since there are other keys starting with that column.
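A one-liner for that cleanup, assuming nothing depends on the index by name:
ALTER TABLE TCUSTOMER DROP INDEX I_CUSTOMER_SHAREDID;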
Have you tried removing the LOWER(xxx) from the slow query?
If that corrects the problem and your results are the same, you were just wasting time with the LOWER(xxx) manipulation.
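Worth noting: the table's default utf8 collation is case-insensitive, so dropping LOWER() should match the same rows while letting the I_CPV_ATTKEY_SHORTTEXT index be used (a sketch; verify the column's actual collation first):
SELECT cpv.SHORT_TEXT_VALUE, c.UIDPK, c.GUID, c.SHARED_ID, cpv.*
FROM TCUSTOMERPROFILEVALUE cpv
INNER JOIN TCUSTOMER c ON cpv.CUSTOMER_UID = c.UIDPK
WHERE cpv.LOCALIZED_ATTRIBUTE_KEY = 'CP_EMAIL'
  AND cpv.SHORT_TEXT_VALUE = 'some-email@gmail.com';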
I'm about to deploy a web app which can end up with quite a big database, and I have some doubts which I would like to clear up before going live.
I'll explain my setup and most common queries a bit:
1- I use SQLAlchemy
2- I have many different tables referenced among themselves by their id (integer unique field)
3- Some tables use a column with a random 50-char unique string which I use client-side to avoid exposing ids to the clients. This column is indexed.
4- I also indexed some datetime columns which I use for queries that find rows in date ranges.
5- All relations are indexed because sometimes I query by that parameter.
6- I also indexed some Bool columns which I query together with another indexed column.
So taking this in mind, I ask:
In point 3: Is it fine to query by this unique indexed 50-char string? Isn't it too long to work as an index? Will it work as fast with 50 million records as it does now?
Example query:
customer=users.query.filter_by(secretString="ZT14V-_hD9qZrtwLj-rLPTioWQ1MJ4rhfqCUx8SvF0BrrCLtgvV_ZVXXV8_xXW").first()
Then I use this user query to find his associated object:
associatedObject=objects.query.filter_by(id=customer.associatedObject).first()
So once I have these results I just get whatever I need from them:
return({"username": customer.Name, "AssociatedStuff": associatedObject.Name})
About point 4:
Will these indexes on datetime columns do some work when comparing with the < and > operators?
About point 6:
Is it OK to query something like:
userFinishedTasks=tasks.query.filter(task.completed==True, task.userID==user.id).all()
where completed and userID are indexed columns and userID is a reference to the users table's id column?
"Note this query doesn't make sense because I can get the user's completed tasks from user.tasks.all(), given they are referenced, and filter the completed ones from there; it's just an example query..."
Basically I'm asking for confirmation that this is a correct way to query rows in a huge database, given most of my queries will be for unique objects, or whether I'm doing something wrong.
Hope someone can let me know if this is a good practice or if I will have performance issues in the future.
Thanks in advance!
@Rick James:
Here I'm posting the CREATE TABLE SQL code from the database export file.
Hope this is enough to get an idea; it's an example of one of the tables, and basically the same ideas apply to my questions.
CREATE TABLE `Bookings` (
`id` int(11) NOT NULL,
`CodigoAlojamiento` int(11) DEFAULT NULL,
`Entrada` datetime DEFAULT NULL,
`Salida` datetime DEFAULT NULL,
`Habitaciones` longtext COLLATE utf8_unicode_ci,
`Precio` float DEFAULT NULL,
`Agencia` varchar(500) COLLATE utf8_unicode_ci DEFAULT NULL,
`Extras` text COLLATE utf8_unicode_ci,
`Confirmada` tinyint(1) DEFAULT NULL,
`NumeroOcupantes` int(11) DEFAULT NULL,
`Completada` tinyint(1) DEFAULT NULL,
`Tarifa` int(11) DEFAULT NULL,
`SafeURL` varchar(120) COLLATE utf8_unicode_ci DEFAULT NULL,
`EmailContacto` varchar(50) COLLATE utf8_unicode_ci DEFAULT NULL,
`TelefonoContacto` varchar(30) COLLATE utf8_unicode_ci DEFAULT NULL,
`Titular` varchar(300) COLLATE utf8_unicode_ci DEFAULT NULL,
`Observaciones` text COLLATE utf8_unicode_ci,
`IdentificadorReserva` varchar(500) COLLATE utf8_unicode_ci DEFAULT NULL,
`Facturada` tinyint(1) DEFAULT NULL,
`FacturarAClienteOAgencia` varchar(1) COLLATE utf8_unicode_ci DEFAULT NULL,
`Pagada` tinyint(1) DEFAULT NULL,
`CheckOut` tinyint(1) DEFAULT NULL,
`PagaClienteOAgencia` char(1) COLLATE utf8_unicode_ci DEFAULT NULL,
`NumeroFactura` int(11) DEFAULT NULL,
`FechaFactura` datetime DEFAULT NULL,
`CheckIn` tinyint(1) DEFAULT NULL,
`EsPreCheckIn` tinyint(1) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
Here are the indexes:
ALTER TABLE `Bookings`
ADD PRIMARY KEY (`id`),
ADD UNIQUE KEY `ix_Bookings_SafeURL` (`SafeURL`),
ADD KEY `ix_Bookings_CodigoAlojamiento` (`CodigoAlojamiento`),
ADD KEY `ix_Bookings_Tarifa` (`Tarifa`),
ADD KEY `ix_BookingsE_CheckIn` (`CheckIn`),
ADD KEY `ix_Bookings_CheckOut` (`CheckOut`),
ADD KEY `ix_Bookings_Completada` (`Completada`),
ADD KEY `ix_Bookings_Confirmada` (`Confirmada`),
ADD KEY `ix_Bookings_Entrada` (`Entrada`),
ADD KEY `ix_Bookings_EsPreCheckIn` (`EsPreCheckIn`),
ADD KEY `ix_Bookings_Salida` (`Salida`);
And here are the references:
ALTER TABLE `Bookings`
ADD CONSTRAINT `Bookings_ibfk_1` FOREIGN KEY (`CodigoAlojamiento`) REFERENCES `Alojamientos` (`id`),
ADD CONSTRAINT `Bookings_ibfk_2` FOREIGN KEY (`Tarifa`) REFERENCES `Tarifas` (`id`);
4- for queries that find rows in date ranges.
Usually there is something else in the WHERE, say
WHERE x = 123
AND Entrada BETWEEN ... AND ...
In that case this is optimal: INDEX(x, Entrada)
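For example, if bookings are usually filtered by accommodation first (an assumption about the workload), the concrete form would be something like:
ALTER TABLE Bookings ADD INDEX ix_aloj_entrada (CodigoAlojamiento, Entrada);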
`CheckOut` tinyint(1) DEFAULT NULL
ADD KEY `ix_Bookings_CheckOut` (`CheckOut`),
It is rarely useful to index a "flag". However, a composite index (as above) may be useful.
Why are most columns NULLable? For "booleans", simply use 0 and 1 and DEFAULT to whichever one is appropriate. Use NULL for "don't know", "optional", "not yet supplied", etc.
6- I also indexed some Bool columns which I query together with another indexed column.
Then have a composite index. And be sure to say b=1 not b<>0, since <> does not optimize as well.
Is it fine to query by this unique indexed 50-char string? Isn't it too long to work as an index? Will it work as fast with 50 million records as it does now?
If the dataset becomes bigger than RAM, there is a performance problem with "random" indexes. Your example should be fine. (Personally, I think 50 chars is excessive.) And such a 'hash' should probably be CHARACTER SET ascii and perhaps with COLLATE ascii_bin instead of a case-folding version.
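A sketch of that change for the SafeURL column, assuming it only ever holds ASCII tokens:
ALTER TABLE Bookings
    MODIFY SafeURL varchar(120) CHARACTER SET ascii COLLATE ascii_bin DEFAULT NULL;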
And "task.completed==True, task.userID==user.id" os probably best indexed with a "composite" INDEX(userID, completed) in either order.
Yes, indexes on datetime columns do work when comparing with the <, <=, >, >= operators. Strings can also be compared, though I do not see any likely columns for string comparisons other than with =.
50M rows is large, but not "huge". Composite indexes are often important for large tables.
I am using a MySQL database in my ASP.NET with C# web application. The MySQL Server version is 5.7 and there is 8 GB of RAM in the PC. When I execute a select query on a table in the MySQL database, it takes a long time: a simple select query takes around 42 seconds across 1 crore (10 million) records in the table. I have also added indexes to the table. How can I fix this?
The following is my table structure.
CREATE TABLE `smstable_read` (
`MessageID` int(11) NOT NULL AUTO_INCREMENT,
`ApplicationID` int(11) DEFAULT NULL,
`Api_userid` int(11) DEFAULT NULL,
`ReturnMessageID` varchar(255) DEFAULT NULL,
`Sequence_Id` int(11) DEFAULT NULL,
`messagetext` longtext,
`adtextid` int(11) DEFAULT NULL,
`mobileno` varchar(255) DEFAULT NULL,
`deliverystatus` int(11) DEFAULT NULL,
`SMSlength` int(11) DEFAULT NULL,
`DOC` varchar(255) DEFAULT NULL,
`DOM` varchar(255) DEFAULT NULL,
`BatchID` int(11) DEFAULT NULL,
`StudentID` int(11) DEFAULT NULL,
`SMSSentTime` varchar(255) DEFAULT NULL,
`SMSDeliveredTime` varchar(255) DEFAULT NULL,
`SMSDeliveredTimeTicks` decimal(28,0) DEFAULT '0',
`SMSSentTimeTicks` decimal(28,0) DEFAULT '0',
`Sent_SMS_Day` int(11) DEFAULT NULL,
`Sent_SMS_Month` int(11) DEFAULT NULL,
`Sent_SMS_Year` int(11) DEFAULT NULL,
`smssent` int(11) DEFAULT '1',
`Batch_Name` varchar(255) DEFAULT NULL,
`User_ID` varchar(255) DEFAULT NULL,
`Year_ID` int(11) DEFAULT NULL,
`Date_Time` varchar(255) DEFAULT NULL,
`IsGroup` double DEFAULT NULL,
`Date_Time_Ticks` decimal(28,0) DEFAULT NULL,
`IsNotificationSent` int(11) DEFAULT NULL,
`Module_Id` double DEFAULT NULL,
`Doc_Batch` decimal(28,0) DEFAULT NULL,
`SMS_Category_ID` int(11) DEFAULT NULL,
`SID` int(11) DEFAULT NULL,
PRIMARY KEY (`MessageID`),
KEY `index2` (`ReturnMessageID`),
KEY `index3` (`mobileno`),
KEY `BatchID` (`BatchID`),
KEY `smssent` (`smssent`),
KEY `deliverystatus` (`deliverystatus`),
KEY `day` (`Sent_SMS_Day`),
KEY `month` (`Sent_SMS_Month`),
KEY `year` (`Sent_SMS_Year`),
KEY `index4` (`ApplicationID`,`SMSSentTimeTicks`),
KEY `smslength` (`SMSlength`),
KEY `studid` (`StudentID`),
KEY `batchid_studid` (`BatchID`,`StudentID`),
KEY `User_ID` (`User_ID`),
KEY `Year_Id` (`Year_ID`),
KEY `IsNotificationSent` (`IsNotificationSent`),
KEY `isgroup` (`IsGroup`),
KEY `SID` (`SID`),
KEY `SMS_Category_ID` (`SMS_Category_ID`),
KEY `SMSSentTimeTicks` (`SMSSentTimeTicks`)
) ENGINE=MyISAM AUTO_INCREMENT=16513292 DEFAULT CHARSET=utf8;
The following is my select query:
SELECT messagetext, SMSSentTime, StudentID, batchid,
User_ID,MessageID,Sent_SMS_Day, Sent_SMS_Month,
Sent_SMS_Year,Module_Id,Year_ID,Doc_Batch
FROM smstable_read
WHERE StudentID=977 AND SID = 8582 AND MessageID>16013282
You need to learn about compound indexes and covering indexes. Read about those things.
Your query is slow because it's doing a half-scan of the table. It uses the primary key to find the first row with a qualifying MessageID, then looks at every row of the table to find matching rows.
Your filter criteria are StudentID = constant, SID = constant AND MessageID > constant. That means you need those three columns, in that order, in an index. The first two filter criteria will random-access your index to the correct place. The third criterion will scan the index starting right after the constant value in your query. It's called an Index Range Scan operation, and it's quite efficient.
ALTER TABLE smstable_read
ADD INDEX StudentSidMessage (StudentId, SID, MessageId);
This compound index should make your query efficient. Notice that in MyISAM, the primary key column of a table should appear in compound indexes. That's cool in this case because it's also part of your query criteria.
If this query is used very frequently, you could make a covering index: you could add the other columns of the query (the ones mentioned in your SELECT clause) to the index.
But, unfortunately you have defined your messageText column with a longtext data type. That allows for each message to contain up to four gigabytes. (Why? Is this really SMS data? There's a limit of 160 bytes per message in SMS. Four gigabytes >> 160 bytes.)
Now the point of a covering index is to allow the query to be satisfied entirely from the index, without referring back to the table. But when you include a longtext or any other LOB column in an index, it only contains a subset of the data. So the point of the covering index is lost.
If I were you I would change my table so messageText was a VARCHAR(255) data type, and then create this covering index:
ALTER TABLE smstable_read
ADD INDEX StudentSidMessage (StudentId, SID, MessageId,
SMSSentTime, batchid,
User_ID, Sent_SMS_Day, Sent_SMS_Month,
Sent_SMS_Year,Module_Id,Year_ID,Doc_Batch,
messageText);
(Notice that you should put variable-length items last in the index if you can.)
If you can't change your application to handle VARCHAR(255) then go with the first index I mentioned.
Pro tip: putting lots of single-column indexes on MySQL tables rarely helps SELECT performance and always harms INSERT and UPDATE performance. You need an index on your primary key, and you need indexes to support the queries you run. Extra indexes are harmful.
It looks like your database is not properly indexed and not even properly normalized. Normalizing your database will go a long way toward speeding up all your queries, particularly since MySQL generally uses only one index per table in a query. Even though you have lots of indexes, most of them cannot be used.
Your current query filters on StudentID, SID, and MessageID. The last is an inequality comparison, so an index will not be very effective with it, but the other two columns are equality comparisons. I suggest an index like this:
KEY `studid` (`StudentID`,`SID`)
Follow that up by dropping your existing index on SID. If you find that you don't want to drop it because it's used in another query, that's further evidence that your table is in desperate need of normalization.
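A sketch of both steps in one statement (the existing one-column studid index is replaced by the two-column version):
ALTER TABLE smstable_read
    DROP INDEX studid,
    ADD INDEX studid (StudentID, SID),
    DROP INDEX SID;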
Too many indexes slow down inserts and add a little overhead to each SELECT, because the query planner needs more effort to figure out which index to use.
I've read heaps of posts here on stackoverflow, blog posts, tutorials and more, but I still fail to resolve a rather nasty performance issue with my MySQL db. Keep in mind that I'm a novice when it comes to large MySQL databases.
I have a table with approx. 11,000,000 rows (which will increase to, say, 20,000,000 or more). Here's the layout:
CREATE TABLE `myTable` (
`intcol1` int(11) DEFAULT NULL,
`charcol1` char(25) DEFAULT NULL,
`intcol2` int(11) DEFAULT NULL,
`charcol2` char(50) DEFAULT NULL,
`charcol3` char(50) DEFAULT NULL,
`charcol4` char(50) DEFAULT NULL,
`intcol3` int(11) DEFAULT NULL,
`charcol5` char(50) DEFAULT NULL,
`intcol4` int(20) DEFAULT NULL,
`intcol5` int(20) DEFAULT NULL,
`intcol6` int(20) DEFAULT NULL,
`intcol7` int(11) DEFAULT NULL,
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`id`),
FULLTEXT KEY `idx` (`charcol2`,`charcol3`)
) ENGINE=MyISAM AUTO_INCREMENT=11665231 DEFAULT CHARSET=latin1;
A select statement like
SELECT * from myTable where charcol2='bogus' AND charcol3='bogus2';
takes 25 seconds or so to execute. That's too slow, and will be even slower as the table grows.
The table will not have any inserts or updates at all (so to speak), and will be primarily used for outputting searches on the char-columns.
I've tried to make indexing work (playing around with FULLTEXT, as you can see), but it seems that I'm missing something. Any takes on how to speed up the performance?
Please note: I'm currently running MySQL on my MacBook Air (1.7 GHz i5, 4GB RAM). If this is the only answer to my performance issues, I'll move the database to something appropriate ;-)
EDIT: Explain table
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE myTable ALL NULL NULL NULL NULL 11596725 Using where
You don't need FULLTEXT indexes for such queries, where the equality operator is used. Just create an index on the char fields used in the WHERE condition, and remove the FULLTEXT index:
ALTER TABLE myTable DROP INDEX idx;
ALTER TABLE myTable ADD INDEX charcol_idx (charcol2, charcol3);
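To confirm the fix, EXPLAIN should now report the new index in its key column instead of a full table scan (a quick check, not a guarantee):
EXPLAIN SELECT * from myTable where charcol2='bogus' AND charcol3='bogus2';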
I have a problem when trying to create a new table in a database in phpMyAdmin. The code is:
DROP TABLE IF EXISTS phpcrawler_links;
CREATE TABLE phpcrawler_links (
id int(11) NOT NULL auto_increment,
site_id int(11) NOT NULL default 0,
depth int(11) NOT NULL default 0,
url text NOT NULL,
url_title text,
url_md5 varchar(255) NOT NULL,
content text NOT NULL,
content_md5 varchar(255) NOT NULL,
last_crawled datetime,
crawl_now int(11) NOT NULL default 1,
PRIMARY KEY (id),
KEY idx_site_id(site_id),
KEY idx_url (url(255)),
KEY idx_content_md5(content_md5),
FULLTEXT ft_content(content),
KEY idx_last_crawled(last_crawled)
);
DROP TABLE IF EXISTS words;
CREATE TABLE words (
id int(11) NOT NULL,
word varchar(255) NOT NULL
);
The problem seems to be on this line:
last_crawled datetime,
The error I get is:
#1214 - The used table type doesn't support FULLTEXT indexes
If anyone can help me out with this I will be grateful!
Google rules!
Only MyISAM tables support fulltext indices.
http://forums.mysql.com/read.php?107,194405,194677
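On pre-5.6 servers the fix is to declare the engine explicitly so the FULLTEXT index is accepted; a trimmed sketch of the table above:
CREATE TABLE phpcrawler_links (
    id int(11) NOT NULL auto_increment,
    content text NOT NULL,
    PRIMARY KEY (id),
    FULLTEXT ft_content (content)
) ENGINE=MyISAM;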
You cannot add that index if the table is InnoDB; you must change the engine to MyISAM or change the index type.
It seems you are using an old version of MySQL; as of MySQL 5.6, InnoDB already supports FULLTEXT indexes.