I get an error when I try to use:
SELECT views, keywords, title, url, thumbnail,
MATCH(keywords,title) AGAINST ('%$search_value%') AS relevance
FROM straight
WHERE MATCH (keywords,title) AGAINST ('%$search_value%')
ORDER BY relevance DESC
This is due to me not having a FULLTEXT index, but I can't seem to add one. When I run the SQL below:
ALTER TABLE straight ADD FULLTEXT(keywords, title)
I get this response:
MySQL returned an empty result set (i.e. zero rows). (Query took 3.8022 sec)
Then, when trying to run the first query again, it fails with:
#1191 - Can't find FULLTEXT index matching the column list
I can't tell why it's not registering. Any help would be great.
Thanks!
Edit:
My table:
CREATE TABLE `straight` (
`url` varchar(80) COLLATE utf8_unicode_ci DEFAULT NULL,
`title` varchar(80) COLLATE utf8_unicode_ci DEFAULT NULL,
`keywords` varchar(80) COLLATE utf8_unicode_ci DEFAULT NULL,
`production` varchar(80) COLLATE utf8_unicode_ci DEFAULT NULL,
`categories` varchar(80) COLLATE utf8_unicode_ci DEFAULT NULL,
`views` varchar(80) COLLATE utf8_unicode_ci DEFAULT NULL,
`likes` varchar(80) COLLATE utf8_unicode_ci DEFAULT NULL,
`length` varchar(80) COLLATE utf8_unicode_ci DEFAULT NULL,
`thumbnail` varchar(200) COLLATE utf8_unicode_ci DEFAULT NULL,
`date` varchar(12) COLLATE utf8_unicode_ci DEFAULT NULL,
UNIQUE KEY `url` (`url`),
FULLTEXT KEY `url_2` (`url`,`title`,`keywords`,`production`,
`categories`,`views`,`likes`,`length`,`thumbnail`,`date`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
You need a FULLTEXT index whose column list exactly matches the columns you pass to MATCH(). The FULLTEXT index you have covers more columns than the two you are searching.
Try adding the one you mentioned:
ALTER TABLE straight ADD FULLTEXT(keywords, title)
Then look at the table definition and make sure it's there.
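For instance, a quick way to confirm that the new index actually exists (table name taken from the question):
SHOW INDEX FROM straight;
-- or, to see the whole definition:
SHOW CREATE TABLE straight;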
I have a table in a MySQL DB with about 6 million rows of data (structure below). Most of my queries search for specific "customer" values and display, for each customer, the number in the "value" column. Each query scans the whole table to match the customers it specifies. The table started rather small, but it has now grown too big and my queries are taking quite some time to return results. My question is: if I create a separate table with just the customer field, along with an index, will that make my customer queries faster?
CREATE TABLE `data` (
`id` bigint(20) UNSIGNED NOT NULL,
`entry_id` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`date` date DEFAULT NULL,
`media_name` varchar(100) COLLATE utf8_unicode_ci DEFAULT NULL,
`media_type` varchar(100) COLLATE utf8_unicode_ci DEFAULT NULL,
`rate` decimal(8,2) DEFAULT NULL,
`value` decimal(8,2) DEFAULT NULL,
`page` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
`type` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
`sector` varchar(100) COLLATE utf8_unicode_ci DEFAULT NULL,
`category` varchar(100) COLLATE utf8_unicode_ci DEFAULT NULL,
`customer` varchar(100) COLLATE utf8_unicode_ci DEFAULT NULL,
`product` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
`description` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
`image_id` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
`city` varchar(50) COLLATE utf8_unicode_ci DEFAULT NULL,
`address` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
`supplier` varchar(100) COLLATE utf8_unicode_ci DEFAULT NULL,
`time` time DEFAULT NULL,
`duration` time DEFAULT NULL,
`promoted_on` datetime NOT NULL,
`hasimage` tinyint(4) NOT NULL DEFAULT '0'
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
You want an index.
If you are searching for customers using IN or = (the most common methods), then you can use a standard index on customer.
If you have more complex searches -- say, using LIKE with a leading wildcard -- then a standard index does not help. A full-text index might, or it might not, depending on the nature of the query and the data.
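For instance (with 'Acme' and 'Initech' as placeholder customer values), the first two queries below can use a plain index on customer, but the third cannot:
SELECT * FROM data WHERE customer = 'Acme';
SELECT * FROM data WHERE customer IN ('Acme', 'Initech');
SELECT * FROM data WHERE customer LIKE '%cme'; -- leading wildcard defeats the index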
The "separate table" you're thinking about should be an index on your main table.
CREATE INDEX index_name
ON data(customer,value);
This will speed up the queries and, because the index covers both columns, can even avoid touching the table itself, at the cost of slightly slower INSERT and UPDATE operations.
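To confirm the index is covering a given query, EXPLAIN should report "Using index" in the Extra column (the customer value is again a placeholder):
EXPLAIN SELECT customer, value FROM data WHERE customer = 'Acme';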
I have a table that contains two long text columns, and when I fetch 100 rows it takes 5 seconds. That is a long time, right?
Maybe this happens because of those two columns?
Here is the table structure:
CREATE TABLE `tempBusiness2` (
`bussId` int(11) NOT NULL AUTO_INCREMENT,
`nameHe` varchar(200) COLLATE utf8_bin NOT NULL,
`nameAr` varchar(200) COLLATE utf8_bin NOT NULL,
`nameEn` varchar(200) COLLATE utf8_bin NOT NULL,
`addressHe` varchar(200) COLLATE utf8_bin NOT NULL,
`addressAr` varchar(200) COLLATE utf8_bin NOT NULL,
`addressEn` varchar(200) COLLATE utf8_bin NOT NULL,
`x` varchar(200) COLLATE utf8_bin NOT NULL,
`y` varchar(200) COLLATE utf8_bin NOT NULL,
`categoryId` int(11) NOT NULL,
`subcategoryId` int(11) NOT NULL,
`cityId` int(11) NOT NULL,
`cityName` varchar(200) COLLATE utf8_bin NOT NULL,
`phone` varchar(200) COLLATE utf8_bin NOT NULL,
`userDetails` text COLLATE utf8_bin NOT NULL,
`selectedIDFace` tinyint(4) NOT NULL,
`alluserDetails` longtext COLLATE utf8_bin NOT NULL,
`details` varchar(500) COLLATE utf8_bin NOT NULL,
`picture` varchar(200) COLLATE utf8_bin NOT NULL,
`imageUrl` varchar(200) COLLATE utf8_bin NOT NULL,
`fax` varchar(200) COLLATE utf8_bin NOT NULL,
`email` varchar(200) COLLATE utf8_bin NOT NULL,
`facebook` varchar(200) COLLATE utf8_bin NOT NULL,
`trash` tinyint(4) NOT NULL,
`subCategories` varchar(500) COLLATE utf8_bin NOT NULL,
`openHours` varchar(500) COLLATE utf8_bin NOT NULL,
`lastCheckedDuplications` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`bussStatus` tinyint(4) NOT NULL,
`approveStatus` tinyint(4) NOT NULL,
`steps` tinyint(4) NOT NULL DEFAULT '0',
`allDetails` longtext COLLATE utf8_bin NOT NULL,
PRIMARY KEY (`bussId`),
KEY `bussId` (`allDetails`(5),`bussId`),
KEY `face` (`alluserDetails`(5),`userDetails`(5),`bussId`)
) ENGINE=InnoDB AUTO_INCREMENT=2515926 DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
My query: SELECT * FROM tempBusiness2 LIMIT 100
If SELECT * FROM tempBusiness2 LIMIT 100 is really your query, then no INDEX is involved, and no INDEX would make it run any faster.
What that statement does:
Start at the beginning of the "data". (In InnoDB the PRIMARY KEY and the data are 'clustered' together. So, you are coincidentally starting with the first value of the PK.)
Read that row.
Move to the next row -- This is easy and efficient because the PK & Data are stored in a B+Tree structure.
Repeat until finished with the 100 or the table.
But... Because of the many TEXT and VARCHAR fields, it is not that efficient. No more than 8K of a row is stored in the B+Tree mentioned above; the rest sits in extra blocks that the row links to. (I do not know how many extra blocks, but I fear it is more than one.) Each extra block is another disk hit.
Now, let's try to "count the disk hits". If you run this query a second time (and have a reasonably large innodb_buffer_pool_size), there would not be any disk hits. Instead, let's focus on a "cold cache" and count the data blocks that are touched.
If there is only one row per block (as derived from the 8KB comment), that's 100 blocks to read. Plus the extra blocks -- hundred(s) more.
Ordinary disks can handle 100 reads per second. So that is a total of several seconds -- possibly the 5 that you experienced!
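For illustration only (the overflow-block count is a guess in line with the "hundred(s) more" above): 100 row blocks + ~400 overflow blocks ≈ 500 cold reads, and 500 reads at 100 reads per second is about 5 seconds.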
Now... What can be done?
Don't do SELECT * unless you really want all the columns. By avoiding some of the bulky columns, you can avoid some of the disk hits.
innodb_buffer_pool_size should be about 70% of available RAM.
"Vertical partitioning" may help. This is where you split off some columns into a 'parallel' table. This is handy if some subset of the columns are really a chunk of related stuff, and especially handy if it is "optional" in some sense. The JOIN to "put the data back together" is likely to be no worse than what you are experiencing now.
Do you really need (200) in all those fields?
You have what looks like a 3-element array of names and addresses. That might be better as another table with up to 3 rows per bussId.
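Here is a minimal sketch of such a vertical split, moving the two LONGTEXT columns into a parallel table (tempBusiness2_details is a hypothetical name; the columns are from the schema above):
CREATE TABLE tempBusiness2_details (
  bussId int(11) NOT NULL,
  alluserDetails longtext COLLATE utf8_bin NOT NULL,
  allDetails longtext COLLATE utf8_bin NOT NULL,
  PRIMARY KEY (bussId)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;

-- Fetch the slim columns cheaply...
SELECT bussId, nameHe, phone FROM tempBusiness2 LIMIT 100;
-- ...and JOIN the bulky ones back in only when needed:
SELECT b.bussId, b.nameHe, d.allDetails
FROM tempBusiness2 AS b
JOIN tempBusiness2_details AS d USING (bussId)
WHERE b.bussId = 12345;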
On another note: If you run EXPLAIN SELECT ... on all your queries, you will probably find that the "prefix indexes" are never used:
KEY `bussId` (`allDetails`(5),`bussId`),
KEY `face` (`alluserDetails`(5),`userDetails`(5),`bussId`)
What were you hoping for in them? Consider FULLTEXT index(es), instead.
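If the intent was to search inside those text columns, something like the following would be the usual route; note that FULLTEXT on InnoDB requires MySQL 5.6 or later, and the column choice here is an assumption:
ALTER TABLE tempBusiness2 ADD FULLTEXT(userDetails, alluserDetails);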
Why do you have both cityId and cityName in this table? It sounds like normalization gone berserk.
Yes, columns like these take a lot of time to fetch; return them only when you need them.
In addition, you will have to add an index to your table.
I have the product_description table from OpenCart.
I created a new search engine. Everything is okay, except that it is case-sensitive.
How can I make it case-insensitive?
I read in the MySQL documentation that I must change utf8_bin to utf8_general_ci.
But how do I do that without deleting all the indexes?
It's not just one table; I'm looking at 4 tables, and every table has around 4-5 indexes.
The site serves information non-stop, so loss of data is simply not acceptable.
I was wondering if there is a way to extract the keys, delete them, change the collation, and then add them again in one operation. That way, I think, there would be no data loss.
CREATE TABLE IF NOT EXISTS `product_description` (
`product_id` int(11) NOT NULL,
`language_id` int(11) NOT NULL,
`name` varchar(255) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`short_description` text CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`description` text CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`meta_description` varchar(255) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`meta_keyword` varchar(255) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`tag` text CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`custom_title` varchar(255) CHARACTER SET utf8 COLLATE utf8_bin DEFAULT '',
PRIMARY KEY (`product_id`,`language_id`),
FULLTEXT KEY `description` (`description`),
FULLTEXT KEY `tag` (`tag`),
FULLTEXT KEY `ft_namerel` (`name`,`description`),
FULLTEXT KEY `name` (`name`,`short_description`,`description`,`meta_description`,`meta_keyword`,`tag`,`custom_title`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
Have you tried searching in boolean mode?
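For reference, a boolean-mode search against the ft_namerel index above would look something like this ('shoes' is just a placeholder term):
SELECT product_id FROM product_description
WHERE MATCH(name, description) AGAINST ('+shoes*' IN BOOLEAN MODE);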
I deleted all the index keys and changed the collation; after that I set new index keys.
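A minimal sketch of that procedure for the table above, using the index names from its CREATE TABLE:
ALTER TABLE product_description
  DROP INDEX `description`,
  DROP INDEX `tag`,
  DROP INDEX `ft_namerel`,
  DROP INDEX `name`;
ALTER TABLE product_description
  CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;
ALTER TABLE product_description
  ADD FULLTEXT KEY `description` (`description`),
  ADD FULLTEXT KEY `tag` (`tag`),
  ADD FULLTEXT KEY `ft_namerel` (`name`,`description`),
  ADD FULLTEXT KEY `name` (`name`,`short_description`,`description`,`meta_description`,`meta_keyword`,`tag`,`custom_title`);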
I have a member table which has 9 fields: id, email, and so on.
member_type is the 8th field.
The 8th field always comes back converted to a decimal, no matter what its name or type is.
Here is some experimenting I have done:
irb(main):010:0> Member.all()[0].attributes
=> {"created_date"=>nil, "email"=>"tanixxx#yahoo.com", "id"=>1, "is_admin"=>0, "member_type"=>#<BigDecimal:4f87ce0,'0.0',4(8)>, "name"=>"tanin", "password"=>"3cf622832f10a313cb74a59e6032f115", "profile_picture_path"=>"aaaaa", "status"=>"APPROVED"}
Please notice :member_type, which is the 8th field.
Now if I query only some fields, the result is correct:
irb(main):007:0> Member.all(:select=>"member_type,email")[0].attributes
=> {"email"=>"tanixxx#yahoo.com", "member_type"=>"GENERAL"}
I think there must be a bug in ActiveRecord.
Here is one more experiment. I added "test_8th_field" as the 8th field and got this:
irb(main):016:0> Member.all[0].attributes
=> {"created_date"=>nil, "email"=>"tanixxx#yahoo.com", "id"=>1, "is_admin"=>0, "member_type"=>"GENERAL", "name"=>"tanin", "password"=>"3cf622832f10a313cb74a59e6032f115", "profile_picture_path"=>"aaaaa", "status"=>"APPROVED", "test_8th_field"=>#<BigDecimal:30c87f0,'0.0',4(8)>}
The 8th field is a BigDecimal (it is a text field in MySQL, though). But the member_type field is amazingly correct this time.
I don't know what is wrong with the number 8...
Please help me.
Here is my schema dump, including test_8th_field:
CREATE TABLE IF NOT EXISTS `members` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`email` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`password` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
`name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`profile_picture_path` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
`status` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`is_admin` int(11) NOT NULL,
`test_8th_field` text COLLATE utf8_unicode_ci NOT NULL,
`member_type` varchar(255) COLLATE utf8_unicode_ci NOT NULL DEFAULT 'GENERAL',
`created_date` datetime NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=2 ;
I have solved it. It turns out that the MySQL client library version did not match the version of the MySQL server itself.
I am trying to convert some MySQL code over to Oracle PL/SQL. I am looking for an equivalent to the COLLATE clause.
Here is a code snippet:
CREATE TABLE IF NOT EXISTS `service_types` (
`service_type_id` int(11) NOT NULL AUTO_INCREMENT,
`service_type` varchar(50) COLLATE latin1_bin NOT NULL,
`service_type_code` varchar(5) COLLATE latin1_bin NOT NULL,
`last_update_date` datetime NOT NULL,
`last_update_user` varchar(16) COLLATE latin1_bin NOT NULL,
PRIMARY KEY (`service_type_id`),
UNIQUE KEY `service_type_ix1` (`service_type_code`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_bin AUTO_INCREMENT=11 ;
I believe you'd want the linguistic sort parameters NLS_SORT and NLS_COMP. Note that these are session-level settings in Oracle, not table-level settings.
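For example, to approximate latin1_bin's binary comparison behavior at the session level (a sketch; the appropriate values depend on the sorting you actually need):
ALTER SESSION SET NLS_SORT = BINARY;
ALTER SESSION SET NLS_COMP = BINARY;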