Running FULLTEXT searches returns no results - MySQL

OK, here is my query, which I'm running in phpMyAdmin.
There are records that should match, e.g. War Dog, War Hog, Warrior. If I search 'War' I get nothing. If I search for other things, e.g. 'Dwarf', I get matches with Dwarf in the title. It seems 'War' and 'War Dog' don't want to match; I'm not sure why.
DROP TABLE IF EXISTS `searchable_products`;
CREATE TEMPORARY TABLE `searchable_products` SELECT * FROM content_product;
ALTER TABLE `searchable_products` ENGINE = MYISAM;
ALTER TABLE `searchable_products` ADD FULLTEXT(`productTitle`);
SELECT *, MATCH (`productTitle`) AGAINST ('war') AS `relevance`
FROM `searchable_products`
WHERE MATCH (`productTitle`) AGAINST ('war')
ORDER BY `relevance` DESC
LIMIT 50;
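One thing worth checking here (an educated guess, not something stated in the thread): MyISAM full-text indexes skip words shorter than the ft_min_word_len server variable, which defaults to 4, so a three-letter term like 'war' is never indexed at all, while 'Dwarf' is. Natural-language mode also drops words that occur in more than 50% of the rows. A sketch of how to confirm and work around this:

```sql
-- Check the minimum indexed word length (the MyISAM default is 4).
SHOW VARIABLES LIKE 'ft_min_word_len';

-- If it is 4, lower it in my.cnf (e.g. ft_min_word_len=3), restart the
-- server, and rebuild the index so three-letter words get indexed:
REPAIR TABLE `searchable_products` QUICK;

-- BOOLEAN MODE with a wildcard matches indexed words such as 'warrior',
-- but titles whose only hit is the bare word 'war' still need the rebuild:
SELECT * FROM `searchable_products`
WHERE MATCH (`productTitle`) AGAINST ('war*' IN BOOLEAN MODE);
```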

Related

MySQL - inner join with OR condition taking a long time

I need help with a MySQL query.
I have indexed the mandatory columns but am still getting results in 160 seconds.
I know the problem is the CONCAT join condition; without it, results come back in 15 seconds.
Any kind of help is appreciated.
My query is:
SELECT `order`.invoicenumber, `order`.lastupdated_by AS processed_by, `order`.lastupdated_date AS LastUpdated_date,
`trans`.transaction_id AS trans_id,
GROUP_CONCAT(`trans`.subscription_id) AS subscription_id,
GROUP_CONCAT(`trans`.price) AS trans_price,
GROUP_CONCAT(`trans`.quantity) AS prod_quantity,
`user`.id AS id, `user`.businessname AS businessname,
`user`.given_name AS given_name, `user`.surname AS surname
FROM cdp_order_transaction_master AS `order`
INNER JOIN `cdp_order_transaction_detail` AS trans ON `order`.transaction_id=trans.transaction_id
INNER JOIN cdp_user AS user ON (`order`.user_id=user.id OR CONCAT( user.id , '_CDP' ) = `order`.lastupdated_by)
WHERE `order`.xero_invoice_status='Completed' AND `order`.order_date > '2021-01-01'
GROUP BY `order`.transaction_id
ORDER BY `order`.lastupdated_date
DESC LIMIT 100
1. Index the columns used in the JOIN and WHERE clauses so that MySQL does not scan the entire table and reads only the desired rows. A full table scan performs extremely badly.
Create indexes for the cdp_order_transaction_master table:
CREATE INDEX idx_cdp_order_transaction_master_transaction_id ON cdp_order_transaction_master(transaction_id);
CREATE INDEX idx_cdp_order_transaction_master_user_id ON cdp_order_transaction_master(user_id);
CREATE INDEX idx_cdp_order_transaction_master_lastupdated_by ON cdp_order_transaction_master(lastupdated_by);
CREATE INDEX idx_cdp_order_transaction_master_xero_invoice_status ON cdp_order_transaction_master(xero_invoice_status);
CREATE INDEX idx_cdp_order_transaction_master_order_date ON cdp_order_transaction_master(order_date);
Create an index for the cdp_order_transaction_detail table:
CREATE INDEX idx_cdp_order_transaction_detail_transaction_id ON cdp_order_transaction_detail(transaction_id);
Create an index for the cdp_user table:
CREATE INDEX idx_cdp_user_id ON cdp_user(id);
2. Use the owner/schema name.
If the owner name is not specified, the SQL Server engine searches all schemas to find the object.
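Beyond the indexes, the OR inside the join to cdp_user is usually the real cost: MySQL cannot use a single index for both branches of the ON condition, so it falls back to scanning. A common rewrite (a sketch only, untested against this schema, with the column list shortened, and assuming lastupdated_by always has the form '<id>_CDP') splits the two branches into a UNION so each half can use its own index:

```sql
SELECT `order`.invoicenumber, user.id, user.businessname
FROM cdp_order_transaction_master AS `order`
JOIN cdp_user AS user ON `order`.user_id = user.id
WHERE `order`.xero_invoice_status = 'Completed'
  AND `order`.order_date > '2021-01-01'
UNION
SELECT `order`.invoicenumber, user.id, user.businessname
FROM cdp_order_transaction_master AS `order`
JOIN cdp_user AS user
  ON user.id = SUBSTRING_INDEX(`order`.lastupdated_by, '_', 1)
WHERE `order`.xero_invoice_status = 'Completed'
  AND `order`.order_date > '2021-01-01';
-- The second branch keeps user.id bare (the function is applied to the
-- `order` column instead of CONCAT on user.id), so an index on cdp_user.id
-- can drive the join.
```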

What SQL query should I use to delete only the duplicate rows in the wp_comments table?

I need to finish the SELECT query below. The query shows me the count of comments with the same comment_ID. I ultimately just want to delete the duplicates and leave the non-duplicates alone. This is a WordPress database.
screenshot of my current query results
SELECT `comment_ID`, `comment_ID`, count(*) FROM `wp_comments` GROUP BY `comment_ID` HAVING COUNT(*) > 1 ORDER BY `count(*)` ASC
example of 2 entries where I need to delete one
First, back up your bad table in case you goof something up.
CREATE TABLE wp_comments_bad_backup SELECT * FROM wp_comments;
Do you actually have duplicate records here (duplicate in all columns)? If so, try this:
CREATE TABLE wp_comments_deduped SELECT DISTINCT * FROM wp_comments;
RENAME TABLE wp_comments TO wp_comments_not_deduped;
RENAME TABLE wp_comments_deduped TO wp_comments;
If they don't have exactly the same contents and you don't care which contents you keep from each pair of duplicate rows, try something like this:
CREATE TABLE wp_comments_deduped
SELECT comment_ID,
MAX(comment_post_ID) comment_post_ID,
MAX(comment_author) comment_author,
MAX(comment_author_email) comment_author_email,
MAX(comment_author_url) comment_author_url,
MAX(comment_author_IP) comment_author_IP,
MAX(comment_date) comment_date,
MAX(comment_date_gmt) comment_date_gmt,
MAX(comment_content) comment_content,
MAX(comment_karma) comment_karma,
MAX(comment_approved) comment_approved,
MAX(comment_agent) comment_agent,
MAX(comment_type) comment_type,
MAX(comment_parent) comment_parent,
MAX(user_id) user_id
FROM wp_comments
GROUP BY comment_ID;
RENAME TABLE wp_comments TO wp_comments_not_deduped;
RENAME TABLE wp_comments_deduped TO wp_comments;
Then you'll need to double-check whether your deduplicating worked:
SELECT comment_ID, COUNT(*) num FROM wp_comments GROUP BY comment_ID;
Then, once you're happy with it, put back WordPress's indexes.
Pro tip: Use a plugin like Duplicator when you migrate from one WordPress setup to another; its authors have sorted out all this data migration for you.
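The SELECT DISTINCT-into-a-new-table-then-swap pattern above can be exercised with Python's built-in sqlite3 (a minimal sketch; the column list is made up, and MySQL's RENAME TABLE becomes ALTER TABLE ... RENAME TO in SQLite):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE wp_comments (comment_ID INTEGER, comment_content TEXT)")
cur.executemany(
    "INSERT INTO wp_comments VALUES (?, ?)",
    [(1, "first"), (2, "second"), (2, "second"), (3, "third")],  # one exact dupe
)

# 1. Copy distinct rows into a new table.
cur.execute("CREATE TABLE wp_comments_deduped AS SELECT DISTINCT * FROM wp_comments")
# 2. Swap the tables (keep the original around as a safety net).
cur.execute("ALTER TABLE wp_comments RENAME TO wp_comments_not_deduped")
cur.execute("ALTER TABLE wp_comments_deduped RENAME TO wp_comments")

# 3. Verify: no comment_ID appears more than once any more.
dupes = cur.execute(
    "SELECT comment_ID FROM wp_comments GROUP BY comment_ID HAVING COUNT(*) > 1"
).fetchall()
print(len(dupes))  # 0
print(cur.execute("SELECT COUNT(*) FROM wp_comments").fetchone()[0])  # 3
```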
I would recommend adding a unique auto-increment key to the table (call it tempid) so you can distinguish between the rows of one duplicate set; use the query below to remove the duplicate copies, and at the end drop the tempid column:
DELETE FROM `wp_comments`
WHERE `tempid` NOT IN (
    -- keep the lowest tempid of every comment_ID group; the extra
    -- derived table lets MySQL reference the DELETE target table
    SELECT `keep_id` FROM (
        SELECT MIN(`tempid`) AS `keep_id`
        FROM `wp_comments`
        GROUP BY `comment_ID`
    ) AS `keepers`
);
I'm not clear on why there appear to be two fields both named 'comment_ID' selected from the same table, but I believe this will delete only the first of the two identical records. Before running a DELETE statement, however, be sure to make a backup of the original table.
-- Requires MySQL 8.0+ for window functions; MySQL has no TOP clause.
-- Caveat: IN matches every copy of a duplicated comment_ID, so this
-- deletes all of them; if the rows are identical in every column, add a
-- temporary unique key first (see the tempid approach above).
DELETE FROM `wp_comments`
WHERE `comment_ID` IN (
    SELECT `comment_ID`
    FROM (
        SELECT `comment_ID`,
               ROW_NUMBER() OVER (PARTITION BY `comment_ID`
                                  ORDER BY `comment_ID`) AS r
        FROM `wp_comments`
    ) AS ranked
    WHERE r > 1
);

How to add an index to such sql query?

Please tell me how to add an index to this sql query?
SELECT *
FROM table
WHERE (cities IS NULL) AND (position_id = '2') AND (is_pub = '1')
ORDER BY ordering asc
LIMIT 1
Field types:
cities = text
position_id = int(11)
is_pub = tinyint(1)
I try so:
ALTER TABLE table ADD FULLTEXT ( 'cities', 'position_id', 'is_pub' );
But I get an error: The used table type doesn't support FULLTEXT indexes
First, rewrite the query so you are not mixing types. That is, get rid of the single quotes:
SELECT *
FROM table
WHERE (cities IS NULL) AND (position_id = 2) AND (is_pub = 1)
ORDER BY ordering asc
LIMIT 1;
Then, the best index for this query is on table(position_id, is_pub, cities, ordering):
create index idx_table_4 on table(position_id, is_pub, cities(32), ordering);
The first three columns can be in any order in the index, so long as they are the first three.
You should change cities to a varchar() type. Is there a reason you want to use text for this?
You need to change the engine for your table to MyISAM.
possible duplicate of #1214 - The used table type doesn't support FULLTEXT indexes

Optimizing MySql query

I would like to know if there is a way to optimize this query :
SELECT
jdc_organizations_activities.*,
jdc_organizations.orgName,
CONCAT(jos_hpj_users.firstName, ' ', jos_hpj_users.lastName) AS nameContact
FROM jdc_organizations_activities
LEFT JOIN jdc_organizations ON jdc_organizations_activities.organizationId =jdc_organizations.id
LEFT JOIN jos_hpj_users ON jdc_organizations_activities.contact = jos_hpj_users.userId
WHERE jdc_organizations_activities.status LIKE 'proposed'
ORDER BY jdc_organizations_activities.creationDate DESC LIMIT 0 , 100 ;
Now when I look at the query log:
Query_time: 2
Lock_time: 0
Rows_sent: 100
Rows_examined: 1028330
Query Profile :
2) Should I put indexes on the tables, bearing in mind that there will be a lot of inserts and updates on those tables?
From Tizag Tutorials :
Indexes are something extra that you
can enable on your MySQL tables to
increase performance, but they do have
some downsides. When you create a new
index MySQL builds a separate block of
information that needs to be updated
every time there are changes made to
the table. This means that if you
are constantly updating, inserting and
removing entries in your table this
could have a negative impact on
performance.
Update after adding indexes and removing the lower(), the GROUP BY, and the wildcard:
Time: 0.855ms
Add indexes (if you haven't) at:
Table: jdc_organizations_activities
simple index on creationDate
simple index on status
simple index on organizationId
simple index on contact
And rewrite the query, removing the call to the function LOWER() and using = or LIKE. It depends on the collation you have defined for this table, but if it's a case-insensitive one (like latin1), it will still return the same results. Details can be found in the MySQL docs: case-sensitivity
SELECT a.*
, o.orgName
, CONCAT(u.firstName,' ',u.lastName) AS nameContact
FROM jdc_organizations_activities AS a
LEFT JOIN jdc_organizations AS o
ON a.organizationId = o.id
LEFT JOIN jos_hpj_users AS u
ON a.contact = u.userId
WHERE a.status LIKE 'proposed' -- or (a.status = 'proposed')
ORDER BY a.creationDate DESC
LIMIT 0 , 100 ;
It would be nice if you posted the execution plan (as it is now) and after these changes.
UPDATE
A compound index on (status, creationDate) may be more appropriate (as Darhazer suggested) for this query, instead of the simple (status). But this is guesswork; posting the plans (after running EXPLAIN on the query) would provide more info.
I also assumed that you already have (primary key) indexes on:
jdc_organizations.id
jos_hpj_users.userId
Post the result from EXPLAIN
Generally you need indexes on jdc_organizations_activities.organizationId, jdc_organizations_activities.contact, composite index on jdc_organizations_activities.status and jdc_organizations_activities.creationDate
Why are you using a LIKE query for a constant lookup? (You have no wildcard symbols, or maybe you've edited the query.)
The index on status can be used for LIKE 'proposed%' but not for LIKE '%proposed%'; in the latter case, better to leave only the index on creationDate.
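For reference, a sketch of the composite index suggested above (the index name is illustrative):

```sql
CREATE INDEX idx_activities_status_creationdate
    ON jdc_organizations_activities (status, creationDate);
```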
What indexes do you have on these tables? Specifically, have you indexed jdc_organizations_activities.creationDate?
Also, why do you need to group by jdc_organizations_activities.id? Isn't that unique per row, or can an organization have multiple contacts?
The slowness is because MySQL has to apply lower() to every row. The solution is to create a new column to store the result of lower(), then put an index on that column. Let's also use triggers to make the solution more luxurious. OK, here we go:
a) Add a new column to hold the lower version of status (make this varchar as wide as status):
ALTER TABLE jdc_organizations_activities ADD COLUMN status_lower varchar(20);
b) Populate the new column:
UPDATE jdc_organizations_activities SET status_lower = lower(status);
c) Create an index on the new column
CREATE INDEX jdc_organizations_activities_status_lower_index
ON jdc_organizations_activities(status_lower);
d) Define triggers to keep the new column value correct:
DELIMITER ~
CREATE TRIGGER jdc_organizations_activities_status_insert_trig
BEFORE INSERT ON jdc_organizations_activities
FOR EACH ROW
BEGIN
  SET NEW.status_lower = lower(NEW.status);
END~
CREATE TRIGGER jdc_organizations_activities_status_update_trig
BEFORE UPDATE ON jdc_organizations_activities
FOR EACH ROW
BEGIN
  SET NEW.status_lower = lower(NEW.status);
END~
DELIMITER ;
Your query should now fly.
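The shadow-column-plus-trigger pattern can be exercised end to end outside MySQL; below is a minimal runnable sketch using Python's built-in sqlite3 (table and trigger names are made up). SQLite triggers cannot assign to NEW the way the MySQL triggers above do, so AFTER triggers update the row instead:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE activities (status TEXT, status_lower TEXT)")
# Keep status_lower in sync on insert and on updates of status.
cur.executescript("""
    CREATE TRIGGER activities_status_ins AFTER INSERT ON activities
    BEGIN
        UPDATE activities SET status_lower = lower(NEW.status)
        WHERE rowid = NEW.rowid;
    END;
    CREATE TRIGGER activities_status_upd AFTER UPDATE OF status ON activities
    BEGIN
        UPDATE activities SET status_lower = lower(NEW.status)
        WHERE rowid = NEW.rowid;
    END;
""")
cur.execute("INSERT INTO activities (status) VALUES ('Proposed')")
cur.execute("UPDATE activities SET status = 'ACCEPTED'")
rows = cur.execute("SELECT status, status_lower FROM activities").fetchall()
print(rows)  # [('ACCEPTED', 'accepted')]
```

An index on status_lower then serves case-insensitive lookups without applying lower() per row.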

How to find duplicates in 2 columns not 1

I have a MySQL database table with two columns that interest me. Individually they can each have duplicates, but they should never have a duplicate of BOTH of them having the same value.
stone_id can have duplicates as long as the upcharge_title is different for each, and in reverse. But say, for example, stone_id = 412 and upcharge_title = "sapphire"; that combination should only occur once.
This is ok:
stone_id = 412 upcharge_title = "sapphire"
stone_id = 412 upcharge_title = "ruby"
This is NOT ok:
stone_id = 412 upcharge_title = "sapphire"
stone_id = 412 upcharge_title = "sapphire"
Is there a query that will find duplicates in both fields? And if possible is there a way to set my data-base to not allow that?
I am using MySQL version 4.1.22
You should set up a composite unique key across the two fields. This will require the (stone_id, upcharge_title) pair to be unique for each row.
As far as finding the existing duplicates try this:
select stone_id,
upcharge_title,
count(*)
from your_table
group by stone_id,
upcharge_title
having count(*) > 1
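A minimal runnable sketch of the composite-key mechanics, using Python's built-in sqlite3 (the table name is made up; in MySQL the constraint would be added with ALTER TABLE ... ADD UNIQUE (stone_id, upcharge_title)):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Composite UNIQUE constraint across both columns: each column may repeat,
# but the pair may not.
cur.execute("""
    CREATE TABLE upcharges (
        stone_id       INTEGER,
        upcharge_title TEXT,
        UNIQUE (stone_id, upcharge_title)
    )
""")
cur.execute("INSERT INTO upcharges VALUES (412, 'sapphire')")
cur.execute("INSERT INTO upcharges VALUES (412, 'ruby')")  # ok: title differs

try:
    cur.execute("INSERT INTO upcharges VALUES (412, 'sapphire')")  # same pair
    rejected = False
except sqlite3.IntegrityError:
    rejected = True

print(rejected)  # True: the (412, 'sapphire') pair is refused a second time
print(cur.execute("SELECT COUNT(*) FROM upcharges").fetchone()[0])  # 2
```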
I found it helpful to add a unique index using ALTER IGNORE, which removes the duplicates and enforces unique records, which sounds like what you would like to do. So the syntax would be:
ALTER IGNORE TABLE `table` ADD UNIQUE INDEX(`id`, `another_id`, `one_more_id`);
This effectively adds the unique constraint meaning you will never have duplicate records and the IGNORE deletes the existing duplicates.
You can read more about the ALTER IGNORE here: http://mediakey.dk/~cc/mysql-remove-duplicate-entries/
Update: I was informed by @Inquisitive that this may fail in versions of MySQL > 5.5:
It fails on MySQL > 5.5 and on InnoDB tables, and in Percona, because of
their InnoDB fast index creation feature [http://bugs.mysql.com/bug.php?id=40344]. In this case,
first run set session old_alter_table=1 and then the above command
will work fine.
Update - ALTER IGNORE Removed In 5.7
From the docs
As of MySQL 5.6.17, the IGNORE clause is deprecated and its use
generates a warning. IGNORE is removed in MySQL 5.7.
One of the MySQL devs gives two alternatives:
Group by the unique fields and delete as seen above
Create a new table, add a unique index, use INSERT IGNORE, ex:
CREATE TABLE duplicate_row_table LIKE regular_row_table;
ALTER TABLE duplicate_row_table ADD UNIQUE INDEX (id, another_id);
INSERT IGNORE INTO duplicate_row_table SELECT * FROM regular_row_table;
DROP TABLE regular_row_table;
RENAME TABLE duplicate_row_table TO regular_row_table;
But depending on the size of your table, this may not be practical
You can find duplicates like this:
SELECT stone_id, upcharge_title, COUNT(*)
FROM particulartable
GROUP BY stone_id, upcharge_title
HAVING COUNT(*) > 1
To find the duplicates:
select stone_id, upcharge_title
from tablename
group by stone_id, upcharge_title
having count(*) > 1
To constrain to avoid this in future, create a composite unique key on these two fields.
Incidentally, a composite unique constraint on the table would prevent this from occurring in the first place.
ALTER TABLE table
ADD UNIQUE(stone_id, upcharge_title)
(This is valid T-SQL. Not sure about MySQL.)
This SO post helped me, but I also wanted to know how to delete and keep one of the rows. Here's a PHP solution that deletes the duplicate rows and keeps one (in my case there were only two columns; it is in a function for clearing duplicate category associations):
$dupes = $db->query('select *, count(*) as NUM_DUPES from PRODUCT_CATEGORY_PRODUCT group by fkPRODUCT_CATEGORY_ID, fkPRODUCT_ID having count(*) > 1');
if (!is_array($dupes))
return true;
foreach ($dupes as $dupe) {
$db->query('delete from PRODUCT_CATEGORY_PRODUCT where fkPRODUCT_ID = ' . $dupe['fkPRODUCT_ID'] . ' and fkPRODUCT_CATEGORY_ID = ' . $dupe['fkPRODUCT_CATEGORY_ID'] . ' limit ' . ($dupe['NUM_DUPES'] - 1));
}
The LIMIT of NUM_DUPES - 1 is what preserves the single row.
Thanks, all.
This is what worked for me (ignoring null and blank). Two different email columns:
SELECT *
FROM members
WHERE email IN (SELECT soemail
FROM members
WHERE NOT Isnull(soemail)
AND soemail <> '');