I have two SQL tables called scan_sc and rescan_rsc. The scan table looks like this:
CREATE TABLE scan_sc
(
id_sc int(4),
Type_sc varchar(255),
ReScan_sc varchar(255),
PRIMARY KEY (id_sc)
)
When I scan a document I insert a row into the scan table. If the result of the scan is poor I have to do a rescan, and for that I have a rescan table.
CREATE TABLE rescan_rsc
(
id_rsc int(4),
Scan_rsc varchar(255),
PRIMARY KEY (id_rsc)
)
The problem is, I want a trigger that fills in the ReScan_sc column of the scan_sc table with an "x", so I can see that there has been a problem with that scan.
The trigger should do this where the id from the rescan table matches the id in the scan table.
Hope you all understand my question.
Thanks in advance.
Do you really need the ReScan_sc column and the trigger?
With a simple JOIN, you can find out the records in your scan_sc table that have been re-scanned, without using the ReScan_sc column at all.
There are several possibilities:
Show all scans, with an additional column with the Rescan ID, if any:
SELECT scan_sc.*, rescan_rsc.id_rsc
FROM scan_sc
LEFT JOIN rescan_rsc ON scan_sc.id_sc = rescan_rsc.id_rsc
Show only the scans which have been re-scanned:
SELECT scan_sc.*
FROM scan_sc
INNER JOIN rescan_rsc ON scan_sc.id_sc = rescan_rsc.id_rsc
(I assume that id_sc and id_rsc are the primary keys of their respective tables, as marc_s pointed out in his comment.)
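If you decide you want the trigger anyway, a minimal sketch could look like this (assuming, as in the joins above, that a rescan row's id_rsc equals the id_sc of the scan it belongs to; the trigger name is made up):
CREATE TRIGGER trg_rescan_after_insert AFTER INSERT ON rescan_rsc
FOR EACH ROW
  -- flag the matching scan as having needed a rescan
  UPDATE scan_sc
  SET ReScan_sc = 'x'
  WHERE id_sc = NEW.id_rsc;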
I have two tables that look something like this:
Table #1:
CREATE TABLE iteminfo
(Code CHAR(1) PRIMARY KEY,
Tags TEXT NOT NULL);
Table #2:
CREATE TABLE items
(ID INT UNSIGNED PRIMARY KEY,
Name VARCHAR(255) NOT NULL,
Code CHAR(1),
FOREIGN KEY (Code) REFERENCES iteminfo(Code));
I want to create a FULLTEXT index using the fields Name and Tags from the two tables. I assume I will have to use an EQUIJOIN or something similar, but this doesn't work:
ALTER TABLE items JOIN iteminfo WHERE items.Code = iteminfo.Code
ADD FULLTEXT (Name, Tags);
I want to know:
Is this even possible to do?
If yes, then how do I do it?
If no, then what other ways are there to index two columns present in different tables?
Thanks for answering in advance! I apologise if this question already exists but I couldn't find the answer online.
No.
But... It would make sense to collect the various columns from the various tables together in a single column and apply a FULLTEXT index to it.
Assuming there are many tags for each item, you could initialize such a table via:
CREATE TABLE search_info ( PRIMARY KEY(name) )
SELECT name,
CONCAT(name, ' ',
( SELECT GROUP_CONCAT(tags) FROM iteminfo
WHERE code = items.code ) ) AS search
FROM items;
Then
ALTER TABLE search_info ADD FULLTEXT(search);
(After that, changes to items or iteminfo would need to also modify search_info.)
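One way to keep it in sync (just a sketch, assuming the search_info layout above; the trigger name is made up, and updates to items or iteminfo would need similar triggers) is an AFTER INSERT trigger on items:
CREATE TRIGGER trg_items_after_insert AFTER INSERT ON items
FOR EACH ROW
  -- rebuild the combined search text for the newly inserted item
  REPLACE INTO search_info (name, search)
  SELECT NEW.Name,
         CONCAT(NEW.Name, ' ', COALESCE(GROUP_CONCAT(i.Tags), ''))
  FROM iteminfo i
  WHERE i.Code = NEW.Code;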
I tried what was suggested in #RickJames's answer and it seems to be the solution to my issue. For anyone else who might want to know, here's what I did, in the context of the information I gave in my original question:
ALTER TABLE items ADD COLUMN Tags TEXT;
UPDATE items
SET Tags = CONCAT(name, ' ', (SELECT GROUP_CONCAT(tags) FROM iteminfo WHERE code = items.code))
WHERE Code IS NOT NULL;
ALTER TABLE items ADD FULLTEXT(Tags);
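With that in place, searching the index looks something like this (the search term is only an example):
SELECT ID, Name
FROM items
WHERE MATCH(Tags) AGAINST('example search term');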
I'm a beginner to SQL so this is probably a pretty newbie question, but I can't seem to get my head straight on it. I have a pair of tables called MATCH and SEGMENT.
MATCH.id int(11) ai pk
MATCH.name varchar(45)
etc.
SEGMENT.id int(11) ai pk
SEGMENT.name varchar(45)
etc.
Each row in MATCH can have one or more SEGMENT rows associated with it. The name in MATCH is unique on each row. Right now I do an inner join on the name fields to figure out which segments go with which match. I want to copy the tables to a new set of tables and set up a foreign key in SEGMENT that contains the unique ID from the MATCH row both to improve performance and to fix some problems where the names aren't always precisely the same (and they should be).
Is there a way to do a single INSERT or UPDATE statement that will do the name comparisons and add the foreign key to each row in the SEGMENT table - at least for the rows where the names are precisely the same? (For the ones that don't match, I may have to write a SQL function to "clean" the name by removing extra blanks and special characters before comparing)
Thanks for any help anyone can give me!
Here's one way I would consider doing it: add the FK column, add the constraint definition, then populate the column with an UPDATE statement using a correlated subquery:
ALTER TABLE `SEGMENT` ADD COLUMN match_id INT(11) COMMENT 'FK ref MATCH.id' ;
ALTER TABLE `SEGMENT` ADD CONSTRAINT fk_SEGMENT_MATCH
FOREIGN KEY (match_id) REFERENCES `MATCH`(id) ;
UPDATE `SEGMENT` s
SET s.match_id = (SELECT m.id
FROM MATCH m
WHERE m.name = s.name) ;
A correlated subquery (like in the example UPDATE statement above) usually isn't the most efficient approach to getting a column populated. But it seems a lot of people think it's easier to understand than the (usually) more efficient alternative, an UPDATE using a JOIN operation like this:
UPDATE `SEGMENT` s
JOIN `MATCH` m
ON m.name = s.name
SET s.match_id = m.id
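For the rows where the names differ only by stray whitespace (the cleanup the question mentions), a hedged variant of the same JOIN can normalise both sides before comparing; the TRIM/REPLACE combination here is just one possible normalisation, not a general-purpose cleaner:
UPDATE `SEGMENT` s
JOIN `MATCH` m
  ON TRIM(REPLACE(m.name, '  ', ' ')) = TRIM(REPLACE(s.name, '  ', ' '))
SET s.match_id = m.id
WHERE s.match_id IS NULL;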
Add an ID field to your MATCH table and populate it.
Then add a column MATCHID (which will be your foreign key) to your SEGMENT table. Note that you won't be able to declare it as a foreign key until you have mapped the records correctly.
Use the following query to update the foreign keys:
UPDATE `SEGMENT` A
INNER JOIN `MATCH` B
ON A.NAME = B.NAME
SET A.MATCHID = B.ID;
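Once MATCHID is populated, the foreign key the note above refers to can be added; a sketch (the constraint name is made up):
ALTER TABLE `SEGMENT`
ADD CONSTRAINT fk_segment_match
FOREIGN KEY (MATCHID) REFERENCES `MATCH`(ID);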
I have a table containing settings for an application, with the columns id, key, and value.
The id column is auto-incrementing, but currently I do not use it, nor does it have any foreign key constraints. I'm populating the settings and would like to restructure the table so the rows are in alphabetical order, since I haven't been inserting them that way, and ordering alphabetically would group related settings together.
For example, if I have the following settings:
ID KEY VALUE
======================================
1 App.Name MyApplication
2 Text.Title Title of My App
3 App.Version 0.1
I would want all the App.* settings to be grouped together sequentially without having to do an ORDER BY every time. Anyway, that's the explanation. I have tried the following and it didn't seem to change the order:
CREATE TABLE mydb.Settings2 LIKE mydb.Settings;
INSERT INTO mydb.Settings2 (`key`,`value`) SELECT `key`,`value` FROM mydb.Settings ORDER BY `key` ASC;
DROP TABLE mydb.Settings;
RENAME TABLE mydb.Settings2 TO mydb.Settings;
That will make a duplicate of the table as suggested, but won't restructure the data. What am I missing here?
The easy way to reorder a table is with ALTER TABLE table ORDER BY column ASC. The query you tried looks like it should have worked, but I know the ALTER TABLE query works; I use it fairly often.
Note: Reordering the data in a table only works and makes sense in MyISAM tables. InnoDB always stores data in PRIMARY KEY order, so it can't be rearranged.
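Applied to the table from the question (and assuming it is MyISAM), that would be:
ALTER TABLE mydb.Settings ORDER BY `key` ASC;
Note that the table does not stay in this order as new rows are inserted and deleted; the reordering is a one-off rewrite.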
Decided to make that an answer.
As I said in a comment on the initial answer, to achieve a long-term effect you need to recreate the settings table with the key column as the PRIMARY KEY, because, as G-Nugget correctly said, 'InnoDB always stores data in PRIMARY KEY order'.
You can do that like this
CREATE TABLE settings2
(`id` int NULL, `key` varchar(64), `value` varchar(64), PRIMARY KEY(`key`));
INSERT INTO settings2
SELECT id, `key`, `value`
FROM settings;
DROP TABLE settings;
RENAME TABLE settings2 TO settings;
That way you get your order intact after inserting new records.
And if you don't need the initial id column in settings table it's a good time to ditch it.
Here is a working sqlfiddle.
Disclaimer: Personally I would use ORDER BY anyway
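That is, just read the settings with an explicit sort instead of relying on the physical row order:
SELECT `key`, `value` FROM settings ORDER BY `key`;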
I've got a MySQL table that has a lot of entries. It's got a unique key defined on (state, source), so there are no duplicates for that combination of columns. However, I am now realizing that much of the state data is not entered consistently. For example, in some rows it is entered as "CA" and in others it is spelled out as "California."
I'd like to update all the entries that say "California" to be "CA" and if it creates a conflict in the unique key, drop the row. How can I do that?
You may be better off dumping your data and using an external tool like Google Refine to clean it up. Look at using foreign keys in the future to avoid these issues.
I don't think you can do this in one SQL statement. And if you have foreign key relationships from other tables to the one you are trying to clean up, then you definitely do not want to do this in one step (even if you could).
CREATE TABLE state_mappings (
`old` VARCHAR(64) NOT NULL,
`new` VARCHAR(64) NOT NULL
);
INSERT INTO state_mappings VALUES ('California', 'CA'), ...;
INSERT IGNORE INTO MyTable (state, source)
SELECT sm.new, s.source FROM MyTable s JOIN state_mappings sm
ON s.state = sm.old;
-- Update tables with foreign keys here
DELETE FROM MyTable WHERE state IN (SELECT DISTINCT old FROM state_mappings);
DROP TABLE state_mappings;
I'm no SQL pro, so these statements can probably be optimized, but you get the gist.
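As a sanity check before running the statements above, a query along these lines (assuming the table really is called MyTable) lists the rows that would be dropped because a row with the mapped state and the same source already exists:
SELECT t.state, t.source
FROM MyTable t
JOIN state_mappings sm ON t.state = sm.old
JOIN MyTable dup ON dup.state = sm.new AND dup.source = t.source;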
I am trying to run a query:
INSERT
INTO `ProductState` (`ProductId`, `ChangedOn`, `State`)
SELECT t.`ProductId`, t.`ProcessedOn`, 'Activated'
FROM `tmpImport` t
LEFT JOIN `Product` p
ON t.`ProductId` = p.`Id`
WHERE p.`Id` IS NULL
ON DUPLICATE KEY UPDATE
`ChangedOn` = VALUES(`ChangedOn`)
(I am not quite sure the query is correct, but it appears to be working.) However, I am running into the following issue: I am running this query before creating the entry in the Product table, and I am getting a foreign key constraint error because the entry is not in the Product table yet.
My question is: is there a way to run this query but wait until the next query (which updates the Product table) before performing the insert portion of the query above? Also note that if the query is run after the Product entry is created, it will no longer see p.Id as NULL and will therefore fail, so it has to be performed before the Product entry is created.
---> Edit <---
The concept I am trying to achieve is as follows:
For starters, I am importing a set of data into a temp table. The Product table is a list of all products that are (or have been in the past) added through the set of data from the temp table. What I need is a separate table that records state changes for a product, since sometimes a product becomes unavailable (no longer in the data set provided by the vendor).
The ProductState table is as follows:
CREATE TABLE IF NOT EXISTS `ProductState` (
`ProductId` VARCHAR(32) NOT NULL ,
`ChangedOn` DATE NOT NULL ,
`State` ENUM('Activated','Deactivated') NULL ,
PRIMARY KEY (`ProductId`, `ChangedOn`) ,
INDEX `fk_ProductState_Product` (`ProductId` ASC) ,
CONSTRAINT `fk_ProductState_Product`
FOREIGN KEY (`ProductId` )
REFERENCES `Product` (`Id` )
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB
DEFAULT CHARACTER SET = utf8
COLLATE = utf8_general_ci;
The foreign key is an identifying relationship with the Product table (Product.Id)
Essentially what I am trying to accomplish is this:
1. Anytime a new product (or previously deactivated product) shows up in the vendor data set, the record is created in the ProductState table as 'Activated'.
2. Anytime a product (that is activated), does not show up in the vendor data set, the record is created as 'Deactivated' in the ProductState table.
The purpose of the ProductState table is to track activation and deactivation states of a product. Also, ProductState has a many-to-one relationship with the Product table, and the state of a product will only change once daily, therefore my primary key is (ProductId, ChangedOn).
With foreign keys, you definitely need to have the data in the Product table first, before entering the state. Think about it with this logic: "How can something that doesn't exist have a state?"
So pseudocode of what you should do:
1. Read in the vendor's product list.
2. Compare it to the existing list in your Product table.
3. If new products are found: 3.1 insert them into the Product table; 3.2 insert them into the ProductState table as 'Activated'.
4. If a product is missing from the vendor's list: 4.1 insert it into the ProductState table as 'Deactivated'.
All of these steps should be done in one transaction. Note that you should NOT delete things from the Product table unless you really want to delete every piece of information associated with it, i.e. also delete all the "states" that you have stored.
Rather than trying to do this all in one query, your best bet is to create a stored procedure that does the work step by step as above. I think it gets overly complicated (or, in this case, probably impossible) to do it all in one query.
Edit: Something like this:
DELIMITER //
CREATE PROCEDURE `some_procedure_name` ()
BEGIN
  -- Break the tmpImport table down into two temporary tables: new and removed products
  CREATE TEMPORARY TABLE _temp_new_products
  SELECT t.*
  FROM `tmpImport` t
  LEFT JOIN `Product` p
    ON t.`ProductId` = p.`Id`
  WHERE p.`Id` IS NULL;

  CREATE TEMPORARY TABLE _temp_removed_products
  SELECT p.*
  FROM `Product` p
  LEFT JOIN `tmpImport` t
    ON t.`ProductId` = p.`Id`
  WHERE t.`ProductId` IS NULL;

  -- For each entry in _temp_new_products:
  --   1. Insert into the Product table
  --   2. Insert into the ProductState table as 'Activated'
  -- For each entry in _temp_removed_products:
  --   1. Insert into the ProductState table as 'Deactivated'

  -- drop the temporary tables
  DROP TEMPORARY TABLE _temp_new_products;
  DROP TEMPORARY TABLE _temp_removed_products;
END //
DELIMITER ;
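Once created, the procedure would be called after each vendor import:
CALL some_procedure_name();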
I think you should:
start a transaction
do your insert into the Products table
do your insert into the ProductState table
commit the transaction
This will avoid any foreign key errors, but will also make sure your data is always accurate. You do not want to 'avoid' the foreign key constraint in any way, and InnoDB (which I'm sure you are using) never defers these constraints unless you turn them off completely.
Also, no, you cannot insert into multiple tables in one INSERT ... SELECT statement.
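A minimal sketch of that ordering, reusing the anti-join from the question (the _new_products name is made up, and since Product's full column list isn't shown, only Id is inserted here):
START TRANSACTION;

-- remember which products are new before touching the Product table
CREATE TEMPORARY TABLE _new_products
SELECT t.`ProductId`, t.`ProcessedOn`
FROM `tmpImport` t
LEFT JOIN `Product` p ON t.`ProductId` = p.`Id`
WHERE p.`Id` IS NULL;

-- 1. parent rows first
INSERT INTO `Product` (`Id`)
SELECT `ProductId` FROM _new_products;

-- 2. child rows second, so the foreign key is satisfied
INSERT INTO `ProductState` (`ProductId`, `ChangedOn`, `State`)
SELECT `ProductId`, `ProcessedOn`, 'Activated'
FROM _new_products
ON DUPLICATE KEY UPDATE `ChangedOn` = VALUES(`ChangedOn`);

COMMIT;

DROP TEMPORARY TABLE _new_products;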