I have a cursor which is declared like so:
DECLARE staging_cur CURSOR FOR
SELECT
col1, col2, ......
FROM crawl_db.staging_listing
WHERE is_deleted = FALSE;
I then fetch each row, perform some checks and then insert the row into another (production) database:
OPEN staging_cur;
the_loop: LOOP
FETCH staging_cur
INTO col1_val, col2_val,.....;
-- perform some checks and some optional inserts
-- for example, if city with given name is not found in production DB, insert it
-- insert into production db
END LOOP the_loop;
I realize I need to declare a variable (col1_val, col2_val ...) for each corresponding column of table staging_listing (col1, col2....). The problem is that this table contains 90-100 columns, and declaring all these variables is really cumbersome.
It seems there should be a better way than this. Is there some way in which we can access the column of the cursor's current row without having to declare separate variables to hold the column values?
If you need to insert rows into another table, then a better way is to use an INSERT ... SELECT statement. Try to avoid using cursors.
See the MySQL manual on INSERT ... SELECT syntax.
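For example, a single set-based statement along these lines could replace the cursor loop entirely (the production table and column names below are placeholders, not from the question):
-- hypothetical production table and column list
INSERT INTO production_db.listing (col1, col2 /* , ... */)
SELECT s.col1, s.col2 /* , ... */
FROM crawl_db.staging_listing AS s
WHERE s.is_deleted = FALSE;
The per-row checks (for example, inserting missing cities first) can usually be expressed as separate INSERT ... SELECT statements joined against the lookup tables, run before the main insert.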
I was trying to create a trigger that updates the value of column user_count in table user_details using the value of u_count from table user_info.
CREATE TRIGGER `test`
AFTER INSERT ON `user_details` FOR EACH ROW
BEGIN
DECLARE default_user_count int(11);
SELECT u_count INTO #default_user_count FROM
user_info WHERE user_info.id= user_details.id_c;
IF user_details.user_count= 0
THEN UPDATE user_details SET
user_count = default_user_count
WHERE user_details.id_c = user_info.id;
END IF;
END
The trigger saved successfully, but when I tried to insert values into both tables, it prevented the insert into user_details; no row was inserted into this second table. If I delete the trigger, it works.
Can anyone let me know what is wrong with this trigger?
Thanks,
M.
It's not really clear what you're trying to accomplish, but it seems like it's something like what we have below.
There are numerous errors and ambiguities in your trigger.
Confusion on variables -- DECLARE default_user_count INT(11); does not declare the user-defined variable #default_user_count. It declares the program variable default_user_count. The # prefix references an entirely different variable scope and namespace.
SELECT and UPDATE from the table which invoked the trigger doesn't usually make sense (SELECT) or is completely invalid (UPDATE).
Within a trigger, you are operating FOR EACH ROW -- that is, for each row included in the statement that invoked the trigger. Inside an INSERT trigger, the NEW values for the row are in a pseudo-table/pseudo-row accessible via the alias NEW. For UPDATE triggers, there are NEW and OLD row values, and for DELETE triggers, just OLD.
AFTER INSERT doesn't seem to make sense. I think you're looking for BEFORE INSERT -- that is, while processing an INSERT INTO ... query, before the newly-inserted row actually gets written into the table, modify its values accordingly. The resulting row contains the original values except where the trigger has modified them.
SELECT ... INTO a variable is a practice you should not get into the habit of, because it can bite you in a way a scalar subquery can't, by leaving a variable unexpectedly unaltered instead of setting it to NULL as would be expected. In this case, it would have made no difference, but it's still a caution worth mentioning... and in this case, I've eliminated that intermediate variable altogether, so the subquery is the only option.
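As a small illustration of that caution (the id value -1 is just an example of a non-matching key):
SET @v = 999;
-- no matching row: SELECT ... INTO leaves @v at its previous value, 999
SELECT u_count INTO @v FROM user_info WHERE id = -1;
-- no matching row: the scalar subquery reliably sets @v to NULL
SET @v = (SELECT u_count FROM user_info WHERE id = -1);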
If you are trying to set a value in this table using a value found in another table, all you need to do is SET NEW.column_name equal to the value you want used in the row instead of the value provided with the insert statement.
CREATE TRIGGER `test`
BEFORE INSERT ON `user_details` FOR EACH ROW
BEGIN
IF NEW.user_count = 0 /* maybe also >> */ OR NEW.user_count IS NULL /* << this */ THEN
SET NEW.user_count = (SELECT ui.u_count
FROM user_info ui
WHERE ui.id = NEW.id_c);
END IF;
END
Again, it's unclear how the two tables are connected based on the content of the original question, but this appears to do what you're trying to accomplish.
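As a quick sanity check (values made up, assuming user_info already has a row with id = 5 and that user_details has no other required columns), an insert like this should end up with user_count taken from user_info whenever 0 or NULL is supplied:
INSERT INTO user_details (id_c, user_count) VALUES (5, 0);
-- the BEFORE INSERT trigger replaces the 0 with user_info.u_count for id = 5
SELECT user_count FROM user_details WHERE id_c = 5;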
I'm having a problem with my SQL query. I need to insert data that must first be checked to see whether it already exists. If the data exists, the SQL query must return it; if not, insert it and return it. I already googled it, but the results are not quite suitable for my problem. I have already read these:
Check if a row exists, otherwise insert
How to 'insert if not exists' in MySQL?
Here is the query I'm thinking of:
INSERT INTO #tablename(#field, #conditional_field, #field, #conditional_field)
VALUES(
"value of field"
(SQL QUERY THAT CHECK IF THERE IS AN EXISTING DATA, IF NOT INSERT THE DATA and RETURN IT, IF YES return it),
"value of feild",
(SQL QUERY THAT CHECK IF THERE IS AN EXISTING DATA, IF NOT INSERT THE DATA and RETURN IT, IF YES return it)
);
Please take note that the conditional field is a required field so it can't be NULL.
Your tag set is quite weird; I'm unsure you require all the technologies listed, but as far as Firebird is concerned there is the UPDATE OR INSERT construction.
The code could look like this:
UPDATE OR INSERT INTO aTable
VALUES (...)
MATCHING (ID, SomeColumn)
RETURNING ID, SomeColumn
Note that this only works for a simple key match; no complex logic is available. If that's not an option, you could use EXECUTE BLOCK, which has all the power of stored procedures but is executed like a regular query. Also be aware that you may run into a concurrent update error if two clients execute updates at the same time.
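A rough EXECUTE BLOCK sketch of the check-then-insert logic could look like this (the table and column names are taken from the snippet above and are only placeholders):
EXECUTE BLOCK (in_id INT = ?, in_val VARCHAR(10) = ?)
RETURNS (out_id INT, out_val VARCHAR(10))
AS
BEGIN
  -- try to find an existing row first
  SELECT id, SomeColumn FROM aTable WHERE id = :in_id INTO :out_id, :out_val;
  IF (out_id IS NULL) THEN
  BEGIN
    -- not found: insert it and return the new values
    INSERT INTO aTable (id, SomeColumn) VALUES (:in_id, :in_val);
    out_id = in_id;
    out_val = in_val;
  END
  SUSPEND;
END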
You could split it into two steps:
1. Run a SELECT statement to retrieve the rows that match your values; SELECT COUNT(*) will give you the number of rows.
2. If zero rows are found, run the INSERT to add the new values.
Alternatively, you could create a unique index from all your columns. If you try to insert a row where all the values already exist, an error will be returned, and you can then run a SELECT statement to get the ID of the existing row. Otherwise, the insert will succeed.
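In rough MySQL terms, the two approaches could look like this (the table, column names and the id column are placeholders):
-- Step 1: check for an existing row
SELECT COUNT(*) FROM mytable WHERE conditional_field = 'some value';
-- Step 2: only if the count was zero, insert, then select the row back
INSERT INTO mytable (field, conditional_field) VALUES ('value of field', 'some value');
SELECT id, field, conditional_field FROM mytable WHERE conditional_field = 'some value';
-- Alternative: enforce uniqueness so a duplicate insert fails with an error,
-- after which you simply select the existing row
ALTER TABLE mytable ADD UNIQUE INDEX ux_all_values (field, conditional_field);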
You can check with IF EXISTS(SELECT COUNT(*) FROM #tablename) to see whether there is data, but with INSERT INTO you need to supply data for all columns, so if only #field is missing you can't use INSERT INTO; you will need to update the table and take a slightly different approach. Also, I'm not sure why you check every row: do you know for every row what is missing? Are you comparing with some other table?
You can achieve this using a MySQL stored procedure.
Sample MySQL stored procedure:
CREATE TABLE MyTable
(`ID` int, `ConditionField` varchar(10))
;
INSERT INTO MyTable
(`ID`, `ConditionField`)
VALUES
(1, 'Condition1'),
(1, 'Condition2')
;
DELIMITER //
CREATE PROCEDURE simpleproc (IN identifier INT, IN ConditionData VARCHAR(10))
BEGIN
IF EXISTS (SELECT 1 FROM MyTable WHERE `ConditionField` = ConditionData) THEN
BEGIN
SELECT * FROM MyTable WHERE `ConditionField`=ConditionData;
END;
ELSE
BEGIN
INSERT INTO MyTable VALUES (identifier,ConditionData);
SELECT * FROM MyTable WHERE `ConditionField`=ConditionData;
END;
END IF;
END//
DELIMITER ;
To call the stored procedure:
CALL simpleproc(3,'Condition3');
I need to update a table with pre-calculated values from tables where data can be added/updated/deleted.
I could use
insert into precalculated(...)
select ... from ...
on duplicate key update ...
to add/update the pre-calculated table, but is there an optimized method to delete the obsolete rows?
I think you should create a stored procedure that deletes the data of your related tables if and only if the records fulfill a condition.
There's not enough information in your question to design the procedure, but I can give you a little example:
delimiter $$
create procedure delete_orphans()
begin
declare id_orphan int;
declare done int default false;
declare cur_orphans cursor for
select distinct d.id
from data as d
left join precalculated as p on d.id = p.id
where p.id is null;
declare continue handler for not found set done = true;
open cur_orphans;
loop_delete_orphans: loop
fetch cur_orphans into id_orphan;
if done then
leave loop_delete_orphans;
end if;
delete from data where id = id_orphan;
end loop;
close cur_orphans;
end$$
delimiter ;
This procedure will delete every row in the data table that does not have at least one related row in the precalculated table.
Of course, this approach might be inefficient, because it will delete the rows one by one, but as I said this is only an example. You can customize it to fit your needs.
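If the row-by-row deletes turn out to be too slow, the same cleanup can be written as a single set-based statement (same tables as in the example above):
delete d
from data as d
left join precalculated as p on d.id = p.id
where p.id is null;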
You can call this procedure from a trigger if you want (with call delete_orphans()).
Hope this helps.
Since you are always adding or updating rows that exist in these other tables, and you want to remove any rows that don't exist, why don't you just:
DELETE FROM precalculated;
insert into precalculated(...)
select ... from ...
on duplicate key update ...
Always starting clean means you don't have to worry about orphans later.
You could add triggers for insert, delete and update on the main tables that maintain precalculated.
When inserting or updating, the same code can be used to calculate the values, issuing a REPLACE INTO precalculated (...) VALUES (...).
When deleting, it's much the same, with the addition that you'll also delete rows from precalculated that become orphans. Be smart here and use values from the original DELETE to query precalculated for orphans instead of doing a table scan.
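A minimal sketch of that idea, assuming a hypothetical source table data(id, amount) feeding precalculated(id, total), with id as the primary key of precalculated:
CREATE TRIGGER data_after_insert AFTER INSERT ON data
FOR EACH ROW
  -- the real calculation would go here in place of NEW.amount
  REPLACE INTO precalculated (id, total) VALUES (NEW.id, NEW.amount);

CREATE TRIGGER data_after_delete AFTER DELETE ON data
FOR EACH ROW
  DELETE FROM precalculated WHERE id = OLD.id;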
I may have found my solution using RENAME TABLE.
So basically, I will do a simple INSERT ... SELECT into the temporary table and then:
RENAME TABLE precalculated TO precalculated_temprename, precalculated_temp TO precalculated, precalculated_temprename TO precalculated_temp;
truncate precalculated_temp;
It needs some tests, but it seems the rename operation is fast and atomic.
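Put together, a refresh run would then look roughly like this (assuming precalculated_temp has the same structure as precalculated):
insert into precalculated_temp(...)
select ... from ...;
rename table precalculated to precalculated_temprename,
             precalculated_temp to precalculated,
             precalculated_temprename to precalculated_temp;
truncate precalculated_temp;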
I'm trying to select a column from a record variable in a function I'm calling from an Update rule and am getting the following error:
'could not identify column "name" in record data type'
The following is what I'm doing to produce the error:
From within an Update rule:
SELECT * INTO TEMPORARY TABLE TempTable FROM NEW;
SELECT MyFunction();
From within MyFunction()
DECLARE RecordVar Record;
SELECT * INTO STRICT RecordVar FROM TempTable;
EXECUTE 'UPDATE AnotherTable SET column = $1.name' USING RecordVar;
Note: I realise that there are easier ways to achieve what the above code is doing, but I've simplified the actual implementation to focus on the problem I'm having. That has opened up other possible solutions, but I'd really like to get the above code working if possible.
I just figured it out. Rather than inserting the columns from NEW into the Temporary Table, I insert the NEW record as a single column into the Temporary Table and refer to it as RecordVar."NEW" inside my function. My rule and function now look like this:
From within an Update rule:
SELECT NEW AS "NEW" INTO TEMPORARY TABLE TempTable;
SELECT MyFunction();
From within MyFunction()
DECLARE RecordVar Record;
SELECT * INTO STRICT RecordVar FROM TempTable;
EXECUTE 'UPDATE AnotherTable SET column = $1.name' USING RecordVar."NEW";
The second part could work like this:
DO
$BODY$
DECLARE
RecordVar TempTable;
BEGIN
SELECT * INTO STRICT RecordVar FROM TempTable LIMIT 1;
EXECUTE 'UPDATE AnotherTable SET column = $1.name'
USING RecordVar;
END;
$BODY$
Note how I use the table name as type. PostgreSQL automatically creates a composite type for every table in the system.
A variable holds one row, the SELECT can return many rows. All but the first will be discarded. I added LIMIT 1 to clarify the effect. I doubt that is what you want.
You probably shouldn't have to use a temporary table in a rule to begin with. You may want to post your complete setup ...
I'm converting a ColdFusion Project from Oracle 11 to MS SQL 2008. I used SSMA to convert the DB including triggers, procedures and functions. Sequences were mapped to IDENTITY columns.
I planned on using INSERT-Statements like
INSERT INTO mytable (col1, col2)
OUTPUT INSERTED.my_id
values('val1', 'val2')
This throws an error since the table has a trigger defined that, AFTER INSERT, writes some of the INSERTED data to another table to keep a history of the data.
Microsoft writes:
If the OUTPUT clause is specified without also specifying the INTO
keyword, the target of the DML operation cannot have any enabled
trigger defined on it for the given DML action. For example, if the
OUTPUT clause is defined in an UPDATE statement, the target table
cannot have any enabled UPDATE triggers.
http://msdn.microsoft.com/en-us/library/ms177564.aspx
I'm now wondering what the best practice is to, firstly, retrieve the generated id and, secondly, "back up" the INSERTED data in a second table.
Is this a good approach for the INSERT? It works because the INSERTED value is not simply returned but written INTO a table variable. In my tests it works as Microsoft describes, without throwing an error regarding the trigger.
<cfquery>
DECLARE @tab table(id int);
INSERT INTO mytable (col1, col2)
OUTPUT INSERTED.my_id INTO @tab
values('val1', 'val2');
SELECT id FROM #tab;
</cfquery>
Should I use the OUTPUT clause at all? When I have to write multiple statements in one cfquery block, wouldn't I be better off using SELECT SCOPE_IDENTITY()?
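For reference, the SCOPE_IDENTITY() variant would look roughly like this (using the same example table and values as above):
INSERT INTO mytable (col1, col2) VALUES ('val1', 'val2');
SELECT SCOPE_IDENTITY() AS my_id;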
Thanks and best,
Bernhard
I think this is what you want to do:
<cfquery name="qryInsert" datasource="db" RESULT="qryResults">
INSERT INTO mytable (col1, col2)
VALUES ('val1', 'val2')
</cfquery>
<cfset id = qryResults.IDENTITYCOL>
This seems to work - the row gets inserted, the INSTEAD OF trigger returns the result, the AFTER trigger doesn't interfere, and the AFTER trigger logs to the table as expected:
CREATE TABLE dbo.x1(ID INT IDENTITY(1,1), x SYSNAME);
CREATE TABLE dbo.log_after(ID INT, x SYSNAME,
dt DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP);
GO
CREATE TRIGGER dbo.x1_after
ON dbo.x1
AFTER INSERT
AS
BEGIN
SET NOCOUNT ON;
INSERT dbo.log_after(x) SELECT x FROM inserted;
END
GO
CREATE TRIGGER dbo.x1_before
ON dbo.x1
INSTEAD OF INSERT
AS
BEGIN
SET NOCOUNT ON;
DECLARE #tab TABLE(id INT);
INSERT dbo.x1(x)
OUTPUT inserted.ID INTO #tab
SELECT x FROM inserted;
SELECT id FROM #tab;
END
GO
Now, if you write this in your cfquery, you should get a row back in output. I'm not CF-savvy so I'm not sure if it has to see some kind of select to know that it will be returning a result set (but you can try it in Management Studio to confirm I am not pulling your leg):
INSERT dbo.x1(x) SELECT N'foo';
Now you should just move your after insert logic to this trigger as well.
Be aware that right now you will get multiple rows back for multi-row inserts (which is slightly different from the single result you would get from SCOPE_IDENTITY()). This is a good thing; I just wanted to point it out.
I have to admit that's the first time I've seen someone use a merged approach like that instead of simply using the built-in PK retrieval and splitting it into separate database requests (example).