I want to replace a table "Met" in my SQL Server database with a new DataTable from an application.
My basic idea is to loop over each row in the new table and compare it with the existing table.
I used a stored procedure, but it only has "insert" and "update" functions. Do I need to delete the old table first?
Thanks
For each row, I want to call the following stored procedure.
;WITH CTE AS (SELECT skey = @skey, ProbMetID = @ProbMetID, Interval = @Interval, Counts = @Counts)
MERGE Met AS TARGET
USING CTE AS SOURCE
ON SOURCE.skey = TARGET.skey
WHEN MATCHED THEN
    UPDATE SET ProbMetID = SOURCE.ProbMetID,
               Interval = SOURCE.Interval,
               Counts = SOURCE.Counts
WHEN NOT MATCHED THEN
    INSERT (skey, ProbMetID, Interval, Counts)
    VALUES (SOURCE.skey, SOURCE.ProbMetID, SOURCE.Interval, SOURCE.Counts);
The MERGE statement does include a DELETE function. Check out Pinal Dave's post on it: http://blog.sqlauthority.com/2008/08/28/sql-server-2008-introduction-to-merge-statement-one-statement-for-insert-update-delete/
I've found that the easiest way to do this from the app layer is to insert everything into a temp table, then perform the entire MERGE operation between the temp table and the actual table. The great part is that you can do the entire thing in a single transaction. Plus, you can bulk insert the application table into the temp table for super awesome db speed (TM).
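For illustration, here is a rough sketch of that pattern. #MetStaging is a hypothetical staging table that the application bulk-inserts into first, and the WHEN NOT MATCHED BY SOURCE branch is what handles the deletes, so there's no need to drop the old table:

BEGIN TRANSACTION;

MERGE Met AS TARGET
USING #MetStaging AS SOURCE
ON SOURCE.skey = TARGET.skey
WHEN MATCHED THEN
    UPDATE SET ProbMetID = SOURCE.ProbMetID,
               Interval = SOURCE.Interval,
               Counts = SOURCE.Counts
WHEN NOT MATCHED BY TARGET THEN
    INSERT (skey, ProbMetID, Interval, Counts)
    VALUES (SOURCE.skey, SOURCE.ProbMetID, SOURCE.Interval, SOURCE.Counts)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE; -- rows in Met that the application no longer sends are removed

COMMIT TRANSACTION;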
I'm trying to delete records in my target table based on whether records exist in source table. I tried using a 'Delete' step, but I noticed that this step is based on a conditional clause.
My condition is quite simple "if the record/row does NOT exist in table A [source], delete the record/row from table B [destination]".
I also read about using a 'Merge Rows (diff)' step, but it seems to check/compare the entire set of tables for differences.
The table has several million records and many hundreds of columns on a MySQL server, so I need to do this in the most efficient way possible.
I'm querying table A with a Table input step and this SQL command:
SELECT id, user, password, attribute, op FROM viewuserradiusunisulma
Any help would be appreciated.
[screenshots: Pentaho transformation and Delete step]
If your source and target tables are in the same database, you can use a SQL query to delete all records in tableB that don't have a corresponding entry in tableA:
DELETE FROM tableB WHERE NOT EXISTS (SELECT 1 FROM tableA WHERE tableA.id = tableB.id);
If the source and destination tables are not in the same database, you would have to go through all rows in tableB and check whether each record exists in tableA. If your source tableA has a limited number of rows, loading the key values into memory and then performing a stream lookup instead of a database lookup would be much faster. I'd probably try that even with a higher number of rows because of the significant performance impact.
Note: I hope I haven't messed up the SQL syntax; I'm thinking almost exclusively in ABAP at the moment, and that messes with my memory a bit. So please test this on a backup before firing away.
I found the solution. In this case, I check the records, then report, update, and insert the new data.
[screenshot: transformation]
Is this the right syntax:
INSERT INTO stock (Image)
SELECT Image,
FROM productimages
WHERE stock.Name_of_item = productimages.number;
SQL Server Management Studio's "Import Data" task (right-click on the DB name, then tasks) will do most of this for you. Run it from the database you want to copy the data into.
If the tables don't exist it will create them for you, but you'll probably have to recreate any indexes and such. If the tables do exist, it will append the new data by default but you can adjust that (edit mappings) so it will delete all existing data.
I use this all the time and it works fairly well.
INSERT INTO bar..tblFoobar( *fieldlist* )
SELECT *fieldlist* FROM foo..tblFoobar
This just moves the data. If you want to move the table definition (and other attributes such as permissions and indexes), you'll have to do something else.
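If the destination table doesn't exist yet, a SELECT ... INTO can create it as part of the copy; this is just a sketch, and like the INSERT above it still won't bring over indexes or permissions:

SELECT *fieldlist*
INTO bar..tblFoobar
FROM foo..tblFoobar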
The query logic that you are trying appears to be incorrect (the query itself is buggy).
Assuming you had a corrected query for the above logic, what you are trying to do is insert new rows into the table stock by selecting a column from the productimages table for each matching record on stock.Name_of_item = productimages.number.
That logic will add redundant data into the table.
You are perhaps looking to update instead of insert, something like:
update stock s
join productimages p on p.number = s.Name_of_item
set s.Image = p.Image
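If an insert really is intended (say, the stock rows don't exist yet), a guarded INSERT ... SELECT avoids the redundant rows. This is only a sketch that assumes the column names from the question:

INSERT INTO stock (Name_of_item, Image)
SELECT p.number, p.Image
FROM productimages p
WHERE NOT EXISTS (
    SELECT 1 FROM stock s WHERE s.Name_of_item = p.number
);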
Using a MySQL DB, I am having trouble with a stored procedure and event timer that I created.
I made an empty table that gets populated with data from another via SELECT INTO.
Prior to populating, I TRUNCATE the current data. It's used to track only log entries that occur within 2 months from the current date.
This turns a 350k+ row log table into about 750 rows, which really speeds up reporting queries.
The problem is that if a client sends a query precisely between the TRUNCATE statement and the SELECT INTO statement (which has a high probability, considering the EVENT is set to run every minute), the query returns no rows...
I have looked into locking the table against reads while this PROCEDURE runs, but LOCK TABLES is not allowed in STORED PROCEDURES.
Can anyone come up with a workaround that (preferably) doesn't require a remodel?
I really need to be pointed in the right direction here.
Thanks,
Max
I'd suggest an alternate approach instead of truncating the table, and then selecting into it...
You can instead select your new data set into a new table. Next, using a single RENAME command, rename the new table to the existing table and the existing table to some backup name.
RENAME TABLE existing_table TO backup_table, new_table TO existing_table;
This is a single, atomic operation... so it wouldn't be possible for a client to read from the table after it is emptied but before it is re-populated.
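As a sketch, with summary_table and big_log as stand-in names for your reporting table and the 350k+ row source, the event body could look like:

CREATE TABLE summary_table_new LIKE summary_table;

INSERT INTO summary_table_new
SELECT *
FROM big_log
WHERE log_date >= NOW() - INTERVAL 2 MONTH;

-- atomic swap: clients always see either the old or the new contents
RENAME TABLE summary_table TO summary_table_old,
             summary_table_new TO summary_table;

DROP TABLE summary_table_old;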
Alternately, you could change your TRUNCATE to a DELETE FROM (in MySQL, TRUNCATE causes an implicit commit, so it can't be rolled back inside a transaction the way a DELETE can), and then wrap it in a transaction along with the repopulating INSERT ... SELECT:
START TRANSACTION;
DELETE FROM YourTable;
INSERT INTO YourTable SELECT ...;
COMMIT;
Here is a chunk of the SQL I'm using for a Perl-based web application. I have a number of requests and each has a number of accessions, and each has a status. This chunk of code is there to update the table for every accession_analysis that shares all these fields for each accession in a request.
UPDATE accession_analysis
SET analysis_id = ? ,
reference_id = ? ,
status = ? ,
extra_parameters = ?
WHERE analysis_id = ?
AND reference_id = ?
AND status = ?
AND extra_parameters = ?
AND accession_id IN (
SELECT accession_id
FROM accessions
WHERE request_id = ?
)
I have changed the tables so that there's a status table for accession_analysis, so when I update, I update both accession_analysis and accession_analysis_status, which has status, status_text and the id of the accession_analysis, which is a not null auto_increment variable.
I have no strong idea of how to modify this code to allow for that. My first pass grabbed all the accessions and looped through them, filtering on all the fields and then updating. I didn't like that because it meant many database round trips with short SQL commands, which I understood to be bad, but I can't help thinking the only way to really do this is to go back to a Perl loop holding two simpler SQL statements.
Is there a way to do this in SQL that, with my relative SQL inexperience, I'm just not seeing?
The answer depends on which DBMS you're using. The easiest way is to create a trigger on one table that provides the logic of updating the other table. (For any DB newbies: a trigger is procedural code attached to a table at the DBMS (not application) layer that runs in response to an insert, update or delete on the table.) A similar, slightly less desirable method is to put the logic in a stored procedure and execute that instead of the update statement you're now using.
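As a rough illustration of the trigger route in MySQL (the table and column names below are guesses based on your description, so adjust them to your actual schema):

DELIMITER $$
CREATE TRIGGER accession_analysis_after_update
AFTER UPDATE ON accession_analysis
FOR EACH ROW
BEGIN
    -- assumes accession_analysis has an id column and the status table
    -- is accession_analysis_status(accession_analysis_id, status, status_text)
    INSERT INTO accession_analysis_status (accession_analysis_id, status, status_text)
    VALUES (NEW.id, NEW.status, CONCAT('status set to ', NEW.status));
END$$
DELIMITER ;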
If the DBMS you're using doesn't support either of these mechanisms, then there isn't a good way to do what you're after while guaranteeing transactional integrity. However if the problem you're solving can tolerate a timing difference in the two tables' updates (i.e. The data in one of the tables is only used at predetermined times, like reporting or some type of batched operation) you could write to one table (live) and create a separate process that runs when needed (later) to update the second table using data from the first table. The correctness of allowing data to be updated at different times becomes a large and immovable design assumption, however.
If this is mostly about connection speed, then one option you have is to write a stored procedure that handles the "double update or insert" transparently. See the manual for stored procedures:
http://dev.mysql.com/doc/refman/5.5/en/create-procedure.html
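A minimal sketch of such a procedure, with parameter and column names assumed from your question rather than your real schema:

DELIMITER $$
CREATE PROCEDURE update_analysis_and_status(
    IN p_analysis_id INT,
    IN p_status VARCHAR(50),
    IN p_status_text TEXT
)
BEGIN
    UPDATE accession_analysis
    SET status = p_status
    WHERE analysis_id = p_analysis_id;

    INSERT INTO accession_analysis_status (accession_analysis_id, status, status_text)
    VALUES (p_analysis_id, p_status, p_status_text);
END$$
DELIMITER ;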
Otherwise, you probably cannot do it in one statement; see the MySQL INSERT syntax:
http://dev.mysql.com/doc/refman/5.5/en/insert.html
The UPDATE syntax allows for multi-table updates (not in combination with INSERT, though):
http://dev.mysql.com/doc/refman/5.5/en/update.html
Each table needs its own INSERT / UPDATE in the query.
In fact, even if you create a view by JOINing multiple tables, when you INSERT into the view, you can only INSERT with fields belonging to one of the tables at a time.
The modifications made by the INSERT statement cannot affect more than one of the base tables referenced in the FROM clause of the view. For example, an INSERT into a multitable view must use a column_list that references only columns from one base table. For more information about updatable views, see CREATE VIEW.
Inserting data into multiple tables through an sql view (MySQL)
INSERT (SQL Server)
Same is true of UPDATE
The modifications made by the UPDATE statement cannot affect more than one of the base tables referenced in the FROM clause of the view. For more information on updatable views, see CREATE VIEW.
However, you can have multiple INSERTs or UPDATEs per query or stored procedure.
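For example, a single MySQL UPDATE can modify columns in both tables at once (illustrative names only), while the inserts still need their own statements:

UPDATE accession_analysis aa
JOIN accession_analysis_status s
    ON s.accession_analysis_id = aa.id
SET aa.status = 'complete',
    s.status_text = 'analysis finished'
WHERE aa.analysis_id = 42;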
I would like to store all changes to my tables (in MySQL). I created a table 'audit' to track all changes of all tables in one table (so I don't have to create audit-tables for each table).
I then created triggers for each table and a stored procedure that inserts a record into the audit-table. The parameters for the stored procedure are the tablename and the primary id. Now I'm able to track the insert/update/delete dates for each record in my database.
But I also would like to trace all changes to the DATA with this procedure. For this I'd have to find a way to use the OLD and NEW records from the triggers in the stored procedure.
Anybody know how to do this?
Maybe some kind of serialization of the OLD and NEW records into a string??
The only solution that I could derive was checking the OLD & NEW values of every field in my table.
BEGIN
-- record changes to table log_new
IF OLD.fieldA != NEW.fieldA
OR OLD.fieldB != NEW.fieldB
THEN
INSERT INTO log_new (