We are in the process of bringing a partner site onto our ERP system. They have a notes table that I am trying to move into our notes table by scripting it in T-SQL, since there are hundreds of records. I am using SQL Server 2008 R2. I keep getting the error: The INSERT statement conflicted with the CHECK constraint "ObjectNotesCk4". The conflict occurred in database "PLT_NAMALT_App", table "dbo.ObjectNotes".
Some details: our ObjectNotes table requires the RowPointer from the PO, the PK from the SpecificNotes table, and some additional fields that I'm populating with constants. So first I capture the RowPointer from the PO table using a cursor, then I insert into the SpecificNotes table matching on PO number and try to capture the PK using OUTPUT INTO a table variable. Then I try to insert into the ObjectNotes table using the value captured by the OUTPUT clause. Here is my query (FYI, I am limiting it to one PO number and one line number for verification):
DECLARE @SpecificToken TABLE (SpecificTokenID TokenType, NoteContent OleObjectType, NoteDesc LongDescType)
DECLARE @RowPointer RowPointerType

DECLARE MY_CURSOR CURSOR
LOCAL STATIC FOR
SELECT RowPointer FROM poitem

OPEN MY_CURSOR
FETCH NEXT FROM MY_CURSOR INTO @RowPointer
WHILE @@FETCH_STATUS = 0
BEGIN
    INSERT INTO SpecificNotes (NoteContent, NoteDesc)
    OUTPUT inserted.SpecificNoteToken, inserted.NoteContent, inserted.NoteDesc
    INTO @SpecificToken
    SELECT LongNote, 'Visual Note'
    FROM purc_line_notes vb
    JOIN poline slp
        ON vb.PONUM = slp.po_num
        AND vb.LINE_NO = slp.po_line
    WHERE slp.po_num = 'P000060883'
        AND slp.po_line = '1'
        AND slp.rowpointer = @RowPointer

    INSERT INTO ObjectNotes (RefRowPointer, NoteHeaderToken, SpecificNoteToken, NoteType)
    VALUES (@OuterID, 5, (SELECT SpecificTokenID FROM @SpecificToken), 0)

    FETCH NEXT FROM MY_CURSOR INTO @RowPointer
END
CLOSE MY_CURSOR
DEALLOCATE MY_CURSOR
Also, when testing what the values are in @SpecificToken, SpecificNoteToken is 0 but it should be 185 (I checked the value in the SpecificNotes table).
In SpecificNotes, SpecificNoteToken is the Primary Key, which is why I do my insert into that table first.
In ObjectNotes, ObjectNoteToken is the Primary Key but SpecificNoteToken is a foreign key. RefRowPointer is a foreign key to the poline table (slp).
This is my first time using OUTPUT INTO, and I'm not sure it's the right way to go, since I am reading mixed things about being able to use it with a primary key. I also apologize for being wordy; I wanted to make sure I described my situation thoroughly.
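For reference, here is a minimal, self-contained sketch of the OUTPUT ... INTO pattern with an identity primary key. The table and column names here are invented purely for illustration, not the real ERP schema:

-- Hypothetical demo table, only to show the OUTPUT ... INTO pattern
CREATE TABLE DemoNotes (NoteToken INT IDENTITY(1,1) PRIMARY KEY, NoteText VARCHAR(100));

DECLARE @Captured TABLE (NoteToken INT, NoteText VARCHAR(100));

INSERT INTO DemoNotes (NoteText)
OUTPUT inserted.NoteToken, inserted.NoteText   -- the generated identity value is visible here
INTO @Captured
VALUES ('first note'), ('second note');

SELECT NoteToken, NoteText FROM @Captured;     -- returns the generated key values

In that sketch the OUTPUT clause sees the identity value that was just generated, which is the behavior the script above relies on.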
Good morning,
I found the solution to my problem, but I thought I'd share it anyway, as it might be useful for future projects/problems. I have a simple SQL table below which will be the foreign key of my much bigger table of stock market price data.
CREATE TABLE [StockMarket]
(
    [ID] INT NOT NULL IDENTITY(1,1) PRIMARY KEY,
    [ReutersRIC] VARCHAR(50),
    [BloombergTicker] VARCHAR(50),
    [YahooSymbol] VARCHAR(50),
    [ISIN] VARCHAR(50)   -- referenced by the trigger and the tests below
    /* other irrelevant columns here */
)
With that in mind, I am trying to add robustness to the structure, as I will be adding data from different data sources. For each underlying time series on the financial markets there are multiple names, depending on which data provider you use. I wanted to avoid having multiple rows from different data sources representing the same time series. I needed a trigger which behaves as follows:
1) If the inserted values are not yet in the table, the row is simply inserted.
2) If I insert a row for which at least one of [ReutersRIC], [BloombergTicker], [YahooSymbol], [ISIN] already exists, that existing row is updated instead.
2.1) The update should only fill in columns that are currently NULL (existing non-NULL entries are kept).
My question was how this could be achieved in the best possible way. It took me some time, but I wanted to share the answer below for future reference.
Here is my solution:
CREATE TRIGGER UniqueStockMarketInserts ON [dbo].[StockMarket]
INSTEAD OF INSERT
AS
BEGIN
    DECLARE @StockId int

    SELECT @StockId = S.ID
    FROM [StockMarket] S INNER JOIN [INSERTED] I
        ON S.YahooSymbol     = I.YahooSymbol     OR
           S.ReutersRIC      = I.ReutersRIC      OR
           S.BloombergTicker = I.BloombergTicker OR
           S.ISIN            = I.ISIN

    IF @StockId IS NULL
    BEGIN
        INSERT INTO [StockMarket] (YahooSymbol, ReutersRIC, BloombergTicker, ISIN)
        SELECT I.YahooSymbol, I.ReutersRIC, I.BloombergTicker, I.ISIN
        FROM [INSERTED] I
    END
    ELSE
    BEGIN
        UPDATE S SET
            S.ISIN            = ISNULL(S.ISIN, I.ISIN),
            S.YahooSymbol     = ISNULL(S.YahooSymbol, I.YahooSymbol),
            S.ReutersRIC      = ISNULL(S.ReutersRIC, I.ReutersRIC),
            S.BloombergTicker = ISNULL(S.BloombergTicker, I.BloombergTicker)
        FROM INSERTED I, StockMarket S
        WHERE S.ID = @StockId
    END
END
GO
Here is the test I ran on it with real-life examples (except the ISIN which I made up):
/*-----------TESTING THE TRIGGER--------*/
INSERT INTO StockMarket (ReutersRIC, BloombergTicker) VALUES ('G.TO', 'GG US Equity');
INSERT INTO StockMarket (YahooSymbol, BloombergTicker, ISIN) VALUES ('GOOG', 'GOOG US Equity','US123454321');
INSERT INTO StockMarket (YahooSymbol, ISIN) VALUES ('RDSA','NL111112222')
INSERT INTO StockMarket (YahooSymbol, ReutersRIC) VALUES ('GG', 'G.TO'); /*this should update as per trigger*/
INSERT INTO StockMarket (ReutersRIC, ISIN) Values ('GOOG.OQ','US123454321'); /*this should update as per trigger*/
INSERT INTO StockMarket (YahooSymbol, ReutersRIC) Values ('RDSA', 'RDSa.L'); /*this should update as per trigger*/
And the results:
Hope this can help someone in the future. Happy coding
I have a controller to save a record.
My table contains the fields shown below.
This is a must (it has to repeat in a loop).
I want to insert a record a single time in a table for an Employee Id, and it should not repeat again. But the same employee can have multiple Batch Ids and multiple Course Ids.
If I make the Employee Id unique, that does not work either, because then I cannot insert another record for the same employee.
This process should repeat inside the loop, and I need to get the last inserted ID from the table so I can assign the number of students in another table. Everything works fine if I create a procedure in MySQL and call it, but my Linux server is not executing the procedure and throws a MySQL error.
Here are my queries:
$insert_staff_assign = "insert into staff_assign
(`main_course_id`, `main_batch_id`, `section`, `semester_course_id`, `emp_mas_staff_id`, `emp_category`)
VALUES
(:main_course_id, :main_batch_id, :section_id, :semester_course_id, :emp_mas_staff_id, :emp_category)
ON DUPLICATE KEY UPDATE
main_course_id=:main_course_id, main_batch_id=:main_batch_id, section=:section_id, semester_course_id=:semester_course_id, emp_mas_staff_id=:emp_mas_staff_id, emp_category=:emp_category ";
insert into staff_assign
(`main_course_id`, `main_batch_id`, `section`, `semester_course_id`, `emp_mas_staff_id`, `emp_category`)
SELECT * FROM (
SELECT
:main_course_id, :main_batch_id, :section_id, :semester_course_id, :emp_mas_staff_id, :emp_category
) AS tmp WHERE :emp_mas_staff_id NOT IN (
SELECT emp_mas_staff_id FROM staff_assign WHERE emp_mas_staff_id = $save_emp_mas_staff_id
) LIMIT 1
The above are my queries.
Please suggest a query that gets around this problem.
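For reference, here is a minimal sketch of the "insert only if this employee has no row yet" pattern that the last query above seems to be aiming for. The bind names are taken from that query; this is just an illustration, not the fix the author eventually used:

INSERT INTO staff_assign
    (`main_course_id`, `main_batch_id`, `section`, `semester_course_id`, `emp_mas_staff_id`, `emp_category`)
SELECT :main_course_id, :main_batch_id, :section_id, :semester_course_id, :emp_mas_staff_id, :emp_category
FROM DUAL
WHERE NOT EXISTS (
    -- skip the insert when a row for this employee already exists
    SELECT 1 FROM staff_assign WHERE emp_mas_staff_id = :emp_mas_staff_id
);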
I found the answer to the above question.
It was a mysql.proc problem.
In my MySQL installation the mysql.proc table was corrupted. That is the reason it did not execute the above query.
To fix the issue you need to update MySQL on the Linux server.
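For what it's worth, a quick way to confirm whether mysql.proc is the culprit (this assumes sufficient privileges, and that you are on an older MySQL version where the system tables are MyISAM, so REPAIR TABLE applies; running mysql_upgrade after an upgrade also rebuilds the system tables):

-- Check the stored-procedure system table for corruption
CHECK TABLE mysql.proc;

-- On MyISAM-based system tables, a repair can be attempted directly
REPAIR TABLE mysql.proc;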
I have a table with 3 fields: Id (PK, AI), Name (varchar(36)), LName (varchar(36)).
I have to insert Name and LName; the Id is inserted automatically because of its constraints.
Is there a way to make the auto-increment Id jump (skip) a value when it reaches 6?
For instance, do this 7 times:
Insert Into table(Name, LName) Values ('name1', 'lname1') "and jump the Id to 7 if it is going to be 6"
It may sound stupid to do this, but I have the doubt.
Also: jump over and do not record Id 6;
record only 1-5, 7, 8, 9 and so on.
What I want to achieve starts from a Union:
Select * From TableNames
Union All
Select * From TableNames_general
In TableNames_general I insert its first value so that, when the user sees the table for the first time, the record I inserted is displayed.
The problem comes when the user inserts a new record: if the Id of the inserted record is the same as the one I have inserted, it will be duplicated. That is why, when the user inserts a record and that Id already exists, I want to skip that Id. I must have different Ids because of the relationships with the child tables.
An identity column generates values for you, and it is best left that way. You have the ability to insert specific values into an identity column, but it is best to leave it alone and let it generate values for you.
Imagine you have inserted a value explicitly in an identity column and later on the identity column generates the same value for you: you end up with duplicates.
If you want to control the values in that column, why bother with an identity column at all?
Well, this is not best practice, but you can jump to a specific number by doing the following.
MS SQL Server 2005 and later:
-- Create test table
CREATE TABLE ID_TEST(ID INT IDENTITY(1,1), VALUE INT)
GO
-- Insert values
INSERT INTO ID_TEST (VALUE) VALUES
(1),(2),(3)
GO
-- Set IDENTITY_INSERT on to insert values explicitly into the identity column
SET IDENTITY_INSERT ID_TEST ON;
INSERT INTO ID_TEST (ID, VALUE) VALUES
(6, 6),(8,8),(9,9)
GO
-- Set identity insert off
SET IDENTITY_INSERT ID_TEST OFF;
GO
-- First, reseed the identity column to the smallest value in your table
-- (below it is reseeded to 0)
DBCC CHECKIDENT ('ID_TEST', RESEED, 0);
-- Executing the same command without a seed value resets it to the
-- next highest identity value
DBCC CHECKIDENT ('ID_TEST', RESEED);
GO
-- final insert
INSERT INTO ID_TEST (VALUE) VALUES
(10)
GO
-- now select data from table and see the gap
SELECT * FROM ID_TEST
If you query the database to get the last inserted ID, you can check whether you need to increment it, using a parameter in the query to set the correct ID.
If you use MS SQL, you can do the following:
Before you insert, check the current ID; if it's 5, then:
Set IDENTITY_INSERT to ON
Insert your data with ID = 7
Set IDENTITY_INSERT to OFF
You might also get away with the following scenario:
check the current ID;
if it's 5, run DBCC CHECKIDENT (Table, RESEED, 6); it will reseed the table, and in this case your next identity will be 7.
If you're checking the current identity just after an INSERT, you can use SELECT @@IDENTITY or SELECT SCOPE_IDENTITY() for better results (as rcdmk pointed out in the comments).
Otherwise you can just use: SELECT MAX(Id) FROM Table
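A minimal T-SQL sketch of that check-and-reseed approach (dbo.MyTable, Name, and LName are placeholders standing in for the table described in the question):

-- Last identity value generated for the table
DECLARE @CurrentId INT = CONVERT(INT, IDENT_CURRENT('dbo.MyTable'));

IF @CurrentId = 5
    DBCC CHECKIDENT ('dbo.MyTable', RESEED, 6);   -- next generated Id will be 7, so 6 is skipped

INSERT INTO dbo.MyTable (Name, LName) VALUES ('name7', 'lname7');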
There's no direct way to influence the AUTO_INCREMENT to "skip" a particular value, or values on a particular condition.
You might think of handling this in an AFTER INSERT trigger, but an AFTER INSERT trigger can't update the values of the row that was just inserted, and I don't think it can make any modifications to the table affected by the statement that fired the trigger.
A BEFORE INSERT trigger won't work either, because the value assigned to an AUTO_INCREMENT column is not available in a BEFORE INSERT trigger.
I don't believe there's a way to get SQL Server IDENTITY to "skip" a particular value either.
UPDATE
If you need "unique" id values between two tables, there's a rather ugly workaround with MySQL: roll your own auto_increment behavior using triggers and a separate table. Rather than defining your tables with AUTO_INCREMENT attribute, use a BEFORE INSERT trigger to obtain a value.
If an id value is supplied, and it's larger than the current maximum value from the auto_increment column in the dummy auto_increment_seq table, we'd need to either update that row, or insert a new one.
As a rough outline:
CREATE TABLE auto_increment_seq
(id INT NOT NULL PRIMARY KEY AUTO_INCREMENT) ENGINE=MyISAM;
DELIMITER $$

CREATE TRIGGER TableNames_bi
BEFORE INSERT ON TableNames
FOR EACH ROW
BEGIN
  DECLARE li_new_id  INT UNSIGNED;
  DECLARE li_max_seq INT UNSIGNED;
  IF ( NEW.id = 0 OR NEW.id IS NULL ) THEN
    -- no id supplied: generate one from the sequence table
    INSERT INTO auto_increment_seq (id) VALUES (NULL);
    SELECT LAST_INSERT_ID() INTO li_new_id;
    SET NEW.id = li_new_id;
  ELSE
    -- explicit id supplied: record it in the sequence table if it is higher than anything handed out so far
    SELECT MAX(id) INTO li_max_seq FROM auto_increment_seq;
    IF ( NEW.id > li_max_seq ) THEN
      INSERT INTO auto_increment_seq (id) VALUES (NEW.id);
    END IF;
  END IF;
END$$

CREATE TRIGGER TableNames_ai
AFTER INSERT ON TableNames
FOR EACH ROW
BEGIN
  DECLARE li_max_seq INT UNSIGNED;
  SELECT MAX(id) INTO li_max_seq FROM auto_increment_seq;
  IF ( NEW.id > li_max_seq ) THEN
    INSERT INTO auto_increment_seq (id) VALUES (NEW.id);
  END IF;
END$$

DELIMITER ;
The id column in the table could be defined something like this:
TableNames
( id INT UNSIGNED NOT NULL DEFAULT 0 PRIMARY KEY
COMMENT 'populated from auto_increment_seq.id'
, ...
You could create an identical trigger for the other table as well, so the two tables are effectively sharing the same auto_increment sequence. (With less efficiency and concurrency than an Oracle SEQUENCE object would provide.)
IMPORTANT NOTES
This doesn't really ensure that the id values between the tables are actually kept unique. That would really require a query of the other table to see whether the id value exists or not; and if running with the InnoDB engine, under some transaction isolation levels we might be querying a stale (as in, consistent from the point in time at the start of the transaction) version of the other table.
And absent some additional (concurrency-killing) locking, the approach outlined above is subject to a small window of opportunity for a "race" condition with concurrent inserts... the SELECT MAX() from the dummy seq table, followed by the INSERT, allows a small window for another transaction to also run a SELECT MAX() and return the same value. The best we can hope for (I think) is for an error to be thrown due to a duplicate key exception.
This approach requires the dummy "seq" table to use the MyISAM engine, so we can get an Oracle-like AUTONOMOUS TRANSACTION behavior; if inserts to the real tables are performed in the context of a REPEATABLE READ or SERIALIZABLE transaction isolation level, reads of the MAX(id) from the seq table would be consistent from the snapshot at the beginning of the transaction, we wouldn't get the newly inserted (or updated) values.
We'd also really need to consider the edge case of an UPDATE of row changing the id value; to handle that case, we'd need BEFORE/AFTER UPDATE triggers as well.
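For completeness, here is a rough, untested sketch of what the AFTER UPDATE counterpart mentioned above might look like (same assumptions as the INSERT triggers; a BEFORE UPDATE trigger could validate or adjust NEW.id in a similar way):

DELIMITER $$
CREATE TRIGGER TableNames_au
AFTER UPDATE ON TableNames
FOR EACH ROW
BEGIN
  DECLARE li_max_seq INT UNSIGNED;
  -- if the updated id is beyond anything the sequence table has handed out,
  -- record it so future generated ids don't collide with it
  SELECT MAX(id) INTO li_max_seq FROM auto_increment_seq;
  IF ( NEW.id > li_max_seq ) THEN
    INSERT INTO auto_increment_seq (id) VALUES (NEW.id);
  END IF;
END$$
DELIMITER ;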
I'm trying to generate a bunch of discount codes. These have foreign key constraints.
At the moment I have:
INSERT INTO code (title, code, `desc`) VALUES ('code1', 'XPISY9', 'test code');

INSERT INTO code_details (code_id, used, attempts)
VALUES ((SELECT code_id FROM code WHERE code = 'XPISY9'), 0, 0);
The code_id in code_details is a foreign key to code_id in the code table.
What would be the best way to create a loop where I can generate a set of these values (around 100)? I would need the code to be a non-repeating random value.
Any help would be appreciated.
Thanks
Once you have your 100 or so records in the code table, you can add the code details in one statement:
INSERT INTO code_details (code_id, used, attempts)
SELECT code_id, 0, 0
FROM code;
Generating the code records in the first place is another matter and is probably best done by using some other tool to generate 100 insert statements in a text file which you can then execute.
I've seen that done with every scripting language possible - choose your favourite. I've even seen Excel used, with a column for the id and a string formula to generate the insert statements.
Thanks for the help guys. I decided to put together a procedure for this and it worked great:
DELIMITER $$
CREATE PROCEDURE vouchergen(IN length INT(10), IN duration VARCHAR(20), IN sponsor VARCHAR(20), IN amount INT(10))
BEGIN
  DECLARE i INT DEFAULT 1;
  WHILE (i < amount) DO
    -- random, sponsor-prefixed code, e.g. APPL3F9A2C
    SET @vcode = CONCAT(BINARY sponsor, UPPER(SUBSTRING(MD5(RAND()) FROM 1 FOR 6)));
    INSERT INTO code (title, code, `desc`)
    VALUES (CONCAT(sponsor, i), @vcode, CONCAT(length, ' ', duration));
    INSERT INTO code_details (code_id, used, attempts)
    VALUES ((SELECT code_id FROM code WHERE code = @vcode), 0, 0);
    SET i = i + 1;
  END WHILE;
END$$
DELIMITER ;
I can then call this to generate however many I want:
CALL vouchergen(1, 'week', 'APPL', 400);
CALL vouchergen(1, 'month', 'APPL', 100);
CALL vouchergen(1, 'day', 'APPL', 200);
I had posted other questions relating to this problem, but haven't had any responses to directly address the issue of multiple row data imports from XLS. I'm an infrequent user of SQL or DBs in general, so my background/experience is limited in regard to writing these queries. If there is an easier or more direct approach to reach my goal, I'm certainly open to them. I don't mean to overpost or anything, but this site seems to be the most helpful (thank you everyone who has replied to my other post).
From some posts I have looked at, I understand that I have a working set-based query/trigger (since multiple rows do get imported). Ultimately I only need to import data into the parent table, and the child table can be populated with static values and or values from the parent table, but the PK/FK relationship needs to be maintained. And this is what I seem to have the most trouble with when more than 1 row of data is imported from XLS.
I have set up a trigger to insert values into a child table when a insert is executed on the parent table. The query executes correctly however I am unable to have the FK match the PK when multiple rows of data are inserted. The FK always has the ID of the last row inserted in the parent table. I have tried several approaches from other forum posts (here and on other sites) but always get errors.
Here is my updatePgVer Trigger code:
ALTER TRIGGER [updatePgVer]
ON [prototype].[dbo].[PageVersion]
FOR INSERT AS
BEGIN
SET NOCOUNT ON;
-- Insert into PageHistory
INSERT
INTO [prototype].[dbo].[PageHistory] ([VersionID], [Date], [Action], [Who], [StateId], [Owner])
SELECT
@@IDENTITY
, GETDATE()
, 'created'
, 'xls_user'
, [StateID]
, 'xls_user'
FROM inserted
END
And the query used to insert into the parent table:
INSERT INTO [prototype].[dbo].[PageVersion] ([Number], [PageId], [Properties], [StateId], [Language], [SearchText], [PageMetaDescription], [PageMetaKeyWords], [PageTypeId], [Name], [Title], [Owner], [Admin], [ShowInMenu])
SELECT [Number], [PageId], [Properties], [StateId], [Language], [SearchText], [PageMetaDescription], [PageMetaKeyWords], [PageTypeId], [Name], [Title], [Owner], [Admin], [ShowInMenu]
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0', 'Excel 8.0;Database=C:\test_import.xls', 'SELECT * FROM [Query$]');
The only other idea I have would be to create some sort of loop that goes through each row and imports one at a time, so that @@IDENTITY will always match. However, the examples I have looked at seem hard to apply to my import.
The value for the VersionID column, which appears to be the IDENTITY column, is available in the inserted table. You can reference it in your trigger like this:
INSERT
INTO [prototype].[dbo].[PageHistory] ([VersionID], [Date], [Action], [Who], [StateId], [Owner])
SELECT
[VersionID]
, GETDATE()
, 'created'
, 'xls_user'
, [StateID]
, 'xls_user'
FROM inserted
If you want to see what data is available from inserted during the INSERT, temporarily put this in your trigger:
SELECT *
FROM inserted
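If it helps, here is a quick sanity check to run after a multi-row import (assuming, as above, that VersionID is the identity column on PageVersion): it lists any PageVersion rows that did not get a matching PageHistory row.

-- Rows returned here are PageVersion records with no history entry
SELECT pv.[VersionID]
FROM [prototype].[dbo].[PageVersion] pv
LEFT JOIN [prototype].[dbo].[PageHistory] ph
       ON ph.[VersionID] = pv.[VersionID]
WHERE ph.[VersionID] IS NULL;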