SQL Server: insert multiple rows and increment an int column - sql-server-2008

I have some rows in a table and need to transfer them to another table. In the destination table I also need to add a field with an incremental value.
I'm doing the following, but I know that something in the insert is wrong, because the incremented value (intCodInterno) is always the same:
INSERT INTO Emp_VISOT.dbo.TBL_GCE_ARTIGOS
( strCodigo ,
strDescricao ,
intCodInterno ,
intCodTaxaIvaCompra ,
intCodTaxaIvaVenda ,
strCodCategoria ,
strAbrevMedStk ,
strAbrevMedVnd ,
strAbrevMedCmp ,
bitAfectaIntrastat
)(
SELECT A.Artigo ,
a.Descricao ,
IDENT_CURRENT('Emp_VISOT.dbo.TBL_GCE_ARTIGOS')+1,
'3' ,
'3' ,
'1' ,
'Un' ,
'Un' ,
'Un' ,
'0'
FROM PRIVESAM.DBO.Artigo A)
What do I need to change so the value is incremented correctly?
Thank you.
EDIT:
I made a small change in the query and now it works.
I just wrapped the IDENT_CURRENT call in a SELECT inside brackets:
(SELECT IDENT_CURRENT('Emp_VISOT.dbo.TBL_GCE_ARTIGOS')+1)
I got all the rows that I need from the old table into the new one with the incremented value.

The IDENT_CURRENT('Emp_VISOT.dbo.TBL_GCE_ARTIGOS')+1 expression is evaluated once when you run the query, so all the rows get the same id.
The first solution is to iterate over the SELECT result with a loop construct such as a cursor and insert the incremented index yourself (you do that).
The second solution is to make that column in the destination table an identity column.
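If you would rather keep a single set-based INSERT than write a cursor, here is a sketch of the same idea (not taken from either answer, and assuming nothing else inserts into the destination table while it runs): seed a variable from the current maximum and add ROW_NUMBER() per row.
DECLARE @base int;

SELECT @base = ISNULL(MAX(intCodInterno), 0)
FROM Emp_VISOT.dbo.TBL_GCE_ARTIGOS;

INSERT INTO Emp_VISOT.dbo.TBL_GCE_ARTIGOS
        ( strCodigo, strDescricao, intCodInterno, intCodTaxaIvaCompra,
          intCodTaxaIvaVenda, strCodCategoria, strAbrevMedStk,
          strAbrevMedVnd, strAbrevMedCmp, bitAfectaIntrastat )
SELECT  A.Artigo,
        A.Descricao,
        @base + ROW_NUMBER() OVER (ORDER BY A.Artigo),  -- a different offset for every row
        '3', '3', '1', 'Un', 'Un', 'Un', '0'
FROM    PRIVESAM.DBO.Artigo A;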

Remove intCodInterno from the insert and use the SQL Server Identity property to automatically increment it for you.
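For example, only as a sketch (the column data types are assumptions since the question does not show them, and note that you cannot simply ALTER an existing plain int column into an identity column, so the property is shown as part of the table definition):
CREATE TABLE Emp_VISOT.dbo.TBL_GCE_ARTIGOS
(
    strCodigo           varchar(50),
    strDescricao        varchar(200),
    intCodInterno       int IDENTITY(1,1),  -- increments automatically on every insert
    intCodTaxaIvaCompra int,
    intCodTaxaIvaVenda  int,
    strCodCategoria     varchar(10),
    strAbrevMedStk      varchar(10),
    strAbrevMedVnd      varchar(10),
    strAbrevMedCmp      varchar(10),
    bitAfectaIntrastat  bit
);

-- intCodInterno is then omitted from the INSERT column list entirely
INSERT INTO Emp_VISOT.dbo.TBL_GCE_ARTIGOS
        ( strCodigo, strDescricao, intCodTaxaIvaCompra, intCodTaxaIvaVenda,
          strCodCategoria, strAbrevMedStk, strAbrevMedVnd, strAbrevMedCmp,
          bitAfectaIntrastat )
SELECT  A.Artigo, A.Descricao, '3', '3', '1', 'Un', 'Un', 'Un', '0'
FROM    PRIVESAM.DBO.Artigo A;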

IDENT_CURRENT won't update until the transaction commits, therefore its value remains constant until you insert.
Here are three options for fixing this issue:
Use some kind of counter (@newRowNum) such that for each row in your SELECT query, @newRowNum = @newRowNum + 1, and thus your intCodInterno number = IDENT_CURRENT() + @newRowNum. This would probably require a lot of hacking to get working, though. Don't recommend it.
Insert your rows sequentially using the same business logic you have now - it will be tremendously less performant, however. Don't recommend it.
Set that column in your destination table to be an identity column itself. This is by far the best way to do it.
If you need a custom identity function (I assume there's a reason you're not using an identity column now), you can create one using some of the steps outlined above: http://www.sqlteam.com/article/custom-auto-generated-sequences-with-sql-server

In my case, I inserted rows sequentially using the same business logic. I cannot use auto-increment as I also have to import old data into this column. Once you have imported the data, you may then change the column over to auto-increment.

Related

Finding updated records in SSIS -- to hash or not to hash?

I'm working on migrating data from a table in a DB2 database to our SQL Server database using SSIS. The table that I am pulling data from contains a respectable amount of data -- a little less than 100,000 records -- but it also has 46 columns.
I only want to update the rows that NEED to be updated, so I came to the conclusion that I could either use a Lookup Transformation, check all 46 columns, and redirect the "no matches" to be updated on the SQL table; or I could hash each row in the datasets after I read the data in at the beginning of my data flow task, and then use the hash values as a comparison later on when determining whether the rows are equal.
My question is: which is the better route to take? I like hashing them, but I'm not sure if that is the best route to take. Does anyone have any pearls of wisdom they'd like to share?
Why not both?
Generally speaking, there are two things we look for when doing an incremental load: does this exist, and if it exists, has it changed? If there's a single column, it's trivial. When there are many columns to check, that becomes quite the pain, especially if you're using SSIS to map all those columns and/or have to deal with worrying about NULLs.
I solve the multicolumn problem by cheating - I create two columns in all my tables: HistoricalHashKey and ChangeHashKey. The historical hash key covers all the business keys. The change hash key covers all the rest of the material columns (I'd exclude things like audit columns). We are not storing the concatenated values directly in our hash columns. Instead, we're going to "Math the stuff out of it" and apply a hashing algorithm called SHA-1. This algorithm takes all the input columns and returns a 20-byte output.
There are three caveats to using this approach. You must concatenate the columns in the same order every time. These will be case sensitive. Trailing space is significant. That's it.
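A quick way to see the case and trailing-space caveats in action (just a throwaway query; the literals are arbitrary):
-- All three return different hashes, so 'Red', 'red' and 'red '
-- would each be treated as a change.
SELECT  HASHBYTES('SHA1', 'Red')  AS UpperCase
,       HASHBYTES('SHA1', 'red')  AS LowerCase
,       HASHBYTES('SHA1', 'red ') AS TrailingSpace;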
In your tables, you would add those two columns as binary(20) NOT NULL.
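For example, something along these lines against the reference table used later in this answer (assuming the table is empty, or that you add defaults for the existing rows):
ALTER TABLE dbo.DimProduct
    ADD HistoricalHashKey binary(20) NOT NULL
,       ChangeHashKey     binary(20) NOT NULL;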
Set up
Your control flow would look something like this, and your data flow something like this (the accompanying screenshots are not reproduced here).
OLESRC Incremental Data
(Assume I'm sourced from Adventureworks2014, Production.Product) I'm going to use the CONCAT function from SQL Server 2012+ as it promotes all data types to string and is NULL safe.
SELECT
P.ProductID
, P.Name
, P.ProductNumber
, P.MakeFlag
, P.FinishedGoodsFlag
, P.Color
, P.SafetyStockLevel
, P.ReorderPoint
, P.StandardCost
, P.ListPrice
, P.Size
, P.SizeUnitMeasureCode
, P.WeightUnitMeasureCode
, P.Weight
, P.DaysToManufacture
, P.ProductLine
, P.Class
, P.Style
, P.ProductSubcategoryID
, P.ProductModelID
, P.SellStartDate
, P.SellEndDate
, P.DiscontinuedDate
, P.rowguid
, P.ModifiedDate
-- Hash my business key(s)
, CONVERT(binary(20), HASHBYTES('SHA1',
CONCAT
(
-- Having an empty string as the first argument
-- allows me to simplify building of column list
''
, P.ProductID
)
)
) AS HistoricalHashKey
-- Hash the remaining columns
, CONVERT(binary(20), HASHBYTES('SHA1',
CONCAT
(
''
, P.Name
, P.ProductNumber
, P.MakeFlag
, P.FinishedGoodsFlag
, P.Color
, P.SafetyStockLevel
, P.ReorderPoint
, P.StandardCost
, P.ListPrice
, P.Size
, P.SizeUnitMeasureCode
, P.WeightUnitMeasureCode
, P.Weight
, P.DaysToManufacture
, P.ProductLine
, P.Class
, P.Style
, P.ProductSubcategoryID
, P.ProductModelID
, P.SellStartDate
, P.SellEndDate
, P.DiscontinuedDate
)
)
) AS ChangeHashKey
FROM
Production.Product AS P;
LKP Check Existence
This query will pull back the stored HistoricalHashKey and ChangeHashKey from our reference table.
SELECT
DP.HistoricalHashKey
, DP.ChangeHashKey
FROM
dbo.DimProduct AS DP;
At this point, it's a simple matter to compare the HistoricalHashKeys to determine whether the row exists. If we match, we want to pull back the ChangeHashKey into our Data Flow. By convention, I name this lkp_ChangeHashKey to differentiate from the source ChangeHashKey.
CSPL Change Detection
The conditional split is also simplified. Either the two Change Hash keys match (no change) or they don’t (changed). That expression would be
ChangeHashKey == lkp_ChangeHashKey
OLE_DST StagedUpdates
Rather than use the OLE DB Command, create a dedicated table for holding the rows that need to be updated. OLE DB Command does not scale well as behind the scenes it issues singleton update commands.
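The staging table itself might look something like this (a sketch only: Stage.DimProduct is the name the update below assumes, and you only need the join key plus whatever columns you intend to update):
CREATE TABLE Stage.DimProduct
(
    HistoricalHashKey binary(20)   NOT NULL  -- join key back to dbo.DimProduct
,   ChangeHashKey     binary(20)   NOT NULL
,   Name              nvarchar(50) NOT NULL
,   ProductNumber     nvarchar(25) NOT NULL
    -- plus the rest of the material columns being updated
);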
SQL Perform Set Based Updates
After the data flow is complete, all the data that needs updating will be in our staging table. This Execute SQL Task simply updates the existing data matching on our business keys.
UPDATE
TGT
SET
Name = SRC.Name
, ProductNumber = SRC.ProductNumber
FROM
dbo.DimProduct AS TGT
INNER JOIN
Stage.DimProduct AS SRC
ON SRC.HistoricalHashKey = TGT.HistoricalHashKey;
-- If clustered on a single column and table is large, this will yield better performance
-- ON SRC.DimProductSK = TGT.DimProductSK;
From the comments
Why do I use dedicated INSERT and UPDATE statements since we have the shiny MERGE? Besides not remembering the syntax as easily, the SQL Server implementation can have some ... unintended consequences. They may be cornerish cases, but I'd rather not run into them with the solutions I deliver. Explicit INSERT and UPDATE statements give me the fine-grained control I want and need in my solutions. I love SQL Server and think it's a fantastic product, but the weird syntax coupled with known bugs keeps me from using MERGE anywhere but a certification exam.

Update MySQL without specifying column names

I want to update a MySQL row, but I do not want to specify all the column names.
The table has 9 columns and I always want to update the last 7 columns in the right order.
These are the Fields
id
projectid
fangate
home
thanks
overview
winner
modules.wallPost
modules.overviewParticipant
Is there any way I can update the last few columns without specifying their names?
With an INSERT statement this can be done pretty easily by doing this:
INSERT INTO `settings`
VALUES (NULL, ...field values...)
So I was hoping I could do something like this:
UPDATE `settings`
VALUES (NULL, ...field values...)
WHERE ...statement...
But unfortunately that doesn't work.
If the two first columns make up the primary key (or a unique index) you could use replace
So basically instead of writing
UPDATE settings
SET fangate = $fangate,
home = $home,
thanks = $thanks,
overview = $overview,
winner = $winner,
modules.wallPost = $modules.wallPost,
modules.overviewParticipant = $modules.overviewParticipant
WHERE id = $id AND projectId = $projectId
You will write
REPLACE INTO settings
VALUES ($id,
$projectId,
$fangate,
$home,
$thanks,
$overview,
$winner,
$modules.wallPost,
$modules.overviewParticipant)
Of course this only works if the row already exists; otherwise it will be created. Also, it will cause a DELETE and an INSERT behind the scenes, if that matters.
You can't. You always have to specify the column names, because UPDATE doesn't edit a whole row, it edits specified columns.
Here's a link with the UPDATE syntax:
http://dev.mysql.com/doc/refman/5.0/en/update.html
No. It works for the INSERT because even though you didn't specify the column names, you supplied values for all columns in the VALUES clause. With UPDATE, you need to specify which column each value is associated with.
UPDATE syntax requires the column names that will be modified.
Are you always updating the same table and columns?
In that case one way would be to define a stored procedure in your schema.
That way you could just do:
CALL update_settings(id, projectid, values_of_last_7 ..);
Although you would have to create the procedure first; check the MySQL web pages for how to do this, e.g.:
http://docs.oracle.com/cd/E17952_01/refman-5.0-en/create-procedure.html
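A rough sketch of such a procedure, using the column list from the question (the name update_settings, the parameter names, and the varchar types are all assumptions; the backticks assume the dots really are part of the column names):
DELIMITER //
CREATE PROCEDURE update_settings(
    IN p_id INT,
    IN p_projectid INT,
    IN p_fangate VARCHAR(255),
    IN p_home VARCHAR(255),
    IN p_thanks VARCHAR(255),
    IN p_overview VARCHAR(255),
    IN p_winner VARCHAR(255),
    IN p_wallPost VARCHAR(255),
    IN p_overviewParticipant VARCHAR(255)
)
BEGIN
    UPDATE settings
    SET fangate = p_fangate,
        home = p_home,
        thanks = p_thanks,
        overview = p_overview,
        winner = p_winner,
        `modules.wallPost` = p_wallPost,
        `modules.overviewParticipant` = p_overviewParticipant
    WHERE id = p_id AND projectid = p_projectid;
END //
DELIMITER ;

-- then simply:
-- CALL update_settings(1, 42, 'a', 'b', 'c', 'd', 'e', 'f', 'g');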
I'm afraid you can't avoid specifying the column names.
You can refer to the update documentation here.

Duplicate column and add an automatic extension with mySQL. How?

I have two columns in MySQL:
"part_no"
"pdf_link"
I need the "pdf_link" column to automatically grab/duplicate the "part_no" value and add a .pdf extension on the end.
For example: If part_no = 00-12345-998, then pdf_link = 00-12345-998.pdf
I need this to happen every time I insert.
I appreciate the help.
Erik
You can achieve this effect by using triggers, I think.
http://dev.mysql.com/doc/refman/5.0/en/trigger-syntax.html
CREATE TRIGGER ins_pdf BEFORE INSERT ON MY_TABLE
FOR EACH ROW
SET NEW.pdf_link = CONCAT(NEW.part_no, '.pdf');
Why store this extra computed information in the database? You can do this in the query when you pull it out, or, if needed, you could make a view that does it only as-needed.
Example pseudo query (my brain hurts right now, so this is only an example):
select concat(`part_no`, '.pdf') as `pdf_link` from `parts`;
If you really need this, you could use a trigger to duplicate the data and add the extra string.
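A minimal sketch of the view alternative mentioned above (parts_with_pdf is a hypothetical name; the table is assumed to be called parts, as in the example query):
CREATE VIEW parts_with_pdf AS
SELECT part_no,
       CONCAT(part_no, '.pdf') AS pdf_link   -- computed on read, never stored
FROM parts;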

MySQL add prefix to field table-wide

Basically I just decided to switch my primary ID to a "source" field, as I will be importing stuff from multiple sources. Now I'd like to make it clear where things come from, so I want to add a prefix to it, in the form portalname:formerID. I've tried
UPDATE pics SET source='nk:'+source WHERE 1=1
UPDATE pics SET source='nk:'+source WHERE faces > 0 (matches all records)
but every time phpMyAdmin returns 0 row(s) affected. ( Query took 0.0056 sec )
Any idea?
Use CONCAT() ( http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_concat ) to concatenate strings, not "+".
you may try to omit the where clause altogether.
UPDATE pics SET source= concat('nk:',source )
or better yet, add a new column 'portal_name' and populate that separately.
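For example (a sketch; the column type is a guess):
ALTER TABLE pics ADD COLUMN portal_name VARCHAR(32);

-- tag the existing rows with their source portal
UPDATE pics SET portal_name = 'nk';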

Does replace into have a where clause?

I'm writing an application and I'm using MySQL as the DBMS. We are downloading property offers and there were some performance issues. The old architecture looked like this:
A property is updated. If the number of affected rows is not 1, then the update is not considered successful; otherwise the update query solves our problem.
If the update was not successful and the number of affected rows is more than 1, we have duplicates and we delete all of them. After we delete duplicates if needed, if the update was still not successful, an insert happens. This architecture was working well, but there were some speed issues, because properties are deleted if they have not been updated for 15 days.
Theoretically the main problem is deleting properties, because some properties are alive for months and the indexes are very far from each other (we are talking about 500,000+ properties).
Our host told me to use replace into instead of deleting properties, and all deprecated properties should be considered DEAD. I've done this, but problems started to occur because of a syntax error, and I couldn't find an example anywhere of replace into with a where clause (I'd like to replace a DEAD property with the new property instead of deleting the old property and inserting a new one, to assure optimization). My query looked like this:
replace into table_name(column1, ..., columnn) values(value1, ..., valuen) where ID = idValue
Of course, I've calculated idValue and handled everything, but I had a syntax error. I would like to know whether I'm wrong and there is a where clause for replace into.
I've found an alternative solution which is even better than replace into (simply using an update query), because deletes happen behind the scenes if I use replace into, but I would like to know if I'm wrong when I say that replace into doesn't have a where clause. For more reference, see this link:
http://dev.mysql.com/doc/refman/5.0/en/replace.html
Thank you for your answers in advance,
Lajos Árpád
I can see that you have solved your problem, but to answer your original question:
REPLACE INTO does not have a WHERE clause.
The REPLACE INTO syntax works exactly like INSERT INTO, except that any old rows with the same primary or unique key are automatically deleted before the new row is inserted.
This means that instead of a WHERE clause, you should add the primary key to the values being replaced to limit your update.
REPLACE INTO myTable (
myPrimaryKey,
myColumn1,
myColumn2
) VALUES (
100,
'value1',
'value2'
);
...will provide the same result as...
UPDATE myTable
SET myColumn1 = 'value1', myColumn2 = 'value2'
WHERE myPrimaryKey = 100;
...or more exactly:
DELETE FROM myTable WHERE myPrimaryKey = 100;
INSERT INTO myTable(
myPrimaryKey,
myColumn1,
myColumn2
) VALUES (
100,
'value1',
'value2'
);
In your documentation link, they show three alternate forms of the replace command. Even though it is elided there, the only one that can accept a where clause is the third form, with the trailing SELECT.
replace seems like overkill relative to update if I am understanding your task properly.
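To illustrate that third form (a sketch only: table_name, the column list, source_table and idValue are placeholders; the WHERE clause belongs to the SELECT, and the replacement is still keyed on the table's primary or unique key):
REPLACE INTO table_name (column1, column2)
SELECT column1, column2
FROM   source_table
WHERE  ID = idValue;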