FireDAC, Array DML, SQL Server, and IDENTITY_INSERT - sql-server-2008

I'm trying to export data to SQL Server using the FireDAC Array DML feature. In the destination table there is an IDENTITY column, and I need to insert an explicit value into it.
But my query fails with the following error message:
SQL state: 23000. Native code: 544. Message: [Microsoft][SQL Server
Native Client 11.0][SQL Server]Cannot insert explicit value for
identity column in table 'dic_cities' when IDENTITY_INSERT is set to
OFF.
The destination table is defined as follows:
CREATE TABLE dbo.dic_cities (
id int IDENTITY(1,1) PRIMARY KEY,
city_name varchar(50) NOT NULL
)
My code:
MyFDQuery.Connection.ExecSQL('SET IDENTITY_INSERT dbo.dic_cities ON');
MyFDQuery.SQL.Text := 'INSERT INTO dbo.dic_cities (id, city_name) VALUES (:id, :city_name)';
//Populating parameters and preparing the query
{...}
//Execute Array DML query with batch size of 100
MyFDQuery.Execute(100, 0);
//Finally, set IDENTITY_INSERT off for the destination table
MyFDQuery.Connection.ExecSQL('SET IDENTITY_INSERT dbo.dic_cities OFF');
I must note that everything works well when I use a regular TFDQuery with parameters (i.e. when not using the Array DML feature), but it fails with Array DML.
I have also used Array DML against several other DBMSs for years, in the same way as the code above, with success.
So, how can the Array DML feature be used to insert explicit values into a SQL Server IDENTITY column?

Update#2: See https://social.msdn.microsoft.com/Forums/sqlserver/en-US/e652377d-0607-45ca-b4a0-274361bff85a/how-to-set-identityinsert-in-dynamic-sql?forum=transactsql
I haven't fully digested it yet, but the OP's problem seems very similar.
I've tried constructing the SQL to use EXEC
FDQuery1.SQL.Text := 'EXEC (' + ''''+ sSetIIOn + ';' + sInsert + ';' + sSetIIOff + '''' + ')';
to avoid sp_executesql being used, but unfortunately FD then cannot parse the SQL properly so it produces an "argument out of range" error when setting up the parameters.
Update: Curiouser and curiouser...
The following code executes without error on SS2014 and inserts the expected 10 rows:
const
sEmptyTable = 'delete from dbo.identtest';
sSetIIOn = 'set identity_insert dbo.identtest ON';
sSetIIOff = 'set identity_insert dbo.identtest OFF';
sSelect = 'select * from dbo.identtest';
sInsert = 'insert dbo.identtest (ID, Name) values(%d, %s)';
procedure TForm2.TestIdentityInsert;
var
i : Integer;
S : String;
begin
FDQuery1.ExecSql(sEmptyTable);
FDQuery1.ExecSql(sSetIIOn);
for i := 1 to 10 do begin
S := Format(sInsert, [i, '''Name' + IntToStr(i) + '''']);
FDQuery1.ExecSQL(S);
end;
FDQuery1.ExecSql(sSetIIOff);
FDQuery1.Sql.Text := sSelect;
FDQuery1.Open;
end;
procedure TForm2.Button1Click(Sender: TObject);
begin
TestIdentityInsert;
end;
However, replacing the for loop by
FDQuery1.SQL.Text := sSetIIOn + ';' + sInsert + ';' + sSetIIOff;
FDQuery1.Params.ArraySize := Rows;
for i := 0 to Rows - 1 do begin
FDQuery1.Params[0].AsIntegers[i] := i;
FDQuery1.Params[1].AsStrings[i] := 'Name' + IntToStr(i);
end;
produces the exception you quote. I've verified using SSMS Profiler that the SQL sent to the server seems to be correct (and not, for instance, being mangled by the MDAC layer, as sometimes happens):
exec sp_executesql N'set identity_insert dbo.identtest ON;insert dbo.identtest values(@P1, @P2);set identity_insert dbo.identtest OFF',N'@P1 int,@P2 nvarchar(4000)',0,N'Name0' [etc, repeated 9 times]
So the question seems to be: why doesn't executing via sp_executesql respect the IDENTITY_INSERT setting, and is there another way that does?
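For what it's worth, the traced batch can be replayed by hand against the identtest table used above; a minimal sketch (an assumption on my part, not part of the original test: if this succeeds in SSMS, the failure is specific to the Array DML execution path rather than to the SQL text itself):
-- replay the traced statement manually to see whether the server honours
-- SET IDENTITY_INSERT inside a single sp_executesql batch
EXEC sp_executesql
N'set identity_insert dbo.identtest ON;
insert dbo.identtest (ID, Name) values (@P1, @P2);
set identity_insert dbo.identtest OFF',
N'@P1 int, @P2 nvarchar(4000)',
0, N'Name0';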

Related

Parse JSON And Insert Into MySQL

I am receiving JSON data on my server from each client. I have three main tables: datatypes, templaricustomers and mqttpacket.
The data types come from the JSON variable names, and I keep them in the database.
As I am a beginner in MySQL, I am trying to make a loop and insert the parsed JSON into the related tables.
CREATE DEFINER=`root`@`localhost` PROCEDURE `SP_INSERT_DATA`(
IN `incoming_data` TEXT,
IN `value_array` TEXT,
IN `customer_id` INT
)
LANGUAGE SQL
NOT DETERMINISTIC
CONTAINS SQL
SQL SECURITY DEFINER
COMMENT ''
BEGIN
DECLARE i INT;
DECLARE value_iteration VARCHAR(50);
DECLARE lcl_data_type_id INT;
SET i = 1;
WHILE (LOCATE(',', value_array) > 0)
DO
SET @arr_data_type_name = SUBSTRING_INDEX(value_array,',',i);
SET value_array = SUBSTRING(value_array, LOCATE(',',value_array) + 1);
SELECT JSON_EXTRACT(@incoming_data, @arr_data_type_name) INTO value_iteration;
SET @arr_data_type_name := SUBSTRING_INDEX(@arr_data_type_name, ".", -1);
SELECT id INTO lcl_data_type_id FROM test_database.datatypes WHERE datatypes.data_name = @arr_data_type_name LIMIT 1;
INSERT INTO test_database.mqttpacket (data_type_id,inserted_time,customer_id,data_value) VALUES(lcl_data_type_id,NOW(),customer_id,value_iteration);
SET i = i+1;
END WHILE;
END
An example of incoming_data JSON is:
{"d": {"subcooling": 6,"B1": 382,"B2": 386,"B3": 526,"B4": 361,"B5": 713,"B6": 689,"B7": 386,"B8": 99,"Discharge": 663,"Suction": 111,"High_Pressure": 225,"Low_Pressure": 78,"Evaporation": 31,"Condensation": 388,"MAX_CMP_SPEED": 950,"Thermal_Limit": 950,"SH": 78,"EEV_pct": 571,"COP": 52,"DSH": 272,"Water Flux": 713,"Fan Power": 239,"Delta T to Start": 0,"Delta P to Start": 60,"CMP_ROTOR_RPS": 430,"SET_CH_FLASH": 120,"SET_HP_FLASH": 500,"SET_DHW_FLASH": 500,"Defrosting": 0,"B8_AVERAGE": 42,"SET_PLANT": 0,"SET_CH_BMS": 430,"SET_HP_BMS": 382,"SET_DHW_BMS": 510,"SET_ACTIVE": 402,"SET_DSH": 323,"EEV_INJ_pct": 0,"LPT": 0,"HPT": 0,"PLANT_MODE_MANUAL": 0,"DHW_MODE_MANUAL": 0,"WATER_FLOW": 713,"DISCHARGE_TMP": 663,"INVERTER_TMP": 25,"ENVELOP_ZONE": 1,"EEV_A_STEPS": 274,"EBM_POWER": 239,"EBM_MAX_POWER": 322,"COMP_pct_FINAL": 359,"TOTAL_POWER_ABSORBED": 2599,"NAME": [17236,11585,13388,50,0,0,0,0,0,0,0,0,0,0,0,0],"POWER_OUT_KW": 134,"COOLING CAPACITY": [35],"EBM1_PCT": [861],"EBM2_PCT": [767]},"ts": "2021-02-02T14:42:02.479731" }
An example of value_array is:
$.d.subcooling,$.d.B1,$.d.B2
This is my stored procedure. I just need to extract the JSON node by node, find the data type name (the node name) from incoming_data, and insert its value into the mqttpacket table.
It fails to fetch the data into value_iteration and inserts unrelated data type ids.
Please advise me what is wrong with my query.
I hope I was clear... Cheers!
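One thing worth checking: the procedure reads @incoming_data (a session user variable that is never assigned) rather than the incoming_data parameter, so JSON_EXTRACT receives NULL. A minimal sketch of extracting a value with a JSON path held in a variable, assuming MySQL 5.7+ (the path argument can be a string expression such as a variable):
SET @doc = '{"d": {"subcooling": 6, "B1": 382}}';
SET @path = '$.d.subcooling';
SELECT JSON_EXTRACT(@doc, @path);        -- 6
SELECT SUBSTRING_INDEX(@path, '.', -1);  -- 'subcooling'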

Storing MD5 in MySQL

Instead of storing an MD5 hash in a 32-byte field, I would like to store it in a 16-byte binary field. The MySQL field TEMP_MD5 is defined as BINARY(16).
The MySQL CREATE TABLE with a sample row insert is:
CREATE TABLE `mytable` (
`TEMP_MD5` binary(16) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
INSERT INTO mytable (TEMP_MD5) VALUES ( UNHEX('202cb962ac59075b964b07152d234b70') );
Let's say the 16-byte binary value has been stored in the MySQL field TEMP_MD5; how do I compare this 16-byte field in Delphi code after I retrieve the value?
Is it possible to skip MySQL's HEX/UNHEX functions, and just use Delphi code to compare the 16-byte binary field (a 32-character hex string) stored in MySQL?
For example, the sample code:
FDQuery1.Open( 'SELECT TEMP_MD5 from mytable;' );
if THashMD5.GetHashBytes('123') = fDQuery1.FieldByName('TEMP_MD5').VALUE then
SHOWMESSAGE('MATCHED!');
However, it seems that the values for FieldByName('TEMP_MD5').value never matched the THashMD5.GetHashString('123') value
and another way of comparing, using a SELECT statement, also failed:
FDQuery1.Open( 'SELECT TEMP_MD5 FROM mytable ' +
'WHERE (TEMP_MD5=:myvalue)',
[THashMD5.GetHashBytes('123')] );
The above also failed to give FDQuery1.RecordCount = 1.
Basically I'm trying to compare the 16-byte binary value I stored in MySQL to a value, let's say '123', in code to see if they match.
I'm using Delphi 10.2, moving to 10.4 next year.
Here is an example of code showing how to write an MD5 into your database and how to read it back and compare with a given MD5 hash:
Inserting data:
procedure TForm1.InsertDataButtonClick(Sender: TObject);
var
MD5 : TArray<Byte>;
begin
MD5 := THashMD5.GetHashBytes('123');
FDConnection1.Connected := TRUE;
FDQuery1.SQL.Text := 'INSERT INTO mytable (TEMP_MD5) VALUES(:MD5)';
FDQuery1.ParamByName('MD5').SetBlobRawData(Length(MD5), PByte(MD5));
FDQuery1.ExecSQL;
Memo1.Lines.Add('Rows affected = ' + FDQuery1.RowsAffected.ToString);
end;
Reading data back and comparing with given hash:
procedure TForm1.ReadDataButtonClick(Sender: TObject);
var
MD5 : TArray<Byte>;
MD5_123 : TArray<Byte>;
FieldMD5 : TField;
RecCnt : Integer;
begin
MD5_123 := THashMD5.GetHashBytes('123');
FDConnection1.Connected := TRUE;
// First version: get all records
// FDQuery1.SQL.Text := 'SELECT TEMP_MD5 FROM mytable';
// Second version: Get only records where TEMP_MD5 is hash('123').
FDQuery1.SQL.Text := 'SELECT TEMP_MD5 FROM mytable WHERE TEMP_MD5 = :MD5';
FDQuery1.ParamByName('MD5').SetBlobRawData(Length(MD5_123), PByte(MD5_123));
// Execute the query
FDQuery1.Open;
RecCnt := 0;
while not FDQuery1.Eof do begin
Inc(RecCnt);
FieldMD5 := FDQuery1.FieldByName('TEMP_MD5');
SetLength(MD5, FieldMD5.DataSize);
FieldMD5.GetData(MD5);
if (Length(MD5) = Length(MD5_123)) and
(CompareMem(PByte(MD5), PByte(MD5_123), Length(MD5))) then
Memo1.Lines.Add(RecCnt.ToString + ') MD5(123) = ' + MD5ToStr(MD5))
else
Memo1.Lines.Add(RecCnt.ToString + ') ' + MD5ToStr(MD5));
FDQuery1.Next;
end;
end;
As you can see from the code, I compare the MD5 from the database with the given MD5 by comparing the memory holding the values (arrays of bytes).
Utility function:
function MD5ToStr(MD5 : TArray<Byte>) : String;
var
B : Byte;
begin
Result := '';
for B in MD5 do
Result := Result + B.ToHexString(2);
end;
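If it is more convenient to let MySQL do the comparison instead, a minimal SQL sketch (the hex literal is MD5('123'), i.e. the value inserted above):
-- UNHEX turns the 32-character hex string into the 16 raw bytes stored in BINARY(16)
SELECT TEMP_MD5 FROM mytable WHERE TEMP_MD5 = UNHEX('202cb962ac59075b964b07152d234b70');
-- or view the stored hashes as hex strings
SELECT HEX(TEMP_MD5) FROM mytable;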

Cannot correctly declare variable in procedure loop

This is driving me bananas. I'm not a MySQL guru by any stretch. My goal is to add a large number of columns to a table. I've tried this several ways, and the procedure chokes on the DECLARE @FooA NVARCHAR(MAX); line. No clue as to why.
I appreciate any pointers...
USE mydatabase;
DELIMITER $$
DROP PROCEDURE IF EXISTS RepeatLoopProc$$
CREATE PROCEDURE RepeatLoopProc()
BEGIN
DECLARE x INT;
DECLARE sn VARCHAR(30);
DECLARE dr VARCHAR(48);
DECLARE @FooA NVARCHAR(MAX);
SET x = 0;
WHILE (x <= 150) DO
SET sn = CONCAT('drivesn_', x);
SET dr = CONCAT('driveinf_', x);
SET x = x + 1;
SET @FooA = 'ALTER TABLE DRIVE_MASTER ADD ' + sn + ' VARCHAR(30), ADD ' + dr + ' VARCHAR(48)';
EXEC sp_executesql @FooA;
END WHILE;
END$$
DELIMITER ;
When I do this I get:
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '@FooA NVARCHAR(MAX);
My forehead is getting flat from slamming it into my desk.
The ultimate goal is adding columns drivesn_0, driveinf_0, drivesn_1, driveinf_1, etc., all the way out to drivesn_150 and driveinf_150, of type VARCHAR(30) and VARCHAR(48) respectively.
@variables are not DECLAREd, and DECLAREd variables' identifiers do not start with @.
Also, ALTER statements typically recreate a table behind the scenes (equivalent to something like CREATE TABLE newversion... INSERT INTO newversion SELECT * FROM oldversion ... DROP TABLE oldversion ... RENAME newversion). So you'd be much better off building up a single ALTER statement within the loop, and executing it only once.
Example:
...
SET @FooA = 'ALTER TABLE DRIVE_MASTER';
SET x = 0;
WHILE (x <= 150) DO
SET sn = CONCAT('drivesn_', x);
SET dr = CONCAT('driveinf_', x);
SET @FooA = CONCAT(@FooA
, CASE WHEN x != 0 THEN ', ' ELSE '' END
, 'ADD ', sn, ' VARCHAR(30), ADD ', dr, ' VARCHAR(48)'
);
SET x = x + 1;
END WHILE;
EXEC sp_executesql @FooA;
...
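Note that EXEC sp_executesql is SQL Server syntax; in MySQL the string assembled in @FooA would typically be executed as a prepared statement, roughly like this (a sketch, not part of the original answer):
PREPARE stmt FROM @FooA;   -- @FooA holds the single ALTER TABLE statement built above
EXECUTE stmt;
DEALLOCATE PREPARE stmt;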
... but what Barmar said in comments is good advice, you should probably just have another table, something like DRIVE_MASTER_DETAILS(x int, sn VARCHAR(30), dr VARCHAR(48))
I already have multiple tables. Basically I am using this to catalog drive serial numbers in hosts. A host can have up to 150 drives. Other tables contain network interface information (MAC addresses, etc.). All are tied together by a common index value. For a system with 150 disk drives I cannot see any way other than 150 columns. Either that or I am missing a fundamental concept.

Cannot get results of a stored procedure into a #TempTable to work [duplicate]

This question already has answers here:
Insert results of a stored procedure into a temporary table
(33 answers)
Closed 9 years ago.
I am using SQL Server 2008 R2 and am trying to get the results of a stored procedure into a temporary table that I can access later on in the calling stored proc. My TSQL is as follows:
CREATE PROCEDURE sp_ToBeCalled
(
@SomeParam INT
)
AS
BEGIN
SELECT * FROM tblSomeTable WHERE SomeField = @SomeParam
END
CREATE PROCEDURE sp_CallingProcedure
(
@SomeOtherParam INT
)
AS
BEGIN
-- A
SELECT * INTO #MyTempTable FROM sp_ToBeCalled(@SomeOtherParam)
-- B
SELECT * FROM #MyTempTable FOR XML RAW
END
This all compiles fine; however, when I call sp_CallingProcedure, statement -- B returns an error that #MyTempTable does not exist.
How can I do "A" so that I can access its results from within a #MyTempTable table without having to declare the structure of #MyTempTable first?
I am looking for a solution that I can use generically. I have a number of existing stored procedures that I need to call from various callers where getting the results queryable is a necessity. I cannot change the existing stored procedures.
I don't want to use:
OPENQUERY() - requires a custom linked server definition
sp_executesql() - means I have to build up dynamic SQL, which does not give me SP compile-time checking.
You are trying to use a procedure like a table-valued function.
Try using
INSERT INTO #MyTempTable (column1, column2...)
EXEC sp_ToBeCalled @SomeOtherParam
A great reference: http://www.sommarskog.se/share_data.html
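A rough, self-contained sketch of that approach (the column names are hypothetical; the temp table must mirror the result set of sp_ToBeCalled exactly):
CREATE TABLE #MyTempTable (SomeField INT, SomeOtherField VARCHAR(50))  -- hypothetical columns
INSERT INTO #MyTempTable
EXEC sp_ToBeCalled @SomeParam = 1
SELECT * FROM #MyTempTable FOR XML RAW
DROP TABLE #MyTempTable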
I managed to partially solve my issue by doing the following:
1) Custom Stored Procedure to select a ROWSET into a global temp table
2) Calling SP calls 1) and then transfers the ##GlobalTempTable into a local #TempTable for processing
This works but has the following "issues":
Potential security risk as "Adhoc Distributed Queries" functionality needs to be turned on
Still requires a global temp table that needs to be cleaned up by the caller. Temp table naming is also problematic, as multiple simultaneous executions of 2) will cause an issue.
I include my code below in case it helps someone else. If anyone is able to improve on it please feel free to post.
/* This requires Adhoc Distributed Queries to be turned on:
sp_configure 'Show Advanced Options', 1
GO
RECONFIGURE
GO
sp_configure 'Ad Hoc Distributed Queries', 1
GO
RECONFIGURE
GO
*/
-- Adapted from: http://stackoverflow.com/questions/653714/how-to-select-into-temp-table-from-stored-procedure
CREATE PROCEDURE [dbo].[ExecIntoTable]
(
@tableName NVARCHAR(256),
@storedProcWithParameters NVARCHAR(MAX)
)
AS
BEGIN
DECLARE @driver VARCHAR(10)
DECLARE @connectionString NVARCHAR(600)
DECLARE @sql NVARCHAR(MAX)
DECLARE @rowsetSql NVARCHAR(MAX)
SET @driver = '''SQLNCLI'''
SET @connectionString =
'''server=' +
CAST(SERVERPROPERTY('ServerName') AS NVARCHAR(256)) +
COALESCE('\' + CAST(SERVERPROPERTY('InstanceName') AS NVARCHAR(256)), '') +
';trusted_connection=yes;Database=' + DB_NAME() + ''''
SET @rowsetSql = '''EXEC ' + REPLACE(@storedProcWithParameters, '''', '''''') + ''''
SET @sql = '
SELECT
*
INTO
' + @tableName + '
FROM
OPENROWSET(' + @driver + ',' + @connectionString + ',' + @rowsetSql + ')'
EXEC (@sql)
END
GO
and then to use in another SP as follows:
EXEC ExecIntoTable '##MyGlobalTable', 'sp_MyStoredProc 13, 1'
SELECT *
INTO #MyLocalTable
FROM ##MyGlobalTable
DROP TABLE ##MyGlobalTable
SELECT * FROM #MyLocalTable

BULK INSERT from comma delimited string

I have a table with the following data in one column:
abc,2,2,34,5,3,2,34,32,2,3,2,2
def,2,2,34,5,3,2,34,32,2,3,2,2
I want to take this data and insert it into another table, using the commas as delimiters, just like how you can specify the FIELDTERMINATOR in BULK INSERT statements.
Is there a way to do this using T-SQL?
I'm not sure if there is any direct way to do this in T-SQL, but if you want to use BULK INSERT you can use sqlcmd to export to a CSV file and then import the file back into the server using BULK INSERT.
Create a dbo.Split function; you can refer here: split string into multiple records.
There are tons of good examples.
If you want to execute it as a batch process, you can run sqlcmd and then BULK INSERT:
sqlcmd -S MyServer -d myDB -E -Q "select dbo.Split(col1) from SomeTable"
-o "MyData.csv" -h-1 -s"," -w 700
-s"," sets the column seperator to
bulk insert destTable
from "MyData.csv"
with
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
Otherwise, you can do it directly in T-SQL, provided the destination has a matching column definition.
INSERT INTO DestinationTable
SELECT dbo.Split(col1) FROM SomeTable
You need to use a Split function to split your string into a table variable, and then insert those values into your table.
There are tons of those split functions out there, with various pros and cons and various number of parameters and so forth.
Here is one that I quite like - very nicely done, clearly explained.
With that function, you should have no trouble converting your column into individual entries for your other table.
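A rough sketch of that idea, assuming a hypothetical table-valued dbo.Split(@string, @separator) function that returns one row per item (substitute whichever split function you pick from the link):
DECLARE @csv VARCHAR(MAX) = 'abc,2,2,34,5,3,2,34,32,2,3,2,2';  -- sample row from the question
INSERT INTO DestinationTable (SomeColumn)                       -- SomeColumn is hypothetical
SELECT Item FROM dbo.Split(@csv, ',');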
EDIT: allow multiple char separators
This is how I solved it, with two functions to do the splitting into columns (if you want a more complete solution with line splitting as well, see my other post here). It involves:
A scalar function (fSubstrNth) for extracting the n-th field of a line, given a separator
A scalar function (fPatIndexMulti) for finding the n-th index of the separator
(Optional) alternative Right function to accept negative values
Finally, some specific code to use in your solution, since SQL doesn't allow dynamic table-function definitions (in other words, you can't SELECT from a function with dynamic columns)
Now, for the code snippets:
fSubstrNth
-- =============================================
-- Author: Bernardo A. Dal Corno
-- Create date: 18/07/2017
-- Description: substring using two PatIndex positions, limiting start and end
-- =============================================
CREATE FUNCTION fSubstrNth
(
@Text varchar(max),
@Sep varchar(3),
@N int --Nth field
)
RETURNS varchar(max)
AS
BEGIN
DECLARE @Result varchar(max)
IF @N<1 RETURN ''
IF @N=1
SET @Result = substring(@Text, 1, dbo.fPatIndexMulti(@Sep,@Text,1)-1)
ELSE
SET @Result = substring(@Text, dbo.fPatIndexMulti(@Sep,@Text,@N-1)+LEN(@Sep), CASE WHEN dbo.fPatIndexMulti(@Sep,@Text,@N)>0 THEN dbo.fPatIndexMulti(@Sep,@Text,@N)-dbo.fPatIndexMulti(@Sep,@Text,@N-1)-LEN(@Sep) ELSE LEN(@Text)+1 END)
RETURN @Result
END
fPatIndexMulti
-- =============================================
-- Author: Bernardo A. Dal Corno
-- Create date: 17/07/2017
-- Description: recursive patIndex
-- =============================================
CREATE FUNCTION [dbo].[fPatIndexMulti]
(
@Find varchar(max),
@In varchar(max),
@N tinyint
)
RETURNS int
AS
BEGIN
DECLARE @lenFind int, @Result int, @Texto varchar(max), @index int
DECLARE @i tinyint=1
SET @lenFind = LEN(@Find)-1
SET @Result = 0
SET @Texto = @In
WHILE (@i <= @N) BEGIN
SET @index = patindex('%'+@Find+'%',@Texto)
IF @index = 0 RETURN 0
SET @Result = @Result + @index
SET @Texto = dbo.xRight(@Texto, (@index + @lenFind)*-1)
SET @i = @i + 1
END
SET @Result = @Result + @lenFind*(@i-2)
RETURN @Result
END
xRight
-- =============================================
-- Author: Bernardo A. Dal Corno
-- Create date: 06/01/2015
-- Description: inverse Right (for numbers < 0)
-- =============================================
CREATE FUNCTION [dbo].[xRight]
(
@Texto varchar(8000),
@Qntd int
)
RETURNS varchar(8000)
AS
BEGIN
DECLARE @Result varchar(8000)
IF (Len(@Texto) = 0) OR (@Qntd = 0)
SET @Result = ''
ELSE IF (@Qntd > 0)
SET @Result = Right(@Texto, @Qntd)
ELSE IF (@Qntd < 0)
SET @Result = Right(@Texto, Len(@Texto) + @Qntd)
RETURN @Result
END
Specific code
SELECT
acolumn = 'any value',
field1 = dbo.fSubstrNth(table.datacolumn,',',1),
field2 = dbo.fSubstrNth(table.datacolumn,',',2),
anothercolumn = 'set your query as you would normally do',
field3 = (CASE dbo.fSubstrNth(table.datacolumn,',',3) WHEN 'C' THEN 1 ELSE 0 END)
FROM table
Note that:
fSubstrNth receives the n-th field to extract from the 'datacolumn'
The query can be like any other. This means it can be stored in a procedure, table-valued function, view, etc. You can extract some or all fields, in any order you wish, and process them however you want
If used in a stored procedure, you could create a generic way of creating a query and temp table that loads the string with dynamic columns, but you have to make a call to another procedure to use the data OR create a specific query like above in the same procedure (which would make it non-generic, just more reusable)
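For instance, against the sample row from the question, the functions above should behave roughly like this:
SELECT dbo.fSubstrNth('abc,2,2,34,5,3,2,34,32,2,3,2,2', ',', 1)  -- 'abc'
SELECT dbo.fSubstrNth('abc,2,2,34,5,3,2,34,32,2,3,2,2', ',', 2)  -- '2'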