In Snowflake, I am able to get the error details using a TRY-CATCH block.
I mocked up some data where I am trying to insert VARCHAR data into an INTEGER column.
This exception was caught in the TRY-CATCH block used in my Snowflake procedure. As expected, the procedure failed and none of the records were inserted. I want to ignore this BAD data and insert the rest of the records into my target table using a Snowflake procedure. IS THIS POSSIBLE?
Also I need to INSERT that BAD data into another table. How can we achieve this?
Please share your thoughts/expertise.
Thanks, Joe.
If you are using a Snowflake procedure/function, you can write a common error routine that captures the errors in a table, and call it in your procedure. Please check this link for a common error handler which I wrote a few months back.
https://github.com/hkandpal/Snowflake/blob/main/Snow_Erro_Handling.txt
I am trying to insert a VARCHAR data into INTEGER data type column.
I want to ignore this BAD data and insert rest of the records into my target table using snowflake procedure
If the source is VARCHAR, then the assumption that it contains INTEGER values may be dangerous. Instead of trying to insert incorrect data and failing the entire query, an alternative approach is to validate the data first. From the Snowflake documentation for TRY_CAST:
A special version of CAST (and the :: operator) that is available for a subset of data type conversions. It performs the same operation (i.e. converts a value of one data type into another data type), but returns a NULL value instead of raising an error when the conversion cannot be performed.
-- pass only NULLs and rows that cast cleanly
INSERT INTO tab (col_int)
SELECT TRY_CAST(col_varchar AS INT)
FROM source_table
WHERE TRY_CAST(col_varchar AS INT) IS NOT NULL
   OR col_varchar IS NULL;
-- log the rejected rows into the error table
INSERT INTO error_table(col_varchar)
SELECT col_varchar
FROM source_table
WHERE TRY_CAST(col_varchar AS INT) IS NULL
AND col_varchar IS NOT NULL;
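If you would rather scan the source only once, Snowflake also supports multi-table INSERT, which can route each row to either the target or the error table in a single statement. A minimal sketch, reusing the same hypothetical table and column names as above:

-- route good rows to tab and bad rows to error_table in one pass
INSERT FIRST
    WHEN int_value IS NOT NULL OR col_varchar IS NULL THEN
        INTO tab (col_int) VALUES (int_value)
    ELSE
        INTO error_table (col_varchar) VALUES (col_varchar)
SELECT col_varchar, TRY_CAST(col_varchar AS INT) AS int_value
FROM source_table;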
This is my MSSQL UDT:
create type ConditionUDT as Table
(
Name varchar(150),
PackageId int
);
This is my MSSQL stored procedure:
create Procedure [dbo].[Condition_insert]
@terms_conditions ConditionUDT readonly
as
begin
insert into dbo.condition (name, p_id)
select [Name], [PackageId]
from @terms_conditions;
end
There is a workaround if you have no choice but to migrate from SQL Server to MySQL.
The closest predefined structure that holds many rows in MySQL is an actual table. So you need one table per UDTT of SQL Server. Make sure you use a specific schema or naming convention so you know those tables are UDTT emulations.
The idea is to fill in the info, use it inside the SP, and then delete it. However, you need to guarantee who reads what, and that the info is deleted after it is consumed. So:
For any of those tables you need two extra columns; I suggest always putting them first. They will be the key and the variable name. The key can be CHAR(36) (the length of MySQL's UUID() output), using UUID() to get a unique identifier. It could also be an INT, using CONNECTION_ID() instead. A unique identifier is better, however, as it ensures that nobody will ever read information not intended for them. The variable name will be the one used for the SQL Server parameter, just a string. This way:
You know which UDTT you are using from the table name.
You know the identity of your process through the key.
You know the 'variable' out of the name.
So, in your application code you:
Begin transaction.
Insert the data into the proper (UDTT emulator) tables using a key and the variable name(s)
Supply to the stored procedure the key and the variable name(s). You can use the same key for many table type parameters within the same sp call.
The stored procedure can now use that information as before from the UDDT variable using key and variable name as filters to query the proper UDDT emulated table.
Delete the data you inserted
Commit
On catch, rollback.
For simplicity, your SP can read the data into a temp table, so you do not need to change a line of code from the original SQL Server SP for this aspect.
The transaction in your app code helps make sure your temporary variable data is either deleted or never committed, no matter what goes wrong. A sketch of the whole approach follows below.
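A minimal sketch of the idea in MySQL, assuming hypothetical names (udtt_condition as the emulation table for ConditionUDT, with batch_key and var_name as the two extra columns):

-- one emulation table per SQL Server UDTT; the two extra columns come first
CREATE TABLE udtt_condition (
    batch_key CHAR(36)     NOT NULL,  -- UUID() identifying the calling process
    var_name  VARCHAR(64)  NOT NULL,  -- the original parameter/variable name
    Name      VARCHAR(150),
    PackageId INT,
    KEY (batch_key, var_name)
);

DELIMITER //
CREATE PROCEDURE Condition_insert(IN p_key CHAR(36), IN p_var VARCHAR(64))
BEGIN
    -- same body as the SQL Server SP, filtered by key and variable name
    INSERT INTO `condition` (name, p_id)
    SELECT Name, PackageId
    FROM udtt_condition
    WHERE batch_key = p_key
      AND var_name = p_var;
END//
DELIMITER ;

The application code then wraps the fill/call/delete steps in a transaction:

START TRANSACTION;
SET @k = UUID();
INSERT INTO udtt_condition (batch_key, var_name, Name, PackageId)
VALUES (@k, 'terms_conditions', 'Some clause', 1),
       (@k, 'terms_conditions', 'Another clause', 2);
CALL Condition_insert(@k, 'terms_conditions');
DELETE FROM udtt_condition WHERE batch_key = @k;
COMMIT;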
As Larnu thought might be the case, MySQL doesn't support user-defined types at all, let alone user-defined table types.
You will have to make them all separate scalar parameters.
I know this question has been discussed quite a lot here, but I have a particular case where I need to pass a comma-separated list of parameters, which prevents me from declaring a local variable and using it for the input parameter.
As pointed out in the above discussion, it is suggested to declare a local variable and assign the parameter to this variable. However, what should I do when my parameter is of type TEXT and can be a comma-separated list?
For example -
CREATE DEFINER=`Admin`@`%` PROCEDURE `MyReport`(
p_myparameter_HK Text
)
BEGIN
SELECT
*
FROM MyTable
WHERE
(find_in_set(MyTable.column_HK, p_myparameter_HK) <> 0 OR MyTable.column_HK IS NULL)
;
END
Performance:
Running the query directly: ~300 ms.
Calling the stored procedure: CALL MyReport('0000_abcd_fake_000') keeps running endlessly.
My question is: how can I disable parameter sniffing and use a local variable instead of FIND_IN_SET to match the plain query's performance?
The times that I have needed to pass an arbitrary list of things to a Stored Procedure, I did it this way:
CREATE (or already have) a TABLE for passing the info in. Both the caller and the Procedure know the name of the table. (Or the name could be passed in, but that adds some messy prepare-executes.)
Do a bulk INSERT into that table. (INSERT INTO tbl (a,b) VALUES (...), (..), ...;)
Perform JOINs or whatever to use the table efficiently.
In my case, the extra effort was worth it. A sketch of the pattern follows below.
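A minimal sketch of that pattern, with hypothetical names (report_params as the shared table, keyed by CONNECTION_ID() so concurrent sessions do not collide; the NULL-matching branch of the original query is omitted for brevity):

-- shared parameter table; both the caller and the procedure know its name
CREATE TABLE report_params (
    conn_id BIGINT      NOT NULL,  -- CONNECTION_ID() separates concurrent callers
    hk      VARCHAR(64) NOT NULL,
    PRIMARY KEY (conn_id, hk)
);

-- caller: bulk-insert the list, call the procedure, clean up
INSERT INTO report_params (conn_id, hk)
VALUES (CONNECTION_ID(), '0000_abcd_fake_000'),
       (CONNECTION_ID(), '0000_efgh_fake_001');

-- inside the procedure, an indexed JOIN replaces FIND_IN_SET()
SELECT t.*
FROM MyTable AS t
JOIN report_params AS p
  ON  p.conn_id = CONNECTION_ID()
  AND p.hk      = t.column_HK;

DELETE FROM report_params WHERE conn_id = CONNECTION_ID();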
I am wondering if I can load data from flat files into multiple tables (requiring multiple inserts) using a stored procedure.
I have a stored procedure already:
CREATE PROCEDURE `Insert_Hockey` (cardyear YEAR, fname VARCHAR(45), lname VARCHAR(45), brand VARCHAR(45), card_id VARCHAR(8))
BEGIN
/* Create another generic 'Item' in Items table */
INSERT INTO Items(category_id) VALUES (2);
/* Need to use the AUTO_INCREMENTED item_id from Items below so use LAST_INSERT_ID */
INSERT INTO Hockey_Cards VALUES(LAST_INSERT_ID(), cardyear, fname, lname, brand, card_id);
END
Now say I have a bunch of data in a spreadsheet about hockey cards. I can export it to a tab-delimited format. I want to use LOAD DATA INFILE, but with the tab-delimited data as arguments to the stored procedure. Is something like that possible? If not, how else would you go about importing data in a simple manner in situations like this, where inserts have dependencies on previous inserts?
I am trying to create a few stored procedures / transactions (I don't know if there's a difference in my case) for situations like this in my database. There are a few situations where I use the general table -> specific table type of pattern, where specific table has a foreign key pointing to the general table. So an insert into the specifics requires a prior insert into the general, grab the AUTO_INCREMENTED primary key and use that for an insert into the 'specific' table.
I'm not sure that you can do exactly what you're asking for, but a couple of alternatives that might help:
Why not use an insert trigger? You'd have a table for your import data and an insert trigger on that table. The trigger gets fired for every record, so calling your SP from the trigger should get you what you want.
The syntax for LOAD DATA INFILE allows for some very complex import code. Given the relatively simple SP code you've shown us, I would expect that you could get LOAD DATA INFILE to do this for you itself, without needing an SP. The code will be more complex than the SP, but it certainly should be possible.
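A minimal sketch of the trigger option, assuming a hypothetical staging table hockey_import whose columns match the tab-delimited file (LOAD DATA INFILE does fire INSERT triggers):

-- staging table mirroring the file layout
CREATE TABLE hockey_import (
    cardyear YEAR,
    fname    VARCHAR(45),
    lname    VARCHAR(45),
    brand    VARCHAR(45),
    card_id  VARCHAR(8)
);

DELIMITER //
CREATE TRIGGER hockey_import_ai
AFTER INSERT ON hockey_import
FOR EACH ROW
BEGIN
    -- fires once per loaded row and reuses the existing procedure
    CALL Insert_Hockey(NEW.cardyear, NEW.fname, NEW.lname, NEW.brand, NEW.card_id);
END//
DELIMITER ;

-- the file path is only a placeholder
LOAD DATA INFILE '/path/to/hockey_cards.tsv'
INTO TABLE hockey_import
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n';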
I just wanted to know if somebody could explain this.
I was just testing my code and didn't check for empty input fields (I know I should, but I was just testing). In my database table, all the fields are NOT NULL, and I was expecting an exception because I wasn't inserting anything. But it turns out that MySQL inserts blank values for everything; the same thing happens from MySQL Workbench.
Is there a way to prevent this? (From a MySQL perspective)
This behavior, although atypical (it applies when strict SQL mode is disabled), is quite well documented:
Inserting NULL into a column that has been declared NOT NULL. For multiple-row INSERT statements or INSERT INTO ... SELECT statements, the column is set to the implicit default value for the column data type. This is 0 for numeric types, the empty string ('') for string types, and the "zero" value for date and time types. INSERT INTO ... SELECT statements are handled the same way as multiple-row inserts because the server does not examine the result set from the SELECT to see whether it returns a single row. (For a single-row INSERT, no warning occurs when NULL is inserted into a NOT NULL column. Instead, the statement fails with an error.)
So, if you want to get an error, use VALUES() with a single row. Alternatively, define a trigger that does the check.
Why does MySQL work this way? I don't know; to differentiate itself from other databases and avoid ANSI compatibility? More seriously, I assume it is a question of efficiency, related to the fact that MySQL did not implement check constraints. The NOT NULL declaration is just an example of a check constraint, and these are not enforced.
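A quick demonstration of both cases (this assumes strict SQL mode is disabled, which is what produces the behavior described above):

SET SESSION sql_mode = '';

CREATE TABLE t (s VARCHAR(10) NOT NULL, n INT NOT NULL);

-- single-row INSERT: fails with "Column 's' cannot be null"
INSERT INTO t VALUES (NULL, NULL);

-- multi-row INSERT: succeeds with warnings; NULL becomes '' and 0
INSERT INTO t VALUES (NULL, NULL), ('x', 1);
SHOW WARNINGS;
SELECT * FROM t;  -- returns ('', 0) and ('x', 1)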
Let's say I have the following DB schema:
user_id (int), email (varchar), phone (varchar)
How can I stop an INSERT query from executing if the value of phone is empty? I could do it with script validation, but I'm interested in a MySQL solution.
You could write a procedure and use it to validate and insert rows. Or you could write a trigger that checks the NEW value and throws an exception. To throw the exception you can use the SIGNAL statement in MySQL 5.5+, or, on older versions, call a nonexistent stored procedure (a common hack to force an error).
For a solution that works completely within MySQL, you can use triggers. This involves:
Defining a MySQL stored procedure to perform the validation
Creating a trigger that uses the validation procedure, e.g. on INSERT
Possibly doing something with the error message if validation fails.
For a complete example, take a look at Roland Bouman's blog
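A minimal sketch of the SIGNAL-based trigger (MySQL 5.5+), assuming a hypothetical table named users that matches the schema above:

DELIMITER //
CREATE TRIGGER users_phone_not_empty
BEFORE INSERT ON users
FOR EACH ROW
BEGIN
    IF NEW.phone IS NULL OR NEW.phone = '' THEN
        -- SQLSTATE '45000' is the generic user-defined error state
        SIGNAL SQLSTATE '45000'
            SET MESSAGE_TEXT = 'phone must not be empty';
    END IF;
END//
DELIMITER ;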