I'm having trouble with this code snippet:
DECLARE FormattedTimeStamp TIMESTAMP;
DECLARE pattern CHARACTER 'yyyy-mm-ddTHH:mm:ss';
SET FormattedTimeStamp = CAST(EnvRef.ConsumerTrxnsInq.paymentList[i].TransactionDate as DATE FORMAT 'yyyy-MM-dd');
SET OutputRoot.XMLNSC.ns:ConsumerTrxnsInqRs.Body.ConsumerTransaction[i].Timestamp = CAST(REPLACE(SUBSTRING(CAST(FormattedTimeStamp AS CHAR) before '.'),' ','T') AS TIMESTAMP FORMAT pattern);
When I run it, it produces this error:
Error while casting. subParse failed. TIMESTAMPT'2014-12-02T15:39:21. yyyy-mm-ddTHH:mm:ss. TIMESTAMPT'2014-12-02T15:39:21. yyyy.
Any help?
There is an issue with the pattern you defined: in the format string, mm means minutes and MM means months, and the literal T has to be quoted. Correct it as follows and it should work:
DECLARE pattern CHARACTER 'yyyy-MM-dd''T''HH:mm:ss';
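For reference, the rest of your snippet can stay exactly as it is; with the corrected pattern declaration, your final assignment (quoted from your own code, untested here) would then read:
SET OutputRoot.XMLNSC.ns:ConsumerTrxnsInqRs.Body.ConsumerTransaction[i].Timestamp = CAST(REPLACE(SUBSTRING(CAST(FormattedTimeStamp AS CHAR) before '.'),' ','T') AS TIMESTAMP FORMAT pattern);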
If you are working with T-SQL instead, then first of all replace
DECLARE FormattedTimeStamp TIMESTAMP
with
DECLARE @FormattedTimeStamp TIMESTAMP
The intended result is to store the notes from edits to one field in another field.
I want the new notes to APPEND to the storage field, and since there is no function that does this, I am attempting to find a way to work it out without adding more layers of code like functions and stored procedures.
/* Before Update Trigger */
DECLARE v_description VARCHAR(255);
DECLARE v_permnotes MEDIUMTEXT;
DECLARE v_oldnote VARCHAR(500);
DECLARE v_now VARCHAR(25);
SET v_now = TRIM(DATE_FORMAT(NOW(), '%Y-%m-%d %k:%i:%s'));
SET v_oldnote = OLD.notes;
IF (NEW.permanent_notes IS NULL) THEN
SET v_permnotes = '';
ELSE
SET v_permnotes = OLD.permanent_notes;
END IF;
SET NEW.permanent_notes = CONCAT_WS(CHAR(10), v_permnotes, v_now,": ", v_description);
I'm aiming to have the results in the permanent field look like this
<datetime value>: Some annotation from the notes field.
<a different datetime>: A new annotation
etc....
What I get from my current trigger:
2018-12-30 17:15:50
:
Test 17: Start from scratch.
2018-12-30 17:35:51
:
Test 18: Used DATE_FORMAT to sxet the time
2018-12-30 17:45:52
:
Test 19. Still doing a carriage return after date and after ':'
I can't figure out why there is a newline after the date, and then again after the ':'.
If I leave out CHAR(10), I get:
Test 17: Start from scratch.
2018-12-30 17:35:51
:
Test 18: Used DATE_FORMAT to sxet the time
2018-12-30 17:45:52
:
Test 19. Still doing a carriage return after date and after ':'Test 20. Still doing a carriage return after date and after ':'
Some fresh/more experienced eyes would be really helpful in debugging this.
Thanks.
I think you should just be using plain CONCAT here:
DECLARE separator VARCHAR(1);
IF (NEW.permanent_notes IS NULL) THEN
SET separator = '';
ELSE
SET separator = CHAR(10);
END IF;
-- the rest of your code as is
SET NEW.permanent_notes = CONCAT(v_permnotes, separator, v_now, ": ", v_description);
The logic here is that we conditionally prepend a newline (CHAR(10)) before each new log line, so long as that line is not the very first. You don't really want CONCAT_WS here: it inserts the separator between every one of its arguments, which is exactly why your date and ':' each ended up on their own line. You only want a single newline between consecutive log entries.
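To make the difference concrete, here is a quick standalone sketch you can run in MySQL (the sample values are mine; '\n' is MySQL's newline escape, equivalent to CHAR(10)):
-- CONCAT_WS puts the separator between EVERY argument:
SELECT CONCAT_WS('\n', '2018-12-30 17:15:50', ': ', 'Test 17');
-- result spans three lines
-- CONCAT only adds the newline where you explicitly place it:
SELECT CONCAT('older notes', '\n', '2018-12-30 17:15:50', ': ', 'Test 17');
-- result spans two lines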
I am new to JSON in SQL. I am getting the error "JSON text is not properly formatted. Unexpected character 'N' is found at position 0." while executing the code below:
DECLARE @json1 NVARCHAR(4000)
set @json1 = N'{"name":[{"FirstName":"John","LastName":"Doe"}], "age":31, "city":"New York"}'
DECLARE @v NVARCHAR(4000)
set @v = CONCAT('N''',(SELECT value FROM OPENJSON(@json1, '$.name')),'''')
--select @v as 'v'
SELECT JSON_VALUE(@v,'$.FirstName')
the " select #v as 'v' " gives me
N'{"FirstName":"John","LastName":"Doe"}'
But, using it in the last select statement gives me error.
DECLARE @v1 NVARCHAR(4000)
set @v1 = N'{"FirstName":"John","LastName":"Doe"}'
SELECT JSON_VALUE(@v1,'$.FirstName') as 'FirstName'
also works fine.
If you're using SQL Server 2016 or later, there is a built-in function ISJSON which validates that the string in the column is valid JSON.
Therefore you can do things like this:
SELECT
Name,
JSON_VALUE(jsonCol, '$.info.address.PostCode') AS PostCode
FROM People
WHERE ISJSON(jsonCol) > 0
You are adding the N prefix (and the wrapping quotes) as literal characters in your CONCAT statement, so @v contains the text N'...' rather than plain JSON.
Try changing the line:
set @v = CONCAT('N''',(SELECT value FROM OPENJSON(@json1, '$.name')),'''')
to:
set @v = CONCAT('''',(SELECT value FROM OPENJSON(@json1, '$.name')),'''')
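In fact, since OPENJSON already hands you the nested object as plain JSON text, you most likely do not need the CONCAT (or the wrapping quotes) at all. A minimal untested sketch built from your own snippet:
DECLARE @json1 NVARCHAR(4000) = N'{"name":[{"FirstName":"John","LastName":"Doe"}], "age":31, "city":"New York"}';
DECLARE @v NVARCHAR(4000) = (SELECT value FROM OPENJSON(@json1, '$.name'));
SELECT JSON_VALUE(@v, '$.FirstName') AS FirstName;  -- John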
The JSON_VALUE function may be executed on all rows before the WHERE clause is applied. It depends on the execution plan, so small things like having a TOP clause or an ORDER BY may have an impact on that.
It means that if your JSON data is invalid anywhere in that column (in the whole table), the query may throw an error when it is executed.
So find and fix those invalid JSON values first. For example, if that column has a ' instead of a ", it cannot be parsed and will cause the whole T-SQL query to throw an error.
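One defensive pattern, sketched here with the People/jsonCol names from the earlier answer, is to make sure JSON_VALUE only ever sees values that pass ISJSON, e.g. by NULLing out anything invalid before parsing:
SELECT
    Name,
    JSON_VALUE(CASE WHEN ISJSON(jsonCol) = 1 THEN jsonCol END, '$.info.address.PostCode') AS PostCode
FROM People
WHERE ISJSON(jsonCol) = 1;  -- JSON_VALUE(NULL, ...) simply returns NULL, so invalid rows can no longer break the query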
I'm getting a JSON file which I load into an Azure SQL database. This JSON is the direct output of an API, so there is nothing I can do with it before loading it into the DB.
In that file, all Polish diacritics are escaped as "C/C++/Java source code" sequences (based on: http://www.fileformat.info/info/unicode/char/0142/index.htm).
So for example:
ł is \u0142
I was trying to find some method to convert (unescape) those to proper Polish letters.
In the worst-case scenario, I can write a function which will replace all combinations:
REPLACE(REPLACE(string, '\u0142', N'ł'), '\u0144', N'ń')
And so on, making one big, terrible function...
I was looking for a ready-made function, like the ones for URL decoding that have been answered here on Stack Overflow in many topics, and here: https://www.codeproject.com/Articles/1005508/URL-Decode-in-T-SQL
Using that solution would be possible, but I cannot figure out the CAST/CONVERT with the proper collation and types in there to get the result I'm looking for.
So if anyone knows/has a function that unescapes those \u sequences in a string, that would be great, but I can manage to write something on my own if I get the conversion right. For example I tried:
select convert(nvarchar(1), convert(varbinary, 0x0142, 1))
I made the assumption that changing \u to 0x would be the answer, but it gives some Chinese characters, so this is the wrong direction...
Edit:
After googling more I found exactly the same question here on Stack Overflow from @Pasetchnik: Json escape unicode in SQL Server
And it looks like this is the best solution there is in MS SQL.
The only thing I needed to change was using NVARCHAR instead of the VARCHAR that is in the linked solution:
CREATE FUNCTION dbo.Json_Unicode_Decode(@escapedString NVARCHAR(MAX))
RETURNS NVARCHAR(MAX)
AS
BEGIN
    DECLARE @pos INT = 0,
            @char NVARCHAR(1),
            @escapeLen TINYINT = 2,   -- length of the '\u' prefix
            @hexDigits TINYINT = 4    -- number of hex digits in the code point
    SET @pos = CHARINDEX('\u', @escapedString, @pos)
    WHILE @pos > 0
    BEGIN
        -- convert the 4 hex digits to a character and splice it over the escape sequence
        SET @char = NCHAR(CONVERT(VARBINARY(8), '0x' + SUBSTRING(@escapedString, @pos + @escapeLen, @hexDigits), 1))
        SET @escapedString = STUFF(@escapedString, @pos, @escapeLen + @hexDigits, @char)
        SET @pos = CHARINDEX('\u', @escapedString, @pos)
    END
    RETURN @escapedString
END
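A quick smoke test (the input value is my own example):
SELECT dbo.Json_Unicode_Decode(N'\u0141\u00f3d\u017a le\u017cy nad rzek\u0105');
-- expected result: N'Łódź leży nad rzeką'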
Instead of nested REPLACE you could use:
DECLARE @string NVARCHAR(MAX) = N'\u0142 \u0144\u0142';
SELECT @string = REPLACE(@string, u, ch)
FROM (VALUES ('\u0142', N'ł'), ('\u0144', N'ń')) s(u, ch);
SELECT @string;
DBFiddle Demo
I have a stored procedure in MySQL that has an enum called log_level with a few values.
..
DECLARE log_level ENUM('none','some','errors','debug') DEFAULT 1;
SET log_level = 0;
..
Gives the error:
If I change this to:
..
DECLARE log_level ENUM('none','some','errors','debug') DEFAULT log_level=1;
SET log_level = 0;
..
It gives the error:
How can I fix this issue?
The same kind of issue was occurring for me when I was doing an ALTER or INSERT on the information.
I fixed it with
UPDATE t SET t.fieldName = NULL
and this resolved my issue.
MySQL enums work differently than the C/C++ equivalent. log_level is declared as an enum of strings so it really expects a string as value. A default of '1' doesn't make much sense, either.
The correct syntax is:
DECLARE log_level ENUM('none','some','errors','debug') DEFAULT 'some';
SET log_level = 'none';
The same error 1265 shows up if you try to assign a non-existing value to an enum (e.g. an empty string instead of a real NULL value).
Sidenote: internally the database uses integer values but those details are completely hidden by the SQL language.
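For context, here is a minimal procedure wrapper (the procedure name is mine) showing the corrected declaration in place:
DELIMITER //
CREATE PROCEDURE demo_log_level()
BEGIN
  DECLARE log_level ENUM('none','some','errors','debug') DEFAULT 'some';
  SET log_level = 'none';
  SELECT log_level;  -- returns 'none'
END //
DELIMITER ;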
Hi, I am getting the string literal error when I am trying to add an attribute to the child node. How can I modify my code in order to add the attribute successfully?
declare @count int=(select mxGraphXML.value('count(/mxGraphModel/root/Cell/@Value)','nvarchar') from TABLE_LIST
where Table_ListID=1234 )
declare @index int=1;
while @index<=@count
begin
declare @Value varchar(100)= @graphxml.value('(/mxGraphModel/root/Cell/@Value)[1]','nvarchar');
SET @graphxml.modify('insert attribute copyValueID {sql:variable("@Value")}
as first into (/mxGraphModel/root/Cell)['+convert(varchar,@index)+']');
end
set @index=@index+1;
end
You're using the addition operator where you should be using the CONCAT function. So
'insert attribute copyValueID {sql:variable("@Value")}
as first into (/mxGraphModel/root/Cell)['+convert(varchar,@index)+']'
is being coerced into a number. Try:
CONCAT('insert attribute copyValueID {sql:variable("@Value")}
as first into (/mxGraphModel/root/Cell)[',convert(varchar,@index),']')
instead.
Adam, you can do it in Microsoft T-SQL like this:
declare @sql nvarchar(max)
set @sql = 'set @myxml.modify(''
insert (
attribute scalableFieldId {sql:variable("@sf_id")},
attribute myTypeId {sql:variable("@my_type_id")}
) into (/VB/Condition/Field[@fieldId=sql:variable("@field_id")
and @fieldCode=sql:variable("@field_code")])['+
cast(@instance as varchar(3))+']'')'
exec sp_executesql
@sql
,N'@myxml xml output, @field_code varchar(20),
@field_id varchar(20), @sf_id int, @my_type_id tinyint'
,@myxml = @myxml output
,@field_code = @field_code
,@field_id = @field_id
,@sf_id = @sf_id
,@my_type_id = @my_type_id
See what I've done here? It's just a clever usage of Dynamic SQL to overcome Microsoft's moronic limitation of "string literal error".
IMPORTANT NOTE: yes, you can MOSTLY do this by using sql:variable() in SOME places BUT good luck trying to use it in the node number qualifier inside the square brackets! You can't do this without Dynamic SQL by design!
The trick is not mine actually, I got the idea from https://www.opinionatedgeek.com/Snaplets/Blog/Form/Item/000299/Read after banging my head against the wall for a while.
Feel free to ask questions if my sample does not work or something is not clear.
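For completeness, the sketch above assumes the variables have already been declared and populated before @sql is built. The values below are hypothetical; only the types are taken from the sp_executesql parameter list:
declare @myxml xml = N'<VB><Condition><Field fieldId="F1" fieldCode="C1"/></Condition></VB>'
declare @field_id varchar(20) = 'F1', @field_code varchar(20) = 'C1'
declare @sf_id int = 42, @my_type_id tinyint = 1, @instance int = 1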