Specify the culture used within the WITH clause of OPENJSON

I live in Denmark. Here the thousands separator is a dot (.), and we use a comma (,) as the decimal separator.
I know that you can use TRY_PARSE to convert a varchar into a money/float value.
An example:
declare
@json varchar(max) =
'
{
    "Data Table":
    [
        {
            "Value" : "27.123,49"
        }
    ]
}
'
select
    TRY_PARSE(Value as money using 'da-DK') "Correct Value"
FROM OpenJson(@json, '$."Data Table"')
WITH
(
    "Value" nvarchar(255) N'$."Value"'
)
select
    Value "Wrong Value"
FROM OpenJson(@json, '$."Data Table"')
WITH
(
    "Value" money N'$."Value"'
)
These two queries give me different results.
My question is: can I control the culture in the WITH clause of OPENJSON, so that I get the correct result without having to use TRY_PARSE?
Target: SQL Server 2019

Not directly in OPENJSON(), no. ECMA-404 JSON Data Interchange Syntax specifically defines the decimal point as the U+002E . character - and doesn't provide for cultural allowances - which is why you're having to define culture-specific values as strings in the first place.
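To make that concrete: a culture-neutral payload would carry the value as a JSON number, which the WITH clause could then type as money directly. A hypothetical reshaping of your input:
{
    "Data Table":
    [
        {
            "Value" : 27123.49
        }
    ]
}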
If the data really does arrive as a culture-specific string, the only correct way to handle it is TRY_PARSE or TRY_CONVERT, e.g.
select try_parse('27.123,49' as money using 'da-DK')
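A nicety of the TRY_ variants is that a value which does not match the given culture yields NULL instead of raising an error, so bad rows are easy to filter out. A small illustration (using en-US, where the same text fails to parse):
select try_parse('27.123,49' as money using 'en-US') --returns NULL: ',' cannot follow the en-US decimal point '.'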

Related

Bracket notation for SQL Server json_value?

This works:
select json_value('{ "a": "b" }', '$.a')
This doesn't work:
select json_value('{ "a": "b" }', '$["a"]')
and neither does this:
select json_value('{ "a": "b" }', '$[''a'']')
In JavaScript, these are the same:
foo = { "a": "b" }
console.log(foo.a)
console.log(foo["a"])
What am I missing? I get an error trying to use bracket notation in SQL Server:
JSON path is not properly formatted. Unexpected character '"' is found at position 2
No sooner do I ask than I stumble on an answer. I couldn't find this in any documentation anywhere, but select json_value('{ "a": "b" }', '$."a"') works. Bracket notation is not supported, but otherwise-invalid keys can be escaped with quotation marks, e.g. select json_value('{ "I-m so invalid][": "b" }', '$."I-m so invalid]["'), when in JavaScript that would be foo["I-m so invalid]["]
SQL Server reserves bracket notation for array indexes. The JSON path is parsed as a string literal, not as an object (JSON or array) whose keys can be addressed with brackets.
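A quick sketch of both behaviors side by side (brackets index arrays; quoted names escape keys):
select json_value('["x","y"]', '$[1]') --returns y: brackets are array indexes
select json_value('{ "a": "b" }', '$."a"') --returns b: quoted names escape object keys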
Some of what SQL Server can do will vary with version. Here's a crash course on the annoying (but also really powerful, and fast once in place) requirements. I'm posting more than you need because a lot of the documentation for JSON in SQL Server is lacking and doesn't do justice to how strong it is with JSON.
MsDoc here: https://learn.microsoft.com/en-us/sql/relational-databases/json/json-data-sql-server?view=sql-server-ver15
In this example, we are working with a JSON "object" to separate the data into columns. Note how calling a position inside of an array is weird.
declare @data nvarchar(max) = N'[{"a":"b","c":[{"some":"random","array":"value"},{"another":"random","array":"value"}]},{"e":"f","c":[{"some":"random","array":"value"},{"another":"random","array":"value"}]}]'
--make sure SQL is happy. It will not accept partial snippets
select ISJSON(@data)
--let's look at the data in tabular form
select
json1.*
, json2.*
from openjson(@data)
with (
a varchar --note there is no "path" specified here, as "a" is a key in the first layer of the object
, c nvarchar(max) as JSON --must use "nvarchar(max)" and "as JSON" or SQL freaks out
, c0 nvarchar(max) N'$.c[0]' as JSON
) as json1
cross apply openjson(json1.c) as json2
You can also pull out the individual values, if needed
select oj.value from openjson(@data) as oj where oj.[key] = 1;
select
oj.value
, JSON_VALUE(oj.value,N'$.e') --returns 'f'
, JSON_VALUE(oj.value,N'$.c[0].some') --returns 'random'
, JSON_VALUE(@data,N'$[1].c[0].some') --Similar to your first example, but uses index position instead of key value. Works because SQL views the "[]" brackets as an array while trying to parse.
from openjson(@data) as oj
where oj.[key] = 1

Generate output as array or json agg in PostgreSQL

I have a table and I have to generate a specific output in the form of a list of arrays. I have tried json_agg, array_agg, row_to_json, and combinations of almost all the aggregate and JSON-building functions, but am not able to generate the output as needed.
If it is not possible, I can also work with a simple JSON structure, but before giving up I want to give it a try.
Table structure
create table sample_table
(
x_labl character varying,
x_val1 character varying,
x_val2 character varying,
y_labl character varying,
y_val1 character varying,
y_val2 character varying
);
Sample_Data
I want to generate output like the below:
"chartData" : [
["lablX", 1, 2], ["lablY", 10, 20]
]
Is this what you want?
select array[
array[x_labl, x_val1, x_val2],
array[y_labl, y_val1, y_val2 ]
] as chartData
from sample_table
This generates a resultset with just one column called chartData and as many rows as there are in the table. Each value is a multi-level array that follows the spec you provided.
If you want a json object instead:
select json_build_object(
'chartData',
jsonb_build_array(
jsonb_build_array(x_labl, x_val1, x_val2),
jsonb_build_array(y_labl, y_val1, y_val2)
)
) as js
from sample_table
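And if you want a single JSON object for the whole table rather than one per row, a sketch (assuming each row should contribute its two arrays, collected with json_agg over a LATERAL VALUES list):
select json_build_object(
    'chartData',
    json_agg(v.arr)
) as js
from sample_table,
lateral (values
    (json_build_array(x_labl, x_val1, x_val2)),
    (json_build_array(y_labl, y_val1, y_val2))
) as v(arr);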

Parse unknown JSON path in TSQL with openjson and/or json_value

I have a incoming data structure that looks like this:
declare @json nvarchar(max) = '{
"action": "edit",
"data": {
"2077-09-02": {
"Description": "some stuff",
"EffectDate": "2077-1-1"
}
}
}';
To make a long story short, I think T-SQL hates this JSON structure, because no matter what I have tried, I can't get to any values other than "action".
The data object contains another object whose key is "2077-09-02". That key will always be different; I can't rely on what the date will be.
This works:
select json_value(@json, '$.action');
None of this works when trying to get to the other values.
select json_value(@json, '$.data'); --returns null
select json_value(@json, '$.data[0]'); --returns null
select json_value(@json, 'lax $.data.[2077-09-02].Description');
--JSON path is not properly formatted. Unexpected character '[' is found at position 11.
select json_value(@json, 'lax $.data.2077-09-02.Description');
--JSON path is not properly formatted. Unexpected character '2' is found at position 11.
How do I get to the other values? Is the JSON not perfect enough for TSQL?
It is never a good idea to use the declarative part of a text-based container as data. "2077-09-02" is a valid JSON key, but hard to query.
You can try this:
declare @json nvarchar(max) = '{
"action": "edit",
"data": {
"2077-09-02": {
"Description": "some stuff",
"EffectDate": "2077-1-1"
}
}
}';
SELECT A.[action]
,B.[key] AS DateValue
,C.*
FROM OPENJSON(@json)
WITH([action] NVARCHAR(100)
,[data] NVARCHAR(MAX) AS JSON) A
CROSS APPLY OPENJSON(A.[data]) B
CROSS APPLY OPENJSON(B.[value])
WITH (Description NVARCHAR(100)
,EffectDate DATE) C;
The result
action DateValue Description EffectDate
edit 2077-09-02 some stuff 2077-01-01
The idea:
The first OPENJSON will return the action and the data.
I use a WITH clause to tell the engine that action is a simple value, while data is nested JSON.
The next OPENJSON dives into data.
We can now use B.[key] to get the JSON key's value.
Now we need another OPENJSON to dive into the columns within data.
However: if this JSON is under your control, I'd suggest changing its structure, as sketched below.
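For example, a hypothetical reshaping that moves the date out of key position and into a plain value makes it addressable without knowing the date up front:
{
    "action": "edit",
    "data": [
        {
            "Date": "2077-09-02",
            "Description": "some stuff",
            "EffectDate": "2077-1-1"
        }
    ]
}
An OPENJSON over '$.data' with Date, Description and EffectDate in the WITH clause then reads it directly.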
Use double quotes instead of []. JSON Path uses JavaScript's conventions where a string is surrounded by double quotes. The documentation's example contains this path $."first name".
In this case :
select json_value(@json,'$.data."2077-09-02".Description');
Returns :
some stuff
As for the other calls, JSON_VALUE can only return scalar values, not objects. You need to use JSON_QUERY to extract JSON objects, e.g.:
select json_query(@json,'$.data."2077-09-02"');
Returns :
{
"Description": "some stuff",
"EffectDate": "2077-1-1"
}

In Couchbase Java Query DSL, how do I filter for property-names that are not from the ASCII alphabet?

Couchbase queries should support any string as a property name in a filter (where clause).
But the query below returns no values for the field names "7", "#", "&", "", and "?". It does work for the field name "a".
Note that I'm using the Java DSL API, not N1ql directly.
OffsetPath statement = select("*").from(i(bucket.name())).where(x(fieldName).eq(x("$t")));
JsonObject placeholderValues = JsonObject.create().put("t", fieldVal);
N1qlQuery q = N1qlQuery.parameterized(statement, placeholderValues);
N1qlQueryResult result = bucket.query(q);
But my bucket does have each of these JsonObjects, including those with unusual property names, as shown by an unfiltered query:
{"a":"a"}
{"#":"a"}
{"&":"a"}
{"":"a"}
{"?":"a"}
How do I escape property names or otherwise support these legal names in queries?
(This question relates to another one, but that is about values and this is about field names.)
The field name is treated as an identifier. So, back-ticks are needed to escape them thus:
select("*").from(i(bucket.name())).where(x("`" + fieldName + "`").eq(x("$value"))
with parameterization of $value, of course
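The generated statement then comes out roughly like this N1QL (a sketch, assuming a bucket named b and the field name #):
SELECT * FROM `b` WHERE `#` = $value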

Constructing nested json arrays in T-SQL

Given the table:
C1 C2 C3
----------------
1 'v1' 1.1
2 'v2' 2.2
3 'v3' 3.3
Is there any "easy" way to return JSON in this format:
{
"columns": [ "C1", "C2", "C3" ],
"rows": [
[ 1, "v1", 1.1 ],
[ 2, "v2", 2.2 ],
[ 3, "v3", 3.3 ]
]
}
To generate an array with single values from a table there is a neat trick like this:
SELECT JSON_QUERY(REPLACE(REPLACE(
(
SELECT id
FROM [table] a
WHERE pk in (1,2)
FOR JSON PATH
), '{"id":',''),'}','')) 'ids'
Which generates
"ids": [1,2]
But to construct the nested array above, the replacing gets really tedious. Does anyone know a good way to achieve this?
Well, you ask for an easy way but the following will not be easy :-)
The tricky part is to know which values need to be quoted and which can remain naked.
This needs generic type analysis to find which values are strings.
The only way I know to get at metadata (besides building dynamic SQL using meta views like INFORMATION_SCHEMA.COLUMNS) is XML together with an AUTO schema.
This XML is actually very near to your needs. There is a list of columns at the beginning, followed by a list of rows. But it is not JSON, of course...
Try this out:
--This is a mockup table with the values you provided.
DECLARE @mockup TABLE(C1 INT,C2 VARCHAR(100),C3 DECIMAL(4,2));
INSERT INTO @mockup VALUES
 (1,'v1',1.1)
,(2,'v2',2.2)
,(3,'v3',3.3);
--Now we create an XML out of this
DECLARE @xml XML =
(
    SELECT *
    FROM @mockup t
    FOR XML RAW,XMLSCHEMA,TYPE
);
--Check the XML's content with SELECT @xml to see how it is looking internally
--Now the real query can start:
SELECT '{"columns":[' +
STUFF(@xml.query('declare namespace xsd="http://www.w3.org/2001/XMLSchema";
for $col in /xsd:schema/xsd:element//xsd:attribute
return
<x>,{concat("""",xs:string($col/@name),"""")}</x>
').value('.','nvarchar(max)'),1,1,'') +
'],"rows":[' +
STUFF(
(
SELECT
',[' + STUFF(b.query(' declare namespace xsd="http://www.w3.org/2001/XMLSchema";
for $attr in ./@*
return
<x>,{if(/xsd:schema/xsd:element//xsd:attribute[@name=local-name($attr)]//xsd:restriction/@base="sqltypes:varchar") then
concat("""",$attr,"""")
else
xs:string($attr)
}
</x>
').value('.','nvarchar(max)'),1,1,'') + ']'
FROM @xml.nodes('/*:row') B(b)
FOR XML PATH(''),TYPE
).value('.','nvarchar(max)'),1,1,'') +
']}';
The result
{"columns":["C1","C2","C3"],"rows":[[3,"v3",3.30],[1,"v1",1.10],[2,"v2",2.20]]}
Some explanation:
The first part will use XQuery to find all columns (xsd:attribute within XML-schema) and create the array of column names.
The second part will again use XQuery to run through all rows and write their column values into one concatenated string. Each value can refer to its type within the schema. Whenever this type is sqltypes:varchar the value will be quoted; all other values remain naked.
This will not solve each and every case generically...
To be honest, this was more for my own curiosity :-) I wanted to find out how one can solve this.
Quite probably the best answer is: use another tool. SQL Server is not the best choice here ;-)
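That said, if the column list is fixed and known up front, a much simpler hardcoded sketch (assuming SQL Server 2017+ for STRING_AGG, and the @mockup table from above) produces the same shape without the XML machinery:
SELECT '{"columns":["C1","C2","C3"],"rows":['
     + STRING_AGG('[' + CONCAT(C1,',"',STRING_ESCAPE(C2,'json'),'",',C3) + ']', ',')
     + ']}'
FROM @mockup;
This gives up the generic type detection (the quoting is hardcoded per column), which is exactly the part the XML approach automates.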