I have the JSON below. I was trying to construct a query to fetch the FirstName, LastName, iPhone number and home number, using a JSON Path filter expression, but it's not working for me.
{
  "firstName": "John",
  "lastName" : "doe",
  "age" : 26,
  "address" : {
    "streetAddress": "naist street",
    "city" : "Nara",
    "postalCode" : "630-0192"
  },
  "phoneNumbers": [
    {
      "type" : "iPhone",
      "number": "0123-4567-8888"
    },
    {
      "type" : "home",
      "number": "0123-4567-8910"
    }
  ]
}
Query used
DECLARE @jsonInfo NVARCHAR(MAX)
SET @jsonInfo = N'{
  "firstName": "John",
  "lastName" : "doe",
  "age" : 26,
  "address" : {
    "streetAddress": "naist street",
    "city" : "Nara",
    "postalCode" : "630-0192"
  },
  "phoneNumbers": [
    {
      "type" : "iPhone",
      "number": "0123-4567-8888"
    },
    {
      "type" : "home",
      "number": "0123-4567-8910"
    }
  ]
}'
SELECT
    JSON_VALUE(@jsonInfo, '$.firstName') AS FirstName,
    JSON_VALUE(@jsonInfo, '$.lastName') AS LastName
    --JSON_VALUE(@jsonInfo, '$.phoneNumbers[?(@.type=="iPhone")].number') AS IPhoneNumber,
    --JSON_VALUE(@jsonInfo, '$.phoneNumbers[?(@.type=="home")].number') AS HomeNumber
Regards
Amirtharaj
One way to parse this JSON is with OPENJSON() and an explicit schema. Note that JSON_VALUE() does not support JSONPath filter expressions like [?(@.type=="iPhone")], which is why your commented-out lines fail. phoneNumbers is a JSON array, so you need an additional OPENJSON() call.
SELECT
    j1.firstName, j1.lastName, j1.streetAddress, j1.city, j1.postalCode,
    j2.*
FROM OPENJSON(@jsonInfo) WITH (
    firstName varchar(100) '$.firstName',
    lastName varchar(100) '$.lastName',
    streetAddress varchar(100) '$.address.streetAddress',
    city varchar(100) '$.address.city',
    postalCode varchar(100) '$.address.postalCode',
    phoneNumbers nvarchar(max) '$.phoneNumbers' AS JSON
) j1
CROSS APPLY (
    -- Pivot the phone-number rows into one row with an iPhone and a home column
    SELECT
        MAX(CASE WHEN [type] = 'iPhone' THEN [number] END) AS iPhone,
        MAX(CASE WHEN [type] = 'home' THEN [number] END) AS home
    FROM OPENJSON(j1.phoneNumbers) WITH (
        type varchar(10) '$.type',
        number varchar(20) '$.number'
    )
) j2
Result:
firstName | lastName | streetAddress | city | postalCode | iPhone         | home
John      | doe      | naist street  | Nara | 630-0192   | 0123-4567-8888 | 0123-4567-8910
Of course, you can extract each phone number from the $.phoneNumbers JSON array using $.phoneNumbers[x].number as the path expression (x is a zero-based index):
SELECT
    j1.firstName, j1.lastName, j1.streetAddress, j1.city, j1.postalCode,
    j1.number1, j1.number2
FROM OPENJSON(@jsonInfo) WITH (
    firstName varchar(100) '$.firstName',
    lastName varchar(100) '$.lastName',
    streetAddress varchar(100) '$.address.streetAddress',
    city varchar(100) '$.address.city',
    postalCode varchar(100) '$.address.postalCode',
    number1 varchar(100) '$.phoneNumbers[0].number',
    number2 varchar(100) '$.phoneNumbers[1].number'
) j1
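If you only need those two scalars and the array order is fixed (iPhone first, home second), the same zero-based index paths also work directly with JSON_VALUE(); a minimal sketch:
SELECT
    JSON_VALUE(@jsonInfo, '$.firstName') AS FirstName,
    JSON_VALUE(@jsonInfo, '$.lastName')  AS LastName,
    JSON_VALUE(@jsonInfo, '$.phoneNumbers[0].number') AS IPhoneNumber,
    JSON_VALUE(@jsonInfo, '$.phoneNumbers[1].number') AS HomeNumber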
Related
I have a JSON string as shown below. How can I create the table below (or something similar) using SQL Server, with a procedure or function? Thanks all.
I'm using SQL Server 15.0.2080.9.
{
"Person": {
"firstName": "John",
"lastName": "Smith",
"age": 25,
"Address": {
"streetAddress":"21 2nd Street",
"city":"New York",
"state":"NY",
"postalCode":"10021"
},
"PhoneNumbers": {
"home":"212 555-1234",
"fax":"646 555-4567"
}
}
}
An excellent starting point is this Q&A, but a simplified approach (if the parsed JSON has a variable structure with nested JSON objects, but without JSON arrays) is the following recursive statement:
JSON:
DECLARE @json nvarchar(max) = N'
{
"Person": {
"firstName": "John",
"lastName": "Smith",
"age": 25,
"Address": {
"streetAddress":"21 2nd Street",
"city":"New York",
"state":"NY",
"postalCode":"10021"
},
"PhoneNumbers": {
"home":"212 555-1234",
"fax":"646 555-4567"
}
}
}'
Statement:
;WITH rCTE AS (
    -- Anchor: start from the top-level "Person" object
    SELECT
        1 AS Id,
        CONVERT(nvarchar(max), NULL) COLLATE DATABASE_DEFAULT AS [Parent],
        CONVERT(nvarchar(max), N'Person') COLLATE DATABASE_DEFAULT AS [Key],
        CONVERT(nvarchar(max), JSON_QUERY(@json, '$.Person')) COLLATE DATABASE_DEFAULT AS [Value]
    UNION ALL
    -- Recursive step: expand every value that is itself a JSON object
    SELECT
        r.Id + 1,
        CONVERT(nvarchar(max), r.[Key]) COLLATE DATABASE_DEFAULT,
        CONVERT(nvarchar(max), c.[Key]) COLLATE DATABASE_DEFAULT,
        CONVERT(nvarchar(max), c.[value]) COLLATE DATABASE_DEFAULT
    FROM rCTE r
    CROSS APPLY OPENJSON(r.[Value]) c
    WHERE ISJSON(r.[Value]) = 1   -- stops the recursion at scalar values
)
SELECT [Parent], [Key], [Value]
FROM rCTE
ORDER BY Id
Result:
Parent       | Key           | Value
------------ | ------------- | -----
NULL         | Person        | {"firstName": "John", "lastName": "Smith", "age": 25, "Address": {"streetAddress":"21 2nd Street", "city":"New York", "state":"NY", "postalCode":"10021"}, "PhoneNumbers": {"home":"212 555-1234", "fax":"646 555-4567"}}
Person       | firstName     | John
Person       | lastName      | Smith
Person       | age           | 25
Person       | Address       | {"streetAddress":"21 2nd Street", "city":"New York", "state":"NY", "postalCode":"10021"}
Person       | PhoneNumbers  | {"home":"212 555-1234", "fax":"646 555-4567"}
PhoneNumbers | home          | 212 555-1234
PhoneNumbers | fax           | 646 555-4567
Address      | streetAddress | 21 2nd Street
Address      | city          | New York
Address      | state         | NY
Address      | postalCode    | 10021
You can use OPENJSON(); it will give you your desired result.
This is an example for your specific JSON:
DECLARE @Json NVARCHAR(max) = N'{
"Person": {
"firstName": "John",
"lastName": "Smith",
"age": 25,
"Address": {
"streetAddress":"21 2nd Street",
"city":"New York",
"state":"NY",
"postalCode":"10021"
},
"PhoneNumbers": {
"home":"212 555-1234",
"fax":"646 555-4567"
}
}
}'
SELECT NULL AS Parent
    ,[key]
    ,[value]
FROM OPENJSON(@Json, '$')
UNION ALL
SELECT 'Person' AS Parent
    ,[key]
    ,[value]
FROM OPENJSON(@Json, '$.Person')
UNION ALL
SELECT 'Address' AS Parent
    ,[key]
    ,[value]
FROM OPENJSON(@Json, '$.Person.Address')
UNION ALL
SELECT 'PhoneNumbers' AS Parent
    ,[key]
    ,[value]
FROM OPENJSON(@Json, '$.Person.PhoneNumbers')
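Note that this approach hard-codes a path for each nesting level, so it only works when the JSON structure is known in advance; the recursive CTE above discovers the nesting dynamically.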
I have a table people that has four columns: Name, TagName, Value, and Location. I want to convert TagName and Value to JSON, with the Name and Location columns as the root node (Name and Location are the same across multiple records).
Needed output:
{
"{"Name":"EMP1","Location":"mumbai"}": [
{
"TagName": "1",
"Value": "844.17769999999996"
},
{
"TagName": "abc",
"Value": "837.43679999999995"
},
{
"TagName": "pqr",
"Value": "0"
},
{
"TagName": "XYZ",
"Value": "1049.2429999999999"
}
]
}
Please check the query below, in which I am trying to create the JSON string using FOR JSON PATH, but am stuck on the root node.
SELECT TagName
,Value
FROM dbo.people
FOR JSON PATH, ROOT('')---
When I convert the above JSON back into tabular format, the required output is:
Name | Location |TagName| Value
EMP1 | Mumbai |1 | 844.17769999999996|
EMP1 | Mumbai |abc | 837.43679999999995|
.....
Your expected output is not valid JSON, but you are probably looking for something like this:
Table:
CREATE TABLE People (
[Name] varchar(10),
[Location] varchar(50),
[TagName] varchar(3),
[Value] numeric(20, 14)
)
INSERT INTO People ([Name], [Location], [TagName], [Value])
VALUES
('EMP1', 'Mumbai', '1', 844.17769999999996),
('EMP1', 'Mumbai', 'abc', 837.43679999999995),
('EMP2', 'Mumbai', 'abc', 837.43679999999995)
Statement:
SELECT DISTINCT p.[Name], p.[Location], c.Items
FROM People p
CROSS APPLY (
    -- Build the JSON array of tags for this (Name, Location) pair
    SELECT [TagName], [Value]
    FROM People
    WHERE [Name] = p.[Name] AND [Location] = p.[Location]
    FOR JSON AUTO
) c (Items)
FOR JSON PATH
Result:
[
{
"Name":"EMP1",
"Location":"Mumbai",
"Items":[
{
"TagName":"1",
"Value":844.17769999999996
},
{
"TagName":"abc",
"Value":837.43679999999995
}
]
},
{
"Name":"EMP2",
"Location":"Mumbai",
"Items":[
{
"TagName":"abc",
"Value":837.43679999999995
}
]
}
]
If you want to parse the generated JSON, you need to use OPENJSON() twice:
Generated JSON:
DECLARE @json nvarchar(max) = N'[
{
"Name":"EMP1",
"Location":"Mumbai",
"Items":[
{
"TagName":"1",
"Value":844.17769999999996
},
{
"TagName":"abc",
"Value":837.43679999999995
}
]
},
{
"Name":"EMP2",
"Location":"Mumbai",
"Items":[
{
"TagName":"abc",
"Value":837.43679999999995
}
]
}
]'
Statement:
SELECT j1.Name, j1.Location, j2.TagName, j2.Value
FROM OPENJSON(#json) WITH (
[Name] varchar(10) '$.Name',
[Location] varchar(50) '$.Location',
[Items] nvarchar(max) '$.Items' AS JSON
) j1
OUTER APPLY OPENJSON(j1.Items) WITH (
[TagName] varchar(3) '$.TagName',
[Value] numeric(20, 14) '$.Value'
) j2
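Run against the generated JSON above, this should return the tabular shape from your question:
Name | Location | TagName | Value
EMP1 | Mumbai   | 1       | 844.17769999999996
EMP1 | Mumbai   | abc     | 837.43679999999995
EMP2 | Mumbai   | abc     | 837.43679999999995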
I have the JSON block modeled below. I want to selectively delete individual blocks from my_items based on the id, which is AAA and BBB in my sample. I.e., if I tried to delete the AAA block under my_items, I would want to just delete the {"id" : "AAA"} entry, but if I wanted to delete the BBB block, it would delete the larger {"name" : "TestRZ", "id" : "BBB", "description" : ""} block.
I know I can use #- to remove whole blocks, like SELECT '{sample_json}'::jsonb #- '{my_items}', which would purge out the whole my_items block. But I don't know how to use this to conditionally delete children under a parent block of JSON. I have also used code similar to this example to append data inside a nested structure, by reading in the node of the nested structure, concatenating new data to it, and rewriting it: UPDATE data SET value = jsonb_set(value, '{my_items}', value->'items' || (:'json_to_adds'), true) where id='testnofeed'.
But I don't know how to apply either of these methods to: 1) delete data in a nested structure using #-, or 2) do the same using jsonb_set. Does anyone have any guidance on how to do this using either of these (or another method)?
{
"urlName" : "testurl",
"countryside" : "",
"description" : "",
"my_items" : [
{
"id" : "AAA"
},
{
"name" : "TestRZ",
"id" : "BBB",
"description" : ""
},
],
"name" : "TheName"
}
Data is stored in the value jsonb column. When I update, I will be able to pass in a unique kind so that it only updates this JSON in one row in the DB.
-- Table Definition
CREATE TABLE "public"."data" (
"id" varchar(100) NOT NULL,
"kind" varchar(100) NOT NULL,
"revision" int4 NOT NULL,
"value" jsonb
);
This works in PostgreSQL 12 and later with jsonpath support. If you do not have jsonpath, then please leave a comment.
with data as (
select '{
"urlName" : "testurl",
"countryside" : "",
"description" : "",
"my_items" : [
{
"id" : "AAA"
},
{
"name" : "TestRZ",
"id" : "BBB",
"description" : ""
}
],
"name" : "TheName"
}'::jsonb as stuff
)
select jsonb_set(stuff, '{my_items}',
jsonb_path_query_array(stuff->'my_items', '$ ? (@."id" <> "AAA")'))
from data;
jsonb_set
---------------------------------------------------------------------------------------------------------------------------------------------------
{"name": "TheName", "urlName": "testurl", "my_items": [{"id": "BBB", "name": "TestRZ", "description": ""}], "countryside": "", "description": ""}
(1 row)
To update the table directly, the statement would be:
update data
set value = jsonb_set(value, '{my_items}',
jsonb_path_query_array(value->'my_items',
'$ ? (@."id" <> "AAA")'));
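For reference, if you specifically want to do the delete with the #- operator from your question, one possible sketch (assuming each id appears at most once in my_items) is to look up the element's zero-based index first and then delete by path:
update data
set value = value #- array['my_items',
    -- find the zero-based index of the element whose id is 'AAA'
    (select (e.pos - 1)::text
     from jsonb_array_elements(value->'my_items') with ordinality as e(item, pos)
     where e.item->>'id' = 'AAA')]
where value->'my_items' @> '[{"id": "AAA"}]';
The jsonb_path_query_array() version above is simpler because it never needs to know the index.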
This works for versions before PostgreSQL 12:
with data as (
select 1 as id, '{
"urlName" : "testurl",
"countryside" : "",
"description" : "",
"my_items" : [
{
"id" : "AAA"
},
{
"name" : "TestRZ",
"id" : "BBB",
"description" : ""
}
],
"name" : "TheName"
}'::jsonb as stuff
), expand as (
select d.id, d.stuff, e.item, e.rn
from data d
cross join lateral jsonb_array_elements(stuff->'my_items') with ordinality as e(item, rn)
)
select id, jsonb_set(stuff, '{my_items}', jsonb_agg(item order by rn)) as new_stuff
from expand
where item->>'id' != 'AAA'
group by id, stuff;
id | new_stuff
----+---------------------------------------------------------------------------------------------------------------------------------------------------
1 | {"name": "TheName", "urlName": "testurl", "my_items": [{"id": "BBB", "name": "TestRZ", "description": ""}], "countryside": "", "description": ""}
(1 row)
The direct update for this is a little more involved:
with expand as (
select d.id, d.value, e.item, e.rn
from data d
cross join lateral jsonb_array_elements(value->'my_items')
with ordinality as e(item, rn)
), agg as (
select id, jsonb_set(value, '{my_items}', jsonb_agg(item order by rn)) as new_value
from expand
where item->>'id' != 'AAA'
group by id, value
)
update data
set value = agg.new_value
from agg
where agg.id = data.id;
I am trying to form JSON from a table result set:
create table testmalc(
    appid int identity(1,1),
    propertyid1 int,
    propertyid1val varchar(10),
    propertyid2 int,
    propertyid2val varchar(10)
)
insert into testmalc values(456,'t1',789,'t2')
insert into testmalc values(900,'t3',902,'t4')
I need the desired JSON result below:
{
"data": {
"record": [{
"id": appid,
"customFields": [{
"customfieldid": propertyid1 ,
"customfieldvalue": propertyid1val
},
{
"customfieldid": propertyid2 ,
"customfieldvalue": propertyid2val
}
]
},
{
"id": appid,
"customFields": [{
"customfieldid": propertyid1 ,
"customfieldvalue": propertyid1val
},
{
"customfieldid": propertyid2 ,
"customfieldvalue": propertyid2val
}
]
}
]
}
}
I am trying to use STUFF() but was not getting the desired result. Now I'm trying UNPIVOT.
If you cannot upgrade to SQL Server 2016 for native JSON support, you should try to solve this in an application / programming language you know.
Just for fun, I provide an approach which works, but is more of a hack than a solution:
Your test data:
DECLARE @testmalc table (
    appid int identity(1,1),
    propertyid1 int,
    propertyid1val varchar(10),
    propertyid2 int,
    propertyid2val varchar(10)
);
insert into @testmalc values(456,'t1',789,'t2')
                           ,(900,'t3',902,'t4');
--Create an XML, which is the structure most similar to JSON, and read it as an NVARCHAR string
DECLARE @intermediateXML NVARCHAR(MAX)=
(
    SELECT t.appid AS id
          ,(
            SELECT t2.propertyid1 AS [prop1/@customfieldid]
                  ,t2.propertyid1val AS [prop1/@customfieldvalue]
                  ,t2.propertyid2 AS [prop2/@customfieldid]
                  ,t2.propertyid2val AS [prop2/@customfieldvalue]
            FROM @testmalc t2
            WHERE t2.appid=t.appid
            FOR XML PATH('customFields'),TYPE
           ) AS [*]
    FROM @testmalc t
    GROUP BY t.appid
    FOR XML PATH('row')
);
--Now a bunch of replacements to turn the XML syntax into JSON
SET @intermediateXML=REPLACE(REPLACE(REPLACE(REPLACE(@intermediateXML,'=',':'),'/>','}'),'<prop1 ','{'),'<prop2 ','{');
SET @intermediateXML=REPLACE(REPLACE(REPLACE(REPLACE(@intermediateXML,'<customFields>','"customFields":['),'</customFields>',']'),'customfieldid','"customfieldid"'),'customfieldvalue',',"customfieldvalue"');
SET @intermediateXML=REPLACE(REPLACE(@intermediateXML,'<id>','"id":'),'</id>',',');
SET @intermediateXML=REPLACE(REPLACE(REPLACE(@intermediateXML,'<row>','{'),'</row>','}'),'}{','},{');
DECLARE @json NVARCHAR(MAX)=N'{"data":{"record":[' + @intermediateXML + ']}}';
PRINT @json;
The result (formatted)
{
"data": {
"record": [
{
"id": 1,
"customFields": [
{
"customfieldid": "456",
"customfieldvalue": "t1"
},
{
"customfieldid": "789",
"customfieldvalue": "t2"
}
]
},
{
"id": 2,
"customFields": [
{
"customfieldid": "900",
"customfieldvalue": "t3"
},
{
"customfieldid": "902",
"customfieldvalue": "t4"
}
]
}
]
}
}
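For reference, on SQL Server 2016 or later this JSON can be built directly with nested FOR JSON PATH queries and no XML hack. A minimal sketch against the same @testmalc table variable (my addition, not part of the original trick):
SELECT (
    SELECT t.appid AS id,
           (SELECT v.customfieldid, v.customfieldvalue
            -- unpivot the two property pairs into rows
            FROM (VALUES (t.propertyid1, t.propertyid1val),
                         (t.propertyid2, t.propertyid2val)
                 ) v (customfieldid, customfieldvalue)
            FOR JSON PATH) AS customFields
    FROM @testmalc t
    FOR JSON PATH
) AS [data.record]
FOR JSON PATH, WITHOUT_ARRAY_WRAPPER;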
I am trying to get some data into my Postgres DB from a CSV file containing a JSON dump. As long as it is just strings it is alright, but I want my strings containing timestamps to be stored as timestamps in Postgres. So I need to convert the two fields registerdate and dateofbirth. The code below works except for the date conversion lines...
Any clue on how to successfully convert the two strings to timestamps below?
CREATE TABLE users (
id SERIAL,
mongo_id TEXT,
password VARCHAR(128),
firstname VARCHAR(200),
lastname VARCHAR(200),
dateofbirth TIMESTAMP,
registerdate TIMESTAMP,
displayname VARCHAR(200),
language VARCHAR(200),
country VARCHAR(200),
profilepicture VARCHAR(200),
backgroundpicture VARCHAR(200),
type VARCHAR(200),
sex VARCHAR(6),
offlinemode BOOLEAN,
email VARCHAR(200),
friends VARCHAR(255)[]
);
INSERT INTO users (mongo_id, password,firstname,lastname, dateofbirth, registerdate, displayname, language)
SELECT data->>'_id',
data->>'password',
data->>'firstName',
data->>'secondName',
to_timestamp(data->'dateOfBirth'->>'$date'), /*<------*/
to_timestamp(data->'registerDate'->>'$date'), /*<-------*/
data->>'displayName',
data->>'language'
FROM import.mongo_users;
The data format in mongo_users:
{ "_id" : "1164", "password" : "aaa123123", "firstName" : "Adam", "secondName" : "Kowlalski", "dateOfBirth" : { "$date" : "2014-05-18T07:41:09.202+0200" }, "registerDate" : { "$date" : "2016-06-01T12:59:53.941+0200" }, "displayName" : "Adam Kowlalski", "language" : "nb", "country" : null, "profilePicture" : null, "backgroundPicture" : null, "type" : "USER", "sex" : "MALE", "offlineMode" : true, "email" : "bk_1164#test.email", "friends" : [ "KUE" ] }
The text-input form of the to_timestamp function requires two parameters: the date/time in text format, and the formatting template, which is why your single-argument calls fail.
But you don't need to use to_timestamp here at all, since your date-time values are already formatted as valid timestamps, and PostgreSQL understands such ISO-formatted timestamps well enough. The following works well:
SELECT data->>'_id',
data->>'password',
data->>'firstName',
data->>'secondName',
(data->'dateOfBirth'->>'$date')::timestamp, --<< simply cast to timestamp
(data->'registerDate'->>'$date')::timestamp, --<< simply cast to timestamp
data->>'displayName',
data->>'language'
FROM (SELECT
'{ "_id" : "1164", "password" : "aaa123123", "firstName" : "Adam", "secondName" : "Kowlalski", "dateOfBirth" : { "$date" : "2014-05-18T07:41:09.202+0200" },
"registerDate" : { "$date" : "2016-06-01T12:59:53.941+0200" }, "displayName" : "Adam Kowlalski", "language" : "nb", "country" : null, "profilePicture" : null,
"backgroundPicture" : null, "type" : "USER", "sex" : "MALE", "offlineMode" : true, "email" : "bk_1164#test.email", "friends" : [ "KUE" ] }'::jsonb as data) d
Your JSON date format looks like ISO 8601 (https://en.wikipedia.org/wiki/ISO_8601). For transforming the input string to a date variable you should use the to_date function,
e.g.
to_date(data->'dateOfBirth'->>'$date','YYYY-MM-DD"T"HH24:MI:SS')
Be aware that you have to check whether time-zone differences play a role. PostgreSQL has a format option OF: https://www.postgresql.org/docs/current/static/functions-formatting.html
For me this is what worked.
SELECT to_timestamp(nullif(LEFT(dates_json->>'date_prop',10), '')::numeric) as date_extracted FROM table_name
First shrink the value to 10 characters (in case the timestamp includes milliseconds), then replace an empty string with NULL via nullif(), cast to numeric, and pass the result to to_timestamp().
This way I also fixed another error, "date/time field value out of range".
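As a worked illustration (the date_prop value here is a hypothetical epoch-in-milliseconds string, the question's registerDate expressed as milliseconds):
SELECT to_timestamp(
           nullif(left('{"date_prop":"1464778793941"}'::jsonb ->> 'date_prop', 10), '')::numeric
       ) AS date_extracted;
-- left(..., 10) keeps only the seconds part (1464778793), which
-- to_timestamp() turns into 2016-06-01 10:59:53+00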