I want to automate the process of bulk inserting JSON from an API into a SQL table. The JSON will look something like the array below.
[
{
"Id": "1",
"Name": "Orlando",
"Age": "23"
},
{
"Id": "2",
"Name": "Keith",
"Age": "24"
},
{
"Id": "3",
"Name": "Donna",
"Age": "23"
}
]
I will have the same columns in the table as well: Id, Name, and Age. I have many JSON array responses from different APIs in this format, each with different column names, so I am looking for a more generic way of inserting the records into the tables, since the JSON node names and the table columns are kept the same. I am planning to keep the response-type-to-table-name mapping in some configuration, so deciding which table a record needs to be inserted into is not the problem.
I just want an approach where I can insert JSON into a table without specifying all the column names. I saw a few articles which suggest using OPENJSON, where I need to specify the column names; if I follow that approach I will end up creating a separate stored procedure for each JSON-to-table mapping. So suggest a good approach for handling this scenario using Azure Logic Apps or Functions.
There is no silver bullet here. SQL Server is declarative by design and does not support macro substitution. This leaves Dynamic SQL.
Assuming you know the Destination Table and the Columns of the destination AND your JSON is rather simple:
Example:
Declare @JSON varchar(max) = '
[
{
"Id": "1",
"Name": "Orlando",
"Age": "23"
},
{
"Id": "2",
"Name": "Keith",
"Age": "24"
},
{
"Id": "3",
"Name": "Donna",
"Age": "23"
}
]
'
Declare @Dest varchar(max) = 'YourTable'
Declare @Cols varchar(max) = '[Id],[Name],[Age]'
Declare @SQL varchar(max) = '
Insert Into '+@Dest+' ('+@Cols+')
Select '+@Cols+'
From (
Select RN = A.[Key]
,B.[key]
,B.[value]
From OpenJson('''+@JSON+''') A
Cross Apply OpenJson(A.Value) B
) src
Pivot (max(value) for [key] in ( '+@Cols+' ) ) pvt
'
Exec(@SQL)
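If you would rather not hard-code @Cols either, you can derive the column list from the JSON itself. A minimal sketch, assuming SQL Server 2017+ for STRING_AGG; it reads the keys of the first array element, and QUOTENAME brackets the generated names:

Declare @Cols varchar(max) =
(
    Select String_Agg(QuoteName([key]), ',')  -- e.g. yields '[Id],[Name],[Age]'
    From OpenJson(@JSON, '$[0]')              -- keys of the first array element
)

This relies on the JSON node names matching the table's column names exactly, which is your stated setup.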
So I'm trying to generate an XML from data that I receive in a JSON file.
What I've done so far is store each field in the JSON as a key/value pair, so a row would be FieldName, FieldValue, and JSON_PK.
I have a second table that I created in order to create the XML. It has the JSON FieldName, equivalent XML FieldName, and indentation. The plan was to create a loop in SSIS to manually create the XML.
It was suggested to me that I instead use FOR XML in my query.
However, I've run into an issue: every field is named FieldName. It's complicated by fields that hold their values like this: <Form submittedDate="2020-01-01"/>
So before I go back to creating a loop to create my XML, I'm wondering what are best practices? I can't be the first one to run into this issue.
Thanks!
A quick follow-up, because it was requested:
This is the approximate form that the JSON comes in, except it is far longer:
{
"name": "Mylist",
"id": "9e8-19c9-e5",
"templateName": "VDashboard - Plus",
"categories": [
""
],
"attributes": [
{
"name": "Division ID",
"value": "ABCD",
"Id": "123",
"units": null,
"timestamp": "1970-01-01T00:00:00.0000000Z",
"errors": null
},
{
"name": "ETA ",
"value": null,
"Id": "123",
"units": null,
"timestamp": "2021-01-25T21:24:36.2514056Z",
"errors": null
},
{
"name": "ETA Destination - Estimated Time",
"value": "1/11/2021 4:15:34 PM",
"Id": "123",
"units": null,
"timestamp": "1970-01-01T00:00:00.0000000Z",
"errors": null
}
]
}
And I need to output it as an XML File.
I need to import it into the DB because I transform certain fields.
Output should look a bit like this:
2020-12-03T08:00:00-05:00
0011
My table structure looks like this. It's done so that I won't have a different table for every report:
Name VARCHAR(4) NOT NULL,
ID VARCHAR(50),
TemplateName VARCHAR(50),
AttributeName VARCHAR(50),
AttributeSubName VARCHAR(50),
AttributeValue VARCHAR(50),
AttributeID VARCHAR(50),
AttributeUnits VARCHAR(50),
AttributeTimestamp DateTime,
AttributeErrors VARCHAR(50)
I've managed to resolve the issue.
Initially, I had hoped to put the JSON into a table, and create the XML by putting those values into a string with xml tags around it.
This XML, though, requires both elements and attributes, and its contents are dynamic, so the SQL query was far too difficult to create and even harder to troubleshoot.
I found that the easiest way was to create the whole thing in C#.
I used a script task to write the C#. I also needed a table that functions like an XREF or data dictionary: I could join it against the contents of the JSON to specify what the XML should look like.
Using an XmlWriter object and its methods WriteStartElement, WriteElementString, WriteAttributeString, WriteEndElement, and WriteEndDocument, I wrote the XML in the format it needed to take.
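For reference, the FOR XML PATH route that was suggested distinguishes attributes from elements through column aliases; a minimal T-SQL sketch (table and column names are hypothetical) of how the <Form submittedDate="..."/> shape would be produced. A fully dynamic mix of elements and attributes is exactly where this approach gets painful:

Select AttributeTimestamp As [@submittedDate], -- alias starting with @ emits an attribute
       AttributeValue     As [Value]           -- plain alias emits a child element
From   ReportData                              -- hypothetical table
For XML Path('Form'), Root('Forms')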
{
"Volcano Name": "Agua de Pau",
"Country": "Portugal",
"Region": "Azores",
"Location": {
"type": "Point",
"coordinates": [
-25.47,
37.77
]
},
"Elevation": 947,
"Type": "Stratovolcano",
"Status": "Historical",
"Last Known Eruption": "Last known eruption from 1500-1699, inclusive",
"id": "d44c94b6-81f8-4b27-4970-f79b149529d3",
"_rid": "Sl8fALN4sw4BAAAAAAAAAA==",
"_ts": 1448049512,
"_self": "dbs/Sl8fAA==/colls/Sl8fALN4sw4=/docs/Sl8fALN4sw4BAAAAAAAAAA==/",
"_etag": "\"0000443f-0000-0000-0000-564f7b680000\"",
"_attachments": "attachments/"
}
In MS SQL Server, we have something like the below to read column names from a table:
select column_name, data_type, character_maximum_length
from INFORMATION_SCHEMA.COLUMNS
where table_name = 'table_name'
I am expecting the same for DocumentDB. Is it possible, from the above sample document which has the Type "Stratovolcano", to retrieve the JSON names "Volcano Name", "Country", "Region", "Location", etc.?
An Azure Cosmos SQL container is a schema-agnostic container of items. The items in a container can have arbitrary schemas, unlike rows in a table, so Cosmos DB will not be able to do what you are asking for.
In your case it looks like all your items will have the same schema, so you could do a select * from c where c.id = "someid" and infer the schema from the returned item.
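For example, a quick sketch against the sample document above:

SELECT TOP 1 * FROM c WHERE c.Type = "Stratovolcano"

and then inspect the keys of the returned document on the client side.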
I have a table with 1 JSON type column city in a MySQL database that stores a JSON array of city objects with following structure:
{
"cities": [
{
"id": 1,
"name": "Mumbai",
"countryID": "9"
},
{
"id": 2,
"name": "New Delhi",
"countryID": "9"
},
{
"id": 3,
"name": "Abu Dhabi",
"countryID": "18"
}
]
}
I want to select objects from the cities array having countryID = 90, but I am stuck: the array of objects is stored in a single column, city, which prevents me from doing a SELECT * with WHERE JSON_CONTAINS(city->'$.cities', JSON_OBJECT('countryID', '90')).
My query looks like this and I am not getting anywhere:
SELECT JSON_EXTRACT(city, '$.cities') FROM MyTable WHERE JSON_CONTAINS(city->'$.cities', JSON_OBJECT('countryID', '90'))
It'd be a great help if someone could point me in the right direction or give me a solution to this.
Thanks!
If you are using MySQL 8.0, there is a feature called JSON table functions (JSON_TABLE). It converts JSON data into tabular form; from there you can filter the result.
The query to achieve this is given below:
Select country
FROM json_cal,
JSON_TABLE(
city,
"$.cities[*]" COLUMNS(
country JSON PATH "$",
NESTED PATH '$.countryID' COLUMNS (countryID TEXT PATH '$')
)
) AS jt1
where countryID = 90;
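If you only need the matching city objects' fields, a flat COLUMNS list is a slightly simpler sketch of the same idea (assuming, as in the question, the table is MyTable and the JSON column is city):

SELECT jt.id, jt.name, jt.countryID
FROM MyTable,
     JSON_TABLE(
       city,
       '$.cities[*]' COLUMNS (            -- one row per object in the array
         id        INT         PATH '$.id',
         name      VARCHAR(50) PATH '$.name',
         countryID VARCHAR(10) PATH '$.countryID'
       )
     ) AS jt
WHERE jt.countryID = '90';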
More information on JSON table functions can be found in the MySQL documentation.
To avoid using temporary tables, I hoped to store some data as a JSON array within a variable and join on it. The data looks something like this:
[
{
"CarID": "9",
"Tank": "11.4",
"Distance": "120",
"From": "Brussels",
"To": "Bruges"
},
{
"CarID": "22",
"Tank": "15.9",
"Distance": "70",
"From": "Eupen",
"To": "Cologne"
}
]
I would like to set a variable in MySQL to that value and be able to do something like the following:
SELECT
(
Cars.Consumption
* JSON_UNQUOTE(JSON_something('???','$.Distance'))
) - JSON_UNQUOTE(JSON_something('???','$.Tank')) AS neededRefuel
FROM Cars
WHERE JSON_SEARCH(
#myJson,
'one',
CAST(Cars.CarID AS JSON),
NULL,
'$[*].CarID'
) IS NOT NULL
This is just a simplified example.
Apparently JSON values stored as integers are not easy to match in MySQL, so I set quotes.
I wanted to use this kind of filter within a view, so temporary tables are not really an option.
Using MySQL 8.0.11
I just stumbled upon JSON_TABLE and it seems to do exactly what I want :)
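A minimal sketch of how it could replace the JSON_something placeholders above, assuming @myJson holds the array (note that very early MySQL 8.0 releases had limitations on feeding JSON_TABLE from a user variable):

SELECT (Cars.Consumption * jt.Distance) - jt.Tank AS neededRefuel
FROM Cars
JOIN JSON_TABLE(
       @myJson,                            -- the JSON array from the question
       '$[*]' COLUMNS (
         CarID    INT          PATH '$.CarID',
         Tank     DECIMAL(5,1) PATH '$.Tank',
         Distance INT          PATH '$.Distance'
       )
     ) AS jt
  ON jt.CarID = Cars.CarID;                -- join replaces the JSON_SEARCH filter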
I have a SQL 2016 table that contains a column holding JSON data. A sample JSON document looks as follows:
{
"_id": "5a450f0383cac0d725cd6735",
"firstname": "Nanette",
"lastname": "Mccormick",
"registered": "2016-07-10T01:50:10 +04:00",
"friends": [
{
"id": 0,
"name": "Cote Collins",
"interests": [
"Movies",
"Movies",
"Cars"
]
},
{
"id": 1,
"name": "Ratliff Ellison",
"interests": [
"Birding",
"Birding",
"Chess"
]
},
{
"id": 2,
"name": "William Ratliff",
"interests": [
"Music",
"Chess",
"Software"
]
}
],
"greeting": "Hello, Nanette! You have 4 unread messages.",
"favoriteFruit": "apple"
}
I want to pull all documents in which the interests array of each friends object contains a certain value. I attempted this but got no results:
Select *
From <MyTable>
Where 'Chess' IN (Select value From OPENJSON(JsonValue, '$.friends.interests'))
I should have gotten several rows returned. I must not be referencing the interests array correctly or not understanding how SQL Server deals with a JSON array of this type.
Since interests is a nested array, you need to parse your way through the array levels. To do this, you can use CROSS APPLY with OPENJSON(). The first CROSS APPLY will get you the friend names and the JSON array of interests, and then the second CROSS APPLY pulls the interests out of the array and correlates them with the appropriate friend names. Here's an example query:
Select [name]
From #MyTable
CROSS APPLY OPENJSON(JsonValue, '$.friends')
WITH ([name] NVARCHAR(100) '$.name',
interests NVARCHAR(MAX) AS JSON)
CROSS APPLY OPENJSON(interests)
WITH (Interest NVARCHAR(100) '$')
WHERE Interest = 'Chess'
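And if you want the whole document back rather than the friend names, the same two-level parse can sit inside an EXISTS; a sketch building on the query above:

Select *
From #MyTable t
Where Exists (
    Select 1
    From OPENJSON(t.JsonValue, '$.friends')
         WITH (interests NVARCHAR(MAX) AS JSON) f   -- keep the nested array as JSON
    Cross Apply OPENJSON(f.interests)
         WITH (Interest NVARCHAR(100) '$') i        -- one row per interest
    Where i.Interest = 'Chess'
)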