Convert JSON column data into tables in SQL Server 2012

I have JSON stored in a table with 3 million rows.
A single row contains JSON in the format below:
[
  {
    "Transaction":[
      {
        "ProductInfo":[
          {
            "LINE_NO":"1",
            "STOCKNO":"890725471381116060"
          },
          {
            "LINE_NO":"2",
            "STOCKNO":"890725315884216020"
          }
        ]
      }
    ],
    "Payment":[
      {
        "ENTSRLNO":"1",
        "DOCDT":"08/25/2016"
      }
    ],
    "Invoice":[
      {
        "SALES_TYPE":"Salesinvoice",
        "POS_CODE":"A20",
        "CUSTOMER_ID":"0919732189692",
        "TRXN_TYPE":"2100",
        "DOCNOPREFIX":"CM16",
        "DOCNO":"1478",
        "BILL_DATE":"08/25/2016 03:59:07"
      }
    ]
  }
]
I want to split the above JSON into three different tables:
ProductInfo
Payment
Invoice
How can I perform this task in an optimized way?

The most efficient way is to write a stored procedure and use OPENJSON in SQL Server. Note that OPENJSON requires SQL Server 2016 or later (database compatibility level 130), so it is not available on SQL Server 2012 itself.
See the documentation:
https://msdn.microsoft.com/en-IN/library/dn921879.aspx
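A minimal sketch of that approach, assuming a source table dbo.JsonSource with an NVARCHAR(MAX) column json_col holding one document per row (both names are placeholders, not from the question). The Payment table is loaded the same way as ProductInfo, from '$[0].Payment':

-- Invoice: one row per JSON document.
INSERT INTO dbo.Invoice (SALES_TYPE, POS_CODE, CUSTOMER_ID, TRXN_TYPE, DOCNOPREFIX, DOCNO, BILL_DATE)
SELECT inv.SALES_TYPE, inv.POS_CODE, inv.CUSTOMER_ID, inv.TRXN_TYPE, inv.DOCNOPREFIX, inv.DOCNO, inv.BILL_DATE
FROM dbo.JsonSource AS s
CROSS APPLY OPENJSON(s.json_col, '$[0].Invoice')
WITH (
    SALES_TYPE  VARCHAR(50) '$.SALES_TYPE',
    POS_CODE    VARCHAR(10) '$.POS_CODE',
    CUSTOMER_ID VARCHAR(20) '$.CUSTOMER_ID',
    TRXN_TYPE   VARCHAR(10) '$.TRXN_TYPE',
    DOCNOPREFIX VARCHAR(10) '$.DOCNOPREFIX',
    DOCNO       VARCHAR(20) '$.DOCNO',
    BILL_DATE   VARCHAR(30) '$.BILL_DATE'
) AS inv;

-- ProductInfo: one row per line item in the nested array.
INSERT INTO dbo.ProductInfo (LINE_NO, STOCKNO)
SELECT p.LINE_NO, p.STOCKNO
FROM dbo.JsonSource AS s
CROSS APPLY OPENJSON(s.json_col, '$[0].Transaction[0].ProductInfo')
WITH (
    LINE_NO VARCHAR(10) '$.LINE_NO',
    STOCKNO VARCHAR(30) '$.STOCKNO'
) AS p;

For 3 million rows, wrapping these statements in a procedure and batching the inserts (e.g. by key ranges) keeps the transaction log manageable.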

Related

Get array from JSON in BigQuery

I'm trying to get the data from a JSON in BigQuery. The JSON is stored in a one-column table.
So far, I've been able to get only the "variables" array, with the following:
SELECT JSON_QUERY_ARRAY(Column1, '$.sessions[0].variables') FROM Table
How can I get the other values/arrays (sessionMessages and events)? I can't make it work.
I've tried:
JSON_VALUE(Column1, '$.sessions[0].conversation')
JSON_QUERY_ARRAY(Column1, '$.sessions[0].sessionMessages')
But I get only empty values (the original JSON has values inside these arrays).
{
  "fromDate":"2020-04-10T23:47:17.161Z",
  "pageRows":151,
  "sessions":[
    {
      "variables":[],
      "sessionDate":"2020-04-10T23:47:17.161Z",
      "botMessages":2,
      "userHasTalked":"true",
      "topics":[
        "TOPIC1"
      ],
      "sessionId":"WXXXSXSXSXXXQ_2020-01-00T23:47:17.161Z",
      "platformContactId":"XXXXXXX-XXXXXXX-XXXXXXXXXXXXXX",
      "sessionMessages":[.....],
      "queues":[
        "QUEUE1",
        "QUEUE2"
      ],
      "customerId":"SSDSDS",
      "userMessages":2,
      "operatorMessages":1,
      "sessionMessagesQty":2,
      "sessionStartingCause":"Organic",
      "channelId":"IDCHANEL",
      "conversation":"https://url.com",
      "events":[.....]
    }
  ],
  "toDate":"2020-04-10T23:47:17.161Z",
  "hasMore":true,
  "pageToken":"XXXXXXXXXXXXXX"
}
There is nothing wrong with the function and JSONPath you used, but your sample JSON has some unexpected things in it, like [.....]. After removing/replacing those, the query below works fine:
WITH a AS (SELECT
"""
{
  "fromDate":"2020-04-10T23:47:17.161Z",
  "pageRows":151,
  "sessions":[
    {
      "variables":[],
      "sessionDate":"2020-04-10T23:47:17.161Z",
      "botMessages":2,
      "userHasTalked":"true",
      "topics":[
        "TOPIC1"
      ],
      "sessionId":"WXXXSXSXSXXXQ_2020-01-00T23:47:17.161Z",
      "platformContactId":"XXXXXXX-XXXXXXX-XXXXXXXXXXXXXX",
      "sessionMessages":[1,2,3],
      "queues":[
        "QUEUE1",
        "QUEUE2"
      ],
      "customerId":"SSDSDS",
      "userMessages":2,
      "operatorMessages":1,
      "sessionMessagesQty":2,
      "sessionStartingCause":"Organic",
      "channelId":"IDCHANEL",
      "conversation":"https://url.com",
      "events":[]
    }
  ],
  "toDate":"2020-04-10T23:47:17.161Z",
  "hasMore":true,
  "pageToken":"XXXXXXXXXXXXXX"
}
""" AS data)
SELECT JSON_VALUE(data, '$.sessions[0].conversation'),
       JSON_QUERY_ARRAY(data, '$.sessions[0].sessionMessages')
FROM a;

PostgreSQL JSON Querying

I have a JSON-type column called "person", and the data stored in it is in the following format:
{
  "clients":{
    "nbr":"2",
    "info":[
      {
        "nom":"Baptiste",
        "genre":"male",
        "age":"48"
      },
      {
        "nom":"Lisa",
        "genre":"female",
        "age":"29"
      }
    ]
  }
}
I want to retrieve the names of clients.
You may use json_array_elements:
select json_array_elements(person->'clients'->'info')->>'nom' as name
from t;
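With the sample document above, this returns two rows: Baptiste and Lisa. An equivalent lateral form (a sketch, using the same table and column names as the question) keeps the set-returning function out of the select list:

-- Expand the "info" array into rows, then project the name.
select elem->>'nom' as name
from t
cross join lateral json_array_elements(person->'clients'->'info') as elem;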

How to ignore duplicates while inserting JSON data into MongoDB based on multiple conditions

[
  {
    "RollNo":1,
    "name":"John",
    "age":20,
    "Hobby":"Music",
    "Date":"9/05/2018",
    "sal":5000
  },
  {
    "RollNo":2,
    "name":"Ravi",
    "age":25,
    "Hobby":"TV",
    "Date":"9/05/2018",
    "sal":5000
  },
  {
    "RollNo":3,
    "name":"Devi",
    "age":30,
    "Hobby":"cooking",
    "Date":"9/04/2018",
    "sal":5000
  }
]
Above is the JSON file I need to insert into MongoDB. Similar JSON data is already in my MongoDB collection named 'Tests'. I have to ignore the records that are already in MongoDB, based on the condition:
[RollNo in MongoDB == RollNo in the JSON to insert && Hobby in MongoDB == Hobby in the JSON to insert && Date in MongoDB == Date in the JSON to insert].
If this condition matches, I need to ignore the insertion; otherwise I need to insert the data into the DB.
I am using Node.js. Can anyone please help me do it?
If you are using Mongoose, then use upsert:
db.people.update(
  { RollNo: 1 },
  {
    "RollNo":1,
    "name":"John",
    "age":20,
    "Hobby":"Music",
    "Date":"9/05/2018",
    "sal":5000
  },
  { upsert: true }
)
But to avoid inserting the same document more than once, only use upsert: true if the query field is uniquely indexed.
The easiest and safest way to do this is with a compound index.
You can create a compound unique index like this:
db.people.createIndex( { "RollNo": 1, "Hobby": 1, "Date": 1 }, { unique: true } )
Then duplicate inserts will produce a duplicate-key error (code 11000), which you need to handle in your code.
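A hedged sketch of that insert-and-ignore pattern in Node.js, assuming the official 'mongodb' driver (v4+) and the unique compound index created above; the connection string, database, and collection names are placeholders:

const { MongoClient } = require('mongodb');

async function insertIgnoringDuplicates(docs) {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  try {
    const people = client.db('test').collection('people');
    // ordered: false keeps inserting the remaining documents even after
    // one of them hits the unique index.
    await people.insertMany(docs, { ordered: false });
  } catch (err) {
    // 11000 = duplicate key. Swallow only those; rethrow anything else.
    const writeErrors = err.writeErrors || [err];
    if (!writeErrors.every(e => e.code === 11000)) throw err;
  } finally {
    await client.close();
  }
}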

Extracting multiple associative objects from JSON type in MySQL

Trying to figure out the best way to query a MySQL table containing a JSON column.
I am able to successfully get product OR port:
SELECT ip, JSON_EXTRACT(json_data, '$.data[*].product') FROM `network`
This returns:
["ftp","ssh"]
What I'm looking to get is something like this, or some other way to represent the association and handle null values:
[["ftp",21],["ssh",22],[NULL,23]]
Sample JSON:
{
  "key1":"Value",
  "key2":"Value",
  "key3":"Value",
  "data":[
    {
      "product":"ftp",
      "port":"21"
    },
    {
      "product":"ssh",
      "port":"22"
    },
    {
      "port":"23"
    }
  ]
}
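One possible approach (a sketch for MySQL 8.0+, a version the question does not state) is JSON_TABLE, which keeps each array element's product and port paired and yields NULL where a key is missing:

-- JSON_TABLE expands $.data[*] into rows; "product" is NULL for the
-- element that has only a port.
SELECT n.ip, jt.product, jt.port
FROM `network` AS n,
     JSON_TABLE(
       n.json_data, '$.data[*]'
       COLUMNS (
         product VARCHAR(32) PATH '$.product',
         port    INT         PATH '$.port'
       )
     ) AS jt;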

How to make an array non-zero-based in Rails

I'm trying to create a d3.js graph from a Rails database. This takes the following JSON:
{
  "nodes":[
    {
      "name":"Sebo",
      "group":4,
      "id":1
    },
    {
      "name":"Pierre",
      "group":5,
      "id":2
    },
    {
      "name":"Bilbo",
      "group":2,
      "id":3
    },
    {
      "name":"yyyyyyyy",
      "group":2,
      "id":4
    }
  ],
  "links":[
    {
      "source":3,
      "target":2,
      "value":null
    },
    {
      "source":3,
      "target":1,
      "value":null
    },
    {
      "source":4,
      "target":2,
      "value":null
    },
    {
      "source":4,
      "target":1,
      "value":null
    }
  ]
}
I have created a button that allows the current user to follow another user. This then gets stored in the database, and eventually the graph can be re-visualised.
The problem is that the request to update the database is based on the current user's id (from the database). This is non-zero-based indexing, so the first user has id 1. However, the JSON links use zero-based indexing into the nodes array. This means that if user_id=1 connects to user_id=4, then when the graph is rendered again the connection is attributed to user id 2. What would be great is if I could specify that the user_id index starts at zero, so that the array and the database agree. Is this the correct way to think about this? Can I force the indexing of the users table to start at zero, e.g. in a Rails schema?
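Rather than renumbering the users table, a common fix (a sketch, not from the original thread) is to make d3 resolve links by each node's id field instead of by array position, so 1-based database ids work unchanged. With d3 v4+:

// Resolve link source/target by the node's "id" field rather than by
// array index. "graph" is the JSON above; width/height are assumed SVG dimensions.
const simulation = d3.forceSimulation(graph.nodes)
  .force("link", d3.forceLink(graph.links).id(d => d.id))
  .force("charge", d3.forceManyBody())
  .force("center", d3.forceCenter(width / 2, height / 2));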