How to parse the JSON value of a text column in Cassandra

I have a column of text type that contains a JSON value.
{
    "customer": [
        {
            "details": {
                "customer1": {
                    "name": "john",
                    "addresses": {
                        "address1": {
                            "line1": "xyz",
                            "line2": "pqr"
                        },
                        "address2": {
                            "line1": "abc",
                            "line2": "efg"
                        }
                    }
                },
                "customer2": {
                    "name": "robin",
                    "addresses": {
                        "address1": null
                    }
                }
            }
        }
    ]
}
How can I extract the 'address1' JSON field from this column with a query?
First I am trying to fetch the JSON value; then I will go on to parsing.
SELECT JSON customer from text_column;
With my query, I get the following error.
com.datastax.driver.core.exceptions.SyntaxError: line 1:12 no viable
alternative at input 'customer' (SELECT [JSON] customer...)
Cassandra version 2.1.13

You can't use SELECT JSON in Cassandra v2.1.x (CQL v3.2.x).
For Cassandra v2.1.x (CQL v3.2.x), the only supported operations after SELECT are:
DISTINCT
COUNT (*)
COUNT (1)
column_name AS new_name
WRITETIME (column_name)
TTL (column_name)
dateOf(), now(), minTimeuuid(), maxTimeuuid(), unixTimestampOf(), typeAsBlob() and blobAsType()
Cassandra v2.2.x (CQL v3.3.x) introduced SELECT JSON:
With SELECT statements, the new JSON keyword can be used to return each row as a single JSON-encoded map. The remainder of the SELECT statement behavior is the same.
The result map keys are the same as the column names in a normal result set. For example, a statement like “SELECT JSON a, ttl(b) FROM ...” would result in a map with keys "a" and "ttl(b)". However, there is one notable exception: for symmetry with INSERT JSON behavior, case-sensitive column names with upper-case letters will be surrounded with double quotes. For example, “SELECT JSON myColumn FROM ...” would result in a map key "\"myColumn\"" (note the escaped quotes).
The map values will be JSON-encoded representations of the result set values.
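So on Cassandra v2.2.x or later, the query from the question becomes valid as written. A sketch of what to expect (table and column names are those from the question; since customer is a text column, its value comes back as a JSON-encoded string that still has to be parsed client-side):

SELECT JSON customer FROM text_column;

-- Each row is returned as a single [json] column, e.g.:
-- {"customer": "{\"customer\": [ ... ]}"}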

If your Cassandra version is 2.1.x or below, you can use a Python-based approach:
Write a Python script using the Cassandra Python API.
Here you have to fetch your row first and then use Python's json.loads method, which will convert your JSON text column value into a Python dict. Then you can work with the dictionary and extract the nested keys you need. See the code snippet below.
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider
import json

if __name__ == '__main__':
    auth_provider = PlainTextAuthProvider(username='xxxx', password='xxxx')
    cluster = Cluster(['0.0.0.0'], port=9042, auth_provider=auth_provider)
    session = cluster.connect("keyspace_name")
    print("session created successfully")

    rows = session.execute('select * from user limit 10')
    for user_row in rows:
        # parse the JSON text column into a Python dict
        customer_dict = json.loads(user_row.customer)
        print(customer_dict.keys())
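From there, extracting address1 is plain dictionary access. For example, against the sample document from the question (a sketch; adjust the key path to your actual structure):

address1 = customer_dict["customer"][0]["details"]["customer1"]["addresses"]["address1"]
print(address1)  # {'line1': 'xyz', 'line2': 'pqr'}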

Related

How can I use the Oracle REGEXP_SUBSTR to extract specific JSON values?

I have some columns in my Oracle database that contain JSON, and to extract their data in a query I use REGEXP_SUBSTR.
In the following example, value is a column in the table DOSSIER that contains JSON. The regex extracts the value of the property client.reference in that JSON.
SELECT REGEXP_SUBSTR(value, '"client"(.*?)"reference":"([^"]+)"', 1, 1, NULL, 2) FROM DOSSIER;
So if the JSON looks like this:
[...],
"client": {
    "someproperty":"123",
    "someobject": {
        [...]
    },
    "reference":"ABCD",
    "someotherproperty":"456"
},
[...]
The SQL query will return ABCD.
My problem is that some JSON has multiple instances of "client", for example:
[...],
"contract": {
    "client":"Name of the client",
    "supplier": {
        "reference":"EFGH"
    }
},
[...],
"client": {
    "someproperty":"123",
    "someobject": {
        [...]
    },
    "reference":"ABCD",
    "someotherproperty":"456"
},
[...]
You see the issue: now the SQL query will return EFGH, which is the supplier's reference.
How can I make sure that "reference" is contained in the JSON object "client"?
EDIT: I'm on Oracle 11g, so I can't use the JSON API, and I would like to avoid using third-party packages.
If you are using Oracle 12c or later, then you should NOT use regular expressions; use Oracle's JSON functions instead.
If you have the table and data:
CREATE TABLE table_name ( value CLOB CHECK ( value IS JSON ) );
INSERT INTO table_name (
    value
) VALUES (
    '{
        "contract": {
            "client":"Name of the client",
            "supplier": {
                "reference":"EFGH"
            }
        },
        "client": {
            "someproperty":"123",
            "someobject": {},
            "reference":"ABCD",
            "someotherproperty":"456"
        }
    }'
);
Then you can use the query:
SELECT JSON_VALUE( value, '$.client.reference' ) AS reference
FROM table_name;
Which outputs:
REFERENCE
ABCD
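If you want the whole client object rather than a single scalar, Oracle 12c also provides JSON_QUERY (a sketch against the same sample table):

SELECT JSON_QUERY( value, '$.client' ) AS client
FROM table_name;

This would return the complete {"someproperty":"123",...} object as a JSON fragment.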
If you are using Oracle 11 or earlier, then you could use the third-party PLJSON package to parse JSON in PL/SQL; see, for example, this related question.
Alternatively, enable Java within the database, use CREATE JAVA (or the loadjava utility) to add a Java class that can parse JSON, and then wrap it in an Oracle function and use that.
I faced a similar issue recently. If "reference" is a property that is only present inside the "client" object, this will solve it:
SELECT reference FROM (
    SELECT DISTINCT
        REGEXP_SUBSTR(
            DBMS_LOB.SUBSTR( value, 4000 ),
            '"reference":"(.+?)"',
            1, 1, 'c', 1) reference
    FROM DOSSIER
) WHERE reference IS NOT NULL;
You can also adapt the regex to your needs.
Edit:
In my case the column type is CLOB, which is why I use the DBMS_LOB.SUBSTR function there. You can remove this function and pass the column directly to REGEXP_SUBSTR.
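If "reference" can also appear outside the "client" object (as in the question's second sample), one option on 11g is to anchor the regex on the object itself. A sketch against the question's DOSSIER table, relying on the first "reference" after "client": { being the wanted one:

SELECT REGEXP_SUBSTR(
    value,
    '"client"\s*:\s*\{(.*?)"reference":"([^"]+)"',
    1, 1, NULL, 2) AS reference
FROM DOSSIER;

Requiring an opening brace after "client" skips occurrences like "client":"Name of the client", so the supplier's reference is no longer matched. Like any regex-over-JSON approach, this stays fragile if the properties are reordered.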

How to extract properly when an SQLite JSON value is an array

I have an SQLite database and in one of the fields I have stored a complete JSON object. I have to make some JSON select requests. If you look at my JSON, the ALL key has a value which is an array. We need to extract some data, like all comments where the "pod" field is fb. How do I extract properly when the SQLite JSON value is an array?
select json_extract(data,'$."json"') from datatable; gives me the entire thing. Then I do
select json_extract(data,'$."json"[0]'), but I don't want to do it manually; I want to iterate.
Kindly suggest some source where I can study and work on it.
MY JSON
{
    "ALL": [{
        "comments": "your site is awesome",
        "pod": "passcode",
        "originalDirectory": "case1"
    },
    {
        "comments": "your channel is good",
        "data": ["youTube"],
        "pod": "library"
    },
    {
        "comments": "you like everything",
        "data": ["facebook"],
        "pod": "fb"
    },
    {
        "data": ["twitter"],
        "pod": "tw",
        "ALL": [{
            "data": [{
                "codeLevel": "3"
            }],
            "pod": "mo",
            "pod2": "p"
        }]
    }]
}
create table datatable ( path string , data json1 );
insert into datatable values("1" , json('<abovejson in a single line>'));
Simple List
Where your JSON represents a "simple" list of comments, you want something like:
select key, value
from datatable, json_each( datatable.data, '$.ALL' )
where json_extract( value, '$.pod' ) = 'fb' ;
which, using your sample data, returns:
2|{"comments":"you like everything","data":["facebook"],"pod":"fb"}
The use of json_each() returns a row for every element of the input JSON (datatable.data), starting at the path $.ALL (where $ is the top-level, and ALL is the name of your array: the path can be omitted if the top-level of the JSON object is required). In your case, this returns one row for each comment entry.
The fields of this row are documented at 4.13. The json_each() and json_tree() table-valued functions in the SQLite documentation: the two we're interested in are key (very roughly, the "row number") and value (the JSON for the current element). The latter will contain elements called comments and pod, etc.
Because we are only interested in elements where pod is equal to fb, we add a where clause, using json_extract() to get at pod (where $.pod is relative to value returned by the json_each function).
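If you only want particular fields rather than the whole object, you can use json_extract() in the select list as well (a sketch against the sample table above):

select json_extract( value, '$.comments' ) as comment,
       json_extract( value, '$.pod' ) as pod
from datatable, json_each( datatable.data, '$.ALL' )
where json_extract( value, '$.pod' ) = 'fb' ;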
Nested List
If your JSON contains nested elements (something I didn't notice at first), then you need to use the json_tree() function instead of json_each(). Whereas the latter will only iterate over the immediate children of the node specified, json_tree() will descend recursively through all children from the node specified.
To give us some data to work with, I have augmented your test data with an extra element:
create table datatable ( path string , data json1 );
insert into datatable values("1" , json('
{
    "ALL": [{
        "comments": "your site is awesome",
        "pod": "passcode",
        "originalDirectory": "case1"
    },
    {
        "comments": "your channel is good",
        "data": ["youTube"],
        "pod": "library"
    },
    {
        "comments": "you like everything",
        "data": ["facebook"],
        "pod": "fb"
    },
    {
        "data": ["twitter"],
        "pod": "tw",
        "ALL": [{
            "data": [{
                "codeLevel": "3"
            }],
            "pod": "mo",
            "pod2": "p"
        },
        {
            "comments": "inserted by TripeHound",
            "data": ["facebook"],
            "pod": "fb"
        }]
    }]
}
'));
If we simply switch to using json_tree(), then we see that a plain query (with no where clause) will return all elements of the source JSON:
select key, value
from datatable, json_tree( datatable.data, '$.ALL' ) limit 10 ;
ALL|[{"comments":"your site is awesome","pod":"passcode","originalDirectory":"case1"},{"comments":"your channel is good","data":["youTube"],"pod":"library"},{"comments":"you like everything","data":["facebook"],"pod":"fb"},{"data":["twitter"],"pod":"tw","ALL":[{"data":[{"codeLevel":"3"}],"pod":"mo","pod2":"p"},{"comments":"inserted by TripeHound","data":["facebook"],"pod":"fb"}]}]
0|{"comments":"your site is awesome","pod":"passcode","originalDirectory":"case1"}
comments|your site is awesome
pod|passcode
originalDirectory|case1
1|{"comments":"your channel is good","data":["youTube"],"pod":"library"}
comments|your channel is good
data|["youTube"]
0|youTube
pod|library
Because JSON objects are mixed in with simple values, we can no longer simply add where json_extract( value, '$.pod' ) = 'fb' because this produces errors when value does not represent an object. The simplest way around this is to look at the type values returned by json_each()/json_tree(): these will be the string object if the row represents a JSON object (see above documentation for other values).
Adding this to the where clause (and relying on "short-circuit evaluation" to prevent json_extract() being called on non-object rows), we get:
select key, value
from datatable, json_tree( datatable.data, '$.ALL' )
where type = 'object'
and json_extract( value, '$.pod' ) = 'fb' ;
which returns:
2|{"comments":"you like everything","data":["facebook"],"pod":"fb"}
1|{"comments":"inserted by TripeHound","data":["facebook"],"pod":"fb"}
If desired, we could use json_extract() to break apart the returned objects:
.mode column
.headers on
.width 30 15 5
select json_extract( value, '$.comments' ) as Comments,
json_extract( value, '$.data' ) as Data,
json_extract( value, '$.pod' ) as POD
from datatable, json_tree( datatable.data, '$.ALL' )
where type = 'object'
and json_extract( value, '$.pod' ) = 'fb' ;
Comments Data POD
------------------------------ --------------- -----
you like everything ["facebook"] fb
inserted by TripeHound ["facebook"] fb
Note: If your structure contained other objects, of different formats, it may not be sufficient to simply select for type = 'object': you may have to devise a more subtle filtering process.
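For example, json_tree() also exposes a path column (the path to the container of the current element), so as a sketch you could keep only the top-level comments by requiring the container to be the outer array:

select key, value
from datatable, json_tree( datatable.data, '$.ALL' )
where type = 'object'
and path = '$.ALL'
and json_extract( value, '$.pod' ) = 'fb' ;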

How to query JSON values from a column into one JSON array in MS-SQL 2016?

I'm starting to figure out how to handle JSON in MSSQL 2016+.
I simply created a table having an ID (int) and a JSON (nvarchar) column.
Here are my queries to show the issue:
The first query just returns the relational table result, nice and as expected:
SELECT * FROM WS_Test
-- Results:
1 { "name": "thomas" }
2 { "name": "peter" }
The second query returns just the json column as "JSON" created by MSSQL.
Not nice, because it outputs the json column content as a string and not as parsed JSON:
SELECT json FROM WS_Test FOR JSON PATH
-- Results:
[{"json":"{ \"name\": \"thomas\" }"},{"json":"{ \"name\": \"peter\" }"}]
The third query gives me two result rows with the json column content as parsed JSON, good:
SELECT JSON_QUERY(json, '$') as json FROM WS_Test
-- Results:
{ "name": "thomas" }
{ "name": "peter" }
The fourth query gives me the json column contents as ONE (!) JSON array, perfectly parsed:
SELECT JSON_QUERY(json, '$') as json FROM WS_Test FOR JSON PATH
-- Results:
[{"json":{ "name": "thomas" }},{"json":{ "name": "peter" }}]
BUT:
I don't want to have the "json" property containing the json column content in each array object of example four. I just want ONE array containing the column contents, not less, not more. Like this:
[
    {
        "name": "peter"
    },
    {
        "name": "thomas"
    }
]
How can I achieve this with just T-SQL? Is this even possible?
The FOR JSON clause will always include the column names - however, you can simply concatenate all the values in your json column into a single result, and then add the square brackets around that.
First, create and populate sample table (Please save us this step in your future questions):
CREATE TABLE WS_Test
(
    Id int,
    Json nvarchar(1000)
);
INSERT INTO WS_Test(Id, Json) VALUES
(1, '{ "name": "thomas" }'),
(2, '{ "name": "peter" }');
For SQL Server 2017 or higher, use the built in string_agg function:
SELECT '[' + STRING_AGG(Json, ',') + ']' As Result
FROM WS_Test
For lower versions, you can use for xml path with stuff to get the same result as the string_agg:
SELECT STUFF(
    (
        SELECT ',' + Json
        FROM WS_Test
        FOR XML PATH('')
    ), 1, 1, '[') + ']' As Result
The result for both of these queries will be this:
Result
[{ "name": "thomas" },{ "name": "peter" }]
You can see a live demo on DB<>Fiddle

Cassandra query by a field in JSON

I'm using the latest Cassandra version and trying to save JSON like below, which was successful:
INSERT INTO mytable JSON '{"username": "myname", "country": "mycountry", "userid": "1"}'
The above query saves the record like:
"rows": [
{
"[json]": "{\"userid\": \"1\", \"country\": \"india\", \"username\": \"sai\"}"
}
],
"rowLength": 1,
"columns": [
{
"name": "[json]",
"type": {
"code": 13,
"type": null
}
}
]
Now I would like to retrieve the record based on userid:
SELECT JSON * FROM mytable WHERE userid = fromJson("1") // but this query throws error
All this occurs in a node/express app and I'm using dse-driver as the client driver.
The CQL command worked like below:
SELECT JSON * FROM mytable WHERE userid='1';
However, if it has to be executed via the dse-driver, then the below snippet worked:
let query = 'SELECT JSON * FROM mytable WHERE userid = ?';
client.execute(query, ["1"], { prepare: true });
where client is:
const dse = require('dse-driver');
const client = new dse.Client({
    contactPoints: ['h1', 'h2'],
    authProvider: new dse.auth.DsePlainTextAuthProvider('username', 'pass')
});
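To turn the result back into a plain object, you can parse the single [json] column that SELECT JSON returns (a sketch; the [json] column name matches the driver output shown in the question):

client.execute(query, ['1'], { prepare: true })
    .then(result => {
        const row = result.first();
        // SELECT JSON returns one column literally named "[json]"
        const user = JSON.parse(row['[json]']);
        console.log(user.username); // "myname"
    });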
If your Cassandra version is 2.1.x or below, you can use a Python-based approach: write a Python script using the Cassandra Python API.
Here you have to fetch your row first and then use Python's json.loads method, which will convert your JSON text column value into a Python dict. Then you can work with the dictionary and extract the nested keys you need. See the code snippet below.
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider
import json

if __name__ == '__main__':
    auth_provider = PlainTextAuthProvider(username='xxxx', password='xxxx')
    cluster = Cluster(['0.0.0.0'], port=9042, auth_provider=auth_provider)
    session = cluster.connect("keyspace_name")
    print("session created successfully")

    rows = session.execute('select * from user limit 10')
    for user_row in rows:
        # fetching your json column
        column_dict = json.loads(user_row.json_col)
        print(column_dict.keys())
Assuming userid is the partition key, and assuming you want to retrieve the JSON object corresponding to the user with id 1, you should try:
SELECT JSON * FROM mytable WHERE userid=1;
If userid is of type text, you will need to add some quotes.
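That is, since CQL uses single quotes for text literals:

SELECT JSON * FROM mytable WHERE userid='1';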

U-SQL - Extract data from a complex JSON object

So I have a lot of json files structured like this:
{
    "Id": "2551faee-20e5-41e4-a7e6-57bd20b02a22",
    "Timestamp": "2016-12-06T08:09:57.5541438+01:00",
    "EventEntry": {
        "EventId": 1,
        "Payload": [
            "1a3e0c9e-ef69-4c6a-ac8c-9b2de2fbc701",
            "DHS.PlanCare.Business.BusinessLogic.VisionModels.VisionModelServiceWithoutUnitOfWork.FetchVisionModelsForClientOnReferenceDateAsync(System.Int64 clientId, System.DateTime referenceDate, System.Threading.CancellationToken cancellationToken)",
            25,
            "DHS.PlanCare.Business.BusinessLogic.VisionModels.VisionModelServiceWithoutUnitOfWork+<FetchVisionModelsForClientOnReferenceDateAsync>d__11.MoveNext\r\nDHS.PlanCare.Core.Extensions.IQueryableExtensions+<ExecuteAndThrowTaskCancelledWhenRequestedAsync>d__16`1.MoveNext\r\n",
            false,
            "2197, 6-12-2016 0:00:00, System.Threading.CancellationToken"
        ],
        "EventName": "Duration",
        "KeyWordsDescription": "Duration",
        "PayloadSchema": [
            "instanceSessionId",
            "member",
            "durationInMilliseconds",
            "minimalStacktrace",
            "hasFailed",
            "parameters"
        ]
    },
    "Session": {
        "SessionId": "0016e54b-6c4a-48bd-9813-39bb040f7736",
        "EnvironmentId": "C15E535B8D0BD9EF63E39045F1859C98FEDD47F2",
        "OrganisationId": "AC6752D4-883D-42EE-9FEA-F9AE26978E54"
    }
}
How can I create a U-SQL query that outputs the
Id,
Timestamp,
EventEntry.EventId and
EventEntry.Payload[2] (value 25 in the example above)?
I can't figure out how to extend my query:
@extract =
    EXTRACT Timestamp DateTime
    FROM @"wasb://xxx/2016/12/06/0016e54b-6c4a-48bd-9813-39bb040f7736/yyy/{*}/{*}.json"
    USING new Microsoft.Analytics.Samples.Formats.Json.JsonExtractor();

@res =
    SELECT Timestamp
    FROM @extract;

OUTPUT @res TO "/output/result.csv" USING Outputters.Csv();
I have seen some examples like:
U-SQL Unable to extract data from JSON file => this only queries one level of the document; I need data from multiple levels.
U-SQL - Extract data from json-array => this only queries one level of the document; I need data from multiple levels.
JSONTuple supports multiple JSONPaths in one go.
@extract =
    EXTRACT Id string,
            Timestamp DateTime,
            EventEntry string
    FROM @"..."
    USING new Microsoft.Analytics.Samples.Formats.Json.JsonExtractor();

@res =
    SELECT Id, Timestamp, EventEntry,
           Microsoft.Analytics.Samples.Formats.Json.JsonFunctions.JsonTuple(EventEntry,
               "EventId", "Payload[2]") AS Event
    FROM @extract;

@res =
    SELECT Id,
           Timestamp,
           Event["EventId"] AS EventId,
           Event["Payload[2]"] AS Something
    FROM @res;
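To write the flattened rows out, you can then finish with the same OUTPUT statement as in your original script:

OUTPUT @res TO "/output/result.csv" USING Outputters.Csv();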
You may want to look at this Git example: https://github.com/Azure/usql/blob/master/Examples/JsonSample/JsonSample/NestedJsonParsing.usql
This takes two disparate data elements and combines them, like you have with the Payload and PayloadSchema. If you create key-value pairs using the "Donut" or "Cake and Batter" examples, you may be able to match the schema up to the payload and use the CROSS APPLY EXPLODE function.