Reading JSON data from BLOB

Edit: Oracle version 19c.
I am uploading a JSON file using the Browse File item type in APEX and then storing it in a table as a BLOB.
The table looks like this:
FILE_ID | FILENAME      | MIME_TYPE        | CREATED_ON | BLOB_CONTENT
1       | file_new.json | application/json | 9/1/2020   | (BLOB)
Now I want to parse this and read the contents of the BLOB as a table in Oracle. How can I do it?
The JSON file looks like this, but has hundreds of rows:
[{"Id":"50021","eName":"random123", "Type":"static","Startdate":"07/03/2020","Enddate":"08/02/2020,"nominations":[{"nominationId":"152","nominationMaxCount":7500,"offer":[{"Id":"131","Type":"MONEY","clientId":41,
"stateExclusions":[],"divisionInclusions":["111","116","126","129"]]}]

Step One - add an IS JSON check constraint to your BLOB_CONTENT column.
ALTER TABLE CLOBS
ADD CONSTRAINT CLOB_JSON CHECK
(CLOBS IS JSON)
ENABLE; -- yes my table name and my column are both named CLOBS
Step Two - Add some data.
The database provides native SQL calls to parse/query JSON content in your BLOB.
My data, a single row. This JSON document has a couple of simple arrays.
{
  "results" : [
    {
      "columns" : [
        {
          "name" : "REGION_ID",
          "type" : "NUMBER"
        },
        {
          "name" : "REGION_NAME",
          "type" : "VARCHAR2"
        }
      ],
      "items" : [
        {
          "region_id" : 1,
          "region_name" : "Europe"
        },
        {
          "region_id" : 2,
          "region_name" : "Americas"
        },
        {
          "region_id" : 3,
          "region_name" : "Asia"
        },
        {
          "region_id" : 4,
          "region_name" : "Middle East and Africa"
        }
      ]
    }
  ]
}
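For completeness, a minimal sketch of loading a document like this straight from SQL, assuming the table also has an ID column (utl_raw.cast_to_raw turns the text literal into RAW, which Oracle implicitly converts to BLOB on insert; fine for a small test payload):
insert into CLOBS (ID, CLOBS)
values (1, utl_raw.cast_to_raw('{"results":[]}')); -- tiny valid document, so the IS JSON constraint passes
commit;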
I can use the json_value() function if I want to pull a single attribute out, referencing attributes with $. notation; arrays are referenced as you'd expect.
select json_value(CLOBS,'$.results.columns[0].name') FIRST_COLUMN,
json_value(CLOBS,'$.results.columns[1].name') SECOND_COLUMN
from CLOBS
where ID = 1;
The results: FIRST_COLUMN is REGION_ID and SECOND_COLUMN is REGION_NAME.
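Since the goal in the question is to read the BLOB contents as a table, it's worth adding that json_table() can unnest the items array into rows. A sketch against the same CLOBS table (the varchar2 width is an assumption; on some versions you may need CLOBS FORMAT JSON for a BLOB column):
-- a sketch: json_table projects each element of the items array as a row
select jt.region_id, jt.region_name
  from CLOBS,
       json_table(CLOBS, '$.results[*].items[*]'
         columns (
           region_id   number       path '$.region_id',
           region_name varchar2(50) path '$.region_name'
         )
       ) jt
 where ID = 1;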
Our product architect (Beda) has a great blog series with much better examples than this.

Related

Importing CSV File in Elasticsearch

I am new to Elasticsearch. I tried to import a CSV file by following the guide, and I successfully imported the file; it created an index with the documents.
But I found that in every doc, _id contains a random unique id as its value. I want the value of _id to come from a field in the CSV file (the CSV file I'm importing contains a unique id field for every row), using a query or any other way, and I do not know how to do that.
It is not explained in the docs either. A sample document from the Elasticsearch index is shown below:
{
  "_index" : "sample_index",
  "_type" : "_doc",
  "_id" : "nGHXgngBpB_Kjkqcxfj",
  "_score" : 1.0,
  "_source" : {
    "categoryid" : "34128b58-9148-11eb-a8b3-0242ac130003",
    "categoryname" : "Blogs",
    "isdeleted" : "False"
  }
}
When I add an ingest pipeline with the following processor:
{
  "set": {
    "field": "_id",
    "value": "{{categoryid}}"
  }
}
it throws an error with this message:
You can achieve this by modifying the ingest pipeline used to ingest your CSV file.
In the Ingest pipeline area (Advanced section), simply add the following processor at the end of the pipeline and the document ID will be set accordingly:
...
{
  "set": {
    "field": "_id",
    "value": "{{categoryid}}"
  }
}
I added the following processor in the Ingest pipeline section and it works:
{
  "processors": [
    {
      "set": {
        "field": "_id",
        "value": "{{categoryid}}"
      }
    }
  ]
}
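Outside the CSV importer UI, you can create the same pipeline and apply it at index time through the REST API; a sketch, where the pipeline name csv-with-id is a made-up example:
PUT _ingest/pipeline/csv-with-id
{
  "processors": [
    { "set": { "field": "_id", "value": "{{categoryid}}" } }
  ]
}

POST sample_index/_doc?pipeline=csv-with-id
{
  "categoryid": "34128b58-9148-11eb-a8b3-0242ac130003",
  "categoryname": "Blogs"
}
The indexed document's _id then becomes the categoryid value rather than an auto-generated one.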

Trying to make nested JSON and point to the same object using multiple keys

Let's say I have this JSON data:
"value1" : { "name" : "Foo" }
"value2" : { "name" : "14" }
"value3" : { "gender" : "Male" }
Now I am trying to do this:
"value1", "value2", "value3" : { "name" : "Foo" }
or maybe this, if at all possible:
["value1", "value2", "value3"] : { "name" : "Foo" }
So, in a nutshell, I have data that I would like to access through multiple keys pointing to the same data in JSON format, so that I don't have to repeat the same data for different keys.
Here is an example of the data:
"Model 1" : { "E-Series" : ["Green", "Purple"] }
Let's say "Model 2" has the same info as "Model 1". How can I point "Model 2" to "Model 1"'s data object in JSON without repeating the same code over and over again?
This is not possible in the JSON format. You can simulate it in code: for example, set "ref": "Model1" on Model2 and then programmatically read the data from the Model1 object.
JSON does not have this feature by design.
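A minimal sketch of that simulation in JavaScript (the "ref" key name is just a convention, not part of JSON):
// plain JSON; "Model 2" carries a reference instead of a copy of the data
const data = JSON.parse(`{
  "Model 1": { "E-Series": ["Green", "Purple"] },
  "Model 2": { "ref": "Model 1" }
}`);

// follow one level of "ref" indirection when present
function resolve(key) {
  const value = data[key];
  return value.ref ? data[value.ref] : value;
}

console.log(resolve("Model 2")); // { "E-Series": [ "Green", "Purple" ] }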
This is not correct JSON syntax, and JSON does not have provisions for links. Your options for encoding object references are:
LD+JSON (http://json-ld.org/)
HAL+JSON (http://stateless.co/hal_specification.html)
JSON-R (http://java.dzone.com/articles/json-r-json-extension-deals)
dojox.json.ref (https://dojotoolkit.org/reference-guide/1.10/dojox/json/ref.html)
etc, etc, ...
your custom data model for references
The benefit of using something more or less standard is better (future) integration, but that may not be relevant for your task.

Create the structure of an empty collection in mongoDB

Is there any way to create the structure of an empty collection in MongoDB using mongoimport from a JSON file like the one below?
"Users" : {
"name" : "string",
"telephone" : {
"personal": { "type": "number" },
"job": { "type" : "number" }
},
"loc" : "array",
"friends" : "object"
}
My goal is to create a mongoDB schema from JSON files.
Yes, you can mongoimport a JSON file, and if you clear out the values of those fields (set them to ""), importing your JSON file should do just that.
However, MongoDB is a NoSQL database, and creating a schema in the MongoDB database doesn't really make sense. What will happen is that you'll have one record whose fields have empty values.
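For reference, a minimal mongoimport invocation for a file like the one above (database and file names are examples; the fragment shown would need enclosing braces to be one valid JSON document):
mongoimport --db mydb --collection Users --file users.json
By default mongoimport expects one JSON document per line; pass --jsonArray if the file contains a JSON array of documents.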

MongoDB AND Comparison Fails

I have a collection named studentCollection with the two documents given below:
> db.studentCollection.find().pretty()
{
  "_id" : ObjectId("52d7c0c744b4dd77efe93df7"),
  "regno" : 101,
  "name" : "Ajeesh",
  "gender" : "Male",
  "docs" : [
    "voterid",
    "passport",
    "drivinglic"
  ]
}
{
  "_id" : ObjectId("52d7c6a144b4dd77efe93df8"),
  "regno" : 102,
  "name" : "Sathish",
  "gender" : "Male",
  "dob" : ISODate("2013-12-09T21:05:00Z")
}
Why does the query below return a document when that document doesn't fulfil the criteria I gave in the find command? I know it's a bad query for an AND comparison. I tried the equivalent in MySQL and it returns nothing, as expected, so why is this a problem in NoSQL? I suspect it's considering only the last field for the comparison.
> db.studentCollection.find({regno:101,regno:102}).pretty()
{
  "_id" : ObjectId("52d7c6a144b4dd77efe93df8"),
  "regno" : 102,
  "name" : "Sathish",
  "gender" : "Male",
  "dob" : ISODate("2013-12-09T21:05:00Z")
}
Can anyone explain why MongoDB works this way?
MongoDB uses JSON/BSON, in which names should be unique (http://www.ietf.org/rfc/rfc4627.txt, section 2.2). I found this in another post: How to generate a JSON object dynamically with duplicate keys?. I am guessing the value of 'regno' gets overridden to 102 in your case.
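You can see the override in the shell itself, since the query document is a plain JavaScript object and a duplicated key keeps only its last value:
> JSON.stringify({regno: 101, regno: 102})
{"regno":102}
So find({regno: 101, regno: 102}) is effectively find({regno: 102}), which matches the second document.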
If what you want is an OR query, try the following (note that regno is a number in your documents, so it must not be quoted):
db.studentCollection.find( { $or : [ { "regno" : 101 }, { "regno" : 102 } ] } );
Or even better, use $in:
db.studentCollection.find( { "regno" : { $in: [101, 102] } } );
Hope this helps!
MongoDB converts your query into a JavaScript document. Since you have not used an explicit $and condition in your document, the query clause is overwritten by the last value, which is regno: 102. Hence you get the last document as the result.
If you want to use $and, you may use either of the following:
db.studentCollection.find({$and:[{regno:102}, {regno:101}]});
db.studentCollection.find({regno:{$gte:101, $lte:102}});

Is it possible to extract specific data from JSON without reading all the values?

I have this JSON data.
My question is: is it possible to extract specific data from JSON data without reading all the values?
I mean, is it possible to query the data as we do in SQL?
{ "_id" : ObjectId("4e61501e6a73bc73f82f91f3"), "created_at" : "2011-09-02 17:52:30.285", "cust_id" : "sdtest", "moduleName" : "balances", "responses" : [
{
"questionNum" : "1",
"answer" : "Hard",
"comments" : "is that you john wayne?"
},
{
"questionNum" : "2",
"answer" : "Somewhat",
"comments" : "ARg!"
},
{
"questionNum" : "3",
"answer" : "",
"comments" : "Yes"
}
] }
Yes, but you will need to write extra code to do it, or use a third-party library. There are a few available: http://www.google.co.uk/search?q=json+linq+sql
Well, unless you use an incremental (streaming) JSON parser, you'll have to parse the whole JSON document first. After that, it depends on your programming language's filtering abilities. For example, in Python:
import json

obj = json.loads(jsonData)
# keep only the responses whose "answer" field is non-empty
answeredQuestions = [r for r in obj["responses"] if r["answer"]]
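With the document from the question, answeredQuestions would contain the responses to questions 1 and 2, since question 3 has an empty answer string.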