In my database table I have a column named data whose type is jsonb. Here is a sample JSON value from that column:
{"query": {"end-date": "2016-01-31", "start-date": "2016-01-01", "max-results": 1000, "start-index": 1 }}
This is the same value, pretty-printed:
{
"query":{
"end-date":"2016-01-31",
"start-date":"2016-01-01",
"max-results":1000,
"start-index":1
}
}
I need to get the value of 'start-date' inside the 'query' element. How do I get the start-date value in a PostgreSQL query?
You can use the built-in Postgres function json_extract_path (see its documentation).
The first parameter is the JSON value (here, the column), and the remaining parameters form the path of keys leading to the data you want.
select json_extract_path(data::json, 'query', 'start-date') as test from "schema".tbl_name;
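Since the column is already jsonb, a minimal alternative sketch (same table as above) uses the arrow operators and avoids the cast:
-- -> keeps the intermediate result as jsonb; ->> returns the final value as text
select data -> 'query' ->> 'start-date' as test
from "schema".tbl_name;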
I have a JSON column in an Oracle DB that was populated without the ABSENT ON NULL option, and some values are quite long because of this.
I would like to trim things down, so I have created a new table similar to the first. I want to select the JSON from the old table, add the ABSENT ON NULL option, and insert the new values, reducing the column length.
I can see the JSON easily enough with:
SELECT json_query(json_data,'$') FROM table;
This will give a result like:
{
"REC_TYPE_IND":"1",
"ID":"1234",
"OTHER_ID":"4321",
"LOCATION":null,
"EFF_BEG_DT":"19970101",
"EFF_END_DT":"99991231",
"NAME":"Joe",
"CITY":null
}
When I try to remove the null values like
SELECT json_object (json_query(json_data,'$') ABSENT ON NULL
RETURNING VARCHAR2(4000)
) AS col1 FROM table;
I get the following:
ORA-02000: missing VALUE keyword
I assume this is because the function json_object is expecting the format:
json_object ('REC_TYPE_IND' VALUE '1',
'ID' VALUE '1234')
Is there a way around this, to turn the JSON back into values that JSON_OBJECT can recognize as above, or is there a function I am missing?
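One hedged workaround sketch along the lines hinted at above, assuming Oracle 12.2+ and placeholder table names old_table/new_table (the real table names are not given), is to rebuild each pair explicitly with json_value so that json_object gets the 'key' VALUE form it expects; this is only an illustration, not a tested solution:
-- Illustrative only: json_value returns SQL NULL for JSON nulls,
-- so ABSENT ON NULL drops those keys from the rebuilt object.
INSERT INTO new_table (json_data)
SELECT json_object(
         'REC_TYPE_IND' VALUE json_value(json_data, '$.REC_TYPE_IND'),
         'ID'           VALUE json_value(json_data, '$.ID'),
         'OTHER_ID'     VALUE json_value(json_data, '$.OTHER_ID'),
         'LOCATION'     VALUE json_value(json_data, '$.LOCATION'),
         'EFF_BEG_DT'   VALUE json_value(json_data, '$.EFF_BEG_DT'),
         'EFF_END_DT'   VALUE json_value(json_data, '$.EFF_END_DT'),
         'NAME'         VALUE json_value(json_data, '$.NAME'),
         'CITY'         VALUE json_value(json_data, '$.CITY')
         ABSENT ON NULL
         RETURNING VARCHAR2(4000))
FROM old_table;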
I have big JSON data in a column called response_return in a Postgres DB, with a response like:
{
"customer_payment":{
"OrderId":"123456789",
"Customer":{
"Full_name":"Francis"
},
"Payment":{
"AuthorizationCode":"9874565",
"Recurrent":false,
"Authenticate":false,
...
}
}
}
I tried to use Postgres operators like ->, ->>, #> or #>> to walk through the keys and reach AuthorizationCode in a query.
When I use -> on customer_payment in a SELECT, it returns everything below it. If I try it with OrderId, NULL is returned.
The alternatives and sources:
Using The JSON Datatype In PostgreSQL
Operator ->
Allows you to select an element based on its name.
Allows you to select an element within an array based on its index.
Can be used sequentially: ::json->'elementL'->'subelementM'->…->'subsubsubelementN'.
Return type is json and the result cannot be used with functions and operators that require a string-based datatype. But the result can be used with operators and functions that require a json datatype.
Query for element of array in JSON column
This is not helpful because I don't want to filter, and I don't believe I need to transform to an array.
If you just want to get a single attribute, you can use:
select response_return -> 'customer_payment' -> 'Payment' ->> 'AuthorizationCode'
from the_table;
You need to use -> for the intermediate access to the keys (to keep the JSON type) and ->> for the last key to return the value as a string.
Alternatively you can provide the path to the element as an array and use #>>
select response_return #>> array['customer_payment', 'Payment', 'AuthorizationCode']
from the_table;
I've created a crawler that looks at a PostgreSQL 9.6 RDS table with a JSONB column, but the crawler identifies the column type as "string". When I then try to create a job that loads data from a JSON file on S3 into the RDS table, I get an error.
How can I map a JSON file source to a JSONB target column?
It's not quite a direct copy, but an approach that has worked for me is to define the column on the target table as TEXT. After the Glue job populates the field, I then convert it to JSONB. For example:
alter table postgres_table
alter column column_with_json set data type jsonb using column_with_json::jsonb;
Note the use of the cast for the existing text data. Without that, the alter column would fail.
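For context, a minimal sketch of the target table before the Glue job runs, reusing the table and column names from the ALTER above (any other columns are assumptions):
-- Hypothetical target table: Glue loads the JSON into a plain text column,
-- and the ALTER above then converts it to jsonb.
create table postgres_table (
    id               integer,
    column_with_json text
);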
The crawler will identify the JSONB column type as "string", but you can use the Unbox class in Glue to convert this column to JSON.
Let's check the following table in PostgreSQL:
create table persons (id integer, person_data jsonb, creation_date timestamp);
Here is an example of one record from the persons table:
ID = 1
PERSON_DATA = {
"firstName": "Sergii",
"age": 99,
"email":"Test#test.com"
}
CREATION_DATE = 2021-04-15 00:18:06
The following code needs to be added to the Glue job:
# 1. create dynamic frame from catalog
df_persons = glueContext.create_dynamic_frame.from_catalog(database = "testdb", table_name = "persons", transformation_ctx = "df_persons")
# 2. in "path", give the name of the jsonb column that needs to be converted to json
df_persons_json = Unbox.apply(frame = df_persons, path = "person_data", format = "json")
# 3. convert the dynamic frame to a Spark data frame
datf_persons_json = df_persons_json.toDF()
# 4. after that you can process this column as json, or build a data frame with all necessary columns; each json element can be selected as a separate column:
final_df_person = datf_persons_json.select("id","person_data.age","person_data.firstName","creation_date")
You can also check the following link:
https://docs.aws.amazon.com/glue/latest/dg/aws-glue-api-crawler-pyspark-transforms-Unbox.html
I have an SQL table in which one of the columns contains a JSON array in the following format:
[
{
"id":"1",
"translation":"something here",
"value":"value of something here"
},
{
"id":"2",
"translation":"something else here",
"value":"value of something else here"
},
..
..
..
]
Is there any way to use an SQL query to retrieve columns with the id as the header and the "value" as the value of the column, instead of returning only one column with the JSON array?
For example, if I run:
SELECT column_with_json FROM myTable
it will return the above array, whereas I want it to return:
1,2
value of something here, value of something else here
You can't use SQL to retrieve columns from the JSON stored inside the table: to the database engine the JSON is just unstructured text saved in a text field.
Some relational databases, like PostgreSQL, have a JSON type and functions to support JSON query. If this is your case, you should be able to perform the query you want.
Check this for an example of how it works with PostgreSQL:
http://clarkdave.net/2013/06/what-can-you-do-with-postgresql-and-json/
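As a rough sketch of how that can look in PostgreSQL, reusing the table and column names from the example query above and assuming the column is (or can be cast to) jsonb:
-- Unnest each array element into its own row, then pull out id and value.
-- Pivoting the ids into column headers would additionally need crosstab()
-- or conditional aggregation.
select elem ->> 'id'    as id,
       elem ->> 'value' as value
from   myTable,
       jsonb_array_elements(column_with_json::jsonb) as elem;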
jSonColoumn - data type (TEXT)
sample rows
{"scheduledTime":"2014-01-29 19:55:00"}
{"scheduledTime":"2014-01-29 22:55:00"}
{"scheduledTime":"2014-01-29 15:55:00"}
{"scheduledTime":"2014-01-29 08:55:00"}
I need to sort this result by the "scheduledTime" key of the JSON object stored in the column "jSonColoumn". (Simply put, I need to order the result by the "scheduledTime" key of the JSON.)
Thanks
There are some MySQL JSON functions.
Like this:
select json_extract(columnX, '$.scheduledTime')
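Since the goal is ordering, here is a minimal sketch of the full query, assuming MySQL 5.7+ for the JSON functions and a placeholder table name my_table (the real table name is not given in the question):
-- Order rows by the scheduledTime key extracted from the JSON text column.
select jSonColoumn
from   my_table
order by json_extract(jSonColoumn, '$.scheduledTime');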