PostgreSQL: row_to_json with selective columns [duplicate]

Task concept and my question
Using Postgres 9.4. How could I use row_to_json(row) with selective columns (not the entire row)? I need to discard one column from the row constructor while building JSON, but also need to preserve column names.
Restrictions
Do not use a self join to the same table/CTE with a selective choice of columns
Do not use an external function to delete the key from the JSON afterwards
I'm well aware that I can write and use my own function to remove a key from JSON, and that Postgres 9.5 has a - operator for JSONB. However, I would like to do this up front, without an additional function call, and I'm pretty sure it's possible.
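(For clarity, the excluded approach would look something like this sketch, using the sample table from the MVCE below; both to_jsonb and the - operator require Postgres 9.5+:)
-- Excluded approach: build the full JSON first, then drop the key afterwards
SELECT id, to_jsonb(t) - 'id' FROM test_table t;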
MVCE and explanation
Generating sample data
CREATE TABLE test_table ( id int, col1 int, col2 int, col3 text );
INSERT INTO test_table VALUES
(1, 23, 15, 'Jessica'), (2, 43, 84, 'Thomas');
1) First try: a simple row_to_json(row), which obviously does not work:
SELECT id, row_to_json(t) FROM test_table t
I need to discard column id from the row constructor so that it is not included when the row is converted to JSON. The above returns:
id | row_to_json
----+-----------------------------------------------
1 | {"id":1,"col1":23,"col2":15,"col3":"Jessica"}
2 | {"id":2,"col1":43,"col2":84,"col3":"Thomas"}
2) Second try, with explicit passing of columns row_to_json(row(col1, ...)):
SELECT id, row_to_json(row(col1, col2, col3)) FROM test_table t
But I'm losing the column names (as mentioned in the docs, they are all converted to fX, where X is a number):
id | row_to_json
----+----------------------------------
1 | {"f1":23,"f2":15,"f3":"Jessica"}
2 | {"f1":43,"f2":84,"f3":"Thomas"}
Expected output
The expected output is that of attempt (1) in the MVCE, but without the id key-value pair:
id | row_to_json
----+-----------------------------------------------
1 | {"col1":23,"col2":15,"col3":"Jessica"}
2 | {"col1":43,"col2":84,"col3":"Thomas"}

It seems that creating a type with the desired column names and matching data types, then casting the row to it, does the trick:
CREATE TYPE my_type AS (
col1 int,
col2 int,
col3 text
);
Then altering the statement by casting the row to the defined type:
SELECT id, row_to_json(cast(row(col1, col2, col3) as my_type)) FROM test_table t;
This produces the expected output:
id | row_to_json
----+-----------------------------------------------
1 | {"col1":23,"col2":15,"col3":"Jessica"}
2 | {"col1":43,"col2":84,"col3":"Thomas"}
However, is there any way to do this without defining an additional type?
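For reference, one ad-hoc alternative sometimes used (a sketch, not from the accepted answer) is a scalar subquery: the anonymous row type produced by the inner SELECT keeps the column names, so no extra type is needed:
-- The inner SELECT builds a record whose field names are col1, col2, col3
SELECT id,
       row_to_json((SELECT x FROM (SELECT t.col1, t.col2, t.col3) AS x))
FROM test_table t;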

Related

unpack all outer-most keys in json object as columns

I have a Postgres table sales with a JSON object column, data, containing around 100 outer keys; let's name them k1,k2,k3..,k100.
I want to write a query
select * from sales some_function(data)
which simply returns something like
k1 | k2 | .. | k100
--------------------
"foo" | "bar" | .. | 2
"fizz"| "buzz"| .. | 10
i.e. it just unpacks the keys as columns and their values as rows.
Note: k1,k2..k100 are not the real names, thus I can't do a data ->> key loop.
That's not possible. One restriction of the SQL language is that all columns (and their data types) must be known to the database when the statement is parsed, i.e. before it is actually run.
You will have to write each one separately:
select data ->> 'k1' as k1, data ->> 'k2' as k2, ...
from sales
One way to make this easier is to generate a view dynamically: extract all JSON keys from the column, then use dynamic SQL to create the view. You will however need to re-create that view each time the set of keys changes.
Something along these lines (not tested!):
do
$$
declare
  l_columns text;
  l_sql text;
begin
  select string_agg(distinct format('data ->> %L as %I', t.key, t.key), ', ')
    into l_columns
  from sales s
    cross join jsonb_each(s.data) as t(key, value);

  -- l_columns now contains something like:
  --   data ->> 'k1' as k1, data ->> 'k2' as k2

  -- now create a view from that
  l_sql := 'create view sales_keys as select '||l_columns||' from sales';
  execute l_sql;
end;
$$
;
You probably want to add e.g. the primary key column(s) to the view, so that you can match the JSON values back to the original row(s).
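Once the DO block has run, the generated view can be queried like any other relation; a usage sketch:
-- k1 .. k100 are now ordinary view columns
select * from sales_keys;
-- when the set of keys changes, rebuild the view:
drop view sales_keys;
-- ...then execute the DO block again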

Oracle reading JSON data using json_query

While working with the Oracle JSON datatype and trying to extract data from it, I am not able to extract the name and value elements. I tried all the notations I know, but I keep getting null.
select json_query(po_document, '$.actions.parameters[0]') from j_purchaseorder where ID='2';
You can use the JSON_VALUE function as follows:
SQL> select JSON_VALUE('{"_class":"123", "name":"tejash","value":"so"}', '$.name') as name,
2 JSON_VALUE('{"_class":"123", "name":"tejash","value":"so"}', '$.value') as value
3 from dual;
NAME VALUE
---------- ----------
tejash so
SQL>
Thanks for your help. I got the required output using the query below:
select json_value(json_query(po_document, '$.actions.parameters[0]'),'$.value') from j_purchaseorder where ID='2' and
json_value(json_query(po_document, '$.actions.parameters[0]'),'$.name') = 'SERVERUSER';
As explained, for example, in the Oracle documentation, multiple calls to JSON_VALUE() on the same JSON document may result in very poor performance. When we need to extract multiple values from a single document, it is often best (for performance) to make a single call to JSON_TABLE().
Here is how that would work on the provided document. First I create and populate the table, then I show the query and the output. Note the handling of column (attribute) "_class", both in the JSON document and in the SQL SELECT statement. In both cases the name must be enclosed in double-quotes, because it begins with an underscore.
create table j_purchaseorder (
id number primary key,
po_document clob check (po_document is json)
);
insert into j_purchaseorder (id, po_document) values (
2, '{"_class":"hudson.model.StringParameterValue","name":"SERVERUSER","value":"avlipwcnp04"}'
);
commit;
select "_CLASS", name, value
from j_purchaseorder
cross apply
json_table(po_document, '$'
columns (
"_CLASS" varchar2(40) path '$."_class"',
name varchar2(20) path '$.name',
value varchar2(20) path '$.value'
)
)
where id = 2
;
_CLASS NAME VALUE
---------------------------------------- ------------------ ------------------
hudson.model.StringParameterValue SERVERUSER avlipwcnp04

How to determine the page number of a record using Spring-data-jpa?

I want to return the number of the page on which a certain record is located. I used a conditional query to return the paginated results, so I could not determine the record's page based on its id. How can I solve this problem?
I have been searching the net for a long time, but to no avail. Please help or give some ideas on how to achieve this.
My temporary solution is to query each page in turn until it contains the record I need, but doing so makes the query slow.
PS: I am using a MySQL database.
Assuming you have a record with some kind of id and want to know at what position it appears in your table when the table is ordered by some criterion, you can do this with analytic functions. From this value you can trivially calculate the page it appears on, given a page size.
Example Schema (MySQL v8.0)
create table example (
id integer not null primary key,
text varchar(20));
insert into example(id, text) values (23,"Alfred");
insert into example(id, text) values (47,"Berta");
insert into example(id, text) values (11,"Carl");
insert into example(id, text) values (42,"Diana");
insert into example(id, text) values (17,"Ephraim");
insert into example(id, text) values (1,"Fiona");
insert into example(id, text) values (3,"Gerald");
Query
select *
from (
  select id, text,
         count(*) over (order by text) cnt -- change the order by according to your needs
  from example
  -- a where clause here limits the table before counting
) x where id = 42; -- this where clause finds the row you are interested in
| id | text | cnt |
| --- | ----- | --- |
| 42 | Diana | 4 |
In order to use it with Spring Data JPA you put this query in a @Query annotation and mark it as a native query.
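A minimal sketch of such a native query (the parameter names :id and :pageSize are illustrative placeholders, not from the original answer); it returns the zero-based page index that Spring's Pageable uses:
-- cnt is the record's 1-based position in the chosen order,
-- so its zero-based page index is floor((cnt - 1) / pageSize)
select floor((x.cnt - 1) / :pageSize) as page_index
from (
  select id, count(*) over (order by text) as cnt
  from example
) x
where x.id = :id;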

How to send key value pair array to mysql function and replace the keys in mysql column data with corresponding value pair?

I have a case involving multiple subqueries on each MySQL table.
I am storing comma-separated values in one table column. Those values are foreign keys to another table.
When querying that table, instead of showing the comma-separated ids, it should show the corresponding values from the referenced table, comma-separated.
I already have a function for that task, but it is slow, because it runs a query against the referenced table for every row to replace the content.
Please suggest a MySQL function which accepts key-value pairs as a parameter and, based on each key, replaces it with its value in that column.
Here I have listed my tables.
Foreign key related table ---> field_default_values
------------------------------------
id | default_value
------------------------------------
1 option1
2 option2
3 option3
4 option4
Querying table ---> table1
select id , multi_option_clm from table1;
------------------------------------
id | multi_option_clm
------------------------------------
1 1,2
2 2,3,4
Currently I am using the following function to produce the required result.
select id, GET_DEFAULT_VAL(`multi_option_clm`) from table1;
------------------------------------
id | multi_option_clm
------------------------------------
1 option1,option2
2 option2,option3,option4
My GET_DEFAULT_VAL function is
DELIMITER $$
CREATE DEFINER=`root`@`localhost` FUNCTION `GET_DEFAULT_VAL`(
  x text
) RETURNS text
DETERMINISTIC
BEGIN
  DECLARE columnList TEXT;
  SET group_concat_max_len = 65533;
  SELECT GROUP_CONCAT(`default_value` SEPARATOR ',')
  FROM `field_default_values` WHERE find_in_set(`id`, x)
  INTO columnList;
  RETURN columnList;
END$$
Here, the following query is executed every time the function is called:
SELECT GROUP_CONCAT(`default_value` SEPARATOR ',')
FROM `field_default_values` where find_in_set(`id`,x)
INTO columnList;
Instead of running this query on every call, I need a replacement: either run it once and reuse its result, or send a key-value pair array to my MySQL function GET_DEFAULT_VAL and use it to replace the column's comma-separated values.
Please help me.
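For what it's worth, a set-based rewrite (a sketch, not from the original thread) avoids calling the function once per row by joining the ids out of the CSV column and re-aggregating:
-- Join each id in the comma-separated list to its default value,
-- then rebuild the comma-separated string per row, preserving list order
select t.id,
       group_concat(f.default_value
                    order by find_in_set(f.id, t.multi_option_clm)
                    separator ',') as multi_option_clm
from table1 t
join field_default_values f
  on find_in_set(f.id, t.multi_option_clm)
group by t.id;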

mysql select static list

I want to run an INSERT ... SELECT query (see the MySQL manual) to insert one selected column, with exactly two rows, from one table into another.
Along with this I would like to insert an additional column of static values.
Example
| selected column | static val |
| --------------- | ---------- |
| Variable        | 4          |
| Variable        | 9          |
The static values 4 and 9 are specified in my PHP script.
Can I do this in one MySQL query, without using PHP to store temporary data?
You can use UNION ALL to select from the source table twice like this:
insert into new_table
select column, 4 from old_table
union all
select column, 9 from old_table;