I have several tables whose records have a type identifier. For example, a table AUTOS has a MANUFACTURER field. I'd like to create a JSON_OBJECT for each row and add that object to a manufacturer-specific array, e.g.
{
  "fordCars": [],
  "chevyCars": []
}
The arrays are members of a parent object (let's name it @parent).
As I indicated I have several such tables that should be treated the same way.
I thought I might define a variable
DECLARE @mfgs json;
initialize it as follows:
SET @mfgs = '{
  "fordCars": [],
  "chevyCars": []
}';
I thought I could populate the arrays as follows, but the arrays in @mfgs stay empty:
SELECT CASE
    WHEN MANUFACTURER='FORD' THEN JSON_ARRAY_APPEND(@mfgs, '$.fordCars', JSON_OBJECT(
        'MANUFACTURER', d.MANUFACTURER,
        'MAKE', d.MAKE,
        'MODEL', d.MODEL
    ))
    WHEN MANUFACTURER='CHEVY' THEN JSON_ARRAY_APPEND(@mfgs, '$.chevyCars', JSON_OBJECT(
        'MANUFACTURER', d.MANUFACTURER,
        'MAKE', d.MAKE,
        'MODEL', d.MODEL
    ))
END
FROM AUTOS d
I would then:
JSON_MERGE_PRESERVE(@parent, @mfgs)
The above does not work and, in any case, requires repeating all of the JSON/SQL mappings in each CASE branch.
Does anyone know how I can accumulate json_objects in one of multiple arrays based on the value, in this case of MANUFACTURER?
The installation is MySQL 8.0.23.
Thanks in advance.
Update: the query below eliminates the need to repeat each of the AUTOS table fields for each MANUFACTURER grouping. I am still unable, however, to use JSON_* functions to provide the arguments to JSON_MERGE_PRESERVE.
If someone knows how to accomplish that, it would make for a more elegant solution, IMHO.
SET json_auto_detail = (select JSON_MERGE_PRESERVE(json_auto_detail,
(select concat('{', GROUP_CONCAT(jsonString SEPARATOR "," ),'}') from (
SELECT concat('"', mfgArrayName, '":', mfgArrayObj) as 'jsonString' from (
(select
if (MANUFACTURER='FORD', 'fordCars',
if (MANUFACTURER='CHEVY', 'chevyCars', NULL)) as 'mfgArrayName',
json_arrayagg(JSON_OBJECT(
'make', MANUFACTURER,
'model', MODEL,
'id', ID
)) as 'mfgArrayObj'
from AUTOS
where MANUFACTURER in ('FORD', 'CHEVY')
group by MANUFACTURER
)
) as autos -- up to 2 rows of auto json objects
) as mAutos))
); -- autos merged with json_auto_detail
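For what it's worth, the GROUP_CONCAT string-building can usually be replaced by JSON_OBJECTAGG over the grouped JSON_ARRAYAGG rows, which keeps everything in the JSON domain. Here is a runnable sketch of that shape using SQLite's JSON1 functions from Python (json_group_array ≈ JSON_ARRAYAGG, json_group_object ≈ JSON_OBJECTAGG; the table and sample data are illustrative, not from the original post):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE autos (id INTEGER, manufacturer TEXT, make TEXT, model TEXT)")
conn.executemany(
    "INSERT INTO autos VALUES (?, ?, ?, ?)",
    [
        (1, "FORD", "Ford", "F-150"),
        (2, "FORD", "Ford", "Mustang"),
        (3, "CHEVY", "Chevrolet", "Silverado"),
    ],
)

# One row per manufacturer (array of row objects), then fold those rows into
# a single object keyed by the array name.  json(arr) keeps the nested array
# as JSON instead of re-escaping it as a string.
(doc,) = conn.execute("""
    SELECT json_group_object(grp, json(arr))
    FROM (
        SELECT CASE manufacturer WHEN 'FORD'  THEN 'fordCars'
                                 WHEN 'CHEVY' THEN 'chevyCars' END AS grp,
               json_group_array(json_object('make', make, 'model', model)) AS arr
        FROM autos
        WHERE manufacturer IN ('FORD', 'CHEVY')
        GROUP BY manufacturer
    )
""").fetchone()

mfgs = json.loads(doc)
print(mfgs)
```

In MySQL 8 the equivalent outer aggregate would be JSON_OBJECTAGG(mfgArrayName, mfgArrayObj), so no string concatenation is needed.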
I have a json column in my PostgreSQL table that looks something like this:
{"example--4--":"test 1","another example--6--":"test 2","final example--e35b172a-af71-4207-91be-d1dc357fe8f3--Equipment":"ticked"}
{"example--4--":"test 4","another example--6--":"test 5","final example--e35b172a-af71-4207-91be-d1dc357fe8f3--Equipment":"ticked"}
Each key embeds several parts separated by --. The prefix is unique, i.e. "example", "another example" and "final example".
I need to query on the unique prefix, and so far nothing I've tried is even close.
select some_table.json_column from some_table
left join lateral (select array(select * from json_object_keys(some_table.json_column) as keys) k on true
where (select SPLIT_PART(k::text, '--', 1) as part_name) = 'example'
and some_table.json_column->>k = 'test 1'
The above is resulting in the following error (last line):
operator does not exist: json -> record
My expected output would be any record where "example--4--":"test 1" is present (in the above example, the only result would be):
{"example--4--":"test 1","another example--6--":"test 2","final example--e35b172a-af71-4207-91be-d1dc357fe8f3--Equipment":"ticked"}
Any help appreciated. After debugging for a while, I can see that the main issue lies in the implicit cast to ::text: k is a record of the keys that I need to loop over and split to compare, and casting a record to text is what causes the error.
One way to do it is to use an EXISTS condition together with jsonb_each_text():
select *
from the_table
where exists (select *
from jsonb_each_text(data) as x(key,value)
where x.key like 'example%'
and x.value = 'test 1')
If your column isn't a jsonb (which it should be), you need to use json_each_text() instead
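The EXISTS pattern can be tried end-to-end with SQLite, whose json_each() plays the role of jsonb_each_text() here (table name and sample rows are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE the_table (data TEXT)")
conn.executemany("INSERT INTO the_table VALUES (?)", [
    ('{"example--4--": "test 1", "another example--6--": "test 2"}',),
    ('{"example--4--": "test 4", "another example--6--": "test 5"}',),
])

# Keep rows having at least one key starting with "example" whose value is "test 1".
rows = conn.execute("""
    SELECT data
    FROM the_table AS t
    WHERE EXISTS (
        SELECT 1
        FROM json_each(t.data)
        WHERE key LIKE 'example%' AND value = 'test 1'
    )
""").fetchall()
print(rows)
```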
Another option is to use a JSON path expression:
select *
from the_table
where data @? '$.keyvalue() ? (@.key like_regex "^example" && @.value == "test 1")'
I have a query where "TEST"."TABLE" is LEFT JOINed to PUBLIC."SchemaKey". In my final SELECT I have a CASE expression: when c."Type" = 'FOREIGN' I want to grab a value from another table, but the table name used in that subselect comes from a column of the left-joined table. I've tried multiple ways to get this to work, but I keep getting an error, although it does work if I hard-code the table name. I need the table name to come from c."FullParentTableName". Is what I am trying to achieve possible in Snowflake, and is there a way to make this work? Any help would be appreciated!
SELECT
c."ParentColumn",
c."FullParentTableName",
a."new_value",
a."column_name",
CASE WHEN c."Type" = 'FOREIGN' THEN (SELECT "Name" FROM TABLE(c."FullParentTableName") WHERE "Id" = 'SOME_ID') ELSE null END "TestColumn" -- Need assistance on this line...
FROM "TEST"."TABLE" a
LEFT JOIN (
select s."Type", s."ParentSchema", s."ParentTable", s."ParentColumn", concat(s."ParentSchema",'.','"',s."ParentTable",'"') "FullParentTableName",s."ChildSchema", s."ChildTable", trim(s."ChildColumn",'"') "ChildColumn"
from PUBLIC."SchemaKey" as s
where s."Type" = 'FOREIGN'
and s."ChildTable" = 'SOMETABLENAME'
and "ChildSchema" = 'SOMESCHEMANAME'
) c
on a."column_name" = c."ChildColumn"
Thanks !
In Snowflake you cannot use partial results (per-row values) as table names.
You can bind a single value to a table name via IDENTIFIER().
You could also write a Snowflake Scripting block, but it would need to explicitly join the N tables. So if your N is fixed, you should just join those.
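Outside of IDENTIFIER(), the generic fallback for a runtime table name is dynamic SQL with the name validated against a whitelist (placeholders can bind values, never identifiers). A sketch of that pattern with SQLite standing in for the warehouse; all names here are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE parent_a ("Id" TEXT, "Name" TEXT)')
conn.execute("INSERT INTO parent_a VALUES ('SOME_ID', 'widget')")

ALLOWED_TABLES = {"parent_a", "parent_b"}  # whitelist: never interpolate raw input

def lookup_name(conn, table_name, row_id):
    """Resolve "Name" from a table whose name is only known at runtime."""
    if table_name not in ALLOWED_TABLES:
        raise ValueError(f"unknown table: {table_name}")
    # The identifier is interpolated after validation; the row id is still
    # passed as a bound parameter.
    sql = f'SELECT "Name" FROM {table_name} WHERE "Id" = ?'
    row = conn.execute(sql, (row_id,)).fetchone()
    return row[0] if row else None

print(lookup_name(conn, "parent_a", "SOME_ID"))
```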
I have a JSON column, manifest, containing an array of objects.
I need to return all table rows where any of the objects in their array have a slide_id that is present in a sub select.
The structure of the JSON field is..
{ "matrix": [
    { "row": 1, "col": 1, "slide_id": 1 },
    { "row": 1, "col": 2, "slide_id": 5 }
] }
So I want to run something like this....
SELECT id FROM presentation WHERE manifest->'$.matrix[*].slide_id' IN ( (SELECT id from slides WHERE date_deleted IS NOT NULL) );
But this doesn't work as manifest->'$.matrix[*].slide_id' returns a JSON array for each row.
I have managed to get this to work, but it's amazingly slow, as it scans the whole table...
SELECT
p.id
FROM
(
SELECT id,
manifest->'$.matrix[*].slide_id' as slide_ids
FROM `presentation`
) p
INNER JOIN `pp_slides` s
ON JSON_CONTAINS(p.slide_ids, CAST(s.id as json), '$')
WHERE s.date_deleted IS NOT NULL
If I filter it down to an individual presentation ID, it's not too bad, but it still takes 700 ms for a presentation with a couple of hundred slides in it. Is there a cleaner way to do this?
I suppose the best way would be to refactor it to store the matrix as a relational table...
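In MySQL 8 the usual tool for this kind of per-element lookup is JSON_TABLE, which expands the array into rows that can be joined normally. The same expansion can be sketched with SQLite's json_each() (schema reduced to the essentials; data is invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE presentation (id INTEGER, manifest TEXT)")
conn.execute("CREATE TABLE pp_slides (id INTEGER, date_deleted TEXT)")
conn.execute("""INSERT INTO presentation VALUES
    (1, '{"matrix": [{"row": 1, "col": 1, "slide_id": 1},
                     {"row": 1, "col": 2, "slide_id": 5}]}'),
    (2, '{"matrix": [{"row": 1, "col": 1, "slide_id": 2}]}')""")
conn.execute("INSERT INTO pp_slides VALUES (1, NULL), (2, NULL), (5, '2023-01-01')")

# Expand each manifest's matrix into one row per element, pull out slide_id,
# and keep presentations that reference a deleted slide.
rows = conn.execute("""
    SELECT DISTINCT p.id
    FROM presentation AS p,
         json_each(p.manifest, '$.matrix') AS m,
         pp_slides AS s
    WHERE s.id = json_extract(m.value, '$.slide_id')
      AND s.date_deleted IS NOT NULL
""").fetchall()
print(rows)
```

Joining on the extracted scalar avoids JSON_CONTAINS entirely, though only a relational matrix table would make the lookup indexable.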
If I have a table with a column named json_stuff, and I have two rows with
{ "things": "stuff" } and { "more_things": "more_stuff" }
in their json_stuff column, what query can I make across the table to receive [ things, more_things ] as a result?
Use this:
select jsonb_object_keys(json_stuff) from table;
(Or just json_object_keys if you're using just json.)
The PostgreSQL json documentation is quite good. Take a look.
And as stated in the documentation, the function only returns the outermost keys. So if the data is a nested JSON structure, the function will not return any of the deeper keys.
WITH t(json_stuff) AS ( VALUES
('{"things": "stuff"}'::JSON),
('{"more_things": "more_stuff"}'::JSON)
)
SELECT array_agg(stuff.key) result
FROM t, json_each(t.json_stuff) stuff;
Here is the example if you want to get the key list of each object:
select array_agg(json_keys), id from (
    select json_object_keys(json_stuff) as json_keys, id from table) a
group by a.id
Here id is the identifier or unique value of each row. If the row cannot be distinguished by identifier, maybe it's better to try PL/pgSQL.
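Both variants (all keys across the table, and one key list per row) can be exercised with SQLite, where json_each() exposes the keys and json_group_array() aggregates them; table and column names follow the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, json_stuff TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [
    (1, '{"things": "stuff"}'),
    (2, '{"more_things": "more_stuff"}'),
])

# All keys across the table (one result row per key)...
keys = [k for (k,) in conn.execute(
    "SELECT je.key FROM t, json_each(t.json_stuff) AS je ORDER BY t.id")]
print(keys)

# ...or one aggregated key list per row, grouped by the row's identifier.
per_row = conn.execute("""
    SELECT t.id, json_group_array(je.key)
    FROM t, json_each(t.json_stuff) AS je
    GROUP BY t.id
""").fetchall()
print(per_row)
```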
Here's a solution that implements the same semantics as MySQL's JSON_KEYS(), which...:
is NULL safe (i.e. when the object has no keys, it produces [], not NULL or an empty result set)
produces a JSON array, which is what I would have expected from how the question was phrased.
SELECT
o,
(
SELECT coalesce(json_agg(j), json_build_array())
FROM json_object_keys(o) AS j (j)
)
FROM (
VALUES ('{}'::json), ('{"a":1}'::json), ('{"a":1,"b":2}'::json)
) AS t (o)
Replace json by jsonb if needed.
Producing:
|o |coalesce |
|-------------|----------|
|{} |[] |
|{"a":1} |["a"] |
|{"a":1,"b":2}|["a", "b"]|
Substituting your json_column and table, this lists the distinct keys:
select distinct(tableProps.props) from (
select jsonb_object_keys(<json_column>) as props from <table>
) as tableProps
I wanted to get the number of keys in a JSONB structure, so I did something like this:
select into cur some_jsonb from mytable where foo = 'bar';
select into keys array_length(array_agg(k), 1) from jsonb_object_keys(cur) as k;
I feel it is a little bit wrong, but it works. It's unfortunate that we can't get an array directly from the json_object_keys() function. That would save us some code.
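For what it's worth, the array detour may not be needed: an aggregate can be applied directly over the set-returning function, e.g. select count(*) from jsonb_object_keys(cur) in Postgres. The same one-liner in SQLite flavor (the sample object is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# count(*) over the key-expanding function; no intermediate array needed.
(n,) = conn.execute(
    """SELECT count(*) FROM json_each('{"a": 1, "b": 2, "c": 3}')"""
).fetchone()
print(n)
```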
Datamodel
A person is represented in the database as a meta table row with a name and with multiple attributes, which are stored in the data table as key-value pairs (key and value are in separate columns).
Simplified data-model
Now there is a query to retrieve all users (name) with all their attributes (data). The attributes are returned as JSON object in a separate column. Here is an example:
name    | data
--------+---------------------------------
Florian | { "age": 25 }
Markus  | { "age": 25, "color": "blue" }
Thomas  | {}
The SQL command looks like this:
SELECT
    m.name,
    json_object_agg(d.key, d.value) AS data
FROM meta AS m
JOIN (
    SELECT d.fk_id, d.key, d.value FROM data AS d
) AS d
    ON d.fk_id = m.id
GROUP BY m.name;
Problem
Now the problem I am facing is that users like Thomas, who do not have any attributes stored in the key-value table, are not returned by my query. This is because it does a plain JOIN and not a LEFT OUTER JOIN.
If I use a LEFT OUTER JOIN, then I run into the problem that json_object_agg tries to aggregate NULL values and dies with an error.
Approaches
1. Return empty list of keys and values
So I tried to check whether the key column of a user is NULL and, if so, return an empty array, so that json_object_agg would just create an empty JSON object.
But there is not really a function to create an empty array in SQL. The nearest thing I found was this:
select '{}'::text[];
In combination with COALESCE the query looks like this:
json_object_agg(COALESCE(d.key, '{}'::text[]), COALESCE(d.value, '{}'::text[])) AS data
But if I try to use this I get following error:
ERROR: COALESCE types text and text[] cannot be matched
LINE 10: json_object_agg(COALESCE(d.key, '{}'::text[]), COALES...
^
Query failed
PostgreSQL said: COALESCE types text and text[] cannot be matched
So it looks like at runtime d.key is a single value and not an array.
2. Split up JSON creation and return empty list
So I tried to replace json_object_agg with json_object, which does not aggregate the keys for me:
json_object(COALESCE(array_agg(d.key), '{}'::text[]), COALESCE(array_agg(d.value), '{}'::text[])) AS data
But there I get the error "null value not allowed for object key". So COALESCE does not fire: the aggregated array is not NULL, it is an array containing a NULL element.
Question
So, is there a way to check whether a joined column is empty and, if so, return just an empty JSON object?
Or is there any other solution which would solve my problem?
Use left join with coalesce(). As default value use '{}'::json.
select name, coalesce(d.data, '{}'::json) as data
from meta m
left join (
select fk_id, json_object_agg(d.key, d.value) as data
from data d
group by 1
) d
on m.id = d.fk_id;
name | data
---------+------------------------------------
Florian | { "age" : "25" }
Marcus | { "age" : "25", "color" : "blue" }
Thomas | {}
(3 rows)
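This pattern (aggregate per person first, LEFT JOIN second, coalesce() to supply the empty object) translates almost verbatim to SQLite, with json_group_object standing in for json_object_agg, which makes it easy to verify that the Thomas row survives:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE meta (id INTEGER, name TEXT)")
conn.execute("CREATE TABLE data (fk_id INTEGER, key TEXT, value TEXT)")
conn.executemany("INSERT INTO meta VALUES (?, ?)",
                 [(1, "Florian"), (2, "Markus"), (3, "Thomas")])
conn.executemany("INSERT INTO data VALUES (?, ?, ?)",
                 [(1, "age", "25"), (2, "age", "25"), (2, "color", "blue")])

# Aggregate per fk_id first so each person yields at most one JSON object,
# then LEFT JOIN so attribute-less people keep their row; coalesce() fills
# in the empty object for them.
rows = conn.execute("""
    SELECT m.name, coalesce(d.data, '{}') AS data
    FROM meta AS m
    LEFT JOIN (
        SELECT fk_id, json_group_object(key, value) AS data
        FROM data
        GROUP BY fk_id
    ) AS d ON m.id = d.fk_id
    ORDER BY m.id
""").fetchall()
print(rows)
```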