I have the following table:
I need to create a select that returns something like this:
I have tried this code:
SELECT Code, json_extract_path(Registers::json,'sales', 'name')
FROM tbl_registers
The previous code returns NULL for json_extract_path. I have also tried the operator ::json->'sales'->>'name', but that doesn't work either.
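The table and expected output were posted as screenshots and aren't reproduced here; based on the answers below, a minimal hypothetical reconstruction of the setup they assume (column names code and products are taken from the answer queries, not from the original schema) would be:

-- hypothetical reconstruction: a text column holding a JSON array
-- of objects with a "name" key
CREATE TABLE tbl_registers (
    code     int,
    products text
);

INSERT INTO tbl_registers (code, products)
VALUES (1, '[{"name": "foo"}, {"name": "bar"}]');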
You need to unnest the array and then aggregate the names back. This can be done using json_array_elements() with a scalar subquery:
select code,
       (select string_agg(e ->> 'name', ',')
        from json_array_elements(t.products::json) as x(e)) as products
from tbl_registers t;
I would also strongly recommend changing your column's type to jsonb.
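A minimal sketch of that migration, assuming the text column products used above:

ALTER TABLE tbl_registers
    ALTER COLUMN products TYPE jsonb
    USING products::jsonb;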
step-by-step demo: db<>fiddle
SELECT
    code,
    string_agg(           -- 3
        elems ->> 'name', -- 2
        ','
    ) as products
FROM tbl_registers,
     json_array_elements(products::json) as elems -- 1
GROUP BY code
If you have type text (strictly not recommended; please use an appropriate data type, json or jsonb), you need to cast it into type json first (I guess you have type text because you already do the cast in your example code). Then:
1. Extract the array elements into one row per element.
2. Fetch the name value.
3. Reaggregate by grouping and use string_agg() to create the string list.
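For comparison, if the column were already jsonb as recommended, the cast disappears and jsonb_array_elements() applies directly; a sketch:

SELECT
    code,
    string_agg(elems ->> 'name', ',') as products
FROM tbl_registers,
     jsonb_array_elements(products) as elems
GROUP BY code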
I have a table containing an id column and a json column (variant data type). I want to flatten the data, make the value column a variant, assign each value in the value column a data type if a condition is met, and then eventually pivot the data so that each column has the correct data type.
Example code that doesn't work:
with cte as (
    select
        1 as id,
        parse_json('{
            "field1": "TRUE",
            "field2": "some string",
            "field3": "1.035",
            "field4": "097334"
        }') as my_output
)
select
    id,
    key,
    to_variant(
        case
            when value in ('true', 'false') then value::boolean
            when value like ('1.0') then value::decimal
            else value::string
        end) as value
from cte, lateral flatten(my_output)
Ultimately, I'd like to pivot the data and have a wide table with columns id, field1, field2, etc., where field1 is boolean, field2 is string, field3 is decimal, and so on.
This is just a simple example; instead of 4 fields, I'm dealing with hundreds.
Is this possible?
For the pivot, I'm using dbt_utils.get_column_values to get the column names dynamically. I'd really prefer a solution that doesn't involve listing out the column names, especially since there are hundreds.
Since you'd have to define each column in your PIVOT statement anyway, it'd probably be much easier to simply select each attribute directly and cast it to the correct data type, rather than using a lateral flatten.
select
    my_output.field1::boolean,
    my_output.field2::string,
    my_output.field3::decimal(5,3),
    my_output.field4::string
from cte;
Alternatively, if you want this to be dynamically created, you could create a stored procedure that dynamically uses your json to create a view over your table that has this select in it.
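A rough sketch of such a procedure (JavaScript; the table my_table with columns id and my_output, and the view name my_table_typed, are all illustrative, and the type mapping mirrors the approach above):

create or replace procedure build_typed_view()
returns string
language javascript
as
$$
// Sample one row, infer a type per key from the parsed value,
// then create a view with one typed column per key.
var cols = [];
var rs = snowflake.createStatement({sqlText:
    "select f.key, typeof(ifnull(try_parse_json(f.value), f.value)) " +
    "from (select my_output from my_table limit 1) s, lateral flatten(s.my_output) f"
}).execute();
while (rs.next()) {
    var key = rs.getColumnValue(1);
    var typ = rs.getColumnValue(2);
    var cast = (typ == 'BOOLEAN') ? 'boolean'
             : (typ == 'INTEGER') ? 'number'
             : (typ == 'DECIMAL' || typ == 'DOUBLE') ? 'double'
             : 'string';
    cols.push('my_output:"' + key + '"::' + cast + ' as "' + key + '"');
}
snowflake.createStatement({sqlText:
    "create or replace view my_table_typed as select id, " +
    cols.join(', ') + " from my_table"
}).execute();
return 'created my_table_typed (' + cols.length + ' typed columns)';
$$;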
The solution ended up being:
select
    id,
    key,
    ifnull(try_parse_json(value), value) as value_mod,
    typeof(value_mod)
from cte, lateral flatten(my_output)
Leading zeros are removed, so things like zip codes have to be accounted for.
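A hedged way to guard those, following the same example (Snowflake's REGEXP operator matches the whole string, so the pattern below means "a leading zero followed by more digits"):

select
    id,
    key,
    case
        when value::string regexp '0[0-9]+' then value  -- keep leading-zero digit strings (e.g. zip codes) as text
        else ifnull(try_parse_json(value), value)
    end as value_mod,
    typeof(value_mod)
from cte, lateral flatten(my_output)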
I have a table with numeric/decimal columns, and I am converting the rows to JSON:
select to_jsonb(t.*) from my_table t
I need to have the numeric columns cast to text before they are converted to JSON.
The reason I need this is that JavaScript doesn't handle really big numbers well, so I may lose precision. I use decimal.js, and its string representation is the best thing to construct a decimal.js number from.
I know I can do this:
select to_jsonb(t.*) || jsonb_build_object('numeric_column', numeric_column::text) from my_table t
But I want to have it done automatically. Is there a way to cast all numeric columns to text before they are passed to the to_jsonb function?
It can be a user-defined Postgres function.
EDIT: Just to clarify my question: what I need is some function similar to to_jsonb, except that all columns of type numeric/decimal are stored as strings in the resulting JSON.
Thanks
You can run a query like:
select row_to_json(row(t.column1,t.column2,t.column_numeric::text)) from my_table t
Result here
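One caveat with this approach: row(...) discards the original column names, so the keys come out as f1, f2, f3. A sketch of a workaround (same assumed column names) that routes the row through a lateral subquery so the names survive:

select row_to_json(r)
from my_table t
cross join lateral (
    select t.column1, t.column2, t.column_numeric::text as column_numeric
) r;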
This solution converts all the JSON values into text:
SELECT jsonb_object_agg(d.key, d.value)
FROM my_table AS t
CROSS JOIN LATERAL jsonb_each_text(to_jsonb(t.*)) AS d
GROUP BY t
whereas this solution only converts JSON numbers into text:
SELECT jsonb_object_agg(
           d.key,
           CASE WHEN jsonb_typeof(d.value) = 'number'
                THEN to_jsonb(d.value :: text)
                ELSE d.value
           END)
FROM my_table AS t
CROSS JOIN LATERAL jsonb_each(to_jsonb(t.*)) AS d
GROUP BY t
test result in dbfiddle.
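If you want the "automatic" behaviour the question asks for, the second query can be wrapped in a polymorphic function; a sketch (the function name is made up for illustration):

create or replace function to_jsonb_numeric_as_text(rec anyelement)
returns jsonb language sql stable as
$$
    select jsonb_object_agg(
               d.key,
               case when jsonb_typeof(d.value) = 'number'
                    then to_jsonb(d.value :: text)
                    else d.value
               end)
    from jsonb_each(to_jsonb(rec)) as d
$$;

-- usage: select to_jsonb_numeric_as_text(t.*) from my_table t;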
I have a JSON column, and the stored data looks like:
{"results":{"made":true,"cooked":true,"eaten":true}}
{"results":{"made":true,"cooked":true,"eaten":false}}
{"results":{"made":true,"eaten":true,"a":false,"b":true,"c":false}, "more": {"ignore":true}}
I need to find all rows where one or more values in $.results are false.
I tried using JSON_CONTAINS(), but didn't find a way to make it compare against a boolean JSON value, or to look at all values in $.results.
This needs to work with MySQL 5.7, but if that's not possible I will accept a MySQL 8+ answer.
I don't know of a way to search for a JSON true/false/null value using the JSON functions; in practice these values are treated as string-type values during a search with JSON_CONTAINS, JSON_SEARCH, etc.
Use a regular expression for the check. Something like:
SELECT id,
       JSON_PRETTY(jsondata)
FROM test
WHERE jsondata REGEXP '"results": {[^}]+: false.*}';
DEMO
You could simply search the JSON_EXTRACT result using a LIKE condition, this way:
SELECT * FROM table1 WHERE JSON_EXTRACT(json_dict, '$.results') LIKE '%: false%';
Check this DB FIDDLE
An alternative to the pattern matching in the other answers is to extract all values from $.results and check each entry against a helper table of running numbers:
SELECT DISTINCT v.id, v.json_value
FROM (
    SELECT id, json_value, JSON_EXTRACT(json_value, '$.results.*') AS value_array
    FROM json_table
) v
JOIN seq ON seq.n < JSON_LENGTH(v.value_array)
WHERE JSON_EXTRACT(v.value_array, CONCAT('$[', seq.n, ']')) = false
Here is the demo
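On MySQL 8+, JSON_TABLE() can replace the helper table. A sketch reusing the table/column names above (note that JSON_TABLE became a reserved word in 8.0, hence the quoted table name, and that wildcard paths in JSON_EXTRACT always autowrap their result in an array):

SELECT DISTINCT t.id, t.json_value
FROM `json_table` AS t,
     JSON_TABLE(
         JSON_EXTRACT(t.json_value, '$.results.*'),
         '$[*]' COLUMNS (v JSON PATH '$')
     ) AS jt
WHERE jt.v = CAST('false' AS JSON)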
I have a column myColumn in a table myTable with this value:
"6285":[
{
"75963":{"lookupId":"54","value":"0","version":null},
"75742":{"lookupId":"254","value":"991","version":null}
}
]
I need to write a select query using the JSON_VALUE or JSON_QUERY functions (my SQL Server version does not support OPENJSON). The query should return this result:
"75963-0, 75742-991"
As you can see, I need the values of the value parameter. Also note that I don't know in advance what elements the object inside the 6285 array will contain. I mean, I wouldn't know beforehand that there will be 2 elements (75963 and 75742) in it; there could be more or fewer, and they could be different, of course. However, there will always be only one object in the 6285 array.
What kind of select can I write to achieve this?
It's strange; I think your version does support OPENJSON(), and you may try the following statement. Note that the JSON in the question is not valid: it should be wrapped inside {}.
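As an aside, if OPENJSON() really is unavailable on SQL Server 2016 or later, the usual cause is a database compatibility level below 130; a hedged check (the ALTER is commented out, and the database name is illustrative):

-- OPENJSON() requires compatibility level 130 or higher
SELECT name, compatibility_level
FROM sys.databases
WHERE name = DB_NAME();
-- ALTER DATABASE MyDatabase SET COMPATIBILITY_LEVEL = 130;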
Table:
CREATE TABLE Data (JsonColumn varchar(1000));
INSERT INTO Data (JsonColumn)
VALUES ('{"6285":[{"75963":{"lookupId":"54","value":"0","version":null},"75742":{"lookupId":"254","value":"991","version":null}}]}');
Statement:
SELECT CONCAT(j2.[key], '-', JSON_VALUE(j2.[value], '$.value')) AS JsonValue
FROM Data d
CROSS APPLY OPENJSON(d.JsonColumn) j1
CROSS APPLY OPENJSON(j1.[value], '$[0]') j2
Result:
JsonValue
---------
75963-0
75742-991
If you need an aggregated result, you may use STRING_AGG():
SELECT STRING_AGG(CONCAT(j2.[key], '-', JSON_VALUE(j2.[value], '$.value')), ',') AS JsonValue
FROM Data d
CROSS APPLY OPENJSON(d.JsonColumn) j1
CROSS APPLY OPENJSON(j1.[value], '$[0]') j2
Result:
JsonValue
-----------------
75963-0,75742-991
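STRING_AGG() requires SQL Server 2017+. On 2016, where OPENJSON() already works, the older FOR XML PATH trick can do the aggregation instead; a sketch against the same table:

SELECT STUFF((
    SELECT ',' + j2.[key] + '-' + JSON_VALUE(j2.[value], '$.value')
    FROM Data d
    CROSS APPLY OPENJSON(d.JsonColumn) j1
    CROSS APPLY OPENJSON(j1.[value], '$[0]') j2
    FOR XML PATH('')
), 1, 1, '') AS JsonValue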
Given a table that contains a column of JSON like this:
{"payload":[{"type":"b","value":"9"}, {"type":"a","value":"8"}]}
{"payload":[{"type":"c","value":"7"}, {"type":"b","value":"3"}]}
How can I write a Presto query that gives me the average b value across all entries?
So far I think I need to use something like Hive's lateral view explode, whose Presto equivalent is cross join unnest.
But I'm stuck on how to write the Presto query for cross join unnest.
How can I use cross join unnest to expand all array elements and select them?
Here's an example of that:
with example(message) as (
    VALUES
        (json '{"payload":[{"type":"b","value":"9"},{"type":"a","value":"8"}]}'),
        (json '{"payload":[{"type":"c","value":"7"},{"type":"b","value":"3"}]}')
)
SELECT
    n.type,
    avg(n.value)
FROM example
CROSS JOIN UNNEST(
    CAST(
        JSON_EXTRACT(message, '$.payload')
        as ARRAY(ROW(type VARCHAR, value INTEGER))
    )
) as x(n)
WHERE n.type = 'b'
GROUP BY n.type
with defines a common table expression (CTE) named example with a column aliased as message
VALUES returns a verbatim table rowset
UNNEST is taking an array within a column of a single row and returning the elements of the array as multiple rows.
CAST changes the JSON type into an ARRAY type, which is required for UNNEST. It could easily have been ARRAY<MAP<...>>, but I find ARRAY(ROW(...)) nicer, as you can specify column names and use dot notation in the select clause.
JSON_EXTRACT uses a JSONPath expression to return the array value of the payload key.
avg() and group by should be familiar SQL.
As you pointed out, this was finally implemented in Presto 0.79. :)
Here is an example of the syntax for the cast from here:
select cast(cast ('[1,2,3]' as json) as array<bigint>);
A special word of advice: there is no 'string' type in Presto like there is in Hive.
That means if your array contains strings, make sure you use type 'varchar'; otherwise you get an error message saying 'type array does not exist', which can be misleading.
select cast(cast ('["1","2","3"]' as json) as array<varchar>);
The problem was that I was running an old version of Presto.
unnest was added in version 0.79:
https://github.com/facebook/presto/blob/50081273a9e8c4d7b9d851425211c71bfaf8a34e/presto-docs/src/main/sphinx/release/release-0.79.rst