WITH bigquery AS (
  SELECT level FROM dual CONNECT BY level < 1000
)
SELECT json_arrayagg(json_object(*))
FROM bigquery
With this code I can serialize the result of a query, but when the result set is too big it stops working:
ORA-40478: output value too large (maximum: 4000)
The problem comes from json_arrayagg(json_object(*)), because this code works:
WITH bigquery AS (
  SELECT level FROM dual CONNECT BY level < 1000
)
SELECT *
FROM bigquery
db<>fiddle
As described in the documentation, JSON_ARRAYAGG returns VARCHAR2(4000) if the RETURNING clause of the function is not specified:
JSON_agg_returning_clause
Use this clause to specify the data type of the character string returned by this function.
If you omit this clause, or if you specify VARCHAR2 but omit the size value, then JSON_ARRAYAGG returns a character string of type VARCHAR2(4000).
You need to specify CLOB as the return type.
WITH bigquery AS (
  SELECT level AS l FROM dual CONNECT BY level < 1000
)
SELECT json_arrayagg(
         json_object(KEY 'l' VALUE l)
         RETURNING CLOB
       )
FROM bigquery
db<>fiddle here
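If you want to keep the json_object(*) shorthand from the question, the RETURNING clause can presumably be applied at both levels, in case a single row's object also exceeds 4000 bytes. A sketch, assuming Oracle 19c+ (where json_object(*) is available):

WITH bigquery AS (
  SELECT level FROM dual CONNECT BY level < 1000
)
SELECT json_arrayagg(json_object(* RETURNING CLOB) RETURNING CLOB)
FROM bigquery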
I have a table with numeric/decimal columns and I am converting the rows to JSON:
select to_jsonb(t.*) from my_table t
I need to have the numeric columns cast to text before they are converted to JSON.
The reason I need this is that JavaScript doesn't handle really big numbers well, so I may lose precision. I use decimal.js, and the string representation is best for constructing a decimal.js number.
I know I can do this:
select to_jsonb(t.*) || jsonb_build_object('numeric_column', numeric_column::text) from my_table t
But I want to have it done automatically. Is there a way to somehow cast all numeric columns to text before passing them to the to_jsonb function?
It can be a user-defined Postgres function.
EDIT: Just to clarify my question: what I need is some function similar to to_jsonb, except that all columns of type numeric/decimal are stored as strings in the resulting JSON.
Thanks
You can run a query like:
select row_to_json(row(t.column1,t.column2,t.column_numeric::text)) from my_table t
Result here
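Note that row_to_json over an anonymous row() loses the original column names: the keys come out as f1, f2, and so on. One way to keep the names is to build the row in a subquery first. A sketch, reusing the placeholder column names from the query above:

SELECT row_to_json(s)
FROM (
  SELECT t.column1,
         t.column2,
         t.column_numeric::text AS column_numeric  -- the alias keeps the key name
  FROM my_table t
) s;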
This solution converts all the JSON values into text:
SELECT jsonb_object_agg(d.key, d.value)
FROM my_table AS t
CROSS JOIN LATERAL jsonb_each_text(to_jsonb(t.*)) AS d
GROUP BY t
whereas this solution only converts JSON numbers into text:
SELECT jsonb_object_agg(
         d.key,
         CASE WHEN jsonb_typeof(d.value) = 'number'
              THEN to_jsonb(d.value :: text)
              ELSE d.value
         END)
FROM my_table AS t
CROSS JOIN LATERAL jsonb_each(to_jsonb(t.*)) AS d
GROUP BY t
test result in dbfiddle.
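Since the question allows a user-defined Postgres function, the second query can be wrapped up for reuse. A minimal sketch (the function name to_jsonb_numeric_as_text is made up for illustration):

-- Converts a row to jsonb, rendering numeric values as JSON strings.
CREATE OR REPLACE FUNCTION to_jsonb_numeric_as_text(t anyelement)
RETURNS jsonb
LANGUAGE sql
STABLE
AS $$
  SELECT jsonb_object_agg(
           d.key,
           CASE WHEN jsonb_typeof(d.value) = 'number'
                THEN to_jsonb(d.value :: text)  -- re-wrap the number as a JSON string
                ELSE d.value
           END)
  FROM jsonb_each(to_jsonb(t)) AS d
$$;

-- usage:
-- SELECT to_jsonb_numeric_as_text(t) FROM my_table t;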
I have a JSON column and the data stored looks like:
{"results":{"made":true,"cooked":true,"eaten":true}}
{"results":{"made":true,"cooked":true,"eaten":false}}
{"results":{"made":true,"eaten":true,"a":false,"b":true,"c":false}, "more": {"ignore":true}}
I need to find all rows where one or more values in $.results are false.
I tried using JSON_CONTAINS(), but I didn't find a way to get it to compare to a boolean JSON value, or to look at all the values in $.results.
This needs to work with MySQL 5.7, but if that's not possible I will accept a MySQL 8+ answer.
I don't know of a way to search for a JSON true/false/null value using the JSON functions - in practice these values are treated as string-type values during searches with JSON_CONTAINS, JSON_SEARCH, etc.
Use a regular expression for the check instead. Something like:
SELECT id,
       JSON_PRETTY(jsondata)
FROM test
WHERE jsondata REGEXP '"results": {[^}]+: false.*}';
DEMO
You could simply search the result of JSON_EXTRACT using a LIKE condition, this way:
SELECT * FROM table1 WHERE JSON_EXTRACT(json_dict, '$.results') LIKE '%: false%';
Check this DB FIDDLE
An alternative to the pattern matching in the other answers is to extract all values from $.results and check each entry against a helper table of running numbers (a minimal sketch of the seq table is shown below):
SELECT DISTINCT v.id, v.json_value
FROM (
  SELECT id, json_value, JSON_EXTRACT(json_value, '$.results.*') AS value_array
  FROM json_table
) v
JOIN seq ON seq.n < JSON_LENGTH(v.value_array)
WHERE JSON_EXTRACT(v.value_array, CONCAT('$[', seq.n, ']')) = false
Here is the demo
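The query assumes a seq helper table holding running numbers starting at 0. A minimal sketch (extend the values to cover the largest expected $.results object):

CREATE TABLE seq (n INT PRIMARY KEY);
INSERT INTO seq (n) VALUES (0), (1), (2), (3), (4), (5), (6), (7), (8), (9);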
I'm trying to execute a query that gives me the 10 rows with the biggest score. The score column in my table is a JSON object like:
{fa="7",en="7"}
How can I set up my query to order by this JSON object? (It doesn't matter which of the two (en or fa) is used, because they are always the same.)
Assuming your JSON is actually {"fa":"7","en":"7"}, and assuming it is stored in a my_json_col column, you can access a member using the -> operator and order by it:
SELECT *
FROM my_table
ORDER BY my_json_col->"$.fa"
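Since the stored values are strings like "7" and you want the 10 biggest scores, you would presumably also want a numeric cast, a descending sort, and a limit. A sketch, keeping the my_json_col and fa names from above (the unquoting ->> operator needs MySQL 5.7.13+):

SELECT *
FROM my_table
ORDER BY CAST(my_json_col->>"$.fa" AS UNSIGNED) DESC
LIMIT 10;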
This SQL returns the correct result:
select * from `user` where `profile`->"$.year" IN ("2001")
But when I add more than one value:
select * from `user` where `profile`->"$.year" IN ("2001", "1")
it returns an empty result.
It seems the IN() operator does not work as expected on JSON columns in MySQL 5.7?
See the documentation:
11.6 The JSON Data Type
...
Comparison and Ordering of JSON Values
...
The following comparison operators and functions are not yet
supported with JSON values:
BETWEEN
IN()
GREATEST()
LEAST()
A workaround for the comparison operators and functions just listed is
to cast JSON values to a native MySQL numeric or string data type so
they have a consistent non-JSON scalar type.
...
(With a single value, the IN() list is optimized into a simple equality comparison, which is supported for JSON values; with more than one value you hit the unsupported IN().)
Try:
SELECT *
FROM `user`
WHERE CAST(`profile` -> "$.year" AS UNSIGNED) IN ("2001", "1");
See dbfiddle.
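Alternatively, the unquoting operator ->> (MySQL 5.7.13+) extracts the value as a plain string rather than as JSON, so a string comparison also works. A sketch:

SELECT *
FROM `user`
WHERE `profile`->>"$.year" IN ('2001', '1');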
Given a table that contains a column of JSON like this:
{"payload":[{"type":"b","value":"9"}, {"type":"a","value":"8"}]}
{"payload":[{"type":"c","value":"7"}, {"type":"b","value":"3"}]}
How can I write a Presto query to give me the average b value across all entries?
So far I think I need to use something like Hive's lateral view explode, whose equivalent is cross join unnest in Presto.
But I'm stuck on how to write the Presto query for cross join unnest.
How can I use cross join unnest to expand all array elements and select them?
Here's an example of that
WITH example(message) AS (
  VALUES
    (json '{"payload":[{"type":"b","value":"9"},{"type":"a","value":"8"}]}'),
    (json '{"payload":[{"type":"c","value":"7"},{"type":"b","value":"3"}]}')
)
SELECT
  n.type,
  avg(n.value)
FROM example
CROSS JOIN UNNEST(
  CAST(
    JSON_EXTRACT(message, '$.payload')
    AS ARRAY(ROW(type VARCHAR, value INTEGER))
  )
) AS x(n)
WHERE n.type = 'b'
GROUP BY n.type
WITH defines a common table expression (CTE) named example with a column aliased as message.
VALUES returns a verbatim table rowset.
UNNEST takes an array within a column of a single row and returns the elements of the array as multiple rows.
CAST changes the JSON type into the ARRAY type required by UNNEST. It could easily have been ARRAY<MAP<...>>, but I find ARRAY(ROW(...)) nicer, as you can specify column names and use dot notation in the SELECT clause (a sketch of the MAP variant follows below).
JSON_EXTRACT uses a JSONPath expression to return the array value of the payload key.
avg() and GROUP BY should be familiar SQL.
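For comparison, here is a sketch of the ARRAY(MAP(...)) cast the explanation above alludes to; map values come back as VARCHAR, so the average needs an explicit cast (shown with a single-row example CTE):

WITH example(message) AS (
  VALUES (json '{"payload":[{"type":"b","value":"9"},{"type":"a","value":"8"}]}')
)
SELECT avg(CAST(n['value'] AS INTEGER)) AS avg_b
FROM example
CROSS JOIN UNNEST(
  CAST(JSON_EXTRACT(message, '$.payload') AS ARRAY(MAP(VARCHAR, VARCHAR)))
) AS x(n)
WHERE n['type'] = 'b'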
As you pointed out, this was finally implemented in Presto 0.79. :)
Here is an example of the syntax for the cast from here:
select cast(cast ('[1,2,3]' as json) as array<bigint>);
A special word of advice: there is no 'string' type in Presto like there is in Hive.
That means if your array contains strings, make sure you use the type 'varchar'; otherwise you get an error message saying 'type array does not exist', which can be misleading.
select cast(cast ('["1","2","3"]' as json) as array<varchar>);
The problem was that I was running an old version of Presto.
unnest was added in version 0.79
https://github.com/facebook/presto/blob/50081273a9e8c4d7b9d851425211c71bfaf8a34e/presto-docs/src/main/sphinx/release/release-0.79.rst