I'm trying to get the data within the Rule column. It has a value in JSON format. I'm trying to construct the query to get the part which says "value":8.
Column name: Rule.
JSON within column:
{"element":[{"maxDiscount":0,"minAmount":100,"total":{"type":"ABSOLUTE_OFF","value":8}}]}
I'm stuck with this query:
select id, rule->>'$."total"' from table A
order by id desc;
My desired output is...
ID | Value
1A | 8
You may try using the JSON path $.element[0].total.value here:
SELECT
id,
JSON_EXTRACT(rule, '$.element[0].total.value') AS val
FROM tableA
ORDER BY id DESC;
Is this what you are looking for?
rule ->> "$.element[0].total.value"
This gives you the value attribute for the total entity that is the first element in the element array.
This can also be expressed:
json_extract(rule, "$.element[0].total.value")
Demo on DB Fiddle:
select rule ->> "$.element[0].total.value" res
from (
select cast('{"element":[{"maxDiscount":0,"minAmount":100,"total":{"type":"ABSOLUTE_OFF","value":8}}]}' as json) rule
) t
| res |
| :-- |
| 8 |
I have a table with a json column that contains an array of objects, like the following:
create table test_json (json_id int not null primary key, json_data json not null)
select 1 as json_id, '[{"category":"circle"},{"category":"square", "qualifier":"def"}]' as json_data
union
select 2 as json_id, '[{"category":"triangle", "qualifier":"xyz"},{"category":"square"}]' as json_data;
+---------+----------------------------------------------------------------------+
| json_id | json_data                                                            |
+---------+----------------------------------------------------------------------+
|       1 | [{"category":"circle"}, {"category":"square", "qualifier":"def"}]   |
|       2 | [{"category":"triangle", "qualifier":"xyz"}, {"category":"square"}] |
+---------+----------------------------------------------------------------------+
I'd like to be able to query this table to look for any rows (json_id's) that contain a json object in the array with both a "category" value of "square" and no "qualifier" property.
The sample table above is just a sample and I'm looking for a query that would work over hundreds of rows and hundreds of objects in the json array.
In MySQL 8.0, you would use JSON_TABLE() for this:
mysql> select json_id, j.* from test_json, json_table(json_data, '$[*]' columns (
category varchar(20) path '$.category',
qualifier varchar(10) path '$.qualifier')) as j
where j.category = 'square' and j.qualifier is null;
+---------+----------+-----------+
| json_id | category | qualifier |
+---------+----------+-----------+
| 2 | square | NULL |
+---------+----------+-----------+
It's not clear why you would use JSON for this at all. It would be better to store the data in the normal manner, one row per object, with category and qualifier as individual columns.
A query against normal columns is a lot simpler to write, and you can optimize the query easily with an index:
select * from mytable where category = 'square' and qualifier is null;
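For reference, a minimal sketch of what that normalized layout might look like (the table name, column sizes, and index are illustrative assumptions, not part of the original schema):
-- Hypothetical normalized table: one row per object instead of a JSON array
CREATE TABLE mytable (
  id        INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  json_id   INT NOT NULL,             -- which original document the object came from
  category  VARCHAR(20) NOT NULL,
  qualifier VARCHAR(10) NULL,
  KEY idx_category_qualifier (category, qualifier)  -- lets the query above use an index
);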
I found another solution using only MySQL 5.7 JSON functions:
select json_id, json_data
from test_json
where json_extract(json_data,
        concat(
          trim(trailing '.category' from
            json_unquote(json_search(json_data, 'one', 'square'))
          ),
          '.qualifier')
      ) is null
This assumes the value 'square' only occurs as a value for a "category" field. This is true in your simple example, but I don't know if it will be true in your real data.
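If that assumption might not hold, one defensive tweak (just a sketch, not part of the original answer, and json_search with 'one' still only considers the first match) is to also require that the path returned by json_search ends in '.category':
select json_id, json_data
from test_json
where json_unquote(json_search(json_data, 'one', 'square')) like '%.category'
  and json_extract(json_data,
        concat(
          trim(trailing '.category' from
            json_unquote(json_search(json_data, 'one', 'square'))
          ),
          '.qualifier')
      ) is null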
Result:
+---------+------------------------------------------------------------------------+
| json_id | json_data |
+---------+------------------------------------------------------------------------+
| 2 | [{"category": "triangle", "qualifier": "xyz"}, {"category": "square"}] |
+---------+------------------------------------------------------------------------+
I still think that it's a code smell anytime you reference JSON columns in a condition in the WHERE clause. I understood your comment that this is a simplified example, but regardless of the JSON structure, if you need to run search conditions, your queries will be far easier to develop if your data is stored in conventional columns in normalized tables.
Your request is not entirely clear. Neither of your SQL records has such properties, but your JSON objects do. Perhaps you are trying to find any record that contains such an object; if so, the following is your answer:
create table test_json (json_id int not null primary key, json_data json not null)
select 1 as json_id, '[{"category":"circle", "qualifier":"abc"},{"category":"square", "qualifier":"def"}]' as json_data
union
select 2 as json_id, '[{"category":"triangle", "qualifier":"xyz"},{"category":"square"}]' as json_data;
select * from test_json;
select * from test_json where 'square' in (JSON_EXTRACT(json_data, '$[0].category'),JSON_EXTRACT(json_data, '$[1].category'))
AND (JSON_EXTRACT(json_data, '$[0].qualifier') is NULL || JSON_EXTRACT(json_data, '$[1].qualifier') is NULL);
See Online Demo
Also see JSON Function Reference
Let's assume this users table:
-----------------------------------------
| id | ... | info |
-----------------------------------------
| 1 | ... | {"items":["132","136"]} |
I need to make a request to fetch users that have items with id == 136.
The following is the SQL I built, but it does not work and I don't understand why:
SELECT _u.id FROM users _u WHERE _u.info REGEXP '("items":)([)("136")(])'
Thank you in advance!
Here is one approach using the MySQL JSON functions:
SELECT *
FROM yourTable
WHERE JSON_SEARCH(JSON_EXTRACT(json, "$.items"), 'one', "136") IS NOT NULL;
Demo
The call to JSON_EXTRACT first extracts the JSON array under the items key. Then, we use JSON_SEARCH to try to find an element "136".
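As an aside (not part of the original answer), if the ids in the items array are always stored as JSON strings, MySQL's JSON_CONTAINS can express the same check more directly; this is only a sketch assuming MySQL 5.7+ and the same yourTable/json names:
SELECT *
FROM yourTable
WHERE JSON_CONTAINS(json, '"136"', '$.items');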
Edit:
If you are certain that the JSON to be searched would always just be one key items along with a single level JSON array, then REGEXP might be viable here:
SELECT *
FROM yourTable
WHERE json REGEXP '"items":\\[.*"136".*\\]';
Demo
I have a table called slices with some simple json objects that looks like this:
id | payload | metric_name
---|---------------------------------------|------------
1 | {"a_percent":99.97,"c_percent":99.97} | metric_c
2 | {"a_percent":98.37,"c_percent":97.93} | metric_c
There are many records like this. I am trying to get this:
a_percent | c_percent
----------|----------
99.97 | 99.97
98.37 | 97.93
I am creating the type and using json_populate_recordset along with json_agg in the following fashion:
CREATE TYPE c_history AS(
"a_percent" NUMERIC(5, 2),
"c_percent" NUMERIC(5, 2)
);
SELECT * FROM
json_populate_recordset(
NULL :: c_history,
(
SELECT json_agg(payload::json) FROM slices
WHERE metric_name = 'metric_c'
)
);
The clause select json_agg(...) by itself produces a nice array of json objects, as expected:
[{"a_percent":99.97,"c_percent":99.97}, {"a_percent":98.37,"c_percent":97.93}]
But when I run it inside json_populate_recordset, I get an error: ERROR: must call json_populate_recordset on an array of objects.
What am I doing wrong?
This is a variant of @TimBiegeleisen's solution with the function json_populate_record() used in a FROM clause:
select id, r.*
from slices,
lateral json_populate_record(null::c_history, payload) r;
See rextester or SqlFiddle.
You don't need to use json_agg, since it appears you want to get the set of a_percent and c_percent values for each id in a separate record. Rather, just call json_populate_record as follows:
SELECT id, (json_populate_record(null::c_history, payload)).* FROM slices
Background
I have a table in BigQuery with one column, data, which contains JSON, as shown below.
data
{"name":"x","mobile":999,"location":"abc"}
{"name":"x1","mobile":9991,"location":"abc1"}
Now, I want to use groupby functions:
SELECT
data
FROM
table
GROUP BY
json_extract(data,'$.location')
This query throws an error
expression JSON_EXTRACT([data], '$.location') in GROUP BY is invalid
So, I modify query to
SELECT
data, json_extract(data,'$.location') as l
FROM
table
GROUP BY
l
This query throws error
Expression 'data' is not present in the GROUP BY list
Query
How can we use JSON fields in a GROUP BY clause?
And what are the limitations (in the context of querying) of having columns populated with JSON?
You are grouping by location, but you are not applying an aggregate function to the data field, so the compiler doesn't know which value of data to pick (or how to aggregate it) for each group.
Just to illustrate, I put together this test query, which works using group_concat:
select group_concat(data),location from
(
select * from
(SELECT '{"name":"x","mobile":999,"location":"abc"}' as data,json_extract('{"name":"x","mobile":999,"location":"abc"}','$.location') as location),
(SELECT '{"name":"x","mobile":111,"location":"abc"}' as data,json_extract('{"name":"x","mobile":111,"location":"abc"}','$.location') as location),
(SELECT '{"name":"x1","mobile":9991,"location":"abc1"}' as data,json_extract('{"name":"x1","mobile":9991,"location":"abc1"}','$.location') as location)
) d
group by location
and returns:
+-----+----------------------------------------------------------------------------------------------------+----------+
| Row | f0_                                                                                                | location |
+-----+----------------------------------------------------------------------------------------------------+----------+
|   1 | {"name":"x","mobile":999,"location":"abc"},"{""name"":""x"",""mobile"":111,""location"":""abc""}" | abc      |
|   2 | {"name":"x1","mobile":9991,"location":"abc1"}                                                      | abc1     |
+-----+----------------------------------------------------------------------------------------------------+----------+
BigQuery's aggregate functions are documented here.
Try below
SELECT location,
  GROUP_CONCAT_UNQUOTED(REPLACE(data, CONCAT(',"location":"', location, '"'), '')) AS data
FROM (
  SELECT data,
    JSON_EXTRACT_SCALAR(data, '$.location') AS location
  FROM YourTable
)
GROUP BY location
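If standard SQL is an option, a roughly equivalent grouping might look like the sketch below (the table path is a placeholder, and ARRAY_AGG stands in for the string concatenation; this is not from the answers above):
#standardSQL
SELECT
  JSON_EXTRACT_SCALAR(data, '$.location') AS location,
  ARRAY_AGG(data) AS data
FROM `project.dataset.YourTable`
GROUP BY location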
My table is like this:
create table alphabet_soup(
  id numeric,
  "index" json
);
my data looks like this:
(id, json) looks like this: (1, '{('key':1,'value':"A"),('key':2,'value':"C"),('key':3,'value':"C")...(600,"B")}')
How do I sum across the json for the number of A and the number of B, and compute the % occurrence of A or B? I have about 6 different types of values (ABCDEF), but for simplicity I am just looking for a comparison of 3 values.
I am trying to find something to help me calculate the % of occurrence of a value from a key-value pair in json. I am using postgres 9.4. I am new to both json and postgres, and I keep landing on the same json functions manual page of postgres over and over.
I have managed to find a sum, but how do I calculate the % in a nested select and display the keys and values in increasing order of occurrence, like the following:
value | occurence | %
====================================
A | 300 | 50
B | 198 | 33
C | 102 | 17
The script I am using for the sum is :
select id, index->'key'::key as key
sum(case when (1,index::json->'1')::text = (1,index::json->'2')::text
then 1
else 0
end)/count(id) as res
from
alphabet_soup
group by id;
limit 10;
I get the following error:
column "alphabet_soup.id" must appear in the group by clause or be used in an aggregate function.
Thanks for the comment Patrick. Sorry I forgot to add I am using postgres 9.4
The easiest way to do this is to expand the json document into a regular row set using the json_each_text() function. Every single json document then becomes a set of rows, and you can then apply aggregate functions as you would on any other row set. However, you need to use the function as a row source (section 7.2.1.4), since it returns a set of rows, and then select the value field which has the category of interest. Note that the function uses a field of the table through an implicit LATERAL join (section 7.2.1.5).
SELECT id, value
FROM alphabet_soup, json_each_text("index");
which yields something like:
test=# SELECT id, value FROM alphabet_soup, json_each_text("index");
id | value
----+-------
1 | A
1 | C
1 | C
1 | B
To this you can apply regular aggregate functions over the appropriate windows to get the result you are looking for:
SELECT DISTINCT id, value,
count(value) OVER (PARTITION BY id, value) AS occurrence,
count(value) OVER (PARTITION BY id, value) * 100.0 /
count(id) OVER (PARTITION BY id) AS percentage
FROM (
SELECT id, value
FROM alphabet_soup, json_each_text("index") ) sub
ORDER BY id, value;
Which gives a result like:
id | value | occurrence | percentage
----+-------+------------+---------------------
1 | A | 1 | 25.0000000000000000
1 | B | 1 | 25.0000000000000000
1 | C | 2 | 50.0000000000000000
This will work for any number of categories (ABCDEF) and any number of ids.
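If you want output closer to the table in the question, ordered by occurrence with rounded percentages, a small variation on the same query might look like this sketch (the round() call and the ordering are additions, not part of the answer above):
SELECT DISTINCT id, value,
       count(*) OVER (PARTITION BY id, value) AS occurrence,
       round(count(*) OVER (PARTITION BY id, value) * 100.0 /
             count(*) OVER (PARTITION BY id)) AS percent
FROM alphabet_soup, json_each_text("index")
ORDER BY id, occurrence DESC;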
@Patrick, it was an accident. I am new to Stack Overflow; I did not realize how it works. I was fiddling around and I found the answer to the question I asked in addition to the first one. Sorry about that!
For fun, I added some more to the code to make the % compare of the result set:
WITH q1 AS (
  SELECT DISTINCT id, value,
         count(value) OVER (PARTITION BY id, value) AS occurrence,
         count(value) OVER (PARTITION BY id, value) * 100.0 /
           count(id) OVER (PARTITION BY id) AS percentage
  FROM ( SELECT id, value FROM alphabet_soup, json_each_text("index") ) sub
  ORDER BY id, value
)
SELECT DISTINCT id, value, least(percentage)
FROM q1
WHERE least(percentage) > 20
ORDER BY id, value;
The output for this is:
id | value | least
----+-------+--------
1 | B | 33
1 | C | 50