Order MySQL result by value from JSON_OBJECT

I have MariaDB 10.2 and this SQL:
SELECT products.*,
CONCAT('[', GROUP_CONCAT(JSON_OBJECT(
'id', V.id,
'price', V.price
) ORDER BY V.price ASC), ']') AS variants
FROM products
LEFT JOIN products_variants V ON V.products_id = products.id
GROUP BY products.id
LIMIT 0,10
Result is:
Array (
[0] => Array (
[id] => 1,
[variants] => [{"id": 1, "price": 100},{"id": 2, "price": 110}]
)
[1] => Array (
[id] => 2,
[variants] => [{"id": 3, "price": 200},{"id": 4, "price": 210}]
)
)
I need to sort the products according to the price of the first variant of each product.
Product id 1 must come first because its first variant price is 100.
Product id 2 must come second because its first variant price is 200, and 200 > 100.
I tried:
ORDER BY JSON_EXTRACT(`variants`, '$[0].price')
but got this error:
Reference 'variants' not supported (reference to group function)

I think you want:
order by min(v.price)
There is no need to parse the JSON object.
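Put together with the original query, that would look something like the sketch below (untested; since GROUP_CONCAT already orders the variants by ascending price, MIN(V.price) is exactly the price of the first variant, so the JSON never needs to be parsed):
SELECT products.*,
CONCAT('[', GROUP_CONCAT(JSON_OBJECT(
'id', V.id,
'price', V.price
) ORDER BY V.price ASC), ']') AS variants
FROM products
LEFT JOIN products_variants V ON V.products_id = products.id
GROUP BY products.id
ORDER BY MIN(V.price)
LIMIT 0,10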

This is an example of when one should not hide a value (price) inside a JSON column. Instead, keep price as its own column to make it simple for MySQL to access for sorting (as you needed) or filtering.
It is OK to also keep the value inside the JSON string, so that the JSON is "complete".
The problem was that you had the wrong quotes around variants. Backticks (which you used) are for column and table names. You needed apostrophes or double quotes (' or ") because variants is just a string in this context.

Related

Update postgres values with corresponding keys

I want to lower-case the values for specific keys.
Table: logs
id bigint, jsondata text
[
{
"loginfo": "somelog1",
"id": "App1",
"identifier":"IDENTIF12"
},
{
"loginfo": "somelog2",
"id": "APP2",
"identifier":"IDENTIF1"
}
]
I need to lower-case only id and identifier.
I need to achieve something like the pseudocode below:
UPDATE SET json_agg(elems.id) = lowered_val...
SELECT
id,
lower(json_agg(elems.id)) as lowered_val
FROM logs,
json_array_elements(jsondata::json) as elems
GROUP BY id;
demo:db<>fiddle
This is not that simple. You need to expand the array, take apart each JSON object, and rebuild everything manually:
SELECT
id,
json_agg(new_object) -- 5
FROM (
SELECT
id,
json_object_agg( -- 4
attr.key,
CASE -- 3
WHEN attr.key IN ('id', 'identifier') THEN LOWER(attr.value)
ELSE attr.value
END
) as new_object
FROM logs,
json_array_elements(jsondata::json) WITH ORDINALITY as elems(value, index), -- 1
json_each_text(elems.value) as attr -- 2
GROUP BY id, elems.index -- 4
) s
GROUP BY id
1. Extract the array elements. WITH ORDINALITY adds an index to the elements, making it possible to group the original arrays back together afterwards.
2. Expand each array element into one record per attribute. This creates the two columns key and value.
3. If the key is one of the keys to be modified, lower-case the related value; leave all others unchanged.
4. Rebuild the JSON objects (grouping by the original array index).
5. Reaggregate them into a new JSON array.
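If the goal is to actually write the lower-cased data back (as the UPDATE pseudocode in the question suggests), one possible follow-up is to join the rebuilt arrays back to the table. This is only a sketch built on the query above, reusing the question's column names (id, jsondata):
UPDATE logs l
SET jsondata = s.new_array::text
FROM (
    SELECT
        id,
        json_agg(new_object) AS new_array
    FROM (
        -- same inner query as above: steps 1 to 4
        SELECT
            id,
            json_object_agg(
                attr.key,
                CASE
                    WHEN attr.key IN ('id', 'identifier') THEN LOWER(attr.value)
                    ELSE attr.value
                END
            ) AS new_object
        FROM logs,
            json_array_elements(jsondata::json) WITH ORDINALITY AS elems(value, index),
            json_each_text(elems.value) AS attr
        GROUP BY id, elems.index
    ) t
    GROUP BY id
) s
WHERE l.id = s.id;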

Mysql Rewrite Column with JSON Array

I want to write a MySQL query that replaces a JSON array with data from another table.
I have two tables, "Reserved" and "Seats". Reserved contains a column "Seats", a JSON array referring to IDs of the table "Seats". The table Seats also contains a column "Name". I now want to replace the IDs in the JSON data of the Seats column of the Reserved table with the names of the corresponding IDs stored in the Seats table.
Is there a way to do this in a MySQL query? I do not know how to pack a query result into JSON format and return it as a column.
I already tried to utilize JSON_EXTRACT somehow; see the test below.
SELECT * FROM `seats` WHERE ID = JSON_EXTRACT('["276", "277", "278"]','$.*')
Basically, I want a query like this:
SELECT *,
JSONCreate(SELECT name from `seats` WHERE seats.id IN JSON_EXTRACT(reserved.seats)) as name
FROM `reserved`
WHERE 1
You can use one of the following solutions.
solution using JSON_SEARCH and JSON_ARRAYAGG
SELECT r.seats, JSON_ARRAYAGG(s.name)
FROM reserved r JOIN seats s ON JSON_SEARCH(r.seats, 'one', CONVERT(s.id, CHAR(10))) IS NOT NULL
GROUP BY r.seats
solution using ... MEMBER OF () and JSON_ARRAYAGG
SELECT r.seats, JSON_ARRAYAGG(s.name)
FROM reserved r INNER JOIN seats s ON CONVERT(s.id, CHAR) MEMBER OF(r.seats)
GROUP BY r.seats
solution using JSON_CONTAINS and JSON_ARRAYAGG
SELECT r.seats, JSON_ARRAYAGG(s.name)
FROM reserved r INNER JOIN seats s ON JSON_CONTAINS(r.seats, JSON_QUOTE(CONVERT(s.id, CHAR))) = 1
GROUP BY r.seats
You can also use JSON_TABLE to solve this:
SELECT JSON_ARRAYAGG(IFNULL(s.name, ''))
FROM reserved r, JSON_TABLE(
r.seats,
"$[*]" COLUMNS (
id CHAR(50) PATH "$"
)
) AS rr LEFT JOIN seats s ON rr.id = s.id
GROUP BY r.seats
Note: You can use an INNER JOIN to remove the empty values. Instead of GROUP BY r.seats, you should preferably group by an id column.
demo on dbfiddle.uk
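To try the queries above, a minimal hypothetical setup could look like this (table and column names follow the question; JSON_TABLE needs MySQL 8.0+ and MEMBER OF needs 8.0.17+):
CREATE TABLE seats (id INT PRIMARY KEY, name VARCHAR(50));
CREATE TABLE reserved (id INT PRIMARY KEY, seats JSON);
INSERT INTO seats VALUES (276, 'A1'), (277, 'A2'), (278, 'A3');
INSERT INTO reserved VALUES (1, '["276", "277", "278"]');
-- the JSON_CONTAINS solution, grouped by an id column as the note suggests
SELECT r.id, JSON_ARRAYAGG(s.name) AS names
FROM reserved r
INNER JOIN seats s ON JSON_CONTAINS(r.seats, JSON_QUOTE(CONVERT(s.id, CHAR))) = 1
GROUP BY r.id
This should return something like 1, ["A1", "A2", "A3"] (the order inside the aggregated array is not guaranteed).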

What is the proper MySQL way to take data from 4 rows, 1 column, and separate into 9 columns?

I've studied and tried days' worth of SQL queries to find "something" that will work. I have a table, apj32_facileforms_subrecords, that uses 7 columns. All the data I want to display is in 1 column, "value". The "record" column holds the number of the entry. The "title" is what I would like to appear in the header row, but that's not as important as getting "value" to display in 1 row based upon the "record" number.
I've tried a lot of CONCAT and various Pivot queries, but nothing seems to do more than "get close" to what I'd like as the end result.
Here's a screenshot of the table:
The output "should" be linear, so that 1 row contains 9 columns:
Project; Zipcode; First Name; Last Name; Address; City; Phone; E-mail; Trade (in that order). And the values in the 9 columns come from "value" as they relate to the "record" number.
I know there are LOT of examples that are similar, but nothing I've found covers taking all the values from "value" and CONCAT to 1 row.
This works to get all the data I want - SELECT record,value FROM apj32_facileforms_subrecords WHERE (record IN (record,value)) ORDER BY record
But the values are still in multiple rows. I can play with that query to get just the values, but I'm still at a loss to get them into 1 row. I'll keep playing with that query to see if I can figure it out before one of the experts here shows me how simple it is to do that.
Any help would be appreciated.
Using SQL to flatten an EAV model representation into a relational representation can be somewhat convoluted, and not very efficient.
Two commonly used approaches are conditional aggregation and correlated subqueries in the SELECT list. Both approaches call for careful indexing to get suitable performance with large sets.
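For example, assuming the table has record and name columns as used in the queries below, a composite index along these lines (hypothetical index name) is a reasonable starting point:
-- hypothetical composite index; if name is a TEXT column, use a prefix length such as name(50)
ALTER TABLE `apj32_facileforms_subrecords` ADD INDEX `idx_record_name` (record, name);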
correlated subqueries example
Here's an example of the correlated subquery approach, to get one value of the "zipcode" attribute for some records
SELECT r.id
, ( SELECT v1.value
FROM `apj32_facileforms_subrecords` v1
WHERE v1.record = r.id
AND v1.name = 'zipcode'
ORDER BY v1.value LIMIT 0,1
) AS `Zipcode`
FROM ( SELECT 1 AS id ) r
Extending that, we repeat the correlated subquery, changing the attribute identifier ('firstname' in place of 'zipcode'). It looks like we could also reference it by element, e.g. v2.element = 2.
SELECT r.id
, ( SELECT v1.value
FROM `apj32_facileforms_subrecords` v1
WHERE v1.record = r.id
AND v1.name = 'zipcode'
ORDER BY v1.value LIMIT 0,1
) AS `Zipcode`
, ( SELECT v2.value
FROM `apj32_facileforms_subrecords` v2
WHERE v2.record = r.id
AND v2.name = 'firstname'
ORDER BY v2.value LIMIT 0,1
) AS `First Name`
, ( SELECT v3.value
FROM `apj32_facileforms_subrecords` v3
WHERE v3.record = r.id
AND v3.name = 'lastname'
ORDER BY v3.value LIMIT 0,1
) AS `Last Name`
FROM ( SELECT 1 AS id UNION ALL SELECT 2 ) r
returns something like
id Zipcode First Name Last Name
-- ------- ---------- ---------
1 98228 David Bacon
2 98228 David Bacon
conditional aggregation approach example
We can use GROUP BY to collapse multiple rows into one row per entity, and use conditional tests in expressions to "pick out" attribute values with aggregate functions.
SELECT r.id
, MIN(IF(v.name = 'zipcode' ,v.value,NULL)) AS `Zip Code`
, MIN(IF(v.name = 'firstname' ,v.value,NULL)) AS `First Name`
, MIN(IF(v.name = 'lastname' ,v.value,NULL)) AS `Last Name`
FROM ( SELECT 1 AS id UNION ALL SELECT 2 ) r
LEFT
JOIN `apj32_facileforms_subrecords` v
ON v.record = r.id
GROUP
BY r.id
For more portable syntax, we can replace the MySQL IF() function with the more ANSI-standard CASE expression, e.g.
, MIN(CASE v.name WHEN 'zipcode' THEN v.value END) AS `Zip Code`
Note that MySQL does not support SQL Server PIVOT syntax, or Oracle MODEL syntax, or Postgres CROSSTAB or FILTER syntax.
To extend either of these approaches to be dynamic, returning a resultset with a variable number of columns and a variety of column names, is not possible in the context of a single SQL statement. We could, however, separately execute SQL statements to retrieve information that would allow us to dynamically construct a SQL statement of the form shown above, with an explicit set of columns to be returned.
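A hedged sketch of that idea (not tested against the actual table; names follow the queries above): build the column list from the data first, then run the generated statement as a prepared statement.
-- 1) build one MIN(CASE ...) expression per distinct attribute name
SET SESSION group_concat_max_len = 1024 * 1024;
SELECT GROUP_CONCAT(DISTINCT
         CONCAT('MIN(CASE v.name WHEN ', QUOTE(v.name),
                ' THEN v.value END) AS `', v.name, '`'))
  INTO @cols
  FROM `apj32_facileforms_subrecords` v;
-- 2) assemble and execute the pivot statement
SET @sql = CONCAT('SELECT v.record AS id, ', @cols,
                  ' FROM `apj32_facileforms_subrecords` v GROUP BY v.record');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;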
The approaches outlined above return a more traditional relational result (individual columns, each with a value).
non-relational munge of attributes and values into a single string
If we have some special delimiters, we could munge together a representation of the data using the GROUP_CONCAT function.
As a rudimentary example:
SELECT r.id
, GROUP_CONCAT(v.title,'=',v.value ORDER BY v.name) AS vals
FROM ( SELECT 1 AS id ) r
LEFT
JOIN `apj32_facileforms_subrecords` v
ON v.record = r.id
AND v.name in ('zipcode','firstname','lastname')
GROUP
BY r.id
To return two columns, something like
id vals
-- ---------------------------------------------------
1 First Name=David,Last Name=Bacon,Zip Code=98228
We need to be aware that the return from GROUP_CONCAT is limited to group_concat_max_len bytes. And here we have just squeezed the balloon, moving the problem to some later processing that has to parse the resulting string. If any equal signs or commas appear in the values, they will make a mess of parsing the result string, so we would have to properly escape any delimiters that appear in the data, which means the GROUP_CONCAT expression gets more involved.
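As a rough sketch of that escaping (the backslash as escape character is an arbitrary choice here, and values containing backslashes themselves would need one more REPLACE):
SELECT r.id
     , GROUP_CONCAT(
           REPLACE(REPLACE(v.title, '=', '\\='), ',', '\\,')
         , '='
         , REPLACE(REPLACE(v.value, '=', '\\='), ',', '\\,')
         ORDER BY v.name) AS vals
  FROM ( SELECT 1 AS id ) r
  LEFT
  JOIN `apj32_facileforms_subrecords` v
    ON v.record = r.id
   AND v.name IN ('zipcode','firstname','lastname')
 GROUP
    BY r.id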

Combining Data in Two Tables SQL

I'm sure this is a very basic question, but I continue to be stuck:
Table A - image_number, camera_type, total_sales
Table B - image_number, keyword
Table A has one ROW for each image_number - example:
image_number="AXJ789, camera_type="Nikon", total_sales=678
image_number="JIJ123", camera_type="Canon", total_sales=999
image_number="KNI908", camera_type="Sony", total_sales=565
Table B has many ROWs for each image_number - example:
image_number="AXJ789", keyword = "rain"
image_number="AXJ789", keyword = "mountain"
image_number="AXJ789", keyword = "grass"
image_number="AXJ789", keyword = "cloud"
What I'm trying to do is JOIN the two tables so that I can generate the following output:
image_number="AXJ789", camera_type=678, camera_type="Nikon", keyword(1) = "rain", keyword(2) = "mountain", keyword(3) = "grass", keyword(4) = "cloud"
In other words, I want to have all items in each ROW in table A + all the items from table B. For each image_number in Table A, there could be no "keywords" in Table B or 50 keywords - depends on the image.
When I do an INNER JOIN, of course I can get one "keyword" from table B, but I can't figure out how to get all of them.
You can concatenate the keywords together:
select a.*,
(select group_concat(b.keyword)
from b
where b.image_number = a.image_number
) as keywords
from a;
This creates a comma-delimited list of the keywords. This is much simpler (in MySQL) than trying to put them in separate columns. In fact, if you wanted separate columns, I might suggest parsing this result:
select a.*, -- or whatever columns you want
substring_index(keywords, ',', 1) as keyword1,
substring_index(substring_index(keywords, ',', 2), ',', -1) as keyword2,
substring_index(substring_index(keywords, ',', 3), ',', -1) as keyword3,
substring_index(substring_index(keywords, ',', 4), ',', -1) as keyword4
from a left join
(select b.image_number, group_concat(b.keyword) as keywords
from b
group by b.image_number
) b
on b.image_number = a.image_number;
You can generate a comma-separated list of keywords for each image using GROUP_CONCAT and JOIN (but use a LEFT JOIN if an image may have no keywords).
SELECT a.*, GROUP_CONCAT(b.keyword) AS keyword_list
FROM a
JOIN b on b.image_number = a.image_number
GROUP BY a.image_number
Output for your sample data:
image_number camera_type total_sales keyword_list
AXJ789 Nikon 678 rain,mountain,grass,cloud
Demo on dbfiddle
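As mentioned above, a LEFT JOIN keeps images that have no keywords at all; a sketch of that variant (COALESCE just turns the NULL list into an empty string):
SELECT a.*, COALESCE(GROUP_CONCAT(b.keyword), '') AS keyword_list
FROM a
LEFT JOIN b ON b.image_number = a.image_number
GROUP BY a.image_number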
You can then parse this into an array in your application, for example in PHP (if you have read the row into $row):
$keywords = explode(',', $row['keyword_list']);
print_r($keywords);
Output:
Array
(
[0] => rain
[1] => mountain
[2] => grass
[3] => cloud
)

Count other documents in the same table matching the condition

I've got this query:
SELECT
id,
(
SELECT COUNT(*)
FROM default t USE KEYS default.id
WHERE t.p_id=default.id
) as children_count
FROM default
WHERE 1
I expect this:
[
{
"children_count": 5,
"id": "1"
},
...
]
But I got this:
[
{
"children_count": [
{
"$1": 0
}
],
"id": "1"
},
...
]
What am I doing wrong? I've googled this, but I can't find any clear explanation of COUNT subqueries in N1QL, so any links to documentation would be highly appreciated.
UPD:
I've updated my code according to prettyvoid's answer. I've also created a minimal example bucket to demonstrate the problem.
SELECT
id,
(
SELECT COUNT(*) as count
FROM test t USE KEYS "p.id"
WHERE t.p_id=p.id
)[0].count as children_count
FROM test p
WHERE 1
The result is the following:
[
{
"children_count": 0,
"id": 1
},
{
"children_count": 0,
"id": 2
},
{
"children_count": 0,
"id": 3
}
]
Any SELECT statement will yield an array with objects inside; that's normal. If you want to get your expected result, then scope in to the count object inside the array with [0].
Edit: The following query will do what you want; I'm unsure if there is a better way, though.
SELECT
id,
(
SELECT COUNT(*) as count
FROM default t USE KEYS (SELECT RAW meta().id from default)
WHERE t.p_id=p.id
)[0].count as children_count
FROM default p
It's important to note that in Couchbase, the very fastest way to retrieve a document is via its document key. Using a GSI index is workable, but slower. And as with most other databases, it is best to avoid a full scan. You say you can make id the same as the document key, so I will assume that is the case, so that I can use p_id in the on keys clause.
Is it OK to only list documents with a non-zero number of children? In that case, you can write this as an aggregation query where you join each child to its parent and group by the parent id (note my bucket is called default):
select p.id, count(*) as children_count
from default c join default p on keys c.p_id
group by p.id;
If you need to include documents with zero children, you need to UNION with a query that finds those documents as well. In this case we know that:
select raw array_agg(distinct(p_id)) from default where p_id is not null
will give us an array of parent IDs, so we can get the ids not in the list with:
select id, 0 as children_count
from default p
where not array_contains(
(select raw array_agg(distinct(p_id)) from default where p_id is not null)[0],id);
So, if we UNION the two:
(select p.id, count(*) as children_count
from default c join default p on keys c.p_id
group by p.id)
UNION
(select id, 0 as children_count
from default p
where not array_contains(
(select raw array_agg(distinct(p_id)) from default where p_id is not null)[0],id));
We get a list of all the ids and their children_count, zero or not. If you want more than just the id, add more fields or '*' to the select list for each query.