How to Convert Query Result to JSON Object Inside Postgres

I have a simple query, SELECT name, grp FROM things; that results in the following table:
name | grp
------+-----
a | y
b | x
c | x
d | z
e | z
f | z
I would like to end up with the following single JSON object:
{y: [a], x: [b,c], z: [d,e,f]}
I feel like I'm closer with the query SELECT grp, array_agg(name) as names FROM things GROUP BY grp; which gives three rows with the "name" condensed into an array, but I don't know where to go from here to get the rows condensed into a single JSON object.
SELECT json_build_object(grp, array_agg(name)) as objects FROM things GROUP BY grp; is maybe slightly closer since that results in a single column result of individual JSON objects like {y: [a]}, but they are still individual objects, so that might not be the right path to go down.
This is using Postgresql 9.4.

It seems the key here is the json_object_agg function, which is listed with the aggregate functions rather than with the rest of the JSON functions.
See: http://www.postgresql.org/docs/9.4/static/functions-aggregate.html
The following query gets me exactly what I'm looking for:
SELECT json_object_agg(each.grp, each.names)
FROM (
    SELECT grp, array_agg(name) AS names FROM things GROUP BY grp
) AS each;
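A small variation (just a sketch, not required for the result above) builds the inner arrays as JSON directly with json_agg instead of array_agg, so json_object_agg doesn't have to convert a Postgres array:
SELECT json_object_agg(each.grp, each.names) AS result
FROM (
    SELECT grp, json_agg(name) AS names
    FROM things
    GROUP BY grp
) AS each;
Either form should give the single object shown above; json_agg has been available since 9.3, so it works on 9.4 as well.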

Related

Postgres: How to return json type projected data

It might be a noob question, but I would like to know the capabilities of Postgres JSON output.
For the table below:
id | seconds | datetime
1 | 10 | 2020-08-21 08:42:58.26+08
2 | 20 | 2020-08-21 10:20:00.01+08
3 | 10 | 2020-08-22 08:00:00.10+08
Is this possible to output in json like so?
[{
    "date" : "2020-08-21",
    "seconds_1" : 10,
    "seconds_2" : 20
},
{
    "date" : "2020-08-22",
    "seconds_1" : 10
}]
I can manipulate the table result through PHP/JavaScript, but I'm just wondering if this is possible in Postgres.
This requires a multi-step aggregation:
select jsonb_agg(item)
from (
    select jsonb_build_object('date', dt) || jsonb_object_agg(concat('seconds_', rn), seconds) as item
    from (
        select datetime::date as dt,
               row_number() over (partition by datetime::date) as rn,
               seconds
        from the_table
    ) t
    group by dt
) r
The innermost query numbers the rows within each date. This can't be done at the same level as the grouping by date, because window functions are evaluated after grouping and the numbers would come out wrong. The second level aggregates all "seconds" values for each date and builds a JSON object from them. The last level then aggregates everything into a JSON array.
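One optional tweak (not part of the original answer): adding an ORDER BY to the window function makes the seconds_N numbering deterministic, e.g. following the time of day, instead of leaving it to chance:
select datetime::date as dt,
       row_number() over (partition by datetime::date order by datetime) as rn,
       seconds
from the_table;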
If you don't care about the numbers that make the "seconds" key unique, you can use the id column and simplify the query a bit:
select jsonb_agg(item)
from (
    select jsonb_build_object('date', datetime::date) || jsonb_object_agg(concat('seconds_', id), seconds) as item
    from the_table
    group by datetime::date
) r
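Against the three sample rows (ids 1 to 3), this simplified version should produce something along these lines, with the id showing up in the key names:
[{"date": "2020-08-21", "seconds_1": 10, "seconds_2": 20}, {"date": "2020-08-22", "seconds_3": 10}]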

Recursively running a MySQL function

I have a function in MySQL that needs to be run about 50 times (not a set value) in a query. The inputs are currently stored in an array such as
[1,2,3,4,5,6,7,8,9,10]
When executing the MySQL query individually, it works fine; please see below.
column_name denotes the column the data comes from; in this case it's a DOUBLE in the database.
The second argument of the MOD() function is the input I'm supplying to MySQL from the aforementioned array.
SELECT id, MOD(column_name, 4) AS mod_output
FROM table
HAVING mod_output > 10
To achieve the output I require, the following code works:
SELECT id, MOD(column_name, 4) AS mod_output1, MOD(column_name, 5) AS mod_output2, MOD(column_name, 6) AS mod_output3
FROM table
HAVING mod_output1 > 10 AND mod_output2 > 10 AND mod_output3 > 10
However, this is obviously extremely dirty, and with not 3 but over 50 inputs it becomes highly inefficient.
Apart from running over 50 individual queries, is there a better way to achieve the same sort of output (see below)?
In essence, I need to supply MySQL with a list of values and have it run MOD() over all of them on a specified column.
The only data I need returned is the ids of the rows whose MOD() output for the specified input (see the second argument of the MOD() function) is less than 10.
Please note that MOD() has been used as an example function; however, the final function required *should* be a drop-in replacement.
Example table layout:
id | column_name
1 | 0.234977
2 | 0.957739
3 | 2.499387
4 | 48.395777
5 | 9.943782
6 | -39.234894
7 | 23.49859
.....
(The title may be worded wrong; I'm not quite sure how else to explain what I'm trying to do here.)
Use a join with a derived table or temporary table:
SELECT n.n, t.id, MOD(t.column_name, n.n) AS mod_output
FROM `table` t
CROSS JOIN (
    SELECT 4 AS n UNION ALL SELECT 5 UNION ALL SELECT 6 . . .
) n
WHERE MOD(t.column_name, n.n) > 10;
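If, as in the hand-written version above, an id should only be returned when the condition holds for every value in the list, one way (a sketch; `table` stands for the question's placeholder table name, and the 3 must match the number of values in the derived table) is to group by id and count the matches:
SELECT t.id
FROM `table` t
CROSS JOIN (SELECT 4 AS n UNION ALL SELECT 5 UNION ALL SELECT 6) n
WHERE MOD(t.column_name, n.n) > 10
GROUP BY t.id
HAVING COUNT(*) = 3;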
If you want the results as columns, you can use conditional aggregation afterwards.
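For example, a conditional-aggregation version for the three sample inputs could look like this (a sketch; the mod_output_N aliases are illustrative names, not from the original):
SELECT t.id,
       MAX(CASE WHEN n.n = 4 THEN MOD(t.column_name, n.n) END) AS mod_output_4,
       MAX(CASE WHEN n.n = 5 THEN MOD(t.column_name, n.n) END) AS mod_output_5,
       MAX(CASE WHEN n.n = 6 THEN MOD(t.column_name, n.n) END) AS mod_output_6
FROM `table` t
CROSS JOIN (SELECT 4 AS n UNION ALL SELECT 5 UNION ALL SELECT 6) n
GROUP BY t.id;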

Get item from breadcrumb/tree path (Adjacency model)

I understand you can get a breadcrumb/tree path using a recursive CTE, but is it possible to select an item knowing the breadcrumb/tree path?
id| name | parent_id
--------------------
0 | a | null
1 | b | 0
2 | c | 1
3 | b | 2
For example, if the breadcrumb looked like this: a/b/c/b, how would I be able to return the row with id 3 knowing this information?
Postgres just rocks.
http://sqlfiddle.com/#!17/0a6f4/27
The idea is to build the textbook recursive query which returns the path of each element in the tree, along with a "level" which represents the number of nodes from the root. You can also call it "depth".
Then, we turn the path 'a/b/c/b' into an ARRAY['a','b','c','b']... therefore indexing this array on [level] gives the name of the node we're looking for at each level.
WITH RECURSIVE h(id, name, parent_id, level, path, search_path) AS (
    SELECT id,
           name,
           parent_id,
           1,
           ARRAY[name],
           ARRAY['a','b','c','b']
    FROM t
    WHERE parent_id IS NULL AND name = 'a'
  UNION ALL
    SELECT t.id,
           t.name,
           t.parent_id,
           level + 1,
           path || t.name,
           h.search_path
    FROM t JOIN h ON (t.parent_id = h.id)
    WHERE search_path[level + 1] = t.name
)
SELECT *, path = search_path AS match FROM h;
This returns the nodes from the requested path, in path order. I added a "match" column which becomes true when the requested row is found. If you only want that row, put the condition in the WHERE clause (see the variant below); if instead you want it to stop at the closest match and return it when the path is not found, take the last row.
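For example, to return only the row whose full path matches, the condition can go into the final SELECT (same CTE as above, only the last SELECT changes):
WITH RECURSIVE h(id, name, parent_id, level, path, search_path) AS (
    SELECT id, name, parent_id, 1, ARRAY[name], ARRAY['a','b','c','b']
    FROM t
    WHERE parent_id IS NULL AND name = 'a'
  UNION ALL
    SELECT t.id, t.name, t.parent_id, level + 1, path || t.name, h.search_path
    FROM t JOIN h ON (t.parent_id = h.id)
    WHERE search_path[level + 1] = t.name
)
SELECT id, name, parent_id
FROM h
WHERE path = search_path;   -- keep only the fully matched row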
Funnily enough, it should be possible to attempt this in MySQL by using session variables to carry the parent_id from one row to the next. MySQL has no arrays, though, so something like find_in_set() could work instead... it would be kind of a hack...

Using concat in where conditions, good or bad?

A simple quiz:
Probably many guys know this before,
In my app there is a query in which I'm using concat in the WHERE condition, like this:
v_book_id and v_genre_id are 2 variables in my procedure.
SELECT link_id
FROM link
WHERE concat(book_id,genre_id) = concat(v_book_id,v_genre_id);
Now, I know there is a catch/bug in this, which will occur only twice in your lifetime. Can you tell me what it is?
I found this out yesterday and thought I should make some noise about it for all the others practicing this.
Thanks.
Let's have a look
WHERE concat(book_id,genre_id) = concat(v_book_id,v_genre_id);
as opposed to
WHERE book_id = v_book_id AND genre_id = v_genre_id;
There. The second solution is:
- faster (optimal index usage)
- easier to write (less code)
- easier to read (what on earth was the author thinking to concatenate numbers???)
- more correct (as Alnitak also stated in the question's comments)
Check out this sample data:
book_id | genre_id
1 | 12
11 | 2
Now take v_book_id = 1 and v_genre_id = 12: concat(1, 12) and concat(11, 2) both produce '112', so the concat() query matches both rows and you get funny results.
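A quick way to see the collision (in MySQL, for instance; just an illustrative check, not part of the original answer):
SELECT CONCAT(1, 12) = CONCAT(11, 2) AS same_value;
-- returns 1, because both sides concatenate to '112'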
Note, some databases (including MySQL) allow operations on tuples, which may be what the clever author of the above really intended to do:
WHERE (book_id, genre_id) = (v_book_id, v_genre_id);
A working example of such a tuple predicate:
SELECT * FROM (
SELECT 1 x, 2 y FROM DUAL UNION ALL
SELECT 1 x, 3 y FROM DUAL UNION ALL
SELECT 1 x, 2 y FROM DUAL
) a
WHERE (x, y) = (1, 2)
Note, some databases will need extra parentheses around the right-hand side tuple: ((1, 2))
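For instance, in a database that insists on the doubled parentheses, the predicate from the example above would read (a sketch of the same query, per the note above):
SELECT * FROM (
  SELECT 1 x, 2 y FROM DUAL UNION ALL
  SELECT 1 x, 3 y FROM DUAL
) a
WHERE (x, y) = ((1, 2))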

MySQL multiple tables select

I've got a table, called for example "node", from which I need to return values as shown:
SELECT nid FROM node WHERE type = "book"
After that I get a list of values, let's say:
nid
-----
123
12451
562
536
Then I need to take these values and check another table for rows where the column 'path' has values like "node/123", "node/12451" (the numbers the previous query returned), all in one joined query. It would all be easier if the column 'path' had plain numbers, without the 'node/'.
And then I also need to count the number of identical paths, i.e. how many times 'node/123' was returned.
End result would look like:
nid | path | count(path) | count(distinct path)
123 |node/123| 412 | 123
562 |node/562| 123 | 56
It works fine if done in multiple separate queries, but that won't do.
select a.nid
from node a
join othertable b on b.path = concat("node/", a.nid)
where type = 'book'
You can probably do something like the following (nid may require additional conversion to some string type):
SELECT *
FROM OtherTable
JOIN node ON path = CONCAT('node/', nid)
WHERE type = 'book'
Thank you all for your help. Basically, the problem was that I didn't know how to put nid and 'node/' together, but CONCAT helped.
The end result looks something like this:
SELECT node.nid, accesslog.path, count(accesslog.hostname), count(distinct accesslog.hostname)
FROM `node`, `accesslog`
WHERE node.uid=1
AND node.type='raamat'
AND accesslog.path = CONCAT('node/', node.nid)
GROUP BY node.nid
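The same query with an explicit JOIN, in case that reads more clearly (a stylistic sketch only; the hits/unique_hosts aliases are illustrative names, not from the original):
SELECT node.nid,
       accesslog.path,
       COUNT(accesslog.hostname) AS hits,
       COUNT(DISTINCT accesslog.hostname) AS unique_hosts
FROM `node`
JOIN `accesslog` ON accesslog.path = CONCAT('node/', node.nid)
WHERE node.uid = 1
  AND node.type = 'raamat'
GROUP BY node.nid, accesslog.path;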