Postgres PLpgSQL JSON SUM

I'm trying to calculate the sum of some JSON values in PLpgSQL (Postgres v9.5.5) but am stuck on the logic.
For this data set:
{
    clientOrderId: 'OR836374647',
    status: 'PENDING',
    clientId: '583b52ede4b1a3668ba0dfff',
    sharerId: '583b249417329b5b737ad3ee',
    buyerId: 'abcd12345678',
    buyerEmail: 'test#test.com',
    lineItems: [{
        name: faker.commerce.productName(),
        description: faker.commerce.department(),
        category: 'test',
        sku: faker.random.alphaNumeric(),
        quantity: 3
        price: 40
        status: 'PENDING'
    }, {
        name: faker.commerce.productName(),
        description: faker.commerce.department(),
        category: 'test',
        sku: faker.random.alphaNumeric(),
        quantity: 2,
        price: 30,
        status: 'PENDING'
    }
I am trying to get the subtotal of all the lineItems for each row (i.e. quantity * price for each line item, then the sum of these values for the row). So for the above example, the returned value should be 180.
I got this far, but this is returning the totals for all lineItems in the table, not grouped by row.
WITH line_items AS (SELECT jsonb_array_elements(line_items) as line_items FROM public.order),
line_item_totals AS (SELECT line_items->>'quantity' AS quantity, line_items->>'price' AS price FROM line_items)
SELECT (quantity::int * price::numeric) AS sub_total FROM line_item_totals;
I'm sure the fix is simple but I'm not sure how to do this with JSON fields.

Please always include the Postgres version you are using. It also looks like your JSON is incorrect. Below is an example of how you can accomplish this with the json type and a valid JSON document.
with t(v) as ( VALUES
    ('{
        "clientOrderId": "OR836374647",
        "status": "PENDING",
        "clientId": "583b52ede4b1a3668ba0dfff",
        "sharerId": "583b249417329b5b737ad3ee",
        "buyerId": "abcd12345678",
        "buyerEmail": "test#test.com",
        "lineItems": [{
            "name": "name1",
            "description": "desc1",
            "category": "test",
            "sku": "sku1",
            "quantity": 3,
            "price": 40,
            "status": "PENDING"
        },
        {
            "name": "name2",
            "description": "desc2",
            "category": "test",
            "sku": "sku2",
            "quantity": 2,
            "price": 30,
            "status": "PENDING"
        }]
    }'::JSON)
)
SELECT
    v->>'clientOrderId' cId,
    sum((item->>'price')::INTEGER * (item->>'quantity')::INTEGER) subtotal
FROM
    t,
    json_array_elements(v->'lineItems') item
GROUP BY cId;
Result:
cid | subtotal
-------------+----------
OR836374647 | 180
(1 row)
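Applied to the table from the question instead of a literal, the same approach only needs a GROUP BY on whatever identifies a row. A sketch, assuming public.order has a primary key column named id and that line_items is jsonb (swap in json_array_elements if the column is json):
-- sketch only: "id" is an assumed key; "order" must be quoted because it is a reserved word
SELECT o.id,
       sum((item->>'quantity')::int * (item->>'price')::numeric) AS sub_total
FROM public."order" o,
     jsonb_array_elements(o.line_items) AS item
GROUP BY o.id;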

Related

Laravel calculate sum of two columns with a condition

I have this Warehouse collection I got from the database
[
    {
        "id": 1,
        "warehouse": "India",
        "sales": [
            {
                "id": 1,
                "warehouse_id": 1,
                "price": "120.00",
                "quantity": 1000,
                "status": 1
            },
            {
                "id": 2,
                "warehouse_id": 1,
                "price": "20.00",
                "quantity": 100,
                "status": 1
            },
            {
                "id": 3,
                "warehouse_id": 1,
                "price": "40.00",
                "quantity": 1000,
                "status": 2
            }
        ]
    },
    {
        "id": 2,
        "warehouse": "Malaysia",
        "sales": [
            {
                "id": 4,
                "warehouse_id": 2,
                "price": "160.00",
                "quantity": 100,
                "status": 1
            }
        ]
    }
]
I want to calculate the total income for each warehouse
Total income is calculated based on the sale status attribute
If status = 1, the products are delivered so it should add price * quantity to the total income
If status = 2, the products are returned so it should subtract price * quantity from the total income
A basic example for India warehouse:
total_income = 120*1000 + 20*100 - 40*1000 = 82000
And for Malaysia:
total_income = 160*100 = 16000
I tried using Warehouse::withSum(); but it didn't get me anywhere.
I'm wondering if there's a good way to do this with collections.
You could map the collection and pass a callback to the nested sales collection's sum() method:
$warehouses_collection->map(function ($warehouse) {
    return (object) [
        'id' => $warehouse->id,
        'warehouse' => $warehouse->warehouse,
        // price × quantity per sale, negated when the sale was returned (status != 1)
        'total_income' => collect($warehouse->sales)->sum(function ($sale) {
            return ((int) $sale->price) * $sale->quantity * ($sale->status == 1 ? 1 : -1);
        })
    ];
});
withSum() is a bit tricky to use here, but calling withAggregate() works.
Warehouse::withAggregate(
    'sales as total_income',
    'sum(case when status = 1 then price * quantity when status = 2 then price * quantity * -1 else 0 end)'
)->get()
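For reference, the aggregate above roughly corresponds to a correlated subquery in plain SQL. A sketch, assuming the conventional Laravel table names warehouses and sales:
-- sketch of the roughly equivalent raw query; the table names are assumptions
SELECT warehouses.*,
       (SELECT sum(CASE WHEN status = 1 THEN price * quantity
                        WHEN status = 2 THEN price * quantity * -1
                        ELSE 0 END)
        FROM sales
        WHERE sales.warehouse_id = warehouses.id) AS total_income
FROM warehouses;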
I honestly would go the route below:
Obviously my solution would have to be modified slightly if you aren't using whole numbers (as in your example). The if/else can be broadened out as well if you have more than the 2 statuses.
$total_income = 0;
foreach ($warehouses as $warehouse)
{
    foreach ($warehouse->sales as $sale)
    {
        if ($sale->status == 1)
        {
            // delivered: add to the total
            $total_income += ($sale->price * $sale->quantity);
        } else {
            // returned: subtract from the total
            $total_income -= ($sale->price * $sale->quantity);
        }
    }
}
This is a crude example of how I would do it. It seems that each 'warehouse' has a different location (e.g. India vs. Malaysia). My example is more about the grand total, but you could always save the results for each warehouse in separate variables, or as key/value pairs in an array (which is how I would go).

how to implement sub select with where condition in sequelize

I have these tables:
products
stores
productProperties
with this structure
[
"products" :
{
"id": 1,
"orginalName": "146153-0100 ",
"title": null,
"stores": [
{
"id": 1,
"stock": 100,
"minOQ": 1,
"maxOQ": 0
},
{
"id": 2,
"stock": 100,
"minOQ": 1,
"maxOQ": 0,
}
],
"productproperties": [
{
"id": 1,
"productId": 1,
"propertyId": 8,
"propertyOptionId": 5
},
{
"id": 2,
"productId": 1,
"propertyId": 9,
"propertyOptionId": 11
},
{
"id": 3,
"productId": 1,
"propertyId": 10,
"propertyOptionId": 9
}
]
}
]
I want to filter my products by selected options. Suppose the selected options are 11 and 9.
How would I implement the SQL query below in Sequelize 5.6 with findAll, where and so on:
select * from products as p
inner join stores as sr on sr.productId = p.id
where (select count(*) from productProperties where propertyOptionId in (11,9) and productId = p.id) >= 2
I've found that using the query builder in Sequelize is really confusing, so if you're comfortable with raw SQL you can just run it as below.
For example, if Student is your model:
const [students] = await Student.sequelize.query('Select * from students');
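Whether you go through findAll or a raw query, it can also help to rewrite the correlated subselect as an uncorrelated IN filter, which some query builders handle more easily. A sketch in plain SQL, using the column names from the question; the result is equivalent to the original query:
SELECT *
FROM products AS p
INNER JOIN stores AS sr ON sr.productId = p.id
WHERE p.id IN (
    SELECT productId
    FROM productProperties
    WHERE propertyOptionId IN (11, 9)
    GROUP BY productId
    HAVING count(*) >= 2
);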

Flatten nested JSON structure in PostgreSQL

I'm trying to write a Postgres query that will output my json data in a particular format.
JSON data structure
{
user_id: 123,
data: {
skills: {
"skill_1": {
"title": "skill_1",
"rating": 4,
"description": 'description text'
},
"skill_2": {
"title": "skill_2",
"rating": 2,
"description": 'description text'
},
"skill_3": {
"title": "skill_3",
"rating": 5,
"description": 'description text'
},
...
}
}
}
This is how I need the data to be formatted in the end:
[
{
user_id: 123,
skill_1: 4,
skill_2: 2,
skill_3: 5,
...
},
{
user_id: 456,
skill_1: 1,
skill_2: 3,
skill_3: 4,
...
}
]
So far I'm working with a query that looks like this:
SELECT
user_id,
data#>>'{skills, "skill_1", rating}' AS "skill_1",
data#>>'{skills, "skill_2", rating}' AS "skill_2",
data#>>'{skills, "skill_3", rating}' AS "skill_3"
FROM some_table
There has to be a better way to go about writing my query. There are 400+ rows and 70+ skills. My above query is a little crazy. Any guidance or help would be greatly appreciated.
Some things to note:
Users rated themselves on 70+ skills
Each skill object has the same structure
Each user rated themselves on the exact same set of skills
db<>fiddle
I expanded your test data to the following (note the array around all users):
[{
"user_id": 123,
"data": {
"skills": {
"skill_1": {
"title": "skill_1",
"rating": 4,
"description": "description text"
},
"skill_2": {
"title": "skill_2",
"rating": 2,
"description": "description text"
},
"skill_3": {
"title": "skill_3",
"rating": 5,
"description": "description text"
}
}
}
},
{
"user_id": 456,
"data": {
"skills": {
"skill_1": {
"title": "skill_1",
"rating": 1,
"description": "description text"
},
"skill_2": {
"title": "skill_2",
"rating": 3,
"description": "description text"
},
"skill_3": {
"title": "skill_3",
"rating": 4,
"description": "description text"
}
}
}
}]
The query:
SELECT
    jsonb_pretty(jsonb_agg(user_id || skills))  -- E
FROM (
    SELECT
        json_build_object('user_id', user_id)::jsonb as user_id,  -- D
        json_object_agg(skill_title, skills -> skill_title -> 'rating')::jsonb as skills
    FROM (
        SELECT
            user_id,
            json_object_keys(skills) as skill_title,  -- C
            skills
        FROM (
            SELECT
                (datasets -> 'user_id')::text as user_id,
                datasets -> 'data' -> 'skills' as skills  -- B
            FROM (
                SELECT
                    json_array_elements(json) as datasets  -- A
                FROM (
                    SELECT '/* the JSON data; see db<>fiddle */'::json
                )s
            )s
        )s
    )s
    GROUP BY user_id
    ORDER BY user_id
)s
A Make each array element ({user_id: '42', data: {...}}) its own row
B The first column saves the user_id. The cast to text is necessary for the later GROUP BY, which cannot group json values. The second column extracts the user's skills data
C Extract the skill titles to use them as keys in (D.1).
D.1 skills -> skill_title -> 'rating' extracts the rating value from each skill
D.2 json_object_agg aggregates the skill titles and each corresponding rating value into one JSON object, grouped by the user_id
D.3 json_build_object makes the user_id a JSON object again
E.1 user_id || skills merges the two JSON objects into one
E.2 jsonb_agg aggregates these JSON objects into an array
E.3 jsonb_pretty makes the result look pretty.
Result:
[{
"skill_1": 4,
"skill_2": 2,
"skill_3": 5,
"user_id": "123"
},
{
"skill_1": 1,
"skill_2": 3,
"skill_3": 4,
"user_id": "456"
}]
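If you would rather not list every skill key by hand, jsonb_each can walk the skills object directly. A sketch against a table shaped like the one in the question, assuming columns some_table(user_id, data) with data stored as jsonb (cast data::jsonb first if the column is json):
-- sketch only: builds one object per user, then aggregates all users into one array
SELECT jsonb_agg(jsonb_build_object('user_id', user_id) || skills)
FROM (
    SELECT t.user_id,
           jsonb_object_agg(s.key, s.value -> 'rating') AS skills
    FROM some_table t,
         jsonb_each(t.data -> 'skills') AS s(key, value)
    GROUP BY t.user_id
) per_user;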

For JSON results

Sorry for how basic this question is, I just cannot wrap my head around this one.
I need the output from SQL Server to look like this.
In a little more human readable format:
var data = [
{
name: '2017', id: -1,
children: [
{ name: '01-2017', id: 11 },
{ name: '02-2017', id: 12 },
{ name: '03-2017', id: 13 },
{ name: '04-2017', id: 14 },
{ name: '05-2017', id: 15 },
]
},
{
name: '2018', id: -1,
children: [
{ name: '01-2018', id: 6 },
{ name: '02-2018', id: 7 },
{ name: '03-2018', id: 8 },
{ name: '04-2018', id: 9 },
{ name: '05-2018', id: 10 },
]
}
];
This is a snapshot of the data:
The group I will be working with is userid = 1.
My first thought was to use a cursor to loop through all the distinct reportYear for userid = 1, then a select based on the year and the userid to fill in the sub-query.
There has to be a way without using a cursor.
You can achieve the desired output by joining your table to a query that extracts all the years to be used as the top-level elements, and then generating the JSON using FOR JSON AUTO:
declare @tmp table (monthlyReportID int, userID int, reportMonth int, reportYear int)
insert into @tmp values
( 6, 1, 1, 2018),
( 7, 1, 2, 2018),
( 8, 1, 3, 2018),
( 9, 1, 4, 2018),
(10, 1, 5, 2018),
(11, 1, 1, 2017),
(12, 1, 2, 2017),
(13, 1, 3, 2017),
(14, 1, 4, 2017),
(15, 1, 5, 2017)
select years.[name], children.[name], children.[id] from
(
select distinct reportYear as [name] from @tmp
) as years
left join
(
select monthlyReportID as [id]
,right('0' + cast(reportMonth as varchar(2)),2) + '-' + cast(reportYear as varchar(4)) as [name]
,reportYear as [year]
from @tmp
) as children
on children.[Year] = years.[name]
for json auto
I omitted the ID field because in your desired output it is always set to -1 and I was not able to understand the logic behind it.
Nonetheless you should be able to easily edit the script above to obtain the value you need.
Here are the results:
[
{
"name": 2017,
"children": [
{"name": "01-2017", "id": 11},
{"name": "02-2017", "id": 12},
{"name": "03-2017", "id": 13},
{"name": "04-2017", "id": 14},
{"name": "05-2017", "id": 15}
]
},
{
"name": 2018,
"children": [
{"name": "01-2018", "id": 6},
{"name": "02-2018", "id": 7},
{"name": "03-2018", "id": 8},
{"name": "04-2018", "id": 9},
{"name": "05-2018", "id": 10}
]
}
]
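As for the omitted id field mentioned above: one way to get the constant -1 back (a sketch, not part of the original answer) is to add it to the years subquery and include it in the outer select list:
select years.[name], years.[id], children.[name], children.[id] from
(
    -- assumed tweak: emit a constant id for every year-level element
    select distinct reportYear as [name], -1 as [id] from @tmp
) as years
left join
(
    select monthlyReportID as [id]
          ,right('0' + cast(reportMonth as varchar(2)),2) + '-' + cast(reportYear as varchar(4)) as [name]
          ,reportYear as [year]
    from @tmp
) as children
on children.[year] = years.[name]
for json auto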

Get aggregate sum of json array in Postgres NOSQL json data

How to get aggregate SUM(amount) from "refunds" array in postgres json select
Following is my data schema and structure:
Table Name: transactions
Column name: data
{
"id": "tran_6ac25129951962e99f28fa488993",
"amount": 1200,
"origin_amount": 3900,
"status": "partial_refunded",
"description": "Subscription#sub_a67d59efb2bcbf73485a ",
"livemode": false,
"refunds": [
{
"id": "refund_ee4192ffb6d2caa490a1",
"amount": 1200,
"status": "refunded",
"created_at": 1426412340,
"updated_at": 1426412340,
},
{
"id": "refund_0e4a34e4ee7281d369df",
"amount": 1500,
"status": "refunded",
"created_at": 1426412353,
"updated_at": 1426412353,
}
]
}
Output should be: 1200 + 1500 = 2700
Output:
 total
-------
  2700
Please provide a generic solution, not one based on static data.
This should work on 9.3+
WITH x AS( SELECT
'{
"id": "tran_6ac25129951962e99f28fa488993",
"amount": 1200,
"origin_amount": 3900,
"status": "partial_refunded",
"description": "Subscription#sub_a67d59efb2bcbf73485a ",
"livemode": false,
"refunds": [
{
"id": "refund_ee4192ffb6d2caa490a1",
"amount": 1200,
"status": "refunded",
"created_at": 1426412340,
"updated_at": 1426412340
},
{
"id": "refund_0e4a34e4ee7281d369df",
"amount": 1500,
"status": "refunded",
"created_at": 1426412353,
"updated_at": 1426412353
}
]
}'::json as y),
refunds AS(
SELECT json_array_elements(y->'refunds') as j FROM x)
SELECT sum((j->>'amount')::int) FROM refunds;
WITH AllRefunds AS ( SELECT jsonb_array_elements(data->'refunds') AS refund FROM transactions)
SELECT SUM( CAST ( refund ->> 'amount' AS INTEGER )) FROM AllRefunds;
If you need to know how the query is built:
1.
WITH AllRefunds AS ( SELECT jsonb_array_elements(data->'refunds') FROM transactions)
SELECT * FROM AllRefunds;
This selects all elements of the refunds array (reached via ->) found in the transactions table as JSON objects and stores them in AllRefunds. AllRefunds consists of just one column without a useful name.
2.
WITH AllRefunds AS ( SELECT jsonb_array_elements(data->'refunds') AS refund FROM transactions)
SELECT * FROM AllRefunds;
Here the added (second) AS renames the currently unnamed column inside AllRefunds to refund
3.
WITH AllRefunds AS ( SELECT jsonb_array_elements(data->'refunds') AS refund FROM transactions)
SELECT SUM( CAST ( refund ->> 'amount' AS INTEGER )) FROM AllRefunds;
Our array entries are JSON objects, so we extract the field amount as a plain string with ->>, cast it to integer, and SUM up all entries.
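The same pattern also works per transaction if you ever need the refunded amount for each row rather than a grand total. A sketch, grouping on the JSON id field (an assumption; group by the table's primary key instead if it has one), and assuming data is jsonb as in the query above:
SELECT data ->> 'id' AS transaction_id,
       SUM( CAST ( refund ->> 'amount' AS INTEGER )) AS total_refunded
FROM transactions,
     jsonb_array_elements(data -> 'refunds') AS refund
GROUP BY data ->> 'id';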