I have a model Logbook and a model LogbookEntries. Logbook hasMany LogbookEntries and LogbookEntries belongsTo Logbook (out of the scope of the question, though). In my LogbookEntries I have two fields (plus others): start_date and end_date. I want to show all LogbookEntries which have a follow-up date. Take the following entries as an example:
ENTRY 1
start_date: 01 Mar 19
end_date: 05 Mar 19
ENTRY 2
start_date: 06 Mar 19
end_date: 12 Mar 19
ENTRY 3
start_date: 19 Jun 19
end_date: 22 Jun 19
If I say "show all which have a follow-up date", then only Entry 3 will display. My issue is:
Logbook::whereHas('LogbookEntries', function ($q) {
    $q->where('start_date', <???.end_date + 1 day>);
});
This worked for me:
If I understand your question correctly, then I think the easiest way to achieve this would be with the following.
Note the use of whereRaw instead of where so that we can use raw MySQL code.
Logbook::whereHas('LogbookEntries', function ($q) {
    $q->whereRaw('date_format(date(end_date), "%Y-%m-%d") = date_format(date(start_date), "%Y-%m-%d") + interval 1 day');
});
I tested this on my system and it worked like a charm. However, my timestamps were in a full datetime format (Y-m-d H:i:s), so I needed the date_format to change them. You may not need this. Therefore, you may want to also try the following:
Logbook::whereHas('LogbookEntries', function ($q) {
    $q->whereRaw('date(end_date) = date(start_date) + interval 1 day');
});
This version is a lot tidier.
Basically what we are doing is getting entries where the end_date is the same as the start_date + 1 day. You were close but just not quite there.
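For reference, the same condition can also be written with MySQL's date_add instead of the implicit "+ interval" arithmetic. This is just an equivalent sketch, not something from the tested queries above:
Logbook::whereHas('LogbookEntries', function ($q) {
    // Equivalent to the whereRaw above, using date_add instead of "+ interval 1 day"
    $q->whereRaw('date(end_date) = date_add(date(start_date), interval 1 day)');
})->get();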
Here is the output from my system so you can see it working.
Without the whereRaw statement:
>>> Task::Select('start_date','end_date')->get();
=> Illuminate\Database\Eloquent\Collection {#3325
all: [
App\Task {#3307
start_date: "2018-12-20 08:00:00",
end_date: null,
},
App\Task {#3291
start_date: "2018-12-18 00:00:00",
end_date: "2018-12-19 00:00:00",
},
App\Task {#3318
start_date: "2018-12-19 00:00:00",
end_date: "2019-01-03 00:00:00",
},
App\Task {#3319
start_date: "2018-12-20 00:00:00",
end_date: "2018-12-21 00:00:00",
},
App\Task {#3310
start_date: "2018-12-20 00:00:00",
end_date: "2018-12-21 00:00:00",
},
App\Task {#3317
start_date: "2018-12-20 14:43:16",
end_date: "2018-12-21 14:43:16",
},
App\Task {#3316
start_date: "2018-12-20 14:45:27",
end_date: "2018-12-27 14:45:27",
},
App\Task {#3315
start_date: "2018-12-20 14:46:48",
end_date: "2018-12-24 14:46:48",
},
App\Task {#3313
start_date: "2018-12-21 09:25:24",
end_date: "2018-12-24 09:25:24",
},
App\Task {#3298
start_date: "2019-01-02 08:10:19",
end_date: "2019-01-16 08:10:19",
},
],
}
With the whereRaw statement:
>>> Task::Select('start_date','end_date')->whereraw('date_format(date(end_date),"%Y-%m-%d") = date_format(date(start_date),"%Y-%m-%d") + interval 1 day')->get();
=> Illuminate\Database\Eloquent\Collection {#3314
all: [
App\Task {#3312
start_date: "2018-12-18 00:00:00",
end_date: "2018-12-19 00:00:00",
},
App\Task {#3309
start_date: "2018-12-20 00:00:00",
end_date: "2018-12-21 00:00:00",
},
App\Task {#3320
start_date: "2018-12-20 00:00:00",
end_date: "2018-12-21 00:00:00",
},
App\Task {#3329
start_date: "2018-12-20 14:43:16",
end_date: "2018-12-21 14:43:16",
},
],
}
SELECT * FROM collection1 c1 WHERE c1.mobileNum NOT IN(SELECT mobileNumer FROM collection2) ORDER by c1.createdAt DESC
collection 1 :
=============
[{
name: 'abc',
mobileNum: 1234,
createdAt: DateTime
},{
name: 'efg',
mobileNum: 5678,
createdAt: DateTime
},
{
name: 'ijk',
mobileNum: 222222,
createdAt: DateTime
},
{
name: 'mno',
mobileNum: 33333,
createdAt: DateTime
}
]
collection 2 :
=============
[{
age: 24,
mobileNumer : 1234,
createdAt: DateTime
},{
age: 25,
mobileNumer : 0000,
createdAt: DateTime
},
{
age: 25,
mobileNumer : 1111,
createdAt: DateTime
}]
First, I have the MySQL query above. Second, I have the MongoDB collections collection1 and collection2. I need to convert the MySQL statement into an equivalent MongoDB aggregate query. Can someone help?
$lookup - Join collection 1 (mobileNum) with collection 2 (mobileNumer).
$match - Keep documents where matchedDocs is an empty array ($size: 0).
$sort - Sort by createdAt descending.
$unset - Remove matchedDocs field.
db.col1.aggregate([
{
"$lookup": {
"from": "col2",
"localField": "mobileNum",
"foreignField": "mobileNumer",
"as": "matchedDocs"
}
},
{
$match: {
"matchedDocs": {
$size: 0
}
}
},
{
$sort: {
createdAt: -1
}
},
{
"$unset": "matchedDocs"
}
])
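Note that $unset as an aggregation pipeline stage requires MongoDB 4.2 or newer; on older versions the matchedDocs field can be dropped with a $project stage instead.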
Sample Mongo Playground
I've got four tables in a PostgreSQL db.
user, which holds information about a logged-in user.
project, which holds information about created projects.
userprojects, a join table between users and projects (one user can belong to many projects and one project can have many users).
timesheet, which is where users log their hours - it references user_id and project_id, and users log their time and date in the duration and date columns.
The timesheet table itself stores data as such:
id, user_id, date, duration, project_id
1, 1, "2019-02-01", 8, 1
2, 1, "2019-02-02", 8, 1
3, 2, "2019-02-01", 10, 1
I wish to find a nice way of returning the sum of values for each month from the timesheet table for easy frontend parsing and loading that data into a chart.
What I'm looking for is something along the lines of:
{
"users": [
{
"user_id": "1",
"projects": [
{
"project_id": 1,
"sum": [
{
"august": 18
},
{
"september": 20
}
]
},
{
"project_id": 2,
"sum": [
{
"august": 25
},
{
"september": 10
}
]
}
]
},
{
"user_id": "2",
"projects": [
{
"project_id": 2,
"sum": [
{
"august": 40
},
{
"september": 100
}
]
},
{
"project_id": 3,
"sum": [
{
"august": 30
},
{
"september": 25
}
]
}
]
},
]
}
I've found a neat query which kinda structures the data a bit, but still not ideally:
SELECT
project.name,
to_char(date_trunc('month', date), 'YYYY') AS year,
to_char(date_trunc('month', date), 'Mon') AS month,
to_char(date_trunc('month', date), 'MM') AS month_number,
sum(duration) AS monthly_sum
FROM timesheet INNER JOIN project ON timesheet.project_id = project.id
GROUP BY year, month, month_number, project.name
This query simply returns a table that looks something like:
name       | year | month | month_number | monthly_sum
Project XX | 2019 | Aug   | 08           | 10
Project YY | 2019 | Aug   | 08           | 30
Project YY | 2019 | Sep   | 09           | 20
How would you guys go around formatting the timesheet table so I can easily display the summed value on a month by month basis?
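One possible direction - a sketch only, assuming the timesheet columns shown above and lowercase month names as keys like in the desired output, and not something from the original discussion - is to let Postgres build the nested JSON itself with json_build_object and json_agg:
-- Sketch only: assumes timesheet(user_id, project_id, date, duration) as shown above.
SELECT json_build_object('users', json_agg(user_obj)) AS result
FROM (
    SELECT json_build_object(
               'user_id', user_id,
               'projects', json_agg(project_obj)
           ) AS user_obj
    FROM (
        SELECT user_id,
               json_build_object(
                   'project_id', project_id,
                   'sum', json_agg(json_build_object(month, monthly_sum))
               ) AS project_obj
        FROM (
            -- Monthly totals per user and project
            SELECT user_id,
                   project_id,
                   lower(to_char(date_trunc('month', date), 'FMMonth')) AS month,
                   sum(duration) AS monthly_sum
            FROM timesheet
            GROUP BY user_id, project_id, date_trunc('month', date)
        ) monthly
        GROUP BY user_id, project_id
    ) per_project
    GROUP BY user_id
) per_user;
Each level of nesting mirrors one level of the desired JSON: monthly sums per project, projects per user, and users in the top-level array.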
I have the following huge object from Angular 7. I need to post this object to a Spring Boot app, which means that from the Spring Boot controller I need to save this data into the database. How should I do this? I have no idea at all. Please help me.
periodrw = [
[
{keyvalue:1, period: 1, day: null , subject :null},
{keyvalue:2, period: 1, day: "Monday" , subject :null},
{keyvalue:3, period: 1, day: "Tuesday" , subject :null},
{keyvalue:4, period: 1, day: "Wednesday" , subject :null},
{keyvalue:5, period: 1, day: "Thursday", subject :null },
{keyvalue:6, period: 1, day: "Friday" , subject :null},
],
[
{keyvalue:1, period: 2, day: null , subject :null},
{keyvalue:2, period: 2, day: "Monday" , subject :null},
{keyvalue:3, period: 2, day: "Tuesday" , subject :null},
{keyvalue:4, period: 2, day: "Wednesday" , subject :null},
{keyvalue:5, period: 2, day: "Thursday" , subject :null},
{keyvalue:6, period: 2, day: "Friday" , subject :null},
]
]
Send the data as JSON from Angular and build a representing Java class with the persistence annotations from javax.persistence. Deserialize the JSON, map it to the class, and at the end use the CrudRepository to save it.
Here is a whole tutorial: https://www.springboottutorial.com/spring-boot-crud-rest-service-with-jpa-hibernate
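A minimal sketch of what that could look like - the class, repository, endpoint and field names here are assumptions for illustration, not from the original answer; the entity simply mirrors the keyvalue/period/day/subject fields of the posted object:
// Period.java - sketch only; all names are assumed
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class Period {
    @Id
    @GeneratedValue
    private Long id;

    private Integer keyvalue;
    private Integer period;
    private String day;
    private String subject;

    // getters and setters omitted for brevity
}

// PeriodRepository.java
import org.springframework.data.repository.CrudRepository;

public interface PeriodRepository extends CrudRepository<Period, Long> {
}

// PeriodController.java
import java.util.List;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class PeriodController {

    private final PeriodRepository repository;

    public PeriodController(PeriodRepository repository) {
        this.repository = repository;
    }

    // The Angular side posts periodrw as a nested array, so List<List<Period>> mirrors that shape.
    @PostMapping("/periods")
    public void save(@RequestBody List<List<Period>> periodrw) {
        periodrw.forEach(repository::saveAll); // persist each row of the timetable
    }
}
On the Angular side, posting periodrw to that endpoint with HttpClient is then enough for Jackson to deserialize it into the nested list.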
I'm trying to calculate the sum of some JSON values in PL/pgSQL (Postgres v9.5.5) but am stuck on the logic.
For this data set:
{
clientOrderId: 'OR836374647',
status: 'PENDING',
clientId: '583b52ede4b1a3668ba0dfff',
sharerId: '583b249417329b5b737ad3ee',
buyerId: 'abcd12345678',
buyerEmail: 'test#test.com',
lineItems: [{
name: faker.commerce.productName(),
description: faker.commerce.department(),
category: 'test',
sku: faker.random.alphaNumeric(),
quantity: 3
price: 40
status: 'PENDING'
}, {
name: faker.commerce.productName(),
description: faker.commerce.department(),
category: 'test',
sku: faker.random.alphaNumeric(),
quantity: 2,
price: 30,
status: 'PENDING'
}
I am trying to get the subtotal of all the lineItems for each row (i.e. quantity * price for each line item, then the sum of these values for the row). So for the above example, the returned value should be 180.
I got this far, but this is returning the totals for all lineItems in the table, not grouped by row.
WITH line_items AS (SELECT jsonb_array_elements(line_items) as line_items FROM public.order),
line_item_totals AS (SELECT line_items->>'quantity' AS quantity, line_items->>'price' AS price FROM line_items)
SELECT (quantity::int * price::numeric) AS sub_total FROM line_item_totals;
I'm sure the fix is simple but I'm not sure how to do this with JSON fields.
Please always include the Postgres version you are using. It also looks like your JSON is incorrect. Below is an example of how you can accomplish this with the json type and a valid JSON document.
with t(v) as ( VALUES
('{
"clientOrderId": "OR836374647",
"status": "PENDING",
"clientId": "583b52ede4b1a3668ba0dfff",
"sharerId": "583b249417329b5b737ad3ee",
"buyerId": "abcd12345678",
"buyerEmail": "test#test.com",
"lineItems": [{
"name": "name1",
"description": "desc1",
"category": "test",
"sku": "sku1",
"quantity": 3,
"price": 40,
"status": "PENDING"
},
{
"name": "name2",
"description": "desc2",
"category": "test",
"sku": "sku2",
"quantity": 2,
"price": 30,
"status": "PENDING"
}]
}'::JSON)
)
SELECT
v->>'clientOrderId' cId,
sum((item->>'price')::INTEGER * (item->>'quantity')::INTEGER) subtotal
FROM
t,
json_array_elements(v->'lineItems') item
GROUP BY cId;
Result:
cid | subtotal
-------------+----------
OR836374647 | 180
(1 row)
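The example above runs against an inline VALUES row. Applied to the real table from the question - assuming line_items is a jsonb column on public.order and id is the row's primary key; this is a sketch, not part of the original answer - the same pattern grouped per row would look roughly like this:
SELECT o.id,
       sum((item->>'quantity')::int * (item->>'price')::numeric) AS sub_total
FROM public.order o,
     jsonb_array_elements(o.line_items) AS item  -- one item row per line item, grouped back per order
GROUP BY o.id;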
Suppose I have the following three records in my model:
#<Rda:0xf6e8a0c
id: 1,
age_group: "18-100",
weight: "60",
nutrient: "energy(kcal/day)",
value: "2730",
created_at: Sat, 15 Oct 2016 08:21:43 UTC +00:00,
updated_at: Sat, 15 Oct 2016 08:21:43 UTC +00:00>
#<Rda:0xf6e8a0c
id: 2,
age_group: "10-15",
weight: "60",
nutrient: "energy(kcal/day)",
value: "2730",
created_at: Sat, 15 Oct 2016 08:21:43 UTC +00:00,
updated_at: Sat, 15 Oct 2016 08:21:43 UTC +00:00>
#<Rda:0xf6e8a0c
id: 3,
age_group: "20-100",
weight: "60",
nutrient: "energy(kcal/day)",
value: "2730",
created_at: Sat, 15 Oct 2016 08:21:43 UTC +00:00,
updated_at: Sat, 15 Oct 2016 08:21:43 UTC +00:00>
Now, I want to get all those records in which my given value falls within the range in the age_group column. For example: suppose my age is 25; then I should get the records with ids 1 and 3 from the above records, because 25 falls between 18-100 and 20-100.
You might do
def self.foo(age)
  all.select { |rda| Range.new(*rda.age_group.split('-').map(&:to_i)).cover? age }
end
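With the sample records above, Rda.foo(25) (assuming the method is defined on the Rda model) would return the records with ids 1 and 3: each age_group string is split into its bounds, turned into a Range, and kept if Range#cover? includes the given age. Note that this loads the records and filters them in Ruby rather than in SQL.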