MySQL query to match dates and times placed in separate columns

I have two tables
------------------------
| Vehicles             |
------------------------
| id                   |
| name                 |
| available_from_date  |
| available_from_time  |
| available_to_date    |
| available_to_time    |
------------------------

------------------------
| Reserved_Vehicles    |
------------------------
| id                   |
| vehicle_id           |
| reserved_from_date   |
| reserved_from_time   |
| reserved_to_date     |
| reserved_to_time     |
------------------------
I want to query the Vehicles table so that I get only those vehicles that meet the availability date and time and are not already reserved for that period.
For example, I want to search for vehicles that are available FROM date 2012-07-27 & time 10:00 TO date 2012-08-15 & time 14:00.
How can I solve this with one query?
Thanks in advance. :)

It sounds like you could just use AND in your WHERE clause. Is that not working?
Do you need to query both tables? Or can you safely assume that if a car is reserved at a given time then it's not available, and if it's available then it's not reserved?
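In case it helps, here is a minimal sketch of one way to do it, assuming the date columns are DATE and the time columns are TIME, so MySQL's two-argument TIMESTAMP() can combine them. A vehicle qualifies when its availability window contains the requested window and no reservation overlaps it (the NOT EXISTS test is the usual "each interval starts before the other ends" overlap check):

-- requested window: 2012-07-27 10:00 to 2012-08-15 14:00
SET @from := TIMESTAMP('2012-07-27', '10:00:00');
SET @to   := TIMESTAMP('2012-08-15', '14:00:00');

SELECT v.*
  FROM Vehicles v
 WHERE TIMESTAMP(v.available_from_date, v.available_from_time) <= @from
   AND TIMESTAMP(v.available_to_date,   v.available_to_time)   >= @to
   AND NOT EXISTS (
         SELECT 1
           FROM Reserved_Vehicles r
          WHERE r.vehicle_id = v.id
            -- overlap: the reservation starts before the requested window ends
            -- and ends after the requested window starts
            AND TIMESTAMP(r.reserved_from_date, r.reserved_from_time) < @to
            AND TIMESTAMP(r.reserved_to_date,   r.reserved_to_time)   > @from
       );

Note that wrapping the columns in TIMESTAMP() prevents index use on them; storing a single DATETIME per boundary would let these comparisons use indexes.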

Related

Need an aggregate MySQL select that iterates virtually across date ranges and returns bills

I have a MySQL table named rbsess with columns RBSessID (key), ClientID (int), RBUnitID (int), RentAmt (fixed-point int), RBSessStart (DateTime), and PrevID (int, references RBSessID).
It's not transactional or linked. What it does is track when a client was moved into a room and what the rent was at the time of move-in. The query to find what the rent was for a particular client on a particular date is:
SET @DT = 'Desired date/time';
SET @ClientID = Desired client id;
SELECT a.RBSessID
     , a.ClientID
     , a.RBUnitID
     , a.RentAmt
     , a.RBSessStart
     , b.RBSessStart AS RBSessEnd
     , a.PrevID
  FROM rbsess a
  LEFT
  JOIN rbsess b
    ON b.PrevID = a.RBSessID
 WHERE a.ClientID = @ClientID
   AND (a.RBSessStart <= @DT OR a.RBSessStart IS NULL)
   AND (b.RBSessStart > @DT OR b.RBSessStart IS NULL);
This will output something like:
+----------+----------+----------+---------+---------------------+-----------+--------+
| RBSessID | ClientID | RBUnitID | RentAmt | RBSessStart         | RBSessEnd | PrevID |
+----------+----------+----------+---------+---------------------+-----------+--------+
|        2 |        4 |        1 |   57500 | 2020-11-22 00:00:00 | NULL      |      1 |
+----------+----------+----------+---------+---------------------+-----------+--------+
I also have
SELECT * FROM rbsess WHERE rbsess.ClientID=@ClientID AND rbsess.PrevID IS NULL; -- finds the first move-in date
SELECT TIMESTAMPDIFF(DAY,@DT,LAST_DAY(@DT)) AS CountDays; -- number of days until the end of the month
SELECT DAY(LAST_DAY(@DT)) AS MaxDays; -- number of days in the month
SELECT (TIMESTAMPDIFF(DAY,@DT,LAST_DAY(@DT))+1)/DAY(LAST_DAY(@DT)) AS ProRateRatio; -- ratio used to pro-rate rent for the move-in month
SELECT ROUND(40000*(SELECT (TIMESTAMPDIFF(DAY,@DT,LAST_DAY(@DT))+1)/DAY(LAST_DAY(@DT)) AS ProRateRatio)) AS ProRatedRent; -- pro-rated rent amount based on a rent amount
I'm having trouble putting all of these together into a single query that, given a start date and an optional end date, outputs the pro-rated and full rent amounts owed for each month in the period. I can add a table of payments received and integrate it afterwards; I'm just having a hard time with this seemingly simple real-world concept in a MySQL query. I'm using PHP with a MySQL back end. Temporary tables as intermediary queries are more than acceptable.
Even a nudge would be helpful. I'm not super-experienced with MySQL queries, just your basic CREATE, SELECT, INSERT, DROP, and UPDATE.
Examples as requested by GMB:
Example data in the rbsess table:
+----------+----------+----------+---------+---------------------+--------+
| RBSessID | ClientID | RBUnitID | RentAmt | RBSessStart         | PrevID |
+----------+----------+----------+---------+---------------------+--------+
|        1 |        4 |        1 |   40000 | 2020-10-22 00:00:00 | NULL   |
|        2 |        4 |        1 |   57500 | 2020-11-22 00:00:00 |      1 |
|        3 |        2 |        5 |   40000 | 2020-11-29 00:00:00 | NULL   |
+----------+----------+----------+---------+---------------------+--------+
The expected result is a list of the rent amounts owed for every month in a given date range, including pro-rated amounts for partial occupancy in a month. For example, with the data above, a date range spanning all of 2020, and ClientID=4, the query would produce an amount for each month within the range, similar to:
+------------+-------+
| Month      | Amt   |
+------------+-------+
| 2020-10-1  | 12903 |
| 2020-11-1  | 45834 |
| 2020-12-1  | 57500 |
+------------+-------+
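Not a full answer, but a hedged sketch of one way to stitch these pieces together on MySQL 8+ (which has recursive CTEs): generate the first day of every month in the requested range, derive each session's end from the next session's start (via PrevID), then pro-rate each month by the days of overlap. @ClientID, @RangeStart and @RangeEnd are placeholder variables, the last session is capped at the end of the range, and the rounding here may not reproduce the example figures exactly:

SET @ClientID   = 4;
SET @RangeStart = '2020-01-01';
SET @RangeEnd   = '2020-12-31';

WITH RECURSIVE months AS (
    -- first day of every month in the range
    SELECT CAST(DATE_FORMAT(@RangeStart, '%Y-%m-01') AS DATE) AS month_start
    UNION ALL
    SELECT month_start + INTERVAL 1 MONTH
      FROM months
     WHERE month_start + INTERVAL 1 MONTH <= @RangeEnd
),
sessions AS (
    -- each session runs from its own start to the next session's start (or the range end)
    SELECT a.RentAmt,
           a.RBSessStart,
           COALESCE(b.RBSessStart, @RangeEnd + INTERVAL 1 DAY) AS RBSessEnd
      FROM rbsess a
      LEFT JOIN rbsess b ON b.PrevID = a.RBSessID
     WHERE a.ClientID = @ClientID
)
SELECT m.month_start AS Month,
       ROUND(SUM(s.RentAmt
                 * DATEDIFF(LEAST(s.RBSessEnd, m.month_start + INTERVAL 1 MONTH),
                            GREATEST(s.RBSessStart, m.month_start))
                 / DAY(LAST_DAY(m.month_start)))) AS Amt
  FROM months m
  JOIN sessions s
    ON s.RBSessStart < m.month_start + INTERVAL 1 MONTH
   AND s.RBSessEnd   > m.month_start
 GROUP BY m.month_start
 ORDER BY m.month_start;

Months with no occupancy simply don't appear; a LEFT JOIN from months to sessions (with COALESCE around the sum) would list them with a zero amount instead.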

R sum column in second table based on if conditions

I am trying to sum the column in another table and put it in my current table based on a number of conditions.
table1 <- tribble(~company_id, ~date,
                  1, "2018-01-02",
                  1, "2018-01-03",
                  2, "2018-01-02",
                  2, "2018-01-03")
table2 <- tribble(~other_id, ~company_id, ~date_created, ~max_rank, ~rank, ~date_closed,
                  1, 1, "2018-01-02", 20, 2, NA,
                  1, 1, "2018-01-03", 22, 1, NA,
                  2, 2, "2018-01-02", 20, 5, NA,
                  2, 2, "2018-01-03", 22, 4, NA)
I want to create a new column in table 1 that computes the following formula:
= sum( (max_rank-rank)/(max_rank-1))
but only when:
(date<=date_created, date>(date_created+20), date<date_closed, max_rank-1!=0, rank!=0)
Edit
The output I hope to achieve should look like this:
Table 1
| company id | date       | cc score |
--------------------------------------
| 1          | 2018-01-02 | 0.9473   |
| 1          | 2018-01-03 | 1.9473   |
| 2          | 2018-01-02 | 0.7895   |
| 2          | 2018-01-03 | 1.6466   |
The first can be calculated as (20-2)/(20-1) = 0.9473
The second is calculated as (20-2)/(20-1) + (22-1)/(22-1) = 1.9473
You can use the dplyr package.
Please try the code below:
library(dplyr)
cbind(table1, table2) %>%
  inner_join(table1) %>%
  inner_join(table2) %>%
  filter(date <= date_created | date > (date_created + 20) & max_rank - 1 != 0 & rank != 0) %>%
  mutate(cc_data = (max_rank - rank) / (max_rank - 1)) %>%
  group_by(company_id) %>%
  mutate(cc_data = cumsum(cc_data)) %>%
  select(company_id, date, cc_data)
Use of cbind(): we need both the date_created and date columns.
Two inner_join() calls: to make sure there is no extra data.
Please suggest a better solution than this.
This seems to work:
table1[, cc_score := table2[table1,
on = .(company_id = company_id, date_created<=date, date_created_pls_20>date),
sum(ifelse(!is.na(rank) & (is.na(date_closed) | date_closed>date),
((max_rank-rank)/(max_rank-1)), 0)),
by = .EACHI][["V1"]]]
Here date_created_pls_20 is a column that takes the date_created column and simply adds 20.

MySQL GROUP BY value based on data

I have a query which returns the following dataset:
+ ------------- + -------- + ---------- + ------------------------ + ------------------- + --------- + ----------------------- + ---------------------- + ----------------- +
| col_0_0_ | col_1_0_ | col_2_0_ | col_3_0_ | col_4_0_ | col_5_0_ | col_6_0_ | col_7_0_ | col_8_0_ |
+ ------------- + -------- + ---------- + ------------------------ + ------------------- + --------- + ----------------------- + ---------------------- + ----------------- +
| LAI-100003662 | dsa | 4546576766 | dfdfdfd2#lendingkart.com | 2015-11-30 02:30:11 | Sultanpur | Incomplete Applications | Application Incomplete | Documents Pending |
| LAI-100003662 | dsa | 4546576766 | dfdfdfd2#lendingkart.com | 2015-11-30 02:30:11 | Sultanpur | Incomplete Applications | Null | Null |
+ ------------- + -------- + ---------- + ------------------------ + ------------------- + --------- + ----------------------- + ---------------------- + ----------------- +
Now when I apply GROUP BY col_0_0_ to the query that returns this dataset, I get only one row:
+ ------------- + -------- + ---------- + ------------------------ + ------------------- + --------- + ----------------------- + ---------------------- + ----------------- +
| col_0_0_ | col_1_0_ | col_2_0_ | col_3_0_ | col_4_0_ | col_5_0_ | col_6_0_ | col_7_0_ | col_8_0_ |
+ ------------- + -------- + ---------- + ------------------------ + ------------------- + --------- + ----------------------- + ---------------------- + ----------------- +
| LAI-100003662 | dsa | 4546576766 | dfdfdfd2#lendingkart.com | 2015-11-30 02:30:11 | Sultanpur | Incomplete Applications | Application Incomplete | Documents Pending |
+ ------------- + -------- + ---------- + ------------------------ + ------------------- + --------- + ----------------------- + ---------------------- + ----------------- +
1) Why does GROUP BY only give me the first row and not the second row from the original dataset?
2) How does GROUP BY actually work in this scenario?
SQL query with GROUP BY:
select loan0_.col_0_0_,
loan0_.col_1_0_,
loan0_.col_2_0_,
loan0_.col_3_0_,
loan0_.col_4_0_,
loan0_.col_5_0_,
dsastatus2_.col_6_0_,
dsastatus2_.col_7_0_,
dsastatus2_.col_8_0_
FROM loan0_
cross join user1_
cross join dsastatus2_
where loan0_.L_USER_ID=user1_.U_GUID
and loan0_.L_LEADSOURCE='DSA'
and (loan0_.L_SUB_STATUS_ID=dsastatus2_.ADMIN_STATUS_ID
or loan0_.L_STATUS_ID=dsastatus2_.ADMIN_STATUS_ID)
and user1_.U_REFID='dsa001'
and (loan0_.L_APPLICATION_ID like 'LAI-100003662')
GROUP BY col_0_0_ ;
To answer the questions directly:
1) Why does GROUP BY only give me the first row and not the second row from the original dataset?
Because that's the way the MySQL engine works. Read the docs: "the server is free to choose any value from each group (not in the GROUP BY), so unless they are the same, the values chosen are indeterminate."
2) How does GROUP BY actually work in this scenario?
See above
A direct quote from the MySQL docs on extended GROUP BY handling:
https://dev.mysql.com/doc/refman/5.7/en/group-by-handling.html
SQL99 and later permits such nonaggregates per optional feature T301 if they are functionally dependent on GROUP BY columns: If such a relationship exists between name and custid, the query is legal. This would be the case, for example, were custid a primary key of customers.
MySQL 5.7.5 and up implements detection of functional dependence. If the ONLY_FULL_GROUP_BY SQL mode is enabled (which it is by default), MySQL rejects queries for which the select list, HAVING condition, or ORDER BY list refer to nonaggregated columns that are neither named in the GROUP BY clause nor are functionally dependent on them. (Before 5.7.5, MySQL does not detect functional dependency and ONLY_FULL_GROUP_BY is not enabled by default. For a description of pre-5.7.5 behavior, see the MySQL 5.6 Reference Manual.)
If ONLY_FULL_GROUP_BY is disabled, a MySQL extension to the standard SQL use of GROUP BY permits the select list, HAVING condition, or ORDER BY list to refer to nonaggregated columns even if the columns are not functionally dependent on GROUP BY columns. This causes MySQL to accept the preceding query. In this case, the server is free to choose any value from each group, so unless they are the same, the values chosen are indeterminate, which is probably not what you want. Furthermore, the selection of values from each group cannot be influenced by adding an ORDER BY clause. Result set sorting occurs after values have been chosen, and ORDER BY does not affect which value within each group the server chooses. Disabling ONLY_FULL_GROUP_BY is useful primarily when you know that, due to some property of the data, all values in each nonaggregated column not named in the GROUP BY are the same for each group.
The reason you see only one line is that this is what GROUP BY does -- it combines records with the same grouping value into one. In this case that value is LAI-1000...3662.
On most SQL systems, if you include columns that are not in the GROUP BY or inside an aggregate function, you get an error; MySQL instead just gives you an arbitrary value from that column's possibilities within the group.
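If the goal is one row per application that keeps the non-NULL status values, one option (a hedged sketch, not necessarily what your ORM will generate) is to aggregate the columns that vary within the group instead of letting MySQL pick one. MAX() ignores NULLs, so with the sample data the non-NULL status strings win; if a group ever held two different non-NULL statuses, MAX() would simply pick the alphabetically larger one:

SELECT loan0_.col_0_0_,
       loan0_.col_1_0_,
       loan0_.col_2_0_,
       loan0_.col_3_0_,
       loan0_.col_4_0_,
       loan0_.col_5_0_,
       MAX(dsastatus2_.col_6_0_) AS col_6_0_,
       MAX(dsastatus2_.col_7_0_) AS col_7_0_,
       MAX(dsastatus2_.col_8_0_) AS col_8_0_
  FROM loan0_
  JOIN user1_      ON loan0_.L_USER_ID = user1_.U_GUID
  JOIN dsastatus2_ ON loan0_.L_SUB_STATUS_ID = dsastatus2_.ADMIN_STATUS_ID
                   OR loan0_.L_STATUS_ID     = dsastatus2_.ADMIN_STATUS_ID
 WHERE loan0_.L_LEADSOURCE = 'DSA'
   AND user1_.U_REFID = 'dsa001'
   AND loan0_.L_APPLICATION_ID LIKE 'LAI-100003662'
 GROUP BY loan0_.col_0_0_,
          loan0_.col_1_0_,
          loan0_.col_2_0_,
          loan0_.col_3_0_,
          loan0_.col_4_0_,
          loan0_.col_5_0_;

Listing every non-aggregated column in the GROUP BY also keeps the query valid under ONLY_FULL_GROUP_BY.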

Auto Increment mysql trigger

How do I create an auto-increment field based on this example?
I have this table, with an "AF" field in the format SN.MM.YYYY,
where "SN" is an auto-increment number based on the last insert, MM is the current month, and YYYY is the current year.
| ID | AF         |
--------------------
| 1  | 01.10.2013 |
| 2  | 02.10.2013 |
So, when the month or year changes, the trigger must set the "AF" field like this:
Ex.: the month changes to November (SN resets to 01).
| 3  | 01.11.2013 |
| 4  | 02.11.2013 |
The same thing happens when the year changes (SN resets to 01):
| 5  | 01.01.2014 |
| 6  | 02.01.2014 |
| 7  | 03.01.2014 |
Does anyone know how to set up that trigger?
Note: there may be more than one record in one day, so the day is not important.
Sorry for the bad English.
Thanks, guys!
Technically you can do something like this
CREATE TRIGGER tg_bi_table1
BEFORE INSERT ON table1
FOR EACH ROW
SET NEW.af = CONCAT(
LPAD(COALESCE(
(SELECT MAX(LEFT(af, 2))
FROM table1
WHERE af LIKE DATE_FORMAT(CURDATE(), '__.%m.%Y')), 0) + 1, 2, '0'),
DATE_FORMAT(CURDATE(), '.%m.%Y'));
Here is SQLFiddle demo
Note: This approach (creating your own auto-increment values with such a pattern) has two major drawbacks:
Under heavy concurrent access, different connections may obtain the same AF number.
Because of your particular AF pattern (SN comes first), using an index is impossible, so you'll always end up with a full scan.
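For completeness, a hypothetical end-to-end usage sketch; the table definition is an assumption (only the table name table1 and the af column come from the answer above):

CREATE TABLE table1 (
    id INT AUTO_INCREMENT PRIMARY KEY,  -- assumed surrogate key
    af VARCHAR(10)                      -- holds the SN.MM.YYYY value
);

-- with the trigger in place, AF is filled in automatically;
-- e.g. two inserts made in October 2013:
INSERT INTO table1 (af) VALUES (NULL);  -- af becomes '01.10.2013'
INSERT INTO table1 (af) VALUES (NULL);  -- af becomes '02.10.2013'
-- the first insert made in November 2013 would get '01.11.2013',
-- because the subquery's LIKE '__.11.2013' pattern matches no rows yet.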

Average Of Column Counting Duplicates Once - PowerPivot + DAX

I have a column in PowerPivot that I would like to get the average of. However, I only want to include rows that are either the only instance of a value, or the first instance of a duplicate value, in another column. Is this possible with DAX?
Simply put I need the column average of unique rows, determining uniqueness from another column.
Probably too old to assist, but for those who stumble across this:
You would need to create two measures. The first would sum whatever it is you are trying to average by the distinct values in the other column.
| id | squilla |
| 01 | 100     |
| 01 | 110     |
| 02 | 90      |
| 03 | 100     |
| 03 | 90      |
So id=01 has a total squilla of 210, id=02 a total of 90, and id=03 a total of 190. The distinct average (where id is the identifier) is (210 + 90 + 190) / 3 = 163.333.
To do this in powerpivot, first create a measure that sums the squilla by id: Measure1:=CALCULATE(SUM('yourTable'[squilla]),VALUES('yourTable'[id]))
And the second to average it across id:
Measure2:=AVERAGEX(DISTINCT('yourTable'[id]),[Measure1])
My understanding of the OP's question looks something like this:
| id | age |
| -- | --- |
| 1 | 20 |
| 1 | 20 |
| 2 | 50 |
| 3 | 35 |
| 3 | 35 |
In this case, a summed average as suggested by aesthetic_a, (40 + 50 + 70)/3, would not be appropriate.
However, an averaged average, ((40/2) + (50/1) + (70/2))/3 = 35, would be a solution to determine the distinct average grouped by id.
Measure:=AVERAGEX(VALUES(table[id]), CALCULATE(AVERAGE(table[age])))