I want to check which range/level a number falls into. I have a table of buy quantities and pay rates. The only thing I can think of is BETWEEN, but here it is different, because there isn't just a single min and max column.
pay_level
| id | type | buy1 | pay1 | buy2 | pay2 | buy3 | pay3 |
|----|------|------|------|------|------|------|------|
| 1  | p1   | 10   | 100  | 20   | 80   | 30   | 70   |
| 2  | p2   | 10   | 100  | 20   | 80   | 30   | 70   |
| 3  | p3   | 5    | 500  | 10   | 400  | 30   | 300  |
OK, according to the table above, my goal is to work out how much an incoming order costs.
For example:
A orders p1 for 12 units, so the price per unit is 100, because the quantity falls between buy1 and buy2.
B orders p1 for 15 units, so B gets 100 per unit as well, same as A.
C orders p1 for 25 units and gets 70, because the quantity is between buy2 and buy3.
What I can think of is to compare the two columns the order falls between, so my code is:
select * from pay_level where order between buy1 and buy2 and type='p1'
But the problem occurs when the order is more than 20 (the value of buy2). I know my English is not good enough to explain this clearly. Hope you understand.
First normalise your schema design...
DROP TABLE IF EXISTS wilf;
CREATE TABLE wilf
(id   INT AUTO_INCREMENT PRIMARY KEY
,type INT NOT NULL  -- product: p1 = 1, p2 = 2, p3 = 3
,x    INT NOT NULL  -- tier number within the product (1..3)
,buy  INT NOT NULL  -- minimum quantity for the tier
,pay  INT NOT NULL  -- per-unit price for the tier
);
INSERT INTO wilf VALUES
(1,1,1,10,100),
(2,2,1,10,100),
(3,3,1, 5,500),
(4,1,2,20, 80),
(5,2,2,20, 80),
(6,3,2,10,400),
(7,1,3,30, 70),
(8,2,3,30, 70),
(9,3,3,30,300);
...and then your queries become trivial...
SELECT pay FROM wilf WHERE type = 1 AND buy <= 12 ORDER BY buy DESC LIMIT 1;
+-----+
| pay |
+-----+
| 100 |
+-----+
(And C should have got 80)
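The lookup generalises to any order by parameterising the quantity and type (a sketch; @qty and @type are placeholders I'm assuming, not part of the original):
-- @qty = ordered quantity, @type = product (p1 = 1, ...)
SET @qty := 25, @type := 1;
SELECT pay
FROM wilf
WHERE type = @type
AND buy <= @qty          -- highest tier whose minimum quantity is not exceeded
ORDER BY buy DESC
LIMIT 1;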
You'll need a CASE expression to navigate this one, since you can't dynamically refer to a database object (table, column, etc.) in your SQL.
I think something like the following would get you in the ballpark:
-- @qty stands in for the incoming order quantity (ORDER is a reserved word)
SET @qty := 25;
SELECT
CASE WHEN @qty BETWEEN buy1 AND buy2 - 1 THEN pay1
     WHEN @qty BETWEEN buy2 AND buy3 - 1 THEN pay2
     WHEN @qty >= buy3 THEN pay3 END AS cost
FROM pay_level
WHERE type = 'p1'
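Multiplying the resulting per-unit price by @qty then gives the order's total cost.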
I am currently building a single (but, in its context, extremely important) query, which seems to be working (qualitatively OK), but which I think/hope/wish could run faster.
I am running tests on MySQL 5.7.29 until a box running OmnisciDB in GPU mode becomes available (which should be relatively soon). While I hope the switch to that different DB backend will improve performance, I am also aware it might require some tweaking of the table structures, querying techniques used, etc. But that is for later.
A little context:
The data is summed up in an extremely simple table:
CREATE TABLE `entities_for_perception` (
`id` INT(11) NOT NULL AUTO_INCREMENT,
`pos` POINT NOT NULL,
`perception` INT(11) NOT NULL DEFAULT '0',
`stealth` INT(11) NOT NULL DEFAULT '0',
PRIMARY KEY (`id`),
SPATIAL INDEX `pos` (`pos`),
INDEX `perception` (`perception`),
INDEX `stealth` (`stealth`)
)
COLLATE='utf8mb4_bin'
ENGINE=InnoDB
AUTO_INCREMENT=10001
;
Which then contains values like these (obvious, but it helps to visualise :-) ):
| id | pos | perception | stealth |
| 1 | ... | 10 | 3 |
| 2 | ... | 6 | 5 |
| 3 | ... | 5 | 5 |
| 4 | ... | 7 | 7 |
etc..
Now I have this query (see below) whose intent is the following: in one pass, fetch all the ids of the "entities" that see other entities and return the list of "who sees who".
[The "in one pass" is obvious and is to limit roundtrips.]
Let's assume POINT() is in a cartesian system.
The query is the following:
SET @automatic_perception_distance := 10;
SELECT
    *
FROM (
    SELECT
        e1.id AS oid,
        @origin_perception := e1.perception AS operception,
        @max_perception_distance := e1.perception * 5 AS max_perception_distance,
        @dist := ST_DISTANCE(e1.pos, e2.pos) AS dist,
        -- minimum 0
        @dist_from_auto := GREATEST(@dist - @automatic_perception_distance, 0) AS dist_from_auto,
        @effective_perception := (
            @origin_perception - (
                @dist_from_auto
                / (@max_perception_distance - @automatic_perception_distance)
                * @origin_perception
            )
        ) AS effective_perception,
        e2.id AS tid,
        e2.stealth AS tstealth
    FROM
        entities_for_perception e1
        INNER JOIN entities_for_perception e2 ON
            e1.id != e2.id
    ORDER BY
        oid,
        dist
) AS subquery
WHERE
    effective_perception >= tstealth
;
What it does is list "who sees whom" by applying the following criteria/filters:
determining a maximum distance beyond which perception is not possible
determining a minimum distance below which perception is automatic (not implemented yet)
determining an effective perception value that varies (and regresses) with distance
...and comparing the effective perception of the "spotter" against the stealth of the "target".
This works, but runs somewhat slowly (laptop + VirtualBox + CentOS 7) on a table with very few rows (~1,000). The query time fluctuates between 0.2 and 0.29 seconds. This is, however, orders of magnitude faster than one query per "spotter" would be, which would not scale to 1,000+ spotters. Heh. :-)
Example of output:
| oid | operception | max_perception_distance | dist | dist_from_auto | effective_perception | tid | tstealth |
| 1 | 9 | 45 | 1.4142135623730951 | 0 | 9 | 156 | 5 |
| 1 | 9 | 45 | 11.045361017187261 | 1.0453610171872612 | 8.731192881294705 | 164 | 2 |
| 1 | 9 | 45 | 13.341664064126334 | 3.341664064126334 | 8.140714954938943 | 163 | 8 |
| 1 | 9 | 45 | 16.97056274847714 | 6.970562748477139 | 7.207569578963021 | 125 | 7 |
| 1 | 9 | 45 | 25.019992006393608 | 15.019992006393608 | 5.137716341213072 | 152 | 3 |
| 1 | 9 | 45 | 25.079872407968907 | 15.079872407968907 | 5.122318523665138 | 191 | 5 |
etc.
Could the reason for what I believe is a slow response:
be the subquery?
be the variables or the arithmetic applied to them?
be the join?
be something else I am not aware of?
Thank you for any insight!
An index would probably help: CREATE INDEX idx_ID ON entities_for_perception (id);
If you were to upgrade to MySQL version 8, you could take advantage of a Common Table Expression as follows:
with scored as (
    SELECT
        e1.id AS oid,
        e1.perception AS operception,
        -- expressions are inlined: @var := assignments are not reliable inside a CTE
        e1.perception * 5 AS max_perception_distance,
        ST_DISTANCE(e1.pos, e2.pos) AS dist,
        -- minimum 0
        GREATEST(ST_DISTANCE(e1.pos, e2.pos) - @automatic_perception_distance, 0) AS dist_from_auto,
        e1.perception - (
            GREATEST(ST_DISTANCE(e1.pos, e2.pos) - @automatic_perception_distance, 0)
            / (e1.perception * 5 - @automatic_perception_distance)
            * e1.perception
        ) AS effective_perception,
        e2.id AS tid,
        e2.stealth AS tstealth
    FROM
        entities_for_perception e1
        INNER JOIN entities_for_perception e2 ON
            e1.id != e2.id
)
SELECT *
FROM scored
WHERE
    effective_perception >= tstealth
ORDER BY
    oid,
    dist
;
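The same inlined expressions also work in the original MySQL 5.7 derived-table query, which would avoid the user variables (and their undefined evaluation order) without waiting for the upgrade.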
So basically I have a users table with a column named "completed_surveys" which holds the total number of completed surveys.
I need to create a query which takes a step size and groups users into ranges of that size.
Example result which would suit my needs:
+---------+-------------------+
| range | completed_surveys |
+---------+-------------------+
| 0-14 | 4566 |
| 14-28 | 3412 |
| 28-42 | 5456 |
| 42-56 | 33 |
| 56-70 | 31 |
| 70-84 | 441 |
| 84-98 | 576 |
| 98-112 | 23 |
| 112-126 | 12 |
| 126-140 | 1 |
+---------+-------------------+
What I have so far:
select concat(what should i add here??) as `range`,
       count(users.completed_surveys) as `completed_surveys`
from users
where users.completed_surveys > 0
group by 1
order by users.completed_surveys;
I think this query is otherwise correct; however, in the concat function I don't really know how to increase the previous number by 14. Any ideas?
One idea is to first create a helper table with values 0..9 .
CREATE TABLE tmp ( i INT );
INSERT INTO tmp VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9);
Then join the two tables:
SELECT concat(tmp.i * 14 + 1, '-', (tmp.i + 1) * 14) as `range`,
       count(users.completed_surveys) as `completed_surveys`
FROM users
INNER JOIN tmp ON (users.completed_surveys > tmp.i * 14 AND users.completed_surveys <= (tmp.i + 1) * 14)
WHERE users.completed_surveys > 0
GROUP BY tmp.i
ORDER BY tmp.i
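If creating a permanent helper table is undesirable, the same 0..9 sequence can be generated inline with a derived table (a sketch of the standard UNION trick, keeping the step of 14 from the question):
SELECT concat(t.i * 14 + 1, '-', (t.i + 1) * 14) AS `range`,
       count(*) AS `completed_surveys`
FROM users
JOIN (SELECT 0 AS i UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3
      UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6
      UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) t
  ON users.completed_surveys > t.i * 14 AND users.completed_surveys <= (t.i + 1) * 14
GROUP BY t.i
ORDER BY t.i;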
Given a structure like this in a MySQL database
#data_table
(id) | user_id | time | (...)
#relations_table
(id) | user_id | user_coach_id | (...)
we can select all data_table rows belonging to a certain user_coach_id (let's say 1) with
SELECT rel.`user_coach_id`, dat.*
FROM `relations_table` rel
LEFT JOIN `data_table` dat ON rel.`user_id` = dat.`user_id`
WHERE rel.`user_coach_id` = 1
ORDER BY dat.`time` DESC
returning something like
| user_coach_id | id | user_id | time | data1 | data2 | ...
| 1 | 9 | 4 | 15 | foo | bar | ...
| 1 | 7 | 3 | 12 | oof | rab | ...
| 1 | 6 | 4 | 11 | ofo | abr | ...
| 1 | 4 | 4 | 5 | foo | bra | ...
(And so on. Of course the time values are not integers in reality, but this keeps it simple.)
But now I would like to query (ideally) only up to an arbitrary number of rows from data_table per distinct user_id, but still have those rows ordered (i.e. newest first). Is that even possible?
I know I can use GROUP BY user_id to return only one row per user, but then the ordering doesn't work, and it seems unpredictable which row ends up in the result. I guess it's doable with a subquery, but I haven't figured it out yet.
Limiting the number of rows in each GROUP is complicated. It is probably best done with an @variable to count rows, plus an outer query to throw away the rows beyond the limit.
My blog on Groupwise Max gives some hints on how to do this; a minimal sketch follows.
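Here is roughly how that counting trick might look, assuming we keep at most 3 rows per user_id (the limit, and the data1/data2 columns, are just taken from the example above; on MySQL 8+ ROW_NUMBER() would replace the variables):
SELECT user_coach_id, id, user_id, `time`, data1, data2
FROM (
    SELECT rel.user_coach_id, dat.*,
           @rn := IF(dat.user_id = @prev, @rn + 1, 1) AS rn,  -- row number within each user_id
           @prev := dat.user_id AS prev_user                  -- remember the user for the next row
    FROM relations_table rel
    JOIN data_table dat ON rel.user_id = dat.user_id
    JOIN (SELECT @rn := 0, @prev := NULL) init
    WHERE rel.user_coach_id = 1
    ORDER BY dat.user_id, dat.`time` DESC
) ranked
WHERE rn <= 3            -- hypothetical per-user limit
ORDER BY `time` DESC;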
I have a MySQL table that stores network utilization for every five minutes, and I now want to use this data for graphing. Is there a way I could just specify the start time, the end time, and the number of buckets/samples I need, and have MySQL somehow oblige?
My table
+---------------------+-----+
| Tstamp | QID |
+---------------------+-----+
| 2010-12-10 15:05:39 | 20 |
| 2010-12-10 15:06:09 | 26 |
| 2010-12-10 15:06:14 | 27 |
| 2010-12-10 15:06:18 | 28 |
| 2010-12-10 15:06:23 | 40 |
| 2010-12-10 15:10:38 | 20 |
| 2010-12-10 15:11:12 | 26 |
| 2010-12-10 15:11:17 | 27 |
| 2010-12-10 15:11:21 | 28 |
------ SNIP ------
So can I specify that I need 20 samples from the last 24 hours?
Thanks!
Harsh
You can convert your DATETIME to a UNIX_TIMESTAMP and play with division and modulo...
Here is a sample query you can use. Notice it does not work if the number of requested samples in the given time range is more than half of the available records for that range (which would mean a bucket size of one).
-- Configuration
SET @samples = 4;
SET @start = '2011-05-06 19:44:00';
SET @end = '2011-05-06 20:46:50';
--
SET @bucket = (SELECT FLOOR(COUNT(*) / @samples) AS bucket_size
               FROM table1
               WHERE Tstamp BETWEEN @start AND @end);
SELECT
    SUM(t.QID), FLOOR((t.ID - 1) / @bucket) AS bucket
FROM (SELECT QID, @r := @r + 1 AS ID
      FROM table1
      JOIN (SELECT @r := 0) r
      WHERE Tstamp BETWEEN @start AND @end
      ORDER BY Tstamp) AS t
GROUP BY bucket
HAVING COUNT(t.QID) = @bucket
ORDER BY bucket;
P.S. I believe there is a more elegant way to do this, but since no one has provided a working query I hope this helps.
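For the "20 samples from the last 24 hours" case, the division idea from the first line can also be applied to the timestamps themselves, so buckets are fixed time slices rather than equal row counts (a sketch; table and column names follow the question):
SET @buckets := 20;
SET @from := UNIX_TIMESTAMP(NOW() - INTERVAL 24 HOUR);
SET @width := (UNIX_TIMESTAMP(NOW()) - @from) / @buckets;  -- seconds per bucket
SELECT FLOOR((UNIX_TIMESTAMP(Tstamp) - @from) / @width) AS bucket,
       COUNT(*) AS samples,
       AVG(QID) AS avg_qid   -- or SUM(), depending on what the graph needs
FROM table1
WHERE Tstamp >= FROM_UNIXTIME(@from)
GROUP BY bucket
ORDER BY bucket;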
I want to update a column by comparing each row to all other rows in the table, but I can't figure out how to distinguish the column names of the row being updated from those of the rows being searched through.
Here's a simplified example...
people:
+--------+-----+----------------+
| name | age | nameClosestAge |
+--------+-----+----------------+
| alice | 20 | |
| bob | 30 | |
| clive | 22 | |
| duncan | 24 | |
+--------+-----+----------------+
To fill in the 'nameClosestAge' column with the name of the person closest in age to each person, you could do this...
create temporary table peopleTemp like people;
insert into peopleTemp select * from people;
update people set nameClosestAge =
(select name from peopleTemp where people.name != peopleTemp.name
order by abs(people.age - peopleTemp.age) asc limit 1);
Which produces this....
+--------+-----+----------------+
| name | age | nameClosestAge |
+--------+-----+----------------+
| alice | 20 | clive |
| bob | 30 | duncan |
| clive | 22 | alice |
| duncan | 24 | clive |
+--------+-----+----------------+
Surely there is a way to do this without creating a duplicate table.
I'm looking for the most efficient method here, as I have a very large table and it's taking too long to update.
I'm using MySQL with PHP.
You could perform this with just one subquery and no temp table.
SELECT name, age, (
SELECT name
FROM people
WHERE name != ppl.name
ORDER BY ABS( people.age - ppl.age )
LIMIT 1
) AS nameClosestAge
FROM people AS ppl;
Checked and works :)
EDIT: If you want to be able to work with the calculated column, you can use a view:
CREATE VIEW people_close AS
SELECT name, age, (
SELECT name
FROM people
WHERE name != ppl.name
ORDER BY ABS( people.age - ppl.age )
LIMIT 1
) AS nameClosestAge
FROM people AS ppl;
You can't update the calculated field, but you can query against it easily.
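And if the goal is still to persist the value, as in the original UPDATE, the correlated subquery can be wrapped in a derived table; MySQL materialises the derived table first, which sidesteps the "can't specify target table for update in FROM clause" restriction without a temp table (a sketch, same tables and columns as above):
UPDATE people
JOIN (
    SELECT ppl.name, (
        SELECT p2.name
        FROM people p2
        WHERE p2.name != ppl.name
        ORDER BY ABS(p2.age - ppl.age)
        LIMIT 1
    ) AS closest
    FROM people AS ppl
) x ON people.name = x.name
SET people.nameClosestAge = x.closest;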