I am trying to populate ElasticSearch with a collection of documents along with a field representing the path to the document based on its parents.
Here is my table layout:
+----+--------+-------+----------+
| Id | Parent | Alias | Contents |
+----+--------+-------+----------+
| 1 | null | Doc1 | Admin |
| 2 | 1 | Doc2 | Use |
| 3 | 2 | Doc3 | Test |
| 4 | 3 | Doc4 | Ask |
| 5 | null | PDF1 | Intro |
| 6 | 5 | PDF2 | Managers |
+----+--------+-------+----------+
Here is the desired output:
+----+--------+-------+----------+---------------------+
| Id | Parent | Alias | Contents | Path |
+----+--------+-------+----------+---------------------+
| 1 | null | Doc1 | Admin | Doc1 |
| 2 | 1 | Doc2 | Use | Doc1\Doc2 |
| 3 | 2 | Doc3 | Test | Doc1\Doc2\Doc3 |
| 4 | 3 | Doc4 | Ask | Doc1\Doc2\Doc3\Doc4 |
| 5 | null | PDF1 | Intro | PDF1 |
| 6 | 5 | PDF2 | Managers | PDF1\PDF2 |
+----+--------+-------+----------+---------------------+
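To make the goal concrete: the Path column is just a walk up the parent pointers until a null parent is reached. A sketch of that logic in Python (purely illustrative; the real work needs to happen in MySQL before logstash picks it up):

```python
# Rows mirror the example table: id -> (parent, alias).
rows = {
    1: (None, "Doc1"),
    2: (1, "Doc2"),
    3: (2, "Doc3"),
    4: (3, "Doc4"),
    5: (None, "PDF1"),
    6: (5, "PDF2"),
}

def path(doc_id):
    """Walk parent pointers up to the root, then join aliases root-first."""
    parts = []
    while doc_id is not None:
        parent, alias = rows[doc_id]
        parts.append(alias)
        doc_id = parent
    return "\\".join(reversed(parts))

print(path(4))  # Doc1\Doc2\Doc3\Doc4
```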
I have this query that gets the Path of one document specified by the parameter #child (e.g. SET #child = 5;):
SELECT
T2.*
FROM
(SELECT
#r AS _id,
(SELECT
#r:=Parent
FROM
documents
WHERE
id = _id) AS ParentId,
#l:=#l + 1 AS lvl
FROM
(SELECT #r:=#child, #l:=#parent) vars, documents
WHERE
#r <> 0) T1
JOIN
documents T2 ON T1._id = T2.Id
ORDER BY T2.Parent
The problem is: how do I set #child if I put this into a subquery? I have tried GROUP_CONCAT(), but it always ends up producing the same path for every line. I have tried putting the Id of the current row in the subquery, but the following query throws an error: ErrorCode: 1109. Unknown table 'doc' in field list:
SELECT doc.*, (
SELECT GROUP_CONCAT(a.Alias) FROM (SELECT
T2.*
FROM
(SELECT
#r AS _id,
(SELECT
#r:=Parent
FROM
documents
WHERE
id = _id) AS ParentId,
#l:=#l + 1 AS lvl
FROM
(SELECT #r:= doc.Id, #l:=#parent) vars, documents
WHERE
#r <> 0) T1
JOIN
documents T2 ON T1._id = T2.Id
ORDER BY T1.lvl DESC) a
) as Path FROM documents doc
What am I doing wrong? Is there a better way to do this that I'm not seeing?
Though it is not entirely relevant, I will point out that I'm using a logstash script to load the documents into ElasticSearch from my database on a schedule. Also, for simplicity's sake, I have taken out the majority of the columns and replaced the contents with faux contents.
You get your error because you cannot use an outer variable in a derived table. A derived table is basically every "subquery" for which you have to use an alias, like vars in your case. Try removing that alias, and MySQL will tell you that every derived table has to have an alias.
One way to solve this is to move your whole query into a function, e.g. getpath(child_id int), where you can then freely use this variable wherever you want (assuming you have a working query that can get the path for one specific child, "something with GROUP_CONCAT()").
But in your case, it is actually possible to reorganize your code so you do not need a derived table:
select d.*, t3.path
from (
SELECT t1.id,
group_concat(t2.alias order by t1.rownum desc separator '\\' ) as path
from (
SELECT
current_child.id,
lvls.rownum,
#r := if(lvls.rownum = 1, current_child.id, #r) AS _id,
(SELECT #r:=Parent
FROM documents
WHERE id = _id) AS ParentId
FROM (select #rownum:= #rownum+1 as rownum
from documents, -- maybe add limit 5
(select #rownum := 0) vars
) as lvls
-- or use:
-- (select 1 as rownum union select 2 union select 3
-- union select 4 union select 5) as lvls
straight_join documents as current_child
) as t1
join documents t2
on t2.id = t1._id
group by t1.id
) t3
join documents d
on d.id = t3.id;
I used your inner documents the same way as you did, which is actually quite inefficient and is only used to support an unlimited tree depth. If you know your max dependency level, you could use the alternative code for lvls I added as a comment (which is just a list of numbers) or the limit.
Make sure to set the group_concat_max_len setting to an appropriate value (with e.g. set session group_concat_max_len = 20000;). By default, it supports a length of 1024, which will usually be enough, but for long aliases or really deep trees you might reach it - and since it will give you neither an error nor a warning, it is sometimes hard to diagnose, so be aware of it.
There is a more straightforward way to solve your problem. It requires you to know the maximum depth of your tree, but if you do, you can simply join the parents to every child.
select child.*,
concat_ws('\\',p4.Alias,p3.Alias,p2.Alias,p1.Alias,child.Alias) as path
from documents child
left join documents p1 on child.parent = p1.id
left join documents p2 on p1.parent = p2.id
left join documents p3 on p2.parent = p3.id
left join documents p4 on p3.parent = p4.id;
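This works because CONCAT_WS skips NULL arguments, so rows with fewer ancestors simply contribute fewer path segments. A rough Python equivalent of that behavior, for illustration:

```python
def concat_ws(sep, *args):
    """Mimic MySQL's CONCAT_WS: join the arguments, skipping NULL (None) values."""
    return sep.join(a for a in args if a is not None)

# A row whose chain has only one ancestor: p4..p2 aliases come back as NULL.
print(concat_ws("\\", None, None, None, "PDF1", "PDF2"))  # PDF1\PDF2
```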
Generally speaking, the tree you used for your hierarchy does not work very well in SQL because of the recursive nature of the model (even though other databases actually support recursive queries in a way very similar to what you simulated with the variables).
For other ways to model your hierarchy, see e.g. Bill Karwin's presentation, Models for Hierarchical Data. They make it a lot easier to query a path without recursion.
I have created a decent solution. It's not incredibly fast, but that is to be expected, and as this is just for a once-a-day load, it is acceptable for now.
Essentially, I created a function that gets the Path based on an id, then run a view that selects all of the values plus the path for the appropriate row. (When pushing to production I will go with a faux materialized view for faster loads to logstash, essentially to avoid the timeouts.)
CREATE FUNCTION `get_parent_path` (child int)
RETURNS VARCHAR(1024)
BEGIN
DECLARE path varchar(1024);
SELECT
GROUP_CONCAT(a.Alias)
INTO
path
FROM (
SELECT
T2.*
FROM
(
SELECT
#r AS _id,
(
SELECT
#r := Parent
FROM
documents
WHERE
id = _id
) as ParentId,
#l := #l + 1 as lvl
FROM
(SELECT #r := child, #l := #parent) vars, documents
WHERE
#r <> 0
) T1
JOIN
documents T2
ON
T1._id = T2.Id
ORDER BY T2.Id
) a;
RETURN COALESCE(path, 'invalid child');
END
Then I created the view:
CREATE VIEW `documentswithpath` AS
SELECT *, get_parent_path(Id) AS Path FROM documents;
Then I just run SELECT * FROM documentswithpath; from the logstash script. This also excludes a lot of the logstash logic, to keep the answer simple. If anyone has a better, preferably faster, method of doing this, please let me know! Thanks.
Related
I'm having trouble using/understanding the SQL ALL operator. I have a table FOLDER_PERMISSION with the following columns:
+----+-----------+---------+----------+
| ID | FOLDER_ID | USER_ID | CAN_READ |
+----+-----------+---------+----------+
| 1 | 34353 | 45453 | 0 |
| 2 | 46374 | 342532 | 1 |
| 3 | 46374 | 32352 | 1 |
+----+-----------+---------+----------+
I want to select the folders where all the users have permission to read, how could I do it?
Use aggregation and having:
select folder_id
from t
group by folder_id
having min(can_read) = 1;
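The idea: if the smallest can_read in a folder's group is 1, then every row in that group is 1. The same check spelled out in Python, for illustration (using the question's rows):

```python
from collections import defaultdict

# (folder_id, user_id, can_read) rows from the question's table.
rows = [(34353, 45453, 0), (46374, 342532, 1), (46374, 32352, 1)]

groups = defaultdict(list)
for folder_id, _user_id, can_read in rows:
    groups[folder_id].append(can_read)

# min(flags) == 1 is exactly "every can_read in the group is 1".
readable = [f for f, flags in groups.items() if min(flags) == 1]
print(readable)  # [46374]
```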
Gordon's answer seems better but for the sake of completeness, using ALL a query could look like:
SELECT x1.folder_id
FROM (SELECT DISTINCT
fp1.folder_id
FROM folder_permission fp1) x1
WHERE 1 = ALL (SELECT fp2.can_read
FROM folder_permission fp2
WHERE fp2.folder_id = x1.folder_id);
If you have a table for the folders themselves replace the derived table (aliased x1) with it.
But this only respects users present in folder_permission. If not all users have a reference in that table, you may not get the folders that truly all users can read.
You can use aggregation:
SELECT fp.FOLDER_ID
FROM folder_permission fp
GROUP BY fp.FOLDER_ID
HAVING SUM( can_read = 0 ) = 0;
You can also express it as:
SELECT fp.FOLDER_ID
FROM folder_permission fp
GROUP BY fp.FOLDER_ID
HAVING MIN(CAN_READ) = MAX(CAN_READ) AND MIN(CAN_READ) = 1;
If you wanted to return the full matching records, you could try using some exists logic:
SELECT ID, FOLDER_ID, USER_ID, CAN_READ
FROM yourTable t1
WHERE NOT EXISTS (SELECT 1 FROM yourTable t2
WHERE t2.FOLDER_ID = t1.FOLDER_ID AND t2.CAN_READ = 0);
Demo
The existence of a matching record in the above exists subquery would imply that one or more users exist for that folder who do not have read access rights.
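The same NOT EXISTS semantics in Python, for illustration: keep a row only if no row of the same folder has CAN_READ = 0.

```python
# The question's rows, as dicts keyed by column name.
rows = [
    {"ID": 1, "FOLDER_ID": 34353, "USER_ID": 45453, "CAN_READ": 0},
    {"ID": 2, "FOLDER_ID": 46374, "USER_ID": 342532, "CAN_READ": 1},
    {"ID": 3, "FOLDER_ID": 46374, "USER_ID": 32352, "CAN_READ": 1},
]

# NOT EXISTS: a row survives only if no row of its folder blocks reading.
full = [
    r for r in rows
    if not any(o["FOLDER_ID"] == r["FOLDER_ID"] and o["CAN_READ"] == 0
               for o in rows)
]

print([r["ID"] for r in full])  # [2, 3]
```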
I'm importing data where groups of rows need to be given an id, but there is nothing unique and common to them in the incoming data. What there is: a known indicator of the first row of a group, and the fact that the data is in order. So we can step through row by row, setting an id and incrementing it whenever this indicator is found. I've done this, but it's incredibly slow. Is there a better way to do this in MySQL, or am I better off pre-processing the text data line by line to add the id?
Example of data coming in, I need to increment an id whenever we see "NEW"
id,linetype,number,text
1,NEW,1234,sometext
2,CONTINUE,2412,anytext
3,CONTINUE,1,hello
4,NEW,2333,bla bla
5,CONTINUE,333,hello
6,NEW,1234,anything
So i'll end up with
id,linetype,number,text,group_id
1,NEW,1234,sometext,1
2,CONTINUE,2412,anytext,1
3,CONTINUE,1,hello,1
4,NEW,2333,bla bla,2
5,CONTINUE,333,hello,2
6,NEW,1234,anything,3
I've tried a stored procedure where I go row by row, updating as I go, but it's super slow.
set l_id = 0;
select count(*) from mytable into n;
set i = 1;
while i <= n do
  select linetype into l_linetype from mytable where id = i;
  if l_linetype = "NEW" then
    set l_id = l_id + 1;
  end if;
  update mytable set group_id = l_id where id = i;
  set i = i + 1; -- without this the loop never advances
end while;
No errors; it's just that something I could do in a second by reading and writing the text file line by line takes 100 seconds in MySQL. It'd be nice if there were a way within MySQL to do this reasonably fast, so separate pre-processing was not needed.
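For comparison, the line-by-line pre-processing mentioned above would look something like this in Python (column layout assumed from the sample data):

```python
# Incoming lines in order; the second field marks the first row of a group.
lines = [
    "1,NEW,1234,sometext",
    "2,CONTINUE,2412,anytext",
    "3,CONTINUE,1,hello",
    "4,NEW,2333,bla bla",
    "5,CONTINUE,333,hello",
    "6,NEW,1234,anything",
]

group_id = 0
out = []
for line in lines:
    linetype = line.split(",")[1]
    if linetype == "NEW":       # first row of a group: start a new id
        group_id += 1
    out.append(f"{line},{group_id}")

print(out[-1])  # 6,NEW,1234,anything,3
```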
In the absence of MySQL 8+ (no window functions available), you can use a correlated subquery instead:
EDIT: As pointed out by @Paul in the comments:
SELECT t1.*,
(SELECT COUNT(*)
FROM your_table t2
WHERE t2.id <= t1.id
AND t2.linetype = 'NEW'
) group_id
FROM your_table t1
The above query can be more performant if we define a composite index on (linetype, id). The order of the columns is important, because we have a range condition on id.
Previously:
SELECT t1.*,
(SELECT SUM(t2.linetype = 'NEW')
FROM your_table t2
WHERE t2.id <= t1.id
) group_id
FROM your_table t1
The above query requires an index on id.
Another approach using User-defined Variables (Session variables) would be:
SELECT
t1.*,
#g := IF(t1.linetype = 'NEW', #g + 1, #g) AS group_id
FROM your_table t1
CROSS JOIN (SELECT #g := 0) vars
ORDER BY t1.id
It is like a looping technique: we use session variables whose previous value is accessible during the next row's calculation within the SELECT. We initialize the variable #g to 0 and then compute it row by row. If we encounter a row with the NEW linetype, we increment it; otherwise we keep the previous row's value. See https://stackoverflow.com/a/53465139/2469308 for more discussion and caveats to be aware of when using this approach.
For MySql 8.0+ you can use SUM() window function:
select *,
sum(linetype = 'NEW') over (order by id) group_id
from tablename
See the demo.
For previous versions you can simulate this functionality with the use of a variable:
set #group_id := 0;
select *,
#group_id := #group_id + (linetype = 'NEW') group_id
from tablename
order by id
See the demo.
Results:
| id | linetype | number | text | group_id |
| --- | -------- | ------ | -------- | -------- |
| 1 | NEW | 1234 | sometext | 1 |
| 2 | CONTINUE | 2412 | anytext | 1 |
| 3 | CONTINUE | 1 | hello | 1 |
| 4 | NEW | 2333 | bla bla | 2 |
| 5 | CONTINUE | 333 | hello | 2 |
| 6 | NEW | 1234 | anything | 3 |
I have two tables; one is an index (or map) which helps when querying the other.
SELECT v.*
FROM smv_ v
WHERE (SELECT p.network
FROM providers p
WHERE p.provider_id = v.provider_id) = 'RUU='
AND (SELECT p.va
FROM providers p
WHERE p.provider_id = v.provider_id) = 'MjU='
LIMIT 1;
Because we do not know the name of the column that holds the main data, we need to look it up, using the provider_id which is in both tables, and then query.
I am not getting any errors, but also no data back. I have spent the past hour trying to put this on sqlfiddle, but it kept crashing, so I just wanted to check whether my code is really wrong, hence the crashing.
In the above example, I am looking in the providers table for column network, where the provider_id matches, and then use that as the column on smv.
I am sure I have done this just like this before, but after a weekend of trying I thought I would ask on here.
Thanks in Advance.
UPDATE
Here is an example of the data:
This is the providers table. It is the link: no matter what the column on the smv table is named, we can connect them.
+---+---+---------------+---------+-------+--------+-----+-------+--------+
| | A | B | C | D | E | F | G | H |
+---+---+---------------+---------+-------+--------+-----+-------+--------+
| 1 | 1 | Home | network | batch | bs | bp | va | bex |
| 2 | 2 | Recharge | code | id | serial | pin | value | expire |
+---+---+---------------+---------+-------+--------+-----+-------+--------+
In the example above, column G means that for Recharge the smv column to use is value. So that is what we would look for in our WHERE clause.
Here is the smv table:
+---+---+-----------+-----------+---+----+---------------------+-----+--+
| | A | B | C | D | E | F | value | va |
+---+---+-----------+-----------+---+----+---------------------+-----+--+
| 1 | 1 | X2 | Home | 4 | 10 | 2016-09-26 15:20:58 | | 7 |
| 2 | 2 | X2 | Recharge | 4 | 11 | 2016-09-26 15:20:58 | 9 | |
+---+---+-----------+-----------+---+----+---------------------+-----+--+
value in the same example as above would be 9, or 'RUU=' decoded.
So we do not know the column name until the row from smv is called; once we have this, we can look up which column name we need to get the correct information.
Hope this helps.
MORE INFO
At the point of triggering, we do not know which row holds the right data, because many of the fields would be empty. The map is there to help us query the right column to get the right row (smv grows over time, depending on what's uploaded).
1) SELECT p.va FROM providers p WHERE p.network = 'Recharge' ;
2) SELECT s.* FROM smv s, providers p WHERE p.network = 'Recharge';
Query 1) gives me the correct column to use when querying smv; using the above examples, it comes back with "value". So I now need to look up, within the smv table, C = Recharge and value = '9'. This should bring me back row 2 of the smv table.
So queries 1 and 2 work individually, but I need them put together so the work is done on the database server.
Hope this gives more insight
Even More Info
From reading other posts (which are not really doing what I need), I have come up with this:
SELECT s.*
FROM (SELECT
(SELECT p.va
FROM dh_smv_providers p
WHERE p.provider_name = 'vodaphone'
LIMIT 1) AS net,
(SELECT p.bex
FROM dh_smv_providers p
WHERE p.provider_name = 'vodaphone'
LIMIT 1) AS bex
FROM dh_smv_providers) AS val, dh_smv_ s
WHERE s.provider_id = 'vodaphone' AND net = '20'
ORDER BY from_base64(val.bex) DESC;
The above comes back blank, but if I replace net in the WHERE clause with a column I know exists, I do get the expected results:
SELECT s.*
FROM (SELECT
(SELECT p.va
FROM dh_smv_providers p
WHERE p.provider_name = 'vodaphone'
LIMIT 1) AS net,
(SELECT p.bex
FROM dh_smv_providers p
WHERE p.provider_name = 'vodaphone'
LIMIT 1) AS bex
FROM dh_smv_providers) AS val, dh_smv_ s
WHERE s.provider_id = 'vodaphone' AND value = '20'
ORDER BY from_base64(val.bex) DESC;
So what am I doing wrong? Why is net not holding the value derived from the subquery?
Thanks
SELECT
v.*,
p.network, p.va
FROM
smv_ v
INNER JOIN
providers p ON p.provider_id = v.provider_id
WHERE
p.network = 'RUU=' AND p.va = 'MjU='
LIMIT 1;
The tables talk to each other via the JOIN syntax. This completely circumvents the need (and limitations) of sub-selects.
The INNER JOIN means that only fully successful matches are returned. You may need to adjust this type of join for your situation, but the SQL will return a row of all v columns where p.va = 'MjU=' and p.network = 'RUU=' and p.provider_id = v.provider_id.
What I was trying to explain in comments is that subqueries do not have any knowledge of their outer query:
SELECT *
FROM a
WHERE (SELECT * FROM b WHERE a)
AND (SELECT * FROM c WHERE a OR b)
In this layout (as you have in your question), b knows nothing about a, because the b query is executed first, then the c query, then finally the a query. So your original query is looking for WHERE p.provider_id = v.provider_id, but v has not yet been defined, so the result is false.
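If the lookup really has to be dynamic, the two-step logic is easy to express outside SQL: fetch the column name from the providers map, then filter smv rows on that column. A Python sketch of the intent (column names and values loosely based on the question's examples; purely illustrative):

```python
# providers: network name -> which smv column holds the value.
providers = {"Home": "va", "Recharge": "value"}

# smv rows as dicts; only the relevant columns are shown.
smv = [
    {"network": "Home", "va": "7", "value": ""},
    {"network": "Recharge", "va": "", "value": "9"},
]

def lookup(network, wanted):
    col = providers[network]            # step 1: find the column name
    for row in smv:                     # step 2: filter on that column
        if row["network"] == network and row[col] == wanted:
            return row
    return None

print(lookup("Recharge", "9")["value"])  # 9
```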
I'd like to count how many occurrences of a value happen before a specific value
Below is my starting table
+-----------------+--------------+------------+
| Id | Activity | Time |
+-----------------+--------------+------------+
| 1 | Click | 1392263852 |
| 2 | Error | 1392263853 |
| 3 | Finish | 1392263862 |
| 4 | Click | 1392263883 |
| 5 | Click | 1392263888 |
| 6 | Finish | 1392263952 |
+-----------------+--------------+------------+
I'd like to count how many clicks happen before a finish happens.
I've got a very roundabout way of doing it, where I write a function to find the last Finish activity and query the clicks between the Finishes.
Also repeat this for Error.
What I'd like to achieve is the below table
+-----------------+--------------+------------+--------------+------------+
| Id | Activity | Time | Clicks | Error |
+-----------------+--------------+------------+--------------+------------+
| 3 | Finish | 1392263862 | 1 | 1 |
| 6 | Finish | 1392263952 | 2 | 0 |
+-----------------+--------------+------------+--------------+------------+
This table is very long so I'm looking for an efficient solution.
If anyone has any ideas, I'd love to hear them.
Thanks heaps!
This is a complicated problem. Here is an approach to solving it. The groups between the "finish" records need to be identified as being the same, by assigning a group identifier to them. This identifier can be calculated by counting the number of "finish" records with a larger id.
Once this is assigned, your results can be calculated using an aggregation.
The group identifier can be calculated using a correlated subquery:
select max(id) as id, 'Finish' as Activity, max(time) as Time,
sum(Activity = 'Click') as Clicks, sum(activity = 'Error') as Error
from (select s.*,
(select sum(s2.activity = 'Finish')
from starting s2
where s2.id >= s.id
) as FinishCount
from starting s
) s
group by FinishCount;
A version that leverages user (session) variables:
SELECT MAX(id) id,
MAX(activity) activity,
MAX(time) time,
SUM(activity = 'Click') clicks,
SUM(activity = 'Error') error
FROM
(
SELECT t.*, #g := IF(activity <> 'Finish' AND #a = 'Finish', #g + 1, #g) g, #a := activity
FROM table1 t CROSS JOIN (SELECT #g := 0, #a := NULL) i
ORDER BY time
) q
GROUP BY g
Output:
| ID | ACTIVITY | TIME | CLICKS | ERROR |
|----|----------|------------|--------|-------|
| 3 | Finish | 1392263862 | 1 | 1 |
| 6 | Finish | 1392263952 | 2 | 0 |
Here is SQLFiddle demo
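For comparison, the grouping these queries simulate is a single pass over the rows: count clicks and errors, then emit and reset on each Finish. In Python, for illustration:

```python
# The question's rows: (id, activity, time).
events = [
    (1, "Click", 1392263852),
    (2, "Error", 1392263853),
    (3, "Finish", 1392263862),
    (4, "Click", 1392263883),
    (5, "Click", 1392263888),
    (6, "Finish", 1392263952),
]

result = []
clicks = errors = 0
for event_id, activity, time in events:
    if activity == "Click":
        clicks += 1
    elif activity == "Error":
        errors += 1
    else:  # Finish: emit the accumulated counts, then reset
        result.append((event_id, activity, time, clicks, errors))
        clicks = errors = 0

print(result)  # [(3, 'Finish', 1392263862, 1, 1), (6, 'Finish', 1392263952, 2, 0)]
```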
Try:
select x.id
, x.activity
, x.time
, sum(case when y.activity = 'Click' then 1 else 0 end) as clicks
, sum(case when y.activity = 'Error' then 1 else 0 end) as errors
from tbl x, tbl y
where x.activity = 'Finish'
and y.time < x.time
and (y.time > (select max(z.time) from tbl z where z.activity = 'Finish' and z.time < x.time)
or x.time = (select min(z.time) from tbl z where z.activity = 'Finish'))
group by x.id
, x.activity
, x.time
order by x.id
Here's another method of using variables, which is somewhat different from @peterm's:
SELECT
Id,
Activity,
Time,
Clicks,
Errors
FROM (
SELECT
t.*,
#clicks := #clicks + (activity = 'Click') AS Clicks,
#errors := #errors + (activity = 'Error') AS Errors,
#clicks := #clicks * (activity <> 'Finish'),
#errors := #errors * (activity <> 'Finish')
FROM
`starting` t
CROSS JOIN
(SELECT #clicks := 0, #errors := 0) i
ORDER BY
time
) AS s
WHERE Activity = 'Finish'
;
What's similar to Peter's query is that this one uses a subquery that's returning all the rows, setting some variables along the way and returning the variables' values as columns. That may be common to most methods that use variables, though, and that's where the similarity between these two queries ends.
The difference is in how the accumulated results are calculated. Here all the accumulation is done in the subquery, and the main query merely filters the derived dataset on Activity = 'Finish' to return the final result set. In contrast, the other query uses grouping and aggregation at the outer level to get the accumulated results, which may make it slower than mine in comparison.
At the same time Peter's suggestion is more easily scalable in terms of coding. If you happen to have to extend the number of activities to account for, his query would only need expansion in the form of adding one SUM(activity = '...') AS ... per new activity to the outer SELECT, whereas in my query you would need to add a variable and several expressions, as well as a column in the outer SELECT, per every new activity, which would bloat the resulting code much more quickly.
To start things off, I want to make it clear that I'm not trying to order by descending order.
I am looking to order by something else, but then filter further by displaying values from a second column only while each row's value is less than or equal to the one above it. Once it finds that the next value is higher, it stops.
Example:
+-------------------+-------------------+
| Ordered by column | Descending column |
+-------------------+-------------------+
| 353215            | 20                |
| 535325            | 15                |
| 523532            | 10                |
| 666464            | 30                |
| 473460            | 20                |
+-------------------+-------------------+
If given that data, I would like it to only return 20, 15 and 10. Because now that 30 is higher than 10, we don't care about what's below it.
I've looked everywhere and can't find a solution.
EDIT: removed the big-number init and added the counter in the ifnull test, so it works in pure MySQL: ifnull(#prec, counter) and not ifnull(#prec, 999999).
If your starting table is t1 and the base request was:
select id,counter from t1 order by id;
Then with a mysql variable you can do the job:
SET #prec=NULL;
select * from (
select id,counter,#prec:= if(
ifnull(#prec,counter)>=counter,
counter,
-1) as prec
from t1 order by id
) t2 where prec<>-1;
There may also be a way to fold the initialisation of #prec to NULL into the query itself, avoiding the separate SET (see the update below).
Here the prec column contains the first row's counter value, then the counter value of each row as long as it does not exceed the previous row's, and -1 once this condition fails.
Update
The outer select can be removed completely if the variable assignment is done in the WHERE clause:
SELECT #prec := NULL;
SELECT
id,
counter
FROM t1
WHERE
(#prec := IF(
IFNULL(#prec, counter) >= counter,
counter,
-1
)) IS NOT NULL
AND #prec <> -1
ORDER BY id;
regilero EDIT:
I can remove the first initialization query using a one-row derived table (left join) this way, though this may slow down the query:
(...)
FROM t1
LEFT JOIN (select #prec:=NULL as nullinit limit 1) as tmp1 ON tmp1.nullinit is null
(..)
As said by @Mike, using a simple UNION query, or even:
(...)
FROM t1 , (select #prec:=NULL) tmp1
(...)
is better if you want to avoid the first query.
So at the end the nicest solution is:
SELECT NULL AS id, NULL AS counter FROM dual WHERE (#prec := NULL)
UNION
SELECT id, counter
FROM t1
WHERE (
#prec := IF(
IFNULL(#prec, counter) >= counter,
counter,
-1 )) IS NOT NULL
AND #prec <> -1
ORDER BY id;
+--------+---------+
| id | counter |
+--------+---------+
| 353215 | 20 |
| 523532 | 10 |
| 535325 | 15 |
+--------+---------+
EXPLAIN SELECT output:
+----+--------------+------------+------+---------------+------+---------+------+------+------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------+------------+------+---------------+------+---------+------+------+------------------+
| 1 | PRIMARY | NULL | NULL | NULL | NULL | NULL | NULL | NULL | Impossible WHERE |
| 2 | UNION | t1 | ALL | NULL | NULL | NULL | NULL | 6 | Using where |
| NULL | UNION RESULT | <union1,2> | ALL | NULL | NULL | NULL | NULL | NULL | Using filesort |
+----+--------------+------------+------+---------------+------+---------+------+------+------------------+
You didn't find a solution because it is impossible.
SQL works only within a row; it cannot look at rows above or below it.
You could write a stored procedure to do this, essentially looping one row at a time and calculating the logic.
It would probably be easier to write it in the frontend language, whatever it is you are using.
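As that suggests, the scan is simple in a general-purpose language; a Python sketch using the question's sample data:

```python
# Rows already ordered by the first column, as in the question.
rows = [
    (353215, 20),
    (535325, 15),
    (523532, 10),
    (666464, 30),
    (473460, 20),
]

result = []
prev = None
for key, counter in rows:
    if prev is not None and counter > prev:
        break                   # stop at the first increase
    result.append(counter)
    prev = counter

print(result)  # [20, 15, 10]
```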
I'm afraid you can't do it in SQL. Relational databases were designed for different purpose so there is no abstraction like next or previous row. Do it outside the SQL in the 'wrapping' language.
I'm not sure whether these do what you want, and they're probably too slow anyway:
SELECT t1.col1, t1.col2
FROM tbl t1
WHERE t1.col2 = (SELECT MIN(t2.col2) FROM tbl t2 WHERE t2.col1 <= t1.col1)
Or
SELECT t1.col1, t1.col2
FROM tbl t1
INNER JOIN tbl t2 ON t2.col1 <= t1.col1
GROUP BY t1.col1, t1.col2
HAVING t1.col2 = MIN(t2.col2)
I guess you could maybe select them (in order) into a temporary table that also has an auto-incrementing column, then select from the temporary table, joining it to itself on the auto-incrementing column with t1.id = t2.id + 1. Then use WHERE criteria (with an appropriate ORDER BY and LIMIT 1) to find the t1.id of the first row where the descending column is greater in t2 than in t1. After that, you can select from the temporary table where the id is less than or equal to the id you just found. It's not exactly pretty though! :)
It is actually possible, but the performance isn't easy to optimize. If Col1 is ordered and Col2 is the descending column:
First you create a self join of each row with the next row (note that this only works if the column value is unique; if not, you need to join on unique values).
(Select Col1, (Select Min(Col2) as A2 from MyTable as B Where B.A2>A.Col1) As Col1FromNextRow From MyTable As A) As D
INNER JOIN
(Select Col1 As C1,Col2 From MyTable As C On C.C1=D.Col1FromNextRow)
Then you implement the "keep going until the first ascending value" bit:
Select Col2 FROM
(
(Select Col1, (Select Min(Col2) as A2 from MyTable as B Where B.A2>A.Col1) As Col1FromNextRow From MyTable As A) As D
INNER JOIN
(Select Col1 As C1,Col2 From MyTable As C On C.C1=D.Col1FromNextRow)
) As E
WHERE NOT EXISTS
(SELECT Col1 FROM MyTable As Z Where z.COL1<E.Col1 and Z.Col2 < E.Col2)
I don't have an environment to test this, so it probably has bugs. My apologies, but hopefully the idea is semi-clear.
I would still try to do it outside of SQL.