Imagine you have a table with 50000 rows and want to delete 5 entries based on the PK value. The expected query might look like this:
DELETE FROM table_name WHERE id = 5 OR id = 10 OR id = 15 OR id = 20 OR id = 25
This works as expected. However, if you're in a hurry and accidentally format it like this:
DELETE FROM table_name WHERE id = 5 OR 10 OR 15 OR 20 OR 25
It will delete all 50000 rows. Does anyone know if this is a bug? If not, I don't understand why it is treated as a delete-all instead of returning a syntax error.
That is not a bug in MySQL.
10, 15, 20 and 25 are truthy values, so as soon as one of them appears as an OR operand, the whole condition is true.
You should consider an IN clause.
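For example, a minimal sketch of the intended DELETE rewritten with IN (same table_name and id values as in the question):
-- each listed value is compared against id, so only the five intended rows are deleted
DELETE FROM table_name WHERE id IN (5, 10, 15, 20, 25);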
In MySQL, unlike standard SQL:
false and the integer 0 are treated the same
true and the integer 1 are treated the same
any integer value other than zero is also treated as true
See https://dev.mysql.com/doc/refman/8.0/en/boolean-literals.html
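A quick way to see this from a MySQL client (IS TRUE is MySQL's boolean test operator):
SELECT 0 IS TRUE, 1 IS TRUE, 25 IS TRUE; -- returns 0, 1, 1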
Your expression is combining several terms as boolean conditions:
(id = 5) OR (10) OR (15) OR (20) OR (25)
Since all non-zero integers are treated as true, this ends up being like:
(id = 5) OR (true) OR (true) OR (true) OR (true)
And you should recall from high school math that X OR true evaluates as true.
Note that your expression would also do something you don't expect in many other programming languages: using integers as a series of boolean terms doesn't compare those integers to the operand of the first term.
One of the guys shared a query with me, but I still don't understand what it means. Does anyone have an idea why we use "&" in a MySQL query?
It does return results, but I don't know what condition is being applied.
It is a bitwise logical AND operator.
The operator & is the bitwise AND operator for MySQL.
...why we use "&" in the condition
Your question is application-specific and can't be answered without knowing the requirement.
The binary representation of 30 is 00011110.
As you can see, this number has bits 2 to 5 (counting from the right, starting at 1) set to 1.
Any integer, such as entity_id, that has any of these bits set in its binary representation will produce a non-zero value from the operation entity_id & 30.
For example when entity_id = 12, then entity_id in binary is 00001100 and:
00001100(=12) & 00011110(=30) = 00001100(=12)
which is a non-zero value, interpreted as TRUE in the WHERE clause.
But when entity_id = 129, then entity_id in binary is 10000001 and:
10000001(=129) & 00011110(=30) = 00000000(=0)
which is 0, interpreted as FALSE in the WHERE clause.
So, entity_id = 12 will be returned by the query but entity_id = 129 will be filtered out.
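You can verify both cases directly (the values follow from the bit patterns above):
SELECT 12 & 30 AS kept, 129 & 30 AS filtered; -- returns 12, 0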
& is the bitwise AND operator.
Please be aware that this condition is terribly wrong if you're looking for id exactly equal to 30; for example, it will also match 62 and 2, 4, 6, ... (any integer whose binary form shares a 1 bit with 11110, the binary form of 30, in its five least-significant bits).
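If the intent really is "the value is exactly 30", or "all of 30's bits are set", here is a hedged sketch of the two usual alternatives (entity_id is the column discussed above; some_table is a placeholder, since the real table name isn't shown):
SELECT * FROM some_table WHERE entity_id = 30;          -- exact value match
SELECT * FROM some_table WHERE (entity_id & 30) = 30;   -- every bit of 30 must be set, not just any one of them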
I am experiencing some weird behavior with MySQL. Basically I have a table like this:
ID string
1 14
2 10,14,25
Why does this query pull id 2?
SELECT * FROM exampletable where string = 10
Surely it should be looking for an exact match, because this only pulls id 1:
SELECT * FROM exampletable where string = 14
I am aware of FIND_IN_SET; I just find it odd that the first query even pulls anything. It's behaving like this query:
SELECT * FROM exampletable where string LIKE '10%'
When you compare a numeric and a string value, MySQL attempts to convert the string to a number and compare. Strings that merely start with a number are parsed up to the first non-numeric character. Thus we have:
SELECT '10,14,25' = 1 -- 0
SELECT '10,14,25' = 10 -- 1
SELECT 'FOOBAR' = 1 -- 0
SELECT 'FOOBAR' = 0 -- 1
SELECT '123.456' = 123 -- 0
SELECT '123.456FOOBAR' = 123.456 -- 1
The behavior is documented at https://dev.mysql.com/doc/refman/8.0/en/type-conversion.html (in your example it is the last rule):
...
If one of the arguments is a decimal value, comparison depends on the
other argument. The arguments are compared as decimal values if the
other argument is a decimal or integer value, or as floating-point
values if the other argument is a floating-point value.
In all other cases, the arguments are compared as floating-point
(real) numbers.
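To avoid the implicit numeric conversion entirely, here is a sketch of two common fixes against the exampletable from the question (FIND_IN_SET is MySQL's built-in lookup for comma-separated lists):
SELECT * FROM exampletable WHERE string = '10';                 -- string-to-string comparison: no coercion, exact match only
SELECT * FROM exampletable WHERE FIND_IN_SET('10', string) > 0; -- treat the column as a comma-separated list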
I am trying to find a reliable query which returns the first instance of an acceptable insert range.
Research:
Some of the links below address similar questions, but I could get none of them to work for me.
Find first available date, given a date range in SQL
Find closest date in SQL Server
MySQL difference between two rows of a SELECT Statement
How to find a gap in range in SQL
and more...
Objective Query Function:
InsertRange(1) = (StartRange(i) - EndRange(i-1)) > NewValue
Where InsertRange(1) is the value the query should return. In other words, this would be the first instance where the above condition is satisfied.
Table Structure:
Primary Key: StartRange
StartRange(i-1) < StartRange(i)
StartRange(i-1) + EndRange(i-1) < StartRange(i)
Example Dataset
Below is an example User table (3 columns) with a set range distribution. StartRanges are always ordered in a strictly ascending way, UserIDs are arbitrary strings, and only the sequences of StartRange and EndRange matter:
StartRange EndRange UserID
312 6896 user0
7134 16268 user1
16877 22451 user2
23137 25142 user3
25955 28272 user4
28313 35172 user5
35593 38007 user6
38319 38495 user7
38565 45200 user8
46136 48007 user9
My current Query
I am trying to use this query at the moment:
SELECT t2.StartRange, t2.EndRange
FROM user AS t1, user AS t2
WHERE (t1.StartRange - t2.StartRange+1) > NewValue
ORDER BY t1.EndRange
LIMIT 1
Example Case
Given the table, if NewValue = 800, then the returned answer should be 23137. This means, the first available slot would be between user3 and user4 (with an actual slot size = 813):
InsertRange(1) = (StartRange(i) - EndRange(i-1)) > NewValue
InsertRange = (StartRange(6) - EndRange(5)) > NewValue
23137 = 25955 - 25142 > 800
More Comments
My query above seemed to be working for the special case where StartRanges were tightly packed (i.e. StartRange(i) = StartRange(i-1) + EndRange(i-1) + 1). This no longer works with a less tightly packed set of StartRanges.
Keep in mind that SQL tables have no implicit row order. It seems fair to order your table by StartRange value, though.
We can start to solve this by writing a query to obtain each row paired with the row preceding it. In MySQL, it's hard to do this beautifully because it lacks a row-numbering function.
This works (http://sqlfiddle.com/#!9/4437c0/7/0). It may have nasty performance because it generates O(n^2) intermediate rows. There's no row for user0; it can't be paired with any preceding row because there is none.
select MAX(a.StartRange) SA, MAX(a.EndRange) EA,
b.StartRange SB, b.EndRange EB , b.UserID
from user a
join user b ON a.EndRange <= b.StartRange
group by b.StartRange, b.EndRange, b.UserID
Then, you can use that as a subquery, and apply your conditions, which are
gap >= 800 (WHERE SB-EA >= 800)
first matching row, i.e. lowest StartRange value (ORDER BY SB)
just one row (LIMIT 1)
Here's the query (http://sqlfiddle.com/#!9/4437c0/11/0)
SELECT SB-EA Gap,
EA+1 Beginning_of_gap, SB-1 Ending_of_gap,
UserId UserID_after_gap
FROM (
select MAX(a.StartRange) SA, MAX(a.EndRange) EA,
b.StartRange SB, b.EndRange EB , b.UserID
from user a
join user b ON a.EndRange <= b.StartRange
group by b.StartRange, b.EndRange, b.UserID
) pairs
WHERE SB-EA >= 800
ORDER BY SB
LIMIT 1
Notice that you may actually want the smallest matching gap instead of the first matching gap. That's called best fit, rather than first fit. To get that you use ORDER BY SB-EA instead.
Edit: There is another way in MySQL to join adjacent rows that doesn't have the O(n^2) performance issue. It involves employing user variables to simulate a row_number() function. The query involved is a hairball (that's a technical term). It's described in the third alternative of the answer to this question: How do I pair rows together in MYSQL?
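As a side note, on MySQL 8.0 or later (newer than what this answer assumes), LAG() pairs each row with its predecessor without the O(n^2) self-join; a minimal sketch against the same user table and the NewValue = 800 example:
SELECT EA + 1 AS Beginning_of_gap, SB - 1 AS Ending_of_gap, SB - EA AS Gap
FROM (
    SELECT StartRange AS SB,
           LAG(EndRange) OVER (ORDER BY StartRange) AS EA  -- previous row's EndRange
    FROM user
) pairs
WHERE SB - EA >= 800   -- the first row has EA = NULL, so it is filtered out automatically
ORDER BY SB
LIMIT 1;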
I have 10 sets of strings, each set having 9 strings. Of these 10 sets, all strings in the first set have length 10, those in the second set have length 9, and so on. Finally, all strings in the 10th set have length 1.
There is a common prefix of (length - 2) characters in each set, and the prefix length reduces by 1 in the next set. Thus, the first set has 8 characters in common, the second has 7, and so on.
Here is what a sample of the 10 sets looks like:
pu3q0k0vwn
pu3q0k0vwp
pu3q0k0vwr
pu3q0k0vwq
pu3q0k0vwm
pu3q0k0vwj
pu3q0k0vtv
pu3q0k0vty
pu3q0k0vtz
pu3q0k0vw
pu3q0k0vy
pu3q0k0vz
pu3q0k0vx
pu3q0k0vr
pu3q0k0vq
pu3q0k0vm
pu3q0k0vt
pu3q0k0vv
pu3q0k0v
pu3q0k0y
pu3q0k1n
pu3q0k1j
pu3q0k1h
pu3q0k0u
pu3q0k0s
pu3q0k0t
pu3q0k0w
pu3q0k0
pu3q0k2
pu3q0k3
pu3q0k1
pu3q07c
pu3q07b
pu3q05z
pu3q0hp
pu3q0hr
pu3q0k
pu3q0m
pu3q0t
pu3q0s
pu3q0e
pu3q07
pu3q05
pu3q0h
pu3q0j
pu3q0
pu3q2
pu3q3
pu3q1
pu3mc
pu3mb
pu3jz
pu3np
pu3nr
pu3q
pu3r
pu3x
pu3w
pu3t
pu3m
pu3j
pu3n
pu3p
pu3
pu9
pud
pu6
pu4
pu1
pu0
pu2
pu8
pu
pv
0j
0h
05
pg
pe
ps
pt
p
r
2
0
b
z
y
n
q
Requirement:
I have a table PROFILES having columns SRNO (type bigint, primary key) and UNIQUESTRING (type char(10), unique key). I want to find 450 SRNOs for matching UNIQUESTRINGs from those 10 sets.
First find strings like those in the first set. If we don't get enough results (i.e. 450), find strings like those in the second set. If we still don't get enough results (450 minus the results of the first set), find strings like those in the third set. And so on.
Existing Solution:
I've written a query something like this:
select srno from profiles
where ( (uniquestring like 'pu3q0k0vwn%')
or (uniquestring like 'pu3q0k0vwp%') -- ...and so on, one OR per uniquestring listed above, down to the last one
or (uniquestring like 'n%')
or (uniquestring like 'q%')
)
limit 450
However, after getting feedback from Rick James in this answer, I realized this is not an optimized query, as it touches many more rows than it needs to.
So I plan to rewrite the query like this:
(select srno from profiles where uniquestring like 'pu3q0k0vwn%' LIMIT 450)
UNION DISTINCT
(select srno from profiles where uniquestring like 'pu3q0k0vwp%' LIMIT 450); -- and more such clauses after this for each uniquestring
I'd like to know if there are any better solutions for doing this.
SELECT ...
WHERE str LIKE 'pu3q0k0vw%' AND -- the 10-char set
str REGEXP '^pu3q0k0vw[nprqmj]' -- the 9 next letters
LIMIT ...
# then check for 450; if not enough, continue...
SELECT ...
WHERE str LIKE 'pu3q0k0vt%' AND -- the 10-char set
str REGEXP '^pu3q0k0vt[vyz]' -- the 9 next letters
LIMIT 450
# then check for 450; if not enough, continue...
etc.
SELECT ...
WHERE str LIKE 'pu3q0k0v%' AND -- the 9-char set
str REGEXP '^pu3q0k0v[wyzxrqmtv]' -- the 9 next letters
LIMIT ...
# check, etc; for a total of 10 SELECTs or 450 rows, whichever comes first.
This will be 10+ selects. Each select will be somewhat optimized by first picking rows with a common prefix via LIKE, then double-checking with a REGEXP.
(If you don't like splitting the inconsistent pu3q0k0vw vs. pu3q0k0vt, we can discuss things further.)
You say "prefix"; I have coded the LIKE and REGEXP to assume arbitrary text after the prefix given.
UNION is not viable, since it will (I think) gather all the rows before picking 450. Each SELECT will stop at the LIMIT if there is no DISTINCT, GROUP BY, or ORDER BY that requires gathering everything first.
REGEXP is not smart enough to avoid scanning the entire table; adding the LIKE avoids such (except when more than, say, 20% of the rows match the LIKE).
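For instance, here is a hedged sketch of one way to avoid splitting the 10-char set: keep a single LIKE on the shared 8-character prefix so the index is still usable, and let a REGEXP alternation cover the divergent ninth character (pattern derived from the sample set above):
SELECT ...
WHERE str LIKE 'pu3q0k0v%'                        -- shared prefix, index-friendly
  AND str REGEXP '^pu3q0k0v(w[nprqmj]|t[vyz])'    -- the 9 strings of the 10-char set
LIMIT ...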
Picture a table with fields (Id, Valid, Value)
Valid = boolean
Value = number from 0 to 100
What I want is a report that counts the number of records where (valid = 0), and then gives me the total number of cases where (value < 70) and the number of cases where (value >= 70).
The problem is that the "value" field could be empty on some of the records and I only want the records where the value field is not empty.
I know that the second value (value >= 70) could be derived, but the problem is that I can't simply do (total number of records - number of records where value < 70), because of the records where "value" is null...
And then I want to create a graphic with these values, to see the percentage of records below and above 70.
"The problem is that the "value" field could be empty on some of the records and I only want the records where the value field is not empty."
Use a WHERE clause to exclude rows whose "value" field is Null.
Here is sample data for tblMetraton.
Id  Valid  Valu
1   -1     2
2   0      4
3   -1     6
4   0
5   0      90
I used Valu for the field name because Value is a reserved word.
You can use the Count() function in a query and take advantage of the fact it only counts non-Null values. So use Count() with an IIf() expression which returns a non-Null value (I used 1) for the condition you want to match and Null otherwise.
SELECT
Count(IIf(Valid=0,1,Null)) AS valid_false,
Count(IIf(Valu<70,1,Null)) AS below_70,
Count(IIf([Valu]>=70,1,Null)) AS at_least70
FROM tblMetraton AS m
WHERE (((m.[Valu]) Is Not Null));
Based on my sample data in tblMetraton, that query gives me this result set.
valid_false below_70 at_least70
2 3 1
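If you also want the percentages for your graphic straight from the query, here is a sketch reusing the same Count()/IIf() pattern (the alias names are mine; the fractions are relative to the rows where Valu is not Null):
SELECT
Count(IIf(Valu<70,1,Null)) / Count(*) AS frac_below_70,
Count(IIf(Valu>=70,1,Null)) / Count(*) AS frac_at_least_70
FROM tblMetraton AS m
WHERE m.Valu Is Not Null;
With the sample data above, that returns 0.75 and 0.25.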
If my sample data does not address the variability you're dealing with, and/or if my query results don't match your requirements, please show us your own sample data and the results you want based on that sample.