My table looks something like this:

DocumentID  AttributeID  LongValue  StringValue  BooleanValue
100         1            null       null         1
100         2            123        null         null
100         3            null       test         null
Each AttributeID has a type, and only the column for that type is filled; everything else is null. A document can have multiple attributes.
My query demands that I find documents where:
Attribute ID 1 has value 1
Attribute ID 2 has value 123
Attribute ID 3 has value test
I was writing a query like this:
select documentID
from table
where ((AttributeID=1 AND BooleanValue=1) AND
       (AttributeID=2 AND LongValue=123) AND
       (AttributeID=3 AND StringValue="test"))
The above query gives me zero results, although document 100 satisfies my constraints: the WHERE clause is evaluated against one row at a time, and no single row can have AttributeID equal to 1, 2 and 3 at once.
How do I change my query to get document ID 100 as the result?
SELECT DocumentID
FROM tablename
WHERE (AttributeID = 1 AND BooleanValue = 1) OR
      (AttributeID = 2 AND LongValue = 123) OR
      (AttributeID = 3 AND StringValue = 'test')
GROUP BY DocumentID
HAVING COUNT(*) = 3
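This OR-then-count pattern (sometimes called relational division) can be sanity-checked outside MySQL. The sketch below uses Python's built-in sqlite3 as a stand-in; the table name `attributes` and the extra document 200 are made up for the test.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE attributes (
        DocumentID INTEGER,
        AttributeID INTEGER,
        LongValue INTEGER,
        StringValue TEXT,
        BooleanValue INTEGER
    )
""")
conn.executemany(
    "INSERT INTO attributes VALUES (?, ?, ?, ?, ?)",
    [
        (100, 1, None, None, 1),
        (100, 2, 123, None, None),
        (100, 3, None, "test", None),
        (200, 1, None, None, 1),   # satisfies only one condition, must be excluded
    ],
)

rows = conn.execute("""
    SELECT DocumentID
    FROM attributes
    WHERE (AttributeID = 1 AND BooleanValue = 1)
       OR (AttributeID = 2 AND LongValue = 123)
       OR (AttributeID = 3 AND StringValue = 'test')
    GROUP BY DocumentID
    HAVING COUNT(*) = 3
""").fetchall()
print(rows)  # [(100,)]
```

If the same attribute could appear more than once per document, `HAVING COUNT(DISTINCT AttributeID) = 3` would be the safer condition, so duplicate rows cannot fake a match.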
Using MariaDB version 10.5.15 (and SQLAlchemy with Python 3.9).
After filtering the following table with e.g. count == 3, I would get the rows with IDs
2, 4, 7 and 12.
Then, for each of these rows, I want to add every row (of the same table) that has the same group_id (excluding null) as row 2, 4, 7 or 12 but a different group_leader value. So I would like to add
(same group_id, different group_leader)
1, 3 (coming from id 2)
5 (coming from id 4)
10 (coming from id 7, and only id 10, because group_leader must be different)
id  count  group_id  group_leader
1   7      1         null
2   3      1         1
3   2      1         null
4   3      2         1
5   6      2         null
6   2      3         null
7   3      3         null
8   1      3         null
9   2      3         null
10  5      3         1
11  5      null      null
12  3      null      null
Is it possible to first do the SELECT ... FROM ... WHERE ... and then add these other rows, or do I first have to do something like a join?
This is the actual example:
def query_positions(position_filter: dict):
    result = db.session.query(Positions).join(
        ProjectCrafts, Positions.project_craft_id == ProjectCrafts.project_craft_id).join(
        Projects, Positions.project_id == Projects.project_id
    )
    if "firm_id" in position_filter:
        result = result.filter(Positions.firm_id == position_filter["firm_id"])
    if "craft" in position_filter:
        result = result.filter(ProjectCrafts.craft == position_filter["craft"])
    if "craft_name" in position_filter:
        result = result.filter(ProjectCrafts.craft_name == position_filter["craft_name"])
    positions1 = aliased(Positions)
    result = result.join(positions1, Positions.is_parent == 1, Positions.family_id == positions1.family_id).join(
        Positions.family_id == positions1.family_id)
    positions = result.all()
    return positions
The problem comes after positions1 = aliased(Positions), and I get this error:
...
in _join_determine_implicit_left_side
raise sa_exc.InvalidRequestError( sqlalchemy.exc.InvalidRequestError: Don't know how to join to
<AliasedInsp at 0x7fabd1ad30; Positions(Positions)>. Please use the
.select_from() method to establish an explicit left side, as well as
providing an explicit ON clause if not present already to help resolve
the ambiguity.
You can join the filtered table (the rows with count_ = 3) with the original table, where you impose the two main conditions:
"group_id" are the same
"group_leader" are different
Then apply a UNION between the two result sets, optionally followed by an ORDER BY clause to order your values on the id.
Given that a comparison between a NULL value and a non-NULL value is neither true nor false (the comparison itself yields NULL), a way to compare them is transforming NULL values to -1 (assuming this value cannot be employed by "group_leader" values) using the COALESCE function.
WITH cte AS (
SELECT * FROM tab WHERE count_ = 3
)
SELECT tab.*
FROM tab
INNER JOIN cte
ON tab.group_id = cte.group_id
AND COALESCE(tab.group_leader, -1) <> COALESCE(cte.group_leader, -1)
UNION
SELECT * FROM cte
ORDER BY id
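The query can be exercised with Python's built-in sqlite3, which supports CTEs and COALESCE. The data below is the table from the question, with the count column named count_ as in the SQL above; the result contains the four filtered rows plus the added rows 1, 3, 5 and 10.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tab (id INTEGER, count_ INTEGER, group_id INTEGER, group_leader INTEGER)"
)
conn.executemany(
    "INSERT INTO tab VALUES (?, ?, ?, ?)",
    [
        (1, 7, 1, None), (2, 3, 1, 1), (3, 2, 1, None), (4, 3, 2, 1),
        (5, 6, 2, None), (6, 2, 3, None), (7, 3, 3, None), (8, 1, 3, None),
        (9, 2, 3, None), (10, 5, 3, 1), (11, 5, None, None), (12, 3, None, None),
    ],
)

ids = [r[0] for r in conn.execute("""
    WITH cte AS (
        SELECT * FROM tab WHERE count_ = 3
    )
    SELECT tab.*
    FROM tab
    INNER JOIN cte
        ON tab.group_id = cte.group_id
        AND COALESCE(tab.group_leader, -1) <> COALESCE(cte.group_leader, -1)
    UNION
    SELECT * FROM cte
    ORDER BY id
""")]
print(ids)  # [1, 2, 3, 4, 5, 7, 10, 12]
```

Note that rows 11 and 12 have a NULL group_id, so the join condition tab.group_id = cte.group_id never matches them; row 12 survives only via the UNION with the filtered set.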
I have this table filled with values, and it's all structured in JSON.
PersonID  ValueID  Value
1         1        {"Values":[{"ID":1,"Value":true},{"ID":2,"Value":true}]}
1         2        {"Values":[{"ID":2,"Value":false},{"ID":3,"Value":true}]}
So I was wondering if there is any way to query on the ID and the Value at the same time, so that I could, for example, search for "ID":1 and "Value":true and have it return the first row.
I've tried JSON_CONTAINS_PATH, JSON_CONTAINS and JSON_SEARCH, but none of them takes into account that I want to search in a list. I have tried $.Values[0].ID, and that returns the ID, but I would need to loop over all of them in the WHERE clause; otherwise I only search the first index of the JSON array.
Can anyone point me in the right direction?
SELECT
    PersonID,
    ValueID,
    x1.*
FROM table1
CROSS JOIN JSON_TABLE(table1.Value,
    '$.Values[*]' COLUMNS(
        ID INTEGER PATH '$.ID',
        Value INTEGER PATH '$.Value'
    )) AS x1
output:

PersonID  ValueID  ID  Value
1         1        1   1
1         1        2   1
1         2        2   0
1         2        3   1
SELECT *
FROM table1
WHERE table1.value->'$.Values[0]' = JSON_OBJECT('ID',1,'Value',true)
I am trying to figure out a way to show all records in a table where a specific field does not contain certain values. The table layout is:
id
tenant_id
request_action
request_id
request_status
hash
Each request_id could have multiple actions so it could look like:
id  tenant_id  request_action  request_id  request_status  hash
1   1          email           1234        1               ffffd9b00cf893297ab737243c2b921c
2   1          email           1234        0               ffffd9b00cf893297ab737243c2b921c
3   1          email           1234        0               ffffd9b00cf893297ab737243c2b921c
4   1          email           1235        1               a50ee458c9878190c24cdf218c4ac904
5   1          email           1235        1               a50ee458c9878190c24cdf218c4ac904
6   1          email           1235        1               a50ee458c9878190c24cdf218c4ac904
7   1          email           1236        1               58c2869bc4cc38acc03038c7bef14023
8   1          email           1236        2               58c2869bc4cc38acc03038c7bef14023
9   1          email           1236        2               58c2869bc4cc38acc03038c7bef14023
request_status can either be 0 (pending), 1 (sent) or 2 (failed). I want to find all hashes where all the request_status values within that hash are set to 1.
In the above examples, a50ee458c9878190c24cdf218c4ac904 should return as a match, as all its request_status values are 1. ffffd9b00cf893297ab737243c2b921c should not: whilst it contains a 1, it also contains some 0s. Likewise 58c2869bc4cc38acc03038c7bef14023 should not, because whilst it contains a 1, it also contains some 2s.
I tried:
SELECT
*
from
table
where request_action='email' and request_status!=0 and request_status!=2
group by hash
However, this doesn't give me the result I need. How can I return only the hashes where request_status is set to 1 for every instance of that hash?
Not sure why you would need a GROUP BY here. You'd want to GROUP BY if you were going to aggregate, e.g. concatenate data using GROUP_CONCAT or apply other aggregate functions (SUM, MAX, etc.).
Also, instead of putting multiple negative conditions in your WHERE clause (request_status != 0 AND request_status != 2), why not just match the status you want?
SELECT * FROM test WHERE request_action = 'email' AND request_status = 1
Update Based on Your Comment
If you don't want to return any hashes that have a status of 0 or 2, you can do this:
SELECT
*
FROM
test t
WHERE
request_action = 'email' AND request_status = 1
AND HASH NOT IN (SELECT HASH FROM test WHERE request_status IN (0, 2))
Just make sure you have an index on hash, otherwise this is going to be really slow.
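Under the stated assumption that request_status is only ever 0, 1 or 2, the NOT IN approach can be checked with Python's built-in sqlite3. The sketch below returns distinct hashes rather than whole rows, which is a small simplification of the SELECT * above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE test (
        id INTEGER, tenant_id INTEGER, request_action TEXT,
        request_id INTEGER, request_status INTEGER, hash TEXT
    )
""")
conn.executemany(
    "INSERT INTO test VALUES (?, ?, ?, ?, ?, ?)",
    [
        (1, 1, 'email', 1234, 1, 'ffffd9b00cf893297ab737243c2b921c'),
        (2, 1, 'email', 1234, 0, 'ffffd9b00cf893297ab737243c2b921c'),
        (3, 1, 'email', 1234, 0, 'ffffd9b00cf893297ab737243c2b921c'),
        (4, 1, 'email', 1235, 1, 'a50ee458c9878190c24cdf218c4ac904'),
        (5, 1, 'email', 1235, 1, 'a50ee458c9878190c24cdf218c4ac904'),
        (6, 1, 'email', 1235, 1, 'a50ee458c9878190c24cdf218c4ac904'),
        (7, 1, 'email', 1236, 1, '58c2869bc4cc38acc03038c7bef14023'),
        (8, 1, 'email', 1236, 2, '58c2869bc4cc38acc03038c7bef14023'),
        (9, 1, 'email', 1236, 2, '58c2869bc4cc38acc03038c7bef14023'),
    ],
)

# Keep hashes with a status-1 row, then exclude any hash that also has a 0 or 2.
hashes = {r[0] for r in conn.execute("""
    SELECT DISTINCT hash FROM test
    WHERE request_action = 'email' AND request_status = 1
      AND hash NOT IN (SELECT hash FROM test WHERE request_status IN (0, 2))
""")}
print(hashes)  # {'a50ee458c9878190c24cdf218c4ac904'}
```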
CREATE TABLE temp SELECT hash FROM your_table WHERE request_status = 1 GROUP BY hash;
ALTER TABLE temp ADD INDEX (hash);
DELETE FROM temp WHERE hash IN (SELECT hash FROM your_table WHERE request_status != 1 GROUP BY hash);
SELECT * FROM your_table WHERE hash IN (SELECT hash FROM temp);
I wasn't sure how to really search for this.
Let's say I have a simple table like this:
ID Type
1 0
1 1
2 1
3 0
4 0
4 1
How could I select all ID's which have a type of both 0 and 1?
SELECT id
FROM t
GROUP BY id
HAVING SUM(type=0)>0
   AND SUM(type=1)>0
You just group by id, then with HAVING you use post-aggregation filtering to check that both a 0 and a 1 occur.
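A quick check of the HAVING SUM(...) trick, using Python's built-in sqlite3 (which, like MySQL, evaluates the boolean expression type=0 as 0 or 1) and the table from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, type INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1, 0), (1, 1), (2, 1), (3, 0), (4, 0), (4, 1)])

# SUM(type=0) counts the rows with type 0 in each group, and likewise for type 1;
# requiring both sums to be positive keeps only ids that have both types.
ids = [r[0] for r in conn.execute("""
    SELECT id
    FROM t
    GROUP BY id
    HAVING SUM(type = 0) > 0
       AND SUM(type = 1) > 0
    ORDER BY id
""")]
print(ids)  # [1, 4]
```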
HAVING is pretty expensive, and that query can't hit keys. An alternative is a self-join:
SELECT ID FROM foo AS foo0 JOIN foo AS foo1 USING (ID) WHERE foo0.Type = 0 AND foo1.Type = 1 GROUP BY foo0.ID
A more generalized way of doing this would be to use a CASE column for each value you need to test, combined with a GROUP BY on the id column. If you have n conditions to test for, you get one column per condition indicating whether it is met for a given id. The filtering then becomes trivial: you can treat it like any multi-column filter, or use the grouping as your subquery, which keeps the code simpler and the logic easier to follow.
SELECT id, Type0, Type1
FROM (
    SELECT id,
           MAX(CASE WHEN type = 0 THEN 1 ELSE 0 END) AS Type0,
           MAX(CASE WHEN type = 1 THEN 1 ELSE 0 END) AS Type1
    FROM t
    GROUP BY id
) AS pivoted
WHERE Type0 = 1 AND Type1 = 1
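The same pivot can be sketched in sqlite3; the subquery alias pivoted and the 1/0 flags are illustrative choices, not required names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, type INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1, 0), (1, 1), (2, 1), (3, 0), (4, 0), (4, 1)])

# One flag column per condition; MAX turns "any row in the group matched"
# into a 1, and the outer WHERE demands every flag be set.
rows = conn.execute("""
    SELECT id, Type0, Type1
    FROM (
        SELECT id,
               MAX(CASE WHEN type = 0 THEN 1 ELSE 0 END) AS Type0,
               MAX(CASE WHEN type = 1 THEN 1 ELSE 0 END) AS Type1
        FROM t
        GROUP BY id
    ) AS pivoted
    WHERE Type0 = 1 AND Type1 = 1
    ORDER BY id
""").fetchall()
print(rows)  # [(1, 1, 1), (4, 1, 1)]
```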
I have a table with the following data (here the key field is of the form site_<siteid>_<marker>):
id userid key value
1 1 site_1_name http://example1.com
2 1 site_1_user user1
3 1 site_2_name http://example2.com
4 2 site_1_name http://example3.com
5 2 site_1_user user2
and I have site mapping table
oldsiteid newsiteid
1 120
2 152
Now I need to update the first table so that only the site id inside the key field changes: where it matches an oldsiteid in the second table, it should be replaced by the corresponding newsiteid.
The output should look like:
id userid key value
1 1 site_120_name http://example1.com
2 1 site_120_user user1
3 1 site_152_name http://example2.com
4 2 site_120_name http://example3.com
5 2 site_120_user user2
How to achieve this?
You will have to translate this REXX into SQL; the functions are straightforward:
old = 'user_1_ddd'
n = "333"
new = substr(old,1,index(old,"_")) || n || substr(old,index(old,"_") + index(substr(old, index(old,"_")+1) ,"_"))
results in user_333_ddd
substr is the same in both
for index, use LOCATE (or INSTR)
for || use CONCAT
I do not have MySQL, but this should work (note that key is a reserved word in MySQL, so it must be quoted with backticks):
UPDATE TargetTable
SET `key` = CONCAT
(
    SUBSTRING_INDEX(`key`, '_', 1)
    , '_'
    , (SELECT newsiteid FROM MappingTable WHERE MappingTable.oldsiteid = SUBSTRING_INDEX(SUBSTRING_INDEX(TargetTable.`key`, '_', -2), '_', 1))
    , '_'
    , SUBSTRING_INDEX(`key`, '_', -1)
)
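The string surgery inside the CONCAT can be checked in plain Python before running the UPDATE. The mapping values below come from the question; remap is a hypothetical helper mirroring the SUBSTRING_INDEX logic (prefix, site id, suffix):

```python
# oldsiteid -> newsiteid, as in the question's mapping table.
mapping = {1: 120, 2: 152}

def remap(key: str) -> str:
    # 'site_1_name' -> ('site', '1', 'name'); maxsplit=2 keeps any further
    # underscores inside the suffix intact.
    prefix, old_id, suffix = key.split("_", 2)
    return f"{prefix}_{mapping[int(old_id)]}_{suffix}"

keys = ["site_1_name", "site_1_user", "site_2_name"]
print([remap(k) for k in keys])
# ['site_120_name', 'site_120_user', 'site_152_name']
```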