I want to translate SQL to SQLAlchemy and have a query with nested CASE statements.
The simple case works:
stmt = sqlalchemy.select([self.tusg_view_specials]).where(
    sqlalchemy.case([
        (self.tusg_view_specials.c.webo_close_date >= (datetime.date.today() - datetime.timedelta(days=30)), 1),
        (self.tusg_view_specials.c.wo_closed_date >= (datetime.date.today() - datetime.timedelta(days=61)), 1)
    ], else_=0),
)
But when I have a nested CASE, meaning that the THEN is a CASE clause instead of a simple value, it fails:
stmt = sqlalchemy.select([self.tusg_view_specials]).where(
    sqlalchemy.case([
        (self.tusg_view_specials.c.webo_close_date >= (datetime.date.today() - datetime.timedelta(days=30)), 1),
        (self.tusg_view_specials.c.wo_closed_date >= (datetime.date.today() - datetime.timedelta(days=61)), 1),
        (self.tusg_view_specials.c.work_order_number is None,
         sqlalchemy.case([(self.tusg_view_specials.c.flag_is_abw == 1, 1)], else_=0))
    ], else_=0),  # <<-- This line is shown to cause the error
)
I get the following error message and don't know how to deal with it:
sqlalchemy.exc.ArgumentError: Ambiguous literal: False. Use the 'text()' function to indicate a SQL expression literal, or 'literal()' to indicate a bound value.
I can read the text, but I don't know how to interpret it. Searching for "nested case" together with SQLAlchemy turns up little to nothing.
As Ilja Everilä also wrote, the cause is the "is None", which needs to be replaced by
table_instance.c.work_order_number.is_(None)
or
table_instance.c.work_order_number == None
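Applied to the query from the question, only the is None line changes; a sketch with the original names:

stmt = sqlalchemy.select([self.tusg_view_specials]).where(
    sqlalchemy.case([
        (self.tusg_view_specials.c.webo_close_date >= (datetime.date.today() - datetime.timedelta(days=30)), 1),
        (self.tusg_view_specials.c.wo_closed_date >= (datetime.date.today() - datetime.timedelta(days=61)), 1),
        # is_(None) renders as SQL "IS NULL"; plain "is None" is evaluated
        # by Python to the constant False, which caused the ArgumentError
        (self.tusg_view_specials.c.work_order_number.is_(None),
         sqlalchemy.case([(self.tusg_view_specials.c.flag_is_abw == 1, 1)], else_=0))
    ], else_=0),
)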
Related
In the query below, I keep getting the error "An expression of non boolean type specified in a context where a condition is expected near End". My code is below; I'm trying not to return the rows where pk_street_name is NULL in the join, but I get the error listed above. How can I fix this?
result = (
    session.query(tamDnRangeMap, tamStreet)
    .join(tamStreet)
    .filter(
        case(
            [(tamDnRangeMap.pk_street_name == NULL, 0)],
            else_=1
        )
    )
    .all()
)
The first remark is that you don't want equality comparisons anywhere near NULL in SQL; that is done with IS or IS NOT.
Once you know that, you can use SQLAlchemy's is_ or isnot* operators.
All in all, you're using CASE where you don't really need it; put the IS NOT NULL condition in the filter directly.
result = (
    session.query(tamDnRangeMap, tamStreet)
    .join(tamStreet)
    .filter(tamDnRangeMap.pk_street_name.isnot(None))
    .all()
)
* NB: isnot has been deprecated and replaced by is_not since SQLAlchemy 1.4, but the question uses case with a list of whens, which was also deprecated in 1.4.
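For reference, a minimal sketch of the 1.4+ spellings of both constructs, reusing the names from this question:

from sqlalchemy import case

# SQLAlchemy 1.4+: whens are passed as positional tuples instead of a list,
# and isnot() is spelled is_not()
null_flag = case((tamDnRangeMap.pk_street_name.is_(None), 0), else_=1)
not_null_filter = tamDnRangeMap.pk_street_name.is_not(None)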
I am trying to run a SQL query that uses Oracle's json_value() function via a PreparedStatement.
Assume the following table setup:
drop table foo cascade constraints purge;
create table foo
(
  id integer primary key,
  payload clob,
  constraint ensure_json check (payload IS JSON STRICT)
);
insert into foo values (1, '{"data": {"k1": 1, "k2": "foo"}}');
The following SQL query works fine:
select *
from foo
where json_value(payload, '$.data.k1') = '1'
and returns the expected row.
However, when I try to run this query using a PreparedStatement, as in the following piece of code:
String sql =
    "select *\n" +
    "from foo\n" +
    "where json_value(payload, ?) = ?";

PreparedStatement pstmt = connection.prepareStatement(sql);
pstmt.setString(1, "$.data.k1");
pstmt.setString(2, "1");
ResultSet rs = pstmt.executeQuery();
(I removed all error checking from the example to keep it simple)
This results in:
java.sql.SQLException: ORA-40454: path expression not a literal
The culprit is passing the JSON path value (parameter index 1); the second parameter is no problem.
When I replace (only) the first parameter with a String constant, json_value(payload, '$.data.k1') = ?, the prepared statement works fine.
In a desperate attempt, I also tried including the single quotes in the parameter: pstmt.setString(1, "'$.data.k1'") but not surprisingly, Oracle wouldn't accept it either (same error message).
I also tried using json_value(payload, concat('$.', ?) ) and only passing "data.k1" as the parameter - same result.
So, the question is:
How can I pass a JSON path expression to Oracle's json_value function using a PreparedStatement parameter?
Any ideas? Is this a bug in the driver or in Oracle? (I couldn't find anything on My Oracle Support)
Or is this simply a case of "not implemented"?
Environment:
I am using Oracle 18.0
I tried the 18.3 and 19.3 versions of the ojdbc10.jar driver together with OpenJDK 11.
It isn't the driver - you get the same thing with dynamic SQL:
declare
  result foo%rowtype;
begin
  execute immediate 'select *
from foo
where json_value(payload, :1) = :2'
    into result using '$.data.k1', '1';
  dbms_output.put_line(result.payload);
end;
/
ORA-40454: path expression not a literal
ORA-06512: at line 4
And it isn't really a bug; it's documented (emphasis added):
JSON_basic_path_expression
Use this clause to specify a SQL/JSON path expression. The function uses the path expression to evaluate expr and find a scalar JSON value that matches, or satisfies, the path expression. The path expression must be a text literal. See Oracle Database JSON Developer's Guide for the full semantics of JSON_basic_path_expression.
So you would have to embed the path literal, rather than bind it, unfortunately:
declare
  result foo%rowtype;
begin
  execute immediate 'select *
from foo
where json_value(payload, ''' || '$.data.k1' || ''') = :1'
    into result using '1';
  dbms_output.put_line(result.payload);
end;
/
1 rows affected
dbms_output:
{"data": {"k1": 1, "k2": "foo"}}
or, for your JDBC example (keeping the path as a separate string, as you presumably really want that to be a variable):
String sql =
    "select *\n" +
    "from foo\n" +
    "where json_value(payload, '" + "$.data.k1" + "') = ?";

PreparedStatement pstmt = connection.prepareStatement(sql);
pstmt.setString(1, "1");
ResultSet rs = pstmt.executeQuery();
Which obviously isn't what you want to do*, but there doesn't seem to be an alternative, other than turning your query into a function and passing the path variable in to that; the function would then have to use dynamic SQL, so the effect is much the same (though SQL injection concerns may be easier to handle that way).
* and I'm aware you know how to do this the embedded way, and know you want to use bind variables because that's the correct thing to do; I've spelled it out more than you need for other visitors *8-)
Versions: Django 1.10 and Postgres 9.6
I'm trying to modify a nested JSONField's key in place without a round trip to Python. The reason is to avoid race conditions and multiple queries overwriting the same field with different updates.
I tried to chain the methods in the hope that Django would make a single query, but it's being logged as two:
Original field value (demo only, real data is more complex):
from exampleapp.models import AdhocTask
record = AdhocTask.objects.get(id=1)
print(record.log)
> {'demo_key': 'original'}
Query:
from django.db.models import F
from django.db.models.expressions import RawSQL

(AdhocTask.objects.filter(id=25)
    .annotate(temp=RawSQL(
        # `jsonb_set` takes the current json value of the `log` field,
        # takes the nominated key ("demo_key" in this example)
        # and replaces its value with the json provided ("new value").
        # The raw SQL is wrapped in triple quotes to avoid escaping each quote.
        """jsonb_set(log, '{"demo_key"}','"new value"', false)""", []))
    # Finally, take the temp field and overwrite the original JSONField
    .update(log=F('temp'))
)
Query history (shows this as two separate queries):
from django.db import connection
print(connection.queries)
> [{'sql': 'SELECT "exampleapp_adhoctask"."id", "exampleapp_adhoctask"."description", "exampleapp_adhoctask"."log" FROM "exampleapp_adhoctask" WHERE "exampleapp_adhoctask"."id" = 1', 'time': '0.001'},
>  {'sql': 'UPDATE "exampleapp_adhoctask" SET "log" = (jsonb_set(log, \'{"demo_key"}\',\'"new value"\', false)) WHERE "exampleapp_adhoctask"."id" = 1', 'time': '0.001'}]
It would be much nicer without RawSQL.
Here's how to do it:
from django.db.models.expressions import Func

class ReplaceValue(Func):
    function = 'jsonb_set'
    template = "%(function)s(%(expressions)s, '{\"%(keyname)s\"}','\"%(new_value)s\"', %(create_missing)s)"
    arity = 1

    def __init__(
        self, expression: str, keyname: str, new_value: str,
        create_missing: bool=False, **extra,
    ):
        super().__init__(
            expression,
            keyname=keyname,
            new_value=new_value,
            create_missing='true' if create_missing else 'false',
            **extra,
        )
AdhocTask.objects.filter(id=25) \
    .update(log=ReplaceValue(
        'log',
        keyname='demo_key',
        new_value='another value',
        create_missing=False,
    ))
ReplaceValue.template is the same as your raw SQL statement, just parametrized.
(jsonb_set(log, \'{"demo_key"}\',\'"another value"\', false)) from your query is now jsonb_set("exampleapp_adhoctask"."log", \'{"demo_key"}\',\'"another value"\', false). The parentheses are gone (you can get them back by adding them to the template) and log is referenced in a different way.
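As a quick variation (a sketch with a hypothetical key name): passing create_missing=True makes jsonb_set add the key when it is not present yet, instead of leaving the document unchanged.

AdhocTask.objects.filter(id=25) \
    .update(log=ReplaceValue(
        'log',
        keyname='new_key',  # hypothetical key, not in the original data
        new_value='created on the fly',
        create_missing=True,  # rendered as SQL true, so the key is created
    ))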
Anyone interested in more details regarding jsonb_set should have a look at table 9-45 in postgres' documentation: https://www.postgresql.org/docs/9.6/static/functions-json.html#FUNCTIONS-JSON-PROCESSING-TABLE
Rubber duck debugging at its best - in writing the question, I've realised the solution. Leaving the answer here in hope of helping someone in future:
Looking at the queries, I realised that the RawSQL was actually being deferred until query two, so all I was doing was storing the RawSQL as a subquery for later execution.
Solution:
Skip the annotate step altogether and use the RawSQL expression directly in the .update() call. This allows you to dynamically update PostgreSQL jsonb sub-keys on the database server without overwriting the whole field:
(AdhocTask.objects.filter(id=25)
    .update(log=RawSQL(
        """jsonb_set(log, '{"demo_key"}','"another value"', false)""", [])
    )
)
> 1 # Success
print(connection.queries)
> {'sql': 'UPDATE "exampleapp_adhoctask" SET "log" = (jsonb_set(log, \'{"demo_key"}\',\'"another value"\', false)) WHERE "exampleapp_adhoctask"."id" = 1', 'time': '0.001'}]
print(AdhocTask.objects.get(id=1).log)
> {'demo_key': 'another value'}
I am querying my database using Groovy. The query is working perfectly and bringing back the correct data; however, I get this error in my terminal:
In Groovy SQL please do not use quotes around dynamic expressions
(which start with $) as this means we cannot use a JDBC
PreparedStatement and so is a security hole. Groovy has worked around
your mistake but the security hole is still there.
Here is my query
sql.firstRow("""select elem
from site_content,
lateral jsonb_array_elements(content->'playersContainer'->'series') elem
where elem #> '{"id": "${id}"}'
""")
If I change it to just $id or
sql.firstRow("""select elem
from site_content,
lateral jsonb_array_elements(content->'playersContainer'->'series') elem
where elem #> '{"id": ?}'
""", id)
I get the following error
org.postgresql.util.PSQLException: The column index is out of range:
1, number of columns: 0.
Positional or named parameters are handled properly by Groovy SQL and should be used instead of "'$id'".
As @Opal mentioned and as described here, you should be passing your params either as a list or a map:
sql.execute "select * from tbl where a=? and b=?", [ 'aa', 'bb' ]
sql.execute "select * from tbl where a=:first and b=:last", first: 'aa', last: 'bb'
I've realized that in the newest version of SQLAlchemy (v1.0.4) I'm getting errors when using table.c.keys() for selecting columns.
from sqlalchemy import MetaData, select
from sqlalchemy import (Column, Integer, Table, String, PrimaryKeyConstraint)

metadata = MetaData()

table = Table('test', metadata,
    Column('id', Integer, nullable=False),
    Column('name', String(20)),
    PrimaryKeyConstraint('id')
)

stmt = select(table.c.keys()).select_from(table).where(table.c.id == 1)
In previous versions it used to work fine, but now this is throwing the following errors:
sqlalchemy/sql/elements.py:3851: SAWarning: Textual column expression 'id' should be explicitly declared with text('id'), or use column('id') for more specificity.
sqlalchemy/sql/elements.py:3851: SAWarning: Textual column expression 'name' should be explicitly declared with text('name'), or use column('name') for more specificity.
Is there a function for retrieving all these table columns rather than using a list comprehension like the following? [text(x) for x in table.c.keys()]
No, but you can always roll your own.
from sqlalchemy import text

def all_columns(model_or_table, wrap=text):
    table = getattr(model_or_table, '__table__', model_or_table)
    return [wrap(col) for col in table.c.keys()]
then you would use it like
stmt = select(all_columns(table)).where(table.c.id == 1)
or
stmt = select(all_columns(Model)).where(Model.id == 1)
Note that in most cases you don't need select_from, i.e. when you don't actually join to some other table.
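For illustration, a sketch of the opposite case: the text() columns carry no table information, so if you do join another (here hypothetical) table, select_from is needed to spell out the FROM clause:

# 'other' is a hypothetical second table with a test_id column
stmt = (
    select(all_columns(table))
    .select_from(table.join(other, other.c.test_id == table.c.id))
    .where(table.c.id == 1)
)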