How to label scalars in SQLAlchemy select list

I'm wondering how to write the equivalent of this with sqlalchemy
SELECT 'foo' as "bar", company.name FROM company;
I'd like to write it like:
session.query(some_function('foo').label('bar'), Company.name)
where some_function is the identity. One workaround is something like
from sqlalchemy import cast, String
session.query(cast('foo', String).label('bar'), Company.name)
but it doesn't work in cases where you might want foo to be an int.

You want the literal function:
from sqlalchemy import literal
session.query(literal('foo').label('bar'), Company.name)
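A minimal self-contained sketch of the same idea, using a 1.4/2.0-style select() instead of session.query() (the Company table is omitted so the snippet runs on its own); literal() infers the SQL type from the Python value, which also covers the int case from the question:

```python
from sqlalchemy import literal, select

# literal() wraps a plain Python value as a bound parameter and infers the
# SQL type from the value, so it works for strings and ints alike.
stmt = select(literal("foo").label("bar"), literal(42).label("n"))
print(stmt.compile(compile_kwargs={"literal_binds": True}))
# SELECT 'foo' AS bar, 42 AS n
```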

Remove parenthesis from SQLAlchemy select_from

I've got a PostgreSQL function that RETURNS TABLE, so it's valid to use it in a query like so:
SELECT col1, col2 FROM my_func(arg1 => :arg_1, arg2 => :arg_2)
I'm trying to create that with SQLAlchemy.
I've created the text() query object like so:
my_func = text("my_func(arg1 => :arg_1, arg2 => :arg_2)")
Set up bind params:
my_func = my_func.bindparams(bindparam("arg_1", value=arg_1), ...)
And because my function returns a table I've declared the columns on it too:
my_func = my_func.columns(col1=INTEGER, col2=TEXT)
Elsewhere in my code I am using this object to create the actual query:
select_query = select(my_func.c.col1).select_from(my_func)
I got this pattern from the tutorial on the text() object. But, when I go to render that final select() query, SQLAlchemy is coming up with SQL like this:
SELECT col1 FROM (my_func(arg1 => :arg_1, arg2 => :arg_2))
Which is a syntax error in PostgreSQL because of the extra parens around the FROM part of the clause, as if it were a complete subquery on its own.
As a workaround I changed the initial text() to do SELECT * FROM my_func(...) to make it into a proper subquery. That gets me working SQL but I wonder if I can get SQLAlchemy to treat the function as if it were a literal table name.
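One way to get SQLAlchemy to treat a function as a FROM-clause element is FunctionElement.table_valued() (available in SQLAlchemy 1.4+), which renders the function call directly in the FROM clause with no wrapping parentheses. A minimal sketch, assuming positional arguments in place of PostgreSQL's named arg1 => ... syntax (which func does not generate):

```python
from sqlalchemy import column, func, select
from sqlalchemy.dialects import postgresql

# table_valued() marks the function as a table-valued FROM element,
# so it is not parenthesized as if it were a subquery.
fn = func.my_func(1, 2).table_valued(column("col1"), column("col2"))
stmt = select(fn.c.col1)
# Renders roughly: SELECT anon_1.col1 FROM my_func(1, 2) AS anon_1
print(stmt.compile(dialect=postgresql.dialect(),
                   compile_kwargs={"literal_binds": True}))
```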

Databricks: calling widget value in query

I've created a widget called yrmo with the values such as 202001.
I need to call this value from a query but it doesn't recognize it, I think because the field I am applying it to is Int but I can only reference the widget with quotes around it.
If I don't use quotes then it thinks I'm using a field in the table. If I use single quotes then it interprets it as a literal. I've tried getArgument but it says it doesn't recognize it (do I need to import something?)
The query is in Scala.
val x = sqlContext.sql("select domain from TABLENAME where partsn_mo=yrmo")
Thanks
You can use Scala's string interpolation with an expression inside of ${} that may include double quotes.
So you could do:
val x = spark.sql(s"select domain from TABLENAME where partsn_mo = ${dbutils.widgets.get("yrmo")}")
Try doing this
val query = "select domain from TABLENAME where partsn_mo=" + dbutils.widgets.get("yrmo")
val x = sqlContext.sql(query)

sqlAlchemy converts geometry to byte using ST_AsBinary

I have a sqlAlchemy model that has one column of type geometry which is defined like this:
point_geom = Column(Geometry('POINT'), index=True)
I'm using geoalchemy2 module:
from geoalchemy2 import Geometry
Then I make my queries using sqlAlchemy ORM, and everything works fine. For example:
data = session.query(myModel).filter_by(...)
My problem is that when I need to get the sql statement of the query object, I use the following code:
sql = data.statement.compile(dialect=postgresql.dialect())
But the column of type geometry is converted to Byte[], so the resulting sql statement is this:
SELECT column_a, column_b, ST_AsBinary(point_geom) AS point_geom
FROM tablename WHERE ...
What should be done to avoid the conversion of the geometry type to byte type?
I had the same problem when I was working with Flask-SQLAlchemy and GeoAlchemy2 and solved it as follows.
You just need to create a new subclass of the Geometry type.
If you look at the documentation, the arguments of the Geometry type include:
ElementType - the type of the returned element; by default it is WKBElement (well-known binary element)
as_binary - the function to use; by default it is ST_AsEWKB, which causes the problem in your case
from_text - the geometry constructor used to create, insert and update elements; by default it is ST_GeomFromEWKT
So what did I do? I created a new subclass with the required function, element and constructor, and used the Geometry type in my db models as I always do.
from geoalchemy2 import Geometry as BaseGeometry
from geoalchemy2.elements import WKTElement
class Geometry(BaseGeometry):
    from_text = 'ST_GeomFromText'
    as_binary = 'ST_asText'
    ElementType = WKTElement
As you can see, I have changed only these three attributes of the base class.
This will return a string (WKT) for the geometry column instead of binary.
I think you can specify that in your query. Something like this:
from geoalchemy2.functions import ST_AsGeoJSON
query = session.query(ST_AsGeoJSON(YourModel.geom_column))
That should change your conversion. There are many conversion functions in the
geoalchemy documentation.

Using XPATH fn:concat in MySQL ExtractValue does not process more than two arguments

When using the XPath fn:concat() function within the XPath of a MySQL ExtractValue call, a string containing only the first two arguments is returned.
For example:
SELECT ExtractValue("<xml><a>1</a><b>2</b><c>3</c></xml>", 'concat(/xml/a,/xml/b,/xml/c)')
This should return "123", but instead returns "12".
Is this a bug or am I doing something wrong?
I realize that the following workaround can be used:
concat(concat(/xml/a,/xml/b,/xml/c),/xml/c)
But seriously?
I guess you are looking for something like this:
SELECT ExtractValue("<xml><a>1</a><b>2</b><c>3</c></xml>", '//a | //b | //c')
or (more likely) this:
SELECT
ExtractValue("<xml><a>1</a><b>2</b><c>3</c></xml>", '//a') AS a,
ExtractValue("<xml><a>1</a><b>2</b><c>3</c></xml>", '//b') AS b,
ExtractValue("<xml><a>1</a><b>2</b><c>3</c></xml>", '//c') AS c
Note that the XPath function concat which you have tried returns the concatenation of its string arguments with no separator:
example: concat('foo', 'bar')
result : 'foobar'
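If the XML document is available to the application anyway, the intended "123" result is also easy to produce client-side; a minimal sketch using Python's standard library, mirroring the element names from the example above:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("<xml><a>1</a><b>2</b><c>3</c></xml>")
# Concatenate the three element texts in the application,
# sidestepping ExtractValue's two-argument concat() limitation.
result = "".join(doc.findtext(tag) for tag in ("a", "b", "c"))
print(result)  # 123
```

On the SQL side, MySQL's own CONCAT() of three separate ExtractValue calls avoids the XPath concat entirely.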

How to Conditionally Produce JSON Using JSON4S

I am using JSON4S to produce some JSON.
If a condition is met, I would like to produce the following:
{"fld1":"always", "fld2":"sometimes"}
If the condition is not met, I would like to produce:
{"fld1":"always"}
What I have tried so far is:
val fld1 = "fld1" -> "always"
val json = if(condition) ("fld2" -> "sometimes") ~ fld1 else fld1
compact(render(json))
However, this gives me a type mismatch in the render "Found: Product with Serializable. Required: org.json4s.package.JValue".
The interesting thing is that render(("fld2" -> "sometimes") ~ fld1) works, and so does render(fld1). The problem seems to be with the type inferred for json.
How could I fix this?
While both of the current answers give suitable workarounds, neither explains what's going on here. The problem is that if we have two types, like this:
trait Foo
trait Bar
And an implicit conversion (or view) from one to the other:
implicit def foo2bar(foo: Foo): Bar = new Bar {}
A conditional with a Foo for its then clause and a Bar for its else clause will still be typed as the least upper bound of Foo and Bar (in this case Object, in your case Product with Serializable).
This is because the type inference system isn't going to step in and say, well, we could view that Foo as a Bar, so I'll just type the whole thing as a Bar.
This makes sense if you think about it—for example, if it were willing to handle conditionals like that, what would it do if we had implicit conversions both ways?
In your case, the then clause is typed as a JObject, and the else clause is a (String, String). We have a view from the latter to the former, but it won't be used here unless you indicate that you want the whole thing to end up as a JObject, either by explicitly declaring the type, or by using the expression in a context where it has to be a JObject.
Given all of that, the simplest workaround may just be to make sure that both clauses are appropriately typed from the beginning, by providing a type annotation like this for fld1:
val fld1: JObject = "fld1" -> "always"
Now your conditional will be appropriately typed as it is, without the type annotation.
Not the nicest way I can think of, but declaring the type yourself should work:
val json: JObject =
if(condition) ("fld2" -> "sometimes") ~ fld1 else fld1
compact(render(json))
Also, note that you can let type inference help itself out if you render in one go:
compact(render(
if(condition) fld1 ~ ("fld2" -> "sometimes") else fld1
))
Another approach is to wrap conditional values in Options.
val json = fld1 ~ ("fld2" -> (if (condition) Some("sometimes") else None))
compact(render(json))