I have a query written using Slick; it is not a plain SQL Slick query.
The query is a select that fetches records from a table called Employee; the results are of type the Employee class.
Now there is a list of Strings:
val nameFilter = List("Sachin", "Naveen")
This nameFilter comes in dynamically, and it may have any number of names.
var result = dbHandle.db.run(query.drop(10).take(10).result)
The variable query is just a select query for the Employee table, and the drop/take selects a range of records from 11 to 20.
Now I need to filter the records which have names mentioned in nameFilter and then select the records from 11 to 20. That means I need a query with an 'IN' clause.
Please note that this is not a plain Slick SQL query; I have to frame a query in the above format.
You can do this with the method .inSet. From the Slick documentation:
Slick queries are composable. Subqueries can be simply composed, where the types work out, just like any other Scala code.
val address_ids = addresses.filter(_.city === "New York City").map(_.id)
people.filter(_.id in address_ids).result // <- run as one query
The method .in expects a sub query. For an in-memory Scala collection, the method .inSet can be used instead.
So that would mean for your code:
val nameFilter = List("Sachin", "Naveen")
val filteredQuery = query.filter(_.name.inSet(nameFilter))
var result = dbHandle.db.run(filteredQuery.drop(10).take(10).result)
Depending on the source of that input you should consider using .inSetBind to escape the input (see this SO post).
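For example, a minimal sketch using the same query and nameFilter as above, just swapping in .inSetBind:
val safeQuery = query.filter(_.name.inSetBind(nameFilter))
var result = dbHandle.db.run(safeQuery.drop(10).take(10).result)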
I want to return arrays with data from the entire row (so all columns), not just a single column. I can do this with a raw SQL statement in PostgreSQL,
SELECT
array_agg(users.*)
FROM users
WHERE
l_name LIKE 'Br%'
GROUP BY f_name;
but when I try to do it with SQLAlchemy, I'm getting
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) can't adapt type 'InstrumentedAttribute'
For example, when I execute this query, it works fine
query: Query[User] = session.query(array_agg(self.user.f_name))
But with this I get arrays of rows with only one column value in them (in this example, the first name of a user) whereas I want the entire row (all columns for a user).
I've tried explicitly listing multiple columns, but to no avail. For example I've tried this:
query: Query[User] = session.query(array_agg((self.user.f_name, self.user.l_name)))
But it doesn't work. I get the above error message.
You could use Python's argument unpacking to create one array_agg per column:
aggregates = [func.array_agg(column) for column in self.example.__table__.columns]
query = self.dbsession.query(*aggregates)
And then join the results afterwards.
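A minimal sketch of that idea applied to the original query (assuming the User model and session from the question):
from sqlalchemy import func
aggs = [func.array_agg(col) for col in User.__table__.columns]
query = (session.query(*aggs)
         .filter(User.l_name.like('Br%'))
         .group_by(User.f_name))
rows = query.all()  # each row holds one array per column; zip them to rebuild records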
There is a table from which I need to fetch paginated records by applying an AND condition over a list of paired values. Below is the explanation.
Let's say I have a class Billoflading with various fields in it.
The two important fields in the table are
tenant
billtype
I have a list of pairs which contains values like
[
{`tenant1`, `billtype1`},
{`tenant2`, `billtype2`},
{`tenant3`, `billtype3`},
....
]
I need a JPA query where the fetch will be like
findByTenantAndBilltypeOrTenantAndBillTypeOr.....
In a simple SQL query it would be:
Select * from `Billoflading` where
`tenant` = 'tenant1' and billtype = 'billtype1'
OR `tenant` = 'tenant2' and billtype = 'billtype2'
OR `tenant` = 'tenant3' and billtype = 'billtype3'
OR ......... so on..
I tried writing a JPA query as follows:
Page<Billoflading> findByTenantInAndBillTypeIn(List<String> tenants, List<String> billTypes, Pageable pageable);
but this had crossover records as well,
i.e. it gave records for tenant1 and billtype2, tenant2 and billtype3, and so on, which are not needed in the result set.
Can anyone please solve this and help me find a simple solution like
Page<Billoflading> findByTenantAndBillTypeIn(Map<String, String> tenantsAndBilltypes, Pageable pageable);
I am also open to native queries in JPA; all I need is that there are no crossovers, as this is very sensitive data.
The other workaround I had was fetching the records and applying Java 8 filters, and that works, but the number of records in a page gets reduced, as sketched below.
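A hypothetical sketch of that in-memory workaround (assuming a TenantBill pair class with proper equals/hashCode):
List<Billoflading> filtered = page.getContent().stream()
        .filter(b -> pairs.contains(new TenantBill(b.getTenant(), b.getBilltype())))
        .collect(Collectors.toList()); // the page can end up with fewer records than requested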
Section 4.6.9 of the JPA specification makes it clear that this is not supported by JPQL, at least not in the form of an in-clause:
4.6.9 In Expressions
The syntax for the use of the comparison operator [NOT] IN in a conditional expression is as follows:
in_expression ::=
{state_valued_path_expression | type_discriminator} [NOT] IN
{ ( in_item {, in_item}* ) | (subquery) | collection_valued_input_parameter }
in_item ::= literal | single_valued_input_parameter
The state_valued_path_expression must have a string, numeric, date, time, timestamp, or enum value.
The literal and/or input parameter values must be like the same abstract schema type of the state_valued_path_expression in type. (See Section 4.12).
The results of the subquery must be like the same abstract schema type of the state_valued_path_expression in type.
It just doesn't operate on tuples.
Your best bet is probably to create a Specification to construct the combination of AND and OR you require. See this blog article on how to create Specifications.
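A minimal sketch of such a Specification (assuming Spring Data JPA, a repository that extends JpaSpecificationExecutor, and a hypothetical TenantBill pair class):
public static Specification<Billoflading> tenantAndBillTypeIn(List<TenantBill> pairs) {
    // build (tenant = ? AND billtype = ?) for each pair, then OR them together
    return (root, query, cb) -> pairs.stream()
            .map(p -> cb.and(
                    cb.equal(root.get("tenant"), p.getTenant()),
                    cb.equal(root.get("billtype"), p.getBilltype())))
            .reduce(cb::or)
            .orElse(cb.disjunction()); // an empty list matches nothing
}
The paginated call would then be repository.findAll(tenantAndBillTypeIn(pairs), pageable).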
I want to specify the return values for a specific update in sqlalchemy.
The documentation of the underlying update statement (sqlalchemy.sql.expression.update) says it accepts a "returning" argument and the docs for the query object state that query.update() accepts a dictionary "update_args" which will be passed as the arguments to the query statement.
Therefore my code looks like this:
session.query(
ItemClass
).update(
{ItemClass.value: value_a},
synchronize_session='fetch',
update_args={
'returning': (ItemClass.id,)
}
)
However, this does not seem to work; it just returns the regular integer (the number of rows matched).
My question is now: Am I doing something wrong or is this simply not possible with a query object and I need to manually construct statements or write raw sql?
The full solution that worked for me was to use the SQLAlchemy table object directly.
You can get that table object and the columns from your model easily by doing
table = Model.__table__
columns = table.columns
Then with this table object, I can replicate what you did in the question:
from your_settings import db
update_statement = table.update().returning(table.c.id)\
    .where(columns.column_name == value_one)\
    .values(column_name='New column name')
result = db.session.execute(update_statement)
tuple_of_results = result.fetchall()
db.session.commit()
The tuple_of_results variable would contain the rows returned by the RETURNING clause.
Note that you would have to run db.session.commit() in order to persist the changes to the database, as it is currently running within a transaction.
You could perform an update based on the current value of a column by doing something like:
update_statement = table.update().returning(table.c.id)\
    .where(columns.column_name == value_one)\
    .values(like_count=columns.like_count + 1)
This would increment our numeric like_count column by one.
Hope this was helpful.
Here's a snippet based on the SQLAlchemy documentation:
# UPDATE..RETURNING
stmt = table.update().returning(table.c.col1, table.c.col2).\
    where(table.c.name == 'foo').values(name='bar')
result = connection.execute(stmt)
print(result.fetchall())
I am using the code below to extract table names from a database on a GET call in a Flask app:
session = db.session()
qry = session.query(models.BaseTableModel)
results = session.execute(qry)
table_names = []
for row in results:
    for column, value in row.items():
        # this seems like a bit of a hack
        if column == "tables_table_name":
            table_names.append(value)
print('{0}: '.format(table_names))
Given that tables in the database may be added/deleted regularly, is the code above an efficient and reliable way to get the names of the tables in a database?
One obvious optimization is to use row["tables_table_name"] instead of the second loop, as sketched below.
Assuming that BaseTableModel is a table that contains the names of all other tables, you're using the fastest approach to get this data.
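A minimal sketch of that optimization (same qry and session as in the question):
table_names = [row["tables_table_name"] for row in session.execute(qry)]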
I am using LINQ lambdas to query MySQL (note: MySQL, not SQL Server) with Entity Framework in MVC. I have one table, Product, and one of the columns in this table is price, with datatype VARCHAR (accept that I can't change the type to INT, as it can hold values like "N/A", etc.).
I want to sort the price column numerically with LINQ lambdas. I have tried the below; I am using Model values to filter the query.
var query = ent.Product.Where(b => b.cp == Model.CodePostal);
if (Model.order_by_flg == 2)
{
query = query.OrderByDescending(a => a.price.PadLeft(10, '0'));
}
But it will not work and gives me the below error:
LINQ to Entities does not recognize the method 'System.String
PadLeft(Int32, Char)' method, and this method cannot be translated
into a store expression.
This is because Entity Framework cannot translate it into a SQL statement.
I also tried the below:
var query = ent.Product.Where(b => b.cp == Model.CodePostal);
if (Model.order_by_flg == 2)
{
query = query.OrderByDescending(a => a.price.Length).ThenBy(a => a.price);
}
But I can't do this: it works on a List, and I can't first materialize a list and then sort it, as I am using LINQ Skip() and Take(), so the sorting has to happen in the query first.
So how can I sort a price column of type VARCHAR in a LINQ lambda?
EDIT
In the table the values are:
59,59,400,185,34
When I use OrderBy.ThenBy it gives
34,59,59,106,185,400
That looks right for an ascending sort, but when I use OrderByDescending.ThenBy it gives
106,185,400,34,59,59
So I can't use this.
NOTE: Please give reasons before downvoting so I can improve my question...
You can simulate a fixed-width PadLeft in LINQ to Entities with the canonical function DbFunctions.Right, like this:
Instead of this:
a.price.PadLeft(10, '0')
use this:
DbFunctions.Right("000000000" + a.price, 10)
I haven't tested it with the MySQL provider, but canonical functions defined in DbFunctions are supposed to be supported by any provider.
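Applied to the original query, a sketch (assuming EF's System.Data.Entity.DbFunctions is in scope):
query = query.OrderByDescending(a => DbFunctions.Right("000000000" + a.price, 10));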
That looks right for an ascending sort, but when I use OrderByDescending.ThenBy it gives
106,185,400,34,59,59
That's because you're ordering by length descending, then value ascending.
What you need is simply to sort both descending:
query = query.OrderByDescending(a => a.price.Length)
.ThenByDescending(a => a.price);
This should be faster than prepending numbers to sort, since you don't need to do multiple calculations per row but can instead sort by existing data.