I am trying to match a value with this statement:
stmt = session.query(models.Production).filter(models.Production.profile_name.regexp_match('some_name'))
results = session.execute(stmt).all()
print(results)
In the profile_name column the value is saved as Some_Name. How do I get it to match while ignoring capitalization?
Found an answer
from sqlalchemy import func
stmt = session.query(models.Production).filter(func.lower(models.Production.profile_name).regexp_match(func.lower('some_name')))
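On SQLAlchemy 1.4+ (where regexp_match was introduced), the operator also accepts a flags argument, so the case-insensitivity can be pushed into the regex itself instead of lowering both sides. A minimal sketch, assuming a PostgreSQL backend:
stmt = session.query(models.Production).filter(
    # the 'i' flag makes the match case-insensitive
    models.Production.profile_name.regexp_match('some_name', flags='i')
)
results = stmt.all()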
Related
I want to insert multiple items into a table and upsert on conflict. This is what I came up with:
from sqlalchemy.dialects.postgresql import insert
meta = MetaData()
jobs_table = Table('jobs', meta, autoload=True, autoload_with=engine)
stmt = insert(jobs_table).values(jobs)
stmt.on_conflict_do_update(
    index_elements=['j_id'],
    set_=dict(active=True)
)
result = engine.execute(stmt)
return result.is_insert
The j_id is a unique field and I am trying to update the row if it already exists. I get the following error if the row already exists.
(psycopg2.IntegrityError) duplicate key value violates unique constraint "j_id"
DETAIL: Key (j_id)=(0ea445da-bd1d-5571-9906-0694fa85728a) already exists.
Is there something that I am missing here?
stmt.on_conflict_do_update returns a new statement. If you change your code to the following, it should work:
from sqlalchemy.dialects.postgresql import insert
meta = MetaData()
jobs_table = Table('jobs', meta, autoload=True, autoload_with=engine)
stmt = insert(jobs_table).values(jobs)
stmt = stmt.on_conflict_do_update(
    index_elements=['j_id'],
    set_=dict(active=True)
)
result = engine.execute(stmt)
return result.is_insert
You can print(stmt) to see the SQL a statement will render to. This is useful for checking that the statement you are about to execute has the expected expression. Adding echo=True to create_engine can also help to detect issues!
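For instance, a quick sketch of rendering the statement against the PostgreSQL dialect to confirm the ON CONFLICT clause is in place, using the names from the snippet above:
from sqlalchemy.dialects import postgresql

# compile the statement for PostgreSQL and print the rendered SQL
print(stmt.compile(dialect=postgresql.dialect()))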
I am new to SQLAlchemy and wanted to create an SQLAlchemy query equivalent to "order by exact match first".
Below is the SQL:
select word from dictionary where word like '%Time%' order by (word = 'Time') desc;
This is my SQLAlchemy equivalent:
Dictionary.query.with_entities(Dictionary.word)
.filter(Dictionary.word.like("%{}%".format("Time")))
.order_by(Dictionary.word == "Time")
But it throws an error at order_by: SyntaxError: keyword can't be an expression. How do I solve it?
Solved it.
from sqlalchemy.sql import func
.order_by((Dictionary.word == q).desc(), func.length(Dictionary.word))
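Putting it together, a sketch of the full query, assuming q holds the search term from the original example; the desc() matches the "order by (word = 'Time') desc" in the SQL, since it puts exact matches (True) first:
from sqlalchemy.sql import func

q = "Time"  # the search term
words = (Dictionary.query
         .with_entities(Dictionary.word)
         .filter(Dictionary.word.like("%{}%".format(q)))
         # exact matches first, then shorter words before longer ones
         .order_by((Dictionary.word == q).desc(), func.length(Dictionary.word))
         .all())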
I'm using Python 3.6, mysql-connector-python 8.0.11 and MySQL Community Server 8.0.11 (GPL). The table in question uses the InnoDB engine.
When using the MySQL Workbench I can enter:
USE test; START TRANSACTION; SELECT * FROM tasks WHERE task_status != 1 LIMIT 1 FOR UPDATE;
And it provides one record, as expected.
When I use a script using python3 (from the same machine - same access, etc):
* SQL QRY: START TRANSACTION; SELECT * FROM test WHERE task_status != 1 LIMIT 1 FOR UPDATE;
* SQL RES: No result set to fetch from.
This is debug output from my script. If I change the query to a plain SELECT, I do get output:
* SQL QRY: SELECT * FROM test WHERE task_status != 1 LIMIT 1;
* SQL RES: [(1, 0, 'TASK0001')]
I know SELECT * isn't the way to go; I'm just trying to get some response for now.
I'm trying to allow multiple worker scripts to pick up tasks without two workers taking the same task:
Do a SELECT and row-lock the task so other workers' SELECT queries don't see it,
Set the task status to 'being processed' and unlock the record.
This is my first venture into locking, so this is new ground. I'm able to do normal queries, populate tables, etc., so I have some experience, but not with locking.
TABLE creation:
create table test
(
id int auto_increment
primary key,
task_status int not null,
task_ref varchar(16) not null
);
Questions:
Is this the correct mindset? I.e. is there a more pythonic/MySQL way to do this?
Is there a specific way I need to initiate the MySQL connection? Why would it work using the MySQL Workbench but not via the script? I've tried the mysql command-line client directly and this works too, so I think it is the Python connector that may need setting up correctly, as it is the only component not working.
Currently I'm using autocommit=1 on the connector and buffered=True on the cursor. I know you can set autocommit=0 in the SQL before the START TRANSACTION, so I understand I may need to do this for the locking, but for all other transactions I would prefer to keep autocommit on. Is this OK and/or doable?
CODE:
#!/usr/bin/env python
import mysql.connector
import pprint
conn = mysql.connector.connect(user='testuser',
                               password='testpass',
                               host='127.0.0.1',
                               database='test_db',
                               autocommit=True)
dbc = conn.cursor(buffered=True)
qry = "START TRANSACTION; SELECT * FROM 'test' WHERE task_status != 1 LIMIT 1 ON UPDATE;"
sql_select = dbc.execute(qry)
try:
    output = dbc.fetchall()
except mysql.connector.Error as e:
    print(" * SQL QRY: {0}".format(qry))
    print(" * SQL RES: {0}".format(e))
    exit()
else:
    print(" * SQL QRY: {0}".format(qry))
    print(" * SQL RES: {0}".format(output))
Many Thanks,
Frank
So after playing around a bit, I worked out (by trial and error) that the proper way to do this is to just put 'FOR UPDATE' at the end of the normal query:
Full code is below (including option to add dummy records for testing):
#!/usr/bin/env python
import mysql.connector
import pprint
import os
conn = mysql.connector.connect(user='testuser',
                               password='testpass',
                               host='127.0.0.1',
                               database='test_db',
                               autocommit=True)
dbc = conn.cursor(buffered=True)
worker_pid = os.getpid()
all_done = False
create = False
if create:
    items = []
    for i in range(10000):
        items.append([0, 'TASK%04d' % i])
    dbc.executemany('INSERT INTO test (task_status, task_ref) VALUES (%s, %s)', tuple(items))
    conn.commit()
    conn.close()
    exit()
while all_done is False:
    print(all_done)
    qry = (
        "SELECT id FROM test WHERE task_status != 1 LIMIT 1 FOR UPDATE;"
    )
    sql_select = dbc.execute(qry)
    try:
        output = dbc.fetchall()
    except mysql.connector.Error as e:
        print(" * SQL QRY: {0}".format(qry))
        print(" * SQL RES: {0}".format(e))
        exit()
    else:
        print(" * SQL QRY: {0}".format(qry))
        print(" * SQL RES: {0}".format(output))
    if len(output) == 0:
        print("All Done = Yes")
        all_done = True
        continue
    else:
        print("Not Done yet!")
    if len(output) > 0:
        test_id = output[0][0]
        print("WORKER {0} FOUND: '{1}'".format(worker_pid, test_id))
        qry = "UPDATE test SET task_status = %s, task_ref = %s WHERE id = %s;"
        sql_select = dbc.execute(qry, tuple([1, worker_pid, test_id]))
        conn.commit()
        try:
            output = dbc.fetchall()
        except mysql.connector.Error as e:
            print(" * SQL QRY: {0}".format(qry))
            print(" * SQL RES: {0}".format(e))
        else:
            print(" * SQL QRY: {0}".format(qry))
            print(" * SQL RES: {0}".format(output))
print(all_done)
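A note on the autocommit question: with autocommit=True, the SELECT ... FOR UPDATE runs in its own single-statement transaction, so the row lock is released as soon as the statement completes. To hold the lock across the follow-up UPDATE while keeping autocommit on elsewhere, one option is the connector's own transaction API; a sketch, assuming mysql-connector-python 8.x:
conn.start_transaction()  # sends START TRANSACTION; no multi-statement string needed
dbc.execute("SELECT id FROM test WHERE task_status != 1 LIMIT 1 FOR UPDATE")
row = dbc.fetchone()
if row:
    dbc.execute("UPDATE test SET task_status = %s WHERE id = %s", (1, row[0]))
conn.commit()  # commits the transaction and releases the row lock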
Hope this helps someone else save some time, as there are a lot of places with different info, but searches for python3, mysql-connector and transactions didn't get me anywhere.
Good Luck,
Frank
import csv
import MySQLdb
conn = MySQLdb.connect('localhost','tekno','poop','media')
cursor = conn.cursor()
txt = csv.reader(file('movies.csv'))
for row in txt:
    cursor.execute('insert into shows_and_tv(watched_on,title,score_rating)' 'values ("%s","%s","%s")', row)
conn.close()
When I run this I get
TypeError: not enough arguments for format string
but it matches up. The CSV is formatted like
dd-mm-yyyy,string,tinyint
which matches the fields in the database.
I do not have a MySQL database to play with, so I did what you need in SQLite instead. It should be quite easy to adapt this to your needs.
import csv
import sqlite3
from collections import namedtuple
conn = sqlite3.connect('statictest.db')
c = conn.cursor()
c.execute('''CREATE TABLE IF NOT EXISTS movies (ID INTEGER PRIMARY KEY AUTOINCREMENT, watched_on, title, score_rating)''')
record = namedtuple('record',['watched_on','title','score_rating'])
SQL ='''
INSERT INTO movies ("watched_on","title","score_rating") VALUES (?,?,?)
'''
with open('statictest.csv', 'r') as file:
    read_data = csv.reader(file)
    for row in read_data:
        watched_on, title, score_rating = row
        data = record(watched_on, title, score_rating)
        c.execute(SQL, data)
conn.commit()
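Back in MySQLdb terms, the usual cause of "not enough arguments for format string" is a row with fewer fields than placeholders (e.g. a trailing blank line in the CSV), and the %s placeholders should not be quoted, since MySQLdb quotes the values itself. A sketch of the same loop with those two points addressed, using the connection and table from the question:
import csv
import MySQLdb

conn = MySQLdb.connect('localhost', 'tekno', 'poop', 'media')
cursor = conn.cursor()
with open('movies.csv') as f:
    for row in csv.reader(f):
        if len(row) != 3:  # skip blank/short lines that break the format string
            continue
        cursor.execute(
            'INSERT INTO shows_and_tv (watched_on, title, score_rating) VALUES (%s, %s, %s)',
            row)
conn.commit()  # persist the inserts before closing
conn.close()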
# necessary imports
import sqlalchemy
from sqlalchemy import MetaData, Table
from sqlalchemy.orm import sessionmaker, mapper
engine = sqlalchemy.create_engine('mysql://root@127.0.0.1/test', echo=False)
print 'Engine created'
connection = engine.connect()
metadata = MetaData(engine)
metadata.bind = engine
Session = sessionmaker(bind=engine)
session = Session()
mapping = Table('mapping',metadata,autoload=True)
class Mapping(object):
pass
MappingMapper = mapper(Mapping, mapping)
Now I am able to write basic queries for insert, update, delete, filter, etc.
Q1: I need to write a complex query where I derive new columns based on existing ones. E.g. ColA and ColB are on the table, but ColC is not part of the table structure:
Select (ColA+ColB) as ColC from table where ColC > 50 order by ColC;
I am clueless how to convert a query like the above with SQLAlchemy. How do I map it, and how do I retrieve the results?
The easiest is to use Hybrid Attributes.
In your case, just change the declaration of the class to the following:
from sqlalchemy.ext.hybrid import hybrid_property
class Mapping(object):
    @hybrid_property
    def ColC(self):
        return self.ColA + self.ColB
Then the query:
qry = session.query(Mapping).filter(Mapping.ColC > 80)
will generate SQL:
SELECT mapping.id AS mapping_id, ...
FROM mapping
WHERE mapping."ColA" + mapping."ColB" > ?
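The hybrid also works in order_by, so the full query from the question maps over directly. A sketch using the question's numbers:
qry = (session.query(Mapping)
       .filter(Mapping.ColC > 50)
       .order_by(Mapping.ColC))
for m in qry:
    # on loaded instances the same attribute is computed in Python
    print(m.ColC)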