SQLAlchemy FetchedValue and primary_key - MySQL

I'm trying to create a table that uses UUID_SHORT() as a primary key, with a trigger that sets the value on insert. I'm having trouble making SQLAlchemy recognize a column as a primary_key without it complaining about a missing default. If I do include a default value, SQLAlchemy uses that default even after flush, despite server_default=FetchedValue() being declared. The only way I can seem to get things to work properly is if the column is not a primary key.
I'm using Pyramid, SQLAlchemy ORM, and MySQL.
Here's the model object:
from sqlalchemy import Column, Binary
from sqlalchemy.dialects.mysql import BIGINT  # unsigned=True needs the MySQL dialect type
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.schema import FetchedValue

Base = declarative_base()

class Patient(Base):
    __tablename__ = 'patient'
    patient_id = Column(BIGINT(unsigned=True), server_default=FetchedValue(),
                        primary_key=True, autoincrement=False)
    details = Column(Binary(10000))
In initializedb.py I have:
with transaction.manager:
    patient1 = Patient(details=None)
    DBSession.add(patient1)
    DBSession.flush()
    print(patient1.patient_id)
Running ../bin/initialize_mainserver_db development.ini gives me the following error:
2012-11-01 20:17:22,168 INFO [sqlalchemy.engine.base.Engine][MainThread] BEGIN (implicit)
2012-11-01 20:17:22,169 INFO [sqlalchemy.engine.base.Engine][MainThread] INSERT INTO patient (details) VALUES (%(details)s)
2012-11-01 20:17:22,169 INFO [sqlalchemy.engine.base.Engine][MainThread] {'details': None}
2012-11-01 20:17:22,170 INFO [sqlalchemy.engine.base.Engine][MainThread] ROLLBACK
Traceback (most recent call last):
  File "/sites/metrics_dev/lib/python3.3/site-packages/sqlalchemy/engine/base.py", line 1691, in _execute_context
    context)
  File "/sites/metrics_dev/lib/python3.3/site-packages/sqlalchemy/engine/default.py", line 333, in do_execute
    cursor.execute(statement, parameters)
  File "/sites/metrics_dev/lib/python3.3/site-packages/mysql/connector/cursor.py", line 418, in execute
    self._handle_result(self._connection.cmd_query(stmt))
  File "/sites/metrics_dev/lib/python3.3/site-packages/mysql/connector/cursor.py", line 345, in _handle_result
    self._handle_noresultset(result)
  File "/sites/metrics_dev/lib/python3.3/site-packages/mysql/connector/cursor.py", line 321, in _handle_noresultset
    self._warnings = self._fetch_warnings()
  File "/sites/metrics_dev/lib/python3.3/site-packages/mysql/connector/cursor.py", line 608, in _fetch_warnings
    raise errors.get_mysql_exception(res[0][1], res[0][2])
mysql.connector.errors.DatabaseError: 1364: Field 'patient_id' doesn't have a default value
Running a manual insert using the mysql client results in everything working fine, so the problem seems to be with SQLAlchemy.
mysql> insert into patient(details) values (null);
Query OK, 1 row affected, 1 warning (0.00 sec)
mysql> select * from patient;
+-------------------+---------+
| patient_id        | details |
+-------------------+---------+
| 94732327996882980 | NULL    |
+-------------------+---------+
1 row in set (0.00 sec)
mysql> show triggers;
+-----------------------+--------+---------+-------------------------------------+--------+---------+----------+----------------+----------------------+----------------------+--------------------+
| Trigger               | Event  | Table   | Statement                           | Timing | Created | sql_mode | Definer        | character_set_client | collation_connection | Database Collation |
+-----------------------+--------+---------+-------------------------------------+--------+---------+----------+----------------+----------------------+----------------------+--------------------+
| before_insert_patient | INSERT | patient | SET new.`patient_id` = UUID_SHORT() | BEFORE | NULL    |          | root@localhost | utf8                 | utf8_general_ci      | latin1_swedish_ci  |
+-----------------------+--------+---------+-------------------------------------+--------+---------+----------+----------------+----------------------+----------------------+--------------------+
1 row in set (0.00 sec)

Here's what I did as a work-around...
DBSession.execute(
    """CREATE TRIGGER before_insert_patient BEFORE INSERT ON `patient`
    FOR EACH ROW BEGIN
        IF (NEW.patient_id IS NULL OR NEW.patient_id = 0) THEN
            SET NEW.patient_id = UUID_SHORT();
        END IF;
    END""")
and in the patient class:
patient_id = Column(BIGINT(unsigned=True), default=text("uuid_short()"),
                    primary_key=True, autoincrement=False, server_default="0")
So the trigger only does something if someone accesses the database directly rather than through the Python code. And hopefully no one does patient1 = Patient(patient_id=0, details=None), as SQLAlchemy would then use the 0 value instead of what the trigger produces.
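If that is a concern, one possible guard (my own sketch, not part of the original workaround, using SQLAlchemy's validates hook; the details column is omitted for brevity) is to reject an explicit 0 before it ever reaches the INSERT:

from sqlalchemy import Column, text
from sqlalchemy.dialects.mysql import BIGINT
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import validates

Base = declarative_base()

class Patient(Base):
    __tablename__ = 'patient'
    patient_id = Column(BIGINT(unsigned=True), default=text("uuid_short()"),
                        primary_key=True, autoincrement=False, server_default="0")

    @validates('patient_id')
    def _reject_zero(self, key, value):
        # 0 is the sentinel reserved for the trigger; refusing it here keeps a
        # caller from shadowing the trigger-generated UUID_SHORT() value.
        if value == 0:
            raise ValueError("patient_id=0 is reserved for the database trigger")
        return value

The validator only fires on attribute assignment, so the column's own default=text("uuid_short()") is unaffected.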

For completeness, here are two additional possible solutions for your question (also available here), based on your answer. They are slightly simpler than your solution (they omit passing parameters with correct default values) and use SQLAlchemy constructs for defining the triggers.
#!/usr/bin/env python3
from sqlalchemy import BigInteger, Column, create_engine, DDL, event
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from sqlalchemy.schema import FetchedValue
from sqlalchemy.sql.expression import func

Base = declarative_base()

class PatientOutputMixin(object):
    '''
    Mixin to output human readable representations of models.
    '''
    def __str__(self):
        return '{}'.format(self.patient_id)

    def __repr__(self):
        return str(self)

class Patient1(Base, PatientOutputMixin):
    '''
    First version of ``Patient`` model.
    '''
    __tablename__ = 'patient_1'
    patient_id = Column(BigInteger, primary_key=True,
                        default=func.uuid_short())

# the following trigger is only required if rows are inserted into the table
# without using the above model/table definition, otherwise it is redundant
create_before_insert_trigger = DDL('''
    CREATE TRIGGER before_insert_%(table)s BEFORE INSERT ON %(table)s
    FOR EACH ROW BEGIN
        IF NEW.patient_id IS NULL THEN
            SET NEW.patient_id = UUID_SHORT();
        END IF;
    END
''')

event.listen(Patient1.__table__, 'after_create',
             create_before_insert_trigger.execute_if(dialect='mysql'))
# end of optional trigger definition

class Patient2(Base, PatientOutputMixin):
    '''
    Second version of ``Patient`` model.
    '''
    __tablename__ = 'patient_2'
    patient_id = Column(BigInteger, primary_key=True,
                        default=0, server_default=FetchedValue())

create_before_insert_trigger = DDL('''
    CREATE TRIGGER before_insert_%(table)s BEFORE INSERT ON %(table)s
    FOR EACH ROW BEGIN
        SET NEW.patient_id = UUID_SHORT();
    END
''')

event.listen(Patient2.__table__, 'after_create',
             create_before_insert_trigger.execute_if(dialect='mysql'))

# test models
engine = create_engine('mysql+oursql://test:test@localhost/test?charset=utf8')
Base.metadata.bind = engine
Base.metadata.drop_all()
Base.metadata.create_all()

Session = sessionmaker(bind=engine)
session = Session()

for patient_model in [Patient1, Patient2]:
    session.add(patient_model())
    session.add(patient_model())
    session.commit()
    print('{} instances: {}'.format(patient_model.__name__,
                                    session.query(patient_model).all()))
Running the above script produces the following (sample) output:
Patient1 instances: [22681783426351145, 22681783426351146]
Patient2 instances: [22681783426351147, 22681783426351148]


Django unique constraint not working properly [duplicate]

This question already has answers here:
Unique constraint that allows empty values in MySQL
(3 answers)
Closed 2 years ago.
I have a model with a custom _id that has to be unique, plus soft delete; deleted objects don't have to have a unique _id. So I did it as follows:
from django.db import models
from django.db.models import Max, Q, UniqueConstraint

class MyModel(models.Model):
    _id = models.CharField(max_length=255, db_index=True)
    event_code = models.CharField(max_length=1, blank=True, default='I')
    deleted = models.BooleanField(default=False)
    deleted_id = models.IntegerField(blank=True, null=True)
    objects = MyModelManager()  # manager that filters out deleted objects
    all_objects = MyModelBaseManager()  # manager that returns every object, including deleted ones

    class Meta:
        constraints = [
            UniqueConstraint(fields=['_id', 'event_code', 'deleted', 'deleted_id'], name='unique_id')
        ]

    def delete(self, *args, **kwargs):
        self.deleted = True
        self.deleted_id = self.max_related_deleted_id() + 1
        self.save()

    def undelete(self, *args, **kwargs):
        self.deleted = False
        self.deleted_id = None
        self.save()

    def max_related_deleted_id(self):
        # Get max deleted_id of deleted objects with the same _id
        max_deleted_id = MyModel.all_objects.filter(Q(_id=self._id) & ~Q(pk=self.pk) & Q(deleted=True)).aggregate(Max('deleted_id'))['deleted_id__max']
        return max_deleted_id if max_deleted_id is not None else 0
The whole deleted_id logic works; I tested it out. The problem is that the UniqueConstraint is not enforced. For example:
$ MyModel.objects.create(_id='A', event_code='A')
$ MyModel.objects.create(_id='A', event_code='A')
$ MyModel.objects.create(_id='A', event_code='A')
$ MyModel.objects.filter(_id='A').values('pk', '_id', 'event_code', 'deleted', 'deleted_id')
[{'_id': 'A',
'deleted': False,
'deleted_id': None,
'event_code': 'A',
'pk': 1},
{'_id': 'A',
'deleted': False,
'deleted_id': None,
'event_code': 'A',
'pk': 2},
{'_id': 'A',
'deleted': False,
'deleted_id': None,
'event_code': 'A',
'pk': 3}]
Here is the migration that created the unique constraint:
$ python manage.py sqlmigrate myapp 0003
BEGIN;
--
-- Create constraint unique_id on model MyModel
--
ALTER TABLE `myapp_mymodel` ADD CONSTRAINT `unique_id` UNIQUE (`_id`, `event_code`, `deleted`, `deleted_id`);
COMMIT;
Any help is appreciated!
Django version = 2.2
Python version = 3.7
Database = MySQL 5.7
OK, I figured out my problem. I'm going to post it here in case someone else runs into it:
The problem is with MySQL: as stated in this post, MySQL allows multiple NULL values in a unique constraint, so I had to change the default of deleted_id to 0, and now it works.
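Here is a minimal sketch of what the fixed model looks like (my reconstruction from the description above, not the poster's exact diff; undelete should then reset deleted_id to 0 rather than None):

from django.db import models
from django.db.models import UniqueConstraint

class MyModel(models.Model):
    _id = models.CharField(max_length=255, db_index=True)
    event_code = models.CharField(max_length=1, blank=True, default='I')
    deleted = models.BooleanField(default=False)
    # default=0 instead of null=True: MySQL treats NULLs as distinct inside a
    # unique index, so rows with deleted_id=NULL never collide. A concrete
    # default makes the composite constraint bite for live rows.
    deleted_id = models.IntegerField(default=0)

    class Meta:
        constraints = [
            UniqueConstraint(fields=['_id', 'event_code', 'deleted', 'deleted_id'],
                             name='unique_id')
        ]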

Django db_index=True does not create index after migration

I have an abstract model:
class ChronoModel(models.Model):
    created = models.DateTimeField(
        u"Create time",
        auto_now_add=True,
        db_index=True
    )
    modified = models.DateTimeField(
        u"Last change time",
        auto_now_add=True,
        db_index=True
    )

    class Meta(object):
        abstract = True
        ordering = ('-created', )
And I have several models inherited from ChronoModel. My problem is the same for all of them; here is one of those models:
class BatchSession(ChronoModel):
    spent_seconds = models.BigIntegerField(
        u"spent_seconds", default=0, editable=False)
    max_seconds = models.BigIntegerField(
        u"max_seconds", null=True, blank=True)
    comment = models.CharField(
        u"comment", max_length=255, null=True, blank=False,
        unique=True)

    class Meta(ChronoModel.Meta):
        verbose_name = u'Session'
        verbose_name_plural = u'Sessions'
        ordering = ('-modified',)
        db_table = 'check_batchsession'

    def __unicode__(self):
        return u'#{}, {}/{} sec'.format(
            self.id, self.spent_seconds, self.max_seconds)
After creating and applying the migration, there is no index on the "created" and "modified" fields.
The command
python manage.py sqlmigrate app_name 0001 | grep INDEX
shows me:
BEGIN;
....
CREATE INDEX `check_batchsession_e2fa5388` ON `check_batchsession` (`created`);
CREATE INDEX `check_batchsession_9ae73c65` ON `check_batchsession` (`modified`);
....
COMMIT;
But MySQL returns:
mysql> SHOW INDEX FROM check_batchsession;
+--------------------+------------+--------------------------------------------------+--------------+-------------+
| Table              | Non_unique | Key_name                                         | Seq_in_index | Column_name |
+--------------------+------------+--------------------------------------------------+--------------+-------------+
| check_batchsession |          0 | PRIMARY                                          |            1 | id          |
| check_batchsession |          0 | check_batchsession_comment_558191ed0a395dfa_uniq |            1 | comment     |
+--------------------+------------+--------------------------------------------------+--------------+-------------+
2 rows in set (0,00 sec)
How can I resolve it?
Django 1.8.18
MySQL 5.6
It was a Django South issue.
I don't know what happened, but all my indexes weren't created. (If someone knows, please write in the comments.)
My solution:
1) remove db_index=True from all fields in ChronoModel
2) makemigrations
3) migrate
4) add db_index=True back to all fields in ChronoModel
5) makemigrations
6) migrate
All my indexes were restored.
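To verify that the indexes actually exist after migrating, one option (a sketch using Django's schema introspection API; the table name is taken from the question) is to list the table's constraints from a manage.py shell:

# Run inside `python manage.py shell`. Prints every index/constraint on the
# table so you can confirm the `created` and `modified` indexes are present.
from django.db import connection

with connection.cursor() as cursor:
    constraints = connection.introspection.get_constraints(
        cursor, 'check_batchsession')

for name, info in constraints.items():
    print(name, info['columns'], 'INDEX' if info['index'] else '')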

SQLAlchemy: insert data dumped from database A into database B

What is the most effective way to insert data dumped from database A into database B? Normally I would use mysqldump for a task like this, but because of the complex query I had to take a different approach. At present I have the following inefficient solution:
from sqlalchemy import create_engine, Column, INTEGER, CHAR, VARCHAR
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()
SessFactory = sessionmaker()

print('## Configure database connections')
db_one = create_engine('mysql://root:pwd1@127.0.0.1/db_one', echo=True).connect()
sess_one = SessFactory(bind=db_one)
db_two = create_engine('mysql://root:pwd2@127.0.0.2/db_two', echo=True).connect()
sess_two = SessFactory(bind=db_two)

## Declare query to dump data
dump_query = (
    'SELECT A.id, A.name, B.address '
    'FROM table_a A JOIN table_b B '
    'ON A.id = B.id_c WHERE '
    'A.deleted = 0'
)

print('## Fetch data on db_one')
data = db_one.execute(dump_query).fetchall()

## Declare table on db_two
class cstm_table(Base):
    __tablename__ = 'cstm_table'
    pk = Column(INTEGER, primary_key=True)
    id = Column(CHAR(36), nullable=False)
    name = Column(VARCHAR(150), default=None)
    address = Column(VARCHAR(150), default=None)

print('## Recreate "cstm_table" on db_two')
cstm_table.__table__.drop(bind=db_two, checkfirst=True)
cstm_table.__table__.create(bind=db_two)

print('## Insert dumped data into the "cstm_table" on db_two')
for row in data:
    insert = cstm_table.__table__.insert().values(row)
    db_two.execute(insert)
This executes over 100K inserts sequentially (horrible).
I also tried:
with db_two.connect() as conn:
    with conn.begin() as trans:
        row_as_dict = [dict(row.items()) for row in data]
        try:
            conn.execute(cstm_table.__table__.insert(), row_as_dict)
        except:
            trans.rollback()
            raise
        else:
            trans.commit()
But then, after inserting ~20 rows, I get an error:
OperationalError: (_mysql_exceptions.OperationalError) (2006, 'MySQL server has gone away')
The following also does the job, but I'm not so sure it's the most efficient:
sess_two.add_all([cstm_table(**dict(row.items())) for row in data])
sess_two.flush()
sess_two.commit()
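For the "MySQL server has gone away" error, a likely culprit is the single multi-row INSERT exceeding the server's max_allowed_packet. A middle ground (a sketch under that assumption, reusing the names from the question; the batch size of 1000 is an arbitrary starting point) is to send the bulk insert in fixed-size chunks inside one transaction:

# Chunk the dumped rows so each INSERT stays well below max_allowed_packet.
def batches(seq, size=1000):
    for start in range(0, len(seq), size):
        yield seq[start:start + size]

rows_as_dicts = [dict(row.items()) for row in data]

with db_two.begin():  # commits on success, rolls back on any error
    for batch in batches(rows_as_dicts):
        db_two.execute(cstm_table.__table__.insert(), batch)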

Insert values into SQL column with mysql-python

I am trying to insert values into a column of a SQL table, using MySQLdb in Python 2.7. I am having problems with the command to insert a list into a single column.
I have a simple table called 'name' as shown below:
+--------+-----------+----------+--------+
| nameid | firstname | lastname | TopAdd |
+--------+-----------+----------+--------+
|      1 | Cookie    | Monster  |        |
|      2 | Guy       | Smiley   |        |
|      3 | Big       | Bird     |        |
|      4 | Oscar     | Grouch   |        |
|      5 | Alastair  | Cookie   |        |
+--------+-----------+----------+--------+
Here is how I created the table:
CREATE TABLE `name` (
`nameid` int(11) NOT NULL AUTO_INCREMENT,
`firstname` varchar(45) DEFAULT NULL,
`lastname` varchar(45) DEFAULT NULL,
`TopAdd` varchar(40) NOT NULL,
PRIMARY KEY (`nameid`)
) ENGINE=InnoDB AUTO_INCREMENT=16 DEFAULT CHARSET=utf8
Here is how I populated the table:
INSERT INTO `test`.`name`
(`firstname`,`lastname`)
VALUES
("Cookie","Monster"),
("Guy","Smiley"),
("Big","Bird"),
("Oscar","Grouch"),
("Alastair","Cookie");
DISCLAIMER: The original source for the above MySQL example is here.
Here is how I created the new column named TopAdd:
ALTER TABLE name ADD TopAdd VARCHAR(40) NOT NULL
I now have a list of 5 values that I would like to insert into the TopAdd column. Here is the list:
vals_list = ['aa','bb','cc','dd','ee']
Here is what I have tried (UPDATE statement inside loop):
vals = tuple(vals_list)
for self.ijk in range (0,len(self.vals)):
    self.cursor.execute ("UPDATE name SET TopAdd = %s WHERE 'nameid' = %s" % (self.vals[self.ijk],self.ijk+1))
I get the following error message:
Traceback (most recent call last):
  File "C:\Python27\mySQLdbClass.py", line 70, in <module>
    main()
  File "C:\Python27\mySQLdbClass.py", line 66, in main
    db.mysqlconnect()
  File "C:\Python27\mySQLdbClass.py", line 22, in mysqlconnect
    self.cursor.execute ("UPDATE name SET TopAdd = %s WHERE 'nameid' = %s" % (self.vals[self.ijk],self.ijk+1))
  File "C:\Python27\lib\site-packages\MySQLdb\cursors.py", line 205, in execute
    self.errorhandler(self, exc, value)
  File "C:\Python27\lib\site-packages\MySQLdb\connections.py", line 36, in defaulterrorhandler
    raise errorclass, errorvalue
_mysql_exceptions.OperationalError: (1054, "Unknown column 'aa' in 'field list'")
[Finished in 0.2s with exit code 1]
Is there a way to insert these values into the column with a loop or directly as a list?
Try this:
vals_list = ['aa','bb','cc','dd','ee']
for i, j in enumerate(vals_list):
    self.cursor.execute("UPDATE test.name SET TopAdd = '%s' WHERE nameid = %s" % (str(j), int(i + 1)))
One problem is here:
for self.ijk in range (0,len(self.vals)):
The range function creates a list of integers (presumably the list [0, 1, 2, 3, 4]). When iterating over a collection in a for loop, you bind each successive item in the collection to a name; you do not access items as attributes of an instance. (It also seems appropriate to use xrange here; see xrange vs range.) So the self reference is nonsensical; beyond that, ijk is a terrible name for an integer value, and there's no need to supply the default start value of zero. KISS:
for i in range(len(self.vals)):
Not only does this make your line shorter (and thus easier to read); using i to represent an integer value in a loop is a well-understood convention. Now we come to another problem:
self.cursor.execute ("UPDATE name SET TopAdd = %s WHERE 'nameid' = %s" % (self.vals[self.ijk],self.ijk+1))
You're not properly parameterizing your query here. Do not follow the advice above, which may fix your current error but leaves your code prone to wasted debugging time at best, and SQL injection and/or data integrity issues at worst. Instead, replace the % with a comma so that the execute function does the work of safely quoting and escaping the parameters for you.
With that change, and minus the quotation marks around your column name, nameid:
query = "UPDATE name SET TopAdd = %s WHERE nameid = %s;"
for i in range(len(self.vals)):
    self.cursor.execute(query, (self.vals[i], i + 1))
Should work as expected. You can still use enumerate as suggested by the other answer, but there's no reason to go around typecasting everything in sight; enumerate is documented and gives exactly the types you already want:
for i, val in enumerate(self.vals):
    self.cursor.execute(query, (val, i + 1))
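As a side note (my addition, not from the original answers): MySQLdb cursors also provide executemany, which takes the whole parameter list in one call; a sketch, assuming self.db holds the MySQLdb connection object:

# Build (value, nameid) parameter pairs and let executemany loop for us.
query = "UPDATE name SET TopAdd = %s WHERE nameid = %s"
params = [(val, i + 1) for i, val in enumerate(self.vals)]
self.cursor.executemany(query, params)
self.db.commit()  # assumption: self.db is the MySQLdb connection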

Why does my MySQL program hang (deadlock)?

I have been struggling for more than a day with a MySQL hang (deadlock). In the test case below, I first create the database if it doesn't exist, then create a table if it doesn't exist either, and then run a query on the table. Each time I execute an SQL command, I strictly close the cursor. But the program still hangs. I have found two workarounds: 1) close the connection after creating the database and open a new connection; 2) call commit() after the query.
The two workarounds work well, but they leave me more confused. As I understand it, it's fine to keep a connection open as long as the cursors are closed in time and commit() is called after each change. And there should be no reason to call commit() after a plain query.
So my two workarounds have shaken my understanding of how database operations work. I need some help pinpointing what's fundamentally wrong with the program... Just give me some light...
Thanks very much!
#!/usr/bin/python2
import MySQLdb

def NewConnectToMySQL():
    conn = MySQLdb.Connect("localhost", "root", "mypassword")
    return conn

def CreateDBIfNotExists(conn):
    sql = "CREATE DATABASE IF NOT EXISTS testdb"
    cur = conn.cursor()
    cur.execute(sql)
    cur.close()
    conn.select_db("testdb")
    conn.commit()

    """workaround-1"""
    #conn.close()
    #conn = NewConnectToMySQL()
    #conn.select_db("testdb")
    return conn

def CreateTableIfNotExists(conn):
    sql = "CREATE TABLE IF NOT EXISTS mytable (id INTEGER, name TEXT)"
    cur = conn.cursor()
    cur.execute(sql)
    cur.close()
    conn.commit()

def QueryName(conn, name):
    sql = "SELECT * FROM mytable WHERE name = '%s'" % name
    cur = conn.cursor()
    cur.execute(sql)
    info = cur.fetchall()
    cur.close()

    """workaround-2"""
    #conn.commit()
    return info

conn1 = NewConnectToMySQL()
CreateDBIfNotExists(conn1)
CreateTableIfNotExists(conn1)
QueryName(conn1, "tom")

conn2 = NewConnectToMySQL()
CreateDBIfNotExists(conn2)
CreateTableIfNotExists(conn2)  # hangs here!!!!!!!!!!
Here is the output of SHOW FULL PROCESSLIST when it hangs:
mysql> SHOW FULL PROCESSLIST
-> ;
+-----+------+-----------+--------+---------+------+---------------------------------+------------------------------------------------------------+
| Id  | User | Host      | db     | Command | Time | State                           | Info                                                       |
+-----+------+-----------+--------+---------+------+---------------------------------+------------------------------------------------------------+
| 720 | root | localhost | testdb | Sleep   |   96 |                                 | NULL                                                       |
| 721 | root | localhost | testdb | Query   |   96 | Waiting for table metadata lock | CREATE TABLE IF NOT EXISTS mytable (id INTEGER, name TEXT) |
| 727 | root | localhost | NULL   | Query   |    0 | NULL                            | SHOW FULL PROCESSLIST                                      |
+-----+------+-----------+--------+---------+------+---------------------------------+------------------------------------------------------------+
3 rows in set (0.00 sec)
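For what it's worth, the processlist above already points at the likely cause (my explanation, not part of the original question): MySQLdb runs with autocommit disabled by default, so the SELECT in QueryName opens a transaction on conn1 that is never ended. An open transaction holds a shared metadata lock on mytable until it commits or rolls back, and conn2's CREATE TABLE IF NOT EXISTS has to wait for that lock ("Waiting for table metadata lock" on Id 721). That is exactly why both workarounds help: they end conn1's transaction. A sketch of the least invasive fix, with the query parameterized as a bonus:

def QueryName(conn, name):
    # Let the driver escape the value instead of interpolating it ourselves.
    sql = "SELECT * FROM mytable WHERE name = %s"
    cur = conn.cursor()
    cur.execute(sql, (name,))
    info = cur.fetchall()
    cur.close()
    conn.commit()  # end the read transaction so its metadata lock is released
    return info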