Django db_index=True does not create index after migration - MySQL

I have an abstract model:
class ChronoModel(models.Model):
    created = models.DateTimeField(
        u"Create time",
        auto_now_add=True,
        db_index=True
    )
    modified = models.DateTimeField(
        u"Last change time",
        auto_now=True,
        db_index=True
    )

    class Meta(object):
        abstract = True
        ordering = ('-created', )
And I have several models inheriting from ChronoModel. The problem is the same for all of them; here is one of these models as an example:
class BatchSession(ChronoModel):
    spent_seconds = models.BigIntegerField(
        u"spent_seconds", default=0, editable=False)
    max_seconds = models.BigIntegerField(
        u"max_seconds", null=True, blank=True)
    comment = models.CharField(
        u"comment", max_length=255, null=True, blank=False,
        unique=True)

    class Meta(ChronoModel.Meta):
        verbose_name = u'Session'
        verbose_name_plural = u'Sessions'
        ordering = ('-modified',)
        db_table = 'check_batchsession'

    def __unicode__(self):
        return u'#{}, {}/{} sec'.format(
            self.id, self.spent_seconds, self.max_seconds)
After creating and applying the migration, there are no indexes on the "created" and "modified" fields.
The command
python manage.py sqlmigrate app_name 0001 | grep INDEX
shows me:
BEGIN;
....
CREATE INDEX `check_batchsession_e2fa5388` ON `check_batchsession` (`created`);
CREATE INDEX `check_batchsession_9ae73c65` ON `check_batchsession` (`modified`);
....
COMMIT;
But MySQL returns:
mysql> SHOW INDEX FROM check_batchsession;
+--------------------+------------+--------------------------------------------------+--------------+-------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name |
+--------------------+------------+--------------------------------------------------+--------------+-------------+
| check_batchsession | 0 | PRIMARY | 1 | id |
| check_batchsession | 0 | check_batchsession_comment_558191ed0a395dfa_uniq | 1 | comment |
+--------------------+------------+--------------------------------------------------+--------------+-------------+
2 rows in set (0,00 sec)
How can I resolve it?
Django 1.8.18
MySQL 5.6

It turned out to be a Django South issue.
I don't know exactly what happened, but none of my indexes were created. (If someone knows, please write in the comments.)
My solution:
1) remove db_index=True from all fields in ChronoModel
2) makemigrations
3) migrate
4) add db_index=True back to all fields in ChronoModel
5) makemigrations
6) migrate
All my indexes were restored.
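Note that sqlmigrate only prints the SQL a migration would run; it does not prove those statements were ever executed (for example, if the initial migration was applied with --fake because the tables already existed from the South era, the CREATE INDEX statements never ran). As an alternative to the remove-and-re-add dance, the missing indexes could presumably also be created by hand, using the exact statements sqlmigrate prints:
CREATE INDEX `check_batchsession_e2fa5388` ON `check_batchsession` (`created`);
CREATE INDEX `check_batchsession_9ae73c65` ON `check_batchsession` (`modified`);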


How to update a unique default option in one SQL statement

I have a table like this:
+----+------+------------+
| id | name | is_default |
+----+------+------------+
|  1 | a    |          1 |
|  2 | a    |          0 |
|  3 | a    |          0 |
|  4 | a    |          0 |
+----+------+------------+
Now I want to change row 2 (id = 2) is_default to 1 and, at the same time, set the original row's (id = 1) is_default back to 0, like choosing the default option from a list in a UI.
1. Can I do this in one SQL statement?
2. If it is possible, how do I write the SQL statement, or how do I write it in the MyBatis mapper.xml?
I'm using Spring Boot with MyBatis; the SQL statements are written in mapper.xml.
@Data
public class Option {
    private Integer id;
    private String name;
    private Boolean isDefault;
}
How do I write the MyBatis or MySQL statement?
You may use a CASE expression:
UPDATE yourTable
SET is_default = CASE WHEN id = 1 THEN 0 ELSE 1 END
WHERE id IN (1, 2);
Or, if you intended to just toggle the default values for id 1 and 2, then try:
UPDATE yourTable
SET is_default = CASE WHEN is_default = 1 THEN 0 ELSE 1 END
WHERE id IN (1, 2);
Tim's answer is fine. If the values are only 0/1, you can simplify it to:
UPDATE t
SET is_default = 1 - is_default
WHERE id IN (1, 2);
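For the mapper.xml part of the question, a minimal sketch along the lines of Tim's statement could look like this; the statement id, table name, and parameter names (bound via @Param on the mapper method) are illustrative assumptions, not taken from the answers above:
<!-- hypothetical mapper entry; table and parameter names are assumptions -->
<update id="switchDefaultOption">
    UPDATE option_table
    SET is_default = CASE WHEN id = #{newDefaultId} THEN 1 ELSE 0 END
    WHERE id IN (#{oldDefaultId}, #{newDefaultId})
</update>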

SQLAlchemy: how to insert data into two tables and reference a foreign key?

I am inserting a list of Python dictionaries into a Postgres database using SQLAlchemy (via Flask-SQLAlchemy).
One of the tables is a list of all unique items (table 1), while the second is a time series of data related to an item (table 2).
In essence, I want to insert any new row (with a unique hash) into table 1, then insert its data into table 2. If the item already exists in table 1, just insert the "child" row into table 2, referencing the existing entry in table 1.
This is one item in the list, the list has a few hundred of these.
{'placement': '5662448s608653114', 't1': datetime.datetime(2018, 4, 15, 17, 47, 7, 434982), 't2': datetime.datetime(2018, 4, 25, 17, 47, 7, 434994), 'camp_id': 1, 'clicks': '0', 'visits': '3', 'conversions': '0', 'revenue': '0'}
I would like to insert 5662448s608653114 into table 1, and then insert all the other data into table 2, where I reference the item not by 5662448s608653114 but by its id in table 1.
So I'd get:
Table 1:
____________________
1| 5662448s608653114
2| 5520103
Table 2:
ID | Pl id | T1 | T2 | cost | revenue | clicks
_______________________________________________
499| 1 |
500| 2 |
I tested this, which does not work:
def write_tracker_data(self):
    for item in self.data:
        ts = Placements(placement_ts_hash=item["placement"])
        pl = TrackerPlacementData(placement_id=ts.id, t1=item["t1"], t2=item["t2"], camp_id=1,
                                  revenue=item["revenue"], clicks=item["clicks"],
                                  conversions=item["conversions"])
        db.session.add(pl)
    db.session.commit()
The above code inserts the data, but with None instead of the id from table 1. It also doesn't seem very efficient; you know that feeling when something can definitely be done a better way...
Here's the model classes for reference:
class Placements(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    traffic_source = db.Column(db.Integer, db.ForeignKey('ts_types.id'))
    placement_ts_hash = db.Column(db.String, index=True)
    placement_url = db.Column(db.String)
    placement_type = db.Column(db.String)
    # Relationship between the unique placement table and tracker_placement_data
    tracker_data = db.relationship("TrackerPlacementData", backref="placement_hash")

class TrackerPlacementData(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    t1 = db.Column(db.DateTime(timezone=True))
    t2 = db.Column(db.DateTime(timezone=True), index=True)
    camp_id = db.Column(db.Integer, db.ForeignKey('campaigns.id'), nullable=False)
    placement_id = db.Column(db.Integer, db.ForeignKey('placements.id'), nullable=True, index=True)
    revenue = db.Column(db.Float)
    clicks = db.Column(db.Integer)
    conversions = db.Column(db.Integer)
Thanks in advance.
Edit: This works, but it doesn't seem very good due to a separate commit for every item in the loop :/
def write_tracker_data(self):
    for item in self.data:
        ts = Placements(placement_ts_hash=item["placement"])
        db.session.add(ts)
        db.session.commit()
        pl = TrackerPlacementData(placement_hash=ts, t1=item["t1"], t2=item["t2"], camp_id=1,
                                  revenue=item["revenue"], clicks=item["clicks"],
                                  conversions=item["conversions"])
        db.session.add(pl)
        db.session.commit()
Your Placement instance won't have an id until it is committed. This is where the tracker_data relationship can help you...
for item in self.data:
    ts = Placements(placement_ts_hash=item["placement"])
    pl = TrackerPlacementData(
        t1=item["t1"],
        t2=item["t2"],
        camp_id=1,
        revenue=item["revenue"],
        clicks=item["clicks"],
        conversions=item["conversions"]
    )
    ts.tracker_data.append(pl)
    db.session.add(ts)
db.session.commit()
Notice that pl.placement_id is not set to anything. Instead pl is appended to ts.tracker_data and everything should be looked after for you when you call commit.
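The question also asks to reuse an existing table 1 row when the hash is already present. A minimal get-or-create sketch building on the answer above (the per-item query is illustrative, not optimized):
for item in self.data:
    # Reuse the existing placement if the hash is already known,
    # otherwise create a new one ("get or create").
    ts = Placements.query.filter_by(placement_ts_hash=item["placement"]).first()
    if ts is None:
        ts = Placements(placement_ts_hash=item["placement"])
        db.session.add(ts)
    pl = TrackerPlacementData(
        t1=item["t1"],
        t2=item["t2"],
        camp_id=1,
        revenue=item["revenue"],
        clicks=item["clicks"],
        conversions=item["conversions"]
    )
    ts.tracker_data.append(pl)
db.session.commit()
A single commit for the whole batch also addresses the efficiency concern from the question's edit.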

Why does my MySQL program hang (deadlock)?

I have been struggling for more than a day with a MySQL hang (deadlock). In the test case below, I first try to create the database if it doesn't exist, then create a table if it doesn't exist, and finally run a query on the table. Each time I execute a SQL command, I strictly close the cursor. But the program still hangs. I have found two workarounds: 1) close the connection after creating the database and open a new connection; 2) call commit() after the query.
The two workarounds work fine, but they leave me even more confused. As I understand it, it should be OK to keep a connection open as long as the cursors are closed in time and commit() is called after each change. And there should be no reason to call commit() after a query.
So my two workarounds have even shaken my understanding of database operations. I need some help pointing out what is fundamentally wrong with the program... Just give me some light!
Thanks very much!
#!/usr/bin/python2
import MySQLdb

def NewConnectToMySQL():
    conn = MySQLdb.Connect("localhost", "root", "mypassword")
    return conn

def CreateDBIfNotExists(conn):
    sql = "CREATE DATABASE IF NOT EXISTS testdb"
    cur = conn.cursor()
    cur.execute(sql)
    cur.close()
    conn.select_db("testdb")
    conn.commit()
    """workaround-1"""
    #conn.close()
    #conn = NewConnectToMySQL()
    #conn.select_db("testdb")
    return conn

def CreateTableIfNotExists(conn):
    sql = "CREATE TABLE IF NOT EXISTS mytable (id INTEGER, name TEXT)"
    cur = conn.cursor()
    cur.execute(sql)
    cur.close()
    conn.commit()

def QueryName(conn, name):
    sql = "SELECT * FROM mytable WHERE name = '%s'" % name
    cur = conn.cursor()
    cur.execute(sql)
    info = cur.fetchall()
    cur.close()
    """workaround-2"""
    #conn.commit()
    return info

conn1 = NewConnectToMySQL()
CreateDBIfNotExists(conn1)
CreateTableIfNotExists(conn1)
QueryName(conn1, "tom")

conn2 = NewConnectToMySQL()
CreateDBIfNotExists(conn2)
CreateTableIfNotExists(conn2)  # hangs here!!!!!!!!!!
Here is the output of SHOW FULL PROCESSLIST when it hangs:
mysql> SHOW FULL PROCESSLIST
-> ;
+-----+------+-----------+--------+---------+------+---------------------------------+------------------------------------------------------------+
| Id | User | Host | db | Command | Time | State | Info |
+-----+------+-----------+--------+---------+------+---------------------------------+------------------------------------------------------------+
| 720 | root | localhost | testdb | Sleep | 96 | | NULL |
| 721 | root | localhost | testdb | Query | 96 | Waiting for table metadata lock | CREATE TABLE IF NOT EXISTS mytable (id INTEGER, name TEXT) |
| 727 | root | localhost | NULL | Query | 0 | NULL | SHOW FULL PROCESSLIST |
+-----+------+-----------+--------+---------+------+---------------------------------+------------------------------------------------------------+
3 rows in set (0.00 sec)
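A likely explanation, inferred from the processlist above rather than from an accepted answer: MySQLdb runs with autocommit disabled by default, so the SELECT issued through conn1 opened a transaction and acquired a metadata lock on mytable that is held until the transaction ends. CREATE TABLE IF NOT EXISTS on conn2 needs an exclusive metadata lock on the same table, hence the "Waiting for table metadata lock" state. Both workarounds end conn1's transaction, which releases the lock. A minimal fix along those lines is to commit after reads as well (shown here with a parameterized query, which also avoids SQL injection):
def QueryName(conn, name):
    sql = "SELECT * FROM mytable WHERE name = %s"
    cur = conn.cursor()
    cur.execute(sql, (name,))  # let the driver escape the parameter
    info = cur.fetchall()
    cur.close()
    conn.commit()  # end the read transaction and release the metadata lock
    return info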

Modifying an entry in the Django admin creates a duplicate

I am using the Django admin to modify records in a table. The problem is that whenever I modify an entry and click save, instead of that entry being modified, the old one is left untouched and a new entry containing the modified details is added.
For example, if I have the following:
Aardvark | Orycteropus | Some description | aardvark | animals/images/aardvark.jpg
when I change the first field to Aardvarkon, I get the following:
Aardvark | Orycteropus | Some description | aardvark | animals/images/aardvark.jpg
Aardvarkon | Orycteropus | Some description | aardvark | animals/images/aardvark.jpg
I have the following Django model:
def article_file_name(instance, filename):
    return ANIMAL_IMAGES_BASE_DIR[1:] + instance.ai_species_species_sanitized + '.jpg'

class ai_species(models.Model):
    ai_species_species = models.CharField('Species', max_length=100, primary_key=True, db_column='species')
    ai_species_genus = models.ForeignKey(ai_genera, max_length=50, blank=False, null=False, db_column='genus')
    ai_species_description = models.CharField('Description', max_length=65000, db_column='description')
    ai_species_species_sanitized = models.CharField(max_length=100, blank=False, null=False, db_column='species_sanitized')
    image_url = models.ImageField(max_length=100, storage=OverwriteStorage(), validators=[validate_jpg_extension], upload_to=article_file_name)

    class Meta:
        db_table = 'Species'
        verbose_name = 'Animal species'
        verbose_name_plural = 'Animal species'

    def __unicode__(self):  # Required, don't remove.
        return self.ai_species_species
And the following helpers:
def validate_jpg_extension(value):
    if not value.name.lower().endswith('.jpg') and not value.name.lower().endswith('.jpeg'):
        raise ValidationError(u'Invalid file format! Only jpg or jpeg files allowed!')

class OverwriteStorage(FileSystemStorage):
    def get_available_name(self, name):
        # If the filename already exists, remove it.
        if self.exists(name):
            os.remove(os.path.join(settings.MEDIA_ROOT, name))
        return name
This is the MySQL table schema for this table: (screenshot not reproduced here)
This is a very counter-intuitive behavior and I haven't found any other occurrences of this online. Any help on this would be greatly appreciated.
Here's the culprit:
ai_species_species = models.CharField('Species', max_length=100, primary_key=True, db_column='species')
Since you've defined the species as the primary key, any time you change this field in the admin it will create a new record (because there isn't already a record with that primary key).
FYI, primary keys aren't supposed to be things that change for a given record, since changing the primary key will invalidate every foreign key (ForeignKey, OneToOneField, and ManyToManyField) that refers to the record.
BTW, you don't need to prefix the field names with ai_species_; it's clutter. Removing those prefixes would also remove the need for the db_column parameters.
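A minimal sketch of that fix (an illustration, not the answerer's code): drop primary_key=True, let Django add its implicit auto-increment id, and keep the species name unique instead, with the prefixes removed as suggested:
class ai_species(models.Model):
    # Django adds an implicit AutoField primary key named `id`.
    species = models.CharField('Species', max_length=100, unique=True)
    genus = models.ForeignKey(ai_genera, blank=False, null=False)
    # ... remaining fields unchanged ...

    class Meta:
        db_table = 'Species'
With a stable surrogate key, renaming Aardvark to Aardvarkon updates the existing row instead of inserting a new one.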

sqlalchemy FetchedValue and primary_key

I'm trying to create a table that uses UUID_SHORT() as the primary key. I have a trigger that sets the value on insert. I'm having trouble making SQLAlchemy recognize a column as a primary_key without it complaining about a missing default. If I do include a default value, SQLAlchemy uses that default value even after a flush, despite server_default=FetchedValue() being declared. The only way I can seem to get things to work properly is if the column is not a primary key.
I'm using Pyramid, SQLAlchemy ORM, and MySQL.
Here's the model object:
Base = declarative_base()

class Patient(Base):
    __tablename__ = 'patient'
    patient_id = Column(BigInteger(unsigned=True), server_default=FetchedValue(),
                        primary_key=True, autoincrement=False)
    details = Column(Binary(10000))
In initializedb.py I have:
with transaction.manager:
    patient1 = Patient(details=None)
    DBSession.add(patient1)
    DBSession.flush()
    print(patient1.patient_id)
Running ../bin/initialize_mainserver_db development.ini gives me the following error:
2012-11-01 20:17:22,168 INFO [sqlalchemy.engine.base.Engine][MainThread] BEGIN (implicit)
2012-11-01 20:17:22,169 INFO [sqlalchemy.engine.base.Engine][MainThread] INSERT INTO patient (details) VALUES (%(details)s)
2012-11-01 20:17:22,169 INFO [sqlalchemy.engine.base.Engine][MainThread] {'details': None}
2012-11-01 20:17:22,170 INFO [sqlalchemy.engine.base.Engine][MainThread] ROLLBACK
Traceback (most recent call last):
  File "/sites/metrics_dev/lib/python3.3/site-packages/sqlalchemy/engine/base.py", line 1691, in _execute_context
    context)
  File "/sites/metrics_dev/lib/python3.3/site-packages/sqlalchemy/engine/default.py", line 333, in do_execute
    cursor.execute(statement, parameters)
  File "/sites/metrics_dev/lib/python3.3/site-packages/mysql/connector/cursor.py", line 418, in execute
    self._handle_result(self._connection.cmd_query(stmt))
  File "/sites/metrics_dev/lib/python3.3/site-packages/mysql/connector/cursor.py", line 345, in _handle_result
    self._handle_noresultset(result)
  File "/sites/metrics_dev/lib/python3.3/site-packages/mysql/connector/cursor.py", line 321, in _handle_noresultset
    self._warnings = self._fetch_warnings()
  File "/sites/metrics_dev/lib/python3.3/site-packages/mysql/connector/cursor.py", line 608, in _fetch_warnings
    raise errors.get_mysql_exception(res[0][1],res[0][2])
mysql.connector.errors.DatabaseError: 1364: Field 'patient_id' doesn't have a default value
Running a manual insert using the mysql client results in everything working fine, so the problem seems to be with SQLAlchemy.
mysql> insert into patient(details) values (null);
Query OK, 1 row affected, 1 warning (0.00 sec)
mysql> select * from patient;
+-------------------+---------+
| patient_id | details |
+-------------------+---------+
| 94732327996882980 | NULL |
+-------------------+---------+
1 row in set (0.00 sec)
mysql> show triggers;
+-----------------------+--------+---------+-------------------------------------+--------+---------+----------+----------------+----------------------+----------------------+--------------------+
| Trigger | Event | Table | Statement | Timing | Created | sql_mode | Definer | character_set_client | collation_connection | Database Collation |
+-----------------------+--------+---------+-------------------------------------+--------+---------+----------+----------------+----------------------+----------------------+--------------------+
| before_insert_patient | INSERT | patient | SET new.`patient_id` = UUID_SHORT() | BEFORE | NULL | | root@localhost | utf8 | utf8_general_ci | latin1_swedish_ci |
+-----------------------+--------+---------+-------------------------------------+--------+---------+----------+----------------+----------------------+----------------------+--------------------+
1 row in set (0.00 sec)
Here's what I did as a work-around...
DBSession.execute(
    """CREATE TRIGGER before_insert_patient BEFORE INSERT ON `patient`
    FOR EACH ROW BEGIN
        IF (NEW.patient_id IS NULL OR NEW.patient_id = 0) THEN
            SET NEW.patient_id = UUID_SHORT();
        END IF;
    END""")
and in the Patient class:
patient_id = Column(BigInteger(unsigned=True), default=text("uuid_short()"), primary_key=True, autoincrement=False, server_default="0")
So the trigger only does something if someone accesses the database directly rather than through the Python code. And hopefully no one does patient1 = Patient(patient_id=0, details=None), as SQLAlchemy would use the 0 value instead of what the trigger produces.
For completeness, here are two additional possible solutions for your question (also available here), based on your answer. They are slightly simpler than your solution (omitting passing parameters with correct default values) and using SQLAlchemy constructs for defining the triggers.
#!/usr/bin/env python3
from sqlalchemy import BigInteger, Column, create_engine, DDL, event
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from sqlalchemy.schema import FetchedValue
from sqlalchemy.sql.expression import func

Base = declarative_base()

class PatientOutputMixin(object):
    '''
    Mixin to output human readable representations of models.
    '''
    def __str__(self):
        return '{}'.format(self.patient_id)

    def __repr__(self):
        return str(self)

class Patient1(Base, PatientOutputMixin):
    '''
    First version of ``Patient`` model.
    '''
    __tablename__ = 'patient_1'
    patient_id = Column(BigInteger, primary_key=True,
                        default=func.uuid_short())

# the following trigger is only required if rows are inserted into the table
# without using the above model/table definition, otherwise it is redundant
create_before_insert_trigger = DDL('''
CREATE TRIGGER before_insert_%(table)s BEFORE INSERT ON %(table)s
FOR EACH ROW BEGIN
    IF NEW.patient_id IS NULL THEN
        SET NEW.patient_id = UUID_SHORT();
    END IF;
END
''')

event.listen(Patient1.__table__, 'after_create',
             create_before_insert_trigger.execute_if(dialect='mysql'))
# end of optional trigger definition

class Patient2(Base, PatientOutputMixin):
    '''
    Second version of ``Patient`` model.
    '''
    __tablename__ = 'patient_2'
    patient_id = Column(BigInteger, primary_key=True,
                        default=0, server_default=FetchedValue())

create_before_insert_trigger = DDL('''
CREATE TRIGGER before_insert_%(table)s BEFORE INSERT ON %(table)s
FOR EACH ROW BEGIN
    SET NEW.patient_id = UUID_SHORT();
END
''')

event.listen(Patient2.__table__, 'after_create',
             create_before_insert_trigger.execute_if(dialect='mysql'))

# test models
engine = create_engine('mysql+oursql://test:test@localhost/test?charset=utf8')
Base.metadata.bind = engine
Base.metadata.drop_all()
Base.metadata.create_all()

Session = sessionmaker(bind=engine)
session = Session()

for patient_model in [Patient1, Patient2]:
    session.add(patient_model())
    session.add(patient_model())
    session.commit()
    print('{} instances: {}'.format(patient_model.__name__,
                                    session.query(patient_model).all()))
Running the above script produces the following (sample) output:
Patient1 instances: [22681783426351145, 22681783426351146]
Patient2 instances: [22681783426351147, 22681783426351148]