I'm using MySQL 8.0 and SQLAlchemy. My id column isn't incrementing, and I don't understand why.
SQLAlchemy Model:
class Show(db.Model):
    __tablename__ = "shows"

    id = Column(Integer, primary_key=True, index=True)
    name = Column(String)
    type = Column(String)
    status = Column(String)
    episodes = Column(Integer)
    series_entry_id = Column(Integer, ForeignKey("series.id"))
    series_id = Column(Integer, ForeignKey("series.id"))

    lists = relationship("List", secondary=show_list, back_populates="shows")
    recommendations = relationship("Recommendation", backref=backref("shows"))
    user_ratings = relationship("Rating", backref=backref("shows"))
    alt_names = relationship("User", secondary=alt_names, back_populates="alt_show_names")
    series_entry = relationship("Series", foreign_keys=[series_entry_id], uselist=False)
    series = relationship("Series", foreign_keys=[series_id], post_update=True)
Breaking code:
show = Show(
    name=new_data["title"]["english"],
    type=new_data["format"],
    status=new_data["status"],
    episodes=new_data["episodes"],
)
db.session.add(show)
db.session.commit()
The original error I received was:
sqlalchemy.exc.DatabaseError: (mysql.connector.errors.DatabaseError) 1364 (HY000):
Field 'id' doesn't have a default value
Following an answer I found, I added the index parameter to my id column and edited the my.ini file to take the server out of STRICT_TRANS_TABLES mode. The new error is:
sqlalchemy.exc.IntegrityError: (mysql.connector.errors.IntegrityError) 1062 (23000):
Duplicate entry '0' for key 'shows.PRIMARY'
All the answers I've found on this topic talk about AUTO_INCREMENT, but the SQLAlchemy docs say that it should be the default here, since the column is an integer primary key and autoincrement isn't explicitly set to False. I did try adding autoincrement=True just in case, but when I tried to migrate, Alembic told me that no changes were detected.
From comments to the question:
does this mean that SQLAlchemy is wrong and [AUTO_INCREMENT] isn't set by default [for the first integer primary key column]?
No, that is indeed how it works. Specifically, for a model like
class Account(Base):
    __tablename__ = "account"

    account_number = Column(Integer, primary_key=True)
    customer_name = Column(String(50))
alembic revision --autogenerate will generate
def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.create_table('account',
        sa.Column('account_number', sa.Integer(), nullable=False),
        sa.Column('customer_name', sa.String(length=50), nullable=True),
        sa.PrimaryKeyConstraint('account_number')
    )
(which doesn't explicitly specify autoincrement=), but when alembic upgrade head gets SQLAlchemy to actually create the table, SQLAlchemy emits
CREATE TABLE account (
    account_number INTEGER NOT NULL AUTO_INCREMENT,
    customer_name VARCHAR(50),
    PRIMARY KEY (account_number)
)
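A quick way to confirm this locally, without running a migration, is to render the model's DDL against the MySQL dialect yourself. A minimal sketch, assuming the Account model above (declarative, so it exposes __table__):

from sqlalchemy.dialects import mysql
from sqlalchemy.schema import CreateTable

# Compile the CREATE TABLE statement for the MySQL dialect and print it;
# the output should show AUTO_INCREMENT on account_number.
print(CreateTable(Account.__table__).compile(dialect=mysql.dialect()))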
Since alembic didn't detect any changes when I set autoincrement=True, does that mean that for every table I make, I'll have to set AUTO_INCREMENT in the database manually?
No. As illustrated above, Alembic properly handles AUTO_INCREMENT when the table is first created. What it doesn't detect is when an ORM model with an existing table has a column changed from autoincrement=False to autoincrement=True (or vice versa).
This is known behaviour, as indicated by the commit message here:
"Note that this flag does not support alteration of a column's "autoincrement" status, as this is not portable across backends."
MySQL does support changing the AUTO_INCREMENT property of a column via ALTER TABLE, so we can achieve that by changing the "empty" upgrade method
def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    pass
    # ### end Alembic commands ###
to
def upgrade():
    op.alter_column(
        'account',
        'account_number',
        existing_type=sa.Integer(),
        existing_nullable=False,
        autoincrement=True
    )
which renders
ALTER TABLE account MODIFY account_number INTEGER NOT NULL AUTO_INCREMENT
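For completeness, the matching downgrade method would flip the flag back. A sketch using the same op.alter_column call (my assumption of the sensible inverse, not part of the original answer):

def downgrade():
    # Remove AUTO_INCREMENT from the column again.
    op.alter_column(
        'account',
        'account_number',
        existing_type=sa.Integer(),
        existing_nullable=False,
        autoincrement=False
    )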
Related
I am trying to write a table from R to MySQL using dbWriteTable. I would also like to specify that certain fields cannot be null, along with a default value for those fields.
I am currently using the code below to specify the field types, but I am not sure how to specify a default or mark certain fields as NOT NULL.
I am trying to preserve the structure of the table as I read it in R; while writing back to SQL, some elements of the structure are lost.
dbWriteTable(conn = connection,
             name = "barcode_details",
             value = barcode_details_update,
             field.types = c(
               id = "float",
               order_no = "varchar(50)"))
I'm using a SQLAlchemy insert object to quickly insert a bunch of data from another table. The schemas are as follows:
create table master (id serial, name varchar);
create table mapping (id serial, new_name varchar, master_id integer);
-- master_id is a foreign key back to the master table, id column
I populate my master table with unique names and IDs. I then want my mapping table to get seeded with data from this master table. The SQL would be
insert into mapping (master_id, new_name) select id, name from master;
I use the following SQLAlchemy statement. The problem I get is that SQLAlchemy can't seem to resolve the names because logically they are different between the two tables.
stmt = sa_mapping_table.insert().from_select(['name', 'id'], stmt)
Is there a way to tell the insert object, "using this select statement select these columns and put the results in these columns of the target table"?
I think you are close, but you should specify the columns of mapping to insert the select from master into. This should work, where master_t and mapping_t are the SQLAlchemy Table() objects.
master_t = Table('master', metadata,
                 Column('id', Integer, primary_key=True),
                 Column('name', String, nullable=False))

mapping_t = Table('mapping', metadata,
                  Column('id', Integer, primary_key=True),
                  Column('new_name', String, nullable=False),
                  Column('master_id', Integer, ForeignKey('master.id'), nullable=False))

# ...

with engine.connect() as conn, conn.begin():
    select_q = select(master_t.c.id, master_t.c.name)
    stmt = mapping_t.insert().from_select(["master_id", "new_name"], select_q)
    conn.execute(stmt)
Creates the following SQL:
INSERT INTO mapping (master_id, new_name) SELECT master.id, master.name
FROM master
See the docs at
insert-from-select
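If you are working with ORM models rather than Core Table objects, the same pattern applies. A minimal sketch, assuming hypothetical declarative models Master and Mapping mapped to the two tables:

from sqlalchemy import insert, select
from sqlalchemy.orm import Session

# Renders: INSERT INTO mapping (master_id, new_name)
#          SELECT master.id, master.name FROM master
stmt = insert(Mapping).from_select(
    ["master_id", "new_name"],
    select(Master.id, Master.name),
)

with Session(engine) as session:
    session.execute(stmt)
    session.commit()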
I'm writing a program that stores text messages from a smartphone in a database.
I created a table in my MySQL database like so:
CREATE TABLE sms (
  smsID int(11) NOT NULL AUTO_INCREMENT,
  messageDate datetime NOT NULL,
  text varchar(1000) DEFAULT NULL,
  number varchar(45) DEFAULT NULL,
  senderID int(11) DEFAULT '0',
  taskID int(11) DEFAULT '0',
  messageType int(11) DEFAULT '0',
  hasReply bit(1) NOT NULL DEFAULT b'0',
  PRIMARY KEY (smsID),
  KEY date (messageDate)
) ENGINE=InnoDB AUTO_INCREMENT=54 DEFAULT CHARSET=utf8;
I'm creating and storing new objects for incoming text messages, in C# with the following code:
DbContext db = new DbContext();
var message = new sms();
message.taskID = taskID;
message.senderID = senderID;
message.messageDate = this.MessageDate;
message.number = this.Number;
message.text = this.Text;
message.messageType = MessageTypes.TYPE_INBOUND;
DebugLog.Log("Storing message with datetime: " +
message.messageDate?.ToLongTimeString());
db.sms.Add(message);
db.SaveChangesAsync();
All the data are stored correctly, except for the messageDate field (which I double-checked: it is not null at the point of calling SaveChangesAsync). It is always stored as '00-00-0000 00:00', or the default date if one is set in the DB table.
Which makes sense when we take a look at the generated queries from the EF-log:
Storing message with datetime: 11:37:29
Opened connection at 20.09.2017 11:37:28 +02:00
Started transaction at 20.09.2017 11:37:28 +02:00
SET SESSION sql_mode='ANSI';
INSERT INTO sms(text, number, senderID, taskID, messageType, hasReply)
  VALUES (@gp1, @gp2, 1227, 122138, 1, 0);
SELECT smsID, messageDate FROM sms WHERE row_count() > 0 AND smsID = last_insert_id()
-- @gp1: 'some text' (Type = String, IsNullable = false, Size = 5)
-- @gp2: 'HIDDEN NUMBER' (Type = String, IsNullable = false, Size = 14)
-- Executing at 20.09.2017 11:37:28 +02:00
-- Completed in 1 ms with result: EFMySqlDataReader
Committed transaction at 20.09.2017 11:37:28 +02:00
Closed connection at 20.09.2017 11:37:28 +02:00
Disposed transaction at 20.09.2017 11:37:28 +02:00
It seems like the field in question is queried back from the database after the insert, just like the AUTO_INCREMENT smsID, rather than being included in the INSERT statement itself. No idea why.
This is really bugging me, since I don't want to insert the current time but the actual time the message arrived, and I can't seem to find a solution.
Any tips would be great! Thanks in advance. :)
How is the messageDate property defined in the Fluent API or Data Annotations? Is the type set to datetime? It isn't set to use an auto-incrementing key on this datetime field, right?
I would check whether messageDate is part of your key. I would also check whether the [DatabaseGenerated(DatabaseGeneratedOption.Identity)] attribute is applied to the messageDate property via Data Annotations or via the Fluent API.
Thanks to Daniel Lorenz for his comment.
He mentioned that the datetime field may be part of the key.
Indeed, I had created an index for it in the database before generating the model in Visual Studio (database first).
To work around the issue (this feels more like a workaround than a solution), I did the following:
1. Removed the index from the table (MySQL)
2. Removed the table from the model (VS C#)
3. Re-added the table to the model (VS C#)
4. Re-added the index for the table (MySQL)
Suddenly the query is generated correctly, and the date is stored correctly.
This feels like a bug to me, but I'm glad it works now ;)
As per the Django docs (https://docs.djangoproject.com/en/1.9/topics/db/models/), its ORM creates a varchar field instead of char.
from django.db import models

class Person(models.Model):
    first_name = models.CharField(max_length=30)
    last_name = models.CharField(max_length=30)
and the equivalent SQL statement:
CREATE TABLE myapp_person (
    "id" serial NOT NULL PRIMARY KEY,
    "first_name" varchar(30) NOT NULL,
    "last_name" varchar(30) NOT NULL
);
So here we can see it is using varchar, but can I use char instead? I'm unable to find a way, apart from manually altering the column.
For those who struggle with the answer given by @andrai-avram, this is what it would look like:
from django.db import models

class ActualCharField(models.CharField):
    def db_type(self, connection):
        varchar: str = super().db_type(connection)
        char: str = varchar.replace('varchar', 'char')
        return char
And then just use this class instead of the default django.db.models.CharField
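For example, a hypothetical model using the custom field (assuming ActualCharField is defined in, or imported into, the same module):

from django.db import models

class Person(models.Model):
    # Rendered as char(30) by ActualCharField.db_type()
    first_name = ActualCharField(max_length=30)
    # Rendered as varchar(30) by the stock CharField
    last_name = models.CharField(max_length=30)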
I think your best bet is to write a custom field type, base it on CharField, and alter its db_type() method. Check the relevant example here.
I have an update to an existing MySQL table that is failing under Rails. Here's the relevant controller code:
on = ObjectName.find_by_object_id(params[:id])
if (on) # edit existing
  if on.update_attributes(params[:param_type] => params[:value])
    respond_to do |format|
      ...
    end
The ObjectName model class has three attributes (object_id, other_id, and prop1). When the update occurs, the generated SQL comes out as
UPDATE `objectname` SET `other_id` = 245 WHERE `objectname`.`` IS NULL
The SET portion of the generated SQL is correct. Why is the WHERE clause being set to .`` IS NULL ?
I ran into the same error when working with a table that had no primary key defined. There was a unique key set up on the field, but no PK. Setting the PK in the model fixed it for me:
self.primary_key = :object_id