I have an update to an existing MySQL table that is failing under Rails. Here's the relevant controller code:
on = ObjectName.find_by_object_id(params[:id])
if on # edit existing
  if on.update_attributes(params[:param_type] => params[:value])
    respond_to do |format|
      ...
    end
The ObjectName model's table has three columns (object_id, other_id, and prop1). When the update occurs, the generated SQL comes out as
UPDATE `objectname` SET `other_id` = 245 WHERE `objectname`.`` IS NULL
The SET portion of the generated SQL is correct. Why is the WHERE clause coming out as `` IS NULL?
I ran into the same error when working with a table with no primary key defined. There was a unique key set up on the field but no PK. Setting the PK in the model fixed it for me:
self.primary_key = :object_id
Related
I'm using MySQL 8.0 and SQLAlchemy. My id column isn't incrementing, and I don't understand why.
SQLAlchemy Model:
class Show(db.Model):
    __tablename__ = "shows"
    id = Column(Integer, primary_key=True, index=True)
    name = Column(String)
    type = Column(String)
    status = Column(String)
    episodes = Column(Integer)
    series_entry_id = Column(Integer, ForeignKey("series.id"))
    series_id = Column(Integer, ForeignKey("series.id"))
    lists = relationship("List", secondary=show_list, back_populates="shows")
    recommendations = relationship("Recommendation", backref=backref("shows"))
    user_ratings = relationship("Rating", backref=backref("shows"))
    alt_names = relationship("User", secondary=alt_names, back_populates="alt_show_names")
    series_entry = relationship("Series", foreign_keys=[series_entry_id], uselist=False)
    series = relationship("Series", foreign_keys=[series_id], post_update=True)
Breaking code:
show = Show(
    name=new_data["title"]["english"],
    type=new_data["format"],
    status=new_data["status"],
    episodes=new_data["episodes"],
)
db.session.add(show)
db.session.commit()
The original error I received was:
sqlalchemy.exc.DatabaseError: (mysql.connector.errors.DatabaseError) 1364 (HY000):
Field 'id' doesn't have a default value
From this answer, I added the index parameter to my id column and edited the my.ini file to remove STRICT_TRANS_TABLES from the SQL mode. The new error is:
sqlalchemy.exc.IntegrityError: (mysql.connector.errors.IntegrityError) 1062 (23000):
Duplicate entry '0' for key 'shows.PRIMARY'
All answers I've found on the topic talk about AUTO_INCREMENT, but the SQLAlchemy docs say that it should be the default here, since this is an integer primary key without autoincrement explicitly set to False. I did try adding autoincrement=True just in case, but when I tried to migrate, Alembic told me that no changes were detected.
From comments to the question:
does this mean that SQLAlchemy is wrong and [AUTO_INCREMENT] isn't set by default [for the first integer primary key column]?
No, that is indeed how it works: AUTO_INCREMENT is applied by default to the first integer primary key column. Specifically, for a model like
class Account(Base):
    __tablename__ = "account"
    account_number = Column(Integer, primary_key=True)
    customer_name = Column(String(50))
alembic revision --autogenerate will generate
def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.create_table('account',
        sa.Column('account_number', sa.Integer(), nullable=False),
        sa.Column('customer_name', sa.String(length=50), nullable=True),
        sa.PrimaryKeyConstraint('account_number')
    )
(which doesn't explicitly specify autoincrement=), but when alembic upgrade head has SQLAlchemy actually create the table, it emits
CREATE TABLE account (
    account_number INTEGER NOT NULL AUTO_INCREMENT,
    customer_name VARCHAR(50),
    PRIMARY KEY (account_number)
)
Since alembic didn't detect any changes when I set autoincrement=True, does that mean that for every table I make, I'll have to set AUTO_INCREMENT in the database manually?
No. As illustrated above, Alembic properly handles AUTO_INCREMENT when the table is first created. What it doesn't detect is when an ORM model with an existing table has a column changed from autoincrement=False to autoincrement=True (or vice versa).
This is known behaviour, as indicated by the commit message here:
"Note that this flag does not support alteration of a column's "autoincrement" status, as this is not portable across backends."
MySQL does support changing the AUTO_INCREMENT property of a column via ALTER TABLE, so we can achieve that by changing the "empty" upgrade method
def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    pass
    # ### end Alembic commands ###
to
def upgrade():
    op.alter_column(
        'account',
        'account_number',
        existing_type=sa.Integer(),
        existing_nullable=False,
        autoincrement=True
    )
which renders
ALTER TABLE account MODIFY account_number INTEGER NOT NULL AUTO_INCREMENT
I have a column named record_time that stores a recorded time. Currently the column's data type is integer and the values are saved as Unix timestamps. Now I want to convert this column to a datetime field without losing the data in it. Right now I have created a migration file as follows:
class ChangeRecordTimeToDatetime < ActiveRecord::Migration
  def up
    as = Audio.all.map { |a| { id: a.id, record_time: Time.at(a.record_time) } }
    Audio.all.update_all("record_time = NULL")
    change_column :audios, :record_time, :datetime
    as.map { |a| Audio.find(a[:id]).update(record_time: a[:record_time]) }
  end

  def down
    as = Audio.all.map { |a| { id: a.id, record_time: a.record_time.to_i } }
    Audio.all.update_all("record_time = NULL")
    change_column :audios, :record_time, :integer
    as.map { |a| Audio.find(a[:id]).update(record_time: a[:record_time]) }
  end
end
and this throws the following error:
Mysql2::Error: Incorrect datetime value: '1493178889' for column 'record_time' at row 1: ALTER TABLE `audios` CHANGE `record_time` `record_time` datetime DEFAULT NULL
Thanks in advance.
I'd skip ActiveRecord completely for this sort of thing and do it all inside the database. Some databases will let you specify how to transform old values into new values while changing a column's type, but I don't see how to do that with MySQL; instead, you can do it by hand:
1. Add a new column with the new data type.
2. Do a single UPDATE to copy the old values to the new column while converting them. You can use MySQL's from_unixtime for this.
3. Drop the original column.
4. Rename the new column to the old name.
5. Rebuild any indexes you had on the original column.
Translating that to a migration:
def up
  connection.execute(%q{
    alter table audios
    add record_time_tmp datetime
  })
  connection.execute(%q{
    update audios
    set record_time_tmp = from_unixtime(record_time)
  })
  connection.execute(%q{
    alter table audios
    drop column record_time
  })
  connection.execute(%q{
    alter table audios
    change record_time_tmp record_time datetime
  })
  # Add indexes and what not...
end
You're well into database-specific code here so going with straight SQL seems reasonable to me. You can of course translate that to change_column and update_all calls (possibly with reset_column_information calls to update the model classes) but I don't see the point: changing a column type will almost always involve database-specific code (if you want to be efficient) and migrations are meant to be temporary bridges anyway.
You need to convert the UNIX timestamps to DateTime objects before inserting them. You can do so with this: DateTime.strptime(<timestamp>,'%s').
So to apply this to your question, try this:
def up
  as = Audio.all.map { |a| { id: a.id, record_time: DateTime.strptime(a.record_time.to_s, '%s') } }
  remove_column :audios, :record_time
  add_column :audios, :record_time, :datetime
  as.map { |a| Audio.find(a[:id]).update(record_time: a[:record_time]) }
end
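As a quick sanity check on the conversion itself, outside of Rails entirely, here is the same Unix-seconds round trip sketched in plain Python, using the timestamp quoted in the error message above (the variable names are mine):

```python
from datetime import datetime, timezone

# The Unix timestamp from the Mysql2 error message above.
ts = 1493178889

# Seconds-since-epoch to an aware UTC datetime -- the same kind of
# mapping that FROM_UNIXTIME / DateTime.strptime(..., '%s') perform.
dt = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt.strftime("%Y-%m-%d %H:%M:%S"))  # 2017-04-26 03:54:49

# And back again, which is what the down migration needs.
assert int(dt.timestamp()) == ts
```

Note that MySQL's from_unixtime renders the value in the session time zone rather than UTC, so pin the session time zone if exact wall-clock values matter during the migration.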
I have a problem with updating values in Apache Phoenix. The query below throws a JDBC exception. I am new to Phoenix JDBC and confused about UPSERT query usage for updating non-primary-key field values.
String sql = "UPSERT INTO mytable (serverName, SationName, product) SELECT serverName, stationName 'sampleProduct' FROM mytable WHERE product = 'sampleProduct'";
The primary key of "myTable" is combination of "serverName" and "StationName". I would like to update value of product column from 'sampleProduct' to 'TestProduct'.
Update the SQL query with the following line. Hope it helps.
String sql = "REPLACE INTO mytable (serverName, SationName, product) " +
             "SELECT serverName, stationName, 'sampleProduct' " +
             "FROM mytable WHERE product = 'sampleProduct'";
"The primary key of "myTable" is combination of "serverName" and "StationName". I would like to update value of product column from 'sampleProduct' to 'TestProduct'."
You say nothing about inserting if the row does not already exist, so I don't see UPSERT as being appropriate. The MySQL code is
UPDATE myTable
SET product = 'TestProduct'
WHERE serverName = '...'
  AND stationName = '...';
(I don't know what values are needed for '...'.)
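To make the shape of that statement concrete, here is a minimal runnable sketch in Python. SQLite (via the stdlib sqlite3 module) stands in for MySQL, and the table contents and the 'srv1'/'stn1' values are invented for illustration; only the structure of the UPDATE matters:

```python
import sqlite3

# Illustrative only: SQLite stands in for MySQL; the UPDATE has the
# same shape in both. Table and values are made up for this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE myTable (
        serverName  TEXT NOT NULL,
        stationName TEXT NOT NULL,
        product     TEXT,
        PRIMARY KEY (serverName, stationName)
    )
""")
conn.execute("INSERT INTO myTable VALUES ('srv1', 'stn1', 'sampleProduct')")

# A plain UPDATE keyed on the composite primary key -- no UPSERT
# needed when the row is known to exist already.
conn.execute("""
    UPDATE myTable
    SET product = 'TestProduct'
    WHERE serverName = 'srv1' AND stationName = 'stn1'
""")
row = conn.execute("SELECT product FROM myTable").fetchone()
print(row[0])  # TestProduct
```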
I am updating a record in my table using Laravel Eloquent; in my table, a field called 'sign' is unique.
My code is:
$oSetting = PersonalSetting::find($id);
$oSetting->Login = utf8_decode($login);
$oSetting->Sign = utf8_decode($sign);
$oSetting->save();
The query is "update PersonalSetting set login = 'xyz', sign = 'abc' where `id` = 4691"
When I update this record I get "Duplicate entry 'abc' for key 'sign'". But there is no other entry with a sign value of 'abc'.
Then how can it be a duplicate entry?
I have a very simple MySQL table which represents a set of names using a single String column. I want to use Slick's insertOrUpdate but it is generating incorrect MySQL, causing errors. Specifically, it wants to execute
insert into `TABLE1_NAME` (`column`) values ('value') on duplicate key update
It doesn't specify what to update, so this fails. A similar table with two columns upserts fine, with statements like
insert into `TABLE2_NAME` (`key`, `other_col`) values ('value1', 'value2') on duplicate key update `other_col`=VALUES(`other_col`)
Has anyone seen this? We do set a primary key for TABLE1. We may be doing our table projection mapping incorrectly. We're using Slick 3.1.1.
class Table1(tag: Tag) extends Table[Table1Record](tag, "TABLE1_NAME") {
  def value = column[String]("value", O.PrimaryKey, O.Length(254))
  def * = (value) <> (Table1Record, Table1Record.unapply)
}

class Table2(tag: Tag) extends Table[Table2Record](tag, "TABLE2_NAME") {
  def value1 = column[String]("value1", O.PrimaryKey, O.Length(254))
  def value2 = column[String]("value2", O.Length(254))
  def * = (value1, value2) <> (Table2Record.tupled, Table2Record.unapply)
}
There's no such concept as "insert or update" for a single-column table. The single column is the key. If the key matches exactly, then there are no other columns to update. If the key doesn't match, then the newly inserted row won't be a duplicate of any key, so the update clause will never fire. Because there are no other columns to update, the generated SQL is malformed -- a bit of text has been generated with the assumption that some field names would be appended after it, but there were no field names to append.
By the way, for a table with two columns, the insert statement looks like
insert into `TABLE2_NAME` (`key`, `other_col`)
values ('value1', 'value2')
on duplicate key update `other_col`=VALUES(`other_col`)
It lists only the non-key columns in the update clause. (Getting this correct should help you to better understand what's going on with your single-column table.)
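The degenerate single-column case can be demonstrated directly in SQL. Here is a hedged sketch using Python's stdlib sqlite3, with SQLite's INSERT OR IGNORE playing the role that the upsert collapses to (MySQL's INSERT IGNORE is the analogue); the table name is made up:

```python
import sqlite3

# SQLite stands in for MySQL here; the point is that a one-column
# table's "upsert" collapses to insert-if-absent.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (value TEXT PRIMARY KEY)")

for _ in range(2):
    # With the key as the only column there is nothing left to
    # update on conflict, so ignoring the duplicate is all that
    # an upsert could ever do.
    conn.execute("INSERT OR IGNORE INTO table1 (value) VALUES ('value')")

count = conn.execute("SELECT count(*) FROM table1").fetchone()[0]
print(count)  # 1
```

So for the Slick Table1 above, a plain insert wrapped in duplicate-key handling (or a filtered insert) does the job; insertOrUpdate only makes sense once there is at least one non-key column.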