How do I get alembic to use a specified default value for a new column, without making it the server_default?
Only the existing rows should receive this default value. New rows inserted after should still get server_default.
You have to use server_default, but you need to pass the value as a string in the same format your database uses.
https://docs.sqlalchemy.org/en/13/core/metadata.html#:~:text=server_default%20%E2%80%93,server%20side%20defaults
For example:
op.add_column('users', sa.Column('created_at', sa.DateTime(), nullable=False, server_default=str(datetime.now())))
Check what miguelgrinberg said here:
https://github.com/miguelgrinberg/Flask-Migrate/issues/265#:~:text=The%20server_default%20should%20be%20given%20as%20text%20in%20the%20native%20format%20of%20the%20database%2C%20not%20as%20a%20Python%20type.%20See%20https%3A//docs.sqlalchemy.org/en/13/core/metadata.html%23sqlalchemy.schema.Column.params.server_default.
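If you don't want that value to stick around as the server default for future inserts, one option not spelled out in the linked answer is to drop the default again in the same migration. A rough sketch, reusing the users/created_at names from the example above:

from datetime import datetime

import sqlalchemy as sa
from alembic import op

def upgrade():
    # Backfill existing rows via a temporary server_default...
    op.add_column(
        'users',
        sa.Column('created_at', sa.DateTime(), nullable=False,
                  server_default=str(datetime.now())),
    )
    # ...then drop the default so future inserts must supply a value.
    op.alter_column('users', 'created_at',
                    existing_type=sa.DateTime(), server_default=None)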
I had a similar problem: I wanted a new column to be not nullable, which of course does not work for existing rows. So I created the column without the NOT NULL constraint first, filled it with some custom Python code, and then altered the column (in the same migration) to add the constraint.
So you'd iterate over the existing rows and set the values according to the transient default, as sketched below.
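A minimal sketch of that approach as an Alembic migration, reusing the users/created_at names from the example above (an illustration, not a drop-in migration):

from datetime import datetime

import sqlalchemy as sa
from alembic import op

def upgrade():
    # 1. Add the column as nullable so existing rows are accepted as-is.
    op.add_column('users', sa.Column('created_at', sa.DateTime(), nullable=True))

    # 2. Backfill existing rows with the one-off default.
    users = sa.table('users', sa.column('created_at', sa.DateTime()))
    op.execute(users.update().values(created_at=datetime.now()))

    # 3. Tighten to NOT NULL; no server default is ever attached.
    op.alter_column('users', 'created_at',
                    existing_type=sa.DateTime(), nullable=False)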
I am trying to save compute on a python transform in Foundry.
I want to run my code incrementally and keep a unique set of keys, without having to do a full snapshot read of the whole dataset and then compute the unique set.
If I try something like df_out = df.select("composite_key").dropDuplicates(), I am afraid it reads the full input dataset; I want to make use of the deduplication I already did in previous runs.
The trick here is to use the previous version of the output dataset:
df_out = df.unionByName(
    df_out.dataframe('previous', schema=df.schema).select("composite_key")
).drop_duplicates()
Using this pattern, you don't need to do a lookup on the full dataset: you take the previously computed unique set of keys, union it with the new data, and then de-dupe.
If there are other columns in the new data but you still want to de-dupe by key, you can use this approach.
# If there may be duplicates in the data do this step.
# df = df.dropDuplicates(['composite_key'])
df_prev = df_out.dataframe(mode='previous', schema=df.schema)
# This uses the new row for any existing key.
# You could do the opposite by swapping the places of the tables.
existing = df_prev.join(df, on='composite_key', how='leftanti')
result = existing.unionByName(df)
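For context, a rough sketch of how that second snippet might sit inside an incremental transform; the dataset paths are made up, and the transforms.api wiring (decorators, set_mode, write_dataframe) follows the usual Foundry pattern rather than anything stated in the question:

from transforms.api import transform, incremental, Input, Output

@incremental()
@transform(
    df_out=Output("/project/folder/deduped"),  # hypothetical output path
    source=Input("/project/folder/raw"),       # hypothetical input path
)
def compute(source, df_out):
    # With @incremental, this is only the rows added since the last build.
    df = source.dataframe()

    # Previously written output; the schema must be supplied explicitly.
    df_prev = df_out.dataframe('previous', schema=df.schema)

    # Keep previous rows only for keys not present in the new data,
    # so the new row wins for any key that appears again.
    existing = df_prev.join(df, on='composite_key', how='leftanti')
    result = existing.unionByName(df)

    df_out.set_mode('replace')
    df_out.write_dataframe(result)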
I have a model which is just a relation between two entities, like this:
class SomeModel(models.Model):
    a = OneToOneField(User, primary_key=True, ...)
    b = ForeignKey(Car, ...)
As per the original design, this was correct, as I didn't want a User to have multiple Cars. But in my new design, I want to allow multiple Cars per User. So I was trying something like this:
class SomeModel(models.Model):
    class Meta:
        unique_together = (("a", "b"),)

    a = ForeignKey(User, ...)
    b = ForeignKey(Car, ...)
But during migration, it asks me:
You are trying to add a non-nullable field 'id' to somemodel without a default; we can't do that (the database needs something to populate existing rows).
Please select a fix:
1) Provide a one-off default now (will be set on all existing rows with a null value for this column)
2) Quit, and let me add a default in models.py
Select an option:
I just wanted to change that one-to-one relation into a plain foreign key and add a new id, keeping the combination of both unique. How do I resolve this?
Either delete the model and its table from the database and create the model again from scratch, or add null=True to the field in question. You presumably have some data (rows) in your table, and now you are creating a new column, so the previous rows need something to put in that newly created column; null=True allows that.
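A minimal sketch of the null=True suggestion; the on_delete arguments are an assumption here (they are required in modern Django), and User and Car are the models from the question:

from django.db import models

class SomeModel(models.Model):
    class Meta:
        unique_together = (("a", "b"),)

    # null=True gives existing rows something (NULL) to hold in these
    # columns; tighten the constraint in a later migration once the
    # data has been backfilled.
    a = models.ForeignKey(User, null=True, on_delete=models.CASCADE)
    b = models.ForeignKey(Car, null=True, on_delete=models.CASCADE)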
Is it possible to create an index on a Boolean type field?
Let's say the schema of the records I want to store is:
{
    id: 1,
    name: "Kris",
    _dirty: true
}
I created a normal, non-unique index (in onupgradeneeded):
...
store.createIndex("dirty","_dirty",{ unique: false })
...
The index is created, but it is empty! In the browser's IndexedDB viewer there are no records with Boolean values in the index; only Strings, Numbers, Dates, and even Arrays appear.
I am using Chrome 25 Canary.
I would like to find all records that have the _dirty attribute set to true. Do I have to change _dirty to a string or an int, then?
Yes, boolean is not a valid key.
If you must, you can of course resolve it to 1 and 0.
But it is this way for a good reason: indexing a boolean value is not very informative. In your case above, you can do a table scan and filter on the fly, rather than an index query.
The accepted answer is not entirely correct.
You cannot create an index on a property that contains values of the Boolean JavaScript type. That part of the other answer is correct. If you have an object like var obj = {isActive: true};, trying to create an index on obj.isActive will not work and the browser will report an error message.
However, you can easily simulate the desired result. indexedDB does not add an object to an index when the object is missing the indexed property. Therefore, you can define the property to represent true, and leave it undefined to represent false. When the property exists, the object will appear in the index; when the property does not exist, the object will not.
Example
For example, suppose you have an object store of 'obj' objects. Suppose you want to create a boolean-like index on the isActive property of these objects.
Start by creating an index on the isActive property. In the onupgradeneeded callback function, use store.createIndex('isActive','isActive');
To represent 'true' for an object, simply use obj.isActive = 1;. Then add or put the object into the object store. When you want to query for all objects where isActive is set, you simply use db.transaction('store').index('isActive').openCursor();.
To represent false, simply use delete obj.isActive; and then add or put the object into the object store.
When you query for all objects where isActive is set, objects that are missing the isActive property (because it was deleted or never set) will not appear when iterating with the cursor.
Voila, a boolean index.
Performance notes
Opening a cursor on an index, as in the example above, provides good performance. The difference is not noticeable with small data sets, but it becomes very noticeable when storing a larger number of objects. There is no need to adopt a third-party library to accomplish 'boolean indices'; this is a mundane and simple feature you can implement on your own, and you should use the native functionality as much as possible.
Boolean properties describe an exclusive state: 'Active/Inactive', 'On/Off', 'Enabled/Disabled', 'Yes/No'. You can use these value pairs instead of booleans in your JS data model for readability. This tactic also allows you to add other states later ('NotSet', for cases where something was never configured on the object, etc.).
I've used 0 and 1 instead of the boolean type.
I'm using SA in a script that will periodically 'copy' a subset of MySQL tables from a 'production' replica to dev/test systems. I had written code to simply reflect the source tables and call meta.create_all(destination_engine). Due to the nature of FKs, I now know I need to apply use_alter=True to the ForeignKeys on the tables as I create them, so that I won't get CircularDependencyErrors or other problems. I have to assume I don't know how many FKs there are, or their names, until I go through the metadata.
I'm new to SA and am typically a Java programmer (as you will be able to tell :D). I tried to change the use_alter attribute iteratively at first:
tablesd = smeta.tables.items()
for tname, t in tablesd:
    for c in t.columns:
        for fk in c.foreign_keys:
            fk.use_alter = True
smeta.create_all(to_engine)
EDIT: It's important to note that create_all() does NOT throw a CircularDependencyError after I set the use_alter property like I do above. If I remove that code, create_all() does not work. It just doesn't seem to be removing the FKs from the CREATE statements...
This obviously didn't work. I then read Overriding Reflected Columns in the SA docs, sample being:
mytable = Table('mytable', meta,
    Column('id', Integer, primary_key=True),   # override reflected 'id' to have primary key
    Column('mydata', Unicode(50)),             # override reflected 'mydata' to be Unicode
    autoload=True)
I'd guess that reflecting each table individually and then adding use_alter=True in the FK definition would work, but I CANNOT assume the names and values or number of FKs/columns. I read a lot about using DeclarativeBase to do something like this, but I'm not really sure how that would work...
How can I take my arbitrary list of tables, reflect them, and then override the use_alter option on their respective foreign keys? Am I thinking about this the wrong way?
The answer ended up being inside the problem (imagine that...). Although each ForeignKey object has a use_alter value that can be set, Constraints also have a separate use_alter property that can be set (I was not able to find this in the API documentation). After running it through PyDev's debugger, I noticed the former were being set, but all the keys that had Constraints associated with them still had it set to False. I set them both to True like so:
for fk in table.foreign_keys:
    fk.use_alter = True
    fk.constraint.use_alter = True
This seemed to produce the SQL I was looking for: tables were created correctly with no CircularDependencyErrors, and metadata.sorted_tables worked fine with no errors. I was actually able to refactor my code and do things the RIGHT way!
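Putting the pieces together, a rough sketch of the whole reflect-and-copy flow along the lines described above (the connection URLs are placeholders, and the exact behaviour of post-hoc use_alter tweaks can vary between SQLAlchemy versions):

from sqlalchemy import create_engine, MetaData

# Placeholder connection URLs; substitute your own.
source_engine = create_engine("mysql://user:pass@prod-replica/mydb")
dest_engine = create_engine("mysql://user:pass@dev-host/mydb")

smeta = MetaData()
smeta.reflect(bind=source_engine)

for table in smeta.sorted_tables:
    for fk in table.foreign_keys:
        # Emit the FK as a separate ALTER TABLE after all CREATE TABLE
        # statements, so circular references don't break create_all().
        fk.use_alter = True
        fk.constraint.use_alter = True

smeta.create_all(dest_engine)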
For anyone looking to do DB-->DB reflecting with complex FKs using SQLAlchemy, this answer and Tyler Lesmann's article are for you.
UPDATE: Using this method has passed a peer review and is now being used as production code. It seems to work well!
I'm using Yii active records for MySQL, and I have a table where there's a field that needs to be appended with the primary key of the same table. The primary key is an auto-increment field, hence I can't access the primary key before saving.
$model->append_field = "xyz".$model->id; // nothing is appending
$model->save();
$model->append_field = "xyz".$model->id; //id is now available
How do I do this?
I know that I can update right after insertion, but is there a better method?
Your record is only assigned an id after the INSERT statement is executed. There is no way to determine what that id is prior to INSERT, so you would have to execute an UPDATE with the concatenated field value after your INSERT.
You could write a stored procedure or trigger in MySQL to do this for you, so your app executes a single SQL statement to accomplish this. However, you are just moving the logic into MySQL and in the end both an INSERT and UPDATE are occurring.
Some more workarounds:
This is almost your approach ;)
$model->save();
$model->append_field = "xyz".$model->id; //id is now available
$model->save();
But you could move this functionality into a behavior with a custom afterSave() method; note that you'd have to take care not to loop the event.
Or just write a getter for it
function getFull_append_field(){
    return $this->append_field.$this->id;
}
but then you cannot use it in a SQL statement, unless you create the attribute there with CONCAT() or something similar.
Anyone else coming to this question might be interested in exactly how I implemented it, so here's the code:
// in the model class
class SomeModel extends CActiveRecord{
    ...
    protected function afterSave(){
        parent::afterSave();
        if($this->getIsNewRecord()){
            $this->append_field = $this->append_field.$this->id;
            $this->updateByPk($this->id, array('append_field' => $this->append_field));
        }
    }
}
One way to avoid looping the event (as mentioned by @schmunk) was to use saveAttributes(...) inside the afterSave() method, but saveAttributes(...) checks isNewRecord and inserts a value only if it is a new record, so that requires us to call setIsNewRecord(false) before calling saveAttributes(...).
I found that saveAttributes(...) actually calls updateByPk(...), so I directly used updateByPk(...) itself.