I want to make a table field (a foreign key) that can reference one of two tables: Dog or Cat.
I know it would most likely be implemented in PostgreSQL via the CHECK keyword (not a true foreign key), so I suppose it may be possible in other databases too via their own syntax.
Theoretically it might even be done by SQLAlchemy itself, even if a particular database does not support such functionality.
In Python code it looks like this:
# # # SCHEMAS.PY # # #
from dataclasses import dataclass

@dataclass
class Cat: ...

@dataclass
class Dog: ...

@dataclass
class Review:
    stars_value: int
    entity: Cat | Dog
    comment: str
# # # MODELS.PY # # #
from sqlalchemy import Column, Table, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
cats = Table(
"cats",
Base.metadata,
Column("id", Integer, primary_key=True),
Column("name", String)
)
dogs = Table(
"dogs",
Base.metadata,
Column("id", Integer, primary_key=True),
Column("name", String),
)
review = Table(
"review",
Base.metadata,
Column("id", Integer, primary_key=True),
Column("comment", String),
Column("entity", Integer, ForeignKey("cats.id" | "dogs.id") # Invalid syntax here!
)
If not, I will do it via an intermediate table such as:
review_categories = Table(
"review_categories",
Base.metadata,
Column("id", Integer, primary_key=True),
Column("cat", Integer),
Column("dog Integer),
)
You could create a common table for cats and dogs such that the id of any cat or dog is unique, and keep an additional column for the type (dog or cat).
id (pk)  name  type
1        A     Dog
2        B     Cat
3        B     Dog
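This first suggestion can be sketched in SQLAlchemy like so; the table and column names here are illustrative, not from the original question:

```python
# Sketch (names illustrative): one shared "animals" table with a "type"
# discriminator column, so the review table needs only a single,
# ordinary foreign key.
from sqlalchemy import Column, ForeignKey, Integer, MetaData, String, Table

metadata = MetaData()

animals = Table(
    "animals", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
    Column("type", String),  # 'cat' or 'dog'
)

reviews = Table(
    "reviews", metadata,
    Column("id", Integer, primary_key=True),
    Column("comment", String),
    Column("entity", Integer, ForeignKey("animals.id")),
)
```

The `type` column tells you which kind of animal a review's `entity` points at, at the cost of merging the two species into one table.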
For a general solution where you may want a union of columns as a foreign key: one interesting way to approach this is to create a new table containing two columns. The first column holds the unique keys and the second their frequency (i.e., how many times each key appears in cats and dogs combined).
Unique_id (pk)  Count
A               1
B               2
Such that:
Unique_id = Dogs.id ∪ Cats.id
Logic to update the table:
- Insert into Cats: insert a row for the Unique_id, or increment its Count
- Delete from Cats: decrement the Count for the Unique_id, or delete the row if it reaches 0
Then you can use the Unique_id column for your foreign key.
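The insert-or-increment / decrement-or-delete bookkeeping can be sketched with sqlite3 and an upsert; the table and function names here are assumptions, not from the answer above:

```python
# Hedged sketch of the counting logic. register_id is called whenever a
# cat or dog is inserted with a given shared id; unregister_id on delete.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE unique_ids (unique_id INTEGER PRIMARY KEY, "
    "count INTEGER NOT NULL)"
)

def register_id(shared_id):
    # Insert the id with Count = 1, or increment its existing Count.
    conn.execute(
        "INSERT INTO unique_ids (unique_id, count) VALUES (?, 1) "
        "ON CONFLICT(unique_id) DO UPDATE SET count = count + 1",
        (shared_id,),
    )

def unregister_id(shared_id):
    # Decrement Count, and drop the row once it reaches 0.
    conn.execute(
        "UPDATE unique_ids SET count = count - 1 WHERE unique_id = ?",
        (shared_id,),
    )
    conn.execute(
        "DELETE FROM unique_ids WHERE unique_id = ? AND count <= 0",
        (shared_id,),
    )
```

In a real database you would likely run this logic inside triggers on the cats and dogs tables rather than application code.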
MySQL User table: user_id int (pk), name varchar, last_name varchar
SQLAlchemy model:
class User(db.Model):
    __tablename__ = 'user'
    user_id = Column('user_id', INTEGER, nullable=False, primary_key=True)
    name = Column('name', String(256), nullable=False)
    lastname = Column('last_name', String(256), nullable=False)
If I want to add columns to my User table, like phone_number and address, which are not going to be used by my application, do I necessarily need to change my SQLAlchemy model, or is it harmless to leave it as-is?
You do not have to add the columns to your User class. But if you add data to the database using SQLAlchemy, it will construct the rows using only the fields from the User class, so if you do not have defaults set in the database table definition, it may cause an error.
EDIT: You should be safe if you only use the model to query the database.
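A minimal sketch of the scenario, assuming plain SQLAlchemy and SQLite: the phone_number column exists only in the database table, not in the model. Because the extra column is nullable (so the database can fill in NULL), inserting through the model still works; a NOT NULL column without a default would fail instead.

```python
# The model deliberately omits phone_number, which the real table has.
from sqlalchemy import Column, Integer, String, create_engine, text
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "user"
    user_id = Column(Integer, primary_key=True)
    name = Column(String(256), nullable=False)
    lastname = Column("last_name", String(256), nullable=False)

engine = create_engine("sqlite://")
with engine.begin() as conn:
    # Create the table by hand with an extra column the model ignores.
    conn.execute(text(
        "CREATE TABLE user (user_id INTEGER PRIMARY KEY, "
        "name VARCHAR(256) NOT NULL, last_name VARCHAR(256) NOT NULL, "
        "phone_number VARCHAR(32))"
    ))

with Session(engine) as session:
    session.add(User(user_id=1, name="Jane", lastname="Doe"))
    session.commit()  # succeeds; phone_number is simply left NULL
```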
I'm not sure I've titled this question correctly. I can add a unique constraint well enough to any of my tables, but in the case below I'm not sure how to do what I'm after:
class Branch(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(160))
    # foreign key for 'branches' in Account class; access with Branch.account
    account_id = db.Column(db.Integer, db.ForeignKey('account.id'))

class Account(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(160), unique=True)
    branches = db.relationship('Branch', backref='account', lazy='dynamic')
So when I added a unique constraint to the Branch table's name column, I could not add same-name branches to different accounts. For example, Seattle could be a branch for both AccountA and AccountB.
What I want to do is apply a constraint that checks for uniqueness only when account.id is the same. Is this possible?
Thanks to dirn, pointing out the duplicate, I've added:
__table_args__ = (
    db.UniqueConstraint('account_id', 'name', name='_account_branch_uc'),
)
to my Branch class, and then pushed it to the database with alembic via:
def upgrade():
    op.create_unique_constraint('_account_branch_uc', 'branch', ['name', 'account_id'])
I will note, though, that since I added this constraint manually via alembic, I'm not certain whether I added it correctly to my model. I suppose I'll find out when I eventually wipe my DB and start a new one.
EDIT
I have rolled a new database and the __table_args__ from above works correctly!
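For reference, the same composite constraint in plain SQLAlchemy (class and table names illustrative): "Seattle" can exist under two different accounts, but not twice under the same one.

```python
# Branch names must be unique per account, not globally.
from sqlalchemy import (Column, ForeignKey, Integer, String,
                        UniqueConstraint, create_engine)
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Account(Base):
    __tablename__ = "account"
    id = Column(Integer, primary_key=True)
    name = Column(String(160), unique=True)

class Branch(Base):
    __tablename__ = "branch"
    __table_args__ = (
        UniqueConstraint("account_id", "name", name="_account_branch_uc"),
    )
    id = Column(Integer, primary_key=True)
    name = Column(String(160))
    account_id = Column(Integer, ForeignKey("account.id"))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
```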
I have a source (web pages) with both common and uncommon data that I need to store in one table.
The data can look like this:
model: xyz, attr_1: xyz, attr_2: xyz
model: xyz, attr_3: xyz, attr_4: xyz
model: xyz, attr_1: xyz, attr_4: xyz
model: xyz, attr_1: xyz, attr_5: xyz
model: xyz, attr_15: xyz, attr_20: xyz
This data will generate this DML:
insert into table (model, attr_1, attr_2)values('xyz','xyz','xyz');
insert into table (model, attr_3, attr_4)values('xyz','xyz','xyz');
insert into table (model, attr_1, attr_4)values('xyz','xyz','xyz');
insert into table (model, attr_1, attr_5)values('xyz','xyz','xyz');
insert into table (model, attr_15, attr_20)values('xyz','xyz','xyz');
My problem is that I can't define the table before the insert commands, so I don't know the columns ahead of time, and every new insert may reveal new columns. I can't collect all the insert commands before the actual insert. The only thing I can think of is to insert every row into a different table (using CREATE TABLE AS with INSERT INTO) and then use UNION ALL to build the final table, but that doesn't sound like a good idea.
EDIT: I'm not looking for a normalized table.
The end result should be(as for the example):
table_name
id int
model varchar
attr_1 varchar
attr_2 varchar
attr_3 varchar
attr_4 varchar
attr_5 varchar
attr_15 varchar
attr_20 varchar
There's a really simple solution to this. You need to change your table:
table: model

modelName  attribute  value
xyz        1          xyz
xyz        2          xyz
Then when you do the INSERT, you would do:
INSERT INTO `model` (`modelName`, `attribute`, `value`) VALUES ('xyz', 1, 'xyz')
This is a normalized table structure that allows for n amount of attributes.
If you use an array to collect your data, you could use PHP's implode(', ', $array). But you may not be using PHP; in that case you could always just concatenate the values you're INSERTing with commas.
The right solution is to normalize your schema.
Create two tables: a master table for the main model - pretty much what you have now, but without attributes - and a detail table to keep the attributes. Something like this:
CREATE TABLE master (
    master_id INTEGER PRIMARY KEY AUTOINCREMENT,
    model VARCHAR(50)
);
CREATE TABLE attrs (
    attr_id INTEGER PRIMARY KEY AUTOINCREMENT,
    master_id INTEGER NOT NULL,
    attr_name VARCHAR(20),
    attr_value VARCHAR(100)
);
This schema is rather compact and has some important properties. For example, it allows you to keep an arbitrary number of attributes associated with a given model - it could be 0, or it could be 1000.
To insert data, you will need to insert into the master table first, and then into the attrs table.
To retrieve data, use a simple join like this:
SELECT m.model,
       a.attr_name,
       a.attr_value
FROM master m
JOIN attrs a ON m.master_id = a.master_id
WHERE ...
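The two-step insert can be sketched with sqlite3; attr_value is an assumed column for the attribute's value, and lastrowid carries the generated master_id into the attrs rows:

```python
# Insert the model row first, then reuse its generated key for each
# attribute row. Table names follow the normalized schema above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE master (
    master_id INTEGER PRIMARY KEY AUTOINCREMENT,
    model VARCHAR(50)
);
CREATE TABLE attrs (
    attr_id INTEGER PRIMARY KEY AUTOINCREMENT,
    master_id INTEGER NOT NULL,
    attr_name VARCHAR(20),
    attr_value VARCHAR(100)
);
""")

cur = conn.execute("INSERT INTO master (model) VALUES (?)", ("xyz",))
master_id = cur.lastrowid  # generated key for the new model row
for name, value in [("attr_1", "xyz"), ("attr_2", "xyz")]:
    conn.execute(
        "INSERT INTO attrs (master_id, attr_name, attr_value) "
        "VALUES (?, ?, ?)",
        (master_id, name, value),
    )
```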
I want to use Django for a client project that has a legacy database. If at all possible I would like to be able to use the Django admin interface. However, the database has tables with multicolumn primary keys, which I see Django does not like - the admin interface throws a MultipleObjectsReturned exception.
What are the options here? I can add new tables, but I can't change existing tables since other projects are already adding data to the database. I've seen other questions mentioning surrogate keys, but it seems like that would require changing the tables.
EDIT: The database in question is a MySQL database.
You are talking about a legacy read-only database, then. Perhaps you can create an external schema (views) with no multi-column PKs, for example by concatenating the key fields. Here is an example:
Tables:
create table A (
a1 int not null,
a2 int not null,
t1 varchar(100),
primary key (a1, a2)
)
create table B (
b1 int not null,
b2 int not null,
a1 int not null,
a2 int not null,
t1 varchar(100),
primary key (b1, b2),
constraint b_2_a foreign key (a1,a2)
references A (a1, a2)
)
External schema to be read by django:
Create view vA as
select
a1* 1000000 + a2 as a, A.*
from A
Create view vB as
select
b1* 1000000 + b2 as b,
a1* 1000000 + a2 as a, B.*
from B
django models:
class A(models.Model):
    a = models.IntegerField(primary_key=True)
    a1 = ...
    class Meta(CommonInfo.Meta):
        db_table = 'vA'

class B(models.Model):
    b = models.IntegerField(primary_key=True)
    b1 = ...
    a = models.ForeignKey(A)
    a1 = ...
    class Meta(CommonInfo.Meta):
        db_table = 'vB'
You can refine the technique to make varchar keys work with indexes. I don't write more samples because I don't know which database brand you are using.
More information:
Do Django models support multiple-column primary keys?
ticket 373
Alternative methods
I have the following data:
CREATE TABLE `groups` (
`bookID` INT NOT NULL,
`groupID` INT NOT NULL,
PRIMARY KEY(`bookID`),
KEY( `groupID`)
);
and a books table which basically has books(bookID, name, ...), but WITHOUT groupID. There is no way for me to determine what the groupID is at the time a book is inserted.
I want to do this in SQLAlchemy, hence I tried mapping Book to books joined with groups on books.bookID = groups.bookID.
I made the following:
tb_groups = Table('groups', metadata,
    Column('bookID', Integer, ForeignKey('books.bookID'), primary_key=True),
    Column('groupID', Integer),
)
tb_books = Table('books', metadata,
    Column('bookID', Integer, primary_key=True),
)
tb_joinedBookGroup = sql.join(tb_books, tb_groups,
    tb_books.c.bookID == tb_groups.c.bookID)
and defined the following mapper:
mapper( Group, tb_groups, properties={
'books': relation(Book, backref='group')
})
mapper( Book, tb_joinedBookGroup )
...
However, when I executed this piece of code, I realized that each Book object has a groups field, which is a list, and each Group object has a books field, which is a scalar assignment. I think my definitions here must be confusing SQLAlchemy about the many-to-one vs. one-to-many direction.
Can someone help me sort this out?
My desired goal is:
g.books = [b, b, b, .. ]
book.group = g
where g is an instance of group, and b is an instance of book.
Pass uselist=False to relation() to say that the property should represent a scalar value, not a collection. This works regardless of whether there is a primary key on that column, but you probably want to define a unique constraint anyway.
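A sketch with the modern declarative API (relation() is spelled relationship() in current SQLAlchemy); the class and table names here are illustrative. Because the foreign key lives on the group-entry table, Book.group would default to a list; uselist=False forces the scalar form the question asks for:

```python
# uselist=False makes Book.group scalar even though the FK is on the
# other table (mirroring the groups(bookID pk/fk, groupID) layout).
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Book(Base):
    __tablename__ = "books"
    bookID = Column(Integer, primary_key=True)
    # Scalar: each book has at most one row in book_groups.
    group = relationship("GroupEntry", uselist=False, back_populates="book")

class GroupEntry(Base):
    __tablename__ = "book_groups"
    bookID = Column(Integer, ForeignKey("books.bookID"), primary_key=True)
    groupID = Column(Integer)
    book = relationship("Book", back_populates="group")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
```

With this mapping, book.group is a single object and group_entry.book points back at the book, which matches the desired book.group = g behavior.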