I want to put a constraint on a table to limit the values that can be inserted: any combination of values may appear only once, regardless of order. I.e. (1, 2) and (2, 1) cannot both be in the same table.
For example, I have a table with two columns (c1 and c2), and inserts have to be accepted or rejected as follows:
C1 C2
1  2   OK
2  1   NOT OK (reversed duplicate of (1, 2))
3  1   OK
1  2   NOT OK (duplicate of (1, 2))
1  4   OK
1  3   NOT OK (reversed duplicate of (3, 1))
Is there any way to do this in SQLAlchemy?
I used UNIQUE(c1, c2), but that only prevents (1, 2) and (1, 2) from being in the same table; as mentioned, I also want to prevent (2, 1) from being inserted.
Thanks
Probably the easiest solution is to add a check constraint for c1 < c2 (or c1 <= c2 if they're allowed to be the same) so that (c1, c2) will always be in "ascending order":
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base, sessionmaker

connection_uri = (
    "mssql+pyodbc://@localhost:49242/myDb?driver=ODBC+Driver+17+for+SQL+Server"
)
engine = sa.create_engine(connection_uri)
Base = declarative_base()

class So64232358(Base):
    __tablename__ = "so64232358"
    id = sa.Column(sa.Integer, primary_key=True, autoincrement=True)
    c1 = sa.Column(sa.Integer, nullable=False)
    c2 = sa.Column(sa.Integer, nullable=False)
    comment = sa.Column(sa.String(50))
    sa.CheckConstraint(c1 < c2)
    sa.UniqueConstraint(c1, c2)

Base.metadata.drop_all(engine, checkfirst=True)
Base.metadata.create_all(engine)
"""SQL rendered:
CREATE TABLE so64232358 (
    id INTEGER NOT NULL IDENTITY,
    c1 INTEGER NOT NULL,
    c2 INTEGER NOT NULL,
    comment VARCHAR(50) NULL,
    PRIMARY KEY (id),
    CHECK (c1 < c2),
    UNIQUE (c1, c2)
)
"""
Session = sessionmaker(bind=engine)
session = Session()
obj = So64232358(c1=2, c2=1, comment="no es bueno")
session.add(obj)
try:
    session.commit()
except sa.exc.IntegrityError as ie:
    print(ie)
"""console output:
(pyodbc.IntegrityError) ('23000', '[23000] [Microsoft]
[ODBC Driver 17 for SQL Server][SQL Server]The INSERT statement conflicted
with the CHECK constraint "CK__so64232358__429B0397". The conflict
occurred in database "myDb", table "dbo.so64232358".
(547) (SQLExecDirectW);
[23000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]
The statement has been terminated. (3621)')
"""
session.rollback()
obj = So64232358(c1=1, c2=2, comment="bueno")
session.add(obj)
session.commit() # no error
obj = So64232358(c1=1, c2=2, comment="duplicado")
session.add(obj)
try:
    session.commit()
except sa.exc.IntegrityError as ie:
    print(ie)
"""console output:
(pyodbc.IntegrityError) ('23000', "[23000] [Microsoft]
[ODBC Driver 17 for SQL Server][SQL Server]Violation of UNIQUE KEY
constraint 'UQ__so642323__E13250592117193A'. Cannot insert duplicate key
in object 'dbo.so64232358'. The duplicate key value is (1, 2).
(2627)(SQLExecDirectW);
[23000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]
The statement has been terminated. (3621)")
"""
session.rollback()
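An alternative (or complement) to the CHECK constraint is to normalize the pair in application code before inserting, so callers never trip the constraint in the first place. A minimal sketch (the helper name `normalized_pair` is invented for illustration):

```python
def normalized_pair(a: int, b: int) -> tuple[int, int]:
    """Return the pair in ascending order so (2, 1) is stored as (1, 2)."""
    return (a, b) if a <= b else (b, a)

# Both call orders map to the same stored pair, so the UNIQUE(c1, c2)
# constraint treats (1, 2) and (2, 1) as the same combination.
c1, c2 = normalized_pair(2, 1)
print((c1, c2))  # (1, 2)
```

With this in place, the database constraints act as a safety net rather than the primary mechanism.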
I'm using the redshift-sqlalchemy package to connect SQLAlchemy to Redshift.
In Redshift I have a simple "companies" table:
CREATE TABLE companies
(
    id INTEGER IDENTITY(1,1) NOT NULL,
    name VARCHAR(255) NOT NULL,
    PRIMARY KEY(id)
);
On the SQLAlchemy side I have mapped it like so:
from sqlalchemy import BigInteger, Column, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Company(Base):
    __tablename__ = 'companies'

    id = Column(BigInteger, primary_key=True, redshift_identity=(1, 1))
    name = Column(String, nullable=False)

    def __init__(self, name: str):
        self.name = name
If I try to create a company:
company = Company(name = 'Hoge')
session.add(company)
session.commit()
then I get this error:
(sqlalchemy.exc.ProgrammingError) (redshift_connector.error.ProgrammingError) {'S': 'ERROR', 'C': '42P01', 'M': 'relation "companies_id_seq" does not exist', 'F': '../src/pg/src/backend/catalog/namespace.c', 'L': '267', 'R': 'LocalRangeVarGetRelid'}
[SQL: INSERT INTO companies (id, name) VALUES (%s, %s)]
[parameters: [{'name': 'Hoge'}]]
(Background on this error at: https://sqlalche.me/e/14/f405)
I think the problem is that SQLAlchemy is also trying to insert a value into the ID column, which has the IDENTITY option set:
SQL: INSERT INTO companies (id, name) VALUES (%s, %s)
If I execute that SQL directly against the Redshift table, I get the following error:
ERROR: cannot set an identity column to a value.
How do I define an auto-populated ID column in a sqlalchemy-redshift model?
I have the following code. Table [Table a] has a check constraint on the Denial column:
CREATE TABLE [Table a]
(
    [ID] int IDENTITY(1,1) NOT NULL,
    [EntityID] int,
    Denial nvarchar(20)
        CONSTRAINT Chk_Denial CHECK (Denial IN ('Y', 'N'))
)
Merge statement
MERGE INTO [Table a] WITH (HOLDLOCK) AS tgt
USING (SELECT DISTINCT
           JSON_VALUE(DocumentJSON, '$.EntityID') AS EntityID,
           JSON_VALUE(DocumentJSON, '$.Denial') AS Denial
       FROM Table1 bd
       INNER JOIN table2 bf ON bf.FileUID = bd.FileUID
       WHERE bf.Type = 'Payment') AS src ON tgt.EntityID = src.EntityID
WHEN MATCHED THEN
    UPDATE SET tgt.EntityID = src.EntityID,
               tgt.Denial = src.Denial
WHEN NOT MATCHED BY TARGET THEN
    INSERT (EntityID, Denial)
    VALUES (src.EntityID, src.Denial)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;
I get this error when running my MERGE statement:
Error Message Msg 547, Level 16, State 0, Procedure storproctest1, Line 40 [Batch Start Line 0]
The MERGE statement conflicted with the CHECK constraint "Chk_Column". The conflict occurred in the database "Test", table "Table1", and column 'Denial'. The statement has been terminated.
This is due to the source files having "Yes" and "No" instead of 'Y' and 'N'; hence the error above.
How can I use a CASE statement in the MERGE statement to handle this check constraint error? Or is there any alternative solution?
You can turn Yes into Y and No into N before merging your data. That belongs in the USING clause of the MERGE query:
USING (
    SELECT DISTINCT
        JSON_VALUE(DocumentJSON, '$.EntityID') AS EntityID,
        CASE JSON_VALUE(DocumentJSON, '$.Denial')
            WHEN 'Yes' THEN 'Y'
            WHEN 'No' THEN 'N'
            ELSE JSON_VALUE(DocumentJSON, '$.Denial')
        END AS Denial
    FROM Table1 bd
    INNER JOIN table2 bf ON bf.FileUID = bd.FileUID
    WHERE bf.Type = 'Payment'
) AS src
The case expression translates Yes and No to Y and N, and leaves other values untouched. Since this applies to the source dataset, the whole rest of the query benefits (i.e. both the update and insert branches).
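The CASE mapping itself can be sanity-checked outside SQL Server. Here is a small sketch using Python's stdlib SQLite (the table and column names are invented stand-ins for the source data) running the same expression:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (Denial TEXT)")
conn.executemany(
    "INSERT INTO docs VALUES (?)",
    [("Yes",), ("No",), ("Y",), ("Maybe",)],
)

# Same CASE expression as in the USING clause: map Yes/No to Y/N,
# pass any other value through unchanged.
rows = conn.execute(
    """
    SELECT CASE Denial
               WHEN 'Yes' THEN 'Y'
               WHEN 'No'  THEN 'N'
               ELSE Denial
           END
    FROM docs
    ORDER BY rowid
    """
).fetchall()
print([r[0] for r in rows])  # ['Y', 'N', 'Y', 'Maybe']
```

Note that a value like 'Maybe' still violates the check constraint; the CASE only fixes the known Yes/No spellings.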
Problem explanation
I want to update the last column of a three-column unique key. The problem is that sometimes the first and second columns are the same for multiple records, and in that case, when I set the new value, I get a duplicate-key error even though I use a subquery to try to avoid it.
Some Code
Schemas
create table llx_element_contact
(
    rowid             int auto_increment primary key,
    datecreate        datetime null,
    statut            smallint default 5 null,
    element_id        int not null,
    fk_c_type_contact int not null,
    fk_socpeople      int not null,
    constraint idx_element_contact_idx1
        unique (element_id, fk_c_type_contact, fk_socpeople)
)
Update request
This request returns a duplicate key error:
update llx_element_contact lec
set lec.fk_socpeople = 64
where
    -- Try to avoid the error by excluding rows where the value already exists
    (select count(*)
     from llx_element_contact ec
     where ec.fk_socpeople = 64
       and ec.element_id = lec.element_id
       and ec.fk_c_type_contact = lec.fk_c_type_contact) = 0
Test data
rowid, datecreate, statut, element_id, fk_c_type_contact, fk_socpeople
65,2015-08-31 18:59:18,4,65,160,30
66,2015-08-31 18:59:18,4,66,159,12
67,2015-08-31 18:59:18,4,67,160,12
15283,2016-03-23 11:47:15,4,6404,160,39
15284,2016-03-23 11:51:30,4,6404,160,58
You should check only the two other members of the unique constraint, since you're trying to assign the same value to the third member. No more than one row with the same two members must exist.
update llx_element_contact lec
set lec.fk_socpeople = 64
where
    -- only update when at most one row shares the first two key members
    (select count(*)
     from llx_element_contact ec
     where ec.element_id = lec.element_id
       and ec.fk_c_type_contact = lec.fk_c_type_contact) <= 1
or
update llx_element_contact lec
set lec.fk_socpeople = 64
where
    -- skip rows that share the first two key members with another row
    not exists (select 1
                from llx_element_contact ec
                where ec.element_id = lec.element_id
                  and ec.fk_c_type_contact = lec.fk_c_type_contact
                  and lec.fk_socpeople != ec.fk_socpeople)
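The NOT EXISTS guard can be exercised on the question's test data. A sketch in Python's stdlib SQLite (which, unlike MySQL, allows referencing the target table in the subquery; the `rowid_` column name avoids SQLite's built-in rowid):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE llx_element_contact (
        rowid_ INTEGER PRIMARY KEY,
        element_id INTEGER NOT NULL,
        fk_c_type_contact INTEGER NOT NULL,
        fk_socpeople INTEGER NOT NULL,
        UNIQUE (element_id, fk_c_type_contact, fk_socpeople)
    )
    """
)
conn.executemany(
    "INSERT INTO llx_element_contact VALUES (?, ?, ?, ?)",
    [
        (65, 65, 160, 30),
        (66, 66, 159, 12),
        (67, 67, 160, 12),
        (15283, 6404, 160, 39),  # shares (element_id, fk_c_type_contact) ...
        (15284, 6404, 160, 58),  # ... with this row: updating both would collide
    ],
)
# Only rows whose (element_id, fk_c_type_contact) pair is unique get updated.
conn.execute(
    """
    UPDATE llx_element_contact AS lec
    SET fk_socpeople = 64
    WHERE NOT EXISTS (SELECT 1
                      FROM llx_element_contact ec
                      WHERE ec.element_id = lec.element_id
                        AND ec.fk_c_type_contact = lec.fk_c_type_contact
                        AND ec.fk_socpeople != lec.fk_socpeople)
    """
)
rows = conn.execute(
    "SELECT rowid_, fk_socpeople FROM llx_element_contact ORDER BY rowid_"
).fetchall()
print(rows)
```

Rows 15283 and 15284 are both left untouched, since updating either one would duplicate the other; the three rows with unique pairs are set to 64 without violating the constraint.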
You can prevent the unique conflict by using a left join to check that the corresponding row doesn't already exist:
update llx_element_contact lec left join
       (select element_id, fk_c_type_contact
        from llx_element_contact lec2
        where lec2.fk_socpeople = 64
        group by element_id, fk_c_type_contact
       ) lec2
       using (element_id, fk_c_type_contact)
set lec.fk_socpeople = 64
where lec2.element_id is null;
Your query has additional logic in it that is not explained. It is not necessary for what you are asking.
WITH cte AS (
    SELECT rowid,
           SUM(fk_socpeople = 64) OVER (PARTITION BY element_id, fk_c_type_contact) u_flag,
           ROW_NUMBER() OVER (PARTITION BY element_id, fk_c_type_contact ORDER BY datecreate DESC) u_rn
    FROM llx_element_contact
)
UPDATE llx_element_contact lec
JOIN cte USING (rowid)
SET lec.fk_socpeople = 64
WHERE cte.u_flag = 0
  AND cte.u_rn = 1
https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=08e20328ccc6187716084ce9d78816b0
Below is my database table.
User
id role type name
1 1 1 John
2 2 1 Doe
Below is my data:
role = 1
type = 1
name = HelloWorld
role = 1
type = 2
name = HelloWorld
role = 3
type = 1
name = HelloWorld
I want the following result in my database table.
User
id role type name
1 1 1 HelloWorld // updated name because role = 1 and type = 1 exist.
2 2 1 Doe
3 1 2 HelloWorld // inserted name because role = 1 and type = 2 do not exist.
4 3 1 HelloWorld // inserted name because role = 3 and type = 1 do not exist.
How can I write the MySQL query without executing a SELECT first? In my case there is no primary key.
You can use MySQL's insert ... on duplicate key update ... syntax:
insert into mytable (role, type, name)
values (1, 1, 'Hello World')
on duplicate key update name = values(name)
For this to work, you need to set up a unique key constraint on columns (role, type). Create it if it doesn't yet exist:
alter table mytable add constraint mytable_unique_role_type unique (role, type);
This syntax can also be used to process multiple inserts at a time:
insert into mytable (role, type, name)
values (1, 1, 'Hello World'), (1, 2, 'Hello World'), (3, 1, 'Hello World')
on duplicate key update name = values(name)
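`ON DUPLICATE KEY UPDATE` is MySQL-specific, but the behaviour can be sketched with SQLite's analogous `ON CONFLICT ... DO UPDATE` clause via Python's stdlib; the table name `mytable` comes from the answer, and the data is the question's:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE mytable (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        role INTEGER NOT NULL,
        type INTEGER NOT NULL,
        name TEXT NOT NULL,
        UNIQUE (role, type)
    )
    """
)
# Existing rows from the question.
conn.executemany(
    "INSERT INTO mytable (role, type, name) VALUES (?, ?, ?)",
    [(1, 1, "John"), (2, 1, "Doe")],
)
# SQLite's analogue of MySQL's INSERT ... ON DUPLICATE KEY UPDATE:
# the (1, 1) row is updated in place, the other two are inserted.
conn.executemany(
    """
    INSERT INTO mytable (role, type, name) VALUES (?, ?, ?)
    ON CONFLICT (role, type) DO UPDATE SET name = excluded.name
    """,
    [(1, 1, "HelloWorld"), (1, 2, "HelloWorld"), (3, 1, "HelloWorld")],
)
rows = conn.execute(
    "SELECT role, type, name FROM mytable ORDER BY role, type"
).fetchall()
print(rows)
```

Either way, the unique constraint on (role, type) is what lets the database decide between update and insert, which is why the answer's `ALTER TABLE ... ADD CONSTRAINT` step is required.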
In my database I want to synchronize two tables. I use the auth_user table (the default table provided by Django) for registration, and there is another table, user-profile, that contains the fields username, email, age, etc. During the synchronization, how do I update the foreign key?
from time import time

from django.contrib.auth.models import User
from django.db import models

def get_filename(instance, filename):
    return "upload_files/%s_%s" % (str(time()).replace('.', '_'), filename)

def create_profile(sender, **kwargs):
    if kwargs["created"]:
        p = profile(username=kwargs["instance"], email=kwargs["instance"])
        p.save()

models.signals.post_save.connect(create_profile, sender=User)

class profile(models.Model):
    username = models.CharField(max_length=30)
    email = models.EmailField()
    age = models.PositiveIntegerField(default=15)
    picture = models.FileField(upload_to=get_filename)  # pass the function, not a string
    auth_user_id = models.ForeignKey(User)
Here, in the profile table, during synchronization all columns are filled except auth_user_id, and there is this error:
Exception Value:
(1048, "Column 'auth_user_id_id' cannot be null")
You have to alter your table and change the auth_user_id_id column so that it allows NULL.
Something like this:
ALTER TABLE mytable MODIFY auth_user_id_id int;
This assumes auth_user_id_id has the int datatype (columns are nullable by default).