While running a pre-migration script to delete a (wizard) transient model, I ended up with the issue below.
from openupgradelib import openupgrade

@openupgrade.migrate()
def migrate(env, version):
    openupgrade.delete_records_safely_by_xml_id(
        env,
        ["module_name.view_id"],
        delete_childs=True,
    )
    try:
        env.cr.execute("DROP TABLE IF EXISTS table_name CASCADE")
        env.cr.execute("DROP TABLE IF EXISTS dependent_table_names CASCADE")
    except Exception as e:
        raise Exception("dropping wizard tables failed") from e
Error:
psycopg2.errors.ForeignKeyViolation: update or delete on table "ir_model" violates foreign key constraint "ir_model_relation_model_fkey" on table "ir_model_relation"
Similar issue: https://github.com/odoo/odoo/issues/54178
According to that issue, having a Many2many field on a transient model can cause this error. That matches my case: my model has Many2many fields. No solution was posted there.
I tried deleting the problematic Many2many fields before dropping the columns, but Many2many fields can't be located as columns in the db (they live in a separate relation table), so I'm kind of stuck.
openupgrade.drop_columns(
    env.cr,
    [
        ("table_name", "any_other_column_name"),  # ---> This works
        ("table_name", "many2many_column_name"),  # ---> This doesn't
    ],
)
Is there any way to get rid of the Many2many fields from the model? Any help is appreciated.
Could you try this?
Let's say your transient model is my_transient_model and the Many2many field is e.g. sale_line_ids = fields.Many2many('sale.order.line').
First thing to know: did you specify the relation table? Like
sale_line_ids = fields.Many2many('sale.order.line', 'my_relation_table_name')?
If so, 'my_relation_table_name' is the name you want to delete from ir_model_relation.
If not, the default relation table name is my_transient_model_sale_order_line_rel (the model's table name, then _, then the comodel name with _ instead of ., then _rel).
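The naming rule just described can be sketched as a small Python helper (the function name is my own; this only restates the convention from this answer, not Odoo's actual implementation):

```python
def default_m2m_table(model_table, comodel_name):
    """Default Many2many relation table name, per the rule above:
    the model's table, '_', the comodel name with '.' replaced by
    '_', then '_rel'."""
    return "%s_%s_rel" % (model_table, comodel_name.replace(".", "_"))

print(default_m2m_table("my_transient_model", "sale.order.line"))
# my_transient_model_sale_order_line_rel
```

The result is the name to look for in ir_model_relation and to DROP, assuming no explicit relation table was given.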
Second step: delete the row from ir_model_relation:
DELETE FROM ir_model_relation WHERE name='my_transient_model_sale_order_line_rel';
Then you should be able to delete the Many2many table :
DROP TABLE my_transient_model_sale_order_line_rel;
(Of course, change my_transient_model_sale_order_line_rel if you specified a relation table name like my_relation_table_name in the example.)
Hope it helped, keep me updated :)
In Grails, Gorm, I have this entity:
class MyEntity implements Serializable {
    Long bankTransactionId
    int version
    BigDecimal someValue

    static constraints = {
        bankTransactionId(nullable: false)
        version(nullable: true)
        someValue(nullable: true)
    }
}
Doing MyEntity.findByBankTransactionId(Long.valueOf("3")) throws this exception:
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown
column 'this_.id' in 'field list'
I suspect it is because my column name contains "id". Could that be it?
How do I fix it?
Thanks.
Everything you have provided here looks fine. In particular, there are no restrictions about having the letters "id" in a column name.
Take a look at your generated MySQL table. I'm guessing that the id column isn't there for some reason. Maybe something prevented generating it due to some earlier error that you have now corrected, or you have your datasource set to "none" instead of "update" (or similar) and the whole table is missing!
If this is just a development environment with no real data (and no foreign key constraints), drop the whole MyEntity table and let it be automatically recreated. If not, move to a different temporary datasource, let a new table be created, and compare the two. If the new one still doesn't have an id column, you have something going wrong during your startup that is preventing your tables from being created correctly. You could just add the column in manually, but if you don't figure out what happened to it in the first place, it will probably just happen again.
For reference, in my test environment, my MySQL table for "MyEntity" copied from your example looks like:
desc my_entity;
'id','bigint(20)','NO','PRI',NULL,'auto_increment'
'version','int(11)','YES','',NULL,''
'bank_transaction_id','bigint(20)','NO','',NULL,''
'some_value','decimal(19,2)','YES','',NULL,''
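GORM maps camelCase property names to snake_case column names by default, which is why bankTransactionId appears as bank_transaction_id in the table above. A rough Python sketch of that mapping rule (my own illustration, not GORM code):

```python
import re

def gorm_column_name(prop):
    """Approximate GORM's default property-to-column mapping:
    insert '_' before each uppercase letter, then lowercase."""
    return re.sub(r"([A-Z])", r"_\1", prop).lower()

for prop in ("bankTransactionId", "someValue", "version"):
    print(prop, "->", gorm_column_name(prop))
```

Note the surrogate id column is not a mapped property at all; GORM adds it on its own, which is why its absence points to a table-generation problem rather than a naming one.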
I have a bunch of MySQL tables I work with where the ultimate data source is a very slow SQL server administered by someone else. My predecessors' solution to dealing with this is to do queries more or less like:
results = python_wrapper('SELECT primary_key, col2, col3 FROM foreign_table;')
other_python_wrapper('DELETE FROM local_table;')
other_python_wrapper('INSERT INTO local_table VALUES() %s;' % results)
The problem is this means you can never use values in local_table as foreign key constraints for other tables, because they are constantly being deleted and re-inserted whenever you update from the foreign source. However, if a record really does disappear from the results of the query on the foreign server, then that usually means you would want to trigger a cascade to drop records in other local tables that you've linked with a foreign key constraint to data propagated from the foreign table.
The only semi-reasonable solution I've come up with is to do something like:
results = python_wrapper('SELECT primary_key, col2, col3 FROM foreign_table;')
other_python_wrapper('DELETE FROM local_table_temp;')
other_python_wrapper('INSERT INTO local_table_temp VALUES() %s;' % results)
other_python_wrapper('DELETE FROM local_table WHERE primary_key NOT IN (SELECT primary_key FROM local_table_temp);')
other_python_wrapper('INSERT INTO local_table SELECT * FROM local_table_temp ON DUPLICATE KEY UPDATE local_table.col2 = local_table_temp.col2, local_table.col3 = local_table_temp.col3;')
The problem is there is a fair number of these tables, and many of them have a large number of columns that need to be updated, so it's tedious to write the same boilerplate over and over. And if the table schema changes, there's more than one place where you need to update the listing of all columns.
Is there any more concise way to do this with the SQL code?
Thanks!
I have a somewhat unsatisfactory answer to my own question. Since I'm using Python to query the foreign Oracle database and put the results into MySQL, and I trust the table and column names to be pretty well behaved, I can just wrap the whole procedure in Python code and have Python generate the SQL update queries based on inspecting the tables.
For a number of reasons, I'd still like to see a better way to do this, but it works for me because:
I'm using an external scripting language that can inspect the database schema anyway.
I trust the database, column, and table names I'm working with to be well-behaved because these are all things I have direct control over.
My solution depends on the local SQL table structure; specifically which keys are primary keys. The code won't work without properly structured tables. But that's OK, because I can restructure the MySQL tables to make my python code work.
While I do hope someone else can think up a more-elegant and/or general-purpose solution, I will offer up my own python code to anyone who is working on a similar problem who can safely make the same assumptions I did above.
Below is a python wrapper I use to do simple SQL queries in python:
import config, MySQLdb

class SimpleSQLConn(object):
    '''simplified wrapper around a MySQLdb.connection'''
    def __init__(self, **kwargs):
        self._connection = MySQLdb.connect(host=config.mysql_host,
                                           user=config.mysql_user,
                                           passwd=config.mysql_pass,
                                           **kwargs)
        self._cursor = self._connection.cursor()

    def query(self, query_str):
        self._cursor.execute(query_str)
        self._connection.commit()
        return self._cursor.fetchall()

    def columns(self, database, table):
        return [x[0] for x in self.query('DESCRIBE `%s`.`%s`' % (database, table))]

    def primary_keys(self, database, table):
        return [x[0] for x in self.query('DESCRIBE `%s`.`%s`' % (database, table)) if 'PRI' in x]
And here is the actual update function, using the SQL wrapper class above:
def update_table(database,
                 table,
                 mysql_insert_with_dbtable_placeholder):
    '''update a mysql table without first deleting all the old records

    mysql_insert_with_dbtable_placeholder should be set to a string with
    placeholders for database and table, something like:
    mysql_insert_with_dbtable_placeholder = "
        INSERT INTO `%(database)s`.`%(table)s` VALUES (a, b, c);"

    note: code as is will update all the non-primary keys, structure
    your tables accordingly
    '''
    sql = SimpleSQLConn()
    query = 'DROP TABLE IF EXISTS `%(database)s`.`%(table)s_temp_for_update`' %\
            {'database': database, 'table': table}
    sql.query(query)
    query = 'CREATE TABLE `%(database)s`.`%(table)s_temp_for_update` LIKE `%(database)s`.`%(table)s`' %\
            {'database': database, 'table': table}
    sql.query(query)
    query = mysql_insert_with_dbtable_placeholder %\
            {'database': database, 'table': '%s_temp_for_update' % table}
    sql.query(query)
    query = '''DELETE FROM `%(database)s`.`%(table)s` WHERE
               (%(primary_keys)s) NOT IN
               (SELECT %(primary_keys)s FROM `%(database)s`.`%(table)s_temp_for_update`);
            ''' % {'database': database,
                   'table': table,
                   'primary_keys': ', '.join(['`%s`' % key for key in sql.primary_keys(database, table)])}
    sql.query(query)
    update_columns = [col for col in sql.columns(database, table)
                      if col not in sql.primary_keys(database, table)]
    query = '''INSERT INTO `%(database)s`.`%(table)s`
               SELECT * FROM `%(database)s`.`%(table)s_temp_for_update`
               ON DUPLICATE KEY UPDATE
               %(update_cols)s
            ''' % {'database': database,
                   'table': table,
                   'update_cols': ',\n'.join(['`%(table)s`.`%(col)s` = `%(table)s_temp_for_update`.`%(col)s`'
                                              % {'table': table, 'col': col} for col in update_columns])}
    sql.query(query)
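The delete-then-upsert pattern above can be exercised end to end with Python's standard-library sqlite3 module (table and column names here are my own, and sqlite's INSERT OR REPLACE stands in for MySQL's ON DUPLICATE KEY UPDATE):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE local_table (pk INTEGER PRIMARY KEY, col2 TEXT)")
cur.execute("CREATE TABLE local_table_temp (pk INTEGER PRIMARY KEY, col2 TEXT)")

# Existing local rows; pk=3 no longer exists upstream.
cur.executemany("INSERT INTO local_table VALUES (?, ?)",
                [(1, "old"), (2, "old"), (3, "stale")])

# A fresh snapshot from the foreign source goes into the temp table.
cur.executemany("INSERT INTO local_table_temp VALUES (?, ?)",
                [(1, "new"), (2, "new"), (4, "new")])

# Step 1: drop local rows that vanished upstream (FK cascades would fire here).
cur.execute("DELETE FROM local_table WHERE pk NOT IN "
            "(SELECT pk FROM local_table_temp)")

# Step 2: upsert the snapshot; surviving rows are updated in place
# rather than deleted and re-inserted wholesale.
cur.execute("INSERT OR REPLACE INTO local_table SELECT * FROM local_table_temp")
conn.commit()

print(cur.execute("SELECT * FROM local_table ORDER BY pk").fetchall())
# [(1, 'new'), (2, 'new'), (4, 'new')]
```

Only the row that truly disappeared upstream is deleted, so foreign keys pointing at the surviving primary keys remain usable between refreshes.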
I have the following database structure:
The "subcat_id" column in the "Course" table points to the "id" column in the "sub_category" table.
The "instructor_id" column in the "Course" table points to the "id" column in the "user" table.
I want to insert data in the "course" table. I am using Symfony2 framework with Doctrine as the database library. When I try to insert data into the course table using the following statements:
$newCourse = new \FrontEndBundle\Entity\Course();
$newCourse->setSubcat($data['subcat']);
$newCourse->setName($data['coursename']);
$newCourse->setInstructor($instructorId);
$newCourse->setDescription($data['description']);
$em->persist($newCourse);
$em->flush();
I get an error ($newCourse is an object of the Course entity class).
The error displayed is shown below:
I think the error relates to a foreign key issue. Can anyone help me with how to insert data into the "course" table correctly?
Thanks in advance!
This problem is because you are passing an id, which is not a SubCategory object, to the setter on your Course entity.
So you have to retrieve your SubCategory object first, then set it on the Course entity.
Try this way:
$subCat = $this->getDoctrine()
->getRepository('FrontEndBundle:SubCategory')
->find($data['subcat']);
and then
$newCourse->setSubcat($subCat);
I am using Sequelize, but since I also have servers running other than node.js, I need to let them share the database. So when defining a one-to-many relation, I need to make Sequelize use the old existing join table.
I write my definition of the association as below, where pid is the primary key of presentations:
this.courses.hasMany(this.presentations,
    {as: 'Presentations',
     foreignKey: 'cid',
     jointTableName: 'course_presentations'
});
Or this one:
this.courses.hasMany(this.presentations,
    {as: 'Presentations',
     foreignKey: 'pid',
     jointTableName: 'course_presentations'
});
I am using the code below to retrieve the associated presentations:
app.get('/courses/presentations/:cid', function (req, res){
    var cid = req.params.cid;
    app.models.courses.find({where: {cid: cid}}).success(function(course){
        course.getPresentations().success(function(presentations){
            res.send(presentations);
        });
    });
});
The first one tells me there is no cid in the 'presentations' table.
The latter one gives something like this:
Executing: SELECT * FROM `courses`;
Executing: SELECT * FROM `courses` WHERE `courses`.`cid`='11' LIMIT 1;
Executing: SELECT * FROM `presentations` WHERE `presentations`.`pid`=11;
Checking carefully, I found that every time it is using the cid value to query for presentations, which means something is returned only when the two happen to share the same id value, and even then it is not correct.
I strongly suspect Sequelize is not using the join table I specified; instead, it is still trying to find the foreign keys in the original two tables. It is viewing pid as a reference to cid in presentations, which causes this problem.
So I am wondering how to correctly set up the junction table so that the two models can use the foreign keys correctly.
jointTableName : 'course_presentations'
should be (without a t)
joinTableName : 'course_presentations'
Actually, AFAIK this kind of relation is not a "pure" one-to-many.
One course can have many entries in the course_presentations table, and course_presentations has a one-to-one relation with presentations. If so, just define those associations in the model.
I have a table with the following declarative definition:
class Type(Base):
    __tablename__ = 'Type'
    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True)

    def __init__(self, name):
        self.name = name
The column "name" has a unique constraint, but I'm able to do
type1 = Type('name1')
session.add(type1)
type2 = Type(type1.name)
session.add(type2)
So, as can be seen, the unique constraint is not checked at all, since I have added two objects with the same name to the session.
When I do session.commit(), I get a MySQL error since the constraint also exists in the MySQL table.
Is it possible for SQLAlchemy to tell me in advance that I cannot do this, or to detect it and refuse to insert two entries with the same "name" column?
If not, should I keep all existing names in memory, so I can check whether they exist or not before creating the object?
SQLAlchemy doesn't handle uniqueness, because there is no good way to do it. Even if you keep track of created objects and/or check whether an object with that name exists, there is a race condition: another process can insert a new object with the name you just checked. The only solution is to lock the whole table before the check and release the lock after the insertion (some databases support such locking).
AFAIK, sqlalchemy does not handle uniqueness constraints in python behavior. Those "unique=True" declarations are only used to impose database level table constraints, and only then if you create the table using a sqlalchemy command, i.e.
Type.__table__.create(engine)
or some such. If you create an SA model against an existing table that does not actually have this constraint present, it will be as if it does not exist.
Depending on your specific use case, you'll probably have to use a pattern like
try:
    existing = session.query(Type).filter_by(name='name1').one()
    # do something with existing
except:
    newobj = Type('name1')
    session.add(newobj)
or a variant, or you'll just have to catch the mysql exception and recover from there.
From the docs
class MyClass(Base):
    __tablename__ = 'sometable'
    __table_args__ = (
        ForeignKeyConstraint(['id'], ['remote_table.id']),
        UniqueConstraint('foo'),
        {'autoload': True}
    )
.one() throws two kinds of exceptions:
sqlalchemy.orm.exc.NoResultFound and sqlalchemy.orm.exc.MultipleResultsFound
You should create the object when the first exception occurs; if the second occurs, you're screwed anyway and shouldn't make it worse.
try:
    existing = session.query(Type).filter_by(name='name1').one()
    # do something with existing
except NoResultFound:
    newobj = Type('name1')
    session.add(newobj)
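The race-condition point above can be seen with plain DBAPI code (standard-library sqlite3, not SQLAlchemy; table and function names are my own): the UNIQUE constraint is only enforced by the database at insert time, so a robust pattern is to attempt the insert and catch the integrity error rather than check first.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE type (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")

def get_or_create(conn, name):
    """Try the insert first and fall back to a select on conflict;
    this stays correct even if another connection inserts the same
    name between a check and the insert."""
    try:
        conn.execute("INSERT INTO type (name) VALUES (?)", (name,))
        conn.commit()
        created = True
    except sqlite3.IntegrityError:
        conn.rollback()
        created = False
    row = conn.execute("SELECT id, name FROM type WHERE name = ?",
                       (name,)).fetchone()
    return row, created

print(get_or_create(conn, "name1"))  # inserted
print(get_or_create(conn, "name1"))  # duplicate caught, existing row returned
```

The same shape works with SQLAlchemy by catching its IntegrityError around the commit, which is the "catch the mysql exception and recover" option mentioned in the answer above.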