I am in the process of switching databases for a Spring Boot project from MySQL to PostgreSQL. For the most part this hasn't been too painful; however, I am now hitting an exception I can't seem to sort out.
org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "r_i_UNIQUE"
Detail: Key (cola_id, colb_id)=(1234567, 12345) already exists.
The behaviour in MySQL when this occurred was to perform an update, which is what I need to occur in PostgreSQL as well.
Code-wise, I search for an existing item by cola_id and colb_id; the object returned from my repository is manipulated as required and then persisted using repository.saveAndFlush(existingItem);
It is on the saveAndFlush that I get the exception.
My assumption is that Spring is seeing this as a new object and trying to do an insert rather than an update, but I don't understand why this happens, since it worked as expected with MySQL.
How does PostgreSQL handle updates using saveAndFlush, and how can I get an update to apply correctly?
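For reference, the MySQL behaviour described (insert-or-update on a duplicate key) corresponds to INSERT ... ON DUPLICATE KEY UPDATE; PostgreSQL's native equivalent is INSERT ... ON CONFLICT ... DO UPDATE. A minimal sketch of that semantics, using Python's sqlite3 (which shares the ON CONFLICT syntax from SQLite 3.24+) so it runs self-contained; the table and column names here are invented to mirror the question:

```python
import sqlite3

# In-memory table with the same kind of composite unique constraint
# as the "r_i_UNIQUE" constraint in the question.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE item (
        cola_id INTEGER,
        colb_id INTEGER,
        payload TEXT,
        UNIQUE (cola_id, colb_id)
    )
""")

def upsert(cola_id, colb_id, payload):
    # ON CONFLICT turns the would-be duplicate-key error into an UPDATE.
    conn.execute("""
        INSERT INTO item (cola_id, colb_id, payload)
        VALUES (?, ?, ?)
        ON CONFLICT (cola_id, colb_id)
        DO UPDATE SET payload = excluded.payload
    """, (cola_id, colb_id, payload))

upsert(1234567, 12345, "first")
upsert(1234567, 12345, "second")  # without ON CONFLICT this would raise a duplicate-key error

rows = conn.execute("SELECT cola_id, colb_id, payload FROM item").fetchall()
print(rows)  # [(1234567, 12345, 'second')]
```

This shows the database-level semantics only; whether JPA issues an INSERT or UPDATE still depends on whether it considers the entity new.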
I am using AWS DMS to migrate 350 GB of data.
The migration has completed, but the status shows an error. I checked the CloudWatch logs and found the following errors:
E: RetCode: SQL_ERROR SqlState: HY000 NativeError: 1280 Message: [MySQL][ODBC 5.3(w) Driver][mysqld-5.5.5-10.2.12-MariaDB-log]Incorrect index name 'PRIMARY' [1022502] (ar_odbc_stmt.c:4428)
[TARGET_LOAD ]E: execute create primery key failed, statement ALTER TABLE <databaseName>.<table> ADD CONSTRAINT PRIMARY PRIMARY KEY ( id ) [1022502] (odbc_endpoint_imp.c:3938)
I have compared the databases on source and target and found that there are some variations in table size; the Key field is also empty on the target RDS (compared using describe), so I suspect the keys were not migrated to my target RDS. The DMS documentation says that keys will be migrated.
Is there any way to fix this issue?
Please let me know if anyone has faced this issue while using AWS DMS with RDS.
It looks like DMS is attempting to apply an index that already exists in the target.
From other reports, the "Incorrect index name" message relates to attempting to create an index that already exists.
Consider running the AWS Schema Conversion Tool to create the target schema, and run the DMS task with a target table preparation mode of "do nothing". This way you can troubleshoot creation of the schema separately from migrating data.
Also consider creating a task for just this table, with otherwise identical task configuration, using source table filters; this will give you a complete end-to-end targeted log.
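A table-mapping selection rule for a single-table task of the kind meant here might look like the following (the schema and table names are placeholders you would replace with your own):

```json
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "include-one-table",
      "object-locator": {
        "schema-name": "your_schema",
        "table-name": "your_table"
      },
      "rule-action": "include"
    }
  ]
}
```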
For reference AWS have written a very detailed blog series for troubleshooting DMS:
Debugging Your AWS DMS Migrations: What to Do When Things Go Wrong (Part 1)
Debugging Your AWS DMS Migrations: What to Do When Things Go Wrong (Part 2)
Debugging Your AWS DMS Migrations: What to Do When Things Go Wrong? (Part 3)
I want to know if it is possible to drop a table fragment that is preventing me from performing a restore with the NDB_RESTORE tool.
When I run the restore, it throws the following error:
Create table db_died_maestro/def/NDB$FKM_3194_0_mae_tipo_reg_evaluacion failed: 721: Schema object with given name already exists
Restore: Failed to restore table: db_died_maestro/def/NDB$FKM_3194_0_mae_tipo_reg_evaluacion ... Exiting
NDBT_ProgramExit: 1 - Failed
I had already dropped the DB_DIED_MAESTRO database prior to running the restore, but this fragment was not dropped along with the database.
I have checked that the fragment is in the database catalog using these queries:
select * from ndbinfo.operations_per_fragment
where fq_name like 'db_died_maestro%'
And this query:
select * from ndbinfo.memory_per_fragment
where fq_name like '%FKM_3194_0_mae_tipo_reg_evaluacion'
This fragment was created on a previous run of the NDB_RESTORE tool. Please help me.
The table is a foreign key 'mock' table (indicated by the name NDB$FKM prefix).
Foreign key mock tables are created transiently in some cases to implement the foreign_key_checks = 0 feature of MySQL. This feature requires storage engines to support unordered creation of tables with partially defined foreign key constraints, which can be arbitrarily enabled (without revalidation) at a later time.
Foreign key mock tables are normally entirely managed by the Ndb storage engine component of MySQL, and so should not be visible unless there has been a failure or bug of some kind.
If you can share information about activities occurring before this problem then that would help us understand how this happened and whether it can be avoided.
As a workaround, it should be possible for you to use the ndb_drop_table utility to drop this table before re-attempting the failing restore. You may have to escape or single-quote the $ in the name passed as a command-line argument from a shell. You should probably also check for any other NDB$FKM tables in a similar situation.
I'm getting a strange behavior in Django 1.3 and MySQL. I have a Model with a field, object_id, that acts as primary key (as opposed to auto-increment). I'm adding an object with the following piece of code:
Record.objects.create(object_id = '1')
This works OK. If I try to add the same object again (doing it from the shell), I'll get IntegrityError: (1062, "Duplicate entry '1' for key 'PRIMARY'"). That's just fine.
However, there's a side effect now: if I try to delete the same record from a different thread/process, even when simply deleting it from the MySQL shell, I get a transaction error (Lock wait timeout exceeded; try restarting transaction). I should point out that if I simply create the object once (without the integrity error), this problem does not occur. It also works just fine when deleting from the same thread (probably because it is using the same connection).
The situation can temporarily be solved if I use django.db.close_connection(), but that doesn't seem right. I tried using manual transaction management and rolling back, but that didn't do any good.
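The failure mode described above can be sketched at the DB-API level: a statement that fails with an IntegrityError still leaves the connection's transaction open, and (on MySQL/InnoDB) the locks it acquired are held until that transaction commits or rolls back, which is what blocks the DELETE from the other connection. A self-contained sketch using sqlite3 to stand in for the MySQL connection, showing the rollback-on-error pattern:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE record (object_id TEXT PRIMARY KEY)")
conn.execute("INSERT INTO record VALUES ('1')")
conn.commit()

try:
    conn.execute("INSERT INTO record VALUES ('1')")  # duplicate primary key
    conn.commit()
except sqlite3.IntegrityError:
    # Without this rollback the connection keeps its transaction open;
    # on MySQL/InnoDB that means row locks stay held, producing the
    # "Lock wait timeout exceeded" error seen from other connections.
    conn.rollback()

count = conn.execute("SELECT COUNT(*) FROM record").fetchone()[0]
print(count)  # 1
```

This is why django.db.close_connection() "fixes" it: closing the connection implicitly ends the stuck transaction.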
Any ideas on what went wrong? Is it a bug in Django ORM?
I have a closed-source upgrade application which migrates my database from an old format to a new format (creates new tables and migrates data from the old to new tables).
The application crashes with a MySQLIntegrityConstraintViolationException. It doesn't give me the name of the table with the primary key violation or the contents of the broken SQL query.
Is there any MySQL server option that I can switch to give me more troubleshooting information? Maybe the text of the failed query or the name of the primary key constraint which is violated?
You can enable the general log file: http://dev.mysql.com/doc/refman/5.1/en/query-log.html . This way it might be possible to see at which point the server stops processing the queries.
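If restarting the server is inconvenient, the general query log can also be switched on at runtime (MySQL 5.1.12+); the file path shown is just an example:

```sql
SET GLOBAL general_log_file = '/var/log/mysql/general.log';
SET GLOBAL general_log = 'ON';
```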
You can also run the MySQL command SHOW PROCESSLIST to see which queries are being processed at that time.
Also have a look into all other application specific error logs.
A first try could be to disable foreign key checks during migration:
SET foreign_key_checks = 0;
A first guess would be that the old server allowed 0 as a primary key value, while the new one does not.
I am trying to use Microsoft Sync Framework for syncing 2 SQL Server 2005 database (server and client). There are multiple tables in the database with lots of foreign key relation between them. I am using SyncOrchestrator to synchronize the two databases.
string clientConnectionString = "<connection string>";
string serverConnectionString = "<connection string>";
SqlSyncProvider localProvider
= ConfigureClientProvider(clientConnectionString);
SqlSyncProvider remoteProvider
= ConfigureServerProvider(serverConnectionString);
SyncOrchestrator orchestrator = new SyncOrchestrator();
orchestrator.LocalProvider = localProvider;
orchestrator.RemoteProvider = remoteProvider;
orchestrator.Direction = SyncDirectionOrder.Download;
In the functions ConfigureClientProvider and ConfigureServerProvider I initialize the connection and, if the scope does not exist, create it:
public static SqlSyncProvider ConfigureClientProvider(string connectionString)
{
    SqlSyncProvider provider = new SqlSyncProvider();
    provider.Connection = new SqlConnection(connectionString);

    DbSyncScopeDescription scopeDesc = new DbSyncScopeDescription("Test1");
    SqlSyncScopeProvisioning serverConfig = new SqlSyncScopeProvisioning();

    if (!serverConfig.ScopeExists("Test1", (SqlConnection)provider.Connection))
    {
        scopeDesc.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable(
            "Employees", (SqlConnection)provider.Connection));
        scopeDesc.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable(
            "Profiles", (SqlConnection)provider.Connection));
        scopeDesc.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable(
            "Department", (SqlConnection)provider.Connection));

        serverConfig.PopulateFromScopeDescription(scopeDesc);
        serverConfig.SetCreateTableDefault(DbSyncCreationOption.Skip);
        serverConfig.Apply((SqlConnection)provider.Connection);
    }

    return provider;
}
Now when I run the sync it works fine for updated data, but I get foreign key violations whenever there are inserts or deletes in the database, e.g.:
The INSERT statement conflicted with the FOREIGN KEY constraint "FK_Employees_Departments". The conflict occurred in database "Sync_Client", table "dbo.Departments", column 'DepartmentID'.
If I change the order of the tables I can resolve one case, but another arises because of deletions.
The DELETE statement conflicted with the REFERENCE constraint "FK_Employees_Departments". The conflict occurred in database "Sync_Client", table "dbo.Employees", column 'DepartmentID'.
Does anyone have any idea how this can be fixed? I think the sync framework is not executing the changes in the correct order, which depends on several factors such as the foreign key relations and the type of command (insert, update, delete). I am really stuck here; early help would be appreciated.
This is an old question now, but since there's no real answer:
Sync requires you to list tables in each scope in insert order, so that all Foreign Key parents are in place before any Foreign Key children are inserted. Sync will automatically reverse that order on delete.
This is all fine and dandy, but if you have a database where for whatever reason the data in your parent or child tables is stored on different servers based on some independent piece of information, so that the parent and child might have different sync rules, you've overstepped any automatic processing that's available.
In this case, where the normal sync filters are built against the primary key information in your BASE tables, you will need to force the filters to use the primary key information in the TRACKING tables instead. There is now some content about this on social.msdn.microsoft.com.