AWS DMS issues after the migration - MySQL

I am using AWS DMS to migrate 350 GB of data.
The migration has completed, but the status shows an error. I checked the CloudWatch logs and found the following errors:
E: RetCode: SQL_ERROR SqlState: HY000 NativeError: 1280 Message: [MySQL][ODBC 5.3(w) Driver][mysqld-5.5.5-10.2.12-MariaDB-log]Incorrect index name 'PRIMARY' [1022502] (ar_odbc_stmt.c:4428)
[TARGET_LOAD ]E: execute create primery key failed, statement ALTER TABLE <databaseName>.<table> ADD CONSTRAINT PRIMARY PRIMARY KEY ( id ) [1022502] (odbc_endpoint_imp.c:3938)
I have compared the databases on the source and target and found some variations in table sizes; the Key field is also empty on the target RDS instance (compared using DESCRIBE), so I suspect the keys were not migrated to my target RDS. The DMS documentation says that keys will be migrated.
Is there any way to fix this issue?
Please let me know if anyone has faced similar issues while using AWS DMS with RDS.

It looks like DMS is attempting to apply an index that already exists in the target.
Another report of the "Incorrect index name" message suggests it occurs when attempting to create an index that already exists.
Consider running the Schema Conversion Tool to create the target schema, and run the DMS task with the target table preparation mode set to DO_NOTHING. This way you can troubleshoot creation of the schema separately from migrating the data.
Also consider creating a task for just this table, with an otherwise identical task configuration, using source table filters; this gives you a complete end-to-end log focused on the problem table.
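For illustration, the table-filtering setup described above is expressed through the task's table mappings. A minimal sketch, where "mydb" and "mytable" are placeholders for the real schema and table names:

```python
import json

# Hypothetical table-mapping document restricting a DMS task to a single
# table; "mydb" and "mytable" are placeholders for the actual names.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-problem-table",
            "object-locator": {"schema-name": "mydb", "table-name": "mytable"},
            "rule-action": "include",
        }
    ]
}

print(json.dumps(table_mappings, indent=2))
```

With the target table preparation mode set to DO_NOTHING, DMS leaves the SCT-created schema (including primary keys) untouched and only moves data.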
For reference, AWS has written a very detailed blog series on troubleshooting DMS:
Debugging Your AWS DMS Migrations: What to Do When Things Go Wrong (Part 1)
Debugging Your AWS DMS Migrations: What to Do When Things Go Wrong (Part 2)
Debugging Your AWS DMS Migrations: What to Do When Things Go Wrong? (Part 3)

Loading data from a JSON file into PostgreSQL causes: ERROR "duplicate key value violates unique constraint - already exists."

I am deploying my Django site to Google Cloud. One of the steps is to change the database to PostgreSQL. As I use SQLite locally, I wanted to migrate all of the data into PostgreSQL. I followed an online guide where you dump your data first and then change the database in settings.py to your new database. I have done everything up to this command:
python manage.py loaddata datadump.json
where datadump.json is the database dumped from SQLite. Now I am stuck with this error:
django.db.utils.IntegrityError: Problem installing fixture: Could not load users.Profile(pk=3): duplicate key value violates unique constraint "users_profile_user_id_key" DETAIL: Key (user_id)=(1) already exists.
and I don't have any idea as to what to do. Some answers I looked up such as this:
postgresql duplicate key violates unique constraint
AND
Django admin "duplicate key value violates unique constraint" Key (user_id)=(1) already exists
haven't helped, as I cannot understand what's going on. I did use MySQL 6 years ago, but I still cannot make sense of this.
I managed to run some SQL commands from online resources and managed to produce this for my database:
https://imgur.com/a/qQNLEs7
I followed these guides:
https://medium.com/@aaditya.chhabra/how-to-use-postgresql-with-your-django-272d59d28fa5
https://www.shubhamdipt.com/blog/django-transfer-data-from-sqlite-to-another-database/
Drop the database, create a new one, and run the migrations:
python manage.py migrate
Then try importing the data from the JSON file using:
python manage.py loaddata datadump.json
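Why the duplicate arises: after migrate (and createsuperuser, or signals that fire on it), the fresh database already contains rows, and loaddata then tries to insert fixture rows carrying the same unique values. A minimal sketch of the failure and the fix, using an in-memory SQLite table shaped like the one named in the traceback:

```python
import sqlite3

# Sketch of the loaddata failure: the migrated database already holds a
# profile row, and the fixture inserts another row with the same unique
# user_id. Table and column names are taken from the error message.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users_profile (id INTEGER PRIMARY KEY, user_id INTEGER UNIQUE)")
conn.execute("INSERT INTO users_profile (id, user_id) VALUES (1, 1)")  # row created after migrate

try:
    conn.execute("INSERT INTO users_profile (id, user_id) VALUES (3, 1)")  # fixture row
except sqlite3.IntegrityError as e:
    print("IntegrityError:", e)  # same class of failure as the Postgres error

# Removing the conflicting row first lets the fixture row load cleanly.
conn.execute("DELETE FROM users_profile WHERE user_id = 1")
conn.execute("INSERT INTO users_profile (id, user_id) VALUES (3, 1)")
conn.commit()
```

In Django terms, the equivalent is deleting (or excluding from the dump) the objects that migrate/createsuperuser already recreated before running loaddata.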
The errors are caused by the superuser object you created in the database using the 'createsuperuser' command.
TL;DR
Just delete the unwanted record from django_site table.
Brief
I had the same problem because I used Sign-in with Google on my site. I tried it twice to see how to set up social login, and as a result the django_site table ended up with 2 records. Just delete the unwanted record from that table.

Spring Boot Switching from MySQL to PostgreSQL and getting unique constraint violations

I am in the process of switching databases for a Spring Boot project from MySQL to PostgreSQL. For the most part this hasn't been too painful, however I am finding that I am getting an exception I can't seem to sort out.
org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "r_i_UNIQUE"
Detail: Key (cola_id, colb_id)=(1234567, 12345) already exists.
The behaviour in MySQL when this occurred was to perform an update, which is what I need to occur in PostgreSQL as well.
Code wise, I search for an existing item by the cola_id and colb_id, the object is returned from my repository, is manipulated as required and then persisted using repository.saveAndFlush(existingItem);
It is on the saveAndFlush that I get the exception.
My assumption is that Spring is seeing this as a new object and trying to do an insert rather than an update, but I do not understand why this occurs, as it worked as expected with MySQL.
How does PostgreSQL handle updates using saveAndFlush, and how can I get an update to apply correctly?
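For what it's worth, the implicit update MySQL performed corresponds to an upsert; PostgreSQL's explicit form is INSERT ... ON CONFLICT ... DO UPDATE. A minimal sketch of those semantics, using SQLite (3.24+), which shares the same ON CONFLICT syntax; the column names come from the error message, and the qty column is hypothetical:

```python
import sqlite3

# Demonstrates upsert semantics: a plain INSERT on a duplicate
# (cola_id, colb_id) pair would raise the unique-constraint error from
# the question; ON CONFLICT ... DO UPDATE turns it into an update.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE item (
    cola_id INTEGER, colb_id INTEGER, qty INTEGER,
    UNIQUE (cola_id, colb_id))""")
conn.execute("INSERT INTO item VALUES (1234567, 12345, 1)")

conn.execute("""
    INSERT INTO item (cola_id, colb_id, qty) VALUES (1234567, 12345, 5)
    ON CONFLICT (cola_id, colb_id) DO UPDATE SET qty = excluded.qty
""")
conn.commit()
```

On the JPA side, Spring Data decides between insert and update based on whether it considers the entity new (typically from its ID or version field), so it is worth checking how the ID is populated after the database switch.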

MySQL Cluster 7.4.15 - Ndb_Restore Fail Because an Orphan Fragment

I want to know if it is possible to drop a table fragment that is preventing me from performing a restore with the NDB_RESTORE tool.
When I run the restore, it throws the following error:
Create table db_died_maestro/def/NDB$FKM_3194_0_mae_tipo_reg_evaluacion failed: 721: Schema object with given name already exists
Restore: Failed to restore table: db_died_maestro/def/NDB$FKM_3194_0_mae_tipo_reg_evaluacion ... Exiting
NDBT_ProgramExit: 1 - Failed
I had already dropped the DB_DIED_MAESTRO database before running the restore, but this fragment was not dropped along with the database.
I have checked that the fragment is in the database catalog using these queries:
select * from ndbinfo.operations_per_fragment
where fq_name like 'db_died_maestro%'
And this query:
select * from ndbinfo.memory_per_fragment
where fq_name like '%FKM_3194_0_mae_tipo_reg_evaluacion'
This fragment was created on a previous run of the NDB_RESTORE tool. Please help me.
The table is a foreign key 'mock' table (indicated by the name NDB$FKM prefix).
Foreign key mock tables are created transiently in some cases to implement the foreign_key_checks = 0 feature of MySQL. This feature requires storage engines to support unordered creation of tables with partially defined foreign key constraints, which can be arbitrarily enabled (without revalidation) at a later time.
Foreign key mock tables are normally entirely managed by the Ndb storage engine component of MySQL, and so should not be visible unless there has been a failure or bug of some kind.
If you can share information about activities occurring before this problem then that would help us understand how this happened and whether it can be avoided.
As a workaround it should be possible for you to use the ndb_drop_table utility to drop this table, before re-attempting the failing restore. You may have to escape the $ in the name passed as a command line argument from a shell. Probably you should check for any other NDB$FKM tables in a similar situation.
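On the escaping point: an unquoted $ is treated as a variable reference by POSIX shells, so the mock table's name must be single-quoted. A small sketch of building the command line safely (the ndb_drop_table options follow the answer above; verify them against your installation):

```python
import shlex

# The NDB$FKM table name contains '$', which an interactive shell would
# otherwise expand as a variable; shlex.quote single-quotes it so the
# name reaches ndb_drop_table intact.
table = "NDB$FKM_3194_0_mae_tipo_reg_evaluacion"
cmd = "ndb_drop_table -d db_died_maestro " + shlex.quote(table)
print(cmd)
```

The same single-quoting applies when typing the command by hand: 'NDB$FKM_...' rather than NDB$FKM_... .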

Magento re-index, cannot create table

I'm trying to re-index the category flat data, but I am always met with the same error:
There was a problem with reindexing process. Error Message: SQLSTATE[HY000]: General error: 1005 Can't create table 'xxx.catalog_category_flat_store_6' (errno: 121)
The table doesn't exist; there are tables with a _1 and a _7 suffix. Not sure if that makes a difference?
After running the query manually through phpMyAdmin, I am met with MySQL error 121. I've checked around, and this suggests that the names of the foreign keys being created already exist. I've listed all foreign keys in the DB right now, and they don't exist at all.
I've also tried running SHOW ENGINE INNODB STATUS on the DB for more information, but we don't have the rights to view that apparently.
After getting the privileges updated so we could run SHOW INNODB STATUS, we discovered an existing index that the new table was attempting to duplicate. This stemmed from us backing up an older version of the table being created. Deleting that backup copy allowed Magento to re-index properly and solved our problem.
Try logging the SQL commands and debugging what it's trying to do by executing them manually. During the index process there is normally a command that clears a table and another that recreates it.
Edit /magentoRoot/lib/Varien/Db/Adapter/Pdo/Mysql.php, change $_debug to true, and note the $_debugFile location (should be var/debug/pdo_mysql.log).
It's best to edit the file in vi, have a browser open to reindex JUST the category data, save the file in vi (:w!), run the indexer, then change the debug flag back to false.
Then go read the log. It may help.

Troubleshooting MySQLIntegrityConstraintViolationException

I have a closed-source upgrade application which migrates my database from an old format to a new format (creates new tables and migrates data from the old to new tables).
The application crashes with a MySQLIntegrityConstraintViolationException. It doesn't give me the name of the table with the primary key violation or the contents of the broken SQL query.
Is there any MySQL server option that I can switch to give me more troubleshooting information? Maybe the text of the failed query or the name of the primary key constraint which is violated?
You can enable the general query log: http://dev.mysql.com/doc/refman/5.1/en/query-log.html . That way it may be possible to see at which point the server stops processing queries.
You can also run the MySQL command SHOW PROCESSLIST to see which queries are being processed at that time.
Also have a look into all other application specific error logs.
A first try could be to disable foreign key checks during migration:
SET foreign_key_checks = 0;
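To illustrate what disabling constraint checks buys during a migration, here is an analogous sketch using SQLite's foreign-key pragma; MySQL's SET foreign_key_checks = 0 plays the same role for InnoDB:

```python
import sqlite3

# Autocommit mode, so the PRAGMA takes effect immediately rather than
# being ignored inside an open transaction.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE child (
    id INTEGER PRIMARY KEY,
    parent_id INTEGER REFERENCES parent(id))""")

# With enforcement off (like foreign_key_checks = 0), child rows can be
# loaded before their parents exist, which lets a migration copy tables
# in arbitrary order.
conn.execute("PRAGMA foreign_keys = OFF")
conn.execute("INSERT INTO child VALUES (1, 42)")  # parent 42 not loaded yet

# With enforcement on, the same kind of insert fails immediately.
conn.execute("PRAGMA foreign_keys = ON")
try:
    conn.execute("INSERT INTO child VALUES (2, 99)")
except sqlite3.IntegrityError as e:
    print("IntegrityError:", e)
```

The usual pattern is to disable checks, load all tables, then re-enable checks once the data is complete.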
A first guess would be that the old server allowed 0 as a primary key value, whilst the new one does not.