SLT Replicate View - Error: SQL0601N The name of the object to be created is identical to the existing name - slt

Situation: We want to do real-time ETL using SLT (SAP Landscape Transformation) Replication Server.
I learned that I have to define views in transaction LTRS.
So I tried to add a view to an existing table, as shown in the screenshot. Unfortunately I get the error: SQL0601N The name of the object to be created is identical to the existing name ...

Make sure that you use transaction LTRS before adding the table to the configuration in transaction LTRC.
Another cause could be that the table exists in the target database. So make sure that the target table does not exist yet.
While testing the issue I realized that transaction LTRS, where the error occurs, does not itself create a table, even though the error message suggests otherwise.

Related

Split Mysql table on date column and store result in different tables with dynamic name

My job looks like the first image where I am trying to read a Mysql table,
and split it based on the FROM_DATE field. The steps in the job are as follows:
Use tMap to create a column StringFromDate -> TalendDate.formatDate("yyyy-MM-dd",row1.FROM_DATE)
Connect tMapOutput to tFlowToIterate
Connect tFlowToIterate to tFixedFlowInput. The tFixedFlowInput component configuration is shown in the second image below
Connect tFixedFlowInput to tLogRow
Connect tLogRow to tMysqlOutput. The tMysqlOutput component setting is as shown in the third image.
The problem is that the tables are generated with only 1 row of data each. When I try to collect the data in a CSV file instead, the same setup works fine with the append option in tFileOutputDelimited.
Please, immediate help required.
The problem is caused by the tMysqlOutput setting 'Drop table if exists and create', which effectively recreates your table at each iteration. Thus the one row you end up with in your table is the one corresponding to the last iteration.
Try the setting 'Create table if not exists' instead.
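In plain SQL terms, a rough sketch of the difference between the two settings (table and column names here are made up for illustration, not what Talend actually generates):

-- 'Drop table if exists and create': every iteration wipes what earlier iterations wrote,
-- so only the row from the last iteration survives
DROP TABLE IF EXISTS split_2014_01_01;
CREATE TABLE split_2014_01_01 (from_date DATE, payload VARCHAR(255));
INSERT INTO split_2014_01_01 VALUES ('2014-01-01', 'row from this iteration');

-- 'Create table if not exists': the table is created once, later iterations only add rows
CREATE TABLE IF NOT EXISTS split_2014_01_01 (from_date DATE, payload VARCHAR(255));
INSERT INTO split_2014_01_01 VALUES ('2014-01-01', 'another row');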
Also, based on your follow-up question, you need to add an OnComponentOk link to tMysqlCommit (with 'Close connection' unchecked) after your tMysqlOutput in order to commit the inserts.

SLT Error: Replication not successful: The migration object has been deleted and must be recreated

We are trying to build an SLT (SAP Landscape Transformation) configuration which replicates one table "new3_asdf" and one view on top of this table: new3_asdf_view_id.
So in transaction LTRS I added the view to the table:
In LTRC I added the table new3_asdf to replication. Even though there is no 'X' in the Failed column, the replication does not work :((
The 'View Errors' button reveals the error / cause:
Migration object Z_NEW3_ASDF_035 has been deleted due to changes in table definition
How can I recreate it, as the help text suggests?
Source and destination databases are DB2 10.5
I have seen this error when you change the table structure in LTRS while the job is already replicating in LTRC. It appears to drop the target table, making your replication job fail because you no longer have a target. You have to have all your LTRS table configuration in place before adding the table in LTRC, or this message can appear.

Zend DB - Flush temporary tables

One part of my application issues a CREATE TEMPORARY TABLE statement, which throws an exception if the current connection already has that temporary table. Ideally, I would just add IF NOT EXISTS, but unfortunately I can't edit the code in that particular part of the application.
So, what would be the next best way to make sure temporary tables for the current connection are cleared? I tried using
$this->_connection->closeConnection();
but that also throws an exception on the first run.
I managed to flush tmp tables without exceptions by executing the following code:
$this->_connection->commit();           // finish any open transaction first
$this->_connection->closeConnection();  // closing the connection discards its temporary tables
$this->_connection->beginTransaction(); // reconnects lazily and starts a fresh transaction
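If the name of the temporary table is known, another option would be to clear it explicitly on the same connection before the failing CREATE runs; in MySQL that is a single statement (the table name below is just a placeholder):

-- hypothetical temporary table name; use whatever the application actually creates
DROP TEMPORARY TABLE IF EXISTS tmp_report_data;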

Adding net changes function to existing cdc tables

I have an existing table in the DB for which CDC was enabled with the parameter @supports_net_changes set to 0. Hence there is only one function for that table, i.e. the all-changes function fn_cdc_get_all_changes_dbo_.
How do I now enable the get_net_changes function for it? Do I have to drop the existing CDC capture instance and re-create it? I haven't been able to find conclusive help on this.
I have figured out the error. When disabling the existing CDC on the table, the name of the capture instance needs to be specified correctly. That was causing the failure, which is why I was unable to get net changes working.
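For reference, a minimal sketch of dropping and re-creating the capture instance with net changes enabled; the schema, table, and capture instance names are placeholders, and @supports_net_changes = 1 requires a primary key (or a unique index supplied via @index_name):

-- drop the existing capture instance (its name must be specified exactly)
EXEC sys.sp_cdc_disable_table
    @source_schema    = N'dbo',
    @source_name      = N'MyTable',
    @capture_instance = N'dbo_MyTable';

-- re-create it with net changes support, which also generates the
-- fn_cdc_get_net_changes_dbo_MyTable function
EXEC sys.sp_cdc_enable_table
    @source_schema        = N'dbo',
    @source_name          = N'MyTable',
    @role_name            = NULL,
    @supports_net_changes = 1;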

keeping the history of table in java [duplicate]

I need a sample program in Java for keeping the history of a table when a user inserts, updates, or deletes rows in that table. Can anybody help with this?
Thanks in advance.
If you are working with Hibernate you can use Envers to solve this problem.
You have two options for this:
Let the database handle this automatically using triggers. I don't know what database you're using but all of them support triggers that you can use for this.
Write code in your program that does something similar when inserting, updating and deleting a user.
Personally, I prefer the first option; it probably requires less maintenance. There may be multiple places where you update a user, and all of those places would need code that updates the history table as well. Besides, in the database you have more options for specifying required values and integrity constraints.
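As an illustration of the first option, here is a minimal MySQL-style sketch; the user table, its columns, and the history table are all hypothetical:

-- hypothetical history table mirroring the live table, plus audit columns
CREATE TABLE app_user_history (
    id         INT          NOT NULL,
    name       VARCHAR(100),
    action     VARCHAR(10)  NOT NULL,
    changed_at TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- copy the changed row into the history table after every update of app_user
CREATE TRIGGER app_user_after_update
AFTER UPDATE ON app_user
FOR EACH ROW
    INSERT INTO app_user_history (id, name, action)
    VALUES (NEW.id, NEW.name, 'UPDATE');

Similar triggers would be needed for INSERT and DELETE.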
Well, we normally have our own history tables which (mostly) look like the original table. Since most of our tables already have the creation date, modification date and the respective users, all we need to do is copy the dataset from the live table to the history table with a creation date of now().
We're using Hibernate so this could be done in an interceptor, but there may be other options as well, e.g. some database trigger executing a script, etc.
How is this a Java question?
This should be moved to the Database section.
You need to create a history table. Then create database triggers on the original table for "create or replace trigger before insert or update or delete on table for each row ...."
I think this can be achieved by creating a trigger on the SQL server.
You can create the trigger as follows:
Syntax:
CREATE TRIGGER trigger_name
{ BEFORE | AFTER } { INSERT | UPDATE | DELETE } ON table_name
FOR EACH ROW
    triggered_statement
You'll have to create two triggers: one for before the operation is performed and another for after it is performed.
Otherwise it can also be achieved through code, but that would be a bit tedious for the code to handle in the case of batch processes.
You should try using triggers. You can have a separate table (an exact replica of the table whose history you need to maintain).
This table will then be updated by a trigger after every insert/update/delete on your main table.
Then you can write your Java code to read these changes from the history table.
I think you can use the redo log of your underlying database to keep track of the operations performed. Is there any particular reason to do this in the program?
You could try creating, say, a List of the objects from the table (assuming you have objects for the data). That will allow you to loop through the list and compare it to the current data in the table, so you can see whether any changes occurred.
You could even create another list of objects containing an enum that gives you the action (DELETE, UPDATE, CREATE) along with the new data.
Haven't done this before, just an idea.
Like @Ashish mentioned, triggers can be used to insert into a separate table - this is commonly referred to as an audit trail or audit log table.
Columns generally defined in such an audit trail table are: action (insert, update, delete), table name (the table into which the row was inserted/deleted/updated), key (the primary key of that table, where needed), and timestamp (the time at which the action was done).
It is better to write the audit log after the entire transaction is through. If not, in case an exception is passed back to the application, a separate call to update the audit tables will be needed. Hope this helps.
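A minimal sketch of such an audit trail table with the columns described above (names and types are illustrative):

-- generic audit trail table; one row per change made to any audited table
CREATE TABLE audit_trail (
    action     VARCHAR(10)  NOT NULL,   -- 'INSERT', 'UPDATE' or 'DELETE'
    table_name VARCHAR(64)  NOT NULL,   -- table into which the change was made
    row_key    VARCHAR(64),             -- primary key of the affected row, where needed
    changed_at TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP   -- when the action was done
);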
If you are talking about DB tables you may use either triggers in the database or some extra code within your application, probably using aspects. If you are using JPA you may use entity listeners, or add some extra logic via an aspect on your DAO objects and apply that aspect to all DAOs which perform CRUD on entities that need to retain historical data. If your DAO object is a stateless bean you may use an Interceptor to achieve that; otherwise use Java proxy functionality, cglib, or another library that provides aspect functionality. If you are using Spring instead of EJB you may advise your DAOs in the application context config file.
Triggers are not what I would suggest; when I stored my audit data I used a file rather than the database. My suggestion is to create an "AUDIT" table and write Java code (with the help of servlets) to store the data in a file, in the DB, or in another DB as well ...