Hibernate deletion issue - MySQL

I'm trying to write a Java app that imports a data file. The process is as follows:
1. Create a transaction.
2. Delete all rows from the data table.
3. Load the data file into the data table.
4. Commit, or roll back if any errors were encountered.
The data loaded in step 3 is mostly the same as the data deleted in step 2.
The deletion is performed using the following:
DetachedCriteria criteria = DetachedCriteria.forClass(MyObject.class);
List<MyObject> myObjects = hibernateTemplate.findByCriteria(criteria);
hibernateTemplate.deleteAll(myObjects);
When I then load the data file, I get the following exception:
nested exception is org.hibernate.NonUniqueObjectException:
a different object with the same identifier value was already associated with the session:
The whole process needs to take place in a single transaction.
And I don't really want to have to compare the import file with the data table and then perform inserts/updates/deletes to get them into sync.
Any help would be appreciated.

Shortest answer: use session.merge().
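A minimal sketch of that route, assuming the question's MyObject entity and a hypothetical parseDataFile() helper that reads the import file:
for (MyObject imported : parseDataFile()) {
    // merge() copies the state onto the persistent instance with the same
    // identifier (loading it if needed), so it never collides with an
    // object already associated with the session.
    hibernateTemplate.merge(imported);
}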
Short answer: use plain JDBC; Hibernate is the wrong tool for this job.
Longer answer: see what your database tools support in this regard.
A solution could be to:
1. Rename the table to old_table.
2. Create a new empty table.
3. Import the data into the new table.
4. Drop old_table.
Your entire table would be locked in your use case, so this should not be a problem.
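A rough JDBC sketch of that sequence, assuming a javax.sql.DataSource named dataSource and the question's table called datatable (loadDataFile() is a hypothetical loader; note that MySQL DDL statements commit implicitly, so this path gives up the single-transaction requirement):
try (Connection con = dataSource.getConnection();
     Statement st = con.createStatement()) {
    st.execute("RENAME TABLE datatable TO datatable_old");   // 1. rename the current table
    st.execute("CREATE TABLE datatable LIKE datatable_old"); // 2. create a new empty copy
    loadDataFile(con, "datatable");                          // 3. import the data file (hypothetical helper)
    st.execute("DROP TABLE datatable_old");                  // 4. drop the old table
}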

First idea: did you try to flush() the Session after step #2?
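If that turns out to be the problem, the fix is a one-liner, using the question's own code:
hibernateTemplate.deleteAll(myObjects);
hibernateTemplate.flush(); // push the DELETEs to the database before loading the file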
Second idea: use the StatelessSession interface. You may have to extend HibernateTemplate for that since SPR-6202 and SPR-2495 are unresolved.
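A sketch of the StatelessSession route, bypassing HibernateTemplate entirely and assuming access to the underlying SessionFactory (parseDataFile() is again a hypothetical file parser):
StatelessSession session = sessionFactory.openStatelessSession();
Transaction tx = session.beginTransaction();
try {
    session.createQuery("delete from MyObject").executeUpdate(); // bulk delete
    for (MyObject imported : parseDataFile()) {
        session.insert(imported); // no first-level cache, so no NonUniqueObjectException
    }
    tx.commit();
} catch (RuntimeException e) {
    tx.rollback();
    throw e;
} finally {
    session.close();
}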

SSIS: How to get the number of updated and deleted rows in an audit?

Imagine that you want to save in a variable the number of rows that were updated or deleted in a table.
These are the steps that I did:
First, in the Control Flow I created a Data Flow Task.
Then, in the Data Flow, I created a source (in my case an Excel file), created two variables to count those rows (countDeleted and countUpdated), connected the variables to two Row Count transformations, and then connected my destination (OLE DB).
Now, in the Control Flow, what do I do?
Create an Execute SQL Task, or a Script Task? What is the best way to do it, and what code should I use?
Thanks for your help.
PS: I only have 4 weeks of SSIS experience, sorry for my noobieness :)
An OLE DB destination only inserts. It can't UPDATE or DELETE.
What's your logic for updating or deleting?
If you're just starting out and reading about doing things in SSIS, you will eventually find advice to use the OLE DB Command to perform row-by-row deletes and inserts.
In my opinion this is to be avoided. It does not scale (it works fine for small recordsets, then fails for large ones), and it is difficult to maintain parameter mappings in the OLE DB Command. Although you should try it anyway to familiarise yourself with it.
My advice is to load the Excel data into a staging table, perform batch DELETE and UPDATE statements to load the data, and use @@ROWCOUNT to capture the records updated.
For example:
The data flow you described can be used to load into a table called StagingTable.
Before your dataflow you should run an Execute SQL Task (This is in the Control Flow pane, not the Data Flow pane) that clears the staging table:
TRUNCATE TABLE StagingTable;
So first get that working: repeatedly running your package should clear the staging table and then load Excel into it without creating duplicates.
This in itself is a challenge as Excel is a terrible data interchange format.
Once you have that working, add an Execute SQL Task to the end that runs SQL to delete the records you want and capture the count. For example:
DELETE FROM MyFinalTable WHERE PrimaryKey IN (SELECT PrimaryKey FROM StagingTable);
SELECT @@ROWCOUNT;
Then you follow the instructions here to load that back to your SSIS variable
http://microsoft-ssis.blogspot.com/2011/03/rowcount-for-execute-sql-statement.html
What are you doing with this row count? Are you writing it to a logging table? Save yourself the bother of pulling it back into an SSIS variable and just write it directly:
DELETE FROM MyFinalTable WHERE PrimaryKey IN (SELECT PrimaryKey FROM StagingTable);
INSERT INTO LogTable(Table,Operation,Type)
SELECT 'MyFinalTable','Delete', @@ROWCOUNT;
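The UPDATE side follows the same pattern; a sketch only, where SomeColumn and the join condition are assumptions about your schema:
UPDATE f
SET f.SomeColumn = s.SomeColumn
FROM MyFinalTable f
INNER JOIN StagingTable s ON s.PrimaryKey = f.PrimaryKey;
INSERT INTO LogTable(Table,Operation,Type)
SELECT 'MyFinalTable','Update', @@ROWCOUNT;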
In my experience it is not a good idea to build convoluted logic into SSIS packages if you can instead do it in the database. Although it does depend on the person who has to eventually maintain it. Hopefully you can appreciate that this T-SQL approach is a more straightforward, code-based approach, as opposed to having to dig around in property pages and events and other places inside SSIS packages.
I assume that you're using an Execute SQL Task for the updates and deletes? As @Nick.McDermaid mentioned, using an OLE DB Command within a Data Flow presents various issues when performing DML. You can find the number of rows updated, inserted, or deleted in a table through an Execute SQL Task by using the ExecValueVariable property of this task. Set the variable that will hold the row count to this property and it will return the number of affected rows. Note that it will only return the number of rows affected by the last statement in the Execute SQL Task, regardless of whether there are batches (i.e. GO separators) in the component.

Solr: continuous migration from MySQL

This may sound like an opinion question, but it's actually a technical one: Is there a standard process for maintaining a simple data set?
What I mean is this: let's say all I have is a list of something (we'll say books). The primary storage engine is MySQL. I see that Solr has a data import handler. I understand that I can use this to pull in book records on a first run - is it possible to use this for continuous migration? If so, would it work as well for updating books that have already been pulled into Solr as it would for pulling in new book records?
Otherwise, if the data import handler isn't the standard way to do it, what other ways are there? Thoughts?
Thank you very much for the help!
If you want to update documents from within Solr, I believe you'll need to use the UpdateRequestHandler as opposed to the DataImportHandler. I've never had need to do this where I work, so I don't know all that much about it. You may find this link of interest: Uploading Data With Index Handlers.
If you want to update Solr with records that have newly been added to your MySQL database, you would use the DataImportHandler for a delta-import. Basically, how it works is you have some kind of field in MySQL that shows the new record is, well, new. If the record is new, Solr will import it. For example, where I work, we have an "updated" field that Solr uses to determine whether or not it should import that record. Here's a good link to visit: DataImportHandler
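For reference, a delta-import is configured in the DIH data-config.xml along these lines (a sketch only; the books table, the updated column, the field list, and the connection settings are all assumptions about your setup):
<dataConfig>
  <dataSource driver="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost/mydb" user="..." password="..."/>
  <document>
    <entity name="book" pk="id"
            query="SELECT id, title FROM books"
            deltaQuery="SELECT id FROM books WHERE updated &gt; '${dataimporter.last_index_time}'"
            deltaImportQuery="SELECT id, title FROM books WHERE id = '${dataimporter.delta.id}'">
      <field column="id" name="id"/>
      <field column="title" name="title"/>
    </entity>
  </document>
</dataConfig>
You then trigger it through the dataimport handler with command=delta-import instead of command=full-import.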
Your question looks similar to something we are doing, but with HBase (the Hadoop stack DB) rather than SQL. There we have the HBase Indexer, which, after mapping the DB to Solr, listens for new rows in HBase and then executes code to fetch those values from the DB and add them to Solr. I'm not sure whether an equivalent exists for SQL, but the concept is similar: in SQL, triggers can listen for inserts and updates, and on those events you can trigger something that performs the steps of adding the records to Solr continuously.

Creating log for each table in Entity Framework 4.1

In my database I have a log table for each table, and after each CRUD operation on a table I update the corresponding log table.
Is there some generic way in EF 4.1 (using DbContext) to perform the insertion of records into each log table? Keep in mind that both ID columns are identity columns.
Unfortunately, there is no listener mechanism in EF. Alternatively, how about using a general AOP tool such as Spring.NET or PostSharp, so that you can intercept the insert logic and store the log in the database or a file?
Overriding DbContext.SaveChanges seems to be the accepted solution for what you want.
http://msdn.microsoft.com/en-us/library/cc716714.aspx
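A rough sketch of that idea in EF 4.1 (MyDbContext and the WriteLogEntry helper are assumptions; the point is to snapshot the change set before saving and write the log rows afterwards, once identity values exist):
// using System.Data; using System.Data.Entity; using System.Linq;
public class MyDbContext : DbContext
{
    public override int SaveChanges()
    {
        // Snapshot the pending changes and their states before saving,
        // because each entry's State resets to Unchanged afterwards.
        var changes = ChangeTracker.Entries()
            .Where(e => e.State == EntityState.Added
                     || e.State == EntityState.Modified
                     || e.State == EntityState.Deleted)
            .Select(e => new { e.Entity, Operation = e.State })
            .ToList();

        int result = base.SaveChanges(); // identity columns are populated here

        foreach (var change in changes)
            WriteLogEntry(change.Entity, change.Operation); // hypothetical log helper

        return result;
    }
}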
Overriding DbContext.SaveChanges does give you the tracking functionality, but there is a hurdle in the case of a newly inserted row: inside the override you will not yet have the value of the auto-identity column.

Entity Framework 4.1 Custom Database Initializer strategy

I would like to implement a custom database initialization strategy so that I can:
generate the database if it does not exist
if the model changes, create only the new tables
if the model changes, create only the new fields, without dropping the table and losing the data.
Thanks in advance
You need to implement the IDatabaseInitializer interface, e.g.:
public class MyInitializer : IDatabaseInitializer<MyDbContext>
{
    public void InitializeDatabase(MyDbContext context)
    {
        // your logic here
    }
}
And then set your initializer at your application startup:
Database.SetInitializer<MyDbContext>(new MyInitializer());
Here's an example
You will have to manually execute commands to alter the database, going through the underlying ObjectContext, since DbContext does not expose it directly:
((IObjectContextAdapter)context).ObjectContext.ExecuteStoreCommand("ALTER TABLE dbo.MyTable ADD NewColumn VARCHAR(20) NULL");
You can use a tool like SQL Compare to script changes.
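Putting those pieces together, a minimal initializer covering the "create if not exists" case could look like this (a sketch only; the ALTER statement is just an illustration, and context.Database.ExecuteSqlCommand is the DbContext-native equivalent of ExecuteStoreCommand):
public class MyInitializer : IDatabaseInitializer<MyDbContext>
{
    public void InitializeDatabase(MyDbContext context)
    {
        if (!context.Database.Exists())
        {
            context.Database.Create(); // generate the database if it does not exist
            return;
        }
        // The database exists: alter it in place instead of dropping it.
        context.Database.ExecuteSqlCommand(
            "ALTER TABLE dbo.MyTable ADD NewColumn VARCHAR(20) NULL");
    }
}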
There is a reason why this doesn't exist yet. It is very complex, and moreover the IDatabaseInitializer interface is not well prepared for it (there is no way to make such initialization database-agnostic). Your question is "too broad" to be answered to your satisfaction. Judging by your reaction to @Eranga's correct answer, you simply expect that somebody will tell you step by step how to do it, but we will not - that would mean writing the initializer for you.
What would you need in order to do what you want?
You must have very good knowledge of SQL Server. You must know how SQL Server stores information about databases, tables, columns, and relations - that is, you must understand the sys views and know how to query them to get data about the current database structure.
You must have very good knowledge of EF. You must know how EF stores mapping information, and you must be able to explore the metadata to get information about the expected tables, columns, and relations.
Once you have the old database description and the new database description, you must be able to write code that will correctly work out the changes and create SQL DDL commands for altering your database. Even though this looks like the simplest part of the whole process, it is actually the hardest one, because there are many other internal rules in SQL Server which your commands cannot violate. Sometimes you really need to drop a table to make your changes, and if you don't want to lose data you must first push it to a temporary table and, after recreating the table, push it back. Sometimes you make changes to constraints which can require temporarily turning constraints off, etc. There is a good reason why the tools which do this at the SQL level (comparing two databases) are probably all commercial.
Even the ADO.NET team hasn't implemented this, and they will not implement it in the future. Instead, they are working on something called migrations.
Edit:
It is true that ObjectContext can return the script for database creation - that is exactly what the default initializers use. But how would that help you? Are you going to parse the script to see what changed? Are you going to execute it on another connection and use the same code as for the current database to inspect its structure?
Yes, you can create a new database, move the data from the old database to the new one, delete the old one, and rename the new one, but that is the most stupid solution you can imagine, and no database administrator will ever allow it. Even this solution still requires an analysis of the changes to create correct data-transfer scripts.
Automatic upgrade is the wrong way. You should always prepare the upgrade script manually with the help of some tools, test it, and after that execute it manually or as part of an installation script or package. You must also back up your database before making any changes.
The best way to achieve this is probably with migrations:
http://nuget.org/List/Packages/EntityFramework.SqlMigrations
Good blog posts here and here.

When a new row in database is added, an external command line program must be invoked

Is it possible for MySQL database to invoke an external exe file when a new row is added to one of the tables in the database?
I need to monitor the changes in the database, so when a relevant change is made, I need to do some batch jobs outside the database.
Chad Birch has a good idea with using MySQL triggers and a user-defined function. You can find out more in the MySQL CREATE TRIGGER Syntax reference.
But are you sure that you need to call an executable right away when the row is inserted? That method seems prone to failure, because MySQL might spawn multiple instances of the executable at the same time. If your executable fails, there will be no record of which rows have been processed and which have not. If MySQL waits for your executable to finish, inserting rows might be very slow. Also, if Chad Birch is right, then you will have to recompile MySQL, so it sounds difficult.
Instead of calling the executable directly from MySQL, I would use triggers to simply record the fact that a row got INSERTED or UPDATED: record that information in the database, either with new columns in your existing tables or with a brand new table called, say, database_changes. Then make an external program that regularly reads the information from the database, processes it, and marks it as done.
Your specific solution will depend on what parameters the external program actually needs.
If your external program needs to know which row was inserted, then your solution could be like this: Make a new table called database_changes with fields date, table_name, and row_id, and for all the other tables, make a trigger like this:
DELIMITER $$
CREATE TRIGGER `my_trigger`
AFTER INSERT ON `table_name`
FOR EACH ROW BEGIN
  INSERT INTO `database_changes` (`date`, `table_name`, `row_id`)
  VALUES (NOW(), 'table_name', NEW.id);
END$$
DELIMITER ;
Then your batch script can do something like this:
1. Select the first row in the database_changes table.
2. Process it.
3. Remove it.
Repeat steps 1-3 until database_changes is empty.
With this approach, you can have more control over when and how the data gets processed, and you can easily check to see whether the data actually got processed (just check to see if the database_changes table is empty).
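A sketch of such a batch script in Java/JDBC (the jdbcUrl, user, and password settings and the processRow() job are hypothetical; the table and columns follow the trigger above):
// java.sql.* imports assumed
try (Connection con = DriverManager.getConnection(jdbcUrl, user, password)) {
    while (true) {
        String table;
        long rowId;
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT table_name, row_id FROM database_changes ORDER BY date LIMIT 1")) {
            if (!rs.next()) {
                break; // database_changes is empty, so we're done
            }
            table = rs.getString("table_name"); // step 1: oldest pending change
            rowId = rs.getLong("row_id");
        }
        processRow(table, rowId);               // step 2: hypothetical batch job
        try (PreparedStatement del = con.prepareStatement(
                "DELETE FROM database_changes WHERE table_name = ? AND row_id = ?")) {
            del.setString(1, table);
            del.setLong(2, rowId);
            del.executeUpdate();                // step 3: remove the processed row
        }
    }
}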
You could do what replication does: hang off the binary log. Set up your server as a master server and, instead of adding a slave server, run mysqlbinlog. You'll get a stream of every command that modifies your database.
Or step in between the client and server: check MySQL Proxy. You point it to your server and point your client(s) to the proxy. It lets you interpose Lua scripts to monitor, analyze, or transform any SQL command.
I think it's going to require adding a User-Defined Function, which I believe requires recompilation:
MySQL FAQ - Triggers: Can triggers call an external application through a UDF?
I think it's really a MUCH better idea to have some external process poll the table for changes and execute the external program - you could also have a column which contains the status of this external program's run (e.g. "pending", "failed", "success") - and just select rows where that column is "pending".
It depends how soon the batch job needs to run. If it's something which needs to run "sooner or later", and can fail and need to be retried, definitely have an app polling the table and running the jobs as necessary.