I already created MySQL triggers that log the actions before inserting, updating and deleting the mapped entities, but that only works inside MySQL, so I think I need an "application-level trigger" using the @PostPersist, @PostUpdate and @PostRemove annotations.
So, when an entity, e.g. Category, gets persisted, a method is invoked that inserts a row into a log table with the following SQL:
"INSERT INTO log (date_hour, table, id_tuple, user) " +
"VALUES (NOW(), 'category', " + id + ", '" +
FacesContext.getCurrentInstance().getExternalContext().getRemoteUser() + "')";
I did exactly that, using createNativeQuery and then query.executeUpdate(), but nothing happened.
What's the best approach for doing what I want? Note that I'm using EclipseLink.
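For reference, here is a minimal sketch of the callback-based approach the question describes. It assumes an entity listener registered on Category and a getId() accessor on it; how the EntityManager is obtained inside the listener depends on the container, so treat that wiring as an assumption.

import javax.faces.context.FacesContext;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.PostPersist;
import javax.persistence.PostRemove;
import javax.persistence.PostUpdate;

// Registered on the entity with @EntityListeners(CategoryAuditListener.class).
public class CategoryAuditListener {

    // Injection into entity listeners needs JPA 2.1 / CDI support; in older
    // containers the EntityManager has to be obtained some other way.
    @PersistenceContext
    private EntityManager em;

    @PostPersist
    @PostUpdate
    @PostRemove
    public void audit(Category category) {
        String user = FacesContext.getCurrentInstance()
                                  .getExternalContext()
                                  .getRemoteUser();
        // Note: executeUpdate() only takes effect inside an active transaction,
        // and `table` must be backquoted because it is a reserved word in MySQL.
        em.createNativeQuery(
                "INSERT INTO log (date_hour, `table`, id_tuple, user) "
              + "VALUES (NOW(), 'category', ?, ?)")
          .setParameter(1, category.getId())
          .setParameter(2, user)
          .executeUpdate();
    }
}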
You can log all changes to an entity type with little effort using EclipseLink's History policy: http://wiki.eclipse.org/EclipseLink/Examples/JPA/History
Assuming you are using CDI, you can create an interceptor as described in: Oracle Tutorial CDI Interceptor
In this interceptor you can perform your insert into the log table.
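A rough sketch of such an interceptor, assuming a custom @Audited binding annotation (the names are illustrative; the full wiring, including enabling the interceptor in beans.xml, is in the linked tutorial):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import javax.interceptor.AroundInvoke;
import javax.interceptor.Interceptor;
import javax.interceptor.InterceptorBinding;
import javax.interceptor.InvocationContext;

@InterceptorBinding
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD})
@interface Audited {}

@Audited
@Interceptor
public class AuditInterceptor {

    @AroundInvoke
    public Object logCall(InvocationContext ctx) throws Exception {
        Object result = ctx.proceed(); // run the intercepted business method first
        // After the call has succeeded, insert the row into the log table here,
        // e.g. through an injected EntityManager and a native INSERT.
        return result;
    }
}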
But keep in mind that logging can slow an application down considerably. Consider using log levels so that you normally log only errors instead of everything.
I tried executing the query below
"INSERT INTO " + schema + "." + fileName+" '(' id,name ')' VALUES (3,'abc')")
I am not sure whether Calcite supports UPSERT or not.
But I did look at the reference documentation, which lists the different query types.
If it's not possible using Apache Calcite, then please let me know what other libraries I can use to perform UPSERT into a CSV file.
Calcite doesn't currently support executing UPSERT statements. There has been recent discussion of this on the project mailing list so it's possible this will be supported in the future.
I have been getting this annoying exception while trying to create a native query with my entity manager. The full error message is:
java.lang.IllegalStateException: During synchronization a new object was found through a relationship that was not marked cascade PERSIST: com.model.OneToManyEntity2#61f3b3b.
at org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork.discoverUnregisteredNewObjects(RepeatableWriteUnitOfWork.java:313)
at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.calculateChanges(UnitOfWorkImpl.java:723)
at org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork.writeChanges(RepeatableWriteUnitOfWork.java:441)
at org.eclipse.persistence.internal.jpa.EntityManagerImpl.flush(EntityManagerImpl.java:874)
at org.eclipse.persistence.internal.jpa.QueryImpl.performPreQueryFlush(QueryImpl.java:967)
at org.eclipse.persistence.internal.jpa.QueryImpl.executeReadQuery(QueryImpl.java:207)
at org.eclipse.persistence.internal.jpa.QueryImpl.getSingleResult(QueryImpl.java:521)
at org.eclipse.persistence.internal.jpa.EJBQueryImpl.getSingleResult(EJBQueryImpl.java:400)
The actual code that triggers the error is:
Query query = entityManager.createNativeQuery(
        "SELECT MAX(CAST(SUBSTRING_INDEX(RecordID,'-',-1) AS DECIMAL)) FROM `QueriedEntityTable`");
Object result = query.getSingleResult();
String recordID = (result == null ? null : result.toString());
This is being executed within an EntityTransaction in the doTransaction part. What's getting me, though, is that this is the first code executed within the doTransaction method, simplified below:
updateOneToManyEntity1();
updateOneToManyEntity2();
entityManager.merge(parentEntity);
The entity it has a problem with, OneToManyEntity1, isn't even the table I'm trying to query. I'm not doing any persist or merge up to this point either, so I'm also not sure what is supposedly causing it to be out of sync. The only database work done before this code executes is pulling in data, not changing anything. The foreign keys are properly set up in the database.
I'm able to get rid of this error by doing as it says and marking these relationships as CascadeType.PERSIST, but then I get a MySQLIntegrityConstraintViolationException on the query.getSingleResult() line. My logs show that it's doing some INSERT queries right before this, so it looks like it's reaching the entityManager.merge part of my doTransaction method, but the error and call stack point to a completely different part of the code.
Using EclipseLink (2.6.1), GlassFish 4, and MySQL. The EntityManager is using RESOURCE_LOCAL with all the necessary classes listed under the persistence-unit tag, and exclude-unlisted-classes is set to false.
Edit: Some more info as I'm trying to work through this. If I put a breakpoint at the beginning of the transaction and then execute entityManager.clear() through IntelliJ's "Evaluate Expression" tool, everything works fine, at least the first time through. Without it, I get an error as it tries to insert empty objects into the table.
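In code, the experiment described above looks roughly like this (a sketch of the diagnostic, not a recommended fix):

// Equivalent of the "Evaluate Expression" step: detach everything the
// persistence context is tracking, so the pre-query flush has nothing to write.
entityManager.clear();

Query query = entityManager.createNativeQuery(
        "SELECT MAX(CAST(SUBSTRING_INDEX(RecordID,'-',-1) AS DECIMAL)) FROM `QueriedEntityTable`");
Object result = query.getSingleResult();
String recordID = (result == null ? null : result.toString());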
Edit #2: I converted the native query to the Criteria API, which let me get through my code and find where a null object was unintentionally being added to my entity list. I'm still confused as to why the entity manager holds on to these bad objects to the point that creating a native query breaks because it's still trying to insert the bad data. Is this something I'd need to call EntityManager.clear() for before each query? Or am I supposed to call it when there is an error in the doTransaction method?
So after reworking the code and setting this aside, I stumbled on at least part of the answer to my question. My issue was caused by the object being persisted before the transaction started. When I entered my transaction, it first tried to insert/update data from my entity objects and threw an error since I hadn't set values for most of the non-null columns. I believe this is the reason I was getting the cascade errors, and I'm positive it's the source of the random INSERT queries I saw being fired off at the beginning of my transaction. Hope this helps someone else avoid a lot of trouble.
Recently I've been working on an application which allows users to query entities from a Context Broker, Cosmos and so on. One feature of the application is to initialize an entity which will be used by a connected object to store data.
The creation works fine, but I have a little problem.
As said in the documentation for the Orion Context Broker, when an entity already exists the APPEND action is interpreted as an UPDATE (I sincerely don't understand why). So here's a scenario: the user has an entity called Room1, and the sensors send it their data, which gets stored. One day, he wants to create a new entity, but makes a mistake and calls it Room1. All the current data from Room1 will be reset to the default values I put in my application.
Here is my question: is there a way to check whether the entity already exists other than doing a manual query in the application before creation (which takes much longer to process)?
Thank you for reading my question, and have a good day.
Guillaume Jourdain
Currently (Orion 0.22.0), the only way is the one you mention: check whether the entity already exists by doing a query.
The reason to implement update this way is that for many use cases the desired behaviour is exactly the opposite: the client doesn't want to get an error if the entity doesn't previously exist, and the "append or update" semantics work fine. Unfortunately, making one use case happy makes the other sad :(
The best solution would be to make this behaviour configurable. We are now defining a new version of the FIWARE NGSI API that Orion implements, including a URL option to set the behaviour, e.g. ?options=append to select "strict append" semantics (as opposed to "append or update" semantics).
EDIT: Orion 0.24.0 introduces the APPEND_STRICT action, which returns an error if the attribute to add already exists.
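For illustration, a sketch of an NGSIv1 updateContext request using APPEND_STRICT. The localhost:1026 endpoint, the Room1 entity and the temperature attribute are assumptions; the point is only the updateAction value.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class AppendStrictExample {
    public static void main(String[] args) throws Exception {
        // With APPEND_STRICT, adding an attribute that already exists is rejected
        // instead of silently updating it.
        String payload = "{"
                + "\"contextElements\": [{"
                + "\"type\": \"Room\", \"isPattern\": \"false\", \"id\": \"Room1\","
                + "\"attributes\": [{\"name\": \"temperature\", \"type\": \"float\", \"value\": \"23\"}]"
                + "}],"
                + "\"updateAction\": \"APPEND_STRICT\""
                + "}";

        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:1026/v1/updateContext").openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setRequestProperty("Accept", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(payload.getBytes("UTF-8"));
        }

        // Orion reports the "already exists" case in the response; check the body
        // (and the HTTP status) to detect that Room1 was already there.
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}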
I'm getting an OptimisticLockException when I try to update a managed entity EJB.
The EJB was fetched via:
port = entityManager.find(PortEntity.class, portID);
and then the EJB and the entityManager were passed to a SAX ContentHandler so that the entity can be updated in the endDocument() method. The ContentHandler extracts the time zone information from the data returned by Google's Time Zone API server(s).
The code snippet is:
entityManager.refresh(port);
if (entityManager.contains(port))
log.info("Contained: " + port);
else
log.info("NOT Contained: " + port);
port.setTimezone(toTimezone);
entityManager.flush(); // <-- Line 70
And the log file shows:
13:48:05,568 INFO [GeotimezoneHandler] Status: OK
13:48:05,569 INFO [GeotimezoneHandler] Raw offset: 3600.0000000
13:48:05,570 INFO [GeotimezoneHandler] DST offset: 0.0000000
13:48:05,570 INFO [GeotimezoneHandler] Timezone ID: Europe/Madrid
13:48:05,571 INFO [GeotimezoneHandler] Timezone Name: Central European Standard Time
13:48:05,577 INFO [GeotimezoneHandler] Contained: SeaPort[id=ESBCN, name=Barcelona]
13:48:05,591 ERROR [GeotimezoneHandler] Updating curise: javax.persistence.OptimisticLockException: org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [com.nutrastat.voyager.entity.PortEntity$Sea#ESBCN]
at org.hibernate.ejb.AbstractEntityManagerImpl.wrapStaleStateException(AbstractEntityManagerImpl.java:1390) [hibernate-entitymanager-4.0.1.Final.jar:4.0.1.Final]
at org.hibernate.ejb.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1308) [hibernate-entitymanager-4.0.1.Final.jar:4.0.1.Final]
at org.hibernate.ejb.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1289) [hibernate-entitymanager-4.0.1.Final.jar:4.0.1.Final]
at org.hibernate.ejb.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1295) [hibernate-entitymanager-4.0.1.Final.jar:4.0.1.Final]
at org.hibernate.ejb.AbstractEntityManagerImpl.flush(AbstractEntityManagerImpl.java:976) [hibernate-entitymanager-4.0.1.Final.jar:4.0.1.Final]
at org.jboss.as.jpa.container.AbstractEntityManager.flush(AbstractEntityManager.java:439) [jboss-as-jpa-7.1.1.Final.jar:7.1.1.Final]
at com.nutrastat.voyager.util.GeotimezoneHandler.endDocument(GeotimezoneHandler.java:70) [voyager-lib.jar:]
So if the entityManager contains the EJB, why do I get the exception after modifying it?
As always many thanks for your help
Steve
P.S.
I have looked at this thread, and the MySQL database is using InnoDB, but I don't know how to execute the SELECT @@tx_isolation; command from within my code.
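For what it's worth, that statement can be run from code as a plain native query (a sketch; the surrounding variable names are mine):

// Ask MySQL which transaction isolation level this connection is using.
Object isolation = entityManager
        .createNativeQuery("SELECT @@tx_isolation")
        .getSingleResult();
log.info("Transaction isolation: " + isolation);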
After two days of research I finally found what the problem was.
The entity class inherited a javax.persistence @Version field from its superclass. I had also hand-injected data into the table, and because the version column was defined as allowing nulls I had not bothered to insert a value for it, but one was needed.
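For anyone hitting the same thing, the setup was roughly this (class and field names are illustrative):

import javax.persistence.MappedSuperclass;
import javax.persistence.Version;

@MappedSuperclass
public abstract class BaseEntity {

    // Hibernate uses this column for optimistic locking. Rows inserted by hand
    // with a NULL version look stale (or unsaved) to Hibernate, so the next
    // flush fails with an OptimisticLockException / StaleObjectStateException.
    @Version
    private Long version;
}

Backfilling the hand-inserted rows with an initial version value (for example 0) resolved the exception.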
In my application I am using the JPA entity manager to persist and fetch data.
em.executeQuery("select * from file_calender_mapping where start_date between :start and :end");
em.setParameter("start",startDate)//startDate is an date object
em.setParameter("end",endDate)//endDate is an date object
List fmlist=em.execute();
The problem is basically this:
"select * from file_calender_mapping where start_date between start and end"
When I pass dates such as start = "2011-08-03 05:08:00" and end = "2011-08-04 06:08:00",
MySQL returns one row with start time = "2011-08-03 05:30:00", which is good. But
when my application executes the same query it does not return any rows. What I have actually seen is that my application returns values for two different dates, but not for the same date with different times; that's the main problem.
Another thing: the "start" column of the "file_calender_mapping" table has datatype "timestamp".
So I was thinking there may be some problem with JPA/Hibernate.
You can try to specify the exact types of parameters as follows:
em.setParameter("start", startDate, TemporalType.TIMESTAMP);
em.setParameter("end",endDate, TemporalType.TIMESTAMP);
I have the strong feeling that you're confusing EntityManager.createQuery() with EntityManager.createNativeQuery(), and that you're somehow catching all the exceptions, which is why you don't receive anything back.
I'm assuming that because I don't think you have a class named file_calender_mapping.
Edit:
The documentation will explain it better than I do, but a JPA QL query is translated to the database's native SQL using the mapping, while a native query is sent to the database as-is.
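To make the distinction concrete (names assumed, as above):

// JPA QL: written against the mapped entity and its fields; the provider
// translates it to the database's SQL using the mapping.
em.createQuery("SELECT m FROM FileCalenderMapping m", FileCalenderMapping.class);

// Native query: sent to MySQL as-is, written against the table and its columns.
em.createNativeQuery("SELECT * FROM file_calender_mapping", FileCalenderMapping.class);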
Again, I suggest you read the documentation; it's quite useful.