I've been trying to get an idBag working in Hibernate. I've been trying it against a MySQL DB and an HSQLDB, and I have also tried a couple of different versions of Hibernate. Hibernate 4 and Hibernate 3 give slightly different information as to the cause of the error, but both boil down to the same thing: a ClassCastException.
I've hosted all my code in a public repo on Bitbucket, and everything is there, including the DDL for creating a MySQL or HSQLDB database with the relevant tables. My HSQLDB version is 2.2.9 and my MySQL version is 5.1.66 - my preference is to get it working on MySQL.
Git Clone Command:
git clone https://pphi@bitbucket.org/pphi/idbag.git
Web Page:
https://pphi@bitbucket.org/pphi/idbag.git
When using Hibernate 4.0.1 I get this stack trace:
Hibernate: insert into idBagTest.Team (nickname, mascot) values (?, ?)
Hibernate: insert into idBagTest.Famous_Fan (first_name, last_name) values (?, ?)
Hibernate: insert into idBagTest.famous_fan_team (Team_id, team_fan_id, Famous_Fan_id ) values (?, ?, ?)
Exception in thread "main" java.lang.ClassCastException: org.hibernate.id.IdentifierGeneratorHelper$2 cannot be cast to java.lang.Long
at org.hibernate.type.descriptor.java.LongTypeDescriptor.unwrap(LongTypeDescriptor.java:36)
at org.hibernate.type.descriptor.sql.BigIntTypeDescriptor$1.doBind(BigIntTypeDescriptor.java:57)
at org.hibernate.type.descriptor.sql.BasicBinder.bind(BasicBinder.java:92)
at org.hibernate.type.AbstractStandardBasicType.nullSafeSet(AbstractStandardBasicType.java:280)
at org.hibernate.type.AbstractStandardBasicType.nullSafeSet(AbstractStandardBasicType.java:275)
at org.hibernate.persister.collection.AbstractCollectionPersister.writeIdentifier(AbstractCollectionPersister.java:919)
at org.hibernate.persister.collection.AbstractCollectionPersister.recreate(AbstractCollectionPersister.java:1252)
at org.hibernate.action.internal.CollectionRecreateAction.execute(CollectionRecreateAction.java:58)
at org.hibernate.engine.spi.ActionQueue.execute(ActionQueue.java:362)
at org.hibernate.engine.spi.ActionQueue.executeActions(ActionQueue.java:354)
at org.hibernate.engine.spi.ActionQueue.executeActions(ActionQueue.java:279)
at org.hibernate.event.internal.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:326)
at org.hibernate.event.internal.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:52)
at org.hibernate.internal.SessionImpl.flush(SessionImpl.java:1213)
at org.hibernate.internal.SessionImpl.managedFlush(SessionImpl.java:402)
at org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction.beforeTransactionCommit(JdbcTransaction.java:101)
at org.hibernate.engine.transaction.spi.AbstractTransactionImpl.commit(AbstractTransactionImpl.java:175)
at com.intertech.dao.TeamDaoHibernateImpl.saveTeam(TeamDaoHibernateImpl.java:19)
at com.intertech.Main.main(Main.java:22)
When using Hibernate 3.1 I get this slightly different stack trace:
Hibernate: insert into Team (id, nickname, mascot) values (null, ?, ?)
Hibernate: insert into Famous_Fan (id, first_name, last_name) values (null, ?, ?)
Hibernate: insert into famous_fan_team (Team_id, team_fan_id, Famous_Fan_id ) values (?, ?, ?)
5120 [main] INFO org.hibernate.type.LongType - could not bind value 'POST_INSERT_INDICATOR' to parameter: 2; org.hibernate.id.IdentifierGeneratorFactory$2 cannot be cast to java.lang.Long
Exception in thread "main" java.lang.ClassCastException: org.hibernate.id.IdentifierGeneratorFactory$2 cannot be cast to java.lang.Long
at org.hibernate.type.LongType.set(LongType.java:65)
at org.hibernate.type.NullableType.nullSafeSet(NullableType.java:154)
at org.hibernate.type.NullableType.nullSafeSet(NullableType.java:136)
at org.hibernate.persister.collection.AbstractCollectionPersister.writeIdentifier(AbstractCollectionPersister.java:829)
at org.hibernate.persister.collection.AbstractCollectionPersister.recreate(AbstractCollectionPersister.java:1160)
at org.hibernate.action.CollectionRecreateAction.execute(CollectionRecreateAction.java:58)
at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:279)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:263)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:171)
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:50)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1027)
at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:365)
at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:137)
at com.intertech.dao.TeamDaoHibernateImpl.saveTeam(TeamDaoHibernateImpl.java:19)
at com.intertech.Main.main(Main.java:22)
https://hibernate.onjira.com/browse/HHH-397
The use of identity as the generator class in an idBag is apparently not supported when using MySQL and Hibernate. I replaced the generator class with increment and got this:
Hibernate: insert into idBagTest.Team (nickname, mascot) values (?, ?)
Hibernate: insert into idBagTest.Famous_Fan (first_name, last_name) values (?, ?)
Hibernate: select max(team_fan_id) from idBagTest.Famous_Fan_Team
Hibernate: insert into idBagTest.Famous_Fan_Team (Team_id, team_fan_id, Famous_Fan_id ) values (?, ?, ?)
The repository has been updated with the working code.
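For reference, here is a minimal annotation-based sketch of the kind of mapping that ends up working. The FamousFan class and the famousFans field are assumptions based on the table and column names in the log above, and the actual repository may map this via hbm.xml instead:

import java.util.ArrayList;
import java.util.List;
import javax.persistence.*;
import org.hibernate.annotations.CollectionId;
import org.hibernate.annotations.GenericGenerator;
import org.hibernate.annotations.Type;

@Entity
@Table(name = "Team")
public class Team {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String nickname;
    private String mascot;

    // idBag: the join table gets its own surrogate key (team_fan_id).
    // The collection id uses the increment generator, since identity
    // is not supported here (HHH-397).
    @ManyToMany
    @JoinTable(name = "famous_fan_team",
            joinColumns = @JoinColumn(name = "Team_id"),
            inverseJoinColumns = @JoinColumn(name = "Famous_Fan_id"))
    @GenericGenerator(name = "fanTeamIdGen", strategy = "increment")
    @CollectionId(columns = @Column(name = "team_fan_id"),
            generator = "fanTeamIdGen",
            type = @Type(type = "long"))
    private List<FamousFan> famousFans = new ArrayList<FamousFan>();
}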
I came across the same error.
After some research I came to the conclusion that it depends on two things:
which DB you are using, and
which @GenericGenerator strategy you are using.
I was able to resolve the issue with the following configuration:
DB - MySQL
Strategy changed from native to increment
My code looks like:
@ElementCollection
@JoinTable(name="USERS_ADDRESSES", joinColumns=@JoinColumn(name="USER_ID"))
@GenericGenerator(name="hilogen", strategy="increment")
@CollectionId(columns={@Column(name="ADDR_ID")}, generator="hilogen", type=@Type(type="long"))
private List addressList = new ArrayList();
Hope this helps.
Thanks,
Sagar Vyas
Related
I have this mapper:
<insert id="insertBatch" parameterType="java.util.Set">
<foreach collection="filterParameterEntitySet" item="item" separator=";">
INSERT INTO filter_parameter
(
filter_key,
filter_value,
filter_id
)
VALUES
(
#{item.filterKey},
#{item.filterValue},
#{item.filter.id}
)
ON CONFLICT DO NOTHING
</foreach>
</insert>
Whenever I execute it, it throws a BadSqlGrammarException:
Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.jdbc.BadSqlGrammarException:
### Error updating database. Cause: java.sql.SQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'CONFLICT DO NOTHING
;
INSERT INTO filter_parameter
f' at line 13
What could be the problem? I can't figure out where the syntax error lies!
Try this:
<insert id="insertBatch" parameterType="java.util.Set">
INSERT INTO filter_parameter
(
filter_key,
filter_value,
filter_id
) VALUES
<foreach collection="filterParameterEntitySet" item="item" separator=",">
(
#{item.filterKey},
#{item.filterValue},
#{item.filter.id}
)
</foreach>
ON CONFLICT DO NOTHING
</insert>
So basically, there were two mistakes in the query:
The separator: instead of separator=";"
I should use separator=","
ON CONFLICT DO NOTHING is not valid in MySQL (it is PostgreSQL syntax) and I was using MySQL, so I removed it and added the keyword IGNORE after INSERT to preserve the same functionality.
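For completeness, a rough annotation-based sketch of the final MySQL-friendly form of the statement (INSERT IGNORE plus the comma separator). The mapper interface and the items parameter name are made up for illustration:

import java.util.Set;
import org.apache.ibatis.annotations.Insert;
import org.apache.ibatis.annotations.Param;

public interface FilterParameterMapper {

    // One multi-row VALUES list built by <foreach> with a comma separator;
    // INSERT IGNORE replaces PostgreSQL's ON CONFLICT DO NOTHING on MySQL.
    @Insert("<script>"
            + "INSERT IGNORE INTO filter_parameter (filter_key, filter_value, filter_id) VALUES "
            + "<foreach collection='items' item='item' separator=','>"
            + "(#{item.filterKey}, #{item.filterValue}, #{item.filter.id})"
            + "</foreach>"
            + "</script>")
    int insertBatch(@Param("items") Set<?> items);
}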
I'm getting the following error when I try to start a new JHipster app using MySQL as the database:
2019-07-30 09:55:40.583 ERROR 35895 --- [-service-task-1] i.g.j.c.liquibase.AsyncSpringLiquibase : Liquibase could not start correctly, your database is NOT ready: The table does not comply with the requirements by an external plugin. [Failed SQL: INSERT INTO compose.DATABASECHANGELOG (ID, AUTHOR, FILENAME, DATEEXECUTED, ORDEREXECUTED, MD5SUM, `DESCRIPTION`, COMMENTS, EXECTYPE, CONTEXTS, LABELS, LIQUIBASE, DEPLOYMENT_ID) VALUES ('00000000000001', 'jhipster', 'config/liquibase/changelog/00000000000000_initial_schema.xml', NOW(), 1, '8:c5bfc567913b118109a43e981cd02883', 'createTable tableName=jhi_user; createTable tableName=jhi_authority; createTable tableName=jhi_user_authority; addPrimaryKey tableName=jhi_user_authority; addForeignKeyConstraint baseTableName=jhi_user_authority, constraintName=fk_authority_name, ...', '', 'EXECUTED', NULL, NULL, '3.6.3', '4491329886')]
liquibase.exception.DatabaseException: The table does not comply with the requirements by an external plugin. [Failed SQL: INSERT INTO compose.DATABASECHANGELOG (ID, AUTHOR, FILENAME, DATEEXECUTED, ORDEREXECUTED, MD5SUM, `DESCRIPTION`, COMMENTS, EXECTYPE, CONTEXTS, LABELS, LIQUIBASE, DEPLOYMENT_ID) VALUES ('00000000000001', 'jhipster', 'config/liquibase/changelog/00000000000000_initial_schema.xml', NOW(), 1, '8:c5bfc567913b118109a43e981cd02883', 'createTable tableName=jhi_user; createTable tableName=jhi_authority; createTable tableName=jhi_user_authority; addPrimaryKey tableName=jhi_user_authority; addForeignKeyConstraint baseTableName=jhi_user_authority, constraintName=fk_authority_name, ...', '', 'EXECUTED', NULL, NULL, '3.6.3', '4491329886')]
at liquibase.executor.jvm.JdbcExecutor$ExecuteStatementCallback.doInStatement(JdbcExecutor.java:356)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:57)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:125)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:109)
at liquibase.changelog.StandardChangeLogHistoryService.setExecType(StandardChangeLogHistoryService.java:384)
at liquibase.database.AbstractJdbcDatabase.markChangeSetExecStatus(AbstractJdbcDatabase.java:1086)
at liquibase.changelog.visitor.UpdateVisitor.visit(UpdateVisitor.java:64)
at liquibase.changelog.ChangeLogIterator.run(ChangeLogIterator.java:83)
at liquibase.Liquibase.update(Liquibase.java:202)
at liquibase.Liquibase.update(Liquibase.java:179)
at liquibase.integration.spring.SpringLiquibase.performUpdate(SpringLiquibase.java:353)
at liquibase.integration.spring.SpringLiquibase.afterPropertiesSet(SpringLiquibase.java:305)
at io.github.jhipster.config.liquibase.AsyncSpringLiquibase.initDb(AsyncSpringLiquibase.java:119)
at io.github.jhipster.config.liquibase.AsyncSpringLiquibase.lambda$afterPropertiesSet$0(AsyncSpringLiquibase.java:94)
at io.github.jhipster.async.ExceptionHandlingAsyncTaskExecutor.lambda$createWrappedRunnable$1(ExceptionHandlingAsyncTaskExecutor.java:78)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLException: The table does not comply with the requirements by an external plugin.
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:129)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
at com.mysql.cj.jdbc.StatementImpl.executeInternal(StatementImpl.java:782)
at com.mysql.cj.jdbc.StatementImpl.execute(StatementImpl.java:666)
at com.zaxxer.hikari.pool.ProxyStatement.execute(ProxyStatement.java:95)
at com.zaxxer.hikari.pool.HikariProxyStatement.execute(HikariProxyStatement.java)
at liquibase.executor.jvm.JdbcExecutor$ExecuteStatementCallback.doInStatement(JdbcExecutor.java:352)
... 17 common frames omitted
It looks like it's failing on a Liquibase step. Any ideas why this is happening?
Liquibase does not create the DATABASECHANGELOG table with the required primary key. Try stopping JHipster, running the following SQL commands against your MySQL database, and then restarting JHipster:
ALTER TABLE DATABASECHANGELOG
ADD PRIMARY KEY (ID);
drop table jhi_persistent_audit_evt_data;
drop table jhi_persistent_audit_event;
drop table jhi_user_authority;
drop table jhi_authority;
drop table jhi_user;
That should resolve the issue.
I'm using Spring Boot 1.4.0.RELEASE with the following DB connectors in my dependencies:
<!-- runtime dependencies -->
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>com.h2database</groupId>
<artifactId>h2</artifactId>
<scope>runtime</scope>
</dependency>
And there is an entity class with a GenerationType.AUTO policy for ID generation (the code below is not complete):
@Entity
@Table(name = "scanner_run")
public class ScannerRun extends BaseObject {

    private Long id;

    @Id @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "id")
    public Long getId() {
        return this.id;
    }
}
There are no problems with inserting new entities when H2 is used:
spring.datasource.url=jdbc:h2:mem:testdb;DB_CLOSE_ON_EXIT=FALSE
spring.datasource.driverClassName=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=
Hibernate generates the query insert into scanner_run (id, completed_ts, repository_id, started_ts, success) values (null, ?, ?, ?, ?) and a new record is created.
However with MySQL
spring.datasource.url=jdbc:mysql://localhost/db_dev?createDatabaseIfNotExist=true&useUnicode=true&characterEncoding=UTF-8&autoReconnect=true&connectionCollation=utf8_general_ci
spring.datasource.username=root
spring.datasource.password=root
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
the generated query is insert into scanner_run (completed_ts, repository_id, started_ts, success) values (?, ?, ?, ?) - ID is not in the query - and it fails.
There are no other differences; the only change is in application.properties to swap the database. The same code with older versions of Hibernate and the MySQL connector works with the same installation of MySQL. The MySQL connector resolves to mysql:mysql-connector-java:jar:5.1.39.
Can you spot anything wrong?
The exact messages and exception in the logs are:
2016-08-26 14:38:03.964 DEBUG 32555 --- [ myScheduler-1] org.hibernate.SQL : insert into scanner_run (completed_ts, repository_id, started_ts, success) values (?, ?, ?, ?)
2016-08-26 14:38:03.967 WARN 32555 --- [ myScheduler-1] o.h.engine.jdbc.spi.SqlExceptionHelper : SQL Error: 1364, SQLState: HY000
2016-08-26 14:38:03.967 ERROR 32555 --- [ myScheduler-1] o.h.engine.jdbc.spi.SqlExceptionHelper : Field 'id' doesn't have a default value
2016-08-26 14:38:03.979 ERROR 32555 --- [ myScheduler-1] o.s.s.s.TaskUtils$LoggingErrorHandler : Unexpected error occurred in scheduled task.
jvr.decrex.exception.ExecutionError: Failed to save ScannerRun{id=null, repository=/jv-ration/projects/jv-ration/deCrex/jvr-decrex/, startedTs=Fri Aug 26 14:38:03 CEST 2016, completedTs=null}
at jvr.decrex.service.impl.GenericManagerImpl.insert(GenericManagerImpl.java:107)
at jvr.decrex.scanner.service.impl.ScannerRunManagerImpl.createScan(ScannerRunManagerImpl.java:79)
.........
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.springframework.orm.jpa.JpaSystemException: could not execute statement; nested exception is org.hibernate.exception.GenericJDBCException: could not execute statement
at org.springframework.orm.jpa.vendor.HibernateJpaDialect.convertHibernateAccessException(HibernateJpaDialect.java:333)
at org.springframework.orm.jpa.vendor.HibernateJpaDialect.translateExceptionIfPossible(HibernateJpaDialect.java:244)
.........
at jvr.decrex.scanner.dao.jpa.ScannerRunDaoJpa$$EnhancerBySpringCGLIB$$5e6c846a.insert()
at jvr.decrex.service.impl.GenericManagerImpl.insert(GenericManagerImpl.java:105)
... 21 common frames omitted
Caused by: org.hibernate.exception.GenericJDBCException: could not execute statement
at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:47)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:109)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:95)
..........
at com.sun.proxy.$Proxy126.persist(Unknown Source)
at jvr.decrex.dao.jpa.GenericDaoJpa.insert(GenericDaoJpa.java:137)
at jvr.decrex.dao.jpa.GenericDaoJpa$$FastClassBySpringCGLIB$$6605cd4e.invoke()
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:720)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:136)
... 25 common frames omitted
Caused by: java.sql.SQLException: Field 'id' doesn't have a default value
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1078)
.........
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2376)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2360)
at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:204)
... 58 common frames omitted
I tried using the older 5.1.27 version of mysql-connector-java, which works with an older version of Hibernate - it throws the same error.
You omit the database schema(s), so some guesswork is required. The following is what can be said:
The AUTO generation strategy means the JPA provider can choose whatever strategy it wants. It seems that for MySQL it uses AUTO_INCREMENT columns (equivalent to the IDENTITY generation strategy), and for H2 it perhaps uses a SEQUENCE (a guess, since you provide no details of how). Maybe you don't have AUTO_INCREMENT defined for the PK column in MySQL? But you can't use the AUTO strategy in that case, and you are.
You could handle it by having an orm.xml for each datastore you will deploy to, and then you can use different generation strategies based on which datastore.
Alternatively, choose the TABLE generation strategy, and the "id" column will be included in the insert each time regardless of the datastore.
Or choose IDENTITY (when you use a MySQL AUTO_INCREMENT column for the PK and an H2 IDENTITY column for the PK), since H2 would then use that as well (clearly this is not an option if you also need to support another datastore that has no such IDENTITY support).
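For example, a minimal sketch of the IDENTITY variant, assuming the scanner_run.id column is (or is changed to be) AUTO_INCREMENT in MySQL and an identity column in H2:

import javax.persistence.*;

@Entity
@Table(name = "scanner_run")
public class ScannerRun extends BaseObject {

    private Long id;

    // IDENTITY delegates key generation to the database, so Hibernate omits
    // the id from the INSERT and reads the generated value back afterwards.
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "id")
    public Long getId() {
        return this.id;
    }

    public void setId(Long id) {
        this.id = id;
    }
}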
When developing the application against my local database there were no problems with transaction speed, although CPU usage was constantly at about 30 percent when performing several transactions per second, and when profiling, most of the time was spent in javax methods handling the transactions, with an average of 2.6 seconds per transaction. Therefore I'm using an ArrayList as a buffer and only committing the transaction once the buffer exceeds 300 instances, which significantly lowered the CPU usage.
When I change my persistence.xml to use a remote database instead (I checked both RDS and a personal, off-site database), the minimum time for persisting/committing a batch of instances is about 20 seconds, which is too high since a transaction of 300 instances is required once every 5 seconds (on average).
I've tried changing the flush mode of the EntityManager to FlushModeType.COMMIT, but it didn't change the performance noticeably. Increasing the size of the buffer before sending causes a stack overflow in the javax.persistence library for some (to me) unknown reason.
persistence.xml
<persistence-unit name="PU-data" transaction-type="RESOURCE_LOCAL">
<mapping-file>META-INF/orm.xml</mapping-file>
... // class, shared-cache-mode=none, validation-mode=none ...
<properties>
... // Authentication ...
<!-- Optimization attempts -->
<property name="eclipselink.jdbc.bind-parameters" value="true" />
<property name="eclipselink.jdbc.batch-writing" value="JDBC"/>
<property name="eclipselink.jdbc.batch-writing.size" value="300" />
<property name="eclipselink.jdbc.cache-statements" value="true" />
<property name="eclipselink.cache.shared.default" value="false" />
<property name="eclipselink.persistence-context.close-on-commit" value="true" />
<property name="eclipselink.persistence-context.flush-mode" value="commit" />
<property name="eclipselink.persistence-context.persist-on-commit" value="false" />
</properties>
</persistence-unit>
Facade handling the transactions
MouseFacade.bufferSemaphore.acquireUninterruptibly(1);
if (MouseFacade.buffer.size() >= 300) {
EntityManager entityManager = EMF.getEntityManager();
try {
entityManager.getTransaction().begin();
for (Mouse mouse : MouseFacade.buffer) {
entityManager.persist(mouse);
}
entityManager.getTransaction().commit();
} finally {
if (entityManager.getTransaction().isActive()) {
entityManager.getTransaction().rollback();
}
entityManager.close();
MouseFacade.buffer.clear();
}
}
MouseFacade.bufferSemaphore.release(1);
ORM mapping
<entity-mappings version="2.1" xmlns="http://www.eclipse.org/eclipselink/xsds/persistence/orm" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<entity class="se.my.package.Mouse">
<table-generator name="ORD_SEQ" allocation-size="300"/>
</entity>
</entity-mappings>
Update
I've gone through the suggestions found on this page, How to improve JPA performance by 1,825% (http://java-persistence-performance.blogspot.se/2011/06/how-to-improve-jpa-performance-by-1825.html), but there is no difference whatsoever, which makes me wonder whether I'm missing a key point about batch writing and MySQL. I've rewritten the entities not to rely on relationships and minimized my read operations to 1 for the entire application in order to focus on the write problems.
When looking through the EclipseLink log it doesn't look like batch writing is being used at all; instead 2 log entries are written for every instance, which seems about right (300 instances * 2 round trips * 24 ms latency = 14.4 seconds).
[EL Fine]: sql: 2013-03-31 01:35:29.249--ClientSession(1213059092)--Connection(662811604)--Thread(Thread[pool-1-thread-1,5,main])--SELECT LAST_INSERT_ID()
[EL Fine]: sql: 2013-03-31 01:35:29.274--ClientSession(1213059092)--Connection(662811604)--Thread(Thread[pool-1-thread-1,5,main])--INSERT INTO mouse (event, posX, posY, created, uid) VALUES (?, ?, ?, ?, ?)
bind => [12, 241, 250, 1364690113727, 1]
[EL Fine]: sql: 2013-03-31 01:35:29.298--ClientSession(1213059092)--Connection(662811604)--Thread(Thread[pool-1-thread-1,5,main])--SELECT LAST_INSERT_ID()
[EL Fine]: sql: 2013-03-31 01:35:29.323--ClientSession(1213059092)--Connection(662811604)--Thread(Thread[pool-1-thread-1,5,main])--INSERT INTO mouse (event, posX, posY, created, uid) VALUES (?, ?, ?, ?, ?)
bind => [12, 233, 296, 1364690113443, 1]
...
Progress
By changing to @GeneratedValue(strategy = GenerationType.TABLE) and allocationSize=300 I've managed to reduce the number of requests by 50%, although it looks as if the binds are still sent on their own when checking the EclipseLink log, even though batch writing is supposedly enabled.
[EL Fine]: sql: 2013-03-31 01:35:29.323--ClientSession(1213059092)--Connection(662811604)--Thread(Thread[pool-1-thread-1,5,main])--INSERT INTO mouse (event, posX, posY, created, uid) VALUES (?, ?, ?, ?, ?)
bind => [..., ..., ..., ..., ...]
bind => [..., ..., ..., ..., ...]
bind => [..., ..., ..., ..., ...]
bind => [..., ..., ..., ..., ...]
bind => [..., ..., ..., ..., ...]
bind => [..., ..., ..., ..., ...]
bind => [..., ..., ..., ..., ...]
bind => [..., ..., ..., ..., ...]
bind => [..., ..., ..., ..., ...]
bind => [..., ..., ..., ..., ...]
Change your sequencing to use table sequencing, which allows sequence numbers to be preallocated. What you have now forces each insert into its own statement so that the id can be read back right after it - which prevents batching. Table and other strategies that allow preallocation will give better performance if matched to the size of your batches. See optimization #6 in http://java-persistence-performance.blogspot.se/2011/06/how-to-improve-jpa-performance-by-1825.html
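As a sketch of what that could look like directly on the entity (the id field name here is an assumption; the orm.xml above already defines the ORD_SEQ generator with allocation-size 300, so the annotation and XML variants are interchangeable):

import javax.persistence.*;

@Entity
@Table(name = "mouse")
public class Mouse {

    // Table sequencing preallocates 300 ids per database round trip, so
    // EclipseLink no longer needs a SELECT LAST_INSERT_ID() after every row
    // and can group the INSERTs into real JDBC batches.
    @Id
    @GeneratedValue(strategy = GenerationType.TABLE, generator = "ORD_SEQ")
    @TableGenerator(name = "ORD_SEQ", allocationSize = 300)
    private long id;

    // event, posX, posY, created and uid fields omitted
}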
Try enabling JDBC batch writing. I'm not sure what difference it would make, but it may be worth trying.
For batch writing in MySQL, the MySQL JDBC driver does not batch statements unless you have set the following property in your connection URL:
?rewriteBatchedStatements=true
i.e.
jdbc:mysql://localhost:3306/db1?rewriteBatchedStatements=true
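If the unit is ever configured programmatically rather than through persistence.xml, the same flag can be passed via the standard JPA URL property. A minimal sketch, reusing the db1 database and the PU-data unit name from above (everything else assumed):

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class BatchedEmf {

    public static EntityManagerFactory create() {
        Map<String, String> props = new HashMap<String, String>();
        // rewriteBatchedStatements=true lets the MySQL driver collapse a JDBC
        // batch into one multi-row INSERT instead of sending rows one by one.
        props.put("javax.persistence.jdbc.url",
                "jdbc:mysql://localhost:3306/db1?rewriteBatchedStatements=true");
        return Persistence.createEntityManagerFactory("PU-data", props);
    }
}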
I have enabled the Views module and get this when I go to Structure > Views in Drupal 7:
Additional uncaught exception thrown while handling exception.
Original
PDOException: SQLSTATE[HY000]: General error: 2006 MySQL server has gone away: DELETE FROM {cache_form} WHERE (cid = :db_condition_placeholder_0) ; Array ( [:db_condition_placeholder_0] => form_form-MKcd7j8VJkLHaG7-JGW-vREo_XeUngdnLcqlKOn-02o ) in cache_clear_all() (line 170 of /home/tennis/public_html/includes/cache.inc).
Additional
PDOException: SQLSTATE[HY000]: General error: 2006 MySQL server has gone away: INSERT INTO {watchdog} (uid, type, message, variables, severity, link, location, referer, hostname, timestamp) VALUES (:db_insert_placeholder_0, :db_insert_placeholder_1, :db_insert_placeholder_2, :db_insert_placeholder_3, :db_insert_placeholder_4, :db_insert_placeholder_5, :db_insert_placeholder_6, :db_insert_placeholder_7, :db_insert_placeholder_8, :db_insert_placeholder_9); Array ( [:db_insert_placeholder_0] => 1 [:db_insert_placeholder_1] => php [:db_insert_placeholder_2] => %type: !message in %function (line %line of %file). [:db_insert_placeholder_3] => a:6:{s:5:"%type";s:12:"PDOException";s:8:"!message";s:240:"SQLSTATE[HY000]: General error: 2006 MySQL server has gone away: DELETE FROM {cache_form} WHERE (cid = :db_condition_placeholder_0) ; Array ( [:db_condition_placeholder_0] => form_form-MKcd7j8VJkLHaG7-JGW-vREo_XeUngdnLcqlKOn-02o ) ";s:9:"%function";s:17:"cache_clear_all()";s:5:"%file";s:43:"/home/tennis/public_html/includes/cache.inc";s:5:"%line";i:170;s:14:"severity_level";i:3;} [:db_insert_placeholder_4] => 3 [:db_insert_placeholder_5] => [:db_insert_placeholder_6] => http://192.168.1.66/~tennis/admin/structure/views [:db_insert_placeholder_7] => http://192.168.1.66/~tennis/user/1 [:db_insert_placeholder_8] => 192.168.1.172 [:db_insert_placeholder_9] => 1309366098 ) in dblog_watchdog() (line 155 of /home/tennis/public_html/modules/dblog/dblog.module).
What might be my issue?
Here is some information on that error: http://dev.mysql.com/doc/refman/5.1/en/gone-away.html and a similar issue on Drupal.org: http://drupal.org/node/984112
Another helpful post regarding this error: http://madhavvyas.blogspot.com/
It seems that the problem lies with max_allowed_packet in the configuration for MySQL.
From the Drupal issue, another user provides some steps that may assist you (this user is using XAMPP, but the steps are similar):
How to fix this problem
Go to xampp\mysql\bin
Open my.ini
Change "max_allowed_packet" from "1m" to "16m" (or larger)
Save my.ini Now restart MySql through the XAMPP control panel.