Grails transaction never completes - MySQL

Consider the code below, which tries to create a new SerpKeyword within a transaction and prints to the console to show how far it gets.
if (!serpKeyword) {
    println "I DIDN'T FIND THE KEYWORD!"
    SerpKeyword.withNewTransaction {
        println "SO NOW I'M BEGINNING A TRANSACTION"
        serpKeyword = new SerpKeyword(
            keyword: searchKeyword,
            geoKeyword: geoKeyword,
            concatenation: concatenation,
            locale: locale
        )
        println "NOW I'LL SAVE THE KEYWORD"
        serpKeyword.save(failOnError: true, flush: true)
        println "AND NOW THE KEYWORD IS SAVED"
    }
}
The console output I see right away is:
I DIDN'T FIND THE KEYWORD!
SO NOW I'M BEGINNING A TRANSACTION
NOW I'LL SAVE THE KEYWORD
I never see the last line of my output, indicating that the record never saves. I've tried this both with and without the options that I'm passing into save. Regardless, it just hangs for a while, and eventually I get this stacktrace:
Got error -1 from storage engine. Stacktrace follows:
java.sql.SQLException: Got error -1 from storage engine
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1055)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:956)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3491)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3423)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1936)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2060)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2542)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1734)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2019)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:1937)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:1922)
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
at com.reachlocal.grails.sales.AdvertiserConnectionService$_findOrCreateSerpKeyword_closure9$$EO2A9QHA.doCall(AdvertiserConnectionService.groovy:624)
at org.grails.datastore.gorm.GormStaticApi.withNewTransaction(GormStaticApi.groovy:696)
at com.reachlocal.grails.sales.AdvertiserConnectionService$$EO2A9QHA.findOrCreateSerpKeyword(AdvertiserConnectionService.groovy:615)
at com.reachlocal.grails.sales.AdvertiserConnectionService$$EO2A9QHA.createSerpEntryForKeyword(AdvertiserConnectionService.groovy:659)
at com.reachlocal.grails.sales.AdvertiserConnectionService$$EO2A9QHA.addKeyword(AdvertiserConnectionService.groovy:51)
at com.reachlocal.grails.serp.SerpController$_closure9.doCall(SerpController.groovy:77)
at grails.plugin.cache.web.filter.PageFragmentCachingFilter.doFilter(PageFragmentCachingFilter.java:195)
at grails.plugin.cache.web.filter.AbstractFilter.doFilter(AbstractFilter.java:63)
at org.jasig.cas.client.session.SingleSignOutFilter.doFilter(SingleSignOutFilter.java:65)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:680)
What gives?

You may get "Got error -1 from storage engine" for several reasons:
Your database is out of disk space.
You have the innodb_force_recovery switch in your my.cnf file.
Mismatched MySQL tablespace ids.
The best way to troubleshoot this issue is to take a look at the MySQL error log.
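If you are not sure where that log lives, you can ask the server itself. A quick sketch you could run from the Grails console, assuming an injected dataSource (the groovy.sql.Sql calls here are just an illustration, not part of the original answer):

import groovy.sql.Sql

// Ask MySQL where its error log and data directory live, so you can read the
// log and check free disk space on that volume.
def sql = new Sql(dataSource)
println sql.firstRow("SHOW VARIABLES LIKE 'log_error'")
println sql.firstRow("SHOW VARIABLES LIKE 'datadir'")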

Do you have the id generation strategy in the SerpKeyword domain class set to assigned, or to anything other than a sequence/auto strategy?
It would help with debugging if you could add the domain class to the post as well.
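For reference, the generator is configured in the domain class's mapping block; a rough sketch of what I mean (the field names come from the snippet above, the mapping itself is assumed):

class SerpKeyword {
    String keyword
    String geoKeyword
    String concatenation
    String locale

    static mapping = {
        // 'native' (auto/identity on MySQL) is the default; something like
        // generator: 'assigned' means you must set the id yourself before save().
        id generator: 'native'
    }
}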

Related

Flink application - interfacing with MySQL and MongoDB

This post is marked for deletion, as the issue was with the IDE not creating the proper jar, which is what caused the problems with the code's database interaction.
I have a small Flink application that reads from a Kafka topic and needs to check whether the input from the topic (x) exists in a column of a MySQL database before processing it (not ideal, but it's the current requirement).
When I run the application through the IDE (IntelliJ), it works.
However, when I submit the job to the Flink server, it fails to open a connection with the driver.
Error from the Flink server:
// ERROR
java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
// ---------------------
// small summary of MAIN
// ---------------------
Get Data from Source (x)
source.map(x => {
// open connection (Fails to open)
// check if data exist in db
})
// -------------------------------------
// open connection function (Scala Code)
// -------------------------------------
def openConnection() : Boolean = {
try {
// - set driver
Class.forName("com.mysql.jdbc.Driver")
// - make the connection
connection = DriverManager.getConnection(url, user, pswd)
// - set status controller
connection_open = true
}
catch {
// - catch error
case e: Throwable => e.printStackTrace
// - set status controller
connection_open = false
}
// return result
return connection_open
}
Question
1) What's the correct way to interface with a MySQL database from a Flink application?
2) At a later stage I will also have to do a similar interaction with MongoDB; what's the correct way of interacting with MongoDB from Flink?
Unbelievably, IntelliJ does not update dependencies on the rebuild command.
In IntelliJ, you have to delete and re-create your artifact configuration for all dependencies to be added; (Build, Clean, Rebuild, Delete) does not update its settings.
I deleted and recreated the artifact file, and it works.
Apologies for the unnecessary inconvenience (as you can imagine my frustration), but it's a word of caution for those developing in IntelliJ: manually delete and recreate your artifacts.
Solution:
(File -> Project Structure -> Artifacts -> (-) delete previous one -> (+) create new one -> Select Main Class)

Operation not allowed after ResultSet closed in Solr import

I encountered an error while doing a full-import in Solr 6.6.0.
I am getting the exception below. This happens when I set
batchSize="-1" in my db-config.xml.
If I change this value to, say, batchSize="100", then the import runs without any error,
but the recommended value for this is "-1".
Any suggestions as to why Solr is throwing this exception?
By the way, the data I am trying to import is not huge, just 250 documents.
Stack trace:
org.apache.solr.handler.dataimport.DataImportHandlerException: java.sql.SQLException: Operation not allowed after ResultSet closed
at org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:61)
at org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:464)
at org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator$1.hasNext(JdbcDataSource.java:377)
at org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:133)
at org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:75)
at org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:267)
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:475)
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:516)
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:414)
at org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:329)
at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:232)
at org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:415)
at org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:474)
at org.apache.solr.handler.dataimport.DataImporter.lambda$runAsync$0(DataImporter.java:457)
at java.lang.Thread.run(Thread.java:745)
By the way, I am also getting one more warning:
Could not read DIH properties from /configs/state/dataimport.properties :class org.apache.zookeeper.KeeperException$NoNodeException
This happens when the config directory is not writable.
How can we make the config directory writable in SolrCloud mode?
I am using ZooKeeper as a watchdog. Can we go ahead and change the permissions of the config files that are in ZooKeeper?
Your help is greatly appreciated.
Setting batchSize="-1" is only recommended if you have problems running without it. Its behaviour is up to the JDBC driver, but the reason people assume it's recommended is this sentence from the old wiki:
DataImportHandler is designed to stream row one-by-one. It passes a fetch size value (default: 500) to Statement#setFetchSize which some drivers do not honor. For MySQL, add batchSize property to dataSource configuration with value -1. This will pass Integer.MIN_VALUE to the driver as the fetch size and keep it from going out of memory for large tables.
Unless you're actually seeing issues with the default values, leave the setting alone and assume your JDBC driver does the correct thing (which it might not do with -1 as the value).
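For context, batchSize="-1" just changes what DIH passes to Statement#setFetchSize. A rough Groovy sketch of the equivalent plain-JDBC behaviour, with made-up connection details and table name, purely to illustrate the mechanism:

import groovy.sql.Sql

// Hypothetical connection details, for illustration only.
def sql = Sql.newInstance('jdbc:mysql://localhost:3306/mydb', 'user', 'secret',
        'com.mysql.jdbc.Driver')

// What batchSize="-1" maps to: Integer.MIN_VALUE asks the MySQL driver to
// stream rows one at a time instead of buffering the whole result set.
sql.withStatement { stmt -> stmt.fetchSize = Integer.MIN_VALUE }

sql.eachRow('SELECT id FROM documents') { row ->
    println row.id   // rows arrive as they are read from the server
}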
The reason dataimport.properties has to be writable is that DIH writes a property with the time the import last ran to that file, so that you can perform delta updates by referencing the time of the last update in your SQL statement.
You'll have to make the directory writable for the client (Solr) if you want to use this feature. My guess is that you can ignore the warning if you're not using delta imports.

Writing from Cascalog to MySQL does not work. How can I debug this?

I'm trying to write the result of a Cascalog query into a MySQL database. For this, I'm using cascading-jdbc and following an example I found here. I'm using cascading-jdbc-core and cascading-jdbc-mysql in version 3.0.0.
I'm executing precisely this code from my REPL:
(let [data [["foo1" "bar1"]
            ["foo2" "bar2"]]
      query-params (into-array String ["?col1" "?col2"])
      column-names (into-array String ["col1" "col2"])
      update-params (into-array String ["?col1"])
      update-column-names (into-array String ["col1"])
      jdbc-tap (fn []
                 (let [scheme (JDBCScheme.
                                (Fields. query-params)
                                column-names
                                nil
                                (Fields. update-params)
                                update-column-names)
                       table-desc (TableDesc.
                                    "test_table"
                                    query-params
                                    column-names
                                    (into-array String []))
                       tap (JDBCTap.
                             "jdbc:mysql://192.168.99.101:3306/test_db?user=root&password=my-secret-pw"
                             "com.mysql.jdbc.Driver"
                             table-desc
                             scheme)]
                   tap))]
  (?<- (jdbc-tap)
       [?col1 ?col2]
       (data ?col1 ?col2)))
When I'm running the code, I'm seeing these logs inside the REPL:
15/12/11 11:08:44 INFO hadoop.FlowMapper: sinking to: JDBCTap{connectionUrl='jdbc:mysql://192.168.99.101:3306/test_db?user=root&password=my-secret-pw', driverClassName='com.mysql.jdbc.Driver', tableDesc=TableDesc{tableName='test_table', columnNames=[?col1, ?col2], columnDefs=[col1, col2], primaryKeys=[]}}
15/12/11 11:08:44 INFO mapred.Task: Task:attempt_local1324562503_0006_m_000000_0 is done. And is in the process of commiting
15/12/11 11:08:44 INFO mapred.LocalJobRunner:
15/12/11 11:08:44 INFO mapred.Task: Task 'attempt_local1324562503_0006_m_000000_0' done.
15/12/11 11:08:44 INFO mapred.LocalJobRunner: Finishing task: attempt_local1324562503_0006_m_000000_0
15/12/11 11:08:44 INFO mapred.LocalJobRunner: Map task executor complete.
Everything looks fine. However, no data is written. I checked with tcpdump that not even a connection to my local MySQL database is being established. Also, when I change the JDBC connection string to obviously wrong values (user names that do not exist, a non-existing DB name and even a non-existing IP for the DB server), I get the same logs, which do not complain about anything.
Also, changing the jdbc-tap to stdout produces the expected values.
I do not know at all how to debug this. Is there a way to produce error output? Right now, I have no clue what is going wrong.
As it turns out, I was using the wrong version of cascading-jdbc. Cascalog 2.1.1 uses Cascading 2.5.3. Switching to a 2.5.x version of cascading-jdbc fixed the problem.
I was not able to see this from the error messages, though (as there were none). One of the developers of cascading-jdbc was kind enough to point this out to me.

Fetch Update precondition failed

Since one week ago, the Fetch method of Google.Apis.v2.DriveService.Files.Update has been throwing a 412 Precondition Failed exception.
I first perform an operation that gets the file using its DocID. Later, I update this file without changes, and then I get this error message:
Errors [
Message[Precondition Failed] Location[If-Match - header] Reason[conditionNotMet] Domain[global] ]
Could you help me?

org.hibernate.StaleObjectStateException when using Grails with PostgreSQL

I've written a Grails service with the following code:
EPCGenerationMetadata requestEPCs(String indicatorDigit, FilterValue filterValue,
        PartitionValue partitionValue, String companyPrefix, String itemReference,
        Long quantity) throws IllegalArgumentException, IllegalStateException {
    //... code
    //problematic snippet below
    def serialGenerator
    synchronized(this) {
        log.debug "Generating epcs..."
        serialGenerator = SerialGenerator.findByItemReference(itemReference)
        if(!serialGenerator) {
            serialGenerator = new SerialGenerator(itemReference: itemReference, serialNumber: 0l)
        }
        startingPoint = serialGenerator.serialNumber + 1
        serialGenerator.serialNumber += quantity
        serialGenerator.save(flush: true)
    }
    //code continues...
}
Since a Grails service is a singleton by default, I thought I'd be safe from concurrent inconsistency by adding the synchronized block above. I've created a simple client for testing concurrency, as the service is exposed by HTTP invoker. I ran multiple clients at the same time, passing the same itemReference as an argument, and had no problems at all.
However, when I changed the database from MySQL to PostgreSQL 8.4, I couldn't handle concurrent access anymore. When running a single client, everything is fine. However, if I add one more client asking for the same itemReference, I instantly get a StaleObjectStateException:
Exception in thread "main" org.springframework.orm.hibernate3.HibernateOptimisticLockingFailureException: Object of class [br.com.app.epcserver.SerialGenerator] with identifier [10]: optimistic locking failed; nested exception is org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [br.com.app.epcserver.SerialGenerator#10]
at org.springframework.orm.hibernate3.SessionFactoryUtils.convertHibernateAccessException(SessionFactoryUtils.java:672)
at org.springframework.orm.hibernate3.HibernateAccessor.convertHibernateAccessException(HibernateAccessor.java:412)
at org.springframework.orm.hibernate3.HibernateTemplate.doExecute(HibernateTemplate.java:411)
at org.springframework.orm.hibernate3.HibernateTemplate.executeWithNativeSession(HibernateTemplate.java:374)
at org.springframework.orm.hibernate3.HibernateTemplate.flush(HibernateTemplate.java:881)
at org.codehaus.groovy.grails.orm.hibernate.metaclass.SavePersistentMethod$1.doInHibernate(SavePersistentMethod.java:58)
(...)
at br.com.app.EPCGeneratorService.requestEPCs(EPCGeneratorService.groovy:63)
at br.com.app.epcclient.IEPCGenerator$requestEPCs.callCurrent(Unknown Source)
at br.com.app.epcserver.EPCGeneratorService.requestEPCs(EPCGeneratorService.groovy:29)
at br.com.app.epcserver.EPCGeneratorService$$FastClassByCGLIB$$15a2adc2.invoke()
(...)
Caused by: org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [br.com.app.epcserver.SerialGenerator#10]
Note: EPCGeneratorService.groovy:63 refers to serialGenerator.save(flush: true).
I don't know what to think, as the only thing that I've changed was the database. I'd appreciate any advice on the matter.
I'm using:
Grails 1.3.3
Postgres 8.4 (postgresql-8.4-702.jdbc4 driver)
JBoss 6.0.0-M4
MySQL:
mysqld Ver 5.1.41 (mysql-connector-java-5.1.13-bin driver)
Thanks in advance!
That's weird; try disabling transactions for the service.
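In a Grails service that would mean something like the sketch below (whether it actually helps here is untested; the class name is taken from the stack trace):

class EPCGeneratorService {

    // Turn off Grails' declarative per-method transactions for this service.
    static transactional = false

    // ... existing service methods ...
}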
This is indeed strange behavior, but you could try to work around it by using a "select ... for update" via Hibernate's lock mechanism.
Something like this:
def c = SerialGenerator.createCriteria()
serialGenerator = c.get {
    eq "itemReference", itemReference
    lock true
}
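If the criteria syntax feels awkward, GORM also has a static lock method that re-reads a row with a pessimistic lock by id; a hedged sketch of how that could slot into the code above (the find-then-lock combination is my assumption, not part of the original answer):

// Re-read the row with SELECT ... FOR UPDATE so concurrent writers block
// instead of failing later with StaleObjectStateException.
def existing = SerialGenerator.findByItemReference(itemReference)
def serialGenerator = existing ? SerialGenerator.lock(existing.id) : null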