Glassfish "Connection closed" error with a connection pool, JDBC, and SQL Server 2008 - sql-server-2008

When I try to do more than one transaction in a JSF page, I get the following error:
A potential connection leak detected for connection pool MSSQL. The stack trace of the thread is provided below :
com.sun.enterprise.resource.pool.ConnectionPool.setResourceStateToBusy(ConnectionPool.java:324)
com.sun.enterprise.resource.pool.ConnectionPool.getResourceFromPool(ConnectionPool.java:758)
com.sun.enterprise.resource.pool.ConnectionPool.getUnenlistedResource(ConnectionPool.java:632)
com.sun.enterprise.resource.pool.AssocWithThreadResourcePool.getUnenlistedResource(AssocWithThreadResourcePool.java:196)
com.sun.enterprise.resource.pool.ConnectionPool.internalGetResource(ConnectionPool.java:526)
com.sun.enterprise.resource.pool.ConnectionPool.getResource(ConnectionPool.java:381)
com.sun.enterprise.resource.pool.PoolManagerImpl.getResourceFromPool(PoolManagerImpl.java:245)
com.sun.enterprise.resource.pool.PoolManagerImpl.getResource(PoolManagerImpl.java:170)
com.sun.enterprise.connectors.ConnectionManagerImpl.getResource(ConnectionManagerImpl.java:338)
com.sun.enterprise.connectors.ConnectionManagerImpl.internalGetConnection(ConnectionManagerImpl.java:301)
com.sun.enterprise.connectors.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:190)
com.sun.enterprise.connectors.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:165)
com.sun.enterprise.connectors.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:160)
com.sun.gjc.spi.base.DataSource.getConnection(DataSource.java:113)
cl.codesin.colegios.util.persistencia.DAOManejador.abrir(DAOManejador.java:126)
Please notice the last line I pasted:
cl.codesin.colegios.util.persistencia.DAOManejador.abrir(DAOManejador.java:126)
abrir does the following:
public void abrir() throws SQLException {
    try
    {
        if(this.con==null || this.con.isClosed())
            this.con = fuenteDatos.getConnection();
    }
    catch(SQLException e)
    {
        throw e;
    }
}
It works in a singleton DAO manager this way: the DAO manager has one instance of each DAO and manages a single connection that every DAO shares. When a DAO is requested, it does the following:
public DAORegion getDAOregion() throws SQLException {
    try
    {
        if(con == null) //con is the connection the DAO manager uses
        {
            this.abrir();
        }
    }
    catch(SQLException e)
    {
        throw e;
    }
    if(this.DAOregion==null)
    {
        this.DAOregion = new DAORegion(this.con);
    }
    return DAOregion;
}
When closing a connection, the manager just calls con.close() without anything else.
By the way, I have no persistence.xml since I'm working with JDBC.
What am I doing wrong? Thank you beforehand.
EDIT: By deactivating leak detection on the GlassFish server I could avoid the exception; however, I'm still getting a "Connection closed" error. Worse, now I don't know exactly where the error is being thrown.
EDIT 2: I changed my DAO manager again. Here's the implementation.
public class DAOManejador {

    public static DAOManejador getInstancia() {
        return DAOManejadorSingleton.INSTANCIA;
    }

    //This is just a sample, every getDAOXXX works the same.
    public DAOUsuario getDAOusuario() throws SQLException {
        try
        {
            if(con == null)
            {
                this.abrir();
            }
        }
        catch(SQLException e)
        {
            throw e;
        }
        if(this.DAOusuario==null)
        {
            this.DAOusuario = new DAOUsuario(this.con, this.stmt, this.res);
        }
        return DAOusuario;
    }

    public void abrir() throws SQLException {
        try
        {
            if(this.con==null || this.con.isClosed())
                this.con = fuenteDatos.getConnection();
        }
        catch(SQLException e)
        {
            throw e;
        }
    }

    public void iniciaTransaccion() throws SQLException {
        try
        {
            con.setAutoCommit(false);
        }
        catch(SQLException e)
        {
            throw e;
        }
    }

    public void cierraTransaccion() throws SQLException {
        try
        {
            con.setAutoCommit(true);
        }
        catch(SQLException e)
        {
            throw e;
        }
    }

    public void comprometer() throws SQLException {
        try
        {
            con.commit();
        }
        catch(SQLException e)
        {
            throw e;
        }
    }

    public void deshacer() throws SQLException {
        try
        {
            con.rollback();
        }
        catch(SQLException e)
        {
            throw e;
        }
    }

    public void cerrar() throws SQLException {
        try
        {
            if(this.stmt!=null && !this.stmt.isClosed())
                stmt.close();
            if(this.res!=null && !this.res.isClosed())
                this.res.close();
            if(this.con!=null && !this.con.isClosed())
                con.close();
        }
        catch(SQLException e)
        {
            throw e;
        }
    }

    public void comprometerYTerminarTransaccion() throws SQLException {
        try
        {
            this.comprometer();
            this.cierraTransaccion();
        }
        catch(SQLException e)
        {
            throw e;
        }
    }

    public void comprometerYCerrarConexion() throws SQLException {
        try
        {
            this.comprometer();
            this.cierraTransaccion();
            this.cerrar();
        }
        catch(SQLException e)
        {
            throw e;
        }
    }

    //Protegidos
    @Override
    protected void finalize() throws SQLException, Throwable
    {
        try
        {
            this.cerrar();
        }
        finally
        {
            super.finalize();
        }
    }

    //Private
    private DataSource fuenteDatos;
    private Connection con = null;
    private PreparedStatement stmt = null;
    private ResultSet res = null;
    private DAOUsuario DAOusuario = null;
    private DAORegion DAOregion = null;
    private DAOProvincia DAOprovincia = null;
    private DAOComuna DAOcomuna = null;
    private DAOColegio DAOcolegio = null;

    private DAOManejador() throws Exception {
        try
        {
            InitialContext ctx = new InitialContext();
            this.fuenteDatos = (DataSource)ctx.lookup("jndi/MSSQL");
        }
        catch(Exception e){ throw e; }
    }

    private static class DAOManejadorSingleton {
        public static final DAOManejador INSTANCIA;
        static
        {
            DAOManejador dm;
            try
            {
                dm = new DAOManejador();
            }
            catch(Exception e)
            { dm=null; }
            INSTANCIA = dm;
        }
    }
}
What I did now is provide a single access point for every DAO. All DAOs share the same statement and result set, and when one of them needs to open a new one, the system does the following:
public abstract class DAOGenerico<T> {

    //Protected
    protected final String nombreTabla;
    protected Connection con;
    protected PreparedStatement stmt;
    protected ResultSet res;

    protected DAOGenerico(Connection con, PreparedStatement stmt, ResultSet res, String nombreTabla) {
        this.nombreTabla = nombreTabla;
        this.con = con;
        this.stmt = stmt;
        this.res = res;
    }

    //Prepares a query
    protected final void prepararConsulta(String query) throws SQLException
    {
        try
        {
            if(this.stmt!=null && !this.stmt.isClosed())
                this.stmt.close();
            this.stmt = this.con.prepareStatement(query);
        }
        catch(SQLException e){ throw e; }
    }

    //Gets a ResultSet
    protected final void obtenerResultados() throws SQLException {
        try
        {
            if(this.res!=null && !this.res.isClosed())
                this.res.close();
            this.res = this.stmt.executeQuery();
        }
        catch(SQLException e){ throw e; }
    }
}
And it still doesn't work.

I tried not doing anything when closing the connection: I commented out the code in the cerrar method and, for some reason, it works, even though it's bad practice! Is it okay to keep it like that, or should I find a way to close the connection?
Disregard this, I found what's wrong. I hope someone can make good use of this in the future.
The problem
if(this.con==null || this.con.isClosed())
    this.con = fuenteDatos.getConnection();
Each time I try to open a connection, I get a completely brand new connection. What's the problem with this?
public DAOUsuario getDAOusuario() throws SQLException {
    try
    {
        if(con == null)
        {
            this.abrir();
        }
    }
    catch(SQLException e)
    {
        throw e;
    }
    if(this.DAOusuario==null)
    {
        this.DAOusuario = new DAOUsuario(this.con, this.stmt, this.res);
    }
    return DAOusuario;
}
A connection is assigned to a DAO only when the DAO instance is first created. What will happen in the following case, then?
DAOManejador daoManager = DAOManejador.getInstancia(); //Get an instance of the DAO manager
daoManager.abrir(); //Open the connection
DAOUsuario daoUser = daoManager.getDAOusuario(); //Get a DAOUsuario, a type of DAO. It'll have the same connection as the DAOManager, and it'll be stored in the instance of the DAO manager
... //Do database stuff
daoManager.cerrar(); //Close the connection
daoManager.abrir(); //Open the connection again. Note that this will be a new instance of the connection rather than the old one
If, from here, you try to do database stuff, you'll get a Connection closed error since daoUser will still hold the old connection.
What I did
I modified the DAO manager class. It no longer has a getDAOXXX() per DAO, but rather the following:
public DAOGenerico getDAO(Tabla t) throws SQLException {
    try
    {
        if(con == null || this.con.isClosed())
        {
            this.abrir();
        }
    }
    catch(SQLException e)
    {
        throw e;
    }
    switch(t)
    {
        case REGION:
            return new DAORegion(this.con, this.stmt, this.res);
        case PROVINCIA:
            return new DAOProvincia(this.con, this.stmt, this.res);
        case COMUNA:
            return new DAOComuna(this.con, this.stmt, this.res);
        case USUARIO:
            return new DAOUsuario(this.con, this.stmt, this.res);
        case COLEGIO:
            return new DAOColegio(this.con, this.stmt, this.res);
        default:
            throw new SQLException("Se intentó vincular a una tabla que no existe.");
    }
}
Each time the user requests a DAO, the manager returns the correct type of DAO. Instead of storing each instance, it creates a new one bound to the current connection (con is the connection, stmt is a PreparedStatement and res is a ResultSet; they are passed along so they can be closed when the manager closes the connection and nothing leaks). Tabla is an enum holding the current table names in the database, so the correct DAO can be returned. This worked with no problems whatsoever. The rest of the class is the same, so if you want to use it, just replace the getDAOusuario method shown earlier with the getDAO method above and it should work fine.
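For illustration only, a minimal sketch of how a caller uses the reworked manager; the point is to request a fresh DAO after every abrir() instead of caching one (the cast is assumed here, since getDAO returns the generic type):

DAOManejador daoManager = DAOManejador.getInstancia();
daoManager.abrir();                                      //open the shared connection
DAOUsuario daoUser = (DAOUsuario) daoManager.getDAO(Tabla.USUARIO);
//... do database stuff ...
daoManager.cerrar();                                     //close connection, statement and result set

daoManager.abrir();                                      //this creates a brand-new connection...
daoUser = (DAOUsuario) daoManager.getDAO(Tabla.USUARIO); //...so ask for a new DAO bound to it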

Related

actframework can't save in database using $.merge

I'm trying to read data from a form and save it to the database. First I read the entity from the database and use $.merge(formdata).filter("-id").to(entity). I print the value and it has changed successfully, but when I call dao.save it does nothing.
The action code is below:
#PutAction("{id}")
public void update(#DbBind("id") #NotNull Category cate,Category category, ActionContext context) {
notFoundIfNull(cate);
try {
$.merge(category).filter("-id").to(cate);
System.out.print("name is " + cate.getName());
// cate.setName("test"); // success
this.dao.save(cate);
// redirect("/admin/categories");
} catch (io.ebean.DataIntegrityException e) {
context.flash().error(e.getMessage());
render("edit", category);
}
}
dao.save succeeds when I call cate.setName("test").
Can someone help me solve this problem?
I solved this problem by myself using the following code:
public void mergeTo(Base target){
    if(!this.getClass().isAssignableFrom(target.getClass())){
        return;
    }
    Method[] methods = this.getClass().getMethods();
    for(Method fromMethod: methods){
        if(fromMethod.getDeclaringClass().equals(this.getClass())
                && fromMethod.getName().startsWith("get")){
            String fromName = fromMethod.getName();
            String toName = fromName.replace("get", "set");
            try {
                Method toMetod = target.getClass().getMethod(toName, fromMethod.getReturnType());
                Object value = fromMethod.invoke(this, (Object[])null);
                if(value != null){
                    toMetod.invoke(target, value);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}
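A hypothetical usage inside the action above, assuming both Category objects extend Base:

category.mergeTo(cate); //copies only non-null getter values, so the id is preserved
this.dao.save(cate);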

JavaFX table row color, too many database connections

I want to paint a row red when a book is out of stock, but I get an error like this every time I try a new idea to manage it:
"com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException:
Data source rejected establishment of connection, message from
server: "Too many connections"
Even if I try to close all connections in the same loop...
So here we go:
private boolean checkIfOutOfStock(BookDetail book) throws SQLException{
    String query = "select * from tbl_loan where book_id = " + book.getId() + " ";
    dc = new DbConnection();
    conn = dc.connect();
    PreparedStatement checkPst = conn.prepareStatement(query);
    ResultSet checkRs = checkPst.executeQuery(query);
    if(checkRs.next()){
        checkRs.close();
        checkPst.close();
        return true;
    } else
    {
        checkRs.close();
        checkPst.close();
        return false;
    }
}
@Override
public void initialize(URL location, ResourceBundle resources) {
    dc = new DbConnection();
    conn = dc.connect();
    selectionModel = editTabPane.getSelectionModel();
    editTableBooks.setRowFactory(tv -> new TableRow<BookDetail>() {
        @Override
        public void updateItem(BookDetail item, boolean empty) {
            super.updateItem(item, empty);
            if (item == null) {
                setStyle("");
            } else
                try {
                    if (checkIfOutOfStock(item)) {
                        setStyle("-fx-background-color: tomato;");
                    } else {
                        setStyle("");
                    }
                } catch (SQLException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
        }
    });
}
It works fine until I slide up and down the table a few times... it's like every time I scroll the table I'm opening a new connection. Any idea how to solve it?
Hey, do you mean something like this?
editTableBooks.setRowFactory(tv -> new TableRow<BookDetail>() {
    @Override
    public void updateItem(BookDetail item, boolean empty) {
        super.updateItem(item, empty);
        Platform.runLater(new Runnable() {
            @Override
            public void run() {
                if (item == null) {
                    setStyle("");
                } else
                    try {
                        if (checkIfOutOfStock(item)) {
                            setStyle("-fx-background-color: tomato;");
                        } else {
                            setStyle("");
                        }
                    } catch (SQLException e) {
                        // TODO Auto-generated catch block
                        e.printStackTrace();
                    }
            }
        });
    }
});
It doesn't change anything, or I just didn't get it :P
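For what it's worth, the leak is visible in checkIfOutOfStock itself: it builds a new DbConnection on every row update and never closes it, so each scroll burns a connection. A minimal sketch of the same check that releases everything, assuming DbConnection.connect() returns a plain java.sql.Connection:

private boolean checkIfOutOfStock(BookDetail book) throws SQLException {
    String query = "select * from tbl_loan where book_id = ?";
    //try-with-resources closes the connection, statement and result set
    //even when an exception is thrown
    try (Connection conn = new DbConnection().connect();
         PreparedStatement checkPst = conn.prepareStatement(query)) {
        checkPst.setInt(1, book.getId()); //assumes getId() returns an int
        try (ResultSet checkRs = checkPst.executeQuery()) {
            return checkRs.next();
        }
    }
}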

MySQL connection pooling with JERSEY

I'm developing a RESTful API with Jersey and MySQL.
I'm currently using the JDBC driver to connect to the database, and I create a new connection every time I want to access it. Since this is clearly a resource leak, I started to implement the ServletContextClass class, but I don't know how to call the method when I need to get the result of a SQL query.
Here is how I did it wrong:
DbConnection.java
public class DbConnection {
    public Connection getConnection() throws Exception {
        try {
            String connectionURL = "jdbc:mysql://root:port/path";
            Connection connection = null;
            Class.forName("com.mysql.jdbc.Driver").newInstance();
            connection = DriverManager.getConnection(connectionURL, "root", "password");
            return connection;
        }
        catch (SQLException e) {
            throw e;
        }
    }
}
DbData.java
public ArrayList<Product> getAllProducts(Connection connection) throws Exception {
    ArrayList<Product> productList = new ArrayList<Product>();
    try {
        PreparedStatement ps = connection.prepareStatement("SELECT id, name FROM product");
        ResultSet rs = ps.executeQuery();
        while (rs.next()) {
            Product product = new Product();
            product.setId(rs.getInt("id"));
            product.setName(rs.getString("name"));
            productList.add(product);
        }
        return productList;
    } catch (Exception e) {
        throw e;
    }
}
Resource.java
@GET
@Path("task/{taskId}")
@Consumes(MediaType.APPLICATION_JSON)
public Response getInfos(@PathParam("taskId") int taskId) throws Exception {
    try {
        DbConnection database = new DbConnection();
        Connection connection = database.getConnection();
        Task task = new Task();
        DbData dbData = new DbData();
        task = dbData.getTask(connection, taskId);
        return Response.status(200).entity(task).build();
    } catch (Exception e) {
        throw e;
    }
}
Here is where I ended up trying to implement the new class:
ServletContextClass.java
public class ServletContextClass implements ServletContextListener {

    public Connection getConnection() throws Exception {
        try {
            String connectionURL = "jdbc:mysql://root:port/path";
            Connection connection = null;
            Class.forName("com.mysql.jdbc.Driver").newInstance();
            connection = DriverManager.getConnection(connectionURL, "root", "password");
            return connection;
        } catch (SQLException e) {
            throw e;
        }
    }

    public void contextInitialized(ServletContextEvent arg0) {
        System.out.println("ServletContextListener started");
        DbConnection database = new DbConnection();
        try {
            Connection connection = database.getConnection();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void contextDestroyed(ServletContextEvent arg0) {
        System.out.println("ServletContextListener destroyed");
        //con.close ();
    }
}
But the problem is, I don't know what to do next. Any help? Thanks.
You need to set the Connection variable as an attribute of the ServletContext. Also, I would recommend using connection as a static class variable so you can close it in the contextDestroyed method.
You can retrieve the connection attribute in any of your servlets later on for doing your DB operations.
public class ServletContextClass implements ServletContextListener {

    public static Connection connection;

    public Connection getConnection(){
        try {
            String connectionURL = "jdbc:mysql://root:port/path";
            Class.forName("com.mysql.jdbc.Driver").newInstance();
            connection = DriverManager.getConnection(connectionURL, "root", "password");
        } catch (Exception e) { // Class.forName throws checked exceptions beyond SQLException
            // Do something
        }
        return connection;
    }

    public void contextInitialized(ServletContextEvent arg0) {
        System.out.println("ServletContextListener started");
        getConnection();
        arg0.getServletContext().setAttribute("connection", connection);
    }

    public void contextDestroyed(ServletContextEvent arg0) {
        System.out.println("ServletContextListener destroyed");
        try{
            if(connection != null){
                connection.close();
            }
        }catch(SQLException se){
            // Do something
        }
    }
}
Finally, access your connection attribute inside your Servlet (Resource). Make sure you pass @Context ServletContext to your Response method so you can access your context attributes.
@GET
@Path("task/{taskId}")
@Consumes(MediaType.APPLICATION_JSON)
public Response getInfos(@PathParam("taskId") int taskId, @Context ServletContext context) throws Exception {
    try {
        Connection connection = (Connection) context.getAttribute("connection");
        Task task = new Task();
        DbData dbData = new DbData();
        task = dbData.getTask(connection, taskId);
        return Response.status(200).entity(task).build();
    } catch (Exception e) {
        throw e;
    }
}
Now that we have solved your current issue, we need to know what can go wrong with this approach.
Firstly, you are only creating one connection object which will be used everywhere. Imagine multiple users simultaneously accessing your API: the single connection will be shared among all of them, which will slow down your response time.
Secondly, your connection to DB will die after sitting idle for a while (unless you configure MySql server not to kill idle connections which is not a good idea), and when you try to access it, you will get SQLExceptions thrown all over. This can be solved inside your servlet, you can check if your connection is dead, create it again, and then update the context attribute.
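A minimal sketch of that dead-connection check inside the resource method (Connection.isValid is standard JDBC 4; the 2 is a timeout in seconds):

Connection connection = (Connection) context.getAttribute("connection");
//recreate and re-publish the connection if it is missing or has gone stale
if (connection == null || !connection.isValid(2)) {
    connection = new ServletContextClass().getConnection();
    context.setAttribute("connection", connection);
}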
The best way to go about your Mysql Connection Pool will be to use a JNDI resource. You can create a pool of connections which will be managed by your servlet container. You can configure the pool to recreate connections if they go dead after sitting idle. If you are using Tomcat as your Servlet Container, you can check this short tutorial to get started with understanding the JNDI connection pool.
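As a rough sketch of the JNDI approach (the pool name jdbc/MyPool and its settings are made up for illustration), the container owns the pool and the resource class only borrows from it:

// Declared once in the container, e.g. Tomcat's context.xml:
// <Resource name="jdbc/MyPool" auth="Container" type="javax.sql.DataSource"
//           driverClassName="com.mysql.jdbc.Driver"
//           url="jdbc:mysql://localhost:3306/path"
//           username="root" password="password"/>
InitialContext ctx = new InitialContext();
DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/MyPool");
try (Connection connection = ds.getConnection()) { //borrows from the pool
    // ... run queries ...
} //close() here returns the connection to the pool instead of destroying it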

JavaFX, SceneBuilder, Populating TableView with MySQL Result Set

I have finally overcome my issue with an NPE in my code whilst learning FX/FXML. However, I now have a different problem: a window opens with my TableView, but there is no content in the table at all. As you can see, I have printed out the job list to make sure content is being returned, and it returns three jobs (the correct amount). Am I missing something that binds the table to the returned list?
Here is the code;
public class SecondInterface implements Initializable {

    private JobDataAccessor jAccessor;
    private String aQuery = "SELECT * FROM progdb.adamJobs";
    private Parent layout;
    private Connection connection;

    @FXML
    TableView<Job> tView;

    public void newI(Connection connection) throws Exception {
        Stage primaryStage;
        primaryStage = MainApp.primaryStage;
        this.connection = connection;
        System.out.println(connection);
        FXMLLoader fxmlLoader = new FXMLLoader(getClass().getResource("Test1.fxml"));
        fxmlLoader.setController(this);
        try {
            layout = (Parent) fxmlLoader.load();
        } catch (IOException exception) {
            throw new RuntimeException(exception);
        }
        primaryStage.getScene().setRoot(layout);
    }

    public Parent getLayout() {
        return layout;
    }

    @Override
    public void initialize(URL url, ResourceBundle rb) {
        jAccessor = new JobDataAccessor();
        try {
            System.out.println("This connection: " + connection);
            System.out.println("This query: " + aQuery);
            List<Job> jList = jAccessor.getJobList(connection, aQuery);
            for (Job j : jList) {
                System.out.println(j);
            }
            tView.getItems().addAll(jAccessor.getJobList(connection, aQuery));
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}
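Nothing in this snippet gives the table columns a cellValueFactory, which would produce exactly this symptom: rows are added but every cell stays blank. A hedged sketch, assuming Job exposes conventional getters (the column and property names here are hypothetical):

//Each column needs a cellValueFactory to pull a value out of a Job row.
TableColumn<Job, Integer> idCol = new TableColumn<>("Id");
idCol.setCellValueFactory(new PropertyValueFactory<>("jobId"));         //calls Job#getJobId()
TableColumn<Job, String> descCol = new TableColumn<>("Description");
descCol.setCellValueFactory(new PropertyValueFactory<>("description")); //calls Job#getDescription()
tView.getColumns().setAll(idCol, descCol);

The same wiring can also be declared on each TableColumn in Test1.fxml instead of in code.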

Read Time Out Exception in Cassandra using cassandra-driver-core

I am writing a Java application which reads data from MySQL and stores it in Cassandra, as Sqoop does not support a direct import to Cassandra. I am using a producer-consumer framework to achieve this because of the high number of records (in the millions) in MySQL. But I am getting a read timeout exception (com.datastax.driver.core.exceptions.DriverException: Timeout during read). I have one producer class which reads the data from MySQL and puts it into a queue, and one consumer class which reads the data from that queue and pushes it to Cassandra. There is one manager class which acts as a coordination bridge between these two classes.
Producer class :-
public class MySQLPrintJobProducer implements Runnable {

    private BlockingQueue<PrintJobDAO> printerJobQueue = null;
    private Connection conn = null;

    public MySQLPrintJobProducer(BlockingQueue<PrintJobDAO> printerJobQueue) throws MySQLClientException {
        this.printerJobQueue = printerJobQueue;
        connect();
    }

    private void connect() throws MySQLClientException {
        try {
            Class.forName(MySQLClientConstants.MYSQL_JDBC_DRIVER);
            conn = DriverManager.getConnection("jdbc:mysql://mysqlserverhose/mysqldb?user=mysqluser&password=mysqlpasswd");
        } catch (ClassNotFoundException e) {
            throw new MySQLClientException(ExceptionUtils.getStackTrace(e));
        } catch (SQLException e) {
            throw new MySQLClientException(ExceptionUtils.getStackTrace(e));
        }
    }

    public void run() {
        ResultSet rs = null;
        Statement stmt = null;
        PreparedStatement pStmt = null;
        try {
            stmt = conn.createStatement();
            // Get total number of print jobs stored.
            rs = stmt.executeQuery(MySQLClientConstants.PRINT_JOB_COUNT_QUERY);
            int totalPrintJobs = 0;
            if(rs != null) {
                while(rs.next()) {
                    totalPrintJobs = rs.getInt(1);
                }
            }
            // Determine the number of iterations.
            int rowOffset = 1;
            int totalIteration = ((totalPrintJobs / ExportManagerConstants.DATA_TRANSFER_BATCH_SIZE) + 1);
            pStmt = conn.prepareStatement(MySQLClientConstants.PRINT_JOB_FETCH_QUERY);
            int totalRecordsFetched = 0;
            // Iterate over to fetch Print Job records in batches and put them into the queue.
            for(int i = 1; i <= totalIteration; i++) {
                pStmt.setInt(1, rowOffset);
                pStmt.setInt(2, ExportManagerConstants.DATA_TRANSFER_BATCH_SIZE);
                System.out.println("In iteration : " + i + ", Row Offset : " + rowOffset);
                rs = pStmt.executeQuery();
                synchronized (this.printerJobQueue) {
                    if(this.printerJobQueue.remainingCapacity() > 0) {
                        while(rs.next()) {
                            totalRecordsFetched = rs.getRow();
                            printerJobQueue.offer(new PrintJobDAO(rs.getInt(1), rs.getInt(2), rs.getString(3), rs.getDate(4),
                                    rs.getTimestamp(5), rs.getInt(6), rs.getInt(7), rs.getInt(8), rs.getInt(9),
                                    rs.getInt(10), rs.getFloat(11), rs.getFloat(12), rs.getInt(13), rs.getFloat(14), rs.getInt(15),
                                    rs.getDouble(16), rs.getDouble(17), rs.getDouble(18), rs.getDouble(19), rs.getDouble(20),
                                    rs.getFloat(21)));
                            this.printerJobQueue.notifyAll();
                        }
                        System.out.println("In iteration : " + i + ", Records Fetched : " + totalRecordsFetched +
                                ", Queue Size : " + printerJobQueue.size());
                        rowOffset += ExportManagerConstants.DATA_TRANSFER_BATCH_SIZE;
                    } else {
                        System.out.println("Print Job Queue is full, waiting for Consumer thread to clear.");
                        this.printerJobQueue.wait();
                    }
                }
            }
        } catch (SQLException e) {
            System.err.println(ExceptionUtils.getStackTrace(e));
        } catch (InterruptedException e) {
            System.err.println(ExceptionUtils.getStackTrace(e));
        } finally {
            try {
                if(null != rs) {
                    rs.close();
                }
                if(null != stmt) {
                    stmt.close();
                }
                if(null != pStmt) {
                    pStmt.close();
                }
            } catch (SQLException e) {
                System.err.println(ExceptionUtils.getStackTrace(e));
            }
        }
        ExportManager.setProducerCompleted(true);
    }
}
Consumer Class :-
public class CassandraPrintJobConsumer implements Runnable {

    private Cluster cluster = null;
    private Session session = null;
    private BlockingQueue<PrintJobDAO> printerJobQueue = null;

    public CassandraPrintJobConsumer(BlockingQueue<PrintJobDAO> printerJobQueue) throws CassandraClientException {
        this.printerJobQueue = printerJobQueue;
        cluster = Cluster.builder().withPort(9042).addContactPoint("http://cassandrahost").build();
    }

    public void run() {
        int printJobConsumed = 0;
        int batchInsertCount = 1;
        if(cluster.isClosed()) {
            connect();
        }
        session = cluster.connect();
        PreparedStatement ps = session.prepare(CassandraClientConstants.INSERT_PRINT_JOB_DATA);
        BatchStatement batch = new BatchStatement();
        synchronized (this.printerJobQueue) {
            while(true) {
                if(!this.printerJobQueue.isEmpty()) {
                    for(int i = 1; i <= ExportManagerConstants.DATA_TRANSFER_BATCH_SIZE; i++) {
                        PrintJobDAO printJob = printerJobQueue.poll();
                        batch.add(ps.bind(printJob.getJobID(), printJob.getUserID(), printJob.getType(), printJob.getGpDate(), printJob.getDateTimes(),
                                printJob.getAppName(), printJob.getPrintedPages(), printJob.getSavedPages(), printJob.getPrinterID(), printJob.getWorkstationID(),
                                printJob.getPrintedCost(), printJob.getSavedCost(), printJob.getSourcePrinterID(), printJob.getSourcePrinterPrintedCost(),
                                printJob.getJcID(), printJob.getCoverageC(), printJob.getCoverageM(), printJob.getCoverageY(), printJob.getCoverageK(),
                                printJob.getCoverageTotal(), printJob.getPagesAnalyzed()));
                        printJobConsumed++;
                    }
                    session.execute(batch);
                    System.out.println("After Batch - " + batchInsertCount + ", record insert count : " + printJobConsumed);
                    batchInsertCount++;
                    this.printerJobQueue.notifyAll();
                } else {
                    System.out.println("Print Job Queue is empty, nothing to export.");
                    try {
                        this.printerJobQueue.wait();
                    } catch (InterruptedException e) {
                        System.err.println(ExceptionUtils.getStackTrace(e));
                    }
                }
                if(ExportManager.isProducerCompleted() && this.printerJobQueue.isEmpty()) {
                    break;
                }
            }
        }
    }
}
Manager Class :-
public class ExportManager {

    private static boolean isInitalized = false;
    private static boolean producerCompleted = false;
    private static MySQLPrintJobProducer printJobProducer = null;
    private static CassandraPrintJobConsumer printJobConsumer = null;
    private static BlockingQueue<PrintJobDAO> printJobQueue = null;

    public static boolean isProducerCompleted() {
        return producerCompleted;
    }

    public static void setProducerCompleted(boolean producerCompleted) {
        ExportManager.producerCompleted = producerCompleted;
    }

    private static void init() throws MySQLClientException, CassandraClientException {
        if(!isInitalized) {
            printJobQueue = new LinkedBlockingQueue<PrintJobDAO>(ExportManagerConstants.DATA_TRANSFER_BATCH_SIZE * 2);
            printJobProducer = new MySQLPrintJobProducer(printJobQueue);
            printJobConsumer = new CassandraPrintJobConsumer(printJobQueue);
            isInitalized = true;
        }
    }

    public static void exportPrintJobs() throws ExportException {
        try {
            init();
        } catch (MySQLClientException e) {
            throw new ExportException("Print Job Export failed.", e);
        } catch (CassandraClientException e) {
            throw new ExportException("Print Job Export failed.", e);
        }
        Thread producerThread = new Thread(printJobProducer);
        Thread consumerThread = new Thread(printJobConsumer);
        consumerThread.start();
        producerThread.start();
    }
}
TestNG class :-
public class TestExportManager {

    @Test
    public void testExportPrintJobs() {
        try {
            ExportManager.exportPrintJobs();
            Thread.currentThread().join();
        } catch (ExportException e) {
            Assert.fail("ExportManager.exportPrintJobs() failed.", e);
        } catch (InterruptedException e) {
            Assert.fail("ExportManager.exportPrintJobs() failed.", e);
        }
    }
}
I have also made some configuration changes by following this link. Still, I am getting the following exception after inserting 18,000 to 20,000 records.
Exception in thread "Thread-2" com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /192.168.10.80
(com.datastax.driver.core.exceptions.DriverException: Timeout during read))
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:64)
at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:256)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:172)
at com.datastax.driver.core.SessionManager.execute(SessionManager.java:91)
at com.incendiary.ga.client.cassandra.CassandraPrintJobConsumer.run(CassandraPrintJobConsumer.java:108)
at java.lang.Thread.run(Unknown Source)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /192.168.10.80 (com.datastax.drive
r.core.exceptions.DriverException: Timeout during read))
at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:100)
at com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:171)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
... 1 more
I am not able to figure out the actual reason for the issue. I could not find any exception in the Cassandra system log. I am using Apache Cassandra 2.0.7 and cassandra-driver-core 2.0.1.
You can increase the read timeout on the driver side by using the withSocketOptions method when building the Cluster; it takes a SocketOptions object on which you can set the read timeout. The default read timeout is fairly short (12 seconds in the 2.0 driver).
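For example (the 60-second value is arbitrary; note the contact point is a bare hostname, not a URL):

//Raise the client-side read timeout before building the Cluster.
SocketOptions socketOptions = new SocketOptions();
socketOptions.setReadTimeoutMillis(60000); //60 s instead of the default

Cluster cluster = Cluster.builder()
        .addContactPoint("cassandrahost")  //hostname or IP only, no "http://"
        .withPort(9042)
        .withSocketOptions(socketOptions)
        .build();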