ATG Connection Threads - MySQL

I am facing an issue where I am getting the following error:
CONTAINER:atg.repositoryException; SOURCE:java.sql.SQLException: Unexpected exception while enlisting XAConnection java.sql.SQLException: XA error: XAResource.XAER_RMFAIL start() failed on resource 'ATGProductionDS_atg11': XAResource.XAER_RMFAIL: Resource manager is unavailable.
How do I solve this?
Do ATG connection threads get closed implicitly, or do we have to close them explicitly?
Do they get closed implicitly after the updateItem() and addItem() methods?
How can we close an ATG session thread explicitly?

There are potentially a number of causes for this issue. There is a support document on the Oracle Support portal that helps you work out where the issue lies (you will have to register for Oracle Support access). Common causes include:
A slow-running query that never times out due to a misconfiguration
Datasource timeouts that are not set up correctly between WebLogic and the ATG application (see the sketch below)
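As a quick check on the second cause, make sure WebLogic validates pooled connections before handing them to ATG, and that the XA transaction timeout is at least as long as your longest-running query. Below is a minimal sketch of the relevant elements of a WebLogic JDBC module descriptor; the element names follow the standard WebLogic JDBC schema, but the values are placeholders you would tune for your environment:

<jdbc-connection-pool-params>
  <!-- validate each connection as it is reserved from the pool -->
  <test-connections-on-reserve>true</test-connections-on-reserve>
  <test-table-name>SQL SELECT 1</test-table-name>
  <!-- periodically test idle connections (seconds; placeholder value) -->
  <test-frequency-seconds>120</test-frequency-seconds>
</jdbc-connection-pool-params>
<jdbc-xa-params>
  <!-- apply a timeout to XA transactions (seconds; placeholder value) -->
  <xa-set-transaction-timeout>true</xa-set-transaction-timeout>
  <xa-transaction-timeout>600</xa-transaction-timeout>
</jdbc-xa-params>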

Related

Azure Database for MySQL 5.7 transient handling in .NET Core

I am creating a .NET Core 2.1 MVC application and using Azure Database for MySQL 5.7.
I have read the links below, but they seem to apply only to MS SQL databases.
https://learn.microsoft.com/en-us/azure/mysql/concepts-high-availability
https://learn.microsoft.com/en-us/azure/architecture/best-practices/retry-service-specific
Is transient handling for MySQL not possible? Please link me to similar MySQL-related pages.
A transient error, also known as a transient fault, is an error that will resolve itself. Most typically, these errors manifest as a dropped connection to the database server, or as new connections to the server failing to open. Transient errors can occur, for example, when a hardware or network failure happens.
Transient errors should be handled using retry logic. Situations that must be considered:
An error occurs when you try to open a connection
An idle connection is dropped on the server side; when you try to issue a command, it can't be executed
An active connection that is currently executing a command is dropped
The first and second cases are fairly straightforward to handle: try to open the connection again. When you succeed, the transient error has been mitigated by the system, and you can use your Azure Database for MySQL again. We recommend waiting before retrying the connection and backing off if the initial retries fail, so the system can use all available resources to overcome the error situation. A good pattern to follow is (see the sketch after this list):
Wait 5 seconds before your first retry.
For each subsequent retry, increase the wait exponentially, up to 60 seconds.
Set a maximum number of retries, at which point your application considers the operation failed.
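This pattern is language-agnostic; in .NET it is commonly implemented with a retry library such as Polly. Here is a minimal sketch in JavaScript using the node mysql package (the language of the other examples in this document); the config object, retry limit, and wait values are illustrative assumptions:

var mysql = require('mysql');

// Open a connection, retrying transient failures with exponential backoff.
function connectWithRetry(config, maxRetries, done) {
  var attempt = 0;
  function tryConnect() {
    var connection = mysql.createConnection(config);
    connection.connect(function (err) {
      if (!err) return done(null, connection); // success; the transient error has passed
      connection.destroy(); // discard the failed connection
      attempt += 1;
      if (attempt > maxRetries) return done(err); // give up; consider the operation failed
      // wait 5 seconds before the first retry, then double each time, capped at 60 seconds
      var waitMs = Math.min(5000 * Math.pow(2, attempt - 1), 60000);
      setTimeout(tryConnect, waitMs);
    });
  }
  tryConnect();
}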
Read more here.
And you can read more on how to troubleshoot connection issues in "Troubleshoot connection issues to Azure Database for MySQL".

JDBC4 Communications Exception

I seem to have a problem where I get a communications exception when I try to write to the database. It seems to happen after a period of inactivity, but I'm not sure.
Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.6.1.v20150605-31e8258): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 40,404,396 milliseconds ago. The last packet sent successfully to the server was 40,404,396 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
I tried setting the timeout higher and added autoReconnect=true to the connection string. When the exception is thrown, it retries 4 times and then stops.
The funny thing is that the database is located on the same server as the application server. How come this happens, and how do I fix this?
I really hope you guys can help me!
Best regards,
Ben
EDIT
As pointed out, this question has been asked in multiple places. Unfortunately, the proposed solutions didn't help me: either the posters had an actual communications failure (not running MySQL locally on the server) or they fixed it with another connection string. I've tried everything that was suggested and am still getting the error.
Latest error from the weekend is shown here: http://pastebin.com/wMb7Ygwd

MySQL giving "read ECONNRESET" error after idle time on node.js server

I'm running a Node server connecting to MySQL via the node-mysql module. Connecting to and querying MySQL works great initially without any errors, however, the first query after leaving the Node server idle for a couple hours results in an error. The error is the familiar read ECONNRESET, coming from the depths of the node-mysql module.
A stack trace (note that the first three entries of the trace belong to my app's error-reporting code):
Error
at exports.Error.utils.createClass.init (D:\home\site\wwwroot\errors.js:180:16)
at new newclass (D:\home\site\wwwroot\utils.js:68:14)
at Query._callback (D:\home\site\wwwroot\db.js:281:21)
at Query.Sequence.end (D:\home\site\wwwroot\node_modules\mysql\lib\protocol\sequences\Sequence.js:78:24)
at Protocol.handleNetworkError (D:\home\site\wwwroot\node_modules\mysql\lib\protocol\Protocol.js:271:14)
at PoolConnection.Connection._handleNetworkError (D:\home\site\wwwroot\node_modules\mysql\lib\Connection.js:269:18)
at Socket.EventEmitter.emit (events.js:95:17)
at net.js:441:14
at process._tickCallback (node.js:415:13)
This error happens both on my cloud-hosted Node and MySQL servers and on a local setup of both.
My questions:
Does this problem appear to be a disconnection of Node's connection to my MySQL server(s), perhaps due to a connection lifetime limitation?
When using connection pools, node-mysql is supposed to gracefully handle disconnections and prune them from the pool. Is it not aware of the disconnect until I make a query, thus making the error unavoidable?
Considering that I see the "read ECONNRESET" error a lot in other StackOverflow posts, should I be looking elsewhere from MySQL to diagnose the problem?
Update: After more browsing, I think my issue is a duplicate of this one. It appears his connection is disconnecting as well, but no one has suggested how to keep the connection alive or how to address the error outside of failing on the first query back.
I reached out to the node-mysql folks on their Github page and got some firm answers.
MySQL does indeed prune idle connections. The MySQL variable wait_timeout sets the number of seconds before an idle connection is closed; the default is 8 hours (28800 seconds), and we can set it much larger than that. Use show variables like 'wait_timeout'; to view your timeout setting and set wait_timeout=28800; to change it.
According to this issue, node-mysql doesn't prune pool connections after these sorts of disconnections. The module developers recommended using a heartbeat to keep the connection alive such as calling SELECT 1; on an interval. They also recommended using the node-pool module and its idleTimeoutMillis option to automatically prune idle connections.
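A minimal sketch of the heartbeat approach with a node-mysql pool (the five-minute interval is an assumption; it just needs to be comfortably shorter than wait_timeout):

var mysql = require('mysql');
var pool = mysql.createPool({ /* your connection settings */ });

// Heartbeat: touch the database periodically so idle pooled connections
// are not dropped by MySQL's wait_timeout.
setInterval(function () {
  pool.query('SELECT 1;', function (err) {
    if (err) console.error('MySQL keepalive failed:', err);
  });
}, 5 * 60 * 1000);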
If this happens when establishing a single reused connection, it can be avoided by establishing a connection pool instead.
For example, if you're doing something like this...
var db = require('mysql').createConnection({...});
db.connect(function (err) {});
do this instead...
var db = require('mysql')
.createPool({...});
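You can then query the pool directly; each call checks out an underlying connection and returns it to the pool when the query completes. For example:

db.query('SELECT 1;', function (err, rows) {
  if (err) throw err;
  console.log(rows);
});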
Does this problem appear to be a disconnection of Node's connection to my MySQL server(s), perhaps due to a connection lifetime limitation?
Yes. The server has closed its end of the connection.
When using connection pools, node-mysql is supposed to gracefully handle disconnections and prune them from the pool. Is it not aware of the disconnect until I make a query, thus making the error unavoidable?
Correct, but it should handle the error internally, not pass it back to you. This appears to be a bug in node-mysql. Report it.
Considering that I see the "read ECONNRESET" error a lot in other StackOverflow posts, should I be looking elsewhere from MySQL to diagnose the problem?
It is either a bug in the node-mysql connection pool implementation, or else you haven't configured it properly to detect failures.
I have also been facing the same issue. Apparently it was happening because a backend process had been triggered on a table that was referenced by my API.
This caused the table to go into a lock-wait state, and my query failed with a connection reset. Though I'm wondering why I didn't receive a lock-wait error instead.

BoneCP connection closed unexpectedly

I'm integrating DataNucleus with BoneCP-0.8.0-rc2 and I'm getting this exception, randomly:
javax.jdo.JDODataStoreException: No operations allowed after connection closed.
at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:421)
at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:230)
After reading this post, I have set
datanucleus.connectionPool.maxConnectionAgeInSeconds=170
Other properties that I use:
datanucleus.connectionPool.minPoolSize=0
datanucleus.connectionPool.maxPoolSize=8
The local MySQL server where I tested this property has wait_timeout=28800 (seconds).
Since I added this new property, I'm getting the above exception more often than before.
Since the exception doesn't explicitly specify that the connection is closed by the driver, I assume it was closed by the connection manager.
Do you have any other clue what might cause this exception?

Play! 2.0 - BoneCP Returning Closed Connections

I have an interesting issue which I have not been able to resolve. I am using Play! 2.0.4 and using the integrated BoneCP connection pool to get the DB connections. However, for some reason, BoneCP keeps returning closed connections.
Database Server: Amazon RDS MySQL 5, default timeout settings (which should be 8 hours...)
My Play Datasource configuration looks as follows:
db.default.driver=com.mysql.jdbc.Driver
db.default.url="jdbc:mysql://{server}/{schema}?autoReconnect=true&useUnicode=yes&characterEncoding=UTF-8"
db.default.partitionCount=4
db.default.idleConnectionTestPeriod=2 minutes
I had assumed setting the idleConnectionTestPeriod to 2 minutes surely would have prevented BoneCP from returning closed connections, but it hasn't.
Every so often, I get the following stack trace in my logs:
Exception in thread "pool-6-thread-25" java.sql.SQLException: Connection is closed!
at com.jolbox.bonecp.ConnectionHandle.checkClosed(ConnectionHandle.java:350)
at com.jolbox.bonecp.ConnectionHandle.setReadOnly(ConnectionHandle.java:1089)
at play.api.db.BoneCPApi$$anon$1.onCheckOut(DB.scala:328)
at com.jolbox.bonecp.BoneCP.getConnection(BoneCP.java:514)
at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:114)
at play.api.db.DBApi$class.getConnection(DB.scala:64)
at play.api.db.BoneCPApi.getConnection(DB.scala:273)
at play.api.db.DB$$anonfun$getConnection$1.apply(DB.scala:129)
at play.api.db.DB$$anonfun$getConnection$1.apply(DB.scala:129)
at scala.Option.map(Option.scala:133)
at play.api.db.DB$.getConnection(DB.scala:129)
at play.api.db.DB.getConnection(DB.scala)
at play.db.DB.getConnection(DB.java:50)
at play.db.DB.getConnection(DB.java:43)
at play.db.DB.getConnection(DB.java:29)
at com.edatasource.inboxtracker.tasks.TrackSiteEventActionTask.run(TrackSiteEventActionTask.java:23)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Does anybody know how I can fix this issue? Currently, I've had to wrap the DB.getConnection() call in a try/catch, catch the exception thrown by BoneCP, and retry until I retrieve a valid connection. That seems like it should be unnecessary.
Thanks for any help.
Please try with 0.8.0-beta1. There was a bug related to this.
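If upgrading alone doesn't resolve it, another angle is to have BoneCP retire connections before the server's timeout does. Assuming your Play version exposes these BoneCP settings (check the documentation for your exact version; the values below are placeholders):

db.default.idleMaxAge=7 minutes
db.default.maxConnectionAge=30 minutes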