com.mysql.jdbc.exceptions.MySQLTimeoutException: Statement cancelled due to time out or client request
com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1442)
Some code:
@Override
public synchronized ResultSet query(String s) throws SQLException {
    Statement statement = connection.createStatement();
    statement.setQueryTimeout(3); // cancel the query if it runs longer than 3 seconds
    return statement.executeQuery(s);
}
So I'm saving user data to SQL and this sometimes happens. It's very rare, but still, I would like to know why it could be happening.
Your client opens a connection to your service or database.
By default there is a timeout, often around 30 seconds, set in your Apache server, in PHP, or possibly in MySQL itself, depending on what you use.
So if your request takes longer than that timeout, the connection is stopped and you get this error.
Hope this helps you ;)
For more information, please give us your configuration: client, server type, Apache or not, services, and so on.
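Note that the snippet in the question also sets a 3-second limit via setQueryTimeout(3); Connector/J enforces that limit by cancelling the statement and throwing MySQLTimeoutException, so long-running queries will hit it regardless of any server-side settings. Below is a minimal sketch of handling that case explicitly; the class name and the logging choice are illustrative, not from the question:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

import com.mysql.jdbc.exceptions.MySQLTimeoutException;

public class QueryRunner {
    private final Connection connection;

    public QueryRunner(Connection connection) {
        this.connection = connection;
    }

    public synchronized ResultSet query(String sql) throws SQLException {
        Statement statement = connection.createStatement();
        statement.setQueryTimeout(3); // the driver cancels the statement after 3 seconds
        try {
            return statement.executeQuery(sql);
        } catch (MySQLTimeoutException e) {
            // Record which query blew the 3-second budget, then rethrow
            // (or retry with a longer timeout, if that suits the use case).
            System.err.println("Query exceeded 3s and was cancelled: " + sql);
            throw e;
        }
    }
}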
I have a Sparkjava app which I have deployed on a Tomcat server. It uses SQL2O to interface with the MySQL database. After some time I start to have trouble connecting to the database. I've tried connecting directly from SQL2O, connecting through HikariCP, and connecting through JNDI. They all work for about a day before I start getting "Communications link failure". This app gets hit a handful of times a day at best, so performance is a complete non-issue. I want to configure the app to use one database connection per request. How do I go about that?
The app doesn't come back online afterwards until I redeploy it (overwrite ROOT.war again). Restarting Tomcat or the entire server does nothing.
Currently every request creates a new Sql2o object and executes the query using withConnection. I'd be highly surprised if I were leaking any connections.
Here's some example code (simplified).
public class UserRepository {
    static {
        try {
            Class.forName("com.mysql.jdbc.Driver");
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
    }

    protected Sql2o sql2o = new Sql2o("jdbc:mysql://mysql.server.name/dbname?serverTimezone=UTC", "username", "password");

    public List<User> getUsers() {
        return sql2o.withConnection((c, o) -> {
            return c.createQuery(
                "SELECT\n" +
                "  id,\n" +
                "  name\n" +
                "FROM users"
            )
            .executeAndFetch(User.class);
        });
    }
}
public class Main {
    public static void main(String[] args) {
        val gson = new Gson();
        port(8080);

        get("/users", (req, res) -> {
            return new UserRepository().getUsers();
        }, gson::toJson);
    }
}
If you rely on Tomcat to provide the connection, it's coming from a pool. If you don't like that, just go with plain old JDBC and open the connection yourself (and make sure to close it as well).
So much for the answer to your question, to the letter. Now for the spirit: there's nothing wrong with connections coming from a pool. In all cases it's your responsibility to handle them properly: get access to a connection and free it up (close it) when you're done. It makes no difference whether the connection came from a pool or was created manually.
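For illustration, here is a minimal sketch of the one-connection-per-request approach with plain JDBC, reusing the URL and credentials from the question; the User(long, String) constructor is an assumption about your model:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class UserRepository {
    private static final String URL =
        "jdbc:mysql://mysql.server.name/dbname?serverTimezone=UTC";

    // One connection per request: opened here, and closed by
    // try-with-resources even if the query throws.
    public List<User> getUsers() throws SQLException {
        try (Connection c = DriverManager.getConnection(URL, "username", "password");
             PreparedStatement ps = c.prepareStatement("SELECT id, name FROM users");
             ResultSet rs = ps.executeQuery()) {
            List<User> users = new ArrayList<>();
            while (rs.next()) {
                // Assumes a User(long, String) constructor; adjust to your model.
                users.add(new User(rs.getLong("id"), rs.getString("name")));
            }
            return users;
        }
    }
}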
As you say performance is not an issue: note that creating a connection may take some time, so even if the machine is largely idle, opening a new connection per request may have a noticeable effect on performance. Your server won't overheat, but it might add a second or two to the request turnaround time.
Check the configuration of your pool, e.g. validationQuery (to detect communication failures) or limits on uses per connection, and make sure that you don't run into those issues because of bugs in your code. You'll need to handle communication errors anyway, and again, that handling doesn't differ whether you use a pool or not. A sketch of such pool settings follows.
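If you stay with a pool, HikariCP (which the question already tried) can be pointed at the usual stale-connection suspects via maxLifetime and a validation query. A sketch under those assumptions; the values are illustrative:

import java.util.concurrent.TimeUnit;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import org.sql2o.Sql2o;

public class Database {
    public static Sql2o create() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://mysql.server.name/dbname?serverTimezone=UTC");
        config.setUsername("username");
        config.setPassword("password");
        config.setMaximumPoolSize(2); // tiny pool: the app sees little traffic
        // Recycle connections before the server closes them; MySQL's default
        // wait_timeout is 8 hours, so stay comfortably below it.
        config.setMaxLifetime(TimeUnit.MINUTES.toMillis(30));
        // Validation query for drivers that don't support JDBC4's isValid().
        config.setConnectionTestQuery("SELECT 1");
        return new Sql2o(new HikariDataSource(config));
    }
}

With maxLifetime below the server's idle timeout, the pool retires connections before MySQL can kill them, which is a common cause of "Communications link failure" after quiet periods.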
Edit: And finally: are you extra, extra sure that there is indeed no communication link failure? Like a database or router being unplugged every night to plug in the vacuum cleaner (no pun intended), a firewall dropping or resetting idle connections, etc.?
I have a question regarding the flow of Go code.
In my main function, I am opening a MySQL connection and then using `defer` to close the connection at the end of main.
I have a route where a WebSocket is set up and used.
My question is: will the program open a connection every time the WebSocket is used to send and receive a message, or will it open just once when the page is loaded?
Here is what my code looks like:
package main

import (
    // Loading various packages
)

func main() {
    // Opening DB connection -> *sql.DB
    db := openMySql()

    // Closing DB connection at the end of main
    defer db.Close()

    // Route for the "websocket" endpoint
    app.Get("/ws", wsHandler(db))

    // Another route using the "WebSocket" endpoint.
    app.Get("/message", message(db))
}
Now, while a user is on the "/message" route, whenever he sends a message to another user, will a MySQL open-and-close-connection event happen every time a message is sent and received via the "/ws" route?
Or will it happen just once, when the "/message" route and the "/ws" event are called for the first time?
My purpose in using "db" in the "wsHandler" function is to verify whether the user has permission to send a message to the particular room or not.
But there is no point opening and closing a connection every second while the WebSocket emits message or typing events.
What would be the best way to handle permission checking in the "/ws" route, if the above code is a horror? Consider the fact that there will be a few hundred thousand concurrent users.
Assuming db is a *sql.DB, your code seems fine. I'm also assuming that your example is incomplete and your main does not actually return right away.
The docs on Open state:
The returned DB is safe for concurrent use by multiple goroutines and
maintains its own pool of idle connections. Thus, the Open function
should be called just once. It is rarely necessary to close a DB.
So wsHandler and message should be OK to use it as they please, as long as they don't close the DB themselves.
We are running a Sails.js API on Google Container Engine with a Cloud SQL database and recently we've been finding some of our endpoints have been stalling, never sending a response.
I had a health check monitoring /v1/status, and it registered 100% uptime when I had the following simple response:
status: function(req, res) {
    res.ok('Welcome to the API');
}
As soon as we added a database query, the endpoint started timing out. It doesn't happen all the time, but seemingly at random intervals, sometimes for hours on end. This is what we changed the query to:
status: function(req, res) {
    Email.findOne({ value: "someone@example.com" }).then(function(email) {
        res.ok('Welcome to the API');
    }).fail(function(err) {
        res.serverError(err);
    });
}
Rather suspiciously, this all works fine in our staging and development environments; it's only when the code is deployed to production that the timeout occurs, and even then only some of the time. The only things that change between staging and production are the database we connect to and the load on the server.
As I mentioned earlier, we are using Google Cloud SQL and the Sails-MySQL adapter. We have the following error stacks from the production server:
AdapterError: Invalid connection name specified
at getConnectionObject (/app/node_modules/sails-mysql/lib/adapter.js:1182:35)
at spawnConnection (/app/node_modules/sails-mysql/lib/adapter.js:1097:7)
at Object.module.exports.adapter.find (/app/node_modules/sails-mysql/lib/adapter.js:801:16)
at module.exports.find (/app/node_modules/sails/node_modules/waterline/lib/waterline/adapter/dql.js:120:13)
at module.exports.findOne (/app/node_modules/sails/node_modules/waterline/lib/waterline/adapter/dql.js:163:10)
at _runOperation (/app/node_modules/sails/node_modules/waterline/lib/waterline/query/finders/operations.js:408:29)
at run (/app/node_modules/sails/node_modules/waterline/lib/waterline/query/finders/operations.js:69:8)
at bound.module.exports.findOne (/app/node_modules/sails/node_modules/waterline/lib/waterline/query/finders/basic.js:78:16)
at bound [as findOne] (/app/node_modules/sails/node_modules/lodash/dist/lodash.js:729:21)
at Deferred.exec (/app/node_modules/sails/node_modules/waterline/lib/waterline/query/deferred.js:501:16)
at tryCatcher (/app/node_modules/sails/node_modules/waterline/node_modules/bluebird/js/main/util.js:26:23)
at ret (eval at <anonymous> (/app/node_modules/sails/node_modules/waterline/node_modules/bluebird/js/main/promisify.js:163:12), <anonymous>:13:39)
at Deferred.toPromise (/app/node_modules/sails/node_modules/waterline/lib/waterline/query/deferred.js:510:61)
at Deferred.then (/app/node_modules/sails/node_modules/waterline/lib/waterline/query/deferred.js:521:15)
at Strategy._verify (/app/api/services/passport.js:31:7)
at Strategy.authenticate (/app/node_modules/passport-local/lib/strategy.js:90:12)
at attempt (/app/node_modules/passport/lib/middleware/authenticate.js:341:16)
at authenticate (/app/node_modules/passport/lib/middleware/authenticate.js:342:7)
at Object.AuthController.login (/app/api/controllers/AuthController.js:119:5)
at bound (/app/node_modules/sails/node_modules/lodash/dist/lodash.js:729:21)
at routeTargetFnWrapper (/app/node_modules/sails/lib/router/bind.js:179:5)
at callbacks (/app/node_modules/sails/node_modules/express/lib/router/index.js:164:37)
Error (E_UNKNOWN) :: Encountered an unexpected error :
Could not connect to MySQL: Error: Pool is closed.
at afterwards (/app/node_modules/sails-mysql/lib/connections/spawn.js:72:13)
at /app/node_modules/sails-mysql/lib/connections/spawn.js:40:7
at process._tickDomainCallback (node.js:381:11)
Looking at the errors alone, I'd be tempted to say that we have something misconfigured. But the fact that it works some of the time (and has previously been working fine!) leads me to believe that there's some other black magic at work here. Our Cloud SQL instance is D0 (though we've tried upping the size to D4) and our activation policy is "Always On".
EDIT: I had seen others complain about Google Cloud SQL (e.g. this SO post) and I was suspicious, but we have since moved our database to Amazon RDS and we are still seeing the same issues, so it must be a problem with Sails and the MySQL adapter.
This issue is leading to hours of downtime a day; we need it resolved. Any help is much appreciated!
This appears to be a sails issue, and not necessarily related to Cloud SQL.
Is there any way the QPS limit for Google Cloud SQL is being reached? See here: https://cloud.google.com/sql/faq#sizeqps
Why is my database instance sometimes slow to respond?
In order to minimize the amount you are charged for instances on per use billing plans, by default your instance becomes passive if it is not accessed for 15 minutes. The next time it is accessed there will be a short delay while it is activated. You can change this behavior by configuring the activation policy of the instance. For an example, see Editing an Instance Using the Cloud SDK.
It might be related to your activation policy setting. If you set it to ON_DEMAND, the instance sleeps to save you money, so the first query that reactivates the instance is slow. This might cause the timeout.
https://cloud.google.com/sql/faq?hl=en
I created a Windows service which saves all received and sent emails to my local drive, and the service does that successfully. I have also resubscribed my streaming subscription in the OnDisconnect event and in the OnError event. But my service stops responding after some time, and no exception is caught even though I have handled everything properly. I've seen other forums and found people facing the same issue, but there is no proper solution.
static private void OnDisconnect(object sender, SubscriptionErrorEventArgs args)
{
    try
    {
        // Cast the sender as a StreamingSubscriptionConnection object.
        StreamingSubscriptionConnection connection = (StreamingSubscriptionConnection)sender;
        if (!connection.IsOpen)
            connection.Open();
    }
    catch (Exception ex)
    {
        // Log ex so reconnect failures are visible instead of silently swallowed.
    }
}
static void OnError(object sender, SubscriptionErrorEventArgs args)
{
    // Cast the sender as a StreamingSubscriptionConnection object.
    StreamingSubscriptionConnection connection = (StreamingSubscriptionConnection)sender;
    if (!connection.IsOpen)
        connection.Open();
}
Is this something to do with a Microsoft bug, or does it require settings on the Exchange server to change the limits for EWS subscriptions?
I even checked the link below, about the throttling limits, but with no success:
http://msdn.microsoft.com/en-us/library/exchange/hh881884(v=exchg.140).aspx
Thanks a million in advance.
We have exactly the same issue, and we re-create the whole subscription in the OnError event just in case. It is also interesting that multiple application instances running on separate boxes exhibit identical behavior: at some point they just stop receiving notifications. Restarting any or all of them doesn't help; they do successfully subscribe, but still receive no notifications other than OnDisconnect. Restarting the Exchange Server is what really helps, though only for a while.
I can see that the problem here is that you are trying to open the connection in the OnError handler. When OnError happens, the connection normally loses all of its subscriptions, so you might need to re-create the subscriptions before opening the connection again.
I am using NServiceBus for the first time and have a small, simple application where a user submits a form, the form fields are then sent to the queue, and the handler collects this data and writes it to the database using LINQ to SQL.
Any changes within Component Services are a complete no-no as far as the DBA is concerned, so I'm now looking for an alternative to DTC (which is not enabled on the DB server), while still using AsA_Server so that messages do not get purged.
I have tried removing AsA_Server after IConfigureThisEndpoint and specifying the configuration myself, but this doesn't seem to work (the console appears and the page loads, but nothing happens; it doesn't even stop at breakpoints). AsA_Client does work, but as I understand it, messages will be purged at startup, which I need to avoid.
Any suggestions?
Thanks,
OMK
EDIT: This has now been resolved by wrapping the call to the database in a suppressed transaction scope, which allows the database work to be done with no ambient transaction to enlist in:
using (TransactionScope sc = new TransactionScope(TransactionScopeOption.Suppress))
{
    // code here
    sc.Complete();
}
When you use AsA_Server, you are specifying that you want durable queues, and you will need to configure transactional queues.
With a transactional send/receive, MSMQ requires you to send, transmit, receive, and process as part of one transaction. However, all these stages actually take place in their own transactions.
For example, the send transaction is complete when the sender hands the message to its local MSMQ subsystem (even if the queue address is remote, the sender still sends to a local queue, which acts as a kind of proxy for the remote queue).
The transmit transaction is complete when the MSMQ subsystem on the sender's machine successfully transmits the message to the MSMQ subsystem on the receiver's machine.
Even though this may all happen on one machine, I am guessing that your Handle() method is writing to a database on a different machine.
The problem here is that for the receive operation to complete satisfactorily from a transaction perspective, your call to the database must succeed. Only then will the message be dequeued from your input queue. This prevents the message from being lost on a processing failure.
However, in order to enforce that across the network you need to involve DTC to coordinate the distributed transaction to the database.
Bottom line, if you want durable queues in a distributed environment then you will need to use MSDTC.
Hope this helps.
There is an alternative: in your connection string you can add an option not to enlist in a distributed transaction, and the DTC will then ignore your DB connection.
Of course, if this is set in the config, then all database transactions for the application are ignored by the DTC, rather than just a specific one.
Example:
<add key="DatabaseConnectionString" value="Data Source=SERVERNAME;Initial Catalog=DBNAME;Integrated Security=True;Enlist=False"/>
With NServiceBus 4.0 you can now do the following, which finally worked for me:
Configure.Transactions.Advanced(t =>
{
    t.DisableDistributedTransactions();
    t.DoNotWrapHandlersExecutionInATransactionScope();
});
When you use the As interfaces (AsA_Client, AsA_Server), the configuration is applied after Init(), so all the settings you make there regarding MsmqTransport and UnicastBus are overridden.
It's possible to override those settings using IWantTheEndpointConfig in an IHandleProfile implementation. You get the configuration after the default roles are applied, but before the bus is started.
This way you can change the default profile settings and tailor them to your needs: deactivate transactions, enable impersonation, and so on.
Example:
public class DeactivateTransactions : IHandleProfile<Lite>, IWantTheEndpointConfig
{
    private IConfigureThisEndpoint configure;

    public IConfigureThisEndpoint Config
    {
        get { return configure; }
        set
        {
            this.configure = value;

            Configure.Instance.MsmqTransport()
                .PurgeOnStartup(false)
                .IsTransactional(false); // Or other changes
        }
    }

    public void ProfileActivated()
    {
    }
}