Is there a way that raw TypeORM queries could lead to connection pool problems? - mysql

As far as I can tell, usages of query do call release on the query runner instance they use (and there are no transactions involved). Weirdly enough, though, some of my database calls (through TypeORM) have been getting stuck for no apparent reason, and I'm trying to rule out potential causes.
await this.myDatasource.query('SELECT * FROM users WHERE id = ?', [id]);
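For what it's worth, since dataSource.query() checks a connection out of the pool and releases it when it finishes, the classic way to leak connections with raw queries is a manually created QueryRunner that is never released. A minimal sketch of the safe pattern (the findUser wrapper and DataSource wiring are my own, for illustration):

import { DataSource, QueryRunner } from 'typeorm';

// Sketch only: assumes `myDataSource` is an initialized MySQL DataSource.
async function findUser(myDataSource: DataSource, id: number) {
  const queryRunner: QueryRunner = myDataSource.createQueryRunner();
  await queryRunner.connect();
  try {
    return await queryRunner.query('SELECT * FROM users WHERE id = ?', [id]);
  } finally {
    // Without this, the underlying connection stays checked out of the pool;
    // leak enough runners and later calls hang waiting for a free slot.
    await queryRunner.release();
  }
}

If every code path releases its runner (including transaction paths that commit or roll back before releasing), the pool itself is unlikely to be the cause of stuck calls.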

Related

What could cause MySQL to intermittently fail to return a row?

I have a MySQL database that I am running very simple queries against as part of a webapp. Starting today, I have received reports from users saying they got an error that their account doesn't exist, and when they logged in again, it did (this happened to only a few people, and only once to each, so it is clearly rare). Based on my backend code, this error can only occur if the same query returns 0 rows the first time and 1 row the second. My query is basically SELECT * FROM users WHERE username="...". How is this possible? My suspicion is that the hard disk is having I/O failures, but I am unsure because I would not expect MySQL to fail silently in that case. That said, I don't know what else it could be.
This could be a bug in your MySQL client (though without seeing how your code is structured, it could also just be a bad query). However, let's assume your query has been working fine up until now with no prior issues, so we'll rule out bad code.
With that in mind, I'm assuming it's either a bug in your MySQL client or your max connection count is being reached (I had this issue with my previous host, Hostinger).
If your issue is a bug on the MySQL side, you can disable the index_merge_intersection optimizer switch on a per-session basis by running this:
SET SESSION optimizer_switch="index_merge_intersection=off";
or in your my.cnf you can set it globally:
[mysqld]
optimizer_switch=index_merge_intersection=off
As for max connections, you can either increase your max_connections value (depending on whether your host allows it), or you'll have to add logic to close the MySQL connection after each query executes:
$mysqli->close();
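On the Node side (as in the TypeORM question above), the usual alternative to opening and closing a connection per query is a bounded pool. A rough sketch using the mysql2 package; the credentials and limit below are placeholders, not values from the question:

import mysql from 'mysql2/promise';

// Sketch only: keep connectionLimit safely below the server's
// max_connections, counted across every process using this database.
const pool = mysql.createPool({
  host: 'localhost',
  user: 'app',
  password: 'secret',
  database: 'app',
  connectionLimit: 10,
  waitForConnections: true, // queue requests instead of failing when busy
});

async function findUser(username: string) {
  const [rows] = await pool.query(
    'SELECT * FROM users WHERE username = ?',
    [username],
  );
  return rows;
}

This way the client queues work when all connections are busy instead of pushing the server past max_connections.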

Couchbase Performance

I have Couchbase community edition v4, build 4047. Everything seems to be great with it until I started issuing queries against a simple view. The view is just projecting the documents like so, which seems harmless:
function (doc, meta) {
  if (doc.applicationId) {
    emit(doc.applicationId, meta.id);
  }
}
I'm using the .NET client to connect and execute the query from my application, though I don't think that matters. It's a single-node configuration. I'm timing the actual HTTP requests, and the queries take anywhere from 4 seconds to over 2 minutes if I send in something like 15 requests at a time through Fiddler.
I am using a stale (Stale.Ok) query to try to improve that time, but it doesn't seem to have much impact. The bucket is not very large; there are only a couple of documents in it. I've allocated 100MB of RAM for indexing, which I'd think is fine for at least the few documents we're working with at the moment.
This is primarily local development, but we observe similar behavior when the code is promoted to our servers. The servers don't use a significant amount of RAM either, but then we aren't storing a significant number of documents: we're only talking about 10 or 20 at most, each containing around 5 primitive-type properties.
Do you have some suggestions for diagnosing this? The logs through the couchbase admin console don't show anything unusual as far as I can tell and this doesn't seem like normal behavior.
Update:
Here is my code to query the documents
public async Task ExpireCurrentSession(string applicationId)
{
    using (var bucket = GetSessionBucket())
    {
        var query = bucket
            .CreateQuery("activeSessions", "activeSessionsByApplicationId")
            .Key(applicationId)
            .Stale(Couchbase.Views.StaleState.Ok);

        var result = await bucket.QueryAsync<string>(query);
        foreach (var session in result.Rows)
        {
            await bucket.RemoveAsync(session.Value);
        }
    }
}
The code seems fine and should work as you expect. The 100MB of RAM you mention allocating actually isn't for views; it only affects N1QL global secondary indexes. Which brings me to the following suggestion:
You don't need to use a view for this in Couchbase 4.0; you can use N1QL to do this simpler and (probably) more efficiently.
Create a N1QL index on the applicationId field (either in code or from the cbq command-line shell) like so:
CREATE INDEX ix_applicationId ON bucketName(applicationId) USING GSI;
You can then use a simple SELECT query to get the relevant document IDs:
SELECT META(bucketName) FROM bucketName WHERE applicationId = '123';
Or even simpler, you can just use a DELETE query to delete them directly:
DELETE FROM bucketName WHERE applicationId = '123';
Note that DML statements like DELETE are still considered a beta feature in Couchbase 4.0, so do your own risk assessment.
To run N1QL queries from .NET you use almost the same syntax as for views:
await bucket.QueryAsync<dynamic>("QUERY GOES HERE");

Handle column accessed and changed from two or more connections (MySQL)

I need your advice.
I have a MySQL database that stores the data from my Minecraft server. The server uses the Ebean API for the MySQL side.
When the user base increases, I will have multiple servers running on the same synced data; which server a user connects to doesn't matter, since it all looks the same to them. But how can I handle a case where two players in the same guild, on two different servers, edit something at the same time? One server will throw an optimistic lock exception. But what do I do if it is something important, like a donation to the guild bank? The donated amount might get duplicated or lost. Tell the user to retry? Or have the server automatically resend the query with the updated data from the database? A friend of mine suggested that a socket server in the middle that handles ALL MySQL statements might be a good idea, but that would require a lot of work to make sure it reconnects to the Minecraft servers if the connection is lost, etc. It would also require me to get the raw update query or serialize the Ebean table, and I don't know how to accomplish either of those.
I have not found an answer to my question yet and I hope that it hasn't been answered before.
There are two different kinds of operations the Minecraft servers can perform on the DBMS. On one hand, you have state-update operations, like making a deposit to an account. The history of these operations matters. For the sake of integrity, you must use transactions for these. They're not idempotent, meaning that you can't repeat them multiple times and expect the same result as if you only did them once. You should investigate the use of SELECT ... FOR UPDATE transactions for these.
If something fails during such a transaction, you must issue a ROLLBACK of the transaction and try again. You'd be smart to log these retries in case you get a lot of rollbacks: that suggests you have some sort of concurrency trouble to track down.
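To make that concrete, here is roughly what the lock-then-update-with-rollback pattern looks like. This is only a sketch, written in TypeScript with the mysql2 package rather than Ebean, and the guild_bank table and its columns are invented for illustration:

import mysql from 'mysql2/promise';

// Sketch only: guild_bank(guild_id, balance) is a hypothetical schema.
async function donateToGuildBank(pool: mysql.Pool, guildId: number, amount: number) {
  const conn = await pool.getConnection();
  try {
    await conn.beginTransaction();
    // FOR UPDATE makes concurrent donations from other servers block
    // here until this transaction commits or rolls back.
    await conn.query(
      'SELECT balance FROM guild_bank WHERE guild_id = ? FOR UPDATE',
      [guildId],
    );
    await conn.query(
      'UPDATE guild_bank SET balance = balance + ? WHERE guild_id = ?',
      [amount, guildId],
    );
    await conn.commit();
  } catch (err) {
    await conn.rollback(); // then retry, and log the retry as suggested above
    throw err;
  } finally {
    conn.release();
  }
}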
By the way, you don't need to bother with an explicit transaction on a query like
UPDATE credit SET balance = balance + 200 WHERE account = 12367
Your DBMS will get this right, even when multiple connections hit the same account number.
The other kind of operation is idempotent. That is, if you carry out the operation more than once, the result is the same as if you did it once. For example, setting the name of a player is idempotent. For those operations, if you get some kind of exception, you can either repeat the operation, or simply ignore the failure in the assumption that the operation will be repeated later in the normal sequence of gameplay.

NonUniqueObjectException error inserting multiple rows, LAST_INSERT_ID() returns 0

I am using NHibernate/Fluent NHibernate in an ASP.NET MVC app with a MySQL database. I am working on an operation that reads quite a bit of data (relative to how much is inserted), processes it, and ends up inserting (currently) about 50 records. I have one ISession per request, created/destroyed in the begin/end request event handlers (exactly like http://ayende.com/Blog/archive/2009/08/06/challenge-find-the-bug-fixes.aspx). I read in data and add new objects (as in section 16.3 at https://www.hibernate.org/hib_docs/nhibernate/html/example-parentchild.html), and finally call Flush() on the session to actually run all the inserts.
Getting data out and lazy loading work fine. When I call Flush, exactly 2 new records are inserted (I am checking the table manually to verify this), and then I get the following error:
NonUniqueObjectException: a different object with the same identifier value was already associated with the session: 0, of entity: ...
I am new to NHibernate and while searching for a solution have tried explicitly setting the Id property's generator to both Native and Identity (it is a MySQL database and the Id column is an int with auto_increment on), and explicitly setting the unsaved value for the Id property to 0. I still get the error, however.
I have also tried calling Flush at different times (effectively once per INSERT) and I then get the same error, but for an identity value other than 0 and at seemingly random points in the process (sometimes I do not get it at all in this scenario, but sometimes I do at different points).
I am not sure where to go from here. Any help or insight would be greatly appreciated.
EDIT: See the answer below.
EDIT: I originally posted a different "answer" that did not actually solve the problem, but I want to document my findings here for anyone else who may come across it.
After several days of trying to figure out and resolve this issue, and being extremely frustrated because it seemed to go away for a while and then come back intermittently (causing me to think, multiple times, that a change I made had fixed it when in fact it had not), I believe I have tracked down the real issue.
A few times after I turned the log4net level for NHibernate up to DEBUG, the problem went away, but I was finally able to get the error with that log level. Included in the log were these lines:
Building an IDbCommand object for the SqlString: SELECT LAST_INSERT_ID()
...
NHibernate.Type.Int32Type: 15:10:36 [8] DEBUG NHibernate.Type.Int32Type: returning '0' as column: LAST_INSERT_ID()
NHibernate.Id.IdentifierGeneratorFactory: 15:10:36 [8] DEBUG NHibernate.Id.IdentifierGeneratorFactory:
Natively generated identity: 0
And looking up just a few lines I saw:
NHibernate.AdoNet.ConnectionManager: 15:10:36 [8] DEBUG NHibernate.AdoNet.ConnectionManager: aggressively releasing database connection
NHibernate.Connection.ConnectionProvider: 15:10:36 [8] DEBUG NHibernate.Connection.ConnectionProvider: Closing connection
It seems that while flushing the session and performing INSERTs, NHibernate was closing the connection between the INSERT statement and the SELECT LAST_INSERT_ID() used to fetch the id MySQL generated for the INSERT. Or rather, it was sometimes closing the connection, which is one reason I believe the problem was intermittent. I can't find the link now, but I believe I also read that MySQL will sometimes return the correct value from LAST_INSERT_ID() even if the connection is closed and reopened, which is another reason I believe it was intermittent. Most of the time, though, LAST_INSERT_ID() will return 0 if the connection is closed and reopened after the INSERT.
It appears there are two ways to fix this issue. The first is a patch, available here, that looks like it will make it into NHibernate 2.1.1 and that you can use in the meantime to make your own build of NHibernate; it forces the INSERT and SELECT LAST_INSERT_ID() to run together. The second is to set connection.release_mode to on_close, as described in this blog post, which prevents NHibernate from closing the connection until the ISession is explicitly closed.
I took the latter approach, which is done in FluentNHibernate like this:
Fluently.Configure()
    ...
    .ExposeConfiguration(c => c.Properties.Add("connection.release_mode", "on_close"))
    ...
This also had the side effect of drastically speeding up my code. What was taking 20-30 seconds to run (when it just so happened to work before I made this change) is now running in 7-10 seconds, so it is doing the same work in ~1/3 the time.

Randomly long DB queries/Memcache fetches in production env

I'm having trouble diagnosing a problem on my Ubuntu Scalr/EC2 production environment.
The trouble is that, apparently at random, database queries and/or Memcache fetches will take MUCH longer than they should. I've seen a simple SELECT statement take 130ms and a Memcache fetch take 65ms! It can happen a handful of times per request, causing some requests to take twice as long as they should.
To diagnose the problem, I wrote a very simple script which will just connect to the MySql server and run a query.
require 'mysql'

mysql = Mysql.init
mysql.real_connect('', '', '', '')

max = 0
100.times do
  start = Time.now
  mysql.query('select * from navigables limit 1')
  stop = Time.now
  total = stop - start
  max = total if total > max
end

puts "Max Time: #{max * 1000}"
mysql.close
This script consistently returned a really high max time, so I eliminated Rails as the source of the problem. I also wrote the same thing in Python to rule out Ruby, and indeed the Python version took an inordinately long time as well!
Both MySQL and Memcache are on their own boxes, so I considered network latency, but pings and traceroutes look normal.
Also, running the queries/fetches locally on the respective machines returns the expected times, and I'm running the same gem versions on my staging machine without this issue.
I'm really stumped on this one... any thoughts on something I could try to diagnose this?
Thanks
My only thought is that it might be disk?
MySQL uses the query cache to store a SELECT statement together with its result. That could explain the consistent speed you are getting from continuous selecting. Try EXPLAIN-ing the query to see whether you are using indexes.
I don't see why Memcache would be a problem (unless it's crashing and restarting?). Check your server logs for suspicious service failures.