Grails session handling in waiting thread with Hibernate and MySQL InnoDB

In order to implement client-side notifications in an AJAX-driven application that I am developing with Grails (and GWT), I have implemented a service method that blocks until it is signaled. I am using a monitor object to wait for a signal. Once signaled, the thread queries the database for new objects and returns the entities to the browser.
It works perfectly with the in-memory database but not as I expect when I use the MySQL connector. What happens: whenever I make a findAllBy... call, it only finds objects that were created before the request started.
The lifecycle of my service method:
1. Request from client
2. Grails creates the Hibernate session
3. Service queries the database for new objects
4. If there are none: wait
5. Incoming signal: query the database for new objects (DOES NOT FIND THE NEW OBJECTS when using MySQL; works fine with the in-memory db)
The MySQL query log shows all the queries as expected, but the result of findAllBy... is just an empty array.
I disabled the query cache and the second-level cache. The behaviour is the same whether or not the connection is pooled.
What am I doing wrong? Should I close the Hibernate session? Flush it? Use a transaction for my queries? Or somehow force the findAllBy... method to query the database?

Just a guess, but this sounds like a transaction isolation level problem where you are experiencing a phantom read. Does your service need to be transactional? If not, set transactional = false in the service.
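To see the underlying MySQL behaviour outside of Grails, here is a minimal sketch using mysql-connector-python (credentials and the items table are made up): under InnoDB's default REPEATABLE READ isolation, a transaction keeps reading from the same snapshot, which matches the symptom above.
import mysql.connector

# Two independent connections; credentials and the "items" table are made up.
reader = mysql.connector.connect(user="app", password="secret", database="test")
writer = mysql.connector.connect(user="app", password="secret", database="test")

rc = reader.cursor()
rc.execute("SELECT COUNT(*) FROM items")  # first read pins the snapshot
print(rc.fetchone()[0])                   # e.g. 0

wc = writer.cursor()
wc.execute("INSERT INTO items (name) VALUES ('new')")
writer.commit()                           # the row is committed...

rc.execute("SELECT COUNT(*) FROM items")
print(rc.fetchone()[0])                   # ...but the reader still sees 0

reader.commit()                           # end the reader's transaction
rc.execute("SELECT COUNT(*) FROM items")
print(rc.fetchone()[0])                   # a fresh snapshot now sees 1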

I think that you need to flush the session on the save calls for the new objects that you are looking for, e.g.
DomainOfFrequentlyAddedStuff.save(flush:true)
Then they should be persisted to the db quickly so they will show up in your findAll() query.

Related

Read-after-write consistency with MySQL and multiple concurrent connections

I'm trying to understand whether it is possible to achieve the following:
I have multiple instances of an application server running behind a round-robin load balancer. The client expects GET-after-POST/PUT semantics: in particular, the client will make a POST request, wait for the response, and immediately make a GET request expecting the response to reflect the change made by the POST, e.g.:
> Request: POST /some/endpoint
< Response: 201 CREATED
< Location: /some/endpoint/123
> Request: GET /some/endpoint/123
< Response must not be 404 Not Found
It is not guaranteed that both requests are handled by the same application server. Each application server has a pool of connections to the DB. Each request will commit a transaction before responding to the client.
Thus the database will, on one connection, see an INSERT statement followed by a COMMIT. On another connection, it will see a SELECT statement. Temporally, the SELECT will be strictly after the COMMIT, though the delay may be tiny, on the order of milliseconds.
The application server I have in mind uses Java, Spring, and Hibernate. The database is MySQL 5.7.11 managed by Amazon RDS in a multiple availability zone setup.
I'm trying to understand whether this behavior can be achieved and, if so, how. There is a similar question, but the answer suggesting locking the table does not seem right for an application that must handle concurrent requests.
Under ordinary circumstances, you will not have any issue with this sequence of requests, since MySQL will have committed the changes to the database by the time the 201 response is sent back. Therefore, any subsequent statement will see the created / updated record.
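A quick way to convince yourself (a sketch with mysql-connector-python; the table and credentials are made up): with autocommit on the reading connection, each SELECT gets a fresh snapshot and therefore sees anything committed before it ran.
import mysql.connector

# Credentials and the "endpoint" table are made up.
writer = mysql.connector.connect(user="app", password="secret", database="test")
reader = mysql.connector.connect(user="app", password="secret", database="test",
                                 autocommit=True)  # fresh snapshot per statement

wc = writer.cursor()
wc.execute("INSERT INTO endpoint (id) VALUES (123)")
writer.commit()  # corresponds to the commit before the 201 response

rc = reader.cursor()
rc.execute("SELECT id FROM endpoint WHERE id = 123")
print(rc.fetchone())  # (123,): the committed row is immediately visible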
What could be the extraordinary circumstances under which the subsequent select will not find the updated / inserted record?
Another process commits an UPDATE or DELETE statement that changes or removes the given record. There is not much you can do about this, since it is part of normal operation. If you do not want such a thing to happen, you have to implement application-level locking of the data.
The subsequent GET request is routed not only to a different application server, but to one that uses (or is forced to use) a different database instance that does not have the most up-to-date state of the record. I would only expect this to happen if there is a severe failure at the application or database server level, or if request routing goes really wrong (for example, routed to a data center at a different geographical location). These things should not happen frequently.
If you're using MyISAM tables, you might be seeing the effects of 'concurrent inserts' (see section 8.11.3 in the MySQL manual). You can avoid them by either setting the concurrent_insert system variable to 0, or by using the HIGH_PRIORITY keyword on the INSERT, as sketched below.
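For completeness, here is what both options look like when issued from client code (a Python sketch with mysql-connector-python; the table name is made up, and SET GLOBAL requires the SUPER privilege):
import mysql.connector

conn = mysql.connector.connect(user="app", password="secret", database="test")
cur = conn.cursor()

# Option 1: disable concurrent inserts server-wide (needs the SUPER privilege).
cur.execute("SET GLOBAL concurrent_insert = 0")

# Option 2: per statement; HIGH_PRIORITY also disables concurrent inserts
# (it only has an effect on table-locking engines such as MyISAM).
cur.execute("INSERT HIGH_PRIORITY INTO log_table (msg) VALUES (%s)", ("hello",))
conn.commit()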

Django save() behavior with autocommit transactions

I have the following setup:
Several data-processing workers get their configuration from the Django view get_conf() over HTTP.
The configuration is stored in a Django model using the MySQL / InnoDB backend.
The configuration model has an overridden save() method that tells the workers to reload the configuration.
I have noticed that the workers sometimes do not receive the changed configuration correctly. In particular, when the configuration reload time was shorter than usual, the workers got the "old" configuration from get_conf() (missing the most recent change). The transaction mode used in Django is the default autocommit.
I have come up with the following possible scenario that could cause the behavior:
1. New configuration is saved
2. save() returns, but MySQL / InnoDB is still processing the (auto)commit
3. Workers are booted and make an HTTP request for the new configuration
4. The MySQL (auto)commit finishes
Is step 2 in the above scenario possible? That is, can Django's model save() return before the data is actually committed to the DB when the autocommit transactional method is used? Or, to go one layer down, can a MySQL autocommitted INSERT or UPDATE operation finish before the commit is complete (i.e. before the update / insert is visible to other transactions)?
The in-memory object may be stale; try refreshing it from the database after saving:
obj.save()
obj.refresh_from_db()
reference: https://docs.djangoproject.com/en/1.8/ref/models/instances/#refreshing-objects-from-database
This definitely looks like a race condition.
The scenario you describe should never happen if there's only one script and one database. When you call save(), the method doesn't return until the data is actually committed to the database.
If, however, you're using a master/slave configuration, you could be the victim of replication delay: if you write to the master but read from the slaves, it is entirely possible that your script doesn't wait long enough for the replication to occur, and you read the old conf from a slave before it has had the opportunity to replicate from the master.
Such a configuration can be set up in Django using database routers, or it can be done on the DB side with a DB proxy. Check that out.
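For reference, a minimal sketch of such a router; the "primary" and "replica" aliases are assumptions about your DATABASES setting, not something Django provides by default:
# settings.py: DATABASE_ROUTERS = ["myapp.routers.PrimaryReplicaRouter"]
class PrimaryReplicaRouter:
    def db_for_read(self, model, **hints):
        return "replica"  # reads may lag behind the primary (replication delay)

    def db_for_write(self, model, **hints):
        return "primary"  # all writes go to the primary

    def allow_relation(self, obj1, obj2, **hints):
        return True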

Session management with SQLAlchemy and Pyro

I'm using SQLAlchemy with MySQL and Pyro to build a server program. Many clients connect to this server to make requests. The program only serves information from the MySQL database and sometimes performs some calculations.
Is it better to create a session for each client or to use the same session for all clients?
What you want is a scoped_session.
The benefits are (compared to a single shared session between clients):
No locking needed
Transactions supported
Connection pooling to the database (done implicitly by SQLAlchemy)
How to use it
You just create the scoped_session:
from sqlalchemy.orm import scoped_session  # some_factory is typically a sessionmaker(...)
Session = scoped_session(some_factory)
and access it in your Pyro methods:
class MyPyroObject:
    def remote_method(self):
        Session.query(MyModel).filter...
Behind the scenes
The code above guarantees that the Session is created and closed as needed. The session object is created as soon as you access it the first time in a thread and will be removed/closed after the thread is finished (ref). As each Pyro client connection has its own thread on the default setting (don't change it!), you will have one session per client.
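Putting the pieces together, a minimal self-contained sketch (SQLAlchemy 1.4+; the engine URL and the model are assumptions, not from the question):
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, scoped_session, sessionmaker

engine = create_engine("mysql+pymysql://user:secret@localhost/mydb")  # placeholder URL
Base = declarative_base()

class MyModel(Base):
    __tablename__ = "my_model"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

Session = scoped_session(sessionmaker(bind=engine))

class MyPyroObject:
    def remote_method(self):
        try:
            return [m.name for m in Session.query(MyModel).all()]
        finally:
            Session.remove()  # hand this thread's session back explicitly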
The best I can suggest is to create a new Session for each client request. I hope there is no performance penalty.

MySQL TADOConnection "Connected" property incorrectly set to True

I have an application which connects to a MySQL database using Delphi's TADOConnection object. This is a very query-intensive application, so I create the connection and keep it open to avoid the high cost of repeatedly opening and closing database connections. But problems can obviously arise (database restart, network connection failure, etc.), so I have built in code to free my database object, recreate it, and reconnect when queries fail.
I have a common function to connect to the database. The relevant code is this:
try
  AdoConnection.Open;
  Result := AdoConnection.Connected;
except
  Result := False;
  ......
end;
I ran some tests by turning the MySQL database on and off. Everything works fine if the database is off at application startup (i.e. it properly throws an exception). However, if I turn off the database after the application has already connected successfully, subsequent re-connections do not throw exceptions and additionally falsely report True for AdoConnection.Connected. I am sure the connection object had been freed/recreated first.
It seems there is some sort of caching mechanism going on here (most likely at the hardware/driver level, not the application level). Does anyone have any ideas?
I observed this also.
Ideas for dealing with it:
If you get an error when running a query, check whether it's a connection issue and, if so, try to reconnect and then run the query again.
If your program uses a timer and waits a while between running batches of queries, perform a simple query (maybe "select now()") before each batch, and if this fails, try to reconnect (see the sketch below).
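The second idea, sketched in Python with mysql-connector-python (the question is Delphi/ADO, but the pattern is language-independent; credentials are placeholders):
import mysql.connector
from mysql.connector import Error

def ensure_alive(conn):
    """Probe the connection before a batch of queries; reconnect if it died."""
    try:
        cur = conn.cursor()
        cur.execute("SELECT NOW()")  # the cheap probe suggested above
        cur.fetchone()
        cur.close()
        return conn
    except Error:
        try:
            conn.close()
        except Error:
            pass
        # credentials are placeholders
        return mysql.connector.connect(user="app", password="secret", database="test")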

Data source rejected establishment of connection, message from server: "Too many connections"?

I am using a MySQL database with Hibernate and JSP. Using Hibernate I select values from the database, prepare the view, and display it with Ajax. I am polling the database every second using a JavaScript timer that calls an Ajax function and returns the new response, and it gives me this error:
JDBCExceptionReporter:78 - Data source rejected establishment of connection, message from server: "Too many connections"
Please help me sort out the problem described above.
Make sure you close the session (and the underlying connection) after using it.
Make sure the maximum number of connections configured for MySQL is sufficient.
Use some caching layer. It is insane to hit the database every second for each user.
If you are building some chat-like application, consider Comet and server-side pub-sub solutions (JMS, for example).