I faced an issue where a process receives stale data from a newly opened Hibernate session, even though that data was updated by another process in a committed transaction, with the session closed afterwards. If I connect to the database directly, I can see the updated data. The Hibernate version is 4.1.9 and the database is MySQL 5.6.
After reading about a lot of similar problems I made sure it is caused neither by the first-level cache (a new session is opened each time) nor by the second-level cache (it is not enabled). I also tried different options, such as flushing the session after the transaction, but Hibernate still returned the old data instead of the updated data. Then I found a fairly old post on the Hibernate forum describing the same problem, which was fixed by setting hibernate.connection.isolation to 2 (TRANSACTION_READ_COMMITTED). I gave that approach a try and it works for me as well.
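For reference, this is roughly what that setting looks like in hibernate.cfg.xml (the same key works in a hibernate.properties file); the value 2 corresponds to java.sql.Connection.TRANSACTION_READ_COMMITTED:
<property name="hibernate.connection.isolation">2</property>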
So the actual question is: why would such a change be required for Hibernate and MySQL in their default configuration? Could someone please explain?
P.S. I spent hours solving this problem but didn't find this solution on SO, so maybe it will save time for other people in the same situation.
P.P.S. While posting this question I finally found another SO post with the same problem and solution, but at the same time there are also many other similar questions without replies. Maybe one more post will give a better chance of finding the explanation.
Link to the post at hibernate forum
Link to the similar post at SO
I'm implementing an API using CakePHP3 with a MySQL database.
Everything works fine. The endpoints are secured with Basic Authentication.
Now I have noticed that the performance is dreadful. I started some speed tests with loader.io and noticed that the response times are around 400ms.
I don't know why, but at one point I deactivated the AuthComponent of CakePHP and suddenly I had a response time of only 120ms.
So I started digging around. I then implemented my own BasicAuthentication by just reading the header and comparing the user and password against my users table in the database. I still get ~120ms response times. Is the CakePHP3 AuthComponent just bloated? I also noticed that while the AuthComponent was activated, my php-fpm used a large amount of CPU; without the AuthComponent it's practically nothing.
I implemented the BasicAuth exactly as described in the CakePHP documentation, and I would prefer to use the actual CakePHP methods rather than implementing my own check. Has anybody else ever had this issue? I just don't understand what is going on.
At last we found out what was causing the long response times. It wasn't the AuthComponent but rather the DefaultPasswordHasher.
I wrote a new PasswordHasher (which, for testing purposes, returned the password unhashed) and the speed went up by a factor of 3.
In config/app.php
Set 'debug' => false.
Literally from:
https://ask.fiware.org/question/84/cosmos-error-accessing-hive/
As the answer in the quoted FIWARE Q&A entry suggests, the problem is fixed by now; it is here: https://ask.fiware.org/question/79/cosmos-database-privacy/. However, it seems other issues have arisen related to the solution, namely: over an ssh connection, typing the hive command results in the following error: https://cloud.githubusercontent.com/assets/13782883/9439517/0d24350a-4a68-11e5-9a46-9d8a24e016d4.png. The HiveQL queries work fine (through ssh) regardless of the error message.
When launching exactly the same HiveQL queries (each one of them worked flawlessly two weeks ago) remotely, the request times out even with absurdly long time windows (10 minutes). The most basic commands ('use $username;', 'show tables;') also time out.
(The thrift client is: https://github.com/garamon/php-thrift-hive-client)
Since Cosmos usage is an integral part of our project, it is of the utmost importance for us to know whether this is a temporary issue caused by the fixes or a permanent change in remote availability (we could not identify relevant changes in the documentation).
Apart from fixing the issue you mention, we moved to a HiveServer2 deployment instead of the old Hive server (HiveServer1), which had several performance drawbacks due, indeed, to the usage of Thrift (in particular, only one connection could be served at a time). HiveServer2 now allows for parallel queries.
That being said, most probably the client you are using is no longer valid, since it may have been designed specifically to work with a HiveServer1 instance. The good news is that there seem to be several other client implementations for HS2 using PHP, such as https://github.com/QwertyManiac/hive-hs2-php-thrift (the first entry I found when searching Google).
What is true is that this is not officially documented anywhere (it is only mentioned in this other SO question). So, nice catch! I'll add it immediately.
I have been working on a requirement for our apache2 logs to be recorded to a MySQL database instead of the usual text log files.
I had no difficulty with the setup and configuration, and it works as expected; however, there is a bit of information that I cannot find (or it may very well be that I am searching for the wrong thing).
Is there anyone out there who uses (or even likes to use) libapache2-mod-log-sql who can tell me more about its connection to MySQL? Is it persistent? What kind of resource impact should I expect?
These two issues are central to my research, yet information on them is hard to find.
Thanks in advance.
I am looking at databases for a home project (ASP.NET MVC) which I might host eventually. After reading a similar question here on Stack Overflow I have decided to go with MySQL.
However, the ease of use and deployment of SQLite is tempting, and I would like to confirm my reasons before I write it off completely.
My goal is to maintain user status messages (like Twitter). This would mean mostly a single table with user-id/status-message pairs, with read, insert, and delete operations on status messages. No updates are necessary.
After reading the following paragraph I have decided that SQLite can't work for me. I DO have a simple database, but since ALL my transactions work with the SAME table I might face some problems.
SQLite uses reader/writer locks on the entire database file. That means if any process is reading from any part of the database, all other processes are prevented from writing any other part of the database. Similarly, if any one process is writing to the database, all other processes are prevented from reading any other part of the database.
Is my understanding naive? Would SQLite work fine for me? Also does MySQL offer something that SQLite wouldn't when working with ASP.NET MVC? Ease of development in VS maybe?
If you're willing to wait half a month, the next SQLite release intends to support write-ahead logging, which should allow for more write concurrency.
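(Once that release is out, WAL is expected to be switched on per database with a single pragma along these lines; this is based on the announced feature, not on a version I have tried:)
PRAGMA journal_mode=WAL;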
I've been unable to get even the simple concurrency SQLite claims to support to work - even after asking on SO a couple of times.
Edit
Since I wrote the above, I have been able to get concurrent writes and reads to work with SQLite. It appears I was not properly disposing of NHibernate sessions - putting Using blocks around all code that created sessions solved the problem.
/Edit
But it's probably fine for your application, especially with the Write-ahead Logging that user380361 mentions.
Small footprint, single file installation, fast, works well with NHibernate, free, public domain - a very nice product in almost all respects!
I'm creating a Twitter application, and every time a user refreshes the page it reloads the newest messages from Twitter and saves them to the local database, unless they have already been saved before. This works well in the development environment (database: sqlite3), but in the production environment (mysql) it always creates the messages again, even though they have already been created.
Whether a message has already been created is checked via the twitter_id that each message has:
msg = Message.find_by_twitter_id(message_hash['id'].to_i)
if msg.nil?
  # creates new message from message_hash (and possibly new user too)
end
msg.save
Apparently, in the production environment it is unable to find the messages by twitter_id for some reason (when I look at the database, it has saved all the attributes correctly before).
With this long introduction, I guess my main question is how do I debug this? (unless you already have an answer to the main problem, of course :) When I look in the production.log, it only shows something like:
Processing MainPageController#feeds (for 91.154.7.200 at 2010-01-16 14:35:36) [GET]
Rendering template within layouts/application
Rendering main_page/feeds
Completed in 9774ms (View: 164, DB: 874) | 200 OK [http://www.tweets.vidious.net/]
...but not the database requests, logger.debug texts, or anything that could help me find the problem.
You can change the log level in production by setting it in config/environments/production.rb:
config.log_level = :debug
That will log the SQL and everything else you are used to seeing in dev. It will slow down the app a bit and your logs will be large, so use it judiciously.
But as to the actual problem behind the question...
Could it be because of multiple connections accessing MySQL?
If the twitter entries have not yet been committed, then a query for them from another connection will not return them. So if your query runs before the commit, you won't find them and will insert the same entries again. This is much more likely to happen in a production environment with many users than with you alone testing on SQLite.
Since you are using MySQL, you could put a unique key on the twitter id to prevent dupes, then catch the ActiveRecord exception if you try to insert a dupe. But this means handling an error, which is not a pretty way to deal with it (though I recommend doing it as a backup means of preventing dupes; MySQL is good at this, use it).
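A rough sketch of that approach (assuming a messages table; older Rails surfaces the duplicate-key error as ActiveRecord::StatementInvalid, while newer versions raise the more specific ActiveRecord::RecordNotUnique, which is a subclass of it):
# migration: enforce uniqueness of the twitter id at the database level
add_index :messages, :twitter_id, :unique => true

# when saving, treat a duplicate-key error as "someone else already inserted it"
begin
  msg = Message.new(:twitter_id => message_hash['id'].to_i)  # other attributes omitted
  msg.save!
rescue ActiveRecord::StatementInvalid
  msg = Message.find_by_twitter_id(message_hash['id'].to_i)
end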
You should also prevent the attempt to insert the dupes in the first place. One way is to take a lock on a common record, say the User record that all the tweets are related to, so that another process cannot try to add tweets for that user until it can acquire the lock (which is only released once the transaction is done), thereby preventing simultaneous commits of the same info.
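Something along these lines, assuming Message belongs to User and you have the user's id at hand (both names are assumptions on my part):
User.transaction do
  user = User.find(user_id)
  user.lock!  # SELECT ... FOR UPDATE; a second process blocks here until this transaction commits
  unless user.messages.exists?(:twitter_id => message_hash['id'].to_i)
    user.messages.create!(:twitter_id => message_hash['id'].to_i)  # other attributes omitted
  end
end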
I ran into a similar issue while saving emails to a database. I agree with Andrew: set the log level to debug for more information on what exactly is happening.
As for the actual problem, you can try adding a unique index to the database that will prevent two items from being saved with the same parameters. This is like validates_uniqueness_of, but at the database level, and it is very effective: Mysql Constraign Database Entries in Rails.
For example, if you wanted no message objects in your database with both a duplicate body of text and a duplicate twitter id (which would mean the same person tweeted the same text), you could add this to your migration:
add_index(:messages, [:twitter_id, :body], :unique => true)
It takes a small amount of time after you tell an object in Rails to save before it actually gets into the database; that's maybe why the query for the id doesn't find anything yet.
For your production server, I would recommend setting up Rollbar to report all of the unhandled errors and exceptions on your production servers.
You can also store a bunch of useful information, like the HTTP request, the requesting user, the code that raised the error, and much more, or send email notifications each time an unhandled exception happens on your production server.
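A minimal sketch of that setup with the rollbar gem (the access token is a placeholder you get after creating a project on rollbar.com):
# Gemfile
gem 'rollbar'

# config/initializers/rollbar.rb
Rollbar.configure do |config|
  config.access_token = 'YOUR_PROJECT_ACCESS_TOKEN'  # placeholder
  config.environment  = Rails.env
end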
Here is a simple article about debugging in Rails that could help you out.