I am using Hibernate to connect to a MySQL database.
My saveOrUpdate method is:
public void saveOrUpdate(T t) {
    Session session = HibernateUtil.getSession();
    Transaction transaction = session.beginTransaction();
    session.saveOrUpdate(t);
    transaction.commit();
}
I have been using this method for about half a year and it worked well.
But now, when I run a JUnit test that uses it, something is off: I call saveOrUpdate first and then query with Criteria, and the query appears to return the right result, but nothing actually changes in the database.
Could anyone please give me a hint about what's happening here?
Thank you very much.
What is your JUnit configuration set to for transactions? I believe it is set to roll back all transactions in the test. And rightly so: after a test, the database should be put back into a consistent/known state. The correctness of your code should rely on how you write the test, not on the records being physically present in the database.
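For example, if the tests run under Spring's test framework (an assumption; the post doesn't say what drives them), transactional tests roll back by default, which would produce exactly the symptom described. A minimal sketch:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:test-context.xml") // hypothetical context file
@Transactional // each test runs in a transaction that is rolled back at the end
public class SaveOrUpdateTest {
    @Test
    public void savesEntity() {
        // saveOrUpdate(...) and a Criteria query here see the uncommitted data,
        // but the rollback after the test leaves the database unchanged;
        // annotate with @Rollback(false) if you actually want to commit
    }
}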
Related
I have this really big table that gets some millions of records every day, and at the end of every day I extract all of the previous day's records. I am doing this like:
String SQL = "select col1, col2, coln from mytable where timecol = yesterday";
Statement.executeQuery(SQL);
The problem is that this program takes about 2 GB of memory, because it pulls all the results into memory before processing them.
I tried setting Statement.setFetchSize(10), but the process takes exactly the same amount of memory from the OS; it makes no difference. I am using the Microsoft SQL Server 2005 JDBC driver.
Is there any way to read the results in small chunks, like the Oracle database driver does, where a query initially shows only a few rows and more results are fetched as you scroll down?
In JDBC, the setFetchSize(int) method is very important to performance and memory management within the JVM, as it controls the number of network calls from the JVM to the database and, correspondingly, the amount of RAM used for ResultSet processing.
If setFetchSize(10) is being called and the driver is ignoring it, there are probably only two options:
Try a different JDBC driver that will honor the fetch-size hint.
Look at driver-specific properties on the Connection (URL and/or property map when creating the Connection instance).
The RESULT-SET is the set of rows marshalled on the DB in response to the query.
The ROW-SET is the chunk of rows that are fetched out of the RESULT-SET per call from the JVM to the DB.
The number of these calls, and the resulting RAM required for processing, is dependent on the fetch-size setting.
So if the RESULT-SET has 100 rows and the fetch-size is 10, there will be 10 network calls to retrieve all of the data, using roughly 10 * {row-content-size} of RAM at any given time.
The default fetch-size is 10, which is rather small.
In the case posted, it would appear the driver is ignoring the fetch-size setting and retrieving all data in one call (large RAM requirement, but the minimum possible number of network calls).
What happens underneath ResultSet.next() is that it doesn't actually fetch one row at a time from the RESULT-SET. It fetches the next row from the (local) ROW-SET and, invisibly, fetches the next ROW-SET from the server as the local one becomes exhausted.
All of this depends on the driver as the setting is just a 'hint' but in practice I have found this is how it works for many drivers and databases (verified in many versions of Oracle, DB2 and MySQL).
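To make the mechanics concrete, here is a minimal sketch of cursor-style reading with a fetch-size hint (the table, column, and process method are placeholders; whether the hint is honored depends on the driver, as noted above):

connection.setAutoCommit(false); // some drivers ignore the hint unless auto-commit is off
try (Statement st = connection.createStatement()) {
    st.setFetchSize(100); // hint: pull roughly 100 rows per network round trip
    try (ResultSet rs = st.executeQuery("select col1 from mytable")) {
        while (rs.next()) {
            process(rs.getString(1)); // only the current row-set is held in RAM
        }
    }
}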
The fetchSize parameter is a hint to the JDBC driver as to how many rows to fetch in one go from the database. But the driver is free to ignore this and do what it sees fit. Some drivers, like the Oracle one, fetch rows in chunks, so you can read very large result sets without needing lots of memory. Other drivers just read in the whole result set in one go, and I'm guessing that's what your driver is doing.
You can try upgrading your driver to the SQL Server 2008 version (which might be better), or the open-source jTDS driver.
You need to ensure that auto-commit on the Connection is turned off, or setFetchSize will have no effect.
dbConnection.setAutoCommit(false);
Edit: I remembered that when I used this fix it was Postgres-specific, but hopefully it will still work for SQL Server.
From the Statement interface documentation:
void setFetchSize(int rows)
Gives the JDBC driver a hint as to the number of rows that should be fetched from the database when more rows are needed.
Read the ebook J2EE and Beyond by Art Taylor.
Sounds like the MSSQL JDBC driver is buffering the entire result set for you. You can add a connect-string parameter saying selectMethod=cursor or responseBuffering=adaptive. If you are on version 2.0+ of the 2005 MSSQL JDBC driver, then response buffering should default to adaptive.
http://msdn.microsoft.com/en-us/library/bb879937.aspx
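For illustration, a connection URL carrying those parameters might look like the following (host and database names are placeholders):

String url = "jdbc:sqlserver://myhost:1433;databaseName=mydb;"
        + "selectMethod=cursor;responseBuffering=adaptive";
Connection con = DriverManager.getConnection(url, user, password);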
It sounds to me that you really want to limit the rows being returned in your query and page through the results. If so, you can do something like the following (note this example uses Oracle's rownum; a SQL Server variant is sketched after it):

select * from (select rownum myrow, a.* from TEST1 a)
where myrow between 5 and 10;

You just have to determine your boundaries.
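Since the question is about SQL Server, a hypothetical equivalent (assuming SQL Server 2005+, which added the ROW_NUMBER() window function; the table and ordering column are placeholders) would be:

String pageSql =
    "select * from (" +
    "  select row_number() over (order by col1) as myrow, a.* from mytable a" +
    ") t where myrow between 5 and 10";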
Try this:

String SQL = "select col1, col2, coln from mytable where timecol = yesterday";
connection.setAutoCommit(false);
PreparedStatement stmt = connection.prepareStatement(SQL,
        SQLServerResultSet.TYPE_SS_SERVER_CURSOR_FORWARD_ONLY,
        SQLServerResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(2000);
// stmt.set.... (any further statement setup, elided in the original)
stmt.execute();
ResultSet rset = stmt.getResultSet();
while (rset.next()) {
    // process the current row here
}
rset.close();
stmt.close();
I had the exact same problem in a project. The issue is that even though the fetch size might be small enough, the JdbcTemplate reads all the results of your query and maps them into a huge list, which might blow your memory. I ended up extending NamedParameterJdbcTemplate to create a function which returns a Stream of objects. That Stream is based on the ResultSet normally returned by JDBC, but will pull data from the ResultSet only as the Stream requires it. This will work as long as you don't keep references to all the objects the Stream emits. I took a lot of inspiration from the implementation of org.springframework.jdbc.core.JdbcTemplate#execute(org.springframework.jdbc.core.ConnectionCallback). The only real difference has to do with what to do with the ResultSet. I ended up writing this function to wrap up the ResultSet:
private <T> Stream<T> wrapIntoStream(ResultSet rs, RowMapper<T> mapper) {
    CustomSpliterator<T> spliterator = new CustomSpliterator<>(rs, mapper, Long.MAX_VALUE,
            Spliterator.NONNULL | Spliterator.IMMUTABLE | Spliterator.ORDERED);
    return StreamSupport.stream(spliterator, false);
}

private static class CustomSpliterator<T> extends Spliterators.AbstractSpliterator<T> {
    // won't put code for constructor or properties here
    // the idea is to pull from the ResultSet and feed the Stream
    @Override
    public boolean tryAdvance(Consumer<? super T> action) {
        try {
            // you can add some logic to close the stream/ResultSet automatically
            if (rs.next()) {
                T mapped = mapper.mapRow(rs, rowNumber++);
                action.accept(mapped);
                return true;
            } else {
                return false;
            }
        } catch (SQLException e) {
            // do something with this Exception; rethrowing keeps the signature intact
            throw new RuntimeException(e);
        }
    }
}
You can add some logic to make that Stream "auto-closeable"; otherwise, don't forget to close it when you are done.
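A hypothetical usage sketch (streamQuery is a made-up name for the custom method described above, and MyRow/process are placeholders):

try (Stream<MyRow> rows = myTemplate.streamQuery(sql, params, rowMapper)) {
    rows.forEach(row -> process(row)); // rows are mapped lazily, one row-set at a time
}

Wrapping the Stream in try-with-resources is what guarantees the underlying ResultSet gets closed.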
I'm writing JUnit test cases for a JDBC connection for Oracle (ojdbc6.jar). Below is my code:
public void insertCard() throws SQLException {
    Properties properties = new Properties();
    properties.put("user", "user");
    properties.put("password", "password");
    properties.put("driver", "oracle.jdbc.OracleDriver");
    Connection con = DriverManager.getConnection(
            "jdbc:oracle:thin:@localhost:1521:orcl", properties);
    PreparedStatement insertCardValues = con.prepareStatement(
            "insert into card values(?,?,?)");
    insertCardValues.setInt(1, 123);
    insertCardValues.setString(2, "Raja");
    insertCardValues.setString(3, "Lucknow");
    insertCardValues.executeUpdate();
}
}
Can someone help me test the code below?

Connection con = DriverManager.getConnection(
        "jdbc:oracle:thin:@localhost:1521:orcl", properties);
First of all, I'd suggest separating concerns.
Create a separate object that is responsible for returning a Connection to you. You could even create your own Connection wrapper object to deal with the SQL Connection.
Then the class responsible for inserting the card can retrieve an open connection from it in order to create statements.
Also, do not forget to have a method to release the connection once the insertCard logic is finished (whether it succeeded or not; it is dangerous to leave open connections against a database).
Once you have refactored your code this way, you can easily cover it with unit tests using Mockito, as sketched below. But first I think you need to do the refactor; testing will otherwise be painful.
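A rough sketch of what that refactor and a Mockito test could look like (ConnectionProvider and CardDao are made-up names for illustration; static imports from org.mockito.Mockito are assumed):

public interface ConnectionProvider {
    Connection getConnection() throws SQLException;
}

public class CardDao {
    private final ConnectionProvider provider;

    public CardDao(ConnectionProvider provider) { this.provider = provider; }

    public void insertCard(int id, String name, String address) throws SQLException {
        // try-with-resources releases the connection whether or not the insert succeeds
        try (Connection con = provider.getConnection();
             PreparedStatement ps = con.prepareStatement("insert into card values(?,?,?)")) {
            ps.setInt(1, id);
            ps.setString(2, name);
            ps.setString(3, address);
            ps.executeUpdate();
        }
    }
}

// The unit test can then mock the JDBC objects instead of hitting a real Oracle instance:
@Test
public void insertsCard() throws SQLException {
    Connection con = mock(Connection.class);
    PreparedStatement ps = mock(PreparedStatement.class);
    when(con.prepareStatement(anyString())).thenReturn(ps);

    new CardDao(() -> con).insertCard(123, "Raja", "Lucknow");

    verify(ps).executeUpdate();
    verify(con).close(); // proves the connection was released
}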
It's a rather old version of PrestaShop, I know, but I need to execute an initial MySQL query on every request.
Where do I need to put the logic so that it is always executed? Some sort of initialization point that the application always executes, no matter what URL is requested.
Thanks in advance.
I finally found the solution (and a fairly clean one, I think).
I also think this approach could be applicable to other versions of PrestaShop (1.5, 1.6, 1.7...).
In the classes/MySQL.php file, within the connect function, I add my query just before returning $this->_link:
public function connect()
{
    if (!defined('_PS_DEBUG_SQL_'))
        define('_PS_DEBUG_SQL_', false);

    if ($this->_link = mysql_connect($this->_server, $this->_user, $this->_password))
    {
        if (!$this->set_db($this->_database))
            die('The database selection cannot be made.');
    }
    else
        die('Link to database cannot be established.');

    /* UTF-8 support */
    if (!mysql_query('SET NAMES \'utf8\'', $this->_link))
        die(Tools::displayError('PrestaShop Fatal error: no utf-8 support. Please check your server configuration.'));
    // removed SET GLOBAL SQL_MODE : we can't do that (see PSCFI-1548)

    /** MY QUERY IS INSERTED HERE, USING $this->_link BY THE WAY **/
    mysql_query('...', $this->_link);

    return $this->_link;
}
This way, two advantages arise:
I do not have to maintain a copy of the database credentials outside PrestaShop's own configuration.
The query is executed on every request, in both the shop front office and back office user interfaces.
I hope this can help someone.
I have been getting this annoying exception while trying to create a native query with my entity manager. The full error message is:
java.lang.IllegalStateException: During synchronization a new object was found through a relationship that was not marked cascade PERSIST: com.model.OneToManyEntity2#61f3b3b.
at org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork.discoverUnregisteredNewObjects(RepeatableWriteUnitOfWork.java:313)
at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.calculateChanges(UnitOfWorkImpl.java:723)
at org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork.writeChanges(RepeatableWriteUnitOfWork.java:441)
at org.eclipse.persistence.internal.jpa.EntityManagerImpl.flush(EntityManagerImpl.java:874)
at org.eclipse.persistence.internal.jpa.QueryImpl.performPreQueryFlush(QueryImpl.java:967)
at org.eclipse.persistence.internal.jpa.QueryImpl.executeReadQuery(QueryImpl.java:207)
at org.eclipse.persistence.internal.jpa.QueryImpl.getSingleResult(QueryImpl.java:521)
at org.eclipse.persistence.internal.jpa.EJBQueryImpl.getSingleResult(EJBQueryImpl.java:400)
The actual code that triggers the error is:
Query query = entityManager.createNativeQuery(
        "SELECT MAX(CAST(SUBSTRING_INDEX(RecordID,'-',-1) as Decimal)) FROM `QueriedEntityTable`");
Object result = query.getSingleResult(); // execute once and reuse the result
String recordID = (result == null ? null : result.toString());
This is being executed within an EntityTransaction in the doTransaction part. What is getting me, though, is that this is the first code to be executed within the doTransaction method, simplified below to:
updateOneToManyEntity1();
updateOneToManyEntity2();
entityManager.merge(parentEntity);
The entity it has a problem with, OneToManyEntity1, isn't even the table I'm trying to create the query on. I'm not doing any persist or merge up until this point either, so I'm also not sure what is supposedly causing it to be out of sync. The only database work being done before this code executes is pulling in data, not changing anything. The foreign keys are properly set up in the database.
I'm able to get rid of this error by doing as it says and marking these relationships with CascadeType.PERSIST, but then I get a MySQLIntegrityConstraintViolationException on the query.getSingleResult() line. My logs show that it's doing some INSERT queries right before this, so it looks like it's reaching the entityManager.merge part of my doTransaction method, but the error and call stack point to a completely different part of the code.
Using EclipseLink (2.6.1), GlassFish 4, and MySQL. The EntityManager is using RESOURCE_LOCAL, with all the necessary classes listed under the persistence-unit tag and exclude-unlisted-classes set to false.
Edit: Some more info as I'm trying to work through this. If I put a breakpoint at the beginning of the transaction and then execute entityManager.clear() through IntelliJ's "Evaluate Expression" tool, everything works fine, at least the first time through. Without it, I get an error as it tries to insert empty objects into the table.
Edit #2: I converted the native-query part to the Criteria API, and this let me actually make it through my code, so I could find where it was unintentionally adding a null object to my entity list. I'm still confused as to why the entity manager is caching these errors, or something to that effect, to the point that creating a native query breaks because it's still trying to insert bad data. Is this something I'd need to call EntityManager.clear() before doing each time? Or am I supposed to call this when there is an error in the doTransaction method?
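(Aside: the stack trace shows the failure happening inside performPreQueryFlush. One way to keep a query from triggering that pre-query flush, sketched here with the standard JPA flush-mode API and not necessarily the right fix for the underlying data problem, is to defer flushing until commit:)

Query query = entityManager.createNativeQuery(
        "SELECT MAX(CAST(SUBSTRING_INDEX(RecordID,'-',-1) as Decimal)) FROM `QueriedEntityTable`");
query.setFlushMode(FlushModeType.COMMIT); // skip the automatic flush before this query runs
Object result = query.getSingleResult();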
So after reworking the code and setting this aside, I stumbled on at least part of the answer to my question. My issue was caused by an object being persisted before the transaction started. So when I entered my transaction, it first tried to insert/update data from my entity objects and threw an error, since I hadn't set the values of most of the non-null columns. I believe this is the reason I was getting the cascade errors, and I'm positive it is the source of the random INSERT queries I saw being fired off at the beginning of my transaction. Hope this helps someone else avoid a lot of trouble.
There are a few examples out there, but none of them are very clear (or they cover old versions).
I want to call a MySQL procedure and check the return status (in Rails 4.2). The most common method I've seen is result = ActiveRecord::Base.connection.execute("call example_proc()"), but in some places people wrote that there is a prepared method result = ActiveRecord::Base.connection.execute_procedure("Stored Procedure Name", arg1, arg2) (however, it didn't compile).
So what is the correct way to call and get the status of a MySQL procedure?
Edit:
And how do I send parameters safely, where the first parameter is an integer, the second a string, and the third a boolean?
Rails 4's ActiveRecord::Base doesn't support an execute_procedure method, though ActiveRecord::Base.connection.execute still works, i.e.
result = ActiveRecord::Base.connection.execute("call example_proc('#{arg1}','#{arg2}')")
You can try Vishnu's approach below,
or
you can also try
ActiveRecord::Base.connection.exec_query("call example_proc('#{arg1}','#{arg2}')")
Here is the documentation.
In general, you should be able to call stored procedures in a regular where or select method for a given model:
YourModel.where("YOUR_PROC(?, ?)", var1, var2)
As for your comment "Bottom line I want the most correct approach with procedure validation afterwards (for warnings and errors)", I guess it always depends on what you actually want to implement and how readable you want your code to be.
For example, if you want to return rows of YourModel attributes, then it would probably be better to use the above statement with the where method. On the other hand, if you are writing some SQL adapter, then you might want to go down to the ActiveRecord::Base.connection.execute level.
BTW, there is something about stored procedure performance that should be mentioned here. In several databases, the engine optimizes a stored procedure on its first run. However, the parameters you pass to that first run might not be representative of those it will be called with more frequently later on. As a result, your stored procedure might be auto-optimized in a way that is non-optimal for your common case. This may or may not happen, but it is something to consider when using stored procedures with dynamic parameters.
I believe you have tried many other solutions and got one error or another, mostly "out of sync" or "closed connection" errors. These errors occur every second time you try to execute the queries. We need to work around this so that the connection is effectively new every time. Here is my solution, which didn't throw any errors.
# check out a connection for the model
conn = ModelName.connection_pool.checkout
# use the new connection to execute the query
@records = conn.execute("call proc_name('params')")
# check the connection back in when done
ModelName.connection_pool.checkin(conn)
The other approaches failed for me, possibly because ActiveRecord connections are automatically checked out and checked in per thread. When our method tries to check out a connection just to execute the SP, it might conflict, since there will already be an active connection when the method starts.
So the idea is to manually check out a connection for the model (instead of per thread/function) from the pool, and check it in once the work is done. This worked great for me.