Groovy SQL: Shall I manually close the MySQL connection?

I have a grails service that calls stored procedures using Groovy SQL.
I am using dataSource for initializing the connection.
My question is: Do I need to manually close the connection or will it be handled by Groovy or GORM (since I am using def dataSource)?
Here is how my service is structured.
class MyService {

    static transactional = Boolean.FALSE

    private static final String STATEMENT_ONE_SQL = "{ call sp_One(?) }"
    private static final String STATEMENT_TWO_SQL = "{ call sp_Two(?,?) }"

    def dataSource
    Sql sql

    @PostConstruct
    def initSql() {
        sql = new Sql(dataSource)
    }

    List<GroovyRowResult> callSpOne(Integer id) {
        List<GroovyRowResult> results = sql.rows(STATEMENT_ONE_SQL, [id])
        return results
    }

    List<GroovyRowResult> callSpTwo(Integer id, String name) {
        List<GroovyRowResult> results = sql.rows(STATEMENT_TWO_SQL, [id, name])
        return results
    }
}

Based on the official docs:
Finally, we should clean up:
sql.close()
If we are using a DataSource and we haven't enabled statement caching, then strictly speaking the final close() method isn't required - as all connection handling is performed transparently on our behalf; however, it doesn't hurt to have it there as it will return silently in that case.
If instead of newInstance you use withInstance, then close() will be called automatically for you.
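For example, here is a minimal sketch of the explicit-close variant, written in plain Java syntax (the StoredProcCaller name is hypothetical and stands in for the Grails service; the same groovy.sql.Sql calls apply in the service above): create the Sql per call and close it in a finally block, which returns silently with a plain DataSource but releases resources once statement caching is enabled.
import groovy.sql.GroovyRowResult;
import groovy.sql.Sql;

import javax.sql.DataSource;
import java.sql.SQLException;
import java.util.Arrays;
import java.util.List;

class StoredProcCaller {

    private final DataSource dataSource;

    StoredProcCaller(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    List<GroovyRowResult> callSpOne(Integer id) throws SQLException {
        // New Sql wrapper per call instead of a long-lived field.
        Sql sql = new Sql(dataSource);
        try {
            return sql.rows("{ call sp_One(?) }", Arrays.<Object>asList(id));
        } finally {
            // Returns silently when the DataSource manages the connections,
            // but matters once statement caching is enabled.
            sql.close();
        }
    }
}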

Related

Verifying that setter was called on mocked object with Mockito

Given the following service method in a Spring Boot application:
@Transactional
public void updateCategory(long categoryId, CategoryData categoryData) {
    final Category category = categoryRepository.findById(categoryId).orElseThrow(EntityNotFoundException::new);
    category.setName(categoryData.getName());
}
I know how to instruct Mockito to mock the categoryRepository.findById() result.
However, what I couldn't figure out yet: is it possible to verify that category.setName() was called with exactly the argument categoryData.getName()?
You are looking for Mockito.verify, with a test looking like this:
@ExtendWith(MockitoExtension.class)
public class CategoryServiceTest {

    @Mock
    CategoryRepository categoryRepository;

    @InjectMocks
    CategoryService categoryService;

    @Test
    public void testUpdateCategoryMarksEntityDirty() {
        // given
        long categoryId = 1L;
        Category category = mock(Category.class);
        String newCategoryName = "NewCategoryName";
        when(categoryRepository.findById(categoryId)).thenReturn(Optional.of(category));

        // when
        categoryService.updateCategory(categoryId, new CategoryData(newCategoryName));

        // then
        verify(category, times(1)).setName(newCategoryName);
    }
}
I must, however, advise against this style of testing.
Your code suggests that you are using a DB access library with a dirty-checking mechanism (JPA / Hibernate?). Your test focuses on the details of the interaction with your DB access layer instead of the business requirement: that the update is successfully saved in the DB.
Thus, I would opt for a test against a real DB, with the following steps (sketched below):
given: insert a Category into your DB
when: CategoryService.update is called
then: subsequent calls to categoryRepository.findById return the updated entity.
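A minimal sketch of that style, assuming Spring Boot test support, that CategoryRepository is a regular Spring Data repository exposing save(), and that Category has a name constructor plus getId()/getName() accessors (all of these are assumptions, not taken from the question):
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.transaction.annotation.Transactional;

@SpringBootTest
@Transactional
class CategoryServiceIntegrationTest {

    @Autowired
    CategoryRepository categoryRepository;

    @Autowired
    CategoryService categoryService;

    @Test
    void updateCategoryPersistsTheNewName() {
        // given: a Category row in the real database (assumed name-only constructor)
        Category category = categoryRepository.save(new Category("OldName"));

        // when: the service method under test is called
        categoryService.updateCategory(category.getId(), new CategoryData("NewCategoryName"));

        // then: reading the entity back through the repository returns the updated name
        Category reloaded = categoryRepository.findById(category.getId()).orElseThrow();
        assertEquals("NewCategoryName", reloaded.getName());
    }
}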

"Invalid JSON Number" Spring Data Mongo 2.0.2

** FIXED **
All I had to do was add an apostrophe before and after each argument index, i.e., change:
@Query(value = "{'type': 'Application','name': ?0,'organizationId': ?1}", fields = "{_id:1}")
To:
@Query(value = "{'type': 'Application','name': '?0','organizationId': '?1'}", fields = "{_id:1}")
===================
I recently upgraded my MongoDB and my Spring-Data-MongoDB Driver.
I used to access my MongoDB through mongoRepository using this code:
@Query(value = "{'type': 'Application','name': ?0,'organizationId': ?1}", fields = "{_id:1}")
Policies findPolicyByNameAndOrganizationId(String name, String organizationId);
Where Policies is the object I want to consume.
After upgrading Spring, I now get the following error when accessing the method above:
org.bson.json.JsonParseException: Invalid JSON number
I fear this is because I use Spring's MongoConverter (in the case of this specific object only) to map documents to objects.
Here is my Reader Converter:
public class ApplicationPolicyReadConverotor implements Converter<Document, ApplicationPolicy> {

    private MongoConverter mongoConverter;

    public ApplicationPolicyReadConverotor(MongoConverter mongoConverter) {
        this.mongoConverter = mongoConverter;
    }

    //@Override
    public ApplicationPolicy convert(Document source) {
        ApplicationPolicyEntity entity = mongoConverter.read(ApplicationPolicyEntity.class, source);
        ApplicationPolicy policy = new ApplicationPolicy();
        addFields(policy, entity);
        addPackages(policy, entity);
        return policy;
    }
And here is my Writer Converter:
public class ApplicationPolicyWriteConvertor implements Converter<ApplicationPolicy, Document> {

    private MongoConverter mongoConverter;

    public ApplicationPolicyWriteConvertor(MongoConverter mongoConverter) {
        this.mongoConverter = mongoConverter;
    }

    @Override
    public Document convert(ApplicationPolicy source) {
        System.out.println("mashuWrite");
        ApplicationPolicyEntity target = new ApplicationPolicyEntity();
        copyFields(source, target);
        copyPackages(source, target);
        Document Doc = new Document();
        mongoConverter.write(target, Doc);
        return Doc;
    }
I checked the Spring reference (2.0.2) regarding MongoConverter and how it works, and at this stage I think I'm doing it correctly.
Other objects that do not use mapping/conversions have no problems.
So did this object (ApplicationPolicy), until I upgraded my MongoDB and my Spring driver.
My MongoDB is 3.4.10 and the Spring Data MongoDB driver is 2.0.2.
Here's the code that initializes the MappingMongoConverter object (and adds my custom converters):
SimpleMongoDbFactory simpleMongoDbFactory = new SimpleMongoDbFactory(client, dbName);
DefaultDbRefResolver defaultDbRefResolver = new DefaultDbRefResolver(simpleMongoDbFactory);
MongoMappingContext mongoMappingContext = new MongoMappingContext();
MappingMongoConverter mappingMongoConverter = new MappingMongoConverter(defaultDbRefResolver,
        mongoMappingContext);
mappingMongoConverter.setMapKeyDotReplacement("_dot_");

// Adding custom read and write converters for permission policy.
mappingMongoConverter.setCustomConversions(new MongoCustomConversions(Arrays.asList(
        new ApplicationPolicyWriteConvertor(mappingMongoConverter),
        new ApplicationPolicyReadConverotor(mappingMongoConverter))));

mappingMongoConverter.afterPropertiesSet();
final MongoTemplate template = new MongoTemplate(simpleMongoDbFactory, mappingMongoConverter);
return template;
I know for sure that the ReaderConverter works (at least in some cases), since other parts of the software use the custom ReaderConverter I've written and it behaves as expected.
Also, when debugging (IntelliJ), I never reach the conversion code block when invoking the following:
@Query(value = "{'type': 'Application','name': ?0,'organizationId': ?1}", fields = "{_id:1}")
Policies findPolicyByNameAndOrganizationId(String name, String organizationId);
So basically I'm clueless. I have a sense that my converter implementation is messy, but I couldn't fix it.

Castle Windsor: how to pass arguments to deep dependencies?

I have the following dependency chain:
IUserAppService
IUserDomainService
IUserRepository
IUserDataContext - UserDataContextImpl(string conn)
All the interfaces above and their implementations are registered in a Castle Windsor container. When I use one connection string, everything works fine.
Now we want to support multiple databases. In UserAppServiceImpl.cs, we want to get a different IUserRepository (a different IUserDataContext) according to userId, as below:
// UserAppServiceImpl.cs
public UserInfo GetUserInfo(long userId)
{
    var connStr = userId % 2 == 0 ? "conn1" : "conn2";
    //var repo = container.Resolve<IUserRepository>(....)
}
How can I pass the argument connStr to UserDataContextImpl?
Since the connection string is runtime data in your case, it should not be injected directly into the constructor of your components, as explained here. Since however the connection string is contextual data, it would be awkward to pass it along all public methods in your object graph.
Instead, you should hide it behind an abstraction that allows you to retrieve the proper value for the current request. For instance:
public interface ISqlConnectionFactory
{
    SqlConnection Open();
}
An implementation of the ISqlConnectionFactory itself could depend on a dependency that allows retrieving the current user id:
public interface IUserContext
{
    int UserId { get; }
}
Such connection factory might therefore look like this:
public class SqlConnectionFactory : ISqlConnectionFactory
{
    private readonly IUserContext userContext;
    private readonly string con1;
    private readonly string con2;

    public SqlConnectionFactory(IUserContext userContext,
        string con1, string con2) {
        ...
    }

    public SqlConnection Open() {
        var connStr = userContext.UserId % 2 == 0 ? con1 : con2;
        var con = new SqlConnection(connStr);
        con.Open();
        return con;
    }
}
This leaves us with an IUserContext implementation. Such implementation will depend on the type of application we are building. For ASP.NET it might look like this:
public class AspNetUserContext : IUserContext
{
    public int UserId => int.Parse((string)HttpContext.Current.Session["UserId"]);
}
You have to start from the beginning of your dependency resolver and resolve all of your derived dependencies to a "named" resolution.
GitHub docs link: https://github.com/castleproject/Windsor/blob/master/docs/inline-dependencies.md
Example:
I have my IDataContext for MSSQL and another for MySQL.
This example is in Unity, but I am sure Windsor can do this.
container.RegisterType(Of IDataContextAsync, dbEntities)("db", New InjectionConstructor())
container.RegisterType(Of IUnitOfWorkAsync, UnitOfWork)("UnitOfWork", New InjectionConstructor(New ResolvedParameter(Of IDataContextAsync)("db")))
'Exceptions example
container.RegisterType(Of IRepositoryAsync(Of Exception), Repository(Of Exception))("iExceptionRepository",
New InjectionConstructor(New ResolvedParameter(Of IDataContextAsync)("db"),
New ResolvedParameter(Of IUnitOfWorkAsync)("UnitOfWork")))
sql container
container.RegisterType(Of IDataContextAsync, DataMart)(New HierarchicalLifetimeManager)
container.RegisterType(Of IUnitOfWorkAsync, UnitOfWork)(New HierarchicalLifetimeManager)
'brands
container.RegisterType(Of IRepositoryAsync(Of Brand), Repository(Of Brand))
Controller code:
No changes are required at the controller level.
Results:
I can now have my MSSQL context do its work and MySQL do its work without any developer having to understand my container configuration. The developer simply consumes the correct service and everything is wired up for them.

How to handle deeply nested exception in struts2?

My struts2 webapp makes use of a SQL database. Within the DB access code, I've written a basic try/catch handler that catches SQL or general exceptions, writes the detail to a log file, and then continues. The hierarchy of classes is as follows:
Action method -> get or set method on Model -> DB access.
//Action method in action class
public String doActionMethod() throws Exception
{
    String results = SampleModel.getResults();
    return results;
}

//Model method in model class
public String getResults() throws Exception
{
    String results = DBLayer.runQuery("SELECT Results FROM SampleTable WHERE Value='1'");
    return results;
}

//Method that queries the database in the DB access class
public String runQuery(String sqlQuery) throws Exception
{
    ResultSet rs = null;
    Connection dbConnection = null;
    PreparedStatement preparedStatement = null;
    dbConnection = MSSQLConnection.getConnection();
    preparedStatement = dbConnection.prepareStatement(sqlQuery);
    //run SQL statements
    rs = preparedStatement.executeQuery();
    rs.next();
    return rs.getString(1);
}
I'd like caught exceptions to bubble up to the Action level, so that I can forward them to an appropriate error page. Is there a better way to do this than adding a "throws Exception" to the method signature?
Since you have no hope of recovery, throw an application-specific RuntimeException.
Use standard Struts 2 declarative exception handling to get your app to the appropriate error page.
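For example, a minimal sketch of that combination; DataAccessException, the com.example.db package, the "dbError" result name, and the JSP path are all hypothetical, not anything prescribed by Struts 2:
// Application-specific unchecked exception thrown by the DB access layer.
public class DataAccessException extends RuntimeException {
    public DataAccessException(String message, Throwable cause) {
        super(message, cause);
    }
}

// In runQuery(), wrap the checked SQLException instead of declaring "throws Exception":
//
//     try {
//         dbConnection = MSSQLConnection.getConnection();
//         preparedStatement = dbConnection.prepareStatement(sqlQuery);
//         // ... run SQL statements, log the detail as before ...
//     } catch (SQLException e) {
//         throw new DataAccessException("Query failed: " + sqlQuery, e);
//     }

// In struts.xml, a declarative mapping forwards the exception to an error page:
//
//     <global-results>
//         <result name="dbError">/WEB-INF/jsp/dbError.jsp</result>
//     </global-results>
//     <global-exception-mappings>
//         <exception-mapping exception="com.example.db.DataAccessException" result="dbError"/>
//     </global-exception-mappings>
Because DataAccessException is unchecked, the Model and Action methods no longer need "throws Exception" in their signatures.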

ASP.NET MVC - Is there an easy way to add Data Caching to my Service Layer?

I've got my MVC application wired up so that the Repository Layer queries the LINQ to SQL classes, the Service Layer queries the Repository Layer, and the Controllers call the Service layer.
Basically, I have code as follows:
Repository
Public Function GetRegions() As IQueryable(Of Region) Implements IRegionRepository.GetRegions
Dim region = (From r In dc.Regions
Select r)
Return region.AsQueryable
End Function
Service
Public Function GetRegionById(ByVal id As Integer) As Region Implements IRegionService.GetRegionById
Return _RegionRepository.GetRegions() _
.Where(Function(r) (r.ID = id _
And r.isActive)) _
.FirstOrDefault()
End Function
Public Function GetRegionByNameAndParentID(ByVal region As String, ByVal parentid As Integer) As Region Implements IRegionService.GetRegionByNameAndParentID
Return _RegionRepository.GetRegions() _
.Where(Function(r) (r.Region = region _
And r.ParentID = parentid _
And r.isActive)) _
.FirstOrDefault()
End Function
Public Function GetActiveRegions() As List(Of Region) Implements IRegionService.GetActiveRegions
Return _RegionRepository.GetRegions() _
.Where(Function(r) r.isActive) _
.ToList
End Function
Public Function GetAllRegions() As List(Of Region) Implements IRegionService.GetAllRegions
Return _RegionRepository.GetRegions().ToList
End Function
I'm wondering if there's a nice/efficient way to add caching to the Service layer so that it doesn't always have to call the repository when the calls are the same.
As caching is a cross-cutting concern (do a search on Wikipedia), you can use policy injection to implement caching on your repository layer; the constraint is that you use a DI framework like Castle, Unity, etc. The advantage of this approach is that the code in your repository layer stays clean.
I'll start with "it depends", but in simple scenarios where no interaction with other service agents is required, it is usually only worth caching the access to the database, as database access is the slowest part. That's why I would recommend caching not the service layer but the repository layer. This is also what Martin Fowler describes in his Data Mapper pattern.
If you are in a distributed scenario, where your controller and service are running on different servers, you might opt to cache on your controller as well, to prevent the serialization of reference data every time you load e.g. your country-list dropdown or tax code values.
In your scenario, I would attach a CachingHandler to your repository's GetRegions(), and make a CacheKey which combines e.g. the method and parameters (if any). In a simplistic approach, save the CacheKey and the list of results to a Hashtable (in real life, use the Patterns & Practices Caching Application Block or System.Web.Caching.Cache), and on every request to your repository, check whether the cache key is in your Hashtable and, if so, return the cached list.
A quick search in google gives you this to get started:
http://entlib.codeplex.com/Thread/View.aspx?ThreadId=34190
rockinthesixstring - yes, you can add an HTTP cache into that layer using an anonymous function to either pull from the repo or pull from the cache. Basically, you'd do it along the following lines (this is from an app that I'm working on just now that uses SubSonic, but the premise of what you're after is identical).
/// <summary>
/// Returns an IQueryable based on the passed-in Expression Database
/// </summary>
IQueryable<T> IRepository<T>.Find(Expression<Func<T, bool>> expression)
{
    // set up our object cacheKey
    string keyValue = ParseExpression(expression);
    if (keyValue == null)
    {
        return _repository.Find(expression);
    }
    string cacheKey = string.Format(EntityrootList, _className, "Find", keyValue, DateTime.UtcNow.Ticks.ToString(), string.Empty);

    // try to populate from the cache
    // rockinthesixstring - this is the part that is most relevant to you
    var result = Cache.Get(cacheKey,
        () => _repository.Find(expression),
        CacheDuration);

    return result;
}
[edit] In the controller, you'd call it like so. In the example, the controller's _repository is declared as:
readonly IRepository<Booking> _repository;
[Authorize]
[AcceptVerbs(HttpVerbs.Post)]
public ContentResult ListBookings(int shareholderid)
{
    Expression<Func<Booking, bool>> exprTree = x => x.FundShareholderEntity.ShareholderID == shareholderid;
    var bookings = _repository.Find(exprTree).OrderByDescending(x => x.BookingDetailEntity.ActualDateFrom).OrderBy(x => x.BookingTypeID);
    return Content(this.RenderPartialToString("BookingListNoPaging", bookings));
}
In the above example, Cache (i.e. Cache.Get()) is a class that wraps the HttpContext cache in a more user-friendly way.
hope this helps...
jim
[edit] - added cache interface to add to the 'debate' :)
public interface ISessionCache
{
    T Get<T>(string key);
    T Get<T>(string key, Func<T> getUncachedItem, int cacheDuration);
    void Insert(string key, object obj, int cacheDuration, CacheDependency arg0, TimeSpan arg2);
    void Remove(string key);
    object this[string key] { get; } // default indexer
    IDictionaryEnumerator GetEnumerator();
}
An injectable class implementing it would look along the lines of:
public class FakeCache : ISessionCache
{ ... all interface members implemented here etc. }
or, for the HTTP cache:
public class HttpContextCache : ISessionCache
{ ... all interface members implemented here etc. }
etc, etc..
cheers again - jim