Grails and MySQL batch processing

I'm trying to implement the advice found in this great blog post for batch processing in Grails with MySQL. The problem that I'm having is that including periodic calls to session.clear() in my loop causes org.hibernate.LazyInitializationExceptions to be thrown. There's a quote down in the comments section of the page:
"Your second point about potentially causing LIEs is absolutely true. If you're doing other things outside of importing with the current thread, you'll want to make sure to reattach any objects to the session after you're done clearing."
But how do I do that? Can anyone help me understand specifically how to "reattach any objects to the session" after I'm done clearing?
I'm also interested in parallelizing the database insertion process so that I can take advantage of having a multi-core processor. Can anyone provide advice on how to do that in Grails?

Grails has a few methods to help with this (they leverage Hibernate under the covers).
If you know an object is detached, you can use the attach method to reconnect it.
If you've made changes to the object while it was detached, you can use merge.
If for whatever reason you're not sure whether an object is attached to the session, you can use the isAttached method to find out.
It might also be worth reviewing the Hibernate documentation on Session.
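For concreteness, here is a minimal Groovy sketch of the import loop, assuming an injected sessionFactory and hypothetical Book/Author domain classes; the author instance stands in for the kind of long-lived object you'd need to reattach after each clear:

    class BookImportService {
        def sessionFactory

        void importBooks(List rows, Author author) {
            def session = sessionFactory.currentSession
            rows.eachWithIndex { row, i ->
                new Book(title: row.title, author: author).save()
                if (i % 100 == 0) {
                    session.flush()
                    session.clear()   // detaches everything in the session...
                    if (!author.isAttached()) {
                        author.attach()   // ...so reattach objects you still hold
                        // use author = author.merge() instead if you modified it while detached
                    }
                }
            }
        }
    }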

Related

How to manage JSON-Schemas for multiple projects?

Suppose you have a schema that is used in a UI app (e.g. Vue) and in a Node.js or Spring Boot server, and that has to validate data against databases (e.g. SQL, MongoDB, ...), and maybe some microservices running on whatever.
How and where do I manage this JSON Schema, so that if I have to change it for whatever reason, every architectural component can handle the new JSON Schema(s)?
Otherwise I need to update the schema in up to 10 projects to keep them all compatible.
Is it really as simple as having a git project containing just the JSON Schemas, or do I need specific loaders for each language/environment?
Are there best practices that I am unaware of?
PS: I don't really think I need automatic synchronization at runtime, so I don't think I need another microservice to achieve that.
That being said, if a microservice is the best way to go, then a microservice it is.
If you keep them in a git project, how do you load them? Clone the project each time the app starts? It may work, but I would go with a more flexible approach that shouldn't take too much effort to implement:
Build a JSON schema repository accessible via a REST API
When the app starts, it makes a request to grab the schema (latest, or a specific version)
That way you get a uniform (and scalable) way of playing with the schemas. Even if you think about hot-reloading sometime in the future, you can leverage this approach to do that.
Here is an old project in this direction; you may give it a shot to see if it works (or use it for some inspiration, at least).
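As a rough sketch of that startup fetch (the URL and schema name are made up; this assumes Node 18+ for the built-in fetch plus the Ajv library for validation):

    import Ajv, { ValidateFunction } from "ajv";

    // hypothetical schema repository endpoint; pin a version instead of
    // "latest" if you need reproducible builds
    const SCHEMA_URL = "https://schemas.example.com/order/latest";

    async function loadValidator(): Promise<ValidateFunction> {
      const res = await fetch(SCHEMA_URL);
      if (!res.ok) throw new Error(`Schema fetch failed: ${res.status}`);
      const schema = await res.json();
      return new Ajv().compile(schema); // compile once, reuse for every payload
    }

The same two steps (fetch, then compile) port to the Spring Boot side with any JVM JSON Schema validator.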

Memcache (northscale) socket pool question for Enyim

I'm using NorthScale 1.0.0 and need a little help getting it to limp along long enough to upgrade to the new version. I'm using C# and ASP.NET to work with it via the Enyim libraries. I currently suspect that the application does not have enough connections in the socketPool setting in my app.config. I also noted that the previous developer's code simply treats ANY exception from an attempted Get call to memcache as if the item isn't in the cache, which (I believe) may be resulting in periodic spikes in calls to the database when the pool gets starved. We've been having oddball load spikes that don't seem to have any relation to server load. I suspect that he is not correctly managing the lifecycle of the connections to NorthScale and that we are periodically experiencing starvation in the socket pool as a result, but I'm unable to prove it.
Is there a specific exception I should be looking for when I call the Get method to retrieve items from cache? I'm not really seeing much in the docs that gives me sufficient information on this. Anybody have any sample code on this? I'd even accept java or php code, as I think the .NET libraries were probably based on one of those anyway.
Any ideas?
Thanks,
Will
If you have made the connection to the Membase server (formerly NorthScale) correctly, you typically only get an exception on a 'get' when it's not a hit.
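For what it's worth, a hedged sketch of how you could at least separate a miss from a pool problem using Enyim's TryGet (the key and the database fallback are hypothetical):

    using Enyim.Caching;

    public object GetOrFallback(MemcachedClient client, string key)
    {
        object value;
        if (client.TryGet(key, out value))
        {
            return value; // genuine hit
        }

        // a miss -- or a starved pool / dead socket; log here so pool
        // starvation stops looking like an ordinary cache miss
        return LoadFromDatabase(key); // hypothetical fallback
    }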

Not receiving onSync delete events for multiple SharedObjects in same SWF

I have an application that uses Remote SharedObjects and I am seeing some strange behaviour. I am writing an ActionScript application in AS3 using Flash Builder and connecting to Wowza Media Server 2.
My application is working just fine, but I am now trying to write unit tests for it using FlexUnit. My unit tests involve creating multiple connections to the same remote SharedObject and making sure that I am getting updates correctly. Everything seems to be working well, except that I am not getting any of the SyncEvent.SYNC events with an info.code of "delete". When I run my applications independently in separate tabs, or even as separate SWFs embedded in the same page, it works fine. For some reason, though, it does not work inside a unit test. I have also found that if I load the SWFs using a Loader inside the same SWF, I get the same behaviour. It seems to me to be something strange about the way multiple instances of the same SharedObject behave within the same SWF. I have had to work around other strange behaviour in the unit tests too, such as oldValues not being set properly in the onSync events.
Anyone have any ideas how I can work around this? Is this a known issue? Am I crazy? :)
Would appreciate any help!
I also faced this problem before, when I was working on a Flex application using frameworks like Cairngorm and connecting to AMFPHP with multiple remote objects.
What I came up with to resolve the issue was to make sure those remote objects wouldn't be fired at the same time, that is, to build a so-called "sequential chain" that fires the remote objects one after another.
Achieving this from scratch may be difficult, so you may consider using a modern ActionScript framework (e.g. Swiz or Robotlegs) to help you. That may be too complex to adopt all at once, so I suggest you let the framework handle just the remote object parts while keeping everything else intact.
To get your SharedObject instance, you are using the static method SharedObject.getRemote(). I believe this method will always return the same instance for a given name (and if the persistence parameter has the same value).
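You can check that instance sharing directly (a sketch, assuming an already-connected NetConnection called nc):

    var a:SharedObject = SharedObject.getRemote("room", nc.uri, false);
    var b:SharedObject = SharedObject.getRemote("room", nc.uri, false);
    // expected: true inside a single SWF -- both variables point at the same
    // instance, which is why two "clients" in one unit test don't behave
    // like two separate tabs
    trace(a === b);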
You can run into the same kind of issue when you remove a SharedObject from your app (mySO = null) and reinstantiate it before the garbage collector has done its job.
This kind of behavior makes sense to me, but I must admit it can sometimes be a problem. Anyway, it should be easy to check in a debug session (have a look at your objects' instance numbers).
Now, talking about unit tests: what are you testing? The SharedObject behavior? If so, I believe there is some misconception here. If you really want to test this kind of behavior (and I would be interested in the reason behind it), then I guess you will need more complex tests that run two separate applications.
Hope it helps!
We had similar behavior with deletes in our project.
When we called so.close(), then deleted some key in the shared object, then connected the SO again, it still saw the deleted key as alive.
Workarounds: do not close the SO, or update deleted keys with some constant value (-2, for example) to mark them as deleted.
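A quick sketch of that sentinel workaround (the property name is made up):

    // instead of deleting the key outright (so.setProperty("score", null)),
    // overwrite it with an agreed-upon sentinel so the change syncs to everyone:
    so.setProperty("score", -2);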
Wowza 3 was used.

Log4net: Log Context Only for Exception

I'm looking for a way to collect a set of data that will only be used for debugging, i.e., data that should only be logged if I log an exception. When I log an exception argument with ILog.Error, Fatal, or Debug, I want the extra information logged. When logging other data without an exception, the extra information should not be logged.
I plan to use the GlobalContext or ThreadContext for building the dataset.
My idea was to hook into log4net, presumably by attaching to an event, and alter the message pattern to include the contexts, but I can't find any event that would help me. Perhaps there is an easier way?
What do you think about the overall design of this? Am I on the right track or am I missing something?
If this way is good, how can I implement it?
Hooking in and changing the pattern doesn't feel quite right. I would suggest looking at log4net's filters instead: you would set up a dedicated "ForMessagesWithExceptionsOnly" appender which uses a MessagesWithExceptionOnly filter, so that the appender only processes messages that contain an exception.
To implement your MessagesWithExceptionOnly filter, have a look at FilterSkeleton.
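A minimal sketch of such a filter, subclassing log4net's FilterSkeleton (the class name just mirrors the hypothetical one above):

    using log4net.Core;
    using log4net.Filter;

    public class MessagesWithExceptionOnlyFilter : FilterSkeleton
    {
        // accept only logging events that carry an exception; drop the rest
        public override FilterDecision Decide(LoggingEvent loggingEvent)
        {
            return loggingEvent.ExceptionObject != null
                ? FilterDecision.Accept
                : FilterDecision.Deny;
        }
    }

You would then attach it to the dedicated appender via a <filter> element in your configuration.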

How to save and load different types of objects?

During coding I frequently encounter this situation:
I have several objects (ConcreteType1, ConcreteType2, ...) with the same base type AbstractType, which has abstract methods save and load. Each object can (and has to) save some specific kind of data by overriding the save method.
I have a list of AbstractType objects which contains various ConcreteTypeX objects.
I walk the list and call the save method for each object.
At this point I think it's a good OO design. (Or am I wrong?) The problems start when I want to reload the data:
Each object can load its own data, but I have to know the concrete type in advance, so I can instantiate the right ConcreteTypeX and call the load method. So the loading method has to know a great deal about the concrete types. I usually "solved" this problem by writing some kind of marker before calling save, which is used by the loader to determine the right ConcreteTypeX.
I always had/have a bad feeling about this. It feels like some kind of anti-pattern...
Are there better ways?
EDIT:
I'm sorry for the confusion, I re-wrote some of the text.
I'm aware of serialization and perhaps there is some next-to-perfect solution in Java/.NET/yourFavoriteLanguage, but I'm searching for a general solution, which might be better and more "OOP-ish" compared to my concept.
Is this either .NET or Java? If so, why aren't you using serialization?
If you can't simply use serialization, then I would still definitely pull the object-loading logic out of the base class. Your instinct is right: you've correctly identified a code smell. The base class shouldn't need to change when you change or add derived classes.
The problem is, something has to load the data and instantiate those objects. This sounds like a job for the Abstract Factory pattern.
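As one hedged sketch of that idea in Java (the registry and marker names are invented; AbstractType and the ConcreteTypeX classes are from your description), a marker-to-factory registry keeps the loader ignorant of the concrete types:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Supplier;

    final class TypeRegistry {
        private static final Map<String, Supplier<AbstractType>> FACTORIES = new HashMap<>();

        static void register(String marker, Supplier<AbstractType> factory) {
            FACTORIES.put(marker, factory);
        }

        // the loader reads the marker you wrote before save and asks the
        // registry, so it never needs to know the concrete types themselves
        static AbstractType create(String marker) {
            Supplier<AbstractType> factory = FACTORIES.get(marker);
            if (factory == null) {
                throw new IllegalArgumentException("Unknown type marker: " + marker);
            }
            return factory.get();
        }
    }

    // each concrete type registers itself once at startup, e.g.:
    //   TypeRegistry.register("ct1", ConcreteType1::new);
    // and the loader becomes:
    //   AbstractType obj = TypeRegistry.create(marker);
    //   obj.load(stream);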
There are better ways, but let's take a step back and look at it conceptually. What are all the objects doing? Loading and saving. When you get the object from memory, you really shouldn't have to care whether it gets its information from a file, a database, or the Windows registry. You just want the object loaded. That's important to remember, because later on your maintenance programmer will look at the LoadFromFile() method and wonder, "Why is it called that, since it doesn't really load anything from a file?"
Secondly, you're running into an issue that we all run into, and it's rooted in dividing work. You want a level that handles getting data from a physical source, a level that manipulates this data, and a level that displays this data. This is the crux of N-tier development. I've linked to an article that discusses your problem in detail and describes how to create a Data Access Layer to resolve your issue. There are also numerous code projects here and here.
If it's Java you seek, simply substitute 'java' for .NET and search for 'Java N-Tier development'. However, besides syntactical differences, the design structure is the same.