Memcache (northscale) socket pool question for Enyim - exception

I'm using Northscale 1.0.0 and need a little help getting it to limp along for long enough to upgrade to the new version. I'm using C# and ASP.NET to work with it using the Enyim libraries. I currently suspect that the application does not have enough connections per the socketPool setting in my app.config. I also noted that the previous developer's code simply treats ANY exception from an attempted Get call to MemCache as if the item isn't in the cache, which (I believe) may be resulting in periodic spikes in calls to the database when the pool gets starved. We've been having oddball load spikes that don't seem to have any relation to server load. I suspect that he is not correctly managing the lifecycle on the connections to Northscale and that we are periodically experiencing starvation in the socket pool as a result, but I'm unable to prove it.
Is there a specific exception I should be looking for when I call the Get method to retrieve items from cache? I'm not really seeing much in the docs that gives me sufficient information on this. Anybody have any sample code on this? I'd even accept java or php code, as I think the .NET libraries were probably based on one of those anyway.
Any ideas?
Thanks,
Will

If you have made the connection to the Membase server (formerly NorthScale) correctly, you typically only get an exception on a 'get' when it's not a hit.
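If it helps with testing the starvation theory, here is a minimal C# sketch of one way to see exactly which exception type comes back from a Get call instead of swallowing it as a miss. The wrapper name and logging are illustrative; the only thing assumed from Enyim is MemcachedClient.Get(string).

using System;
using Enyim.Caching;

// Illustrative diagnostic wrapper, not Enyim's documented behaviour.
public static class CacheDiagnostics
{
    public static object GetLogged(MemcachedClient client, string key)
    {
        try
        {
            return client.Get(key);
        }
        catch (Exception ex)
        {
            // Log the concrete exception type instead of silently treating the
            // failure as a cache miss, so pool starvation or connection problems
            // become visible in the logs before the code falls back to the database.
            Console.Error.WriteLine("Cache 'get' failed for '" + key + "' with "
                + ex.GetType().FullName + ": " + ex.Message);
            return null;
        }
    }
}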

Related

Can "MaxConcurrentStreams" server option be considered an equivalent to "maximum_concurrent_rpcs" from grpc-python?

I am implementing a gRPC server (in Go) where I need to respond with some sort of server busy/unavailable message when my server is already servicing a set maximum number of RPCs.
I implemented a gRPC server with grpc-python earlier, where I achieved this with a combination of maximum_concurrent_rpcs and the max number of threads in the thread pool. I am looking for something similar in grpc-go. The closest I could find was the server setting configured via the ServerOption returned by calling MaxConcurrentStreams. My application only supports unary RPCs and I am not sure whether this setting applies to them.
I am just looking to enforce a maximum number of active concurrent requests the server can handle. Would setting MaxConcurrentStreams work, or should I do it in my own code? (I have a rudimentary implementation for it, but I would rather use something provided by grpc-go.)
I've never used MaxConcurrentStreams before, because for high-load services you usually want to make the most of your hardware, and this limitation doesn't seem to make sense there. Perhaps it's possible to achieve your goal with this setting, but you need to investigate which kind of error is returned when the MaxConcurrentStreams limit is reached. I think that should be a gRPC transport error, not your own, so you won't be able to control the error message and code.
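If you do end up enforcing the cap in your own code, a semaphore-based unary interceptor is one way to do it. A rough Go sketch follows; the interceptor name and the limits are illustrative, and note that MaxConcurrentStreams is an HTTP/2 per-connection setting rather than a global cap across all clients.

package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// limitUnary rejects new unary RPCs once maxActive of them are already in
// flight, instead of queueing them behind a thread pool.
func limitUnary(maxActive int) grpc.UnaryServerInterceptor {
	sem := make(chan struct{}, maxActive)
	return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo,
		handler grpc.UnaryHandler) (interface{}, error) {
		select {
		case sem <- struct{}{}:
			defer func() { <-sem }()
			return handler(ctx, req)
		default:
			// At capacity: fail fast with an error the client can act on.
			return nil, status.Error(codes.ResourceExhausted, "server busy, try again later")
		}
	}
}

func main() {
	srv := grpc.NewServer(
		grpc.MaxConcurrentStreams(100),         // per client connection, at the transport level
		grpc.UnaryInterceptor(limitUnary(100)), // across all connections, in application code
	)
	_ = srv // register services and call srv.Serve(listener) as usual
}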

Sikuli, Java, and java.lang.ThreadDeath exception

We are using Sikuli with Java (Sikuli 1.1.1), but we are running into a java.lang.ThreadDeath exception for a new client. In the Java configuration we have selected the mixed code option "Enable - hide warning and run with protections". Has anyone run into this issue before, and what is the reason and possible fix?
Somewhere in the code Thread.stop() is being called.
According to the documentation, don't do this! It releases all locks held by that thread, which may cause locked objects to be accessed in an inconsistent state.
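If you control the code that is calling Thread.stop(), the usual replacement is cooperative interruption. A minimal sketch (the worker body is just a placeholder):

public class CooperativeStop {
    public static void main(String[] args) throws InterruptedException {
        // Cooperative cancellation instead of Thread.stop(): the worker checks the
        // interrupt flag between units of work and exits cleanly, releasing its
        // locks in an orderly way as run() returns.
        Thread worker = new Thread(() -> {
            long unitsOfWork = 0;
            while (!Thread.currentThread().isInterrupted()) {
                unitsOfWork++;          // placeholder for one unit of real work
            }
            System.out.println("Stopped cleanly after " + unitsOfWork + " units");
        });
        worker.start();

        Thread.sleep(1000);             // let it run briefly
        worker.interrupt();             // request the stop instead of killing the thread
        worker.join();
    }
}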

Preferred way to DB connection in iOS

I'm a beginner iOS developer, and I'm trying to build a CRM system to learn the different aspects of developing.
I have a question regarding the preferred way to connect to an external SQL server. I'm using Karl Kraft's Objective-C MySQL connector, by the way.
Right now I init the database-controller object (which in turn creates, then idles, the connection to the server) in my app delegate (didFinishLaunchingWithOptions), and that gives me some unwanted side effects: the screen stays black for a long time at startup if the connection to the DB is slow, and sometimes the app is "too fast" and a query tries to execute before the connection has been fully established, resulting in an exception being thrown.
The behavior I want (and guess is the preferred) is that the GUI loads up first, and then the initialization of the DB-controller and connection is established in a background thread - updating the GUI when the data has been acquired.
How would I achieve this? I have tried a number of different ways I've come across in my research, dispatch queues, initing it straight from viewDidLoad, etc., but none give me the desired "GUI first, then data" effect.
Also, would it be preferred to have an idling connection during the session of the program - or should each query 'connect - do its thing - disconnect'?
Regards, Christopher
Commandment One: don't do networking on the main thread - it's reserved for the UI. Otherwise your app will have a laggy, frozen UI.
Commandment Two: instead of a lot of sequential synchronous calls, use asynchronous calls (GCD, background threads, etc.), events and callbacks. Cocoa (Touch) is designed with this in mind, so it's easy to do; see the sketch below.
Commandment Three: if you launch something automatically, let it be launched when the app is fully ready. Let the call to the web service be the last thing in application:didFinishLaunchingWithOptions:. Even better, let the user initiate the login via a user action, e.g. by pressing a "Login" button.
Commandment Four: read the first three Commandments again and keep them in mind. Practice them until you know them well.
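Applied to the question above, a rough Objective-C sketch of Commandments One and Two might look like this. databaseController, connectAndFetchCustomers, customers and tableView are illustrative names, not part of Karl Kraft's connector.

- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Connect and run the query on a background queue so the main thread never blocks.
        NSArray *rows = [self.databaseController connectAndFetchCustomers];

        dispatch_async(dispatch_get_main_queue(), ^{
            // Back on the main queue: hand the results to the UI.
            self.customers = rows;
            [self.tableView reloadData];
        });
    });
}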

Should an exception logging subsystem have limited throughput? If yes, how?

We had a case where exceptions were being thrown in some kind of infinite loop.
The stack traces were very big and we logged all of them.
That flooded our Oracle database, and when the redo logs reached their size limit the DB stopped.
EDIT: Of course the most important thing is to find the cause of the infinite loop and correct the bug in the system. We already did that, and that is not the question here.
The system could have more bugs like that (it's a Windows service and it's running constantly), and in that case one app broke the whole DB, meaning all applications on that Oracle instance.
I'm mostly interested in your experiences, architecturally, and in how other logging frameworks like log4net, log4j and others handle a flood of exceptions. Do they just handle them like all other exceptions?
I think your situation illustrates that there should definitely be some mechanism in place to prevent exception logs from causing a denial-of-service anywhere, as this has done.
If you use the Windows event logs, this can be handled for you automatically, as old records can automatically be wiped out when the log is full. You could code a DB-based system to do the same thing, as well.
Of course, you want to do everything you can to eliminate such errors in the first place wherever possible, too!
Another option may be to detect and ignore multiple, consecutive errors of the same type... perhaps simply updating a count property/field instead.
I'd worry more about the root cause of the infinite loop than I would about limiting logging.
I'd check your code for methods that catch an exception, log the stack trace, and re-throw. I'd argue that catching and re-throwing is not exception handling. If a class truly can't handle the exception, it's better to let it simply bubble up until it reaches a single point where someone can deal with it.
Redo logs? How often do you flush those? Surely you don't have one big transaction, do you?
Can you do the logging to a different database with no redo logs? That will protect the production database.
In our applications we have a central exception handler which all exceptions go through:
void OnExceptionOccurs(Exception ex,
                       string enduserFriendlyContextDescription,
                       string technicalContextDescription,
                       ILogger loggerBelongingToProcess)
That handler can decide how to log, and it gives you a central location for a breakpoint when debugging.
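Building on that idea, a throttled variant of such a handler could cap how much gets written per time window. A C# sketch follows; the window size, the cap, and the ILogger interface with its Log overloads are assumptions for illustration, not taken from a specific framework.

using System;

// Illustrative only: ILogger and its Log overloads are assumed from the signature above.
public interface ILogger
{
    void Log(string message);
    void Log(string message, Exception ex);
}

public class ThrottledExceptionHandler
{
    private readonly object _sync = new object();
    private static readonly TimeSpan Window = TimeSpan.FromMinutes(1);
    private const int MaxPerWindow = 100;
    private DateTime _windowStart = DateTime.UtcNow;
    private int _countInWindow;

    public void OnExceptionOccurs(Exception ex,
                                  string enduserFriendlyContextDescription,
                                  string technicalContextDescription,
                                  ILogger loggerBelongingToProcess)
    {
        bool overCap, firstOverCap;
        lock (_sync)
        {
            var now = DateTime.UtcNow;
            if (now - _windowStart > Window) { _windowStart = now; _countInWindow = 0; }
            _countInWindow++;
            overCap = _countInWindow > MaxPerWindow;
            firstOverCap = _countInWindow == MaxPerWindow + 1;
        }

        if (overCap)
        {
            // Over the cap: drop the full stack trace so a runaway loop cannot
            // flood the database, but leave one marker entry per window.
            if (firstOverCap)
                loggerBelongingToProcess.Log("Exception logging throttled: more than "
                    + MaxPerWindow + " errors within " + Window);
            return;
        }

        loggerBelongingToProcess.Log(technicalContextDescription, ex);
    }
}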

grails and mysql batch processing

I'm trying to implement the advice found in this great blog post for batch processing in Grails with MySQL. The problem I'm having is that including periodic calls to session.clear() in my loop causes org.hibernate.LazyInitializationExceptions to be thrown. There's a quote down in the comments section of the page:
"Your second point about potentially causing LIEs is absolutely true. If you're doing other things outside of importing with the current thread, you'll want to make sure to reattach any objects to the session after you're done doing your clearing."
But how do I do that? Can anyone help me specifically understand how to "reattach any objects to the session" after I'm done clearing?
I'm also interested in parallelizing the database insertion process so that I can take advantage of having a multi core processor. Can anyone provide advice on how to do that in Grails?
Grails has a few methods to help with this (they leverage Hibernate under the covers).
If you know an object is detached, you can use the attach() method to reconnect it.
If you've made changes to the object while it was detached, you can use merge().
If, for whatever reason, you're not sure whether an object is attached to the session, you can use the isAttached() method to find out.
It might also be worth reviewing the Hibernate documentation on Session.
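Putting those methods together with the blog post's flush/clear pattern, a rough Groovy sketch might look like this. The Account domain class, the rows collection and the held parentAccount reference are illustrative, and sessionFactory is assumed to be injected into a Grails service.

class ImportService {
    def sessionFactory    // injected by Grails

    def importAccounts(List rows, Account parentAccount) {
        def session = sessionFactory.currentSession
        rows.eachWithIndex { row, i ->
            new Account(name: row.name, parent: parentAccount).save()
            if (i % 100 == 0) {
                session.flush()
                session.clear()
                // clear() detaches everything, including objects you still hold
                // references to, so reattach them before they are used again.
                if (!parentAccount.isAttached()) {
                    parentAccount.attach()                  // object was not modified while detached
                    // parentAccount = parentAccount.merge()   // use merge() if it was modified
                }
            }
        }
    }
}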