Dead letter queues with Qpid broker in JUnit

I'm using RabbitTemplate with a Qpid broker in my test environment. Everything works fine, except that the dead letter queues are not functioning: they receive the message, but they never pass it on. All the queues are defined in the rabbit-context file.

I've also run into this issue. I believe it is a consequence of the suffixes used for dead-letter queues and exchanges: Rabbit uses ".dlq", whereas Qpid uses "_DLQ" by default. It may be possible to override this in Qpid using the qpid.broker_dead_letter_queue_suffix system property.
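If you try that route with an embedded broker in JUnit, the property has to be in place before the broker starts. A minimal JUnit 4 sketch, assuming the suffix property is honored by your Qpid version (the class and method names here are illustrative):
import org.junit.BeforeClass;

public class DeadLetterQueueIT {
    @BeforeClass
    public static void alignDlqSuffix() {
        // Must run before the embedded Qpid broker starts, so that its
        // auto-created dead-letter queues use Rabbit's ".dlq" naming.
        System.setProperty("qpid.broker_dead_letter_queue_suffix", ".dlq");
    }
}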

Inherent issue with custom memory manager and RTLD_DEEPBIND?

I need help with an issue for which I created bug 25533 a while ago.
The issue stems from a complex interaction between RTLD_DEEPBIND, libc, and a custom memory manager.
Let me explain the issue first.
My application uses a third-party component that calls dlopen with the RTLD_DEEPBIND flag.
This third-party component is linked statically into my application, along with Google's memory manager (gperftools, https://github.com/gperftools/gperftools).
When the third-party component calls dlopen with RTLD_DEEPBIND, strdup (along with malloc) gets resolved from glibc.
The definition of free, however, still comes from the custom memory manager, and that is a problem: the memory was allocated by libc's malloc, while the free call is directed to the custom memory manager.
I asked the third-party vendor to fix this on their end (the dlopen with RTLD_DEEPBIND), but they said they were unable to, as changing it has side effects.
I filed a ticket against tcmalloc as well, but it also hit a dead end. As a workaround, I made use of malloc hooks.
However, I consider malloc hooks a temporary workaround, and I'm looking for a proper solution to this problem. In my opinion, the proper fix belongs in glibc, which should be agnostic to whatever memory manager an application chooses. I believe jemalloc (Facebook) has the same issue, and they have also used malloc hooks to work around it.
Through this email, I'm looking for suggestions on how to fix this issue properly.
I have created a small test case which is not exactly what is in my application, but closely resembles what I see. It depends on a TCMALLOC_STATIC_LIB variable, which should be the path to the static library (.a) of the custom memory manager.

Preventing reverse engineering with binary code and secret key

I am working on a software program that has to be deployed on a client's private cloud server, where the client has root access. I can communicate with the software through a secure port.
I want to prevent the client from reverse engineering my program, or at least make it "hard enough". Below is my approach:
Write the code in Go and compile the software into binary code (maybe with obfuscation).
Make sure the program can only be initiated with a secret key that is sent through the secure port. The secret key can change depending on time (see the sketch after this list).
Every time I need to start/stop the program, I send commands with the secret keys through the secure port.
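For illustration, a time-changing key of the kind described could be derived by HMAC-ing the current time window with a shared master secret. This is a minimal sketch only; the 30-second window and HmacSHA256 are my assumptions, not part of the design above:
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.ByteBuffer;
import java.time.Instant;
import java.util.Base64;

public class TimedKey {
    // Derives a key that changes every 30 seconds from a shared master secret.
    static String deriveKey(byte[] masterSecret) throws Exception {
        long window = Instant.now().getEpochSecond() / 30;  // current time window
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(masterSecret, "HmacSHA256"));
        byte[] tag = mac.doFinal(ByteBuffer.allocate(8).putLong(window).array());
        return Base64.getEncoder().encodeToString(tag);
    }
}
As the answer below points out, though, a root user can dump whatever this computes straight from the program's memory.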
I think this approach can prevent a root user from either:
Using a debugger to reverse engineer my code
Running the program repeatedly to check outputs
My question is: What are the weak spots of this design? How can a root user attack it?
I want to prevent the client from reverse engineering my program,
You can't fully prevent this when the software runs on hardware you don't own. To run the software, the CPU must see all of the program's instructions, and they will be stored in the computer's memory.
https://softwareengineering.stackexchange.com/questions/46434/how-can-software-be-protected-from-piracy
Code is data. When the code is runnable, a copy of that data is un-protected code. Unprotected code can be copied.
Peppering the code with anti-piracy checks makes it slightly harder, but hackers will just use a debugger and remove them. Inserting no-ops instead of calls to "check_license" is pretty easy.
(answers in https://softwareengineering.stackexchange.com/questions/46434 may be useful for you)
The hardware owner controls the OS and memory; he can dump everything.
or at least make it "hard enough".
You can only make it a bit harder.
Write the code in Go and compile the software into binary code (maybe with obfuscation)
IDA will decompile any machine code. Using native machine code is a bit stronger than bytecode (Java, .NET, or dex).
Make sure the program can only be initiated with a secret key that is sent through the secure port. The secret key can change depending on time.
If a copy of the same secret key (or keys) is in the code or memory of the program, the user can dump it and simulate your server. If part of your code, or part of the data needed for the code to run, is stored encrypted and deciphered with such an external key, the user can either eavesdrop on the key (after it is decoded from SSL but before it is used to decrypt the secret part of the code), or dump the decrypted code/data from memory. It is very easy to spot new executable code being created in memory, even with default preinstalled tools like strace on Linux: just search for mmap calls with the PROT_EXEC flag.
Every time I need to start/stop the program, I send commands with the secret keys through the secure port.
This is just a variation of an online license/antipiracy check ("phone home").
I think this approach can prevent a root user from using a debugger to reverse engineer my code
No, he can start a debugger at any time. You can make an interactive debugger a bit harder to use if the program communicates with your server often (say, every 5 seconds); but if it communicates that often, it is better to move some part of the computation to your server, since that part will actually be protected.
And he can still use non-interactive debuggers, tracing tools, and memory dumping. He can also run the program in a virtual machine, wait until the online check is done (using tcpdump and netstat to monitor network traffic), then take a live snapshot of the VM (there are several ways to enable "live migration" of a VM; only a short pause will be visible to your program, and only if it has external timing), keep the first copy running online, and use the snapshot for offline debugging, with all keys and decrypted code in it.
Running the program repeatedly to check outputs
Until he cracks the communications...

Handling "Internal server error" in Groovy-Console

I have a Groovy script which takes about 5 hours to complete (it restarts many workflows, deleting the old one and starting a new one), and unfortunately there are some workflows which can't be processed and throw an "internal server error", which ends the Groovy call.
All I can do now is take a look at the logs, restart the Groovy script, and exclude the problematic workflow ID.
It would be a great performance boost if I could catch this "internal server error" in the HAC and continue with the next workflow instead of aborting the script.
I already tried wrapping it in try/catch, but this doesn't work.
Is there any way to "ignore" the "internal server error" entries in my list to process?
Thanks for any help!
Run the Groovy script natively, not through the HAC. The Groovy/Beanshell consoles are handy for quick prototypes, but running a 5-hour process through a browser interface seems kludgy at best. You have at least a couple of options:
Dynamic Beans
Did you know that Spring beans can be implemented in a number of languages using dynamic language beans?
Define interfaces for your processes and wire them up to Groovy implementations using the Spring configuration. Since the scripts are interpreted at runtime, you can swap out code without needing to recompile the entire platform.
Now you have the full power of Java, Spring, Groovy, and hybris. Properly sequester each process so that exceptions don't bubble up and crash the entire thing.
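A minimal sketch of that sequestering, in plain Java; restartWorkflow() is a hypothetical stand-in for whatever service call the script currently makes per workflow:
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

public class WorkflowRestarter {
    private static final Logger LOG = Logger.getLogger(WorkflowRestarter.class.getName());

    void restartAll(List<String> workflowIds) {
        for (String id : workflowIds) {
            try {
                restartWorkflow(id);                 // delete old, start new
            } catch (Exception e) {                  // the "internal server error" lands here
                LOG.log(Level.SEVERE, "Skipping workflow " + id, e);
            }
        }
    }

    void restartWorkflow(String id) { /* real restart logic goes here */ }
}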
This option would be the cleanest way to go, since you'd be integrating the code directly into the project's codebase. And you can keep all your existing [ Groovy | JRuby | Beanshell | ... ] code.
Roll your own
Another thing you might try is examining hybris' Groovy API. I was able to leverage hybris' Beanshell interpreter classes to create my own test harness. It is a simple standalone Eclipse project that allows me to write and run Beanshell within Eclipse, with output to the console. I use it on a daily basis for quick scripting tasks like batch updates, FlexibleSearch queries, etc. I'd imagine you could do the same thing with Groovy. Search the hybris API for the HAC code that interprets the Groovy requests from the browser.
The sky's the limit, but first get out of the browser console for heavy scripting tasks.
My short answer would be: Don't use scripts for time-consuming processes.
Although you mentioned that it is not possible to define standard scripts because the business is working in parallel, I cannot recommend maintaining a live system in this manner.
Integrate that logic into a custom CronJob and add all configurable/dynamic things as properties of said job (a rough sketch follows at the end of this answer).
The benefits of this approach would be:
you have a proper logging mechanism (System.out in the HAC Groovy console is painful)
you can trace your execution (time consumed, started, stopped, etc.)
can be triggered automatically (CronJob Trigger) or by other instructed user (eg Operations)
you get a more stable workflow as a whole; no need to keep track of those magic scripts (how do you version them, in the resource folder?)
The downside is, indeed, that you need a redeploy.
From my experience, dynamically changed code (Dynamic Beans, for example) works on projects of comparatively low complexity, but tends to get messy pretty quickly.
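A rough sketch of such a job, assuming the usual AbstractJobPerformable API; the class name, the two helper methods, and the log4j logger are illustrative, not hybris specifics:
import de.hybris.platform.cronjob.enums.CronJobResult;
import de.hybris.platform.cronjob.enums.CronJobStatus;
import de.hybris.platform.cronjob.model.CronJobModel;
import de.hybris.platform.servicelayer.cronjob.AbstractJobPerformable;
import de.hybris.platform.servicelayer.cronjob.PerformResult;
import org.apache.log4j.Logger;

import java.util.Collections;
import java.util.List;

public class RestartWorkflowsJob extends AbstractJobPerformable<CronJobModel> {
    private static final Logger LOG = Logger.getLogger(RestartWorkflowsJob.class);

    @Override
    public PerformResult perform(final CronJobModel cronJob) {
        // Workflow IDs come from job properties instead of a hard-coded script.
        for (final String workflowId : resolveWorkflowIds(cronJob)) {
            try {
                restartWorkflow(workflowId);          // delete old, start new
            } catch (final Exception e) {
                LOG.error("Skipping workflow " + workflowId, e);  // continue with the next one
            }
        }
        return new PerformResult(CronJobResult.SUCCESS, CronJobStatus.FINISHED);
    }

    private List<String> resolveWorkflowIds(final CronJobModel cronJob) {
        return Collections.emptyList();  // read from the job's configured properties
    }

    private void restartWorkflow(final String workflowId) {
        // real restart logic goes here
    }
}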

What are the best practices to log an error?

Many times I have seen errors logged like this:
System.out.println("Method aMethod with parameters a:"+a+" b: "+b);
print("Error in line 88");
so.. What are the best practices to log an error?
EDIT:
This is Java, but it could be C/C++, Basic, etc.
Logging directly to the console is horrendous and, frankly, the mark of an inexperienced developer. The only reasons to do this sort of thing are that (1) the developer is unaware of other approaches, and/or (2) the developer has not thought one bit about what will happen when the code is deployed to a production site, and how the application will be maintained at that point. Dealing with an application that is logging 1 GB/day or more of completely unneeded debug output is maddening.
The generally accepted best practice is to use a logging framework that has the following concepts (a short SLF4J sketch follows the list):
Different log objects - Different classes/modules/etc can log to different loggers, so you can choose to apply different log configurations to different portions of the application.
Different log levels - so you can tweak the logging configuration to only log errors in production, to log all sorts of debug and trace info in a development environment, etc.
Different log outputs - the framework should allow you to configure where the log output is sent without requiring any changes in the codebase. Some examples of places you might want to send log output: files, files that roll over based on date/size, databases, email, remoting sinks, etc.
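For instance, with SLF4J (discussed further below) those three concepts look roughly like this; loggers, levels, and destinations are then wired up in configuration rather than code (OrderService and placeOrder are invented for the example):
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderService {
    // One logger per class: configuration can treat this logger
    // differently from loggers in other packages.
    private static final Logger LOG = LoggerFactory.getLogger(OrderService.class);

    void placeOrder(String orderId) {
        LOG.debug("placing order {}", orderId);  // emitted only if DEBUG is enabled
        try {
            // ... business logic ...
        } catch (RuntimeException e) {
            LOG.error("failed to place order {}", orderId, e);  // emitted at ERROR
        }
    }
}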
The log framework should never, never, never throw any exceptions or errors from the logging code. Your application should not fail to load or fail to start because the log framework cannot create its log file or obtain a lock on the file (unless this is a critical requirement for your app, perhaps for legal reasons).
The log framework you eventually use will of course depend on your platform. Some common options:
Java:
Apache Commons Logging
log4j
logback
Built-in java.util.logging
.NET:
log4net
C++:
log4cxx
Apache Commons Logging is not intended for an application's general logging. It's intended to be used by libraries or APIs that don't want to force a logging implementation on the API's user.
There are also classloading issues with Commons Logging.
Pick one of the [many] logging APIs, the most widely used probably being log4j or the Java Logging API.
If you want implementation independence, you might want to consider SLF4J, by the original author of log4j.
Having picked an implementation, use the logging levels/severities within that implementation consistently, so that searching/filtering logs is easier.
The easiest way to log errors in a consistent format is to use a logging framework such as Log4j (assuming you're using Java). It is useful to include a logging section in your code standards to make sure all developers know what needs to be logged. The nice thing about most logging frameworks is they have different logging levels so you can control how verbose the logging is between development, test, and production.
A best practice is to use the java.util.logging framework
Then you can log messages in either of these formats, given a logger obtained for the class:
import java.util.logging.Logger;

Logger log = Logger.getLogger(MyClass.class.getName());
log.warning("..");
log.fine("..");
log.finer("..");
log.finest("..");
Or
log.log(Level.WARNING, "blah blah blah", e);
Then you can use a logging.properties file (example below) to switch between levels of logging, and do all sorts of clever stuff like logging to files, with rotation, etc.
handlers = java.util.logging.ConsoleHandler
.level = WARNING
java.util.logging.ConsoleHandler.level = ALL
com.example.blah.level = FINE
com.example.testcomponents.level = FINEST
Frameworks like log4j and others should be avoided, in my opinion; Java already has everything you need.
EDIT
This can apply as a general practice for any programming language. Being able to control all levels of logging from a single properties file is often very important in enterprise applications.
Some suggested best practices:
Use a logging framework. This will allow you to:
Easily change the destination of your log messages
Filter log messages based on severity
Support internationalised log messages
If you are using Java, then SLF4J is now preferred to Jakarta Commons Logging as the logging facade.
As stated, SLF4J is a facade, and you then have to pick an underlying implementation: log4j, java.util.logging, or 'simple'.
Follow your framework's advice for ensuring that expensive logging operations are not needlessly carried out (see the example below).
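With SLF4J, for example, parameterized messages defer string construction until the framework knows the level is enabled, and an explicit guard covers genuinely expensive work. A small illustrative sketch (buildExpensiveDump is made up):
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class GuardedLogging {
    private static final Logger LOG = LoggerFactory.getLogger(GuardedLogging.class);

    void process(Object order) {
        // Parameterized message: order.toString() runs only if DEBUG is enabled.
        LOG.debug("processing {}", order);

        // Explicit guard: the expensive call below would otherwise run
        // even when TRACE is disabled.
        if (LOG.isTraceEnabled()) {
            LOG.trace("full state dump: {}", buildExpensiveDump(order));
        }
    }

    private String buildExpensiveDump(Object order) {
        return String.valueOf(order);  // stands in for costly serialization
    }
}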
The Apache Commons Logging API mentioned above is a great resource. Getting back to Java, there is also a standard error output stream (System.err).
Directly from the Java API:
This stream is already open and ready to accept output data. Typically this stream corresponds to display output or another output destination specified by the host environment or user. By convention, this output stream is used to display error messages or other information that should come to the immediate attention of a user even if the principal output stream, the value of the variable out, has been redirected to a file or other destination that is typically not continuously monitored.
Aside from the technical considerations in the other answers, it is advisable to log a meaningful message and perhaps some steps to avoid the error in the future. Depending on the error, of course.
You get more out of an I/O error when the message states something like "Could not read from file X, you don't have the appropriate permission."
See more examples on SO or search the web.
There really is no single best practice for logging an error. It basically just needs to follow a consistent pattern (within the software/company/etc.) that provides enough information to track the problem down. For example, you might want to keep track of the time, the method, the parameters, the calling method, and so on.
So long as you don't just print "Error in "

Design question: How can I access an IPC mechanism transparently?

I want to do this (no particular language):
print(foo.objects.bookdb.books[12].title);
or this:
book = foo.objects.bookdb.book.new();
book.title = 'RPC for Dummies';
book.save();
Where foo actually is a service connected to my program via some IPC, and to access its methods and objects, some layer actually sends and receives messages over the network.
Now, I'm not really looking for an IPC mechanism, as there are plenty to choose from. It's likely not to be XML-based, but rather something like Google's protocol buffers, D-Bus, or CORBA. What I'm unsure about is how to structure the application so I can access the IPC layer just like I would any object.
In other words, how can I have OOP that maps transparently over process boundaries?
Note that this is a design question and I'm still working at a pretty high level of the overall architecture. So I'm still pretty agnostic about which language this is going to be in; C#, Java, and Python are all likely to get used, though.
I think the way to do what you are requesting is to regard all object communication as message passing. This is how object methods are handled in Ruby and Smalltalk, among others.
With message passing (rather than method calling) as your object communication mechanism, operations such as calling a method that didn't exist when you wrote the code become sensible, since the object can do something sensible with the message anyway: check for a remote procedure, return the value of a field with the same name from a database, throw a 'method not found' exception, or anything else you can think of.
It's important to note that in languages that don't use this as the default mechanism, you can do message passing anyway (every object has a 'handleMessage' method), but you won't get the syntax niceties, and you won't get IDE help without some extra effort on your part to make the IDE parse your handleMessage method and check for valid inputs.
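In Java, this style of generic message handling can be approximated with a dynamic proxy: every call on the interface funnels into a single invoke() method, which is where a real implementation would marshal the call onto the IPC channel. A minimal sketch with a stubbed transport (BookDb and titleOf are invented for the example):
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface BookDb {
    String titleOf(int bookId);
}

public class RemotingDemo {
    // Every method call on the returned proxy arrives here as a "message"
    // (method name + arguments); a real handler would serialize it and
    // send it over the wire instead of printing.
    static <T> T remoteProxy(Class<T> iface) {
        InvocationHandler handler = (proxy, method, args) -> {
            System.out.println("would send: " + method.getName());
            return "RPC for Dummies";  // stubbed remote reply (String-only here)
        };
        return iface.cast(Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] { iface }, handler));
    }

    public static void main(String[] args) {
        BookDb db = remoteProxy(BookDb.class);
        System.out.println(db.titleOf(12));  // reads like a local call
    }
}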
Read up on Java's RMI: the introductory material shows how you can have a local definition of a remote object.
The trick is to have two classes with identical method signatures. The local version of the class is a facade over some network protocol. The remote version receives requests over the network and does the actual work of the object.
You can define a pair of classes so a client can have
foo = NonLocalFoo("http://host:port")
foo.this = "that"
foo.save()
And the server receives set_this() and save() method requests from a client connection. The server side is (generally) non-trivial because you have a bunch of discovery and instance management issues.
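In Java RMI terms, the shared contract that both halves implement might look like the following; Book and its methods are invented to match the bookdb example in the question:
import java.rmi.Remote;
import java.rmi.RemoteException;

// Both the client-side facade and the server-side implementation
// expose exactly these signatures; only the server does real work.
interface Book extends Remote {
    String getTitle() throws RemoteException;
    void setTitle(String title) throws RemoteException;
    void save() throws RemoteException;
}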
You shouldn't do it! It is very important for programmers to see and feel the difference between an IPC/RPC call and a local method call in the code. If you make it so that they don't have to think about it, they won't think about it, and that will lead to very poorly performing code.
Think of:
foreach o in someList where o.isGreen {
    o.makeBlue();
}
The programmer assumes that the loop takes a few nanoseconds to complete; instead it takes close to a second if someList happens to be remote.