I'm using Apache Qpid as a broker for writing JUnit tests. My use case requires the delayed message functionality in tests. Does Qpid support this, like RabbitMQ does? Is there a plugin available that I can configure in the Qpid JSON config file?
I assume since your question is tagged 'junit' you are writing your unit tests in Java and are probably embedding the Apache Qpid Broker-J.
Delivery delay is supported. You don't need a plugin. It is described here:
https://qpid.apache.org/releases/qpid-broker-j-7.0.6/book/Java-Broker-Concepts-Queues.html#Java-Broker-Concepts-Queue-HoldingEntries
As discussed in the document, you must turn on the feature at the queue level and from the client side indicate your wish for the delivery to be delayed. To do this pass a message annotation (if using AMQP 1.0) or a message header (if using the older AMQP protocols).
If you are using a JMS 2.0 compatible client, life is easy: access the feature via the JMS 2.0 API methods MessageProducer#setDeliveryDelay() or JMSProducer#setDeliveryDelay().
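A minimal sketch of the producer side, assuming a JMS 2.0 client (e.g. the Qpid JMS AMQP 1.0 client) and a ConnectionFactory obtained however your test harness sets one up; the queue, payload, and 5-second delay are illustrative:

```java
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.JMSProducer;
import javax.jms.Queue;

public class DelayedSendExample {

    // Sends a message that the broker holds back for 5 seconds before delivery.
    // The target queue must have delivery delay (holding entries) enabled on the broker.
    static void sendDelayed(ConnectionFactory factory, Queue queue) {
        try (JMSContext context = factory.createContext()) {
            JMSProducer producer = context.createProducer();
            producer.setDeliveryDelay(5000);        // delay in milliseconds
            producer.send(queue, "delayed payload");
        }
    }
}
```

With the pre-2.0 clients you would instead set the message header/annotation described in the linked documentation by hand.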
I don't like tools that do many things at once, so gRPC seems like overhead to me, much like Kubernetes.
gRPC actually combines two things: extended Protobuf (service support) and HTTP/2.
I have read a lot of articles saying that gRPC is awesome for performance, and there are two reasons:
Protobuf is used, which is smaller than JSON or XML.
gRPC uses HTTP/2 as the transport protocol.
Here is the main point: Protobuf and HTTP/2 are independent projects/tools. With that understanding, I can say that gRPC is nothing but a combination of several different tools, much as Kubernetes combines Docker and orchestration tools.
So my question is: what are the actual advantages of using gRPC over HTTP/2 with any payload (CSV, XML, JSON, etc.)?
Let's skip the part about serialization because, as I mentioned, Protobuf is a library independent of gRPC.
As you pointed out, gRPC and Protobuf are often conflated. While, in the vast majority of cases, gRPC will be using protobuf as an IDL and HTTP/2 as the transport, this is not always the case.
So then, what value does gRPC provide on its own? For starters, it provides battle-tested implementations of each of those transports, along with first class support for the protobuf IDL. Integrating these things is not trivial. gRPC packages all of them into one nice pluggable box so you don't have to do the legwork.
It also provides you with functionality that HTTP/2 on its own does not. Pluggable authorization/authentication, distributed tracing instrumentation, debugging utilities, look-aside load balancing (including upcoming support for the xDS protocol), and more are provided.
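As an illustration of that pluggability, here is a minimal sketch of a grpc-java client interceptor that attaches a bearer token to every outgoing call; the header value is a placeholder, and the interceptor would be registered on the channel via ManagedChannelBuilder#intercept():

```java
import io.grpc.CallOptions;
import io.grpc.Channel;
import io.grpc.ClientCall;
import io.grpc.ClientInterceptor;
import io.grpc.ForwardingClientCall;
import io.grpc.Metadata;
import io.grpc.MethodDescriptor;

// Adds an "authorization" header to each call -- the kind of cross-cutting
// concern gRPC makes pluggable, rather than something you wire up yourself
// on top of a raw HTTP/2 client.
public class AuthInterceptor implements ClientInterceptor {

    private static final Metadata.Key<String> AUTH =
            Metadata.Key.of("authorization", Metadata.ASCII_STRING_MARSHALLER);

    @Override
    public <ReqT, RespT> ClientCall<ReqT, RespT> interceptCall(
            MethodDescriptor<ReqT, RespT> method, CallOptions options, Channel next) {
        return new ForwardingClientCall.SimpleForwardingClientCall<ReqT, RespT>(
                next.newCall(method, options)) {
            @Override
            public void start(Listener<RespT> listener, Metadata headers) {
                headers.put(AUTH, "Bearer <token>"); // placeholder token
                super.start(listener, headers);
            }
        };
    }
}
```

The same interceptor mechanism is how tracing, retries, and auth libraries plug into gRPC without touching application code.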
We are migrating from TIBCO EMS to Solace and, in order to minimize disruption, we are trying to bridge messages from TIBCO to Solace. Information from TIBCO Support is that messages cannot be routed to another JMS provider; however, I find this improbable. Does anyone have any ideas how to connect the two messaging systems?
Solace has recently launched an integration tool called HybridEdge, which is based on Apache Camel. Part of the Solace integration is a JMS component (Camel adapter). Using HybridEdge, you could easily set up a "route" (Camel flow) that consumes from TIBCO EMS via the Camel JMS component (using the EMS JMS ConnectionFactory) and bridges to Solace JMS via the Solace component (which uses the Solace JMS ConnectionFactory).
https://github.com/SolaceProducts/solace-hybridedge is where the Solace HybridEdge starter project is. It's an example of how you can get started with HybridEdge.
You would then use the Camel JMS component to connect to EMS. Info on the component is here: http://camel.apache.org/jms.html
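A minimal sketch of such a route, assuming camel-jms plus the EMS and Solace client jars are on the classpath; "emsJms" and "solaceJms" are hypothetical component names you would register against each broker's ConnectionFactory, and the queue name is illustrative:

```java
import org.apache.camel.builder.RouteBuilder;

// Bridges messages from a TIBCO EMS queue to the same-named Solace queue.
public class EmsToSolaceRoute extends RouteBuilder {
    @Override
    public void configure() {
        // "emsJms" and "solaceJms" are JMS component instances, each configured
        // with the respective vendor's JMS ConnectionFactory.
        from("emsJms:queue:ORDERS.IN")
            .to("solaceJms:queue:ORDERS.IN");
    }
}
```

Because both ends speak plain JMS, the route needs no message translation for standard headers and payloads.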
Keep in mind that you are bridging two brokers through another piece of middleware (the Camel exchange); this is bound to add latency and deliver fewer msgs/sec than EMS or Solace alone, especially with persistent messages that need to be ack'd all the way back.
You could use 'forwarding channels' in Replay for Messaging: https://www.tradeweb.com/institutional/services/replayservice/
Replay for Messaging is a cross-provider messaging database and messaging bridge originally developed at CodeStreet and now owned by Tradeweb (note: I work there). The ReplayServer is written in C++, so it is low-latency, and you can quickly set up bridges between TIBCO EMS and Solace from the Web UI, with optional message conversion if needed.
The Replay function can help with testing during the migration process.
I'm using Spring Integration with AMQP-backed messages and I'd prefer to use JSON instead of the default Java serialization for messages. This preference is due in part to serialization exceptions encountered when using Kotlin objects.
While researching the issue, I came across this post:
Spring integration - AMQP backed message channels and message conversion
So it seems the ability to use JSON serialization with AMQP-backed messages has only recently been supported. Moreover, I believe Spring Cloud Stream project provides support for this approach out-of-the-box but I haven't been able to figure out how to achieve something similar with SI.
I came across a post that provides a means to do this channel-by-channel but it seems tedious to configure it this way for each channel when I really just want to use it across the board.
Is there something preventing you from upgrading to 4.3?
<int-amqp:channel id="withEP"
extract-payload="true" message-converter="jackson" />
There's currently no way to globally set options for all channels of a particular type.
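The message-converter="jackson" attribute above refers to a MessageConverter bean by id; a minimal definition, assuming the spring-amqp Jackson converter, would look like:

```xml
<bean id="jackson"
      class="org.springframework.amqp.support.converter.Jackson2JsonMessageConverter" />
```

Each AMQP-backed channel that should use JSON then points its message-converter attribute at this bean.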
Currently we are evaluating Activiti as a possible open-source business process engine. One important requirement is easy integration of external systems (ECM, CRM, SharePoint, SAP, ...) within the processes. During my research I found some articles claiming that there are no built-in connectors to other systems and that the only way to interact with external systems is to invoke Java classes (see http://forums.activiti.org/content/how-create-connector and http://books.google.de/books?id=kMldSaOSgPYC&pg=PA100&lpg=PA100&dq=Bonita+Open+Solution+connectors&source=bl&ots=uwzz5OSten&sig=h2wf0q5J3xAxwN3AZ7Vondemnec&hl=de&sa=X&ei=uwBYUtehHoTqswacrYHgDQ&ved=0CIUBEOgBMAc4Cg#v=onepage&q=Bonita%20Open%20Solution%20connectors&f=false)
How complex is the integration of external systems in Activiti processes? Is it true that there are no built-in connectors? This would be a showstopper criterion for us.
Best regards and thanks for your reply,
Ben
Currently (as of version 5.14) Activiti has direct connections to
Alfresco for document repository
Drools for rule tasks
LDAP for groups and users
Mule for sending messages
Camel for sending/receiving messages
To integrate any other external system you need to use a Java Service Task, where you use Java classes to delegate work from the workflow to your external system. These Java classes can read and set variables from your workflow, can direct execution to one of the task's outgoing flows, and of course can use any capability of your external system.
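A minimal sketch of such a service task delegate, assuming the Activiti 5.x engine API; the variable names and the CRM call are illustrative placeholders for whatever client your external system provides:

```java
import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.JavaDelegate;

// Referenced from BPMN via <serviceTask activiti:class="CrmSyncDelegate" .../>.
public class CrmSyncDelegate implements JavaDelegate {

    @Override
    public void execute(DelegateExecution execution) throws Exception {
        // Read a process variable set earlier in the workflow.
        String customerId = (String) execution.getVariable("customerId");

        // Call your external system here (REST, SOAP, vendor SDK, ...).
        boolean synced = callCrm(customerId); // hypothetical helper

        // Write a result back so later gateways/tasks can branch on it.
        execution.setVariable("crmSynced", synced);
    }

    private boolean callCrm(String customerId) {
        return customerId != null; // stub for illustration only
    }
}
```

The delegate class name is then referenced from the BPMN service task definition, so the process model stays free of integration code.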
We have a system built on MarkLogic, Java / GlassFish. We need some kind of system that could capture thrown exceptions from any of those three subsystems, and then provide a nice web-based reporting interface where exceptions could be viewed, prioritized, marked done. We use JIRA.com in the cloud so if there was any way to integrate with that, it would be nice. Prefer open source or inexpensive.
I'm not sure whether a Java-based system would accommodate our MarkLogic errors, so I believe we need something language-agnostic.
Thanks.
If you are communicating with MarkLogic using a MarkLogic "HTTP appserver" (as opposed to XCC or WebDAV), then you can use the error handler configuration as a choke point for catching unhandled exceptions. I've never tried this, but, in theory, in the error handler, you could make an http request and send them anywhere you want.
See http://docs.marklogic.com/5.0doc/docapp.xqy#display.xqy?fname=http://pubs/5.0doc/xml/dev_guide/appserver-control.xml%2387072
If you are using XCC, then there are other places to put choke points in your Java code.
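For instance, every XCC request submission can be wrapped so unhandled query errors funnel through one reporting path. A sketch, assuming the MarkLogic XCC library (com.marklogic.xcc) is on the classpath; the query and the reportError helper are illustrative:

```java
import com.marklogic.xcc.ContentSource;
import com.marklogic.xcc.Request;
import com.marklogic.xcc.Session;
import com.marklogic.xcc.exceptions.RequestException;

public class XccChokePoint {

    // Submits a query and routes any server-side exception to a reporter.
    static void submitWithReporting(ContentSource contentSource, String xquery) {
        Session session = contentSource.newSession();
        try {
            Request request = session.newAdhocQuery(xquery);
            session.submitRequest(request);
        } catch (RequestException e) {
            // Choke point: forward the MarkLogic error to your tracking system,
            // e.g. an HTTP POST to JIRA's REST API.
            reportError(e); // hypothetical reporter
        } finally {
            session.close();
        }
    }

    private static void reportError(Exception e) {
        System.err.println("MarkLogic error: " + e.getMessage()); // stub
    }
}
```

Centralizing submissions like this keeps the error-reporting integration in one place instead of scattered across the Java layer.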
MarkLogic writes exceptions by default to the Data/Logs/ErrorLog.txt file. Application code within MarkLogic can use xdmp:log or trace() to log messages to the same file. The file can be accessed quite easily through the file system if GlassFish is running on the same host. It can also be exposed through an App Server within MarkLogic with some custom XQuery code.
GlassFish itself appears to be a Java EE platform. I expect it to do logging using something like Log4J. The logging messages in the ErrorLog and the Log4J log will likely not be formatted identically, but basic properties should be there, like date/time, and error message. Log4J logging can be set to write to a log file as well. You could consume it in a similar way as the ErrorLog.
I am not aware of any error reporting web-interface for such logging, but I believe JIRA provides an HTTP API, which can be used to push information into it.