We are currently evaluating Activiti as a possible open-source business process engine. One important requirement is easy integration of external systems (ECM, CRM, SharePoint, SAP...) within the processes. During my research I found some articles claiming that there are no built-in connectors to other systems, and that the only way to interact with external systems is to invoke Java classes (see http://forums.activiti.org/content/how-create-connector and http://books.google.de/books?id=kMldSaOSgPYC&pg=PA100&lpg=PA100&dq=Bonita+Open+Solution+connectors&source=bl&ots=uwzz5OSten&sig=h2wf0q5J3xAxwN3AZ7Vondemnec&hl=de&sa=X&ei=uwBYUtehHoTqswacrYHgDQ&ved=0CIUBEOgBMAc4Cg#v=onepage&q=Bonita%20Open%20Solution%20connectors&f=false)
How complex is the integration of external systems into Activiti processes? Is it true that there are no built-in connectors? That would be a showstopper criterion for us.
Best regards, and thanks for your reply,
Ben
Currently (as of version 5.14) Activiti has direct connections to:
Alfresco for document repository
Drools for rule tasks
LDAP for groups and users
Mule for sending messages
Camel for sending/receiving messages
To integrate any other external system you need to use a Java Service Task, where Java classes act as delegates between the workflow and your external system. These classes can read and set variables from your workflow, can direct execution to one of the task's outgoing flows, and of course can use any capability of your external system. A sketch of such a delegate is shown below.
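For illustration, here is a minimal sketch of such a delegate using Activiti's JavaDelegate interface; the class name, variable names, and the callCrm helper are hypothetical stand-ins for your real integration code:

    import org.activiti.engine.delegate.DelegateExecution;
    import org.activiti.engine.delegate.JavaDelegate;

    // Hypothetical delegate that pushes data from the process to an external CRM.
    public class CrmSyncDelegate implements JavaDelegate {

        @Override
        public void execute(DelegateExecution execution) {
            // Read a process variable set earlier in the workflow.
            String customerId = (String) execution.getVariable("customerId");

            // Call the external system here (REST client, SOAP stub, etc.).
            String ticket = callCrm(customerId);

            // Write the result back so later tasks and gateways can use it.
            execution.setVariable("crmTicket", ticket);
        }

        // Placeholder for the actual integration call.
        private String callCrm(String customerId) {
            return "CRM-" + customerId;
        }
    }

The delegate is then referenced from the BPMN service task via the activiti:class attribute, e.g. <serviceTask id="syncCrm" activiti:class="com.example.CrmSyncDelegate"/>.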
Can anyone explain ESB in detail? I am new to it. Apart from integrating applications, I need to know where an ESB runs and what types of services can be integrated with it. Thanks in advance.
An enterprise service bus (ESB) is a software architecture concept that enables communication among various applications. Instead of having to make each of your applications communicate directly with each other in all their various formats, each application simply communicates with the ESB, which handles transforming and routing the messages to their appropriate destinations.
An ESB provides its fundamental services through an event-driven and standards-based messaging engine (the bus). Thanks to an ESB, integration architects can exploit the value of messaging without writing code. Developers typically implement an ESB using technologies found in a category of middleware infrastructure products, usually based on recognized standards. As with a Service-Oriented Architecture (SOA), an ESB is essentially a collection of enterprise architecture design patterns that is now implemented directly by many enterprise software products.
Moreover, WSO2 ESB is a fast, lightweight, and versatile enterprise service bus. It is 100% open source and released under the Apache License v2.0. Using WSO2 ESB you can apply a variety of enterprise integration patterns, including filtering, transforming, and routing SOAP, binary, plain XML, and text messages that pass through your business systems over HTTP, HTTPS, JMS, mail, etc.
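As a rough illustration of what such routing looks like in practice, here is a minimal sketch of a WSO2 ESB (Apache Synapse) sequence that filters incoming messages on their To address and routes matches to a back-end endpoint; the endpoint URI and the /orders pattern are assumptions:

    <sequence name="main" xmlns="http://ws.apache.org/ns/synapse">
      <in>
        <!-- Route requests whose To address mentions /orders to the orders service. -->
        <filter source="get-property('To')" regex=".*/orders.*">
          <send>
            <endpoint>
              <address uri="http://orders.internal.example/service"/>
            </endpoint>
          </send>
        </filter>
      </in>
      <out>
        <!-- Return back-end responses to the original caller. -->
        <send/>
      </out>
    </sequence>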
Resources: http://soatutorials.blogspot.com/2013/08/10-minute-tutorial-for-extending-wso2.html
I am asking a very basic question here.
My question is:
I am using Apache Sling, Apache Jackrabbit, and Apache Felix in my project, as my instructor told me to. I am trying to understand why this software was developed by Apache. I searched a lot on the internet, but I didn't find any blog post or useful YouTube video that explains all of these projects. Can you explain these projects to me?
Why were these projects developed?
What do they do?
And more questions like this.
Previously I had the same doubt with Apache Hadoop, but the material I found on the net was sufficient to get a feel for that project. This time I am struggling with Sling, Felix, and Jackrabbit.
I will be very thankful to you. Waiting for your kind response.
The combination of Apache Jackrabbit, Apache Sling, and Apache Felix allows you to build web applications.
Apache Jackrabbit is the reference implementation of the JCR API. The JCR API is used to manage content repositories; for example, to manage web content. A content repository is a mix between a file system and a database.
The JCR API is specially made to deal with web content. Why use the JCR API rather than a relational database API? Because URLs are hierarchical, as in a file system, and relational databases don't easily support hierarchical access. Why not use a file system API, then? Because JCR supports transactions, versioning, and a lot of other features that file system APIs don't.
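To make this concrete, here is a minimal sketch of the JCR API in action, using Jackrabbit's TransientRepository; the credentials, node path, and property name are assumptions for illustration:

    import javax.jcr.Node;
    import javax.jcr.Repository;
    import javax.jcr.Session;
    import javax.jcr.SimpleCredentials;
    import org.apache.jackrabbit.core.TransientRepository;

    // Store and read hierarchical content through the JCR API.
    public class JcrExample {
        public static void main(String[] args) throws Exception {
            Repository repository = new TransientRepository(); // local Jackrabbit
            Session session = repository.login(
                    new SimpleCredentials("admin", "admin".toCharArray()));
            try {
                // Hierarchical content, addressable like a URL path.
                Node page = session.getRootNode()
                        .addNode("content").addNode("app").addNode("node");
                page.setProperty("title", "Hello JCR");
                session.save();

                Node read = session.getNode("/content/app/node");
                System.out.println(read.getProperty("title").getString());
            } finally {
                session.logout();
            }
        }
    }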
Apache Sling is a web framework based on the JCR API, and taking advantage of the features provided by the JCR API (15 Minute introduction).
Apache Felix is an OSGi container. It allows you to seamlessly start, stop, and replace components of a web application (JAR files, in a sense) while the web server is running. That means it allows you to change the application without having to restart the server.
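For a feel of what such a component looks like, here is a minimal sketch of an OSGi bundle activator, the lifecycle hooks a container such as Felix calls when a bundle is started or stopped at runtime (the class name and log lines are hypothetical):

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    // Felix invokes these hooks when the bundle is started or stopped,
    // without the server being restarted.
    public class ExampleActivator implements BundleActivator {

        @Override
        public void start(BundleContext context) {
            // Register services, start background work, etc.
            System.out.println("Bundle started");
        }

        @Override
        public void stop(BundleContext context) {
            // Unregister services and clean up.
            System.out.println("Bundle stopped");
        }
    }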
Sling, in very simple terms, could be described as a REST API for JCR. You can use HTTP requests to manage content inside the repository.
Additionally, Sling provides a mechanism to render that content in different ways for web consumption. You can use scripts (JSP, for example) and Java code (servlets, POJOs, etc.) in the Felix container to process requests and deliver a response.
When a request is made for a particular node, Sling looks up a property called sling:resourceType, which is a lookup key for rendering scripts. The appropriate script is then executed using the node as input.
You could write different kinds of renderers and then use them to display your content in different ways.
For example, you could write two scripts full.json.jsp and short.json.jsp and then use them to render the same node in two different ways:
/content/app/node.full.json
OR
/content/app/node.short.json
Sling basically matches tokens in the request URL to select an appropriate script.
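As an illustration, a renderer can also be a Java servlet registered against a resource type, selector, and extension; in this sketch the resource type app/node and the title property are assumptions:

    import java.io.IOException;
    import javax.servlet.ServletException;
    import org.apache.felix.scr.annotations.sling.SlingServlet;
    import org.apache.sling.api.SlingHttpServletRequest;
    import org.apache.sling.api.SlingHttpServletResponse;
    import org.apache.sling.api.resource.ValueMap;
    import org.apache.sling.api.servlets.SlingSafeMethodsServlet;

    // Hypothetical renderer bound to nodes whose sling:resourceType is "app/node";
    // it answers GET requests such as /content/app/node.short.json.
    @SlingServlet(resourceTypes = "app/node", selectors = "short",
                  extensions = "json", methods = "GET")
    public class ShortJsonServlet extends SlingSafeMethodsServlet {

        @Override
        protected void doGet(SlingHttpServletRequest request,
                             SlingHttpServletResponse response)
                throws ServletException, IOException {
            // Read a property of the addressed node (property name assumed).
            ValueMap props = request.getResource().adaptTo(ValueMap.class);
            String title = props == null ? "untitled" : props.get("title", "untitled");

            // Render a minimal JSON view of the resource.
            response.setContentType("application/json");
            response.getWriter().write("{\"title\":\"" + title + "\"}");
        }
    }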
They have a really nice cheat sheet that explains how request resolution and rendering work.
It is a bit more complex than this, since everything is organized into resources and components; you will want to check their site for more info.
I had the same doubts. The best answer I was able to find is on the official Sling page (https://sling.apache.org/):
(What is) Apache Sling, in a hundred words:
Apache Sling is a web framework that uses a Java Content Repository, such as Apache Jackrabbit, to store and manage content.
Sling applications use either scripts or Java servlets, selected based on simple name conventions, to process HTTP requests in a RESTful way.
The embedded Apache Felix OSGi framework and console provide a dynamic runtime environment, where code and content bundles can be loaded, unloaded and reconfigured at runtime.
So, to summarize:
Sling is a web framework --> using Jackrabbit --> based on / supported by the JCR API.
You can think of Apache Felix as a container together with its manager.
Note that Sling started as an internal project at Day Software, which is why some bundles/libraries have names like com.day; in the end, they are two names for the same thing.
Also, if you want to be clear about Jackrabbit and the JCR API, you can visit Jackrabbit's official page: http://jackrabbit.apache.org/jcr/jackrabbit-architecture.html
I am starting research on how to implement Node.js SOA (service oriented architecture) with JSON web-services.
As a small sub-question, I need an approach/framework/system for a universal configuration center covering all of the company's web services, so that we don't configure every application with the exact address of every other application, but instead just ask some central server for that information.
(This should be very well worked-out topic for XML-based services, so some terminology/approaches/etc could/should be borrowed.)
Related to
RESTful JSON based SOA Registry
Service Oriented Architecture suggestions
UPDATE: This question is about web-service configuration & orchestration.
Go for an active framework (one with recent development activity) with a lean architecture. There's one called Geddy and another called Restify. If in doubt, Express can also be used for building web services with JSON.
Whichever of these you use, you can then have the different application codebases read the centrally stored config at startup.
Time and again I am faced with the issue of having multiple environments that must be configured individually for an application that would run in all of them (e.g. QA, regional production env's, dev, staging, etc.) and I am wondering what would be the best way to organize different configurations?
Would it be in the database? Different configuration files per environment? Or maybe the same file with different sections/XML tags? How would these then be deployed? Embedded within the app? Or put in manually after installation, to be modified in place?
This question is not technology-specific - I've worked with .net and Java, web-apps and desktop apps and this issue comes up time and again. I'm looking to learn different approaches to maybe adapt a hybrid to address this.
EDIT: There's one caveat that I must point out - when configuration is part of the deployed solution, it is generally installed under the root user on the host. In large organizations developers usually don't have root access to production hosts, so any change to the configuration requires a new build and deployment. Needless to say, this isn't the nicest approach - especially at organizations that have a very strict release process involving multiple teams and approval levels... (sigh, I know!)
Borrowed from Jez Humble and David Farley's book "Continuous Delivery" (page 41), configuration can be supplied at several points:
Your build scripts can pull configuration in and incorporate it into your binaries at build time.
Your packaging software can inject configuration at packaging time, such as when creating assemblies, ears, or gems.
Your deployment scripts or installers can fetch the necessary information, or ask the user for it, and pass it to your application at deployment time as part of the installation process.
Your application itself can fetch configuration at startup time or run time.
They consider it bad practice to inject configuration at build and compile time, because you should be able to deploy the same binary file to every environment.
My experience is that you can bake the configuration files for every environment (except sensitive information) into your deployment artifact (war, jar, zip, etc.), and design your application to take an extra parameter at startup to pick up the right set of configuration files (from your extracted deployment artifact, or from the local/remote file system if they are sensitive, or from a database) during the application's startup. A sketch of this is shown below.
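Here is a minimal sketch of that startup-time selection; the file layout under /config and the "env" parameter name are assumptions:

    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Properties;

    // Pick the bundled properties file named after an "env" parameter
    // passed at startup, e.g.  java -Denv=prod -jar app.jar
    public class AppConfig {

        public static Properties load() throws IOException {
            String env = System.getProperty("env", "dev");
            Properties props = new Properties();
            try (InputStream in = AppConfig.class
                    .getResourceAsStream("/config/" + env + ".properties")) {
                if (in == null) {
                    throw new IllegalStateException("No config bundled for env: " + env);
                }
                props.load(in);
            }
            return props;
        }
    }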
The question is difficult to answer because it's somewhat vague. There is no technology-agnostic approach to configuration as far as I know. Exactly how configuration is set up will depend on the language/technology in question.
I'm not familiar with .NET, but with Java a popular approach is to set up a Maven build with different profiles. Each profile is specific to an environment. You can then define different properties files that have environment-specific values; an example from the above link is:
environment.properties - This is the default configuration and will be packaged in the artifact by default.
environment.test.properties - This is the variant for the test environment.
environment.prod.properties - This is basically the same as the test variant and will be used in the production environment.
You can then build your project as follows:
mvn -Pprod package
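For reference, a minimal sketch of how such profiles might be declared in the pom.xml; the profile ids and property name are assumptions, and the wiring that copies the chosen file into the artifact (e.g. via resource filtering) is omitted:

    <!-- Minimal sketch: one Maven profile per environment (ids assumed). -->
    <profiles>
      <profile>
        <id>test</id>
        <properties>
          <env>test</env>
        </properties>
      </profile>
      <profile>
        <id>prod</id>
        <properties>
          <env>prod</env>
        </properties>
      </profile>
    </profiles>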
I have good news and bad news.
The good news is that Config4* (of which I am the maintainer) neatly addresses this issue with its support for adaptive configuration. Basically, this is the ability for a configuration file to adapt itself to the environment (including hostname, username, environment variables, and command-line options) in which it is running. Read Chapter 2 of the "Getting Started" manual for details. Don't worry: it is a short chapter.
The bad news is that, currently, Config4* implementations exist only for C++ and Java, so your .NET applications are out of luck. And even with C++ and Java applications, it won't make pragmatic sense to retrofit Config4* into an existing application. Because of this, I'd advise using Config4* only in new applications.
Despite the bad news, I think it is worth your while to read the above-mentioned chapter of the Config4* documentation, because doing so may provide you with ideas that you can adapt to fit your needs.
I'm currently developing an ETL solution which, for various reasons, include SSIS components as well as J2EE services.
I need the various components to communicate asynchronously via message queues. However, the constraint is that SSIS integrates only with MSMQ, while it obviously makes sense to use JMS on the Java side.
I have considered the MSMQ/MQSeries Bridge (we use WebSphere MQ internally), but I feel this adds another layer of complexity to the solution.
I now wonder whether there is a simpler solution to achieve cross-platform messaging. The purpose of the messaging approach is really to implement transfer of control between components, rather than pass data. Each component, whether it's a SSIS package or a J2EE service, will read/write from the same underlying database so I wonder if I'm better off just implementing a polling mechanism on either side. Suggestions are welcome.
Christophe.
Depending on your needs, you could write your own bridge to move messages between MSMQ and WMQ. We have done this pretty easily using .NET and the IBM XMS libraries.
http://www-01.ibm.com/support/docview.wss?rs=171&uid=swg24011756&loc=en_US&cs=utf-8&lang=en
You could use an ESB instead of JMS and use the Web Service task in SSIS to connect to and from the ESB via SOAP.
If all you need in the J2EE->SSIS channel is the ability to start an SSIS package from J2EE, I think the simplest solution is to configure a SQL Server Agent job that runs the package, and then invoke the sp_start_job stored procedure from Java - that should be way easier, with fewer additional components involved. For example:
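A minimal sketch of that call from Java via JDBC; the connection string, credentials, and job name are assumptions:

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;

    // Start the SQL Server Agent job that wraps the SSIS package.
    public class StartSsisJob {

        public static void main(String[] args) throws Exception {
            String url = "jdbc:sqlserver://dbhost;databaseName=msdb";
            try (Connection con = DriverManager.getConnection(url, "user", "password");
                 CallableStatement cs = con.prepareCall("{call dbo.sp_start_job(?)}")) {
                cs.setString(1, "Run ETL Package"); // Agent job name (assumed)
                cs.execute();
            }
        }
    }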
I'm not sure what the best way is to call in the SSIS->J2EE direction.