I have pulled the latest Camunda image and am running Camunda in its own Docker container. I have a DMN uploaded to Camunda Cockpit and am able to make REST calls to get data from the decision table I uploaded.
However, I do not want to depend on Camunda running independently. I have an existing, large application (a microservice running in its own Docker container) and I want to embed Camunda into my microservice (which uses OSGi, Java, Docker, Maven, etc.).
Can someone please guide me with this?
For a Spring Boot microservice you can add the required starter and configuration files to your deployment and should be good to go. See e.g. https://start.camunda.com/ to get everything you need.
You can then access Camunda via the Java API or via REST (if the REST starter was included).
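For example, the starter can be pulled in with a Maven dependency along these lines (the version number is only illustrative; use the one matching your Camunda installation):

```xml
<!-- Camunda Spring Boot starter including the embedded REST API -->
<dependency>
    <groupId>org.camunda.bpm.springboot</groupId>
    <artifactId>camunda-bpm-spring-boot-starter-rest</artifactId>
    <version>7.18.0</version> <!-- illustrative version -->
</dependency>
```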
If you do not run in a Spring Boot environment, the way of bootstrapping Camunda differs. In plain Java, without any container, it would be along these lines:
// Build an embedded engine backed by a file-based H2 database
ProcessEngine processEngine = ProcessEngineConfiguration
    .createStandaloneProcessEngineConfiguration()
    .setJdbcUrl("jdbc:h2:./camunda-db/process-engine;DB_CLOSE_DELAY=1000")
    .setDatabaseSchemaUpdate("true")   // create/update the schema automatically
    .setJobExecutorActivate(true)      // enable asynchronous job processing
    .buildProcessEngine();

// Deploy a process definition from the classpath and start an instance
processEngine.getRepositoryService()
    .createDeployment()
    .addClasspathResource("myProcess.bpmn")
    .deploy();

ProcessInstance pi = processEngine.getRuntimeService()
    .startProcessInstanceByKey("myProcess");
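Since the original question is about a DMN decision table, here is a sketch of deploying and evaluating one through the embedded engine's Java API. The decision key "myDecision" and the input variable name are assumptions, not taken from the original post; adjust them to your DMN model.

```java
import java.util.HashMap;
import java.util.Map;
import org.camunda.bpm.dmn.engine.DmnDecisionTableResult;

// Deploy the DMN alongside the BPMN resources
processEngine.getRepositoryService()
    .createDeployment()
    .addClasspathResource("myDecision.dmn") // assumed file name
    .deploy();

// Evaluate the decision table by its key with some input variables
Map<String, Object> variables = new HashMap<>();
variables.put("input", "someValue"); // input name as defined in the decision table

DmnDecisionTableResult result = processEngine.getDecisionService()
    .evaluateDecisionTableByKey("myDecision", variables);
System.out.println(result.getResultList());
```

This replaces the REST call against the standalone container with a direct in-process call.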
In a standard Spring environment you would bootstrap the engine by loading the context:
ClassPathXmlApplicationContext applicationContext =
    new ClassPathXmlApplicationContext("/spring-context.xml");
ProcessEngine processEngine = (ProcessEngine) applicationContext.getBean("processEngine");
processEngine.getRepositoryService()
.createDeployment()
.addClasspathResource("myProcess.bpmn")
.deploy();
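A minimal spring-context.xml for the snippet above might look like this (a sketch following Camunda's Spring integration; the H2 datasource setup is an assumption, and the beans go inside the usual <beans> root element):

```xml
<bean id="dataSource" class="org.springframework.jdbc.datasource.SimpleDriverDataSource">
    <property name="driverClass" value="org.h2.Driver"/>
    <property name="url" value="jdbc:h2:./camunda-db/process-engine"/>
</bean>

<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <property name="dataSource" ref="dataSource"/>
</bean>

<bean id="processEngineConfiguration"
      class="org.camunda.bpm.engine.spring.SpringProcessEngineConfiguration">
    <property name="dataSource" ref="dataSource"/>
    <property name="transactionManager" ref="transactionManager"/>
    <property name="databaseSchemaUpdate" value="true"/>
    <property name="jobExecutorActivate" value="true"/>
</bean>

<bean id="processEngine" class="org.camunda.bpm.engine.spring.ProcessEngineFactoryBean">
    <property name="processEngineConfiguration" ref="processEngineConfiguration"/>
</bean>
```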
Also see:
https://docs.camunda.org/manual/latest/user-guide/process-engine/process-engine-bootstrapping/
https://docs.camunda.org/get-started/quick-start/install/
Update based on comment:
The Camunda OSGI support is described here:
https://github.com/camunda/camunda-bpm-platform-osgi
You would need to upgrade the project to a more recent version, which is likely not a huge effort, as the versions have remained compatible.
(I would also encourage you to consider migrating the microservice to Spring Boot instead, for reasons of complexity, available knowledge in the market, support lifetime, etc.)
Related
Let's say I have two sets of data: one for production and another for development.
Currently I just manually comment and uncomment lines in data.sql.
How can I separate the dev and prod environments for data.sql?
You can override the location and/or the name of the files Spring Boot uses to create your schema and load your data in application*.properties (or .yml).
So you can have:
application-dev.properties with:
# "Old" Spring Boot versions (before 2.5)
spring.datasource.schema=classpath:schema-dev.sql
spring.datasource.data=classpath:data-dev.sql
# Spring Boot 2.5 and later
spring.sql.init.schema-locations=classpath:schema-dev.sql
spring.sql.init.data-locations=classpath:data-dev.sql
application-prod.properties with:
# "Old" Spring Boot versions (before 2.5)
spring.datasource.schema=classpath:schema-prod.sql
spring.datasource.data=classpath:data-prod.sql
# Spring Boot 2.5 and later
spring.sql.init.schema-locations=classpath:schema-prod.sql
spring.sql.init.data-locations=classpath:data-prod.sql
And then you can use Spring profiles as usual to select one configuration or the other.
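One profile or the other can be activated, for example, in the base application.properties or from the command line (the profile names match the file suffixes above):

```properties
# in the base application.properties; alternatively pass
# --spring.profiles.active=prod (or -Dspring.profiles.active=prod) at startup
spring.profiles.active=dev
```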
Notes:
The "old" form still applies to fairly recent versions: the spring.datasource.schema and spring.datasource.data properties were only replaced by the spring.sql.init.* properties in Spring Boot 2.5.
Also, you don't have to use the schema properties if you don't need them; I just included them to complete the answer.
I'm trying to expose a Hazelcast cache with an embedded setup in my service. It works fine locally, and it is even able to add members if I run the application on multiple ports. Now the same service is deployed in OCP (OpenShift) with 2 instances, and the Hazelcast members are not joining each other, so the cache is not updated across the pods. Below is the config code I used for Hazelcast.
@Bean
public Config hazelcastConfig() {
    return new Config().setInstanceName("hazelcast-instance")
        .addMapConfig(new MapConfig().setName("mycache")
            .setMaxSizeConfig(new MaxSizeConfig(300, MaxSizeConfig.MaxSizePolicy.FREE_HEAP_SIZE))
            .setEvictionPolicy(EvictionPolicy.LRU)
            .setTimeToLiveSeconds(2000));
}
Please let me know if any additional configuration needs to be added so that the members can form a cluster in OpenShift.
Please read the following resources:
Hazelcast Kubernetes/OpenShift plugin documentation
Hazelcast Kubernetes Code Samples
Hazelcast OpenShift Code Sample
This should solve all your issues. The simplest configuration you need to add (assuming you have set up the RBAC permissions) is the following:
config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
config.getNetworkConfig().getJoin().getKubernetesConfig().setEnabled(true);
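Putting that together with the map configuration from the question, the bean might look like this. This is a sketch against the Hazelcast 3.x API used in the question and assumes the hazelcast-kubernetes discovery plugin is on the classpath:

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.EvictionPolicy;
import com.hazelcast.config.MapConfig;
import com.hazelcast.config.MaxSizeConfig;

@Bean
public Config hazelcastConfig() {
    Config config = new Config().setInstanceName("hazelcast-instance")
        .addMapConfig(new MapConfig().setName("mycache")
            .setMaxSizeConfig(new MaxSizeConfig(300, MaxSizeConfig.MaxSizePolicy.FREE_HEAP_SIZE))
            .setEvictionPolicy(EvictionPolicy.LRU)
            .setTimeToLiveSeconds(2000));
    // Multicast does not work across pods, so disable it and
    // enable the Kubernetes discovery mechanism instead
    config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
    config.getNetworkConfig().getJoin().getKubernetesConfig().setEnabled(true);
    return config;
}
```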
I am using Prosody and want to use stream management, but I am running into some issues.
How can I ensure that stream management is enabled on Prosody? Is there a command to test it from the terminal?
I also tried adding the mod_smacks.lua module to the modules directory, but I don't know how to enable it on the server.
I am using XMPPFramework as the chat client on iOS. It already has a method to check whether stream management is supported, but so far it always returns false.
Please help me out to enable stream management in prosody.
After you have added mod_smacks.lua to /usr/lib/prosody/modules/, add
"smacks";
to your
modules_enabled = {
...
}
in your /etc/prosody/prosody.cfg.lua if you want the module to be loaded every time Prosody starts.
Then restart prosody.
Prosodyctl does not show loaded modules.
You can check if the module is loaded via ad-hoc commands (or telnet if activated). You can even load and unload modules via ad-hoc/telnet.
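For example, with the telnet console enabled (mod_admin_telnet, which listens on port 5582 by default), a session could look like this. The hostname example.com is an assumption; use your own virtual host:

```
$ telnet localhost 5582
> module:load("smacks", "example.com")
> module:unload("smacks", "example.com")
```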
You can get more information about mod_smacks here.
We are building a large web app as several WAR files. Each WAR file is a Spring Boot application. For development and testing, we can run these WAR files independently. But in production, we want to run all of the WAR files together under one instance of Jetty (9.x).
The question we have is, what is the best way to deal with externalized configuration in this scenario? Each WAR file has its own set of configuration files (application.properties and others) that it needs. How can we set things up so that each WAR file reads its own configuration files and ignores the rest?
You can use spring.config.name and spring.config.location to give each application a distinct name and/or location for its external configuration files. I'd set these properties in the configure method that you've overridden in your SpringBootServletInitializer subclass.
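As a sketch, that could look like the following; the class names are illustrative, and the import path of SpringBootServletInitializer varies slightly between Spring Boot versions:

```java
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.boot.web.servlet.support.SpringBootServletInitializer;

// Gives this WAR its own config file name, so it reads app1.properties / app1.yml
// instead of the default application.properties shared by all WARs
public class App1ServletInitializer extends SpringBootServletInitializer {

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder builder) {
        return builder.sources(App1Application.class) // your @SpringBootApplication class
                      .properties("spring.config.name=app1");
    }
}
```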
Another option that might work out better is to use the @PropertySources annotation on the @SpringBootApplication class of each Spring Boot application.
For example, you can rename application.properties for each application: app1.properties, app2.properties, and so on.
Then you can start up Jetty providing a common configuration folder:
-Dapplication.home=C:/apphome
And in each @SpringBootApplication class, add a @PropertySources annotation that looks like this:
@SpringBootApplication
@PropertySources({
    @PropertySource("classpath:app1.properties"),
    @PropertySource(value = "file:${application.home}/app1/app1.properties", ignoreResourceNotFound = true)
})
public class App1Config {
    ...
}
In development, the app#.properties on the classpath will be read. In production, when you define application.home, ${application.home}/app#/app#.properties will override the one on the classpath.
I am working on a notification service using the IBM MQ messaging provider in a JBoss EAP 6.1 environment. I am successfully able to send messages via the MQ JCA resource adapter, i.e. the wmq.jmsra.rar file. However, on the consumer side my current configuration looks like this:
@MessageDriven(
    activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destination", propertyValue = "F2.QUEUE"),
        @ActivationConfigProperty(propertyName = "providerAdapterJNDI", propertyValue = "java:jboss/jms/TopicFactory"),
        @ActivationConfigProperty(propertyName = "queueManager", propertyValue = "TOPIC.MANAGER"),
        @ActivationConfigProperty(propertyName = "hostName", propertyValue = "10.239.217.242"),
        @ActivationConfigProperty(propertyName = "userName", propertyValue = "root"),
        @ActivationConfigProperty(propertyName = "channel", propertyValue = "TOPIC.CHANNEL"),
        @ActivationConfigProperty(propertyName = "port", propertyValue = "1422")
    })
My problem is that the consumer of this service does not want to put port numbers, hostName, or queueManager properties in these beans. They also do not want to use ejb-jar.xml to externalize these configs. I have researched and found that we can add a domain IBM Message Driven Bean, but with no success. Any suggestions on how I can externalize all of these configurations?
EDIT: The JCA resource adapter is deployed at the consumer end, if that makes things any easier.
Thanks
You can actually externalize an MDB's activation spec properties to the server configuration file.
Create the ejb-jar.xml file, but do not put the actual value in the file, use a property placeholder:
<activation-config-property>
<activation-config-property-name>hostName</activation-config-property-name>
<activation-config-property-value>${wmq.host}</activation-config-property-value>
</activation-config-property>
Do this for all of the desired properties.
Ensure that property replacement for Java EE spec files (ejb-jar.xml, in this case) is enabled in the server configuration file:
<subsystem xmlns="urn:jboss:domain:ee:1.2">
    <spec-descriptor-property-replacement>true</spec-descriptor-property-replacement>
    ...
</subsystem>
Then, in the server configuration file, provide values for your properties:
<system-properties>
    <property name="wmq.host" value="10.0.0.150"/>
    ...
</system-properties>
Once your MDBs are packaged, you will not need to change any of the files in the MDB jar - just provide the properties in the server configuration.
You can avoid adding the host name, port number, and so on in the MDB; you only need to define destinationType in the MDB. The rest you can configure in your application server, such as the activation specification, queues, and queue connection factories.
I have done the same thing, but I used IBM WebSphere Application Server.
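As a sketch, the MDB then only declares the destination type, and the activation specification configured in the application server supplies the connection details. The class name is illustrative:

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue")
})
public class NotificationConsumer implements MessageListener {

    @Override
    public void onMessage(Message message) {
        // host, port, channel and queue manager are not declared here;
        // they come from the activation specification in the server config
    }
}
```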