Using Apache Flume Kafka sink on Cygnus - fiware

I want to use the Kafka sink that ships with Apache Flume. However, it seems that the cygnus-common package is based on Apache Flume 1.4, whereas the Kafka sink was added around Apache Flume 1.7. If I add the Kafka sink jar to the cygnus-common library, it fails because the cygnus-common core is outdated.
So is it possible to use the latest Flume version (1.8) with the cygnus-ngsi libraries instead of cygnus-common? In that case I could switch between the Apache Flume Kafka sink and the NGSIKafkaSink.

I am Andrés from the UPM FIWARE team. You are right to point out that FIWARE Cygnus is based on Apache Flume 1.4, so, answering your question, there is a high probability that sinks introduced in a later Flume version will not be compatible with Cygnus. We are working on updating Cygnus to use the latest Flume release and also to handle NGSIv2 notifications natively, but it is a work in progress and will take time. I think that currently the only way to work with Cygnus and a Kafka sink is to use the GE provided by FIWARE in its corresponding versions.
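For reference, this is roughly what a stock Flume 1.7+ agent configuration for the Kafka sink looks like (agent, channel, topic, and broker names here are placeholders); it is this sink class that the Flume 1.4 core bundled with cygnus-common cannot load:

```properties
# Hypothetical agent/channel names; broker address and topic are placeholders.
a1.channels = c1
a1.sinks = k1
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.channel = c1
a1.sinks.k1.kafka.bootstrap.servers = localhost:9092
a1.sinks.k1.kafka.topic = notifications
a1.sinks.k1.kafka.flumeBatchSize = 20
```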

JMeter with the latest MySQL driver returns an error

I am trying to connect JMeter to MySQL but I am getting an error.
Here is my config:
Windows 2019 Java 11
MySQL version 8.0.30-commercial
mysql-connector-java-8.0.30.jar placed as required in jmeter lib folder
Jmeter 5.4.3
When I try to start the JMeter test plan, an error appears: java.sql.SQLException: No suitable driver
What driver should I use?
I can't tell "what driver should you use" from this alone, but provided that you restart JMeter to pick up the .jar, and properly set up the driver class name, the JDBC URL, and the validation query, you should be good to go with your setup.
I can tell for sure that the JMeter version you're using violates JMeter Best Practices; you should upgrade to JMeter 5.5 (or whatever the latest JMeter version available on the JMeter Downloads page is).
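As a sketch, the fields of JMeter's JDBC Connection Configuration element for MySQL Connector/J 8.x would typically look like this (host, database, and credentials are placeholders):

```properties
# JDBC Connection Configuration (all values are placeholders)
Database URL:      jdbc:mysql://localhost:3306/testdb
JDBC Driver class: com.mysql.cj.jdbc.Driver
Username:          jmeter_user
Password:          secret
Validation query:  SELECT 1
```

Note that Connector/J 8.x renamed the driver class from com.mysql.jdbc.Driver to com.mysql.cj.jdbc.Driver; the old name still works but is deprecated, and "No suitable driver" typically means the class name or URL scheme doesn't match any driver on the classpath.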

How to exclude the org.json version in MULE 3.9.0 buildpath?

How to exclude the org.json version in MULE 3.9.0 buildpath?
The version is json-20140107 and I want to use the latest json version. I tried to exclude it in mule-commons but that did not help.
The json-20140107.jar library is distributed with Mule 3.9.0. Because of how Mule implements classloading, at execution time the classes loaded from that jar will override any newer version in your application. You should not change the provided version, because Mule was tested against it; replacing any provided library in the distribution can cause unexpected errors.
You could package a new version of the library and try to use Fine Grain Classloader Control; however, that appears to be an Enterprise Edition feature not available in the Community Edition. If you have the Enterprise Edition, it is highly recommended to use the latest patch version (currently 3.9.4) instead of 3.9.0.
Another solution would be to migrate to Mule 4.x, which uses classloader isolation to avoid this kind of issue: you can use any version of a library inside an application without conflicting with the libraries provided by the runtime. Again, using the latest version available is the recommended way to go. Note that Mule 3 applications are not compatible with Mule 4, so you will need to migrate existing applications.
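For illustration only (the version number is just an example), in a Mule 4 application you could simply declare the newer artifact in the application's pom.xml, and the runtime's own copy would not interfere thanks to classloader isolation:

```xml
<!-- Mule 4 app pom.xml: classloader isolation keeps this copy
     separate from any org.json version shipped with the runtime. -->
<dependency>
    <groupId>org.json</groupId>
    <artifactId>json</artifactId>
    <version>20230227</version>
</dependency>
```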

Set up SnappyData with custom Spark and Scala 2.11

I have read through the documentation but can't find answer for the following questions:
I would prefer to set it up on an already running Spark cluster (i.e. add a jar so I can use SnappyContext). Is that possible, or is it mandatory to use the bundled Spark? If possible, please advise: SPARK_HOME seems to be set at runtime by the launchers.
Where should JAVA_HOME be defined? For now I set it in bin/spark-class on all Snappy server nodes.
Can SnappyData be built with Scala 2.11?
Appreciated,
Saif
Right now we don't have support for running Snappy with stock Spark, but we are working towards it. For the moment you can use the Snappy version of Spark.
For Q2, you can set JAVA_HOME on the command line before starting the Snappy servers.
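A minimal sketch of that approach, assuming a typical JDK install path (adjust to your system) and the usual SnappyData launcher location:

```shell
# Export JAVA_HOME in the shell before launching the servers,
# instead of editing bin/spark-class on every node.
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk   # placeholder JDK path
export PATH="$JAVA_HOME/bin:$PATH"
# ./sbin/snappy-start-all.sh                   # hypothetical launcher path
```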
We have not tested Snappy with Scala 2.11, so we are not sure what issues may arise.

Can a Tomcat7/Java/Mysql/Ant application run on Openshift? If so, how to install the jdbc driver?

I'm trying to install OpenGTS on Red Hat's Openshift cloud platform.
OpenGTS is a Java/Tomcat 7/MySQL/Ant application, so I created a JBoss EWS app on OpenShift, installed the standard MySQL cartridge, and an Ant cartridge I found online.
Our application does not have to be scalable, so that's what I chose.
I added a call to Ant in OpenShift's build hook.
So far, however, it has been impossible to install the JDBC driver:
As I'm using Ant, I deleted the pom.xml used by OpenShift's standard Maven build.
Nor is there a standalone.xml in JBoss EWS (there is one for JBoss EAP).
Java's lib/ext directories are not accessible on OpenShift.
So I copied the JDBC driver jar into $OPENSHIFT_DATA_DIR,
but nevertheless, when started, JBoss EWS complains that it cannot find a suitable JDBC driver for MySQL.
Is it even possible to run OpenGTS on Openshift?
This is a comment on the question above: we are following this link.
Here is the link: https://blog.openshift.com/jndi-tomcat-configuration-howto/
And also the catalina.properties file:
common.loader=${catalina.base}/lib,${catalina.base}/lib/*.jar,${catalina.home}/lib,${catalina.home}/lib/*.jar,${catalina.home}/../app-root/data/*.jar
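With the driver jar reachable through the common loader as above, the JNDI datasource described in that link would typically be declared in the application's context.xml along these lines (resource name, database, and credentials are all placeholders):

```xml
<!-- META-INF/context.xml - names and credentials are placeholders.
     maxActive/maxIdle are the Tomcat 7 (DBCP 1.x) pool attribute names. -->
<Context>
  <Resource name="jdbc/opengts" auth="Container"
            type="javax.sql.DataSource"
            driverClassName="com.mysql.jdbc.Driver"
            url="jdbc:mysql://localhost:3306/gts"
            username="gts_user" password="secret"
            maxActive="20" maxIdle="5"/>
</Context>
```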

CAS (Jasig) using an Oracle data source on Windows

Is there any way to configure CAS to use an Oracle data source on Windows?
My company is going to implement single sign-on (and single logout). I've been trying for 5 days and have not yet been successful.
I've tried several ways. I use:
Tomcat 7.0
Java 1.7
cas server 3.5.2
I downloaded CAS server 3.5.2,
extracted cas-server-3.5.2,
and copied cas-server-webapp-3.5.2.war into the Tomcat webapps folder.
CAS was successfully deployed and can be accessed.
After that I am stuck: how do I make the authentication process use the Oracle database?
Sorry, my English is not good.
Thank you.
There are a few things you need to do:
First of all, if you haven't already downloaded and included the Oracle JDBC Driver library (http://www.oracle.com/technetwork/database/features/jdbc/index-091264.html), you have to do that.
Secondly, you need to configure the JDBC plugin: https://wiki.jasig.org/display/CASUM/JDBC
These instructions (and the preferred method of customizing and building CAS) use the Maven WAR overlay method. This has the advantage that you don't deploy lots of unnecessary libraries that you aren't using.
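As a sketch of what the deployerConfigContext.xml wiring from that wiki page looks like for Oracle (connection details, table, and column names below are placeholders, and you should check the password encoding against your schema):

```xml
<!-- deployerConfigContext.xml fragment; all URLs, names, and credentials
     are placeholders for your environment. -->
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
  <property name="driverClassName" value="oracle.jdbc.OracleDriver"/>
  <property name="url" value="jdbc:oracle:thin:@localhost:1521:XE"/>
  <property name="username" value="cas_user"/>
  <property name="password" value="secret"/>
</bean>

<bean class="org.jasig.cas.adaptors.jdbc.QueryDatabaseAuthenticationHandler">
  <property name="dataSource" ref="dataSource"/>
  <property name="sql"
            value="SELECT password FROM app_users WHERE username = ?"/>
</bean>
```

The handler bean is then listed among the authentication handlers so CAS checks submitted credentials against that query's result.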