The CAS management webapp does not start - JSON

Tomcat will not start cas-management 6.0.1.
I do not understand why; I have put the JSON service file in place.
I am not using HTTPS.
What do I need to check?
In cas-management.properties
cas.server.name=http://192.168.0.112:8443
cas.server.prefix=${cas.server.name}/cas
mgmt.serverName=http://192.168.0.112:8443
mgmt.serverName=http://192.168.0.112
server.context-path=/cas-management
server.port=8443
mgmt.adminRoles[0]=ROLE_ADMIN
logging.config=file:/etc/cas/config/log4j2-management.xml
cas.serviceRegistry.json.location=file:/etc/cas/services
cas.authn.attributeRepository.stub.attributes.cn=cn
cas.authn.attributeRepository.stub.attributes.displayName=displayName
cas.authn.attributeRepository.stub.attributes.givenName=givenName
cas.authn.attributeRepository.stub.attributes.mail=mail
cas.authn.attributeRepository.stub.attributes.sn=sn
cas.authn.attributeRepository.stub.attributes.uid=uid
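Two things stand out in this block: mgmt.serverName is declared twice (only the last value takes effect), and cas.server.name pairs http:// with port 8443, which is normally the HTTPS port, even though you say you are not using HTTPS. A hedged sketch of the relevant lines, assuming Tomcat really does serve plain HTTP on 8443 (adjust to your deployment):

# Sketch only - host and port copied from the question
cas.server.name=http://192.168.0.112:8443
cas.server.prefix=${cas.server.name}/cas

# Keep a single mgmt.serverName, matching the URL the browser uses for the management webapp
mgmt.serverName=http://192.168.0.112:8443

# If this webapp is Spring Boot 2 based, the context path property may need to be
# server.servlet.context-path instead of server.context-path
server.context-path=/cas-management
server.port=8443

The remaining properties can stay as they are.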
In the file /etc/cas/services/http_cas_management-1560930209.json
{
  "@class" : "org.apereo.cas.services.RegexRegisteredService",
  "serviceId" : "^http://192.168.0.112/cas-management/.*",
  "name" : "CAS Services Management",
  "id" : 1560930209,
  "description" : "CAS services management webapp",
  "evaluationOrder" : 5500
  "allowedAttributes":["cn","mail"]
}
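Two problems are visible in this file. There is no comma after "evaluationOrder" : 5500, so the JSON is invalid and the registry cannot parse it, and allowedAttributes is not a top-level field of a registered service; attribute release is normally expressed through an attributeReleasePolicy block. A hedged, corrected sketch (type key and policy class as used by the Apereo JSON service registry):

{
  "@class" : "org.apereo.cas.services.RegexRegisteredService",
  "serviceId" : "^http://192.168.0.112/cas-management/.*",
  "name" : "CAS Services Management",
  "id" : 1560930209,
  "description" : "CAS services management webapp",
  "evaluationOrder" : 5500,
  "attributeReleasePolicy" : {
    "@class" : "org.apereo.cas.services.ReturnAllowedAttributeReleasePolicy",
    "allowedAttributes" : [ "java.util.ArrayList", [ "cn", "mail" ] ]
  }
}

The CAS or Tomcat logs should show the exact parse error if the registry still refuses to load the file.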
Thanks,
Best regards

Related

Tungsten Replicator 4.0 Installation

I am trying to install Tungsten Replicator 4.0 for MySQL 5.7. I used the binary installation (tar.gz) of MySQL 5.7 and exported the path in bash_profile.
The ./tools/tpm install command executed successfully, but both the master and slave services are offline. We are getting the error below from the services and status commands.
[tungsten@beta-388 tungsten-replicator-4.0.0-18]$ /opt/continuent//tungsten/tungsten-replicator/bin/trepctl services
Processing services command...
NAME VALUE
---- -----
appliedLastSeqno: -1
appliedLatency : -1.0
role : master
serviceName : beta182_183
serviceType : unknown
started : true
state : OFFLINE:ERROR
NAME VALUE
---- -----
appliedLastSeqno: Unknown
appliedLatency : Unknown
role : Unknown
serviceName : beta183_182
serviceType : Unknown
started : false
state : Unknown
Finished services command...
[tungsten@beta-388 tungsten-replicator-4.0.0-18]$ /opt/continuent//tungsten/tungsten-replicator/bin/trepctl -service beta182_183 status
Processing status command...
NAME VALUE
---- -----
appliedLastEventId : NONE
appliedLastSeqno : -1
appliedLatency : -1.0
autoRecoveryEnabled : true
autoRecoveryTotal : 0
channels : -1
clusterName : beta182_183
currentEventId : NONE
currentTimeMillis : 1579684465335
dataServerHost : beta-388.panterranetworks.net
extensions :
host : beta-388.panterranetworks.net
latestEpochNumber : -1
masterConnectUri : thls://localhost:/
masterListenUri : thls://beta-388.panterranetworks.net:12120/
maximumStoredSeqNo : -1
minimumStoredSeqNo : -1
offlineRequests : NONE
pendingError : Replicator unable to go online due to error
pendingErrorCode : NONE
pendingErrorEventId : NONE
pendingErrorSeqno : -1
pendingExceptionMessage: Unable to prepare plugin: class name=com.continuent.tungsten.replicator.thl.THL message=[Error while attempting to acquire file lock: /opt/continuent/thl/beta182_183/disklog.lck]
pipelineSource : UNKNOWN
relativeLatency : -1.0
resourcePrecedence : 99
rmiPort : 10110
role : master
seqnoType : java.lang.Long
serviceName : beta182_183
serviceType : unknown
simpleServiceName : beta182_183
siteName : default
sourceId : beta-388.panterranetworks.net
state : OFFLINE:ERROR
timeInStateSeconds : 15.744
timezone : GMT
transitioningTo :
uptimeSeconds : 15.931
useSSLConnection : true
version : Tungsten Replicator 4.0.0 build 18
Finished status command...
Can anyone please share how to resolve this error? I have tried multiple times, granting full permissions to the file, and have also uninstalled and reinstalled Tungsten.
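Before reinstalling yet again, it may be worth checking the lock named in pendingExceptionMessage: the message about /opt/continuent/thl/beta182_183/disklog.lck usually means another replicator process (possibly one left over from a previous install) still holds the THL disk log. A hedged diagnostic sketch, using the paths from the output above:

# See whether any process still holds the THL lock file
sudo lsof /opt/continuent/thl/beta182_183/disklog.lck

# Stop the replicator cleanly (repeat if a second, stray instance is running)
/opt/continuent/tungsten/tungsten-replicator/bin/replicator stop

# With no replicator running, a stale lock file can be removed
rm /opt/continuent/thl/beta182_183/disklog.lck

# Restart and try to bring the service online again
/opt/continuent/tungsten/tungsten-replicator/bin/replicator start
/opt/continuent/tungsten/tungsten-replicator/bin/trepctl -service beta182_183 online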
MySQL 5.7 is not supported by Tungsten Replicator 4.0:
https://docs.continuent.com/tungsten-replicator-4.0/introduction.html
Please also share your replicator configuration.
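One way to capture that configuration, assuming the staging directory you originally installed from is still available, is tpm's reverse output, which prints the settings the installer applied:

# Run from the same staging directory used for ./tools/tpm install (a sketch)
./tools/tpm reverse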

Kafka JDBC Sink Connector Not Consuming Events from Remote Cluster

Standalone Kafka JDBC sink connector not inserting data into a MySQL database (using confluent-community-2.12)
I've installed confluent-community-2.12 on CentOS 7 and started the standalone JDBC sink connector to consume records from a remote Kafka cluster. I can consume the records via a simple Java consumer and see the record data; however, when I start the sink connector, it loads and connects to the remote cluster fine but just stops at the following INFO log:
[2019-05-01 11:10:21,619] INFO Initializing writer using SQL dialect: MySqlDatabaseDialect (io.confluent.connect.jdbc.sink.JdbcSinkTask:57)
[2019-05-01 11:10:21,620] INFO WorkerSinkTask{id=sink-mysql-standalone-0} Sink task finished initialization and start (org.apache.kafka.connect.runtime.WorkerSinkTask:301)
Even after starting the connector, if I produce any data to the cluster, nothing happens at the sink connector.
I tried to fetch the connector status using:
curl localhost:8083/connectors/sink-mysql-standalone/status
the result is as follows:
{"name":"sink-mysql-standalone","connector":{"state":"RUNNING","worker_id":"10.3.0.40:8083"},"tasks":[{"id":0,"state":"RUNNING","worker_id":"10.3.0.40:8083"}],"type":"sink"}
sink.properties:
name=sink-mysql-standalone
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
topics=my-topic
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false
# JDBCSink connector specific configuration
connection.url=jdbc:mysql://10.3.0.37:3306/mydb?zeroDateTimeBehavior=convertToNull&useUnicode=yes&characterEncoding=UTF-8
connection.user=myuser
connection.password=mypassword
insert.mode=upsert
table.name.format = tbl_kaf_${topic}
pk.mode=kafka
pk.fields=__connect_topic,__connect_partition,__connect_offset
fields.whitelist=messageId
auto.create=true
auto.evolve=true
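One thing worth double-checking in standalone mode is the worker configuration (connect-standalone.properties), which is separate from sink.properties: bootstrap.servers there must point at the remote cluster, and the converters defined there apply unless overridden per connector. A minimal sketch, with placeholder broker addresses and a key converter guessed from the plain-string key shown below:

# connect-standalone.properties (worker config) - sketch only
bootstrap.servers=<remote-broker-1>:9092,<remote-broker-2>:9092
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false
offset.storage.file.filename=/tmp/connect.offsets
plugin.path=/usr/share/java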
The producer produces the following record:
Key: id_5dfbdffe-ffbc-4fbf-925c-a14734304fa8, Value: {
  "type" : "text",
  "messageId" : "ID:activemq-XXXXXX-XXXXXXXXXXXXX-X:XX:1:2:2",
  "correlationId" : "",
  "destination" : {
    "type" : "queue",
    "name" : "qToKafka"
  },
  "replyTo" : null,
  "priority" : 0,
  "expiration" : 0,
  "timestamp" : 1556549819473,
  "redelivered" : false,
  "properties" : {},
  "payloadText" : "<Some XML Data>",
  "payloadMap" : null,
  "payloadBytes" : null
}
Please let me know what I am missing here.
Thanks

Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. com/mongodb/util/JSON

I am trying to connect MongoDB with Hadoop. I have Hadoop 1.2.1 installed on Ubuntu 14.04. I installed MongoDB 3.0.4 and downloaded and added the mongo-hadoop-hive-1.3.0.jar and mongo-java-driver-2.13.2.jar jars in the Hive session. I also downloaded mongo-connector.sh (found on this site) and included it under Hadoop_Home/lib.
I have set the input and output sources like this:
hive> set MONGO_INPUT=mongodb://[user:password@]<MongoDB Instance IP>:27017/DBname.collectionName;
hive> set MONGO_OUTPUT=mongodb://[user:password@]<MongoDB Instance IP>:27017/DBname.collectionName;
hive> add JAR brickhouse-0.7.0.jar;
hive> create temporary function collect as 'brickhouse.udf.collect.CollectUDAF';
My collection in MongoDB is this:
> db.shows.find()
{ "_id" : ObjectId("559eb22fa7999b1a5f50e4e6"), "title" : "Arrested Development", "airdate" : "November 2, 2003", "network" : "FOX" }
{ "_id" : ObjectId("559eb238a7999b1a5f50e4e7"), "title" : "Stella", "airdate" : "June 28, 2005", "network" : "Comedy Central" }
{ "_id" : ObjectId("559eb23ca7999b1a5f50e4e8"), "title" : "Modern Family", "airdate" : "September 23, 2009", "network" : "ABC" }
>
Now I am trying to create a Hive table
CREATE EXTERNAL TABLE mongoTest(title STRING,network STRING)
> STORED BY 'com.mongodb.hadoop.hive.MongoStorageHandler'
> WITH SERDEPROPERTIES('mongo.columns.mapping'='{"title":"name",”airdate”:”date”,”network”:”name”}')
> TBLPROPERTIES('mongo.uri'='${hiveconf:MONGO_INPUT}');
When I run this command, it says
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. com/mongodb/util/JSON
Then I added the hive-json-serde.jar and hive-serdes-1.0-SNAPSHOT.jar jars and tried to create the table again, but the error remains the same. How can I rectify this error?
I actually added the mongo-hadoop-core-1.3.0.jar, mongo-hadoop-hive-1.3.0.jar and mongo-java-driver-2.13.2.jar jars to the Hadoop_Home/lib folder. After that I was able to get data from MongoDB into Hive without any errors.
There are smart quotes (”) in the column mapping, which the parser is seeing:
”airdate”:”date”,”network”:”name”
They should be
"airdate":"date","network":"name"

MongoDB FailedToParse: Bad characters in value

I have a simple MongoDB database that I'm dumping using mongodump.
Dump command:
mongodump --db user_profiles --out /data/dumps/user-profiles
Here is the content of the user_profiles database. It has one collection (user_data) consisting of the following:
{ "_id" : ObjectId("555a882a722f2a009fc136e4"), "username" : "thor", "passwd" : "*1D28C7B35C0CD618178988146861D37C97883D37", "email" : "thor#avengers.com", "phone" : "4023331000" }
{ "_id" : ObjectId("555a882a722f2a009fc136e5"), "username" : "ironman", "passwd" : "*626AC8265C7D53693CB7478376CE1B4825DFF286", "email" : "tony#avengers.com", "phone" : "4023331001" }
{ "_id" : ObjectId("555a882a722f2a009fc136e6"), "username" : "hulk", "passwd" : "*CB375EA58EE918755D4EC717738DCA3494A3E668", "email" : "hulk#avengers.com", "phone" : "4023331002" }
{ "_id" : ObjectId("555a882a722f2a009fc136e7"), "username" : "captain_america", "passwd" : "*B43FA5F9280F393E7A8C57D20648E8E4DFE99BA0", "email" : "steve#avengers.com", "phone" : "4023331003" }
{ "_id" : ObjectId("555a882a722f2a009fc136e8"), "username" : "daredevil", "passwd" : "*B91567A0A3D304343624C30B306A4B893F4E4996", "email" : "daredevil#avengers.com", "phone" : "4023331004" }
After copying the dump to an NFS share, I try to load it into a test server using mongorestore:
mongorestore --host db-test --port 27017 /remote/dumps/user-profiles
I'm getting the following error:
Mon May 18 20:19:23.918 going into namespace [user_profiles.user_data]
assertion: 16619 code FailedToParse: FailedToParse: Bad characters in value: offset:30
How do I resolve this FailedToParse error?
To do further testing, I created a test_db with a test_collection that had only one simple value, 'x':1, and even that didn't work, so I knew something else had to be going on.
Versions of your tools matter
The version of mongodump being used was 3.0.3. The version of mongorestore on the other virtual machine was 2.4.x. This was the cause of the errors. Once I updated mongodb-org-tools on my virtual machine (see the official guide), I was able to get up and running as expected.
Hopefully this helps someone in the future. Check your versions!
mongodump --version
mongorestore --version
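For reference, on a Debian/Ubuntu host that already has the official MongoDB repository configured (an assumption; the question does not say which OS the test server runs), the upgrade might look like:

sudo apt-get update
sudo apt-get install --only-upgrade mongodb-org-tools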

Jdbc river stops on MapperParsingException

I am using Elasticsearch version 1.2.0 and JDBC river version 1.2.0.1.
The following is my JDBC river command:
curl -XPUT 'localhost:9200/_river/tbl_messages/_meta' -d '{
"type" : "jdbc",
"jdbc" : {
"strategy" : "simple",
"url" : "jdbc:mysql://localhost:3306/messageDB",
"user" : "username",
"password" : "password",
"sql" : "select messageAlias.id as _id,messageAlias.subject as subject from tbl_messages messageAlias",
"index" : "MessageDb",
"type" : "tbl_messages",
"maxbulkactions":1000,
"maxconcurrentbulkactions" : 4,
"autocommit" : true,
"schedule" : "0 0-59 0-23 ? * *"
}
}'
The subject column's index metadata:
subject: {
  type: string
}
This table has 2 million records, and the subject field contains arbitrary strings. Some sample values are "You're invited ", "{New York:45} We rock!!", "{Invitation:27}" and so on.
My problem is that when the JDBC river encounters one such record with {anything inside braces}, it stalls the river and throws a parsing exception. It never moves on to index the next records.
org.elasticsearch.index.mapper.MapperParsingException: failed to parse [subject]
at org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:418)
at org.elasticsearch.index.mapper.object.ObjectMapper.serializeObject(ObjectMapper.java:537)
at org.elasticsearch.index.mapper.object.ObjectMapper.parse(ObjectMapper.java:479)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:515)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:462)
at org.elasticsearch.index.shard.service.InternalIndexShard.prepareIndex(InternalIndexShard.java:394)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:413)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:155)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:534)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:433)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.ElasticsearchIllegalArgumentException: unknown property [Inivitation]
at org.elasticsearch.index.mapper.core.StringFieldMapper.parseCreateFieldForString(StringFieldMapper.java:332)
at org.elasticsearch.index.mapper.core.StringFieldMapper.parseCreateField(StringFieldMapper.java:278)
at org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:408)
... 12 more
Deleting this record in the database, clearing the data inside ES_HOME/data and recreating the river seems to be the only way to proceed, until it encounters the next record formatted like that.
How do I make it continue indexing irrespective of the exceptions thrown when parsing a few records?
This is related to Elasticsearch itself, not the river:
https://github.com/jprante/elasticsearch-river-jdbc/issues/258
https://github.com/elasticsearch/elasticsearch/issues/2898
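For context on what the mapper is rejecting: subject is mapped as a plain string, so when a value reaches Elasticsearch as a JSON object rather than a quoted string, StringFieldMapper fails with the "unknown property" error seen in the stack trace. Roughly (illustrative payloads only):

{ "subject": "{Invitation:27}" }        <- a quoted string; fine against a string mapping
{ "subject": { "Invitation": 27 } }     <- an object against a string mapping; MapperParsingException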