Unable to run Cygnus with MySQL agent - fiware

I am trying to set up and understand Cygnus, but I am facing an issue during installation.
I followed the steps below.
Installed Cygnus using Docker:
docker run -d -p 5050:5050 -p 8081:8081 fiware/cygnus-common
Executed the version command (curl http://172.17.0.2:8081/v1/version), which gave the following response:
{"success":"true","version":"1.8.0_SNAPSHOT.39b2aa4789c61fa92fe6edc905410f1ddeb33490"}
Logged into the Cygnus container:
docker exec -it <container_id> /bin/bash
Created a new file named "agent_mysql.conf" in the "/opt/apache-flume/conf/" folder.
Configuration details are given below.
cygnus-ngsi.sources = http-source
cygnus-ngsi.sinks = mysql-sink
cygnus-ngsi.channels = mysql-channel
cygnus-ngsi.sources.http-source.channels = mysql-channel
cygnus-ngsi.sources.http-source.type = org.apache.flume.source.http.HTTPSource
cygnus-ngsi.sources.http-source.port = 5050
cygnus-ngsi.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.NGSIRestHandler
cygnus-ngsi.sources.http-source.handler.notification_target = /notify
cygnus-ngsi.sources.http-source.handler.default_service = def_serv
cygnus-ngsi.sources.http-source.handler.default_service_path = def_servpath
cygnus-ngsi.sources.http-source.handler.events_ttl = 2
cygnus-ngsi.sources.http-source.interceptors = ts gi
cygnus-ngsi.sources.http-source.interceptors.ts.type = timestamp
cygnus-ngsi.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.NGSIGroupingInterceptor$Builder
cygnus-ngsi.sources.http-source.interceptors.gi.grouping_rules_conf_file = /Applications/apache-flume-1.4.0-bin/conf/grouping_rules.conf
# =============================================
# mysql-channel configuration
# channel type (must not be changed)
cygnus-ngsi.channels.mysql-channel.type = memory
# capacity of the channel
cygnus-ngsi.channels.mysql-channel.capacity = 1000
# maximum number of events per transaction
cygnus-ngsi.channels.mysql-channel.transactionCapacity = 100
# channel name from where to read notification events
cygnus-ngsi.sinks.mysql-sink.channel = mysql-channel
# sink class, must not be changed
cygnus-ngsi.sinks.mysql-sink.type = com.telefonica.iot.cygnus.sinks.NGSIMySQLSink
#com.telefonica.iot.cygnus.sinks.OrionMySQLSink
# the FQDN/IP address where the MySQL server runs
cygnus-ngsi.sinks.mysql-sink.mysql_host = localhost
# the port where the MySQL server listens for incoming connections
cygnus-ngsi.sinks.mysql-sink.mysql_port = 3306
# a valid user in the MySQL server
cygnus-ngsi.sinks.mysql-sink.mysql_username = root
# password for the user above
cygnus-ngsi.sinks.mysql-sink.mysql_password = <myPassword>
# how the attributes are stored, either per row or per column (row, column)
cygnus-ngsi.sinks.mysql-sink.attr_persistence = row
Changed "cygnus-entrypoint.sh" file in / (root) folder and added following command by removing existing one.
${FLUME_HOME}/bin/cygnus-flume-ng agent --conf ${CYGNUS_CONF_PATH} -f ${CYGNUS_CONF_PATH}/agent_mysql.conf -n cygnus-ngsi -p ${CYGNUS_API_PORT} -Dflume.root.logger=${CYGNUS_LOG_LEVEL},${CYGNUS_LOG_APPENDER} -Dfile.encoding=UTF-8
Exited the Docker container back to the Ubuntu host.
Stopped and restarted the Docker container.
Now I am getting the following errors in the logs.
Please check and let me know what I am doing wrong. I appreciate your help.
LOGS
n$AgentConfiguration[1016] : Processing:mysql-sink
time=2018-04-30T14:24:00.807Z | lvl=INFO | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=validateConfiguration | msg=org.apache.flume.conf.FlumeConfiguration[140] : Post-validation flume configuration contains configuration for agents: [cygnus-ngsi]
time=2018-04-30T14:24:00.808Z | lvl=INFO | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=loadChannels | msg=org.apache.flume.node.AbstractConfigurationProvider[150] : Creating channels
time=2018-04-30T14:24:00.816Z | lvl=INFO | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=create | msg=org.apache.flume.channel.DefaultChannelFactory[40] : Creating instance of channel mysql-channel type memory
time=2018-04-30T14:24:00.825Z | lvl=INFO | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=loadChannels | msg=org.apache.flume.node.AbstractConfigurationProvider[205] : Created channel mysql-channel
time=2018-04-30T14:24:00.832Z | lvl=INFO | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=create | msg=org.apache.flume.source.DefaultSourceFactory[39] : Creating instance of source http-source, type org.apache.flume.source.http.HTTPSource
time=2018-04-30T14:24:00.836Z | lvl=ERROR | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=configure | msg=org.apache.flume.source.http.HTTPSource[113] : Error while configuring HTTPSource. Exception follows.
java.lang.ClassNotFoundException: com.telefonica.iot.cygnus.handlers.NGSIRestHandler
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.flume.source.http.HTTPSource.configure(HTTPSource.java:102)
at org.apache.flume.conf.Configurables.configure(Configurables.java:41)
at org.apache.flume.node.AbstractConfigurationProvider.loadSources(AbstractConfigurationProvider.java:331)
at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:102)
at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
time=2018-04-30T14:24:00.840Z | lvl=ERROR | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=loadSources | msg=org.apache.flume.node.AbstractConfigurationProvider[366] : Source http-source has been removed due to an error during configuration
java.lang.RuntimeException: java.lang.ClassNotFoundException: com.telefonica.iot.cygnus.handlers.NGSIRestHandler
at com.google.common.base.Throwables.propagate(Throwables.java:156)
at org.apache.flume.source.http.HTTPSource.configure(HTTPSource.java:114)
at org.apache.flume.conf.Configurables.configure(Configurables.java:41)
at org.apache.flume.node.AbstractConfigurationProvider.loadSources(AbstractConfigurationProvider.java:331)
at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:102)
at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.telefonica.iot.cygnus.handlers.NGSIRestHandler
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.flume.source.http.HTTPSource.configure(HTTPSource.java:102)
... 11 more
time=2018-04-30T14:24:00.841Z | lvl=INFO | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=create | msg=org.apache.flume.sink.DefaultSinkFactory[40] : Creating instance of sink: mysql-sink, type: com.telefonica.iot.cygnus.sinks.NGSIMySQLSink
time=2018-04-30T14:24:00.842Z | lvl=ERROR | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=run | msg=org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable[142] : Failed to load configuration data. Exception follows.
org.apache.flume.FlumeException: Unable to load sink type: com.telefonica.iot.cygnus.sinks.NGSIMySQLSink, class: com.telefonica.iot.cygnus.sinks.NGSIMySQLSink
at org.apache.flume.sink.DefaultSinkFactory.getClass(DefaultSinkFactory.java:69)
at org.apache.flume.sink.DefaultSinkFactory.create(DefaultSinkFactory.java:41)
at org.apache.flume.node.AbstractConfigurationProvider.loadSinks(AbstractConfigurationProvider.java:415)
at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:103)
at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.telefonica.iot.cygnus.sinks.NGSIMySQLSink
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.flume.sink.DefaultSinkFactory.getClass(DefaultSinkFactory.java:67)
... 11 more

The simplest case is to run Cygnus connected to MySQL this way, using the "root" user for the MySQL connection:
docker run -d --name cygnus_container_name --link mysql_showcases \
-p 8081:8081 -p 5050:5050 \
-e CYGNUS_MYSQL_HOST=mysql_host -e CYGNUS_MYSQL_PORT=3306 \
-e CYGNUS_MYSQL_USER=root -e CYGNUS_MYSQL_PASS=root_password \
fiware/cygnus-ngsi
If you decide not to use the root user to connect to MySQL, you will need to change the user and password, create the database manually, and grant the permissions to your user, since Cygnus will not be able to create a database with a non-root user.
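As a rough sketch of that manual setup (all names here are illustrative; Cygnus derives the database name from the FIWARE service, e.g. def_serv in the agent configuration above):
mysql -u root -p <<'SQL'
-- placeholder names: adjust database, user and password to your setup
CREATE DATABASE IF NOT EXISTS def_serv;
CREATE USER 'cygnus'@'%' IDENTIFIED BY 'cygnus_pass';
GRANT ALL PRIVILEGES ON def_serv.* TO 'cygnus'@'%';
FLUSH PRIVILEGES;
SQL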

Finally, I am able to run Cygnus with the MySQL agent. I am using Ubuntu (Linux ubuntucustomfiware 4.4.0-119-generic #143-Ubuntu SMP Mon Apr 2 16:08:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux).
I followed the steps below.
Used the MySQL server installed on the main Ubuntu host instead of a Docker container.
Modified /etc/mysql/mysql.conf.d/mysqld.cnf and changed
from
bind-address = 127.0.0.1
to
bind-address = *
Logged into the DB and granted all privileges to the root user so that it can connect from any host.
mysql -u root -p
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'MyPassword';
FLUSH PRIVILEGES;
exit;
Restarted the MySQL server:
service mysql restart
Ran cygnus-ngsi:
docker run -d --name cygnus -p 8081:8081 -p 5050:5050 \
-e CYGNUS_MYSQL_HOST=PublicIPOfMySQLServer -e CYGNUS_MYSQL_PORT=3306 \
-e CYGNUS_MYSQL_USER=root -e CYGNUS_MYSQL_PASS=MyPassword \
-e CYGNUS_LOG_LEVEL='DEBUG' fiware/cygnus-ngsi
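To sanity-check that the container is up, the version endpoint used earlier should answer (assuming port 8081 is published to the host as in the command above):
# assumes the -p 8081:8081 mapping from the docker run above
curl http://localhost:8081/v1/version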
Modified the agent file to keep only the mysql-sink. After the changes below, stopped and started the Cygnus Docker container.
docker exec -it cygnus /bin/bash
vi /opt/apache-flume/conf/agent.conf
cygnus-ngsi.sinks = mysql-sink
cygnus-ngsi.channels = mysql-channel
exit;
docker stop cygnus
docker start cygnus
Now, when I publish MQTT data to modify my entity, it inserts 4 rows (one row per attribute) into the MySQL DB:
mosquitto_pub -h PublicIPOfMySQLServer -u UserName -P Password \
-t /swm-reader-service1/reader-device-id1/attrs \
-m '{"tn": "9888", "pn": "878787", "ri": "888888", "tdt":"Monday, May 10, 2018 03:16 AM"}'
Thanks for all your support.
Regards,
Krishan

Related

Is Orion compatible with AWS DocumentDB

I am trying to connect Orion to AWS DocumentDB, but it does not connect. However, I tried two other FIWARE components, IoTAgent and Sth-Comet, with DocumentDB and both work fine.
The same hostname and credentials work for IoTAgent and Sth-Comet. I also checked connectivity, which is fine, as IoTAgent and Sth-Comet are in the same network. I also checked from a different Mongo host in the same network, and that worked as well. Below is the error I am getting for Orion.
time=2021-02-18T07:03:46.293Z | lvl=ERROR | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=mongoConnectionPool.cpp[180]:mongoConnect | msg=Database Startup Error (cannot connect to mongo - doing 100 retries with a 1000 millisecond interval)
Is there any possibility that Orion is not compatible with AWS DocumentDB?
Update1:
bash-4.2$ ps ax | grep contextBroker
1 ? Ss 0:00 /usr/bin/contextBroker -fg -multiservice -ngsiv1Autocast -disableFileLog -dbhost xxxxxxxxxxxxxxxxxx.docdb.amazonaws.com -db admin -dbuser test -dbpwd xxxxxxxxxx
Update2:
Earlier, I was using Orion Docker images pulled directly from Docker Hub, and that was not working. So this time, I built two Docker images from the source code of versions 2.4.2 and 2.5.2. Now I am able to connect to AWS DocumentDB with these images, but I am getting a different error, shown below.
time=2021-02-23T06:10:41.982Z | lvl=ERROR | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=safeMongo.cpp[360]:getField | msg=Runtime Error (field '_id' is missing in BSONObj <{ ok: 0.0, code: 303, errmsg: "Legacy opcodes are not supported" }> from caller mongoSubCacheItemInsert:83)
time=2021-02-23T06:10:41.982Z | lvl=ERROR | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=AlarmManager.cpp[211]:dbError | msg=Raising alarm DatabaseError: error retrieving _id field in doc: '{ ok: 0.0, code: 303, errmsg: "Legacy opcodes are not supported" }'
Below is the Orion version
contextBroker --version
2.5.0-next (git version: 3984f9fc30e90fa04682131ca4516b4d277eb27e)
curl -X GET 'http://localhost:1026/version'
{
"orion" : {
"version" : "2.5.0-next",
"uptime" : "0 d, 0 h, 4 m, 56 s",
"git_hash" : "3984f9fc30e90fa04682131ca4516b4d277eb27e",
"compile_time" : "Mon Feb 22 17:39:30 UTC 2021",
"compiled_by" : "root",
"compiled_in" : "4c7575c7c27f",
"release_date" : "Mon Feb 22 17:39:30 UTC 2021",
"doc" : "https://fiware-orion.rtfd.io/",
"libversions": {
"boost": "1_53",
"libcurl": "libcurl/7.29.0 NSS/3.53.1 zlib/1.2.7 libidn/1.28 libssh2/1.8.0",
"libmicrohttpd": "0.9.70",
"openssl": "1.0.2k",
"rapidjson": "1.1.0",
"mongodriver": "legacy-1.1.2"
}
}
}
I am also able to connect to DocumentDB from the Orion pod using the Mongo shell.
mongo --host xxxxxxxxxxxxxxxxxx.docdb.amazonaws.com:27017 --username xxxx --password xxxx
rs0:PRIMARY> show dbs;
rs0:PRIMARY>
I am also able to create entries using the command below, and it creates a DB and collection in DocumentDB:
curl localhost:1026/v2/entities -s -S --header 'Content-Type: application/json' \
> -X POST -d @- <<EOF
> {
> "id": "Room2",
> "type": "Room",
> "temperature": {
> "value": 23,
> "type": "Number"
> },
> "pressure": {
> "value": 720,
> "type": "Number"
> }
> }
> EOF
rs0:PRIMARY> show dbs;
orion 0.000GB
But I am not able to get that data using the Orion API; after executing this command, it exits from the container with an empty reply. I have checked the same with Orion versions 2.4.2 and 2.5.2 against DocumentDB 4.0 and 3.6.
[root@orion-docdb-7748fd9c85-gbjz7 /]# curl localhost:1026/v2/entities/Room2 -s -S --header 'Accept: application/json' | python -mjson.tool
curl: (52) Empty reply from server
command terminated with exit code 137
In the end, I am still getting the same error in the logs.
time=2021-02-23T06:16:04.564Z | lvl=ERROR | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=safeMongo.cpp[360]:getField | msg=Runtime Error (field '_id' is missing in BSONObj <{ ok: 0.0, code: 303, errmsg: "Legacy opcodes are not supported" }> from caller mongoSubCacheItemInsert:83)
time=2021-02-23T06:16:04.564Z | lvl=ERROR | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=AlarmManager.cpp[211]:dbError | msg=Raising alarm DatabaseError: error retrieving _id field in doc: '{ ok: 0.0, code: 303, errmsg: "Legacy opcodes are not supported" }'
Update3:
I have added -noCache and deployed again. Below are the command outputs and logs for your reference.
Process check:
#ps ax | grep contextBroker
1 ? Ssl 0:00 /usr/bin/contextBroker -fg -multiservice -ngsiv1Autocast -disableFileLog -dbhost xxxxxxxxxxxxxxxxxx.docdb.amazonaws.com -dbuser xxxxxxxx -dbpwd xxxxxxxx -logLevel DEBUG -noCache
Entries in DB:
rs0:PRIMARY> show dbs
orion 0.000GB
rs0:PRIMARY> use orion
switched to db orion
rs0:PRIMARY> show collections
entities
rs0:PRIMARY> db.entities.find()
{ "_id" : { "id" : "Room2", "type" : "Room", "servicePath" : "/" }, "attrNames" : [ "temperature", "pressure" ], "attrs" : { "temperature" : { "type" : "Number", "creDate" : 1614323032.671698, "modDate" : 1614323032.671698, "value" : 23, "mdNames" : [ ] }, "pressure" : { "type" : "Number", "creDate" : 1614323032.671698, "modDate" : 1614323032.671698, "value" : 720, "mdNames" : [ ] } }, "creDate" : 1614323032.671698, "modDate" : 1614323032.671698, "lastCorrelator" : "c8a73f40-7800-11eb-bd9b-bea9c419835d" }
Orion Pod Logs:
time=2021-02-26T06:46:33.966Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=contextBroker.cpp[1008]:main | msg=start command line </usr/bin/contextBroker -fg -multiservice -ngsiv1Autocast -disableFileLog -dbhost xxxxxxxxxxxxxxxxxx.docdb.amazonaws.com -dbuser xxxxxxxx -dbpwd xxxxxxxx -logLevel DEBUG -noCache>
time=2021-02-26T06:46:33.966Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=contextBroker.cpp[1076]:main | msg=Orion Context Broker is running
time=2021-02-26T06:46:34.280Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=MongoGlobal.cpp[243]:mongoInit | msg=Connected to mongo at xxxxxxxxxxxxxxxxxx.docdb.amazonaws.com/orion, as user 'xxxxxxx' (poolsize: 10)
time=2021-02-26T06:46:34.282Z | lvl=INFO | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=contextBroker.cpp[1202]:main | msg=Startup completed
time=2021-02-26T07:03:24.546Z | lvl=INFO | corr=b7e44e5a-7800-11eb-9531-bea9c419835d | trans=1614321993-966-00000000001 | from=127.0.0.1 | srv=<none> | subsrv=<none> | comp=Orion | op=logTracing.cpp[79]:logInfoRequestWithoutPayload | msg=Request received: GET /version, response code: 200
time=2021-02-26T07:03:52.672Z | lvl=ERROR | corr=c8a73f40-7800-11eb-bd9b-bea9c419835d | trans=1614321993-966-00000000002 | from=127.0.0.1 | srv=<none> | subsrv=<none> | comp=Orion | op=safeMongo.cpp[360]:getField | msg=Runtime Error (field '_id' is missing in BSONObj <{ ok: 0.0, code: 303, errmsg: "Legacy opcodes are not supported", operationTime: Timestamp 1614323032|1 }> from caller processContextElement:3493)
time=2021-02-26T07:03:52.672Z | lvl=ERROR | corr=c8a73f40-7800-11eb-bd9b-bea9c419835d | trans=1614321993-966-00000000002 | from=127.0.0.1 | srv=<none> | subsrv=<none> | comp=Orion | op=AlarmManager.cpp[211]:dbError | msg=Raising alarm DatabaseError: error retrieving _id field in doc: '{ ok: 0.0, code: 303, errmsg: "Legacy opcodes are not supported", operationTime: Timestamp 1614323032|1 }'
time=2021-02-26T07:03:52.782Z | lvl=ERROR | corr=c8a73f40-7800-11eb-bd9b-bea9c419835d | trans=1614321993-966-00000000002 | from=127.0.0.1 | srv=<none> | subsrv=<none> | comp=Orion | op=AlarmManager.cpp[235]:dbErrorReset | msg=Releasing alarm DatabaseError
time=2021-02-26T07:03:52.790Z | lvl=ERROR | corr=c8a73f40-7800-11eb-bd9b-bea9c419835d | trans=1614321993-966-00000000002 | from=127.0.0.1 | srv=<none> | subsrv=<none> | comp=Orion | op=safeMongo.cpp[360]:getField | msg=Runtime Error (field '_id' is missing in BSONObj <{ ok: 0.0, code: 303, errmsg: "Legacy opcodes are not supported", operationTime: Timestamp 1614323032|1 }> from caller addTriggeredSubscriptions_noCache:1408)
time=2021-02-26T07:03:52.790Z | lvl=ERROR | corr=c8a73f40-7800-11eb-bd9b-bea9c419835d | trans=1614321993-966-00000000002 | from=127.0.0.1 | srv=<none> | subsrv=<none> | comp=Orion | op=AlarmManager.cpp[211]:dbError | msg=Raising alarm DatabaseError: error retrieving _id field in doc: '{ ok: 0.0, code: 303, errmsg: "Legacy opcodes are not supported", operationTime: Timestamp 1614323032|1 }'
time=2021-02-26T07:03:52.791Z | lvl=INFO | corr=c8a73f40-7800-11eb-bd9b-bea9c419835d | trans=1614321993-966-00000000002 | from=127.0.0.1 | srv=<none> | subsrv=<none> | comp=Orion | op=logTracing.cpp[130]:logInfoRequestWithPayload | msg=Request received: POST /v2/entities, request payload (148 bytes): { "id": "Room2", "type": "Room", "temperature": { "value": 23, "type": "Number" }, "pressure": { "value": 720, "type": "Number" }}, response code: 201
time=2021-02-26T07:03:58.479Z | lvl=ERROR | corr=cc1d5934-7800-11eb-a28d-bea9c419835d | trans=1614321993-966-00000000003 | from=127.0.0.1 | srv=<none> | subsrv=<none> | comp=Orion | op=AlarmManager.cpp[235]:dbErrorReset | msg=Releasing alarm DatabaseError
time=2021-02-26T07:03:58.479Z | lvl=ERROR | corr=cc1d5934-7800-11eb-a28d-bea9c419835d | trans=1614321993-966-00000000003 | from=127.0.0.1 | srv=<none> | subsrv=<none> | comp=Orion | op=safeMongo.cpp[360]:getField | msg=Runtime Error (field '_id' is missing in BSONObj <{ ok: 0.0, code: 303, errmsg: "Legacy opcodes are not supported", operationTime: Timestamp 1614323038|1 }> from caller ContextElementResponse:109)
terminate called after throwing an instance of 'mongo::AssertionException'
what(): assertion src/mongo/bson/bsonelement.cpp:392
Pod exited and restarted during API call:
curl localhost:1026/v2/entities/Room2 -s -S --header 'Accept: application/json' | python -mjson.tool
command terminated with exit code 137
The following message shown in the log traces is pretty significant:
"Legacy opcodes are not supported"
Although the MongoDB driver used by Orion 2.5.2 and earlier works with official MongoDB versions up to 4.4, that is probably not the case with MongoDB "clones" such as AWS DocumentDB.
We are in the process of changing the legacy driver used by Orion to a new one. Once this change lands in the Orion master branch, I'd suggest testing it (using the :latest Docker Hub tag). In the meantime, as a workaround, I'd suggest using an official MongoDB database.
EDIT: the process of changing the MongoDB driver has finished, and Orion uses the new driver since version 3.0.0. I think it would be a good idea to test with this new version and see how it goes. I can help with the test if you provide me with the access information (see here).
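A minimal sketch of such a test, assuming the CLI options shown in the question still apply in 3.0.0 and that the DocumentDB endpoint is reachable from the Docker host:
# assumes the image appends these args to contextBroker, as with the 2.x images
docker run -d --name orion-test -p 1026:1026 fiware/orion:3.0.0 \
-dbhost xxxxxxxxxxxxxxxxxx.docdb.amazonaws.com -dbuser xxxxxxxx -dbpwd xxxxxxxx
curl localhost:1026/version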

Debezium throws an error when connecting to RDS Aurora instance

Here are the elements of my infrastructure related to this problem:
RDS Aurora, which runs MySQL 5.7 with binlogs enabled in row-based mode
AWS MSK, which runs the Kafka cluster (I don't know if that's important, but it runs version 2.7.0)
EC2 instance with Kafka Connect (as a Docker image), where a source connector that uses Debezium runs
I'm trying to use Debezium to get all changes from the Aurora instance and put them into Kafka, but unfortunately it is not working. Here is the connector configuration I'm using:
{
"name": "aurora-connector-test",
"config": {
"connector.class": "io.debezium.connector.mysql.MySqlConnector",
"tasks.max": "1",
"database.hostname": "URL",
"database.port": "3306",
"database.user": "debezium",
"database.password": "PASSWORD",
"database.server.id": "129056",
"database.server.name": "aurora-connector-db-test",
"database.allowPublicKeyRetrieval":"true",
"database.include.list": "DATABASE",
"database.history.kafka.bootstrap.servers": "BROKER1:9092,BROKER2:9092,BROKER3:9092",
"database.history.kafka.topic": "schema-changes.DATABASE",
"transforms": "route",
"transforms.route.type": "org.apache.kafka.connect.transforms.RegexRouter",
"transforms.route.regex": "([^.]+)\\.([^.]+)\\.([^.]+)",
"transforms.route.replacement": "$3"
}
}
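For reference, a configuration like this is typically registered through the Kafka Connect REST API; port 8083 is the Connect default, and the file name below is a placeholder for a file holding the JSON above:
# 8083 is the default Kafka Connect REST port; file name is a placeholder
curl -s -X POST http://localhost:8083/connectors \
-H 'Content-Type: application/json' \
-d @aurora-connector-test.json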
After creating a connector using the above configuration, I can see that Debezium is able to connect to the Aurora database because it lists all available tables in the logs. Sadly, at some point, I get a Communications link failure error. Here is a chunk of the logs:
connect_1 | 2021-01-12T08:47:39.106265621Z 2021-01-12 08:47:39,106 INFO MySQL|aurora-connector-db-test|snapshot Step 5: committing transaction [io.debezium.connector.mysql.SnapshotReader]
connect_1 | 2021-01-12T08:47:39.107645625Z 2021-01-12 08:47:39,107 ERROR MySQL|aurora-connector-db-test|snapshot Failed due to error: Aborting snapshot due to error when last running 'SHOW FULL TABLES IN `sys` where Table_Type = "BASE TABLE"': Communications link failure
connect_1 | 2021-01-12T08:47:39.113018742Z org.apache.kafka.connect.errors.ConnectException: Communications link failure
connect_1 | 2021-01-12T08:47:39.113022120Z
connect_1 | 2021-01-12T08:47:39.113025636Z The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. Error code: 0; SQLSTATE: 08S01.
connect_1 | 2021-01-12T08:47:39.113029640Z at io.debezium.connector.mysql.AbstractReader.wrap(AbstractReader.java:230)
connect_1 | 2021-01-12T08:47:39.113033236Z at io.debezium.connector.mysql.AbstractReader.failed(AbstractReader.java:207)
connect_1 | 2021-01-12T08:47:39.113036512Z at io.debezium.connector.mysql.SnapshotReader.execute(SnapshotReader.java:847)
connect_1 | 2021-01-12T08:47:39.113039728Z at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
connect_1 | 2021-01-12T08:47:39.113055546Z at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
connect_1 | 2021-01-12T08:47:39.113059492Z at java.base/java.lang.Thread.run(Thread.java:834)
connect_1 | 2021-01-12T08:47:39.113062958Z Caused by: com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
connect_1 | 2021-01-12T08:47:39.113066762Z
connect_1 | 2021-01-12T08:47:39.113070570Z The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
connect_1 | 2021-01-12T08:47:39.113074098Z at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174)
connect_1 | 2021-01-12T08:47:39.113077538Z at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64)
connect_1 | 2021-01-12T08:47:39.113080901Z at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:835)
connect_1 | 2021-01-12T08:47:39.113084338Z at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:455)
connect_1 | 2021-01-12T08:47:39.113087583Z at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:240)
connect_1 | 2021-01-12T08:47:39.113090489Z at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:199)
connect_1 | 2021-01-12T08:47:39.113093761Z at io.debezium.jdbc.JdbcConnection.lambda$patternBasedFactory$1(JdbcConnection.java:191)
connect_1 | 2021-01-12T08:47:39.113096988Z at io.debezium.jdbc.JdbcConnection.connection(JdbcConnection.java:789)
connect_1 | 2021-01-12T08:47:39.113100397Z at io.debezium.jdbc.JdbcConnection.connection(JdbcConnection.java:784)
connect_1 | 2021-01-12T08:47:39.113103714Z at io.debezium.connector.mysql.SnapshotReader.execute(SnapshotReader.java:748)
connect_1 | 2021-01-12T08:47:39.113107314Z ... 3 more
connect_1 | 2021-01-12T08:47:39.113110461Z Caused by: com.mysql.cj.exceptions.CJCommunicationsException: Communications link failure
connect_1 | 2021-01-12T08:47:39.113113684Z
connect_1 | 2021-01-12T08:47:39.113117610Z The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
connect_1 | 2021-01-12T08:47:39.113121639Z at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
connect_1 | 2021-01-12T08:47:39.113125289Z at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
connect_1 | 2021-01-12T08:47:39.113128514Z at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
connect_1 | 2021-01-12T08:47:39.113131552Z at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
connect_1 | 2021-01-12T08:47:39.113134638Z at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:61)
connect_1 | 2021-01-12T08:47:39.113137876Z at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:105)
connect_1 | 2021-01-12T08:47:39.113140913Z at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:151)
connect_1 | 2021-01-12T08:47:39.113148186Z at com.mysql.cj.exceptions.ExceptionFactory.createCommunicationsException(ExceptionFactory.java:167)
connect_1 | 2021-01-12T08:47:39.113151641Z at com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:91)
connect_1 | 2021-01-12T08:47:39.113155079Z at com.mysql.cj.NativeSession.connect(NativeSession.java:152)
connect_1 | 2021-01-12T08:47:39.113158553Z at com.mysql.cj.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:955)
connect_1 | 2021-01-12T08:47:39.113162110Z at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:825)
connect_1 | 2021-01-12T08:47:39.113165344Z ... 10 more
connect_1 | 2021-01-12T08:47:39.113168317Z Caused by: java.net.ConnectException: Connection refused (Connection refused)
connect_1 | 2021-01-12T08:47:39.113171590Z at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
connect_1 | 2021-01-12T08:47:39.113174792Z at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
connect_1 | 2021-01-12T08:47:39.113178326Z at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
connect_1 | 2021-01-12T08:47:39.113181678Z at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)
connect_1 | 2021-01-12T08:47:39.113185328Z at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403)
connect_1 | 2021-01-12T08:47:39.113188827Z at java.base/java.net.Socket.connect(Socket.java:609)
connect_1 | 2021-01-12T08:47:39.113192354Z at com.mysql.cj.protocol.StandardSocketFactory.connect(StandardSocketFactory.java:155)
connect_1 | 2021-01-12T08:47:39.113195838Z at com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:65)
connect_1 | 2021-01-12T08:47:39.113199398Z ... 13 more
connect_1 | 2021-01-12T08:47:39.113249277Z 2021-01-12 08:47:39,113 ERROR || WorkerSourceTask{id=aurora-connector-test-0} Task is being killed and will not recover until manually restarted [org.apache.kafka.connect.runtime.WorkerTask]
connect_1 | 2021-01-12T08:47:39.113256190Z 2021-01-12 08:47:39,113 INFO MySQL|aurora-connector-db-test|task Stopping MySQL connector task [io.debezium.connector.mysql.MySqlConnectorTask]
connect_1 | 2021-01-12T08:47:39.113259635Z 2021-01-12 08:47:39,113 INFO MySQL|aurora-connector-db-test|task ChainedReader: Stopping the snapshot reader [io.debezium.connector.mysql.ChainedReader]
connect_1 | 2021-01-12T08:47:39.113263070Z 2021-01-12 08:47:39,113 INFO MySQL|aurora-connector-db-test|task Discarding 0 unsent record(s) due to the connector shutting down [io.debezium.connector.mysql.SnapshotReader]
connect_1 | 2021-01-12T08:47:39.113266516Z 2021-01-12 08:47:39,113 INFO MySQL|aurora-connector-db-test|task Discarding 0 unsent record(s) due to the connector shutting down [io.debezium.connector.mysql.SnapshotReader]
connect_1 | 2021-01-12T08:47:39.113420628Z 2021-01-12 08:47:39,113 INFO MySQL|aurora-connector-db-test|task [Producer clientId=aurora-connector-test-dbhistory] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. [org.apache.kafka.clients.producer.KafkaProducer]
connect_1 | 2021-01-12T08:47:39.114679266Z 2021-01-12 08:47:39,114 INFO MySQL|aurora-connector-db-test|task Connector task finished all work and is now shutdown [io.debezium.connector.mysql.MySqlConnectorTask]
connect_1 | 2021-01-12T08:47:39.114799622Z 2021-01-12 08:47:39,114 INFO || [Producer clientId=connector-producer-aurora-connector-test-0] Closing the Kafka producer with timeoutMillis = 30000 ms. [org.apache.kafka.clients.producer.KafkaProducer]
I know that my security groups on AWS are configured correctly because I'm able to connect to the Aurora instance remotely from the EC2 instance. Additionally, I was using a very similar connector configuration for MySQL instead of Aurora on the same EC2 instance, and it was able to create messages in the same Kafka cluster. Hence I think the problem is with the Aurora configuration (I'm using the default configuration with row-based binlogs enabled), but maybe I'm wrong.
Do you guys have any ideas on that issue?
The debezium user needs these privileges: SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT, LOCK TABLES. My only guess is to check if the user has all of them.
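A sketch of the corresponding grant, assuming the user may connect from any host; note that REPLICATION SLAVE and REPLICATION CLIENT are global privileges, so they must be granted ON *.* ('debezium'@'%' and the admin login are placeholders):
mysql -u admin -p <<'SQL'
-- 'debezium'@'%' and the admin login are placeholders; adjust to your setup
GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT, LOCK TABLES
ON *.* TO 'debezium'@'%';
FLUSH PRIVILEGES;
SHOW GRANTS FOR 'debezium'@'%';
SQL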

Hikari CP SSL Exception closing inbound before receiving peer's close_notify

Since switching from Tomcat CP (Spring Boot 1 default) to Hikari (Spring Boot 2 default), we've started seeing many instances of:
EXCEPTION STACK TRACE:
** BEGIN NESTED EXCEPTION **
javax.net.ssl.SSLException
MESSAGE: closing inbound before receiving peer's close_notify
STACKTRACE:
javax.net.ssl.SSLException: closing inbound before receiving peer's close_notify
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:129)
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:308)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:264)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:255)
at java.base/sun.security.ssl.SSLSocketImpl.shutdownInput(SSLSocketImpl.java:645)
at java.base/sun.security.ssl.SSLSocketImpl.shutdownInput(SSLSocketImpl.java:624)
at com.mysql.cj.protocol.a.NativeProtocol.quit(NativeProtocol.java:1312)
at com.mysql.cj.NativeSession.quit(NativeSession.java:182)
at com.mysql.cj.jdbc.ConnectionImpl.realClose(ConnectionImpl.java:1750)
at com.mysql.cj.jdbc.ConnectionImpl.close(ConnectionImpl.java:720)
at com.zaxxer.hikari.pool.PoolBase.quietlyCloseConnection(PoolBase.java:135)
at com.zaxxer.hikari.pool.HikariPool.lambda$closeConnection$1(HikariPool.java:441)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Environment
Spring Boot 2.1.1.RELEASE
Java 11
mysql-connector-java 8.0.13
HikariCP 3.2.0
Database:
RDS Aurora MySql 5.7.12 (default param group)
Configuration (Spring Boot)
Settings:
spring.datasource.hikari.transactionIsolation=TRANSACTION_REPEATABLE_READ
spring.datasource.hikari.minimumIdle=10
spring.datasource.hikari.idleTimeout=300000
spring.datasource.hikari.maximumPoolSize=20
spring.datasource.hikari.connectionTimeout=5000
spring.datasource.hikari.maxLifetime=900000
spring.datasource.hikari.validationTimeout=1000
Is there a setting I'm missing? Perhaps my idle times should be set much lower?
We have not (yet) experienced any obvious bad side effects of this, i.e. the application appears to continue running without issue, but this stack trace appears frequently (perhaps every 4 seconds).
Database Settings
If I connect to MySQL via the CLI, run show variables;, and grep for timeout-related values, I see:
| connect_timeout | 10 |
| delayed_insert_timeout | 300 |
| have_statement_timeout | YES |
| innodb_flush_log_at_timeout | 1 |
| innodb_lock_wait_timeout | 50 |
| innodb_rollback_on_timeout | OFF |
| interactive_timeout | 28800 |
| lock_wait_timeout | 31536000 |
| net_read_timeout | 30 |
| net_write_timeout | 60 |
| rpl_stop_slave_timeout | 31536000 |
| slave_net_timeout | 60 |
| wait_timeout | 28800 |
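For reference, those values can also be pulled in one shot; a sketch, with host and credentials as placeholders:
# <aurora-endpoint> and <user> are placeholders
mysql -h <aurora-endpoint> -u <user> -p -e "SHOW VARIABLES LIKE '%timeout%';"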

Connecting Rundeck with MySQL Database

I am trying to connect Rundeck to a MySQL server but am seeing the errors below:
org.hibernate.tool.schema.spi.CommandAcceptanceException: Error executing DDL via JDBC Statement
at org.hibernate.tool.schema.internal.exec.GenerationTargetToDatabase.accept(GenerationTargetToDatabase.java:67)
at org.hibernate.tool.schema.internal.AbstractSchemaMigrator.applySqlString(AbstractSchemaMigrator.java:524)
at org.hibernate.tool.schema.internal.AbstractSchemaMigrator.applySqlStrings(AbstractSchemaMigrator.java:470)
at org.hibernate.tool.schema.internal.AbstractSchemaMigrator.applyIndexes(AbstractSchemaMigrator.java:327)
at org.hibernate.tool.schema.internal.GroupedSchemaMigratorImpl.performTablesMigration(GroupedSchemaMigratorImpl.java:84)
at org.hibernate.tool.schema.internal.AbstractSchemaMigrator.performMigration(AbstractSchemaMigrator.java:203)
at org.hibernate.tool.schema.internal.AbstractSchemaMigrator.doMigration(AbstractSchemaMigrator.java:110)
at org.hibernate.tool.schema.spi.SchemaManagementToolCoordinator.performDatabaseAction(SchemaManagementToolCoordinator.java:176)
at org.hibernate.tool.schema.spi.SchemaManagementToolCoordinator.process(SchemaManagementToolCoordinator.java:65)
at org.hibernate.internal.SessionFactoryImpl.<init>(SessionFactoryImpl.java:478)
at org.hibernate.boot.internal.SessionFactoryBuilderImpl.build(SessionFactoryBuilderImpl.java:422)
at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:711)
at org.grails.orm.hibernate.cfg.HibernateMappingContextConfiguration.buildSessionFactory(HibernateMappingContextConfiguration.java:276)
at org.grails.orm.hibernate.connections.HibernateConnectionSourceFactory.create(HibernateConnectionSourceFactory.java:86)
at org.grails.orm.hibernate.connections.AbstractHibernateConnectionSourceFactory.create(AbstractHibernateConnectionSourceFactory.java:39)
at org.grails.orm.hibernate.connections.AbstractHibernateConnectionSourceFactory.create(AbstractHibernateConnectionSourceFactory.java:23)
at org.grails.datastore.mapping.core.connections.AbstractConnectionSourceFactory.create(AbstractConnectionSourceFactory.java:64)
at org.grails.datastore.mapping.core.connections.AbstractConnectionSourceFactory.create(AbstractConnectionSourceFactory.java:52)
at org.grails.datastore.mapping.core.connections.ConnectionSourcesInitializer.create(ConnectionSourcesInitializer.groovy:24)
at org.grails.orm.hibernate.HibernateDatastore.<init>(HibernateDatastore.java:201)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:142)
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:122)
at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:271)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1201)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1103)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:513)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:351)
at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:108)
at org.springframework.beans.factory.support.ConstructorResolver.resolveConstructorArguments(ConstructorResolver.java:648)
at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:145)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1201)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1103)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.getSingletonFactoryBeanForTypeCheck(AbstractAutowireCapableBeanFactory.java:931)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.getTypeForFactoryBean(AbstractAutowireCapableBeanFactory.java:808)
at org.springframework.beans.factory.support.AbstractBeanFactory.isTypeMatch(AbstractBeanFactory.java:564)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doGetBeanNamesForType(DefaultListableBeanFactory.java:432)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeanNamesForType(DefaultListableBeanFactory.java:395)
at org.springframework.beans.factory.BeanFactoryUtils.beanNamesForTypeIncludingAncestors(BeanFactoryUtils.java:206)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.findAutowireCandidates(DefaultListableBeanFactory.java:1267)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1101)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1066)
at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:835)
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:741)
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:467)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1181)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1075)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:513)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
at org.springframework.context.support.PostProcessorRegistrationDelegate.registerBeanPostProcessors(PostProcessorRegistrationDelegate.java:225)
at org.springframework.context.support.AbstractApplicationContext.registerBeanPostProcessors(AbstractApplicationContext.java:703)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:528)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:122)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:693)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:360)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:303)
at grails.boot.GrailsApp.run(GrailsApp.groovy:84)
at grails.boot.GrailsApp.run(GrailsApp.groovy:393)
at grails.boot.GrailsApp.run(GrailsApp.groovy:380)
at grails.boot.GrailsApp$run.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:136)
at rundeckapp.Application.main(Application.groovy:27)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:87)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:50)
at org.springframework.boot.loader.WarLauncher.main(WarLauncher.java:59)
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: INDEX command denied to user 'rundeck'@'10.0.0.8' for table 'base_report'
Here is my rundeck-config.properties file:
#loglevel.default is the default log level for jobs: ERROR,WARN,INFO,VERBOSE,DEBUG
loglevel.default=INFO
rdeck.base=/var/lib/rundeck
#rss.enabled if set to true enables RSS feeds that are public (non-authenticated)
rss.enabled=false
# change hostname here
grails.serverURL=http://10.0.0.8:4440
dataSource.dbCreate = update
dataSource.url = jdbc:mysql://my-dev.************.us-east-1.rds.amazonaws.com:3306/rundeck?autoReconnect=true&useSSL=false
dataSource.username = rundeck
dataSource.password = ***************
dataSource.driverClassName=com.mysql.jdbc.Driver
rundeck.log4j.config.file = /etc/rundeck/log4j.properties
I have also copied the JDBC driver to /var/lib/rundeck/libext:
[rundeck@mysever]$ pwd && ls -ld mysql-connector-java-5.1.46
/var/lib/rundeck/libext
drwxr-xr-x 3 rundeck rundeck 4096 Feb 26 13:28 mysql-connector-java-5.1.46
I have followed the instructions here:
http://rundeck.org/2.10.6/administration/setting-up-an-rdb-datasource.html#setup-mysql
I can see the table structure in my RDS instance, but there are no entries in any of the tables:
mysql> show tables;
+----------------------------+
| Tables_in_rundeck |
+----------------------------+
| auth_token |
| base_report |
| execution |
| job_file_record |
| log_file_storage_request |
| node_filter |
| notification |
| orchestrator |
| plugin_meta |
| project |
| rdoption |
| rdoption_values |
| rduser |
| referenced_execution |
| report_filter |
| scheduled_execution |
| scheduled_execution_filter |
| storage |
| workflow |
| workflow_step |
| workflow_workflow_step |
+----------------------------+
INDEX command denied to user 'rundeck'@'10.0.0.8' for table 'base_report'
You have a permissions issue that is causing the schema migration to fail. You should have run:
GRANT ALL ON rundeck.* to rundeck;
You can check the current permissions with:
SHOW GRANTS FOR rundeck;
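A slightly fuller sketch against the 'rundeck'@'10.0.0.8' account from the error above, assuming the user already exists:
mysql -u root -p <<'SQL'
-- assumes the 'rundeck'@'10.0.0.8' account already exists
GRANT ALL PRIVILEGES ON rundeck.* TO 'rundeck'@'10.0.0.8';
FLUSH PRIVILEGES;
SHOW GRANTS FOR 'rundeck'@'10.0.0.8';
SQL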

Create an Equinox instance of Karaf

I am running Karaf 3.0.1 with the Equinox core. Now I want to create a new instance that also runs the Equinox core. I have tried:
instance:create test
The created instance runs the Felix core, so I tried to update its configuration in ${karaf.home}/instances/test/etc/config.properties. After adjusting it, whenever I tried to connect to this instance, I received:
karaf@root: instance:connect test
Connecting to host localhost on port 8105
Error executing command: Failed to get the session
What did I do wrong? And is there another way to create an Equinox core instance?
Use instance:clone rather than instance:create
Make sure you start the instance after you've created/cloned it, before trying to connect. For example:
karaf@root()> bundle:list -t 0 | grep '^ 0'
0 | Active | 0 | 3.8.2.v20130124-134944 | OSGi System Bundle
karaf@root()> instance:clone root test
karaf@root()> instance:list
SSH Port | RMI Registry | RMI Server | State | PID | Name
-------------------------------------------------------------
8101 | 1099 | 44444 | Started | 29306 | root
8101 | 1099 | 44444 | Stopped | 0 | test
karaf@root()> instance:ssh-port-change test 8102
karaf@root()> instance:rmi-server-port-change test 44445
karaf@root()> instance:rmi-registry-port-change test 1100
karaf@root()> instance:list
SSH Port | RMI Registry | RMI Server | State | PID | Name
-------------------------------------------------------------
8101 | 1099 | 44444 | Started | 29306 | root
8102 | 1100 | 44445 | Stopped | 0 | test
karaf@root()> instance:start test
karaf@root()> instance:connect test
Connecting to host localhost on port 8102
Connecting to unknown server. Automatically adding to known hosts.
Storing the server key in known_hosts.
Password: *****
Connected
__ __ ____
/ //_/____ __________ _/ __/
/ ,< / __ `/ ___/ __ `/ /_
/ /| |/ /_/ / / / /_/ / __/
/_/ |_|\__,_/_/ \__,_/_/
Apache Karaf (3.0.2)
Hit '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.
Hit 'system:shutdown' to shutdown Karaf.
Hit '<ctrl-d>' or type 'logout' to disconnect shell from current session.
karaf@test()> bundle:list -t 0 | grep '^ 0'
0 | Active | 0 | 3.8.2.v20130124-134944 | OSGi System Bundle
karaf@test()>
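For reference, the framework selection that the clone carries over lives in etc/config.properties; a sketch of the relevant lines (the Equinox version here matches the system bundle listed above and may differ in your distribution):
# version shown matches the system bundle above; may differ per distribution
karaf.framework=equinox
karaf.framework.equinox=mvn:org.eclipse/org.eclipse.osgi/3.8.2.v20130124-134944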